# Night Fury - SR-2 [dual LGA1366]



## Dinnercore (Aug 12, 2019)

Hello there.







I'm doing another project build after finishing my previous one in December of last year ( https://www.techpowerup.com/forums/threads/realizing-my-childhood-dream-lga775-nvidia-sli.247398/ )

Since last time I focused on that childhood mGPU dream, I'm going for a dual-CPU setup this time.
It was a very spontaneous decision: I was scanning local offers for old GPUs when I stumbled across this board and fell in love. Fully functional, already equipped with 48GB of DDR3 2400MHz and dual X5690s.

Again, it is one of those holy grail pieces that I could never have afforded as a teenager but always dreamed about. Well, now here it is.

I still need to sketch out parts of the build; I had some trouble finding a case to fit this HPTX monster that costs less than the board itself. My only real option was the Nanoxia Deep Silence 6 tower. Not an ideal choice, because I'm a huge watercooling enthusiast and I plan to put water on at least the board and CPUs, hopefully the GPU too.
In the end I decided to stick with the DS-6 because I can always use my external Mo-Ra3, so I'm not limited to the 2x 280mm radiators the case supports.

So, I got the board (EVGA SR-2), CPUs (X5690), RAM and a case. I also got a reservoir and a pump. I'm in the process of having a full waterblock custom-made for the board (it should arrive in September). I will re-use the PSU from my previous build (EVGA 1600W T2).

What I still need to choose is a fitting GPU, and whether it will be integrated into the water loop or air-cooled. I also need to find lighting options to hit a specific purple color. This might get a bit tricky, but I still have some time until September to figure something out before I can assemble it.
I wanted this to be a more subtle machine compared to the Nvidia theme I did before. So no fancy window panel, no colored cables, no branded case. Instead I'm aiming for a big, black tower with an ominous purple glow from the top vents and the side inlets of the front panel. The specific hue and intensity will be the most difficult part. It would be really cool if I could take it further, but I'm not a case modder and every mod I have ever done fits in the ghetto mod section...
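To get a feel for dialing in the hue, I'll probably start from a hex code and map it onto the strip controller's three channels. A quick Python sketch of that idea (the exact purple here is just a placeholder, not my final target):

```python
# A minimal sketch of translating a target hue into 8-bit channel values for a
# generic analog RGB strip controller. The hex code is a placeholder purple,
# NOT a measured target from this build.
def hex_to_rgb(hex_color: str) -> tuple[int, int, int]:
    """Split a '#rrggbb' string into three 8-bit channel values."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb("#6a0dad")  # placeholder purple
print(r, g, b)  # 106 13 173
```

Intensity is then just a matter of scaling all three channels together so the hue stays put.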

What I am most interested in, however, is seeing how this system holds up against modern hardware and what kind of overclock I can squeeze out of a dual-socket setup.

The first thing I can start on is taking apart the board's heatsink and giving everything a very gentle but thorough clean. I will also lap the CPUs, maybe even delid them and replace the solder with liquid metal, but I'm not sure if I'll do that. It seems a little unnecessary, even to me.


*Parts List (so far):*
Mainboard: EVGA SR-2
CPU: 2x Xeon X5690
GPU: RTX 2080 Super (Inno3D iChill Frostbite)
RAM: 6x 8GB Vengeance Pro Series C11 2400MHz
Storage: 500GB SSD, 5TB HDD
PSU: EVGA Supernova 1600 T2

Case: Nanoxia Deep Silence 6 Rev. B
Fans: 7x 140mm, 8x 180mm, 5x 120mm, 2x 80mm
RGB-Controller: Cooler Master
Lighting: 2x RGB-LED strips

Radiators: 1x 240mm, 1x 280mm, 1x MoRa3
Fan Controller: Lamptron CU423
Pump/Res: XSPC Dual Bay with 2x D5 Vario pump
Tubing: Mayhems Ultra Clear 16/10mm
CPU-Blocks: Phobya UC-1 Extreme
Mainboardcooler: Liquid Extasy SR-2
Fittings: 24x 16/10 compression fittings by Alphacool
Quick-connectors: 2x Alphacool Eiszapfen Quick-Connectors to make the external radiator detachable

TIMs:
Thermal Grizzly Kryonaut on the chipset
Thermal Grizzly Minus Pad on the VRMs
Thermal Grizzly Conductonaut on the delidded CPUs, going direct-die contact with the cooler

*Pictures of the result:*


----------



## Yukikaze (Aug 12, 2019)

Once upon a time, I had a Skulltrail (2xQX9775) available to me at work, and I always wanted an SR-2 but I could never justify getting one, even years down the line. Kudos for this project!


----------



## Darmok N Jalad (Aug 12, 2019)

This era of Xeon reminds me of the old muscle car era. These CPUs had a different purpose in their day, but they can now be tricked out and tuned to be so much more. Today they are the fun retro CPUs we could only dream about owning when they were new, but can now actually attain. I look forward to unleashing the power and nostalgia.


----------



## Mr.Scott (Aug 13, 2019)

Nice.
I have an SR-2 with X5675s and Dom GTs, on air.


----------



## Dinnercore (Aug 13, 2019)

I took a look under the heatsink cover of the SR-2 today and it looks really nasty. Not only is the dust nearly completely clogging the airflow, it smells like a 14-year-old trying to hide his smoking habit from his parents: an intense deodorant smell with a strong undertone of cigarette smoke...









The good news is that everything else looks fine. The only minor 'damage' I could spot was this PCIe pin sticking out a bit:






Gonna remove the heatsink, clean everything and move on to lapping the CPUs.

As for the GPU, I'm really torn on two things. First, I want to use a modern GPU that suits the CPUs to create a balanced build, but at the same time I again feel like it would be very cool to get something completely silly like dual 980 Tis.
The second thing is: do I stick with an EVGA theme and have the board, PSU and GPU all from EVGA, or do I just get whatever, since it won't be looked at 95% of the time anyway?


@Mr.Scott that seems like a solid result. Did you run some 3D stuff on it too? If so, what GPU did you pair it with?


----------



## Mr.Scott (Aug 13, 2019)

I have done no 3D benching to speak of on it.
I usually bench 3D on LGA 1151, unless it's AGP, then AM2NF3-VSTA.


----------



## Darmok N Jalad (Aug 14, 2019)

Dinnercore said:


> I took a look under the heatsink cover of the SR-2 today and it looks really nasty. Not only is the dust nearly completely clogging the airflow, it smells like a 14-year-old trying to hide his smoking habit from his parents: an intense deodorant smell with a strong undertone of cigarette smoke...
> 
> 
> 
> ...


That’s one nasty HSF. Poor baby couldn’t breathe!


----------



## Mr.Scott (Aug 14, 2019)

Definitely.  It runs hot even whistle clean.


----------



## jaggerwild (Aug 14, 2019)

Those boards commanded $450 not too long ago, and I think they still go for about the same even in non-working condition.


----------



## Dinnercore (Aug 14, 2019)

jaggerwild said:


> Those boards commanded $450 not too long ago, and I think they still go for about the same even in non-working condition.



Yeah, they seem to hold a price around that $400 mark. I'm surprised to see how frequently these still come up for sale tho; so far every time I've looked there were at least 1 or 2 available. Currently there is a guy on eBay selling one as broken with a weird 50/50 boot chance, and the offers are already above 250€.
I got it for 680€ including the CPUs, which don't go for less than 80€ each, plus the 48GB of fast RAM and two coolers:





All in all it wasn't a steal, but I don't feel like I overpaid either.



Mr.Scott said:


> I have done no 3D benching to speak of on it.
> I usually bench 3D on LGA 1151, unless it's AGP, then AM2NF3-VSTA.



I don't plan to use this for 3D benching, but I'd like to pair it with a GPU that balances the CPU power, and I have no clue how well these Xeons hold up in a dual config. I might stick an RX 5700 in there if I go with recent stuff.
For 3D benching I'd always use the latest CPU platform possible.




Mr.Scott said:


> Definitely. It runs hot even whistle clean.



Which is why I decided I have to put the board under water. I read a lot about these boards/chipsets failing over time, probably due to heat-cycle stress. I don't want to run it hot; this board deserves a comfortable retirement. Finding a used waterblock without a board attached is very difficult these days, which is why I was very happy to find someone who still makes blocks for the SR-2 if you ask.


----------



## Darmok N Jalad (Aug 14, 2019)

The dual Xeon trays for the classic Mac Pro are also still really expensive. Granted, they usually come pre-equipped with X56xx CPUs, but they usually go for $400-600.


----------



## FreedomEclipse (Aug 14, 2019)

Dinnercore said:


> Yeah, they seem to hold a price around that $400 mark. I'm surprised to see how frequently these still come up for sale tho; so far every time I've looked there were at least 1 or 2 available. Currently there is a guy on eBay selling one as broken with a weird 50/50 boot chance, and the offers are already above 250€.



Miners? I mean, once the system is up and running, you never need to take it down.


----------



## Devastator0 (Aug 14, 2019)

May I make a suggestion? Take a look at what TechYesCity on YouTube did with one of these, if you haven't already. He did an SR-2 build not long ago and it came out awesome. Perhaps join his Discord and drop a message about it?


----------



## Mr.Scott (Aug 15, 2019)

Dinnercore said:


> All in all it wasn't a steal, but I don't feel like I overpaid either.



I stole mine. 
$75 complete at an estate sale.
Just sitting all by itself in a box in somebody's junk room.
A 'right place at right time' score. I was lucky.


----------



## Dinnercore (Aug 18, 2019)

I took the heatsink off and found some more nasty stuff. Mostly just brown dust, but this little transistor caught my attention:





It must have gotten hot, and stayed hot for a long period of time. It looks like it let the magic smoke out.





I have been trying to source the part, but this number (ALHV / 1Z) turns up nothing for me.
Let's just hope it is nothing important; if I'm not mistaken, this is part of the power delivery for the fan that came with the heatsink. If I use my waterblock, this will hopefully be worked around neatly.

Still, I would like to find the part and replace it. Better safe than sorry.

Some more of the nasty stuff:





Looks like a worm ate through it. 









This is the fan cable connecting to the header close to the transistor in question. It shows some heat discoloration too; I think the poor chipset was cooking during the last days of use by the previous owner, and the fan tried as hard as it could to compensate for the lack of airflow.
I hope I got this poor thing in time to save it.





This yellow tape is right above Q172, showing how hot it got. 













Time for a proper bath. For me and for the board, after working on this.

As for the GPU question, I've got a neat idea. This platform seems powerful enough for even a 1080 or higher, as TechYesCity showed that it maxes out a 980 Ti most of the time. I'm on the same 1440p resolution, and instead of investing in another card that is basically on par with my main system's Vega, I'll rather buy my upgrade now and use it in this build for a while. That will eliminate any potential GPU bottleneck too, and we will see exactly what this dual-Xeon platform is capable of. I've been leaning towards the 5700 XT.


----------



## dont whant to set it"' (Aug 18, 2019)

@Dinnercore, don't re-solder unless a waterblock/pot goes directly on the die. My personal two cents.


----------



## Frick (Aug 18, 2019)

Some dude on here built tons of those systems for some customer. T_ski? Or something. Beastly things, I really wish there was a market for an equivalent these days.


----------



## Darmok N Jalad (Aug 18, 2019)

Just my own experience, but I had an old GeForce where I accidentally knocked a cap off the back. Card still worked just fine.


----------



## Dinnercore (Aug 18, 2019)

Darmok N Jalad said:


> Just my own experience, but I had an old GeForce where I accidentally knocked a cap off the back. Card still worked just fine.


For a cap, sure; I've had the same experience. They usually just filter noise from a signal or supply line, and one more or less is within spec and does not change much functionally.

This Q172 / ALHV 1Z is not a cap though, but a transistor of some kind, maybe a MOSFET. It is not as simple as a missing cap: if that thing is broken, some functionality will be lost. In this case, since the fan was still spinning, I would guess it has to do with the fan speed control; more precisely, it is the power-regulating part of the circuit, and the fan might now run at a fixed speed.
I did not test it. I just let the board POST and boot once, was happy that it worked, and paid no attention to the fan since it was spinning.

If all my assumptions about this SMD are correct, it may be OK to leave it be. Broken or not, I will not plug a fan in there again anyway.


----------



## Grog6 (Aug 19, 2019)

The a63a chip seems to be this:



			http://www.ti.com/lit/ds/symlink/lm321.pdf
		


A sot5 opamp. Compare the pinout to your board layout and verify.

That transistor isn't showing up; but if it's a fan driver, it's going to be a 1/2 A or so transistor, probably.

See which pin is connected to power, and post back.

This post made me find this:


			https://www.sphere.bc.ca/download/smd-codebook.pdf
		

Which is very cool. 


EDIT: looking at my board, if you run the fans slow, there's a bunch of power dropped across that transistor.
It may be normal.
I'd try it, and if it works, it's probably ok.
Worst case, it won't work at all, or will run full speed.


----------



## Dinnercore (Aug 19, 2019)

Grog6 said:


> The a63a chip seems to be this:
> 
> 
> 
> ...



Thank you so much, this is very useful stuff for me. 

I will poke the thing with my DMM when I run the board again, but that will have to wait until the waterblock arrives (ETA sometime in September). 
Looks like I've got nothing to worry about though; the part is not vital to the overall function. My guess about what this transistor does was quite accurate. Funny how easy it is to figure these things out if you just look at them and think about it, even without much background knowledge.


----------



## phill (Aug 23, 2019)

What a beautiful system    I remember picking mine up a while ago now, I think I paid something like £1400, and then had a 400-mile trip to go pick it up lol  

Just for memories, here's a before and after...











The CPUs weren't a massively high spec, being X5650s, but man can they suck the juice from the wall!!  (thank god for solar panels!!)

Have an amazing time with it and I hope that it runs for years to come   I'll be watching the thread and seeing how things develop


----------



## Dinnercore (Aug 24, 2019)

phill said:


> What a beautiful system    I remember picking mine up a while ago now, I think I paid something like £1400, and then had a 400-mile trip to go pick it up lol
> 
> Just for memories, here's a before and after...
> 
> ...



What a beautiful build! Yours looks much tidier than mine ever will be 

Some parts have arrived for my build, but the most important one, which keeps me from going much further, will probably take at least two more weeks.

I will be using a dual-D5 bay reservoir equipped with two D5 Vario pumps. They can be controlled with a PWM signal or with a dial on the back of each pump. In theory that should be enough to get the water through the big loop.




Since the two Xeons will produce a lot of heat on top of the GPU and mainboard, I decided to go for an external MoRa radiator from Watercool. I did a quick test fit to see if I could get away without an additional mounting kit for 180mm fans, and I can. So this unit will be equipped in a full 180mm push/pull config.




The CPU blocks are here too: Phobya UC-1 Extremes. They are cheap and reliable; I use the same one on my OC bench with great results.

To help the big radiator and further increase the thermal transfer area, I got a 240mm and a 280mm radiator to mount inside the case.


 



Now that is a lot of fans, and the big ones do eat up 5W each. That calls for a serious fan controller, as the one built into the case only supports 30W at best.
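Just as a sanity check, here's a quick back-of-the-envelope budget. Only the 5W figure for the 180mm fans is something I actually noted; the other per-fan wattages are rough guesses on my part:

```python
# Back-of-the-envelope fan power budget for the build's fan wall.
# Only the 5 W figure for the 180mm fans is from my notes; the other
# per-fan wattages are assumed typical values, not measurements.
watts_per_fan = {"180mm": 5.0, "140mm": 1.8, "120mm": 1.5, "80mm": 1.0}
fan_counts = {"180mm": 8, "140mm": 7, "120mm": 5, "80mm": 2}

total_w = sum(watts_per_fan[size] * n for size, n in fan_counts.items())
print(f"total fan load: {total_w:.1f} W")  # far beyond the case's 30 W controller
```

Even with generous guesses for the small fans, the 180mm fans alone already exceed the stock controller's 30W rating.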




This massive unit is up to the task. The reason I picked the Lamptron CU423 in particular was simply that I could not find a proper review, or much material on it at all. So naturally I had to get one.

An 8-pin connector for the fan supply, plus Molex. The CU423 is insane, I love it!




Looking at the naked board, I've got a strange idea forming in my head, but it will need some further R&D before I decide to try it.




I had some trouble with my tube fittings; I had to return them because each 6-pack I received had been opened and was missing one or two... Well, since I'm still waiting on the board's waterblock, no time is lost.
The only major things I'm missing now are that block and a GPU.
I'm hoping that AMD will release a 5800 XT soon, but let's be real, that won't happen within the next 2-3 weeks :/


----------



## phill (Aug 25, 2019)

Dinnercore said:


> What a beautiful build! Yours looks much tidier than mine ever will be
> 
> Some parts have arrived for my build, but the most important one, which keeps me from going much further, will probably take at least two more weeks.
> 
> ...



I know the build will be fantastic   

You might want to try out this fan controller at some point...  I think you'll find it amusing 

Looking forward to the next update!!


----------



## Darmok N Jalad (Aug 25, 2019)

I enjoy living vicariously through your project. These days all my creative energy (and most of my spare cash) goes into fixing up my 60 year old colonial. It has a closed loop refrigerated cooling system, but I dump my heat outside.


----------



## E-curbi (Aug 25, 2019)

Very cool motherboard bro, that SR-2. My dream rig a few years back.

I see so much amazing gear I want to build with, yet in the end I cannot justify spending the dollars. The sheer pleasure of the build is tempting, but when complete it wouldn't really help me get my work out any faster, lol - the enthusiast vs. the pragmatist. lol.

I have some of that amazing DDR3 from Corsair from back in the day, like Scotty. I heard they fetch higher prices the longer you hold onto them, so I'm planning on selling my Dominator GT 2133MHz kit for $1000 in about ten years. 

The EVGA SR-3 DARK will actually mount up to my chassis without any mods, whereas the Asus Dominus and Gigabyte LGA3647 - Xeon W-3175X boards are way too large and extended.

I actually spun the idea of getting the SR-3 DARK around in my head; it was worth about 3 hours of pure entertainment planning the logistics before I arrived back at reality. 

Good luck with your build plans.


----------



## bogmali (Aug 25, 2019)

I am working on mine as well; I took it out of an older DangerDen case and mounted it on a Phobya WaCoolIT bench. Just waiting for a GT 730 card, and then I can start crunching with this baby again. Excuse the ghetto mounting of some parts


----------



## phill (Aug 25, 2019)

Hopefully this isn't de-railing the thread too much, but @bogmali, how do you find the motherboard temps with the water blocks on?


----------



## Dinnercore (Aug 25, 2019)

phill said:


> Hopefully this isn't de-railing the thread too much, but @bogmali, how do you find the motherboard temps with the water blocks on?


I'd like to know myself; that would be a good point of reference for me.


----------



## bogmali (Aug 26, 2019)

phill said:


> Hopefully this isn't de-railing the thread too much, but @bogmali, how do you find the motherboard temps with the water blocks on?





Dinnercore said:


> I'd like to know myself; that would be a good point of reference for me.



Back when I was tuning this setup for F@H competition runs, I used Lavalys' Everest (which is now known as AIDA64) to measure the NB at least. I have not run AIDA in a while so idk if it still has that feature


----------



## Darmok N Jalad (Aug 26, 2019)

bogmali said:


> Back when I was tuning this setup for F@H competition runs, I used Lavalys' Everest (which is now known as AIDA64) to measure the NB at least. I have not run AIDA in a while so idk if it still has that feature


HWiNFO64 will measure more stuff than you may ever care to know.


----------



## phill (Aug 27, 2019)

bogmali said:


> Back when I was tuning this setup for F@H competition runs, I used Lavalys' Everest (which is now known as AIDA64) to measure the NB at least. I have not run AIDA in a while so idk if it still has that feature



They are a little on the juicy side for power consumption, that is a fact..  I was just curious, since I've only had mine on air cooling since the day I bought it, but I've never thought I'd need to uprate any of the cooling.  That said, all of my crunching PCs need a damn good clean anyway  

It would be interesting to find out whenever you next use it, just out of interest   I love these setups; EVGA knew what they were doing when they made these boards


----------



## Dinnercore (Aug 27, 2019)

I'm itching to complete this build; the wait is killing me, and it's only been two days since I last worked on it...

To keep myself busy I decided to delid my CPUs, despite what I said initially. I've got the time, so I might as well take the extra step. 





If you want to delid these Xeons, keep in mind that they don't fit in some delidding tools due to their slightly larger substrate compared to the Core-series chips. I ended up using a big vice.





I used a No. 10 medical scalpel from Braun to get the indium solder off the die. I'd say a No. 22 would be fine too, but they are a bit big. 





I marked them A and B to keep track of which socket they were in. I have never dealt with dual CPUs before and thought they might want to stay in the socket they used to work in  

All that is left to do now is polish the surfaces a bit. Let's just hope there are no unexpected delays with my mainboard block, or I might get bored and start to sand the dies too.


----------



## TheMadDutchDude (Aug 27, 2019)

Weren’t those chips soldered??


----------



## R-T-B (Aug 28, 2019)

TheMadDutchDude said:


> Weren’t those chips soldered??



He did mention taking a medical scalpel to indium solder... so yes.


----------



## Dinnercore (Sep 20, 2019)

The waterblock for the board is going to take a while, it seems. These old parts are made in small batches, which means they have only just started production of the batch my order is in. But I think it will be well worth the wait.

In the meantime I only did some minor touches: I installed some things in the case and did a quick test fire of the fans + fan-controller setup. I was a bit worried that those big 180mm fans would draw too much power for a single cable, so I split them up into 2x4.








For the drive bay area I had some trouble with the front door of the case not closing with the reservoir installed, so I put it at an offset, further back into the case, and did the same for the fan controller. This way I can still close the door, at the cost of not being able to close off the gap between the two with one of the drive bay covers. So I got an inline temp sensor with a display and will put that somewhere between the two.





In addition I also got a controller for RGB, since the board does not support it, and will re-use RGB lighting strips from a previous PC of mine to create that glow for the top and front.

I can't stand the wait, but it is what it is. At least GPU prices are falling with each passing week


----------



## Dinnercore (Nov 13, 2019)

Finally the waterblock is done. Holy XYZ, that took a while, but I think it was worth the wait.





And yes, this is sitting on an open GPU box  I now have all the parts and can start putting this thing together. It's 4:30am; I could not sleep at all with everything sitting around, so I'm getting to work now.


----------



## HammerON (Nov 13, 2019)

Sub'd for an awesome build!!!


----------



## R-T-B (Nov 13, 2019)

bogmali said:


> I am working on mine as well, took it out from an older DangerDen case and mounted it on a Phobya WaCoolIT bench. Just waiting for a GT-730 card and I can start crunching with this baby again. Excuse the ghetto mounting of some parts



Is it bad that I am tempted to try to figure out where in my city you live, just to gawk at this?


----------



## bogmali (Nov 14, 2019)

R-T-B said:


> Is it bad that I am tempted to try to figure out where in my city you live, just to gawk at this?



Closer than you think... it's not functional atm and I'm still working on it


----------



## Dinnercore (Nov 14, 2019)

I see many people in here; I hope no one expects a tidy, professional, ultra-clean build from me, because what you will see soon is my typical chaos. 

I've spent ~12-13 hours working on it now, and there were many complications. When I had finally mounted the mainboard in the case and looked at the 16 fittings in there, all kinda close to each other, I felt for the first time like it's growing a bit over my head. I only had 2x 90° fittings; all the others are straight... I think I will re-work this loop soon, but for now I want to get this whole thing to at least power up. 

Anyway, I completed the loop, and currently I'm in the process of filling and leak-testing. And I've encountered a problem with my loop: the dual D5s can't fill it.





There is still air trapped in them, as the reservoir is right in the middle of the whole loop instead of at the bottom. I can't really do anything about that, since it's mounted in a drive bay. When I turn the pumps on, they just recycle the water in the reservoir and build a bit of pressure at the far end towards my external radiator, but not enough to get the water all the way through. 
I don't want to run the pumps with air inside them, and even if I do, they currently can't push through the whole loop, as I've got air trapped everywhere, which seems to be causing lots of back-pressure. 

I tried each pump individually, and they both build positive pressure in the right direction, so the orientation is fine. So far everything is leak-free, and I measured the total vertical tube length where water has to be pushed 'up' at ~1.4m. I have less than 5m of total loop length and 'only' 2 CPU blocks, a single GPU block, a 240mm rad, a 280mm rad and the external radiator. The total tube length includes the routing to the external radiator. 
So in total I thought the dual D5s could handle this... Seems like they can't? EKWB rates dual D5s in serial at 7m total head pressure, so I'm now a bit nervous about whether they can handle my loop.
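To calm myself down I sketched the numbers. If I understand it right, once a closed loop is fully bled, the falling return column balances the rising one, so static head cancels out and the rated head mostly matters while filling. A rough check with the figures above (my reasoning, not a pump spec):

```python
# Rough head-pressure sanity check for filling the loop.
# Assumption: in a bled, closed loop the falling return column balances the
# rising one, so the pumps' rated max head only has to beat the tallest
# vertical "up" run while the loop is being filled.
vertical_rise_m = 1.4   # tallest vertical run measured in my loop
rated_head_m = 7.0      # EKWB's figure for dual D5s in serial

margin_m = rated_head_m - vertical_rise_m
print(f"head margin while filling: {margin_m:.1f} m")
```

So on paper the pumps have plenty of margin; the real enemy is the trapped air, not the head rating.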

Maybe it will sort itself out when I get rid of the air, but how I'll do that I don't know at this point.


----------



## R-T-B (Nov 14, 2019)

bogmali said:


> Closer than you think.......its not functional atm and I'm still working on it



Heh.  Now w1zzard knowing my screwy sleep and wake cycle suddenly makes sense...  /tinfoilhat


----------



## phill (Nov 14, 2019)

Dinnercore said:


> I see many people in here; I hope no one expects a tidy, professional, ultra-clean build from me, because what you will see soon is my typical chaos.
> 
> I've spent ~12-13 hours working on it now, and there were many complications. When I had finally mounted the mainboard in the case and looked at the 16 fittings in there, all kinda close to each other, I felt for the first time like it's growing a bit over my head. I only had 2x 90° fittings; all the others are straight... I think I will re-work this loop soon, but for now I want to get this whole thing to at least power up.
> 
> ...



When they quote the measurements, isn't that down to how high the liquid can be pushed??  I've really never taken any notice of it, and even with single pumps in loops, I can't say I've ever noticed any issues when it comes to bubbles..  When the system has been on for a few hours, all the bubbles work their way out of the system and it's then fine  
My loop on my 5960X seems to make all sorts of noise at the best of times, so to be honest I never really take much notice


----------



## Dinnercore (Nov 14, 2019)

phill said:


> When they quote the measurements, isn't that down to how high the liquid can be pushed??  I've really never taken any notice of it, and even with single pumps in loops, I can't say I've ever noticed any issues when it comes to bubbles..  When the system has been on for a few hours, all the bubbles work their way out of the system and it's then fine
> My loop on my 5960X seems to make all sorts of noise at the best of times, so to be honest I never really take much notice


Yes, but they assume a closed system with no air in it. Water does not compress or expand under pressure; air does. And I don't have little bubbles, I've got air pockets trapped between the lower radiator, the pump and the upper radiator. That would still be no problem for dual D5s to push out IF the impellers were the lowest point in the loop and fully under water. 

However, I could not mount a regular pump + res combo in the bottom of the case, as there is no space for it there. So now I have an air pocket in the lower part of the loop (240mm radiator and mainboard block) that just doesn't want to go away. 
Currently the impellers are only half covered with water, so they make some splashes but only throw air around in a circle, because the air wants to stay on top and the loop's back-pressure pushes that air right back at the pumps.

What I'll try is keeping the loop closed, taking the bay out, flipping it upside down and putting it a bit lower. Once everything is filled it should be fine. Now that I'm typing this, I wonder why I didn't do that 3 hours ago... Guess sleep is not as optional as I want it to be.


----------



## phill (Nov 14, 2019)

Is there a way you can rock the rads/res to try and push the air out? 

Would it be an option to run the water loop without the components (i.e. just the loop with no hardware in it, in case of a leak or pressure build-up and tubing coming off, etc.)?  Would that make any difference or help at all?

It's kinda hard to help with cooling, as being there to see it is sooo much easier


----------



## Dinnercore (Nov 14, 2019)

phill said:


> Is there a way you can rock the rads/res to try and push the air out?
> 
> Would it be an option to run the water loop without the components (i.e. just the loop with no hardware in it, in case of a leak or pressure build-up and tubing coming off, etc.)?  Would that make any difference or help at all?
> 
> It's kinda hard to help with cooling, as being there to see it is sooo much easier



I've tried rocking it back and forth, putting it upside down, putting the external rad upside down, shaking it gently. Nope.

Running just the loop would = a total rebuild. With the mainboard block and VRM cooler etc. involved, this is the last thing I'll try if all else fails. There is definitely no leak; all tubes are connected, and I've already pushed the water through a full cycle by opening a port on the external rad and sucking the water in with a very long tube. So: no leaks, the water makes it through the whole loop, and all tubes are connected as they should be. 
However, there is still air trapped in the middle, sitting between a partly filled radiator and the pump/res combo. So it has water on both ends but air trapped exactly where the pumps sit...

I feel like I'm making progress tho; by manually pumping through using just my lungs I can get a little bit of air out each time and refill the reservoir. By now I have 1.4l of water in there and it still has room for more. I hope I can work the big air pocket out like this, because I tried to remove the pump/res bay and noticed I can't: there is a lip that only allows it to be pushed out, which is not possible because of the tube lengths.
Well, it's a thing you've got to learn the hard way. Next time I won't place the pumps like that.


----------



## phill (Nov 14, 2019)

Dinnercore said:


> I've tried rocking it back and forth, putting it upside down, putting the external rad upside down, shaking it gently. Nope.
> 
> Running just the loop would = a total rebuild. With the mainboard block and VRM cooler etc. involved, this is the last thing I'll try if all else fails. There is definitely no leak; all tubes are connected, and I've already pushed the water through a full cycle by opening a port on the external rad and sucking the water in with a very long tube. So: no leaks, the water makes it through the whole loop, and all tubes are connected as they should be.
> However, there is still air trapped in the middle, sitting between a partly filled radiator and the pump/res combo. So it has water on both ends but air trapped exactly where the pumps sit...
> ...



I don't suppose there's a bleed hole or something on the rad that you could possibly use to help with the trapped air?  I know some rads have them, but otherwise it might just be a case of running the loop and letting the air work its way out..  
I've never used that style of pump/res combo myself; the ones in the updated build I linked in my post above are the type I typically use.  I also try to separate the CPU and GPU loops, because when both are dumping a lot of heat into one loop it can get too hot and then the fans need to spin faster etc..  
Is there any way of running the loop with the caps off the tops of the res, to get the air out that way while the loop runs?  I'm unsure, hopefully not talking rubbish, but it might help    Trapped air can be a nightmare in water loops


----------



## dont whant to set it"' (Nov 14, 2019)

Beasty build this one is for sure with respects to both altitude and speed.
let:typo


----------



## Dinnercore (Nov 14, 2019)

Thank you for the suggestions and keeping my spirits up @phill !

I figured it out, it was just the airlock in the pumps themselves that hadn't yet cleared. I was too big of a sissy to just let them run half-dry for a while. I tilted the whole tower sideways a couple of times and turned the pumps on and off many times, and eventually they got going.
And once these dual D5s start eating the water, they go absolutely crazy. I did not expect to see whirlwinds inside my tubes - through nearly their whole length:






That is only speed 4/5. I can now rest in peace and catch some sleep while this thing does its final bleed and leak test.


----------



## phill (Nov 15, 2019)

You can also try slowing it down to see if that makes any difference to the trapped air   These pumps can move a serious amount of water, and what we have in these systems is pretty much nothing for them in comparison     Sometimes a bit of doing the wrong thing helps it out, just glad it's sorted for you now 

I'm real glad you have it all sorted out    Get the CPUs and GPU loaded, run it for a few hours, you'll see all the air eventually work out of the system as the heat introduced will get rid of the air bubbles   I'd really love to update some of my water loops, but....  
Anyways, so pleased it's now all up and running   Awesome work 

Sorry, I don't think I put in enough smilies


----------



## Dinnercore (Nov 16, 2019)

Oh, at that point the system was still far from 'up and running'...

The past two days were an interesting set of ups and downs. This is for sure the most complicated thing I've built so far. There are still so many things to do, but I can give you all an update on what has happened so far and where I stand right now.

Let's start with me getting the waterblock and jumping into action. I immediately finished the work on both CPUs, which means sanding and lapping the surface of each die to make them even.





I had hoped I would not have to remove the top layer, but there was no way around it. The die on this one was so uneven that while the center was already through the diffusion barrier, the corners still had traces of indium left on them...
So this is where I ended up with the whole die now being shiny from edge to edge.

Next up was mounting the waterblock.

(yes the top CPU was not yet lapped in this pic, don´t worry I did not forget about it!)




I went with Kryonaut on the ASICs and the Thermal Grizzly minus pads for the VRMs including the inductors. After installation I checked from the side with a strong flashlight if I had good contact all around and noticed that while it made contact, the copper surface of the cooler came very very close to the SMD caps on the chip substrates. So close in fact that I got a little nervous and for good measure I removed the cooler again and put a very thin slice of non-conductive thermal pad across them and remounted everything. Now I felt a lot better about it.

Next was installing the CPU blocks. I removed the sockets and used my own backplates to go direct die. I put liquid metal on the die and covered the surrounding SMDs with Kapton tape. Before mounting the second block I connected them with the first tube and my 90° fittings, to have at least one connection that does not have to go straight up, as these were my only 90° fittings. I did not plan that very well, but it resulted in something (to my eyes) beautiful, which you will see later.





Having the board prepared, I went back to my case. I installed the drives, PSU etc., then mounted the fans and radiators. After that I tried to fit the mainboard, and wow - it's really large, who would have thought? So large in fact that I had to remove the radiator in front of the drive bay, because it was blocking the 24-pin power connector. So I had to attach the connector first, then put the mainboard in, then remount the radiator.

No, this time I actually did not forget to install the I/O shield beforehand! First time!
With the board, coolers and radiators in place it was time to build the loop. And that was like staring at a blank piece of paper with the goal of drawing a cubist horse. Once I had a start though, it all kind of came together.
In the end I only moved two connections around, so that the internal radiators sit between the board/CPU side and the GPU.

Speaking of the GPU, I made my life a little easier by buying a waterblock version. It was not my first-choice brand, I had to go with Inno3D, but they are OK I think, and after all it's just a reference model with a mediocre waterblock on top. Plus, this is the first time I can have a warranty on a watercooled part, as in Germany you lose all warranty support as soon as you touch a screw.
It's an RTX 2080 Super; this should make for a perfectly CPU-bottlenecked system and show exactly what this platform can and cannot do.

So this is what the loop looks like:





The 'mess' of tubes you are looking at is the magnificent, beating heart of this beast. Let me explain the loop order:
We start with the tube that is split by fittings and the inline temp sensor. From there the order is: pump/res combo outflow -> mainboard block -> right CPU -> left CPU -> drive-bay radiator -> top radiator -> GPU -> VRM cooler -> external radiator -> back into the reservoir.
The return line from the external rad is hidden behind the top radiator; I had to route that one with the mainboard out of the case too.

This is madness you may say, well yes it is but there will be two D5s powering this heart.
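To keep track of a loop this convoluted, it can help to write the flow order down as plain data. A minimal sketch (the component names are informal shorthand for this build, not product names):

```python
# Coolant path of the loop described above, in flow order.
LOOP_ORDER = [
    "pump/res combo",
    "mainboard block",
    "CPU (right)",
    "CPU (left)",
    "drive-bay radiator",
    "top radiator",
    "GPU block",
    "VRM cooler",
    "external radiator",
    "reservoir",
]

def describe_loop(order):
    """Return the loop as a single arrow-joined string."""
    return " -> ".join(order)

print(describe_loop(LOOP_ORDER))
```

Handy when re-plumbing later: move entries around in the list instead of guessing from memory which tube went where.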

Next up was filling the loop, leak testing and bleeding the air out. As you can imagine by looking at this now, getting the air out was a bit of a pain. But as another first for my builds this one had not a single leak. Apart from me leaving the fill port open while carrying it so I had some spills.

The D5s were airlocked from the start, so I used a long tube and my lungs to fill most of it up. In total I got 1.6 liters of fluid in there. To bleed it, I let it run at different speed settings, from slow to full blast. After that there were still two major air pockets in the system, one in the left CPU block. That one always got pushed down into the drive-bay radiator but never made it fully through to the other side. I carefully removed the radiator with the tubes attached, tilted it a bit while running and gave it a shake. This got the air through; after that I had some big bubbles left in the GPU block. Same procedure: took the GPU out and carefully moved it around with the tubes attached.

Finally I had the air out. Next I had to do the power cables for the board and GPU. I managed them as well as I could but the big mainboard cable was not possible to hide. In the front it looks ok-ish, but
the backside is where the copper-snakes party:




This is the system wired up and filled:





Through all these steps I carefully checked underneath each socket multiple times to see if I had any liquid metal spills. When attaching the tubes I had to push down on the CPUs, which was one of the most nerve-wracking parts, as I knew I was pushing straight on a bare die with liquid metal, in the socket of a rare and expensive collector's board.

But after all I now had no excuse to wait any longer with the first power-on. I think you all believe me when I say this was a special moment.

Pushed the button and... absolute silence. A bit too quiet to be comfortable. LEDs suddenly light up, the temp sensor is on, the fan controller is on, the board starts going through all the POST codes, and when it spoke to me with its clacking speaker noise I knew it was as happy to be alive as I was.
But still absolutely no noise apart from the barely noticeable pumps. The fans were not spinning. My controller was on but the fans were not. I checked the connections: all wired up.
Well, with so much fluid in the system it can run passive for a while. I went into the BIOS and checked my system temps. All looked perfectly fine.

I powered off again and went to troubleshoot the fan-controller... To be continued, I need to grab a coffee 
Build is running, no major issues but many small things that I will tell you about how I solved them soon and many things that I still need to do at this point.


EDIT: That is the vortex from the photo, captured while bleeding the air out


----------



## Dinnercore (Nov 16, 2019)

Next up I looked at the fan controller, scratching my head why the fans would not spin. It turned on, it had all the lights going and everything.

I hooked it up outside of the case with a different PSU and the same thing happened except that one of the test fans, a very small one, turned on while the bigger ones did not. So I looked at the manual and found out that the controller defaults to an 'auto-mode' that applies 4V at all times and only ramps up when the temperature probes report higher temperatures (40°C and up).
You need to switch back into manual mode to control the fans yourself, otherwise you can push the buttons all day long and nothing will change.

Well, I was pleased to find everything working and put it back in; had to do all the fan cables again etc.. Powered back on, and this time I had fans too. So, time for the Windows installation.
I installed Windows 10 and all the drivers for chipset, USB etc.. Got some monitoring software going and let a CPU-Z stress test run for a bit:





Don't get fooled by the big temp delta between the cores; as you can see from the idle temps, most of the core sensors read far too low. The 38°C / 39°C load figures seem the most realistic, as that is what my OC-bench X5650 did with the same direct-die, liquid-metal cooling under the same model of waterblock.
I´m a bit confused by the readouts from the SR-2. What does it mean by CPU-PWM? Is that an offset temperature to control the CPU-fan headers? Or do they mean the VRMs with that?
What are the nebulous 'System' and 'System 2' temps? They do seem to correspond with the water-temp of the loop so maybe those are NB and SB?

After that I turned everything off and hooked up my other drives. I usually start with only the boot drive attached, so that I can´t be an idiot and choose the wrong partition for installation. Yes that happened to me before...
After the restart I noticed something unpleasant: the fans were not spinning again. I had to switch back to manual mode on each channel again, and set the speed for each channel too. It did not remember my settings, which is odd because it should do that according to the manual. And if I remember correctly, it actually did 3 months ago when I received the unit and tested it. What the heck.
The auto-translated manual is not very detailed, so I can't figure out if there is a button combination I need to press or something in order to save my profile. I'm currently looking into that issue.

Upon booting with all drives attached (I have 2x SSDs, 250GB / 500GB, and 2x HDDs, 1TB / 4TB) it started loading Windows 7 and BSODed. You read that correctly: it attempted to boot Win 7 from a partition I had left on the other SSD. For a very short moment I thought I had messed something up, or that the controller on the board could not handle my drives, until I realized I had seen the Win 7 loading screen and not Win 10.
Back into bios I found a weird issue with the RTC as the time had just been dialed back exactly 4 hours. No idea what caused that, but I set it to the correct time again and for now it seems to hold it.

Here is a quick peek of it running:





The LEDs from the GPU are burning holes in my camera, I need to play with the RGB software and set that up too, but that is for later. I also need to set up my RGB-strips and hook up the RGB controller I got. But the most pressing issue atm is to figure my fan controller out.
I would not mind setting it up on each boot IF it weren't such a long process. I need to hold the buttons on each channel for 3 seconds to switch over to manual mode, which alone is 12 seconds. After that I need to set the voltages on each channel in 0.5V bumps per click, another 40-60 button presses each time. If I can't figure this out, the beautiful CU423 sadly has to go.
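To put numbers on that per-boot ritual, here is a rough back-of-the-envelope sketch. The 3-second hold, the 4V default and the 0.5V step are from the controller's behavior described above; the per-channel target voltages are my assumption for illustration:

```python
# Rough estimate of the per-boot effort to configure the controller.
CHANNELS = 4
HOLD_SECONDS = 3   # per channel, to switch into manual mode
DEFAULT_V = 4.0    # controller's auto-mode baseline voltage
STEP_V = 0.5       # voltage change per button press

def presses_to(target_v):
    """Button presses needed to raise one channel from the default."""
    return round((target_v - DEFAULT_V) / STEP_V)

# Assumed targets per channel (illustrative, not the actual settings):
targets = [11.0, 11.0, 12.0, 12.0]
total_presses = sum(presses_to(t) for t in targets)
total_hold = CHANNELS * HOLD_SECONDS

print(f"{total_hold} s of button holding + {total_presses} presses per boot")
```

With those assumed targets the estimate lands at 60 presses, right in the 40-60 range quoted above.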

The GPU block's performance is another thing I'm considering investigating next. It already reached 45°C in the GPU-Z render test at only 160W and 60% load. To be fair it did boost to its full clock speed of 2050MHz, but at full load that temp would be hitting the 60°C range. Absolutely horrible for a custom watercooled card. Either the block design is bad (looking at it I can see multiple reasons why), or they used horrible TIM (most likely; that's a common thing among pre-built GPUs), and/or the mounting pressure is not enough and I need to look into that.
Problem is, if I do anything to the card like changing the TIM or removing the backplate to check the screws on the block, I would lose the warranty. I might just run it as-is for a year and then fix it myself, but that depends on how bad it really is. One customer claimed his card with this block reached 70°C; I hope mine doesn't. If so I'll return it.

And that's all for now. Will report back as soon as I've got most things sorted out


----------



## phill (Nov 16, 2019)

Looking great my man!!    So glad everything is working and running well   These Xeons do not heat up very much at all at stock, but with a little voltage put through them they can most definitely get a little warmer  

I've now got two X5675's but I've yet to have a play around with them...  I might just have to have a go, but mine's not the best loop in the world..  I love how yours is soooooo complex!!   Looks most definitely like a heart    Can't wait to see more....


----------



## Dinnercore (Nov 17, 2019)

I´m mostly done with it now. I did some finishing touches, added a few things like the RGB-strips and closed the side panels for the first time. 

Here it is teasing me since august, my 'Night Fury':







I've added another fan on the drive-bay radiator, so at least the lower part is now push/pull. As you can see from the light, I've also added the RGB strips and the controller from Cooler Master. One strip on the top-front side and one on the front-backside of the case. Very simple to control; it plugs into an empty USB header on the board.
I matched the color of the card to the purple glow and lowered the brightness 3 steps; for the camera it's still too much, but in person it's OK now. 

I finished the 'cable-management' on the backside. With that I mean I tied them up so that I can close the side panel without the use of excessive force. Please ignore the one white cable, it was all I had for RGB-extensions. 





The two yellow kapton-taped probes are temperature sensors for the fan-controller. I placed them close to the socket and I have two more with one just hanging somewhere in that cable mess and one taped to the top of an HDD right in the stream of my front intake.

I had to improvise with the RGB-cable connection, as my LEDs did not have the same connector as the controller and extension cable. 






I did manage to get one of the double-male connectors wedged into the female end of the cable, but had to secure it with heat-shrink. Works perfectly and is more durable than I thought. 

As for the GPU, this is why I think the block design is not good:





The arrows mark where most of the flow will pass, as the direct passage from in to out is not blocked in any way. On the box of the card there is a rendered waterblock image that shows a completely different design. And they did team up with Alphacool on this block, but I can't find this design anywhere on their website; the blocks they sell all have more aggressive flow guidance. I wonder why....
The card without my scribbles:




But thanks to the high flow rate it should be good enough; there is still some water going across the whole fin stack.

Some more photos:



 

 

 

 

 



Sadly my camera has some trouble with the strong LEDs. The bright purple color comes through mostly realistic, but the temp display, for example, should be deep red and shows up orange.





The view behind the upper front door turned out okay the way it is I think. I was a bit worried the missing cover and recessed res / fan-controller would look horrible.





Again the bright LEDs upsetting my camera. 

As for the fan controller, I have not yet figured out the issue with saving my profile. So I'm using a workaround: I hooked the 280mm push/pull fans up to the controller built into the case and only use two channels of the fancy controller. These can be set up in a reasonable amount of time (exactly the time it takes the SR-2 to complete its long boot), and I can use the two free channels just as temperature gauges for the probes.
I have written an email to the manufacturer; the English translation of the manual is a bit so-so, but they seem rather friendly. I hope to hear back from them. If they can't help me, I'm a bit torn between getting something else and keeping this unique look. Replacing it would be simpler than you might think, as the cables are standard between most of these units and I'd just have to unplug them from the back of the current one. 

Maybe I´ll just make setting it up each time a 'start-up procedure' like aircrafts have them and scribble it on a post-it note that I hide on the back of the swing-open door.













And from the outside it looks so tame and innocent. 



 



Excuse the bad smartphone pictures, it was kind of an afterthought to include pics of the outside. 

And after all this I was still not 100% satisfied with the air-cooling part. I'm fixated on constant improvement, maybe also on complicating things just a bit further each time, and felt like I could do some more. A final touch. My usual ghetto-mod style was still missing, something that lives on from my previous build. I held my face close to the board to find places emitting heat and found the QPI VRMs of each CPU to be the 'hottest' parts now. By hot I mean at current stock settings you could touch them and they felt skin-warm. Can't be more than 36°C-38°C. Still. Let's put some airflow over that:





Now it's ready for the drag strip. The part below the GPU gets some active airflow too, from the side-panel intake fans that I decided to include. ALL intake fans (front/side and bottom PSU) are behind dust filters. 
I've hidden the connectors for the side-panel fans in a neat spot beneath the lower SSD tray, with a simple Noctua Y-cable going to the back. 





And with that it's closed: beating heart and breathing lungs. It's alive. I got exactly the look I wanted: clean and tame on the outside, but with that faint glow from the top vents, the deep but soft hum and the not-so-subtle external MO-RA 360. VERY quiet! The PSU fan makes the most noise but is muffled by the heavy noise-dampened side panels. The big external radiator fans spin very slowly at ~500RPM. 
The loop water temp at idle is only 1.5-2°C above ambient, but I think the system has a serious idle power consumption. 
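That small water-to-ambient delta is roughly what a first-order heat balance predicts: at steady state the water sits above ambient by about (heat load) / (radiator dissipation per kelvin). A sketch with an assumed idle load and an assumed combined dissipation figure for the MO-RA plus the two internal rads at slow fan speeds (both numbers are my guesses, not measurements):

```python
# First-order estimate of water-over-ambient temperature at steady state.
IDLE_LOAD_W = 200          # assumed idle heat dumped into the loop (W)
DISSIPATION_W_PER_K = 100  # assumed combined radiator capability (W/K),
                           # big external rad + two internal rads, slow fans

def water_delta_t(load_w, w_per_k):
    """Water temperature above ambient, in kelvin."""
    return load_w / w_per_k

delta = water_delta_t(IDLE_LOAD_W, DISSIPATION_W_PER_K)
print(f"Estimated water delta over ambient: {delta:.1f} K")
```

With those assumptions the estimate is 2.0 K, consistent with the observed 1.5-2°C; under a full dual-CPU + GPU load the same formula predicts the delta scaling up proportionally.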

Looking at this I feel pleased. Next will be overclocking and some game testing. But first I have to clean up my apartment a bit. There are zip ties everywhere, cable bags, coffee mugs, boxes on top of boxes.


----------



## Darmok N Jalad (Nov 17, 2019)

Very nice work. Do you plan to OC the Xeons? That era had a lot of headroom.


----------



## phill (Nov 17, 2019)

Dinnercore said:


> I´m mostly done with it now. I did some finishing touches, added a few things like the RGB-strips and closed the side panels for the first time.
> 
> Here it is teasing me since august, my 'Night Fury':
> 
> ...



For a rough estimate for you, 200w at idle with a pair of X5675's and then under load again at stock, 400w+   It's a bit of a beast


----------



## natr0n (Nov 17, 2019)

phill said:


> For a rough estimate for you, 200w at idle with a pair of X5675's and then under load again at stock, 400w+  It's a bit of a beast



Have you tested with HT off to see how low the watts get? I don't have a watt meter yet. I do run my stock X5675's with HT off though.


----------



## Dinnercore (Nov 18, 2019)

Darmok N Jalad said:


> Very nice work. Do you plan to OC the Xeons? That era had a lot of headroom.



Yes I will OC them, 4.4GHz would be my first goal.

From the first testing I got to 4GHz on just 1.28Vcore without LLC (1.23V load). Both of these CPUs seem to do very well, much better than my X5650 sample but that was kind of expected given the big difference in binning / base frequency. 

BUT I encountered an issue with one core temp on the secondary CPU. I did not notice at first, because it was still so close to ambient and not heating up at all. But now at 4GHz the 4th core of CPU1 (the 2nd CPU) has a massive delta of 14°C over the other cores. At first glance I thought that maybe the sensor itself was to blame, but further investigation leads me to believe I used just a tiny bit too little LM on the die. Guess I was a bit too careful there. 
The core in question idles at the same 20°C-ish values as the other cores on that CPU, even a bit below the average, but under load it shows 52°C whereas the others are at 38°C-42°C, and it cools down more slowly than the others. While all other cores drop almost instantly back down, this one drops to ~30°C and takes two seconds to reach its idle temp. 
This makes me believe that there is a spot that does not have proper contact and I want to fix this before I go any higher. It will be a pain but I want to do it right. I don´t have to take the loop apart, luckily there is just enough room and tube length to get just the waterblock out.
And I have already tried everything else at this point, like tightening the screws a bit, loosening them up etc.. All I managed was to drop the temp on just that core by 2°C, which again supports my theory of too little LM.


----------



## Dinnercore (Nov 19, 2019)

phill said:


> For a rough estimate for you, 200w at idle with a pair of X5675's and then under load again at stock, 400w+   It's a bit of a beast



I´m sure my idle is much higher, without the speedstep and C-states. I might try to enable them again when I have found a stable OC that I´m happy with but it will still be very power hungry.

As for the higher core temp, I had to perform two surgeries. First I had the wrong CPU out, as for whatever reason AIDA64's CPU2 corresponds to CPU0 on the SR-2.


 

 

 



And after I had added some LM, it no longer POSTed. I got stuck at code FF instantly after power-on. My heart really sank in that moment. An hour later I found the culprit: a bent pin in the socket, right below the opening in the middle, top row. A single pin had probably caught on an SMD on the CPU's underside while I was removing it.
After bending it back in place I got to code 68 and got stuck there. This time I had used very little mounting pressure on the springs, thinking that the bent pin may also have been a sign of too much pressure from the cooler.
Adding two turns on each screw, I finally got it running again. In Windows I saw that I still had too little pressure, as half the cores on CPU 1 were hitting 65°C under load. Again I fixed that by adding another turn on the screws, and finally I had everything back to where it was before - of course still with that single high core temp on the other CPU.
Then I performed the second surgery; at least it was simple enough to work on the socket even with the tubes all over. This time everything went well: I added a little bit of LM and spread it around, then remounted the cooler with just the right amount of pressure (I'm starting to get a good feeling for that now).

Only to find out that all of this did not help and I still hit exactly the same temp as before... So now I think that sensor is simply a bit more sensitive than the others. Thinking about it, there is no other way a single core reads so much higher than all the others on that die after confirming that A) there is a good amount of liquid metal and B) my mounting pressure and alignment are spot on.
After all, the other cores on the same die are very close to each other. IF there were a contact problem, then at least one adjacent core should read a little above the others, as it has to take the additional heat from its neighbor.

In hindsight I should have tested both CPUs before delidding to spot something like this. And I´ll now continue like it is and not mess with anything further. After all, IF that CPU bites the dust it is easy to replace in terms of price, availability and work involved while the board is not.

Some CB15 runs:


----------



## phill (Nov 20, 2019)

It's a shame that you never did a quick test with them standard, as then we could see whether or not the delidding helped...  I'm guessing that it might not have, but I was also wondering, are you putting the block directly on the die of the CPU or are you still using the standard IHS?

Glad to hear it's all back up and running.  I literally sweat like god knows what when I have to mess about with the socket, always so worried I'll drop the CPU into the socket or just end up bricking the board because of the socket..  It's a nightmare lol 
I do need to get mine running Windows for a bit, I'd like to test the CPUs (both X5675 and my Ryzen 1700X's....)


----------



## Dinnercore (Nov 20, 2019)

phill said:


> It's a shame that you never did a quick test with them standard, as then we could see whether or not the delidding helped...  I'm guessing that it might not have, but I was also wondering, are you putting the block directly on the die of the CPU or are you still using the standard IHS?
> 
> Glad to hear it's all back up and running.  I literally sweat like god knows what when I have to mess about with the socket, always so worried I'll drop the CPU into the socket or just end up bricking the board because of the socket..  It's a nightmare lol
> I do need to get mine running Windows for a bit, I'd like to test the CPUs (both X5675 and my Ryzen 1700X's....)



I run direct-die, no IHS involved. I tested this method with a single X5650 on my OC-benchtable. I can give you those numbers.

Running 4GHz @ fixed 1.3V with 50% LLC (1.27V load voltage). QPI 1.3V / 182 bclk x22 / water temp 21.8°C / room ambient 18°C

For Stock IHS with Kryonaut paste on top: ~37°C idle / 54°C hottest core during load
With direct die Conductonaut: ~32°C idle / 46°C hottest core during load

That test showed a small improvement from using LM and going direct die. And my X5690s show nearly the same numbers for the 4.1GHz run! CPU0 matches perfectly; the only exception is that single core on CPU1. 
Keep in mind that with my X5690s the water temp is unavoidably higher and the load voltage a bit lower, and I hit 43°C on the hottest core if I ignore the questionable one. 

I'll see where I land further down the road. I saw the video from 'Tech YES City' where he pushed his stock-IHS chips all the way up to 1.4V / 4.4GHz with LLC and hit 82°C core temp. I just don't know his ambient / water temperatures. 

As for intense sweating: working on the socket I feel fine. My delidded chips weigh so little that they can barely do any damage. I start to feel it when mounting the cooler onto a bare die. Since the die surface underneath the cooler coldplate is so small, it is very wobbly, and if you don't watch out you can tighten the CPU down into the socket at an angle. And that is definitely NOT good.


----------



## phill (Nov 20, 2019)

I would hope that the numbers are in fact lower because there's a direct touch to the die, my only fear of doing that would be if I over tightened something and either cracked the die or put too much pressure on the CPU pins underneath...  Just my luck that would be  

The X5650's I had seemed to be really low load temps, I think 40C under 100% load from WCG, which was great considering the rest of the loop and such.. I've not had any real time with the X5675's since I've swapped them over and pressed on lol  I need to get Linux Mint 19.3 installed as well and see how that behaves.  If it continues to mess about and be very sluggish, I'll ignore WCG under Linux and put it under Windows and see how that goes..  If it's any better, I'll leave it there 

I imagine the temps are due to the higher clock speeds and so on..  They do take a bit more power as well, which is not surprising considering the extra clock speed.  I think it's nearly 500MHz per thread (or it is exactly that...  Scary stuff)


----------



## Dinnercore (Nov 20, 2019)

I've hit 4.4GHz without much trouble, with only 1 BSOD on the way as I tried to keep LLC off. But at 4.2GHz the Vdroop got so bad that I had to enable LLC. 





Still got some headroom for voltage and temperature, and I have not yet hit the max frequency at 1.32V.

There is however one issue with my RAM: one stick seems to drop out above 170 BCLK. I could not bring it back yet; I tried higher DRAM voltage and loosened the timings all the way to 11-13-13-31 244 (the rated timings of this kit for 2400 MHz). Still it's not coming back. CPU-Z is not much help in identifying which CPU is missing a DIMM, since it reads only 2 slots populated and thinks my board has 18 slots. 
I also tried higher PLL, VTT and IOH voltages, but the 8GB stay MIA. I wouldn't mind if it were only the capacity that's missing, but it seems to hurt multithreaded performance a lot: one CPU is forced back to dual channel.
The missing 8GB show back up as soon as I lower the BCLK below 165-170. I have to look into the MCH-strap setting and hope I can get it back somehow. 
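Since everything on LGA1366 hangs off BCLK, raising it drags the memory clock up along with the cores, which is why the sticks start misbehaving around 170 BCLK. A sketch of the arithmetic; the x22 CPU multiplier is from the runs above, while the memory ratio is an illustrative assumption:

```python
# On LGA1366: core clock = BCLK x CPU multiplier, and the effective
# DDR3 rate scales with BCLK x a memory ratio, so every BCLK bump
# also overclocks the RAM. The ratio of 10 below (DDR3-1333 at the
# stock 133 MHz BCLK) is an assumed example setting.
CPU_MULTIPLIER = 22

def core_mhz(bclk):
    """CPU core clock in MHz for a given BCLK."""
    return bclk * CPU_MULTIPLIER

def ddr3_mts(bclk, mem_ratio=10):
    """Effective DDR3 transfer rate in MT/s for a given memory ratio."""
    return bclk * mem_ratio

for bclk in (133, 170, 200):
    print(f"BCLK {bclk}: core {core_mhz(bclk)} MHz, RAM {ddr3_mts(bclk)} MT/s")
```

Dropping to a lower memory ratio is the usual way to push BCLK further without running the DIMMs past their rating.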









CPU-Z Validator: Intel Xeon X5690 @ 4440.64 MHz - [5ctnbb] Validated Dump by Osmium-OC (2019-11-20 20:16:31) - MB: EVGA Classified SR-2 - RAM: 40960 MB (valid.x86.fr)


----------



## phill (Nov 20, 2019)

I take it you're just running 6 sticks total for the moment?  3 x 2 for each CPU?  I had my previous X5650's running 200+ BCLK without too much issue; they weren't the best overclockers but they weren't too bad up to 3.6GHz   Not bad for 1GHz per thread over stock, I suppose 

Just a thought, what size sticks are they?  2GB, 4GB or bigger?


----------



## Dinnercore (Nov 20, 2019)

phill said:


> I take it you're just running 6 sticks total for the moment?  3 x 2 for each CPU?  I had my previous X5650's running 200+ BCLK without too much issue; they weren't the best overclockers but they weren't too bad up to 3.6GHz   Not bad for 1GHz per thread over stock, I suppose
> 
> Just a thought, what size sticks are they?  2GB, 4GB or bigger?


I´m running 48GB as 6x8GB. From what I read on the internet this is a common problem with X58 and especially the Xeons. It seems some specific CPU steppings and serial numbers do better than others, but maybe it´s all just luck.


----------



## Darmok N Jalad (Nov 21, 2019)

The cheese grater era of Mac Pro managed to do a good job of handling lots of RAM, even though Apple didn’t say anything beyond 8GB sticks were supported officially. I think folks were having luck with 16 and 32GB sticks x 4, so the x-series Xeon could certainly handle it. I think the Mac Pro didn’t use x58 though, it used a workstation chipset. Maybe some of the problem is having 6 slots per CPU? The Mac only had 4 slots per CPU and dropped to dual channel when all 4 were populated.


----------



## phill (Nov 21, 2019)

Dinnercore said:


> I´m running 48GB as 6x8GB. From what I read on the internet this is a common problem with X58 and especially the Xeons. It seems some specific CPU steppings and serial numbers do better than others, but maybe it´s all just luck.



I do know there were some unsupported SR-2s running 96GB of RAM (all slots with 8GB sticks) but how well they all ran with that amount of RAM I'm unsure.  I've gone from 48GB total (12 x 4GB) down to 6 x 2GB when testing   I think that might make it easier on the memory controllers, which might help with the overclocking..  I know I was struggling after 4GHz with 48GB but that's all I had at the time...  
If you can, go with smaller sticks if you want a higher overclock...  That'll likely be the issue


----------



## JohnSimpson (Nov 21, 2019)

Wow, great! I didn't see anything similar so close.


----------



## Dinnercore (Nov 21, 2019)

Darmok N Jalad said:


> The cheese grater era of Mac Pro managed to do a good job of handling lots of RAM, even though Apple didn’t say anything beyond 8GB sticks were supported officially. I think folks were having luck with 16 and 32GB sticks x 4, so the x-series Xeon could certainly handle it. I think the Mac Pro didn’t use x58 though, it used a workstation chipset. Maybe some of the problem is having 6 slots per CPU? The Mac only had 4 slots per CPU and dropped to dual channel when all 4 were populated.


I don´t know. What I do know is that the SR-2 can handle triple channel and it can run my config of 6 x 8GB, all 48GB at 2000MHz CL8 (I saw some G.Skill sticks that could do that). And I know that I previously had all 48GB running. https://valid.x86.fr/sb00w6

But after going back and forth a couple of times I can´t even make the 8GB show up at stock. 



phill said:


> I do know there were some unsupported SR-2s running 96GB of RAM (all slots with 8GB sticks) but how well they all ran with that amount of RAM I'm unsure.  I've gone from 48GB total (12 x 4GB) down to 6 x 2GB when testing   I think that might make it easier on the memory controllers, which might help with the overclocking..  I know I was struggling after 4GHz with 48GB but that's all I had at the time...
> If you can, go with smaller sticks if you want a higher overclock...  That'll likely be the issue



I don´t have smaller ones to try, my DDR3 stuff is limited. I´ll look around for some kits tho. 

What I find strange is that Windows, BIOS and CPU-Z can´t see every stick BUT Aida64 can:





It reads the information correctly from each DIMM and sees all 6. Whatever witchcraft that is. Aida64 having more access than the BIOS??


----------



## basco (Nov 21, 2019)

got some OCZ Blade 2000 CL7-8-7 triple channel 3x2GB if ya wanna go the Elpida Hyper road, but i think ya need more than 6 or 12GB


----------



## phill (Nov 21, 2019)

It might be a case of just taking the sticks of RAM out and putting them back into the system; sometimes, for whatever reason, that does help bring them back to life.   Failing that it might be a reseat of the CPUs, as some of the pads might not be making contact correctly and only so much RAM is showing up.  

If you'd like I could always send you some 2GB sticks and you could do some testing with those?    I'm not sure the postage would be very much and I think I have enough to send over possibly 12 sticks..  I'll take a look this evening for you


----------



## Dinnercore (Nov 21, 2019)

Thank you both for the instant support @basco and @phill ! I think I solved it, so I don´t need to get another kit just yet. But if I encounter some issue reaching clockspeeds again I just might contact you.



phill said:


> It might be a case of just taking the sticks of RAM out and putting them back into the system; sometimes, for whatever reason, that does help bring them back to life.



This was a good suggestion. I took them out and found a bit of thermal paste on the pins of one DIMM and its slot. It must have gotten there from the previous owner, as I had never taken the sticks out to check...
I treated all contacts with Teslanol T6:





It´s a contact cleaner for HiFi equipment; it works very well for cleaning all kinds of unwanted stuff off metal surfaces.






It is back to all 48 again. I must have bumped into the sticks a lot while taking the CPUs out previously, and that must have caused the dirty contact pad to lose connection.


EDIT: And this time it stays for good!


----------



## basco (Nov 21, 2019)

this was a prob on all x58 mainboards - tested a lot of them.

KOC WebSite: Product Detail

all their products work really well
i sprayed Kontakt 60 in the sockets and all over the motherboard. take care after cleaning with this - static buildup is higher than normal - i think that's what Kontakt 61 is for


----------



## Dinnercore (Nov 23, 2019)

I´ve spent some time fiddling with the OC on CPU and RAM. Once I hit unstable settings, progress really slowed down, as there is so much to tweak and the boot time becomes a bit exhausting.

I was surprised to find the CPUs nearly maxing out the 2080 Super:


 

 

 



It hits 90%+ GPU load at 1440p and is more than capable of maxing out my 144Hz refresh rate. Gameplay in PUBG and CS:GO was fluid, no hard stutters or frame drops. Temperatures are all in check, the CPUs staying well below 50°C while the GPU maxed out at 56°C.
The 2080 Super maintained a boost clock above 2010MHz at all times, sometimes reaching 2055MHz. No OC from me yet.

The first picture shows the water-temp after a 1 hour session. Very nice, my room had 19°C ambient.

And this is where I maxed out the CPUs for the moment:




A perfect 2100 score in CB15. Vcore had to go up to 1.4V in order to keep the 4.61GHz stable. I think going any higher would not gain much. I had to loosen the memory timings a bit, but now I´m error-free in memtest!
Going to tighten them again next to find my limits.
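As a side note on that 1.4V: dynamic power scales roughly with f·V², so the jump from stock to 4.61GHz at 1.4V is a sizable one for the loop to absorb. A back-of-envelope sketch, assuming an illustrative stock operating point of 3.46GHz at ~1.20V and the rated 130W per chip:

```python
# Back-of-envelope dynamic-power scaling: P ~ f * V^2. The stock
# operating point (3.46 GHz at 1.20 V) and the 130 W figure are
# assumptions for illustration, not measurements.

def scaled_power(p_stock_w, f_stock_ghz, v_stock, f_oc_ghz, v_oc):
    """Estimate overclocked power from stock power via P ~ f * V^2."""
    return p_stock_w * (f_oc_ghz / f_stock_ghz) * (v_oc / v_stock) ** 2

# 4.61 GHz at 1.40 V lands around 236 W per CPU under full load:
print(round(scaled_power(130, 3.46, 1.20, 4.61, 1.40)))
```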


----------



## phill (Nov 26, 2019)

@Dinnercore - Awesome work my man!!  This board can certainly deliver when you put the time into it..  The boot process is a nightmare when you want to do something quickly; it's certainly a beast of a motherboard with a million different settings  

For giggles, plug in a watt meter if you dare


----------



## Dinnercore (Nov 26, 2019)

I ran FireStrike to test the overall performance with the stock GPU clocks and reached 22K points:





Not bad, this actually beats both of my previous systems' scores. My TR 1950X + Vega 64 on the LC BIOS reached 20.2K while my 1800X + 1080 Ti build got to 20.6K:









I scored 22 266 in Fire Strike - Intel Xeon Processor X5690, NVIDIA GeForce RTX 2080 SUPER x 1, 49144 MB, 64-bit Windows 10 (www.3dmark.com)











I scored 20 257 in Fire Strike - AMD Ryzen Threadripper 1950X, AMD Radeon RX Vega 64 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)











I scored 20 649 in Fire Strike - AMD Ryzen 7 1800X, NVIDIA GeForce GTX 1080 Ti x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)




Without an OC on the card that is a strong result in my eyes: top 5% of results with a nearly 10-year-old board and CPUs. The SR-2 is a marvel.


@phill : I plugged it into my watt-meter but got some wonky results. The same watt-meter also had strange measurements on my Threadripper build showing up to 1400W for it (that would be 600W CPU and 600W on the Vega...).

This is what I saw on the Night Fury:



4W when powered off.




660W on idle Win10 desktop.




1250W on the CPU-Z CPU stress test.




1280W during FireStrike combined test with a 1350W peak.

As much as I wish those numbers were true, I doubt them. At idle that would be ~200W per CPU, and there is no way I´m cooling a CPU that draws 200W down to near-ambient temps. The 1250W figure for a pure CPU stress test is even more ridiculous: that would put it at 500W per CPU, yeah, as if that could be cooled to 55°C with 23°C water temps...
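One way to frame the doubt is to add up plausible DC-side loads and divide by PSU efficiency to see what the wall reading should be. Every figure below is an assumption for illustration, not a measurement:

```python
# Sanity check on the watt-meter: sum plausible DC-side component loads
# and divide by PSU efficiency to estimate AC wall draw. All component
# figures here are illustrative assumptions, not measurements.

def wall_draw(dc_loads_w, psu_efficiency):
    """Expected AC wall draw in watts for a set of DC loads."""
    return sum(dc_loads_w) / psu_efficiency

loads_w = [
    230, 230,  # two heavily overclocked X5690s
    60,        # board, fans, pump
    50,        # RAM, drives, GPU idling during a CPU-only test
]
# 90% efficiency is conservative for an 80 Plus Titanium unit:
print(round(wall_draw(loads_w, 0.90)))  # ~633 W, half the 1250 W reading
```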

No idea why my watt-meter hates my PCs; I highly suspect the SuperNova T2s from EVGA. Something about the input filtering must upset my watt-meter.

EDIT: And before you ask: yes, I did try the wall outlet directly too, same results.


----------



## phill (Nov 27, 2019)

Dinnercore said:


> I ran FireStrike to test the overall performance with the stock GPU clocks and reached 22K points:
> 
> 
> 
> ...



I'm sad to say that it might be rather accurate as well lol    I did say it liked to draw a bit of power from the wall.... 

I know mine draws 200w idle at stock speeds and over 400w when crunching for TPUs WCG team....  They certainly can and do draw power like it's blood or petrol pumping into a V8 !!  

Check this out....

Or this one....

I'll see if I can grab some stats for you when I ever get chance to play around with my pair of X5675's....


----------



## Dinnercore (Nov 27, 2019)

phill said:


> I'm sad to say that it might be rather accurate as well lol    I did say it liked to draw a bit of power from the wall....
> 
> I know mine draws 200w idle at stock speeds and over 400w when crunching for TPUs WCG team....  They certainly can and do draw power like it's blood or petrol pumping into a V8 !!
> 
> ...



I know they can draw a lot, but I also know 100% that my numbers are too high to be true. The first link has 4-way SLI running a power virus. In the second link I see that, overclocked, the power from the wall should be roughly half of what I measure.

I would love to have finally built a system that can put a load on my 1600W PSU, but I doubt it. Again, the '1300W' I see would have to leave the system almost entirely as heat, and there is no way on earth my ambient cooling solution is soaking that up fast enough. You can get there with LN2, but not on ambient water.
I think Buildzoid likes to mention that point when he talks about VRMs and how much they could output in theory: anything above ~300W on a CPU die becomes uncoolable with ambient cooling solutions.
With 1250W power draw in a CPU bench I would be cooling double what is possible. Remember that infamous Intel 5GHz demo with the 1kW chiller unit? I would need something similar.
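The coolant side of that argument can be sanity-checked with the heat balance Q = ṁ·c·ΔT. A small sketch, where the ~150L/h flow rate (a typical single-D5 figure) is an assumption:

```python
# Loop heat balance: at steady state the radiator sheds what the blocks
# pick up, and the coolant warms by dT = Q / (m_dot * c_p) per pass
# through the blocks. The 150 L/h flow rate is an illustrative assumption.

C_P_WATER = 4186  # specific heat of water, J/(kg*K)

def coolant_delta_t(power_w: float, flow_l_per_h: float) -> float:
    """Temperature rise of the coolant per pass, in kelvin."""
    m_dot_kg_s = flow_l_per_h / 3600  # 1 L of water ~ 1 kg
    return power_w / (m_dot_kg_s * C_P_WATER)

# If the CPUs really dumped ~1000 W into the loop, the water would rise
# almost 6 K on every pass through the blocks:
print(round(coolant_delta_t(1000, 150), 1))
```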


----------



## phill (Nov 27, 2019)

Let me find some results from my SR-2 overclocking, I'll see if I can post some up for you


----------

