# Turning a Dell PowerEdge T630 into a rendering farm!



## blobster21 (Aug 13, 2020)

Hello,

This summer has been pretty calm, until my brother came to me with a special request: he heard I had gotten hold of a retired (yet fully functional) Dell PowerEdge T630 at work, and since I couldn't do what I wanted with it, we decided to transform it into a rendering server for his company.

The goal is to invest in a powerful production tool, so they can become *completely autonomous and self-sufficient* in rendering their own Cinema 4D / Octane creations:










> *« Bonjour Lab is an interactive visual creation studio that aims to design and produce new storytelling approaches. Since 2013, we blend cutting-edge technology and visual poetry to deliver tailor-made, limitless, creative experiences.*
>
> *Experts in new visual production techniques, at the forefront of technological innovation and the design of immersive spaces, we conceive and craft innovative and meaningful experiences and places. Our expertise ranges from real-time data-driven interactive creations to the design of immersive spaces and large-scale media art. We work for company headquarters, showrooms, exhibition fairs, malls, flagships, brand events, and cultural performances and institutions.*
>
> *We advise on technologies and visual tendencies that fit communication objectives. We create new visual designs daily to imagine new narratives that emphasize the brand's message. We are always searching for new technologies to combine and integrate into new kinds of experiences. We conceive and create in an arts-and-crafts manner to best fit our clients' needs. »*

Please take a look at their showcase website; you will see some truly unique and impressive visual creations that speak louder than a thousand words: https://www.bonjour-lab.com/

Here are the specs :

- 2 x Intel Xeon E5-2630 v3 (a total of 16 cores / 32 threads @ 2.4 GHz, 40 MB L3 cache)
- 2 x 32 GB DDR4-2400 registered
- 2 x Dell Platinum PSUs, 1100 W each
- 2 x MSI GeForce RTX 2070 Super OC GP (soon to become 4 of them)
- 2 x Kingston 128 GB SSDs set in RAID 1 for the operating system
- 2 x Crucial BX500 2 TB each to store the assets and the rendered scenes

Additional parts required to power this beast and to cool it decently:
- Dell GPU enablement kit (4 x 8-pin to 8-pin + 6-pin power adapter cables)
- Dell GPU fan kit (4 additional fans)



Here’s the server :



 





There’s a lot of room in this case: you can shove up to 1,536 GB of RAM across all 24 RAM slots, that’s 768 GB per CPU.

Since we did *not* win the lottery recently, we will be content with those 2 sticks of 32 GB registered DDR4-2400.

Dell suggests populating the white slots first, so that's what we did with our 2 sticks:







This motherboard can handle 4 x PCIe 3.0 graphics cards, so we will start with 2 MSI GeForce RTX 2070 Super Ventus OC GP:







Populating 2 PCIe slots with our first 2 GPUs comes at a price: we had to drop the original 2 x 717 W PSUs in favor of 2 x 1100 W Platinum PSUs, since that's the recommended configuration according to the Dell user's manual.

They will be mandatory anyway when the server later grows to 4 GPUs:



 





The server came equipped with 2 redundant, hot-swappable system cooling fans. They are extremely powerful and consequently very noisy, so I did my homework and found a way to disable dynamic speed shifting and impose a set speed for everyday use (before going deaf!). This involves some light scripting and the use of the embedded iDRAC controller (more on that later if you are interested).



 





Each CPU is cooled with what looks like a fairly generic aluminium block with fins:








Luckily, the cooling shroud has aerodynamically placed openings that direct the airflow across the entire system. The air passes over all the critical parts, and the negative pressure pulls it across the entire surface area of the heat sinks, improving cooling.





[--- end of 1st part, thanks for watching this thread !! -----]


----------



## blobster21 (Aug 13, 2020)

space reserved for later use. thx


----------



## phill (Aug 17, 2020)

So nice to see you posting @blobster21 !!   

It looks like it will be a beast of a system...  Can't wait to see it unfold   

DDR4 registered RAM is a pain to find and I know the issues with buying more of the ram..  It ain't cheap!!  

Can't wait to see more of the build!!  Can you team it up with some R710s at all??


----------



## blobster21 (Aug 25, 2020)

Hi @phill !

I have been putting the 2 x R710s to good use during the home confinement up until now (even though I could not use them as I first intended, for the TechPowerUp! WCG team).

Both are lovely, and remarkably quiet once you send the appropriate raw hex values to the Intelligent Platform Management Interface (aka IPMI).
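For anyone curious, here is roughly what that scripting looks like. This is only a sketch: the `0x30 0x30` opcodes are the community-documented (not officially published) Dell PowerEdge raw commands, and the iDRAC address and credentials are placeholders. The script deliberately only *prints* the `ipmitool` command lines so you can review them before sending anything to a real BMC.

```shell
#!/bin/sh
# Sketch of Dell fan-speed override via IPMI raw commands (dry run).
# Opcodes are the widely shared but undocumented Dell PowerEdge ones;
# verify them against your own server generation before relying on them.
IDRAC_HOST=${IDRAC_HOST:-192.168.1.50}   # placeholder address
IPMI="ipmitool -I lanplus -H $IDRAC_HOST -U root -P calvin"

# Step 1: take manual control (0x00 disables dynamic fan speed shifting)
manual_cmd="$IPMI raw 0x30 0x30 0x01 0x00"

# Step 2: fix all fans (0xff) at a duty cycle given as a percentage (0-100)
speed_cmd() {
    printf '%s raw 0x30 0x30 0x02 0xff 0x%02x\n' "$IPMI" "$1"
}

# Dry run: print the commands instead of executing them
echo "$manual_cmd"
speed_cmd 20    # ~20% duty cycle, quiet enough for everyday use
```

To hand control back to the BMC, the counterpart is `raw 0x30 0x30 0x01 0x01`, which re-enables dynamic fan control.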

I'll catch up later this year, when my room temperature drops below 28°C, and start crunching again.

For now, both servers are living a peaceful life in a 6U open chassis near me:






Anyway, back to this rendering server project log, since I finally received the most important part of the server: the GPU enablement kit (a 4-way power splitter daughter board, and 4 x 8-pin to 8-pin + 6-pin GPU power cables).

I took a picture of the motherboard for quick reference, just to make sure I would be able to put everything back into place after this hardware upgrade:






Then I proceeded to unscrew all the mounting screws, remove all the centering studs to set the motherboard completely free, and finally unplug all the motherboard cables, in order to reach the power circuitry hidden under the motherboard tray:






All I had to do was plug the daughter board into the place provided for this purpose:






Then attach all 4 power cables to feed 4 hungry GPUs:






Putting everything back into place was rather easy. Then I put the cooling shroud over both heatsinks, inserted both hot-swappable chassis fans, and plugged 2 GPUs into the server: the first one in the first PCIe 3.0 slot, and the second card in the third PCIe 3.0 slot.






But there was NO WAY I could properly close the side panel anymore: the power cables of the newly installed GPU in the upper PCIe slot bumped against the locking mechanism! Fortunately, moving the GPU to the second PCIe 3.0 slot did the trick, but there's a downside: the GPU almost touches the RAID controller, and those PERC H730 cards can run pretty hot, adding even more heat to an already extremely hot GPU. But it's not like I had much choice, so I settled for this setup:






I will have to give it some more thought, otherwise *I will never be able to add 2 more GPUs in the future* (I'm already looking for some angled 8-pin + 6-pin adapters I could use in this server; if you guys know some brands and shops in the EU where to buy them, it would be much appreciated).

On a side note: I thought I could use those 2 anti-sag support brackets that came with the server, but unfortunately both RTX 2070 Supers are too bulky and the brackets won't fit anymore.






They were most likely designed to support Quadro / NVS cards, which are thinner than the ones I put in the server. Well, never mind that.

That's it for this second update. I will post more soon, as there is one last piece of hardware to put in the server to cool it adequately: a massive 4-fan cooling rack I ordered on eBay a couple of weeks ago. Teaser: those 4 suckers can spin anywhere between 1,200 and *12,000* RPM.

Thanks for watching this thread !


----------



## biffzinker (Aug 25, 2020)

It didn’t click at first where I’ve seen the two Dell servers before. They’re from you @phill?


----------



## phill (Aug 26, 2020)

blobster21 said:


> Hi @phill !
> 
> I have been putting the 2 x R710s to good use during the home confinement up until now (even though I could not use them as I first intended, for the TechPowerUp! WCG team).
> 
> ...


So very glad to see and hear from you my good friend!!   

Glad to hear you're making the most of 2 of them, do you still have the others as well?   I need to look into making the few servers I have left here quieter, I'm sure there's a way and I'll take a look into the Intelligent Platform Management Interface (aka IPMI) you mentioned if I can Google it, probably not at 1:30am tho!

But anyways, enough de-railing    That server is going to rock for rendering!!    Will we be able to see what the differences were between stock and with all the monster cards in place??  
Those cooling fans must be double Deltas for that speed rating!!  Jesus...  What size fans are they??  80mm??  

Looking forward to the next part update!!  



biffzinker said:


> It didn’t click at first where I’ve seen the two Dell servers before. They’re from you @phill?


They were indeed


----------



## blobster21 (Aug 29, 2020)

Yes indeed @phill, those are 80mm fans, all 6 of them!

So, I received the last piece of hardware today: the Dell optional fan kit (part # 56F1P, required for multi-GPU cooling, or when you fill all 18 drive bays).











Those are hot-swappable Delta brushless DC fans, rated 12 V / 3 A.






Before installing this optional ventilation system, there was a lot of room in front of the cooling shroud intake:






Putting the fan rack into place requires no effort, as it slides nicely along the guiding rails at the top and bottom of the case, landing exactly on the 4 Dell proprietary PWM connectors.






Here's the final result, once everything is plugged in. At this point the hardware assembly is over.






I didn't go too far into the BIOS:
- Disabled the onboard VGA chipset after POST is done
- Disabled the Dell Lifecycle Controller routine during boot (it takes forever to complete)
- Enabled the integrated Dell Remote Access Card (iDRAC8)
- Activated the IPMI-over-LAN feature
- Dropped the dedicated iDRAC NIC in favor of the second LAN-on-motherboard port (LOM2)
- Set a static IP and administrator credentials, to later access the iDRAC over the network.
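With IPMI-over-LAN active and a static IP set, the iDRAC can be sanity-checked from any machine on the network. Here is a minimal sketch; the host address and credentials are placeholder assumptions, and by default the script only prints the `ipmitool` command lines (set `DRY_RUN=0` to actually run them against a reachable BMC):

```shell
#!/bin/sh
# Remote sanity checks against the iDRAC once IPMI-over-LAN is enabled.
# Host and credentials below are placeholders for illustration.
IDRAC_HOST=${IDRAC_HOST:-192.168.1.50}
DRY_RUN=${DRY_RUN:-1}

run_ipmi() {
    cmd="ipmitool -I lanplus -H $IDRAC_HOST -U admin -P changeme $*"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"    # dry run: show the command instead of executing it
    else
        $cmd
    fi
}

run_ipmi chassis status   # power state and PSU fault flags
run_ipmi sdr type Fan     # current fan RPM readings
run_ipmi sel list         # hardware event log
```

`chassis status`, `sdr`, and `sel` are standard `ipmitool` subcommands, so the same checks work on any IPMI 2.0 BMC, not just iDRAC.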

The server roared with all 6 fans at the default (immutable) 3,600 RPM speed, and after a painful 2-minute boot, the Windows desktop finally loaded.

I gathered various information through CPU-Z and GPU-Z, to make sure everything was just as expected:





Then I benchmarked the server with 3DMark Basic Edition:
















I think we're good to go!

As usual, thanks for reading this post, see you soon for the last part of this project log.


----------



## phill (Sep 1, 2020)

Well mate, hats off to you sir and I can't wait to see more!!   

I'll drop you a PM about the IPMI setup, I have a feeling I could really do with that being set up!!  

Love the pics of the projects, I feel it makes it !!


----------

