Tuesday, August 4th 2020
Penguin Computing Packs 7616 Intel Xeon Platinum Cores in one Server Rack
In the data centers of hyperscalers like Amazon, Google, and Facebook, there is a massive need for more computing power. Because data centers are space-constrained facilities, a system that packs as much computing power as possible into a smaller form factor is a clear win. Penguin Computing has tackled exactly this problem with its TundraAP platform, designed specifically as a high-density CPU system. It is built around the Intel Xeon Platinum 9200, Intel's highest-core-count processor, with 56 cores spread across two dies brought together in a single BGA package.
The Penguin Computing TundraAP system relies on Intel's S9200WK server system, and Penguin fits two of these 400 W TDP processors into each 1U server node, with a twist. The company implements a power disaggregation scheme: the power supplies are moved out of the server and onto a dedicated power rack, so the heat from the CPUs doesn't affect the PSUs. The design follows Open Compute Project standards, and Penguin says it improves efficiency by 15%. To cool the chips themselves, Penguin Computing uses direct-to-chip liquid cooling. And if you are wondering how many cores the company can fit in a rack, the answer is as many as 7,616 Xeon Platinum cores in just one rack. That is a remarkable density, enabled by the custom cooling and power delivery system the company built, which leaves only compute elements inside the servers.
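For a quick sanity check of that density figure, here is a minimal back-of-envelope sketch in Python; the two-sockets-per-compute-module layout is an assumption based on Intel's S9200WK design, not a Penguin-confirmed configuration:

    # Back-of-envelope check of the quoted rack density (assumptions noted inline).
    cores_per_cpu = 56        # Xeon Platinum 9200 top SKU
    cores_per_rack = 7616     # figure quoted in the article
    cpus_per_rack = cores_per_rack // cores_per_cpu       # 136 CPU packages
    cpus_per_module = 2       # assumed: two sockets per S9200WK compute module
    modules_per_rack = cpus_per_rack // cpus_per_module   # 68 compute modules
    print(cpus_per_rack, modules_per_rack)                # -> 136 68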
Source: AnandTech
17 Comments on Penguin Computing Packs 7616 Intel Xeon Platinum Cores in one Server Rack
Then I see this is a reference design from Intel because literally no one else wasted their money designing anything for 9200...
Looks like this....
www.intel.com/content/www/us/en/products/servers/server-chassis-systems/server-board-s9200wk-systems.html
Now that is a good question, each generation of water-cooled supercomputer gets better... with SGI it used to be 2:1 or 3:1 rack to heat exchange.
Cray currently appears to do 4 cabinets to 1 which isn't exactly a rack... Cray also makes a 9200 version of this, it just isn't their flagship ;)
It took a while to find this... HPE kinda sucks when it comes to product documentation lookup.
www.hpe.com/us/en/pdfViewer.html?docId=a50002389&parentPage=/us/en/products/compute/hpc/supercomputing/cray-exascale-supercomputer&resourceTitle=HPE+Cray+EX+Liquid-Cooled+Cabinet+for+Large-Scale+Systems+brochure
- It doesn't!
*famous last words*
it's 54,400 watts++
No, it's 15,000 roentgen.
These fuckers def require a specialized space... it's pretty hard to power multiple racks at >30 kW each. Unless these are detuned or have turbo disabled, a rack can easily exceed 60 kW, since 54.4 kW is just the CPUs, not the RAM, chipset, or network fabric...
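A rough estimate along the lines of that comment; the 15% non-CPU overhead is an illustrative assumption, not a figure from the article or the commenter:

    # Rough rack power estimate; overhead fraction is an illustrative assumption.
    cpu_tdp_w = 400                            # per Xeon Platinum 9200
    cpus_per_rack = 136                        # 7616 cores / 56 cores per CPU
    cpu_power_w = cpus_per_rack * cpu_tdp_w    # 54,400 W for the CPUs alone
    non_cpu_overhead = 0.15                    # assumed share for RAM, fabric, boards, etc.
    rack_power_w = cpu_power_w * (1 + non_cpu_overhead)
    print(cpu_power_w, round(rack_power_w))    # -> 54400 62560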
Intel: We did everything right!
Apple: Do you taste Metal?