The data centers of hyperscalers like Amazon, Google, and Facebook have a massive and growing need for computing power. Because data centers are space-limited facilities, there is real value in a system that packs as much compute as possible into a smaller form factor. Penguin Computing has targeted exactly this problem with the launch of its TundraAP platform, designed specifically as a high-density CPU system. It is built around the Intel Xeon Platinum 9200, Intel's highest-core-count processor, with 56 cores spread across two dies unified in a single BGA package.
The Penguin Computing TundraAP system is based on Intel's S9200WK server system, with two of these processors per 1U server, plus a twist. The company implements a power disaggregation scheme designed to manage the heat from these 400 W TDP monster processors: the PSUs are moved out of the server itself and into a dedicated rack shelf, so heat from the CPUs doesn't affect them. The design follows Open Compute Project standards, and Penguin Computing claims it improves efficiency by 15%. To cool the chips themselves, the company uses direct-to-chip liquid cooling. And if you are wondering how many cores fit in a rack, the answer is as many as 7,616 Xeon Platinum cores in a single rack. That is remarkable density, enabled by the custom cooling and power delivery design, which leaves only compute elements inside the servers.
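The quoted 7,616-core figure can be sanity-checked with some back-of-the-envelope arithmetic. The node and rack counts below are assumptions inferred from the numbers in the article (the piece itself only states 56 cores per CPU and two CPUs per server), not a configuration Penguin Computing has published:

```python
# Back-of-the-envelope check of the quoted rack density.
# Known from the article: 56 cores per Xeon Platinum 9200-series CPU,
# two CPUs per compute node.
# Assumed (not stated in the article): 68 nodes per rack, which is the
# node count implied by the 7,616-core total.
cores_per_cpu = 56
cpus_per_node = 2
nodes_per_rack = 68  # assumption inferred from the quoted total

cores_per_rack = cores_per_cpu * cpus_per_node * nodes_per_rack
print(cores_per_rack)  # 7616
```

Reaching 68 dense nodes in one rack is plausible only because the PSUs have been disaggregated out of the server chassis, which is exactly the design choice the article describes.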
View at TechPowerUp Main Site