Saturday, December 15th 2018
AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total
During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster, which makes use of Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. Meanwhile, system memory per node will range from 96 GB up to 1.5 TB, with the entire system also receiving a 4.9 PB Lustre parallel file system from DDN. Furthermore, a separate partition of phase one will be dedicated to AI research and will feature 320 NVIDIA V100 GPUs connected via NVLink in 4-GPU nodes; its peak performance is expected to reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.
Where things get interesting is in phase two, which is set for completion during the spring of 2020. Atos will build CSC a liquid-cooled, HDR InfiniBand-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which, for the mathematicians out there, works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory, so each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel had this market cornered for nearly a decade.

When both phases are complete, the entire system will be capable of 11 petaflops of theoretical performance, more than five times what Finnish scientists currently have available. The system will be used by numerous agencies and universities for studies in fields such as astrophysics, drug development, nanoscience, and AI research. All that said, performance like this doesn't come cheap, with Finland investing €37 million ($41.8 million) to upgrade and update its high-performance computing infrastructure.
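The core-count and performance arithmetic above is easy to sanity-check; here is a minimal sketch using only the figures quoted in this article:

```python
# Sanity-check the figures quoted above (all numbers are from the article).
total_cores = 200_000
cores_per_cpu = 64  # top-end AMD EPYC "Rome" core count

sockets = total_cores // cores_per_cpu
print(sockets)  # 3125 processors, as stated

# Peak performance: phase one (CPU cluster + AI partition) plus phase two.
phase_one_pflops = 2.0 + 2.5
phase_two_pflops = 6.4
total_pflops = phase_one_pflops + phase_two_pflops
print(total_pflops)  # roughly the quoted 11 petaflops system total
```

Note the small gap between the summed partitions (10.9 PFLOPS) and the quoted 11 PFLOPS figure, presumably down to rounding in the source numbers.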
Source:
HPCwire
47 Comments on AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total
trog
It's hugely time consuming and why we are seeking quantum computing to become a reality, instead of the pet project it currently is.
These supercomputers tend to cost as much in power bills over a few years as they cost to build in the first place. It's why server chips have much lower clocks vs. regular desktops: it helps tremendously with the power bills.
The high consumption was actually part of my point. I thought that had more to do with data integrity, but will concede I could be wrong here.
Another aspect is probably the fact that bottlenecking occurs; storage and RAM are much more important in this space, so there's no point oversaturating anything. And on top of all that, there are limits to what can be fitted under the IHS before you straight-up burn a hole in your server, plus yield issues. High-clocking many-core chips are progressively harder to make; it's the whole reason EPYC and TR are so amazing, since their smaller chiplet dies cut that yield risk.