Raevenlord
News Editor
Developing supercomputers isn't for the faint of heart, much less for those looking for fast development and deployment time-frames. Even as the world's supercomputers get increasingly faster and more exorbitantly expensive to develop and deploy, players who want to stay ahead have to think ahead as well. To this end, the US Department of Energy has awarded a total of $258M in research contracts to six of the US's foremost tech companies - AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA - to accelerate the development of exascale supercomputer technologies. These companies will work over a three-year contract period and must shoulder at least 40% of the project cost, helping to develop the technologies needed to build an exascale computer by 2021. It's no surprise the companies accepted the grant and jumped at the opportunity: a 60% saving on research and development they'd otherwise have to fund themselves is nothing to scoff at.
Supercomputers born from the project are expected to reach the exaFLOPS scale of computing performance - around 50 times more processing power than the generation of supercomputers being installed now. Since traditional supercomputing knowledge and materials are known to falter at the exaFLOPS performance target, the PathForward program - which aims to deliver such systems in a timely fashion and secure US leadership in supercomputing - needs to spur research and development, which is exactly what the $258M grant sets out to do.
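As a rough sanity check on that "50 times" figure (a sketch assuming a ~20 petaFLOPS reference system, which is roughly the class of machines being installed now - the exact baseline is my assumption, not stated in the announcement):

```python
# Rough scale comparison: exaFLOPS target vs. a current petascale system.
exa_flops = 1e18           # 1 exaFLOPS = 10^18 floating-point ops per second
current_petascale = 2e16   # ~20 petaFLOPS (assumed reference point)

print(exa_flops / current_petascale)  # → 50.0
```

The ratio shifts with the baseline chosen, of course: against the ~93 PFLOPS leader of the current Top 500 list it would be closer to 10x.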
The DOE's Exascale Supercomputer Technology program looks to spur development in three areas: hardware, software, and application development. To this end, the involved companies are expected to play to their strengths: Cray and IBM will work on system-level challenges; HPE will further develop its Memory-Driven Computing architecture (centered around byte-addressable non-volatile memory and new memory fabrics); and Intel, AMD, and NVIDIA will all work on processing technology for the project (both traditional CPUs and GPU acceleration), along with I/O technology in the case of the former two.
The research - and the actual development and deployment of an exascale computer - will take years to accomplish, but it's in the best interest of all the companies involved that this happens sooner rather than later. The US, for one, would very much like to recoup its lost standing as home to the world's most powerful supercomputers - China has surpassed the stars and stripes in that regard, with its Sunway TaihuLight and Tianhe-2 supercomputers taking the top two spots in the latest Top 500 list (Titan, the US's fastest, now sits further down). China also has plans of its own to build an exascale computer by 2020.
Exascale designs are expected to carry on with the latest design paradigms, making heavy use of GPUs and wide processors in general; however, software will also be a huge part of the development effort, in ensuring there are performant ways of scaling workloads across what will necessarily be extremely wide designs. Storage, memory, and interconnect technologies will also be increasingly important in these kinds of supercomputers, since a sufficiently wide design needs to keep all computational resources fed with relevant data to process and to share. It's going to be a wild ride until then. Three years may look like a lot, but really, just put into perspective the increasing performance levels of our computer systems as of late. We've broken through the petascale barrier already; now it's time for the exa effort.
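Why software scalability looms so large here can be illustrated with Amdahl's law - a quick sketch of my own for perspective, not anything from the PathForward program itself:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: overall speedup is capped by a workload's serial fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even across a million processing elements, a workload that is 99% parallel
# tops out near 100x speedup - the serial 1% dominates everything.
print(amdahl_speedup(0.99, 1_000_000))
```

In other words, throwing an extremely wide machine at a problem only pays off if the software keeps the serial and communication-bound portions vanishingly small - hence the program's equal emphasis on software and application development.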
View at TechPowerUp Main Site