Friday, June 16th 2017
Exascale Supercomputer Technology Buoyed by $258M Grant by US Dept. of Energy
Developing supercomputers isn't for the faint of heart, much less for those looking for fast development and deployment time-frames. As the world's supercomputers get ever faster and ever more expensive to develop and deploy, players who want to stay ahead have to think ahead as well. To this end, the US Department of Energy has awarded a total of $258M in research contracts to six of the US's foremost tech companies - AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA - to accelerate the development of exascale supercomputer technologies. These companies will work over a three-year contract period, and will have to shoulder at least 40% of the project cost, to help develop the technologies needed to build an exascale computer by 2021. It isn't strange that the companies accepted the grant and jumped at the opportunity: a 60% saving on research and development they'd otherwise have to fund themselves is nothing to scoff at.
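As a quick back-of-the-envelope check of that funding split, the sketch below works out the implied minimum total project cost; the ~$430M figure is our inference from the 60/40 split described above, not a number quoted in the article.

```python
# Back-of-the-envelope check of the PathForward cost split described above.
# Assumption: the $258M DOE grant covers at most 60% of the total project
# cost, with the six vendors contributing at least the remaining 40%.

doe_grant = 258e6        # DOE funding, in dollars
doe_share = 0.60         # DOE's maximum share of the total cost

total_cost = doe_grant / doe_share       # implied minimum total project cost
vendor_share = total_cost - doe_grant    # minimum combined vendor contribution

print(f"Implied total project cost: ${total_cost / 1e6:.0f}M")     # ~$430M
print(f"Minimum vendor contribution: ${vendor_share / 1e6:.0f}M")  # ~$172M
```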
Supercomputers born from the project are expected to reach the exaFLOPS scale of computing performance - around 50 times the processing power of the generation of supercomputers being installed now. Since traditional supercomputing knowledge and materials are known to falter at the exaFLOPS performance level, the PathForward program - which aims to deliver such systems in a timely fashion and ensure US leadership in the field of supercomputing - needs to spur research and development, which is exactly what the $258M grant sets out to do.
The DOE's exascale program looks to spur development in three areas: hardware, software, and application development. To this end, the companies involved are expected to play to their strengths: Cray and IBM will work on system-level challenges; HPE is to further develop its Memory-Driven Computing architecture (centered around byte-addressable non-volatile memory and new memory fabrics); and Intel, AMD, and NVIDIA are all working on processing technology for the project (both traditional CPUs and GPU acceleration), along with I/O technology in the case of the former two.
The research - and the actual development and deployment of an exascale computer - will take years to accomplish, but it's in the best interest of all the companies involved that it happens sooner rather than later. The US, for one, would very much like to recoup its lost standing as home to the world's most powerful supercomputers - China has surpassed the stars and stripes in that regard, with the Sunway TaihuLight and Tianhe-2 taking the top two spots in the latest Top500 list. China also has its own plans to build an exascale computer by 2020.
It's expected that exascale designs will carry on the latest design paradigms, making heavy use of GPUs and wide processors in general; however, software will also be a huge part of the development effort, in making sure there are performant ways of scaling workloads across what will necessarily be extremely wide designs. Memory, storage, and interconnect technologies will also be increasingly important in this class of supercomputer, since a design this wide must keep all computational resources fed with relevant data to process and to share. It's going to be a wild ride until then. Three years may look like a long time, but put it into perspective against the rate at which computing performance has been increasing lately. We've broken through the petascale already; now it's time for the exa effort.
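To put those performance figures in rough perspective, here's a small sketch; the ~20 PFLOPS baseline (the level at which the ~50x claim holds) and the serial fractions are illustrative assumptions, not figures from the article. The Amdahl's-law part shows why the software side of the effort matters so much: on a machine a million units wide, even a tiny serial fraction caps the achievable speedup.

```python
# Rough scale comparison for the exascale target, plus a toy Amdahl's-law
# illustration of why software scaling dominates on extremely wide machines.
# The baseline figures and serial fractions below are illustrative assumptions.

EXAFLOPS = 1e18                 # the exascale target, in FLOP/s
TAIHULIGHT_RMAX = 93e15         # Sunway TaihuLight's Linpack Rmax (~93 PFLOPS)
TYPICAL_NEW_SYSTEM = 20e15      # assumed ~20 PFLOPS system, implied by the ~50x claim

print(f"Exascale vs. TaihuLight:       {EXAFLOPS / TAIHULIGHT_RMAX:.1f}x")
print(f"Exascale vs. a ~20 PFLOPS sys: {EXAFLOPS / TYPICAL_NEW_SYSTEM:.0f}x")

def amdahl_speedup(serial_fraction: float, n_units: int) -> float:
    """Upper bound on speedup across n_units parallel units when a
    serial_fraction of the workload cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Even with a million parallel units, a 0.1% serial fraction caps the
# speedup near 1,000x - hence the program's emphasis on software.
for serial in (0.01, 0.001, 0.0001):
    print(f"serial fraction {serial:.2%}: max speedup on 1M units = "
          f"{amdahl_speedup(serial, 1_000_000):,.0f}x")
```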
Sources:
ExascaleProject.org, Part 2, AnandTech
9 Comments on Exascale Supercomputer Technology Buoyed by $258M Grant by US Dept. of Energy
Frankly, I'm sad that money isn't going to fusion research.
Supercomputers are always fun.
This one supercomputer is going to get almost as much funding as fusion research gets in a year.
Fusion isn't going to happen without a Manhattan Project level of commitment to it. All of the theories need to be tested in parallel, with dialogue between teams detailing what does and doesn't work. With enough perseverance and resources, it will be viable.
The ironic thing is that supercomputers can help solve problems related to fusion power, but should this exascale supercomputer get built, very little of its time will be dedicated to fusion.
Why? Coal, oil, and natural gas interests run deep in Washington/DoE.
Industry Tap isn't wrong on point #2: the pro "green" people need to be motivated to lobby for fusion research. Washington, as a whole, needs to make it a #1 priority. The benefits are numerous, just like the Manhattan Project (advancements in materials, magnets, lasers, lots of jobs, and so on).
By the way, wind and solar are not economically sound. The USA recently broke a record by producing just 10% of grid power from those sources (8% wind, 2% solar), and the only reason that happened is because the government is paying out millions in subsidies to make them economically viable. That money should go to fusion research because, unlike wind/solar, fusion energy pays astronomical dividends. Once fusion works, energy will become almost free.
Even if fusion worked, it would not be remotely free. That's simply ridiculous. They said the same thing about fission, and how did that turn out? The investment in the hardware would be immense.
Back to the topic. I understand why there is a lot of interest in pushing the thresholds of computing. AI. But why the hell is the DOE funding this?
Here's a paper by some Europeans. I haven't read the entire thing, but the abstract is interesting: it also mentions how fusion requires vast amounts of materials that are much needed elsewhere.
Basically, this dude has it pretty much nailed down. I mean, it would be amazing if it existed, but it would not be free in any case. And what is nearly free, so cheap that everyone is into it, and popular even in pop culture? Solar. Fusion might be a thing some day (a stellarator engine sounds way too cool not to make), but for electricity generation it's probably not worth it.
I don't know one physicist that says it is impossible. I actually know one that says we're going about it entirely wrong and we need to figure out the reaction chains first to get clean fusion. The fuel costs next to nothing and since there's not much in the way of moving parts except the steam-electric system, maintenance is also low.
Fission suffers the same problem as fusion: there are a lot of fantastic ideas that solve most of fission power generation's problems (pebble-bed reactors, thorium reactors, breeder reactors, and so on), but they're not being developed for commercial use for the same reason fusion isn't. The designs being used today aren't very different from those built 50-60 years ago.
As for why the DOE is funding this: one of its primary goals is maintaining the USA's nuclear weapons. Pretty much all the top supercomputers in the USA had that goal when constructed. Funny how that works, isn't it?
"This dude" (Maury Markowitz) works for AP Solar and InPhase Power, companies that design and sell inverters for solar panel installations. If fusion takes off, he's out of work. Not exactly a good source.
Let me remind you of Castle Bravo: 400 lbs of lithium deuteride produced 63 petajoules of energy in a fraction of a second.
We don't need to rely so heavily on the sun when we can put a star in a bottle here on Earth.
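That 63 PJ figure checks out against Castle Bravo's commonly cited ~15 megaton yield; here's a quick conversion sketch (the yield value comes from public accounts of the test, not from the comment above):

```python
# Quick unit-conversion check of the Castle Bravo energy figure above.
# Castle Bravo's yield is commonly cited as ~15 megatons of TNT.

MEGATON_TNT_IN_J = 4.184e15     # one megaton of TNT equivalent, in joules
yield_megatons = 15             # commonly cited Castle Bravo yield

energy_petajoules = yield_megatons * MEGATON_TNT_IN_J / 1e15
print(f"{yield_megatons} Mt of TNT = {energy_petajoules:.0f} PJ")  # ~63 PJ
```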