Wednesday, March 20th 2019
Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer
This here is an interesting piece of tech news for sure, in that Intel has already scored a pretty massive design win for not one, but two upcoming products. Intel's "Future Xeon Scalable Processors" and the company's "Xe Compute Architecture" have been tapped by the U.S. Department of Energy for incorporation into the new AURORA supercomputer, one that will deliver exascale performance. AURORA is to be developed in a partnership between Intel and Cray, using the latter's Shasta systems and its "Slingshot" networking fabric. But these are not the only Intel elements in the supercomputer design: Intel's Optane DC Persistent Memory will also be employed (in an as-yet-unavailable version, no less), making this a full win across the board for Intel.

The AURORA supercomputer is to be delivered to Argonne National Laboratory by 2021, under a $500 million contract (with $146 million of that going to Cray). This is quite a big move for Intel, one that guarantees incredible PR for its CPUs and GPUs (even though these are upcoming parts whose performance figures aren't finalized by any means). The victory is particularly interesting in that AMD and, especially, NVIDIA have been behind virtually all recent GPU-accelerated compute and AI supercomputer wins, so for Intel to snag this design win so early will definitely bring a good amount of attention to its Xe graphics architecture among institutions. AURORA has been designed to chew through data analytics, HPC and AI workloads at an exaFLOP pace, and will rely on Intel's oneAPI for system integration.
Sources:
Intel AURORA Announcement, CNET
44 Comments
And as far as performance goes, this is one of the very rare markets where performance per mm² and per watt are the alpha and omega. If you capture that, you capture the market; see NVIDIA right now.
There have been zero demonstrations of Xe, zero discussion of its performance range, zero launch dates given, and zero reasons to back this move.
"Back at the start of 2018 Intel designed a prototype discrete GPU using its 14nm Gen 9 execution units, packing 18 low-power EUs across three sub-slices (roughly analogous to Nvidia’s SMs) to offer simple, parallel graphics processing in a tiny, 64mm2 package. It subsequently showed the research off at the ISSCC event in February. "
What Intel might build beyond their current iGPU is a whole different ball game.
It's as if people think the world revolves around PC enthusiasts...
A product doesn't have to exist to obtain funding; all it takes is a plausible feasibility study saying it can be done, plus a time frame with expected outcomes.
A blank contract for something that even AMD could achieve? (Not now, of course... but given time... hey, they have until 2021, right? :laugh: They would just need to find some "fat cash wallet carrier" to do it.) If this were about innovation I would understand, but for raw computational power... a bit less.
Funding for Xe? Surely Xe is not going to be a revolutionary GPU... after all, it's an Intel GPU.
The only interesting point (and a bad one) is that it will be an "all-Intel" supercomputer.
It's a totally different world. For anyone who really wants to find something new, Intel already has a second-generation AI chip (the VPU). It's not a big deal to improve it and put it into the next GPU. It will already be embedded in Xeon Cascade Lake (Intel DL Boost).
newsroom.intel.com/news/intel-unveils-intel-neural-compute-stick-2/
Intel is waaaay behind in mindshare on its various acquired IP: Altera, Movidius, Nervana. And no, they did not embed a Movidius VPU chip in Cascade Lake...
They simply added another AVX-512 extension, VNNI (sketched below), with bfloat16 slated for a later generation.
Don't try and combine press releases, please and thank you.
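To ground the VNNI point above: a minimal sketch of what DL Boost actually is at the ISA level. This is illustrative only, assuming a Cascade Lake-class CPU and a compiler with AVX-512 VNNI support; the helper function name is mine, not anything from Intel's materials.

```cpp
// AVX-512 VNNI in one intrinsic: VPDPBUSD fuses the int8 multiply-accumulate
// chain used by quantized inference kernels. It computes 64 u8 x s8 products
// and sums them in groups of four into 16 int32 accumulator lanes.
// Build (GCC/Clang): -mavx512f -mavx512vnni
#include <immintrin.h>
#include <cstdint>

// Accumulate the dot product of 64 uint8 activations and 64 int8 weights
// into the 16 int32 lanes of 'acc'. Hypothetical helper for illustration.
__m512i dot_accumulate(__m512i acc, const std::uint8_t* activations,
                       const std::int8_t* weights) {
    __m512i a = _mm512_loadu_si512(activations);  // 64 unsigned 8-bit values
    __m512i w = _mm512_loadu_si512(weights);      // 64 signed 8-bit values
    // A single instruction replacing the pre-VNNI three-instruction sequence
    // (VPMADDUBSW + VPMADDWD + VPADDD).
    return _mm512_dpbusd_epi32(acc, a, w);
}
```

Note there is no neural-network block hiding in there: it is a dot-product instruction, not an embedded VPU.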
Intel is not gaining traction on any of those projects outside of Facebook. They are all cool, but not fully baked. Intel just gave FB enough of a deal to help polish up both the Movidius and Nervana product lines.
Altera: flexible FPGAs, but they get utterly trounced by the T4. They were barely competitive with the P4, and NVIDIA launched the T4 before Intel could get theirs out the door.
There are other, more capable FPGAs on the market, and convincing people to go Intel comes down to discounts. They are combining it with IB (InfiniBand) for some interesting offload capabilities.
Now, for Xe: there have been no performance estimates for it from any side, not consumer, not server.
This is not a bid won because of Xe, but one already won and then delayed because of Intel's past failings (Aurora was originally contracted around Xeon Phi before Knights Hill was cancelled). What this really is, is a win for Intel's oneAPI: an attempt at a single programming model that handles the details of a workload and offloads it to the most capable accelerator for any given task (a rough sketch of the idea follows below).
This is not a win for Xe, but one in spite of it.
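For what it's worth, here is a rough sketch of the oneAPI pitch described above: single-source code where a runtime queue dispatches the kernel to whatever accelerator is present. This is written in standard SYCL 2020 style (the model oneAPI's DPC++ builds on) and is an assumption-laden illustration, not Aurora's actual software stack.

```cpp
// Single-source offload: the same binary runs the kernel on a GPU if one is
// available, otherwise on the CPU, without any code changes.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // default_selector_v lets the runtime pick the most capable device it finds.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        sycl::buffer ba{a}, bb{b}, bc{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor x{ba, h, sycl::read_only};
            sycl::accessor y{bb, h, sycl::read_only};
            sycl::accessor z{bc, h, sycl::write_only};
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { z[i] = x[i] + y[i]; });
        });
    } // buffers go out of scope here, syncing results back to the host vectors
    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```

If that dispatch layer delivers, the DOE gets its exaFLOPs whether the "most capable accelerator" ends up being Xe or something else, which is exactly the point above.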
I live in the realm of supercomputers and love all the competition to NVIDIA that has sprung up in the past few years.
The thing is, this is the DOE: Intel has to showcase efficiency, not absolute performance per node. Which is why this is a win for Cray's Slingshot and not OmniPath.
Instead, according to you, our government gave them my money to think about building a successful product. Sounds like theft to me, and a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or NVIDIA hardware and make a profit, and that could be what they have to do when they fail.
Using stuff already available on the market means you end up with an obsolete system by the time it reaches production.
The only way for your supercomputer (or your fighter aircraft) to be cutting-edge at launch is to order it through this kind of tender. That way, the thing you want is developed alongside the technology it will use.