Wednesday, March 20th 2019

Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

This is an interesting piece of tech news for sure, in that Intel has already scored a pretty massive design win for not one, but two upcoming products. Intel's "Future Xeon Scalable Processors" and the company's "Xe Compute Architecture" have been tapped by the U.S. Department of Energy for incorporation into the new AURORA Supercomputer - one that will deliver exascale performance. AURORA is to be developed in a partnership between Intel and Cray, using the latter's Shasta systems and its "Slingshot" networking fabric. But these are not the only Intel elements in the supercomputer design: Intel's Optane DC persistent memory will also be employed (in an as-yet-unavailable version, at that), making this a win across the board for Intel.
The AURORA supercomputer is to be delivered to the Argonne National Laboratory by 2021, under a $500 million contract (with $146 million of that going to Cray). This is quite a big move for Intel, one that guarantees an incredible PR boost for its CPUs and GPUs (even if these are upcoming parts whose performance figures aren't finalized by any means). This victory is particularly interesting in that AMD and especially NVIDIA have been behind virtually all of the recent GPU-accelerated compute and AI supercomputer wins, so for Intel to snag this design win so early will definitely bring a good amount of attention to its Xe graphics architecture among institutions. AURORA has been designed to chew through data analytics, HPC and AI workloads at an exaFLOP pace, and will incorporate Intel's oneAPI for system integration.
Sources: Intel AURORA Announcement, CNET

44 Comments on Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

#26
Vayra86
Mark Little: I don't think we should look at it from a performance per transistor perspective but rather the attempts from Intel to try and devalue the GPU market in order to hurt its competitors.
Devalue... how? I don't follow. They are pumping money into it, and they will want a return on investment. That will lead to competition, which generally leads to growth.

And as far as performance goes, this is one of the very rare markets where performance per mm² and per watt are the alpha and omega. If you capture that, you capture the market - see Nvidia right now.
Posted on Reply
#27
Vya Domus
It's not unheard of for such contracts to be commissioned before certain products hit the market officially, but we are talking about something of which there is exactly zero information out there. There isn't even a trace of any prototyped silicon out there somewhere being tested, as was the case with Larrabee, where Intel handed out some GPUs to select institutions before they were supposed to be officially released.
Posted on Reply
#28
Patriot
moproblems99: I fail to see how one could expect a demo of a product that is being contracted to be built before it has been contracted to be built?
No one expects to see a demo of the supercomputer; however, the supercomputer can't be built with mythical parts with zero proof of functionality.
There have been 0 demonstrations of Xe, 0 talk of a performance range, 0 launch dates given, and 0 reasons to back this move.
Posted on Reply
#29
biffzinker
Vya Domus: There isn't even a trace of any prototyped silicon out there somewhere being tested
Hmmm, pretty sure I posted in this thread that Intel did make a prototype at the beginning of last year, presented at ISSCC. Do you happen to have some insider knowledge of what Intel is up to?

"Back at the start of 2018 Intel designed a prototype discrete GPU using its 14nm Gen 9 execution units, packing 18 low-power EUs across three sub-slices (roughly analogous to Nvidia’s SMs) to offer simple, parallel graphics processing in a tiny, 64mm2 package. It subsequently showed the research off at the ISSCC event in February. "

What Intel might build off of after their iGPU is a whole different ball game.
Posted on Reply
#30
moproblems99
Patriot: There have been 0 demonstrations of Xe, 0 talk of a performance range, 0 launch dates given, and 0 reasons to back this move.
Um, there have been 0 public demonstrations. That says nothing about what has happened that we don't know about.

It's as if people think the world revolves around PC enthusiasts...
Posted on Reply
#31
Caring1
Steevo: Here we have tax dollars going to a product that doesn't exist, from a company who has never made it.
Isn't that the basis for research funding anywhere?
A product doesn't have to exist to obtain funding, just a plausible feasibility study saying it can be done and a time frame with outcomes expected.
Posted on Reply
#32
GreiverBlade
ah... Exascale? And to think a friend of mine thought it was some new tech and that Intel was "lightyears" beyond its competitors ... while it's "only" an "exaFLOPS-capable SC"

a blank contract on something that even AMD could achieve? (not now ofc ... but given time ... hey! they have till 2021 ... right? :laugh: now they just need to find some "fat cash wallet carrier" to do so) if it were about innovation I would understand ... but about raw computational power ... a bit less

funding for Xe? ... surely Xe is not a revolutionary GPU ... after all it's an Intel GPU

the only interesting point (and a bad one) is that it will be an "all Intel" SC
Posted on Reply
#33
SoNic67
Kids in their bedrooms, comparing gaming benchmarks and scores with supercomputing requirements.
It's a totally different world. For anyone who really wants to find something new: Intel already has a second-gen AI chip (VPU). It's not a big deal to improve it and put it in the next GPU. It will already be embedded in the Xeon Cascade Lake (Intel DL Boost).
newsroom.intel.com/news/intel-unveils-intel-neural-compute-stick-2/
Posted on Reply
#34
Patriot
SoNic67: Kids in their bedrooms, comparing gaming benchmarks and scores with supercomputing requirements.
It's a totally different world. For anyone who really wants to find something new: Intel already has a second-gen AI chip (VPU). It's not a big deal to improve it and put it in the next GPU. It will already be embedded in the Xeon Cascade Lake (Intel DL Boost).
newsroom.intel.com/news/intel-unveils-intel-neural-compute-stick-2/
*Intel's subsidiary Movidius has a 2nd-gen VPU, a vision processing unit... it is the easiest problem to solve... it's all inferencing, no models being made here... that's on distant heavy iron.

Intel is waaaay behind in mindshare on its various acquired IP... Altera, Movidius, Nervana... and no, they did not embed a Movidius VPU chip in Cascade Lake.
They simply added another AVX extension, AVX-512 VNNI, as well as bfloat16.
Don't try to combine press releases, please and thank you.
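For anyone curious what that extension actually does, VNNI (the instruction behind the "DL Boost" branding) boils down to a fused int8 dot-product-accumulate. A minimal, purely illustrative sketch follows - the values, build flags (-mavx512vnni -mavx512bw) and target CPU are assumptions, not anything from Intel's announcement:

```cpp
// Toy illustration of AVX-512 VNNI: vpdpbusd multiplies unsigned 8-bit
// "activations" by signed 8-bit "weights" and accumulates four products
// per 32-bit lane in a single instruction.
// Assumes a VNNI-capable CPU; build with e.g. -mavx512vnni -mavx512bw.
#include <immintrin.h>
#include <cstdio>

int main() {
    __m512i acc = _mm512_setzero_si512();   // 16 x int32 accumulators
    __m512i act = _mm512_set1_epi8(3);      // 64 x uint8 activations (made up)
    __m512i wgt = _mm512_set1_epi8(2);      // 64 x int8 weights (made up)

    // One instruction: per 32-bit lane, acc += sum of four u8*s8 products
    acc = _mm512_dpbusd_epi32(acc, act, wgt);

    alignas(64) int out[16];
    _mm512_store_si512(out, acc);
    std::printf("per-lane result: %d\n", out[0]);   // 4 * (3*2) = 24
    return 0;
}
```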

Intel is not gaining traction on any of those projects outside of Facebook... they are all cool... but not fully baked. They just gave FB enough of a deal to help them polish up both the Movidius and Nervana product lines.

Altera has a flexible FPGA, but it gets utterly trounced by the T4; it was barely competitive with the P4, and Nvidia launched the T4 before they could get it out the door.
There are other, more capable FPGAs on the market, and convincing people to go Intel comes down to discounts. They are combining it with IB (InfiniBand) for some interesting offload capabilities.

Now... as for Xe... there have been no performance estimates for it from any side, not consumer, not server...
This is not a bid won because of Xe... it was already won, and then delayed because of Intel's past failings...
Originally announced in April 2015, Aurora was planned to be delivered in 2018 and have a peak performance of 180 petaFLOPS. The system was expected to be the world's most powerful system at the time. The system was intended to be built by Cray based on Intel's 3rd generation Xeon Phi (Knights Hill microarchitecture). In November 2017, Intel announced that Aurora had been shifted to 2021 and would be scaled up to 1 exaFLOPS. The system will likely become the first supercomputer in the United States to break the exaFLOPS barrier. As part of the announcement, Knights Hill was canceled, to be replaced instead by a "new platform and new microarchitecture specifically designed for exascale".
What this is... is a win for Intel's oneAPI: an attempt to make a language that handles all the details of the workloads and offloads them to the most capable accelerator for any given task.
This is not a win for Xe, but one in spite of it.
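To make that "one API, many accelerators" idea concrete, this is roughly what the single-source offload model looks like in oneAPI's DPC++/SYCL: the same kernel source runs on whatever device the runtime picks. A minimal vector-add sketch, nothing to do with the actual Aurora software stack (the device selection, sizes and names are illustrative assumptions):

```cpp
// Generic SYCL/DPC++ vector-add sketch: one source, and the runtime chooses
// the accelerator (CPU, GPU, FPGA). Illustrative only.
#include <sycl/sycl.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q{sycl::default_selector_v};   // runtime picks the device
    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(N));
        sycl::buffer<float> B(b.data(), sycl::range<1>(N));
        sycl::buffer<float> C(c.data(), sycl::range<1>(N));
        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];          // runs on the selected device
            });
        });
    }   // buffer destructors copy results back to the host vector

    std::cout << "c[0] = " << c[0] << "\n";     // expect 3
    return 0;
}
```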

I live in the realm of supercomputers and love all the competition to nvidia that has risen up in the past few years.
The thing is... this is DOE... Intel has to showcase efficiency, not absolute performance per node. Which is why this is a win for Cray and not Omni-Path.
Posted on Reply
#35
Steevo
Caring1: Isn't that the basis for research funding anywhere?
A product doesn't have to exist to obtain funding, just a plausible feasibility study saying it can be done and a time frame with outcomes expected.
I guess this is where I expect large companies, with their billions of dollars, to get off my tax dollars, build a product the market desires, and reap the financial reward of successful hard work.

Instead, according to you, our government gave them my money to think about building a successful product with my money. Sounds like theft to me, and a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or Nvidia hardware and make a profit, and that could be what they have to do when they fail.
Posted on Reply
#36
notb
Steevo: I guess this is where I expect large companies, with their billions of dollars, to get off my tax dollars, build a product the market desires, and reap the financial reward of successful hard work.

Instead, according to you, our government gave them my money to think about building a successful product with my money. Sounds like theft to me, and a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or Nvidia hardware and make a profit, and that could be what they have to do when they fail.
The idea here is that you want your supercomputer to be cutting-edge, and designing, building and testing complicated systems takes years.
Using stuff already available on the market means you end up with an obsolete system by the time it reaches the production stage.

The only way for your supercomputer (or your fighter aircraft) to be cutting-edge at launch is to order it through this kind of tender. That way, the thing you want is developed alongside the technology it will use.
Posted on Reply
#37
Patriot
Steevo: I guess this is where I expect large companies, with their billions of dollars, to get off my tax dollars, build a product the market desires, and reap the financial reward of successful hard work.

Instead, according to you, our government gave them my money to think about building a successful product with my money. Sounds like theft to me, and a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or Nvidia hardware and make a profit, and that could be what they have to do when they fail.
To remind everyone once again: the bid was won in 2015, and the deployment has been delayed several times. This is not a win for Xe but for Intel's oneAPI, which allows mixed accelerators under one API. It's not about peak performance per node but peak efficiency, and overall hitting the exaFLOP mark. Xe simply fills the hole left by the abandonment of the Phi lineup. Clickbait is bad.
Posted on Reply