Wednesday, March 20th 2019

Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

This is an interesting piece of tech news, in that Intel has already scored a pretty massive design win for not one, but two upcoming products. Intel's "Future Xeon Scalable Processors" and the company's "Xe Compute Architecture" have been tapped by the U.S. Department of Energy for incorporation into the new AURORA Supercomputer - one that will deliver exascale performance. AURORA is to be developed in a partnership between Intel and Cray, using the latter's Shasta systems and its "Slingshot" networking fabric. But these are not the only Intel elements in the supercomputer design: Intel's Optane DC persistent memory will also be employed (in a version that has yet to be released, no less), making this a clean sweep for Intel.
The AURORA supercomputer is to be delivered to the Argonne National Laboratory by 2021, under a $500 million contract (with $146 million of that going to Cray). This is quite a big win for Intel, one that guarantees excellent PR for its CPUs and GPUs (even if for upcoming parts whose performance figures aren't finalized by any means). This victory is particularly interesting in that AMD and NVIDIA (especially NVIDIA) have been behind virtually all of the recent GPU-accelerated compute and AI supercomputer wins, so for Intel to snag this design win so early will definitely bring a good amount of attention to its Xe graphics architecture among institutions. AURORA has been designed to chew through data analytics, HPC and AI workloads at an exaFLOP pace, and will incorporate Intel's oneAPI for system integration.
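For a sense of what "exascale" implies, here is a purely illustrative back-of-the-envelope sizing sketch - the per-node throughput figures below are placeholders chosen for round numbers, not anything Intel or the DOE has disclosed:

```python
# Illustrative only: the per-node TFLOPS values are placeholders,
# not Aurora specifications.
EXAFLOP = 1e18  # 10^18 floating-point operations per second

for node_tflops in (50, 100, 200):  # hypothetical per-node throughput
    nodes_needed = EXAFLOP / (node_tflops * 1e12)
    print(f"{node_tflops:>4} TFLOPS/node -> ~{nodes_needed:,.0f} nodes for 1 exaFLOP")
```

Whatever the real per-node numbers turn out to be, the takeaway is that an exaFLOP machine is built from thousands of accelerator-heavy nodes stitched together by the interconnect - which is why Cray's Slingshot fabric is as much a part of this win as the silicon itself.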
Sources: Intel AURORA Announcement, CNET

44 Comments on Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

#1
Steevo
Good to know some people are being paid to play by Intel. Corruption at its finest.
Posted on Reply
#2
TheGuruStud
RIP tax money. No actual product, but are given a contract based on imaginary hardware. Sounds legit.
Posted on Reply
#3
notb
TheGuruStud: RIP tax money. No actual product, but are given a contract based on imaginary hardware. Sounds legit.
They are paid to make hardware that matches particular requirements. What's the problem? This is how long term projects work.
Posted on Reply
#4
moproblems99
TheGuruStud: RIP tax money. No actual product, but are given a contract based on imaginary hardware. Sounds legit.
I fail to see how one could expect a demo of a product that is being contracted to be built before it has been contracted to be built?
Posted on Reply
#5
Steevo
moproblems99: I fail to see how one could expect a demo of a product that is being contracted to be built before it has been contracted to be built?
Usually we see working samples of a product before we order it, from companies who have made that product successful in the past.

Here we have tax dollars going to a product that doesn't exist, from a company who has never made it.

Would you pay $750 for a new Intel GPU that they haven't made yet, nor have they ever made one? How about $7,500, or $750,000?
Posted on Reply
#6
AndreiD
Shouldn't they most likely have some early samples though? It's known that AMD's Rome was sampling last year for example, which is how they got some wins.
Should be similar in this situation since Intel always samples hyperscalers or projects like this one way before it technically 'releases' any architecture.
Here it seems they got a win in everything but the fabric, which should inspire some confidence in their Xe Graphics project.
Posted on Reply
#7
TheGuruStud
AndreiD: Shouldn't they most likely have some early samples though? It's known that AMD's Rome was sampling last year for example, which is how they got some wins.
Should be similar in this situation since Intel always samples hyperscalers or projects like this one way before it technically 'releases' any architecture.
Intel went from nothing to high performance working hardware within a yr? They're still doodling on napkins.
Posted on Reply
#8
eidairaman1
The Exiled Airman
Steevo: Good to know some people are being paid to play by Intel. Corruption at its finest.
moproblems99: I fail to see how one could expect a demo of a product that is being contracted to be built before it has been contracted to be built?
Steevo: Usually we see working samples of a product before we order it, from companies who have made that product successful in the past.

Here we have tax dollars going to a product that doesn't exist, from a company who has never made it.

Would you pay $750 for a new Intel GPU that they haven't made yet, nor have they ever made one? How about $7,500, or $750,000?
An expensive and wasteful paper launch.
Posted on Reply
#9
moproblems99
Steevo: Would you pay $750 for a new Intel GPU that they haven't made yet, nor have they ever made one? How about $7,500, or $750,000?
Geez, I didn't realize Intel never made GPUs before. Must be a misprint on my 4770K. I would surmise Intel was required to give some guarantees on that contract before they get paid anything. Government contracts are a lot different than a piddly $750 GPU from Microcenter.
Posted on Reply
#10
R0H1T
There's probably some "rebates" attached to the Xe order, or quite likely a blank check, so long as Intel gets full price on Xeons.
Posted on Reply
#11
TheGuruStud
moproblems99: Geez, I didn't realize Intel never made GPUs before. Must be a misprint on my 4770K. I would surmise Intel was required to give some guarantees on that contract before they get paid anything. Government contracts are a lot different than a piddly $750 GPU from Microcenter.
Yeah, let me tell you, those igpus are stupid fast. So fast that people just don't even bother buying a discrete card, b/c they're faster than everything on the market.
Why didn't enterprise think of this? They'll save a fortune buying CPUs with igpu instead of Quadros!
R0H1T: There's probably some "rebates" attached to the Xe order, or quite likely a blank check, so long as Intel gets full price on Xeons.
Rebates into pockets of those making decisions.
Posted on Reply
#12
biffzinker
TheGuruStud: Yeah, let me tell you, those igpus are stupid fast. So fast that people just don't even bother buying a discrete card, b/c they're faster than everything on the market.
Why didn't enterprise think of this? They'll save a fortune buying CPUs with igpu instead of Quadros!

Rebates into pockets of those making decisions.
Well, QuickSync has proven to be popular, but that's another topic.
Posted on Reply
#13
AndreiD
TheGuruStud: Intel went from nothing to high performance working hardware within a yr? They're still doodling on napkins.
I think you're going off the premise that the Xe Graphics project wasn't ongoing when Raja joined more than a year ago; they most likely have working samples at this point.
Posted on Reply
#14
TheGuruStud
AndreiD: I think you're going off the premise that the Xe Graphics project wasn't ongoing when Raja joined more than a year ago; they most likely have working samples at this point.
If they had anything worth a penny, they'd be bragging about it with made-up performance numbers for hype.
Posted on Reply
#15
biffzinker
Anyone remember this from the beginning of last year?
Back at the start of 2018 Intel designed a prototype discrete GPU using its 14nm Gen 9 execution units, packing 18 low-power EUs across three sub-slices (roughly analogous to Nvidia’s SMs) to offer simple, parallel graphics processing in a tiny, 64mm2 package. It subsequently showed the research off at the ISSCC event in February.



Scale that prototype up to something the size of an RTX 2080, at 545mm2, and you could end up with some serious Intel GPU power. The early 14nm prototype only had 6 EUs per sub-slice, but with the 10nm Gen 11 chips using 16 EUs per sub-slice to make up its heady 64 EU count, each full GPU slice could potentially end up with around 48 EUs each.

Even just throwing some admittedly terrible, back-of-a-napkin maths at this, taking a 48 EU chunk of silicon that’s just 64mm2 (as the prototype GPU was) and scaling it up to the size of an RTX 2080 chip you could end up with more than 400 EUs in a discrete Intel Xe card. With 64 Gen 11 EUs offering at least 1 TFLOPS of processing power, if it scaled in a linear fashion, such a discrete GPU could end up with between 6 and 7 TFLOPs. That would put it around RTX 2070 levels of performance.
pc.watch.impress.co.jp/docs/column/kaigai/1107078.html#01_l.png
www.pcgamesn.com/intel/intel-xe-graphics-card-ice-lake-gpu
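That back-of-a-napkin maths is easy to reproduce; here it is as a quick, purely illustrative Python sketch using the article's own figures (the EU counts and die sizes are its estimates, not anything Intel has confirmed):

```python
# Reproducing the article's back-of-a-napkin scaling, using its own estimates.
prototype_area_mm2 = 64     # ISSCC 14nm prototype die size
prototype_eus      = 48     # assumed Gen 11-style slice: 3 sub-slices x 16 EUs
target_area_mm2    = 545    # roughly the size of an RTX 2080 die

scale      = target_area_mm2 / prototype_area_mm2   # ~8.5x more area
scaled_eus = prototype_eus * scale                  # ~409 EUs

tflops_per_64_eus = 1.0                             # "at least 1 TFLOPS" per 64 Gen 11 EUs
est_tflops = scaled_eus / 64 * tflops_per_64_eus    # ~6.4 TFLOPS, assuming linear scaling

print(f"~{scaled_eus:.0f} EUs, ~{est_tflops:.1f} TFLOPS")
```

Which lands right in the quoted "more than 400 EUs" and "between 6 and 7 TFLOPs" range - assuming perfectly linear scaling, which real GPUs never quite achieve.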
Posted on Reply
#16
SoNic67
When those contracts are written, the only thing that is nailed down is the performance specification. It does not matter how Intel and Cray get there. There is also a bond that the contractor puts up as insurance that it will deliver on time.

You people are making assumptions about things you don't know the workings of. There are laws put in place by Congress about procurement, all collected nicely in the CFR and FAR... look it up.
Posted on Reply
#17
notb
TheGuruStud: Yeah, let me tell you, those igpus are stupid fast. So fast that people just don't even bother buying a discrete card, b/c they're faster than everything on the market.
Why didn't enterprise think of this? They'll save a fortune buying CPUs with igpu instead of Quadros!
I know you really wanted to sound funny. And you do!

You're trying to convince us Intel is a decade behind Nvidia and AMD, but actually HD 630 has roughly 45-50% of GT 1030 performance - both on paper and in benchmarks:
www.notebookcheck.net/HD-Graphics-630-vs-GeForce-GT-1030-Desktop_7652_7996.247598.0.html

And now some figures for people with strong die size fallacy:
HD 630: ~40mm2
GT 1030: 70mm2
That's 57%.

To be fair, we would have to consider that part of the die is taken by media and encoding and this part is relatively larger in the smaller die. But even without this, you can see it's not that far off.

More importantly, IGP isn't actually optimized for performance. It is optimized for idle power consumption, which stays under 1W even during movie playback. GTX1050 needs 3W. Just saying.
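If anyone wants to check that arithmetic, here's a trivial sketch (the HD 630 area is the rough estimate above for the GPU portion of the CPU die, not an official figure):

```python
# Area ratio vs. performance ratio for the two parts discussed above.
hd630_area_mm2  = 40        # rough estimate of the GPU slice of the CPU die
gt1030_area_mm2 = 70        # approximate GP108 die size

area_ratio = hd630_area_mm2 / gt1030_area_mm2    # ~0.57
perf_lo, perf_hi = 0.45, 0.50                    # HD 630 vs GT 1030, per the linked benchmarks

print(f"die area ratio: {area_ratio:.0%}, performance ratio: {perf_lo:.0%}-{perf_hi:.0%}")
```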
Posted on Reply
#18
moproblems99
TheGuruStud: Yeah, let me tell you, those igpus are stupid fast. So fast that people just don't even bother buying a discrete card, b/c they're faster than everything on the market.
Why didn't enterprise think of this? They'll save a fortune buying CPUs with igpu instead of Quadros!
Last I checked, iGPUs work perfectly fine for anything not CAD related - which is most office work. My whole office uses iGPU - even the graphic designers.

Also, I guess by your judgement there is no need to buy AMD GPUs since NV cards are quite a bit faster.
Posted on Reply
#19
ptmmac
I know that Intel is not the company most people would expect for this design win. I think if you check, you will see that NVIDIA and AMD won one of the other supercomputer bids, for a system due in 2020.
This is something that will be used by the Defense Department and the military. I think it is a reasonable guess that these customers have prioritized silicon made in the US. I also think that if the government decided it needed to replace all of its Intel chips after the Spectre hardware bug was discovered, that could explain why Intel has been missing new silicon and still making a huge profit.

I think that it is safe to say that in this case, while building the first exascale computer ever, the government is buying state-of-the-art silicon for AI research. Perhaps the expense is high and includes the cost for Intel to get a new node operational for this purpose alone. If that is the case, then this is a win for both of these two huge organizations. You can bet that you will hear more news about overspending in the Defense Department, which is how they move money from regular acquisitions to off-the-books work. I am sure that building the first exascale computer is in this arena of black-budget purchases and that Intel has already been given lots of seed money to get this processor going. It is also worth noting that we just put a record-breaking $700 billion into defense spending this year.

The fact is, Intel has been caught with their pants down because of graphics accelerators' advantages over CPUs in AI workloads. The demise of Intel's CEO is surely due to this mess. Doesn't anyone else wonder why Intel is so damn profitable right now?
They set an all-time record for earnings this December. That does not seem to make sense given their current explanations for missing a process node upgrade for two years in a row.
Here is the PDF from their website:
s21.q4cdn.com/600692695/files/doc_financials/2018/Q4/Q4'18-Earnings-Release_final.pdf
Posted on Reply
#20
TheGuruStud
moproblems99: Last I checked, iGPUs work perfectly fine for anything not CAD related - which is most office work. My whole office uses iGPU - even the graphic designers.

Also, I guess by your judgement there is no need to buy AMD GPUs since NV cards are quite a bit faster.
Boy, you guys are REALLY stretching, here. Worthless igpu means it'll be useful for a supercomputer...just LOL. Go cry to mommy about how people are mean to intel.
Posted on Reply
#21
moproblems99
TheGuruStud: Boy, you guys are REALLY stretching, here. Worthless igpu means it'll be useful for a supercomputer...just LOL. Go cry to mommy about how people are mean to intel.
  1. I don't care about AMD vs Intel
  2. I don't care about NV vs AMD
I'm not sure what your problem is but it appears you are incapable of having a conversation. Have a good day.
Posted on Reply
#22
TheGuruStud
notb: I know you really wanted to sound funny. And you do!

You're trying to convince us Intel is a decade behind Nvidia and AMD, but actually HD 630 has roughly 45-50% of GT 1030 performance - both on paper and in benchmarks:
www.notebookcheck.net/HD-Graphics-630-vs-GeForce-GT-1030-Desktop_7652_7996.247598.0.html

And now some figures for people with strong die size fallacy:
HD 630: ~40mm2
GT 1030: 70mm2
That's 57%.

To be fair, we would have to consider that part of the die is taken by media and encoding and this part is relatively larger in the smaller die. But even without this, you can see it's not that far off.

More importantly, IGP isn't actually optimized for performance. It is optimized for idle power consumption, which stays under 1W even during movie playback. GTX1050 needs 3W. Just saying.
Hmmmm, it's almost as if Intel needs a ground-up design and can't just scale their current crapola. Geez, who would've known? So, thanks, you're starting to get it. It only does low power (and you're comparing it to an Nvidia part cut down from a monolithic design, which I think works pretty well).
Posted on Reply
#24
saki630
There is too much tin-foil conspiracy going on in this thread. There are a few who don't want to bother understanding the contract, so they make up a conspiracy about its legitimacy to promote the image of an America that is spiraling out of control, with taxpayer money being funneled into fantasy projects.

Keep watching your Alex Jones; I'll stick to my Crack Addicts chiropractor videos on the tube.
Posted on Reply
#25
Daven
notb: I know you really wanted to sound funny. And you do!

You're trying to convince us Intel is a decade behind Nvidia and AMD, but actually HD 630 has roughly 45-50% of GT 1030 performance - both on paper and in benchmarks:
www.notebookcheck.net/HD-Graphics-630-vs-GeForce-GT-1030-Desktop_7652_7996.247598.0.html

And now some figures for people with strong die size fallacy:
HD 630: ~40mm2
GT 1030: 70mm2
That's 57%.

To be fair, we would have to consider that part of the die is taken by media and encoding and this part is relatively larger in the smaller die. But even without this, you can see it's not that far off.

More importantly, IGP isn't actually optimized for performance. It is optimized for idle power consumption, which stays under 1W even during movie playback. GTX1050 needs 3W. Just saying.
I don't think we should look at it from a performance-per-transistor perspective, but rather at Intel's attempts to devalue the GPU market in order to hurt its competitors.
Posted on Reply