Wednesday, March 20th 2019
Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer
This is an interesting piece of tech news, in that Intel has already scored a massive design win for not one, but two upcoming products. Intel's "Future Xeon Scalable Processors" and the company's "Xe Compute Architecture" have been tapped by the U.S. Department of Energy for incorporation into the new AURORA Supercomputer - one that will deliver exascale performance. AURORA is to be developed in a partnership between Intel and Cray, using the latter's Shasta systems and its "Slingshot" networking fabric. But these are not the only Intel elements in the supercomputer design: Intel's Optane DC persistent memory will also be employed (in an as-yet-unavailable version, no less), making this a win across the board for Intel.
The AURORA supercomputer is to be delivered to the Argonne National Laboratory by 2021, under a $500 million contract (with $146 million of that going to Cray). This is quite a big move for Intel, one that guarantees an incredible PR boost for its CPUs and GPUs (even if for upcoming parts whose performance figures aren't finalized by any means). The victory is particularly interesting in that both AMD and NVIDIA (especially NVIDIA) have been behind virtually all GPU-accelerated compute and AI supercomputer wins to date, so for Intel to snag this design win so early will definitely bring a good amount of attention to its Xe graphics architecture among institutions. AURORA has been designed to chew through data analytics, HPC, and AI workloads at an exaFLOP pace, and will incorporate Intel's OneAPI for system integration.
Sources:
Intel AURORA Announcement, CNET
44 Comments
Here we have tax dollars going to a product that doesn't exist, from a company that has never made one.
Would you pay $750 for a new Intel GPU that they haven't made yet - indeed, that they have never made before? How about $7,500, or $750,000?
Should be similar in this situation since Intel always samples hyperscalers or projects like this one way before it technically 'releases' any architecture.
Here it seems they got a win in everything but the fabric, which should inspire some confidence in their Xe Graphics project.
Why didn't enterprise think of this? They'll save a fortune buying CPUs with iGPUs instead of Quadros! Rebates into the pockets of those making the decisions.
www.pcgamesn.com/intel/intel-xe-graphics-card-ice-lake-gpu
You people make assumptions about things you don't understand. There are laws passed by Congress governing procurement, all put together nicely in something called the CFR and FAR... look them up.
You're trying to convince us Intel is a decade behind NVIDIA and AMD, but actually the HD 630 has roughly 45-50% of GT 1030 performance - both on paper and in benchmarks:
www.notebookcheck.net/HD-Graphics-630-vs-GeForce-GT-1030-Desktop_7652_7996.247598.0.html
And now some figures for people with strong die size fallacy:
HD 630: ~40 mm²
GT 1030: 70 mm²
That's 57% of the die area.
To be fair, we would have to consider that part of each die is taken up by media and encoding blocks, and that part is relatively larger on the smaller die. But even without accounting for this, you can see it's not that far off.
More importantly, an IGP isn't actually optimized for performance. It is optimized for idle power consumption, which stays under 1 W even during movie playback. A GTX 1050 needs 3 W. Just saying.
Also, I guess by your judgement there is no need to buy AMD GPUs, since NVIDIA cards are quite a bit faster.
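To put those numbers in one place, here's a quick back-of-the-envelope sketch. The 45-50% relative performance and the ~40 mm² vs 70 mm² die sizes are the figures quoted above; the 0.475 midpoint is my own assumption:

```python
# Rough perf-per-area comparison from the figures quoted above:
# HD 630 at ~45-50% of GT 1030 performance, dies of ~40 mm^2 vs 70 mm^2.
hd630_area_mm2 = 40.0
gt1030_area_mm2 = 70.0
hd630_rel_perf = 0.475  # assumed midpoint of the quoted 45-50% range

area_ratio = hd630_area_mm2 / gt1030_area_mm2     # ~0.57, the "57%" above
perf_per_mm2_ratio = hd630_rel_perf / area_ratio  # ~0.83

print(f"die-area ratio: {area_ratio:.0%}")               # 57%
print(f"perf per mm^2 vs GT 1030: {perf_per_mm2_ratio:.0%}")  # 83%
```

So on these rough numbers the HD 630 delivers around 80-85% of the GT 1030's performance per square millimeter - hardly "a decade behind."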
This is something that will be used by the Defense Department and the military. I think it is a reasonable guess that these customers have prioritized silicon made in the US. I also think that if the government decided it needed to replace all of its Intel chips after the Spectre hardware bug was discovered, that could explain why Intel has been missing new silicon yet still making a huge profit.
I think it is safe to say that in this case, while building the first exascale computer ever, the government is buying state-of-the-art silicon for AI research. Perhaps the expense is high and includes the cost for Intel to get a new node operational for their purposes alone. If that is the case, then this is a win for both of these two huge organizations. You can bet you will hear more news about overspending in the Defense Department, which is how they move money from regular acquisitions to off-the-books work. I am sure that building the first exascale computer falls in this arena of black-budget purchases, and that Intel has already been given lots of seed money to get this processor going. It is also worth noting that we just put a record-breaking $700 billion into defense spending this year.
The fact is, Intel has been caught with their pants down because of graphics accelerators' advantages over CPUs in AI workloads. The departure of Intel's CEO is surely due to this mess. Doesn't anyone else wonder why Intel is so damn profitable right now?
They set an all-time record for earnings this December. That does not seem to make sense given their current explanations for missing a process node upgrade two years in a row.
Here is the PDF from their website:
s21.q4cdn.com/600692695/files/doc_financials/2018/Q4/Q4'18-Earnings-Release_final.pdf
- I don't care about AMD vs Intel
- I don't care about NV vs AMD
I'm not sure what your problem is, but it appears you are incapable of having a conversation. Have a good day.
books.google.nl/books?id=XLmBDwAAQBAJ&pg=PR13&lpg=PR13&dq=JSF+contractors&source=bl&ots=4JxS1EV_9g&sig=ACfU3U3pHOC64D-nUVHJr-IwcAW2ILE-Dg&hl=nl&sa=X&ved=2ahUKEwiUxKHYtpHhAhWIZFAKHXJjCvwQ6AEwBHoECAgQAQ#v=onepage&q=JSF contractors&f=false
Keep watching your Alex Jones, I'll stick to my Crack Addicts chiropractor videos on the tube.