
Without Silicon, Intel Scores First Exascale Computer Design Win for Xe Graphics - AURORA Supercomputer

I know you really wanted to sound funny. And you do!

You're trying to convince us Intel is a decade behind Nvidia and AMD, but actually HD 630 has roughly 45-50% of GT 1030 performance - both on paper and in benchmarks:
https://www.notebookcheck.net/HD-Graphics-630-vs-GeForce-GT-1030-Desktop_7652_7996.247598.0.html

And now some figures for people with a strong die-size fallacy:
HD 630: ~40 mm²
GT 1030: 70 mm²
That's roughly 57% of the area.

To be fair, we would have to account for the fact that part of the die is taken up by the media/encode blocks, and that part is relatively larger on the smaller die. But even without that adjustment, you can see it's not that far off.
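If you want to sanity-check that claim, here's a quick back-of-the-envelope calculation using only the figures quoted above (the ~40 mm² area and the 45-50% performance figure are rough estimates, so treat the result as ballpark only):

```cpp
#include <iostream>

int main() {
    // Figures quoted above; the HD 630 area is an estimate and
    // still includes the media/encode blocks.
    const double hd630_area_mm2  = 40.0;   // ~40 mm^2 of the CPU die
    const double gt1030_area_mm2 = 70.0;   // GP108 die size
    const double perf_ratio      = 0.475;  // HD 630 at roughly 45-50% of a GT 1030

    const double area_ratio          = hd630_area_mm2 / gt1030_area_mm2;  // ~0.57
    const double perf_per_area_ratio = perf_ratio / area_ratio;           // ~0.83

    std::cout << "Die area ratio (HD 630 / GT 1030): " << area_ratio << "\n";
    std::cout << "Performance per mm^2 relative to GT 1030: " << perf_per_area_ratio << "\n";
}
```

In other words, on these rough numbers the HD 630 lands at roughly 80-85% of the GT 1030's performance per mm², which is what "not that far off" means here.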

More importantly, the IGP isn't actually optimized for performance. It is optimized for idle power consumption, which stays under 1 W even during movie playback. A GTX 1050 needs 3 W. Just saying.

I don't think we should look at it from a performance per transistor perspective but rather the attempts from Intel to try and devalue the GPU market in order to hurt its competitors.
 

Devalue... how? I don't follow. They are pumping money into it, and they will want a return on investment. That will lead to competition, which generally leads to growth.

And as far as performance goes, this is one of the very rare markets where performance per mm² and per watt are the alpha and omega. If you capture those, you capture the market; that is exactly where Nvidia is right now.
 
It's not unheard of for such contracts to be commissioned before certain products officially hit the market, but we are talking about something for which there is exactly zero information out there. There isn't even a trace of any prototype silicon being tested somewhere, as was the case with Larrabee, where Intel handed out some GPUs to select institutions before they were supposed to be officially released.
 
I fail to see how one could expect a demo of a product that is being contracted to be built before it has been contracted to be built.

No one expects to see a demo of the supercomputer; however, the supercomputer can't be built with mythical parts with 0 proof of functionality.
There have been 0 demonstrations of Xe, 0 talk of a performance range, 0 launch dates given and 0 reasons to back this move.
 
There isn't even a trace of any prototype silicon being tested somewhere
Hmm, I'm pretty sure I posted in this thread that Intel did make a prototype at the beginning of last year, presented at ISSCC. Do you happen to have some unknown insider knowledge of what Intel is up to?

"Back at the start of 2018 Intel designed a prototype discrete GPU using its 14nm Gen 9 execution units, packing 18 low-power EUs across three sub-slices (roughly analogous to Nvidia’s SMs) to offer simple, parallel graphics processing in a tiny, 64mm2 package. It subsequently showed the research off at the ISSCC event in February. "

What Intel might build off of after their iGPU is a whole different ball game.
 
There have been 0 demonstrations of Xe, 0 talk of a performance range, 0 launch dates given and 0 reasons to back this move.

Um, there have been 0 public demonstrations. That says nothing about what has happened that we don't know about.

It's as if people think the world revolves around PC enthusiasts...
 
Here we have tax dollars going to a product that doesn't exist, from a company that has never made one.
Isn't that the basis for research funding anywhere?
A product doesn't have to exist to obtain funding, just a plausible feasibility study saying it can be done, and a time frame with expected outcomes.
 
Ah... exascale? And to think a friend of mine thought it was some new tech and that Intel was "light-years" beyond its competitors... while it's "only" an "exaFLOPS-capable SC".

A blank contract for something that even AMD could achieve? (Not now, of course... but given time... hey, they have until 2021, right? :laugh: Now they just need to find some "fat cash wallet carrier" to fund it.) If it were about innovation I would understand... but when it's about raw computational power... a bit less.

Funding for Xe? ... Surely Xe is not a revolutionary GPU... after all, it's an Intel GPU.



The only interesting point (and a bad one) is that it will be an "all Intel" SC.
 
Kids in their bedrooms, comparing gaming benchmarks and scores with supercomputing requirements.
It's a totally different world. For anyone who really wants to find something new: Intel already has a second-gen AI chip (VPU). It's not a big deal to improve it and put it in the next GPU. It will already be embedded in the Xeon Cascade Lake (Intel DL Boost).
https://newsroom.intel.com/news/intel-unveils-intel-neural-compute-stick-2/
 
*Intel's subsidiary Movidius has a 2nd-gen VPU, a vision processing unit... it tackles the easiest problem to solve... it's all inferencing, no models being trained here... that happens on distant heavy iron.

Intel is waaaay behind in mindshare on their various acquired IP... Altera, Movidius, Nervana... and no, they did not embed a Movidius VPU chip in Cascade Lake.
They simply added another AVX extension, AVX-512 VNNI, as well as bfloat16.
Don't try to combine press releases, please and thank you.
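For anyone wondering what that extension actually buys you: DL Boost/VNNI fuses the int8 multiply-accumulate chain used in inference into a single instruction (VPDPBUSD), where older AVX-512 needed three. This is not Intel's code, just a scalar sketch of what one such instruction does per 32-bit lane:

```cpp
#include <cstdint>
#include <iostream>

// Roughly what one 32-bit lane of AVX-512 VNNI's VPDPBUSD computes:
// multiply four unsigned 8-bit values by four signed 8-bit values,
// sum the four products, and add the result to a 32-bit accumulator.
// The real instruction does this for all 16 lanes of a 512-bit register at once.
int32_t vnni_lane(int32_t acc, const uint8_t a[4], const int8_t b[4]) {
    int32_t sum = 0;
    for (int i = 0; i < 4; ++i)
        sum += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return acc + sum;
}

int main() {
    const uint8_t a[4] = {1, 2, 3, 4};
    const int8_t  b[4] = {10, -10, 10, -10};
    std::cout << vnni_lane(0, a, b) << "\n";  // 1*10 - 2*10 + 3*10 - 4*10 = -20
}
```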

Intel is not gaining traction on any of those projects outside of Facebook... they are all cool... but not fully baked. They just gave FB enough of a deal to help them polish up both the Movidius and Nervana product lines.

Altera: a flexible FPGA, but it gets utterly trounced by the T4; it was barely competitive with the P4, and Nvidia launched the T4 before they could get theirs out the door.
There are other, more capable FPGAs on the market, and convincing people to go Intel comes down to discounts. They are combining it with IB (InfiniBand) for some interesting offload capabilities.

Now... for Xe... there have been no performance estimates for it, from any side: not consumer, not server...
This is not a bid won because of Xe... but one already won and then delayed because of Intel's past failings...
Originally announced in April 2015, Aurora was planned to be delivered in 2018 with a peak performance of 180 petaFLOPS, and was expected to be the world's most powerful system at the time. It was intended to be built by Cray based on Intel's 3rd-generation Xeon Phi (Knights Hill microarchitecture). In November 2017 Intel announced that Aurora had been shifted to 2021 and would be scaled up to 1 exaFLOPS, which would likely make it the first supercomputer in the United States to break the exaFLOPS barrier. As part of the announcement, Knights Hill was canceled, to be replaced by a "new platform and new microarchitecture specifically designed for exascale".

What this is... is a win for Intel's oneAPI: the goal is a single programming model that handles the details of the workloads and offloads each one to the most capable accelerator for any given task.
This is not a win for Xe, but one in spite of it.
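To make that "one API over mixed accelerators" idea concrete: what Intel later shipped under the oneAPI banner is DPC++, built on SYCL, where the same kernel source is dispatched to whatever device the runtime picks (GPU, FPGA, or a CPU fallback). The details were not public at the time of this thread, so the following is only a minimal SYCL 2020-style sketch, assuming an installed DPC++/SYCL toolchain:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // The runtime selects the most capable available device
    // (an accelerator if present, otherwise the host CPU).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Offloading to: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr size_t N = 1024;
    // Unified shared memory: visible to both host and device.
    float* x = sycl::malloc_shared<float>(N, q);

    // The same kernel source runs on whichever device was selected.
    q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
        x[i] = 2.0f * static_cast<float>(i[0]);
    }).wait();

    std::cout << "x[10] = " << x[10] << "\n";
    sycl::free(x, q);
}
```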

I live in the realm of supercomputers and love all the competition to Nvidia that has risen up in the past few years.
The thing is... this is the DOE... Intel has to showcase efficiency, not absolute performance per node. Which is why this is a win for Cray and not Omni-Path.
 
Your handle is "Patriot". You forgot to mention who the CEOs of Nvidia and AMD are, and in what country the actual research money will be spent.

"Jensen Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer and a member of the board of directors.
Huang is a recipient of the Dr. Morris Chang Exemplary Leadership Award, and honorary doctorate degrees from Taiwan’s National Chiao Tung University...."
"Lisa Su (born 1969) is a Taiwanese American business executive and electrical engineer, and the CEO and president of Advanced Micro Devices (AMD)."

It's about time US tax dollars supported research in the US.

Ah, and this: https://www.law.cornell.edu/uscode/text/41/subtitle-IV/chapter-83
Who has actual manufacturing facilities in the US?
 
Jensen moved to the US as a kid and was raised here, and Lisa Su's family migrated when she was 3... They are both Americans, by spirit and by letter. (They are also related... something like first cousins once removed.)
TSMC fabs both Nvidia and AMD GPUs, and both companies are headquartered in Silicon Valley.
As for the CPU side of things... they are fabbed all around the world; only Intel has its own fabs... AMD is fabbed through TSMC or GloFail.
Intel is roughly 75% in the US, but is also the only one with a fab in China, for 3D XPoint.
Edit: Wikipedia is wrong and doesn't know the difference between 3D NAND and 3D XPoint, which is based on it.
https://en.wikipedia.org/wiki/IM_Flash_Technologies
 
Yes, Nvidia is headquartered in Santa Clara, California; they spent 370 million dollars building their headquarters there.
Another interesting thing about Mr. Huang is that he is a self-made billionaire. As of right now he has 4.4 billion dollars in wealth.
 

While they are a fistful of assholes to deal with... you can't arbitrarily deny they are American... They are truly the worst: they don't see themselves as a vendor selling components; they see server manufacturers as components for Nvidia supercomputers.
 

How did this topic get to be about where everyone is from? Who cares? It doesn't have a thing to do with building supercomputers.
 
It doesn't matter in the absolute sense of things, unless you are concerned about security; then the source of components matters.
Government contracts in the US must first TRY to source from US vendors... and the problem with SoNic's argument, as pointed out, is that all the vendors are US-based...
First try reading the thread; stop creating moproblems. :p
 
Isn't that the basis for research funding anywhere?
A product doesn't have to exist to obtain funding, just a plausible feasibility study saying it can be done, and a time frame with expected outcomes.


I guess this is where I expect large companies, with their billions of dollars, to get off my tax dollars, build a product the market desires, and reap the financial reward of successful hard work.

Instead, our government, according to you, gave them my money to think about building a successful product with my money. Sounds like theft to me, and like a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or Nvidia hardware and make a profit, and that could be what they have to do when they fail.
 
The idea here is that you want your supercomputer to be cutting-edge, and designing, building and testing complicated systems takes years.
Using stuff already available on the market means you end up with an obsolete system by the time it reaches the production stage.

The only way for your supercomputer (or your fighter aircraft) to be cutting-edge at launch is to order it through this kind of tender. That way, the thing you want is developed alongside the technology it will use.
 
I guess this is where I expect large companies, with their billions of dollars, to get off my tax dollars, build a product the market desires, and reap the financial reward of successful hard work.

Instead, our government, according to you, gave them my money to think about building a successful product with my money. Sounds like theft to me, and like a recipe for debt. If Intel were truly smart they would buy black boxes of proven AMD or Nvidia hardware and make a profit, and that could be what they have to do when they fail.

To remind everyone once again: the bid was won in 2015 and the deployment has been delayed several times. This is not a win for Xe but for Intel's oneAPI, which allows mixed accelerators under one API. It's not about peak performance per node but about peak efficiency and reaching the exaFLOPS mark overall. Xe simply fills the hole left by the abandonment of the Phi lineup. Clickbait is bad.
 