Monday, February 10th 2020

Intel Xe Graphics to Feature MCM-like Configurations, up to 512 EU on 500 W TDP

A reportedly leaked Intel slide, published via DigitalTrends, has given us a load of information on Intel's upcoming take on the high-performance graphics accelerator market, in both its server and consumer iterations. Intel's Xe has already been the cause of much discussion in a market that has really only seen two competitors for ages now - the arrival of a third player with the muscle and brawn of Intel, set against the already-established NVIDIA and AMD, would surely spark competition in the segment, and competition is the lifeblood of advancement, as we've recently seen with AMD's Ryzen CPU line.

The leaked slide reveals that Intel will be looking to employ a Multi-Chip-Module (MCM) approach for its high-performance "Arctic Sound" graphics architecture. The GPUs will be available in configurations of up to four "tiles" (the name Intel is giving each module), which are joined via Foveros 3D stacking (first employed in Intel's Lakefield). The slide shows Intel's approach starting with a 1-tile GPU (with only 96 of its 128 total EUs active) for the entry-level market at 75 W TDP, a la the DG1 SDV (Software Development Vehicle).
From there, the lineup moves to the midrange with a full 1-tile, 128 EU unit (150 W), then a 2-tile, 256 EU unit (300 W) for enthusiasts, and finally a 4-tile unit with up to 512 EUs, a 400-500 W beast reserved for the data center. That last one is known to be data-center-only because the leaked slide (assuming it's legitimate) points to a 48 V input voltage, which isn't available on consumer solutions. By design, each EU packs the equivalent of eight graphics processing cores. That's a lot of addressable hardware, but we'll see whether both the performance and the power efficiency are there in the final products - we hope they are.
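For a rough sense of scale, here's a minimal back-of-the-envelope sketch (in Python) of theoretical FP32 throughput for the rumored configurations. It assumes the eight FP32 ALUs per EU mentioned above and 2 FLOPs per ALU per clock (a fused multiply-add); the clock speeds are purely hypothetical placeholders, since the slide lists none:

# Back-of-the-envelope FP32 throughput for the rumored Xe tile configs.
# Assumptions (not from the leaked slide): 8 FP32 ALUs per EU, 2 FLOPs
# per ALU per clock (fused multiply-add), and placeholder clock speeds.

ALUS_PER_EU = 8
FLOPS_PER_CLOCK = 2  # one FMA counts as two floating-point operations

configs = [
    # (tiles, active EUs, TDP in watts, hypothetical clock in GHz)
    (1,  96,  75, 1.0),
    (1, 128, 150, 1.5),
    (2, 256, 300, 1.7),
    (4, 512, 500, 0.9),
]

for tiles, eus, tdp, ghz in configs:
    tflops = eus * ALUS_PER_EU * FLOPS_PER_CLOCK * ghz / 1000
    print(f"{tiles}-tile, {eus:3d} EU @ {ghz} GHz, {tdp} W: "
          f"~{tflops:.2f} TFLOPS FP32")

At those placeholder clocks, the 2-tile part lands around 7 TFLOPS and the 512 EU part around 7.4 TFLOPS; actual clocks, and therefore actual throughput, remain unknown.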
Sources: Digital Trends, via Videocardz

50 Comments on Intel Xe Graphics to Feature MCM-like Configurations, up to 512 EU on 500 W TDP

#2
cucker tarlson
dafuq is coyote pass :laugh: :roll:
and I thought rocket lake was stupid.
Posted on Reply
#3
TheLostSwede
News Editor
And here I thought Intel was strongly opposed to glue...
Posted on Reply
#4
W1zzard
cucker tarlson: "dafuq is coyote pass"
Is that when the coyote is putting acme dynamite under a bridge in the mountains?
Posted on Reply
#5
cucker tarlson
W1zzard: "Is that when the coyote is putting acme dynamite under a bridge in the mountains?"
that's exactly what I thought.

Posted on Reply
#6
TechLurker
TheLostSwede: "And here I thought Intel was strongly opposed to glue..."
Intel would strongly insist theirs isn't glue, but cement.
Posted on Reply
#7
londiste
"1-Tile Client product with common die"
This might mean a client product on 1 tile only. MCM is viable enough today, and has been for a while, for GPGPU uses.
Posted on Reply
#8
Cheeseball
Not a Potato
256 EUs should be around 7 TFLOPs, so it could be an RTX 2070 non-Super competitor?
Posted on Reply
#9
ppn
Cheeseball: "256 EUs should be around 7 TFLOPs, so it could be an RTX 2070 non-Super competitor?"
The one with 7 TFLOPS is the 500 W part (900 MHz × 4096 × 2), so this means nothing for the real performance.
Posted on Reply
#10
Cheeseball
Not a Potato
ppn: "The one with 7 TFLOPS is the 500 W part (900 MHz × 4096 × 2), so this means nothing for the real performance."
I think you mean 300 W (as shown in the chart above). The 2-tile is 256 EU, which should be more or less 7 TFLOPS.
Posted on Reply
#11
ppn
Cheeseball: "I think you mean 300 W (as shown in the chart above). The 2-tile is 256 EU, which should be more or less 7 TFLOPS."
Then it will be a 300 W card against a hypothetical GTX 2660 Super that would come in very close to 100 W: 7 nm+, 2048 CUDA cores, 128-bit bus.
Posted on Reply
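For reference, the arithmetic behind the figures traded above - a rough sketch assuming 8 FP32 lanes per EU and 2 FLOPs per lane per clock, with speculative clocks not taken from the slide:

512 EU × 8 lanes × 2 FLOPs × 0.9 GHz ≈ 7.4 TFLOPS (ppn's 4-tile figure)
256 EU × 8 lanes × 2 FLOPs × 1.7 GHz ≈ 7.0 TFLOPS (Cheeseball's 2-tile estimate)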
#12
Chrispy_
Those are looking like some pretty grandiose plans for a company that hasn't launched a GPU in 22 years.

I'm hoping for competition as much as the next guy, but let's see if they can get the baby steps right and make a viable dGPU that people might want to buy first.

After all, if it's not a success, Intel will just can it and all of these roadmap ideas will be archived like Larrabee was.
Posted on Reply
#13
dicktracy
Chrispy_: "Those are looking like some pretty grandiose plans for a company that hasn't launched a GPU in 22 years.

I'm hoping for competition as much as the next guy, but let's see if they can get the baby steps right and make a viable dGPU that people might want to buy first.

After all, if it's not a success, Intel will just can it and all of these roadmap ideas will be archived like Larrabee was."
Intel can't afford to abandon GPU development, because next-gen computing is no longer about CPUs but mostly about GPUs and AI. Whoever dominates the GPU market will become tomorrow's "Intel." The biggest threat to Intel out of all their competitors today is Nvidia. If they can't get it right on the first try, they're forced to keep spending on R&D until they can.
Posted on Reply
#14
R-T-B
W1zzard: "Is that when the coyote is putting acme dynamite under a bridge in the mountains?"
This is the news coverage I come here for... made my day.
Posted on Reply
#15
TheGuruStud
They forgot the tiny print: 75% of power consumption is the interconnect. Efficiency is trash, we won't really produce this, but we have to market something.
Posted on Reply
#16
mastrdrver
I'll ask it again: where are they going to make these GPUs, since they're capacity-constrained on CPUs? Are people really expecting Intel to stop making CPUs, on which they have such a high markup, to sell a few GPUs whose margins won't even come close?


And please don't say at Samsung, TSMC, etc. There is a vast difference not only in the process of making the chips but also in the type of transistors they make, whether they're CMOS, etc.

Also, is no one going to mention how this "leaked slide" looks like it's from the mid-2000s?
Posted on Reply
#17
TheGuruStud
mastrdrver: "Also, is no one going to mention how this 'leaked slide' looks like it's from the mid-2000s?"
I mean... isn't that the mindset Intel is in? (We're up against Athlon 64, we'll just pay our way out.)
Posted on Reply
#18
Sybaris_Caesar
I for one am excited. Either they stumble and fall and become the butt of our jokes, or they pawn AMD with their superior driver support; either way, the industry is gonna become exciting.
Posted on Reply
#19
R0H1T
mastrdrver: "And please don't say at Samsung, TSMC, etc. There is a vast difference not only in the process of making the chips but also in the type of transistors they make, whether they're CMOS, etc."
That's what some of the rumors said, & why not? You do know TSMC already fabs some of their products, probably Sammy as well, so a relatively low-margin product, as compared to Xeon, can definitely be made at other leading fabs.
Posted on Reply
#20
mastrdrver
Samsung produces chipsets, and TSMC produces nothing for Intel (at least that I could find). None of these so-called rumors have said anything about the GPUs being made in another foundry.

This "Intel is going to produce a discrete GPU" thing keeps coming up every 5 years or so and never materializes, because Intel is a CPU company.

edit: After a little searching, Intel has stated that the GPU will be built on its 10 nm+. This is more proof that this is never going to happen, because everything leaked about Intel's 10 nm and 10 nm+ says it's a dumpster fire.
Posted on Reply
#21
londiste
mastrdrver: "This 'Intel is going to produce a discrete GPU' thing keeps coming up every 5 years or so and never materializes, because Intel is a CPU company."
Designing and building a GPU takes a long time. 5 years actually sounds about right for a viable attempt.
Intel does far more than CPUs - there's the foundry, flash/SSDs, XPoint, all kinds of NICs (including 5G, at least until recently), FPGAs, some foothold in AI/ML, a bunch of interconnect stuff, and I've probably missed a few.
Posted on Reply
#22
DeathtoGnomes
Khonjel: "I for one am excited. Either they stumble and fall and become the butt of our jokes, or they pawn AMD Nvidia with their superior driver support; either way, the industry is gonna become exciting."
fixed.

Nvidia is the goal post here, not AMD. The fight for 2nd place is moot because the real winner will be consumers.
Posted on Reply
#23
Vya Domus
Even for highly specific applications, 500 W per card (or whatever format this ends up in) is a ridiculous amount. As I've pointed out many times, MCM only makes sense when you have exhausted every other option architecturally; if the 96 EU DG1 is any indication of where Intel is right now performance-wise, well, let's just say they are nowhere close to that situation. Unless the EUs themselves are completely revamped in some way and churn out significantly higher performance or efficiency, the outlook on this is very grim.
londiste: "5 years actually sounds about right for a viable attempt."
Viable as in it's enough to make something, but not good enough if you want a competitive product. In 5 years there is a very good chance the process technology will change, plus various other advancements will have occurred, and you'll have a sub-par product before it's even released. Let's be real here: 5 years is an eternity in the semiconductor space; the entire landscape could change several times over in that time.
Posted on Reply
#24
lemonadesoda
I think Intel R&D are working hard to develop IP that they can license, or cross-fertilize into CPU R&D or general compute R&D. I doubt there will be any competitive GPU anytime soon. It is also oddball that general-purpose multi-core compute is currently stuck on a GPU card. We need to make the leap of faith and keep CPU CPU, GPU GPU, and finally find a hardware and software standard for GP megacore vector matrix branch interrupt multitask compute. Extensions to CPU and extensions to GPU are far from optimal. They are all bolt-ons, and are not always efficient depending on the compute goal. The market is fragmented with different approaches.

CUDA and OpenCL are severely limited for decision-based algorithms, are not great at AI, and have appalling latency for certain tasks. They are good at processing vast quantities of data with rudimentary transformations. But you won't get a CUDA-based chess engine or solution to AI learning that is anywhere close to theoretical maximum efficiency. Not even by orders of magnitude. There are accelerator cards targeted at specific compute scenarios, like the financial industry, such as the Xilinx cards and others, but again, they focus on homogeneous processing of vast data sets rather than on learning, which requires heterogeneous threads and calculations.

So perhaps Intel's GPU team is actually their R&D team trying to experiment and create IP for the broader compute problem. And those really ugly-looking TDP numbers are actually OK when compared to equivalent CPU or CUDA code. Yep, we need a formal press release from Intel to get a better idea of what's going on.
Posted on Reply
#25
Sybaris_Caesar
DeathtoGnomes: "fixed.

Nvidia is the goal post here, not AMD. The fight for 2nd place is moot because the real winner will be consumers."
I don't think they can really dethrone Nvidia tbh, at least in the market we/I am interested in.

At worst, Intel produces a mediocre product but AMD still falls behind because Intel does driver support very well. At best, Intel GPUs handily beat AMD and Radeon slowly becomes irrelevant.
Posted on Reply