Monday, November 6th 2017

Intel Announces "Coffee Lake" + AMD "Vega" Multi-chip Modules

Rumors of the unthinkable silicon collaboration between Intel and AMD are true, as Intel announced its first multi-chip module (MCM) that combines a 14 nm Core "Coffee Lake-H" CPU die with a specialized 14 nm GPU die by AMD, based on the "Vega" architecture. The GPU die has its own HBM2 memory stack over a 1024-bit wide memory bus. Unlike the AMD "Vega 10" and "Fiji" MCMs, which use a silicon interposer to connect the GPU die to its memory stacks, Intel deployed the Embedded Multi-Die Interconnect Bridge (EMIB), a high-density substrate-level interconnect. The CPU and GPU dies talk to each other over PCI-Express gen 3.0, wired through the package substrate.
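For context, the peak bandwidth of the two on-package links can be estimated with simple arithmetic; a minimal back-of-the-envelope sketch, where the x8 lane count and the 2.0 Gbps/pin HBM2 data rate are assumptions on our part (the announcement only specifies "PCI-Express gen 3.0" and a 1024-bit bus):

```python
# Back-of-the-envelope peak bandwidths for the two on-package links mentioned
# above. The x8 lane count and the 2.0 Gbps/pin HBM2 data rate are assumptions,
# not figures from the announcement.

PCIE3_GT_PER_LANE = 8.0     # PCIe 3.0 runs at 8 GT/s per lane
PCIE3_ENCODING = 128 / 130  # 128b/130b line encoding overhead

def pcie3_bandwidth(lanes: int) -> float:
    """Peak one-way PCIe 3.0 bandwidth in GB/s for a given lane count."""
    return lanes * PCIE3_GT_PER_LANE * PCIE3_ENCODING / 8

def hbm2_bandwidth(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak HBM2 bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8

print(f"CPU <-> GPU, PCIe 3.0 x8: {pcie3_bandwidth(8):.2f} GB/s")        # ~7.88
print(f"GPU <-> HBM2, 1024-bit:   {hbm2_bandwidth(1024, 2.0):.0f} GB/s") # 256
```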

This multi-chip module, with a tiny Z-height, significantly reduces the board footprint of a CPU + discrete graphics implementation compared to separate CPU and GPU packages with discrete GDDR memory chips for the GPU, and enables a new breed of ultra-portable notebooks that pack solid graphics muscle. The MCM should enable devices as thin as 11 mm. The specifications of the CPU and dGPU dies remain under wraps. The first devices with these MCMs will launch by Q1 2018.


51 Comments on Intel Announces "Coffee Lake" + AMD "Vega" Multi-chip Modules

#26
thesmokingman
Man, you can just see the imprint of major traders shorting, knowing all this in advance. Their dumping of the stock after financial forecasts, then the next day gobbling up what they dumped after this release. It's all too freaking convenient.
Posted on Reply
#27
Imsochobo
R0H1TI understand that, I'm saying an AMD APU like this would only need IF to connect all the 3 on the same package. That should be higher bandwidth & likely less latency as well.
The interposer is probably different between HBM->GPU and GPU->CPU.
If both the GPU and CPU run off the same VRM, it's a big efficiency win.
PatriotThis wasn't unthinkable... in fact it was rumored and this was the outcome. The rumor was AMD was going to license IP to Intel because Intel stopped licensing from Nvidia.

They both denied it was licensing... so it was surmised it had to be a full chip. Tada.
Conspiracy incoming:
AMD needs money and sales, Intel wants to weaken Nvidia to have better chances at its IP, and AMD didn't want to give up IP either, so here they are?
Posted on Reply
#28
bug
R0H1TI understand that, I'm saying an AMD APU like this would only need IF to connect all the 3 on the same package. That should be higher bandwidth & likely less latency as well.
Ah, ok then, that makes sense. I genuinely didn't catch your drift.
Though with Vega already in place and IF added to the mix, I'd be a bit worried about power draw. But hey, let's see it before criticising it, right?
Posted on Reply
#30
Frick
Fishfaced Nincompoop
Hugh MungusWhat devices are these for?
Think Macbooks and gaming laptops, but now thinner.

Me I'm more interested in Raven Ridge and the MX150 from Nvidia.
Posted on Reply
#31
Dbiggs9
thesmokingmanMan, you can just see the imprint of major traders shorting, knowing all this in advance. Their dumping of the stock after financial forecasts, then the next day gobbling up what they dumped after this release. It's all too freaking convenient.
Yep, still 150M shorts; they had to drive the price down to keep losses down and buy long. Wall Street, what's new?
Posted on Reply
#32
Steevo
I wonder how fanboys will spin this (on both sides) and, more importantly, whether they'll buy AMD stock.
Posted on Reply
#33
Gasaraki
FreezGood for Intel. Not so bad for AMD. NVidia.. poor NVidia. :shadedshu:
You think nVidia is scared? nVidia is so far ahead in AI, supercomputers and gaming processors that mid to low end integrated graphics is not going to faze them. High end gaming laptops will still be using high end Intel cpus with 1080s in them.
Posted on Reply
#34
BiggieShady
So this is Vega + Coffee Lake MCM... and the video makes it look like there's a generic GPU and it's Intel's own HBM+interposer implementation, when really it's AMD's implementation and Intel's own fabrication (which probably won't matter much, mind you).
Posted on Reply
#35
Steevo
After further inspection, it seems this may be a way for AMD to gain access to Intel fabs, and based on the tech, it seems to be all AMD with Intel cores soldered on after.
Posted on Reply
#36
Bansaku
Why is this surprising for some? Look at Samsung and Apple's partnership, despite being bitter rivals! If it's good for business, why not?
Posted on Reply
#37
ppn
256 GB/s = 1024-bit HBM2 = 256-bit GDDR5 = GTX 1070 Ti, or half a Vega 64.
On TSMC 7 nm the 1070 Ti would be pretty small, around 100 mm², and also 40% faster, reaching 1080 Ti performance. It would be nice to have a separate GPU on the socket; there's no need for it to be integrated and soldered with the CPU at all, since the GPU part becomes irrelevant much faster.
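
For what it's worth, ppn's equivalence does hold on paper; a quick sanity check, where the per-pin data rates are typical published figures we're assuming, not numbers from this announcement:

```python
# Checking ppn's equivalence: peak bandwidth (GB/s) = bus_bits * Gbps_per_pin / 8.
# The per-pin rates (2.0 Gbps HBM2, 8.0 Gbps GDDR5, 1.89 Gbps for Vega 64's HBM2)
# are typical published figures, assumed here rather than taken from the article.

def bw(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8  # GB/s

print(bw(1024, 2.0))   # 1024-bit HBM2               -> 256.0 GB/s
print(bw(256, 8.0))    # 256-bit GDDR5 (GTX 1070 Ti) -> 256.0 GB/s, identical
print(bw(2048, 1.89))  # 2048-bit HBM2 (Vega 64)     -> ~484 GB/s, roughly double
```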
Posted on Reply
#38
Dbiggs9
I think we will see this in all future products if this deal works out. Intel needs a GPU.
Posted on Reply
#39
Assimilator
So:

* Intel gets graphics chips that can actually compete and finally kills their ailing "GT" graphics line. Dear god, it's about damn time.
* AMD gets something that (presumably) works better/more reliably than a silicon interposer, doesn't necessarily destroy good chips when it's faulty, and is probably cheaper too. The only question is bandwidth, although I'm pretty sure that if EMIB weren't performant enough, they wouldn't be pairing it with HBM2.

This is an interesting move with interesting timing from AMD (although I imagine it was in the works long before Zen was a success) - it could mean that Raven Ridge isn't quite as good as we're all hoping. Or, it could simply mean that Intel is paying AMD a shitload of money.

I honestly believe that the end result will be Intel making a bid to buy RTG at some point in the future. Now that truly would make for interesting times...
Posted on Reply
#40
Jism
Durvelle27This seems like a shot in the foot for AMD APUs
This exactly.

AMD could sell Ryzen CPUs together with Vega GPUs on its own packaging with HBM2. It would mean bigger margins in the long run.

But with Intel covering 80% of the CPU market across desktop, laptop, and enterprise, there's a great chance that devs will actually start working with AMD's GCN technology and get the best out of AMD's GPUs.

HBM2 is a very good technique. It kills the expensive cache Intel used to drop onto its CPUs for the iGPUs, the motherboards with a soldered GDDR5 chip, and the systems with shared memory. Those solutions are very limited and cost bandwidth, power, and resources compared to HBM2.
Posted on Reply
#41
evernessince
GasarakiYou think nVidia is scared? nVidia is so far ahead in AI, supercomputers and gaming processors that mid to low end integrated graphics is not going to faze them. High end gaming laptops will still be using high end Intel cpus with 1080s in them.
Well, it's a good thing those ultra-high-end laptops with 1080s in them only represent 0.001% of the market. The majority of Nvidia's mobile money comes from lower-end products, and that's exactly what Intel and AMD are targeting.
Posted on Reply
#42
Parn
A future game console chip? Maybe Xbox One X mk2? Or PS5?
Posted on Reply
#43
scorpion_amd13
Someone has to say it... Now we really do know why Vega was so horribly late. This GPU really does seem to be the proverbial jack of all trades (the master-of-none part does seem to apply as well). I mean, it pulls off high-end GPU, mining GPU, professional GPU, IGP in APUs, and it can also play nice with Intel cores on the same package? Am I missing something they didn't try to slap Vega onto? This couldn't have been easy. To be honest, it's pretty impressive. It's also pretty amazing that this Intel+AMD project actually materialized. I thought Intel would at the very least go for some rebranding. I certainly never expected them to ever ditch "their own" GPUs and accept that they'll never manage to get a decent driver out for them. Just wow, I really am impressed.

What the hell is happening? Is this what travelling to an alternate reality feels like? :)))
Posted on Reply
#44
Imsochobo
ParnA future game console chip? Maybe Xbox One X mk2? Or PS5?
No, AMD will target that market with Ryzen + Vega.
Apple may be interested for the MacBook Pro, as may other OEMs in that class.

Who'd want an Intel CPU + AMD GPU in a console? That's like throwing money out the window... ARM + Nvidia or x86 + AMD are the only real choices for a high-performance console.
Posted on Reply
#45
Aquinus
Resident Wat-man
Going the MCM route has this little advantage of scaling really well while keeping form factors small. This didn't use to be cheap to do, but it's getting cheaper and more realistic, even at the CPU level, as we've seen with Threadripper/EPYC. The cheaper and more mature it gets, the more common it will become. I honestly think this is the beginning of MCM becoming the new way of "keeping up with Moore's law," because the traditional monolithic die isn't going to cut it: huge dies don't scale. Smaller dies have better yields and are less likely to have defects, which means higher clocks and more efficient circuits. Using more of them means better performance because heat can be spread out over a larger area. Before you know it, I bet you'll start seeing MCM GPU packages as well, to combat the same problem: huge dies cost too much to produce. It's a problem DRAM solved ages ago by using multiple chips instead of one massive one; we just didn't have a good way to do this with other components because the development hadn't been done yet.
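
The yield argument here can be made concrete with the classic Poisson defect model; a minimal sketch, where the defect density and die areas are illustrative assumptions rather than real fab numbers:

```python
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability a die is defect-free, Y = exp(-D * A)."""
    return math.exp(-defect_density * die_area_cm2)

D = 0.2  # defects per cm^2 -- an illustrative assumption, not a real fab figure

big = poisson_yield(D, 6.0)    # one monolithic 600 mm^2 die
small = poisson_yield(D, 1.5)  # one 150 mm^2 chiplet

print(f"monolithic 600 mm^2 die yield: {big:.0%}")   # ~30%
print(f"150 mm^2 chiplet yield:        {small:.0%}")  # ~74%
# Good chiplets are binned and combined after test, so the usable-silicon
# fraction tracks the small-die yield -- exactly the scaling advantage above.
```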

tl;dr: Welcome to the future.
Posted on Reply
#46
Totally
GasarakiYou think nVidia is scared? nVidia is so far ahead in AI, supercomputers and gaming processors that mid to low end integrated graphics is not going to faze them. High end gaming laptops will still be using high end Intel cpus with 1080s in them.
Ahead in AI? Ahead of whom? Microsoft, Facebook, Google, IBM, Samsung, even Honda and Toyota are releasing updates on their progress. The last we heard from Nvidia was that accelerator thing a while back. I don't see the big deal there, since the AI problem is software, not hardware. Does Nvidia even have an AI initiative? They can't exactly claim they're ahead in AI just because they released a high-throughput GPGPU part aimed at AI research.
Posted on Reply
#47
Nihilus
I feel like this was a message from Intel to Nvidia: "you guys are dicks and we don't necessarily need you."
Posted on Reply
#48
bug
AquinusGoing the MCM route has this little advantage of scaling really well while keeping form factors small. ...
That's true, if by "scaling really well" you mean "allows us to put 2-4 chips on a package on the cheap".
It's a good solution for what we need right now, but it won't scale to gluing 100 chips together as easily ;)
Posted on Reply
#49
Aquinus
Resident Wat-man
bugThat's true, if by "scaling really well" you mean "allows us to put 2-4 chips on a die on the cheap".
It's a good solution for what we need right now, but it won't scale allowing us to glue 100 chips as easily ;)
No, but it's more realistic than shoving 100 cores onto a single die and expecting good yields. It's also not like we have 100-core CPUs right now anyway, and going this route makes things like 16- and 32-core parts cheaper than if they were built using a monolithic design with one massive die. It's just (in general) cheaper to produce with better quality. It's a hands-down win-win.
Posted on Reply
#50
ktraj1
cadavecaThere is ZERO impact on APUs... MCM = Multi-Chip Module, which is most definitely NOT an APU or even an iGPU... it's an Intel CPU and AMD GPU married together on a single substrate. This simply gets rid of the GPU PCB, as well as the electrical distance between the CPU and GPU, which should allow for lower latency between the two separate pieces of silicon.

The news article even explains how this works, so I'm very perplexed at how you came to this conclusion...
APUs are generally not seen as a replacement for a standalone graphics card. Instead, they aim to replace the GPUs embedded on the motherboard. So this solves the distance problem and replaces embedding a GPU on the motherboard.
Posted on Reply