Thursday, May 30th 2024

NVIDIA to Stick to Monolithic GPU Dies for its GeForce "Blackwell" Generation

NVIDIA's GeForce "Blackwell" generation of gaming GPUs will stick to traditional monolithic dies. The company will not build its next generation of chips as either disaggregated devices or multi-chip modules. Kopite7kimi, a reliable source for NVIDIA leaks, says that the largest GPU in the generation, the "GB202," is based on a physically monolithic design. The GB202 is expected to power the flagship GeForce RTX 5090 (the RTX 4090 successor), and if NVIDIA is sticking to a traditional chip design for its flagship, then it's unlikely that smaller GPUs will be any different.

In contrast, AMD started building disaggregated devices with its current RDNA 3 generation, with its top two chips, the "Navi 31" and "Navi 32," being disaggregated designs. An interesting rumor suggests that team red's RDNA 4 generation will see a transition from disaggregated chips to multi-chip modules: packages that contain multiple fully-integrated GPU dies. Back in the green camp, NVIDIA is expected to use an advanced 4 nm-class node for its GeForce "Blackwell" GPUs.
Sources: kopite7kimi (Twitter), HXL (Twitter)

30 Comments on NVIDIA to Stick to Monolithic GPU Dies for its GeForce "Blackwell" Generation

#26
bug
wolf: All that bothers me is the price, performance, specs, features, and power consumption. How the sausage is made is below the waterline to a large extent, unless it's going to significantly impact the above-mentioned considerations. For all the hoo-ha about RDNA3 being chiplets, it didn't seem to matter all that much, at least not in a positive way for consumers.
I agree about the relevant metrics.
The thing is, big chips are still expensive to make. Throw in the rumored 512-bit bus and GDDR7 and you have a ballpark for the upcoming pricing.
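Just as a rough sketch of what that bus could mean (the per-pin speeds below are assumptions based on early GDDR7 announcements, not confirmed RTX 5090 specs):

```python
# Back-of-the-envelope memory bandwidth for a rumored 512-bit GDDR7 card.
# Per-pin data rates are assumptions from early GDDR7 announcements,
# not confirmed specs for any particular SKU.
bus_width_bits = 512

for pin_speed_gbps in (28, 30, 32):  # assumed Gbit/s per pin
    bandwidth_gbs = bus_width_bits * pin_speed_gbps / 8  # bits -> bytes
    print(f"{pin_speed_gbps} Gbps/pin -> {bandwidth_gbs:.0f} GB/s")

# 28 Gbps/pin -> 1792 GB/s
# 30 Gbps/pin -> 1920 GB/s
# 32 Gbps/pin -> 2048 GB/s
```

That's roughly double the 4090's ~1 TB/s, and that kind of jump in memory subsystem cost rarely comes cheap.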
Posted on Reply
#27
TheinsanegamerN
The biggest point of interest for me will be the price. If the 5090 is at 4090 pricing and offers a decent jump, that bodes very well for mid-range and high-end card prices.
Broken Processor: It's always safe to assume the customer is getting shafted by Nvidia these days.
I used to be so excited for a new generation of GPUs and CPUs, but I just can't be anymore; the pricing is leaving such a bad taste in my mouth that I just buy peripherals or add to my water cooling. Honestly, consoles never looked better.
Offering better performance at a higher price is so bad you'd rather get railroaded by console makers and their walled gardens?

The grass is supposed to be GREENER on the other side, not covered in barbed wire.
stimpy88: So, if it's an old-fashioned monolithic die, then it's only half of the GB202 they have shown for the datacenter, as that's two dies glued together. So does that mean the 5090 will have a GB203 die?

Looks like the consumers are getting the shaft yet again, unless the 5080 gets the GB203 die as well, I suppose.

The performance uplift over Lovelace will be interesting with this series, as to me it sounds like a lot of overclocking is going to be needed to bring the big performance gains to these cards. Maybe 600 W+ 5090s will be a thing, and 350 W 5080s, etc.?
I don't understand this; how is the customer getting the shaft? If the 5090 is a step up over the 4090, then the customer isn't getting shafted. Now, pricing may do that, certainly. Besides, if the GB203 pulls 600 W, why complain, since a full-size chip would be flat-out impossible to cool?
ARF: Why not a Radeon with more VRAM? The RX 7900 XT has 20 GB, the RX 7800 XT has 16 GB.
I guess right now is a bad moment for buying, because the new generation is coming soon.
I'm assuming it's because if someone wants RT, buying AMD is a terrible idea and gets you, at best, Ampere-tier performance.
AnarchoPrimitiv: Remember how the Radeon Fury X with HBM wasn't a "big deal," and now look at HBM... it's so valuable it can't be used in consumer cards. The first application of a new technology isn't always guaranteed to make a big splash, but eventually it catches on, to the point where we don't understand how we survived without it. Chiplets WILL happen; it's inevitable.
That's a nice cope. In reality, the HBM-equipped Fury X was an embarrassment in the market, and the HBM-equipped Vega 56 and 64 failed to compete with the cheaper GDDR-equipped Pascal cards. HBM still exists, sure, in cards that cost more than my car and sometimes nearly as much as my house. But in consumer products? Nope. Somehow, 10 years later, we still survive in the consumer world without HBM just fine.
Posted on Reply
#28
bug
TheinsanegamerN: I don't understand this; how is the customer getting the shaft? If the 5090 is a step up over the 4090, then the customer isn't getting shafted. Now, pricing may do that, certainly. Besides, if the GB203 pulls 600 W, why complain, since a full-size chip would be flat-out impossible to cool?
This is the easiest to explain: unreleased product, unknown performance, vague price point... Clearly we're getting screwed :kookoo:
Posted on Reply
#29
Lycanwolfen
I'm surprised that NVIDIA has not gone into multi-chip designs. Their new NVLink, built into their pro cards, is fast enough to almost act as one chip. With multi-chip designs, you would get more powerful GPUs with less heat production. They might be less power hungry too, with the load balanced between chips. Imagine a quad-core GPU with NVLinks between the dies: you could, in theory, get more output and power than from one huge chip at half the speed. NVIDIA seems to be going backwards; I mean, look at CPUs these days, multiple cores are normal. And yes, I'm still a believer in SLI, as two GPUs were always better than one. But since motherboards don't support it anymore, we have to look at the idea of NVLinks between GPUs on a single card.
Posted on Reply
#30
ARF
Lycanwolfen: I'm surprised that NVIDIA has not gone into multi-chip designs. Their new NVLink, built into their pro cards, is fast enough to almost act as one chip. With multi-chip designs, you would get more powerful GPUs with less heat production. They might be less power hungry too, with the load balanced between chips. Imagine a quad-core GPU with NVLinks between the dies: you could, in theory, get more output and power than from one huge chip at half the speed. NVIDIA seems to be going backwards; I mean, look at CPUs these days, multiple cores are normal. And yes, I'm still a believer in SLI, as two GPUs were always better than one. But since motherboards don't support it anymore, we have to look at the idea of NVLinks between GPUs on a single card.
The key word here is "almost".
If NVLink can provide up to 1.8 TB/s of communication bandwidth, it's still considerably slower than communication within a monolithic chip, as seen in benchmarks of L0 and L1 caches, which can reach 7-9 TB/s.
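A quick sketch of the gap, taking the rough figures above at face value (these are ballpark numbers from this thread, not measured values):

```python
# Ballpark comparison of NVLink interconnect bandwidth vs. on-die cache
# bandwidth, using the rough figures quoted in this thread (not measurements).
nvlink_tbs = 1.8  # assumed NVLink aggregate bandwidth, TB/s

for label, cache_tbs in (("cache, low estimate", 7.0), ("cache, high estimate", 9.0)):
    print(f"{label}: {cache_tbs / nvlink_tbs:.1f}x NVLink")

# cache, low estimate: 3.9x NVLink
# cache, high estimate: 5.0x NVLink
```

And that ignores latency, which is arguably the harder problem in making two dies behave like one.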
Posted on Reply