Monday, March 11th 2024

NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards to implement GDDR7 memory is rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliable source for NVIDIA leaks. This is despite the first GDDR7 memory chips being capable of 32 Gbps speeds. NVIDIA will also stick with 16 Gbit densities for its GDDR7 memory chips, which means memory sizes could remain largely unchanged for the next generation. Even so, 28 Gbps GDDR7 provides 55% higher bandwidth than 18 Gbps GDDR6 and 33% higher bandwidth than 21 Gbps GDDR6X. It remains to be seen what memory bus widths NVIDIA chooses for its individual SKUs.
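The quoted percentages follow directly from the per-pin data rates. A quick sketch of the arithmetic (the 256-bit bus width here is an illustrative example, not a confirmed RTX 50-series spec):

```python
# Peak memory bandwidth in GB/s = per-pin data rate (Gbps) x bus width (bits) / 8.
# The bus width below is illustrative, not a confirmed Blackwell spec.
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

for rate, label in ((18, "GDDR6"), (21, "GDDR6X"), (28, "GDDR7")):
    bw = bandwidth_gb_s(rate, 256)  # e.g. a 256-bit bus
    print(f"{rate} Gbps {label} x 256-bit = {bw:.0f} GB/s")

# 28/18 ~ 1.56 (the 55% uplift over GDDR6); 28/21 ~ 1.33 (33% over GDDR6X)
```

The ratios are independent of bus width, which is why the article can quote them before the SKU configurations are known.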

NVIDIA's decision to use 28 Gbps as its memory speed has some precedent in recent history. The company's first GPUs to implement GDDR6, the RTX 20-series "Turing," opted for 14 Gbps despite 16 Gbps GDDR6 chips being available; 28 Gbps is exactly double that speed. Future generations of GeForce RTX GPUs, or even refreshes within the RTX 50-series, could see NVIDIA opt for higher memory speeds such as 32 Gbps. Samsung even plans to offer chips as fast as 36 Gbps when the standard debuts. Besides a generational doubling in speeds, GDDR7 is more energy-efficient, as it operates at lower voltages than GDDR6. It also uses more advanced PAM3 physical-layer signaling, compared to NRZ for JEDEC-standard GDDR6.
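The signaling difference can be made concrete: NRZ carries one bit per symbol, while PAM3's three voltage levels allow up to log2(3) ≈ 1.58 bits per symbol (GDDR7 is commonly described as encoding 3 bits across 2 PAM3 symbols, i.e. 1.5 bits per symbol). A minimal illustration:

```python
import math

# Information capacity per symbol for each line code: log2(number of voltage levels).
codes = {"NRZ (GDDR6)": 2, "PAM3 (GDDR7)": 3, "PAM4 (GDDR6X)": 4}
for name, levels in codes.items():
    print(f"{name}: up to {math.log2(levels):.2f} bits/symbol")

# GDDR7's scheme packs 3 bits into 2 PAM3 symbols:
gddr7_bits_per_symbol = 3 / 2  # 1.5 of the theoretical ~1.58
```

More bits per symbol means the same data rate can be reached at a lower symbol rate, which is part of where the efficiency gain comes from.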
Sources: kopite7kimi (Twitter), VideoCardz

47 Comments on NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

#1
Bwaze
You misspelled "Blackmail".

;-)
#2
Space Lynx
Astronaut
It will be interesting to see benchmarks. My guess is GDDR7 will help 4K and 3440x1440 gamers, but any resolution below that may not benefit from the extra speed. Nonetheless, it's nice to see progress; we are lucky this industry even exists and they didn't just force cloud gaming down our throats. That day will come, but thankfully it's not today.
#3
ARF
If AMD doesn't return with a large monolithic GPU, you will see how nvidia will charge $2900 for a GB103-based RTX 5080 that's 10-20% faster than the RTX 4090, and won't even bother to release the much larger GB102 in an RTX 5090.

Let's hope AMD and nvidia don't intentionally align their lineups once again, so that the Radeon RX 8700 XT is 10% faster than the RX 7800 XT, and the RTX 5060 is 5% faster than the RTX 4060.
That would be a disaster, but at the same time gamers will save some money, because those lineups will make the negative buying decisions much easier.
#4
remekra
They should not return to monolithic design. They have the advantage that they have already released chiplet-based consumer and server GPUs. Sooner or later nvidia will also switch to a chiplet design, but design and actual release are two different things. It remains to be seen if AMD will use that advantage.
#5
Dirt Chip
So one can expect the same amount of GDDR per GPU tier as the current gen.
Lovely.
We can all keep enjoying the endless 'not enough memory vs. it's just allocating' and 'you want to be future-proof vs. by then your fps will tank anyway' debates for another episode.

Also, NV must be keeping the 32 Gbps variant for the upcoming 'Super' 5xxx series.
#6
Bwaze
ARF: If AMD doesn't return with a large monolithic GPU, you will see how nvidia will charge $2900 for GB103 based RTX 5080 10-20% faster than RTX 4090 and won't even bother to release the much larger GB102 in RTX 5090.

Let's hope AMD and nvidia don't intentionally align once again their lineups, so that Radeon RX 8700 XT is 10% faster than RX 7800 XT, and RTX 5060 is 5% faster than RTX 4060.
That will be a disaster, but at the same time the gamers will save some money because those lineups will make the negative buying decisions much easier.
I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without the public outrage, and one option is of course:

- approx. $2000 RTX 5080, which will be faster than RTX 4090 ("so you're getting your money's worth", reviewers will be paid to say)

- a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
#7
HOkay
My hopes are not high for the 5000 series, given the attention going to AI hardware at the moment and AMD not competing at the top end this time round :( On the plus side, my 4090 will remain relevant for longer, I guess.
Bwaze: I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without the public outrage, and one option is of course:

- approx. $2000 RTX 5080, which will be faster than RTX 4090 ("so you're getting your money's worth", reviewers will be paid to say)

- a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
I don't think even Nvidia would be so bold as to jump to $2k for the 5080. I agree it will be just faster than the 4090 though; my guess is just under 4090 MSRP with 10% more performance, so Jensen can pretend he's our friend.
#8
ARF
remekra: They should not return to monolithic design.
Monolithic is better.
#9
3x0
ARF: Monolithic is better.
On-chip communication is only one factor in the grand scheme of things ;)
#10
Onasi
ARF: Monolithic is better.
Theoretically, if we are talking in a vacuum, sure. It's also unsustainable in terms of yields as designs get denser and more complex. We have already seen it with AD102. There is a good reason AMD is on chiplets, Intel is going to chiplets, and for NV it's a question of when, not if.
#11
zo0lykas
In the last 30 days I've seen 4 or 5 posts telling the same story over and over and over again.

Now every time nvidia farts we get a new post..
#12
evernessince
ARF: Monolithic is better.
No, chiplet is 100% better. For starters, there are papers demonstrating that a chiplet-based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point-to-point communication layer, which will be superior to a monolithic design that has to route wires around CPU features, particularly as complexity scales up.

A monolithic chip is individually smaller, but that's essentially irrelevant given that chiplet designs are only larger horizontally: it does not make the chip extend beyond the height of caps, for example, and does not increase size requirements for devices. The larger chiplet-based design would be easier to cool as well. Of course, both of the above depend on the exact chiplet design; Intel's chiplets, for example, are much closer together, so the size difference versus a monolithic design is going to be smaller.

Chiplets also allow modularity, superior binning (each individual chiplet can be binned), chips that exceed the reticle limit, cheaper production, and higher yield compared to the same chip as a monolithic design.

AMD has 3 chiplet designs for its entire CPU lineup: the IO die, the Zen 4 core die, and the Zen 4c core die. Meanwhile, Intel needs dozens of designs to address the same markets.

This is why Intel is switching to chiplets; it's just better.
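The yield argument above can be illustrated with a toy Poisson defect model (the defect density and die areas below are assumptions for illustration, not foundry data):

```python
import math

# Toy Poisson yield model: fraction of defect-free dies = exp(-D * A),
# with D in defects/cm^2 and A converted from mm^2 to cm^2.
# All numbers are illustrative assumptions, not real process data.
def die_yield(defects_per_cm2: float, area_mm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.1  # assumed defects per cm^2
print(f"600 mm^2 monolithic die: {die_yield(D, 600):.1%} yield")
print(f"150 mm^2 chiplet:        {die_yield(D, 150):.1%} yield")
# A defective chiplet throws away 150 mm^2 of silicon; a defect in the
# monolithic die throws away the full 600 mm^2 - that asymmetry is the
# core of the yield advantage claimed above.
```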
#13
N/A
That was the case with 7 Gbps GDDR5 as well, for low cost. Expect the 60-series with 28-32 Gbps and the 70-series with 42 Gbps,
unless they come up with GDDR7X again, like G5X and G6X before, just for a buzzword and an otherwise nothingburger.
#14
ARF
Onasi: We already have seen it with the AD102.
Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
Please don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work, and they leave large portions of performance on the table.
evernessince: Meanwhile Intel needs dozens of designs to address the same markets.
That's because Intel is a large corporation and can afford it. I, for one, also support the idea that it should cut at least 50% of its projects, because they are not necessary and just waste time and money.
#15
Dristun
Bwaze: - a tier above that will be a new "Titan" class card, but aimed at "Home AI acceleration". Price? Sky is the limit.
As much as I personally want a card like that to happen, there are two points that make it highly unlikely:
1) They already have the sky-high priced RTX 6000 Ada, an almost fully unlocked AD102 with 48 GB - exactly what one would want for "home AI acceleration." They're charging $7000 for it and it's consistently out of stock.
2) They can charge even more by allocating production towards dedicated AI chips! Those go for $15k+ while the actual die size is around the same. Sure, packaging costs, even more memory and all that, but, you know, that's where the margins are.

At the end of the day, I wouldn't be too surprised if they stopped bothering with gaming altogether. IMHO this might end up even worse for the home GPU market than mining - that stuff went in cycles, miners weren't paying $20k for a card even in the worst moments, and they wanted the same product as we do. AI customers want a different product - and they can pay the price for it, hogging all the supply they can get. If they keep doing that, where's the incentive to sell us cards or make them better? Even the high end will become like the mid- and low-end already have, with measly +5-10% boosts every couple of years.
#16
napata
evernessince: No, chiplet is 100% better. For starters there are papers demonstrating that a chiplet based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point to point communication layer, which will be superior to a monolithic design that has to route wires around CPU features particularly as complexity scales up.
And HBM is better than GDDR6X. And graphene is better than silicon. For regular consumers, chiplets are purely a cost-saving measure with performance/efficiency downsides. Although I'm not even sure they're cheaper to make GPU-wise, as I'm sure the 7900 XTX costs more in chips than the 4080. We don't know if chiplets are to blame for RDNA3's shortcomings, though.

Of course at one point chiplets will be a necessity because of rising costs but I don't think of that as a positive for consumers. It's going to be all about increasing margins.
evernessince: AMD has 3 chiplet designs for its entire CPU lineup: the IO die, the Zen 4 core die, and the Zen 4c core die.
That's not really true, is it? AMD also uses monolithic CPUs.
#17
Chomiq
ARF: Monolithic is better.
Searching is fun:
#18
napata
Chomiq: Searching is fun:
I bet they failed to Google this at Nvidia. You should mail it to them.
#19
evernessince
ARF: Meanwhile Navi 31 has a hard time to keep up with AD103. That is much smaller and less expensive to make.
Is it? Where are you getting this info?

The GCD of Navi 31 is smaller and the MCDs are tiny, for all we know it could very well be cheaper even if the total die space used is higher.
ARF: Please, don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work and they leave large portions of performance on the table.
I'd like to point out that AMD is whooping Intel in the server and enterprise space, and has the fastest consumer desktop processors as well. To me, it seems you are purposefully ignoring things that don't favor your point.

AMD pointed out why they could not yet scale up their GPUs: bandwidth.
ARF: That's because Intel is a large corporation and it can afford it. I, for one, also support the idea that it must cut at least 50% of the projects because they are not necessary and instead waste time and money.
You are implying that a company is wasting money for the fun of it. I'm sure their CEO and shareholders would highly disagree.
napata: And HBM is better than GDDR6X. And graphene is better than silicon. For regular consumers, chiplets are purely a cost-saving measure with performance/efficiency downsides.
The X3D CPUs absolutely prove this false. The binning of the 5950X does as well.
napata: Although I'm not even sure if they're cheaper to make GPU-wise as I'm sure the 7900XTX is more expensive in chip cost than the 4080. We don't know if chiplets are to blame though for RDNA3's shortcomings.
The 7900 XTX's GCD has a die size of 304mm2 and the MCDs are 34mm2 each. The cost is similar to that of a mid-range GPU. RDNA3 doesn't reach 4090-level performance because AMD was unable to get a second GCD working.
napata: I bet they failed to Google this at Nvidia. You should mail it to them.
Nvidia wrote a paper in 2017 about how chiplets are better, FYI.
#20
Onasi
ARF: Meanwhile Navi 31 has a hard time to keep up with AD103. That is much smaller and less expensive to make.
So are we legitimately comparing the first consumer GPU to use chiplets against a technology that's at its peak (the monolithic GPU), and immediately coming to the conclusion that chiplets are worthless? That's cool. Should I remind you that the OG Zen also had a "hard time" keeping up with the 7700K in many tasks? Guess that chiplet approach was worthless too.
#21
Chomiq
napata: I bet they failed to Google this at Nvidia. You should mail it to them.
I guess that's why Nvidia GPUs are becoming cheaper with every generation, right? Right?

#22
bug
napata: I bet they failed to Google this at Nvidia. You should mail it to them.
Monolithic is better for efficient designs; multi-chip is better for scaling. There is a scaling threshold beyond which you need to go multi-chip. The thing is, GPUs right now are right on the edge: monolithic is becoming expensive to build, but multi-chip isn't justified just yet.

Nvidia can afford to build expensive monoliths, because AI will gobble up anything (and likes efficiency). AMD... just tries to play catch up.
#23
ARF
Onasi: So are we legitimately comparing what is a first consumer GPU chip using chiplets with a technology that's at its peak (monolithic chip GPU) and immediately coming to conclusion that chiplets are worthless? That's cool. Should I remind you that the OG Zen also had a "hard time" keeping up with the 7700K in many tasks? Guess that chiplet approach was worthless too.
Let's look at the "grand scheme of things".
Intel hasn't moved to chiplets and I doubt they have plans to.
DG2-512 - monolithic design
DG2-128 - monolithic design

Nvidia as well:
(future) GB202 - monolithic design
(future) GB203 - monolithic design
(future) GB205 - monolithic design
(future) GB206 - monolithic design
(future) GB207 - monolithic design

AD102 - monolithic design
AD103 - monolithic design
AD104 - monolithic design
AD106 - monolithic design
AD107 - monolithic design

AMD:
(future) Navi 40 larger - monolithic design
(future) Navi 40 smaller - monolithic design

Navi 31 - non-monolithic design
Navi 32 - non-monolithic design

Navi 33 - monolithic design

Navi 21 - monolithic design
Navi 22 - monolithic design
Navi 23 - monolithic design
Navi 24 - monolithic design

What is possible: Navi 31 and Navi 32 are the first and last chiplet designs, much like AMD's previous mistake with HBM, which they no longer use, or the abandoned multi-GPU / MCM products.
#24
GhostRyder
ARF: Let's look at the "grand scheme of things".
Intel hasn't moved to chiplets and I doubt they have plans about it.
DG2-512 - monolithic design
DG2-128 - monolithic design

Nvidia as well:
(future) GB202 - monolithic design
(future) GB203 - monolithic design
(future) GB205 - monolithic design
(future) GB206 - monolithic design
(future) GB207 - monolithic design

AD102 - monolithic design
AD103 - monolithic design
AD104 - monolithic design
AD106 - monolithic design
AD107 - monolithic design

AMD:
(future) Navi 40 larger - monolithic design
(future) Navi 40 smaller - monolithic design

Navi 31 - non-monolithic design
Navi 32 - non-monolithic design

Navi 33 - monolithic design

Navi 21 - monolithic design
Navi 22 - monolithic design
Navi 23 - monolithic design
Navi 24 - monolithic design

What is possible: Navi 31 and Navi 32 are the first and last chiplet designs, much like AMD's previous mistake with HBM, which they no longer use, or the abandoned multi-GPU / MCM products.
Well, I don't believe AMD is dropping the chiplet design, just that the smaller dies will still be monolithic, and that will be the focus at first: the RX 8800 series and below, before the high-end chiplet parts come out. Of course, that's all leaks/rumors at the moment, so it could be wrong either way.

I am all for faster and more efficient memory, we really need it as we keep having higher and higher density requirements.
#25
Zforgetaboutit
The title claims "... Blackwell to use...". The article says it's a rumor.

I want your titles to be honest 100% of the time, and the articles to be consistent with the titles.