Monday, March 11th 2024

NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards to implement GDDR7 memory is rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliable source for NVIDIA leaks. This is despite the first GDDR7 memory chips being capable of 32 Gbps speeds. NVIDIA will also stick with 16 Gbit densities for its GDDR7 memory chips, which means memory sizes could remain largely unchanged for the next generation. Even so, 28 Gbps GDDR7 provides 55% higher bandwidth than 18 Gbps GDDR6 and 33% higher bandwidth than 21 Gbps GDDR6X. It remains to be seen which memory bus widths NVIDIA chooses for its individual SKUs.
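As a rough illustration of that math, here is a minimal Python sketch of how per-pin data rate and bus width translate into total bandwidth; the 256-bit and 384-bit widths are illustrative assumptions only, since the actual bus widths for individual SKUs are not known.

```python
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s: per-pin rate (Gbps) x bus width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

for label, rate in [("GDDR6 18 Gbps", 18), ("GDDR6X 21 Gbps", 21), ("GDDR7 28 Gbps", 28)]:
    for bus in (256, 384):  # illustrative bus widths, not confirmed SKU configurations
        print(f"{label:>15} @ {bus}-bit: {bandwidth_gb_s(rate, bus):7.1f} GB/s")

# Relative per-pin gains quoted in the article (independent of bus width):
print(f"28 vs 18 Gbps: +{(28 / 18 - 1) * 100:.1f}%")  # ~55.6%
print(f"28 vs 21 Gbps: +{(28 / 21 - 1) * 100:.1f}%")  # ~33.3%
```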

NVIDIA's decision to use 28 Gbps as its memory speed has some precedent in recent history. The company's first GPUs to implement GDDR6, the RTX 20-series "Turing," opted for 14 Gbps despite 16 Gbps GDDR6 chips being available, and 28 Gbps is exactly double that speed. Future generations of GeForce RTX GPUs, or even refreshes within the RTX 50-series, could see NVIDIA opt for higher memory speeds such as 32 Gbps, and companies like Samsung plan to offer chips as fast as 36 Gbps as the standard debuts. Besides a generational doubling in speed, GDDR7 is more energy-efficient, as it operates at lower voltages than GDDR6. It also uses more advanced PAM3 physical-layer signaling, compared to the NRZ used by JEDEC-standard GDDR6.
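As a loose sketch of why the signaling change matters, the snippet below compares the symbol rate each line code would need to reach 28 Gbps per pin, assuming the commonly described GDDR7 scheme of packing 3 bits into 2 PAM3 symbols (the encoding details here are an assumption, not a spec citation).

```python
# For a fixed per-pin data rate, more bits per symbol means a lower symbol rate,
# which eases signal-integrity demands on the memory interface. The PAM3 figure
# assumes the commonly described GDDR7 packing of 3 bits into 2 symbols.
TARGET_GBPS = 28.0

bits_per_symbol = {
    "NRZ  (GDDR6,  2 levels)": 1.0,    # 1 bit per symbol
    "PAM3 (GDDR7,  3 levels)": 3 / 2,  # 3 bits per 2 symbols
    "PAM4 (GDDR6X, 4 levels)": 2.0,    # 2 bits per symbol
}

for name, bps in bits_per_symbol.items():
    print(f"{name}: ~{TARGET_GBPS / bps:.1f} GBaud needed for {TARGET_GBPS:.0f} Gbps per pin")
```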
Sources: kopite7kimi (Twitter), VideoCardz

47 Comments on NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

#26
Baba
ZforgetaboutitThe title claims "... Blackwell to use...". The article says it's a rumor.

I want your titles to be honest 100% of the time, and the articles to be consistent with the titles.
That would not get the same amount of clicks.

If true, that makes perfect business sense. Nvidia will start low and have the ability to do a mid-cycle refresh with faster memory.
Posted on Reply
#27
napata
evernessinceThe 7900 XTX has a GCD die size of 304 mm² and the MCDs are 34 mm² each. The cost is similar to that of a mid-range GPU. RDNA3 doesn't reach 4090-level performance because AMD was unable to get a 2nd GCD working.
Even Kepler said the 7900XTX is a good deal more expensive than the 4080 and he's generally very pro-AMD. Don't forget the 4080 only has a 379 mm² die so it's barely bigger than the GCD of 7900XTX. The 7900XTX is still a 500+ mm² GPU. N31 still has 3 different cuts despite chiplets so what did chiplets actually gain AMD?

You keep mentioning CPUs because for GPUs it's just not better at this point in time and the main reason is that it's still cheap enough to make monolithic dies. Like I said before even for CPUs AMD still uses monolithic designs.
Nvidia wrote a paper in 2017 about how chiplets are better, FYI.
They seem to have forgotten about that paper, with Blackwell still being monolithic seven years later.

Clearly for Nvidia it's not better at the moment, as either the cost to make it work negates the savings or the performance/efficiency hit is too big. It's so weird to me how people on here act as if they know better than the hardware engineers at Nvidia, and that you can just Google the answer or read some papers.
ChomiqI guess that's why Nvidia GPUs are becoming cheaper with every generation, right? Right?

Funnily enough the 4090 was actually cheaper than the 3090 if you take into account inflation. What's AMD's excuse for the increasing prices? They've moved to chiplets now.
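A rough back-of-the-envelope check of that inflation claim, using the launch MSRPs and an assumed ~15% cumulative US inflation between the two launch dates (the inflation factor is an approximation, not an official figure):

```python
# Launch MSRPs: RTX 3090 ($1,499, Sep 2020) and RTX 4090 ($1,599, Oct 2022).
# The cumulative US CPI change between those launches is approximated here as ~15%;
# treat it as an assumption, not an exact figure.
MSRP_3090 = 1499
MSRP_4090 = 1599
ASSUMED_INFLATION = 1.15

adjusted_3090 = MSRP_3090 * ASSUMED_INFLATION
print(f"3090 MSRP in late-2022 dollars: ~${adjusted_3090:.0f}")
print(f"4090 MSRP at launch:             ${MSRP_4090}")
print("4090 cheaper in real terms" if MSRP_4090 < adjusted_3090 else "4090 not cheaper in real terms")
```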
Posted on Reply
#28
Zforgetaboutit
BabaThat would not get the same amount of clicks.
My first reply edits were not so polite, but I gave them the benefit of the doubt. Let's see if they change the title to increase our trust in them.
Posted on Reply
#29
Chomiq
napataFunnily enough the 4090 was actually cheaper than the 3090 if you take into account inflation. What's AMD's excuse for the increasing prices? They've moved to chiplets now.
Still cheaper. Wanna talk yields next?
Posted on Reply
#30
HeadRusch1
BwazeYou misspelled "Blackmail".

;-)
You, Sir or Madam, "Get It". :) Well Said.
Posted on Reply
#31
evernessince
napataEven Kepler said the 7900XTX is a good deal more expensive than the 4080 and he's generally very pro-AMD. Don't forget the 4080 only has a 379 mm² die so it's barely bigger than the GCD of 7900XTX. The 7900XTX is still a 500+ mm² GPU. N31 still has 3 different cuts despite chiplets so what did chiplets actually gain AMD?
Who? If this is a rumor mill person I don't care.

379 mm² is not what I'd call "barely bigger" than 304 mm² either. That's a significant difference in terms of yield.

What you don't seem to understand when it comes to chiplets is that total die area is not as direct an indicator of cost as it is with monolithic designs. The GCD is only 304 mm², while the MCDs are a mere 34 mm² each. The yield for the MCDs is going to be insanely high due to their very small size, and you get a ton of them per wafer, making them extremely cheap. Yield decreases roughly exponentially as die size increases, so splitting a design into several smaller chiplets, even if the total die area ends up greater, can make the overall chip cheaper to manufacture. This is why AMD is able to produce server CPUs at a lower cost than Intel's server CPUs while also scaling higher.
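A minimal sketch of that yield argument, using the simple Poisson yield model (defect-free fraction = exp(-defect density × area)) with a purely illustrative defect density:

```python
import math

# Simple Poisson yield model: fraction of defect-free dies = exp(-D * A),
# where D is defect density and A is die area. The defect density below is
# purely illustrative, not a real foundry figure.
DEFECTS_PER_MM2 = 0.001  # 0.1 defects per cm^2, assumed

def defect_free_fraction(area_mm2: float) -> float:
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

dies_mm2 = {
    "Navi 31 GCD (304 mm^2)": 304,
    "Navi 31 MCD (34 mm^2)": 34,
    "AD103 / RTX 4080 (379 mm^2)": 379,
    "Hypothetical 600 mm^2 monolithic die": 600,
}
for name, area in dies_mm2.items():
    print(f"{name}: ~{defect_free_fraction(area) * 100:.0f}% of dies defect-free")
```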
napataYou keep mentioning CPUs because for GPUs it's just not better at this point in time and the main reason is that it's still cheap enough to make monolithic dies. Like I said before even for CPUs AMD still uses monolithic designs.
It's a first-generation GPU chiplet design. Zen 1 wasn't better right out of the gate either, so it stands to reason that we should apply the same logic here.

You are certainly mistaken if you think monolithic GPUs are cheap to make. High-end GPU dies are several times larger than CPU dies, and by extension the yield and cost factors are vastly worse. This is why GPUs have historically been manufactured on a more mature node. You might only get a handful of good 600 mm² dies per wafer because 1) each die is large and 2) each defect can waste 600 mm² of silicon. Compare that to a 300 mm² die, where each defect only writes off 300 mm²: half the wasted area per defect.
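Putting rough numbers on that, here is a crude dies-per-wafer comparison on a 300 mm wafer, reusing the same illustrative defect density and ignoring edge and scribe losses (all simplifications, not foundry data):

```python
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_MM2 = 0.001  # same illustrative defect density as the sketch above

def candidates_per_wafer(die_area_mm2: float) -> int:
    """Crude estimate: wafer area / die area, ignoring edge and scribe losses."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area // die_area_mm2)

def good_dies_per_wafer(die_area_mm2: float) -> int:
    """Candidates multiplied by the Poisson defect-free fraction."""
    return int(candidates_per_wafer(die_area_mm2) * math.exp(-DEFECTS_PER_MM2 * die_area_mm2))

for area in (600, 300):
    print(f"{area} mm^2 die: ~{candidates_per_wafer(area)} candidates, "
          f"~{good_dies_per_wafer(area)} good dies per wafer")
```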
napataThey seem to have forgotten about that paper, with Blackwell still being monolithic seven years later.
No, it's just really hard to implement a chiplet-based GPU. AMD has stated that GPUs require orders of magnitude more bandwidth for inter-GCD communication. This is why AMD introduced fan-out links with the 7000 series and will likely keep pushing what Infinity Fabric can handle on its GPUs.
napataClearly for Nvidia it's not better at the moment, as either the cost to make it work negates the savings or the performance/efficiency hit is too big. It's so weird to me how people on here act as if they know better than the hardware engineers at Nvidia, and that you can just Google the answer or read some papers.
Clearly? You are assuming that whatever Nvidia has on the market now is what Nvidia thinks is the best possible product they will ever have, which is almost certainly false. There are so many other factors you are skipping over to force a conclusion that just isn't there. We don't know Nvidia's opinion on chiplets outside of the paper they published, or what technical hurdles might be preventing them from implementing a chiplet-based architecture. You can't jump to the conclusion that having no chiplet-based products means Nvidia doesn't think chiplets are a good approach. You are just completely guessing at that point.
napataFunnily enough the 4090 was actually cheaper than the 3090 if you take into account inflation. What's AMD's excuse for the increasing prices? They've moved to chiplets now.
Both the 7900 XTX and 6900 XT launched at an MSRP of $999. The 7700 XT has an MSRP of $449, while the 6700 XT launched at $479.

Not sure how this is relevant, though. When I was discussing cost, I was referring to the cost to produce, not the amount Nvidia or AMD will charge. Those are two completely different things. A theoretical price increase to the customer says nothing about production costs.
Posted on Reply
#32
remekra
ARFLet's look at the "grand scheme of things".
Intel hasn't moved to chiplets and I doubt they have plans about it.
DG2-512 - monolithic design
DG2-128 - monolithic design

Nvidia as well:
(future) GB202 - monolithic design
(future) GB203 - monolithic design
(future) GB205 - monolithic design
(future) GB206 - monolithic design
(future) GB207 - monolithic design

AD102 - monolithic design
AD103 - monolithic design
AD104 - monolithic design
AD106 - monolithic design
AD107 - monolithic design

AMD:
(future) Navi 40 larger - monolithic design
(future) Navi 40 smaller - monolithic design

Navi 31 - non-monolithic design
Navi 32 - non-monolithic design

Navi 33 - monolithic design

Navi 21 - monolithic design
Navi 22 - monolithic design
Navi 23 - monolithic design
Navi 24 - monolithic design

What is possible: Navi 31 and Navi 32 are the first and last chiplet designs, much like AMD's previous mistake with HBM, which they no longer use, or the abandoned multi-GPU / MCM products.
Guess they missed that info when they released the MI300, another chiplet-based GPU and the first APU of that scale. Of course, in that market latency doesn't matter as much, so they were able to pull off a lot more than just putting MCDs on separate chiplets.
AMD was first to introduce chiplet-based GPUs to both the consumer and enterprise markets; it's a big advantage, and we'll have to see whether they use it. Once they have chiplets fully figured out for GPUs, they can just scale up like they did with Zen, and no monolithic GPU would be able to compete.
Either Nvidia will also switch to chiplets by then, or they will be outperformed. They are behind in packaging tech; ask Intel how that worked out for them back when they were laughing that Zen 1 was just a glued-together CPU.
Posted on Reply
#33
Readlight
Video card memory is faster than RAM modules.
Posted on Reply
#34
Tek-Check
ARFIf AMD doesn't return with a large monolithic GPU
This is not going to happen except for entry-level dies. Even the entry die could soon evolve into a single compute chiplet once they standardise the size of GPU compute chiplets for the client market. It will be mostly scalable SKUs, just like it is with CPU CCDs.
ARFMonolithic is better.
Only to a degree, and only up to the size of ~820 mm². It's also more complex: big monolithic chips are more expensive and yield worse, as there is more risk of a defect landing in a bigger die.
Why do you think AMD moved to chiplets for CPUs with Zen and achieved such success with this approach?
They have already moved to a full chiplet approach in data center GPUs with Instinct MI300.
The next step is more chiplets in client GPUs. It will take a few years, segment by segment. Just watch it happen.
ARFMeanwhile Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
Please, don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work and leaves large portions of performance on the table.
How about putting some effort into understanding that the transition to chiplets is complex, takes time, and does not happen fully and miraculously over one generation of products? It's a multi-generational effort of incremental improvements.

We do not know if or what happened with 'Navi 41'; it's all rumours. Perhaps it simply takes more time to perfect. They already have bigger chiplet designs in the data center GPU Instinct MI300, so chiplets clearly work, and the MI300 is selling like hot cakes right now. For client GPUs, it certainly takes a complex effort from multiple teams of engineers to perfect, because this is supposed to be the first multi-compute-chiplet design, unlike Navi 31. It will just take more time to get done. You do not need to like it. Just stop being prejudiced about it and stay tuned.
ARFLet's look at the "grand scheme of things".
...
What is possible: Navi 31 and Navi 32 are the first and last chiplet designs, much like AMD's previous mistake with HBM, which they no longer use, or the abandoned multi-GPU / MCM products.
You could have posted the same list of CPUs in 2017 and said that Zen was "meh" and that they should drop chiplets. Nonsense.
Your "grand scheme of things" only looks at two generations of products. Short-sighted. You have no idea what is coming in the next couple of years. That's how "grand" this "scheme of things" is.
napataIt's so weird to me how people on here act as if they know better than the hardware engineers at Nvidia, and that you can just Google the answer or read some papers.
It is you who claims to know better than the engineers at AMD.
Posted on Reply
#35
T_Zel
Something everyone seems to be missing in this discussion about the merits of chiplet-based GPUs: ASML's next-gen High-NA EUV machines have a halved reticle limit compared to current EUV processes. That means a 4090 die is physically far too large to be produced on future cutting-edge nodes. I'd suggest a 4080-sized die is realistically around the biggest you can expect to see in the consumer space on these nodes. But these nodes are also going to be hideously expensive, so pushing the reticle limit and suffering the resulting high defect losses is going to look very unattractive compared to making chiplets work.
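A quick sanity check of that reticle math, using the commonly cited exposure-field sizes (roughly 26 mm × 33 mm today, with the field height halved for High-NA) and published launch die sizes:

```python
# Approximate exposure-field (reticle) limits that are commonly cited:
CURRENT_EUV_FIELD_MM2 = 26 * 33    # ~858 mm^2
HIGH_NA_FIELD_MM2 = 26 * 16.5      # ~429 mm^2 (field height is halved)

dies_mm2 = {
    "AD102 (RTX 4090)": 608,
    "AD103 (RTX 4080)": 379,
}
for name, area in dies_mm2.items():
    verdict = "fits" if area <= HIGH_NA_FIELD_MM2 else "does NOT fit"
    print(f"{name}: {area} mm^2 {verdict} within the ~{HIGH_NA_FIELD_MM2:.0f} mm^2 High-NA field "
          f"(current limit ~{CURRENT_EUV_FIELD_MM2} mm^2)")
```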
Posted on Reply
#36
R-T-B
ARFMeanwhile Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
Please, don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work and leaves large portions of performance on the table.
*cough* *cough* Ryzen *cough* * cough*
Posted on Reply
#37
Metroid
BwazeI was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without public outrage, and one option is of course:

- an approx. $2,000 RTX 5080, which will be faster than the RTX 4090 ("so you're getting your money's worth," reviewers will be paid to say)

- a tier above that, a new "Titan" class card aimed at "Home AI acceleration". Price? Sky is the limit.
If the GPU mafia wants that, then we are toast; not even Nvidia or AMD controls the price anymore, the GPU mafia controls everything. The only way to remove the influence the GPU mafia has at the moment is to create another competitive environment somewhere else, far away from China and Taiwan, or Asia for that matter.
Posted on Reply
#38
R-T-B
MetroidIf the GPU mafia wants that, then we are toast; not even Nvidia or AMD controls the price anymore, the GPU mafia controls everything. The only way to remove the influence the GPU mafia has at the moment is to create another competitive environment somewhere else, far away from China and Taiwan, or Asia for that matter.
What on earth is the "gpu mafia" if not the manufacturers?
Posted on Reply
#39
phints
Well, of course the increasing need for memory bandwidth and newer lithography nodes are part of what brings on a new generation.

TSMC 3N and GDDR7 should help deliver the typical large performance and efficiency gains for RTX 5000, along with a couple of new hardware features, IPC tweaks, and software improvements, of course.
Posted on Reply
#40
Night
It also uses more advanced PAM3 physical-layer signaling, compared to the NRZ used by JEDEC-standard GDDR6.
GDDR7 will use NRZ as well, not just PAM3 signaling. NRZ will be used when there's low-bandwidth traffic, which lowers power consumption. GDDR6X, for example, uses only PAM4 signaling.
Posted on Reply
#41
Waldorf
Funny to see how people apply a different metric to pc/gaming hw pricing,
ignoring for a moment that they run a business and want to make money (shareholders), not to make "US" happy.

do "you" also go to a Porsche/Bentley (or similar) dealer and tell them you want their 4-door performance suv for the price of a VW or Toyota,
or that you should be able to get a 10-room mansion for the price of a 2-bedroom condo?
right.

short of having a +100K income / 6-digit lottery win, i will never buy a (new) xx80Ti/xx90,
but that doesn't mean i will sour it for the folks that can and do.
instead of looking at a product and then "whining" about how much it costs, buy the product that fits the wallet,
or don't if you don't like the offering, and have it (negatively) impact their sales; no one forces you to buy anything.
Posted on Reply
#42
bug
WaldorfFunny to see how people apply a different metric to pc/gaming hw pricing,
ignoring for a moment that they run a business and want to make money (shareholders), not to make "US" happy.

do "you" also go to a Porsche/Bentley (or similar) dealer and tell them you want their 4-door performance suv for the price of a VW or Toyota,
or that you should be able to get a 10-room mansion for the price of a 2-bedroom condo?
right.

short of having a +100K income / 6-digit lottery win, i will never buy a (new) xx80Ti/xx90,
but that doesn't mean i will sour it for the folks that can and do.
instead of looking at a product and then "whining" about how much it costs, buy the product that fits the wallet,
or don't if you don't like the offering, and have it (negatively) impact their sales; no one forces you to buy anything.
One small correction: this is about a non-consumer part, so it's more akin to us demanding an F1 car for the price of a Corolla.

Like you, I have no problem with the existence of higher prices in the consumer space. But I do have a problem with not being able to pay $200-300 and get a capable card in return. I mean, compare what a 460 could do relative to a 480, and then think of how a 4060 compares to a 4080.
Posted on Reply
#43
Waldorf
except you're comparing past prices, not their increase relative to other things.

when i was a kid my father had a porsche turbo, back then equivalent to what a house cost (germany, ~50K).
the car now goes for about 100-150K (decently equipped, not fully loaded), while most single-family homes now START at 150K.
and it's the same for everything else; i don't pay what i paid 40 years ago for bread/milk either.

a GTX 460 was ~$200. how many times faster is a $200 card (e.g. 1660 Ti) now?

and we haven't even talked about bad (console) ports and/or missing optimization,
all contributing to the fact that i need more "HP" to get similar FPS, which has NOTHING to do with the cost of the card.

(and to make sure before some complain: i have never made any statement that i'm happy with what parts/gpus cost..)
Posted on Reply
#44
bug
Waldorfexcept you're comparing past prices, not their increase relative to other things.
I can rephrase that: 10 years ago, $200-300 bought me a card that could run anything I threw at it at medium settings or better. Something that can do the same today is $600-700. I'm not sure what you think I should be considering to make that increase make sense.
Posted on Reply
#45
Waldorf
it's comparing the power needed to run a game to the cost, not relative perf.
it's also not taking any "changes" on your side into account, as i doubt you were running the same monitor res/fps you do now,
and it ignores what i already mentioned:
all of that contributes to the fact that i need more raw power to do the same, but it has nothing to do with the increase in perf compared to its increase in price.
Posted on Reply