Tuesday, June 11th 2024

Possible Specs of NVIDIA GeForce "Blackwell" GPU Lineup Leaked

Possible specifications of the various NVIDIA GeForce "Blackwell" gaming GPUs were leaked to the web by Kopite7kimi, a reliable source for NVIDIA leaks. These are the specs of the maxed-out silicon; NVIDIA will carve out several GeForce RTX 50-series SKUs based on these chips, which could end up with lower shader counts than those shown here. We've known from older reports that there will be five chips in all: the GB202 is the largest, followed by the GB203, the GB205, the GB206, and the GB207. There is a notable absence of a successor to the AD104, GA104, and TU104, because NVIDIA is taking a slightly different approach to the performance segment with this generation.

The GB202 is the halo-segment chip that will drive the possible RTX 5090 (the RTX 4090 successor). This chip is endowed with 192 streaming multiprocessors (SM), or 96 texture processing clusters (TPCs). These 96 TPCs are spread across 12 graphics processing clusters (GPCs), each with 8 of them. Assuming that "Blackwell" retains the 256 CUDA cores per TPC that the past several generations of NVIDIA gaming GPUs have had, we end up with a total CUDA core count of 24,576. Another interesting aspect of this mega-chip is memory. The GPU implements next-generation GDDR7 memory and uses a mammoth 512-bit memory bus. Assuming the 28 Gbps memory speed rumored for NVIDIA's "Blackwell" generation, this chip has 1,792 GB/s of memory bandwidth on tap!
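
For readers who want to sanity-check these figures, here is a minimal sketch of the arithmetic, assuming the leaked layout (128 CUDA cores per SM, i.e. 256 per TPC) and the rumored memory speed hold:

```python
# A minimal sketch of the arithmetic behind these figures, assuming the leaked
# layout (128 CUDA cores per SM, i.e. 256 per TPC) and the rumored memory speed.

def cuda_cores(tpcs: int, cores_per_tpc: int = 256) -> int:
    """CUDA cores = TPC count x cores per TPC."""
    return tpcs * cores_per_tpc

def bandwidth_gb_s(bus_width_bits: int, speed_gbps: float) -> float:
    """Bandwidth (GB/s) = (bus width in bits / 8) x per-pin speed in Gbps."""
    return bus_width_bits / 8 * speed_gbps

# GB202: 12 GPCs x 8 TPCs = 96 TPCs, 512-bit GDDR7 at a rumored 28 Gbps
print(cuda_cores(96))            # 24576
print(bandwidth_gb_s(512, 28))   # 1792.0
```
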
The GB203 is the next chip in the series, and is poised to be a successor in name to the current AD103. It keeps shader counts essentially flat generationally, counting on the architecture and clock speeds to deliver the performance gains, while retaining the 256-bit bus width of the AD103. The net result could be a significantly smaller GPU than the AD103 that still delivers better performance. The GB203 is endowed with 10,752 CUDA cores, spread across 84 SM (42 TPCs). The chip has 7 GPCs, each with 6 TPCs. The memory bus, as mentioned, is 256-bit, and at a memory speed of 28 Gbps it would yield 896 GB/s of bandwidth.

The GB205 will power the lower half of the performance segment in the GeForce "Blackwell" generation. This chip has a rather surprising CUDA core count of just 6,400, spread across 50 SM (25 TPCs), arranged in 5 GPCs of 5 TPCs each. The memory bus width is 192-bit; at 28 Gbps, this results in 672 GB/s of memory bandwidth.

The GB206 drives the mid-range of the series. This chip has 4,608 CUDA cores, spread across 36 SM (18 TPCs). The 18 TPCs span 3 GPCs of 6 TPCs each. The other key differentiator between the GB205 and GB206 is memory bus width, which is narrowed to 128-bit for the GB206. With the same 28 Gbps memory speed used here, such a chip would end up with 448 GB/s of memory bandwidth.

At the entry level, there is the GB207, a significantly smaller chip with just 2,560 CUDA cores across 20 SM (10 TPCs), spanning two GPCs of 5 TPCs each. The memory bus width is unchanged at 128-bit, but the memory type used is older-generation GDDR6. Assuming NVIDIA uses 18 Gbps memory speeds, it ends up with 288 GB/s on tap.
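
Applying the same two formulas across the lineup reproduces the figures quoted above. The sketch below assumes the rumored 28 Gbps GDDR7 and 18 Gbps GDDR6 speeds and the 256-cores-per-TPC layout:

```python
# Sketch: the same arithmetic applied to all five leaked chips.
# Entries are (TPCs, bus width in bits, memory speed in Gbps) per the report above.
chips = {
    "GB202": (96, 512, 28),  # GDDR7
    "GB203": (42, 256, 28),  # GDDR7
    "GB205": (25, 192, 28),  # GDDR7
    "GB206": (18, 128, 28),  # GDDR7
    "GB207": (10, 128, 18),  # GDDR6
}

for name, (tpcs, bus_bits, speed_gbps) in chips.items():
    cores = tpcs * 256                   # 256 CUDA cores per TPC (2 SMs x 128)
    bandwidth = bus_bits / 8 * speed_gbps
    print(f"{name}: {cores:,} CUDA cores, {bandwidth:.0f} GB/s")

# GB202: 24,576 CUDA cores, 1792 GB/s
# GB203: 10,752 CUDA cores, 896 GB/s
# GB205: 6,400 CUDA cores, 672 GB/s
# GB206: 4,608 CUDA cores, 448 GB/s
# GB207: 2,560 CUDA cores, 288 GB/s
```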

NVIDIA is expected to double down on large on-die caches across all its chips to cushion the memory sub-systems. We expect several other innovations in the areas of ray tracing performance, AI acceleration, and other features exclusive to the architecture. The company is expected to debut the series sometime in Q4 2024.
Source: kopite7kimi (Twitter)

141 Comments on Possible Specs of NVIDIA GeForce "Blackwell" GPU Lineup Leaked

#101
Tigerfox
Then better wait until the 24GB (Super) version with 8x3GB arrives later.

We only just got 16GB on the xx80(S) and then on the xx70 Ti S, after staying with 8GB on the xx80 for two generations before upgrading to 10/12GB, and with 8GB on the xx70 (Ti) for three generations before upgrading to 12GB. Don't expect NV to give you another upgrade after just one generation!
Posted on Reply
#102
ARF
TigerfoxThen better wait until the 24GB (Super) version with 8x3GB arrives later.
TigerfoxWe only just got 16GB on the xx80(S) and then on the xx70 Ti S, after staying with 8GB on the xx80 for two generations before upgrading to 10/12GB, and with 8GB on the xx70 (Ti) for three generations before upgrading to 12GB. Don't expect NV to give you another upgrade after just one generation!
Someone at nvidia must take responsibility for the fact that the products it launches lack a sufficient amount of VRAM and need to be updated to match the current games market.

GTX 980 (2014): 4GB
GTX 980 Ti (2015): 6GB
GTX 1080 (2016): 8GB
GTX 1080 Ti (2017): 11GB
RTX 2080 (2018): 8GB
RTX 2080 Ti (2018): 11GB
RTX 3080 (2020): 10/12GB
RTX 3080 Ti (2021): 12GB
RTX 4080 (2022): 16GB
RTX 5080 (2024 or 2025): 16GB?

Good luck selling that.
Posted on Reply
#103
GoldenX
That "5060" feels like yet another 4060 bruh moment.
Posted on Reply
#104
Chrispy_
ARFErr, nope.
GB202 = 28GB
GB203 = 24GB
GB205 = 20GB
GB206 = 16GB
GB207 = 12GB

This is ok now.
Trust me, I *WANT* to be wrong, but all evidence points towards Nvidia opting for the lower capacity.

Primarily, I don't think higher density GDDR7 dies are available yet, and when they finally are, they will come with a profit-eating higher cost. Nvidia will likely justify it on the $3000 5090, and maybe eventually on the 5080 Ti/Super/Ultra/Whatever but for the cost-effective models at xx50/60/70 we're going to get screwed because those cards are all about maximising Nvidia's profit, not pleasing end-users.

"If you want more VRAM, spend more money, you filthy peasants."
- Jacket Man, probably.
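
As an aside, the capacity numbers being argued over here follow directly from the leaked bus widths: each GDDR7 module occupies a 32-bit slice of the bus, so capacity is (bus width / 32) times the per-module density. A quick sketch, where the 2 GB and 3 GB module densities are assumptions about what will actually ship, not part of the leak:

```python
# Sketch: VRAM capacity = number of 32-bit memory channels x density per module.
# 2 GB (16 Gb) modules are assumed as the launch density, 3 GB (24 Gb) modules
# as the later option behind the "24GB Super version" speculation.
def vram_gb(bus_width_bits: int, gb_per_module: int) -> int:
    return (bus_width_bits // 32) * gb_per_module

for bus in (512, 256, 192, 128):
    print(f"{bus}-bit: {vram_gb(bus, 2)} GB with 2 GB modules, "
          f"{vram_gb(bus, 3)} GB with 3 GB modules")
# 512-bit: 32 GB or 48 GB
# 256-bit: 16 GB or 24 GB
# 192-bit: 12 GB or 18 GB
# 128-bit:  8 GB or 12 GB
```
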
Keullo-eIs a kilowatt PSU already needed for a single card?
It was with the 30-series, mostly due to transient spikes tripping the protections on many high-end 850W units, rather than the average draw of a 3080-equipped system actually exceeding 850W sustained.
Posted on Reply
#105
Dr. Dro
Durvelle27Your post definitely smells of fanboying :wtf:

Which is so laughable considering AMD has no problem competing with Nvidias offerings outside of the RTX 4090

The RX 7900XTX Trades blows with the RTX 4080 Super mostly edging it out
The RX 7900XT beats the RTX 4070Ti Super
The RX 7900GRE Beats the RTX 4070 Super
The RX 7800XT Beats the RTX 4070
etc....

All while offering much better prices




1 to 4% faster in raster, 20% slower in RT, higher median power consumption, very hit and miss driver support, lacks access to the Nvidia ecosystem. The 4080 will provide a better gaming experience, I can guarantee you that.

The only circumstance in which I'd consider the 7900XTX is if you can purchase it for $300 less.
Posted on Reply
#106
evernessince
ARFYes, but those are not typical gaming loads, this is something that you would want from a workstation card, not a GeForce.

I agree that the VRAM on the lower models is not enough, and the GB203 specs are too low.
The xx90 cards are prosumer products. A huge portion of their sales, particularly of the 4090, comes from AI and professional work.

Nvidia is factoring that in when deciding how much VRAM to equip their xx90 cards with. We've had two generations now with 24GB and prosumer workloads are demanding more VRAM than that.

Games themselves could probably use more VRAM as well if Nvidia weren't intentionally holding the market back in that regard. You can't expect devs to target VRAM most people don't have, and by extension Nvidia has great influence over how much VRAM games will use. If they continue to put out 8GB cards, then 8GB of VRAM will continue to be "enough" for as long as they are the dominant player in the market. Of course I expect it to increase, but only when Nvidia absolutely has to, in keeping with the trend of giving people as little as possible for their money, whether up front or in terms of longevity.

It's ironic: in the enterprise market Nvidia caters to what customers want, while in the gaming market gamers cater to what Nvidia wants. If gamers are going to argue on Nvidia's behalf that 8GB is still fine, perhaps they deserve 8GB cards ad infinitum. It's a self-perpetuating prophecy.
Posted on Reply
#107
Legacy-ZA
Not impressed; that 5080 is already gimped right out of the gate, and because it is a supposed xx80 card, nVidia will already charge $1000 for it. Get screwed, nVidia.

I am done with GPUs; this machine will, in all likelihood, be my last. It's been fun, but I just can't be bothered anymore. At certain tiers, nVidia, you have priced yourself out of the market; everyone has their limits and you pissed on their concerns. No one likes being punked or ripped off.
Posted on Reply
#108
Lionheart
AssimilatorI really wish NVIDIA had decided to increase the VRAM capacity and bus width over Ada. Not because more VRAM and a wider bus actually does anything for performance, but because it would at least stop Radeon fanboys crying about how NVIDIA is screwing buyers over. News flash, the 88% of people who own an NVIDIA GPU only feel screwed over by AMD's inability to compete.
Do you ever stop kissing Nvidia's ass?
Posted on Reply
#109
64K
Legacy-ZANot impressed; that 5080 is already gimped right out of the gate, and because it is a supposed xx80 card, nVidia will already charge $1000 for it. Get screwed, nVidia.

I am done with GPUs; this machine will, in all likelihood, be my last. It's been fun, but I just can't be bothered anymore. At certain tiers, nVidia, you have priced yourself out of the market; everyone has their limits and you pissed on their concerns. No one likes being punked or ripped off.
It's a shame, but you're not alone. I see too many PC gamers expressing your sentiments, and the road ahead only looks worse. The reality is that most PC gamers and console gamers are in the same boat: they have definite limits on what they can spend on hardware. Nvidia has pushed pricing to such an extreme that many who ordinarily would not have done so turned to the second-hand market, which introduces additional risks. IIRC you were one of those struggling to upgrade during the mining craze. Can't blame the pricing on that anymore; now it just amounts to simple greed by Nvidia and retailers.
Posted on Reply
#110
Why_Me
Five pages so far. I saw the title of this thread at the top of the forum just now and I knew it wouldn't disappoint. :)
Posted on Reply
#111
Daven
Isaakwhen the amount of complaints is visibly superior to NVIDIA users. I've switched to NVIDIA shortly after and had no such episode since. Such a stark difference might sound ridiculous, but yeah, not once did I have a significant issue after updating NVIDIA drivers.
And that folks is the myth in action with the desired effect.

Would you believe me if I said I had so many Nvidia driver problems that I switched to AMD and never had any again?
Posted on Reply
#112
Isaak
DavenAnd that folks is the myth in action with the desired effect.

Would you believe me if I said I had so many Nvidia driver problems that I switched to AMD and never had any again?
Yes.
Posted on Reply
#113
wolf
Better Than Native
Neo_MorpheusThe Ngreedia fanbois are really insane.

They make it sound like all AMD gpus are at least half as slow to their Ngreedia counterpart.
Posted on Reply
#114
hsew
I’m going to diverge from the never-ending driver quality argument and just point out that RTX Voice/Broadcast, RTX Remix, ChatRTX, and soon, G-ASSIST, ACE, etc… either don’t have any equivalent from AMD or the equivalent is half-baked in comparison. There is just so much more you can do with one team’s products than the other…

Edit: forgot to mention
Canvas
Omniverse
NVENC
Posted on Reply
#115
Ruru
S.T.A.R.S.
hsewI’m going to diverge from the never-ending driver quality argument and just point out that RTX Voice/Broadcast, RTX Remix, ChatRTX, and soon, G-ASSIST, ACE, etc… either don’t have any equivalent from AMD or the equivalent is half-baked in comparison. There is just so much more you can do with one team’s products than the other…

Edit: forgot to mention
Canvas
Omniverse
NVENC
I didn't even know about those besides NVENC, so I can presume that a typical user doesn't know about them either.
Posted on Reply
#116
Dr. Dro
Keullo-eI didn't even know about those besides NVENC, so I can presume that a typical user doesn't know about them either.
Canvas basically uses generative AI to create photorealistic images out of simple MS Paint-like strokes, and Omniverse is practically why Nvidia is worth $3 trillion right now.

www.nvidia.com/en-us/studio/canvas/
www.nvidia.com/en-us/omniverse/
Posted on Reply
#118
Assimilator
LionheartDo you ever stop kissing Nvidia's ass?
Do you ever stop crying?
Posted on Reply
#119
Sunny and 75
Keullo-eIs a kilowatt PSU already needed for a single card?
As long as GPUs only require one GEN5 power connector, there will be no need for 1000W-and-above PSUs, though keep in mind that power supplies are most efficient when running at around 50% load.

Things may change if we ever see a GPU with two GEN5 power connectors. In that case, we'd need at least a 1200W PSU (ideally 1500W+) to power something like the upcoming 5090, which could end up sipping 800W; if not that, then maybe a 6090.
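
A rough back-of-the-envelope check of that sizing rule, with illustrative assumptions (an ~800 W card plus ~200 W for the rest of the system):

```python
# Sketch: how close various PSU ratings sit to the ~50% efficiency sweet spot
# for a hypothetical 800 W GPU. The 200 W figure for the rest of the system
# (CPU, board, drives, fans) is an assumption, not a measurement.
gpu_w = 800
rest_of_system_w = 200
system_w = gpu_w + rest_of_system_w

for psu_w in (1000, 1200, 1500):
    load = system_w / psu_w
    print(f"{psu_w} W PSU: {load:.0%} load at full system draw")
# 1000 W PSU: 100% load -> no headroom for transient spikes
# 1200 W PSU: 83% load
# 1500 W PSU: 67% load -> closest of the three to the sweet spot
```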
Posted on Reply
#120
Vayra86
RedelZaVednoThe GB203, with only a 256-bit bus & 10,752 CUDA cores, looks barely better than the 4080's specs, and we all know more cache is not a silver bullet. It could be that the 5080 will be no faster, or even a bit slower, than the 4090 in rasterization at 4K and above. No competition = no progress :banghead:
You forget the key bonus points that will sell this one at the same price as a 4080

- DLSS4.x with ultra super framerate acceleration, definitely better than 3, so you can't miss it, and it can only run on Blackwell because Huang said so
- Even betterrer RT performance, which is what Nvidia is going to tout as the 'real performance gap' versus Ada, don't mind raster, its no longer relevant, there is always an RT game to distort reality
- Some subscription to some Nvidia service model (3 months of free GF Now?)
- A lot of marketing to drive the above home
Posted on Reply
#121
evernessince
Legacy-ZANot impressed; that 5080 is already gimped right out of the gate, and because it is a supposed xx80 card, nVidia will already charge $1000 for it. Get screwed, nVidia.

I am done with GPUs; this machine will, in all likelihood, be my last. It's been fun, but I just can't be bothered anymore. At certain tiers, nVidia, you have priced yourself out of the market; everyone has their limits and you pissed on their concerns. No one likes being punked or ripped off.
Unfortunately, I suspect $1,000 would be on the low end of expected pricing. Nvidia wanted to charge $1,000 for the gimped 4080 until that got canceled due to backlash. It looks like for the 5000 series the gimped xx80 will now be the default, but I'm not so sure Nvidia will have the price follow suit. That's kind of been Nvidia's strategy anyway: if something receives backlash, just try again later with a different name or on the sly. The GPP is a good example; Nvidia fully implemented it with the 4000 series. All the top model brands are reserved for Nvidia-only cards now. Not a single ounce of outrage. Clearly Nvidia's strategy works.
Posted on Reply
#122
R0H1T
GoldenXThat "5060" feels like yet another 4060 bruh moment.
You mean the rebadged 5050 :laugh:
Posted on Reply
#123
wolf
Better Than Native
Vayra86You forget the key bonus points that will sell this one at the same price as a 4080

- DLSS4.x with ultra super framerate acceleration, definitely better than 3, so you can't miss it, and it can only run on Blackwell because Huang said so
- Even betterrer RT performance, which is what Nvidia is going to tout as the 'real performance gap' versus Ada, don't mind raster, its no longer relevant, there is always an RT game to distort reality
- Some subscription to some Nvidia service model (3 months of free GF Now?)
- A lot of marketing to drive the above home
Let's not forget AMD's bonus to sell their GPU's for the same price as an Nvidia product they are competitive in raster to;

-FSR SR FG SS XXX 4.1, definitely better than FSR 1.0, and just as good as DLSS because it's open source and Lisa Su Bae said so
-still lacking in RT performance, but more sponsored games with half assed effects to show they're barely behind in games where you struggle to spot the difference.
- +++++ VRAM so they can claim at least one spec advantage and cater to people who want more than a decade of high textures, well past the GPU's ability to push good fps in modern AAA titles
-some ridiculous marketing or PR blunder because it wouldn't be Radeon Technologies Group without them managing an own goal too
Posted on Reply
#124
Vayra86
wolfLet's not forget AMD's bonus to sell their GPU's for the same price as an Nvidia product they are competitive in raster to;

-FSR SR FG SS XXX 4.1, definitely better than FSR 1.0, and just as good as DLSS because it's open source and Lisa Su Bae said so
-still lacking in RT performance, but more sponsored games with half assed effects to show they're barely behind in games where you struggle to spot the difference.
- +++++ VRAM so they can claim at least one spec advantage and cater to people who want more than a decade of high textures, well past the GPU's ability to push good fps in modern AAA titles
-some ridiculous marketing or PR blunder because it wouldn't be Radeon Technologies Group without them managing an own goal too
Yay for stagnation!
Posted on Reply
#125
mechtech
Dr. Dro1 to 4% faster in raster, 20% slower in RT, higher median power consumption, very hit and miss driver support, lacks access to the Nvidia ecosystem. The 4080 will provide a better gaming experience, I can guarantee you that.

The only circumstance in which I'd consider the 7900XTX is if you can purchase it for $300 less.
That's my GPU budget lol

I just turn down settings rather than spending more money lol
Posted on Reply