
NVIDIA RTX 50-series "GB20X" GPU Memory Interface Details Leak Out

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.68/day)
Location
Ex-usa | slava the trolls
If this is true, then it is rather disappointing. It will be the third flagship card in a row with 24 GB of VRAM, which is already a limiting factor in some games at 8K.

The thing is that DisplayPort 1.4, which is what NVIDIA offers, is not ready for 8K. Even DP 2.1 is not ready.
If you are going to stay with an NVIDIA card, you must wait for DisplayPort 2.2 or 3.0.

And are AMD GPUs doing significantly better at 8K with their wider buses?

AMD will not offer a big Navi 4 next generation. The RX 7900 XTX will be their halo card till Navi 50 sometime in 2026-2027.

It gets so tiring with people spewing nonsense about stuff they obviously have no clue about, or experience with...
As seen above, I AM playing at 8K, and it works great in a lot of titles, but some are limited by VRAM...
The Witcher 3 remaster with ultra ray tracing is an example of a game that runs fine at 8K - right until it runs out of VRAM and FPS absolutely tanks due to VRAM swapping.

The February 2024 Steam survey says ~60% have a 1080p monitor resolution and another ~20% have 1440p. 4K is in the low single digits. And 8K is that one guy above.

This only goes to prove that people are stuck in 2010 and do not want to move forward. Which is a shame - they prefer the awful screen-door effect that those low-quality low-res monitors tend to offer.
 
Joined
Feb 24, 2023
Messages
2,924 (4.71/day)
Location
Russian Wild West
System Name DLSS / YOLO-PC
Processor i5-12400F / 10600KF
Motherboard Gigabyte B760M DS3H / Z490 Vision D
Cooling Laminar RM1 / Gammaxx 400
Memory 32 GB DDR4-3200 / 16 GB DDR4-3333
Video Card(s) RX 6700 XT / R9 380 2 GB
Storage A couple SSDs, m.2 NVMe included / 240 GB CX1 + 1 TB WD HDD
Display(s) Compit HA2704 / MSi G2712
Case Matrexx 55 / Junkyard special
Audio Device(s) Want loud, use headphones. Want quiet, use satellites.
Power Supply Thermaltake 1000 W / Corsair CX650M / DQ550ST [backup]
Mouse Don't disturb, cheese eating in progress...
Keyboard Makes some noise. Probably onto something.
VR HMD I live in real reality and don't need a virtual one.
Software Windows 10 and 11
people are stuck in 2010 and do not want to move forward.
There are such things as:
• Third world countries. Life there doesn't guarantee you the means to buy a semi-decent computer, let alone next-gen hardware.
• Priorities. Not everyone treats their PC as their best toy. Many people only use their PC on sick leave or during extremely unfavourable weather.
• Health conditions. Some people have vision disorders which make a 1080p to 2160p upgrade pointless.
• Just being broke. A dude wants to buy an RTX 4090 and a top-tier display but can't remotely afford that.

And no, "Homeless? Just buy a house" doesn't work.
 
Joined
Aug 26, 2021
Messages
370 (0.32/day)
Got to admit I'm interested in the 5090; it's looking like a possible beast depending on what silicon they use, hopefully it's still the x02 die. I've enjoyed my time with my 5700 XT and 6800 XT, but the three 7900 XTXs I've had have all been a bug-ridden mess, so maybe green will do this time round, since apparently AMD is on the back foot next gen.
 
Joined
Sep 26, 2022
Messages
209 (0.27/day)
I'm not so sure games in the next 3 years won't use 24GB at lower resolutions than 8K. It's more than a possibility. Flagship graphics cards are not supposed to be at their limit in consumer applications right out of the gate, especially when you are talking about an xx90-class card. Extra VRAM enables future games to push what they can do; it doesn't have to have an immediate and obvious impact. The 1080 Ti is a great example of that: the card is still relevant today thanks to its 11GB of VRAM.

Mind you, IMO the biggest problem with Nvidia sticking with 24GB for its flagship card would be that it curtails its usefulness for AI. I can already reach 32GB VRAM usage at 1024x1024 on SDXL, never mind newer AI models currently in testing that will certainly be out by the time this GPU drops. Nvidia's next-gen cards can be amazing performance-wise, but if AMD releases its next-gen flagship with 36 or 48GB, for example, that's going to attract a lot of high-end customers over to them.
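For anyone who wants to see where their own setup lands, here's a minimal sketch of how that kind of peak-VRAM figure can be measured with the diffusers library; the model ID and prompt are just the public SDXL base and a placeholder, not necessarily the exact setup described above:

```python
# Sketch: measure peak VRAM while generating one 1024x1024 image with SDXL.
# Requires torch + diffusers and a CUDA GPU; numbers will vary with precision and settings.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

torch.cuda.reset_peak_memory_stats()
image = pipe("a test prompt", height=1024, width=1024).images[0]

peak_gb = torch.cuda.max_memory_allocated() / 2**30
print(f"Peak VRAM allocated: {peak_gb:.1f} GB")
```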
I think it's a good choice IMO. 24GB is great for learning AI and for the overwhelming majority of professional (freelance, small company) usage, but these are first and foremost gaming GPUs. When a consumer chooses a 4090 for AI, it's a penalty for us because the gaming price is inflated, and a penalty for Nvidia because said consumer is not choosing the more expensive AI GPU options.

Frankly, for occasional AI usage that needs more than 24GB on a personal workstation, I think the alternative would be to use a cloud provider and run your dataset in the cloud IF possible. On AWS, I think in some regions you can already get EC2 instances with Nvidia GPUs.
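For example, a bare-bones boto3 sketch for spinning up such an instance; the AMI ID and key pair are placeholders, and the instance type is just one option (the p4d/p5 families offer A100/H100 GPUs with 40-80 GB each if you really need more than 24GB per GPU):

```python
# Sketch: launch one NVIDIA GPU instance on EC2 with boto3 (placeholder IDs).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a current Deep Learning AMI
    InstanceType="g5.xlarge",         # 1x NVIDIA A10G (24 GB); pick p4d/p5 for more VRAM
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])
```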

Regarding games: games need to work on as many PCs as possible. If a game cannot run at 4K on a 5090, I can't even imagine how it runs on current consoles, and I would really question the developers rather than Nvidia over the 24GB choice here.

In summary: if a game in 2025 needs more than 24GB for 4K/High, I'd rather give my money to developers that know how to actually optimize games. For instance, TLOU Part 1 released in a horrendous state; although they fixed some issues, I'd never give my money for a port of that quality.

But I'm not really worried; the limiting factors for gaming today are the consoles, and they have a shared pool of 16GB, so I really don't see how a computer with 16/32GB of DDR5 and 24GB of VRAM can fail to max out a game.
 
Joined
Sep 15, 2015
Messages
1,064 (0.32/day)
Location
Latvija
System Name Fujitsu Siemens, HP Workstation
Processor Athlon x2 5000+ 3.1GHz, i5 2400
Motherboard Asus
Memory 4GB Samsung
Video Card(s) rx 460 4gb
Storage 750 Evo 250 +2tb
Display(s) Asus 1680x1050 4K HDR
Audio Device(s) Pioneer
Power Supply 430W
Mouse Acme
Keyboard Trust
Bits are volt levels?
 
Joined
Jun 24, 2017
Messages
172 (0.06/day)
They didn't want batches of consumer-grade GPUs doing compute work. Solution: cripple FP64/FP32 performance.
They don't want batches of consumer-grade GPUs doing AI work. Solution: cripple memory amount and bandwidth.

You all know their idea is just to run the games/compute in the cloud (their service) and stream them to your computer/console (your money).

[[ Add here economical, ecological and other bullshit The Verge-like marketing arguments and some hand-made scarves. ]]

And this is how you get complete control over market prices: everything as a service.
 
Joined
Mar 29, 2023
Messages
1,045 (1.78/day)
Processor Ryzen 7800x3d
Motherboard Asus B650e-F Strix
Cooling Corsair H150i Pro
Memory Gskill 32gb 6000 mhz cl30
Video Card(s) RTX 4090 Gaming OC
Storage Samsung 980 pro 2tb, Samsung 860 evo 500gb, Samsung 850 evo 1tb, Samsung 860 evo 4tb
Display(s) Acer XB321HK
Case Coolermaster Cosmos 2
Audio Device(s) Creative SB X-Fi 5.1 Pro + Logitech Z560
Power Supply Corsair AX1200i
Mouse Logitech G700s
Keyboard Logitech G710+
Software Win10 pro
My question would be, "Is it really worth it?", considering this from both a gamer's and a game developer's perspective. Are any games even able to take full advantage of 8K rendering? I highly doubt it. The textures would need to be huge, and given that there's only so much you can do with textures to keep model poly count down without negatively affecting appearance, that would have to be increased as well.
I'm sure it still looks good, but the only purpose I could see in wasting such an insane amount of resources to run 8K would be for a large display, bigger than 48", where any pixelation is undesired. Beyond that, it makes no sense to me and becomes more an issue of practicality (i.e. how far back a pair of eyes needs to be from the screen).

I remember 2 years ago someone I know was talking about running his console games at 8K on his 8K TV, and I correctly pointed out that those games are not being played at 8K; instead everything is being upscaled, and he just wasted a lot of money on an 8K display if that was his only intent.
IMO, 8K just isn't worth it at present (unless you have some kind of special need) and by far would have more of a niche commercial application than gaming. When they do start making games for 8K, expect to start paying even heftier prices.

It varies per game - as you can see with the second screenshot of The Talos Principle 2, assets do scale with 8K thanks to Nanite - so UE5 games are going to look amazingly detailed.

But if we take Horizon Zero Dawn as an example - no, the assets are clearly not super high quality, and yet 8K makes an immense difference. At 4K the image is (comparatively) rather blurry, and there is quite a bit of shimmering - at 8K the image gets completely cleaned up, being crystal clear with no shimmering etc.

As for your friend running a console on an 8K TV - I hope for your sake that you can see why it isn't in any way comparable.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.68/day)
Location
Ex-usa | slava the trolls
It varies per game - as you can see with the second screenshot of The Talos Principle 2, assets do scale with 8K thanks to Nanite - so UE5 games are going to look amazingly detailed.

But if we take Horizon Zero Dawn as an example - no, the assets are clearly not super high quality, and yet 8K makes an immense difference. At 4K the image is (comparatively) rather blurry, and there is quite a bit of shimmering - at 8K the image gets completely cleaned up, being crystal clear with no shimmering etc.

As for your friend running a console on an 8K TV - I hope for your sake that you can see why it isn't in any way comparable.

:eek:

There are problems, but the most emphasized one is that no one advertises 4K gaming even today.
You have plenty of great graphics cards for 4K gaming, but no one tells potential users that it's worth it to buy a 4K monitor for $200.

8K can easily work with:
1. Texture compression;
2. Upscaling;
3. A DisplayPort standard higher and better than DP 2.1 UHBR20's 77.37 Gbit/s throughput (see the quick check below).
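A rough back-of-the-envelope check on point 3, assuming uncompressed 10-bit RGB and ignoring blanking overhead and DSC, so treat the numbers as ballpark figures only:

```python
# Approximate uncompressed video data rate vs. the DP 2.1 UHBR20 payload cited above.
def raw_gbps(width, height, refresh_hz, bits_per_pixel=30):  # 10-bit RGB = 30 bpp
    return width * height * refresh_hz * bits_per_pixel / 1e9

UHBR20_PAYLOAD = 77.37  # Gbit/s effective payload of DP 2.1 UHBR20

for hz in (60, 120, 144):
    rate = raw_gbps(7680, 4320, hz)
    verdict = "fits" if rate <= UHBR20_PAYLOAD else "needs DSC or a faster link"
    print(f"8K @ {hz} Hz: {rate:.1f} Gbit/s -> {verdict}")
```

So uncompressed 8K60 squeezes through UHBR20, but 8K120/144 needs DSC or something beyond DP 2.1.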
 
Joined
Apr 14, 2022
Messages
739 (0.79/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
They didn't want batches of consumer-grade GPUs doing compute work. Solution: cripple FP64/FP32 performance.
They don't want batches of consumer-grade GPUs doing AI work. Solution: cripple memory amount and bandwidth.

You all know their idea is just to run the games/compute in the cloud (their service) and stream them to your computer/console (your money).

[[ Add here economical, ecological and other bullshit The Verge-like marketing arguments and some hand-made scarves. ]]

And this is how you get complete control over market prices: everything as a service.

I agree, but a service like GeForce NOW is useful to many, even myself.
When I'm at my parents' home for holidays etc., I can use my mother's laptop and play whatever I want for the time I'm there.
Also, not everyone is capable of buying a full-fledged PC to play games, but a half-decent laptop is more than enough to stream from the cloud.

GPU prices are high because of demand, not necessarily because they push the industry in the streaming-service direction.
 
Joined
Dec 31, 2020
Messages
969 (0.69/day)
GDDR7 can reach 64Gb down the line, and its package is bigger, 14x14mm, so there is no problem producing 48 and 96GB for professional cards; until then, just buy the latter.
To achieve 24GB, the 3090 really used dual-sided, piggybacked 16Gb memory. That could still happen in the first generation.

The 4080 can deliver 1440p144 on average, and if the 5080 is 50% faster that's still not a 4K card. So forget it, 8K144 is impossible before an RTX 9090 on anything other than Fortnite or a 10-year-old game.
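Quick sanity check on that claim with pure pixel-count math (real scaling is rarely this linear, so take it as a rough estimate):

```python
# 4K has 2.25x the pixels of 1440p, so "1440p144 + 50%" only lands around 4K96.
pixels_1440p = 2560 * 1440
pixels_4k = 3840 * 2160
scale = pixels_4k / pixels_1440p           # = 2.25

fps_1440p = 144
fps_4k_estimate = fps_1440p * 1.5 / scale  # ~96 fps
print(f"Pixel ratio: {scale:.2f}x, estimated 4K fps for a +50% card: {fps_4k_estimate:.0f}")
```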

GB203 (5080): 108 (112 - 4) SM/ROP, 14336 - 512 shaders

GB205 (5070): 62 (64 - 2) SMs, 5888 shaders, a virtual copy of the 3070/4070, keeping alive the trend of zero shaders added and a 25% performance gain from just clocking it higher. 3.3GHz, N3 node, 200mm². 12GB again.
And a 7680-shader 5070 Ti.

GB206 (5060): 42 SMs, 38-40 enabled, Samsung 4 node, with a 16GB option
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.68/day)
Location
Ex-usa | slava the trolls
The 4080 can deliver 1440p144 on average

The RTX 4080 can deliver 2160p100-120 on average with slightly relaxed settings.

This graph shows the average at maxed-out settings, but maxed out is overkill and the settings should be dialed down.

[Attached chart: average FPS at maxed-out settings]
 
Joined
Apr 14, 2022
Messages
739 (0.79/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
But what is interesting are the capacities. GDDR7 will be produced in a base capacity of 16 Gb (2GB) and likely no lower. This is the same capacity that is the maximum and now predominantly used capacity in the current GDDR6 generation. It allows you to fit 8GB on a graphics card with a 128-bit bus, 12GB on a 192-bit bus, 16GB on a 256-bit bus, and so on. With clamshell mode, where there are twice as many chips, double capacities are then possible, which is the case with the RTX 4060 Ti 16GB, but that’s not used often.

But on top of that, 24Gb GDDR7 chips are also going into production, for which the effective capacity would be 3GB per chip. And that will allow for graphics cards with 50% more memory capacity. So in addition to the previously cited numbers, it will also be possible to have 12GB graphics cards with a 128-bit bus, 18GB graphics cards with a 192-bit bus, 24GB graphics cards with a 256-bit bus, or a capacity of 36GB for GPUs with a 384-bit bus width. A clamshell design would double those again.


That would be interesting.
A 5060 with 12GB, a 5070 with 18GB, a 5080 with 24GB and a 5090 with 36GB of VRAM.
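The arithmetic behind those capacities is simple: each chip sits on a 32-bit slice of the bus, and clamshell mode doubles the chip count. A quick sketch:

```python
# VRAM = (bus width / 32) chips x per-chip capacity; clamshell doubles the chip count.
def vram_gb(bus_width_bits, chip_gbit, clamshell=False):
    chips = bus_width_bits // 32 * (2 if clamshell else 1)
    return chips * chip_gbit / 8  # Gbit -> GB

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit: {vram_gb(bus, 16):.0f} GB with 16Gb chips, "
          f"{vram_gb(bus, 24):.0f} GB with 24Gb chips, "
          f"{vram_gb(bus, 24, clamshell=True):.0f} GB with clamshell 24Gb chips")
```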
 
Joined
Jun 18, 2021
Messages
2,542 (2.05/day)
To achieve 24GB, the 3090 really used dual-sided, piggybacked 16Gb memory

The 3090 used 8Gb chips on each side at first to achieve the 24GB; 16Gb chips came later with the 3090 Ti and the move to a single side.

GDDR7 will have 24Gb modules, so that will change the math a bit - probably not on the launch models, other than maybe the 5090/5080, but certainly on the refreshes down the line.
 
Joined
Oct 10, 2018
Messages
147 (0.07/day)
GDDR7 can reach 64Gb down the line, and its package is bigger, 14x14mm, so there is no problem producing 48 and 96GB for professional cards; until then, just buy the latter.
To achieve 24GB, the 3090 really used dual-sided, piggybacked 16Gb memory. That could still happen in the first generation.

The 4080 can deliver 1440p144 on average, and if the 5080 is 50% faster that's still not a 4K card. So forget it, 8K144 is impossible before an RTX 9090 on anything other than Fortnite or a 10-year-old game.

GB203 (5080): 108 (112 - 4) SM/ROP, 14336 - 512 shaders

GB205 (5070): 62 (64 - 2) SMs, 5888 shaders, a virtual copy of the 3070/4070, keeping alive the trend of zero shaders added and a 25% performance gain from just clocking it higher. 3.3GHz, N3 node, 200mm². 12GB again.
And a 7680-shader 5070 Ti.

GB206 (5060): 42 SMs, 38-40 enabled, Samsung 4 node, with a 16GB option

I assume the 5060 would be made on GB207 rather than GB206, but don't forget that generational die sizes decrease while core counts increase. GP107 supported 768 cores (GTX 1050 Ti), TU117 supported 1024 cores (GTX 1650 mobile), GA107 supported 2560 cores (RTX 3050), AD107 supported 3072 cores (RTX 4060), and I think we would see 3584 cores in the GB207 used for a 5060 8GB/16GB, though a 12GB variant may come later given rumours of different configs. I believe the RTX 5060 would have performance between the RTX 3070 Ti and the 3080.

My expectations,

RTX 5050 8GB 2560 cores (performance of 4060 also 75W) for $169
RTX 5060 12GB (I hope) 3584 cores for $329
RTX 5060 Ti 16GB 5120 cores for $399
RTX 5070 16GB/18GB 7680 cores for $599
RTX 5080 24GB 12288 cores for $899
RTX 5090 36GB 22528-24576 cores for $1699
 
Joined
May 3, 2018
Messages
2,881 (1.21/day)
I assume the 5060 would be made on GB207 rather than GB206, but don't forget that generational die sizes decrease while core counts increase. GP107 supported 768 cores (GTX 1050 Ti), TU117 supported 1024 cores (GTX 1650 mobile), GA107 supported 2560 cores (RTX 3050), AD107 supported 3072 cores (RTX 4060), and I think we would see 3584 cores in the GB207 used for a 5060 8GB/16GB, though a 12GB variant may come later given rumours of different configs. I believe the RTX 5060 would have performance between the RTX 3070 Ti and the 3080.

My expectations,

RTX 5050 8GB 2560 cores (performance of 4060 also 75W) for $169
RTX 5060 12GB (I hope) 3584 cores for $329
RTX 5060 Ti 16GB 5120 cores for $399
RTX 5070 16GB/18GB 7680 cores for $599
RTX 5080 24GB 12288 cores for $899
RTX 5090 36GB 22528-24576 cores for $1699
The 5090 is the first card to be released, and this article clearly states that 3GB GDDR7 modules aren't coming until 2026 or so. It will have to remain 24GB.
 
Joined
Jun 24, 2017
Messages
172 (0.06/day)
I agree, but a service like GeForce NOW is useful to many, even myself.
When I'm at my parents' home for holidays etc., I can use my mother's laptop and play whatever I want for the time I'm there.
Also, not everyone is capable of buying a full-fledged PC to play games, but a half-decent laptop is more than enough to stream from the cloud.

GPU prices are high because of demand, not necessarily because they push the industry in the streaming-service direction.
I disagree.
Among other reasons I don't know, prices are high because production is more profitable in sectors like workstations and especially big datacenters of all kinds. Sales and distribution are easier and profit per wafer is substantially higher. This includes consoles: one chip every few years, millions of replicas.

If they wanted to produce more RAM, or more NAND, or more GPUs, they could, but they simply act as a cartel to keep prices high. Proof, simple:
Evaluate the evolution of performance, technologies, RAM, etc. in the datacenter market vs. the consumer market.
Another example is consumer NAND, which is getting "faster" but worse in terms of writes/endurance at each iteration and almost stuck in price.
Another example is the amount of RAM available to consumers. Stuck at 16-32 GB for more than 10 years. --> if you need more, go to [130].
Another example is HBM, ECC, the number of PCIe lanes, 10 Gbit networking --> why do you need it? Go to [130].
Another new trend is E-cores and P-cores, AVX-512 deprecation, etc. More bullshit. --> Need more P-cores? Go to [130].

I don't say the consumer market is not improving; I say an "artificial gap"/"deliberate extreme segmentation" is being created between the consumer and the "pro*"/"datacenter" market.
[130] I say that gap is generated by the same companies providing cloud services (and a big bunch of freelance programmers and tech gurus [hand-made scarf here]).
And I say the low amount of RAM and the narrow buses on GPUs are part of that strategy as well; it's not BOM costs, not "limited resources", not "consumer needs", not "it's enough for 4K", [put more lame tweet rationalizations here].

It's like any other sector: drugs, Netflix, etc. It starts cheap, you get hooked, substitutes disappear, prices increase, service quality drops, prices increase again, and you prepare your next business. <-- an MBA reduced to one line.


Do you know Parsec, btw?
*That's another topic: the workstation crumble.
 