Thursday, March 7th 2024

First GPUs Implementing GDDR7 Memory Could Stick with 16 Gbit Chips, 24 Gbit Possible

Some of the first gaming GPUs that implement the next-generation GDDR7 memory standard will stick to 16 Gbit memory chip densities (2 GB), according to kopite7kimi, a reliable source for NVIDIA GeForce leaks. 16 Gbit is the standard density on the current RTX 40-series graphics cards, which means a GPU with a 256-bit memory bus gets 16 GB of video memory, one with a 192-bit bus gets 12 GB, and one with a 128-bit bus gets 8 GB. The flagship RTX 4090 uses twelve of these chips across its 384-bit memory bus for 24 GB.
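
The arithmetic behind those figures is straightforward: each GDDR chip sits on its own 32-bit channel, so the chip count is the bus width divided by 32, and total capacity is chip count times per-chip density. A minimal sketch of that math (the function name and loop below are illustrative, not from the leak):

```python
# Minimal sketch of GDDR capacity math: one chip per 32-bit channel, 8 Gbit = 1 GB.
# Function name and example values are illustrative, not from the leak.

def vram_capacity_gb(bus_width_bits: int, chip_density_gbit: int) -> float:
    """Total VRAM in GB with one chip per 32-bit channel (no clamshell)."""
    chips = bus_width_bits // 32
    return chips * chip_density_gbit / 8

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit bus, 16 Gbit chips: {vram_capacity_gb(bus, 16):.0f} GB")
# 128-bit: 8 GB, 192-bit: 12 GB, 256-bit: 16 GB, 384-bit: 24 GB
```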

Kopite7kimi's leak could have a different connotation: much like with the RTX 30-series "Ampere" and RTX 40-series "Ada," NVIDIA might not use JEDEC-standard GDDR7 across all product segments, and might instead co-engineer an exclusive standard with a DRAM company, with memory bus signaling and power management technologies optimized for its graphics architecture. It co-developed GDDR6X with Micron Technology to do exactly this. GDDR7 comes with data rates as high as 32 Gbps, which will be the top speed for the first round of GDDR7 chips arriving toward the end of 2024 and heading into 2025. The second round of GDDR7 chips, slated for late 2025 going into 2026, could go as fast as 36 Gbps. This is similar to how the first GDDR6 chips ran at 14-16 Gbps, while the next round reached 18-20 Gbps.
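
To put those per-pin data rates in perspective, peak bandwidth scales with bus width: bandwidth (GB/s) = bus width (bits) × data rate (Gbps) ÷ 8. A rough back-of-the-envelope sketch, with bus widths picked purely for illustration:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: total pins times per-pin rate, over 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gb_s(256, 32))  # 1024.0 GB/s (first-round 32 Gbps GDDR7)
print(peak_bandwidth_gb_s(384, 36))  # 1728.0 GB/s (second-round 36 Gbps GDDR7)
```
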
Another very interesting development is a 3DCenter.org article reporting that GDDR7 will see non-power-of-two densities such as 24 Gbit and 48 Gbit. A 24 Gbit GDDR7 chip holds 3 GB, which over a 192-bit memory bus gives 18 GB; over a 256-bit memory bus yields 24 GB; and over a 128-bit memory bus yields 12 GB. Assuming GPU manufacturers want to keep board costs low and narrow the memory bus width to take advantage of the 32 Gbps speed, they now have the option of using these 24 Gbit chips to make up the difference in video memory size. Of course, the GDDR7 standard allows higher densities, such as 32 Gbit or even 64 Gbit, but those will be exotic, expensive, and restricted to the professional-visualization market; think RTX Blackwell Generation or Radeon Pro RDNA 4.
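
That trade-off is easy to quantify: a hypothetical 192-bit board with 24 Gbit (3 GB) chips at 32 Gbps would carry both more memory and more bandwidth than a 256-bit board with 16 Gbit chips at 20 Gbps GDDR6. The configurations below are purely illustrative, not leaked specifications:

```python
def board_config(bus_width_bits: int, chip_density_gbit: int, data_rate_gbps: float):
    """Return (capacity in GB, peak bandwidth in GB/s) for a hypothetical board."""
    chips = bus_width_bits // 32
    capacity_gb = chips * chip_density_gbit / 8
    bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
    return capacity_gb, bandwidth_gb_s

print(board_config(256, 16, 20))  # (16.0, 640.0)  256-bit, 16 Gbit GDDR6 at 20 Gbps
print(board_config(192, 24, 32))  # (18.0, 768.0)  192-bit, 24 Gbit GDDR7 at 32 Gbps
```
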
Sources: kopite7kimi (Twitter), VideoCardz, 3DCenter.org

7 Comments on First GPUs Implementing GDDR7 Memory Could Stick with 16 Gbit Chips, 24 Gbit Possible

#1
Gameslove
Big disappointment, no gaming GPU with over 24 GB of VRAM until 2026.
#2
Tomorrow
Gameslove: Big disappointment, no gaming GPU with over 24 GB of VRAM until 2026.
Likely, yes. RDNA 4 will reportedly not use GDDR7 at all, and if NVIDIA sticks with its limited VRAM pools and ever-increasing prices, then this could be another generation to write off as not worth it.
#3
Steevo
Tomorrow: Likely, yes. RDNA 4 will reportedly not use GDDR7 at all, and if NVIDIA sticks with its limited VRAM pools and ever-increasing prices, then this could be another generation to write off as not worth it.
Large on-die caches have mitigated the need for VRAM to have high bandwidth, in favor of better latency, and even that isn't as big of a deal since caches and better memory controllers are paving the way for better utilization.
#4
ghazi
Generally speaking, capacity and speed are inversely correlated, so it's not surprising to see the first generation still at 16 Gbit.
Gameslove: Big disappointment, no gaming GPU with over 24 GB of VRAM until 2026.
What do you actually need it for in a gaming GPU? The only use I could see right now would be if AMD wanted to put out another 256-bit flagship with 32 GB. Clamshell also shouldn't be as big of an issue with standard GDDR7 as it was with the GDDR6X housefire.
#5
Macro Device
And now I wish someone would launch a sub-40 W GPU with 9 GB of VRAM (96-bit bus). Even at Ada Lovelace efficiency rates, that's more than enough for leprechaun-sized gaming computers. APUs like the 8600G are cool, but this yet-to-exist GPU would be at least twice as fast.
#6
kondamin
Ah yes, I can't wait for the $2,000 5090 with a 256-bit bus and 16 GB.
#7
THU31
Steevo: Large on-die caches have mitigated the need for VRAM to have high bandwidth, in favor of better latency, and even that isn't as big of a deal since caches and better memory controllers are paving the way for better utilization.
But it hasn't mitigated the need for VRAM capacity, and that's what actually caused the biggest issue with the 40 series.

There's just no way the 5070 will have 12 GB, because if it does, it's actually DOA. I upgrade every generation, so 12 GB didn't bother me last time. But in 2025? GTFO.