Wednesday, March 2nd 2022

NVIDIA GeForce "Ada Lovelace" Memory Bus-width Info Leaked

The deluge of NVIDIA leaks continues following the major cyber-attack on the company, with hackers getting away with sensitive information about current and upcoming products. The latest in this series covers the memory bus widths of the next-generation RTX 40-series GPUs based on the "Ada Lovelace" graphics architecture. There is also early information covering the streaming multiprocessor (SM) counts of each GPU and their large on-die caches.

The top-of-the-line AD102 silicon allegedly has a 384-bit wide memory bus, similar to its predecessor. The next-best AD103 silicon has a 256-bit wide memory bus. Things get very interesting with the AD104, which has a 192-bit wide memory bus. The AD104 is a revelation here, because it succeeds a long line of NVIDIA GPUs with 256-bit memory buses (e.g., GA104, TU104, GP104, GM204). This supports the theory that, much like AMD, NVIDIA is narrowing memory bus widths in the lower segments to cut board costs, compensating for the narrower buses with large on-die caches, high memory data-rates, and other memory-management optimizations.
Keeping with the theme described above, the AD106 is expected to feature a 128-bit wide bus, while its predecessors, the GA106 and TU106, have 192-bit buses. Interestingly, NVIDIA didn't cheap out on bus width with its smallest AD107 chip, which retains a 128-bit bus. We expect NVIDIA to use faster memory data-rates across the board: for the lower end, the company could tap into 16 Gbps chips if they are priced right, and bring GDDR6X to the performance segment.
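To put numbers on that trade-off: theoretical peak memory bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. Here is a minimal back-of-the-envelope sketch; the 21 Gbps figure is an assumed GDDR6X-class speed for illustration, not a leaked specification:

```python
# Theoretical peak bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbps(256, 14))  # 448.0 GB/s -- GA104/RTX 3070 with 14 Gbps GDDR6
print(bandwidth_gbps(192, 21))  # 504.0 GB/s -- hypothetical AD104 with 21 Gbps GDDR6X
```

In other words, a 192-bit bus running assumed 21 Gbps GDDR6X would exceed the 256-bit, 14 Gbps GDDR6 configuration it replaces, before the larger on-die cache is even factored in.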
Source: kopite7kimi (Twitter)

17 Comments on NVIDIA GeForce "Ada Lovelace" Memory Bus-width Info Leaked

#1
Steevo
Just like tessellation, perfect AF, high-quality color, and huge caches, AMD did it first; now nvidia is going to come along and fuck it up so they seem better and first.

Count down to trolls
Posted on Reply
#2
nguyen
The 3070 has 30% less bandwidth than the 2080 Ti (256-bit vs 352-bit) and less L2 cache (4 MB vs 5.5 MB), yet it is still able to perform similarly to the 2080 Ti (until the 3070 runs out of VRAM buffer at 4K + RT).

Looks to me like a bigger cache only benefits the high-end parts and not the lower end; the best examples are the 6900 XT vs 3090 and the 6500 XT vs "any old GPU from 6 years ago".
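As a rough check of the 30% figure above: both cards ship with 14 Gbps GDDR6, so the bandwidth gap is purely the bus-width ratio:

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps)
bw_2080ti = 352 / 8 * 14  # 616.0 GB/s (RTX 2080 Ti, 14 Gbps GDDR6)
bw_3070 = 256 / 8 * 14    # 448.0 GB/s (RTX 3070, 14 Gbps GDDR6)
print(f"{1 - bw_3070 / bw_2080ti:.0%} less bandwidth")  # ~27%, close to the 30% cited
```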
Posted on Reply
#3
Metroid
So the 4070 will have 192-bit while the 3070 has 256-bit. Nvidia and AMD are lowering memory bus widths more and more; that is how you take a generation backward. Nvidia is learning from AMD in that regard. I guess they are trying very hard to charge more and more and give less and less every generation.
Posted on Reply
#4
pavle
They can't afford to give customers 2x the performance of the previous generation. I don't see any point in upgrading when that condition isn't met or exceeded.
Posted on Reply
#5
ShurikN
Makes sense. A bigger bus is more complex, therefore more expensive. It's one of the main reasons we no longer see 512-bit memory buses.

The great thing is, the savings will be passed down to us consumers /s
Posted on Reply
#6
stimpy88
Oh god, so we have another 3 years of $700+ 8GB, 10GB, and 12GB crappy cards to put up with. nGreedia, 8GB is for the bargain basement, 16GB is mid-range now.
Posted on Reply
#7
nguyen
stimpy88: Oh god, so we have another 3 years of $700+ 8GB, 10GB, and 12GB crappy cards to put up with. nGreedia, 8GB is for the bargain basement, 16GB is mid-range now.
?? There should be 16 Gb (2 GB) GDDR6X modules now (the 3090 Ti is supposed to have them), so a 256-bit GPU can have 16 GB of VRAM, and a 384-bit GPU 24 GB.
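The capacity arithmetic follows directly from the bus width: each GDDR6/GDDR6X device sits on a 32-bit channel, so with 16 Gb (2 GB) modules and no clamshell (double-sided) mounting the totals work out as in this minimal sketch:

```python
# VRAM capacity from bus width, assuming one 16 Gb (2 GB) GDDR6X
# device per 32-bit channel and no clamshell mounting
def vram_gb(bus_width_bits: int, module_gb: int = 2) -> int:
    return (bus_width_bits // 32) * module_gb

for bus in (384, 256, 192, 128):
    print(f"{bus}-bit -> {vram_gb(bus)} GB")  # 24, 16, 12, 8 GB
```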
Posted on Reply
#8
watzupken
Metroid: So the 4070 will have 192-bit while the 3070 has 256-bit. Nvidia and AMD are lowering memory bus widths more and more; that is how you take a generation backward. Nvidia is learning from AMD in that regard. I guess they are trying very hard to charge more and more and give less and less every generation.
I feel the loss in memory bus width from 256- to 192-bit may not be that big of a problem. You can look at the current RX 6700 XT vs 5700 XT just to get an idea; the large cache definitely helped pick up the slack. And generally, the 70-series cards will be marketed for a 1440p target resolution while still allowing decent 4K performance, so that extra memory bandwidth may not be that meaningful until you hit 4K. It's just like when you look at the RTX 3070 vs 3070 Ti: you don't see a meaningful improvement in performance despite the Ti variant having a significant memory bandwidth advantage.
For AMD and Nvidia, cutting the memory bus will result in real-estate and cost savings. In addition, you can be sure it's going to arrive with 12 GB of VRAM, as opposed to potentially 8 GB again, since Nvidia tends to offer as little VRAM as possible. I'm not sure they will be so generous as to offer 16 GB; I've been waiting to see if they will ever release a 16 GB Ampere desktop card, and given we are close to the end of the Ampere cycle, it looks unlikely.
Posted on Reply
#9
InVasMani
It might be as much about deterring mining as cost savings. I don't think it will prevent mining, but it could shift the desire and focus away from the biggest offenders.
Posted on Reply
#10
Metroid
watzupken: I feel the loss in memory bus width from 256- to 192-bit may not be that big of a problem. You can look at the current RX 6700 XT vs 5700 XT just to get an idea; the large cache definitely helped pick up the slack. And generally, the 70-series cards will be marketed for a 1440p target resolution while still allowing decent 4K performance, so that extra memory bandwidth may not be that meaningful until you hit 4K. It's just like when you look at the RTX 3070 vs 3070 Ti: you don't see a meaningful improvement in performance despite the Ti variant having a significant memory bandwidth advantage.
For AMD and Nvidia, cutting the memory bus will result in real-estate and cost savings. In addition, you can be sure it's going to arrive with 12 GB of VRAM, as opposed to potentially 8 GB again, since Nvidia tends to offer as little VRAM as possible. I'm not sure they will be so generous as to offer 16 GB; I've been waiting to see if they will ever release a 16 GB Ampere desktop card, and given we are close to the end of the Ampere cycle, it looks unlikely.
For gaming, yes; for anything other than gaming, no, because the memory bus is very important there.
Posted on Reply
#11
lexluthermiester
Steevo: Count down to trolls
Irony, my favorite form of humor!
btarunr
From this list it seems the 64-bit memory bus is gone! And I say good riddance!
Posted on Reply
#12
Ruru
S.T.A.R.S.
lexluthermiester: From this list it seems the 64-bit memory bus is gone! And I say good riddance!
I guess they just don't make such low-end stuff (like the GT 1030) anymore, and just concentrate on mid-range and faster stuff.
Posted on Reply
#13
lexluthermiester
MaenadFIN: I guess they just don't make such low-end stuff (like the GT 1030) anymore, and just concentrate on mid-range and faster stuff.
I would love to see low-profile versions of the 3050 or even a GT 3040.
Posted on Reply
#14
Ruru
S.T.A.R.S.
lexluthermiester: I would love to see low-profile versions of the 3050 or even a GT 3040.
Yeah the RTX A2000 looked promising with its amazing efficiency.
Posted on Reply
#15
lexluthermiester
MaenadFIN: Yeah the RTX A2000 looked promising with its amazing efficiency.
It really does but we need a consumer version of that, preferably single slot.
Posted on Reply
#16
Ruru
S.T.A.R.S.
lexluthermiester: It really does but we need a consumer version of that, preferably single slot.
Yeah, that's what I meant. The reviewed "Quadro" was a good example that it could be done, as it ran cool and very quiet with just a cheap aluminium heatsink on its dual-slot cooler. :)
Posted on Reply
#17
ppn
There is the 103 now. Previously, 106 was the home of the RTX 2060 and 2070, and the 1070 and 1080 used the same 104 die, so nothing can simply be taken for granted. The size of AD104 is about the same as GA106, 280 mm², so 192-bit is not surprising; even a ~500 mm² die can be 256-bit, as in the case of the RTX 2080. But again, those are all rumors, and there is no way 5 nm can be produced at 600 mm²; 420 mm² is the limit, the same way 10 nm is limited to 840 mm², so those L2 caches are highly unlikely. The step from 10 nm to 5 nm should provide a 2.66-3.33x transistor-density gain; it should be doable to increase the CUDA core count by 71% and fit within that limit, and even add more cache, but not that big.
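Reconstructing that back-of-the-envelope arithmetic, with the caveat that every input below is the poster's assumption or a rumor, not a confirmed spec: scale GA102's transistor budget by the claimed density gain and the assumed die-area cap.

```python
# ppn's estimate: transistor budget scales with density gain * die-area ratio
# (the 420 mm2 cap and 2.66x density figure are the poster's assumptions)
ga102_area_mm2 = 628   # GA102 (Samsung 8 nm) die size
ga102_cuda = 10752     # GA102 CUDA cores
density_gain = 2.66    # assumed low end of the quoted 2.66-3.33x jump
area_cap_mm2 = 420     # poster's claimed 5 nm die-size limit

budget = density_gain * area_cap_mm2 / ga102_area_mm2
print(f"~{budget:.2f}x transistor budget")                # ~1.78x
print(f"+71% CUDA cores -> ~{round(ga102_cuda * 1.71)}")  # ~18386
```

A ~1.78x budget indeed leaves room for roughly 71% more CUDA cores plus some extra cache, which is the poster's point about the rumored giant L2 being a stretch.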
Posted on Reply