Tuesday, September 13th 2022
NVIDIA AD102 "Ada" Packs Over 75 Billion Transistors
NVIDIA's next-generation AD102 "Ada" GPU is shaping up to be a monstrosity, with a rumored transistor count north of 75 billion. That is over 2.6 times the 28.3 billion transistors of the current-gen GA102 silicon. NVIDIA is reportedly building the AD102 on the TSMC N5 (5 nm EUV) node, which offers a significant transistor-density uplift over the Samsung 8LPP (8 nm DUV) node on which the GA102 is built. 8LPP offers 44.56 million transistors per mm² of die area (MTr/mm²), while N5 offers a whopping 134 MTr/mm², which is consistent with the transistor-count gain. This would put the die area in the neighborhood of 560 mm². The AD102 is expected to power high-end RTX 40-series SKUs in the RTX 4090 and RTX 4080 series.
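The article's figures can be sanity-checked with simple arithmetic. The sketch below uses only the rumored numbers quoted above; none of them are confirmed specifications.

```python
# Back-of-envelope check of the rumored AD102 figures from the article.
ga102_transistors = 28.3e9   # GA102 on Samsung 8LPP
ad102_transistors = 75e9     # rumored AD102 transistor count
density_n5 = 134e6           # quoted N5 density, transistors per mm^2

# Transistor-count ratio vs. the current-gen GA102
ratio = ad102_transistors / ga102_transistors
print(f"transistor ratio: {ratio:.2f}x")        # ~2.65x, i.e. "over 2.6 times"

# Implied die area at the quoted N5 density
die_area_mm2 = ad102_transistors / density_n5
print(f"implied die area: {die_area_mm2:.0f} mm^2")  # ~560 mm^2
```

Dividing the rumored transistor count by the quoted N5 density reproduces the article's ~560 mm² estimate, which is what the follow-up comments go on to dispute.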
Source:
kopite7kimi (Twitter)
31 Comments on NVIDIA AD102 "Ada" Packs Over 75 Billion Transistors
Not sure about Ampere and Ada, but I'd happily take ~10% die space for RT acceleration and Tensor cores over a 10% increase in raster performance.
Given a 7600XT is said to beat a 6950XT, I easily believe the 7900XT will be 2x a 6900XT, and that's what the inside information has been saying all along. So a 4090 Ti should similarly be 2x a 3090Ti, again in the ballpark of the leaks.
But I can understand nVidia's goal of accelerating real work rather than gaming.
The million-transistors-per-mm² (MTr/mm²) figures you are using are not comparable, so the result of the comparison is simply wrong.
If I understood correctly, you took a GA102-derived figure for 8LPP (44.56 MTr/mm²) and tried to compare it with something like the N5 Apple A14 SoC (134 MTr/mm²).
You will get closer results if you calculate (separately for logic/SRAM/analog, etc.) based on foundry tech sites like WikiChip, for example. Their figures differ slightly from official TSMC claims: for N10 vs N16, TSMC's logic-density scaling claim is 2X while WikiChip gives 1.82X; for N5 vs N7, TSMC claims 1.84X while WikiChip gives 1.87X, though in that case TSMC compared a whole CPU block.
According to WikiChip, Samsung's 8LPP is around 17% denser than TSMC's N10 in logic, and if you compare Apple's 10 nm and 5 nm SoCs, the actual scaling is just 2.73X!
Logic density scales very differently from caches/analog (e.g. for N5 vs N7, logic scaling is 1.84X, SRAM 1.35X, and analog only 1.2X!)
So if you take 2 completely different designs the compared results will be completely wrong.
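The point about mixed scaling can be illustrated with an area-weighted calculation. The per-block scaling factors below are the N7-to-N5 figures quoted in the comment; the two area mixes are purely illustrative assumptions, not real die breakdowns.

```python
# Sketch: why whole-chip density scaling depends on the design's block mix.
# Scaling factors are the N7 -> N5 numbers quoted in the comment; the area
# mixes are hypothetical, for illustration only.
scaling = {"logic": 1.84, "sram": 1.35, "analog": 1.2}

def effective_scaling(area_mix):
    """Shrink each block type by its own factor, sum the new areas,
    and return the overall old-area / new-area ratio."""
    new_area = sum(share / scaling[block] for block, share in area_mix.items())
    return 1.0 / new_area

logic_heavy = {"logic": 0.7, "sram": 0.2, "analog": 0.1}  # assumed mix
cache_heavy = {"logic": 0.4, "sram": 0.5, "analog": 0.1}  # assumed mix

print(f"logic-heavy design: {effective_scaling(logic_heavy):.2f}x")  # ~1.63x
print(f"cache-heavy design: {effective_scaling(cache_heavy):.2f}x")  # ~1.49x
```

Two chips on the same node pair end up with noticeably different effective density gains purely because of their logic/SRAM/analog split, which is why comparing MTr/mm² across unrelated designs is misleading.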
Anyway, if the 75b+ figure is true this means at least 45b+ for AD103.
If the 96MB cache implementation is similar to AMD's Infinity Cache and NVIDIA uses 6T SRAM, for example, the transistor count is inconsequential (4.6b transistors plus redundancy/overhead).
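The 4.6-billion figure follows directly from the 6T-SRAM assumption in the comment. This counts only the raw bit cells; redundancy, tags, and peripheral logic would add overhead on top, as the comment notes.

```python
# Rough transistor budget for a 96 MB cache built from 6T SRAM cells,
# following the comment's assumption (raw cell array only).
cache_bytes = 96e6           # 96 MB (decimal, matching the ~4.6b figure)
bits = cache_bytes * 8
transistors = bits * 6       # 6 transistors per SRAM bit cell
print(f"{transistors / 1e9:.2f} billion transistors")  # ~4.61
```

Against a rumored 75-billion total, the cache array alone is only around 6% of the budget, which is why the commenter calls it inconsequential.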
So comparing 7-GPC designs (AD103 vs GA102, 10752 CUDA cores both), the transistor increase per GPC is just insane. I wonder what extra features Ada will implement and what DX feature level it will end up at in the future.
TSMC logic density scaling by WikiChip:
Apple SOC density scaling example:
At some point in the future when RT perf is actually good and many more games support RT and DLSS then sure.
I'm running a 2080Ti right now at 1440p 165Hz.
Oh noes I r wrong it is 102.
The previous rumor was around 600 mm², which seems difficult since regular N4 is only around 6% denser than N5; maybe we have a customized node like in Turing's case (TSMC 12nm "FFN").
I say this after watching it happen numerous times from both brands.
Bear in mind too, IIRC, of that 10% die area roughly 2/3 is RT and 1/3 is Tensor, so ~3% die space for Tensor alone; an easy yes for me, and indeed for many people like me. Well, I've used that die area for hundreds of hours of RT/DLSS-enabled gaming, so I'd say they definitely are at least gaming features too.
People are thinking that this chip will be amazingly fast based solely upon the number of transistors. It won't come close to that because it will be severely power limited.