Friday, September 23rd 2022

NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

Here's our first look at the "AD103" and "AD104" chips powering the GeForce RTX 4080 16 GB and RTX 4080 12 GB, respectively, thanks to Ryan Smith of AnandTech. These are the second- and third-largest implementations of the GeForce "Ada" graphics architecture, after the "AD102" powering the RTX 4090. Both chips are built on the same TSMC 4N (4 nm EUV) silicon fabrication process as the AD102, but sit well below it in specifications: the AD102 has a staggering 80 percent more number-crunching machinery than the AD103, and a 50 percent wider memory interface. The sheer numbers at play here enable NVIDIA to carve out dozens of SKUs from these three chips alone, before the mid-range "AD106" is shown in the future.

The AD103 die measures 378.6 mm², significantly smaller than the 608 mm² of the AD102, which is reflected in its much lower transistor count of 45.9 billion. The chip physically features 80 streaming multiprocessors (SM), which work out to 10,240 CUDA cores, 320 Tensor cores, 80 RT cores, and 320 TMUs. The chip is endowed with a healthy ROP count of 112, and has a 256-bit wide GDDR6X memory interface. The AD104 is smaller still, with a die size of 294.5 mm², a transistor count of 35.8 billion, 60 SM, 7,680 CUDA cores, 240 Tensor cores, 60 RT cores, 240 TMUs, and 80 ROPs. Ryan Smith says the RTX 4080 12 GB maxes out the AD104, which means its memory interface is physically just 192-bit wide.
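The unit totals scale directly with the SM count. As a rough sanity check, here is a minimal sketch assuming Ada's per-SM layout of 128 CUDA cores, 4 Tensor cores, 1 RT core, and 4 TMUs (ratios inferred from the totals above, not an official NVIDIA breakdown):

```python
# Per-SM unit counts assumed from the totals quoted above (not an
# official NVIDIA breakdown): 128 CUDA cores, 4 Tensor cores, 1 RT core,
# and 4 TMUs per streaming multiprocessor.
ADA_PER_SM = {"CUDA cores": 128, "Tensor cores": 4, "RT cores": 1, "TMUs": 4}

def units_from_sms(sm_count: int) -> dict:
    """Scale the assumed per-SM unit counts up to a full chip."""
    return {name: per_sm * sm_count for name, per_sm in ADA_PER_SM.items()}

for chip, sms in (("AD103", 80), ("AD104", 60)):
    print(chip, units_from_sms(sms))
# AD103 {'CUDA cores': 10240, 'Tensor cores': 320, 'RT cores': 80, 'TMUs': 320}
# AD104 {'CUDA cores': 7680, 'Tensor cores': 240, 'RT cores': 60, 'TMUs': 240}
```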
Sources: Ryan Smith (Twitter), VideoCardz

152 Comments on NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

#26
Daven
Denver: The six extra chips are just Infinity Cache, apparently.

I wonder how well the GPU would work without it. Unlike RDNA2, the RDNA3 flagship will have huge bandwidth to play with.
The six extra chips are cache AND memory controllers. These must be counted as die area, as they would typically be part of a monolithic chip, and the GPU wouldn't work without them.
#27
trsttte
So they pumped out three different chips right from the start? That's new, as is the 4080 not using the top 102 chip.

Also, wasn't Jensen saying Moore's Law is dead? Seems pretty alive to me when the 3080 used a 600+ mm² chip and the 4080 is using a 300+ mm² chip :D
#28
thunderingroar
A 295 mm², 192-bit-bus card for 1100€? Good luck with that one, NV.

For reference, the previous biggest (consumer) 104-die cards were:

GA104 - RTX 3070 Ti: 392 mm², 256-bit, ~600€
TU104 - RTX 2080S: 545 mm², 256-bit, ~699€
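To put those numbers in perspective, here's a quick euro-per-mm² comparison using the rough street prices above (a crude metric, and it ignores that 4N wafers cost far more per mm² than older nodes):

```python
# Street price divided by die area for the three cards listed above.
# Prices are the approximate figures quoted in this comment, not MSRPs.
cards = [
    ("AD104 / RTX 4080 12GB", 294.5, 1100),
    ("GA104 / RTX 3070 Ti",   392.0,  600),
    ("TU104 / RTX 2080S",     545.0,  699),
]
for name, die_mm2, price_eur in cards:
    print(f"{name}: {price_eur / die_mm2:.2f} EUR/mm^2")
# AD104 / RTX 4080 12GB: 3.74 EUR/mm^2
# GA104 / RTX 3070 Ti: 1.53 EUR/mm^2
# TU104 / RTX 2080S: 1.28 EUR/mm^2
```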
#29
ModEl4
Guwapo77: That is a lot of assumptions; it will be interesting to see if you are correct.
The die-size differences (12-12.5% for AD102/Navi31 and 9-8% for AD103/Navi32) are based on the figures that leakers claimed for AMD.
The performance/W is just my estimation (the 4090 will be at most 10% less efficient if compared at the same TBP).
AMD fans saying otherwise just aren't doing AMD a favour, because anything more will lead to disappointment.
Even what I'm saying is probably too much, because if you take a highly OC'd Navi31 flagship partner card like the PowerColor Red Devil, ASUS Strix, or ASRock Formula, with a TBP close to 450 W, what I just said implies the Navi31 flagship at 100% performance and the 4090 at 90%, which probably isn't going to happen...
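Put as arithmetic, that efficiency claim amounts to the following (all inputs are speculative figures from this comment, not measurements):

```python
# Speculative perf/W comparison at equal TBP: a Navi31 flagship AIB card
# at 100% relative performance vs. the RTX 4090 at 90%, both near 450 W.
def perf_per_watt(relative_perf: float, tbp_watts: float) -> float:
    return relative_perf / tbp_watts

advantage = perf_per_watt(1.00, 450) / perf_per_watt(0.90, 450) - 1
print(f"Implied Navi31 perf/W advantage: {advantage:.0%}")  # ~11%
```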
#30
regs
The AD103 was supposed to be the RTX 4070 and the AD104 the RTX 4060, but as there is no competition, they renamed them a tier up and bumped prices up threefold.
#31
AnarchoPrimitiv
regs: The AD103 was supposed to be the RTX 4070 and the AD104 the RTX 4060, but as there is no competition, they renamed them a tier up and bumped prices up threefold.
No competition? What do you mean by that? RDNA2 matched or beat the 30 series in raster, FSR 2.0 has great reviews, and RDNA3 will most certainly compete. Because AMD's chiplet approach should be cheaper to manufacture, RDNA3 should also offer better performance per dollar... but despite all of that, everyone will buy Nvidia, rewarding that behavior and perpetuating Nvidia's constant price increases.

Let's be honest, everyone: AMD could release a GPU that matched Nvidia in every way, including ray tracing, with FSR equal to DLSS in every way, and charge less than Nvidia for it, and everyone would STILL buy Nvidia (which only proves that consumer choices are quite irrational and NOT decided by simply comparing specs, as the existence of fanboys testifies)... and as long as that's true, the GPU market will ALWAYS be hostile to consumers. The ONLY way things are going to improve for consumers is if AMD starts capturing market share and Nvidia is punished by consumers... but based on historical precedent, I have no hope for that...

And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have. Intel would have leveraged success in the GPU market (which would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia at 80% and AMD's 20% divided between AMD and Intel) to further marginalize AMD in the x86 space, for example by using its influence with OEMs to pair Intel CPUs with Intel GPUs and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW. It would have been trading a marginally better GPU market for a much worse CPU market, IMO. Although it'd never happen, what would really improve the market would be if Nvidia were broken up like AT&T was in the '80s...
#32
windwhirl
Z-GT1000: They're marketing the card as the RTX 4080 12 GB to take more money from buyers when in reality it's the RTX 4070. It's time to tell people the truth and not take any more bullshit from Nvidia.
... the 4080 16 GB variant is barely a 4070, tbh, much less the 12 GB variant.
#33
jaszy
regs: The AD103 was supposed to be the RTX 4070 and the AD104 the RTX 4060, but as there is no competition, they renamed them a tier up and bumped prices up threefold.
How so?

GK104 (GTX 680) @ 294 mm², full die with 1,536 CUDA cores = $499; adjusted for inflation, $645 USD.

GP104 (GTX 1080) @ 314 mm², full die with 2,560 CUDA cores = $599; adjusted for inflation, $740 USD.
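For anyone who wants to check those figures, the adjustment is just a ratio of consumer price index values; a sketch with approximate US CPI-U numbers (assumed, not exact):

```python
# Inflation adjustment as a CPI ratio. Index values are approximate
# annual CPI-U figures (assumed): 2012 ~229.6, 2016 ~240.0, 2022 ~296.8.
def adjust_for_inflation(price: float, cpi_then: float, cpi_now: float) -> float:
    return price * cpi_now / cpi_then

print(round(adjust_for_inflation(499, 229.6, 296.8)))  # GTX 680:  645
print(round(adjust_for_inflation(599, 240.0, 296.8)))  # GTX 1080: 741 (~$740 quoted above)
```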


I'll agree the 4080 12GB is overpriced, but relatively speaking, it's not the first time Nvidia has done this. :) Not to defend Nvidia... but they've been doing this crap for years.

If they had priced it at $700, it would have been more in line with some of their previous G104 full-die x80-class GPUs. Margins are obviously higher now... hardware EE is more expensive too.
#34
windwhirl
jaszy: How so?

GK104 (GTX 680) @ 294 mm², full die with 1,536 CUDA cores = $499; adjusted for inflation, $645 USD.

GP104 (GTX 1080) @ 314 mm², full die with 2,560 CUDA cores = $599; adjusted for inflation, $740 USD.


I'll agree the 4080 12GB is overpriced, but relatively speaking, it's not the first time Nvidia has done this. :)
Look at the sheer difference in SM counts between the 4080 16 GB and the 4090. The 4090 has a lot more SMs than the 4080 16 GB (128 vs 76), so the 4080 16 GB variant is around 60% of the 4090, and the 4080 12 GB variant, with 60 SM, is around 47% of the 4090.

en.wikipedia.org/wiki/GeForce_30_series

Meanwhile, the 3090 vs the 3080: 82 vs 68, which means the 3080 has around 83% of the 3090's SM count activated. And the 3070 Ti has 58% of the SM count of the 3090.

So, yes. You know what, the 4080 16 GB variant is actually a 4070 Ti. And the 4080 12 GB variant actually reminds me of the 3060 Ti (since both have around 47% of the core count of their respective lineup's 90-class card).

So Nvidia basically named them both "4080" just so they didn't show up as asking 1,000+ euros for a 60- or 70-class card.
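Here's the quick math behind those percentages, if anyone wants to check or extend it (SM counts taken from the respective spec sheets):

```python
# Each card's SM count as a share of its generation's flagship.
lineups = [
    ("RTX 4090", 128, [("RTX 4080 16GB", 76), ("RTX 4080 12GB", 60)]),
    ("RTX 3090", 82,  [("RTX 3080", 68), ("RTX 3070 Ti", 48), ("RTX 3060 Ti", 38)]),
]
for flagship, flagship_sms, cards in lineups:
    for name, sms in cards:
        print(f"{name}: {sms / flagship_sms:.1%} of the {flagship}'s SMs")
# RTX 4080 16GB: 59.4% of the RTX 4090's SMs
# RTX 4080 12GB: 46.9% of the RTX 4090's SMs
# RTX 3080: 82.9% of the RTX 3090's SMs
# RTX 3070 Ti: 58.5% of the RTX 3090's SMs
# RTX 3060 Ti: 46.3% of the RTX 3090's SMs
```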
#36
windwhirl
ARF: I have some performance figures:

Cyberpunk 2077 @ 3840x2160, RT on:

RTX 4090: 46.5 FPS (GeForce RTX 4090 Performance Figures and New Features Detailed by NVIDIA, wccftech.com)
RTX 3090 Ti: 23.6 FPS (ASRock Radeon RX 6950 XT OC Formula Review - Ray Tracing | TechPowerUp)
A worthless comparison, because the test systems are completely different. W1zzard was using a Ryzen 5800X, with 16 GB of RAM, on Windows 10.

Those guys were using an unknown version of Windows 11, a Core i9-12900K, and 32 GB of RAM, with speed and timings unknown.
#37
ARF
windwhirl: A worthless comparison, because the test systems are completely different. W1zzard was using a Ryzen 5800X, with 16 GB of RAM, on Windows 10.

Those guys were using an unknown version of Windows 11, a Core i9-12900K, and 32 GB of RAM, with speed and timings unknown.
At 4K it doesn't matter. :D
#38
Fleurious
I'm assuming that once 3000-series stock sells out, NVIDIA will quietly release a 4070 that is identical to the 4080 12 GB and replaces it.
#39
JalleR
I am looking forward to the 4080 10 GB and the 4080 8 GB; I think one of those would be a good upgrade for my wife. Or maybe a 4080 6 GB...
#40
jaszy
windwhirl: Look at the sheer difference in SM counts between the 4080 16 GB and the 4090. The 4090 has a lot more SMs than the 4080 16 GB (128 vs 76), so the 4080 16 GB variant is around 60% of the 4090, and the 4080 12 GB variant, with 60 SM, is around 47% of the 4090.

en.wikipedia.org/wiki/GeForce_30_series

Meanwhile, the 3090 vs the 3080: 82 vs 68, which means the 3080 has around 83% of the 3090's SM count activated. And the 3070 Ti has 58% of the SM count of the 3090.

So, yes. You know what, the 4080 16 GB variant is actually a 4070 Ti. And the 4080 12 GB variant actually reminds me of the 3060 Ti (since both have around 47% of the core count of their respective lineup's 90-class card).

So Nvidia basically named them both "4080" just so they didn't show up as asking 1,000+ euros for a 60- or 70-class card.
So you do realize that SM count doesn't map linearly to performance from generation to generation, right? Nvidia has also moved around the "class" of GPUs for multiple generations.

Like I said, the 4080 12GB isn't too far off from what cards like the GTX 680 or GTX 1080 were if you factor in inflation. The only difference these days is that Nvidia moved the "top end" to a higher goal post. That's it.

Is the 4080 12GB overpriced? Yes, but it isn't too far off from certain previous x80 GPUs with full G104 dies. Like I said, EE design/cooling is also WAY more expensive these days. We're not talking about 150-200 W cards anymore.

Am I the only one who realizes Nvidia has been doing this shit for years?
#41
ARF
jaszy: GTX 680 or GTX 1080
Speaking of which... if history is anything to go by, then we won't see competition from Radeon. Those were exactly the worst times for Radeon, with the Vega 64 and HD 7970.

What is AMD preparing to counter the RTX 4090 launch? :confused:
#42
jaszy
ARF: Speaking of which... if history is anything to go by, then we won't see competition from Radeon. Those were exactly the worst times for Radeon, with the Vega 64 and HD 7970.

What is AMD preparing to counter the RTX 4090 launch? :confused:
Who knows. I hope the leaks aren't true. AMD seems to be going the NVIDIA route, downgrading specs per generation to ensure people "upgrade" sooner.

And I don't trust AMD to be a savior either. MSRP pricing on the later-released RX 6000 cards during the mining crisis was a joke...

The truth is, both these companies only care about your dollar. Let them fight for it.
#43
RH92
The Quim Reaper: ...are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card, or will they have the integrity to call it out for what it actually is?
First of all, TPU is not a "gaming" tech reviewer; it is an information website centered on technology.

Secondly, reviewers are legally bound to call a given product what the manufacturer calls it in their presentations; it's not like they can go out and call it random names... They can express their opinion on the naming/segmentation, but they can't make up names, so there's no need to make a fuss about it.

As long as they provide accurate information about the performance and price/performance ratio, they've done their job. It's up to the customer to make the final decision based on this information.
#44
Fasola
AnarchoPrimitiv: And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have. Intel would have leveraged success in the GPU market (which would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia at 80% and AMD's 20% divided between AMD and Intel) to further marginalize AMD in the x86 space, for example by using its influence with OEMs to pair Intel CPUs with Intel GPUs and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW. It would have been trading a marginally better GPU market for a much worse CPU market, IMO. Although it'd never happen, what would really improve the market would be if Nvidia were broken up like AT&T was in the '80s...
To be fair, I don't think I've ever seen an Intel CPU paired with an AMD dGPU in a laptop.
#45
RH92
Pumper: Insane world when a 4090 is the best-value GPU.
There is no information out there to allow for an educated opinion about value; for that you need reviews first...
#46
Soul_
So, they are basically a 4070 and a 4060.
#47
ARF
Soul_: So, they are basically a 4070 and a 4060.
Yes. The confusion comes from the seemingly large leap, but bear in mind that the old generation was built on Samsung 8N, which in reality is a 12 nm-class process node.
#48
Soul_
ARF: Yes. The confusion comes from the seemingly large leap, but bear in mind that the old generation was built on Samsung 8N, which in reality is a 12 nm-class process node.
I don't think we are confused about anything here. There were node leaps in the past as well (case in point, Maxwell to Turing), but we've never seen the core ratio of the Titan (biggest chip) to the 1080 be this skewed.
#49
AnotherReader
ARF: Yes. The confusion comes from the seemingly large leap, but bear in mind that the old generation was built on Samsung 8N, which in reality is a 12 nm-class process node.
It is closest to TSMC's 10 nm; TSMC's 12 nm is 16 nm with different standard cells.
#50
metalslaw
Which means a 4080 Ti (when it arrives), using a maxed-out AD103 die (with 80/80 SMs enabled), will probably be only ~10% faster than the 4080 16 GB (76/80 SMs enabled).

Which will still leave a huge gap to the 4090...

I guess they could cut down the AD102 die a ton to create something truly in the middle of the 4090 and 4080 16 GB, but I think it's just going to be a maxed-out AD103 die. :/
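Assuming performance scales linearly with SM count alone (a generous assumption that ignores clocks, bandwidth, and power limits), the ceiling for such a card looks like this; anything beyond it would have to come from clock bumps:

```python
# Upper bound for a hypothetical full-AD103 4080 Ti vs. the 4080 16GB,
# assuming perfectly linear scaling with SM count.
sms_4080_16gb = 76    # RTX 4080 16GB: 76 of AD103's 80 SMs enabled
sms_ad103_full = 80   # fully enabled AD103
uplift = sms_ad103_full / sms_4080_16gb - 1
print(f"SM-count uplift: {uplift:.1%}")  # 5.3%
```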