Friday, September 23rd 2022

NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

Here's our first look at the "AD103" and "AD104" chips powering the GeForce RTX 4080 16 GB and RTX 4080 12 GB, respectively, thanks to Ryan Smith from AnandTech. These are the second- and third-largest implementations of the GeForce "Ada" graphics architecture, with the "AD102" powering the RTX 4090 being the largest. Both chips are built on the same TSMC 4N (4 nm EUV) silicon fabrication process as the AD102, but trail it significantly in specifications. For example, the AD102 has a staggering 80 percent more number-crunching machinery than the AD103, and a 50 percent wider memory interface. The sheer numbers at play here enable NVIDIA to carve out dozens of SKUs from these three chips alone, before the mid-range "AD106" arrives in the future.

The AD103 die measures 378.6 mm², significantly smaller than the 608 mm² of the AD102, which is reflected in a much lower transistor count of 45.9 billion. The chip physically features 80 streaming multiprocessors (SM), which work out to 10,240 CUDA cores, 320 Tensor cores, 80 RT cores, and 320 TMUs. The chip is endowed with a healthy ROP count of 112, and has a 256-bit wide GDDR6X memory interface. The AD104 is smaller still, with a die size of 294.5 mm², a transistor count of 35.8 billion, 60 SM, 7,680 CUDA cores, 240 Tensor cores, 60 RT cores, 240 TMUs, and 80 ROPs. Ryan Smith says that the RTX 4080 12 GB maxes out the AD104, which means its memory interface is physically just 192-bit wide.
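The unit totals above follow directly from the SM counts. As a quick sanity check, here is a sketch of the per-SM ratios; the ratios (128 CUDA cores, 4 Tensor cores, 1 RT core, 4 TMUs per SM) are inferred from the figures quoted here, not stated explicitly:

```python
# Per-SM unit ratios for "Ada", inferred from the totals in this article.
ADA_PER_SM = {"cuda": 128, "tensor": 4, "rt": 1, "tmu": 4}

def totals(sm_count):
    """Return per-chip unit totals for a given SM count."""
    return {unit: per_sm * sm_count for unit, per_sm in ADA_PER_SM.items()}

print(totals(80))  # full AD103: {'cuda': 10240, 'tensor': 320, 'rt': 80, 'tmu': 320}
print(totals(60))  # full AD104: {'cuda': 7680, 'tensor': 240, 'rt': 60, 'tmu': 240}
```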
Sources: Ryan Smith (Twitter), VideoCardz

152 Comments on NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

#1
The Quim Reaper
the RTX 4080 12 GB
..are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card or will they have the integrity to call it out for what it actually is?
#2
Denver
AMD's high-end chip will be smaller than Nvidia's mid-end, now that's an engineering feat.

In power draw/TDP we already know that it will win by a considerable margin. Now we need to know the performance :P
#3
Squared
I can't help but think that a 192-bit memory interface seems incredibly low-end for a $900 GPU. I remember that in the past high-end GPUs had 384-bit memory interfaces, and AMD's R9 290X had a 512-bit interface, then there were cards with HBM (although those had a lower memory clock speed). I guess this generation relies heavily on GDDR6X and SRAM cache? That did seem to work out for AMD in the last generation. It also seems strange that the 16 GB model has the same name but roughly 33% more resources in every way.
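The bus widths being compared here translate to bandwidth linearly: peak bandwidth (GB/s) = bus width (bits) × per-pin data rate (Gbps) ÷ 8. A minimal sketch; the 21 Gbps GDDR6X and 5 Gbps GDDR5 data rates below are assumptions for illustration, not figures from the thread:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(192, 21))  # 504.0 GB/s for a 192-bit bus at 21 Gbps GDDR6X
print(bandwidth_gbs(384, 21))  # 1008.0 GB/s for a classic 384-bit bus
print(bandwidth_gbs(512, 5))   # 320.0 GB/s for the R9 290X's 512-bit GDDR5
```

This is why a narrow bus can still feed a fast GPU when paired with high-data-rate memory and a large on-die cache.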
#4
Z-GT1000
They're marketing the card as the RTX 4080 12GB to take more money from buyers when in reality it is the RTX 4070. It's time to tell people the truth and not take any more bullshit from Nvidia
#5
Pumper
Insane world when a 4090 is the best value GPU.
#6
Jimmy_
Z-GT1000They're marketing the card as the RTX 4080 12GB to take more money from buyers when in reality it is the RTX 4070. It's time to tell people the truth and not take any more bullshit from Nvidia
Agreed.
#7
dj-electric
The Quim Reaper..are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card or will they have the integrity to call it out for what it actually is?
The products should officially be called the names they were given. The tech press naming them whatever it feels like will only create more confusion.
This sucks, but the press has to abide by those things for the greater good
#8
ZetZet
DenverAMD's high-end chip will be smaller than Nvidia's mid-end, now that's an engineering feat.

In power draw/TDP we already know that it will win by a considerable margin. Now we need to know the performance :p
No it won't be. You can't throw out MCD area, just because AMD moved them off the main chip...
#9
P4-630
Performance-wise, we haven't seen any reviews yet...
The only thing I'd moan about for now is the cost....
#10
TheoneandonlyMrK
dj-electricThe products should officially be called the names they were given. The tech press naming them whatever it feels like will only create more confusion.
This sucks, but the press has to abide by those things for the greater good
No they don't; horseshit naming schemes need pointing out to noobs, and only an Nvidia shareholder or fan would disagree.

Nvidia makes the confusion, not the press.

If the press causes more confusion, good; it might make buyers think more before purchasing.

A 104 die has never been an x80-class GPU in Nvidia's entire lifespan until today.
#11
Dimitriman
BTW, TSMC 4N is not a 4 nm process but 5 nm with enhancements...
#12
rusTORK
So, even the RTX 4080 16 GB is cut down, since its CUDA core count is 9,728, while the full AD103 has 10,240.
At the same time, the RTX 4080 12 GB and the AD104 have the same CUDA core count: 7,680.

RTX 3080 story:

1. RTX 3080 10 GB - GA102-200 => 8,704 CUDA cores (320-bit);
2. RTX 3080 Ti - GA102-225 => 10,240 CUDA cores (384-bit);
3. RTX 3080 12 GB - GA102-220 => 8,960 CUDA cores (384-bit).

So, they held the GA102-220 for a later release.

RTX 3090 story:
1. RTX 3090 - GA102-300 => 10,496 CUDA cores (384-bit);
2. RTX 3090 Ti - GA102-350 => 10,752 CUDA cores (384-bit).
#13
phill
Let's see how this plays out.... "Insert popcorn meme here"
#14
TheoneandonlyMrK
DimitrimanBTW TSMC 4N it is not 4nm process but 5nm with enhancements...
You say that like 5nm WAS 5nm.

No part of it was Afaik.
#15
Dimitriman
TheoneandonlyMrKYou say that like 5nm WAS 5nm.

No part of it was Afaik.
It's all about marketing for sure.
#16
renz496
The Quim Reaper..are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card or will they have the integrity to call it out for what it actually is?
Doesn't matter what the card is called. Just look at the GTX 1630: despite the naming, the card is still priced like a previous-gen x50 from Nvidia. Nvidia could call it the RTX 4050 and the price would still be $900.
#17
Dirt Chip
In the end, as always, what matters is performance per $.

The name is the least important part, followed by the memory bus.
#18
TheinsanegamerN
Man, nvidia calling a chip 103 instead of 102 really ruffled people's feathers.

Who cares if it's 102, 103, 104, or 42069? What matters is the perf/$, not the codename of the chip itself.
#19
Denver
ZetZetNo it won't be. You can't throw out MCD area, just because AMD moved them off the main chip...
The 6 extra chips are just Infinity Cache, apparently.

I wonder how well the GPU would work without it. Unlike RDNA2, the RDNA3 flagship will have huge bandwidth to play with.
#21
ModEl4
DenverAMD's high-end chip will be smaller than Nvidia's mid-end, now that's an engineering feat.

In power draw/TDP we already know that it will win by a considerable margin. Now we need to know the performance :p
No, it certainly won't.
According to rumors, Navi31 will be only 12% smaller than AD102, and Navi32 only 9% smaller than AD103.
You are probably comparing only the compute chiplet with AD103!
RDNA3 will be better in performance/Watt, but not by much; it all depends on the TDP that AMD targets. (If you also undervolt the 4090 and target 350 W, for example, they will probably have very close performance/Watt.)
#22
Guwapo77
TheinsanegamerNMan, nvidia calling a chip 103 instead of 102 really ruffled people's feathers.

Who cares if it's 102, 103, 104, or 42069? What matters is the perf/$, not the codename of the chip itself.
It matters because the perf/$ for that chip would typically put it around the $500 MSRP. The 70-class card represented the sweet spot for most money-conscious gamers. Nvidia has effectively hidden the 70 card in the 80 series with a $300 price bump. Remember, a new 70-series card typically matched the performance of the outgoing 80-series from the previous gen; not this time.
#23
Daven
ZetZetNo it won't be. You can't throw out MCD area, just because AMD moved them off the main chip...
Exactly. All chips and chiplets should be added together for total die area.
#24
Guwapo77
ModEl4No, it certainly won't.
According to rumors, Navi31 will be only 12% smaller than AD102, and Navi32 only 9% smaller than AD103.
You are probably comparing only the compute chiplet with AD103!
RDNA3 will be better in performance/Watt, but not by much; it all depends on the TDP that AMD targets. (If you also undervolt the 4090 and target 350 W, for example, they will probably have very close performance/Watt.)
That is a lot of assumptions, it will be interesting to see if you are correct.
#25
ModEl4
DenverThe 6 extra chips are just infinity cache, apparently.

I wonder how well the GPU would work without it, Unlike RDNA2, RDNA3 Flagship will have huge bandwidth to play with.
No, they aren't just Infinity Cache.
At 7 nm, 128 MB of Infinity Cache was around 78 mm², so 96 MB would be around 58.5 mm². At 6 nm with the same T libraries it would be around 51-54 mm², or in that ballpark.
Even if they targeted much higher throughput using higher T libraries, I don't see it more than doubling from that, so 108 mm² at max.
The die area of the chiplets in Navi31 will, according to rumors, be at least 225 mm², so what you're saying doesn't add up imo.
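The linear area scaling in this estimate can be checked in a couple of lines; this is pure arithmetic on the figures the comment cites (128 MB ≈ 78 mm² at 7 nm):

```python
def scaled_area(base_mb, base_mm2, target_mb):
    """Scale a cache's die area linearly with its capacity."""
    return base_mm2 * target_mb / base_mb

# 96 MB of Infinity Cache at the 7 nm density quoted above:
print(scaled_area(128, 78.0, 96))  # 58.5 mm²
```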