Monday, April 29th 2024

NVIDIA Builds Exotic RTX 4070 From Larger AD103 by Disabling Nearly Half its Shaders

A few batches of GeForce RTX 4070 graphics cards are based on the 5 nm "AD103" silicon, a significantly larger chip than the "AD104" that powers the original RTX 4070. A reader reached out to us with a curiously named MSI RTX 4070 Ventus 3X E 12 GB OC graphics card, saying that TechPowerUp GPU-Z wasn't able to detect it correctly. A closer look at their GPU-Z submission data showed that the card's device ID belongs to the larger "AD103" silicon. Interestingly, current NVIDIA drivers, such as the 552.22 WHQL used here, seamlessly present the card to the user as an RTX 4070. We dug through older GeForce driver releases and found that the oldest driver to support this card is 551.86, which NVIDIA released in early March 2024.
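Utilities like GPU-Z identify a card by looking up its PCI vendor and device ID in an internal database, so a variant with a brand-new device ID goes unrecognized until that database is updated, even while the driver already knows how to handle it. Below is a minimal Python sketch of the idea; the device ID values are placeholders for illustration, not NVIDIA's actual assignments.

    # Minimal sketch (illustrative only) of device-ID-based GPU detection,
    # similar in spirit to what GPU-Z does. The ID values are placeholders.
    KNOWN_GPUS = {
        # (vendor ID, device ID): (marketing name, silicon)
        (0x10DE, 0x1111): ("GeForce RTX 4070", "AD104"),  # placeholder device ID
    }

    def identify_gpu(vendor_id: int, device_id: int) -> str:
        entry = KNOWN_GPUS.get((vendor_id, device_id))
        if entry is None:
            # A card with a brand-new device ID (like the AD103-based RTX 4070)
            # lands here until the database is updated -- hence the misdetection.
            return f"Unknown GPU (vendor 0x{vendor_id:04X}, device 0x{device_id:04X})"
        name, silicon = entry
        return f"{name} ({silicon})"

    print(identify_gpu(0x10DE, 0x1111))  # recognized: GeForce RTX 4070 (AD104)
    print(identify_gpu(0x10DE, 0x2222))  # unrecognized placeholder ID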

The original GeForce RTX 4070 was created by NVIDIA by enabling 46 out of 60 streaming multiprocessors (SM), or a little over 76% of the available shaders. To create an RTX 4070 out of an "AD103," NVIDIA has to enable only 46 out of 80 SMs, or 57.5% of the available shaders, and just 36 MB out of the 64 MB of available on-die L2 cache. The company also has to narrow the memory bus down to 192-bit from the available 256-bit to drive the 12 GB of memory. The PCB footprint, pin-map, and package size of the "AD103" and "AD104" are similar, so board partners can seamlessly drop the chip into their existing AD104-based RTX 4070 board designs. End-users would probably not even notice the change until they fire up a diagnostic utility and are surprised by what they find.
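As a quick sanity check of those figures, here is a small Python sketch that prints the enabled fraction of each resource; the full-die AD104 numbers (60 SMs, 48 MB L2, 192-bit bus) are taken from public specifications rather than from this article.

    # Enabled-resource fractions for the two RTX 4070 configurations discussed above.
    configs = {
        "AD104-based RTX 4070": {"sm": (46, 60), "l2_mb": (36, 48), "bus_bits": (192, 192)},
        "AD103-based RTX 4070": {"sm": (46, 80), "l2_mb": (36, 64), "bus_bits": (192, 256)},
    }

    for name, cfg in configs.items():
        sm_en, sm_all = cfg["sm"]
        l2_en, l2_all = cfg["l2_mb"]
        bus_en, bus_all = cfg["bus_bits"]
        print(f"{name}: {sm_en}/{sm_all} SMs ({sm_en / sm_all:.1%}), "
              f"{l2_en}/{l2_all} MB L2, {bus_en}/{bus_all}-bit bus")
    # AD104-based RTX 4070: 46/60 SMs (76.7%), 36/48 MB L2, 192/192-bit bus
    # AD103-based RTX 4070: 46/80 SMs (57.5%), 36/64 MB L2, 192/256-bit bus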
Why NVIDIA would make the RTX 4070 using the significantly larger "AD103" silicon is anyone's guess; the company probably has a stash of chips that are just good enough to match the specs of the RTX 4070, so it makes sense to harvest them into a product that still sells for at least $500 in the market. This also opens up the possibility of RTX 4070 SUPER cards based on this chip: all NVIDIA has to do is dial the SM count up to 56 and increase the available L2 cache to 48 MB. How the switch to AD103 affects power and thermals is an interesting thing to look out for.

Our next update of TechPowerUp GPU-Z will be able to correctly detect RTX 4070 cards based on AD103 chips.

57 Comments on NVIDIA Builds Exotic RTX 4070 From Larger AD103 by Disabling Nearly Half its Shaders

#1
Ruru
S.T.A.R.S.
Nothing new here. There were 2060s made out of 2080 dies as well, for example. Better to use those crappy chips for lower-end SKUs if they have enough working shaders, etc.
#2
Onasi
This is normal end-of-gen stuff from NV when the yields are better and they have a lot of defective but not TOO defective dies. Nothing to see here really.
#3
P4-630
Maybe these chips are better to cool, since the total die area will be larger and part of it unused...
#4
lemonadesoda
P4-630: Maybe these chips are better to cool, since the total die area will be larger and part of it un-used...
Better to cool only if the bigger cooler is used, and the "unfused" parts of the die are not concentrated all in one spot.

Compared to a regular AD104, the AD103 may be less efficient as a 4070: even though the extra shaders and memory bus are "fused out", is there any wasted power from those idle circuits? Yes, but it might be *insignificant*; equally, it might be noticeable at regular desktop idle.
#5
Ruru
S.T.A.R.S.
lemonadesoda: Better to cool only if the bigger cooler is used, and the "unfused" parts of the die are not concentrated all in one spot.

Compared to a regular AD104, the AD103 may be more inefficient as a 4070, because even though shaders and memory bus are "fused out" is there any wasted power with these idle circuits? Answer yes, but it might be *insignificant*, but equally, might be noticeable on regular desktop idle.
At least with the EVGA 2060 KO, the difference between it and a "normal" 2060 isn't that huge.

www.techpowerup.com/review/evga-geforce-rtx-2060-ko/31.html
#6
P4-630
lemonadesoda: Better to cool only if the bigger cooler is used
I'm sure all GPU coolers these days are already somewhat over-sized compared to the dies they need to cool.
#7
Assimilator
Wait, I thought plain 4070 was being discontinued in favour of 4070 SUPER?
#8
dj-electric
Assimilator: Wait, I thought plain 4070 was being discontinued in favour of 4070 SUPER?
No, NVIDIA has clearly stated that the $549 SEP RTX 4070 will continue to be in full production.

#9
Onasi
Assimilator: Wait, I thought plain 4070 was being discontinued in favour of 4070 SUPER?
Nah, that was the 4070 Ti non-S. The 4070 just received a price cut. Honestly, not enough of one - going to $450 and adjusting the stack below accordingly would make more sense.
#10
Assimilator
Thanks guys - I can't keep up with all of NVIDIA's stupid product shenanigans anymore...
#11
dj-electric
Assimilator: Thanks guys - I can't keep up with all of NVIDIA's stupid product shenanigans anymore...
I'm all hands on deck from dawn to sunset on these things, and I still find myself double- and triple-checking product- and branding-related stuff. Handling current hardware product naming schemes is an absolute mess in all markets.
#12
Assimilator
dj-electric: I'm hands on deck dawn to sunset on these things and I still find myself double triple checking product and branding related stuff. Handling current hardware product naming schemes is an absolute mess in all markets.
Let's be honest here, it's mostly NVIDIA. I just don't understand how they could conclude that calling the refresh of the 4070 Ti the "4070 Ti SUPER" is less confusing than "4075 Ti".
#13
64K
Assimilator: Let's be honest here, it's mostly NVIDIA. I just don't understand how they could conclude that calling the refresh of 4070 Ti "4070 TI SUPER" is less confusing than "4075 Ti".
4070 Ti Super is already a mouthful. MSI even took it a step further in absurdity with their 4070 Ti Super Expert card.
#14
TheDeeGee
Assimilator: Let's be honest here, it's mostly NVIDIA. I just don't understand how they could conclude that calling the refresh of 4070 Ti "4070 TI SUPER" is less confusing than "4075 Ti".
Normal Tie, Long Tie, Super Long Tie.

I would like to see GT and GTX return again.

4070
4070 GT
4070 GTX
#15
Dr. Dro
Assimilator: Thanks guys - I can't keep up with all of NVIDIA's stupid product shenanigans anymore...
Rather inconsequential for the end-user, anyway. The tradeoff of using a low-quality larger die is that it'll be slightly less power efficient, but as we've seen with the 2060 KO, it's not a huge difference + you get *some* of the benefits from the larger chip, such as it being easier to cool due to larger die area, and in some cases, very slightly better compute performance. Otherwise they should be about the same.

The only cards that were retired were the RTX 4080 and 4070 Ti, which have been made more or less redundant with their Super refreshes.
TheDeeGee: Normal Tie, Long Tie, Super Long Tie.

I would like to see GT and GTX return again.

4070
4070 GT
4070 GTX
Never happening. Ever. The GTX branding is done for good as raytracing and AI are here to stay and they're both pillars of not only Nvidia's graphics business but beyond that - of modern computing, whether grumpy old folks like it or not.
#16
64K
TheDeeGee: Normal Tie, Long Tie, Super Long Tie.

I would like to see GT and GTX return again.

4070
4070 GT
4070 GTX
Can't happen. Nvidia has been all about pushing ray tracing for years now. Hence the RTX branding.
#17
Random_User
If the price is right, it doesn't matter what it is made from. At this point it could even be made from scrapped 4090 chips and still be a decent product. The only concern is whether it's worth it, and how much it costs to repurpose the defective dies vs. using the designated 4070/Ti chip. The 2060 KO wasn't bad, after all. AMD did the same with the entire Navi 21 stack, from the 6950 XT down to the 6800 non-XT.
In any case, AMD made their octa-cores from scrapped dual-CCD counterparts and sold them to thousands of people. No one would have known until the problems started to pop up. And I doubt that these GPUs can be any worse than that.
#18
Onasi
Dr. Dro: Never happening. Ever. The GTX branding is done for good as raytracing and AI are here to stay and they're both pillars of not only Nvidia's graphics business but beyond that - of modern computing, whether grumpy old folks like it or not.
Kay. How about RTX, RT, RTS, RS, RTSE and RTUltra? Sounds good, I see no problem with this scheme. Throw in some Ti and Super into the mix.
I want to go into the shop and ask where the RTUltra 7080 Ti Super Founders Edition is.
#19
Dr. Dro
Onasi: Kay. How about RTX, RT, RTS, RS, RTSE and RTUltra? Sounds good, I see no problem with this scheme. Throw in some Ti and Super into the mix.
I want to go into the shop and ask where the RTUltra 7080 Ti Super Founders Edition is.
Sorry, already happened. I love my ASUS Republic of Gamers GeForce RTX 4080™ Strix White Overclocked Edition 16 GB GDDR6X 256-bit High Performance Graphics Card mate :roll:

Hardware branding has gotten so desperate because, let's face it... hardware has gotten plenty fast, and with the exception of some cases, you can't tell a 6-year-old computer apart from a brand-new one. The user experience is going to be virtually the same, and even games will run well on ye olde i7-8700K machine. They have to force these exotic names and bet big on the latest trendy nonsense like AI to keep up appearances.

It's not only affecting computers, either. Phones are more or less in the same boat: other than higher-refresh screens and ever-better cameras, the core experience has been stable for some time.
#21
Chrispy_
Do these larger dies have increased power draw and therefore lower efficiency or are the fused off parts completely dead with no ancillary logic running to support them?

Presumably even if the dead parts are fused off entirely and 'cold', the links between logic clusters are physically longer and therefore use more power; whether that's a significant amount or negligible, I do not know.
#22
Dr. Dro
Chrispy_: Do these larger dies have increased power draw and therefore lower efficiency or are the fused off parts completely dead with no ancillary logic running to support them?

Presumably even if the dead parts are fused off entirely and 'cold', the links between logic clusters are physically longer and therefore use more power; Whether that's a significant amount or negligible - I do not know.
They're fused off, but they're not completely dead, just inactive. The power penalty should be quite minimal. The thing is that the larger chip consumes more power in itself, regardless of how many units are enabled, but it shouldn't be anywhere near what a higher-end SKU with more cores enabled would consume. EVGA's TU104-based 2060 KO is the perfect test case.

#23
Chrispy_
So about 8-10% more power-hungry in the case of the 2060 KO. Not a disaster, but not great either.
#24
AvrageGamr
Slightly more powerful than my 4070. Probably a little higher wattage also.
#25
Franzen4Real
Random_User: AMD did the same with entire Navi 21 stack, from 6950XT to 6800 non-XT.
In any case, AMD made their Octa cores from scrapped dual CCD counterparts, and sold it to thousands of people. No one would have know, until the problems started to pop out. And I doubt that these GPUs can be any worse than that.
We can go back much, much further. I had a Radeon 6950 that you could BIOS-flash to unlock into a 6970 (read about it right here on TPU back in the day, thanks Wiz! lol), and an Athlon X3 that could potentially unlock to the quad-core variant. That was before the days of physically fusing off portions of the silicon. 2010 to be exact... wow, I feel really old now.

All of this binning/fusing/repurposing has been going on for at least 14 years now; that is the earliest experience I can remember having with it. I do not want to even imagine the cost of CPUs/GPUs if this were not the standard practice.