Monday, April 29th 2024
NVIDIA Builds Exotic RTX 4070 From Larger AD103 by Disabling Nearly Half its Shaders
A few batches of GeForce RTX 4070 graphics cards are based on the 5 nm "AD103" silicon, a significantly larger chip than the "AD104" that powers the original RTX 4070. A reader reached out to us about a curiously named MSI RTX 4070 Ventus 3X E 12 GB OC graphics card, saying that TechPowerUp GPU-Z wasn't able to detect it correctly. A closer look at their GPU-Z submission data, specifically the card's device ID, revealed that the card is based on the larger "AD103" silicon. Interestingly, current NVIDIA drivers, such as the 552.22 WHQL used here, seamlessly present the card to the user as an RTX 4070. We dug through older GeForce drivers and found that the oldest one to support this card is 551.86, which NVIDIA released in early March 2024.
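For the curious, here is a minimal sketch of the kind of check GPU-Z performs: reading each GPU's PCI vendor and device ID, in this case from Linux sysfs. The AD103_4070_IDS set is a hypothetical placeholder, not a confirmed ID; the real value for the AD103-based variant is what the GPU-Z update will match against.

```python
# Minimal sketch: list NVIDIA GPUs and their PCI device IDs from Linux sysfs,
# the same identifier used to tell an AD103-based RTX 4070 from an AD104-based one.
from pathlib import Path

NVIDIA_VENDOR_ID = "0x10de"
AD103_4070_IDS = {"0x0000"}  # hypothetical placeholder, not a confirmed device ID


def list_nvidia_gpus():
    for dev in Path("/sys/bus/pci/devices").iterdir():
        vendor = (dev / "vendor").read_text().strip()
        if vendor != NVIDIA_VENDOR_ID:
            continue
        device = (dev / "device").read_text().strip()
        note = "possible AD103-based RTX 4070" if device in AD103_4070_IDS else ""
        print(f"{dev.name}: vendor={vendor} device={device} {note}")


if __name__ == "__main__":
    list_nvidia_gpus()
```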
The original GeForce RTX 4070 was created by NVIDIA by enabling 46 out of 60 streaming multiprocessors (SM), or a little over 76% of the available shaders. To create an RTX 4070 out of an "AD103," NVIDIA has to enable just 46 out of 80 SM, or 57% of the available shaders, and just 36 MB out of the 64 MB of available on-die L2 cache. The company also has to narrow the memory bus down from the available 256-bit to 192-bit, to drive the 12 GB of memory. The PCB footprint, pin-map, and package size of the "AD103" and "AD104" are similar, so board partners can seamlessly integrate the chip with their existing AD104-based RTX 4070 board designs. End-users would probably not even notice the change until they fire up a diagnostic utility and find a surprise.

Why NVIDIA would make the RTX 4070 using the significantly larger "AD103" silicon is anyone's guess. The company probably has a stash of chips that are good enough to match the specs of the RTX 4070, so it makes sense to harvest them into a product that sells for at least $500 in the market. This also opens up the possibility of RTX 4070 SUPER cards based on this chip: all NVIDIA has to do is dial the SM count up to 56 and increase the available L2 cache to 48 MB. How the switch to AD103 affects power and thermals is an interesting thing to look out for.
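A quick back-of-the-envelope script makes the cut-down concrete. The figures are from the article; the 128 CUDA cores per SM and the AD104's 48 MB total L2 are from the chips' public Ada Lovelace specifications.

```python
# The arithmetic behind the cut-down: how much of each die an RTX 4070
# configuration actually uses. Ada Lovelace packs 128 CUDA cores per SM.
CORES_PER_SM = 128
SM_ENABLED, L2_MB, BUS_BITS = 46, 36, 192  # RTX 4070 specification

dies = {
    "AD104": dict(sm=60, l2_mb=48, bus_bits=192),
    "AD103": dict(sm=80, l2_mb=64, bus_bits=256),
}

print(f"RTX 4070 shader count: {SM_ENABLED * CORES_PER_SM} CUDA cores")
for name, d in dies.items():
    print(f"{name}-based RTX 4070 enables "
          f"{SM_ENABLED / d['sm']:.1%} of SMs, "
          f"{L2_MB / d['l2_mb']:.0%} of L2 cache, "
          f"{BUS_BITS / d['bus_bits']:.0%} of the memory bus")
```

Running it shows the asymmetry the headline describes: the AD104 build uses 76.7% of its SMs, while the AD103 build uses only 57.5% of its SMs and three quarters of its memory bus.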
Our next update of TechPowerUp GPU-Z will be able to correctly detect RTX 4070 cards based on AD103 chips.
57 Comments on NVIDIA Builds Exotic RTX 4070 From Larger AD103 by Disabling Nearly Half its Shaders
Compared to a regular AD104, the AD103 may be less efficient as a 4070: even though the extra shaders and memory bus are "fused off," is there any wasted power from those idle circuits? The answer is yes, but it might be *insignificant*; equally, it might be noticeable at regular desktop idle (a quick way to test this is sketched below).
www.techpowerup.com/review/evga-geforce-rtx-2060-ko/31.html
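A minimal sketch of that test, assuming a machine with the standard nvidia-smi utility on the PATH: sample the board power draw at desktop idle and average it, then compare readings from an AD104-based card against an AD103-based one.

```python
# Sample GPU board power at desktop idle via nvidia-smi's query interface
# and report the average; run identically on both card variants to compare.
import subprocess
import time


def sample_idle_power(samples=30, interval_s=1.0):
    readings = []
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        readings.append(float(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return sum(readings) / len(readings)


if __name__ == "__main__":
    print(f"Average idle board draw: {sample_idle_power():.1f} W")
```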
I would like to see GT and GTX return again.
4070
4070 GT
4070 GTX
The only cards that were retired were the RTX 4080 and 4070 Ti, which have been made more or less redundant by their Super refreshes. Never happening. Ever. The GTX branding is done for good: ray tracing and AI are here to stay, and they're both pillars not only of Nvidia's graphics business but, beyond that, of modern computing, whether grumpy old folks like it or not.
In any case, AMD made their octa-cores from scrapped dual-CCD counterparts and sold them to thousands of people. No one would have known until the problems started to pop up. And I doubt these GPUs can be any worse than that.
I want to go into the shop and ask where the RTUltra 7080 Ti Super Founders Edition is.
Hardware branding has gotten so desperate because, let's face it... hardware has gotten plenty fast, and with a few exceptions you can't tell a six-year-old computer apart from a brand new one. The user experience is going to be virtually the same, and even games will run well on ye olde i7-8700K machine. They have to force these exotic names and bet big on the latest trendy nonsense like AI to keep up appearances.
It's not only computers, either. Phones are more or less in the same boat: other than higher-refresh screens and ever better cameras, the core experience has been stable for some time.
Presumably, even if the dead parts are fused off entirely and 'cold', the links between logic clusters are physically longer and therefore use more power. Whether that amount is significant or negligible, I do not know.
All of this binning/fusing/repurposing has been going on for at least 14 years now; that is the earliest experience with it I can remember. I do not want to even imagine the cost of CPUs/GPUs if this were not the standard practice.