Tuesday, May 9th 2023
NVIDIA GeForce RTX 4070 Variant Could be Refreshed With AD103 GPU
Hardware tipster kopite7kimi has learned from insider sources that a variant of NVIDIA's GeForce RTX 4070 graphics card could be lined up with a different GPU - the AD103 instead of the currently utilized AD104-250-A1. The Ada Lovelace architecture is a staple across the RTX 40-series of graphics cards, but a fully unlocked AD103 is not yet attached to any product on the market - it would be a strange move for NVIDIA to refresh or expand the mid-range RTX 4070 lineup with a much larger GPU, albeit in a reduced form. A cut-down variant of the AD103 is currently housed within NVIDIA's GeForce RTX 4080 graphics card - its AD103-300-A1 GPU has 9728 CUDA cores, with Team Green's engineers choosing to disable roughly 5% of the full chip's capabilities.
The hardware boffins will need to do a lot of pruning if the larger GPU ends up in the rumored RTX 4070 refresh - the SKU's 5,888 CUDA core count would require a roughly 42% reduction in GPU potency. It is somewhat curious that the RTX 4070 Ti has not been mentioned by the tipster - you would think that the more powerful card (relative to the standard RTX 4070) would be the logical and immediate candidate for this type of treatment. In theory, NVIDIA could be re-purposing dies that do not meet RTX 4080-level standards, thus salvaging rejected silicon for step-down card models.

According to TPU's GPU database, the NVIDIA AD103: "uses the Ada Lovelace architecture and is made using a 5 nm production process at TSMC. With a die size of 379 mm² and a transistor count of 45,900 million it is a large chip. AD103 supports DirectX 12 Ultimate (Feature Level 12_2). For GPU compute applications, OpenCL version 3.0 and CUDA 8.9 can be used. Additionally, the DirectX 12 Ultimate capability guarantees support for hardware-ray tracing, variable-rate shading and more, in upcoming video games. It features 10240 shading units, 320 texture mapping units and 112 ROPs. Also included are 320 tensor cores which help improve the speed of machine learning applications. The GPU also contains 80 ray tracing acceleration cores."

Further reading: Ada Architecture Whitepaper
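For perspective, here is a minimal sketch of the arithmetic behind those cut-down figures, using the CUDA core counts quoted in this article (the helper function is purely illustrative):

```python
# Rough arithmetic behind the cut-down figures quoted above.
# Core counts are the ones cited in the article / TPU's GPU database.
AD103_FULL = 10240   # fully enabled AD103 shading units
RTX_4080 = 9728      # AD103-300-A1 as shipped in the RTX 4080
RTX_4070 = 5888      # current RTX 4070 (AD104-250-A1) spec

def disabled_share(full_cores: int, enabled_cores: int) -> float:
    """Fraction of the full die's CUDA cores that is switched off."""
    return 1 - enabled_cores / full_cores

print(f"RTX 4080 vs. full AD103: {disabled_share(AD103_FULL, RTX_4080):.1%} disabled")   # ~5%
print(f"RTX 4070 vs. full AD103: {disabled_share(AD103_FULL, RTX_4070):.1%} disabled")   # ~42.5%
```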
Sources:
kopite7kimi Tweet, VideoCardz, Tweak Town
25 Comments on NVIDIA GeForce RTX 4070 Variant Could be Refreshed With AD103 GPU
The names of the cards mean nothing.
NVIDIA wants to continue with the crypto-era prices:
Real 4050 --> 450€, Real 4060 --> 600€, Real 4060 Ti --> 800€, Real 4070 --> 1100€.
If you don't want to pay that, they turn around, change the names, and try again:
Now the Real 4060 Ti is called the 4070 Ti: buy it, it only costs 800€.
Now the Real 4050 is called the 4060 Ti: buy it, it only costs 450€.
Also, there's a mistake in that green table in the article - Ada is using a 5 nm process, not 4 nm. So what you're saying is NVIDIA is competing against the 7900 XTX with the 70 series and against the 7900 XT with the 60 series? Cool.
I also hate FSR though.
I think they are both meh products but I can see why people would choose one over the other.
They are not far enough apart in price for me to feel the 7900 XTX is worth buying over it now. If AMD's slides at the RDNA3 launch were accurate then sure, but they were off by like 20%.
As an aside - marketing and stupid consumers have done a number on this enthusiast hobby.
Are you trying to say that if someone likes the 4080 better and can afford it, they shouldn't buy it? Number 1, it's their money; number 2, people place different values on their own hobbies. I only game 4 hours a week but I still don't mind spending quite a bit on a GPU.
And while I would give you that in the 300-600 USD range, where people don't typically have as much disposable income and every dollar counts, someone who can afford a 1000 USD GPU can likely afford a 1200 USD one, and if the 1200 dollar one offers them features they prefer, that is just the way the cookie crumbles.
AMD needs to do a lot better in a lot of areas before I consider them again. They improved quite a bit with the 6000 series, but I feel the 7000 series in general is still a step back... FSR still isn't very good, and RT performance in RT-heavy games is terrible to the point that you can't even use it with their GPUs.
Again if someone only cares about Raster then sure more power to them.
But this is probably just about salvage.
Even if that does happen, I just expect slightly more compelling tier-for-tier cards at the original MSRPs at best.
For similar reasons, my other gaming workstation PC has an ASUS TUF 4090 OC 24 GB (89.9 TFLOPS @ 2745 MHz).
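As a side note, that TFLOPS figure lines up with the usual FP32 estimate of two operations (one FMA) per CUDA core per clock; a quick sketch, assuming the card's 16384 shading units:

```python
# FP32 throughput sanity check for the 89.9 TFLOPS figure quoted above:
# peak TFLOPS = 2 ops (FMA) x shader count x clock.
shaders = 16384     # RTX 4090 CUDA cores
clock_ghz = 2.745   # the OC boost clock quoted above, in GHz

tflops = 2 * shaders * clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS")   # ~89.9
```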
From www.cgdirector.com/rtx-4080-review-content-creation/
For games RT...
From www.techpowerup.com/review/amd-radeon-rx-7900-xtx/34.html
For high-end NVIDIA customers with an RT focus, the RX 7900 XTX is a sidegrade from the RTX 3080 Ti / RTX 3090 / RTX 3090 Ti, but with improved raster performance and more VRAM.
RX 7900 XT 20 GB is a good upgrade from RTX 3070 Ti 8 GB level GPUs.
Going from a 1070 to a 1080 Ti was an increase of 87% in CUDA cores for a 50% increase in performance.
Going from a 2070 to a 2080 Ti was an increase of 88% in CUDA cores for a 56% increase in performance.
Going from a 3070 to a 3090 is 80% more CUDA cores for about 40% more performance, which goes up to around 50% in RT assuming the 3070 doesn't run out of VRAM.
4070 to 4090 is a 178% increase in CUDA cores for 84% more performance, but around 100% in RT.
This was always a diminished returns sorta thing due to cache/ROP/TMU and now RT/Tensor cores as well.
Also, let's be real: if a developer targeted a 4090 and only a 4090 while making a game, that is probably the only scenario where you would get a comparable increase. A game that runs on a 4090 also has to run on a 1650 lol.
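For reference, a quick sketch of that scaling math, assuming the core counts from TPU's GPU database and taking the performance deltas above as given (they are the post's own figures, not re-measured here):

```python
# CUDA core increase vs. cited performance increase per generation.
# Core counts per TPU's database; perf gains are the ones quoted in the post.
pairs = {
    "1070 -> 1080 Ti": (1920, 3584, 0.50),
    "2070 -> 2080 Ti": (2304, 4352, 0.56),
    "3070 -> 3090":    (5888, 10496, 0.40),
    "4070 -> 4090":    (5888, 16384, 0.84),
}

for name, (cores_lo, cores_hi, perf_gain) in pairs.items():
    core_gain = cores_hi / cores_lo - 1
    scaling = perf_gain / core_gain   # share of the extra cores that shows up as performance
    print(f"{name}: +{core_gain:.0%} cores, +{perf_gain:.0%} perf, scaling {scaling:.2f}")
```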
The problem is GDDRx memory manufacturers, not NVIDIA.
That and it kind of balances out to the positive, because when you're not in a game the thing pulls maybe 14 watts and sits at maybe 40°C without the fans running. As I'm writing this it is bouncing between 14 and 20 watts and is dead silent.
With that said, I'm going to have to remind you of three things:
- that there are posts on this board with people praising the 3090ti's RT performance not 9 months ago
- The 7900XTX has RT performance on par with a 3090ti.
- The 4070ti has RT performance on par with the 3090ti.
That's objectively not terrible, unless you're claiming the 4070 Ti's RT is "terrible to the point you can't even use it." You've very clearly never owned an RT-capable AMD GPU or you'd know how disingenuous and dishonest you sound.
Even though I'm not super high on the 4080 due to its asking price, I would pick it over the 7900 XTX,
and for multiple reasons I think the 4070 Ti is a pretty terrible card; it really has no redeeming quality other than efficiency. That's just my opinion of it - if others think it's the best thing since sliced bread, good for them.
At the end of the day if someone looks at any of these cards and decides they are best for them more power to them and honestly I hope they serve them well for years to come.
But there is a third one: the 1660 had 48 ROPs, and the equivalent would be the 4070 having 96, when it only has 64.
Now, with the 4090 being exactly 2x faster than the 4070 in 4K, and considering the 4070 should be losing 10% of efficiency at 4K, this is a terrible result for the 4090.
A 4090 with 1008 GB/s is the equivalent of a 4070 with 336 GB/s, in the sense that it would be a disaster. Therefore the 4090 should have 1512 GB/s, and this is where GDDR7 comes into play.
And only then can we expect it to be 2.5x faster, losing some efficiency but not as badly as the 1080 Ti and 2080 Ti, which only had 38% more bandwidth and ROPs than the 70-class cards, so +55% perf. made sense.
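A minimal sketch of that bandwidth-per-core argument, using the published shader counts and memory bandwidths (the post appears to round the ~2.8x core ratio up to 3x, which is where its 336 GB/s and 1512 GB/s figures come from):

```python
# Scaling the RTX 4070's memory bandwidth by the CUDA core ratio, per the
# reasoning in the post above.
cores_4070, bw_4070 = 5888, 504     # RTX 4070: shading units, GB/s
cores_4090, bw_4090 = 16384, 1008   # RTX 4090: shading units, GB/s

core_ratio = cores_4090 / cores_4070
print(f"core ratio: {core_ratio:.2f}x")                              # ~2.78x

# Bandwidth per shading unit on each card:
print(f"4070: {bw_4070 / cores_4070 * 1000:.1f} MB/s per core")      # ~85.6
print(f"4090: {bw_4090 / cores_4090 * 1000:.1f} MB/s per core")      # ~61.5

# Bandwidth the 4090 would need to keep the 4070's per-core ratio:
print(f"4090 would need ~{bw_4070 * core_ratio:.0f} GB/s")           # ~1402
```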
I've never had a flagship scale linearly with bandwidth; there were always diminished returns. The massive L2 cache is likely offsetting this a bit.
I for one am OK with 100% more performance in RT-heavy games; it's more than what we got last generation for about the same price increase. My guess is that if the 4090 was 150% faster than the 4070, it would be a lot more expensive.
The 2080 Ti was 328% more expensive than the 1660 Ti; the 4090 is only 166% more expensive than the 4070, so not a very good comparison from a cost-increase perspective. Also, the TPU database only shows the 2080 Ti as 92% faster than the 1660 Ti.
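For what it's worth, the cost-ratio arithmetic checks out roughly, assuming launch MSRPs of $279 (1660 Ti), $1,199 (2080 Ti Founders Edition), $599 (4070) and $1,599 (4090):

```python
# Price premiums based on the assumed launch MSRPs above.
prices = {"1660 Ti": 279, "2080 Ti": 1199, "4070": 599, "4090": 1599}

def premium(cheaper: str, pricier: str) -> float:
    """How much more expensive the pricier card is, as a fraction."""
    return prices[pricier] / prices[cheaper] - 1

print(f"2080 Ti over 1660 Ti: +{premium('1660 Ti', '2080 Ti'):.0%}")   # ~330%
print(f"4090 over 4070:       +{premium('4070', '4090'):.0%}")         # ~167%
```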