Monday, August 1st 2022
NVIDIA GeForce RTX 40 Series "AD104" Could Match RTX 3090 Ti Performance
NVIDIA's upcoming GeForce RTX 40 series "Ada Lovelace" graphics card lineup is slowly shaping up to be a significant performance uplift over the previous generation. Today, according to well-known hardware leaker kopite7kimi, a mid-range AD104 SKU could match the performance of the last-generation flagship GeForce RTX 3090 Ti graphics card. The full AD104 SKU is set to feature 7680 FP32 CUDA cores, paired with 12 GB of 21 Gbps GDDR6X memory on a 192-bit bus. With a sizeable TGP of 400 Watts, it should match the performance of the GA102-350-A1 SKU found in the GeForce RTX 3090 Ti.
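For context, peak memory bandwidth follows directly from bus width and per-pin data rate. Here is a quick back-of-the-envelope calculation in Python; the formula is standard, but the AD104 figures are still rumors:

```python
# Peak theoretical bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps)
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a GDDR memory subsystem."""
    return bus_width_bits / 8 * data_rate_gbps

print(memory_bandwidth_gb_s(192, 21.0))  # rumored full AD104: 504.0 GB/s
print(memory_bandwidth_gb_s(384, 21.0))  # RTX 3090 Ti (GA102): 1008.0 GB/s
```

If the rumor holds, AD104 would have half the raw bandwidth of GA102, a gap Ada would presumably have to cover elsewhere, such as the larger L2 cache rumored for the architecture.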
As for naming, this complete AD104 SKU should end up as a GeForce RTX 4070 Ti model. Of course, we must wait and see what NVIDIA decides to do with the lineup and what the final models will look like.
Sources:
@kopite7kimi (Twitter), via VideoCardz
121 Comments on NVIDIA GeForce RTX 40 Series "AD104" Could Match RTX 3090 Ti Performance
I believe either the performance is higher or the wattage is much lower. 50 W less for the same performance looks like too small a gain to me.
Pretty underwhelming. Maybe the Samsung node was not as bad as we had believed.
Also, I'm glad you already know how N31 will perform and what issues (if any) it will have.
We normal people will wait for reviews and prices before deciding, not blindly buying from company N.
Add to that 400 W heating up your room, plus the cost of air conditioning, and it's becoming quite an expensive hobby.
On second thought, the RTX 3070 Ti has awful performance per watt, especially compared to the RTX 3070 and RX 6800, so there's hope that my intended upgrade, the RTX 4070, will perform close to the Ti version but at a much lower power draw.
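A rough sketch of that performance-per-watt point, using official board power figures and approximate relative performance numbers (the performance values are ballpark review averages, not exact measurements):

```python
# Rough perf/W comparison. Board power (W) is official; relative performance
# is an approximate 1440p review average, normalized to RTX 3070 = 100.
cards = {
    "RTX 3070":    {"perf": 100, "watts": 220},
    "RTX 3070 Ti": {"perf": 106, "watts": 290},
    "RX 6800":     {"perf": 110, "watts": 250},
}

for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['watts']:.3f} perf/W")
# RTX 3070:    0.455  <- best of the three
# RTX 3070 Ti: 0.366  <- roughly 20% worse than the plain 3070
# RX 6800:     0.440
```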
As for Navi 31 and latency and frame rate consistency and all that, this is NOT CrossFire. Don't get your hopes up that Navi will be a disaster. Just wait and see.
And no, upgrading the power supply is NOT a small price to pay, because it's not just the power supply, which could be an over-$100 expense on its own. It's also the power consumption, a cost that keeps adding up for every hour of gaming.
It's going to have as many transistors as GA102, it's going to use as much power as GA102, and it's (knowing Nvidia) not going to be cheap either. The only plus side is that TSMC5 is a denser process node, which should reduce manufacturing cost, but we all know Nvidia chose Samsung 8nm for Ampere because TSMC7 wouldn't budge on cost.
Starting this August, our electricity bill will be double what it used to be; even my low-power 12100F + undervolted GTX 1070 system will cost me around $10/month with my casual use case (barely 1-3 hours/day of gaming, the rest is light use).
So yeah, this kind of power draw is a big nope for me; most likely I will just upgrade to a 3060 Ti/6700 XT and be done with it.
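For anyone who wants to run the same math on their own setup, a minimal sketch; the default price per kWh is a placeholder assumption, so substitute your local rate:

```python
# Rough monthly electricity cost for a GPU under gaming load.
def monthly_gpu_cost(gpu_watts: float, hours_per_day: float,
                     price_per_kwh: float = 0.30, days: int = 30) -> float:
    """Estimated monthly cost, in the same currency as price_per_kwh."""
    kwh = gpu_watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

print(f"400 W, 2 h/day: ${monthly_gpu_cost(400, 2):.2f}/month")  # $7.20
print(f"400 W, 4 h/day: ${monthly_gpu_cost(400, 4):.2f}/month")  # $14.40
```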
People want big performance leaps and 4K 200 Hz on a two-year cycle with the same power draw, and they are being just as unrealistic as NVIDIA and AMD.
Set a wattage limit, stay within it no matter what they release, or shut up about it.
I seriously do not understand those who cheer for Nvidia or Intel... In terms of pure self-interest and what's more advantageous for the consumer, everyone should be cheering for AMD. The better AMD does against Intel and Nvidia, the more likely we get larger performance increases between generations, the more likely prices go down, and the more likely innovation is pushed further, faster.
We all remember what the CPU market was like prior to Ryzen, right? 4% generational increases, 4-core stagnation, and all at a high price... Alder Lake and Raptor Lake would not exist without Ryzen.
And let's look at the GPU market: without RDNA2 mounting such fierce competition, there's no doubt Nvidia's cards would be even more expensive than they already are... (BTW, AMD manages to compete with Nvidia on less than half the R&D budget, $5.26 billion vs. $2 billion, and AMD has to divide that $2 billion between graphics and x86, with x86 being the larger, more lucrative market that must get the majority of those funds.) And look at the latest Nvidia generation about to be released: all the rumors of huge power consumption increases are evidence that Nvidia is doing everything in its power to push performance, and all due to RDNA3.
I'm not saying everyone should be an AMD fanboy, but don't the people who cheer on Intel and Nvidia realize that, at least until AMD gets closer to 50% market share in dGPUs and x86 (especially enterprise and mobility, the two most lucrative x86 segments), victories for Intel and Nvidia inherently equate to losses for consumers? That the people who wish for AMD's failure would have us plunged into a dark age even worse than the pre-Ryzen days in both x86 and graphics? Sorry for the off-topic rant, but I just don't get it when people cheer for Nvidia before the products are even released, and by extension cheer for reduced competition in the market... I guess the desire for a parasocial relationship with whatever brand they deem most likely to win is stronger than supporting what's best for their own material self-interest.
The GTX 1080 was 25% faster when comparing OC versions, and 20% for OC vs. OC. And it did that with 10% fewer transistors (7.2B vs. 8B), on a die that shrank from ~600 mm² to ~300 mm², with an impressive 59% improvement between FE and OC-vs-OC.
Now the 4070 Ti has more transistors, more L2, and more ROPs, but it's cut down to a 192-bit bus with the same G6X memory speed; no improvement there. Bandwidth is cut in half.
I remember all the rumors not knowing how to count CUDA cores before the 3000 series launch. Looks like that might be the case again.
The only thing I know is that the next generation will be more expensive, and it will take a long time for entry-level and mid-range cards to arrive; stores are suffering from an overstock of GPUs...
The confusion with GA102 Ampere is that 5376 CUDA cores can only do FP32, while another 5376 can execute either FP32 or INT32, but not both at the same time; that makes a total of 10752, but when INT32 work is running, FP32 throughput is greatly reduced.
This could change in Ada: 7680 FP32 plus 3840 separate (not shared) INT32 units = 11520.
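A small sketch of the two counting schemes described above; the Ampere arrangement matches NVIDIA's GA102 design, while the Ada split is the rumor from this thread, not a confirmed spec:

```python
# GA102 (Ampere): 5376 dedicated FP32 units plus 5376 shared FP32/INT32
# datapaths. Any cycle the shared units spend on INT32 is lost to FP32.
GA102_DEDICATED_FP32 = 5376
GA102_SHARED_FP32_INT32 = 5376  # headline total: 10752 "CUDA cores"

def ga102_effective_fp32(int32_fraction: float) -> float:
    """Effective FP32 units when a fraction of shared-unit cycles run INT32."""
    return GA102_DEDICATED_FP32 + GA102_SHARED_FP32_INT32 * (1 - int32_fraction)

print(ga102_effective_fp32(0.0))   # 10752.0 on a pure-FP32 workload
print(ga102_effective_fp32(0.3))   # 9139.2 with a 30% INT32 instruction mix

# Rumored AD104 (Ada): separate units, so INT32 work costs no FP32 throughput.
AD104_FP32, AD104_INT32 = 7680, 3840   # 7680 + 3840 = 11520 total, if true
```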