
NVIDIA RTX 4090 Doesn't Max-Out AD102, Ample Room Left for Future RTX 4090 Ti

If performance is 10-20 percent higher than anything AMD has, it will be branded as a Titan GPU. The reason the 30 series didn't have one is partly how close AMD was in performance. They will not risk the headline, "Titan loses."
Nvidia discovered that people are gullible, so they made them believe the 3090 was a Titan and priced it accordingly. Even the so-called tech media (who for the most part are philistines with oversized egos, praising their lords Nvidia/AMD or Intel) fell for it.
 
Datacenters will get all the fully functional dies, gamers get the broken scraps.
That's generally how product segmentation works with silicon. You design your best, biggest chip, and the imperfect dies are salvaged and sold as cut-down versions. Besides, what gamer needs a full-fat AD102 (anyone else wondering where AD100 is)? Even many of us here on a tech enthusiast forum are lamenting the power draw of the 4080, as cut down as it is from the full-fat chip.
 
They weren't bad at all...

...on launch day.

This is how tech advancement works.
They're part of the reason the Ampere stack was (and is) such a horrible mess, and got revised almost entirely with higher VRAM capacities later on. Even on launch day we had a 10GB 3080 that was already short on VRAM in some titles. It's a complete departure from what we're used to getting from an x80-tier product.

So, this is how Nvidia's lack of TSMC works, you mean. Because now that we're back on TSMC, we suddenly can get decent VRAM capacities from the get-go (all on GDDR6X this time, by the way, and all but the largest capacities under 300 W), alongside sizeable core/transistor count improvements and an overall performance boost.

Stop fooling yourself. This was clear since launch and has since been borne out by Nvidia's own release cadence, plus what came before and after Ampere. The consensus was, is, and will be that early Ampere is the all-time low of the last decade in VRAM relative to core power; the numbers don't lie. It's also the only generation built on Samsung, mind you, and only the consumer line at that; the real stuff got TSMC anyway.

The only reason Ampere is competitive, in the end, is that it could do DLSS/RT before RDNA2 could do FSR properly. Everything other than its feature set is objectively worse on Ampere. It's less efficient even though it may (should?) have an architectural advantage.
 