NVIDIA RTX 4090 Doesn't Max-Out AD102, Ample Room Left for Future RTX 4090 Ti

Deleted member 185088

Guest
If performance is 10-20 percent higher than anything AMD has, then it will be branded as a Titan GPU. The reason the 30 series didn't have one is partly how close AMD was in performance; they will not risk the headline "Titan loses."
Nvidia discovered that people are gullible, so it made them believe the 3090 was a Titan and priced it accordingly. Even the so-called tech media (who for the most part are philistines with oversized egos, praising their lords Nvidia, AMD, or Intel) fell for it.
 

hat

Enthusiast
Datacenters will get all the fully functional dies; gamers get the broken scraps.
That's generally how product segmentation works with silicon. You design your best, biggest chip, and the imperfect dies are salvaged and sold as cut-down versions. Besides, what gamer needs a full-fat AD102 (anyone else wondering where AD100 is)? Even many of us here on a tech enthusiast forum are lamenting the power draw of the 4080, as cut down as it is from the full chip.
 
They weren't bad at all...

...on launch day.

This is how tech advancement works.
They're part of the reason the Ampere stack was (and is) such a horrible mess, and why it was revised almost entirely with higher VRAM capacities later on. Even on launch day we had a 10GB 3080 that was already short on VRAM in some titles. It's a complete departure from what we're used to getting from an x80-tier product.

So this is how Nvidia's lack of TSMC works, you mean. Because now we're back on TSMC and suddenly we can get decent VRAM capacities from the get-go (all on GDDR6X this time, by the way, and all but the largest capacities under 300W), alongside numerous core/transistor-count improvements and an overall performance boost.

Stop fooling yourself. This was clear at launch and has since been proven by Nvidia's own release cadence, plus what came before and after Ampere. The consensus was, is, and will remain that early Ampere is the all-time low of the last decade in VRAM relative to core power; the numbers don't lie. It's also the only generation built at Samsung, mind you, and only the consumer line; the real stuff got TSMC anyway.

The only reason Ampere is competitive, in the end, is that it could do DLSS/RT earlier than RDNA2 could do FSR properly. Everything other than its feature set is objectively worse on Ampere. It's less efficient even though it may (should?) have an architectural advantage.
 