
I interpret the 5000 series like this.

The thing has a 2-star rating, bro. Let's be serious: a bunch of people are complaining they got a pair of shoes and a box of nuts and bolts instead:

[screenshot of the listing]
 
The thing has a 2-star rating, bro. Let's be serious: a bunch of people are complaining they got a pair of shoes and a box of nuts and bolts instead:

[screenshot of the listing]
Go browse "what's your latest tech purchase thread" search for 4080. Then come to me saying they couldn't be had at MSRP.

Bored with this convo.
 
Bored with this convo.
Just tell me something: would you buy that after seeing it has 2 stars and one guy claiming he got a pair of stinking shoes instead?

Be honest.

Bro really just tried to convince me that the scammiest online listing ever is a legit "4080 Super below MSRP". Wild.
 
RTX 5090: actually a Titan.

RTX 5080: the gap to the 5090 is similar to GTX 670 vs. GTX Titan, so this is actually a 5070.

RTX 5070: its memory characteristics are very close to the 5080's, so it's not a GTX 660 Ti; it's just a cropped version of that 5070 (due to production faults), so it's cheaper.

RTX 5060 Ti: with more but slower memory than the 5070, it looks like the GTX 660 Ti 3GB version. It's truly a 60 Ti segment card. If the price is right, it could be the sweet spot for the most popular resolution, 1080p.

RTX 5060: half the memory, low bandwidth; the low end that is still way better at low settings. I don't think people will buy this much, except when they have no space in the PC for a bulky cooler.

The 5090 is not a Titan. All the Titan cards priced above $1,200, excluding the last one, had FP64 at 1:3; the 5090 does not. Half of all the Titan cards in general had this capability, even the $1,000-$1,200 ones. It also lacks the VRAM to be called a Titan. 32GB is just enough for it to keep up with some current AI and professional workloads (not all, and not anything coming in the next few years), and if you go ask most professionals, they'll tell you Nvidia did the bare minimum. 48GB would be a Titan, though more akin to the $1,200 Titans. I think people are completely desensitized to the price tags on these high-end cards.

The pricing of this series reflects the maximum that Nvidia can extract at each tier. Stopping production before the launch of the next gen is a brilliant business move by Nvidia because it lets them see the maximum the market is willing to bear for a given product, creates FOMO, and lets them build positive reception around the new cards by making them appear cheap relative to rumors. It's no coincidence that the 5090 is priced at around what 4090s sell for when supply constrained. Ditto for the 5080 and 4080; the 4080 always held about $1,000 worth of value, as the used market bore out.

Looking up and down the stack, the amount of additional resources you get isn't increasing much, save for the 5090, which increases in price proportionately. As HWUB pointed out, the numbers indicate that perf per dollar this gen lands somewhere between worse than poor and merely decent. They also pointed out that perf per dollar really needed to increase significantly to make up for the lack of improvement last gen. At the end of the day, games are becoming significantly more demanding and perf per dollar hasn't kept pace. The 5000 series appears to be a refinement gen; I expect so-so raster uplift, and the prices reflect that. Power efficiency may also be largely stagnant.

I have to give it to Nvidia, they are extremely good at manipulating public perception of their products. The best of any company that I'm aware of. That paired with legitimately talented engineers is a force to be reckoned with.
 
I have to give it to Nvidia, they are extremely good at manipulating public perception of their products. The best of any company that I'm aware of. That paired with legitimately talented engineers is a force to be reckoned with.
So good engineering and good marketing... This is manipulation? Lol.

If product = good and marketing = good every generation, with long support (RTX 20xx getting the new DLSS, besides frame gen, in 2025; much older cards still on current-branch drivers, etc.), it's not manipulation to have positive public perception.
 
So good engineering and good marketing... This is manipulation? Lol.

If product = good and marketing = good every generation, with long support (RTX 20xx getting the new DLSS, besides frame gen, in 2025; much older cards still on current-branch drivers, etc.), it's not manipulation to have positive public perception.
Nvidia has been making the fastest GPUs across almost every metric, if not all of them (raster, RT, AI, CUDA, you name it), for a decade now, but marketing is the reason they're successful... :banghead:
 
Nvidia has been making the fastest GPUs across almost every metric, if not all of them (raster, RT, AI, CUDA, you name it), for a decade now, but marketing is the reason they're successful... :banghead:
Yup, seems people are bitter about it and want to find fault, as usual. The 50-series launch and NV presentation have been the most exciting tech from CES so far. The 9950X3D launching in March is a bit late/boring, and there's nothing else from AMD/Intel besides some laptop CPUs that still use old tech, plus non-K options for desktop.
 
Yup, seems people are bitter about it and want to find fault, as usual. The 50-series launch and NV presentation have been the most exciting tech from CES so far. The 9950X3D launching in March is a bit late/boring, and there's nothing else from AMD/Intel besides some laptop CPUs that still use old tech, plus non-K options for desktop.
There are faults with Nvidia (I bought a Shield and they stopped GameStream support; now they're even removing it altogether from their new control panel, lol), but man, the grass isn't greener on the other side; it's completely rotten.
 
There are faults with Nvidia (I bought a Shield and they stopped GameStream support; now they're even removing it altogether from their new control panel, lol), but man, the grass isn't greener on the other side; it's completely rotten.
The Shield was cool for the upscaler, a glimpse of things to come. I didn't like how it wasn't refreshed with new models into the 2020s.

RTX Video has gotten good enough that I leave it on all the time. It's nice to know the small premium you pay keeps getting you updates like that over the long term.

But yeah, good bit of cognitive dissonance around.
 
At this point it's easier to just normalize everything to die size to see where each GPU "should" go (or how much money NV makes from each).

Ada (2022)
[Name] : [Die size] | [area relative to the big die]
AD102 : 609mm2 | 100%
AD104 : 294mm2 | ~48%
AD106 : 188mm2 | ~31%
(keep in mind this is AREA, so you can fit roughly 2x AD104 chips into the same space as 1x AD102; also, I don't take yields or PCB/cooler costs into account at all... also, also - designing/developing/making a bigger chip may be harder/costlier than a smaller one)
In other words: since die counts scale inversely with die area, the same pool of silicon (188 x 609 mm2) gives you 188x AD102 chips OR around 609x AD106 (assuming perfect yields in both cases).
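For anyone who wants to play with the ratios, here's a minimal Python sketch of that normalization (my own illustration, using the approximate die areas quoted above and ignoring yields and edge loss, just like the numbers in this post):

```python
# Wafer-area normalization: for a fixed pool of silicon, the number of dies
# you can cut scales inversely with die area. Die areas are approximate.
ADA_DIE_AREA_MM2 = {"AD102": 609, "AD104": 294, "AD106": 188}

def dies_from_area(pool_mm2: float, die_mm2: float) -> float:
    """How many dies fit into pool_mm2 of silicon, ignoring yields/edge loss."""
    return pool_mm2 / die_mm2

# Using 188 x 609 mm2 as the common pool gives the whole numbers used below:
pool = 188 * ADA_DIE_AREA_MM2["AD102"]
for chip, area in ADA_DIE_AREA_MM2.items():
    print(chip, round(dies_from_area(pool, area), 1))
# AD102 188.0, AD104 ~389.4, AD106 609.0
```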

Here's "profit" comparison (I count only full die since NV pays for all silicon, regardless if it works or not) :
[AD102] 188x 4090 (MSRP 1600$) = 300 800$
[AD106] 609x 4060 Ti 16GB (MSRP 500$) = 304 500$
^That's how to make Money, BUT...
---------------------------------------------------------
[AD102] 1x 4090 (MSRP 1600$) = 1600$
[AD104] 4x 4070 Ti 12GB (MSRP 800$) = 3200$
NV, will profit at least 2x more from selling a 4070 Ti than a single 4090 from pure die perspective (again, yields for full core should be lower than "hacked" one from 4090).
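A quick per-mm2 sanity check of that "roughly the same" result (my own framing, same MSRPs and die areas as above, still ignoring yields, memory, PCB and cooler costs):

```python
# Revenue per mm2 of die at MSRP - a rough proxy for how hard each Ada die
# "works" for NV. MSRPs and approximate die areas as quoted above.
ada_cards = {
    "RTX 4090 (AD102)":         (1600, 609),
    "RTX 4070 Ti 12GB (AD104)": (800, 294),
    "RTX 4060 Ti 16GB (AD106)": (500, 188),
}

for name, (msrp_usd, area_mm2) in ada_cards.items():
    print(f"{name}: ~${msrp_usd / area_mm2:.2f} per mm2")
# -> ~$2.63, ~$2.72, ~$2.66 per mm2: nearly flat across the Ada stack.
```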

Ampere (2020/2021) :
GA102 : 628mm2 | 100%
GA104 : 392mm2 | ~62.5%
GA106 : 276mm2 | ~44% (NV can make ~2.25x more GA106 than GA102)
Pricing is... meaningless since you couldn't buy those cards at MSRP back in 2020/2021.
Regardless, from wafer area usage - you can make either 69x GA102, or 157x GA106 (again, perfect scaling on all wafers).
In which case :
[GA102] 69x 3090 Ti (MSRP 2000$) = 138 000$
[GA106] 157x 3060 12GB (MSRP 330$) = 51 810$
So, in this case, selling the RTX 3090 Ti was vastly more profitable than 3060s (again, assuming perfect yields [they were not]; also, the 3060 12GB uses a slightly cut-down core, so its real yields get a boost).
Way different from the nearly equal values on Ada.

Turing (2018/2019) :
TU102 : 754mm2 | 100%
TU104 : 545mm2 | ~72%
TU106 : 445mm2 | ~59%
"Wafer area adjusted" :
[TU102] 445x Titan RTX (MSRP 2500$) : 1 112 500$
[TU106] 754x RTX 2070 (MSRP 500$) : 377 000$
^Again, the Titan RTX was WAY more profitable from NV's standpoint than the mid-range card.

Pascal (2016/2017) :
GP102 : 471mm2 | 100%
GP104 : 314mm2 | ~67%
GP106 : 200mm2 | ~42.5%
"Wafer area adjusted" :
[GP102] 200x GTX Titan Xp (MSRP 1200$) : 240 000$
[GP104] 300x GTX 1080 (MSRP 600$) : 180 000$
[GP106] 471x GTX 1060 6GB (MSRP 300$) : 141 300$
In Pascal's case I adjusted all three cards (assuming the cost per unit of wafer area is the same): the more performance you need, the more NV gets paid (and this SHOULD be the norm).

And the 900 series (2014/2015), as NV's last 28nm generation (on a very mature process) :
GM200 : 601mm2 | 100%
GM204 : 398mm2 | ~66%
GM206 : 228mm2 | ~38%
"Wafer area adjusted" for all cards (ie. amount dies made is equal in wafer size consumed) :
[GM200] 45 372x Titan X(M) (1000$) : 45 372 000$
[GM204] 68 514x GTX 980 (550$) : 37 682 700$
[GM206] 119 599x GTX 960 (200$) : 23 919 800$
Again, we can see that the Titan was the most profitable...
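Since every generation above uses the same normalization, here's a short sketch that reproduces the whole exercise in one place (MSRPs and approximate die areas as quoted in this post; revenue per mm2 gives the same ranking as the wafer-area-adjusted totals, just without the huge common multipliers):

```python
# Revenue per mm2 of die at MSRP for each generation discussed above.
# Same caveats as the post: no yields, no memory/PCB/cooler costs, MSRP only.
GENERATIONS = {
    "Ada":     [("RTX 4090", 1600, 609), ("RTX 4070 Ti", 800, 294), ("RTX 4060 Ti 16GB", 500, 188)],
    "Ampere":  [("RTX 3090 Ti", 2000, 628), ("RTX 3060 12GB", 330, 276)],
    "Turing":  [("Titan RTX", 2500, 754), ("RTX 2070", 500, 445)],
    "Pascal":  [("Titan Xp", 1200, 471), ("GTX 1080", 600, 314), ("GTX 1060 6GB", 300, 200)],
    "Maxwell": [("GTX Titan X", 1000, 601), ("GTX 980", 550, 398), ("GTX 960", 200, 228)],
}

for gen, cards in GENERATIONS.items():
    print(gen)
    for name, msrp_usd, die_mm2 in cards:
        print(f"  {name:>18}: ~${msrp_usd / die_mm2:.2f} per mm2 of die")
# Older gens: the Titan/flagship earns far more per mm2 than the mid-range card.
# Ada: the stack is nearly flat, which is the "almost the same values" point above.
```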

This was a good math exercise :D
I hope I didn't make an error; I just wanted to check what this would come up with.
Maybe someone finds this interesting.
Sadly, I don't know the die area numbers for the Blackwell stuff yet :/
 