
NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak


GM200 was the last 28nm flagship @ ~600mm2. It only had room for 24 SM, with a 250W TDP on the Titan X. I don't think it would be too smart to backtrack that far on die config.

The base AD107 (4060) @ 159mm2 from last gen also has 24 SM, and it's roughly 2x stronger just from architectural and clock improvements: 6.691 TFLOPS vs ~15 TFLOPS.
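The back-of-envelope math, if anyone wants to check it (the boost clocks below are approximate values I'm assuming, not official spec):

```python
# Rough FP32 throughput: CUDA cores x 2 FLOPs/clock (FMA) x clock speed.
# Clocks are approximate boost values, so treat the outputs as ballpark figures.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000

# GM200 Titan X: 24 SM x 128 cores = 3072 cores at ~1.09 GHz boost
print(fp32_tflops(3072, 1.09))  # ~6.7 TFLOPS
# AD107 RTX 4060: 24 SM x 128 cores = 3072 cores at ~2.46 GHz boost
print(fp32_tflops(3072, 2.46))  # ~15.1 TFLOPS
```

Same 3072 cores either way; the clock is doing most of the lifting.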

Might be more logical to go with a different fab (Intel/Samsung) and work out a pricing deal for lower-end stuff. They actually did this for the 1050 Ti IIRC (Samsung).

As for VRAM, the 5060/Ti and 5070 will inevitably get 3GB memory modules to replace the current 2GB ones. The base 9070 really cuts into the 5070 in every aspect, at least at "MSRP": more compute units (56 CU vs 48 SM), 4GB more VRAM, etc.

---

AMD and NVIDIA are actually really close in SM/CU count ATM, and it's pretty damn linear for the FP32 TFLOP metric. I would say NVIDIA still has leverage, given they can cram 84 SM into a similarly sized die (GB203 vs Navi 48).

The 9070 XT (64 CU) is better than the 5070 Ti in raw FP32, but the 5070 Ti (70/84 SM, 83% of GB203) still edges it out in games. Could be an AMD fine-wine situation where drivers inevitably improve things. Who knows.
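Quick sanity check on the raw FP32 claim; a rough sketch assuming advertised boost clocks and counting RDNA 4's dual-issue FP32 at full rate, which is exactly why the paper number doesn't translate 1:1 into games:

```python
# Both assume FMA = 2 FLOPs per clock; clocks are approximate boost values.
def blackwell_fp32_tflops(sm: int, boost_ghz: float) -> float:
    return sm * 128 * 2 * boost_ghz / 1000        # 128 FP32 lanes per SM

def rdna4_fp32_tflops(cu: int, boost_ghz: float) -> float:
    return cu * 64 * 2 * 2 * boost_ghz / 1000     # 64 SPs per CU, dual-issue FP32

print(blackwell_fp32_tflops(70, 2.45))  # 5070 Ti (70 SM) -> ~43.9 TFLOPS
print(rdna4_fp32_tflops(64, 2.97))      # 9070 XT (64 CU) -> ~48.7 TFLOPS
```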

Agreed, except on the "fine wine". AMD's traditionally broken, utterly incompetent drivers are what led to "fine wine"; it's basically cope for "buy now, enjoy it to its fullest 2-3 years down the road." For once, it seems this was a key point for the launch of RDNA 4, and most major problems were addressed out of the gate. Expect a similar level of improvement over time to what you'd expect from NV this time around.
 
Never gonna happen. How many times has this been tried, twice? People are too fucking broke for this shit. I feel like NVIDIA can only keep it afloat because they literally give themselves their own GPUs.
I wish it never happens, but they will push for it, be sure of that! Once everyone has fiber at home, better get ready to see it happening a lot more often. (Unfortunately.)
 
Other companies can try it and they'll fail. It will literally never work out, ever. Even mobile games would be a more attractive option if they tried that shit; at least you own the hardware, and they're arguably more worth playing than the AAA trash that actually requires the new desktop GPUs.
 
Except they're not. The 20 series is a big exception; the whole generation was overly large. An outlier, if you will.

Compare x80 cards going back to the GTX 680 and you'll see what I mean.
Yeah, the 20 series was overall kinda big. That being said, the 3080 was an even bigger cut of the flagship die (though yes, the rest of the series tapers off as you go down). The 3080 was also the last Nvidia GPU I was at least vaguely excited by.

A 4080 or 5080 at the same 80ish% cut of the flagship die would've been phenomenal. Ada and now Blackwell could've both had nearly Pascal-tier 80 class cards. Instead we get half the halo product at half the halo product's price, and people seem to just be okay with that.
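To put rough numbers on "half the halo product" (full-die SM counts from public figures; the shipping x90 parts are themselves slightly cut down):

```python
# x80-class SM count as a fraction of its generation's full flagship die.
gens = {
    "GTX 1080 vs full GP102": (20, 30),
    "RTX 3080 vs full GA102": (68, 84),
    "RTX 4080 vs full AD102": (76, 144),
    "RTX 5080 vs full GB202": (84, 192),
}
for name, (x80_sm, flagship_sm) in gens.items():
    print(f"{name}: {x80_sm}/{flagship_sm} = {x80_sm / flagship_sm:.0%}")
# Roughly: 1080 ~67%, 3080 ~81%, 4080 ~53%, 5080 ~44%
```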
 
Sony and Microsoft have both said it's getting harder and more expensive to make more powerful consoles each generation, and every single company manufacturing and/or designing hardware will tell you the same thing. In the future it will become too costly, so they will push gaming via streaming, "thanks" to optical fiber. Trust me, I don't want that to happen; I will always prefer my own hardware and being able to play offline if I want to. But companies only care about profits, and they will go wherever the bigger margins are.

The 5090 is just that... but at $2,000 (MSRP) plus tax, or $4,000 by getting scalped lol.

Nvidia is making the x90 GPUs almost impossible to get for most people, whereas the 1080 Ti was almost a Titan (slightly lower core count and just 1GB less VRAM) and was comparatively very accessible... The 5080 having half of the 5090 die for $1,000 is an absolute joke too.

But Nvidia has always pulled this sort of shady stuff. With Kepler, for example, they sold the GTX 680 at a pretty hefty price even though it was a much smaller config than what Kepler eventually delivered:
GTX 680: 1536 CUDA cores, 256-bit bus, 6Gbps chips: $500 (released March 2012)
GTX 780: 2304 CUDA cores, 384-bit bus, 6Gbps chips: $650 (released May 2013)
GTX 780 Ti: 2880 CUDA cores, 384-bit bus, 7Gbps chips: $700 (released November 2013)
 

Price adjusted to 2025 based on inflation:

$696 USD for the GTX 680: an 8 SM, 294mm2 full die (GK100 itself was never released; it had to be revised into GK110 for the 700 series).
$900 USD for the GTX 780: a 561mm2 die cut down to 80%, 12/15 SM.
$960 USD for the GTX 780 Ti: the full 15 SM, 561mm2 die.
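For the curious, the conversion is just MSRP times a cumulative CPI multiplier; the ~1.37-1.40x factors below are my rough assumption for 2012/2013 to 2025, not an official figure:

```python
# Rough 2025-dollar conversion. The CPI multipliers are assumed ballpark values.
kepler_msrps = {
    "GTX 680 (2012, $499)":    (499, 1.39),
    "GTX 780 (2013, $649)":    (649, 1.38),
    "GTX 780 Ti (2013, $699)": (699, 1.37),
}
for card, (msrp, cpi_factor) in kepler_msrps.items():
    print(f"{card} -> ~${msrp * cpi_factor:.0f} in 2025 dollars")
```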

The only difference now is that the electrical engineering costs more, along with requiring a higher PCB layer count for signal integrity (which also adds $).

The 780 Ti peaked at 250W and used discrete high-side/low-side VRM components. A modern 5080 uses something like a 15-phase smart power stage (SPS) design and draws 360W (with ~1ms spikes to 500W per Igor's measurements) due to dense, significantly higher SM counts.


They could prob sell the FE 5080 for like $750, but they've always pushed margins as you can see with older cards. I'll agree that current AIB pricing is absolutely ridiculous.

I don't think 5080 @ MSRP is too far off from legacy NVIDIA, all things considered..

I mean, they were selling a 314mm2 full die 20 SM GTX1080 in 2016 for $600 ($800 adjusted) and no one cared because performance shot up significantly.

TSMC 4N seems to favor larger dies in terms of SM-per-area ratio. The problem is, Nvidia also wants to make money and move more cards, and yield on a 750mm2 flagship is going to be much lower.
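A rough sketch of why the big die hurts twice: fewer candidates per wafer and worse yield. The defect density below is an assumed illustrative value, not a TSMC number:

```python
import math

# Standard dies-per-wafer approximation plus a simple Poisson yield model.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

for name, area in [("~750mm2 flagship", 750), ("~378mm2 GB203-class", 378)]:
    dpw, y = dies_per_wafer(area), poisson_yield(area)
    print(f"{name}: ~{dpw:.0f} candidates, ~{y:.0%} defect-free, ~{dpw * y:.0f} good dies/wafer")
```

Defective dies often get salvaged as cut-down SKUs, but even so the big die starts with well under half the candidates per wafer.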

I get it... you want top-end performance for a lower price. The market shifted; it isn't 2000-2017 anymore.
 
Technology improves and so does the number of CUDA cores, etc. Wafers are more expensive, but you also get a lot more chips per wafer than before... and back then GPUs topped out at 250W too. Nvidia broke from that with Ampere because the Samsung 8nm node was "crap", but Samsung needed the money, so they sold their chips really cheap, hence the low pricing. TSMC are now the leaders and are asking a lot for new nodes. We definitely need more competition...

They definitely could, but Nvidia saw people spending $1,000 and more on 3070 & 3080 GPUs during the pandemic, so now that's what we get, unfortunately... And AIB pricing is a joke too, even though Nvidia has definitely squeezed AIB margins over the years (and EVGA was probably right to quit).

Nobody complained about the GTX 10 series because it had amazing performance, efficiency and price! The price/performance ratio was insane compared to what we have today... especially now that modern games struggle to run properly with UE5 and heavy RT or PT. And back then we didn't even need DLSS or frame generation!

And yes, they could have made the same GPUs at the same pricing on TSMC 3nm, but they chose not to, because they want and love their huge margins (and their investors do too...).
 

The generational performance bump hasn't increased significantly since the GTX 10 series.

The 20 series was actually quite shitty in per-SM gains generationally; FP32 CUDA cores per SM were halved (128 down to 64). The 30 series was a bit better (it went back to a GTX 10-style 128-core SM layout), but SM counts were skewed in a way that shifted the entire product stack. These two generations are responsible for why things are the way they are today. You can also blame GPU mining.

There's also a misconception here. If the die is the same size as the previous generation (e.g. 400mm2), the wafer metrics are similar up until yield rate comes into play. You only get more dies per wafer at the lower end when you're able to shrink stuff.

TSMC 12nm and 5nm both use 300mm (12-inch) wafers. The dies are legitimately more expensive.
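Same yield math as above, turned into a rough cost-per-good-die comparison. The wafer prices are loose, publicly rumored ballpark figures (assumptions, not quotes):

```python
import math

# Good dies per 300mm wafer using a simple geometric + Poisson yield model.
def good_dies(area_mm2: float, wafer_mm: float = 300.0, d0: float = 0.1) -> float:
    r = wafer_mm / 2
    candidates = math.pi * r**2 / area_mm2 - math.pi * wafer_mm / math.sqrt(2 * area_mm2)
    return candidates * math.exp(-(area_mm2 / 100) * d0)

# Same ~400mm2 die size on an old vs. new node: the wafer price is what moved.
for node, wafer_cost in [("TSMC 12nm", 4_000), ("TSMC 4N/5nm-class", 17_000)]:
    print(f"{node}: ~${wafer_cost / good_dies(400):.0f} per good 400mm2 die")
```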


For example: the 4060 Ti uses a 36 SM, 188mm2 die with 2 SM disabled (34 enabled). The previous generation's fully enabled GA106 was 30 SM @ 276mm2, and GA107 was 20 SM @ 200mm2. I do think you're looking at the whole concept backwards, hence why we disagree a bit.

I agree on the notion of competition. NVIDIA has abused a lead for too long.

Edit: To clarify... there are generational performance uplifts per SM with the 40/50 series, but they're much smaller than the GTX 680 to GTX 1080 run.

The GTX 1080, with a full 20 SM, performed super close to an RTX 2060 with 30 SM (cut down from 36)... That generation was dictated by overly large TSMC 12nm dies.

The 30 series was more linear relative to the 10 series. For example, the 3060 12GB with 28 SM lands closer to a 1080 Ti, also 28 SM but on a flagship die; both are 2 SM disabled from a full 30. 276mm2 (Samsung 8) vs 471mm2 (TSMC 16). The 3060 has half the TMUs, a little over half the ROPs, and around 75% of the potential bandwidth, which is reflected in how they perform side by side.
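The ratios behind that side-by-side, using the public spec-sheet numbers:

```python
# RTX 3060 vs GTX 1080 Ti, per public spec sheets.
rtx_3060   = {"TMUs": 112, "ROPs": 48, "bandwidth_GBps": 360}
gtx_1080ti = {"TMUs": 224, "ROPs": 88, "bandwidth_GBps": 484}

for key in rtx_3060:
    print(f"{key}: {rtx_3060[key] / gtx_1080ti[key]:.0%} of the 1080 Ti")
# TMUs: 50%, ROPs: ~55%, bandwidth: ~74%
```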

You could say NVIDIA regressed in a way, at least at the raster-per-SM level with the 20/30 series; they used die shrinks, SM increments, and increased TDPs as a band-aid.

This is also the reason AMD looks better today with the 9070s. They're more or less matching NVIDIA on SM/CU count and thermal package with a monolithic die. I haven't seen competition like this since the HD 58xx series, where AMD/ATI had a ~300mm2 die competing against a flagship ~500mm2 Fermi.

AMD unfortunately has no plans for a true GB205 (5070) competitor... which means they're just going to cut the larger 357mm2 Navi 48 down for a 12GB, lower-CU-count model. The smaller Navi die is basically half the size (<200mm2) at 32 CU / 2048 SP: competition for the 5060 Ti, but slightly worse just looking at the bigger variants.

tl;dr: I still don't think MSRPs are too far off from legacy pricing, especially considering modern EE design. It's just NVIDIA doing NVIDIA things. AMD competing will drive prices lower (ignoring inflation and the global economy).
 