Friday, April 4th 2025

NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak
NVIDIA's unannounced GeForce RTX 5060 Ti 16 GB and 8 GB models are reportedly due for an official unveiling mid-way through this month; previous reports have suggested an April 16 retail launch. First leaked late last year, the existence of lower-end "Blackwell" GPUs was "semi-officially" confirmed by system integrator specification sheets—two days ago, reportage pointed out another example. Inevitably, alleged launch pricing information has come to light as we close in on release time—courtesy of Board Channels, an inside-track den of some repute. The "Expert No. 1" account has alluded to fresh Team Green rumors; they reckon that the company's incoming new model pricing will be "relatively aggressive."
Supply chain whispers indicate that NVIDIA will repeat its (previous-gen) MSRP guide policies, due to the GeForce RTX 5060 Ti cards offering "estimated similar performance" to GeForce RTX 4060 Ti options. Speculative guide price points of $499 and $399 are anticipated—according to industry moles—for the GeForce RTX 5060 Ti 16 GB and RTX 5060 Ti 8 GB SKUs (respectively). Expert No. 1 has tracked recent GeForce RTX 4060 Ti price cuts, intimating the clearing out of old-gen stock. Team Green's GeForce RTX 5060 design is reportedly a more distant prospect—slated for arrival next month—so supply chain leakers have not yet picked up on pre-release MSRP info.
Sources:
Board Channels, VideoCardz, Notebookcheck
182 Comments on NVIDIA GeForce RTX 5060 Ti 16 GB SKU Likely Launching at $499, According to Supply Chain Leak
The 5070 is a lower tier GPU than the 1650 Super was in the Turing days and this is even worse. Selling it for anything above $150 is an insult.
You keep saying it's too expensive without any facts about what the product costs to produce, let alone the value provided to the customer. Thanks, my ignore list was hungry for a new reg.
On the subject of the 16 GB 5060 Ti vs. the VRAM-limited 5070, the former is the better buy in my opinion. I got a 3080 FE 10G, which I initially thought was a good deal considering market conditions, but oh man, that VRAM was so crippling.
The problem we're facing is that we're getting closer and closer to that "nanometer wall" every day, and Moore's Law is dying too, so things become more and more expensive to make. But the truth is that we all knew RT/PT was too demanding, and with engines like UE5 that run poorly on most PCs, with lots of traversal stutter and huge performance drops (maybe too demanding for current hardware too?!), a lot of gamers feel it's not always worth it. And Nvidia isn't making things better with each new generation, RTX 50 being the worst ever: almost 0% IPC increase in raster and RT/PT vs. Lovelace; most GPUs barely got a performance bump (and have had crippled hardware over the years); Nvidia didn't use TSMC 3 nm, and Blackwell efficiency is not really better either, so efficiency went to the gutter; there are still melting connectors four years after their introduction; Nvidia drivers are getting worse and worse (even though they used to be extremely reliable); etc.
I really think people don't mind paying a premium if it's worth it, but when you see the PS5 and PS5 Pro at $500 and $700 while an RTX 5080 alone is $1,000 and you still need to buy all the other parts of the system, you wonder if it's really worth it. We all know what their end goal is: streaming. They want gamers to stream their games from the cloud and pay a subscription for it, that's all. PC hardware gets more expensive each generation, and sooner or later most people won't be able to afford it. Then games will be optimized for cloud gaming and that's it.
Regarding RT/PT, we did not "ask for them", because we all knew they were too demanding. Look at PT games (Cyberpunk 2077, Alan Wake 2, Indiana Jones, Black Myth: Wukong, etc.): if you try to run them at native 4K on a 5090 (which is $2,000 at MSRP) you get ~30 fps. You need DLSS Performance (1080p internal) to get a good framerate, and then frame generation for more fluidity, but that adds latency and artifacts; MFG is even worse, since there are even more artifacts. So yes, we all want RT/PT games because they look amazing, but the performance cost is too big as of now. That's the problem. Meanwhile, plenty of games already look amazing with pure rasterization:
The Last of Us Part II
Horizon Forbidden West
God of War: Ragnarök
Red Dead Redemption 2
Kingdom Come: Deliverance II
Forza Horizon 5
Microsoft Flight Simulator 2024
And:
Cyberpunk 2077 (looks great even without any RT/PT)
Metro Exodus: Enhanced Edition (runs great on most PCs even if it has RT by default)
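For context on how much DLSS Performance cuts the rendered pixel load mentioned above, here is a minimal sketch of the render-resolution math. The per-axis scale factors used are the commonly cited preset values (an assumption here, not taken from the comment):

```python
def dlss_render_res(out_w, out_h, scale):
    """Approximate DLSS internal render resolution from the output
    resolution and the per-axis scale factor of the chosen mode."""
    return round(out_w * scale), round(out_h * scale)

# Commonly cited per-axis scale factors (assumed preset values):
# Quality ~2/3, Balanced ~0.58, Performance 0.5, Ultra Performance ~1/3
print(dlss_render_res(3840, 2160, 0.5))   # Performance at 4K -> (1920, 1080)
print(dlss_render_res(3840, 2160, 2 / 3)) # Quality at 4K -> (2560, 1440)
```

At 4K, Performance mode renders only a quarter of the output pixels (1920 × 1080), which is why it recovers so much framerate in path-traced titles.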
But Nvidia, as well as Epic Games with UE5 and Unity with their engine, are all pushing for games with RT and even path tracing, so the industry is going this way too...
They also know the market very well, and they see what people are willing to pay, so they take advantage of this.
The only real solution is for people to stop spending so much on video cards, and prices will eventually come down.
Nvidia was already making huge margins back then, and now they're extremely high; no company except maybe Apple has margins that high. So yes, they could definitely afford to lower their prices.
Massive generational gains are pretty much done with, but it's not like they're pushing smaller dies; it's just that the focus shifted to RT and AI. Pre-RTX peaked out at 20 SM for the same segment of x80 cards.
VRAM stagnation is real, I agree. It's intentional, to upsell and make the lineup more variable, as price/performance used to favor the 60/70 class in most situations. Now it's ironically the "$750" 5070 Ti for this generation, which lines up closer to a legacy 70-class card than anything else: 83% of the die enabled.
Die sizes haven't really changed; it's just that SM/CUDA count favors a flagship 600 mm2+ config.
AMD could only fit 64 CU in a 357 mm2 config via Navi 48, but there's probably more room in regards to the RT config here; the design is denser than GB203.
VRAM capacity has been an issue for a while, and even 12 GB is not great; 16 GB is the minimum for future-proofing your GPU (at least for a few years). Next-gen consoles will probably have 24 GB of VRAM (if not 32 GB), so next-gen GPUs will probably need 32 GB+ to run games at 4K Ultra settings on PC. GDDR7 will probably need 4 GB chips by then.
AMD RDNA 4 is a great architecture, and I believe UDNA (ex RDNA 5) will bring a good performance bump again (Raster + RT/PT). Let's hope UDNA will also include High-End GPUs to compete with Nvidia x90 GPUs too.
Then the 4060 was barely faster than the 3060 Ti and it looks like history's about to repeat itself.
Nvidia has been selling smaller and smaller GPUs in each product tier except its top SKU for around the same prices after accounting for inflation. That's why you still see similar performance improvements per generation at the top end (5090 vs 4090, 4090 vs 3090/Ti etc) but further down it really doesn't pan out (5070 vs 4080, 4060 vs 3060 Ti etc).
It's just idealism. I've called out the "shrinkflation" for years at this point; the unfortunate reality is that they don't have enough perfect wafers to satisfy demand throughout.
Regarding yields, I'm willing to bet they've just put most of their eggs in the enterprise basket, and GeForce just gets a footnote of "oh yeah, we should make some of those too." Yes, the chips all come from the same wafers, but GeForce is probably down to a single-digit percentage of their revenue at this point, and having all of it get scalped means higher margins for them anyway, so why bother making a good amount of them when the enterprise stuff has even bigger margins? Nvidia is "no longer a graphics company," after all...
With current hardware we could already do amazing things if games were really optimized on PC... And the next generation of consoles will probably have a 12c/24t Zen 6 CPU with 3D V-Cache, 24 GB (or maybe 32 GB) of GDDR7, and maybe the equivalent of a 7900 XTX but on the UDNA (ex-RDNA 5) architecture!
Of interest, the monitoring software didn't detect the delayed frames (a common issue when VRAM-bottlenecked), low-quality textures, or random triangle artifacts. He put in a weaker card with more VRAM and it was far more playable.
The generation following this moved down to a full-die 28 nm, 294 mm2 GK104. SM count got halved, but there were actual architectural improvements, given that 8 SM on the 680 outclassed 16 SM on the 580. These huge gaps in uplift don't really exist anymore.
I agree with you if this is your main point.
---
The argument with AMD is simply that they're still behind NVIDIA at the same class of die: 357 mm2 is close to 378 mm2 (64 CU vs. 84 SM). Big Navi (the GB202 competitor) was obviously canceled.
*I would still favor the 9070 XT if MSRPs actually existed.* Except they're not. The 20 series is a big exception; the whole generation was overly large. An outlier, if you will.
Compare x80 cards going back to the GTX 680 and you'll see what I mean.
GM200 was the last 28 nm flagship at 600 mm2. It only had room for 24 SM, at a 250 W TDP via the Titan X. Don't think it would be too smart to backtrack that far on die config.
The base AD107 (4060) at 159 mm2 from last gen had 24 SM, and it's like 2x stronger just from architectural improvement: 6.691 TFLOPS vs. 15 TFLOPS.
Might be more logical to go with a different fab (Intel/Samsung) and work out a pricing deal for lower-end stuff. They actually did this for the 1050 Ti IIRC (Samsung).
As for VRAM, the 5060/Ti and 5070 will inevitably get 3 GB GDDR7 dies to replace the current 2 GB ones. The base 9070 really cuts into the 5070 in every aspect, at least per "MSRP": more CU (56 vs. 48 SM), 4 GB more VRAM, etc.
---
AMD and NVIDIA are actually real close in SM/CU count ATM, and it's pretty damn linear for the FP32 TFLOP metric. I'd say NVIDIA still has leverage, given they can cram 84 SM into a similar-sized die (GB203 vs. Navi 48).
The 9070 XT (64 CU) is better than the 5070 Ti in raw FP32, but the 5070 Ti (70 of 84 SM, 83% of GB203) still edges ahead in games. Could be an AMD fine-wine situation where drivers inevitably improve things. Who knows.
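As a rough sanity check on the FP32 figures thrown around in this thread, theoretical peak throughput is roughly units × FP32 lanes per unit × 2 FLOPs per clock (FMA) × boost clock. A minimal sketch, assuming 128 FP32 lanes per SM/CU for both recent NVIDIA parts and dual-issue RDNA 4, with approximate boost clocks (all assumptions, not figures from the comments):

```python
def peak_fp32_tflops(units, boost_ghz, lanes=128):
    # Peak FP32 = units * lanes * 2 FLOPs/clock (FMA) * clock (GHz) / 1000
    return units * lanes * 2 * boost_ghz / 1000

# Approximate boost clocks (assumed): RTX 4060 ~2.46 GHz,
# RTX 5070 Ti ~2.45 GHz, RX 9070 XT ~2.97 GHz
print(round(peak_fp32_tflops(24, 2.46), 1))  # AD107 / RTX 4060: ~15.1
print(round(peak_fp32_tflops(70, 2.45), 1))  # cut GB203 / 5070 Ti: ~43.9
print(round(peak_fp32_tflops(64, 2.97), 1))  # Navi 48 / 9070 XT: ~48.7
```

Under these assumed clocks the 9070 XT does lead the 5070 Ti on paper FP32 while trailing slightly in games, consistent with the comment above; paper TFLOPS rarely translate one-to-one into framerates.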