
NVIDIA GeForce RTX 4060 Ti 16GB Review—Not

IMO Nvidia are just embarrassed about the whole thing.

It was a knee-jerk reaction to reviewers and customers destroying the 4060 Ti over its lack of VRAM. The problem with the 4060 Ti's performance isn't just a lack of VRAM, though; it's the lack of bandwidth to feed that VRAM.

Look at the 3060 Ti, which also has just 8GB VRAM - its performance doesn't nosedive as badly as the 4060 Ti's does when approaching or exceeding the VRAM buffer, because it's on a 256-bit bus with twice the bandwidth and can do the undesirable data juggling twice as fast. Both cards stutter where 12GB cards don't, but the stuttering on an 8GB 3060 Ti or 3070 is far less intrusive and settles down faster too.
While the narrower bus hurts at higher resolutions, a wider bus with the same amount of memory wouldn't help the situation when it runs out of VRAM. In that case, it would be limited by the PCIe link to the host. Since the PCIe link is much slower than even the 128-bit GDDR6 interface, relying on it is not a good solution.
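To put rough numbers on that gap, here is a back-of-the-envelope sketch. The data rates (18 Gbps GDDR6 on the 4060 Ti, 14 Gbps on the 3060 Ti) and the ~2 GB/s per PCIe 4.0 lane figure are typical published specs, assumed here rather than taken from the thread:

```python
# Peak memory bandwidth: bus width in bytes x per-pin transfer rate (GT/s)
def vram_bandwidth_gbps(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

rtx_4060_ti = vram_bandwidth_gbps(128, 18)  # 288.0 GB/s on a 128-bit bus
rtx_3060_ti = vram_bandwidth_gbps(256, 14)  # 448.0 GB/s on a 256-bit bus

# PCIe 4.0 moves roughly 2 GB/s per lane each way after encoding overhead,
# so an x8 link tops out around 16 GB/s - over an order of magnitude slower
# than either card's VRAM interface
pcie4_x8 = 2 * 8

print(rtx_4060_ti, rtx_3060_ti, pcie4_x8)
```

Whatever the exact clocks, the ratio is the point: once a game spills out of VRAM, the fallback path is more than ten times slower than the slowest local memory bus here.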
 
I think it was TechSpot that did bandwidth testing on the 4060 Ti last month, and their verdict was that the 4060 Ti's bandwidth problem was worse than its VRAM problem. Both are serious problems at $400 in 2023, for sure, but I'm not 100% clear on how bandwidth affects stuttering, only that cards with more VRAM bandwidth stuttered less.

I guess the 8GB variant of the 4060 Ti is also crippled by its x8 PCIe link, whereas older Ampere cards like the 3060 Ti have 16 PCIe lanes. That's the horror show where you're pulling data in from system RAM over the PCIe bus, and the self-inflicted stutterfest caused by cheaping out on VRAM is exacerbated by yet another bottleneck in the PCIe link.
 
Crippled memory on a crippled bus on crippled lanes. Anything else to add?
 
Average consumer in 2016:
Oh this 3GB 1060 is a good deal...heeeeey.

Average consumer in 2023:
I heard the models with lower VRAM are rip-offs, better get the 16GB 4060 Ti...heeeeey.
 
Funnily enough, NV tried to pull a 1060 3GB/6GB with the 4080 12GB/16GB. Guess its price tag made people pick up their pitchforks in protest.
 
Good thing Intel is progressing with their drivers so one can get a proper 16GB card for cheap.
 
Watch out for the VRAM boogeyman, he'll catch you in all the worst optimised games and unrealistic scenarios!
 
Spain has some in stock but ouch at the price!

[attached screenshots: retailer listings showing the prices]
 
I'd have been interested in these 16GB variants for workstations/modelling builds as the VRAM is extremely valuable for realtime CAD work in a few of our key applications.

Puget Systems have already tested the 8GB 4060 Ti though, and the 128-bit bandwidth makes even the 8GB version useless. It's often slower than the 3060 in asset-heavy viewport tests. Not the 3060 Ti, the cheaper 3060 (which incidentally has 50% more VRAM).

At the insane asking price, I don't think I can even justify it as a cost-effective Quadro alternative. A brand-new A4000 (essentially a 3070 Ti with 16GB) is just a much better buy for around 60% more money, and often it'll be more than 60% better for CUDA/viewport performance. At least the 3060 12GB was a decent stand-in for expensive Quadro-tier cards where ISV support wasn't mandatory, but this 4060 Ti is so hamstrung and expensive that there's no point. Either keep buying 3060 12GB cards for half the money, or just pony up and get the A4000 anyway.

It's such a fail, I don't have my own words for it, so - to quote Puget Systems - "not recommended".
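The "60% more money, often more than 60% faster" argument above is really a performance-per-dollar comparison. A minimal sketch, with illustrative placeholder prices and scores (not measured data):

```python
# Hypothetical value check: does the pricier card also win on perf-per-dollar?
# All numbers below are illustrative placeholders, not benchmarks.
def perf_per_dollar(perf_score, price_usd):
    return perf_score / price_usd

value_4060ti_16gb = perf_per_dollar(100, 500)  # baseline: score 100 at $500
value_a4000 = perf_per_dollar(170, 800)        # ~60% more money, >60% faster

# When performance grows faster than price, the dearer card is better value
print(value_a4000 > value_4060ti_16gb)
```

The design point: a higher price only loses on value when the performance uplift is smaller than the price uplift, which is the poster's claim about the 4060 Ti 16GB.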
 
It's so bizarre to see a company basically advertise a new product as "we tried to dupe you with a product that was gimped just a little while ago, but look, we fixed it, buy it for another $100, pretty please?"
 
I'd say this is the card that should have originally come out, at the original 8GB card's price point of $400, but then I'd be ignoring the crippled memory bandwidth that, judging by even Nvidia's own slides, makes the added 8GB practically useless.
 
What is even worse is this 4060 Ti card has more VRAM than the 4070 and 4070 Ti cards. :roll:
 
Did no one notice that the 2060 Super and 3060 Ti were also shown with bars for bigger-memory versions? It stinks like hell...
 
We don't need another 4060Ti, memory variants included. As Steve would say, just a waste of silicon.
 
Why launch something that they're ashamed to show off, or even to supply stores with? Nvidia is not making much sense these days.
 
They totally failed to disguise or even vaguely camouflage the crippling bandwidth shortage - and that's not something I remember happening to RDNA2 cards when they introduced Infinity Cache.
When RDNA2 came out the prices were so out of whack that nobody noticed/cared.
The 6600 XT did not fare all too well against the 5700/5700 XT either, especially in situations relying on memory bandwidth. If I remember correctly, even the relative results were much the same as 4060 Ti vs 3060 Ti/3070.

Edit:
$329 MSRP 5700 got 83/84% and $399 MSRP 5700XT got 92/94% vs $379 MSRP 6600XT (https://www.techpowerup.com/review/msi-radeon-rx-6600-xt-gaming-x/28.html)
$399 MSRP 3060Ti gets 89/91% and $499 MSRP 3070 gets 101/104% vs $399 MSRP 4060Ti (https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-founders-edition/32.html)
At 2160p in both cases there is a pretty severe falloff for newer cards.
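The MSRPs and relative scores quoted above can be folded into a single perf-per-dollar figure. A quick sketch using the lower number of each quoted pair (results are only as good as those launch MSRPs, which ignore street prices):

```python
# Relative performance points per $100 of MSRP, from the figures quoted above
cards = {
    "RX 5700":     (83, 329),
    "RX 5700 XT":  (92, 399),
    "RX 6600 XT":  (100, 379),
    "RTX 3060 Ti": (89, 399),
    "RTX 3070":    (101, 499),
    "RTX 4060 Ti": (100, 399),
}

value = {name: round(perf / msrp * 100, 1) for name, (perf, msrp) in cards.items()}
for name, v in value.items():
    print(f"{name}: {v} perf points per $100 of MSRP")
```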
 
Nvidia doesn't want people to use these for ML, where the amount of VRAM is crucial. Even the shit performance of a 128-bit memory bus isn't really a problem - you just wait a few seconds longer for the model to finish - but you can't do much at all with less than 16GB, so they want to push people to their expensive products. It was the same with cryptocurrency mining, when they crippled consumer products to sell overpriced "dedicated" (a.k.a. not software-crippled but otherwise identical) hardware.
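The "less than 16GB" point comes down to simple sizing arithmetic. A rough sketch (illustrative only; real workloads also need VRAM for activations, KV cache, and framework overhead):

```python
# Rough VRAM needed just to hold a model's weights:
# parameter count x bytes per parameter
def weights_gb(params_billion, bytes_per_param=2):  # 2 bytes/param = fp16
    return params_billion * bytes_per_param

# A 7B-parameter model in fp16 needs ~14 GB of weights alone:
# too big for an 8GB card, comfortable on a 16GB one
fp16_7b = weights_gb(7)
int8_7b = weights_gb(7, 1)  # ~7 GB when quantized to 8-bit

print(fp16_7b, int8_7b)
```

This is why VRAM capacity, not bandwidth, is the hard gate for ML hobbyists: a slow bus makes a run take longer, but too little VRAM means the model simply doesn't load.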
 
Except that nobody forced Nvidia to launch the 16 GB 4060 Ti. They could have just not done it and let everyone in need for such VRAM buy the more expensive cards.
 
In regards to FPS lows, my observation is that with texture and shader VRAM starvation, the framerate and frametime reported by monitoring tools such as GPU-Z are not affected - something Digital Foundry publicly commented on when they reviewed the FF7 remake. They were able to pick up on it because they use non-industry-standard monitoring for this stuff, and because they actually play the games they review.

But of course, another issue is that even when there is no stuttering, there are problems like textures either not loading at all, or a lower-LoD version loading, so you get the texture that's designed only to be used at a distance. This is something the review industry is going to have to learn to pick up on; the days of just running automated tools to monitor FPS metrics during in-game benchmarks are gone, I think.
 
Possible answers:
1 - to prove AMD marketing wrong when they say vram matters;
2 - to fool those who only see big numbers (including the price tag);
3 - to prove Jensen right on his adage "The more you buy, the more you save".

Definitely none of these :kookoo:
Correct answer:
1 - to prove there are Nvidia buyers (blind ones) who would spend any scalped, super-overcharged amount on whatever potato is offered by their "beloved" Jensen - the RTX 4060 Ti is nothing more than an RTX 4050 with a fake name and a fake price tag.
 