Thursday, January 19th 2023
NVIDIA GeForce RTX 4060 Ti Possible Specs Surface—160 W Power, Debuts AD106 Silicon
NVIDIA's next GeForce RTX 40-series "Ada" graphics card launch is widely expected to be the GeForce RTX 4070 (non-Ti), and as we approach Spring 2023, the company is expected to ramp up to the meat of its new generation with the xx60 segment, beginning with the GeForce RTX 4060 Ti. This new performance-segment SKU debuts the 4 nm "AD106" silicon. A set of leaks by kopite7kimi, a reliable source with NVIDIA leaks, sheds light on possible specifications.
The RTX 4060 Ti is based on the AD106 silicon, which is expected to be much smaller than the AD104 powering the RTX 4070 series. The reference board developed at NVIDIA, codenamed PG190, is reportedly tiny, yet it features the 16-pin ATX 12VHPWR connector. This is probably set for 300 W at its sense pins, and adapters included with graphics cards could convert two 8-pin PCIe connectors into one 300 W 16-pin connector. The RTX 4060 Ti is expected to come with a typical graphics power value of 160 W.

At this point we don't know whether the RTX 4060 Ti maxes out the AD106, but its rumored specs read as follows: 4,352 CUDA cores across 34 streaming multiprocessors (SM), 34 RT cores, 136 Tensor cores, 136 TMUs, and an unknown ROP count. The GPU is expected to feature a 128-bit wide GDDR6/X memory interface, and 8 GB could remain the standard memory size. NVIDIA is expected to use JEDEC-standard 18 Gbps GDDR6 memory, which should yield 288 GB/s of memory bandwidth. It will be very interesting to see how much faster the RTX 4060 Ti is than its predecessor, the RTX 3060 Ti, given that it has barely two-thirds the memory bandwidth. NVIDIA has made several architectural improvements to the memory sub-system with "Ada," and the AD106 is expected to get a large 32 MB L2 cache.
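As a rough sanity check of the leaked figures, peak memory bandwidth follows directly from bus width and per-pin data rate. The formula is standard; the specific numbers below are the rumored ones from the article, not confirmed specs:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (per-pin data rate in Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

# Rumored RTX 4060 Ti: 128-bit bus, 18 Gbps GDDR6
print(memory_bandwidth_gbs(128, 18))  # 288.0 GB/s, matching the leak
# RTX 3060 Ti for comparison: 256-bit bus, 14 Gbps GDDR6
print(memory_bandwidth_gbs(256, 14))  # 448.0 GB/s
```

288 / 448 is about 64%, which is where the "barely two-thirds the memory bandwidth" comparison comes from.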
Sources:
kopite7kimi (Twitter), VideoCardz
RTX 3060 Ti Bandwidth 448 GB/s
www.techpowerup.com/gpu-specs/geforce-rtx-3060-ti.c3681
RTX 4060 Ti Bandwidth 288 GB/s
www.techpowerup.com/gpu-specs/geforce-rtx-4060-ti.c3890
The 4060 Ti will have the same Memory Bus width as a RTX 3050
www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858
The 4060 Ti will be a way overpriced GPU. It shouldn't even be called a 4060 Ti.
Did the memory compression improve, though?
If you have 50% better memory compression on a 128-bit bus, it will work just as fast as same-speed memory on a 192-bit bus, even though the theoretical bandwidth is much lower. They can also use even faster memory chips and beat it.
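The commenter's argument can be put in numbers. This is an idealized model (the 50% compression gain and the 18 Gbps figure are illustrative assumptions, not measured values): effective bandwidth scales with both raw bandwidth and compression ratio, so a narrower bus can match a wider one.

```python
def effective_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float,
                            compression: float = 1.0) -> float:
    """Idealized effective bandwidth: raw GB/s scaled by a compression factor."""
    return bus_width_bits / 8 * data_rate_gbps * compression

narrow = effective_bandwidth_gbs(128, 18, compression=1.5)  # 128-bit + 50% better compression
wide = effective_bandwidth_gbs(192, 18)                     # 192-bit, same memory, no gain
print(narrow, wide)  # 432.0 432.0 -- equal in this simplified model
```

Real workloads compress unevenly, so this is an upper bound on what compression alone can recover.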
Look at perf/price and maybe perf/watt, not at specs in isolation. Performance is what matters in the end, and the 4070 Ti easily beats most last-gen cards at 1440p, which is the resolution the card aims at. It still does 4K fairly well and performs better than the 6900 XT and 3090 there, and that is 192-bit vs. 256/384-bit.
If you think that doesn't matter you're being oblivious because it's those things that actually dictate price and performance.
WITH DLSS 3.0 $600
DLSS 3.0 doubles the price. Right? :p
This 4060 Ti is about 60% of a 4070 Ti, but I very much doubt it will only cost 60% of "$799" ($479).
It also doesn't bode well for the vanilla 4060, as it's likely to be a cut-down variant of AD106 on a 128-bit bus and therefore limited to 8GB VRAM.
8GB VRAM was the right answer for 2019, but games have moved on and despite not needing all 12GB, I'm glad the 3060 had 12GB instead of 6GB, because 6GB was provably too little. I strongly suspect that 8GB will be a problem for these cards before they reach the end of their typical 3-5 year replacement cycle for the bulk of gamers.
3090 owners were pissed because 3080 was so close in performance and they still had to pay so much more for it. Now the 90 owners know they are getting what they are paying for.
Not sure why you think it's a disaster just because prices are high; everyone knew prices were going to be high, because they had tons of cards and chips left from last generation that needed to be sold without losing big money. When 3000 and 6000 series are sold out, prices will drop. That is why I am waiting a few months to pick up a 4080-4090 or 7900 XTX when we see more custom models and better availability.
4080-4090 or 7900 XTX is a huge upgrade from my 3080 Ti and platform will be upgraded too, probably going Ryzen 7800X3D/7900X3D
Nvidia aren't known for being reasonable, and the $329 MSRP of the 3060 was a very tough pill to swallow given how much it had been cut down from the $399 3060 Ti, and how little performance advantage it had over the $299 2060, which was also seen as poor value compared to prior generations and the competition from AMD.
AMD need marketshare and with a bit of luck they'll be aggressively competitive with the 7600-series - they definitely did that with the 7900XTX vs 4080 and if they can undercut both the 4060 models by 15-20% then it'll be a good thing for the mainstream market.
The Maxwell 980 and 980 Ti were more of the same thing. The 980 was upper midrange and the 980 Ti was high end.
The Pascal 1080 and 1080 Ti were more of the same thing. The 1080 was upper midrange and the 1080 Ti was high end.
The Turing 2080 and 2080 Ti were more of the same thing. The 2080 was upper midrange and the 2080 Ti was high end.
The Ampere 3080 and 3080 Ti were different. They were both high end.
But now we are back to the same thing again. The Ada 4080 is upper midrange and the 4080 Ti is high end.
I like the "value" of the 4070 Ti, as sad as that is, given that the 3080 is really hard to find, especially at a decent price.
This card will have so-so performance at 1920x1080, and bad performance once you increase the resolution, settings, and with new games.
Do you think that the 3050 will outperform it?
If it can be faster with a narrower memory bus, why do you care?
The cost/perf will give you the right indication of whether that product is worth buying (probably not). Anything else is just misleading spec numbers.
- Back in the bad old days, when an entry-level GTS 450 was $129 ($189 in today's money with inflation and tariffs), you had 1080p30 or 720p60 performance in the games of 2010. 1080p30 is now the realm of APUs and iGPUs because no dGPUs are sold that are that slow in today's games. Even the terrible RX 6400 at $159 is better than 720p60 in just about everything.
- For midrange, let's go back a decade to the popular GTX 660 using a xx106 die. It was $230 at launch ($320 in today's money) and it was comfortably a 1080p60 card. These days an RX 6600 or 3060 is capable of running 1080p144 in many titles, with the option to upscale to 1440p with FSR/DLSS, with the benefit of VRR and triple-digit framerates.
- At the high end, we're now talking about 4K120 with raytracing, or 300 fps with DLSS3. Even six years ago, gaming at even 4K60 was a problem. The prices have gone up for sure, and the cost of high-end stuff is now pretty disgusting, but you can't hold high-end cards today to the same standards as several generations ago, because those cards were incapable of doing what today's high-end cards can do.
I always like to remind people that better graphics do not make a game good. They can help, but the game mechanics, level design, art style and community experience are going to be just as enjoyable at 1080p60 on medium settings 99% of the time. Sure, you can play at 4K ultra with DLSS3 on your 4090, but it won't fix anything you don't like about the game already, and improved graphics are always a case of diminishing returns. For a very long time (~15 years) 1080p60 was the gold standard for mainstream, and that's now trivial for any midrange card and within reach of entry-level models. The last 3+ years have seen a strong shift towards 1080p144, and that's a huge difference over the 1080p60 we judged older generations by.

If a 4050 at $400 now delivers the gaming experience that an (inflation-adjusted) $400 card from the past used to deliver, then we're not any worse off; we're just being misled by marketing names. Nvidia obviously want to upsell to customers, and one of the best ways to do this is to make people buy higher tiers than they usually would by messing with the naming of tiers and padding out the upper end of the product stack with extremely expensive, low-volume parts.
www.techpowerup.com/review/zotac-geforce-gtx-660/17.html
GTX 660 from 10 years ago gets 43 fps at 1280x800 "very high" in Metro 2033.
The RTX 3060 has no problem running the exact same game at 1080p "ultra" at 144Hz without dropping a single frame. It can manage the same feat at 100-120FPS in the Redux edition which is a far more demanding HD remaster with better lighting, shadows, volumetric effects etc.
My point is that not only are games themselves getting more demanding over time, but that our expectations of resolution and framerate are also increasing year-on-year. A lot of the GPU reviews from 10+ years ago are testing at 1024x768. That's only 85% of 720p, and back in those days, antialiasing was a luxury that you could only enable if you had framerate to spare. The convention of "FullHD, 60fps" used to be high-end, and now it's entry-level, regardless of what game you're looking at from any decade.
That doesn't matter anyway; the point is that if this is going to be based on an xx106 chip, that's clearly a downgrade, there is no going around it. Not to mention that of the few xx60 Tis Nvidia made, most were based on an xx104 GPU. How about that?
960 had no Ti, was GM206
1060 had two variants and the faster one was GP106
1660 had many variants, Ti and Super were both TU116
2060 KO/vanilla/Super were all TU106
3060 Ti was the exception to the rule and was the first time they've done this since Kepler, when 104 was the biggest consumer GPU die Nvidia made, period.