Thursday, January 19th 2023

NVIDIA GeForce RTX 4060 Ti Possible Specs Surface—160 W Power, Debuts AD106 Silicon

NVIDIA's next GeForce RTX 40-series "Ada" graphics card launch is widely expected to be the GeForce RTX 4070 (non-Ti), and as we approach Spring 2023, the company is expected to ramp up to the meat of its new generation with the xx60 segment, beginning with the GeForce RTX 4060 Ti. This new performance-segment SKU debuts the 4 nm "AD106" silicon. A set of leaks by kopite7kimi, a reliable source for NVIDIA leaks, sheds light on possible specifications.

The RTX 4060 Ti is based on the AD106 silicon, which is expected to be much smaller than the AD104 powering the RTX 4070 series. The reference board developed at NVIDIA, codenamed PG190, is reportedly tiny, and yet it features the 16-pin ATX 12VHPWR connector. This is probably set for 300 W at its signal pins, and adapters included with graphics cards could convert two 8-pin PCIe connectors into one 300 W 16-pin connector. The RTX 4060 Ti is expected to come with a typical graphics power value of 160 W.
At this point we don't know whether the RTX 4060 Ti maxes out the AD106, but its rumored specs read as follows: 4,352 CUDA cores across 34 streaming multiprocessors (SM), 34 RT cores, 136 Tensor cores, 136 TMUs, and an unknown ROP count. The GPU is expected to feature a 128-bit wide GDDR6/X memory interface, and 8 GB could remain the standard memory size. NVIDIA is expected to use JEDEC-standard 18 Gbps GDDR6 memory, which should yield 288 GB/s of memory bandwidth. It will be very interesting to see how much faster the RTX 4060 Ti is over its predecessor, the RTX 3060 Ti, given that it has barely two-thirds the memory bandwidth. NVIDIA has made several architectural improvements to the memory sub-system with "Ada," and the AD106 is expected to get a large 32 MB L2 cache.
Sources: kopite7kimi (Twitter), VideoCardz

164 Comments on NVIDIA GeForce RTX 4060 Ti Possible Specs Surface—160 W Power, Debuts AD106 Silicon

#26
64K
Dirt ChipWhy are you all so sentimental about memory bit size and die name (or naming at all)?
Sufficient GDDR and price/perf is what matters; all else is just random spec blubber.
Because:

RTX 3060 Ti Bandwidth 448 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-3060-ti.c3681

RTX 4060 Ti Bandwidth 288 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-4060-ti.c3890

The 4060 Ti will have the same memory bus width as an RTX 3050

www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858

The 4060 Ti will be a way overpriced GPU. It shouldn't even be called a 4060 Ti.
#27
las
64KBecause:

RTX 3060 Ti Bandwidth 448 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-3060-ti.c3681

RTX 4060 Ti Bandwidth 288 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-4060-ti.c3890

The 4060 Ti will have the same memory bus width as an RTX 3050

www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858

The 4060 Ti will be a way overpriced GPU. It shouldn't even be called a 4060 Ti.
The 4060 Ti will obviously beat the 3060 Ti ... It will probably have 3070/3070 Ti performance, at 160 watts.

Did the memory compression improve, though?

If you have 50% better memory compression on a 128-bit bus, it will work just as fast as same-speed memory on a 192-bit bus, even though theoretical bandwidth is much lower. They can also use even faster memory chips and beat it.

Look at perf/price and maybe perf/watt, not at useless specs. Performance is what matters in the end, and the 4070 Ti easily beats most last-gen cards at 1440p, the resolution the card aims at. It still does 4K fairly well and performs better than the 6900 XT and 3090 there, and that is 192-bit vs 256/384-bit.
#28
64K
lasThe 4060 Ti will obviously beat the 3060 Ti ... It will probably have 3070/3070 Ti performance, at 160 watts.

Did the memory compression improve, though?

If you have 50% better memory compression on a 128-bit bus, it will work just as fast as same-speed memory on a 192-bit bus, even though theoretical bandwidth is much lower. They can also use even faster memory chips and beat it.

Look at perf/price and maybe perf/watt, not at useless specs. Performance is what matters in the end, and the 4070 Ti easily beats most last-gen cards at 1440p, the resolution the card aims at. It still does 4K fairly well and performs better than the 6900 XT and 3090 there, and that is 192-bit vs 256/384-bit.
The bandwidth takes into account the likely VRAM speed on the 4060 Ti. When the 4060 Ti comes out, the performance is benched here on TPU, and the MSRP turns out to be $150 more than the 3060 Ti's, we can revisit what a disaster the Ada stack continues to be. If the performance goes up some but the price goes up as well, then how is it an upgrade?
#29
Vya Domus
Dirt ChipWhy are you all so sentimental about memory bit size and die name (or naming at all)?
Die names are not just random worthless information; Nvidia uses the same nomenclature every time for a reason. If a certain card now uses a chip that's a lower tier according to the silicon naming scheme, that means they've just upsold their customers a product that is inferior, by their own internal classification, compared to the previous generation. In other words, they're giving you less for more money.

If you think that doesn't matter, you're being oblivious, because it's those things that actually dictate price and performance.
#30
john_
Without DLSS 3.0 $300
WITH DLSS 3.0 $600

DLSS 3.0 doubles the price. Right? :p
#31
TumbleGeorge
64KMSRP is likely
I bet that the entire BOM, R&D included, of the "4060 Ti" is around
64K$150
;)
#32
Chrispy_
The 4070Ti is about half a 4090 in silicon, and costs half as much.

This 4060Ti is about 60% of a 4070Ti, but I very much doubt it will only cost 60% of "$799" ($497)

It also doesn't bode well for the vanilla 4060, as it's likely to be a cut-down variant of AD106 on a 128-bit bus and therefore limited to 8GB VRAM.

8GB VRAM was the right answer for 2019, but games have moved on and despite not needing all 12GB, I'm glad the 3060 had 12GB instead of 6GB, because 6GB was provably too little. I strongly suspect that 8GB will be a problem for these cards before they reach the end of their typical 3-5 year replacement cycle for the bulk of gamers.
#33
Zunexxx
Minus Infinity160-bit bus, $599, raster at about 3070 Ti levels at best. AD104 should have been for the 4060 Ti, AD103 for the 4070/4070 Ti, AD102 for the 4080/4090, and AD106 for the 4050.
The 30 series is the outlier. The xx80 never had a 102 die before; it only got 104/103. 102 was always for the xx80 Ti and above. So no, just because the 30 series gave the 80 class a taste of what the 90 class feels like doesn't make it the norm. We are just back to the "normal" die choice again.
Chrispy_The 4070Ti is about half a 4090 in silicon, and costs half as much.

This 4060Ti is about 60% of a 4070Ti, but I very much doubt it will only cost 60% of "$799" ($497)

It also doesn't bode well for the vanilla 4060, as it's likely to be a cut-down variant of AD106 on a 128-bit bus and therefore limited to 8GB VRAM.

8GB VRAM was the right answer for 2019, but games have moved on and despite not needing all 12GB, I'm glad the 3060 had 12GB instead of 6GB, because 6GB was provably too little. I strongly suspect that 8GB will be a problem for these cards before they reach the end of their typical 3-5 year replacement cycle for the bulk of gamers.
4060 Ti at 499 seems legit, then 4060 at 399-429, and 4050 probably at 299-329
Vya DomusDie names are not just random worthless information; Nvidia uses the same nomenclature every time for a reason. If a certain card now uses a chip that's a lower tier according to the silicon naming scheme, that means they've just upsold their customers a product that is inferior, by their own internal classification, compared to the previous generation. In other words, they're giving you less for more money.

If you think that doesn't matter, you're being oblivious, because it's those things that actually dictate price and performance.
The 30 series was an outlier. Look at Pascal and Turing: did the 80 tier ever get a 102 die? Never; not even during the Maxwell or Kepler era did the 80 tier get a 102 die. So a 104/103 die for the 4080 is just very normal. Get used to what's normal, not abnormal.

3090 owners were pissed because the 3080 was so close in performance and they still had to pay so much more for it. Now the 90-class owners know they are getting what they are paying for.
#34
las
64KThe bandwidth takes into account the likely VRAM speed on the 4060 Ti. When the 4060 Ti comes out, the performance is benched here on TPU, and the MSRP turns out to be $150 more than the 3060 Ti's, we can revisit what a disaster the Ada stack continues to be. If the performance goes up some but the price goes up as well, then how is it an upgrade?
The 4070 Ti uses 75-100 watts less than the 3080, is 20-30% faster, has 2 GB more VRAM, and is the same price. Does this mean 3080 users will upgrade to the 4070 Ti? Probably not, but the 4070 Ti is the better card hands down, and 3080 users will probably go 4080+ or 7900 XTX anyway.

Not sure why you think it's a disaster just because prices are high. Everyone knew prices were going to be high, because they had tons of cards and chips left over from last generation that needed to be sold without losing big money. When the 3000 and 6000 series are sold out, prices will drop. That is why I am waiting a few months to pick up a 4080/4090 or 7900 XTX, once we see more custom models and better availability.

A 4080/4090 or 7900 XTX is a huge upgrade from my 3080 Ti, and the platform will be upgraded too, probably to a Ryzen 7800X3D/7900X3D.
#35
Zunexxx
64KBecause:

RTX 3060 Ti Bandwidth 448 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-3060-ti.c3681

RTX 4060 Ti Bandwidth 288 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-4060-ti.c3890

The 4060 Ti will have the same memory bus width as an RTX 3050

www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858

The 4060 Ti will be a way overpriced GPU. It shouldn't even be called a 4060 Ti.
VRAM isn't the only thing that dictates data transfer; cache also affects it. Ada has increased the cache by 16x, so it makes up for quite a bit of bandwidth.
#36
Chrispy_
Zunexxx4060 Ti at 499 seems legit, then 4060 at 399-429, and 4050 probably at 299-329
That would be reasonable if you consider inflation and the MSRPs of the 20-series and 30-series equivalents.

Nvidia aren't known for being reasonable, and the $329 MSRP of the 3060 was a very tough pill to swallow given how much it had been cut down from the $399 3060 Ti, and how little performance advantage it had over the $299 2060, which was itself seen as poor value compared to prior generations and the competition from AMD.

AMD need market share, and with a bit of luck they'll be aggressively competitive with the 7600 series - they definitely did that with the 7900 XTX vs the 4080 - and if they can undercut both 4060 models by 15-20% it'll be a good thing for the mainstream market.
#37
64K
lasThe 4070 Ti uses 75-100 watts less than the 3080, is 20-30% faster, has 2 GB more VRAM, and is the same price. Does this mean 3080 users will upgrade to the 4070 Ti? Probably not, but the 4070 Ti is the better card hands down, and 3080 users will probably go 4080+ or 7900 XTX anyway.

Not sure why you think it's a disaster just because prices are high. Everyone knew prices were going to be high, because they had tons of cards and chips left over from last generation that needed to be sold without losing big money. When the 3000 and 6000 series are sold out, prices will drop. That is why I am waiting a few months to pick up a 4080/4090 or 7900 XTX, once we see more custom models and better availability.

A 4080/4090 or 7900 XTX is a huge upgrade from my 3080 Ti, and the platform will be upgraded too, probably to a Ryzen 7800X3D/7900X3D.
The Kepler 780 and 780 Ti got the big chip. You are probably thinking of the 680, which had nothing to do with the 780/780 Ti except that they were both the Kepler architecture. The 770 was the refresh of the 680. The 680 is where Nvidia started to muddy the water, and it continues to this very day. The 680/770 were upper-midrange GPUs, just like the 4080 is upper midrange. When the 4080 was released, its MSRP was $1,200. I call that a serious disaster for an upper-midrange GPU. You have to look at the specs. All of the Ada GPUs are way overpriced. Most gamers can't even afford $550 for a lower-midrange Ada. What will the entry-level Ada 4050 be? $400? The pricing is absurd, and almost everyone here knows it.

The Maxwell 980 and 980 Ti were more of the same thing. The 980 was upper midrange and the 980 Ti was high end.

The Pascal 1080 and 1080 Ti were more of the same thing. The 1080 was upper midrange and the 1080 Ti was high end.

The Turing 2080 and 2080 Ti were more of the same thing. The 2080 was upper midrange and the 2080 Ti was high end.

The Ampere 3080 and 3080 Ti were different. They were both high end.

But now we are back to the same thing again. The Ada 4080 is upper midrange and the 4080 Ti is high end.
#38
Nater
lasNah, last-gen cards are EoL and not worth touching unless you get a huge discount.

People always act like you do when new-gen cards come out and beat their old card; I know the feeling.

The 4070 Ti beats the 6800 XT in every way possible, and it will beat it even more in 6-12 months thanks to optimizations and new games coming out. It even beats the 6900 XT, 3080 Ti and 3090 with ease, and pretty much performs like a 3090 Ti at 1440p, so nah, the 6800 XT is not close at all...
I think the point is, it's close enough that it's not an upgrade path. You could sit down at two PCs right next to each other and take the Pepsi challenge. 99% of people who sat down to play would not know the difference at 1440p. Put $270 on the table with the 6800 XT rig, tell them they get to keep it if they choose that PC, and they'll take it.

I like the "value" of the 4070 Ti, as sad as it is, given that the 3080 is really hard to find, especially at a decent price.
#39
Sake
lasThe 4070 Ti uses 75-100 watts less than the 3080, is 20-30% faster, has 2 GB more VRAM, and is the same price. Does this mean 3080 users will upgrade to the 4070 Ti? Probably not, but the 4070 Ti is the better card hands down, and 3080 users will probably go 4080+ or 7900 XTX anyway.

Not sure why you think it's a disaster just because prices are high. Everyone knew prices were going to be high, because they had tons of cards and chips left over from last generation that needed to be sold without losing big money. When the 3000 and 6000 series are sold out, prices will drop. That is why I am waiting a few months to pick up a 4080/4090 or 7900 XTX, once we see more custom models and better availability.

A 4080/4090 or 7900 XTX is a huge upgrade from my 3080 Ti, and the platform will be upgraded too, probably to a Ryzen 7800X3D/7900X3D.
Everybody is annoyed because they obviously dropped each tier down one or two steps and are asking way more money than the products deserve. They name each card one tier higher, but that is no problem for the marketing team defending Nvidia's products on all the forums. Now everything is more expensive, but the price of video cards is just absurd. When I see people still trying to promote this garbage, it makes me sick.
#40
ARF
ZunexxxVRAM isn't the only thing that dictates data transfer; cache also affects it. Ada has increased the cache by 16x, so it makes up for quite a bit of bandwidth.
The victim cache cannot compensate for the higher stress when you increase the resolution to 3840x2160.
This card will have so-so performance at 1920x1080, and bad performance once you increase the resolution or settings, and in new games.
#41
Dirt Chip
64KBecause:

RTX 3060 Ti Bandwidth 448 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-3060-ti.c3681

RTX 4060 Ti Bandwidth 288 GB/s

www.techpowerup.com/gpu-specs/geforce-rtx-4060-ti.c3890

The 4060 Ti will have the same memory bus width as an RTX 3050

www.techpowerup.com/gpu-specs/geforce-rtx-3050-8-gb.c3858

The 4060 Ti will be a way overpriced GPU. It shouldn't even be called a 4060 Ti.
It can have the memory bus of a potato, but if the performance is there, what of it?
Do you think that the 3050 will outperform it?

If it can be faster with a narrower memory bus, why do you care?

The cost/perf will give you the right indication of whether that product is worth buying (probably not). Anything else is just misleading spec numbers.
#42
JustBenching
Dirt ChipShame about the use of 12VHPWR at the lower-midrange level; a 160-200 W GPU needs just one 8-pin plus PCIe slot power. The use of an adapter will be silly.

NV should let AIBs choose the power connector on sub-200 W GPUs so the good old, cost-efficient 8-pin remains an option.

Next we will have the 100 W 4050 with that connector...
The problem is that all new PSUs are ATX 3.0 with 16-pin connectors. It would be silly to buy a new GPU and a new PSU and then require an adapter, right?
Dirt ChipIt can have the memory bus of a potato, but if the performance is there, what of it?
Do you think that the 3050 will outperform it?

If it can be faster with a narrower memory bus, why do you care?

The cost/perf will give you the right indication of whether that product is worth buying (probably not). Anything else is just misleading spec numbers.
Because we don't want to buy performance, we want to buy specs /sarcasm
NaterI think the point is, it's close enough that it's not an upgrade path. You could sit down at two PCs right next to each other and take the Pepsi challenge. 99% of people who sat down to play would not know the difference at 1440p. Put $270 on the table with the 6800 XT rig, tell them they get to keep it if they choose that PC, and they'll take it.

I like the "value" of the 4070 Ti, as sad as it is, given that the 3080 is really hard to find, especially at a decent price.
Then you activate RT and... yeah
#43
Dirt Chip
fevgatosThe problem is that all new PSUs are ATX 3.0 with 16-pin connectors. It would be silly to buy a new GPU and a new PSU and then require an adapter, right?
Not all. The sub-800 W ones mostly don't have one. You don't need an ATX 3.0 PSU for a mid-range or lower GPU; it's just a waste of money.
#44
Chrispy_
64KThe Kepler 780 and 780 Ti got the big chip. You are probably thinking of the 680, which had nothing to do with the 780/780 Ti except that they were both the Kepler architecture. The 770 was the refresh of the 680. The 680 is where Nvidia started to muddy the water, and it continues to this very day. The 680/770 were upper-midrange GPUs, just like the 4080 is upper midrange. When the 4080 was released, its MSRP was $1,200. I call that a serious disaster for an upper-midrange GPU. You have to look at the specs. All of the Ada GPUs are way overpriced. Most gamers can't even afford $550 for a lower-midrange Ada. What will the entry-level Ada 4050 be? $400? The pricing is absurd, and almost everyone here knows it.

The Maxwell 980 and 980 Ti were more of the same thing. The 980 was upper midrange and the 980 Ti was high end.

The Pascal 1080 and 1080 Ti were more of the same thing. The 1080 was upper midrange and the 1080 Ti was high end.

The Turing 2080 and 2080 Ti were more of the same thing. The 2080 was upper midrange and the 2080 Ti was high end.

The Ampere 3080 and 3080 Ti were different. They were both high end.

But now we are back to the same thing again. The Ada 4080 is upper midrange and the 4080 Ti is high end.
The words entry/midrange/high-end are all moving targets, though.
  • Back in the bad old days, when an entry-level GTS 450 was $129 ($189 in today's money with inflation and tariffs), you had 1080p30 or 720p60 performance in the games of 2010. 1080p30 is now the realm of APUs and iGPUs, because no dGPUs that slow are sold for today's games. Even the terrible RX 6400 at $159 manages better than 720p60 in just about everything.
  • For midrange, let's go back a decade to the popular GTX 660 on an xx106 die. It was $230 at launch ($320 in today's money) and it was comfortably a 1080p60 card. These days an RX 6600 or 3060 can run 1080p144 in many titles, with the option to upscale to 1440p with FSR/DLSS and the benefit of VRR and triple-digit framerates.
  • At the high end, we're now talking about 4K120 with raytracing, or 300 fps with DLSS 3. Even six years ago, gaming at 4K60 was a problem. Prices have gone up for sure, and the cost of high-end stuff is now pretty disgusting - but you can't hold today's high-end cards to the same standards as several generations ago, because those cards were incapable of doing what today's high-end cards can do.
I always like to remind people that better graphics do not make a game good. They can help, but the game mechanics, level design, art style and community experience are going to be just as enjoyable at 1080p60 on medium settings 99% of the time. Sure, you can play at 4K ultra with DLSS 3 on your 4090, but it won't fix anything you already dislike about the game, and improved graphics are always a case of diminishing returns. For a very long time (~15 years) 1080p60 was the gold standard for mainstream, and that's now trivial for any midrange card and within reach of entry-level models. The last 3+ years have seen a strong shift towards 1080p144, and that's a huge difference over the 1080p60 we judged older generations by.

If a 4050 at $400 now delivers the gaming experience that an (inflation-adjusted) $400 card from the past used to deliver, then we're not any worse off; we're just being misled by marketing names. Nvidia obviously want to upsell customers, and one of the best ways to do this is to make people buy higher tiers than they usually would by messing with the naming of tiers and padding the upper end of the product stack with extremely expensive, low-volume parts.
#45
TumbleGeorge
Chrispy_we're not any worse off
That is the problem. After so many years I expect much better performance for equal money, not equal performance.
#46
Chrispy_
TumbleGeorgeThat is the problem. After so many years I expect much better performance for equal money, not equal performance.
We have that.

www.techpowerup.com/review/zotac-geforce-gtx-660/17.html
The GTX 660 from 10 years ago gets 43 fps at 1280x800 "very high" in Metro 2033.

The RTX 3060 has no problem running the exact same game at 1080p "ultra" at 144 Hz without dropping a single frame. It can manage 100-120 fps in the Redux edition, which is a far more demanding HD remaster with better lighting, shadows, volumetric effects, etc.

My point is that not only are games themselves getting more demanding over time, but our expectations of resolution and framerate are also increasing year on year. A lot of GPU reviews from 10+ years ago tested at 1024x768. That's only 85% of 720p, and back in those days antialiasing was a luxury you could only enable if you had framerate to spare. "Full HD, 60 fps" used to be high-end, and now it's entry-level, regardless of which game from which decade you're looking at.
#47
Vya Domus
Zunexxx30 series was an outlier, look at pascal and Turing, did the 80 tier ever get a 102 die?
What are you talking about? xx80 Tis have always been 102-class or better. That's not "80" class? Then what is it?

That doesn't matter anyway; the point is that if this card is going to be based on an xx106 chip, that's clearly a downgrade, there is no going around it. Not to mention that of the few xx60 Tis Nvidia has made, most were based on an xx104 GPU, how about that?

#48
Chrispy_
Vya DomusNot to mention that of the few xx60 Tis Nvidia has made, most were based on an xx104 GPU, how about that?

xx104 has been x80-class for the last 8+ years. The 'rule' you're trying to apply has been broken for longer than the brief period it was initially valid - which was just Fermi and Kepler, in 2010-2012.

960 had no Ti, was GM206
1060 had two variants and the faster one was GP106
1660 had many variants; the Ti and Super were both TU116
2060 KO/vanilla/Super were all TU106
3060 Ti was the exception to the rule and was the first time they've done this since Kepler, when 104 was the biggest consumer GPU die Nvidia made, period.
#49
Vya Domus
Chrispy_xx104 has been x80 class for the last 8 years. The 'rule' you're trying to apply has been broken for longer than the short period it was valid for now.
You're still missing the point: that classification indicates where the GPU sits in terms of performance. If a product gets a GPU higher or lower on that rung, it means its performance tier has moved accordingly.
#50
N/A
The GTX 780 was a 110 chip, the big one, at 560 sq. mm; then the 980 got smaller, at 398. The 1080 was 298; the 2080, 398 again. The 3080 was lucky to get a 102 die because the 103 (7680 shaders, 320-bit) was scrapped, and it made sense to carve it out of defective 102 dies instead of a fully functional 103; that is a rarity, and it would have been too weak to compete with the 6800 XT. The RTX 2070 was the first 106-based 70-tier card. So nothing can be set in stone forever.