Friday, May 5th 2023

Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

More leaks are emerging from Russia regarding NVIDIA's not-yet-officially-confirmed RTX 4060 Ti GPU family. Two days ago Marvel Distribution (RU) released details of four upcoming Palit custom-design cards, again confirming the standard RTX 4060 Ti GPU configuration of 8 GB VRAM (plus a 128-bit memory bus). Earlier today hardware tipster momomo_us tracked down some more pre-launch info (rumors point to a late-May release), courtesy of another Russian e-retailer (extremecomp.ru). The four Palit Dual and StormX custom cards from the previous leak are spotted again, but this new listing provides a few extra details.

Palit's four card offerings share the same basic memory specification of 18 Gbps GDDR6, pointing to a maximum theoretical bandwidth of 288 GB/s, derived from the GPU's confirmed 8 GB, 128-bit memory interface. The standard Dual variant appears to have a stock clock speed of 2310 MHz; the StormX and StormX OC models are faster at 2535 MHz and 2670 MHz (respectively); and the Dual OC is the group leader at 2685 MHz. The TPU database's (speculative) entry for the reference NVIDIA GeForce RTX 4060 Ti has the base clock listed as 2310 MHz and the boost clock at 2535 MHz - so the former aligns with the Palit Dual model's normal mode of operation (its boost clock is unknown), and the latter lines up with the standard StormX variant's (presumed) boost mode. Therefore the leaked listing likely shows only the boost clock speeds for Palit's StormX, StormX OC and Dual OC cards.
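For reference, the 288 GB/s figure falls straight out of the per-pin data rate and bus width. A minimal sketch of that standard GDDR arithmetic (the data rates and bus widths are the figures quoted above and in TPU's database; the code itself is purely illustrative):

```python
# Theoretical GDDR bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def gddr_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth_gbs(18, 128))  # RTX 4060 Ti (leaked): 288.0 GB/s
print(gddr_bandwidth_gbs(14, 256))  # RTX 3060 Ti (GDDR6):  448.0 GB/s
```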
Sources: momomo_us Photo Tweet, Wccftech

29 Comments on Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

#2
Selaya
aaand it's not even gddr6x.

KEKW
Posted on Reply
#3
Fluffmeister
Yep, sadly this won't be able to play the shitty AAA games you're all rushing out and beta testing.
Posted on Reply
#4
Darksword
Memory Bandwidth

3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s

Good times. :shadedshu:
Posted on Reply
#5
Darmok N Jalad
Hey that’s the same amount of memory bandwidth as my 5600XT…which launched over 3 years ago for $279. Progress in the 6-line of GPUs!
Posted on Reply
#6
N/A
DarkswordMemory Bandwidth

3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s

Good times. :shadedshu:
The 4070 is able to stand up to the 3080 with just 504 GB/s versus 760 GB/s thanks to a large L2 cache - that's 33% less bandwidth - the rest being roughly equal: about 29 TFLOPS provided by 5888 CUDA cores and 64 ROPs, but operating at ~50% higher frequency, resulting in the performance of 8704 cores / 96 ROPs.

Pretty much the same way, the 4060 Ti is the equivalent of 6144 cores / 72 ROPs. The problem here is the ROPs: 48 ROPs versus 96. That is absolute crap.
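A minimal sketch of the shader-throughput arithmetic behind that 4070 vs. 3080 comparison (assumed formula: cores x 2 FLOPs per clock x boost clock; the boost clocks used are the official reference figures for each card, not part of this leak):

```python
# FP32 throughput in TFLOPS = CUDA cores x 2 FLOPs (FMA) x boost clock (GHz) / 1000.
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000

print(round(fp32_tflops(5888, 2.475), 1))  # RTX 4070: ~29.1 TFLOPS
print(round(fp32_tflops(8704, 1.710), 1))  # RTX 3080: ~29.8 TFLOPS - roughly the same shader grunt
```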
Posted on Reply
#7
Dr. Dro
FluffmeisterYep, sadly this won't be able to play the shitty AAA games you're all rushing out and beta testing.
and neither will the RX 7600 XT. At least with the last generation, if you were unhappy with the RTX 3070's framebuffer, you could get a 6700 XT.

Nowhere to run now.
Posted on Reply
#8
Pooch
Dr. Droand neither will the RX 7600 XT. At least with the last generation, if you were unhappy with the RTX 3070's framebuffer, you could get a 6700 XT.

Nowhere to run now.
But everywhere to hide (1660 super 336 GB/s). My next move will be the 3060 12GB model (360GB/s). I play them, they don't play me. Ascendance is imminent.
Posted on Reply
#10
sLowEnd
DarkswordMemory Bandwidth

3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s

Good times. :shadedshu:
Comparing memory bandwidth between two different architectures doesn't usually lead to any useful performance predictions. It's a "huh, neat" figure at best.

The GTX 760 has 80GB/s more memory bandwidth than the GTX 960. Guess which card performs 20% worse than the other?
The RX 480 matches the R9 390 with a 128GB/s memory bandwidth deficit.
Posted on Reply
#11
undorich
With all the fake frames, that's more than enough ^^ I'm waiting for an APU delivering 2080-3070 performance at 35 W max. Also, I'm not buying into x86-64 anymore, and I will never spend a dime on SDRAM/DDR 1/2/3/4/5/6/7/8/9... it's all old tech. Make stuff modular; let's do like with heaters and cars: ban it and pressure the industry to come up with something new. How long will this BS go on? Nothing has changed over the years, they just press the last drop out of the stone.
Posted on Reply
#12
fevgatos
Darmok N JaladHey that’s the same amount of memory bandwidth as my 5600XT…which launched over 3 years ago for $279. Progress in the 6-line of GPUs!
But the 4060 Ti will probably have a 20-times bigger cache, though.
Posted on Reply
#13
Fatalfury
i thought Nvidia pulled out of Russia??
Posted on Reply
#14
Dr. Dro
PoochBut everywhere to hide (1660 super 336 GB/s). My next move will be the 3060 12GB model (360GB/s). I play them, they don't play me. Ascendance is imminent.
Being fair, absolute memory bandwidth isn't as much of an issue as it used to be, but both companies are hedging their products' performance on large on-die caches.

Last generation, the 6950 XT (512 GB/s) vs. 3090 Ti (1013 GB/s) comparison already showed how desperately AMD's design relies on its cache hit rate, with performance drastically reduced in software that doesn't have a high hit rate, or in situations that heavily demand raw memory bandwidth (ultra-high-quality textures and/or resolutions above 1440p, and most notably, ray tracing).

The end result is that AMD's design was mediocre for 4K/Ultra gaming and had poor ray tracing performance, something the RTX 3090 and its refresh are perfectly capable of doing well. On the flip side, as long as you didn't enable ray tracing, AMD's design was faster at lower resolutions, especially 1080p, so in the end these turned out to be great cards. The RTX 3090 tends to fall behind even the vanilla RX 6800 in pure raster workloads at 1080p in games that are friendly to AMD's architecture.

All in all, the conclusion you can draw from this is the same: the RTX 4060 Ti and the RX 7600 XT are both strictly designed for 1080p gaming, and I suspect their performance is going to fall off a cliff at 1440p, while 4K will be completely unworkable.
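A hypothetical, first-order model of why that hit rate matters so much: only cache misses go out to GDDR, so the DRAM bandwidth a workload actually needs scales with (1 - hit rate). The numbers below are made up purely for illustration, not measured figures:

```python
# Only cache misses hit GDDR, so required DRAM bandwidth ~= total demand x (1 - hit rate).
def required_dram_bandwidth(total_demand_gbs: float, cache_hit_rate: float) -> float:
    return total_demand_gbs * (1 - cache_hit_rate)

# Hypothetical workload demanding 600 GB/s of traffic against a 288 GB/s bus:
print(required_dram_bandwidth(600, 0.60))  # 240 GB/s -> fits at a 1080p-like hit rate
print(required_dram_bandwidth(600, 0.30))  # 420 GB/s -> blows past the bus when the hit rate drops (4K/RT)
```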
Posted on Reply
#15
Pooch
sLowEndComparing memory bandwidth between two different architectures doesn't usually lead to any useful performance predictions. It's a "huh, neat" figure at best.

The GTX 760 has 80GB/s more memory bandwidth than the GTX 960. Guess which card performs 20% worse than the other?
The RX 480 matches the R9 390 with a 128GB/s memory bandwidth deficit.
If the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, minus clock speeds, since with an increase in shader cores the clocks (MHz) are usually lower compared to a smaller core count - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for every upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.

And I think the large on-die caches are clouding this issue. It sounds gimmicky, and like a way for them to have you think that low memory bandwidth is OK.
Posted on Reply
#16
Dr. Dro
Fatalfuryi thought Nvidia pulled out of Russia??
Despite economic sanctions from the West, I've read that life largely goes on as usual in Russia. They've had access to consumer electronics such as iPhones at practically the same prices as in Europe, as well as most goods they'd normally have, by way of grey imports.
Posted on Reply
#17
Fatalfury
Dr. DroDespite economic sanctions from the West, I've read that life largely goes on as usual in Russia. They've had access to consumer electronics such as iPhones at practically the same prices as in Europe, as well as most goods they'd normally have, by way of grey imports.
Yeah, you're absolutely right. Also, since NVIDIA has lots of unsold inventory after the crypto-mining crash, plus life returning to normal after lockdown, selling them to Russia via China would only do them GOOD, without caring about the other things.
Posted on Reply
#18
mama
$200 card max.
Posted on Reply
#19
sLowEnd
PoochIf the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, minus clock speeds, since with an increase in shader cores the clocks (MHz) are usually lower compared to a smaller core count - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for every upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.
Could you repeat what you said, in English this time?
Posted on Reply
#20
Lew Zealand
PoochIf the bandwidth of the newer-generation product is lower than the previous-gen product's, you can assume there is some memory handicap between the two compared models. I only upgrade to a model that has at least the same (192-bit) or higher numbers in every facet of the card, minus clock speeds, since with an increase in shader cores the clocks (MHz) are usually lower compared to a smaller core count - for instance, the 1660 Super has 1408 cores @ 1530 MHz and the 3060 has 3584 cores @ 1320 MHz. In conclusion, there is a lot to be said about the memory bandwidth of cards today and how it relates to game performance in different scenarios, but overall, for every upgrade, I wouldn't recommend going backwards -
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s < --- going backwards like 2 steps forward one step back line dancing.

And I think the large on-die caches are clouding this issue. It sounds gimmicky, and like a way for them to have you think that low memory bandwidth is OK.
What matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.
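The ratios quoted above, worked out explicitly (core counts are the official specs; the 46% raster delta is the claim made here, not something this sketch derives):

```python
cores_4090, cores_7900xt = 16384, 5376
core_ratio = cores_4090 / cores_7900xt                                        # ~3.05x
print(f"{(core_ratio - 1) * 100:.0f}% more cores")                            # ~205% more cores
print(f"{core_ratio / 1.46:.1f}x the cores per unit of raster performance")   # ~2.1x, per the 46% figure above
```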
Posted on Reply
#21
Pooch
Lew ZealandWhat matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.
I don't know of too many outlets that let you rent video cards, so as not to go by paper specs. Regardless, so far I have chosen very well for the money spent, and as for the end result - after some compromises in game settings, yes - 90 percent of the time I'm able to achieve what is, IMO, an improved graphical scenario compared to default settings, and in some cases compared to whatever the highest setting is. All in an effort to maintain clear visual fidelity and high framerates, i.e. over 60 FPS at a minimum. So far so good, and the 1660 Super has done so well - frankly it is doing very well even now in Hogwarts Legacy and Cyberpunk. Coincidentally, Hogwarts' recommended card IS the 1660 Super; I think that's odd, but whatever - usually it's a more powerful, higher-end card for the recommended spec. The 3060 is the next logical choice, not just for me but for a lot of people who would like 12 GB of VRAM and quite a bit more shading power without spending an insane amount, because any higher and there are certain system requirements a lot of people won't meet, like power draw and size restrictions. And yeah, AM5 and DDR5 and PCIe 5 are upon us, but really there is a lot of headroom left for people on the previous-gen platform. Here is my current system:

AMD 5600X
16 GB 3600 XMP
Gigabyte X570
WD SN770 NVMe 1 TB
EVGA 1660 Super OC
Corsair RM650 PSU

This would be the final upgrade to the system - unless I wanted to double the system RAM, which wouldn't really make a difference - but other than that it would be the last thing, and that 3060 would make this system viable for, at most, another 5 years. Instead of building a whole new platform for a couple grand (because it would be kik ash), spend around 300 to 400 now on a card.

P.S. I forgot to mention I play at 1440p only. Since I got my latest monitor, a 27" LG HDR G-Sync 1440p, it has been the mainstay, and really, who would want to go back? The thing is, it's key to know exactly which settings in the game to choose to keep the framerates above the almighty 60 FPS. For instance, in Hogwarts Legacy, if you turn the effects setting to High or Ultra, it totally changes the reflection effect on just about everything and kills the framerate - but here's the thing, it looks better on Medium than it does on the higher settings. Why? Because the higher settings use a completely different reflection effect that doesn't even look as good but uses more power. Most likely it's ray-marched reflections for High and Ultra and a cubemap for Medium and lower, but a very high quality cubemap. So these "compromises", lol, are what I'm talking about. It's very specific and time-consuming for some games, but it's worth the effort, because like I say, in the end it often results in something even better looking than the so-called Ultra setting. And finally, let me say this: this is the enthusiast aspect of all this computer stuff, this is what it's all about - tinkering and testing and retesting to find the OPTIMAL settings for your PC. I love the word optimal, btw. OPTIMIZE!
Posted on Reply
#22
N/A
It happened before with the GTX 960 and GTX 760, where we had the same step back - 192 to 112 GB/s - but the performance was the same. So clearly, if you're looking for any improvement, this isn't for you.

After this weak generation comes GDDR7 with 576 GB/s over the same 128-bit bus. But they may decide to stick with GDDR6 for the low end - worst case, 360 GB/s.

Things like DLSS and neural compression just add latency, but the L2 cache is a good thing that probably keeps the complete frame on-die instead of in memory, so there is less burden on the interface overall.
Posted on Reply
#23
Pooch
N/AIt happened before with the GTX 960 and GTX 760, where we had the same step back - 192 to 112 GB/s - but the performance was the same. So clearly, if you're looking for any improvement, this isn't for you.

After this weak generation comes GDDR7 with 576 GB/s over the same 128-bit bus. But they may decide to stick with GDDR6 for the low end - worst case, 360 GB/s.

Things like DLSS and neural compression just add latency, but the L2 cache is a good thing that probably keeps the complete frame on-die instead of in memory, so there is less burden on the interface overall.
I understand and agree with everything you're saying; the only point I'm trying to make is that there is a prudent upgrade path for those who don't have as much cash, and I think I've discovered it. The 3060 is a generous upgrade over the 1660 Super; with the right settings and supporting system (my current system) I expect to double my framerates (in some games). I'm expecting at least a 25 FPS increase in all major titles - Flight Sim, Cyberpunk, Hogwarts. I also never use DSR/FSR upscaling of any kind, just straight with no V-Sync, because the G-Sync monitor handles it all, at 1440p. It's a fast IPS.
Posted on Reply
#24
Dr. Dro
Lew ZealandWhat matters is FPS for your money. Paper specs are no way to buy a video card, otherwise you'd expect Nvidia cards to be 2-3x the speed of AMD ones based on core counts.

4090 - 16384 cores
7900XT - 5376 cores

205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.
Isn't Ada a dual issue design just like Ampere? Which means that it's actually 8192 units that double up FP32 workloads (for example, 3090 is marketed as 10496 CUDA cores but actually contains 5248 shader units spread across 82 SM blocks/compute cores out of the 84 present in a full die such as 3090 Ti)?
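A small sketch of that marketed-vs-physical counting, assuming the usual description of Ampere's SM (128 FP32 lanes per SM, half of them shared FP32/INT32 lanes) - treat it as an illustration of the framing above rather than an authoritative die breakdown:

```python
# RTX 3090: 82 of GA102's 84 SMs enabled; marketed CUDA cores = SMs x 128 FP32 lanes.
sms_enabled = 82
fp32_lanes_per_sm = 128
marketed_cuda_cores = sms_enabled * fp32_lanes_per_sm
print(marketed_cuda_cores)        # 10496 "CUDA cores" as marketed
print(marketed_cuda_cores // 2)   # 5248 units in the "doubled-up" framing above
```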

In any case, while the physical die area of Navi 31 is smaller in comparison to AD102, it doesn't contain the humongous L2 cache its competitor has, nor any of its specialty features such as tensor processing cores, which should bring the die area that's dedicated to shader processing on Navi 31 significantly closer to the amount of area used in AD102 for the same purpose. I genuinely believe AMD has designed it targeting AD102, from their earliest narrative and claimed performance gains over the 6950 XT... except that they failed so miserably to achieve that I actually pity them.

I don't understand why the RX 7900 XTX turned out to be such a disaster, it must contain very severe and potentially unfixable hardware errata, because if you look at it objectively, it's architected really well. I am no GPU engineer, but I don't really see any major problem with the way RDNA 3 is architected and how its resources are managed and positioned internally. Even its programmability seems to be at least as flexible as the others. At a first glance, it seems like a really thought out architecture from programmability to execution, but it just doesn't pull its weight when put next to Ada. I refuse to believe AMD's drivers are that bad, not after witnessing first hand the insane amount of really hard work the Radeon team put on it, even if I sound somewhat unappreciative of said efforts sometimes (but trust me, I am not). It's a really good read and even for a layman you should be able to more or less end up with an understanding of the hardware's inner workings:

www.amd.com/system/files/TechDocs/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf

Despite my often harsh tone towards AMD, I really think they have the potential to reverse this and make an excellent RDNA 4 that will be competitive with Blackwell. Regardless, I don't think I will end up with a 5090 in my system if NVIDIA keeps their pricing scheme the way it is.
Posted on Reply
#25
sLowEnd
Dr. DroIsn't Ada a dual issue design just like Ampere? Which means that it's actually 8192 units that double up FP32 workloads (for example, 3090 is marketed as 10496 CUDA cores but actually contains 5248 shader units spread across 82 SM blocks/compute cores out of the 84 present in a full die such as 3090 Ti)?

In any case, while the physical die area of Navi 31 is smaller in comparison to AD102, it doesn't contain the humongous L2 cache its competitor has, nor any of its specialty features such as tensor processing cores, which should bring the die area that's dedicated to shader processing on Navi 31 significantly closer to the amount of area used in AD102 for the same purpose. I genuinely believe AMD has designed it targeting AD102, from their earliest narrative and claimed performance gains over the 6950 XT... except that they failed so miserably to achieve that I actually pity them.

I don't understand why the RX 7900 XTX turned out to be such a disaster, it must contain very severe and potentially unfixable hardware errata, because if you look at it objectively, it's architected really well. I am no GPU engineer, but I don't really see any major problem with the way RDNA 3 is architected and how its resources are managed and positioned internally. Even its programmability seems to be at least as flexible as the others. At a first glance, it seems like a really thought out architecture from programmability to execution, but it just doesn't pull its weight when put next to Ada. I refuse to believe AMD's drivers are that bad, not after witnessing first hand the insane amount of really hard work the Radeon team put on it, even if I sound somewhat unappreciative of said efforts sometimes (but trust me, I am not). It's a really good read and even for a layman you should be able to more or less end up with an understanding of the hardware's inner workings:

www.amd.com/system/files/TechDocs/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf

Despite my often harsh tone towards AMD, I really think they have the potential to reverse this and make an excellent RDNA 4 that will be competitive with Blackwell. Regardless, I don't think I will end up with a 5090 in my system if NVIDIA keeps their pricing scheme the way it is.
Dunno. Wouldn't be the first time something turned out unexpectedly bad. The 2900 XT quickly pops up in my mind. The follow-up HD 3870 that came out a few months later had 30% less memory bandwidth and the same number of shaders, ROPs, and TMUs, but performed identically and drew less power.
www.techpowerup.com/review/sapphire-hd-3870/21.html
Posted on Reply