Friday, May 5th 2023
Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory
More leaks are emerging from Russia regarding NVIDIA's not-yet-officially-confirmed RTX 4060 Ti GPU family - two days ago Marvel Distribution (RU) released details of four upcoming Palit custom design cards, again confirming the standard RTX 4060 Ti configuration of 8 GB VRAM (plus a 128-bit memory bus). Earlier today hardware tipster momomo_us managed to track down some more pre-launch info (rumors point to a late May launch), courtesy of another Russian e-retailer (extremecomp.ru). The four Palit Dual and StormX custom cards from the previous leak have been spotted again, but this new listing provides a few extra details.
Palit's four card offerings share the same basic memory specification of 18 Gbps GDDR6, pointing to a maximum theoretical bandwidth of 288 GB/s - derived from the GPU's confirmed 128-bit memory interface (paired with 8 GB of VRAM). The standard Dual variant appears to have a stock clock speed of 2310 MHz, the StormX and StormX OC models are faster at 2535 MHz and 2670 MHz (respectively), and the Dual OC is the group leader with 2685 MHz. The TPU database's (speculative) entry for the reference NVIDIA GeForce RTX 4060 Ti has the base clock listed as 2310 MHz and the boost clock at 2535 MHz - so the former aligns with the Palit Dual model's normal mode of operation (its boost clock is unknown), and the latter lines up with the standard StormX variant's (presumed) boost mode. The leaked information therefore likely shows only the boost clock speeds for Palit's StormX, StormX OC and Dual OC cards.
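For readers who want to double-check the bandwidth figure, here is a minimal sketch of the arithmetic (per-pin data rate times bus width, divided by eight); the inputs are the leaked/rumored values, not official specifications:

```python
# Quick sanity check of the leaked bandwidth figure. Inputs are the
# leaked/rumored RTX 4060 Ti numbers, not official specifications.

def peak_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    # GB/s = (Gbps per pin * number of pins) / 8 bits per byte
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(peak_bandwidth_gb_s(18, 128))  # 288.0 -> matches the 288 GB/s in the listing
```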
Sources:
momomo_us Photo Tweet, Wccftech
29 Comments on Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory
No way, :laugh:
KEKW
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s
Good times. :shadedshu:
Pretty much the same. The 4060 Ti is the equivalent of 6144 / 72. The problem here is the ROPs: 48 ROPs versus 96. That is absolute crap.
Nowhere to run now.
The GTX 760 has 80 GB/s more memory bandwidth than the GTX 960. Guess which card performs 20% worse than the other?
The RX 480 matches the R9 390 with a 128 GB/s memory bandwidth deficit.
Last generation, the 6950 XT (512 GB/s) vs. the 3090 Ti (1013 GB/s) already showed how AMD's design desperately relies on its cache hit rate, with performance dropping drastically in software that doesn't have a high hit rate, or in situations that demand a lot of raw memory bandwidth (ultra high quality textures and/or resolutions above 1440p, and most notably ray tracing).
The end result is that AMD's design was mediocre for 4K/Ultra gaming and had poor ray tracing performance, something the RTX 3090 and its refresh are perfectly capable of doing well. On the flip side, however, as long as you didn't enable ray tracing, AMD's design was faster at lower resolutions, especially 1080p, so in the end these turned out to be great cards. The RTX 3090 tends to fall behind even the vanilla RX 6800 in pure raster workloads at 1080p in games that are friendly to AMD's architecture.
All in all, the conclusion you can draw from this is the same: the RTX 4060 Ti and the RX 7600 XT are both strictly designed for 1080p gaming, and I suspect their performance is going to fall off a cliff once they're run at 1440p, while 4K will be completely unworkable.
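To put rough numbers on the cache-hit-rate argument above, here is a deliberately simplified first-order model: only cache misses go out to DRAM, so the bandwidth the shaders effectively see scales with 1 / (1 - hit rate). The hit rates below are illustrative guesses, not measured values.

```python
# Simplified model: traffic served from the on-die cache never touches DRAM,
# so effective bandwidth is roughly raw_bandwidth / (1 - hit_rate).
# The hit rates below are illustrative guesses, not measured values.

def effective_bandwidth(raw_gb_s: float, hit_rate: float) -> float:
    return raw_gb_s / (1.0 - hit_rate)

raw = 288.0  # rumored RTX 4060 Ti raw bandwidth in GB/s
for hit_rate in (0.3, 0.5, 0.7):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(raw, hit_rate):.0f} GB/s effective")

# Larger working sets (4K textures, ray tracing) lower the hit rate, and the
# effective figure collapses back toward the raw 288 GB/s.
```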
3060 Ti: 448.0 GB/s
4060 Ti: 288.0 GB/s <--- going backwards, like line dancing: two steps forward, one step back.
And I think the large on-die caches are clouding this issue. It sounds gimmicky, a way for them to make you think that low memory bandwidth is OK.
Selling them to Russia via China would only do them good, without caring about anything else.
4090 - 16384 cores
7900XT - 5376 cores
205% more cores but only 46% faster. Even in RT it's only 80% faster. But then it doesn't cost 3x the AMD card, only 2x.
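Just to make the arithmetic behind those percentages explicit, a quick sketch; the 46% raster figure is the comment's claim, and core counts aren't directly comparable across architectures:

```python
# Core-count ratio vs the claimed performance gap. The 46% figure is the
# commenter's claim; core counts across vendors are not directly comparable.

cores_rtx_4090   = 16_384  # CUDA cores
cores_rx_7900_xt = 5_376   # stream processors

ratio = cores_rtx_4090 / cores_rx_7900_xt
print(f"{(ratio - 1) * 100:.0f}% more cores")            # ~205%
print(f"speedup relative to core ratio: {1.46 / ratio:.2f}")  # ~0.48, i.e. far from linear scaling
```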
AMD 5600X
16 GB 3600 XMP
Gigabyte X570
WD SN770 NVMe 1 TB
EVGA 1660 Super OC
Corsair RM650 PSU
This would be the final upgrade to the system; unless I wanted to double the system RAM (which wouldn't really make a difference), it would be the last thing, and that 3060 would make this system viable for another five years at most. Instead of building a whole new platform for a couple grand (which would be kick ass), spend around $300 to $400 now on a card.
P.S. I forgot to mention I play at 1440p only. Since I got my latest monitor, a 27" LG HDR G-Sync 1440p, it has been the mainstay, and really, who would want to go back? The thing is, it's key to know exactly which settings in a game to choose so the framerate stays above the almighty 60 FPS. For instance, in Hogwarts Legacy, if you turn the effects setting to High or Ultra, it totally changes the reflection effect on just about everything and kills the framerate. But here's the thing: it looks better on Medium than it does on the higher settings. Why? Because the higher settings use a completely different reflection effect that doesn't even look as good but uses more power. Most likely it's ray-marched reflections for High and Ultra and a cubemap for Medium and lower, but a very high quality cubemap.

So these "compromises", lol, are what I'm talking about. It's very specific and time consuming for some games, but it's worth the effort because, like I say, in the end it often produces something even better looking than the so-called Ultra setting. And finally let me say this: this is the enthusiast aspect of all this computer stuff, this is what it's all about, tinkering and testing and retesting to find the OPTIMAL settings for your PC. I love the word optimal, btw. OPTIMIZE!
After this weak generation comes GDDR7 with 576 GB/s over the same 128-bit bus. But they may decide to stick with GDDR6 for the low end; worst case, 360 GB/s.
Things like DLSS and neural compression just add latency, but the L2 cache is a good thing that probably keeps the complete frame on-die instead of in memory, so there is less burden on the interface overall.
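As a back-of-the-envelope check on the "complete frame on-die" idea, here is a sketch assuming the widely reported 32 MB L2 for the 4060 Ti and only a basic color + depth target per frame; real frames use many more intermediate buffers, so treat this as a lower bound on footprint:

```python
# Does a single color (RGBA8) + 32-bit depth render target fit in a 32 MB L2?
# The 32 MB figure is the widely reported L2 size for the RTX 4060 Ti (AD106);
# real frames use many more intermediate buffers, so this is a lower bound.

L2_BYTES = 32 * 1024 * 1024

def frame_footprint_bytes(width: int, height: int, bytes_per_pixel: int = 8) -> int:
    # 4 bytes color + 4 bytes depth per pixel
    return width * height * bytes_per_pixel

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    size = frame_footprint_bytes(w, h)
    print(f"{name}: {size / 2**20:.1f} MiB, fits in L2: {size <= L2_BYTES}")
```

By this crude measure a 1080p (and even a 1440p) color + depth pair fits on-die, while 4K clearly does not, which lines up with the resolution scaling concerns raised earlier in the thread.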
In any case, while the physical die area of Navi 31 is smaller than AD102's, it doesn't contain the humongous L2 cache its competitor has, nor any of its specialty features such as Tensor cores, which should bring the die area dedicated to shader processing on Navi 31 significantly closer to the amount of area used in AD102 for the same purpose. I genuinely believe AMD designed it targeting AD102, going by their earliest narrative and claimed performance gains over the 6950 XT... except that they failed so miserably to achieve it that I actually pity them.
I don't understand why the RX 7900 XTX turned out to be such a disaster; it must contain very severe and potentially unfixable hardware errata, because if you look at it objectively, it's architected really well. I am no GPU engineer, but I don't really see any major problem with the way RDNA 3 is architected and how its resources are managed and positioned internally. Even its programmability seems to be at least as flexible as the others'. At first glance it seems like a really well thought-out architecture, from programmability to execution, but it just doesn't pull its weight when put next to Ada. I refuse to believe AMD's drivers are that bad, not after witnessing first hand the insane amount of really hard work the Radeon team put into them, even if I sound somewhat unappreciative of those efforts sometimes (trust me, I am not). The ISA documentation linked below is a really good read, and even as a layman you should be able to more or less come away with an understanding of the hardware's inner workings:
www.amd.com/system/files/TechDocs/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf
Despite my often harsh tone towards AMD, I really think they have the potential to reverse this and make an excellent RDNA 4 that will be competitive with Blackwell. Regardless, I don't think I will end up with a 5090 in my system if NVIDIA keeps their pricing scheme the way it is.
www.techpowerup.com/review/sapphire-hd-3870/21.html