Friday, May 5th 2023

Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

More leaks are emerging from Russia regarding NVIDIA's not-yet-officially-confirmed GeForce RTX 4060 Ti GPU family. Two days ago, Marvel Distribution (RU) released details of four upcoming Palit custom-design cards, again confirming the standard RTX 4060 Ti configuration of 8 GB of VRAM on a 128-bit memory bus. Earlier today, hardware tipster momomo_us tracked down some more pre-launch information (rumors point to a late-May release), courtesy of another Russian e-retailer (extremecomp.ru). The four Palit Dual and StormX custom cards from the previous leak appear again, but this new listing provides a few extra details.

Palit's four card offerings share the same basic memory specification of 18 Gbps GDDR6, pointing to a maximum theoretical bandwidth of 288 GB/s across the GPU's confirmed 8 GB, 128-bit memory interface. The standard Dual variant appears to have a stock clock speed of 2310 MHz, the StormX and StormX OC models are faster at 2535 MHz and 2670 MHz (respectively), and the Dual OC leads the group at 2685 MHz. The TPU database's (speculative) entry for the reference NVIDIA GeForce RTX 4060 Ti lists a base clock of 2310 MHz and a boost clock of 2535 MHz - the former aligns with the Palit Dual model's normal mode of operation (its boost clock is unknown), and the latter lines up with the standard StormX variant's (presumed) boost mode. The leaked listing therefore likely shows only the boost clock speeds for Palit's StormX, StormX OC and Dual OC cards.
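The bandwidth figure quoted above follows directly from the per-pin data rate and bus width. A minimal sketch of that arithmetic (the function name is illustrative, not from any vendor tool):

```python
# Sketch: theoretical GDDR6 bandwidth math behind the figures above.
# Peak bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def gddr6_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# RTX 4060 Ti (leaked): 18 Gbps effective data rate on a 128-bit bus
print(gddr6_bandwidth_gbs(18, 128))  # -> 288.0 GB/s
```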
Sources: momomo_us Photo Tweet, Wccftech

29 Comments on Palit GeForce RTX 4060 Ti GPU Specs Leaked - Boost Clocks of Up to 2685 MHz & 18 Gbps GDDR6 Memory

#26
Blitzkuchen
great, a 4060 Ti with 128-bit and 8 GB (like the 3050) for $499 :laugh:
#27
Pooch
sLowEnd: Could you repeat what you said, in English this time?
For you, no.

Knowing your attitude now, I wouldn't piss on you if you were on fire. By the way, it's "would", not "could" - for that proper English you require.
#28
Dr. Dro
Oof, things got spicy in here :eek: Calm down, boys, we're all mates here :)
sLowEnd: Dunno. Wouldn't be the first time something turned out unexpectedly bad. The 2900 XT quickly pops up in my mind. The follow-up HD 3870 that came out a few months later had 30% less memory bandwidth and the same amount of shaders, ROPs, and TMUs, but performed identically and drew less power.
www.techpowerup.com/review/sapphire-hd-3870/21.html
It seems so. I have no idea, really. NVIDIA had something similar with the GTX 480 and 580, the 580 releasing only some nine months later. It's not that the 480 performed inadequately, but rather that its GF100 core's power and thermals were horrible, a problem they rectified with the GF110 chip used in the latter.

Despite it not being at all the kind of product I'm usually interested in buying, I'm eagerly awaiting the reviews to see how the 7600 XT (N33) scales vs. the 6600 XT (N23). If the monolithic design indeed shows large gains over the previous generation, I would guess something related to the chiplet architecture is the root cause of the 7900 XTX's underwhelming performance. Interconnect bandwidth, perhaps? I suppose we will know sooner or later.
#29
sLowEnd
Dr. Dro: Oof, things got spicy in here :eek: calm down boys we're all mates here :)
No, I do love a good argument, but I'm sure the mods wouldn't like the direction I'd take this one. I'll leave it here.