Tuesday, May 9th 2023
NVIDIA GeForce RTX 4060 Ti Available as 8 GB and 16 GB, This Month. RTX 4060 in July
In what could explain the greater attention leaky taps have paid to the GeForce RTX 4060 Ti compared to its sibling, the RTX 4060, NVIDIA is preparing a staggered launch for its RTX 4060 series. We're also learning that there are as many as three SKUs in the series: the RTX 4060 Ti 8 GB, the RTX 4060 Ti 16 GB, and the RTX 4060. All three will be announced later this month; however, only the RTX 4060 Ti 8 GB will be available to purchase at that time. The RTX 4060 Ti 16 GB and RTX 4060 will be available from July.
At this point, little is known about what separates the 8 GB and 16 GB variants of the RTX 4060 Ti besides memory size. The RTX 4060 Ti 8 GB is rumored to feature 34 out of the 36 streaming multiprocessors (SM) physically present on the 5 nm "AD106" silicon, which gives NVIDIA some theoretical headroom to enable a few more shaders. Those 34 SM work out to 4,352 CUDA cores, while a fully unlocked AD106 has 4,608. The RTX 4060 is a significantly different SKU based on a maxed-out "AD107" silicon, with 30 SM, or 3,840 CUDA cores, although it should be possible for some RTX 4060 cards to be based on a heavily cut-down AD106.
Sources:
MEGAsizeGPU (Twitter), VideoCardz
120 Comments on NVIDIA GeForce RTX 4060 Ti Available as 8 GB and 16 GB, This Month. RTX 4060 in July
The main issue was that Nvidia had to use overpriced memory (GDDR6X), since their GPUs needed all that bandwidth to perform. Since it was pricey, the amount of memory on Nvidia GPUs stayed stable for a few years and people got used to it. The other issue is the relationship between memory size and bus size.
Memory chips have a 32-bit interface. So you take your bus width, divide by 32 to get the chip count, and multiply by the chip capacity to get your total VRAM.
What could help is non-binary chip sizes: 3 GB chips could give 12 GB to the 4060 on its rumored 128-bit bus.
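To make that arithmetic concrete, here's a quick Python sketch (the helper name is mine; it assumes standard 32-bit GDDR6/GDDR6X chips and the rumored 128-bit bus):

```python
# Quick sketch of the arithmetic above (helper name is mine): GDDR6/GDDR6X
# chips have a 32-bit interface, so chip count = bus width / 32, and
# total VRAM = chip count * per-chip density.
def total_vram_gb(bus_width_bits: int, chip_density_gb: int) -> int:
    chips = bus_width_bits // 32
    return chips * chip_density_gb

print(total_vram_gb(128, 2))  # 8  -> 128-bit bus with today's 2 GB chips
print(total_vram_gb(128, 3))  # 12 -> hypothetical non-binary 3 GB chips
```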
Most GPUs could use more memory and would run just fine. VRAM is like RAM: you always want to have enough, and it's better to have free space than to be tight.
The GPU can always swap data in and out, but that uses precious time and bandwidth that you could spend on something else. And even with high main-system memory bandwidth and PCI-E 4.0 x16, it still takes time to load what you need. This is why you get stutter when you run out of VRAM.
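A rough back-of-the-envelope sketch of why swapping hurts, assuming PCI-E 4.0 x16's theoretical ~32 GB/s per direction (the numbers are mine, and real-world throughput is lower):

```python
# Back-of-the-envelope sketch: PCI-E 4.0 x16 tops out around 32 GB/s per
# direction, so swapping assets mid-game costs whole frames' worth of time
# at 60 FPS (one frame is ~16.7 ms).
PCIE4_X16_GBPS = 32.0  # theoretical peak; real-world throughput is lower

def swap_time_ms(size_gb: float) -> float:
    return size_gb / PCIE4_X16_GBPS * 1000.0

for size_gb in (0.25, 0.5, 1.0):
    print(f"{size_gb} GB -> ~{swap_time_ms(size_gb):.1f} ms")
# 0.25 GB -> ~7.8 ms, 0.5 GB -> ~15.6 ms, 1.0 GB -> ~31.2 ms
```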
In the past, VRAM was mostly used to store the frame buffer (rendered frames waiting to be displayed) and textures. This was the main reason why someone would say you need X amount of memory to run at a given resolution; back then, the frame buffer used a significant portion of the VRAM.
Today, it must store far more. Almost every shader you run generates data that will need to be reused at some point. Most games now also use various buffers to exchange data with the CPU, and data from previous frames has to be kept around for all the temporal effects. That is one of the reasons why memory requirements across resolutions are much closer than the raw pixel counts would suggest.
GPUs get more powerful as they compute more stuff, and that stuff needs to be stored somewhere. I wouldn't buy the 8 GB variant; I find that the 6 GB 3060 was already too low on memory. The 16 GB variant might cost more, but it will last way longer.
Without the desktop PC's IGP enabled, Windows 10/11's DWM and non-gaming apps will consume VRAM, so the game doesn't get the full 12 GB of VRAM to itself. Check Windows Task Manager's dedicated GPU memory usage before you run a game.
Four 2 GB 32-bit chips give 8 GB. For 16 GB on a 128-bit bus, you would need four 4 GB-density chips, which don't exist. The clamshell configuration instead doubles the chip count.
RTX 3060 12GB used a 192-bit bus with six 2GB density chips.
NVIDIA's 48 GB VRAM cards are in a clamshell configuration which uses 2GB x 24 chips.
GDDR6W has a 4GB chip density.
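A quick sketch tying those configurations together (the helper is mine; it assumes 32-bit chips, with clamshell mounting chips on both sides of the board to double the count without widening the bus):

```python
# Sketch checking the configurations above: chips have a 32-bit interface,
# and a clamshell layout doubles the chip count on the same bus width.
def config(bus_bits: int, density_gb: int, clamshell: bool = False):
    chips = (bus_bits // 32) * (2 if clamshell else 1)
    return chips, chips * density_gb  # (chip count, total GB)

print(config(128, 2))                  # (4, 8)   -> RTX 4060 Ti 8 GB
print(config(128, 2, clamshell=True))  # (8, 16)  -> RTX 4060 Ti 16 GB
print(config(192, 2))                  # (6, 12)  -> RTX 3060 12 GB
print(config(384, 2, clamshell=True))  # (24, 48) -> 48 GB workstation cards
```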
If they copied AMD (who have been more generous for years, without senseless overstacking), their lineup would look like this:
RTX 4090 (24 GB) / RTX 4090 Ti (24 GB)
RTX 4080 (20 GB) / RTX 4080 Ti (20 GB)
RTX 4070 (16 GB) / RTX 4070 Ti (16 GB)
RTX 4060 (12 GB) / RTX 4060 Ti (12 GB)
RTX 4050 (10 GB) / RTX 4050 Ti (10 GB)
I know I'm a particular case, but unless AMD can come up with something really awesome, I'm not getting an AMD card, simply because I'm nauseated by the hordes of fanboys out there blaming Nvidia for every move they make.
RTX 4090 could potentially be 192 or 384 bit
RTX 4080 could potentially be 160 or 320 bit
RTX 4070 could potentially be 128 or 256 bit
RTX 4060 could potentially be 96 or 192 bit
For the 4050, well, it would need to be 160-bit minimum for 10 GB.
Unless they figure out how to produce non-binary-size memory chips (e.g. 3 GB instead of 2 GB per 32-bit chip).
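Reading each pair above as a single-sided versus clamshell build with standard 2 GB chips, here's a quick sketch of the math (my interpretation, not the poster's):

```python
# Sketch of the bus-width options above (my reading): the larger width in
# each pair is a normal single-sided build with 2 GB chips, the smaller one
# only works as a clamshell (double-sided) build.
def vram_gb(bus_bits: int, density_gb: int = 2, clamshell: bool = False) -> int:
    return (bus_bits // 32) * (2 if clamshell else 1) * density_gb

print(vram_gb(384), vram_gb(192, clamshell=True))  # 24 24 -> 4090 (24 GB)
print(vram_gb(320), vram_gb(160, clamshell=True))  # 20 20 -> 4080 (20 GB)
print(vram_gb(256), vram_gb(128, clamshell=True))  # 16 16 -> 4070 (16 GB)
print(vram_gb(192), vram_gb(96, clamshell=True))   # 12 12 -> 4060 (12 GB)
print(vram_gb(160))  # 10 -> minimum single-sided bus for a 10 GB 4050
# With hypothetical 3 GB chips, 12 GB would fit a plain 128-bit bus instead.
```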
Or do you want to tell me that Nvidia, the absolute market leader in GPUs, is incapable of developing GPUs with adequate bus sizes for adequate memory sizes? Come on.
Understanding is so '90s.
Releasing a 16 GB 60-series card while your more premium 4070 cards get 12 GB? They're going to be all over the place if they start releasing larger-capacity SKUs between the 4070/4080, with Super and Ti models at 16/20 GB VRAM capacities. Apparently Nvidia enjoys bullets in their feet.
How you don't find this comical is weird to me. I feel for their board partners. While I miss EVGA, the shenanigans Nvidia is currently partaking in are wild, and I can definitely see why someone wouldn't want to work with them as an AIB.