Friday, March 26th 2021
NVIDIA GeForce RTX 3070 Ti Could be Offered in Both 8 GB and 16 GB SKUs
Uniko's Hardware, a regular source of leaks and information on upcoming hardware, has put forward that NVIDIA could be looking to introduce two versions of its upcoming RTX 3070 Ti graphics card. The difference would come down to whether the card carries dual-sided GDDR6X memory, giving it a capacity of either 8 GB (the same as the RTX 3070) or 16 GB, running at 19 Gbps.
The intention with the RTX 3070 Ti is to bring the fight back to AMD, which released a pretty strong offering in the form of the RX 6800 and RX 6800 XT graphics cards - both featuring 16 GB of GDDR6 memory. NVIDIA is looking to improve its market position against AMD by offering both the RTX 3070 and the RTX 3070 Ti. It could also be an occasion for NVIDIA to release another cryptomining-limited graphics card - and this time get it right by not shipping a driver that unlocks that particular restriction. The card is rumored to launch in May, though we've already seen an unprecedented number of delays for NVIDIA's new SKUs - a sign that there is indeed a problem upstream in the semiconductor supply chain.
Source:
Videocardz
79 Comments on NVIDIA GeForce RTX 3070 Ti Could be Offered in Both 8 GB and 16 GB SKUs
And that 4-year life span... I used to upgrade about every other year (because I only bought mid-range and could get most of the cost back). And I know there are those who upgrade with each and every generation just because. But I don't have the numbers to know where most people stand. Judging by Steam numbers, you are probably right.
Also, if you are going to step up to 4K, then plan on upgrading cards often.
Just pick what works for the screen; if ya pay attention to what is actually "used" versus how much gets allocated, you won't have any issues. When a utility says your 8 GB card is using 5.8 GB, and then you insert the 4 GB version and get the same fps, same user experience, same screen quality as the 8 GB variant... no, it is not ***using*** 5.8 GB.
1080p ==> 3 - 4 GB
1440p ==> 6 - 8 GB
2160p ==> 12 - 16 GB
That's not proof, it's taking advantage of uneducated consumers. It's been the same claim since the nVidia 500 series... test 2 variants of a card at the resolution it's intended for and at the one above it, and there's no difference in user experience.
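For what it's worth, here's a minimal sketch of how those "usage" counters can be read programmatically - assuming the third-party pynvml NVML bindings (pip install nvidia-ml-py) and an NVIDIA driver. Like the utilities discussed above, NVML reports what has been allocated on the card, not the working set a game is actively touching:

```python
# Minimal sketch: query VRAM the way monitoring utilities do, via NVML.
# Assumes the third-party pynvml bindings and an NVIDIA driver.
# "used" here means reserved/allocated on the card, NOT what is
# actively being touched by the running application.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)    # totals reported in bytes
print(f"total: {mem.total / 2**30:5.1f} GiB")
print(f"used : {mem.used  / 2**30:5.1f} GiB (allocated, not necessarily touched)")

# Per-process breakdown -- still allocations, not live usage:
for p in pynvml.nvmlDeviceGetGraphicsRunningProcesses(gpu):
    print(p.pid, p.usedGpuMemory)            # may be None without privileges
pynvml.nvmlShutdown()
```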
This was one of the first "exposés" on the myth... but it's been done for the 600, 900, and 1000 series also:
alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/
Geez, imagine if car companies, women's fashion, textbooks, software, etc. started pulling that stuff. Allocated and used are two very, very different things. Think of your MasterCard... let's say you have a $5,000 limit and $500 charged. How much of your credit line are you using? ... $500.
Now when you apply for a car loan... what will the credit report show as allocated against your potential line of credit? MasterCard reports $5,000. You are not ***using*** it at the time... but the $5,000 has been ***allocated*** to be used whenever you want. Same principle here, and nvidia has explained this on several occasions.
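To put the analogy in code - a hypothetical sketch assuming a CUDA build of PyTorch, whose caching allocator conveniently exposes both numbers: after freeing a tensor, the memory is no longer "used", but it stays "allocated" against the card, which is what most utilities report.

```python
# Credit-card demo with PyTorch's CUDA caching allocator (assumes a
# CUDA build of PyTorch): memory_reserved() is the "credit limit" the
# driver sees as taken; memory_allocated() is what's actually charged.
import torch

x = torch.empty(1024, 1024, 256, device="cuda")   # ~1 GiB of float32
del x                                             # tensor freed...
GiB = 2**30
print(f"allocated: {torch.cuda.memory_allocated() / GiB:.2f} GiB")  # ~0.00 -- in use now
print(f"reserved : {torch.cuda.memory_reserved()  / GiB:.2f} GiB")  # ~1.00 -- still held
```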
See above link: they tried to install Max Payne with the 2 GB card and the installer refused because of insufficient VRAM at that resolution. So they installed the 4 GB card, tested it... then put the 2 GB card back in... same fps, same graphics quality, same user experience.
"We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: GFX Card utilities "all report the amount of memory requested by the GPU, not the actual memory usage. Cards will larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available."
Look at the TPU reviews. The 3 GB and 6 GB 1060 reviews here on TPU are a great example. The 6 GB card has 11% more shaders than the 3 GB card, so it has an inherent speed advantage aside from VRAM. And when we look at the relative speed of the cards at 1080p... the 6 GB card has a 6% speed advantage. So if this VRAM claim were true and 3 GB were wholly inadequate for 1080p... then at 1440p we should see a critical drop in performance with the 3 GB card... but there isn't one; it's the same 6%.
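A quick back-of-envelope check of that reasoning, as a sketch using only the percentages quoted above (illustrative, not re-measured):

```python
# Sanity-check the GTX 1060 3 GB vs 6 GB argument with the numbers
# quoted in the comment above (illustrative only, not re-measured).
shader_advantage = 0.11   # 6 GB die has ~11% more shaders
gap_1080p = 0.06          # observed speed advantage at 1080p
gap_1440p = 0.06          # observed speed advantage at 1440p

# A VRAM wall at 1440p would show up as the gap widening sharply
# beyond the shader-count difference; here it doesn't move at all.
widening = gap_1440p - gap_1080p
print(f"gap widened by {widening:+.0%} ->",
      "VRAM-limited" if widening > 0.05 else "shader-limited")
```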
Now that's not to say that no game will ever have issues. Poor console ports, for example, have been one of the most common exceptions. But for the most part... by the time you run out of VRAM, you will have run out of playable frame rates anyway. It's not really relevant that you get 33% more fps when the result is 15 vs 20 fps; it's unplayable either way.
One thing I like about this abundance of options is seeing W1zzard take this issue on and get a data set for the 3000 series, and see if it comes out the same way as the 500, 600, 700, 900 and 1000 series.
Or maybe nVidia just wants to compete with that too.
Are you saying it's better we get a single capacity per SKU 'so that customers understand'?
This is not the same thing as just using the bigger number for bigger sales. There is a price gap and the net performance is similar, but there are use cases for higher capacities and clearly there is also demand for it. Back in the day, when Nvidia played that game, there was also, for example, SLI with shared memory pools.
Let's make a distinction between tech aficionados and what we consider normal... people simply being oblivious to what they're spending money on is not normal. Some go through life saying 'ignorance is bliss', and if they do trip over something, it's always the fault of everyone else. Facilitating that kind of thing is bad. Really bad. And we're knee-deep in it, the US especially, because it's so easy to sue and succeed.
Exactly. Nvidia is pretty clear about its type-numbering and how the stack is organized, and also about how much VRAM is on each GPU.
The one time they dropped the ball, they missed on details about a mere 0.5 GB (the GTX 970's segmented memory); the box clearly said 4 GB, though. That also doesn't square with the idea that memory capacity is an unknown concept to buyers.
What's worse are the OEM versions of similar SKUs that are actually quite different. I remember the Kepler versions well, but also their Maxwell pilot project in the 7xx series, and old architectures running on mobile chips. THAT is something to get pissed about, because it is truly misleading - same as rebranding half your stack and placing a few new things above it.
Though I'd rather have crappy names on great products than the other way around.
Because the main conclusion from that should be that the 3060 should not need more than half of those 12 GB.
Or rather, not being able to call the 3080 a 3080 Ti (it would be overpriced crap, like the 2080 Ti, so 20 GB of expensive VRAM would not be a problem).
The leaks were never about two 3070 Ti variants - they were about two variants of the GA104 die with 6144 CUDA cores. Then leakers assumed [incorrectly] that they were both gaming cards. But the 16 GB variant of that die has now been released... it's on nVidia's website lol.
It's insane to think nVidia would have any reason to sell a 16 GB gaming card when they could use the scarce VRAM to sell more expensive 12 GB 3080 Tis, which will beat AMD's best cards.