Friday, December 18th 2020
NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants
NVIDIA could take a similar approach to sub-segmenting the upcoming GeForce RTX 3060 as it did for the "Pascal" based GTX 1060, according to a report by Igor's Lab. Mr. Wallossek predicts a mid-January launch for the RTX 3060 series, possibly on the sidelines of the virtual CES. NVIDIA could develop two variants of the RTX 3060, one with 6 GB of memory and the other with 12 GB. Both the RTX 3060 6 GB and RTX 3060 12 GB probably feature a 192-bit wide memory interface. This would make the RTX 3060 series the spiritual successor to the GTX 1060 3 GB and GTX 1060 6 GB, although it remains to be seen whether the segmentation is limited to memory size or also extends to the chip's core configuration. The RTX 3060 series will likely go up against AMD's Radeon RX 6700 series, with the RX 6700 XT rumored to feature 12 GB of memory across a 192-bit wide memory interface.
Source: Igor's Lab
126 Comments on NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants
Yes, you can find cases where 6 GB is not enough at this performance tier, especially in the future.
But that said, more than 6 GB is not NEEDED. THAT'S A FACT. I think our definitions of NEED are different. If all it takes for the 6 GB to match the performance of the 12 GB is moving a slider one click to the left once in a while (or refraining from adding 8K textures), I would argue that 12 GB is not NEEDED, especially if it saves me $$.
Looking forward to having this wonderful conversation again when we have 8 GB and 16 GB RTX 3070 cards. The privilege of consumer choice is just too much for some to handle.
Oh, and quit trying to put the burden of proof on those of us who say 6 GB is enough. Key word is ENOUGH. When you say that more than 6 GB is needed, that is spreading misinformation to uninformed consumers (causing them unnecessary spending), so you need to back it up.
I have greatly enjoyed this excursion into fantasyland with you and people who agree with you, however, life is calling...
The card was about 20% cheaper, so it was a valid option for those trying to save a little cash. I would argue that 3 GB of VRAM back then was more of a hindrance than 6 GB is today, as most high-end GPUs had 8 GB back then, while cards like the 3070 still have 8 GB today. Yes, when Witcher 4 or GTA 6 comes out, we will be limited to 1080p/1440p medium on BOTH the 6 GB and 12 GB 3060 cards due to lack of performance.
Same goes for CP77 and the GTX 1060 today...
There are plenty of professionals or prosumers who actually need more VRAM, but gaming on this card won't, provided the initial reviews show it's fine.
Don't get me wrong, having options for pro users is good. My issue is with those who claim these cards are more "future proof" for gaming, which they are not. If VRAM ever does become a limitation, a reviewer will clearly notice it, and it will be clearly mentioned in reviews.
If it doesn't happen, it will likely be fine until it's obsolete.
I don't know the price of the RTX 3060, but let's assume it's $300. What do you think the price of the 12 GB model would be? $50 extra?
Considering the RTX 3060 Ti is $400 and will be much more powerful, it then becomes a question of why you would pay ~$50 extra for some "future proofing" when you could pay another ~$50 on top of that and get something which is very likely to age much better.
The 12 GB model would only offer some degree of "future proofing" if it also offered significantly faster VRAM. In real life, utilizing more VRAM will also require more of other resources. If it does have problems with current games, that is easily provable when it's released. If it doesn't, it will likely be fine until it's obsolete, as games can't use more VRAM within a single frame without either more bandwidth or a lower frame rate.
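To put rough numbers on that last point, here is a back-of-envelope sketch. The 192-bit / 15 Gbps GDDR6 figures are my own assumption (final RTX 3060 specs aren't confirmed), but the principle holds for any configuration:

```python
# Rough upper bound on how much data a GPU can even touch within one frame:
# total VRAM traffic per frame cannot exceed bandwidth / frame rate.
# Assumed figures: 192-bit bus at 15 Gbps GDDR6 -> roughly 360 GB/s.
def max_gb_touched_per_frame(bandwidth_gb_s: float, fps: float) -> float:
    """Upper bound (in GB) on VRAM data read or written during a single frame."""
    return bandwidth_gb_s / fps

print(max_gb_touched_per_frame(360, 60))  # ~6 GB per frame at 60 FPS
print(max_gb_touched_per_frame(360, 30))  # ~12 GB per frame at 30 FPS
```

In other words, actively working through much more than ~6 GB per frame on a bus like this already means dropping toward 30 FPS; capacity beyond that mostly holds assets streamed in ahead of time.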
You can certainly find a scenario where it runs out of VRAM, but that's irrelevant if it can't hit a stable 60 FPS or more at those settings. You don't buy a midrange card now to play games 3+ years from now in 4K as a slideshow at 15-30 FPS. :rolleyes:
The balance is shit, and if you don't want to believe or see that, stop whining, buy your card and see what happens next, eh? I'm not calling you an "amateur" for doing so; all I and others say is that you will be left with a card that struggles 2 years later, while the higher-VRAM versions are overkill and not cost effective. It's just not an optimal product... are you saying it is?
If all you play is reviews, 3DMark and canned benches, then yes, great cards, I guess...? If you do more, for which gaming PCs are usually well equipped (modding, tweaking, quality improvements over reductions), then saying that the stock 6, 8 or 10 GB is just fine is a strong act of cognitive dissonance. If not straight-up fanboi blindness, OR you don't care and think RTX or DLSS hold more value.
That's fine too. But stop selling the illusion that cards which have steadily lost relative VRAM to core power over the past two generations are still in a good place. They are really not. I never heard anyone say "give me a 6-7 GB 2080 Ti", did you? Because that's really what you've got here. I'm playing Cyberpunk right now at 3440x1440, on a 1080 with 8 GB and 6.5-7 GB in use. 50 FPS. Seems well balanced... very much unlike cards with double the core power and the SAME VRAM. Come again?! The baseline for what games can allocate and use is going up, as it did last console gen - and last gen the cards DOUBLED their capacities twice over; Maxwell with 4 and Pascal with 8 in the high end. Since then... we got less. You will find that a last-gen example is not the best indicator of next-gen performance, especially not a PC-first title. And even that one is keen to go over 6 GB at a mainstream resolution.
There is far too much guesswork and feelings involved here. This should be a rational conclusion based on data to show whether 6 GB is enough (for realistic scenarios) or if Nvidia screwed up.
I'm tired of all these opinionators on YouTube etc. claiming that x GB is too little, contrary to what benchmarks show.
You too should wait for serious reviews instead of concluding that a new card automatically needs more than the previous one.
It doesn't feel right, and so far, every time I followed that gut sense, I found proof later down the line. So... it's like religion ;) Believe as you will, I have learned too often that going by launch results is too limited a perspective.
Remember the Fury X with its 4 GB?! Now there is a nice example, as even its huge bandwidth didn't save its lack of capacity. Equal to a 6 GB 980 Ti at launch... decimated in the years after. Not in a few games... but in most.
Similar case: the Kepler 2 GB cards that lost their edge against the 3 GB 7970 a few years later; only the 3 GB 780 Ti remained relevant deep into the Maxwell gen. Call it fine wine... or maybe just 1 GB of VRAM?
ALL of those cards had core oomph to spare, but VRAM was missing. Quality reductions wouldn't change that - the balance just wasn't right.
BTW I never watch YouTubers (as in NEVER) and I have no social media accounts. I figured this out all on my own and have been saying it since the first day the 3080 came out with a horribly low 10 GB. It's not hard to compare that to what we had and conclude something is missing.
But gamers who play slideshows at less than 60 FPS clearly don't care about performance, and that scenario isn't very relevant for a discussion of whether more VRAM makes a specific card more future proof. Or have you just been looking for evidence to support your predetermined conclusion?
I used to have a similar opinion myself many years ago, but history keeps repeating itself; people buy extra VRAM for "future proofing", yet cards become obsolete for other reasons long before that happens.
Resource streaming has, if anything, made VRAM capacity less important and bandwidth more important in recent years. Bandwidth will never replace capacity, but you need both bandwidth and computational power if you want to utilize extra VRAM capacity.
The Fury X lacked computational power (and didn't it also have some latency issues??). The same goes for the Radeon VII, which was hailed as "future proof" with both memory capacity and bandwidth, yet lacked performance. There certainly is a balancing act, but this balance is revealed by benchmarks, not guesswork.
You're twisting things to fit your narrative here. High-refresh gamer here, but flexible enough to see that sub-60 is perfectly playable - IF the frame times are stable, and guess what... you want VRAM for that, and no swapping.
And no, I don't look for evidence, I just remember all of it very well, because it's not review- and Google-based but from actual, first-hand experience. Stuff that raised eyebrows, because like yourself, my basic idea is still that the majority of GPUs are indeed well balanced, especially higher up the stack. Ampere I view as a break from that norm, and Nvidia has a motive too, as they are using an inferior node and something has to give. The double-VRAM versions now only work to reinforce that thought.
The whole point of the card being supposedly more "future proof" with extra VRAM was to sustain higher performance; if it can't even reach 60 FPS, then what's the point? This is simply laughable. :rolleyes: Please keep in mind that I'm not saying people need >60 FPS to game here, just that the whole argument about "future proofing" is ultimately flawed if you need the VRAM to reach high performance but don't have the other means to get there. Arguments like that might convince some, but those of us who are experienced graphics programmers see straight through that BS. Firstly, people like you have continuously complained about Nvidia not being "future proof" at least since Maxwell, due to AMD often offering ~33-100% more VRAM for their counterparts. This argumentation is not new with Ampere, it comes around every single time. Yet history has shown Nvidia's cards have been pretty well balanced over the last 10 years. I'll wait for the evidence to prove Ampere is any less capable. Now you're approaching conspiracy territory.
You know very well that TSMC's 7nm capacity is maxed out, and Nvidia had to spread their orders to get enough capacity for launching consumer products (yet it's still not enough).
Double VRAM versions (assuming they arrive) will come because the market demands it, not because Nvidia has evil intentions.
For the pro-VRAM crowd: yeah, having more VRAM can't hurt, as in the worst case it will perform the same and you can sleep soundly knowing your "future" is safe. :D
For the anti-VRAM crowd: for most games at most settings, at least for now, it would be on par with the higher-VRAM version, and when the future happens, the performance impact might be smaller than the percentage of money saved.
Let's say you save 20% buying the 6 GB version, it has the same performance today, and in the future it ends up 20% behind: you had the same performance in the present and got what you paid for in the future, so you can still feel good about the purchase.
As an example I'll use the 1060 3 GB vs 6 GB, which is not a perfect example since the two don't have the same shader count, but it's close enough that something can still be learned.
In the initial review, the 1060 6 GB was 14% faster at 4K compared to the 3 GB, and looking at today's data for the same cards, the 1060 6 GB is 31% faster at 4K than the 3 GB version.
So call it a 17% performance loss over time, but that is only at a resolution/detail level where neither the 3 GB nor the 6 GB card will produce playable FPS, and if you look at the new reviews at 1080p, the difference between the 3 GB and the 6 GB is a similar 17%.
So with that in mind, if I had saved 20% by getting the 3 GB model, I definitely got the better deal at the time I bought the card, and for the future too.
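Running those quoted numbers (20% price saving, 6 GB card 14% faster at 4K at launch, ~17% faster at 1080p today), the 3 GB model stays ahead on performance per dollar in both cases - a quick sketch, treating the 6 GB card as the 1.0 baseline:

```python
# Performance-per-dollar of the 1060 3 GB relative to the 6 GB (baseline = 1.0),
# using the percentages quoted in the post above.
def perf_per_dollar(relative_perf: float, relative_price: float) -> float:
    return relative_perf / relative_price

at_launch = perf_per_dollar(1 / 1.14, 0.80)    # ~1.10 -> ~10% better value at launch
today_1080p = perf_per_dollar(1 / 1.17, 0.80)  # ~1.07 -> ~7% better value today
print(round(at_launch, 2), round(today_1080p, 2))
```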
If the 3060 6 GB has the exact same specs as the 12 GB and costs 20% less, then from a price/performance standpoint one could make a good argument that it's the better purchase than the 12 GB version, and if the 12 GB 3060's price is close to the 3060 Ti, the 3060 Ti will be the better purchase all day, every day, even though it will have less VRAM.
So now that I've had time to chew on this and make an educated guess, if anybody questions the existence of either the 6 GB or the 12 GB variant, I would say the 12 GB makes less sense from a current and future price/performance standpoint.
If you make an absolute argument, yeah, the 12 GB will always be the better card than the 6 GB, but that's like being offered both at the same price, where you'd be a fool to pick the 6 GB. That is not the reality of things; there is something called price $$$, and that together with FPS are the only valid metrics - everything else like model number, shader count, amount of VRAM etc. is all a fart in the wind.
And for people who can't accept the 6 GB version, it's nothing but a mental barrier: "how can a 6 GB card match an 11 GB card like the 1080 Ti, it has to have 12 GB of VRAM"... well, it can.
I just can't conclude that the current capacities are sufficient going forward; there is too much writing on the wall, and if you want to stay current you need Nvidia's support. Not just for DLSS or RTX... but also for things like RTX I/O. They are creating lots of boundaries by offering lower VRAM and intend to fix it with proprietary stuff. I'm not buying that shit, much as I didn't buy into G-Sync until it didn't carry a premium anymore... and honestly, even now that I have it, I'm not even using it, because there is always some small subset of situations where it's not flawless and worry-free.
This is all more of the same. Use the basic, non-altered, open-to-everyone tech or you're fucked and essentially just early-adopting forever. I'm not doing that with my hard-earned money. Make it work everywhere and make it no-nonsense, or I'm not a fan. History shows it's impossible to cater the driver to every single game coming out - and waiting for support is always nasty. You're not wrong, but nobody can look into the future with 100% certainty. We're all predicting here. And I think, this time around, the predictions on both sides have rational arguments to take them seriously. It's absolutely NOT certain the current amount is sufficient going forward.
Do I trust Nvidia and AMD to manage memory allocation correctly? Sure. But they are definitely going to deploy workarounds to compensate for whatever tricks the consoles do. Even AMD now has a tech that accelerates the memory. And in Nvidia's case, I reckon those workarounds are going to cost performance. Or worded differently: they won't age well. To this I can agree - although I don't complain about future proofing on most cards at all - the last two Nvidia gens just lack it ;). Most cards are well balanced, and I think Turing also still has its capacities in order, even if slightly reduced already. But I also know Nvidia is right on the edge of what the memory can handle versus what the core can provide. It echoes in their releases. The 9 Gbps and 11 Gbps memory updates for Pascal, for example, earned same-shader-count cards about 3-4% in performance.
With the 3080 most of all, but also in some ways with the 3070, I believe they crossed the line. In the same way as they did with the 970 - and I reckon if we reviewed it now against a number of current cards with 4 GB and up, the 970 would punch below its weight compared to launch. The same goes for the Fury X example I gave against the 980 Ti - that WAS actually benched, and the results are in some cases absolutely ridiculous. It's not a one-off; need more? The 660 Ti was badly balanced and only saved by a favorable price point... 680 vs 7970 - hell, even the faster-VRAM 770 with 2 GB can't keep up.
Evidence for Ampere will have to come in by itself; I am truly interested in a topic or deep dive like that 3 years down the line. Eh, "motive" is meant in the sense that Nvidia offers less VRAM to have more headroom for the core clock. Not some nefarious plan to rule the world - just the thought that yields weren't great and might be improving now, too, making the upgrades feasible. And yes, an upside of offering low VRAM caps is that cards won't last as long. It's just a fact, and shareholders like that. Very fair! Now include the relevance of resale value. That 6 GB card has gained 17% over time, the other one lost about as much and has less life in it going forward. This makes the 6 GB one fit for resale, and the 3 GB one ready for some low-end rig you might have in the house.
And don't underestimate Nvidia's own insight here; they have access to many games coming in the next couple of years, and considering this RTX 3060 will be on the market for ~2-2.5 years, I don't think they will release something they expect to be obsolete within that timeframe. And these days Nvidia seems to want to keep generations around longer, with semi-refreshes rather than quick iterations. Can you elaborate on what you mean by "tricks" here?
Do you mean like Infinity Cache? I haven't yet taken a deep look into it, but it's certainly possible to do some smart caching and get some performance boost. Games generally have a huge pile of "static" assets (textures, meshes) and some smaller framebuffers used to render to. Generally speaking, putting the framebuffers in a cache would be wise, since they are moved back and forth constantly, while caching static assets only helps if they are read repeatedly within a few thousand clock cycles. Caching can't really help your sustained bandwidth unless you prevent the same data from being read over and over.
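As a rough illustration of that last point (purely a toy model with made-up numbers, assuming the cache itself is never the bottleneck): traffic that hits an on-die cache never touches GDDR, so the external bus only has to carry the misses.

```python
# Toy model of "effective" bandwidth with a large on-die cache (Infinity-Cache
# style). Only cache misses consume external GDDR bandwidth; hits are assumed
# to be essentially free. Latency and cache-thrashing effects are ignored.
def effective_bandwidth(dram_bw_gb_s: float, hit_rate: float) -> float:
    return dram_bw_gb_s / (1.0 - hit_rate)

# e.g. a 256-bit / 16 Gbps bus (~512 GB/s) with a hypothetical 50% hit rate
print(effective_bandwidth(512, 0.50))  # ~1024 GB/s "effective"
print(effective_bandwidth(512, 0.00))  # no reuse -> no gain, as noted above
```

Which is exactly why the gain depends on how often the same data is read again: framebuffer traffic reuses data heavily, streamed-in static assets much less so.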
But I'm curious how universal these performance gains will be over time.
Which tricks from Nvidia are you thinking about?
As for cards going obsolete and Nvidia's knowledge - of course. This is called planned obsolescence, and it's a thing nowadays - and it's also a fact that not all products suffer from it in equal measure.
I don't think we disagree. 2-2.5 years is not the timeframe I think will be problematic. I'm talking 3+ years, when the new consoles get their higher volume of current-gen-first games as devs get used to the systems. And we know that these cards are very much capable of lasting that long, especially if you're not playing at 4K. I see way too many people here saying cards 'need' to be upgraded every gen or every other gen... but I think that with the large variety of resolutions now, cards have WAY more wiggle room, especially in the top half of the stack.
I'm experiencing this first-hand - I'm using a 2016 high-end GPU in 2020, going into 2021 - went from 1080p to WQHD and am now starting to reach my VRAM cap, but still have playable framerates. On an 8 GB card with half the core power of what's out now, or even less. Am I compromising on settings left and right? Sure - but nothing really painful, and it's surprising how far the quality sliders can go more often than not.
2.) Memory compression is much better. Just look at how the GTX 1060 3 GB card compares against the 4 GB RX 570, which often falls on its face.
3.) Benchmarks are always Ultra everything. You would be hard pressed to find a benchmark where 4 GB is not enough for 1080p medium, which is about as good as you will be able to do with a Fury-class card in today's games. It's fine wine. Even the rare GTX 680 4 GB loses to Tahiti.
www.google.com/amp/s/www.techspot.com/amp/article/2001-doom-eternal-older-gpu-test/
Even at 1080p ultra, the GTX 1050 2 GB is close to the 3 GB version. The 3 GB gets lower bandwidth, but you would have expected it to still do better. Moreover, wasn't there a delay on 2 GB GDDR6X chips? There was no way that NVIDIA was going to make a 512-bit card just to get 16 GB of VRAM. AMD is able to get away with GDDR6, which has plenty of 2 GB stock, since they are using a huge cache in their design.
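The bus-width arithmetic behind that last point, as a quick sketch (each GDDR6/GDDR6X package sits on a 32-bit slice of the bus; clamshell mode, with two chips per 32-bit channel, would double these figures):

```python
# Capacity is pinned down by bus width and chip density:
# number of chips = bus width / 32, capacity = chips * GB per chip.
def vram_capacity_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    return (bus_width_bits // 32) * gb_per_chip

print(vram_capacity_gb(192, 1))  # 6 GB  -> the rumoured RTX 3060 6 GB
print(vram_capacity_gb(192, 2))  # 12 GB -> the rumoured RTX 3060 12 GB
print(vram_capacity_gb(256, 2))  # 16 GB on a 256-bit bus needs 2 GB chips...
print(vram_capacity_gb(512, 1))  # ...or a 512-bit bus with 1 GB chips
```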
Let me tell you that VRAM is almost a non-issue. I just had two cases where I had to do something.
In Wolfenstein I had to turn down Image Streaming from Ultra to High. Then it runs at 1440p 60 FPS with DLSS Balanced, without any frame-pacing issues and smooth as butter, with RTX on and on Mein Leben! It doesn't even have an impact on texture quality, as the texture quality on Image Streaming High and Ultra is exactly the same; it just changes the memory allocation budget.
In BF5 I had to turn off "restrict GPU memory". It runs well above 1080p60 on Ultra with DXR Medium and without DLSS, and at nearly 1440p60. Alternatively, you could turn textures down to High; the difference is negligible.
That's it; in the other RTX games, VRAM is not an issue at all. Control, for example, runs beautifully at 1440p60 with DLSS Performance, RT Reflections on and volumetrics at Medium, other settings maxed. Rock-solid 60.
In Cyberpunk I can get 1440p60 thanks to DLSS Performance, without RTX and at High settings, without issues (and even with contact shadows and face lighting on and AO on High, so basically Ultra in some aspects); VRAM is a non-issue there. With image sharpening, DLSS Performance looks very close to native 1440p, because TAA has aggressive sharpening enabled while DLSS sharpening is at 0.
With RTX, I get a stable 1440p30 FPS with all(!) RTX features on and RT Lighting set to Medium, and it just looks next gen. I'm playing it that way, as the difference with RTX is huge. However, there is a memory leak in this game which degrades performance over time (especially fast with RTX in busy scenes), and restarting fixes it. People with more VRAM than me seem to have similar issues, so this appears to be an issue with the game's garbage collection; more VRAM just delays the effect.
Next, I've heard some misinformation regarding the Series S and X here.
The Series S has 10 GB of total RAM, of which 2 GB is reserved for the OS, so for games the total amount of RAM is 8 GB. That is not all VRAM though, as it's a shared configuration. For the Series X, Microsoft allocated 3.5 GB for the CPU, and since the Series S has similar CPU performance and the CPU-side RAM should be fairly static across the two consoles, you should subtract that amount from the 8 GB total, meaning there is around 4.5 GB left as VRAM, depending on how CPU-intensive the game is.
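That split, written out (the figures are those quoted above; the CPU share will of course vary per game):

```python
# Series S memory budget as described in the post above (illustrative figures).
total_ram_gb = 10.0      # total RAM on the console
os_reserved_gb = 2.0     # reserved for the OS
cpu_share_gb = 3.5       # assumed similar to the Series X CPU allocation

gpu_like_budget_gb = total_ram_gb - os_reserved_gb - cpu_share_gb
print(gpu_like_budget_gb)  # ~4.5 GB effectively acting as "VRAM"
```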
And we have solid evidence of this being the case; look at Sea of Thieves. I can max the game out at 1440p60, while the Series S has significantly reduced texture and visual quality compared to my card and the Series X, and of course runs at a lower resolution.
Or look at Watch Dogs: even the Series X is not using the high-quality texture pack, and the Series S significantly reduces LOD, resolution and overall detail to combat the lack of RAM and power. If the Series X truly had 10 GB of VRAM, it would be able to run the high-quality texture pack alongside RT without any issues, just like the 3080. In reality, even those 10 GB of GPU-optimized memory are a shared configuration, so in a CPU-intensive game like Watch Dogs, the CPU needs more than 3.5 GB, which leaves less VRAM for the GPU. On PC, you have separate DRAM for the CPU.
This is the very reason Nvidia still has a 6 GB RTX 3060: because that is a lot better than the lowest common denominator in next-gen titles, the Series S, and with DLSS saving VRAM the margin only increases further.
The next-gen jump in texture fidelity will still happen, despite the low amount of VRAM in next-gen consoles, because a new technology called Sampler Feedback, alongside DirectStorage SSD streaming (known as RTX I/O on Nvidia GPUs), will act as a roughly 2.5x multiplier for effective VRAM. With SFS, engines and devs can control the MIP levels of textures much more efficiently and thus only load at the highest detail the textures that are really noticed by the player. That will enable the next-gen jump in texture fidelity without the need to increase physical VRAM.
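Taking that claimed ~2.5x multiplier at face value (it's a vendor figure for texture streaming, not a guarantee for every workload), the what-if math looks like this:

```python
# Hypothetical "effective" texture budget if Sampler Feedback Streaming really
# delivers the ~2.5x multiplier quoted above. Applies to streamed texture data,
# not to framebuffers or other fixed allocations.
def effective_texture_budget_gb(physical_gb: float, sfs_multiplier: float = 2.5) -> float:
    return physical_gb * sfs_multiplier

print(effective_texture_budget_gb(4.5))  # Series S-style ~4.5 GB -> ~11.25 GB effective
print(effective_texture_budget_gb(6.0))  # a 6 GB card -> ~15 GB effective for textures
```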
If they go ahead and add the extra chips anyway, it'd be hard to utilize them.
Remember what happened with the GTX 970? Where are these figures coming from? $249 seems too low for a 3060, considering it could have 80% of the performance of a 3060 Ti - that's speculation, but it could.
80% of the performance at 60% of the price? That's a 33% improvement in price/performance ratio, which is unlikely considering the 3060 Ti is currently at the top of the chart among the 30-series cards. At best I think it'll match the price/performance ratio at $349.
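For reference, the arithmetic behind that 33% figure, using the ratios quoted in the post:

```python
# 80% of the performance at 60% of the price, relative to the RTX 3060 Ti.
relative_perf = 0.80
relative_price = 0.60
print(relative_perf / relative_price)  # ~1.33 -> a 33% better price/performance ratio
```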