Friday, December 18th 2020

NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants

NVIDIA could take a similar approach to sub-segmenting the upcoming GeForce RTX 3060 as it did for the "Pascal" based GTX 1060, according to a report by Igor's Lab. Mr. Wallossek predicts a mid-January launch for the RTX 3060 series, possibly on the sidelines of the virtual CES. NVIDIA could develop two variants of the RTX 3060, one with 6 GB of memory and the other with 12 GB. Both the RTX 3060 6 GB and RTX 3060 12 GB probably feature a 192-bit wide memory interface. This would make the RTX 3060 series the spiritual successor to the GTX 1060 3 GB and GTX 1060 6 GB, although it remains to be seen whether the segmentation is limited to memory size or also extends to the chip's core configuration. The RTX 3060 series will likely go up against AMD's Radeon RX 6700 series, with the RX 6700 XT rumored to feature 12 GB of memory across a 192-bit wide memory interface.
Source: Igor's Lab
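As an aside on the reported configurations: the 6 GB and 12 GB figures fall straight out of a 192-bit bus, since that is six 32-bit channels with one GDDR6 chip each, and GDDR6 chips commonly come in 1 GB and 2 GB densities. A minimal sketch of that arithmetic, assuming the rumored (not confirmed) bus width:

# Why a rumored 192-bit bus points at 6 GB or 12 GB:
# six 32-bit channels, one GDDR6 chip per channel, 1 GB or 2 GB per chip.
BUS_WIDTH_BITS = 192
CHANNEL_WIDTH_BITS = 32            # one GDDR6 chip per 32-bit channel
COMMON_CHIP_SIZES_GB = (1, 2)      # 8 Gb and 16 Gb GDDR6 densities

channels = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS   # -> 6 channels
for chip_gb in COMMON_CHIP_SIZES_GB:
    print(f"{chip_gb} GB chips x {channels} channels = {chip_gb * channels} GB")
# 1 GB chips x 6 channels = 6 GB
# 2 GB chips x 6 channels = 12 GB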

126 Comments on NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants

#101
lexluthermiester
Caring1The 1060 3GB is irrelevant in this discussion as it uses a different chip than the 6GB variant.
But it uses the same memory bus and the GPU is close enough to make the comparison valid.
Posted on Reply
#102
Condelio
bugI'd go for the 3070. But not right now. Give it till February or March so the prices will settle down first.
Yeah, I'm trying to wait because of prices too. I'm seeing a little bit more stock, a few more choices, and a very slow downward trend in prices. When I need to relax and wait, I read W1zzard's conclusions in GPU reviews about VRAM needs, haha.
Posted on Reply
#103
evernessince
CondelioHey, nice discussion over here. Although GPU conversations tend to escalate too much for my taste haha... sometimes I have the same doubts about RAM being enough. When the GTX 1070 launched I bought it with confidence because 8 GB looked almost overkill back then. More RAM obviously doesn't hurt, but I followed the 1060-3 vs 1060-6 drama and as time passed, 5 FPS seemed the most common difference, and now I think they are both kind of outdated. In the end, for me, the deciding factor is money, and in my country, Argentina, the 1060-6 cost way more than the 1060-3, almost 80% more at times if I'm remembering correctly.

Now I'm at a crossroads. An RTX 3060 Ti costs more or less 890 USD here, an RTX 3070 costs 1000 USD, and an RTX 3080 costs 1622 USD. Prices are inflated all over the world, but prices in Argentina are even more inflated. I don't play much, I work too much, but when I do, I have a 4K TV that I'd obviously like to try CP2077 on with RT enabled. I'd like to buy the 3070, but now the RAM does seem a little limiting? The 3080 seems to be the minimum premium experience to my eyes. I don't like to overspend, but I'm seeing too much of a difference between the x70 and x80. Maybe wait for a 3070 Ti with 16 GB of RAM? I'm seeing all cards struggling with CP2077, but the RTX 3080 at least is losing the battle with a little more dignity.

Any thoughts? I have the money already saved, but I'm making a big effort to wait a little bit for NVIDIA to counter AMD's RAM decisions.
I'm personally waiting for the 3070 Ti to see what it brings. Currently running a 1080 Ti. Waiting to see what AMD's annual large driver update brings as well, which shouldn't be far off.
Posted on Reply
#104
Nihilus
lexluthermiesterAnother narrow-focused and incorrect opinion not supported by historical information. Why do you bother with that argument? Don't you want more memory on your GPU? Or is it pride?
How is an opinion incorrect? Are you really that pompous? You seem to have to argue with anyone who doesn't agree with you.

Yes, you can find cases where 6 GB is not enough at this performance metric, especially in the future.

But that said, more than 6 GB is not NEEDED. THAT'S A FACT. I think our definitions of NEED are different. If all it takes for the 6 GB to match the performance of the 12 GB is moving a slider one click left once in a while (or refraining from adding 8K textures), I would argue that 12 GB is not NEEDED, especially if it saves me $$.

Looking forward to having this wonderful conversation again when we have 8 GB and 16 GB RTX 3070 cards. The privilege of consumer choice is just too much for some to handle.

Oh, and quit trying to put the burden of proof on those who say 6 GB is enough. Key word is ENOUGH. When you say that more than 6 GB is needed, that is spreading misinformation to uninformed consumers (causing them unnecessary spending), so you need to back it up.
Posted on Reply
#105
Caring1
Just my opinion, but not having enough can be an issue, having too much, not so much.
Posted on Reply
#106
lexluthermiester
NihilusHow is an opinion incorrect?
When it conflicts with widely known information.
NihilusBut that said, more than 6 GB is not NEEDED. THAT'S A FACT.
No, that's your opinion. It is flawed and just a tiny bit (read: hugely) misguided.
NihilusKey word is ENOUGH.
Enough right now for some games at lower resolutions, and only just. The 3060 6GB is in the exact same situation the GTX 460 768MB, GTX 560 1GB, GTX 660 1.5GB, GTX 760 1.5GB, GTX 960 2GB and GTX 1060 3GB were all in: not enough VRAM for future games and a waste of potential long-term performance. Future games will demand more VRAM. The 3060 6GB will be needlessly handicapped by the lack of VRAM, and as a result it will suffer the same fate as its predecessors. If you want to limit yourself to 1080p or 1440p at lower details, then yes 6GB will be enough for now. The 3060 12GB will still be the better value buy. Yes, yes.

I have greatly enjoyed this excursion into fantasyland with you and people who agree with you, however, life is calling..
Posted on Reply
#107
Nihilus
lexluthermiesterBut it uses the same memory bus and the GPU is close enough to make the comparison valid.
Well, it has 10% fewer cores and 10% less bandwidth, so there is a difference. Guess what - it had about 10% less performance on average, even in the 1% lows.
The card was about 20% cheaper, so it was a valid option for those trying to save a little cash. I would argue that 3 GB of VRAM back then was more of a hindrance than 6 GB is today, as most high-end GPUs had 8 GB back then, while cards like the 3070 still have 8 GB today.
lexluthermiesterIf you want to limit yourself to 1080p or 1440p at lower details, then yes 6GB will be enough for now.
Yes, when Witcher 4 or GTA 6 comes out, we will be limited to 1080p/1440p medium on BOTH the 6 GB and 12 GB 3060 cards due to a lack of performance.

Same goes for CP77 and the GTX 1060 today...
Posted on Reply
#108
efikkan
bugAlso, lexluthermiester's point that you may be doing some compute or ML stuff is valid. But that's rather rare and not really what was being discussed here. (Hell, with the "right" learning set, you can run out of 128GB VRAM.)
Which is why I mentioned it in #75.
There are plenty of professionals or prosumers who actually need more VRAM, but gaming on this card wouldn't, if the initial reviews prove it's fine.
Don't get me wrong, having options for pro users is good. My issue is with those who claim these cards are more "future proof" for gaming, which they are not.
lexluthermiester
efikkanWhen a GPU truly runs out of VRAM, it starts swapping, swapping will cause serious stutter, not once, but fairly constantly.
Which is a perfect argument for more VRAM... Funny that...
If this happens, a reviewer will clearly notice it, and it will be clearly mentioned in reviews.
If it doesn't happen, it will likely be fine until it's obsolete.

I don't know the price of the RTX 3060, but let's assume it's $300. What do you think the price for the 12GB model would be? $50 extra?
Considering the RTX 3060 Ti is $400 and will be much more powerful, it then becomes a question of why you'd pay the ~$50 extra for some "future proofing" when you can pay another ~$50 and get something which is very likely to age much better.
The 12GB model would only offer some degree of "future proofing" if it also offered significantly faster VRAM.
lexluthermiesterFuture games will demand more VRAM.<snip>
I have greatly enjoyed this excursion into fantasyland with you and people who agree with you, however, life is calling..
In real life, utilizing more VRAM will also require more of other resources.
evernessinceYes, architecture, drivers, and the game engine can compensate to an extent for a lack of VRAM but do you really buy a graphics card because it might have enough VRAM for current games and likely won't for future titles?
If it does have problems with current games, that is easily provable when it's released. If it doesn't, it will likely be fine until it's obsolete as games can't use more VRAM within a single frame without either having more bandwidth or lowering frame rate.
While you can certainly find a scenario where it runs out of VRAM, that's irrelevant if it can't hit a stable 60 FPS or more at those settings. You don't buy a midrange card now to play games 3+ years from now in 4K as a slideshow at 15-30 FPS. :rolleyes:
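A rough illustration of that last point, in case the bound isn't obvious: the amount of unique data a GPU can actually read from VRAM in one frame is capped by memory bandwidth divided by frame rate. The 360 GB/s figure below is purely illustrative, not a confirmed RTX 3060 spec, but the shape of the argument holds for any numbers:

# Upper bound on VRAM a game can touch per frame = bandwidth / frame rate.
# 360 GB/s is a made-up, illustrative bandwidth figure.
def max_vram_touched_per_frame_gb(bandwidth_gb_s: float, fps: float) -> float:
    """Most unique data readable from VRAM during a single frame."""
    return bandwidth_gb_s / fps

for fps in (30, 60, 144):
    print(f"{fps:>3} FPS -> at most {max_vram_touched_per_frame_gb(360, fps):.1f} GB per frame")
# 30 FPS -> at most 12.0 GB per frame
# 60 FPS -> at most 6.0 GB per frame
# 144 FPS -> at most 2.5 GB per frame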
Posted on Reply
#109
Vayra86
Minus InfinityWell they don't care about 3060 Ti cannibalising the 3070 sales.
What sales? The card is hardly available...
NihilusWell, it has 10% fewer cores and 10% less bandwidth, so there is a difference. Guess what - it had about 10% less performance on average, even in the 1% lows.
The card was about 20% cheaper, so it was a valid option for those trying to save a little cash. I would argue that 3 GB of VRAM back then was more of a hindrance than 6 GB is today, as most high-end GPUs had 8 GB back then, while cards like the 3070 still have 8 GB today.



Yes, when Witcher 4 or GTA 6 comes out, we will be limited to 1080p/1440p medium on BOTH the 6 GB and 12 GB 3060 cards due to a lack of performance.

Same goes for CP77 and the GTX 1060 today...
You still miss the point here. THE ENTIRE AMPERE stack is missing 2-4 GB for the core power these GPUs offer. What you are looking at here is a full stack of 1060s with the shader count of the 6GB version, but the relative VRAM of the 3GB version.

The balance is shit and if you don't want to believe or see that, stop whining, buy your card and see what happens next, eh? I'm not calling you an "amateur" for doing so; all I and others are saying is that you will have a card that will struggle 2 years later, while the higher-VRAM versions are overkill and not cost effective. It's just not an optimal product... are you saying it is?

If all you play is reviews, 3DMark and canned benches, then yes, great cards, I guess...? If you do more, for which gaming PCs are usually well equipped (modding, tweaking, quality improvements over reductions), then saying that the stock 6, 8 or 10 GB is just fine is a strong act of cognitive dissonance. If not straight-up fanboi blindness, OR you don't care and think RTX or DLSS hold more value.

That's fine too. But stop selling the illusion that cards that have steadily lost relative VRAM to core power over the past two generations are still in a good place. They are really not. I never heard anyone say "give me a 6-7GB 2080 Ti", did you? Because that's really what you've got here.
efikkanWhile you can certainly find a scenario where it runs out of VRAM, that's irrelevant if it can't hit a stable 60 FPS or more at those settings. You don't buy a midrange card now to play games 3+ years from now in 4K as a slideshow at 15-30 FPS. :rolleyes:
I'm playing Cyberpunk right now at 3440x1440, on a 1080 with 8GB and 6.5-7GB in use. 50 FPS. Seems well balanced... very much unlike cards with double the core power and the SAME VRAM. Come again?! The baseline for what games can allocate and use is going up, as it did last console gen - and last gen the cards DOUBLED their capacities twice over; Maxwell with 4 and Pascal with 8 in the high end. Since then... we got less.
RandallFlaggYeah, most of the benchmarks of the PS5 / Xbox Series X have shown they get about the same performance as a 1660. With the graphics detail up, they can't maintain 30 fps and are often running in the low 20 fps range in Cyberpunk. They have to go into 'performance' mode, aka reduced detail, to get 60 FPS, and even then they can't maintain it.

A 1660 Ti by comparison gets an average of 43 fps in this same game with ultra details, and doesn't have the luxury of the dynamic resolution scaling that the Xbox is using to maintain its fairly pathetic sub-30 fps in its visual mode.

This is the same story I've seen with every console release. When new, they can compete with upper-midrange PC GPUs, but there is a lot of hype that they're faster than that. They aren't. All else being equal, a 1660 Ti or Super looks to be equal to the pinnacle of next-gen console performance.


www.techpowerup.com/review/cyberpunk-2077-benchmark-test-performance/5.html
You will find that a last-gen example is not the best indicator of next-gen performance, especially not a PC-first title. And even that one is keen to go over 6GB at a mainstream res.
Posted on Reply
#110
efikkan
Vayra86I'm playing Cyberpunk right now at 3440x1440, on a 1080 with 8GB and 6.5-7GB in use. 50 FPS. Seems well balanced... very much unlike cards with double the core power and the SAME VRAM. Come again?! The baseline for what games can allocate and use is going up, as it did last console gen - and last gen the cards DOUBLED their capacities twice over; Maxwell with 4 and Pascal with 8 in the high end. Since then... we got less.
Once again, people need to stop using allocated memory as a measure of utilized memory. And remember that Ampere utilizes memory more efficiently than Pascal. The true measure of whether 6 GB is too little for the RTX 3060 will be revealed in reviews, in terms of serious stutter. Any game settings at which it can't reach at least 60 FPS are irrelevant anyway.

There is far too much guesswork and feelings involved here. This should be a rational conclusion based on data to show whether 6 GB is enough (for realistic scenarios) or if Nvidia screwed up.
I'm tired of all these opinionators on YouTube etc. claiming that x GB is too little, contrary to what benchmarks show.
You too should wait for serious reviews instead of concluding a new card automatically needs more than the previous one.
Posted on Reply
#111
Vayra86
efikkanOnce again, people need to stop using allocated memory as a measure of utilized memory. And remember that Ampere utilizes memory more efficiently than Pascal. The true measure of whether 6 GB is too little for the RTX 3060 will be revealed in reviews, in terms of serious stutter. Any game settings at which it can't reach at least 60 FPS are irrelevant anyway.

There is far too much guesswork and feelings involved here. This should be a rational conclusion based on data to show whether 6 GB is enough (for realistic scenarios) or if Nvidia screwed up.
I'm tired of all these opinionators on YouTube etc. claiming that x GB is too little, contrary to what benchmarks show.
You too should wait for serious reviews instead of concluding a new card automatically needs more than the previous one.
You assume sub-60 is irrelevant, just as you assume a lot of other things. My experience says something different. Right now we have a massive number of people playing just-launched games at sub-60 FPS, and they play just fine.

It doesn't feel right, and so far every time I've followed that gut sense, I've found proof later down the line. So... it's like religion ;) Believe as you will; I have learned too often that going by launch results gives too limited a perspective.

Remember the Fury X with its 4GB?! Now there is a nice example, as even its huge bandwidth didn't save its lack of capacity. Equal to a 6GB 980 Ti at launch... decimated in the years after. Not in a few games... but in most.

Similar case: Kepler 2GB cards quickly lost their edge against the 3GB 7970 several years later; only the 3GB 780 Ti remained relevant deep into the Maxwell gen. Call it fine wine... or maybe just 1GB of VRAM?

ALL of those cards had core oomph to spare but VRAM was missing. Quality reductions wouldn't change that - the balance just wasn't right.

BTW I never watch YouTubers (as in NEVER) and I have no social media accounts. I figured this out all on my own and have been saying it since the first day a 3080 came out with a horribly low 10GB. It's not hard to compare that to what we had and conclude something is missing.
Posted on Reply
#112
efikkan
Vayra86You assume sub-60 is irrelevant, just as you assume a lot of other things. My experience says something different. Right now we have a massive number of people playing just-launched games at sub-60 FPS, and they play just fine.
Gamers are free to do whatever they want.
But gamers who play slideshows at less than 60 FPS clearly don't care about performance, and aren't very relevant to a discussion of whether more VRAM makes a specific card more future proof.
Vayra86It doesn't feel right, and so far every time I've followed that gut sense, I've found proof later down the line. So... it's like religion ;) Believe as you will; I have learned too often that going by launch results gives too limited a perspective.
Or have you just been looking for evidence to support your predetermined conclusion?
I used to have a similar opinion myself many years ago, but history keeps repeating itself; people buy extra VRAM for "future proofing", yet cards become obsolete for other reasons long before that happens.

Resource streaming has, if anything, made VRAM capacity less important and bandwidth more important in recent years.
Vayra86Remember the Fury X with its 4GB?! Now there is a nice example, as even its huge bandwidth didn't save its lack of capacity. Equal to a 6GB 980 Ti at launch... decimated in the years after.
Bandwidth will never replace capacity. But you need both bandwidth and computational power if you want to utilize extra VRAM capacity.
Fury X lacked computational power (and didn't it also have some latency issues??). The same goes for Radeon VII, which was hailed as "future proof" with both memory capacity and bandwidth, yet lacked performance. There certainly is a balancing act, but this balance is revealed by benchmarks, not guesswork.
Posted on Reply
#113
Vayra86
efikkanGamers are free to do whatever they want.
But gamers who play slideshows at less than 60 FPS clearly don't care about performance, and aren't very relevant to a discussion of whether more VRAM makes a specific card more future proof.


Or have you just been looking for evidence to support your predetermined conclusion?
I used to have a similar opinion myself many years ago, but history keeps repeating itself; people buy extra VRAM for "future proofing", yet cards become obsolete for other reasons long before that happens.

Resource streaming has, if anything, made VRAM capacity less important and bandwidth more important in recent years.


Bandwidth will never replace capacity. But you need both bandwidth and computational power if you want to utilize extra VRAM capacity.
Fury X lacked computational power (and didn't it also have some latency issues??). The same goes for Radeon VII, which was hailed as "future proof" with both memory capacity and bandwidth, yet lacked performance. There certainly is a balancing act, but this balance is revealed by benchmarks, not guesswork.
Ah, 50 FPS is a slideshow now, gotcha. I really do wonder what all those 4K adopters have been watching for the last 2-3 years, then. Or how anyone ever managed to view motion on a standard 50 Hz PAL TV...

You're twisting things to fit your narrative here. High-refresh gamer here, but flexible enough to see that sub-60 is perfectly playable - IF the frametimes are stable, and guess what... you want VRAM for that, and no swapping.

And no, I don't look for evidence, I just remember all of it very well because it's not review- and Google-based, but from actual, first-hand experience. Stuff that raised eyebrows, because like yourself, my basic idea is still that the majority of GPUs are indeed well balanced, especially higher up the stack. Ampere I view as a break from that norm, and Nvidia has a motive too, as they are using an inferior node and something has to give. The double-VRAM versions now only work to reinforce that thought.
Posted on Reply
#114
efikkan
Vayra86Ah, 50 FPS is a slideshow now, gotcha. I really do wonder what all those 4K adopters have been watching for the last 2-3 years, then. Or how anyone ever managed to view motion on a standard 50 Hz PAL TV...
Well, now you're going around in circles.
The whole point of the card being supposedly more "future proof" with extra VRAM was to sustain higher performance; if it can't even reach 60 FPS, then what's the point? This is simply laughable. :rolleyes:
Vayra86You're twisting things to fit your narrative here. High-refresh gamer here, but flexible enough to see that sub-60 is perfectly playable - IF the frametimes are stable, and guess what... you want VRAM for that, and no swapping.
Please keep in mind that I'm not saying people need >60 FPS to game here, just that the whole argument about "future proofing" is ultimately flawed if you need VRAM to have high performance but don't have the other means to get there.
Vayra86And no, I don't look for evidence, I just remember all of it very well because it's not review- and Google-based, but from actual, first-hand experience.
Arguments like that might convince some, but those of us who are experienced graphics programmers see straight through that BS.
Vayra86Stuff that raised eyebrows, because like yourself, my basic idea is still that the majority of GPUs are indeed well balanced, especially higher up the stack.
Firstly, people like you have continuously complained about Nvidia not being "future proof" at least since Maxwell, due to AMD often offering ~33-100% more VRAM for their counterparts. This argumentation is not new with Ampere, it comes around every single time. Yet history has shown Nvidia's cards have been pretty well balanced over the last 10 years. I'll wait for the evidence to prove Ampere is any less capable.
Vayra86Ampere I view as a break from that norm, and Nvidia has a motive too, as they are using an inferior node and something has to give. The double-VRAM versions now only work to reinforce that thought.
Now you're approaching conspiracy territory.
You know very well that TSMC's 7nm capacity is maxed out, and Nvidia had to spread their orders to get enough capacity for launching consumer products (yet it's still not enough).
Double VRAM versions (assuming they arrive) will come because the market demands it, not because Nvidia has evil intentions.
Posted on Reply
#115
r9
I'm reading through this thread and trying to figure out who to troll, but it's not easy as both sides have valid points and the reality is probably somewhere in the middle.
For the pro-VRAM crowd: yeah, having more VRAM can't hurt, as worst case it will perform the same and you can sleep soundly knowing your "future" is safe. :D
For the anti-VRAM crowd: for most games on most settings, at least for now, it would be on par with the more-VRAM version, and when the future happens, the performance impact might be less than the percentage of money saved.
Let's say you save 20% when buying the 6GB version; if it has the same performance in the present and is 20% behind in the future, well, you had the same performance in the present and got what you paid for in the future, so you can still feel good about the purchase.

As an example I'll use the 1060 3GB vs 6GB, which is not a perfect example as the two don't have the same GPU core count, but it's close enough that something can still be learned.
In the initial review the 1060 6GB was 14% faster at 4K compared to the 3GB, and looking at today's data for the same cards, the 1060 6GB is 31% faster at 4K than the 3GB version.
So call it that the 3GB lost 17% of performance over time, but that is only at a resolution/detail level where neither the 3GB nor the 6GB can produce playable fps, and if you look at the new reviews at 1080p, the difference between 3GB and 6GB is similar at 17%.
So having that in mind, if I had saved 20% by getting the 3GB model, I definitely got the better deal, both at the time I bought the card and for the future.

If a 3050 Ti 6GB has the exact same specs as the 12GB and costs 20% less, then from a price/performance standpoint one could make a good argument that it's the better purchase than the 12GB version, and if the 12GB 3050 Ti's price is close to the 3060 Ti's, the 3060 Ti will be the better purchase all day, every day, even though it has less VRAM.

So now that I've had time to chew on this and make an educated guess: if anybody questions the existence of either the 6GB or 12GB 3050 Ti, I would say the 12GB makes less sense from a current and future price/performance standpoint.

If you make an absolute argument, yeah, the 12GB will always be the better card than the 6GB, but that's like being offered both at the same price, where you'd be a fool to pick the 6GB. That is not the reality of things; there is something called price $$$, and that together with fps are the only valid metrics. Everything else, like model number, shader count, amount of VRAM etc., is all a fart in the wind.

And for people who can't accept the 6GB version, it's nothing but a mental barrier: "how can a 6GB card match an 11GB card like the 1080 Ti, it has to have 12GB of VRAM"... well, it can.
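For what it's worth, a quick sanity check of that reasoning, using only the figures in the post above (20% saved on the cheaper card, 14% faster at launch and ~31% faster today for the bigger-VRAM card; none of these numbers are measured here):

# Price/performance check with the poster's own numbers (illustrative only).
def perf_per_dollar(relative_perf: float, relative_price: float) -> float:
    return relative_perf / relative_price

cheap_launch = perf_per_dollar(1.00, 0.80)   # 3 GB model, 20% cheaper, baseline perf
big_launch   = perf_per_dollar(1.14, 1.00)   # 6 GB model, 14% faster at launch
big_today    = perf_per_dollar(1.31, 1.00)   # 6 GB model, ~31% faster today

print(f"3 GB model:          {cheap_launch:.2f} perf/$")
print(f"6 GB model (launch): {big_launch:.2f} perf/$")
print(f"6 GB model (today):  {big_today:.2f} perf/$")
# The cheaper card wins on perf/$ at launch (1.25 vs 1.14) and the bigger
# card only pulls ahead years later (1.31 vs 1.25), matching the post's point.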
Posted on Reply
#116
FLOWrescent
NordicI have a 1060 3gb and have not had to reduce texture levels yet. It depends on the games you play and I am not playing the latest and greatest.
This is the way. It pisses me off when low-end is generalized. Regardless of what kind of hardware someone owns, most people aren't stuck exclusively to current mainstream releases. Casuals still play CS:GO on a 3080; that game is 8 years old, with outdated graphics that make a 3080 look like a wasted asset, but again, fun over graphics. I wouldn't play the original Doom and its clones if they weren't fun, even if their graphics are sometimes 8- or 16-bit. From a technical point of view I agree that hardware improvements must go on, but we are talking about utilizing these cards for gaming as a top priority anyway. For rendering workloads and other stuff where huge amounts of VRAM are required, such configs definitely don't make sense.
Posted on Reply
#117
Vayra86
MxPhenom 216You will only have fetching happening if data the GPU needs is not in cache. Typically, chips' cache designs and memory management have functionality that looks ahead and fetches or swaps data accordingly, so that what the GPU needs next is available via the cache. There is logic in the cache, too, that only triggers a fetch from GPU RAM or system RAM if a miss occurs in the cache control logic.

The number of clock cycles needed to fetch data increases at each layer of the memory hierarchy, from cache to GPU RAM to system RAM. GPU RAM requires fewer cycles than system RAM, etc. And obtaining data in real time from whatever storage medium is available is even worse, by a lot.

Good points. But now we also have consoles that can access storage media faster than a PC can. So effectively, they might be fetching faster than any normal PC with storage over SATA600. They will use that to their advantage, and a regular GPU just can't. Nvidia said something about RTX I/O to counteract that. Looking forward to seeing more of it, but if you believe that technology is there to save them, then you also have to believe that the VRAM that's there isn't enough for the next gen of games. And what other conclusion can you draw anyway... they don't develop it just because they're bored and want to bleed money.

I just can't conclude that the current capacities are sufficient going forward; there are too many writings on the wall, and if you want to stay current you need Nvidia's support. Not just for DLSS or RTX... but also for things like RTX I/O. They are creating lots of boundaries by offering lower VRAM and intend to fix it with proprietary stuff. I'm not buying that shit, just as I didn't buy into G-Sync until it no longer carried a premium... and honestly, even now that I have it, I'm not using it, because every single time there's a small subset of situations where it's not flawless and worry-free.

This is all more of the same. Use the basic, non-altered, open-to-everyone tech or you're fucked and essentially just early-adopting forever. I'm not doing that with my hard-earned money. Make it work everywhere and make it no-nonsense, or I'm not a fan. History shows it's impossible to tailor the driver to every single game coming out - and waiting for support is always nasty.
efikkanFeel is the key here. Such discussions should be based on rational arguments not feelings. ;)
Cards manage memory differently, and newer cards do it even better than RX 580.
While there might be outliers which manage memory badly, games in general do have a similar usage pattern between memory capacity (used within a frame), bandwidth and computational workloads. Especially capacity and bandwidth are tied together, so if you need to use much more memory in the future, you also need more bandwidth. There is no escaping that, no matter how you feel about it.
You're not wrong, but nobody can look into the future with 100% certainty. We're all predicting here. And I think, this time around, the predictions on both sides have rational arguments to take them seriously. It's absolutely NOT certain the current amount is sufficient going forward.

Do I trust Nvidia and AMD to manage memory allocation correctly? Sure. But they are definitely going to deploy workarounds to compensate for whatever tricks the consoles do. Even AMD has a tech now that accelerates the memory. And in Nvidia's case, I reckon those workarounds are going to cost performance. Or worded differently: they won't age well.
efikkanFirstly, people like you have continuously complained about Nvidia not being "future proof" at least since Maxwell, due to AMD often offering ~33-100% more VRAM for their counterparts. This argumentation is not new with Ampere, it comes around every single time. Yet history has shown Nvidia's cards have been pretty well balanced over the last 10 years. I'll wait for the evidence to prove Ampere is any less capable.
To this I can agree - although I don't complain about future-proofing on most cards at all; the last two Nvidia gens just lack it ;). Most cards are well balanced, and I think Turing also has its capacities still in order, even if slightly reduced already. But I also know Nvidia is right on the edge of what the memory can handle versus what the core can provide. It echoes in their releases. The 9 and 11 Gbps updates for Pascal, for example, earned cards with the same shader count about 3-4% in performance.

With the 3080 most of all, but also in some way with the 3070, I believe they crossed the line. In the same way as they did with the 970 - and I reckon if we reviewed it now against a number of current cards with 4GB and up, the 970 would punch below its weight compared to launch. The same goes for the Fury X example I gave against the 980 Ti - that WAS actually benched and the results are in some cases absolutely ridiculous. It's not a one-off; need more? The 660 Ti was badly balanced and only saved by a favorable price point... 680 vs 7970 - hell, even the faster-VRAM 770 with 2GB can't keep up.

Evidence for Ampere will have to come in by itself, I am truly interested in a topic or deep dive like that 3 years down the line.
efikkanNow you're approaching conspiracy territory.
You know very well that TSMC's 7nm capacity is maxed out, and Nvidia had to spread their orders to get enough capacity for launching consumer products (yet it's still not enough).
Double VRAM versions (assuming they arrive) will come because the market demands it, not because Nvidia has evil intentions.
Eh, 'motive' is meant in the sense that Nvidia offers less VRAM to have more headroom for the core clock. Not some nefarious plan to rule the world - just the thought that yields weren't great and might be improving now, too, making the upgrades feasible. And yes, an upside of offering low VRAM caps is that cards won't last as long. It's just a fact, and a shareholder likes that.
r9I'm reading through this thread and trying to figure out who to troll, but it's not easy as both sides have valid points and the reality is probably somewhere in the middle.
For the pro-VRAM crowd: yeah, having more VRAM can't hurt, as worst case it will perform the same and you can sleep soundly knowing your "future" is safe. :D
For the anti-VRAM crowd: for most games on most settings, at least for now, it would be on par with the more-VRAM version, and when the future happens, the performance impact might be less than the percentage of money saved.
Let's say you save 20% when buying the 6GB version; if it has the same performance in the present and is 20% behind in the future, well, you had the same performance in the present and got what you paid for in the future, so you can still feel good about the purchase.

As an example I'll use the 1060 3GB vs 6GB, which is not a perfect example as the two don't have the same GPU core count, but it's close enough that something can still be learned.
In the initial review the 1060 6GB was 14% faster at 4K compared to the 3GB, and looking at today's data for the same cards, the 1060 6GB is 31% faster at 4K than the 3GB version.
So call it that the 3GB lost 17% of performance over time, but that is only at a resolution/detail level where neither the 3GB nor the 6GB can produce playable fps, and if you look at the new reviews at 1080p, the difference between 3GB and 6GB is similar at 17%.
So having that in mind, if I had saved 20% by getting the 3GB model, I definitely got the better deal, both at the time I bought the card and for the future.

If a 3050 Ti 6GB has the exact same specs as the 12GB and costs 20% less, then from a price/performance standpoint one could make a good argument that it's the better purchase than the 12GB version, and if the 12GB 3050 Ti's price is close to the 3060 Ti's, the 3060 Ti will be the better purchase all day, every day, even though it has less VRAM.

So now that I've had time to chew on this and make an educated guess: if anybody questions the existence of either the 6GB or 12GB 3050 Ti, I would say the 12GB makes less sense from a current and future price/performance standpoint.

If you make an absolute argument, yeah, the 12GB will always be the better card than the 6GB, but that's like being offered both at the same price, where you'd be a fool to pick the 6GB. That is not the reality of things; there is something called price $$$, and that together with fps are the only valid metrics. Everything else, like model number, shader count, amount of VRAM etc., is all a fart in the wind.

And for people who can't accept the 6GB version, it's nothing but a mental barrier: "how can a 6GB card match an 11GB card like the 1080 Ti, it has to have 12GB of VRAM"... well, it can.
Very fair! Now include the relevance of resale value. That 6GB card has gained 17% over time; the other one lost about as much and has less life in it going forward. This makes the 6GB one fit for resale, and the 3GB one ready for some low-end rig you might have in the house.
Posted on Reply
#118
efikkan
Vayra86You're not wrong, but nobody can look into the future with 100% certainty. We're all predicting here.
Of course, no one can know the future. But some can certainly make more educated guesses than others.
And don't underestimate Nvidia's own insight here; they have access to many games coming in the next couple of years, and considering this RTX 3060 will be on the market for ~2-2.5 years, I don't think they will release something they expect to be obsolete within that timeframe. And these days Nvidia seems to want to keep generations around longer, with semi-refreshes rather than quick iterations.
Vayra86Do I trust Nvidia and AMD to manage memory allocation correctly? Sure. But they are definitely going to deploy workarounds to compensate for whatever tricks the consoles do. Even AMD has a tech now that accelerates the memory. And in Nvidia's case, I reckon those workarounds are going to cost performance. Or worded differently: they won't age well.
Can you elaborate on what you mean by "tricks" here?
Do you mean like Infinity Cache? I haven't yet taken a deep look into it, but it's certainly possible to do some smart caching and get some performance boost. Games generally have a huge pile of "static" assets (textures, meshes) and some smaller framebuffers used to render to. Generally speaking, putting the framebuffers in a cache would be wise, since they are moved back and forth constantly, while caching static assets only helps if they are read repeatedly within a few thousand clock cycles. Caching can't really help your sustained bandwidth unless it prevents the same data from being read from VRAM over and over.
But I'm curious how universal these performance gains will be over time.
Which tricks from Nvidia are you thinking about?
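To make the caching point a bit more concrete, here is a toy model: a cache only relieves VRAM bandwidth for the fraction of traffic it actually catches, so re-read data (like framebuffers) benefits while freshly streamed assets do not. The hit rates and per-frame traffic below are made-up numbers for illustration:

# Toy model: VRAM traffic left over after a cache absorbs its hits.
def dram_traffic_gb(total_traffic_gb: float, cache_hit_rate: float) -> float:
    """Traffic that still reaches VRAM once cache hits are served on-die."""
    return total_traffic_gb * (1.0 - cache_hit_rate)

frame_traffic_gb = 5.0  # hypothetical data moved per frame
for hit_rate in (0.0, 0.3, 0.6):
    print(f"hit rate {hit_rate:.0%}: {dram_traffic_gb(frame_traffic_gb, hit_rate):.1f} GB/frame to VRAM")
# Streaming brand-new assets (hit rate ~0) sees no benefit, which is why
# caching only helps sustained bandwidth when the same data is reused.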
Posted on Reply
#119
lexluthermiester
efikkanOf course, no one can know the future. But some can certainly make more educated guesses than others.
So true. And those who look at historical trends, as well as how things are currently turning out, will be the ones better informed and thus better prepared...
Posted on Reply
#120
Vayra86
efikkanOf course, no one can know the future. But some can certainly make more educated guesses than others.
And don't underestimate Nvidia's own insight here; they have access to many games coming in the next couple of years, and considering this RTX 3060 will be on the market for ~2-2.5 years, I don't think they will release something they expect to be obsolete within that timeframe. And these days Nvidia seems to want to keep generations around longer, with semi-refreshes rather than quick iterations.


Can you elaborate on what you mean by "tricks" here?
Do you mean like Infinity Cache? I haven't yet taken a deep look into it, but it's certainly possible to do some smart caching and get some performance boost. Games generally have a huge pile of "static" assets (textures, meshes) and some smaller framebuffers used to render to. Generally speaking, putting the framebuffers in a cache would be wise, since they are moved back and forth constantly, while caching static assets only helps if they are read repeatedly within a few thousand clock cycles. Caching can't really help your sustained bandwidth unless it prevents the same data from being read from VRAM over and over.
But I'm curious how universal these performance gains will be over time.
Which tricks from Nvidia are you thinking about?
www.nvidia.com/en-us/geforce/news/rtx-io-gpu-accelerated-storage-technology/

As for cards going obsolete and Nvidia's knowledge - of course. This is called planned obsolescence, and it's a thing nowadays - and it's also a fact that not all products suffer from it in equal measure.

I don't think we disagree. 2-2.5 years is not the timeframe I think will be problematic. I'm talking 3+ years, when the new consoles get their higher volume of current-gen-first games as devs get used to the systems. And we know that these cards are very much capable of lasting that long, especially if you're not playing at 4K. I see way too many people here saying cards 'need' to be upgraded every gen or every other gen... but I think that with the large variety of resolutions now, cards have WAY more wiggle room, especially in the top half of the stack.

I'm experiencing this first-hand - I'm using a 2016 high-end GPU in 2020 - going into 2021 - went from 1080p to WQHD and am now starting to hit my VRAM cap, but I still have playable framerates. On an 8GB card with half the core power of what's out now, or even less. Am I compromising on settings left and right? Sure - but nothing really painful, and it's surprising how far the quality slider can go more often than not.
Posted on Reply
#121
lexluthermiester
Vayra86I'm experiencing this first-hand - I'm using a 2016 high-end GPU in 2020 - going into 2021 - went from 1080p to WQHD and am now starting to hit my VRAM cap, but I still have playable framerates. On an 8GB card with half the core power of what's out now, or even less. Am I compromising on settings left and right? Sure - but nothing really painful, and it's surprising how far the quality slider can go more often than not.
Yeah, but if you go 4K, that all changes...
Posted on Reply
#122
Nihilus
Vayra86Remember the Fury X with its 4GB?! Now there is a nice example, as even its huge bandwidth didn't save its lack of capacity. Equal to a 6GB 980 Ti at launch... decimated in the years after. Not in a few games... but in most.
1.) It's impossible to make this claim without having a theoretical 8 GB Fury X to compare it to. The lack of performance may be down to driver/architecture support. (Just look how badly Vega does in some titles.)

2.) Memory compression is much better. Just compare how well the GTX 1060 3 GB holds up against the 4 GB RX 570, which often falls on its face.

3.) Benchmarks are always Ultra everything. You would be hard pressed to find a benchmark where 4 GB is not enough for 1080p medium, which is about as good as you will be able to do with a Fury class card in today's games.
Vayra86Similar case: Kepler 2GB cards quickly lost their edge against the 3GB 7970 several years later; only the 3GB 780 Ti remained relevant deep into the Maxwell gen. Call it fine wine... or maybe just 1GB of VRAM?
It's fine wine. Even the rare GTX 680 4GB loses to Tahiti.
www.google.com/amp/s/www.techspot.com/amp/article/2001-doom-eternal-older-gpu-test/

Even at 1080p ultra, the GTX 1050 2GB is close to the 3GB version. The 3GB gets lower bandwidth, but you would have expected it to still do better.
efikkanNow you're approaching conspiracy territory.
You know very well that TSMC's 7nm capacity is maxed out, and Nvidia had to spread their orders to get enough capacity for launching consumer products (yet it's still not enough).
Double VRAM versions (assuming they arrive) will come because the market demands it, not because Nvidia has evil intentions.
Moreover, wasn't there a delay on 2 GB GDDR6X chips? There was no way that Nvidia was going to make a 512-bit card just to get 16 GB of VRAM. AMD is able to get away with GDDR6, which has plenty of 2 GB stock, since they are using a huge cache in their design.
Posted on Reply
#123
dampflokfreund
I have a 6 GB 2060 laptop which I have been using with RTX enabled in any title that is available. I also have this laptop connected to a 1440p monitor.

Let me tell you that VRAM is almost a non-issue. I just had two cases where I had to do something.

In Wolfenstein I had to turn down image streaming from Ultra to High. Then it runs at 1440p 60 FPS with DLSS Balanced, without any frame-pacing issues and smooth as butter, with RTX on and on Mein Leben! It doesn't even have an impact on texture quality, as the texture quality on Image Streaming High and Ultra is exactly the same; it just changes the memory allocation budget.

In BF5 I had to turn off "restrict GPU memory". It runs at well above 1080p60 on Ultra and DXR Medium without DLSS, and nearly 1440p60. Alternatively, you could turn textures to High; the difference is negligible.

That's it; in other RTX games, VRAM is not an issue at all. Control, for example, runs beautifully at 1440p60 with DLSS Performance, with RT Reflections on and volumetrics at medium, other settings maxed. Rock solid 60.

In Cyberpunk I can have 1440p60 thanks to DLSS Performance, without RTX, at high settings without issues (and even contact shadows and face lighting on, and AO on High, so basically Ultra in some aspects). VRAM is a non-issue there. With image sharpening, DLSS Performance looks very close to native 1440p, because TAA has aggressive sharpening enabled while DLSS sharpening is at 0.

With RTX, I get a stable 1440p30 FPS using all(!) RTX features on and Lighting at Medium, and it just looks next gen. I'm playing it that way, as the difference with RTX is huge. However, there is a memory leak in this game which degrades performance over time (and especially fast with RTX in busy scenes); restarting fixes it. Other people with more VRAM than me seem to have similar issues, so this seems to be an issue with the game's garbage collection; more VRAM just delays the effect.

Next, I've heard some misinformation regarding the Series S and X here.

The Series S has 10 GB of total RAM, of which 2 GB is reserved for the OS, so for games the total amount of RAM is 8 GB. That is not VRAM though, as it's a shared config. For the Series X, Microsoft allocated 3.5 GB for the CPU, and since the Series S has similar CPU performance and CPU RAM use should be fairly static between these two consoles, you should subtract that amount from the 8 GB total, meaning there is around 4.5 GB left as VRAM, depending on how CPU-intensive the game is.

And we have solid evidence of this being the case; look at Sea of Thieves. I can max the game out at 1440p60, while the Series S has significantly reduced texture and visual quality compared to my card and the Series X, and of course runs at a lower resolution.

Or look at Watch Dogs: even the Series X is not using the high-quality texture pack, and the Series S significantly reduces LOD, resolution and overall detail to combat the lack of RAM and power. If the Series X truly had 10 GB of VRAM, it would be able to run the high-quality texture pack alongside RT without any issues, just like the 3080. In reality, even these 10 GB of GPU-optimized memory are a shared config, so in a CPU-intensive game like Watch Dogs, the CPU needs more than 3.5 GB, which leaves less VRAM for the GPU. On PC, you have DRAM for the CPU.

This is the very reason Nvidia still has a 6 GB RTX 3060: because that is a lot better than the lowest common denominator in next-gen titles, the Series S, and with DLSS saving VRAM it stretches that further still.

The next-gen jump in texture fidelity will still happen, despite the low amount of VRAM in next-gen consoles, because a new technology called Sampler Feedback, alongside DirectStorage SSD streaming (known as RTX I/O on Nvidia GPUs), will act as a roughly 2.5x multiplier for effective VRAM. With SFS, engines and devs can control the MIP levels of textures much more efficiently, only keeping textures at the highest detail where the player will really notice them. That will enable the next-gen jump in texture fidelity without the need to increase physical VRAM.
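A small sketch of the memory budget described above, using only the poster's own estimates (10 GB total on the Series S, ~2 GB OS reservation, ~3.5 GB for the CPU side, and the claimed ~2.5x effective multiplier from SFS plus DirectStorage); none of these splits are official published figures:

# Console memory budget per the post above (all figures are the poster's estimates).
SERIES_S_TOTAL_GB = 10.0
OS_RESERVED_GB = 2.0
CPU_SIDE_GB = 3.5                  # assumed similar to the Series X split

game_ram_gb = SERIES_S_TOTAL_GB - OS_RESERVED_GB    # 8.0 GB available to the game
gpu_side_gb = game_ram_gb - CPU_SIDE_GB             # ~4.5 GB acting as "VRAM"

print(f"GPU-usable memory: {gpu_side_gb:.1f} GB")
print(f"Effective with SFS/DirectStorage (~2.5x claim): {gpu_side_gb * 2.5:.1f} GB")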
Posted on Reply
#124
Nihilus
dampflokfreundI have a 6 GB 2060 laptop which I have been using with RTX enabled in any title that is available. I also have this laptop connected to a 1440p monitor.

Let me tell you that VRAM is almost a non-issue. I just had two cases where I had to do something.

In Wolfenstein I had to turn down image streaming from Ultra to High. Then it runs at 1440p 60 FPS with DLSS balanced without any framepacing issues and smoothly as butter, with RTX on and Mein Leben! It doesn't even have an impact on texture quality, as the texture quality on Image Streaming High and Ultra is exactly the same, it just changes memory allocation budget.

In BF5 I had to turn off restrict GPU memory. Runs at well above 1080p60 on Ultra and DXR Medium without DLSS and nearly 1440p60. Alternatively, you could turn textures to high, difference is negitable.

That's it, in other RTX games, VRAM is not an issue at all. Control for example runs beautifully in 1440p60 and DLSS Performance with RT Reflections on and volumetrics to medium, other settings max. Rock solid 60.

In Cyberpunk I can have 1440p60 thanks to DLSS Performance without RTX and high settings without issues (and even contact shadows, face lighting on and AO on High so basically Ultra in some aspects) VRAM is a non issue there. With image sharpening DLSS Performance looks very close to native 1440p, because TAA has aggressive sharpening enabled while DLSS sharpening is at 0.

With RTX, I get a stable 1440p30 FPS using all(!) RTX features on and Lighting to medium and it just looks next gen. I'm playing it that way as the difference with RTX is huge. However, there is a memory leak in this game which degradates performance over time (and especially fast with RTX in busy scenes) and restarting fixes it. Other people with more VRAM than me seem to have similar issues, so this seems to be an issue with the garbage collection of the game, more VRAM just delays that effect.

Next, I've heard some misinformation regarding the Series S and X here.

The Series S has 10 GB total RAM, which 2 GB is reserved for the OS, so for games the total amount of RAM is 8 GB. That is not VRAM though, as its a shared config. For Series X, Microsoft allocated 3.5 GB for the CPU and since the Series S has similar CPU performance and CPU RAM should be pretty static along these two consoles, you should substract that amount from the 8 GB total RAM, meaning there is around 4.5 GB left as VRAM, depending on how CPU intensive the game is.

And we have solid evidence of this being the case, look at Sea of Thieves. I can max the game out at 1440p60, while Series S has significantly reduced texture and visual quality compared to my card and the Series X and of course runs at lower resolution.

Or look at Watch Dogs, even the Series X is not using the high quality texture pack and the Series S significantly reduces LOD, resolution and overall detail to combat the lack of RAM and power. If the Series X had truly 10 GB as VRAM, it would be able to run the high quality texture pack alongside RT without any issues, just like the 3080. In reality, even these 10 GB GPU optimized memory are a shared config, so in a CPU intensive game like Watch Dogs, the CPU needs more than 3.5 GB which leaves less VRAM for the GPU. On PC, you have DRAM for the CPU.

This is the very reason Nvidia still has a 6 GB RTX 3060, because that is a lot better than the lowest common denominator in next gen titles, the Series S and with DLSS saving VRAM it just increases that further.

The next gen jump in texture fidelity will still happen, despite the low amount of VRAM in next gen consoles. Because a new technology called Sampler Feedback alongside DirectStorage SSD streaming (known as RTX I/O on Nvidia GPUs) will act as an 2.5x multiplier for effective VRAM, as with SFS, engines and devs can control MIP levels of textures much more efficiently and thus, only rendering textures in the highest detail which are really noticed by the player. That will enable the next gen jump in texture fidelity without the need to increase physical VRAM.
Solid post here.
Posted on Reply
#125
cst1992
windwhirlMy thoughts were that 12 were too much and 6 is cutting it too close. 8 GB would have been better, I think.
I don't think 8 is doable on a 192-bit memory bus. The 3060 Ti has 8, but the bus for that is 256-bit.
If they go ahead and add the extra chips anyway, it'd be hard to utilize them.
Remember what happened with the GTX 970?
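For anyone curious why 8 GB is the awkward option here, a small sketch: with six 32-bit channels and uniform chip densities you land on 6 GB or 12 GB, and any total in between forces mixed densities, leaving part of the pool behind fewer channels (the kind of asymmetric split the GTX 970 was criticized for). The chip sizes below are common GDDR6 densities; this is an illustration, not an actual board design.

from itertools import combinations_with_replacement

CHANNELS = 6              # 192-bit bus / 32 bits per GDDR6 chip
DENSITIES_GB = (1, 2)     # common GDDR6 chip sizes

for chips in combinations_with_replacement(DENSITIES_GB, CHANNELS):
    total = sum(chips)
    layout = "uniform" if len(set(chips)) == 1 else "mixed (asymmetric)"
    print(f"{chips} -> {total} GB, {layout}")
# Only the all-1 GB (6 GB) and all-2 GB (12 GB) rows avoid a mixed layout;
# 8 GB would need four 1 GB chips plus two 2 GB chips.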
vctr249 and 299 respectively; at 349 it makes no sense to sell it, as for 50 bucks (in theory) more you have the 3060 Ti.
Where are these figures coming from? 249 seems too low for a 3060, considering it could have 80% of the performance of a 3060 Ti - that's speculation, but it could.
80% of the performance at 60% of the price? That's a 33% improvement in price/performance ratio, which is unlikely considering the 3060 Ti is currently at the top of the chart among the 30-series cards. At best I think it'll match the 3060 Ti's price/performance ratio at $349.
Posted on Reply