NVIDIA could take a similar approach to sub-segmenting the upcoming GeForce RTX 3060 as it did for the "Pascal"-based GTX 1060, according to a report by Igor's Lab. Mr. Wallossek predicts a mid-January launch for the RTX 3060 series, possibly on the sidelines of the virtual CES. NVIDIA could develop two variants of the RTX 3060, one with 6 GB of memory and the other with 12 GB. Both the RTX 3060 6 GB and RTX 3060 12 GB probably feature a 192-bit wide memory interface. This would make the RTX 3060 series the spiritual successor to the GTX 1060 3 GB and GTX 1060 6 GB, although it remains to be seen whether the segmentation is limited to memory size or also extends to the chip's core configuration. The RTX 3060 series will likely go up against AMD's Radeon RX 6700 series, with the RX 6700 XT rumored to feature 12 GB of memory across a 192-bit wide memory interface.
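For reference, 6 GB and 12 GB are the two natural capacities for a 192-bit bus: GDDR6 chips sit on 32-bit channels and commonly ship in 8 Gb (1 GB) and 16 Gb (2 GB) densities. A quick sketch of that arithmetic (the chip densities are an assumption about how the board would be populated, not a confirmed spec):

```python
BUS_WIDTH_BITS = 192
CHANNEL_WIDTH_BITS = 32                       # each GDDR6 chip uses a 32-bit interface

chips = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS  # 6 memory chips on the board

# Assumed chip densities: 8 Gb (1 GB) and 16 Gb (2 GB) GDDR6 parts
for chip_capacity_gb in (1, 2):
    total = chips * chip_capacity_gb
    print(f"{chips} chips x {chip_capacity_gb} GB = {total} GB of VRAM")
```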
Comments on "NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants"
Your average computer now has about 10^8 times more memory than one from the '40s, which works out to an order of magnitude more memory every decade. Imagine having someone from that era read this forum.
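A rough sanity check of that growth rate, assuming a late-1940s machine held on the order of a hundred bytes and a current PC has 16 GB (both figures are ballpark assumptions, not from the post above):

```python
import math

memory_1940s_bytes = 100         # assumed: roughly 20 ten-digit accumulators' worth
memory_today_bytes = 16e9        # assumed: a 16 GB PC

ratio = memory_today_bytes / memory_1940s_bytes
decades = (2020 - 1945) / 10

print(f"growth: ~10^{math.log10(ratio):.0f}x")                                # ~10^8
print(f"orders of magnitude per decade: {math.log10(ratio) / decades:.1f}")   # ~1.1
```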
Never buy the option with more memory guys, it's clearly a bad idea.
Even you can't deny that using more VRAM within a single frame at the same frame rate would require more memory bandwidth. We know current cards are often constrained by bandwidth already, so throwing more VRAM at them for "future proofing" without also adding more bandwidth is illogical. :kookoo: Cut it with your straw man arguments. This is just trolling.
The argument is whether more memory can be utilized for the intended purposes.
I've seen No Man's Sky use well over 7 GB of VRAM. It starts at 4 or 5 GB, but it ramps up rather quickly once you start exploring. The screenshot shows the Radeon Overlay Metrics, with over 6 GB of VRAM either in use or allocated after a few minutes in-game, at 1080p.
To be fair, I do have NMS fully maxed out. But it shows that games are starting to push it, and that's why I feel like 6 GB is cutting it too close. There might be exceptions, though: IIRC, a few Radeon cards came out a long time ago with double the usual memory but the same bandwidth, and they suffered a performance loss because of it.
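Worth noting with overlay numbers like that: "in use or allocated" mostly means what the driver has reserved, which is an upper bound on what the game actually touches each frame. A minimal sketch of logging the same kind of figure yourself, assuming an NVIDIA card and the nvidia-ml-py (pynvml) package (on a Radeon you'd stick with the overlay):

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # NVML reports allocated VRAM; the game's per-frame working set
        # can be considerably smaller than this number.
        print(f"VRAM allocated: {info.used / 2**30:.2f} GiB "
              f"of {info.total / 2**30:.2f} GiB")
        time.sleep(5)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```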
The proper way to test whether a card has enough memory is to do a proper review. If it suffers from too little, the stutter will be severe. If it does well, it's very likely to do well for 3+ years going forward. "Feel" is the key word here. Such discussions should be based on rational arguments, not feelings. ;)
Cards manage memory differently, and newer cards do it even better than the RX 580.
While there might be outliers which manage memory badly, games in general have a similar usage pattern across memory capacity (used within a frame), bandwidth, and computational workloads. Capacity and bandwidth in particular are tied together, so if you need to use much more memory in the future, you also need more bandwidth. There is no escaping that, no matter how you feel about it.
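A minimal sketch of why the two are tied together, with made-up numbers: if a game actually touches more data every frame, the bandwidth needed to hold a fixed frame rate scales with it.

```python
def required_bandwidth_gbs(bytes_touched_per_frame: float, target_fps: float) -> float:
    """Bandwidth needed just to stream that much data once per frame.

    Illustrative lower bound only: real engines cache, reuse and
    compress data rather than re-reading everything each frame.
    """
    return bytes_touched_per_frame * target_fps / 1e9


# Made-up numbers: doubling the per-frame working set at a fixed 60 FPS
# target doubles the bandwidth required to keep up.
for gb in (3, 6, 12):
    print(f"{gb} GB touched per frame at 60 FPS -> "
          f"~{required_bandwidth_gbs(gb * 1e9, 60):.0f} GB/s")
```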
I certainly hope cards do this properly. That's for the system and graphics combined. A typical PC has 16 GB of system memory or more.
There are so few instances where a card got obsoleted because of VRAM that I can probably count them on my fingers. GPU horsepower is what determines if a video card will age nicely; I pay little attention to VRAM. Actually, you're wrong on this one. If it were up to GPU makers, you'd be upgrading with each and every generation they make, much like people do with their smartphones.
It's just that those of us who know more about GPUs are able to pick the ones with a better chance of lasting us 2 or 3 generations.
That is probably why I tend to look down on people who put future proofing at the top. Video cards (and products in general, sad as that is) are just not built for that. I'll take a "future proof" video card, given the chance, but I will not fault a video card that works well today simply because it might not work as well tomorrow.
Also, people fret about DLSS not being "true 4K", but you barely hear a word about how consoles are upscaling all the time, yet they are now advertised as "4K capable".
There is one thing, and one thing only, where consoles are a better pick than PCs: convenience/ease of use. Not capability. Of any kind.
I wouldn't mind having extra VRAM if it didn't affect the price. The problem is that it does, and we fool people into paying extra for "future proofing" when in reality the card will become obsolete just as fast. You can go ahead and waste all the money you want, but don't mislead others. If there were no downsides, I'd take 4x the VRAM. There is no pride here, I'm just being pragmatic.
Assuming this card has a 192-bit memory bus and 14 Gbps memory, that yields 336 GB/s. Now assume a game reads 6 GB during a single frame (not that a game would actually read the entire VRAM per frame); that would cap it at 56 FPS. So the argument that future games will need more than 6 GB for this card is highly unrealistic, unless they mismanage memory terribly. If that were true, overclocking memory would yield no significant performance improvement. So I guess others' extensive experience in graphics doesn't count, then.
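For anyone who wants to check those figures, the arithmetic is just bus width times per-pin data rate, then bandwidth divided by data read per frame (the 192-bit bus and 14 Gbps memory are the assumptions above, not confirmed specs):

```python
bus_width_bits = 192      # assumed memory bus width
data_rate_gbps = 14       # assumed GDDR6 data rate, in Gbit/s per pin

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8      # = 336 GB/s
worst_case_read_gb = 6                                   # reading the whole 6 GB each frame

fps_ceiling = bandwidth_gbs / worst_case_read_gb         # = 56 FPS
print(f"{bandwidth_gbs:.0f} GB/s; ceiling if all 6 GB were read per frame: "
      f"{fps_ceiling:.0f} FPS")
```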
When a GPU truly runs out of VRAM, it starts swapping, and swapping causes serious stutter, not once but fairly constantly. If the shortage is large enough, some games will even display popping textures, etc. Anyone doing a serious review of the card will notice if this happens.
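A minimal sketch of how a review could put numbers on that kind of stutter from a frametime capture (the spike threshold and sample data are my own assumptions, not any particular tool's methodology):

```python
import statistics

def stutter_report(frame_times_ms: list[float]) -> dict:
    """Summarise frametime spikes of the kind VRAM swapping produces."""
    ordered = sorted(frame_times_ms)
    avg = statistics.mean(frame_times_ms)
    worst_1pct = ordered[-max(1, len(ordered) // 100):]   # slowest 1% of frames
    spikes = [t for t in frame_times_ms if t > 3 * avg]   # assumed "hitch" threshold
    return {
        "avg_fps": round(1000 / avg, 1),
        "1%_low_fps": round(1000 / statistics.mean(worst_1pct), 1),
        "hitch_count": len(spikes),
    }


# Hypothetical capture: mostly ~16.7 ms frames with a handful of 80 ms hitches
sample = [16.7] * 500 + [80.0] * 5
print(stutter_report(sample))
```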
While that's true, does it really matter? DLSS is the best thing since checkerboard upscaling.
Sadly, it's of no use to me; you'll find out why by checking the system specs in my bio.
@lexluthermiester's point that you may be doing some compute or ML stuff is also valid. But that's rather rare and not really what was being discussed here. (Hell, with the "right" training set, you can run out of 128 GB of VRAM.)
A 1660 Ti, by comparison, gets an average of 43 fps in this same game with ultra details, and doesn't have the luxury of the dynamic resolution scaling that the Xbox is using to maintain its fairly pathetic sub-30 fps in its visual mode.
This is the same story as I've seen with every console release. When new, they can compete with upper-midrange PC GPUs, but there is a lot of hype that they're faster than that. They aren't. All else being equal, a 1660 Ti or Super looks to be equal to the pinnacle of next-gen console performance.
www.techpowerup.com/review/cyberpunk-2077-benchmark-test-performance/5.html
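Since dynamic resolution scaling came up: conceptually it's just a feedback loop that trades render resolution for frametime. A toy sketch of the idea (the target, step, and limits are invented numbers, not how any console actually implements it):

```python
def adjust_render_scale(scale: float, frame_time_ms: float,
                        target_ms: float = 33.3,   # assumed 30 FPS budget
                        step: float = 0.05,
                        min_scale: float = 0.6,
                        max_scale: float = 1.0) -> float:
    """Very simplified dynamic-resolution controller.

    Drops resolution when a frame blows its budget and creeps back up
    when there is headroom; real implementations predict GPU load
    rather than reacting after the fact.
    """
    if frame_time_ms > target_ms:
        scale -= step
    elif frame_time_ms < 0.85 * target_ms:
        scale += step
    return min(max_scale, max(min_scale, scale))


# Example: a heavy scene pushes the scale down until frames fit the budget
scale = 1.0
for ft in (40, 38, 36, 33, 30, 27):
    scale = adjust_render_scale(scale, ft)
    print(f"{ft:.0f} ms frame -> render scale {scale:.2f}")
```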
Yes, architecture, drivers, and the game engine can compensate to an extent for a lack of VRAM, but do you really buy a graphics card because it might have enough VRAM for current games and likely won't for future titles?
Look at HWUB's re-review of the 1060 3GB after only 18 months:
The big difference here is that you are not buying a $200 card that's intended for those on a budget; the 3060 is going to be at least $100 more, if not close to double the price.
This argument was acceptable for the 1060 3GB due to its price; it isn't for the 3060.
Now I'm at a crossroads. An RTX 3060 Ti costs more or less 890 USD here, an RTX 3070 costs 1000 USD, and an RTX 3080 costs 1622 USD. Prices are inflated all over the world, but prices in Argentina are even more inflated. I don't play much, I work too much, but when I do, I have a 4K TV that I'd obviously like to try with CP2077 with RT enabled. I'd like to buy the 3070, but now the RAM does seem a little limiting? The 3080 seems to be the minimum for a premium experience in my eyes. I don't like to overspend, but I'm seeing too much of a difference between the x70 and x80. Maybe wait for a 3070 Ti with 16 GB of RAM? I'm seeing all cards struggling with CP2077, but the RTX 3080 at least is losing the battle with a little more dignity.
Any thoughts? I have the money saved already, but I'm making a big effort to wait a little bit for NVIDIA to counter-attack AMD's RAM decisions.