I guarantee more games will use that extra VRAM long before they start using DirectStorage, especially Nvidia's custom add-on to it that requires explicit programming to use fully. Games today regularly use 6-8GB of VRAM, and they're mostly ports from consoles with 8GB of total system RAM. That 6-8 is what happens when you take the consoles' split of 6GB for the GPU and 2GB for the system, plus some change for 4K-resolution textures.
This generation, it's going to be consoles with 16GB, with probably 4-6GB for the system and 10-12GB-ish for the GPU. When the GPU in your cutting-edge PC has less VRAM than, or the same as, what your console dedicates to its GPU (the Xbox Series X dedicates 10GB of GPU-optimized memory), there's a problem: you're going to want extra headroom to make up for the fact that DirectStorage will not get as much use on PC as it will on Xbox, where it's an absolute given. On PC, it's an optional feature for only the latest generation of cards and only for users with the absolute fastest NVMe drives (which most people won't have), which means it won't be widely used for years to come. Look how long RTX adoption took and is still taking.
So yeah. Having more VRAM makes your $700 investment last longer. Like 1080 Ti levels of lasting. Nvidia thinks it can convince enough people to buy more cards with less memory, which is why these early launch cards are going without the extra memory. It'll look fine right now, but by the end of next year people will start feeling the pinch, and by 2022 they'll really notice. If you buy your card now and are fine replacing it by the time games are being designed fully for the Xbox Series X and PS5 instead of the Xbox One X and PS4, then buy the lower-memory cards and rock on.
I don't buy a GPU but once in a while, so I'll be waiting for the real cards to come. I don't want to get stuck buying a new card sooner than I'd like because I was impatient.
The Xbox Series X has 16GB of RAM, of which 2.5GB is reserved for the OS and the remaining 13.5GB is available for software. 10GB of those 13.5 are of the full-bandwidth (560GB/s) variety, with the remaining 3.5GB being slower due to that console's odd two-tiered RAM configuration. That (likely) means that games will at the very most use 10GB of VRAM, though the split between game RAM usage and VRAM is very likely not going to be 3.5:10. Those would be edge cases at best. Sony hasn't disclosed this kind of data, but given that the PS5 has a less powerful GPU, it certainly isn't going to need more VRAM than the XSX.
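Just to make the arithmetic behind those numbers explicit (a quick sketch; the 336GB/s figure for the slow pool, and the OS reservation coming out of that pool, are from Microsoft's published spec reveal):

```cpp
// Back-of-the-envelope check on the Series X memory split.
#include <cstdio>

int main() {
    const double fastPoolGB  = 10.0;  // "GPU-optimal" pool, 560GB/s
    const double slowPoolGB  = 6.0;   // standard pool, 336GB/s
    const double osReserveGB = 2.5;   // carved out of the slow pool

    const double gameSlowGB  = slowPoolGB - osReserveGB;  // 3.5GB
    const double gameTotalGB = fastPoolGB + gameSlowGB;   // 13.5GB

    printf("Available to games: %.1fGB total = %.1fGB fast + %.1fGB slow\n",
           gameTotalGB, fastPoolGB, gameSlowGB);
    return 0;
}
```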
That might be seen as an indication that a more powerful GPU might need more than 10GB of RAM, but then you need to remember that consoles are developed for 5-7-year life cycles, not 3-year ones like PC dGPUs. 10GB on the 3080 is going to be more than enough, even if you use it for more than three years. Besides, VRAM allocation (which is what all software reads as "VRAM use") is not the same as VRAM that's actually being used to render the game. Most games have aggressively opportunistic streaming systems that pre-load data into VRAM in case it's needed. The entire point of DirectStorage is to reduce the amount of unnecessarily loaded data - which then translates to a direct drop in "VRAM use". Sure, it also frees up more VRAM to be actually put to use (say, for even higher-resolution textures), but the chances of that becoming a requirement for games in the next few years are essentially zero.
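You can see that distinction in the tooling itself: overlay-style monitors report committed allocations, not data the GPU actually touched in a frame. A minimal sketch of where such a number comes from, using DXGI's QueryVideoMemoryInfo (Windows/C++, dxgi1_4.h; adapter 0 assumed for brevity):

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter1;
    factory->EnumAdapters1(0, &adapter1);   // first adapter, for simplicity

    ComPtr<IDXGIAdapter3> adapter;
    adapter1.As(&adapter);

    // CurrentUsage counts every resident allocation - including
    // speculatively streamed-in assets that may never get sampled.
    DXGI_QUERY_VIDEO_MEMORY_INFO info{};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    printf("Committed VRAM: %.2f GB (budget: %.2f GB)\n",
           info.CurrentUsage / 1e9, info.Budget / 1e9);
    return 0;
}
```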
Also, that whole statement about "I guarantee more games will use that extra VRAM long before they start using DirectStorage, especially Nvidia's custom add-on to it that requires explicit programming to use fully" does not compute. I mean, DirectStorage is an API, so obviously you need to put in the relevant API calls and program for it for it to work. That's how APIs work, and it has zero to do with "Nvidia's custom add-on to it" - RTX IO is, from everything we know, a straightforward implementation of DS. Anything else would be pretty stupid of them, given that DS is in the XSX and will as such be used by most games made for that platform in the near future, including PC ports. For Nvidia to force additional programming on top of that would make no sense, and it would likely place them at a competitive disadvantage given the likelihood that AMD will be adding DS compatibility to their new GPUs as well...
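And "programming for it" just means queueing reads through the API instead of going through standard file I/O. A rough sketch of the shape of it, based on Microsoft's dstorage.h (simplified: no error handling or fence wait, and the file name and sizes are made-up placeholders):

```cpp
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetViaDirectStorage(ID3D12Device* device,
                               ID3D12Resource* destBuffer,
                               UINT32 sizeBytes)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"assets.bin", IID_PPV_ARGS(&file));  // placeholder path

    // A queue of read requests that the runtime feeds to the NVMe drive.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One request: file bytes straight into a GPU buffer, skipping the
    // usual CPU-side shuffle through system RAM.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = sizeBytes;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = sizeBytes;

    queue->EnqueueRequest(&request);
    queue->Submit();  // real code would signal a fence to know when it's done
}
```

That's it - ordinary API plumbing, the same for every vendor's hardware, which is exactly why "Nvidia's add-on requires extra programming" doesn't hold up.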
The 980 Ti was released 5.5 years ago with 6GB of VRAM and is still very capable of 1440p gaming today.
Sounds like lowering the texture details from Ultra to High is too hard for some people - the exact same people who would complain about performance per dollar, too. Imagine what a useless amount of VRAM would do to the perf/USD...
Some people apparently see it as deeply problematic when a GPU that could otherwise deliver a cinematic ~24fps instead delivers 10 due to a VRAM limitation. Oh, I know, there have been cases where the FPS could have been higher - even in playable territory - if it weren't for the low amount of VRAM, but that's exceedingly rare. In the vast majority of cases, VRAM limitations kick in at a point where the GPU is already delivering sub-par performance and you need to lower settings anyway. But apparently that's hard for people to accept, as you say. More VRAM has for a decade or so been the no-benefit upsell of the GPU business. It's really about time people started seeing through that crap.