The main reason a game would gain performance from more VRAM is if it's currently swapping; generally speaking, more VRAM won't increase performance if everything else remains the same. And it's kind of pointless to push a low-end card to that extreme just to find a bottleneck.
In theory I wouldn't mind if consoles had 4x the amount of VRAM, if it didn't drive up cost significantly and games utilized it in a sensible way. It is possible to use more memory to add more detail in backgrounds etc., but this is the chicken-and-egg problem once again.
PCIe 3.0 x8 + out of memory = not enough bandwidth to address RAM. If it had x16, it wouldn't be nearly as bad.
If the game is at the point where it's swapping heavily, even x16 wouldn't save it, as latency would also be a huge problem, forcing the framerate to a crawl.
x8 is enough for resource streaming, though, if it's done properly.
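For a sense of scale, here's some rough math with made-up overflow figures (the 500 MB per-frame re-fetch is purely illustrative):

```python
# Back-of-envelope: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b
# encoding, so usable bandwidth is roughly 0.985 GB/s per lane.
GBPS_PER_LANE_GEN3 = 8 * 128 / 130 / 8  # ~0.985 GB/s

for lanes in (8, 16):
    link = lanes * GBPS_PER_LANE_GEN3
    refetch_gb = 0.5  # assume the game must pull 500 MB over the bus per frame
    ms_per_frame = refetch_gb / link * 1000
    print(f"x{lanes}: {link:.1f} GB/s link, {ms_per_frame:.0f} ms per frame just for transfers")
```

That works out to roughly 63 ms for x8 and 32 ms for x16, i.e. even x16 caps such a worst case at ~30 fps on transfers alone, which is why doubling the link width doesn't rescue heavy swapping.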
Most users don't get anywhere near their max system RAM, so my personal opinion is that VRAM should be treated the same way.
Why should you pay for VRAM that you don't need?
Granted you should have a little margin, but beyond that, what's the point?
A user some time ago posted a screenshot of Resident Evil 2 needing nearly 14 GB of VRAM. There's a thread on this here on TPU.
There is a difference between allocating and actually needing; some games allocate huge buffers.
Also, was this a measurement of the game's own usage, or of total usage? Background tasks can consume a lot, especially Chrome.
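On NVIDIA cards, one way to sanity-check such screenshots is to break total VRAM usage down per process with NVML. A minimal sketch using the pynvml package (per-process figures can come back as unavailable under Windows' WDDM driver model):

```python
# Sketch: report total VRAM usage and a per-process breakdown on an
# NVIDIA GPU. Requires `pip install pynvml`; best-effort only, since
# per-process numbers may be None under WDDM.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total used: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB")

# Graphics processes currently holding VRAM (games, browsers, the DWM, ...)
for p in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = "n/a" if p.usedGpuMemory is None else f"{p.usedGpuMemory / 2**30:.1f} GiB"
    print(f"pid {p.pid}: {used}")

pynvml.nvmlShutdown()
```

Even a per-process number only shows allocations, not what the game actually touches each frame, so it still can't separate an over-eager allocator from genuine need.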
I would have to agree with AMD, but only for future games or for people maxing out their settings.
Buying extra VRAM for "future-proofing" has rarely, if ever, paid off in the past. Generally speaking, the need for GPU performance increases just as much (at least with how games are commonly balanced), so the card is obsolete long before you get to enjoy that extra VRAM for gaming.
I, for instance, have a GTX 680 4 GB in one machine and a GTX 1060 3 GB in another. Guess which one plays games better?
I have the impression that Unreal Engine 5 will stream textures directly from the super duper fast SSD, bypassing the heavy need for VRAM.
It will be interesting to see what they utilize it for.
But if a game is going to have something like 50 GB of data per level (uncompressed), just a couple of such games will eat up that entire SSD.
Generally it would make more sense to store the assets with lossy compression at about a 1:10 ratio, which usually retains good enough detail for grainy textures, then decompress on the CPU and send the uncompressed data to the GPU. The data needs to be prefetched and ready in time, but that's not a problem for a well-crafted game engine.
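As a toy illustration of that prefetch-and-decompress pipeline, here's a sketch where worker threads inflate assets ahead of the render loop. zlib stands in for whatever codec an engine would really use (zlib is lossless, but the pipeline shape is the same), and the asset paths and function names are all hypothetical:

```python
# Toy prefetcher: read compressed blobs from the SSD and decompress
# them on CPU worker threads before the render loop asks for them.
import zlib
from concurrent.futures import Future, ThreadPoolExecutor

def decompress(asset_id: str) -> bytes:
    # Hypothetical layout: ~1:10 compressed blob stored on the SSD.
    with open(f"assets/{asset_id}.z", "rb") as f:
        return zlib.decompress(f.read())

pool = ThreadPoolExecutor(max_workers=4)
inflight: dict[str, Future] = {}

def prefetch(asset_id: str) -> None:
    # Kick off the SSD read + CPU decompress well before the asset is needed.
    if asset_id not in inflight:
        inflight[asset_id] = pool.submit(decompress, asset_id)

def acquire(asset_id: str) -> bytes:
    # If the engine predicted well, this never blocks; .result() only
    # stalls when the prefetch was issued too late.
    prefetch(asset_id)
    return inflight.pop(asset_id).result()

# As the player approaches a new area:
#   prefetch("castle_wall_albedo")
#   ... a few frames later ...
#   data = acquire("castle_wall_albedo")  # uncompressed, ready for GPU upload
```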