Sorry, but no. You started out by arguing from the viewpoint of gamers needing more VRAM - i.e. basing your argument in customer needs. Regardless of your intentions, shifting the basis of the argument to the viewpoint of the company is a dramatic shift that introduces conflicting interests to your argumentation, which you need to address.
Again, I have to disagree. I don't give a rodent's behind about Nvidia's viewpoint. They provide a service to me as a (potential) customer: compelling products. They, however, are in it for the profit, and often make choices in product segmentation, pricing, feature sets, etc. that are clearly aimed at increasing profits rather than benefiting the customer. There are of course relevant arguments to be made about whether what customers need/want/wish for is feasible (technologically, economically, etc.), but that is as much of the company's viewpoint as should be taken into account here. Adopting an Nvidia-internal perspective on this is meaningless for anyone who doesn't work for Nvidia, and IMO even meaningless for Nvidia employees unless they are in a decision-making position on these questions.
There will definitely be games requiring more than 10GB of VRAM
But 10GB? Again, I have my doubts. Sure, there will always be outliers, and there will always be games that take pride in being extremely graphically intensive. There will also always be settings one can enable that consume massive amounts of VRAM, mostly with negligible (if even noticeable) impact on graphical quality. But beyond that, the introduction of DirectStorage for Windows - and, alongside it, the very likely beginning of SSDs becoming a requirement for most major games - will directly serve to decrease VRAM needs. Sure, new things can be introduced to take up the space freed by no longer prematurely streaming in assets that never get used, but the chance of those new features consuming everything that was freed plus a few GB more is very, very slim. Of course not every game will use DirectStorage, but every cross-platform title launching on the XSX will at least have it as an option - and removing it might necessitate rearchitecting the entire structure of the game (adding loading screens, corridors, etc.), so it's not something that can be dropped easily.
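To make that concrete, here's a deliberately simplified sketch (plain C++; the types and names are made up for illustration - this is not the actual DirectStorage API) of why on-demand streaming shrinks the resident VRAM footprint compared to preloading:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical asset record: what's on disk vs. what's resident in VRAM.
struct Asset {
    std::string path;
    size_t      sizeBytes;
    bool        resident = false;  // currently in VRAM?
};

// Traditional approach: load everything the level *might* need up front,
// because pulling assets from a slow HDD mid-frame would cause hitches.
size_t preloadEverything(std::vector<Asset>& levelAssets) {
    size_t vramUsed = 0;
    for (auto& a : levelAssets) {
        a.resident = true;          // speculative: much of this never gets sampled
        vramUsed += a.sizeBytes;
    }
    return vramUsed;
}

// DirectStorage-style approach: fast NVMe + low-overhead I/O means assets
// can be requested when (almost) needed, so only the working set stays resident.
size_t streamOnDemand(std::vector<Asset>& levelAssets,
                      const std::vector<size_t>& visibleThisFrame) {
    size_t vramUsed = 0;
    for (size_t idx : visibleThisFrame) {
        levelAssets[idx].resident = true;  // issue async read, evict stale assets
        vramUsed += levelAssets[idx].sizeBytes;
    }
    return vramUsed;  // typically far below the preload figure
}
```

The point is just that "load everything up front" is a workaround for slow storage; once reads are fast and cheap, it's the working set that has to fit in VRAM, not the whole level.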
SLI? That's a gaming feature, and you don't even need SLI for gaming anymore thanks to DX12 multi-adapter and the like. Compute workloads do not care one iota about SLI support. NVLink does have some utility if you're teaming up GPUs to work as one, but it's just as likely (for example in huge database workloads, which can consume massive amounts of memory) that each GPU runs the same task in parallel on a different part of the dataset, in which case PCIe handles all the communication needed. The same goes for things like rendering.
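For illustration, this is roughly what that data-parallel pattern looks like in CUDA (a minimal sketch with error handling omitted; the kernel is a stand-in for real work): each GPU gets its own shard over ordinary PCIe, and no NVLink or peer-to-peer transfer is ever involved.

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Stand-in kernel: each GPU processes only its own shard of the dataset.
__global__ void process(const float* in, float* out, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;  // placeholder for real work
}

int main() {
    int gpuCount = 0;
    cudaGetDeviceCount(&gpuCount);
    if (gpuCount < 1) return 1;

    const size_t total = 1 << 24;           // whole dataset
    const size_t shard = total / gpuCount;  // each GPU gets its own slice
    std::vector<float>  host(total, 1.0f);
    std::vector<float*> dIn(gpuCount), dOut(gpuCount);

    for (int g = 0; g < gpuCount; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&dIn[g], shard * sizeof(float));
        cudaMalloc(&dOut[g], shard * sizeof(float));
        // Each shard travels over plain host-to-device PCIe;
        // the GPUs never talk to each other.
        cudaMemcpy(dIn[g], host.data() + g * shard, shard * sizeof(float),
                   cudaMemcpyHostToDevice);
        process<<<(shard + 255) / 256, 256>>>(dIn[g], dOut[g], shard);
    }
    for (int g = 0; g < gpuCount; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();  // wait for this GPU's independent work
        cudaFree(dIn[g]);
        cudaFree(dOut[g]);
    }
    return 0;
}
```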
...and? Increasing the amount of VRAM to 20GB won't change the bandwidth whatsoever, as the bus width is fixed. For that to change they would have to enable more memory channels - we know there are two more of them on the die, so that's possible - but then you're talking either 11/22GB or 12/24GB, the latter of which is where the 3090 lives. The other option is of course to use faster-rated memory, but the chances of Nvidia introducing a new SKU with both twice the memory and faster memory are essentially zero, at least until GDDR6X becomes dramatically cheaper and more widespread. As for the change in memory amount between the 2080 and the 3080, I think it's perfectly reasonable: the amount of memory isn't directly tied to feeding the GPU (it just needs to be enough; more than that is useless), but bandwidth is - and bandwidth has seen a notable increase - and, once again, 10GB is likely to be plenty for the vast majority of games for the foreseeable future.
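The arithmetic is straightforward - bandwidth is bus width times per-pin data rate, and capacity never enters into it (the 20GB line below is the hypothetical being discussed, reusing the 3080's actual bus and memory speed):

```cpp
#include <cstdio>

// GDDR bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
// Note that capacity does not appear in the formula at all.
double bandwidthGBs(int busBits, double gbpsPerPin) {
    return busBits / 8.0 * gbpsPerPin;
}

int main() {
    printf("2080 (256-bit, 14 Gbps GDDR6):    %.0f GB/s\n", bandwidthGBs(256, 14.0));
    printf("3080 (320-bit, 19 Gbps GDDR6X):   %.0f GB/s\n", bandwidthGBs(320, 19.0));
    printf("3090 (384-bit, 19.5 Gbps GDDR6X): %.0f GB/s\n", bandwidthGBs(384, 19.5));
    // A hypothetical 20GB 3080 on the same 320-bit bus and 19 Gbps chips:
    printf("3080 20GB (same bus and speed):   %.0f GB/s\n", bandwidthGBs(320, 19.0));
    return 0;
}
```

448 to 760 GB/s is roughly +70% for the 3080 over the 2080, which is the increase that actually matters for feeding the GPU.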
The entire point of DirectStorage, which Nvidia made a massive point out of supporting with the 3000-series, is precisely to handle this in the same way as on consoles. So that statement is fundamentally false. If a game uses DirectStorage on the XSX, it will also do so on W10 as long as the system has the required components. Which any 3000-series-equipped system will have. Which will, once again,
reduce VRAM usage.
It absolutely makes the statement useless. That's how marketing works (at least in an extremely simplified and partially naive view): you pick the best aspects of your product and promote them. Analysis of such statements very often shows them to be meaningless when viewed in the most relevant context. That doesn't make the statement false - Nvidia could likely make an Ampere GPU delivering +90% perf/W over Turing if they wanted to - but it makes it misleading, given that it doesn't match the in-use reality of the products that are actually made. I also really don't see how the +50% perf/W claim for RDNA 2 can be reflected in any reviews yet, given that no reviews of any RDNA 2 product exist yet (which is natural, seeing as no RDNA 2 products exist either).
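Both things can be true at once because perf/W depends entirely on where on the voltage/frequency curve you measure. A toy model (the scaling law and the numbers are illustrative assumptions, not Ampere measurements) shows how large the gap between a cherry-picked efficient point and a shipping product can be:

```cpp
#include <cstdio>

// Toy model (an assumption, not measured data): dynamic power scales
// roughly with V^2 * f, while performance scales roughly with f alone.
double relPerfPerWatt(double voltage, double clockGHz) {
    double perf  = clockGHz;                      // perf ~ f
    double power = voltage * voltage * clockGHz;  // P ~ V^2 * f
    return perf / power;                          // simplifies to 1 / V^2
}

int main() {
    // Hypothetical operating points on the same chip's V/f curve.
    double efficient = relPerfPerWatt(0.70, 1.2);  // conservative clocks/voltage
    double shipping  = relPerfPerWatt(1.00, 1.9);  // pushed hard for peak FPS
    printf("perf/W at the efficient point is %.1fx the shipping point\n",
           efficient / shipping);
    return 0;
}
```

Under these made-up numbers that's roughly 2x the perf/W from nothing but running the same silicon lower on its V/f curve - which is exactly the kind of operating point a marketing slide can legitimately, but misleadingly, quote.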