I haven't run into VRAM limitations with any of my GPU purchases in the past 8 years, with the sole exception of the R9 Fury X (for obvious reasons), so I couldn't really answer. I normally don't buy lower-end GPUs; I always buy cards at the halo tier, so opting for a 4080 this time is most unusual (and frankly motivated by money, as I wanted to buy an expensive display and overhaul the rest of my PC as well). The few games I played on a 4 GB GPU recently (my laptop's 3050M) did require compromises in texture quality to be playable, but they were playable nonetheless, at settings I personally consider acceptable for a low-end laptop GPU.
Anyway, let's be realistic for a moment here. When the 1080 Ti launched 7 years ago in the Pascal refresh cycle, 11 GB was truly massive. And we all know that the 11 GB figure came about because Nvidia removed a memory chip purely to lower the bandwidth, ensuring that the 2016 Titan X retained a similar level of performance (despite having slower memory) and that a significant enough gap remained below the then-new Titan Xp, which came with the fully enabled GP102 chip and the fastest memory they had at the time. For most of its lifetime you could simply enable all settings and never worry about VRAM.
GPU memory capacities began to rise with the Turing and RDNA generation, but the performance-segment parts (RTX 2080, 5700 XT) were still targeting 8 GB, and those cards weren't really all that badly choked by it. Then RDNA 2 came along and started handing out 12 and 16 GB like candy, but even that didn't give AMD a very distinct advantage over Ampere, which was still lingering at 8-10 GB, and eventually 12 GB for the high end. Nvidia positioned the 24 GB RTX 3090 to assume both the role of gaming flagship and that of a replacement for the Titan RTX, removing a few of the latter's perks while technically lowering the MSRP by $1,000 (which was, tbh, an acceptable tradeoff). We know reality turned out differently, but if we exclude the RTX 3090, only AMD really offered high VRAM capacities.
This generation started to bring higher VRAM capacities on both sides, and it's only now, as games designed first and foremost for the PS5 and Xbox Series X (featuring advanced graphics, new texture streaming systems, tons of high-resolution assets, etc.) become part of gamers' rosters and benchmark suites, that we've begun to see that capacity utilized to a greater extent. IMO, this places cards like the RTX 3080 in a bad spot, but is it really a deal breaker? With the exception of a noteworthy game I'm about to mention, we've yet to see even the 3080 fall dramatically behind its AMD counterpart despite having 6 GB less.
I agree, more VRAM is better. But realistically speaking, it's not a life or death situation, at least not yet, and I don't think it will be for some time to come, especially if you're willing to drop textures from "ultra" to "high" and skip extreme raytracing settings. Perhaps the only game where 10-12 GB has become a true limitation at this point is The Last of Us, which seems to use VRAM so aggressively that it won't perform well on a GPU with less than 16 GB, as evidenced here by the RX 6800 XT winning out against the otherwise far superior 4070 Ti.
Frankly, a game that *genuinely* needs that much VRAM probably has more than a few optimization issues, but I digress. We're not "fighting against memory increases"; I'm just making the case that current VRAM amounts are adequate, if perhaps slightly lacking across most segments, and as you can see, even the scenario mentioned above involves 4K and very high graphics settings.