Nope, they never were. Double-VRAM cards catered to a good 10~15% of market demand, aimed at SLI/Crossfire buyers who would drain the late-in-generation stock for Nvidia and AMD.
Okay, I should have added the qualifier "for the past 3-4 generations (i.e. since SLI/CF died)". You're right that they served a purpose back when that was still a feasible approach to improving performance. Me cheaping out and getting the 512MB HD 4850s instead of the 1024MB ones back in 2008 or so was exactly the reason why I had to get a 6950 in 2011 - there was no way those 512MB cards could perform passably at 1440p when I got my U2711. But then, 1440p gaming in 2011 was ... pushing it.
It was always a great way to move units, and for those reasons dual-GPU setups were always priced about 10% more favorably in FPS/dollar, even if you took some support issues in your stride.
Sure. But then SLI/CF has been entirely irrelevant since at least 2015. Yes, I know even the 10 series supported SLI, but by that point game/driver support was so poor as to be essentially nonexistent.
Spec illiteracy was never expected beyond the midrange, and never catered to. And it isn't today.
Yes it absolutely is. Gaming has exploded in recent years, and the sheer volume of GPU sales today, despite the ridiculous pricing, is by itself proof that people are willing to pay wildly unreasonable prices compared to what the products are worth. A significant part of that can likely be attributed to people either new to the hobby or uninterested in learning, who just want to buy something good. There will always be more uninformed customers than well-informed ones. Period. Thus, catering to the spec-illiterate is a massive part of marketing, and always has been. The main change in PC gaming is that a decade ago it was still a relatively niche hobby mostly (but not entirely) limited to enthusiasts, while today it's ubiquitous.
There is only market demand and the question of how we sell units. What part of that is marketing and what's real demand? Good luck drawing that line. A salesman knows: business is about 'creating demand'. Demand is demand. Today, Nvidia releases those units because there is demand, because the market spoke out against low VRAM amounts, or because Nvidia wasn't capable of sourcing enough chips for double the amount - or any unholy combination of these factors.
You're contradicting yourself several times here. You say demand can be created, yet you claim "the market spoke out against low VRAM amounts", as if that occurred spontaneously and out of nothing. Or, maybe, VRAM amounts have been heavily marketed since at least Vega? One-upmanship in VRAM amounts has been a popular GPU marketing sport for quite a few years. You're entirely right that the relationship between marketing and demand is incredibly complex and anything but linear - I've never claimed otherwise - but ideas and beliefs do not appear spontaneously, and there is a staggering amount of superstition and misunderstanding, even among enthusiasts, about how PCs work. Nobody is immune to this - this debate is proof of that.
Still, your assertion that "today, Nvidia releases those because there is demand" is a cop-out. They have a responsibility to educate and not mislead their customers. And they certainly aren't forced by "the market" to produce dumb SKUs. They can, because doing so is an easy and cynical way of playing off customer superstitions, earning them more money as they can charge more of a premium for """better""" products. But, especially taking Nvidia's market position and massive mindshare into account, they could just as well not do so, and would likely sell just as many GPUs. The number of customers lost to 8GB AMD competitors would be microscopic.
If you apply the marketing tactic about VRAM to the segment below midrange, i.e. OEM midrange prebuilts, laptops, and casual gaming GPUs, then yes, you would be right - that is the segment of illiterate buyers who say 'moar better' without looking at what's behind the numbers.
You seem to be under the impression that only well-informed enthusiasts are willing to spend a couple of thousand dollars on a PC. This was true even half a decade ago, but today? Not even close. There are tons of reasonably wealthy idiots in the world, and an ever-increasing number of them are gamers.
The 3090 also got a double VRAM version, which was specifically aimed at 'creators'. Similarly, Titans were marketed specifically at some gray area of enthusiasts (the last group you'd expect to be tech illiterate) who would be semi-pro as well.
Don't all 3090s have 24GB? The 3090 is a fundamentally weird GPU: it doesn't get creator-focused drivers (it's not a Titan), but costs as much, while having 2x the RAM such a card would need. There's a reason they launched a 12GB 3080 Ti - it performs the same at a lower BOM cost. The 3090 is IMO mainly a flex - it's Nvidia making a true flagship SKU, one that is "extra everything" without it really making sense beyond demonstrating that it can be done. Which is precisely where ultra-luxury products tend to live.
No, because I didn't let it past the limit. But I'm sure that's what would happen at 1440p.
But don't panic, this is just one example - RE games are notorious for eating VRAM, just like the RE2 remake did back in the day.
Not panicking at all, I was just curious - "letting it past the limit" is exactly what would have made this interesting, as I sincerely doubt you would have seen performance issues before you exceeded 8GB by a relatively significant amount of allocated data - as illustrated by the FC6 examples in this thread. Reported VRAM usage is not equal to actual VRAM usage.
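To illustrate that allocation-vs-usage distinction, here's a minimal sketch (assuming the nvidia-ml-py Python bindings and an Nvidia card - the approach and formatting are purely illustrative) of pulling the driver-level numbers that monitoring overlays are built on. Note that everything it reports is allocated memory, not what the game actively touches each frame:

```python
# Sketch: query driver-level VRAM numbers via the nvidia-ml-py bindings
# (pip install nvidia-ml-py). Everything reported here is *allocated*
# memory - the driver has no idea how much of it the game actually
# touches each frame, which is the allocation-vs-usage gap above.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total: {mem.total / 2**30:.1f} GiB")
print(f"allocated (reported as 'used'): {mem.used / 2**30:.1f} GiB")
print(f"free: {mem.free / 2**30:.1f} GiB")

# Per-process allocations for graphics workloads, i.e. games
# (usedGpuMemory can be None depending on platform/permissions):
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    if proc.usedGpuMemory is not None:
        print(f"pid {proc.pid}: {proc.usedGpuMemory / 2**30:.1f} GiB allocated")

pynvml.nvmlShutdown()
```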
Sometimes I wish VRAM slots became a thing. My brother had such a card (because the seller was convinced it was the next big thing), but I don't remember what card it was. Riva? Matrox?
I was about to say, didn't some GPUs have that back in the really old days, when GPUs were new? I sincerely doubt it would be doable today though - SODIMMs have a 64-bit interface after all, so you'd need four SODIMMs spaced at equal distance from the package just to match a 256-bit bus. Even on the back of the card that would be really difficult. You could of course make a denser connector for GPU memory modules, perhaps some sort of mezzanine connector with a ton of pins, but that would get expensive fast, and signal integrity and power would still be issues.

One possible stopgap solution would be to stick a single SODIMM socket on the GPU for a second layer of RAM - not for direct GPU access, but for pre-pre-caching. A 16GB DDR4-3200 SODIMM is pretty cheap, needs little power, and would be able to feed the VRAM much faster than any storage medium, even using DirectStorage. That would likely be sufficient to alleviate nearly any VRAM bottleneck, as long as the game is even remotely well written in how it streams in assets.
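For a rough sense of the numbers behind that idea, here's a back-of-envelope sketch (assuming the standard 64-bit DDR4 SODIMM interface; the NVMe and GDDR6 comparison values are ballpark peak figures, not measurements):

```python
# Back-of-envelope bandwidth math for the SODIMM-on-GPU idea above.
# All figures are theoretical peaks; real-world throughput is lower.

SODIMM_BUS_BITS = 64    # standard DDR4 SODIMM interface width
DDR4_3200_MTS = 3200e6  # transfers per second for DDR4-3200

def ddr4_bandwidth_gbs(modules: int) -> float:
    """Peak bandwidth of `modules` DDR4-3200 SODIMMs in parallel, in GB/s."""
    return modules * SODIMM_BUS_BITS / 8 * DDR4_3200_MTS / 1e9

# One SODIMM as a pre-caching tier: ~25.6 GB/s
print(f"1x SODIMM:  {ddr4_bandwidth_gbs(1):.1f} GB/s")

# Four SODIMMs to mimic a 256-bit VRAM bus: ~102.4 GB/s
print(f"4x SODIMMs: {ddr4_bandwidth_gbs(4):.1f} GB/s")

# Approximate peak figures for comparison:
print("PCIe 4.0 NVMe SSD:       ~7 GB/s")
print("RTX 3070 GDDR6, 256-bit: ~448 GB/s")
```

Even a single SODIMM outruns the fastest consumer NVMe drives several times over, but even four of them fall an order of magnitude short of actual GDDR6 - which is exactly why a socket like this would only make sense as a caching tier rather than as directly attached VRAM.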