"conventional PCI Express connectors can adequately handle power demands up to 375 W" - considering that no consumer card should eat more than 375 W, there should be no need of a 12-pin connector on a consumer card, ever.
Even at 250W things get loud and challenging to cool, so somewhere around 300W would be a hard limit for most.
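For anyone wondering where the 375 W figure comes from, it's just the sum of the rated limits of the conventional power sources: 75 W from the slot plus 150 W per 8-pin plug. A back-of-the-envelope sketch (connector ratings per the PCIe CEM spec; the slot + 2x 8-pin layout is just an example configuration):

```python
# Rated power limits of conventional PCIe power sources (PCIe CEM spec).
PCIE_SLOT_W = 75      # power delivered through the x16 slot itself
SIX_PIN_W = 75        # 6-pin auxiliary connector
EIGHT_PIN_W = 150     # 8-pin auxiliary connector

def board_power_budget(six_pin=0, eight_pin=0):
    """Total rated power budget for a card using conventional connectors."""
    return PCIE_SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

# A typical high-end layout: slot + 2x 8-pin = 375 W, the figure quoted above.
print(board_power_budget(eight_pin=2))   # 375
```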
Props to AMD; why fix something that isn't broken?
Wait, wut? How many revisions does this new connector need?
I think this XKCD strip is highly relevant:
It's yet another standard we didn't need and nobody asked for; that should have been the end of the discussion. Don't forget it's generally more valuable to have a lasting standard than to have the "optimal" one.
Wrong. 10 GB is miserable even at 1440p.
Today the bare minimum is 14 GB, but to stay future-proof for at least 2-3 years you need 20 GB.
Can we please cut it with this nonsense?
Each architecture utilizes VRAM differently; comparing VRAM size across vendors is a fool's errand. And large VRAM isn't "future-proofing", not unless you want to look at pretty slide shows.
It's only AMD fans who don't know what it means
VRAM allocation and VRAM usage? Right?
<snip>
Better to sound rational than to talk BS about prices and VRAM 24/7 like some butthurt AMD fans
And yet most of them buy Nvidia cards anyway when the dust settles…
Currently, unless you are pushing the most extreme settings at 4K resolution, 16 GB+ is nothing but comfy; that is all. Realistically, you are looking at very few games that can make real use of more than 16 GB, mostly modded Creation Engine games with high-resolution texture mods. The last time I was playing Fallout 76, I had some texture mods that constantly ran my 3090 into the 24000 MB range. Not exactly optimized, regardless.
There are many uses for large VRAM outside gaming, but in gaming it's mostly a gimmick. Graphics cards don't have enough bandwidth to utilize it in a single frame anyway.
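To put rough numbers on that bandwidth point: even a card with very high memory bandwidth can only touch so much data in one frame time. A quick sketch (the ~936 GB/s figure is the RTX 3090's rated memory bandwidth; the frame rates are just examples):

```python
# Rough upper bound on how much VRAM a GPU could even read in one frame,
# assuming it did nothing but stream memory at its peak rated bandwidth.
BANDWIDTH_GBPS = 936  # approx. rated memory bandwidth of an RTX 3090, in GB/s

for fps in (30, 60, 120):
    frame_time_s = 1 / fps
    max_gb_per_frame = BANDWIDTH_GBPS * frame_time_s
    print(f"{fps:>3} fps: at most ~{max_gb_per_frame:.1f} GB touched per frame")

# At 60 fps that's roughly 15-16 GB as an absolute ceiling, and real frames
# re-read many of the same buffers, so the unique working set per frame is
# far smaller than a 24 GB pool.
```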
In regard to (unofficial) texture mods:
1) Results for most game engines will be undesirable, as more advanced engines use calibrated LoD algorithms, textures, shaders and sometimes dynamic loading of assets. LoD algorithms mix multiple mip levels of a texture, so if you just replace (some) textures with higher-resolution ones without recalibrating everything, the result is generally very wasteful. In the best case you get very high VRAM allocation for very little visual "improvement" (most of it wasted in unused mip levels); in many cases you also get flickering, glitching, or loading issues/pop-in where assets are streamed dynamically. (See the first sketch after this list for how quickly the memory cost grows.)
2) What is the added benefit?
Unless someone has access to higher-quality raw material or creates new, better assets, we're not really adding truly better textures. In most cases it's just upscaled textures with some noise and filtering added, so there isn't any more information in the texture, just an illusion of higher resolution. It's just as pointless as these "AI" upscaling algorithms - no real information is added. (Most games can, and some already do, achieve the same result with a "detail texture" or noise added in shaders at very little cost; see the second sketch below.)
Or, an analogy: it's about as smart as turning up the sharpness on your TV believing you get a better picture.
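A rough illustration of the memory math behind point 1: a full mip chain only adds about a third on top of the base level, but every doubling of texture resolution quadruples the cost, so swapping 2K textures for 8K ones multiplies the per-texture footprint by ~16x even though the renderer mostly samples the lower mips anyway. A minimal sketch (assumes simple uncompressed RGBA8 textures; real games use block compression, but the scaling is the same):

```python
# VRAM cost of a square texture with a full mip chain (uncompressed RGBA8,
# 4 bytes per texel). Block-compressed formats are smaller but scale the same way.
def texture_bytes(size, bytes_per_texel=4, with_mips=True):
    total = 0
    while size >= 1:
        total += size * size * bytes_per_texel
        if not with_mips:
            break
        size //= 2
    return total

for res in (1024, 2048, 4096, 8192):
    mb = texture_bytes(res) / (1024 ** 2)
    print(f"{res}x{res} + mips: ~{mb:.0f} MB")

# ~5 MB at 1K, ~21 MB at 2K, ~85 MB at 4K, ~341 MB at 8K per texture:
# each doubling of resolution roughly quadruples the footprint, while at
# normal viewing distances the sampler still mostly hits the lower mips.
```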
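And a toy version of the "detail texture" trick mentioned in point 2: instead of storing a huge texture, the shader tiles a tiny high-frequency detail map over the base texture at render time, adding apparent fine grain for a few kilobytes of VRAM. This is a CPU-side NumPy sketch of the idea (the real thing is a couple of lines in a pixel shader; the array sizes and blend strength here are just example values):

```python
import numpy as np

rng = np.random.default_rng(0)

# A low-resolution base texture (stand-in for a game's albedo map), values in 0..1.
base = rng.random((256, 256))

# A tiny tiling "detail texture" holding only high-frequency noise around zero.
detail = rng.random((32, 32)) - 0.5

def apply_detail(base, detail, tiles=8, strength=0.1):
    """Tile the detail map over the base texture and blend it in.

    This mimics a pixel shader sampling a detail texture at a higher UV
    frequency than the base texture: perceived fine detail goes up, but no
    new information about the underlying material is added.
    """
    tiled = np.tile(detail, (tiles, tiles))        # 32 * 8 = 256, matches base
    return np.clip(base + strength * tiled, 0.0, 1.0)

result = apply_detail(base, detail)
print(base.shape, detail.shape, result.shape)      # (256, 256) (32, 32) (256, 256)
```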