I don't count AMD at all. Noisy, worth nothing, except for Macs. A 256-bit bus is really low; 512-bit is essential today, and I want to see 512-bit HBM. Core count doesn't always translate into more productivity, and AMD is not good at ray tracing. I haven't bought an AMD graphics card in nearly 15 years. I haven't forgotten the failing Sapphire cards, dead all of a sudden. My last one was an HD 6850. At that time Nvidia was more or less DX12 capable starting with the 4xx series, but AMD was not. You can still use 4xx and 5xx GeForce cards with Windows 11, but not AMD cards from that era. A new driver comes from Nvidia nearly every three weeks; from AMD, not even every three months. Never use AMD; leave them to the Macs...
Wow, it's really clear that you haven't been near an AMD GPU in a decade if that's how you think things are. AMD has released at least one GPU driver per month for all of 2022 (that's as far back as their 'previous drivers' page goes; current drivers are listed separately), and from my recollection for far longer than that. Whether it's important to you that a 2010 GPU still works in W11 is for you to decide, but I don't see that as a big issue - if your GPU is that old, most likely the rest of your hardware isn't W11 compatible anyway. It also obviously stands to reason that a much larger company like Nvidia will have more resources for long-term support. Also, there's no W11 driver download on Nvidia's site for anything older than the 600 series, FWIW.
Oh, and what you're saying about memory buses is nonsense - you can't just do a 1:1 comparison between bus width today and ten years ago and pretend that it's the same. The VRAM itself is far faster, you have memory compression leading to significant speedups on top of that, and of course memory is utilized very differently in games today vs. 10 years ago (asset streaming vs. preloading, etc.). It's still true that overall effective memory bandwidth has gone way down relative to the compute power of GPUs (mostly because compute has gone up massively, while memory hasn't become all that much faster), but that's unavoidable if you want to keep GPUs in usable form factors and at even somewhat affordable prices.

Oh, and 512-bit HBM would suck. The whole point of HBM is its massive bus width - the Fury X had a 4096-bit bus. HBM clocks much lower than GDDR, but makes up for that with more bus width. What you want is at least 2048-bit HBM (current HBM is 4-8 times faster per pin than the HBM1 on the Fury X), but ideally even more.
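If you want to sanity-check the bus-width argument, it's just arithmetic: peak bandwidth is bus width times per-pin data rate. Here's a minimal back-of-the-envelope sketch (Python); the data rates and TFLOP figures are the commonly quoted spec-sheet numbers, the 512-bit and 2048-bit HBM configs are hypothetical, and everything should be treated as approximate:

```python
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
# Data rates below are the commonly quoted spec figures; treat them as approximate.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

configs = {
    "HD 6850  (256-bit GDDR5  @ 4 Gbps)":     (256, 4.0),
    "Fury X   (4096-bit HBM1  @ 1 Gbps)":     (4096, 1.0),
    "RTX 4090 (384-bit GDDR6X @ 21 Gbps)":    (384, 21.0),
    "Hypothetical 512-bit HBM1  @ 1 Gbps":    (512, 1.0),
    "Hypothetical 2048-bit HBM3 @ 6.4 Gbps":  (2048, 6.4),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gb_s(width, rate):.0f} GB/s")

# Bandwidth relative to compute (GB/s per FP32 TFLOP), to show how far memory has
# fallen behind compute; TFLOP figures are approximate boost-clock numbers.
print(f"GTX 980:  {224 / 4.6:.0f} GB/s per TFLOP")    # 256-bit GDDR5 @ 7 Gbps
print(f"RTX 4090: {1008 / 82.6:.0f} GB/s per TFLOP")  # 384-bit GDDR6X @ 21 Gbps
```

At HBM-class per-pin rates, a 512-bit interface lands below even a 2010-era 256-bit GDDR5 card, which is the whole point: HBM only makes sense when the bus is very wide. The last two lines also show roughly how much bandwidth per unit of compute has fallen.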
It's just as interesting how some people minimize one of the most successful video card launches ever. Probably only Pascal rivals this kind of performance jump from one generation to the next (in gaming, at least), yet these "others" keep sending the discussion into the weeds. I'm wondering why?
The prices of raw materials and energy have increased enormously, and salaries have also increased - but just wait for the good times, when an RTX x090 will be 400% faster than its predecessor and cost $49.90.
Prices are closely related to demand. If the demand is high, the prices will be high.
RTX 4090:
1. The first video card that delivers high (60+) fps with maximum details (including RT) at 4K
2. Decent fps at 8K
3. An explosion in content creation - by far the biggest jump in performance from one generation to the next
The 4090 is definitely a very fast GPU, but there have been plenty of calculations done showing that its generational gains aren't that special - it's just that the past couple of generations have had particularly small gains, while this is more of a return to the previous norm. On the other hand, it only manages this on the back of a 1.5-2x node jump.
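To make explicit what those calculations actually are: nothing more than ratios of average benchmark results between each flagship and its predecessor. A minimal sketch below; the index values are hypothetical placeholders to show the arithmetic, not measured results:

```python
# Generational uplift = (new flagship score / previous flagship score) - 1.
# The index values are HYPOTHETICAL placeholders, not benchmark data.

flagship_index = {
    "gen N-2 flagship": 48.0,   # hypothetical relative-performance index
    "gen N-1 flagship": 62.0,
    "gen N flagship":   100.0,
}

names = list(flagship_index)
for prev, curr in zip(names, names[1:]):
    uplift = (flagship_index[curr] / flagship_index[prev] - 1) * 100
    print(f"{curr} vs {prev}: +{uplift:.0f}%")
```

Plug in whichever review's averages you trust; the point is just that the latest ratio looks big partly because the couple of ratios before it were unusually small.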
Also, BOM costs are indeed higher than previously, so some cost increases do make sense, but you need to remember that this is a GPU with an MSRP 2-3x higher than its predecessors (except for the 30 series, which brought prices to this level to begin with). And salaries have increased? Really? Where? For whom? In most of the Western world, middle-class wages have been stagnant for decades.