Mostly thanks to the 3080 being slightly overtuned (the first xx80 ever to come with a 320-bit bus) and the double-layer VRAM hindering the 3090 die's power budget.
It's technically true that the 3080 is the only xx80 with a 320-bit bus, but the GTX 480, 580, and 780 all had 384-bit buses, wider than 320-bit, so the point doesn't really stand, IMO.
Aside from that, differences in memory buses are more a result of RAM technology and cache design than a direct indicator of a GPU's performance tier.
For example, the RTX 3060 Ti had a 256-bit bus, but competed against the RX 6700 XT, which had a 192-bit bus and extra cache. The RTX 4070 occupied the same or a higher tier in the next generation, and, similar to the 6700 XT, had a 192-bit bus with extra cache. The 320-bit RTX 3080 was slower and had much less VRAM capacity than the 256-bit RX 6900 XT.
Bus width is part of the comparison, for sure, but I don't think it makes sense to base judgements of a GPU on bus width alone, without also accounting for the cache or the type and capacity of VRAM connected to it.
Either way, I can still concede the argument that the RTX 3080 was overtuned. But even if it had been based on the GA103 die (which is what Nvidia had allegedly originally intended, before realising that Samsung 8nm yields were worse than expected, and Samsung supposedly gave them a better deal on GA102), the RTX 3090 would have still only been about 15% faster than the 3080.
Plus, the RTX 3090 Ti didn't have double-layer VRAM, yet still wasn't much faster than the 3090, had atrocious efficiency, and cost a ridiculous $2000.
It deserved all the freedom in the world for that. There were zero GPUs faster than it at the time, and in a free market, the best decides what it costs.
I don't agree at all with the implication here that being the fastest GPU justifies charging arbitrarily high prices. The RTX 4090 had even less competition than the RTX 3090, and delivered a huge uplift over the RTX 4080 (which was itself significantly faster than the RTX 3090, and significantly more expensive than the 3080), despite the 4090 not being much more expensive than the 3090. The RTX 4090 actually justified its price compared to the 4080.
The RTX 3090 was just bad except for mining and AI, and it doesn't get anywhere near as much criticism as it deserves (I guess at least it was significantly cheaper than the Titan RTX? But 24GB of VRAM wasn't as revolutionary as it was the generation before, and the Titan supported a few Quadro/Pro driver features which the 3090 didn't). The 6900 XT was 90% as fast as the RTX 3090 and more efficient, for 2/3 the price, while the RX 7900 XTX was only about 80% as fast as the 4090 and didn't have an efficiency advantage.
...
I agree with your point about the Titan RTX's power limit, but that's also applicable to most other Titan GPUs, most of which had the same TDP as their 80 Ti counterparts. A stock Titan X Pascal was often slower than a 1080 Ti, but had more cores and VRAM, and could be significantly faster if overclocked with a good enough cooler.
The 5080's main problem is the lack of a 9090 XT to show it its place. Simple as that. It's surely weaker than we all wanted it to be, but it's still not totally stagnant.
I definitely agree with that.
I would love it if the RX 9070 XT is able to match the RTX 4080 Super, as some (possibly optimistic?) leaks have indicated. If it's <$600 and only 5-10% slower, maybe AMD actually will have something to show the 5080 its place?
It would still be a much more definitive showing if AMD had a 9080 XT which matches the 5080 at a lower price, and a 9090 XT which beats it while still being substantially cheaper than the 5090.