Others would say a 7900 XTX let them play native 4K60 for half the price, if sometimes not much less; plus 1080p RT, not unlike a 9070 XT for $600 or a still-$1000 nVIDIA GPU with less raster performance/RAM.
Granted, without FSR4 or DLSS3, but that's the point of progress: now you get those features cheaper (and with better up-scaling quality), but not the raster/RAM for 4K.
With nVIDIA, the needle has moved zero since the RTX 40 series, outside of selling the small compute/bandwidth difference from 4080 to 5080 so the former can eventually be outdated; AMD rebalanced for the actual specs/markets.
And without leaning on features whose playability gets outdated over time as new generations move the goalposts, while raster stays stationary; RT will likely stay stationary now as well, given we finally have a standard.
That standard is ~50 TF for 1080p60 (or 1440p up-scaled) on RDNA4/nVIDIA.
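To put a rough number behind that, here's a back-of-the-envelope sketch; shader counts and boost clocks are approximate public specs, not measurements, and peak TF is only a loose proxy for game performance:

```python
# Rough peak-FP32 estimate: TFLOPS = shaders * issue * 2 (FMA) * clock_GHz / 1000.
# RDNA4's quoted figure counts dual-issue (issue=2), so the numbers
# aren't perfectly comparable across vendors.

def tflops(shaders: int, clock_ghz: float, issue: int = 1) -> float:
    return shaders * issue * 2 * clock_ghz / 1000

print(f"9070 XT : ~{tflops(4096, 2.97, issue=2):.1f} TF")   # AMD quotes ~48.7 TF
print(f"RTX 4080: ~{tflops(9728, 2.51):.1f} TF")            # ~48.8 TF
print(f"RTX 5080: ~{tflops(10752, 2.62):.1f} TF")           # ~56.3 TF
print(f"RTX 4090: ~{tflops(16384, 2.52):.1f} TF")           # ~82.6 TF
```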
Now we can pretty safely say things like: the PS6 will have that spec plus whatever extra grunt/RAM it takes to up-scale to 4K60. Which, btw, the 5080 doesn't. That is another target it is purposely cut off from achieving.
On purpose. Eventually all these little things will add up and you'll realize this is what nVIDIA does. They do it on purpose. They don't have to, but they do. To kill longevity and force upgrades.
What you see as technological advantages are in fact genius marketing gimmicks: using a smaller area of the die (for lower-precision 4/8-bit ops) to do work that would otherwise fall to the FP32 units.
Which is why products like the 5080 exist: not a 4K GPU (or even a great 1440p RT GPU), yet still $1000, sold on those gimmicks (such as 1080p->1440p/4K up-scaling with FG) instead of actual performance.
This saves die space and lets them sell a cheaper/smaller chip crutched on those features. Follow? It is, in fact, genius. You must understand WHY they do it, not only that what you see as benefits exist.
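A minimal sketch of the arithmetic behind that claim, assuming throughput scales inversely with element width through a fixed-width datapath (real die-area scaling is messier, but the direction is the same):

```python
# Elements per cycle through a fixed-width datapath, by precision.
# This (crudely) is why low-precision "AI" throughput looks enormous next to
# FP32 shader throughput on the same silicon budget.

DATAPATH_BITS = 128  # hypothetical SIMD/tensor lane width, for illustration only

for name, bits in [("FP32", 32), ("FP16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name:4s}: {DATAPATH_BITS // bits:2d} ops/cycle per {DATAPATH_BITS}-bit lane")
```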
The 4090 is a very good GPU. It is/was overpriced, but it is in fact a very good GPU. It has lasted and will last a long time, although nVIDIA has already started letting it slip under 60fps in 4K or 1440p RT up-scaling scenarios; they won't go that route completely until they can sell a cheaper replacement. This will continue as further features and/or implementations of DLSS/FG get incorporated into more games.
Please understand most people do not buy a 90-class card; they buy the stuff that gets obsoleted or is not quite good enough for the spec it's advertised at, and is really only good enough for the spec AMD's GPU one tier down already covers.
The 5080 is only one DLSS update (or one spec hike from replacement GPUs) away from not being able to stay inside the 48Hz VRR range of most monitors at 1440p RT.
Remember that I said that when it stutters after those occur. BECAUSE THAT IS WHAT NVIDIA DOES. They plan it. Because they are very smart, and people are very short-sighted.
I am going to laugh come UDNA and Rubin. You will realize the 4090 is not quite as high-end as you believe. It is/was a very good chip for its time, and a decent spec, but AMD will absolutely compete.
Again, you can pretty much understand the stack as 6144 SP x 1/2/3 at high clocks (or x 1/2/3/4 at low clocks), each across a multiple of a 128-bit bus. A 4090 would be less than a 2x setup at 3nm clocks.
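To put numbers on that (a minimal sketch; the 6144-SP block, the 128-bit bus multiple, and the ~3.4 GHz clock are my speculation, not announced specs):

```python
# Hypothetical UDNA stack built from a 6144-SP / 128-bit building block.
# Block size, bus width, and the ~3.4 GHz "3nm" clock are guesses.
# Peak FP32 TF = SPs * 2 (FMA) * GHz / 1000, ignoring any dual-issue.

def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

BLOCK_SP, BLOCK_BUS, CLOCK_GHZ = 6144, 128, 3.4

for mult in (1, 2, 3):
    sp, bus = mult * BLOCK_SP, mult * BLOCK_BUS
    print(f"{mult}x block: {sp} SP, {bus}-bit bus, ~{tflops(sp, CLOCK_GHZ):.0f} TF")

# ~42 / ~84 / ~125 TF -- a 4090 (16384 SP at ~2.52 GHz, ~82.6 TF) lands just
# under the hypothetical 2x block, which is the point above.
```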
nVIDIA will probably sell a 192-bit chip with 9216 SP (to replace the 5080: barely good enough for 1440p RT, but not great for 4K/up-scaling) because that's nVIDIA.
AMD will probably cut down a 256-bit chip (6*1792?) and completely destroy it for less money, because that is AMD. They will make what the 5080 *should* have been, and probably sell it at current 9070 XT prices.
My *guess* is that AMD will do the 1x/2x/3x tiers at ~$400/$800/$1200. I *could* be wrong, but it's feasible. At that point you will not be wrong that you had a very good (if extremely expensive) GPU for four years.
And I will say there is a ~9070 XT for ~$400, capable of native 1080p RT (or 1440p up-scaled); a GPU better than the 4090 that holds 1440p RT (and 4K up-scaling) for less than an '80'-class card; and a much faster GPU, much cheaper than the 4090 ever was, that holds 4K60 RT (unlike the $3000 5090); plus other SKUs (perhaps cut-down) for specific scenarios/use-cases (like 1080p->4K), or a less-than-full bottom-end SKU that can OC for 1080p.
Perhaps at that point you will flaunt path tracing at 1440p (if not only at 4K) for much more money. I will not care. I will tell people to buy the GPU that can do the thing for their sitch.
I will be happy those GPUs do not cost $1000, $2000, and $3000, where nVIDIA often chose to place them respectively (and those earlier nVIDIA GPUs perhaps not quite good enough for those specs long-term).
And I will not be wrong that many more people will buy those and be happy for a long time, at much higher availability and much lower prices.
The importance of each is our own prerogative, but a good general experience for as many people as possible, value, and longevity are mine.
And perspective.
Exactly, so many are too short-sighted to realize just how badly the market is being damaged by Nvidia being unfair to their partners and consumers.
Sure, the GeForce buyers can spend thousands to play games with extra shininess, but ray tracing still isn't there in terms of performance or affordability for the majority of buyers. And instead of increasing performance, Nvidia wants to sell software gimmicks and fake frames.
Largely correct, although obviously there is something to be said for pushing higher op rates and implementing those features (although I would argue not in exchange for other capabilities, as they do).
I would say RT is here with the 9070 XT (for native 1080p, or 1440p users who up-scale with FSR4)! It might not hold 60fps forever, especially if FSR4 improves (to make 1080p->4K better, like DLSS4), but still good.
Certainly within VRR range, given it is literally designed to hold 60fps in the 1080p native / 1440p up-scaled scenario right now. Hence it's probably a safe bet (unlike almost any nVIDIA GPU for any standard spec).
Granted, that *is* still expensive for a lot of people, and I fully respect that. Hence, read what I wrote about UDNA. I think that capability will fall to ~$400, which should bring it to *pretty much* everyone.
Perhaps even cut-down SKUs that can overclock to that playable level (as AMD/ATi has been known to offer). Maybe the stack is cheaper ($350/$700), I don't know, but certainly more accessible/consistent regardless.
Always remember nVIDIA purposely withheld that capability from the 5070 to keep it out of reach of many general consumers, as well as limiting the buffer to 12GB as another obsolescence/upsell technique.
This is why what they do is scummy, and AMD is literally carrying the flag for standardizing and balancing those features across each tier as quickly as possible; I expect that to continue.
Whether nVIDIA continues to do what they do with Rubin (and perhaps PT), or has to create an affordable generation (through a dense process/lower clocks/Micron RAM), I don't know. They could try either.
If the former, it will be their general path repeated. If the latter, it will be because of competition from AMD (especially if AMD redesigns the cache for greater bandwidth and consistent RT/PT clocks/capability).