It depends on what "decent" means to you. Which res? Which refresh rate are you targeting?
My guess is next-gen will be at least halfway there, and the gen after that may finally democratize RT. The only problem is that next gen isn't coming for at least another year.
Yeah. I agree.
I think Navi 5 *could* bring decent RT to the mainstream, while nvidia responds with a new chip and/or series...but that's 2025(/2026?). Regardless, it will probably happen by the time of (and for) the chip design used inside the next consoles, but it will probably remain a premium/unusable feature until then.
I think there's a pretty good chance Blackwell will bring the same divide as Ada. Sure, the 384-bit/512-bit chips will probably be nice (and expensive), but there's a fair chance the chip below them will be 192-bit/32gbps and compete with (and probably slightly beat) N4x/BM(/ad103). RT on all those parts may still (relatively) suck. I still think we're waiting for a 3nm 256-bit chip before RT is realistic for how most people want to use it, and that will come when it's cost-effective for AMD and nVIDIA is forced to relegate their crown jewel to the mainstream. I imagine this might be the 'X' generation spoken of in that slide deck, and that it might be N3P (which AMD may wait to use), but I don't know.
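(For anyone who wants the bus-width math spelled out, here's a quick back-of-envelope in Python. The bus widths and the 32gbps GDDR7 speed are just the speculated configs from above, not confirmed specs.)

```python
# Rough peak-bandwidth math for the speculated Blackwell stack above.
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def peak_bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

for bus in (192, 256, 384, 512):
    print(f"{bus}-bit @ 32Gbps: {peak_bandwidth_gbs(bus, 32):.0f} GB/s")
# 192-bit -> 768, 256-bit -> 1024, 384-bit -> 1536, 512-bit -> 2048 GB/s
```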
I think we're waiting for a few things to line up. That could be a newer/better-yielding/cheaper process (like N3P) as well as 4GB (32Gb) GDDR7. I have to imagine AMD is itching to have a 256-bit/32GB chip to put in a new PS/Xbox, and they might just be waiting for those stars to align before releasing Navi 5. While I could absolutely see nVIDIA trying to get away with 12GB for a native 3nm chip, I just don't think AMD will; I think even 128-bit will be 16GB. That said, there's always the chance GB205 (et al.) uses GDDR6(X), which would make 256-bit/16GB more likely and give a somewhat mainstream GPU a chance at using the feature adequately.
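(Quick sketch of why the capacity guesses shake out that way: GDDR modules are 32 bits wide, so capacity is just bus width divided by 32, times the per-module density. The 4GB/32Gb density is the hypothetical GDDR7 mentioned above.)

```python
# VRAM capacity from bus width and per-module density (each GDDR module is 32 bits wide).
def vram_gb(bus_bits: int, module_gb: int) -> int:
    return (bus_bits // 32) * module_gb

print(vram_gb(256, 2))  # 16 GB -- today's 2GB (16Gb) modules on a 256-bit bus
print(vram_gb(256, 4))  # 32 GB -- hypothetical 4GB (32Gb) GDDR7 on 256-bit
print(vram_gb(128, 4))  # 16 GB -- why even 128-bit could be 16GB with 32Gb modules
print(vram_gb(192, 2))  # 12 GB -- a 192-bit part stuck on 2GB modules
```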
It's just one of those things: once Navi 4/BM release, I don't see why you'd wait for GB205, which will probably offer similar performance, the same or less RAM, and a likely higher price (given how nVIDIA refuses to drop the price on the 4070ti/4080).
While I can't speak to everyone's disposition, I look at this chart as a hint toward the future:
First of all, this uses the 7800xt at 1440p/60 as a baseline, which I think is realistic for what to expect in the future (PS5 Pro). I think 7900xt/4080 is realistic future-proofing until the end of the generation (when games fall to FSR/DLSS 4K balanced 30/60fps, if not occasionally 1080p on the OG PS5).
While speculation, I think it's fair to guess the 4070 Super would land at ~61fps and the 4070 Ti Super (still a weird name) at roughly ~76fps, as that splits the difference between 4070/4070ti and 4070ti/4080 respectively. 4080S, ~90fps.
I truly believe (BM?/)N4xpro will be ~7168sp/2800mhz, similar to PS5 Pro (7680/2600mhz?), 12GB, and slightly faster than a hypothetical 4070 Super. I also think it (they?) will OC into 4070ti territory, just like the 7800xt, while the 4070S will not (because of market segmentation and nVIDIA's proven overclocking suckage). My guess is ~$450, while the 7800xt drops to ~$400.
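(FWIW, the "similar to PS5 Pro" part checks out on paper if you run the FP32 math with those speculative numbers: standard 2 FLOPs per shader per clock, and every figure here is my guess from above, not a confirmed spec.)

```python
# FP32 throughput = shaders * 2 FLOPs/clock * clock (MHz) -> TFLOPS
def tflops(shaders: int, clock_mhz: int) -> float:
    return shaders * 2 * clock_mhz / 1e6

print(f"N4x pro guess: {tflops(7168, 2800):.1f} TFLOPS")  # ~40.1
print(f"PS5 Pro guess: {tflops(7680, 2600):.1f} TFLOPS")  # ~39.9
```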
N4XXT (okay, that's pretty horrible naming too) is a wildcard, but it's important to realize the 7800xt is clocked at a paltry 2425mhz and uses a max of 250W, which is somewhere between a travesty and extremely telling about what they plan for the future, granted, the price/perf is good. We see the 7700xt clock up to 3113mhz (an artificial limit?),
and the computer hardware community's own Freddie Mercury stand-in reminds us the 7900xtx can do ~3200-3300mhz. As I've spoken about before, with 20gbps RAM the 7800xt could've clocked up to 2900mhz and been fine...but AMD distinctly chose not to do that (or even really allow it, via the power limit), clearly favoring lower prices and pushing sales of the 7900xt at that moment in time. With 24gbps we could be talking up to almost 3500mhz as a possibility, if even a slight one. When you take what Apple achieved on N4P/N5P (A16 is 3.46ghz, M2 is 3-3.66ghz), or even the 11% TSMC promises for N4P over N5 (and another 4% for N4X on top of that, if AMD chose to use it), this is actually hilariously realistic. All they really need to do is give it a max PL of 375W (the max of two 8-pin connectors) and let people go ham. I think a more realistic scenario is 8192sp and a stock clock of 3-3.25ghz, depending on how much they care about how the 7900xt looks comparatively, but like I said, wild card.

The important thing to realize is that this card (and conceivably a BM alternative) has to fight against the 4070ti/4070tititititi on pricing. I think ~$600 is a foregone conclusion, but we shall see. Point is, unless AMD completely screws the pooch, the chip could/should literally be up to 40% faster (a typical generational hike) than a 7800xt at stock, or 20% faster than its absolute (overclocked) performance or a stock 4070ti; conceivably right on top of the 7900xt/4080. If not at stock, certainly with overclocking.

I could see AMD locking memory down to 3200mhz (25.6gbps, a ~6.7% bump over 24gbps), but that still gives the GPU room to run ~3500mhz with 8192sp, or 3700+ with 7680sp. This would make anything below a 4090 essentially moot wrt typical playable settings, barring some major price adjustments and/or amazing value SKUs from nVIDIA.
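(To make the headroom argument concrete, here's the rough scaling math I'm leaning on: compute uplift from more shaders/higher clocks vs. bandwidth uplift from faster GDDR6, both relative to a stock 7800xt at 256-bit/19.5gbps/7680sp/2425mhz. These ratios are my own assumptions, not anything AMD has said, and Infinity Cache muddies the picture, but they show where the ~40% figure comes from and why 24gbps+ memory is what opens the door to 3ghz+.)

```python
# Relative compute (shaders * clock) and bandwidth (per-pin data rate) vs a stock 7800 XT.
BASE_SP, BASE_MHZ, BASE_GBPS = 7680, 2425, 19.5

def compute_uplift(sp: int, mhz: int) -> float:
    return (sp * mhz) / (BASE_SP * BASE_MHZ) - 1

def bandwidth_uplift(gbps: float) -> float:
    return gbps / BASE_GBPS - 1

print(f"8192sp @ 3250MHz: +{compute_uplift(8192, 3250):.0%} compute")  # ~+43%
print(f"7680sp @ 3500MHz: +{compute_uplift(7680, 3500):.0%} compute")  # ~+44%
print(f"24.0gbps GDDR6:   +{bandwidth_uplift(24.0):.0%} bandwidth")    # ~+23%
print(f"25.6gbps GDDR6:   +{bandwidth_uplift(25.6):.0%} bandwidth")    # ~+31%
```

That +40-ish% compute is the "typical generational hike" I mentioned; the smaller bandwidth gain is where the memory lockdown and cache come into play.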
I remain incredibly curious what nVIDIA will do in the face of this, because it ain't a pipe dream. It's a very realistic scenario that's likely going to make nVIDIA look like fools wrt pricing, more so than the 7800xt already does. Even if they slot in a 4080 Super, who cares? You *might* get 4K60 DLSS Balanced with RT in AW2, but I still think it's a tough sell imho, because the tangible real-world performance benefit just isn't going to be there, especially at the likely price premium ($1000?).