Once again, we see AMD hurt by their own hype.
I think it's fine that AMD offers comparable rasterization performance to the RTX 4080 at $200 less, even with poor RT performance. Not all gamers need RT yet, so let each buyer decide which is best for them. My only objection is the price of both; $800 for the RTX 4080 and $700 for the RX 7900 XTX would have been fairer prices.
If AMD's recent history is anything to go by, then we can expect two things:
1 - FineWine™ will make both 7900s a bit better, especially that low-load power consumption;
Unfortunately, many people are still making excuses for AMD (whether subconsciously or not).
I'm disappointed to see that even this review makes excuses for "missing driver optimizations". This is the sort of thing I've come to expect from the likes of Hardware Unboxed and LTT, but not TPU.
Architectural changes are normally among the first things implemented in a new driver, usually long before engineering samples are finished. The only reason to postpone implementing a core architectural feature is an issue with the hardware. So I see no reason to expect any significant change from driver updates, at least not anything that would make the product compete at a higher performance tier.
If anything, we should expect AMD's launch drivers to be the most mature. Their architectural changes tend to be more conservative and the corresponding driver changes relatively minor. AMD also have far fewer gimmicks in their drivers.
Except for the odd bug here and there, what we see now is probably what we will see 3 and 6 months from now too.
Ah, yes, launch driver bugs; very typical of AMD. You know, more of you should be demanding “FineWine” performance at launch and not 6-12 months later.
AMD FineWine is a myth. We've heard this nonsense since the 200/300 series, the 400/500 series, the Vega series and so on. So many people expect significant performance to be unleashed "shortly" after release, but it never happens. E.g. the RX 480/580 didn't outclass the GTX 1060 back in the day, and they still don't today.
We need to judge products for what they are, not what they are portrayed as in some "fanboy utopia".
That simply means that the driver compiler is still not optimized for RDNA3 after all this development time. And yes, it's resource-intensive, but most people base their decisions on launch performance.
Driver compiler? Optimized for RDNA3?
The driver compiler is just a normal compiler: MSVC, GCC or LLVM. If you're thinking of the shader compiler, which is part of the runtime driver, then it already has to be tailored to the GPU architecture, otherwise it simply wouldn't work.
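To make the distinction concrete, here is a minimal sketch in C of what a game does at run time. It assumes an OpenGL 3.3 context has already been created (e.g. via GLFW) and that function pointers are loaded with glad; error handling is trimmed. The point is that the GLSL source is handed to whatever driver happens to be installed, and that driver's built-in shader compiler lowers it to the GPU's native ISA, so this component has to understand the architecture (RDNA3, Ada, etc.) from day one.

```c
/* Minimal sketch: run-time shader compilation by the installed driver.
 * Assumes a GL 3.3 context already exists and glad has been initialized. */
#include <stdio.h>
#include <glad/glad.h>

static const char *frag_src =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &frag_src, NULL); /* hand GLSL source to the driver */
    glCompileShader(shader);                    /* driver's shader compiler lowers it to GPU ISA */

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "driver shader compiler error: %s\n", log);
    }
    return shader;
}
```

A driver update can change the code this step generates without the game shipping anything new, which is where any real "driver optimization" for a new architecture would show up.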