It was basically AMD's Maxwell moment... until it all imploded with RDNA3 and the same kind of gains didn't show up again.
You only get so many Maxwell moments, and they only happened because both companies were shipping suboptimal architectures at the time, and it won't happen again because they learn. For Nvidia it was Kepler, which GCN2 later beat; for AMD it was GCN2/3, which Maxwell beat because it was too inefficient and oversized. In general, every GCN version performed below its potential unless you used a low-level API (DX12/Vulkan) or Asynchronous Compute, both of which let the huge shader engine be fed properly. This was especially true of the Fury X. The alternative was very high resolutions, 4K in the Fury X's case: far too many shaders and a suboptimal DX11 driver that struggled to fill them.
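A toy model of that bottleneck, with every number an illustrative assumption (draw counts, per-draw driver cost, and shading rate are all made up): the frame is gated by whichever finishes last, single-threaded driver submission or the shader array, so a wide GPU only pulls ahead once the API cuts submission cost or the resolution piles on enough pixel work.

```python
# Toy frame-time model for a wide GPU (Fury X style) under DX11 vs DX12/VK.
# All constants are illustrative assumptions, not measurements.

DRAWS_PER_FRAME = 5000     # assumed draw calls per frame
US_PER_DRAW_DX11 = 2.0     # assumed cost per draw, single driver thread
US_PER_DRAW_DX12 = 1.0     # assumed cost per draw, submission spread over threads
GPU_US_PER_MPIXEL = 1500   # assumed shading time per megapixel of output

def frame_time_us(megapixels, us_per_draw):
    cpu = DRAWS_PER_FRAME * us_per_draw    # driver/submission side
    gpu = megapixels * GPU_US_PER_MPIXEL   # shader-array side
    return max(cpu, gpu)                   # frame waits on the slower of the two

for label, mpix in [("1080p", 2.07), ("4K", 8.29)]:
    dx11 = frame_time_us(mpix, US_PER_DRAW_DX11)
    dx12 = frame_time_us(mpix, US_PER_DRAW_DX12)
    print(f"{label}: DX11 ~{1e6 / dx11:.0f} fps vs DX12/VK ~{1e6 / dx12:.0f} fps")
```

Under these made-up numbers the DX11 path is CPU-bound at 1080p, so the shaders sit idle, while at 4K both APIs converge once the GPU itself becomes the bottleneck, which is exactly the Fury X pattern described above.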
Chiplets are just not ready unless they find a way to tackle the latencies.
The latency cost it performance, but the main issue was that Nvidia is simply too rich and too good. AMD essentially built a 4080-class part with a 384-bit bus instead of 256-bit plus other superfluous hardware, against a huge 4090 chip it could never compete with: far more transistors. If you normalize that 5/6nm chiplet mix to pure 5nm, it comes out around 450-480 mm² versus the roughly 600 mm² AD102, so there was no chance of competing with a smaller GPU. That missing size shows up almost exactly in the performance gap, about 20-30%. No surprises and no dark magic here; Nvidia isn't doing anything special, just investing more money. That's the upside of concentrating on one product, GPUs, instead of doing them on the side like AMD, whose main business is still CPUs. AMD's GPUs are only really strong in the datacenter (Instinct), not in consumer. They're trying to consolidate that with UDNA, just as Nvidia has done for a long time, at least since Volta.
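As a back-of-the-envelope check on that normalization, here is a minimal sketch. The die sizes are the commonly cited public figures, but the 6nm-to-5nm area scaling factor is purely an assumption for illustration:

```python
# Rough 5nm-equivalent area for Navi 31 (chiplet) vs AD102 (monolithic).

GCD_MM2 = 304        # Navi 31 graphics die, N5 (commonly cited figure)
MCD_MM2 = 37         # each memory/cache die, N6 (commonly cited figure)
NUM_MCDS = 6
AD102_MM2 = 608      # monolithic AD102 (commonly cited figure)

# ASSUMPTION: N6 content would shrink to ~75% of its area on N5.
# SRAM and PHYs scale poorly, so treat this as a loose estimate.
N6_TO_N5_SCALE = 0.75

navi31_n5_equiv = GCD_MM2 + NUM_MCDS * MCD_MM2 * N6_TO_N5_SCALE

print(f"Navi 31, 5nm-equivalent: ~{navi31_n5_equiv:.0f} mm^2")
print(f"AD102:                    {AD102_MM2} mm^2")
print(f"Size deficit:            ~{(1 - navi31_n5_equiv / AD102_MM2) * 100:.0f}%")
```

Under that scaling assumption Navi 31 lands around 470 mm², inside the 450-480 mm² range above, and the roughly 23% area deficit lines up with the stated 20-30% performance gap.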
The two main things that killed RDNA3 are:
- Increased power usage to move data between the memory controller dies and the main die. Power efficiency is still really important today for maximizing performance, and a high-power board costs more to produce than a cheaper one (see the sketch below).
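To put a rough number on that data-movement cost, here is a minimal sketch. The peak bandwidth is AMD's quoted figure for the Infinity Fanout links; the energy per bit is an assumption for illustration:

```python
# Back-of-envelope GCD <-> MCD interconnect power on Navi 31.

PEAK_BW_TBPS = 5.3    # AMD's quoted peak fanout-link bandwidth, TB/s
PJ_PER_BIT = 0.5      # ASSUMPTION: illustrative energy per transferred bit

bits_per_second = PEAK_BW_TBPS * 1e12 * 8
watts = bits_per_second * PJ_PER_BIT * 1e-12

print(f"Interconnect power at peak traffic: ~{watts:.0f} W")
```

That comes out around 21 W at full tilt under these assumptions: not huge against a roughly 355 W board, but it is power a monolithic die would not have to spend, and it comes straight out of the clock and voltage budget.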
Increased power usage doesn't matter; as I already said, it couldn't compete because it was smaller. It competed well with the 4080 and that's it, but it was too expensive to produce for its price. Remember: the 4080 was a much smaller chip with a narrower bus than the 7900 XTX, which was clearly bigger with a more expensive bus configuration, and the 4080 sold for MORE money. The 4080 didn't sell well, but it still sold better than the XTX.
RDNA 3's efficiency was still good, so that was not the issue. Yes, Nvidia's efficiency was naturally better on pure 5nm versus the 5/6nm mix, but AMD was not far off.
If the RDNA3 7900 XTX had beaten the 4090 by at least 10-15% (in raster, at minimum), things could have been different. I think AMD wasn't aggressive enough with RDNA3, and they ended up getting beaten by Nvidia.
They never will be. AMD is a mixed processor company and Nvidia is almost purely a GPU company (aside from the few small ARM CPUs they make), so of course Nvidia will go all-in while AMD will always be spread across multiple things, weighted toward its traditional CPU business. Ryzen is in fact the GeForce of CPUs and has the same (toxic) mindshare at times.
Nvidia went all out. AMD didn't, and that is why they lost that generation.
AMD hasn't won against Nvidia in over 15 years, and back in HD 5000 times it only happened because the GTX 400 series was a hot and loud disaster. Funnily enough, that was a mid-size chip on a new node beating Nvidia's huge chips, both the new one and the older ones on an older node (GTX 400 and GTX 200). The only other small "win" was the R9 290X, and that was very temporary: it was a bit faster than the 780 and Titan, and Nvidia's answer, the 780 Ti, came quickly, so I don't count that as a W for AMD. In other words, the GPU branch was still named "ATI" the last time AMD had a W against Nvidia, and the HD 5850/5870 sold out as well.