AMD should have priced the 9070 XT below $600 because the 5080, which is $1000, has a similar die size? Are you serious?
Let me try to justify why I mentioned this earlier. It was not about Nvidia, but about AMD catching up in performance per area:
1. Foundries produce 300mm wafers, which are cut into parts we call dies.
2. Dies have a physical size limit (YES I KNOW there is a new method to use the full wafer, but no consumer chip is using that).
3. The cost of a single wafer has gone up (see the rough cost sketch after this list).
4. Transistor density has hit a few walls: no more >2x density from a single 50% shrink, and it's more difficult to cool tiny dies when they dissipate 300W, so peak frequencies have hit a wall too.
5. TDP has hit the 300-400W wall due to material limitations, and the average consumer doesn't want a 1kW heater on their desk.
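To make that concrete, here's a rough back-of-the-envelope sketch in Python. The dies-per-wafer formula is the usual approximation; the wafer prices and defect density are illustrative guesses on my part, not real TSMC numbers.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross dies per wafer, using the standard approximation (ignores scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float,
                      defect_density_per_mm2: float = 0.001) -> float:
    """Wafer cost divided by yielded dies, with a simple Poisson yield model."""
    gross = dies_per_wafer(die_area_mm2)
    die_yield = math.exp(-die_area_mm2 * defect_density_per_mm2)
    return wafer_cost_usd / (gross * die_yield)

# Illustrative (made-up) wafer prices: an older node vs. a newer, pricier one.
for die_mm2 in (350, 500, 750):
    old_node = cost_per_good_die(die_mm2, wafer_cost_usd=8_000)
    new_node = cost_per_good_die(die_mm2, wafer_cost_usd=17_000)
    print(f"{die_mm2} mm2: ~${old_node:.0f}/die (old node), ~${new_node:.0f}/die (new node)")
```

The point: cost per good die rises faster than linearly with area, so a big die on an expensive node hurts twice.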
So what is left? If you want to guess at the best possible high-end GPU on a specific architecture, for example "what would be the peak theoretical performance of a Navi 4x at a 750mm2 die size", the only route of improvement left for monolithic dies is size.
Since TSMC releases their "average density" improvements for each node, we can use that.
If you take Navi 3x and AD10x, calculate what a shrink to the new process node would look like, and compare that to what was actually released, you end up with a performance number that is approximately the architectural improvement between the previous and next gen.
It's not perfect 1:1 scaling, usually around 70-90% for the largest dies depending on power constraints, but you can get a very good estimate.
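A minimal sketch of what that estimate looks like, assuming performance scales with the transistor budget of the die; every input number below is a placeholder for illustration, not a real Navi/Ada figure.

```python
def estimate_shrink_perf(prev_perf: float,
                         prev_die_mm2: float,
                         target_die_mm2: float,
                         node_density_gain: float,
                         scaling_efficiency: float = 0.8) -> float:
    """
    Naive "shrink-only" estimate: performance scales with the transistor budget
    of the new die relative to the old one, derated by how well big dies
    actually scale (the 70-90% factor mentioned above, power-limited).
      node_density_gain: e.g. 1.3 for "+30% average density" on the new node
    """
    transistor_budget_ratio = (target_die_mm2 * node_density_gain) / prev_die_mm2
    return prev_perf * transistor_budget_ratio * scaling_efficiency

# Placeholder numbers (not real Navi/Ada figures): a 530 mm2 previous-gen die
# normalized to "100" performance, scaled to a hypothetical 750 mm2 die on a
# node with +30% average density, at 80% scaling efficiency.
print(estimate_shrink_perf(100, 530, 750, 1.3, 0.8))  # ~147

# The architectural gain is then roughly: measured next-gen perf / shrink-only estimate.
```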
While it's nice to compare perf/watt, absolute performance and all the other metrics, the one metric that matters most to these companies is the cost to build each GPU they bring to market.
AMD has been behind in performance per die area for a long time, I think since Pascal, which is one of the reasons they skipped "high-end" models in some generations.
It makes no sense for AMD to compete on performance when Nvidia can achieve the same performance at a much lower cost.
Before that, it was the opposite. At some point AMD got so far ahead that Nvidia had its own GPU-oven memes, but then Bulldozer happened and AMD has been playing catch-up ever since.
Unlike Intel, Nvidia did not sit back and watch; it actually kept innovating, so there hasn't been a "Zen moment" on the GPU side.
While the Navi 4x architecture seems to be the closest they have been, they are still not there, but maybe they're close enough.
I don't think we'll ever see 350mm2 dies sold for $400 again, since it's not only the cost of the die that increased but everything around it: VRAM chips, cooler and MOSFET costs due to higher power, PCB quality due to tighter signal requirements, and so on.
HOWEVER, IF AMD reaches parity in performance per die area, the cost of the GPU board is pretty much the same, so while Nvidia might price its top end higher and higher, AMD desperately wants that market and would definitely be able to release cheaper cards without losing money. And I WISH Intel stays in the game; then we might see interesting innovation.
Edit:
All that inflation and VRAM-requirement talk is bullshit: the 980 Ti was a 600mm2 die with 6GB VRAM released for $650, and the 1080 Ti was a 470mm2 die with 11GB VRAM released for $700 after 10 months, because there was no competition from AMD anymore. That was ~9 years ago.
Funny how all those cherry-picked charts of GPU die sizes and prices begin at 2014, when AMD stopped competing.