The actual wafer costs are not the only issue. The problem is that nVidia's designs suck FOR GAMING. They are blowing their transistor budget on bad designs. It doesn't matter if wafers are double the price; you could still make a much better GPU for a lot less money. One GPU design stretched across 5 different use cases is a bad idea.
GTX 1080 Ti: 12 billion transistors.
RTX 3080: 28 billion.
RTX 4090: 76 billion.
We are talking 7x the old density here. We could be buying a GTX 1080 Ti at 3GHz, with double-speed VRAM, on a die under 100mm². A $200-$300 product, matching the RTX 4070 Ti in normal rasterized games thanks to those doubled clocks. Die size half that of the RTX 4050. LOL. There's a lot of crap thrown into the die design to add "features" I don't want.
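Quick sanity check on that die-size claim. The 471mm² figure is the real GP102 (1080 Ti) die; the 7x density gain is the post's own number, so treat this as rough napkin math:

```python
# Rough die-shrink math for the hypothetical cheap 1080 Ti respin.
# 471 mm² is the real GP102 die (16nm); 7x density is the figure claimed above.
gp102_area_mm2 = 471
density_gain = 7

shrunk_area = gp102_area_mm2 / density_gain
print(f"shrunk 1080 Ti die: ~{shrunk_area:.0f} mm²")  # comfortably under 100 mm²
```

Even with some overhead for pads and uncore that doesn't shrink, you land well under 100mm².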
As for the RTX 3080, we have 2.7x that density. We could be buying an RTX 3080 with a 30 percent speed boost, on a die under 250mm², for $400. Faster than the 4070 Ti in raster, just as good for RT, with the same profit margins as the old 3080. We know the RTX 3080 design works; a shrunk version at higher frequencies will also work. This isn't "pie in the sky" thinking.
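Same napkin math for the 3080 respin. The 628mm² figure is the real GA102 (3080) die; the 2.7x density gain is from the transistor counts above:

```python
# Same shrink math for the hypothetical RTX 3080 respin.
# 628 mm² is the real GA102 die (Samsung 8nm); 2.7x density is the post's figure.
ga102_area_mm2 = 628
density_gain = 2.7

shrunk_area = ga102_area_mm2 / density_gain
print(f"shrunk 3080 die: ~{shrunk_area:.0f} mm²")  # under the 250 mm² claimed above
```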
TSMC handed nVidia the victory, and nVidia gave TSMC terrible designs for gaming. It's all Ethereum mining, application performance, RT this, DLSS that. I just want the 1080 Ti for $200, running at double the clock speed. So would everyone else.
They call it vision. I call it stupidity. AMD had a chance to go a different way, and they messed up too. This isn't an nVidia versus AMD thing. This is a "our video card companies suck and they are ruining gaming" thing. Intel had such an opportunity here to target gamers instead of the datacenter, and they messed up too.
Even the best of the bunch, the RTX 4090, doesn't look so hot: 7x the transistors for 3x the performance. But that's also running at 2x the clock speed, so it's basically 14x the transistor*frequency budget for 3x the gaming FPS. Bad. Those transistors are not being used effectively for the games you play.
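The efficiency math above, spelled out. The 7x, 2x, and 3x ratios are the post's own rough figures:

```python
# Back-of-envelope: transistor*frequency budget vs. gaming FPS gain, 4090 vs 1080 Ti.
# All three ratios are the rough figures used above, not measured values.
transistor_ratio = 7.0   # 76B vs 12B transistors (rounded)
clock_ratio = 2.0        # roughly doubled clocks
fps_ratio = 3.0          # rough raster uplift

budget_ratio = transistor_ratio * clock_ratio  # "transistor*frequency" budget
efficiency = fps_ratio / budget_ratio          # FPS gained per unit of extra budget

print(f"budget grew {budget_ratio:.0f}x, FPS grew {fps_ratio:.0f}x")
print(f"per-transistor*Hz gaming efficiency: {efficiency:.2f}x of the old design")
```

An efficiency number well below 1.0 is the whole complaint: most of the new budget isn't buying rasterized frames.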