It makes little sense to buy an Ada Lovelace GPU only to limit it to run as fast as an RTX 3090; you are better off just getting the RTX 3090 and undervolting it in the first place. Undervolting is not some magic that just works on every single card out there. Mileage varies, just like with overclocking. The reason Nvidia is pushing the GPU so hard, at the cost of such high power consumption, may be to keep up with the competition. When you cannot go as wide, you have to keep up by pushing clockspeed, which is not going to be pretty when it comes to power consumption.
You misunderstood. With the comment about undervolting testing not making the news, I was referring to sites like Igor's that already undervolted the 3090 Ti to 300W (so not Ada, which hasn't released yet...) and found that it nearly matches the reference 3090 (it's -5% slower vs the 3090 Suprim X and -1.4% vs the 3080 Ti Suprim X). So although the 480W 3090 Ti Suprim X consumes +60% more power than the card undervolted to 300W, the performance loss for the 300W card is only 10%.
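A quick arithmetic check of those figures (just a sketch; the 480W/300W draws and the 10% loss are taken from the numbers above, not re-measured):

```python
# Power and performance deltas for the 3090 Ti Suprim X, stock vs undervolted,
# using the figures quoted above (480W stock, 300W undervolted, ~10% slower).
stock_w, undervolted_w = 480, 300
perf_loss = 0.10

extra_power = stock_w / undervolted_w - 1           # stock draws +60% power
perf_per_watt_gain = (1 - perf_loss) / (undervolted_w / stock_w)
print(f"stock draws {extra_power:.0%} more power")
print(f"undervolted card delivers {perf_per_watt_gain:.2f}x the perf/W of stock")
```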
Rumors have Navi 31 cards at 2.5x the CUs of the existing Navi 21, while Nvidia is reportedly not able to double its CUDA core count with Ada Lovelace.
The architectures are not going to be the same, so we don't know how much performance each company is going to extract based on the CU count alone. Let's suppose the leaks about CU/CUDA core counts are correct, so we know those. But there is something else we also know: the frequency capabilities of each process according to TSMC (OC on air, premium cards):
16nm TSMC: 100% (2.0GHz)
8nm Samsung: 100-102.5% (2.0-2.05GHz)
7nm TSMC: 135-140% (2.7-2.8GHz)
6nm TSMC: 141.75-147% (2.835-2.94GHz)
5nm TSMC: 155-160% (3.1-3.2GHz)
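A minimal sketch of where those percentages come from, taking the 16nm/2.0GHz entry as the baseline (the frequencies are the ones from the list above, not official TSMC specs):

```python
# Rough OC frequency scaling per process node, relative to the
# 2.0 GHz 16nm TSMC baseline from the list above.
BASELINE_GHZ = 2.0

nodes = {
    "8nm Samsung": (2.0, 2.05),
    "7nm TSMC":    (2.7, 2.8),
    "6nm TSMC":    (2.835, 2.94),
    "5nm TSMC":    (3.1, 3.2),
}

for node, (lo, hi) in nodes.items():
    print(f"{node}: {lo / BASELINE_GHZ:.2%} - {hi / BASELINE_GHZ:.2%}")
```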
Of course, the architectures must be optimised for high frequency to hit these theoretical differences.
So the jump in frequency for Nvidia probably isn't going to be the same as AMD's. More importantly, there are further technical factors: the deltas in the pixel-fillrate/bandwidth ratios of the new architectures vs the old ones, the pixel-fillrate/texel-fillrate/FP32 TFLOPS ratio (which for AD102 is going to be the same as GA102, while AMD's ratio isn't going to stay the same), Nvidia's Infinity Cache-like addition while AMD already had it in Navi 21, etc...
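To make the ratio talk concrete, here's a small sketch of how such figures are derived. The numbers below are the published RTX 3090 reference specs, used purely as an illustration; plug in AD102/Navi 31 specs once they are confirmed:

```python
# Illustrative fillrate/bandwidth ratios from published RTX 3090 reference
# specs (112 ROPs, 328 TMUs, 10496 FP32 cores, 1695 MHz boost, 936 GB/s).
rops, tmus, cores = 112, 328, 10496
boost_ghz = 1.695
bandwidth_gbs = 936

pixel_fill = rops * boost_ghz                 # GPixel/s
texel_fill = tmus * boost_ghz                 # GTexel/s
fp32_tflops = 2 * cores * boost_ghz / 1000    # 2 FLOPs per core per clock

print(f"pixel fillrate / bandwidth: {pixel_fill / bandwidth_gbs:.3f}")
print(f"pixel : texel : FP32 TF = 1 : {texel_fill / pixel_fill:.2f} : {fp32_tflops / pixel_fill:.3f}")
```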
Based on the numbers you provided, I would be surprised if you lose just 5% performance. I don't even know how you derived that "magical" 5% performance loss.
Very few people will buy a card that draws 900W of power. Even if I could afford it, I wouldn't, unless I had some specialized need for such a card. Buyers will generally be people who need the CUDA/Tensor cores for their work, or hardcore PC enthusiasts. Not only will the card cost a bomb, but you need some hardcore cooling to keep it at a manageable temperature. And even if you have a custom water-cooling setup for it, you need very powerful air conditioning in your room/enclosed area to avoid the place becoming a sauna. Even with current higher-end cards, I am seeing room temps creep up whenever the GPU is under sustained load.
To be fair, I said 5% or around that range, and then I clarified in my next post with 5-8%.
When the 300W 3090 Ti is only 10% slower than the 480W 3090 Ti and the power difference is +60%, I assumed that with a +29% power difference per step (270W→350W, 350W→450W, 450W→580W, or 470W→600W if the TDP ends up at 600W) the performance deficit is going to be in the 5-8% range.
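A minimal sketch of that extrapolation, assuming performance follows a power law fitted to the single 3090 Ti data point above (a big simplification; real voltage/frequency curves aren't this smooth, and the 450W→580W step is a hypothetical Ada TDP):

```python
import math

# Fit perf ∝ power^alpha to the one 3090 Ti data point quoted above:
# cutting 480W down to 300W (x0.625 power) cost ~10% performance (x0.90).
alpha = math.log(0.90) / math.log(300 / 480)

# Apply the same law to one +29% power step, e.g. 450W -> 580W.
perf_ratio = (450 / 580) ** alpha
print(f"alpha = {alpha:.3f}, estimated loss at -29% power: {1 - perf_ratio:.1%}")
# Prints roughly a 5-6% deficit, consistent with the 5-8% range above.
```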
I didn't think about it too much, but it doesn't sound unreasonable imo.
Regarding your comment about power consumption/900W/buyers of that card etc., I agree 100%. I said in the past (about the 3090 Ti as well) that with so much power consumption, the performance is irrelevant to me (especially if the delta vs the 3090 is so small).