Comparing average gaming power in TPU's benchmarks, it's 219W vs. 273W, which makes the 5700 XT consume 80% of the 2080 Ti's power, or the 2080 Ti consume 125% of the 5700 XT's power. I guess I should have looked up the numbers more thoroughly (saying 215 vs. 275 did skew my percentages a bit), but overall, your 17% number is inaccurate. Comparing TDPs between manufacturers isn't a trustworthy metric either, as the two define the number differently.
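For clarity, both percentages fall straight out of the ratio of the two average-power figures — a quick sanity check (using the TPU numbers quoted above):

```python
# Average gaming power figures as quoted from TPU's reviews.
power_5700xt = 219  # W, RX 5700 XT
power_2080ti = 273  # W, RTX 2080 Ti

# 5700 XT as a fraction of the 2080 Ti's power draw
print(f"5700 XT / 2080 Ti: {power_5700xt / power_2080ti:.0%}")  # → 80%
# 2080 Ti as a fraction of the 5700 XT's power draw
print(f"2080 Ti / 5700 XT: {power_2080ti / power_5700xt:.0%}")  # → 125%
```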
As for the 5700 XT being stretched in efficiency somehow proving they're further behind: obviously not, as you yourself mention. The 5700 XT is a comparatively small die whose clocks AMD chose to push to make it compete at a higher level than it was likely designed for originally. The 2080 Ti, on the other hand, is a classic wide-and-(relatively-)slow big-die GPU, which gives it plenty of OC headroom if the cooling is there, but also lets it operate in a more efficient DVFS range. AMD could, in other words, compete better simply by building a wider chip and clocking it lower. Given just how much more efficient the 5700 non-XT is (
166W average gaming power! With the 5700 XT only winning by ~14%!) we know even RDNA 1 can get a lot more efficient than the 5700 XT (not to mention the 5600 XT, of course, which beats any Nvidia GPU out there for perf/W). And the 2080 Ti still can't reach 2x the performance of the 5700 non-XT (+54-76% depending on resolution). Which tells us that AMD could in theory build a slightly downclocked double 5700 non-XT and clean the 2080 Ti's clock at the same power, as long as the memory subsystem kept up. Of course they never built such a GPU, and it's entirely possible there are architectural bottlenecks that would have prevented this scaling from working out, but the efficiency of the architecture and node is there. And RDNA 2 GPUs promise to improve on that both architecturally and from the node. We also know that they can clock to >2.1GHz
even in a console (which means limited power delivery and cooling), so there's definitely improvements to be found in RDNA 2.
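The napkin math behind that hypothetical "double 5700" looks roughly like this — with the big caveat, as said, that perfect 2x scaling across two 5700s' worth of hardware is an assumption, not a given:

```python
# Hypothetical doubled RX 5700 vs. the RTX 2080 Ti, assuming ideal scaling.
power_5700 = 166    # W, average gaming power (TPU)
power_2080ti = 273  # W, average gaming power (TPU)
perf_2080ti = 1.76  # 2080 Ti tops out at +76% over the 5700 (resolution-dependent)

# Naively doubling the 5700: 2x the performance at 2x the power...
doubled_power = 2 * power_5700  # 332W - too high as-is
doubled_perf = 2.0
# ...but a slight downclock (shifting to a more efficient DVFS point) could
# pull power back toward the 2080 Ti's 273W while staying above 1.76x perf.
print(f"Doubled 5700 power budget: {doubled_power}W vs. 2080 Ti's {power_2080ti}W")
print(f"Naive performance headroom over a 2080 Ti: {doubled_perf / perf_2080ti:.0%}")
```

Even after giving back some of that 2x in a downclock, there's margin left over the 2080 Ti — which is the whole point about the architecture and node having the efficiency available.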
The point being: if AMD is finally going to compete in the high end again, they aren't likely to go "hey, let's clock the snot out of this relatively small GPU" once again, but rather design as wide a GPU as is reasonable within their cost/yield/balancing/marketability constraints.
Then they might go higher on clocks if it looks like Nvidia are pulling out all the stops, but I would be downright shocked if the biggest Big Navi die had fewer than 80 CUs (not all of which might be active in the highest consumer SKU, of course). They might still end up releasing a >350W clocked-to-the-rafters DIY lava pool kit, but if so, that would be a reactive move rather than one forced by design constraints (read: a much smaller die/core count than the competition), as in previous generations (RX 590, Vega 64, VII, 5700 XT).
I don't think anyone will mind a 250W Big Navi being more than 20% behind Ampere if said Ampere draws 350W or more. On the other hand, if it were more than 20% behind Ampere at the same power? That would be a mess indeed - but that's looking highly unlikely at this point. If Nvidia decided to go bonkers with power for their high-end card, that's on them.