Personally, I'm not pinning my hopes on anything, nor am I expecting anything; we won't know anything definitively until the dust settles. What I was alluding to is that the node shrink isn't the only metric to consider when looking at the improvements AMD can make or realize with RDNA 2. With that in mind, 30% is without a doubt more achievable than 50%. We know nothing about the clock speeds or how they might be handled and achieved - perhaps it's a short burst clock, sustained only briefly like Intel's turbo, and perhaps not across all stream cores. Officially we know just about nothing; AMD is being tight-lipped and playing their cards close to the chest.
It's true that we don't know anything about how these future products will work, but we do have some basic guidelines from the history of silicon manufacturing. For example, your comparison to Intel's boost strategy is misleading - in Intel's case, boost is a short-term clock speed increase that bypasses baseline power draw limits but must still operate within the thermal and voltage stability limits of the silicon (otherwise it would crash, obviously). Thus, the only thing stopping the chip from operating at that clock speed all the time is power and cooling limitations, which is why desktop chips on certain motherboards and with good coolers can often run at these speeds 24/7. GPUs already do this - that's why they have base and boost speed specs - but no GPU has ever come close to 2.5 GHz with conventional cooling.
RDNA 1 is barely able to exceed 2 GHz when overclocked with air cooling. It wouldn't matter whatsoever whether a boost spec above this lasted for a short or long period - it would crash. It would not be stable, no matter what. You can't bypass stability limits by shortening the time spent past them, as you can't predict when the crash will happen. Reaching 2.5 GHz, for any duration, would thus mean exceeding the maximum stable clock of RDNA 1 by nearly 25%.
Without a node change, just a tweaked node, would that alone be possible? Sure. Not likely, but possible. It would, however, cost a lot of power, as we have seen from the changes Intel has made to their 14nm node to reach their high clocks: higher clocks require higher voltages, and higher voltages increase power draw.
The issue comes with the leaks
also saying that this will happen
at 150W (170W in other leaks), down from 225W for a stock 5700 XT and more like 280W for one operating at ~2GHz. Given that power draw on the same node increases more than linearly as clock speeds increase, that would mean a
massive architectural and node efficiency improvement
on top of significant tweaks to the node to reach those clock speeds at all. This is where the "this isn't going to happen" perspective comes in, as the likelihood for both of these things coming true at the same time is so small as to render it impossible.
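To make the "more than linearly" point concrete, here's a back-of-envelope sketch. It assumes the standard dynamic power relation P ∝ C·f·V², plus the assumption that voltage has to rise roughly in step with frequency once you're near a node's limit - so power ends up growing closer to f³ than to f. These are my illustrative assumptions, not anything from the leaks:

```python
# Dynamic CMOS power scales roughly as P ~ C * f * V^2. Near a node's
# frequency limit, voltage must rise roughly in step with clock speed,
# so power grows closer to f^3 than linearly with f.

def relative_power(clock_ratio, voltage_exponent=1.0):
    """Relative power for a given clock ratio, assuming voltage scales as
    clock_ratio ** voltage_exponent (1.0 = linear V/f scaling near the limit,
    0.0 = constant voltage, i.e. the best case)."""
    voltage_ratio = clock_ratio ** voltage_exponent
    return clock_ratio * voltage_ratio ** 2  # P ~ f * V^2

# Going from ~2.0 GHz to 2.5 GHz is a 1.25x clock ratio:
print(relative_power(1.25))       # ~1.95x power if V rises linearly with f
print(relative_power(1.25, 0.0))  # 1.25x power in the (unrealistic) best case
```

Even in the unrealistic best case where voltage stays flat, power grows linearly with clock; with voltage climbing alongside frequency, a 25% clock bump costs nearly double the power - which is why hitting those clocks while also cutting total board power looks so implausible.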
And remember, these things stack, so we're not talking about the 30-50% numbers you're mentioning here (that's clock speed alone), we're talking an outright >100% increase in perf/W if the rumored numbers are all true. That, as I have said repeatedly, is completely unprecedented in modern silicon manufacturing. I have no problem thinking that AMD's promised "up to 50%" perf/W increase might be true (especially given that they didn't specify the comparison, so it might be between the least efficient RDNA 1 GPU, the 5700 XT, and an ultra-efficient RDNA 2 SKU similar to the 5600 XT). But even a sustained 50% improvement would be
extremely impressive, far surpassing what can typically be expected without a node improvement.
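The ">100%" figure follows directly from the numbers above. Here's the arithmetic, under the conservative assumption that performance scales with clock speed alone (same shader count - the leaks don't confirm this, it just makes the math minimal):

```python
# Rough perf/W arithmetic behind the ">100%" claim, using figures from the
# post: a 5700 XT pushed to ~2.0 GHz draws ~280 W; the leaks claim 2.5 GHz
# at 150 W. Performance is assumed proportional to clock speed (my
# simplifying assumption, not a leaked spec).

def perf_per_watt_gain(old_clock, old_power, new_clock, new_power):
    """Relative perf/W of the new part versus the old one."""
    return (new_clock / old_clock) / (new_power / old_power)

gain = perf_per_watt_gain(2.0, 280, 2.5, 150)
print(f"{(gain - 1) * 100:.0f}% perf/W improvement")  # ~133%
```

So taking the leaked clock and power figures at face value implies roughly 2.3x the perf/W - well past 100% - before counting any IPC gains, which is exactly why the rumors don't pass the smell test.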
Remember, even Maxwell only beat Kepler by ~50% perf/W, so if AMD is able to match that it would be one hell of an achievement. Doubling that is out of the question. I would be very, very happy if AMD managed a 50% overall improvement, but even 30-40% would be very, very good.