Turing was nowhere near 30% perf/SM over Pascal; it was more like 10%. Any further gains came from more SMs and higher clocks.
That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm didn't halve power for Nvidia, so going from 12nm to 7nm EUV doesn't seem likely to do so either. Beyond that, they're moving between foundries (at least for some GPUs), so direct comparisons could be difficult.
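For a rough sense of why a full node shrink rarely halves power, the classic dynamic power relation P ∝ C·V²·f is enough. Here's a minimal sketch; the capacitance, voltage, and clock scaling factors are illustrative assumptions, not published foundry figures:

```python
# Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
# The scaling factors below are illustrative assumptions, not
# published foundry numbers.

def dynamic_power(c, v, f):
    """Relative dynamic power for capacitance c, voltage v, clock f."""
    return c * v**2 * f

baseline = dynamic_power(c=1.00, v=1.00, f=1.00)  # older-node part
# Optimistic shrink: ~25% less switched capacitance, ~10% lower
# voltage, but clocks pushed ~10% higher for a gen-on-gen perf gain.
shrunk = dynamic_power(c=0.75, v=0.90, f=1.10)

print(f"relative power: {shrunk / baseline:.2f}")  # ~0.67, not 0.50
```

Even with fairly generous shrink assumptions, pushing clocks up eats a good chunk of the power savings, which is why "half the power" rarely materializes in shipping products.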
It's not analogous to AMD's move from Vega on GloFo 14nm to Navi on TSMC 7nm either, as Navi is a completely new architecture with very significant efficiency improvements of its own. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design, just on a new node with slightly bumped clocks, and that move improved perf/W by less than 30%.
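To make the perf/W comparison concrete, here's a small sketch. The ~1.25x relative performance and the board-power figures are ballpark illustrations for the Vega 64 vs. Radeon VII case, not exact review data:

```python
# Rough perf/W comparison, Vega 64 vs. Radeon VII. Performance is
# normalized (Vega 64 = 1.00); the ~1.25x perf and board-power
# figures are ballpark illustrations, not exact review numbers.

cards = {
    "Vega 64":    {"perf": 1.00, "power_w": 295},
    "Radeon VII": {"perf": 1.25, "power_w": 300},
}

base = cards["Vega 64"]["perf"] / cards["Vega 64"]["power_w"]
for name, c in cards.items():
    ppw = c["perf"] / c["power_w"]
    print(f"{name}: {ppw / base:.2f}x perf/W")  # Radeon VII: ~1.23x
```

With roughly equal board power, essentially all of the perf/W gain has to come from the performance uplift itself, which is why it lands under 30% despite a full node jump.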
Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average across a number of tests. An "up to" number in that case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example: gen-to-gen, clock-equalized IPC testing there normally produces a wide spread of per-test results. The other reports a single data point with little context. Is it a high result, an average one, or a low one? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it sits above average, to present the product in the best light.
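A quick worked example of the average-vs-peak distinction. The per-test gains below are made up purely to illustrate the point, not real benchmark results:

```python
# Hypothetical per-test IPC gains (percent) for a new core vs. the
# old one at equal clocks. These numbers are invented to illustrate
# the average-vs-"up to" distinction, not real benchmark data.
gains = [5, 9, 12, 14, 18, 21, 25, 30, 45]

mean_gain = sum(gains) / len(gains)
print(f"average IPC gain: {mean_gain:.1f}%")  # ~19.9%
print(f"'up to' IPC gain: {max(gains)}%")     # 45%
# Knowing only one of these tells you almost nothing about the other.
```

A vendor quoting the ~20% average and a vendor quoting the 45% best case are describing the same chip, which is exactly why a lone "up to" figure with no distribution behind it is so hard to interpret.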