I wonder if the HELA naming is a nod to the Asus Thor PSUs.
While we're on the topic of energy efficiency: the new nodes *are* more efficient than the older ones. It's just that the entire industry is moving toward stuffing more and more transistors into the same area, at which point switching losses pile up and power leakage grows roughly exponentially.
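To make that concrete, here's a toy sketch using the textbook CMOS dynamic-power relation (P = α·C·V²·f). All the numbers are made up for illustration, not taken from any real GPU:

```python
# Textbook CMOS dynamic (switching) power: P_dyn = alpha * C * V^2 * f.
# alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock.
# Illustrative values only; leakage is not modelled here.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power for a pool of logic (watts)."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Doubling the switched capacitance (roughly: transistor count) at the same
# voltage and clock doubles the dynamic power in this model:
p_one_unit = dynamic_power(0.1, 1e-9, 1.0, 2e9)   # one "unit" of logic
p_two_units = dynamic_power(0.1, 2e-9, 1.0, 2e9)  # twice the logic, same V and f
print(p_one_unit, p_two_units)
```

The model also shows why vendors chase voltage first: the V² term means a small undervolt buys more than an equivalent frequency cut, while cramming in transistors at fixed V and f scales power linearly at best.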
And while Radeon this generation has a higher peak average current draw over ~20 ms than GeForce, the latter literally fries its own protection circuit running ultra-optimised games. As for whether that is the fault of the protection circuit or the GPU core, I lean toward the latter, which pulls almost 1000 W transients at the µs scale. I hope Nvidia shoots itself in the foot with 500 W next gen and serves as an example of why you shouldn't keep shoving in transistors until core energy density approaches a nuclear pile, with transients heading toward rocket-nozzle territory.
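A quick back-of-envelope shows why µs-to-ms transients are so nasty for protection circuits: the energy involved is tiny, but the instantaneous current on a 12 V rail is enormous. The ~1000 W figure here is the kind of number reviewer scope captures have shown, not a spec value:

```python
# Why microsecond transients stress OCP/OPP circuitry: almost no energy,
# but huge instantaneous current. Numbers are illustrative.
transient_watts = 1000.0   # peak transient power (roughly what scope captures show)
rail_volts = 12.0          # PCIe 12 V rail
duration_s = 100e-6        # 100 microseconds

current_amps = transient_watts / rail_volts    # instantaneous rail current
energy_joules = transient_watts * duration_s   # total energy in the spike
print(current_amps, energy_joules)
```

The spike carries only about a tenth of a joule, so it barely registers thermally, yet the rail momentarily sees ~83 A. A protection circuit tuned fast enough to catch that will nuisance-trip; one tuned slow enough to ignore it lets the stress through.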
At least on the CPU side we have PL2 and PPT figures; a GPU's boost algorithm and power use are just unknown until observed. That, and a high-end GPU easily pulls more power than a CPU.
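For contrast, the CPU side is at least specified on paper. Here's a toy sketch of an Intel-style PL1/PL2/Tau limiter: the chip may draw up to PL2 while a moving average of power stays under PL1. The constants and the exact averaging are hypothetical simplifications, not from any datasheet:

```python
# Toy PL1/PL2/Tau power limiter, loosely modelled on Intel's scheme:
# short bursts up to PL2 are allowed while an exponentially weighted
# moving average of power remains below PL1. Constants are made up.
PL1, PL2, TAU = 125.0, 250.0, 28.0   # sustained W, burst W, time constant (s)

def step_budget(avg_power, requested_watts, dt_s):
    """One control tick: clamp the request, then update the moving average."""
    limit = PL2 if avg_power < PL1 else PL1
    allowed = min(requested_watts, limit)
    alpha = dt_s / TAU
    avg_power = (1 - alpha) * avg_power + alpha * allowed
    return avg_power, allowed

avg = 0.0
for _ in range(10):                   # 10 s of a sustained 250 W request
    avg, allowed = step_budget(avg, 250.0, 1.0)
print(round(avg, 1), allowed)
```

After 10 seconds the average is still well under PL1, so the full 250 W burst is still permitted; run it long enough and the limiter clamps back to 125 W. GPUs have no published equivalent of these three numbers, which is the point being made above.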