No we are not, you are. No one uses a PC the way you describe; I feel like I am repeating myself indefinitely. The bigger the chip and the higher the power consumption, the worse the battery life gets, even if performance/watt has increased. The only situation where that's not true is if the chip is run at maximum load until the battery dies, and almost no one uses their device like that.
Kinda the opposite. Say you have a 100 watt-hour battery (yes, they're rated in Wh, not W) and an SoC/CPU with a max power draw of 20W. If you run that SoC/CPU at full tilt the whole time, you always get 5 hours of battery life, no matter what the perf/watt is. Sure, there's the user experience: higher perf/watt maybe gets you snappier response, more data crunching, more FPS, whatever have you, in that same 5 hours. But as you say, no one uses their device that way.
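If it helps, here's that full-tilt case as a back-of-the-envelope in Python (the variable names are just mine, for illustration):

```python
# Battery life at constant max draw: capacity (Wh) / power (W) = hours.
battery_wh = 100.0  # battery capacity, watt-hours
max_draw_w = 20.0   # SoC/CPU max power draw, watts

print(f"{battery_wh / max_draw_w:.1f} hours at full tilt")  # 5.0 hours, regardless of perf/watt
```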
Now say you have a short-ish workload that lets you clock-gate, so you don't consume the full 20W all the time. Then perf/watt matters again. Clearly, if you have SoC #1 that chews through the job at 20W for 1 hour and then idles, vs. a higher perf/watt (but same max 20W) SoC #2 that gets done in 45 minutes (0.75 hours), then #1 just ate 20% of the battery whereas #2 ate only 15%.
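Same math as a quick race-to-idle sketch, treating idle draw as ~0W to keep the comparison simple:

```python
# Race-to-idle: energy consumed = power x time-at-load.
# Idle draw is treated as ~0W here, just to keep it simple.
battery_wh = 100.0

def battery_pct_used(draw_w: float, hours_at_load: float) -> float:
    """Percent of battery consumed by a job at a given draw."""
    return 100.0 * (draw_w * hours_at_load) / battery_wh

print(battery_pct_used(20.0, 1.00))  # SoC #1: 20.0% of the battery
print(battery_pct_used(20.0, 0.75))  # SoC #2: 15.0% of the battery
```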
Now, all things being equal, @Fouquin ’s point is that a 1-hour job @20W and a 2-hour job @10W would chew through the same amount of battery (20 Wh either way). I’m guessing what you’re arguing when you say “efficiency” is that power scaling isn’t linear, so with the same micro-arch etc., @10W it’s really more like a 1:45 job (17.5% batt) and not 2 hours (20% batt). So the lower-TDP system won. Point taken.
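Plugging those in (the 1:45 @10W figure is the hypothetical from above, not a measurement):

```python
# Non-linear power scaling: halving the power doesn't double the runtime.
battery_wh = 100.0

jobs = [
    ("1-hour job @20W", 20.0, 1.00),                   # baseline
    ("2-hour job @10W (if scaling were linear)", 10.0, 2.00),
    ("1:45 job @10W (sub-linear scaling)", 10.0, 1.75),
]
for name, watts, hours in jobs:
    print(f"{name}: {100.0 * watts * hours / battery_wh:.1f}% battery")
# 20.0%, 20.0%, and 17.5% respectively
```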
But to that, @Fouquin ’s counter-argument is that if you have different micro-archs with maybe better perf/watt, then @20W you really could be looking at a 45-minute job (15% batt), swinging the balance back in favor of the higher-TDP SoC for the same job.
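And that counter-scenario in the same terms, again with hypothetical numbers:

```python
# Better micro-arch at the same 20W ceiling: the job finishes in 45 minutes.
battery_wh = 100.0
print(f"{100.0 * 20.0 * 0.75 / battery_wh:.1f}% battery")  # 15.0%, beating the 17.5% @10W case
```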