No, that's not what I'm arguing at all. No wonder you disagree, since you haven't even understood the point.
Let me try once more, in the hopes you get it.
CPU A is at 100 W and scores 100 points at stock (let's say it's the 5950X).
CPU B shows up running at 170 W and scores 150 points (let's say it's the 7950X).
CPU B looks less efficient than A, but when you actually test them both at the same wattage, CPU B can score 120 points at 100 W. So it is more efficient. If you are after efficiency, you can run CPU B at the same wattage CPU A was at and still beat it in both performance and efficiency. Yet here we have people complaining about how inefficient Zen 4 is. The same thing was going on with Alder Lake.
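To put numbers on it, here's a minimal Python sketch using the made-up scores and wattages from the example above (nothing here is a measured result):

```python
# Quick points-per-watt check using the made-up numbers from the example above.
# All figures are illustrative, not measurements.

def efficiency(points: float, watts: float) -> float:
    """Score per watt - higher is better."""
    return points / watts

cpu_a_stock = efficiency(100, 100)   # 1.00 pts/W at stock
cpu_b_stock = efficiency(150, 170)   # ~0.88 pts/W at stock -> looks worse
cpu_b_capped = efficiency(120, 100)  # 1.20 pts/W limited to 100 W -> actually better

print(f"CPU A stock:       {cpu_a_stock:.2f} pts/W")
print(f"CPU B stock:       {cpu_b_stock:.2f} pts/W")
print(f"CPU B @ 100 W cap: {cpu_b_capped:.2f} pts/W")
```

Same chip, same silicon - the only thing that changed is the power limit it ships with.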
If you still don't get what I'm saying, I give up.
But this is the thing, and where this argument keeps going in circles:
both these statements are true. There is no contradiction between the two. At stock, CPU B is, unequivocally, less efficient than CPU A. That doesn't mean that it doesn't have the potential to be more efficient - but that's not how it's configured from the factory. All silicon implementations of an architecture have a wide range of possible efficiencies at various tuning and performance levels. But that means that CPU B can be more efficient, given appropriate tuning. It just isn't (in this example) at stock.
Two things spring from this:
First: non-iso-power comparisons don't necessarily give a good picture of the architectural or implemented efficiency of each design, as they are tuned differently. What they do give is a representative picture of actual, real-world product efficiency: what people buy and put into their PCs. Then again, iso-power measurements don't really give a good picture of efficiency either, as you're still just measuring a single point along a complex curve for each, and there's nothing in that measurement telling you whether these tuning levels are "equal" (as if that's possible) along their respective curves. Comparing two chips at, say, 150W can also be extremely problematic if one chip is pushed to its limits at that point while the other can go much further. For an actual overview of architectural efficiency that is worth anything at all, you need to run a wide range of tests across a wide range of wattages - anything less is just as flawed as non-iso-power testing.
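As a toy illustration of the "single point on a curve" problem - the score curves below are completely invented, shaped only so that each hypothetical chip tops out differently - the efficiency ranking you get depends entirely on which wattage you happen to sample:

```python
# Toy example: two hypothetical chips with made-up score-vs-power curves.
# The shapes are invented purely to show how a single iso-power point can
# flip the efficiency ranking depending on where you sample.

def score_chip_a(watts: float) -> float:
    # Hypothetical chip tuned for low power: strong early, flattens out fast
    return 140 * (watts / 150) ** 0.3

def score_chip_b(watts: float) -> float:
    # Hypothetical chip with more headroom: keeps scaling at higher power
    return 150 * (watts / 150) ** 0.6

for w in (65, 105, 150, 230):
    a, b = score_chip_a(w), score_chip_b(w)
    print(f"{w:>3} W | A: {a:5.1f} pts ({a/w:.2f} pts/W) | "
          f"B: {b:5.1f} pts ({b/w:.2f} pts/W)")
```

Sample at 65 W and chip A looks more efficient; sample at 150 W and chip B does - same two chips, same curves, different conclusion.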
The second thing springing from both statements being true: the major question here is what you're actually looking for - practical, useful information that's generally applicable, or specialized information that's only applicable in specialized settings. This is where we've been disagreeing for a long, long time, as I think the generally applicable information gained from looking at real-world stock behaviour is by far the most important data, while you care only about the highly specialized niche of people actually willing and able to tune their chips manually.
Of course, once we start looking past either pure ST or nT applications, as well as looking at power draws across a range of various workloads, things get very complicated very quickly, as there's a ton of variability in how each specific workload will interact with each specific CPU, both architecturally and in terms of its physical implementation. I really, really wish there was someone doing comprehensive power monitoring across their whole range of CPU testing, but there isn't - and it's understandable, as that's a massive, massive amount of work. Anandtech seemed to be working towards that at one point, but never actually got there, and sadly the decline of that site has been ever more obvious in recent years.
Of the people I've seen who do this for a living, more try to do serious OC than not. That 95% perf for 25% less power is true, nothing new about it at all, and totally irrelevant.
The only time you benefit from that power saving is when you start to push all-core workloads. How often do you do that? Very rarely, for most people. And what's the actual benefit in exchange for that loss of responsiveness when you need it? Virtually nothing, because for most people, when they do push all-core work, it's for very short periods (like, 5 seconds).
The flip side is you can get 110% for 50% more power.
If that 10% saves you 45 minutes a day because your income depends on rendering / video editing and so on, and you make $30/hr, that's roughly $22.50 more per day.
Compared to paying an extra two cents or so per day in power, it is flatly a no-brainer for the people who need more speed.
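Written out as a back-of-the-envelope calculation - the hourly rate and time saved are the figures from above, while the electricity price and extra draw are assumptions I'm pulling out of the air, so the exact pennies-per-day number will vary, but the imbalance won't:

```python
# Back-of-the-envelope: value of time saved vs. extra electricity cost.
# Hourly rate and time saved come from the example above; the electricity
# price and the extra power draw are assumed, illustrative numbers.

hourly_rate = 30.0           # $/hr
time_saved_hours = 0.75      # 45 minutes saved per day from the ~10% speedup
render_hours_per_day = 7.5   # roughly implied: 45 min is ~10% of daily render time
extra_draw_watts = 85.0      # assumed: ~50% over a ~170 W stock package power
electricity_price = 0.15     # $/kWh, assumed

value_of_time = hourly_rate * time_saved_hours
extra_energy_kwh = extra_draw_watts * render_hours_per_day / 1000
extra_power_cost = extra_energy_kwh * electricity_price

print(f"Time saved is worth ~${value_of_time:.2f}/day")
print(f"Extra power costs   ~${extra_power_cost:.2f}/day")
```

With my assumed rates it comes out closer to ten cents a day than two, but either way it's a couple of orders of magnitude below the value of the time saved.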
So yeah, my comment about where are the performance enthusiasts was kind of a joke and kind of not.
All of the CPUs released so far are AMD X CPUs: OC-capable, up-tuned chips for enthusiasts. The more mundane non-X chips will come later. If you are not into OC / high-performance CPUs, why are you here?
All AMD CPUs can be OC'd; they don't follow Intel's limitations there. But I disagree with your overall conclusion here. Why? Because - outside of the downright silly and vastly oversimplified calculation you're basing your argument on - the vast majority of us don't actually do these types of work. Most of us are PC enthusiasts - hobbyists - or gamers, or some mix of the above. And, crucially, there are a lot of use cases where this type of logic either doesn't apply or just isn't valid.
As to your calculation:
- If you do that kind of work for a large company, on a salary, then you gain nothing from that speed-up save for possibly having to do more work. Also, are you just sitting on your ass doing nothing during that render? No, you're doing other work. So, increasing that speed might benefit your workflow - or it might get in the way of other necessary tasks, or it might just make your boss more money while you're left with a bigger workload. On a fixed wage, that theoretical $22.50 of yours goes into your boss's pocket, not yours.
- If you're a freelancer, contractor, or running your own business, you might get the opportunity to make more money from such a speedup, but only if you are constantly in a state of having more than enough work. If you don't, then congrats, you've now got slightly more free time - which is of course also nice, but you could have had that already by just scheduling your renders for the end of the day.
In both of these cases, the applicability of your logic is extremely narrow. That doesn't make it wrong, it just makes it myopic.
And, of course this also fails to take into account a whole bunch of other factors that play into this:
- Scheduling renders for EOD/overnight means less heat dumped into your workspace while you're there, potentially increasing comfort (and saving on AC costs if applicable)
- Running renders slower but more efficiently overnight - when there's plenty of time for them to finish - can save you meaningful electricity costs in the long run
- Overclocking production gear is generally considered a huge no-no due to instability and the possibility of errors. Saving 10% of time on a render isn't much help if you have to do it all again because one frame got partially corrupted.
... oh, and if a 10% speedup saves you 45 minutes a day, then you're already running 7.5 hours of renders a day - meaning this isn't a workstation, but a dedicated render rig - which would be running 24/7 anyway. Once you're at that level, setting up a second render rig - or renting an off-site render farm - will be the next step, not an OC.