
Can you help me make a comprehensive efficiency graph for CPUs?

I'm working on creating a graph showing how different CPUs perform across their entire range of supported wattages. Just like this:
[attached image: 2.jpg, an example efficiency graph]


Run the very brief benchmark at different wattages, from the lowest to the highest.
Are you aware of a serious flaw in your method? The power on the horizontal axis is the set limit, not the actual consumption. You haven't even described how you set that limit.

In your example, the CPU simply can't consume more than ~25W, and score more than ~420 in ST, regardless of the limit. You'll see the same thing in MT, depending on how high you go - the 7800X3D refuses to exceed ~80W, for example. So the only usable part of the curves is where they rise linearly (more or less). The horizontal part is misleading.
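
For concreteness, a sweep of this kind might look like the minimal sketch below. This is an outline, not anyone's actual script: `set_power_limit` and `run_bench` are hypothetical stand-ins for whatever tooling applies the limit (ThrottleStop, RAPL, etc.), and, per the point above, it logs the measured package power alongside the set limit so the two can be compared later.

```python
# Minimal sweep sketch. set_power_limit() and run_bench() are hypothetical
# placeholders for real tooling (e.g. ThrottleStop for the limit).
import csv

def set_power_limit(watts: int) -> None:
    """Hypothetical: apply a package power limit of `watts`."""
    raise NotImplementedError

def run_bench() -> tuple[float, float]:
    """Hypothetical: run a short benchmark, return (score, measured_pkg_power_w)."""
    raise NotImplementedError

with open("sweep.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["limit_w", "measured_w", "score"])
    for limit in range(10, 255, 5):       # lowest to highest supported wattage
        set_power_limit(limit)
        score, measured = run_bench()
        out.writerow([limit, measured, score])
```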
 

I'm not sure I'm aware, can you expand? I set a power limit within ThrottleStop and verify it's hitting that limit in the same program. What do you mean by it not being the actual consumption?

Completely agree on the curve, thanks!
 
Yup, what I don't understand is how you measure both (volts and amps) at the same time, so you can multiply them?


I specifically talked about power reporting for CPUs from ANY software. For example, power reporting on NVIDIA graphics cards is quite accurate, because they have dedicated circuitry on the PCB that really measures current, using a shunt resistor plus a voltage reading.
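
To put a toy number on how that shunt measurement works (all values here are made up for illustration, not real readings):

```python
# Toy shunt-resistor power calculation (example values only).
# The shunt's small, known resistance converts current into a measurable
# voltage drop: I = V_drop / R, then P = V_rail * I.
R_SHUNT = 0.005        # ohms (milliohm-range shunt)
v_drop  = 0.060        # volts measured across the shunt
v_rail  = 12.0         # volts on the 12 V rail

current = v_drop / R_SHUNT     # 12.0 A
power   = v_rail * current     # 144 W
print(f"{current:.1f} A -> {power:.0f} W")
```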
So am I to guess you are talking about v-core and the amps used to come up with power?

So you're saying even if I measure the 12 V (on some boards there are actual pads to test with the multimeter) and use the amp clamp on the CPU 12 V EPS connector(s), the power usage would be vastly different vs. what the firmware in the BIOS gives us? (Would need 2 multimeters.)

If all this were that inaccurate, then the sensors wouldn't need to be there.

Since we know the v-core is set to X volts and we know the VRMs output Y amps, that's how we get the wattage conclusion. Obviously this is very simplified. I'm talking about v-core and CPU package power; are we talking about the same thing?

So anything after the VRM circuit can't be measured?

The board doesn't monitor the VRMs for nothing, with willy-nilly sensor readings. There's gotta be some method to the madness. Perhaps I just don't know what it is.

 
CPU-Z is too light of a workload. I recommend people use CB R23 or something heavier instead.

On my 14700K with my 4,000 MHz 20-core profile, which draws 150 W (with some swing depending on the app), CPU-Z reported only 135 W.
Thanks so much, I hadn't identified that because of the thermal limitations. After cooling my laptop down, I confirmed CPU-Z doesn't pull as much as other benchmarks. Still, I would take the quick testing CPU-Z provides over that last bit of power it leaves on the table. However, if there are better programs that also bench very quickly, I'd like to know!
 
"Quickly" depends on the CPU used: how many cores it has and the frequency.

A system with something like 32 threads would complete a Cinebench run rather quickly, but an older quad-core, or even a newer one, would take longer to complete the render. Either way, it's the AVX2 instructions that really hammer the CPU with a heavy load.

I mean, it's a really neat idea, but a lot of work. Trying to take a 50 W load all the way to 300 W would take a really long time; I couldn't accomplish this within just an hour or so, because we are looking at 5 W increments. With maybe 25 W increments, it wouldn't look so intimidating.
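
The arithmetic behind that estimate, with an assumed per-run time (the 60 seconds is a guess, not a measured figure):

```python
# Run-count and rough time estimate for a 50-300 W sweep.
lo_w, hi_w = 50, 300
secs_per_run = 60                        # assumed: short bench + settling time
for step_w in (5, 25):
    runs = (hi_w - lo_w) // step_w + 1   # 51 runs at 5 W, 11 runs at 25 W
    print(f"{step_w} W steps: {runs} runs, ~{runs * secs_per_run // 60} minutes")
```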
 
Others have given some good points on other things, but I want to raise an issue with using CPU-Z as the measuring stick for performance.

It's... not a good reflection of real world performance, and in fact, you're going to run into this issue with most synthetics because in reality, you trade off accuracy by boiling performance down to one number, and finding a synthetic that accurately represents that average is easier said than done.

Even the "good" ones like Passmark have this problem. Go look up the single core scores of the 5800X and 5800X3D and you'll see exactly what I mean; it rates the 5800X slightly higher, whereas the 5800X3D will only ever score lower if the cache isn't helping at all, which makes it obvious the cache isn't being factored at all in whatever method their benchmark uses, which means... you guessed it, the synthetic suddenly doesn't accurately represent real world performance. Maybe it represents "desktop/productivity" performance fair enough, but even some of that stuff may (or may not) see increases from cache.

Passmark is very aware of this shortcoming, which is why they came up with a "gaming ranking"... yet this has a very big problem of its own! Look at what CPU is currently topping the chart. Not the 7800X3D. Not the 7950X3D. It's the 7900X3D. Huh? Why's that? Let's look deeper. Wait... the 5600X3D is topping the 5800X3D too! Now it's a bit more apparent what's going on. The Ryzen 5 and lower half of the Ryzen 9 tiers use CCDs with 6 cores instead of 8, but all the X3D chips have the same 64 MB of extra cache. What they are doing is clearly averaging "cache per core", which results in the 6-core-CCD models scoring higher.

In reality, it doesn't work this way at all! L3 cache is shared, at least on single-CCD CPUs. I think it still is on multi-CCD CPUs, but then you add latency to cross CCDs, which negates the effect. This is why, say, a 5800X has 32 MB of cache, a 5900X has 64 MB, and a 5800X3D has 96 MB, but only the last sees the uplift. The middle one has the same +32 MB uplift, but it's spread across two CCDs.
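
If the ranking really is averaging cache per core, as speculated above, the arithmetic would look like this, and it favors the 6-core-CCD parts even though the shared L3 is equally visible to every core on a CCD:

```python
# Speculative "cache per core" math (the metric Passmark is suspected of
# using above). Total L3 for both X3D parts is 96 MB (32 base + 64 stacked).
chips = {"5800X3D": (96, 8), "5600X3D": (96, 6)}   # (L3 MB, cores)
for name, (l3_mb, cores) in chips.items():
    print(f"{name}: {l3_mb / cores:.1f} MB/core")
# 5800X3D: 12.0 MB/core, 5600X3D: 16.0 MB/core -> the 6-core part "wins",
# even though in reality each core can use the full shared 96 MB.
```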

In short, even the "good" synthetics have some serious flaws, and CPU-Z's benchmark is not regarded as a "good" one. Certain CPUs had architectural adjustments that brought them real world performance uplifts, but CPU-Z's benchmark would see little to no difference.

For reasons like this, I'm not a fan of using synthetics as an accurate average of performance. I understand why it's done; you need to reduce variables and using the same measuring method does that... but what happens when that measuring method itself is consistent... but consistently wrong? This happens.
 
So you're saying even if I measure the 12 V (on some boards there are actual pads to test with the multimeter) and use the amp clamp on the CPU 12 V EPS connector(s), the power usage would be vastly different vs. what the firmware in the BIOS gives us?
Correct. What you are proposing is how I measure power in my CPU reviews: power flowing through the cable, apples-to-apples, using external measurement devices that are independent of motherboard/CPU/BIOS/settings and thus can't be cheated.

If all this were that inaccurate, then the sensors wouldn't need to be there.
The sensor exists for power limits, and it is sufficiently accurate for this purpose. But then mobo vendors found out how to trick it, so they could get higher scores in reviews, because they haxxed the sensor to see 75 W when it's actually 80 or 85 W. You can repro this by playing with LLC and related settings while observing temperature and package power (Intel).
 
But that power droop would be reflected by drooping effective clocks. So, no, their scores actually suck. This is why I don't suggest undervolting at all, or I stay away from those conversations. I don't want people beating my scores anyway XD.

But I think I understand the explanation.

So the software is accurate enough, just not dead-nuts accurate. There's essentially no governing of adaptive sway or skew.
 
But that power droop would be reflected by drooping effective clocks
Power drop? The CPU power limit defaults to 75 W out of the box, so if it reaches that power limit, it will not clock higher. Now suddenly the mobo vendor haxxes the power sensor so that the CPU thinks it has 10 W more to go, before reaching "75 W", so it automagically clocks higher and the motherboard wins in reviews where boards are compared at stock settings.
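
To put numbers on that trick, using the 75 vs ~85 W figures above (the exact degree of skew is an assumption for illustration):

```python
# The haxxed-sensor trick in numbers (75 W limit, real draw ~85 W as above).
actual_draw_w = 85.0
reported_w    = 75.0
scale = reported_w / actual_draw_w       # ~0.88: sensor under-reports by ~12%
limit_w = 75.0
real_ceiling_w = limit_w / scale         # CPU won't throttle until ~85 W real
print(f"sensor scale {scale:.2f} -> real ceiling {real_ceiling_w:.1f} W")
```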
 
Sounds just like Gigabyte used to be.
 
Now suddenly the mobo vendor haxxes the power sensor so that the CPU thinks it has 10 W more to go, before reaching "75 W", so it automagically clocks higher and the motherboard wins in reviews where boards are compared at stock settings
Exactly this is shown by the HWiNFO sensor "Power Reporting Deviation" (PRD), at least on AM4. I'm not sure the sensor exists on AM5.
Maybe AM5 users can confirm that with HWiNFO.

The "skew" is done on current (A). I remember some AM4 AsRock boards to have specific settings on this so the user could bring current close back to "reality". But you need something to tell how much "off" current is. Thats PRD.
What is maybe interesting is that some boards even over state current so the CPU end up to underperform and by a lot.
So not sure how these vendors are configuring things in the end, and if all this deviation is intentional on all cases.

For example, look at this post (#553) on the HWiNFO forums, where PRD was 128% on an R5 5600:
PPT was reading 76 W during CB R23, but with PRD at 128% the true power of the CPU was down to
76 / 1.28 = 59.4 W
All-core frequency was down to 4.1 GHz because of the low true power.


That B450 board of course did not have any settings for adjusting the current readout. Even my X570 does not have those.
So the only option was to increase PPT. After setting PPT to 97 W and running CB R23 again, the CPU was up to 4.4 GHz all-core, and PRD now showed 126%:
97 / 1.26 = 77 W true power
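
That correction is simple enough to script; here it is checked against both readings quoted above:

```python
# True package power from HWiNFO's Power Reporting Deviation (PRD).
def true_power_w(reported_ppt_w: float, prd_percent: float) -> float:
    return reported_ppt_w / (prd_percent / 100)

print(true_power_w(76, 128))   # ~59.4 W at the stock 76 W PPT reading
print(true_power_w(97, 126))   # ~77.0 W after raising PPT to 97 W
```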
 
Exactly this is shown by the HWiNFO sensor "Power Reporting Deviation" (PRD), at least on AM4. I'm not sure the sensor exists on AM5.
Maybe AM5 users can confirm that with HWiNFO.
It does not. I assume that AM5 power reporting is more accurate, but who knows for sure.
 
Yeah, we have to ask Martin from HWiNFO about that...

EDIT:
There is the answer...

 
Power drop? The CPU power limit defaults to 75 W out of the box, so if it reaches that power limit, it will not clock higher. Now suddenly the mobo vendor haxxes the power sensor so that the CPU thinks it has 10 W more to go, before reaching "75 W", so it automagically clocks higher and the motherboard wins in reviews where boards are compared at stock settings.
Sounds like a surefire way to cause CPU degradation. But 10 W isn't exactly a big amount to make or break a benchmark score; you could almost say it's within the margin of error, really.

Yes, droop. VID at idle may be 1.35 V; at load, 1.28 V with an LLC setting of 4 on an Asus board.

With these near-default settings, the CPU effective clocks droop. Drop. W/e. So for competitive benchmarking, I only need to keep my effective clocks higher than the next guy's at the same frequency, and my scores will be better.
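
For reference, the droop in those numbers works out like this:

```python
# Vdroop from the figures above: 1.35 V idle VID, 1.28 V under load.
v_idle, v_load = 1.35, 1.28
droop_v = v_idle - v_load
print(f"{droop_v:.2f} V droop ({droop_v / v_idle * 100:.1f}%)")  # 0.07 V, ~5.2%
```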

Anyhow, you're saying the VRM amp reading is what's skewed by the mobo manufacturer, likely not the v-core, because that's easily measurable.

I'm lucky to have used my CPU in 4 separate boards. It seems to run identically, or near it, in all of them, and each has a different firmware flashed to it. Benchmark scores between them are all near identical as well. The current TUF board is using the factory-release BIOS and firmware for Raptor Lake.
 
Yeah, we have to ask Martin from HWiNFO about that...

EDIT:
There is the answer...

That's good to know, thanks. :) Although, judging by the 7800X3D review here on TPU (power was measured at the socket, not by software, AFAIK), I'd say the power sensor reading on AM5 is (somewhat) accurate.
 
I'm not sure I'm aware, can you expand? I set a power limit within ThrottleStop and verify it's hitting that limit in the same program. What do you mean by it not being the actual consumption?

Completely agree on the curve, thanks!
Ah, I see you've mentioned ThrottleStop before. So you set the limit and check if PKG Power hits that limit? In this case, all is fine. But you'd better leave out the points where PKG Power doesn't reach the limit - and that would be the horizontal part of the curve (or both curves).
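
In practice that filtering is trivial once both the set limit and the measured PKG Power are logged. A small sketch (the tolerance and sample values are made up; the shape mirrors the ~25 W / ~420-point ST saturation described earlier):

```python
# Keep only sweep points where measured package power actually reached the
# set limit; a ~2% tolerance (arbitrary choice) absorbs sensor noise.
def usable_points(samples, tol=0.02):
    """samples: list of (limit_w, measured_w, score) tuples."""
    return [s for s in samples if s[1] >= s[0] * (1 - tol)]

sweep = [(20, 20.1, 310), (25, 24.9, 405), (30, 25.2, 418), (35, 25.3, 420)]
print(usable_points(sweep))   # drops the saturated 30 W and 35 W points
```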
 