
Intel lying about their CPUs' TDP: who's not surprised?

Nah! Yes, you are right. Things that "annoy" humans may affect our quality of life. But is that really the criteria you want to use to decide which CPU is better?

Are you really suggesting AMDs don't get hot too?

What you are describing to me is poor design by the laptop maker or PC builder. Poor choice of fans, inadequate case cooling, etc.

That may be true but you are suggesting they are failing because the processors are failing and in particular, that those with Intels are failing at a faster rate! Not buying it. Show us evidence.

Frankly, I cannot recall the last time I saw a CPU (Intel or AMD) that just decided to die.

I'm mostly referring to laptops and mobile devices, which is where the TDP matters so much and where it causes issues. You chalk it up to laptop makers; I chalk it up to a combination of them and Intel's current approach to clocking. The line has become VERY thin, and this also spills over into the usability side of a device.

Is AMD different? I'm not saying that (not sure why you keep asking), but I do think they are more honest about advertising their TDPs, and the results generally spell that out too.

As for the CPUs dying. No. Me neither. But aggressive power demands do take a toll on circuitry and power delivery elsewhere, and so does heat. With lots of stuff packed together, this is no improvement. And again, this must be related to Intel's need to produce spec sheets that mean something in terms of marketing. 'Look, we gained another 100MHz and the TDP is still the same.' Is it, really?
 
- Quality of life: high temperature peaks mean low quality of life; your fans get noisy, and your hands on a laptop get hot. I didn't mean durability/endurance. Laptop CPUs have always gotten hot, but it's a difference whether they slowly creep to 80C and then even more slowly to 85C, or boost straight to 85C and then cool back to 50C to start it all over again, all the time. The behaviour has changed, and Sandy Bridge was, for Core, in the optimal position. 22nm made a big dent, partly due to increased density. But when Intel started needing those last few hundred megahertz to keep competing, the limits have been stretched further and further. Yes, I do believe devices with Intel CPUs that boost aggressively are likely to last shorter than they used to in the past. Time will tell, but the average lifetime of recent laptops is nothing to write home about in general. Is AMD different? I don't think that is the subject, and I think they have a lot of work left to do, especially on mobile CPUs.

- Aggressive temp cycling means what is described above. The limits are moved ever closer to the absolute boundaries of what the chip can do without burning to a crisp. What used to peak briefly at 80C now peaks at 85C or more. At the same time, idle temps have actually dropped due to more efficient power states, and because idle requires lower clocks than it used to due to IPC gains.

As always the devil is in the details, and Intel is doing a fine job creating a box of details that cross the line.

I'm not going to start pointing fingers and flinging the F-word around here, but the irony is that no x86 laptop CPU boosts harder, more dynamically and more frequently than the Renoir CPUs do. Period. And because all of desktop Zen 2 and Zen 3 behaves the same way, the poor "quality of life" and "aggressive temp cycling" are 95% of what makes any Ryzen a Ryzen from 2019 onwards. In fact, the aggressive boost improves user experience if anything, because it improves the system's response to user input within a fraction of a second (a few milliseconds for CPPC on Ryzen, low double digits for Speed Shift on Intel).

I don't see any Matisse, Renoir or Vermeer CPUs dying because of high temperature peaks and temp cycling. Some of the desktop chips die for no apparent reason because AMD still doesn't know how to write firmware or work out the quality control of their N7FF chips, but that's a different story. Nor do I see Kaby-R, Coffee, Comet and Ice Lake CPUs randomly dying because they make aggressive use of their boost envelope.

I don't get where this argument is going. The high temps reflect much more on individual laptop makers' abysmal thermal solutions than on the CPUs themselves. You do realize that PROCHOT is a thing, preventing the CPU from turning into molten slag if you don't spend every waking minute monitoring package temp?

Do they wear out sooner on a smaller process at higher clocks? Probably. Is that going to make an 8-year-old laptop more desirable than a 12-year-old laptop?
 
This is not new. I get that you had an i5 3xxx that ran cool at 4.3GHz @Vayra86 (you can still buy i5s that do that) -- and I get that Sandy Bridge ran cool since it wasn't pushed to the max. But when you're comparing the top-of-the-line chip to an old i5, I think you're skewing your memory a bit.

My computers included:
MacBook Pro 15" Retina - Ivy Bridge, idled in the mid-60s, Cinebenched at 100C; still being used by my mother-in-law
Dell XPS (2014) - hot as heck
Alienware 17" (2015) - also insanely hot; had to disable turbo to keep the VRM from throttling

Desktops:
6700K - super hot....
1800X - a 95W chip that sucked down 165W; would crash at 4.1GHz with anything less than a 360mm water setup (a 280mm AIO would crash)
8700K - also sucked down 165W but was faster
7820X - melted my house; rated 125W, sucked down 250W; a thick 240mm AIO minimum, overloaded air cooling
10850K - 225W in AVX loads at 5.0GHz, but 10 cores and much faster; 85C but requires water.

This is why I think my memory is 'warped' - it's really nothing new at the high end. It's just that 10 cores are hot on 14nm at 5GHz... that's kind of to be expected. They are not hot at 4.8GHz; in fact they are quite chilly. At 3.7 I'm sure they really do sit around 140W.

If you get the 10600K, I think you will find it runs quite cool, is cheap, runs on cheap boards and has no issues whatsoever. If you get a 5800X 8-core, I think you will find it runs quite hot despite being a 105W chip -- especially on a board that automatically sets the most aggressive PBO settings out of the box.
 
The sky isn't falling... it already fell back in the days when AVX2/FMA came to desktop processors. Even though Sandy Bridge introduced AVX to desktop, it probably wasn't used widely until Haswell appeared and brought AVX2.
 
I'd suggest you get a better motherboard then, because it's holding back your performance greatly.
It is not the CPU; it is actually slightly overperforming compared to benchmarks.
 
Tell that to my 3200G.
The 60W part has never gone past 30W at max load.
How do you know this? Have you measured it in some manner?

Frankly, I cannot recall the last time I saw a CPU (Intel or AMD) that just decided to die.
Neither can I. The last time I saw a CPU "die" was because of thermal concerns (clogged heatsink, poor airflow in the case). CPU resiliency has improved greatly over the last 20 years. Perhaps this is why Intel and AMD both do not worry too much about stating certain specs: they do not fear their IC product dying.
 
Neither can I. The last time I saw a CPU "die" was because of thermal concerns (clogged heatsink, poor airflow in the case). CPU resiliency has improved greatly over the last 20 years. Perhaps this is why Intel and AMD both do not worry too much about stating certain specs: they do not fear their IC product dying.
Controls, limits and their management have come a long way. CPUs throttle when temperature gets too high, they have current limits so overloading them is difficult, and I believe they have at least detection for voltage spikes as well. All of that is old, tried and true, and fast enough to make a CPU really, really reliable and resilient.
 
Hot topic today wow.

I'm in the camp of "I'm not surprised", with the caveat of "by anything Intel does to mislead consumers for profit".
 
In other contexts, it is a perfectly valid engineering concept
It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.

To anyone that might want something to play with..

Intel has a little thing called Power Gadget that shows how much power a CPU is pulling. It'll show my 2680v2s at 95-100W full load and my 4790K at 60W in normal use. Maybe some of you guys can give it a try on the 10 series.
It's software?
I use CoreTemp for reading CPU power consumption.
Could be inaccurate, but may not be either.
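For what it's worth, tools like Power Gadget and CoreTemp don't use a physical meter at all; they read the CPU's own energy counters (Intel's RAPL interface). A minimal sketch of the same idea on Linux, assuming the usual powercap sysfs path (an assumption — the path and counters vary by machine):

```python
# Sketch: read package power the way software monitors do, via the
# kernel's RAPL energy counter. Path below is an assumption; it exists
# on most Intel (and recent AMD) Linux systems, but check your own box.
import time

RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy, microjoules

def watts_from_counter(e0_uj, e1_uj, seconds):
    # power (W) = delta energy (J) / delta time (s)
    return (e1_uj - e0_uj) / 1e6 / seconds

def package_power(interval=1.0):
    # sample the counter twice and divide by the elapsed time
    with open(RAPL_PATH) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(RAPL_PATH) as f:
        e1 = int(f.read())
    return watts_from_counter(e0, e1, interval)
```

Note this is still a model-based estimate reported by the CPU itself, so "could be inaccurate, but may not be either" sums it up well.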
 
It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.
Absolutely, I wholeheartedly agree.
 
Also the idle -- the 4690K doesn't have all the newer power saving tech so it idles at like 56W stock
Who told you that? Mine idles at 9.6W power consumption @800MHz 0.76V.
My CPU pulls 56W at 3.5GHz Prime95.
At 4+GHz it starts to go to 75+W power consumption.
It maxed out at 83W, but I found I could push it to 100W at 4.3 without issue.
Past 4.3 the overclock is not stable.

KillaWatt perhaps
CoreTemp.

So yeah, I think it's pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated; then Intel rewrote their definition of what turbo should mean, changed some details and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that cannot be sustained even for two seconds, because you'll either burn a hole in your socket or your CPU just runs straight into thermal shutdown.
Is this shitshow why Swan had to step down? Faster CPUs no matter what?
 
Neither of those is reliable enough to be dependable. They are an OK ballpark reference but should not be used as a de facto method of measurement. KillaWatt-type solutions are much more accurate.

To determine CPU power usage? Not even.

That only measures power draw for the entire system at the wall. There's no accurate way to break that down to individual component power draw, because the efficiency of the power supply changes based on load. Comparing idle and full load from a Kill A Watt doesn't take power supply loss into account.
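To put numbers on that: the usual idle-vs-load subtraction from a wall meter bakes in PSU efficiency error, because the PSU sits at two different efficiency points. A rough sketch with hypothetical 80 Plus Gold-ish efficiency figures (the wall readings and efficiencies below are made up for illustration):

```python
# Sketch: why a wall meter can't cleanly isolate CPU power.
# DC-side power = AC wall power * PSU efficiency, but efficiency
# varies with load, so idle and load readings sit at different points.

def dc_power(wall_watts, efficiency):
    # convert an AC wall reading to DC power delivered to components
    return wall_watts * efficiency

idle_wall, load_wall = 60.0, 310.0   # hypothetical Kill A Watt readings
eff_idle, eff_load = 0.87, 0.92      # assumed: efficiency differs by load

# naive estimate ignores PSU loss entirely
naive = load_wall - idle_wall
# efficiency-aware estimate uses a different conversion at each point
corrected = dc_power(load_wall, eff_load) - dc_power(idle_wall, eff_idle)

print(f"naive:     {naive:.0f} W")      # 250 W
print(f"corrected: {corrected:.0f} W")  # 233 W
```

Even the "corrected" number is only as good as the efficiency curve you assume, which is the point: the wall meter is great for whole-system draw, not for isolating one component.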
 
It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.
Intel doesn't send out review samples for nothing. If you have some kind of load that is pushing 265W, it's even more critical to pay attention to the reviews.

There used to be a time when Intel made and sold their own motherboards. Now, if you paired a default Intel motherboard and cpu together, and your typical loads resulted in 2x TDP -- that's definitely a talking point.
 
It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.


It's software?
I use CoreTemp for reading CPU power consumption.
Could be inaccurate, but may not be either.
Yes, software. I trust Intel's reading of power draw over a third party's, and the same goes for Ryzen Master. HWiNFO is close in trust, but everything should be taken with a grain of salt.
 
I read through this and found myself asking whether any power user/enthusiast building a custom high-performance overclocking rig actually cares about TDP, which is merely a guarantee that a processor will operate within a given set of parameters at which it will not exceed a specified power consumption level.
 
Hi,
lol yeah I'd guess those people would just be wondering how to cool the little devil
 
I read through this and found myself asking whether any power user/enthusiast building a custom high-performance overclocking rig actually cares about TDP, which is merely a guarantee that a processor will operate within a given set of parameters at which it will not exceed a specified power consumption level.
I have NEVER once, not even in 24 years of building, considered TDP as a build/buy point.
I have read through this thread and also come to this pondering.
I am, however, an AMD fanboy and can surely say the FX chips SUCK power and ASS! You need a fing 850W PSU just to power an FX8300, and HOLY crap, the power-crazy chip can heat a double-wide in Alaska!
SO yeah, never really gave a crap about TDP...

Hi,
lol yeah I'd guess those people would just be wondering how to cool the little devil
RIGHT!?!?! LMFAO!
I mean, I am thinking of going back to liquid cooling for the 3700X! LOL.

Oh, and no, I do not think they are lying at all; no one is. The CPU can and does hit that TDP if you set it up exactly the way they did, so there is that too.. :slap:
 
I didn't go through almost 100 posts, and I don't know if and how many said it already...

Both Intel and AMD are not pulling numbers out of their arse.
The TDP rating is Thermal Design Power; it won't tell you the max power draw but the heat output towards the cooler under certain operating conditions, or at least that's what it's meant to be. If you want max power draw, look for CPU Package Power or CPU PPT (Package Power Tracking).

For AMD, TDP refers to the heat towards the cooler under certain conditions (ambient temp): a specific Tdelta between CPU and ambient. Not all heat produced by the CPU goes to the cooler; some of it goes through the CPU substrate to the socket and the board and gets dissipated from there.

For example all Ryzen 3000 series are like this:

65W TDP, 88W PPT
95W TDP, 125W PPT
105W TDP, 142W PPT
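Those pairs track AMD's published rule of thumb that PPT is roughly 1.35x TDP on AM4 (the 95W tier's 125W PPT sits slightly under that). As a quick sanity check:

```python
# AM4 socket power limit (PPT) is roughly 1.35x the rated TDP.
def approx_ppt(tdp_watts):
    return round(tdp_watts * 1.35)

for tdp in (65, 95, 105):
    print(f"{tdp}W TDP -> ~{approx_ppt(tdp)}W PPT")
# 65W -> 88W and 105W -> 142W match the list above exactly;
# 95W computes to ~128W, while the commonly quoted figure is 125W.
```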

Intel, on the other hand, is different. Intel CPUs have two power level stages: PL1 and PL2.
The TDP they quote is the PL1 power draw, which is the max sustainable power draw of the CPU. PL2 is much higher than that, but by default only holds for a certain period of time called "Tau"; the PL2/Tau pair is different for every CPU.

[Image: Intel PL1/PL2/Tau power limit diagram]
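Intel documents the PL1 limit as applying to an exponentially weighted moving average of package power, with Tau as the time constant: the CPU may draw up to PL2 until the average catches up to PL1. A sketch under that model, with illustrative (not official) numbers:

```python
# Sketch of Intel's PL1/PL2/Tau turbo budget, assuming the documented
# exponentially-weighted-moving-average model. PL1/PL2/Tau values here
# are illustrative only; real values differ per CPU and per motherboard.
import math

PL1, PL2, TAU = 125.0, 250.0, 56.0   # watts, watts, seconds
DT = 1.0                             # simulation step, seconds

def simulate(load_watts, seconds):
    """Return per-step package power under a sustained heavy load."""
    avg = 0.0                        # moving average of package power
    alpha = 1 - math.exp(-DT / TAU)  # EWMA weight for one step
    trace = []
    for _ in range(int(seconds / DT)):
        # draw up to PL2 while the average is under PL1, else clamp to PL1
        power = min(load_watts, PL2) if avg < PL1 else min(load_watts, PL1)
        avg += alpha * (power - avg)
        trace.append(power)
    return trace

trace = simulate(load_watts=250.0, seconds=300)
# early samples run at PL2; after roughly Tau*ln(2) seconds the budget
# is exhausted and the CPU settles at PL1
print(trace[0], trace[-1])  # 250.0 125.0
```

This is exactly the "beautiful cocktail of turbos" complaint in practice: the PL2 number on the spec sheet is real, but with these settings it only lasts for the first half-minute or so of a sustained load.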
 
And you can determine CPU usage rather easily by isolating measurements taken. You'll note I said "KillaWatt type". There are other forms of power measurement devices.

Please share the "KillaWatt" type device.
 
Neither of those is reliable enough to be dependable. They are an OK ballpark reference but should not be used as a de facto method of measurement. KillaWatt-type solutions are much more accurate.
Yeah, but it's not gonna make a 60W draw show up as 30W.
 