
Intel lying about their CPUs' TDP: who's not surprised?

1) that's been true for a while - for both vendors, the chips will boost past specs if allowed to,
Tell that to my 3200G
60 W part has never gone past 30 W at max load
 
I don't know what's wrong with the memory of some of you (@phanbuey ... wth?!) but I know with 100% certainty that I had an i5 3570K running 24/7 at 4.2 GHz with package power under 70 W. The CPU was rated at 77 W and I ran just over stock voltage for that OC. So that's 4 cores doing full-time turbo speeds with long-term power usage about 10% below the rating.

So yeah. I think it's pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated, then Intel rewrote their definition of what turbo should mean, changed some details and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that cannot be sustained for even two seconds, because you'll either burn a hole in your socket or your CPU just runs straight into thermal shutdown.

Kaby Lake quads already suffered from this, as the first generation where Intel started clocking to the moon on 14 nm, and since Coffee Lake it has become progressively worse. Intel then made a thinner IHS to combat some of the issues, they suddenly figured out how to solder stuff underneath, and even with all these measures they still feel the need to respond in threads about K-CPUs with as much as 'Don't OC'. Meanwhile, the sales department gets into a room with mobo makers to make sure multi-core enhancement settings are active at stock settings. Thx!

Wake the f up already. This is NOT business as usual, and it has been past any sense of normal for multiple generations now. I'm not sure how people can be so oblivious; either you WANT to be fooled or you're seriously losing the plot.

'Rated at base'... o_O A base you'd expect from an Atom in 2012. I have a Coffee Lake CPU right now that needs to shed 200 MHz of OC in the summer.

I'm never touching this Intel Core arch again.
 
So yeah. I think it's pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated, then Intel rewrote their definition of what turbo should mean, changed some details and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that cannot be sustained for even two seconds, because you'll either burn a hole in your socket or your CPU just runs straight into thermal shutdown.
This was a bit later and much simpler. The Skylake 6700K and Kaby Lake 7700K fit into TDP more or less fine. Coffee Lake's 8700K added 2 cores and clocked them up by a lot to compete with Ryzen, and that obviously sent power consumption up. Intel then started playing hide-and-seek by releasing their specs in one form but allowing, and suggesting, motherboard manufacturers to ignore spec settings - primarily power limits and boost duration - at least in default settings.
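If anyone wants to see what their own board actually programmed, rather than what the spec sheet says, here's a minimal sketch - assuming a Linux box with the intel_rapl powercap driver loaded; the sysfs paths below belong to that driver and may need root to read:

```python
#!/usr/bin/env python3
# Minimal sketch: print the package power limits the board/firmware actually
# programmed, which may or may not match the TDP printed on the box.
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def watts(fname: str) -> float:
    # limit files are stored in microwatts
    return int((pkg / fname).read_text()) / 1e6

# constraint_0 = long-term limit (PL1), constraint_1 = short-term limit (PL2)
pl1 = watts("constraint_0_power_limit_uw")
pl2 = watts("constraint_1_power_limit_uw")
tau = int((pkg / "constraint_0_time_window_us").read_text()) / 1e6

print(f"PL1 {pl1:.0f} W sustained (tau = {tau:.0f} s), PL2 {pl2:.0f} W burst")
# On boards shipping "multi-core enhancement" style defaults you will often
# see PL1 raised to match PL2, or both set absurdly high.
```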

On desktop we still have the same cores, just more of them. So the 8700K went to 130-ish W at full blast. The 9900K went to 200 W because real clocks rose in addition to the 2 extra cores, and the 10900K simply added 2 more cores to the mix, with power going to 250 W in the worst case. Whether these maximums are what you get in real usage is a different matter, but when planning motherboard VRMs and cooling you need to take the maximums into account.
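And to put numbers on the VRM side of that planning, a quick back-of-envelope - the ~1.25 V load voltage here is my assumption, purely illustrative:

```python
# Back-of-envelope check of why those maximums matter for VRM sizing.
def vrm_output_amps(package_watts: float, vcore: float = 1.25) -> float:
    # current the VRM must deliver at the socket, ignoring conversion losses
    return package_watts / vcore

for cpu, watts in [("8700K", 130), ("9900K", 200), ("10900K", 250)]:
    print(f"{cpu}: {watts} W -> ~{vrm_output_amps(watts):.0f} A")
# 250 W at 1.25 V is ~200 A, which is why high-end boards carry VRMs that
# look absurd next to a "125 W TDP" sticker.
```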

Tell that to my 3200G
60 W part has never gone past 30 W at max load
My 2400G ran at a 90 W limit at first. After a BIOS update or two it adhered to the spec 65 W power limit - and some settings like cTDP were lost, which I was quite pissed about.
If I wanted to be the conspiracy-theory type, I would say this might have had something to do with me using the same board with the same BIOS version that was in the Raven Ridge reviewers' pack... :D

But back to talking seriously: Zen and Zen+ adhered to a power limit set at TDP. Zen 2 - Ryzen 3000 non-APUs - is where the shenanigans started on AMD's side.
 
Yeah, I see now that the order of things up there is not entirely right :D But the net result stands. Intel's producing a load of junk atm and they're blatantly lying about it, trying to pass it off as somehow energy friendly.
 
Tell that to my 3200G
60 W part has never gone past 30 W at max load
That I find hard to believe.
I'm typing this on an i5 laptop that the Kill A Watt meter says is using around 30 W
 
Hope you didn't pay more than $180 or so... I think I lucked out at ~$150-160, however you want to calculate the $20 combo savings at Microcenter... (before the fall hardware stock madness)
I paid ~200 EUR for it.

That I find hard to believe.
I'm typing this on an i5 laptop that the Kill A Watt meter says is using around 30 W
Wondering how much my ThinkPad with its i5-4210M actually consumes.
 
That explains why it uses the power.

I want you to explain why Intel's marketing has the lower-TDP CPU using more power than the higher-wattage one.

The marketing and TDP ratings are the problem here, not the technical reasons why they use the electricity - the magic lightning inside the melted sand makes zappy zappy hot - but Intel's fudging the numbers really badly here.

It's using more power because the reviewer presumably used the same cooler for everything. They will boost as high as the cooling allows: cheaper cooling, less power. I assume they say it's a 65 W part because... well, people want less power use, which makes sense. Slap a 65 W cooling solution onto it and it'll be a lower-wattage part. The K series has always been branded as higher-power stuff. It's market segmentation, pure and simple.
 
I'm typing this on an i5 laptop that the Kill A Watt meter says is using around 30 W
So does mine.
Cinebench R20 is what I use to check it; it loads up all 4 cores at 100%.
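If you want to sanity-check the wall meter against what the CPU itself reports, here's a rough sketch - assuming Linux with the intel_rapl driver; note that energy_uj is root-readable only on recent kernels:

```python
#!/usr/bin/env python3
# Sketch: sample the CPU's package energy counter once a second while a
# load (Cinebench, etc.) runs, and print the implied package power.
# Counter wraparound is ignored for brevity.
import time
from pathlib import Path

counter = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

prev = int(counter.read_text())
for _ in range(30):
    time.sleep(1.0)
    cur = int(counter.read_text())
    print(f"package power: {(cur - prev) / 1e6:.1f} W")  # microjoules over 1 s
    prev = cur
```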
 
That's turbo boost for you. TDP is rated at "base frequency", that depressingly low figure well under the turbo speed. For example, take the 10900K: base frequency 3.7 GHz, 125 W. All bets are off once turbo kicks in.
This. The 9980H in my MacBook Pro has a TDP of 45 watts; under turbo that can easily be as much as 80 to 90 watts for a short duration and 65 watts for the long duration. Honestly, I want a CPU that does the best it can given thermal and power constraints, but I would like it if Intel were a little more honest about power consumption under boost conditions - same deal with AMD. I don't really care about base-clock TDP, because most of the time when I care about it, I'm not at base clocks; I'm somewhere between the base clock and the max boost clock.
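For what it's worth, that burst-then-settle pattern falls straight out of the PL1/PL2 scheme. A toy model - the numbers are illustrative, not the 9980H's actual fused limits:

```python
# Toy model of PL1/PL2: the package may draw up to PL2 as long as a moving
# average of recent power stays under PL1 (time constant tau).
PL1, PL2, TAU, STEP = 45.0, 90.0, 28.0, 1.0  # watts, watts, seconds, seconds

avg = 0.0                 # running average of package power, starting at idle
alpha = STEP / TAU
for t in range(40):
    draw = PL2 if avg < PL1 else PL1   # burst until the budget is spent
    avg += alpha * (draw - avg)
    print(f"t={t:2d}s  draw={draw:5.1f} W  avg={avg:5.1f} W")
# Prints roughly 20 s at 90 W, after which the draw settles at the 45 W
# sustained limit - the TDP only ever described that second phase.
```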
 
The worst part is I know Intel fanboys who rabidly defend these stats and say it's all lies.

One is still on an i7 970: "Intel's done me great all these years, I trust them!"
Says the AMD fanboy? :) :toast: /j
 
Don't most Z490 motherboard VRMs rival TR4's? In fact, the B550 boards did the same thing. That is all you need to know, as a 720 A VRM is stupid for a CPU with a TDP of 100 watts. It is true, though, that most 65 W AMD CPUs do maintain that threshold to within 10 to 20 watts. Threadrippers are balls to the wall, so if Intel is like that, and the 105 W 5800X also is, it supports the need (illusion) for a 12- or 14-phase VRM.
 
It's not fanboyism when they point out actual, provable flaws.
Yes, and had we been sitting around a table together, we would have all had a laugh.
 
Seems straightforward to me, at least for Intel: 65 W sitting there doing nothing, move the mouse and you're at 125 W, open a web page and you get the full 250, lol.

AMD? I don't know... I ran it at stock for a month. Now it's overclocked and it's no different from my 3770K with a hard clock on it. So not that great, but not terrible.
 
I knew that part :p


Mine doesn't; it maxes out at 100 W.

The 10-core CPUs are extremely efficient with C-states enabled while sitting at the desktop.


Just don't run Prime95 Small FFTs at that speed, or else you'll have to multiply the TDP by 2.
While I respect your input without limit, and I'm replying to the thread more than to you, I seriously think that what a computer pulls while sat doing nothing is irrelevant; turn it off and anything pulls zero.

And I think this because, to my mind, a PC sat idling is a total f#@£&ING waste of power, time and money, and I would never allow such in my presence.

Turn it off, or put it to use. Simple.

And this isn't new, hell no.

And I use an 8750H (home) and a Dell Latitude 5410 i5 (work).

No bias, no bull. All work as well as I would ideally like, within reason obviously, and none are that bad on power use in reality, with measured expectations.
 
Actually I'm not surprised, since Intel is still on 14 nm, and from that base it's hard to keep up while AMD is getting better in the market. So they do that - or, in my opinion, frame it that way.
 
TDP is just a marketing concept, not an engineering concept.
 
Tell that to my 3200G
60 W part has never gone past 30 W at max load

I'd suggest you get a better motherboard then, because it's holding back your performance greatly.
 
Seems straightforward to me, at least for Intel: 65 W sitting there doing nothing, move the mouse and you're at 125 W, open a web page and you get the full 250, lol.
AMD? I don't know... I ran it at stock for a month. Now it's overclocked and it's no different from my 3770K with a hard clock on it. So not that great, but not terrible.
I know this was a joke, but it actually seems to be the other way around. When idle - just showing the desktop and running the few background processes I have - the i5 was at 6 W but the R5 is at 30 W. Thankfully the B550 board I have is a bit more efficient than my Z370 board (and the B450 I had previously), so the overall difference for the entire computer is ~15 W.

Ryzen's IO die seems to consume a good 10-15 W, and this has a considerable effect at idle.

TDP is just a marketing concept, not an engineering concept.
For CPUs today? Unfortunately, yes.
In other contexts it is a perfectly valid engineering concept. Thermal Design Power should indicate the maximum amount of heat a component needs to dissipate so that cooling can be designed properly.
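In that original sense it's just a sizing input. A quick sketch with made-up but plausible numbers:

```python
# The classic engineering use of TDP: pick a cooler whose thermal resistance
# keeps the chip below its temperature limit at the rated dissipation.
#   T_cpu ~= T_ambient + P * R_total    (R in degrees C per watt)
def cpu_temp(power_w: float, r_c_per_w: float, t_ambient: float = 35.0) -> float:
    return t_ambient + power_w * r_c_per_w

cooler_r = 0.35  # C/W, plausible for a mid-range tower cooler
print(cpu_temp(125, cooler_r))  # ~79 C at the advertised 125 W: fine
print(cpu_temp(250, cooler_r))  # ~122 C at an unlimited-PL2 250 W: throttling
# A cooler sized honestly for the number on the box cannot handle the chip's
# real worst case - which is the whole complaint in this thread.
```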
 
TDP is just a marketing concept, not an engineering concept.
No, it's an engineering concept that marketing completely bullshitifies.

Its roots were sound, workable, informative and proportional; now it's a shit show, especially with regards to Intel.
 
No, it's an engineering concept that marketing completely bullshitifies.

Its roots were sound, workable, informative and proportional; now it's a shit show, especially with regards to Intel.
TDP was also never meant, and still isn't meant, to be a measure of power consumption. It is a measure of thermal output, to determine heatsink size.
 
TDP was also never meant, and still isn't meant, to be a measure of power consumption. It is a measure of thermal output, to determine heatsink size.
For a chip there is no real difference, is there? Practically all the power that goes in comes out as heat.
 
Hi,
Never been on default turbo clocks to know what power it uses.
All core, baby.
 