# Intel lying about their CPUs' TDP: who's not surprised?



## cst1992 (Feb 3, 2021)

Intel’s Desktop TDPs No Longer Useful to Predict CPU Power Consumption | ExtremeTech

Intel's higher-end desktop CPU TDPs no longer communicate anything useful about the CPU's power consumption under load. ...




Lying about power consumption numbers to make your products look good is just despicable.
Thank God I have a 4690k which means I don't have to deal with this mess.


----------



## phanbuey (Feb 3, 2021)

cst1992 said:


> Intel’s Desktop TDPs No Longer Useful to Predict CPU Power Consumption | ExtremeTech
> 
> 
> Intel's higher-end desktop CPU TDPs no longer communicate anything useful about the CPUs power consumption under load. ...
> ...



1) That's been true for a while - for both vendors, the chips will boost past specs if allowed to.
2) Peak power consumption depends on mobo turbo limits, and most of the mobo manufacturers 'cheat' and allow the chip to boost beyond specs (which is fine, it's a type of overclock), as per that article.
3) Yes, 10 cores on 14nm is power hungry, but it's still more efficient than a 4690k in perf/watt. A 4690k can peak at 152W, which is really not that far behind the 10700k and slightly over the 10600k.

None of those is a 'mess', nor is it a reason to stick to a 4690k.


----------



## Fouquin (Feb 3, 2021)

Hey, you've caught up to 2015! Boy you're in for a surprise, Intel also gets stuck on 14nm for *5 years*! I know, crazy right?


----------



## cst1992 (Feb 3, 2021)

I knew that part 



phanbuey said:


> A 4690k can peak at 152W


Mine doesn't; it maxes at 100W.


----------



## hat (Feb 3, 2021)

That's turbo boost for you. TDP is rated at "base frequency", that depressingly low figure well under the turbo speed. For example, take the 10900K: base frequency 3.7GHz, 125W. All bets are off once turbo kicks in.


----------



## phanbuey (Feb 3, 2021)

Also the idle -- the 4690K doesn't have all the newer power saving tech, so it idles at like 56W stock, where the 10 series sits at 15-35W depending on flavor... if you leave your computer on a lot, you would actually save a bit of power going to the newer chips.


----------



## unclewebb (Feb 3, 2021)

phanbuey said:


> where the 10 series sit at 15-35W


The 10 core CPUs are extremely efficient when the C states are enabled while sitting at the desktop.



http://imgur.com/i4tnKgl


Just don't run Prime95 Small FFTs at that speed or else you will have to multiply the TDP by 2.


----------



## Kissamies (Feb 3, 2021)

Their TDP should have been considered a joke for a few years already. But I wouldn't mind, as long as the cooler is fit to do its thing.


----------



## Mussels (Feb 3, 2021)

The worst part is I know Intel fanboys who rabidly defend these stats and say it's all lies.


One is still on an i7 970: "Intel's done me great all these years, I trust them!"


----------



## phanbuey (Feb 3, 2021)

Mussels said:


> The worst part is i know intel fanboys who rabidly defend these stats and say its lies
> 
> 
> one is still on an i7 970 "intels done me great all these years, i trust them!"



I'm so confused... AMD and Intel have been doing the same thing FOREVER... the Phenoms were rated for 94W but sucked down over 200W... Zen 3, while extremely efficient, also consumes over its rated TDP... OP is posting on a chip rated for a TDP of 88W that at stock config will eat over 150W. *Thermal* Design Power (TDP) != Power Consumption.

What exactly is the problem? Is it that motherboards are yolo boosting to the moon because they can? Is it because Intel can't get its sh*t together and is still on 14nm? I guess I am missing the part where we decided this was Intel's fault for lying...


----------



## Kissamies (Feb 3, 2021)

phanbuey said:


> What exactly is the problem?  Is it that motherboards are yolo boosting to the moon because they can?  I guess I am missing the part where we decided this was Intel's fault for lying...


My opinion would be that

a) the cheap motherboards have a hard time with their crappy VRMs on higher-end chips
b) getting better cooling always costs more


----------



## phanbuey (Feb 3, 2021)

Chloe Price said:


> My opinion would be that
> 
> a) the cheap motherboards have hard time with their crappy VRMs on higher end chips
> b) getting better cooling always costs more



Ok but... this is only partially true. GIGABYTE and ASRock cheap boards have severe VRM issues and are terrible, but you can get $140-150 Z490 boards from Asus and MSI that all run a 10850K/10900 at 5+GHz with no VRM issues whatsoever.

Cooling a hot chip is expensive for sure - but considering that the 5600X is going for roughly the same price as a 10850K right now (within about $25), you're still looking at the "budget" option. The 5800X comes with no cooler either and is about $120 more.


----------



## hat (Feb 3, 2021)

phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?... the phenoms were rated for 94W that sucked down over 200W... Zen 3 while extremely efficient also consumes over its rated TDP... OP is posting on a chip rated for a TDP of 88W that at stock config will eat over 150W.  *Thermal *Design Power (TDP) != Power Consumption.
> 
> What exactly is the problem?  Is it that motherboards are yolo boosting to the moon because they can?  Is it because intel can't get its sh*t together and is still on 14nm?   I guess I am missing the part where we decided this was Intel's fault for lying...


Like I was saying, TDP is measured at "base frequency". Both Intel and AMD are using some form of turbo boost that will go very close to the limit of the silicon, provided the power delivery and cooling are good enough.

TDP wasn't _terrible_ with older chips, like my 2600K, because turbo boost wasn't as aggressive and we were stuck with quad cores until the 8th Core generation. With 8 and even 10 core chips pushed to the limit, you're going to see a lot of power consumption. I could make my 2600K draw tons of power too if I slapped a huge cooler on it, clocked it to 5GHz and flooded it with enough voltage to keep up.


----------



## Kissamies (Feb 3, 2021)

phanbuey said:


> Ok but... this is partially true.  GIGABYTE and ASROCK cheap boards have severe VRM issues and are terrible but - you can get $140-150 z490 boards from Asus and MSI all run a 10850K/10900 at 5+ghz with no vrm issues whatsoever.
> 
> Cooling a hot chip is expensive for sure -- but considering that the 5600x is going roughly for the same price as a 10850K right now (within about $25) , you're still looking at the "budget" option.  5800x comes with no cooler either and is about $120 more.


I'd still get a 3600 for a bang-for-buck setup, like I did several months ago.


----------



## Mussels (Feb 3, 2021)

phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?... the phenoms were rated for 94W that sucked down over 200W... Zen 3 while extremely efficient also consumes over its rated TDP... OP is posting on a chip rated for a TDP of 88W that at stock config will eat over 150W.  *Thermal *Design Power (TDP) != Power Consumption.
> 
> What exactly is the problem?  Is it that motherboards are yolo boosting to the moon because they can?  Is it because intel can't get its sh*t together and is still on 14nm?   I guess I am missing the part where we decided this was Intel's fault for lying...


It's because the numbers have meant less and less every generation, to the point where a 65W CPU uses more power than a 125W one in *INTEL'S OWN PRODUCT STACK*.

AMD's current ones are more the accepted norm, with TDP being an 'average' and the reality being a little higher (65W = 85W, 105W = 140W).

Intel's, OTOH... 65W = 214W and 125W = 204W.


----------



## R0H1T (Feb 3, 2021)

phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?


AMD's definition of TDP is something else & doesn't really translate into power consumption directly. We've been over this; I'll also add that AMD does enforce their "TDP limits" more stringently, & there's a whole host of other settings that affect it as well, like PPT, EDC, TDC et al.


----------



## tabascosauz (Feb 3, 2021)

I really don't see how any of this is "lying" if one has the brain capacity to put two and two together, and figure out that TDP doesn't govern jack shit related to actual power consumption for either company, and hasn't ever in recent memory. The only reason it's in the spotlight now is because of the wide gulf between base clock and boost clock. PL1 isn't some new kid on the block.

Now for mobile chips, Intel does some nefarious marketing manipulation using its TDP-up and TDP-down mechanism to misrepresent what its chips actually do in a practical TDP configuration. Now THAT's borderline lying.

As for the "but my 4690K is efficient" - LOL, nice one. Try putting an AVX load on that chip for once and see what "TDP" means. Sitting here looking over at my 4790K, trying to think of all the times when 88W ever meant anything to it.


----------



## Mussels (Feb 3, 2021)

tabascosauz said:


> I really don't see how any of this is "lying" if one has the brain capacity to put two and two together, and figure out that TDP doesn't govern jack shit related to actual power consumption for either company, and hasn't ever in recent memory. The only reason it's in the spotlight now is because of the wide gulf between base clock and boost clock. PL1 isn't some new kid on the block.
> 
> Now for mobile chips, Intel does some nefarious marketing manipulation using its TDP-up and TDP-down mechanism to misrepresent what its chips actually do in a practical TDP configuration. Now THAT's borderline lying.
> 
> As for the "but my 4690K is efficient" LOL nice one, try putting an AVX load on that chip for once and see what "TDP" means. Sitting here looking over at my 4790K, trying to think of all the times when 88W ever meant anything to it



please use your logic to explain how the 65W 10700 uses more power than the 125W 10700k


----------



## Frick (Feb 3, 2021)

hat said:


> That's turbo boost for you. TDP is rated at "base frequency", that depressingly low figure well under the turbo speed. For example, take the 10900k, base frequency 3.7GHz, 125w. All bets are off once turbo kicks in.



This is all there is to it.

Disable boost and it's fine.



Mussels said:


> please use your logic to explain how the 65W 10700 uses more power than the 125W 10700k



Boosts slightly higher?


----------



## Mussels (Feb 3, 2021)

Frick said:


> Boosts slightly higher?



That explains why it uses the power.

I want you to explain why Intel's marketing has the lower-TDP CPU using more power than the higher-wattage one.

The marketing and TDP ratings are the problem here, not the technical reasons why they use the electricity - the magical lightning inside the melted sand makes zappy zappy hot, but Intel's fudging the numbers really badly here.


----------



## Gungar (Feb 3, 2021)

Mussels said:


> please use your logic to explain how the 65W 10700 uses more power than the 125W 10700k



Lower quality silicon.


----------



## Mussels (Feb 3, 2021)

Gungar said:


> Lower quality silicon.


Best answer so far, tbh.

Still doesn't excuse Intel for advertising half the TDP (THERMAL design power) for the chip that uses more power and produces more heat.


----------



## oxrufiioxo (Feb 3, 2021)

Mussels said:


> please use your logic to explain how the 65W 10700 uses more power than the 125W 10700k



Most likely down to most Z motherboards running the chip without power limits to begin with (or the reviewer manually doing it), plus it having a worse IHS than the K chip and most likely being a lesser bin, so higher voltage to hit whatever boost frequency.

Asus is the only company AFAIK that enforces power limits, at least till you enable XMP lol


----------



## plonk420 (Feb 3, 2021)

Chloe Price said:


> I'd still get a 3600 for a bang for buck setup like I did several month ago..



Hope you didn't pay more than $180 or so... I think I lucked out at ~$150-160, however you want to calculate the $20 combo savings at Microcenter... (before the fall hardware stock madness)


----------



## londiste (Feb 3, 2021)

That article is about 3.5 years too late.
The CPU that started this was the 8700K, which ran at ~130W at full blast (the "boost" 4.3GHz on all cores), and that came out in autumn 2017. High-end Intel CPUs have only gotten worse since.
AMD isn't a slouch any more either; since the Ryzen 3000 series there is a complex system of power limits which, in the big picture, boils down to a power limit of 1.35x TDP.
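That 1.35x relationship is easy to sanity-check in two lines; a minimal sketch, treating the multiplier and the rounded figures as the commonly cited stock limits rather than an official AMD formula:

```python
# Sketch: AMD's stock package power limit (PPT) since Ryzen 3000 is
# roughly 1.35x the advertised TDP. Commonly cited values, not an
# official formula.

def stock_ppt(tdp_watts: float) -> int:
    """Approximate stock package power limit (watts) for a given TDP."""
    return round(tdp_watts * 1.35)

for tdp in (65, 105):
    print(f"{tdp} W TDP -> ~{stock_ppt(tdp)} W PPT")
```

Those round to the 88W and 142W limits usually reported for 65W and 105W parts.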



phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?... the phenoms were rated for 94W that sucked down over 200W...


This has not been going on forever. There have been CPUs that exceed TDP to some extent, but it was not a major issue until a few years ago. The top-end Phenom was the 1100T and it generally stayed within its 125W TDP. There was the FX-9590 that went over 200W, but it also had a TDP of 220W.



hat said:


> TDP wasn't _terrible_ with older chips, like my 2600k, because turbo boost wasn't as aggressive and we were stuck with quad cores until the 8th Core generation.


TDP was not terrible because it was enforced as a power limit. It of course helped that there was no need to fudge the numbers or outright lie, because the chips genuinely did not draw more than TDP.


----------



## lexluthermiester (Feb 3, 2021)

cst1992 said:


> Mine doesn't; it maxes at 100W.


And you arrived at this conclusion how? KillaWatt perhaps?



Mussels said:


> The worst part is i know intel fanboys who rabidly defend these stats and say its lies


We all know they are.


Mussels said:


> one is still on an i7 970 "intels done me great all these years, i trust them!"


REALLY? ROFLMAO!!


----------



## Hachi_Roku256563 (Feb 3, 2021)

phanbuey said:


> 1) that's been true for a while - for both vendors, the chips will boost past specs if allowed to,


Tell that to my 3200G.
The 60W part has never gone past 30W at max load.


----------



## Vayra86 (Feb 3, 2021)

I don't know what's wrong with the memory of some of you (@phanbuey ... wth?!), but I know with 100% certainty that I had an i5 3570K running 24/7 at 4.2GHz with package power under 70W. The CPU was rated at 77W and I ran just over stock voltage for that OC. So that's 4 cores doing full-time turbo speeds with long-term power usage about 10% below the rating.

So yeah. I think it's pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated, then Intel rewrote their definition of what turbo should mean, changed some details, and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that can't be sustained for even two seconds, because you'll either burn a hole in your socket or your CPU just runs straight into thermal shutdown.

Kaby Lake quads even suffered from this, as the first generation where Intel started clocking to the moon on 14nm, and since Coffee Lake it has become progressively worse. Intel then made a thinner IHS to combat some of the issues, suddenly figured out how to solder stuff underneath, and even with all these measures they still feel the need to respond in topics for K-CPUs saying as much as 'Don't OC'. Meanwhile, the sales department gets into a room with mobo makers to make sure multi-core enhancement settings are active at stock. Thx!

Wake the f up already. This is NOT business as usual, and it has been past any sense of normal for multiple generations. I'm not sure how people can be so oblivious; either you WANT to be fooled or you're seriously losing the plot.

'Rated at base'...  A base you'd expect from an Atom in 2012. I have a Coffee Lake CPU right now that needs to shed 200mhz of OC in the summer.

I'm never touching this Intel Core arch again.


----------



## londiste (Feb 3, 2021)

Vayra86 said:


> So yeah. I think its pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated, then Intel rewrote their definition of what turbo should mean, they changed some details and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that cannot sustain even for two seconds because you'll either burn a hole in your socket or your CPU itself just runs straight into thermal shutdown.


This came a bit later and is much simpler. The Skylake 6700K and Kaby Lake 7700K fit into TDP more or less fine. Coffee Lake's 8700K added 2 cores and clocked them up by a lot to compete with Ryzen, and that obviously sent power consumption up. Intel then started playing hide-and-seek by releasing their specs in one form but allowing and suggesting that motherboard manufacturers ignore spec settings, primarily power limits and boost period (at least in default settings).

On desktop we still have the same cores, just more of them. So the 8700K went to 130-ish W at full blast, the 9900K went to 200W because real clocks rose in addition to 2 more cores, and the 10900K simply added 2 more cores to the mix, with power going to 250W in the worst case. Whether these maximums are what you get in real usage is a different matter, but when planning for motherboard VRM and cooling you need to take maximums into account.



Isaac` said:


> Tell that to my 3200g
> 60w part has never gone past 30 at max load


My 2400G ran at a 90W limit at first. After a BIOS update or two it adhered to the spec 65W power limit - and some settings like cTDP were lost, which I was quite pissed about.
If I wanted to be the conspiracy-theory type of person, I would say this might have had something to do with me using the same board with the same BIOS version that was in the Raven Ridge reviewers' pack... 

But back to talking seriously: Zen and Zen+ adhered to a power limit set at TDP. Zen 2 - Ryzen 3000 non-APUs - is where the shenanigans started on AMD's side.


----------



## Vayra86 (Feb 3, 2021)

Yeah, the order of things is not entirely right up there, I see now.  But the net result stands. Intel's producing a load of junk atm and they're blatantly lying about it, trying to pass it off as somehow energy friendly.


----------



## Caring1 (Feb 3, 2021)

Isaac` said:


> Tell that to my 3200g
> 60w part has never gone past 30 at max load


That I find hard to believe.
I'm typing this on an i5 Laptop that the killawatt meter says is using around 30W


----------



## Kissamies (Feb 3, 2021)

plonk420 said:


> hope you didn't pay more than $180 or so... i think i lucked out at ~$150-160, however you want to calculate the $20 combo savings at Microcenter... (before the fall hardware stock madness)


I paid ~200EUR for it.



Caring1 said:


> That I find hard to believe.
> I'm typing this on an i5 Laptop that the killawatt meter says is using around 30W


Wondering how much my ThinkPad with its i5-4210M actually consumes.


----------



## Frick (Feb 3, 2021)

Mussels said:


> That explains why it uses the power.
> 
> I want you to explain why intels marketing has the lower TDP CPU using more power than the higher wattage one.
> 
> The marketing and TDP ratings are the problem here, not the technical reasons why they use the electricity - the magical lighting inside the melted sand make zappy zappy hot, but intels fudging the numbers really badly here.



Using more power because the reviewer presumably used the same cooler for everything. They will boost as high as the cooling allows. Cheaper cooling, less power. I assume they say it's a 65W part because... well, people want less power use, which makes sense. Slap a 65W cooling solution on it and it'll be a lower-wattage part. The K series has always been branded as higher-power stuff. It's market segmentation, pure and simple.


----------



## Hachi_Roku256563 (Feb 3, 2021)

Caring1 said:


> I'm typing this on an i5 Laptop that the killawatt meter says is using around 30W


So does mine.
Cinebench R20 is what I use to check it; it loads up all 4 cores at 100%.


----------



## Aquinus (Feb 3, 2021)

hat said:


> That's turbo boost for you. TDP is rated at "base frequency", that depressingly low figure well under the turbo speed. For example, take the 10900k, base frequency 3.7GHz, 125w. All bets are off once turbo kicks in.


This. The 9980H in my MacBook Pro has a TDP of 45 watts; under turbo that can easily be as much as 80 to 90 watts for short durations and 65 watts sustained. Honestly, I want a CPU that does the best it can given thermal and power constraints, but I would like it if Intel were a little more honest about power consumption under boost conditions - same deal with AMD. I honestly don't really care about base-clock TDP, because most of the time when I care about it, I'm not at base clocks. I'm somewhere between the base clock and the max boost clock.


----------



## xrobwx71 (Feb 3, 2021)

Mussels said:


> The worst part is i know intel fanboys who rabidly defend these stats and say its lies
> 
> 
> one is still on an i7 970 "intels done me great all these years, i trust them!"


Says the AMD fanboy?    /j


----------



## Toothless (Feb 3, 2021)

xrobwx71 said:


> Says the AMD fanboy?    /j


It's not fanboyism when they point out actual, provable flaws.


----------



## kapone32 (Feb 3, 2021)

Don't most Z490 MB VRMs rival TR4? In fact, the B550 boards did the same thing. That is all you need to know, as a 720-amp VRM is stupid for a CPU with a TDP of 100 watts. It is true, though, that most 65-watt AMD CPUs do maintain that threshold to within 10 to 20 watts. Threadrippers are balls to the wall, so if Intel is like that, and the 105-watt 5800X also is like that, it supports the need (illusion) for a 12- or 14-phase VRM.


----------



## xrobwx71 (Feb 3, 2021)

Toothless said:


> Its not fanboyism when they point out actual, provable flaws.


Yes, and had we been sitting around a table together, we would have all had a laugh.


----------



## 95Viper (Feb 3, 2021)

Keep it to the topic.

Thank You.


----------



## freeagent (Feb 3, 2021)

Seems straightforward to me, at least for Intel.. 65W sitting there doing nothing, move the mouse and you're at 125W, open a web page and you get the full 250 lol.

AMD? I don't know.. I ran it at stock for a month. Now it's overclocked and it's no different than my 3770K with a hard clock on it. So not that great, but not terrible.


----------



## TheoneandonlyMrK (Feb 3, 2021)

cst1992 said:


> I knew that part
> 
> 
> Mine doesn't; it maxes at 100W.





unclewebb said:


> The 10 core CPUs are extremely efficient when the C states are enabled while sitting at the desktop.
> 
> 
> 
> ...


While I respect your input without limit, and I'm replying to the thread more than to you, I seriously think that what a computer pulls while sat doing nothing is irrelevant; turn it off and anything pulls zero.

And I think this because, to my mind, a PC sat idling is a total f#@£&ING waste of power, time and money, and I would never allow such in my presence.

Turn it off, or put it to use. Simple.

And this isn't new, hell no.

And I use an 8750H (home) and a Dell Latitude 5410 i5 (work).

No bias, no bull. All work as well as I would ideally like, within reason obviously, and none are that bad on power use in reality, with measured expectations.


----------



## micropage7 (Feb 3, 2021)

Actually I'm not surprised, since Intel still runs 14nm and on that basis it's hard for them to keep up while AMD is getting better in the market.
So they do that - or, in my opinion, frame it.


----------



## Vario (Feb 3, 2021)

TDP is just a marketing concept, not an engineering concept.


----------



## newtekie1 (Feb 3, 2021)

Isaac` said:


> Tell that to my 3200g
> 60w part has never gone past 30 at max load



I'd suggest you get a better motherboard then, because it's holding back your performance greatly.


----------



## londiste (Feb 3, 2021)

freeagent said:


> Seems straight forward to me, at least for intel.. 65w sitting there doing nothing, move the mouse and your at 125w, open a web page and you get the full 250 lol.
> AMD? I don't know.. I ran it at stock for a month. Now its overclocked and its no different than my 3770K with a hard clock on it. So not that great, but not terrible.


I know this was a joke but it actually seems to be the other way around. When idle, just showing desktop and running the few background processes I have, i5 was at 6W but R5 is at 30W. Thankfully the B550 board I have is a bit more efficient than my Z370 board (and the B450 I had previously) so the overall difference for the entire computer is ~15W.

Ryzen's IO die seems to consume a good 10-15W, and this has a considerable effect at idle.



Vario said:


> TDP is just a marketing concept, not an engineering concept.


For CPUs today? Unfortunately, yes.
In other contexts it is a perfectly valid engineering concept: Thermal Design Power should indicate the maximum amount of heat a component needs to dissipate so that cooling can be designed properly.


----------



## TheoneandonlyMrK (Feb 3, 2021)

Vario said:


> TDP is just a marketing concept, not an engineering concept.


No, it's an engineering concept that marketing completely bullshitifies.

Its roots were sound, workable, informative and proportional; now it's a shit show, especially with regard to Intel.


----------



## newtekie1 (Feb 3, 2021)

theoneandonlymrk said:


> No it's an engineering concept that marketing completely bullshitifies.
> 
> It's roots were sound, workable, informative and proportional, now it's a shit show especially with regards to Intel.


TDP was also never meant, and still isn't meant, to be a measure of power consumption. It is a measure of thermal output, used to determine heatsink size.
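As an illustration of that original engineering use, cooler sizing is simple arithmetic: the cooler's thermal resistance (degrees per watt of heat) times the heat output gives the temperature rise over ambient. A minimal sketch - the temperatures and wattages below are made up for illustration, not taken from any datasheet:

```python
# Sketch: using TDP the way it was meant - to pick a heatsink.
# A cooler works if its thermal resistance (degC per watt) is low
# enough for the heat it must move. Illustrative numbers only.

def max_thermal_resistance(t_case_max, t_ambient, heat_watts):
    """Largest cooler thermal resistance (degC/W) that still keeps
    the CPU under its case temperature limit."""
    return (t_case_max - t_ambient) / heat_watts

# A "125 W" chip with a 72 degC case limit in a 30 degC chassis:
print(max_thermal_resistance(72, 30, 125))  # 0.336 degC/W
# The same chip actually dissipating 250 W under turbo needs a
# cooler twice as good:
print(max_thermal_resistance(72, 30, 250))  # 0.168 degC/W
```

Which is exactly why a TDP number that understates real heat output by 2x breaks the one calculation it was invented for.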


----------



## londiste (Feb 3, 2021)

newtekie1 said:


> TDP was also never meant, and still isn't meant to be a measure of power consumption. It is measure of thermal output to determine heatsink size.


For a chip there is no real difference, is there? Practically all the power that goes in comes out as heat.


----------



## ThrashZone (Feb 3, 2021)

Hi,
Never been on default turbo clocks to know what power it uses.
All core baby.


----------



## Toothless (Feb 3, 2021)

To anyone that might want something to play with.. 

Intel has a little tool called Power Gadget that shows how much power a CPU is pulling. It'll show my 2680v2s at 95-100W under full load and my 4790K at 60W in normal use. Maybe some of you can give it a try on the 10 series.
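On Linux the same package-power counter can be read from the RAPL powercap interface in sysfs, no extra tools needed. A rough sketch - the sysfs path is the usual one on Intel systems, and the script just reports if it isn't present:

```python
# Rough sketch: measure CPU package power on Linux via the RAPL
# powercap interface (the same counter Intel Power Gadget reads).
# Assumes the usual sysfs path on Intel systems; not available
# everywhere, so we fall back gracefully.
import os
import time

RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def watts(e0_uj: int, e1_uj: int, seconds: float) -> float:
    """Convert two cumulative energy readings (microjoules) to watts."""
    return (e1_uj - e0_uj) / seconds / 1e6

if os.path.exists(RAPL):
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(1.0)
    with open(RAPL) as f:
        e1 = int(f.read())
    print(f"package power: {watts(e0, e1, 1.0):.1f} W")
else:
    print("RAPL interface not available on this system")
```

Note this is package power at the CPU, not wall power, so it will read lower than a Kill A Watt.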


----------



## RandallFlagg (Feb 3, 2021)

Doing power consumption comparisons at max turbo all the time is about as logical as testing a car's fuel efficiency at full throttle and max speed for an extended period of time. Unless you are a track racer, that's meaningless.

TDP is for determining *average* minimum heat dissipation in stock form. Max power consumption under turbo is by default something that only lasts 5-8 seconds, and then drops so that the chip can maintain that TDP average.

If these sites were interested in coming up with a useful power metric, they would use some standard benchmark representing a typical workload to measure overall power consumption in the real world - and in the real world, most PCs are sitting around under 5% CPU usage. For cars there's the EPA, which is why no one would get away with measuring MPG on a race track. We don't have anyone defining that in this space, so people get to make these idiotic hyperbolic arguments.
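That burst-then-settle behaviour is Intel's PL1/PL2/tau scheme: the chip may draw up to PL2 while a moving average of package power stays under PL1 (the TDP figure). A toy simulation - the limits and time constant are illustrative, not any specific SKU's real values, and actual firmware is more involved:

```python
# Toy model of Intel's turbo power budget: draw up to PL2 while an
# exponentially weighted moving average (EWMA) of package power is
# below PL1; once the average catches up, settle at PL1 (= TDP).
# Illustrative numbers, not any specific SKU's real limits.

def simulate(pl1=125.0, pl2=250.0, tau=28.0, demand=250.0,
             duration=60.0, dt=0.1):
    """Return (time, watts) samples for a constant heavy load."""
    ewma, t, samples = 0.0, 0.0, []
    while t < duration:
        # Burst to PL2 while the averaged budget allows it...
        power = min(demand, pl2 if ewma < pl1 else pl1)
        # ...then fold this sample into the moving average.
        ewma += (power - ewma) * (dt / tau)
        samples.append((t, power))
        t += dt
    return samples

samples = simulate()
burst = [t for t, p in samples if p > 125.0]
print(f"turbo burst lasted ~{burst[-1]:.1f} s before settling at PL1")
```

With these numbers the 250W burst lasts on the order of 20 seconds before the average reaches PL1, which is the "boost for a bit, then drop back to the rated figure" pattern reviewers measure - and why a short benchmark and a sustained one report very different power.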


----------



## Bill_Bright (Feb 3, 2021)

Mussels said:


> The worst part is i know intel fanboys who rabidly defend these stats and say its lies


I disagree. *The worst part is pi$$-poor journalism misrepresenting the facts with falsehoods* and the opposing fanboys who rapidly pile on to defend the article and its falsehoods and then use that to attack the competition without even doing any fact checking to see if the article is biased or factual!

Note where the article says (my *bold underline* added),

Extreme Tech said:

> On paper, an Intel CPU’s TDP is the *maximum power consumed* under a sustained workload _at base frequency._



Anybody can see in seconds that this is false! That is NOT how Intel defines TDP! Using the same CPU as the article did, as seen in the ARK page for the Core i7-10700K, if you hover over TDP to see Intel's definition it clearly says (again, my *bold underline* added),

Intel said:

> Thermal Design Power (TDP) represents the *average power*, in watts, the processor *dissipates *when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload.



Come on, everyone! Power "consumed" does NOT and never has equaled power "dissipated"! Nothing made by Man is 100% efficient! The CPU would not generate any heat if it were. Nor does maximum equal average.

It is pretty clear the purpose of that article is simply to launch another bashing session against the big bad Intel even though AMD's published TDP specs are vague and deceptive too!

The fault does NOT belong with Intel or AMD, but with the entire processor industry - which includes VIA, NVIDIA, Qualcomm, Motorola and others. The industry needs to get together and come up with an industry standard for these terms and for how such values are measured and published - in a similar way to how they all came together years ago to create the ATX form factor standard.


----------



## londiste (Feb 3, 2021)

@RandallFlagg, "typical" is very difficult to pin down. Intel's big numbers are usually what you get with Prime95. Even heavier productivity workloads trail by a considerable margin. Anything lower than that - desktop usage scenarios, gaming - will probably fit in TDP anyway and will vary by a large margin.



Bill_Bright said:


> Come on everyone! Power "consumed" does NOT and never has equaled power "dissipated"!


What percentage of the power that goes into a CPU is used for anything else but radiating heat?
Btw, how this works in reality is just the opposite of what you said: CPUs are incredibly *in*efficient - they use a small amount of power to do useful work (i.e. compute stuff) and the rest is wasted as heat.


----------



## freeagent (Feb 3, 2021)

londiste said:


> I know this was a joke but it actually seems to be the other way around. When idle, just showing desktop and running the few background processes I have, i5 was at 6W but R5 is at 30W. Thankfully the B550 board I have is a bit more efficient than my Z370 board (and the B450 I had previously) so the overall difference for the entire computer is ~15W.
> 
> Ryzen's IO Die seems to consume a good 10-15W and this has considerable effect at idle.
> 
> ...


I was looking at the wall. They are pretty similar. Not bad for 6 vs 4 cores. The quad was getting a buttload of voltage too. Z77 vs B550..


----------



## TheoneandonlyMrK (Feb 3, 2021)

newtekie1 said:


> TDP was also never meant, and still isn't meant to be a measure of power consumption. It is measure of thermal output to determine heatsink size.


And I did say engineers did use it correctly back in the day, didn't I, and that marketing has confused its use dramatically.
So we agree then, yes or no?

Looking back, I might not have expressed my point adequately before.


----------



## Bill_Bright (Feb 3, 2021)

londiste said:


> What percentage from the power that goes into CPU is used for anything else but radiating heat?


That's the problem, isn't it? There is no industry standard dictating how such values can be determined. It is not like a motor, for example, where you can accurately measure the power consumed and compare it to the turning power of the spinning motor.

It is not like a power supply, where you can measure the voltage and current at the wall and compare them to the output voltage and current.

How do you accurately measure CPU output power, then compare that to the amount consumed, then use that to compare against competing processors, AND THEN use that data to determine which processor can do more "work" in a given amount of time?


londiste said:


> Btw, how this works in reality is just the opposite of what you said. CPUs are incredibly *in*efficient - they use a small amount of power to do useful work (i.e. compute stuff) and the rest is wasted as heat.


 NO!!!!!!! I NEVER said anything of the sort! I was pretty clear that CPUs generate heat - that clearly means they are inefficient. I specifically said nothing man-made is 100% efficient. That includes CPUs.


----------



## londiste (Feb 3, 2021)

Bill_Bright said:


> NO!!!!!!! I NEVER said anything of the sort! I was pretty clear that CPUs generate heat - that clearly means they are inefficient. I specifically said nothing man-made is 100% efficient. That includes CPUs.


Sorry, I misunderstood what you meant


----------



## tussinman (Feb 3, 2021)

RandallFlagg said:


> Doing power consumption comparisons at max turbo all the time is about as logical as testing a car's fuel efficiency at full throttle and max speed for an extended period of time.  Unless you are a track racer, that's meaningless.
> 
> TDP is for determining *average* minimum heat dissipation in stock form.  _*Max power consumption under turbo is by default something that only lasts 5-8 seconds, and then drops so that it can maintain that TDP average. *_
> 
> *If these sites were interested in coming up with a useful power metric, they would use some standard benchmark representing typical workload to measure overall power consumption in the real world, and in the real world most PCs are sitting around under 5% CPU usage*.  For cars, they have EPA, which is why no one would get away with measuring MPG on a race track.  We don't have anyone defining that in this space so people get to make these idiotic hyperbolic arguments.


Good point. That would explain why the techpowerup gaming consumption (average, not just random spikes) shows the 10500 and 5600X as nearly identical. Even the non-K 10700 is only showing 8 watts higher than the 5600X when they're both at full clocks/strength, yet if you only measured the short spikes the 10700 would be way higher.


----------



## londiste (Feb 3, 2021)

Bill_Bright said:


> That's the problem, isn't it? There is no industry standard dictating how such values are determined. It is not like a motor, for example, where you can accurately measure the power consumed and compare it to the turning power of the spinning motor.


Physics dictates that the power consumed must go somewhere. In an IC, there really are not many places for the energy to go but heat. It might give off some minor RF radiation (hopefully not), but even indirectly all the other conversion chains end up as heat. Anything else is a very minor fraction of a percent, if even that. For our purposes, the same power that goes into a CPU will come out as heat.


tussinman said:


> Good point. That would explain why the techpowerup gaming consumption (average, not just random spikes) shows the 10500 and 5600X as nearly identical. Even the non-K 10700 on max turbo is only showing 8 watts higher than the 5600X when they're both at full clocks/strength


Gaming is not a heavy load. Even games that we consider properly loading CPU cores and threads are not using large parts of actual CPU die. Even more, when it comes to Intel CPUs not using AVX2 (which almost no games use) will bring power consumption down by a lot.


----------



## newtekie1 (Feb 3, 2021)

londiste said:


> For a chip there is no real difference, is there? Practically all the power that goes in comes out as heat.



From a thermal solution standpoint there is. The peak power can be way above the rated TDP and the cooler can still handle it in bursts. Intel has been taking advantage of this since Turbo Boost was invented; TDP has never been an absolute limit on power consumption. It is also why turbo was, until recent generations, governed completely by temperature. Now it is governed by power and temperature to keep things at least somewhat reasonable. Thermal solutions don't really care about peaks in heat output; they just absorb them and keep going. However, if you have a chip that says it is going to output 95w and it constantly outputs 125w, and you put a heatsink designed for 95w on that CPU, then it is going to have thermal problems. But the TDP is a rating for heatsinks; it is basically saying that if you want the advertised performance out of this CPU, your heatsink had better be able to handle this much heat.

For example, my 8700K will boost to 4.6GHz when under full load (Cinebench) and consume almost 140w. This is the default behavior of the Z390 motherboard I have it in. The motherboard decides the power limit, because the motherboard manufacturer knows what their board is capable of delivering and for how long. In that Z390 motherboard, that 140w only lasts for about 60 seconds before it starts to dial back, as long as the CPU cooler can keep up (which mine has no problem doing). By the end of a Cinebench run the CPU is running at 4.4GHz and the power consumption is back down below 100w. However, if I take that same 8700K and put it in the B365 motherboard that I have, it never goes over 100w and 4.3GHz at full load. But if at any time I put a heatsink on the CPU that can't handle that higher heat output, it will detect the higher temperatures and throttle back to 95w or less if needed.

But the entire point I'm trying to make is that the TDP was never an absolute power limit on the CPU, and there is no guarantee that the CPU won't consume more than that, and this goes back to Nehalem.
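To make the PL1/PL2/tau dance concrete, here's a toy simulation (my own simplified model, not Intel's actual algorithm; the wattages and the 28-second tau are made up):

```python
def simulate_turbo(load_w, pl1, pl2, tau, dt=1.0, steps=120):
    """Toy model of PL1/PL2/tau (illustrative, not Intel's real
    algorithm): the package may draw up to PL2 while an exponentially
    weighted moving average of power stays below PL1."""
    avg = 0.0
    trace = []
    for _ in range(steps):
        allowed = pl2 if avg < pl1 else pl1   # burst vs sustained limit
        draw = min(load_w, allowed)
        avg += (dt / tau) * (draw - avg)      # EWMA, time constant tau
        trace.append(draw)
    return trace

# made-up numbers resembling a "95 W" part that bursts to 140 W:
trace = simulate_turbo(load_w=140, pl1=95, pl2=140, tau=28)
# early samples sit at the burst limit; later ones settle back to PL1
```

With a bigger tau (or a board that ignores it), the burst phase just lasts longer - which is exactly the motherboard-dependent behavior described above.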


----------



## londiste (Feb 3, 2021)

Turbo was not governed completely by temperature. There have been power limits in place for a long while. Power limits simply were not hit or were not hit in a significant way.
Stock 8700K will not boost to 4.6GHz on all cores, not even with the fudged power settings. Frequency table is 4.3GHz for max all-core turbo. If yours does, it's MCE or equivalent in motherboard BIOS.


----------



## RandallFlagg (Feb 3, 2021)

tussinman said:


> Good point. That would explain why the techpowerup gaming consumption (average, not just random spikes) shows the 10500 and 5600X as nearly identical. Even the non-K 10700 is only showing 8 watts higher than the 5600X when they're both at full clocks/strength, yet if you only measured the short spikes the 10700 would be way higher



I've run windows perfmon for several different days for myself, with a 5 second resolution.  Example below.

100% usage for >=5s is not a scenario for me.   I know 100% sometimes happens during things like file decompress, but it doesn't last long enough to show up here. I'm sure some will come in talking about encoding or some such but that's red herring crap IMO, it's like someone talking about how often they do tractor pulls with their Hyundai.

The big spikes at the start of the day are from running a VM, which is not a typical workload. At the end of the day, that's gaming. Note it never goes much over 50% on any core for >5s. The rest of the time, while working and doing normal stuff like browsing / listening to iTunes / YouTube and so on, it's pretty damn near zero.
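If anyone wants to do the same check on their own logs, summarizing fixed-interval samples is trivial; the sample data and the 95% threshold here are hypothetical:

```python
def busy_fraction(samples, threshold=95.0):
    """Fraction of fixed-interval CPU-usage samples (e.g. perfmon at
    5 s resolution) that sit at or above 'threshold' percent."""
    return sum(1 for s in samples if s >= threshold) / len(samples)

# a made-up day: mostly near-idle, two pegged samples out of 1000
day = [3.0] * 998 + [100.0, 100.0]
share = busy_fraction(day)  # a tiny fraction of the day at full load
```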


----------



## Bill_Bright (Feb 3, 2021)

londiste said:


> For our purposes - the same power that goes into CPU will come out as heat.


Oh bullfeathers! That is NOT true at all. Also not true is you speaking for "our" purposes.

You are essentially dismissing all the "work" a CPU does. That's just silly and does NOT accurately reflect the laws of physics you call upon to justify your claims.

A 65W CPU today does a heck of a lot more "work" in the same amount of time while consuming significantly less energy than a 65W CPU from years past. That would be impossible if what you claimed was true.



> Practically all the power that goes in comes out as heat.


So what? That is NOT the point - despite how much you want it to be. You keep dismissing, ignoring, or don't understand (I don't know which) the most important point and that is the amount of work being done with the amount of energy that is NOT going up in heat.

"Machine 1" consumes 100W of energy per minute and gives off 95W in the form of heat. It moves 10 buckets of water 10 feet in that minute.

"Machine 2" consumes 100W of energy per minute and gives off 95W in the form of heat. But it moves 20 buckets of water 10 feet in that minute.

See the difference? That's what matters for "our" purposes.


----------



## londiste (Feb 3, 2021)

Bill_Bright said:


> Oh bullfeathers! That is NOT true at all. Also not true is you speaking for "our" purposes.
> You are essentially dismissing all the "work" a CPU does. That's just silly and does NOT accurately reflect the laws of physics you call upon to justify your claims.


OK, let me go back to definitions. Work as in what happens inside a CPU. Transistors switch, electrons move and all that stuff. CPU performance does not really come into play at this stage. It could be an arbitrary amount of transistors switching back and forth (well, ideally staggered switching to get even remotely steady consumption over time).

We were talking about TDP, power consumption and resulting heat output, no?

Edit: CPU performance does not play a part in how ICs use power. Unless you are saying that higher CPU performance will result in consumed power going to something other than heat; I would really like to see a source, or at least reasoning, for that.



Bill_Bright said:


> A 65W CPU today does a heck of a lot more "work" in the same amount of time while consuming significantly less energy than a 65W CPU from years past. That would be impossible if what you claimed was true.


Split this into a separate quote. The major factor for this is evolution towards smaller manufacturing processes, making transistors smaller and more efficient (less energy to switch).
If you want to nitpick then yes, this is very simplified and does not account for many other factors. The first things that come to mind are the voltages used, along with their efficiency curves, and potential architectural efficiency gains.
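The shrink argument can be put in numbers with the classic CMOS switching-power approximation P ≈ a·C·V²·f (the capacitance and voltage figures below are invented purely for illustration, not real die values):

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """Classic CMOS switching-power approximation: P ~ a * C * V^2 * f."""
    return activity * c_farads * v_volts ** 2 * f_hz

# invented figures: a shrink cuts both switched capacitance and voltage
old_node = dynamic_power(1e-9, 1.2, 3.0e9)    # ~4.3 W
new_node = dynamic_power(5e-10, 0.9, 3.0e9)   # ~1.2 W at the same clock
```

Because voltage enters squared, even a modest V drop pays off disproportionately - which is why a modern 65W-class chip does so much more work per watt than an old one.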


----------



## Bill_Bright (Feb 3, 2021)

londiste said:


> CPU performance does not really come into play at this stage.


Of course it does. Performance determines how much "work" can be accomplished in a given amount of time with a given amount of energy. 


londiste said:


> For all intents and purposes it could be an arbitrary amount of transistors switching back and forth


What??? Do you think those gates are just flipping and flopping back and forth for fun, or for no reason? NOOOOO! They are doing "work"! Crunching numbers. Processing data.

I go back to my previous statement. You keep dismissing, ignoring, or just plain don't understand that the amount of work being accomplished cannot just summarily be omitted from the equation when determining a processor's (or any machine's) efficiency. Work must be factored in too!

For the purpose of this thread, in relation to Intel's definition of TDP, that value is used to determine how much cooling is required. It is NOT meant as a means to compare that Intel CPU to an AMD CPU. That's why if you go to that Intel CPU's ARK again (see here) and click on the "?" next to TDP, you will see where it directs readers to the datasheet for "_thermal solution_ requirements". It does not mention efficiency or work accomplished. Workload, yes, but that is not the same as work accomplished.


----------



## londiste (Feb 3, 2021)

OK, my statement that I stand by is that power going into an IC will come out as heat.
This was a response to what you said above:


Bill_Bright said:


> Power "consumed" does NOT and never has equaled power "dissipated"!


----------



## Bill_Bright (Feb 3, 2021)

londiste said:


> OK, my statement that I stand by is that power going into an IC will come out as heat.


It is still wrong, or at least incomplete. Why? Because _some_ of the power going in is being consumed to do work (flip gates, crunch numbers, etc.) too.

I don't understand why you can't or refuse to see that. 

It is like an incandescent light bulb. No argument (at least I hope not) that "most" of the energy consumed is being converted into heat and not light. But it is still an indisputable fact that some (even if a small amount) of the energy being consumed is indeed, being used for "work", or in this case, to create light.


----------



## Vayra86 (Feb 3, 2021)

newtekie1 said:


> TDP was also never meant, and still isn't meant to be a measure of power consumption. It is measure of thermal output to determine heatsink size.



Bingo... but how do you review something with random heatsinks at the exact measure of the TDP they put in specs?

It will either not perform optimally, or it will brutally exceed TDP. Usually the CPUs do the latter and then start doing the former. Yoyo'ing to keep up, and if you remove the lock on it, they go all over the place. What used to be a simple vcore adjustment is now a whole range of tricks to keep thermal headroom and still extract some semblance of an OC.

In the end it's just the same thing: power = heat.



Bill_Bright said:


> It is still wrong, or at least incomplete Why? Because _some_ of the power going in is being consumed to do work (flip gates, crunch numbers, etc.) too.
> 
> I don't understand why you can't or refuse to see that.
> 
> It is like an incandescent light bulb. No argument (at least I hope not) that "most" of the energy consumed is being converted into heat and not light. But it is still an indisputable fact that some (even if a small amount) of the energy being consumed is indeed, being used for "work", or in this case, to create light.



Yes, and then we touch upon the issue of 'efficiency'. Intel has, over the past generations, constantly nudged its processors to clock higher 'when they can' which is an efficiency killer, and a heatwave guarantee. The why behind that is only to look good on spec sheets and in reviews with optimal conditions, while the quality of life of using such a CPU has steadily gone to the shitter. Aggressive temperature cycling doesn't really prolong the lifetime of any component in a system either.

That's a steep price for 5 Gigahurtz to look good. And that is why the TDP as it is being used now is a complete lie, when combined with the specs they show us. If you don't read the Intel Bible on Turbo states that is.

But if you think this through... the work being moved is irrelevant in a discussion about TDP. Performance per watt does not relate to output temperature. The only thing that relates to temps is the actual power going in. After all, in a comparison you're looking at an *infinite* amount of work: no matter how much work it moves, it will draw all the power it can pass through, and this will result in the same temperatures.


----------



## mouacyk (Feb 3, 2021)

cst1992 said:


> Intel’s Desktop TDPs No Longer Useful to Predict CPU Power Consumption | ExtremeTech
> 
> 
> Intel's higher-end desktop CPU TDPs no longer communicate anything useful about the CPUs power consumption under load. ...
> ...


Nowhere in that article was the word "lying" ever used.  Anybody who buys into K processors with 4+ cores should already know what they have to cool and not be surprised at temp spikes to 99C under inadequate heat dissipation.  Overclockers have known this for a decade now.  If you intended to cast a blanket net, you've missed quite a few other fish.


----------



## Bill_Bright (Feb 3, 2021)

Vayra86 said:


> The why behind that is only to look good on spec sheets and in reviews with optimal conditions, while the quality of life of using such a CPU has steadily gone to the shitter. Aggressive temperature cycling doesn't really prolong the lifetime of any component in a system either.


Quality of life? I have seen nothing to suggest Intels have a shorter life expectancy than AMDs. Got a link?

And of course Intel wants their CPUs to look good. That's called marketing. It's why Truck Maker A claims their truck is #1 because it gets better gas mileage, Truck Maker B claims theirs is #1 because it can pull more weight, and Truck Maker C claims theirs is #1 because it has more horsepower - and they are all right!

Aggressive temperature cycling? What does that even mean? EVERY CPU can and does go from cold (ambient) when idle to full temperature when pushed, in just a few clock cycles, and then back to cold again just as quickly when the load drops back to idle. Temperature cycling is dependent on the load and cooling.


----------



## Vayra86 (Feb 3, 2021)

Bill_Bright said:


> Quality of life? I have seen nothing to suggest Intels have a shorter life expectancy than AMDs. Got a link?
> 
> And of course Intel wants their CPUs to look good. That's called marketing. Its why Truck Maker A claims their truck is #1 because it gets better gas mileage and Truck Maker B claims theirs is #1 because it can pull more weight and why Truck Maker C claims theirs is #1 because it has more horsepower - and they all are right!
> 
> Aggressive temperature cycling? What does that even mean? EVERY CPU can and does go from cold (ambient) when idle to fully temperature when pushed in just a few clock cycles and then back to cold again just as quickly when the load drops back to idle again. Temperature cycling is dependent on the load and cooling.


- Quality of life: high temperature peaks are low quality of life; your fans get noisy. Your hands on a laptop get hot. I didn't mean durability/endurance. Laptop CPUs did always get hot, but its a difference if they slowly creep to 80C and then even slower to 85C, or if they boost straight to 85C and then cool back to 50 to start it all over again, all the time. The behaviour has changed, and Sandy Bridge was, for Core, in the optimal position. 22nm made a big dent, partly due to increased density. But when Intel started needing those last few hundred megahertz to keep competing, the limits have been stretched further and further. Yes, I do believe _devices with Intel CPUs that boost aggressively_ are liable to last shorter than they used to in the past. Time will tell, but the average lifetime of recent laptops is nothing to write home about in general. Is AMD different? I don't think that is the subject, and I think they have a lot of work especially on mobile CPUs left to do.

- Aggressive temp cycling means what is described above. The limits are moved ever closer to the absolute boundaries of what the chip can do without burning to a crisp. What used to peak briefly at 80C, now peaks to 85C or more. At the same time, idle temps have actually _dropped_ due to more efficient power states, and because idle requires lower clocks than it used to due to IPC gains.

As always the devil is in the details, and Intel is doing a fine job creating a box of details that cross the line.


----------



## Deleted member 202104 (Feb 3, 2021)

Vayra86 said:


> - Quality of life: high temperature peaks are low quality of life; your fans get noisy. Your hands on a laptop get hot. I didn't mean durability/endurance. Laptop CPUs did always get hot, but its a difference if they slowly creep to 80C and then even slower to 85C, or if they boost straight to 85C and then cool back to 50 to start it all over again, all the time. The behaviour has changed, and Sandy Bridge was, for Core, in the optimal position. 22nm made a big dent, partly due to increased density. But when Intel started needing those last few hundred megahertz to keep competing, the limits have been stretched further and further. Yes, I do believe _devices with Intel CPUs that boost aggressively_ are liable to last shorter than they used to in the past. Time will tell, but the average lifetime of recent laptops is nothing to write home about in general. Is AMD different? I don't think that is the subject, and I think they have a lot of work especially on mobile CPUs left to do.
> 
> - Aggressive temp cycling means what is described above. The limits are moved ever closer to the absolute boundaries of what the chip can do without burning to a crisp. What used to peak briefly at 80C, now peaks to 85C or more. At the same time, idle temps have actually _dropped_ due to more efficient power states, and because idle requires lower clocks than it used to due to IPC gains.
> 
> As always the devil is in the details, and Intel is doing a fine job creating a box of details that cross the line.



Quality of life?  Really?

One of the most ridiculous things I've ever read on a tech site - and that's saying a lot.


----------



## Deleted member 205776 (Feb 3, 2021)

my locked i7-8700 be like 120w while gaming (advertised 65w)

my unlocked 3900x be like 95w while gaming (advertised 105w)

double the cores lol

if Intel had started measuring their rated TDP from boost clocks, it'd be a different story


----------



## Bill_Bright (Feb 3, 2021)

Vayra86 said:


> - Quality of life: high temperature peaks are low quality of life; your fans get noisy. Your hands on a laptop get hot. I didn't mean durability/endurance.


Nah! Well, yes, you are right: things that "annoy" humans may affect our quality of life. But is that really the criterion you want to use to decide which CPU is better?

Are you really suggesting AMDs don't get hot too? 

What you are describing to me is poor design by the laptop maker or PC builder. Poor choice of fans, inadequate case cooling, etc. 


Vayra86 said:


> but the average lifetime of recent laptops is nothing to write home about in general.


That may be true but you are suggesting they are failing because the processors are failing and in particular, that those with Intels are failing at a faster rate! Not buying it. Show us evidence. 

Frankly, I cannot recall the last time I saw a CPU (Intel or AMD) that just decided to die.


----------



## Vayra86 (Feb 3, 2021)

Bill_Bright said:


> Nah! Yes you are right. Things that "annoy" humans may affect our quality of life. But is that really criteria you want to use to decide which CPU is better?
> 
> Are you really suggesting AMDs don't get hot too?
> 
> ...



I'm mostly referring to laptops and mobile devices, which is where the TDP matters so much and where it causes issues. You chalk it up to laptop makers; I chalk it up to a combination of them and Intel's current approach to clocking. The line has become VERY thin, and this also spills over to the usability side of a device.

Is AMD different? I'm not saying that (not sure why you keep asking), but I do think they are more honest about advertising their TDPs, and the results generally spell that out too.

As for the CPUs dying: no, me either. But aggressive power demands do take a toll on circuitry and power delivery elsewhere, and so does heat. With lots of stuff packed together, this is no improvement. And again, this must be related to Intel's need to produce spec sheets that mean something in terms of marketing. 'Look, we gained another 100MHz and the TDP is still the same.' Is it, really?


----------



## tabascosauz (Feb 3, 2021)

Vayra86 said:


> - Quality of life: high temperature peaks are low quality of life; your fans get noisy. Your hands on a laptop get hot. I didn't mean durability/endurance. Laptop CPUs did always get hot, but its a difference if they slowly creep to 80C and then even slower to 85C, or if they boost straight to 85C and then cool back to 50 to start it all over again, all the time. The behaviour has changed, and Sandy Bridge was, for Core, in the optimal position. 22nm made a big dent, partly due to increased density. But when Intel started needing those last few hundred megahertz to keep competing, the limits have been stretched further and further. Yes, I do believe _devices with Intel CPUs that boost aggressively_ are liable to last shorter than they used to in the past. Time will tell, but the average lifetime of recent laptops is nothing to write home about in general. Is AMD different? I don't think that is the subject, and I think they have a lot of work especially on mobile CPUs left to do.
> 
> - Aggressive temp cycling means what is described above. The limits are moved ever closer to the absolute boundaries of what the chip can do without burning to a crisp. What used to peak briefly at 80C, now peaks to 85C or more. At the same time, idle temps have actually _dropped_ due to more efficient power states, and because idle requires lower clocks than it used to due to IPC gains.
> 
> As always the devil is in the details, and Intel is doing a fine job creating a box of details that cross the line.



I'm not going to start pointing fingers and flinging the F-word around here, but the irony is that no x86 laptop CPU boosts harder, more dynamically and more frequently than the Renoir CPUs do. Period. And because all of desktop Zen 2 and Zen 3 behaves the same way, the poor "quality of life" and "aggressive temp cycling" is 95% of what makes any Ryzen a Ryzen from 2019 onwards. In fact, the aggressive boost improves user experience if anything, because it improves the system's response to user input in the fraction of a second (a few milliseconds for CPPC on Ryzen, low double digits for Speedshift on Intel).

I don't see any Matisse, Renoir or Vermeer CPUs dying because of high temperature peaks and temp cycling. Some of the desktop chips die without any reason because AMD still doesn't know how to write firmware or work out the quality control of their N7FF chips, but that's a different story. Neither do I see Kaby-R, Coffee, Comet and Ice Lake CPUs randomly dying because they make aggressive use of their boost envelope.

I don't get where this argument is going. The high temps reflect much more on individual laptop makers' abysmal thermal solutions than on the CPUs themselves. You do realize that PROCHOT is a thing, preventing the CPU from turning into molten slag if you don't spend every waking minute monitoring package temp?

Do they last shorter on a smaller process and higher clocks? Probably. Is that going to make an 8-year-old laptop more desirable than a 12-year-old laptop?


----------



## phanbuey (Feb 3, 2021)

This is not new. I get that you had an i5 3xxx that ran cool at 4.3GHz @Vayra86 (you can still buy i5s that do that) -- and I get that Sandy Bridge ran cool since it wasn't pushed to the max. But when you're comparing the top-of-the-line chip to an old i5, I think you're skewing your memory a bit.

My Computers included:
Macbook 15" pro retina - Ivy Bridge, idled in the mid 60s, Cinebenched at 100C; still being used by my mother-in-law
Dell XPS (2014) - hot as heck
Alienware 17" 2015- also insanely hot, had to disable turbo to keep VRM from throttling

Desktops:
6700K - super hot....
1800X - a 95W chip that sucked down 165W - would crash at 4.1Ghz with anything less than a 360mm water setup (280mm aio would crash)
8700K - also sucked down 165W but was faster
7820X - melted my house: rated 125W, sucked down 250W; thick 240 AIO minimum, overloaded air cooling
10850K - 225W in avx loads at 5.0 - but 10 cores and much faster, 85C but requires water.

This is why I think that memory is 'warped': it's really nothing new *at the high end*. It's just that 10 cores are hot on 14nm at 5GHz... that's kind of to be expected. They are not hot at 4.8GHz; in fact they are quite chilly, and at 3.7 I'm sure they really do sit around 140W.

If you get the 10600K I think you will find it runs quite cool, is cheap, runs on cheap boards and has no issues whatsoever. If you get a 5800X 8-core, I think you will find it too runs quite hot despite its rated TDP -- especially on a board that automatically sets the most aggressive PBO settings out of the box.


----------



## mouacyk (Feb 3, 2021)

The sky isn't falling... it fell back in the days when AVX2/FMA came to desktop processors. Even though Sandy Bridge introduced AVX to desktop, wide vector workloads probably weren't common until Haswell brought AVX2.


----------



## Hachi_Roku256563 (Feb 3, 2021)

newtekie1 said:


> I'd suggest you get a better motherboard the, because its holding back your performance greatly.


It is not; the CPU is actually slightly overperforming compared to benchmarks.


----------



## lexluthermiester (Feb 3, 2021)

Isaac` said:


> Tell that to my 3200g
> 60w part has never gone past 30 at max load


How do you know this? Have you measured it in some manner?



Bill_Bright said:


> Frankly, I cannot recall the last time I saw a CPU (Intel or AMD) that just decided to die.


Neither can I. The last time I saw a CPU "die" was because of thermal issues (clogged heatsink, poor airflow in the case). CPU resiliency has improved greatly over the last 20 years. Perhaps this is why Intel and AMD both do not worry too much about stating certain specs: they do not fear their IC product dying.


----------



## londiste (Feb 3, 2021)

lexluthermiester said:


> Neither can I. The last time I saw a CPU "die" was because of thermal issues (clogged heatsink, poor airflow in the case). CPU resiliency has improved greatly over the last 20 years. Perhaps this is why Intel and AMD both do not worry too much about stating certain specs: they do not fear their IC product dying.


Controls, limits and their management have come a long way. CPUs throttle when temperature gets too high, they have current limits so overloading them is difficult, and I believe they have at least detection for voltage spikes as well. All of that is old, tried-and-true, and fast enough to make a CPU really, really reliable and resilient.


----------



## DeathtoGnomes (Feb 3, 2021)

Hot topic today wow. 

I'm in the camp of "I'm not surprised", with the caveat that it's "by anything Intel does to mislead consumers for profit".


----------



## cst1992 (Feb 3, 2021)

londiste said:


> In other contexts, it is a perfectly valid engineering concept


It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.
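A rough way to sanity-check cooler sizing is steady-state thermal resistance: temperature headroom divided by the cooler's °C-per-watt rating gives the sustained heat it can move. The 0.5 C/W figure and temperatures below are illustrative, not any specific cooler's spec:

```python
def max_power_for_cooler(r_theta_c_per_w, t_ambient=25.0, t_max=100.0):
    """Steady-state sizing sketch: the die settles near
    t_ambient + P * R_theta, so the most sustained heat the cooler can
    move before hitting t_max is (t_max - t_ambient) / R_theta."""
    return (t_max - t_ambient) / r_theta_c_per_w

# an illustrative ~0.5 C/W tower with 75 C of headroom tops out near
# 150 W sustained, so a 265 W draw needs a much lower-resistance setup
tower_limit = max_power_for_cooler(0.5)
```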



Toothless said:


> To anyone that might want something to play with..
> 
> Intel has a little thing called Power Gadget that shows how much power a cpu is pulling. It'll show my 2680v2's at 95-100w full load and 4790k 60w on normal use. Maybe some of you guys can give it a try on the 10 series.


It's software?
I use CoreTemp for reading CPU power consumption.
It could be inaccurate, but then again it may not be.
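For what it's worth, on Linux with Intel CPUs the same counters Power Gadget reads are exposed via RAPL in sysfs, so a rough package-power reading needs no extra software. A sketch; the 2^32 wrap limit here is an assumption (the real range is published in max_energy_range_uj next to the counter):

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # Linux + Intel only

def watts_from_counter(e0_uj, e1_uj, dt_s, max_uj=2**32):
    """Average watts from two RAPL energy readings (microjoules),
    handling at most one counter wraparound."""
    delta = e1_uj - e0_uj
    if delta < 0:          # counter wrapped during the interval
        delta += max_uj
    return delta / dt_s / 1e6

def package_power(interval=1.0):
    """Sample the package-energy counter over 'interval' seconds."""
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(RAPL) as f:
        e1 = int(f.read())
    return watts_from_counter(e0, e1, interval)
```

Like any on-die telemetry it's a model-based estimate, so treat it the same way as CoreTemp's number: a ballpark, not a lab instrument.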


----------



## londiste (Feb 3, 2021)

cst1992 said:


> It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
> How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.


Absolutely, I wholeheartedly agree.


----------



## Hachi_Roku256563 (Feb 3, 2021)

lexluthermiester said:


> How do you know this? Have you measured it in some manner?


yes using ryzen master


----------



## cst1992 (Feb 3, 2021)

phanbuey said:


> Also the idle -- the 4690K doesn't have all the newer power saving tech so it idles at like 56W stock


Who told you that? Mine idles at 9.6W power consumption @800MHz 0.76V.
My CPU pulls 56W at 3.5GHz Prime95.
At 4+GHz it starts to go to 75+W power consumption.
It maxed out at 83W, but I found I could push it to 100W at 4.3 without issue.
Past 4.3 the overclock is not stable.



lexluthermiester said:


> KillaWatt perhaps


CoreTemp.



Vayra86 said:


> So yeah. I think its pretty clear what happened since Skylake. Base clocks were steadily reduced while turbos were elevated, then Intel rewrote their definition of what turbo should mean, they changed some details and added more premium modes of turbo (lmao) so the old ones would seem somehow worse... except now you have a beautiful cocktail of turbos that cannot sustain even for two seconds because you'll either burn a hole in your socket or your CPU itself just runs straight into thermal shutdown.


Is this shitshow why Swan had to step down? Faster CPUs no matter what?


----------



## lexluthermiester (Feb 3, 2021)

Isaac` said:


> yes using ryzen master





cst1992 said:


> CoreTemp.


Neither of those is reliable enough to be dependable. They're an OK ballpark reference, but they shouldn't be used as a de facto method of measurement. Kill A Watt-type solutions are much more accurate.


----------



## Deleted member 202104 (Feb 3, 2021)

lexluthermiester said:


> Neither of those are reliable enough to be dependable. They are an ok ballpark reference but should not be used as a defacto method of measurement. KillaWatt type solutions are much more accurate.



To determine CPU power usage?  Not even.

That only determines power draw for the entire system at the wall.  There's no accurate way to break that down to individual component power draw - the efficiency of the power supply changes based on load.  Comparing idle and full load from a kill-a-watt doesn't take into account power supply loss.


----------



## mouacyk (Feb 3, 2021)

cst1992 said:


> It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
> How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.


Intel doesn't send out review samples for nothing.  If you have some kind of load that is pushing 265W, it's even more critical to pay attention to the reviews.

There used to be a time when Intel made and sold their own motherboards.  Now, if you paired a default Intel motherboard and cpu together, and your typical loads resulted in 2x TDP -- that's definitely a talking point.


----------



## Toothless (Feb 3, 2021)

cst1992 said:


> It should always be an engineering concept. It should be thermal "design" power, not thermal "what we decide to tell you" power.
> How is a person supposed to know if a plain 150W tower cooler is enough for their K series CPU, or they need a proper watercooled setup that can handle 200+W of CPU heat dissipation? Printing 125W on the box and actually pulling 265W should be a banned practice.
> 
> 
> ...


Yes, software. I trust Intel's reading of power draw over a third party's, and the same goes for Ryzen Master. HWiNFO is close behind on trust, but everything gets taken with a grain of salt.


----------



## Arctucas (Feb 3, 2021)

I read through this, and found myself asking if any power user/enthusiast building a custom high-performance overclocking rig actually cares about TDP, which is merely a warranty that a processor will operate within a given set of parameters at which it will not exceed a specified power consumption level.


----------



## ThrashZone (Feb 3, 2021)

Hi,
lol yeah I'd guess those people would just be wondering how to cool the little devil


----------



## trickson (Feb 3, 2021)

Arctucas said:


> I read through this, and found myself asking if any power user/enthusiast building a custom high-performance overclocking rig actually cares about TDP, which is merely a warranty that a processor will operate within a given set of parameters at which it will not exceed a specified power consumption level.


I have NEVER once, not even in 24 years of building, considered TDP as a build/buy point.
I have read through this thread and also come to this pondering.
I am, however, an AMD fanboy and can surely say the FX chips SUCK power and ASS! You need a f'ing 850W PSU just to power an FX8300, and HOLY crap, the power-crazy chip can heat a double-wide in Alaska!
So yeah, never really gave a crap about TDP...



ThrashZone said:


> Hi,
> lol yeah I'd guess those people would just be wondering how to cool the little devil


RIGHT!?!?! LMFAO!
I mean I am thinking going back to liquid cooling for the 3700X! LOL.

Oh, and no, I do not think they are lying at all; no one is. The CPU can and does hit that TDP if you set it up exactly the way they did, so there is that too.


----------



## lexluthermiester (Feb 3, 2021)

weekendgeek said:


> That only determines power draw for the entire system at the wall.


And you can determine CPU usage rather easily by isolating measurements taken. You'll note I said "KillaWatt type". There are other forms of power measurement devices.


----------



## Zach_01 (Feb 3, 2021)

I didn't go through almost 100 posts, so I don't know if, or how many times, it's been said already...

Both Intel and AMD are not pulling numbers out of their arse.
TDP ratings are Thermal Design Power: they won't tell you the max power draw, but rather the heat output towards the cooler under certain operating conditions, or so they're meant to. If you want max power draw, look for CPU Package Power or CPU PPT (Package Power Tracking).

AMD's TDP refers to the heat delivered to the cooler under certain conditions (ambient temp): a specific T-delta between CPU and ambient. Not all heat produced by the CPU goes to the cooler; some of it goes through the CPU substrate to the socket and the board and gets dissipated from there.

For example all Ryzen 3000 series are like this:

65W TDP, 88W PPT
95W TDP, 125W PPT
105W TDP, 142W PPT
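Those three pairs track a consistent ratio. A quick sanity check in Python (the roughly-1.35x multiplier is inferred from the numbers above, not an official AMD formula):

```python
# Ryzen 3000 TDP -> PPT pairs listed above; PPT works out to
# roughly 1.32-1.35x TDP across the lineup.
pairs = [(65, 88), (95, 125), (105, 142)]
for tdp, ppt in pairs:
    print(f"{tdp}W TDP -> {ppt}W PPT (ratio {ppt / tdp:.2f})")
```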

Intel, on the other hand, is different. Intel CPUs have two power-level stages: PL1 and PL2.
As TDP they state PL1, which is the max sustainable power draw of the CPU. PL2 is much higher, but by default only for a certain period of time called "Tau"; the PL2/Tau pair is different for every CPU.
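The PL1/PL2/Tau interplay can be sketched as a toy model. This is a simplification with assumed numbers (the 125W/250W/56s values are 10900K-like figures, and real firmware tracks a running average of package power, which is approximated here with an exponentially weighted moving average):

```python
# Toy model of Intel's turbo power budget: instantaneous draw is capped
# at PL2, and a running (exponentially weighted) average of power must
# stay at or below PL1; once it reaches PL1, the CPU falls back to PL1.
def allowed_power(request_w, ewma_w, pl1_w, pl2_w, tau_s, dt_s):
    """Return the power granted this tick and the updated running average."""
    power = min(request_w, pl2_w)          # hard PL2 ceiling
    if ewma_w >= pl1_w:                    # budget exhausted -> sustain at PL1
        power = min(power, pl1_w)
    alpha = dt_s / tau_s                   # EWMA update over this time step
    ewma_w = (1 - alpha) * ewma_w + alpha * power
    return power, ewma_w

# A sustained heavy load starting from an idle average: the chip holds
# PL2 until the average catches up to PL1, on a Tau-scaled timescale.
pl1_w, pl2_w, tau_s = 125.0, 250.0, 56.0
ewma_w, seconds = 0.0, 0
while ewma_w < pl1_w:
    _, ewma_w = allowed_power(300.0, ewma_w, pl1_w, pl2_w, tau_s, dt_s=1.0)
    seconds += 1
print(f"boost above PL1 lasted about {seconds} s")
```

With these numbers the boost window comes out to a few tens of seconds, matching the intuition that Tau bounds how long PL2 can be sustained.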


----------



## Deleted member 202104 (Feb 3, 2021)

lexluthermiester said:


> And you can determine CPU usage rather easily by isolating measurements taken. You'll note I said "KillaWatt type". There are other forms of power measurement devices.



Please share the "KillaWatt" type device.


----------



## Hachi_Roku256563 (Feb 3, 2021)

lexluthermiester said:


> Neither of those are reliable enough to be dependable. They are an ok ballpark reference but should not be used as a defacto method of measurement. KillaWatt type solutions are much more accurate.


Yeah, but it's not gonna make a 60W draw show up as 30W.


----------



## lexluthermiester (Feb 3, 2021)

Isaac` said:


> yeah but its not gonna make a 60 watt draw show up as 30w


No, but if you know your baseline power usage, calculating the draw from the CPU is trivial.



weekendgeek said:


> Please share the "KillaWatt" type device.








List of electrical and electronic measuring equipment - Wikipedia
en.wikipedia.org

Have fun.


----------



## Deleted member 202104 (Feb 3, 2021)

lexluthermiester said:


> And you can determine CPU usage rather easily by isolating measurements taken. You'll note I said "KillaWatt type". There are other forms of power measurement devices.





lexluthermiester said:


> No, but if you know your baseline power usage, calculating the draw from the CPU is trivial.
> 
> 
> 
> ...



So in other words, you have nothing.


----------



## lexluthermiester (Feb 3, 2021)

weekendgeek said:


> So in other words, you have nothing.


That's not it at all. There are methods of determining power usage in a given circuit and tools to help in such an effort. I don't like the attitude you displayed toward me and as such I'm not inclined to do your research for you. Have fun.


----------



## Deleted member 202104 (Feb 4, 2021)

lexluthermiester said:


> That's not it at all. There are methods of determining power usage in a given circuit and tools to help in such an effort. I don't like the attitude you displayed toward me and as such I'm not inclined to do your research for you. Have fun.



What attitude?  I simply asked for you to share the type of device you recommended to measure CPU power usage since you stated it was more accurate than software tools like HWinfo, or RyzenMaster.


----------



## lexluthermiester (Feb 4, 2021)

weekendgeek said:


> What attitude? I simply asked for you to share the type of device you recommended to measure CPU power usage


It was the way it was stated. Kinda came off aggressive and condescending.



weekendgeek said:


> it was more accurate than software tools like HWinfo, or RyzenMaster.


No software tool can be anything but a ballpark estimation (and most of the time, not all that close to actual usage) because of all the variations motherboard makers put into their products. More precise methods for measuring power usage are available, a simple one being a Kill A Watt meter, which is an amazingly accurate type of device for how simple it is. The testing method is simple: measure power at idle, then induce a CPU load and measure again; the difference is your CPU usage under load. Prime95 is exceptional for this task, as you can configure it to run within the CPU's L2/L3 cache alone.

Devices for more in depth and precise measurements are available, but are generally more costly than the average user would want to spend for such a task.


----------



## Hachi_Roku256563 (Feb 4, 2021)

lexluthermiester said:


> No, but if you know your baseline power usage, calculating the draw from the CPU is trivial.


The CPU draw is not above 30W.
You could say the margin of error is 40W.
When the APU's graphics are in use, the draw goes from 40 to 50W.
Probably why it's a 65W part.


----------



## newtekie1 (Feb 4, 2021)

londiste said:


> Turbo was not governed completely by temperature. There have been power limits in place for a long while. Power limits simply were not hit or were not hit in a significant way.
> Stock 8700K will not boost to 4.6GHz on all cores, not even with the fudged power settings. Frequency table is 4.3GHz for max all-core turbo. If yours does, it's MCE or equivalent in motherboard BIOS.



I know a stock 8700k won't all core boost to 4.6GHz. Like I said, that is the default behavior in *THE* Z390 board I put it in. The default behavior of the 8700k is what I see in a B365 board. But what that shows is how different motherboards change the behavior of the CPU.


----------



## lexluthermiester (Feb 4, 2021)

Isaac` said:


> the cpu draw is not above 30
> you can say margin of error is 40w
> in the event the apu is on the usage goes from 40-50
> probs why its a 65w part


Sorry, that's not the way an APU works. When you induce a CPU load, only the CPU portion of the APU is used. The GPU side stands mostly idle.


----------



## Deleted member 202104 (Feb 4, 2021)

lexluthermiester said:


> It was the way it was stated. Kinda came off aggressive and condescending.
> 
> 
> No software tool can be anything but a ballpark estimation(and most of the time, not all that close to actual usage) because of all the variations motherboard makers put into their products. More precise methods for measuring power usage are available, a simple one being a KillaWatt meter, which is an amazingly accurate type of device for how simple it is. Testing methods are simple. Measure power at idle, then induce a CPU load and measure again. That's your CPU usage under load. Prime95 is exceptional at this task as you can configure it to run on the CPU(within L2/L3 cache) alone.
> ...



Ok, we're back where we started.

The method you describe doesn't take into consideration the efficiency of the power supply. Maybe the power supply is only 80% efficient at idle, but 90% efficient when the full CPU load is induced. The amount of power measured at the kill-a-watt only reflects what the PSU is drawing, and we can't determine that the power difference between idle and load isn't affected by power supply efficiency.
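A worked example of that objection, with assumed efficiency figures (80% at idle and 90% under load are illustrative, not measured values):

```python
# The wall-meter delta method overstates CPU draw when PSU efficiency
# improves between idle and load: the kill-a-watt sees AC input power,
# not the DC power the CPU actually pulls.
def wall_draw_w(dc_load_w, efficiency):
    """AC power at the wall for a given DC load and PSU efficiency."""
    return dc_load_w / efficiency

idle_dc_w, load_dc_w = 50.0, 200.0           # true DC delta: 150 W
idle_ac_w = wall_draw_w(idle_dc_w, 0.80)     # PSU ~80% efficient at idle
load_ac_w = wall_draw_w(load_dc_w, 0.90)     # ~90% efficient near sweet spot
delta_w = load_ac_w - idle_ac_w
print(f"wall delta {delta_w:.1f} W vs true DC delta 150.0 W")
```

With these numbers the wall delta reads roughly 160W for a true 150W increase, an error of several percent even before VRM losses are considered.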


----------



## Hachi_Roku256563 (Feb 4, 2021)

lexluthermiester said:


> Sorry, that's not the way an APU works. When you induce a CPU load, only the CPU portion of the APU is used. The GPU side stands mostly idle.


When I say the APU is enabled, it kinda means it's being used, 'cause it's a CPU and GPU load.


----------



## Nuckles56 (Feb 4, 2021)

trickson said:


> I have NEVER once, Not even in the 24 years of building have I once considered TDP as a build/Buy point.
> I have read through this thread and also have come to this pondering.
> I am however an AMD fanboy And can surely say the FX chips SUCK power and ASS! Need a fing 850W PSU just to power an FX8300 and HOLY crap the POWER crazy CHIP can heat a Double wide in Alaska!
> SO yeah never really gave a crap about TDP...


As someone who lives in a hot climate and needs to run aircon for a good part of the year if I want to be comfortable when running the PC hard for long stretches, I very much do consider how much power the system draws, and I'm always happy with more efficient hardware; tbh, AMD are doing better at that than Intel at stock settings. The power draw of the old FX-series processors made sure I never even looked that way when I set up my first rig, and the same goes for the R9 GPUs.


----------



## lexluthermiester (Feb 4, 2021)

weekendgeek said:


> The method you describe doesn't take into consideration the efficiency of the power supply.


You're right, it doesn't. No method is perfect. However, where we are measuring actual power used, it is very accurate.



Isaac` said:


> when i say the apu is enabled
> it kinda means its being used
> *cause its a cpu and GPu load*


No, it isn't. When you induce a CPU load, only the CPU is used. The GPU stands idle. Contrariwise when you induce a GPU load, the CPU more or less stands idle.


----------



## Hachi_Roku256563 (Feb 4, 2021)



lexluthermiester said:


> No, it isn't. When you induce a CPU load, only the CPU is used. The GPU stands idle. Contrariwise when you induce a GPU load, the CPU more or less stands idle.


you can load both up at the same time
both are at 100%


----------



## trickson (Feb 4, 2021)

Nuckles56 said:


> As someone who lives in a hot climate and needs to run aircon for a good part of the year if want to be comfortable when running the PC hard for a long time, I very much do consider how much power the system draws and I'm always happy with more efficient hardware and tbh AMD are doing better at that then intel are at stock settings. The power draw of the old FX series processors made sure I never even looked that way when I set up my first rig and same with the r9 GPUs.


Well that's one. 
Most users, even heavy overclockers, do not use TDP in determining their choices.
I think it comes into play when determining the HSF or cooling I will be using, for sure, but nothing past that.


----------



## lexluthermiester (Feb 4, 2021)

Isaac` said:


> y
> 
> you can load both up at the same time
> both are at 100%


True. But when you are testing power draw loads, you only want to load one or the other, not both.


----------



## 80-watt Hamster (Feb 4, 2021)

Bill_Bright said:


> Of course it does. Performance determines how much "work" can be accomplished in a given amount of time with a given amount of energy.
> 
> What??? Do think those gates are just flipping and flopping back and forth for fun or no reason? NOOOOO! They are doing "work"! Crunching numbers. Processing data.
> 
> ...



Work in this context doesn't have anything to do with CPU performance. Work in a physical system is all about the conversion of energy. What you're considering work here is only work in the conceptual sense, the number of calculations per second, which isn't a physical quantity. We start with electrical energy (joules) that is applied over time (joules/second = watts). Forms of energy are kinetic, potential, chemical, radiant, and thermal. Electricity is a form of potential energy, which we then call on to flip transistors for our logic gates (and run some fans and probably RGB lights these days). Work happens when that potential energy takes another form. Inside a CPU, the only conversion that can happen is to thermal, unless you manage to set it on fire (chemical/radiant), blow the lid off (kinetic), or something equally dramatic. Simply put, energy in equals energy out. The only energy in is potential/electrical, so unless I'm missing something big, all the energy out is thermal.


----------



## trickson (Feb 4, 2021)

There is one thing I am getting from all this.
Intel is NOT lying!


----------



## Hachi_Roku256563 (Feb 4, 2021)

lexluthermiester said:


> True. But when you are testing power draw loads, you only want to load one or the other, not both.


But I want to measure the power being drawn by the CPU, and that includes the APU's graphics.
There is zero point in saying my CPU draws 40W if it draws 70W while gaming owing to the iGPU kicking in.


----------



## Aquinus (Feb 4, 2021)

80-watt Hamster said:


> The only energy in is potential/electrical, so unless I'm missing something big, all the energy out is thermal.


I'm glad that at least one person remembers the law of conservation of energy.


----------



## trickson (Feb 4, 2021)

Isaac` said:


> but i want to measure power being drawn by the cpu
> that includes that apu
> there is 0 point saying my cpu draws 40w
> if it draws 70w while gaming owing the the apu kicking in


Holy CRAP why? What is the POINT? Pay your power bill and figure it out!


----------



## Hachi_Roku256563 (Feb 4, 2021)

trickson said:


> Holy CRAP why? What is the POINT? Pay your power bill and figure it out!


It's called "I don't trust my PSU".


----------



## trickson (Feb 4, 2021)

Isaac` said:


> its called i dont trust my psu


Okay WHAT?
Well, that is because you got yourself a POS. You need to get yourself a real PSU, a Corsair TX or CX Gold! You've got the PSU jitters is all, and Corsair will take them all away for good!


----------



## Zach_01 (Feb 4, 2021)

80-watt Hamster said:


> Work in this context doesn't have anything to do with CPU performance.  Work in a physical system is all about the coversion of energy.  What you're considering work here is only work in the conceptual sense, the number of calculations per second, which isn't a physical quantity.  We start with electrical energy (joules) that is applied over time (joules/second=watts).  Forms of energy are kinetic, potential, chemical, radiant, and thermal.  Electricity is a form of potential energy, which we then call on to flip transitors for our logic gates (and run some fans and probably RGB lights these days).  Work happens when that potential energy takes another form.  Inside a CPU, the only conversion that can happen is to thermal.  Unless you manage to set it on fire (chemical/radiant), blow the lid off (kinetic) or something equally dramatic.  Simply put, energy in equals energy out.  The only energy in is potential/electrical, so unless I'm missing something big, all the energy out is thermal.


Couldn’t agree more. Bottom line is that all electric power is transformed into heat inside the CPU. The thing is that Intel states as TDP only one power stage of the CPU, the lowest one, while AMD states the portion of that overall CPU heat that it believes (or measures) will be dissipated from the CPU to the cooler at max power draw.

As stated in post #96.


----------



## newtekie1 (Feb 4, 2021)

lexluthermiester said:


> Testing methods are simple. Measure power at idle, then induce a CPU load and measure again. That's your CPU usage under load.



And you're just assuming CPU power draw is 0 when idle? Without knowing idle power draw, this method gives no usable information about the CPU's power draw under load and isn't any more accurate than software methods for reading CPU power. In fact, it's probably less accurate than software readings.

The best way to get CPU power draw is to put a multimeter in line with the 12V CPU plug and directly measure the amps going to the CPU. But even that isn't perfect, as it won't account for the inefficiency of the VRMs.



lexluthermiester said:


> True. But when you are testing power draw loads, you only want to load one or the other, not both.



This is a gray area. The TDP rating of the CPU includes the iGPU as well.  It's a package rating.  So technically, if you are trying to compare it to the rating on the box, the iGPU is included. But on the other hand, when will you ever see both the CPU cores and the iGPU fully loaded?  Even during gaming that isn't going to happen.
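The in-line measurement described above reduces to P = V x I on the EPS12V cable, optionally corrected for VRM losses. A minimal sketch, assuming a 90% VRM efficiency figure purely for illustration:

```python
# Estimate CPU power from a multimeter reading on the 12V EPS cable:
# multiply measured volts by measured amps to get VRM input power, then
# apply an assumed VRM efficiency to approximate power at the die.
def cpu_power_w(volts, amps, vrm_efficiency=0.90):
    vrm_input_w = volts * amps             # power entering the VRM stage
    return vrm_input_w * vrm_efficiency    # rough power delivered to the die

# Example reading: 12.1 V at 18 A on the EPS connector.
print(f"~{cpu_power_w(12.1, 18.0):.0f} W at the die")
```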


----------



## lexluthermiester (Feb 4, 2021)

newtekie1 said:


> And you're just assuming CPU power draw is 0 when idle?


Not at all. I have done enough testing to know that regardless of the system being tested, idle power usage is always very low, most of the time less than 50 watts. Therefore, testing power usage for one component or another is simple once a baseline is established.


newtekie1 said:


> The best way to get CPU power draw is to put a multi-meter in line with the 12v CPU plug and directly measure the Amps going to the CPU.


That is also a valid method, but is a bit more technical and involves some risk.


----------



## watzupken (Feb 4, 2021)

phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?... the phenoms were rated for 94W that sucked down over 200W... Zen 3 while extremely efficient also consumes over its rated TDP... OP is posting on a chip rated for a TDP of 88W that at stock config will eat over 150W.  *Thermal *Design Power (TDP) != Power Consumption.
> 
> What exactly is the problem?  Is it that motherboards are yolo boosting to the moon because they can?  Is it because intel can't get its sh*t together and is still on 14nm?   I guess I am missing the part where we decided this was Intel's fault for lying...


The problem with Intel is this: they claim their product is faster than, or competitive against, competitors' products, but they don't tell you the full story about the power consumption needed to get there. Their marketing material always takes aim at AMD, but reveals nothing about the power inefficiencies. I agree that TDP is now used to indicate the guaranteed base clock speed, but the base clock Intel offers is very low and nowhere near competitive, even compared with an AMD part with the same number of cores and TDP. You are right that the power required for boost will be higher, but taking the Ryzen 5900X as an example, it has a TDP of 105W with a max power limit of 142W at stock settings. Compared to a 65/95W Intel processor that actually guzzles down 250W or more, the latter is significantly more misleading. This may not seem like a problem, but if someone unaware of it buys a cheap motherboard, power supply, and budget cooler, you can imagine the problems ahead.



Mussels said:


> please use your logic to explain how the 65W 10700 uses more power than the 125W 10700k


Logic and proof are here.









Intel Core i7-10700 vs Core i7-10700K Review: Is 65W Comet Lake an Option?
www.anandtech.com
It could be a poorer-binned chip being tested, but I believe Intel keeps the better-binned chips for the K-series, since those are meant to run at high clock speeds and be "overclockable" too, whereas the non-K version runs at a lower clock speed and is locked out of overclocking.


----------



## ViperXTR (Feb 4, 2021)

Curiously, do the PSU calculator websites take these power draw into account?


----------



## londiste (Feb 4, 2021)

ViperXTR said:


> Curiously, do the PSU calculator websites take these power draw into account?


Most probably use TDP.



watzupken said:


> Logic and proof are here.
> 
> 
> 
> ...


The results are no doubt correct and lower-binned non-K CPU getting worse efficiency is not surprising.

I would still suspect this motherboard does something wrong, and would have liked AnandTech to look into that a little. Just turning off the limits (or moving them to where they do not matter) is an "interesting" approach. Based on what I have seen with previous socket platforms, non-Z motherboards usually don't pull these shenanigans, and in most cases non-K CPUs get stock settings or close to that. AnandTech is running a high-end board (which are known to take an aggressive approach). I cannot avoid thinking about MCE back when Coffee Lake came out and motherboards started applying their own understanding of settings unless you very specifically asked for stock.


----------



## Hachi_Roku256563 (Feb 4, 2021)

trickson said:


> Well that is because you got yourself a POS you need to get yourself a real PSU a Corsair TX or CX Gold! you got the PSU gitters is all and Corsair will take all them away for good!


I trust my Thermaltake Litepowers better than anything on the market.
Big-name 80+ Gold 600W PSU: 5 years.
Thermaltake Litepower: 11+ years.
I just think it's a little under-specced for what it's doing. It's not, but I'm overly scared.


----------



## HansRapad (Feb 4, 2021)

TDP is measured at base clock, but most CPUs turbo above base clock.

GPUs do this too.


----------



## Vayra86 (Feb 4, 2021)

newtekie1 said:


> And you're just assuming CPU power draw is 0 when idle?  Without knowing idle power draw, this method gives no usable information in regards to CPU power draw under load and isn't any more accurate than software methods for reading CPU power. In fact it's probably less accurate than software readings.
> 
> The best way to get CPU power draw is to put a multi-meter in line with the 12v CPU plug and directly measure the Amps going to the CPU. But even that isn't perfect, as it won't account for the inefficacy of the VRMs.
> 
> ...



Exactly... a multimeter and segmented load comparisons get you closest, but still aren't entirely accurate.

But... the calculated draw from within software based on the actual processor metrics might still turn out to be a more accurate display of the actual power draw of that specific component, given those caveats.

This was the point @weekendgeek was trying to make. All measuring methods are in some way inaccurate because we're talking about a box of components linked together, AND with variable loads. This is the exact same thing that muddies the waters with Intel's TDP spec. They use different TDPs now for peak draw for example and they renamed turbos to fit that new limit.

Still, the per-core suspected / calculated wattages and their totals from say HWInfo provide a very _plausible_ measure of actual power drawn by that specific component. After all, the CPU 'knows' what it needs, so why would the software not report that with accuracy? The polling is done on a pretty high rate. I think you're getting just as good an _impression_ of the power draw when you use software, especially if you're just comparing to a Kill-a-Watt measuring from the wall (EVEN if you load separate components).


----------



## newtekie1 (Feb 4, 2021)

lexluthermiester said:


> Not at all. I have done enough testing to know that regardless of the system being tested, the idle power usage is always very low, most of the time less than 50watts. Therefore testing power usage for one component or another is simple after a baseline is established.



And that doesn't tell you the idle power of the CPU itself so you have no baseline to tell CPU power draw. All your method tells you is how much extra the CPU draws under load, not how much it is actually drawing.


----------



## qubit (Feb 4, 2021)

To be honest, I don't see what's wrong with what Intel is doing here.

Using this technique, in the future, they can build 32-core monster CPUs with 10W TDP and then just blame the user when their rig melts down trying to pull 700W. What's not to like?


----------



## freeagent (Feb 4, 2021)

On these new CPUs you should only be looking at the PL2 power limit, because at that point nothing else matters.


----------



## qubit (Feb 4, 2021)

freeagent said:


> On these new CPU's you should only be looking at PL2 TDP. Because at that point nothing else matters.


I'd like to see a rig built to withstand the harshest of heat producing benchmarks (Furmark for CPUs, effectively) at a big overclock, with lowish temperatures and just what kind of hardware it takes to do this. Might have to go cryo.


----------



## lexluthermiester (Feb 4, 2021)

newtekie1 said:


> And that doesn't tell you the idle power of the CPU itself so you have no baseline to tell CPU power draw. All your method tells you is how much extra the CPU draws under load, not how much it is actually drawing.


When the CPU is idle, its power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.


----------



## freeagent (Feb 4, 2021)

It's not even the wattage that's brutal; wattage was easy to deal with on certain previous generations. The cores are so small and dense now, but the power is still there, and that's what makes it hard to tame. Being stuck on their node obviously doesn't help.

Just think: if it weren't for TSMC, AMD could be where Intel is sitting, because we all know they don't fab their own warez.

I don't know.. to me its not a big deal. I know to expect heat, and I am prepared for it.


----------



## trickson (Feb 4, 2021)

lexluthermiester said:


> When the CPU is idle, it's power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.


Doesn't sound so simple.
If so why 6 pages to describe it?


----------



## lexluthermiester (Feb 4, 2021)

trickson said:


> Doesn't sound so simple.
> If so why 6 pages to describe it?


Yes, exactly!


----------



## trickson (Feb 4, 2021)

So can we now state that no one (including Intel) is lying? That heat and power go hand in hand, and that this is all so confusing that sticking a two-pound ball of metal with fans on it (which sucks) or going to water is all we have for cooling?


----------



## Bill_Bright (Feb 4, 2021)

80-watt Hamster said:


> Work in this context doesn't have anything to do with CPU performance. Work in a physical system is all about the coversion of energy. What you're considering work here is only work in the conceptual sense


 No I'm not. Please read the entire exchange that prompted my comments to understand what I was saying instead of just pulling out of context my comment then claiming what I said is wrong!

When I said "performance" I specifically said, "_Performance determines how much "work" can be accomplished in a given amount of time with a given amount of energy._" So "performance" in this context was about the amount of "work" being done, not how fast it was performed.

And yes, it is indeed the conversion of energy. I agree completely.

Had you taken the time to read and understand the entire exchange, you would have seen that I was responding to the incorrect claim that "_energy in equals dissipation_", that is, dissipation in the form of heat. While heat is a big part of that "conversion of energy", it does not "equal" the energy "in" (being consumed), because some of that energy is being converted into "work", with "work" being the crunching of numbers: running the program (flipping and flopping gates).

****

@Mods - Since it is apparent many are not reading the entire thread before commenting, and as such, are taking comments out of context, and because it has now splintered off into many OT tangents, I pose the thread be closed.


----------



## cst1992 (Feb 4, 2021)

Zach_01 said:


> As TDP they refer to PL1 power draw and that is the max sustainable power draw of the CPU. PL2 is much higher than that but by default for a certain period of time called "Tau" each (PL2/Tau) different for every CPU.
> 
> View attachment 186868


Talking about PL1 here, not PL2.
I limited PL1 to 100W in the BIOS and got a max of 98.9W with CoreTemp, so I consider that an accurate enough indicator of actual CPU package power draw.



Bill_Bright said:


> "Machine 1" consumes 100W of energy per minute and gives off 95W in the form of heat. It moves 10 buckets of water 10 feet in that minute.
> 
> "Machine 2" consumes 100W of energy per minute and gives off 95W in the form of heat. But it moves 20 buckets of water 10 feet in that minute.


There's no real way to tell how much of the energy consumed by the CPU is utilized the way you state, and how much is not.
So I agree with what was said - it's best to assume all the power going into the CPU comes out as heat (for designing a cooler, anyway).



Bill_Bright said:


> some of that energy in is being converted in "work" with "work" being the crunching of numbers - running the program (flipping and flopping gates).


True, but how much? There's no way to know.
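As an aside, software package-power readings like CoreTemp's ultimately come from the CPU's own cumulative energy counters (RAPL on Intel). A minimal sketch of the conversion, assuming Linux's powercap sysfs path (the path and read permissions vary by system):

```python
# Software power readouts (CoreTemp, HWiNFO, etc.) are derived from the CPU's
# cumulative RAPL energy counter. On Linux the package counter is exposed at
# a sysfs path like the one below (assumed; varies by system and permissions).
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def counter_to_watts(e0_uj: int, e1_uj: int, interval_s: float) -> float:
    """Convert two samples of the cumulative energy counter (microjoules)
    taken `interval_s` seconds apart into average package power in watts."""
    return (e1_uj - e0_uj) / 1e6 / interval_s

def read_energy_uj() -> int:
    """Read the raw cumulative package energy counter (microjoules)."""
    with open(RAPL_PATH) as f:
        return int(f.read())

# Example: a counter that advanced by 98,900,000 uJ over one second
# corresponds to 98.9 W package power, like the reading quoted above.
print(counter_to_watts(0, 98_900_000, 1.0))  # 98.9
```

Note this measures the package domain only, not the whole board, which is one reason software readings and wall-meter readings disagree.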


----------



## Deleted member 202104 (Feb 4, 2021)

Bill_Bright said:


> No I'm not. Please read the entire exchange that prompted my comments to understand what I was saying instead of just pulling out of context my comment then claiming what I said is wrong!
> 
> When I said "performance" I specifically said, "_Performance determines how much "work" can be accomplished in a given amount of time with a given amount of energy._" So "performance" in this context was about the amount of "work" being done, not how fast it was performed.
> 
> ...



Some light reading:

Conservation of energy - Wikipedia
en.wikipedia.org


----------



## trickson (Feb 4, 2021)

It seems if you really want to know that much about power usage, well, maybe go to school and figure it out!
Six going on seven pages of TDP, and it's not like you can fix it or do anything about it, save one thing: DEAL WITH IT AND MOVE ON!


----------



## Bill_Bright (Feb 4, 2021)

cst1992 said:


> There's no real way to tell how much energy consumed by the CPU is utilized the way you state, and how much is not.


Huh? I NEVER, NOT ONCE, stated any way. Those were just examples that you could [hopefully, but apparently couldn't] use to picture the issue. 


cst1992 said:


> So I agree with what is said - best to assume all the power going in the CPU comes out as heat(for designing a cooler anyway).


"All" of the power? NO! And again, that is just wrong! If "all" of the power was being converted to heat, no "work" would be getting done. 

Did you not understand my analogy using the incandescent light bulbs in post #68 above? Certainly "most" of the power going in is converted to heat. But "some" of that power going in is being converted into light. You can NOT leave that conversion to light out of the equation.


----------



## newtekie1 (Feb 4, 2021)

lexluthermiester said:


> When the CPU is idle, it's power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.



No, the comparison between idle and full load is not what matters. We are talking the actual power draw of the CPU, not the difference between idle and load. Those are two very different numbers. And you suggested this method as an alternative to software readings that you claim are less accurate.  Sorry, but you're wrong.  Your method is worse than the software readings(which are reading hardware sensors by the way). Your method will not give actual CPU power draw and actual CPU power draw is what matters.


----------



## trickson (Feb 4, 2021)

newtekie1 said:


> No, the comparison between idle and full load is not what matters. We are talking the actual power draw of the CPU, not the difference between idle and load. Those are two very different numbers. And you suggested this method as an alternative to software readings that you claim are less accurate.  Sorry, but you're wrong.  Your method is worse than the software readings(which are reading hardware sensors by the way). Your method will not give actual CPU power draw and actual CPU power draw is what matters.


The CPU manufacturer has a tool called a gauge, and I am sure they use it to calculate the TDP. The only way the consumer is ever going to know exactly how much the CPU alone is drawing is to isolate it and test it; can you do that?
If not, all this is just pure BS, and now it's off the rails. 
6 pages and it's just as confusing as the first post! 
Your CPU takes in power; that power is expelled as heat that must be removed in some way. Once it maxes out or reaches the TDP limit, the CPU will cook and you will be pissed, so figure it out!


----------



## Vayra86 (Feb 4, 2021)

trickson said:


> The CPU manufacture has a tool it's called a gauge and they use this to calculate the TDP I am sure of it. The only way the consumer is ever going to know exactly how much the CPU alone is drawing is if you can isolate it and test it can you do this?
> If not all this is just pure BS and now is off the rails.
> 6 Pages and it's just as confusing as the first post!
> Your CPU takes in power that power is expelled into heat and energy that must be cooled in some way. once it maxes out or reach the TDP limit the CPU will cook and you will be pissed so figure it out!



The gist and conclusion to this topic is that if you want to be safe with Intel, take the highest TDP they can write down about that CPU as your target to base cooling on.

Ergo, Intel is producing 125W-and-up CPUs across virtually half to two-thirds of the stack. That's being honest with each other. They report those numbers only for the K CPUs, but those actually go further on peak boost. With 11th gen they settled on 125W because there was no way back, but 10th gen...

Let's look at the 65W TDP 10900.

Meanwhile, with 50W at idle... so there's at least 5W in there already from the CPU, being generous:

Intel Core i9-10900 Review - Fail at Stock, Impressive when Unlocked

In our Core i9-10900 review we're taking a close look at what can be gained from unlocking the power limit of this 65 W processor. Results are impressive: up to 40% faster apps and performance that rivals the Core i9-10900K at much lower pricing, but heat output is increased, too.

www.techpowerup.com




Somebody tell me how 140 minus 50 ends up being "somewhere around 65".

And let's not even mention 'max turbo' 
Oh yeah, and let's not use the IGP either, because that won't end well.

Also, strange how they can mention 'Up to' with every clock except Base, but they can't mention 'Up To' when it comes to TDPs. Very strange indeed.

Also somebody explain how I should view that 125W TDP given that Max Turbo already hits royally over that number.


----------



## ThrashZone (Feb 4, 2021)

Hi,
Default clocks/turbo on the 10900K are pretty bad.
Just running R20 it will throttle like grandma's wheelchair before it ends lol
You at least have to activate MCE/Multi-Core Enhancement and remove all limits, or you'd be very disappointed in your score.


----------



## newtekie1 (Feb 4, 2021)

Vayra86 said:


> Also somebody explain how I should view that 125W TDP given that Max Turbo already hits royally over that number.



140W is royally over 125W?  And those are whole-system numbers, not just the CPU.


----------



## unclewebb (Feb 4, 2021)

ThrashZone said:


> Just running R20 it will throttle like grandmas wheel chair before it ends


Here is a 10850K set to run at the same default speed as a 10900K. As R20 is just finishing, the CPU is still running at its full rated speed.



 

The Intel recommended default turbo power limits for the 10900K are 125W long term and 250W short term. The default turbo time limit is 56 seconds. R20 is a short test. A 10900K should have no problem completing R20 at full speed without a hint of throttling. 

In a longer test like R23, then the turbo power limit will drop to 125W and it will be throttle city.  

Instead of Intel lying about TDP, the real problem is that Intel CPUs cannot deliver their full rated performance indefinitely once they drop down to their rated TDP. Most consumers do not understand this. Their mobile CPUs do the same thing: long-term throttling so they do not exceed rated TDP.

Intel is like a shady used car salesman. They only tell you what you want to hear. Run a quick R20 test in the store and everything looks great. Head out to the mountains and try to go up a long grade and your shiny new car will be throttling along in the slow lane.
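For illustration, the PL1/PL2/Tau behavior described above can be sketched as a toy model (a simplification using the 10900K defaults quoted in this post; the real firmware uses an exponentially weighted moving average of power, not a hard cutoff):

```python
# Toy model of the PL1/PL2/Tau behavior described above (defaults quoted for
# a 10900K: PL1 = 125 W, PL2 = 250 W, Tau = 56 s). The real algorithm uses an
# exponentially weighted moving average of power rather than a hard cutoff,
# so this only illustrates the shape of the behavior.

def allowed_power(demand_w, elapsed_s, pl1=125.0, pl2=250.0, tau=56.0):
    """Power (W) the CPU may draw after `elapsed_s` seconds of sustained load:
    up to PL2 while the turbo budget lasts, clamped to PL1 afterwards."""
    if elapsed_s <= tau:
        return min(demand_w, pl2)
    return min(demand_w, pl1)

# An R20-like load (~200 W demanded, finished well under Tau): never throttles.
print(allowed_power(200, 30))   # 200

# An R23-like load still demanding 200 W five minutes in: clamped to PL1.
print(allowed_power(200, 300))  # 125
```

This is why a short R20 run completes at full speed while a longer R23 run drops to the 125W limit partway through.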


----------



## Hachi_Roku256563 (Feb 4, 2021)

In my head a CPU should not go over the TDP.
Sure, by 10-30W for turbo.
But these new Intel CPUs are at like double their TDP.
That means if you don't have enough headroom in your PSU: POOF.


----------



## trickson (Feb 4, 2021)

unclewebb said:


> Intel is like a shady used car salesman. They only tell you what you want to hear. Run a quick R20 test in the store and everything looks great. Head out to the mountains and try to go up a long grade and your shiny new car will be throttling along in the slow lane.


LOL.
How true, but is it not advantageous for a big tech giant to stretch the truth some? They are not lying, just exaggerating.
I mean, I have never heard a car salesman say "this car is an absolute piece of crap, but you need it, you should buy it." Though that is effectively what they are saying!


----------



## freeagent (Feb 4, 2021)

unclewebb said:


> Instead of Intel is lying about TDP, the real problem is that Intel CPUs cannot deliver their full rated performance indefinitely when they drop down to their rated TDP. Most consumers do not understand this. Their mobile CPUs do the same thing. Long term throttling so they do not exceed rated TDP.


That's a good point..


----------



## newtekie1 (Feb 4, 2021)

unclewebb said:


> Instead of Intel is lying about TDP, the real problem is that Intel CPUs cannot deliver their full rated performance indefinitely when they drop down to their rated TDP. Most consumers do not understand this. Their mobile CPUs do the same thing. Long term throttling so they do not exceed rated TDP.



Isn't that the entire idea behind Turbo Boost though and always has been?  As long as they can maintain their base clock, which Intel tells you, then that is what you are guaranteed.  Anything beyond that is a bonus thanks to Turbo Boost.  People have been complaining like you are since Turbo Boost came out, and I've even seen people call dropping below the boost clock thermal throttling even back in the early days when it was largely temperature based.

But it's a boost; a boost is not meant to be a permanent performance increase. This is how boosting works. Are the boost clocks on graphics cards guaranteed? No. Are the boost clocks on AMD's CPUs guaranteed? No.



Isaac` said:


> In my head a cpu should not go over the tdp
> sure by 10-30w for turbo
> but these new Intel cups are like double there tdp
> that means if you dont have enough headroom in your psu POOF



There are a few things wrong with this statement.  First, no one should be running a PSU at the limit like that. Second, if you have at least a half-decent PSU, you won't have a poof; the unit will just shut down.


----------



## unclewebb (Feb 4, 2021)

trickson said:


> LOL


It would be fun to go to BestBuy or similar to see if the salesman gives you the whole truth about Intel TDP or just part of the truth. Start up Cinebench R23 and 5 minutes later you will be able to ask one of two questions.

1) Why is this 5 GHz CPU running so slow?

or

2) Why is this 125W CPU sucking 250W? 

Kind of like a car company that advertises that their car can go 200 mph or get 50 mpg. Sure it can. Just not at the same time.


----------



## trickson (Feb 4, 2021)

newtekie1 said:


> Isn't that the entire idea behind Turbo Boost though and always has been?  As long as they can maintain their base clock, which Intel tells you, then that is what you are guaranteed.  Anything beyond that is a bonus thanks to Turbo Boost.  People have been complaining like you are since Turbo Boost came out, and I've even seen people call dropping below the boost clock thermal throttling even back in the early days when it was largely temperature based.
> 
> But the it's a boost, a boost is not meant to be a permanent performance increase. This is how boosting works. Are the boost clocks in graphics cards guaranteed? No. Are the boost clocks on AMD's CPUs guaranteed? No.


Well said thank you.
One thing I have always done is replace the stock cooling and try to OVERKILL on the HSF. 
What it comes down to is cooling before you hit the Throttle point right?


----------



## ThrashZone (Feb 4, 2021)

unclewebb said:


> Here is a 10850K set to run at the same default speed as a 10900K. As R20 is just finishing, the CPU is still running at its full rated speed.
> 
> View attachment 186982
> 
> ...


Hi,
Still pretty bad to me, and yes, it does throttle on R20 too, which is why I asked on the ROG forum and got this:
the short story was, activate MCE


----------



## trickson (Feb 4, 2021)

unclewebb said:


> It would be fun to go to BestBuy or similar to see if the salesman gives you the whole truth about Intel TDP or just part of the truth. Start up Cinebench R23 and 5 minutes later you will be able to ask one of two questions.
> 
> 1) Why is this 5 GHz CPU running so slow?
> 
> ...


LOL oh okay so I get it now.
It's like the time I went to buy a Chevy Cruze: the speedo said 140MPH and I was laughing in the salesman's face. I said there is NO way a 4-cylinder 1.2L could ever reach that speed unless it was souped up like mega!
Even then you won't have a car that lasts very long. 
But like I said, just beef up the HSF and PSU to the max and stop worrying about the TDP. Or don't; I could not care less at this point.


----------



## freeagent (Feb 4, 2021)

I see the challenge with new Intel overclocks now.. getting the clocks is the easy part.. just like it always is.. but now its good luck trying to maintain them 

Very clever..


----------



## ThrashZone (Feb 4, 2021)

freeagent said:


> I see the challenge with new Intel overclocks now.. getting the clocks is the easy part.. just like it always is.. but now its good luck trying to maintain them
> 
> Very clever..


Hi,
Bingo Johnny we have a winner


----------



## trickson (Feb 4, 2021)

freeagent said:


> I see the challenge with new Intel overclocks now.. getting the clocks is the easy part.. just like it always is.. but now its good luck trying to maintain them
> 
> Very clever..


Right.
I hate all the boost talk myself; my CPU should run at its rated 3.6GHz 24/7. As for the speed boost crap? Well, it does shoot up to 4.4GHz, but it doesn't hold it for any real length of time, so it's more of a "look what I can do" than a "this is what I will do!"


----------



## 80-watt Hamster (Feb 4, 2021)

unclewebb said:


> Kind of like a car company that advertises that their car can go 200 mph or get 50 mpg. Sure it can. Just not at the same time.



Not sure that illustrates the point you're trying to make, because a car like that would be AMAZING.


----------



## londiste (Feb 4, 2021)

Vayra86 said:


> Also somebody explain how I should view that 125W TDP given that Max Turbo already hits royally over that number.


Max turbo is effectively overclocking. You are manually removing any normal limits.


----------



## Bill_Bright (Feb 4, 2021)

@ThrashZone - Started watching that first GamersNexus video in your post #156 thinking I would give it 5 minutes before I got bored. That didn't happen. I watched the whole thing!
Nice find!

Very interesting how they determined it was the motherboard makers and their little tweaks - or cheats as they were called - to make their boards look better at the so called (but not) default settings published by Intel. Settings that resulted in inaccurate readings that, in turn, caused some to bash Intel when not deserved. 

There was still plenty of fault to go around, with Intel too. But I think everyone who thinks Intel is the biggest, fattest liar here ought to view it.



londiste said:


> Max turbo is effectively overclocking.


Kinda sorta. If the CPU is designed to run at those levels for sustained periods of time, is it really "over" clocking? Or is the base clock the "under" clock speed before the marketing weenies got their mitts in the mix?


----------



## cst1992 (Feb 4, 2021)

80-watt Hamster said:


> Not sure that illustrates the point you're trying to make, because a car like that would be AMAZING.


Get a Tesla then.
The 2021 Model S gets 100+ MPGe at highway speed and the Plaid version has a 200 mph top speed.


----------



## unclewebb (Feb 4, 2021)

ThrashZone said:


> it does throttle on R20 too


The 16:00 minute mark of the first video you posted shows that Cinebench R20 power consumption is 200W. That is well under the 250W short term turbo power limit that Intel recommends the 10900K should be set to. It does not take 56 seconds to complete R20 so a 10900K should have no problem completing this benchmark at its full rated speed with zero power limit throttling. 

If the BIOS sets a 10900K to the default turbo values, 125W long, 250W, short and 56 seconds, Cinebench R20 will run at full speed for the entire test. Turbo boost does not last indefinitely. If you run Cinebench R20 multiple times back to back, the turbo boost reserve will be gone and the CPU will throttle based on the long term 125W limit. 



newtekie1 said:


> As long as they can maintain their base clock


I know people have been saying this for a long time but I cannot remember seeing any documentation from Intel that guarantees anything. Since the 2nd Gen Core i, Intel has always recommended that the long term turbo power limit be set equal to the TDP. This is still recommended with the 10th Gen. When I first boot up after installing a new BIOS version, my motherboard stops and specifically asks if I want to set the CPU up to the default power limits or not. If I select Yes, it sets the power limits to the Intel default values.



Bill_Bright said:


> caused some to bash Intel when not deserved


I agree. I bought an Intel CPU with a 125W TDP rating and when set to default specs, it runs at a maximum of 125W. I got exactly what was advertised. No complaints. I am even happier that Intel left the power limits unlocked so I can jack them up sky high, overclock this CPU and get more performance than what I paid for. Thanks Intel.


----------



## londiste (Feb 4, 2021)

Bill_Bright said:


> Kinda sorta. If the CPU is designed to run at those levels for sustained periods of time, is it really "over" clocking? Or is the base clock the "under" clock speed before the marketing weenies got their mitts in the mix?


This is the stupid grey area Intel has created but this is out of spec. 
Max Turbo in that review means power limits lifted to maximum possible values, effectively removing power limits from equation altogether.


----------



## Hachi_Roku256563 (Feb 4, 2021)

newtekie1 said:


> There are a few things wrong with this statement. First, no one should be running a PSU on the limit like that. Second, if you have at least a half decent PSU, you won't have a poof, the unit will just shut down.


A PSU shutting down is what I meant.
I mean, a PSU may not be at its limit; a 125W CPU
with almost 100+W of headroom sounds plenty,
and then the CPU turbos up and uses all of it,
and then POOF,
the computer has shut down.


----------



## Bill_Bright (Feb 4, 2021)

londiste said:


> This is the stupid grey area Intel has created but this is out of spec.


I would not call it stupid. In fact, I would call it smart. These features (and make no mistake, AMD does it too) allow a processor to increase performance when needed and throttle back to conserve energy and reduce heat when the extra boost is not needed. I find that very clever indeed - and I'm not even a tree hugger!


----------



## Selaya (Feb 4, 2021)

The _stupid_ part of this, well, _fiasco_ is the fact that Intel isn't enforcing any of the guidelines they've set forth - they're basically like _yeah, whatever, do whatever the fuck you want, it's your problem_ when it comes to default turbo (limit) behavior, which in turn led to GN Steve's rant video. Given that Intel is seriously uncompetitive when their CPUs throttle down to base clock due to the node disadvantage, yeah, that behavior is absolutely shady at best.


----------



## Arctucas (Feb 4, 2021)

"TurboBoost"?

Hmm... I just lock all cores at 50x, set manual Vcore, set VCore Mode to Adaptive, Default VDroop, disable C-states, enable HyperThreading, Windows Power Option to High Performance, ... Rock-n-Roll.


----------



## TheoneandonlyMrK (Feb 4, 2021)

HansRapad said:


> TDP is measured on base clock, but most CPU turboing above base clock
> 
> GPU do this too


For Intel, yes. AMD use a TDP that won't be exceeded in the default config while loaded and boosting as high as it can; I think that's the point. Intel use nebulous bull###t...
@Arctucas with the stock cooler or a 125-watt one?!


----------



## Bill_Bright (Feb 4, 2021)

Selaya said:


> The _stupid_ part of this, well _fiasco_ is the fact that Intel isn't enforcing any of the guidelines they've set forth


Come on. You gotta know that if Intel were to even think of "forcing" guidelines on anybody, the Intel haters/AMD fanboys would be all over the "big brother" aspect of this just as much as, if not more than, the Microsoft haters are all over Microsoft whenever they push something on us - even when it is for the good of the vast majority of users.

 (Can't wait to see how the MS haters in this forum reply to that!  )


----------



## Aquinus (Feb 4, 2021)

Bill_Bright said:


> I would not call it stupid. In fact, I would call it smart. These features (and make no mistake, AMD does it too) allow a processor to increase performance when needed and throttle back to conserve energy and reduce heat when the extra boost is not needed. I find that very clever indeed - and I'm not even a tree hugger!


This. Boost algorithms are designed to run at full performance given the constraints placed around them. In this respect, I think both Intel and AMD do a good job. I think people have to understand that CPUs are designed these days to take advantage of bursty load. It's one of the reasons why mobile devices these days feel a heck of a lot more responsive than they used to without it.



Bill_Bright said:


> (Can't wait to see how the MS haters in this forum reply to that!  )


I'm sure people have opinions about me using a Mac as a daily driver.


----------



## newtekie1 (Feb 5, 2021)

unclewebb said:


> I know people have been saying this for a long time but I cannot remember seeing any documentation from Intel that guarantees anything. Since the 2nd Gen Core i, Intel has always recommended that the long term turbo power limit be set equal to the TDP. This is still recommended with the 10th Gen. When I first boot up after installing a new BIOS version, my motherboard stops and specifically asks if I want to set the CPU up to the default power limits or not. If I select Yes, it sets the power limits to the Intel default values.



The definition of the base clock (Processor Base Frequency), in the tech world, is that it is the clock speed you are guaranteed. Intel openly tells you what the base clock is.  This isn't like when AMD released graphics cards and only told you the boost clock speed, and everyone complained when the GPUs ran slower than that.

And this is right on Intel's website:



> The processor base frequency is the operating point where TDP is defined.



So if people don't like their processors going beyond the TDP, turn off Turbo Boost and STFU about it, because Intel is very clear on this subject.


----------



## lexluthermiester (Feb 5, 2021)

newtekie1 said:


> The definition of the base clock, in the tech world, is that it is the clock speed you are guaranteed.


Incorrect. The base clock is the clock from which the total operating clock is derived, which is why CPUs have multipliers and have for 30+ years. You are talking about the *Base Operating Frequency*. If you are going to insult and attempt (poorly) to correct someone like @unclewebb, who knows a LOT more about tech than you do, try NOT to embarrass yourself in the process.


----------



## unclewebb (Feb 5, 2021)

newtekie1 said:


> And this is right on Intel's website:


You are absolutely right.



> *TDP*
> Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements.



With Turbo Boost disabled, my 10850K runs Prime95 Small FFTs at 95W so it is well under the 125W TDP rating. 

Now can we all agree that Intel is not lying about TDP?


----------



## RandallFlagg (Feb 5, 2021)

theoneandonlymrk said:


> For intel yes AMD use Tdp that won't be exceeded in default config while loaded and boosting as high as it can, I think that's the point, Intel use nebula's bull###t. ..
> @Arctucas with the stock cooler or a 125watt one?!.



AMD configures its 65W "TDP" parts with an 88 Watt PPT - max power to the socket.  

The 5600X is a 65W TDP part that drew 74W avg in this test.  In other words, it drew more than its rated TDP.

The 10600K in this image is a 125W TDP part that actually drew 103 watts avg.  

By default, the Intel rig is doing a better job staying inside its TDP.

Both of these can go way beyond that limit if you power unlock them and OC them.  

The main difference being that AMD maxes out at 88W PPT.  At that point you are done on AMD platforms with a 65W part.

Intel has no such limit imposed, it will just shut down when it overheats.

In other words, Intel is a way better platform for a tweaker/tuner.


----------



## TheoneandonlyMrK (Feb 5, 2021)

RandallFlagg said:


> AMD configures its 65W "TDP" parts with an 88 Watt PPT - max power to the socket.
> 
> The 5600X is a 65W TDP parts that drew 74W avg in this test.  In other words, it drew more than its rated TDP.
> 
> ...


I'd like to look into how Tom's achieved that table; got a link?
One still pulls less and does more, and as for overclocking, that's debatable with Infinity Fabric clocking working so well.


----------



## RandallFlagg (Feb 5, 2021)

theoneandonlymrk said:


> I'd like to look into how Tom's achieve that table , got a link.
> One still pulls less, does more and as for overclocking, that's debatable with infinity fabric clocking working so well.



Efficiency? That's quite different from TDP, so that smacks a bit of an early lead-in to goalpost shifting. If you can't win on one thing, just change the topic, right?

But here's your link:

AMD Ryzen 5 5600X Review: The Mainstream Knockout

Kill the body and the head will die

www.tomshardware.com


----------



## TheoneandonlyMrK (Feb 5, 2021)

RandallFlagg said:


> Efficiency, thats quite different from TDP so smacks a bit an early lead in to goal post shifting there.  If you can't win on one thing, just change the topic right?
> 
> But here's your link :
> 
> ...


I'm not five, or a troll; I don't need to win. Ty for the link.


----------



## londiste (Feb 5, 2021)

The 5600X's power limit is at 76W (Package Power).
The way these limits are set up, it can change based on load and temperature and all that in theory, but I have not yet seen it be anything other than 76W.


----------



## bencrutz (Feb 5, 2021)

RandallFlagg said:


> Efficiency, thats quite different from TDP so smacks a bit an early lead in to goal post shifting there.  If you can't win on one thing, just change the topic right?
> 
> But here's your link :
> 
> ...


that's cute, according to W1zzard the 5600X is way more efficient than the 10600K

and for power consumption:


----------



## InVasMani (Feb 5, 2021)

phanbuey said:


> I'm so confused... AMD and Intel have been doing the same thing for FOREVER ?... the phenoms were rated for 94W that sucked down over 200W... Zen 3 while extremely efficient also consumes over its rated TDP... OP is posting on a chip rated for a TDP of 88W that at stock config will eat over 150W.  *Thermal *Design Power (TDP) != Power Consumption.
> 
> What exactly is the problem?  Is it that motherboards are yolo boosting to the moon because they can?  Is it because intel can't get its sh*t together and is still on 14nm?   I guess I am missing the part where we decided this was Intel's fault for lying...


Careful, your fanboyism is showing by being impartial. You wouldn't want that; let your inner AMD/Intel fandom show.


----------



## Vayra86 (Feb 5, 2021)

newtekie1 said:


> 140w is royally over 125w?  And that is whole system numbers, not just the CPU.


This is a *65W* TDP CPU as per the Intel slide right above it. I even specifically mentioned that even if you subtract all of the idle load (50W), you'll still be grossly out of spec.


----------



## InVasMani (Feb 5, 2021)

Chloe Price said:


> I'd still get a 3600 for a bang for buck setup like I did several month ago..


The 3300X, to me, is perhaps the sweet-spot king on performance/efficiency. Pricing these days, on the other hand? Try again.


----------



## Vayra86 (Feb 5, 2021)

unclewebb said:


> Here is a 10850K set to run at the same default speed as a 10900K. As R20 is just finishing, the CPU is still running at its full rated speed.
> 
> View attachment 186982
> 
> ...



Thank you, at least there is some semblance of sanity and cognitive functioning left on this forum.


----------



## Hachi_Roku256563 (Feb 5, 2021)

lexluthermiester said:


> Incorrect. The base clock is the clock from which the total operating clock is derived which is why CPU's have multipliers and have for 30+years. You are talking about *Base Operating Frequency*. If you are going to insult and attempt(poorly) to correct someone like @unclewebb, who knows a LOT more about tech than you do, try NOT to embarrass yourself in the process.


you're wrong, he is actually right lol


----------



## londiste (Feb 5, 2021)

Vayra86 said:


> This is a *65W *TDP CPU as per the Intel slide right above it. I even specifically mentioned that even if you reduce all of the idle load (50W) you'll still be grossly out of spec.


Assuming this runs at official spec - which it surprisingly seems to be doing - for the short-ish turbo time (8s or 28s, depending on which Intel's spec version we are looking at, with real values from motherboards often at 56s) CPU can run at 1.25x TDP. That is 81-something W. Compared to idle, some other components also get load (plus PSU efficiency if it was measured at wall) so whole-system consumption rising by 91W from idle to multi-core load does not seem worrying.

Similarly 65W TDP 3700X uses 93W more when comparing 53W at idle to 146W at multi-core (and 3700X idles at about 10W higher compared to Intel CPUs). Its spec says 1.35x TDP for power limit, so 88W. Again, rest of the components (plus potentially PSU efficiency) makes it run at about spec.

Got curious and checked the 3700X review as well - 2700 went from 46W to 123W. Again taking rest of the components etc into account, Ryzen 2000 is the last generation of CPUs that actually put power limit to where TDP is set.
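The multipliers above can be checked with quick arithmetic (the 1.25x and 1.35x factors are the ones quoted in this post, not official datasheet values):

```python
# Sketch of the short-term power limits derived from TDP, using the
# multipliers quoted above (assumptions from this post, not datasheets):
# Intel: PL2 ~ 1.25 x TDP; AMD: PPT ~ 1.35 x TDP.

def intel_pl2(tdp_w: float) -> float:
    """Intel short-term turbo power limit under the assumed 1.25x rule."""
    return round(tdp_w * 1.25, 1)

def amd_ppt(tdp_w: float) -> float:
    """AMD package power tracking limit under the assumed 1.35x rule."""
    return round(tdp_w * 1.35, 1)

print(intel_pl2(65))  # the "81-something W" figure for the 65 W 10900
print(amd_ppt(65))    # ~88 W for a 65 W Ryzen part like the 3700X
```

Whole-system wall measurements then add the rest of the board plus PSU losses on top of these package limits, which is why the deltas in the reviews land a little above the raw numbers.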


----------



## Vayra86 (Feb 5, 2021)

londiste said:


> Assuming this runs at official spec - which it surprisingly seems to be doing - for the short-ish turbo time (8s or 28s, depending on which Intel's spec version we are looking at, with real values from motherboards often at 56s) CPU can run at 1.25x TDP. That is 81-something W. Compared to idle, some other components also get load (plus PSU efficiency if it was measured at wall) so whole-system consumption rising by 91W from idle to multi-core load does not seem worrying.
> 
> Similarly 65W TDP 3700X uses 93W more when comparing 53W at idle to 146W at multi-core (and 3700X idles at about 10W higher compared to Intel CPUs). Its spec says 1.35x TDP for power limit, so 88W. Again, rest of the components (plus potentially PSU efficiency) makes it run at about spec.
> 
> Got curious and checked the 3700X review as well - 2700 went from 46W to 123W. Again taking rest of the components etc into account, Ryzen 2000 is the last generation of CPUs that actually put power limit to where TDP is set.



And in both cases, this is a bad move that pushes some kind of bill onto the end user, be it cooling, overall power usage, whatever... they're exceeding the TDPs they put on the spec sheet while their older parts did not.

If all parts had always done this, we'd be having a different discussion I think, but that wasn't the case, despite what some here are adamant to keep claiming - not even at the high end. You got better-binned parts there, and the lower parts in the stack simply carried much more headroom, also in voltages. There were tons of Intel Ivy Bridge, Haswell and Broadwell CPUs that could run comfortably at vcore well below stock, _even_ while running all-core turbo at the frequencies specified on the sheets. Maybe some would need stock volts for that. But higher? Rarely, if ever... I've seen more i7 quads run below the specified 1.2V or even 1.15V than I can count.


----------



## Zach_01 (Feb 5, 2021)

Don’t confuse TDP with total max power consumption. They are two different things, and users make the mistake of thinking they are equal. There is a reason the “Thermal” is in TDP.



RandallFlagg said:


> AMD configures its 65W "TDP" parts with an 88 Watt PPT - max power to the socket.
> 
> The 5600X is a 65W TDP parts that drew 74W avg in this test.  In other words, it drew more than its rated TDP.
> 
> ...


5600X has a 65W TDP and ~75W PPT.
75W is with PB (Precision Boost) on, meaning all-core boost (whatever that freq is), way above base freq. If you disable PB it will drop to maybe 40-50W PPT at base freq.
Now if you turn PB on and PBO (PB Overdrive) also on, it may draw even higher than 75W if temperature (primarily) allows it.

The 65W rating describes the cooler the CPU needs while it is at 75W PPT under specific ambient conditions.
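The "TDP is a cooler spec" point can be made concrete with the formula AMD engineers have described in public interviews (reproduced here as a sketch, not an official spec): TDP is derived from a target case temperature, an assumed ambient, and the cooler's thermal resistance, not from electrical draw. The example numbers are illustrative.

```python
# Sketch of the TDP formula AMD has been reported to use:
#   TDP (W) = (max case temp - ambient temp) / cooler thermal resistance
# Note that no electrical quantity appears anywhere in it.

def amd_tdp(t_case_max_c: float, t_ambient_c: float, theta_ca_c_per_w: float) -> float:
    """Rated TDP in watts for a given cooler thermal resistance (C/W)."""
    return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

# Illustrative inputs: 61.8C case target, 42C ambient, 0.189 C/W cooler
print(round(amd_tdp(61.8, 42.0, 0.189), 1))  # ~104.8 -> marketed as "105W"
```

This is also why a better cooler (lower thermal resistance) nudges the effective rating up at the same PPT, as described above: the divisor shrinks while the temperature targets stay fixed.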


----------



## londiste (Feb 5, 2021)

Vayra86 said:


> And in both cases, this is a bad movement that brings a bill of some kind towards the end-user, be it cooling, overall power usage, whatever... they're exceeding the TDPs they put on the spec sheet while their older parts did not.
> If all parts did always, we'd have a different discussion I think, but that wasn't the case, despite what some here are adamant to keep claiming - but it never was even on the high end.


Yup.

There were always exceptions - mostly at high end, 4790K comes to mind from recent times - but ever since power limits were implemented in CPUs they have been generally set at TDP. For a long time this did not even matter because CPU did not manage to use that much power anyway. And then it gradually changed with more and more power fed into CPUs. Power limit shenanigans are a fairly recent development as well. It really started with 8000-series Core for Intel and 3000-series Ryzen for AMD.

At the same time, there is some merit in the reasoning both Intel and AMD (and if we look further from desktop and x86, also other CPU manufacturers) use to accompany these changes. The explanation basically boils down to averaging out the power consumption metrics over time to not exceed TDP "by much", so that whatever cooling is on the thing can manage.

For an average user (which is a vast majority of users) it even does not really matter. Outside synthetic tests or some (not all) productivity workloads, even big hungry Intel CPUs have a reasonable power consumption.



Zach_01 said:


> 5600X has a 65W TDP and ~75W PPT.
> 75W is with PB (precision boost) on, meaning all core boost (whatever that freq is). It’s way above base freq. If you disable PB it will drop to maybe 40-50W PPT for base freq.
> Now if you turn PB on and PBO (PB Override) also on, it may draw even higher that 75W if temperature (primarily) allow it.
> 
> The 65W rating is what cooler the CPU needs while it is on 75W PPT on specific ambient conditions.


76W is the power limit on the 5600X at bone stock, and it will hold that indefinitely under CPU load.
What exactly do you mean by Precision Boost? Precision Boost is Zen's internal clocking technology. If you turn that off (can you?) there should be a noticeable performance hit because the CPU would not boost.
Precision Boost Overdrive is, for the most part, simply a raised power limit.


----------



## InVasMani (Feb 5, 2021)

Precision Boost is AMD's terminology for Intel's SpeedStep, I believe. It's the same idea: rapid, dynamic adaptive voltage and frequency adjustments. It's pretty much a voltage LFO from high to low that quickly scales frequency and voltage up and down based on CPU load. Intel, if I'm not mistaken, is a little more advanced at that particular aspect between the two, but AMD has certainly made progress and gotten better in that area.


----------



## Zach_01 (Feb 5, 2021)

londiste said:


> 76W is power limit on 5600X at bone stock. And it will hold that indefinitely under CPU load.
> What exactly do you mean by Precision Boost? Precision Boost is Zen's internal clocking technology. If you turn that off (can you?) it should be a noticeable performance hit because the CPU would not boost.
> Precision Boost Overdrive is for the most part simply raised power limit.


Of course you can disable it. When we talk about boost “we” mean clocks over base freq. Has anything changed in the last few years?
Maybe I should have said Performance Boost (boost over base freq); that is how BIOS settings label it. If you turn this off, the max clock will be the base freq.

So if you turn off Precision Boost the CPU freq will fluctuate between the minimum (idle) freq (example 2200MHz) and base freq (ex. 3600MHz).

Precision Boost is the performance enhancement tech by AMD, as they state, and Overdrive is potential further boosting headroom.
Clocking from min to base (2.2GHz~3.6GHz) is not boost.



			https://www.amd.com/en/support/kb/faq/cpu-pb2


----------



## ViperXTR (Feb 5, 2021)

So do PSU calculator websites need to account for these boosts in power?


----------



## Flanker (Feb 5, 2021)

Question,  I want to know how much the CPU draws in a sustained gaming or video encoding session at stock settings with all the boost/auto overclocking thingees enabled. Does TDP give a guesstimate of that or should I look at some other spec. (Or just look at a god damn review lol)


----------



## londiste (Feb 5, 2021)

Flanker said:


> Question,  I want to know how much the CPU draws in a sustained gaming or video encoding session at stock settings with all the boost/auto overclocking thingees enabled. Does TDP give a guesstimate of that or should I look at some other spec. (Or just look at a god damn review lol)


Just look at a god damn review 
It varies quite noticeably by game, and power consumption does not necessarily follow the CPU usage %.
Generally, it should be within TDP, but I am sure we can find some exceptions.


----------



## Zach_01 (Feb 5, 2021)

Flanker said:


> Question,  I want to know how much the CPU draws in a sustained gaming or video encoding session at stock settings with all the boost/auto overclocking thingees enabled. Does TDP give a guesstimate of that or should I look at some other spec. (Or just look at a god damn review lol)


Your best bet is HWiNFO sensors mode.
For AMD you look at the “CPU PPT” value and for Intel the “CPU Package Power” value. At least those are the names for CPUs from the last 2-3 years.

If you reset sensor monitoring right before you start the game, then when you exit after X hours you can see the min/max and also the avg value, which is the most important IMO.

If you don’t have the CPU, then a review that puts the CPU under different loads and measures each individually is the only way.
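HWiNFO can also log those sensors to CSV, which makes the min/max/avg step scriptable. A minimal sketch (the power column's exact header varies by CPU and HWiNFO version, so `POWER_COL` is an assumption you'd adjust to match your own log file):

```python
# Minimal sketch of post-processing a HWiNFO CSV log to get min/max/avg
# package power over a session. Adjust POWER_COL to your log's header.
import csv

POWER_COL = "CPU Package Power [W]"  # assumed column name - check your file

def summarize(path, column=POWER_COL):
    """Return (min, max, avg) of a numeric column in a HWiNFO CSV log."""
    values = []
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                values.append(float(row[column]))
            except (KeyError, ValueError):
                continue  # skip footer rows / non-numeric cells
    if not values:
        raise ValueError(f"no numeric data found in column {column!r}")
    return min(values), max(values), sum(values) / len(values)

# lo, hi, avg = summarize("hwinfo_log.csv")
```

Point it at the log written during a gaming session and you get the same min/max/avg that sensors mode shows.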


----------



## cst1992 (Feb 5, 2021)

lexluthermiester said:


> Incorrect. The base clock is the clock from which the total operating clock is derived which is why CPU's have multipliers and have for 30+years. You are talking about *Base Operating Frequency*.


For my 4690k there is no Base Operating Frequency.
I have set a Balanced power profile in Windows, which means it goes down to 800MHz at idle and up to 4.3GHz under load, because those are the clocks I've set in the BIOS.
The base frequency is 3.5GHz, which of course I can get if I disable Turbo Boost, but I haven't, so it's just indicative. I could force-set the multiplier to 28 in the BIOS and have it run at 2.8GHz, but that'd be a waste of a good chip.


----------



## londiste (Feb 5, 2021)

Zach_01 said:


> If you reset sensor value monitoring right before you start the game and then enter, when you exit after X hours you can see the min/max and also the avg value which is the most important IMO.


Also possible to let something draw you some graphs. Either in real time or log the data and do some analysis afterwards.

I have Rainmeter drawing stuff based off HwInfo64 monitoring data:


----------



## freeagent (Feb 5, 2021)

cst1992 said:


> For my 4690k there is no Base Operating Frequency.
> I have set a Balanced power profile in Windows, which means, it goes down to 800MHz when in idle, and at 4.3GHz when on load, because those are the clocks I've set in the BIOS.
> The base frequency is 3.5, which of course I can set if I disable Turbo Boost, but I haven't and so it's just indicative. I could just force-set the multiplier to 28 in the BIOS and have it run at 2.8GHz, but that'd be a waste of a good chip.


Your base operating frequency is 3.5-3.9. You are overclocked right now. My 3770K did 4300MHz with stock volts too.


----------



## newtekie1 (Feb 5, 2021)

lexluthermiester said:


> Incorrect. The base clock is the clock from which the total operating clock is derived which is why CPU's have multipliers and have for 30+years. You are talking about *Base Operating Frequency*. If you are going to insult and attempt(poorly) to correct someone like @unclewebb, who knows a LOT more about tech than you do, try NOT to embarrass yourself in the process.



Actually, the correct name for it is *Processor Base Frequency*. If you are going to attempt (poorly) to correct someone, do not embarrass yourself in the process.  The context made it clear what base clock speed I was talking about; unclewebb obviously knew what I was talking about.  Your post is just trolling and off topic at this point.



Vayra86 said:


> This is a *65W *TDP CPU as per the Intel slide right above it. I even specifically mentioned that even if you reduce all of the idle load (50W) you'll still be grossly out of spec.



And that is whole system power WITH Turbo boost enabled.  Show me some numbers with turbo disabled and then we can talk.


----------



## ThrashZone (Feb 5, 2021)

unclewebb said:


> The 16:00 minute mark of the first video you posted shows that Cinebench R20 power consumption is 200W. That is well under the 250W short term turbo power limit that Intel recommends the 10900K should be set to. It does not take 56 seconds to complete R20 so a 10900K should have no problem completing this benchmark at its full rated speed with zero power limit throttling.
> 
> If the BIOS sets a 10900K to the default turbo values, 125W long, 250W, short and 56 seconds, Cinebench R20 will run at full speed for the entire test. Turbo boost does not last indefinitely. If you run Cinebench R20 multiple times back to back, the turbo boost reserve will be gone and the CPU will throttle based on the long term 125W limit.
> 
> ...


Hi,
I didn't need either of the videos - I got the short story of why throttling was happening - so I just posted them so others could watch if they wanted to.

It's already been said throttling will kick in after 44 seconds (the max BIOS setting for the power state is [448]) and it does; my screenshot shows the minimum VID clock dropping to 4.018GHz, and the cache dropped to 38x too lol 

The real point was that 5.1 on the 10900k didn't beat my old 7900X at 4.9, I believe, until I switched MCE to remove all limits, so yeah, I was a little pissed lol


----------



## trickson (Feb 5, 2021)

OMG 9 pages and I can not figure out who is lying now!

I think YOU all are liars no one knows the truth, You can't handle the TRUTH. 
Watts and voltage and TDP who's lying to me???


----------



## Bill_Bright (Feb 5, 2021)

Aquinus said:


> I'm sure people have opinions about me using a Mac as a daily driver.


You traitorous scumbag! LOL 


ViperXTR said:


> So do PSU calculator websites need to account for these boosts in power?


Sure, I can't speak for all, but you can be confident the best PSU calculator (and the one many others are based on), the eXtreme OuterVision PSU Calculator, ensures in its estimates that the recommended supplies are more than adequate.

It is important to remember that no PSU calculator ever wants to suggest an underpowered supply. So they all pad the results to ensure that never happens. Plus whenever PSU size is being calculated (either manually or via a calculator), it must be assumed that it is possible (no matter how remote the possibility) that there could be a moment in time when the CPU and the GPU, motherboard, RAM, drives, fans and all other attached devices will maximize their demands at the same time. While unlikely, it is possible, so calculators account for that.

So stick with the eXtreme OuterVision PSU Calculator and you will have nothing to worry about. If you are a worry-wart and still concerned, add an extra 50 - 100W to the calculator's results and sleep soundly at night - with one eye open, of course.
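The padding logic above is easy to picture in code. A rough sketch of the worst-case-sum approach (the component figures and the 20% headroom are made-up illustrations, not the OuterVision calculator's actual numbers or method):

```python
# Rough sketch of the worst-case-sum logic a PSU calculator uses: add up
# peak component draws as if everything peaks at once, then pad the total
# so the supply is never undersized. All figures below are hypothetical.

PEAK_DRAW_W = {              # illustrative peak draws for an example build
    "cpu (PL2 / PPT)": 250,  # boost-limit draw, NOT the advertised TDP
    "gpu": 320,
    "motherboard+ram": 60,
    "drives+fans+usb": 40,
}

def recommended_psu_watts(peaks, headroom=1.2):
    worst_case = sum(peaks.values())          # everything peaking at once
    return round(worst_case * headroom, -1)   # pad 20% and round to 10W
```

Note the CPU entry uses the boost-limit draw rather than the TDP - which is the whole point of this thread.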


----------



## r9 (Feb 5, 2021)

Mussels said:


> the magical lighting inside the melted sand make zappy zappy


I always wanted to know how CPUs actually work. Thanks! 



cst1992 said:


> Intel’s Desktop TDPs No Longer Useful to Predict CPU Power Consumption | ExtremeTech
> 
> 
> Intel's higher-end desktop CPU TDPs no longer communicate anything useful about the CPUs power consumption under load. ...
> ...


So Max base and Max boost W rating would be the best representation.


----------



## cst1992 (Feb 5, 2021)

freeagent said:


> Your base operating frequency is 3.5-3.9. You are overclocked right now.. My 3770K did 4300 with stock volts too.


Volts are not stock; the voltage at 4.3GHz is higher than voltage at 3.5GHz.
I just let the motherboard auto-set it for me.


----------



## freeagent (Feb 5, 2021)

It's all so confusing, what is stock anymore? 

Is it balls to the wall, or is it time to hug a tree? I just don't know anymore.


----------



## Zach_01 (Feb 5, 2021)

r9 said:


> So Max base and Max boost W rating would be the best representation.



So something like this that I posted 5 pages back. Intel will never advertise those high numbers. Some SKUs are hitting 3~4x the "advertised" TDP.





Nobody is lying. Intel and AMD are just telling part of the truth. You have to read between the lines. It's typical marketing...

By Intel:
_"Thermal Design Power (TDP) represents the *average power, in watts, the processor dissipates* when operating *at Base Frequency with all cores active under an Intel-defined*, high-complexity workload. Refer to Datasheet for thermal solution requirements"_

AMD's formula for TDP is as I described in simple words a few pages back.


----------



## Bill_Bright (Feb 5, 2021)

We are going around in circles - saying the same things over and over again.


----------



## Zach_01 (Feb 5, 2021)

Bill_Bright said:


> We are going around in circles - saying the same things over and over again.


Yes, and I hope these pics will end it...



I've been saying these from 4th page... sadly


----------



## Bill_Bright (Feb 5, 2021)

Zach_01 said:


> I've been saying these from 4th page... sadly


The merry-go-round was going long before then. I'm hopping off.


----------



## trickson (Feb 5, 2021)

Bill_Bright said:


> We are going around in circles - saying the same things over and over again.


Right! I mean 9 pages and it's not even about Intel lying anymore lol. 
It has Morphed!


----------



## Vayra86 (Feb 5, 2021)

newtekie1 said:


> And that is whole system power WITH Turbo boost enabled.  Show me some numbers with turbo disabled and then we can talk.



No no no.... Look again at the Intel spec sheet. It clearly says TDP is 65W. Not 70. Not 80. Not 75 for five seconds. It says 65W in the same line as it is saying 'Up to' a number of clock speeds.

NOWHERE does Intel specify anything about turbo TDPs. Why doesn't the Intel spec sheet say 'Up to 100W' to match their 'up to' turbos? I mean, we're spending 9 pages now trying to figure out if someone's lying to us.

The answer's right there. This isn't a half truth or anything... it's a spec sheet that lies to you. *None of those CPUs go 'Up To' their rated frequencies on 65W.*

On top of that, even Intel itself specifies that usage may run over the stated TDP if you read the small print. So who are we kidding here? Ourselves? And, again, don't come out saying 'this was always like that', because it's clear the goalposts have shifted since Intel specifies PL1/PL2 and uses TVB and whatnot. All that is, is lots of smoke and mirrors to hide the fact that they really need those 120+ watts to do anything worthwhile in competitive performance.

And here's the kicker... they ALREADY lied to us, because those turbos only count for a small number of cores, not the whole CPU - another nice little bit of info that's gloriously fallen off those spec sheets, gosh I wonder why. And again, looking back at pre-Skylake... you could run an *all-core OC at turbo clocks* within stock voltages, or sometimes even under them. Good luck with that today.

This is the marketing reality slowly shifting our own realities. We already took some baby steps in the past, but luckily not everyone forgets.



trickson said:


> OMG 9 pages and I can not figure out who is lying now!
> 
> I think YOU all are liars no one knows the truth, You can't handle the TRUTH.
> Watts and voltage and TDP who's lying to me???



Everyone, obviously. Trust no one. Always ask your CPU where he's been today and how hot he got.


----------



## trickson (Feb 5, 2021)

Who's zoomin who??


----------



## newtekie1 (Feb 5, 2021)

Vayra86 said:


> No no no.... Look again at the Intel spec sheet. It clearly says TDP is 65W. Not 70. Not 80. Not 75 for five seconds. It says 65W in the same line as it is saying 'Up to' a number of clock speeds.
> 
> NOWHERE does Intel specify something about turbo TDPs. Why doesn't the Intel spec sheet say 'Up to 100W' to match their 'up to' turbos? I mean, we're spending 9 pages now trying to figure out if someone's lying to us.
> 
> ...



From Intel's spec sheets:



> Processor Base Frequency: Processor Base Frequency describes the rate at which the processor's transistors open and close. *The processor base frequency is the operating point where TDP is defined.* Frequency is typically measured in gigahertz (GHz), or billion cycles per second.





> TDP: Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates *when operating at Base Frequency* with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements.





> Max Turbo Frequency: Max turbo frequency is the maximum *single core* frequency at which the processor is capable of operating using Intel® Turbo Boost Technology and, if present, Intel® Thermal Velocity Boost. Frequency is typically measured in gigahertz (GHz), or billion cycles per second.



They aren't lying about anything; they are telling you exactly the specs and how they define TDP as well as the maximum boost.  If you don't like the processor going beyond the rated TDP, turn off turbo boost and STFU about it.  But guess what, you'd better turn off PBO on your AMD processor too, because guess what happens to the power consumption when PBO is active.

Neither is lying about their processor's power consumption. Some people are seeing a number, doing no research on what that number means or how it is determined and instead just assuming it means something it doesn't.


----------



## Vayra86 (Feb 5, 2021)

newtekie1 said:


> From Intel's spec sheets:
> 
> 
> 
> ...



I can see the merit of your point of view about it, but can you see the merit in mine? I'm playing the unsuspecting customer, trying to get 'a little bit' informed. They google Intel and get that spec sheet.

What happened to informing them properly? Honestly? You did not answer that question in relation to my example. Yes, we can read the full Intel bible on how it's supposed to work, but can you reasonably expect that, or should Intel define TDPs on the spec sheet just as they do clocks - within a range, and not a set value?

Money can also be made by making sure your DIY'ing customers know what they're getting. We've had several reports of mishaps related to this in the recent past.

And the silly thing is, they now report a 125W TDP and a separate value for turbo as well, I believe, but it's still too low.

And obviously for AMD the rules aren't any different.


----------



## newtekie1 (Feb 5, 2021)

Vayra86 said:


> I can see the merit of your point of view about it, but can you see the merit in mine? I'm playing the unsuspecting customer, trying to get 'a little bit' informed. They google Intel and get that spec sheet.
> 
> What happened to informing them properly? Honestly? You did not answer that question in relation to my example. Yes, we can read the full Intel bible on how its supposed to work, but can you reasonably expect that, or should Intel define TDPs on the spec sheet just as they do clocks - within a range, and not a set value?



On the spec sheet of every modern processor on Intel's website, every relevant spec has a big *i* next to it.  You click on that and it tells you all the information you need to know about that spec.  If the consumer is too stupid to actually research and read what these specs mean when Intel makes it so insanely easy, then that's the consumer's fault.  They don't even put little numbers next to the spec and make you scroll down to the bottom to read tiny fine print. It is an *i*; you click it and it comes right up telling you what the spec is.

Now, go over to AMD's website and look at the spec sheet for the 5950X.  It says a TDP of 105W and gives absolutely no information about how that TDP is determined.  Is that the TDP at the base clock? Is it the TDP at the maximum boost clock? Can you answer that for me based solely on the information provided on the 5950X product page?


----------



## Vayra86 (Feb 5, 2021)

newtekie1 said:


> On the spec sheet of every processor on Intel's website, every relevant spec has a big *i* next to it.  You click on that and it tells you all the information you need to know about that spec.  If the consumer is too stupid to actually research and read what these specs mean when Intel makes it so insanely easy, then that's the consumer's fault.  They don't even put little numbers next to the spec and make you scroll down to the bottom to read little tiny fine print. It is a *i* and you click it and it comes right up tell you what the spec is.



You do realize that ARK isnt exactly the first place to look nor is it something Intel actively directs you to when they do marketing, right?


----------



## Zach_01 (Feb 5, 2021)

Vayra86 said:


> And obviously for AMD the rules aren't any different.


Even though AMD's CPU power consumption also exceeds TDP, AMD's TDP means something very different from Intel's.
For example, if you install a better cooler/TIM with lower thermal resistance on an AMD CPU (better than the one AMD took its measurements with), then (1) the effective TDP goes up a little if PPT stays the same... and (2) goes up a lot (obviously) if PPT is higher, because a lower internal CPU temp means higher clocking (within what the PPT limit permits).
But the 1st case is the interesting one.


----------



## newtekie1 (Feb 5, 2021)

Vayra86 said:


> You do realize that ARK isnt exactly the first place to look nor is it something Intel actively directs you to when they do marketing, right?



Who said anything about ark? It's literally on the standard product page for the processors on Intel's website. Here, try this:  Go to Google and search 10900K. Ignore the fact that the Ark page is the first result(because ignoring the first result on Google is definitely what the average consumer does). Go to the second result, which is the Intel product page for the 10900K. Click on that. It immediately takes you to the spec sheet. Click the *i* next to TDP.

You still want to try to say Intel's making it hard for the average consumer?

Oh, and by the way, how are you doing on figuring out how AMD TDP numbers are determined?


----------



## trickson (Feb 5, 2021)

Okay it is official this thread is; INSANE.


----------



## Zach_01 (Feb 5, 2021)

Because some people fail to understand what each manufacturer states with advertised TDP, doesnt make the thread insane but something else...


----------



## trickson (Feb 5, 2021)

Zach_01 said:


> Because some people fail to understand what each manufacturer states with advertised TDP, doesn't make the thread insane but something else...


9 pages all talking in circles, I call that: INSANE.

I'm dizzy and hopping off this merry-go-round ....


----------



## Vayra86 (Feb 5, 2021)

newtekie1 said:


> Who said anything about ark? It's literally on the standard product page for the processors on Intel's website. Here, try this:  Go to Google and search 10900K. Ignore the fact that the Ark page is the first result(because ignoring the first result on Google is definitely what the average consumer does). Go to the second result, which is the Intel product page for the 10900K. Click on that. It immediately takes you to the spec sheet. Click the *i* next to TDP.
> 
> You still want to try to say Intel's making it hard for the average consumer?
> 
> Oh, and by the way, how are you doing on figuring out how AMD TDP numbers are determined?



Not at all, if you 'want to know more about thermal solutions' you need to refer to the manual. In other words, dive deep.

Why can Intel not specify the max TDP on the website then and there?












Intel® Core™ i9-10850K Processor (20M Cache, up to 5.20 GHz) - Product Specifications | Intel

Intel® Core™ i9-10850K Processor (20M Cache, up to 5.20 GHz) quick reference with specifications, features, and technologies.

					www.intel.com




And I already told you AMD has a similar responsibility, stop dodging, jesus christ.


----------



## Frick (Feb 5, 2021)

trickson said:


> OMG 9 pages and I can not figure out who is lying now!
> 
> I think YOU all are liars no one knows the truth, You can't handle the TRUTH.
> Watts and voltage and TDP who's lying to me???



Science.


----------



## lexluthermiester (Feb 6, 2021)

Isaac` said:


> you're wrong, he is actually right lol





cst1992 said:


> For my 4690k there is no Base Operating Frequency.
> I have set a Balanced power profile in Windows, which means, it goes down to 800MHz when in idle, and at 4.3GHz when on load, because those are the clocks I've set in the BIOS.
> The base frequency is 3.5, which of course I can set if I disable Turbo Boost, but I haven't and so it's just indicative. I could just force-set the multiplier to 28 in the BIOS and have it run at 2.8GHz, but that'd be a waste of a good chip.






newtekie1 said:


> Actually, the correct name for it is *Processor Base Frequency*.


That depends on where you read it. However, base clock is NOT the same as operating frequency. 


newtekie1 said:


> If you are going to attempt(poorly) to correct someone do not embarrass yourself in the process.


How witty. 


newtekie1 said:


> You're post is just trolling and off topic at this point.


You need a refresher on the definition of "troll".

With Bill Bright on this one, I'm out.


----------



## Hachi_Roku256563 (Feb 6, 2021)

lexluthermiester said:


> That depends on where you read it. However, base clock is NOT the same as operating frequency.


the base clock is the clock that the CPU is definitely gonna hit, even in the worst scenarios, e.g. an Intel i5 in a Surface without a fan thermal throttles,
BUT it won't go lower than the 1.10GHz base


----------



## lexluthermiester (Feb 6, 2021)

Isaac` said:


> the base clock
> is the clock that the cpu is def gonna hit


Incorrect. Base clock is the clock speed that is used to determine the operating clock via multipliers, and this is how CPUs are binned differently: some can run higher and thus get a higher multiplier. Example: my Xeon W3680 has a default multi of 25, so when the multiplier is applied to the base clock of 133MHz, it gets the base operating frequency of 3.33GHz, because 133.33MHz x 25 = 3.33GHz. It can turbo faster and it down-clocks lower, and it does so by dynamically changing the multiplier. The base clock always stays the same, 133MHz.
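That relationship is trivial to show in code. A quick sketch using the W3680 figures from the post above (the 25x default multi comes from the post; the other multipliers are illustrative examples of turbo and idle states):

```python
# Operating frequency = fixed base clock (BCLK) x a dynamic multiplier.
# The BCLK never changes; only the multiplier does.
BCLK_MHZ = 133.33  # nominal "133MHz" base clock on the Xeon W3680

def operating_mhz(multiplier: int) -> float:
    return BCLK_MHZ * multiplier

print(round(operating_mhz(25)))  # default multi -> 3333MHz base operating freq
print(round(operating_mhz(27)))  # an example turbo multiplier -> 3600MHz
print(round(operating_mhz(12)))  # an example idle multiplier -> 1600MHz
```

Changing the multiplier moves the operating frequency; the 133.33MHz base clock stays put the whole time.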


----------



## 80-watt Hamster (Feb 6, 2021)

lexluthermiester said:


> Incorrect. Base clock is the clock speed that is used to determine operating clock via multipliers. This is how CPU's are binned differently. Some can run higher and thus have a high multiplier. Example. My Xeon W3680 has a default multi of 25. So when the multiplier is applied to the base clock of 133mhz, it get's the base operating frequency of 3.33ghz because 133mhz x 25 = 3.3ghz. It can turbo faster and it down-clocks lower, and it does so by dynamically changing the multiplier. The base clock always stays the same, 133mhz.



You're being unnecessarily pedantic.  Google "base clock", and almost the entire first page of results talks about base operating frequency.  Correct or not, it's become more-or-less colloquially accepted.  I don't think anyone here mistakenly thinks BCLK is being discussed.


----------



## Gmr_Chick (Feb 6, 2021)

cst1992 said:


> Get a Tesla then.
> The 2021 Model S gets 100+ MPGe at highway speed and the *Plaid version has a 200 mph top speed.*



Was that a Spaceballs reference?


----------



## 80-watt Hamster (Feb 6, 2021)

cst1992 said:


> Get a Tesla then.
> The 2021 Model S gets 100+ MPGe at highway speed and the Plaid version has a 200 mph top speed.





Gmr_Chick said:


> Was that a Spaceballs reference?



I can't believe I missed that.


----------



## cst1992 (Feb 6, 2021)

Gmr_Chick said:


> Was that a Spaceballs reference?


Elon is like that  He's, as people say, a "man of culture".



trickson said:


> Right! I mean 9 pages and it's not even about Intel lying anymore lol.
> It has Morphed!


Tell me about it. I'm not even sure where the conversation is going anymore, and it's my thread!



lexluthermiester said:


> That depends on where you read it. However, base clock is NOT the same as operating frequency.


No need for that attitude.
Base clock(which is 100MHz for most processors) is not the same as base frequency(3.5GHz for the 4690k).



Isaac` said:


> BUT it wont go lower then the 1.10ghs base


That's definitely not true.
My CPU has a minimum multiplier of x8, which means it can (and does, at idle) operate at 800MHz.
I can (via the BIOS) make it operate at 800MHz under load (it only consumes 15W maximum, even in Prime95), but then it's obviously very slow.


----------



## Deleted member 202104 (Feb 6, 2021)

Gmr_Chick said:


> Was that a Spaceballs reference?



Hahaha.  Yeah, it's an upgrade from Ludicrous mode.  Elon must like that movie.  I just wish they'd include some raspberry jam for the radar.


----------



## TheoneandonlyMrK (Feb 6, 2021)

newtekie1 said:


> From Intel's spec sheets:
> 
> 
> 
> ...


PBO is not turned on by default; Intel, however, does turn theirs on by default.
AMD chips do boost without PBO on, but it's right there in the name: Precision Boost Overdrive.
These things are not the same, nor in how these companies present them.


----------



## Palladium (Feb 6, 2021)

unclewebb said:


> It would be fun to go to BestBuy or similar to see if the salesman gives you the whole truth about Intel TDP or just part of the truth. Start up Cinebench R23 and 5 minutes later you will be able to ask one of two questions.
> 
> 1) Why is this 5 GHz CPU running so slow?
> 
> ...



I want TDP to mean actual power draw during all core sustained max turbo clocks at maxed workloads, no ifs or buts.


----------



## 80-watt Hamster (Feb 6, 2021)

Palladium said:


> I want TDP to mean actual power draw during all core sustained max turbo clocks at maxed workloads, no ifs or buts.



And I want a gold-plated toilet seat, but it's just not in the cards, baby.


----------



## newtekie1 (Feb 6, 2021)

Vayra86 said:


> Not at all, if you 'want to know more about thermal solutions' you need to refer to the manual. In other words, dive deep.
> 
> Why can Intel not specify the max TDP on the website then and there?
> 
> ...



Intel provides a thermal solution, except on high-end processors, where Intel expects the people building a computer to be versed enough to pick an appropriate thermal solution themselves.

Besides that, the TDP listed for their processors is the number you can use to buy a thermal solution. There is no need to "dive deep."  If you buy a 65W CPU and fit a thermal solution capable of handling 65W, you'll get the performance Intel promises, end of discussion. Intel is not lying.



lexluthermiester said:


> That depends on where you read it. However, base clock is NOT the same as operating frequency.



Really? You should tell that to AMD...and nVidia...oh and TPU. Maybe, just maybe, Base Clock *IS* actually an interchangeable term in the industry for Base Frequency...



theoneandonlymrk said:


> Pbo, is not turned on by default, Intel however does turn there's on by default.
> AMD chips do boost without pbo on but it's in it's title precision boost overclocking.
> These thing's are not the same not how these companies present them.



The default setting on every X series AMD board I've built with has PBO set to auto, which is essentially on.  But even with it off, AMD CPUs still turbo boost and go beyond their rated TDP.



Palladium said:


> I want TDP to mean actual power draw during all core sustained max turbo clocks at maxed workloads, no ifs or buts.



And people in Hell want ice water.


----------



## Zach_01 (Feb 6, 2021)

Palladium said:


> I want TDP to mean actual power draw during all core sustained max turbo clocks at maxed workloads, no ifs or buts.


Yeah, you can develop your own chip and promote it the way you like or the way you understand.

TDP is not a potential total max power consumption... period.
It's the minimum thermal solution needed to get the advertised performance for the duration the manufacturer specifies.
They do not list this on the box, either of them... The two don't mean exactly the same thing (Intel vs AMD), but the word "Thermal" in the name should get people thinking, *if* they want to know more than just what cooler to use. If not, then treat TDP as a minimum for the HSF or whatever.



newtekie1 said:


> The default setting on every X series AMD board I've built with has PBO set to auto, which is essentially on.  But even with it off, AMD CPUs still turbo boost and go beyond their rated TDP.



While it's true that the default PBO setting is Auto, that doesn't mean On. Only when it is Enabled do you get the expanded TDC/EDC/PPT limits. On Auto or Disabled, the limits are the SKU's defaults. Ryzen Master can confirm this.


----------



## newtekie1 (Feb 6, 2021)

Zach_01 said:


> Yeah, you can develop your own chip and promote it the way you like or the way you understand.



Or, as has been pointed out multiple times in this thread, disable all boosting and run at the base clock only. Since that is the configuration where TDP is defined.



Zach_01 said:


> While its true that the default PBO setting is auto its not mean On. Only when it is Enabled you get the expansion in TDC/EDC/PPT limits. On Auto or Disabled the limits are the SKU's default limits. RyzenMaster can confirm this.



I probably misspoke originally when I said PBO; I should have just said Turbo Boost. PBO goes beyond turbo boost, as you pointed out.  But even with PBO off, AMD processors still exceed their rated TDP when turboing, so it is a moot point.


----------



## cst1992 (Feb 6, 2021)

Palladium said:


> I want TDP to mean actual power draw during all core sustained max turbo clocks at maxed workloads, no ifs or buts.


Guess you'll have to found your own chip company for that...
I say do it. Intel is on the decline anyway.



Zach_01 said:


> Yeah, you can develop your own chip and promote it the way you like or the way you understand.


You beat me to it.



newtekie1 said:


> Or, as has been pointed out multiple times in this thread, disable all boosting and run at the base clock only. Since that is the configuration where TDP is defined.


That's such a waste...


----------



## Vayra86 (Feb 6, 2021)

newtekie1 said:


> Intel provides a thermal solution, exception on high end processors that Intel expects people that are building a computer using those processors are versed enough to know how to pick the appropriate thermal solution.
> 
> Besides that, the TDP listed for their processors is the number you can use to buy a thermal solution. There is no need to "dive deep."  If you guy a 65w CPU and put a thermal solution that is capable of handling 65w, then you'll get the performance Intel promises, end of discussion. Intel is not lying.
> 
> ...


We'll agree to disagree


----------



## Bill_Bright (Feb 6, 2021)

cst1992 said:


> I say do it. Intel is on the decline anyway.


Come on! That's pure bullfeathers. 

Just a tiny bit of homework (which you and I already talked about!) shows Intel is far from in decline. 

Microsoft, Intel share gains lead Dow's nearly 250-point climb



> Microsoft's shares have climbed $7.99, or 3.4%, while those of* Intel are up $1.33, or 2.4%*



The increase in PC sales has helped Intel 



> The positive impact of 2020 on PC sales was once again seen thanks to the 2020 fourth quarter results shared by Intel. The chip maker announced that its *revenues on the PC side increased by 33 percent* compared to the previous year. *Laptop revenues increased by 30 percent.*



Intel Stock Rises on Fourth-Quarter Earnings Beat as PC Sales Continue to Impress



> Intel (ticker: INTC) logged overall fourth-quarter net income of $5.9 billion, which amounts to $1.42 a share, compared with a profit of $6.9 billion, or $1.58 in the year-ago quarter. Adjusted for restructuring and acquisition costs, earnings were $1.52 a share.
> 
> *The results handily beat Intel’s own sales forecasts for the fourth quarter, and topped consensus estimates*, allowing outgoing CEO Bob Swan to leave the company on something of a high note.
> 
> “We significantly exceeded our expectations for the quarter, *capping off our fifth consecutive record year*,”



And for the record, this all helps AMD too. 

Are there ups and downs? Sure. And has AMD gained market share? Yes and no.  AMD no longer enjoys the advantage of selling less expensive processors as they did in the past because today, in most segments, their processors are similarly priced. AMD was climbing in the desktop share but still below 50%. Intel, however, dominates in the more rapidly growing laptop market. 

But more significantly in terms of timing with your comments, while AMD was gaining marketshare the last few years, this last year shows where, Intel Claws Back Desktop PC and Notebook Market Share From AMD, First Time in Three Years. 

So please! If you're unwilling to do your homework and verify your facts before posting, leave the unsubstantiated and clearly false commentary out.


----------



## TheoneandonlyMrK (Feb 6, 2021)

newtekie1 said:


> Intel provides a thermal solution, exception on high end processors that Intel expects people that are building a computer using those processors are versed enough to know how to pick the appropriate thermal solution.
> 
> Besides that, the TDP listed for their processors is the number you can use to buy a thermal solution. There is no need to "dive deep."  If you guy a 65w CPU and put a thermal solution that is capable of handling 65w, then you'll get the performance Intel promises, end of discussion. Intel is not lying.
> 
> ...


Auto isn't PBO on. Boot Ryzen Master: it says auto overclocking, board-run; turn PBO on and Ryzen Master reports PBO on.
Auto is board-run.
Default is AMD's default.
PBO on is PBO on.

And I too see all boards set to Auto, not PBO or Default.


----------



## Mouth of Sauron (Feb 6, 2021)

Remember several years ago, when Intel actually had FANBOYS? Remember all the noise about the power efficiency their Sandy Lakes or whatever had, because they used like 30W less than AMD?

30W is/was for me two LED lightbulbs, and of course only during peak CPU activity, which is only yadayadayada...

Still, people did that - "AMD CPUs are power-hungry hogs and with Intel I'll save so much energy to buy a palace and a yacht, while making the world a better place..."


----------



## newtekie1 (Feb 6, 2021)

cst1992 said:


> That's such a waste...


Yep, just like worrying about your CPU's power consumption.  Performance comes at a price.  You can't have your cake and eat it too.


theoneandonlymrk said:


> Auto isn't pbo on, boot Ryzen master it says auto overclocking, board run, turn pbo on and Ryzen reports pbo on.
> Auto is board run.
> Default is default AMD run.
> Pbo is and pbo on.
> ...


As I already clarified before you made this post, I misspoke when I said PBO. I meant the standard turbo boost AMD uses.  The processors exceed their rated TDP with PBO off, so arguing about what the default setting is for PBO is totally moot.


----------



## trickson (Feb 6, 2021)

Mouth of Sauron said:


> Remember several years ago, when Intel actually had FANBOYS, so - remember all the noise about power-efficiency their Sandy Lakes or whatever had, because they like used 30W less than AMD?
> 
> 30W is/was for me 2 LED lightbulbs, of course during the peak CPU activity, which is only yadayadayada...
> 
> Still, people did that - "AMD CPUs are power-hungry hogs and with Intel I'll save so much energy to buy a palace and a yacht, while making the world a better place..."


Is this while mining or not? 
Because mining is soooooo green!


----------



## Bill_Bright (Feb 6, 2021)

Mouth of Sauron said:


> Remember several years ago, when Intel actually had FANBOYS,
> "AMD CPUs are power-hungry hogs and with Intel I'll save so much energy to buy a palace and a yacht, while making the world a better place..."


 Wow. So says the guy who just clearly painted himself an AMD fanboy with his one and only post in this thread!  

And how ironic that comment, when on your own profile page you say this, 



			
				Mouth of Sauron said:
			
		

> I doubt I'll post more here. This was a nice place, and I'll still prefer highly professional reviews, but the forum become trollocratia. I wish you well with established troll with very little knowledge. If you ever see this, then I have a MIGHTY NEED to say something on certain topic.


What do you call someone who joined a thread because they had a "MIGHTY NEED" to criticize others?

Oh well. 

***

I have always found it amazing and puzzling why some put so much emphasis on the cost of the CPU, when the CPU is just one, and often NOT the most expensive, component in a computer. 

After factoring in the cost of the motherboard, RAM, drives, graphics card, case, PSU, Windows, monitor, speakers, keyboard and mouse (which all cost the same regardless of platform), and spreading those costs over the expected life of the computer, is the price of the CPU really that significant? Especially if you just happen to prefer Blue over Red and so are willing to pay extra for it? I think not.


----------



## TheoneandonlyMrK (Feb 7, 2021)

newtekie1 said:


> Yep, just like worrying about your CPUs power consumption.  Performance comes at a price.  You can't have your cake and eat it too.
> 
> As I already clarified before you made this post, I misspoke when I said PBO. I mean the standard turbo boost AMD uses.  The processors exceed their rated TDP with PBO off, so arguing about what the default setting is for PBO is totally moot.


Now, I can't speak for Ryzen 5000, but anything earlier stays within its TDP at default settings.
And it goes nowhere near double or triple even if it did exceed it.

I'm out; downplayers abound, the stage is yours.


----------



## newtekie1 (Feb 7, 2021)

theoneandonlymrk said:


> Now I can't talk for r5000 ,anything earlier stays within its Tdp at Default settings.
> And goes no where near double or triple if it did.
> 
> I'm out down players about, the stage is yours.



Nope. Ryzen 3000 goes beyond spec when turboing too.  See here: https://www.techpowerup.com/review/amd-ryzen-9-3900xt/18.html

The idle power consumption of the whole system is 54w, the all-core load is 195w.  That's a 141w increase under load, so we know the 105w rated 3900XT is actually consuming at least 141w, and is probably closer to 155-165w.  But we know for sure it is going beyond its rated TDP.

The Ryzen 2000 series is the same deal. See Here: https://www.techpowerup.com/review/amd-ryzen-7-2700x/18.html

The idle power consumption of the whole system is 49w, the all core load is 213w. So we know the processor under load is consuming at least 164w, and likely closer to 180-190w.

And you guessed it, the Ryzen 1000 series is the same deal: https://www.techpowerup.com/review/amd-ryzen-7-1800x/14.html
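The estimate in those numbers can be sketched as a quick back-of-envelope. The PSU efficiency (~90%, plausible for a decent unit at these loads) and the CPU's idle package power (~25 W) are assumptions, not measured figures:

```python
# Rough estimate of CPU package power from wall measurements, as in the
# reviews above: subtract idle wall draw from loaded wall draw, convert
# to DC-side watts via assumed PSU efficiency, then add back the CPU's
# assumed idle package power (the delta alone is only a lower bound).

def est_cpu_power(idle_wall_w: float, load_wall_w: float,
                  psu_eff: float = 0.90, cpu_idle_w: float = 25.0) -> float:
    """Estimated CPU package power under load, in watts."""
    dc_delta = (load_wall_w - idle_wall_w) * psu_eff
    return dc_delta + cpu_idle_w

# 3900XT system: 54 W idle, 195 W all-core load at the wall
print(round(est_cpu_power(54, 195), 1))  # 151.9 -- in the ~155 W ballpark
```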


----------



## TheoneandonlyMrK (Feb 7, 2021)

Seems Ryzen Master lies by quite a bit too then, eh? I'll look into it.
My 3800X with PBO on (PPT 225, TDC 150, EDC 125) says the cores are pulling 85 watts at 4.2GHz while crunching. I think maybe AMD is reporting the wattage the actual cores use, not the whole chip, but I have not seen Ryzen Master report higher than the TDP wattage for the cores; HWiNFO too. I do have a Kill A Watt, but obviously it can't measure anything but the whole system.
And your whole-system assumptions ignore that the memory and subsystem also ramp with load and need accounting for, plus PSU losses.
And you don't comment on two to three times the wattage pulled under PL1 and PL2, so I can see where you're at. I'll still be leaving you to it.


----------



## newtekie1 (Feb 7, 2021)

theoneandonlymrk said:


> Seems Ryzen master lies by quite a bit too then eh, I'll look into it.
> 3800X pbo on ppt225 tdc 150 edc125 says the cores are pulling 85 watts at 4.2 crunching here I think maybe AMD are calling out the wattage the actual cores will use not the whole chip perhaps but I have not seen Ryzen master report higher than the Tdp wattage used by the core's, hwinfo too and I do have a killawatt but obviously it can't really do anything but whole system.
> And your whole system assumptions are tat the memory and subsystem also ramps with load and needs accounting for plus PSU losses.
> And you don't comment on two to three times the wattage pulled pl1 and 2 so I can see where your at, I'll still be leaving you too it.



The other components don't really consume much more power when under load. RAM doesn't have an idle state, so its power consumption under load is only a watt or two more than at idle. The rest of the system is the same deal.

And the PSU actually gets more efficient at the higher loads so that just makes things worse, or you can consider it basically cancelling out the minor extra power consumption from the other subsystems being under load.  Either way, the fact remains, AMD processors definitely exceed their rated TDP too.  And there is nothing wrong with it.


----------



## TheoneandonlyMrK (Feb 7, 2021)

newtekie1 said:


> The other components don't really consume much more power when under load. RAM doesn't have an idle state, so it's power consumption under load is only a watt or two more than in its idle state. The rest of the system is the same deal.
> 
> And the PSU actually gets more efficient at the higher loads so that just makes things worse, or you can consider it basically cancelling out the minor extra power consumption from the other subsystems being under load.  Either way, the fact remains, AMD processors definitely exceed their rated TDP too.  And there is nothing wrong with it


I disagree with most of your points. For one thing of many, RAM has power-down enabled by default these days.
But regardless, I am out, as I was three times now; we will just have to disagree.


----------



## freeagent (Feb 7, 2021)

Looking at the CPU PPT sensor on my 95W 3600XT shows a max of 120W under a hard load.

At the wall this system pulls about 15W less than my highly clocked 3770K, so about 250W with only a hard CPU load. The 3770K was "only" 84W according to AIDA64; Core Temp said it was in the hundreds of watts at 4700MHz 1.35V. My 3600XT is running one clock, one voltage, like my 3770K.

The PSU calculator says a 9900K requires 30W more than my XT. Everyone is full of shit 

 Not you guys, the people working the numbers..


----------



## Vayra86 (Feb 7, 2021)

freeagent said:


> Looking at the CPU PPT sensor on my 95w 3600XT, shows a max of 120w under a hard load.
> 
> At the wall this system pulls about 15w less then my highly clocked 3770K so about 250 with only a hard CPU load. The 3770K was "only" @ 84w according to Aida64. Core Temp said it was in the 100's of watts @ 4700Mhz 1.35v. My 3600XT is running 1 clock 1 voltage, like my 3770K
> 
> ...


The marked difference of a perspective with historical data and practical experience, is what that is.

That is the same basis I have and use for saying Intel is exceeding the norms of proper info on specsheets, and right now, certainly more so than AMD.


----------



## newtekie1 (Feb 7, 2021)

theoneandonlymrk said:


> I disagree with most of your points,for one thing of many ram has power down enabled by default these days.
> But regardless I am out ,as I was three times now we will just have to disagree



Not system RAM. System RAM just runs at the same speed and voltage all the time, meaning it consumes basically the same under load as at idle.


----------



## TheoneandonlyMrK (Feb 7, 2021)

newtekie1 said:


> Not system RAM. System RAM just runs at the same speed and voltage all the time.  Meaning it consumes basically the same under load as idle.


Err, yes, system RAM has power-down enabled by default on every Ryzen system I've tried.
Seems we both have misconceptions then. And still no comment on Intel using up to 3x the power they market, but all's fair. No sir, a very final goodbye to you.


----------



## Zach_01 (Feb 7, 2021)

freeagent said:


> Looking at the CPU PPT sensor on my 95w 3600XT, shows a max of 120w under a hard load.
> 
> At the wall this system pulls about 15w less then my highly clocked 3770K so about 250 with only a hard CPU load. The 3770K was "only" @ 84w according to Aida64. Core Temp said it was in the 100's of watts @ 4700Mhz 1.35v. My 3600XT is running 1 clock 1 voltage, like my 3770K
> 
> ...


Because your CPU has a PPT limit of 125W by default, and TDP does not describe this...


----------



## newtekie1 (Feb 7, 2021)

theoneandonlymrk said:


> Err yes system ram has power down enabled by default on every Ryzen system I tried.
> Seems we both have misconceptions then and still no comment on intel using upto 3X the power they market using but all's fair , no sir a very final goodbye to you.



And Ryzen RAM Power Down is disabled by default.

Actually I commented on that plenty, Intel processors don't use any more power than they market them using while AMD processors do.


----------



## freeagent (Feb 7, 2021)

Zach_01 said:


> Because you CPU has a PPT limit of 125W by default and TDP is not desrcibing this...


I'm still pretty new. Quite amateurish..


----------



## TheoneandonlyMrK (Feb 7, 2021)

newtekie1 said:


> And Ryzen RAM Power Down is disable by default.
> 
> Actually I commented on that plenty, Intel processors don't use any more power than they market them using while AMD processors do.


Nah, just re-checked: it's Auto, not Disabled or Enabled, by default, so depending on the memory it could be on or off.
And we disagree on point 2: the PL1 and PL2 power use is not widely known to those not at an enthusiast level. That's the point, and the point of this thread.
Not Intel versus AMD.

And regardless of your opinion on it, I still think Intel could do better on disclosure, as do many others.


----------



## RandallFlagg (Feb 7, 2021)

Just leaving some information here...

The only time the Intel rig drew more power was under artificial load like Prime95.  Under gaming, single thread load, normal multi-thread load, and idle the 9900K drew less power than the 3700X.

So if your primary use case is running Prime95 AMD is definitely your best bet.


----------



## newtekie1 (Feb 7, 2021)

theoneandonlymrk said:


> Nah just re checked auto not disabled or enabled by default so depending on memory could be on or off.
> And we disagree on point 2 the pl1 and 2 power use is not widely known to those not of an enthusiast level soo that's the point, and the point of this thread.
> Not Intel's verses AMD.
> 
> And regardless of your opinion on it I still think Intel could do better on disclosure as do many others.



All the boards I've used have it off by default; there isn't even an Auto option, and you have to go like 5 menus deep to even find the setting.  So it likely comes down to a motherboard-by-motherboard basis.  I would guess off is the default on most boards simply because Memory Power Down is known to hurt RAM compatibility, so most motherboard manufacturers would rather just leave it off to avoid the headache.  Plus, it isn't like RAM uses that much power to begin with; four sticks of DDR4 use like 10W.  And the test rig used here at TPU uses an X570 Taichi, which I know for sure from personal experience defaults to having it off.


----------



## Zach_01 (Feb 7, 2021)

freeagent said:


> I'm still pretty new. Quite amateurish..


Nothing wrong with being new to something new... we all are, in front of it!


----------



## cst1992 (Feb 8, 2021)

newtekie1 said:


> The other components don't really consume much more power when under load. RAM doesn't have an idle state, so it's power consumption under load is only a watt or two more than in its idle state. The rest of the system is the same deal.
> 
> And the PSU actually gets more efficient at the higher loads so that just makes things worse, or you can consider it basically cancelling out the minor extra power consumption from the other subsystems being under load.  Either way, the fact remains, AMD processors definitely exceed their rated TDP too.  And there is nothing wrong with it.


Let's consider a point here that we haven't before - GPU power consumption and TDP.
My 3060Ti has a TDP of 200W given on NVIDIA's website. Even under testing during Unigine Valley and Heaven, it didn't exceed that number by more than 1-2%. Only when I adjusted the power limit of the card to 110% (one 8-pin connector meant I couldn't push it past 225W anyway) was I able to draw 220W from the card.

In other words, the TDP is something that I could depend on.
When I built my computer in 2016, I wanted to go for a 750W PSU, but ultimately went for a Gold 650W instead of a Silver/Bronze 750W unit. Still, I got a motherboard with SLI compatibility so that one day I could run two 970s instead of one, plus the CPU with a mild overclock.

Now, if the cards were consuming 175W+ each instead of 145W, and the CPU 125W+ instead of its rated 88W, then I'd have regretted depending on these numbers for my PSU choice.

I get that the 3060Ti is a special case because it's a power-limited card, but still, a piece of hardware should pull close to its rated power consumption; otherwise the whole point of that number is moot.
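That kind of PSU planning can be sketched as a back-of-envelope check. The ~100 W CPU figure (rated 88 W plus a mild overclock), the ~75 W rest-of-system figure, and the 20% headroom margin are assumptions, a common rule of thumb rather than any spec:

```python
# Back-of-envelope PSU sizing like the 2016 build described above:
# sum the expected component draws, add a headroom margin, and check
# the total fits within the PSU's rating.

def psu_ok(psu_w: float, component_draws_w: list[float],
           headroom: float = 0.20) -> bool:
    """True if the PSU covers the summed draws plus a headroom margin."""
    total = sum(component_draws_w)
    return total * (1 + headroom) <= psu_w

# two GTX 970s (145 W each), mildly OC'd CPU (~100 W), ~75 W rest of system
print(psu_ok(650, [145, 145, 100, 75]))  # True: 465 W * 1.2 = 558 W <= 650 W
```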


----------



## Bill_Bright (Feb 8, 2021)

freeagent said:


> I'm still pretty new. Quite amateurish..


Everyone was at some point. Sadly, there are some who forget that fact and sadly, assume everyone should know what they have learned. Or worse, ridicule the newbie for being a newbie and not yet knowing what they have learned.


----------



## RandallFlagg (Feb 8, 2021)

cst1992 said:


> Let's consider a point here that we haven't before - GPU power consumption and TDP.
> My 3060Ti has a TDP of 200W given on NVIDIA's website. Even under testing during Unigine Valley and Heaven, it didn't exceed that number by more than 1-2%. Only when I adjusted the power limit of the card to 110%(1 8-pin connector meant I couldn't push it past 225W anyway), I was able to draw 220W from the card.
> 
> In other words, the TDP is something that I could depend on.
> ...



TDP is not rated max power consumption.  That's the problem with doing DIY builds without understanding what the numbers mean.

If you don't want to dig into and understand what the numbers mean, you should probably buy an OEM rig, or else figure on getting an outsized PSU.   Alienware, for example, will not sell you an RTX 3090 without a 1000W PSU:


----------



## qubit (Feb 8, 2021)

Aquinus said:


> I'm sure people have opinions about me using a Mac as a daily driver.


hmmm... on checking, it's not on the approved list.


----------



## thesmokingman (Feb 8, 2021)

Are you surprised, really? This is from the same team that stuck a chiller under the table and pretended to release a new chip (overclocked), forgetting to mention it was cooled by said chiller.


----------



## freeagent (Feb 8, 2021)

qubit said:


> hmmm... on checking, it's not on the approved list.


I don't mind their phones but I wouldn't buy one of their computers


----------



## qubit (Feb 8, 2021)

freeagent said:


> I don't mind their phones but I wouldn't buy one of their computers


Ditto. I bought an iPhone about a year ago when I got sick of the rampant unpatched security holes in Android that the manufacturers just don't care about. Apple isn't perfect, but at least they actively patch vulnerabilities, and for a good long time, too. Believe me, I didn't buy an iPhone because I got starry-eyed about Apple products, but purely because of the security issues. Android has more features and is more flexible, and I miss that. At least I'm relatively safe, though.


----------



## trickson (Feb 8, 2021)

Let's stay on topic, people. This thing is going around and around.
Clearly some feel Intel is lying about their CPUs. Well, I have some exciting news for everyone...
........................ No one (not Intel nor AMD) is lying about their CPUs or the power they use...............................
First off, they use engineering samples and huge equipment to test with. They (Intel/AMD) have specific, precise equipment to gauge and verify the spec-sheet settings.
If you think someone is lying to you, it is in fact the SOFTWARE. I have found software to be very fallible as of late.
Ryzen and all the Lakes HAVE shocked everyone, they simply have, and I can see this in CPU-Z and other software vs. what the BIOS even has! It's a joke really! 
I see the YouTube reviewers here on TPU utterly shocked, and that is a FACT! 
I do not review, nor do I get free stuff to review, nor do I want to.
I do however see things that do NOT add up, and one of them is this thread.
No, Intel is NOT lying; it is the shit software that you use. Sorry, but you guys need to step it up on the program side!


----------



## unclewebb (Feb 8, 2021)

trickson said:


> it is the shit software that you use


Let's not shoot the messenger. Intel CPUs use an energy counter within the CPU. This counter goes up based on CPU load and speed and what type of software is being run. Monitoring software reads this counter every second, finds out how much energy has been consumed, divides that number by the time interval and reports a power consumption number. All software that is working correctly should end up reporting the same thing. This is not measured power consumption. It is estimated power consumption. The formula that Intel uses to determine how rapidly the energy counter counts up is totally up to them. 
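A minimal sketch of what monitoring tools do with that counter. On Linux the package energy counter is exposed via the RAPL powercap sysfs interface (the path below is typical but may differ per system, and reading it usually needs elevated permissions):

```python
# How monitoring software turns the CPU's cumulative energy counter into
# a watts figure: read the counter twice, then delta energy / delta time.
import time

# Typical Linux RAPL sysfs path for CPU package 0 (may vary; needs perms)
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def power_from_counter(e0_uj: int, e1_uj: int, dt_s: float) -> float:
    """Average watts over the interval: energy delta (uJ -> J) / time."""
    return (e1_uj - e0_uj) / 1e6 / dt_s

def sample_package_watts(interval_s: float = 1.0) -> float:
    """Sample the live counter over an interval (Linux, Intel CPUs)."""
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL) as f:
        e1 = int(f.read())
    return power_from_counter(e0, e1, interval_s)

# e.g. 65,000,000 uJ consumed over 1 second -> 65 W reported
print(power_from_counter(0, 65_000_000, 1.0))  # 65.0
```

As the post says, this is estimated, not measured: the rate at which the counter ticks up is whatever formula the CPU vendor built into the silicon.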

If Intel was unscrupulous, they could make all software report whatever they wanted it to report. I have not seen any evidence that Intel is doing this.  

The 10850K is a power consuming pig when overclocked and running Prime95. At base frequency, where Intel TDP is measured, the 10850K operates well under the 125W TDP rating. That debunks the Intel is lying conspiracy that this thread is based on. New cars do not measure fuel mileage with a brick on the accelerator pedal while going up a big hill and no one complains. Why is everyone so butt hurt that Intel does not document power consumption at full speed while running a torture test? 

If you do not like how Intel rates their CPUs, you can always switch teams and buy an AMD CPU.


----------



## TheoneandonlyMrK (Feb 8, 2021)

unclewebb said:


> Let's not shoot the messenger. Intel CPUs use an energy counter within the CPU. This counter goes up based on CPU load and speed and what type of software is being run. Monitoring software reads this counter every second, finds out how much energy has been consumed, divides that number by the time interval and reports a power consumption number. All software that is working correctly should end up reporting the same thing. This is not measured power consumption. It is estimated power consumption. The formula that Intel uses to determine how rapidly the energy counter counts up is totally up to them.
> 
> If Intel was unscrupulous, they could make all software report whatever they wanted it to report. I have not seen any evidence that Intel is doing this.
> 
> ...


Nah, it's definitely the software   ... just pulling your leg, not ripping it off..


----------



## londiste (Feb 8, 2021)

RandallFlagg said:


> TDP is not rated max power consumption.  That's the problem doing DIY builds and not understanding what the numbers mean.
> 
> If you don't want to have to dig into and understand what the numbers mean, you should probably buy an OEM rig, or else figure on getting an outsized PSU.   Alienware for example will not sell you an RTX 3090 without a 1000W PSU:
> 
> View attachment 187546


For GPUs it absolutely is. GPUs have a power limit set at TDP and they will not consume any more power than that. That has been the case for at least the last 4 generations or so.
Especially in the case of the RTX 3090, there are some caveats around the short power spikes it produces, which have been known to trip some power supplies, so Alienware just wants to be really sure.



unclewebb said:


> Let's not shoot the messenger. Intel CPUs use an energy counter within the CPU. This counter goes up based on CPU load and speed and what type of software is being run. Monitoring software reads this counter every second, finds out how much energy has been consumed, divides that number by the time interval and reports a power consumption number. All software that is working correctly should end up reporting the same thing. This is not measured power consumption. It is estimated power consumption. The formula that Intel uses to determine how rapidly the energy counter counts up is totally up to them.


They use energy counter over a certain period to determine the allowed turbo amount and length. This is based pretty much solely on the power consumed. As a simplified example - every second it spends using less power than TDP, it can spend another second the same amount over TDP and then it gets averaged out over a longer period. CPU load and speed and software have less to do with this, all that simply end up as power consumption factors for determining the power limit. Not just Intel, AMD is doing a variation of the same thing. This also happens far far more frequently than a second. Software loads and shows the same data but with less frequency.
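That banked-budget behavior can be sketched as a toy model. Real CPUs use an exponentially weighted average over a tau window and far finer timesteps; the PL1/PL2 values and one-second steps below are purely illustrative:

```python
# Toy model of the averaged turbo budget described above: time spent
# under the long-term limit (PL1) banks energy budget that can then be
# spent boosting up to the short-term limit (PL2), keeping the window
# average at or below PL1.

def allowed_boost(history_w: list[float], pl1_w: float, pl2_w: float) -> float:
    """Max power for the next step keeping the window average <= PL1."""
    window = len(history_w) + 1
    budget = pl1_w * window - sum(history_w)  # energy headroom in the window
    return max(0.0, min(pl2_w, budget))

# four light seconds under a 125 W PL1 -> full 250 W PL2 boost available
print(allowed_boost([60, 60, 60, 60], 125, 250))      # 250
# four seconds of sustained 250 W boost -> budget exhausted, throttle
print(allowed_boost([250, 250, 250, 250], 125, 250))  # 0.0
```

In the toy model the throttle clamps to zero; a real CPU would drop back toward PL1 rather than stall, but the budgeting idea is the same.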

Of course, when you (or the motherboard manufacturer) raise the power limits out of the way, none of this matters.
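A toy sketch of that budget logic, with made-up numbers (real firmware uses an exponentially weighted moving average with a vendor-set time constant, so treat this as an illustration, not Intel's actual algorithm):

```python
# Toy model of PL1/PL2/tau-style turbo power limiting.
PL1 = 125.0   # sustained limit ("TDP"), watts
PL2 = 250.0   # short-term turbo limit, watts
TAU = 28.0    # averaging window, seconds

def allowed_power(avg_power):
    """While the moving average sits under PL1, the chip may burst to
    PL2; once the average reaches PL1, it must fall back to PL1."""
    return PL2 if avg_power < PL1 else PL1

def simulate(demand, dt=1.0):
    """Run a load that *wants* `demand` watts each step; return the
    power actually granted per step."""
    avg = 0.0
    granted = []
    for want in demand:
        p = min(want, allowed_power(avg))
        granted.append(p)
        alpha = dt / TAU                     # EWMA update over tau
        avg = (1 - alpha) * avg + alpha * p
    return granted

# A heavy all-core load asking for 250 W for 60 seconds: it bursts to
# PL2 early, then settles to PL1 once the average catches up.
trace = simulate([250.0] * 60)
```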


----------



## newtekie1 (Feb 8, 2021)

cst1992 said:


> Let's consider a point here that we haven't before - GPU power consumption and TDP.
> My 3060Ti has a TDP of 200W given on NVIDIA's website. Even under testing during Unigine Valley and Heaven, it didn't exceed that number by more than 1-2%. Only when I adjusted the power limit of the card to 110% (one 8-pin connector meant I couldn't push it past 225W anyway) was I able to draw 220W from the card.
> 
> In other words, the TDP is something that I could depend on.
> ...



You are still failing to understand that TDP is not a power consumption number given by Intel. And NVIDIA doesn't give a TDP number for their current-gen cards to the public: the spec page for your 3060 Ti gives a Total Board Power number, which actually is a maximum power consumption figure. Intel doesn't publish power consumption numbers; TDP isn't one.


----------



## cst1992 (Feb 8, 2021)

I know that, but at least in the case of the card I know how much power it'll consume, so I can plan my build accordingly.
Even PCPartPicker uses the TDP value in their PSU calculations (and yes, you're right - the NVIDIA value is board power draw).
IMO it's just a shitty marketing move to tie the rated power to base frequency and some arbitrary lab-only load and put that in the spec sheets.
I guess we'll have to turn to reviews for PSU calculations...
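For what it's worth, sizing from review-measured peaks instead of spec-sheet TDP can be a simple back-of-the-envelope calculation (all figures below are hypothetical, and the helper is mine, not PCPartPicker's):

```python
# Rough PSU sizing from *measured* peak draws (take the peaks from
# reviews, not from TDP on the spec sheet). Numbers are illustrative.

def psu_recommendation(component_peaks_w, other_w=75, headroom=1.3):
    """Sum measured peak draws, add a flat allowance for the board,
    RAM, drives and fans, then add headroom for transient spikes and
    PSU aging."""
    return (sum(component_peaks_w) + other_w) * headroom

# e.g. a CPU measured at ~215 W peak plus a GPU at ~220 W:
rec = psu_recommendation([215, 220])
# (215 + 220 + 75) * 1.3 = 663 W -> shop for a quality 650-750 W unit
```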


----------



## RandallFlagg (Feb 8, 2021)

cst1992 said:


> I know that, but at least I know in the case of the card how much power it'll consume, so that I plan my build accordingly.
> Even PCPartPicker uses the TDP value in their PSU calculations(and yes, you're right - the NVIDIA value is board power draw).
> IMO it's just a shitty marketing move to define power consumption to base frequency and some arbitrary lab-only load and use that in spec sheets.
> I guess we'll have to turn to reviews for PSU calculations...



Most people do not worry because they do not build their own rig, and if they do, they just get a manufacturer-recommended-spec PSU.  

I left the lights on in my bathroom this morning.  There are 8 x 60W incandescent bulbs in there.  By my calc, that's 4 hrs x 60W x 8 bulbs = 1,920 Wh I wasted.  That is about like the little 17W max-load difference between Intel and AMD running Prime95 for 113 hours straight, or 14.1 eight-hour days.
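The arithmetic, spelled out for anyone checking:

```python
# Wasted bulb energy vs. a CPU's extra max-load draw.
bulbs, watts, hours = 8, 60, 4
wasted_wh = bulbs * watts * hours      # 1920 Wh left on this morning

delta_w = 17                           # max-load gap between the CPUs
equiv_hours = wasted_wh / delta_w      # ~113 hours of Prime95
equiv_workdays = equiv_hours / 8       # ~14.1 eight-hour days
```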

This is not something sane people worry about.


----------



## cst1992 (Feb 9, 2021)

RandallFlagg said:


> Most people do not worry because they do not build their own rig, and if they do they just get a mfr rec spec psu.
> 
> I left the lights on in my bathroom this morning.  There are 8 x 60W incandescent bulbs in there.  By my calc, thats 4hrs x 60W x 8 bulbs = 1920W-Hr I wasted.  That is about like the little 17W max load difference between Intel and AMD running Prime95 for 113 hours straight or 14.1 8-hour days.
> 
> This is not something sane people worry about.


I just don't know what to say to you right now.


----------



## Caring1 (Feb 10, 2021)

RandallFlagg said:


> This is not something sane people worry about.


And yet you took the time to not only think about it, but also do the math.


----------



## ThrashZone (Feb 10, 2021)

cst1992 said:


> I know that, but at least I know in the case of the card how much power it'll consume, so that I plan my build accordingly.
> Even PCPartPicker uses the TDP value in their PSU calculations(and yes, you're right - the NVIDIA value is board power draw).
> IMO it's just a shitty marketing move to define power consumption to base frequency and some arbitrary lab-only load and use that in spec sheets.
> I guess we'll have to turn to reviews for PSU calculations...


Hi,
Doubt many would believe it and would just say it's all 200W of overkill, but here's Intel's PSU chart


----------



## 95Viper (Feb 10, 2021)

Stop the fanboy BS.
Stay on topic.
And, if you wish to have a personal argument with someone... take it to PMs; and, not in the thread.

Thank You and Have a Great Discussion.


----------



## trickson (Feb 10, 2021)

Honestly, no one really buys a CPU based on its TDP ("no one" meaning the general public). And it's been said Intel isn't lying, nor is AMD.
Honestly, I no longer see the point here other than to bash a company (which is against the rules, I think)... maybe not, maybe I am wrong.


----------



## Bill_Bright (Feb 10, 2021)

RandallFlagg said:


> Most people do not worry because they do not build their own rig, and if they do they just get a mfr rec spec psu.


I agree that most people don't build their own and instead buy factory-built systems, but I disagree that most builders "just get a manufacturer recommended spec PSU". Frankly, I don't even know what that means. 

What manufacturer? Motherboard? GPU? CPU? PSU?

In an attempt to heed 95Viper's guidance and get back on track (this thread is about Intel), I looked at two Intel and two AMD processors. No PSU recommendations there. And how could they? They don't know which motherboard or in particular, which graphics solution (plus all the other devices) that PSU will need to support.  

I would contend while every self-builder was a first time builder at some point, most have more than one build (or major upgrade) under their belts and have learned a thing or two about picking brands and sizing up PSUs - either by doing their homework with their first build, or if not, learning from their first build mistakes. 

I've been helping folks research parts for many years. Very few visit the actual PSU makers sites. So again, not sure what manufacturer you speak of. I rarely visit PSU makers sites when researching PSUs. I visit review sites and retailers. I want the facts, not the marketing fluff from the maker. And if I even look at the TDP, it is to get an idea of my cooling requirements, not PSU size. 

If the builder knows enough to pick a PSU brand and use their site to determine a recommended size, that's not a bad thing. But if they know enough to build their own computer, chances are the majority are going to do their homework and seek out advice for their first build or two until they get some experience under their belts. 

NO DOUBT there are a few builders out there who fail to do their homework and simply buy the cheapest PSU they can find. Typically they simply guess at the size or arbitrarily pull a number out of thin air (650W is pretty popular). But that certainly is not "most" self-builders.


----------



## kapone32 (Feb 10, 2021)

RandallFlagg said:


> Just leaving some information here...
> 
> The only time the Intel rig drew more power was under artificial load like Prime95.  Under gaming, single thread load, normal multi-thread load, and idle the 9900K drew less power than the 3700X.
> 
> ...


What about the 10900K? Why do Z590 boards have VRMs that rival Threadripper's? Yesterday on The Full Nerd a question was asked: would a 1200-watt PSU be enough to run two 3090s and a 10900K? The answer was non-committal. I have run two Vega 64s with a 2920X on my 1200-watt PSU with no concern.


----------



## freeagent (Feb 10, 2021)

^^ A TR setup like that uses 87% of a 750W PSU at 94% efficiency; on a 1200W PSU that is a 55% load at 94.7% efficiency.

With no OC.

Only because I am looking at PSUs right now..
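Spelling those percentages out (same numbers as above, just as arithmetic):

```python
# System draw implied by the calculator, and the load it puts on a
# bigger unit.
draw = 0.87 * 750          # 652.5 W pulled from the 750 W PSU
load_1200 = draw / 1200    # ~0.54, i.e. roughly a 55% load on 1200 W
```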


----------



## kapone32 (Feb 10, 2021)

freeagent said:


> ^^ A setup like that TR uses 87% of a 750w PSU for 94% efficiency, on a 1200w PSU that is a 55% load with 94.7% efficiency
> 
> With No OC
> 
> Only because I am looking at PSU's right now..


Well, I seriously doubt that a 750-watt PSU could handle that load continuously, as 750W is just a burst rating for plenty of PSUs.


----------



## trickson (Feb 10, 2021)

Bill_Bright said:


> I agree that most people don't build their own and instead buy factory built systems, I disagree that most builders "just get a manufacturer recommended spec PSU". Frankly, I don't even know what that means.
> 
> What manufacturer? Motherboard? GPU? CPU? PSU?


I think the person meant the manufacturer of the prebuilt computer.
HP, Dell and the like all use their own PSU units unless the buyer specifies the PSU. 
12 pages in and still haven't seen a thing about the lie Intel is pimping.


----------



## Bill_Bright (Feb 10, 2021)

trickson said:


> I think the person meant the manufacture of the prebuilt computer.


But they are not going to recommend a PSU. They are just going to include what they have decided is needed. Even if you choose to customize your Dell, for example, you might be able to swap in a different CPU or add more RAM, but you are still going to get the PSU they decide you need. 

Pretty much the only time you can truly select your own PSU is when you have a local shop custom-build it for you. The bigger the computer brand (Dell, HP, Acer, Lenovo), the fewer custom options you are offered.


----------



## kapone32 (Feb 10, 2021)

Bill_Bright said:


> But they are not going to recommend a PSU. They are just going to include what they have decided is needed. Even if you choose to customize your Dell, for example, you might be able to swap in a different CPU or add more RAM, but you are still going to get the PSU they decide you need.
> 
> Pretty much the only time you can truly select your own PSU is if you have a local shop custom build it for you. The bigger the computer brand (Dell, HP, Acer, Lenovo) the few custom options you are offered.


The PSU is usually where the most profit lies for the OEM.


----------



## Bill_Bright (Feb 10, 2021)

kapone32 said:


> The PSU is usually where the most profit lies for the OEM.


Not sure how that applies to my comment - which was about customers having a choice, or rather limited or no choice. 

You may be right, but I'm pretty sure the reason PSUs might provide their largest profit margin (at least by percentage) is simply that (1) they can promise the OEM PSU maker to buy a million or two PSUs in the coming year, and then demand and get them at super deep volume discounts; and (2) they can use that same model PSU in several different model PCs. 

For example, 3 different model computers might require 3 different motherboards, 3 different CPUs, and 3 different cases. But the same model PSU could be used in all 3.


----------



## kapone32 (Feb 10, 2021)

Bill_Bright said:


> Not sure how that applies to my comment - which was about customers having a choice, or rather limited or no choice.
> 
> You may be right but pretty sure I am correct to say the reason PSUs might provide their largest profit margin (at least by percentage) would simply be (1) because they can promise the OEM PSU maker to buy a million or two PSUs in the coming year, then demand and get them at super deep volume discounts. And then (2), because they can use that same model PSU in several different model PCs.
> 
> For example, 3 different model computers might require 3 different motherboards, 3 different CPUs, and 3 different cases. But the same model PSU could be used in all 3.


I was supporting your point. You expanded on my thought.


----------



## trickson (Feb 10, 2021)

This thread is like watching CNN or MSNBC.


----------



## freeagent (Feb 10, 2021)

kapone32 said:


> Well, I seriously doubt that a 750-watt PSU could handle that load continuously, as 750W is just a burst rating for plenty of PSUs.


That's why I laughed; I was checking out the BeQuiet PSU calculator. Seemed a bit optimistic..

It just goes to show even the pros don't know what they are talking about when it comes to power.. a calculator said a 9900K would only be 30W more than my current one.

In the end, no one will tell you how they formulate their equation, and you should just buy the biggest cooler you can because everyone is lying anyway, because 255W is the new 65W.. here's a shitty piece of aluminum and some screws, have fun


----------



## cst1992 (Feb 10, 2021)

freeagent said:


> just buy the biggest cooler you can


Not an easy choice, considering larger coolers are much more expensive than smaller ones. The relationship between price and cooling capacity isn't linear.


----------



## Bill_Bright (Feb 10, 2021)

cst1992 said:


> Not an easy choice, considering larger coolers are much more expensive than smaller ones.


It is not just about price. Many cases limit the height of the coolers they can support. And I am not talking about "slim" cases either. Some larger coolers are very tall. On some motherboards, larger coolers could interfere with RAM too. And perhaps larger graphics cards. 



cst1992 said:


> The price increase and cooling capacity increase is not a linear relationship.


Setting price aside, bigger does not necessarily mean better cooling efficiency. The number, size and shape of the fins matter. As does the material used in the fins and the baseplate where the heatsink makes contact with the die. Then not all fans are created equal either.


----------



## newtekie1 (Feb 10, 2021)

kapone32 said:


> Well, I seriously doubt that a 750-watt PSU could handle that load continuously, as 750W is just a burst rating for plenty of PSUs.



No, a good PSU should be able to handle full load for long periods of time. The days of PSU manufacturers listing peak numbers instead of continuous ratings are pretty much gone, at least for the good manufacturers' PSUs, thanks to the increase in PSU reviews.



cst1992 said:


> Not an easy choice, considering larger coolers are much more expensive than smaller ones. The price increase and cooling capacity increase is not a linear relationship.


I mean, honestly, not really.  You can get a 92mm tower cooler for about $20 that will handle pretty much any Intel processor at stock settings and fit in pretty much any normal-width case. The generic extruded-aluminum coolers are like $15, so not really a big difference.


----------



## Mouth of Sauron (Feb 11, 2021)

To clarify, my point is basically that the peak increase over TDP isn't important (and that has been my opinion for quite a long time, as said).

Typical desktop CPU draw has been around 90-100W since, well, forever. The more recent 'gimmicks' that allow the CPU to draw additional power for small speed gains mean little in overall power usage. Chips are also pretty much pushed to the limit already, which is why undervolting actually became a thing.


----------



## Bill_Bright (Feb 11, 2021)

Mouth of Sauron said:


> and therefore undervolting actually became a thing.


Undervolting, underclocking, and other similar techniques and "gimmicks" to reduce heat production and buildup in personal computers have been around for nearly 20 years that I know of, if not longer. It has only recently become "a thing" because the kids of today finally started to become aware of the problems of excessive heat in their gaming rigs, and because manufacturers have learned how to "market" undervolting as a "feature" in their products. 

Early computer enthusiasts, folks who have been around awhile, were some of the first to build custom PC PVRs and HTPCs (home theater PCs) - computers where "silent running" was an absolute must, so totally "passive" (no fan) cooling was essential. Undervolting and underclocking were commonly done to ensure our fanless systems remained properly cooled without any noisy fans or water pumps making a racket in the background. 

And it wasn't child's play like it is today where you can simply enter the BIOS Setup Menu or run a little program, change a setting, reboot and be done. Or where if things go wrong, you just change the setting back or run a little recovery app, reboot and be good to go. Back in the day you had to physically modify the motherboard by tracing circuits and chasing voltages, cutting runs and soldering in jumpers (without schematics, BTW) - then say a couple Hail Mary's, cross your fingers and toes, connect power, boot and pray everything doesn't go up in smoke. 


Mouth of Sauron said:


> They are also pretty much pushed to the limit


No they aren't. If they were pushed to the limit, there would be no "Turbo" modes. TDP would not be stated at "base" levels. There would be no such thing as overclocking because there would be no headroom to allow it. 

What is "a thing" these days is manufacturers have made the process of implementing undervolting "a thing". And they have learned how to market that feature. But undervolting, as a method to reduce heat and increase efficiency, has been around almost as long as Ben Franklin and his key and kite.


----------



## trickson (Feb 11, 2021)

I personally think they are lying to us all.
There is no mention of AMD having "unlocked" multipliers anymore; it somehow just turned into this "Boost", which is IMHO the absolute TDP of the chip.
First off, the boost itself is fine; it works like AMD says.
But you are NOT going to be able to hold that clock as a manual OC (unless you want a bricked CPU). If you really want 4.4GHz on a Ryzen CPU you have to add Vcore, and this chip doesn't like that. This chip wants less power and fewer MHz; once it reaches max turbo boost it gets really HOT really fast, and the cooling can NOT keep up, no matter what air cooler you have.
I took the advice of others that running at 4.4GHz with 1.4 Vcore would ultimately KILL this chip, and any other Ryzen CPU.
I see this now in the way this CPU behaves with Vcore and heat. Backing off to absolute stock settings, I see wildly swinging voltages, clocks and temps; it's one of the most erratic CPU lines I have ever seen. No wonder everyone is caught up and spinning their wheels! You can not OC the chip without radically changing voltages, and that changes the temps so radically that the cooler can't manage the heat transfer fast enough, so the chip gets damaged.
I am not risking it on any of my CPUs until AMD officially announces that we can OC once again, like with the FX Black Editions.

So yes, in conclusion, I would have to say that YES, Intel and AMD are both lying about a lot of things!


----------



## 80-watt Hamster (Feb 12, 2021)

I think anyone still following this thread, if they haven't already done so, should look at this post by Zach_01 for Intel's own summary of TDP for their chips, as well as watch (as in the whole thing; I know it's long-ish) the GN video on AMD TDP linked a couple posts later.

Things we've learned over the course of this thread (YMMV):

- TDP does not mean, and has never meant, maximum power consumption
- It can, however, resemble average power consumption at base frequency, particularly with Intel
- Intel and AMD calculate TDP differently, in ways that don't necessarily produce a useful value for an end user
- Modern turbo and boost strategies can push power consumption well past TDP for short periods
- Certain computational loads can also drive it higher over longer periods
- Overclocking completely obviates TDP as a useful value

My takeaway is this: neither manufacturer is lying about TDP, AFAICT.  It's more that Thermal Design Power doesn't actually mean what it sounds like it should, and we DIY-ers latch onto it because it's all there is (outside of reviews and such, of course).  Something more meaningful would be nice, but I'm not sure there's a compelling reason for either company to provide it.  If they do, it certainly won't be on behalf of a "handful" of enthusiasts on forums.  I mean, if even the cooler manufacturers are unhappy with it, and the chipmakers won't provide something better for them, it's probably a lost cause.


----------



## trickson (Feb 12, 2021)

80-watt Hamster said:


> I think anyone still following this thread, if they haven't already done so, should look at this post by Zach_01 for Intel's own summary of TDP for their chips, as well as watch (as in the whole thing; I know it's long-ish) the GN video on AMD TDP linked a couple posts later.
> 
> Things we've learned over the course of this thread (YMMV):
> 
> ...


Agreed. 
What a perfect way to end a thread.


----------



## xenocide (Feb 12, 2021)

kapone32 said:


> What about the 10900K? Why do Z590 boards have VRMs that rival Threadripper's? Yesterday on The Full Nerd a question was asked: would a 1200-watt PSU be enough to run two 3090s and a 10900K? The answer was non-committal. I have run two Vega 64s with a 2920X on my 1200-watt PSU with no concern.


Yes, I'm sure it's the CPU and not the fact that the GPUs in those two scenarios are wildly different. According to TPU's own numbers, a 3090 will draw 350-450W, whereas a Vega 64 will draw 290-310W. This is why we don't compare completely different setups.


----------

