# Be careful when recommending B560 motherboards to novice builders (HWUB)



## Valantar (May 12, 2021)

HWUnboxed just posted a pretty interesting video on how OOB performance varies across B560 boards with 65W 11th-gen Intel CPUs. While all of this is within Intel spec, performance in sustained all-core workloads varied by over 40% on the 11700 and over 30% on the 11400F. Gaming performance was more even, but still varied by double-digit percentages. The gist of it is that higher-end (~$200) B560 boards run the chips without active power limits OOB, while cheaper boards enforce them strictly - which in turn introduces variable performance due to variations in voltage tuning etc. Even between the cheaper B560 boards with enforced power limits there were notable performance differences. These boards do allow for disabling power limits, though for two of the three boards tested this resulted in VRM power throttling on the 11700, causing intermittent _hard_ throttling (below spec, 800MHz for one, 2GHz for the other). That's going to give a juddery and terrible experience, and alleviating it requires adding more VRM cooling (if at all possible).

So, given just how excellent the value proposition of this platform is overall, it's definitely worth ensuring that buyers know what they're getting into. Most CPU reviews, even of the 11400, are likely done on unlocked Z590 platforms, so users can potentially see significantly lower performance than expected.

They promised a larger B560 board round-up focusing on this, which should provide a pretty decent starting point for making recommendations.


----------



## Zyll Goliat (May 12, 2021)

Yeah.... That's not good at all, Intel


----------



## W1zzard (May 12, 2021)

Valantar said:


> The gist of it is that higher end (~$200) B560 boards run the chips without active power limits OOB, while cheaper boards enforce them strictly


Surprised that this is news to anyone


----------



## Valantar (May 12, 2021)

W1zzard said:


> Surprised that this is news to anyone


Whether it's news or not is one thing, the fact that it brings 30-40% performance differences between boards with nominally similar features is definitely not normal. Intel's TDP calculations really haven't withstood the change from 4c8t to 8c16t or higher very well at all. That's the main change after all, as the new 65W CPUs have dramatically lower base clocks than previous lower core count chips, leading to major performance drops for boards that don't allow them to turbo indefinitely. It seems that as more time passes, the range of what is "in spec" just keeps increasing. As they point out in the video, a 7700K had a 17% delta between base and boost clocks, while the 11700K has a 96% delta between base and turbo. If the range of possible "in-spec" performance is literally 1x-2x, that spec is pretty meaningless.
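To make that spread concrete, here's a trivial sketch (the clock figures are hypothetical, picked only to reproduce the deltas quoted above, not taken from spec sheets):

```python
def turbo_delta(base_ghz: float, turbo_ghz: float) -> float:
    """Percentage uplift of the turbo clock over the base clock."""
    return (turbo_ghz - base_ghz) / base_ghz * 100

# Hypothetical clocks chosen only to reproduce the quoted deltas:
narrow = turbo_delta(4.2, 4.9)  # ~17%: worst-case "in spec" is close to best-case
wide = turbo_delta(2.5, 4.9)    # ~96%: "in spec" spans nearly a 1x-2x range
```

The second case is the problem: when base and turbo are almost a factor of two apart, two boards can both be "in spec" while delivering wildly different sustained performance.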


----------



## W1zzard (May 12, 2021)

Valantar said:


> is definitely not normal


This IS definitely normal for Intel. Check my non-K CPU reviews over the last generations. What you want is the most power-limit castrated CPU at best pricing, and then give it unlimited power limit. No need for OC at all

The underlying reason for those lame TDPs is that some (shitty) OEM systems are designed to only handle 65 W in terms of cooling and power.


----------



## Valantar (May 12, 2021)

W1zzard said:


> This IS definitely normal for Intel. Check my non-K CPU reviews over the last generations. What you want is the most power-limit castrated CPU at best pricing, and then give it unlimited power limit. No need for OC at all


It's true that you've been covering it (since 10th gen it seems - there are no 9th gen 65W parts reviewed on TPU that I can find, and the i5-8500 review makes no mention of adjusting power limits from what I can tell), but your i5-10400F review shows a 3.5% uplift in Cinebench MT and a 2.1% increase in gaming at 720p - including a 3% BCLK adjustment. So it hardly matches the increases seen here, that's for sure. The 10500 bumps that to 7% in CB MT, but again ... that's not 30, let alone 40.

One relevant question here though is whether your CB testing is a single run, an average of several, or a single run after a given warm-up period of looped runs. If it's just a single run, that would go some way towards showing why your tests show relatively minimal changes compared to HWUB's results.



W1zzard said:


> The underlying reason for those lame TDPs is that some (shitty) OEM systems are designed to only handle 65 W in terms of cooling and power.


As for this ... that's a consequence, not a reason. The reason for the 65W TDP is history and PR. Sure, OEM expectations based on this history are a large part of why it's maintained, but if Intel changed their TDPs, OEMs would have no choice but to adjust accordingly (and there would likely be cTDP-down modes for SFF OEM desktops). IMO the main reason for Intel not changing TDPs to keep sensible base-to-boost ratios for their chips is that this would look _really_ embarrassing for them compared to Ryzen. They were likely holding out on making changes for a few years in hopes that 10nm and 7nm would pan out, but when that didn't happen they've just stuck with it to not lose face. Pinning Intel's design cop-outs on their OEM partners sounds like a misrepresentation of the power balance in those relations.


----------



## newtekie1 (May 12, 2021)

W1zzard said:


> This IS definitely normal for Intel. Check my non-K CPU reviews over the last generations. What you want is the most power-limit castrated CPU at best pricing, and then give it unlimited power limit. No need for OC at all
> 
> The underlying reason for those lame TDPs is that some (shitty) OEM systems are designed to only handle 65 W in terms of cooling and power.


It isn't like this is only an Intel problem. There are some B550 motherboards out there that can't handle a fully loaded 5800X either.

I think every major manufacturer is guilty of putting out an absolute garbage-VRM budget B550/B560 motherboard. We've moved into the era where, even if you are just going to run the system "at stock", spending even a little bit more on the motherboard can make a difference.


----------



## Chomiq (May 12, 2021)

News:


----------



## Valantar (May 12, 2021)

newtekie1 said:


> It isn't like this is only an Intel problem. There are some B550 motherboards out there that can't handle a fully loaded 5800X either.
> 
> I think every major manufacturer is guilty of putting out an absolute garbage-VRM budget B550/B560 motherboard. We've moved into the era where, even if you are just going to run the system "at stock", spending even a little bit more on the motherboard can make a difference.


What's a "fully loaded" 5800X? I've never seen AMD motherboards deviate from stock power limits the way Intel boards often do, at least at stock. But then I haven't been reading _that_ many motherboard reviews. A 5800X boosts up to ...138W or something? Not quite the 144 of a 5900X and 5950X at least. Are there boards that can't handle this stock boost and will throttle due to VRM thermals?



Chomiq said:


> News:


This is a rather different angle though. Those videos cover K SKUs and Z-series motherboards. This is about supposedly locked-down non-K 65W SKUs on non-OC chipset boards. One should be able to expect relatively consistent performance given these things, but ... well, apparently not.


----------



## newtekie1 (May 21, 2021)

Valantar said:


> What's a "fully loaded" 5800X? I've never seen AMD motherboards deviate from stock power limits the way Intel boards often do, at least at stock. But then I haven't been reading _that_ many motherboard reviews. A 5800X boosts up to ...138W or something? Not quite the 144 of a 5900X and 5950X at least. Are there boards that can't handle this stock boost and will throttle due to VRM thermals?


Simple, you pop a 5800X in, turn on PBO, and load it up fully with whatever load-testing program you prefer (Prime95, OCCT, Linpack, etc.). Yes, it will power throttle on some B550 boards.

I'm not sure where you are getting your power numbers, but a 5800X with PBO OFF (which is AFAIK how the chips are tested here at TPU) will pull 160w. And a 2700X is rated at 105w and will pull 195w (that's more of a deviation than the 10900K and 11900K). The 5950X is also rated at 105w but will pull 195w as well (again, more of a deviation than Intel). Sorry, AMD isn't squeaky clean here.

If you put any of those processors in an AsRock B550M-HDV or an MSI B550M-A Pro, for example, you're going to see VRM throttling.

The fact is, it doesn't matter what platform you are on: if you buy a shit budget board and put a processor in it that pulls 150w+ under load, you're going to have a bad time. The days of "the motherboard doesn't make a performance difference" are long gone.

Plus, you can get a decent H570 motherboard for $130-140, and B560 boards start at about $110 (the "budget" AsRock B560 they talk about in the original video is $115). I honestly struggle to see why anyone would even consider B560 when the leap in price to H570 boards is so small. And really, the same applies to B550 and X570. Any decent B550 is going to cost you $110-115, and a good X570 is $140. I don't see the point in saving $20-30 on the part that is the central nervous system of the computer by going down an entire product tier.


----------



## tabascosauz (May 21, 2021)

newtekie1 said:


> Simple, you pop a 5800X in, turn on PBO, and load it up fully with whatever load-testing program you prefer (Prime95, OCCT, Linpack, etc.). Yes, it will power throttle on some B550 boards.
> 
> I'm not sure where you are getting your power numbers, but a 5800X with PBO OFF (which is AFAIK how the chips are tested here at TPU) will pull 160w. And a 2700X is rated at 105w and will pull 195w (that's more of a deviation than the 10900K and 11900K). The 5950X is also rated at 105w but will pull 195w as well (again, more of a deviation than Intel). Sorry, AMD isn't squeaky clean here.
> 
> ...



No, none of those CPUs pull over 150W stock. Fine print in the upper right corner of any of the TPU power graphs - *Whole System*, not CPU. None of those Vermeer chips are pulling beyond ~145W worst case under stock settings. 142W PPT is 142W PPT. Sometimes you can account for a ~1-2W deviation either way depending on the board, but PPT/TDC/EDC is always the law until you set a static OC.
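As a rough sketch of the "PPT is the law" point (the 142 W figure is the familiar stock PPT for 105 W TDP parts; the raised limit is arbitrary, not a measured value):

```python
def package_power(requested_w: float, ppt_w: float = 142.0) -> float:
    """At stock, AMD's PPT acts as a hard package-power ceiling: boost is
    clipped so the package never exceeds it. PBO raises (or removes) that
    ceiling, which is why PBO results shouldn't be read as stock numbers."""
    return min(requested_w, ppt_w)

# Stock: a heavy all-core load is clamped to PPT.
assert package_power(180.0) == 142.0
# PBO-style raised limit: the same load is no longer clipped.
assert package_power(180.0, ppt_w=1000.0) == 180.0
```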

That said, I agree that none of this 11th-gen power limit stuff is new, and it's a little overblown.

The good boards are good boards that would be recommended, regardless of what platform it's on.
The three boards that suck, well, suck, and wouldn't be recommended anyways, regardless of what platform it's on.


----------



## 80-watt Hamster (May 21, 2021)

newtekie1 said:


> Plus you can get a decent H570 motherboard for $130-140, and B560 boards start at about $110(the "budget" AsRock B560 they talk about in the original video is $115).  I honestly struggle to see why anyone would even consider B560 when the leap in price to H570 boards is so small.



Is it even statistically likely that an arbitrary H570 board will have better power delivery than an arbitrary B560?  HU's been doing quite a bit on the VRM side lately, and one takeaway seems to be that robust power and board price are not directly correlated.

Edit: spelling


----------



## Frick (May 21, 2021)

W1zzard said:


> Surprised that this is news to anyone



It's news to me for sure. Historically, and up to quite recently apparently, any motherboard has been fine for "average" use; beyond that it was about features and overclocking capabilities. Boards not being able to actually fully handle supported CPUs is ... pretty bad, IMO.


tabascosauz said:


> The good boards are good boards that would be recommended, regardless of what platform it's on.
> The three boards that suck, well, suck, and wouldn't be recommended anyways, regardless of what platform it's on.



If you lose double digit performance because you bought "a sucky motherboard" I would absolutely argue the specs for the base level should change. In the past you could pair a high end CPU with a low end motherboard and the CPU would perform as it should. This is the normalcy to strive for.


----------



## X71200 (May 21, 2021)

There's a military green PCB Tomahawk on that budget Intel end, but it's been out of stock lately. Costs around 130, and seems to have a decent VRM. 

Historically, there have been some boards that sucked awfully even at stock with certain CPUs. This happened quite a bit with old MSI boards on the Piledriver platform. They had crap VRMs and the CPUs sucked power to no end - some even ended up popping the VRM. There was a huge thread on OCN some 10 years ago, when Piledriver was still relevant, about this. You could still dig up specs on those Bulldozer platforms, not that it's of any relevance today.


----------



## W1zzard (May 22, 2021)

Frick said:


> It's news to me for sure. Historically, and up to quite recently apparently, any motherboard has been fine for "average" use; beyond that it was about features and overclocking capabilities. Boards not being able to actually fully handle supported CPUs is ... pretty bad, IMO.


It is still like that. The difference is that some boards run CPUs out of spec, at default settings


----------



## Valantar (May 22, 2021)

W1zzard said:


> It is still like that. The difference is that some boards run CPUs out of spec, at default settings


You're kind of missing a major point here though: the span of what constitutes 'in spec' has grown massively over the past 3-4 generations. Which in turn opens the door for this quasi-overclocking through unlocked power limits, but also means that a chip can run at anything from, say, 2.5 to 4.6GHz (no, I didn't bother looking up precise numbers rn) and everything will still be 'in spec'. And Intel is in turn encouraging this through not enforcing their power limit specs beyond ensuring the baseline is met. MCE on high end SKUs and Z-series boards was already confusing, this is now propagating down the stack.

Of course it's also a major change from the 4c8t generations where base clock was typically not something you actually saw in practice, with most chips running within the given power limit while still exceeding base clocks noticeably. This is of course down to Intel (impressively, but still) stretching the usefulness of their 14nm process to stay competitive against much more efficient architectures and nodes.
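As a very rough sketch of the mechanism being stretched here (real boards track an exponentially weighted power average, and the PL2/tau figures below are illustrative, not Intel's official per-SKU values):

```python
def package_limit(t_seconds: float, pl1: float = 65.0,
                  pl2: float = 224.0, tau: float = 56.0) -> float:
    """Simplified PL1/PL2/tau model: the chip may draw up to PL2 for
    roughly tau seconds after a load starts, then falls back to PL1.
    Boards that ship with 'power limits removed' effectively push tau
    (and PL1) so high that the fallback never happens."""
    return pl2 if t_seconds < tau else pl1

# Strict board: half a minute into an all-core render there's still budget...
assert package_limit(30.0) == 224.0
# ...but two minutes in, the chip is pinned at its 65 W PL1.
assert package_limit(120.0) == 65.0
# "Unlocked" board: PL2 is held indefinitely.
assert package_limit(120.0, tau=float("inf")) == 224.0
```

Both behaviors count as "in spec", which is exactly why the same CPU benchmarks so differently across boards.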



newtekie1 said:


> Simple, you pop a 5800X in, turn on PBO, and load it up fully with whatever load testing program you prefer(Prime95, OCCT, Linpack, etc.). Yes, it will power throttle in some B550 boards.


So, in other words, you enable an auto OC and run a power virus. You see how that is no longer stock, right? Enabling PBO isn't stock. It's an available option in the BIOS. Hardly comparable to the issue at hand here.

As for the power numbers cited here, they're pure nonsense. PPT is strictly enforced unless settings are changed manually, and constitutes a hard package power limit. Either show a source demonstrating otherwise or stop making misleading comparisons, please.


----------



## W1zzard (May 22, 2021)

Valantar said:


> You're kind of missing a major point here though: the span of what constitutes 'in spec' has grown massively over the past 3-4 generations. Which in turn opens the door for this quasi-overclocking through unlocked power limits, but also means that a chip can run at anything from, say, 2.5 to 4.6GHz (no, I didn't bother looking up precise numbers rn) and everything will still be 'in spec'. And Intel is in turn encouraging this through not enforcing their power limit specs beyond ensuring the baseline is met. MCE on high end SKUs and Z-series boards was already confusing, this is now propagating down the stack.


You are 100% correct. I still think it's really bad to expect (want?) out of spec operation out of the box, from any product that you buy. What does that even do to warranty? Legally you press the power switch and the warranty is gone


----------



## Valantar (May 22, 2021)

W1zzard said:


> You are 100% correct. I still think it's really bad to expect (want?) out of spec operation out of the box, from any product that you buy. What does that even do to warranty?


Yeah, it's a complete mess. Defining usable warranty terms within a strictly controlled multi-component system is difficult enough. When specs instead become vague guidelines that nobody follows, terms like 'stock operation' become utterly meaningless. Of course this was the same with MCE on Z-series boards previously, but at least that was somewhat limited to high end OC SKUs and enthusiast users. Now it's becoming the de facto standard. All the while it completely erodes the applicability of benchmarks and makes any expectation of a given level of performance come with a huge asterisk attached.


----------



## Mussels (May 22, 2021)

W1zzard said:


> Surprised that this is news to anyone


The amount of people I've shown this video who don't believe it's real...

"intel is the budget champion now!" (if you're fine with a 50% performance loss on cheap boards)


----------



## asdkj1740 (May 22, 2021)

i dont really get it. whats the problem?


----------



## Mussels (May 22, 2021)

asdkj1740 said:


> i dont really get it. whats the problem?


Did you watch the video?
Intel's specs are all over the place; boards that follow Intel's guidelines are up to 50% slower than ones that blow the limits out and throw extra wattage at the CPU (from 65W to 250W was the worst, I think).
loosely:
Cheap board: 65W, 50% slower
high end board with SAME CPU: 250W, 50% faster.

So when you buy an intel chip, do you really get the same product you see reviewed, if they're using top end boards and you're not?


----------



## W1zzard (May 22, 2021)

Mussels said:


> do you really get the same product you see reviewed, if they're using top end boards and you're not?


that's why my reviews show "default" at true Intel stock settings, and "max power limit", too.


----------



## asdkj1740 (May 22, 2021)

Mussels said:


> Did you watch the video?
> Intels specs are all over the place, boards that follow intels guidelines are upto 50% slower than ones that blow the limits out, and throw extra wattage to the CPU (from 65W to 250W, i think was the worst)(
> loosely:
> Cheap board: 65W, 50% slower
> ...


From what I have heard, all Asus Z490s are set to 125W by default, and they got tremendous criticism then; that's why they have changed since Z590.

There are plenty of top-end shit mobos (expensive as well) on recent Intel platforms:
z370 gigabyte gaming 7 (actually the whole z370 lineup messed up the thermal pad thickness)
z390 msi ace (12 phases of 1H+1L discrete mosfets with a poor heatsink, no choke cooling and a poor pcb cooling design)
z490 asus proart (good-looking mosfet components but fucked up all over 100c)
z590 gigabyte aorus pro ax (great mosfet components with a good-looking heatsink but still fucked up)

To your question:
no, because Intel chips can't work without a mobo.
If you want the Intel chip you bought to run at its full potential, you'd better check its power consumption and then find a mobo that can provide that level of power, if prime95 avx512 is all you want.
If what you guys want is a cheap board but 0% slower, then good luck my friend.

high end mobo, what is that, msi x570?


----------



## newtekie1 (May 22, 2021)

80-watt Hamster said:


> Is it even statistically likely that an arbitrary H570 board will have better power delivery than an arbitrary B560?  HU's been doing quite a bit on the VRM side lately, and one takeaway seems to be that robust power and board price are not directly correlated.
> 
> Edit: spelling



It does seem like even the cheap H570 boards have decent enough VRMs compared to the cheap B560 boards.



tabascosauz said:


> No, none of those CPUs pull over 150W stock. Fine print in the upper right corner of any of the TPU power graphs - *Whole System*, not CPU. None of those Vermeer chips are pulling beyond ~145W worst case under stock settings. 142W PPT is 142W PPT. Sometimes you can account for a ~1-2W deviation either way depending on the board, but PPT/TDC/EDC is always the law until you set a static OC.
> 
> That said, I agree that none of this 11th-gen power limit stuff is new, and it's a little overblown.



Whole-system power for the 5950X is 195w - it's going over 150w. And even if you normalize for the rest of the system, it's going further out of spec than the Intel chips are. Same for the 2700X. PPT/TDC/EDC are all adjustable; that's what PBO, AMD's own feature built into the processor, changes. So with PBO enabled, the 142w PPT limit is raised.



Valantar said:


> So, in other words, you enable an auto OC and run a power virus. You see how that is no longer stock, right? Enabling PBO isn't stock. It's an available option in the BIOS. Hardly comparable to the issue at hand here.
> 
> As for the power numbers cited here, they're pure nonsense. PPT is strictly enforced unless settings are changed manually, and constitutes a hard package power limit. Either show a source demonstrating otherwise or stop making misleading comparisons, please.



PBO is an AMD technology. It's where they get their advertised performance numbers from. It isn't a shady overclock put in place by the board manufacturers. That's like saying enabling Thermal Velocity Boost on Intel's side is an auto overclock that takes the processors out of stock. It just isn't true; AMD built PBO into the processors and reported performance numbers with it enabled. Hell, they touted it as a major feature during the 3000 series launch. It also does NOT overclock the processor. It does not push the clock speeds beyond the advertised speeds; what it does is allow the processor to boost more cores for longer, putting more load on the VRMs. It does this specifically by raising those PPT limits (along with a few other power-related limits). This whole thread is basically complaining that some motherboard manufacturers on the Intel side are shipping boards with essentially Intel's version of PBO enabled by default and some aren't (and I have to wonder if AMD motherboard manufacturers do this sometimes too).

If you want the proof, just go look at the TPU reviews. You'll see 105w-rated processors consuming way more than 105w, as I pointed out. But back to the motherboards and VRM throttling: when you buy a B550 motherboard and it only has a 4-pin CPU power connector rated for 75w, you really have to scratch your head and ask "is this board really going to be able to reliably deliver double that for long periods of time?" And I can already answer that: No.



Mussels said:


> "intel is the budget champion now!" (if you're fine with a 50% performance loss on cheap boards)



It still applies, like I said, the "cheap" boards aren't really that much cheaper anyway. It's like $20 difference.


----------



## GorbazTheDragon (May 23, 2021)

newtekie1 said:


> It does seem like even the cheap H570 boards have decent enough VRMs compared to the cheap B560 boards.


H570s are pretty thin on the ground compared to B560s - the overall selection is smaller - but realistically you can just lump all of them into the same category...

In terms of VRMs, the only ones I would really trust are the ones with power stages, and maybe at a stretch the MSI ones with heatsinks... As long as Asus and GB are using 4C10N/4C06N setups, their boards with discrete mosfets are pretty much on the avoid list for anything more than a quad core (same goes for Asrock and their random rubbish mosfets).


----------



## Kissamies (May 23, 2021)

Wonder how many prebuilt owners are wondering why their systems are performing way worse than others' systems with the same CPU..


----------



## GorbazTheDragon (May 23, 2021)

Before or after removing the bloatware?


----------



## AusWolf (May 23, 2021)

It's not only about performance. I love small form factor PCs where cooling is an issue, so going all-in for performance isn't always my way to go.

One of my last builds was based on a Core i7-7700 non-K. It was a mini-ITX system in a Coolermaster Elite 130 case. The locked CPU was the right choice, as it offered about 10% lower performance than the K variant, but with a much lower TDP.

My HTPC (details below) is based on a flea market motherboard that I used with a Core i3-2120T. I recently upgraded it to an i7-3770T. It has slightly lower clocks than the K version, but nearly half the TDP (which it doesn't even fully utilise anyway). It doesn't heat up the small case, but hopefully has the raw power to decode the films that the 750 Ti's hardware decoder can't.

An example from the other side of things is my main PC. It has a Ryzen 3 3100 inside a slim (low profile) case. I tried upgrading to a Ryzen 5 3600, but the layout of the case didn't allow enough airflow to cool it down. It's not an issue with the roughly 50 W the 3100 uses under full load, but the 85-88 W peak the 3600 asked for was too much. One could say that airflow in the case is too restricted and I should swap it for a bigger one, but that would defeat the purpose. Now I'm using the 3100 again, while a friend of mine is happy with the 3600 I couldn't use. All this could have been avoided if AMD didn't take their TDP values out of thin air.

I'm actually thinking about Intel's 11th gen as my upgrade path, especially because of the locked SKUs. Pure performance is nice, but I have no use for the extra 10-20% if it means that I can't keep my PC within operating temperatures in my SFF case. I'm also thinking about waiting for AMD's 5000G series (the monolithic die layout may be easier to cool?), but I'm afraid that it would be just another 3600 disaster for me.

All in all, locked performance with locked TDP has its uses, even in desktop DIY. The part I agree with is that one should make educated decisions before buying, and never skimp on the motherboard (and power supply).


----------



## Post Nut Clairvoyance (May 23, 2021)

80-watt Hamster said:


> Is it even statistically likely that an arbitrary H570 board will have better power delivery than an arbitrary B560?  HU's been doing quite a bit on the VRM side lately, and one takeaway seems to be that robust power and board price are not directly correlated.


H570 boards are, on average, more expensive. Most H570 boards stop catering to the absolute lowest-end (<100w) CPU PL.
A perfect example would be the (ASRock) B560M-ITX vs the H570M-ITX: the former has 4 phases of 50A Vishay SiC for Vcore, the latter 6. The former cannot handle an 11400F at full turbo with AVX (even with high airflow directly onto the heatsink; it follows base Intel spec of a 56-second PL2), while the latter can handle an 11600K at full turbo with AVX-512 (granted, I don't know the PL limit on this board; it should be higher than its B560 counterpart's).

Price and VRM are correlated; the problem is, IMO, that RKL really, really draws power like a motherfucker. AIDA64 FPU frying an 11400F with AVX-512 on will pull 170W - that is strictly "decent enough B550 AM4" or "good B560" territory; the AMD side just doesn't demand as strong a VRM. There is also a small gap: most B560 boards I've seen fall into 3 categories: [<100w, 4 power stages or 6 phases with up to 3 fets per phase], [>200w, 8 power stages or 6 doubled phases with 2 fets each] or [12 power stages - why is this even on a B560].

The 11400F, being the most cost-efficient CPU, is left without a capable budget board here, as the category-2 ASRock B560M Steel Legend is often sold out.



AusWolf said:


> It's not only about performance. I love small form factor PCs where cooling is an issue, so going all-in for performance isn't always my way to go.
> 
> One of my last builds was based on a Core i7-7700 non-K. It was a mini-ITX system in a Coolermaster Elite 130 case. The locked CPU was the right choice, as it offered about 10% lower performance than the K variant, but with a much lower TDP.
> 
> ...


I think RKL's larger die area makes it relatively easier to cool vs. its power draw. It will only help so much though, and an SFF case, especially if you already have your own cooler, is hard to manage.
If you're not OCing or needing an extra 500MHz, a locked CPU's pricing makes it suitable for sure. You could also run RKL with a PL limit - even the already-locked 11400F.
It depends on board price for RKL; AM4 CPUs are all overpriced atm, but AM4 boards are decently cheaper. I bought an 11400F + B560M Aorus Pro as the latter is on sale RN (229 -> 149) on PCCaseGear. I see you're also in AUS, but you're looking for SFF, which is basically exclusively m-ITX.
If you intend to run an 11400F with a 95w PL, the ASRock B560M-ITX is an option. HOWEVER, ASRock is the only manufacturer AFAIK to HARD ENFORCE Intel base spec - as in, no option in the BIOS to indefinitely extend PL2 on their cheaper boards, where other brands at least give you the option, even if at stock they cannot deliver a high PL2.
And actually, with your case and cooler options you might not be able to upgrade until later on. RKL is a power hog, and 11th gen doesn't have an i3; the 3100 or 3300X could be the current upper limit.


----------



## RJARRRPCGP (May 23, 2021)

newtekie1 said:


> It isn't like this is only an Intel problem. There are some B550 motherboards out there that can't handle a fully loaded 5800X either.
> 
> I think every major manufacturer is guilty of putting out an absolute garbage-VRM budget B550/B560 motherboard. We've moved into the era where, even if you are just going to run the system "at stock", spending even a little bit more on the motherboard can make a difference.


That's like the AM3/AM3+ era again! (especially motherboards with no MOSFET heatsink and/or VRM heatsink)


----------



## Kissamies (May 23, 2021)

RJARRRPCGP said:


> That's like the AM3/AM3+ era again! (especially motherboards with no MOSFET heatsink and/or VRM heatsink)


Exactly. I remember people getting an 8000-series FX and pairing it with a cheap motherboard, then suffering from reduced performance because of VRM throttling.


----------



## RJARRRPCGP (May 23, 2021)

Chloe Price said:


> Exactly. I remember people getting an 8000-series FX and pairing it with a cheap motherboard, then suffering from reduced performance because of VRM throttling.


I don't know if those motherboards can even handle a 2-module FX! FFS!


----------



## Zach_01 (May 23, 2021)

newtekie1 said:


> It does seem like even the cheap H570 boards have decent enough VRMs compared to the cheap B560 boards.
> 
> 
> 
> ...


AMD's advertised numbers are with PB enabled and PBO disabled.

It's really hard to find a 500-series board that can't handle a stock (105W TDP) 142W PPT / 140A EDC CPU, except those dirt-cheap ones without a VRM heatsink. Some others just need a little more airflow around the VRMs.
PBO or a static OC is a different story.
And AMD's TDP does not define max power consumption. This should be well known by now. It's a rough guide for the minimum thermal solution.
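For what it's worth, AMD has publicly described desktop TDP as a thermal target rather than a power bound, roughly as sketched below (the example inputs are illustrative, not official per-SKU values):

```python
def amd_tdp_watts(t_case_max_c: float, t_ambient_c: float,
                  theta_ca_c_per_w: float) -> float:
    """TDP ~= (max allowed case temp - ambient temp) / cooler thermal
    resistance (degC/W). Note that package power (PPT) doesn't appear
    anywhere here - this sizes the heatsink, it doesn't bound the draw."""
    return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

# ~20 C of headroom over a ~0.19 C/W cooler lands near the familiar 105 W:
assert round(amd_tdp_watts(61.8, 42.0, 0.189)) == 105
```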


----------



## AusWolf (May 23, 2021)

Post Nut Clairvoyance said:


> I think RKL's larger die area makes it relatively easier to cool vs. its power draw. It will only help so much though, and an SFF case, especially if you already have your own cooler, is hard to manage.
> If you're not OCing or needing an extra 500MHz, a locked CPU's pricing makes it suitable for sure. You could also run RKL with a PL limit - even the already-locked 11400F.


That's what I'm thinking too. AMD's 7 nm chiplets are very efficient (under load at least), but are a nightmare to cool in an SFF case. I'm currently using AMD's stock Wraith cooler with the 3100, but I also have a be quiet! Shadow Rock LP on the shelf for a potential upgrade. I ordered it for the 3600, but it wasn't enough to cool it down in my PC case for some reason. Currently I'm happy with the 3100, but that won't always be the case, and my options are limited due to the cooling difficulties I'm facing. I could just dust off the Aerocool Aero One Mini case that I have and call it a day, but nah. Where's the challenge in that?




Post Nut Clairvoyance said:


> It is dependent on board price for RKL. AM4 CPUs are all overpriced atm, but AM4 boards are decently cheaper. I bought an 11400F + B560M Aorus Pro as the latter is on sale RN (229 -> 149) on pccasegear. I see you're also in AUS, but you're looking for SFF, which is basically exclusively m-ITX.
> If you intend to run the 11400F with a 95 W PL, the ASRock B560M-ITX is an option. HOWEVER, ASRock is the only manufacturer AFAIK to HARD ENFORCE Intel base spec, as in, no option in the BIOS to indefinitely extend PL2 on their cheaper boards, where other brands at least give you the option, even if at stock they cannot deliver a high PL2.
> And actually, with your case/cooler options you might not be able to upgrade until later on. RKL is a power hog, 11th gen doesn't have an i3, so the 3100 or 3300X could be the current upper limit.


I'm actually in the UK.  But as you said, RKL CPUs are quite cheap, it's just the motherboard that costs a bit more than with AMD.

The funny thing about my PC case is that even though it's a slim one that only accepts low profile graphics cards and CPU coolers, micro-ATX motherboards aren't an issue. I'm using an Asus B550M TUF Wifi at the moment, and I would be a bit sad to swap it for something else (unless it's of the same quality as this one).

If I go Intel again, I want to be looking at something similar - the Asus B560M TUF Wifi, or the Asus Z590M Prime are the ones with similar-looking quality and affordability available in my area. As for CPU, I was thinking about a Core i7-11700 non-K and locking its PL1 to 65 W, and PL2 to whatever I can cool. Hopefully, the Asus boards I looked at (or something else) would let me do that, even if it's not their default setting.

If I stay with AMD, I'm very curious about the Ryzen 5000G series, especially the GE models. A monolithic die could potentially be easier to cool than chiplets, not to mention their low TDP. The catch is that AMD's TDP values have very little (if any) connection to reality, so even the GE chips might end up being difficult to cool because of their high turbo speeds. Not to mention that they're not available (yet).

I don't know. I'm torn between two worlds. Neither Intel, nor AMD seem to offer exactly what I need in their current generations. 



Chloe Price said:


> Exactly. I remember people getting an 8000-series FX and pairing it with a cheap motherboard, then suffering from reduced performance because of the VRM throttling.


Those were good times. Seeing 20% CPU usage because games didn't utilise 8 cores / 4 modules, and the same 20% usage on my HD 7970 because 1-2 cores just weren't enough to feed it with data... priceless!


----------



## Selaya (May 23, 2021)

The way I see it, the best use for those bottom-of-the-barrel B560s is to pair them with either 10400F or 10600K. Skylake is a lot more power-efficient than Rocket Lake, and in general the 2666 memory is what hamstrung the i5s in the past. Now that you can actually OC your memory with a B560, that is of no concern anymore, and neither of these draw much power (not compared to Rocket Lake, in any case).


----------



## Kissamies (May 23, 2021)

Selaya said:


> The way I see it, the best use for those bottom-of-the-barrel B560s is to pair them with either 10400F or 10600K.


10600K with a B series board? I suppose you meant 10600F?


----------



## Selaya (May 23, 2021)

No, I meant the 10600K(F). It is barely more expensive than the 10600, and even if you're not OCing it it is a great 6-core part, as long as it's not stuck with 2666 memory.


----------



## 80-watt Hamster (May 23, 2021)

Chloe Price said:


> 10600K with a B series board? I suppose you meant 10600F?





Selaya said:


> No, I meant the 10600K(F). It is barely more expensive than the 10600, and even if you're not OCing it it is a great 6-core part, as long as it's not stuck with 2666 memory.



Seconded.  I've cooled on the "why would you pair a K chip with a locked board?" sentiment over the past couple years, at least if the price is fairly close.  Stock clocks are higher, and you're (hypothetically) getting a better-binned chip that may run cooler clock-for-clock.  Fringe benefit:  K chips historically sell for more once obsoleted.  The delta may not be as large as when new, but it's something.


----------



## newtekie1 (May 23, 2021)

Zach_01 said:


> AMD's advertised numbers are with PB enabled and PBO disabled.
> 
> It's really hard to find a 500-series board that can't handle a stock (105 W TDP) 142 W PPT / 140 A EDC CPU, except the dirt-cheap ones without a VRM heatsink. Some others just need a little more airflow around the VRMs.
> PBO or a static OC is a different story.
> And AMD's TDP does not define max power consumption. This should be well known by now. It's a rough guide for the minimum thermal solution.


No, when the 3000 series was coming out, they advertised PBO and gave performance numbers showing what it was capable of.



Selaya said:


> The way I see it, the best use for those bottom-of-the-barrel B560s is to pair them with either 10400F or 10600K.


I wouldn't put anything more than an i3 in any motherboard that costs less than $120.  And if it is a computer that I expect any kind of upgrades to happen on, a budget sub-$120 board makes absolutely no sense.


----------



## Zach_01 (May 23, 2021)

newtekie1 said:


> No, when the 3000 series was coming out, they advertised PBO and gave performance numbers showing what it was capable of.


I beg to differ...
Advertising performance under just PB versus under PBO are two different things. When I bought my R5 3600, 45 days after the 3000-series launch, I didn't expect PBO performance, just PB performance. And I did get that.
There was no misconception there.

Intel, on the other hand, is naming things very differently with PL1 and PL2, and apparently a lot of boards can't handle the higher level of performance/consumption. Whereas on 95+% of AMD 500-series boards you can run a 16-core with the advertised numbers. Just not with PBO. It's different.


----------



## Post Nut Clairvoyance (May 23, 2021)

newtekie1 said:


> No, when the 3000 series was coming out, they advertised PBO and gave performance numbers showing what it was capable of.
> 
> 
> I wouldn't put anything more than an i3 in any motherboard that costs less than $120.  And if it is a computer that I expect any kind of upgrades to happen on, a budget sub-$120 board makes absolutely no sense.


The lowest-end B560 boards on the market are at least capable of running a 10400F without limits once you change a few BIOS settings. I just think 6 cores had already surpassed 4 cores in value when the 2600 (or 1600) came out, and looking at the price of Comet Lake i3s (there is no RKL i3), I don't think it makes much sense overall, as the extra money put towards 6 cores is well justified by the amount of use you'll get out of them. If it weren't for the fact that a very capable board is on sale, I would not have picked it up over the sub-$120 crapshoots that will not hold a fully boosting 11400F.
Speaking of upgrades, we don't need to worry about upgrading a socket 1200 CPU generation anytime soon!

I do agree, though: on my first PC build I cheaped out on the motherboard (an A320M board) because I had no intention of going beyond an AM4 6-core. Whereas now, hopefully I'll get the B560M Aorus Pro soon; that thing should handle an 11900K, if I were ever held at gunpoint and made to buy one...


----------



## newtekie1 (May 23, 2021)

Zach_01 said:


> I beg to differ...
> To advertise performance under under just PB and then PBO is 2 different things. When I bought my R5 3600 45 days after 3000 launch I didn't expect PBO performance but just PB one. And I did get that.
> There was not any misconception there.
> 
> Intel on the other hand calling things very differently with the PL1 and PL2, and apparently a lot of boards cant handle the higher level of performance/consumption. Where on 95+% of AMD 500 series boards you can run a 16core with the advertised numbers. Just not with PBO. Its different.


Well we can agree to disagree then, because I very much remember them advertising PBO and showing the performance difference it makes. It wasn't the only performance numbers they gave, but they did give PBO performance data in their advertising for PBO.


----------



## Zach_01 (May 23, 2021)

newtekie1 said:


> Well we can agree to disagree then, because I very much remember them advertising PBO and showing the performance difference it makes. It wasn't the only performance numbers they gave, but they did give PBO performance data in their advertising for PBO.


Ok then!
How that is the same as what's going on now with Intel's 11th gen and 560 boards that can only run base clock and not boost beats me...


----------



## GorbazTheDragon (May 23, 2021)

The better case for Zen 2 PB2 is something like a 10% performance loss due to a 65 W PPT rather than 88 W on the lower-power parts. Though if you stick a 16-core into one of the cheaper B450 boards (check Hardware Numb3rs on YouTube as an example), they will throttle (not boost less: they will overheat the VRM, drop to <1 GHz, and go back to normal when the temps come down) if you don't play with the PBO numbers.

That said, those AM4 boards in question are half the price of these B560 boards, and the CPUs in question (an 11400F can be limited, for example) are far lower-end than the 12- and 16-core Ryzens.


----------



## newtekie1 (May 23, 2021)

Zach_01 said:


> Ok then!
> How that is the same as what's going on now with Intel's 11th gen and 560 boards that can only run base clock and not boost beats me...


Because with PBO enabled (a feature AMD built into their processors and advertises as a feature of their processors), there are B550 boards that do the same thing.

Plus, none of the B560 boards limited the processors to their base clocks or prevented them from boosting when the processors were run at stock configurations within Intel's specs.


----------



## AusWolf (May 23, 2021)

GorbazTheDragon said:


> The better case for Zen 2 PB2 is something like a 10% performance loss due to a 65 W PPT rather than 88 W on the lower-power parts. Though if you stick a 16-core into one of the cheaper B450 boards (check Hardware Numb3rs on YouTube as an example), they will throttle (not boost less: they will overheat the VRM, drop to <1 GHz, and go back to normal when the temps come down) if you don't play with the PBO numbers.
> 
> That said, those AM4 boards in question are half the price of these B560 boards, and the CPUs in question (an 11400F can be limited, for example) are far lower-end than the 12- and 16-core Ryzens.


I can't say much about B560, but when the B550 platform came out, many people got upset about how expensive the motherboards were compared to B450. Not many people talked about the fact that the newer B550 boards are generally better built, hence the price increase. I bet most of the ones with VRM heatsinks can run any Ryzen 9 with PBO. I'm not sure if the same can be said about B450.

Cheap motherboards being cheap in quality isn't a new thing.


----------



## Zach_01 (May 23, 2021)

newtekie1 said:


> Because with PBO enabled (a feature AMD built into their processors and advertises as a feature of their processors), there are B550 boards that do the same thing.
> 
> Plus, none of the B560 boards limited the processors to their base clocks or prevented them from boosting when the processors were run at stock configurations within Intel's specs.


My mistake... I thought the 11400F's base clock was something better than a crappy 2.6 GHz, like 3.3~3.6 GHz, as most modern/advanced CPUs have.

Yes, that is a win!
kudos to Intel




You can compare it all you want, but the fiasco of a 30~50% performance difference on a low/midrange CPU does not exist between any B550 boards.
And what an obsession with PBO...


----------



## tabascosauz (May 23, 2021)

AusWolf said:


> That's what I'm thinking too.  AMD's 7 nm chiplets are very efficient (under load at least), but are a nightmare to cool in a SFF case. I'm currently using AMD's stock Wraith cooler with the 3100, but I also have a be quiet! Shadow Rock LP on the shelf for a potential upgrade. I ordered it for the 3600, but it wasn't enough to cool it down in my PC case for some reason. Currently I'm happy with the 3100, but that won't always be the case, and my options are limited due to the cooling difficulties I'm facing. I could just dust off the Aerocool Aero One Mini case that I have and call it a day, but nah. Where's the challenge in that?
> 
> 
> 
> ...



There's nothing hard to cool about the 6-core Renoir or Cezanne APUs, they will run cooler than the comparable chiplet SKUs, guaranteed. Max boost of 4.3 and 4.4 (4650G/4600G give you +100MHz for free past AMD spec) respectively won't give you thermal challenges unless you literally cool them with a raw potato. You'd have to really try, using something like a NH-L9a and really choking the airflow in the case, to hit 80C under any stock load. Try using a L9a on a 3600 or 5600X, lol.

The APUs are 156mm^2 and 175mm^2, compared to 74mm^2 for a single chiplet.
They have to draw significantly less power, because there must be enough power budget left in 88W for Vega 7. On the 4650G you'll see the CPU max out at about 60W-ish under all-core, I'm guessing Cezanne is the same or slightly more given the focus on aggressively power-gating the GPU.

If it's easy thermals you're after, 8-core Rocket Lake is the wrong product. Comet Lake was the unicorn because it had the thinner die/thinner substrate/thicker IHS. Rocket Lake ditched that for 9900K-style packaging, so the increase in die size doesn't really offset the temps. Basically, add +2 cores to whatever RKL chip you're thinking of when considering cooling. Thus the 11400/11500/11600 are still okay, but once you get up to the 11700, treat it like you would a 10900F.

Yes, you can lock it to the stock 65W PL1, but that doesn't change the fact that it still will be drawing up to 200-260W during that Tau period, and temps will rise accordingly in that 30-second window.


----------



## GorbazTheDragon (May 23, 2021)

AusWolf said:


> I can't say much about B560, but when the B550 platform came out, many people got upset about how expensive the motherboards were compared to B450. Not many people talked about the fact that the newer B550 boards are generally better built, hence the price increase. I bet most of the ones with VRM heatsinks can run any Ryzen 9 with PBO. I'm not sure if the same can be said about B450.
> 
> Cheap motherboards being cheap in quality isn't a new thing.


Don't disagree; the problem I have with these cheap LGA1200 boards is that they limit the boost performance of even lower-midrange chips. It's bad for uninformed consumers too, who will look at reviews of an 11400F in a high-end Z590 board where it's likely running power-unlocked at 200 W, then go and buy the cheapest B560 or H510 board and find the performance is 2/3 of that in the review because these boards have PL2 limits below 100 W...

I think this is a much more realistic scenario than running a 5900x/5950x (or the zen2 parts) in a B450M-A Pro Max or similar $50 AM4 board...

3900x: $400, 5900x: $600+. Bottom of the barrel AM4 board: $50-60.
11400F: $170. Bottom of the barrel H510/B560 board: $70-90


----------



## Kissamies (May 23, 2021)

80-watt Hamster said:


> Seconded.  I've cooled on the "why would you pair a K chip with a locked board?" sentiment over the past couple years, at least if the price is fairly close.  Stock clocks are higher, and you're (hypothetically) getting a better-binned chip that may run cooler clock-for-clock.  Fringe benefit:  K chips historically sell for more once obsoleted.  The delta may not be as large as when new, but it's something.


I know, it's exactly the same in Finland: K CPUs have way more resale value, especially when some people want to upgrade from their HT-less i5 CPUs without upgrading the whole system.

But isn't the point of a locked board to save money when building a system? That's why I always wonder why anyone would pair a locked board with a more expensive K CPU.


----------



## MentalAcetylide (May 24, 2021)

GorbazTheDragon said:


> Before or after removing the bloatware


When you say "bloatware", it makes me think of Carter Wong in the movie Big Trouble in Little China when he gets really mad & blows up. I know I'll never buy another Dell or Alienware. Dell should stick to building enterprise workstations.


----------



## AusWolf (May 24, 2021)

GorbazTheDragon said:


> Don't disagree; the problem I have with these cheap LGA1200 boards is that they limit the boost performance of even lower-midrange chips. It's bad for uninformed consumers too, who will look at reviews of an 11400F in a high-end Z590 board where it's likely running power-unlocked at 200 W, then go and buy the cheapest B560 or H510 board and find the performance is 2/3 of that in the review because these boards have PL2 limits below 100 W...
> 
> I think this is a much more realistic scenario than running a 5900x/5950x (or the zen2 parts) in a B450M-A Pro Max or similar $50 AM4 board...
> 
> ...


That is true, though I don't expect those uninformed consumers to buy any high-end cooling solution with their $70 motherboard, so they probably can't even handle 200+ W anyway. Limiting power to what their cheap-ass motherboard and worthless Intel box cooler can handle is the safe option.

One party says Intel is lying about TDP numbers, as with proper cooling and a proper motherboard, their chips clearly eat more power than stated on the box. The other party says Intel is lying about turbo frequencies because cheap motherboards don't unlock the power targets that would allow the CPUs to run at the designated speeds. Who's right? I think no one is, as even Intel TDP numbers are only valid with locked power targets (and Intel gives a free hand to motherboard manufacturers in this), and all turbo speeds are "up to" values nowadays. You may or may not be able to run it depending on your system.

HWUnboxed tried to make big news out of this, but my rule of thumb has always been the same: look at the VRM. If it has no heatsink on it, then either stay away, or never put anything more powerful than a Pentium or Ryzen 3 in it. It's good that they came out with the video to help uninformed people be a little bit less uninformed (though I'm not sure how many non-enthusiasts watch their videos before buying), but the existence of cheap **** motherboards is not B560's fault.

Edit: Another must-watch from HWUnboxed. It's a bit long, but explains the situation very well. When Intel advertises a CPU as a "65 W, 4.4 GHz all-core turbo" part, it no longer means 65 W and 4.4 GHz all-core. It means 65 W or 4.4 GHz all-core, depending on motherboard VRM, cooling capacity and BIOS settings. Every CPU is designed to run at least at base frequencies. How close you get to max. turbo is up to you. It doesn't look as messy to me as it's said to be, or at least I find AMD's turbo and TDP situation much more confusing. My Ryzen 9 5950X never kept its 105 W TDP when nearing max. turbo (it was more like 130-135 W), but my Ryzen 3 3100 doesn't even come close to TDP, maxing out at about 50 W. Here's the video:


----------



## Valantar (May 24, 2021)

newtekie1 said:


> No, when the 3000 series was coming out, they advertised PBO and gave performance numbers showing what it was capable of.





newtekie1 said:


> Well we can agree to disagree then, because I very much remember them advertising PBO and showing the performance difference it makes. It wasn't the only performance numbers they gave, but they did give PBO performance data in their advertising for PBO.





newtekie1 said:


> Because with PBO enabled (a feature AMD built into their processors and advertises as a feature of their processors), there are B550 boards that do the same thing.
> 
> Plus, none of the B560 boards limited the processors to their base clocks or prevented them from boosting when the processors were run at stock configurations within Intel's specs.


There's a difference between advertising a(n optional, non-enabled at stock) feature that can increase performance, while explicitly pointing out that this is something that must be enabled, vs. advertising a level of _stock_ performance that is contingent on uncommunicated factors that can't be easily identified.

Also, what does it matter if it's advertised if it's not stock? Intel advertises the overclocking capabilities of their CPUs, so you'd need to include that as well to be consistent in your logic.

In this case, the situation is as follows:
With Intel, there is 30-40% performance variance using the same CPU between motherboards that are nominally similar even on midrange CPUs.
With AMD, there is the expected +/- 5% variance that comes with motherboard design and BIOS tuning, though there's an optional mode that can be enabled where lower end boards will fail to keep up with higher end ones.

Are you really saying you don't see the difference?


----------



## AusWolf (May 24, 2021)

Valantar said:


> There's a difference between advertising a(n optional, non-enabled at stock) feature that can increase performance, while explicitly pointing out that this is something that must be enabled, vs. advertising a level of _stock_ performance that is contingent on uncommunicated factors that can't be easily identified.


That's a matter of interpretation, imo. Stock performance is the base clock. All turbo speeds are "up to" levels, which means that there is a chance for you to reach those levels, or stay anywhere between base and max. turbo with a good enough motherboard and cooling. It's not guaranteed, though. That's why 125 W K-SKUs have much higher base clocks, but pretty much the same turbo clocks as 65 W locked parts.

Again, an educated decision is needed before buying anything. You can't assume that the 11700 reaches 4.9 GHz within the 65 W power target, while the 11700K only goes up 100 MHz higher with almost double the TDP. It's just not logical.  Even if one knows nothing about computers, at least they should ask someone who does.

Edit: Speaking of educated decisions, what should I upgrade to? Core i7-11700 locked to 65 W (or whatever I can cool in my SFF box), or wait for the Ryzen 7 5700GE and hope that its TDP means something?


----------



## Zach_01 (May 24, 2021)

AusWolf said:


> My Ryzen 9 5950X never kept its 105 W TDP when nearing max. turbo (it was more like 130-135 W), but my Ryzen 3 3100 doesn't even come close to TDP, maxing out at about 50 W.



It's been said more than a few times on TPU, but I guess it's really frustrating (not to me). Complicated, maybe? I'm not arguing with that...

AMD's TDP designation is not about max power consumption. AMD never said it was, at least for the latest generations. I believe that includes the FX series as well.
It's for the minimum cooling solution, under specific temperatures (CPU die surface and cooler ambient), to get the advertised stock performance. And by stock they mean "regular" boosting (not PBO).
In other words, they are talking about heat dissipation towards the cooler under specific circumstances, not about total consumption.

This is the equation they use:

TDP (Watts) = (tCase°C - tAmbient°C)/(HSF θca)

tCase°C = CPU die surface temperature
tAmbient°C = cooler ambient temperature (inside the case, or the room if caseless)
HSF θca = thermal resistance of the heatsink/fan (°C/W)
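For illustration, here is that equation as a tiny Python sketch. The input values are made-up examples (not AMD's official test conditions), chosen so the result lands near a 105 W-class rating:

```python
# Hedged sketch of the TDP equation quoted above:
#   TDP (W) = (tCase - tAmbient) / (HSF theta_ca)
# The input values below are illustrative, not AMD's official figures.

def amd_style_tdp(t_case_c: float, t_ambient_c: float, theta_ca_c_per_w: float) -> float:
    """Minimum-cooler TDP implied by a die-surface temperature target,
    an ambient temperature, and the cooler's thermal resistance."""
    return (t_case_c - t_ambient_c) / theta_ca_c_per_w

# Example: 61.8 C die-surface target, 42 C ambient, 0.189 C/W cooler
print(round(amd_style_tdp(61.8, 42.0, 0.189)))  # -> 105
```

Note that nothing in this calculation is electrical power draw; it is purely a thermal-resistance figure, which is exactly why the number on the box can diverge from measured consumption.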




And here is an example


----------



## AusWolf (May 24, 2021)

Zach_01 said:


> It's been said more than a few times on TPU, but I guess it's really frustrating (not to me). Complicated, maybe? I'm not arguing with that...
> 
> AMD's TDP designation is not about max power consumption. AMD never said it was, at least for the latest generations. I believe that includes the FX series as well.
> It's for the minimum cooling solution, under specific temperatures (CPU die surface and cooler ambient), to get the advertised stock performance. And by stock they mean "regular" boosting (not PBO).
> ...


I know this very well. It's still complicated, and makes me wish this wasn't the case.

Like I mentioned, I tried to upgrade my 3100 to a 3600 under the assumption that the same 65 W TDP meant similar heat to deal with - which was backed up by people telling me how easy it is to cool a 3600. I was wrong. The 3600 might be easy to cool with a tower cooler, but I went through hell with it in my slim case.

It's funny that Intel is called out for lying about power requirements for max turbo, but nobody calls out AMD for making up a BS formula for TDP that gives a final result in W even though the formula has nothing to do with power. AMD does a better job at power efficiency and motherboard VRM requirements, but...

Intel at least means Watts by "W", making it easier to think about cooling. You just don't know what kind of performance you get with enforced power limits... which makes me extremely conflicted about my upgrade path.


----------



## GorbazTheDragon (May 24, 2021)

AusWolf said:


> HWUnboxed tried to make big news out of this, but my rule of thumb has always been the same: look at the VRM. If it has no heatsink on it, then either stay away, or never put anything more powerful than a Pentium or Ryzen 3 in it. It's good that they came out with the video to help uninformed people be a little bit less uninformed (though I'm not sure how many non-enthusiasts watch their videos before buying), but the existence of cheap **** motherboards is not B560's fault.


I think one of the reasons it's more relevant now than it was before is that B560 has memory overclocking/XMP support, which previous non-Z-series boards lacked. This makes the interest in boards with this chipset considerably greater.

I think the TDP figures in general are pretty bullshit. Intel should mandate stricter adherence to PL2 and not have a PL2 that is on the order of 4x the PL1/TDP rating. They should separate more between low-power SKUs and high-power SKUs, and motherboards should be forced to list supported power. Same goes for AMD; I really don't see why they need to come up with this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used (or they should list a power limit figure and keep the "TDP" figure they have for cooling purposes as an engineering figure, not a marketing figure).

Something I think the community and buyers need to understand better is that there is a large variance in the current draw of different workloads: while at 4 GHz Prime95 may suck down 150 W on an RKL 6-core, a game at 4 GHz (even a multi-core one) might only use 50 W. If you want deterministic performance, you will always have to go by the worst-case scenario, so an RKL 6-core would only be able to do 4 GHz in a 150 W power envelope; if you want the best performance in all scenarios, you will have to accept that the clock speed will be unpredictable and vary depending on the workload. Either way, a stricter power limit system and SKUs separated by power would make it easier for reviewers to do their jobs... But then again, it's not exactly in Intel's or AMD's interest to make it easy for reviewers, is it?
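The workload-dependent clock trade-off described above can be sketched with invented numbers. Treat this purely as a toy model: the watts-per-GHz values are made up, and real power scales worse than linearly with clock speed.

```python
# Toy model: the same chip at the same clock draws very different power
# depending on workload, so a fixed power cap yields a variable clock.
# Per-workload watts-per-GHz values are invented for illustration and
# assume (unrealistically) linear power-vs-clock scaling.

WATTS_PER_GHZ = {
    "prime95_avx": 37.5,  # ~150 W at 4.0 GHz in this toy model
    "game":        12.5,  # ~50 W at 4.0 GHz in this toy model
}

def sustained_clock_ghz(workload: str, power_cap_w: float) -> float:
    """Highest clock the power cap permits for a given workload."""
    return power_cap_w / WATTS_PER_GHZ[workload]

# Under a strict 65 W cap, the worst-case workload sets the deterministic
# clock, while a lighter load could (in this model) clock far higher:
for wl in WATTS_PER_GHZ:
    print(wl, round(sustained_clock_ghz(wl, 65.0), 2))
```

This is why a hard cap gives deterministic but pessimistic clocks, while open-ended boosting gives better average performance at the cost of predictability.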


----------



## AusWolf (May 24, 2021)

GorbazTheDragon said:


> I think one of the reasons it's more relevant now than it was before is that B560 has memory overclocking/XMP support, which previous non-Z-series boards lacked. This makes the interest in boards with this chipset considerably greater.
> 
> I think the TDP figures in general are pretty bullshit. Intel should mandate stricter adherence to PL2 and not have a PL2 that is on the order of 4x the PL1/TDP rating. They should separate more between low-power SKUs and high-power SKUs, and motherboards should be forced to list supported power. Same goes for AMD; I really don't see why they need to come up with this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used (or they should list a power limit figure and keep the "TDP" figure they have for cooling purposes as an engineering figure, not a marketing figure).
> 
> Something I think the community and buyers need to understand better is that there is a large variance in the current draw of different workloads: while at 4 GHz Prime95 may suck down 150 W on an RKL 6-core, a game at 4 GHz (even a multi-core one) might only use 50 W. If you want deterministic performance, you will always have to go by the worst-case scenario, so an RKL 6-core would only be able to do 4 GHz in a 150 W power envelope; if you want the best performance in all scenarios, you will have to accept that the clock speed will be unpredictable and vary depending on the workload. Either way, a stricter power limit system and SKUs separated by power would make it easier for reviewers to do their jobs... But then again, it's not exactly in Intel's or AMD's interest to make it easy for reviewers, is it?


To be honest, imo Watts meaning Watts should be mandated either by the industry or by law (or both). An R3 3100 that eats 50 W under full load and reaches 70 C with the crappy boxed cooler on low revs in a slim case surely can't fall into the same TDP category as an R5 3600 that maxes out the 88 W PPT in the blink of an eye and can't be cooled without a tower cooler and lots of airflow. That's just a straight-up lie from AMD.

As for Intel, I would benchmark everything with and without enforcing power limits. PL2 that's hundreds of Watts above PL1 makes TDP just as much of a lie as AMD's figures are altogether. Same with the stupid thermal velocity boost that gives you an extra 100 MHz if... and if... and if... and if... but that's a different story.

Edit: Fun fact: GPUs do the same with their clocks. You get slight variations in speed depending on VRM and cooling capacity and the type of workload. The only reason nobody complains about them is that they generally tend to boost higher than their advertised boost clocks while also staying within TDP - at least Nvidia cards do, while AMD measures chip-only power consumption, which is just as shady as their formula for CPU TDP is.

Let's just agree that this whole boosting business just makes TDP a lot more complicated than it needs to be - unless only the TDP is advertised, and boost varies by circumstances, or vice versa, which of course, wouldn't look as nice on paper as a 5+ GHz 8-core 65 W chip.


----------



## GorbazTheDragon (May 24, 2021)

AusWolf said:


> As for Intel, I would benchmark everything with and without enforcing power limits. PL2 that's hundreds of Watts above PL1 makes TDP just as much of a lie as AMD's figures are altogether. Same with the stupid thermal velocity boost that gives you an extra 100 MHz if... and if... and if... and if... but that's a different story.


The problem with Intel's PL figures is that they are basically completely open-ended: a motherboard manufacturer or SI is entirely allowed to set any PL2 limit and duration, and it will be within spec by Intel's numbers (I believe HUB touched on this in their follow-up video).

I wouldn't have a problem with a processor being specced at 250w PL2 and 65w PL1 as long as a) it was transparent that this 250w PL2 limit existed for a specific duration and b) motherboard vendors were forced to comply with that limit (and thereby build motherboards that are capable of delivering that). But we live in a world where neither is the case...
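The open-endedness being criticized here is easy to picture with a deliberately simplified step model of PL1/PL2/tau. This is only a sketch: Intel's real mechanism budgets an exponentially weighted average of power over tau rather than hard-switching at a time cutoff, and the 250 W / 65 W / 28 s numbers below just echo the hypothetical spec in the post above.

```python
# Simplified step model of Intel's turbo power limits (illustrative only:
# the real mechanism averages power over tau rather than hard-switching).

def power_cap_w(seconds_into_load: float,
                pl1: float = 65.0, pl2: float = 250.0, tau: float = 28.0) -> float:
    """Allowed package power at a given time into a sustained load."""
    return pl2 if seconds_into_load < tau else pl1

# A "65 W" part specced like the hypothetical 250 W PL2 example above:
print([power_cap_w(t) for t in (0, 10, 27, 28, 300)])
# -> [250.0, 250.0, 250.0, 65.0, 65.0]
```

Since a board vendor can pick `pl2` and `tau` almost arbitrarily while staying "in spec", two boards running this same model diverge hugely in sustained performance, which is the transparency problem being described.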


----------



## AusWolf (May 24, 2021)

GorbazTheDragon said:


> The problem with Intel's PL figures is that they are basically completely open-ended: a motherboard manufacturer or SI is entirely allowed to set any PL2 limit and duration, and it will be within spec by Intel's numbers (I believe HUB touched on this in their follow-up video).
> 
> I wouldn't have a problem with a processor being specced at 250w PL2 and 65w PL1 as long as a) it was transparent that this 250w PL2 limit existed for a specific duration and b) motherboard vendors were forced to comply with that limit (and thereby build motherboards that are capable of delivering that). But we live in a world where neither is the case...


Very true.

As an SFF maniac, I would much rather have no PL2 at all. I believe most, if not all, motherboards let you customise these things, so you can set a PL2 that's the same as your PL1, but then who knows how much performance you're losing from your CPU. I'd love to see benchmarks that cover this, so I could decide where to upgrade. Knowing how much faster CPU X is than CPU Y _on full power_ is of no use to me.

Edit: typo


----------



## GorbazTheDragon (May 24, 2021)

Personally at least, I think that with SFF you really should be putting the effort in to tune the stuff properly around your cooling (and other capabilities)... At least basically all retail motherboards will let you fiddle with the power limits (even if some of them are wholly inadequate for running sustained boost with high current workloads). On Haswell for example, some people (myself included) used to run power limits to emulate an AVX offset.

Definitely it would be interesting to see some testing of desktop CPUs at different power limits, I know some laptop reviewers already do so.


----------



## Valantar (May 25, 2021)

GorbazTheDragon said:


> I think the TDP figures in general are pretty bullshit, intel should be mandating a stricter adherence to PL2 and not have a PL2 that is of order 4x the PL1/TDP rating. They should separate more between low power SKUs and high power SKUs and motherboards should be forced to list supported power. Same goes for AMD, I really don't see why they need to come up with this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used.


AMD's TDP formula was explicitly created as a reverse-engineering of Intel's formula so that cooler manufacturers and SIs could treat them equivalently in their design processes. Of course this has since been undermined by Intel being on a path of making TDP an ever less meaningful metric, but still - that's not AMD's fault.


AusWolf said:


> To be honest, imo Watts meaning Watts should be mandated either by the industry or by law (or both). A R3 3100 that eats 50 W under full load and reaches 70 C with the crappy boxed cooler on low revs in a slim case surely can't fall into the same TDP category as a R5 3600 that maxes out the 88 W PPT with a blink of an eye and can't be cooled without a tower cooler and lots of airflow. That's just a straight out lie from AMD.


This goes to both you and @GorbazTheDragon: you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.

TDPs serve as broad categories for SIs and cooler makers to design for, and are explicitly directed at large OEMs. This is where the 2/3-tier (historically ~95/65/35W) TDP systems come from - they're guidelines for designing cooling systems for three tiers of CPUs. There has always been variance within these tiers - just as there is with laptops, where a 15W i7 always needs more power than a 15W i5. Treating TDP as an absolute number for power draw has always been wrong. It's just happened to be roughly accurate at times. But it's also typically been far too high - like the R3 3100 you mention, or the i5-2400 in my modded Optiplex 990 SFF (nominally 95W, yet I've never gotten it past ~78W).

This is where the current issues stem from - TDP used to be _reasonably close_ to _normal_ power draws, with non-high-end CPUs often coming in noticeably below that number. As technology has progressed, competition has tightened, and Intel has been stuck on 14nm(+++++++++++++) yet has needed to increase core counts, Turbo - which has _always_ been explicitly temporary, variable and potentially above TDP in power draw - has become more important, and has started pushing the silicon closer to its limits. Turbo clocks have diverged much further from base clocks than ever before (that aforementioned i5-2400 has a 300MHz Turbo on top of its 3.1GHz base clock), while the definition of TDP has stayed the same, and the categories have also stayed the same - largely due to Intel not being able to change these due to their OEM partners (if they changed the 65W class TDP to something more realistic like 105W, this would necessitate every OEM out there completely redesigning their SFF business systems to maintain base performance).

(Of course, we also need to take into account that stock (including stock boosting behaviour) power draws for CPUs are _much_ higher today than 5-10 years ago. An i7-7700K stuck pretty tightly to its 91W TDP in terms of idle-load delta power draw, and only scaled to ~120W when OC'd. These days AMD's 105W CPUs boost to 138/144W, and Intel's 125W CPUs boost to 170-250W.)

The reason for these issues is that Intel is using an OEM-facing design class denomination in consumer products without changing it or otherwise informing users what it means. This of course leads to a lot of confusion. But it also makes no sense for them to change those classes in the OEM world - which is easily 10x the size of CPU retail. A more sensible solution would be a consumer-facing "power class" or some such to denominate something closer to power draw. But that would ultimately look like they're suddenly saying their CPUs use far more power, which means that such a move would never be sanctioned by corporate and PR.

Of course this is only tangentially related to the issue at hand here - it's one root cause of it, but indirectly. The gap between base clocks (and power at those clocks) and turbo clocks (and the power at those clocks) is now large enough that due to Intel not enforcing their PL1, PL2  and tau specs with motherboard manufacturers, we now have a situation where the same CPU can perform very, very differently depending on the motherboard you put it in, which is not how things are expected to work. Intel could easily do this - but it would also make their CPUs look worse in reviews, so again, corporate and PR would never accept that. So instead, we get this quasi-sanctioned motherboard-dependent not-quite-auto-OC situation where the ultimate performance of a system is far more variable than ever before. Which of course sucks for end users and DIYers. But Intel (and AMD, though potentially a _tad_ less) ultimately doesn't care about us - they care far more about the OEMs and laptop makers that represent the majority of their sales.


AusWolf said:


> Very true.
> 
> As a SFF maniac, I would much rather have no PL2 at all. I believe most, if not all motherboards let you customise these things, so you can set a PL2 that's the same as your PL1, but then who knows how much less performance you're getting out of your CPU. I'd love to see benchmarks that cover this, so I could decide where to upgrade. Knowing how much faster X CPU against Y CPU is _on full power_ is of no use to me.


That would be great! I completely agree that reviews should cover this. At least TPU tests at both Intel's official spec and with unlocked power limits. But depending on just how SFF you go, there'll always be tuning (and the related stability testing) needed, which would drastically increase reviewers' workload. And of course binning/silicon lottery outcomes dramatically affect this. So it's not very likely to happen.

But giving up boost isn't happening. CPU boosting represents _massive_ performance gains in everyday tasks such as web browsing and office work - even in very thermally limited systems. Which is why most OEMs let their CPUs constantly bounce off the thermal throttle point of the CPU - it doesn't harm the CPU or system in any way, but allows for far better responsiveness and performance. We as enthusiast DIYers tend not to accept this, nor the performance loss inherent to it (with the commonly accepted wisdom being that if you're bouncing off the throttle point in a DIY system, replace your cooler and gain performance!). This is doubly true if we also want silence - another factor most OEMs don't care about.

So as SFF enthusiasts - which is a niche and extreme approach to DIY PC builds, after all - we need to accept that a) we might not get peak performance, b) we'll need to tune our systems more, and c) there are no official specs denoting the information we need. That's life. And it's not going to change. Luckily SFF case designs are progressing at a rapid pace, allowing for much, much better cooling in smaller volumes than ever before (the number of <15l cases fitting 240 or even 280mm radiators today compared to 3-4-5 years ago speaks to this), so the tuning and compromises are shrinking, or at worst keeping pace with how power draw is deviating from the expectation created by TDP classes. But we need to accept that our use case is non-standard, and account for that in our builds. (On a related note, are you on the smallformfactor.net forums?)

Btw, did you test your 3600 when you had it at its 45W cTDP/Eco Mode setting? That might have been a better fit for your case/cooling.


----------



## AusWolf (May 25, 2021)

GorbazTheDragon said:


> Personally at least, I think that with SFF you really should be putting the effort in to tune the stuff properly around your cooling (and other capabilities)... At least basically all retail motherboards will let you fiddle with the power limits (even if some of them are wholly inadequate for running sustained boost with high current workloads). On Haswell for example, some people (myself included) used to run power limits to emulate an AVX offset.
> 
> Definitely it would be interesting to see some testing of desktop CPUs at different power limits, I know some laptop reviewers already do so.


I just found one from GN, though it's with an 11700K at 125 W power limit vs. unlocked. It would be nice to see the same with the non-K at 65 W.



Valantar said:


> AMD's TDP formula was explicitly created as a reverse-engineering of Intel's formula so that cooler manufacturers and SIs could treat them equivalently in their design processes. Of course this has been undermined by Intel since being on a path of making TDP ever less meaningful of a metric, but still - that's not AMD's fault.
> 
> This goes to both you and @GorbazTheDragon: you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.
> 
> ...


The only thing I don't understand is... well, let's take three 65 W TDP CPUs that I've had as examples:

The Core i7-7700 (non-K): At stock settings, the crappy box cooler managed to keep it from thermal throttling, though it was so loud that I swapped it for a 120 mm AIO (I had a Coolermaster Elite 130 case back then), and never had an issue since. It consumed roughly 60-65 W, and really, the only reason I had to stop using the box cooler is the unbearable noise at high RPMs.
The Ryzen 3 3100: At stock settings, the box cooler (Wraith Stealth) is more than enough to keep it cool. 70 C max with low RPM in a case with limited airflow (1x 8 cm fan on the bottom as intake, and 1x 8 cm on top as exhaust). Package power is at 50 W under full load.
The Ryzen 5 3600: At stock settings, the box cooler (same Wraith Stealth) failed to keep it within acceptable temps even with high RPM. Even the be quiet! Shadow Rock LP couldn't keep it below 80 C on low RPM settings (probably because of the limited airflow in the case). Package power is just short of 90 W.
If TDP has more to do with heat and cooling specifications for OEMs (as stated by both Intel and AMD), then how can these three totally different (from a thermal perspective) CPUs fall into the same category? 



Valantar said:


> That would be great! Completely agree if reviews would cover this. At least TPU does test at both Intel official spec as well as unlocked power limits. But depending on just how SFF you go, there'll always be tuning (and the related stability testing) needed, which would drastically increase the reviewers' workload. And of course binning/silicon lottery outcomes dramatically affect this. So it's not very likely to happen.


That's true, and I really appreciate it, though again, I could only find the 11700KF, but not the non-K. It seems like the higher core count non-K SKUs are generally forgotten by reviewers for some reason.



Valantar said:


> But giving up boost isn't happening. CPU boosting represents _massive_ performance gains in everyday tasks such as web browsing and office work - even in very thermally limited systems. Which is why most OEMs let their CPUs constantly bounce off the thermal throttle point of the CPU - it doesn't harm the CPU or system in any way, but allows for far better responsiveness and performance. We as enthusiast DIYers tend not to accept this, nor the performance loss inherent to it (with the commonly accepted wisdom being that if you're bouncing off the throttle point in a DIY system, replace your cooler and gain performance!). This is doubly true if we also want silence - another factor most OEMs don't care about.
> 
> So as SFF enthusiasts - which is a niche and extreme approach to DIY PC builds, after all - we need to accept that a) we might not get peak performance, b) we'll need to tune our systems more, and c) there are no official specs denoting the information we need. That's life. And it's not going to change. Luckily SFF case designs are progressing at a rapid pace, allowing for much, much better cooling in smaller volumes than ever before (the number of <15l cases fitting 240 or even 280mm radiators today compared to 3-4-5 years ago speaks to this), so the tuning and compromises are shrinking, or at worst keeping pace with how power draw is deviating from the expectation created by TDP classes. But we need to accept that our use case is non-standard, and account for that in our builds. (On a related note, are you on the smallformfactor.net forums?)
> 
> Btw, did you test your 3600 when you had it at its 45W cTDP/Eco Mode setting? That might have been a better fit for your case/cooling.


Very true again. The bad thing about it is that there's no way of knowing how a CPU performs with the tweaking/settings we need before buying one. Right now, I'm torn between building an Intel system with the Core i7-11700 non-K, and waiting for the Ryzen 7 5700GE to come for the DIY market.

smallformfactor.net? I didn't know it existed, thanks for the info. I'll definitely check it out. 

Just as I didn't know the 3600 had cTDP! Where is it? I remember working on a laptop with an Intel CPU with cTDP - the setting was in the Windows power plan settings - but there was nothing like it with the 3600.


----------



## Mussels (May 26, 2021)

AusWolf said:


> how can these three totally different (from a thermal perspective) CPUs fall into the same category?


Because they make their own metrics, and their own testing - so they can throw them out however they want

they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later

3600 was too much for the wraith stealth, but then if they re-labelled it to a 95W chip they'd have to throw in a wraith prism, OEMs would need to include better coolers to meet their specs, and blah blah blah... intel does the same shit (only worse, with PL1/PL2)


----------



## Valantar (May 26, 2021)

Mussels said:


> Because they make their own metrics, and their own testing - so they can throw them out however they want
> 
> they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later
> 
> 3600 was too much for the wraith stealth, but then if they re-labelled it to a 95W chip theyd have to throw in a wraith prism, OEMs would need to include better coolers to meet their specs, and blah blah blah... intel does the same shit (only worse, with PL1/PL2)


More or less, yes. Though that's the glass-half-empty view. The glass-half-full view is that TDP is still only promising base clock performance, with anything above that being temporary and/or optional. This is of course not what's advertised at retail, but with retail chips you also need to supply your own cooler (for most CPUs), and even with a crappy stock cooler you'll see the boost clocks in responsiveness-driving bursts. Is this honest advertising? Both yes and no. It's mainly overcomplicated, and that overcomplication is only the fault of the CPU makers. The biggest issue is that the problem is getting worse, proliferating down the product stack (and thus reaching a far wider audience), while CPU makers are doing nothing to alleviate it.


AusWolf said:


> The only thing I don't understand is... well, let's take three 65 W TDP CPUs that I've had as examples:
> 
> The Core i7-7700 (non-K): At stock settings, the crappy box cooler managed to keep it from thermal throttling, though it was so loud that I swapped it for a 120 mm AIO (I had a Coolermaster Elite 130 case back then), and never had an issue since. It consumed roughly 60-65 W, and really, the only reason I had to stop using the box cooler is the unbearable noise at high RPMs.
> The Ryzen 3 3100: At stock settings, the box cooler (Wraith Stealth) is more than enough to keep it cool. 70 C max with low RPM in a case with limited airflow (1x 8 cm fan on the bottom as intake, and 1x 8 cm on top as exhaust). Package power is at 50 W under full load.
> ...


The answer here is pretty simple: as mentioned before, OEMs don't really care whatsoever about thermals _as long as they stay within spec_. 80C is within spec. 95C and not boosting as high is within spec, as long as it's not going below base clock, and device skin temperatures aren't excessive. You see this in pretty much every laptop out there - put a load on the CPU, and it stays bouncing off the throttle point, even when you _know_ the fans could spin up higher and reduce thermals notably. Intel of course has a history of supplying stock coolers that can't actually keep up with the TDP of the chip they're paired with, but that's unrelated to the TDP of the chip and simply due to them using shitty, under-specced coolers.

OEMs rely on _slightly_ overbuilding their coolers so that they can soak up short-term boost heat outputs, but also tune their systems accordingly (hence the massive variability in PL2 and tau in laptops especially, as those are the most restricted in terms of cooling). Enthusiast DIYers are often _massively_ overbuilding their coolers - but also have the rather utopian expectation of being able to run near or at peak boost constantly without a loud cooler or high temperatures. It's pretty obvious that depending on the setup, one or more of those factors will have to give.

But the key here is this: it's not _throttling_ unless it's below base clock. If it's at or above base clock, it's just not boosting (as high). That's the specification, that's what's _actually_ promised, although the marketing (with the ever-present but always quite invisible "up to") does a lot of work to make it seem otherwise. Marketing for current-gen CPUs is designed around _seeming_ to promise a lot, while _actually_ promising only base clocks - pure CYA, "you can't sue me for this", "we didn't actually mislead you, you just didn't pay attention to the right things (the ones we tried pretty hard to hide from you)".

Boost is after all opportunistic and contextual. So all the CPUs you mention are no doubt capable of maintaining their base clock within the TDP-level power draw without overheating as long as they are paired with a built-to-spec cooler. Some (most?) of them might even boost above base within those confines - like my i5-2400 that sticks at its boost clock 100% of the time and never comes close to 95W reported power draw. This used to be the norm back when Intel didn't have any real competition and could comfortably leave plenty of unused headroom in their silicon (i.e. up to Skylake or Kaby Lake). But with boost algorithms becoming ever more aggressive, sophisticated, and opportunistic, and CPU makers working to push their silicon as far as is safely possible to gain a competitive advantage in the ever-important short-term, bursty loads that largely determine the feeling of system responsiveness, the delta between base (actually promised) and boost (seemingly promised) speeds is growing dramatically, especially as core counts rise.

To alleviate this, the only feasible solution is to introduce some sort of two-tier power denomination for each chip - i.e. a clearly marked base power/boost power denomination. But any way you do that, it would arguably be just as big of a mess as the current mess. Does boost power mean 1, 2, 3, 4, n core boost? Is boost power a constant number? Can it be exceeded for short term loads? Must all hardware be able to maintain this number? Especially the latter question has huge ramifications, as (assuming boost power is constant and peak all-core) that would essentially require _every_ socket LGA 1200 motherboard to be able to feed ~290W to an 11900K indefinitely, for example. That would drive up B560 and H570 board prices through the roof. And if boost power isn't a platform requirement, what's really the point of the metric? Would motherboards (and OEM systems) need to start advertising which boost power level they are built for? That would be a complete mess, for sure. "Hi, I want a B560 motherboard." "Okay, do you want a 65W, 95W, 125W, 150W, 175W, 225W or 290W B560 motherboard?" Yeah, that's not going to work. And if all chips still work in all motherboards, just at different sustained performance levels, then nothing has actually changed from today save some modicum of transparency that will be entirely overshadowed by the sheer confusion it would bring with it.

Of course, MCE and similar "features" in DIY motherboards have established this problem long ago. But it used to be limited to high end SKUs, the i7s and i9s of the world. There was some commonly accepted wisdom that you'd more than likely sacrifice some performance if you paired a top-end CPU with a cheap motherboard. But now that thinking also applies to a relatively low-end (though still midrange in terms of pricing), non-K i5. And that's where this goes from a niche problem mostly limited to enthusiasts who presumably know of it and are willing and able to deal with it (or those with more money than sense), to a mass-market problem.


----------



## AusWolf (May 26, 2021)

Valantar said:


> More or less, yes. Though that's the glass-half-empty view. The glass-half-full view is that TDP is still only promising base clock performance, with anything above that being temporary and/or optional. This is of course not what's advertised at retail, but with retail chips you also need to supply your own cooler (for most CPUs), and even with crappy stock cooler you'll see the boost clocks in responsiveness-driving bursts. Is this honest advertising? Both yes and no. It's mainly overcomplicated, and that overcomplication is only the fault of the CPU makers. The biggest issues is that the problem is getting worse, proliferating down the product stack (and thus reaching a far wider audience), while CPU makers are doing nothing to alleviate it.
> 
> The answer here is pretty simple: as mentioned before, OEMs don't really care whatsoever about thermals _as long as they stay within spec_. 80C is within spec. 95C and not boosting as high is within spec, as long as it's not going below base clock, and device skin temperatures aren't excessive. You see this in pretty much every laptop out there - put a load on the CPU, and it stays bouncing off the throttle point, even when you _know_ the fans could spin up higher and reduce thermals notably. Intel of course has a history of supplying stock coolers that can't actually keep up with the TDP of the chip they're paired with, but that's unrelated to the TDP of the chip and simply due to them using shitty, under-specced coolers.
> 
> ...


Let's be honest, isn't this something that GPUs (especially nvidia) have been doing in the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max. boost is huge, and anywhere in between is within spec. The only difference is that nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only have GPU chip power draw on AMD cards which is just as shady a practice as their CPU TDP formula is.


----------



## Post Nut Clairvoyance (May 26, 2021)

AusWolf said:


> Let's be honest, isn't this something that GPUs (especially nvidia) have been doing in the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max. boost is huge, and anywhere in between is within spec. The only difference is that nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only have GPU chip power draw on AMD cards which is just as shady a practice as their CPU TDP formula is.


I think we should just ignore GPUs... At least Intel and AMD CPUs don't trip PSU protection when transients hit in gaming, where the CPU is fine because it isn't actually at 100% load, while the GPU goes crazy as frame output fluctuates.
Also, I don't think we'll ever get a decent, working power rating. It makes more money to keep them confusing.


----------



## Zach_01 (May 26, 2021)

Mussels said:


> Because they make their own metrics, and their own testing - so they can throw them out however they want
> 
> they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later
> 
> 3600 was too much for the wraith stealth, but then if they re-labelled it to a 95W chip theyd have to throw in a wraith prism, OEMs would need to include better coolers to meet their specs, and blah blah blah... intel does the same shit (only worse, with PL1/PL2)


Actually the 3600 can't be labeled as 95W, as it only draws ~88W max at full stock limits.
AMD's TDP labeling is about cooling requirements (heat in watts toward the cooler) during that max (88W) power consumption.
95W would be the heat in watts toward the cooler from a 125W (max) power draw CPU.

It's the 3100 that should be labeled as a 40~45W TDP part, or even less, as its max power draw is around 60W according to web info.

One can ask... why this discrepancy between max power consumption and the "expected" heat to the cooler?
The answer is simple: not all of the "produced" heat ends up in the cooler. Some of it escapes through the CPU substrate into the board. Coolers don't suck out all the produced heat - they just take the larger portion of it through conduction from the heat spreader.

It has nothing to do with Intel's labeling method, which is indeed related to PL1 only, whether that is base-clock consumption or not.

--------------------------------------------------------------

Let's take the example from GamersNexus for the 105W TDP (~142W PPT) 3900X and alter just the cooler's ambient temp to see what happens.
Remember that AMD's testing methodology works with fixed temperatures - they're trying to find the proper cooler capacity to achieve those.

TDP (Watts) = (tCase°C - tAmbient°C)/(HSF θca)



61.8°C = tCase°C = CPU case temp (optimal temp for CPU lid)
42°C = tAmbient°C = cooler's ambient temp (the inside of a case or the room ambient if there is no case)
0.189 = Cooler's thermal resistance

Let's say the cooler's thermal resistance is a constant 0.189 (constant mass, surface, material, fan RPM and TIM applied).
We all know that if we improve (decrease) the ambient temp of the room/case, the CPU temp will also decrease (if its power consumption is constant), but it won't decrease by the same amount.

So, we decrease the cooler's ambient temp by 2°C from 42°C to 40°C, and the CPU tCase°C is decreased by 0.6°C from 61.8°C to 61.2°C (sounds right to you?)

(tCase°C - tAmbient°C)/(HSF θca) = TDP (Watts)
(61.2 - 40) / (0.189) = 112.17

So, with the new ambient/CPU temps, the heat removal through the cooler is no longer 105W but 112W, even though the CPU's power consumption is unchanged (142W).

To take it even further, if we swap in a better cooler (most 240~280mm AIOs have a thermal resistance lower than 0.1), the heat removal will be much greater than 112W at the same power consumption (142W).
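
The arithmetic above is easy to sanity-check in a few lines (a toy recalculation of the quoted GN formula, nothing more - `tdp_watts` is just a name I'm using for illustration):

```python
# AMD-style TDP per the formula quoted above:
#   TDP (W) = (tCase degC - tAmbient degC) / (HSF theta_ca)
# theta_ca is the cooler's thermal resistance in degC per watt,
# which is why the division comes out in watts.
def tdp_watts(t_case, t_ambient, theta_ca):
    return (t_case - t_ambient) / theta_ca

# The 3900X example: 61.8 degC lid, 42 degC cooler ambient, theta_ca 0.189
rated = tdp_watts(61.8, 42.0, 0.189)        # ~104.8 W -> the "105 W" rating

# Drop ambient by 2 degC (tCase falls 0.6 degC), same cooler, same 142 W PPT:
cooler_load = tdp_watts(61.2, 40.0, 0.189)  # ~112.2 W through the cooler
```

Note how the result shifts with ambient temperature and cooler while the actual package power (142W PPT) never changes - which is the whole point of the example.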

---------------------------------------------------------

TDP (Thermal Design Power)
Definitely not a power consumption metric...


----------



## Mussels (May 26, 2021)

Zach_01 said:


> Actually 3600 can't be labeled as 95W as it only draws ~88W max on full stock limits.
> AMD's TDP labeling is about cooling requirements (heat in watts toward the cooler) during that max (88W) power consumption.
> 95W is the heat in watts toward the cooler from a 125W (max) power draw CPU.
> 
> ...



I used simplified examples of how they're lumped into brackets, rather than each model having specific details


----------



## AusWolf (May 27, 2021)

Zach_01 said:


> Actually 3600 can't be labeled as 95W as it only draws ~88W max on full stock limits.
> AMD's TDP labeling is about cooling requirements (heat in watts toward the cooler) during that max (88W) power consumption.
> 95W is the heat in watts toward the cooler from a 125W (max) power draw CPU.
> 
> ...


I understand why AMD made up their formula to refer to heat rather than power consumption, but it doesn't take a degree in engineering to know that subtracting two hugely variable and fairly independent values in °C and dividing the result by a constant that is not really a constant will never give you a meaningful result in W. Watt is the unit of power, which is the amount of work done in a certain time (Joules per second). I understand that AMD's version is a guidance towards cooling capacity, but it's still BS.

Let's make up another formula:
168 h = average time I work per month,
320 h = average time it feels like I work per month,
2 = my average stress level on a scale of 5,
168 h * 320 h * 2 = £ 107,520 per year. It looks like I'm severely underpaid.


----------



## Valantar (May 27, 2021)

AusWolf said:


> Let's be honest, isn't this something that GPUs (especially nvidia) have been doing in the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max. boost is huge, and anywhere in between is within spec. The only difference is that nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only have GPU chip power draw on AMD cards which is just as shady a practice as their CPU TDP formula is.


That is pretty much exactly what this is. And essentially it means that unlike a decade ago, when the 2700K had >50% OC potential just left in it, we now get a large portion of that performance included at stock - as long as the cooler and power delivery can keep up. I also agree that it would be simpler if CPU makers followed the GPU line on TDP, though the issue there is that GPU loads tend to be all or nothing, while CPU loads are hugely variable, so unlike GPUs you'd see a lot of cases where the TDP seems wildly overblown. Of course this would also be a nightmare for OEMs and SIs as they would need various cTDP-down modes for their PCs.

(Edit: one major difference though: GPUs don't power throttle, they crash. Given that they have purpose-built VRMs, they work from the assumption of always having plentiful power at hand, so when power is limited, they just crash outright. CPUs don't have that level of control and thus have to be a lot more flexible in responding to power delivery limitations.)

Btw, I forgot to respond to this:


AusWolf said:


> As I also didn't know the 3600 had cTDP!  Where is it? I remember working on a laptop with an Intel CPU with cTDP. The setting was in the Windows power plan settings, but there was nothing like it with the 3600.


In my ASRock BIOS it's labelled Eco Mode or something like that. It allows for stepping 105W CPUs down to 65W, and 65W CPUs down to 45W. AFAIK all it does is set PPT/EDC/TDC/ETC/FTW/WTF (yes, I hate all these generic abbreviations) to preset lower levels, but it's really useful. AMD actually had this before Ryzen too - the A8-7600 that I just retired from my NAS had a 45W mode in its BIOS.

For a build like yours with limited cooling I would probably look at an APU instead though. At least my experiences with cooling a 4650G is that it's _so damn easy_. I've got one in my HTPC, which lives in a small Lazer3D HT5 case, uses a modified Arctic Accelero S1 GPU cooler as a CPU cooler (it's bent and mangled to fit and I had a mounting bracket laser cut from aluminium), and while there is a 140mm fan on it, that fan is off >95% of the time. Yes, there is a vent directly above the cooler, but it's still a 6c12t CPU running passively in a tiny case. It even keeps switching off the fan while gaming - I'd estimate the fan is off and on ~50% of the time while playing Rocket League using the iGPU (with the iGPU OC'd to 2100MHz, RAM at 3800MT/s, CPU stock). Oh, and the CPU routinely boosts 100-200MHz above spec in desktop workloads. In comparison, my 5800X (stock) runs pretty warm even under water.


AusWolf said:


> I understand why AMD made up their formula to refer to heat rather than power consumption, but it doesn't take a degree in engineering to know that subtracting two hugely variable and fairly independent values in °C and dividing the result with a constant that is not really a constant will never give you a result in W. Watt is the unit of power which is the amount of work done in a certain time (Joules per second). I understand that AMD's version is a guidance towards cooling capacity, but it's still BS.


It's not BS, it's a useful formula for calculating the cooling needs of a PC. Pick your worst-case scenario ambient temp (most PCs tend to be rated for operation at 40-45°C ambient, though case ambient can easily be 10°C above room ambient), your desired maximum tCase, and you can then either plug in a known TDP and have the formula tell you the needed thermal resistance of your cooler to maintain that temperature, or you can plug in a known thermal resistance from a cooler you have and have the formula tell you which TDP tier it's suitable for. To reiterate what I started my minor wall of text above with:


Valantar said:


> TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around.


From this perspective, the formula seems eminently useful. As do the categorizations/tiers for CPUs. You just can't expect those to be equivalent to power draw.

And in a DIY PC, not only does nobody ever do a calculation like this, but you're dealing with retail coolers that use various (and often dubious) other formulas for their TDP claims (and never publish thermal resistance numbers), airflow is dependent on the case, fans, and heaps of other choices which influence the cooling efficiency of the system, and last but not least, our expectations are much higher - we want peak performance all the time, while running cool and quiet. Even if cooler manufacturers published thermal resistance numbers (which would be problematic, as thermal resistance is dependent on the heat input as well as the cooler - a cooler that's _great_ at 200W might be average at 65W, or a _great_ 65W cooler might cause thermal throttling at 150W), those would almost by necessity be at fixed full fan speeds - which nobody generally wants to run their coolers at.

So there's no easy way out of this past the obvious: view TDP numbers as vague categorizations that _indicate_ something about power draw but don't _say_ that power draw will never exceed the stated number, and rely on reviews of relevant components for more accurate data.
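The two directions described above can be sketched with the same formula rearranged (the 62°C tCase and 45°C case ambient below are illustrative assumptions, not vendor figures):

```python
# Two uses of (tCase - tAmbient) / theta_ca = TDP:
# 1) given a TDP tier, find the thermal resistance a cooler must beat;
# 2) given a cooler's thermal resistance, find which TDP tier it suits.

def required_theta_ca(tdp_w: float, t_case_c: float, t_ambient_c: float) -> float:
    """Max case-to-ambient thermal resistance (°C/W) that still holds t_case_c."""
    return (t_case_c - t_ambient_c) / tdp_w

def max_tdp(theta_ca: float, t_case_c: float, t_ambient_c: float) -> float:
    """Largest sustained heat load (W) a cooler of this thermal resistance handles."""
    return (t_case_c - t_ambient_c) / theta_ca

print(required_theta_ca(65, 62, 45))  # ≈0.262 °C/W needed for a 65 W tier
print(max_tdp(0.1, 62, 45))           # a ~0.1 °C/W AIO handles ~170 W here
```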

The issue here, which started this thread, is that Intel's current over-aggressive boosting and non-enforcement of specifications has thrown another (semi-uncontrollable) variable into this mix. Where it used to be "pick your CPU, then pick a cooler that can handle it", it now is "pick your CPU, a cooler that can handle it, and a motherboard capable of sustaining its above-stock boost if you want review-like performance". And that's a big change.


----------



## The red spirit (May 27, 2021)

Valantar said:


> and last but not least, our expectations are much higher - we want peak performance all the time, while running cool and quiet.


But are we actually to be critiqued there? Tell me which YouTuber enforces Intel spec, or better yet, tests just at base clock speed? There aren't many, and since all that is shown in benches is results with maxed-out power limits, it forms an expectation that this is how those chips are supposed to perform and anything less is unacceptable. It's simply psychologically unacceptable to enforce 'Intel spec' and be fine with performance losses, even more so when seemingly nobody else does. This situation is even worse when Intel chips need an unlocked TDP to actually be competitive with Ryzen; otherwise they would be behind Ryzen in every bench. At least now Intel chips are either worse or the same. Not a great situation, but much better than tanking in all benches. AMD did the same with AM3+ and FX chips: they also lied a lot about TDP, to the point of motherboards frying, and there was that unmitigated FX 9590 disaster, which only worked on several very expensive boards. Intel should just stop making sub-3 GHz non-K chips, be honest, and raise TDPs. The i5 11400 would look much nicer with a 3.6-3.8GHz base clock, 4.4GHz boost and an 80 watt TDP. Intel should also post their own official PL2 and all-core maximum boost clock. Actually, I think they should just get rid of the PL2, PL3 and PL4 stuff; it adds complexity and has no real benefit.

Frankly, Intel has too much corporate bullshit and they need to kill it.


----------



## AusWolf (May 27, 2021)

The red spirit said:


> But are we actually to be critiqued there? Tell me which YouTuber enforces Intel spec, or better yet, tests just at base clock speed? There aren't many, and since all that is shown in benches is results with maxed-out power limits, it forms an expectation that this is how those chips are supposed to perform and anything less is unacceptable. It's simply psychologically unacceptable to enforce 'Intel spec' and be fine with performance losses, even more so when seemingly nobody else does. This situation is even worse when Intel chips need an unlocked TDP to actually be competitive with Ryzen; otherwise they would be behind Ryzen in every bench.





Valantar said:


> The issue here, which started this thread, is that Intel's current over-aggressive boosting and non-enforcement of specifications has thrown another (semi-uncontrollable) variable into this mix. Where it used to be "pick your CPU, then pick a cooler that can handle it", it now is "pick your CPU, a cooler that can handle it, and a motherboard capable of sustaining its above-stock boost if you want review-like performance". And that's a big change.


That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on YouTube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life but puts X CPU just ahead of the competition. That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine rpm at the redline all the time? I don't think so. Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU), and nobody complains about it.

Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes the power limits out into space so that the CPU can maintain a higher boost clock even at 100% workload. With this enabled, my 5950X chewed through around 180 Watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... Why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.



The red spirit said:


> Intel should just stop making sub 3 GHz non K chips, be honest and raise TDPs. i5 11400 would look much nicer with 3.6-3.8GHz base clock, 4.4GHz boost and 80 watt TDP. Also Intel should post their own official PL2 and all core maximum boost clock. I think that they should just get rid of PL2, PL3 and PL4 stuff. It adds complexity and has no real benefit.


I agree, but that would be bad marketing, wouldn't it?



Valantar said:


> That is pretty much exactly what this is. And essentially it means that unlike a decade ago, when the 2700K had >50% OC potential just left in it, we now get a large portion of that performance included at stock - as long as the cooler and power delivery can keep up. I also agree that it would be simpler if CPU makers followed the GPU line on TDP, though the issue there is that GPU loads tend to be all or nothing, while CPU loads are hugely variable, so unlike GPUs you'd see a lot of cases where the TDP seems wildly overblown. Of course this would also be a nightmare for OEMs and SIs as they would need various cTDP-down modes for their PCs.
> 
> (Edit: one major difference though: *GPUs don't power throttle*, they crash. Given that they have purpose-built VRMs, they work from the assumption of always having plentiful power at hand, so when power is limited, they just crash outright. CPU's don't have that level of control and thus have to be a lot more flexible in responding to power delivery limitations.)


They don't throttle, but they adjust their boost bins. My 1650 runs at different clock speeds during different workloads: Superposition 720p or 1080p Medium lets it run at 1920-1950 MHz, it does around 1900 in 1080p Ultra, and 1860 in Cyberpunk 2077.



Valantar said:


> In my ASRock BIOS it's labelled Eco Mode or something like that. It allows for stepping 105W CPUs down to 65W, and 65W CPUs down to 45W. AFAIK all it does is set PPT/EDC/TDC/ETC/FTW/WTF(yes, I hate all these generic abbreviations) to preset lower levels, but it's really useful. AMD actually used to have this before Ryzen too - the a8-7600 that I just retired from my NAS had a 45W mode in BIOS.


I don't remember seeing a similar setting in my BIOS when I still had the 3600. It would be nice to play with it. Too late I guess. 



Valantar said:


> For a build like yours with limited cooling I would probably look at an APU instead though. At least my experiences with cooling a 4650G is that it's _so damn easy_. I've got one in my HTPC, which lives in a small Lazer3D HT5 case, uses a modified Arctic Accelero S1 GPU cooler as a CPU cooler (it's bent and mangled to fit and I had a mounting bracket laser cut from aluminium), and while there is a 140mm fan on it, that fan is off >95% of the time. Yes, there is a vent directly above the cooler, but it's still a 6c12t CPU running passively in a tiny case. It even keeps switching off the fan while gaming - I'd estimate the fan is off and on ~50% of the time while playing Rocket League using the iGPU (with the iGPU OC'd to 2100MHz, RAM at 3800MT/s, CPU stock). Oh, and the CPU routinely boosts 100-200MHz above spec in desktop workloads. In comparison, my 5800X (stock) runs pretty warm even under water.


That would be a solid plan if there were an APU available. The Ryzen 4000 series is expensive and very difficult to find (it's also kind of a downgrade in gaming), and the 5000 series isn't out in DIY channels yet.

Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon. 



Valantar said:


> It's not BS, it's a useful formula for calculating the cooling needs of a PC. Pick your worst-case scenario ambient temp (most PCs tend to be rated for operation at 40-45°C ambient, though case ambient can easily be 10°C above room ambient), your desired maximum tCase, and you can then either plug in a known TDP and have the formula tell you the needed thermal resistance of your cooler to maintain that temperature, or you can plug in a known thermal resistance from a cooler you have and have the formula tell you which TDP tier it's suitable for. To reiterate what I started my minor wall of text above with:


If it wasn't BS, it wouldn't have tricked me into swapping my 65 W TDP processor for another 65 W part and expecting it to work just fine. Maybe it works for OEMs whose only goal is to make their systems 'just work', even if at the edge of throttling, but DIYers need to know what to expect and how to build their systems before buying.


----------



## The red spirit (May 27, 2021)

AusWolf said:


> That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on youtube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life, but puts X CPU just ahead of the competition.


I don't think that it was only an extra 1%. In some games I really do notice the performance improvement of having PL values maxed out. For me, the "stock" value just feels like choking the CPU for no good reason instead of truly enjoying it. And yet at the same time, doing stuff like that probably isn't good for motherboard longevity. BTW, that game is Wreckfest. It seemed that I got 55 fps at lows instead of more, and it bothered me. I also seem to benefit from more performance in Genshin Impact. However, in "productivity" loads I couldn't care less about performance loss; it's not a load where work output should be seen in real time, you just click and let the computer do its stuff.




AusWolf said:


> That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine rpm at redline all the time? I don't think so.


And yet power curves are important for daily driving and for spirited driving, but who actually talks about them? No one. And god forbid you didn't buy the souped-up version of some car and got the more basic version. Then there's no way to get such data at all, unless you go to a dyno and measure it yourself. Car manufacturers are no better than Intel in their spec sheets.



AusWolf said:


> Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU) and nobody complains about it.


Actually, you do see the maximum boost clock quite often on Intel chips. I often see 4.3GHz on the i5 10400F, and an all-core maximum clock of 4GHz at pretty much any load. I heard that Ryzen chips simply don't have such tables; they keep increasing clock speed as long as cooling permits, up to the maximum turbo speed specified by AMD. Also, AMD does that in 25MHz increments and Intel in 100MHz increments. I don't remember many details, but AMD and Intel boosting algorithms are substantially different.



AusWolf said:


> Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes power limits out in space so that the CPU can maintain a higher boost clock even at a 100% workload. With this enabled, my 5950 chewed through around 180 Watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... Why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.


Oh dear, those Ryzens are bad at dissipating heat. I remember a stock FX 6300 consuming over 200 watts in a stress test, despite being marketed as a 95 watt chip. It didn't have a wattage limiter, so turbo worked as long as there was thermal and VRM headroom (aka forever, in most cases). It was rather easy to cool and didn't really need anything more than a Hyper 103 cooler. That cooler was fine for a 4.4-4.6GHz all-core overclock, and it had to keep temps under 62°C, because that was the thermal limit at first (later updated to 72°C). The Ryzen 5950X should, in theory, be much easier to cool than FX. However, if it's that impossible to cool well, then obsessing over boost clock is a waste of time. I personally think PL and PPT values should be abandoned, as nobody really cares about those when cooling a CPU; instead there could be a temperature limiter which reduces boost speed at a certain set temperature. It would be much easier to set up than a vague TDP, which means almost nothing to the end user.




AusWolf said:


> I agree, but that would be bad marketing, wouldn't it?


I doubt it. Intel has successfully sold many chips with higher TDPs, and people really don't care too much about TDP anyway. Many Intel i5 and i7 chips had TDPs in the 80s or 90s. And let's not forget current K chips, which are specced at a 125 watt TDP. Bad marketing is letting OEMs mess with TDP too much and ending up with the current TDP bullshit; that's the last thing Intel needed after losing a lot of reputation. The TDP spec mostly matters to prebuilt computer OEMs, which want to engineer a cooling solution for exactly the stated wattage and not a bit better.

You can watch this video:

Bitching about lost boost starts at 10:30

This situation just isn't good. It feels like a lot of performance is being lost by sticking to a too-low TDP spec or by using a 65 watt cooler. And if an enthusiast buys an Intel chip today and invests in a better-than-stock cooler, one can easily gain a lot of performance. The question is: is it gaining performance, or just unchoking the chip from a stupid Intel spec? In times when the i5 11400F has a base speed of 2.6GHz and a maximum boost clock of 4.4GHz, I would say that if you actually stick to the 65 watt TDP (turbo boost off, as Intel specifies that TDP is at base clock speed), then you would be losing a bit less than half of the CPU's performance to get that 65 watt TDP. In real-life loads you will still get closer to 3.4GHz even at a 65 watt TDP with turbo on, but it only takes one heavy task to keep the CPU running at base speed to fit into the tiny 65 watt power budget. BOINC might be a heavy enough load to not see more than 2.6GHz on that chip, when at a higher power budget it could be running at 4GHz on all cores. That's a lot of performance loss on chips which, other than the stupid TDP spec, can perform much better, granted that you use an aftermarket tower cooler. Even a 92mm tower cooler would likely be enough for an i5 with unlocked PL values.
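As a rough sanity check on that "a bit less than half" figure, assuming performance scales linearly with clock speed (a simplification; real scaling is workload-dependent and usually sub-linear):

```python
# i5-11400F published clocks: 2.6 GHz base vs. 4.4 GHz maximum boost.
# Naive estimate of performance lost when pinned at base clock.
base_ghz, boost_ghz = 2.6, 4.4
loss = 1 - base_ghz / boost_ghz
print(f"{loss:.0%}")  # 41% -> "a bit less than half" of peak performance
```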

And people seem to overlook another Intel CPU line, the T series chips, which are rated at 35 watts. The i9 10900T was rated at 35 watts, and to do that it has a base speed of 1.9GHz and a maximum boost clock of 4.6GHz. In this case you won't ever see it running at base speed or at TDP. For Intel chips, the maximum all-core boost clock is essentially the new base clock. And those T chips were really bad at their job, as they hardly saved any power compared to the non-T versions, thanks to stupidly high PL2 values. Bullshit like that destroyed any value of a separate T SKU. What's the point of getting the T version when you can get the non-T version and then set TDP to whatever you like? And for that matter, what's the point of getting a K SKU when you can just ramp up PL values on a lower-end chip and it will be almost as good as the K version? A non-K chip wouldn't even lose its warranty from having PL values modified, as Intel let motherboard OEMs go wild.

It's such a bad shitstorm that I don't even know which is the least painful way to resolve it anymore. Enforce strict TDP? Raise TDP? Get rid of PL2? Keep performance or accept losses? All this nonsense just makes me want to go back to the era of a single clock speed for everyone and be done with all this TDP bullshit. Let TDP be whatever is needed for the rated clock speed and be fine with the results, but computer OEMs wouldn't be having any of that.



AusWolf said:


> Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon.


Cheers, but be ready for existential crisis of whether to unlock PL values or not.


----------



## The red spirit (May 27, 2021)

Inspired by this thread:

Be careful when recommending B560 motherboards to novice builders (HWUB)
www.techpowerup.com




Intel B560 chipset boards fail again, hard. This time they can't even sustain "Intel spec" settings for 125 watt chips (Intel suggests a PL1 of 125 watts and a PL2 of 251 watts for the 11900K, and a PL1 of 125 watts and a PL2 of 224 watts for the 11600K). The boards failed to sustain even around 100 watts, making them a complete no-go with any K chip and in fact quite toasty with non-K chips. The ASRock and Gigabyte low-end boards failed the VRM test: both overheated their VRMs and failed to sustain the base clock speed of the CPUs, making K series chips RMA-able in such a case (if a chip can't sustain base clock speed, Intel offers an RMA for it). The motherboards, however, "work as expected", and by that I mean no refund and no RMA if you are unhappy with them.










As always, when you buy a motherboard, pay attention to VRM quality, especially on the Rocket Lake platform.


----------



## Gmr_Chick (May 28, 2021)

For those that don't feel like watching the video, in short, ASRock had one job and massively blew it. Again.  

Kudos to Steve for calling them out on their bullshit yet again. Make no mistake though, none of the boards in the video were exactly stellar.

Also, it sounded like Steve was suggesting the VRMs on ASRock's H510 boards were even worse, yet they claim to support the 11900K on the product page.


----------



## The red spirit (May 28, 2021)

Gmr_Chick said:


> For those that don't feel like watching the video, in short, ASRock had one job and massively blew it. Again.


Gigabyte failed too.


----------



## Gmr_Chick (May 28, 2021)

The red spirit said:


> Gigabyte failed too.


That's also true, yes. But not *quite* as hard as ASRock.


----------



## Post Nut Clairvoyance (May 28, 2021)

Well, ASRock does have a hard limit on PL2, and it's lower than the PL1 of a 125W CPU like the 11600K. I like to mess with my cheap toys, like adding heatsinks myself and blasting them with air to make them do things they were never designed to do. It's a dick move on ASRock's part, since they have a better VRM than MSI's PRO-E (thanks MSI for reminding me NIKO Sem exists). ASRock hard-locks their VRM to 100W and subsequently 80-90°C. The Gigabyte board had a slightly better VRM than the HDV I think, but it also allowed you to not throttle the base clock of an 11600K.

Hopefully either 10nm brings efficiency improvements, or Intel gets stricter about at least not failing base clock on an i5... whereas (yes, AMD CPUs are more efficient, but that has no bearing on a board that violates Intel's minimum spec) a 3600X can be run on an A320 with no problems and no modifications (software or hardware).


----------



## Mussels (May 28, 2021)

The red spirit said:


> Inspired by this thread:
> 
> 
> 
> ...










'Cause this isn't a misleading absolute clusterfark, when these results are all from the same CPU...

This is ASRock's particular fail, where the 125W parts all got the 65W limit treatment:


----------



## Zach_01 (May 28, 2021)

The red spirit said:


> Inspired by this thread:
> 
> 
> 
> ...


This is a cheap board without even just a chunk of metal on those VRMs.
Most of these boards should've been labeled as i3 capable boards. They don't even run i5s properly, let alone i7/9s.

What are vendors and Intel thinking...  (?)


----------



## AusWolf (May 28, 2021)

The red spirit said:


> Actually, you do see maximum boost clock quite often on Intel chips. I often see 4.3GHz on i5 10400F. And I often see all core maximum clock of 4GHz at pretty much any load. I heard that Ryzen chips simply don't have such tables and if they can they will keep increasing clock speed as long as cooling permits doing so until maximum specified turbo speed by AMD, also AMD does that by 25MHz increments and Intel does it in 100Mhz increments. I don't remember many details, but AMD and Intel boosting algorithms are substantially different.


That's exactly what I mean.  AMD CPUs don't have boost tables like Intel chips do. That's why nobody cares if you see 3.6 or 3.8 or even 4 GHz in all-core workloads. No one ever said what clocks you should see, so you're not expecting anything.



The red spirit said:


> Oh dear, those Ryzens are bad at dissipating heat.


That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT. The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market. 

I remember the FX times too. My 8150 was the most brilliant terrible processor I've ever had. 



The red spirit said:


> And people seem to overlook another Intel CPU line, the T series chips, which are rated at 35 watts. i9 10900T was rated at 35 watts and to do that it has base speed of 1.9GHz and maximum boost clock of 4.6GHz. In this case you won't ever see it running at base speed or at TDP. For Intel chips maximum all core boost clock is essentially a new base clock. And those T chips were really bad at their job as they hardly saved any power when compared to non T version. Thanks to stupidly high PL2 values. Bullshit like that destroyed any value of separate T sku. What the point of getting T version, when you can get non T version and then set TDP to whatever you like? And for that matter what's the point of getting K sku, when you can just ramp up PL values on lower end chip and it will be almost as good as K version? Also non k version wouldn't even lose warranty from having PL values modified as Intel let motherboard OEMs go wild.


If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity.  It's a shame T series are not available for DIY with 11th gen.



The red spirit said:


> It's such a bad shitstorm, that I don't even know which is the least painful way to resolve it anymore. To enforce strict TDP? To raise TDP? To get rid of PL2? To keep performance or to accept losses? All this nonsense just makes me want to go back to era of single clock speed for everyone and be done with all this TDP bullshit. Let TDP to be whatever is needed for rated clock speed and be fine with results, but computer OEMs wouldn't be having any of that.


To be honest, I've disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.



The red spirit said:


> Cheers, but be ready for existential crisis of whether to unlock PL values or not.


Well honestly, I'm happy with my Ryzen 3 3100, as I don't really need more with my GTX 1650 at the moment. The i7 is only meant to be a bit more future-proof, but more importantly to satisfy my curiosity about how heat dissipation differs between 7 nm chiplets and a 14 nm central die. If it ends up fine, it will be money well spent. If not, I can sell it at any time, being a modern and newly bought part.

As for the existential crisis, I'm planning to do some in-depth testing of different PL values and cooling possibilities in a tiny case with restricted airflow. If there is interest, I might as well publish the results (maybe open a new forum thread) for future SFF builders.


----------



## The red spirit (May 28, 2021)

Zach_01 said:


> This is a cheap board without even just a chunk of metal on those VRMs.
> Most of these boards should've been labeled as i3 capable boards. They don't even run i5s properly, let alone i7/9s.
> 
> What are vendors and Intel thinking...  (?)


I dunno. But I have one of the cheapest FM2+ boards from Gigabyte, I think it's an A68H-DS2 or something. It handles an Athlon 760K at stock perfectly fine, and judging by VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip, and even at stock it uses an obnoxious amount of power if all cores are loaded (140-160 watts). All it has is 4+1 phases, and they are bare, with no heatsink. I also have some A68H-HD+ ASRock board, and it handles an Athlon 845 perfectly fine too, again with bare 4+1 phases. So it's not like OEMs can't make a functional board with bare VRMs that can supply that wattage; I think they just down-specced them too much, so they overheat, thus turning an i5 11600K into an 11600.



AusWolf said:


> That's exactly what I mean.  AMD CPUs don't have boost tables like Intel chips do. That's why nobody cares if you see 3.6 or 3.8 or even 4 GHz in all-core workloads. No one ever said what clocks you should see, so you're not expecting anything.


At least Intel lets you fry your stuff if you're a complete idiot.



AusWolf said:


> That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler. The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch). The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market.


72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I wouldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.



AusWolf said:


> I remember the FX times too. My 8150 was the most brilliant terrible processor I've ever had.


Oh, I so agree here; I loved my FX 6300. It was my first chip that I pushed to 5.288 GHz, and the first chip to turn the VRM area of a motherboard brown in the process. I gotta say I didn't really care if it destroyed motherboards, as long as it was fun to overclock. It was also great at undervolting, and it could run passively cooled with a Scythe Mugen 4 heatsink. I kept using it until 2019, when its performance just wasn't good enough anymore. Apparently FX chips lasted a long time and were still surprisingly not bad even in 2019:












AusWolf said:


> If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity.  It's a shame they're not available for DIY with 11th gen.


The problem is that they aren't usually sold for much less than the non-T version.



AusWolf said:


> To be honest, I disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.


Perhaps. For me, coming from FX, it was obvious that only the base clock is an important spec, because boost is opportunistic and never a guaranteed speed. That's how I thought until I discovered the PL and Tau stuff. FX and Phenom II chips had an awful boost that wasn't worth bothering with, and I just disabled it as it pushed way too many volts for small performance gains, and boosting could ruin other cores' clock speeds. You also had to enable the C6 state and AMD APM, and I'm not a fan of either. But on Intel you can disable any of that shit and set PL high.




AusWolf said:


> Well honestly, I'm happy with my Ryzen 3 3100 as I don't really need more for my GTX 1650 at the moment. The i7 is only meant to be a bit more future-proof, but more importantly to satisfy my curiosity about how heat dissipation differs between 7 nm chiplets and a 14 nm central die. If it ends up fine, it will be money well spent. If not, I can sell it at any time, being a modern and newly bought part.
> 
> As for the existential crisis, I'm planning to do some in-depth testing of different PL values and cooling possibilities in a tiny case with restricted airflow. If there is interest, I might as well publish the results (maybe open a new forum thread) for future SFF builders.


Oh, then I guess it's fine. One tip: I tested my 10400F at 45 watts and it was still surprisingly decent. Also, at 3.6 GHz it uses dramatically less power than at 4 GHz - the impact was so great that wattage came down from 111 watts to just 74 watts, if the CPU is reporting it correctly. Judging from my wall readings, I have no reason to suspect it isn't.
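
That drop is roughly what you'd expect from first-order CPU power scaling: dynamic power goes as P ∝ f·V², and a lower clock also permits a lower voltage. A minimal sketch - the voltage points below are illustrative assumptions, not measured 10400F values:

```python
# Rough sketch of why a small clock drop cuts power disproportionately:
# dynamic power scales roughly as P ∝ f * V^2, and lower clocks allow
# lower voltage. The voltages here are illustrative guesses, not
# measurements from any specific 10400F.

def relative_dynamic_power(freq_ghz, volts, ref_freq=4.0, ref_volts=1.2):
    """Dynamic power relative to a reference operating point (P ∝ f·V²)."""
    return (freq_ghz / ref_freq) * (volts / ref_volts) ** 2

# Assumed points on the V/f curve: 4.0 GHz @ ~1.20 V vs 3.6 GHz @ ~1.00 V
ratio = relative_dynamic_power(3.6, 1.00)
print(f"3.6 GHz draws roughly {ratio:.0%} of the 4.0 GHz dynamic power")
# Same ballpark as the observed 111 W -> 74 W drop (~67%), ignoring
# static leakage and uncore power, which this toy model leaves out.
```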


----------



## AleXXX666 (May 28, 2021)

Well, the moral is simple, guys: don't put an 8-core max-clocker in the cheapest mobo you could get for cashback, lol. Balance is needed in PC building: it's not a good idea to pair an RTX 3090 with an i3-10100 or a Ryzen 3 3100, nor is it a good idea to build a Ryzen 9 on an A520, or a 10700/11700 on an H410, H510, or crappy B-chipset equivalent, just because the decent board costs $20 more, lol...


----------



## Zach_01 (May 28, 2021)

The red spirit said:


> I dunno, but I have one of the cheapest FM2+ boards from Gigabyte; I think it's an A68H-DS2 or something. It handles an Athlon 760K at stock perfectly fine, and judging by VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip, and even at stock it uses an obnoxious amount of power with all cores loaded (140-160 watts). All the board has is 4+1 bare phases, no heatsink. I also have an ASRock A68H-HD+ board, and it handles an Athlon 845 perfectly fine too, again on bare 4+1 phases. So it's not like OEMs can't make a functional board whose bare VRMs supply that wattage; I think they just down-specced these B560 boards too much, and therefore they overheat - turning an i5-11600K into an 11600.


References to 10-year-old systems provide little to zero insight into the current situation. Today a board without VRM heatsinks is an entry-level one, and it shouldn't be used with anything above an i3 for Intel or an R5 for AMD.


AusWolf said:


> That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT. The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market.


This is exactly the reason I switched thermal paste from a normal one (AS5) to liquid metal: very small die surface and off-center position.
With AS5, my R5 3600 (88 W) had the same max temp as my old FX-8370 (150+ W) with the same 280 mm AIO.
LM helped decrease temps by about 6-7 °C.

The downside of LM is the cost, handling, and reapplication. It also reacts with some materials like copper and aluminium.
I'm OK with these tradeoffs, but it's certainly not for the mainstream user.


----------



## The red spirit (May 28, 2021)

Zach_01 said:


> References on 10 year old systems provide little to zero insight at this current situation. Today a board without VRM heatsink is an entry level one and it shouldn't be used with anything above i3 for Intel and R5 for AMD.


I bought that board with the CPU in 2018, and it's the FM2+ platform, not FM2. It's not 10 years old. At that same time Zen+ was out.

BTW, Arctic Silver 5 is one of the worst-performing thermal pastes nowadays; there's no surprise that even some basic paste would beat it. It's literally dead last in tests.


----------



## Valantar (May 28, 2021)

The red spirit said:


> Inspired by this thread:
> 
> 
> 
> ...


Inspired by (and linking to) the same thread that you're posting in? 

Still, it's pretty atrocious that ASRock makes motherboards that don't even meet base spec, and that Intel doesn't enforce platform standards.


AusWolf said:


> That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on youtube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life, but puts X CPU just ahead of the competition. That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine rpm at redline all the time? I don't think so. Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU) and nobody complains about it.


The problem for reviewers is they essentially have to choose - either Intel spec or motherboard default, whatever that is - and depending on their resources, they are partially at the mercy of motherboard makers, as not everyone can afford to buy an expensive motherboard for a test platform. Most serious reviewers are pretty transparent about this as well as their reasoning behind whatever choice they make. But YouTubers generally don't count as 'serious reviewers'. GN is the only real exception (though perhaps Level1Techs should be included?). Other than that there's AnandTech, TPU, and a few other sites doing in-depth written reviews of high quality, including discussions of methodologies and the rationales behind them.


AusWolf said:


> Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes power limits out into space so that the CPU can maintain a higher boost clock even at a 100% workload. With this enabled, my 5950X chewed through around 180 Watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... Why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.


Yep, platform holders need to enforce their standards. Optional features from third parties are never a problem as long as they are optional.


AusWolf said:


> They don't throttle, but they adjust their boost bins. My 1650 runs at different clock speeds during different workloads - Superposition 720p or 1080p Medium lets it run at 1920-1950 MHz, it does around ~1900 in 1080p Ultra, and 1860 in Cyberpunk 2077.


Not the same. They adjust their boost bins to keep within power and thermal limits, and mainly the latter. Different workloads draw different amounts of power, so 1950MHz and 1860MHz might both draw the same amount of power depending on what work is being done.


AusWolf said:


> I don't remember seeing a similar setting in my BIOS when I still had the 3600. It would be nice to play with it. Too late I guess.


It's not the most widely advertised nor obviously visible setting, but it should be there on any current-gen motherboard.


AusWolf said:


> That would be a solid plan if there was an APU available. The Ryzen 4000 series are expensive and very difficult to find (they're also kind of a downgrade in gaming) and the 5000 series aren't out on DIY channels, yet.


I agree that they are difficult to find, but they're not that expensive - a bit of a premium, sure. I got my 4650G from a reputable eBay store and have had zero issues.


AusWolf said:


> Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon.


Good luck dealing with its stock 224W PL2, I guess?  Should be decent enough as long as you're willing to tune things manually, or to deal with short-term (28s tau) heat loads well above TDP.
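
For reference, the PL1/PL2/tau interplay can be sketched as a simple model: the CPU may draw up to PL2 while an exponentially weighted moving average of package power stays below PL1, with tau as the time constant. This is a simplified toy version of the behavior, not firmware-accurate:

```python
# Simplified sketch of Intel's PL1/PL2/tau turbo budget: the CPU may draw
# up to PL2 while an exponentially weighted moving average (EWMA) of
# package power stays under PL1; once the average reaches PL1, sustained
# draw is clamped to PL1. Values match the board discussed above:
# PL1 = 65 W, PL2 = 224 W, tau = 28 s. Real firmware is more nuanced.

def simulate_turbo(pl1=65.0, pl2=224.0, tau=28.0, seconds=120, dt=1.0):
    """Return per-second package power for a sustained all-core load."""
    ewma, trace = 0.0, []
    for _ in range(int(seconds / dt)):
        # Run at PL2 while the averaged budget allows, else fall to PL1.
        power = pl2 if ewma < pl1 else pl1
        # Exponential moving average with time constant tau.
        ewma += (dt / tau) * (power - ewma)
        trace.append(power)
    return trace

trace = simulate_turbo()
boost_time = sum(1 for p in trace if p > 65.0)
print(f"~{boost_time} s at 224 W before settling to 65 W sustained")
```

Note that with PL2 this far above PL1, the averaged budget is exhausted well before tau itself elapses - which is why a 224 W PL2 burst on a 65 W part is so short yet so hot.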


AusWolf said:


> If it wasn't BS, it wouldn't have tricked me into swapping my 65 W TDP processor with another 65 W part and expecting it to work just fine. Maybe it works with OEMs whose only goal is to make their systems 'just work', even if at the edge of throttling, but DIYers need to know what to expect and how to build their systems before buying.


Again: you need to change what you think TDP means, because you're talking and acting as if it is a consumer-facing denotation of power consumption. It isn't. It never has been - though, as discussed above, it used to be pretty similar. That is no longer the case. It is a number denoting a design spec for OEMs, and a marketing class for end users. Period. If you went into this with the expectation that "my system can cool ~65W reasonably, so I'll be able to sustain base clock and maybe boost some, but I'll never see a high all-core boost", then you wouldn't be disappointed, and your expectations would align with actual specifications. If you expect to get much higher than base boost clocks, while simultaneously steering well clear of tJmax, within a TDP-like thermal envelope, then you're basing your expectations on something that isn't reality, and that nobody has promised.


The red spirit said:


> Actually, you do see the maximum boost clock quite often on Intel chips. I often see 4.3GHz on the i5 10400F, and I often see the all-core maximum of 4GHz at pretty much any load. I heard that Ryzen chips simply don't have such tables; if they can, they will keep increasing clock speed as long as cooling permits, up to the maximum turbo speed specified by AMD. Also, AMD does that in 25MHz increments and Intel does it in 100MHz increments. I don't remember many details, but AMD and Intel boosting algorithms are substantially different.


My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.

(Testing single core speeds is much more difficult as the Windows scheduler will shift heavy processes around from core to core rapidly in order to alleviate heat build-up - so running Blend with a single thread just results in a reported ~20% load across several cores, with clock speeds and which cores are under load fluctuating rapidly.)


The red spirit said:


> Oh dear, those Ryzens are bad at dissipating heat. I remember stock FX 6300 consuming over 200 watts in stress test stock, despite being marketed as 95 watt chip. It didn't have wattage limiter, so turbo worked as long as there was thermal and VRM headroom (aka forever in most cases). It was rather easy to cool and didn't really need anything more than Hyper 103 cooler. That cooler was fine for 4.4-4.6Ghz all core overclock and it had to keep temps under 62C, because it was thermal limit at first. Later updated to 72C. Ryzen 5950X should be, in theory, much easier to cool than FX.


Well ... thermal density is _massively_ increased. The 5800X is the most difficult to cool (105W TDP/138W PPT across a single CCD, compared to 105W TDP/144W PPT across two CCDs for the 5900X and 5950X), and it's still fine. No, you're not likely to see temps in the 60s under an all-core load at high boosts - that heat is distributed across just 83.7mm² (and not even evenly - a lot of that area is L3 and interconnects, with the cores being ~half of the CCD), compared to 315mm² for Bulldozer. It stands to reason that a) Ryzen gets hotter overall as the heat sources are smaller and more concentrated, and b) it's more difficult to dissipate this heat out through the IHS and into the cooler thanks to that concentration of heat. Given the density of these chips, what AMD is able to do with them is highly impressive.
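
The "concentrated heat" point is easy to put numbers on with a back-of-the-envelope heat flux comparison, using the die areas above and the ~200 W FX stress-test figure mentioned earlier in the thread (real chips have uneven power density, so this is only indicative):

```python
# Back-of-the-envelope heat flux comparison of the thread's examples.
# Die areas are from the post (83.7 mm² Zen3 CCD vs 315 mm² Bulldozer);
# the FX power figure is the ~200 W stress-test draw mentioned earlier.
# Averages only - real dies concentrate heat in the core regions.

def heat_flux(watts, area_mm2):
    """Average heat flux in W/mm² (ignores uneven distribution)."""
    return watts / area_mm2

zen3 = heat_flux(138, 83.7)   # 5800X PPT through a single CCD
fx   = heat_flux(200, 315)    # FX under a heavy stress test
print(f"Zen3 CCD: {zen3:.2f} W/mm², Bulldozer: {fx:.2f} W/mm²")
print(f"Roughly {zen3 / fx:.1f}x the areal power density")
```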


The red spirit said:


> However, if it's all that impossible to cool it well, then obsessing over boost clocks is a waste of time. I personally think that PL and PPT values should be abandoned, as nobody really cares about those when cooling a CPU; instead there could be a temperature limiter, which would reduce boost speed at a certain set temperature. It would be much easier to set up than a vague TDP, which means almost nothing to the end user.


... there is. CPUs throttle if they get too hot. Boost is dependent on thermals as well as power. But the thermal throttle points are pretty high, typically around 95-100°C - thankfully, as those are entirely safe temperatures for the CPU, and anything lower would just be giving up performance for no reason. If you're asking for CPUs to throttle at temperatures lower than this, I suggest you take a step back and re-think a bit.
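
The "boost depends on thermals" behavior amounts to something like the toy model below: pick the highest V/f point whose steady-state temperature stays under the throttle point. Every constant here is a made-up illustration, not a real silicon or cooler parameter:

```python
# Toy model of thermally-aware boost: the CPU sustains the highest clock
# whose modelled steady-state temperature stays below the throttle point
# (~95-100 °C), rather than throttling at some lower, arbitrary value.
# All constants are illustrative assumptions, not real hardware data.

TJ_MAX = 95.0          # throttle point, °C
AMBIENT = 25.0         # case air temperature, °C (assumed)
THERMAL_RES = 0.35     # °C per watt through IHS + cooler (assumed)

def temp_at(clock_ghz, volts):
    """Steady-state temperature at an operating point (toy P ∝ f·V²)."""
    power = 40.0 * clock_ghz * volts ** 2   # arbitrary scaling constant
    return AMBIENT + THERMAL_RES * power

def best_clock(vf_curve):
    """Highest clock on the V/f curve that stays below TJ_MAX."""
    ok = [f for f, v in vf_curve if temp_at(f, v) < TJ_MAX]
    return max(ok) if ok else min(f for f, _ in vf_curve)

# Hypothetical V/f curve: (GHz, volts) pairs
curve = [(3.6, 1.00), (4.0, 1.10), (4.4, 1.20), (4.7, 1.30)]
print(f"Sustained boost with this cooler: {best_clock(curve)} GHz")
```

A better cooler (smaller `THERMAL_RES`) moves the sustainable point up the curve, which is exactly why boost behavior varies with cooling while the throttle point stays fixed.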



The red spirit said:


> This situation just isn't good. It feels like a lot of performance is being lost by sticking to a too-low TDP spec or by making a 65 watt cooler. And if an enthusiast buys an Intel chip today and invests in a better-than-stock cooler, one can easily gain a lot of performance. The question is whether that's gaining performance or just unchoking the chip from a stupid Intel spec. In times when the i5 11400F has a base speed of 2.6GHz and a maximum boost clock of 4.4GHz, I would say that if you actually stick to the 65 watt TDP (turbo boost off, as Intel specifies that TDP is at base clock speed), then you would be losing a bit less than half of the CPU's performance to hit that 65 watt TDP. In real-life loads you will still get closer to 3.4GHz even at 65 watt TDP with turbo on, but it only takes one heavy task to keep the CPU running at base speed to fit into a tiny 65 watt power budget. BOINC might be a heavy enough load to not see more than 2.6GHz on that chip, when at a higher power budget it could be running at 4GHz on all cores. That's a lot of performance loss on chips which, other than the stupid TDP spec, can perform much better, granted that you use an aftermarket tower cooler. Even a 92mm tower cooler would likely be enough for an i5 with unlocked PL values.


Again: _please_ stop treating TDP as if it is a consumer-facing denomination of power draw. Did you at all read the rest of this thread? I agree that it would make more sense for a consumer-oriented 10400F-like SKU to have a higher base clock and a 95W TDP, but that would just add another SKU to an already complex lineup and make inventory-keeping and binning more challenging for Intel, but more importantly, for distributors and retailers. There are already _19 SKUs_ for Rocket Lake - and that doesn't even include i3s or Pentiums! Adding higher base clock SKUs for consumers to replace the 65W SKUs would just make a mess of things (especially if you start wanting both F and non-F versions of those as well). And OEMs wouldn't use those, as they wouldn't be able to fit them into their 65W thermal designs, meaning they couldn't just cut the 65W SKUs either.

The issue here isn't low TDPs or low base clocks, it's the lack of predictability of performance.


The red spirit said:


> And people seem to overlook another Intel CPU line, the T series chips, which are rated at 35 watts. i9 10900T was rated at 35 watts and to do that it has base speed of 1.9GHz and maximum boost clock of 4.6GHz. In this case you won't ever see it running at base speed or at TDP. For Intel chips maximum all core boost clock is essentially a new base clock. And those T chips were really bad at their job as they hardly saved any power when compared to non T version. Thanks to stupidly high PL2 values. Bullshit like that destroyed any value of separate T sku. What the point of getting T version, when you can get non T version and then set TDP to whatever you like?


T SKUs are binned (slightly) better than higher TDP SKUs, allowing them to run at slightly higher base clocks at 35W. Sure, you could get lucky and get a great K or non-K chip and match it, but there's no guarantee. But T chips are rarely sold at retail at all (a few shops carry them intermittently at best), and are _only_ targeted towards uSFF OEM solutions like ThinkStation Tinys and Optiplex uSFFs. They might consume more than 35W even there - if thermals and power delivery allow them to - but the entire point is that they constitute a discrete class of CPU - ones designed for very low power, small form factor applications - that OEMs require for their large-scale business, education and government customers. You could of course run a 65W CPU in the same chassis, but it would _constantly_ be thermal throttling, which isn't ideal, and the fan would be running fast even at idle. Instead, the 35W SKU lets them design tiny cases while the PL2 lets these CPUs stay as responsive as their higher power counterparts in desktop workloads, where very high clocks matter the most.

Again, you're treating an OEM-oriented designation (TDP) as well as now, an OEM-oriented product series as if they were consumer-directed. They aren't. It stands to reason things will be misunderstood and misinterpreted when things meant for one group are fit into the frameworks of understanding of the other.


The red spirit said:


> And for that matter, what's the point of getting a K SKU when you can just ramp up PL values on a lower-end chip and it will be almost as good as the K version? Also, the non-K version wouldn't even lose warranty from having PL values modified, as Intel let motherboard OEMs go wild.


That's indeed part of the issue here, with Intel's non-enforcement of power limits - that locked-down SKUs with high boost clocks can now essentially act as unlocked SKUs (there's no real OC headroom on top of stock boost anyhow) with unlocked power limits. That's essentially what this thread is about.


The red spirit said:


> It's such a bad shitstorm that I don't even know which is the least painful way to resolve it anymore. Enforce strict TDP? Raise TDP? Get rid of PL2? Keep performance or accept losses? All this nonsense just makes me want to go back to the era of a single clock speed for everyone and be done with all this TDP bullshit. Let TDP be whatever is needed for the rated clock speed and be fine with the results - but computer OEMs wouldn't have any of that.


Accepting some performance variability is necessary in DIY - there will always be variables that manufacturers can't account for - but Intel still needs to define their specs more clearly and enforce them better. The current free-for-all among motherboard manufacturers is the core problem here, not TDPs or PL2 settings.


AusWolf said:


> That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT.


Hm, I've seen quite a few discussions of various Zen2 and Zen3 Ryzens "running hot". My impression is that it's established knowledge in enthusiast circles at this point that higher thermal density Zen2 and Zen3 Ryzens are a bit difficult to cool.


AusWolf said:


> If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity.  It's a shame T series are not available for DIY with 11th gen.


Aren't T-series SKUs - the few times you can find them at retail - typically the same price as K SKUs, if not more expensive?


AusWolf said:


> To be honest, I've disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.


As with everything it's a compromise. But would you honestly want a CPU that either sacrificed a massive amount of responsiveness (seriously, the difference in desktop usage feel between, say, 3GHz and 4.5GHz is _massive_) for a reasonable max power draw, or one that demanded crazy cooling to work at all? That's just dumb to me. CPUs today are _smart_, as in _they adapt to their workloads and environments_. This allows for a far, far better balance of responsiveness, overall performance, and cooling needs than any previous solution. Is it perfect? Of course not. But it's flexible enough to adapt to a lot of shitty configurations while delivering the best possible OOB experience across the board. Why would you not want that?


The red spirit said:


> 72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I wouldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.


_Throttle_ means _run below base clock_. It might not boost quite as high. That is not the same.


The red spirit said:


> I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time.


What does that mean? At what thermals? At what noise levels? At what power draws? Heck, the Threadripper 1920X in my partner's workstation ran Prime95 indefinitely with a clogged water cooler - at 600MHz and ~90°C, but it ran for as long as I wanted it to. Also, running power virus loads to check thermals is kind of useless, seeing how _no_ real-world workload will create as much heat as P95 small FFT or FurMark. Heck, FurMark has a tendency to kill perfectly good GPUs due to overheating parts of the die. I really don't see the benefit of that.


The red spirit said:


> I dunno, but I have one of the cheapest FM2+ boards from Gigabyte; I think it's an A68H-DS2 or something. It handles an Athlon 760K at stock perfectly fine, and judging by VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip, and even at stock it uses an obnoxious amount of power with all cores loaded (140-160 watts). All the board has is 4+1 bare phases, no heatsink. I also have an ASRock A68H-HD+ board, and it handles an Athlon 845 perfectly fine too, again on bare 4+1 phases. So it's not like OEMs can't make a functional board whose bare VRMs supply that wattage; I think they just down-specced these B560 boards too much, and therefore they overheat - turning an i5-11600K into an 11600.


Not really a valid comparison: VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which has pretty dramatic effects compared to contemporary CPUs often running at 1-1.2V. 160W at 1.5V is 106.7A; 160W at 1V is 160A, and at 1.2V it's 133A. Both of the latter are significant increases for, say, a 4x50A power delivery setup.
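
The arithmetic above is just I = P / V; a quick sketch:

```python
# The current math from the post: VRM stages are rated in amps, and
# I = P / V, so lower core voltages mean more current for the same watts.

def cpu_current(watts, volts):
    """Current the VRM must deliver for a given package power and Vcore."""
    return watts / volts

for v in (1.5, 1.2, 1.0):
    print(f"160 W at {v:.1f} V -> {cpu_current(160, v):.1f} A")
# A 4-phase, 50 A-per-stage design is rated for ~200 A total, so the
# same wattage eats far more of that headroom at modern low voltages.
```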


----------



## HTC (May 28, 2021)

Zach_01 said:


> This is a cheap board without even just a chunk of metal on those VRMs.
> Most of these boards should've been labeled as i3 capable boards. They don't even run i5s properly, let alone i7/9s.
> 
> What are vendors and Intel thinking...  (?)



Let me see if I get this straight:


They were thinking????


----------



## Valantar (May 28, 2021)

The red spirit said:


> I bought that board with CPU in 2018 and it's FM2+ platform, not FM2. It's not 10 years old. At that same time Zen+ was out.
> 
> BTW Artic Silver 5 is one of the worst performing thermal pastes nowadays, there's no surprise that even some basic paste would be better than it. It's literally dead last in tests.


When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.


----------



## Zach_01 (May 28, 2021)

Valantar said:


> Not really a valid comparison: VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which has pretty dramatic effects compared to contemporary CPUs often running at 1-1.2V. 160W at 1.5V is 106.7A; 160W at 1V is 160A, and at 1.2V it's 133A. Both of the latter are significant increases for, say, a 4x50A power delivery setup.





Valantar said:


> When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.


That was exactly my point. The tech is 10 years old.


----------



## Solid State Soul ( SSS ) (May 28, 2021)

AusWolf said:


> The funny thing about my PC case is that even though it's a slim one that only accepts low profile graphics cards and CPU coolers, micro-ATX motherboards aren't an issue. I'm using an Asus B550M TUF Wifi at the moment, and I would be a bit sad to swap it for something else (unless it's of the same quality as this one).
> 
> If I go Intel again, I want to be looking at something similar - the Asus B560M TUF Wifi, or the Asus Z590M Prime are the ones with similar-looking quality and affordability available in my area. As for CPU, I was thinking about a Core i7-11700 non-K and locking its PL1 to 65 W, and PL2 to whatever I can cool. Hopefully, the Asus boards I looked at (or something else) would let me do that, even if it's not their default setting.


The Asus TUF boards don't have VRMs as good as Gigabyte's or MSI's upper-mid-tier boards; those are equipped with 12 50A stages, whereas the TUF only has 8 50A stages.


----------



## The red spirit (May 28, 2021)

Valantar said:


> Inspired by (and linking to) the same thread that you're posting in?


It was originally a separate thread, but for some reason merged with this one.



Valantar said:


> My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.
> 
> (Testing single core speeds is much more difficult as the Windows scheduler will shift heavy processes around from core to core rapidly in order to alleviate heat build-up - so running Blend with a single thread just results in a reported ~20% load across several cores, with clock speeds and which cores are under load fluctuating rapidly.)


That's with custom loop? Oh god. 




Valantar said:


> Well ... thermal density is _massively _increased. The 5800X is the most difficult to cool (105W TDP/138W PPT across a single CCD, compared to 105W TDP/144W PPT across two CCDs for the 5900X and 5950X), and it's still fine. No, you're not likely to se temps in the 60s under an all-core load at high boosts. Given that that heat is distributed across just 83.7mm² (and not even evenly - a lot of that size is L3 and interconnects, with the cores being ~half of the CCD), compared to 315mm² for Bulldozer. It stands to reason that a) Ryzen gets hotter overall as the heat sources are smaller and more concentrated, and b) that it's more difficult to dissipate this heat out through the IHS and into the cooler thanks to the concentration of heat. Given the density of these chips, what they're able to do with them is highly impressive.


Well, I'm not really impressed by the thermals of Ryzen chips. You could cool FX chips at 5GHz and keep them under 62C with just a big air cooler. Stock 95 watt FX chips could be passively cooled with the same air cooler with the fans removed. And now you need a big water cooler just to keep Ryzen working at stock clocks. That's a fail to me. The last time AMD needed a water cooler was with the FX 9590, and that was just a 120mm AIO.




Valantar said:


> ... there is. CPUs throttle if they get too hot. Boost is dependent on thermals as well as power. But the thermal throttle points are pretty high, typically around 95-100°C - thankfully, as those are entirely safe temperatures for the CPU, and anything lower would just be giving up performance for no reason. If you're asking for CPUs to throttle at temperatures lower than this, I suggest you take a step back and re-think a bit.


Keeping a CPU at 90C or 100C isn't acceptable. That it can survive such temperatures just means there won't be any lasting effect if it reaches them occasionally. I remember some Intel thermal engineer posting that their 14nm chips could survive 1.4 volts at up to 80C long term, but violate that voltage or cooling limit and electromigration will be bad.




Valantar said:


> Again: _please_ stop treating TDP as if it is a consumer-facing denomination of power draw.


Never. Intel's PL1 is how they define TDP. For once they finally got their shit together in this one aspect.




Valantar said:


> Did you at all read the rest of this thread? I agree that it would make more sense for a consumer-oriented 10400F-like SKU to have a higher base clock and a 95W TDP, but that would just add another SKU to an already complex lineup and make inventory-keeping and binning more challenging for Intel, but more importantly, for distributors and retailers. There are already _19 SKUs_ for Rocket Lake - and that doesn't even include i3s or Pentiums! Adding higher base clock SKUs for consumers to replace the 65W SKUs would just make a mess of things (especially if you start wanting both F and non-F versions of those as well). And OEMs wouldn't use those, as they wouldn't be able to fit them into their 65W thermal designs, meaning they couldn't just cut the 65W SKUs either.


Well that's obvious, but what matters now is what they will do with Alder Lake. 




Valantar said:


> T SKUs are binned (slightly) better than higher TDP SKUs, allowing them to run at slightly higher base clocks at 35W. Sure, you could get lucky and get a great K or non-K chip and match it, but there's no guarantee. But T chips are rarely sold at retail at all (a few shops carry them intermittently at best), and are _only_ targeted towards uSFF OEM solutions like ThinkStation Tinys and Optiplex uSFFs. They might consume more than 35W even there - if thermals and power delivery allows them to - but the entire point is that they constitute a discrete class of CPU - ones designed for very low power, small form factor applications - that OEMs require for their large-scale business, education and government customers. You could of course run a 65W CPU in the same chassis, but it would _constantly_ be thermal throttling, which isn't ideal, and the fan would be running fast even at idle. Instead, the 35W SKU lets them design tiny cases while the PL2 lets these CPUs stay as responsive as their higher power counterparts in desktop workloads, where very high clocks matter the most.
> 
> Again, you're treating an OEM-oriented designation (TDP) as well as now, an OEM-oriented product series as if they were consumer-directed. They aren't. It stands to reason things will be misunderstood and misinterpreted when things meant for one group are fit into the frameworks of understanding of the other.


First, I highly doubt that T chips are actually better bins of non-T chips; second, BIOSes often allow you to set your own PL values anyway. 




Valantar said:


> Accepting some performance variability is necessary in DIY - there will always be variables that manufacturers can't account for - but Intel still needs to define their specs more clearly and enforce them better. The current free-for-all among motherboard manufacturers is the core problem here, not TDPs or PL2 settings.


The DIY market was just fine without TDP shenanigans. Even chips with a single clock speed were perfectly acceptable and didn't have problems. I'm not a fan of turbo and other power tweaking. One static clock, with downclocking for power savings, seems to be the best design so far.



Valantar said:


> _Throttle_ means _run below base clock_. It might not boost quite as high. That is not the same.


I know full well that it's not a throttle by that strict definition, but realistically you lose performance because your cooler can't keep up. You sacrifice performance to avoid damaging the chip. 



Valantar said:


> What does that mean? At what thermals? At what noise levels? At what power draws? Heck, the Threadripper 1920X in my partner's workstation ran Prime95 indefinitely with a clogged water cooler - at 600MHz and ~90°C, but it ran for as long as I wanted it to.


Obviously at below the manufacturer's maximum specified temperature, at maximum clock speed, and at whatever my ears tell me is an acceptable noise level, which tends to be somewhere up to 1200 rpm most of the time, preferably no more than 1000 rpm. Power draw depends on the chip and is generally not a concern unless it's very high. Your partner's TR system would have failed this test spectacularly.



Valantar said:


> Also, running power virus loads to check thermals is kind of useless, seeing how _no_ real-world workload will create as much heat as P95 small FFT or FurMark. Heck, FurMark has a tendency to kill perfectly good GPUs due to overheating parts of the die. I really don't see the benefit of that.


Prime95 is a perfectly realistic workload; some people calculate primes for weeks. And let's not get into the Furmark shit again. I will be very clear: if a card can't handle some type of workload, then it's either badly tuned or has an inadequate cooling solution. I don't care that it kills some badly engineered cards, as no properly made card should die in Furmark. Also, judging by power figures, running Furmark is not much different from mining or running MilkyWay@Home. My RX 580 can handle Furmark just fine with vBIOS mods: it now can't reach the 80s and barely breaks into the 70s in Furmark. The RX 560 I have in another machine fails to reach the 70s. 



Valantar said:


> Not really a valid comparison. VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which has pretty dramatic effects compared to contemporary CPUs often running at 1-1.2V. 160W at 1.5V is 106.7A. 160W at 1V is 160A, and at 1.2V it's 133A. Both of those are significant increases for, say, a 4x50A power delivery setup.


And watts are amps*volts, therefore VRMs care about watts. And no, those Athlons didn't run at more than 1.5 volts - the Athlon X4 870K and Athlon X4 845 are both limited to 1.5V or 1.485V, and no Athlon came out with more than that. Also, most of that voltage is needed for turbo to work, so if you disable turbo, you can get massive voltage reductions.
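For what it's worth, the current figures in the quoted post do follow from I = P/V; a minimal sketch (simplified, ignoring VRM conversion losses that a real board adds on top):

```python
# For a fixed package power, VRM output current scales inversely with core
# voltage: I = P / V. Lower-voltage modern CPUs therefore pull more amps
# through the VRM at the same wattage. (Simplified: ignores VRM efficiency.)
def vrm_current_a(power_w: float, vcore_v: float) -> float:
    """Current (amps) the VRM must deliver at the given power and voltage."""
    return power_w / vcore_v

for vcore in (1.5, 1.2, 1.0):
    print(f"160 W at {vcore:.1f} V -> {vrm_current_a(160, vcore):.1f} A")
```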



Valantar said:


> When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.


Nah, it's new stock. I have loads of chips for FM2+ boards. Athlon 760K is just one of them. I bought it for unique reasons:








Link: "Weird CPU temperature reporting (solved)" (www.overclock.net)
				




Previously that computer had an 870K, which was made in 2015, and the Athlon 845 was made in 2016. Both are nowhere near 10 years old. Several motherboards had extended manufacturing runs for some reason, so you could buy them even in 2018 and probably in 2019. The Athlon 845 is a unicorn chip: somewhat rare, as it was released at the end of the FM2+ platform's lifespan, and it had a Carrizo core, the last architectural improvement on the AM3+ and FM2+ platforms. The Athlon 870K is also a late production model, essentially a better-binned 860K. Its availability was poor and it mostly sold after FM2+ became obsolete. There were a bunch of other rare CPUs released in 2016 for the FM2+ platform, like the A6 7470K or A10 7890K.


----------



## AusWolf (May 28, 2021)

Valantar said:


> The problem for reviewers is they essentially have to choose - either Intel spec or motherboard default, whatever that is - and depending on their resources, they are partially at the mercy of motherboard makers, as not everyone can afford to buy an expensive motherboard for a test platform. Most serious reviewers are pretty transparent about this as well as their reasoning behind whatever choice they make. But YouTubers generally don't count as 'serious reviewers'. GN is the only real exception (though perhaps Level1Techs should be included?). Other than that there's AnandTech, TPU, and a few other sites doing in-depth written reviews of high quality, including discussions of methodologies and the rationales behind them.


If I had to write a review, I'd try to do it both ways - like the guys here at TPU do. When reviewing, you need to consider that not everyone who reads your review will want the same out of their system.



Valantar said:


> Not the same. They adjust their boost bins to keep within power and thermal limits, and mainly the latter. Different workloads draw different amounts of power, so 1950MHz and 1860MHz might both draw the same amount of power depending on what work is being done.


Exactly. Throttling means dropping below base clock, which (coming back to the original topic) only that one ASRock motherboard does in HWUB's later video. All the rest are within spec, however vague that spec is.



Valantar said:


> I agree that they are difficult to find, but they're not that expensive. A bit of a premium, sure. I got my 4650G from a reputable Ebay store, and have had zero issues.


I saw a 4750G on ebay a couple weeks ago for about £450. As an OEM CPU, it comes with no box and no warranty. I got the Asus B560M TUF motherboard and the i7-11700 for the same price brand new. We'll see what happens when the 5000G/GE series come out for DIY. I might buy one just to test it, and sell the Core i7 if it's any good. 



Valantar said:


> Good luck dealing with its stock 224W PL2, I guess?  Should be decent enough as long as you're willing to tune things manually (or are willing to deal with short-term (28s tau) heat loads well above TDP).


Oh no, I'm definitely not gonna run a 224 W PL2.  I intend to do as much tweaking as necessary to make it work in my thin SFF case. I want to find the perfect balance. 
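For anyone tuning this: Intel's PL1/PL2/tau scheme amounts to a moving-average power budget. The toy model below is my own illustration, not Intel's actual firmware algorithm, using the 65W PL1 / 224W PL2 / 28s tau figures discussed in this thread:

```python
# Toy model of Intel's PL1/PL2/tau behavior (illustrative only): the package
# may draw up to PL2 while a moving average of power stays below PL1; once
# the average reaches PL1, sustained draw is clamped to PL1.
PL1_W, PL2_W, TAU_S = 65.0, 224.0, 28.0  # figures discussed in this thread
DT_S = 1.0  # simulation timestep in seconds

def simulate(demand_w: float, seconds: int) -> list:
    ewma = 0.0  # average power; starts at idle, i.e. full turbo budget
    alpha = DT_S / TAU_S
    trace = []
    for _ in range(seconds):
        allowed = PL2_W if ewma < PL1_W else PL1_W
        draw = min(demand_w, allowed)
        ewma += alpha * (draw - ewma)  # exponentially weighted moving average
        trace.append(draw)
    return trace

# A 200 W all-core load bursts at full demand, then settles at PL1.
trace = simulate(demand_w=200.0, seconds=60)
```

With these numbers the burst lasts on the order of ten seconds before the average hits PL1. The real tau accounting differs in detail, but the shape - burst, then clamp to PL1 - is the behavior the cheaper B560 boards enforce and the expensive ones disable.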



Valantar said:


> My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.
> 
> (...)
> 
> ...


Having gone through the 3100, 5950X, 3600 and then back to the 3100, that's my conclusion too. I guess you won't damage these modern chips with high temperatures as much as you would, for example, an FX CPU. I remember those having maximum recommended temperatures of 61 °C, while Tjmax is usually around 100 °C these days. I also remember when Navi came out and everybody freaked out over the newly reported junction temperature reaching 100 °C. AMD had to make a statement that junction temp is totally fine up to 110 °C. Still, modern Ryzens get hot even with low power consumption, and are more difficult to cool than the chips of yesteryear.



Valantar said:


> As with everything it's a compromise. But would you honestly want a CPU that either sacrificed a massive amount of responsiveness (seriously, the difference in desktop usage feel between, say, 3GHz and 4.5GHz is _massive_) for a reasonable max power draw, or one that demanded crazy cooling to work at all? That's just dumb to me. CPUs today are _smart_, as in _they adapt to their workloads and environments_. This allows for a far, far better balance of responsiveness, overall performance, and cooling needs than any previous solution. Is it perfect? Of course not. But it's flexible enough to adapt to a lot of shitty configurations while delivering the best possible OOB experience across the board. Why would you not want that?


I'm not quite sure that's the case. My Ryzen 3 3100 basically runs at 3.85-3.9 GHz all the time, independent of workload, as it never maxes out its power limit. Hungrier chips with more cores could do the same with cTDP. If you want full power, set cTDP to the highest, and enjoy maximum clock speed all the time. You want low thermals? Just turn your cTDP down to have your clocks and voltages decrease too. You don't even need different SKUs with different TDP ratings for this.



The red spirit said:


> 72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I couldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.


I don't test with prime95. I use my PC for gaming, so I don't need such a heavy workload to test for CPU thermals. Cinebench is just fine.
Same with GPUs: I stay clear of Furmark, and use a Superposition loop, or 3DMark stability test instead.



The red spirit said:


> Oh I so agree here, I loved my FX 6300. My first chip that I pushed to 5.288 GHz and the first chip to make VRM area of motherboard brown in process. I gotta say that I didn't really care if it destroyed motherboards, as long as it was fun to overclock it. It was also great undervolting and it could run passively cooled with Scythe Mugen 4 heatsink. I kept using it until 2019, at the point where performance of it just wasn't good enough anymore. Apparently, FX lasted a long time and were still surprisingly not bad even in 2019:


To be honest, I thought my 8150 was a difficult chip to cool, though slapping a Hyper 212 on it was just fine. Now I'm reconsidering that opinion with modern Ryzens.  FX was also the first platform where I burned my fingers just by touching the VRM heatsink.
As for performance, I wasn't really happy with it gaming-wise, though I think these old FXes might be doing a little better with the passing of time, as games need more cores/threads to run well nowadays.


----------



## The red spirit (May 28, 2021)

AusWolf said:


> I don't test with prime95. I use my PC for gaming, so I don't need such a heavy workload to test for CPU thermals. Cinebench is just fine.
> Same with GPUs: I stay clear of Furmark, and use a Superposition loop, or 3DMark stability test instead.


On Comet Lake, Prime95 was the best stability test. Different platforms have slightly different best stability testing tools. Generally, higher power consumption at the wall tells you a lot about which tool stresses more of the chip.

For GPUs I test stability in Heaven or Tropics, and then separately test thermals in Furmark. For me, any chip should be absolutely stable and have its thermals in check; anything less is never acceptable. Then again, when I tweak a GPU, I make the tweak permanent: my RX 580 is BIOS modded with my own custom tune to reduce wattage and noise, and I managed to achieve a small undervolt too. 

The general rule for stability testing is to find out whether the system is stable at the maximum imaginable load; it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for unexpected voltage fluctuations or simple aging of the chip.




AusWolf said:


> To be honest, I thought my 8150 was a difficult chip to cool, though slapping a Hyper 212 on it was just fine. Now I reconsider my opinion with modern Ryzens.  FX was also my first platform where I burned my fingers just by touching the VRM heatsink.
> As for performance, I wasn't really happy with it gaming-wise. Though I think these old FXes might be doing a little better with the passing of time and games needing more cores/threads to run well nowadays.


I swear to god, those FX chips were a blast to overclock. Over 5GHz on air was super intoxicating. People outside of overclocking circles were always super impressed by that - probably not so much nowadays, when stock CPUs do that. Anyway, my last board for FX had a very hot northbridge for some reason, and it could fry fingers. My first board for FX was way beyond finger-burning hot: I recorded 159C at the VRMs, and that's the same board I said got a brown stain. I achieved 5.288GHz with an ASRock 970 Pro3 R2.0 board, which didn't have any VRM cooling - I certainly wouldn't want to touch those bare VRMs. Anyway, not trying to reach 5GHz on FX is a crime, and impossible to resist. Even if it's a suicide run, it's so much fun. I achieved that highest overclock with just a Cooler Master Hyper 103 cooler, which is worse than a 212 Evo. Cooling didn't really matter, as I was limited by the motherboard's VRM capabilities. I set voltage to 1.72V for the FX 6300, with the other 2 modules disabled and the highest LLC the board had. It didn't throttle, but it lost any efficiency and scaling at that point. Surprisingly, the VRMs weren't in the 160s. Once I got the validation, I just used the FX at stock settings for another 2-3 years until it died. With any decent board I probably could have achieved 5.5-5.7 GHz on air with the same cooler, at the expense of disabling the chip's thermal protection. With an actually adequate cooler, 6GHz could be possible on air for a suicide run.  

All I can say is that FX chips totally spoiled me and set my overclocking expectations really high for whenever I attempt that on some other platform. To date, there's no better overclocking chip than the FX series. And on top of that, FX chips were dirt cheap, so the financial loss in case of disaster wouldn't be big. Ryzen just can't match FX in terms of price. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores; Ryzen never had value close to FX, and they don't really overclock unless you deal with the lame turbo.


----------



## AusWolf (May 29, 2021)

The red spirit said:


> On Comet Lake prime95 was the best stability test. Different platforms have slightly different best stability testing tools. Generally higher power consumption at wall tells a lot about which tool tests more of the chip better.
> 
> For GPU I test stability in heaven or tropics, and then separately test thermal in Furmark. For me, any chip should be absolutely stable and have thermals in check. Anything less is never acceptable. Then again, I actually tweak GPU permanently if I attempt to actually tweak it. My RX 580 is BIOS modded with my own custom tune to reduce wattage and noise. I managed to achieve small undervolt too.
> 
> The general rule for stability testing is to get an idea whether system is stable at maximum imaginable load, it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for any unexpected voltage fluctuation or just aging of chip.


Maximum imaginable load is one thing, but what you're going to use the PC for is another. There is no game on the planet that's going to stress your GPU as much as Furmark does, which is why I think such programs are a bit pointless. I always aim for stability under real-life conditions, so Superposition for the GPU and Cinebench for the CPU are the best imo.



The red spirit said:


> I swear to god, those FX chips were the blast to overclock. Over 5GHz on air was super intoxicating. People outside of overclocking circles were always super impressed by that, probably not so much nowadays when stock CPUs do that. Anyway, my last board for FX had a very hot northbridge for some reason and it could cry fingers. My first board for FX was way beyond finger burning hot. I recorded 159C at VRMs and that's the same board that I said got a brown stain. I achieved 5.288GHz with Asrock 970 Pro3 R2.0 board, which didn't have any VRM cooling. I certainly wouldn't want to touch those bare VRMs. Anyway, to not try to reach 5GHz on FX is a crime and impossible to do. Even if it's a suicide run, it's so much fun. I achieved that highest overclock with just Cooler Master Hyper 103 cooler, which is worse than 212 Evo. Cooling didn't really matter as I was limited by motherboards VRM capabilities. I set voltage to 1.72V for FX 6300 with other 2 modules disabled with highest LLC that board had. It didn't throttle, but it lost any efficiency and scaling at that point. Surprisingly it wasn't in 160s on VRMs. Once I got validation, I just used FX at stock settings for another 2-3 years until it died. With any decent board I probably could have achieved 5.5-5.7 GHz on air with same cooler at expense of disabling thermal protection of chip. With actually adequate cooler, 6GHz could be possible on air for suicide run.
> 
> All I can say is that FX chips have totally spoiled me and made my overclocking expectations really high, if I will ever attempt that on some other platform. To the date, there's no better overclocking chip made than FX series. And on top of that FX chips were dirt cheap, so financial loss in case of disaster wouldn't be big. Ryzens just can't match FX in terms of their prices. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores, Ryzen never had value close to FX and they don't really overclock, unless you deal with lame turbo.


To be honest, I've always thought overclocking was pointless, and I still do. Whatever extra you get out of your PC in benchmarks doesn't matter; the perceivable difference in real-life experience is always going to be minimal at best, at the cost of exponentially increased heat and power consumption, and decreased longevity of your parts.


----------



## The red spirit (May 29, 2021)

AusWolf said:


> Maximum imaginable load is one thing, but what you're going to use the PC for is another. There is no game on the planet that's going to stress your GPU as much as Furmark does, that's why I think such programs are a bit pointless. I always aim for stability under real life conditions, so Superposition for GPU and Cinebench for CPU are the best imo.


When you have to ensure that a cooler is truly capable of dealing with the heat, Furmark is a great tool for that. I don't want a cooler that's good enough 95% of the time; I want it to be good enough even in the worst-case scenario, so that I know it can cope with everyday loads and with occasionally higher loads. 




AusWolf said:


> To be honest, I've always thought overclocking was pointless, and I still do. Whatever extra you get out of your PC in benchmarks doesn't matter; the perceivable difference in real-life experience is always going to be minimal at best, at the cost of exponentially increased heat and power consumption, and decreased longevity of your parts.


I guess you could make such a point, and I don't necessarily disagree with it, but there's more to it. Some people do this stuff to computers; others do it to their cars. Sometimes the gains are small, sometimes they're substantial, and often they come with other costs (lower longevity). I think that we as hobbyists can enjoy overclocking as much for the achievement itself as for the actual gains. It's like building a 1000 bhp car: it's useless and dangerous, and will likely break down soon, but it's so much fun to actually achieve such a number, even if you can't really utilize more than 600 bhp well, or 100 bhp legally. The FX line of CPUs was the last where overclocking could yield big gains in performance - the lowest clocked FX had a 3.3GHz base speed and you could clock it to almost 5GHz. I get that it means investing a lot more into cooling and the motherboard, and let's be honest, it's a bit silly, yet fun. And just before FX, there were Core 2 Duos, which came with a base speed of 1.86GHz and could be rather safely overclocked to 3GHz+ on the same motherboard with the stock cooler. Of course, there were even more legendary overclocking chips, like the Celeron 300A, which could be taken from its 300MHz stock speed to 450MHz. Overclocking is basically hot-rodding for computers. Obviously it takes time, effort and money, and not everyone appreciates that, but those who do enjoy it for what it is, often knowing full well how impractical it actually is.


----------



## AusWolf (May 30, 2021)

The red spirit said:


> When you have to ensure that cooler is truly capable of dealing with heat, then Furmark is a great tool at ensuring that. I don't want cooler to be good enough 95% of time, I want it to be good enough even in the worst case scenario, so that I know that it can cope with everyday loads and with occasionally higher loads.


I get that, though there is no "occasionally higher load" ever. The way Furmark stresses your GPU is unrealistic, and it's guaranteed that you'll never encounter a similar scenario while gaming.



The red spirit said:


> I guess you could make such point. I don't necessarily disagree with it, but there's more to it. I mean people do this stuff to computers, others do it to their cars. Sometimes gains are small, sometimes they are substantial. Often gains come with other costs (lower longevity). I think that we as hobbyists can enjoy overclocking just as much as actual gains as we just appreciate the achievement itself. It's like making 1000 bhp car. It's useless and dangerous, likely will break down soon, but it's so much fun to actually achieve such number, even if you can't really utilize more than 600 bhp well and 100 bhp legally. FX line of CPUs was the last where overclocking could yield big gains in performance. Lowest clocked FX had 3.3GHz base speed and you could clock it to almost 5GHz. I get that it means investing a lot more into cooling, motherboard, but let's be honest it's a bit silly, yet fun. And just a bit before FX, there were Core 2 Duos, which came with base speed of 1.86GHz and could be rather safely overclocked to 3GHz+ with same motherboard and stock cooler. Of course, there were even more legendary overclocking chips like Celeron 300A, which could be overclocked from 300MHz stock speed to 450MHz. Overclocking is basically hot-rodding but for computers. Obviously it takes time, effort and money to do, not everyone appreciates that, but those that do they enjoy it for what it is, often when they know how impractical that actually is.


I guess I understand that too. I know a few people who like tuning basic cars to their limits. It's just that the time they spend in the garage making sure their "upgrades" don't end up being complete sh**, I spend out on the road enjoying the drive.


----------



## The red spirit (May 30, 2021)

AusWolf said:


> I get that, though there is no "occasionally higher load" ever. The way Furmark stresses your GPU is unrealistic, and it's guaranteed that you'll never encounter a similar scenario while gaming.


As I said, mining and BOINC loads are quite similar to Furmark in terms of power usage and heat output.




AusWolf said:


> I guess I understand that too. I know a few people who like tuning basic cars to their limits. It's just that the time they spend in the garage making sure their "upgrades" don't end up being complete sh**, I spend out on the road enjoying the drive.


Until a wild Nissan Micra with an RB20DE swap overtakes you. Or maybe some chap lucks out, finds a Mitsubishi Colt with a 4G63 engine, and invests a bit in turbo and handling mods... Oh, the possibilities are endless.


----------



## chrcoluk (May 30, 2021)

I blame Intel more than the board vendors; I would only blame a board vendor if they're not delivering what they're advertising.

Ultimately this is happening because Intel CPUs have become very power hungry, but Intel refuses to reflect this in the official specs, and won't force board partners to adhere to the limits either. They don't do this, of course, because they want the marketing benefits of what their chips can do while running with unlimited power, while also advertising a low TDP.

--edit--

Just saw the video on the ASRock board; opinion changed. I cannot excuse ASRock for that board. I think it's OK to release boards that cannot properly handle higher power limits and overclocks, but not OK to advertise a board as supporting chips it cannot run at specification. Of course, Intel still shares the blame.


----------



## Valantar (May 31, 2021)

The red spirit said:


> That's with custom loop? Oh god.


A custom loop, yes, but with a single 240mm rad for both CPU and GPU, and a quasi-AIO CPU DDC pump-block combo that isn't particularly good thermally. Also, the loop is configured for silence and not thermals, with fans ramping slowly and based on water temperatures rather than component temperatures.


The red spirit said:


> Well, I'm not really impressed by thermals of Ryzen chips. You could cool FX chips at 5GHz and keep them under 62C with just big air cooler. Stock 95 watt FX chips could be passively cooled with same air cooler, but with fans removed. And now you need big water cooler just to keep Ryzen working at stock clocks. That's a fail to me. The last time AMD needed water cooler was with FX 9590 and it was just 120mm AIO.


Apparently you didn't read what I wrote whatsoever. Oh well.


The red spirit said:


> Keeping CPU at 100C or 90C isn't acceptable for it. That it can survive such temperatures, means that it won't have any lasting effect if it reaches such temperatures occasionally. I remember some Intel thermal engineer posting that their 14nm chips could survive 1.4 volts at up to 80C in long term, but violate that voltage or cooling and electromigration will be bad.


Sorry, but that's nonsense. Silicon is perfectly fine running at 90-100°C for extended periods of time. As I've said before here, look at laptops - most laptops _idle_ in the 60s-70s and hit tJmax at any kind of load, as they prioritize keeping quiet and accept that running hot doesn't do any harm. It does become harmful if you _also_ ramp voltages high while loading the CPU heavily, but advanced self-regulating CPUs like Ryzens don't allow that combination unless you explicitly disable protections and override regulatory mechanisms. Heck, Buildzoid once tried to intentionally degrade his 3700X, and after something like a continuous 60 hours at >110°C (thermal limits bypassed) and 1.45V under 100% load he lost ... 25MHz of clock stability. So under any kind of regular workload, degradation is never, ever happening, as that combination of thermals, voltage and load over time is utterly absurd for real-world workloads. Sure, his sample might be very resistant to electromigration, but even accounting for that, there's no reason to worry at all.


The red spirit said:


> Never, Intel's PL1 is how they define TDP. For the first time they finally got their shit together in this one aspect.


PL1 is absolutely not how Intel defines TDP. PL1 is defined _from_ TDP; TDP defines a thermal output class of CPUs, towards which CPUs are tuned in terms of base clock and other characteristics. Power draw is only tangentially related to TDP.


The red spirit said:


> Well that's obvious, but it matters now what they will do with Alder Lake.


It's not going to change. The 65W TDP tier is utterly dominant in the OEM space, which outsells DIY by at least an order of magnitude. 65W TDPs for midrange and lower end chips aren't changing. If you want more for DIY, they have a K SKU to sell you to cover that desire - for a price, of course. You, and us DIYers overall, are not first in line for things being adjusted to our desires, and never will be.


The red spirit said:


> First, I highly doubt that T chips are actually a better bins of non T chips and BIOSes often allow you to set your own PL values.


They are supposed to be better binned - whether they are in real life is always a gamble, as there's a lot of overlap between different bins, and some are interchangeable depending on the application.


The red spirit said:


> DIY market was just fine without TDP shenanigans. Even chips with one clock speed were decently acceptable and didn't have problems. I'm not a fan of turbo and other power tweaking. One static clock with downclocking for power savings seems to be the best design so far.


Again: it seems like you haven't read the rest of this thread at all. I'll just point you to this post - especially this part:


Valantar said:


> you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.


Saying "DIY market was just fine without TDP shenanigans" is such an absurd reversal of reality that it makes it utterly impossible to actually discuss the issues at hand. TDPs have _never_ been directly related to power draw, nor have they ever been intended for the DIY market as anything beyond a product class delineation.

As for abandoning boost: well, if you'd be happy with ~2.5GHz CPUs, then by all means. Because that's what we'd get if there wasn't boost - we'd get base clock at sustained TDP-like power draws. The 65W TDP tier isn't going anywhere, again, as OEMs buy millions of those CPUs, and changing it would be extremely expensive for them.


The red spirit said:


> I know full well that it's not exactly a throttle in legal terms, but realistically you lose performance, because your cooler can't keep up. You sacrifice performance to not damage the chip.


Yes. But that's not throttling. That's part of tuning a DIY system. Nobody has ever promised 100% boost clock 24/7 under 100% all-core load, or even 1-core load. You really need to be more nuanced in your approach to this.


The red spirit said:


> Obviously at below maximum manufacturer specified temperature, maximum clock speed and at whatever my ears tell me is acceptable noise level, which tends to be somewhere at up to 1200 rpms most of the time, while preferably at no more than 1000 rpm. Power draw depends on chip and is generally not a concerns, unless it's very high. Your partner's TR system would have failed this test spectacularly.


"At below maximum manufacturer specified temperature" ... okay ... so, anything below 100°C-ish? Because above you seemed to say 80°C was unacceptable. Yet that's quite a bit below maximum. Also, 1200rpm ... of which model of fan, how many fans, which case, which cooler? And _obviously_ the TR system would have failed, _it had a clogged AIO cooler_. My point was: you're making generalizing claims without defining even close to a sufficient amount of variables. Your criteria still make it sound like my cooling setup is well within your wants, yet you're saying above that it's unacceptable, so ... there's something more there, clearly.


The red spirit said:


> prime95 is a perfectly realistic workload, some people calculate primes for weeks. And let's not get into Furmark shit again. I will be very clear, if card can't handle some type of workload, then it's either badly tuned or has an inadequate cooling solution. I don't care that it kills some badly engineered cards, as no properly made card should die in Furmark. Also judging by power figures, running Furmark is not much different than mining or running MilkyWay@Home. My RX 580 can handle Furmark just fine with vBIOS mods. It now can't reach 80s and barely breaks into 70s in Furmark. RX 560 that I have in other machine, fails to reach 70s.


Prime95 is not "realistic". Yes, some people calculate primes for weeks. Some people calculate the changes in molecular or cell structures of complex organisms when subjected to various chemicals. That doesn't make either a relevant end-user workload. If you're doing workstation things, get a workstation, or accept that consumer-grade products aren't designed for that and you need to overbuild to match. As for FurMark, whether a GPU can "handle" it is irrelevant. It is a workload explicitly created for maximum heat output, which is _dangerous_ to run. It doesn't matter what thermals your GPU reads (heck, the very fact that you're saying "it can handle it with BIOS mods!" says enough by itself!) - the issue is that it creates extreme hotspots away from the thermal sensors on your GPU. Most GPUs - all of them pre-RDNA - have their thermal sensors along the edge of the die. Under normal full loads there's easily a 10-20°C difference between the edge and centre of the die. FurMark exaggerates that - so if your edge thermal sensor is reading 70-80, the hotspot temperature might be 110 or higher. If your hardware doesn't die, that's good for you, but please stop subjecting it to unnecessary and unrealistic workloads just for "stress testing".


The red spirit said:


> And watts are amps*volts, therefore VRMs care about watts. And no those Athlons didn't run at 1.5 volts. Athlon X4 870K and Athlon A4 845 are both limited to 1.5V or 1.485V. No Athlon came out with more than that. Also, most of that voltage is needed to turbo to work, so if you disable turbo, you can get massive voltage reductions.


Jesus christ, man, come on. No. VRMs care about watts _only as expressed in amps_. That was the _entire_ point of what I said. And while it's true I cited the voltage of the highest-running Athlons, they're still much higher than current CPUs. (Current-gen Ryzens report very high core voltages in software, but from what AMD's engineering team has said, those voltages are read before being stepped down to what the core actually demands, so the core isn't actually running 1.4V or higher during boost despite what software might say.)

And yes, of course you get voltage reductions if you disable boost. That's ... rather obvious, no? Go below stock behaviour, and you'll get lower voltages and power draws. Not quite surprising.
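To make the "watts only as expressed in amps" point concrete, here's a minimal sketch of the arithmetic. The wattages and voltages are illustrative round numbers, not measured values for any specific chip:

```python
# Why VRMs care about amps rather than watts: at equal package power,
# a lower core voltage means proportionally higher current (I = P / V).
# The 125 W figure and both voltages below are illustrative assumptions.

def vrm_current(power_w: float, vcore_v: float) -> float:
    """Current (amps) the VRM must deliver for a given power and core voltage."""
    return power_w / vcore_v

# An older high-voltage chip vs. a modern low-voltage one, both at 125 W:
old = vrm_current(125, 1.45)   # roughly 86 A
new = vrm_current(125, 1.10)   # roughly 114 A

print(f"125 W @ 1.45 V -> {old:.0f} A")
print(f"125 W @ 1.10 V -> {new:.0f} A")
```

So two chips with identical power draw can put very different current loads on the VRM, which is why VRM specs are written in volts and amps rather than watts.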


The red spirit said:


> Nah, it's new stock. I have loads of chips for FM2+ boards. Athlon 760K is just one of them. I bought it for unique reasons:
> 
> 
> 
> ...


No. Old stock = old, unsold products that have been sitting on shelves for a long time. That CPU was launched in October 2012, and while production of course ran for several years after that, it definitely wasn't recently manufactured when you bought it. And even if it was, it was still ancient tech at that point. Which is fine, but please don't try to say that it wasn't old.


The red spirit said:


> Previously that computer had 870K, which was made in 2015 and Athlon 845 was made in 2016. Both are nowhere near being 10 years old. Several motherboards had an extended manufacturing for some reason and thus you could buy them even in 2018 and probably in 2019. Athlon 845 is an unicorn chip, which is somewhat rare as it was released at the end of lifespan of FM2+ platform and it had Carrizo core, the last architectural improvement on AM3+ and FM2+ platforms. Athlon 870K is also late production model, but is a better binned 860K. Availability of it was poor and it mostly sold after FM2+ becoming obsolete. There were bunch of other rare CPUs released in 2016 for FM2+ platform, like A6 7470K or A10 7890K.


Not that those CPUs aren't interesting, but they're still old tech. My A8-7600 that I just retired from my NAS was just as old. Sure, AMD iterated upon its 'large machinery' cores for quite a few years, and even launched Carrizo very close to Ryzen, but the actual changes generation-on-generation were pretty tiny. And that a five or six-year-old CPU is less old than a 10-year-old CPU is ... not that interesting?


AusWolf said:


> If I had to write a review, I'd try to do it both ways - like the guys here at TPU do. When reviewing, you need to consider that not everyone who reads your review will want the same out of their system.


Absolutely. Though that's a lot of work - more than most reviewers probably have time (or get paid) for. IMO, reviewers ought to have at least two test systems per generation, one high end and one midrange, and compare the two at spec and stock settings. That would be near ideal.


AusWolf said:


> Exactly. Throttling means dropping below base clock, which (coming back to the original topic) only that one ASRock motherboard does in HU's latter video. All the rest are within spec, however vague that spec is.


Yeah, that's pretty atrocious. This is why this discussion is getting so muddled though - people mix up annoyance at Intel for being vague AF and not enforcing their specs with OEMs partially making use of that to effectively OC their parts, and partly just making cheap shit and selling it as if it was good enough. Both sides need addressing, and need addressing specifically for what they're messing up. But that's tricky.


AusWolf said:


> I saw a 4750G on ebay a couple weeks ago for about £450. As an OEM CPU, it comes with no box and no warranty. I got the Asus B560M TUF motherboard and the i7-11700 for the same price brand new. We'll see what happens when the 5000G/GE series come out for DIY. I might buy one just to test it, and sell the Core i7 if it's any good.


Whoa! I paid €225 for my 4650G. I don't care much about the warranty - I've never had a CPU fail, and stories of that are rare enough that I can't imagine needing it.


AusWolf said:


> Oh no, I'm definitely not gonna run a 224 W PL2.  I intend to do as much tweaking as necessary to make it work in my thin SFF case. I want to find the perfect balance.


Sounds interesting! Let me know if you make a build log?


AusWolf said:


> I'm not quite sure that's the case. My Ryzen 3 3100 basically runs at 3.85-3.9 GHz all the time, independent of workload, as it never maxes out its power limit. Hungrier chips with more cores could do the same with cTDP. If you want full power, set cTDP to the highest, and enjoy maximum clock speed all the time. You want low thermals? Just turn your cTDP down to have your clocks and voltages decrease too. You don't even need different SKUs with different TDP ratings for this.


Well, the 3100 is a "low end of its TDP tier" SKU, i.e. it's likely overspecced in terms of TDP. They could probably make its base and boost clocks match if they wanted to, but probably leave some room between them to add leeway for utilizing garbage-tier bins of chips if they want to. (You often see the same on older i5s and i3s too.) Each tier must include a range of products after all. But without modern boosting systems, we'd either need SKU-specific TDPs or we'd get a _much_ smaller range of chips to choose from as the power draw would limit differentiation.



The red spirit said:


> The general rule for stability testing is to get an idea whether system is stable at maximum imaginable load, it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for any unexpected voltage fluctuation or just aging of chip.


That's a commonly held enthusiast belief, but it's a rather irrational one. Power viruses and unrealistic heat loads can be beneficial if you're _really_ pushing things and still want 24/7 stability, but for anything else they're rather useless, potentially misleading, and possibly harmful to your components. What is the value of keeping CPU temps under a given level while running Prime95 if the CPU is never going to see a workload similar to that? Etc.


The red spirit said:


> Ryzens just can't match FX in terms of their prices. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores, Ryzen never had value close to FX and they don't really overclock, unless you deal with lame turbo.


Value is relative. You clearly value overclocking for its own sake. Which is of course fine if that's what you like to spend your time doing! But your conception of value handily overlooks the fact that FX (and Bulldozer derivatives in general) performed rather terribly. They were fun from a technical and OC perspective, and they were cheap, but they were routinely outperformed by affordable i5s (and even i3s towards the end) with half the cores or less. Ryzen gen 1 and 2 delivered _massive_ value in terms of performance/$, but as you said, they never really OC'd at all. I prefer the latter, you prefer the former - to each their own, but your desire is by far the more niche and less generally relevant one.


----------



## The red spirit (May 31, 2021)

Valantar said:


> A custom loop, yes, but with a single 240mm rad for both CPU and GPU, and a quasi-AIO CPU DDC pump-block combo that isn't particularly good thermally. Also, the loop is configured for silence and not thermals, with fans ramping slowly and based on water temperatures rather than component temperatures.


Despite that, it's still a custom loop, and it's still likely on par with bigger air coolers.




Valantar said:


> Sorry, but that's nonsense. Silicon is perfectly fine running at 90-100°C for extended periods of time. As I've said before here, look at laptops - most laptops _idle_ in the 60s-70s and hit tJmax at any kind of load as they prioritize keeping quiet + accept that running hot doesn't do any harm.


And tell me how long those laptops last. I doubt they'll still be alive after a decade. And it's not like this is unknown - we all remember the nVidia fiasco with 8000-series GPUs cooking themselves to death. Many GTX 480s are dead. Many R9 290Xs are dead. And take any AMD monstrosity like the Fury X - even ignoring water cooler failures, the core itself is cooked to death on most cards.




Valantar said:


> It won't be harmless if you _also_ ramp voltages high while loading the CPU heavily, but advanced self-regulating CPUs like Ryzens don't allow that combination unless you explicitly disable protections and override regulatory mechanisms. Heck, Buildzoid once tried to intentionally degrade his 3700X, and after something like 60 continuous hours at >110°C (thermal limits bypassed) and 1.45V under 100% load he lost ... 25MHz of clock stability. So under any kind of regular workload degradation is never, ever happening, as that combination of thermals, voltage and load over time is utterly absurd for real-world workloads. Sure, his sample might be very resistant to electromigration, but even accounting for that there's no reason to worry at all.


Well, I have read about some dude (at OCN) trying to observe electromigration on a Sandy Bridge i7. He ramped the voltage up to 1.7V and kept the CPU cool, but after only 15 minutes it needed more voltage to be stable. And at more sane voltages, he needed a few hours to make it need more voltage. Now translate that to 8 years of computer usage. You would want a CPU to be functional for at least 15 years, and most people want it working for 8 years or so - any accelerated electromigration at such rates isn't acceptable. And if that Ryzen needed only that much to electromigrate, think about Ryzens running stock with stock coolers. They usually stay at 85C under load and still get voltage in the 1.2-1.45 volt range. That's very close to Buildzoid's test, and only 60 hours to damage it like that is really not good, knowing that Ryzen chips likely don't test their own stability and ask for more volts from the factory than what AMD set them to have.



Valantar said:


> PL1 is absolutely not how Intel defines TDP. PL1 is defined _from_ TDP; TDP defines a thermal output class of CPUs, towards which CPUs are tuned in terms of base clock and other characteristics. Power draw is only tangentially related to TDP.


It is how Intel defines TDP: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload" - aka the long-term power limit, which is PL1, and it is always set to match the advertised TDP.



Valantar said:


> It's not going to change. The 65W TDP tier is utterly dominant in the OEM space, which outsells DIY by at least an order of magnitude. 65W TDPs for midrange and lower end chips aren't changing. If you want more for DIY, they have a K SKU to sell you to cover that desire - for a price, of course. You, and us DIYers overall, are not first in line for things being adjusted to our desires, and never will be.


Sure, but prebuilts had no problem dealing with 95 watt TDPs in the past. Let's not forget the i7 2600K, Core 2 Quads, or any AMD Phenom. A 65 watt TDP is just choking the chips for no good reason.



Valantar said:


> They are supposed to be better binned - whether they are in real life is always a gamble, as there's a lot of overlap between different bins, and some are interchangeable depending on the application.


I don't think they bin them, and I haven't heard of that at all. It would be rather stupid of them to separately release those as faster, lower-voltage models, as it would mean more silicon unable to match the Intel spec of the SKU.



Valantar said:


> Again: it seems like you haven't read the rest of this thread at all. I'll just point you to this post. Though especially this part:
> 
> Saying "DIY market was just fine without TDP shenanigans" is such an absurd reversal of reality that it makes it utterly impossible to actually discuss the issues at hand. TDPs have _never_ been directly related to power draw, nor have they ever been intended for the DIY market as anything beyond a product class delineation.


I have - stop saying that nonsense. I may not agree with it, but that doesn't mean I don't read it. Anyway, TDP was once a decent metric, no need to shit on it. How hard could it possibly be for a chip maker to calculate amps*volts for each chip at its maximum theoretical load? It's not hard at all, but it is for us, as we usually aren't told a chip's official voltage or how many amps it can pull. TDP only becomes a load of crock if companies start to obfuscate what it actually is and feed the public with bullshit. The Pentium 3 never had a problem of incorrectly specified TDP - the 1.4GHz model was rated at 32.2 watts. The Pentium 4 2.8GHz was rated at 68.4 watts. That was what you could measure with the CPU loaded, if you measured the CPU power rail. Just like they could back then, they still could do the same with all the power limits of modern chips.



Valantar said:


> As for abandoning boost: well, if you'd be happy with ~2.5GHz CPUs, then by all means. Because that's what we'd get if there wasn't boost - we'd get base clock at sustained TDP-like power draws. The 65W TDP tier isn't going anywhere, again, as OEMs buy millions of those CPUs, and changing it would be extremely expensive for them.


For me it would be 4GHz at 105 watts, which is exactly what the i5 10400F pulls. And I don't care about OEMs, as in my country they are legitimately rare and practically don't exist. OEMs are an American-only concept, which doesn't apply to the rest of this planet.



Valantar said:


> Yes. But that's not throttling. That's part of tuning a DIY system. Nobody has ever promised 100% boost clock 24/7 under 100% all-core load, or even 1-core load. You really need to be more nuanced in your approach to this.


I would be if I expected it to be used with an aluminum sunflower cooler and if all of us spoke legalese every day, but I'm not. If the chip can safely achieve that and do no harm to the board, why on Earth wouldn't I want that "boost"? For nearly a decade, boost has been almost identical to base speed in practice, as the CPU either works at idle speed or maximum speed, which is boost. They rarely work at base clock, and most users never see it unless they disable boost in the BIOS.




Valantar said:


> "At below maximum manufacturer specified temperature" ... okay ... so, anything below 100°C-ish? Because above you seemed to say 80°C was unacceptable. Yet that's quite a bit below maximum. Also, 1200rpm ... of which model of fan, how many fans, which case, which cooler? And _obviously_ the TR system would have failed, _it had a clogged AIO cooler_. My point was: you're making generalizing claims without defining even close to a sufficient amount of variables. Your criteria still make it sound like my cooling setup is well within your wants, yet you're saying above that it's unacceptable, so ... there's something more there, clearly.


Why not look at my system in my profile, then? My cooler is clearly a Scythe Choten with the stock fan, the case is a Silencio S400. It has 3 fans in it - one top exhaust, two front intakes - usually working at 600-800 rpm. And sure, I am biased; of course "under manufacturer spec" is the bare minimum spec for cooling. I said that it's acceptable only when the CPU is running Prime95 and the GPU is running FurMark at the same time. You got those 80C at nowhere near such a high load, and not even close to a worst case scenario.




Valantar said:


> Prime95 is not "realistic". Yes, some people calculate primes for weeks. Some people calculate the changes in molecular or cell structures of complex organisms when subjected to various chemicals. That doesn't make either a relevant end-user workload.


You clearly said that it's very relevant for them.



Valantar said:


> If you're doing workstation things, get a workstation, or accept that consumer-grade products aren't designed for that and you need to overbuild to match.


As if there were stuff like that. HEDT is high end desktop, not a workstation. And why would I not use my plebeian chips for such loads? They are perfectly capable of that and are designed to be general purpose. General purpose means that if I want, I only use it for playing mp3s, and if I want, I use it to assemble molecules. I see nothing stupid or unreasonable about that. It might not be the fastest, but that doesn't mean it can be unstable or catch on fire.




Valantar said:


> As for FurMark, whether a GPU can "handle" it is irrelevant. It is a workload explicitly created for maximum heat output, which is _dangerous_ to run. It doesn't matter what thermals your GPU reads (heck, the very fact that you're saying "it can handle it with BIOS mods!" says enough by itself!) - the issue is that it creates extreme hotspots away from the thermal sensors on your GPU. Most GPUs - all of them pre-RDNA - have their thermal sensors along the edge of the die. Under normal full loads there's easily a 10-20°C difference between the edge and centre of the die. FurMark exaggerates that - so if your edge thermal sensor is reading 70-80, the hotspot temperature might be 110 or higher. If your hardware doesn't die, that's good for you, but please stop subjecting it to unnecessary and unrealistic workloads just for "stress testing".


I don't run it for long, only to get an idea of what my thermals are.



Valantar said:


> And yes, of course you get voltage reductions if you disable boost. That's ... rather obvious, no? Go below stock behaviour, and you'll get lower voltages and power draws. Not quite surprising.


Actually, boost also technically runs above manufacturer spec and is never accounted for in TDP calculations.



Valantar said:


> No. Old stock = old, unsold products that have been sitting on shelves for a long time. That CPU was launched in October 2012, and while production of course ran for several years after that, it definitely wasn't recently manufactured when you bought it. And even if it was, it was still ancient tech at that point. Which is fine, but please don't try to say that it wasn't old.


The 870K was launched in 2015, which is what the system was assembled for, and it technically had a refreshed architecture, so it wasn't the same as the older part - therefore it's 2015 tech.



Valantar said:


> That's a commonly held enthusiast belief, but it's a rather irrational one. Power viruses and unrealistic heat loads can be beneficial if you're _really_ pushing things and still want 24/7 stability, but for anything else they're rather useless, potentially misleading, and possibly harmful to your components. What is the value of keeping CPU temps under a given level while running Prime95 if the CPU is never going to see a workload similar to that? Etc.


And you think that Intel doesn't use a "power virus" at the factory to determine heat output? The last time I read about it, Intel used their in-house tools for that, with specific heat simulators or at least specialized software loads. They do exactly what Prime95 does, but better and even more taxing on the chip, and in the final settings they add some safety margin to account for less-than-perfect VRMs, vDroop, hot climates etc. If that's all bullshit, then Intel should fire all those people who ensure the stability and predictable heat output of chips, as they are apparently useless.




Valantar said:


> Value is relative. You clearly value overclocking for its own sake. Which is of course fine if that's what you like to spend your time doing! But your conception of value handily overlooks the fact that FX (and Bulldozer derivatives in general) performed rather terribly. They were fun from a technical and OC perspective, and they were cheap, but they were routinely outperformed by affordable i5s (and even i3s towards the end) with half the cores or less. Ryzen gen 1 and 2 delivered _massive_ value in terms of performance/$, but as you said, they never really OC'd at all. I prefer the latter, you prefer the former - to each their own, but your desire is by far the more niche and less generally relevant one.


In my situation I only had a choice between buying an i3 4130 or an FX 6300; the FX was overall better and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.


----------



## AusWolf (May 31, 2021)

The red spirit said:


> Until a wild Nissan Micra with RB20DE swap overtakes you. Or for that matter maybe some chap lucks out and finds Mitsu colt with 4g63 engine and invests a bit in turbo and handling mods... Oh, the possibilities are endless.


They can overtake me all they want. I'm out to enjoy the drive, not to race. 



Valantar said:


> Sounds interesting! Let me know if you make a build log?


Will do, I just have to find the proper channel first (maybe a new forum thread). 

So far, I was bold enough to test with the stock settings (65 W PL1, 28 s Tau, 225 W PL2). The CPU hits 90 °C and power consumption around 180-190 W in all-core workloads, but for some reason, real-life Tau seems to last only 3-4 seconds, not 28 - maybe because of thermals, though there is no throttling reported. After that, when PL1 kicks in, temps quickly settle around 60-65 °C and clock speed drops to 2.7-2.8 GHz. In a 10-minute sustained Cinebench run, the 11700 scores similarly to a Ryzen 1700X this way.
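One plausible reason the real-life Tau looks so much shorter than 28 s: Intel's turbo budget tracks an exponentially weighted moving average (EWMA) of package power with time constant Tau, rather than a simple countdown timer. A minimal sketch, using this post's 65 W PL1 / 28 s Tau / ~190 W draw; the 30 W starting average is an assumption, and real firmware behaviour varies by board:

```python
# Sketch of an EWMA-based turbo budget: boost is allowed while the
# moving average of package power stays below PL1. If the average
# doesn't start from deep idle, the boost window is far shorter than Tau.
# PL1/Tau/draw are from the post above; avg0 = 30 W is an assumption.

def boost_window(pl1: float, tau: float, draw: float, avg0: float,
                 dt: float = 0.01) -> float:
    """Seconds of full-power boost before the EWMA of power reaches PL1."""
    avg, t = avg0, 0.0
    while avg < pl1:
        avg += (draw - avg) * dt / tau   # discrete EWMA update
        t += dt
    return t

t = boost_window(pl1=65, tau=28, draw=190, avg0=30)
print(f"boost lasts ~{t:.1f} s, far short of the 28 s Tau")
```

With those numbers the window comes out in the single-digit seconds, which is at least in the right ballpark for the 3-4 s observed.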

In single-threaded runs, the CPU "only" eats around 50 W and maintains its highest boost clock of 4.8-4.9 GHz all the time, which puts it on par with Ryzen 5000 chips in the score. Temps are around 72-75 °C.

It appears that the stock 225 W PL2 is too aggressive for my setup, but the 65 W PL1 is too mild. There is definitely more tweaking needed. 

Edit: It's also interesting to note that single-threaded runs use less power, but result in higher temperatures.



The red spirit said:


> In my situation I only had a choice between buying an i3 4130 or an FX 6300; the FX was overall better and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.


I had an FX-8150 and a Core i3-4160 as well. While from a completely subjective standpoint I loved both, the i3 was the better chip for gaming _at that time_.


----------



## The red spirit (May 31, 2021)

AusWolf said:


> So far, I was bold enough to test with the stock settings (65 W PL1, 28 s Tau, 225 W PL2). The CPU hits 90 °C and power consumption around 180-190 W in all-core workloads, but for some reason, real-life Tau seems to last only 3-4 seconds, not 28 - maybe because of thermals, though there is no throttling reported.


It doesn't report throttling because technically losing some boost isn't throttling. Your Tau is likely working correctly; it's just that your cooling can't cope with the stock PL2. Overall, it seems you need a PL1 of 80 watts and a PL2 of 110-120 watts.
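For a rough idea of what a given PL1 buys you in sustained clocks, there's the common enthusiast approximation that power scales roughly with f³ (P ~ f·V², with V scaling roughly with f). A back-of-the-envelope sketch only - the 190 W / 4.4 GHz reference point is an assumed value for an unlimited 11700, not a measured spec:

```python
# Rough sustained all-core clock estimate from a PL1 choice, using the
# cubic power-frequency approximation. The reference point (190 W at
# 4.4 GHz all-core) is an illustrative assumption, not Intel data.

def sustained_clock(pl1_w: float, ref_w: float = 190.0,
                    ref_ghz: float = 4.4) -> float:
    """Estimate all-core clock (GHz) at a given PL1, scaling from a reference."""
    return ref_ghz * (pl1_w / ref_w) ** (1 / 3)

for pl1 in (65, 80, 125):
    print(f"PL1 {pl1:>3} W -> ~{sustained_clock(pl1):.1f} GHz all-core")
```

It's crude (voltage/frequency curves aren't actually cubic across the whole range), but it's good enough to see why raising PL1 from 65 W to 80 W only buys a few hundred MHz.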





AusWolf said:


> I had an FX-8150 and a Core i3-4160 as well. While from a completely subjective standpoint I loved both, the i3 was the better chip for gaming _at that time_.


That's great, but the i3 quickly became rather obsolete. Everyone knew that getting a dual core chip in 2014 wouldn't end well, and it only took some better-threaded games to arrive to make FX chips clearly better than the i3. Also, you could overclock FX a lot if you desired. I'm not sure about Zambezi chips, but Vishera was truly adequate. The crazy thing is that the FX 6300 had a low price of just 130 Euros, and to this day there isn't a 6-core chip selling so cheap. The Ryzen 1600 was 160 Euros, the i5 10400F was 155 Euros. However, the real advantage of the Intel platform is that you could swap the CPU for an i7 later, and to this day it would be good. On AM3+ there wasn't any upgradability, only higher overclocking headroom.


----------



## AusWolf (Jun 1, 2021)

The red spirit said:


> It doesn't report throttling, because technically losing some boost isn't throttling. Your Tau is likely working correctly, it's just that your cooling can't cope with stock PL2. Overall, it seems that you need PL1 of 80 watts and PL2 of 110-120 watts.


Funnily enough, I just tested that before work last night.  The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80s while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above the stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.

Edit: All tests with the 11700 were done with the "Silent" BIOS fan preset. I'm not only an SFF freak, but a silence freak too. 



The red spirit said:


> That's great, but the i3 quickly became rather obsolete. Everyone knew that getting a dual-core chip in 2014 wouldn't end well, and it only took some better-threaded games arriving to make FX chips clearly better than the i3. You could also overclock FX a lot if you desired. I'm not sure about Zambezi chips, but Vishera was truly adequate. The crazy thing is that the FX 6300 had a low price of just 130 Euros, and to this day there isn't a six-core chip selling that cheap. The Ryzen 1600 was 160 Euros; the i5 10400F was 155 Euros. However, the real advantage of the Intel platform was that you could swap the CPU for an i7 later, and to this day it would be good. On AM3+ there was no upgradability, only higher overclocking headroom.


Yes, the i3 became obsolete as games started using more threads, but that's why I said it _was_ the better gaming chip _at that time_.  I remember seeing around 20% usage on both the 8150 and the HD 7970 I had it paired with in Assassin's Creed 3, and the game barely ran at 30 FPS. It was an extreme case, but still: games didn't need 8 cores back then, and the single-core performance on FX was just plain terrible.


----------



## Kissamies (Jun 1, 2021)

The red spirit said:


> Everyone knew that getting a dual core chip in 2014 wouldn't end well


In fact, an overclocked Pentium G3258 was still a great budget chip, though it did become obsolete pretty quickly, so you're pretty much right there.


----------



## Valantar (Jun 1, 2021)

AusWolf said:


> Funnily enough, I just tested that before work last night.  The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80 degrees while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.
> 
> Edit: All tests with the 11700 were done with the "Silent" BIOS fan preset. I'm not only an SFF freak, but a silence freak too.


Is it possible to be an SFF freak without also being a silence freak? I know there are people out there who use FlexATX PSUs and don't mind the noise, but to me, those people belong in the cuckoo bin. Some say 'size, silence, performance - pick two'; I say 'screw that, I want all three' - and the fun is making that happen 

Current Ryzens definitely have worse heat dissipation than recent Intels - it would be shocking if not, given the far greater heat density combined with the off-centre positioning of the cores. The area of the cores is at most half, and the CCD is off in a corner rather than centred - very different from Intel, for sure. Not all coolers handle that equally well, and it sounds like your Shadow Rock might be particularly poor (though it might also have been a bad mount? Ryzens need pretty even pressure across the IHS). What were your clocks at those temperatures? When I was testing my build I ran my 5800X with an old Hyper 212 Evo (open on my desk), and it kept it very nicely cool (at least for that class of cooler) and boosting above spec. I think I saw slightly higher all-core boost with that compared to my current water loop, actually - which goes to show that a reverse-flow block isn't ideal, but there aren't many DIY DDC pump+block combos out there! Then again, they are engineered around running rather hot, with the dynamic boost system scaling very well around "high" thermals. There's a dedicated monitoring circuit in these CPUs that controls currents, voltages, clock speeds and more in order to ensure the CPU never reaches potentially harmful combinations of these, and the only way of overriding this is through fixed-clock OC. But I definitely understand the concern if your cooler didn't seem to be doing the job.

As far as I can remember Intel's boosting system is rather 'dumb', in that the CPU will try to boost to its set boost clock within PL2 as long as tau hasn't expired, but will drop down to PL1 (and whatever boost can be maintained within that) completely if it reaches thermal limits within that span. I don't think there are dynamic limits at all. So it's not as opportunistic a system as on recent Ryzens, which just go as fast as they can until limits are hit, then step down gradually from that until an equilibrium is found. That likely explains why you're better off setting a higher PL1 and kind of ignoring PL2 - though keeping PL2 high is no doubt beneficial for responsiveness in desktop uses, as boosting very high there (for very short spans of time) will make for a smoother experience.
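The PL1/PL2/tau behaviour described above can be sketched as a toy simulation. To be clear, this is a rough simplification, not Intel's actual algorithm - real implementations track an exponentially weighted moving average of measured package power - and the default numbers (65 W / 154 W / 28 s) are merely illustrative, roughly the recommended limits for a 65 W Rocket Lake part if I recall correctly:

```python
def simulate(pl1=65.0, pl2=154.0, tau=28.0, demand=154.0, dt=1.0, steps=60):
    """Return the package power (W) the CPU may draw at each 1 s step.

    Toy model: burst at PL2 while the running average power stays below
    PL1, then fall back to sustained PL1. `tau` is the time constant of
    the exponentially weighted moving average.
    """
    avg = 0.0        # running average package power (EWMA)
    allowed = []
    for _ in range(steps):
        # Opportunistic boost: PL2 while the average is under budget.
        p = min(demand, pl2) if avg < pl1 else min(demand, pl1)
        # EWMA update: average decays toward the instantaneous draw.
        avg += (dt / tau) * (p - avg)
        allowed.append(p)
    return allowed

power = simulate()
# Early samples run at PL2; once the average power budget is spent,
# the chip settles at PL1 for the rest of the sustained load.
```

Running this shows exactly the behaviour HWUB measured: a short burst at full boost, then a hard step down to the sustained limit, with the burst length governed by tau.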


----------



## The red spirit (Jun 1, 2021)

AusWolf said:


> Funnily enough, I just tested that before work last night.  The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80 degrees while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.


If that's the case, 80 watts is pretty much all you can reasonably achieve. You may want PL2 set to 85 watts and Tau to something like 8 seconds.
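Side note for anyone who'd rather script limits like these than set them in the BIOS: on Linux, the kernel's intel-rapl powercap sysfs interface exposes them, but it takes microwatts and microseconds. A minimal sketch of the unit conversion, using the suggested 80 W / 85 W / 8 s values (the sysfs paths are the standard powercap ones; the actual writes need root and RAPL support, so they're shown only as a comment):

```python
def to_rapl_units(pl1_w, pl2_w, tau_s):
    """Convert watt/second limits into the microwatt/microsecond
    values the intel-rapl sysfs files expect."""
    return {
        "constraint_0_power_limit_uw": int(pl1_w * 1_000_000),  # long-term (PL1)
        "constraint_1_power_limit_uw": int(pl2_w * 1_000_000),  # short-term (PL2)
        "constraint_0_time_window_us": int(tau_s * 1_000_000),  # tau for PL1
    }

limits = to_rapl_units(80, 85, 8)
# Example write (as root, on a system with RAPL support):
#   echo 80000000 > /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw
```

Note that BIOS-level limits and RAPL limits can coexist; the CPU enforces whichever is lower.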




AusWolf said:


> Yes, the i3 became obsolete as games started using more threads, but that's why I said it _was_ the better gaming chip _at that time_.  I remember seeing around 20% usage on both the 8150 and the HD 7970 I had it paired with in Assassin's Creed 3, and the game barely ran at 30 FPS. It was an extreme case, but still: games didn't need 8 cores back then, and the single-core performance on FX was just plain terrible.


Oh, I get it, but when you don't have much cash, you want your stuff to last long. And that was a strong point of FX. Sure, it may have always been a 45 fps chip, but if it can keep being like that for 6 years instead of 3 years, that's a win. I never played Assassin's Creed 3, but usually FX for me was capable of 40-50 fps. The game that really killed FX for me was Far Cry 5. The CPU clearly became a limiting factor there, and that's mostly because Ubisoft can't write a game properly. Anyway, I thought it was time to upgrade, and once I did I never played Far Cry 5 again. From what I have observed, Far Cry 5 has a terrible engine, and even really good hardware struggles with it. The main problem is that the game itself is very boring, so I completed the story of Far Cry 1 again. And nowadays Assassin's Creed is again a complete shitshow in terms of CPU optimization. But at this point I just don't understand why people even care about that franchise. AC1 and AC2 were perhaps cool, but over time the AC series just started losing the plot and became a game about anything but Ezio. 

A surprising thing is that the FX 6300 struggled in Doom Eternal. Many people say that it is a wonderfully optimized game, and yet it was really heavy on the CPU. Doom 4 was much easier to run on the CPU, whereas Doom Eternal on FX meant 30-40 fps. And the funny thing is that Vulkan was supposed to make weak hardware run Doom better, but for me it usually meant a loss of around 7 fps on average. 

The FX was also struggling in Victoria II. By the year 1920, it would inevitably be at 5-10 fps. Too bad that even the upgrade to the i5 10400F meant almost nothing; that game still doesn't run well. A fun thing is that this particular game only needs modest GPU power - probably even an ATi X600 would run it maxed out at 4K - but on the CPU it's absolutely brutal.


----------



## Valantar (Jun 1, 2021)

The red spirit said:


> Despite that it's still a custom loop and it's still likely on par with bigger air coolers.


Oh, sure. It's perfectly capable of dissipating the full ~400W heat load of my CPU+GPU at reasonable noise levels. It just happens to have a relatively poor CPU block, which means that steady-state CPU temperatures are probably 10+ degrees higher than with a better block. An apt illustration of this is that adding my 275W GPU into the mix doesn't affect CPU thermals much, so the limitation is clearly in the CPU block and not the rest of the system.


The red spirit said:


> And tell me how long do those laptops last. I doubt that they will be alive after a decade. And it's not like it's not known, we all remember nVidia GPU fiasco with 8000 series GPUs cooking themselves to death. Also GTX 480s many of them are dead. R9 290Xs many of them are dead. Any AMD monstrosity like Fury X ignoring water cooler failure, the core itself is cooked to death on most cards.


My old Thinkpad X201 lasted a decade before I sold it on, and routinely ran the CPU very hot (despite being repasted twice through its lifetime). It's true that many laptops die early, and many do die due to insufficient cooling, but it's _very_ rarely the CPU itself that fails in these cases. It might be that the PCB itself takes damage from repeated heating/cooling cycles, or the solder joints below the CPU, RAM, or anything else, or peripheral components (charging circuitry is common, as are VRM failures and internal display circuitry failures). I don't think I've ever come across a laptop with a verifiably dead CPU - though of course it is a bit difficult to tell. But CPUs are _extremely_ robust, and are closely monitored for thermals. Bad laptop designs tend to cook everything other than the CPU by not ensuring sufficient internal airflow and exhaust of hot air, which kills other things, but not the CPU itself.


The red spirit said:


> Well I have read about some dude (at OCN) trying to see electromigration of chip and it was Sandy bridge i7. He ramped up voltage to 1.7V and kept CPU cool, but only after 15 minutes it needed more voltage to be stable. And at more sane voltages, ne needed few hours to make it need more voltage. And translate that to 8 years of computer usage. You would want a CPU to be functional for at least 15 years and most people want it to be working for 8 years or so, any accelerated electromigration at such rates isn't acceptable. And if Ryzen needed only that much to electromigrated, think about Ryzens running stock with stock coolers. They usually stay at 85C under load and still get voltage in 1.2-1.45 volt range. That's very close to Buldzoid's test and only 60 hours to damage it like that is really not good, knowing that Ryzen chips likely doesn't test stability of itself and ask for more volts from factory than what AMD set it to have.


Electromigration and clock degradation varies _massively_ between process nodes and architectures, so those aren't comparable. Also, you clearly didn't read what I said: Buildzoid ran his chip way above stock thermal limits, at fixed voltages and currents, all of which were far above stock behaviour. Here's the video if you want more detail btw. But in short, he ran the CPU at 105-112°C (depending on the time of day and how hot the room was) (also, he tried running it at 1.52V, but it shut down hard due to hitting 115°C, which is apparently the hardcoded silicon thermal shutdown limit). According to him, AMD tests its chips at slightly less idiotic settings than this for hundreds of hours to ensure they don't degrade under stock conditions. And the difference in electromigration at his ~110°C 133A 1.444V (get, 1.5V set) and stock behaviour (throttling at 95°C IIRC, voltages reading similarly high but actually being bucked lower by the CPU) is very significant. He goes into this himself as well. His results, while of course a sample size of one, indicate that these CPUs if run at stock, even with terrible cooling, will _never_ degrade.


The red spirit said:


> It is how Intel defines TDP. TDP is "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload." aka long term power limit which is PL1 and it is always set to match advertised TDP.


Close, but not quite. PL1 is _recommended_ to be set equal to TDP, and you seem to be missing that "power [...] the processor _dissipates_" is something else than "power the processor _consumes_". The difference is small, but it's nonetheless meaningful. TDP has never been directly related to power draw. It's been closely aligned, but that relation has always been variable and somewhat incidental.


The red spirit said:


> Sure, but prebuilts had no problems in the past dealing with 95 watt TDP anyway. Let's not forget i7 2600k or Core 2 Quads or any AMD Phenom. 65 watt TDP is just chocking the chips for no good reason.


They dealt with them, sure, but OEMs have had a clear desire to build smaller, more affordable and space-efficient business desktops - as that's their bread and butter - and have thus pushed for lower TDPs. Also, K-SKUs like the 2600K have almost never been used in OEM systems, outside of a few gaming models. One of the major developments when Intel moved to the Core architecture was the lowering of mainstream TDPs, which in turn allowed for the proliferation of SFF and uSFF business desktops, AIOs, and the like. Most of these use 65W no-letter CPUs, while the smallest use T-SKU 35W CPUs. 95W isn't seen in these spaces.


The red spirit said:


> I don't think that they bin them and I haven't heard of that at all. It would be rather stupid of them to separately release them as faster models with lower voltages as it would mean more silicon unable to match Intel spec of SKU.


They do. You know how many SKUs Intel makes for each generation of chips, right? Binning is how they differentiate between these. And T SKUs are always taken from bins that perform well at low voltages. K SKUs are taken from bins that clock high at higher voltages. Sometimes these bins are similar, if not interchangeable. Sometimes they aren't.


The red spirit said:


> I have, stop saying that nonsense. I may not agree with that, but doesn't mean that I don't read it. Anyway, TDP was once a decent metric, no need to shit on it. How hard could it possible be for chip make to calculate amps*volts of each chips at chip's maximum theoretical load? It's not hard at all, but for us it is as we aren't usually informed about official voltage of chip or its capabilities to pull amps. TDP only becomes a load of croc if companies start to obfuscate what it actually is and feed public with bullshit. Pentium 3 never had a problem of incorrect TDP being specified, 1.4GHz model was rated at 32.2 watts. Pentium 4 2.8GHz was rated at 68.4 watts. That was what you could measure while CPU was loaded and if you measured CPU power rail. Just like they could back then, they still can do the same with all power limits of modern chips.


Well, if you did read the thread, then you're just adamant in maintaining a belief in a reality that has never existed. I would really recommend you take a step back and try to consider the larger context. Nobody has 'shit on' TDP as a metric, we are simply discussing how it's quite problematic as boost becomes more aggressive and Intel fails to enforce their specifications in the DIY market, leading to extremely wide performance deltas for seemingly identical products.

Nobody has said it would be difficult to calculate a specific TDP for each chip, but I've been trying to say for ... what, three pages of posts now, that _this is not the purpose or function of TDP_. TDP is a) a thermal dissipation specification, divided into classes, for which OEMs and cooler makers design cooling systems, and b) a marketing tier system vaguely related to power draw. You're arguing for TDP to not actually be about cooling and thermals, but rather about power draw. Which ... why would we then call it TDP? _Thermal_ _design power_? Unless that power (in watts) specifies what a cooling system must be able to dissipate, that name becomes nonsense.

As for why we can't go back to the Pentium 2/3/4 era ... well, those were fixed-clock CPUs: they had no clock scaling whatsoever, no power savings at idle to speak of, and they all had very low power consumption. The difference in cooling needs between a 35.2W and a 64.1W CPU is tiny compared to the difference between contemporarily relevant power draws like 65W vs. 225W. So again, if you want to go back to that, you also need to accept going back to the other drawbacks of the times - such as limited motherboard compatibility (no more just picking a suitable motherboard with the correct socket; you now need to explicitly check that the CPU is listed as supported!), no boost clocks (= significant drops in system responsiveness), etc. Oh, and that completely ignores the fact that it would piss off OEMs to no end and pretty much kill Intel's business relations. Which means they would never, ever do that.


The red spirit said:


> For me it would be 4GHz at 105 watts, which is what i5 10400F is exactly pulling. And I don't care about OEMs as in my country they are legitimately rare and practically don't exist. OEMs are American only concept, which doesn't apply to the rest of this planet.


Okay, so the 10400F would be rated at that. But then the 10600 (non-K) would either be specced the same (as they are the same bin, most likely), or would need to have its own TDP tier. And when each CPU has its own TDP, the metric becomes meaningless.

To be clear: what you're asking for is clearly defined _power draw metrics_. This is not _thermal design power_. I agree that accurate power draw metrics would be great to have on the spec sheet, but please stop mixing up your terms.

Also, saying "OEMs are an American concept" is ludicrous. Dell, HP and Lenovo sell the _vast_ majority of desktop PCs in the world, and they sell them to businesses, governments and educational institutions across the world.  Two of these three might be American companies, but that is utterly irrelevant - they operate globally, and in sum likely sell far more outside of the US than in the US - the US is just ~330M people, after all. Are you actually saying that major companies in your country buy their computers from small local manufacturers, or build them themselves? That is very hard to believe, as small manufacturers are quite unlikely to have the support systems major companies require. And major companies _definitely_ don't build DIY systems.


The red spirit said:


> I would be if I expected it to be used with aluminum sunflower cooler and if all of us were speaking in legalese everyday, but I'm not. If chip can safely achieve that and do no harm to board, why on Earth I wouldn't want that "boost". Boost itself has been for nearly a decade almost identical to base speed as CPU either works at idle speed or maximum speed, which is boost. They rarely work at base clock and most users don't see that unless they disable boost in BIOS.


Yes, that's how DIY PCs work. They also often ignore PL1 by setting PL2 as infinite, or set a higher PL1 than stock. But remember, you're also asking for strict adherence to TDP, and you want TDP to be equal to PL1. Something has to give here. Please make up your mind - all of these cannot logically be true at the same time.


The red spirit said:


> Why not look at my system then in profile? My cooler is clearly a Scythe Choten with stock fan, case is Silencio S400. It has 3 fans in it, one top exhaust, two front intakes, usually working at 600-800 rpm. And sure I am biased, of course "under manufacturer spec" is the most minimum spec for cooling. I said that it acceptable only when CPU is running prime95 and GPU is running Furmark at the same time. You got those 80C at nowhere near such a high load and not even close to worst case scenario.


Sorry, but my 80°C was while running Prime95 - as a response to your example. Which is also why those temperatures don't worry me whatsoever. Heck, 80°C in real world use wouldn't really be worrying either - it's well below any throttle point, and nowhere near harmful to anything. I would like it to be cooler, but I prefer silence. As for the rest of your setup, that wasn't relevant, the point was: you're setting arbitrary standards, presenting them in an oversimplified way, and using that as an argument. That is a really, really bad way of arguing.


The red spirit said:


> You clearly said that is very relevant for them.


Relevant to perhaps a couple hundred users worldwide? Sure. That is not reason to use that as a generally valid benchmark - quite the opposite. You might as well argue that the needs of rally drivers are the best way to set safety standards and equipment levels for cars. Specialist needs are specialist needs, even if they use (derivatives of) general purpose equipment.


The red spirit said:


> As if there were stuff like that. HEDT is high end desktop, not a workstation. And why would I not use my plebeian chips for such loads, they are perfectly capable of that and are designed to be general purpose. General purpose means that if I want I only use it for playing mp3s and if I want it, then I use it assemble molecules. I see nothing stupid or unreasonable about that. It might not be that fastest, but that doesn't mean it can be unstable or catch on fire.


... Xeon-W is for workstations, as is Ryzen Pro and Threadripper Pro. These are chips tested and validated for such workloads. Sure, you _can_ use any chip for such a workload, but you then also need to be cognizant that this is not a use that it's tested and validated for. And this is fine! It's likely to work perfectly. But again, you can't throw together any combination of retail consumer parts, subject them to a professional workload, and expect it to perform above spec. Which is essentially what you're arguing here.


The red spirit said:


> I don't run it long, only to get an idea of what my thermals are.


... if you're not reaching steady-state thermals, what's the point? Also, how are you getting "an idea what your thermals are" from running a power virus that generates more heat than literally any common GPU workload out there? That would give a very _un_representative view of your thermals. If you're into overblown cooling for its own sake, and pushing thermals as low as you can within your chosen parameters, then that's what you like, but stop acting like that's suitable as a generally applicable standard for anything. And again, Furmark has been demonstrated to kill GPUs at stock due to its extreme heat load and how it intentionally aims to break thermal limits. Recommending it is reckless at best.


The red spirit said:


> Actually boost is also technically running above manufacturer spec and is never accounted for in TDP calculations.


... I know. I have said so quite a few times. However, there are always safety margins built into the specification - any Intel chip, when limited to TDP in power draw, will boost to some extent (unless you've gotten the absolute worst possible chip in that bin). Thus, disabling boost will inevitably drop voltages and power draws. Disabling boost does not mean strictly adhering to TDP, as that would require individual "TDPs" (in your meaning of "power draw specs") not for each SKU, but for each physical chip, as they inevitably differ from each other.


The red spirit said:


> 870K was launched in 2015, for which the system was assembled for and technically it had refreshed architecture, so it wasn't the same at older part, therefore it's 2015 tech.


... the chip you were initially talking about still launched in October 2012.


The red spirit said:


> And you think that Intel at factories doesn't use "power virus" to determine heat output? The last time I read about that, Intel used their in house tools for that and specific heat simulators or at least specialized software loads to simulate that. They do exactly what prime95 does, but better and even more taxing on chip + in final settings they add some safety margin to account for less than perfect VRMs, vDroop, hot climates and etc. If you are saying that bullshit, then Intel should fire all those people, who ensure stability and predictable heat output of chips as they are apparently useless.


How manufacturers torture test their components and how end users use their components are not the same, nor should they be. Manufacturers need to test unrealistic worst-case scenarios. That doesn't make unrealistic worst-case scenarios good tests for end users, as _what you are testing for_ is not the same. And no, Intel doesn't use power viruses to set TDP. Many Intel CPUs throttle under power virus loads if set to stock behaviour.


The red spirit said:


> In my situation I only had a choice of either buying i3 4130 or FX 6300, FX was overall better and lasted longer. FX were a great value chips. And FX 8320 was selling for slightly less than i5 4440, so FX was maybe a better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single threaded loads. That doesn't mean that they were decently fast. I can tell that you never had an FX and have no idea what they actually were like.


Decently fast, sure, for their time and disregarding power draw. They did decently well in MT loads (though by no means close to their nominal core count advantage), consumed dramatically more power even at the same TDP when compared to Intel (which just goes to show how TDP has never been a metric for power draw), lagged behind significantly in ST workloads, and kind-of-sort-of caught up when overclocked, but at fully 3x the power consumption. They were fine for their time, if you didn't mind buying hefty cooling. But they aged very poorly, and even an i5-6600 at 65W trounces the FX-8320E OC'd to 4.8GHz in the vast majority of tests. They might have seen an uptick in relative performance as more applications have become more multithreaded, but by that time (i.e. 2018+) they were already so far behind affordable current-generation offerings there was no real point. Of course a CPU you already own is infinitely cheaper than buying a new one, so if it performed adequately that is obviously great - I'm a big fan of making hardware last as long as possible (hence my current soon-to-be 6-year-old GPU, and me keeping my Core2Quad system from 2009 to 2017). But those old FX CPUs never aged well.


----------



## The red spirit (Jun 1, 2021)

Jill Valentine said:


> In fact, an overclocked Pentium G3258 was still a great budget chip, though it did become obsolete pretty quickly, so you're pretty much right there.


Because people buying it only wanted to overclock it. Nobody really thought that it was going to last long. The notable thing about it is that you could reach 5 GHz+ on it with normal cooling, and that's why it sold so well. If Intel completely lost their marbles and released a Comet Lake Celeron that came with a base clock of 5 GHz and could be overclocked to 6.5 GHz on an air cooler, would you buy it? It would likely sell quite well.



Valantar said:


> My old Thinkpad X201 lasted a decade before I sold it on, and routinely ran the CPU very hot (despite being repasted twice through its lifetime). It's true that many laptops die early, and many do die due to insufficient cooling, but it's _very_ rarely the CPU itself that fails in these cases. It might be that the PCB itself takes damage from repeated heating/cooling cycles, or the solder joints below the CPU, RAM, or anything else, or peripheral components (charging circuitry is common, as are VRM failures and internal display circuitry failures). I don't think I've ever come across a laptop with a verifiably dead CPU - though of course it is a bit difficult to tell. But CPUs are _extremely_ robust, and are closely monitored for thermals. Bad laptop designs tend to cook everything other than the CPU by not ensuring sufficient internal airflow and exhaust of hot air, which kills other things, but not the CPU itself.


That's almost as bad as the CPU itself dying. And let's be honest, in 2021 it's nearly impossible to buy a properly engineered laptop that won't end up being a waste of money 4 years later. They became disposable ovens that anyone should avoid - just get a desktop if you can. 

And there's a difference between running hot while consuming 35 watts and running hot while consuming 200+ watts. The desktop chip will suffer far more and is much more likely to experience a failure.




Valantar said:


> Electromigration and clock degradation varies _massively_ between process nodes and architectures, so those aren't comparable. Also, you clearly didn't read what I said: Buildzoid ran his chip way above stock thermal limits, at fixed voltages and currents, all of which were far above stock behaviour. Here's the video if you want more detail btw. But in short, he ran the CPU at 105-112°C (depending on the time of day and how hot the room was) (also, he tried running it at 1.52V, but it shut down hard due to hitting 115°C, which is apparently the hardcoded silicon thermal shutdown limit). According to him, AMD tests its chips at slightly less idiotic settings than this for hundreds of hours to ensure they don't degrade under stock conditions. And the difference in electromigration at his ~110°C 133A 1.444V (get, 1.5V set) and stock behaviour (throttling at 95°C IIRC, voltages reading similarly high but actually being bucked lower by the CPU) is very significant. He goes into this himself as well. His results, while of course a sample size of one, indicate that these CPUs if run at stock, even with terrible cooling, will _never_ degrade.


There's no such thing as "will never degrade". It's just a question of how long it takes before degrading to the point of instability. Northwood chips only needed a few weeks. Overclocked Sandys also degrade fast. If you want Skylake through Comet Lake to last long, then you shouldn't use more than 1.4 volts or exceed 80C under any load.



Valantar said:


> Close, but not quite. PL1 is _recommended_ to be set equal to TDP, and you seem to be missing that "power [...] the processor _dissipates_" is something else than "power the processor _consumes_". The difference is small, but it's nonetheless meaningful. TDP has never been directly related to power draw. It's been closely aligned, but that relation has always been variable and somewhat incidental.


A CPU converts almost all electrical power into heat, so power consumption is pretty much the TDP. 




Valantar said:


> They dealt with them, sure, but OEMs have had a clear desire to build smaller, more affordable and space-efficient business desktops - as that's their bread and butter - and have thus pushed for lower TDPs. Also, K-SKUs like the 2600K have almost never been used in OEM systems, outside of a few gaming models. One of the major developments when Intel moved to the Core architecture was the lowering of mainstream TDPs, which in turn allowed for the proliferation of SFF and uSFF business desktops, AIOs, and the like. Most of these use 65W no-letter CPUs, while the smallest use T-SKU 35W CPUs. 95W isn't seen in these spaces.


They may as well use laptop chips then, and no, the i7 2600K was used in quite a few "boring" desktops. My dad's work computer is literally a decade-old i7 2700K machine with a Radeon 7770 - a prebuilt desktop that wasn't obnoxiously expensive. The catch is that it's a prebuilt computer, not a legit prebuilt like Dell; as I mentioned to you, legit prebuilts are rare here because they make zero sense to buy. That is, unless you buy one used.




Valantar said:


> They do. You know how many SKUs Intel makes for each generation of chips, right? Binning is how they differentiate between these. And T SKUs are always taken from bins that perform well at low voltages. K SKUs are taken from bins that clock high at higher voltages. Some times these bins are similar, if not interchangeable. Some times they aren't.


I'm pretty sure that T chips are just non-T chips that can't boost as high while remaining within Intel's preferred voltage target. The genuinely more efficient chips end up in laptops.




Valantar said:


> Nobody has said it would be difficult to calculate a specific TDP for each chip, but I've been trying to say for ... what, three pages of posts now, that _this is not the purpose or function of TDP_. TDP is a) a thermal dissipation specification, divided into classes, for which OEMs and cooler makers design cooling systems, and b) a marketing tier system vaguely related to power draw. You're arguing for TDP to not actually be about cooling and thermals, but rather about power draw. Which ... why would we then call it TDP? _Thermal_ _design power_? Unless that power (in watts) specifies what a cooling system must be able to dissipate, that name becomes nonsense.


Because all electrical power is converted to heat, with only a tiny fraction going into anything else. And if you claim that it's hard to calculate a specific TDP, it's not. Intel should provide several TDPs then: average TDP, all-out TDP and all-out single-core TDP.



Valantar said:


> As for why we can't go back to the Pentium 2/3/4 era ... well, those were fixed-clock CPUs, they had no clock scaling whatsoever, no power savings at idle to speak of, and they all had very low power consumptions. The difference in cooling needs between a 35.2W and a 64.1W CPU are tiny compared to the difference between contemporarily relevant power draws like 65W vs. 225W. So again, if you want to go back to that, you also need to accept going back to the other drawbacks of the times - such as limited motherboard compatibility (no more just picking a suitable motherboard with the correct socket, you now need to explicitly check that the CPU is listed as supported!), no boost clocks (= significant drops in system responsiveness), etc. Oh, and that completely ignores the fact that it would piss off OEMs to no end and pretty much kill Intel's business relations. Which means they would never, ever do that.


You don't get it. A chip at maximum load more or less becomes a fixed-clock chip anyway, and that's where Intel could measure TDP. It's really easy. Idle is irrelevant as it doesn't affect TDP, and boost only lasts so long before Tau expires. If they got rid of Tau and PL2, left turbo alone and measured TDP at the standard PL1, there wouldn't be any confusion. It's nowhere near as dramatic as you make it out to be.




Valantar said:


> Okay, so the 10400F would be rated at that. But then the 10600 (non-K) would either be specced the same (as they are the same bin, most likely), or would need to have its own TDP tier. And when each CPU has its own TDP, the metric becomes meaningless.


No it doesn't, because each CPU consumes a different amount of power, just like they always have. You can't keep the same TDP for an i5 and an i7 and expect the same clock speed. Choosing a cooler would then be a choice you make while facing reality, instead of having reality warped through TDP.




Valantar said:


> To be clear: what you're asking for is clearly defined _power draw metrics_. This is not _thermal design power_. I agree that accurate power draw metrics would be great to have on the spec sheet, but please stop mixing up your terms.


It's the same anyway.




Valantar said:


> Also, sayin "OEMs are an American concept" is ludicrous. Dell, HP and Lenovo sell the _vast_ majority of desktop PCs in the world, and they sell them to businesses, governments and educational institutions across the world.  Two of these three might be American companies, but that is utterly irrelevant - they operate globally, and in sum likely sell far more outside of the US than in the US - the US is just ~330M people, after all. Are you actually saying that major companies in your country buy their computers from small local manufacturers, or build them themselves? That is very hard to believe, as small manufacturers are quite unlikely to have the support systems major companies require. And major companies _definitely_ don't build DIY systems.


It is exactly what happens. My whole university is full of Phenom-based local prebuilts. Every school I have been to used some kind of local prebuilt. Nobody gives a damn about the support you're speaking of; even where it exists it's nearly useless, or it isn't available in the local language. In most cases it doesn't exist at all, and Dells, Lenovos and the like also carry a huge price premium with worse configs. They are simply irrelevant. That's my country, which is still considered a first-world country; now imagine what happens in the third world. They would laugh at your support argument. I'm telling you, outside of a few rich countries with actual support, almost nobody cares about OEM prebuilts, as they simply make no sense, and you may not even be able to get the model you want.





Valantar said:


> Yes, that's how DIY PCs work. They also often ignore PL1 by setting PL2 as infinite, or set a higher PL1 than stock. But remember, you're also asking for strict adherence to TDP, and you want TDP to be equal to PL1. Something has to give here. Please make up your mind - all of these cannot logically be true at the same time.


Raise TDP, kill the PL2 setting, raise the default PL1 to 95 watts. Done.




Valantar said:


> Sorry, but my 80°C was while running Prime95 - as a response to your example. Which is also why those temperatures don't worry me whatsoever. Heck, 80°C in real world use wouldn't really be worrying either - it's well below any throttle point, and nowhere near harmful to anything. I would like it to be cooler, but I prefer silence. As for the rest of your setup, that wasn't relevant, the point was: you're setting arbitrary standards, presenting them in an oversimplified way, and using that as an argument. That is a really, really bad way of arguing.


Now load the GPU too and see if your system can actually deal with the heat.




Valantar said:


> Relevant to perhaps a couple hundred users worldwide? Sure.


That would be just one bigger lab. Dude, face reality: many people run their CPUs fully loaded. Tell me how many people transcode video, run BOINC, run Folding@home or do something else demanding. That would be millions, not hundreds.



Valantar said:


> ... Xeon-W is for workstations, as is Ryzen Pro and Threadripper Pro. These are chips tested and validated for such workloads. Sure, you _can_ use any chip for such a workload, but you then also need to be cognizant that this is not a use that it's tested and validated for. And this is fine! It's likely to work perfectly. But again, you can't throw together any combination of retail consumer parts, subject them to a professional workload, and expect it to perform above spec. Which is essentially what you're arguing here.


Do you honestly think that you need those chips for work? Ever heard of being ripped off?




Valantar said:


> .... if you're not reaching steady-state thermals, what's the point? Also, how are you getting "an idea what your thermals are" from running a power virus that generates more heat than literally any common GPU workload out there?


Because at 15-20 minutes it reaches the highest temperature it will ever achieve, and any further testing is pointless.




Valantar said:


> ... I know. I have said so quite a few times. However, there are always safety margins built into the specification - any Intel chip when limited to TDP in power draw will boost to some extent (unless you've gotten the absolutely worst possible chip in that bin). Thus, disabling boost will inevitably drop voltages and power draws. Disabling boost does not mean strictly adhering to TDP (as that would require individual "TDP"s (in your meaning of "power draw specs) not for each SKU, but for each physical chip, as they inevitably differ from each other.


Disabling boost means running exactly at TDP or below. Intel's own definition:
"Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload"



Valantar said:


> ... the chip you were initially talking about still launched in October 2012.


You can't read: the 870K launched in 2015. The 760K was a replacement for it due to unexpected technicalities.




Valantar said:


> How manufacturers torture test their components and how end users use their components are not the same, nor should they be. Manufacturers need to test unrealistic worst-case scenarios. That doesn't make unrealistic worst-case scenarios good tests for end users, as _what you are testing for_ is not the same. And no, Intel doesn't use power viruses to set TDP. Many Intel CPUs throttle under power virus loads if set to stock behaviour.


You'd better show me that "throttling". It's impossible. I tested my own i5 10400F under Prime95 without turbo and it was consuming around 40 watts. That's one of the heaviest loads imaginable, and it's nowhere close to the 65-watt TDP. Only an i9 10900 would get closer to throttling, but I don't think it would ever actually reach that point.
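For what it's worth, package-power claims like this are easy to check yourself on Linux: the kernel's powercap driver exposes the package energy counter in sysfs. A minimal sketch (it assumes an Intel CPU with the intel-rapl driver loaded; the helper names are mine):

```python
import os
import time

# Package-0 energy counter (microjoules) exposed by the Linux intel-rapl powercap driver
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def average_watts(start_uj: int, end_uj: int, seconds: float) -> float:
    """Turn two energy readings (microjoules) into average power in watts."""
    return (end_uj - start_uj) / seconds / 1_000_000

def sample_package_power(seconds: float = 5.0) -> float:
    """Read the counter twice, `seconds` apart, and return the average package draw.
    Note: the counter wraps eventually, so keep the interval short."""
    with open(RAPL_ENERGY) as f:
        start = int(f.read())
    time.sleep(seconds)
    with open(RAPL_ENERGY) as f:
        end = int(f.read())
    return average_watts(start, end, seconds)

if __name__ == "__main__" and os.path.exists(RAPL_ENERGY):
    print(f"package power: {sample_package_power():.1f} W")
```

Run it while Prime95 is looping; a locked chip held to its PL1 should settle at or below that limit.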




Valantar said:


> Decently fast, sure, for their time and disregarding power draw. They did decently well in MT loads (though by no means close to their nominal core count advantage), consumed dramatically more power even at the same TDP when compared to Intel (which just goes to show how TDP has never been a metric for power draw)


Lol no, it also thermally dissipated all that heat. You missed out on it, but the "95 watt" FX 6300 came with an aluminum cooler that had a 70mm fan spinning up to 6500 RPM. Even then the CPU could rather easily overheat. A Hyper 103 was enough for the FX 6300 at stock settings. Meanwhile the FX 8320 likely needed a 212 Evo for stock settings, which is designed to dissipate close to 200 watts anyway.



Valantar said:


> lagged behind significantly in ST workloads, and kind-of-sort-of caught up when overclocked, but at fully 3x the power consumption.


It's only 2 times, if you actually read what you posted.



Valantar said:


> They were fine for their time, if you didn't mind buying hefty cooling. But they aged very poorly, and even an i5-6600 at 65W trounces the FX-8320E OC'd to 4.8GHz in the vast majority of tests.


Hardly (they're closely matched, with slightly better overall results for the i5), and the i5 6600 wasn't launched in 2014. Also, the i5 cost nearly twice what the FX 8320 did, so yeah, a totally fair comparison.



Valantar said:


> They might have seen an uptick in relative performance as more applications have become more multithreaded, but by that time (i.e. 2018+) they were already so far behind affordable current-generation offerings there was no real point. Of course a CPU you already own is infinitely cheaper than buying a new one, so if it performed adequately that is obviously great - I'm a big fan of making hardware last as long as possible (hence my current soon-to-be 6-year-old GPU, and me keeping my Core2Quad system from 2009 to 2017). But those old FX CPUs never aged well.


You are just shitting on them way more than they actually sucked. It's your comment that didn't age particularly well. 

And now compare FX chips with their price equivalents and era equivalents:
FX 8320 with i3 3250 or i5 3350P
FX 6300 with i3 3225
FX 4300 with i3 3210

And this is the closest video I found to what I wanted to see, and it still features an Intel chip that was more expensive than the FX 8320 ($169 vs $184):









Did FX actually suck? Not really. And the wattage of FX is similar to what older chips consumed, so FX wasn't exceptionally bad in that aspect either. So an 8-core FX is closer to a 4-core i5, and a 6-core FX is clearly better than an i3. And 9 years later the FX 8320 is still delivering a playable experience in games:









And you can overclock it so it performs better. You can easily achieve a 4.4GHz overclock from the stock 3.5GHz, which gets you around 20% more performance. Now tell me, how exactly did FX suck as a long-term budget chip, and how was the i5 3350P actually better?


----------



## Kissamies (Jun 1, 2021)

The red spirit said:


> Because people buying it only wanted to overclock it. Nobody really thought that it's going to last long. The notable thing about it is that you could reach 5 GHz+ on it with normal cooling and that's why it sold so well. If Intel completely lost their marbles and released Comet Lake Celeron, which comes with base clock of 5 GHz and can be overclocked to 6.5 GHz on air cooler, would you buy it? It would likely sell quite well.


Nah, they didn't hit 5GHz as it's a rare sight to see even a 4790K hit that frequency. But I get your point.

Weird that Intel still sells Celerons as 2c/2t parts; even for desktop use that's just insufficient.


----------



## The red spirit (Jun 1, 2021)

Jill Valentine said:


> Nah, they didn't hit 5GHz as it's a rare sight to see even a 4790K hit that frequency. But I get your point.











I don't see why not. If Linus achieved a near-5GHz overclock and the temps were in check, there's no reason not to clock it further. Unless I'm unaware of some architectural peculiarity, it seems it could do more than 5GHz easily.



Jill Valentine said:


> Weird that Intel still sells Celerons as 2c/2t parts; even for desktop use that's just insufficient.


It's fine for web browsing and office tasks.


----------



## The red spirit (Jun 2, 2021)

@Valantar 
I have calmed down a bit, thought about it, and forced myself to think TDP and variable performance through. I may not have liked it and somewhat ignored it, but it's actually genius. It lets you get more performance out of the same cooler, and, very importantly, if you don't like the stock cooler you can keep the same chip, change absolutely nothing in the BIOS, upgrade the cooler, and get a very cool and quiet CPU, without a higher PL1 eating up the extra cooling capacity. And many benefits of this approach can still be enjoyed with very simple coolers. Eh, maybe it isn't that bad.

However, if Intel wants to be truly successful with this, they absolutely must step up their communication, because people have come to think that running outside of Intel's spec is acceptable and normal, and so we see videos like this:









Linus is a dipshit reviewer who doesn't read or comprehend spec sheets, but the problem is that he communicates effectively with the tech crowd and has a lot of influence on potential buyers. If Linus and other techtubers keep doing this, Intel will be forced to raise TDP and reduce PL2. The problem is that people are much more likely to watch a YT video than to read Intel's spec sheet. I wonder: if some tech Karen actually got pissed off about performance and sued Intel or some other brand for "not getting the full performance" and actually won in court, what kind of aftereffects would that have? You know people often sue for ridiculous reasons in 'Murica, and sometimes they win.

Anyway, right now pretty much every techtuber expects to get almost full turbo speed all the time, and if they get any less than that, which is still perfectly within spec, it's not going to end well. After all, techtubers have enormous influence and can do a lot of harm or good to certain brands and their engineering decisions. If Intel does nothing about that, well, they are going to be fucked.


----------



## AusWolf (Jun 3, 2021)

The red spirit said:


> @Valantar
> I have calmed down a bit, thought about it, and forced myself to think TDP and variable performance through. I may not have liked it and somewhat ignored it, but it's actually genius. It lets you get more performance out of the same cooler, and, very importantly, if you don't like the stock cooler you can keep the same chip, change absolutely nothing in the BIOS, upgrade the cooler, and get a very cool and quiet CPU, without a higher PL1 eating up the extra cooling capacity. And many benefits of this approach can still be enjoyed with very simple coolers. Eh, maybe it isn't that bad.
> 
> However, if Intel wants to be truly successful with this, they absolutely must step up their communication, because people have come to think that running outside of Intel's spec is acceptable and normal, and so we see videos like this:
> ...


That brings us back to one of my very first posts in this forum thread: the problem here (I think) isn't the loosely defined Intel spec. It also isn't motherboard manufacturers making different tiers of motherboards that fulfil the spec in different ways. The problem is 1. motherboard manufacturers not communicating their VRM specifications towards the public, and 2. reviewers expecting every single motherboard to be able to deliver 150+ Watts of power to the CPU, stay cool and maintain maximum boost frequencies at the same time, and then giving manufacturers sh** if they fail to do so with certain models. They of all people should acknowledge that sticking to a 65 W power limit is just as much within spec as running max boost clocks. They should also realise that nobody is going to buy the cheapest motherboard on the market without any background information and expect it to run full boost on an 11900K. Well, some people might, but we generally refer to them as retards. Hardware Unboxed made a big deal out of nothing imo (except for that one ASRock motherboard that truly failed in one of their later videos). As for Linus, he used to be good, but he's been all for the show lately. I prefer watching his weird experiment videos to be fair.

As for the configurable TDP/PL values, the more I'm playing with my new 11700, the more I'm starting to like it. I recently changed the memory controller setting from Auto to Gear 2, and package/core temps magically dropped by 10 °C with an extra 100 points in Cinebench R23. I might be able to increase PL1 even further.


----------



## The red spirit (Jun 3, 2021)

AusWolf said:


> That brings us back to one of my very first posts in this forum thread: the problem here (I think) isn't the loosely defined Intel spec. It also isn't motherboard manufacturers making different tiers of motherboards that fulfil the spec in different ways. The problem is 1. motherboard manufacturers not communicating their VRM specifications towards the public, and 2. reviewers expecting every single motherboard to be able to deliver 150+ Watts of power to the CPU, stay cool and maintain maximum boost frequencies at the same time, and then giving manufacturers sh** if they fail to do so with certain models. They of all people should acknowledge that sticking to a 65 W power limit is just as much within spec as running max boost clocks. They should also realise that nobody is going to buy the cheapest motherboard on the market without any background information and expect it to run full boost on an 11900K. Well, some people might, but we generally refer to them as retards. Hardware Unboxed made a big deal out of nothing imo (except for that one ASRock motherboard that truly failed in one of their later videos). As for Linus, he used to be good, but he's been all for the show lately. I prefer watching his weird experiment videos to be fair.


Honestly, some prebuilt buyers will end up with an 11900K on a cheap board, and that board may even fail to deliver the expected PL1 power of 125 watts. People buying a low-end board rationalize that if a chip is on the compatibility list on the motherboard manufacturer's site, then it must work correctly. I personally think that if a motherboard claims to support certain chips, then it absolutely has to deliver all the power they need without overheating, even with almost no air blowing directly over the VRM (i.e. with a tower cooler). AM3+ was ruined by very shady manufacturer tactics, and some board VRMs actually caught fire (some MSI boards). Many makers claimed their boards supported 8-core chips, yet the boards had no VRM cooling and sometimes only 3+1 phases. That's unacceptable, and the minimum specification should be higher for any new platform. All this shady business on LGA 1200 is unacceptable for one simple reason: if a motherboard is built to just barely meet spec, chances are it won't last long before malfunctioning. Then there are other factors like hotter climates, dust build-up and so on. IMO any LGA 1200 board should be built to handle the supported CPU's PL2 plus 20% on top. If a board can only sustain 150 watts, then it should be limited to 65-watt-TDP parts only (the i5 11400F's PL1 is 65 watts and its PL2 is 154 watts). Otherwise, the lesser products will soon become e-waste. And to get expected performance, at least a 1:1.5 ratio between PL1 and PL2 should be maintained by every motherboard vendor. For the i5 11400F that would mean a PL1 of 65 watts and a PL2 of 97.5 watts. So the bare-minimum board for an i5 11400F would be one that can sustain at least 117 watts continuously (for an hour or two), with passively cooled VRMs not exceeding 70°C in a 20°C room in some benchmark case.
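The proposed minimum spec boils down to two multiplications. A quick sketch of the rule as stated above (the 1:1.5 PL1-to-PL2 ratio and the 20% VRM headroom are this post's proposal, not anything Intel mandates; the function name is made up):

```python
def min_board_watts(pl1_watts: float, pl2_ratio: float = 1.5, headroom: float = 1.2) -> float:
    """Proposed minimum sustained VRM capacity for a board that lists a CPU as supported:
    PL2 capped at pl2_ratio * PL1, plus a safety headroom on top."""
    pl2 = pl1_watts * pl2_ratio
    return pl2 * headroom

# i5-11400F example from the post: PL1 = 65 W -> PL2 = 97.5 W -> board must sustain 117 W
print(min_board_watts(65))  # 117.0
```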




AusWolf said:


> As for the configurable TDP/PL values, the more I'm playing with my new 11700, the more I'm starting to like it. I recently changed the memory controller setting from Auto to Gear 2, and package/core temps magically dropped by 10 °C with an extra 100 points in Cinebench R23. I might be able to increase PL1 even further.


To me it's only PL1 that matters; PL2 looks pointless. Anyway, where will you post your achievement log? It seems the 11700 was good enough for you. From what you write, I conclude it's a much more power-hungry chip than the i5 10400F, as the i5 does achieve its maximum all-core boost frequency within the 65-watt limit under almost any heavy load. I ran a 30-minute Cinebench R23 loop yesterday for fun at Intel "spec" settings and it kept going at 3.8-4GHz. So as long as it's not thermally limited, the i5 at 65 watts can sustain all-core turbo in almost any workload. That's really nice. Your 11700 seems nowhere close to all-core turbo in long workloads. It makes me wonder whether the 11700 wouldn't be slower at Intel "spec" settings than the 10400F in something like encoding.


----------



## AusWolf (Jun 3, 2021)

The red spirit said:


> Honestly, some prebuilt buyers will end up with an 11900K on a cheap board, and that board may even fail to deliver the expected PL1 power of 125 watts. People buying a low-end board rationalize that if a chip is on the compatibility list on the motherboard manufacturer's site, then it must work correctly. I personally think that if a motherboard claims to support certain chips, then it absolutely has to deliver all the power they need without overheating, even with almost no air blowing directly over the VRM (i.e. with a tower cooler). AM3+ was ruined by very shady manufacturer tactics, and some board VRMs actually caught fire (some MSI boards). Many makers claimed their boards supported 8-core chips, yet the boards had no VRM cooling and sometimes only 3+1 phases. That's unacceptable, and the minimum specification should be higher for any new platform. All this shady business on LGA 1200 is unacceptable for one simple reason: if a motherboard is built to just barely meet spec, chances are it won't last long before malfunctioning. Then there are other factors like hotter climates, dust build-up and so on.


Maybe the solution would be to not allow manufacturers to add a CPU to their support list if the VRM can't supply enough power for its PL2 without overheating.



The red spirit said:


> IMO any LGA 1200 board should be built to handle the supported CPU's PL2 plus 20% on top. If a board can only sustain 150 watts, then it should be limited to 65-watt-TDP parts only (the i5 11400F's PL1 is 65 watts and its PL2 is 154 watts). Otherwise, the lesser products will soon become e-waste. And to get expected performance, at least a 1:1.5 ratio between PL1 and PL2 should be maintained by every motherboard vendor. For the i5 11400F that would mean a PL1 of 65 watts and a PL2 of 97.5 watts. So the bare-minimum board for an i5 11400F would be one that can sustain at least 117 watts continuously (for an hour or two), with passively cooled VRMs not exceeding 70°C in a 20°C room in some benchmark case.


Well, the default PL2 of 8-core 65 W Rocket Lake chips is 225 Watts - which I think is way too much power for any CPU. But it's a number from Intel, so I guess they should mandate motherboard makers to be able to deliver it, or maybe come up with a lower PL2 instead.



The red spirit said:


> To me it's only PL1 that matters; PL2 looks pointless. Anyway, where will you post your achievement log? It seems the 11700 was good enough for you. From what you write, I conclude it's a much more power-hungry chip than the i5 10400F, as the i5 does achieve its maximum all-core boost frequency within the 65-watt limit under almost any heavy load. I ran a 30-minute Cinebench R23 loop yesterday for fun at Intel "spec" settings and it kept going at 3.8-4GHz. So as long as it's not thermally limited, the i5 at 65 watts can sustain all-core turbo in almost any workload. That's really nice. Your 11700 seems nowhere close to all-core turbo in long workloads. It makes me wonder whether the 11700 wouldn't be slower at Intel "spec" settings than the 10400F in something like encoding.


I think I will open a forum thread here on TPU once I get myself to it (maybe over the weekend). 

Intel reached the limits of the traditional TDP designation long ago; they just applied power limits instead of changing the formula like AMD did. I remember my 7700 (non-K) always maintained its max boost while consuming 50-60 watts under full load. It seems your 10400F is close too, but one just can't do the same with 8 cores. The 11700 eats about 50 watts and maintains 4.8-4.9 GHz in single-threaded loads, which is nice, but drops to 2.8 GHz in Cinebench R23 multi with default power limits. You can't expect to load 8 times the cores with only 30% more power, I guess.
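That 2.8 GHz figure is roughly what a back-of-the-envelope estimate predicts. Assuming power scales with f·V² and voltage roughly tracks frequency, so P ∝ f³ (a common rough approximation, not a measured curve), and taking the ~50 W single-threaded figure as the cost of one core at 4.9 GHz:

```python
def budget_clock_ghz(ref_clock_ghz: float, ref_core_watts: float,
                     budget_watts: float, cores: int) -> float:
    """Estimate the sustainable all-core clock under a package power budget,
    assuming per-core power scales with the cube of frequency (P ~ f * V^2, V ~ f)."""
    per_core_budget = budget_watts / cores
    return ref_clock_ghz * (per_core_budget / ref_core_watts) ** (1 / 3)

# 11700 example: one core at ~4.9 GHz costs ~50 W; fit 8 cores into the 65 W PL1
print(round(budget_clock_ghz(4.9, 50.0, 65.0, 8), 2))  # ~2.67 GHz
```

The real chip sustains a bit more (2.8 GHz), presumably because the 50 W single-threaded figure includes uncore power, but the order of magnitude is right: eight cores on 30% more power necessarily means far lower clocks.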


----------



## The red spirit (Jun 3, 2021)

AusWolf said:


> Maybe the solution would be to not allow manufacturers to add a CPU to their support list if the VRM can't supply enough power for its PL2 without overheating.


That's what AMD did in the AM3+ days. Boards came in 3 tiers: 95W, 125W and 220W. Too bad that even then it was a complete shitshow, as all AM3+ chips consumed way more power than AMD said. If Intel wanted to, they could pull it off more elegantly.



AusWolf said:


> Well, the default PL2 of 8-core 65 W Rocket Lake chips is 225 Watts - which I think is way too much power for any CPU. But it's a number from Intel, so I guess they should mandate motherboard makers to be able to deliver it, or maybe come up with a lower PL2 instead.


I would get rid of PL2 and make PL1 95 watts. That almost looks like a good solution, given that LGA 1200 boards should already handle a PL1 of 125 watts; too bad Intel's stock cooler and the OEMs wouldn't agree with that. Either way, PL2 needs to be lower. Even a 1:2 ratio is crazy; ideally the PL1:PL2 ratio should be 1:1.5.




AusWolf said:


> Intel has far reached the limitations of traditional TDP designations, just applied limits instead of changing the formula like AMD did. I remember my 7700 (non-K) always maintained its max boost while consuming anywhere between 50-60 Watts under full load. It seems your 10400F is close too, but one just can't do the same with 8 cores. The 11700 eats about 50 Watts and maintains 4.8-4.9 GHz in single-threaded loads, which is nice, but drops to 2.8 GHz in Cinebench R23 multi with default power limits. You can't expect to load 8 times the cores with only 30% more power, I guess.


That's unfortunate, because that's almost half of what the chip is capable of. It seems to me that Intel was just lazy and forced the same PL1 onto all locked chips. It's like car manufacturers reusing the same engine across a number of cars, except in Intel's case it would be like sticking a 1.9 TDI into a dump truck. It will move, but it would be infuriatingly slow. Meanwhile, PL2 is like a W12 in that same dump truck. It seems that Intel didn't really think through the consequences of the power limits on different SKUs, and perhaps only tested one CPU and then applied those findings to every chip. Seems like something braindead management would do.


----------



## AusWolf (Jun 4, 2021)

The red spirit said:


> That's unfortunate, because that's almost half of what the chip is capable of. It seems to me that Intel was just lazy and forced the same PL1 onto all locked chips. It's like car manufacturers reusing the same engine across a number of cars, except in Intel's case it would be like sticking a 1.9 TDI into a dump truck. It will move, but it would be infuriatingly slow. Meanwhile, PL2 is like a W12 in that same dump truck. It seems that Intel didn't really think through the consequences of the power limits on different SKUs, and perhaps only tested one CPU and then applied those findings to every chip. Seems like something braindead management would do.


That pretty much sums it up. Though I don't think it's a total failure, as even with lower clocks, you still have a 6/8 core CPU. AMD FX failed as a gaming platform because of its low single-core performance, but didn't age too badly because games started using more threads. With these 65 W 11th gen Core CPUs, you have the high low-threaded clock speed you need in the present, and the core count you may need in the future. And if you want to combine the two, you can slap a bigger cooler on it and increase/disable its power limits. Heck, I'm even tempted to get my AeroCool Aero One Mini case and Corsair H100i out of the wardrobe and see how far the little i7 goes, even though this was never my original plan.


----------



## AusWolf (Jun 5, 2021)

Valantar said:


> Sounds interesting! Let me know if you make a build log?





The red spirit said:


> Anyway, where you will post your achievement log?


It's aliiive! 









Small form factor gaming - build log and support forum for new builders (www.techpowerup.com)


----------

