
AMD Ryzen 9 7950X Cooling Requirements & Thermal Throttling

Not easy to speak with AMD supporters who ignore what Black's Equation means...


Are you serious?
It has been on the web since 26 Sept...


20° just by delidding it.
It is an insane result, showing us how badly designed this IHS is.
A 20 °C reduction, yes, I got that, but that is just temperature. He is doing an all-core overclock at a fixed voltage, and from what I understand he is primarily testing temperature reduction, NOT normal conditions (i.e., letting the CPU decide what to do), because he ultimately wants to produce and sell delidding products. (Nothing wrong with that, btw.)

So he got a 50 MHz gain. Then, after increasing the voltage, 5.5 GHz all-core, so maybe a 100 MHz gain in total? It's not clear to me from the video that there was a tangible performance benefit per the risk/cost of delidding, so I would prefer a good analysis from a more controlled experiment before and after delidding. This limited data doesn't really answer whether the IHS is really a problem, let alone whether delidding is really the beneficial answer. (I concede there may be something I missed in the video, so feel free to correct me.)

Interestingly, he notes that the CPU package power reduces with reduced temperature (still under 200 W), so strictly speaking, from what I have read so far, the CPU will drive itself to 95 °C. However, I take it the reason that is not happening here is that he is manually setting clocks and voltage, thus NOT raising the power limits, which the CPU will typically control given the parameter limits in PBO.

If he were letting the CPU be in control and raised his PBO limits, would the CPU still continue to drive to 95 °C when delidded? Would that still produce real gains over 200 W?
I suspect not, but I have a burning desire for someone to prove to me what happens.
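To make my question concrete, here is a deliberately crude toy model of a thermally/power-limited boost governor. To be clear, this is my own sketch in Python, not AMD's actual PBO algorithm, and every constant in it (the thermal resistances, the voltage/frequency curve, the PPT value) is invented purely to illustrate the mechanism:

# Toy model of a boost governor: raise the all-core clock until either the
# thermal target or the package power limit (PPT) would be exceeded.
# NOT AMD's algorithm; all constants are made up for illustration only.

T_AMBIENT = 25.0   # deg C
T_TARGET = 95.0    # deg C, the temperature the boost algorithm drives toward
PPT = 230.0        # W, package power limit (stock-like figure, assumed)

def power(freq_ghz):
    # Crude stand-in for dynamic power: voltage must rise with clock,
    # and power scales roughly with V^2 * f.
    volts = 0.9 + 0.12 * freq_ghz
    return 20.0 * volts ** 2 * freq_ghz  # W, arbitrary scale factor

def sustained(r_th):
    # r_th: assumed total die-to-ambient thermal resistance, deg C per W.
    freq = 3.0
    while freq < 5.8:
        p = power(freq + 0.025)
        if T_AMBIENT + p * r_th > T_TARGET or p > PPT:
            break  # the next step would violate a limit, so stop here
        freq += 0.025
    p = power(freq)
    return freq, p, T_AMBIENT + p * r_th

for label, r_th in (("stock IHS", 0.32), ("delidded", 0.26)):
    f, p, t = sustained(r_th)
    print(f"{label:9s}: {f:.2f} GHz, {p:.0f} W, die {t:.0f} deg C")

With these made-up numbers the stock case stops at the 95° wall while still under PPT, whereas the delidded case runs into the 230 W PPT first, at roughly 85° on the die; that is exactly the regime where further gains would need raised PBO power limits rather than better cooling.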
 
How fast do CPUs degrade at those temps?
That depends on several factors, and it is different for every single CPU. Users’ workloads are different too, so it is hard to say X months or X years. By the way, considering that you can completely avoid exposing the CPU to that idiotic behavior just by using smarter power limits, or by designing a proper IHS, it is a pity customers are exposed to the risk.
 
in use85.png

This is what the last generation looks like IN USE at performance-optimised settings, i.e. PBO on and Infinity Fabric at 1900, optimised, and it would have been on for days but for the 22H2 Win 11 update.

My 3800X did two years fine, rocking on, so for an extra 1 GHz on a smaller, denser node, 95 = this is fine.

Waiting on that maths still, MAX'Opinion.

"That depends on several factors, and it is different for every single CPU. Users’ workloads are different too, so it is hard to say X months or X years."

Exactly, and AMD did the maths and scientific testing, just like Intel have for their K, KF, and KS versions etc., sooooooo.
 
View attachment 264635
This is what the last generation looks like IN USE at performance-optimised settings, i.e. PBO on and Infinity Fabric at 1900, optimised, and it would have been on for days but for the 22H2 Win 11 update.

My 3800X did two years fine, rocking on, so for an extra 1 GHz on a smaller, denser node, 95 = this is fine.
Did you post the wrong screenshot? I almost jumped out of my seat before I noticed it says 5900X. o_O
 
A 20 °C reduction, yes, I got that, but that is just temperature. He is doing an all-core overclock at a fixed voltage, and from what I understand he is primarily testing temperature reduction, NOT normal conditions (i.e., letting the CPU decide what to do), because he ultimately wants to produce and sell delidding products. (Nothing wrong with that, btw.)

So he got a 50 MHz gain. Then, after increasing the voltage, 5.5 GHz all-core, so maybe a 100 MHz gain in total? It's not clear to me from the video that there was a tangible performance benefit per the risk/cost of delidding, so I would prefer a good analysis from a more controlled experiment before and after delidding. This limited data doesn't really answer whether the IHS is really a problem, let alone whether delidding is really the beneficial answer. (I concede there may be something I missed in the video, so feel free to correct me.)

Interestingly, he notes that the CPU package power reduces with reduced temperature (still under 200 W), so strictly speaking, from what I have read so far, the CPU will drive itself to 95 °C. However, I take it the reason that is not happening here is that he is manually setting clocks and voltage, thus NOT raising the power limits, which the CPU will typically control given the parameter limits in PBO.

If he were letting the CPU be in control and raised his PBO limits, would the CPU still continue to drive to 95 °C when delidded? Would that still produce real gains over 200 W?
I suspect not, but I have a burning desire for someone to prove to me what happens.
No, I’m not suggesting a delid, it is dangerous. It was only a way to demonstrate how badly designed the IHS is.
The video is based on an overclocker's perspective, but delidding usually produces far lower gains in temperature. He was very surprised by the results.
 
That depends on several factors, and it is different for every single CPU. Users’ workloads are different too, so it is hard to say X months or X years. By the way, considering that you can completely avoid exposing the CPU to that idiotic behavior just by using smarter power limits, or by designing a proper IHS, it is a pity customers are exposed to the risk.

I mean, do you actually have concrete examples where high temperatures have meant a degraded CPU, and can you point to them? Has it been tested? Heat isn't good for electronics, sure, but during all my years messing about with computers, and reading about other people messing about with them, there have been very few dead CPUs. The point is, it's speculation at this point. CPUs last a really long time; the only reason they get replaced is because they become obsolete. If you get more performance out of them, I'm totally fine with some degradation during their lifetime.

The bigger discussion is probably the process wall we're fast approaching. How much more performance is there to squeeze out of the silicon mix we use now? If it slows down more, it might mean there is less reason to upgrade, meaning keeping current components for a longer time span.
 
View attachment 264635
This is what the last generation looks like IN USE at performance-optimised settings, i.e. PBO on and Infinity Fabric at 1900, optimised, and it would have been on for days but for the 22H2 Win 11 update.

My 3800X did two years fine, rocking on, so for an extra 1 GHz on a smaller, denser node, 95 = this is fine.

Waiting on that maths still, MAX'Opinion.

"That depends on several factors, and it is different for every single CPU. Users’ workloads are different too, so it is hard to say X months or X years."

Exactly, and AMD did the maths and scientific testing, just like Intel have for their K, KF, and KS versions etc., sooooooo.
You clearly cannot understand Black’s equation…
85° is waaaaay lower than 95°…
 
English may not be my primary language, but I explained it quite well above:

I am complaining because of an idiotic default choice

That's more than enough to complain about. I hate idiotic choices by manufacturers. I hate marketing BS. AMD's new CPUs are garbage at default settings, and I feel entitled to complain here, especially because I'm not wearing a nice pair of red glowing glasses like many here.

And before trying to imply I'm an Intel supporter or anything,

View attachment 264624

no, I am not (and we have 2 Asus notebooks, one with 5800HS and one with 6800H).
I am just not biased.
I really don't know how tech-savvy people here can defend a choice to have a CPU running at 90+ degrees under load when you can have THE SAME CPU running 10/15° less within 2/5% of its performance. This is what happens when technology is driven by marketing and not engineering.
OK, it's a bad decision by AMD, somewhat forced by Intel, but anyway, you know what should work? Just buy a cheap sub-$100 Bxxx series mobo & let it throttle :laugh:

As they say in some circles, win-win :pimp:

And just in case you want to break it down further ~ let's say you're buying this expensive platform, then you ought to be informed enough about why you need/want it & how you can make it better.

I posted this in some other thread, so this is along the same lines ~ you are not entitled to peak performance at cheap prices; we'll ignore the greed of companies, including AMD, for now!

So when you are spending this much, you know the risk/reward involved. Now, if not Zen 4, there's still AM4 or Intel's cheaper alternatives, & you can always do 99.9% of the tasks that you do on 7xxx, just slower, that's all.

People need to get over this entitlement syndrome & move away from their periodic lust for shiny new hardware; your wallets & the planet will thank you for it! If you don't need it, generally speaking, avoid it :toast:
 
I mean, do you actually have concrete examples where high temperatures have meant a degraded CPU, and can you point to them? Has it been tested? Heat isn't good for electronics, sure, but during all my years messing about with computers, and reading about other people messing about with them, there have been very few dead CPUs. The point is, it's speculation at this point. CPUs last a really long time; the only reason they get replaced is because they become obsolete. If you get more performance out of them, I'm totally fine with some degradation during their lifetime.

The bigger discussion is probably the process wall we're fast approaching. How much more performance is there to squeeze out of the silicon mix we use now? If it slows down more, it might mean there is less reason to upgrade, meaning keeping current components for a longer time span.
Well, ending up with a completely dead CPU would take time, for sure.
I’m speaking about much subtler silicon degradation, which results in higher energy requirements and temperatures.
BTW, even if you are an insane upgrader like me and others, when you stop using your hardware you are supposed to sell/give it to other people who expect it to work for several years down the road.
 
You clearly cannot understand Black’s equation…
85° is waaaaay lower than 95°…
It's 10 degrees.

And I am still waiting on your proof that you know how to use Black's equation; it's clear you can only type its name.

And you're talking about something you can't prove and seemingly have no experience with.

I degraded a Q6600 to its death, and two FX 8350s so far, but so far couldn't degrade a Ryzen out of use, though I did eventually make a 2600X unable to OC to 1800 fabric. No, degraded chips do not use noticeably more power, and no, it didn't take two years at stock settings; they HAD to be pushed to their silicon limits for years.

And your examples?! Evidence, experience, maths?
 
Well, ending up with a completely dead CPU would take time, for sure.
I’m speaking about much subtler silicon degradation, which results in higher energy requirements and temperatures.

Temps won't go up, because of the design, but the clock speeds might go down. By how much? Personally, my unfounded speculation is that after, say, three years or so (handy, given the warranty!) maybe you get a few hundred MHz less, which would be fine by me. BTW, is there a guaranteed clock speed and power draw on these things? I assume there is not, because that would be nuts.
 
The base clocks are guaranteed; turbo & everything else is not. The same goes for Intel, and for Nvidia on the GPU side.

guaranteed clock speed and power draw on these things
You can't guarantee power draw because each chip will be different. This is why, despite some objections, I'd say AMD's definition of "TDP" is better than Intel's, even if only marginally.
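For reference, AMD has described its desktop TDP as a thermal quantity derived from cooler capability rather than as a measured power draw. As I understand it (the numbers below are the commonly cited ones, so treat them as illustrative rather than authoritative), the relation is:

\mathrm{TDP\ (W)} = \frac{T_{\mathrm{case,max}} - T_{\mathrm{ambient}}}{\theta_{ca}}

For example, with T_case,max = 61.8 °C, T_ambient = 42 °C and a minimum cooler thermal resistance of θ_ca = 0.189 °C/W, that gives (61.8 - 42) / 0.189 ≈ 105 W. The actual sustained package power (PPT) is allowed to sit above the TDP number.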
 
OK, it's a bad decision by AMD, somewhat forced by Intel, but anyway, you know what should work? Just buy a cheap sub-$100 Bxxx series mobo & let it throttle :laugh:

Well, judging by what we have seen so far, there is no such thing as a “cheap B650 mobo” … so your plan won’t work :D
People need to get over this entitlement syndrome & move away from their periodic lust for shiny new hardware; your wallets & the planet will thank you for it! If you don't need it, generally speaking, avoid it :toast:
That’s an entirely different matter.
 
Great, great article, W11z, as usual!
Since I was able to get my hands on a used 5800X from a friend something like 2 yrs ago, I installed it in my Dan A4, because why not. The cooler of choice was/is a Wraith Stealth, just because I can remove it without having to take apart half of the computer, as I would with my beloved NH-L9i. At first I was horrified by temps and especially noise (even with Noctua). Eco Mode and Curve Optimizer -20 were close but not enough. Eventually the fan ramped up like crazy.
Until I found the one magic setting (for me) in the BIOS. Thermal throttle limit to the rescue: I oscillate between a 75-80° setting, and my PC has been super silent since that day. I know it's not magic, just a hard limit. I admit I'm with the ones that prefer silence over raw power.
Looks like I'll be re-using those 3 settings (Eco Mode, Curve Optimizer, and TTL 80°) until I reach an acceptable level of noise for my taste/use case on my next 7700X/8700X.
 
It's 10 degrees.

And I am still waiting on your proof that you know how to use Black's equation; it's clear you can only type its name.

And you're talking about something you can't prove and seemingly have no experience with.

I degraded a Q6600 to its death, and two FX 8350s so far, but so far couldn't degrade a Ryzen out of use, though I did eventually make a 2600X unable to OC to 1800 fabric. No, degraded chips do not use noticeably more power, and no, it didn't take two years at stock settings; they HAD to be pushed to their silicon limits for years.

And your examples?! Evidence, experience, maths?
You don’t even know what an equation is, apparently. It is exponential, so 10° is A LOT.
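Since you keep asking for the maths, here is a back-of-envelope version. Black's equation for electromigration lifetime is

\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)

so at a fixed current density J, the lifetime ratio between 85 °C (358 K) and 95 °C (368 K) is

\frac{\mathrm{MTTF}_{358\,\mathrm{K}}}{\mathrm{MTTF}_{368\,\mathrm{K}}} = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{358} - \frac{1}{368}\right)\right] \approx \exp(0.62) \approx 1.9

taking E_a = 0.7 eV (an assumed, typical activation energy; the real value depends on the failure mechanism and metallurgy) and k = 8.617 × 10^-5 eV/K. Applying an interconnect electromigration model directly to the reported core temperature is itself a simplification, but under these assumptions 10° in this range is roughly a factor of two in nominal lifetime.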
 
You don’t even know what an equation is, apparently. It is exponential, so 10° is A LOT.
And where is your maths proving you right in the real world?!
And I have run mostly GPUs, admittedly at 95 °C, for three years under similar 100% 24/7 conditions, many in fact.

You're one repeated answer or side insult away from ignore.
 
No, I’m not suggesting a delid, it is dangerous. It was only a way to demonstrate how badly designed the IHS is.
The video is based on an overclocker's perspective, but delidding usually produces far lower gains in temperature. He was very surprised by the results.
Well, I'll have to leave this conversation with some 180 °C of disagreement (pun intended). My take on the delidding video's limited information is that the IHS design is not practically performance-impactful in a manual all-core overclock scenario; however, it is certainly temperature-impactful. Since it's only part of what is probably a bigger picture in the coming months of how these CPUs perform, I cannot conclude for now that the IHS design is necessarily "bad".

One question that enters my mind: did the AMD engineers conclude that 95 °C was an optimal condition for heat transfer away from the cores? If so, then I would strive to run the CPU at that temperature while balancing performance per watt.
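A rough steady-state sketch of the heat-transfer side, with assumed numbers, purely illustrative: the power a given cooler can remove scales with the temperature delta to ambient,

Q = \frac{T_{\mathrm{die}} - T_{\mathrm{ambient}}}{R_{th}}

so with, say, a 25 °C ambient and a combined die-to-ambient resistance of R_th = 0.25 °C/W, a cooler that dissipates (85 - 25) / 0.25 = 240 W at an 85 °C die can dissipate (95 - 25) / 0.25 = 280 W at 95 °C. Allowing the higher die temperature buys roughly 17% more power budget from the same cooler, which may be the engineering rationale rather than 95 °C being "optimal" in itself.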
 
Well, I'll have to leave this conversation with some 180 °C of disagreement (pun intended). My take on the delidding video's limited information is that the IHS design is not practically performance-impactful in a manual all-core overclock scenario; however, it is certainly temperature-impactful. Since it's only part of what is probably a bigger picture in the coming months of how these CPUs perform, I cannot conclude for now that the IHS design is necessarily "bad".

One question that enters my mind: did the AMD engineers conclude that 95 °C was an optimal condition for heat transfer away from the cores? If so, then I would strive to run the CPU at that temperature while balancing performance per watt.
I remember there was discussion about AMD CPUs and this 95 °C temperature not long after Zen 3 launched.

There is, for example, a TPU Forum thread entitled 'Ryzen 5800 owners complain about very high MT load temps' that referenced a comment by Robert Hallock as follows:

"...Yes. I want to be clear with everyone that AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as typical and by design for full load conditions. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons.

Is it the same as Zen 2 or our competitor? No. But that doesn't mean something is "wrong." These parts are running exactly as-designed, producing the performance results we intend....".

AMD also issued this graphic about Zen 3 processors, which both confirms and expands on what Robert Hallock said:

amdzen3.jpg


This would imply that the 95 °C limit is based on AMD's generic approach to CPUs rather than on a factor such as an optimal condition for heat transfer.
 
There's this user @ AnandTech forums who received his 1st 7950X (he has 2 now), and he's quite the fan of distributed computing, so he set up his 1st 7950X and ... was getting low speeds with all cores loaded, despite using a 420 AIO: just 3.1 to 3.3 GHz. The CPU was running @ 95° and, despite the low speed, was STILL outpacing any of his 5950Xs (he has more than 1 of those).

He then realized he had made a very basic mistake: he forgot to take the tape off the head of the AIO ...

Now, his 1st 7950X runs @ 5.1 GHz on all cores, and its performance is even better, despite running @ 95°.

His 2nd 7950X is now running with a BeQuiet Dark Rock Pro 4 and is seeing similar performance to the other one in PrimeGrid (with Lasso for balancing affinity):

 
There's this user @ AnandTech forums who received his 1st 7950X (he has 2 now), and he's quite the fan of distributed computing, so he set up his 1st 7950X and ... was getting low speeds with all cores loaded, despite using a 420 AIO: just 3.1 to 3.3 GHz. The CPU was running @ 95° and, despite the low speed, was STILL outpacing any of his 5950Xs (he has more than 1 of those).

He then realized he had made a very basic mistake: he forgot to take the tape off the head of the AIO ...

Now, his 1st 7950X runs @ 5.1 GHz on all cores, and its performance is even better, despite running @ 95°.

His 2nd 7950X is now running with a BeQuiet Dark Rock Pro 4 and is seeing similar performance to the other one in PrimeGrid (with Lasso for balancing affinity):

A textbook error I have never made, honest... not, I definitely have.

But truly indicative of AMD's point about it self-governing; crunching is an actual load too, and nothing like gaming, it's a 24/7, Blender-type load.
 
How fast do CPUs degrade at those temps?
I'm interested also. I have had my setup in use for 12 years now, and the new one I'm buying will stay for, I guess, the same amount of time, if not more. A lot of Premiere rendering and other 'creator' workloads.
 
View attachment 264642

This would imply that the 95 °C limit is based on AMD's generic approach to CPUs rather than on a factor such as an optimal condition for heat transfer.

AMD's target timeframe is the warranty period, which is 24 months in the worst case (EU).
They are sure the CPU won't break within 24 months running at those max temperatures, and that's enough. And here we are speaking about silicon degradation, not a complete failure, which is even more difficult to assess. A CPU requiring higher voltage to keep the same performance is technically still in working condition, so warranty-wise it is still OK.
Customers' target timeframe is another story. Here above we have people looking for 10+ years. Even users like me, upgrading every 2/3 years, expect to sell/give the CPU in working condition to someone else.

I'm interested also. I have had my setup in use for 12 years now, and the new one I'm buying will stay for, I guess, the same amount of time, if not more. A lot of Premiere rendering and other 'creator' workloads.
I would not accept such high temperatures in your case. I would set lower power limits, or stick to ECO Mode.
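For context on what those options buy, the sustained package power limit on these parts is commonly described as tracking the TDP preset as roughly

\mathrm{PPT} \approx 1.35 \times \mathrm{TDP}

so the stock 170 W setting runs at about 1.35 × 170 ≈ 230 W, the 105 W ECO preset at about 142 W, and the 65 W preset at about 88 W. (The 1.35 factor is the widely cited AMD relation; exact limits can vary by board and BIOS.)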
 
For what it's worth, for lidded processors Intel specifies a maximum temperature that should not be exceeded on the IHS, measured at the geometric center of the surface, in the 60–70 °C range depending on the exact SKU. On the other hand, the company only specifies that core temperature should be below TjMax at PL1.

The point here is that the reported core temperature is not the temperature of the entire CPU. This should also be clear from the iGPU temperature, if available, on AMD processors.
 