
Hot CPUs and longevity

So... the new CPUs from AMD and Intel both run at close to 100C under load, unless you have special cooling.

The companies say it's fine (what else can they say).
But can we really assume that longevity won't be affected?
It's not just about the max temperature, but also the min-max cycles, which put more stress on the parts.
 
It's not just about the max temperature, but also the min-max cycles, which put more stress on the parts.
Actually, it is more about the long-term average. A 100°C maximum under load may seem alarming, but what really matters is how long it sits up that high. And for most users, that would not be very long, since most of the time our CPUs sit closer to idle than maxed out.

Yes, the heat-up/cool down cycles affect component aging, but we really have no control over that - unless we never power up the computer at all.

If the CPU comes with a cooler, I say that is more than adequate IN A PROPERLY COOLED CASE. Remember, it is the case's responsibility to provide a sufficient supply of cool air flowing through the case. And it is your responsibility, as the user, to ensure the case cooling does that. The CPU cooler need only toss the CPU's heat up into that flow of air.

If the CPU does not come with a cooler, then obviously it is the user's responsibility to select an adequate CPU cooler.

And don't forget the cooler must be mounted properly - with a proper (thin but complete) application of TIM (thermal interface material).
 
I would say it's too high; after all, it's an upper temperature limit. Personally I would settle at 70-80C. For CPUs it hasn't been much of a problem so far, but for GPUs it certainly was. Basically, high-end AMD R9 cards have various temperature-related problems at this age (6-8 years), and some other random high-end models also have problems with that, but it seems it's not just temperatures alone that kill cards, it's also high wattage. Under 200 watts it seems that most survive for a damn long time, but above that wattage, reliability isn't so great. Anyway, airflow matters; you should have adequate airflow in the computer if you want it to last. Undervolting is also good for hardware longevity, but imo that's more optional than airflow.
 
Huh?

First you say it is too high. Then you say for CPUs it hasn't been a problem. Then you talk about R9 and other graphics cards.

This isn't about graphics cards or GPUs.

@Jokii - You really need to be specific about exactly which specific "CPU" you are talking about. If the CPU's specs report that the maximum temp allowed is 100°C (or higher), then it will not hurt the CPU to approach that limit AS LONG AS it does not exceed it or sit at or near that level for long periods of time.

 
No problems.
My friend ran his 2500K with the stock Intel cooler for almost 10 years, and every kind of load (from loading a website to playing a game) had the CPU non-stop at 100°C.
Now, 10 years later, it still overclocks to 4.7 GHz.
 
Yep, another day, another too-hot thread.

I have run, 24/7 flat out: a Q6600 in the 70s for three years, an AMD FX 8350 at 85+ for 4 years, a 2600X at 85 for a year, a 3800X at 85 for a year or so, and an 8750H at 90/95 for a year.

All at their top limits and throttling, never underclocked, memory always overclocked, and a slight OC all round.
And many GPUs pegged at 95 for years, all with no harm.

If the maker validates higher temperatures, then higher temperatures are fine.

And they're a natural effect of physics. Yes, you can run it colder, but you will have to compromise on performance, cost, ease of use, etc.

It's fine.
 
Agreed. And your mention of throttling brings up another point. If a CPU gets hot, its self-protection feature will cause it to throttle back in speed (and thus heat) so that it will not get "too" hot.
 
I've never had a CPU die from overheating. If it's designed to run at 100C max, then that's what it was made to do, so it should be fine. Now, the lost heat doesn't sit too well in my mind, because lost heat is lost energy, but that's another story; I'm not going to crank on about how TDP is going up with every new CPU release.
 
I would be more concerned about the effect of thermal cycling on the solder of soldered chips, such as a GPU, and not so much on socketed chips, such as a CPU.

I have brought back some GPUs by reflowing the solder with flux.
 
I would add that on earlier systems I was purely on air, so temperatures were harder to maintain. Now I have two 360 rads; I monitor water temperature and aim for sub-55°C at the lowest noise, and let the hardware do as much as it can, i.e. I lean on the manufacturer's statement. I think due to the age and density of the die, the Vega never gets past 65, but every Ryzen I have put in this rig hits its max core temperature with time and stays there, unless I set a lower temperature limit, which has been possible for a while.
Flat-out cooling obviously does reduce temperature.

Point is, I have only degraded chips two times, both after their useful life of 4 years, and both with near world-beating OC attempts (in my head at the moment, before they popped or reset, obviously), not just by pushing them to a reasonable high and holding them there indefinitely.
 
My main machine is still running a 12 year old chip...
But it's probably running at stock speed?

Back in 2008 I built a system with an E7200, ran it overclocked for 2 years, and handed the rig to my dad in 2010; he's still using it today.
 
@Jokii - You really need to be specific about exactly which specific "CPU" you are talking about.
As I said in the OP, "the new CPUs from AMD and Intel". Desktop CPUs, to be more specific. They run hotter than most CPUs in the past.

Anyway, I'm sure they'll generally outlast the 3-year warranty period, but beyond that...
It would be interesting to see some proper longevity stats on this matter (not just personal anecdotes). Obviously not all CPUs respond the same to high temperatures, but they're probably similar.
 
As I said in the OP, "the new CPUs from AMD and Intel". Desktop CPUs, to be more specific. They run hotter than most CPUs in the past.

Anyway, I'm sure they'll generally outlast the 3-year warranty period, but beyond that...
It would be interesting to see some proper longevity stats on this matter (not just personal anecdotes). Obviously not all CPUs respond the same to high temperatures, but they're probably similar.
Not really; the 7950X goes up to 95 and then typically doesn't downclock much, whereas the 13900K rockets to 100C within 20 seconds and the clocks drop dramatically to 4.8 or lower, so not that similar.
To a typical gamer, who really isn't best served by a 13900K or 7950X (due to price and the fact that a 13600K or 7700X is better value), it won't matter though; they won't use the CPU to that extent while gaming.

Oh, and I have one FX 8350 still in use now in a friend's PC, many years after I thrashed it for three years, and another with an i7 920 still running well, though its motherboard does have the odd issue, e.g. limited USB working, etc.
 
Huh?

First you say it is too high. Then you say for CPUs it hasn't been a problem. Then you talk about R9 and other graphics cards.

This isn't about graphics cards or GPUs.
Yes, it is too high. Nowadays CPUs can suck 300 watts, and yes, you should set it lower and tune the PL to match 80C or 200 watts tops. Less is good too, but perhaps pointless.
 
So... the new CPUs from AMD and Intel both run at close to 100C under load, unless you have special cooling.

The companies say it's fine (what else can they say).
But can we really assume that longevity won't be affected?
It's not just about the max temperature, but also the min-max cycles, which put more stress on the parts.
MTBF (mean time between failures) is calculated the same way at half load and half temp as at full load and max temp.

The Mean Time Between Failures (MTBF) is simply the inverse of the failure rate for an exponential distribution, while the Failure in Time (FIT) rate is 10^9 x the failure rate.

For example:
If FIT = 15.1, then MTBF = 10^9 / 15.1 ≈ 66,225,165 hours

Unfortunately, there isn't much on Intel's site that gives the actual FIT of current-gen chips.

But it could be safe to say that (at defaults) a modern CPU is capable of 10 years (roughly 100,000 hours) at load and at max temp.
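
For anyone who wants to plug in their own numbers, here's a minimal Python sketch of that FIT-to-MTBF conversion. Note the 15.1 FIT value is just the example figure from this post, not a published spec for any particular CPU.

```python
# Minimal sketch of the FIT <-> MTBF relationship described above, assuming the
# standard definitions: FIT = failures per 10^9 device-hours, and MTBF is the
# inverse of the failure rate under an exponential failure model.

def mtbf_hours_from_fit(fit: float) -> float:
    """Return MTBF in hours for a given FIT rate (failures per 1e9 device-hours)."""
    return 1e9 / fit

if __name__ == "__main__":
    fit = 15.1  # example FIT value from this post, not a real published spec
    mtbf = mtbf_hours_from_fit(fit)
    print(f"FIT = {fit} -> MTBF ≈ {int(mtbf):,} hours")
    # FIT = 15.1 -> MTBF ≈ 66,225,165 hours
```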
 
I'm pretty sure that CPU/GPU companies know what they're doing when designing these toasters. Their temps just feel hella high as we've been used to lower temps with previous generations of hardware.
 
MTBF (mean time between failures) is calculated the same way at half load and half temp as at full load and max temp.

The Mean Time Between Failures (MTBF) is simply the inverse of the failure rate for an exponential distribution, while the Failure in Time (FIT) rate is 10^9 x the failure rate.

For example:
If FIT = 15.1, then MTBF = 10^9 / 15.1 ≈ 66,225,165 hours

Unfortunately, there isn't much on Intel's site that gives the actual FIT of current-gen chips.

But it could be safe to say that (at defaults) a modern CPU is capable of 10 years (roughly 100,000 hours) at load and at max temp.
Where did you get the MTTF figures?
I thought it was a calculus equation? Black's Equation.
Can we even get the numbers for the equation from manufacturers? It feels like that's kept secret.
My understanding of the equation was that the smaller and hotter a transistor is, and the more current running through it, the faster it fails.

So, now that they are the smallest they've been and running hotter than they have in the recent past, couldn't we conclude that a newer CPU shouldn't last as long as an older CPU that also ran hot, because of the absolute size of the transistors?
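
For reference, the usual form of Black's equation (the electromigration lifetime model I'm referring to) is below; the constant A and the exponent n are empirical, process-specific values that manufacturers generally don't publish, which is part of why it's hard to plug in real numbers.

```latex
% Black's equation for median time to failure (MTTF) due to electromigration:
%   A   - empirical constant (geometry/process dependent, typically unpublished)
%   J   - current density through the interconnect
%   n   - empirical exponent (often taken to be around 2)
%   E_a - activation energy, k - Boltzmann constant, T - absolute temperature
\[
  \mathrm{MTTF} = \frac{A}{J^{\,n}} \exp\!\left(\frac{E_a}{k\,T}\right)
\]
```

Higher current density J and higher temperature T both shrink the MTTF, which is the "smaller, hotter, more current" intuition above.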
 
Where did you get the MTTF figures?
I thought it was a calculus equation? Black's Equation.
Can we even get the numbers for the equation from manufacturers? It feels like that's kept secret.
My understanding of the equation was that the smaller and hotter a transistor is, and the more current running through it, the faster it fails.

So, now that they are the smallest they've been and running hotter than they have in the recent past, couldn't we conclude that a newer CPU shouldn't last as long as an older CPU that also ran hot, because of the absolute size of the transistors?

Is where I sourced the example.
 
Degradation will happen eventually at peak temp. And if you then add voltage to counteract that, you will slowly degrade it more. Think 5-7 years, depending on the node and voltages used, which just so happens to generally be the age at which you'll want something new.
It's not a major issue, but yes, you can find it, and high temp/voltage will decrease longevity. Whether you will be bothered by it is personal/use case.
 
Degradation will happen eventually at peak temp. And if you then add voltage to counteract that, you will slowly degrade it more. Think 5-7 years, depending on the node and voltages used, which just so happens to generally be the age at which you'll want something new.
It's not a major issue, but yes, you can find it, and high temp/voltage will decrease longevity. Whether you will be bothered by it is personal/use case.
5 years is a major issue
 
5 years is a major issue
Warranty is 2, right? So whose problem will it be? ;)

I have seen it on my 3570K. Around 2018 it could no longer do 4.4 GHz under the set voltages. It wanted more vcore for stability, but I reduced clocks to 4.2 instead, at much lower volts. And that 4.4 was with an 86C peak temp.
 