
Hot CPUs and longevity

Basically, high-end AMD R9 cards have various temperature-related problems at this age (6-8 years), and some other random high-end models also have problems with that, but it seems that it's not just temperature alone that kills cards, it's also high wattage. Under 200 watts it seems that most survive for a damn long time, but above that wattage, reliability isn't so great.
I have a 290X on my 2nd rig and it works like a charm. :)
 
MTBF (mean time between failures) is calculated the same way at half load and half temperature as it is at full load and max temperature.

The Mean Time Between Failures (MTBF) is simply the inverse of the failure rate for an exponential distribution, while the Failures In Time (FIT) rate is 10^9 × the failure rate (i.e., failures per billion device-hours).

For example:
If FIT = 15.1, then MTBF = 10^9 / 15.1 ≈ 66,225,166 hours
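
As a minimal sketch of that conversion (assuming the same exponential failure model as above; the FIT value is just the example figure):

```python
# Minimal sketch: FIT <-> MTBF conversion under an exponential failure model.
# FIT = failures per 10^9 device-hours, so MTBF (hours) = 10^9 / FIT.

def mtbf_hours_from_fit(fit: float) -> float:
    """Return MTBF in hours for a given FIT rate."""
    return 1e9 / fit

def fit_from_mtbf_hours(mtbf_hours: float) -> float:
    """Return the FIT rate for a given MTBF in hours."""
    return 1e9 / mtbf_hours

fit = 15.1
mtbf = mtbf_hours_from_fit(fit)
print(f"FIT = {fit} -> MTBF ≈ {mtbf:,.0f} hours (≈ {mtbf / 8766:,.0f} years)")
# FIT = 15.1 -> MTBF ≈ 66,225,166 hours (≈ 7,555 years)
```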

Unfortunately, there isn't much on Intel's site that gives the actual FIT of current-gen chips.

But it could be safe to say that (at defaults) a modern CPU is capable of 10 years (roughly 88,000 hours) at load and at max temp.
I very much doubt you can run a 5 nm Zen chip at 100 C at max boost for 10 years. I would not be surprised if it fails a couple months after the 3-year warranty ends if you do that.

5 years is a major issue
5 years is when I buy 'em ;)

According to an article at a competing site...

  • Ryzen 5000 series fails at 2.9 percent.
  • Ryzen 3000 series fails at 3 percent.
  • ThreadRipper 3000 series fails at 2.5 percent.
For comparison, the company's data on Intel chips:

  • Intel 9th-gen fails at 0.9 percent.
  • Intel 10th-gen fails at 1.2 percent.
Looks like TSMC 7 nm is indeed less durable than good old 14 nm...
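
Purely as a back-of-envelope illustration (and assuming, which the article may not state, that those percentages are annualized failure rates), the same exponential model from above would put an implied MTBF on those numbers:

```python
# Hypothetical back-of-envelope: treat the quoted percentages as annualized
# failure rates (AFR) and convert to an implied MTBF under an exponential model.
# For small rates, AFR ≈ hours_per_year / MTBF, so MTBF ≈ hours_per_year / AFR.

HOURS_PER_YEAR = 8766  # average year, including leap years

def implied_mtbf_hours(afr: float) -> float:
    """Implied MTBF in hours for a given annualized failure rate (0..1)."""
    return HOURS_PER_YEAR / afr

for name, afr in [("Ryzen 5000", 0.029), ("Ryzen 3000", 0.030),
                  ("Intel 9th-gen", 0.009), ("Intel 10th-gen", 0.012)]:
    print(f"{name}: AFR {afr:.1%} -> implied MTBF ≈ {implied_mtbf_hours(afr):,.0f} h")
# e.g. Ryzen 5000: 2.9% -> ≈ 302,276 h; Intel 9th-gen: 0.9% -> ≈ 974,000 h
```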

No problems.
My friend ran his 2500K with the stock Intel cooler for almost 10 years, and every kind of load (from loading a website to playing a game) had the CPU non-stop at 100°C.
Now, 10 years later, it still overclocks to 4.7 GHz.
You cannot compare a 32 nm chip with current ultra fragile 5/7 nm chips.

If the maker validates higher temperatures, then higher temperatures are fine.
Until the warranty ends, yes, but many of us, for one reason or another, especially these days, want to use our hardware (way) beyond that.
 
I very much doubt you can run a 5 nm Zen chip at 100 C at max boost for 10 years. I would not be surprised if it fails a couple months after the 3-year warranty ends if you do that.
Why not?? People run chips full tilt all the time. There are Rosetta@home and Folding@home grinders out there. They slap a rig together, many with overclocks, many without. Running for years...

Warranty? That's a manufacturer's decision. But generally speaking, in 3 years that CPU will be at the end of its platform's life cycle and a bit slower than later-released hardware.

I guess the question then should become:

Will all the rest of the hardware make the same 10-year journey to prove that point?
 
This is probably why server CPUs have lower clocks. While the temps may be just as high, by the time the silicon degrades to the point that 1.8 GHz doesn't work, the CPU will be loooong since replaced.
 
I very much doubt you can run a 5 nm Zen chip at 100 C at max boost for 10 years. I would not be surprised if it fails a couple months after the 3-year warranty ends if you do that.


5 years is when I buy 'em ;)


Looks like TSMC 7 nm is indeed less durable than good old 14 nm...


You cannot compare a 32 nm chip with current ultra fragile 5/7 nm chips.


Until the warranty ends, yes, but many of us, for one reason or another, especially these days, want to use our hardware (way) beyond that.
And as I also previously stated, I have used, am using, and do use my stuff 24/7. I have adequately tested AMD's and Intel's longevity ratings at load for duration. I'll join the class action if it's required, but I don't expect that to happen.
 
And as I also previously stated, I have used, am using, and do use my stuff 24/7. I have adequately tested AMD's and Intel's longevity ratings at load for duration. I'll join the class action if it's required, but I don't expect that to happen.
You cannot join a class action (does that even exist in the UK, by the way?) if your hardware is out of warranty.

Warranty? That's a manufacturer's decision. But generally speaking, in 3 years that CPU will be at the end of its platform's life cycle and a bit slower than later-released hardware.
Have you checked my sig? I do not care about some "platform life cycle" or whatever corporate BS they come up with. I care about my wallet, my requirements and the environment.
 
If it is designed to run hot, then the only thing it has going against it is thermal cycles. If you leave it on it will probably be fine.
 
ultra fragile 5/7 nm chips.
Current processors are MUCH more reliable and robust than older nodes.
50 mV too much and a 22 nm 4790K can degrade in a couple of hours in Prime95 (P95).
A 3700X can run at 110°C with ~200 mV above the FIT limit in P95 small FFT for 60-70 hours before it degrades by a couple of mV. (Buildzoid intentionally tried to degrade one.)
 
You cannot join a class action (does that even exist in the UK, by the way?) if your hardware is out of warranty.


Have you checked my sig? I do not care about some "platform life cycle" or whatever corporate BS they come up with. I care about my wallet, my requirements and the environment.
And other people don't care lol. Your opinion is valid, but lacks substance.

I mean, you can look up the life cycle of a solid-state capacitor. It's rated for 100,000 hours of use over its entire life.

I'm currently using a CPU with a 27% overclock and a manufacture date of 1999 on the PCB.

I don't know the actual years of use at full load, however, but I can't imagine it was ever used in this fashion.

And if you care about the wallet, you buy into new hardware on a regular basis because it's more efficient. Generally processing more for the same dollar amount or less.

If you are using rigs such as the one in your signature, with roughly the same IPC as a 6th-gen Intel, maybe a CPU upgrade would be the smartest thing you did today.

A 5000 series chip is going to have way way more performance than a 1600.

Much like I care about my wallet and chose a 12400F for my computing needs. The 2700X and 8700K are long in the tooth, even if one were to argue they're "capable".
 
Current processors are MUCH more reliable and robust than older nodes.
50 mV too much and a 22 nm 4790K can degrade in a couple of hours in Prime95 (P95).
A 3700X can run at 110°C with ~200 mV above the FIT limit in P95 small FFT for 60-70 hours before it degrades by a couple of mV. (Buildzoid intentionally tried to degrade one.)
You should tell @tabascosauz that. They experienced degradation even with stock clocks.

And other people don't care lol. Your opinion is valid, but lacks substance.

I mean, you can look up the life cycle of a solid-state capacitor. It's rated for 100,000 hours of use over its entire life.

I'm currently using a CPU with a 27% overclock and a manufacture date of 1999 on the PCB.

I don't know the actual years of use at full load, however, but I can't imagine it was ever used in this fashion.

And if you care about the wallet, you buy into new hardware on a regular basis because it's more efficient. Generally processing more for the same dollar amount or less.

If you are using rigs such as the one in your signature, with roughly the same IPC as a 6th-gen Intel, maybe a CPU upgrade would be the smartest thing you did today.

A 5000 series chip is going to have way way more performance than a 1600.

Much like I care about my wallet and chose a 12400F for my computing needs. The 2700X and 8700K are long in the tooth, even if one were to argue they're "capable".
You are using a CPU from 1999 and telling me I should get a newer CPU in the same breath? :laugh: :roll:

Why should I care that a newer CPU is "more efficient"? Am I going to notice the difference on the electricity bill between a Richland and a Tiger Lake laptop? It is definitely not "more efficient" for the environment in any way to send old hardware to the landfill and buy new hardware with lower power consumption, except perhaps if you are talking about hardware that is 20 years old.

Based on the fact that Piledriver IPC does not come close to Skylake IPC, I assume you are referring to my system specs rather than my signature. I got my 1600 AF (which is a Zen+ chip like the 2600) in 2020, so why would I ditch it and get a 5000 series chip? Actually, I am going to ditch it, but only because I am getting rid of the entire lousy AM4/X470 platform, and that has nothing to do with CPU performance. I do not have the money or need to upgrade to a 5000 series CPU.

Furthermore (like many people, so this is not meant as a personal attack), you have a 1000W PSU that does not even have a Bronze or Silver rating, and you are telling me that I need to upgrade from 65W 12 nm Zen+ for more "efficiency" when I have a Seasonic 550W Gold rated PSU LMFAO. If you are worried about power consumption, the first thing you do is get an appropriate wattage and quality Gold/Platinum rated PSU. Do you have any idea how much electricity your PSU is wasting when your PC is idling? It has got to be an obscene amount.
 
If it is designed to run hot, then the only thing it has going against it is thermal cycles. If you leave it on it will probably be fine.

I think clever designs keep the temperature up by using the right fan curve.

i.e., one wants bad cooling at low load.
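
A minimal sketch of what such a curve might look like; the temperature breakpoints and duty-cycle figures below are made up for illustration, not from any vendor's firmware:

```python
# Illustrative fan curve that deliberately keeps the die warm at light load
# (a flat, quiet region) to reduce thermal-cycle amplitude; breakpoints are
# hypothetical.
CURVE = [  # (temp_C, fan_duty_percent)
    (0, 20),    # near-silent floor; let the chip sit warm at idle
    (65, 20),   # flat region: no extra cooling until ~65 °C
    (80, 55),   # ramp once sustained load pushes temps up
    (95, 100),  # full speed only near the throttle point
]

def fan_duty(temp_c: float) -> float:
    """Linearly interpolate fan duty (%) from the breakpoint table."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_duty(40), fan_duty(72), fan_duty(90))  # 20.0, ~36.3, 85.0
```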
 
I think clever designs keep the temperature up by using the right fan curve.

i.e., one wants bad cooling at low load.
This is why I like that my Richland ProBook idles at 44-45°C and maxes out at about 60°C under heavy load (TurboCore/boost does not work under Linux though, so it normally would be higher).
 
Did not know that.
I mean, only on Trinity/Richland APUs specifically. It works fine on Kaveri, Carrizo and later (including Zen) APUs, and also on Intel CPUs AFAIK.
 
You cannot join a class action (does that even exist in the UK, by the way?) if your hardware is out of warranty.


Have you checked my sig? I do not care about some "platform life cycle" or whatever corporate BS they come up with. I care about my wallet, my requirements and the environment.
Have you ever run any CPU flat out 24/7? For weeks on end?!
 
You should tell @tabascosauz that. They experienced degradation even with stock clocks.


You are using a CPU from 1999 and telling me I should get a newer CPU in the same breath? :laugh: :roll:

Why should I care that a newer CPU is "more efficient"? Am I going to notice the difference on the electricity bill between a Richland and a Tiger Lake laptop? It is definitely not "more efficient" for the environment in any way to send old hardware to the landfill and buy new hardware with lower power consumption, except perhaps if you are talking about hardware that is 20 years old.

Based on the fact that Piledriver IPC does not come close to Skylake IPC, I assume you are referring to my system specs rather than my signature. I got my 1600 AF (which is a Zen+ chip like the 2600) in 2020, so why would I ditch it and get a 5000 series chip? Actually, I am going to ditch it, but only because I am getting rid of the entire lousy AM4/X470 platform, and that has nothing to do with CPU performance. I do not have the money or need to upgrade to a 5000 series CPU.

Furthermore (like many people, so this is not meant as a personal attack), you have a 1000W PSU that does not even have a Bronze or Silver rating, and you are telling me that I need to upgrade from 65W 12 nm Zen+ for more "efficiency" when I have a Seasonic 550W Gold rated PSU LMFAO. If you are worried about power consumption, the first thing you do is get an appropriate wattage and quality Gold/Platinum rated PSU. Do you have any idea how much electricity your PSU is wasting when your PC is idling? It has got to be an obscene amount.
Hmm?

Why would you poke fun at using vintage hardware? That's not part of the discussion....

It's about longevity.

Well in my aging experience, processors don't exactly just die in a short period of time under any kind of use.

You spent your money the way it suited you. I don't think it's funny, and have no desire to poke fun at you. But a CPU upgrade on your platform would be wiser money spent vs. buying into a new platform.
 
Hmm?

Why would you poke fun at using vintage hardware? That's not part of the discussion....

It's about longevity.

Well in my aging experience, processors don't exactly just die in a short period of time under any kind of use.

You spent your money the way it suited you. I don't think it's funny, and have no desire to poke fun at you. But a CPU upgrade on your platform would be wiser money spent vs. buying into a new platform.
I don't mean to suggest that there is anything wrong with using old CPUs; in fact, that was my entire point. I respect your experience but I do not think experience with older nodes can transfer to the latest CPUs and so far the evidence seems to support that. If my AM4 setup was operating in a satisfactory manner then it indeed would make more sense to upgrade to a newer AM4 CPU if more performance is desired. But neither of those things is the case. I am not really going to buy into a new platform; I am just going to use an old Carrizo laptop that I cannot seem to get rid of anyway as a desktop replacement. I will probably never buy post-2016 x86 hardware again; I am waiting for suitable RISC-V hardware with ARM as a last resort.
 
I don't mean to suggest that there is anything wrong with using old CPUs; in fact, that was my entire point. I respect your experience but I do not think experience with older nodes can transfer to the latest CPUs and so far the evidence seems to support that. If my AM4 setup was operating in a satisfactory manner then it indeed would make more sense to upgrade to a newer AM4 CPU if more performance is desired. But neither of those things is the case. I am not really going to buy into a new platform; I am just going to use an old Carrizo laptop that I cannot seem to get rid of anyway as a desktop replacement. I will probably never buy post-2016 x86 hardware again; I am waiting for suitable RISC-V hardware with ARM as a last resort.
A lot of these newer chips are in the server segment. Usually high core counts and wattage, typically higher than a home computer, including multi-socket boards.

How often do these servers get updated? Every 5, maybe 10 years? They are also built to be as redundant as can be. I suppose that could play a big role too.

As far as personal computing needs go, yours, mine and everyone else's, some stay on the cutting edge. They can afford it and get to appreciate it.

My kids will not have new-gen stuff unless they buy it for themselves. My daughter runs a Ryzen 1400. It does just fine for her, but it will get a CPU upgrade for sure, and she can use the board till its end of life; the 1400 has been running since 2017. Not full tilt, and not even a hot chip either. But the 2700X is used heavily by my gaming son. I'll probably slap a 5600X in that one. It'll run cooler but give a nice fps boost for his games.

Oh, the 2700X is delidded as well, running on air cooling. It's been on my bench table once or twice.
 
Have you ever run any CPU flat out 24/7? For weeks on end?!
No, but if I buy second-hand hardware, I do not know how it has been used. All I can do is buy quality hardware that generally runs cool and hope that it was not used by idiots who constantly blocked the vents.
 
No, but if I buy second-hand hardware, I do not know how it has been used. All I can do is buy quality hardware that generally runs cool and hope that it was not used by idiots who constantly blocked the vents.
I'm not sure what you're about.

Your replies are all over the place; if you have an issue, start a help thread.

If you have a relevant point, have at it, back it up if possible with facts.

If you're here just to moan, then stop replying to me; not once have you bothered to read what I asked you or replied with an answer.

WTAF is the point for me in speaking to you in this case, when you just ramble off on a tangent?
 
The 13900K can be set in the BIOS to run @ 115°C.

Gone are the days of 100°C throttling. W1z writes about it in his review.
 
I have been using CPUs since the 386DX, a mighty 20 MHz CPU, then more Intel until the first-gen Athlon, all the way to today with Ryzen, 8th-gen Intel and now 13th-gen. Multiple workloads from gaming to editing, rendering, etc., and I have yet to have a single CPU fail, though my typical lifespan is 4 to 5 years before I sell them on eBay and upgrade. Currently on a 13700K with a small overclock to 5.4 GHz all P-core and 4.4 GHz all E-core with an offset vcore, and temps never exceed 80°C in bench testing with the likes of Cinebench R23 (over a 31K score); Blender can go mid-80s, but that is with a 360 mm AIO. I've always been into overclocking at the lowest vcore possible so as to maximise performance and get the best out of my spend.

Bottom line, good cooling is a must with the top-end CPUs. These CPUs are now so easy to optimise for efficiency on both sides, AMD or Intel, that it should not really matter. If you are running 24/7 full load (which the vast majority do not), then optimise the power through the BIOS, run the CPU to a certain temperature and wattage target, and all will be good.
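
For the 24/7 crowd on Linux, one rough way to see (and, as root, adjust) the same long/short-term power limits the BIOS exposes is the intel_rapl powercap interface. A minimal, read-only sketch, assuming the common sysfs layout (paths can vary by platform):

```python
# Read package power limits via Linux's intel_rapl powercap interface.
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def read_watts(name: str) -> float:
    """Read a microwatt sysfs value and return watts."""
    return int((PKG / name).read_text()) / 1_000_000

if PKG.exists():
    pl1 = read_watts("constraint_0_power_limit_uw")  # long-term limit (PL1)
    pl2 = read_watts("constraint_1_power_limit_uw")  # short-term limit (PL2)
    print(f"PL1 = {pl1:.0f} W, PL2 = {pl2:.0f} W")
    # Writing new values to these files (as root) lowers or raises the limits,
    # much like setting a watt target in the BIOS.
else:
    print("intel_rapl powercap interface not found on this system")
```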

I have to admit, I was surprised when the new Ryzen 7000 series were launched to run at 95°C and that was supposed to be normal! And it was instantly accepted by so many reviewers, who went to great pains to say all is good... For me that is a no-no... Mid-80s for all-core sustained workloads is my limit, and I will configure my CPU for that either with cooling or undervolting...

And one more thought: I wouldn't be surprised if AMD do a hybrid design much like Intel, as that would work wonders with their chiplet strategy!
 
This is probably why server CPUs have lower clocks. While the temps may be just as high, by the time the silicon degrades to the point that 1.8 GHz doesn't work, the CPU will be loooong since replaced.
Server CPUs have lower clocks because they usually fit a much larger number of cores into the same TDP as a desktop chip (and may have multiple sockets on a board), and they also have smaller coolers due to the form factor. Can't exactly put an NH-D14 in a cupboard full of 1U racks.
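
As a rough worked example of the per-core power budget (round, hypothetical figures, not any specific SKU):

```python
# Hypothetical per-core power budgets: a many-core server part vs a desktop
# part; figures are illustrative only.
server_cores, server_tdp_w = 64, 225
desktop_cores, desktop_tdp_w = 16, 170

print(f"server:  ~{server_tdp_w / server_cores:.1f} W per core")   # ~3.5 W
print(f"desktop: ~{desktop_tdp_w / desktop_cores:.1f} W per core") # ~10.6 W
# Far less power (and heat) per core is part of why server parts can sit at
# lower clocks yet still live under small 1U/2U coolers.
```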

Have you ever run any CPU flat out 24/7? For weeks on end?!
Not that guy, but I've done everything from password cracking (24/7 brute force for weeks, then generating rainbow tables, then the same on GPU) to overnight encoding to coin mining, and never lost PC hardware to too much usage. I've used a GPU for 9 years at 24/7 power, doing everything from gaming to Bitcoin mining. I had a smartphone that almost lasted a decade; in the end the battery did not last a day, and any replacement batteries were so old they did not help. Actually, I did lose a motherboard one time, but that was on a PC I've since replaced twice and only kept around as a retro machine (Super Socket 7).

As long as the hardware isn't built with inherent faults, like those Fermi-based GeForce cards, the early Xbox 360s, or any AVR using the recalled Texas Instruments DSPs, it will usually work until it becomes technologically obsolete and you trash it. People still use C64s, and those things ran extremely hot for the time.
 
I have been using CPUs since the 386DX, a mighty 20 MHz CPU, then more Intel until the first-gen Athlon, all the way to today with Ryzen, 8th-gen Intel and now 13th-gen. Multiple workloads from gaming to editing, rendering, etc., and I have yet to have a single CPU fail, though my typical lifespan is 4 to 5 years before I sell them on eBay and upgrade. Currently on a 13700K with a small overclock to 5.4 GHz all P-core and 4.4 GHz all E-core with an offset vcore, and temps never exceed 80°C in bench testing with the likes of Cinebench R23 (over a 31K score); Blender can go mid-80s, but that is with a 360 mm AIO. Bottom line, good cooling is a must with the top-end CPUs. These CPUs are now so easy to optimise for efficiency on both sides, AMD or Intel, that it should not really matter. If you are running 24/7 full load (which the vast majority do not), then lower and optimise the power, run the CPU to a certain temperature and wattage target, and all will be good.

I have to admit, I was surprised when the new Ryzen 7000 series were set to run at 95°C and that was supposed to be normal, and it was instantly accepted by so many reviewers, who went to great pains to say all is good... For me, mid-80s for all-core sustained workloads is my limit, and I will configure my CPU for that either with cooling or undervolting...
I think you are fine doing as you are, but I think that all companies involved in chip manufacturing have been working to use the silicon to its max from day one.

Sticking with AMD (though I'm sure Intel do similar), they have been doing longevity testing for many years now and used the data they got from FX to influence the design of Ryzen generation one; the testing they have done on each generation has informed what followed.

That, and they do have Black's equation and various testing regimes to validate the running parameters they set, like 95°C.
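
For anyone curious, a sketch of the temperature term in Black's equation for electromigration; the activation energy below is a placeholder chosen for illustration, not a figure from AMD or Intel:

```python
# Black's equation for electromigration-limited lifetime:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# At the same current density J, the lifetime ratio between two temperatures is
#   MTTF(T1) / MTTF(T2) = exp((Ea / k) * (1/T1 - 1/T2)).
from math import exp

K_EV = 8.617e-5   # Boltzmann constant in eV/K
EA_EV = 0.9       # assumed activation energy (placeholder value)

def mttf_ratio(t1_c: float, t2_c: float, ea_ev: float = EA_EV) -> float:
    """How many times longer electromigration-limited life is at t1 vs t2."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return exp((ea_ev / K_EV) * (1.0 / t1 - 1.0 / t2))

print(f"{mttf_ratio(65, 95):.1f}x")  # ≈ 12x longer at 65 °C than at 95 °C here
```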

They always have in mind that they don't want mass failures or early-life failures, and they account for that too.

I don't think they just decided off the cuff that 95 is fine; they know, just like Intel knows that 110 is fine too.

It goes against some prior knowledge but that's reasonable evolution of designs to me.

@ymdhis I agree and have done similar.

Doing so required preventative maintenance, cleaning out rads etc. regularly; I still do this.
And I have seen the odd nightmare when it went wrong.
But no CPU died. Motherboards have died in my hands, like a Crosshair V with an 8350 in it: the CPU survived, but the current pulled by five GPUs on a bench run melted the 24-pin and the additional Molex GPU power connection, 3x RX 580 plus a GTX 460 2Win, just because.
 