
Ryzen 5800 owners complain about very high MT load temps

A comment from AMD - I'd love to add it to the original post but editing it is no longer possible:

AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as typical and by design for full load conditions. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons. Is it the same as Zen 2 or our competitor? No. But that doesn't mean something is "wrong." These parts are running exactly as-designed, producing the performance results we intend.

Check the attachments as well.
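To illustrate what AMD is describing, here's a toy model of "boost until the thermal limit, then pull back" behaviour. It is not AMD's actual Precision Boost algorithm — the clock numbers are just 5800X-ish figures used for the example — but it shows why parking near 90°C under an all-core load can be intentional rather than a fault:

```python
# Toy model of "boost until the thermal limit, then pull back" behaviour.
# NOT AMD's actual algorithm -- purely an illustration.

TJMAX = 90.0        # advertised limit for 5800X/5900X/5950X (95C for the 5600X)
BASE_CLOCK = 3.8    # GHz, example base clock
MAX_BOOST = 4.7     # GHz, example boost ceiling

def boost_clock(temp_c: float) -> float:
    """Return a hypothetical boost clock for a given die temperature."""
    if temp_c >= TJMAX:
        return BASE_CLOCK                     # hard pull-back at the limit
    headroom = (TJMAX - temp_c) / TJMAX       # 0.0 at the limit, ~1.0 when cold
    return min(MAX_BOOST, BASE_CLOCK + (MAX_BOOST - BASE_CLOCK) * headroom * 3)

for t in (60, 75, 85, 89, 90):
    print(f"{t:>3}°C -> {boost_clock(t):.2f} GHz")
```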

Sure, hold on a few moments and I'll get it up and running... and here ya go

As you can see from the screenshot, CPU temp and package power are 77°C and 130W.

You either have PBO enabled or PPT limits increased. Period.

What's the point in upgrading to a new version of a cell phone for every release that comes out?
To me, none. It's stupid and a waste.
To others, they simply cannot live without having the latest and greatest.

If folks want to upgrade from a solid CPU to another solid CPU that offers a small performance boost, that's their prerogative.

New phones normally have much better cameras, though I have to agree that generational improvements usually aren't worth it; upgrading every 2-3 years, however, makes a night-and-day difference in quality. And then there are cases where the smartphone vendor even decreases the quality of its products: consider the OnePlus 6 with a telephoto camera vs. the OnePlus 8T without it. And the latter costs a lot more.
 


That should be the die temperature (Tdie). I don't have any software right now that can read Zen 3 temperatures on a per-core basis, so I'm just stuck with that for now.

Currently I'm using GD900 as my thermal paste of choice and it has worked well over the last year. I had a few goes at pasting the 5600X to make sure I got it right. I re-pasted my GTX 1060 with it and the temperatures have been great, so I haven't bothered to invest in a different thermal paste since.
There won't be any per-core temperature for Zen 3, just as was the case for Zen 2. It's pointless anyway. The CPU (Tctl/Tdie) value is all you need: it always reports the highest (spot) temperature of any core across all cores and CCDs, switching instantly to the highest-reading sensor.
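A minimal sketch of that point (assuming Linux, the k10temp driver, and psutil installed): the one value you get is effectively "hottest spot right now", not an average, which is why it jumps around and spikes. The sensor labels are whatever your driver exposes, so treat them as examples:

```python
import psutil  # pip install psutil

# k10temp typically exposes Tctl/Tdie plus per-CCD sensors (Tccd1, Tccd2, ...)
readings = psutil.sensors_temperatures().get("k10temp", [])
by_label = {r.label: r.current for r in readings}
print(by_label)

ccd_temps = [v for k, v in by_label.items() if k.startswith("Tccd")]
if ccd_temps:
    # Tctl/Tdie should track (roughly) the hottest die sensor at any instant,
    # so it moves far more than a per-CCD average would.
    print("hottest CCD sensor:", max(ccd_temps))
    print("reported Tdie/Tctl:", by_label.get("Tdie", by_label.get("Tctl")))
```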

You either have PBO enabled or PPT limits increased. Period.
Of course @Athlonite has increased limits...
Look at the CPU PPT limit at 3.2% with that 130W PPT...:shadedshu:

Also check his CPU EDC/TDC values... reporting exactly the same... and the missing EDC limit? ...it's fishy.
He has some weird manual settings in PBO, and also thinks we haven't done our homework on Ryzen 3000.
It's been a while since I did those weird PBO settings... setting EDC to 1 Amp, for example...;)

He's probably using a high PBO scalar too.
 
Setting Custom PBO to AMD's official PPT/TDC/EDC stock values is one of the many ways to try and prevent motherboards from misreporting power to the CPU on Auto settings. Everyone with a Zen 2 or Zen 3 CPU should be aware of the following "stock" values that motherboard vendors love to misreport and fudge in an attempt to seem "faster" (reality: they run at very similar speeds, but far less efficiently):
  • 65W CPU = 88W PPT, 60A TDC, 90A EDC
  • 105W CPU = 142W PPT, 95A TDC, 140A EDC
If your CPU is toasty and you don't like it, try setting a manual PBO with those values to see if anything improves. If there's a perceptible difference, your motherboard is made of LIES.
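If you want to sanity-check your own readings against those stock limits, a quick sketch like this works (values typed in by hand from HWiNFO or similar; the 130W example at the bottom is hypothetical):

```python
# Stock package limits for Zen 2 / Zen 3 desktop parts (from the post above).
STOCK_LIMITS = {
    65:  {"PPT_W": 88,  "TDC_A": 60, "EDC_A": 90},
    105: {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140},
}

def check(tdp_w: int, ppt_w: float, tdc_a: float, edc_a: float) -> None:
    """Flag telemetry that exceeds the stock limits for a given TDP class."""
    stock = STOCK_LIMITS[tdp_w]
    for name, observed, limit in (("PPT", ppt_w, stock["PPT_W"]),
                                  ("TDC", tdc_a, stock["TDC_A"]),
                                  ("EDC", edc_a, stock["EDC_A"])):
        tag = "over stock -- board/PBO is lifting limits" if observed > limit else "ok"
        print(f"{name}: {observed:>6.1f} (stock {limit}) -> {tag}")

# Hypothetical readings for a 65W-class CPU showing 130W package power --
# nowhere near stock, so something has raised the limits.
check(65, ppt_w=130, tdc_a=85, edc_a=125)
```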
 
You use "Power Reporting Deviation (Accuracy)" to evaluate the board. And in order to do that you must have CPU PB/PBO on auto(or just enable) and run 100% load. No other manual CPU settings. Everything must be on Auto.
We know this:
  • 65W CPU = 88W TDP, 60A TDC, 90A ECD
  • 105W CPU = 142W TDP, 95A TDC, 140A EDC
Just Athlonite was trying to convince us (in vain) that the 3700X is drawing 130W on stock settings... yeah right!
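For anyone unsure what that metric expresses: roughly, it's how the power the board reports to the CPU compares with what is really being drawn. The numbers below are made up for illustration; use HWiNFO's actual "Power Reporting Deviation" reading under a 100% load with everything on Auto:

```python
# Rough sketch only -- HWiNFO computes the real thing. Around 100% means honest
# telemetry; well below 100% means the board tells the CPU it draws less power
# than it really does, so the CPU ends up running past its nominal PPT.

def reporting_deviation(telemetry_watts: float, actual_watts: float) -> float:
    """Percentage: power the board reports to the CPU vs. what is really drawn."""
    return 100.0 * telemetry_watts / actual_watts

for telemetry, actual in ((88.0, 88.0), (70.0, 88.0), (55.0, 88.0)):
    dev = reporting_deviation(telemetry, actual)
    verdict = "honest" if dev > 95 else "board is fudging telemetry"
    print(f"reported {telemetry:5.1f} W of an actual {actual:5.1f} W -> {dev:5.1f}%  ({verdict})")
```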
 
130W is not 88W. I don't know how else you can put that to him :D
 
I shouldn't have to explain that SSDs have finite writes on a computer forum, or do you think they have an infinite write lifespan like HDDs? If you think your SSD has an infinite write lifespan, start running write-speed tests non-stop and then tell me you don't want to minimize drive writes.


Obviously you didn't read the article in the second link.
You're the IT equivalent of an antivaxxer or something equally bizarre.
 
130W is not 88W. I don't know how else you can put that to him :D
I/We don't have to.
He probably knows it, but trying to... do what exactly?
I don't even give a tiny rat's arse.
 
Obviously you didn't read the article in the second link.
You're the IT equivalent of an antivaxxer or something equally bizarre.
Actually I did; he tortured the drives to death with nothing but writes. What part of that was unclear to you?

Maybe if you only use your computer for a few minutes a day to check sports scores or lottery numbers, longevity isn't an issue (also, most people are addicted to phones now), but for people who edit HD video, for example, writes to a single SSD can become an issue. It takes a fair amount of writes, but make no mistake: simply writing to an SSD is shortening its lifespan. Reading/loading does not shorten its lifespan.

Also I appear to be a fat duck talking to a moose named octopus...
 
Writes are limited, indeed. However, most users (not makeveli, lol) don't need to worry about writes. As your link shows, it took several months and PETAbytes of data to kill those drives. That is not remotely a real-world situation. Worrying about writes on an SSD is a NON-ISSUE for 99% of people.
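A back-of-the-envelope endurance calculation makes the same point. The TBW rating and the daily-write figures below are assumed ballpark numbers, not specs for any particular drive:

```python
def years_of_life(tbw_rating_tb: float, gb_written_per_day: float) -> float:
    """How long until the rated write endurance is exhausted."""
    days = (tbw_rating_tb * 1000.0) / gb_written_per_day
    return days / 365.0

scenarios = {
    "casual use, ~20 GB/day":        20,
    "heavy desktop, ~100 GB/day":    100,
    "HD video editing, ~500 GB/day": 500,
}

TBW = 300  # assumed rating for a mid-range 500 GB TLC drive
for label, daily in scenarios.items():
    print(f"{label:>32}: ~{years_of_life(TBW, daily):.0f} years to hit {TBW} TBW")
```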
 
Indeed, an average user is not likely to run into any issues; however, I have killed an SSD before, using it as the only drive in a computer. A 240GB SSD.
 
Somewhere way back on the Techreport forums I posted an update on my Samsung 840, and people called out the insane amount of data I'd written to my 1-year-old 840 SSD. I'd been using it as an ESX swapfile/scratch dump for a synthetic VMware testing environment, and I still only wrote 240TB to it in a year, even running a silly number of VMs with dumb, unrealistic, data-heavy workloads just to get worst-case-scenario data for work.

No consumer needs to worry about SSD lifespan - at least not with a half-decent TLC drive with a DRAM cache. All bets are off with ultra-budget DRAM-less QLC, but if you buy a godawful piece-of-shit SSD like that just to save 10% on price and then use it for write-heavy workloads, you should expect all the trouble you deserve for such stupidity/ignorance.
 
So the whole thread's pointless; AMD said so.

Ask a mod; he may open edits for you.
 
Not really pointless. As far as I can see there's a huge, yet-to-be-explained variability in 5800X load temps. Some people are OK (with temps slightly higher than those for the 3800X/XT); other people say they see temps above 90°C with CPU throttling. This is far from settled. I'm looking at purchasing this CPU and I don't want to get one from a bad batch (if that's indeed a thing).
 
Indeed, an average user is not likely to run into any issues; however, I have killed an SSD before, using it as the only drive in a computer. A 240GB SSD.
Because of writes? Or did the controller crap out? Things die. It didn't die because it was the only SSD in the system.

Come on guys... the information is all there... killing a drive with writes is incredibly difficult for 99% of users. If you're grinding several GB a day, sure... but so few do that it's just not a worry. My pagefile is set static to 2GB (32GB of RAM).

Anyway, this isn't about SSDs... so I'll leave it at that.
 
If I remember correctly from last year, he had an old SSD that died from a lot of writes several years ago.
But you're right, this isn't an SSD thread...
 
I am getting ready to throw a 5800X against a dual 360mm radiator solution, plus EK Velocity waterblocks :) so we'll see how it heats up... (normally this build has a GPU waterblock in the mix, but there are zero waterblocks for the FTW3 3080 atm).
 
I was messing around today with undervolting my R9 5900X. I think I'll leave it at 1.25V and 4.3GHz. It brings a very nice reduction in power consumption and temperature with really minimal performance difference in games.
Edit: Actually, I lowered the voltage some more, to 1.225V. Temperatures went down even more, and power consumption as well. Seems stable after Cinebench R23 and playing some Mortal Shell. Will see over time.
If the TPU reviewer needed 1.4V for 4.5GHz stability, then I think 1.225V for 4.3GHz is fantastic.
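As a rough first-order check on why the undervolt helps: dynamic CPU power scales roughly with frequency × voltage². The baseline below is taken from the TPU figure mentioned above, so treat the percentages as illustrative only, not measurements:

```python
# First-order estimate: dynamic power ~ frequency * voltage^2.
# Baseline assumed: ~4.5 GHz @ 1.40 V all-core (the TPU figure quoted above).

def relative_power(freq_ghz: float, volts: float,
                   ref_freq: float = 4.5, ref_volts: float = 1.40) -> float:
    """Power relative to the assumed 4.5 GHz @ 1.40 V baseline."""
    return (freq_ghz / ref_freq) * (volts / ref_volts) ** 2

for label, f, v in (("baseline 4.5 GHz @ 1.400 V", 4.5, 1.400),
                    ("undervolt 4.3 GHz @ 1.250 V", 4.3, 1.250),
                    ("undervolt 4.3 GHz @ 1.225 V", 4.3, 1.225)):
    print(f"{label}: ~{100 * relative_power(f, v):.0f}% of baseline power")
```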
 
I am getting ready to throw a 5800X against a dual 360mm radiator solution, plus EK Velocity waterblocks :) so we'll see how it heats up... (normally this build has a GPU waterblock in the mix, but there are zero waterblocks for the FTW3 3080 atm).
It'll heat up just as much as ever, I can vouch for that; it's just a lot of heat in a small package (see my system specs, I'm there). It gets hot because it was designed to crack on with work when asked, and to do so to the limits designed into it.
The only thing extensive cooling will do is allow higher sustained clocks, though I am talking about flat-out loads for days, not minutes, so most wouldn't see such behaviour.
Every generation of Ryzen behaved the same, too; I tried 'em.

@birdie yes, indeed pointless. Next up, Intel's 11xxx-series CPU release is due; I wonder if they'll get hot, hmm!?
 
Because of writes? Or did the controller crap out? Things die. It didn't die because it was the only SSD in the system.

Sorry for going off topic. I think the controller on the drive was bad; the model was known for it, I think. A good SSD as the only drive should last at least 5 years of pretty much continual use, if not more.
 
FYI, there are no fishy settings in my BIOS. PBO/EDC limits are just left on Auto, so whatever the system thinks it can do, it will do. In fact, the only setting changed was DOCP for my RAM timings; everything else, as I've said already, is on Auto.
 
I was messing around today with undervolting my R9 5900X. I think I'll leave it at 1.25V and 4.3GHz. It brings a very nice reduction in power consumption and temperature with really minimal performance difference in games.
Edit: Actually, I lowered the voltage some more, to 1.225V. Temperatures went down even more, and power consumption as well. Seems stable after Cinebench R23 and playing some Mortal Shell. Will see over time.
If the TPU reviewer needed 1.4V for 4.5GHz stability, then I think 1.225V for 4.3GHz is fantastic.

I would recommend just disabling PBO and leaving Vcore on Auto. It knocked 12C off my maximum die temperature, but only reduced Cinebench R20 scores by 10% for multi and 2% for single. Not a bad deal.
 
So, granting my system's cooling is gross overkill, I am hitting 4.75GHz on a 1.2V Vcore and I am not going past 80°C. That's using an EK Velocity AM4 block. Switching to an Optimus WC Foundation AM4 CPU block here soon. Also using a set of G.Skill Ripjaws DDR4-3800 CAS 16, FCLK 1900. I got a 6200 in Cinebench R20, which is 300-400 points shy of a 16-core 1950X.
 
Seems like PEBKAC; one was complaining about hitting 50-60°C load temps.
 
I would recommend just disabling PBO and leaving Vcore on Auto. It knocked 12C off my maximum die temperature, but only reduced Cinebench R20 scores by 10% for multi and 2% for single. Not a bad deal.
Won't that get the CPU to run at the 3.7GHz base frequency? That's too low. With Vcore at 1.225V and the multiplier at 43, I get almost identical Cinebench results as with everything on Auto.
 

It drops my all-core frequency from 4.41 to 4.06GHz, but the single-core frequency of 4.6GHz stays the same. So really, it's multi-core rendering where disabling PBO seems to have the largest impact on performance and temperatures.
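Quick sanity check on those numbers (simple arithmetic, nothing more): the all-core clock drop roughly lines up with the reported multi-core score drop, which is what you'd expect if nothing else changed.

```python
pbo_on_ghz, pbo_off_ghz = 4.41, 4.06
clock_drop_pct = 100 * (1 - pbo_off_ghz / pbo_on_ghz)
print(f"all-core clock drop: {clock_drop_pct:.1f}%")  # ~7.9%, vs the ~10% reported for CB R20 multi
# Single-core boost (4.6 GHz) is unchanged, hence only the ~2% single-thread loss.
```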
 