# Ryzen 5800 owners complain about very high MT load temps



## birdie (Nov 10, 2020)

Source 1.
Source 2.
Source 3.

Many have resorted to enabling Eco Mode in Ryzen Master which basically turns the 5800X into a yet to be announced 5700X. From the Ryzen Master manual:



> Applying Eco-Mode lowers the processor’s power consumption from default stock to AMD’s lower, standard AM4 infrastructure power level (TDP).
> ‒ 105W and 95W TDP models shift to 65W
> ‒ 65W TDP models shift to 45W
> • The processor continues to manage core voltage and frequency automatically to the reduced power levels - expect frequencies may be lower
> ...



A number of people disagreed with me when I said that AMD basically OC'ed Ryzen 5000 CPUs as much as possible right out of the gate to gain that coveted single-threaded performance lead, and it looks like that's exactly what people are dealing with now. Luckily, enabling Eco Mode translates into only a ~8% loss in MT Cinebench R20 scores, which shouldn't affect games, but it's still not pleasant for those who have shelled out $450. And here's the problem: the stock Ryzen 3700X scores around 4750 points while the TDP-limited 5800X scores around 5350, which is only a ~12% performance increase from a CPU that costs 36% more.
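For what it's worth, that price/performance arithmetic checks out in a few lines (a rough sketch: the Cinebench scores are the ones quoted above, and the $329/$449 launch MSRPs are my assumption):

```python
# Rough check of the Eco Mode value proposition described above.
# Scores are the Cinebench R20 MT figures quoted in the post;
# MSRPs ($329 for the 3700X, $449 for the 5800X) are assumed.
r20_3700x_stock = 4750
r20_5800x_eco = 5350

perf_gain = r20_5800x_eco / r20_3700x_stock - 1    # ~12.6%

price_3700x, price_5800x = 329, 449
price_delta = price_5800x / price_3700x - 1        # ~36.5%

print(f"+{perf_gain:.1%} performance for +{price_delta:.1%} price")
```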


----------



## Zach_01 (Nov 10, 2020)

That guy on reddit said he was hitting 90C in R20 MT; he re-seated his cooler and got -5C. But he didn't mention his cooler.
Sounds like aggressive boosting, just like the 3000 series, possibly combined with mediocre cooling. And the spikes from a 30C idle into the 50s/60s or even 70s are the same as on the 3000 series, again depending on cooling and the Windows power plan. It's the hotspot sensor (Tctl/Tdie) reading that only Zen 2/3 expose.

Are there any other examples?


----------



## birdie (Nov 10, 2020)

Zach_01 said:


> That guy on reddit said he was hitting 90C in R20 MT; he re-seated his cooler and got -5C. But he didn't mention his cooler.
> Sounds like aggressive boosting, just like the 3000 series, possibly combined with mediocre cooling. And the spikes from a 30C idle into the 50s/60s or even 70s are the same as on the 3000 series, again depending on cooling and the Windows power plan. It's the hotspot sensor (Tctl/Tdie) reading that only Zen 2/3 expose.
> 
> Are there any other examples?



The AnandTech review? The same 90C:



> Also of note are the last two processors – both processors are reporting 4450 MHz all-core turbo frequency, however the 5800X is doing it with 14.55 W per core, but the 5600X can do it with only 10.20 W per core. In this instance, this seems that the voltage of the 5800X is a lot higher than the other processors, and this is forcing higher thermals – we were measuring 90ºC at full load after 30 seconds (compared to 73ºC on the 5600X or 64ºC on the 5950X), which might be stunting the frequency here. The motherboard might be over-egging the voltage a little here, going way above what is actually required for the core.


----------



## LFaWolf (Nov 10, 2020)

Zach_01 said:


> That guy on reddit said he was hitting 90C in R20 MT; he re-seated his cooler and got -5C. But he didn't mention his cooler.
> Sounds like aggressive boosting, just like the 3000 series, possibly combined with mediocre cooling. And the spikes from a 30C idle into the 50s/60s or even 70s are the same as on the 3000 series, again depending on cooling and the Windows power plan. It's the hotspot sensor (Tctl/Tdie) reading that only Zen 2/3 expose.
> 
> Are there any other examples?


He did: a be quiet! Dark Rock on an MSI B550 Mortar.


----------



## Zach_01 (Nov 10, 2020)

I also forgot to mention the Power Reporting Deviation issue, where a board "lies" to the CPU about its consumption to push it harder. This may be happening on all CPUs, but I'm not so sure...
At this point we need to know which boards are in use in those cases.

Or maybe it's early AGESA in general.


----------



## Vayra86 (Nov 10, 2020)

142W peak will do that to any cooling solution but the very biggest.

These temp spikes aren't new. I guess you're just going to have to settle for 'within spec' and pray it won't burn a hole in your socket at some point. The engineers say this is fine, so yeah.

Not too happy about it either though.


----------



## Frick (Nov 10, 2020)

Yeah, I mean, you want to use the capabilities of the CPU, right? That means more heat, and spiking while doing anything is fine. Wasn't there an outrage for some Ryzen generation because monitoring software read the ms-duration spikes and reported those values as max? Also, I assume those are die temps and not package temps.


----------



## birdie (Nov 10, 2020)

Frick said:


> Yeah I mean you want to use the capabilities of the CPU right? That means more heat, and spiking while doing anything is fine. Wasn't there an outrage for some Ryzen generation because measuring software read the ms-duration spikes and reported those values as max? Also I assume those are die temps and not package temps.



Personally, people reporting Ryzen Master temps means nothing to me. And no one is using a HWiNFO64 build from mid-2019.


----------



## tabascosauz (Nov 10, 2020)

I guess this is why the early 5800X adopters should have waited for reviews and opted for a 5600X or 5900X. So far there isn't a single Vermeer chip that's being pushed as hard as the 5800X.

The 6-core has a lower TDP, is less dense, and sustains the same all-core frequency with much less Vcore. Counterintuitively, it almost looks like it has better chiplet quality.
The 12-core has the second-lowest thermal density, and runs a lower Vcore and a lower all-core clock.
While the 16-core is the only other SKU with fully enabled chiplets, it runs more conservatively than the 12-core, at an even lower all-core clock.
That just leaves the 5800X, running a blistering 4.5GHz all-core at 1.35V+ on one chiplet, while all of its brethren have dialed all-core Vcore back to sub-1.3V this time around.

The 3800X ran barely higher all-core stock than a 3700X, and already didn't run much cooler than a 3900X. Not entirely surprising that the 5800X is like this. 

AMD firmware-restricted the 3700X in AGESA somewhere after 1004 to have it run cooler and closer to 65W and stop cannibalizing the 3800X. AMD could make firmware changes here to dial back the Vcore in P95 and Cinebench, but who knows how likely that is.


----------



## EarthDog (Nov 10, 2020)

birdie said:


> No one uses HWiNFO64 from mid 2019.


That's because it's been updated many times since then. 

In fact, isn't that the application which tells you about the power deviation that @Zach_01 mentioned earlier?


----------



## Zach_01 (Nov 10, 2020)

EarthDog said:


> That's because it's been updated many times since then.
> 
> In fact, isn't that the application which tells you about the power deviation that @Zach_01 mentioned earlier?


And it keeps evolving...

> HWiNFO v6.35-4305 Beta released. Changes: Enhanced sensor monitoring on ASUS H570, B560, H510 and Q570 series. Added reporting of Precision Boost Clock Limit and Automatic OC Offset on AMD Vermeer. Fixed monitoring of CPU power and HTC status on AMD Zen3.
> 
> ‒ www.hwinfo.com


----------



## Deleted member 24505 (Nov 10, 2020)

So no jokes about hot and sweaty Intel now, eh?

Someone on reddit said this-
The 5800x and 5950x have the exact same 105W TDP and 142W max power limit. So there is a larger power and thermal budget available per core on the 5800x vs the 5950x.

On the 5800x, all that power goes to one die, and is divided amongst only 8 cores. While on the 5950x, you have the exact same 142W divided amongst 16 cores and two separate dies which will be easier to cool. So the 5800x running hotter and being able to draw more power per core makes total sense.

And the 5600x is 65W(88W power budget), so it has a 38% lower max power draw than the 105W chips, but only two fewer cores. So again this ratio makes total sense.

sense or not?
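The per-core budget argument above is easy to put in numbers (a quick sketch; the PPT figures follow the usual AM4 convention of PPT ≈ 1.35 × TDP, i.e. 142W for 105W parts and 88W for 65W parts, and this ignores uncore/IO power, so it's an upper bound, not a measurement):

```python
# Per-core share of the socket power limit (PPT), per the reasoning above.
chips = {
    "5600X": {"ppt_w": 88,  "cores": 6},
    "5800X": {"ppt_w": 142, "cores": 8},
    "5900X": {"ppt_w": 142, "cores": 12},
    "5950X": {"ppt_w": 142, "cores": 16},
}

for name, c in chips.items():
    per_core = c["ppt_w"] / c["cores"]
    print(f"{name}: {per_core:5.2f} W/core max")

# The 5800X ends up with exactly twice the per-core budget of the 5950X
# (17.75 W/core vs 8.88 W/core), which is why it runs so much hotter per die.
```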


----------



## thesmokingman (Nov 10, 2020)

tigger said:


> So no jokes about hot and sweaty Intel now, eh?
> 
> Someone on reddit said this-
> The 5800x and 5950x have the exact same 105W TDP and 142W max power limit. So there is a larger power and thermal budget available per core on the 5800x vs the 5950x.
> ...



Total sense. The problem is that the complainers did not do their due diligence. The 5800X's TDP is well advertised. It only takes minimal logic to see how much more power is available to that chip given its TDP vs. the number of cores. PPL seem to think the 5800X is the same as a 5600X... smh.


----------



## TumbleGeorge (Nov 10, 2020)

Better chiplets vs. worse chiplets by power draw? The better ones for the flagship?


----------



## thesmokingman (Nov 10, 2020)

TumbleGeorge said:


> Better chiplets vs. worse chiplets by power draw? The better ones for the flagship?



Read the label on a 5800x. It has a 105w TDP to feed its 8 cores instead of 12 or 16 cores.


----------



## Zach_01 (Nov 10, 2020)

tigger said:


> So no jokes about about hot and sweaty Intel now eh
> 
> Someone on reddit said this-
> The 5800x and 5950x have the exact same 105W TDP and 142W max power limit. So there is a larger power and thermal budget available per core on the 5800x vs the 5950x.
> ...


Yes it does...



thesmokingman said:


> Total sense. The problem is that the complainers did not do their due diligence. The 5800x TDP is well advertised. It only takes minimal logic to see how much more power is available to that chip given its TDP vs the number of cores. *PPL seem to think the 5800x is the same as a 5600x... smh.*


...or a 3700X, which is also 65W TDP (88W PPT).


----------



## TheoneandonlyMrK (Nov 10, 2020)

birdie said:


> Source 1.
> Source 2.
> 
> Many have resorted to enabling Eco Mode in Ryzen Master which basically turns the 5800X into a yet to be announced 5700X. From the Ryzen Master manual:
> ...


Well, surprise: AMD and Intel adopt boost algorithms to maximize performance beyond stock, and shit gets hot.
Are you a noob? Since when is this new? My third-gen Ryzen on water gets hot, as did my Q6600, FX-8350 and many others.


----------



## Vayra86 (Nov 10, 2020)

theoneandonlymrk said:


> Well, surprise: AMD and Intel adopt boost algorithms to maximize performance beyond stock, and shit gets hot.
> Are you a noob? Since when is this new? My third-gen Ryzen on water gets hot, as did my Q6600, FX-8350 and many others.



The difference now is that you're very limited in controlling those spikes. The algorithm thinks for you and that takes some getting used to, at least. People like control.


----------



## TumbleGeorge (Nov 10, 2020)

thesmokingman said:


> Read the label on a 5800x. It has a 105w TDP to feed its 8 cores instead of 12 or 16 cores.


Same TDP for one chiplet vs. two, and it also boosts lower. That speaks to much worse efficiency: more electrical losses, meaning more heat, to achieve the same or a close frequency.


----------



## DuxCro (Nov 10, 2020)

Interesting. My max temp on R9 5900X was 65C with OCCT and small data set. Fortron Windale 6 cooler with 2 X 120mm fans.


----------



## PooPipeBoy (Nov 10, 2020)

tabascosauz said:


> I guess this is why the early 5800X adopters should have waited for reviews and opted for a 5600X or 5900X. So far there isn't a single Vermeer chip that's being pushed as hard as the 5800X.
> 
> The 6-core has lower TDP, is less dense, and sustains the same all-core frequency with much less Vcore. Counterintuitively, it almost looks like it has better chiplet quality.
> The 12-core has the second lowest thermal density, and runs lower Vcore and all core.
> ...



The 5900X actually has an 8+4 core configuration this time around, so it does get one fully-enabled CCD like the 5800X.
So even though it's running higher frequencies and the temperatures should be worse on the 5900X by nature, it seems like AMD has something going on to actually make it run cooler.


----------



## Frick (Nov 11, 2020)

Vayra86 said:


> The difference now is that you're very limited in controlling those spikes. The algorithm thinks for you and that takes some getting used to, at least. People like control.



Yeah well the CPU knows its own silicone better than people so...


----------



## R0H1T (Nov 11, 2020)

Vayra86 said:


> The difference now is that you're very limited in controlling those spikes. The algorithm thinks for you and that takes some getting used to, at least. People like control.


You know that scheduler on an OS, that also takes control away from the user ~ I bet most people don't like that either.


----------



## Athlonite (Nov 11, 2020)

Zach_01 said:


> ...or a 3700X, 65W TDP (88W PPT) also



My 3700X pulls way more than 88W: try double its TDP, at 135.8W during CB R20 multi. Mind you, it's also running an all-core 4450MHz and reaching 77C.


----------



## ratirt (Nov 11, 2020)

Funny, because I looked over TPU's reviews and it would seem the 5800X doesn't reach the temps the 5900X does. The power draw, on the other hand, is similar, with the 5800X drawing a bit less power. Also, the voltage is quite high on the 5800X: 1.355V compared to 1.238V on the 5900X.


PooPipeBoy said:


> The 5900X actually has an 8+4 core configuration this time around


I think you are wrong. The 5900X uses 6+6, and that is why its price is so competitive compared to the 5800X, which has to use a fully enabled 8-core chiplet. So the 5800X and 5950X are the only chips using full 8-core chiplets. Also, the 5950X uses the top-quality silicon while the 5800X uses any 8-core chiplet that passed the testing process.
That is why there is a discrepancy in the voltage the 5800X uses (all 5000-series chips run around 1.2V while the 5800X runs 1.35V, and that is a noticeable difference). The voltage is higher than the 3800X's, and it is the only chip in the 5000-series family with temps and voltage higher than its previous-gen counterpart.


----------



## Zach_01 (Nov 11, 2020)

Athlonite said:


> My 3700X pulls way more than 88W: try double its TDP, at 135.8W during CB R20 multi. Mind you, it's also running an all-core 4450MHz and reaching 77C.


That is interesting...
Can you show to us a screenshot of HWiNFO sensors mode, full window, during R20 MultiT?


----------



## PooPipeBoy (Nov 11, 2020)

ratirt said:


> I think you are wrong. The 5900X uses 6+6, and that is why its price is so competitive compared to the 5800X, which has to use a fully enabled 8-core chiplet. So the 5800X and 5950X are the only chips using full 8-core chiplets. Also, the 5950X uses the top-quality silicon while the 5800X uses any 8-core chiplet that passed the testing process.
> That is why there is a discrepancy in the voltage the 5800X uses (all 5000-series chips run around 1.2V while the 5800X runs 1.35V, and that is a noticeable difference). The voltage is higher than the 3800X's, and it is the only chip in the 5000-series family with temps and voltage higher than its previous-gen counterpart.



Then we'll wait until someone with a 5900X can use Ryzen Master to verify the CCX configuration. Let's actually prove that I'm wrong, rather than just saying it.


----------



## R0H1T (Nov 11, 2020)

Well, AMD can use any combination of 4+8 or 6+6 (*IIRC* even on Zen 2); there's no technical limitation there. So it could simply be that they're mixing configurations depending on the chiplets they have as well as demand for such chips.


----------



## ratirt (Nov 11, 2020)

PooPipeBoy said:


> Then we'll wait until someone with a 5900X can use Ryzen Master to verify the CCX configuration. Let's actually prove that I'm wrong, rather than just saying it.


You can watch reviews and you will have that answer. There might be a possibility of using defective 4-core chiplets, but these (as you guys probably know) are rare at 80mm² on a mature 7nm node.
Besides, AMD will not make 4-core processors with chiplets, but rather monolithic CPUs with an iGPU. At the beginning of shipping, AMD will surely focus on the chiplets and bin them accordingly: 6 cores per chiplet for the 5600X and high quality for the 5900X; the same goes for the 5800X and 5950X. You can see it in the 5800X's higher voltage (lower quality) versus the 5950X (higher quality). It's fairly simple, I think.


----------



## Vayra86 (Nov 11, 2020)

Frick said:


> Yeah well the CPU knows its own silicone better than people so...



Stop talking about my tits will you



R0H1T said:


> You know that scheduler on an OS, that also takes control away from the user ~ I bet most people don't like that either.



No, that's why they disable the page file.

And then complain about why things don't work.


----------



## oobymach (Nov 11, 2020)

Vayra86 said:


> No, that's why they disable the page file.
> 
> And then complain about why things don't work.


Off topic, but I even use hidden tasks in Task Scheduler: I have an inaudible tone play every 5 minutes to keep my T50's from turning off. I actually wrote the damn script for it, but more and more software is getting invasive to the point where it's disruptive. Ads for upgrades in your software, multiple useless tasks to run a single device... if I weren't a hobbyist and didn't know how to disable such crap, I might be put off by the invasiveness.

Also, you do need a page file, but not with Windows in control of it; mine averages 16-60MB thanks to a custom size setting. In these days of SSDs, where writes need to be kept minimal, taking control of your PC isn't a bad idea. I re-routed all my temp folders to an HDD and have an M.2 SSD just for my page file, thus keeping writes to my M.2 C: drive to a minimum.


----------



## birdie (Nov 11, 2020)

Athlonite said:


> My 3700X pulls way more than 88W: try double its TDP, at 135.8W during CB R20 multi. Mind you, it's also running an all-core 4450MHz and reaching 77C.



The 3700X pulls at most 93W, period, unless you've enabled PBO/OC or increased its PPT.



PooPipeBoy said:


> Then we'll wait until someone with a 5900X can use Ryzen Master to verify the CCX configuration. Let's actually prove that I'm wrong, rather than just saying it.



Two six-core CCXes:


----------



## PooPipeBoy (Nov 11, 2020)

Cool, then I stand corrected.


----------



## birdie (Nov 11, 2020)

The plot thickens:



> I will post the thermals on the new one. Ended up cleaning and repasting the damn thing four times before I decided it was the damn CPU and not my cooler or paste application technique. Called my company's AMD tech group (servers) and had them take a look at it remotely. They have a very handy diagnostic suite that interrogates the hell out of the chip at the hardware level (tried to snag a copy and got my hand smacked, damn); two hours later they said it was the TIM under the heat spreader. AMD flagged the chip's serial number and are going to RMA it back to the QC lab to figure out what happened and to see if more in that batch also got a dodgy TIM application (the 5800X and 5900s get soldered; not sure about the rest of the series); I doubt enough got messed up to warrant a recall... at least I hope not. Due to the temperature spikes those cores are now damaged. So I returned it and got a store credit and a promise of a phone call when they get their next shipment. Statistically I was the one to get a dud chip; *they randomly test chips after the heat spreader is attached, before they package them, for QC. If they had to test every one then they would never get them to market.* When they test to bin them, it is before they place the heat spreader, and they test stability right on the core. They have a solid copper block cooler with a liquid metal pad thingy that they use to cool the core when binning them (have a picture somewhere I got from someone). I have my 3900X till then, so I am not sitting with a pile of dismantled computer parts pining for a round of whatever game tickles my fancy at the time.



Hopefully it's an isolated issue and it's not what people are dealing with.


----------



## Glaceon (Nov 11, 2020)

I suddenly don't regret buying a 3900X in August...


----------



## Frick (Nov 11, 2020)

birdie said:


> The plot thickens:
> 
> Hopefully it's an isolated issue and it's not what people are dealing with.



How hot did that one get?


----------



## PooPipeBoy (Nov 11, 2020)

Just did some testing to see how the thermals were doing on my 5600X:

Idle: 45C
Gaming: 72C
Cinebench R20 Multi: 89C

Granted, I'm running a 92mm tower cooler, but at least it's a point of comparison. It's not throttling in Cinebench R20 and holds 4.4GHz on all cores constantly. There's no way I'd be able to run a 5800X without a major cooling upgrade, so it's little surprise that the 5800X club members are having issues.


----------



## Khonjel (Nov 11, 2020)

Vayra86 said:


> Stop talking about my tits will you


Just as point of reference, how big are they?


----------



## birdie (Nov 11, 2020)

PooPipeBoy said:


> Just did some testing to see how the thermals were doing on my 5600X:
> 
> Idle: 45C
> Gaming: 72C
> ...



89C is just one degree away from the point where the CPU starts throttling (90C for the Ryzen 5000 series). Your cooling solution is clearly insufficient, even for your 5600X.


----------



## Chrispy_ (Nov 11, 2020)

These temperature spikes are no different from what we've had with the XT Zen 2 models. It's all part of the boost algorithm, and if you cool it harder it will just boost higher and get hotter again. I have a 3600XT (gift from AMD, not something I wasted money on) and it's exactly the same behaviour as the 5600X. Later-model 3950Xs in our render farms behave much the same way, though I don't use PBO on those: just 100% plain stock, with motherboard/BIOS combos that don't 'cheat' with power reporting.

If you give it headroom, the CPU will use it. If you don't want such aggressive boost you can reduce the TDP or set custom PBO settings that are actually lower than the default PBO.
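For anyone wanting to try that last suggestion, the knobs in question are the PPT/TDC/EDC limits. A minimal sketch, using the commonly reported AM4 limit sets for 105W and 65W parts (treat the exact values as assumptions and check your own board/Ryzen Master):

```python
# PBO limits below stock: pick values between the 105W-TDP defaults and
# the 65W (Eco Mode) set. PPT in watts, TDC/EDC in amps.
STOCK_105W = {"PPT": 142, "TDC": 95, "EDC": 140}
ECO_65W    = {"PPT": 88,  "TDC": 60, "EDC": 90}

def blend(frac):
    """Interpolate limits: 0.0 -> full Eco set, 1.0 -> stock 105W set."""
    return {k: round(ECO_65W[k] + frac * (STOCK_105W[k] - ECO_65W[k]))
            for k in STOCK_105W}

print(blend(0.5))  # a halfway point between Eco and stock
```

Anything below the stock set reins in the boost aggression (and temps) proportionally, which is exactly what Eco Mode does in one click.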


----------



## PooPipeBoy (Nov 11, 2020)

birdie said:


> 89C is just one degree away from the point where the CPU starts throttling (90C for the Ryzen 5000 series). Your cooling solution is clearly insufficient, even for your 5600X.



Eh, it's fine. Cinebench isn't really that critical beyond testing and benchmarking. I don't do any rendering workloads.


----------



## tabascosauz (Nov 11, 2020)

PooPipeBoy said:


> Eh, it's fine. Cinebench isn't really that critical beyond testing and benchmarking. I don't do any rendering workloads.



Honestly, it looks like the firmware still has a long way to go. Most reviewers pegged the 5600X as running similarly cool to the 3600, if not cooler, so the scattered user reports suggest the 5600X isn't a hot-running chip; the firmware is just still in such a primitive state that the new Vermeer chips don't understand when to "go" and when to "stop". The 3600 would never give the U9S a run for its money at stock settings; judging from the reviews, neither should the 5600X.

So essentially, launch firmware is just wack, in classic Ryzen fashion, just like pre-1003 AGESA. Good news is Hallock already promised improved firmware coming up soon.

If AMD reins back these chips' Vcore a bit and board vendors do a better job of calling for more consistent stock Vcore, the 5800X may get a little bit better.










Optimumtech's 5950X is both bizarrely underperforming in all-core work and boosting to a ridiculously hot extent in mere productivity and web browsing, which points to the firmware needing work. Unsurprisingly reminiscent of how Ryzen 3000 behaved before AGESA 1003ABBA's boost filter. The 5950X's frequencies during Heaven (a GPU benchmark, of all things) look a lot like the spikes you see if you graph Tctl/Tdie idle temps in HWiNFO over time.


----------



## R0H1T (Nov 11, 2020)

Just lower the voltage on them; it's possible to get 'em beyond *5GHz*. Mobo makers are pumping way too much.

Unless I see a decent sample set with voltages, cooling, clock speeds and the application that was running, I wouldn't call this an alarming thing just yet.


----------



## Vayra86 (Nov 11, 2020)

Khonjel said:


> Just as point of reference, how big are they?



5nm


----------



## EarthDog (Nov 11, 2020)

Vayra86 said:


> 5nm


Smaller nodes FTW!


----------



## Deleted member 24505 (Nov 11, 2020)

tabascosauz said:


> Honestly, it looks like the firmware still has a long way to go. Most reviewers pegged the 5600X as similarly cool-running if not more so than the 3600, so looking at the scattered user reports about how the chips are behaving suggests that the 5600X isn't a hot-running chip, just the firmware is still in such a primitive state that the new Vermeer chips don't understand when to "go" and when to "stop". The 3600 would never give the U9S a run for its money at stock settings; from the reviews, neither should the 5600X.
> 
> So essentially, launch firmware is just wack, in classic Ryzen fashion, just like pre-1003 AGESA. Good news is Hallock already promised improved firmware coming up soon.
> 
> ...



Did he say he is using a 240mm rad for the cpu and a 3090? no wonder temps are so crappy.


----------



## Zach_01 (Nov 11, 2020)

Khonjel said:


> Just as point of reference, how big are they?





Vayra86 said:


> 5nm


Oh... you couldn't be more into the discussion! We are talking about squished nodes... apparently...

Seriously now, I really would like to see a view of HWiNFO during an R20 MT run on these 5000 CPUs.


----------



## EarthDog (Nov 11, 2020)

Zach_01 said:


> Seriously now, I really would like to see a view of HWiNFO during R20 MT run on these 5000 CPUs.


I deleted a post in this thread, but I'm regretting it. This thread really feels like a witch hunt (considering the source).

Anyway, I'll have a 5950x landing today so I can put something up late tonight.


----------



## Octopuss (Nov 11, 2020)

oobymach said:


> In these days of ssd's where writes need to be minimal


Excuse me, have you timewarped here from 2005 or something?


----------



## Athlonite (Nov 11, 2020)

Zach_01 said:


> That is interesting...
> Can you show to us a screenshot of HWiNFO sensors mode, full window, during R20 MultiT?


Sure, hold on a few moments while I get it up and running... and here ya go.

As you can see from the screenshot, CPU temp and package power are 77°C and 130W.


----------



## EarthDog (Nov 11, 2020)

The 5950X at stock peaked at 74C during the single-run MT test (Corsair H115i on auto, 25C ambient, open test bench). It ran at 3.8 to 3.85 GHz. I'm running the 10-minute test now and will post pics in a while.

Temps seem fine out of the gate. I don't expect to see them increase much, since the test runs so fast and there's plenty of downtime in the 10-minute test. No issues with mine, as expected.

EDIT: Nope.. it was the same 74C. No issues here.

Coretemp shows 122W when crunching MT.


----------



## PooPipeBoy (Nov 11, 2020)

Athlonite said:


> sure hold on few moments and I get it up and running and here ya go
> 
> as you can see from the screenshot CPU temps and Package power are 77°c and 130W



How do you get HWiNFO64 to show wattage? I guess the current version isn't updated for Zen 3 yet.

Also, something odd I've noticed while running benchmarks this morning: my maximum temperatures are 10C lower than what I was seeing yesterday. Not sure why; ambient temps haven't changed much.






According to CoreTemp my CPU package is using around 105 watts in Cinebench R20, which drops down to 17 watts at idle:



 



EDIT: I ran Cinebench R20 again and now the max temperature is back up at 89C, running 4,415MHz on all cores. I guess it's a Precision Boost thing that kicks in randomly and raises the temperature by 10C.


----------



## Reonu (Nov 12, 2020)

Any news on this? My 5800X arrives next week and I'm really nervous lol. I hope I get a good one.


----------



## ratirt (Nov 12, 2020)

PooPipeBoy said:


> EDIT: I ran Cinebench R20 again and now the max temperature is back up at 89C and running 4,415MHz all cores. I guess it's a Precision Boost thing that kicks in randomly and increases temperature by 10C.


Is that 89C a die or a package temp? You can always double-check the thermal paste, and if you used the one that comes with the cooler, you can replace it with something better-performing like Thermal Grizzly Kryonaut.
Here is something for reference.


----------



## Zach_01 (Nov 12, 2020)

ratirt said:


> Is that 89C a die or a package temp? You can always double-check the thermal paste, and if you used the one that comes with the cooler, you can replace it with something better-performing like Thermal Grizzly Kryonaut.
> Here is something for reference.


There is no package temp on Zen 2/3. All temps are die readings from different locations and/or averages.
You can see them in HWiNFO:

1. CPU (Tctl/Tdie)
2. CPU Die (average)
3. CPU CCD1 (Tdie)

=

1. Hotspot (absolute max), switching to whichever sensor is hottest.
2. Average of all sensors.
3. One sensor from a specific location.
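The mapping above is easy to illustrate (a toy sketch with made-up per-sensor die temperatures; the real sensor count and layout vary by chip):

```python
# Hypothetical readings from the on-die temperature sensors; assume the
# first entry happens to be the dedicated CCD1 sensor.
sensors_c = [74.5, 71.0, 68.25, 83.0, 79.5]

tctl_tdie = max(sensors_c)                    # 1. hotspot: hottest sensor right now
die_avg   = sum(sensors_c) / len(sensors_c)   # 2. average of all die sensors
ccd1_tdie = sensors_c[0]                      # 3. a single fixed-location sensor

print(tctl_tdie, die_avg, ccd1_tdie)          # 83.0 75.25 74.5
```

This is also why Tctl/Tdie looks so spiky in HWiNFO: a single core lighting up briefly moves the max, even while the average barely changes.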


----------



## mtcn77 (Nov 12, 2020)

Nobody with a 3700X needs a '5800X'; they are made on the same process node.


----------



## ratirt (Nov 12, 2020)

mtcn77 said:


> Nobody with a 3700X needs a '5800X'; they are made on the same process node.


There is a difference in performance between the two despite the shared process node.


Zach_01 said:


> There is no package temp in ZEN2/3. All temps is die from different locations and/or averages.
> You can see them on HWiNFO.
> 
> 1. CPU (Tctl/Tdie)
> ...


You are right. Not sure what I was thinking here.


----------



## mtcn77 (Nov 12, 2020)

ratirt said:


> There is a difference in performance between the two despite process node.


Yeah, so why pay 36% more for just 8% more? It is not a requirement.


----------



## DuxCro (Nov 12, 2020)

I did a 10-minute run with the new Cinebench R23 on my 5900X, everything at default. You can see the temperatures and voltages. I also tried setting 4.6GHz in Ryzen Master with the core voltage at 1.325V. It completed the full run of 18 passes in just over 10 minutes, but then temperatures rose to almost 90 Celsius.


----------



## ratirt (Nov 12, 2020)

mtcn77 said:


> Yeah, so why pay 36% more for just 8% more? It is not a requirement.


You mean 36% more cash for just 8% more performance, I assume. I think there is a bigger gap than 8% between the 5800X and 3700X if you take into account the general performance difference. Individuals may still find the 5800X a good upgrade depending on their needs and what they are going to use the CPU for.
My answer here would be:
There are people who will find the 5800X upgrade worth the money because of what they will be doing with it.
Others will just buy it because it is a new processor, with no real idea what the performance difference will be in the applications they use.
I'm sure there are also people, like you, who find the upgrade from a 3700X to a 5800X pointless, because there will be no difference in performance (or a negligible improvement) for whatever they want to use the CPU for. One example here might be playing at 4K: absolutely no difference between the 3700X and 5800X.
So: self-evaluate whether the upgrade is worth the money.
To be honest, I'm thinking about upgrading my 2700X to a 5900X, and since I will play at 4K anyway, maybe the upgrade to the 5000 series is pointless for me as well. On the other hand, the 5000 series offers SAM, which boosts GPU performance with the 6000-series GPUs, so it may turn out to be worth upgrading anyway. Either way, I'm still waiting for reviews, and I will make a decision after evaluating every aspect of the upgrade and the performance gains.


----------



## neatfeatguy (Nov 12, 2020)

mtcn77 said:


> Yeah, so why pay 36% for just 8%? It is not a requirement.



What's the point in upgrading to a new version of a cell phone for every release that comes out?
To me, none. It's stupid and a waste.
To others, they simply cannot live without having the latest and greatest.

If folks want to upgrade from a solid CPU to another solid CPU that offers a small performance boost, that's their prerogative.


----------



## Deleted member 24505 (Nov 12, 2020)

neatfeatguy said:


> What's the point in upgrading to a new version of a cell phone for every release that comes out?
> To me, none. It's stupid and a waste.
> To others, they simply cannot live without having the latest and greatest.
> 
> If folks want to upgrade from a solid CPU to another solid CPU that offers a small performance boost, that's their prerogative.



Some people have money to piss up the wall


----------



## DuxCro (Nov 12, 2020)

tigger said:


> Some people have money to piss up the wall


I upgraded from an R5 3600 to an R9 5900X because of rendering. If I used my PC only for gaming, I wouldn't even consider "upgrading" to Zen 3. If you look at the performance numbers in TPU's 5900X review, the gaming gains from the R5 3600 to the R9 5900X are really minor. So if anyone is thinking of upgrading from Zen 2 to Zen 3 just for gaming... don't do it. Unless you really have money to burn.


----------



## Chrispy_ (Nov 12, 2020)

DuxCro said:


> I upgraded from an R5 3600 to an R9 5900X because of rendering. If I used my PC only for gaming, I wouldn't even consider "upgrading" to Zen 3. If you look at the performance numbers in TPU's 5900X review, the gaming gains from the R5 3600 to the R9 5900X are really minor. So if anyone is thinking of upgrading from Zen 2 to Zen 3 just for gaming... don't do it. Unless you really have money to burn.


Yeah, the number of games that aren't GPU-limited is tiny - it's important to remember that very few people actually game on 360Hz monitors. 1440p@144Hz is about the most demanding you'll get, and even then a stock i5 or 3600 will let the GPU get close to its limits, barring a few rare exceptions.


----------



## Zach_01 (Nov 12, 2020)

PooPipeBoy said:


> How do you get HWInfo64 to show wattage? I guess the current version isn't updated for Zen 3 yet.
> 
> Also something odd I've noticed with running benchmarks this morning is that my maximum temperatures are 10C lower than what I was reporting yesterday. Not sure why. Ambient temps haven't changed much.
> 
> ...


About power consumption: get the latest beta version, it could be a bug.





Asus X570-P / Ryzen 5950X Missing Wattage - www.hwinfo.com
"I've got an Asus X570-P (BIOS 2812) with a Ryzen 5950X. I noticed that when I run HWInfo I don't see any power draw under the CPU. I have cleared all preferences and reinstalled HWI to make sure that I haven't hidden anything."




You can find it here:

Announcements (important announcements, new releases) - www.hwinfo.com




Can you please give us the value (%) of "Power Reporting Deviation (Accuracy)" along with "CPU PPT" when you run R20 multi-threaded? After you update to the latest version and have the "CPU PPT" sensor.


----------



## Deleted member 24505 (Nov 12, 2020)

DuxCro said:


> I upgraded from an R5 3600 to an R9 5900X because of rendering. If I used my PC only for gaming, I wouldn't even consider "upgrading" to Zen 3. If you look at the performance numbers in TPU's 5900X review, the gaming gains from the R5 3600 to the R9 5900X are really minor. So if anyone is thinking of upgrading from Zen 2 to Zen 3 just for gaming... don't do it. Unless you really have money to burn.



There is a certain number of people who, I think, get off on having a really expensive PC they do fuck all with apart from browsing and posting about how much better it is than anyone else's. IIRC there was certainly a contingent on TPU that would upgrade to the newest GPU and CPU every generation, even buying new motherboards too, even when it was an almost pointless waste of money. I don't know if TPU still has a population of these people or not, but there is certainly a certain amount of bawling because they can't get new Nvidia GPUs or AMD CPUs. I wonder if these people are doing the upgrades because they need them, or because, like I said, they have money to piss up the wall.


----------



## PooPipeBoy (Nov 12, 2020)

DuxCro said:


> I upgraded from an R5 3600 to an R9 5900X because of rendering. If I used my PC only for gaming, I wouldn't even consider "upgrading" to Zen 3. If you look at the performance numbers in TPU's 5900X review, the gaming gains from the R5 3600 to the R9 5900X are really minor. So if anyone is thinking of upgrading from Zen 2 to Zen 3 just for gaming... don't do it. Unless you really have money to burn.



When I upgraded from a 3100 to a 5600X the biggest performance difference was in Minecraft, believe it or not. Especially modded. I saw a 63% increase in average FPS in a modpack that's highly single-threaded (only 2 threads active), in a saved world that I've spent five years building out. It went from struggling to hit 60fps to a buttery smooth 100fps. Lots of IPC and a unified cache are a godsend because the game's optimisation is completely laughable.

Obviously I'm GPU limited in most cases by the GTX 1060, but the point is that Zen 3 makes a big difference in CPU-limited scenarios that demand IPC.



Zach_01 said:


> About power consumption get the latest beta version, it could be a bug.
> 
> 
> 
> ...



Interesting, I'll see if I can download the beta then.

UPDATE: @Zach_01 I got the new results with wattage readings:


----------



## Glaceon (Nov 13, 2020)

According to Robert Hallock, Zen 3 is intended to get that hot:

"Yes. I want to be clear with everyone that AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as *typical and by design* for full load conditions. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons.

Is it the same as Zen 2 or our competitor? No. But that doesn't mean something is "wrong." These parts are running exactly as-designed, producing the performance results we intend."

5800X is hotter than 5600X and 5900X due to 8 cores on a single CCX unlike the other two CPUs.


----------



## Deleted member 24505 (Nov 13, 2020)

Glaceon said:


> According to Robert Hallock, Zen 3 is intended to get that hot:
> 
> "Yes. I want to be clear with everyone that AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as *typical and by design* for full load conditions. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons.
> 
> ...



So basically Zen 3 requires a better cooler.


----------



## ratirt (Nov 13, 2020)

tigger said:


> So basically Zen 3 requires a better cooler.


It doesn't, that's what the dude said. He explained why the temps are higher, why they decided to let them go higher, and why it is safe.


----------



## Deleted member 24505 (Nov 13, 2020)

ratirt said:


> It doesn't, that's what the dude said. He explained why the temps are higher, why they decided to let them go higher, and why it is safe.



Well, I am pretty sure that if my CPU was going up to 90C with my cooler, I would get a better one, even if it is apparently OK. Just because the guy said it is OK does not mean everyone with a Ryzen is going to allow it - otherwise why doesn't everyone with one just use the stock cooler?


----------



## ratirt (Nov 13, 2020)

tigger said:


> Well, I am pretty sure that if my CPU was going up to 90C with my cooler, I would get a better one, even if it is apparently OK. Just because the guy said it is OK does not mean everyone with a Ryzen is going to allow it - otherwise why doesn't everyone with one just use the stock cooler?


You can get a better one, but it's not going to harm your CPU or anything if it does get to 90C. I have liquid cooling on my CPU because I don't like high temps, but others can be OK with what they get, and since it is fine they may keep it that way.


----------



## PooPipeBoy (Nov 13, 2020)

ratirt said:


> Is that 89C a die or a package temp? You can always double-check the thermal paste, and if you used the one that came with the cooler, you can replace it with something better-performing like Thermal Grizzly Kryonaut.
> Here is something for reference.



That should be the die temperature (Tdie). I don't have any software right now that can read Zen 3 temperatures on a per-core basis, so I'm just stuck with that for now.

Currently I'm using GD900 as my thermal paste of choice and it has worked well over the last year. I had a few goes at pasting the 5600X to make sure I got it right. I re-pasted my GTX 1060 with it and the temperatures have been great, so I haven't bothered to invest in a different thermal paste since.


----------



## ratirt (Nov 13, 2020)

PooPipeBoy said:


> That should be the die temperature (Tdie). I don't have any software right now that can read Zen 3 temperatures on a per-core basis, so I'm just stuck with that for now.
> 
> Currently I'm using GD900 as my thermal paste of choice and it has worked well over the last year. I had a few goes at pasting the 5600X to make sure I got it right. I re-pasted my GTX 1060 with it and the temperatures have been great, so I haven't bothered to invest in a different thermal paste since.


To be honest, there were tests showing the differences among a variety of thermal compounds. Maybe you can look into that to see if there is a difference between yours and a higher-quality one. I always use Thermal Grizzly. The reviews for the Ryzen 5000 series showed that the temps are basically what the 3000 series was reporting, except for the 5800X - that one was always a tad higher. Are you absolutely sure the thermal paste sits well on the CPU? Maybe you should double-check. I'd start with this just to make sure.
BTW, did you try lowering the Vcore? Setting it manually? The Vcore is damn high for the 5800X. Try lowering it, see if the CPU is stable, then measure the temps.


----------



## oobymach (Nov 13, 2020)

Octopuss said:


> Excuse me, have you timewarped here from 2005 or something?


I shouldn't have to explain on a computer forum that SSDs have finite writes - or do you think they have an unlimited write lifespan like HDDs? If you think your SSD has an infinite write lifespan, start running write speed tests and don't stop, then tell me you don't want to minimize drive writes.









7 mistakes easily 'kill' SSDs - tipsmake.com

The SSD Endurance Experiment: They're all dead - techreport.com


----------



## birdie (Nov 13, 2020)

A comment from AMD - I'd love to add it to the original post but editing it is no longer possible:



> *AMD views temps up to 90C (5800X/5900X/5950X) and 95C (5600X) as typical and by design for full load conditions*. Having a higher maximum temperature supported by the silicon and firmware allows the CPU to pursue higher and longer boost performance before the algorithm pulls back for thermal reasons. Is it the same as Zen 2 or our competitor? No. But that doesn't mean something is "wrong." These parts are running exactly as-designed, producing the performance results we intend.



Check the attachments as well.



Athlonite said:


> sure hold on few moments and I get it up and running and here ya go
> 
> as you can see from the screenshot CPU temps and Package power are 77°c and 130W



You either have PBO enabled or PPT limits increased. Period.



neatfeatguy said:


> What's the point in upgrading to a new version of a cell phone for every release that comes out?
> To me, none. It's stupid and a waste.
> To others, they simply cannot live without having the latest and greatest.
> 
> If folks want to upgrade from a solid CPU to another solid CPU that offers a small performance boost, that's their prerogative.



New phones normally have much better cameras, though I have to agree that generational improvements are usually not worth it; upgrading every 2-3 years, however, makes a night-and-day difference in quality. And then there are cases where the smartphone vendor even decreases the quality of its products: consider the OnePlus 6 with a telephoto camera vs. the OnePlus 8T without it. And the latter costs a lot more.


----------



## Zach_01 (Nov 13, 2020)

PooPipeBoy said:


> That should be the die temperature (Tdie). I don't have any software right now that can read Zen 3 temperatures on a per-core basis, so I'm just stuck with that for now.
> 
> Currently I'm using GD900 as my thermal paste of choice and it has worked well over the last year. I had a few goes at pasting the 5600X to make sure I got it right. I re-pasted my GTX 1060 with it and the temperatures have been great, so I haven't bothered to invest in a different thermal paste since.


There won't be any per-core temperatures for Zen 3, just as was the case for Zen 2. It's pointless anyway. The CPU (Tctl/Tdie) value is all you need: it always reports the highest spot temperature of any core across all cores and CCDs, switching instantly to whichever sensor reads highest.
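The hotspot behaviour described here is easy to picture: the reported value is simply the maximum over all per-core spot sensors at each sample, which is why it can jump instantly when a boosting thread hops between cores. A minimal sketch with made-up sensor values:

```python
# Sketch of how a hotspot reading like Tctl/Tdie behaves: the reported value
# is the hottest spot sensor at each sample. Sensor readings are illustrative.
def hotspot(samples: list[list[float]]) -> list[float]:
    """Per-sample hotspot temperature across all core/CCD spot sensors."""
    return [max(sample) for sample in samples]

readings = [
    [42.0, 38.5, 40.1, 39.0],   # light load on core 0
    [44.5, 61.0, 41.2, 40.3],   # a boosting thread hops to core 1
]
print(hotspot(readings))  # [42.0, 61.0]
```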



birdie said:


> You either have PBO enabled or PPT limits increased. Period.


Of course @Athlonite has increased limits...
Look at the CPU PPT limit at 3.2% with that 130W PPT...

Also check his CPU EDC/TDC values... reporting exactly the same... and the missing EDC limit? It's fishy.
He has some weird manual settings in PBO, and also thinks we haven't done our homework on Ryzen 3000.
It's been a while since I did those weird PBO settings... setting EDC to 1 Amp, for example...

He's probably using a high PBO scalar too.


----------



## Chrispy_ (Nov 13, 2020)

Zach_01 said:


> There won't be any per-core temperatures for Zen 3, just as was the case for Zen 2. It's pointless anyway. The CPU (Tctl/Tdie) value is all you need: it always reports the highest spot temperature of any core across all cores and CCDs, switching instantly to whichever sensor reads highest.
> 
> 
> Of course @Athlonite has increased limits...
> ...


Setting custom PBO limits to AMD's official PPT/TDC/EDC stock values is one of the many ways to try and prevent motherboards from misreporting power to the CPU on Auto settings. Everyone with a Zen 2 or Zen 3 CPU should be aware of the following "stock" values that motherboard vendors love to misreport and fudge in an attempt to seem "faster" (reality: they run at very similar speeds but far less efficiently):

65W TDP CPU = 88W PPT, 60A TDC, 90A EDC
105W TDP CPU = 142W PPT, 95A TDC, 140A EDC

If your CPU is toasty and you don't like it, try setting a manual PBO with those values to see if anything improves. If there's a perceptible difference, your motherboard is made of LIES.
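For reference, the two wattage figures quoted here (88W and 142W) are the package power limits (PPT) for the 65W and 105W TDP classes, and both happen to fit a 1.35x-TDP ratio. A small sketch of that relationship - note the 1.35x factor is an observation that fits both published rows, not an official AMD formula:

```python
# Commonly cited stock AM4 package limits, as listed in the post above.
# TDP class (watts) -> (PPT watts, TDC amps, EDC amps)
STOCK_LIMITS = {
    65:  (88, 60, 90),
    105: (142, 95, 140),
}

def ppt_from_tdp(tdp_watts: int) -> int:
    """Approximate PPT as 1.35x TDP (rounds to the published values)."""
    return round(tdp_watts * 1.35)

for tdp, (ppt, tdc, edc) in STOCK_LIMITS.items():
    print(f"{tdp}W class: PPT={ppt}W TDC={tdc}A EDC={edc}A (1.35x TDP = {ppt_from_tdp(tdp)}W)")
```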


----------



## Zach_01 (Nov 13, 2020)

Chrispy_ said:


> Setting custom PBO limits to AMD's official PPT/TDC/EDC stock values is one of the many ways to try and prevent motherboards from misreporting power to the CPU on Auto settings. Everyone with a Zen 2 or Zen 3 CPU should be aware of the following "stock" values that motherboard vendors love to misreport and fudge in an attempt to seem "faster" (reality: they run at very similar speeds but far less efficiently):
> 
> 65W TDP CPU = 88W PPT, 60A TDC, 90A EDC
> 105W TDP CPU = 142W PPT, 95A TDC, 140A EDC
> If your CPU is toasty and you don't like it, try setting a manual PBO with those values to see if anything improves. If there's a perceptible difference, your motherboard is made of LIES.


You use "Power Reporting Deviation (Accuracy)" to evaluate the board. In order to do that you must have CPU PB/PBO on Auto (or just enabled) and run a 100% load. No other manual CPU settings; everything must be on Auto.
We know this:

65W TDP CPU = 88W PPT, 60A TDC, 90A EDC
105W TDP CPU = 142W PPT, 95A TDC, 140A EDC

Athlonite was just trying to convince us (in vain) that a 3700X draws 130W on stock settings... yeah, right!
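As I understand HWiNFO's metric, a deviation under 100% means the board's telemetry under-reports power to the CPU, so the true draw is roughly the reported value divided by the deviation. A hedged sketch with illustrative numbers (not readings from this thread):

```python
# Rough interpretation of HWiNFO's "Power Reporting Deviation": values below
# 100% suggest the board under-reports power, letting the CPU draw more than
# its nominal limit. Inputs below are illustrative, not measured.
def estimated_actual_power(reported_watts: float, deviation_pct: float) -> float:
    """Estimate true package power from reported power and deviation (%)."""
    return reported_watts / (deviation_pct / 100.0)

# Board reports 88W but deviates at 80%: the CPU is likely pulling ~110W.
print(estimated_actual_power(88.0, 80.0))
```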


----------



## Chrispy_ (Nov 13, 2020)

Zach_01 said:


> ...the 3700X is drawing 130W on stock settings... yeah right!


130W is not 88W. I don't know how else you can put that to him


----------



## Octopuss (Nov 13, 2020)

oobymach said:


> I shouldn't have to explain on a computer forum that SSDs have finite writes - or do you think they have an unlimited write lifespan like HDDs? If you think your SSD has an infinite write lifespan, start running write speed tests and don't stop, then tell me you don't want to minimize drive writes.
> 
> 
> 
> ...


Obviously you didn't read the article in the second link.
You're the IT equivalent of an antivaxxer or something equally bizarre.


----------



## Zach_01 (Nov 13, 2020)

Chrispy_ said:


> 130W is not 88W. I don't know how else you can put that to him


I/We don't have to.
He probably knows it, but trying to... do what exactly?
I don't even give a tiny rat's arse.


----------



## oobymach (Nov 13, 2020)

Octopuss said:


> Obviously you didn't read the article in the second link.
> You're the IT equivalent of an antivaxxer or something equally bizarre.


Actually I did - he tortured the drives to death with nothing but writes. What part of that was unclear to you?

Maybe if you only use your computer for a few minutes a day to check sports scores or the lottery, longevity isn't an issue (also, most people are addicted to phones now), but for people who edit HD video, for example, writes to a single SSD can become an issue. It takes a fair amount of writes, but make no mistake: simply writing to an SSD is shortening its lifespan. Reading/loading does not.

Also I appear to be a fat duck talking to a moose named octopus...


----------



## EarthDog (Nov 13, 2020)

oobymach said:


> Actually I did - he tortured the drives to death with nothing but writes. What part of that was unclear to you?
> 
> Maybe if you use your computer for 4 minutes a day to check sports scores, longevity isn't an issue, but for those of us who edit HD video, writes to a single drive can become an issue. And again, you can kill an SSD merely by writing to it. It takes a fair amount of writes, but make no mistake: simply writing to an SSD is shortening its lifespan.


Writes are limited, indeed. However, most users (not Makaveli, lol) don't need to worry about writes. As your link shows, it took several months and petabytes of data to kill those drives. That is not remotely a real-world situation. Worrying about writes on an SSD is a NON-ISSUE for 99% of people.


----------



## oobymach (Nov 13, 2020)

EarthDog said:


> Writes are limited, indeed. However, most users (not Makaveli, lol) don't need to worry about writes. As your link shows, it took several months and petabytes of data to kill those drives. That is not remotely a real-world situation. Worrying about writes on an SSD is a NON-ISSUE for 99% of people.


Indeed, an average user is not likely to run into any issues; however, I have killed an SSD before, using it as the only drive in a computer. A 240GB SSD.


----------



## Chrispy_ (Nov 13, 2020)

EarthDog said:


> Writes are limited, indeed. However, most users (not Makaveli, lol) don't need to worry about writes. As your link shows, it took several months and petabytes of data to kill those drives. That is not remotely a real-world situation. Worrying about writes on an SSD is a NON-ISSUE for 99% of people.


Somewhere way back on the Techreport forums I posted an update on my Samsung 840, and people called out the insane amount of data I'd written to my 1-year-old 840 SSD. I'd been using it as an ESX swapfile/scratch dump for a synthetic VMware testing environment, and I _still_ only wrote 240TB to it in a year, even running a silly number of VMs with dumb, unrealistic, data-heavy workloads just to get worst-case-scenario data for work.

No consumer needs to worry about SSD lifespan - at least not with a half-decent TLC drive with a DRAM cache. All bets are off with ultra-budget DRAM-less QLC, but if you buy a godawful piece-of-shit SSD like that just to save 10% on price and then use it for write-heavy workloads, you should expect all the trouble you deserve for such stupidity/ignorance.


----------



## TheoneandonlyMrK (Nov 13, 2020)

birdie said:


> A comment from AMD - I'd love to add it to the original post but editing it is no longer possible:
> 
> 
> 
> ...


So the whole thread's pointless, AMD said so.

Ask a mod he may open edits for you.


----------



## birdie (Nov 13, 2020)

theoneandonlymrk said:


> So the whole thread's pointless, AMD said so.
> 
> Ask a mod, he may open edits for you.



Not really pointless. As far as I can see there's a huge, yet to be explained, variability in 5800X load temps. Some people are OK (with temps slightly higher than those of the 3800X/XT); others say they see temps above 90C with *CPU throttling*. This is far from settled. I'm looking at purchasing this CPU and I don't want to get one from a bad batch (if that's indeed the case).


----------



## EarthDog (Nov 13, 2020)

oobymach said:


> Indeed, an average user is not likely to run into any issues; however, I have killed an SSD before, using it as the only drive in a computer. A 240GB SSD.


Because of writes? Or did the controller crap out? Things die. It didn't die because it was the only SSD in the system.

Come on guys... the information is all there... killing a drive with writes is incredibly difficult for 99% of users. If you're grinding several GB/day, sure... but so few do that that it's just not a worry. My pagefile is set static to 2GB (32GB of RAM).

Anyway, this isn't about SSDs... so I'll leave it at that.


----------



## Zach_01 (Nov 13, 2020)

EarthDog said:


> Because of writes? Or did the controller crap out? Things die. It didn't die because it was the only SSD in the system.
> 
> Come on guys... the information is all there... killing a drive with writes is incredibly difficult for 99% of users. If you're grinding several GB/day, sure... but so few do that that it's just not a worry. My pagefile is set static to 2GB (32GB of RAM).
> 
> Anyway, this isn't about SSDs... so I'll leave it at that.


If I remember correctly from last year, he had an old SSD that died from a lot of writes several years ago.
But you're right, this isn't an SSD thread...


----------



## bpgt64 (Nov 13, 2020)

I am getting ready to throw a 5800X against a dual 360mm radiator solution plus an EK Velocity waterblock, so we'll see how it heats up... (Normally this build has a GPU waterblock in the mix, but there are zero waterblocks for the FTW3 3080 atm.)


----------



## DuxCro (Nov 13, 2020)

I was messing around today with undervolting my R9 5900X. I think I'll leave it at 1.25V and 4.3GHz. It brings a very nice reduction in power consumption and temperature with a really minimal performance difference in games.
Edit: actually, I lowered the voltage some more, to 1.225V. Temperatures went down even more, and power consumption as well. It seems stable after Cinebench R23 and playing some Mortal Shell. Will see over time.
If the TPU reviewer needed 1.4V for 4.5GHz stability, then I think 1.225V for 4.3GHz is fantastic.
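The savings from an undervolt like this can be ballparked with the standard CMOS dynamic-power approximation, P proportional to f * V^2. It ignores static leakage, so treat it as a rough estimate rather than a prediction of actual package power:

```python
# Back-of-envelope dynamic power scaling, P ~ f * V^2 (a standard CMOS
# approximation that ignores leakage, so only a rough estimate).
def relative_power(f0: float, v0: float, f1: float, v1: float) -> float:
    """Power at (f1 GHz, v1 V) relative to (f0 GHz, v0 V)."""
    return (f1 / f0) * (v1 / v0) ** 2

# Comparing the 1.4V/4.5GHz figure quoted from the TPU review with the
# 1.225V/4.3GHz undervolt above:
print(f"{relative_power(4.5, 1.4, 4.3, 1.225):.2f}")  # 0.73
```

By this estimate the undervolt cuts dynamic power by roughly a quarter, which is consistent with the "very nice reduction" reported.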


----------



## TheoneandonlyMrK (Nov 13, 2020)

bpgt64 said:


> I am getting ready to throw a 5800X against a dual 360mm radiator solution plus an EK Velocity waterblock, so we'll see how it heats up... (Normally this build has a GPU waterblock in the mix, but there are zero waterblocks for the FTW3 3080 atm.)


It'll heat up just as well as ever, I can vouch for that - it's just a lot of heat in a small package (see my system specs, I'm there). It gets hot because it was designed to crack on with work when asked, and to do so to the limits designed into it.
The only thing extensive cooling will do is allow higher sustained clocks, though I am talking about flat-out loads for days, not minutes, so most wouldn't see such behaviour.
Every generation of Ryzen behaved the same too - I've tried them.

@birdie yes, indeed pointless. Next up, Intel's 11xxx series release is due - I wonder if they'll run hot, hmm!?


----------



## oobymach (Nov 13, 2020)

EarthDog said:


> Because of writes? Or did the controller crap out? Things die. It didn't die becuase it was the only SSD in the system.
> 
> Come on guys... the information is all there...killing a drive with writes is incredibly difficult for 99% of users. If you're grinding several GB /day, sure... but so few do that it's just not a worry. My pagefile is set static to 2GB (32GB of RAM).
> 
> Anyway, this isn't about SSDs... so I'll leave it at that.


Sorry for going off topic. I think the controller on the drive was bad - the model was known for it, I think. A good SSD as the only drive should last at least 5 years of pretty much continual use, if not more.


----------



## Athlonite (Nov 14, 2020)

Zach_01 said:


> There won't be any per-core temperatures for Zen 3, just as was the case for Zen 2. It's pointless anyway. The CPU (Tctl/Tdie) value is all you need: it always reports the highest spot temperature of any core across all cores and CCDs, switching instantly to whichever sensor reads highest.
> 
> 
> Of course @Athlonite has increased limits...
> ...




FYI, there are no fishy settings in my BIOS. PBO/EDC limits are just left on Auto, so whatever the system thinks it can do, it will do. In fact, the only setting changed was DOCP for my RAM timings; everything else, as I've said already, is on Auto.


----------



## PooPipeBoy (Nov 14, 2020)

DuxCro said:


> I was messing around today with undervolting my R9 5900X. I think I'll leave it at 1.25V and 4.3GHz. It brings a very nice reduction in power consumption and temperature with a really minimal performance difference in games.
> Edit: actually, I lowered the voltage some more, to 1.225V. Temperatures went down even more, and power consumption as well. It seems stable after Cinebench R23 and playing some Mortal Shell. Will see over time.
> If the TPU reviewer needed 1.4V for 4.5GHz stability, then I think 1.225V for 4.3GHz is fantastic.



I would recommend just disabling PBO and leaving Vcore on Auto. It knocked 12C off my maximum die temperature, but only reduced Cinebench R20 scores by 10% for multi and 2% for single. Not a bad deal.


----------



## bpgt64 (Nov 14, 2020)

So, granted my system's cooling is gross overkill, I am hitting 4.75GHz at 1.2V Vcore and not going past 80C. That's using an EK Velocity AM4 block; I'm switching to an Optimus WC Foundation AM4 CPU block soon. Also using a set of G.Skill Ripjaws DDR4-3800 CAS 16, FCLK 1900. I got 6200 in Cinebench R20, which is 300-400 points shy of a 16-core 1950X.


----------



## Totally (Nov 14, 2020)

Seems like PEBKAC - one was complaining about hitting 50-60C load temps.


----------



## DuxCro (Nov 14, 2020)

PooPipeBoy said:


> I would recommend just disabling PBO and leaving Vcore on Auto. It knocked 12C off my maximum die temperature, but only reduced Cinebench R20 scores by 10% for multi and 2% for single. Not a bad deal.


Won't that get the CPU to run at its 3.7GHz base frequency? That's too low. With Vcore at 1.225V and the multiplier at 43, I get almost identical Cinebench results as with everything on auto.


----------



## PooPipeBoy (Nov 14, 2020)

DuxCro said:


> Won't that get the CPU to run at its 3.7GHz base frequency? That's too low. With Vcore at 1.225V and the multiplier at 43, I get almost identical Cinebench results as with everything on auto.



It drops my all-core frequency from 4.41 to 4.06GHz, but the single-core frequency of 4.6GHz stays the same. So really it's multi-core rendering where disabling PBO has the largest impact on performance and temperatures.


----------



## Makaveli (Nov 18, 2020)

oobymach said:


> I shouldn't have to explain on a computer forum that SSDs have finite writes - or do you think they have an unlimited write lifespan like HDDs? If you think your SSD has an infinite write lifespan, start running write speed tests and don't stop, then tell me you don't want to minimize drive writes.
> 
> 
> 
> ...



People were doing this 10 years ago, when we had Intel 80GB G2 SSDs.

You are the first person I've seen post anything like this in 6+ years; with the endurance of modern drives this practice is not really necessary anymore.

So I will ask the same question: did you hit 88 MPH?


----------



## oobymach (Nov 19, 2020)

Makaveli said:


> People were doing this 10 years ago, when we had Intel 80GB G2 SSDs.
> 
> You are the first person I've seen post anything like this in 6+ years; with the endurance of modern drives this practice is not really necessary anymore.
> 
> So I will ask the same question: did you hit 88 MPH?


If you think a 5-year lifespan is all you need and you plan on throwing your computer away and getting a new one in 5 years with all new drives, then no worries, but with my setup each drive should last twice that and survive multiple builds. I don't think minimizing writes to an SSD is a bad thing; you apparently don't care.

Since building this setup I've diverted over a terabyte from my main SSD, not including downloads and temp. My main drive is already at over 5TB of writes on a drive with a 300TB write life, or 1/60th of its lifespan, and it's fairly new - and that's _with_ a couple of HDDs for temp, movies, downloads, and such.

If you want your SSD to last longer you can follow the steps below; if you think SSDs have infinite write cycles and you came here from the future like the Super Saiyan kid, go nuts - fill your drive to capacity and run write speed tests on it all the time. It's not my drive, I don't care what you do with it.
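The 1/60th figure above is just rated endurance (TBW) versus writes so far, and the same arithmetic gives a remaining-lifespan estimate. A sketch using the post's numbers (5TB written, 300TB rating); the 10TB/year write rate is a made-up assumption for illustration:

```python
# Rough SSD wear estimate from rated endurance (TBW) and observed writes,
# using the 5TB-written / 300TB-rated figures from the post above.
# The 10 TB/year write rate is an assumed example, not a measurement.
def years_remaining(tbw_rating_tb: float, written_tb: float, tb_per_year: float) -> float:
    """Years until the TBW rating is reached at the current write rate."""
    return (tbw_rating_tb - written_tb) / tb_per_year

used_fraction = 5 / 300   # ~1.7% of rated endurance used
print(f"used {used_fraction:.1%}, ~{years_remaining(300, 5, 10):.0f} years left at 10TB/yr")
```

At any remotely typical desktop write rate, the drive hits obsolescence long before its TBW rating, which is the point the other posters are making.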









How to Maximize SSD Life Span & Performance; Avoid These 7 Mistakes - mashtips.com


----------



## EarthDog (Nov 19, 2020)

You guys are still droning on about SSD life? Just use your damn drives already... lol. This isn't 2010, people. Move on (also not the thread for it).


----------



## oobymach (Nov 19, 2020)

I'm not the one who keeps bringing it up, and there are still people out there doing defrags and shit. I'm just trying to help. People seem to think it's funny that their computer has a limited write life, or maybe they like throwing money away - not my business. I probably demand more from my investment than they do.


----------



## EarthDog (Nov 19, 2020)

oobymach said:


> I'm not the one who keeps bringing it up, and there are still people out there doing defrags and shit. I'm just trying to help. People seem to think it's funny that their computer has a limited write life, or maybe they like throwing money away - not my business. I probably demand more from my investment than they do.


You keep responding, however.

If you have Windows 8 or newer, you aren't defragging the SSD. If you're manually doing it, you're misinformed. TRIM is also a automatic function. My pagefile has ALWAYS been on an SSD.. why wouldn't it? Why the heck would anyone put a PF on an SSD? When used it defeats the purpose of an SSD in the first place. Temp files? SSD. Indexing, always been on. Writes aren't an issue (for 99% of users)... Mmkay? 

I don't like throwing money away either. I'm just a realist about how my SSDs are used (like the overwhelming majority of users) and don't go to any lengths to ensure longer life. 99% of users won't kill their drives with writes, even well past the warranty.



oobymach said:


> In these days of SSDs, where writes need to be minimal


Just know this isn't true at all today. A decade ago users needed to worry; today, no. I have no problem with what you do with your system. I'm just informing users that it isn't needed... and hasn't been for several years.

Again, not the place for this....... move along already before the mods step in. I'm out.


----------



## oobymach (Nov 19, 2020)

EarthDog said:


> You keep responding, however.
> 
> If you have Windows 8 or newer, you aren't defragging the SSD. If you're manually doing it, you're misinformed. TRIM is also a automatic function. My pagefile has ALWAYS been on an SSD.. why wouldn't it? Why the heck would anyone put a PF on an SSD? When used it defeats the purpose of an SSD in the first place. Temp files? SSD. Indexing, always been on. Writes aren't an issue (for 99% of users)... Mmkay?
> 
> ...


I used to work tech support for a major computer manufacturer; you'd be surprised how many misinformed people there are. And again, sorry to have gone off topic. Your setup is your own; do what you want with it.


----------



## Makaveli (Nov 19, 2020)

oobymach said:


> If you think a 5 year lifespan is all you need and plan on throwing your computer away and getting a new one in 5 years with all new drives then no worries but my setup each drive should last twice that and should last through multiple builds. I don't think minimizing writes to an ssd is a bad thing, you apparently don't care.
> 
> Since building this setup I've diverted over a terabyte from my main ssd not including downloads and temp. My main drive is already over 5tb of writes on a drive with a 300tb write life or 1/60th its lifespan and it's fairly new and that's _with _a couple hdd's for temp, movies, downloads, and such.
> 
> ...



You are preaching to the wrong guy.

I have two Intel 160GB G2 drives in RAID 0 that are still running and have been up for 10 years. I didn't bother moving swap files or doing what you did. And I will say it again: with the quality of flash and the modern controllers out now, you don't need to do this.

That article isn't telling me anything I didn't already know; I've been doing this for 20 years!

Don't let the picture fool you, I am no child.


----------



## oobymach (Nov 19, 2020)

Makaveli said:


> You are preaching to the wrong guy.
> 
> I have two intel 160GB G2 drives in Raid 0 still in running that have been up for 10 years. I didn't bother moving swap files and doing what you did. And I will say it again the quality of flash out now with modern controllers you don't need to do this.
> 
> ...


We've only really gone from 75-150 TBW to 300-600 TBW drives. The controllers are better, writes are spread across free blocks more effectively, and drive life has been extended on the average consumer-level drive, but you need to spend mucho dinero to get an SSD with petabytes of endurance. It's easier to just reroute your browser and OS temp folders, pagefile and downloads onto an HDD and make your SSD last years longer. Again, sorry for going off topic; I'll try not to do it again.
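For a sense of scale, the endurance numbers above work out to decades of life at ordinary desktop write rates. A back-of-the-envelope sketch in Python (the 30 GB/day write rate is an illustrative assumption, not a figure from this thread):

```python
def years_of_life(tbw_rating_tb: float, writes_gb_per_day: float) -> float:
    """Estimate SSD lifespan in years from its TBW rating and a daily write rate."""
    tb_written_per_year = writes_gb_per_day * 365 / 1024  # GB/day -> TB/year
    return tbw_rating_tb / tb_written_per_year

# A 300 TBW consumer drive written at a fairly heavy 30 GB/day:
print(round(years_of_life(300, 30), 1))  # roughly 28 years
```

Even an older 75 TBW drive would take about 7 years to wear out at that rate, which is part of why write-minimizing tweaks matter much less than they used to.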









Intel® SSD D7-P5500 and D7-P5600 Series Product Brief (www.intel.ca): The Intel® SSD D7-P5500 and D7-P5600 Series deliver predictably fast, high performance to meet today’s increasingly I/O-intensive requirements.


----------



## Makaveli (Nov 19, 2020)

oobymach said:


> We've only really gone from 75-150tbw to 300-600tbw drives. The controllers are better and data is randomized among the open bits more effectively and drive life has been extended on the average consumer level drive but you need to spend mucho dinero to get an ssd with petabytes of endurance. Easier to just reroute your browser and os temp folders, pagefile and downloads onto an hdd, make your ssd last years more. Again sorry for going off topic, I'll try not to do it again.
> 
> 
> 
> ...



Sent you a PM as we are going off topic on this thread.


----------



## Lindatje (Nov 19, 2020)

*Ryzen 5800 owners complain about very high MT load temps*

They have stock coolers, so no problem at all. But it's AMD, so it must be bad... @birdie, AMD is always bad in your eyes.


----------



## Makaveli (Nov 19, 2020)

Lindatje said:


> *Ryzen 5800 owners complain about very high MT load temps*
> 
> They have stock coolers so no problem at all. But it's AMD so it must be bad.... @birdie AMD is always bad in your eyes.



The single-CCX 5800X should have high MT load temps, since it runs at around 4.55 GHz across all cores when loaded. The difference, however, is only a few degrees, not the sky-is-falling affair some are making it out to be.

We're looking at an 8°C difference from the 5900X to the 5800X, and the former has two dies to spread the heat plus lower all-core clocks when loaded. And a 2°C difference when you compare it to the 3800XT.

It's a non-issue with proper cooling.


----------



## gerardfraser (Nov 19, 2020)

5800X here with a cheap $40 USD AIO CPU cooler. Not all owners have a problem.
4800 MHz all-core
CPU voltage: 1.29 V, temperature: 72.5 °C

Cinebench R20: single-core boost on all cores at 5100 MHz; multi drops to 4750 MHz
CPU voltage: varies, temperature: 76.5 °C


----------



## birdie (Nov 20, 2020)

Lindatje said:


> AMD is always bad in your eyes.



I've never said so, sorry. If it had been a single Reddit thread with a single person and I had created a topic like "All Ryzen 5000 CPUs overheat", you could think so. There have been *multiple posts* on Reddit, with at least a *few dozen people* reporting very high temperatures using *not exactly cheap* coolers. So please keep your "birdie doesn't love AMD" to yourself. I don't love a single company in this world; that would be outright idiotic. They are just companies looking for maximum profits, and we are nothing more than customers to them.


----------



## TheoneandonlyMrK (Nov 20, 2020)

birdie said:


> I've never said so, sorry. If it had been a single Reddit thread with a single person, and I had created a topic like "All Ryzen 5000 CPUs overheat" you could think so. There have been *multiple posts* on Reddit, with at least a *few dozen people* reporting very high temperatures using *not exactly cheap* coolers. So, leave your "I don't love AMD" with yourself please. I don't love a single company in this world - that's outright idiotic. They are just companies and they are looking for maximum profits and we are nothing more than customers for them.


Yet you never direct this hyperbolic tension at Nvidia or Intel; is that balanced to you?
Reddit comments aren't worth much. If people really had issues, we would have high-heat forum posts all over; we don't.
Close the thread please, mods.


----------



## birdie (Nov 20, 2020)

theoneandonlymrk said:


> Yet you don't pass hyperbolic tension in Nvidia or Intel's direction, is that balanced to you.
> Reddit comments aren't worth much. If people had issues we would have high heat forum issue posts all over, we don't.
> Close thread please mods.



I have been quite vocal about how low Intel has fallen recently, their inability to execute and bring working 7/10nm nodes to the market. Both ICL and TGL are failures of epic proportions. TGL, for the first time ever, has performance regressions not seen in any previous Intel's uArchs.
I have been quite vocal about how NVIDIA should have chosen TSMC instead of Samsung for its RTX 3000 series.
I have been quite vocal about how NVIDIA has gone overboard with the RTX 3000 power consumption and OC'ing out of the box.
I have been quite vocal about how both NVIDIA/AMD have been ripping us off recently by steadily raising prices for their mid-range cards.

It pains me to see how AMD fans feel personally threatened when someone criticizes their favourite company's products, as if their own lives depended on AMD's prosperity. I don't remember this phenomenon ever occurring in the past, and it's quite cringeworthy actually.

Again, the 5800X overheating is not a witch-hunt; it's an issue that has never before been this widespread. There were isolated reports of Ryzen 3000 CPUs overheating, but I hardly even noticed them.

Over and out.


----------



## TheoneandonlyMrK (Nov 20, 2020)

birdie said:


> I have been quite vocal about how low Intel has fallen recently, their inability to execute and bring working 7/10nm nodes to the market. Both ICL and TGL are failures of epic proportions. TGL, for the first time ever, has performance regressions not seen in any previous Intel's uArchs.
> I have been quite vocal about how NVIDIA should have chosen TSMC instead of Samsung for its RTX 3000 series.
> I have been quite vocal about how NVIDIA has gone overboard with the RTX 3000 power consumption and OC'ing out of the box.
> I have been quite vocal about how both NVIDIA/AMD have been ripping us off recently by steadily raising prices for their mid-range cards.
> ...


It pains me to see people labeling others. Point to all the threads with overheating issues on TPU; shouldn't take you long, what with there being so many.

You may have one-lined an Intel or Nvidia thread with a post, but this is different.

It always seems to me the staunchest fans are the first to point out other alleged fans. Meanwhile, with 3 Intel systems and 1 AMD, I'll sit here unbiased by the bullsh##, quite happy.


----------



## 95Viper (Nov 20, 2020)

Stay on topic.
Keep it civil.

Thank You


----------



## Prcoje123 (Nov 22, 2020)

Guys, there is no chance that the difference from sample to sample is 15-20°C with similar cooling and conditions. The chips aren't bad; it's AGESA or some other voltage in the BIOS.
If you compare your voltages to Gerard's, you will see that his IOD, CCD and VDDP voltages are low, so maybe there's a possible fix there. If you leave those on auto, my B550 Tomahawk sets VDDP to 1.15 V for a 3700X, which is ridiculous.
Having 1.15 V on a 5900X is not the same as on a 5800X/5600X (less noticeable there, since the 5600X is a 65 W part) because of the die structure.
Crosshair X570 Hero, 2x 360 PE rads, 1.3 V at 4850 all-core in Cinebench R20, and the temp was around 72°C for 2 samples, but the guy who tested it surely didn't leave the voltages on auto, and didn't use the newest beta BIOS either. It's all board to board. I saw a test with 1.05 V VDDP hitting 76°C at 4800 MHz, 1.3 V, so something must be wrong there. I would test and post, but I don't have a sample yet.


----------



## mouacyk (Nov 23, 2020)

AMD's got cache!


----------



## turbogear (Nov 23, 2020)

Well, I did not observe any such issues on my Ryzen 5800X.
I've had the processor for a few days now and have run many tests with it.
I did not see any temperature readings in the range of 90°C.

For me the maximum CPU Tdie temperature stays around 62°C when running OCCT v7.1.0.
With HeavyLoad v3.6 I was able to stress it more, and the temperature peaked around 76°C.
With the Cinebench R20 multi-threaded stress test the temperatures were similar to HeavyLoad, at 76°C max.

The frequency on all cores during these tests was 4800 MHz.

I am not sure anymore, but if I remember correctly I observed similar temperatures with the Ryzen 3700X I had before.

I have PBO turned on in the BIOS.
The Windows power plan is set to Balanced.
I am using a ROG Crosshair VIII Hero (Wi-Fi) with BIOS version 2502 (AMD AM4 AGESA V2 PI 1.1.0.0 Patch C).

I have a full water-cooling loop with the Ryzen 5800X and a Radeon VII included.
Here is the cooling setup:
pump/reservoir combo with EK-DDC 3.2 PWM, three EK rads (1x 360 mm + 1x 240 mm + 1x 120 mm), XSPC RayStorm Pro CPU block, EK waterblock for the Radeon VII

Here are the temperature readings from a 20-minute HeavyLoad stress test run.
These readings are from HWiNFO64 v6.35-4310:


----------

