# Upgrade to 5900X



## DebenPoison (Aug 19, 2022)

Currently running a Ryzen 3700X (Asus X570 TUF) with an RTX 3090 FE, and with the prices of 5900X CPUs being what they are, I'm really considering the upgrade.

What do y'all think? I know the 3700X holds me back some, but probably not much. I game on a 1440p 34" LED and typically run most games at very high to ultra settings at 1440p resolution.

I'm in Canada and mostly game with my machine. I'm on the fence right now, waiting to see if the Ryzen 7000 chips drop the 5000 series chips any cheaper, or maybe I'll just jump directly to the 7000 series depending on performance/pricing.


----------



## siluro818 (Aug 19, 2022)

Decide for yourself if it's worth it (benchmarks are done with your GPU).

I'm on the same CPU and have been considering the 5900X myself, but on a 6700 XT it'd be pointless. I'll probably change both further down the line.


----------



## Chomiq (Aug 19, 2022)

Big questions are - do you need 12 cores and can you wait?


----------



## jesdals (Aug 19, 2022)

I would say before you upgrade a 3000 series CPU to a 5000, have a look at your current settings: do you use PBO, and what are your Infinity Fabric and memory settings? If they're all maxed out, then go ahead - but a 3000 CPU at 1600 MHz Infinity Fabric is not the same as a tuned 3000. I went from a 3800X running 1866 Infinity Fabric to a 5950X running 1900, and that did make a difference - but I would not recommend it to anyone who hasn't optimized their current system first.


----------



## pavle (Aug 19, 2022)

As can be seen from the TPU chart, going from 80% to 100% relative performance (a 1/4 step up) isn't really worth it (unless you need more cores).
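For anyone wanting to sanity-check that "1/4 step up" phrasing, the arithmetic is simple. This is illustrative only: the ~80% figure is read off the relative-performance chart mentioned above, not a measured benchmark.

```python
# Illustrative arithmetic only: the ~80% figure comes from the TPU
# relative-performance chart referenced above, not from a benchmark run.
r_3700x = 0.80   # 3700X relative gaming performance (assumption, per the post)
r_5900x = 1.00   # 5900X taken as the 100% baseline
uplift = r_5900x / r_3700x - 1
print(f"best-case CPU-bound uplift: {uplift:.0%}")  # -> 25%
```

Note this is the *CPU-bound* ceiling; at 1440p ultra on a 3090 the real-world gain will usually be smaller.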


----------



## DebenPoison (Aug 20, 2022)

jesdals said:


> I would say before you upgrade a 3000 series CPU to a 5000, have a look at your current settings: do you use PBO, and what are your Infinity Fabric and memory settings? If they're all maxed out, then go ahead - but a 3000 CPU at 1600 MHz Infinity Fabric is not the same as a tuned 3000. I went from a 3800X running 1866 Infinity Fabric to a 5950X running 1900, and that did make a difference - but I would not recommend it to anyone who hasn't optimized their current system first.



I do have PBO enabled; I'm unsure about the Infinity Fabric settings. Are these BIOS settings? Most of my BIOS settings are on auto besides PBO and XMP 2.0 being enabled. Precision Boost works well though; my 3700X boosts from 3.5 GHz to 4.4 GHz quite comfortably at a max temp of about 70°C.


----------



## tabascosauz (Aug 20, 2022)

DebenPoison said:


> I do have PBO enabled; I'm unsure about the Infinity Fabric settings. Are these BIOS settings? Most of my BIOS settings are on auto besides PBO and XMP 2.0 being enabled. Precision Boost works well though; my 3700X boosts from 3.5 GHz to 4.4 GHz quite comfortably at a max temp of about 70°C.



I went from 3700X to 5900X on just a 2060 Super and it was an improvement here and there (especially lows), but I play a lot of framecapped/CPU-bound games, and you sound like you're nearly entirely GPU-bound most of the time from 1440p ultrawide @ Ultra. Not sure how much you stand to gain.

Ryzens tend not to gain much gaming performance from pure clock speed alone. Good memory running a tight profile at 1900-2000 MHz IF is where the biggest gains come from. The 3700X is also clocked so low that honestly PBO doesn't do much, though if you have a later-production chip with good SP you might try for an all-core 4.4/4.5.

The 5900X is, at least at stock, thermally similar to a 3700X, if not running even cooler despite the higher wattage. However, the 5900X has a LOT of PBO headroom - in the custom loop I was running up around 220 W tops for benching at one point. My 3700X barely got to about 110 W with the EDC trick. Not that it'll make much difference in games, of course.

If there's no desperately pressing need for 12 cores, you have the 5800X3D as an option - for gaming primarily, that'd be the CPU I'd be leaning towards as an upgrade from 3700X. Give that 3090 a CPU it deserves.


----------



## jesdals (Aug 20, 2022)

The Infinity Fabric setting is in the BIOS. Remember that memory and Infinity Fabric clocks perform best at 1:1. You can see your current settings in HWiNFO64 or CPU-Z.



----------



## DebenPoison (Aug 21, 2022)

From CPU-Z









Looks like 1:1 to me

For interest's sake, here's the last Heaven benchmark I did





30 mins of gaming on Jedi Fallen Order


----------



## Mussels (Aug 21, 2022)

DebenPoison said:


> Currently running a Ryzen 3700X (Asus X570 TUF) with an RTX 3090 FE and with the prices of the 5900X CPU's being what they are, really considering the upgrade.
> 
> What do ya'll think? I know the 3700X holds me back some, but probably not much.  I game on a 1440p 34" LED and typically run most games at very high ~ ultra settings @ 1440p resolution.
> 
> I'm in Canada and mostly game with my machine.  I'm on the fence right now waiting to see if the Ryzen 7000 chips drop the 5 series chips any cheaper or maybe just jump directly to 7000 series depending on performance/pricing.


Zen 2 to Zen 3 is quite a big gaming performance boost - since I run 165 Hz (or fast vsync on the 4K display), I saw max FPS jump by roughly 30 FPS in modern titles.

What's important is that a 5600X gives most of that gain too - if you want a performance hike without higher wattage or any cooling concerns, the 5600X or 5700X are fantastic gaming options.


----------



## Calmmo (Aug 21, 2022)

You will see improvements on a 3090, sure. Your RAM speeds are budget tier (and that tRC of 75 looks wrong...).
I would wait for the 7000 announcement or actual launch to snap up a 5800X3D on the cheap when prices inevitably drop on the old stuff. The extra cache on the 3D CPU also somewhat neutralizes RAM performance differences.


----------



## siluro818 (Aug 21, 2022)

DebenPoison said:


> From CPU-Z
> 
> View attachment 258852
> 
> ...



Zen 2 Infinity Fabric 1:1 DRAM speed is 3200 MHz. If you switch to Zen 3 you'll need to OC to 3600 MHz to keep the 1:1, or get new chips altogether.


----------



## gffermari (Aug 21, 2022)

It's 5800X3D or 5950X for a high end system like yours.

Since you have a 3090 and primarily game on it, I would suggest going all in for a 5800X3D.
Only if you really need more cores for work or other, then go for a 5950X.

I did the same....3700X to 5800X3D.


----------



## Mussels (Aug 21, 2022)

siluro818 said:


> Zen 2 Infinity Fabric 1:1 DRAM speed is 3200 MHz. If you switch to Zen 3 you'll need to OC to 3600 MHz to keep the 1:1, or get new chips altogether.


That's not correct. Zen 3's IF can run 1:1 up to around 3900, with 3600 just being the common goal.
It's not a fixed number.

Having set up a 5700X and 3070 Ti system last night: if you're OCD about low CPU temps, get the 5700X. It's the coldest, lowest-wattage gaming CPU I've seen in a long time.
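For readers new to the 1:1 talk in this thread, the math behind all the quoted numbers is just a halving: DDR4 is double data rate, so the memory clock (and the FCLK you'd set for a 1:1 Infinity Fabric ratio) is half the DDR4-xxxx rating. A minimal sketch:

```python
def fclk_for_1to1(ddr4_rating: int) -> int:
    """DDR4 transfers data twice per clock, so DDR4-3600 means an
    1800 MHz memory clock; 1:1 means FCLK (Infinity Fabric) matches it."""
    return ddr4_rating // 2

# The figures quoted in this thread all fall out of this halving:
for rating in (3200, 3600, 3800):
    print(f"DDR4-{rating} -> {fclk_for_1to1(rating)} MHz FCLK for 1:1")
```

That gives 1600, 1800, and 1900 MHz respectively, matching the "1600 Infinity", "1800 MHz", and "1900" numbers posted earlier.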


----------



## jesdals (Aug 21, 2022)

DebenPoison said:


> From CPU-Z
> 
> View attachment 258852
> 
> View attachment 258853


I would consider better memory - if yours can't OC to 3600 MHz with decent timings, a 5000 series CPU on 3200 MHz settings is crippled and you won't get the full potential out of it. Try setting your memory timings in the BIOS to auto and setting the speed to 1800 MHz (DDR4-3600); if it runs with that, you can try a CL18 setting. If it can't do that, I would start with better memory.


----------



## DebenPoison (Aug 21, 2022)

It's Corsair Vengeance RGB Pro DDR4 that I'm running, and in all honesty I've done zero performance testing on it. I'm sure it's capable of more than 3200 MHz and potentially tighter timings; I could tinker with this. Anyone know what these might achieve, and at what voltage?









CORSAIR Vengeance RGB Pro 32GB (2 x 16GB) DDR4 3200 (PC4 25600) Desktop Memory, Model CMW32GX4M2C3200C16 - www.newegg.ca












May look to replicate his settings and check results when I get some time.


----------



## A Computer Guy (Aug 21, 2022)

DebenPoison said:


> Corsair Vengeance RGB Pro DDR4 that I'm running and in all honesty have done zero performance testing on them. I am sure they're capable of more than 3200mhz and potentially tighter timings. I could tinker with this.  Anyone know what these might achieve and at what voltage?
> 
> 
> 
> ...


What version do you have? I looked in my notes... I had this model, CMW32GX4M2C3200C16 (ver 4.32). I was able to OC it to DDR4-3800, but there was a trick to it: I had to lower the voltage to 1.33 V to unlock the higher frequency.

Here is a screenshot of my last notes when tinkering with this kit.  It might give you some ideas to try.  YMMV.





Found some other notes... I was trying to tighten the timings a bit more. The one in red was my last test before moving on. Not sure if it was stable.


----------



## tabascosauz (Aug 21, 2022)

DebenPoison said:


> Corsair Vengeance RGB Pro DDR4 that I'm running and in all honesty have done zero performance testing on them. I am sure they're capable of more than 3200mhz and potentially tighter timings. I could tinker with this.  Anyone know what these might achieve and at what voltage?
> 
> 
> 
> ...



Knowing the Corsair SKU gets you no closer to knowing what you have. Slow SKUs from any memory vendor have every IC under the sun lumped into a single SKU, and 3200 CL16 is the melting pot. With Corsair there is only one way to be sure: pull off the heatspreaders (don't do it for a shitty kit like this).

You can usually formulate an educated guess based on what Thaiphoon Burner says, or infer from the revision number on the sticks, but both can be wildly inaccurate at times (Thaiphoon guesses wrong all the time, and the rev# hasn't been very accurate since Corsair started lumping different dies under a single rev#). Then for certain distinctively behaving ICs, you can take a guess based on how they react to different timings or VDIMM.






Die versions etc - forum.corsair.com

I found this thread on overclocking that looks to give a ton of info on Corsair die versions etc: https://www.reddit.com/r/overclocking/wiki/ram/ddr4 - Corsair sticks identify the IC with a 'version number' on the label such as "ver 4.31" - props to them for this as it helps...


----------



## A Computer Guy (Aug 22, 2022)

tabascosauz said:


> ... and rev# hasn't been very accurate since Corsair started lumping different dies in a single rev#...


Well that sucks.


----------



## tabascosauz (Aug 22, 2022)

A Computer Guy said:


> Well that sucks.



To be fair, it's not always the case. But Thaiphoon doesn't _always_ get things wrong either; I guess it's better than nothing (ahem, Patriot). And I was wrong, pulling off the heatspreaders doesn't always help either - Corsair relabels DRAM packages lmao



> *Especially with Micron, Corsair version numbers are sometimes weird. Confirmed means an IC has been seen under a version number, not that it can't also cover something else.*





> Corsair has a 3 digit version number on the sticks' label, indicating what ICs are on the stick.
> The first digit is the manufacturer.
> 3 = Micron
> 4 = Samsung
> ...
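Based only on the two digits confirmed in the quoted scheme above (3 = Micron, 4 = Samsung; the rest of the list is truncated), a toy decoder might look like this. Treat the mapping as deliberately incomplete, and keep the caveat from earlier in the thread that these labels can mislead:

```python
# Toy decoder for Corsair "ver x.yz" labels, using ONLY the two digits
# confirmed in the quoted excerpt (the full scheme is truncated above),
# and with the earlier caveat that version numbers aren't always accurate.
IC_VENDOR = {"3": "Micron", "4": "Samsung"}

def vendor_from_version(label: str) -> str:
    digits = label.lower().replace("ver", "").strip()  # "ver 4.32" -> "4.32"
    return IC_VENDOR.get(digits[0], "unknown (not covered by the excerpt)")

print(vendor_from_version("ver 4.32"))  # Samsung, per the quoted scheme
print(vendor_from_version("ver 3.31"))  # Micron
```

So the CMW32GX4M2C3200C16 (ver 4.32) mentioned earlier would read as Samsung-based, subject to all the accuracy caveats above.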


----------



## Lei (Aug 22, 2022)

tabascosauz said:


> If there's no desperately pressing need for 12 cores, you have the 5800X3D as an option - for gaming primarily, that'd be the CPU I'd be leaning towards as an upgrade from 3700X. Give that 3090 a CPU it deserves.




Yeah, that's what I was going to say: the 3D CPU is something worth upgrading to for a gamer.



DebenPoison said:


> I'm on the fence right now waiting to see if the Ryzen 7000 chips drop the 5 series chips any cheaper or maybe just jump directly to 7000 series depending on performance/pricing.


That's also a good idea; at least 7000 will give you DDR5. And why would you need 12 cores, except for Cyberpunk?


----------



## phanbuey (Aug 22, 2022)

The 5900X is a great upgrade - it will give that rig a long life. I wouldn't go 5700X honestly; for $100 more the 5900X is well worth it.

The 5800X3D is good too, but at that point I would just wait for Zen 4.

But yeah, either the 5900X or 5800X3D is great (maybe even the 5800X3D once Zen 4 is out and prices drop?).



Lei said:


> That's also a good idea; at least 7000 will give you DDR5. And why would you need 12 cores, except for Cyberpunk?


Why not, if it's $350? Then you have a chip that can game, do productivity, and run badly coded games with no compromises.


----------



## A Computer Guy (Aug 22, 2022)

DebenPoison said:


> Currently running a Ryzen 3700X (Asus X570 TUF) with an RTX 3090 FE and with the prices of the 5900X CPU's being what they are, really considering the upgrade.
> 
> What do ya'll think? I know the 3700X holds me back some, but probably not much.  I game on a 1440p 34" LED and typically run most games at very high ~ ultra settings @ 1440p resolution.
> 
> I'm in Canada and mostly game with my machine.  I'm on the fence right now waiting to see if the Ryzen 7000 chips drop the 5 series chips any cheaper or maybe just jump directly to 7000 series depending on performance/pricing.


If you're not going to get at least a 25% performance gain, I wouldn't do it - unless you get some good cash back for your 3700X, need the extra cores, or get a good deal on Zen 3 after AM5 has been out for a bit via a Newegg or Amazon special.


----------



## Lei (Aug 22, 2022)

phanbuey said:


> Why not if it's $350?  Then you have a chip that can game, do productivity and run badly coded games with no compromises.


Because: get a 4K monitor with that money. You have a 3090 FE.


----------



## GerKNG (Aug 22, 2022)

DebenPoison said:


> mostly game with my machine


5800X3D


----------



## phanbuey (Aug 22, 2022)

Lei said:


> because: get a 4k monitor with that money. You have 3090 FE
> 
> View attachment 258983


So instead of upgrading his aging 3700X that is bottlenecking his card at virtually all resolutions (especially lows and 1%) for a $350 5900X ($250 or less if he sells the 3700X for at least $100 on eBay), you're saying he should buy a $650-$1000 monitor...

Sure, that too would be a good upgrade, but kind of not the point? It's good performance per dollar regardless of the monitor he's using.

If you're going for the best performance, then the 5800X3D for sure. The 5900X is not a bad choice either - Cyberpunk is hardly the last game that's going to eat threads for no good reason.


----------



## A Computer Guy (Aug 22, 2022)

GerKNG said:


> 5800X3D


That chart makes the 5800X3D pretty tempting.


----------



## GerKNG (Aug 22, 2022)

A Computer Guy said:


> That chart makes the 5800X3D pretty tempting.


I replaced my 12700K with it, and there are still ways to overclock and use the Curve Optimizer, which gave me another 3-5% more FPS.
If you already have a compatible board, it's a no-brainer.


----------



## A Computer Guy (Aug 22, 2022)

GerKNG said:


> I replaced my 12700k with it and there are still ways to overclock and use the curve optimizer which gave me another 3-5% more fps.
> If you already have a compatible board it's a nobrainer


What about a lower end card like 6600XT?


----------



## GerKNG (Aug 22, 2022)

A Computer Guy said:


> What about a lower end card like 6600XT?


Why not?
The faster the CPU the longer you'll keep it. Maybe even for 5-6 years.


----------



## Lei (Aug 22, 2022)

phanbuey said:


> you're saying he should buy a $650-$1000 monitor...











LG 32” UHD HDR10 Monitor with AMD FreeSync™ (32UL500-W) - www.lg.com




And I said 5800X3D


----------



## A Computer Guy (Aug 22, 2022)

GerKNG said:


> Why not?
> The faster the CPU the longer you'll keep it. Maybe even for 5-6 years.


LOL or in my case maybe 10 to 15.


----------



## phanbuey (Aug 22, 2022)

Lei said:


> LG 32” UHD HDR10 Monitor with AMD FreeSync™ (32UL500-W) | LG USA
> 
> 
> Shop LG 32UL500-W on the official LG.com website for the most up to date information. Buy online for delivery or in-store pick-up.
> ...



Nice 60 Hz FreeSync panel. I wouldn't recommend that at all for a 3090.

I did too. Either chip is fine - the 5900X is just cheaper and will do the job nicely (and also sell nicely) - you know, then he can put that extra $100 toward your 4K monitor upgrade lol.


----------



## JrRacinFan (Aug 22, 2022)

If you held onto that 3700X this long, wait the couple of months until a little past the 7xxx series launch and see what comes of pricing on current SKUs, to save a few dollars.


----------



## gffermari (Aug 22, 2022)

GerKNG said:


> I replaced my 12700k with it and there are still ways to overclock and use the curve optimizer which gave me another 3-5% more fps.
> If you already have a compatible board it's a nobrainer



Did you really replace a 12700K with a 5800X3D?
That's madness!

Anyway, there's no point in buying a 5900X if you don't need the cores.
Actually, if you need cores, go for a 5950X.
Everyone else should go for the 5800X3D. The difference is too massive to ignore.


----------



## Mussels (Aug 22, 2022)

DebenPoison said:


> Corsair Vengeance RGB Pro DDR4 that I'm running and in all honesty have done zero performance testing on them. I am sure they're capable of more than 3200mhz and potentially tighter timings. I could tinker with this.  Anyone know what these might achieve and at what voltage?
> 
> 
> 
> ...


If they're like any other Corsair memory I've seen, they all seem capable of the 3200-3600 range, but will NOT tighten timings.

Only their 8 GB sticks had a chance of being Samsung ICs - anything Hynix (all the 16 GB and 32 GB sticks) would overclock, but not reduce timings.



gffermari said:


> Did you really replace a 12700K with a 5800X3D?
> That’s madness!
> 
> Anyway. There’s no point to buy a 5900X if you don’t need the cores.
> ...


It's expensive, but it's a faster CPU for gaming - what's the madness part, if you can afford it?


----------



## QuietBob (Aug 22, 2022)

A Computer Guy said:


> What about a lower end card like 6600XT?


Ryzen 3300X vs. 5800X3D: 20 game benchmarks @ 1080p with max settings
But the 6600XT is not a lower end card


----------



## Mussels (Aug 24, 2022)

QuietBob said:


> Ryzen 3300X vs. 5800X3D: 20 game benchmarks @ 1080p with max settings
> But the 6600XT is not a lower end card


Yes it is?

It's the fourth slowest of the twelve 6000 series cards.
Therefore... it's one of the lower end.


You can argue semantics all you want, but the word 'lower' just implies it's lesser than the previously mentioned GPUs, in the context of the modern GPU lineup.
That doesn't say or mean anything about its performance or features; it's just about its placement in a product stack.


----------



## Assimilator (Aug 24, 2022)

I upgraded from a 3600X to a 5900X for no other reason than the higher core count improves my system longevity, as I see nothing interesting in Zen 4 that is making me want to upgrade.

For your use-case, I'd definitely suggest waiting the month or so until Zen 4 releases. At that point not only will you have the possibility to upgrade your whole system, but the price of Zen 3 CPUs will start dropping so that if you do decide to go that route you'll be able to pick up one for less.


----------



## freeagent (Aug 24, 2022)

More cores is good, but the boost potential of the CPU is awesome in ST. I have my 5600X boosting to 4850, and my 5900X boosting to 5150.


----------



## Mussels (Aug 25, 2022)

Assimilator said:


> I upgraded from a 3600X to a 5900X for no other reason than the higher core count improves my system longevity, as I see nothing interesting in Zen 4 that is making me want to upgrade.
> 
> For your use-case, I'd definitely suggest waiting the month or so until Zen 4 releases. At that point not only will you have the possibility to upgrade your whole system, but the price of Zen 3 CPUs will start dropping so that if you do decide to go that route you'll be able to pick up one for less.


This. A lot of people would benefit from any sales on the 5600X in the next few months; so far only the 5900X has had the price drops - and they've become a lot more popular since that happened.


----------



## Assimilator (Aug 25, 2022)

Mussels said:


> This. A lot of people would benefit from any sales on the 5600x in the next few months, so far only the 5900x has had the price drops - and they've become a lot more popular since that happened


The prices are stupid due to AMD's stupidity in introducing the 5500/5600/5700X two years after all the other Zen 3 parts, to the point that I could have picked up a brand new 5800X for *less* than a brand new 5700X. Instead I bought a second-hand 5900X for less than either.


----------



## JrRacinFan (Aug 25, 2022)

Assimilator said:


> The prices are stupid due to AMD's stupidity in introducing the 5500/5600/5700X two years after all the other Zen 3 parts, to the point that I could have picked up a brand new 5800X for *less* than a brand new 5700X. Instead I bought a second-hand 5900X for less than either.


Well, here's something you may wanna hear...

I purchased a 5800X from Amazon and was shipped a 5900X to my door - an Amazon error in my favor. This was about a year ago, when the 5800X was 400 USD and the 5900X was 750 USD.


----------



## Assimilator (Aug 25, 2022)

JrRacinFan said:


> Well, here's something you may wanna hear ....
> 
> I purchased a 5800X from Amazon and was shipped a 5900X to my door, Amazon error in my favor. This was about a year ago when 5800X was 400 USD and 5900x was 750 USD.


I never get that sort of good luck ;_;


----------



## Frick (Aug 25, 2022)

Assimilator said:


> I never get that sort of good luck ;_;



You also have to be dishonest to keep it.


----------



## JrRacinFan (Aug 25, 2022)

I'm certain that if a friend gave you, say, $200, you would keep it. Obviously, they gave it to you for a reason.


----------



## Chrispy_ (Aug 25, 2022)

5800X3D for gaming. Don't bother with the 5900X; it's not enough of an improvement to justify the cost or effort.


----------



## oxrufiioxo (Aug 25, 2022)

My Vote would be 5800X3D as well if sticking with AM4.


----------



## Mussels (Aug 27, 2022)

Chrispy_ said:


> 5800X3D for gaming. Don't bother with the 5900X it's not enough of an improvement to justify the cost or effort.


The 5800X3D costs more than a 5900X, and unless you have a top-tier GPU you may not see any benefits over a 5600X.


That said, if you want AM4 to last a really long time, a 5800X3D would probably age like a 2700K or 4770K has, and be usable for a decade.


----------



## Chrispy_ (Aug 27, 2022)

Mussels said:


> the 5800x3d costs more than a 5900x
> and unless you have a top tier GPU you may not see any benefits over a 5600x
> 
> 
> That said, if you want AM4 to last a really long time, a 5800x3d would probably age like a 2700K or 4770K has, and be usable for a decade


OP said they have a 3090 FE. It's the exact candidate for a top-tier gaming CPU.


----------



## Mussels (Aug 29, 2022)

Chrispy_ said:


> OP said they have a 3090 FE. It's the exact candidate for a top-tier gaming CPU.


Clarity - too many times people take a comment on page 2 of a forum, ignore the context, and things get derailed.

Even with a custom-water 3090, I'm still managing to GPU-limit myself with a 5800X at 4.6 GHz.


----------



## lexluthermiester (Aug 29, 2022)

DebenPoison said:


> What do ya'll think? I know the 3700X holds me back some, but probably not much.


It shouldn't be holding you back at all. A 3700X is an excellent gaming CPU. If you want to upgrade for gaming, don't go with more cores; go with better performance. Others have suggested and shown that the 5800X3D is the way to go and I'm going to echo that, but ONLY if the game you're into needs more CPU power (and that's a short list). Otherwise get a 4K 120 Hz display and call it a day for 18 months.


----------



## Mussels (Aug 29, 2022)

lexluthermiester said:


> It shouldn't be holding you back at all. A 3700X is an excellent gaming CPU. If you want to upgrade for gaming, don't go with more cores; go with better performance. Others have suggested and shown that the 5800X3D is the way to go and I'm going to echo that, but ONLY if the game you're into needs more CPU power (and that's a short list). Otherwise get a 4K 120 Hz display and call it a day for 18 months.


The 3700X isn't good for high-FPS gaming; you hit a ceiling around 120 FPS.

Even at 4K, I'm sitting in the 120-200 range on this 3090, depending on the scene/settings.


----------



## lexluthermiester (Aug 29, 2022)

Mussels said:


> 3700x isn't good for high FPS gaming, you hit a ceiling around 120FPS


Um, I have not seen this. But then again, it's not been a problem for most of the customers I serve.


----------



## Mussels (Aug 29, 2022)

lexluthermiester said:


> Um, I have not seen this. But then again, it's not been a problem for most of the customers I serve.


It shows up in every game review, and in every thread where users have upgraded from Zen 2 to Zen 3.
Every game is a little different, but most DX12 titles sit around that 120 FPS threshold.


Here's some examples from the 5700X review, since it's recent.











Not every game behaves the same, but the gaming performance gap between Zen 2 and Zen 3 is huge


----------



## lexluthermiester (Aug 29, 2022)

Mussels said:


> Not every game behaves the same


I was about to say that. I've seen 1700X's and 2700X's get 300+fps. It just depends on the game and the display.


----------



## Mussels (Aug 29, 2022)

lexluthermiester said:


> I was about to say that. I've seen 1700X's and 2700X's get 300+fps. It just depends on the game and the display.


Yeah, but I've seen far too many games, even on my GTX 1080, that couldn't make use of my 165 Hz displays but could with a Zen 3 CPU.

I did say DX12 titles, specifically.


----------



## SpittinFax (Aug 29, 2022)

Hopefully the 5900X price drops when Zen 4 does. It actually went up by AU$30 recently which kind of dulled my interest in upgrading.


----------



## DebenPoison (Sep 1, 2022)

I mean, based on the recent leaks of 7600X and 7700X performance, that may totally be the way to go.

I'm sitting on the fence for the next month to see what prices do either way, but I must say, Zen 4 is very tempting based on what we're seeing. Couple that with DDR5; yeah, it's a bit more $$, but it may be the way to go. I don't anticipate purchasing an RTX 40 series card, so I may as well pair my 3090 with a newer CPU and DDR5.

Edit: Interesting to see so many prefer the 5800X3D over the 5900X; from what I've seen, the performance increase vs the 5900X is limited to a select few games, and granted, it does kick some ass, but the additional cores could come in useful if I ever decided to throw a few VMs on the machine. It'll be telling to see how the 7600X goes up against the 5800X3D in gaming performance; as I say, I'm primarily a gamer.

Thank you for all the helpful comments and information thus far!


----------



## Chrispy_ (Sep 1, 2022)

DebenPoison said:


> I mean based on the recent leaks of 7600X and 7700X performance, this may totally be the way to go.
> 
> I'm sitting on the fence for the next month to see what prices do either way but I must say, Zen 4 is very tempting based on what we're seeing.  Couple that with DDR5 ; yeah its a bit more $$ but may be the way to go. I don't anticipate purchasing RTX 4 series card so I may as well pair my 3090 with a newer CPU and DDR5.
> 
> ...


The games certainly don't need the cores; they need the cache and the peak boost speeds. I have a regular 5800X and I won't be changing it until at least the Zen 4 parts with 3D V-Cache arrive, which is probably at least a year away.

If you're after VMs, it's probably worth buying a used server with a 2P Xeon or older EPYC. I ran VMs at home for work during the COVID lockdown (swapped out to a 3900X and 128GB of slow-ass DDR4-2133), but you can pick up used tower 2P servers like an HP ProLiant Gen8 for a few hundred bucks. Load it up with some cheap used ECC DDR4 if it doesn't already come with plenty, add some new M.2 storage, and throw it into a utility cupboard or the attic. A couple of older Xeons with 6 or 8 cores each and 8 memory channels between them will be great for VMs, and you don't have to compromise your gaming machine to mess around with Hyper-V or VMware.


----------



## Mussels (Sep 12, 2022)

SpittinFax said:


> Hopefully the 5900X price drops when Zen 4 does. It actually went up by AU$30 recently which kind of dulled my interest in upgrading.


You're in luck; prices are all over the place - stores with low stock are raising prices, those with stock are lowering them.

This is from priceme.com.au, which I had never used before today.


----------



## SpittinFax (Sep 12, 2022)

Mussels said:


> You're in luck, prices are all over the place - stores with low stock are raising prices, those with stock are lowering them
> 
> this is from priceme.com.au that i have never used before today
> 
> View attachment 261440



Prices in the low/mid ranges are very good right now. I saw the 5700X has dropped to AU$369 everywhere and it's very tempting. Sure, it's not a 5900X, but the efficiency and thermal performance of that 8-core chip are hard to ignore.


----------



## DebenPoison (Sep 12, 2022)

I picked up a 5900X on eBay in "opened box" condition for $375 CAD; that's about $290 USD. I'll sell my 3700X for at least $150 CAD, so I think it was a great deal. Eagerly awaiting my upgrade!!


----------



## kapone32 (Sep 12, 2022)

DebenPoison said:


> I picked up a 5900X on eBay as “opened box” condition for $375 CAD , that’s about $290 USD. I’ll sell my 3700X for at least $150 CAD, so I think it was a great deal. Eagerly awaiting my upgrade!!


The first thing you are going to notice is how snappy and solid your PC is going to feel. If you have proper cooling, the chip will happily boost to 4.9-5.1 GHz on 1 or 2 cores. Of all the AM4 CPUs I have owned, the 5900X felt the "sturdiest", for lack of a better word. Everything you do with a 5900X or higher will feel that way, but the 5900X does it using less power than the 5950X. I have also found that the 5900X has the best memory controller, but that is subjective vs what else I have used on the platform.


----------



## DebenPoison (Sep 16, 2022)

kapone32 said:


> The first thing you are going to notice is how snappy and solid your PC is going to feel. If you have proper cooling the chip will happily boost to 4.9 to 5.1 GHZ on 1 or 2 cores. Of all the AM4 CPUs I have owned the 5900X felt the "sturdiest" for lack of a better word. Everything you do with a 5900X or higher will feel that way but the 5900X does it using less power than the 5950X. I also have found that the 5900X has the best Memory Controller but that is subjectively vs what else I have used on the platform.


I installed the 5900X today and yeah, it definitely feels snappier. Funnily enough, I see this directly in Windows 11 search via indexing: when I searched previously with my 3700X there was a delay in finding the application, which I could never quite figure out (on Windows 10 I didn't have this issue). Immediately with the 5900X, this issue simply disappeared.

So far so good. I enabled PBO and see the following:

I did have another voltage-related PBO setting enabled previously; I may play with PBO some more.


----------



## SpittinFax (Sep 16, 2022)

How does the 5900X do when you undervolt it to reduce temps? Not gonna lie, I'm a big fan of low temps and low power consumption (which is why I like the RX6600) while still having great performance. So my two options are: 1) Go with the 5700X, or 2) the 5900X with a big undervolt. But it looks like I would need to dial it back quite a lot to match the 5600X in thermals. So that might rule out the 5900X as the best option.

Thermals might seem like a non-issue but Australian summers are no joke. The heatwave that the UK declared a national emergency over is normal t-shirt weather around here, which is why I don't like my PC kicking out heat where avoidable.

Still, the 5700X looks like a great chip. Those eight cores are very efficient.


----------



## 1234chgm (Sep 16, 2022)

5800x3D good


----------



## tabascosauz (Sep 16, 2022)

SpittinFax said:


> How does the 5900X do when you undervolt it to reduce temps? Not gonna lie, I'm a big fan of low temps and low power consumption (which is why I like the RX6600) while still having great performance. So my two options are: 1) Go with the 5700X, or 2) the 5900X with a big undervolt. But it looks like I would need to dial it back quite a lot to match the 5600X in thermals. So that might rule out the 5900X as the best option.
> 
> Thermals might seem like a non-issue but Australian summers are no joke. The heatwave that the UK declared a national emergency over is normal t-shirt weather around here, which is why I don't like my PC kicking out heat where avoidable.
> 
> Still, the 5700X looks like a great chip. Those eight cores are very efficient.



I have never associated Curve Optimizer undervolting with a reduction in temperatures on my Zen 3 CPUs. Maybe if your cooler is *really* low on the performance chart, but not even on the likes of L12S, L9x65 and Big Shuriken 3 would I say there was a real thermal benefit to just using CO. I run -2/5/30/10/20/15/20/20/20/20/20/20 on my 5900X, and either all-core -10 or -15 on my 5700G.

At one point I determined that on mine, stock all-core performance at 142W is roughly equivalent to undervolted all-core at 130W. So if that's what you're after (reducing power in conjunction with CO to achieve iso-performance), I suppose you could undervolt to run a few degrees cooler (about 6.5C of difference for me, from that 12W reduction), but at the same power limit there will be no temp improvement, as it will just run higher clocks.
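For anyone who wants to sanity-check those numbers, the arithmetic is simple. A back-of-envelope sketch (the 142W/130W/6.5C figures are from this post; the "thermal resistance" framing is my simplification, not something measured):

```python
# Back-of-envelope thermal math for the iso-performance undervolt
# described above. The three input figures come from the post.

stock_w = 142.0      # stock all-core package power (PPT)
uv_w = 130.0         # undervolted all-core power, same performance
temp_drop_c = 6.5    # observed temperature difference

# Effective thermal resistance of the cooling path, in C per watt
r_theta = temp_drop_c / (stock_w - uv_w)
print(f"~{r_theta:.2f} C/W")  # ~0.54 C/W

# Corollary: at an unchanged power limit the saved voltage is spent
# on higher clocks instead, so CO alone doesn't lower temperatures.
```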

At 142W and a 21-25C ambient on water I maxed out in the upper 60s, maybe hitting low 70s once in a while.
Same conditions on the NH-C14S: a number of games will regularly get me into the high 70s and occasionally 80s; Cinebench about 70C, non-AVX all-core about 78C, and Linpack/TM5 easily 80C+.

There aren't really any magical tricks to Zen 3 thermals. The 5600/5600X/5700X run cool because they run a lower 76W PPT limit now. Push the PPT hard, bring up the ST boost ceiling and there's not much difference. Unfortunately the 2CCD parts don't go so low on power, IOD power is higher and CPU will want to draw up to 100W even just in lighter games.


----------



## R0H1T (Sep 16, 2022)

Chrispy_ said:


> which is probably at least a year away.


Or 6 months if RPL takes the gaming crown, even if only for a short while. AMD most likely won't take long this time, because Zen 5 is expected to launch in 1H 2024 *IIRC*.

Or who knows even slightly earlier ~


----------



## Chrispy_ (Sep 16, 2022)

R0H1T said:


> Or 6 months if RPL takes the gaming crown, even if only for a short while. AMD most likely won't take long this time, because Zen 5 is expected to launch in 1H 2024 *IIRC*.
> 
> Or who knows even slightly earlier ~


I don't think Raptor Lake will have any impact on AMD's roadmap. The Zen4 X3D variants are already in production and take months to roll through the fabs. They'll launch when they're finished cooking, and if Intel can't compete that just means AMD can sell them for a higher price.


----------



## SpittinFax (Sep 16, 2022)

tabascosauz said:


> I have never associated Curve Optimizer undervolting with a reduction in temperatures on my Zen 3 CPUs. Maybe if your cooler is *really* low on the performance chart, but not even on the likes of L12S, L9x65 and Big Shuriken 3 would I say there was a real thermal benefit to just using CO. I run -2/5/30/10/20/15/20/20/20/20/20/20 on my 5900X, and either all-core -10 or -15 on my 5700G.
> 
> At one point I determined that on mine, stock all-core performance at 142W is roughly equivalent to undervolted all-core at 130W. So if that's what you're after (reducing power in conjunction with CO to achieve iso-performance), I suppose you could undervolt to run a few degrees cooler (about 6.5C of difference for me, from that 12W reduction), but at the same power limit there will be no temp improvement, as it will just run higher clocks.
> 
> ...



Thanks for the info. Yeah I did notice something similar with my 5600X where running most cores at a -30 offset didn't really change temperatures much. It still reaches 70 degrees on a big cooler (DRP4), so a 5900X would most likely be creeping over 80 degrees. It explains why many 5900X owners are on water cooling. 12 cores is awesome but ultimately I'm wary of jumping onto a processor that might not be to my liking in terms of power and thermals. It might be a safer choice to overclock an 8 core if I want more multicore performance, rather than starting with a 12 core and underclocking it.


----------



## kapone32 (Sep 16, 2022)

SpittinFax said:


> Thanks for the info. Yeah I did notice something similar with my 5600X where running most cores at a -30 offset didn't really change temperatures much. It still reaches 70 degrees on a big cooler (DRP4), so a 5900X would most likely be creeping over 80 degrees. It explains why many 5900X owners are on water cooling. 12 cores is awesome but ultimately I'm wary of jumping onto a processor that might not be to my liking in terms of power and thermals. It might be a safer choice to overclock an 8 core if I want more multicore performance, rather than starting with a 12 core and underclocking it.


Actually the 5900X is not that bad. All AM4 CPU performance is affected by AGESA updates and BIOS settings as well, and your sustained performance is largely determined by your cooling potential. As an example, the 5950X has a base clock of 3.4 GHz but will go to 4.9 with a 280 mm AIO and 5.2 with a 360 mm AIO. I will say, though, that even a decent air cooler/240 mm AIO in a properly ventilated case will probably give you 4.9 GHz with the 5900X. The 5600X is not a good comparison as it has a 65-75 W power limit, and the 5800X is actually the hottest AM4 CPU; the 5700X runs cooler, but neither is in the same league as the 5900X in raw performance while producing a similar amount of heat. The 5900X pulls about 105-125 W max, and the fact that it has 2 CCDs allows the heat to be spread out better. As far as undervolting, this is my thought: AM3 CPUs were great for UV/OC, especially the 8000 series, but with AM4 AMD took over the OC of the chip. I tried an all-core OC and got to 4.7 GHz on all cores, but the CPU would idle at 50 C. The other thing is, right now as I am typing, my CPU is at 0.7 GHz @ 1.1 V using 12 W of power, so we are good. In fact, unless I am making a video or gaming with CPU-intensive games or tasks, I do not see my CPU pull over 50 W, so AMD is actually great at making sure your CPU runs cool. Some AM4 motherboards also do not like any user-limited voltages, so be aware of that too.


----------



## Chrispy_ (Sep 16, 2022)

SpittinFax said:


> Thanks for the info. Yeah I did notice something similar with my 5600X where running most cores at a -30 offset didn't really change temperatures much. It still reaches 70 degrees on a big cooler (DRP4), so a 5900X would most likely be creeping over 80 degrees. It explains why many 5900X owners are on water cooling. 12 cores is awesome but ultimately I'm wary of jumping onto a processor that might not be to my liking in terms of power and thermals. It might be a safer choice to overclock an 8 core if I want more multicore performance, rather than starting with a 12 core and underclocking it.


Efficiency falls off a cliff at higher clocks; in other words, a CPU can use twice the power to run 20% faster.

If you want multicore performance without silly power consumption, buy more cores and tell them they can only have 142W. You can absolutely run a 5950X with a 120mm air cooler and get great performance; it's just that you won't get 4.4GHz all-core at the base TDP of 105W (142W boost).

I have a bunch of 5950X rendering nodes that are cooled with dual-fan NH-U9S 92mm coolers and they render animation frames in VRay at 3.5GHz or so all day without breaking a sweat. Just because you _can_ run them at 280W with insane cooling to get another 900MHz doesn't mean you have to. Modern CPUs and boards let you choose how much power you want to use, and a 5950X is remarkably impressive in eco-mode (65W) and will probably still get close to a 5900X drawing 4x the power.
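Chrispy_'s "twice the power for 20% faster" rule of thumb falls out of the usual first-order CMOS model (dynamic power scales with f·V², and voltage climbs once you push frequency past the efficient part of the V/f curve). A toy sketch, with made-up coefficients rather than measured Zen 3 data:

```python
# Toy model of dynamic CPU power: P = C * f * V^2, with V(f) rising
# linearly once past the efficient region of the V/f curve.
# All coefficients are invented for illustration, not real Zen 3 data.

def vcore(freq_ghz):
    # assumed V/f curve: 0.9V at 3.5GHz and below, climbing after
    return 0.9 + 0.45 * max(0.0, freq_ghz - 3.5)

def power(freq_ghz, c=8.0):
    # c is an arbitrary switched-capacitance constant
    return c * freq_ghz * vcore(freq_ghz) ** 2

p_base = power(4.0)   # a sensible all-core clock
p_push = power(4.8)   # ~20% higher clock
# roughly 2x the power for 1.2x the clock
print(f"{p_base:.0f}W -> {p_push:.0f}W ({p_push / p_base:.2f}x power)")
```

The exact ratio depends entirely on the assumed V/f curve; the point is only that the voltage term is squared, so the last few hundred MHz are disproportionately expensive.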


----------



## freeagent (Sep 16, 2022)

The less power you allow, the lower your GFLOPS output will be.


----------



## tabascosauz (Sep 16, 2022)

SpittinFax said:


> Thanks for the info. Yeah I did notice something similar with my 5600X where running most cores at a -30 offset didn't really change temperatures much. It still reaches 70 degrees on a big cooler (DRP4), so a 5900X would most likely be creeping over 80 degrees. It explains why many 5900X owners are on water cooling. 12 cores is awesome but ultimately I'm wary of jumping onto a processor that might not be to my liking in terms of power and thermals. It might be a safer choice to overclock an 8 core if I want more multicore performance, rather than starting with a 12 core and underclocking it.



Dark Rock Pro 4? You'll be fine. C14S is a step down from DRP4, was like 2-5C hotter on my 3700X, and I've been running fine all this time. U9S should still be okay even, maybe at the edge at 142W in hotter workloads.

At stock, a 5900X in MT behaves similarly to a stock 3700X, just with higher and more unpredictable ST spikes from the higher ST boost. 4950 is the ST cap out of the box; if you're lucky, PBO can take you up to 5150, but that's strictly dependent on your CPU sample. I have seen 5900Xs that max out at 5150 on multiple cores, and 5900Xs that struggle to reach 4900.

L12S and lower is where the 5900X starts to fall apart (without a cool ambient and case airflow that is overwhelmingly geared towards helping it out).

Better multicore always comes from having more cores; OCing a 5800X to beat a 5900X, or an OC'd 5900X to beat a 5950X, is extremely tough and pointless. The 5700X may look efficient with its 76W PPT, but once you start pushing it there will be little difference from the 5800X. Get the single-CCD parts over 100W and you start to feel the heat.



kapone32 said:


> Actually the 5900X is not that bad. All AM4 CPU performance is affected by AGESA updates and BIOS settings as well, and your sustained performance is largely determined by your cooling potential. As an example, the 5950X has a base clock of 3.4 GHz but will go to 4.9 with a 280 mm AIO and 5.2 with a 360 mm AIO. I will say, though, that even a decent air cooler/240 mm AIO in a properly ventilated case will probably give you 4.9 GHz with the 5900X. The 5600X is not a good comparison as it has a 65-75 W power limit, and the 5800X is actually the hottest AM4 CPU; the 5700X runs cooler, but neither is in the same league as the 5900X in raw performance while producing a similar amount of heat. The 5900X pulls about 105-125 W max, and the fact that it has 2 CCDs allows the heat to be spread out better. As far as undervolting, this is my thought: AM3 CPUs were great for UV/OC, especially the 8000 series, but with AM4 AMD took over the OC of the chip. I tried an all-core OC and got to 4.7 GHz on all cores, but the CPU would idle at 50 C. The other thing is, right now as I am typing, my CPU is at 0.7 GHz @ 1.1 V using 12 W of power, so we are good. In fact, unless I am making a video or gaming with CPU-intensive games or tasks, I do not see my CPU pull over 50 W, so AMD is actually great at making sure your CPU runs cool. Some AM4 motherboards also do not like any user-limited voltages, so be aware of that too.



I'm not sure what sorts of "intensive" games you play where the 5900X always draws less than 50W, seems a little exaggerated. With a 2060 Super and 3070 Ti I've never seen less than 50W package power in any game. 50W is honestly pretty ambitious for demanding games even on a single-CCD (3700X) or APU (5700G).

~60W is about the norm for the lightest games, most settle around 90-100W, an increasing number of games will regularly run up to the 125-142W area. DCS, MW2019, BFV, just off the top of my head......


----------



## kapone32 (Sep 16, 2022)

tabascosauz said:


> I'm not sure what sorts of "intensive" games you play where the 5900X always draws less than 50W, seems a little exaggerated. With a 2060 Super and 3070 Ti I've never seen less than 50W package power in any game. 50W is honestly pretty ambitious for demanding games even on a single-CCD (3700X) or APU (5700G).
> 
> ~60W is about the norm for the lightest games, most settle around 90-100W, an increasing number of games will regularly run up to the 125-142W area. DCS, MW2019, BFV, just off the top of my head......


I have a 6800XT so that could be part of it. When I am playing a 4x20 x2 unit (Ultra) battle in TWWH3 Immortal Empires I can see up to 100W CPU power draw, and the same can be seen in High Risk Areas in Everspace 2. When I am playing Pacman 256 or Raiden Legacy or even Torchlight I don't see past 40W on the CPU. I do agree with you, though, that new games do push the CPU (and GPU) to run as high as possible.


----------



## Zach_01 (Sep 17, 2022)

Was just testing Far Cry New Dawn. With the GPU capped at 60FPS the 5900X averaged about 100W; without the GPU cap, around 110W.
Boost both times was up to ~4.95GHz for the best cores and 4.85~4.9GHz for the worst.
CPU thermal limit at 75C, ambient 27~28C, boost override +50MHz.
System power differed by ~100W on average (+10W CPU, +80W GPU, +10W the rest, I guess).






With the same PBO settings (see specs) and CO (negative 7~15), max MT temp is below 70C.

If you want to control temps, it's different for MT versus ST/mid-threaded loads.
For MT you limit PPT/EDC to the desired level.
For ST you can't really do anything other than limit temperature.


----------



## tabascosauz (Sep 17, 2022)

Zach_01 said:


> Was just testing Far Cry New Dawn. With the GPU capped at 60FPS the 5900X averaged about 100W; without the GPU cap, around 110W.
> Boost both times was up to ~4.95GHz for the best cores and 4.85~4.9GHz for the worst.
> CPU thermal limit at 75C, ambient 27~28C, boost override +50MHz.
> System power differed by ~100W on average (+10W CPU, +80W GPU, +10W the rest, I guess).
> ...



There is always the option of setting negative boost clock override to reduce ST clock, but I fully agree - no reason to do so on a Dark Rock Pro 4. ST temp spikes won't be anywhere near concerning, MT temps will be fine at 142W.


----------



## Zach_01 (Sep 17, 2022)

tabascosauz said:


> There is always the option of setting negative boost clock override to reduce ST clock, but I fully agree - no reason to do so on a Dark Rock Pro 4. ST temp spikes won't be anywhere near concerning, MT temps will be fine at 142W.


Yeah, there's that too, to cut down ST boost, but I find it less optimal performance-wise.
I believe that with a positive boost override plus a thermal limit you get a more sustained ST boost that can potentially improve with lower ambient temps or better cooling.


----------



## freeagent (Sep 17, 2022)

5900X @ 142W is really not that bad at all. To put it in perspective, my 5600X under full power limits, with an aggressive curve, running Linpack can do ~135W PPT @ 4600MHz. 142W on a 5900X should be about 55-60C.


----------



## tabascosauz (Sep 17, 2022)

freeagent said:


> 5900X @ 142W is really not that bad at all. To put it in perspective, my 5600X under full power limits, with an aggressive curve, running Linpack can do ~135W PPT @ 4600MHz. 142W on a 5900X should be about 55-60C.



Yeah, but that's just because you leave your windows open in the winter.  The FC140 with fast fans is a bit more capable than the DRP4.


----------



## Zach_01 (Sep 17, 2022)

freeagent said:


> 5900X @ 142W is really not that bad at all. To put it in perspective, my 5600X under full power limits, with an aggressive curve, running Linpack can do ~135W PPT @ 4600MHz. 142W on a 5900X should be about 55-60C.


135W with what PowerReportingDeviation?

-----------------------------------------------------------------

CB R23 run (10min). Readings steady after 4~5min
Ambient 28C
Cooler maxed (5+year old H110i 280mm)

CPU PPT ~142W (true avg)
CPU temp 67C avg,
~4.4GHz


----------



## freeagent (Sep 17, 2022)

Zach_01 said:


> 135W with what PowerReportingDeviation?


Have a look


----------



## SpittinFax (Sep 17, 2022)

Chrispy_ said:


> Efficiency falls off a cliff at higher clocks; in other words, a CPU can use twice the power to run 20% faster.
> 
> If you want multicore performance without silly power consumption, buy more cores and tell them they can only have 142W. You can absolutely run a 5950X with a 120mm air cooler and get great performance; it's just that you won't get 4.4GHz all-core at the base TDP of 105W (142W boost).
> 
> I have a bunch of 5950X rendering nodes that are cooled with dual-fan NH-U9S 92mm coolers and they render animation frames in VRay at 3.5GHz or so all day without breaking a sweat. Just because you _can_ run them at 280W with insane cooling to get another 900MHz doesn't mean you have to. Modern CPUs and boards let you choose how much power you want to use, and a 5950X is remarkably impressive in eco-mode (65W) and will probably still get close to a 5900X drawing 4x the power.



That's nuts. And that's what I like about older Xeon chips: they're normally downclocked so that they can churn through productivity work all day. The E5-2678 v3 runs so cool that even the cheapest tower cooler is enough. Tuning a 5900X in a similar way would limit performance a lot, but I'm not too worried about leaving performance on the table if it means better efficiency.



tabascosauz said:


> Dark Rock Pro 4? You'll be fine. C14S is a step down from DRP4, was like 2-5C hotter on my 3700X, and I've been running fine all this time. U9S should still be okay even, maybe at the edge at 142W in hotter workloads.
> 
> At stock, a 5900X in MT behaves similarly to a stock 3700X, just with higher and more unpredictable ST spikes from the higher ST boost. 4950 is the ST cap out of the box; if you're lucky, PBO can take you up to 5150, but that's strictly dependent on your CPU sample. I have seen 5900Xs that max out at 5150 on multiple cores, and 5900Xs that struggle to reach 4900.
> 
> ...



ST on the 5600X is pretty good so I could just downclock the 5900X to 4.6GHz. For comparison my 5600X hovers around 40W PPT while gaming so it has very good efficiency under those loads.

Too bad it's not possible to have the single-core efficiency of a 5600X with the multi-threaded performance of a 5900X, but I guess there has to be a compromise at some point.


----------



## tabascosauz (Sep 17, 2022)

SpittinFax said:


> ST on the 5600X is pretty good so I could just downclock the 5900X to 4.6GHz. For comparison my 5600X hovers around 40W PPT while gaming so it has very good efficiency under those loads.
> 
> Too bad it's not possible to have the single-core efficiency of a 5600X with the multi-threaded performance of a 5900X, but I guess there has to be a compromise at some point.



The cores are the same, maybe better binned if lucky.

All round the 2CCD parts will just draw more power, more silicon to power and more losses. As usual, SOC power scales with VSOC so especially around 3600 you can probably drop VSOC quite a bit (~1.02V for me vs. 1.11V at 3800), but you just won't match 1CCD SOC power draw. In turn, 1CCD doesn't have a chance in hell of matching APU SOC power draw. Just chiplet things.

SOC doesn't tell the whole story either; on 2CCD there's also a fair bit of power always lost to minor rails. Minor rails are less accurately measured/estimated iirc so there's always something like 10W overhead when subtracting CPU+SOC from Package power. 1CCD is better by a few watts (whereas overhead on APU is negligible) but again, just chiplet life. It makes Zen 3's overall efficiency look even worse than it really is.
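That subtraction is easy to reproduce from HWiNFO readings. A trivial sketch (all numbers invented for illustration, not real measurements):

```python
# "Minor rails" overhead as described above: whatever package power
# isn't accounted for by the core and SOC readings.
# All values here are invented for illustration.

package_w = 105.0   # CPU Package Power
cores_w = 82.0      # sum of per-core power readings
soc_w = 13.0        # SOC power

overhead_w = package_w - (cores_w + soc_w)
print(f"unaccounted minor-rail power: {overhead_w:.1f}W")  # 10.0W
```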

With a decent cooler, SOC power doesn't contribute much to overall thermals so it doesn't matter that much. All of the significant heat and spikes come from CCD. Towards the high end of Fabric speeds you'll be looking at 15-20W regularly, maybe peaking slightly above 20W in extreme UMC load (ie. TM5).

There isn't always a correlation between core clock and per-core power. If you try a variety of different games/workloads you will sometimes see extreme power draw and blistering heat at surprisingly low ST clocks (ie. 18-21W per core), while in other workloads you may see crazy high 5.0GHz+ at average temps and barely over 10W per core.


----------



## Zach_01 (Sep 17, 2022)

freeagent said:


> Have a look
> 
> View attachment 261954


I can't really look, as you have it hidden... lol
Power Reporting Deviation must be observed during the run; after the test finishes I would never know what it was just by looking at a screenshot like the above.
Also, you don't have Snapshot CPU Polling enabled like you should, and it would be nice for avg values to be visible too.


----------



## SLObinger (Sep 17, 2022)

I went from an R9 3900X to a 5900X and it was a night-and-day difference, especially in single-threaded workloads. PBO works really well with Zen 3. I would definitely go for that upgrade.


----------



## freeagent (Sep 17, 2022)

Zach_01 said:


> Power Reporting Deviation must be observed during the run; after the test finishes I would never know what it was just by looking at a screenshot like the above.
> Also, you don't have Snapshot CPU Polling enabled like you should, and it would be nice for avg values to be visible too.


I didn't think PRD was important; that is why it's not visible. I don't have the average visible because it drops as soon as the load is off, and you have to be right there to snapshot it or else it is just wasted time.


----------



## Zach_01 (Sep 17, 2022)

freeagent said:


> I didn't think PRD was important; that is why it's not visible. I don't have the average visible because it drops as soon as the load is off, and you have to be right there to snapshot it or else it is just wasted time.


Yes, that is what I was looking for, because the 135W can be something else completely when PRD is taken into account. It is important at 100% MT load.
It can truly be 135W, or 110W, or 150W…

If anyone wants to learn something more about their system they should start the bench/stress test, reset the HWiNFO values, and take the screenshot right before the test is over. Anything else is just wrong; you can't conclude anything from only seeing lows/highs.


----------



## freeagent (Sep 17, 2022)

Zach_01 said:


> I can't really look, as you have it hidden... lol
> Power Reporting Deviation must be observed during the run; after the test finishes I would never know what it was just by looking at a screenshot like the above.
> Also, you don't have Snapshot CPU Polling enabled like you should, and it would be nice for avg values to be visible too.


Ok, this is what you mean, right? Snapshot polling, PRD, and the averages... I let TM5 run for a couple of hours


----------



## tabascosauz (Sep 17, 2022)

freeagent said:


> Ok, this is what you mean, right? Snapshot polling, PRD, and the averages... I let TM5 run for a couple of hours
> 
> View attachment 262033



TM5 load and clocks vary a lot, maybe a dedicated CPU test works better (dunno about Linpack, also kinda variable). TM5 doesn't even run true all-core at all times I think

As long as you're there observing when the CPU is relatively consistently at 100% load, the number you see should be reliable


----------



## Zach_01 (Sep 17, 2022)

freeagent said:


> Ok, this is what you mean, right? Snapshot polling, PRD, and the averages... I let TM5 run for a couple of hours
> 
> View attachment 262033


Is TM5 loading the CPU this much? Haven't run it for some time now, and I'm not close to my system right now.

Nevertheless…
At first sight someone could say that you are beating the crap out of that 5600X, with power up to 170W and 180A. For a single-CCD CPU those are killer readings, and not in the good way…
Taking the PRD of 130~135% into account, though, things change drastically.

135W PPT with 132% PRD means that the actual power of the CPU (PPT) is 135/1.32 = 102~103W.
I find that ok for a 6-core CPU, as my R5 3600 runs about the same but with lower current (A).
Applying the same to the 180A:
180/1.32 = ~136A
I'm not sure this is a legit way to calculate true current (A), though.
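The correction being applied here is just a division by the deviation. Expressed as a snippet (the 135W/132%/180A figures are the ones from this exchange, and whether the same scaling is valid for current is an open question, as noted):

```python
# HWiNFO's Power Reporting Deviation (PRD), per this exchange: at
# 100% MT load, a PRD above 100% means the board over-reports power
# to the CPU, so the true draw is the reading divided by the deviation.

def true_reading(reported, prd_percent):
    """Scale a reported PPT (or, tentatively, EDC) reading by PRD."""
    return reported / (prd_percent / 100.0)

print(f"{true_reading(135, 132):.0f}W")  # ~102W actual package power
print(f"{true_reading(180, 132):.0f}A")  # ~136A, if current scales the same way
```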

I wouldn't like my CPU to sustain this kind of current through it, even 140A on a single CCD.
Personally I don't care about 100% load MT boost, so I keep it under the default limit, at 125A (from 140A), on the dual-CCD 5900X.

Also, I'm not sure that any Ryzen CPU needs 140A, let alone more, for "just" 100~105W.
This looks to me like it's on the inefficient side.
High current (A) without correspondingly high power could mean "unnecessary" voltage (?).
Are you using a PBO Scalar beyond the default (1X) to override health management?

I assume also (from the PBO limit readings) that you're using motherboard PBO limits?
If yes, then boards are stupid enough to increase all PBO limits to the max available.
Another inefficiency…

Maybe you should try to limit current (A) more, if you care about CPU longevity.
I know the temp is not all that high, but this is not really a CPU test, right?

Curve Optimizer settings? Max (30)?

I'd like to see it with something a little more CPU-oriented, like CB R23. And it doesn't have to be for 2+ hours;
10~15 min will suffice.


----------



## freeagent (Sep 17, 2022)

Sure I’ll run it. My first pic was 5600X and the second was my 5900X


----------



## tabascosauz (Sep 19, 2022)

freeagent said:


> Sure I’ll run it. My first pic was 5600X and the second was my 5900X



Sometimes cinebench doesn't max out PPT (hence why it runs so cool at stock) but CPU-Z Stress option is pretty stable and should be reasonably accurate


----------



## freeagent (Sep 19, 2022)

tabascosauz said:


> Sometimes cinebench doesn't max out PPT (hence why it runs so cool at stock) but CPU-Z Stress option is pretty stable and should be reasonably accurate


Oh yeah, it does not come close to maxing PPT; I just use Linpack Xtreme because it will max out PPT pretty easily. A half hour of R23 and all cores just run at 4600 the entire time, with temps in the mid to upper 70s at a 20C ambient; with Linpack they run at 4475-4525 with loads at 80-85ish depending on my ambient. One thing I like about the 5900X is that it can be as tame as a kitty cat, or a fire-breathing dragon, depending on how much you want/can feed it

I can set 240W PPT but will not get more than 235.5W. That must be a coded limitation within the depths of the board or CPU somewhere. But still, 235W PPT is pretty intense..


----------



## tabascosauz (Sep 19, 2022)

freeagent said:


> Oh yeah, it does not come close to maxing PPT; I just use Linpack Xtreme because it will max out PPT pretty easily. A half hour of R23 and all cores just run at 4600 the entire time, with temps in the mid to upper 70s at a 20C ambient; with Linpack they run at 4475-4525 with loads at 80-85ish depending on my ambient. One thing I like about the 5900X is that it can be as tame as a kitty cat, or a fire-breathing dragon, depending on how much you want/can feed it
> 
> I can set 240W PPT but will not get more than 235.5W. That must be a coded limitation within the depths of the board or CPU somewhere. But still, 235W PPT is pretty intense..



You don't have to run high PPT. For the purposes of determining power deviation you can (and maybe should, since 200W+ PBO is so wonky nowadays) just run completely stock and look at the number during all-core load.


----------



## lexluthermiester (Sep 19, 2022)

freeagent said:


> Sure I’ll run it. My first pic was 5600X and the second was my 5900X





tabascosauz said:


> Sometimes cinebench doesn't max out PPT (hence why it runs so cool at stock) but CPU-Z Stress option is pretty stable and should be reasonably accurate


Prime95 64bit.


----------



## tabascosauz (Sep 19, 2022)

lexluthermiester said:


> Prime95 64bit.



Small FFT doesn't max out PPT anymore on stock Ryzen. Boost algorithm for Vermeer and newer CPUs basically recognizes Prime95 as a power virus and automatically throttles down to exactly base clock (3.7GHz), only draws about 130W.

That said, power deviation metric still appears to be accurate in Small FFT.


----------



## DebenPoison (Sep 19, 2022)

Talk to me friends ~ what should I be enabling, disabling or manually adjusting in my BIOS to achieve better PBO results on this 5900X?

Currently all I have is everything at default other than DOCP @ 3200MHz for memory, and 3 x PBO enabled settings in BIOS. Enabled seems to grab an extra ~100MHz vs Auto. I did see a PBO-related setting to increase max CPU MHz, with a limit of up to 200MHz, but that is set to Auto at present.

Running the Asus TUF X570 WiFi motherboard with a fairly current BIOS (2021), though there is newer available. Also running a Cooler Master ML240L V2 AIO, which I'll be replacing with a 360mm Arctic Liquid Freezer II A-RGB in the next couple of weeks. Current thermal paste is MX-2.

I have set the thermal limit in BIOS to Auto; perhaps this should be set to 80C or more? Also, I note the voltage looks lower than it could be: 1.288V CPU Core VID, and the SoC voltage seems low compared to the above screenshots. Are those being set manually, or do we typically leave them on Auto?


----------



## Mussels (Sep 19, 2022)

"and 3 x PBO enabled settings in BIOS"
What does this mean?


The TUF X570 is one of their weaker boards; performance is fine but they cut corners. Watch your chipset temps, because if it's like my X570-F, the thermal pad they used will have dried up and gone to shit long ago.


CPU-Z is not a good benchmark or stress test on AMD; Cinebench is heavier but a more reliable test. If you're getting 4.55GHz all-core, you're doing better than most of us.



HWiNFO results don't tell us much if it's been running for under 2 minutes, btw. You need to start it before the benchmarks for the idle results, then run the test programs so we have something to compare to and can see how high temps get over time.


----------



## tabascosauz (Sep 19, 2022)

@DebenPoison

1. SVI2 Vcore only measures properly at load. When you see 1.4V+ it's either at idle (where it's incapable of measuring actual Vcore behaviour), or high single core load. If you ran 1.4V during an all-core load you would have thermal shutdown a long time ago, Zen 3 is designed for a 1.2-1.25V all core Vcore at stock.

2. Auto PBO boost override is +0. If you want more you need to specify it.

3. All the boost override does is change your global limit (refer to your HWInfo sensor list). Stock is 4950MHz, goes up to 5150. If you're lucky you'll only ever bump into the global limit on 1 or 2 core load, and you might get some more ST performance out of a 5150 limit, but all-core will be unaffected by what you set because it will always clock much lower.

4. What exactly have you changed in BIOS? I've never seen CPU-Z drop only CCD2 clocks like that even in the Stress option. 4.55 is relatively high for CPU-Z MT but the score is low for that clock. Run some other benchmarks, I'd run CPU-Z for thermal testing and power deviation, not for the score.


5. I don't see VSOC being low or a problem, nor how it is relevant to cores performance.


----------



## phill (Sep 19, 2022)

I'd say, if you'd like to upgrade then upgrade, as I think for gaming the 5900X is a decent step up from the 3700X. Of course, as has been mentioned, if you don't really need the cores and just game, then a 5800X3D might be a better CPU for you overall, but on value for money I think the 5900X would get the nod from me.  I'll be moving from a Ryzen 2700 and a 2700X to one at some point for my girls' rigs.


----------



## Zach_01 (Sep 19, 2022)

DebenPoison said:


> Talk to me friends ~ what should I be enabling, disabling or manually adjusting in my BIOS to achieve better PBO results on this 5900X?
> 
> Currently all I have is everything at default other than DOCP @ 3200MHz for memory, and 3 x PBO Enabled settings in BIOS.  Enabled seems to grab an extra ~100MHz vs Auto. I did see a PBO-related setting to increase the max CPU clock by up to 200MHz, but that is set to Auto at present.
> 
> ...







I have a thermal limit at 75°C and Curve Optimizer negative, with magnitudes from 8 (best cores) to 18 (worst cores).











Mussels said:


> "and 3 x PBO enabled settings in BIOS"
> What does this mean?


Probably he means scalar...



tabascosauz said:


> @DebenPoison
> 4. What exactly have you changed in BIOS? I've never seen CPU-Z drop only CCD2 clocks like that even in the Stress option. 4.55 is relatively high for CPU-Z MT but the score is low for that clock. Run some other benchmarks, I'd run CPU-Z for thermal testing and power deviation, not for the score.


Never tested it myself on CPU-Z, but without CO the second CCD clocks lower in other MT tests, since all the cores in it are 7-12 in perf order.


----------



## lexluthermiester (Sep 19, 2022)

tabascosauz said:


> Small FFT doesn't max out PPT anymore on stock Ryzen. Boost algorithm for Vermeer and newer CPUs basically recognizes Prime95 as a power virus and automatically throttles down to exactly base clock (3.7GHz), only draws about 130W.
> 
> That said, power deviation metric still appears to be accurate in Small FFT.


I did not know that. WTH AMD? That's some dumbass BS.


----------



## tabascosauz (Sep 19, 2022)

Zach_01 said:


> Never tested it myself on CPU-Z, but without CO the second CCD clocks lower in other MT tests, since all the cores in it are 7-12 in perf order.



You observed CCD2 dropping 350MHz effective below CCD1 while having the same core clocks?



lexluthermiester said:


> I did not know that. WTH AMD? That's some dumbass BS.



If boost algo treated Small FFT like regular AVX the CPUs would go up in flames, they just don't do all-core the same way as Intel. 3.7GHz still does almost 100W core power - there's no way not throttling would still stay within 142W PPT. The generally lower stock Vcore and more aggressive boost algo account for all of Zen 3's lower temperatures compared to Zen 2 - crank up the Vcore to Matisse levels (1.28-1.35V) and they will behave the same.


----------



## lexluthermiester (Sep 19, 2022)

tabascosauz said:


> If boost algo treated Small FFT like regular AVX the CPUs would go up in flames, they just don't do all-core the same way as Intel. 3.7GHz still does almost 100W core power - there's no way not throttling would still stay within 142W PPT. The generally lower stock Vcore and more aggressive boost algo account for all of Zen 3's lower temperatures compared to Zen 2 - crank up the Vcore to Matisse levels (1.28-1.35V) and they will behave the same.


I hate to say this, but it's been 6 years and I'm STILL trying to get a grip on how AMD does power delivery on Ryzen CPUs. It's a complicated, convoluted mess. I get that it's a necessity, but still...


----------



## Zach_01 (Sep 19, 2022)

tabascosauz said:


> You observed CCD2 dropping 350MHz effective below CCD1 while having the same core clocks?


I will do tests w/ and w/o CO later when I get back home.
CB R23, CPU-Z…
Any other workload?



lexluthermiester said:


> I hate to say this, but it's been 6 years and I'm STILL trying to get a grip on how AMD does power delivery on Ryzen CPUs. It's a complicated, convoluted mess. I get that it's a necessity, but still...


What I also find odd is the temp of the second CCD. Even at the same discrete and effective clocks, the 2 CCDs have very different temps. I will also try to demonstrate this by showing the C-states of each core/CCD during the MT tests.


----------



## freeagent (Sep 19, 2022)

I open the taps for all the power the socket will give me, and adjust my curve to that. Runs a bit warm at full load, but it’s manageable, and within safe numbers so all good.


----------



## tabascosauz (Sep 19, 2022)

Zach_01 said:


> I will do tests w/ and w/o CO later when I get back home.
> CB R23, CPU-Z…
> Any other workload?
> 
> ...



No, as in, effective clock 350MHz below core clock for 1 CCD is not normal in any MT benchmark lol, hence why I asked exactly what settings OP currently messed with

CPU-Z is not a good test for score comparison, but I know how Zen3 behaves during Stress test better than any other benchmark. It occasionally drops score suddenly around the 10-15min mark then spends a long time slowly building back up, but clocks/temps/power are always consistent.

Every CPU has different core quality and IHS contact. I don't even look at the CCD1 and CCD2 average anymore, useless after HWInfo added per-core temps. Mine looks wildly different to yours, CCD2 about 3 degrees warmer at iso-clock/Vcore/power, 13-15C deltas between some CCD2 cores - it doesn't mean anything if the cores are performing up to par and drawing reasonable power. Just production variation. Why would C-states ever matter if all cores should be at full tilt during a heavy, consistent all-core test?
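As an aside, the "effective clock" metric being debated here can be sketched with a toy model: HWInfo's effective clock is roughly the discrete core clock weighted by the fraction of time the core actually spends clocking rather than halted, which is why a core can report 4.55GHz discrete while its effective reading sits hundreds of MHz lower. The function below is purely illustrative (the name and the simple multiplication are my assumptions, not HWInfo's actual computation):

```python
def effective_clock(discrete_mhz, active_fraction):
    """Toy model: effective clock is roughly the discrete clock weighted
    by the fraction of samples where the core is actually running
    (not halted in a sleep state)."""
    return discrete_mhz * active_fraction

# A core reporting 4550 MHz discrete but only actively clocking ~92%
# of the sample window shows an effective clock roughly 350 MHz lower:
print(effective_clock(4550, 0.92))  # -> ~4186 MHz
```

Under a heavy, consistent all-core load the active fraction should be near 1.0, which is why a 350MHz discrete-vs-effective gap on one CCD looks abnormal there.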


----------



## Zach_01 (Sep 20, 2022)

tabascosauz said:


> No, as in, effective clock 350MHz below core clock for 1 CCD is not normal in any MT benchmark lol, hence why I asked exactly what settings OP currently messed with
> 
> CPU-Z is not a good test for score comparison, but I know how Zen3 behaves during Stress test better than any other benchmark. It occasionally drops score suddenly around the 10-15min mark then spends a long time slowly building back up, but clocks/temps/power are always consistent.
> 
> Every CPU has different core quality and IHS contact. I don't even look at the CCD1 and CCD2 average anymore, useless after HWInfo added per-core temps. Mine looks wildly different to yours, CCD2 about 3 degrees warmer at iso-clock/Vcore/power, 13-15C deltas between some CCD2 cores - it doesn't mean anything if the cores are performing up to par and drawing reasonable power. Just production variation. Why would C-states ever matter if all cores should be at full tilt during a heavy, consistent all-core test?


You are right of course about C-States...
But I noticed now that my second CCD (at full MT load) has a slightly higher effective clock but lower power consumption and, of course, temp.
The eff. clock diff is not as distinct in CB R23 as in other tests. The individual core powers follow exactly the "perf #" order, and this is true in both cases, w/ CO and w/o CO.

Settings:




------------------------------------------------

CPU-Z stress test w/ CO


CPU-Z stress test w/o CO


------------------------------------------------

FurMark CPU w/ CO


FurMark CPU w/o CO


------------------------------------------------

P95 (small FFT, 128KB~128KB) w/ CO


P95 (small FFT, 128KB~128KB) w/o CO


------------------------------------------------

Blender (barbershop) w/ CO


Blender (barbershop) w/o CO


------------------------------------------------

CB R23 (MT) w/ CO


CB R23 (MT) w/o CO


----------



## DebenPoison (Sep 20, 2022)

tabascosauz said:


> @DebenPoison
> 
> 1. SVI2 Vcore only measures properly at load. When you see 1.4V+ it's either at idle (where it's incapable of measuring actual Vcore behaviour), or high single core load. If you ran 1.4V during an all-core load you would have thermal shutdown a long time ago, Zen 3 is designed for a 1.2-1.25V all core Vcore at stock.
> 
> ...



PBO Boost override - what is an optimal value to set, or is Auto preferred here? I occasionally see two cores hit 5GHz; wonder if setting +200MHz is worthwhile then?

PS: Was only running CPU-Z to measure the clocks under load, not the score. 



Zach_01 said:


> Probably he means scalar...
> 
> 
> Never tested it myself on CPU-Z, but without CO the second CCD clocks lower in other MT tests, since all the cores in it are 7-12 in perf order.



The Asus X570 TUF is confusing at best when enabling PBO. It has PBO settings in multiple places and I didn't have the names on hand for each setting.

The first two screenshots seem to indicate the same PBO setting but weirdly they can have different settings configured, so they don't appear to be in sync.

The first screenshot offers a few additional settings too that the second does not, but it does allow you to select advanced which provides additional PBO settings including Scalar and others. The third screenshot references the advanced setting.

First SS -> AI Tweaker
Second SS -> Advanced Menu
Third SS -> Advanced Menu -> Advanced PBO setting unlocking additional PBO settings

Setting the 3 I have enabled to Auto provides a slightly worse clock on all cores under full load (about 100MHz), therefore I've got them set to Enabled.















Mussels said:


> "and 3 x PBO enabled settings in BIOS"
> What does this mean?
> 
> 
> ...



See above post for reference of the PBO settings thanks!

I haven't seen an issue on the chipset temp; what are chipsets designed to reach nowadays? 60°C max?

HWInfo was reset once the CPU was under full load, then captured still under full load.  I merely wanted to present what the clocks looked like under load.  The idle results are pretty good, 34°C and below across most cores. I can reference this if there would be further value in seeing it  

In game I'm not seeing above 64°C or 65°C, but a stress test with Prime95 is murder, see below with idle temps.  The P95 test is Small FFTs for 5 minutes.


----------



## Zach_01 (Sep 20, 2022)

DebenPoison said:


> PBO Boost override - what is an optimal value to set, or is Auto preferred here? I occasionally see two cores hit 5GHz; wonder if setting +200MHz is worthwhile then?
> 
> PS: Was only running CPU-Z to measure the clocks under load, not the score.
> 
> ...


Typically, PBO on Auto means disabled.
Meaning the PBO limits are at their default settings.
For the 5900X they are:
PPT: 142W
TDC: 90A
EDC: 140A

When you turn PBO to Enabled, those limits are set by the board.
Typically boards set the limits to sky-high values that completely throw the CPU out of its optimal operating range.

Observe in your last P95 SS that you're hitting PPT = 200W; hence the near-limit (90°C) temperature.
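The stock limits quoted above can be sketched as a simple check. This is a toy illustration (the function and its name are made up, not any real API; the real SMU also weighs telemetry, thermals and the FIT controller), but the idea that whichever limit is reached first becomes the ceiling holds:

```python
# Stock PBO limits for a 5900X, as quoted above.
STOCK_LIMITS = {"PPT_W": 142, "TDC_A": 90, "EDC_A": 140}

def limiting_factor(ppt_w, tdc_a, edc_a, limits=STOCK_LIMITS):
    """Return which stock limit(s) the CPU is currently bumping into."""
    hits = []
    if ppt_w >= limits["PPT_W"]:
        hits.append("PPT")
    if tdc_a >= limits["TDC_A"]:
        hits.append("TDC")
    if edc_a >= limits["EDC_A"]:
        hits.append("EDC")
    return hits or ["none"]

# A board-forced 200W PPT, as in the P95 screenshot, is far past stock:
print(limiting_factor(200, 85, 120))  # -> ['PPT']
```

At stock, sitting at one of these limits under all-core load is normal; with the board's "Enabled" values the limits effectively vanish, which is why temps climb toward 90°C.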

I'm not familiar with ASUS boards and BIOS, but I do know that "PBO Fmax Enhancer" complicates things even more than they already are.
I would keep PBO Fmax Enhancer disabled, set PBO to Advanced, and set everything manually to your preference (within reason).

First you have to know what you want more...
Best gaming performance, or best 100% (MT) load performance. You can't have both. You can have a bit of both, but not the best of both at the same time.

By default the max allowed clock of the 5900X is 4950MHz. With boost override you can add up to 200MHz (in steps of +25MHz).
Curve Optimizer, on the other hand, is a way to alter the voltage/frequency (v/f) curve. With negative steps (1~30) you set the magnitude of the curve shift (per core or for the entire CPU), telling it to run a higher boost (f) at the same voltage (v). This affects the entire clock range of the CPU.
With reasonable PPT/EDC limits you can get slightly higher clocks (within the same power envelope in W) with each negative step.
Curve Optimizer mostly affects 100% CPU load, and less the single/middle-threaded loads like gaming.

The thing is that you can't set +200MHz on boost override and also go all the way (20~30 steps) on Curve Optimizer, because the clock stretching will be too high and some cores will start losing stability.
Both settings (boost override / Curve Optimizer) stretch the boost clock in different ways.

Like I said before, you have to decide what you want/need more for your daily tasks (and not for benchmark scores, like too many users do).
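The boost override vs Curve Optimizer distinction above can be put in code. This is a hypothetical, deliberately simplified linear v/f model (the 3000MHz intercept, the 6000MHz/V slope, and the 5MHz-per-CO-step gain are invented for illustration; AMD's real curve is proprietary and per-core), but it shows the key difference: CO shifts the whole curve, while boost override only raises the Fmax cap:

```python
MHZ_PER_CO_STEP = 5  # assumed gain per negative CO count (illustrative)

def boost_clock(voltage, co_negative_steps=0, fmax=4950, boost_override=0):
    """Frequency (MHz) at a given Vcore under this toy linear v/f curve."""
    base = 3000 + (voltage - 1.0) * 6000        # toy v/f line, not AMD's
    shifted = base + co_negative_steps * MHZ_PER_CO_STEP  # CO shifts curve
    return min(shifted, fmax + boost_override)  # Fmax (+override) caps it

# Same 1.25 V, stock vs CO -15: the whole curve moves up.
print(boost_clock(1.25))                        # -> 4500.0
print(boost_clock(1.25, co_negative_steps=15))  # -> 4575.0
# At high voltage, even CO -30 can't exceed the 4950+200 global cap:
print(boost_clock(1.35, co_negative_steps=30, boost_override=200))  # -> 5150
```

In this model you can also see the stability trade-off: pushing both knobs hard asks the silicon to run the same frequency at ever lower effective voltage, which is where clock stretching begins on weaker cores.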


----------



## tabascosauz (Sep 20, 2022)

Zach_01 said:


> You are right of course about C-States...
> But I noticed now that my second CCD (at full MT load) has a slightly higher effective clock but lower power consumption and, of course, temp.
> The eff. clock diff is not as distinct in CB R23 as in other tests. The individual core powers follow exactly the "perf #" order, and this is true in both cases, w/ CO and w/o CO.



Wow! Seems like you really lucked out on that second CCD. Core deltas looking good. Higher clocks on that CCD maybe because it's 8C cooler, idk if that's a feature because I don't usually see CCD1-CCD2 gap so big. Small clock differences though.



DebenPoison said:


> PBO Boost override - what is an optimal value to set, or is Auto preferred here? I occasionally see two cores hit 5GHz; wonder if setting +200MHz is worthwhile then?
> 
> PS: Was only running CPU-Z to measure the clocks under load, not the score.
> 
> ...



Enable snapshot polling so Core Clock metrics actually make sense.

You can set +200 if you want, it shouldn't hurt anything or change overall boost behaviour. If you have cores capable of scaling past 4950, they will make themselves known after you set +200. More aggressive CO offsets on those cores can take them further, to a point.

In Asus BIOS, you should be doing all your PBO settings in AI Tweaker. Except Curve Optimizer - last I checked AGESA 1207 still hasn't changed it and you need to go to Advanced>AMD OC to do your Curve Optimizer settings. There is a CO menu in Tweaker but it ranges from wonky to straight up doesn't work. Rest of the settings, do them in Tweaker not AMD OC.

The aggressive boost throttling only works at stock power limits. When you enable PBO (even if you don't touch anything and leave the settings at auto) most vendors treat it as a free "go fast" button. Boost algo is not nearly as conservative then.

Still not sure why you have a 400MHz gap between CCD1 and CCD2. Core clock looks fine but the bench scores so far look more like effective clock is telling the truth. 

The 4201 BIOS works okay on my Impact, but even if they mostly copy+paste the same update between boards, sometimes vendors fuck up implementation for one particular board. Might be worth trying some other recent BIOSes to see if that behaviour continues.


----------



## DebenPoison (Sep 24, 2022)

Some improvements have been made... still playing with various settings and once I get optimal results from CPU, will begin with memory (guess this thread should be moved to the OC channel  )


----------



## DebenPoison (Sep 25, 2022)

Any issues with these temps? They're higher than expected with an Arctic Liquid Freezer II 360mm AIO.  The mounting really sucks on this AIO with the Asus X570 TUF mobo.  On the first mount the PC wouldn't boot, probably shorting out even though there was just the slightest gap between the board's caps and the mounting bracket.  I re-mounted with less pressure, but I think I'm seeing the result in temps that are higher than I'd like. Am I OK to run like this daily? In game I can see up to 68-72°C at peak.


----------

