
Ryzen Owners Zen Garden

Look, I took screenshots with DOCP off (+ CPU boost too) and with DOCP Tweaked (the most tweaked preset, by Asus lol):
1.125 V max? Seriously? :eek: Something must be wrong with my board on Auto settings. Even in all-core workloads, I have at least 1.3-1.35 V on the cores.
 
Here in Auto, just after boot and after playing MGS4 in RPCS3. For what it's worth, I always have virtualisation and the iGPU disabled, especially the iGPU:
 

Attachments: auto (boot).png, auto (mgs4 rpcs3 played).png
Under 1.2 V! Wow!

How does the iGPU affect core voltages? Unfortunately, I need the iGPU for multi-monitor (for the second HDMI port).
 
I'm not sure, but each time I see somebody with problems on AM5, their spec list includes G.Skill RAM!

From what I read (perhaps from the GN video), the iGPU sits exactly in the hot spot!
 
Hotspot for sure, but I'd imagine that core voltages are totally independent from the iGPU.
 
I jumped ship and went AMD; just bought a 5800X3D and a B550 TUF Gaming Plus WiFi ;)
My old setup is a 3770K, and I don't want to wait until it dies :p

Congrats! Went from a 3770K to 3700X (planned on 5900X later) and that was good, but the X3D was a very welcome surprise :)

I did see in another thread you have a NH-U9S Chromax coming with it.
Just a fair warning this chip does run pretty warm.

I have a NH-U12A Chromax and it shot to 90 °C instantly in Cinebench, dropping to 83 °C after enabling -30 CO on all cores.
Games are usually in the 50-60 °C range, but a CPU-intensive one will go into the 70s (all with -30 CO). Ambient in the room lately has been a little over 17 °C (63 °F).
 
I'm not sure, but each time I see somebody with problems on AM5, their spec list includes G.Skill RAM!
Not me, I have GSkill RAM in my system and it's running just fine. Then again, maybe I'm not looking at the right data in HWInfo.

 
Don't fear my words; I didn't say that all G.Skill owners reported something, just that every report I've read about AMD includes G.Skill in the specs.

EDIT:
I never got into the red on temps; it stops at 84-88 °C from what I've seen.
 
I am pleased with my decision not to buy into AM5. The performance boost would have been nice, but not the headaches.
I honestly have had a headache free experience, but the news is a bit disconcerting, yeah.

Maybe I'm safe, since I did a static multiplier-only OC for compile times (5.3 GHz).

SoC over 1.2 V is a no-go, as I understand it.
I'd say 1.2 is ok, don't let anything go over 1.25 would be my worry point.
 
I'm with @R-T-B here, I've had a similar experience. Up until now with all of this drama, I've had no issues. I'm seriously crossing my fingers and hoping that it stays that way.
 
I come home every day and check for burnt electronic smell, which isn't ideal, so far so good... lol.
 
I'd say 1.2 is ok, don't let anything go over 1.25 would be my worry point.
That's what HWInfo reports on my system; a flat 1.25 volts on the SoC rail. Absolutely no deviation from that number whatsoever no matter what kind of load that I put on it and I have HWInfo running in the background all the time monitoring system vitals.
I come home every day and check for burnt electronic smell, which isn't ideal, so far so good... lol.
If I move my mouse and the monitor comes to life, I'm golden.
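For anyone watching the SoC rail the same way: HWiNFO can log sensors to a CSV file, and a quick script can flag whether the rail ever crossed 1.25 V between glances at the screen. This is just a sketch; the file name and the "CPU SOC [V]" column header are assumptions, so match them to whatever your own HWiNFO export actually contains:

```python
import csv

# Hypothetical sanity check over a HWiNFO sensor log exported to CSV.
# The "CPU SOC [V]" column header is an assumption -- check it against
# your own export before trusting the result.
def max_rail_voltage(path, column="CPU SOC [V]"):
    worst = 0.0
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                worst = max(worst, float(row[column]))
            except (KeyError, ValueError):
                continue  # skip blank cells and non-numeric summary rows
    return worst

# usage sketch: if max_rail_voltage("hwinfo_log.csv") > 1.25, time to worry
```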
 
My only headache was with the early BIOSes on my MSi board that got ironed out pretty soon. Zen 4 and AM5 are a stable platform otherwise.

That's what HWInfo reports on my system; a flat 1.25 volts on the SoC rail. Absolutely no deviation from that number whatsoever no matter what kind of load that I put on it and I have HWInfo running in the background all the time monitoring system vitals.

If I move my mouse and the monitor comes to life, I'm golden.
SoC voltage is constant. 1.25 V is fine, I guess.
 
Congrats! Went from a 3770K to 3700X (planned on 5900X later) and that was good, but the X3D was a very welcome surprise :)

I did see in another thread you have a NH-U9S Chromax coming with it.
Just a fair warning this chip does run pretty warm.

I have a NH-U12A Chromax and it shot to 90 °C instantly in Cinebench, dropping to 83 °C after enabling -30 CO on all cores.
Games are usually in the 50-60 °C range, but a CPU-intensive one will go into the 70s (all with -30 CO). Ambient in the room lately has been a little over 17 °C (63 °F).
Bought it for gaming, not for OC, Cinebench or anything alike. :p If it turns out to be a bad choice, my son's 10600K has a crap cooler on it and he can have this one, while I get an AIO or so.
 
Honestly, it is a matter of thermal density and resistance, not high TDP. An AIO won't be much better than your Noctua. I run mine on a PA120SE and it is perfectly happy.

I don't put it under much stress though, it is a work/business unit and I needed the cache for a program I use with crappy coding.
 

Sounds like you might wanna recheck your mount, my C14S was just below 80C with -30, my U12A was about 76-78C at -30, and the FC140 does anywhere between 72-76C at -30. All at an ambient of 20-25C. The chip is pretty sensitive to cooler contact and to a degree paste application. U9S is usually only slightly behind the C14S.

Though, there have been noticeable temp differences between samples of 5800X3D, and every 5800X3D/board/BIOS combination behaves differently as to package power.
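Worth noting when comparing those numbers: the two sets of results were taken at different ambients (17 °C vs 20-25 °C), so the delta over ambient is the fairer comparison. A toy sketch using the thread's own figures; the 20 °C common ambient is an arbitrary choice of mine:

```python
# Toy normalization for comparing cooler results quoted at different
# room temperatures: what matters is the delta over ambient, not the
# absolute reading.
def delta_over_ambient(cpu_temp_c, ambient_c):
    return cpu_temp_c - ambient_c

def projected_temp(cpu_temp_c, ambient_c, common_ambient_c=20.0):
    # re-project a reading to a common ambient for a fair comparison
    return common_ambient_c + delta_over_ambient(cpu_temp_c, ambient_c)

# 83 C at 17 C ambient is a 66 C delta; 77 C at 23 C ambient is only 54 C.
# Projected to a 20 C room, that's 86 C vs 74 C -- a 12 C real difference.
```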
 
It's not so much that I don't think it will cool it, it's more the noise it would make in the effort. I've used AIOs for the last 20 years for the low noise levels.
 
Hotspot for sure, but I'd imagine that core voltages are totally independent from the iGPU.

I've still been brooding over that one. We have no idea where the "source" in GN's video came from, and its claim that VDDCR_GFX is linked to Vcore goes against everything we have ever known about AMD iGPUs. There are no other sources corroborating it right now... but the theory does check out with how the chip exploded.

An easy proof for VDDCR_GFX falling under VSOC is simply to OC any of the Vega iGPUs - when not at stock or using GFX Curve Optimizer, VDDCR_GFX = VSOC. Mandatory rule, no exceptions.

One would think that with all the record-pushing iGPU OC that Skatterbencher has done on the new Raphael iGPU, they would have changed their AM5 topology chart by now if they were wrong about VDDCR_GFX falling under VSOC, but they haven't... GN's "source" be looking a lil sus.

There's also basically nothing else on the IOD that runs off Vcore. I can't think of a single identifiable component that does.

Quote from the standard topology explanation in every one of their AM5 articles (AM4 works the same way):

VDDCR_GFX provides the voltage for the GPU cores on the IO die. The voltage rails can work in either regular mode or bypass mode. In regular mode, the voltage is managed by the integrated voltage regulator and derived from the VDDCR_SOC voltage rail. If the integrated VR is disabled and set to bypass mode, the voltage is equal to the VDDCR_SOC voltage rail.
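That regular/bypass description can be restated as a toy model. This is just the quoted two cases as code, not AMD's actual firmware logic; the LDO-style "can only step down from VSOC" behaviour in regular mode is my assumption:

```python
def vddcr_gfx(vsoc, bypass_mode, regulated_target=None):
    """Toy model of the VDDCR_GFX rail behaviour described in the quote.

    Not AMD firmware logic -- just the two quoted cases as code.
    'regulated_target' is a hypothetical request from the GPU power
    manager; an LDO-style integrated VR can only step down from VSOC.
    """
    if bypass_mode:
        return vsoc                   # bypass: rail equals VDDCR_SOC
    target = vsoc if regulated_target is None else regulated_target
    return min(target, vsoc)          # regular: derived from VDDCR_SOC

# e.g. bypass mode at VSOC = 1.30 V puts the full 1.30 V on the iGPU rail
```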
 
Yeah I reseated it twice, figured the same when I first checked too.

I'll have to give Cinebench another run just to see; it may have been a higher ambient when I did that initially.
Myself and a bunch of others on OCN all experienced this to a similar degree; thank goodness for core offset lol.

For what I do it's not bad, I mostly just game or watch streaming services on this rig no real productivity :p
 
Does that mean I'm lucky to have disabled the iGPU since first boot?!
 

That's not how the iGPU or IO die works. Calm down lol

@Double-Click gaming temps sound normal, so probably just AMD QC strikes again then. I had close to 20C core deltas on my 5900X :laugh:
 
I've still been brooding over that one. We have no idea where the "source" in GN's video came from, and its claim that VDDCR_GFX is linked to Vcore goes against everything we have ever known about AMD iGPUs. There are no other sources corroborating it right now... but the theory does check out with how the chip exploded.

An easy proof for VDDCR_GFX falling under VSOC is simply to OC any of the Vega iGPUs - when not at stock or using GFX Curve Optimizer, VDDCR_GFX = VSOC. Mandatory rule, no exceptions.

One would think that with all the record-pushing iGPU OC that Skatterbencher has done on the new Raphael iGPU, they would have changed their AM5 topology chart by now if they were wrong about VDDCR_GFX falling under VSOC, but they haven't... GN's "source" be looking a lil sus.

There's also basically nothing else on the IOD that runs off Vcore. I can't think of a single identifiable component that does.

Quote from the standard topology explanation in every one of their AM5 articles (AM4 works the same way).
I'm even more confused now. Totally not your fault; there are just too many voltage rails on a modern CPU. :laugh:

What about CPU cores? How do I make sure my board doesn't overvolt the crap out of them for the illusion of added stability?
 

I'm still not sure how we came onto the topic of overvolting Vcore :D

Short answer: it's not an issue
Long answer: it's not worth worrying about, because
  • Stock Precision Boost functionality is not and has never been an issue. In the freak event that rampant boost-algorithm overvolting is actually causing these CPUs to explode, then all Ryzens post-2019 would be caught in the same net, because PB has not fundamentally changed; the most aggressive of all in terms of Vcore are actually the oldest CPUs (Ryzen 3000), and they haven't been dying or significantly degrading in droves
  • You just have to trust software monitoring and the fact that up-to-1.5V PB hasn't blown up CPUs in 4 years, because it would be impossible to monitor Vcore at a polling rate that approaches how fast these CPUs are actually controlled, without expensive hardware logging
As to the point that Vcore always degrades: sure, on a bare technical level. But that technical level of degradation also applies to Intel CPUs, which are notoriously hardy. The reality is that the amount of "degradation" experienced is not relevant within the timeframe of the CPU's expected lifespan... in AMD's case, assuming PB works as intended, of which there is so far no evidence to the contrary.

The Ryzen question of degradation can never be discussed with voltage alone - it is current (accelerated by high temp and volts if present) that most readily kills Ryzens. The PB algorithm is designed exactly to take this into account - stock boost behaviour will never apply high Vcore (1.35V+) under high current, which is why you will see high Vcore for light ST but gradually falling as you bring more cores/more load/more current online.

The APUs demonstrate this relationship between volts and current really well. The UMC is both AMD's best DDR4 controller and highly tolerant to VSOC, which is why you'll see record OC scores almost always pushing absolutely obscene VSOC (way north of 1.3V). Given good cooling and knowing what you're doing, benching at high VSOC still shouldn't cause significant damage, but I can guarantee that those 5700Gs aren't long for the world if you decide to start benching the iGPU under those conditions (drawing an easy 30A+ of current). When at 2D idle or when paired with a dGPU, iGPU/SOC power draw is usually in the single digits therefore current draw does not become a problem.

Because VSOC doesn't have anything like PB that naturally scales back voltage when current increases, it's easy to see why running high VSOC at high load/for extended period of time is harmful. But that doesn't apply to Vcore, unless you go out of your way to find unsafe PBO settings/try to break the algorithm (which is not hard), or run a static OC.
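The Vcore-vs-current trade-off described above, sketched as a toy model. The voltage breakpoints here are invented purely for illustration (they are NOT AMD's real Precision Boost tables); only the P = V·I arithmetic at the end is exact:

```python
# Illustrative sketch: stock boost allows high Vcore only at low
# current, and scales it back as more cores/load come online.
# Breakpoints are invented for illustration, not AMD's PB tables.
def allowed_vcore(current_amps):
    if current_amps < 20:     # light, single-thread-ish load
        return 1.45
    if current_amps < 60:     # moderate multi-core load
        return 1.30
    return 1.20               # heavy all-core load

# The current side of the VSOC example is just P = V * I:
def rail_current_amps(power_watts, volts):
    return power_watts / volts

# 40 W of iGPU/SOC power at 1.30 V VSOC is ~30.8 A through the rail,
# versus single-digit amps when the iGPU idles at a few watts.
```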

 
Yep, once Core Performance Boost is off, temps and power go way down at all-core loads. My 7950X is given about 1 V for its 4.5 GHz stock speed.
That's a very nice voltage setting!

1.125 V max? Seriously? :eek: Something must be wrong with my board on Auto settings. Even in all-core workloads, I have at least 1.3-1.35 V on the cores.
Agreed, that is impressive!
 