
AMD Ryzen 5 8500G

I could test that theory on this 5900X but I don't wanna reboot right now and it's irrelevant information either way :D
If your 5900X runs into power limit in all-core loads, then you might see some clock increase by disabling a CCD. The 7800X3D is a different animal, though, because it never gets anywhere near its power limit by default. I assume other 7000X3D CPUs behave similarly.
 
No, but he's not talking about all-core, I don't think.

He's talking about disabling an entire CCD in the BIOS and running an "8-core" part with a 16-core power budget, which is a "half-core" load rather than an all-core load. If a 230 W Ryzen 7 existed, it might be relevant to that discussion at least, but I'm not sure how it's relevant here, other than to prove that the power budget is the limiting factor in CPUs unless you have access to higher-end boards, which make no sense for AMD's lowest-cost, cut-down die.
No, I am talking about my 7900X3D. I do not disable anything on that CPU. In games that do not support X3D, there is no issue having the CPU at 5.6 GHz. I have argued that it could be the lack of a full CCD and X3D voltage. For me it has been proven that performance is not impacted by lowering the voltage. My chip does not even draw 120 W unless I am running a 3DMark benchmark or the AIDA CPU benchmark.

Getting back to this thread, I expect that this will still shine as a replacement for my 5600G-based system, but the price is still a little high. If you want to get one of these, I would pair it with an ASRock board, as they are the only ones that support 120 Hz on the motherboard's HDMI port.

At $160 list, with a shortage of PCIe lanes, the balance is off for an individual building a PC around this processor. It's a system integrator processor, not a gaming one: it's for the Dells and Lenovos to build office PCs with, and it will excel at that with the appropriate negotiated volume discount, slapped into their bargain-basement configurations.

It's DOA for home use but I love that @W1zzard tested one here because I would be forever curious about its capabilities.
The DGPU upgrade for this would be the 6500XT or below and you don't need any more lanes than that.
 

As it roughly matches the Ryzen 5 5600, which was my primary gaming CPU until recently, IMO this CPU is overkill for merely being a good PCIe lane match for the 6500 XT or 6400. It could instead be matched with an RX 6600 XT or similar and would have the same PCIe bandwidth as I get in my all-PCIe 3.0 mobo setup here (x4 4.0 = x8 3.0). So there are additional possibilities.
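If you want to sanity-check that lane math, here's a quick Python sketch using the usual theoretical per-lane figures (my own rough numbers, not anything from the review):

Code:
# Approximate one-direction PCIe payload bandwidth per lane, in GB/s
# (8 GT/s and 16 GT/s with 128b/130b encoding; theoretical, not measured).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth(4, 4))  # the 8500G's x4 Gen 4 slot -> ~7.9 GB/s
print(link_bandwidth(3, 8))  # x8 Gen 3, e.g. a 6600 XT on an old board -> ~7.9 GB/s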

It's really the price. The $125 R5 5600 is a better choice. At ~$130, the 8500G would be better because of AM5's future.
 
Impressive efficiency; pretty meh except for that...
 
Oh wow, okay.

I'm a little unfamiliar with the dual-CCD X3D chips, as I've only used the 5800X3D and 7800X3D, but yours appears to be dramatically different from review samples: faster by 500-600 MHz, which is phenomenal. Even people using the 1usmus Hydra utility on the usual OC forums to tweak their 7900X3D/7950X3D aren't seeing more than 5.2 GHz all-core.

Either you've tuned it manually to within an inch of its undervolt/offset limits, or you've got a top 1% golden sample that's far better than anything reported on the web. Maybe a combination of both?

I'd say "screenshot or it didn't happen" but I believe you and/or don't really care - the 7900X3D isn't for me anyway ;)

You're right about the 5600G comparison. Power consumption of an 8500G should be dramatically better as you're jumping from TSMC 7nm to 4nm, and the new 4CU RDNA3 should use way less power than the ancient and DDR4-hobbled Vega7 in your 5600G.

Good call on the ASRock boards with 4K120 support. 4K120 TVs are pretty cheap these days, with plenty of current-gen offerings from TCL or Hisense at under $500, even less if you get last-gen on deep discount.
 
Looks like you are right. There have been some chipset updates, and that seems to have lowered the maximum clock. Could be AMD sandbagging these chips to make the 9000 series look even better. Not even one of my cores does 5.6 now. I have been doing a lot of gaming, so benchmarks and that kind of stuff have been at a minimum. I do know, though, that I have seen my non-X3D cores running at 5.6 instead of 5.325.

[Attachment: Screenshot 2024-07-26 081036.png]
 
Might have been an AGESA update after the initial run of AM5 CPUs melting themselves into the socket and burning out.

Could also be that HWiNFO is too slow to accurately report average effective clock, because AMD explicitly state that their core-juggling for thermal management happens so fast that most utilities don't handle it. According to them (Robert Hallock's tweets in particular), you need to monitor clocks with Ryzen Master if you want anything even close to accurate once you're bumping up against any kind of power or thermal boost limit.

You're also potentially reading HWiNFO wrong: 5325 MHz isn't the current value, it's in the 3rd column, which is the peak value ever seen for that single core since sampling started. If you fire up something like a 15-minute CB24 test and then open HWiNFO fresh, the 4th column represents your average clock for an all-core load, and it's the 1st column that is going to be inaccurate instant to instant, as the core-juggling may well happen several times between each HWiNFO sample point.
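As a toy illustration of that sampling problem (this is just a made-up model with invented residency numbers, not how HWiNFO works internally):

Code:
import random

# Hypothetical core-juggling: a core flips between a 5.6 GHz boost state and
# a near-parked state many times per second. A slow poller only sees the
# instant it lands on; an effective-clock counter integrates everything.
random.seed(1)
STATES_MHZ = [5600, 400]                  # invented boost vs. parked clocks
samples = [random.choice(STATES_MHZ) for _ in range(1_000_000)]

true_average = sum(samples) / len(samples)
snapshots = samples[::200_000]            # what an occasional poller catches

print(f"integrated effective clock: {true_average:.0f} MHz")
print(f"instantaneous snapshots   : {snapshots} MHz")

The snapshots bounce between the two extremes while the integrated average sits near the middle, which is why the instant column is the one not to trust.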

If that screenshot is currently an all-core load, your average clocks are a little either side of ~4.7 GHz, which is about right for a reduced-voltage X3D part vs the ~5 GHz I see on 170 W / 230 W PPT 7900X parts crunching V-Ray and wind/solar simulations here all day, every day.
 
I can't say that I have seen that; I have had mine since launch. This is a recent development. I recorded that while doing a CPU benchmark in AIDA64, which was in conjunction with the 41.5% core usage.

HWiNFO is all I really use, and I know that it has shown me 5.625 on those cores showing 5.325.

I hear you, and I also give up on reading these CPUs. It just increased another 50 MHz. I wish AMD's software allowed me to use Ryzen Master, but I get the message that my CPU does not support OC, so Ryzen Master will be in view mode. Whatever it is, I love it for gaming.

[Attachment: Screenshot 2024-07-26 090322.png]
 
And totally contradicts less systematic but more optimized benchmarks like this 7W idle 12400-based PC.
Thanks for the link. Good to see someone is continuing the great work of Mike Chin of SilentPCReview, in a way.
What should I conclude from this? That idle power depends less on the CPU, and more on power state settings of the motherboard and operating system?
Probably. That's why reviewers are reluctant to state idle power consumption. It's unpredictable and putting it under control would be an extensive research job. I can also remember that some sites used to list both "idle" and "long idle" power - apparently the system entered a lower power idle state after x minutes of being idle.
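If anyone wants to reproduce the "idle" vs "long idle" split at home without a wall meter, here's a rough Linux-only Python sketch reading the powercap RAPL counter. It reports CPU package power only, so it won't match the whole-system numbers reviews quote, the sysfs path varies per machine, and newer kernels require root to read it:

Code:
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # path varies by system

def package_watts(interval: float = 5.0) -> float:
    """Average CPU package power over `interval` seconds."""
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval)
    with open(RAPL) as f:
        e1 = int(f.read())
    return (e1 - e0) / 1e6 / interval    # microjoules -> watts

print("idle      :", round(package_watts(), 1), "W")
time.sleep(600)                          # let the OS settle into deeper idle
print("long idle :", round(package_watts(), 1), "W")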
 
Despite all of the different info people spread on the internet, I still find the Windows CPU power settings of 0% minimum processor state (idle) and 100% maximum (load) to work best with any CPU. This could be a good baseline for a review.
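For anyone who wants to set that up as a repeatable baseline, these are the equivalent powercfg calls (run from an elevated prompt on Windows; scheme_current, sub_processor and the procthrottle aliases are the standard powercfg names, wrapped in Python here just for consistency):

Code:
import subprocess

# Set minimum processor state to 0% and maximum to 100% on the active plan.
for setting, value in (("procthrottlemin", "0"), ("procthrottlemax", "100")):
    subprocess.run(["powercfg", "/setacvalueindex", "scheme_current",
                    "sub_processor", setting, value], check=True)
subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)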
 
That this chip will not be used with DGPUs.
Do you think they are right? They may well be, considering they forced it via limitations, but do you think the demand would have reflected that? Many people buy iGPU chips and use them with dGPUs; having the option is always good.
 
A 7600 would make more sense if you were going to get a dGPU, as that has all 28 lanes available to use. Many people just buy this for an HTPC-based system.
 
The RX 6600 is still a decent value offering that wouldn't be stupid to run at PCIe 4.0 x4.
Sure, you're losing a few percent by not getting all 8 lanes the card supports, but it's barely going to matter, because even in its half-lane configuration it's vastly better performance/$ than the abysmal 6500 XT.
 
The only good setting is default settings, because that's what 99% of people are running. That also makes it fair to vendors and incentivizes them to improve their defaults.

Everything else is worth a separate article.

No doubt that optimized settings can lower or increase idle power. One of the biggest dials for that is memory frequency, voltage and timings.
 
ATM, where I am in the world, the 8600G makes better sense; one major retailer here has the 8000-series range permanently discounted until early October. There's about a $50 price difference between the 8500G and 8600G. I'd rather pay that extra to get the better iGPU and all Zen 4 cores any day.
 
Good point. :)

When one finds the settings that seem to work best, it's easy to forget what the majority, non-tech-savvy audience uses.
 
Just wait for two years, and it's gonna be a super cool CPU, because it will drop in price, and the motherboards will too.
 
Then AMD will most likely have a Zen 5 APU with RDNA4.
 
It will cost $150, but this one will drop in price and become a good CPU; right now it's super expensive.
It's just too new.

There's not really any real competition in this space. The i3-14100T, or a manually limited 14100, are the closest match, but they're only quad-cores, and their UHD 730 is the quarter-power variant of mobile Xe, which is itself only borderline useful for 3D in the full 96 EU variant. At 24 EUs, heavily downclocked, the UHD 730 is likely worse than even the 2 CU 'bare-minimum' iGPU in Zen 4 Raphael CPUs.

So, with no direct competition, AMD is doing what any for-profit company does and setting the sticker price at whatever they think they can get away with. Once the initial wave of people who will pay anything have all been satisfied, sales will slow to a crawl and force prices down. The higher MSRP works in AMD's (and resellers') favour: they can profit from the impatient early adopters, and afterwards they get the psychological benefit of a "deep discount", where, shown two $100 products of the same age, most people will buy the one that's "supposed to be $150" over the one that has always been $100 without doing any research.
 
Excellent review, W1zzard! However, I have a question: which version of Stockfish did you use? I ask because I'm waiting for the upcoming 9700X to see if it's worth upgrading or not.
Performance is in line with my 5800X (slightly worse, actually) on Stockfish 16.1 BMI2, but better to ask.
 
How? This is probably the perfect precursor to AMD's dominance with Zen 5 (Strix Point/Halo?) and a great way to see their scheduling having zero issues on Windows!

The chip is slightly overpriced, like most APUs I've seen from AMD in the recent past.
The less-threaded benchmarks like web browsing and 720p gaming left me with the opposite impression: that they do have scheduling issues, or that it's just underwhelming.
I'd have expected the CPU to be closer to the 7600 in those, but it trails quite a bit. Either half the L3 cache causes this, or those threads are getting stuck on the C-cores?
 
So the 14900K lost to the 7950X after the new BIOS update? It looks like performance dropped by ~4%.
Lots of software updates; pretty much every single test was updated, there are new AI tests, some changes to existing benchmarks, and an added Git test.

which version of Stockfish did you use?
16.1
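For anyone comparing their own numbers, here's a quick way to confirm which Stockfish build you're actually running (this assumes a stockfish binary on your PATH; the standard UCI handshake makes the engine report its name and version):

Code:
import subprocess

# Start the engine, send the UCI handshake, and grab the "id name" reply.
proc = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
out, _ = proc.communicate("uci\nquit\n", timeout=10)
print(next(line for line in out.splitlines() if line.startswith("id name")))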
 
I remember when I wanted to build an HTPC for my console retro drive. I went to the PC store (Canada Computers) and they had an open-box B550 TUF board on sale for $65 CAD, so I bought it. Brought it home and installed my 5600G, went to the Asus website, downloaded the BIOS for the B550 TUF, and started enjoying it. I had an M.2 drive sitting around, so I decided to install it in the second M.2 slot on the board. Uh oh: the board has no second M.2 slot. Did I get a board that slipped through QC? Nope, the board turned out to be the A520 TUF. In my PC snobbery, A520 was not something I would buy, but other than flexibility, there was no difference in performance.

That's why I have an ASRock A620 board in my Newegg cart that I check now and then to see what it costs.

Black Friday should be interesting this year. You might be able to get a great B650E board for a good price.
 