Thursday, March 17th 2022

AMD's Robert Hallock Confirms Lack of Manual CPU Overclocking for Ryzen 7 5800X3D

In a livestream with HotHardware about AMD's mobile CPUs, Robert Hallock shed some light on the rumours about the Ryzen 7 5800X3D lacking manual overclocking. As per those earlier rumours, which TechPowerUp confirmed with our own sources, AMD's Ryzen 7 5800X3D lacks support for manual CPU overclocking, and AMD asked its motherboard partners to remove these features from the UEFI. According to the livestream, these CPUs are hard locked, so there's no workaround when it comes to adjusting the CPU multiplier or Voltage, but at least AMD has a good reason for it.

It turns out that the 3D V-Cache is Voltage limited to a maximum of 1.3 to 1.35 Volts, which means that the regular boost Voltage of individual Ryzen CPU cores, which can hit 1.45 to 1.5 Volts, would be too high for the 3D V-Cache to handle. As such, AMD implemented the restrictions for this CPU. However, the Infinity Fabric and memory bus can still be manually overclocked. The lower boost Voltage also helps explain why the Ryzen 7 5800X3D has lower boost clocks, as it's possible that the higher Voltages are needed to hit the higher frequencies.
That said, Robert Hallock made a point of mentioning that overclocking is a priority for AMD and that the Ryzen 7 5800X3D is a one-off when it comes to these limitations. The reason is that AMD is limited by the manufacturing technology available to the company today, but it wanted to release the technology to consumers now rather than wait until the next generation of CPUs. In other words, this is not a change in AMD's business model, as future CPUs from AMD will include overclocking.

Hallock also explained why AMD didn't go with more cores for its first 3D V-Cache CPU, and it has to do with the fact that most workloads outside of gaming don't reap much of a benefit. This is largely due to how different applications use cache memory: when it comes to games, a lot of the data is being reused, which is a perfect scenario for a large cache, whereas something like video editing software can't take advantage of a large cache in the same way. This means that AMD's secret to boosting performance in games is that more game data ends up sitting closer to the CPU, which results in a 12 ns latency for the CPU to retrieve that data from the L3 cache, compared to 60-80 ns when the data has to be fetched from RAM. Add to this the higher bandwidth of the cache and it makes sense how the extra cache helps boost performance in games.
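The latency figures above can be put into a back-of-the-envelope average memory access time (AMAT) model. This is only an illustration; the hit rates used below are hypothetical, not AMD's numbers:

```python
# Back-of-the-envelope average memory access time (AMAT) model using the
# latency figures from the article: ~12 ns for an L3 hit, ~70 ns (midpoint
# of the quoted 60-80 ns range) for a trip to RAM.
L3_HIT_NS = 12.0
RAM_NS = 70.0

def amat(l3_hit_rate: float) -> float:
    """Average latency seen by the core for data served from L3 or RAM."""
    return l3_hit_rate * L3_HIT_NS + (1.0 - l3_hit_rate) * RAM_NS

# A larger L3 turns more RAM accesses into cache hits. If tripling the
# cache lifted a game's hit rate from, say, 0.80 to 0.92 (hypothetical):
before = amat(0.80)  # 12*0.8 + 70*0.2 = 23.6 ns
after = amat(0.92)   # 12*0.92 + 70*0.08 = 16.64 ns
print(f"{before:.1f} ns -> {after:.1f} ns")
```

Even a modest bump in L3 hit rate cuts the average latency substantially, which is the mechanism behind the gaming gains described above.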

For more details, please see the video below. The interesting part starts around the 45:30 mark.


222 Comments on AMD's Robert Hallock Confirms Lack of Manual CPU Overclocking for Ryzen 7 5800X3D

#201
sillyconjunkie
JismIncorrect. They said it themselves: the cache is dependent on the CPU core voltage. They couldn't design a separate voltage rail for it due to compatible pin layouts. So you go with what you have, really. Since it's an EPYC gimmick, EPYCs were never tested against OCs, since the clocks of those CPUs average 2 to 3.4 GHz.
Whether or not the extra cache can be powered down is irrelevant. It's a multi-layer CPU on a 7 nm process and, whether it was designed by AMD or Intel, it will always run hotter than a similar CPU design on a single layer.
Posted on Reply
#202
ratirt
fevgatosI'm pretty sure there is a huge difference in all-core workloads. For example, I think I can hit 15k (that's how much the 5800X gets, right?) in CBR23 at around 45-50 W
I thought you were talking about gaming, not CBR23.
Posted on Reply
#203
Mussels
Freshwater Moderator
ioannisNo big deal, since a static OC on Ryzen CPUs is kinda pointless. Limiting the voltage to 1.3-1.35 V pretty much nerfs boost clock override offset, I'm guessing, but again it's no big deal, since you get almost no gaming performance from this. The only thing worth doing on Ryzen CPUs is raising the PBO limits as much as your cooling allows to keep higher all-core clocks if you need them.
It varies chip to chip
My 5800X for example is one of the hotter-running ones, so a static OC gains me 200 MHz all-core and loses 500 MHz single-threaded, but also runs 30 °C colder (well, with 40 °C ambients it did)

These are simply chips to be left on auto, or with minimal PBO tweaking (the curve offset may remain) - and that appeals to a MUCH larger userbase than the overclockers
Posted on Reply
#204
mama
Will this be 6nm or 7nm?
Posted on Reply
#205
JustBenching
ratirtI thought you were talking about gaming, not CBR23.
In gaming the difference is even bigger
Posted on Reply
#206
Taraquin
fevgatosIn gaming the difference is even bigger
Not to start another war, but I'm not so sure, since games generally utilize few cores most of the time. My 5600X in SOTTR (a game that utilizes many cores well) gets 230-240 fps CPU game avg at 1080p highest running a 76 W limit, but I get 215-225 fps running a 45 W limit, so only around a 5% fps loss last I checked. In both cases the I/O die uses 15-20 W. It may be more or less in other games, but generally less, since most games use fewer cores than SOTTR.

If you can test your 12th gen in SOTTR at 1080p highest with unlimited and a 45 W limit, it would be great :)

Ryzen 5k scales poorly efficiency-wise in all-core loads if voltage is above 1.25 V-ish (meaning 4.5-4.75 GHz all-core depending on bin, CO value, etc.). I'm not sure where efficient scaling stops on 12th gen, but I wouldn't be surprised if it were around 1.2-1.3 V all-core like Ryzen 5k, maybe lower. My 12400F only runs all-core at 1.0 V, so I can't test scaling on that one.
Posted on Reply
#207
JustBenching
TaraquinNot to start another war, but I'm not so sure, since games generally utilize few cores most of the time. My 5600X in SOTTR (a game that utilizes many cores well) gets 230-240 fps CPU game avg at 1080p highest running a 76 W limit, but I get 215-225 fps running a 45 W limit, so only around a 5% fps loss last I checked. In both cases the I/O die uses 15-20 W. It may be more or less in other games, but generally less, since most games use fewer cores than SOTTR.

If you can test your 12th gen in SOTTR at 1080p highest with unlimited and a 45 W limit, it would be great :)

Ryzen 5k scales poorly efficiency-wise in all-core loads if voltage is above 1.25 V-ish (meaning 4.5-4.75 GHz all-core depending on bin, CO value, etc.). I'm not sure where efficient scaling stops on 12th gen, but I wouldn't be surprised if it were around 1.2-1.3 V all-core like Ryzen 5k, maybe lower.
Unless we have the same card it's kind of pointless. I'll win in efficiency just because I have a 3090 that pumps 300 fps at 1080p. We could try locking the framerate to 150 or something and then check the power draw, I guess
Igorslab and der8auer ran some tests and ADL is insane in gaming efficiency
Posted on Reply
#208
Taraquin
fevgatosUnless we have the same card it's kind of pointless. I'll win in efficiency just because I have a 3090 that pumps 300 fps at 1080p. We could try locking the framerate to 150 or something and then check the power draw, I guess
Igorslab and der8auer ran some tests and ADL is insane in gaming efficiency
No, you can compare. Look at CPU game avg; this is what your CPU produces and will be equal no matter what GPU you use :) You may be right that you lose less perf power limiting a 12th gen, but it would be nice to see how it affects performance. You will get different numbers than me, but the point is to show how much lower perf gets if pwr limited :)
Posted on Reply
#209
JustBenching
TaraquinNo, you can compare. Look at CPU game avg; this is what your CPU produces and will be equal no matter what GPU you use :) You may be right that you lose less perf power limiting a 12th gen, but it would be nice to see how it affects performance. You will get different numbers than me, but the point is to show how much lower perf gets if pwr limited :)
The CPU game score is actually affected by the GPU; downclock your GPU and you'll get higher numbers. I don't know why it works that way, but it does
Posted on Reply
#210
Taraquin
fevgatosThe CPU game score is actually affected by the GPU; downclock your GPU and you'll get higher numbers. I don't know why it works that way, but it does
I have not seen that; I get the same CPU game avg running a UV or OC profile on the GPU, but your system might behave differently. Still, using the same GPU settings you can show how your CPU performs; that is what I was curious about. Can you do that? :)
Posted on Reply
#211
JustBenching
TaraquinI have not seen that; I get the same CPU game avg running a UV or OC profile on the GPU, but your system might behave differently. Still, using the same GPU settings you can show how your CPU performs; that is what I was curious about. Can you do that? :)
Just got back home, I get 225 @ 45 watts with E-cores off. Too bored to try E-cores on right now, maybe later :P
Posted on Reply
#212
Taraquin
fevgatosJust got back home, I get 225 @ 45 watts with E-cores off. Too bored to try E-cores on right now, maybe later :p
But try at stock pwr and compare :)
Posted on Reply
#213
JustBenching
TaraquinBut try at stock pwr and compare :)
Stock with no power limits I get around 330 if I remember correctly, but I've never checked how much it actually consumes.
Posted on Reply
#214
Taraquin
fevgatosStock with no power limits I get around 330 if I remember correctly, but I've never checked how much it actually consumes.
Okay, you lose 33% perf then. How much does it use in SOTTR when not pwr limited? I haven't tested the 5800X, but considering I only lose 5% fps going from 76 W to 45 W, I would be surprised if the 5800X lost a lot more. All this considered, it seems gaming is less impacted by a pwr limit than productivity, as, for instance, the 10900K test on HWUB showed.
Posted on Reply
#215
JustBenching
TaraquinOkay, you lose 33% perf then. How much does it use in SOTTR when not pwr limited? I haven't tested the 5800X, but considering I only lose 5% fps going from 76 W to 45 W, I would be surprised if the 5800X lost a lot more. All this considered, it seems gaming is less impacted by a pwr limit than productivity, as, for instance, the 10900K test on HWUB showed.
I'll check, I never really paid attention. SOTTR though is kind of a weird game to test for this, because it's really memory- and cache-dependent more than it cares about the actual CPU.

The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU bound. I've seen peaks at 170 watts, which is insane for a game; usually it's under 100 watts in most games.
Posted on Reply
#216
Taraquin
fevgatosI'll check, I never really paid attention. SOTTR though is kind of a weird game to test for this, because it's really memory- and cache-dependent more than it cares about the actual CPU.

The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU bound. I've seen peaks at 170 watts, which is insane for a game; usually it's under 100 watts in most games.
SOTTR scales with everything, which makes it weird but good for testing :) Cyberpunk has high CPU usage using DLSS or native low res, and it scales well with bandwidth (loves DDR5 and performs much better than with DDR4); too bad the built-in bench is inconsistent :/ If I remember correctly, CP has some sections that utilize AVX2, which might explain the high CPU usage.
Posted on Reply
#217
Taraquin
fevgatosI'll check, I never really paid attention. SOTTR though is kind of a weird game to test for this, because it's really memory- and cache-dependent more than it cares about the actual CPU.

The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU bound. I've seen peaks at 170 watts, which is insane for a game; usually it's under 100 watts in most games.
I just remembered that TPU did a powerscaling test:

It seems ADL is very efficient down to around a 75 W limit or maybe a bit below, but gets serious problems at 50 W, so somewhere between 50 and 75 W gaming performance tanks completely. The 5600X at a 76 W limit is equally efficient as the 12900K at 75 W.

The 5800X does not behave the same and scales well at 65 W vs stock 142 W.
Gaming perf at 65 W is 98-99% of 142 W :O How low you can go on the 5800X before performance tanks is a big question, but I'm sure it loses a lot less perf at 45 W than the 12900K, since that loses over 40% perf at 50 W, while I lose 5% at 45 W.

The voltage/frequency curve of Ryzen 5k is a bit weird: you get very good linear scaling up to around 1.1 V.
My 5600X tested at various speeds, lowest voltage/power usage in CB23:
4.2 GHz @ 0.99 V: 56 W
4.3 GHz @ 1.02 V: 60 W
4.4 GHz @ 1.05 V: 64 W
4.5 GHz @ 1.10 V: 69 W
4.6 GHz @ 1.18 V: 76 W
4.7 GHz @ 1.26 V: 95 W
4.8 GHz @ 1.34 V: 115 W
I think it is quite comparable to the 5800X. All these tests were done with 4000 MHz RAM, so the I/O die uses a bit more power than on an avg 5600X.

Even though the I/O die uses a bit of power (10-15 W load with 3200 MHz RAM, 20-30 W load with 4000 MHz RAM), the mem controller, I/O, etc. on ADL also use a fair amount of power, but on the same die as the cores.

I bet the 12600K will have better perf at low TDP due to fewer cores/less cache.
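The list above tracks the classic dynamic-power relation P ≈ C·f·V² fairly closely, which is why efficiency falls off sharply above ~1.2 V. A rough sketch (the fitted constant is an assumption; static leakage and I/O-die power are ignored):

```python
# Rough check of the P ~ f*V^2 dynamic-power rule against the 5600X
# numbers posted above (frequency in GHz, core voltage in V, package W).
# The constant is fitted to the first data point; this ignores static
# leakage and I/O-die power, so treat it as an illustration only.
data = [  # (GHz, V, measured W) from the post above
    (4.2, 0.99, 56), (4.3, 1.02, 60), (4.4, 1.05, 64),
    (4.5, 1.10, 69), (4.6, 1.18, 76), (4.7, 1.26, 95), (4.8, 1.34, 115),
]

c = data[0][2] / (data[0][0] * data[0][1] ** 2)  # fit constant to the 4.2 GHz point

for f, v, measured in data:
    predicted = c * f * v * v
    print(f"{f} GHz @ {v} V: predicted {predicted:5.1f} W, measured {measured} W")
```

The simple model lands within a few watts of every measured point, and it shows why a small voltage drop buys a disproportionate power saving.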
Posted on Reply
#218
B-Real
Well, AMD did what it promised:
"From this small sample though, we found that the 5800X3D is slightly faster than the 12900K when using the same DDR4 memory.

It's a small 5% margin, but that did make it 19% faster than the 5900X on average, so AMD's 15% claim is looking good."



Quite an impressive comeback within the generation even without DDR5 support.
Posted on Reply
#219
JustBenching
TaraquinI just remembered that TPU did a powerscaling test:

It seems ADL is very efficient down to around a 75 W limit or maybe a bit below, but gets serious problems at 50 W, so somewhere between 50 and 75 W gaming performance tanks completely. The 5600X at a 76 W limit is equally efficient as the 12900K at 75 W.

The 5800X does not behave the same and scales well at 65 W vs stock 142 W.
Gaming perf at 65 W is 98-99% of 142 W :O How low you can go on the 5800X before performance tanks is a big question, but I'm sure it loses a lot less perf at 45 W than the 12900K, since that loses over 40% perf at 50 W, while I lose 5% at 45 W.

The voltage/frequency curve of Ryzen 5k is a bit weird: you get very good linear scaling up to around 1.1 V.
My 5600X tested at various speeds, lowest voltage/power usage in CB23:
4.2 GHz @ 0.99 V: 56 W
4.3 GHz @ 1.02 V: 60 W
4.4 GHz @ 1.05 V: 64 W
4.5 GHz @ 1.10 V: 69 W
4.6 GHz @ 1.18 V: 76 W
4.7 GHz @ 1.26 V: 95 W
4.8 GHz @ 1.34 V: 115 W
I think it is quite comparable to the 5800X. All these tests were done with 4000 MHz RAM, so the I/O die uses a bit more power than on an avg 5600X.

Even though the I/O die uses a bit of power (10-15 W load with 3200 MHz RAM, 20-30 W load with 4000 MHz RAM), the mem controller, I/O, etc. on ADL also use a fair amount of power, but on the same die as the cores.

I bet the 12600K will have better perf at low TDP due to fewer cores/less cache.
That's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 watts. I can max out my 3090 with that power limit :P
Posted on Reply
#220
Taraquin
fevgatosThat's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 watts. I can max out my 3090 with that power limit :p
One could argue that AMD does the same, and above 1.2 V the efficiency curve is garbage; probably the same on ADL.
Posted on Reply
#221
Mussels
Freshwater Moderator
fevgatosThat's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 watts. I can max out my 3090 with that power limit :p
I can max out my 5800X + 3090 and stay under 400 W; the moment you tweak values it's an unfair comparison
Posted on Reply