Thursday, March 17th 2022

AMD's Robert Hallock Confirms Lack of Manual CPU Overclocking for Ryzen 7 5800X3D

In a livestream with HotHardware about AMD's mobile CPUs, Robert Hallock shed some light on the rumours about the Ryzen 7 5800X3D lacking manual overclocking. As per those earlier rumours, which TechPowerUp! confirmed with our own sources, the Ryzen 7 5800X3D lacks support for manual CPU overclocking, and AMD has asked its motherboard partners to remove these features from the UEFI. According to the livestream, these CPUs are said to be hard-locked, so there's no workaround when it comes to adjusting the CPU multiplier or Voltage, but at least AMD has a good reason for it.

It turns out that the 3D V-Cache is Voltage-limited to a maximum of 1.3 to 1.35 Volts, which means that the regular boost Voltage of individual Ryzen CPU cores, which can hit 1.45 to 1.5 Volts, would be too high for the 3D V-Cache to handle. As such, AMD implemented the restrictions for this CPU. However, the Infinity Fabric and memory bus can still be manually overclocked. The lower boost Voltage also helps explain why the Ryzen 7 5800X3D has lower boost clocks, as the higher Voltages are likely needed to hit the higher frequencies.
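
In other words, the cores and the stacked cache share a rail, and the weakest component sets the ceiling. Below is a minimal sketch of that constraint; the constants and the clamping function are illustrative assumptions based on the figures above, not AMD's actual firmware logic:

```python
# Illustrative only, not AMD's actual firmware logic: models why a
# stacked-cache voltage ceiling also caps core boost voltage when both
# sit on the same voltage rail.

CORE_BOOST_VMAX = 1.50  # Volts a regular Zen 3 core can request at peak boost
VCACHE_VMAX = 1.35      # reported safe ceiling for the 3D V-Cache die

def effective_boost_voltage(requested_v: float) -> float:
    """Clamp a requested core Voltage to the weakest limit on the shared rail."""
    return min(requested_v, CORE_BOOST_VMAX, VCACHE_VMAX)

# A 1.5 V boost request gets clamped to 1.35 V, and with it the peak clocks drop.
print(effective_boost_voltage(1.50))  # -> 1.35
```
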
That said, Robert Hallock made a point of mentioning that overclocking is a priority for AMD and that the Ryzen 7 5800X3D is a one-off when it comes to these limitations. The reason is that AMD is limited by the manufacturing technology available to it today, but wanted to release the technology to consumers now rather than wait for the next generation of CPUs. In other words, this is not a change in AMD's business model, as future CPUs from AMD will support overclocking.

Hallock also explained why AMD didn't go with more cores for its first 3D V-Cache CPU: most workloads outside of gaming don't reap much of a benefit. This is largely due to how different applications use cache memory. When it comes to games, a lot of the data is reused, which is a perfect scenario for a large cache, whereas something like video editing software can't take advantage of a large cache in the same way. AMD's secret to boosting performance in games is that more game data ends up sitting closer to the CPU, which results in a 12 ns latency for the CPU to retrieve that data from the L3 cache, compared to 60-80 ns when the data has to be fetched from RAM. Add to this the higher bandwidth of the cache, and it makes sense how the extra cache helps boost performance in games.
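
A back-of-the-envelope average-access-time calculation shows why those figures matter; only the 12 ns and 60-80 ns latencies come from the article, while the hit rates below are invented purely for illustration:

```python
# Back-of-the-envelope average memory access time (AMAT) using the
# latencies quoted above. The hit rates are invented for illustration.

L3_LATENCY_NS = 12.0   # data found in L3 cache (per the article)
RAM_LATENCY_NS = 70.0  # midpoint of the quoted 60-80 ns

def amat_ns(l3_hit_rate: float) -> float:
    """Average access latency for a given L3 hit rate."""
    return l3_hit_rate * L3_LATENCY_NS + (1.0 - l3_hit_rate) * RAM_LATENCY_NS

# Suppose the extra 64 MB of V-Cache lifts a game's L3 hit rate from 70% to 90%:
print(f"32 MB L3: {amat_ns(0.70):.1f} ns")  # 29.4 ns
print(f"96 MB L3: {amat_ns(0.90):.1f} ns")  # 17.8 ns on average
```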

For more details, please see the video below. The interesting part starts around the 45:30 mark.


222 Comments on AMD's Robert Hallock Confirms Lack of Manual CPU Overclocking for Ryzen 7 5800X3D

#126
Taraquin
CutechriYeah? 12900K locked at 35 W does around 12600 in CB R23. A 5950X does about half at the same wattage. The 12900K consumes a ton because of Intel's idiotic PL2 limits. The rest of the chips are very competitive and some beat Zen 3 in efficiency.
On notebooks Zen 3 is more efficient at low wattage vs ADL. My 5600X is far more efficient than my 12400F if I restrict power (at 50 W the 5600X beats the 12400F in CB23, but the stock 12400F uses 5 W less and scores 600 points more); with both running stock, the 12400F is slightly more efficient in most cases.
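
For reference, the efficiency being compared here is just score divided by package power; a quick sketch using the numbers claimed in this exchange (treat them as the posters' claims, not verified measurements):

```python
# Points-per-watt from the figures claimed in this exchange; these are
# the posters' numbers, not verified measurements.

def points_per_watt(cb23_score: float, package_watts: float) -> float:
    return cb23_score / package_watts

print(points_per_watt(12600, 35))      # ~360 pts/W: 12900K capped at 35 W (claimed)
print(points_per_watt(12600 / 2, 35))  # ~180 pts/W: "a 5950X does about half"
```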
Posted on Reply
#127
SL2
fevgatosIntel walks all over Zen 3 in gaming, both in performance and efficiency, since they consume a lot less power
Intel is faster, but it uses more power to get there; it's not more efficient, here at least. It's the seemingly less optimized 5800X alone that uses 3% more energy, while the other Zen 3 models use less.

Posted on Reply
#128
JustBenching
MatsIntel is faster, but it uses more power to get there; it's not more efficient, here at least. It's the seemingly less optimized 5800X alone that uses 3% more energy, while the other Zen 3 models use less.

This is not gaming. Check the igorslab review where he did a gaming efficiency test.
Posted on Reply
#129
SL2
fevgatosThis is not gaming. Check the igorslab review where he did a gaming efficiency test.
Yeah, you're right, I just saw it.
Posted on Reply
#130
Taraquin
fevgatosThis is not gaming. Check the igorslab review where he did a gaming efficiency test.
In general Intel is more efficient running single-core and AMD multi-core; most games use few cores/threads most of the time.
Posted on Reply
#131
SL2
The dual chiplet models are behind in efficiency, but not the other two. I'm looking at the last graph.

That's just efficiency tho.
Posted on Reply
#132
Valantar
TaraquinIn general Intel is more efficient running single-core and AMD multi-core; most games use few cores/threads most of the time.
This is inaccurate - Intel's cores easily scale past 50 W/core in ST loads, while AMD's cores top out at ~20 W/core. Intel's cores are also faster with ADL, but not enough to match the power consumption (that would require them to be ~2.5x faster!). What we're seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100 W on Threadripper; 20+ W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in lightly threaded loads, especially bursty ones where Intel might be able to intermittently clock down and doesn't need to sustain peak clocks as it would under a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power as long as it's being consumed, after all - but it's important to correctly attribute this. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.
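
To make the attribution argument concrete, here is a toy model of package power as a fixed per-CCD Infinity Fabric cost plus per-core power; every number in it is an invented assumption, not a measured figure:

```python
# Toy model of the attribution argument: package power = fixed Infinity
# Fabric cost per active CCD + per-core power. Every number is invented.

CORE_W = 8.0         # assumed power per loaded core
IF_W_PER_CCD = 10.0  # assumed fixed IF cost per active CCD

def package_power(active_cores: int, ccds: int) -> float:
    return ccds * IF_W_PER_CCD + active_cores * CORE_W

# Lightly threaded load (2 busy cores): the fixed IF cost dominates, and a
# second CCD adds power without adding any work.
print(package_power(2, ccds=1))  # 26.0 W
print(package_power(2, ccds=2))  # 36.0 W for the same 2-core workload
```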
Posted on Reply
#133
JustBenching
ValantarThis is inaccurate - Intel's cores easily scale past 50 W/core in ST loads, while AMD's cores top out at ~20 W/core. Intel's cores are also faster with ADL, but not enough to match the power consumption (that would require them to be ~2.5x faster!). What we're seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100 W on Threadripper; 20+ W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in lightly threaded loads, especially bursty ones where Intel might be able to intermittently clock down and doesn't need to sustain peak clocks as it would under a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power as long as it's being consumed, after all - but it's important to correctly attribute this. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.
That is true, it's the fabric. That's why Alder Lake is insanely efficient at 35 W, for example, while Zen 3 is absolutely horrific.

But even in normal out-of-the-box operation Alder Lake is more efficient in most tasks; it only loses in full-core loads because of that 240 W PL2.
Posted on Reply
#134
Valantar
fevgatosThat is true, it's the fabric. That's why Alder Lake is insanely efficient at 35 W, for example, while Zen 3 is absolutely horrific.

But even in normal out-of-the-box operation Alder Lake is more efficient in most tasks; it only loses in full-core loads because of that 240 W PL2.
That's debatable, and highly dependent on the workload - they lose against 1-CCD Ryzen in 100% load ST tasks simply due to the massive scaling of their cores (it doesn't matter if you save 20W on your interconnect if your core consumes 30W more), but if the load is more intermittent or lighter, then it can indeed win - it all depends how much the core is being loaded. There's also the interesting example of monolithic Ryzen (Cezanne, Rembrandt), where their mobile chips trounce Alder Lake for efficiency at anything below 45W, but lose above that as Intel has more room to scale clocks.

This is why I'm hoping AMD moves to some sort of integrated bridge tech with Zen4, at least for MSDT chips (it might not be feasible for EPYC/Threadripper due to the sheer thermal density of 8 CCDs packed that tightly). Going that route would allow them to essentially eliminate this disadvantage entirely. But unless they do, this disadvantage isn't going anywhere.
Posted on Reply
#135
JustBenching
ValantarThat's debatable, and highly dependent on the workload - they lose against 1-CCD Ryzen in 100% load ST tasks simply due to the massive scaling of their cores (it doesn't matter if you save 20W on your interconnect if your core consumes 30W more), but if the load is more intermittent or lighter, then it can indeed win - it all depends how much the core is being loaded. There's also the interesting example of monolithic Ryzen (Cezanne, Rembrandt), where their mobile chips trounce Alder Lake for efficiency at anything below 45W, but lose above that as Intel has more room to scale clocks.

This is why I'm hoping AMD moves to some sort of integrated bridge tech with Zen4, at least for MSDT chips (it might not be feasible for EPYC/Threadripper due to the sheer thermal density of 8 CCDs packed that tightly). Going that route would allow them to essentially eliminate this disadvantage entirely. But unless they do, this disadvantage isn't going anywhere.
Any examples where they lose in ST workloads? Remember we are talking about efficiency, not power consumption.

As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X both in performance and efficiency.

Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny since the 10900K is basically a node and an architecture from 2015, lol
Posted on Reply
#136
Taraquin
ValantarThis is inaccurate - Intel's cores easily scale past 50 W/core in ST loads, while AMD's cores top out at ~20 W/core. Intel's cores are also faster with ADL, but not enough to match the power consumption (that would require them to be ~2.5x faster!). What we're seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100 W on Threadripper; 20+ W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in lightly threaded loads, especially bursty ones where Intel might be able to intermittently clock down and doesn't need to sustain peak clocks as it would under a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power as long as it's being consumed, after all - but it's important to correctly attribute this. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.
I mostly agree, but at semi-low clock speeds Zen 3 is very efficient. My 5600X capped at 45 W runs 4.85 GHz SC and 3.7 GHz MC, and the IO die uses 20 W then. Two CCDs are a different matter, but single-CCD chips are really efficient at low power.
fevgatosAny examples where they lose in ST workloads? Remember we are talking about efficiency, not power consumption.

As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X both in performance and efficiency.

Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny since the 10900K is basically a node and an architecture from 2015, lol
The 10900K has 2 more cores, 4 more threads and higher clocks though. Skylake was a very good architecture :)
Posted on Reply
#137
Valantar
fevgatosAny examples where they lose in ST workloads? Remember we are talking about efficiency, not power consumption.

As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X both in performance and efficiency.

Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny since the 10900K is basically a node and an architecture from 2015, lol
The 5800X is an outlier among Zen3 though - while the 5900X and 5950X have higher single core power draws, the 5800X matches or exceeds their per-core draw from 6-8 cores. Yet it clocks lower. This likely means that the 5800X is a relatively different bin from both the 5600X and 59xxX chips, one where power consumption under high loads is less important - simply because it has more room to move with a 105W/138W power budget and just one CCD. Literally every other Zen3 product out there would do better in that comparison against the 10900K. Which, of course, ignores the 10900K having a 2c4t advantage. So, Intel gets the inherent efficiency advantage of being "wide and slow" compared to AMD's somewhat low binned, high clocked 5800X, and still only matches them? That's not a particularly impressive showing.

Is this the review you're referring to, btw? I can't find that they say the 12900K is generally more efficient than the 5950X there - in that (extremely unreadable) graph of theirs they seem to both take the lead in various tests. I have no idea which of them are ST and which are MT, though. I have seen ST tests where AMD comes out looking decent in terms of efficiency against ADL, but sadly I can't remember where - and even more sadly, most reviewers limit their efficiency testing to one or two scenarios, which really limits results.
TaraquinI mostly agree, but at semi-low clock speeds Zen 3 is very efficient. My 5600X capped at 45 W runs 4.85 GHz SC and 3.7 GHz MC, and the IO die uses 20 W then. Two CCDs are a different matter, but single-CCD chips are really efficient at low power.
Yeah, it's still a very efficient architecture - it's just getting to a point where the higher power floor of package-based IF is starting to show its weaknesses.
Posted on Reply
#138
JustBenching
ValantarThe 5800X is an outlier among Zen3 though - while the 5900X and 5950X have higher single core power draws, the 5800X matches or exceeds their per-core draw from 6-8 cores. Yet it clocks lower. This likely means that the 5800X is a relatively different bin from both the 5600X and 59xxX chips, one where power consumption under high loads is less important - simply because it has more room to move with a 105W/138W power budget and just one CCD. Literally every other Zen3 product out there would do better in that comparison against the 10900K. Which, of course, ignores the 10900K having a 2c4t advantage. So, Intel gets the inherent efficiency advantage of being "wide and slow" compared to AMD's somewhat low binned, high clocked 5800X, and still only matches them? That's not a particularly impressive showing.

Is this the review you're referring to, btw? I can't find that they say the 12900K is generally more efficient than the 5950X there - in that (extremely unreadable) graph of theirs they seem to both take the lead in various tests. I have no idea which of them are ST and which are MT, though. I have seen ST tests where AMD comes out looking decent in terms of efficiency against ADL, but sadly I can't remember where - and even more sadly, most reviewers limit their efficiency testing to one or two scenarios, which really limits results.


Yeah, it's still a very efficient architecture - it's just getting to a point where the higher power floor of package-based IF is starting to show its weaknesses.
I think that's the one; there is a graph somewhere that shows consumption across all benches, and yes, the 12900K is both the fastest and the most efficient compared to the 5950X. I'll find it once I'm on my PC, I'm on the phone right now.
TaraquinThe 10900K has 2 more cores, 4 more threads and higher clocks though. Skylake was a very good architecture :)
Well, the 5950X has 33% more threads yet we are still comparing them, so does it matter?

I don't know, all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.
Posted on Reply
#139
ratirt
fevgatosWell, the 5950X has 33% more threads yet we are still comparing them, so does it matter?

I don't know, all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.
No it isn't :) I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.
Posted on Reply
#140
Jism
FlydommoOverclocking will soon be a thing of the past, like the combustion engine. If the stock 5800X 3D delivers significant performance gains over an overclocked 5800X, why wouldn't you go for the 5800X 3D? Just because the clock speed is lower?
People forget that the normal 5800X is a single CCD and doesn't suffer from the latency impact that 2-CCD based chips like the 5900X/5950X have.

The chip on its own is already fast enough. The extra cache seems like a nice goodbye wave to the ending AM4 platform. Applications and games that can benefit from the extra cache will surely get the extra from it.

Locked or not, with a proper board I think you can "extend" clocks using simple BCLK overclocking, as long as the board has an external clock generator. Hence why my 2700X is operating beyond 4.5 GHz in single threads.
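
For anyone unfamiliar, the arithmetic behind BCLK overclocking is simple: the core clock is the base clock times the multiplier, so nudging BCLK raises clocks even with a locked multiplier. A minimal sketch with illustrative values:

```python
# BCLK overclocking in a nutshell: core clock = BCLK x multiplier, so a
# board with an external clock generator can raise clocks even when the
# multiplier is locked. Values are illustrative, not a tuning guide.

def core_clock_mhz(bclk_mhz: float, multiplier: float) -> float:
    return bclk_mhz * multiplier

print(core_clock_mhz(100.0, 45.0))  # 4500 MHz at the stock 100 MHz BCLK
print(core_clock_mhz(102.0, 45.0))  # 4590 MHz from a modest 2% BCLK bump
# Caveat: on boards without separate clock domains, raising BCLK also
# overclocks PCIe/SATA, which limits how far this can go.
```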

I'd buy it. I'm not OC'ing anyway, as in manual clocks; but if the thing does provide boost, just plant a good cooler on it and you're good to go.
Posted on Reply
#141
JustBenching
ratirtNo it isn't :) I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.
I believe you, that's my point; what you just said applies to the 10900K, yet people were saying otherwise.
Posted on Reply
#142
chrcoluk
fevgatosIntel walks all over Zen 3 in gaming, both in performance and efficiency, since they consume a lot less power
I checked my post since you only partially quoted it; I wasn't talking about gaming specifically. My main point really was that I don't understand why people prefer to do inefficient and risky overclocks to get performance vs getting more out of the box.

Now I did check Igor's review since it got mentioned a few replies down, and the results are interesting. My Intel question was basically what would happen if you capped the Intel chips to 135 W, 95 W, and 65 W. It seems we may already have the answer for gaming, and if that's your main use for the chips, they're not that bad. Is it the case that if they're capped to 135 W you lose little performance? Kind of like the RTX 3000 series, which gains very little for the last 30% or so of power.
Posted on Reply
#143
JustBenching
chrcolukI checked my post since you only partially quoted it; I wasn't talking about gaming specifically. My main point really was that I don't understand why people prefer to do inefficient and risky overclocks to get performance vs getting more out of the box.

Now I did check Igor's review since it got mentioned a few replies down, and the results are interesting. My Intel question was basically what would happen if you capped the Intel chips to 135 W, 95 W, and 65 W. It seems we may already have the answer for gaming, and if that's your main use for the chips, they're not that bad. Is it the case that if they're capped to 135 W you lose little performance? Kind of like the RTX 3000 series, which gains very little for the last 30% or so of power.
Well, even for non-gaming the 12900K can be the most efficient CPU at everything. For example, at 35 W my 12900K scores 12600 in Cinebench R23. That makes it more efficient than the M1, and by far more efficient than any Zen 3.
Posted on Reply
#144
QuietBob
Bloax
very well, if you go by extremely rough (no GPU for easy 1:1 :^))) comparisons of very juiced configs for both :- )

Not well enough that I'd rate it worth your while to bruteforce (yes, that is Reboot, Enter voltage, Test, Reboot ... Compare, Pick Best Performers, Test, Reboot, Enter voltage ...) a working SOC, IOD and CCD voltage.
As without those, you're gonna have a lot more stutters than if you do.

It's especially hard to recommend with pretty sweet deals on 12700k's being suspiciously frequent.

Though if you're sitting on a Ryzen 1600x or 2600 - then it's probably a sweet processor.
Hold on, are these leaked benchmarks? So the first result would be a 5800X@4.85 boost (PBO on) and IF@2000, and the second 5800X3D@4.65 boost (assuming PBO is on)?
If so, things look really promising for the V-cache variant. Look at the gains:

+82% min fps
+15% for 1% lows
+31% for 0.1% lows
+69% for 0.01% lows
+83% for 0.005% lows

Even if it's a single game - and assuming the results are accurate - the +15% increase in 1% lows would be in line with AMD's previous statements.
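
For context, percentile "lows" like these are typically derived by sorting per-frame performance samples and reading off the value at a given percentile; a rough sketch of the usual method (the sample data is made up, and benchmarking tools differ in the exact details):

```python
# How "1% lows" and friends are typically derived: sort the per-frame fps
# samples and read the value at the given percentile. A rough sketch of
# the usual method; benchmarking tools differ in the exact details.

def percentile_low(fps_samples: list, pct: float) -> float:
    ordered = sorted(fps_samples)                        # worst frames first
    index = max(0, int(len(ordered) * pct / 100.0) - 1)  # cutoff for this percentile
    return ordered[index]

fps = [142, 138, 145, 60, 139, 141, 90, 143, 140, 144]  # made-up samples
print(percentile_low(fps, 1.0))  # worst ~1% of frames
print(percentile_low(fps, 0.1))  # 0.1% lows need far more samples to mean much
```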
Posted on Reply
#145
chrcoluk
fevgatosWell, even for non-gaming the 12900K can be the most efficient CPU at everything. For example, at 35 W my 12900K scores 12600 in Cinebench R23. That makes it more efficient than the M1, and by far more efficient than any Zen 3.
How have you come to that conclusion? In some workloads it hits crazy power usage, right? Or am I misunderstanding something? Or are you talking about capped power?

Is it the case that it's a good chip, just brought to market in a bad way with its shipping configuration?
Posted on Reply
#146
ratirt
ratirtNo it isn't :) I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.
fevgatosI believe you, that's my point; what you just said applies to the 10900K, yet people were saying otherwise.
I don't think my 5800X is that far off from the 12900K in power consumption either, though.
Posted on Reply
#147
trieste15
Would it not be possible for the 3D cache to run off a separate voltage plane?
Posted on Reply
#148
Valantar
fevgatosWell, the 5950X has 33% more threads yet we are still comparing them, so does it matter?

I don't know, all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.
Given that the 5950X has the exact same power limits as the 5800X (well, 6W higher boost power, 144W vs. 138W), that changes the picture quite a bit, no?
Posted on Reply
#149
NDown
AssimilatorOr AMD could... y'know... not sell a product that is quite obviously an unfinished experiment to customers. Because that would be the smart and ethical thing to do.

Strange how you chose to ignore what's literally the most obvious option.

At the end of the day, though, AMD is just shooting themselves in the foot with this Franken-CPU. Because someone will release a BIOS that "accidentally" removes the limit (or maybe AMD will do it themselves by fucking up AGESA, it's a coin toss), and idiots will flash that BIOS and burn their shiny new 5800X3Ds, and they'll moan and whine and complain about it on social media, and regardless of the fact that those users were the stupid ones, AMD's reputation will suffer.

It's amazing, Intel releases a line of CPUs that's actually competitive again and AMD immediately goes full retard and dreams up a product that nobody asked for and will do them harm over the long run, when what they actually should've done was just fucking lower their prices. But they've been riding the gravy train for so long that they've become greedier than Intel, something I thought impossible.
Yeah bro, lots of people really want to see AMD as a saving grace in the PC market lmaooo, while they're the same as Intel and every other profit-oriented company.

That being said, I also don't know if manual OC was ever worth it on Ryzen since the 1st gen came out; nothing big to my eyes, really. The price for a product at the end of its platform kinda stings, though.
Posted on Reply
#150
Valantar
trieste15Would it not be possible for the 3D cache to run off a separate voltage plane?
Probably not - it interfaces directly with the on-die L3 cache in a way that's supposed to be entirely transparent to the CPU. Making that work while having the two run off separate voltage planes sounds ... complicated, if not impossible, given that you'd have signalling between the two cache blocks at different voltages, which would then need conversion, adding latency, making the cache die slower to access, and making for inconsistent performance and unpredictable pipeline stalls.
Posted on Reply