# AMD's Robert Hallock Confirms Lack of Manual CPU Overclocking for Ryzen 7 5800X3D



## TheLostSwede (Mar 17, 2022)

In a livestream with HotHardware about AMD's mobile CPUs, Robert Hallock shed some light on the rumours about the Ryzen 7 5800X3D lacking manual overclocking. As per earlier rumours, which TechPowerUp! confirmed with our own sources, AMD's Ryzen 7 5800X3D lacks support for manual CPU overclocking, and AMD asked its motherboard partners to remove these features from the UEFI. According to the livestream, these CPUs are hard locked, so there's no workaround when it comes to adjusting the CPU multiplier or Voltage, but at least AMD has a good reason for it.

It turns out that the 3D V-Cache is Voltage-limited to a maximum of 1.3 to 1.35 Volts, which means that the regular boost Voltage of individual Ryzen CPU cores, which can hit 1.45 to 1.5 Volts, would be too high for the 3D V-Cache to handle. As such, AMD implemented the restrictions for this CPU. However, the Infinity Fabric and memory bus can still be manually overclocked. The lower boost Voltage also helps explain why the Ryzen 7 5800X3D has lower boost clocks, as the higher Voltages are likely needed to hit the higher frequencies.



 


That said, Robert Hallock made a point of mentioning that overclocking remains a priority for AMD and that the Ryzen 7 5800X3D is a one-off when it comes to these limitations. The reason is that AMD is limited by the manufacturing technology available to the company today, but it wanted to release the technology to consumers now, rather than wait for the next generation of CPUs. In other words, this is not a change in AMD's business model, as future CPUs from AMD will include overclocking.

Hallock also explained why AMD didn't go with more cores for its first 3D V-Cache CPU: most workloads outside of gaming don't reap much of a benefit. This is largely due to how different applications use cache memory. In games, a lot of the data is reused, which is a perfect scenario for a large cache, whereas something like video editing software can't take advantage of a large cache in the same way. This means that AMD's secret to boosting performance in games is that more game data ends up sitting closer to the CPU, which results in a 12 ns latency for the CPU to retrieve that data from the L3 cache, compared to 60-80 ns when the data has to be fetched from RAM. Add to this the higher bandwidth of the cache and it makes sense how the extra cache helps boost performance in games.
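The latency arithmetic above can be sketched as a simple average-memory-access-time model. This is a back-of-the-envelope illustration using only the latency figures quoted in the article; the hit rates are hypothetical values for illustration, not AMD measurements.

```python
# Back-of-the-envelope model of average memory access time (AMAT),
# using the latency figures quoted in the article. The hit rates below
# are hypothetical illustrative values, not AMD numbers.

L3_LATENCY_NS = 12      # L3 cache hit, per the article
RAM_LATENCY_NS = 70     # midpoint of the quoted 60-80 ns range

def amat(l3_hit_rate: float) -> float:
    """Average access time when data is either in L3 or fetched from RAM."""
    return l3_hit_rate * L3_LATENCY_NS + (1 - l3_hit_rate) * RAM_LATENCY_NS

# A bigger L3 keeps more game data resident, raising the hit rate and
# pulling the average latency down toward the 12 ns cache figure.
for hit_rate in (0.80, 0.90, 0.95):
    print(f"L3 hit rate {hit_rate:.0%}: avg latency {amat(hit_rate):.1f} ns")
```

Even a few percentage points of extra hit rate shave several nanoseconds off every average access, which is where the gaming uplift comes from.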

For more details, please see the video below. The interesting part starts around the 45:30 mark.










*View at TechPowerUp Main Site*


----------



## mama (Mar 17, 2022)

Fair enough.


----------



## Selaya (Mar 17, 2022)

i guess too much voltage will just fry the cache?
well, too bad i guess


----------



## Kissamies (Mar 17, 2022)

I still guess that there's PBO which auto overclocks it with safe voltages.


----------



## TheLostSwede (Mar 17, 2022)

Selaya said:


> i guess too much voltage will just fry the cache?
> well, too bad i guess


That does seem to be the case, yes.



MaenadFIN said:


> I still guess that there's PBO which auto overclocks it with safe voltages.


He didn't explicitly say that, but he seemed to imply so, yes. Apparently it's not possible to link to exact time codes on TPU, so updated with where the interesting part starts.


----------



## Kissamies (Mar 17, 2022)

TheLostSwede said:


> He didn't explicitly say that, but he seemed to imply so, yes. Apparently it's not possible to link to exact time codes on TPU, so updated with where the interesting part starts.


Mmkay, post updates when you get more info


----------



## DemonicRyzen666 (Mar 17, 2022)

So why does it have an X in its name then? 
it's not Ryzen 5800X 3D
it's a ryzen 5800 3D


----------



## TheLostSwede (Mar 17, 2022)

DemonicRyzen666 said:


> So why does it have an X in its name then?
> it's not Ryzen 5800X 3D
> it's a ryzen 5800 3D


The non X models can still be overclocked manually.

If you follow the link below and select Unlocked for Overclocking from the menu on the left, you'll see that the non X models are included.


https://www.amd.com/en/products/specifications/processors/11776%201736%201896%202466


----------



## Kissamies (Mar 17, 2022)

TheLostSwede said:


> The non X models can still be overclocked manually.
> 
> If you follow the link below and select Unlocked for Overclocking from the menu on the left, you'll see that the non X models are included.
> 
> ...


Yep, every (at least desktop, dunno about mobile ones) Ryzen is unlocked so far, 5800X3D just breaks this rule. AMD doesn't have special unlocked SKUs like Intel's K ones.



DemonicRyzen666 said:


> So why does it have an X in its name then?
> it's not Ryzen 5800X 3D
> it's a ryzen 5800 3D


Why not? The X ones have higher frequencies than the non-X ones. And since this one is boosted via the 3D cache, the X moniker is deserved even tho it's a locked SKU.


----------



## Nephilim666 (Mar 17, 2022)

How will it fare against an overclocked i5-12600k or i7-12700k I wonder...


----------



## Jism (Mar 17, 2022)

I think you can still fiddle around with BCLK. But once that DRAM starts to suffer from electromigration, you're toast. The cache processes all the data, so it could ruin the CPU's workings completely.


----------



## Calmmo (Mar 17, 2022)

A cpu that can only operate between 1.3 and 1.35v... minor details, i guess they were waiting for these to be out in the wild to reveal a _minor_ flaw like this


----------



## GoldenX (Mar 17, 2022)

So Curve Optimizer will make this CPU shine.


----------



## TheLostSwede (Mar 17, 2022)

Calmmo said:


> A cpu that can only operate between 1.3 and 1.35v... minor details, i guess they were waiting for these to be out in the wild to reveal a _minor_ flaw like this


No, not only between, up to. Edited the news post for clarification.


----------



## ratirt (Mar 17, 2022)

Not a big deal for me, to be honest. The CPU doesn't benefit much from CPU OC; memory and Infinity Fabric overclocking bring most of the performance gains.
For a first 3D V-Cache try-out CPU, I'm not surprised.


----------



## Tomorrow (Mar 17, 2022)

So I assume only the VCORE voltage is locked to 1.35 V max?

SOC, VDDP and VDDG are not locked?

I will wait for reviews and decide then whether I'll go with the 5800X3D for games or the 5950X for apps that rely on high clock speeds. Thankfully the 5950X has come down in price, now under 600€. That used to be 5900X territory. I'll get a nice performance boost regardless, coming from a 3800X.


----------



## Chomiq (Mar 17, 2022)

What's the actual street date for release of 5800X3D and 5700X?


----------



## Mats (Mar 17, 2022)

MaenadFIN said:


> Why not? The X ones have higher frequencies than the non-X ones. And since this one is boosted via the 3D cache, the X moniker is deserved even tho it's a locked SKU.


It is faster because of the 3D cache, hence the "3D" in the name.

It shouldn't be an "X" because it's not clocked higher than a 5800.

Pretty self-explanatory to me.


----------



## TheLostSwede (Mar 17, 2022)

Chomiq said:


> What's the actual street date for release of 5800X3D and 5700X?


20th of April apparently.
Ended up ordering a 5800X, as they dropped to $15 more than the MSRP of the 5700X here, pretty much over night.


----------



## Mats (Mar 17, 2022)

Chomiq said:


> What's the actual street date for release of 5800X3D and 5700X?


"These processors will be generally available from April 4, 2022, while the 5800X3D comes on April 20."








AMD Spring 2022 Ryzen Desktop Processor Update Includes Six New Models Besides 5800X3D (www.techpowerup.com)

In addition to the Ryzen 7 5800X3D, which AMD claims to be the world's fastest gaming processor, AMD gave its desktop processor product-stack a major update, with as many as six other processor models spanning a wide range of price-points that help the company better compete with the bulk of the...


----------



## ExcuseMeWtf (Mar 17, 2022)

I guess OC is becoming dead nowadays, even unlocked CPUs have little headroom and boost themselves nicely anyways.


----------



## DeathtoGnomes (Mar 17, 2022)

Nice write-up @TheLostSwede

I was almost thinking of not waiting and buying that 5800X3D. I want to upgrade now, and it's so hard not to go nuts, but I think the wait for next gen and AM5 will be well worth it.


----------



## TheLostSwede (Mar 17, 2022)

DeathtoGnomes said:


> Nice Write up @TheLostSwede
> 
> I was kind of thinking to not wait and buy that 5800x3d, almost. I want to upgrade now, its so hard not to go nuts now, but I think the wait for next gen and am5 will be well worth the wait.


The question is what the AM4 platform as a whole will cost.
That said, considering how long you've waited, you might as well try to make it another six months and skip straight to a DDR5 platform, be that from AMD or Intel.


----------



## Melvis (Mar 17, 2022)

MaenadFIN said:


> Yep, every (at least desktop, dunno about mobile ones) Ryzen is unlocked so far, 5800X3D just breaks this rule. AMD doesn't have special unlocked SKUs like Intel's K ones.
> 
> 
> Why not? The X ones have higher frequency than non-X's. And since this is a boosted one via the 3D cache, the X monicker is deserver even tho it's a locked SKU.



Its the Xtreme!!! Gaming Processor


----------



## DeathtoGnomes (Mar 17, 2022)

TheLostSwede said:


> The question is what the AM4 platform as a whole will cost.
> That said, considering how long you've waited, you might as well try to make it another six months and skip straight to a DDR5 platform, be that from AMD or Intel.


Assuming the supply problems are 'more functional' in 6-9 months, AM4 build costs will be interesting to see after launch.


----------



## Tomgang (Mar 17, 2022)

I guess that explains the lower core clocks. It's because of the lower voltage and no OC. Meh, it's not a chip for me then.

I'll stick to my 5600X and 5950X.

But fair enough if the V-Cache could be damaged. Let's just hope that's not the case for Zen 4 as well. Can you imagine all Zen 4 CPUs being locked for OC?


----------



## DeathtoGnomes (Mar 17, 2022)

Closer to the very end, last 5 minutes or so, they tried to pry info from Robert H. on the plans of AM5s longevity. Although nothing is concrete yet, _my magic 8-ball_™ says there might be something more to that discussion.



Tomgang said:


> Let's just hope that is not the case for Zen 4 as well. Can you imagine all Zen 4 cpu being locked for OC?


If that's the case with Zen 4, it will be incremental updates instead of a full upgrade. But, always a but, will AMD divide into the _big cache_ line for gaming and continue normalcy, per se, for the rest of the chip lineup?


----------



## bug (Mar 17, 2022)

So, this Frankenstein's creation isn't sewed together very well, is it?


----------



## Kissamies (Mar 17, 2022)

Melvis said:


> Its the Xtreme!!! Gaming Processor


Well, for Intel the X actually stood for Extreme 



TheLostSwede said:


> 20th of April apparently.
> Ended up ordering a 5800X, as they dropped to $15 more than the MSRP of the 5700X here, pretty much over night.


I guess that they were high as hell when they priced this and it's launching 20th of April..


----------



## Mats (Mar 17, 2022)

DeathtoGnomes said:


> I was kind of thinking to not wait and buy that 5800x3d, almost. I want to upgrade now, its so hard not to go nuts now, but I think the wait for next gen and am5 will be well worth that wait.


AM4 will stay around for quite some time according to Su. At the same time, I don't expect AMD to launch AM5 CPUs with the kind of high prices that Vermeer had, now that the competition is real.
I do expect AM5 boards to cost more, tho.


----------



## DeathtoGnomes (Mar 17, 2022)

MaenadFIN said:


> I guess that they were high as hell when they priced this and it's launching 20th of April..


better than April 1st.


----------



## Valantar (Mar 17, 2022)

So it's pretty safe to assume a CPU architecture more fundamentally designed with this feature in mind (rather than having connectors for it added because it was in development and needed a test vehicle) will have a separate cache voltage rail, right? This is probably non-trivial given how closely tied cache is to the cores, but that would seem like the logical way forward.


As for this news more broadly: if it only means there's no manual, multiplier-based OC, but PBO and CO are still there ... who cares? All-core fixed-multiplier OC on Ryzen is a pretty bad idea unless you're consistently running 100% load nT workloads anyhow. There's no reason why anyone should use that over PBO and CO tuning, unless what you're going for is worse performance and higher power consumption.


----------



## Tomgang (Mar 17, 2022)

DeathtoGnomes said:


> Closer to the very end, last 5 minutes or so, they tried to pry info from Robert H. on the plans of AM5s longevity. Although nothing is concrete yet, _my magic 8-ball_™ says there might be something more to that discussion.
> 
> 
> if thats the case with zen4, it will be incremental updates instead of a full upgrade. But, always a but, will AMD divide into the _big cache_ for gaming and continue normalcy, per se, for the rest of the chip line up.


It's difficult to say. Maybe the 5800X3D is some sort of a test drive, to see how many adopt it compared to the 5800X. There may be Zen 4 with and without 3D cache as well.

But no OC on Zen 4 would be a huge disappointment for me.


----------



## Mats (Mar 17, 2022)

MaenadFIN said:


> I guess that they were high as hell when they priced this and it's launching 20th of April..


Yeah, but it is indeed a higher chip inside that CPU.

_"Our highest chip launches on 4/20" _

Makes sense.


----------



## Valantar (Mar 17, 2022)

Tomgang said:


> It's difficult to say. Maybe the 5800X3D is some sort of a test drive.


It clearly is - it's the first product to hit the market with a brand-new technology; it's relatively limited in scope (one SKU, the last product for a five-year-old platform, etc.) and they have announced no plans for further models for this platform with the feature. Definitely a test drive.


Tomgang said:


> How many adopt it compared to the 5800X. There may be Zen 4 with and without 3D cache as well.


Almost definitely. It doesn't make sense on all SKUs, but a wider roll-out on a chip more thoroughly adapted to this (with a separate cache voltage rail, for example) would make a lot of sense. Something like every tier from x6xx  or x8xx upwards having a 3D cache-enabled top-end SKU would make sense (i.e. 7600 65W, 7600X3D 105W, 7800 65W, 7800X3D 105W, etc.). This would make a lot of sense if they can make the cache die on 7nm even when the CCDs move to 5nm, as that would free up capacity to churn out more cache dice.


Tomgang said:


> But no OC on Zen 4 would be a huge disappointment for me.


Just to be clear: no multiplier-based, fixed frequency and voltage OC is not "no OC". PBO and Curve Optimizer are still ways of overclocking, and they seem to be supported here. They also deliver better results in general on Zen2 and Zen3 than old-school OC techniques.


----------



## stimpy88 (Mar 17, 2022)

Oh dear...


----------



## Chaitanya (Mar 17, 2022)

The last time I overclocked a CPU was an AM2 CPU, and I haven't touched OC on either my 4770K or 3700X.


----------



## Chomiq (Mar 17, 2022)

Mats said:


> Yeah, but it is indeed a higher chip(s) inside that CPU.
> 
> _"Our highest chip launches on 4/20" _
> 
> Makes sense.


"It smokes... the competition".


----------



## Cutechri (Mar 17, 2022)

Who needs manual OC anymore on Zen 2 and above? Just PBO, and Curve Optimizer if on Zen 3. Or do as I do and completely ignore overclocking, because... why bother.


----------



## Bwaze (Mar 17, 2022)

So comparing the new 3D cache Ryzen at a fixed frequency (4 GHz) in AMD's presentation and stating a 15% uplift in gaming was quite dishonest, since the new 5800X3D will not achieve 5800X frequencies!


----------



## TheLostSwede (Mar 17, 2022)

Bwaze said:


> So comparing new 3D cache Ryzen at fixed frequency (4 GHz) in AMD presentation and stating 15% uplift in gaming was quite dishonest, since new 5800X3D will not achieve 5800X frequency!


What's dishonest with that? It was a technology demo of a CPU they never launched.


----------



## Mats (Mar 17, 2022)

Bwaze said:


> So comparing new 3D cache Ryzen at fixed frequency (4 GHz) in AMD presentation and stating 15% uplift in gaming was quite dishonest, since new 5800X3D will not achieve 5800X frequency!


That's a 12-core (6+6), which gives it more cache per core than an 8-core. Do you know what a prototype is?


----------



## Bomby569 (Mar 17, 2022)

I'm missing something here; every CPU ever launched has a limit it shouldn't be pushed beyond.
If we can't even try to OC, it's because it's already at that limit. If 1.35 V is the limit, it should be set at something more moderate, and we could use the silicon lottery to try and push it a bit further like any CPU before it.

So I assume this is just a CPU really pushed to its limits just to compete with Intel.


----------



## mama (Mar 17, 2022)

bug said:


> So, this Frankenstein's creation isn't sewed together very well, is it?


We will see...


----------



## Valantar (Mar 17, 2022)

Bomby569 said:


> I'm missing something here; every CPU ever launched has a limit it shouldn't be pushed beyond.
> If we can't even try to OC, it's because it's already at that limit. If 1.35 V is the limit, it should be set at something more moderate, and we could use the silicon lottery to try and push it a bit further like any CPU before it.
> 
> So I assume this is just a CPU really pushed to its limits just to compete with Intel.


It seems to be a bit more complex than that, thanks to the specifics of the cache die. The CPU cores themselves likely handle higher voltages and frequencies just fine, but if the cache die is on the same voltage rail as the cores and risks damage above 1.35 V, then obviously the cores can't be allowed to go that high either. Cache and logic also have quite different characteristics in silicon, especially when using different libraries (the cache die is ~2x the density of the cache on the CCD, after all), so the cache die likely just isn't designed to scale in the same way.
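A toy model of the shared-rail reasoning above, assuming (as speculated here, not confirmed by AMD) that the cores and the stacked cache die sit on one voltage rail; the limits used are the figures discussed in this thread:

```python
# Toy model of the shared-rail constraint: if the cores and the stacked
# cache die share one voltage rail, the rail's ceiling is the minimum
# of the individual domains' limits. Values are the figures discussed
# in this thread, not AMD specifications.

CORE_VMAX = 1.50   # typical single-core boost Voltage peak on Vermeer
CACHE_VMAX = 1.35  # reported 3D V-Cache limit

def shared_rail_vmax(*domain_limits: float) -> float:
    """The most fragile domain on the rail dictates the ceiling for all."""
    return min(domain_limits)

print(shared_rail_vmax(CORE_VMAX, CACHE_VMAX))  # the cache limit wins
```

A separate cache voltage rail would remove the `min()` coupling entirely, which is exactly the "logical way forward" suggested earlier in the thread.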


Bwaze said:


> So comparing new 3D cache Ryzen at fixed frequency (4 GHz) in AMD presentation and stating 15% uplift in gaming was quite dishonest, since new 5800X3D will not achieve 5800X frequency!


That's a prototype of an unreleased product. I doubt AMD would launch the 5800X3D unless it was faster than the 5800X - if it didn't make sense, they would make a lot more money selling those stacked dice in Epyc chips where HPC/server vendors would gobble them up at much higher prices. But ultimately, we'll have to see how this pans out in real-world reviews.


----------



## mb194dc (Mar 17, 2022)

Just lock the voltage but allow the multiplier to be changed? Can't be that complicated...


----------



## Mats (Mar 17, 2022)

Bomby569 said:


> If 1.35 V is the limit, it should be set at something more moderate, and we could use the silicon lottery to try and push it a bit further like any CPU before it.


How are you going to do that when manual OC is disabled?


Bomby569 said:


> So i assume this is just a CPU really pushed to it's limits just to compete with Intel.


No.
Even if Alder Lake hadn't been released yet, AMD would want the X3D to be as close to the 5800X clock frequencies as possible, otherwise it wouldn't make much sense to launch it.
AFAIK, the 3D cache is primarily made for Milan-X.


----------



## qubit (Mar 17, 2022)

The technical reason for this limitation is interesting, but as a customer, I just want it to overclock, so I'd pass on this model.


----------



## illusion archives (Mar 17, 2022)

DemonicRyzen666 said:


> So why does it have an X in its name then?
> it's not Ryzen 5800X 3D
> it's a ryzen 5800 3D


Maybe it's Ryzen 5800 X3D?


----------



## Mats (Mar 17, 2022)

qubit said:


> The technical reason for this limitation is interesting, but as a customer, I just want it to overclock, so I'd pass on this model.


The limitations may just as well be the very reason why we don't see more X3D models. Well, that and the fact that most X3D dies go to Milan-X.


----------



## tabascosauz (Mar 17, 2022)

Valantar said:


> It clearly is - it's the first product to hit the market with a brand-new technology; it's relatively limited in scope (one SKU, the last product for a five-year-old platform, etc.) and they have announced no plans for further models for this platform with the feature. Definitely a test drive.
> 
> Almost definitely. It doesn't make sense on all SKUs, but a wider roll-out on a chip more thoroughly adapted to this (with a separate cache voltage rail, for example) would make a lot of sense. Something like every tier from x6xx  or x8xx upwards having a 3D cache-enabled top-end SKU would make sense (i.e. 7600 65W, 7600X3D 105W, 7800 65W, 7800X3D 105W, etc.). This would make a lot of sense if they can make the cache die on 7nm even when the CCDs move to 5nm, as that would free up capacity to churn out more cache dice.
> 
> Just to be clear: no multiplier-based, fixed frequency and voltage OC is not "no OC". PBO and Curve Optimizer are still ways of overclocking, and they seem to be supported here. They also deliver better results in general on Zen2 and Zen3 than old-school OC techniques.



Haven't finished watching the video yet, but did he confirm PBO is actually around? Generally even auto PBO quickly enables higher Vcore peaks (1.5 V+) not possible at stock. Yes, CO is fine, but seeing as the current AGESA completely removed Boost Override (only to later reintroduce it), it sounds like they were prepping their firmware for the 5800X3D by removing the whole shebang. Without Override there's not much point to CO, since all Vermeer chips hit their stock global limits easily.

But knowing AGESA, they have an abysmal record of keeping features consistent (especially vendors like GB that release and pull a billion beta BIOSes weekly). So if the V-Cache really is that sensitive to voltage, then that sounds like a recipe for disaster.......... but I wonder if it's that the cache really can't handle the voltage, or that heat density is too much under certain heavy loads?

I think Intel have had separate cache clock and voltage for like a decade now - obviously a bit more involved with V-Cache, but I'd fully expect this whole deal to be resolved on AM5. The 5800X3D is just an experiment searching for guinea pigs.

Currently, on normal L3, cache frequency is very closely correlated to core effective clock at all times, so I'm curious to see if they have or will eventually decouple the cache clock.


----------



## Taraquin (Mar 17, 2022)

I wish it could at least use negative CO, since that requires no extra voltage; it could boost perf by 5%+.


----------



## freeagent (Mar 17, 2022)

That seems fair enough, I suppose. I didn't make the connection between cache and voltage. But even still.. this is a modified 5800X that has been gimped so it doesn't kill itself.. they shouldn't be charging 5900X money for it.


----------



## stimpy88 (Mar 17, 2022)

I think the evidence shows that AMD have just found something out about this CPU. I think they are seeing them fail in a short amount of time. As proof, I'd point out that a few days ago, AMD demanded that all the MB makers release a new emergency BIOS with OC support disabled, meaning that it was previously enabled for the last couple of months, while they have been sampling and testing.

We have also been told that there were thermal issues earlier on, and that they had to downclock it to make it run at a more suitable temperature. AMD maybe got a little bit carried away with the new packaging tech, and thought they could glue on some cache, all would be well, and it would be a cheap solution.

The benchmarks will be interesting. But I hope that cannibalizing the sales of the already existing 5800X, as well as the 5900X and possibly the 5950X, was worth AMD's experiment.


----------



## Mats (Mar 17, 2022)

Yeah, we'll be missing out on that sweet, juicy GHz OC everyone is talking about..

It's the age-old _I-want-something-for-nothing_ argument, just because you're only willing to pay extra for more cores and nothing else.

If it beats the 5900X in gaming, and I'm not saying that it does (I know the demo used 12-core models), then why not charge a premium for it? Hint: you're not paying for more cores, you pay for the extra cache.
It's for gaming, and you can't overclock manually.. if you don't like it, then don't buy it.



stimpy88 said:


> I think they are seeing them fail in a short amount of time.  For proof, I say that a few days ago, AMD demanded that all the MB makers release a new emergency BIOS with OC support disabled.


Just no. That's not proof, that's a guess at best.
If the 3D cache were anywhere close to as unreliable as you suggest, then AMD wouldn't be launching Milan-X this month.

This CPU won't cannibalize anything. Why? Well, read this thread and you'll get a few hints. Nobody thinks this is a bargain so far at $450.
Besides, this CPU is supposed to have limited availability, although we'll see about that..


----------



## Valantar (Mar 17, 2022)

stimpy88 said:


> I think the evidence shows that AMD have just found something out about this CPU.  I think they are seeing them fail in a short amount of time.  For proof, I say that a few days ago, AMD demanded that all the MB makers release a new emergency BIOS with OC support disabled.  Meaning that it was previously enabled for the last couple of months, while they have been sampling and testing.


Or they could just be bad at consistently pushing BIOS updates as needed - for which there is plenty of evidence.


stimpy88 said:


> We have also been told that there were thermal issues earlier on, and that they had to downclock it to make it run at a more suitable temperature.


There's been speculation as to that, but I can't recall that "we have been told" that. The reductions in base and boost clock are quite easily explained by the power limits being the same (105W/144W) but there being an additional die with 64MB of cache on it, requiring some power that the cores would otherwise have had access to.


stimpy88 said:


> AMD maybe got a little bit carried away by the new packaging tech, and thought they could glue on some cache, and all would be well, and it would be a cheap solution.


Cheap? This is clearly a lot more expensive than just selling a 5800X, and they'll be launching Zen4 for AM5 whether or not it includes 3D cache, so arguing that they did this because it somehow adds a cost benefit does not compute.


stimpy88 said:


> The benchmarks will be interesting.  But I hope that cannibalizing the sales of the already existing 5800x, as well as the 5900x and possibly 5950x were worth AMD's experiment.


How does it cannibalize those sales? And if it does, and the buyers are happy with their chips, does it matter?


----------



## TheLostSwede (Mar 17, 2022)

stimpy88 said:


> I think the evidence shows that AMD have just found something out about this CPU.  I think they are seeing them fail in a short amount of time.  For proof, I say that a few days ago, AMD demanded that all the MB makers release a new emergency BIOS with OC support disabled.  Meaning that it was previously enabled for the last couple of months, while they have been sampling and testing.
> 
> We have also been told that there were thermal issues earlier on, and that they had to downclock it to make it run at a more suitable temperature.  AMD maybe got a little bit carried away by the new packaging tech, and thought they could glue on some cache, and all would be well, and it would be a cheap solution.
> 
> The benchmarks will be interesting.  But I hope that cannibalizing the sales of the already existing 5800x, as well as the 5900x and possibly 5950x were worth AMD's experiment.


That was around a month and a half ago. It took about a month for that information to leak.
None of it was an emergency; it was just another AGESA update from AMD, and a beta version at that.


----------



## chrcoluk (Mar 17, 2022)

OC is yesteryear. If you tweak, you undervolt, or just do custom P-states.

This seems quite innovative and will be cool to see how it plays out as consumers get hold of the chips.

We are actually lucky the CPU companies are open about their products like this; they could just say "15% faster, but it's our secret sauce". Now, of course, Intel will probably make their own stacked cache in a future gen.


----------



## TheLostSwede (Mar 17, 2022)

Valantar said:


> Or they could just be bad at consistently pushing BIOS updates as needed - for which there is plenty of evidence.


Actually, AMD is pushing out a lot of test/beta builds to the motherboard makers that we as users never get a whiff of. This was most likely supposed to be one of those, but as it goes, it couldn't be kept a secret for long.



chrcoluk said:


> We are actually lucky the cpu companies are open about their products like this, they could just say "15 % faster but its our secret sauce", as now of course Intel will probably make their own stacked cache on a future gen.


Just look at how it is in the Arm world. We just have to trust the SoC makers there, as most of them are unwilling to share details that give any kind of insight into the secret sauce they've added to their chips. AMD and Intel are very open about how their products work in comparison.


----------



## Deleted member 24505 (Mar 17, 2022)

So 5800X3D users will be beta testing the 3D cache for AMD, in effect. How the tables turn, i.e. ADL.


----------



## dgianstefani (Mar 17, 2022)

RAM/IF OC is still available and that's where all the performance is anyway so what's the problem?


----------



## Mats (Mar 17, 2022)

Tigger said:


> So 5800x3D users will be beta testing the 3d cache for AMD in effect, how the tables turn ie ADL


No. Milan-X is launched before 5800X3D.


----------



## QuietBob (Mar 17, 2022)

Bummer, why would you release an enthusiast SKU and lock out enthusiast features? While I perfectly understand the reasons behind AMD's decision, I can't help feeling a little disappointed. I guess limiting the voltage is essentially a safeguard. Just think of those ill-advised "experts" who would happily apply 1.5+ V to static overclocks, only because they saw some random dude doing it.

On the flip side, lower maximum Vcore and boost clocks should mean lower power consumption and temperature. And perhaps we'll see stable 2000+ MHz on the IF as well.
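The "lower Vcore should mean lower power" intuition follows from the first-order CMOS dynamic-power relation P ∝ V²·f. A minimal sketch with illustrative numbers loosely taken from this thread (not measured 5800X3D figures):

```python
# First-order CMOS dynamic power model: P is proportional to V^2 * f.
# The voltage/frequency points below are illustrative, loosely based on
# the figures discussed in this thread, not measured 5800X3D numbers.

def relative_dynamic_power(v: float, f_ghz: float,
                           v_ref: float = 1.50, f_ref_ghz: float = 4.7) -> float:
    """Dynamic power relative to a reference voltage/frequency point."""
    return (v / v_ref) ** 2 * (f_ghz / f_ref_ghz)

# Capping boost at ~1.35 V / 4.5 GHz vs a ~1.50 V / 4.7 GHz chip:
ratio = relative_dynamic_power(1.35, 4.5)
print(f"~{1 - ratio:.0%} less dynamic power at the capped boost point")
```

Because voltage enters squared, even the modest Vcore cap cuts dynamic power noticeably more than the small frequency drop alone would suggest.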

I'll probably still end up getting one. My current workloads would see some improvement from the Zen 3 IPC lift, as well as from having access to 8 physical cores.


----------



## GerKNG (Mar 17, 2022)

And what does that have to do with an unlocked multiplier?


----------



## Mats (Mar 17, 2022)

dgianstefani said:


> RAM/IF OC is still available and that's where all the performance is anyway so what's the problem?


I dunno.

The amount of FUD in this thread is astonishing.

"Why should I pay for higher performance when it runs at lower clocks? Them clocks = my epeen. Lower clocks should mean lower price. I measure performance in Hz."

"This thing will break, and that's why AMD have decided to also destroy their entire server market this month."

"The extra cache is in fact a couple of 5G chips. I said it, which means it is the proof of what I said."


----------



## tabascosauz (Mar 17, 2022)

Mats said:


> Just no. That's not proof, that's a guess at best.
> If the 3D cache were anywhere close to as unreliable as you suggest, then AMD wouldn't be launching Milan-X this month.



EPYC is clocked way lower, if V-cache even remotely has a problem with longevity/power/heat in that envelope then we'd have a massive problem on our hands. Zen has never had a problem in that range. But for the past 4 generations AMD has been pushing Ryzen to the very limit of what its respective process can sustain and often struggling to reconcile the clocks and V-F it wants with the hardware it has, that again looks like the deal with the 5800X3D.

I guess we'll see. I certainly hope it's just a matter of thermals and AMD playing it safe for the first time in forever. Every generation we get increasing granularity and precision in terms of voltage domains and clock dividers - cache might be the next one.


----------



## TheinsanegamerN (Mar 17, 2022)

They could have just locked down the voltage if that was the case, and not all OCing. 

Nvidia locked down mobile GPU OCing over voltage concerns too, and the community ripped them a new one. 

But when AMD does it its OK.


----------



## Mats (Mar 17, 2022)

tabascosauz said:


> EPYC is clocked way lower, if V-cache even remotely has a problem with longevity/power/heat in that envelope then we'd have a massive problem on our hands. Zen has never had a problem in that range. But for the past 4 generations AMD has been pushing Ryzen to the very limit of what its respective process can sustain and often struggling to reconcile the clocks and V-F it wants with the hardware it has, that again looks like the deal with the 5800X3D.
> 
> I guess we'll see. I certainly hope it's just a matter of thermals and AMD playing it safe for tbe first time in forever. Every generation we get increasing granularity and precision in terms of voltage domains and clock dividers - cache might be the next one.


My point was that there's an idea going around that it will break even when AMD tries to play it safe, AKA more FUD.


----------



## dont whant to set it"' (Mar 17, 2022)

1.35 V should be more than sufficient for an all-core clock of 4.6 GHz, provided thermals are kept in check, that is.


----------



## Mats (Mar 17, 2022)

TheinsanegamerN said:


> They could have just locked down the voltage if that was the case, and not all OCing.
> 
> Nvidia locked down mobile GPU OCing over voltage concerns too, and the community ripped them a new one.
> 
> But when AMD does it its OK.


You're comparing a whole lineup to one single half-generation old, possibly limited in availability, almost niche, overpriced SKU.

It's not like you don't have options. Drop the bitterness.


----------



## Taraquin (Mar 17, 2022)

Mats said:


> Yeah we'll be missing out on that sweet, juicy GHz OC everyone is talking about..
> 
> View attachment 240110
> 
> ...


The 5800X has lower latency than the 5900X since it only has one CCD. A single core can reach 4.85 GHz stock and 5.05 GHz with PBO; the 5900X only goes 50 MHz higher. The 5900X has about the same power budget, so all-core speed is generally lower than on the 5800X. Combine that with slightly worse latency and games run faster on the 5800X, but games that utilize lots of cores prefer the 5900X.


----------



## Dr. Dro (Mar 17, 2022)

Non-issue, imo. Memory and Infinity Fabric settings remain unlocked, and if curve optimizer also works, this should be as versatile as the other Ryzen processors, IMO. The automatic tweaking algorithm on these processors is optimized to near perfection, and admittedly, above my own manual overclocking skills.



TheinsanegamerN said:


> They could have just locked down the voltage if that was the case, and not all OCing.
> 
> Nvidia locked down mobile GPU OCing over voltage concerns too, and the community ripped them a new one.
> 
> But when AMD does it its OK.


When did that happen? Not to brag but... my mobile RTX 3050 can OC and it OCs hard... it will run >2.1 GHz... insane little chip


----------



## DeathtoGnomes (Mar 17, 2022)

It looks like a lot of responses clearly come from those who haven't watched the video. So I partially agree with @Mats on this thread having a lot of FUD (and some whiney comments).


----------



## dgianstefani (Mar 17, 2022)

Literally who OCs Ryzen clocks? Tune the memory and IF and that's it.


----------



## ThrashZone (Mar 17, 2022)

DeathtoGnomes said:


> It looks like a lot of responses clearly show those who havent watched the video. So I partially agree with @Mats on this thread having a lot of FUD (and some whiney comments).


Hi,
Did the video contain any test against 12900k ?
If not who cares


----------



## Deleted member 24505 (Mar 17, 2022)

I noticed he did say CPU core frequency overclocking or core voltage adjustment, so stock vcore only?


----------



## neatfeatguy (Mar 17, 2022)

I'm not sure why some folks are a bit put off by the lack of OC capabilities of this upcoming 5800X3D.

Years ago OCing was an awesome way to push extra out of a CPU. I remember taking the AMD 64 X2 3800+ up from 2.0GHz to almost 3.2GHz.
I was able to push my PII x4 940 from 3.0 to 3.71 and that certainly helped.
I enjoyed the simplified OC capabilities of the PII and the i5-4670k that I ran at 4.4.

However, with how the current Ryzen CPUs manage boosts and how overclocking tends to give, overall, a minimal increased performance over letting the system manage boosts, I couldn't care less about being able to push an overclock on these CPUs.


----------



## bug (Mar 17, 2022)

neatfeatguy said:


> I'm not sure why some folks are a bit put off by lack of OC capabilities of this up and coming 5800X3D.


It's not a big deal, technically.
But for years, AMD rubbed it in Intel's face that they only sell fully unlocked CPUs, enabling users to extract all they want from them. This CPU flies in the face of all that. And it's worth pointing out.


----------



## Taraquin (Mar 17, 2022)

dgianstefani said:


> Literally who OC's Ryzen clock, tune the memory and IF and that's it.


PBO and Curve Optimizer can easily net you 5-6%. Even if PBO is disabled, using a negative CO is a big win since it can raise all-core frequency by several hundred MHz within the power budget at the same consumption/temp.

Static OC I agree is dead on Ryzen 5000. On Ryzen 3000 it could be really good sometimes if you got a golden sample 3600 and could run 4.4 GHz @ 1.25 V vs 4 GHz stock.
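A rough back-of-the-envelope for the clock figures in this thread (the 4.4/4.0 GHz golden-sample 3600 numbers and the 5.05/4.85 GHz 5800X numbers are the posters' claims, and real performance rarely scales 1:1 with clocks):

```python
def clock_uplift(oc_ghz: float, stock_ghz: float) -> float:
    """Return the relative frequency uplift as a percentage."""
    return (oc_ghz / stock_ghz - 1.0) * 100.0

# Claimed golden-sample Ryzen 5 3600: 4.4 GHz static OC vs ~4.0 GHz stock.
print(f"3600 static OC uplift: {clock_uplift(4.4, 4.0):.1f}%")

# Claimed 5800X single-core: 5.05 GHz with PBO vs 4.85 GHz stock.
print(f"5800X PBO single-core uplift: {clock_uplift(5.05, 4.85):.1f}%")
```

Which is why the thread treats static OC as dead on recent Ryzen: the boost algorithm already captures most of that headroom on its own.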


----------



## neatfeatguy (Mar 17, 2022)

bug said:


> It's not a big deal, technically.
> But for years, AMD rubbed it in Intel's face that they only sell fully unlocked CPUs, enabling users to extract all they want from them. This CPU flies in the face of all that. And it's worth pointing out.


Oh noes....! One CPU that goes against the grain.

I suppose I can understand if something like that bothers some.....but it's no sweat off my back.


----------



## Mats (Mar 17, 2022)

bug said:


> But for years, AMD rubbed it in Intel's face that they only sell fully unlocked CPUs, enabling users to extract all they want from them. This CPU flies in the face of all that. And it's worth pointing out.


An AMD representative has told us about the lack of manual overclocking more than a month ahead of the launch, and it has been pointed out numerous times across the net.
I don't see any piece of info missing so far. Repeating this news title isn't worth anything.


----------



## ThrashZone (Mar 17, 2022)

Hi,
Might have been a meaningful story if there were at least some tests to confirm catching the 12900K as gaming champ, but without that info it's just YouTube waste.


----------



## Mats (Mar 17, 2022)

ThrashZone said:


> Hi,
> Might of been a meaningful story if there were at least some tests to confirm catching the 12900k as gaming champ but without that info it's just youtube waste.


Well the info about the lack of manual OC is important regardless.

I guess it's too soon for details like that, but personally I don't expect the 5800X3D to be the faster one. Then there's the 12900KS... which seems to have been news to AMD as well.


----------



## Makaveli (Mar 17, 2022)

If this chip can do a 2000 MHz FCLK, then with PBO+CO and DDR4-4000 memory I think it will be pretty quick.


----------



## ThrashZone (Mar 17, 2022)

Mats said:


> Well the info about the lack of manual OC is important regardless.
> 
> I guess it's too soon for details like that, but personally I don't expect the 5800X3D to be the faster one. Then there's the 12900KS.. which seems to have been news to AMD as well.


Hi,
12900ks is just intel milking the market before amd releases this chip.

As far as this chip being locked to turbo clocks/voltages, it may not even need to be OC'd, and IF it gets close to either 12900K/KS, who will really care at that point? The 12900KS is $800 US.

Sucker bait


----------



## bug (Mar 17, 2022)

neatfeatguy said:


> On noes....! One CPU that goes against the grain.
> 
> I suppose I can understand if something like that bothers some.....but it's no sweat off my back.


It depends on how much you're willing to read into it.
A CPU unable to sustain a manual overclock like its siblings can mean that it has not undergone sufficient testing (wrt overclocking), meaning AMD somehow felt a need to rush it out. Or it can mean the 3D cache raises some problems AMD was unable/unwilling to fix.

TL;DR It makes the 5800X3D look like an epeen CPU ("look at me, I'm faster than Intel"), just like Intel made an epeen out of the 12900K when they gave it that humongous TDP limit just to be able to claim they can beat AMD.


----------



## ThrashZone (Mar 17, 2022)

Hi,
Cache oc'ing has been known to fry a chip.


----------



## bug (Mar 17, 2022)

ThrashZone said:


> Hi,
> Cache oc'ing has been known to fry a chip.


Nobody suggested the cache should be overclocked. Quite the contrary, manually overclocking this probably _affects_ the 3D cache when it shouldn't, that's why it was axed.


----------



## ThrashZone (Mar 17, 2022)

bug said:


> Nobody suggested the cache should be overclocked. Quite the contrary, manually overclocking this probably _affects _the 3D cache when it shouldn't, that's why it was axed.


Hi,
Can't individualize OC'ing, man, well I guess you're attempting to.

It's all OC'ing to me, one for all and all for none in this case, irony


----------



## TheinsanegamerN (Mar 17, 2022)

Dr. Dro said:


> Non-issue, imo. Memory and Infinity Fabric settings remain unlocked, and if curve optimizer also works, this should be as versatile as the other Ryzen processors, IMO. The automatic tweaking algorithm on these processors is optimized to near perfection, and admittedly, above my own manual overclocking skills.
> 
> 
> When did that happen? Not to brag but... my mobile RTX 3050 can OC and it OCs hard... it will run >2.1 GHz... insane little chip


Back in 2015:

> [GeForce Forums] Nvidia has officially blocked 900M overclocking
> "Unfortunately GeForce notebooks were not designed to support overclocking. Overclocking is by no means a trivial feature, and depends on thoughtful design of thermal, electrical, and other considerations. By overclocking a notebook, a user risks serious damage to the system that could..."
> (forums.anandtech.com)

The backlash was harsh enough that Nvidia eventually backed off.


neatfeatguy said:


> I'm not sure why some folks are a bit put off by lack of OC capabilities of this up and coming 5800X3D.
> 
> Years ago OCing was an awesome way to push extra out of a CPU. I remember taking the AMD 64 X2 3800+ up from 2.0GHz to almost 3.2GHz.
> I was able to push my PII x4 940 from 3.0 to 3.71 and that certainly helped.
> ...


Because taking away control from end users is generally seen as a bad thing. Even if the gains are minor, that doesn't mean people like platforms being more locked down. AMD has long run on the platform of having unlocked CPUs across the lineup, and now has become a massive hypocrite by locking down its own hardware. AMD fanbois will rip on Intel for making locked CPUs and "having lots of locked SKUs" yet will defend AMD when they lock one of their own chips.

It's a slippery slope. It's one CPU, or it's just the low end, oh it's just the Ryzen 7 and lower, it's not like you can do much, etc. etc.


neatfeatguy said:


> On noes....! One CPU that goes against the grain.
> 
> I suppose I can understand if something like that bothers some.....but it's no sweat off my back.





Mats said:


> You're comparing a whole lineup to one single half-generation old, possibly limited in availability, almost niche, overprized SKU.
> 
> It's not like you don't have options. Drop the bitterness.



Like I said, it's OK when AMD does it. Intel does it, the community will rip them apart, nvidia does it, the forums light up. AMD does it "lol why so bitter bro, its not like it matters bro".

This is what AMD fanbois call "mindshare" and claim all their competitors benefit from.


----------



## Valantar (Mar 17, 2022)

TheinsanegamerN said:


> Because taking away control from end users is generally seen as a bad thing. Even if the gains are minor, that doesnt mean people like platforms being more locked down. AMD has long run on the platform of having unlocked CPUs across the lineup, and now have become massive hypocrytes by locking down their own hardware. AMD fanbois will rip on intel for making locked CPUs and "having lotsof locked SKUs" yet will defend AMD when they lock one of their own chips.
> 
> It's a slippery slope. Its one CPU, or its just the low end, oh its just the ryzen 7 and lower, its not like you can do much, ece ece.
> 
> ...


IMO it's more of a case of "okay, it seems reasonable to limit this if there's significant risk of damage to the hardware if left open". The same line of reasoning isn't applicable to any traditional CPU (there is risk, but it isn't _significant_ - you need to try quite hard or be entirely devoid of knowledge to damage a conventional CPU with OCing), hence why the context makes this reasoning applicable here and not in other cases. This would be especially true if settings that are safe for every other 5000-series CPU are suddenly highly dangerous to this one. Does this constitute the start of a slippery slope? Unlikely IMO - if anything, features like PBO+ and CO are proof that AMD is working to provide _better_, _smarter_ OC functionality as time goes on. It is of course entirely possible that this constitutes an about-face, and that they will indeed be cracking down on OCing from now on. But that would be quite surprising, and a significant break with (even recent) actions from their side.


----------



## bug (Mar 17, 2022)

@Valantar So, AMD fixing whatever causes the "significant risk of damage" before launching this is not an option to you? You just go for whatever AMD says, hook, line and sinker?


----------



## Mats (Mar 17, 2022)

TheinsanegamerN said:


> Like I said, it's OK when AMD does it. Intel does it, the community will rip them apart, nvidia does it, the forums light up. AMD does it "lol why so bitter bro, its not like it matters bro".
> 
> This is what AMD fanbois call "mindshare" and claim all their competitors benefit from.


You're still failing to see the difference between a single SKU and a whole mobile GPU generation, and calling people fanboys doesn't change that.

Because you have, like I said, plenty of options. Alder Lake offers great performance, and slower and faster models are critically acclaimed. Don't like Intel? Well, AMD is about to launch more CPUs, even though they're not really new anymore. AM5 is maybe 6 months away if you want something newer, and right after that there's Raptor Lake.

Comparing all that to a situation where Nvidia in reality had very little competition in mobile gaming is just very misleading and biased.

You have options, don't be bitter.


			https://pcpartpicker.com/products/cpu/#F=96,98


----------



## Valantar (Mar 17, 2022)

bug said:


> @Valantar So, AMD fixing whatever causes the "significant risk of damage" before launching this is not an option to you? You just go for whatever AMD says, hook, line and sinker?


Let's see ... the cache die can't handle more than 1.35 V, and it seems its voltage is tied to vCore. Other Ryzens routinely run vCore above 1.4 V. How, exactly, do you propose they "fix this before launch"? By significantly restructuring the Vermeer die to separate out a cache voltage rail? Or by magically overcoming what is likely a limitation of the high-density TSMC 7nm cache libraries used for the cache die? Because that's likely what such a fix would require. So no, I don't see that as a likely option.

As for whether this constitutes "going for whatever AMD says", that ... I'll leave that for you to judge. I'll side on the side of "let's not allow idiots to break their chips _too_ easily" on this one. You're welcome to disagree.


----------



## ThrashZone (Mar 17, 2022)

Hi,
OEMs lock chips all the time, not much is made of it, just another job for @unclewebb to deal with via ThrottleStop


----------



## Mats (Mar 17, 2022)

TheinsanegamerN said:


> It's a slippery slope. Its one CPU, or its just the low end, oh its just the ryzen 7 and lower, its not like you can do much, ece ece.


It seems like you don't understand why this happened at all. It's just AMD being mean and greedy?
You really haven't seen the video, or you just don't believe what they say. Your reasoning right there is a slippery slope, and all you do is add FUD.

AMD had three options.

1 - Launch the 5800X3D like they say they will, no OC.
2 - Not launch at all.
3 - Allow OC even though AMD knows that the CPU will break.

Not a hard choice really.


----------



## Turmania (Mar 17, 2022)

I personally would not touch a CPU that has voltage limitations. It does not instill confidence in me that it will be a long-lasting product even under normal conditions. It also does not instill confidence in me that when it starts to heat up it won't shut down the system during a session. Why risk it?


----------



## Valantar (Mar 17, 2022)

Turmania said:


> I personally would not touch a cpu that has voltage limitations. It does not install confidence in me that it will be long lasting product even under normal conditions. As well it does not install confidence in me that when it starts to heat up it can shut down the system during a session. Why risk it?


Why would it shut down if it's being kept below tJmax? These limits are a widely documented part of the spec; keep within them and you should be fine, and if not then that qualifies you for a warranty replacement of the CPU, as it's not performing as per specifications.


----------



## Makaveli (Mar 17, 2022)

Turmania said:


> I personally would not touch a cpu that has voltage limitations. It does not install confidence in me that it will be long lasting product even under normal conditions. As well it does not install confidence in me that when it starts to heat up it can shut down the system during a session. Why risk it?


Why would a CPU not last long with the manufacturer setting a voltage limit and someone keeping it at stock?

I'm not seeing the logic in this post.


----------



## TheinsanegamerN (Mar 17, 2022)

Mats said:


> You're still failing to see the difference between a single SKU and a whole mobile GPU generation, and calling people fanboys doesn't change that.
> 
> Because you have, like I said, plenty of options. Alder Lake offers great performance, and slower and faster models are critically acclaimed. Don't like Intel? Well AMD is about to launch more CPU's, even though they're not really new anymore. AM5 is maybe 6 months away if you want something newer, and right after that there's Raptor Lake.
> 
> ...


It's hypocritical of AMD, after using Intel's locked CPUs as a talking point, to then lock their own CPUs.

Options from other companies existing is a red herring argument.


Mats said:


> It seems like you don't understand why this happened at all.


Higher voltage can damage the cache. Not hard to understand. 


Mats said:


> It's just AMD being mean and greedy?


It's not like AMD jumped the price on their CPUs by 30-50 percent with the 5000 series, or refused to support the 400 series chipsets until public backlash forced their hand, or like they did the same thing with the 300 series. 

Newsflash, AMD is a corporation. All corporations are greedy, and need to be held to task when that greed spirals out of control. 


Mats said:


> You really haven't seen the video, or you just don't believe what they say. Your reasoning right there is a slippery slope, and all you do is adding FUD.
> 
> AMD had three options.
> 
> ...


"muh FUD" - every fanboi ever.

AMD could have, you know, allowed OC and locked the voltage so the cache doesn't get hurt. Just an idea.

Stop accusing me of FUD when all of your arguments rely on baseless handwaving of any points the opposition makes.


----------



## Turmania (Mar 17, 2022)

Makaveli said:


> Why would a cpu not last long with the manufacturer setting a voltage limit and someone keeping it at stock.
> 
> I'm not seeing the logic in this post.


Wait two years, then you might see it. For now, I understand you. But I have a very bad feeling about this product and its longevity...


----------



## eidairaman1 (Mar 17, 2022)

DemonicRyzen666 said:


> So why does it have an X in its name then?
> it's not Ryzen 5800X 3D
> it's a ryzen 5800 3D


The OEM 5800 can still manually overclock.


----------



## neatfeatguy (Mar 17, 2022)

Mats said:


> It seems like you don't understand why this happened at all. It's just AMD being mean and greedy?
> You really haven't seen the video, or you just don't believe what they say. Your reasoning right there is a slippery slope, and all you do is adding FUD.
> 
> AMD had three options.
> ...



These days people complain just to complain. Here's how I see it, regardless of how it gets handled:


*On one hand you have*:
OMG! AMD! How dare you limit voltages and OC capabilities on your new CPU! We can't manually adjust settings! Your product is bad and you should feel bad! You suck like Intel and Nvidia!






*On the other hand if limits weren't in place and bad things happened*:
OMG! AMD! How dare you not limit the voltage and OC capabilities on your new CPU! CPUs are dying! Your product is bad and you should feel bad!





Looks like AMD is just f'ing things up, no matter what choice they make. Not sure if I feel bad for AMD or the people that just have to complain about the situation one way or another.


----------



## Icon Charlie (Mar 17, 2022)

TheLostSwede said:


> The question is what the AM4 platform as a whole will cost.
> That said, considering how long you've waited, you might as well try to make it another six months and skip straight to a DDR5 platform, be that from AMD or Intel.


IMHO.... the DDR5 platform is going to be expensive in the short run, and maybe down the road as well. This is why I upgraded my rig last December and am running a total of 64 GB of underclocked DDR4-4000 RAM and purchased a 5900 OEM.

I simply do not see any fantastic upswings in performance in the near future. The days of getting great performance for the average DIYer are long gone. My rig will be fine for 2 to 3 years from now because of the fantastic price + performance + value I got on this setup.


----------



## Mats (Mar 17, 2022)

TheinsanegamerN said:


> It's hypocritical of AMD, ater using intel's locked CPUs as a talking point, to then lock their own CPUs.


CPUs, in plural? I'm only aware of one single SKU that seems to have triggered a few alarmists here and there.



TheinsanegamerN said:


> Options from other companies existing is a red herring argument.


Call it what you want, they're still options, as AMD and Intel platforms are mostly interchangeable. That's far from the Nvidia situation, where you were pretty much forced to buy an Nvidia laptop if you wanted a high-end gaming laptop.

The last desktop CPU I bought was a 2600K, the one before that was an Opteron 146. I don't see the point in sticking with one brand only as they both go south once in a while, or for years in the worst case.
You know, if you accepted Intel CPUs as an option you'd find it easier to process the news of less desirable AMD CPUs.



TheinsanegamerN said:


> It's not like AMD jumped the price on their CPUs by 30-50 percent with the 5000 series, or refused to support the 400 series chipsets until public backlash forced their hand, or like they did the same thing with the 300 series.


Sure, but that's off topic here. When I asked if they're greedy, I was talking about the topic in this thread, not old news.



TheinsanegamerN said:


> AMD could have, you know, allowed OC and locked the voltage so the cache doesnt get hurt. Just an idea.


Well then what's the point? AMD would get so bashed, and rightly so, for falsely claiming that it's overclockable yet not allowing you to raise the voltage. Now that would have been misleading.
See post #103.

I still think it should be called the 5800 3D tho, as it's not higher clocked than the 5800.



TheinsanegamerN said:


> Stop accusing me of FUD when all of your arguments rely on baseless handwaving of any points the opposition makes.


That doesn't make sense. This quote of yours is spreading fear, and it's based on pure speculation and no facts, i.e. baseless:
"It's a slippery slope. Its one CPU, or its just the low end, oh its just the ryzen 7 and lower, its not like you can do much, ece ece."


----------



## mechtech (Mar 17, 2022)

Meh, the chips sometimes perform better on 'auto' anyway with good cooling.


----------



## Cutechri (Mar 17, 2022)

mechtech said:


> Meh, the chips sometimes performs better on 'auto' anyway with good cooling.


5.2 GHz stock on my 5900X on a Noctua NH-U12A, 'nuff said.


----------



## Pastuch (Mar 17, 2022)

The real question here is how it handles Warzone. A heavily OC'd 12900K with a good bin and insanely tight B-die can do 250+ fps. If the 5800X3D can match that, I'll sell my 5600X and get one ASAP. I can only get 1733 out of my FCLK right now and my averages are only 200 fps at 1080p.


----------



## Adam Krazispeed (Mar 17, 2022)

OK, AMD. IM NOT BUYING ONE NOW! GO SCREW OFF.

SO ok AMD, SO ITS NOT an AMD Ryzen 7 5800X3D its SHOULD BEEN CALLED RYZEN "5800 GIMPED EDITION"    or 58003D cache NO OC NOT AN X MODEL TEHN Y FREEKING AHOLES... NOT BUYING ONE NOW EVER, NOT BUYING ZEN4 / AM5 EITHER, so

IM GOING INTEL.  AMD, IM DONE WITH UR BS.

zen3 was never worth the 300+ that they were, i wish I NEVER BOUGHT ANY OF YOUR BS RYZEN CPUS NOW IM SELLING EVERY FREAKING PIECE OF AMD CRAP I HAVE IM DONE, IM DONE WITH AMD.


----------



## Assimilator (Mar 17, 2022)

neatfeatguy said:


> These days people complain just to complain. Here's how I see it, regardless of how it gets handled:
> 
> 
> *On one hand you have*:
> ...


Or AMD could... y'know... *not sell a product that is quite obviously an unfinished experiment to customers*. Because that would be the smart and ethical thing to do.

Strange how you chose to ignore what's literally the most obvious option.

At the end of the day, though, AMD is just shooting themselves in the foot with this Franken-CPU. Because someone will release a BIOS that "accidentally" removes the limit (or maybe AMD will do it themselves by fucking up AGESA, it's a coin toss), and idiots will flash that BIOS and burn their shiny new 5800X3Ds, and they'll moan and whine and complain about it on social media, and regardless of the fact that those users were the stupid ones, AMD's reputation will suffer.

It's amazing, Intel releases a line of CPUs that's actually competitive again and AMD immediately goes full retard and dreams up a product that nobody asked for and will do them harm over the long run, when what they actually should've done was just fucking lower their prices. But they've been riding the gravy train for so long that they've become greedier than Intel, something I thought impossible.


----------



## neatfeatguy (Mar 17, 2022)

Wow. Some rage and conspiracy posts are popping up now.  I'll just quietly back away from this thread.


----------



## DeathtoGnomes (Mar 17, 2022)

ThrashZone said:


> Hi,
> Did the video contain any test against 12900k ?
> If not who cares


Exactly, who cares about the 12900K in this PR video?



Valantar said:


> As for whether this constitutes "going for whatever AMD says", that ... I'll leave that for you to judge. I'll side on the side of "let's not allow idiots to break their chips _too_ easily" on this one. You're welcome to disagree.


I died laughing at this. People will create narratives for the sake of arguing the fanboi point of view. When was the last time AMD requested removing the ability to OC, only to reinstate it once the AGESA was updated? I'd bet AMD will reinstate OC'ing on this chip once they can work out whatever the problem is that warranted blocking it, which means an AGESA update is forthcoming after launch.


----------



## eidairaman1 (Mar 18, 2022)

DeathtoGnomes said:


> Exactly, who cares about the 12900K in this PR video?
> 
> 
> I died laughing at this. People will create narratives for the sake of arguing the fanboi point of view. When was the last time AMD requested removing the ability to OC, only to  reinstate it once the AGESA was updated? I'd bet AMD will reinstate OC'ing on this chip once they can work out whatever the problem is to warrant the blocking of OC'ing, which means an AGESA is forthcoming after launch.


I see this as well, further testing then a BIOS update to enable it.

Boy the ragers in here are just stupid...


----------



## chrcoluk (Mar 18, 2022)

Turmania said:


> I personally would not touch a cpu that has voltage limitations. It does not install confidence in me that it will be long lasting product even under normal conditions. As well it does not install confidence in me that when it starts to heat up it can shut down the system during a session. Why risk it?


All CPUs have voltage limits in their specs.


----------



## Cutechri (Mar 18, 2022)

Assimilator said:


> It's amazing, Intel releases a line of CPUs that's actually competitive again and AMD immediately goes full retard and dreams up a product that nobody asked for and will do them harm over the long run, when what they actually *should've done was just fucking lower their prices*. But they've been riding the gravy train for so long that they've become greedier than Intel, something I thought impossible.


Yeah, those new budget Ryzens would not have existed if it wasn't for Intel providing stiff competition. You could've begged them for budget options all you want and you wouldn't have seen it. Almost as if AMD is just like every other company, only chasing money..


----------



## chrcoluk (Mar 18, 2022)

Assimilator said:


> Or AMD could... y'know... *not sell a product that is quite obviously an unfinished experiment to customers*. Because that would be the smart and ethical thing to do.
> 
> Strange how you chose to ignore what's literally the most obvious option.
> 
> ...


I am confused: you prefer to OC a weaker chip with much more heat and power for that 15% performance, instead of it being included via new tech out of the box? Seems bizarre to me. Also, are Intel 250 W CPUs competitive at the same power level as AMD Zen 3?


----------



## Cutechri (Mar 18, 2022)

chrcoluk said:


> also are intel 250w cpus competitive at same power level as AMD Zen3?


Yeah? A 12900K locked at 35 W does around 12600 in CB R23. A 5950X does about half at the same wattage. The 12900K consumes a ton because of Intel's idiotic PL2 limits. The rest of the chips are very competitive and some beat Zen 3 in efficiency.
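For scale, the claimed numbers work out to roughly the following points-per-watt (a sketch using the post's figures only, not independent measurements):

```python
# Cinebench R23 multi-core scores at a 35 W package limit, as claimed above.
PACKAGE_W = 35

cb_r23_multi = {
    "12900K": 12600,
    "5950X": 12600 // 2,  # "about half" per the post
}

# Efficiency = score divided by package power.
efficiency = {cpu: score / PACKAGE_W for cpu, score in cb_r23_multi.items()}

for cpu, pts_per_watt in efficiency.items():
    print(f"{cpu}: {pts_per_watt:.0f} pts/W at {PACKAGE_W} W")
```

So by these claims the 12900K would land around twice the points-per-watt of the 5950X at that specific 35 W limit, which is the whole argument here.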


----------



## Bloax (Mar 18, 2022)

Nephilim666 said:


> How will it fare against an overclocked i5-12600k or i7-12700k I wonder...



very well, if you go by _extremely rough_ (no GPU for easy 1:1 :^))) comparisons of very juiced configs for both :- )

Not well enough that I'd rate it worth your while to bruteforce (yes, that is Reboot, Enter voltage, Test, Reboot ... Compare, Pick Best Performers, Test, Reboot, Enter voltage ...) a working SOC, IOD and CCD voltage.
As without those, you're gonna have a lot more stutters than if you do.

It's especially hard to recommend with pretty sweet deals on 12700k's being suspiciously frequent.

Though if you're sitting on a Ryzen 1600x or 2600 - then it's probably a sweet processor.


----------



## GreiverBlade (Mar 18, 2022)

Bloax said:


> Though if you're sitting on a Ryzen 1600x or 2600 - then it's probably a sweet processor.


or a 3600 ...


no deal breaker here for me ... i have not looked at manual OC for quite a while ... my last "OC for fun" runs were an E8500, an i7-920, and the only one that yielded a substantial advantage in addition to being fun was a DIP-switched Athlon 650 (Slot A Pluto core) that pushed nicely to 800 MHz

my 3600 is stock and plenty in most tasks and games i usually do/play ... and given that i can find a 5600X for between 199chf and 230chf atm, i might rather go for that one later instead (will depend on the pricing, and since MSRP is dead, i do not really trust press release price reveals (PRPR? sounds nice ... try it))

i prefer when they announce OC locked rather than F' it up later with a botched microcode update, my 6600k never recovered from it ... did feel almost like a rental OC
although when it was OC'ing ... hardly going above 4.1 GHz ... and for a 3.9 GHz boost CPU it's a crying shame ... i guess only reviewers get "cherry picked" units to make brands look good ... pfeh!


----------



## Why_Me (Mar 18, 2022)

Adam Krazispeed said:


> OK, AMD. IM NOT BUYING ONE NOW! GO SCREW OFF.
> 
> SO ok AMD, SO ITS NOT an AMD Ryzen 7 5800X3D its SHOULD BEEN CALLED RYZEN "5800 GIMPED EDITION"    or 58003D cache NO OC NOT AN X MODEL TEHN Y FREEKING AHOLES... NOT BUYING ONE NOW EVER, NOT BUYING ZEN4 / AM5 EITHER, so
> 
> ...


Welcome to _Team Blue_.   Let's start your road to recovery by changing that abomination of an avatar in your profile.


----------



## Flydommo (Mar 18, 2022)

Overclocking will soon be a thing of the past, like the combustion engine. If the stock 5800X3D delivers significant performance gains over an overclocked 5800X, why wouldn't you go for the 5800X3D? Just because the clock speed is lower?


----------



## ratirt (Mar 18, 2022)

Assimilator said:


> Or AMD could... y'know... *not sell a product that is quite obviously an unfinished experiment to customers*. Because that would be the smart and ethical thing to do.
> 
> Strange how you chose to ignore what's literally the most obvious option.
> 
> ...


Just because it has a locked Vcore due to cache limitations while using new technology, does not mean unfinished. 
At least I see it that way. Things will improve in time as people always claim it has to mature. Let it mature.


----------



## Valantar (Mar 18, 2022)

DeathtoGnomes said:


> I died laughing at this. People will create narratives for the sake of arguing the fanboi point of view. When was the last time AMD requested removing the ability to OC, only to reinstate it once the AGESA was updated? I'd bet AMD will reinstate OC'ing on this chip once they can work out whatever the problem is to warrant the blocking of OC'ing, which means an AGESA is forthcoming after launch.


IMO this seems to be a rather different situation though. Arbitrarily locking down a CPU because of market segmentation (which is the typical reason for doing so) is quite different from "this chip has a component with a particularly low voltage tolerance, so if we allow normal OC controls there's a particularly high chance you'll break it permanently". With the reasoning given, it seems quite improbable that this ability will be added post-launch. They've tested the cache die; they know what voltages it can handle, and if it's tied to vCore, then they know how high vCore can safely go. The technical reasoning seems sound, even if it's a bit disappointing. And crucially, it passes the "does this seem like it's done to squeeze more money out of people" smell test.


----------



## fevgatos (Mar 18, 2022)

chrcoluk said:


> I am confused: you prefer to OC a weaker chip, with much more heat and power, for that 15% performance, instead of having it included by new tech out of the box? Seems bizarre to me. Also, are Intel's 250 W CPUs competitive at the same power level as AMD's Zen 3?


Intel walks all over Zen 3 in gaming, in both performance and efficiency, since it consumes a lot less power.


----------



## Mats (Mar 18, 2022)

Bloax said:


> It's especially hard to recommend with pretty sweet deals on 12700k's being suspiciously frequent.
> 
> Though if you're sitting on a Ryzen 1600x or 2600 - then it's probably a sweet processor.


Yeah, there are a lot of AM4 owners who might want to upgrade, but a 5700X or a 5600 might be a better choice.

Besides, how much of a difference does an X3D make if you have a 3070 or slower? People seem to forget from time to time that CPU gaming benchmarks are usually done with one of the fastest cards available.

The 12th gen is hard to ignore if you don't have an AM4 board.


----------



## DeathtoGnomes (Mar 18, 2022)

Valantar said:


> IMO this seems to be a rather different situation though. Arbitrarily locking down a CPU because of market segmentation (which is the typical reason for doing so) is quite different from "this chip has a component with a particularly low voltage tolerance, so if we allow normal OC controls there's a particularly high chance you'll break it permanently". With the reasoning given, it seems quite improbable that this ability will be added post-launch. They've tested the cache die; they know what voltages it can handle, and if it's tied to vCore, then they know how high vCore can safely go. The technical reasoning seems sound, even if it's a bit disappointing. And crucially, it passes the "does this seem like it's done to squeeze more money out of people" smell test.


I do agree with you on this, but I'll remain hopeful that my guess is going to come about as well.


----------



## Taraquin (Mar 18, 2022)

Cutechri said:


> Yeah? A 12900K locked at 35 W does around 12600 in CB R23. A 5950X does about half that at the same wattage. The 12900K consumes a ton because of Intel's idiotic PL2 limits. The rest of the chips are very competitive, and some beat Zen 3 in efficiency.


On notebooks Zen 3 is more efficient than ADL at low wattage. My 5600X is far more efficient than my 12400F if I restrict power (at 50 W the 5600X beats the 12400F in CB23), but with both running stock the 12400F is slightly more efficient in most cases (it uses 5 W less and gets 600 points more).


----------



## Mats (Mar 18, 2022)

fevgatos said:


> Intel walks all over Zen 3 in gaming, in both performance and efficiency, since it consumes a lot less power.


Intel is faster, and it uses more power to get there; it's not more efficient here, at least. It's the seemingly less optimized 5800X alone that uses 3% more energy, while the other Zen 3 models use less.


----------



## fevgatos (Mar 18, 2022)

Mats said:


> Intel is faster, and it uses more power to get there; it's not more efficient here, at least. It's the seemingly less optimized 5800X alone that uses 3% more energy, while the other Zen 3 models use less.


This is not gaming. Check igorslab review where he did a gaming efficiency test.


----------



## Mats (Mar 18, 2022)

fevgatos said:


> This is not gaming. Check igorslab review where he did a gaming efficiency test.


Yeah, you're right, I just saw it.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> This is not gaming. Check igorslab review where he did a gaming efficiency test.


In general Intel is more efficient running single-core, AMD multi-core; most games use few cores/threads most of the time.


----------



## Mats (Mar 18, 2022)

The dual chiplet models are behind in efficiency, but not the other two. I'm looking at the last graph.

That's just efficiency tho.


----------



## Valantar (Mar 18, 2022)

Taraquin said:


> In general Intel is more efficient running single-core, AMD multi-core; most games use few cores/threads most of the time.


This is inaccurate - Intel's cores easily scale past 50W/core in ST loads, while AMD's cores top out at ~20W/core. Intel's cores are also faster with ADL, but not enough to match the power consumption (that would require them to be ~2.5x faster!). What we're seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100W on Threadripper; 20+W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in low-threaded loads, especially bursty ones where Intel might be able to intermittently clock down or doesn't need to sustain peak clocks during a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power, as long as it's being consumed, after all - but it's important to attribute this correctly. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.
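The interconnect argument above can be sketched as a toy power model: a fixed fabric cost per active CCD sits on top of per-core power, while a monolithic die pays no such floor. Every wattage below is made up purely to show the shape of the effect, not measured from real silicon:

```python
# Toy model: a chiplet CPU pays a roughly fixed Infinity Fabric cost per
# active CCD on top of per-core power, while a monolithic die does not.
# All wattages are hypothetical illustrations.

def package_power_w(busy_cores: int, core_w: float,
                    if_w_per_ccd: float, ccds: int) -> float:
    """Approximate package power: core power plus interconnect floor."""
    return busy_cores * core_w + if_w_per_ccd * ccds

# A lightly threaded (e.g. gaming) load with two cores busy:
one_ccd = package_power_w(2, core_w=15, if_w_per_ccd=12, ccds=1)  # 42 W
two_ccd = package_power_w(2, core_w=15, if_w_per_ccd=12, ccds=2)  # 54 W
mono    = package_power_w(2, core_w=20, if_w_per_ccd=0,  ccds=0)  # 40 W
# Even with hungrier cores, the monolithic chip wins at low thread
# counts because it has no interconnect floor - and the two-CCD part
# pays the IF cost twice.
```

At high all-core loads the per-core term dominates and the fixed fabric cost amortizes away, which is why the same chips can rank differently in ST and MT efficiency tests.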


----------



## fevgatos (Mar 18, 2022)

Valantar said:


> This is inaccurate - Intel's cores easily scale past 50W/core in ST loads, while AMD's cores top out at ~20W/core. Intel's cores are also faster with ADL, but not to match the power consumption (that would require them to be ~2.5x faster!). What we're likely seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100W on Threadripper; 20+W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in low threaded loads, especially bursty ones where Intel might be able to intermittently clock down or don't need to sustain peak clocks during a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power as long as it's being consumed, after all - but it's important to correctly attribute this. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.


That is true, it's the fabric. That's why Alder Lake is insanely efficient at 35 W, for example, while Zen 3 is absolutely horrific.

But even in normal out-of-the-box operation Alder Lake is more efficient in most tasks; it only loses in full-core loads because of that 240 W PL2.


----------



## Valantar (Mar 18, 2022)

fevgatos said:


> That is true, it's the fabric. That's why Alder Lake is insanely efficient at 35 W, for example, while Zen 3 is absolutely horrific.
> 
> But even in normal out-of-the-box operation Alder Lake is more efficient in most tasks; it only loses in full-core loads because of that 240 W PL2.


That's debatable, and highly dependent on the workload - they lose against 1-CCD Ryzen in 100% load ST tasks simply due to the massive scaling of their cores (it doesn't matter if you save 20W on your interconnect if your core consumes 30W more), but if the load is more intermittent or lighter, then it can indeed win - it all depends on how much the core is being loaded. There's also the interesting example of monolithic Ryzen (Cezanne, Rembrandt), where their mobile chips trounce Alder Lake for efficiency at anything below 45W, but lose above that as Intel has more room to scale clocks.

This is why I'm hoping AMD moves to some sort of integrated bridge tech with Zen4, at least for MSDT chips (it might not be feasible for EPYC/Threadripper due to the sheer thermal density of 8 CCDs packed that tightly). Going that route would allow them to essentially eliminate this disadvantage entirely. But unless they do, this disadvantage isn't going anywhere.


----------



## fevgatos (Mar 18, 2022)

Valantar said:


> That's debatable, and highly dependent on the workload - they lose against 1-CCD Ryzen in 100% load ST tasks simply due to the massive scaling of their cores (it doesn't matter if you save 20W on your interconnect if your core consumes 30W more), but if the load is more intermittent or lighter, then it can indeed win - it all depends how much the core is being loaded. There's also the interesting example of monolithic Ryzen (Cezanne, Rembrandt), where their mobile chips trounce Alder Lake for efficiency at anything below 45W, but lose above that as Intel has more room to scale clocks.
> 
> This is why I'm hoping AMD moves to some sort of integrated bridge tech with Zen4, at least for MSDT chips (it might not be feasible for EPYC/Threadripper due to the sheer thermal density of 8 CCDs packed that tightly). Going that route would allow them to essentially eliminate this disadvantage entirely. But unless they do, this disadvantage isn't going anywhere.


Any examples where they lose in ST workloads? Remember, we are talking about efficiency, not power consumption.

As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X in both performance and efficiency.

Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny, since the 10900K is basically a node and an architecture from 2015, lol


----------



## Taraquin (Mar 18, 2022)

Valantar said:


> This is inaccurate - Intel's cores easily scale past 50W/core in ST loads, while AMD's cores top out at ~20W/core. Intel's cores are also faster with ADL, but not to match the power consumption (that would require them to be ~2.5x faster!). What we're likely seeing in these gaming efficiency tests is likely more of an overall chip architecture thing: it's quite well documented that Infinity Fabric uses a decent chunk of power (up to ~100W on Threadripper; 20+W on Ryzen), a power cost that Intel doesn't have thanks to their monolithic design. That increases AMD's base power level under any kind of load - which of course places them at a disadvantage in low threaded loads, especially bursty ones where Intel might be able to intermittently clock down or don't need to sustain peak clocks during a consistent 100% load. This obviously doesn't make the overall efficiency of the CPU any less real - it doesn't matter whatsoever whether the CPU cores or some interconnect is consuming the power as long as it's being consumed, after all - but it's important to correctly attribute this. Intel manages the efficiency they do here thanks to the combination of high IPC and an efficient monolithic die interconnect, which places them at an advantage over AMD's slightly lower IPC, more efficient CPU cores, but much higher interconnect power. This is also why we see the two-CCD AMD chips consume so much more power: even if only a few cores are under load, they need to keep twice as many IF links active at full speed, doubling IF power over one-CCD chips.


I mostly agree, but at semi-low clock speeds Zen 3 is very efficient. My 5600X capped at 45 W runs 4.85 GHz single-core and 3.7 GHz multi-core; the IO die uses 20 W then. Two CCDs are a different matter, but single-CCD chips are really efficient at low power.



fevgatos said:


> Any examples where they lose in ST workloads? Remember, we are talking about efficiency, not power consumption.
> 
> As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X in both performance and efficiency.
> 
> Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny, since the 10900K is basically a node and an architecture from 2015, lol


The 10900K has 2 more cores, 4 more threads and higher clocks though. Skylake was a very good architecture.


----------



## Valantar (Mar 18, 2022)

fevgatos said:


> Any examples where they lose in ST workloads? Remember, we are talking about efficiency, not power consumption.
> 
> As an example, Phoronix ran a 300+ benchmark roundup and the 12900K beat the 5950X in both performance and efficiency.
> 
> Since you mention 1 CCD, the 5800X for example is as efficient as a 10900K (!!!) in long multi-core loads, after they both settle at their long-duration power limit. Basically, with both at 125 W they perform the same in Cinebench and Blender runs. Which is kinda funny, since the 10900K is basically a node and an architecture from 2015, lol


The 5800X is an outlier among Zen3 though - while the 5900X and 5950X have higher single core power draws, the 5800X matches or exceeds their per-core draw from 6-8 cores. Yet it clocks lower. This likely means that the 5800X is a relatively different bin from both the 5600X and 59xxX chips, one where power consumption under high loads is less important - simply because it has more room to move with a 105W/138W power budget and just one CCD. Literally every other Zen3 product out there would do better in that comparison against the 10900K. Which, of course, ignores the 10900K having a 2c4t advantage. So, Intel gets the inherent efficiency advantage of being "wide and slow" compared to AMD's somewhat low binned, high clocked 5800X, and still only matches them? That's not a particularly impressive showing.

Is this the review you're referring to, btw? I can't find that they say the 12900K is generally more efficient than the 5950X there - in that (_extremely_ unreadable) graph of theirs they seem to both take the lead in various tests. I have no idea which of them are ST and which are MT, though. I have seen ST tests where AMD comes out looking decent in terms of efficiency against ADL, but sadly I can't remember where - and even more sadly, most reviewers limit their efficiency testing to one or two scenarios, which really limits results.



Taraquin said:


> I mostly agree, but at semi-low clock speeds Zen 3 is very efficient. My 5600X capped at 45 W runs 4.85 GHz single-core and 3.7 GHz multi-core; the IO die uses 20 W then. Two CCDs are a different matter, but single-CCD chips are really efficient at low power.


Yeah, it's still a very efficient architecture - it's just getting to a point where the higher power floor of package-based IF is starting to show its weaknesses.


----------



## fevgatos (Mar 18, 2022)

Valantar said:


> The 5800X is an outlier among Zen3 though - while the 5900X and 5950X have higher single core power draws, the 5800X matches or exceeds their per-core draw from 6-8 cores. Yet it clocks lower. This likely means that the 5800X is a relatively different bin from both the 5600X and 59xxX chips, one where power consumption under high loads is less important - simply because it has more room to move with a 105W/138W power budget and just one CCD. Literally every other Zen3 product out there would do better in that comparison against the 10900K. Which, of course, ignores the 10900K having a 2c4t advantage. So, Intel gets the inherent efficiency advantage of being "wide and slow" compared to AMD's somewhat low binned, high clocked 5800X, and still only matches them? That's not a particularly impressive showing.
> 
> Is this the review you're referring to, btw? I can't find that they say the 12900K is generally more efficient than the 5950X there - in that (_extremely_ unreadable) graph of theirs they seem to both take the lead in various tests. I have no idea which of them are ST and which are MT, though. I have seen ST tests where AMD comes out looking decent in terms of efficiency against ADL, but sadly I can't remember where - and even more sadly, most reviewers limit their efficiency testing to one or two scenarios, which really limits results.
> 
> ...


I think that's the one; there is a graph somewhere that shows consumption across all benches, and yes, the 12900K is both the fastest and the most efficient compared to the 5950X. I'll find it once I'm on my PC; I'm on my phone right now.



Taraquin said:


> The 10900K has 2 more cores, 4 more threads and higher clocks though. Skylake was a very good architecture.


Well, the 5950X has 33% more threads yet we are still comparing them, so does it matter?

I don't know; all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.


----------



## ratirt (Mar 18, 2022)

fevgatos said:


> Well, the 5950X has 33% more threads yet we are still comparing them, so does it matter?
> 
> I don't know; all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.


No it isn't. I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.


----------



## Jism (Mar 18, 2022)

Flydommo said:


> Overclocking will soon be a thing of the past, like the combustion engine. If the stock 5800X3D delivers significant performance gains over an overclocked 5800X, why wouldn't you go for the 5800X3D? Just because the clock speed is lower?



People forget that the normal 5800X is a single-CCD chip and doesn't suffer from the latency impact that 2-CCD chips like the 5900X/5950X have.

The chip on its own is already fast enough. The extra cache seems like a nice goodbye wave to an ending AM4 platform. Applications and games that can benefit from the extra cache will surely get the extra from it.

Locked or not, with a proper board I think you can "extend" clocks using simple BCLK, as long as the board has an external clock generator. Hence why my 2700X is operating beyond 4.5 GHz in single threads.

I'd buy it. I'm not OC'ing anyway, as in manual clocks; but if the thing does provide boost, just plant a good cooler on it and you're good to go.
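For anyone unfamiliar with the BCLK route mentioned above: core clock is base clock times multiplier, so nudging BCLK scales everything derived from it even when the multiplier is locked. A quick sketch with hypothetical numbers:

```python
# BCLK "extension" on a multiplier-locked part: core clock is
# BCLK x multiplier, so raising the base clock scales the core clock.
# Figures are hypothetical; note that without an external clock
# generator, raising BCLK also drags PCIe/SATA clocks out of spec.

def effective_clock_mhz(bclk_mhz: float, multiplier: float) -> float:
    return bclk_mhz * multiplier

stock = effective_clock_mhz(100.0, 45)  # 4500 MHz at the standard 100 MHz BCLK
tuned = effective_clock_mhz(103.0, 45)  # 4635 MHz from a modest +3% BCLK bump
```

This is why boards with an external clock generator matter: they can decouple the other buses and raise only the CPU's reference clock.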


----------



## fevgatos (Mar 18, 2022)

ratirt said:


> No it isn't. I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.


I believe you, that's my point; what you just said applies to the 10900K, yet people were saying otherwise.


----------



## chrcoluk (Mar 18, 2022)

fevgatos said:


> Intel walks all over Zen 3 in gaming, in both performance and efficiency, since it consumes a lot less power.



I checked my post since you only partially quoted it; I wasn't talking about gaming specifically. My main point really was that I don't understand why people prefer to do inefficient and risky overclocks to get performance vs getting more out of the box.

Now, I did check Igor's review since it got mentioned a few replies down, and the results are interesting. My Intel question was basically what would happen if you capped the Intel chips to 135 W, 95 W, and 65 W. It seems we may already have the answer for gaming, and if that's your main use for the chips, they're not that bad. Is it the case that if they're capped to 135 W you lose little performance? Kind of like the RTX 3000 series, which gains very little for the last 30% or so of power.


----------



## fevgatos (Mar 18, 2022)

chrcoluk said:


> I checked my post since you only partially quoted it; I wasn't talking about gaming specifically. My main point really was that I don't understand why people prefer to do inefficient and risky overclocks to get performance vs getting more out of the box.
> 
> Now, I did check Igor's review since it got mentioned a few replies down, and the results are interesting. My Intel question was basically what would happen if you capped the Intel chips to 135 W, 95 W, and 65 W. It seems we may already have the answer for gaming, and if that's your main use for the chips, they're not that bad. Is it the case that if they're capped to 135 W you lose little performance? Kind of like the RTX 3000 series, which gains very little for the last 30% or so of power.


Well, even for non-gaming the 12900K can be the most efficient CPU at everything. For example, at 35 W my 12900K scores 12600 in Cinebench R23. That makes it more efficient than the M1, and by far more efficient than any Zen 3.


----------



## QuietBob (Mar 18, 2022)

Bloax said:


> very well, if you go by _extremely rough_ (no GPU for easy 1:1 :^))) comparisons of very juiced configs for both :- )
> 
> Not well enough that I'd rate it worth your while to bruteforce (yes, that is Reboot, Enter voltage, Test, Reboot ... Compare, Pick Best Performers, Test, Reboot, Enter voltage ...) a working SOC, IOD and CCD voltage.
> ...


Hold on, are these leaked benchmarks? So the first result would be a 5800X@4.85 boost (PBO on) and IF@2000, and the second 5800X3D@4.65 boost (assuming PBO is on)?
If so, things look really promising for the V-cache variant. Look at the gains:

+82% min fps
+15% for 1% lows
+31% for 0.1% lows
+69% for 0.01% lows
+83% for 0.005% lows

Even if it's a single game - and the results are accurate - the +15% increase in 1% lows would be in line with AMD's previous statements.
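For clarity, gains like "+82% min fps" in the list above are just relative changes between the two runs. A quick sketch of the arithmetic; the fps values here are invented purely for illustration (the real numbers were in the attachment):

```python
# Relative gain between two runs: (new / old - 1) * 100.
# The fps inputs below are hypothetical, chosen only to reproduce the
# kind of percentages quoted in the post.

def pct_gain(old_fps: float, new_fps: float) -> float:
    return (new_fps / old_fps - 1.0) * 100.0

min_fps_gain = round(pct_gain(50.0, 91.0))    # a 50 -> 91 fps jump is +82%
lows_gain = round(pct_gain(100.0, 115.0))     # 100 -> 115 fps is +15%
```

Note that the rarer the percentile (0.01%, 0.005% lows), the fewer frames it is computed from, so those big gains are also the noisiest numbers in such a comparison.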


----------



## chrcoluk (Mar 18, 2022)

fevgatos said:


> Well, even for non-gaming the 12900K can be the most efficient CPU at everything. For example, at 35 W my 12900K scores 12600 in Cinebench R23. That makes it more efficient than the M1, and by far more efficient than any Zen 3.


How have you come to that conclusion? In some workloads it hits crazy power usage, right? Or am I misunderstanding something? Or are you talking about capped power?

Is it the case that it's a good chip but just brought to market in a bad way with its shipping configuration?


----------



## ratirt (Mar 18, 2022)

ratirt said:


> No it isn't. I have a 5800X and I can assure you it is not an oven toaster, and if you consider gaming, my 5800X doesn't go above 50 watts, and I've got a 6900XT.





fevgatos said:


> I believe you, that's my point; what you just said applies to the 10900K, yet people were saying otherwise.


I don't think my 5800X is that far off from the 12900K in power consumption either, though.


----------



## trieste15 (Mar 18, 2022)

Would it not be possible for the 3D cache to run off a separate voltage plane?


----------



## Valantar (Mar 18, 2022)

fevgatos said:


> Well, the 5950X has 33% more threads yet we are still comparing them, so does it matter?
> 
> I don't know; all I remember about the 10900K was people claiming it's an oven toaster etc., not realising it is as efficient as the 5800X.


Given that the 5950X has the exact same power limits as the 5800X (well, 6W higher boost power, 144W vs. 138W), that changes the picture quite a bit, no?


----------



## NDown (Mar 18, 2022)

Assimilator said:


> Or AMD could... y'know... *not sell a product that is quite obviously an unfinished experiment to customers*. Because that would be the smart and ethical thing to do.
> 
> Strange how you chose to ignore what's literally the most obvious option.
> 
> ...


Yeah bro, lots of people really want to see AMD as a saving grace in the PC market lmaooo, while they're the same as Intel and any other profit-oriented company.

That being said, i also don't know if manual OC was ever worth it on Ryzen since the 1st gen came out; nothing big to my eyes really, though the price for a product at the end of its platform kinda stings.


----------



## Valantar (Mar 18, 2022)

trieste15 said:


> Would it not be possible for the 3D cache to run off a separate voltage plane?


Probably not - it interfaces directly with the on-die L3 cache in a way that's supposed to be entirely transparent to the CPU. Making that work while having the two run off separate voltage planes sounds ... complicated, if not impossible, given that you'd have signalling between the two cache blocks at different voltages that would then need conversion, adding latency, making the cache die slower to access, and making for inconsistent performance and unpredictable pipeline stalls.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> I think that's the one; there is a graph somewhere that shows consumption across all benches, and yes, the 12900K is both the fastest and the most efficient compared to the 5950X. I'll find it once I'm on my PC; I'm on my phone right now.
> 
> 
> Well, the 5950X has 33% more threads yet we are still comparing them, so does it matter?
> ...


In most scenarios it is not as efficient as the 5800X.
Energy efficiency in SuperPI and CB:

Since the 5800X and 5950X have the same stock power limits, they generally use the same power when stressed to the max (140 W-ish), and in multi-core the 5950X would win.

I never said the 10900K was a toaster; the 11900K, on the other hand...


----------



## fevgatos (Mar 18, 2022)

chrcoluk said:


> How have you come to that conclusion? In some workloads it hits crazy power usage, right? Or am I misunderstanding something? Or are you talking about capped power?
> 
> Is it the case that it's a good chip but just brought to market in a bad way with its shipping configuration?


Exactly, it's shipped with a stupidly high power limit.



ratirt said:


> I don't think my 5800X is that far off from the 12900K in power consumption either, though.


I'm pretty sure there is a huge difference in all-core workloads. For example, I think I can hit 15k (that's how much a 5800X gets, right?) in CB R23 at around 45-50 W.



Taraquin said:


> In most scenarios it is not as efficient as the 5800X.
> Energy efficiency in SuperPI and CB:
> 
> Since the 5800X and 5950X have the same stock power limits, they generally use the same power when stressed to the max (140 W-ish), and in multi-core the 5950X would win.
> ...


Yes the 11th gen was pretty atrocious. 

What you are seeing there is power consumption, not efficiency.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Exactly, it's shipped with a stupidly high power limit.
> 
> 
> I'm pretty sure there is a huge difference in all-core workloads. For example, I think I can hit 15k (that's how much a 5800X gets, right?) in CB R23 at around 45-50 W.
> ...


Show me where the 10900K is more efficient? Overall consumption in CB is far lower on the 5800X than the 10900K, yet it scores higher. It's also faster in single-core while using less energy. There may be a few very Intel-optimized apps where the 10900K is more efficient, but generally the 5800X wins in almost everything.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> Show me where the 10900K is more efficient? Overall consumption in CB is far lower on the 5800X than the 10900K, yet it scores higher. It's also faster in single-core while using less energy. There may be a few very Intel-optimized apps where the 10900K is more efficient, but generally the 5800X wins in almost everything.


I think it's GN that does 30-minute-long Blender runs, and it shows both at the same efficiency.


----------



## ThrashZone (Mar 18, 2022)

Taraquin said:


> Show me where the 10900K is more efficient? Overall consumption in CB is far lower on the 5800X than the 10900K, yet it scores higher. It's also faster in single-core while using less energy. There may be a few very Intel-optimized apps where the 10900K is more efficient, but generally the 5800X wins in almost everything.


Hi,
I'd be surprised if anyone buying Intel were doing it because Intel is efficient.

Intel has always been the choice for overclocking, which is the opposite of efficiency; in other words, overclocked Intel systems can effectively score higher in benchmarks.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> I think its GN that runs 30 minute long blender runs and it shows both at the same efficiency


This one, with the 10900K long-term power limited:





But in general the 5800X beats the crap out of the 10900K in almost every scenario on efficiency. In gaming at stock it performs better and uses 30W less in the TPU test.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> This one with 10900K long term pwr limited:
> View attachment 240309
> 
> But in general 5800X beats the crap out of 10900K in almost every scenario on efficiency. In gaming stock it performs better and uses 30W less in the TPU test.


Yeah, that's the one I think. Do you see it beating crap? I don't.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Yeav thats the one I think. Do you see it beating cheap? I dont.


Beating cheap? I don't understand what you mean...


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> Beating cheap? I dont understand what you mean...


Sorry, autocorrect. Do you see it beating the crap out of the 10900k? They are pretty much identical in efficiency


----------



## Valantar (Mar 18, 2022)

fevgatos said:


> Sorry, autocorrect. Do you see it beating the crap out of the 10900k? They are pretty much identical in efficiency


That graph is of power consumption, not efficiency. In the 12900K review, the companion graph for performance is at 10:18, and shows the 5950X - at that power draw (see 24:38) - finishing the render in 9.3 minutes compared to 16.5 minutes for the 10900K and 17.5 minutes for the 11900K. So, for this workload, the 5950X is _dramatically_ more efficient than the 10900K, and even slightly beats the 12900K (9.4 minutes, so really not by a lot), despite it running with Intel's ludicrous 241W unlimited boost "stock" settings against the stock 5950X's limit of 144W. The 12900K can be reined in dramatically and will run far more efficiently if the power limits are lowered, but it's still slower in Blender than a 5950X. But the 5950X definitely beats the crap out of the 10900K in this testing, even when both are limited by similar long-term power limits.
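To make the power-vs-efficiency distinction concrete, here is a back-of-the-envelope sketch in Python using the render times quoted above; the sustained power figures are approximations for illustration, not measured values:

```python
# Efficiency = work per unit energy. For a fixed render job, energy used
# is (sustained power) x (render time), so the more efficient chip is the
# one that burns fewer joules, not the one that draws fewer watts.

# Render times quoted above (minutes) with assumed sustained power (watts).
cpus = {
    "5950X":  {"minutes": 9.3,  "watts": 144},
    "12900K": {"minutes": 9.4,  "watts": 241},
    "10900K": {"minutes": 16.5, "watts": 125},  # long-term power limited
}

# Energy per render in kilojoules.
energy_kj = {name: d["watts"] * d["minutes"] * 60 / 1000 for name, d in cpus.items()}

for name, kj in sorted(energy_kj.items(), key=lambda kv: kv[1]):
    print(f"{name}: {kj:.0f} kJ per render")
```

On these numbers the 5950X finishes the job on roughly 35% less energy than the power-limited 10900K, even though a sustained power graph alone makes the two look comparable.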


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Sorry, autocorrect. Do you see it beating the crap out of the 10900k? They are pretty much identical in efficiency


In Blender, yes, but not in almost every other app and in gaming. You could say that the 10900K at its best, running very power limited, can be as efficient, but in most cases it is not. In SuperPi, CB and gaming the 5800X beats the crap out of the 10900K in efficiency.


----------



## Space Lynx (Mar 18, 2022)

As someone who got a 5600X on launch day, I found overclocking Ryzen to be completely stupid anyway: no gains in real-world gaming, so no point for me. I am fine with this decision. RAM OC'ing scratches that OC'ing itch for me nowadays anyway, though I don't expect I will bother there either anymore. XMP, and away I game; life is too short.


----------



## fevgatos (Mar 18, 2022)

Valantar said:


> That graph is of power consumption, not efficiency. In the 12900K review, the companion graph for performance is at 10:18, and shows the 5950X - at that power draw (see 24:38) - finishing the render in 9.3 minutes compared to 16.5 minutes for the 10900K and 17.5 minutes for the 11900K. So, for this workload, the 5950X is _dramatically_ more efficient than the 10900K, and even slightly beats the 12900K (9.4 minutes, so really not by a lot), despite it running with Intel's ludicrous 241W unlimited boost "stock" settings against the stock 5950X's limit of 144W. The 12900K can be reined in dramatically and will run far more efficiently if the power limits are lowered, but it's still slower in Blender than a 5950X. But the 5950X definitely beats the crap out of the 10900K in this testing, even when both are limited by similar long-term power limits.


Of course; I wasn't comparing the 10900K to the 5950X but to the 5800X.



Taraquin said:


> In blender, not in almost every other app and gaming. You could say that 10900K at it's best running very powerlimited can be as efficient, but in most cases it is not. In SuperPI, CB and gaming it beats the crap put of 10900K in efficiency


What do you mean, very power limited? It's as power limited as the 5800X, since they consume the same.

In gaming I'm sure they are pretty similar as well; Zen 3 consumes a lot during gaming, dunno why.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Of course, wasnt comparing the 10900 to the 5950x but to the 5800x.
> 
> 
> What do you mean very power limited? It's as power limited as the 5800x since they consume the same


If you run the 10900K without power limits like many do, it's very inefficient; if power limited, it can be efficient in some apps like Blender.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> If you run 10900K w/o pwr limits like many do it's very inefficient, if pwr limited it can be efficient in some apps like blender.


Well, any CPU asked to pull 250W will be inefficient. That's pretty normal. The question is: why the heck would you do that if you are interested in power efficiency? What you are saying really makes no sense; you care about power efficiency but you are going to run the CPU way above its efficiency curve because...??


----------



## ThrashZone (Mar 18, 2022)

fevgatos said:


> Well any cpu if asked to pull 250w will be inefficient. Thats pretty normal. *The question is why the heck would you do that if you are interested in power efficiency?* What you are saying really makes no sense, you care about power efficiency but you are going to run the cpu way above its efficiency curve cause....??


Hi,
Keeps the argument going.


----------



## Makaveli (Mar 18, 2022)

ThrashZone said:


> Hi,
> Keeps the argument going.



Discussion


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Well any cpu if asked to pull 250w will be inefficient. Thats pretty normal. The question is why the heck would you do that if you are interested in power efficiency? What you are saying really makes no sense, you care about power efficiency but you are going to run the cpu way above its efficiency curve cause....??


Intel has two specs, power limited and unlimited, toggled by one BIOS click. The standard spec on AMD is a 142W-ish limit, and raising it is a much more complicated process via many sub-menus in the BIOS.

Your statement was that they are equally efficient; mine is that they are not, except in rare cases.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> Intel have 2 specs, pwr limited and unlimited toggled by one bios click. Standard spec on AMD is 142W-ish limit and raising it is a much more complucated approach via many sub menus in bios.
> 
> Your statement was that they are equally efficient, mine is that they are not except in rare cases.


My statement is that if you care about efficiency, you wouldn't be running it at 240 or 500 watts, or power unlocked.

In what cases is a 125W 10900K not equally efficient to a 125W 5800X? Can you show me some numbers?


----------



## ThrashZone (Mar 18, 2022)

Makaveli said:


> Discussion


Hi,
Not really; there are some people that just like to argue lame points.
Professional forum instigators; it's best to pass them by and not take the bait.

Forgot, they also make snotty remarks too.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> My statement is that if you care about efficiency you wouldnt be running it at 240 or 500watts or power unlocked..
> 
> In what cases is a 125w 10900k not equally efficienct to a 125w 5800x? Can you show me some numbers?


Gaming. Check the link I posted earlier. In gaming the 10900K running stock uses 31W more while performing worse...

In Cinebench the 5800X is much more efficient at 125W, since it already beats the stock 10900K while using less power at stock.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> Gaming. Check the link I posted earlier. In gaming 10900K running stock uses 31W more while performing worse...


That's system power consumption. No point in looking at system power, especially in gaming, because the GPU plays the biggest role.



Taraquin said:


> In cinebench 5800X is much more efficient at 125W since it already beat the 10900K stock while using less stock.


And it's not in plenty of other applications, as shown in the GN review (V-Ray, Chromium compile, Blender). Actually, correct me if I'm wrong: in every test GN ran besides Adobe, both CPUs consumed the same and performed the same.


----------



## ThrashZone (Mar 18, 2022)

Hi,
The argument comparing an 8-core against a 10-core is silly and should have been the first clue to pass.


----------



## fevgatos (Mar 18, 2022)

ThrashZone said:


> Hi,
> The argument comparing a 8 core against a 10 core is silly and should of been the first clue to pass.


Well, then you can't compare anything, really. There is no other 10-core CPU; the 5950X has 33% more threads than the 12900K so we shouldn't compare them, etc. And why stop at the number of cores? Let's not compare two CPUs unless they are on the same node or running the same frequency either.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Thats system power consumption. No point in looking at system power, especially in gaming, cause GPU plays the biggest role
> 
> 
> And its not in plenty of other applications as shown from gnexus review (vray , chromium compile, blender). Actually, correct me if im wrong, in every test gnexus ran besides adobe, both cpus consumed the same and performed the same.


They use the same GPU; do you mean that with the same GPU the 10900K makes it draw more power? The motherboard can be a factor, but that might just as well be unfavourable to the board the 5800X uses.

Please hear what Steven says in the video: "That makes it more efficient than the 10900K in just about every workload including games in almost every instance." Also see what the consumption numbers were on TPU, and check Tom's Hardware; it performs similarly to the 5800X but consumes much more in virtually everything: Ryzen 7 5800X Power Consumption, Thermals - AMD Ryzen 7 5800X Review: The Pricing Conundrum | Tom's Hardware (tomshardware.com)

Handbrake:
106W vs 206W

Y-cruncher:
112W vs 185W


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> They use the same GPU, do you mean that on the same GPU 10900K makes it use more power? MB can be a factor, but that might as well be disfavorable to the MB 5800X uses.
> 
> Please hear what Steven says in the video: "That makes it more efficient that 10900K in just about every workload including games in almost every instance." Also see what the consumption numbers were on TPU, chekc Tomshardware, it performs similar to 5800X but consumes much more in viritually everything. Ryzen 7 5800X Power Consumption, Thermals - AMD Ryzen 7 5800X Review: The Pricing Conundrum | Tom's Hardware (tomshardware.com)


I agree that it is more efficient, but the difference is negligible. It's literally less than 10% in everything except Adobe, where the 5800X was actually around 20% faster with the same power draw.


----------



## ThrashZone (Mar 18, 2022)

fevgatos said:


> Well then you cant compare anything really. There is no other 10 core cpu, the 5950x has 33% more threads than the 12900k so we shouldn't compare them etc. And why stop at the number of cores? Let's not compare two cpus unless they are on the same node or running the same frequency either.


Hi,
Too bad nobody thought to use the 11900K, which is an 8-core, same as the 5800X.


----------



## fevgatos (Mar 18, 2022)

ThrashZone said:


> Hi,
> To bad nobody thought to use 11900k which is a 8 core same as 5800x.


The 11900K stinks. Nobody would argue that it is as efficient as Zen 3.


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> I agree that it is more efficient, but the difference is negligible. Its literally less than 10% in everything except adobe where the 5800x was actually around 20% faster with the same power draw.


That depends. If your motherboard runs a 125W limit out of the box it will be like that: slightly less efficient than the 5800X, but usually slower. Many motherboards run unlimited power out of the box (Asus usually does this); then efficiency is terrible, but it performs on par with, and sometimes a bit better than, the 5800X.

The 5800X always has a 142W PPT unless you manually change it. On Intel it depends on the motherboard, which BIOS you use, etc.


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> Handbrake:
> 106W vs 206W
> 
> Y-cruncher:
> 112W vs 185W


Haven't we been over this? Why are you repeating the same stuff? Those tests are done within the PL2 240W window. Yes, nobody argues that the 10900K is more efficient when it is running at 240W...

I don't know, I feel like I'm being trolled...


----------



## ThrashZone (Mar 18, 2022)

fevgatos said:


> The 11900 stinks. Nobody would argue thats it is as efficient as zen 3


Hi,
But somehow a 10900K is?


----------



## fevgatos (Mar 18, 2022)

Taraquin said:


> That depends. If your motherboard runs 125W limit out of the box it will be like that, slightly more effeicient, but usually slower. Many motherboards run unlimited power out of the box, Asus usually do this, then efficiency is terrible, but it performs on pair and sometimes a bit better than 5800X.
> 
> 5800X always has 142W PPT no matter what unless you manually change it. On Intel it depends on MB, what bios you use etc.


Why does that matter? I really don't get you. If YOU care about efficiency then you will manually limit the CPU to 125W; it literally takes a second. So why the heck do you care what a random motherboard does? I seriously don't understand what your point is.



ThrashZone said:


> Hi,
> But somehow a 10900k is ?


If we are comparing it to a 5800X, yes; at the same watts they perform almost identically in MT workloads (Blender, V-Ray, Chromium, Corona, etc.).


----------



## Taraquin (Mar 18, 2022)

fevgatos said:


> Why does that matter? I really dont get you. If YOU care about efficiency then you will manually limit the cpu to 125w,it literally takes a second. So why the heck do you care what a random motherboard does? I seriously dont understand what your point is
> 
> 
> If we are comparing it to a 5800x, yes, at the same watts they perform almost identical in mt workloads (blender vray chromiun corona etc.)


Most users will never enter the BIOS and change anything, so a lot of 10900K users get a very inefficient CPU; that is my point. If you say stock, you can say both 240W and 125W on the 10900K, as it depends. The 5800X is always the same and will always be efficient.

You first said they were equally efficient without stating that you need a 125W limit for that. I disagreed, but you now admit the 5800X is more efficient even with the 10900K running a 125W limit, so I'm satisfied. It's not about winning a discussion, it's about getting the facts straight.

"not realising it is as efficient as the 5800x" was your original claim, which I disagreed with. With a simple correction saying it could be nearly as efficient when running a 125W limit, I would have had no reason to argue with you.


----------



## Chrispy_ (Mar 18, 2022)

All-core manual overclocks died on Ryzen with the launch of PB2 and XFR2 four years ago because, whilst it's easy to push a manual OC higher than Zen 3's boost clocks, it's not a good idea for long-term or daily-driver use, as the voltages will almost certainly kill your CPU. There are plenty of reports of Zen 2 and Zen 3 chips that have been volted too hard and are now functioning in limp mode, barely able to maintain stock clocks without boost.

At 1.35V, Zen 3 is still likely going to boost to 4.6GHz based on 5800X experiences. The cache will just have to make up for the missing 200MHz.


----------



## Valantar (Mar 18, 2022)

fevgatos said:


> Of course, wasnt comparing the 10900 to the 5950x but to the 5800x.


Even for that comparison - which, again, is a wide-and-slow (at those power levels) design, at least compared to the 8c16t 5800X, which also as explained above is a worst-case scenario for Vermeer - the 5800X is ahead though. 15.7 minutes vs. 16.5 is a 5% advantage, at about 1% lower power. That's definitely not huge, but if that's where you go for your "well, actually" argument, you're really cherry-picking.


fevgatos said:


> What do you mean very power limited? It's as power limited as the 5800x since they consume the same
> 
> In gaming im sure they are pretty similar as well, zen 3 consume a lot during gaming, dunno why


AMD power limits are strictly enforced, while with Intel MCE/unlocked power limits by default has been the norm (at least on moderately high end motherboards) since at least Skylake, and with ADL it is now the actual stock behaviour for K SKUs. It is thus reasonable to highlight that the 10900K is more strictly limited than what is found in most reviews, or in most users' configurations. That still doesn't make it inherently terribly inefficient, but when one part needs manual tuning in most use cases to match the stock efficiency of another, then that part is less efficient in any practical general sense of the word.



fevgatos said:


> Why does that matter? I really dont get you. If YOU care about efficiency then you will manually limit the cpu to 125w,it literally takes a second. So why the heck do you care what a random motherboard does? I seriously dont understand what your point is


Given that most PC builders don't even activate XMP, this is a much higher bar than you're making it out to be.


----------



## DeathtoGnomes (Mar 18, 2022)

I love a discussion where one party intentionally changes their talking points and facts. I can't add to that discussion, so I'll just make some popcorn.


----------



## Valantar (Mar 18, 2022)

Chrispy_ said:


> All-core manual overclocks died on Ryzen with the launch of PB2 and XFR2 four years ago because whilst it's easy to push a manual OC higher than Zen3's boost clocks, it's not a good idea for a long-term or daily-driver because the voltages will almost certainly kill your CPU. There are plenty of reports of Zen2 and Zen3 that have been volted too hard and are now functioning in limp mode, barely able to maintain stock clocks without boost.
> 
> At 1.35V, Zen3 is still likely going to boost to 4.6GHz based on the 5800X experiences. The cache will just have to make up for the missing 200MHz


My thinking exactly. And if AMD's early 15% number is trustworthy, 4.6GHz × 1.15 is equivalent to a 5.29GHz 5800X, so that's more than enough.


----------



## Deleted member 24505 (Mar 18, 2022)

I don't give a crap about efficiency as long as i can cool it.


----------



## Taraquin (Mar 18, 2022)

Tigger said:


> I don't give a crap about efficiency as long as i can cool it.


Buy Rocket Lake! Huge die, easier to cool.


----------



## fevgatos (Mar 18, 2022)

Valantar said:


> Even for that comparison - which, again, is a wide-and-slow (at those power levels) design, at least compared to the 8c16t 5800X, which also as explained above is a worst-case scenario for Vermeer - the 5800X is ahead though. 15.7 minutes vs. 16.5 is a 5% advantage, at about 1% lower power. That's definitely not huge, but if that's where you go for your "well, actually" argument, you're really cherry-picking.


I don't think 5% is an actual difference, and I would say the same whether it's Intel or AMD on top. It's basically peanuts. For me, a difference would be something like 15-20% and upwards. 5% means one CPU is consuming 125W and the other one 131W for the same workload. It's kinda whatever. I consider them both equally efficient. If you want to be precise, then sure, the 5800X is technically more efficient, by a small, irrelevant margin.



Valantar said:


> AMD power limits are strictly enforced, while with Intel MCE/unlocked power limits by default has been the norm (at least on moderately high end motherboards) since at least Skylake, and with ADL it is now the actual stock behaviour for K SKUs. It is thus reasonable to highlight that the 10900K is more strictly limited than what is found in most reviews, or in most users' configurations. That still doesn't make it inherently terribly inefficient, but when one part needs manual tuning in most use cases to match the stock efficiency of another, then that part is less efficient in any practical general sense of the word.


Sure, but I was talking more about the actual technical side of the CPU (architecture + node) rather than the configuration Intel or motherboard manufacturers ship it with. You can change the latter, you can't change the former; that's why I'm focusing on what the CPU can do with enforced power limits. If it were fundamentally inefficient, then changing the power limits wouldn't do anything; it would remain inefficient.

Other than that, I don't disagree: the way Intel decides to configure most of its CPUs, they are pretty much bad when it comes to efficiency out of the box. But as I've said, that's something you can easily change.



Taraquin said:


> You first said they were equally efficient without stating that you need 125W limit for that,


You are right, I should have mentioned it. Usually when I'm talking about efficiency, I'm assuming there is a power limit.


----------



## Mussels (Mar 19, 2022)

Having not read the thread yet, this is my comment to the poll question:


Yes, this is a dealbreaker that stops me buying on launch day; I'll wait and see what people figure out before considering it.
I love my 5800X, but higher performance with some mitigation for the higher-than-usual temps is what I want from an upgrade, and I *like* my current low-voltage all-core overclock (1.2V, 4.6GHz).

It looks like PBO will still be around; there are reports of the 1.2.0.6B AGESA limiting 5800X CPU voltages, and now we know why.
We should have PBO tweaking, just no manual voltage/multiplier control.

Edit as I go with replies:


Chaitanya said:


> Last time I overclocked a CPU was a AM2 CPU and havent touched OC either on my 4770k or 3700x.


The 4770K I gave to my dad was the best-clocking CPU I ever experienced.

From a 3.5GHz base to 4.5GHz daily (for 10 years!) without ever a crash, running 32GB of 2400MHz DDR3 (up from the measly 1333MHz JEDEC standard).
Absolutely amazing how well it's aged; it outperforms quite a few modern 4-core CPUs.



Mats said:


> Yeah we'll be missing out on that sweet, juicy GHz OC everyone is talking about..
> 
> View attachment 240110
> 
> ...


I get that ~2% loss you showed, but with 35W less power used and 15°C lower temps. There is a benefit to it.


----------



## sillyconjunkie (Mar 19, 2022)

Seems to be a lot of confusion about the OC lock. Pretty sure it's down to one main reason: heat.

By design it was never going to be a great OC CPU. All the extra cache requires that much more package voltage, which equals heat. The second cache layer plus OC voltage is gonna cook the cores on the ground floor.

It's locked because it should be. (Disclaimer: I like AMD stuff.)


----------



## Jism (Mar 19, 2022)

Chrispy_ said:


> All-core manual overclocks died on Ryzen with the launch of PB2 and XFR2 four years ago because whilst it's easy to push a manual OC higher than Zen3's boost clocks, it's not a good idea for a long-term or daily-driver because the voltages will almost certainly kill your CPU. There are plenty of reports of Zen2 and Zen3 that have been volted too hard and are now functioning in limp mode, barely able to maintain stock clocks without boost.
> 
> At 1.35V, Zen3 is still likely going to boost to 4.6GHz based on the 5800X experiences. The cache will just have to make up for the missing 200MHz



What most review websites show gives an absolutely wrong impression of which voltages to set. Look at the average Ryzen CPU review and notice how they all dial in 1.4V up to 1.45V. It's an absolutely wrong message, and I've seen quite a few folks blow up their Ryzens to where they can't even hold stock clocks or boost anymore, because they ran daily at 1.4V or even higher.

1.33V~1.37V is the absolute max. Anything above that will seriously degrade your chip in months, if not weeks.



sillyconjunkie said:


> Seems to be a lot of confusion about the oc lock.  Pretty sure it's one main reason...Heat.
> 
> By design it was never going to be a great oc cpu.  All the extra cache requires that much more package voltage which equals heat.  The second cache layer + oc voltage is gonna cook the cores on the ground floor.
> 
> It's locked because it should be.  (disclamer..I like amd stuff).



Incorrect. They said it themselves: the cache is dependent on the CPU core voltage. They couldn't design a separate voltage rail for it while keeping the pin layout compatible. So you go with what you have, really. Since it's an EPYC gimmick, EPYCs were never tested against OCs, since the clocks of those CPUs average 2 to 3.4GHz.


----------



## Cutechri (Mar 19, 2022)

Jism said:


> 1.33v ~ 1.37v is the absolute max. Anything above will seriously degrade your chip in months not even weeks.


I'd say 1.28V is the absolute max, seeing as that's the SVI2 TFN every single one of my Zen 2 and above CPUs has been sitting at during all-core loads. Someone had their 3600 die within 6 months at a 1.325V manual OC.


----------



## Jism (Mar 19, 2022)

Cutechri said:


> I'd say 1.28v is the absolute max, seeing as that's the SVI2 TFN every single one of my Zen 2 and above CPUs has been sitting at during all core loads. Someone had their 3600 die within 6 months at 1.325v manual OC.



I have quite some experience with Bulldozer/FX; they ate 1.65V for breakfast on water, up from the stock 1.35V. But Ryzen is a completely different animal; it's so small that any higher voltage in combination with a high current will cause electromigration.

Degradation is real; I wouldn't want to mess with the 5800X3D's cache at all. If that degrades, you're toast. However, I still believe BCLK OC'ing should be possible within the realm of acceptable voltages.


----------



## trieste15 (Mar 19, 2022)

Mussels said:


> The 4770K I gave to my dad was the best clocking CPU i ever experienced.
> 
> From 3.5GHz base to 4.5Ghz daily (for 10 years!) without ever a crash, running 32GB of 2400MHz DDR3 (from the measly 1333Mhz jedec standard)
> Absolutely amazing how well it's aged, and it outperforms quite a few modern 4 core CPUs


Actually, this kinda shows just how much reserve CPU capability Intel was holding back from all of us.

It makes me thankful that AMD surged back; basically, the situation now is that overclocks are no longer easy, and this is a good thing for the majority (not so good for the overclocking enthusiast).


----------



## Chrispy_ (Mar 19, 2022)

Jism said:


> What most review websites show, is a absolute wrong impression on which voltages to set. Like, look at the avg ryzen CPU review and notice how they all insert 1.4V up to 1.45V. It is a absolute wrong message and ive seen quite some folks blowing up their Ryzen that cant even hold stock clocks or boost anymore, because they ran daily on 1.4V or even higher.
> 
> 1.33v ~ 1.37v is the absolute max. Anything above will seriously degrade your chip in months not even weeks.


Yeah. When websites set voltages above 1.4V it's usually just to see what the absolute limit is of the silicon for a one-off result. It's definitely not a daily-driver overclock. 

AMD themselves clarified that *fixed* voltages above 1.4V are seriously detrimental to the health of the silicon, and just because your CPU shows boost voltages of up to 1.55V in monitoring software doesn't mean that it's safe to set that as a manual voltage. The boost algorithm allows single-core peak voltages up to 1.55V only for a few hundred _milliseconds_ at a time and software will only report the peak reading at much slower update intervals.

In reality, when you are running a single-threaded benchmark on a non-overclocked Ryzen with regular XFR boost, you're getting voltages of ~1.5V only as the final voltage before the algorithm swaps the load to another core, allowing the original core to both cool and discharge. Ryzen Master shows the core-juggling better than most software, but even that doesn't paint the true picture, as was explained in more depth by Rob @amd in a tweet to der8auer. When boosting on a core, the voltage isn't fixed for the roughly half-second of load on that core; it starts off lower, at ~1.3V, and ramps up as the charge and thermals in that area of silicon increase over the duration of that brief boost. Over that single half-second cycle on one core, the average voltage to that core will likely still be under 1.4V, and even if it isn't, the percentage of time that the core spends at voltages that could promote electromigration is so negligible that the effective time-to-death of the CPU vastly exceeds the warranty period, and likely the relevance of the CPU before obsolescence.
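That duty-cycle argument is easy to sanity-check numerically. The sketch below assumes, purely for illustration, a linear voltage ramp from 1.30V to 1.50V over one half-second boost window; real boost behaviour is more complex than this:

```python
# Illustrative model only: one ~0.5s single-core boost window, with the
# core voltage ramping linearly from 1.30V to 1.50V, sampled every 1ms.
samples = 500
v_start, v_peak = 1.30, 1.50
voltages = [v_start + (v_peak - v_start) * i / (samples - 1) for i in range(samples)]

avg_v = sum(voltages) / samples                              # time-averaged voltage
frac_above_1v45 = sum(v > 1.45 for v in voltages) / samples  # duty cycle near peak

print(f"peak a monitoring tool would report: {max(voltages):.2f} V")
print(f"time-averaged voltage over the cycle: {avg_v:.2f} V")
print(f"fraction of the cycle above 1.45 V: {frac_above_1v45:.0%}")
```

Even with the peak pinned at 1.50V, the time-averaged voltage in this toy model is 1.40V and only a quarter of the cycle sits above 1.45V, which is the gap between what monitoring software reports and what the silicon actually sustains.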


----------



## Octopuss (Mar 19, 2022)

I had to stop reading on page 4.
So much idiocy in one thread. I can't take any more of that.


----------



## Melvis (Mar 19, 2022)

Who gives a flying F if it doesn't OC? If the performance is as good as they say, then it's a win-win for those of us on 4yr+ old AM4 systems, making them beasts in 2022. This would probably be the CPU I would upgrade to, depending on the price though; if it's not priced crazily above the 5800X, then why not! It's just so awesome to have a PC (or several) and to have SO many CPU upgrade options.


----------



## ioannis (Mar 20, 2022)

No big deal, since static OC on Ryzen CPUs is kinda pointless. Limiting the voltage to 1.3-1.35V, I am guessing, pretty much nerfs the boost clock override offset, but again it's no big deal, since you get almost no gaming performance from it. The only thing worth doing on Ryzen CPUs is raising the PBO limits as much as your cooling allows, to keep higher all-core clocks if you need them.


----------



## sillyconjunkie (Mar 20, 2022)

Jism said:


> Incorrect. They said it themself the Cache is dependent of the CPU core voltage. They coud'nt design a seperate voltage rail among it due to compatible pin layouts. So you go with what you have, really. Since it's a EPYC gimmick, epyc's where never tested against OC's since the clocks of those CPU avg on 2 to 3.4Ghz.



Whether or not the extra cache can be powered down is irrelevant. It's a multi-layer CPU on a 7nm process, and whether it was designed by AMD or Intel, it will always run hotter than a similar CPU design on a single layer.


----------



## ratirt (Mar 21, 2022)

fevgatos said:


> Im pretty sure there is a huge difference in all core workloads. For example i think I can hit 15k (thats how much 5800x gets right?) cbr23 at around 45-50w


I thought you were talking about gaming, not CB R23.


----------



## Mussels (Mar 21, 2022)

ioannis said:


> No big deal since static OC on ryzen cpus is kinda pointless. Limiting the voltage to 1.3-1.35 I am guessing pretty much nerfs boost clock override offset but again it's no big deal since you get almost none gaming performance from this. Only thing that has a meaning to do in ryzen cpus is raising the PBO Limits as much as your cooling allows to keep higher all-core clocks if you need them.


It varies chip to chip.
My 5800X, for example, is one of the hotter-running ones, so a static OC gains me 200MHz all-core and loses 500MHz single-threaded, but also runs 30°C colder (well, with 40°C ambients it did).

These are simply chips to be left on auto, or with minimal PBO tweaking (curve offset may remain), and that appeals to a MUCH larger userbase than the overclockers.


----------



## mama (Mar 21, 2022)

Will this be 6nm or 7nm?


----------



## fevgatos (Mar 21, 2022)

ratirt said:


> I thought you are talking about gaming not CBR23.


In gaming the difference is even bigger


----------



## Taraquin (Mar 21, 2022)

fevgatos said:


> In gaming the difference is even bigger


Not to start another war, but I'm not so sure, since games generally utilize few cores most of the time. My 5600X in SOTTR (a game that utilizes many cores well) gets 230-240 fps CPU game avg at 1080p highest running a 76 W limit, but I get 215-225 fps running a 45 W limit, so only around a 5% fps loss last I checked. In both cases the I/O die uses 15-20 W. It may be more or less in other games, but generally less, since most games use fewer cores than SOTTR.

If you can test your 12th gen in SOTTR 1080p highest with unlimited and a 45 W limit, it would be great

Ryzen 5k scales poorly efficiency-wise in all-core loads if voltage is above 1.25 V-ish (meaning 4.5-4.75 GHz all-core depending on bin, CO value, etc.). I'm not sure where efficient scaling stops on 12th gen, but I wouldn't be surprised if it were around 1.2-1.3 V all-core like Ryzen 5k, maybe lower. My 12400F only runs all-core at 1.0 V, so I can't test scaling on that one.
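The power-limit trade-off described above is easy to sanity-check with a few lines of arithmetic. A minimal Python sketch, using the midpoints of the fps ranges quoted in this post (the measurements are the poster's own; the fps-per-watt framing is just an illustrative way to compare the two limits):

```python
# Rough fps-per-watt arithmetic for the 5600X SOTTR figures quoted
# above: ~235 fps (midpoint of 230-240) at a 76 W limit vs ~220 fps
# (midpoint of 215-225) at a 45 W limit. Illustrative only.

def fps_per_watt(fps: float, watts: float) -> float:
    """CPU game avg fps delivered per watt of package power."""
    return fps / watts

full_fps, full_w = 235.0, 76.0
capped_fps, capped_w = 220.0, 45.0

fps_loss = 1 - capped_fps / full_fps  # fraction of fps given up at 45 W
eff_gain = fps_per_watt(capped_fps, capped_w) / fps_per_watt(full_fps, full_w) - 1

print(f"fps lost at 45 W: {fps_loss:.1%}")
print(f"fps-per-watt gained: {eff_gain:.1%}")
```

With these midpoints the cap costs about 6% of the fps while improving fps per watt by well over half, which is the trade-off the post is describing.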


----------



## fevgatos (Mar 21, 2022)

Taraquin said:


> Not to start another war, but I'm not so sure, since games generally utilize few cores most of the time. My 5600X in SOTTR (a game that utilizes many cores well) gets 230-240 fps CPU game avg at 1080p highest running a 76 W limit, but I get 215-225 fps running a 45 W limit, so only around a 5% fps loss last I checked. In both cases the I/O die uses 15-20 W. It may be more or less in other games, but generally less, since most games use fewer cores than SOTTR.
> 
> If you can test your 12th gen in SOTTR 1080p highest with unlimited and a 45 W limit, it would be great
> 
> Ryzen 5k scales poorly efficiency-wise in all-core loads if voltage is above 1.25 V-ish (meaning 4.5-4.75 GHz all-core depending on bin, CO value, etc.). I'm not sure where efficient scaling stops on 12th gen, but I wouldn't be surprised if it were around 1.2-1.3 V all-core like Ryzen 5k, maybe lower.


Unless we have the same card it's kind of pointless. I'll win in efficiency just because I have a 3090 that pumps out 300 fps at 1080p. We could try locking the framerate to 150 or something and then check the power draw, I guess.
Igor'sLAB and der8auer ran some tests and ADL is insane in gaming efficiency


----------



## Taraquin (Mar 21, 2022)

fevgatos said:


> Unless we have the same card it's kind of pointless. I'll win in efficiency just because I have a 3090 that pumps out 300 fps at 1080p. We could try locking the framerate to 150 or something and then check the power draw, I guess.
> Igor'sLAB and der8auer ran some tests and ADL is insane in gaming efficiency


No, you can compare: look at CPU game avg, which is what your CPU produces and will be equal no matter what GPU you use. You may be right that you lose less perf power-limiting a 12th gen, but it would be nice to see how it's affected. You will get different numbers than me, but the point is to show how much performance drops when power limited


----------



## fevgatos (Mar 21, 2022)

Taraquin said:


> No, you can compare: look at CPU game avg, which is what your CPU produces and will be equal no matter what GPU you use. You may be right that you lose less perf power-limiting a 12th gen, but it would be nice to see how it's affected. You will get different numbers than me, but the point is to show how much performance drops when power limited


The CPU game avg is actually affected by the GPU: downclock your GPU and you'll get higher numbers. I don't know why it works that way, but it does


----------



## Taraquin (Mar 21, 2022)

fevgatos said:


> The CPU game avg is actually affected by the GPU: downclock your GPU and you'll get higher numbers. I don't know why it works that way, but it does


I have not seen that; I get the same CPU game avg running a UV or OC profile on the GPU, but your system might behave differently. Still, using the same GPU settings, can you show how your CPU scales? That is what I was curious about.


----------



## fevgatos (Mar 21, 2022)

Taraquin said:


> I have not seen that; I get the same CPU game avg running a UV or OC profile on the GPU, but your system might behave differently. Still, using the same GPU settings, can you show how your CPU scales? That is what I was curious about.


Just got back home; I get 225 @ 45 W with E-cores off. Too bored to try E-cores on right now, maybe later


----------



## Taraquin (Mar 21, 2022)

fevgatos said:


> Just got back home; I get 225 @ 45 W with E-cores off. Too bored to try E-cores on right now, maybe later


But try at stock power and compare


----------



## fevgatos (Mar 21, 2022)

Taraquin said:


> But try at stock power and compare


Stock with no power limits I get around 330 if I remember correctly, but I've never checked how much it actually consumes.


----------



## Taraquin (Mar 22, 2022)

fevgatos said:


> Stock with no power limits I get around 330 if I remember correctly, but I've never checked how much it actually consumes.


Okay, you lose about 32% perf then (225 vs. 330). How much does it use in SOTTR when not power limited? I haven't tested the 5800X, but considering I only lose 5% fps going from 76 W to 45 W, I would be surprised if the 5800X lost a lot more. All this considered, it seems gaming is less impacted by a power limit than productivity, which for instance the 10900K test on HWUB showed.


----------



## fevgatos (Mar 22, 2022)

Taraquin said:


> Okay, you lose about 32% perf then (225 vs. 330). How much does it use in SOTTR when not power limited? I haven't tested the 5800X, but considering I only lose 5% fps going from 76 W to 45 W, I would be surprised if the 5800X lost a lot more. All this considered, it seems gaming is less impacted by a power limit than productivity, which for instance the 10900K test on HWUB showed.


I'll check, I never really paid attention. SOTTR though is kind of a weird game to test this with, because it's really memory- and cache-dependent more than it cares about the actual CPU.

The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU-bound. I've seen peaks at 170 W, which is insane for a game; usually it's under 100 W in most games.


----------



## Taraquin (Mar 22, 2022)

fevgatos said:


> I'll check, I never really paid attention. SOTTR though is kind of a weird game to test this with, because it's really memory- and cache-dependent more than it cares about the actual CPU.
> 
> The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU-bound. I've seen peaks at 170 W, which is insane for a game; usually it's under 100 W in most games.


SOTTR scales with everything, which makes it weird but good for testing. Cyberpunk has high CPU usage using DLSS or native low res; it scales well with bandwidth (it loves DDR5 and performs much better than with DDR4), but too bad the built-in bench is inconsistent :/ If I remember correctly, CP has some sections that utilize AVX2, which might explain the high CPU usage.


----------



## Taraquin (Mar 25, 2022)

fevgatos said:


> I'll check, I never really paid attention. SOTTR though is kind of a weird game to test this with, because it's really memory- and cache-dependent more than it cares about the actual CPU.
> 
> The only game where I've noticed high power consumption is Cyberpunk with RT on at very low resolution to make it CPU-bound. I've seen peaks at 170 W, which is insane for a game; usually it's under 100 W in most games.


I just remembered that TPU did a power-scaling test:

It seems ADL is very efficient down to around a 75 W limit or maybe a bit below, but it gets serious problems at 50 W, so somewhere between 50 and 75 W gaming performance tanks completely. The 5600X at a 76 W limit is equally efficient as the 12900K at 75 W.

The 5800X does not behave the same and scales well at 65 W vs. the stock 142 W: gaming perf at 65 W is 98-99% of 142 W :O How low you can go on the 5800X before performance tanks is a big question, but I'm sure it loses a lot less perf at 45 W than the 12900K, since that one loses over 40% perf at 50 W, while I lose 5% at 45 W.

The voltage/frequency curve of Ryzen 5k is a bit weird, in that you get very good linear scaling up to around 1.1 V.
My 5600X tested at various speeds, lowest voltage/power usage in CB23:
4.2 GHz @ 0.99 V: 56 W
4.3 GHz @ 1.02 V: 60 W
4.4 GHz @ 1.05 V: 64 W
4.5 GHz @ 1.10 V: 69 W
4.6 GHz @ 1.18 V: 76 W
4.7 GHz @ 1.26 V: 95 W
4.8 GHz @ 1.34 V: 115 W
I think it is quite comparable to the 5800X. All these tests were done with 4000 MHz RAM, so the I/O die uses a bit more power than on an average 5600X.
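One way to see the efficiency knee in that table is to look at the marginal power cost of each 100 MHz step. A quick Python sketch over the numbers listed above (the measurements are from the post; the arithmetic is just illustrative):

```python
# Marginal power cost per 100 MHz step, from the 5600X CB23
# measurements listed above. The jump in cost once voltage crosses
# ~1.2 V is the "knee" in the voltage/frequency curve.

points = [  # (GHz, volts, package watts)
    (4.2, 0.99, 56),
    (4.3, 1.02, 60),
    (4.4, 1.05, 64),
    (4.5, 1.10, 69),
    (4.6, 1.18, 76),
    (4.7, 1.26, 95),
    (4.8, 1.34, 115),
]

# Pair each setting with the next one up and take the watt delta.
deltas = [(hi[0], hi[2] - lo[2]) for lo, hi in zip(points, points[1:])]

for ghz, extra_w in deltas:
    print(f"step to {ghz:.1f} GHz costs +{extra_w} W")
```

The steps below 1.1 V each cost 4-5 W, while the last two steps (into 1.26 V and 1.34 V) cost 19-20 W each, several times as much per 100 MHz, which matches the "linear up to ~1.1 V" observation.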

Even though the I/O die uses a bit of power (10-15 W under load with 3200 MHz RAM, 20-30 W with 4000 MHz RAM), on ADL the memory controller, I/O, etc. also use a fair amount of power, just on the same die as the cores.

I bet the 12600K will have better perf at a low TDP due to fewer cores and less cache.


----------



## sillyman5454 (Mar 26, 2022)

Pure sh*t. 1 step forward, 1 step back.


----------



## B-Real (Apr 14, 2022)

Well, AMD did what it promised:
"From this small sample though, we found that the 5800X3D is slightly faster than the 12900K when using the same DDR4 memory.

It's a small 5% margin, but that did make it 19% faster than the 5900X on average, so AMD's 15% claim is looking good."





Quite an impressive comeback within the same generation, even without DDR5 support.


----------



## fevgatos (Apr 14, 2022)

Taraquin said:


> I just remembered that TPU did a powerscaling test:
> 
> 
> 
> ...


That's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 W. I can max out my 3090 with that power limit


----------



## Taraquin (Apr 14, 2022)

fevgatos said:


> That's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 W. I can max out my 3090 with that power limit


One could argue that AMD does the same, and above 1.2 V the efficiency curve is garbage; it's probably the same on ADL.


----------



## Mussels (Apr 15, 2022)

fevgatos said:


> That's because the voltage is set a bit too high by default on the 12900K. With a little undervolting I managed a 14k CBR23 score @ 35 W. I can max out my 3090 with that power limit


I can max out my 5800X + 3090 and stay under 400 W; the moment you tweak values it's an unfair comparison


----------

