# Potential Ryzen 7000-series CPU Specs and Pricing Leak, Ryzen 9 7950X Expected to hit 5.7 GHz



## TheLostSwede (Aug 4, 2022)

It's pretty clear that we're getting very close to the launch of AMD's AM5 platform and the Ryzen 7000-series CPUs, with spec details and even pricing brackets popping up online. Wccftech has posted what the publication believes will be the lineup launching in just over a month's time, if rumours are to be believed. The base model is said to be the Ryzen 5 7600X, which the site claims will have a base clock of 4.7 GHz and a boost clock of 5.3 GHz. There's no change in core or thread count compared to the current Ryzen 5 5600X, but the L2 cache appears to have doubled, for a total of 38 MB of cache. This is followed by the Ryzen 7 7700X, which starts out a tad slower with a base clock of 4.5 GHz, but has a slightly higher boost clock of 5.4 GHz. Here too, the core and thread count remains unchanged, while the L2 cache also gets a bump, for a total of 40 MB of cache. Both these models are said to have a 105 W TDP.

The Ryzen 9 7900X is said to have a 4.7 GHz base clock and a 5.6 GHz boost clock, a 200 MHz jump over the Ryzen 7 7700X. This CPU has a total of 76 MB of cache. Finally, the Ryzen 9 7950X is said to have the same 4.5 GHz base clock as the Ryzen 7 7700X, but the highest boost clock of all the expected models at 5.7 GHz, along with a total of 80 MB of cache. These two SKUs are both said to have a 170 W TDP. Price-wise, from top to bottom, we might be looking at somewhere around US$700, US$600, US$300 and US$200, so it seems like AMD has adjusted its pricing downwards by around $100 on the low end, with the Ryzen 7 part fitting into the same price bracket as the Ryzen 7 5700X. The Ryzen 9 7900X seems to have had its price adjusted upwards slightly, while the Ryzen 9 7950X is expected to be priced lower than its predecessor. Take these things with a healthy helping of scepticism for now, as things can still change before the launch.
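For what it's worth, the rumoured cache totals above are consistent with a layout of 1 MB of L2 per core plus 32 MB of L3 per eight-core CCD; that layout is an assumption on my part, but the arithmetic lines up:

```python
# Sketch: check the rumoured totals against an ASSUMED layout of
# 1 MB L2 per core plus 32 MB L3 per eight-core CCD.
def total_cache_mb(cores: int) -> int:
    ccds = (cores + 7) // 8        # eight cores per CCD, rounded up
    return cores * 1 + ccds * 32   # L2 + L3, in MB

# Rumoured figures from the article: 7600X 38, 7700X 40, 7900X 76, 7950X 80.
for cores in (6, 8, 12, 16):
    print(cores, "cores ->", total_cache_mb(cores), "MB")
```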





*View at TechPowerUp Main Site* | Source


----------



## LifeOnMars (Aug 4, 2022)

Nice. I've loved my AMD setups for the past few years and had no issues. If they keep it up I'll be staying with them. I may even go higher end as well, will give it a year and see how they pan out for people.

(Note - I have had Intel in the past and loved them all too, apart from a Pentium D donkey's years ago)


----------



## usiname (Aug 4, 2022)

Hopefully the prices won't be that low, because then I'll have to upgrade again this year...


----------



## igralec84 (Aug 4, 2022)

So probably 300 EUR for the 7600X in EU. I remember the 5600X being 350 EUR for the first couple of weeks.


----------



## DeathtoGnomes (Aug 4, 2022)

I am eyeballing that 7900X hard.



usiname said:


> Hopefully the prices won't be that low, because then I'll have to upgrade again this year...


I'm shooting for around Christmas plus or minus 10 days.


----------



## Daven (Aug 4, 2022)

I wonder if the $300 to $600 gap between the 7700x and the 7900x will allow room for the 7800x3D. That would be the gaming chip to get at ~$450.


----------



## Denver (Aug 4, 2022)

Too good to be true... but I imagine it's a tactic to offset the high price of DDR5 and encourage migration to the new platform(?)


----------



## Bomby569 (Aug 4, 2022)

Even CPUs are increasing TDP now? The 7600X jumping to 105 W from the usual 65 W will require better coolers.


----------



## RedelZaVedno (Aug 4, 2022)

_It seems like AMD has adjusted its pricing downwards by around $100_


----------



## Angry TacoZ (Aug 4, 2022)

I will likely be upgrading from the R7 2700 to the R7 7700x, should be quite the increase in FPS!


----------



## Mack4285 (Aug 4, 2022)

Bomby569 said:


> Even CPUs are increasing TDP now? The 7600X jumping to 105 W from the usual 65 W will require better coolers.



They have no choice. When Intel's 200 W+ CPUs are presented as the winners in graphs, despite consuming twice as much as AMD's CPUs, AMD has to follow. Same on the GPU side - Nvidia feels threatened and decides not to care about power consumption, and AMD has to follow, since the average Joe doesn't care about power consumption.


----------



## TheLostSwede (Aug 4, 2022)

Denver said:


> Too good to be true... but I imagine it's a tactic to offset the high price of DDR5 and encourage migration to the new platform(?)


DDR5 isn't really expensive any more. The prices are coming down almost daily.


----------



## Bomby569 (Aug 4, 2022)

Mack4285 said:


> They have no choice. When Intel's 200 W+ CPUs are presented as the winners in graphs, despite consuming twice as much as AMD's CPUs, AMD has to follow. Same on the GPU side - Nvidia feels threatened and decides not to care about power consumption, and AMD has to follow, since the average Joe doesn't care about power consumption.



It's actually more a heat issue than a power consumption one. Prebuilts and the coolers bundled with a new CPU - it will all go terribly wrong. And I guess people with something like the 212 (me) that was perfectly fine will probably need a new cooler. And if they don't get one, it's yet another terrible experience.



TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.



It will all depend on what these new Ryzens need. And we know Ryzen doesn't perform to spec if you go for lower-clocked RAM; like before, you either invest in fast RAM or leave performance on the table, a lot of it.
And decently clocked DDR5 is still very expensive.


----------



## Wirko (Aug 4, 2022)

TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.


That's true for DDR5-4800 CL40 but early adopters will be avoiding that lowest grade, which can barely compete with cheap DDR4.


----------



## phanbuey (Aug 4, 2022)

If these prices are close to correct then that's amazing; but it doesn't make sense given what they can make on these cores on the server end, and given their stated goal of moving to a "more premium brand".

Pretty skeptical on this.


----------



## Denver (Aug 4, 2022)

TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.


Even when the price drops to similar levels, it should be taken into account that you are now required to mount at least 2x 16 GB with DDR5.

Plus, there is also the cost of a new motherboard for those wishing to migrate from socket AM4 to AM5, so I guess the lower prices make sense.


----------



## Bwaze (Aug 4, 2022)

Is it possible these will actually be slower than, or about even with, the 5800X3D in gaming?


----------



## Lionheart (Aug 4, 2022)

Those base clock speeds seem too good


----------



## bonehead123 (Aug 4, 2022)

TheLostSwede said:


> but it has the highest boost clock of all the expected models at *5.7 GHz*, while having a total of 80 MB cache


Oh yea, bring it RED team, bring it baby 

This may be enough to convince me to switch sides if actually true & accurate, but I will wait for reviews.....

TPU gods, please get some samples to test, like, yesterday, hehehe


----------



## RedelZaVedno (Aug 4, 2022)

Zen 4 has a big DDR5 and new motherboard price problem. Only fanboys will buy Zen 4 over Alder/Raptor Lake or even Zen 3 if the price difference is too large, and it looks like it might be.


----------



## DeathtoGnomes (Aug 4, 2022)

RedelZaVedno said:


> Zen 4 has a big DDR5 and new motherboard price problem. Only fanboys will buy Zen 4 over Alder/Raptor Lake or even Zen 3 if the price difference is too large, and it looks like it might be.


Fanboys, as you call them, will buy within their budget while wishing for something newer.


----------



## Xex360 (Aug 4, 2022)

Wirko said:


> That's true for DDR5-4800 CL40 but early adopters will be avoiding that lowest grade, which can barely compete with cheap DDR4.


It seems the CL number is even less important now than before. Better check reviews of the kits before buying.


----------



## mahirzukic2 (Aug 4, 2022)

I would like to get the new 7950X for my workstation. I could move all my work from a company laptop to a home workstation. I do software engineering, and this would help with compilation and testing times (it takes about an hour to run the full suite of unit, integration and functional tests on the biggest project I'm working on), and also with Docker containers, etc.


----------



## Valantar (Aug 4, 2022)

If those base clocks are accurate, then the higher TDPs are definitely understandable. 16 cores at 4.5GHz? Holy crap! Hopefully that will make the people wishing for AMD to resurrect consumer HEDT quiet down for a while. Of course this also bodes very well for efficiency with stricter power limits, even if stock operation in SFF will become more difficult.



RedelZaVedno said:


> Zen 4 has a big DDR5 and new motherboard price problem. Only fanboys will buy Zen 4 over Alder/Raptor Lake or even Zen 3 if the price difference is too large, and it looks like it might be.


How so? What indication do you have that AM5 motherboards will be more expensive than LGA1700 motherboards? Or are you talking about people who have bought an LGA1700 system this year not wanting to upgrade again the same year? 'Cause ... that's not _that_ many people. Hardly a market-tanking issue. As for DDR5, it's more expensive than DDR4, but only because DDR4 is absolutely ludicrously cheap these days. DDR5 is honestly not that bad in terms of price, and it's continuing to drop. With another DDR5 platform launching, prices will drop even further. As for Zen3 - if price is more important to you than performance, then yes, that's probably a decent choice - though it's not like X570/B550 motherboards are _cheap_ either, so unless you already have one or are willing to settle for an older generation, savings aren't that big. It's entirely possible that the performance increases from Zen4 will make a 7600X + B650 setup both faster and better in terms of I/O than something like a 5800X3D + B550 or even X570 setup - and likely the same price or cheaper.


----------



## TheoneandonlyMrK (Aug 4, 2022)

Bwaze said:


> Is it possible these will actually be slower or about even with 5800x 3D in gaming?


Most Zen 3 chips tap out before 5 GHz, and the X3D caps out at 4.5 GHz AFAIK; this does 5.7 GHz (allegedly) and will have an expected minimum 15% IPC gain.

No, they'll beat a 5800X3D, or they're not going to do well, are they?
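Back-of-envelope, the clock and IPC figures in the post above can be combined multiplicatively; this is a rough sketch that ignores memory and cache effects, which is exactly where the X3D earns its keep:

```python
# Rough, illustrative estimate only: scale the rumoured 7950X boost clock
# against the 5800X3D's ~4.5 GHz and the claimed minimum 15% IPC gain.
x3d_clock = 4.5    # GHz (approximate 5800X3D boost)
zen4_clock = 5.7   # GHz (rumoured 7950X boost)
ipc_gain = 1.15    # claimed minimum IPC uplift

uplift = (zen4_clock / x3d_clock) * ipc_gain
print(f"theoretical single-thread uplift: ~{(uplift - 1) * 100:.0f}%")
```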


----------



## phanbuey (Aug 4, 2022)

TheoneandonlyMrK said:


> Most Zen 3 chips tap out before 5 GHz, and the X3D caps out at 4.5 GHz AFAIK; this does 5.7 GHz (allegedly) and will have an expected minimum 15% IPC gain.
> 
> No, they'll beat a 5800X3D, or they're not going to do well, are they?



IPC and DDR5 don't really matter for games as much as cache hits and latency. I would be willing to bet the 5800X3D will be on par with Zen 4 in gaming, or so close that it's indistinguishable. A Zen 4 X3D w/ DDR5 and an IPC lift, now that will be something.

That's not to say Zen 4 is bad -- but the X3D is just such an insane gaming chip.


----------



## DeathtoGnomes (Aug 4, 2022)

I thought they were going with the xxxxX3D design as default for the 7000 series and dropping the '3D' from the name.


----------



## Valantar (Aug 4, 2022)

DeathtoGnomes said:


> I thought they were going with the xxxxX3D design as default for the 7000 series and dropping the '3D' from the name.


Nope, none of the previously demonstrated Zen4 CPUs have 3D cache, and they've spoken of a separate launch for X3D SKUs (though the naming may or may not stick around, there'll definitely be some name for it).


----------



## DeathtoGnomes (Aug 4, 2022)

Valantar said:


> Nope, none of the previously demonstrated Zen4 CPUs have 3D cache, and they've spoken of a separate launch for X3D SKUs (though the naming may or may not stick around, there'll definitely be some name for it).


damn.


----------



## P4-630 (Aug 4, 2022)

TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.


Here in NL, not really; the DDR5 Corsair kit I bought a few months ago is even more expensive now...


----------



## DeathtoGnomes (Aug 4, 2022)

TheLostSwede said:


> Take these things with the right helping of skepticism for now, as things can still change before the launch.


Also consider that retail pricing will probably not be exactly MSRP.


----------



## TheoneandonlyMrK (Aug 4, 2022)

phanbuey said:


> IPC and DDR5 don't really matter for games as much as cache hits and latency. I would be willing to bet the 5800X3D will be on par with Zen 4 in gaming, or so close that it's indistinguishable. A Zen 4 X3D w/ DDR5 and an IPC lift, now that will be something.
> 
> That's not to say Zen 4 is bad -- but the X3D is just such an insane gaming chip.


I disagree.

Rocket Lake did well versus the X3D; I think Zen 4 will too, but we'll see.


----------



## phanbuey (Aug 4, 2022)

TheoneandonlyMrK said:


> I disagree.
> 
> Rocket Lake did well versus the X3D; I think Zen 4 will too, but we'll see.


Did it though? *Rocket lake*?  In games (especially mins and frame pacing) nothing is really faster than the X3D.






But like you said, let's see. Exciting stuff. Still, look at that gap between the 5800X and the X3D and imagine that on a 7900X. That is... going to be interesting.


----------



## TheLostSwede (Aug 4, 2022)

Denver said:


> Even when the price drops to similar levels, it should be taken into account that you are now required to mount at least 2x 16 GB with DDR5.
> 
> Plus, there is also the cost of a new motherboard for those wishing to migrate from socket AM4 to AM5, so I guess the lower prices make sense.


The motherboard prices I've seen are very encouraging, but B650 is obviously still some time out.



RedelZaVedno said:


> Zen 4 has a big DDR5 and new motherboard price problem. Only fanboys will buy Zen 4 over Alder/Raptor Lake or even Zen 3 if the price difference is too large, and it looks like it might be.


See above with regards to motherboard pricing.


----------



## Rhein7 (Aug 4, 2022)

That 7600X is very spicy, but $500 for the 7900X, please AMD?

Also, IIRC, according to the leaks the 3D cache version will be out at the end of the year? So it will be Zen 4 -> Raptor Lake -> Zen 4 3D, which means I still have time to save up.


----------



## TheLostSwede (Aug 4, 2022)

DeathtoGnomes said:


> Also consider that retail pricing will probably not be exactly MSRP.


Well, most countries add VAT on top.


----------



## Chaitanya (Aug 4, 2022)

Prices, especially for the 7700 and 7600, seem too good to be true. If they end up being correct, I will jump from my 3700X to the AM5 platform.


----------



## thegnome (Aug 4, 2022)

igralec84 said:


> So probably 300 EUR for the 7600X in EU. I remember the 5600X being 350 EUR for the first couple of weeks.


I doubt it, there's most likely less demand for them than for Zen 3. And I'm pretty sure the shortage is more or less gone for CPUs.


----------



## Tomorrow (Aug 4, 2022)

Being a 5800X3D owner, I will skip Zen 4 and Zen 4 3D.
I hope Zen 5 will bring core count increases and V-Cache as default in 2023, along with 8000–10000 DDR5 speeds.

Also more USB4 and Gen5 M.2 slots.


----------



## Jimmy_ (Aug 4, 2022)

Finally some good numbers! AMD is getting close to Intel with OC as well, and that freaking 80 MB of cache is crazy, but 13th gen will have 68 MB of cache. Let's see whether it will be crushing 13th-gen Raptor Lake with these numbers or not! Not in OC, obviously, as 13th gen will easily go 6 GHz+. Waiting for the performance-per-watt numbers of these two!

It will be fun - waiting for comparisons and reviews of these two beasts.


----------



## kapone32 (Aug 4, 2022)

Valantar said:


> If those base clocks are accurate, then the higher TDPs are definitely understandable. 16 cores at 4.5GHz? Holy crap! Hopefully that will make the people wishing for AMD to resurrect consumer HEDT quiet down for a while. Of course this also bodes very well for efficiency with stricter power limits, even if stock operation in SFF will become more difficult.


HEDT was about more than the processor.


----------



## AsRock (Aug 4, 2022)

Daven said:


> I wonder if the $300 to $600 gap between the 7700x and the 7900x will allow room for the 7800x3D. That would be the gaming chip to get at ~$450.



i was thinking the same, although those prices are going up IF true.


----------



## Valantar (Aug 4, 2022)

kapone32 said:


> HEDT was about more than the processor.


Well aware of that, but X670E has tons of PCIe (just needs some motherboards populating it into slots), and DDR5 easily doubles bandwidth over DDR4, so both of those use cases are being improved notably as well.


----------



## Jimmy_ (Aug 4, 2022)

TheoneandonlyMrK said:


> I disagree.
> 
> Rocket Lake did well versus the X3D; I think Zen 4 will too, but we'll see.


Rocket Lake didn't stand a chance if you compare it with the X3D V-Cache CPUs.


----------



## DeathtoGnomes (Aug 4, 2022)

TheLostSwede said:


> Well, most countries add VAT on top.


I thought they listed VAT separately. Or do they just note that it's included in the price?


----------



## Pumper (Aug 4, 2022)

+$300 to go from 8 to 12 cores, but only +$100 to go from 12 to 16? Doubt.

Still hoping that the rumor of Zen4 CPUs with AM4 compatibility is true.


----------



## Punkenjoy (Aug 4, 2022)

Can't wait for Zen4 3D but also can't wait to see how those will perform vs ADL.


----------



## Blaeza (Aug 4, 2022)

DeathtoGnomes said:


> I thought they listed VAT separately. Or do they just note that it's included in the price?


I was nearly going to talk politics then, but I remembered where I am, so all I shall say is that the performance uplift for me, with my humble 3600, is going to be huuuuuge. "Love, you know I said I don't need new RAM or a motherboard? I need ALL of it instead."


----------



## Punkenjoy (Aug 4, 2022)

I am also a bit surprised by the price of the 7900X and 7950X. I expect they will be much more available, since the higher TDP will allow them to bin those chips less aggressively. Before, they had to find chips that could sustain higher frequencies at lower power consumption. Now they just need chips that clock higher, and they can boost the power consumption if needed. I assume they will be able to take the chips that clock well at low power and put them in EPYC or Threadripper.

It's crazy how having a single CCD for all your products opens up so many possibilities.


----------



## Readlight (Aug 4, 2022)

Made in Taiwan


----------



## TheLostSwede (Aug 4, 2022)

DeathtoGnomes said:


> I thought they listed VAT separately. Or do they just note that it's included in the price?


Outside of the US, almost all countries bake VAT into retail pricing.
MSRP in US$ never includes sales tax / VAT.


----------



## Wirko (Aug 4, 2022)

Punkenjoy said:


> It's crazy how having a single CCD for all your products opens up so many possibilities.


It also means you have nothing that competes with the i3, which sells in large volumes to OEMs.


----------



## phanbuey (Aug 4, 2022)

TheLostSwede said:


> Outside of the US, almost all countries bake VAT into retail pricing.
> MSRP in US$ never includes sales tax / VAT.



Which to this day, I still don't understand why.  This and the expected tipping to vendors who you have a contract with... just put the $(*&%& real price on the paper. 

It's pointlessly convoluted.


----------



## Valantar (Aug 4, 2022)

phanbuey said:


> Which to this day, I still don't understand why.  This and the expected tipping to vendors who you have a contract with... just put the $(*&%& real price on the paper.
> 
> It's pointlessly convoluted.


Yeah the US "there might be sales tax, but we won't tell you until the second before you're paying" thing is incredibly shady and misleading.


----------



## Nordic (Aug 4, 2022)

phanbuey said:


> Which to this day, I still don't understand why.  This and the expected tipping to vendors who you have a contract with... just put the $(*&%& real price on the paper.
> 
> It's pointlessly convoluted.


It is deeply rooted as well. In the US, businesses that have expected tipping perform better than businesses that do not.


----------



## btk2k2 (Aug 4, 2022)

phanbuey said:


> IPC and DDR5 don't really matter for games as much as cache hits and latency. I would be willing to bet the 5800X3D will be on par with Zen 4 in gaming, or so close that it's indistinguishable. A Zen 4 X3D w/ DDR5 and an IPC lift, now that will be something.
> 
> That's not to say Zen 4 is bad -- but the X3D is just such an insane gaming chip.



I expect Zen 4 to win, but the margin probably won't be that great. Also, the performance profile will be very different, as I see the 5800X3D keeping the crown in stuff like MSFS, ACC etc., but in some of the standard AAA fare I expect Zen 4 to do well.


----------



## fevgatos (Aug 4, 2022)

Bomby569 said:


> It's actually more a heat issue than a power consumption one. Prebuilts and the coolers bundled with a new CPU - it will all go terribly wrong. And I guess people with something like the 212 (me) that was perfectly fine will probably need a new cooler. And if they don't get one, it's yet another terrible experience.


You know you can power limit your cpu to whatever watts your cooler can handle, right?


----------



## mahirzukic2 (Aug 4, 2022)

fevgatos said:


> You know you can power limit your cpu to whatever watts your cooler can handle, right?


Not only CAN you limit it, but the CPU is by DEFAULT limited by both the power and thermal envelope. If you happen to have a cooler with lower thermal capacity, the CPU will adapt to it, lowering the voltage and clocks to cope with the thermals.


----------



## HD64G (Aug 4, 2022)

Just a guess that the base all-core clocks are without boosting and with TDP = power limit. The real all-core boost will be much closer to the single-core boost when PBO is on and the cooler is capable enough. That's why AMD demonstrated gaming at an all-core boost @ 5.5 GHz without OC.


----------



## neatfeatguy (Aug 4, 2022)

phanbuey said:


> Which to this day, I still don't understand why.  This and the expected tipping to vendors who you have a contract with... just put the $(*&%& real price on the paper.
> 
> It's pointlessly convoluted.



Hard to put a sweeping price on something in the US, since taxes vary from state to state, and even from county to county and city to city.

Saying something is $500 (MSRP) is easier because sales tax may or may not apply in your location, or maybe certain items are taxed and others aren't.
An example: Wisconsin has sales tax on clothing; Minnesota does not.
In MN, in the county/city I live in, the sales tax is 7.38%.
If I were to drive into Minneapolis (roughly 30 minutes, depending on weather and traffic), the sales tax there is 8.03%.
If I go into Hennepin County - which Minneapolis happens to be in - (depending on the direction I go from where I live, about 5-10 minutes to cross the county line), the sales tax is 7.53%.

The easiest way to think of US MSRP is to add at least 10% to the cost. If something shows an MSRP of $500, assume most people will pay upwards of $550 after taxes.
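The calculation above can be sketched directly; the rates are the ones quoted in this post (used purely as examples), and the 10% rule of thumb simply rounds up from them:

```python
# Sketch: US advertised prices exclude sales tax, so the checkout price
# depends on the buyer's locality. Rates below are from the post above.
def checkout_price(msrp: float, tax_rate: float) -> float:
    return round(msrp * (1 + tax_rate), 2)

for place, rate in [("home county, MN", 0.0738),
                    ("Minneapolis", 0.0803),
                    ("Hennepin County", 0.0753)]:
    print(f"{place}: ${checkout_price(500, rate)}")
```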


----------



## efikkan (Aug 4, 2022)

LifeOnMars said:


> Nice. I've loved my AMD setups for the past few years and had no issues. If they keep it up I'll be staying with them. I may even go higher end as well, will give it a year and see how they pan out for people.


Zen 2 and 3 turned out well eventually, but had a bumpy ride with BIOS/firmware issues for several months (I believe it was 4+ months for Zen 3).
After maturity, they've been great though. My system which was built nearly one year ago has had zero crashes (if I recall correctly), and I run my computers for many months without reboot.



RedelZaVedno said:


> _It seems like AMD has adjusted its pricing downwards by around $100_
> 
> *Recession incoming:
> 
> ...


With the current level of inflation, we (as consumers) should be happy if we see prices anywhere close to this. And if we do, and AMD can supply enough chips, then they should move a huge volume of product.



Lionheart said:


> Those base clock speeds seem too good


Achieving something like this would require very good engineering on top of an unusually well-performing node.
Do you remember the Zen 2 rumours? At some point the >5 GHz hype was extreme, yet it turned out to be nonsense from a YouTube channel. So we'll see whether the details of this article are true or not.



phanbuey said:


> IPC and DDR5 don't really matter for games as much as cache hits and latency. I would be willing to bet the 5800X3D will be on par with Zen 4 in gaming, or so close that it's indistinguishable. A Zen 4 X3D w/ DDR5 and an IPC lift, now that will be something.
> 
> That's not to say Zen 4 is bad -- but the X3D is just such an insane gaming chip.


IPC is just the average number of instructions per clock. There are many changes to CPUs which can improve IPC, yet it varies from workload to workload (sometimes even from application to application) whether these improvements translate into increased performance. Typically, increases in execution units, SIMD, etc. have little impact on games but a massive impact on video or software rendering, while improvements to the prefetcher, cache, etc. typically have more impact on games, yet both of these affect IPC.

I believe Zen 4 will also increase L2 cache, so a matchup here will be quite interesting.

But as for the 5800X3D being an "insane gaming chip", that's more than a little exaggerated. There are some games where the gains are very large, but for most of them the gains are marginal at realistic resolutions. We don't know whether this kind of boost from increased L3 will continue with future games, but we do know that software which exhibits this kind of behaviour is suffering from instruction cache misses, and any good programmer could tell you that misses in the instruction cache are primarily due to software bloat. So my point is that designing a CPU with loads of L3 is a double-edged sword: it will "regain" some performance lost to bad code, but it may also "encourage" bad software design.

I'm more interested in what AMD may use this stacking technology for in the future. If it's just to add more L3 cache, then it's almost a gimmick in the consumer space. But if this someday leads to a modular CPU design where you can have e.g. 8 cores, but you can choose between a "base" version for gaming or one with extra SIMD for multimedia etc., but seamlessly integrated through multi-layer chiplets, then I'm for it.


----------



## Bomby569 (Aug 4, 2022)

fevgatos said:


> You know you can power limit your cpu to whatever watts your cooler can handle, right?



then what's the point in paying for lobster and eating chicken?


----------



## Prima.Vera (Aug 4, 2022)

TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.


Funny guy. You should check APAC prices


----------



## TheLostSwede (Aug 4, 2022)

Looks like AMD or at least select partners will offer special DDR5 memory deals come September.

https://twitter.com/i/web/status/1555176687440199680


Prima.Vera said:


> Funny guy. You should check APAC prices


Huh? That's what I am checking. DDR5 is really coming down in price here. AU and NZ don't apply.


----------



## phanbuey (Aug 4, 2022)

neatfeatguy said:


> Hard to put a sweeping price on something in the US since taxes vary from state to state and even from county to county and even city to city.
> 
> Saying something is $500 (MSRP) is easier because sales tax may or may not apply to your living location. Or maybe certain items aren't taxed and others are.
> An example: Wisconsin has sales tax on clothing. Minnesota does not.
> ...


While true, the rest of the world has figured this out -- the stores just put the actual price after taxes.  The store knows where it is and if there's tax applied well ahead of you trying to buy something.  

And yes, I too automatically calculate 10% overcharge when buying anything.


----------



## fevgatos (Aug 4, 2022)

Bomby569 said:


> then what's the point in paying for lobster and eating chicken?


You are assuming the CPU would be cheaper if it had a lower TDP. It wouldn't, so you are paying for chicken regardless of the power limit


----------



## Blaeza (Aug 4, 2022)

fevgatos said:


> You are assuming the CPU would be cheaper if it had a lower TDP. It wouldn't, so you are paying for chicken regardless of the power limit


I won't power limit my CPU, as then you are not getting what you pay for. I'd upgrade my cooler instead, as you want to use it at its full potential, if I had say a 7600X... Hmm, going to have to have a meeting with the Mrs.


----------



## Wirko (Aug 4, 2022)

efikkan said:


> I'm more interested in what AMD may use this stacking technology for in the future. If it's just to add more L3 cache, then it's almost a gimmick in the consumer space. But if this someday leads to a modular CPU design where you can have e.g. 8 cores, but you can choose between a "base" version for gaming or one with extra SIMD for multimedia etc., but seamlessly integrated through multi-layer chiplets, then I'm for it.


I was imagining additional L2 cache ... if 4 clock cycles of additional delay don't destroy performance too much. What's the L2 latency in Zen 3?


----------



## Bomby569 (Aug 4, 2022)

fevgatos said:


> You are assuming the CPU would be cheaper if it had a lower TDP. It wouldn't, so you are paying for chicken regardless of the power limit



Fair, looking at things that way, I guess it's a win.


----------



## Prima.Vera (Aug 4, 2022)

TheLostSwede said:


> Huh? That's what I am checking. DDR5 is really coming down in price here. AU and NZ doesn't apply.








Amazon.co.jp: G.Skill DDR5 Memory DDR5-6400 32GB Kit (16GB x 2) Trident Z5 RGB F5-6400J3239G16GX2-TZ5RK - www.amazon.co.jp

This is both a ridiculous and callous price in Japan.


----------



## Bomby569 (Aug 4, 2022)

Blaeza said:


> I won't power limit my CPU, as then you are not getting what you pay for. I'd upgrade my cooler instead, as you want to use it at its full potential.



But that was the issue I was mentioning: most people won't, they will just use an underpowered PC. This happens a lot with prebuilts, or even builds assembled by someone else; most people really don't know much about PCs beyond using them. It's definitely getting better, but we are still a minority.


----------



## fevgatos (Aug 4, 2022)

Blaeza said:


> I won't power limit my CPU, as then you are not getting what you pay for. I'd upgrade my cooler instead, as you want to use it at its full potential, if I had say a 7600X... Hmm, going to have to have a meeting with the Mrs.


If you think about it for more than a second, that statement is absolutely ridiculous. You'd pay the same amount regardless of the power limit. The 5600X had a lower power limit, yet it cost more than the 3600X did at launch. And following your logic - since electricity costs money - your CPU uses double the power at 125 W instead of 65 W but only performs 20% better, therefore you are not getting what you paid for with the electricity.


----------



## TheLostSwede (Aug 4, 2022)

Prima.Vera said:


> Amazon.co.jp: G.Skill DDR5 Memory DDR5-6400 32GB Kit (16GB x 2) Trident Z5 RGB F5-6400J3239G16GX2-TZ5RK
> ...


Ah, forgot to add Japan to that list. For some reason, all non-Japanese products seem to be stupidly overpriced there, and many Japanese products are too.
Can't see any pricing for that from here though.
Time to come visit Isla Formosa...
Basic 4800 MHz modules have been on sale here for as little as US$67 for 2x 8 GB.
A pair of 6200 MHz V-Color Manta CL36 16 GB modules retails for US$263, which is about the same as what some low-latency 3600 MHz DDR4 G.Skill modules are going for locally.





PChome線上購物 (24h.pchome.com.tw)


----------



## HairyLobsters (Aug 4, 2022)

Why was the 6000 naming convention skipped?


----------



## igralec84 (Aug 4, 2022)

Why does the G.Skill Trident Z 2x 16 GB 6400 kit cost almost double the 6000 version? It's 290-350 EUR for more or less any 6000 kit, or 550-600 EUR for the 6400, lol.


----------



## Blaeza (Aug 4, 2022)

fevgatos said:


> If you think about it for more than a second, that statement is absolutely ridiculous. Youd pay the same amount regardless of the power limit. The 5600x had a lower power limit yet it cost more than the 3600x did at launch. And following your logic - since electricity costs money - your CPU users double the power at 125w instead of 65 but only performs 20% better, therefore you are not getting what you paid for the electricity.


I'm a ridiculous kind of guy.  And I boil a full kettle, JUST FOR ME!


----------



## efikkan (Aug 4, 2022)

Wirko said:


> I was imagining additional L2 cache ... if 4 clock cycles of additional delay don't destroy performance too much. What's the L2 latency in Zen 3?


If WikiChip is accurate, ≥12 cycles.
I have no idea about the latency on Zen 4 though. It is possible to maintain comparable latencies with good design and a good node, but we'll see.


----------



## Chrispy_ (Aug 4, 2022)

I'm very dubious about those prices, but one possible strategy to explain it is that AMD wants to get plenty of AM5 boards out into the wild and unless they have cheap CPUs to entice people, nobody on a tighter budget will buy the more expensive AM5+DDR5 combination.

Dropping the price by $100 subsidises the platform cost for new customers and increases AMD's AM5 market share, which is probably very important for them while they are still producing AM4 CPUs that work with the wide inventory of cheap, good AM4 boards and DDR4-3600.


----------



## kapone32 (Aug 4, 2022)

Valantar said:


> Well aware of that, but X670E has tons of PCIe (just needs some motherboards populating it into slots), and DDR5 easily doubles bandwidth over DDR4, so both of those use cases are being improved notably as well.


If AMD had released a 12-core Threadripper chip for $999 with 5000-series chips inside for TRX40, I would never have bought a 5950X, and this release would indeed have had me looking at AM5. As it stands, I prefer the Extreme over the Crosshair for its flexibility, but I am pumped to see what the rest of the lineup has.


----------



## Punkenjoy (Aug 4, 2022)

Wirko said:


> It also means you have nothing that competes with i3, which sells in large volumes to OEMs.


But does it matter?

They could use one of their monolithic APUs to compete in that market, but AMD can't produce enough chips and wants to maximise profit, so they will allocate capacity to higher-margin chips before trying to compete on low-end CPUs.

It's better to sell fewer products at a higher margin than the opposite.


----------



## HenrySomeone (Aug 4, 2022)

My, my, after all those bullshit "leaks" about AMD coming out with 6-core/12-thread R3s, 8-core/16-thread R5s and 12-core/24-thread R7s in the months before the Zen 2 launch, it is in reality Intel who is bringing more cores (and most importantly, more performance!) to the table under the same branding and same price, for the fourth time now!


----------



## efikkan (Aug 4, 2022)

Chrispy_ said:


> I'm very dubious about those prices, but one possible strategy to explain it is that AMD wants to get plenty of AM5 boards out into the wild and unless they have cheap CPUs to entice people, nobody on a tighter budget will buy the more expensive AM5+DDR5 combination.
> 
> Dropping the price by $100 subsidises the platform cost for new customers and increases AMDs AM5 marketshare which is probably very important for them when they are still producing AM4 CPUs that will work with the wide inventory of cheap, good, affordable AM4 boards and DDR4-3600.


If true, this strategy might actually be profitable, but it depends on one critical factor: supply.
A lot of a CPU's price goes towards offsetting development costs, so if they sell many more CPUs at a lower price, the actual profit may still increase.


----------



## thewan (Aug 4, 2022)

Prima.Vera said:


> Funny guy. You should check APAC prices





TheLostSwede said:


> Looks like AMD or at least select partners will offer special DDR5 memory deals come September.
> 
> __ https://twitter.com/i/web/status/1555176687440199680
> 
> Huh? That's what I am checking. DDR5 is really coming down in price here. AU and NZ doesn't apply.





TheLostSwede said:


> Ah, forgot to add Japan to that list. For some reason, all non Japanese products seem to be stupidly overpriced and many Japanese products are also stupidly overpriced there.
> Can't see any pricing for that from here though.
> Time to come visit isla formosa...
> Basic 4800 MHz modules have been on sale here for as little as US$67 for 2x 8GB.
> ...



Don't worry. Mr Lost is just being lost as usual. He thinks that APAC only has a single country (Taiwan).

Over here in SEA (which is obviously not? part of APAC), 67 USD can get you 1 to 1.5 sticks of 8 GB DDR5-4800. We need to spend double that to get 2x 8 GB. I've looked at Singapore, Malaysia, Indonesia, Thailand and the Philippines.

As for your pair of 16 GB 6200 DDR5 modules, since availability of niche stuff is rubbish over here, the cheapest ones I've spotted in my home of Malaysia (not gonna look up the other countries, too much hassle; note that DDR5-4800 was cheapest here in MY among the SEA markets in the comparison above) are Corsair kits that retail for 360 USD.

And yes, the above are the cheapest "legit" prices, since there are a lot of scam shops here that may show lower prices.

Maybe if you wanted to represent "us" here in the APAC region you would not get lost and would do a bit of research before posting on the internet. It's not that hard.


----------



## TheLostSwede (Aug 4, 2022)

thewan said:


> Don't worry. Mr lost is just being lost as usual. He thinks that APAC only has a single country (Taiwan).
> 
> Over here in SEA (which is obviously not? part of APAC), 67USD can get you 1 - 1.5 * 8GB DDR5 4800. We need to spend double that to get 2x8GB. I've looked at Singapore, Malaysia, Indonesia, Thailand and Philippines.
> 
> ...


No need to be rude and no need for personal attacks.

The thing is, when prices are starting to go down in one country in a region, usually most other places follow. So maybe wait a few weeks and prices will come down in your part of APAC as well.


----------



## mb194dc (Aug 4, 2022)

Chrispy_ said:


> I'm very dubious about those prices, but one possible strategy to explain it is that AMD wants to get plenty of AM5 boards out into the wild and unless they have cheap CPUs to entice people, nobody on a tighter budget will buy the more expensive AM5+DDR5 combination.
> 
> Dropping the price by $100 subsidises the platform cost for new customers and increases AMDs AM5 marketshare which is probably very important for them when they are still producing AM4 CPUs that will work with the wide inventory of cheap, good, affordable AM4 boards and DDR4-3600.



There's only going to be a relatively small market for these chips. AM4 will still suit 99% of use cases just fine. The general economic environment has a lot to do with it.

Last gen is so good that only those who want several hundred FPS at 1080p or similar need bother with AM5.


----------



## Makaveli (Aug 4, 2022)

TheLostSwede said:


> DDR5 isn't really expensive any more. The prices are coming down almost daily.


Prices are coming down but are still expensive when looking at enthusiast-level kits rather than the OEM junk.

DDR5 6000+ kits currently cost this much in CAD. Prices are indeed dropping, but they're not there yet.

$459.99 CAD = $357.50 USD


----------



## HisDivineOrder (Aug 4, 2022)

Remember when AMD said six cores was worth $300? Haha, them were good times.


----------



## Valantar (Aug 4, 2022)

kapone32 said:


> If AMD had released a 12 core Threadripper chip for $999 with 5000 series Chips inside for TRX40. I would never have bought a 5950X. Indeed this would have had me look at AM5 with this release. As a result I prefer the Extreme for it's flexibility than the Crosshair but I am pumped to see what the rest of the lineup has.


I understand that, but consumer HEDT is such a tiny niche now - it no longer has a thread-count advantage that's relevant to consumers, and consumers no longer want or need multiple GPUs - that it's not a sustainable market. For professional use it still has value, which is why we have TR Pro.


----------



## wheresmycar (Aug 4, 2022)

$600 speculated for the 12-core counterpart. Crikey nora pandora. Anyway, not that I'm interested...

I'm waiting for a gaming upgrade. Preferably the 7700X if the cost is reasonable (incl. board and DDR5 memory), otherwise I'd be more than happy with a 7600X. At the speculated $200, the 7600X sounds like a treat!!


----------



## windwhirl (Aug 4, 2022)

Bomby569 said:


> Even CPU's are increasing TDP now? The 7600x with 105W from the usual 65W will require better coolers.


x600X CPUs have usually been 95W TDP parts. The 5600X was the exception.


----------



## thegnome (Aug 4, 2022)

Wirko said:


> That's true for DDR5-4800 CL40 but early adopters will be avoiding that lowest grade, which can barely compete with cheap DDR4.


In terms of looks and latency maybe, but actual performance is up there with higher-end DDR4 kits; just watch some Alder Lake reviews comparing DDR4 to DDR5.


----------



## Mussels (Aug 5, 2022)

Denver said:


> Too good to be true... but I imagine it's a tactic to offset the high price of DDR5 and encourage migration to the new platform(?)


Intel did it too - the parts shortages have ended, so once they sell off the higher-cost (to retailers) old stock, they can decrease prices for new products.


----------



## AlwaysHope (Aug 5, 2022)

Believe none of these prices until I actually see what retailers in my part of the world are going to charge.


----------



## Ravenas (Aug 5, 2022)

If this weren’t a platform moving to DDR5 I would likely skip, but I am interested in DDR5 gains.


----------



## Minus Infinity (Aug 5, 2022)

Angry TacoZ said:


> I will likely be upgrading from the R7 2700 to the R7 7700x, should be quite the increase in FPS!


One of my PCs will be upgrading from a 1700X, but I'm waiting for the V-Cache models before deciding. I'm thinking 7900X this time around, but if the clock-speed penalty for V-Cache is smaller this time, I'm in.

Curious that the 7800X isn't mentioned. I wonder if they will only make the 7800X as a V-Cache part this time around. It would be pointless having both a 7700X and a regular 7800X.


----------



## Mussels (Aug 5, 2022)

I'm waiting for the first round of sales on the 2nd-gen AM5 parts, unless they do come out with new AM4 parts.

Replacing my 2700X with its ghetto pins would be great, but fortunately not something I need in a hurry.


----------



## Bwaze (Aug 5, 2022)

5.7 GHz sounds like a large frequency increase. But we know Ryzen processors don't actually do any work at their boost frequency; they jump to that peak momentarily under very light loads, and they run even purely synthetic single-core loads at a lower frequency.

In my opinion that complicates the simple arithmetic of how much frequency increase we are seeing here.


----------



## gffermari (Aug 5, 2022)

It looks to me like the 7600X and 7700X will match or barely overcome the performance of the 3D in gaming. So Raptor Lake will be about 10% or more faster, and then a 7800X3D will arrive to "compensate" for the loss…

The prices are very good and tempting. But if my scenario comes true, I don't know how people will react to that. Especially if the 13600K is an all-around performer and cheap enough.


----------



## HenrySomeone (Aug 5, 2022)

wheresmycar said:


> $600 speculated for the 12 core counterpart. Crikey nora pandora. Anyway not that i'm interested...
> 
> i'm waiting for a gaming upgrade. Preferably the 7700X if the cost is reasonable (incl board and DDR5 memory), otherwise i'd be more than happy with a 7600X. At the speculated $200 the 7600X that sounds like a treat!!


If the $200 rumor is true, then it won't even match the 12600K; if it's better... then it won't be $200, it's as simple as that (but I'm betting on the former this time).


----------



## Jimmy_ (Aug 5, 2022)

Interesting pricing, as Intel's 14th gen has been delayed yet again. AMD can gain a lot with Ryzen 7000.
Kudos to TEAM RED for sticking to their roadmap.


----------



## HenrySomeone (Aug 5, 2022)

This pricing means they'll be behind even 13th gen, so don't count on them gaining a whole lot.


----------



## mahirzukic2 (Aug 5, 2022)

Makaveli said:


> Price are coming down but still expensive when looking at enthusiast level kits and not the OEM junk.
> 
> DDR5 6000+ kits cost this much CAD currently. Prices are indeed dropping but not there yet.
> 
> ...


Here in Germany (2x 16 GB kits):
https://geizhals.de/?cat=ramddr3&xf=1454_16384~15903_keinSO~253_32768~5828_DDR5
Cheapest 5600 MHz = 204€ ~ $208
Cheapest 6000 MHz = 243€ ~ $248
Even if you look at the cheapest 4800 MHz kit, it's 148€ ~ $150, not a big deal. You can try overclocking it a bit and getting better timings. Pretty good value.

I don't see a reason why these would be cheaper in Germany or the EU: if anything, we are far away from where the RAM is produced, so there are higher shipping costs, plus import costs as well as ~20% VAT on top of it.


----------



## fevgatos (Aug 5, 2022)

HenrySomeone said:


> If the $200 rumor is true, than it won't even match 12600k, if it'll be better ... then it won't be $200, it's as simple as that (but I'm betting on the former this time).


It doesn't matter what the price is; the 7600X will barely match the 12600K in the best-case scenario. The gap in MT performance between the 5600X and the 12600K is already stupendously high.


----------



## HenrySomeone (Aug 5, 2022)

fevgatos said:


> It doesnt matter what the price is, the 7600x will barely match the 12600k best case scenario. The gap in mt performance between the 5600x and the 12600k is already stupendously high.


Oh, for sure! I was slightly unclear above when I said "if it'll be better" by which I meant better than what the suggested $200 price would imply; it's quite obvious it will never best 12600k.


----------



## mahirzukic2 (Aug 5, 2022)

HenrySomeone said:


> Oh, for sure! I was slightly unclear above when I said "if it'll be better" by which I meant better than what the suggested $200 price would imply; it's quite obvious it will never best 12600k.


Why is that an issue? The 7600X's suggested price is $200; the cheapest I can find the 12600K for on Newegg is $280, and here in Germany 310€ ~ $315.
It's kinda hard to expect a new-generation $200 CPU to beat a generation-old processor costing 50% more.


----------



## Valantar (Aug 5, 2022)

Bwaze said:


> 5.7 GHz sounds like a large frequency increase. But we know Ryzen processors don't actually do any work at their boost frequency, they jump to that peak momentarily with very light loads, and they perform even the purely synthetic sincle core load at lower frequency.
> 
> In my opinion that complicates simple arithmetic on how much frequency increase are we seing here.


That's at least somewhat true, but we've seen ES silicon running at 5.5 GHz in-game in AMD's own demos (with clearly visible dynamic clocks, so no major trickery), so we know they'll clock high in real-world use cases as well.

As for the arithmetic on how much frequency increase we're seeing here: look at the base clock increases. These base clocks are reaching Zen 3 boost-clock levels, roughly 1 GHz higher than Zen 3 base clocks. These chips _will_ clock significantly higher than Zen 3.
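As a quick sanity check on that arithmetic (a toy calculation: 3.7/4.6 GHz are the retail 5600X base/boost clocks, 4.7/5.3 GHz the rumored 7600X figures from the article):

```python
def clock_gain(old_ghz: float, new_ghz: float) -> float:
    """Percent increase going from one clock speed to another."""
    return (new_ghz - old_ghz) / old_ghz * 100

# Rumored 7600X vs. retail 5600X
base_gain = clock_gain(3.7, 4.7)   # base: +1.0 GHz, ~27%
boost_gain = clock_gain(4.6, 5.3)  # boost: +0.7 GHz, ~15%
print(f"base +{base_gain:.0f}%, boost +{boost_gain:.0f}%")
```

In relative terms the base-clock jump is nearly double the boost-clock jump, which is the point being made here.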


----------



## fevgatos (Aug 5, 2022)

mahirzukic2 said:


> Why is that an issue. 7600x suggested price is 200$, cheapest I can find 12600k on newegg is 280$, and here in Germany 310E ~ 315$.
> It's kinda hard to assume that the new generation 200$ CPU would beat a generation old processor costing 50% more.


Multiple reasons. First of all, the 13400 will most likely be 6+4, which means it will at least tie the 12600k for starters


----------



## James7787 (Aug 5, 2022)

The 7600X had better not be $300 with only 6 cores when Intel is already at 14 cores for the i5.


----------



## mahirzukic2 (Aug 5, 2022)

James7787 said:


> 7600X better not be 300$ with only 6 cores when Intel is already at 14 for i5


It probably won't be, as detailed in the news post. It will probably have a $200 suggested price; as for retail prices, that's anyone's guess.


----------



## Redwoodz (Aug 5, 2022)

Core counts are not comparable anymore, so why is everyone still doing it?


----------



## HenrySomeone (Aug 5, 2022)

The main cores of Zen 4 and Raptor Lake will likely be pretty comparable (I'd wager on the latter still taking the single-thread crown by a notable margin though), so one of them having 8 extra cores (however inferior/weak/useless the red boys might call them) gives it quite the advantage, wouldn't you say?


----------



## Valantar (Aug 5, 2022)

Redwoodz said:


> Core counts are not comparable anymore, why is every one still doing it?


Because core counts still tell us a lot about performance across various tasks, as long as one is cognizant of which type of core and how many, etc. - and on the other hand, in many tasks core counts don't matter as long as they're >4/>6 etc. I haven't seen anyone doing 16c v 16c comparisons or whatever, but then I may not have been paying attention. There are lots of possible points of comparison, but until we have confirmed SKUs, pricing, clock speeds, etc., rumored clocks and core counts are pretty much what we've got.


----------



## trparky (Aug 5, 2022)

TheLostSwede said:


> MSRP in US$ never includes sales tax / VAT.


That's because, at least where I live, taxes in my county are different from the taxes in a county south of me. You can't calculate sales tax until you enter your ZIP code.


----------



## ModEl4 (Aug 5, 2022)

Regarding the prices: too good to be true on some models!
If AMD wanted to be more competitive and drop prices, the below would seem more logical imo:
7950X $699
7900X $499
7800X $399
7700X $349
7600X $249
7600 (65/88W) $199


----------



## Dr. Dro (Aug 5, 2022)

ModEl4 said:


> Regarding prices too good to be true in some models!
> If AMD wanted to be more competitive and drop prices, the below would seem more logical imo:
> 7950X $699
> 7900X $499
> ...



This is how it's gonna look by the time Zen 5 is about to drop.

For me the DDR4 to DDR5 move really busts things up; getting a comparable high-quality 64 GB kit like my Dominator Platinums would cost me a bundle, and I honestly don't fancy going back to a 6-core processor.

I could go for Raptor Lake or just sit on my 5950X; a GPU upgrade is far more important considering a 4K120 target.


----------



## Easo (Aug 6, 2022)

I am sure 7700X will be going for over 400 EUR in EU for quite a bit.
Ergh...


----------



## A Computer Guy (Aug 6, 2022)

Redwoodz said:


> Core counts are not comparable anymore, why is every one still doing it?


That's because of Intel's P-core / E-core thing right?


----------



## trparky (Aug 6, 2022)

And the only reason why Intel is doing the x86 equivalent of Arm's big.LITTLE architecture is because Intel's performance cores are a freakin' heat pump on a chip. Reminds me of the old Pentium 4 Prescott days.


----------



## A Computer Guy (Aug 6, 2022)

trparky said:


> And the only reason why Intel is doing the x86 equivalent of Arms' big.LITTLE architecture is because Intel performance cores are freakin' heat pump on a chip. Reminds me of the old Pentium 4 Prescott days.


It would be nice if there were a cleaner way in the OS to manually adjust and reserve which kinds of tasks run on which cores (regardless of Intel or AMD), for example reserving certain tasks for the slower or faster cores.
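For what it's worth, most OSes do expose manual core pinning already, just not in a very discoverable way. A minimal sketch using Python's standard library (`os.sched_setaffinity` is a Linux-only API; the core numbers are arbitrary examples):

```python
import os

def pin_to_cores(pid: int, cores: set) -> set:
    """Restrict a process to the given CPU cores (pid 0 = this process)
    and return the resulting affinity set."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

if hasattr(os, "sched_setaffinity"):  # not available on Windows/macOS
    # e.g. keep a background task off the fast cores by pinning it to core 0
    print(pin_to_cores(0, {0}))
```

On Windows the rough equivalent is `SetProcessAffinityMask` or Task Manager's "Set affinity", so the plumbing exists; what's missing is the clean, persistent per-task policy being asked for.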


----------



## trparky (Aug 6, 2022)

A Computer Guy said:


> It would be nice if there was a more clean way in the OS to manually adjust and reserve what kind of things runs on what cores.  (regardless of Intel or AMD) For example reserve certain tasks for slower or faster cores.


I doubt it. You just have to "trust" Windows to get it right.


----------



## fevgatos (Aug 6, 2022)

trparky said:


> And the only reason why Intel is doing the x86 equivalent of Arms' big.LITTLE architecture is because Intel performance cores are freakin' heat pump on a chip. Reminds me of the old Pentium 4 Prescott days.


And that's more wrong than you can possibly imagine. P cores are way more efficient than Zen 3 cores, and way more efficient than E cores as well. The problem is they take up a lot of die space, which means putting 16 of them in one chip would be insanely expensive. Heat and power aren't an issue; that was just a clueless statement.


----------



## Valantar (Aug 6, 2022)

fevgatos said:


> And thats more wrong than you can possibly imagine. P cores are way more efficient than zen 3 cores and way more efficient than E cores as well. Problem is they take a lot of die space, which means putting 16 of those in one chip will be insanely expensive.  Heat and power arent an issue, that was juts a clueless statement


... Except Zen3 cores peak around 20W, while ADL P cores can draw 2-3x that much. More efficient at lower clocks? Depends on the workload. More efficient at stock? Not even close in any CPU-heavy task. They do run very well in games though, most of those being variable, low-threaded workloads that let the CPU boost high to race to finish each frame's compute cycle, which suits ADL's high clocks and good IPC nicely. But, crucially, you can't reliably measure a CPU's efficiency in something that isn't a CPU-intensive task. And in anything CPU-intensive, both Zen3 and E cores are vastly more efficient at anything resembling stock power levels.


----------



## fevgatos (Aug 6, 2022)

Valantar said:


> ... Except Zen3 cores peak around 20W, while ADL P cores can draw 2-3x that much. More efficient at lower clocks? Depends on the workload. More efficient at stock? Not even close in any CPU heavy task. They do run very well in games though, with most of those being variable, low threaded workloads that let the CPU boost high to race to finish each frame's compute cycle, which suits ADL's high clocks and good IPC nicely. But, crucially, you can't reliably measure a CPUs efficiency in something that isn't a cpu-intensive task. And for anything CPU-intensive, both Zen3 and E cores are vastly more efficient at anything resembling stock power levels.


More efficient at everything. What he is saying is that Intel can't fit 16 P cores because of power draw, which is absurd, because we already know a P core outperforms a Zen 3 core at the same wattage. Therefore a 16-P-core Intel chip would outperform the 5950X, for example, at the same or lower wattage.


----------



## fb020997 (Aug 6, 2022)

Angry TacoZ said:


> I will likely be upgrading from the R7 2700 to the R7 7700x, should be quite the increase in FPS!


I went from a 2700X (3200 RAM) to a 5600X (3600 RAM), and despite being GPU-limited with my Vega 64, I notice an unexpected amount of smoothness with the 5600X. I never played modern games this well before!!! It felt like I had some 20-30 FPS low spikes with the 2700X, versus none with the new CPU.
Trust me, it'll be great, especially for the minimum FPS. As smooth as a baby's bottom compared to Zen+.


----------



## gffermari (Aug 6, 2022)

Another fun fact: when Intel released ADL, I didn't see the P and E cores in a good light.

But the 12600K and its successor, the 13600K, as i5 models have a huge advantage over the R5 ones.
The 7600X may have the same or better gaming performance than the 12600K (probably close to the 13600K), but in MT it will be destroyed if it remains a 6/12 CPU.
It has to score 5900X numbers, or almost double the 5600X's, with just 6/12 cores in order to be competitive in MT!

It's funny how the roles have turned around. AMD has always been miles ahead in MT in the Ryzen era...


----------



## Valantar (Aug 6, 2022)

fevgatos said:


> More efficient at everything. What he is saying is that intel cant fit 16p cores cause of power draw which is absurd, cause we already know a p core outperforms a zen 3 core at same wattage. Therefore a 16p core intel would outperform the 5950x for example at same or lower wattage


At _everything_? That's ... a stretch. Though, knowing your arguments from previous discussions, you've said things like "My 12900k at stock limited to 125w is like 7-8% behind the 5950x in CBR23." You're consistently arguing for power limiting, underclocking and undervolting the Intel chips, while leaving the AMD chips at stock, as if they aren't pushed to similar extremes on their respective V/F curves? I get that you probably haven't been reading Zen3 UC/UV/curve optimizer threads given that you're an ADL owner, but - news flash - these chips get _far_ more efficient than stock with very modest performance losses as well. If your argument is "Intel's stock settings are crazy, but if power limited ADL is more efficient than stock Zen3", then you're - intentionally or not - creating an uneven playing field and thus making an invalid comparison. If one system is optimized for efficiency, then both should be, no?

Nobody is denying that ADL is quite efficient in low threaded or low utilization workloads even at stock, and can indeed be power limited, undervolted and underclocked to run quite efficiently at not-too-large performance losses. But you're ignoring the fact that the exact same thing is true for Zen3, except that Zen3 starts from a much, much lower stock power usage, especially in ST tasks, and thus has an inherent advantage there. It also has an inherent disadvantage through its through-package MCM solution (which consumes ~20W when active), giving it a higher base power draw, which means again that there's a crossover point somewhere around ~50W where ADL takes over as the more efficient. But, regardless of this, saying "ADL is more efficient at everything" is pure, unadulterated nonsense. It's less efficient at stock in most CPU-heavy workloads. It's less efficient in those same workloads if both systems are tuned equally, outside of a range of very low power limits.

Things often have complex answers, you know.


----------



## HenrySomeone (Aug 6, 2022)

trparky said:


> And the only reason why Intel is doing the x86 equivalent of Arms' big.LITTLE architecture is because Intel performance cores are freakin' heat pump on a chip. Reminds me of the old Pentium 4 Prescott days.


So what do you have to say about the fact that Zen5 will have pretty much the same arrangement, therefore following Intel's lead? Come on, I'm sure you can think of some excuse...


----------



## ratirt (Aug 6, 2022)

HenrySomeone said:


> So what do you have to say about the fact that Zen5 will have pretty much the same arrangement, therefore following Intel's lead? Come on, I'm sure you can think of some excuse...


Zen 4 is not even out and you are already talking about what Zen 5 will have. Stop speculating and talking nonsense just to prove some crazy point.

Zen 4 pricing does look OK if it turns out to be true. Hopefully it will.


----------



## Valantar (Aug 6, 2022)

ratirt said:


> Zen4 is not even out and you are already talking about what Zen5 chip will have. Stop speculating and talking nonsense just to prove some crazy thought.
> 
> Zen4 pricing does look ok if it turns out to be true. Hopefully it will.


There are rumors that Zen 5 will use some form of hybrid architecture, or at least sets of cores with different tuning. Either way, regardless of why (and you're mostly right about why), Intel did the right thing by going hybrid, and while their implementation is imperfect (as expected for a first-generation effort), it is still impressive in many ways. And there's no doubt that hybrid architectures and increased use of accelerators are the future of CPUs/SoCs; we'll be seeing more of this going forward.


----------



## fevgatos (Aug 6, 2022)

Valantar said:


> At _everything_? That's ... a stretch. Though, knowing your arguments from previous discussions, you've said things like "My 12900k at stock limited to 125w is like 7-8% behind the 5950x in CBR23." You're consistently arguing for power limiting, underclocking and undervolting the Intel chips, while leaving the AMD chips at stock, as if they aren't pushed to similar extremes on their respective V/F curves? I get that you probably haven't been reading Zen3 UC/UV/curve optimizer threads given that you're an ADL owner, but - news flash - these chips get _far_ more efficient than stock with very modest performance losses as well. If your argument is "Intel's stock settings are crazy, but if power limited ADL is more efficient than stock Zen3", then you're - intentionally or not - creating an uneven playing field and thus making an invalid comparison. If one system is optimized for efficiency, then both should be, no?
> 
> Nobody is denying that ADL is quite efficient in low threaded or low utilization workloads even at stock, and can indeed be power limited, undervolted and underclocked to run quite efficiently at not-too-large performance losses. But you're ignoring the fact that the exact same thing is true for Zen3, except that Zen3 starts from a much, much lower stock power usage, especially in ST tasks, and thus has an inherent advantage there. It also has an inherent disadvantage through its through-package MCM solution (which consumes ~20W when active), giving it a higher base power draw, which means again that there's a crossover point somewhere around ~50W where ADL takes over as the more efficient. But, regardless of this, saying "ADL is more efficient at everything" is pure, unadulterated nonsense. It's less efficient at stock in most CPU-heavy workloads. It's less efficient in those same workloads if both systems are tuned equally, outside of a range of very low power limits.
> 
> Things often have complex answers, you know.


No, I'm not talking about undervolting or anything like that. The guy suggested that Intel can't make a 16-P-core CPU because of heat and wattage, which we know is absolutely false. And we know this because we already know how 8 P cores perform. A 16-P-core Intel CPU at 130 W would vastly outperform the 12900K at 240 W and the 5950X at its current 125 W limit. So power and heat aren't an issue at all; it's die space. Even at 240 W, a 16-P-core chip would be way easier to cool than the 12900K, and it would score over 36k in CBR23.

Regarding the rest of what you said, one P core absolutely creams a Zen 3 core at the same wattage in every workload. The difference is so vast that not even Zen 4 can close it. Therefore it stands to reason that a 16-P-core Intel chip would have no problem with power or heat.

You have a 5800X. Choose a benchmark - the best-case scenario for Zen 3 - choose a power limit, again the best-case scenario for Zen 3, and upload your score. I'll upload my score in the same benchmark at the same power limit with 8 GC cores; I guarantee you Zen 3 will get creamed. Especially if we run performance-normalized: for example, in CBR23, 8 Zen 3 cores need more than double the wattage (and probably some LN2 cooling) to tie the performance of 8 GC cores at 65 W.


----------



## ixi (Aug 6, 2022)

igralec84 said:


> So probably 300 EUR for the 7600X in EU. I remember the 5600X being 350 EUR for the first couple of weeks.



Yeah, prices were shitty because COVID COVID COVID COVID COVID COVID, RUMORS, RUMORS, RUMORS, PRICE GOUGING AND at last scalpers.


----------



## Valantar (Aug 6, 2022)

ixi said:


> Yeah, prices were shitty because COVID COVID COVID COVID COVID COVID, RUMORS, RUMORS, RUMORS, PRICE GOUGING AND at last scalpers.


Yep, which is why there's no reason to expect a similar situation now. The chip shortage is moving towards being over (it's not over for a bunch of smaller components, but it is for anything major), Covid-related supply chain disruptions are _mostly_ a thing of the past (fingers crossed!), and due to post-lockdown behavioural changes dropping demand for products, scalping is far less attractive too.


fevgatos said:


> No, I'm not talking about undervolting or anything like that. The guy suggested that intel can't make a 16p core cpu cause of heat and wattage, which we know is absolutely false. And we know this cause we already know how 8p cores perform. A 16p core intel cpu at 130W would vastly outperform the 12900k at 240W and the 5950x at its current 125W limit. So power and heat ain't an issue at all, it's die space. Even at 240W a 16p core would be way easier to cool than the 12900k, and it would score over 36k in cbr23.
> 
> Regarding the rest of what you said, a single p core absolutely creams a zen 3 core at the same wattage in every workload. The difference is so vast that not even zen 4 can close it. Therefore it stands to reason that a 16p core intel would have no problem with power or heat


So ... to be clear:
Rather than testing at the manufacturer-specified power limits, you are arguing for testing at an arbitrary lower limit, but crucially one that is very close to AMD's spec, while very far from Intel's spec. You see how that is problematic, right? How that inherently biases the testing towards one manufacturer? You can keep talking as if iso power = a level playing field all you like, but that isn't reality - reality is that chips come with manufacturer specified power limits, which differ from chip to chip, and any test setting that isn't this number is thus either a reduction or increase from this, and a _different_ reduction or increase between various SKUs. You can't simply ignore the stock setting and say "this is a level playing field because the number is the same". Testing both at 125w, as you are doing, is a whopping 50% (125W) reduction for the Intel chip, while it's a 15% reduction for the AMD chip (19W). Is that a level playing field? No.

Now, is this unfair? Yes, because of architectural differences and how voltage/frequency scaling works. As you yourself keep bringing up regardless of its relevance to what we're discussing, DVFS curves are precisely that - curves. The higher you push an implementation of an architecture, the more voltage and power you need. Which of course means that conversely, the more you drop the power level from a high point on that curve, the better efficiency you will get out at the end. So, by implementing different changes from stock, you are inherently privileging Intel in your comparisons.
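A toy model illustrates the curve argument; the constants below are purely illustrative, not measured from either chip - only the shape of the result matters:

```python
# Toy DVFS model: dynamic power ~ C * V^2 * f, with the required voltage
# rising roughly linearly with frequency. All coefficients are made up
# for illustration; the point is that power falls much faster than clocks.
def dyn_power(freq_ghz: float, v_base=0.8, v_per_ghz=0.12, c=10.0) -> float:
    v = v_base + v_per_ghz * freq_ghz   # voltage needed for this clock (toy)
    return c * v * v * freq_ghz         # C * V^2 * f

p_hi = dyn_power(5.2)   # near the top of the curve
p_lo = dyn_power(4.7)   # ~10% lower clock

print(f"Dropping 5.2 -> 4.7 GHz cuts power by {1 - p_lo / p_hi:.0%}, "
      f"well beyond the ~10% clock reduction")
```

So the further above its sweet spot a chip starts, the more "free" efficiency it gains from any fixed power cap - which is exactly why equal-wattage testing favours the chip with the higher stock limit.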

Of course, you can argue that Intel's stock config is dumb. Which it is. But that doesn't change the fact that the stock power limit is an inherent trait of the CPU as purchased. It is a configurable one, true, but it is still an inherent trait of the product, and ignoring it doesn't make that any less true.

As for per-core efficiency between Zen3 and ADL: you're just plain wrong there, sorry. Let's take a look at Anandtech's ST testing in SPEC, which is a relatively diverse set of workloads and about as accepted of an industry standard as you get for general computational performance for a CPU:





ADL at its stock clocks (which they measured to 71W over idle in another workload, but might of course be somewhat lower in this, as power is inherently workload-dependent) beats peak Zen3 (5950X) by either 16% or 12% in integer workloads and either 6% or 4% in floating point workloads depending on whether you look at the DDR4 or DDR5 results. You can see detailed per-workload scores in the article here. If you're curious about what the SPEC workloads are, you can read more about them here - it's a pretty good mix of consumer-relevant workloads and more scientific/industry-oriented ones.

Now, there's the question of power. Neither of these chips come close to their stock power limits in ST testing - as I said, the 12900K peaks at 71W with one core active; the 5950X peaks at 49W package power for the same. Sadly we don't have specific power numbers for each of these tests, which introduces a lot of error into any estimates made based on what we have. Still, unless the workload Anandtech uses for power testing happens to be an extreme outlier on Alder lake, ADL needs approximately ~45% more power for (best case) 16% more performance. Yes, this is at an extreme, stupid power level. But it also directly disproves your statement that


fevgatos said:


> 1p core absolutely creams a zen 3 core at same wattage in every workload. The difference is so vast that not even zen 4 can close it.


This is, again, pure, unadulterated nonsense. ADL is _barely_ faster than Zen3 at _much_ higher clocks and power levels. Dropping those clocks will inevitably mean a drop in performance, and even if ADL at stock is pushed _way_ past its efficiency sweet spot (which it is!), you're still not going to match that stock Zen3 ST efficiency without incurring a noticeable performance penalty. That's just reality.
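Spelling out the arithmetic from the SPEC figures above (~71 W vs ~49 W peak single-core package power, and a best-case +16% SPECint ST score - all numbers as cited earlier, not re-measured):

```python
# Rough perf-per-watt comparison using the single-threaded figures above.
adl_power_w, zen3_power_w = 71.0, 49.0   # peak 1c package power estimates
adl_score, zen3_score = 1.16, 1.00       # normalized SPECint ST scores

extra_power = adl_power_w / zen3_power_w - 1    # ~45% more power
adl_perf_per_w = adl_score / adl_power_w
zen3_perf_per_w = zen3_score / zen3_power_w

print(f"ADL draws {extra_power:.0%} more power for 16% more performance")
print(f"perf/W: ADL {adl_perf_per_w:.4f} vs Zen3 {zen3_perf_per_w:.4f}")
```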

Does that mean there aren't workloads where ADL wins out in ST efficiency? Obviously not! It's not a terrible architecture at all - it just has some weird and contradictory characteristics. But the claims you're making here are just plain nonsense. They bear no relation to reality.


----------



## fevgatos (Aug 6, 2022)

Valantar said:


> So ... to be clear:
> Rather than testing at the manufacturer-specified power limits, you are arguing for testing at an arbitrary lower limit, but crucially one that is very close to AMD's spec, while very far from Intel's spec. You see how that is problematic, right? How that inherently biases the testing towards one manufacturer? You can keep talking as if iso power = a level playing field all you like, but that isn't reality - reality is that chips come with manufacturer specified power limits, which differ from chip to chip, and any test setting that isn't this number is thus either a reduction or increase from this, and a _different_ reduction or increase between various SKUs. You can't simply ignore the stock setting and say "this is a level playing field because the number is the same". Testing both at 125w, as you are doing, is a whopping 50% (125W) reduction for the Intel chip, while it's a 15% reduction for the AMD chip (19W). Is that a level playing field? No.
> 
> Now, is this unfair? Yes, because of architectural differences and how voltage/frequency scaling works. As you yourself keep bringing up regardless of its relevance to what we're discussing, DVFS curves are precisely that - curves. The higher you push an implementation of an architecture, the more voltage and power you need. Which of course means that conversely, the more you drop the power level from a high point on that curve, the better efficiency you will get out at the end. So, by implementing different changes from stock, you are inherently privileging Intel in your comparisons.
> ...


But you are missing the point completely. I'm not suggesting anything, I'm saying intel can easily release a 16p core CPU with no power or heat issues. The post I replied to says it can't. So you are essentially agreeing with me: intel could release a 16 or even 32p core CPU with no power or heat issues, just by keeping the same power limits (240w) or even reducing them, right? Great, then why the heck are you replying to me when we basically agree on that part?



Valantar said:


> This is, again, pure, unadulterated nonsense. ADL is _barely_ faster than Zen3 at _much_ higher clocks and power levels. Dropping those clocks will inevitably mean a drop in performance, and even if ADL at stock is pushed _way_ past its efficiency sweet spot (which it is!), you're still not going to match that stock Zen3 ST efficiency without incurring a noticeable performance penalty. That's just reality.
> 
> Does that mean there aren't workloads where ADL wins out in ST efficiency? Obviously not! It's not a terrible architecture at all - it just has some weird and contradictory characteristics. But the claims you're making here are just plain nonsense. They bear no relation to reality.


I don't even know why we are discussing this. You have a zen 3, I have an alderlake, let's test it. You even have the benefit of choosing the power limits and workloads that make zen 3 shine, so run your 8 zen 3 cores at a workload and wattage of your choosing, I'll run 8p cores at the same wattage and workload. I predict zen 3 is going to get absolutely destroyed.

And it makes sense, adl is way wider than zen 3, way bigger die with more performance. It's not surprising at all that it's faster / more efficient than zen 3.



Valantar said:


> Of course, you can argue that Intel's stock config is dumb. Which it is. But that doesn't change the fact that the stock power limit is an inherent trait of the CPU as purchased. It is a configurable one, true, but it is still an inherent trait of the product, and ignoring it doesn't make that any less true.


Yes it does change the fact cause we are talking about a theoretical 16p core. Nothing is stopping intel from keeping or reducing the power limit if it decides to release such a CPU. Intel can decide whatever inherent value they want their CPU to have, therefore that's not what's stopping them from releasing a 16p core CPU. In fact they could release one with a 130w power limit that would cream both the 5950x and the 12900k. That's all I'm saying

Also, according to Anandtech, this is the power consumption under SPEC:

In SPEC, in terms of package power, the P-cores averaged 25.3W in the integer suite and 29.2W in the FP suite, in contrast to respectively 10.7W and 11.5W for the E-cores, both under single-threaded scenarios. Idle package power ran in at 1.9W.
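For scale, the ratios those quoted figures imply:

```python
# Ratios implied by the per-core SPEC power averages quoted above
# (average package power per active core, figures as Anandtech reports them).
p_int_w, p_fp_w = 25.3, 29.2   # P-core averages, int / fp suites
e_int_w, e_fp_w = 10.7, 11.5   # E-core averages, int / fp suites

print(f"A P-core averages {p_int_w / e_int_w:.1f}x an E-core's power in int, "
      f"{p_fp_w / e_fp_w:.1f}x in fp")
```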


----------



## Dr. Dro (Aug 6, 2022)

fevgatos said:


> More efficient at everything. What he is saying is that intel cant fit 16p cores cause of power draw which is absurd, cause we already know a p core outperforms a zen 3 core at same wattage. Therefore a 16p core intel would outperform the 5950x for example at same or lower wattage



The hybrid approach unfortunately offers very little benefits to desktop users at the present moment, though I understand it's something that is being developed for a future architecture with 3D stacked packaging. Eventually we will have a small amount of P cores and a very high density of E cores, which will be perfected to have suitably high performance for general usage.

Side note though. Bro, you're quite pent up about Intel losing their dominance or whatever. It's good that both companies are at each other's throats when it comes to this, it means cheaper hardware for us! If Alder Lake hadn't come out, AMD would never have backed down from their earlier excuses and statements on 300 series boards, would not have released SKUs such as the 5600, 5700X or 5800X3D, and if it didn't perform like it does (remember Rocket Lake), they would happily ask $1750 for the 5950X.

I remember the days that I needed to buy a Core i7 Extreme to get the best performance, and nowadays the maximum performance spot for gaming is actually a little below the halo part (that being the 5800X3D currently), and you will do GREAT with a chip like the 5600X or the 12600K. I can't say that I miss having to spend $999 on a processor at all, even if it means losing the "wow cool my rig is preposterous" factor out there.


----------



## Valantar (Aug 6, 2022)

fevgatos said:


> But you are missing the point completely. I'm not suggesting anything, I'm saying intel can easily release a 16p core CPU with no power or heat issues. The post I replied to says it can't. So you are essentially agreeing with me: intel could release a 16 or even 32p core CPU with no power or heat issues, just by keeping the same power limits (240w) or even reducing them, right? Great, then why the heck are you replying to me when we basically agree on that part?


They absolutely could - it would be huge and stupidly expensive, and they would struggle to keep those cores fed with current memory technology (look at the MT performance increases from DDR4 to DDR5 in AT's 12900K test), but it is obviously possible. The question is what clocks would look like at those power levels, and if it would be even remotely competitive. My guess is no - it would require too large of a clock reduction to be particularly competitive in MT tests. It might not actually be _slower_ than a 5950X, but ... that's not much of a bar to pass at 100W more power.

Of course, there's also the question of architectural changes needed to implement 16 P cores - most likely that would mean moving to either a dual ring bus or mesh, as AFAIK Intel has never used a single ring bus above 10 cores/stops (the groups of 4 E cores have a single ring bus stop). Which would harm efficiency as more power would need to be used for uncore, bringing the base power requirements closer to Zen3. Of course a mesh or dual ring bus would also affect core-to-core latencies and task scheduling, though a single, 16-core ring bus would likely cause untenable levels of core-to-core latency. Either way, scaling isn't entirely simple.



fevgatos said:


> I don't even know why we are discussing this. You have a zen 3, I have an alderlake, let's test it. You even have the benefit of choosing the power limits and workloads that make zen 3 shine, so run your 8 zen 3 cores at a workload and wattage of your choosing, I'll run 8p cores at the same wattage and workload. I predict zen 3 is going to get absolutely destroyed.
> 
> And it makes sense, adl is way wider than zen 3, way bigger die with more performance. It's not surprising at all that it's faster / more efficient than zen 3.


If I had the time to do something like this in a meaningful amount of workloads (and had access to SPEC or something similar, but unfortunately I don't have $1250 to spare) I'd be down to do a comparison, though these being extremely different systems that'd still be rather problematic. We wouldn't be able to normalize for software or anything else really, unless you also wanted us to reinstall Windows for this. The point being: there's a reason why reviewers exist, as they have access, time, equipment and means to do things most end users don't. Beyond that, thankfully we have good reviews to base our predictions and speculations on, like the ones linked above.

ADL is definitely a wide core, but its actual, real-world performance still isn't vastly ahead of the somewhat narrower Zen3 - as shown in the benchmarks posted and linked above. It's faster, but only because it _also_ clocks notably higher.


fevgatos said:


> Yes it does change the fact cause we are talking about a theoretical 16p core. Nothing is stopping intel from keeping or reducing the power limit if it decides to release such a CPU. Intel can decide whatever inherent value they want their CPU to have, therefore that's not what's stopping them from releasing a 16p core CPU. In fact they could release one with a 130w power limit that would cream both the 5950x and the 12900k. That's all I'm saying


AFAIK we haven't been talking about a theoretical 16 P-core CPU all this time? Either way, yes, there is something stopping Intel from reducing its power limits overall: competitive positioning. The 12900K _needs_ its stupid high power limit to be clearly faster than the 5950X (and it still isn't so across the board, but in most cases). Whether they could release a 16 P-core CPU for LGA1700 and have it deliver competitive performance at 250W is ... well, something we can speculate on, but from my perspective there are too many unknowns to this to draw hard conclusions, as discussed above. It's not as simple as "twice the CPU cores of a 12900K, done". Heck, with a die that large (that would be, what, 300, 350mm²?), there's even a question of whether they could fit that on the LGA1700 package and connect all the I/O and power to it properly - there needs to be room to route traces through the package for everything, and a larger die makes that more difficult. Not saying it's impossible, just that it's another unknown. As for releasing a 130W 16 P-core CPU that would beat the 5950X? Keep dreaming. That just isn't happening, even if Intel wasn't forced by competition from AMD to push their CPUs to the max.



Dr. Dro said:


> The hybrid approach unfortunately offers very little benefits to desktop users at the present moment, though I understand it's something that is being developed for a future architecture with 3D stacked packaging. Eventually we will have a small amount of P cores and a very high density of E cores, which will be perfected to have suitably high performance for general usage.


I don't agree here - it has four distinct benefits as I see it: increased core counts without ballooning area and thus cost; increased performance in anything nT like tile-based renderers or video encoding; lower power general purpose, low-performance usage; allowing for efficient background processing of relatively heavy tasks like video encoding without throttling the main P cores too hard. The latter is the least likely, as it's extremely dependent on a good scheduler, but both of the former three are real benefits today.


Dr. Dro said:


> Side note though. Bro, you're quite pent up about Intel losing their dominance or whatever. It's good that both companies are at each other's throats when it comes to this, it means cheaper hardware for us! If Alder Lake hadn't come out, AMD would never have backed down from their earlier excuses and statements on 300 series boards, would not have released SKUs such as the 5600, 5700X or 5800X3D, and if it didn't perform like it does (remember Rocket Lake), they would happily ask $1750 for the 5950X.
> 
> I remember the days that I needed to buy a Core i7 Extreme to get the best performance, and nowadays the maximum performance spot for gaming is actually a little below the halo part (that being the 5800X3D currently), and you will do GREAT with a chip like the 5600X or the 12600K. I can't say that I miss having to spend $999 on a processor at all, even if it means losing the "wow cool my rig is preposterous" factor out there.


Here I truly agree with you though. That AMD stepped up their game with Ryzen was truly needed, but we were also seeing AMD grow into a too-comfortable leadership position with Zen3, and ADL delivered a much-needed kick in the rear to keep them accountable. Not that the chip shortage helped, but that's not the fault of anybody specific - AMD just did what corporations do: squeezed money out of it. We definitely can't attribute the 5800X3D to ADL though - the vias to connect 3D V-cache to Zen3 are there on all Zen3 dice, so that has been the plan from day one. The timing just worked out for AMD - and it's of course possible that they may not have released a relatively affordable "flagship killer" gaming CPU with that cache, but rather pushed it towards workstations or just made it very expensive if ADL hadn't been as fast. 

Still, it's pretty clear that fevgatos is for some reason extremely defensive of Intel and ADL. I'm not going to speculate as to the reasons, but this kind of behaviour just doesn't help anyone. I can agree that discussions of ADL have tended towards too much of "OMG what a power hog", but ... well, it is. It's still good, and underclocks and undervolts well, but it _is_ a power hog at stock. From my perspective, it looks like they're trying to add nuance to the discussion but the only thing they're able to do is present even more biased, absurd statements in the opposite direction of other people. Which utterly fails to add nuance, but instead foments polarization and just breeds conflict. Either way, not a good approach.


----------



## Dr. Dro (Aug 6, 2022)

Valantar said:


> I don't agree here - it has four distinct benefits as I see it: increased core counts without ballooning area and thus cost; increased performance in anything nT like tile-based renderers or video encoding; lower power general purpose, low-performance usage; allowing for efficient background processing of relatively heavy tasks like video encoding without throttling the main P cores too hard. The latter is the least likely, as it's extremely dependent on a good scheduler, but both of the former three are real benefits today.



The problem is, the 5950X does nT renderers/video encoding masterfully, and generally within a lower power envelope compared to an i9-12900K. Alder Lake has much better idle power (lacks IOD) but, in this first iteration, there is realistically not much difference on a desktop whether you go with AMD's traditional approach or Intel's newer design. That's the primary reason why I'm excited for Raptor Lake: I think it will be the first design where what Intel set out to do will show, and in flying colors too.


----------



## fevgatos (Aug 6, 2022)

Valantar said:


> The 12900K _needs_ its stupid high power limit to be clearly faster than the 5950X (and it still isn't so across the board, but in most cases). Whether they could release a 16 P-core CPU for LGA1700 and have it deliver competitive performance at 250W is ... well, something we can speculate on, but from my perspective there are too many unknowns to this to draw hard conclusions, as discussed above. It's not as simple as "twice the CPU cores of a 12900K, done".



It needs its stupid high power cause it has 8 big cores trying to compete against 16 big cores. That wouldn't be the case with a theoretical 16P core cpu. 



Valantar said:


> As for releasing a 130W 16 P-core CPU that would beat the 5950X? Keep dreaming. That just isn't happening, even if Intel wasn't forced by competition from AMD to push their CPUs to the max.


I'm telling you a 130w 16P core CPU would score over 32k in CBR23. I mean Sapphire Rapids is coming out in 2023, we can see then, but I'm pretty certain that will be the case. I mean it's easy to figure out, even with imperfect scaling 8P cores score almost 17k at 65w.
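The back-of-the-envelope behind that number (the 17k/65w baseline is from above; the scaling factor is an assumption for illustration, not a measurement):

```python
# Back-of-the-envelope for the 32k claim: double the 8P/65W result and
# apply an assumed (not measured) imperfect-scaling penalty.
base_score, base_watts = 17_000, 65
scaling_penalty = 0.95          # illustrative factor for imperfect scaling

est_16p_score = base_score * 2 * scaling_penalty
print(f"Estimated 16P @ {base_watts * 2} W: ~{est_16p_score:,.0f} points")
```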



Valantar said:


> The question is what clocks would look like at those power levels, and if it would be even remotely competitive. My guess is no - it would require too large of a clock reduction to be particularly competitive in MT tests. It might not actually be _slower_ than a 5950X, but ... that's not much of a bar to pass at 100W more power.


What do you even mean? The 5950x already has a huge clock reduction in all core workloads compared to the 5800x. A 16p core at 130w would probably run anywhere between 3.5 to 3.8 ghz (again - based on what 8p cores do on half the wattage), while the 5950x runs at 3.9. At 240w it would definitely hit something along the lines of 4.3+ ghz.



Valantar said:


> ADL is definitely a wide core, but its actual, real-world performance still isn't vastly ahead of the somewhat narrower Zen3 - as shown in the benchmarks posted and linked above. It's faster, but only because it _also_ clocks notably higher.


Come on, that's a huge dodge. You don't need SPEC, there are hundreds of other benchmarks you can try. Cinebench, blender, corona, 7zip etc. Choose your poison and let's compare 8 zen 3 vs 8 GC cores, I know for a fact the difference will be 20% minimum when consumption normalized and over 100% when performance normalized. But feel free to prove me wrong, I'd be glad - I'll admit it and I'll have learnt something


----------



## Icon Charlie (Aug 7, 2022)

ratirt said:


> Zen4 is not even out and you are already talking about what Zen5 chip will have. Stop speculating and talking nonsense just to prove some crazy thought.
> 
> Zen4 pricing does look ok if it turns out to be true. Hopefully it will.


Well, that does not matter if you are going to pay extra for your motherboard, as well as DDR5 and probably a new CPU cooler and/or PSU.

All of this hype is nonsense. The price increases will be there, and so will the wattage increases. Before I even think of building a new rig (and that is what is going to happen to most of us), I want to see the REAL overall cost of performance vs wattage over the previous generation's components.

Because it looks like an X670 is going to be double the price of an X570, which was 50% over the X370. I have the X370 and X570 motherboards in question, so I know the cost increases there.

This is where AMD is going to make their money: selling the MB chipsets to their partners.


----------



## usiname (Aug 7, 2022)

fevgatos said:


> You have a 5800x. Choose a benchmark - the best case scenario for zen 3 - choose a power limit, again, the best case scenario for zen 3, and upload your score. I'll upload my score at the same benchmark and same power limit with 8 GC cores, I guarantee you zen 3 will get creamed. Especially if we run performance normalized, for example in CBR23 8 zen 3 cores need more than double the wattage (and probably some ln2 cooling) to tie the performance of 8 GC cores at 65w.


Can you post an 8p + 0e 12900k score at 65w in R23, posted on HWBot with benchmate validation? Or 6p + 0e at 50w against my gimped zen 3 ryzen 5500? I will do the same


----------



## fevgatos (Aug 7, 2022)

usiname said:


> Can you post 8p + 0e 12900k score with 65w in R23 posted on HWBot with benchmate validation? Or 6p + 0e with 50w against my gimped zen 3 ryzen 5500? I will make the same


Sure, I'm on vacation, back in 4 days; remind me just in case, and thank you for offering


----------



## Valantar (Aug 7, 2022)

fevgatos said:


> It needs its stupid high power cause it has 8 big cores trying to compete against 16 big cores. That wouldn't be the case with a theoretical 16P core cpu.


No, it needs stupid high power to compete _in single threaded tasks_. Remember, it uses a lot less power per core in MT than in ST! The first core boosts _71W_ over idle power! True, this number also includes the uncore, memory controller etc. ramping up, but as Anandtech says, even when accounting for this: "we’re still looking at ~55-60 W enabled for _a single core_" (their emphasis). Compare this to Zen3, which peaks at less than 21W/core, no matter what. In other words, you can run three 5.05GHz (nominally 4.9GHz, but higher in real life according to AT's per-core testing) Zen3 cores for each Golden Cove core at 5.2GHz in an instruction dense workload.

So, in short, ADL is quite wide, and specifically due to that it is a power hog at high clocks in heavy workloads. When the GC architecture was first discussed, people expressed concerns about the power hungry nature of expanding an X86 decoder beyond being 4-wide - and that's likely part of what we're seeing here. As instruction density rises, power consumption skyrockets, and ADL _needs_ that 250W power budget to beat - or even keep up with! - Zen3 in low threaded applications. Remember, the 12900K reaches 160W with just 4 P-cores active, and creeps up on 200W with six of them active. None of which are at that 5.2GHz clock, of course.

Now, I spotted something looking back at the AT review that I missed yesterday: they note that in their SPEC testing, they saw 25-30W/core for ADL - which is obviously much better than 55-60W. They put this down to most SPEC workloads having much lower instruction density than POV-Ray, which they use for power testing. Still, assuming that ADL sees that 50% power drop in SPEC compared to POV-Ray and Zen3 sees _none_ (which is quite unlikely!), that's still a 25-50% power consumption advantage per core for AMD. Which is somewhat offset by AMD's ~10W higher uncore power, but that ADL advantage disappears once you exceed two cores active due to Zen3's lower per core power.

Going back to the ST/MT comparison and clock/power scaling for ADL: Extrapolating from Anandtech's per core load testing, and assuming uncore under load is 23W (78W package power minus their low 1c estimate of 55W): 2P = 44W/c, 3P = 36.7W/c, 4P = 34W/c, 5P = 30.4W/c, 6P = 29W/c, 7P = 27.3W/c, 8P = 27W/c. That last number is, according to the same AT test, at around 4.7GHz. Which is definitely _a lot_ better than 55W/core! But it's still 28% higher than Zen3's 21W (technically 20.6W) @ 4.9GHz. Now, ADL at 5.2GHz wins in SPEC ST by up to 16% (116% score vs. Zen3's 100%), with a 3% clock advantage (5.2 vs. 5.05GHz). Dropping its clocks to 4.7GHz, assuming a linear drop in performance, would drop that 16% advantage to a 4.8% advantage - and still at a presumable power disadvantage - or at best roughly on par. Sadly we don't have power scaling numbers per core for SPEC, but it's safe to assume that it doesn't see the same dramatic drop as POV-Ray, simply because it doesn't start as high.
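That extrapolation is just package power minus an assumed 23W uncore, divided by active cores; the package figures below are reconstructed from the per-core numbers above, not independently measured:

```python
# Per-core power extrapolation: strip an assumed 23 W uncore from package
# power, then divide by the number of active P-cores. Package figures are
# reconstructed from the per-core numbers in the post (an assumption).
UNCORE_W = 23.0

def per_core_w(package_w: float, active_cores: int) -> float:
    """Estimated power per active P-core after removing uncore power."""
    return (package_w - UNCORE_W) / active_cores

for pkg_w, n in [(111, 2), (159, 4), (197, 6), (239, 8)]:
    print(f"{n}P active: {per_core_w(pkg_w, n):.1f} W/core")
```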

And, crucially, all of this is comparing against Zen3's peak 1c power - and it too drops off noticeably as thread counts increase, with small clock speed losses. The 5950X maintains 20W/c up to 4 active cores, then drops to <17W at 5 cores (@ 4.675GHz), and ~14-15W at 8c active (@ 4.6GHz). Zen3 also shows _massive_ efficiency scaling at lower clocks, going as low as ~8W/core @ 4GHz (13 cores active) or ~6W/core @ 3.775GHz (16 cores active).
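Expressed as clock-per-watt, those Zen3 points make the scaling trend obvious:

```python
# Zen3 clock/power points from the paragraph above, expressed as GHz per
# watt per core - efficiency keeps climbing as clocks drop.
zen3_points = [(20.6, 4.900), (17.0, 4.675), (14.5, 4.600),
               (8.0, 4.000), (6.0, 3.775)]   # (W/core, GHz)

eff = [ghz / w for w, ghz in zen3_points]
for (w, ghz), e in zip(zen3_points, eff):
    print(f"{ghz:.3f} GHz at {w:>4.1f} W/core -> {e:.2f} GHz/W")
```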

If we graph out the respective clock/power scaling seen here, we get these two graphs (with the caveat that we don't have precise frequency numbers for ADL per core, and I have instead extrapolated these linearly between its 5.2GHz spec and the 4.7GHz seen in AT's 8P-core power testing):



What do we see there? That Zen3's power is still dropping quite sharply at 16t active (3.775GHz), while ADL's P cores are flattening out in terms of power draw even at just 8 P cores active. We obviously can't extrapolate a line directly from these graphs and towards zero clock speed and expect it to match reality, but it still says something about the power and clock scaling of these two implementations - and it demonstrates how Zen3 scales very well towards lower power. As an added comparison, look at EPYC Milan: the core (not including IF/uncore) power of the 64-core 7763 is just 164W in SPECint, translating to a staggeringly low 2.6W/core, presumably at its 2450MHz base clock.

It is entirely possible - even essentially established knowledge, given the much better efficiency of lower spec ADL chips like the 12300 - that ADL/GC sees a downward step or increased drop in power/clock at some lower clock than what the 12900K reaches at 8 P cores active, but there's still no question that Zen3 scales _far_ better than ADL towards lower clocks, an advantage _somewhat_ offset by its higher uncore power, but nowhere near completely. ADL still has a slight IPC advantage, and wins out in ST applications that can take advantage of its high per-core boost even for lower spec chips + its low latencies. And it doesn't suffer as badly power-wise in less instruction dense workloads overall. But that doesn't make it more efficient than Zen3 - that simply isn't the case.



fevgatos said:


> Im telling you 130w 16P core CPU would score over 32k in CBR23.


And I don't care much about your pulled-from-thin-air numbers that do not account for the increased interconnect complexity and resultant core-to-core latency increase of such a larger die, or the other changes necessary for that implementation. Nor do you seem to grasp the problem of using CB as the be-all, end-all reference point for performance. It's a single tiled rendering benchmark, with all the peculiarities and characteristics of such a workload - and isn't applicable to other nT workloads, let alone general workloads.


fevgatos said:


> I mean sapphire rapids are coming out in 2023, we can see then, but im pretty certain that will be the case. I mean it's easy to figure out, even with imperfect scaling 8P cores score almost 17k at 65w.


You mean the up-to-60-core, 350 W TDP, perennially delayed datacenter CPU that Intel _still_ hasn't released any actual specs for? Yeah, that seems like a very well-functioning and unproblematic comparison, sure.


fevgatos said:


> What do you even mean. The 5950x already has a huge clock reduction in all core workloads compared to the 5800x. A 16p core at 130w would probably run anywhere between 3.5 to 3.8 ghz (again - based on what 8p cores do on half the wattage), while the 5950x runs at 3.9. At 240w it would definitely hit something along the lines of 4.3+ ghz.


"Huge"? It's a 15% drop (or the 5800X runs 18% faster, depending on your baseline). It's clearly notable, but ... so what? Remember, at that point the 5800X runs at literally twice the power per core compared to the 5950X. As seen above, Zen3 scales extremely well with lower clocks, and doesn't flatline until very low clocks. ADL, on the other hand, seems to flatline (at least for a while) in the high-4GHz range.
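Why "15% drop" and "18% faster" describe the same gap: the percentage depends purely on which clock you take as the baseline. A quick sketch, assuming ~3.9 GHz all-core for the 5950X and ~4.6 GHz for the 5800X (assumed values consistent with the figures above, not measurements):

```python
# The percentage gap between two clocks depends on the baseline chosen.
clock_5950x = 3.9  # GHz, all-core (assumed)
clock_5800x = 4.6  # GHz, all-core (assumed)

drop = (clock_5800x - clock_5950x) / clock_5800x * 100    # 5800X as baseline
faster = (clock_5800x - clock_5950x) / clock_5950x * 100  # 5950X as baseline

print(f"5950X clocks {drop:.0f}% lower; 5800X clocks {faster:.0f}% higher")
# -> 5950X clocks 15% lower; 5800X clocks 18% higher
```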

Heck, if you want to provide more data to challenge what I'm taking from AT's testing: run POV-Ray at your various power limits and record the clock speeds. CB is far, far less instruction dense than POV-Ray (in other words: it's a much lighter and lower power workload) and would thus let a power limited ADL CPU clock far higher, so you can't use your CB numbers as a counterpoint to AT's POV-Ray numbers.


fevgatos said:


> Come on, that's a huge dodge. You don't need SPEC, there are hundreds of other benchmarks you can try. Cinbench, blender, corona, 7zip etc.


Oh, man, this made me laugh so hard. "You don't need an industry standard benchmark with a wide variety of different real-world tests baked in, there are hundreds of single, far less representative workloads you can try." Like, do you even understand what you are saying here? Cinebench is _one_ test. SPEC2017 is _twenty_ different tests (for ST, _23_ for nT). SPEC is a widely recognized industry standard, and includes rendering workloads, compression workloads, simulation workloads, and a lot more. What you mentioned were, let's see, a rendering workload, a rendering workload, a rendering workload, and a compression workload. Hmmmmm - I wonder which of these gives a more representative overview of the performance of a CPU?

Come on, man. Arguing for CB being a better benchmark than SPEC is like arguing that a truck stop hot dog is a better meal than a 7-course Michelin star restaurant menu. It's not even in the same ballpark.


fevgatos said:


> Choose your poison and lets compare 8 zen 3 vs 8 GC cores, I know for a fact the difference will be 20% minimum when consumption normalized and over 100% when performance normalized. But feel free to prove me wrong, id be glad - ill admit it and ill have learnt something


As I said, I would if I had the time and we had the ability to normalize for all the variables involved - storage speed, cooling, other software on the system, etc. Sadly I don't have that time, and certainly don't have the spare hardware to run a new Windows install or reconfigure my cooling to match your system. We do not have what is necessary to do a like-for-like, comparable test run - and running a single benchmark like you're arguing for wouldn't be a representative view of performance anyhow. So doing so would at best produce rough ballpark results with all kinds of unknown variables. Hardly a good way of doing a comparison.


----------



## Dr. Dro (Aug 7, 2022)

Valantar said:


> arguing that a truck stop hot dog is a better meal than a 7-course Michelin star restaurant menu. It's not even in the same ballpark.



Real talk? I agree, I'll take the hot dog, tbh. Hahahaha  



usiname said:


> Can you post 8p + 0e 12900k score with 65w in R23 posted on HWBot with benchmate validation? Or 6p + 0e with 50w against my gimped zen 3 ryzen 5500? I will make the same



I don't think this thread is the best place for us to discuss this, but I actually have enough academic interest in seeing how Zen 3 and Alder Lake would behave across different power limits. I have a 5950X; it's a sample with rather tight binning and a very narrow CO range too, so I suppose I could run a few tests in its Eco mode configuration (which sets it to the 65W/84W PPT specification). If you guys care enough, a new organized thread for this would be a fun effort.


----------



## Valantar (Aug 7, 2022)

Dr. Dro said:


> Real talk? I agree, I'll take the hot dog, tbh. Hahahaha


I would definitely take the Michelin restaurant meal - depending on the type of restaurant - but I would undoubtedly feel extremely out of place eating there, and I'd be broke for months afterwards. It would be a once-in-a-lifetime type of thing. That's what makes the analogy work so well: hotdogs are cheap and easily available to essentially everyone (at least in most of the Western world), just like Cinebench. Michelin restaurant meals are decidedly not that. And in that case, @fevgatos is arguing that a single, freely available benchmark is somehow superior to an expensive ($1250!) but far more varied and representative one - i.e. confusing availability with quality. That doesn't mean the hotdog (or Cinebench) isn't good for anything, just that it isn't the be-all, end-all of its category. Cinebench is an excellent example of a lightweight tiled renderer, and a _perfect_ simulation of rendering in Cinema4D - that's what it does, after all. It just won't tell you much about anything else.


----------



## fevgatos (Aug 7, 2022)

Valantar said:


> No, it needs stupid high power to compete _in single threaded tasks_. Remember, it uses a lot less power per core in MT than in ST! The first core boosts _71W_ over idle power! True, this number also includes the uncore, memory controller etc. ramping up, but as Anandtech says, even when accounting for this: "we’re still looking at ~55-60 W enabled for _a single core.__" _(their emphasis). Compare this to Zen3, which peaks at less than 21W/core, no matter what. In other words, you can run three 5.05GHz (nominally 4.9GHz, but higher in real life according to AT's per-core testzing) Zen3 cores for each Golden Cove core at 5.2GHz in an instruction dense workload.
> 
> So, in short, ADL is quite wide, and specifically due to that it is a power hog at high clocks in heavy workloads. When the GC architecture was first discussed, people expressed concerns about the power hungry nature of expanding an X86 decoder beyond being 4-wide - and that's likely part of what we're seeing here. As instruction density rises, power consumption skyrockets, and ADL _needs_ that 250W power budget to beat - or even keep up with! - Zen3 in low threaded applications. Remember, the 12900K reaches 160W with just 4 P-cores active, and creeps up on 200W with six of them active. None of which are at that 5.2GHz clock, of course.
> 
> ...


You are constantly putting words in my mouth. I never said that CB or corona is better than SPEC. I said you don't need to spend 1200€ to buy SPEC, there are plenty of free benchmarks you can choose from. You even repeated it in your next post, that I said something I never did.

I dont really get why you are doing extrapolations when we have the CPUs and we can test them. Regarding zen 3 scaling better, that's absolutely false, and you can see that from the 12400. It's basically the worst GC core bin (it comes from a different die btw) and it ties the 5600x in performance/watt. Actually according to igorslab testing, the 12400 can get up to 65% more efficient than the 5600x. That's by far the worst P core binned part... im quoting from igorslab

_Once again, you can put the score in relation to the power consumption in order to map the efficiency. The Core i5-12400 is even 64 percentage points more efficient than the Ryzen 5 5600X! 
If you put power consumption and performance under full load into relation, then the Core i5-12400 only has to admit defeat to the Core i9-12900KF, which is the winner in the 125 watt limit. The Ryzen 5 5600X lands significantly further behind._









Intel Core i5-12400 Workstation Review - How does real work succeed without glued-on E-cores? | Part 2 | Page 9 | igor'sLAB
Today we want to question where the upcoming Core i5-12400 can still score apart from colorful gaming worlds and whether it will remain just as frugal and efficient. Gaming is actually overrated when…
www.igorslab.de
				







Also TPU does some single thread consumption tests. The 5950x needs 45w over idle for the single core test, while the 12900k only needs 36w. Even from anandtech in the POV-Ray test, 8 GC cores at 4.9 ghz result in 240w package power - 30w per core. How much wattage do 8 zen 3 cores need at 4.9ghz and what is the performance at that point??

Since usiname offered to run some tests with his zen 3, suggest to him the best case scenario for zen 3, since you dont like cinebench. Im pretty sure 8gc cores will cream 8 zen 3 cores even in that best case scenario.

I dont get what the problem is with Sapphire Rapids? Yes, it got delayed to 2023, so what? It will have a 16P core part that we can compare directly with the 5950x and youll realise that zen 3 loses the efficiency war - and actually the difference is so vast that not even zen 4 can close it.


----------



## usiname (Aug 7, 2022)

Welcome to our race for the most efficient CPU!








Cinebench R23 efficiency race
Everyone with every cpu and architecture is welcome to join in our Cinebench R23 efficiency race! We have two categories: 6/12 cores up to 50W, 8/16 cores up to 65W. Disabling of cores is allowed. The only requirement is a screenshot with BenchMate, and it is recommended to share a link to the result...
www.techpowerup.com


----------



## Icon Charlie (Aug 7, 2022)

usiname said:


> Welcome to our race for most efficiency CPU!
> 
> 
> 
> ...


You sir are one smart cookie.  The old man approves.


----------



## ratirt (Aug 7, 2022)

Valantar said:


> There are rumors that Zen5 will use some form of hybrid arcutecture, or at least sets of cores with different tuning. Either way, regardless of why (and you're mostly right about why), Intel did the right thing by going hybrid, and while their implementation is imperfect (as expected for a first generation effort) it is still impressive in many ways. And there's no doubt hybrid architectures and increased use of accelerators are the future of CPUs/SoCs, and we'll be seeing more of this going forward.


He stated it as fact, not as something the rumours say. Rumours are being pushed as facts.



Icon Charlie said:


> Well that does not matter if you are going to pay extra cost for your motherboard as well as DDR5 and probably a new CPU cooler and/or PSU.
> 
> All of this hype is nonsense. The price increases will be there and so is the increase of wattage as well.   Before I even think of building a new rig and that is what is going to happen to most of us I want to see the REAL overall cost of performance vs wattage over the previous generations components.
> 
> ...


Well it does. People say all the time that the motherboards will cost a lot. We will see how much they actually cost; I doubt the difference will be that big. It will all depend on the features you get, and I can bet there will be plenty of options: cheaper, mid-range, upper and top-notch. Stop speculating and spreading rumors as facts.


----------



## mechtech (Aug 7, 2022)

Another die shrink to 5nm. I wonder if there will be any desktop CPUs under the traditional 65W rating... whenever they come out.

$300 USD is decent for the 7700X if it holds true. However I could get a 5700X for $220 USD, plus cheaper DDR4... guess I'll just play the waiting game.


----------



## Dr. Dro (Aug 7, 2022)

fevgatos said:


> You are constantly putting words in my mouth. I never said that CB or corona is better than SPEC. I said you don't need to spend 1200€ to buy SPEC, there are plenty of free benchmarks you can choose from. You even repeated it in your next post, that I said something I never did.
> 
> I dont really get why you are doing extrapolations when we have the CPUs and we can test them. Regarding zen 3 scaling better, that's absolutely false, and you can see that from the 12400. It's basically the worst GC core bin (it comes from a different die btw)and it ties the 5600x in performance / watt. Actually according to igorslab testing, the 12400 can get up to 65% more efficient than the 5600x. That's by far the worst P core binned part...im quoting from igorslab
> 
> ...



I can't help but notice you're including the IO die in the Ryzen calculations; Cezanne would probably be a better comparison point for ADL's design. 

Cine is fine but all it does is show your Cinema4D performance, imo


----------



## Valantar (Aug 7, 2022)

fevgatos said:


> You are constantly putting words in my mouth. I never said that CB or corona is better than SPEC. I said you don't need to spend 1200€ to buy SPEC, there are plenty of free benchmarks you can choose from. You even repeated it in your next post, that I said something I never did.


You didn't say it was better, but you did say "You don't need SPEC, there are hundreds of other benchmarks", in other words saying that those benchmarks are a reasonable replacement for SPEC. This is what I have argued against - none of the benchmarks you mentioned are, no single benchmark can ever be. Did I make a silly analogy about it? Yes, because IMO what you said was silly, and deserved a silly response. A single benchmark will never be representative of anything beyond itself - at best it can show a rough estimate of something more general, but with a ton of caveats. As for using a collection of various single benchmarks: sure, that's possible - but I sure do not have the time to research and put together a representative suite of freely available and unbiased benchmark applications that can come even remotely close to emulating what SPEC delivers. Do you?

The point being: I'm leaning on SPEC because it's a trustworthy, somewhat representative (outside of gaming) CPU test suite, and is the closest we get to an industry standard. And, crucially, because we have a suite of high quality reviews using it. I do not rely on things like CB as, well, the results are pretty much useless. Which chip is the fastest and/or most efficient shows us ... well, which chip is the fastest and most efficient _in Cinebench_. Not generally. And the point here was something somewhat generalizable, no? Heck, even GeekBench is superior to CB in that regard - at least it runs a variety of workloads.


fevgatos said:


> I dont really get why you are doing extrapolations when we have the CPUs and we can test them.


... I have explained that, at length? If you didn't grasp that, here's a brief summary: because we have absolutely zero hope of approaching the level of control, normalization and test reliability that good professional reviewers operate at.


fevgatos said:


> Regarding zen 3 scaling better, that's absolutely false, and you can see that from the 12400. It's basically the worst GC core bin (it comes from a different die btw)and it ties the 5600x in performance / watt. Actually according to igorslab testing, the 12400 can get up to 65% more efficient than the 5600x. That's by far the worst P core binned part...im quoting from igorslab
> 
> _Once again, you can put the score in relation to the power consumption in order to map the efficiency. The Core i9-12400 is even 64 percentage points more efficient than the Ryzen 5 5600X!
> If you put power consumption and performance under full load into relation, then the Core i5-12400 only has to admit defeat to the Core i9-12900KF, which is the winner in the 125 watt limit. The Ryzen 5 5600X lands significantly further behind._


And, once again, you take a result from a single benchmark and present it as if it is a general truth. I mean, come on: you even link to the source showing how that is for a single, specific workload - and a relatively low intensity, low threaded one at that. Which I have acknowledged, at quite some length, is a strength of ADL.

These results fall perfectly in line with the fact that MCM Zen3 suffers in efficiency at lower power levels due to the relatively high power consumption of through-package IF. This is established knowledge at this point. This doesn't mean the ADL core is more efficient, it shows that the full ADL package is more efficient at low power levels - like I've highlighted several times above. MCM Zen3 simply can't scale that low due to the high uncore power from IF.


fevgatos said:


> Also TPUP does some single thread consumption tests. The 5950x needs 45w over idle for the single core test, while the 12900k only needs 36w. Even from anandtech in the Povray test, 8 GC cores at 4.9 ghz result in 240w package power - 30w per core. How much wattage do 8 zen 3 cores need at 4.9ghz and what is the performance at that point??


1: TPU does power measurements at the wall, meaning full system power including every component + PSU losses. This introduces a lot of variability and room for error - IMO this is a severe weakness of TPU's testing methodology (but understandable due to the equipment costs of doing proper hardware power measurements). This is especially problematic for any low load scenario.
2: You are pretty much repeating back to me what I have already been saying: Zen3 has a high uncore power due to IF, and thus has a disadvantage at very low thread counts despite the cores themselves consuming far less power. Put it this way: that AMD 45W increase is something like 24W uncore + 21W core, while the Intel increase is more like 10W uncore + 26W core. Of course, the system generally consuming more power under load than at idle (RAM, I/O, cooling fans, etc. - at wattage deltas this low, everything matters), plus the effects of PSU efficiency and VRM efficiency, all make these results essentially indistinguishable.
3: While the scores are very close, it's worth mentioning that the 5950X is faster than the 12900K in SuperPi. Though the usefulness of this ancient benchmark is... well, debatable for modern CPUs. Still - slightly higher power, slightly faster - things start evening out.
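The rough split in point 2 can be sketched as follows; the uncore estimates (24 W for Zen3's IF, 10 W for ADL) are the discussion's back-of-envelope guesses, not measured values:

```python
# Splitting a load-minus-idle power delta into uncore and core shares.
def split_delta(total_delta_w: float, uncore_w: float) -> tuple[float, float]:
    """Return (uncore, core) shares of a power delta, in watts."""
    return uncore_w, total_delta_w - uncore_w

amd_uncore, amd_core = split_delta(45.0, 24.0)      # 5950X ST delta (assumed split)
intel_uncore, intel_core = split_delta(36.0, 10.0)  # 12900K ST delta (assumed split)

assert amd_core < intel_core      # the Zen3 core itself draws less...
assert amd_uncore > intel_uncore  # ...but its IF/uncore draws far more
```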


fevgatos said:


> Since usiname offered to run some tests with his zen 3, suggest to him the best case scenario for zen 3, since you dont like cinebench. Im pretty sure 8gc cores will cream 8 zen 3 cores even in that best case scenario.


Have you been paying attention at all? Whatsoever? I'm not interested in best case scenarios. I'm interested in actually representative results, that can tell us something resembling truth about these CPUs. I mean, the fact that you're framing it this way in the first place says quite a bit about your approach to benchmarks: you're looking to choose sides, rather than looking for knowledge. That's really, really not how you want to approach this.

And, again, unless it wasn't clear: there is no single workload that gives a representative benchmark score for a CPU. None. Even something relatively diverse with many workloads like SPEC (or GeekBench) is an approximation at best. But a single benchmark only demonstrates how the CPU performs in that specific benchmark, and might give a hint as to how it would perform in very similar workloads (i.e. 7zip gives an indication of compression performance, CB gives an indication of tiled renderer performance, etc.) - but dependent on the quirks of that particular software.

This is why I'm not interested in jumping on this testing bandwagon: because testing in any real, meaningful way would require time, software and equipment that likely none of us have. You seem to have either a woefully lacking understanding of the requirements for actually reliable testing, or your standards for what you accept as trustworthy are just far too low. Either way: this needs fixing.


fevgatos said:


> I dont get what the problem is with sapphire rapid? Yet, it got delayed for 2023, so what? It will have a 16P core part that we can compare directly with the 5950x and youll realise that zen 3 loses the efficiency war - and actually the difference is that vast that not even zen 4 can close it.


Sapphire Rapids has been delayed ... what is it, four times now? Due to hardware errors, security errors, etc.? Yeah, that's not exactly a good place to start for a high performance comparison. When it comes out, it won't be competing against Zen3, it'll be competing against Zen4 - EPYC Genoa.

As for your fabulations about what a 16c SR CPU will perform like at 130W or whatever - have fun with that. I'll trust actual benchmarks when the actual product reaches the market. From what leaks I've seen so far - which, again, aren't trustworthy, but they're all we have to go by - SR is a perfectly okay server CPU, but nothing special, and nowhere near the efficiency of Milan, let alone Genoa.

And, crucially, SR will be a mesh fabric rather than a ring bus, and will have larger caches all around, so it'll behave quite differently from MSDT ADL. Unlike AMD, Intel doesn't use identical core designs across their server and consumer lineups - and the differences often lead to quite interesting differences in performance scaling, efficiency, and performance in various specific workloads.



Dr. Dro said:


> I can't help but notice you're including the IO die in the Ryzen calculations, Cezanne would probably be a better benchmark compared to ADL's design.


This is part of the problem: The IOD is obviously a part of the CPU, so for any actual real-world CPU power consumption it needs to be included. But unlike the cores, it's a static load, so it doesn't scale with threads - leading to a higher baseline power for MCM Zen3. Meaning that MCM Zen3 will fall behind in efficiency at low power and threading (unless you're running a high power limit ADL in a very instruction heavy workload), but thanks to its much lower power individual cores will quickly overtake ADL as thread counts increase. All the while, @fevgatos mashes everything into a single, amorphous blob, failing to make crucial distinctions and generally being far too vague about what specifically they're debating. Is it architectural efficiency? Is it efficiency of a specific CPU in a specific workload? Is it ST, MT or nT efficiency? Is it efficiency at a range of power levels, or at one or a few arbitrarily chosen power levels? And if the latter, what are the grounds for choosing these?


Dr. Dro said:


> Cine is fine but all it does is show your Cinema4D performance, imo


Exactly. It's a hot dog. It might be a _great _hotdog, but it does not represent food in general.


----------



## fevgatos (Aug 7, 2022)

Valantar said:


> You didn't say it was better, but you did say "You don't need SPEC, there are hundreds of other benchmarks", in other words saying that those benchmarks are a reasonable replacement for SPEC. This is what I have argued against - none of the benchmarks you mentioned are, no single benchmark can ever be. Did I make a silly analogy about it? Yes, because IMO what you said was silly, and deserved a silly response. A single benchmark will never be representative of anything beyond itself - at best it can show a rough estimate of something more general, but with a ton of caveats. As for using a collection of various single benchmarks: sure, that's possible - but I sure do not have the time to research and put together a representative suite of freely available and unbiased benchmark applications that can come even remotely close to emulating what SPEC delivers. Do you?
> 
> The point being: I'm leaning on SPEC because it's a trustworthy, somewhat representative (outside of gaming) CPU test suite, and is the closest we get to an industry standard. And, crucially, because we have a suite of high quality reviews using it. I do not rely on things like CB as, well, the results are pretty much useless. Which chip is the fastest and/or most efficient shows us ... well, which chip is the fastest and most efficient _in cinebench_. Not generally. And the point here was something somewhat generalizeable, no? Heck, even GeekBench is superior to CB in that regard - at least it runs a variety of workloads.
> 
> ...


Do you understand what a best case scenario is and what it's used for? If Zen 3 loses in the best case scenario then no further testing needs to be done. For example, CBR23 is a best case scenario for Golden Cove, so if they lose in CBR23 they will lose in everything else.

Regarding SR, you are missing the point. It doesn't matter at all what it will be competing against, the argument I made was that 16GC cores would wipe the 5950x off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800x. So when SR will be released and what it will be facing when it does is completely irrelevant to the point im making.


----------



## InVasMani (Aug 8, 2022)

I'd like to know if a 7950X would be sufficient to run 10-player co-op off a single PC at a practical 60 FPS+ or not. That would be very impressive. You only need 4 cores, so it should be fine.


----------



## Valantar (Aug 8, 2022)

fevgatos said:


> Do you understand what best case scenario is and what it's used for? If Zen 3 loses in the best case scenario thaen no further testing needs to be done. For example CBR23 is a best case scenario for golden cove, so if they lose in CBR23 they will lose in everything else.


... and you still don't get the fact that you _simply can't know_ that a workload is a "best case scenario" for any given architecture or implementation of that architecture until you've done extensive testing across a wide variety of workloads. CB23 is absolutely not a "best case scenario" for GC - it's a benchmark it does well in. That's it. There are other benchmarks where it has a much more significant lead - and benchmarks where it falls significantly behind. Again: you're desperate for simplification, you seem dead set on wanting a single test that can somehow give a representative overview. As I apparently have to repeat myself until this sticks: *this does not exist, and never will*.

As for efficiency, I have linked to quite a few tests in which Zen3 is already either faster, more efficient, or both, when compared to ADL. I mean, just look at TPU's 12900K review? At stock, the 12900K loses to or ties with the 5950X in the following tests: Corona, Keyshot, V-Ray, UE4 game dev, Google Tesseract OCR, VMWare Workstation, 7-zip, AES & SHA3 encryption, H.264 & H.265 encoding. Now, we don't have power measurements for each of these tests, sadly. But we do know the stock power limits, as well as the per-core peak power of both CPUs. So, unless the 12900K has some kind of debilitating bottleneck that causes it to essentially sit idle, it is using slightly less power (in very light, low-threaded workloads), as much power (in workloads that strike that balance of a few threads, but not very heavy ones), or more power (in anything instruction dense, or anything lightweight above ~3 active cores) than the 5950X. Some of these - rendering, compression, encryption and encoding, at least - are relatively instruction dense nT workloads, where the 12900K _will_ be using more power than the 144W-limited 5950X. Yet it still loses. So, that kind of disproves your "It's more efficient at _everything_", no?
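For clarity, "more efficient" throughout this argument means performance per watt, not raw score. A minimal sketch of that normalization, with placeholder numbers rather than real benchmark results:

```python
# Perf-per-watt comparison: a higher raw score does not imply higher
# efficiency. All numbers below are placeholders for illustration only.
def efficiency(score: float, watts: float) -> float:
    """Benchmark points per watt."""
    return score / watts

# Hypothetical nT workload: chip A wins on raw score, loses on perf/W.
chip_a = efficiency(score=26000, watts=240)  # ~108 pts/W
chip_b = efficiency(score=24000, watts=144)  # ~167 pts/W

assert chip_b > chip_a  # the power-limited chip wins the efficiency metric
```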

Would a low-clocked 16c ADL chip have better efficiency than the 12900K in these tests? That depends on the test, how well it utilizes E cores, and what clocks that chip could sustain at your proposed power levels - including crucial details about the specific silicon implementation that render speculation on this utterly pointless. Still, it is highly unlikely that this would represent a massive, earth-shattering efficiency improvement.


fevgatos said:


> Regarding SR, you are missing the point. It doesn't matter at all what it will be competing against, the argument I made was that 16GC cores would wipe the 5950x off the face of the Earth in terms of efficiency, the same way 8 GC cores wipe the 5800x. So when SR will be released and what it will be facing when it does is completely irrelevant to the point im making.


And you entirely missed the point that the GC cores in SR aren't the same as the GC cores in ADL, and due to the different implementations their performance will vary quite a bit. And, once again: you have absolutely no basis for claiming that a theoretical 16 P-core CPU will be more efficient than the 5950X. None.

Heck, look at TPU's 12900K testing at various power limits. Sure, it _shines_ in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multi threaded? In rendering tasks, at 125W it barely beats the 5800X, despite having 3x the threads and 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even _loses_ to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about _doesn't exist_. ADL scales extremely well in light, low threaded tasks, and otherwise scales _fine_ in everything else. In MT/nT tests where it didn't already win by a ton, it loses _a lot_ of performance as you reduce its power limits.


----------



## springs113 (Aug 8, 2022)

efikkan said:


> Zen 2 and 3 turned out well eventually, but had a bumpy ride with BIOS/firmware issues for several months (I believe it was 4+ months for Zen 3).
> After maturity, they've been great though. My system which was built nearly one year ago has had zero crashes (if I recall correctly), and I run my computers for many months without reboot.
> 
> 
> ...


Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 having an issue at launch, but maybe I was too busy enjoying my launch purchases of all the un-obtainium back then, between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip - compare it to its predecessor (Zen 2) and its running mate (5800X).


----------



## fevgatos (Aug 8, 2022)

Valantar said:


> So, that kind of disproves your "It's more efficient at _everything_", no?


No it doesnt cause my claim is that core for core gc is more efficient than zen 3. You cant disprove that claim by comparing 16 zen 3 cores with 8+8



Valantar said:


> Heck, look at TPU's 12900K testing at various power limits. Sure, it _shines_ in low threaded workloads even with a 50W limit, demonstrating fantastic efficiency in those tasks, and great ST performance. But in anything multi threaded? In rendering tasks, it barely beats the 5800X, despite having 3x the threads and 8 E cores to help pull those loads. The same goes for chemistry and physics simulation, AI upscaling, game dev, 7-zip decompression, and all three video encoding tests. It even _loses_ to the 5800X in encryption workloads. Sure, the 5800X pulls a bit more power (138W vs. 125W max), but ... yeah. That amazing scaling you're talking about _doesn't exist_. ADL scales extremely well in light, low threaded tasks, and otherwise scales _fine_ in everything else. In MT/nT tests where it didn't already win by a ton, it loses _a lot_ of performance as you reduce its power limits.


Yeah that review is obviously flawed. I dont know what he did wrong, but he did something. Its obvious from the results themselves - check the cbr23 numbers. The 12600k ties the 12900k in cbr23 at the same power consumption

And I know it's wrong because I have the freaking CPU. At stock with a 125 W power limit it scores 24k+ in CB R23. Actually, you can even compare it with TechSpot's 12700 review: at 65 W it scores over 16k, while TPU has the 12900K at 18k at 125 W. With fewer cores, mind you. Obviously flawed numbers.



springs113 said:


> Are you for real? Didn't AMD show clock speeds themselves? I also don't recall Zen 3 at launch ever having an issue, but maybe I was too busy enjoying my launch purchases of all the unobtainium back then between the consoles, CPUs and GPUs. The 5800X3D is a beast of a gaming chip; compare it to its predecessor (Zen 2) and its running mate (5800X).


Actually, Zen 3 had lots of problems; some of them are fixed and some of them won't ever be. X570 specifically had problems with SSD reads, USB disconnects, fTPM stuttering...


----------



## trparky (Aug 8, 2022)

OK seriously, do you get a paycheck from Pat Gelsinger?


----------



## Valantar (Aug 8, 2022)

fevgatos said:


> No, it doesn't, because my claim is that, core for core, GC is more efficient than Zen 3. You can't disprove that claim by comparing 16 Zen 3 cores with 8+8.


Sorry, but you're being wildly inconsistent here. Now you're saying your claim is that _the GC core_ is more efficient than _the Zen3 core_. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.

As I have written about at length above, there is a strong argument to be made for _Alder Lake_, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads _than Zen3 CPUs_, but - repeating myself a lot here - this is not due to an advantage in _core_ efficiency, but due to _lower uncore power draw_. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that _despite_ having a less efficient core, they win in _chip_ efficiency comparisons in these workloads _because the chip is more than just cores_.

I don't need any of TPU's data to disprove your statement that the GC core is more efficient than Zen3, because Anandtech's testing shows conclusively that it is the other way around, and that Zen3 scales extremely well at lower clocks (~<6.5W/core @3.775GHz for the 5950X; average ~2.6W (SPECint) to ~1.9W (SPECfp) @ 2.45GHz or higher for the EPYC 7763). Can you show me even a single GC core implementation that can demonstrate similarly low per-core power draws? Even in the same ballpark?


fevgatos said:


> Yeah, that review is obviously flawed. I don't know what he did wrong, but he did something. It's obvious from the results themselves; check the CB R23 numbers. The 12600K ties the 12900K in CB R23 at the same power consumption.
> 
> And I know it's wrong because I have the freaking CPU. At stock with a 125 W power limit it scores 24k+ in CB R23. Actually, you can even compare it with TechSpot's 12700 review: at 65 W it scores over 16k, while TPU has the 12900K at 18k at 125 W. With fewer cores, mind you. Obviously flawed numbers.


Far too many variables in play here - differences in motherboard, BIOS revision, subsequent Intel microcode updates, and more. Until someone can deliver data of comparable quality that shows the review to be erroneous, I'll trust the review, thanks. You're very welcome to try and do so, but that'll require more than stating "my chip does X".


fevgatos said:


> Actually, Zen 3 had lots of problems; some of them are fixed and some of them won't ever be. X570 specifically had problems with SSD reads, USB disconnects, fTPM stuttering...


"Lots of problems" is quite a stretch. fTPM stuttering is relatively rare, and fixed; USB disconnects were B550-only and were fixed long ago, and AFAIK that SSD read speed thing only applied to chipset-connected SSDs (i.e. not CPU-connected ones, as are the majority) and was also fixed.

It's kind of funny, really. Whenever someone brings some nuance to your simplistic arguments and conclusions, you always try to shift the goal posts to suit your liking. The 12900K is more efficient at 125W than the 5950X! No, it's the GC core that's more efficient! No, we can't do comparisons with existing benchmarks - but we can run our own tests(?). No, we can't trust per-core power draw numbers from seasoned reviewers, because look at this benchmark result I got! It's almost as if, oh, I don't know, you have a vested interest in a certain party coming out as conclusively better in this comparison?

Seriously though: I understand that you spent a lot of money on your CPU. And it's a great CPU! It's not even a terrible power hog if tuned sensibly, or in lighter workloads. But ... you need to leave that desperate defensiveness behind. It is perfectly okay that the thing you have bought is not conclusively and unequivocally _the best_. If that's the standard you live by, either you'll go through life deluding yourself, or you'll be consistently sad, angry and disappointed - because _the world doesn't work that way_.

ADL is great. Zen3 is great. ADL is slightly faster; Zen3 is slightly more efficient in heavy or highly threaded loads - _generally_. There are significant caveats and exceptions to both of those overall trends. Neither is a bad choice. Period. And it's okay for there to be multiple good choices out there - in fact, I'd say it's great! Your desperate need for your chosen brand to be _the best_ is ... well, both leading you to make really bad conclusions in how you're looking at test and performance data, and probably not making you feel very good either. I would really recommend you take a step back and reconsider how you're looking at these things.


----------



## fevgatos (Aug 8, 2022)

Valantar said:


> Sorry, but you're being wildly inconsistent here. Now you're saying your claim is that _the GC core_ is more efficient than _the Zen3 core_. Which we have conclusive evidence showing that it is not, through Anandtech's per-core power testing. Despite the 12900K being pushed stupidly high, and responding poorly to instruction dense workloads, it is still less efficient in lighter workloads such as most SPEC workloads, consuming 6-7W more than the peak power draw of any single Zen3 core, while barely outperforming it.
> 
> As I have written about at length above, there is a strong argument to be made for _Alder Lake_, the implemented CPUs, being more efficient at lighter (less instruction dense), low threaded workloads _than Zen3 CPUs_, but - repeating myself a lot here - this is not due to an advantage in _core_ efficiency, but due to _lower uncore power draw_. AMD's through-package Infinity Fabric gives their CPUs a higher base power draw regardless of the number of cores under load (though not at idle) than Intel's monolithic CPUs, meaning that _despite_ having a less efficient core, they win in _chip_ efficiency comparisons in these workloads _because the chip is more than just cores_.
> 
> ...


The TPU review is absolutely wrong and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K: worse bin, fewer P-cores and half the E-cores. Also, TechSpot's review tested a 12700, and at 65 W it scores more than the 12900K at 100 W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.

Personally, I tested three 12900Ks on four different mobos and they all came back with the same results: 23,500 to 24,500 at 125 W. Nowhere near TPU's numbers.

I never changed my argument; I've said repeatedly that the E-cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed. I don't care if ADL is the best; if it wasn't, I would have bought something else. Anyway, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency, I'll throw my computer out the window.


----------



## Vario (Aug 8, 2022)

I've been thinking about doing a 7700X AM5 upgrade from my i5-8600K.


----------



## HenrySomeone (Aug 8, 2022)

Vario said:


> I've been thinking about doing a 7700X AM5 upgrade from my i5-8600K.


The 13700(K) will likely be considerably more potent. Honestly, as it looks right now, only the 7950X will have some merit, unless you're willing to play the waiting game of what might eventually get released on the AM5 platform. But if you want your performance now...


----------



## Why_Me (Aug 8, 2022)

Valantar said:


> Yeah the US "there might be sales tax, but we won't tell you until the second before you're paying" thing is incredibly shady and misleading.


There are 50 US states and each state has its own individual sales tax; not to mention, some states, such as the one I live in, have no sales tax at all.


----------



## chrcoluk (Aug 8, 2022)

The clocks are very impressive, but I hope it's not at the cost of power efficiency.


----------



## StrikerRocket (Aug 8, 2022)

Not going to upgrade anytime soon. This goes too fast, and as soon as one gets used to a new system, a new architecture comes around, a new platform, etc.
This is becoming too much, I think. I'll stick to my 5900X and 3070 Ti for the time being.


----------



## Valantar (Aug 8, 2022)

fevgatos said:


> The TPU review is absolutely wrong and you don't need any other data; their own data proves it. The 12600K cannot be more efficient than the 12900K: worse bin, fewer P-cores and half the E-cores. Also, TechSpot's review tested a 12700, and at 65 W it scores more than the 12900K at 100 W. It's painfully obvious that the TPU review is wrong. I mean, even the 5600X is more efficient at the same wattage, LOL.


Many possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.

Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.


fevgatos said:


> I never changed my argument; I've said repeatedly that the E-cores are inefficient at most wattages you would run a desktop CPU at, and that Golden Cove cores are vastly more efficient than Zen 3 cores at the same wattage. That's my argument and it has never changed.


But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power numbers you care about - consistently, regardless of what other data is provided - are _package power_. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally _can't_ get per-core power from package power, as there's other stuff mixed in there.

There are two possible logically congruent variants of your argument:
- That the GC _core_ is more efficient than the Zen3 _core_, on a core-for-core, core-power-only basis, at the same wattage
- That ADL as implemented, as a full chip, including cores and uncore, is more efficient than Zen3 at the same wattage

The first claim has been quite conclusively disproven by AnandTech's per-core power testing. The GC core in instruction dense workloads can scale to insane power levels, and even in lighter workloads needs notably more power than the highest power a Zen3 core ever reaches in order to eke out a small win.

The second point is crucially more complex, as the answer differs wildly across power levels as the effects of uncore power vs. core power scale, and of course carries with it the problem of an uneven playing field, where every ADL chip is operating at a significant downclock from its stock configuration, which privileges it over the more frugal at stock Zen3 CPUs. And, as has been discussed at massive length above: there is no conclusive, simple answer to this. ADL does indeed have an advantage at relatively light, low threaded workloads. It does not if the workload is instruction dense, or if the number of fully loaded cores exceeds ~4. Though again, due to how different workloads execute differently on different architectures, even these are oversimplified generalizations. The real answer: it's damn complicated, and they each have their strengths and weaknesses.
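The package-vs-core distinction above can be made concrete with a minimal sketch (hypothetical numbers, not measurements from any review) of why two chips with identical package power can have very different per-core power once uncore draw is accounted for:

```python
# Why package power alone can't tell you per-core power: uncore draw
# (fabric, memory controller, I/O) must be subtracted first, and it
# differs between designs. All figures below are hypothetical.

def per_core_power(package_w: float, uncore_w: float, loaded_cores: int) -> float:
    """Estimate average per-core power from package power and uncore draw."""
    return (package_w - uncore_w) / loaded_cores

# Same 105 W package power under an 8-core load, different uncore draw:
chiplet = per_core_power(package_w=105.0, uncore_w=20.0, loaded_cores=8)
monolithic = per_core_power(package_w=105.0, uncore_w=8.0, loaded_cores=8)

print(f"chiplet-style chip:    {chiplet:.3f} W/core")     # 10.625 W/core
print(f"monolithic-style chip: {monolithic:.3f} W/core")  # 12.125 W/core
```

With equal package power, the chip with the heavier uncore actually has the more frugal cores, which is exactly why package-power comparisons can't settle a core-efficiency argument.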


fevgatos said:


> I don't care if ADL is the best; if it wasn't, I would have bought something else. Anyway, there is a thread for people posting their numbers at the same wattage; I'll be back in 3 days and I'll post some numbers. If Zen 3 even gets close to 8 GC cores in efficiency, I'll throw my computer out the window.


It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?



Why_Me said:


> There are 50 US states and each state has its own individual sales tax; not to mention, some states, such as the one I live in, have no sales tax at all.


I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.


----------



## Why_Me (Aug 8, 2022)

Valantar said:


> I'm well aware of that, but I don't see how that makes it logical for stores to not bake sales tax into their listed prices. A store is generally only located in one state, right? I can't imagine there are many stores straddling a state border. So they should be quite capable of pricing things with what people will actually be paying, as is done literally everywhere else. And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.


A store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax, because the state I live in has none.


----------



## Valantar (Aug 8, 2022)

StrikerRocket said:


> Not going to upgrade anytime soon. This goes too fast and as soon as one gets used to a new system, a new architecture comes around, a new platform etc.
> This is becoming too much I think. I'll stick to my 5900X and 3070 Ti for the time being.


Upgrading every generation makes no sense anyway - it just makes progress feel slower by chopping it up into tiny bits, while costing tons of money. That's a great PC you've got, and it'll be great for many years still, so there's no reason to upgrade for a while.


chrcoluk said:


> The clocks are very impressive but I hope its not at the cost of power efficiency.


Given the increase in base clock it seems efficiency is maintained at least to some degree, though they're definitely pushing these hard. The chips should all sustain base clock continuously at TDP, which looks decent (from 3.4 GHz @ 105 W to 4.5 GHz @ 170 W), but bumping TDP from 105 W to 170 W and PPT from 144 W to 230 W is still quite a lot. PPT/TDC/EDC tuning will likely be even more useful for Zen4 than it is for Zen3 currently, and no doubt there'll be notable gains from setting lower power limits, simply because the chips are scaling much higher in power than previously.
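As a quick sanity check on those socket power figures, AM4's limit has commonly been reported as PPT ≈ 1.35 × TDP (an observed rule of thumb, not anything AMD guarantees, and assumed here to carry over to AM5):

```python
# Rule-of-thumb relationship between AMD's rated TDP and the PPT
# (Package Power Tracking) socket limit, commonly cited as ~1.35x on
# AM4. Carrying it over to AM5 is an assumption; treat as approximate.

def ppt_from_tdp(tdp_w: float, factor: float = 1.35) -> float:
    """Approximate PPT socket power limit from rated TDP."""
    return tdp_w * factor

for tdp in (105, 170):
    print(f"TDP {tdp} W -> PPT ~{ppt_from_tdp(tdp):.0f} W")
# TDP 105 W -> PPT ~142 W
# TDP 170 W -> PPT ~230 W
```

Those results line up closely with the ~144 W and 230 W PPT figures discussed above.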



Why_Me said:


> A store located in New York, for example, where sales tax is high, can only charge a customer the sales tax of where said customer lives if the sale is done online. So no matter what store I order from, and no matter where said store is located, I pay no sales tax, because the state I live in has none.


Yes, exactly. Like I said: this is easily solved.


Valantar said:


> And for online retailers it would be as simple as having a pop-up asking which state you're in either when you first enter the site, or when you first add something to your cart.


Through this, they could easily adjust the listed price to match with the reality of what the customer will be paying. This really isn't complicated at all.


----------



## Kelutrel (Aug 8, 2022)

I'm really excited for Zen 4, but I have a 5900X that I built in March 2021, and rebuilding my whole system now, after a year and a half, may not be justified by the assumed performance increase of a Zen 4 platform... I would have liked a 5900X3D, but nope.


----------



## InVasMani (Aug 8, 2022)

I hope they introduce some low-power E variants; perhaps they'll do that alongside 3D stacked cache models? You'll already be paying a bit more for stacked cache; it may as well be binned for more friendly power at the same time.


----------



## fevgatos (Aug 9, 2022)

Valantar said:


> Many possible explanations for this - for example, it could be indicative of the low power limit interfering with the boost algorithms, causing the CPU to be stuck in boost/throttle loops, which always kill performance. If this was the case, it would be quite reasonable for Intel to have fixed this afterwards, which would explain your different results.
> 
> Oh, btw, have you heard of this new invention called a link? I have linked to literally every single source I've referred to in this discussion. It would be nice if you could do others the same courtesy as is being done to you. It's not my job to corroborate your statements. Post your sources.


You are right; here is the link from TechSpot's 12700 review. With a 65 W power limit it outscores TPU's 12900K at 100 W. That's simply preposterous.









Intel Core i7-12700 + Intel B660 Review (www.techspot.com)
Today we have a combo review of all-new Intel hardware, our first look at the affordable Intel B660 platform, the non-K Core i7-12700 and Intel's latest box...
				




Also, here is a 12900K at 125 W from Igor's Lab.










Core i9-12900KF, Core i7-12700K and Core i5-12600 in a workstation test with amazing results and an old weakness | Part 2 | igor'sLAB (www.igorslab.de)
So today I'll get serious and show you where Alder Lake S can really score aside from colorful gaming pixels. Gaming what? Completely overrated if you look at at least some of today's results.
				






Valantar said:


> But this is the thing: as you keep reformulating this core argument, you keep changing what you are arguing, because this core argument does not make logical sense. How? Simple: the only interpretation of "Golden Cove cores are more efficient than Zen3 cores at the same wattage" that makes logical sense is if you're looking at per-core power, not package power. Yet the only power numbers you care about - consistently, regardless of what other data is provided - are _package power_. Package power includes other significant power draws than the cores, and crucially these draws differ between chips and architectures, introducing a variable that ruins your data - you literally _can't_ get per-core power from package power, as there's other stuff mixed in there.
> 
> There are two possible logically congruent variants of your argument:
> - That The GC _core_ is more efficient than the Zen3 _core_, on a core-for-core, core power only basis, at the same wattage
> ...


I'm talking about package power. Anandtech hasn't disproven anything; even if they are just checking core power instead of package power, they haven't done so normalized, have they?



Valantar said:


> It's been a while since I've seen someone contradict themselves so explicitly within the span of three sentences. Well done, I guess? "I don't care!/If I'm wrong I'll throw my PC out the window!" aren't ... quite in line with each other, now, are they?


I'm just trying to tell you I'm pretty confident it is the case. And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CB R23 at 150 W, while 8 GC cores need... 65 W to match that. Yes, CB R23 is a good scenario for Alder Lake, but the difference is ridiculously big.


----------



## gffermari (Aug 9, 2022)

Why does the 12400F use the same amount of power as the 5600X?
Both are 6/12, both consume about the same watts and both score similar numbers. It appears that the 5600X is slightly more efficient than the 12400, though there's practically no difference.

So do the 12700/12900 have such better binning that they're twice as efficient as the Ryzens?

It appears to me that the GC cores have similar efficiency to Zen 3, but they are just clocked way higher in order to be faster in apps/benchmarks.









Intel Core i5-12400F Review - The AMD Challenger (www.techpowerup.com)
The Intel Core i5-12400F comes at an extremely attractive price point, yet offers performance comparable to AMD's Ryzen 5 5600X. While Intel introduced a Hybrid core design with Alder Lake, the 12400F is a P-core only design, which helps avoid potential compatibility issues with E-cores.
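The efficiency comparison being asked about here boils down to benchmark points per watt. A small sketch (with made-up scores and wattages, purely to illustrate the arithmetic) shows how the "X% more efficient" figures thrown around in this thread are derived:

```python
# Efficiency as points per watt, and relative efficiency between two
# CPUs. Scores and wattages are made-up placeholders, not measurements.

def points_per_watt(score: float, watts: float) -> float:
    return score / watts

def relative_efficiency(score_a: float, watts_a: float,
                        score_b: float, watts_b: float) -> float:
    """How much more efficient A is than B, as a percentage."""
    return (points_per_watt(score_a, watts_a)
            / points_per_watt(score_b, watts_b) - 1) * 100

# Hypothetical: CPU A scores 11,000 at 65 W; CPU B scores 11,500 at 76 W.
print(f"A: {points_per_watt(11_000, 65):.1f} pts/W")  # 169.2 pts/W
print(f"B: {points_per_watt(11_500, 76):.1f} pts/W")  # 151.3 pts/W
print(f"A vs B: {relative_efficiency(11_000, 65, 11_500, 76):.0f}% more efficient")
```

Note that B wins on raw score while A wins on efficiency; which one "wins" depends entirely on whether you normalize for performance or for power, which is much of what this thread is arguing about.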


----------



## fevgatos (Aug 9, 2022)

gffermari said:


> Why does the 12400F use the same amount of power as the 5600X?
> Both are 6/12, both consume about the same watts and both score similar numbers. It appears that the 5600X is slightly more efficient than the 12400, though there's practically no difference.
> 
> So do the 12700/12900 have such better binning that they're twice as efficient as the Ryzens?
> ...


The 12400 is a different die from the rest of the lineup and yes, it is pretty much the worst-binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab, though, it requires 124 mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.

Also, the review from TPU measures power from the wall, which is not really indicative. When you are testing such low-wattage parts, a 5 or 10 W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests CPU power only; the 12400 is way more efficient than the 5600X.

It's up to 65% (that's HUGE) more efficient in lightly threaded workloads and around 20-25% more efficient in multithreaded workloads.









Intel Core i5-12400 Workstation Review - How does real work succeed without glued-on E-cores? | Part 2 | igor'sLAB (www.igorslab.de)
Today we want to question where the upcoming Core i5-12400 can still score apart from colorful gaming worlds and whether it will remain just as frugal and efficient. Gaming is actually overrated when…
				





Intel's stock settings push the 12900K way, way past its efficiency point. They are trying to make it compete with the 5950X in MT performance, which it has no business doing, imo. In all fairness, AMD's stock settings - as shown by the Zen 4 leaks - will also be out of the park. The only reason they didn't push the wattage with Zen 3 is that they didn't need to: Intel wasn't competing in MT performance with Comet Lake, so AMD decided to play the efficiency card. Now that Intel is pushing them, AMD is also raising the stock wattage.


----------



## Valantar (Aug 9, 2022)

fevgatos said:


> I'm talking about package power.


Then please, for the love of all that's good in this world, stop going on about "core efficiency". Package power is only indirectly indicative of core efficiency, and to extract core efficiency from package power you must be able to reliably remove uncore power from package power. Without doing so, there is no way whatsoever of knowing how much power the cores are consuming.


fevgatos said:


> Anandtech hasn't disproven anything; even if they are just checking core power instead of package power, they haven't done so normalized, have they?


Normalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands. And they demonstrated a wide range of power behaviours at these stock settings - in instruction dense POV-Ray, they saw a 71W increase over idle, which they estimate to be a 55-60W increase in core power. On the other hand, in the less instruction dense SPEC workloads they estimated core power at 25-30W. At (at least roughly) the same clocks. At which point it delivered marginally better performance than the 5950X, the cores in which peak at 20.6W in POV-Ray and similar to ADL likely consume a bit less across the SPEC suite.

That demonstrates that, as configured from the factory, at very similar clock speeds, Zen3 is more efficient than ADL as ADL beats it by ~5-16% while consuming notably more than 5-16% more power. Lowering the power limit will not change ADL's efficiency in this test, because the CPU is nowhere near hitting any reasonable power limit - even a 50W limit would likely deliver roughly the same performance in SPEC, and it will boost as opportunistically within this limit unless also frequency limited.


fevgatos said:


> I'm just trying to tell you I'm pretty confident it is the case.


You're so confident that you're heavily emotionally invested in the outcome, yes, I see that. Doesn't change what I said above.


fevgatos said:


> And I'm confident because I tested, repeatedly. I've seen a tuned-to-the-max 5800X score 16k in CB R23 at 150 W, while 8 GC cores need... 65 W to match that. Yes, CB R23 is a good scenario for Alder Lake, but the difference is ridiculously big.


But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is _far_ more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed to extremes tuning of one chip, with a heavily power limited, and thus also clock limited, tuning of another. How does a 5800X perform at 65W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?



fevgatos said:


> The 12400 is a different die from the rest of the lineup and yes, it is pretty much the worst-binned Alder Lake. The 12900KS is the best bin and should be the most efficient of them all, but I haven't tested it. According to Igor's Lab, though, it requires 124 mV less than the 12900K for the same clocks, so yeah, that one will knock efficiency out of the park; we are talking about numbers that Zen 5 might not even be able to match.
> 
> Also, the review from TPU measures power from the wall, which is not really indicative. When you are testing such low-wattage parts, a 5 or 10 W discrepancy from the motherboard makes a huge difference. TPU uses the Maximus Hero for the 12400; just the RGB and the actual screen on that motherboard throw the numbers off. You can check Igor's Lab's review, which tests CPU power only; the 12400 is way more efficient than the 5600X.
> 
> ...


There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed 144W whatsoever - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum - but that would be roughly equal across all platforms.

Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is _not_ a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.


It's also kind of telling that you're _very_ set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - _by far_ the most power hungry bin of Zen3, by a _massive_ margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing.  The 5800X consumes notably _more_ power per core in an 8-core load than both of those CPUs, while also clocking _lower_. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?

You're also wrong about your 12400/12900K binning statements - they're not the same die, so they're not comparable bins at all. They're different silicon implementations of the same architecture, and each represents a bin of its implementation. It's entirely possible that the 12400 is a low grade bin of its silicon, but unless you've got detailed clock and power scaling data for several examples of both chips, you can't make comparisons like that.

There's also the complexities of boost algorithms and thermal/power protection systems to take into account, which can throw off simple "more power=faster" assumptions. For example, my 5800X (from testing I did way back when) runs _faster_ in Cinebench when limited to 110W PPT than if let loose at 142W PPT. And significantly so - about 1000 points. Why? I can't say entirely for sure as I have neither the tools, skills nor time to pin-point this, but if I were to guess I'd say it's down to the higher power limit leading to higher boost power, meaning higher thermals, more leakage, and subsequent lower clocks to make up for this and protect the chip. Zen3 has a quite aggressive chip protection system that constantly monitors power, current, voltage, clock frequency, and more, and adjusts it all on the fly, meaning that tuning is complex and non-linear, and highly dependent on cooling.


----------



## fevgatos (Aug 9, 2022)

Valantar said:


> Normalized for what? Your arbitrary power limits? They're running the chips as configured by Intel, allowing it to boost as high as it wants and the workload demands.


Normalized for either consumption or performance. Great for them that they ran as configured by Intel, but that's not my argument at all.



Valantar said:


> But, even assuming that the numbers you're constantly citing out of nowhere are accurate, to repeat myself for the n'th time: this comparison is deeply, deeply flawed. Heck, this is _far_ more problematic than the purportedly "fundamentally flawed" testing you've been complaining about. Why? 'Cause you're comparing one clocked-to-the-rafters, pushed to extremes tuning of one chip, with a heavily power limited, and thus also clock limited, tuning of another. How does a 5800X perform at 65W? How do each of them perform across a range of sensible wattages? How do they perform outside of the one application that you love to cite because it can be run in one click and performs favorably on your platform of choice?


You think a comparison normalized for performance is deeply flawed? I mean, come on, you cannot possibly believe that. I don't believe you believe that. I said it before: normalized for consumption, 8 GC cores are around 20-25% more efficient; normalized for performance, the difference is over 100%. So yeah, the 5800X at 65 W can get up to 13-14k.

Again, performance normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match the performance at 30 W. I mean, 2 days left; when I'm back I can test it.

Outside of that one application, Zen 3 is even more comedically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power limited to 25) match a 5800X hitting 90+ watts in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT.



Valantar said:


> It's also kind of telling that you're _very_ set on being as generous as possible with Intel, repeating that the 12400 is "the worst possible bin" of ADL, etc, yet when coming to AMD, you consistently compare against the 5800X - _by far_ the most power hungry bin of Zen3, by a _massive_ margin. Remember, it has the same power limits as the 5900X and 5950X, with 50% and 100% more cores respectively, while clocking only a bit higher at base. Again: look at Anandtech's per-core power draw testing.  The 5800X consumes notably _more_ power per core in an 8-core load than both of those CPUs, while also clocking _lower_. So, for Intel, you're ever so generous, while for AMD you're consistently comparing against the worst bins. One might almost say that this is, oh, I don't know, a bit biased?


I've no idea what you are talking about. I'm comparing core and power normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.

And yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head.




Valantar said:


> There is something rather iffy going on with those Igor's Lab benchmarks - at least in terms of AMD power consumption. He reports his 5950X consuming 217W, which is ... well, crazy. That's the power consumption of a 5950X tuned to the extreme, with zero regard for efficiency, and it is certainly not reflective of anything resembling stock behaviour. If Igor didn't do that manually, then he should throw his AM4 testing motherboard out that window you're talking about and pick one that isn't garbage. A stock 5950X doesn't exceed 144W whatsoever - though if measuring at the EPS12V cable you'd also need to include VRM conversion losses in that sum - but that would be roughly equal across all platforms.
> 
> Edit: looking at Igor's test setup, the motherboard is configured with "PBO: Auto". In other words, it's running a motherboard-dependent auto OC. That is _not_ a stock-v-stock comparison. And that is some pretty bad test methodology. This essentially ruins any and all efficiency comparisons based on these numbers, as the motherboard is clearly overriding all power limits and pushing the chips far beyond stock power and voltage.


But I'm not using Igor's Lab for efficiency comparisons. I'm using them to show you that a 12900K at 125 W matches / outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that TPU's results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...


----------



## trparky (Aug 9, 2022)

fevgatos said:


> If you still can't admit that TPU's results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...


And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.


----------



## fevgatos (Aug 9, 2022)

trparky said:


> And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.


Go ahead, I hope he replies. I guarantee you 100% the benchmarks are flawed. Could be a BIOS thing or something else, but it's most definitely, without a shadow of a doubt, flawed. I'm not the only one saying it; there is a thread on Tom's Hardware also making fun of those benchmarks, and even in the discussion of that very benchmark there were people doubting the results. That's because they just don't make any sense: the 12600K can't be more efficient than the 12900K at the same wattage, it's hilariously obvious. The flaw is so monumental; imagine if you clocked the 5600X to 125 W and it suddenly matched the 5950X. Well, that's what you are looking at with those numbers...

I've tested three 12900Ks on four motherboards at 125 W; all scored pretty much the same in CB R23, between 23,500 and 24,500. TPU scored 18k, lol.


----------



## Valantar (Aug 9, 2022)

fevgatos said:


> Normalized for either consumption or performance. Great for them that they ran as configured by Intel but that's not my argument at all


I mean, I should just start linking you to previous responses at this point, as everything you bring up was asked and answered four pages ago. "Normalizing" for either of those mainly serves to hide the uneven starting point introduced by said normalization, as each "normalized" operating point represents a different change from the stock behaviour of each chip. In the case of power limits being lowered, this inherently privileges the chip being allowed the biggest reduction from stock, due to how DVFS curves work.
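To illustrate, here's a tiny numerical sketch. The curve (performance scaling as power^0.4) and both stock limits are invented, purely hypothetical numbers, not measurements of any real CPU:

```python
# Illustrative only: performance modeled as perf ~ power^0.4 (sublinear,
# DVFS-like). The exponent and stock power limits are invented.

def perf(power_w, exp=0.4):
    return 1000.0 * power_w ** exp

stock = {"chip A": 241, "chip B": 142}  # hypothetical stock power limits (W)
cap = 65                                # the common "normalized" wattage

for name, stock_w in stock.items():
    kept = perf(cap) / perf(stock_w)                      # fraction of stock perf kept
    eff_gain = (perf(cap) / cap) / (perf(stock_w) / stock_w)  # points-per-watt gain
    print(f"{name}: keeps {kept:.0%} of stock perf, efficiency up {eff_gain:.2f}x")
```

Same curve, same silicon in this sketch - yet the chip cut furthest from stock automatically posts the bigger "efficiency gain" at the cap, which is exactly the uneven starting point the normalization hides.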


fevgatos said:


> You think a comparison normalized for performance is deeply flawed?


Yes, I really do. Outside of purely academic endeavors, who _ever_ tests PC components normalized for performance? I mean, doing so isn't even possible, given how different architectures perform differently in different tasks. If you adjust a 12900K so it perfectly matches an 11900K in Cinebench, then it will still be faster in some workloads, and possibly slower in others. Normalizing for performance for complex components like this is _literally impossible_. Unless, that is, you tune to normalize for performance in every single workload, and then just record the power data? That sounds incredibly convoluted and time-consuming though.


fevgatos said:


> I mean, come on, you cannot possibly believe that. I don't believe you believe that.


Well, too bad. I have explained the issues with this to you at length multiple times now. If you're unable to grasp that these problems are significant, that's your problem, not mine.


fevgatos said:


> I said it before: normalized for consumption, 8 GC cores are around 20-25% more efficient; normalized for performance, the difference is over 100%. So yeah, the 5800X at 65 W can get up to 13-14k.


And, once again: _at what points?_ "Normalized for consumption" - at what wattage? The only such comparison that would make sense would be _a range_, as any single test is nothing more than an unrepresentative snapshot. And any single workload, even across a range, is still only representative of itself. For such a comparison to have any hope whatsoever of being representative, you'd need to test a range of wattages in a range of workloads, and then graph out those differences. Anything less than that is _worthless_. Comparing the 12900K at 65W vs. the 5800X at 65W in CB23 tells us _only_ that exact thing: how each performs at that specific power level in that specific workload. You cannot reliably extrapolate _anything_ from that data - it's just not sufficient for that.
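Sketched in code, that's the kind of range-based comparison that would actually be informative. The scores below are invented placeholders, not measurements; real data would come from runs at each power limit:

```python
# Placeholder data only: each score-at-a-power-limit would come from a real
# benchmark run. The point is the shape of the comparison, not the numbers.
measurements = {
    "chip A": {65: 14000, 90: 17000, 125: 19500, 142: 20500},  # W -> score
    "chip B": {65: 12500, 90: 16500, 125: 20000, 142: 21500},
}

for chip, runs in measurements.items():
    # Efficiency (points per watt) across the whole range, not one snapshot.
    curve = {w: round(score / w, 1) for w, score in sorted(runs.items())}
    print(chip, curve)
```

With data shaped like this, chip A "wins" a 65 W snapshot while chip B "wins" at 142 W - which is precisely why a single operating point can't be extrapolated into a general efficiency verdict.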

As for your "normalizing for performance": once again, you're just trying to use neutral, quasi-scientific wording to hide the fact that you really want to use a benchmark that's relatively friendly to ADL as the be-all, end-all representation of which of these CPUs is better, rather than actually wanting to gain knowledge about this.


fevgatos said:


> Again, performance normalized, the difference will still be huge. You can put the 5800X at 50 W for all I care; 8 GC cores will probably match the performance at 30 W. I mean, 2 days left; when I'm back I can test it


I'm starting to sound like a broken record here, but: ADL has an advantage at lower power limits in less instruction dense workloads due to its lower uncore power draw.


fevgatos said:


> Outside of that one application, Zen 3 is even more comedically bad. I've tested gaming performance (granted, only one game): 8 GC cores at 25 W (yes, power limited to 25) match a 5800X hitting 90+ watts in Far Cry 6. They both scored around 110 fps, if I remember correctly, at 720p ultra + RT


And once again, pulling numbers out of nowhere as if this is even remotely believable. Also, 720p? Wtf? And how oddly, unexpectedly convenient that the one game you're testing is once again a game that's uncharacteristically performant on ADL generally. Hmmmmmm. Almost as if there might be a pattern here?


fevgatos said:


> I've no idea what you are talking about. I'm comparing core and power normalized, so it doesn't matter which Zen SKU the comparisons are done with. The 5950X with one CCD will perform pretty similarly to the 5800X at the same wattages, no? So your criticism is completely unwarranted.


... no. Did you even look at the AT testing? The 5950X, running 8 cores active, on the same CCX (they control for that in testing), in the same workload, at the same power limit as the 5800X, _clocks higher while consuming less power per core_.

It would be really, really helpful if you at least tried to understand what is being said to you. The boost behaviours, binning and DVFS characteristics of these chips are not the same. This is what I was saying about your "arguments" about binning on the 12400: you're infinitely generous with giving Intel the benefit of the doubt, but you consistently pick worst-case scenarios for AMD and show zero such generosity in that direction.


fevgatos said:


> And yes, I've tested a 12900K with only 6 GC cores active at 65 W; it scored way more than the 12400 does, so it's pretty apparent the 12400 is a horrible bin. I think I got a 14k score, but again, I don't remember off the top of my head


And yet more unsourced numbers pulled out of thin air. This is starting to get tiring, you know.


fevgatos said:


> But im not using igorslab for efficiency comparisons.


Uhhhhh... what? This is what you said, in literally your previous post:



fevgatos said:


> *You can check igorslab review which tests only CPU power, the 12400 is way more efficient than the 5600x.*



Could you at least stop flat out lying? That would be nice, thanks.


fevgatos said:


> I'm using them to show you that a 12900K at 125 W matches / outperforms a 5900X even in heavy MT workloads. Which is the exact opposite of what TPU said, where a 12900K at 125 W is matched by the 12600K and loses to a 65 W 12700. If you still can't admit that TPU's results are absolutely, completely, hilariously flawed, I don't know what else to tell you, man...


I _don't know_ that TPU's testing is flawed - but I have explicitly said that this might indeed be the case. Given the number of possible reasons for this, and my complete lack of access to data surrounding their testing other than what's published, I really can't know.  It's absolutely possible that there's something wrong there.

However, you seem to fail to recognize that the Igor's Lab testing seems to be _similarly_ flawed, only in the other direction. As I explained above, it's entirely possible to _harm_ performance on AMD CPUs through giving them too much power, which drives up thermals, drives down clocks, increases leakage, and results in lower overall performance. Given that Igor's testing is with an auto OC applied and the power levels recorded are astronomical, this is very likely the case. So, if I agree to not look at TPU's results, will you agree to not look at Igor's Lab's results? 'Cause for this discussion, both seem to be equally invalid. (And no, you can't take the Igor's Lab Intel results and compare them to Zen3 results from elsewhere, as this introduces massive error potential into the data, as there's no chance of controlling for variables across the tests.)



Oh, and a bit of a side note here: you are constantly switching back and forth between talking about "running the 12900K at X watts" and "8 GC cores at X watts". Are your tests all willy-nilly like this, or are you consistently running with or without E-cores enabled? That represents a pretty significant difference, after all.


----------



## ratirt (Aug 9, 2022)

trparky said:


> And if you think that, I say we get the resident TechPowerUp benchmark king in this thread to give his side of the story. Hey @W1zzard, let's see you give some input on this.


I'd really like to hear from @W1zzard about this entire, "TPU results are absolutely hilariously flawed".


----------



## fevgatos (Aug 9, 2022)

Valantar said:


> Yes, I really do. Outside of purely academic endeavors, who _ever_ tests PC components normalized for performance? I mean, doing so isn't even possible, given how different architectures perform differently in different tasks. If you adjust a 12900K so it perfectly matches an 11900K in Cinebench, then it will still be faster in some workloads, and possibly slower in others. Normalizing for performance for complex components like this is _literally impossible_. Unless, that is, you tune to normalize for performance in every single workload, and then just record the power data? That sounds incredibly convoluted and time-consuming though.


CPUs, probably no one. Other PC hardware, sure; fans and coolers, for example.



Valantar said:


> And, once again: _at what points?_ "Normalized for consumption" - at what wattage? The only such comparison that would make sense would be _a range_, as any single test is nothing more than an unrepresentative snapshot. And any single workload, even across a range, is still only representative of itself. For such a comparison to have any hope whatsoever of being representative, you'd need to test a range of wattages in a range of workloads, and then graph out those differences. Anything less than that is _worthless_. Comparing the 12900K at 65W vs. the 5800X at 65W in CB23 tells us _only_ that exact thing: how each performs at that specific power level in that specific workload. You cannot reliably extrapolate _anything_ from that data - it's just not sufficient for that.


And I will when I'm back. I'll test everything that's there to test, assuming someone with a Zen 3 is willing to participate.


Valantar said:


> I'm starting to sound like a broken record here, but: ADL has an advantage at lower power limits in less instruction dense workloads due to its lower uncore power draw.


Is CBR23 less instruction dense? 


Valantar said:


> And once again, pulling numbers out of nowhere as if this is even remotely believable. Also, 720p? Wtf? And how oddly, unexpectedly convenient that the one game you're testing is once again a game that's uncharacteristically performant on ADL generally. Hmmmmmm. Almost as if there might be a pattern here?


What's not believable? I'll post you the results when I'm back, but I'm not sure what part you don't find believable.


Valantar said:


> Uhhhhh... what? This is what you said, in literally your previous post:
> 
> 
> 
> Could you at least stop flat out lying? That would be nice, thanks.


I'm not lying; those are two different benchmarks from Igor's Lab. The 125 W testing I posted wasn't to show efficiency compared to Zen 3; I posted it to show you that the TPU test at 125 W was flawed. The 12400 testing had the 5600X at stock with PBO off and PPT at 90 W, so yes, that one I used to compare efficiency.


Valantar said:


> I _don't know_ that TPU's testing is flawed - but I have explicitly said that this might indeed be the case. Given the number of possible reasons for this, and my complete lack of access to data surrounding their testing other than what's published, I really can't know.  It's absolutely possible that there's something wrong there.
> 
> However, you seem to fail to recognize that the Igor's Lab testing seems to be _similarly_ flawed, only in the other direction. As I explained above, it's entirely possible to _harm_ performance on AMD CPUs through giving them too much power, which drives up thermals, drives down clocks, increases leakage, and results in lower overall performance. Given that Igor's testing is with an auto OC applied and the power levels recorded are astronomical, this is very likely the case. So, if I agree to not look at TPU's results, will you agree to not look at Igor's Lab's results? 'Cause for this discussion, both seem to be equally invalid. (And no, you can't take the Igor's Lab Intel results and compare them to Zen3 results from elsewhere, as this introduces massive error potential into the data, as there's no chance of controlling for variables across the tests.)
> 
> ...


But as I've said before, I'm not using Igor's Lab results as an efficiency comparison (ok, I used the ones for the 12400 / 5600X, since they seem to be stock).

And yes, all the tests I compared to a 5800X were done with 8 GC cores and E-cores off. You keep saying it's only tests that favor Alder Lake, but it's not like it's my choice. Whenever I ask people to run something with their Zen CPU after they claimed it's more efficient, they basically disappear. If you know someone willing to test with their Zen 3, I'm all up for it.



ratirt said:


> I'd really like to hear from @W1zzard about this entire, "TPU results are absolutely hilariously flawed".


You think it's more likely that the 12600K is more efficient at the same wattage while having fewer P-cores and half the E-cores? Ok man.


----------



## Arc1t3ct (Aug 9, 2022)

fevgatos said:


> CPUs, probably no one. Other PC hardware, sure; fans and coolers, for example.
> 
> 
> And I will when I'm back. I'll test everything that's there to test, assuming someone with a Zen 3 is willing to participate.
> ...



Could you post your best CB23 score? I'd like to compare it against my KS. What mobo and RAM do you use?


----------



## fevgatos (Aug 9, 2022)

Arc1t3ct said:


> Could you post your best CB23 score? I'd like to compare it against my KS. What mobo and ram do you use?


You mean OC'd? Around 29,950; Unify-X and 6000C30 RAM on a U12A cooler.


----------



## chrcoluk (Aug 9, 2022)

Valantar said:


> Upgrading every generation makes no sense anyway - it just makes progress feel slower by chopping it up into tiny bits, while costing tons of money. That's a great PC you've got, and it'll be great for many years still, so no reason to upgrade for a while still.
> 
> Given the increase in base clock it seems efficiency is maintained at least to some degree, though they're definitely pushing these hard. The chips should all do base clock continuously at TDP, which looks decent (from 3.4GHz @ 105W to 4.5GHz @170W), but bumping TDP from 105W to 170W and PPT from 144W to 230W is still quite a lot. PPT/TDC/EDC tuning will likely be even more useful for Zen4 than it is for Zen3 currently, and no doubt there'll be notable gains by setting lower power limits simply as the chips are scaling much higher in power than previously.
> 
> ...


Well, TDP seems to have gone up from 65 W to 105 W for the x600X chip, so that's quite a loss of efficiency, sadly. Although I guess, like you said, if it's a good enough board, one can possibly tune it back down to 65 W consumption.


----------



## Valantar (Aug 9, 2022)

chrcoluk said:


> Well, TDP seems to have gone up from 65 W to 105 W for the x600X chip, so that's quite a loss of efficiency, sadly. Although I guess, like you said, if it's a good enough board, one can possibly tune it back down to 65 W consumption.


It has, but again, the base clock has also increased by a full GHz, so once again it's a bit of a balancing act. Less efficient overall - it's a 27% base clock increase for a 62% TDP increase after all - but a crapton more performance, both peak and sustained (before any architectural improvements). The good thing is, thanks to AMD's opportunistic boost algorithms and low per-core power draws, those boost clocks should survive even at much lower power targets if one wants to tune some. That also makes it quite likely that this - not unlike ADL - will be quite efficient in lightly threaded workloads, as lower Zen3 SKUs are quite held back by their clocks there. Still, it'll be really interesting to see how these things shake out once we have some actual reviews to look at.
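For reference, the percentages above check out against the rumoured figures (5600X: 3.7 GHz base at 65 W TDP; rumoured 7600X: 4.7 GHz base at 105 W):

```python
# Quick arithmetic check of the claimed percentages. Figures are the 5600X's
# published base clock/TDP and the *rumoured* 7600X numbers from the article.
base_old, base_new = 3.7, 4.7   # GHz
tdp_old, tdp_new = 65, 105      # W

clock_gain = base_new / base_old - 1
tdp_gain = tdp_new / tdp_old - 1
print(f"base clock +{clock_gain:.0%}, TDP +{tdp_gain:.0%}")  # prints "base clock +27%, TDP +62%"
```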


----------



## HenrySomeone (Aug 9, 2022)

chrcoluk said:


> Well, TDP seems to have gone up from 65 W to 105 W for the x600X chip, so that's quite a loss of efficiency, sadly. Although I guess, like you said, if it's a good enough board, one can possibly tune it back down to 65 W consumption.


They don't really have any other choice in order to stay at least semi-competitive against i5s (lower ones anyway; the 7600X simply won't come close to the 13600K), considering they will keep the 6/12 configuration for the 5th/6th time in a row, while the big, bad, core-stingy (AMD fanboy favorite title up until recently) Intel will have gone from 4/4 to 14/20 in the same time frame. I bet the red team is moaning over the lost opportunity to go to 12-core chiplets with Zen4 (like the rumors suggested a while ago), but back then they were probably betting on Intel's 10 nm woes continuing at least another year or so, and by the time it became apparent that wouldn't be the case, it was already too late to change the design...


----------



## ratirt (Aug 10, 2022)

fevgatos said:


> You think it's more likely that the 12600k is more efficient at same wattage while having less P coress and half the ecores? Ok man


I really don't care what is more likely, but rather what the results say. If you want to discredit @W1zzard's testing, maybe you should point out where you think the problem lies.
Power limit a 12900K and it will lose performance: 10% down when the power limit is at 175 W. A lot of sites confirmed it. Go lower with the power limit and performance tanks, but efficiency goes up.
Where is the 12600K more efficient? Show me.








Intel Core i5-12600K Review - Winning Price/Performance (www.techpowerup.com):

"The Core i5-12600K is the price/performance king in the Intel Alder Lake lineup. With its competitive pricing of $300, it's a clear winner against AMD's Ryzen 5 5600X and faster than even the 5800X in many applications and games. This is the gaming CPU you want."


----------



## fevgatos (Aug 10, 2022)

ratirt said:


> I really don't care what is more likely, but rather what the results say. If you want to discredit @W1zzard's testing, maybe you should point out where you think the problem lies.
> Power limit a 12900K and it will lose performance: 10% down when the power limit is at 175 W. A lot of sites confirmed it. Go lower with the power limit and performance tanks, but efficiency goes up.
> Where is the 12600K more efficient? Show me.
> 
> ...


Right here



https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/cinebench-multi.png
		


According to the consumption table, the 12900K at 125 W consumes 5 watts more than the 12600K while scoring very similarly. If you don't understand how that's absolutely impossible... It's like boosting the 5600X to 125 W and it suddenly matching the 5950X.

Also, here is TechSpot's review of the 12700, which has fewer cores and a worse bin.



https://static.techspot.com/articles-info/2391/bench/CB23-1.png
		


At 65 W, the 12700 outscores the 12900K at 100 W from TPU's review. Again, if you don't understand why that's absolutely impossible... I don't know how to help.


----------



## ratirt (Aug 10, 2022)

fevgatos said:


> According to the consumption table, the 12900K at 125 W consumes 5 watts more than the 12600K while scoring very similarly. If you don't understand how that's absolutely impossible... It's like boosting the 5600X to 125 W and it suddenly matching the 5950X.


Is the power consumption of the 12600K also set to 125 W max? I guess not. The 12600K without a power limit can draw 220 W; maybe that's why. Also, it is Cinebench R23, and if you tell me your 12900K draws 50 W during gaming, I am literally gonna flip. This is a heavy full-load task and it does require power. When you power limit a 12900K to 125 W, it won't show much change in gaming, but in Cinebench it will show a decrease in performance of around 15% (not sure if 15% is correct, but it is above 10% for sure).


fevgatos said:


> Also, here is TechSpot's review of the 12700, which has fewer cores and a worse bin.
> 
> https://static.techspot.com/articles-info/2391/bench/CB23-1.png
> At 65 W, the 12700 outscores the 12900K at 100 W from TPU's review. Again, if you don't understand why that's absolutely impossible... I don't know how to help.


The 12700 has the same config as the 12900, so I don't know what you are after. Also, these are different setups, and maybe that matters here as well.
What's also worth pointing out: if power limits are lifted, TPU's 12900K test shows around a 28k score, just like TechSpot's.
Maybe the 12900K sample is not so great when tested. It would have been better if TechSpot had tested them both that way. Getting information from one site and another and comparing is kinda sketchy if there are certain limitations, etc.


----------



## fevgatos (Aug 10, 2022)

ratirt said:


> Is the power consumption of the 12600K also set to 125 W max? I guess not. The 12600K without a power limit can draw 220 W; maybe that's why. Also, it is Cinebench R23, and if you tell me your 12900K draws 50 W during gaming, I am literally gonna flip. This is a heavy full-load task and it does require power. When you power limit a 12900K to 125 W, it won't show much change in gaming, but in Cinebench it will show a decrease in performance of around 15% (not sure if 15% is correct, but it is above 10% for sure)


Man, are you for real? There are power consumption metrics in the review; yes, the 12600K consumes 5 W less than the 12900K at 125 W while it scores the same. Which, as I've repeated multiple times, is impossible.



ratirt said:


> The 12700 has the same config as the 12900, so I don't know what you are after. Also, these are different setups, and maybe that matters here as well.
> What's also worth pointing out: if power limits are lifted, TPU's 12900K test shows around a 28k score, just like TechSpot's.
> Maybe the 12900K sample is not so great when tested. It would have been better if TechSpot had tested them both that way. Getting information from one site and another and comparing is kinda sketchy if there are certain limitations, etc.


No, the 12700 isn't the same configuration. It has half the E-cores, yet at 65 W it outperforms the 12900K at 100 W, which, again, is absolutely impossible.

There is nothing sketchy about comparing across reviews; CB R23 is a repeatable workload, and when tested at similar power limits, the CPUs should score the same. And I know because I've tested: four motherboards and three CPUs, all scored 23,500 to 24,500 at 125 watts.

Ask anyone with a 12900K to test stock with a 125 W power limit; they will all verify what I'm telling you. They'll score over 23k points.

I googled some reviews testing at 125 W for you. They all verify what I'm saying; TPU's review is absolutely wrong. Here you go, 125 W = 23,500 score:









Intel Core i9-12900K at 125W | Club386 (www.club386.com):

"Little performance is lost when dropping power substantially."


----------



## ratirt (Aug 10, 2022)

fevgatos said:


> Man, are you for real? There are power consumption metrics in the review; yes, the 12600K consumes 5 W less than the 12900K at 125 W while it scores the same. Which, as I've repeated multiple times, is impossible


I don't see the power limit for the 12600K set to 125 W. I see scores and power limits for the 12900K only. Which means the 12600K is at stock, and if that is the case, then it can draw 220 W.


https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/cinebench-multi.png
		

That is the one you brought. Show me where the 12600K's power consumption is limited to 125 W, because I literally don't see it, FOR REAL.
Don't hesitate with examples from the graphs. Maybe it will be easier to understand what you mean or talk about.


fevgatos said:


> No 12700 isnt the same configuration. It has half the ecores yet at 65w it outperforms the 12900k at 100w,which, again, is absolutely impossible.
> 
> There is nothing sketchy about comparing across reviews, cbr23 is a repeatable workload and when tested at similar power limits the cpus should score the same. And i know cause ive tested, 4 motherboards and 3 cpus, all scored 23500 to 24500 at 125watts.
> 
> Ask anyone with a 12900k to test stock with 125w power limit, they will all verify what im telling you. They'll score over 23k points


True, it has 4 E-cores fewer. Think about it if it isn't: same power limit for both, and one has more "mouths" to feed. Also, base and boost clocks are different, which means the power required to sustain them is different. It is just a guess here, but still possible.
Different boards and drivers used equals different power draw?
You will need to ask Wizz about the testing criteria, not me, or compare everything, not only those things you disagree with.


----------



## fevgatos (Aug 10, 2022)

ratirt said:


> I don't see the power limit for the 12600K set to 125 W. I see scores and power limits for the 12900K only. Which means the 12600K is at stock, and if that is the case, then it can draw 220 W.
> 
> 
> https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/cinebench-multi.png
> ...


Here you go, the power consumption from TPU's review. The 12900K at 125 W consumes 5 W more than the 12600K:



https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/power-multithread.png


----------



## Prima.Vera (Aug 10, 2022)

TheLostSwede said:


> Ah, forgot to add Japan to that list. For some reason, all non-Japanese products seem to be stupidly overpriced, and many Japanese products are also stupidly overpriced there.
> Can't see any pricing for that from here though.
> Time to come visit isla formosa...
> Basic 4800 MHz modules have been on sale here for as little as US$67 for 2x 8GB.
> ...


Funny thing is, and it's also ridiculous: I can buy the same product from Amazon.com, pay for shipping, and it will still be 50% cheaper than the same one on Amazon.co.jp with free shipping...


----------



## fevgatos (Aug 10, 2022)

ratirt said:


> True, it has 4 E-cores fewer. Think about it if it isn't: same power limit for both, and one has more "mouths" to feed. Also, base and boost clocks are different, which means the power required to sustain them is different. It is just a guess here, but still possible.
> Different boards and drivers used equals different power draw?
> You will need to ask Wizz about the testing criteria, not me, or compare everything, not only those things you disagree with.


That's not how it works. Saying it has fewer mouths to feed is a complete non-argument. Do you think the 5600x will outscore the 5950x at 125w in CBR23? Of course not. More cores means each core works at better efficiency, because it doesn't boost as high.

Also, check my previous post: I linked you a review of the 12900k with a 125w power limit, and it shows exactly what I'm saying; it scored 23,500 at 125w. TPU scores 18k. That's just absurd.
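For what it's worth, the disputed gap reduces to simple points-per-watt arithmetic. A minimal sketch, using only the scores quoted in this thread (not independently verified):

```python
# Illustrative arithmetic only: the scores and the 125 W limit are the
# figures quoted in this thread, not independently measured data.
def points_per_watt(score: float, watts: float) -> float:
    """Cinebench R23 multi-core points per sustained package watt."""
    return score / watts

tpu_125w = points_per_watt(18_000, 125)      # TPU's reported result
claimed_125w = points_per_watt(23_500, 125)  # the result claimed in this thread

# The disputed gap is ~31%: same power limit, very different efficiency.
gap = claimed_125w / tpu_125w - 1
print(f"TPU: {tpu_125w:.0f} pts/W, claimed: {claimed_125w:.0f} pts/W, gap: {gap:.0%}")
```

At an identical 125 W limit, the two reported scores imply roughly a 31% efficiency gap, which is the crux of the disagreement.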


----------



## HenrySomeone (Aug 10, 2022)

Man, AMDumbs are really something to behold! No matter how much concrete evidence you lay on the table before them, they don't move a single inch in their beliefs! This behavior actually has all the characteristics of a religious cult, a really hardcore one...


----------



## ratirt (Aug 10, 2022)

fevgatos said:


> Here you go, the power consumption from TPUs review. The 12900k at 125w consumes 5w more than the 12600k
> 
> 
> 
> https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/power-multithread.png


Yeah, and you don't know whether the 12600K is limited or not, so what is your point? Apparently it is not limited.


fevgatos said:


> That's not how it works. Saying it has less mouths to feed is completely a non argument. Do you think the 5600x will outscore the 5950x at 125w in CBR23? Of course not. More cores means each core will work at better efficiency cause it doesnt boost as high.
> 
> Also check my previous post, I linked you a review of the 12900k with 125w power limit and it shows exactly what im saying, he scored 23500 at 125w. TPU scores 18k. That's just absurd


Maybe there was a problem with the motherboard's drivers or the Windows scheduler at the time of the review, which is why the score doesn't align with other sites or differs a lot?
W1zz would have to clarify that; I'm only looking for some sort of explanation.


----------



## fevgatos (Aug 10, 2022)

ratirt said:


> Yeah and you dont know if the 12600K is limited or not so what is your point? Apparently it is not limited.
> 
> Maybe there's a problem with the motherboards driver or windows scheduler at the point of the review, that is why the score does not align with other sites or differ a lot?
> Wizz has to clarify that I'm only looking for some sort of explanation.


What difference does it make if it's limited or not? Like, wtf... you can't be serious. It draws the same amount of power as the 12900k at 125w and it performs similarly, which makes NO sense. Whether it's limited or not is completely irrelevant.


----------



## ratirt (Aug 11, 2022)

fevgatos said:


> What difference does it make if its limited or not? LIke wtf...you can't be serious,  It draws the same amount of power as the 12900k at 125w and it performs similar, which makes NO sense. Whether its limited or not is completely irrelevant


Everybody knows what the difference is. It's literally like talking to a chimp. Read what I said and answer the question: where do you see, in the chart you sent me, that the 12600K is limited to 125w, like you mentioned? I think it is a simple question.


----------



## fevgatos (Aug 11, 2022)

ratirt said:


> Everybody know what the difference is. Literally like talking to a chimp. Read what I said and answer the question. Where do you see, in the chart you sent me, the 12600K is limited to a 125w like you have mentioned. I think it is a simple question.





https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/power-multithread.png


I already sent you the link above. It's the power draw numbers in CBR23. Can't you see that the 12600k consumes as much as the 12900k at 125w?


----------



## ratirt (Aug 11, 2022)

fevgatos said:


> https://tpucdn.com/review/intel-core-i9-12900k-alder-lake-tested-at-various-power-limits/images/power-multithread.png
> 
> 
> 
> I already sent you the above link. Its the power draw numbers in cbr23. Cant you see that the 12600k consumes as much as the 12900k at 125w???


OK, so you say it draws 125w, just like the 12900K, which is limited to 125w. The 12600K is not limited and can draw 150w, btw.
Is your problem that the 12900K underperforms in CB23 with the power limit set to 125w, or what is the problem here?


----------



## Mussels (Aug 11, 2022)

fevgatos said:


> Whenever I ask people to run something with their Zen CPU after they claimed it's more efficient, they basically disappear. If you know someone willing to test with his Zen 3, I'm all up for it.



You mean me?
The one who keeps pointing out result after result showing that you're making things up?
You create weird, convoluted scenarios for your preferred setup, then ignore all information that disagrees.

I mean, heck, this post alone says it. I've posted dozens of review images and quotes at you, but nope - I just disappear (from your memory, as you blank out anything that doesn't agree with you).


----------



## fevgatos (Aug 11, 2022)

ratirt said:


> OK so you say it draws 125w just like the 12900K which is limited to 125w. The 12600K is not limited and can draw 150w btw.
> Your problem is the 12900K under perform in CB23 with the power limit set to 125w? or what is the problem here?


Dude, are you daft? In that specific test it draws the exact same wattage as the 12900k. So, at the same wattage, the 12900k should slam the 12600k in CBR23. But it didn't. Therefore the test is laughably wrong.

And yes, the 12900k underperforms in every power-limited test, not just the 125w one. The 100, 75 and 50w results are also hilariously wrong.



Mussels said:


> You mean me?
> Who keeps pointing out result after result showing that you're making things up?
> You create weird convoluted scenarios for your preferred setup, then ignore all information that disagrees.
> 
> I mean heck this post alone says it, i've posted dozens of reviews images and quotes at you but nope - i just disappear (from your memory, as you blank out anything that doesnt agree with you)


No, I wasn't talking about you; I don't even know who you are.

You can't have posted a result that proves me wrong, simply because I'm not wrong. I have the CPU. Heck, I tested 3 of them on 4 mobos and they all got the same results. Also, every other review out there agrees with me (TechSpot, Igor's Lab, Club386). So whatever you think you posted that proves me wrong never happened, I'm afraid.


----------



## Mussels (Aug 11, 2022)

fevgatos said:


> No I wasn't talking about you, i dont even know who you are.


I mean, I'm pretty sure the infractions from the last few times you've done this should stick out in your memory, but I'm not surprised that denial is your survival strategy either.


----------



## fevgatos (Aug 11, 2022)

Mussels said:


> I mean, i'm pretty sure the infractions from the last few times you've done this should stick out in your memory - but i'm not surprised denial is your survival strategy either


Giving me infractions doesn't make you right; go ahead and tell me where I'm wrong. I'm sorry, but TPU's review is obviously, horribly wrong, and whoever claims otherwise is in absolute denial. You don't even have to compare it with another review to realize it's wrong. The 12600k matching the 12900k at the same wattage is an obvious red flag that something is completely messed up.

Anyway, I've already posted 3 more reviews that show the same thing (Igor's Lab, TechSpot and Club386), so whatever you are claiming here (which you haven't made clear) is absolutely wrong as well.

I just checked your post history. Wtf are you even talking about? You just said that my test setup is flawed and not TPU's, and then you left the conversation. So what links and proofs are you talking about, lol.


----------



## Valantar (Aug 11, 2022)

fevgatos said:


> CPUs, probably no one. Other PC hardware, sure; fans and coolers, for example


.... sooooooo maybe that should tell you that simple tests work for simple products with few variables, while more complex products with more variables might need more complex testing? Just a thought.


fevgatos said:


> And I will when I'm back. I'll test everything that's there to test, assuming someone with a zen 3 is willing to participate.


Looking forward to seeing your results.


fevgatos said:


> Is CBR23 less instruction dense?


Yes. It's not super light, but it's not a particularly heavy workload. There's a reason why nobody uses CB as a measure for ST power draw.


fevgatos said:


> What's not believable? Ill post you the results when I'm back, but im not sure what part you don't find believable.


The part that's not believable is the sheer number of ... well, _numbers_ you keep pulling out of thin air with zero corroboration, whether from documenting your own testing or from other sources. You keep making statements that break significantly with results from other well established and trusted sources, which puts the onus on you to corroborate your data. Instead, you keep making unsubstantiated claims.


fevgatos said:


> Im not lying, those are 2 different benchmarks from igorslab. The testing I posted at 125w wasn't to show efficiency compared to zen 3, I posted to show you that the TPU test at 125w was flawed. The 12400 testing had the 5600x at stock with PBO off, PPT power at 90w, so yes that one I used to compare efficiency.


Ah, so you weren't lying, but you were lying? Got it. Cool.

Also: you're wrong. Igor tests AM4 systems with PBO auto, including their 12400 "workstation" (CPU-focused loads) review. Unless you're looking at the gaming review, which literally doesn't have any CPU-based power testing, only game testing? I mean ... I shouldn't have to tell you that to test CPU power consumption, you need some kind of controllable load, and that games are _not_ this whatsoever. If you're looking at CPU efficiency, you need to run CPU tests to do so. That doesn't render the gaming tests invalid, but they have too many variables to pinpoint the exact reasons for the specific power consumption - is the workload CPU or GPU bound, is there a GPU driver issue loading or keeping the CPU idle, or other driver overhead that differs between CPU architectures, does the game behave differently on AMD or Intel CPUs, does the game run at a higher FPS on one, requiring more CPU power to keep up, etc. You can't control for this in a game - there are too many variables - which means you can't actually test for anything resembling CPU architectural efficiency in games.


fevgatos said:


> And yes all the tests that I compared to a 5800x were done with 8gc cores - ecores off. You keep saying it's only tests that favor alderlake, but it's not like it's my choice. Whenever I ask people to run something with their zen CPU after the claimed it's more efficient they basically dissapear. If you know someone willing to test with his zen 3, im all up for it.


Really? As you have said time and time again: there are tons of benchmarks out there. So far I've only seen Cinebench from you? You seem to have the time and resources to do at least some benchmarking, so I'd recommend diversifying that workload a bit.


----------



## fevgatos (Aug 11, 2022)

Valantar said:


> .... sooooooo maybe that should tell you that simple tests work for simple products with few variables, while more complex products with more variables might need more complex testing? Just a thought.
> 
> Looking forward to seeing your results.
> 
> ...


What numbers are you talking about? I've literally no idea. The CBR23 numbers were already corroborated by the review I linked from Club386; they hit the exact numbers I said I'm getting (23,500 to 24,500).

I mean, at this point I'm not sure: are we past the point of debating whether TPU's review is wrong? Because it absolutely is, and if we can't even agree on that, then this is all pointless.

Yes, Igor's tested with PBO auto, but in the case of the 5600x it reported a total package power of 90w, which, if I'm not mistaken, is the default PPT of that CPU, right? So it was basically running stock, afaik. So the efficiency comparison between these 2 CPUs (12400 and 5600x) was valid (assuming, again, that the default PPT of the 5600x is 90w). The rest weren't; that's why I didn't use them as a comparison point, a thing you have now accused me of 3 times, claiming I'm lying when I'm absolutely not.


----------



## ratirt (Aug 11, 2022)

fevgatos said:


> Dude, are you daft? In that specific test it draws the exact same wattage as the 12900k. So.. At the same wattage the 12900k should slam the 12600k in cbr23. But it didnt. Therefore the test is laughably wrong
> 
> And yes, the 12900k underperforms in every power limited test, not just in the 125w. The 100, 75 and 50 are also hilariously wrong.


Stop insulting people with your attitude, because I have literally had it with your fanboyish craze on this forum. It has been said: confront this with W1zz and ask him about the specifics of the test or review. Maybe there are details you don't know about. Pinpoint your findings, go there and fight for the righteous cause.
If you still wish to keep talking about how great the 12900K and other Intel products are, or specifically about the issues you have with W1zzard's review, make a separate thread and stop flooding this AMD 7000-series CPU thread with your Intel stuff over and over, since it is getting really annoying. You are in literally every AMD CPU-related thread talking about how great Intel 12th gen is.


----------



## fevgatos (Aug 11, 2022)

ratirt said:


> Stop insulting people with your attitude cause I have had your fanboish craze on this forum literally. It has been said confront this with W1zz and ask him about the specifics of the test or review. Maybe there are details you dont know about. Pinpoint your findings and go there and fight for the righteous cause.
> If you still wish to keep talking about how great 12900K and other Intel products are, or maybe specifically about the issues you have with W1zzard's review make a separate thread and stop flooding this, AMD 7000 series CPUs with your Intel stuff over and over since this is getting really annoying. You are in literally every AMD CPU related thread talking about how great Intel 12th gen is.


If you stop replying, I'll stop as well.
When I have 3-4 people quoting me, you don't expect me to reply?


----------



## Valantar (Aug 11, 2022)

fevgatos said:


> What numbers are you talking about ive literally no idea. The cbr23 numbers already were corroborated by a review i linked from club365, they hit the exact number i said im getting (23500 to 24500).


a) you keep saying things like


fevgatos said:


> A 16p core intel cpu at 130W would vastly outperform the 12900k at 240w and the 5950x at its current 125w limit. So power and heat aint an issue at all, its die space. Even at 240w a 16p core would be way easier to cool than the 12900k, and it would score over 36k in cbr23.





fevgatos said:


> Personally I tested 3 12900k at 4 different mobos and they all came back with the same results, 23500 to 24500 at 125w. Nowhere near close to TPUs numbers.





fevgatos said:


> You mean oced? Around 29950, unify X and 6000c30 ram on a u12a cooler





fevgatos said:


> And i know cause ive tested, 4 motherboards and 3 cpus, all scored 23500 to 24500 at 125watts.
> 
> Ask anyone with a 12900k to test stock with 125w power limit, they will all verify what im telling you. They'll score over 23k points


Without providing any sources or corroborating data for those statements. Though you've improved slightly in this regard since being called out on it, you still keep doing the same thing. How hard is it to select the text of your data, press ctrl+k, and paste in the link from the site you're referring to? This is just lazy.
b) I'm a tad dubious about those Club386 numbers simply because of the test setup: they don't differentiate between PL1 and PL2, instead just talking about "TDP" across all platforms, which makes it impossible to know what the actual settings are. That doesn't mean their numbers are wrong, it just means I don't know the power level they were actually achieved at.

There's also the issue of your consistent denial towards there being _any_ kind of issue with extrapolating MSDT GC performance into your theoretical 16c CPU, despite the indisputable fact that this would necessitate major reconfigurations of its internal fabric and other aspects crucial to performance.


fevgatos said:


> I mean at this point im not sure, are we past the point of debating whether tpus review is wrong? Cause it absolutely is and if we can't even agree with that then this is all pointless.


I've accepted long ago - explicitly, in posts directed at you - that there might be something wrong with that testing. The problem is, until we know what went wrong and how, and have equally stringently tested data saying otherwise, we're none the wiser. And sadly the reviews you've linked showing different results are much less thorough, or have issues of their own - the Club386 thing above, Igor's Lab running AM4 with an auto OC. The Techspot 12700 review has the same issue as Club386 - I love that they test at both "unlimited" (their board's stock behaviour) and "65" settings, but ... what are those 65W? PL1? PL2? What's Tau set to? Without detailing this, the data becomes a lot more murky.

Test methodology is crucial, and presenting that methodology clearly and with the necessary level of detail can make or break the quality of the conclusions derived from testing. There's a reason why everyone isn't a hardware reviewer: it's difficult, and takes a lot of care and attention, as well as the development of a mode of presentation that maintains this data while still making it readable and understandable to the audience. That's _hard_.
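To illustrate why an unlabeled "65W" or "TDP" figure is ambiguous, here is a toy model of the PL1/PL2/Tau interaction (assumption: the firmware tracks an exponentially weighted moving average of package power; all constants below are illustrative, not measured from any board):

```python
# Simplified sketch of Intel's PL1/PL2/Tau behaviour. This is a toy model:
# the real algorithm is firmware- and board-dependent.
def simulate(pl1: float, pl2: float, tau: float, seconds: int):
    """Yield the package power each second under a constant all-core load."""
    ewma = 0.0
    for _ in range(seconds):
        # Draw PL2 while the moving average is under PL1, else drop to PL1.
        power = pl2 if ewma < pl1 else pl1
        ewma += (power - ewma) / tau  # EWMA with time constant tau
        yield power

trace = list(simulate(pl1=125, pl2=241, tau=56, seconds=300))
# Early samples run at PL2; once the average catches up, power settles at PL1.
print(trace[0], trace[-1])
```

In this model the chip sustains PL2 until the moving average catches up with PL1 (on the order of Tau seconds), then settles at PL1; so a benchmark shorter than Tau effectively measures PL2, and without knowing which limit a review's "TDP" sets, the numbers can't be compared.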


fevgatos said:


> Yes igors tested with pbo auto but in the case of the 5600x it reported a total package power of 90w, which if im not mistaken is the default PPT of the cpu, right? So it was basically running stock afaik.


No. PBO boosts all cores higher than stock, pushing voltages and thus core power higher than stock. I've shown above from my own testing how letting Zen3 boost too aggressively will _hurt_ performance, tanking efficiency. Why? Because it doesn't scale higher in terms of clocks, but in terms of thermals it can take more power if you'll let it - you just get nothing in return. That's why you don't test with an auto OC mode enabled.
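The boost-versus-efficiency trade-off described here can be sketched numerically. The V/f curve and constants below are hypothetical, chosen only to show the shape of the effect, not to model any specific CPU:

```python
# Toy model of why aggressive boost hurts efficiency (assumption: dynamic
# power scales roughly with f * V^2, and voltage rises roughly linearly
# with frequency near the top of the V/f curve; numbers are illustrative).
def power(freq_ghz: float) -> float:
    volts = 0.7 + 0.15 * freq_ghz          # hypothetical V/f curve
    return 10.0 * freq_ghz * volts ** 2    # dynamic power, arbitrary scale

def perf_per_watt(freq_ghz: float) -> float:
    # Performance scales ~linearly with clock; efficiency = perf / power.
    return freq_ghz / power(freq_ghz)

# Efficiency falls monotonically as the clock (and voltage) climbs.
for f in (3.5, 4.0, 4.5, 5.0):
    print(f"{f} GHz: {perf_per_watt(f):.4f} perf/W")
```

Because dynamic power grows roughly with f times V squared while performance grows only with f, perf-per-watt falls as an auto-boost mode pushes frequency and voltage up, which is why aggressive boosting can tank efficiency for little or no performance gain.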

As for lying: you literally said "I don't use Igor's Lab for efficiency comparisons", then admitted in the next sentence that you had indeed done so. And this is the core of the problem here: you don't consider your words before posting. Heck, I'm not necessarily sure that you're entirely wrong about things here, but you seem fundamentally unable to present things in a reasonable, measured, well-supported way, instead blurting out grandiose statements that you then have to walk back when challenged. Your way of discussing forces people to constantly be correcting you or desperately try to shove some nuance back into your simplifications, which is why we're in this quasi-hostile mode of discussing in the first place. Things would get a lot, lot better if you took some more care with how you phrased things, thought things through a bit more, corroborated your statements with data or sources consistently, and made nuanced arguments rather than black-and-white statements.


----------



## fevgatos (Aug 11, 2022)

Valantar said:


> a) you keep saying things like
> 
> 
> 
> ...


Techspot used the RM1 cooler for the 65w result. I'm not sure exactly what your issue is with that one; to me it's obvious they set PL2 to 65w. It wouldn't make sense any other way since, the way they phrased it in the review, it would be idiotic to have PL1 at 65 and PL2 unlimited. Plus the score would have been higher if that were the case, as demonstrated by their power-unlimited test.

I don't see anything wrong with Igor's Lab review either. Yes, the Zen chips were run with PBO, but that's irrelevant; what matters is the 12900k's performance at 125w. Since in the Blender test it matches a PBO'd 5900x and slaps the 12600k, there is no way it gets matched by the 12600k in CBR23.

Club386 used XTU to set the power limit; they even have a picture of their settings on the first page. And since the numbers perfectly match the ones I observed with 3 CPUs tested with 4 different motherboards, I have no reason to doubt them. The CPU runs at 4.3 GHz on the P-cores.



https://www.club386.com/wp-content/uploads/2021/11/Power-Limiting1-1068x530.jpg


Today I'm back; I can show you the 29,900 score and all that, but I find it irrelevant because, how can I show you I'm running a U12A? I might as well have a custom loop for all you know.

Regarding what went wrong at TPU, a fixed voltage is by far the most likely explanation. MSI boards are quite renowned (mine included) for doing shenanigans, although he uses an ASUS Hero, and as far as my experience with the ASUS Apex goes, it wasn't doing any weird stuff when power-limited. But who knows, maybe the Hero does.

What I find really weird is how he himself wasn't puzzled by the results. That's by far my biggest surprise.


----------



## ratirt (Aug 11, 2022)

fevgatos said:


> When i have 3 4 people quoting me, you dont expect me to reply?


I expect you to stop flooding AMD threads with your fanboyish Intel stuff, and believe me, no one will bother you and quote your comments if these aren't there.
Respect the subject of a conversation in a thread. Fairly simple.


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> Go ahead, I hope he replies. I guarantee you 100% the benchmarks are flawed. Could be a bios thingy or something else, but its most definitely without a shadow of a doubt flawed. Im not the only one saying it, there is a thread on tom's hardware also making fun of that benchmarks, and even in the discussion of that very benchmark there were people doubting the results. That's cause they just don't make any sense, the 12600k cant be more efficient than the 12900k at same wattage, it's hillariously obvious. The flaw is so monumental, imagine if you clock the 5600x to 125w and suddenly it matches the 5950x. Well thats what you are looking at with those numbers...
> 
> Ive tested 3 12900k on 4 motherboards at 125w, all scored pretty much the same in CBR23, between 23500 and 24500. TPU scored 18k, lol


Okay, I just found this review on Tom's Hardware. Take a look at the "Intel Alder Lake Core i9-12900K and i5-12600K Power Consumption, Efficiency, and Thermals" section. It shows the 12600k being more efficient than the 12900k in both the DDR4 and DDR5 configurations.

That's in "Handbrake power efficiency - *x264* renders per hour", as well as in "Handbrake power efficiency - *x265* renders per hour".

Here it is in the "blender bmw27 power efficiency" chart. Compare the y-axis height of these pairs: (12900k vs 5950x), (12700k vs 5900x), (12600k vs 5800x). Only the 12600k beats the 5800x in efficiency; the other pairs lose by a big margin. This shows both the strength of the 12600k (a sweet spot with P-cores) and the weakness of the 5800x (poor energy efficiency compared to the rest of the Zen 3 lineup).

The same goes for the "blender koro power efficiency" chart. Pretty much the same pattern continues as in the previous example. Here the 12600k does consume a little less power at the expense of being a little slower than the 5800x, which puts their efficiency at about the same level, or slightly in the 12600k's favour.

Needless to say, in those first 2 examples the 5950x (AMD's most efficient Zen 3 processor) compared to the 12900k (one of Intel's least efficient 12th-gen processors) comes out with 50% better efficiency.
And this is, as you say, in Tom's Hardware's own benchmark, which supposedly makes fun of TPU's benchmark results.

I'd advise you to put your money where your mouth is. Look at the benchmarks of the site you were citing before making such claims. This site (Tom's review) gave exactly the opposite results to what you said it did.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> Okay, I just found this review on Tom's Hardware. Take a look at "Intel Alder Lake Core i9-12900K and i5-12600K Power Consumption, Efficiency, and Thermals" section. It shows 12600k being more efficient than 12900k at both DDR4 and DDR5 configurations.
> 
> That's in "Handbrake power efficiency - *x264* renders per hour"
> 
> ...


If you think the graphs you just posted disagree with what I'm saying, then you either don't understand the graphs or don't understand what I'm saying.

I never - ever - ever - ever - EVER - EVER EVER suggested that the 12900k at stock is more efficient than the 12600k or the 5950x in MT workloads. Ever. Never ever. Quite the contrary, actually: I've said multiple times that at stock power limits it's extremely inefficient in these MT workloads. If you understood something different, then the problem lies with you.


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> If you think the graphs you just posted disagree with what I'm saying, then you either don't understand the graphs or don't understand what I'm saying.
> 
> I never - ever - ever - ever - EVER - EVER EVER suggested that the 12900k at stock is more efficient than the 12600k or the 5950x in MT workloads. Ever. Never ever. Quite the contrary, actually: I've said multiple times that at stock power limits it's extremely inefficient in these MT workloads. If you understood something different, then the problem lies with you.


I think this is what you said:


Valantar said:


> ... Except Zen3 cores peak around 20W, while ADL P cores can draw 2-3x that much. More efficient at lower clocks? Depends on the workload. More efficient at stock? Not even close in any CPU heavy task. They do run very well in games though, with most of those being variable, low threaded workloads that let the CPU boost high to race to finish each frame's compute cycle, which suits ADL's high clocks and good IPC nicely. But, crucially, you can't reliably measure a CPUs efficiency in something that isn't a cpu-intensive task. And for anything CPU-intensive, both Zen3 and E cores are vastly more efficient at anything resembling stock power levels.





fevgatos said:


> *More efficient at everything*. What he is saying is that intel cant fit 16p cores cause of power draw which is absurd, cause we already know a p core outperforms a zen 3 core at same wattage. Therefore a 16p core intel would outperform the 5950x for example at same or lower wattage


It is pretty clearly not, as it is shown by both the TPU's and Tom's benchmarks.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> I think this is what you said:
> 
> 
> It is pretty clearly not, as it is shown by both the TPU's and Tom's benchmarks.


Do you understand what "same wattage" means? Of course, when you test one CPU at 500 watts and the other at 50w, the second one will be more efficient. Do you want me to bold the "same wattage" part, or can you read it? Also, I'm talking about 8 GC cores vs 8 Zen 3 cores.


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> Do you understand what "same wattage" means? Of course when you test one CPU at 500watts and the other one at 50w the second one will be more efficient. Do you want me to bold the same wattage part or can you read it? Also - im talking about 8GC cores vs 8 zen 3 cores.


So now it's "*More efficient at everything*." at some wattage? Some arbitrary wattage? Any wattage? No wattage?


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> So now it's "*More efficient at everything*." at some wattage? Some arbitrary wattage? Any wattage? No wattage?


What do you mean, "so now"? That's what I've been saying from the first post. And yes, at any wattage, from 10w all the way up to 300 watts.


----------



## ratirt (Aug 11, 2022)

fevgatos said:


> What do you mean "so now". That's what im saying from the frst post. And yes, at any wattage, from 10w all the way up to 300 watts


Yes, you have been saying that from the first post, and it is still "so now", since you talk about efficiency at whatever wattage best suits the comparison you are making, not at the CPU's advertised performance and wattage.
Either way, case closed. If you really want to talk about 12th-gen Intel, please make your own thread.

Either way,
I think the AMD 7000 series will top the charts with that 5.7 GHz frequency, if true. That is a substantial bump.


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> What do you mean "so now". That's what im saying from the frst post. And yes, at any wattage, from 10w all the way up to 300 watts


But that's exactly the thing most people (now me too) have been trying to show you. It is just *not true*, and that's backed up by the benchmarks from TPU and the one you mentioned on Tom's Hardware.
It's not more efficient at any wattage.
It's mostly less efficient at every wattage; only in some biased scenarios (specifically, Intel's Alder Lake undervolted and underclocked vs stock Zen 3 with PBO on) is Alder Lake more efficient than Zen 3.
Then again, that's not really a fair comparison, as Zen 3 can also be undervolted and underclocked, which still results in Alder Lake being less efficient, consistent with the stock values as well.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> But that's exactly the thing most people (now me too) have been trying to show you. That is just *not true*. And it's backed up by benchmarks from TPU and the one you mentioned on Tom's Hardware.
> It's not more efficient at any wattage.
> It's mostly less efficient at any wattage, and in some biased scenarios specifically Intel's Alder Lake undervolted and underclocked vs stock zen3 with PBO on, then intel's AL is more efficient than zen 3.
> Then again, that's not really a fair comparison, as zen 3 can also be undervolted and underclocked, which still results in AL being less efficient, which corresponds to the stock values as well.


I think you don't understand jack. I invited you to the other thread with results posted


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> I think you don't understand jack. I invited you to the other thread with results posted


I own neither an Alder Lake nor a Zen 3 CPU in my home rig, but rather an old 2600k, which I am looking to replace with either Intel's 13th gen or Zen 4, depending on how they fare in the reviews.
I have an Intel Core i9-9980HK in my work MacBook and an Intel Core i7-12700H in my work Dell XPS laptop.
One is running macOS and the other Ubuntu Linux, so sadly I can't post any scores. These are also laptop CPUs, and as such not comparable to their desktop counterparts.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> I own neither an Alder Lake nor a Zen 3 CPU in my home rig, but rather an old 2600k. I have an Intel Core i9-9980HK in my work MacBook and an Intel Core i7-12700H in my work Dell XPS laptop.
> One is running macOS and the other Ubuntu Linux, so sadly I can't post any scores. These are also laptop CPUs, and as such not comparable to their desktop counterparts.


Well, you don't have to post any numbers; you can just check what 8 GC cores do against 8 Zen 3 cores at the same wattage. And it's not pretty.


----------



## Arc1t3ct (Aug 11, 2022)

fevgatos said:


> Well you don't have to post any numbers, you can just check what 8 GC cores do against 8 Zen 3 cores in same wattage. And it's not pretty



Could you please post a link to that thread? I'm very interested in your findings


----------



## fevgatos (Aug 11, 2022)

Arc1t3ct said:


> Could you please post a link to that thread? I'm very interested in your findings


There are no findings; nobody with 8 Zen 3 cores has posted any results yet. And I don't think they will; as I said a couple of pages ago, whenever I ask someone, they just disappear. But here's the link. 8 GC cores at 65 watts score 16,500 to 17,200.









Cinebench R23 efficiency race (www.techpowerup.com)

Everyone with every CPU and architecture is welcome to join in our Cinebench R23 efficiency race! We have two categories: 6/12 cores up to 50 W and 8/16 cores up to 65 W. Disabling of cores is allowed. The only requirement is a screenshot with BenchMate, and it is recommended to share a link to the result...


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> Man, are you for real? There are power consumption metrics in the review; yes, the 12600K consumes 5 W less than the 12900K at 125 W while it scores the same. Which, as I've repeated multiple times, is impossible.
> 
> 
> No, the 12700 isn't the same configuration. It has half the E-cores, yet at 65 W it outperforms the 12900K at 100 W, which, again, is absolutely impossible.
> ...


On this link https://www.club386.com/intel-core-i9-12900k-at-125w/6/ , in the concluding section, they still show the 5950X at stock having higher efficiency than the 12900K at 125 W, let alone at stock, which is 241 W.
Stock for stock, Zen 3 in this review is about 50% more efficient than Alder Lake, which is consistent with the other reviews (TPU and Tom's).


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> On this link https://www.club386.com/intel-core-i9-12900k-at-125w/6/ , in the concluding section, they still show the 5950X at stock having higher efficiency than the 12900K at 125 W, let alone at stock, which is 241 W.


You realize I posted this link, right?


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> You realize I posted this link, right?


Yes, a link which clearly shows in its conclusion that the 5950X is more efficient than the 12900K at stock (50% more) as well as when power limited to 125 W (about 10% more).
Which pretty much shows exactly the opposite of what you have been claiming this whole time.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> Which pretty much shows exactly the opposite of what you have been claiming this whole time.


What am I claiming? Can you repeat it for me please, cause apparently you haven't got a whiff yet


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> Well, you don't have to post any numbers; you can just check what 8 GC cores do against 8 Zen 3 cores at the same wattage. And it's not pretty.


I don't own Zen 3 or Alder Lake right now. I have owned CPUs from both manufacturers over the years, though, so I have no beef in the game.
All I care about is bang for the buck, and lately (the last few years) this has been sometimes in AMD's favour, sometimes in Intel's.



fevgatos said:


> What am I claiming? Can you repeat it for me please, cause apparently you haven't got a whiff yet


This:


> fevgatos said:
> *More efficient at everything*. What he is saying is that Intel can't fit 16 P-cores because of power draw, which is absurd, because we already know a P-core outperforms a Zen 3 core at the same wattage. Therefore a 16 P-core Intel would outperform the 5950X, for example, at the same or lower wattage.





fevgatos said:


> What do you mean, "so now"? That's what I've been saying from the first post. And yes, at any wattage, from 10 W all the way up to 300 W.


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> I don't own Zen 3 or Alder Lake right now. I have owned CPUs from both manufacturers over the years, though, so I have no beef in the game.
> All I care about is bang for the buck, and lately (the last few years) this has been sometimes in AMD's favour, sometimes in Intel's.
> 
> 
> This:


And how exactly does the review I posted from Club386 disprove that? He is testing 8+8 against 16 Zen 3 cores.


----------



## mahirzukic2 (Aug 11, 2022)

fevgatos said:


> And how exactly does the review I posted from Club386 disprove that? He is testing 8+8 against 16 Zen 3 cores.


Well, unfortunately, that is the best Intel has for now, and that's what they are testing it with. It's also close in price to the 5950X, so it's fair game.
Even if we account for that and take the 3900XT, which is 12/24, compared to the 12900K, which is 16/24, even then it's 8% more efficient stock for stock, and about 20% less efficient when the 12900K is power limited to 125 W. Mind you, this is last gen.
Since Zen 3 is about 15-20% faster than Zen 2, if we take that into account we would get (extrapolated numbers) that the 5900X is 30% more efficient stock for stock, and about the same efficiency when the 12900K is power limited to 125 W. To be sure, Club386 should include a 5900X in those tests as well. That would give a better picture.

But I doubt that would change the current state, which is that Zen 3 is on average 50% more efficient, and about the same or worse efficiency in the worst case (heavily biased towards Intel by running it in optimised mode).
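The extrapolation above is just points-per-watt arithmetic. A minimal sketch of it, with hypothetical placeholder numbers (none of the scores or wattages below are measured results):

```python
# Illustrative efficiency arithmetic only -- all scores and wattages here are
# hypothetical placeholders, not benchmark data.

def efficiency(score: float, watts: float) -> float:
    """Cinebench-style points per watt."""
    return score / watts

# Hypothetical stock full-load numbers.
r9_3900xt = efficiency(score=18600, watts=145)   # Zen 2, 12c/24t (assumed)
i9_12900k = efficiency(score=27000, watts=241)   # Alder Lake 8P+8E (assumed)

# Scale the Zen 2 part by a ~17.5% Zen 3 uplift at similar power to
# estimate a 5900X, as the extrapolation in the post does.
zen3_uplift = 1.175
est_5900x = efficiency(score=18600 * zen3_uplift, watts=145)

print(f"3900XT     : {r9_3900xt:.1f} pts/W")
print(f"12900K     : {i9_12900k:.1f} pts/W")
print(f"est. 5900X : {est_5900x:.1f} pts/W")
```

With these made-up inputs the estimated 5900X lands roughly a third ahead of the stock 12900K in points per watt, which is the shape of the "30% more efficient stock for stock" claim; real figures would of course move the result.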


----------



## fevgatos (Aug 11, 2022)

mahirzukic2 said:


> Well, unfortunately, that is the best Intel has for now, and that's what they are testing it with. It's also close in price to the 5950X, so it's fair game.
> Even if we account for that and take the 3900XT, which is 12/24, compared to the 12900K, which is 16/24, even then it's 8% more efficient stock for stock, and about 20% less efficient when the 12900K is power limited to 125 W. Mind you, this is last gen.
> Since Zen 3 is about 15-20% faster than Zen 2, if we take that into account we would get (extrapolated numbers) that the 5900X is 30% more efficient stock for stock, and about the same efficiency when the 12900K is power limited to 125 W. To be sure, Club386 should include a 5900X in those tests as well. That would give a better picture.
> 
> But I doubt that would change the current state, which is that Zen 3 is on average 50% more efficient, and about the same or worse efficiency in the worst case (heavily biased towards Intel by running it in optimised mode).


It's heavily biased to test the CPUs at the same power limits? BIASED? Really? LOL

Of course the 5950X is a little more efficient in heavy MT workloads at the same wattage, but Alder Lake is WAY WAY more efficient in lighter loads and ST workloads. From Igor's Lab's review testing AutoCAD:

*Once again, you can put the score in relation to the power consumption in order to map the efficiency. The Core i9-12900KF is even 71 percentage points more efficient than the Ryzen 9 5950X. I’d rather not even write anything about the Core i5-12600K.*









Core i9-12900KF, Core i7-12700K and Core i5-12600 in a workstation test with amazing results and an old weakness | Part 2 (www.igorslab.de)

So today I'll get serious and show you where Alder Lake S can really score aside from colorful gaming pixels. Gaming what? Completely overrated if you look at at least some of today's results.

So choose your poison: if you want heavy MT, then the 5950X is 10% more efficient at the same wattage; if you run lighter, less-threaded workloads (AutoCAD / Photoshop / Premiere etc.), then the 12900K can be up to *70%* more efficient. Options for everyone, isn't that nice?


----------



## Valantar (Aug 11, 2022)

fevgatos said:


> Techspot used the RM1 cooler for the 65 W result. I'm not sure exactly what your issue is with that one; to me it's obvious they set PL2 to 65 W.


My problem is that without them actually specifying this explicitly, we have to resort to the kind of speculation you just did. And another key problem in this discussion is that you don't see a problem with such speculation, as you assume that your speculation must by default be true. It _might_ be, but that is a level of uncertainty that is not acceptable when trying to actually understand something as nuanced and fine-grained as what we're talking about here.


fevgatos said:


> It wouldn't make sense any other way since, the way they phrased it in the review, it would be idiotic to have PL1 at 65 and unlimited PL2. Plus the score would have been higher if that were the case, as demonstrated by their power-unlimited test.


I agree that it wouldn't make sense in any other configuration, but again, this is not proof that that is how they configured it. Humans are only partially rational, and we can only act on the basis of what we know. And, crucially, people fuck up too, doing things wrong or things they didn't mean to do. Unless explicitly told otherwise, we cannot assume that they configured it correctly. We can _hope_ they did, at best.


fevgatos said:


> I don't see anything wrong with Igor's Lab's review either. Yes, Zen was run with PBO, but that's irrelevant; what matters is the 12900K's performance at 125 W. Since in the Blender test it matches a PBO'd 5900X and slaps the 12600K, there is no way it gets matched by the 12600K in CB R23.


As you're using that review as an efficiency comparison for both, it is obviously problematic to have PBO active for the 5950X, as that pollutes the data you're using for your comparison.


fevgatos said:


> Club386 used XTU to power limit; they even have a picture of their settings on the first page. And since the numbers perfectly match the ones I observed with 3 CPUs tested with 4 different motherboards, I have no reason to doubt them. The CPU is running at 4.3 GHz for the P-cores.
> 
> 
> 
> https://www.club386.com/wp-content/uploads/2021/11/Power-Limiting1-1068x530.jpg


That's good to know - though a bit hilarious that they (likely to "provide proof" of the configuration being active while benchmarking) covered up half the gauges with the Blender window. Still, just about readable.


fevgatos said:


> Today I'm back; I can show you the 29,900 score and all that, but I find it irrelevant because, how can I show you I'm running a U12A? I might as well have a custom loop for all you know.


...you seem to have entirely missed the point of mentioning cooling: the point isn't what specific cooling you're using, the point is that for a valid comparison, cooling must be eliminated as a variable, i.e. it must be the same across all systems tested. It's a huge part of why comparing benchmarks done by various forum members is inherently unrepresentative (unless you have a _huge_ selection and can control for a bunch of variables), as there's too much uncertainty. Heck, just where you live and your room temperature can have significant effects on results.


fevgatos said:


> Regarding what went wrong with TPU, fixed voltage is by far the most likely explanation. MSI boards are quite renowned (mine included) for doing shenanigans, although he uses an ASUS Hero, and as far as my experience with the ASUS Apex goes, it wasn't doing any weird stuff when power limited, but who knows, maybe the Hero does.


Wait, these boards set fixed voltages by default? Holy crap, that's ... that's like class action lawsuit levels of misconfiguration. I guess I'm glad I've never owned an MSI motherboard. I guess that might also go some way towards explaining Igor's crazy Zen3 power readings, as he seems to use MSI motherboards exclusively.


fevgatos said:


> What i find really weird is how he himself didnt get puzzled with the results. That's by far my biggest surprise.


My guess: probably didn't stand out enough to really make note of in the midst of a massive CPU launch review blitz. Still something that ought to be looked at though.



But back to the core issue here, and what has been discussed for the past few posts: your extrapolations, your attitude to them, and your inflexibility and inability to adjust and maintain nuance in your arguments. I went into this a bit in the previous post, and above as well, but it still seems to not be sinking in.

In short: it's quite possible - or, likely even - that a theoretical 16 GC P core CPU would be more efficient than a 5950X at some range of power levels. That's the nature of a wide design - if not pushed too high in terms of clocks, they're extremely efficient. Just look at Apple's M1 - it's so wide that it matches both of these cores at just 3/5ths the clock and a fraction of the power. But it's also _frickin' huge_. As is ADL, though not on quite the same scale. You can of course choose to ignore die space efficiency in your arguments for performance efficiency (though that's a cost, just like power is a cost), but it's a major disadvantage in this regard - but one that also brings with it the possibility of going wide-and-slow.

The problem is that ADL has rather unpredictable and complex power behaviour. It boosts extremely high, and can, as we've seen, consume 55-60W in instruction dense workloads for a single core - but also sits much lower even at stock in lighter workloads, at ~25-30W in SPEC ST, for example. That of course also means that MT clock scaling will vary wildly when strictly power limited - in instruction dense/heavy workloads, 16 such cores in ~115W (assuming 15W uncore power), would clock much, much lower than what that single core can do, even if that single core goes far, far beyond its efficiency sweet spot.

Further complicating this is the inherent efficiency disadvantage of Zen3 due to through-package IF, which means its uncore consumes ~10-15W more than Intel's. That's nearly a full-load core's worth of difference, so definitely significant. And it means that we get scenarios such as this (which is made up, and obviously not accurate to anything at all, but at least in the ballpark):
- ST, instruction dense: ADL is a bit faster (~20%?), but consumes ~70% more package power, losing clearly in efficiency.
- ST, not instruction dense: ADL is a bit faster, consumes a bit more core power, but notably less uncore power, thus leading in efficiency - anything from a slight to clear lead depending on the workload.
- Low threaded (2-4), not instruction dense: same as above.
- Low threaded, instruction dense: ADL falls _way_ behind in efficiency unless power limited - but probably performs well when power limited. A complex range of efficiencies across vendors and chips.
- nT, instruction dense: Likely an AMD advantage, as the higher uncore power makes less of an impact, while AMD's low per-core power means high clocks even at sustained heavy all-core loads.
- nT, not instruction dense: Likely an Intel advantage, assuming they maintain high clocks.

While this was all made up, what is the takeaway from seeing results such as these across various benchmarks and test suites? That just as with all other hardware, current boost behaviour, power limits, and other automated self-regulation processes make testing - and reading results! - a lot more complicated, and conclusions are decreasingly simple and straightforward. And it especially makes theoretical, on-paper extrapolations more complicated, verging on impossible. The number of variables is increasing, and accounting for them is increasingly difficult.

We all know that ADL is fast, and performs well even at lower power limits - chips like the 12300 and 12100 demonstrate that very well. But there are still unanswered questions: What clocks would such a theoretical 16c chip be able to maintain across various workloads? How would uncore power change with the move to either a dual ring bus or mesh fabric? How would either of these affect performance through core-to-core latencies? How would cache configurations affect this - and how much cache would said chip have? If it doubled the cache of the 12900K, then that would again balloon the die area needed - and, once again, cost - while if L3 was kept lower, that would in turn hurt per-core performance.

On top of this, there are configuration differences and binning differences between existing CPUs that you are using to extrapolate your data, where you have consistently been highly selective - complaining of people bringing up the 12400 due to your impression that it's a terrible, inefficient bin, yet at the same time insisting on the 5800X - which is also the least efficient bin of desktop Zen3, by quite some distance - as the point of departure for comparing the two.

So: there are lots of unknowns here, and your extrapolations are far too simplistic, at times carry clear and obvious bias, and you are presenting them in a bombastic, nuance-free way that prompts counterarguments rather than constructive discussion. That doesn't mean that everything you have said is wrong - but it makes it impossible to have a constructive discussion. What could have been an interesting back-and-forth thought experiment instead degrades into an unconstructive shouting match because your way of presenting things forces everyone else into an involuntarily defensive stance trying desperately to add back some of the nuance your statements lack.

Which in turn, means that when you say hare-brained stuff like "ADL is more efficient everywhere" ... you're not only flat-out wrong, as we _know_ there are scenarios - such as heavy/instruction dense ST tasks, as well as MT tasks at stock power - where Zen3 is indisputably drastically more efficient - but you're wrong in an unconstructive way that fosters dissent and conflict. Even if not intended that way, that way of talking is troll language. If you want to have a rewarding and productive discussion, you need to present your arguments reasonably and with nuance. And with sources. Without that, all you're achieving is asking for pushback, and turning everything into unnecessary conflict.



fevgatos said:


> It's heavily biased to test the CPUs at the same power limits? BIASED? Really? LOL


I had to respond to this one last thing: as I've said about  652 million times in this thread: When testing at the same power limit means drastically different changes from stock power for each product being compared, then yes, that is indeed biased. It doesn't render the data useless, or the findings untrue, but it is an unequal comparison, as the products being tested all have stock configurations, and deviations from those thus represent changes from the inherent behaviour of the product. You can argue that the stock config of high end ADL is stupid, but that's another issue entirely - it doesn't make it any less biased to compare heavily underclocked ADL to stock-powered Zen3.
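The asymmetry being described here can be put in numbers. A minimal sketch, assuming rough stock full-load package powers (both figures are approximations used for illustration, not measurements):

```python
# Sketch of the "same power limit != same deviation from stock" point.
# Stock package-power figures below are rough assumptions.

stock_power = {"12900K": 241, "5950X": 142}   # approx. stock full-load watts
test_limit = 125                              # a common review cap

# Fraction by which the cap cuts each CPU below its shipped configuration.
deviation = {cpu: 1 - test_limit / watts for cpu, watts in stock_power.items()}

for cpu, cut in deviation.items():
    print(f"{cpu}: capped {cut:.0%} below its stock power")
```

With these numbers the 12900K is cut roughly 48% below its shipped configuration while the 5950X loses only about 12%, which is exactly the unequal-deviation argument above: the same limit is a far larger change for one product than for the other.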


----------



## fevgatos (Aug 11, 2022)

Valantar said:


> In short: it's quite possible - or, likely even - that a theoretical 16 GC P core CPU would be more efficient than a 5950X at some range of power levels. That's the nature of a wide design - if not pushed too high in terms of clocks, they're extremely efficient. Just look at Apple's M1 - it's so wide that it matches both of these cores at just 3/5ths the clock and a fraction of the power. But it's also _frickin' huge_. As is ADL, though not on quite the same scale. You can of course choose to ignore die space efficiency in your arguments for performance efficiency (though that's a cost, just like power is a cost), but it's a major disadvantage in this regard - but one that also brings with it the possibility of going wide-and-slow.


I'm skipping everything else you said above because I pretty much agree. Actually, I even agree with what you wrote here; the problem is, I was just responding to a user saying Intel didn't use 16 P-cores because of heat and power issues. Which is obviously wrong, and that's what my arguments have tried to prove throughout this thread.


Valantar said:


> I had to respond to this one last thing: as I've said about  652 million times in this thread: When testing at the same power limit means drastically different changes from stock power for each product being compared, then yes, that is indeed biased. It doesn't render the data useless, or the findings untrue, but it is an unequal comparison, as the products being tested all have stock configurations, and deviations from those thus represent changes from the inherent behaviour of the product. You can argue that the stock config of high end ADL is stupid, but that's another issue entirely - it doesn't make it any less biased to compare heavily underclocked ADL to stock-powered Zen3.


We will never agree on that point, I guess. Testing efficiency at stock only tells you how efficient the CPU's settings are. Which for me is absolutely useless. That's useful data only for someone who doesn't know how to power limit or doesn't care enough about efficiency to do so. Why should we use the layman as the standard for what's important?


----------



## DeeJay1001 (Aug 11, 2022)

Daven said:


> I wonder if the $300 to $600 gap between the 7700x and the 7900x will allow room for the 7800x3D. That would be the gaming chip to get at ~$450.


As far as we know, all of the 7000-series chips will have 3D V-Cache.


----------



## ratirt (Aug 12, 2022)

fevgatos said:


> I'm skipping everything else you said above because I pretty much agree. Actually, I even agree with what you wrote here; the problem is, I was just responding to a user saying Intel didn't use 16 P-cores because of heat and power issues. Which is obviously wrong, and that's what my arguments have tried to prove throughout this thread.


That is one aspect of Intel's problem. Heat comes with the frequency and voltage needed to sustain the workload. You have said that in light workloads Intel runs very well, and we all know that, but there is the full-load MT workload, which already seems to sit just at the edge of OK for power consumption and heat using the P- and E-cores. If you think an all-P-core product would have been better, then I disagree with that statement. Would it be possible for Intel to make a 16 P-core AL? Obviously it would be possible, but considering Intel's experience there are a dozen reasons they didn't do it, and heat is probably one of them - not heat alone, but combined with frequency, voltage, and ST and MT performance in general. Also, Intel needed a win against AMD, and AL delivered it in more areas. For all you know, Intel's decision about adding E-cores was the best they could go with to compete with AMD. As we already know, it will continue further with 13th gen and more E-cores for MT purposes, where Intel still lags behind AMD.


----------



## fevgatos (Aug 12, 2022)

ratirt said:


> That is one aspect of Intel's problem. Heat comes with the frequency and voltage needed to sustain the workload. You have said that in light workloads Intel runs very well, and we all know that, but there is the full-load MT workload, which already seems to sit just at the edge of OK for power consumption and heat using the P- and E-cores. If you think an all-P-core product would have been better, then I disagree with that statement. Would it be possible for Intel to make a 16 P-core AL? Obviously it would be possible, but considering Intel's experience there are a dozen reasons they didn't do it, and heat is probably one of them - not heat alone, but combined with frequency, voltage, and ST and MT performance in general. Also, Intel needed a win against AMD, and AL delivered it in more areas. For all you know, Intel's decision about adding E-cores was the best they could go with to compete with AMD. As we already know, it will continue further with 13th gen and more E-cores for MT purposes, where Intel still lags behind AMD.


Heat / wattage is a non-issue. Heck, even if they left the same 240 W power limit, a 16 P-core chip would be way easier to cool because it would have a bigger die. That's just physics 101, not an opinion.
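The "physics 101" point is about power density: the same wattage spread over a larger die means fewer watts per square millimetre under the heat spreader. A toy calculation (both die areas are rough assumptions, the 16P figure a hypothetical guess):

```python
# Rough power-density comparison. Die areas are assumptions for illustration,
# not official figures.

adl_8p8e_mm2 = 215   # approx. Alder Lake 8+8 die area (assumed)
hyp_16p_mm2 = 350    # hypothetical 16 P-core die area (assumed)
power_w = 240        # same package power limit for both

print(f"8+8 : {power_w / adl_8p8e_mm2:.2f} W/mm^2")
print(f"16P : {power_w / hyp_16p_mm2:.2f} W/mm^2")
```

Under these assumptions the hypothetical 16P die dissipates noticeably fewer W/mm², which is the easier-to-cool argument; it says nothing about hotspots under lightly threaded loads, which is where the counterargument below comes in.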


----------



## ratirt (Aug 12, 2022)

fevgatos said:


> Heat / wattage is a non-issue. Heck, even if they left the same 240 W power limit, a 16 P-core chip would be way easier to cool because it would have a bigger die. That's just physics 101, not an opinion.


What you said here is speculation. I know where this is coming from, but that may not be the case entirely. Maybe Intel went with E-cores to balance heat and die size? There has to be a reason why Intel didn't go with a 16 P-core setup. It is not just heat or frequency or wattage or voltage, cache size, die size, etc. I think it is a combination of all those aspects, and efficiency as well. Would a 16 P-core AL processor be possible? Intel could do anything if they wanted to. You don't have a 16 P-core AL not because of heat alone, but because, considering all aspects of what a CPU should offer, 8+8 in general was a better solution to tackle the market and compete better.


----------



## fevgatos (Aug 12, 2022)

ratirt said:


> What you said here is speculation. I know where this is coming from, but that may not be the case entirely. Maybe Intel went with E-cores to balance heat and die size? There has to be a reason why Intel didn't go with a 16 P-core setup. It is not just heat or frequency or wattage or voltage, cache size, die size, etc. I think it is a combination of all those aspects, and efficiency as well. Would a 16 P-core AL processor be possible? Intel could do anything if they wanted to. You don't have a 16 P-core AL not because of heat alone, but because, considering all aspects of what a CPU should offer, 8+8 in general was a better solution to tackle the market and compete better.


If you look at the die of the current 12900K, it's pretty obvious why they didn't go for a 16P CPU. That thing would be MASSIVE. It's already 35% bigger than the two CCDs of the 5950X. A 16 P-core die would be something like 350+ mm². It has no place in the consumer market, because people wouldn't pay for it. That's why they are saving it for the prosumer market with Sapphire Rapids.


----------



## Valantar (Aug 12, 2022)

fevgatos said:


> I'm skipping everything else you said above because I pretty much agree. Actually, I even agree with what you wrote here; the problem is, I was just responding to a user saying Intel didn't use 16 P-cores because of heat and power issues. Which is obviously wrong, and that's what my arguments have tried to prove throughout this thread.


I still think heat and power issues are a significant part of why they haven't made this, but not on as basic a level as "it would be impossible to cool". Rather, heat and power issues in the form of a likely uncompetitive balance between per-core clocks and power/thermals, especially in tasks with a "medium" thread load (i.e. not ST or close, but not nT either). A huge die like that would be easier to cool, but would also need more power for various uncore uses, and would likely need a lot of dark silicon to avoid overheating at high all-core loads - but it would also need to be able to deliver those ~60W to each and every core, requiring a lot of power wiring (= even more area) and would most likely need to more aggressively shuffle threads between cores to spread out the heat during operations, complicating scheduling as well (something AMD already does quite aggressively). If this were to be a "the best CPU ever" type of product it would after all need to boost as high if not higher than current ADL, which would strain its power distribution and cooling.

Of course, cost and area are likely equally if not more crucial considerations here as well - a 16 P core CPU would come well into HEDT territory, might not be physically feasible on the LGA1700 package (it would _fit_, but would it get enough power where it needed it, etc.?), and would be crazy expensive while likely having quite poor yields. And it's not like 8+8 ADL is a small die, after all, so they're already pushing things. Given that Intel has - reasonably - concluded that consumer HEDT is dead and has no sustainable basis for existence, I think they made the entirely correct choice with their hybrid design, allowing for great nT performance and many more cores in a smaller area - but at the cost of complexity and a period of growing pains. IMO, that's a far more sensible approach than "let's just jam as many cores as we can into this thing, screw it." Which, IMO, is a similar reasoning as to why AMD hasn't been moving towards >8 core CCDs - we're already in "more cores than most people will _ever_ make use of" territory with 16 (though there's an argument to be made for 8+16 being somewhat reasonable as E cores lack SMT), and per-core performance is still by far the most important thing.

There's also the consideration of a 16 P core configuration essentially placing itself by default as a quasi-HEDT offering, as its only real selling point would be "this doesn't have E cores, but 32 full power threads". With HEDT being (long) dead, that doesn't sound like an attractive selling point.


fevgatos said:


> We will never agree on that point, I guess. Testing efficiency at stock only tells you how efficient the CPU's settings are.


But .... that statement is equally true at literally every possible setting, which means it's not an argument for or against anything whatsoever. The point is that stock settings are _more important_ than non-stock settings. It shows you how the product works, as configured by the manufacturer, who has positioned it extremely deliberately after tens of thousands of hours of testing, binning and tuning, to find the optimal competitive balance. It shows how that has resulted in a specific configuration of a specific silicon implementation of an architecture, and it speaks to the reasoning put into this tuning. It speaks to what is and isn't possible, and the concerns and considerations taken into account when setting specs.


fevgatos said:


> Which for me is absolutely useless. That's useful data only for someone who doesn't know how to power limit or doesn't care enough about efficiency to do so. Why should we use the layman as the standard for what's important?


Why should we use a handful of enthusiasts with specialist knowledge and unusual use cases as the standard for what's important? As we discussed in the other thread, you have a vastly overblown belief in how many people know how to, care enough to, and want to tune their CPUs (and GPUs). And, crucially: your argument here boils down to "I believe that what I like should be the standard." There is nothing in this approach that weighs sufficiently heavily to counteract the simple fact that stock settings are stock settings, with all that entails. And any arbitrarily chosen power level set by you or a reviewer will never be anything but arbitrary, unless a wide range of power levels are tested for an actual overview of the overall efficiency of the architecture as implemented. Essentially every CPU ever made runs at stock, and only at stock. Thus, stock operations are by far the most important measure, the basic measure that should always be taken first, and towards which comparisons are directed. Tuned operations are _also_ interesting to look at, but it's a niche scenario and can thus by default not be treated as the standard.

You can always argue that Intel ought to have configured the 12900K differently at stock - many of us have done so - but this falls in line with your overall pattern of generosity towards them, handily skipping obvious criticisms. After all, it could easily be argued that Intel should have configured these CPUs to run at peak efficiency for everyday users, while leaving tons of OC headroom for enthusiasts who know how to tune them - by your reasoning, that shouldn't be a problem whatsoever, right? But instead, it's clear from the tuning of these chips that Intel felt strong pressure to _win_ in terms of performance, and thus tuned these chips to ludicrous levels of power in order to decisively do so. Which grants valuable and interesting insight into Intel's reactions to AMD's perf/W and MT dominance over the past few years, and shows just how seriously they've taken that threat. Which means that for the vast majority of people, these chips run less efficiently than they should, at higher power than what's necessary or even remotely efficient, all because Intel desperately needed a benchmarking win.

I mean, there are lots of useful things we can glean from the specific configuration of the 12900K, in terms of Intel's reasoning:
- 8 cores (or 10 like the 10900K) would be woefully uncompetitive against a 16c in MT tasks
- 16 cores would be too big (and hot, though they clearly ignored that part), unless they did something clever
- ADL can be quite efficient, but for a conclusive ST win it needs _power_
- ADL's high power ceiling renders boost tuning ... complicated, and performance is thus the same
- Intel needed to be seen as "striking back" against AMD after years of barely scraping by and clearly being unprepared for a resurgent competitor
- Intel is (finally!) starting to make some use of their vastly diverse portfolio of products in innovative ways
- There are clearly different parts of Intel competing, some clearly focusing on "higher number better" PR thinking, others on smart, complex designs, and likely a ton of compromise positions
- There can be vast chasms between "what wins benchmarks" and "what the best operating parameters for this product are", and the benchmark-oriented people won out with ADL



fevgatos said:


> That's why they are saving it for the prosumer market with the Sapphire Rapids.


I think you need to look up what "prosumer" means. Sapphire Rapids is an HPC/server CPU, not a prosumer one, and I would be _very_ surprised if they pushed out another "enthusiast HEDT" product after the PR faceplant of the """5GHz""" Xeon W-3175X, which had the tech world laughing at them for months afterwards.



fevgatos said:


> Heck even if they left the same 240w power limit a 16P core would be way easier to cool because it would have a bigger die. That's just physics 101, not an opinion.


This is true for nT workloads at least - for lower threaded workloads it would be more complex, as discussed above. It would also be able to make a lot better use of those 240W in terms of performance, at least in nT workloads. The interesting question to me would be, when weighing power, performance, and die area/cost (and, of course, how to weigh these is not a simple question at all), if it would be a more attractive proposition than a 5950X at 100W less power.


----------



## ratirt (Aug 12, 2022)

fevgatos said:


> If you look at the die of the current 12900k, it's pretty obvious why they didn't go for a 16P CPU. That thing would be MASSIVE. It's already 35% bigger than the 2 CCD's of the 5950x. A 16P core would be something like 350++ mm². It has no space in the consumer market, cause people wouldn't pay for it. That's why they are saving it for the prosumer market with the Sapphire Rapids.


As you can see yourself, making a chip that big could have been problematic, not because of heat but because of performance, price, die size, etc. So many things to consider. If Intel had made a big one like that and heat had been a problem, they could easily have lowered the clocks and made it run OK within a proper power envelope, but the performance might not have been sufficient. So I guess, in order to balance the product (more or less), we have E-cores.


----------



## mahirzukic2 (Aug 12, 2022)

fevgatos said:


> Heat / wattage is a non issue. Heck even if they left the same 240w power limit a 16P core would be way easier to cool because it would have a bigger die. That's just physics 101, not an opinion.


I don't think heat is a problem either. What would have been a problem is die area. It would be almost double that of the 12900K, which would mean much worse yields and a more expensive product for the end consumer.
In the end, I don't even think it would be more efficient than the 12900K, since it would need a dual ring bus, which would draw more power than a single ring bus, on top of the additional cores.
Then, finding a well-binned CPU with both ring buses intact as well as all 16 cores working would be a feat in itself. That processor would easily cost 50-100% more than the 12900K at retail.
Would its performance be 100% higher? Most likely not.



fevgatos said:


> If you look at the die of the current 12900k, it's pretty obvious why they didn't go for a 16P CPU. That thing would be MASSIVE. It's already 35% bigger than the 2 CCD's of the 5950x. A 16P core would be something like 350++ mm². It has no space in the consumer market, cause people wouldn't pay for it. That's why they are saving it for the prosumer market with the Sapphire Rapids.


Pretty much this.

Also on that note, it being already 35% bigger than the two CCDs of the 5950X: you have repeatedly said that one AL core should be compared to one Zen 3 core, even fully knowing that one AL core is roughly twice the size of a Zen 3 core. This is obvious, since AL is a wide CPU design. So a CPU core's surface area, its architecture, its frequency/voltage curves, etc. are all part of the CPU's design.
Saying let's take one AL core at THIS specific wattage and compare it in THAT specific workload is kind of an unfair comparison.
How about we compare one design to another at the same or similar price points, i.e. what reviews usually do? It has been done, and usually Zen 3 cores are more efficient.
Now, due to AL's design, it can be more efficient than Zen 3 in certain workloads; we all agree on that.
Having said that, AL is not more efficient than Zen 3 at everything at any wattage.



Valantar said:


> *This is true for nT workloads at least - for lower threaded workloads it would be more complex, as discussed above. It would also be able to make a lot better use of those 240W in terms of performance, at least in nT workloads. The interesting question to me would be, when weighing power, performance, and die area/cost (and, of course, how to weigh these is not a simple question at all), if it would be a more attractive proposition than a 5950X at 100W less power.*


This is one of the things we went over in the Computer Architectures course when I was doing my BS in computer science (about 10 years ago).
Bonus info: this class was taught by a former Intel engineer who had worked on Haswell (before we even knew it was a thing, as architectures are designed years before the actual product hits the shelves) as a signal integrity engineer, since he has a PhD in that area. This was one of my favourite classes, and in it we discussed that you have to balance many aspects of a CPU: the width of the core, its die area, its caches (L1, L2, L3), registers, yields, etc.
It's an art, and there's no right or wrong answer here - just different implementations of different architectures, which will obviously yield different performance numbers.
I still keep in touch with this guy. He worked at Ampere before they unveiled their first ARM processor, and afterwards he left Ampere for Microsoft. He has also kept teaching CPU architectures part-time at Portland university all these years.


----------



## fevgatos (Aug 12, 2022)

Valantar said:


> But .... that statement is equally true at literally every possible setting, which means it's not an argument for or against anything whatsoever. The point is that stock settings are _more important_ than non-stock settings. It shows you how the product works, as configured by the manufacturer, who has positioned it extremely deliberately after tens of thousands of hours of testing, binning and tuning, to find the optimal competitive balance. It shows how that has resulted in a specific configuration of a specific silicon implementation of an architecture, and it speaks to the reasoning put into this tuning. It speaks to what is and isn't possible, and the concerns and considerations taken into account when setting specs.
> 
> Why should we use a handful of enthusiasts with specialist knowledge and unusual use cases as the standard for what's important? As we discussed in the other thread, you have a vastly overblown belief in how many people know how to, care enough to, and want to tune their CPUs (and GPUs). And, crucially: your argument here boils down to "I believe that what I like should be the standard." There is nothing in this approach that weighs sufficiently heavily to counteract the simple fact that stock settings are stock settings, with all that entails. And any arbitrarily chosen power level set by you or a reviewer will never be anything but arbitrary, unless a wide range of power levels are tested for an actual overview of the overall efficiency of the architecture as implemented. Essentially every CPU ever made runs at stock, and only at stock. Thus, stock operations are by far the most important measure, the basic measure that should always be taken first, and towards which comparisons are directed. Tuned operations are _also_ interesting to look at, but it's a niche scenario and can thus by default not be treated as the standard.
> 
> You can always argue that Intel ought to have configured the 12900K differently at stock - many of us have done so - but this falls in line with your overall pattern of generosity towards them, handily skipping obvious criticisms. After all, it could easily be argued that Intel should have configured these CPUs to run at peak efficiency for everyday users, while leaving tons of OC headroom for enthusiasts who know how to tune them - by your reasoning, that shouldn't be a problem whatsoever, right? But instead, it's clear from the tuning of these chips that Intel felt strong pressure to _win_ in terms of performance, and thus tuned these chips to ludicrous levels of power in order to decisively do so. Which grants valuable and interesting insight into Intel's reactions to AMD's perf/W and MT dominance over the past few years, and shows just how seriously they've taken that threat. Which means that for the vast majority of people, these chips run less efficiently than they should, at higher power than what's necessary or even remotely efficient, all because Intel desperately needed a benchmarking win.


I don't know why you think that ignorance is better just because more people choose to be ignorant. I fundamentally disagree with your way of thinking. Just yesterday I watched a review of some be quiet! fans compared against the T30. There was a comment there saying, and I'm not kidding you, "I'm thinking about replacing my T30s cause they are too noisy, these new be quiet! fans seem like a good replacement". And all that because the user is probably running them at something like 3000 RPM. He is literally going to spend even more money to replace a better product with a worse one. There are countless other people - gamers - who are going to skip Alder Lake because of power consumption, while looking at power chart numbers from Prime95 or something. And they'll end up with, say, a Zen 3 chip that might be less efficient in gaming. Do you think you are doing these people a favor by leaving them to their ignorance?

The reason Intel (or even AMD, for that matter) configures their CPUs the way they do is exactly because of reviewers. Intel wants you to think that the 12900K is faster than the 5950X in MT, which it is not. But the average user looking at a review will just glance at the very popular CBR23 benchmark and conclude otherwise. For the same reason, Zen 3 was power limited way lower, because AMD had no competition in heavy MT workloads back in 2020.

Comparing stock and just stock for efficiency would be fine if people understood what they are looking at. They don't. They can't even fathom the fact that the 12900KS, for example, being a better-binned 12900K, can be a lot more efficient. They see power draw at 900 W and that's all that matters to them.

And frankly, if what I'm suggesting (testing at similar power limits) wasn't very, very important, TPU wouldn't have done it in the first place, right? I mean, I'm pretty sure the time it took them to benchmark the CPU at so many different power levels must have been insane, and kudos to them for doing so.
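For what it's worth, the analysis such multi-power-level testing enables is simple to lay out. The numbers below are invented for the sketch (shaped like a typical diminishing-returns scaling curve), not TPU's or anyone else's measurements:

```python
# Hypothetical (power limit in W, nT benchmark score) pairs - made-up
# illustration data, NOT measured results from any review.
samples = [(50, 14000), (75, 18500), (125, 24000), (190, 26500), (241, 27500)]

# Efficiency at each tested power level: points per watt.
efficiency = [(power, score / power) for power, score in samples]

# The most efficient tested operating point is simply the maximum.
best_power, best_eff = max(efficiency, key=lambda pair: pair[1])

for power, eff in efficiency:
    print(f"{power:>3} W: {eff:6.1f} points/W")
print(f"Most efficient tested limit: {best_power} W")
```

With a curve shaped like this, efficiency keeps rising as the limit drops, which is exactly why a single stock data point can't tell you where an architecture's sweet spot sits.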

EG1. You are right about the prosumer market. I thought SR would be HEDT; for some reason I was thinking Sierra Forest would be the Xeon part.


----------



## Valantar (Aug 12, 2022)

mahirzukic2 said:


> Also on that note, it being already 35% bigger than the two CCDs of the 5950X: you have repeatedly said that one AL core should be compared to one Zen 3 core, even fully knowing that one AL core is roughly twice the size of a Zen 3 core. This is obvious, since AL is a wide CPU design. So a CPU core's surface area, its architecture, its frequency/voltage curves, etc. are all part of the CPU's design.
> Saying let's take one AL core at THIS specific wattage and compare it in THAT specific workload is kind of an unfair comparison.
> How about we compare one design to another at the same or similar price points, i.e. what reviews usually do? It has been done, and usually Zen 3 cores are more efficient.
> Now, due to AL's design, it can be more efficient than Zen 3 in certain workloads; we all agree on that.
> Having said that, AL is not more efficient than Zen 3 at everything at any wattage.


Doing some quick measurements in Photoshop and calculating from the official die size (20.5x10.5 mm), the ADL version of the GC core without its L3 cache (just the roughly rectangular part towards the top/bottom of the die, not the middle interconnect+cache strip) is ~2.24x3.29 mm, or 7.4 mm². The matching area of a Vermeer Zen3 core (L3 excluded) is ~2.6x1.6 mm, or 4.2 mm² - 57% of the area. Which in turn raises the question of which would be more efficient: a 16C ADL design, or a 16/57*100≈28C Zen3 design at the same wattage. Of course we'd be well beyond useful core counts at that point, but arguably, so is 16 cores for most consumers. And more cores wouldn't help Zen3 catch up in ST performance, but it would likely trounce any 16C ADL in nT performance at the same wattage - though it would in turn also increase the IF power cost due to needing even more IF links, would need a larger package, etc., etc. Essentially it would be an EPYC/Threadripper light, just with slightly less than half the IF power and without the crazy I/O.

Which then leaves us with yet another axis of variability, which in turn _also_ doesn't scale linearly across workloads, as most workloads aren't nT.
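The back-of-the-envelope arithmetic above can be laid out explicitly. The inputs are the same rough Photoshop-derived estimates quoted in the post, so treat them as ballpark measurements, not official per-core figures:

```python
# Per-core area estimates (L3 excluded), from the rough Photoshop
# measurements above - ballpark figures, not official die shots data.
adl_core_mm2 = 2.24 * 3.29   # Golden Cove core in ADL: ~7.4 mm²
zen3_core_mm2 = 2.6 * 1.6    # Zen 3 core in Vermeer: ~4.2 mm²

# A Zen 3 core covers roughly 56-57% of a GC core's area...
area_ratio = zen3_core_mm2 / adl_core_mm2

# ...so the silicon budget of 16 GC cores fits roughly 28 Zen 3 cores.
equal_area_zen3_cores = round(16 / area_ratio)

print(f"GC: {adl_core_mm2:.1f} mm², Zen3: {zen3_core_mm2:.2f} mm² "
      f"({area_ratio:.0%} of GC)")
print(f"16 GC cores' area ≈ {equal_area_zen3_cores} Zen3 cores")
```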


mahirzukic2 said:


> This is one of the things we went over in the Computer Architectures course when I was doing my BS in computer science (about 10 years ago).
> 
> Bonus info: this class was taught by a former Intel engineer who had worked on Haswell (before we even knew it was a thing, as architectures are designed years before the actual product hits the shelves) as a signal integrity engineer, since he has a PhD in that area. This was one of my favourite classes, and in it we discussed that you have to balance many aspects of a CPU: the width of the core, its die area, its caches (L1, L2, L3), registers, yields, etc.
> It's an art, and there's no right or wrong answer here - just different implementations of different architectures, which will obviously yield different performance numbers.
> I still keep in touch with this guy. He worked at Ampere before they unveiled their first ARM processor, and afterwards he left Ampere for Microsoft. He has also kept teaching CPU architectures part-time at Portland university all these years.


Fascinating! And yeah, there are far, far, far too many variables in play to pretend that there's any such thing as a "right answer" to any of this. That's what makes it so interesting!



fevgatos said:


> I don't know why you think that ignorance is better just because more people choose to be ignorant. I fundamentally disagree with your way of thinking.


And here we're back to the ... well, I think this highlights a part of the problem with this discussion: your reading of what others write is simple, superficial, and bordering on bad faith in its assumptions. How on earth am I "thinking that ignorance is better?" Have I said that anything is _better_ anywhere? No. I'm talking about _useful information, relevant knowledge_. And knowing how a chip operates at the parameters at which it will be operating in >99% of use cases is more relevant knowledge than how it _can_ operate in hand-tuned scenarios.


fevgatos said:


> Just yesterday I watched a review of some be quiet! fans compared against the T30. There was a comment there saying, and I'm not kidding you, "I'm thinking about replacing my T30s cause they are too noisy, these new be quiet! fans seem like a good replacement". And all that because the user is probably running them at something like 3000 RPM. He is literally going to spend even more money to replace a better product with a worse one. There are countless other people - gamers - who are going to skip Alder Lake because of power consumption, while looking at power chart numbers from Prime95 or something. And they'll end up with, say, a Zen 3 chip that might be less efficient in gaming. Do you think you are doing these people a favor by leaving them to their ignorance?


No, but then I have never argued for anything even remotely resembling what you're saying here either. Literally nothing you're saying here resembles what I've been saying _at all_. And if you think so: please, pretty please, try to actually read what I've been saying. Look at the words, and think about what they mean, without projecting some nefarious agenda into them. The solution to that CPU efficiency issue? Highlighting gaming power draw testing, and not using simplistic power testing for more than it's worth. (The problem with this is of course that gaming CPU power draw testing is _really friggin' difficult_ due to driver variability and overhead, platform differences, actual gaming performance differences (which is largely GPU bound) and more, making a like-for-like comparison near impossible.)

Also, I will continue rejecting the stupid and oversimplified false equivalency you keep making between tuning a CPU and applying a fan profile. Yes, both are done in BIOS, but that's about where the similarities end. The risk involved and the level of complexity are so staggeringly different that the two are simply not comparable. Also, fan profiles are reliably adjustable in software (though that software typically _sucks_, but that's another problem), unlike CPU power limits.


fevgatos said:


> The reason Intel (or even AMD, for that matter) configures their CPUs the way they do is exactly because of reviewers.


Again, oversimplified. It was configured that way because of the complex mix of competitive realities, architectural and implementational physical characteristics of the chip, marketing, strategy, and more. "Because of reviewers" is a reductive and stupidly oversimplified summation of that.


fevgatos said:


> Intel wants you to think that the 12900K is faster than the 5950X in MT, which it is not. But the average user looking at a review will just glance at the very popular CBR23 benchmark and conclude otherwise. For the same reason, Zen 3 was power limited way lower, because AMD had no competition in heavy MT workloads back in 2020.


Uh ... Zen3 was power limited way lower because it literally doesn't scale higher. At all. There is _no_ meaningful performance improvement to be had by pushing Zen3 higher. So, Zen3 wasn't power limited way lower because of no competition, but because if they set it to 240W, performance would likely be _worse_ due to leakage currents and thermal issues, rather than better.


fevgatos said:


> Comparing stock and just stock for efficiency would be fine if people understood what they are looking at. They don't. They can't even fathom the fact that the 12900KS, for example, being a better-binned 12900K, can be a lot more efficient. They see power draw at 900 W and that's all that matters to them.


... and they would understand what they were looking at if you instead showed them arbitrarily power limited non-stock configurations? Yeah, your logic is wildly inconsistent here.


fevgatos said:


> And frankly, if what I'm suggesting (testing at similar power limits) wasn't very, very important, TPU wouldn't have done it in the first place, right? I mean, I'm pretty sure the time it took them to benchmark the CPU at so many different power levels must have been insane, and kudos to them for doing so.


But ... nobody here is saying that this isn't important, or useful, or a great thing. It absolutely is! But it's an interesting and useful thing specifically for us enthusiasts, as it informs both our opinions and practices. It is explicitly _not_ useful for anyone outside of this very small niche hobby (and perhaps a few other very niche groups, like SI system tuners and the like), because for even a well informed general audience, all it would do would be sow confusion. "Why doesn't my PC act like this? Your benchmark said X, but mine is A?" Etc. Presenting complex information like this is a complex communicative and pedagogical act, and not one suited for something aimed at giving a relatively representative overview, like a product review. It's a follow-up-article thing, for people who are prone to reading follow-up articles. And that's perfectly fine.


fevgatos said:


> EG1. You are right about the prosumer market. I thought SR would be HEDT; for some reason I was thinking Sierra Forest would be the Xeon part.


Ah, Sierra Forest is that E-core-only Xeon project, right? Yeah, no, SR is HPC/server; AFAIK its biggest PR push is one of those exascale supercomputers ... Aurora, I think? Intel HEDT is dead and showing no signs of returning. The closest thing they've got is Xeon-W - just like how AMD has killed TR, replacing it with TR Pro and EPYC (which are the same, just with marginally different tuning).


----------



## fevgatos (Aug 12, 2022)

Valantar said:


> And here we're back to the ... well, I think this highlights a part of the problem with this discussion: your reading of what others write is simple, superficial, and bordering on bad faith in its assumptions. How on earth am I "thinking that ignorance is better?" Have I said that anything is _better_ anywhere? No. I'm talking about _useful information, relevant knowledge_. And knowing how a chip operates at the parameters at which it will be operating in >99% of use cases is more relevant knowledge than how it _can_ operate in hand-tuned scenarios.
> 
> No, but then I have never argued for anything even remotely resembling what you're saying here either. Literally nothing you're saying here resembles what I've been saying _at all_. And if you think so: please, pretty please, try to actually read what I've been saying. Look at the words, and think about what they mean, without projecting some nefarious agenda into them. The solution to that CPU efficiency issue? Highlighting gaming power draw testing, and not using simplistic power testing for more than it's worth. (The problem with this is of course that gaming CPU power draw testing is _really friggin' difficult_ due to driver variability and overhead, platform differences, actual gaming performance differences (which is largely GPU bound) and more, making a like-for-like comparison near impossible.)


I don't know why we are arguing about this. Only people who don't know they can power limit their CPU would be interested in how CPUs perform at stock power limits. Hence ignorance. A couple of days ago I read someone complaining about the TDP on Ryzen 5 being raised to 125 W instead of the 65 W of the Zen 3 model. Like... why would you care, unless you don't know that you can put it back to 65 W? I mean, I swear, I cannot possibly think of any other reason why an end user would care about what power limit Intel or AMD decides to put on their products. I'm not even sure those are determined by the actual engineers; I think they are determined by the marketing department in order to position their CPUs against each other's products.



Valantar said:


> Also, I will continue rejecting the stupid and oversimplified false equivalency you keep making between tuning a CPU and applying a fan profile. Yes, both are done in BIOS, but that's about where the similarities end. The risk involved and the level of complexity are so staggeringly different that the two are simply not comparable. Also, fan profiles are reliably adjustable in software (though that software typically _sucks_, but that's another problem), unlike CPU power limits.


I wasn't trying to compare the complexity, although in my opinion it's easier to change your power limit than your fan curve. Not even kidding. At least on Intel, you can't even get into the BIOS without a huge pop-up asking you to choose the maximum wattage / cooler you have.

But my point is, a consumer will spend even more money than he already has to buy a worse product because the review didn't bother testing at the same noise levels. Which, when it comes to CPUs, is pretty much equivalent to testing at the same power levels, in my opinion. Imagine random Bob replacing his 12900K with a 5900X because the former is very inefficient, drawing 240 W, while the latter only sips 125 W. Wouldn't that be completely stupid, since he could limit his 12900K to 125 W and actually outperform the 5900X?
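For context, on Linux a power cap doesn't even require a BIOS visit. A minimal sketch, assuming a kernel with the intel_rapl powercap driver and root access; the sysfs path can differ per system, the change is not persistent across reboots, and board firmware may override it:

```shell
# Sketch: capping the package long-term power limit (PL1) to 125 W via the
# Linux powercap (intel_rapl) sysfs interface. Assumes root and an
# intel_rapl-capable kernel; the zone path may vary per system.
RAPL=/sys/class/powercap/intel-rapl:0

cat "$RAPL/name"                         # typically "package-0"
cat "$RAPL/constraint_0_power_limit_uw"  # current PL1, in microwatts

# 125 W = 125,000,000 µW; takes effect immediately, reverts on reboot.
echo 125000000 | sudo tee "$RAPL/constraint_0_power_limit_uw"
```

Whether three sysfs reads and a write count as "easy" for a non-enthusiast is, of course, exactly what's being debated here.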



Valantar said:


> Uh ... Zen3 was power limited way lower because it literally doesn't scale higher. At all. There is _no_ meaningful performance improvement to be had by pushing Zen3 higher. So, Zen3 wasn't power limited way lower because of no competition, but because if they set it to 240W, performance would likely be _worse_ due to leakage currents and thermal issues, rather than better.


And yet that's exactly what Intel did. Actually, the 12900k scales even worse than the 5950x from 125 to 240w, yet here we are.



Valantar said:


> ... and they would understand what they were looking at if you instead showed them arbitrarily power limited non-stock configurations? Yeah, your logic is wildly inconsistent here.
> 
> But ... nobody here is saying that this isn't important, or useful, or a great thing. It absolutely is! But it's an interesting and useful thing specifically for us enthusiasts, as it informs both our opinions and practices. It is explicitly _not_ useful for anyone outside of this very small niche hobby (and perhaps a few other very niche groups, like SI system tuners and the like), because for even a well informed general audience, all it would do would be sow confusion. "Why doesn't my PC act like this? Your benchmark said X, but mine is A?" Etc. Presenting complex information like this is a complex communicative and pedagogical act, and not one suited for something aimed at giving a relatively representative overview, like a product review. It's a follow-up-article thing, for people who are prone to reading follow-up articles. And that's perfectly fine.


I'm just too tired of reading the same nonsense over and over again from people who don't understand what the graphs show them. Oh look - X GPU at 300 W is more efficient than Y GPU configured at 900 W. Hooray; who cares? Again, unless you don't know you can change the power limits - quite easily, IMO (I know you disagree with that part) - I don't see the point.


----------



## neatfeatguy (Aug 12, 2022)

This back and forth reminds me of a guy from some years ago (on a different forum) who flashed his HD 6950 with a 6970 BIOS. The flash took and the extra shaders were unlocked, but his card couldn't take the voltage boost needed to reach actual 6970 clocks. He argued with people, defending his card and calling it a 6970 even though it was gimped.

The debate went back and forth and, in the end, no one cared except for the couple of people wanting to pad their egos.


----------



## ratirt (Aug 12, 2022)

fevgatos said:


> I don't know why we are arguing about this. Only people who don't know they can power limit their CPU would be interested in how CPUs perform at stock power limits. Hence ignorance. A couple of days ago I read someone complaining about the TDP on Ryzen 5 being raised to 125 W instead of the 65 W of the Zen 3 model. Like... why would you care, unless you don't know that you can put it back to 65 W? I mean, I swear, I cannot possibly think of any other reason why an end user would care about what power limit Intel or AMD decides to put on their products. I'm not even sure those are determined by the actual engineers; I think they are determined by the marketing department in order to position their CPUs against each other's products.


This is still unbelievable, what you are writing. And the problems you attribute to so-called "people" having issues with the TDP of a processor? They are comparing a Ryzen 5 to a Ryzen 3 in the first place, and second, what a bullshit problem, especially when you can buy 5000-series processors ranging from 35 W to 105 W. Maybe they picked the wrong processor, if TDP is their main purchase criterion. It would seem, again, that you want to justify your power-limiting argument by bringing up "some people" who are upset with their purchase.
If you bought according to TDP, you would not get a 12900K, and sure, you can limit it, but you still lose performance. It does not matter how much, but you do, and that is not what you paid for.
So please, stop this nonsense of limiting power to satisfy the TDP argument. When you limit power, you limit performance, and that is not what you paid for. The CPU is not advertised that way, and the price does not reflect your power limit preferences or how efficient it is at a 35 W limit. Do you know why? Because if I were to buy a CPU for $800 (or whatever the price is now) and limit it to 35 W or 50 W (it really doesn't matter at this point), I would rather buy a CPU designed for that 35-50 W range. Heck, for $800 I can buy a whole PC. You see how quickly your arguments melt under your flawed logic.


fevgatos said:


> I wasn't trying to compare the complexity, although in my opinion it's easier to change your power limit than your fan curve. Not even kidding. At least on Intel, you can't even get into the BIOS without a huge pop-up asking you to choose the maximum wattage / cooler you have.
> 
> But my point is, a consumer will spend even more money than he already has to buy a worse product because the review didn't bother testing at the same noise levels. Which, when it comes to CPUs, is pretty much equivalent to testing at the same power levels, in my opinion. Imagine random Bob replacing his 12900K with a 5900X because the former is very inefficient, drawing 240 W, while the latter only sips 125 W. Wouldn't that be completely stupid, since he could limit his 12900K to 125 W and actually outperform the 5900X?


No. You get what you pay for, not the other way around. You pay first, and then you decide what you want this CPU to perform like? Full speed or half speed, this TDP or that TDP? That is bullshit.
This power limiting and tweaking is for enthusiasts, not for everyone. This is a ridiculous argument. You can try things and see where you land with power limits and decreased performance, but with other perks you may find valuable. Why would he buy a 12900K if its 240 W TDP is his biggest concern? He should have gone with a 5900X from the start, or a 12600 non-K, or, if he needs more cores, a 12900T.


fevgatos said:


> And yet that's exactly what Intel did. Actually, the 12900k scales even worse than the 5950x from 125 to 240w, yet here we are.


Intel did this to squeeze out whatever performance was left. Even though the cost in power consumption was high, they went with it, and they still didn't top all the charts against AMD's top processor.
That was their choice, and you pay for that choice as well, and a lot. Limiting it to brag about how efficient it is? Buy a 12900T, which does not go above 110 W in power consumption if I remember correctly, though boost stays below 5 GHz, and it is cheaper.


----------



## fevgatos (Aug 12, 2022)

ratirt said:


> This is still unbelievable what you are writing. And the problems that you are putting on there from so called "people" having problems with TDP of a processor? They are comparing Ryzen 5 to Ryzen 3 first of all and second what a bullshit of a problem especially if you can buy 5000 series ranging from 35w to 105w processors. Maybe they have picked the wrong processor if they are buying with TDP as a main purchase reason. It would seem again you want to justify your power limiting CPU argument by bringing up ''some people said'' they are upset with their purchase.
> If you buy according to TDP  you would not get 12900K and saying you can limit yest sure but you still use performance. It does not matter how much but you do which is not what you have paid for.
> So please, stop this nonsense with limiting power to satisfy the TDP argument. When you limit power you limit performance and it is not what you have paid for. It does not matter how much you limit the CPU it is not being advertised as that and the price is not reflecting your opinion and power limit factors and how much efficient it is at 35w limit. do you know why? Because if I were to buy a CPU for $800 (or what is the price now?) and limit it to 35W or 50w (it really doesnt matter at this point) I would rather buy a dedicated CPU with that power range 35w-50w. Heck, for $800 i can buy the whole PC. You see how your arguments melt quickly with your flawed logic.
> 
> ...


That's nonsense. Are you saying that if the 12900K were limited to 125 W it would be cheaper? Of course not. Therefore I'm not paying for the TDP, I'm paying for the CPU. This is the exact kind of argument I have a problem with; it's completely meaningless. What does it even mean that you "pay for that performance"? Makes no sense. Absolutely none. I bought the CPU for the ST and gaming performance, and even limited to 125 W it performs exactly the same in those scenarios. I also paid to have a very efficient CPU, and when limited to 125 W it's exactly that.
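For reference, capping the package power like this doesn't need anything exotic; on Linux, for example, it can be done through the intel-rapl powercap interface. A minimal sketch (the sysfs path follows the standard intel-rapl layout; writing the file needs root, and the assumption that constraint 0 is the long-term PL1 limit should be checked against `constraint_0_name` on the actual machine):

```python
# Sketch: capping Intel package power (PL1) through the Linux intel-rapl
# powercap interface. The path below is the standard layout for package 0;
# actually writing the file requires root on real hardware.
RAPL = "/sys/class/powercap/intel-rapl:0"

def to_microwatts(watts: float) -> str:
    """The *_power_limit_uw files take a plain integer in microwatts."""
    return str(int(watts * 1_000_000))

def cap_package_power(watts: float, constraint: int = 0) -> str:
    """Build the path/value pair; constraint 0 is the long-term limit
    (PL1) on typical systems (an assumption -- check constraint_*_name)."""
    path = f"{RAPL}/constraint_{constraint}_power_limit_uw"
    value = to_microwatts(watts)
    # On a real machine, as root: open(path, "w").write(value)
    return f"{path} <- {value}"

print(cap_package_power(125))  # 125 W long-term cap
```

With a limit like that in place, lightly threaded and gaming loads that never reach 125 W behave exactly as before; only sustained all-core loads get clipped.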


----------



## Mussels (Aug 13, 2022)

fevgatos said:


> Giving me infractions doesnt make you right, go ahead and tell me where im wrong. I'm sorry but TPUs review is obviously horribly wrong and whoever claims otherwise is in absolute denial. You dont even have to compare it with another review to realize its wrong. The 12600k matching the 12900k at same wattage is an obvious red flag that something is completely messed up.
> 
> Anyways, I've already posted 3 more reviews that show the same thing (igors lab, techspot and club365), so whatever you are claiming here (which you havent made clear) is absolutely wrong as well.
> 
> I just checked your post history, wtf are you even talking about? You just said that my test setup is flawed and not the TPUs and then you left the conversation. So what links and proofs are you talking about, lol


It's the part where you lied, that I called you out on.
You're just... continuing to lie. About everything.

No one can prove you wrong or even discuss what you're saying because you're just throwing things out chaotically - It's like you're so worried about being wrong that you're just throwing out anything and everything you can, acting like people getting bored and ignoring you is the same as you winning this argument you've made up.


Honestly, I've had enough - especially since the previous infractions you decided never happened were about you trolling AMD reviews and news posts on this exact topic.

*Have a week off, think some things through*. Come back with an improved attitude or it'll be a longer holiday next time... and don't try making alternate accounts. It'll be pretty obvious who you are, when no one else makes the wild claims you do.


----------



## ratirt (Aug 13, 2022)

fevgatos said:


> That's nonsense. Are you saying that if the 12900k was limited to 125w it would be cheaper? Of course not. Therefore I'm not paying for the TDP, im paying for the CPU. This is the exact kind of arguments I have a problem with. They are completely meaningless. What does it even mean that you "pay for that performance". Makes no sense. Absolutely none. I bought the CPU for the ST and gaming performance, so even limited to 125w it performs exactly the same on those scenarios. I also paid to have a very efficient CPU, and when limited to 125w it's exactly that.


Seriously, read again, because I'm not saying that and I have no idea where you get this. BTW, the 12900T is cheaper and has a TDP of 110 W, but that is not the point here.
You are obviously paying for performance and then limiting that performance to match your TDP, but you still paid for the performance. It doesn't make sense to me, especially after your lame example about a dude buying a 12900K and switching to a 5950X over TDP. That was the stupidest example ever. Anyway, read again, because you simply twist everything around, or your comprehension of what's been said and what people are pointing out is very skewed.



Mussels said:


> It's the part where you lied, that I called you out on.
> You're just... continuing to lie. About everything.
> 
> No one can prove you wrong or even discuss what you're saying because you're just throwing things out chaotically - It's like you're so worried about being wrong that you're just throwing out anything and everything you can, acting like people getting bored and ignoring you is the same as you winning this argument you've made up.
> ...


I thought Aussies had everything 'upside down', but this dude here has got it upside down, reflected off a water surface, run through a prism and mirror-imaged. I literally can't follow his chain of thought, and I don't think it's a language barrier.


----------



## 95Viper (Aug 14, 2022)

Get on topic... discuss the tech facts, not members.
Stop the insulting remarks.
Don't drag out a back & forth argument for 3 or more pages, it ruins the threads.


----------



## RandallFlagg (Aug 15, 2022)

mahirzukic2 said:


> <snip>
> 
> But I doubt that would change the current state, which is that Zen 3 is on average 50% more efficient and about the same or worse efficiency in the worst case (heavily biased towards intel by running it in optimised mode).



That's demonstrably wrong outside of a handful of strictly multi-core workloads, which are typically (and erroneously) used by these sites to show *'worst case'* power situations.

PCWorld did a more thorough examination under more realistic scenarios than running Cinebench.  So has Igor's Lab.  

"*The problem with painting the entire 12th-gen chip with the broad brush of an all-core or single-core load is reality isn’t like that. *For the next results, we captured both systems running Puget System’s PugetBench Premiere Pro benchmark. Most assume Adobe Premiere Pro will hammer all of the CPU all of the time, but it’s really a mix of different CPU and different GPU tasks at work. It’s actually a little surprising, but instead of Ryzen 9 easily winning from its better all-core power efficiency, it’s pretty much dead even. In performance, the *Core i9 actually outscores the Ryzen 9 by 40 percent when the integrated graphics (IGP) are enabled and by 6 percent when it’s off.* *For this test, we show power consumption when the IGP is off.* Considering that Premiere is probably one of the more intensive applications a regular nerd will use, it tells us that those who insist the 12th-gen Core i9-12900K will be a “power hog” are vastly overstating the situation."

So with that, here's actual measured power draw over time; purple is AMD, the dark red is the Intel 12900K.

Gaming - 12900K wins on power efficiency, and is faster :












Adobe Lightroom and Photoshop, again the 12900K is more power efficient than the 5950X (purple is AMD, red Intel):







So what you are talking about is some kind of odd "proof" based on singular use cases, like Cinebench.  

So indeed, if your use case is to run renders all day long, 5950X is the chip for you.  

Real question, is that what you do with your PC?  

If not, why do you care about Cinebench?


----------



## Mussels (Aug 16, 2022)

"*Core i9 actually outscores the Ryzen 9 by 40 percent when the integrated graphics (IGP) are enabled and by 6 percent when it’s off.* *For this test, we show power consumption when the IGP is off."*

Yeah... because it's using the hardware encoding/decoding on the IGP? That's a pretty good example of misinterpreting the results.
That 6% win is the CPU performance difference.

As to what gamers do with their PCs?
They game. On anything but a 5950X.

And that's where the claim that AMD is more efficient at gaming still holds up.






Doesn't matter if it's power consumption over time, measured in never-ending tasks,

Or tasks that benefit from finishing the job faster






It's strange the lengths people go to to describe this stuff: no gamer needs or wants a 5950X or a 12900K/F.
The *gaming* performance difference between them is tiny.

When you're looking at options that perform within 6% (Per your claim)

or 1.5-10% (TPU's claims)






At low resolutions, when not GPU limited, Intel has a performance advantage.
That's when their power consumption goes up.
Anything that prevents the CPU from reaching those high turbo states and the high power consumption, also prevents that performance advantage. They happen together, or not at all.


The argument that "my 12th-gen Intel isn't bad on power consumption because it's GPU limited" is just so... strange.
Because you can get that exact same performance from a CPU that won't suddenly triple its power usage any time the CPU actually has work to do.

Anything beyond a 12600K is simply not power efficient by any metric: single threaded, multi threaded, or total consumption for a task like rendering.


You can get a 5600X or a 12600K, and unless you're running at 1080p 360 Hz with a 3090 Ti, the higher-end CPUs from AMD and Intel literally just throw away power for no gains.
Running GPU limited or with a frame cap reduces that, but if you *rely* on that you might as well underclock the CPU, because you're relying on a low enough load to automatically do it for you.

It's like buying a 3090 Ti, gaming at 720p 30 Hz with Vsync on, and claiming it's the most power-efficient GPU of all time.

If you don't care about cinebench, rendering or multithreaded workloads why would you buy anything greater than a 6 core CPU?


----------



## ratirt (Aug 16, 2022)

Mussels said:


> "*Core i9 actually outscores the Ryzen 9 by 40 percent when the integrated graphics (IGP) are enabled and by 6 percent when it’s off.* *For this test, we show power consumption when the IGP is off."*
> 
> Yeah... because it's using hardware encoding/decoding of the IGP? That's a pretty good example of misinterpreting the results.
> That 6% win, is the CPU performance difference.
> ...


Here is a good counterexample to those claims that the 12900K uses 50 watts while gaming.








From the readings here, in a few CPU-demanding games the power usage jumps to 140 W in some cases for the 12900K.
Not to mention the CPU is only at around 60% utilization (Death Stranding or Cyberpunk, for instance).


----------



## RandallFlagg (Aug 16, 2022)

ratirt said:


> Here is a good example of those claims that 12900K uses 50watts while gaming.
> 
> 
> 
> ...



Here's a good example, from the same channel using the same games, of the 12900K vs the 5950X, showing gaming to be a toss-up.

Most of the time you're getting 5% more FPS for 0-10% less power on the 12900K.
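Worth putting numbers on that: frames per watt is just FPS divided by package power, so the FPS and power deltas compound. A quick sketch (the FPS and wattage figures below are made-up round numbers for illustration, not from the video):

```python
def perf_per_watt(fps: float, watts: float) -> float:
    """Frames per second delivered per watt of package power."""
    return fps / watts

def relative_efficiency(fps_a: float, w_a: float,
                        fps_b: float, w_b: float) -> float:
    """How much more work per joule A does than B (1.0 = equal)."""
    return perf_per_watt(fps_a, w_a) / perf_per_watt(fps_b, w_b)

# "5% more FPS for 10% less power": 105 fps at 90 W vs 100 fps at 100 W
print(round(relative_efficiency(105, 90, 100, 100), 3))  # 1.167
```

So "5% more FPS for 10% less power" works out to roughly 17% better frames per watt, not 5%.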










Example:


----------



## neatfeatguy (Aug 16, 2022)

I'd like to say that I'm impressed by all the fanboy-ism here between what's slightly better in terms of performance/power usage of the 12900k and AMD's offerings, but all I see here is:






I don't even know what this thread is about anymore.....


----------



## ratirt (Aug 17, 2022)

RandallFlagg said:


> Here's a good example, from the same channel using the same games, of the 12900K vs the 5950X showing it being a toss up in gaming.
> 
> Most of the time you're getting 5% more FPS for 0-10% less power on the 12900K.
> 
> ...


That is exactly the problem with you Intel-forever guys. You can't just settle on one thought; you always have to expand and compare to AMD. It's as if someone making a remark about Intel burns you like a hot flame. You are comparing to a two-year-old processor, that's one thing. Second, a game's utilization is mediocre compared to any full load. That's your problem. You argue about gaming, and people have been saying their 12900K uses 50 W of power in a game. That depends on the game and on the workload: because the CPU's utilization is low in a game that isn't demanding enough for it, the power needed is low. Push the utilization higher and your power skyrockets. But you disregard that. The 12600K is a good CPU for gaming. What I'm saying here is that the 12900K in your example is not power efficient across the board, only in low-load scenarios, which this processor is not designed for. A few frames' difference isn't huge, and remember, you are comparing to a two-year-old CPU.
BTW, nice cherry-pick (here is mine): your 12900K is running at a conservative 4.9 GHz and does not boost to 5.2 GHz; it is limited. Let it boost to 5.2 and the 132 W will become 190 W or so.


----------



## fevgatos (Aug 20, 2022)

It's already established that Alder Lake >> Comet Lake > Zen 3 >> Rocket Lake in gaming efficiency. Zen 3 beats Rocket Lake and that's about it; it loses to everything else.


----------



## Arc1t3ct (Aug 23, 2022)

fevgatos said:


> Its already established that alderkale >> comet lake > zen 3 >> rocketlake in gaming efficiency. Zen 3 beats rocket lake and that's about it, it loses to everything else.


This!


----------



## fevgatos (Aug 24, 2022)

Mussels said:


> If you don't care about cinebench, rendering or multithreaded workloads why would you buy anything greater than a 6 core CPU?


Uhm, you realize you don't have to run the CPU at the stock 240 W, right?

Also, every review that tests gaming efficiency has Alder Lake far, far (and I mean FAR, like up to 70% far) ahead of Zen 3, except against the 3D.



RandallFlagg said:


> Here's a good example, from the same channel using the same games, of the 12900K vs the 5950X showing it being a toss up in gaming.
> 
> Most of the time you're getting 5% more FPS for 0-10% less power on the 12900K.


From your video there are games where the 5950X consumes 55% more power for similar performance (MS Flight Simulator, for example).


----------



## pavle (Aug 26, 2022)

If I may post a question on topic: why not 6.0 GHz? Are manufacturers hitting some sort of material limit (or is the chip too big)?
Ages ago some people had a Pentium 4 running at ~8.0 GHz on liquid nitrogen cooling (10 or more generations of Intel CPUs back).


----------



## HenrySomeone (Aug 26, 2022)

pavle said:


> If I may post a question on topic - why not 6.0 GHz, are manufacturers hitting some sort of material limit (or is it too big of a chip)?
> Ages ago some people had pentium4 running @ ~8.0GHz on liquid nitrogen cooling (10 or more generations of intel CPUs back).


Intel's Core i9-13900K Raptor Lake CPU outperforms Intel Core i9-12900K and Ryzen 9 5950X CPUs in new Benchmark    - TechnoSports
This seems to indicate that 6.0 GHz and even beyond will indeed be possible for the first time (without extreme cooling).


----------



## fevgatos (Aug 26, 2022)

pavle said:


> If I may post a question on topic - why not 6.0 GHz, are manufacturers hitting some sort of material limit (or is it too big of a chip)?
> Ages ago some people had pentium4 running @ ~8.0GHz on liquid nitrogen cooling (10 or more generations of intel CPUs back).


6 GHz on all cores? You can't cool the chip, especially now at 7 and 5 nm. 6 GHz on a single core? Alder Lake can hit 5.6 to 5.8 pretty easily, even on air cooling.


----------



## kapone32 (Aug 26, 2022)

HenrySomeone said:


> Intel's Core i9-13900K Raptor Lake CPU outperforms Intel Core i9-12900K and Ryzen 9 5950X CPUs in new Benchmark    - TechnoSports
> This seems to indicate that 6.0 and even over will indeed be possible for the first time (without extreme cooling).


If you have a 420 mm AIO and active cooling over the VRMs, maybe.


----------



## HenrySomeone (Aug 26, 2022)

kapone32 said:


> If you have a 420MM AIO and active cooling over the VRMs maybe.


Even if, that's a huge milestone; it's been 14 years since 5.0 GHz was doable (E8600).


----------



## fevgatos (Aug 26, 2022)

kapone32 said:


> If you have a 420MM AIO and active cooling over the VRMs maybe.


Why would the VRMs, of all things, require active cooling? Even affordable mobos have overkill VRMs.


----------



## InVasMani (Aug 26, 2022)

kapone32 said:


> If you have a 420MM AIO and active cooling over the VRMs maybe.



Which is fairly extreme, not to mention the motherboards won't be cheap or lightweight. Though it'll possibly be the first crack at it with conventional 24/7 cooling options.



fevgatos said:


> Why would the vrms out of all things require active cooling? Even affordable mobos have overkill vrms



Many of which do have active cooling, the EVGA Z690 CLASSIFIED for example.


----------



## Valantar (Aug 26, 2022)

pavle said:


> If I may post a question on topic - why not 6.0 GHz, are manufacturers hitting some sort of material limit (or is it too big of a chip)?
> Ages ago some people had pentium4 running @ ~8.0GHz on liquid nitrogen cooling (10 or more generations of intel CPUs back).


There are always limits, and they don't follow our human desires for things aligning with numbering or ordering systems - that's just life. Those extra 300MHz might not be attainable at all, or might require an inordinate amount of power, or might drive thermal density higher than what can reasonably be cooled, or something else. Just the fact that we're seeing stock clock speeds come close to 6GHz is damn impressive, and speaks to the capabilities of these new production processes - but they follow the rules of physics and the specific traits of the silicon design, and clocks have to be set accordingly. The luxury of that aligning with "round number good" thinking is rare and essentially random.


----------



## kapone32 (Aug 26, 2022)

fevgatos said:


> 6 ghz on all cores? You cant cool the chip, especially now at 7 and 5 nm. 6 ghz on a single core? Alderlake can hit 5.6 to 5.8 pretty easily, even on air cooling


Alder Lake or AMD, it is the same (depending on the application). The perceived differences are subjective, but a 10400F is about the best you can get in terms of price/performance, and Z490 is nice but it's two generations old. I could get a 12400F, but that would mean a new MB. Yes, B650 boards are not as expensive as they could be in some instances, but that is still the case. If Intel's 13th-gen chip is to achieve the performance you are touting, it would also need extreme cooling. The only problem is we don't know yet.



fevgatos said:


> Why would the vrms out of all things require active cooling? Even affordable mobos have overkill vrms


Do you have any idea what the power draw will be at 5.7 GHz? Is the TDP on the chip not 240 watts? There are some boards that already have it too. It doesn't matter how substantial the heatsink is: pulling 240 watts through the VRMs will produce heat, period.
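To put a rough number on that heat: the VRM stage itself only dissipates its conversion loss, not the full package power. A small sketch, assuming a ballpark 90% conversion efficiency (an assumption; real figures vary with load, phase count, and temperature):

```python
def vrm_loss_w(cpu_watts: float, efficiency: float = 0.90) -> float:
    """Heat dissipated in the VRM stage for a given CPU package power,
    assuming a fixed conversion efficiency (0.90 is a ballpark guess)."""
    return cpu_watts * (1.0 - efficiency) / efficiency

for load in (125, 240, 350):
    print(f"{load} W package -> ~{vrm_loss_w(load):.1f} W lost in the VRMs")
```

Under that assumption, a 240 W load leaves roughly 27 W spread across the power stages, which is why heatsink area and airflow over them still matter even when the phases themselves are rated far higher.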


----------



## HenrySomeone (Aug 26, 2022)

kapone32 said:


> Alderlake or AMD it is the same (depending on the application). The perceived differences are subjective but a 10400F is about the best you can get in terms of price/performance and Z490 is nice but it's 2 generations old. I could get a 12400F but that would mean a new MB. Yes B650 boards are not as expensive as they could be in some instances but that is still the case. If Intel can have the 13th chip achieve the performance you are touting it would also need extreme cooling. The only problem is we don't know yet.


Have you read the link? They are using an AIO, which might be considered "extreme" by some, but in this context that's a gross misuse of the term, since extreme means phase change / dry ice / LN2. It is enthusiast cooling, though, but that's to be expected, and AIOs aren't all that rare anymore anyway.



kapone32 said:


> Do you have any idea what the power draw will be at 5.7 GHZ? Is the TDP on the chip not 240 Watts? There are some borads that already have it too. It doesn't matter how substantial the heat sink would be pulling 240 Watts through the VRMs would produce heat period.


A $250 Tomahawk handles 240 watts just fine. Direct a 120 mm fan at the VRM heatsinks if you are using an AIO and it will handle 350 as well.


----------



## fevgatos (Aug 26, 2022)

kapone32 said:


> Alderlake or AMD it is the same (depending on the application). The perceived differences are subjective but a 10400F is about the best you can get in terms of price/performance and Z490 is nice but it's 2 generations old. I could get a 12400F but that would mean a new MB. Yes B650 boards are not as expensive as they could be in some instances but that is still the case. If Intel can have the 13th chip achieve the performance you are touting it would also need extreme cooling. The only problem is we don't know yet.
> 
> 
> Do you have any idea what the power draw will be at 5.7 GHZ? Is the TDP on the chip not 240 Watts? There are some borads that already have it too. It doesn't matter how substantial the heat sink would be pulling 240 Watts through the VRMs would produce heat period.


Even at 500w the mobo you would put a 13900k wouldnt have an issue. 240w tdp is easily handled by any z690. The cheapest z690 a pro from msi can handle it just fine

Also, because of the massive size of the chip, it's easy to cool 240 W. I have my 12900K on a U12A and it's doing just fine. The 13900K will be even easier to cool.



InVasMani said:


> Which is fairly extreme not to mention the MB's won't be cheap or light weight. Though it'll be possibly the first crack at it with conventional 24/7 cooling options.
> 
> 
> 
> ...


The motherboards with active cooling are the ones that need it the least.

I remember my Z690 Ace having active cooling while its VRMs were rated for 1800 amps.


----------



## InVasMani (Aug 26, 2022)

fevgatos said:


> Even at 500w the mobo you would put a 13900k wouldnt have an issue. 240w tdp is easily handled by any z690. The cheapest z690 a pro from msi can handle it just fine
> 
> Also because of the massive size of the chip its easy to cool 240w. I have my 12900k on a u12a and its doing just fine. The 13900k will be even easier to cool
> 
> ...



Stronger VRMs aid peak efficiency by distributing power delivery and heat. A good VRM setup generally means less VID voltage is needed for CPU stability, and in turn less heat. Why do you think boards aimed at extreme overclocking need better VRMs and active cooling? I mean, this is about pushing for a hypothetical 6 GHz on next-gen hardware; the VRMs are still going to get toasty in that scenario.

Are you trying to argue that a weaker entry-level board with a rubbish VRM design and a fan attached should be trying to OC to 6 GHz? I don't get it. Such a board will probably also come with weaker BIOS options and fewer PCB layers. Good luck with that compared to the person using a higher-end MB. Call me nuts, but I think the person with the better MB will have an easier time of it.

You make a good point on the size of the chip and cooling, versus concentrated heat. AMD needs to find a way to use the X3D cache to better disperse heat concentration, though the 5800X3D is actually really good on efficiency. I think the picture-frame approach is possibly the best way, along with a slower, larger TSV L4 cache underneath. They've got two options for that, inside-out or outside-in, and they could also invert it between connected chiplets for varied cache and chiplet designs. I can see that being a real ace in tandem with a big.LITTLE approach.

They could 3D-stack a smaller amount of L3 inside the 8 CPU cores of a chiplet, or a larger amount outside it, though a slower, larger L4 underneath all of it, mounted to like pictures on a wall, would make a lot of sense. I have my doubts that will happen with Zen 4, with stacked cache still being relatively new, but for Zen 5 maybe they've got enough time to consider the pros and cons of it.

If they go with a smaller stacked L3, latency will be better and less voltage will get in the way of frequency scaling for the L3. A larger L3 is good for capacity, though, so that too could be good depending on usage. Perhaps we'll see something akin to it with Zen 5. It might not be quite in the form of a picture/frame arrangement, but eventually I see it serving as a good interconnect ring/mesh cache bezel between chiplets, and a pretty flexible one.


----------



## fevgatos (Aug 26, 2022)

InVasMani said:


> Kind of stronger VRM's aid in peak efficiency by distributing power delivery and heat. A good VRM setup means less VID voltage for CPU stability generally speaking and in turn less heat. Why do you think that boards less aimed at extreme overclocking need better VRM's and active cooling!!? I mean this is about pushing for a hypothetical 6GHz on next gen hardware. The VRM's are still going to get toasty in that scenario.
> 
> I mean are you trying to argue a weaker entry level board should be trying to OC to 6GHz with fan attached and rubbish VRM design like I don't get it!!? The board itself will probably be coupled with weaker bios options as well and fewer PCB layers. Good luck with that compared to the person using a higher end MB. Call me nuts, but I think the person with the better MB will have a easier time at it.
> 
> ...


I don't know what you are replying to. All I'm saying is that even the cheapest Z690 will easily drive any CPU at whatever frequencies you are trying to achieve. Active VRM cooling is useless nowadays; you will have trouble cooling the CPU way before your VRMs can even get mildly hot.

That's even more true for AMD: how many amps can you cool on those tiny chiplets?


----------



## kapone32 (Aug 26, 2022)

fevgatos said:


> Even at 500w the mobo you would put a 13900k wouldnt have an issue. 240w tdp is easily handled by any z690. The cheapest z690 a pro from msi can handle it just fine
> 
> Also because of the massive size of the chip its easy to cool 240w. I have my 12900k on a u12a and its doing just fine. The 13900k will be even easier to cool
> 
> ...


We don't know what they will do, as no one has a 13900K to use yet. I know they have been doing overkill VRMs for the last few generations, but that still doesn't change the fact that a CPU pulling 500 W having no issues seems like a pipe dream, as that is a tremendous amount of heat generated in that die space.



fevgatos said:


> I dont know what you are replying to. All im saying is even the cheapest z690 will easily drive any cpu at whatever frequencies you are trying to achieve. Active vrm cooling is useless nowadays, you will have trouble cooling the cpu way before your vrms can even get mildly hot.
> 
> Thats even more true for amd, how many amps can you cool on those tiny chiplets?


Are you trying to tell me that this board could easily handle 500 W through the CPU?









ASRock Z690 Phantom Gaming 4 LGA 1700 DDR4 ATX Intel Motherboard - Newegg.com


----------



## fevgatos (Aug 26, 2022)

kapone32 said:


> We don't know what they will do as no one has a 13900K for use. I know that they have been doing overkill on VRMs for the last few generations but it still does not take away from the fact that you are talking about a CPU pulling 500W having no issues seems like a pipe dream as that is a trememdous amount of heat generated in that die space.


You'll have trouble cooling the CPU way before your VRMs need any attention. You can't really cool 300 W CPUs at normal ambient temperatures with any off-the-shelf method. I'm not even sure you can do it with custom loops unless you are putting the rads on the balcony.



kapone32 said:


> Are you trying to tell me that this board could easily handle 500W through the CPU.
> 
> 
> 
> ...


Are you telling me you are going to pair that mobo with a 13900K, and on top of that try to OC it? You are just not being reasonable.

The Z690 Pro A from MSI can handle 300 W just fine without any active cooling, and it's one of the cheapest Z690 mobos.

I don't know why you are stuck on 500 W; if you are making your CPU pull 500 W, your problem isn't the VRMs. It's cooling that chip, which is impossible.


----------



## kapone32 (Aug 26, 2022)

fevgatos said:


> Youll have trouble cooling the cpu way before your vrms will need any attention. You cant really cool 300w cpus at normal ambients with any off the shelf method. Im not even sure you can do it with custom loops unless you are putting rads on the balcony.
> 
> 
> Are you telling me you are going to pair that mobo with a 13900k, and on top of that try to oc it? You are just not being reasonable.
> ...


This is what you said: "Even at 500w the mobo you would put a 13900k wouldnt have an issue. 240w tdp is easily handled by any z690." I showed you a board, matching your "any" moniker, and you respond with a board from another vendor?


----------



## fevgatos (Aug 26, 2022)

kapone32 said:


> This is what you said "Even at 500w the mobo you would put a 13900k wouldnt have an issue. 240w tdp is easily handled by any z690." I showed you a board I provide you with "any" moniker and you respond with a board from another vendor?


Exactly. I said any mobo you would put a 13900K on. Is that a mobo you would? Why would you? It barely costs less than alternative options that are better.


----------



## kapone32 (Aug 26, 2022)

fevgatos said:


> Exactly. I said any mobo you would put on a 13900k. Is that a mobo you would? Why? It barely costs less than alternative options that are better.


No, you said any Z690. Please keep it above board. Unless you are an Intel engineer, you cannot tell me anything concrete about the 13900K, like that it will be easier to cool than the 12900K. Anyone who might know anything is most likely under NDA, so it doesn't matter.

Furthermore, this is a thread about the performance of AMD's next gen, so please take your thoughts to the Intel 13th-gen thread. In regards to the thread:

I feel the 7000-series chips will be faster than the 5000 series; by how much, I don't know. What I can say with confidence is that if my 5950X ran at 5.7 GHz on even a single core, I would be very happy. We are in a really interesting time, and both companies HAVE to try hard to get our dollars, so they will both have compelling products. AMD has to maintain the 15-20% increase in performance.


----------



## fevgatos (Aug 26, 2022)

kapone32 said:


> No you said any Z690. Please keep it above board.


No I didn't. You even quoted me and the statement is there. I said any mobo you would put a 13900K on. I said that specifically because I knew someone being wrong on the Internet would resort to tactics like yours, deliberately trying to find a product no one would buy to make his case.

You don't need to be an engineer to figure out the 13900K will be easier to cool. It's a given.


----------



## InVasMani (Aug 26, 2022)

fevgatos said:


> I dont know what you are replying to. All im saying is even the cheapest z690 will easily drive any cpu at whatever frequencies you are trying to achieve. Active vrm cooling is useless nowadays, you will have trouble cooling the cpu way before your vrms can even get mildly hot.
> 
> Thats even more true for amd, how many amps can you cool on those tiny chiplets?



Better VRMs keep the CPU running cooler by needing less voltage to remain stable. Hot VRMs aren't as good at remaining stable, and designs with fewer phases tend to require more vcore compensation to stay stable. So, in your words, not ours: don't be ridiculous. Hell, look at the VRMs on the AM4 EVGA DARK and the Alder Lake EVGA DARK; you see a difference of only 4 phases more. So take away 8 VRM phases from Intel, give AMD 4, and see how much easier it is to manipulate perceptions.

Hell, 4 to 8 VRM phases was enough to get a respectable LGA 775 C2Q overclock, on 65 nm no less, and the difference in efficiency between, say, a 4-phase and an 8-phase MB (usually something like 4+1 vs 8+2) was night and day. I mean, yes, cooling will almost always be the first major limitation to any overclock, but the second is typically the VRM design getting in the way, unless you brute-force it with obscure cooling methods.


----------



## Atomic77 (Sep 3, 2022)

Oh my freakin' gosh, this makes me feel really old and outdated. My HP laptop that I got just two years ago has a Ryzen 5 3500U in it.


----------

