# Intel Core i5-12600



## W1zzard (Mar 25, 2022)

The Intel Core i5-12600 doesn't have any E-cores, which makes it a fundamentally different processor from the Core i5-12600K, and thus very different from what the naming suggests. The Core i5-12600 is actually the fastest non-hybrid Alder Lake processor you can buy, but should you?

*Show full review*


----------



## Selaya (Mar 25, 2022)

wow, 4% uplift from the 12400, that's quite a lot
the price tag tho, and the lack of a 12600F SKU monkaS


----------



## Anymal (Mar 25, 2022)

The 12400F under 200 EUR is the one for a modest gaming PC. Until the 5600 non-X comes, we'll see.


----------



## mechtech (Mar 25, 2022)

Beautiful review, the only thing missing is the Zen 1700 CPU for comparison (kind of like how the RX 570/580 made a comeback in GPU reviews).


----------



## no.taboo (Mar 26, 2022)

What settings were used for the cyberpunk 2077 benchmarks? They seem anomalous.


----------



## InVasMani (Mar 26, 2022)

Does the iGPU portion of these Alder Lake chips support DSC? Handing the display output to the iGPU (via the duplicate-displays option) and letting it drive my display would be a nice perk if I could use that as a way to get DSC: the render from my discrete GPU, which lacks DSC, would be fed to the iGPU, and the iGPU would compress it on output. The display itself is DP 1.4 HBR3 capable, so this would be a good way to salvage its usefulness through the iGPU.


----------



## MachineLearning (Mar 26, 2022)

Not "To E, or not to E?" 

Great review as always.


----------



## Space Lynx (Mar 26, 2022)

if they had a 12600f non-k for like $220, I would buy it instantly. fuck e-cores. oc'ing doesn't interest me. but on the same hand i don't want something like the 12700f, which has like a base clock of 2.1 ghz... i don't know how well that boosts in older games... so really my only option i guess is the 12600kf since i am doing a budget build... and maybe the MSI Pro B660m for $140... and some cheap ddr4 3200 ram. combine that with my 6700 xt... and away I go...

I mean Intel is getting 20-30 fps gains in some games at 1080p, which is what I intend to game at. so that is important to me... I doubt the 5800X3D's cache really changes the game all that much.


----------



## Deleted member 202104 (Mar 26, 2022)

CallandorWoT said:


> if they had a 12600f non-k for like $220, I would buy it instantly.  fuck e-cores.  oc'ing doesn't interest me.  but on same hand i don't want something like the 12700f, which is like a base clock of 2.1 ghz... i don't know how well that boost in older games... so really my only option i guess is 12600kf since i am doing a budget build... and maybe the MSI Pro B660m for $140... and some cheap ddr4 3200 ram.  combine that with my 6700 xt... and away I go...
> 
> I mean Intel is getting 20-30 fps gains in some games at 1080p which is what I intend to game at. so that is important to me... I doubt the 5800x 3d cache really changes the game all that much.



Just grab a 5600x for $199 and any b550 board.









AMD Ryzen 5 5600X 6-Core 3.7 GHz AM4 CPU Processor - Newegg.com
Buy AMD Ryzen 5 5600X - Ryzen 5 5000 Series Vermeer (Zen 3) 6-Core 3.7 GHz Socket AM4 65W Desktop Processor - 100-100000065BOX with fast shipping and top-rated customer service. Once you know, you Newegg!
www.newegg.com
				




You've got another hour and 40 minutes until the sale's over.


----------



## Space Lynx (Mar 26, 2022)

weekendgeek said:


> Just grab a 5600x for $199 and any b550 board.
> 
> 
> 
> ...



damn nice find!


----------



## jesdals (Mar 26, 2022)

There have been a lot of nice offers on the 12500 in Denmark; at times it's cheaper than the 12400.


----------



## Turmania (Mar 26, 2022)

Initial pricing is, I think, too close to 12600K range. I believe it will go down to 220-ish in a short while. Then it would make sense.


----------



## BigBonedCartman (Mar 26, 2022)

Another year another Intel Gimmick!

Gear Modes?

E-Cores?

***EyeRoll***


----------



## Deleted member 24505 (Mar 26, 2022)

However much hate the E cores get, the 12700K was best in almost every graph; guess which chip I bought. No regrets. 

Either run with the E cores enabled or disabled, imo it's still good either way, as this review shows.


----------



## W1zzard (Mar 26, 2022)

Tigger said:


> However much hate for E cores, the 12700k was best in almost every graph, guess which chip i bought. No regrets.
> 
> Either run with the E cores enabled or disabled imo still good either way as this review shows.


Yeah the data here conclusively shows that you do want the E-Cores, especially if it's only a few dollars extra


----------



## bug (Mar 26, 2022)

W1zzard said:


> Yeah the data here conclusively shows that you do want the E-Cores, especially if it's only a few dollars extra


It depends, really. My E cores are disabled because I'm running Win10 and Linux will only support the new architecture starting with 5.18. Even then, it remains to be seen how well it can handle priorities.

But I get what you're saying: you do want the E-cores, because they bring crunching power; they're not anemic. At the same time, just look at the 12600K in your own wPrime benchmark to see what happens when the workload lands on the wrong core.


----------



## thelawnet (Mar 26, 2022)

Why does the PL-removed 12600 use more power than the overclocked 12600? 

Also like:

* 12400 no PL = 4.4 GHz = 101.4
* 12600 no PL = 4.8 GHz (+9.1%) = 105.3/95.3/1.014 = +9.0%
* 12600 OC = 5.0 GHz (+4.2%) = 112.7 / 105.3 = +7.0%.

So why does the 9.1% extra MHz give 9.0% more performance but the 4.2% extra MHz give 7.0%?

Is there some difference between 5.0GHz 'OC' and 5.0 GHz 'pl disabled'?

Also, it would be nice to see something about the dual encoding engines in the UHD 770; I don't think anyone has tested the impact of those.
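The scaling comparison above can be sketched in a few lines. This is a minimal illustration using only the frequencies and scores quoted in this post; the `pct_gain` helper is my own naming, not anything from the review:

```python
# Relate a clock-speed bump to a measured score bump, using the figures
# quoted above: 12600 no-PL at 4.8 GHz scoring 105.3, and 12600 OC at
# 5.0 GHz scoring 112.7.
def pct_gain(new: float, old: float) -> float:
    """Percentage gain of `new` over `old`."""
    return (new / old - 1) * 100

clock_gain = pct_gain(5.0, 4.8)      # extra frequency, ~4.2%
perf_gain = pct_gain(112.7, 105.3)   # extra performance, ~7.0%

# Performance scaling better than frequency alone suggests the OC run
# differs in more than core clock (voltage, ring clock, power limits...).
print(f"clock +{clock_gain:.1f}%, perf +{perf_gain:.1f}%")
```

If the OC and PL-disabled configurations really were identical at 5.0 GHz, you'd expect the perf gain to track the clock gain much more closely, which is the commenter's question.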


----------



## Valantar (Mar 26, 2022)

Is it just me or does this SKU feel a bit ... redundant? I mean, sure, it outperforms the 12400 ... barely. I know Intel are the absolute masters of incremental product segmentation, and I guess the answer to the question of "why does this SKU exist?" is "it's $30 more than a 12400 yet costs the same to produce", but ... well, I guess it makes sense when you're operating on the scale of Intel. Still seems unnecessary to me.


Quite interesting to see just how much faster the 12600K manages to be though. A mostly irrelevant difference in gaming, but definitely there in productivity. Those E cores definitely help in heavily threaded tasks that can make use of them (or for keeping lightweight background tasks out of the way of higher performance ones).


----------



## W1zzard (Mar 26, 2022)

thelawnet said:


> Why does the PL-removed 12600 use more power than the overclocked 12600?


I think my manual OC voltage is lower than what the CPU runs at default


----------



## Assimilator (Mar 26, 2022)

Valantar said:


> It is just me or does this SKU feel a bit ... redundant? I mean, sure, it outperforms the 12400 ... barely. I know Intel are the absolute masters of incremental product segmentation, and I guess the answer to the question of "why does this SKU exist?" is "it's $30 more than a 12400 yet costs the same to produce", but ... well, I guess it makes sense when you're operating on the scale of Intel. Still seems unnecessary to me.
> 
> 
> Quite interesting to see just how much faster the 12600K manages to be though. A mostly irrelevant difference in gaming, but definitely there in productivity. Those E cores definitely help in heavily threaded tasks that can make use of them (or for keeping lightweight background tasks out of the way of higher performance ones).


But... but... E-CORES ARE A HACK AND INTEL IS STEALING FROM US

(or whatever nonsense the AMD fanbois in this forum use to justify hating E-cores).


----------



## Valantar (Mar 26, 2022)

Assimilator said:


> But... but... E-CORES ARE A HACK AND INTEL IS STEALING FROM US
> 
> (or whatever nonsense the AMD fanbois in this forum use to justify hating E-cores).


Lol, I haven't seen too many of those opinions luckily, but then I haven't been spending as much time on the forums recently. Me, I'm looking forward to seeing how e-cores add up for mobile, particularly in those 2P+nE chips. Hybrid CPUs of various kinds are likely the way of the future, so while ADL is definitely imperfect, it still does a lot of things right.


----------



## InVasMani (Mar 26, 2022)

Assimilator said:


> But... but... E-CORES ARE A HACK AND INTEL IS STEALING FROM US
> 
> (or whatever nonsense the AMD fanbois in this forum use to justify hating E-cores).


It's more like the Intel high-refresh crowd. The AMD fans embraced Ryzen and multi-core long before E-cores even arrived.



Valantar said:


> Lol, I haven't seen to many of those opinions luckily, but then I haven't been spending as much time on the forums recently. Me, I'm looking forward to seeing how e-cores add up for mobile, particularly in those 2P+nE chips. Hybrid CPUs of various kinds are likely the way of the future, so while ADL is definitely imperfect, it still does a lot of things right.


I agree that hybrid chips will certainly be ironed out and more integrated into designs moving forward. The harder and more complicated node shrinks become, the more workaround solutions will make sense. The low-hanging fruit is disappearing. I think what needs to happen next with this type of big.LITTLE chiplet approach from either AMD/Intel is assigning one chiplet or the other to be treated by the OS itself as a foreground or background chiplet, like with processor scheduling for CPU time slices. Combining TSVs with 3D stacked cache for the L2/L3 cache between two chiplets of different core counts could be interesting too. 

Having separate BCLK clocks for each chiplet would be a nice step too. It would enable different memory frequencies for each, and in turn that could lead to efficiency gains. It could enable the use of a different SPD/JEDEC timing profile and memory clock frequency for each chiplet. That's pretty cool given they can vary in terms of frequency, timings, and voltages, so it would lead to better heat and efficiency management between chiplets.


----------



## PapaTaipei (Mar 26, 2022)

Great review. I got a 12600K, no OC; best upgrade I've ever made so far. Had a 6600K before.


----------



## thelawnet (Mar 27, 2022)

Valantar said:


> It is just me or does this SKU feel a bit ... redundant? I mean, sure, it outperforms the 12400 ... barely. I know Intel are the absolute masters of incremental product segmentation, and I guess the answer to the question of "why does this SKU exist?" is "it's $30 more than a 12400 yet costs the same to produce", but ... well, I guess it makes sense when you're operating on the scale of Intel. Still seems unnecessary to me.
> 
> 
> Quite interesting to see just how much faster the 12600K manages to be though. A mostly irrelevant difference in gaming, but definitely there in productivity. Those E cores definitely help in heavily threaded tasks that can make use of them (or for keeping lightweight background tasks out of the way of higher performance ones).



It's surely less redundant than the 10600, which differed ONLY from the 10600K in being 4.8 GHz locked, rather than 4.8 GHz unlocked. 

Here the SKUs are:

* i3-12100 - 4.3 GHz, sensible chip
* i3-12300 - 4.4 GHz, uh.....
* i5-12400 - 4.4 GHz - the popular chip
* i5-12500 - 4.6 GHz, bigger IGP
* i5-12600 - 4.8 GHz
* i5-12600K - 4.9 GHz, more cores, unlocked

Here there's a BIG difference to the 12600K; the only question is price.
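As a rough illustration of how incremental those steps are, the boost clocks quoted in the list above can be compared pairwise. This sketch deliberately looks at clocks only, ignoring core counts and iGPU differences, which is exactly the simplification being debated in this thread:

```python
# Boost clocks (GHz) for the stack, as listed in the post above.
boost = [
    ("i3-12100", 4.3),
    ("i3-12300", 4.4),
    ("i5-12400", 4.4),
    ("i5-12500", 4.6),
    ("i5-12600", 4.8),
    ("i5-12600K", 4.9),
]

# Step-to-step clock increase through the stack, in percent.
steps = {
    f"{lo} -> {hi}": round((fhi / flo - 1) * 100, 1)
    for (lo, flo), (hi, fhi) in zip(boost, boost[1:])
}

for name, pct in steps.items():
    print(f"{name}: +{pct}%")
```

Every step is a low-single-digit clock bump; the 12600K stands apart only because of the extra E-cores and unlocked multiplier, not its clocks.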


----------



## arandomguy (Mar 27, 2022)

What was the actual memory configuration used for the tests in this review? Unless I'm missing something the only mention is in the test bed section which lists both a DDR5 and DDR4 configuration but it isn't specified which one is actually used.

Also, are all the Alder Lake comparison points using the DDR5 configuration? That seems to be the case, matching against the 12300 review (the results match its DDR5 data set). If so, can this information be specified in future content?

On this topic, I'm wondering whether having DDR5 (if that is the case) as the standard comparison point is really the best choice for CPUs like this. Are people looking to buy the 12600 and lower (such as the 12300) really considering the premium of DDR5 over DDR4? At least to me, it doesn't seem to make sense to buy in this segment for cost reasons while at the same time paying a much higher premium for memory.


----------



## AnarchoPrimitiv (Mar 27, 2022)

Wait, so this chip is only 4% ahead of the 5600X? So does that mean the P-cores have only a very small IPC uplift compared to Zen 3?


----------



## Valantar (Mar 27, 2022)

thelawnet said:


> It's surely less redundant than the 10600, which differed ONLY from the 10600K in being 4.8 GHz locked, rather than 4.8 GHz unlocked.


As well as TDP, while delivering the same core count, making it a similar but lower power option. That's a rather different situation than this - and is how Intel has operated for a decade. K SKU at a high TDP, non-K SKU that is otherwise very similar but with a lower base clock and lower TDP. This overturns that in favor of ... incremental improvements on lower end SKUs while co-opting the name of a SKU with 4 more cores? That's just weird. Rather than "buy K if you want performance, can deal with the power, and want to OC; buy non-K for lower power (and a bit lower performance)", it's now "buy K if you want performance, can deal with the power, and want to OC; buy non-K for ... an entirely different product that barely stands apart from its two lower priced, lower tier siblings".


thelawnet said:


> Here the SKUs are:
> 
> * i3-12100 -4.3 GHz, sensible chip
> * i3-12300 -4.4 GHz, uh.....
> ...


A listing like this without taking core counts into account is ... rather weird, no? i3s are 4c8t, i5s are either 6c12t or 10c16t. But as I said, Intel are the masters of incremental product segmentation. And I fully agree that the i3-12300 doesn't make any more sense than this chip - but then, that isn't what is reviewed here, is it? IMO, having three non-K i5s is pretty dumb overall. Arguably one would have been sufficient - but again, that brings us back to "well, with three there are two that are a tad more expensive". Which always seems to be Intel's game. But regardless, this SKU is pretty much redundant in terms of performance, barely outperforming the cheaper 12400, while being drastically different from (and significantly weaker than) the K SKU it shares a name with.


----------



## Selaya (Mar 27, 2022)

i mean the entire ADL segmentation is just a total fucking trainwreck.
the 12600 doesn't have E-cores. the 12600K does.
the 12700 does have E-cores. as does the 12700K.
???????????????


----------



## Valantar (Mar 27, 2022)

Selaya said:


> i mean the entire ADL segmentation is just a total fucking trainwreck.
> the 12600 doesn't have E-cores. the 12600K does.
> the 12700 does have E-cores. as does the 12700K.
> ???????????????


Yep. They could at least have kept it consistent: same number = same hardware features. Just call this the 12500, ditch one of the other unnecessary 6P SKUs, and things would be a lot more coherent.


----------



## piloponth (Mar 27, 2022)

Sooo, it's been months since the 12th gen release.
Are there any B660 boards with ext. clock generator, and ideally with DDR4 support?


----------



## chrcoluk (Mar 27, 2022)

Compared manually to the 12900K review: when you consider price, TDP, and the brokenness of the big.LITTLE design, this is easily the best Alder Lake chip.

Also, why is H670 not more common? From a value point of view it's probably the best chipset, or is it only slightly cheaper than Z690? Customers may not care about CPU overclocking; H670 does away with it but keeps the PCIe lanes.

If I were upgrading today on the Intel platform it would be the 12600 and H670 combo, as I wouldn't be wasting money on E cores or an excessive VRM on the board, since I'm not overclocking.

Those are my thoughts, thanks for the review.


----------



## thelawnet (Mar 27, 2022)

Valantar said:


> As well as TDP, while delivering the same core count, making it a similar but lower power option. That's a rather different situation than this - and is how Intel has operated for a decade. K SKU at a high TDP, non-X SKU that is otherwise very similar but with a lower base clock and lower TDP. This overturns that in favor of ... incremental improvements on lower end SKUs while co-opting the name of a SKU with 4 more cores? That's just weird. Rather than "buy K if you want performance, can deal with the power, and want to OC; buy non-K for lower power (and a bit lower performance)", it's now "buy K if you want performance, can deal with the power, and want to OC; buy non-K for ... an entirely different product that barely stands apart from its two lower priced, lower tier siblings".
> 
> A listing like this without taking core counts into account is ... rather weird, no? i3s are 4c8t, i5s are either 6c12t or 10c16t. But as I said, Intel are the masters of incremental product segmentation. And I fully agree that the i3-12300 doesn't make any more sense than this chip - but then, that isn't what is reviewed here, is it? IMO, having three non-K i5s is pretty dumb overall. Arguably one would have been sufficient - but again, that brings us back to "well, with three there are two that are a tad more expensive". Which always seems to be Intel's game. But regardless, this SKU is pretty much redundant in terms of performance, barely outperforming the cheaper 12400, while being drastically different from (and significantly weaker than) the K SKU it shares a name with.



Uh? 

The i5-10600 is rated at 65W ('TDP') at all-core base frequency. 
So was the i3-10100.

This number is essentially a fiction, with fiddled frequencies to meet the target. Hence the i3 went as high as 4x3.9 GHz, whereas the i5 had a base of 6x3.3 GHz (or less).
Alder Lake has up to 3.5 GHz for the 4-core (TDP now reduced to 60W), and 3.3 GHz for the 6-core.

These numbers are not very meaningful, in that if someone, let's say ASRock, created a board that was incapable of dealing with the arbitrary 'TDP' of a K chip (125W), then people would scream that it is terrible.
That TDP is e.g. 8 x 3.6 GHz + 4 x 2.7 GHz, as on the 12700K.

Intel now provides a second, more useful 'TDP', which for the non-K i5s is 117W. This is therefore 6 x 4.8 GHz. 

Meanwhile the 12600K is up to 6 x 4.9 + 4 x 3.6 = 150W.

The only problem with Intel's numbers is that they aren't SKU-specific.

I believe the 12600 uses every bit of that 117W, whereas the 12400F can run all day at 4.4 GHz on 75W or less.



Valantar said:


> As well as TDP, while delivering the same core count, making it a similar but lower power option. That's a rather different situation than this - and is how Intel has operated for a decade. K SKU at a high TDP, non-X SKU that is otherwise very similar but with a lower base clock and lower TDP. This overturns that in favor of ... incremental improvements on lower end SKUs while co-opting the name of a SKU with 4 more cores? That's just weird. Rather than "buy K if you want performance, can deal with the power, and want to OC; buy non-K for lower power (and a bit lower performance)", it's now "buy K if you want performance, can deal with the power, and want to OC; buy non-K for ... an entirely different product that barely stands apart from its two lower priced, lower tier siblings".
> 
> A listing like this without taking core counts into account is ... rather weird, no? i3s are 4c8t, i5s are either 6c12t or 10c16t. But as I said, Intel are the masters of incremental product segmentation. And I fully agree that the i3-12300 doesn't make any more sense than this chip - but then, that isn't what is reviewed here, is it? IMO, having three non-K i5s is pretty dumb overall. Arguably one would have been sufficient - but again, that brings us back to "well, with three there are two that are a tad more expensive". Which always seems to be Intel's game. But regardless, this SKU is pretty much redundant in terms of performance, barely outperforming the cheaper 12400, while being drastically different from (and significantly weaker than) the K SKU it shares a name with.



For Comet Lake and Rocket Lake you had non-K SKUs with a low base frequency and a low TDP. Since most people just buy a board which ignores the TDP, we now have two TDPs: the same one from Comet/Rocket Lake, and one at full turbo, maxed chip. 

The i5-12600K should be the i6 or something; it is quite confusing.

I assumed it was obvious that the i3 and i5 are different C/T, so I didn't list that. 

Again, clearly there is more justification for having multiple i5 chips now than in the past, in that:

* 12500 vs 12400 is a totally different GPU (32 vs 24 EUs, 2 encoding engines vs 1)
* 12600K vs 12600 is a totally different CPU (6+4 cores vs 6)

whereas in the past it was literally only down to clock speeds, and the TDP you cite could be changed with BIOS settings anyway, so not significant.


----------



## qubit (Mar 27, 2022)

I really don't like that hybrid design and this chip solves that, with compromises.

I'd always intended to upgrade my aged but trusty 4-core 2700K, top of the range in its day, to at least an 8-core CPU with all cores being performance cores, and likely top of the range too. However, this is hardly top of the range and has 6 cores, which doesn't sit that well with me. The only way to get all P-cores from Intel now is to buy the top model 12900K and disable the E cores, which is a lot of performance and money to throw away, so I won't go for that option. On top of that, the only performance metric I'm interested in is gaming, and there's hardly any difference between this and the 12900K at 720p, let alone higher resolutions, so it's a lot of extra money for not much more performance. I like the low temps of the 12600 at stock, too.

Therefore, if I was upgrading right now, I might get the 12600 and be done with it, compromises and all. However, it's still not pressing for me to upgrade, especially as I'm not gaming so much lately and the next gen Intel CPUs are due sometime this year too, so I'll see what they have to offer. My 2700K still feels perfectly snappy on the desktop, so not much pressure to upgrade there. It does show its age in games though and that's my main driver for upgrading, not Windows 11 with its annoying rounded corners.

Of course, my system's days alas are numbered. Assuming such an old system doesn't suffer hardware failure, that crunch point is 14.10.2025, when Microsoft ends Windows 10 support, so I've got three years and change left in it, but it's not really that long overall. However, the hardware landscape will be dramatically different then, including available graphics cards. I'm sure that NVIDIA will have hit the reassuringly unaffordable 5 grand price point by then for its top cards lol.

Great review, as always.


----------



## bug (Mar 27, 2022)

qubit said:


> I really don't like that hybrid design and this chip solves that, with compromises.


It could be a godsend for mobile. For desktops... it's a solution looking for a problem.
I can see how it makes sense from an engineering point of view (i.e. get almost Skylake levels of performance from a tiny piece of the whole die). But that doesn't mean it's an automatic win for the end users. If anything, it's a net loss on the desktop, since the only difference from a homogeneous design is that now your work can end up on an E core and take more time to finish. The only time this design helps is when you saturate all cores, and then the E cores add a little extra HP on top. But honestly, how often do you saturate 12 or 16 threads?

Edit: Good thing the P-cores are good enough on their own, though.


----------



## chrcoluk (Mar 27, 2022)

Think we might get a 12700 non k model? 8 P cores no E cores.


----------



## qubit (Mar 27, 2022)

bug said:


> It could be a God-send for mobiles. For desktops... it's a solution looking for a problem.
> I can see how it makes sense from an engineering point of view (i.e. get almost Skylake levels of performance from a tiny piece of the whole die). But that doesn't mean it's an automatic win for the end users. If anything, it's a net loss on the desktop since the only difference from an homogeneous design is that now your work can end up on an E core and take more time to finish. The only thing this design improves is when you saturate all cores and the E cores a little more extra HP on top of that. But honestly, how often do you saturate 12 or 16 threads?
> 
> Edit: Good thing the P-cores are good enough on their own, though.


Exactly, E cores are for laptops, not desktops; there it might make sense to keep power consumption down under most scenarios.

However, I don't see the 12900 as 8 P cores with a bonus 8 E cores, but as a 16-core CPU with 8 of them crippled, especially as they don't even support hyperthreading, and that's what I don't like about it.

Intel have been clever with their marketing here. Notice how one can buy a 6-core CPU with no on-die E cores, but not an 8-core one, when it would be trivial to make. This strikes me as preventing it from cannibalizing sales of their high-end CPUs, as 8 cores with HT are a potent mix, and it certainly isn't for the benefit of the customer.

I can just see the Intel apologists coming on here and attempting to counter my point with vitriol. Let's see if that happens...


----------



## ThrashZone (Mar 27, 2022)

Hi,
E cores are just thermally defective cores which Intel used to bin out but now use.


----------



## newtekie1 (Mar 27, 2022)

It annoys me that the i5-12600 doesn't have the same core configuration as the i5-12600K. IMO, they should always have the same core configuration if they have the same model number +/- the K. The clock speed can be slightly different, but the core configuration should be the same.



ThrashZone said:


> Hi,
> E cores are just thermal defective cores which intel used to bin out but now use


That isn't true at all. The E cores are designed completely differently than the P cores.



bug said:


> it's a solution looking for a problem.


It's a solution to a problem that actually does currently exist. Governments, in their never-ending fight to reduce energy usage, are implementing regulations that put a power limit on computers sitting idle or under light load, even desktops. So the E-cores allow more powerful P-cores (and more of them) while still keeping the computer under the power limits. It's stupid on the government's side, but that's an entirely different discussion. And the new laws have already made some PC manufacturers pull certain high-end models of their computers from the markets those laws cover.

In an ideal world, Intel would have a processor that is all P-cores, as many of them as they could fit in the die space. I figure if they replaced the E-cores with P-cores, they could have a 10 or 12 core 12th gen CPU with all P-cores. Maybe call this beast the i9-12950K. This CPU would only be available in OEM systems outside those jurisdictions with the strict energy laws. But there's also nothing stopping someone in those areas from upgrading their computer themselves with this processor. Of course this won't happen, because it doesn't make a lot of business sense for Intel. It's another die they have to design and production-test, and they probably wouldn't make much money off of it.


----------



## qubit (Mar 27, 2022)

@newtekie1 Green lobby strikes again.


----------



## Valantar (Mar 27, 2022)

thelawnet said:


> Uh?
> 
> The i5-10600 is rated at 65W ('TDP') at all-core base frequency.
> So was the i3-10100.


Uh ... where did that i3-10100 come from? I responded to your out-of-the-blue mention of the i5-10600, which, in case you forgot, you brought up as a response to me saying the 12600 seems kind of redundant. I honestly don't see what you're getting at at all here. Have I complained about the TDP of this chip? No, I've complained that it doesn't make sense at this point in the product stack, doesn't fit the 12600 name (in light of the spec differences vs. the K), and that this just follows Intel's habit of having way too many SKUs. Whether previous i5s and i3s shared a TDP is entirely irrelevant to that point.


thelawnet said:


> This number is essentially a fiction with fiddled frequencies to meet the target.


You see that this sentence is contradicting itself, right? Yes, TDP is very different from actual power targets. I've never said anything even relating to that. But those "fiddled frequencies" are precisely why they can give chips different TDPs, and why at any reasonably low TDP a higher core count chip will have a lower base clock. Which is a meaningful distinction, as how much you're able to cool is highly variable between different PCs.


thelawnet said:


> Therefore the i3 went as high as 4x3.9 GHz, whereas the i5 had a base of 6x3.3 GHz (or less).
> Alder Lake has up to 3.5 GHz for the 4 core TDP now reduced to 60W, and 3.3 GHz for the 6 core.


Again, apparently I have to remind you here: I made a statement about the 12600 seeming rather redundant. You responded to that with a list of SKUs and frequencies, which ... I assume was supposed to contradict that somehow? But which also for some reason left out some of the major differentiating factors between those SKUs, making the list rather useless overall - and still it didn't bring any clarity to why there needs to be 3 6c12t i5 SKUs, or why this SKU shares a name with a 10c16t chip for some reason.


thelawnet said:


> These numbers are not very meaningful in that if someone, let's say Asrock, created a board, that was incapable of dealing with the  arbitrary 'TDP' of a K chip (125W), then people scream that it is terrible.
> That TDP is e.g. 8 x 3.6 GHz + 4 x 2.7 GHz, as on the 12700K


.... relevance? If a bad motherboard has a bad VRM, does that affect whether or not a CPU SKU is unnecessary?


thelawnet said:


> Intel now provide a second more useful 'TDP', which for the non-K i5s is 117W. This is therefore 6 x 4.8 GHz.


Again: Yes, but relevance? I only brought up TDPs because you failed to account for them as a differentiating factor between the two chips _you_ brought up in order to contradict _my_ point.


thelawnet said:


> Meanwhile the 12600K is up to 8 x 4.9 +  4 x 3.6 = 150W
> 
> The only problem with Intel's numbers is that they aren't SKU-specific.
> 
> I believe the 12600 uses every bit of that 117W, whereas the 12400F can run all day at 4.4 GHz on 75W or less.


I don't know what river you're paddling up currently, but it bears no relation to what I was saying, nor my response to you. Why are we discussing TDPs? My point was about _product segmentation_. TDPs play into that, but they are one of many variables, and discussing TDP alone gets us nowhere. Also, do I need to remind you that my first post here said that "Intel are the absolute masters of incremental product segmentation" - does that somehow imply that this hasn't been true up until now? It should really be plenty clear that this isn't new - I'm simply pointing out that this is a particularly egregious example of it.


thelawnet said:


> For Comet Lake and Rocket Lake you had non-K SKUs with low base frequency and a low TDP.


This has literally been how Intel SKUs have worked since Skylake, though arguably since Sandy Bridge: for any model number, a K SKU is unlocked, higher clocked, and might have a higher TDP than the non-K SKU, but they were the same hardware and were configured very similarly outside of this. They've now broken with this system, for no good reason beyond making three near-identical i5 SKUs that perform within a few % of each other. I find that worthy of pointing out.


thelawnet said:


> Since most people just buy a board which ignores the TDP, we now have two TDPs, the same one from Comet/Rocket Lake, and one at full turbo, maxed chip.


Yes, that's a good thing. But also entirely irrelevant to this discussion. Whether previous TDP figures were nonsense or not (they mostly were), _I'm talking about product segmentation_.


thelawnet said:


> The i5-12600k should be the i6 or something, and it is quite confusing.


No, the i5-12600 non-K should have been a 10c16t chip or not have existed at all. Then they could have had a couple of 6c12t i5s below that and this would have been a lot less messy.


thelawnet said:


> I assumed it was obvious that the i3 and i5 are different C/T so didn't list that.


So what was the point of the list? To say that different chips have clock frequencies a few % apart from each other? I don't see how that in any way refutes my point about this being a redundant SKU.


thelawnet said:


> Again, clearly there is more justification for having multiple i5 chips in the past in that:
> 
> 12500 vs 12400 is a totally different GPU (32 vs 24 EUs, 2 encoding engines vs 1)
> 12600k vs 12600 is a totally different CPU (6+4 cores vs 6)


"totally different GPU" - but they're both uselessly slow, so ... who cares? Sorry, but that's not a meaningful differentiator for anyone outside of perhaps a few digital signage OEMs. Nobody in the world cares whether their desktop CPU comes with a 24- or 32-EU Intel iGPU. Xe is better than their previous stuff, and can compete with Vega when the drivers work, but they generally don't, so that point is moot. This just underscores my point of Intel producing a ton of useless SKUs for no good reason.


thelawnet said:


> whereas in the past it was literally only down to clock speeds and the TDP you cite could be done with BIOS settings anyway so not significant.


... which is kind of my point, no? That Intel is creating an ever-increasing number of undifferentiated SKUs that have no meaningful differences? I mean, you're actually here making excuses for them ("this one has a marginally faster iGPU!"). You're also acting as if this segmentation isn't _entirely_ by choice. There's nothing forcing Intel to have three non-K i5 SKUs whatsoever - and again, nobody cares about that iGPU. Nobody. If this 12600 was called the 12500 and the 12500 didn't exist, things would look a lot more sensible.



newtekie1 said:


> It annoys me that the i5-12600 doesn't have the same core configuration as the i5-12600K. IMO, they should always have the same core configuration if they have the same model number +/- the K. The clock speed can be slightly different, but the core configuration should be the same.


Yep. Breaking this system is beyond stupid.


newtekie1 said:


> That isn't true at all. The E cores are designed completely different than the P cores.


Again, entirely true. No idea what @ThrashZone is on about here, but E cores are an entirely different architecture than P cores.


newtekie1 said:


> It's a solution to a problem that does actually currently exist. Governments, in their never ending fight to reduce energy usage, are implementing regulations that computers now have a power limit when they are sitting idle or under light load, even desktops. So the E-cores allow more powerful P-Cores (and more of them) while still keeping the computer under the power limits. It's stupid on the government's side, but that's an entirely different discussion. And the new laws have already made some PC manufacturers pull certain high end models of their computers from the markets those laws cover.


Sorry, but this is pure nonsense. You seem to have bought into some of the sensationalism and misinformation that got tossed around a while ago when some new environmental regulations (that OEMs had known about for _years_) came into effect, causing non-compliant OEMs to halt sales of certain models. Failure to comply with these regulations is _only_ the responsibility of said OEMs, as compliant components were plentiful and they had several years' notice. The only computers pulled from the market were also pulled due to using low efficiency PSUs, and not because of the power consumption of any of their other components.

As for the goal of this being to bring down idle power consumption: that's likely partly true, but given that current Intel mobile CPUs idle in the mW range, the difference from adding E cores is relatively minor overall - especially as the gap between desktop and mobile in this regard comes down to larger boards and more AICs requiring more power, not the CPUs themselves (as well as high powered desktop PSUs generally being very inefficient at low loads). There is no way in which E cores affect any of this meaningfully, so presenting that as the reasoning just doesn't add up. Intel's main motivation for adding E cores is to compete with AMD's _massive_ MT efficiency lead, as well as Apple, as it's clear their P architecture just can't deliver the necessary combination of efficiency and speed.


newtekie1 said:


> In and ideal world, Intel would have a processor that is all P-cores, as many of them as they could fit on the die space. I figure if they replaced the E-cores with P-cores, they could have a 10 or 12 core 12th gen CPU with all P-cores.


The die space used by the 4-core E clusters is widely documented, and is roughly the same as a single P core, so they would top out at 10 in the same die area, but you'd then also have lower clocks and increased thermal density, while getting fewer threads for your trouble. Most likely, a 10-core Golden Cove chip would be quite underwhelming due to thermal limitations. In most MT heavy applications, 4 E cores deliver more performance than two more P cores would, at least after the scheduler was updated to keep track of them. There are still applications that don't manage to make use of them, but those are growing increasingly rare.


newtekie1 said:


> Maybe call this beast the i9-12950K. This CPU would only be available in OEM systems outside those jurisdictions with the strict energy laws. But there's also nothing stopping someone in those areas from upgrading their computer themselves with this processor. Of course this won't happen because it doesn't make a lot of business sense for Intel. It's another die they have to design and product test. And they probably wouldn't make much money off of it.


Such a chip would likely meet those idle power requirements just fine - just like an all-P core ADL i5 does, after all. Cores can be power and clock gated after all, so why would a 10P CPU consume more power at idle than a 6P one? And those 6P CPUs are sold in those jurisdictions. So, sorry, but your reasoning here doesn't add up. You're giving environmental regulations the blame for something they have literally zero effect on - the architectural traits of Intel's P cores and how many of them can be packed into a CPU package and made to perform well. Intel isn't being stopped by regulations, they're stopped by their inability to put more than 8 of these cores in a single package and have them clock high enough to run well.

Me? I trust Intel's engineers to know what they're doing with the resources available to them. They made E cores for a reason, and that reason isn't because the gub'mint forced them to do so. For what they do, E cores are _good_, and smarter, more dynamic PCs are a net benefit and a necessity if we want faster PCs going forward - the era of brute-forcing your way to victory is coming to an end. And that's also a good thing. And lastly, I don't hold with scapegoating governmental regulations (which are _only_ beneficial in this scenario) in lieu of attributing blame where it actually lies: with those engineers, and/or with the product designers at various OEMs. Intel has a high performance architecture that has efficiency and area issues. E cores are a solution to that. And OEMs failing to design compliant products when given several years' notice have nobody but themselves to blame.


----------



## Wirko (Mar 27, 2022)

thelawnet said:


> It's surely less redundant than the 10600, which differed ONLY from the 10600K in being 4.8 GHz locked, rather than 4.8 GHz unlocked.
> 
> Here the SKUs are:
> 
> ...


Bah, that's nothing compared to Haswell + Haswell refresh. At least eight i3 and seven i5 models without suffixes, plus the K chips. And an endless line of T's and S's, but maybe not all of those were available in retail.


----------



## newtekie1 (Mar 27, 2022)

qubit said:


> @newtekie1 Green lobby strikes again.


I mean, it is what it is. I've been feeling the higher electric bills in the past year or so, so a lower power idling computer that still has tons of horsepower for gaming and multi-threaded work is A-OK with me. But it'd be nice if the option was there to just go full on beastly CPU if you wanted.



Valantar said:


> Sorry, but this is pure nonsense. You seem to have bought into some of the sensationalism and misinformation that got tossed around a while ago when some new environmental regulations (that OEMs had known about for _years_) came into effect, causing non-compliant OEMs to halt sales of certain models. Failure to comply with these regulations is _only_ the responsibility of said OEMs, as compliant components were plentiful and they had several years' notice. The only computers pulled from the market were also pulled due to using low efficiency PSUs, and not because of the power consumption of any of their other components.
> 
> As for the goal of this being to bring down idle power consumption: that's likely partly true, but given that current Intel mobile CPUs idle in the mW range, the difference from adding E cores is relatively minor overall - especially as the gap between desktop and mobile in this regard comes down to larger boards and more AICs requiring more power, not the CPUs themselves (as well as high powered desktop PSUs generally being very inefficient at low loads). There is no way in which E cores affect any of this meaningfully, so presenting that as the reasoning just doesn't add up. Intel's main motivation for adding E cores is to compete with AMD's _massive_ MT efficiency lead, as well as Apple, as it's clear their P architecture just can't deliver the necessary combination of efficiency and speed.




The fact is it is not just an inefficient power supply issue. The limits actually just got stricter in some areas. Selling a gaming PC with a high end graphics card is getting harder, and these E-Cores are the solution to that problem. The laptop processors aren't really an argument here. Yes, some idle at mW in some cases, but the high end gaming ones don't. And they also often aren't nearly as powerful as a desktop processor at full speed either.



Valantar said:


> The die space used by the 4-core E clusters is widely documented, and is roughly the same as a single P core, so they would top out at 10 in the same die area, but you'd then also have lower clocks and increased thermal density, while getting fewer threads for your trouble. Most likely, a 10-core Golden Cove chip would be quite underwhelming due to thermal limitations. In most MT heavy applications, 4 E cores deliver more performance than two more P cores would, at least after the scheduler was updated to keep track of them. There are still applications that don't manage to make use of them, but those are growing increasingly rare.


Not really, a single P-Core with HT enabled is responsible for about 30w of power under load. At the same time, disabling the E-cores results in a power drop of about 25w, but that number might be inaccurate because the P-cores were allowed to boost higher and use more power since the E-cores were not taking up some of the power budget. But even if we assume the E-cores only use 25w, taking them out and adding 2 P-cores would only increase the power by 35w. And I'm talking about a high end processor here, something Intel could increase the power budget on easily to make up for that 35w and still keep the same or extremely close boost clock speeds.
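The swap is easy to sanity-check with the rough wattages above (ballpark figures from observation, not official numbers):

```python
# Back-of-envelope check of the core swap described above, using
# the rough per-core load wattages estimated in this post (not
# official Intel figures).
P_CORE_W = 30     # approx. load power of one P-core with HT
E_CORES_W = 25    # approx. load power of all the E-cores combined

delta = 2 * P_CORE_W - E_CORES_W  # drop the E-cores, add 2 P-cores
print(delta)  # 35
```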



Valantar said:


> Such a chip would likely meet those idle power requirements just fine - just like an all-P core ADL i5 does, after all. Cores can be power and clock gated after all, so why would a 10P CPU consume more power at idle than a 6P one? And those 6P CPUs are sold in those jurisdictions. So, sorry, but your reasoning here doesn't add up. You're giving environmental regulations the blame for something they have literally zero effect on - the architectural traits of Intel's P cores and how many of them can be packed into a CPU package and made to perform well. Intel isn't being stopped by regulations, they're stopped by their inability to put more than 8 of these cores in a single package and have them clock high enough to run well.



But we know that isn't true. The 4-core 12300, using the same die as this i5-12600, uses less power at idle. In fact the 12600 uses about 10% more power at idle than the 12300. And those P-cores on the 12300 are physically disabled, meaning their power consumption is actually 0. Power and clock gating a core does not reduce its power consumption to 0.


----------



## qubit (Mar 27, 2022)

newtekie1 said:


> I mean, it is what it is. I've been feeling the higher electric bills in the past year or so, so a lower power idling computer that still has tons of horsepower for gaming and multi-threaded work is A-OK with me. But it'd be nice if the option was there to just go full on beastly CPU if you wanted.


Yeah, +1 buddy.

We've got 50% higher electricity and gas bills here in brexit Blighty, too.

I wanted my new PC to be basically "double" my old one: twice the cores minimum (full performance cores of course), double the memory to 32GB and a really beastly powerful graphics card paired with a 4K 144Hz monitor. Why? Because I can. Pure, enthusiast logic! Alas, the CPU on Intel's side only exists as the 12900 and NVIDIA have helped ensure that their top cards remain reassuringly unaffordable.

If the right CPU had been available, I'd have probably pulled the trigger around now.

I might have a look at what AMD offers, but they've not quite got the same gaming performance as Alder Lake, and I still don't trust them as much as Intel for trouble-free performance, given what I see in the forums. While I did have intermittent stability problems with my PC for a long time, it turned out to be a bad memory stick, solved by buying a new one, so not a platform problem.

Regardless, I'll have to upgrade by 14.10.2025 when Windows 10 support runs out. Wish me luck.


----------



## W1zzard (Mar 27, 2022)

MachineLearning said:


> Not "To E, or not to E?"
> 
> Great review as always.


Review subtitle has been updated


----------



## chrcoluk (Mar 27, 2022)

Whilst I like this model, I do agree with the naming scheme concerns; this should have been under its own number.


----------



## Valantar (Mar 27, 2022)

newtekie1 said:


> The fact is it is not just an inefficient power supply issue. The limits actually just got stricter in some areas. Selling a gaming PC with a high end graphics card is getting harder, and these E-Cores are the solution to that problem.


Again: this is simply not true. AFAIK there are no environmental regulations anywhere significant that regulate maximum power consumption of a PC under load. I mean, this would be impossible to regulate in practice, as PCs come in a million shapes, sizes, use cases and performance levels. The new Californian regulations, which caused that hubbub in the middle of last year, only applied to idle power consumption, and the only PCs that were held back from sale because of it were due to them having insufficiently efficient PSUs - i.e. pure laziness/cheapness on the part of the OEM, given that they had several years to prepare, and compliant PSUs are abundant.


newtekie1 said:


> The laptop processors aren't really an argument here. Yes, some idle at mW in some cases, but the high end gaming ones don't. And they also often aren't nearly as powerful as a desktop processor at full speed either.


While a U series will idle lower than an H series or a desktop chip, the tech is the same, so unless your motherboard has for some reason disabled its sleep states, they will idle at equivalent power levels. More hardware present will drive up idle power, but that's relatively insignificant. Also, "they aren't nearly as powerful as a desktop processor at full speed" is completely irrelevant here - the regulations in question don't cover that use case, and regardless, _it's the same silicon with different power levels_. It's a configuration difference, nothing more.


newtekie1 said:


> Not really, a single P-Core with HT enabled is responsible for about 30w of power under load. At the same time, disabling the E-cores results in a power drop of about 25w, but that number might be inaccurate because the P-cores were allowed to boost higher and use more power since the E-cores were not taking up some of the power budget. But even if we assume the E-cores only use 25w, taking them out and adding 2 P-cores would only increase the power by 35w. And I'm talking about a high end processor here, something Intel could increase the power budget on easily to make up for that 35w and still keep the same or extremely close boost clock speeds.


Yes, "only" 35W - on top of, what, 240? And sure, you can run them _far_ more efficiently if you limit the boost clock and power level, but ... the E cores still deliver _massively_ better efficiency. They just can't keep up in the high end, or in latency-sensitive workloads (like games), due to their shared L2 cache and indirect ring bus connection. Still, in any power limited scenario - even 240W - 8 E-cores at peak clocks outperform 2 P-cores (4t) at peak clocks unless the workload is highly latency or cache sensitive, and the doubled thread count means the E cores deliver more performance/area/watt outside of a few workloads. They're especially useful for lighter background tasks and highly threaded workloads that can make use of them.

Also, your argument here is a bit ... well, inconsistent. If we're talking about performance within a given power envelope, whatever it may be, adding 35W to that inherently breaks the comparison.


newtekie1 said:


> But we know that isn't true. The 4-core 12300, using the same die as this i5-12600, uses less power at idle. In fact the 12600 uses about 10% more power at idle than the 12300. And those P-cores on the 12300 are physically disabled, meaning their power consumption is actually 0. Power and clock gating a core does not reduce its power consumption to 0.


Power gating does indeed reduce power consumption to 0 - it literally means turning off the power to a portion of the silicon. The reason why the 12600 consumes more power is likely that its extra cores are fluctuating in and out of sleep, or it could be down to the rest of the system - remember, TPU's power measurements are full system, so they include _everything_ including PSU losses (which account for quite a bit at idle, given how inefficient most PSUs are in that wattage range). A 4W difference like between the 12300 and 12600 is utterly meaningless in that perspective, as there are too many complicating factors to trust that measurement - it's well within any reasonable margin of error for a full-system measurement. You'd need an EPS cable measurement to get anything even remotely reliable.


----------



## bug (Mar 28, 2022)

newtekie1 said:


> It's a solution to a problem that does actually currently exist. Governments, in their never ending fight to reduce energy usage, are implementing regulations that computers now have a power limit when they are sitting idle or under light load, even desktops. So the E-cores allow more powerful P-Cores (and more of them) while still keeping the computer under the power limits. It's stupid on the government's side, but that's an entirely different discussion. And the new laws have already made some PC manufacturers pull certain high end models of their computers from the markets those laws cover.


I don't agree with that. According to current benchmarks, there are no tangible power savings whatsoever. Maybe the scheduler isn't smart enough to make proper use of the E core in light-load scenarios, or maybe it's bugged, but currently the E cores definitely do not lower power draw. Plus, I'm not aware of regulations targeting CPUs or PCs specifically.


----------



## Valantar (Mar 28, 2022)

bug said:


> I don't agree with that. According to current benchmarks, there are no tangible power savings whatsoever. Maybe the scheduler isn't smart enough to make proper use of the E core in light-load scenarios, or maybe it's bugged, but currently the E cores definitely do not lower power draw. Plus, I'm not aware of regulations targeting CPUs or PCs specifically.


Yeah, they exist to deliver increased MT performance at any given power level (as a response to AMD), and to allow mobile chips higher core counts without necessitating 100W+ boost power (again mostly a response to AMD). There's absolutely nothing indicating that they exist in order to meet some kind of regulatory limit.


----------



## bug (Mar 28, 2022)

Valantar said:


> Yeah, they exist to deliver increased MT performance at any given power level (as a response to AMD), and to allow mobile chips higher core counts without necessitating 100W+ boost power (again mostly a response to AMD). There's absolutely nothing indicating that they exist in order to meet some kind of regulatory limit.


For the record, E cores make sense when you look at perf/die area, but we need the scheduler to be smart enough to send light loads to them properly. What we're seeing right now in Win11 (send a window to the background, watch it becoming a low-priority task) seems to be quite far from that. And I'm not very confident a scheduler can be smart enough to figure things out properly, unless programs themselves start providing hints.


----------



## Deleted member 24505 (Mar 28, 2022)

MT is important though, so why does it matter how they make it better? E cores will make sense when the scheduler works properly; I won't disable mine as they have no detrimental effect on my PC


----------



## bug (Mar 28, 2022)

Tigger said:


> MT is important though, so why does it matter how they make it better? E cores will make sense when the scheduler works properly; I won't disable mine as they have no detrimental effect on my PC


Except that, with a less than perfect scheduler, they do. Start to compile something that takes a long time, bring your YouTube browser window to the foreground to watch something while compiling and watch your compiler being relegated to the E cores.
E cores can be helpful, but in order to do that, you have to keep your eyes on the scheduler more often than not. Whether that's worth it, only you can say. And it will vary from one person to another. Luckily, save for the 8P/0E cores, you have a selection of CPUs including pretty much everything, so it's all good (if a bit confusing for the less informed buyer - but then again, when hasn't that been a problem?).
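If you'd rather not trust the scheduler at all, you can pin the heavy task yourself. A minimal sketch, assuming Linux and the usual Alder Lake enumeration where the P-core threads come first (verify against your own topology, e.g. /proc/cpuinfo, before relying on it):

```python
# Pin the current process (e.g. a long compile launched from this
# script) to the first 12 logical CPUs so the scheduler cannot
# migrate it onto the E-cores. "First 12 = 6 P-cores with HT" is
# an assumption about core enumeration - check your own machine.
import os

if hasattr(os, "sched_setaffinity"):       # Linux-only API
    p_threads = set(range(min(12, os.cpu_count())))
    os.sched_setaffinity(0, p_threads)     # 0 = this process
    print(sorted(os.sched_getaffinity(0)))
else:
    print("no sched_setaffinity on this platform")
```

On Windows the rough equivalent is `start /affinity` or the affinity dialog in Task Manager.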


----------



## Deleted member 24505 (Mar 28, 2022)

bug said:


> Except that, with a less than perfect scheduler, they do. Start to compile something that takes a long time, bring your YouTube browser window to the foreground to watch something while compiling and watch your compiler being relegated to the E cores.
> E cores can be helpful, but in order to do that, you have to keep your eyes on the scheduler more often than not. Whether that's worth it, only you can say. And it will vary from one person to another. Luckily, save for the 8P/0E cores, you have a selection of CPUs including pretty much everything, so it's all good (if a bit confusing for the less informed buyer - but then again, when hasn't that been a problem?).



I did say when the scheduler works properly. 

So if I go into the BIOS and disable the E cores, my MT performance will go down, but what would the advantage be?


----------



## Wirko (Mar 28, 2022)

bug said:


> For the record, E cores make sense when you look at perf/die area, but we need the scheduler to be smart enough to send light loads to them properly. What we're seeing right now in Win11 (send a window to the background, watch it becoming a low-priority task) seems to be quite far from that. And I'm not very confident a scheduler can be smart enough to figure things out properly, unless programs themselves start providing hints.


Fully agreed. But how to handle those light loads? One example that hasn't been researched enough: when you have all P cores running one thread each, and you need to run another 1 or 2 threads (or more), do you gain more performance if you put them on the P cores, or the E cores? I expect the latter to be the case but we need more testing with various types of CPU load.
Also, the scheduler will improve, I have no doubt. Little by little. Intel needs a kiloton of telemetry data first, only then can they improve such a complex program (no, no sarcasm here).


----------



## bug (Mar 28, 2022)

Tigger said:


> I did say when the scheduler works properly.
> 
> So if I go into the BIOS and disable the E cores, my MT performance will go down, but what would the advantage be?


Your MT performance goes down only if you routinely saturate all P cores. The advantage is that, when you don't saturate all P cores (i.e. most of the time), you don't run the risk of a CPU intensive task being banished to an E core.
Plus, without E cores, you can safely run Win10 


Wirko said:


> Fully agreed. But how to handle those light loads? One example that hasn't been researched enough: when you have all P cores running one thread each, and you need to run another 1 or 2 threads (or more), do you gain more performance if you put them on the P cores, or the E cores? I expect the latter to be the case but we need more testing with various types of CPU load.
> Also, the scheduler will improve, I have no doubt. Little by little. Intel needs a kiloton of telemetry data first, only then can they improve such a complex program (no, no sarcasm here).


Exactly, that's hard to describe on paper, much less implement in silicon.
If your threads aren't very memory intensive, the best scenario may be running everything on a single core. If they are memory intensive, you want them spread out. Waking up an E core for a single thread may or may not be worth it, from a power point of view.

So yeah, until I learn Intel has made headway in this area, I'd rather stick with a classic configuration.


----------



## Selaya (Mar 28, 2022)

bug said:


> For the record, E cores make sense when you look at perf/die area, but we need the scheduler to be smart enough to send light loads to them properly. What we're seeing right now in Win11 (send a window to the background, watch it becoming a low-priority task) seems to be quite far from that. And I'm not very confident a scheduler can be smart enough to figure things out properly, unless programs themselves start providing hints.


the way the E-cores are working rn (as MT padding), honestly what we need is a _dumber_, not a smarter, scheduler
since their point seems to be padding MT performance, they should be kept idle until all P-cores are at (almost) full load, and then and only then should they be activated, for extra MT performance headroom
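That policy is simple enough to sketch in a few lines (slot counts are illustrative, roughly a 6P+4E part like the 12600K):

```python
# Toy model of the "P-first" policy: hand out P-core slots until
# they're exhausted, and only then start waking E-cores. Slot
# counts are illustrative: 6 P-cores with HT (12 threads) + 4 E.
def place_threads(n, p_slots=12, e_slots=4):
    placement = []
    for _ in range(n):
        if p_slots:
            placement.append("P"); p_slots -= 1
        elif e_slots:
            placement.append("E"); e_slots -= 1
        else:
            placement.append("queued")  # oversubscribed
    return placement

print(place_threads(14))  # 12 x 'P', then 2 x 'E'
```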


----------



## Deleted member 24505 (Mar 28, 2022)

I bet that's not how big.LITTLE works on phones, and no one complains about that. I just ignore them; if they work right, fine, if not, dgaf, as it does not impact me at all. Still a fast CPU with or without the E cores active.


----------



## bug (Mar 28, 2022)

Selaya said:


> the way the E-cores are working rn (as MT padding), honestly what we need is a _dumber_, not a smarter, scheduler
> since their point seems to be padding MT performance, they should be kept idle until all P-cores are at (almost) full load, and then and only then should they be activated, for extra MT performance headroom


That would make sense. But I'm afraid they'll just try to put light workloads on E cores, trying to gain some advantage on mobile. And that will mess with the desktop scheduler as well.


Tigger said:


> I bet that's not how big.LITTLE works on phones, and no one complains about that. I just ignore them; if they work right, fine, if not, dgaf, as it does not impact me at all. Still a fast CPU with or without the E cores active.


Sticking your head in the sand can make you feel better, but it still doesn't mean "it does not impact me at all".


----------



## Deleted member 24505 (Mar 28, 2022)

bug said:


> That would make sense. But I'm afraid they'll just try to put light workloads on E cores, trying to gain some advantage on mobile. And that will mess with the desktop scheduler as well.
> 
> Sticking your head in the sand can make you feel better, but it still doesn't mean "it does not impact me at all".



How does it impact me? My PC seemingly runs fine, no apparent lag. If I had something to whine about, I would, but I haven't. Whatever is going on in the background is having no effect on my PC that I can discern. What should I be seeing if it is going tits up?

I don't get what I am supposed to be seeing. Are others having problems with the E cores or the scheduler? As far as I can see, I am not. I am very happy with the way my PC runs.

Btw, I just ignore them - the E cores, not any problems there might be.


----------



## bug (Mar 28, 2022)

Tigger said:


> How does it impact me? My PC seemingly runs fine, no apparent lag. If I had something to whine about, I would, but I haven't. Whatever is going on in the background is having no effect on my PC that I can discern. What should I be seeing if it is going tits up?


I have already answered that a few posts above.


----------



## Deleted member 24505 (Mar 28, 2022)

bug said:


> I have already answered that a few posts above.



Ok, I will start panicking then that I bought the wrong setup and should have bought AMD, as the dreaded E cores are a waste of silicon and should not exist, or maybe I can just disable them in the BIOS and pretend they are not there. 

I guess yours are disabled as you seem to have such a problem with them.


----------



## Valantar (Mar 28, 2022)

bug said:


> For the record, E cores make sense when you look at perf/die area, but we need the scheduler to be smart enough to send light loads to them properly. What we're seeing right now in Win11 (send a window to the background, watch it becoming a low-priority task) seems to be quite far from that. And I'm not very confident a scheduler can be smart enough to figure things out properly, unless programs themselves start providing hints.


This really shouldn't be a problem at all, and MS are making good headway getting there - they just haven't quite arrived yet. The scheduler putting out-of-focus high performance threads onto the E cores frankly ought to be easily solved - they can just check if the workload is sufficiently demanding and if it is, leave it be (barring other more pressing tasks needing that core). This also seems to be relatively application specific, which might speak to some complexity in how that workload is limited and how that makes the scheduler treat it. Still, a more dynamic scheduler will always make performance a tad less predictable, but it also lets the system allocate resources more dynamically, which is a huge benefit.

As for the need for the programs themselves giving hints: they can to a certain degree (threads can be set as higher/lower priority etc.), but that's also what Thread Director is supposed to do. If TD worked as advertised, no compute-bound task would ever get set as "background" and allocated to E cores only, but apparently it still struggles with some applications. Still, all the necessary parts are already there.
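And for what it's worth, the crudest form of such a hint has existed in every OS for decades: process priority. A minimal POSIX sketch (Windows would use SetPriorityClass instead):

```python
# The crudest scheduling hint an application can give the OS:
# lower its own priority so background work is treated as such.
# POSIX niceness; an unprivileged process can only raise the value.
import os

if hasattr(os, "nice"):        # POSIX-only API
    before = os.nice(0)        # an increment of 0 just reads the current value
    after = os.nice(5)         # ask to be deprioritized by 5 steps
    print(before, "->", after)
```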


Wirko said:


> Fully agreed. But how to handle those light loads? One example that hasn't been researched enough: when you have all P cores running one thread each, and you need to run another 1 or 2 threads (or more), do you gain more performance if you put them on the P cores, or the E cores? I expect the latter to be the case but we need more testing with various types of CPU load.
> Also, the scheduler will improve, I have no doubt. Little by little. Intel needs a kiloton of telemetry data first, only then can they improve such a complex program (no, no sarcasm here).


An E core will _drastically_ outperform SMT on a P core. Intel's SMT increases performance over a single thread by ~25%. In SPEC, the E cores deliver 65% (SPECint) and 55% (SPECfp) of the performance of a P core. SPEC is of course not universally representative, but it gives a good ballpark estimate - and 55% is quite a lot more than 25%.
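A quick back-of-envelope with those figures shows why: the marginal throughput of putting an extra thread on an idle E core is roughly double what an SMT sibling thread on an already-busy P core would add. A sketch (the numbers are the rough estimates above, not measurements):

```python
# Marginal throughput of one extra thread, in units of one P-core thread.
SMT_GAIN = 0.25     # second SMT thread on a busy P core adds ~25%
E_CORE_INT = 0.65   # E core vs. P core, SPECint (per the post)
E_CORE_FP = 0.55    # E core vs. P core, SPECfp

# With all P cores busy, scheduling the next thread on an E core beats SMT:
print(E_CORE_FP / SMT_GAIN)  # 2.2x the marginal throughput (worse case)
print(E_CORE_INT / SMT_GAIN)  # 2.6x (better case)
```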


----------



## bug (Mar 28, 2022)

Tigger said:


> OK, I will start panicking then that I bought the wrong setup and should have bought AMD, as the dreaded E-cores are a waste of silicon and should not exist. Or maybe I can just disable them in the BIOS and pretend they are not there.
> 
> I guess yours are disabled, as you seem to have such a problem with them.


Sure, knee-jerk reactions always work when the brain gets too strained.


----------



## ThrashZone (Mar 28, 2022)

Hi,
Cooling tends to vary
If good enough cooling is used, I doubt you'd notice any issues with the E-cores being used or not.


----------



## InVasMani (Mar 28, 2022)

bug said:


> For the record, E cores make sense when you look at perf/die area, but we need the scheduler to be smart enough to send light loads to them properly. What we're seeing right now in Win11 (send a window to the background, watch it becoming a low-priority task) seems to be quite far from that. And I'm not very confident a scheduler can be smart enough to figure things out properly, unless programs themselves start providing hints.


The OS should just treat each chiplet as a foreground/background assignment if possible, and allow the end user to swap them based on whether you want the P cores or E cores to be the foreground chiplets. It might not be possible with the way it's designed, though - I'm not certain. I would imagine AMD could do their own take on it that would work that way, and Intel could follow up with something like that for its successor or the one after, depending on how far along it is.


----------



## bug (Mar 28, 2022)

Valantar said:


> This really shouldn't be a problem at all, and MS are making good headway getting there - they just haven't quite arrived yet. The scheduler putting out-of-focus high performance threads onto the E cores frankly ought to be easily solved - they can just check if the workload is sufficiently demanding and if it is, leave it be (barring other more pressing tasks needing that core). This also seems to be relatively application specific, which might speak to some complexity in how that workload is limited and how that makes the scheduler treat it. Still, a more dynamic scheduler will always make performance a tad less predictable, but it also lets the system allocate resources more dynamically, which is a huge benefit.
> 
> As for the need for the programs themselves giving hints: they can to a certain degree (threads can be set as higher/lower priority etc.), but that's also what Thread Director is supposed to do. If TD worked as advertised, no compute-bound task would ever get set as "background" and allocated to E cores only, but apparently it still struggles with some applications. Still, all the necessary parts are already there.


I agree with all that. I'm just not sure whether Windows can tell the difference between a shell running a CPU-intensive compile or transcode task and a shell running a low-priority rsync or smth. If it can, we're golden.


----------



## Valantar (Mar 28, 2022)

InVasMani said:


> The OS should just treat each chiplet as a foreground/background assignment if possible, and allow the end user to swap them based on whether you want the P cores or E cores to be the foreground chiplets. It might not be possible with the way it's designed, though - I'm not certain. I would imagine AMD could do their own take on it that would work that way, and Intel could follow up with something like that for its successor or the one after, depending on how far along it is.


Chiplet? Intel's CPUs are (so far) monolithic. Also, that is precisely the behaviour that @bug was complaining about above: if they're fixed to foreground/background and you want to do something lightweight while a heavy task is running, that heavy task gets shifted to the E cores, tanking performance. That's hardly ideal, right? And while having the option for user intervention is good, this _really_ ought to be automatic. Manual thread management is _really_ not something you want to be doing on a regular basis - it would be a major hassle, wildly inflexible, and could cause all kinds of issues. Heck, PCs are supposed to be kind of smart, no? Having to direct their operations manually is kind of antithetical to that.
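For a sense of what manual management would actually look like: even the crudest version - pinning a process to a chosen set of cores - is per-process, per-run, and has to be redone whenever the workload changes. A minimal Linux sketch (the CPU number here is arbitrary, not a real P/E topology):

```python
import os

# Pin the current process to CPU 0, as a stand-in for "assign this app
# to the P cores". Note this is manual, per-process, and easy to get wrong.
original = os.sched_getaffinity(0)  # remember the default mask
os.sched_setaffinity(0, {0})        # restrict to CPU 0 only
print(os.sched_getaffinity(0))      # {0}
os.sched_setaffinity(0, original)   # undo - now imagine doing this for every app
```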


bug said:


> I agree with all that. I'm just not sure whether Windows can tell the difference between a shell running a CPU-intensive compile or transcode task and a shell running a low-priority rsync or smth. If it can, we're golden.


Yeah, those systems clearly need some tuning. Not that I have a single clue about how schedulers are programmed, but IMO there ought to be several conditions that keep a thread on the P cores no matter what, though the controls also need to be granular - wholesale shifting of all of an application's threads to the E cores, for example, sounds rather excessive unless that app is creating no load at all.


----------



## W1zzard (Mar 28, 2022)

bug said:


> I agree with all that. I'm just not sure whether Windows can tell the difference between a shell running a CPU-intensive compile or transcode task and a shell running a low-priority rsync or smth. If it can, we're golden.


Intel in their briefings made it sound like the CPU is able to tell that difference. Technically the shell isn't even involved, because it just spawns another process.

Compile and transcode are fundamentally different, though. A compile spawns a gazillion ultra-short-lived processes (one per file compiled), while a transcode is a single long-running process that properly puts the core(s) at 100%.
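The two load shapes described here can be sketched like this - a burst of short-lived children versus one long-running hot loop (purely illustrative, with the durations shrunk down):

```python
import subprocess
import sys
import time

# Compile-like: many ultra-short-lived child processes, one per "file".
# Each child barely registers before it exits - hard for a scheduler to profile.
for _ in range(5):
    subprocess.run([sys.executable, "-c", "pass"], check=True)

# Transcode-like: a single long-running process keeping one core at ~100%
# (shortened to a fraction of a second here) - easy to classify as demanding.
deadline = time.monotonic() + 0.1
iterations = 0
while time.monotonic() < deadline:
    iterations += 1
print(iterations > 0)  # True
```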


----------



## tussinman (Mar 28, 2022)

AnarchoPrimitiv said:


> Wait, so this chip is only 4% ahead of the 5600x?  So does that mean that the P-cores only have a very small IPC uplift when compared to Zen3?


Hard to say, because IPC doesn't necessarily translate to benchmarks and real-world performance. The 12600, for example, supposedly has 20% higher IPC than 11th gen, yet the 11600K is within 5% of both the 5600X and the 12600 non-K in the overall CPU test and in 1080p/1440p gaming.


----------



## Valantar (Mar 28, 2022)

tussinman said:


> Hard to say, because IPC doesn't necessarily translate to benchmarks and real-world performance. The 12600, for example, supposedly has 20% higher IPC than 11th gen, yet the 11600K is within 5% of both the 5600X and the 12600 non-K in the overall CPU test and in 1080p/1440p gaming.


You can't really bring IPC measurements alone into a gaming benchmark, simply because it's also dependent on the GPU, interconnects, etc. IPC is also task dependent (and typically averaged across some broad test suite to account for this), and gaming tends to be rather different than most CPU only loads. So while IPC differences are really useful in understanding the difference between various architectures, they're hardly the be-all, end-all of real world performance.


----------



## tussinman (Mar 29, 2022)

Valantar said:


> You can't really bring IPC measurements alone into a gaming benchmark, simply because it's also dependent on the GPU, interconnects, etc. IPC is also task dependent (and typically averaged across some broad test suite to account for this), and gaming tends to be rather different than most CPU only loads. So while IPC differences are really useful in understanding the difference between various architectures, they're hardly the be-all, end-all of real world performance.


Exactly. Overall gaming and non-gaming CPU tests sometimes differ little between generations, since it's so task-dependent.


----------



## Deleted member 24505 (Mar 29, 2022)

bug said:


> Sure, knee-jerk reactions always work when the brain gets too strained.



Maybe I should have put /s (sarcasm, which I guess you never got)


----------



## Why_Me (Mar 29, 2022)

BigBonedCartman said:


> Another year another Intel Gimmick!
> 
> Gear Modes?
> 
> ...


Obviously you don't read benchmarks.


----------



## bug (Mar 29, 2022)

W1zzard said:


> Intel in their briefings made it sound like the CPU is able to tell that difference. Technically the shell isn't even involved, because it just spawns another process.
> 
> Compile and transcode are fundamentally different, though. A compile spawns a gazillion ultra-short-lived processes (one per file compiled), while a transcode is a single long-running process that properly puts the core(s) at 100%.


They may have said that, but in your testing the scheduler still messed up dispatching wPrime. And that's a simple scenario; as you point out, it can get way more complicated.


----------



## Taraquin (Mar 29, 2022)

I think the 5600X is a better deal: with PBO + CO and better RAM OC (3800+ vs. 3400-3700 gear 1) you get about identical performance at a lower price, given the high price of B660. The 12400F and 12600KF are also better deals.


----------



## bug (Mar 29, 2022)

Taraquin said:


> I think the 5600X is a better deal: with PBO + CO and better RAM OC (3800+ vs. 3400-3700 gear 1) you get about identical performance at a lower price, given the high price of B660. The 12400F and 12600KF are also better deals.


It's not that easy to compare. To overclock 5600X properly (even if it's overclocking itself), cheaper boards may not be enough (crappy VRM and everything). But yes, when the CPU price and performance are so close together, I'd make the choice based on where I can find the cheaper motherboard that meets my requirements.


----------



## Taraquin (Mar 29, 2022)

bug said:


> It's not that easy to compare. To overclock 5600X properly (even if it's overclocking itself), cheaper boards may not be enough (crappy VRM and everything). But yes, when the CPU price and performance are so close together, I'd make the choice based on where I can find the cheaper motherboard that meets my requirements.


Even a basic B450 can OC a 5600X with ease; it uses very little power (even at 4.8 GHz @ 1.32 V it uses at most 115 W under load). The 5800X and higher are tougher.


----------



## AusWolf (Mar 29, 2022)

It looks like a nice little gaming CPU, although the 12500 and 12400 seem to offer better value for the money. I didn't expect to start thinking about upgrading my 11700 for the next couple of years, but now I'm actually considering buying something like this just for platform longevity. Only my wallet is shaking its head.


----------



## bug (Mar 29, 2022)

Taraquin said:


> Even a basic B450 can OC a 5600X with ease; it uses very little power (even at 4.8 GHz @ 1.32 V it uses at most 115 W under load). The 5800X and higher are tougher.


Oh they'll all overclock, I'm sure of that. But you won't always hit overclock levels you see people bragging about on the Internet.


----------



## Valantar (Mar 29, 2022)

AusWolf said:


> It looks like a nice little gaming CPU, although the 12500 and 12400 seem to offer better value for the money. I didn't expect to start thinking about upgrading my 11700 for the next couple of years, but now I'm actually considering buying something like this just for platform longevity. Only my wallet is shaking its head.


Question: isn't a pre-emptive upgrade for platform longevity a bit... backwards? If anything, that way you're only denying yourself access to whatever features are new when you actually need an upgrade, no?


----------



## Taraquin (Mar 29, 2022)

bug said:


> Oh they'll all overclock, I'm sure of that. But you won't always hit overclock levels you see people bragging about on the Internet.


You can do +200 PBO and CO; this will net you 4.85 GHz all-core, with 4.5-4.65 GHz all-core at a 76 W limit or 4.6-4.75 GHz all-core at an 88 W limit. With manual OC there is very little to gain beyond that, often less. Even a basic A320 board can handle 88 W in Cinebench.


----------



## AusWolf (Mar 29, 2022)

Valantar said:


> Question: isn't a pre-emptive upgrade for platform longevity a bit... backwards? If anything, that way you're only denying yourself access to whatever features are new when you actually need an upgrade, no?


Point taken. 11th gen it is, until I truly need an upgrade.


----------



## newtekie1 (Mar 31, 2022)

bug said:


> I don't agree with that. According to current benchmarks, there are no tangible power savings whatsoever. Maybe the scheduler isn't smart enough to make proper use of the E core in light-load scenarios, or maybe it's bugged, but currently the E cores definitely do not lower power draw.


The difference between E-cores only and P-Cores only at idle is about 10w. They do make a difference.



bug said:


> Plus, I'm not aware of regulations targeting CPUs or PCs specifically.


There are currently 6 states in the US that regulate idle power consumption of PCs. And high end gaming PCs are hard to get under those limits with high end graphics cards pre-installed. Every watt counts there.


----------



## bug (Mar 31, 2022)

newtekie1 said:


> The difference between E-cores only and P-Cores only at idle is *about 10w*. They do make a difference.


No, it isn't: https://www.techpowerup.com/review/intel-core-i5-12600/20.html
Only 2 W between the 12600 and 12600K. The 12600 is clocked a bit lower; if they were clocked the same, the difference would probably be 2.5 W.


newtekie1 said:


> There are currently 6 states in the US that regulate idle power consumption of PCs. And high end gaming PCs are hard to get under those limits with high end graphics cards pre-installed. Every watt counts there.


I very much doubt the targets are that hard to meet (unless you want to put some numbers on that claim), everything is pretty damn good at idling these days.


----------



## Valantar (Mar 31, 2022)

bug said:


> No, it isn't: https://www.techpowerup.com/review/intel-core-i5-12600/20.html
> Only 2W between 12600 and 12600k. 12600 is clocked a bit lower, if they were clocked the same the difference would probably be 2.5W.
> 
> I very much doubt the targets are that hard to meet (unless you want to put some numbers on that claim), everything is pretty damn good at idling these days.


Also, remember that these are whole system power measurements. That's easily within margin of error for a measurement like that.


----------



## newtekie1 (Mar 31, 2022)

bug said:


> No, it isn't: https://www.techpowerup.com/review/intel-core-i5-12600/20.html
> Only 2 W between the 12600 and 12600K. The 12600 is clocked a bit lower; if they were clocked the same, the difference would probably be 2.5 W.


Yes, it is. https://tpucdn.com/review/intel-core-i9-12900k-e-cores-only-performance/images/power-idle.png

Even if you overclock the processor, having the E-cores enabled saves almost 10 W at idle on the high-end chips. And the difference would likely be a watt or so higher if the P-cores had HT enabled. And yes, these laws are so strict that 1 W actually matters.



bug said:


> I very much doubt the targets are that hard to meet (unless you want to put some numbers on that claim), everything is pretty damn good at idling these days.


There are literally manufacturers that had to pull gaming PC models with high-end graphics cards from those markets because they couldn't meet the requirements. If you want hard numbers: manufacturers had to pull models that idled at just 66 W because they didn't meet the standards. In fact, the cap for idle power is essentially 60 W for most desktop computers, including gaming computers. They use a kWh/yr calculation, but it essentially amounts to this: if you plug 60 W idle into the calculation, it gives you 60 kWh/yr, which is the limit. This means the 12900K (and 12700K) test system without E-cores and with HT turned off still wouldn't meet the requirements. However, the 12900K system, overclocked, with E-cores still enabled, actually does. The 12600 level is right on the edge, and anything lower doesn't really matter, so Intel didn't care about stripping the E-cores from those.


----------



## Valantar (Mar 31, 2022)

newtekie1 said:


> Yes, it is. https://tpucdn.com/review/intel-core-i9-12900k-e-cores-only-performance/images/power-idle.png
> 
> Even if you overclock the processor, having the E-cores enabled saves almost 10 W at idle on the high-end chips. And the difference would likely be a watt or so higher if the P-cores had HT enabled. And yes, these laws are so strict that 1 W actually matters.


Considering that the 12600 which has no E-cores idles at exactly the same power, that doesn't hold up to scrutiny. Most likely something happens with the power management of the 12900K when the E cores are disabled, causing it to idle at a higher power level. That kind of stuff is pretty common when changing the configuration of large-scale features like this, after all.


newtekie1 said:


> There are literally manufacturers that had to pull gaming PC models with high-end graphics cards from those markets because they couldn't meet the requirements. If you want hard numbers: manufacturers had to pull models that idled at just 66 W because they didn't meet the standards. In fact, the cap for idle power is essentially 60 W for most desktop computers, including gaming computers. They use a kWh/yr calculation, but it essentially amounts to this: if you plug 60 W idle into the calculation, it gives you 60 kWh/yr, which is the limit. This means the 12900K (and 12700K) test system without E-cores and with HT turned off still wouldn't meet the requirements. However, the 12900K system, overclocked, with E-cores still enabled, actually does. The 12600 level is right on the edge, and anything lower doesn't really matter, so Intel didn't care about stripping the E-cores from those.


There was literally one example, which was an Alienware desktop that used an old, inefficient PSU and thus failed to meet these requirements, and would have passed easily if they had used a reasonably modern PSU. Please stop spreading FUD, and I'd recommend watching the GN video on this that I linked previously, as it explains the entirety of the "issue" at length. There is absolutely no way E-cores exist in order to pass idle power requirements, which is proven by the simple fact that CPUs with P-cores only idle at the same power levels.


----------



## bug (Mar 31, 2022)

newtekie1 said:


> Yes, it is. https://tpucdn.com/review/intel-core-i9-12900k-e-cores-only-performance/images/power-idle.png
> 
> Even if you overclock the processor, having the E-cores enabled saves almost 10 W at idle on the high-end chips. And the difference would likely be a watt or so higher if the P-cores had HT enabled. And yes, these laws are so strict that 1 W actually matters.


I'm not sure what I should be looking at in that picture.
I have already sent you a link showing the 12600 idling at 57 W vs. the 12600K idling at 55 W. I really don't know where you came up with 10 W.

As for manufacturers not being able to meet the 60 W idle limit, that's more about systems going crazy. I mean, W1zzard's system seems to have no problem meeting it, and it's pretty extreme: 32 GB RAM, one of the most featureful motherboards, high-end CPU and GPU, water cooling and whatnot.
60 W idle power... When I started building PCs, our only choice was between a 200 W and a 240/250 W PSU.


----------



## Valantar (Mar 31, 2022)

bug said:


> I'm not sure what I should be looking at in that picture.
> I have already sent you a link showing the 12600 idling at 57 W vs. the 12600K idling at 55 W. I really don't know where you came up with 10 W.
> 
> As for manufacturers not being able to meet the 60 W idle limit, that's more about systems going crazy. I mean, W1zzard's system seems to have no problem meeting it, and it's pretty extreme: 32 GB RAM, one of the most featureful motherboards, high-end CPU and GPU, water cooling and whatnot.
> 60 W idle power... When I started building PCs, our only choice was between a 200 W and a 240/250 W PSU.


Not to mention that those tests are run with a 1200W PSU that ... well, doesn't perform admirably at idle (that's the 1000W version - the 1200W version is likely even less efficient at low loads).


*(embedded graph: PSU efficiency vs. load, from the linked PSU review)*

That graph tells us that its efficiency in the ~50W output range (either 12V, minor rails, or a mix) is in the 70-75% range. As this is for DC output wattage, for a 57W AC idle reading, assuming a best-case 75% efficiency, that's 57/100*75=42.75W DC power at that point - but more importantly, *a quarter of the power* wasted as PSU losses.

For comparison, the ATX 3.0 PSU standard *requires 60% efficiency from 10W or 2% output power*, and recommends 70% at that level, where this PSU scores less than 60% in the linked review above. And as no PSU has a flat efficiency curve at its extreme low end, efficiency would still rise rapidly from this 10W point.

In other words, these test setups _are_ compliant, and are using older PSU designs than the most recent standard, which will further lower idle power draw through reduced PSU losses. If that same 42.75W DC draw saw 85% efficiency rather than 75%, the AC load would be just 50W, or 53.4W at 80%. These clearly aren't massive differences, but they are more than enough to ensure that a PC like this is perfectly compliant for years to come.
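The arithmetic in that paragraph is easy to check - converting an AC wall reading to the DC load behind it at an assumed efficiency, then back again at better efficiencies:

```python
def dc_load(ac_watts, efficiency):
    """DC power the PSU actually delivers for a given AC wall reading."""
    return ac_watts * efficiency

def ac_draw(dc_watts, efficiency):
    """AC wall draw needed to deliver dc_watts at a given PSU efficiency."""
    return dc_watts / efficiency

dc = dc_load(57, 0.75)              # 42.75 W DC behind a 57 W wall reading
print(round(ac_draw(dc, 0.85), 1))  # 50.3 W at the wall with 85% efficiency
print(round(ac_draw(dc, 0.80), 1))  # 53.4 W at 80%
```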

I don't think it's possible to go much further than this without starting to power gate entire AICs or onboard controllers at idle, which would be... well, troublesome in practice. But these levels are perfectly attainable with even high-end hardware today.


----------



## InVasMani (Apr 1, 2022)

1500W PSU: it's not much, but it's all mine... *proceeds to use like 10% to 20% of rated wattage on average*


----------

