# The true nature of E-cores: how effective are they?



## wheresmycar (Aug 30, 2022)

*PLEASE*, NO INTEL/AMD FANBOY ANTAGONISM!

What is your opinion on Intel's motivation for introducing E-cores? Some time ago I met a self-confessed AMD jock who suggested Intel's E-cores are just a poor attempt to overshadow AMD's core counts, which can't be matched with performance cores alone. Although my buying decision is mostly based on benchmarks and price, I have to admit that every time I come across E-cores I end up second-guessing the purpose behind them.

Whether that claim is true or not, are E-cores effectively achieving what they're designed for?

- If yes, how effectively, and in what type of workloads?

- If no, might this just be down to a poor implementation in ADL, with positive signs going forward (next gen/gens)?


----------



## cvaldes (Aug 30, 2022)

While I don't know the details of Intel's E-core implementation, they are certainly motivated to improve performance-per-watt, which is a hugely important metric for their datacenter business.

Apple's philosophy is that performance cores handle intensive workloads that are sensitive to latency, while efficiency cores handle background and threaded workloads. This M-series SoC philosophy carries over from their long history of doing the same with A-series mobile SoCs. I remember Apple making some sort of claim that their efficiency (Blizzard) cores provide about 70% of the performance of their performance (Avalanche) cores at a fraction of the power.

I surmise that a lot of the actual performance gains are heavily tied to the CPU task scheduler and its ability to accurately assign workloads to the right silicon.

Of course, Apple takes this a step further by putting ML cores and GPU cores on the same package, all of them drawing from the same RAM. Apple also has hardware media transcoders, security silicon, and I believe some signal processing stuff. They introduced these on their homegrown T2 Security Chip before they unveiled the M-series silicon.

For Apple, it's important to acknowledge that over 85% of their Mac unit sales are notebook models. They are supremely motivated to offer great performance-per-watt for battery powered devices.

It's important to recognize that datacenter is the fastest-growing business for the big three: Intel, AMD, and Nvidia. Much of their focus on silicon features addresses that business rather than the traditional PC (desktop or notebook) market, which was pretty stagnant before pandemic-driven work-at-home policies temporarily bolstered PC sales.

For sure, Datacenter business growth prospects are also driving Intel's dive into discrete GPU development far more than PC gaming.


----------



## ir_cow (Aug 30, 2022)

I believe TPU already has an article on this topic.


----------



## Mussels (Aug 30, 2022)

Intel needed them to compete with AMD's multithreaded performance.
Intel's cores are less power-efficient, and in order to compete they had to keep adding cores or raising clock speeds - and they're already pushing wattages far higher than we've ever seen before.

To keep the ST performance and gain MT performance, they added power-efficient cores to boost the MT side.


For most users, they're pretty useless. Gamers see no benefit from them, home users see no benefit - it's not like Windows will drop down to E-cores only to save power on the desktop or anything; they just kick in to help with heavily threaded workloads.


----------



## phanbuey (Aug 30, 2022)

They are space-efficient cores that add multithreaded oomph, plus extra threads for background tasks, for the least amount of die space. There's a piece of silicon (the Thread Director) that automatically steers background tasks onto them while the foreground application runs on the performance cores.

In practice the implementation works well - especially for a first gen product.

I would disagree with @Mussels - for gamers, they let the P-cores run the game while providing extra threads for background tasks, greatly reducing the cost to the foreground application.

12700k maxing the E-cores when downloading a game in the background and still having P-cores for gaming ! : intel (reddit.com)

The end result is that you get the same or better MT, and extremely high ST, on a less dense and less power-efficient node than TSMC 5/7. So I would say they work well enough that without them, Intel would not be competitive in the desktop space.


----------



## GerKNG (Aug 30, 2022)

wheresmycar said:


> What is your opinion on intels motivation for introducing e-cores


A desperate attempt to not lose by a lot in any kind of multithreaded workload against a 142 W Zen 2 Ryzen from three years ago.
More cores are barely possible when 8 cores already pull north of 200 W.

In my opinion, E-cores have absolutely no reason to exist in anything that does not run off a battery.


----------



## cvaldes (Aug 30, 2022)

It's worth pointing out that the US federal government has power-efficiency mandates for computing equipment, starting with the power supply unit but realistically encompassing everything in a PC (as well as peripherals) from a holistic viewpoint. It's safe to assume that the feds will continue to push for higher efficiency levels in the future.

Remember that electricity is money. If you have 10,000 people at the General Accounting Office step away from their desktop PCs a couple of hours a day for meals, breaks, meetings, whatever, that's power/money to be saved.

There's an EnergyStar sticker on your monitor for a reason.

Let's say the policy is to leave a desktop PC on 24x7 for maintenance (software updates, security scans) and backup purposes. A full-time employee (220 days a year, 8 hours/day) is only on their computer 20% of the time. Okay, maybe it'll go to sleep/hibernate, but "Wake on LAN" will revive the system. Still, any time a system is idling it's drawing power, so it may as well be doing tasks in the background. Even if PCs are shut off on weekends, employees are still at their computers less than a third of the time.
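Those utilization figures are easy to sanity-check. A quick sketch: the 220-day/8-hour numbers come from the post; the 104 weekend days per year is my assumption for the "off on weekends" case:

```python
# Fraction of powered-on time an employee actually spends at the PC.
hours_at_pc = 220 * 8            # 1760 h/year at the keyboard (per the post)
always_on   = 365 * 24           # 8760 h/year if the PC runs 24x7
weekday_on  = (365 - 104) * 24   # 6264 h/year if it's off on weekends (assumed)

print(f"on 24x7:         {hours_at_pc / always_on:.0%} utilization")   # 20%
print(f"off on weekends: {hours_at_pc / weekday_on:.0%} utilization")  # 28%
```

Even in the friendlier weekends-off scenario, the machine sits unused roughly 70% of the time it's drawing power.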

And as E-core/P-core silicon becomes more prevalent, it is likely that Microsoft will optimize Windows 11, Windows 12, and beyond to take better advantage of the differences between these cores. How Windows handles E-cores today is unlikely to be how it handles them five years from now. Alder Lake is the first generation of consumer CPUs with this technology, but it's here to stay.

It is foolish to look at Alder Lake and say, "This is how it will be forever." We are already surrounded by other instances of differentiated silicon. CPU E-cores are the latest and most prominent consumer-facing development, but they are certainly neither the first nor the last.

Somewhere in a lab, there's a function being done on prototype silicon that is currently being handled in software by your typical CPU. I don't know what it is but people are working on it. AV1 decoding? Its successor? Its successor's successor?


----------



## Mussels (Aug 30, 2022)

phanbuey said:


> They are space-efficient cores that give more mutlithreaded oomph, as well as additional threads for background tasks for the least amount of die space, there's a piece of silicon (thread director) that automatically runs background tasks on them while the foreground application runs on the performance cores.
> 
> In practice the implementation works well - especially for a first gen product.
> 
> ...



If you need to use Process Lasso and manually screw with things to get the E-cores to help, they're not useful - you could do the same on any regular CPU.
The comments section on that very post says that Windows already does this for dual-CCX Ryzen.


Edit: the poster was playing StarCraft II, a game that uses one thread for the game and one for graphics rendering.
He said he notices lag when downloading while gaming on an antique DX9 title - because his P-cores lose their boost frequencies when they multitask.


----------



## oxrufiioxo (Aug 30, 2022)

I feel like most buyers of ADL seem pretty happy with their purchases, and from the outside looking in, it seems the E-cores accomplished what Intel set out for them to do: boost MT performance without making the CPU consume 400 W of power. Alder Lake is the first Intel arch that is remotely exciting in forever, though, and both the 12600K and 12700K are hard to beat on price/performance if you're building a new system.

Personally I'm not a huge fan, and it will likely be at least Meteor Lake before I feel comfortable supporting Intel's hybrid arch - assuming it's actually better than Zen 4 X3D (or Zen 5, if it slips even further).


----------



## Panther_Seraphin (Aug 30, 2022)

E-cores IMO should really be focused on laptop/ultra-small/small-form-factor PCs, for power-efficiency and cooling limitations. IF and WHEN operating systems AND software become more aware of big.LITTLE-style architectures, they will have relevance in higher-end desktops/workstations etc.

Intel, however, have shoehorned them into high-end desktop chips purely to make up core-count/efficiency claims, as they are fighting a node deficit currently and will be two nodes down when Zen 4 releases in under 4 weeks.


----------



## cvaldes (Aug 30, 2022)

Mussels said:


> If you need to use process lasso and manually screw with things to get the E-cores to help, they're not useful - you could do the same on any regular CPU


That's poor support by the operating system today, not inferiority of E-cores as a concept. In this case, Intel can build it, but Microsoft (or the Linux developers) need to properly implement the feature.

I don't recall Apple nailing E-core support the first time around either, on whatever iPhone SoC it debuted on.

Eventually Microsoft will figure this out. I don't know when but they will.


----------



## natr0n (Aug 30, 2022)

Simplistically e-cores/e-peen doesnt do wonders.


----------



## ppn (Aug 30, 2022)

Intel "Meteor Lake" 2P+8E Silicon Annotated

Using Meteor Lake as an example:

4 E-core threads ~ 4.7 mm², and the cluster lacks an AVX-512 unit.
4 P-core threads ~ 8.6 mm², including the AVX-512 unit that is disabled on Alder Lake.

4 E-core threads @ 4.0 GHz score ~2000 in CPU-Z.
4 P-core threads @ 5.0 GHz also score ~2000, but need roughly double the power and nearly double the die area to do it.
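Those figures can be turned into a quick perf-per-area comparison. All numbers below come from the post above; this is back-of-envelope arithmetic, not a measurement:

```python
# Rough perf-per-area comparison using the die-area and CPU-Z figures
# quoted in the post (4 E-core threads vs 4 P-core threads, equal scores).
e_area, p_area = 4.7, 8.6   # mm², per the post
e_score = p_score = 2000    # CPU-Z multithread points, per the post

print(f"E-core points/mm²: {e_score / e_area:.0f}")  # 426
print(f"P-core points/mm²: {p_score / p_area:.0f}")  # 233
print(f"area advantage:    {(e_score / e_area) / (p_score / p_area):.2f}x")  # 1.83x
```

In other words, at iso-score the E-core cluster delivers roughly 1.8x the throughput per square millimetre, which is exactly why Intel spends die area on them for MT workloads.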


----------



## phanbuey (Aug 30, 2022)

I think it doesn't matter which CPU you go for this generation; they're going to be very close in performance regardless.

To the OP's original question: are E-cores good at what they're designed for? I think everyone who has used them would say yes. Benchmarks would say yes. The fact that Intel is competitive at all would point to yes.

People who haven't used them, or those who hate Intel because it was run by a bunch of anti-competitive shysters and BS artists for over a decade, would say no (and I kind of get it).

That being said, if you're on the fence between AM5 and Raptor Lake, I don't think E-cores are really the consideration. I would go AM5: you'll have a platform that lasts more than one generation, and when X3D lands - that's where the performance will be.


----------



## Psychoholic (Aug 30, 2022)

Anecdotal, but my 12900K system is the smoothest and most responsive system I have ever had - and that includes my previous 10900K and 3900X machines. So I guess the E-cores are doing a good job with background tasks.


----------



## dragontamer5788 (Aug 30, 2022)

https://www.ti.com/lit/an/scaa035b/scaa035b.pdf
		


Here is the fundamental model taught to EEs of how much power a transistor uses. It's 16 pages - dense, and you definitely need above-average math skills to make it through.

One of the major formulas is as follows (there are other contributions to power, but I think this is one of the most important pieces of power consumption; we'll ignore the other bits of the document for simplicity):

P ≈ C_L × V² × f × N_SW

where C_L is the load capacitance, V the supply voltage, f the clock frequency, and N_SW the number of outputs switching per clock.
C_L, IIRC, is a function of how small the transistors are: the smaller the transistor, the less capacitance, and the less power they use. This, along with density (i.e. packing more and more transistors into smaller and smaller areas), is why advanced nodes are such a big deal - less capacitance means less power, and more transistors mean more parallelism.

That being said, if we assume the same process, we are stuck as far as capacitance is concerned.

------------

The things we can control as engineers are:

1. Voltage -- the lower the voltage, the slower the clock must be. But notice the square on voltage: 1.5 V draws 2.25x more power than 1 V.

2. Frequency -- the higher the frequency, the more power is used. 3000 MHz draws 50% more power than 2000 MHz.

3. Number of outputs -- the number of bits that change each clock tick is more of a software thing than a hardware thing, but different hardware designs could use fewer bits per clock tick at the cost of speed.

Fundamentally, if you're aiming for low power usage, you'll make a dramatically different design than if you're aiming for absolute speed.

---------

E-cores are probably designed to run at much lower clocks and much, much lower voltage, with fewer bits changing per clock tick. This dramatically slows down code, but the power decrease is multiplicative. If you can speed the code up with multiple threads - say, 4 cores each using only ~10% of a full core's power - you get roughly 40% of the power rather than 100% for one core running as fast as possible.

That's the general idea. There are more complications, of course, but this should give you at least an idea of what EEs are thinking with these chip designs. Overall the physics make sense, but there's a big question of whether software will be written correctly for this new model of computation. After all, we already know it's not possible to turn every algorithm into a parallel form, and in many cases 4x threads may only yield a 2x speedup, so the power savings won't always be as good as expected.
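A minimal numeric sketch of that trade-off, using the dynamic-power relation P ≈ C·V²·f. The capacitance, voltage, and frequency values below are made up for illustration; they are not real silicon figures:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# All constants here are illustrative, not measured values.
def dynamic_power(c, v, f):
    return c * v**2 * f

C = 1.0  # normalized load capacitance (same process node for both designs)

# One P-core flat out: a high voltage is needed to sustain a high clock.
p_core = dynamic_power(C, v=1.25, f=5.0)

# Four E-cores: halving the clock permits a much lower voltage,
# and the V^2 term makes the savings multiplicative.
e_cores = 4 * dynamic_power(C, v=0.80, f=2.5)

print(f"1x P-core power: {p_core:.2f}")   # 7.81 (normalized units)
print(f"4x E-core power: {e_cores:.2f}")  # 6.40 (normalized units)
```

With these toy numbers the four slow cores offer twice the aggregate clock-throughput for ~18% less power - but only if the workload parallelizes; in the 4x-threads-for-2x-speedup case from the post, that advantage shrinks or vanishes.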


----------



## Mussels (Aug 30, 2022)

Psychoholic said:


> Anecdotal but my 12900K system is the smoothest and most responsive system I have ever had, this includes my previous 10900K and 3900X machines, So i guess the E-Cores are doing a good job at background tasks.


I mean, sure, but I also felt the same way when I upgraded to my Ryzen 1400 back in the day.

You can't be sure it's the E-cores making it smoother, rather than simply a faster CPU overall with more cores/threads.


----------



## wheresmycar (Aug 30, 2022)

Appreciate everyone's input on this. I understand that for a gamer, basic business applications, Photoshop, occasional MT video editing/rendering, and general use, the E-cores contribute little, hence not a feature worth aspiring to. If correct, this makes things a little easier to understand. Although multithreaded video rendering may benefit to some degree, these are just home videos I work on maybe once or twice a month.

As someone rightly guessed, I'm looking into E-cores and sharpening up on the know-how prior to pulling the trigger on Zen 4/RPL. I've been leaning towards the AM5 socket for that gen-to-gen support, which was a mouthful with AM4, although I'm more than happy to wait and see everything come to benchmark fruition (Zen 4, RPL and the 40-series cards). My 9700K and 2080 system is holding up well, so no hurry!


----------



## openbox1980 (Aug 30, 2022)

I have used a laptop for a long time now; for me, E-cores are worthless. Sure, I have a fast laptop with 6 P-cores and 8 E-cores, but the battery life on Intel laptops is way worse than on modern AMD laptops.

I couldn't wait any longer for the AMD 6000-series processors to come out, so I bought a laptop with an Intel processor. And to this day, not enough laptops have the newer 6000-series APUs, and the ones that do either aren't available in my market or aren't aimed at buyers like me.


----------



## dirtyferret (Aug 30, 2022)

Mussels said:


> Intels cores are less power efficient, and in order to compete they had to keep adding cores or raising clock speeds...For most users, they're pretty useless. Gamers have no benefits from them, home users have no benefits


But why the general obsession over core counts - threads, E-cores, P-cores, i-cores, x-cores, etc.? Shouldn't we be judging CPUs as a whole, not just parts of them? As one of your fellow Aussies, TechSpot/Hardware Unboxed, has stated many times: _"Games don't require a certain number of cores, they never have and they never will. Games require a certain level of CPU performance, it's really that simple"_


----------



## gffermari (Aug 30, 2022)

That approach is the future, and AMD will have to adapt.
We only need 6-8 extremely fast cores at ridiculously high frequency, plus a truckload of small ones to step in when needed (MT).

The all-P-core concept is a dead end.


----------



## dirtyferret (Aug 30, 2022)

Psychoholic said:


> Anecdotal but my 12900K system is the smoothest and most responsive system I have ever had, this includes my previous 10900K and 3900X machines,





Mussels said:


> You can't be sure it's the E cores that makes it smoother, and not just being a faster CPU overall with more cores/threads



I'm currently all Intel (12600K, 9700K, 10400) plus laptops, although 20 years ago it was the exact opposite. For work, I can't tell the difference between any of the CPUs; for gaming, I can't tell the difference between the 9700K and the 12600K (the 10400 is only used for work).


----------



## Toothless (Aug 30, 2022)

I swear some of you people love to turn something harmless into something controversial. E-cores are fantastic at managing background processes, leaving more room for the P-cores to stay free for heavier tasks.

This isn't about power usage. It's not about the Skylake-equivalent of these smaller cores being, well, not as good. They're just support cores, in a sense.


----------



## ThrashZone (Aug 30, 2022)

Hi,
If an OS ever actually optimizes for these E-threads, let me know.
Until then I'll stick to real cores with two threads, thank you very much.


----------



## ERazer (Aug 30, 2022)

E-cores are only as good as the software support for them (looking at you, MS).


----------



## P4-630 (Aug 30, 2022)

It's the future and I'm supporting it.


----------



## AM4isGOD (Aug 30, 2022)

Why such vitriol and hatred of E-cores, or whatever Intel is doing, from AMD users? You aren't even using Intel, so what does it matter to you? It seems like plain hatred of Intel to me. You chose to buy AMD, so why spout so much hate for ADL/E-cores?

However much power Intel uses with E-cores, you don't have to use it or cool it. So what if it's inefficient compared to what AMD is producing - you're not going to buy it or use it. I am so sick of hearing the same anti-E-core crap from nearly every AMD user on TPU.

My 12700K runs cool and is as good as nearly any AMD CPU out right now (AM5 isn't out yet). The only reason I kinda regret it is that I'm sick of hearing the same shite every day on TPU.


----------



## ShrimpBrime (Aug 30, 2022)

I have a 12400F. 

No E cores.

I can't tell they are not there. Excellent performance. 

Intel wanted to try some innovations and so we have it. 

Don't see them useful for my personal use case, but I'm sure someone likes the extra cores.


----------



## Mussels (Aug 31, 2022)

gffermari said:


> That approach is the future and AMD will have to adapt.
> We only need 6-8 extremely fast cores at ridiculously high frequency and a truckload of small ones to work when is needed (MT).
> 
> The concept all p cores is a dead end.


I agree - this is why I like the theory that AMD will mix generations of cores to make E- and P-cores less costly to implement, and use older stock:
Zen 4 (P-cores) + Zen 3 (E-cores), for example.

If they lock them to the range where Zen 3 has its best power efficiency (let's say 4.4 GHz), they'd make some fantastic E-cores for no real investment from AMD - they already have a plentiful supply of them, and they're already modular thanks to the CCX design.

I also agree with ERazer - it all comes down to OS support.
Right now AMD uses its chipset driver to set preferred cores, so they already have a method for this (set all P-cores as preferred cores, and most of the problem is solved).


----------



## hat (Aug 31, 2022)

I don't see why it matters. From where I sit, with my 2600k, the e-cores look pretty good if they are indeed as strong as a Skylake core. I know we've had many iterations of Skylake already, but if we can now get those down into low power "efficiency" cores, that seems pretty good... and they're stronger than my 2600k, anyway. Without knowing much about the design, I suspect they're just bare CPU cores without much of the fancy stuff like AVX support added on...


----------



## Mussels (Aug 31, 2022)

AM4isGOD said:


> Why such vitriol and hatred of E cores or even whatever Intel is doing by the AMD users? you aren't even using Intel, so what does it matter to you. It seems like just hatred of Intel to me. You chose to buy AMD so why spout so much hate for ADL/E cores.
> 
> However much power Intel with E cores uses, you don't have to use it, or cool it. So what if it is inefficient compared to what AMD is producing, you are not going to buy it or use it. I am so sick of hearing the same anti E core crap from near every AMD user on TPU.
> 
> My 12700k runs cool, and is as good as near any AMD CPU out right now (AM5 is not yet) The only reason i kinda regret it is i am sick of hearing the same shite every day on TPU.


Nah, the problem is your warped view of what efficiency means.
Your 12700K runs cool because you barely use it.

If I capped my FPS to 60, my "hot" 5800X would barely hit 40°C, but it'd be stupid of me to tell everyone that all 5800Xs run that cold, or that those barely-above-idle figures were 'efficient', as if that's how the CPU always runs in every task.



hat said:


> I don't see why it matters. From where I sit, with my 2600k, the e-cores look pretty good if they are indeed as strong as a Skylake core. I know we've had many iterations of Skylake already, but if we can now get those down into low power "efficiency" cores, that seems pretty good... and they're stronger than my 2600k, anyway. Without knowing much about the design, I suspect they're just bare CPU cores without much of the fancy stuff like AVX support added on...



The concept is great - I'm all for mixed E- and P-cores. The problem is that Intel is still going balls to the wall with clock speeds, past the efficiency sweet spot entirely.
Intel Core i9-12900K E-Cores Only Performance Review - Power Consumption & Efficiency | TechPowerUp

Single-threaded, the E-cores are great - chart-topping.
The problem is that they're used for multithreaded work... and there they fall behind.
These cores, designed purely for efficiency, are less efficient than Zen 3, Zen 2, and even Intel's 9th- and 10th-gen CPUs.

They've been pushed too high on clocks and voltage to beat AMD in benchmarks, negating their entire purpose for existing.


----------



## A Computer Guy (Aug 31, 2022)

Mussels said:


> I agree - this is why I like the theory that AMD will mix generations of cores to make E and P cores less costly to implement, and use older stock.
> Zen 4 (P cores) + Zen 3 (E cores) for example
> 
> If they lock them in the area that Zen 3 has it's best power efficiency (lets say 4.4GHz) they'd make some fantastic E cores for no real investment from AMD - they already a plentiful supply of them, and they're already modular thanks to the CCX designs.
> ...


I'm hoping that, in terms of OS support, they can add some per-thread parameter to configure threads to automatically prefer (or land on) Best, Medium, or Low performance cores. Keep in mind this is different from Critical, High, Medium, or Low priority threads - I'm talking about CPU affinity per thread, not per process. It would take some time for software to be updated, but once done, software could better tune itself to core types (looking at you, Corsair, you boost killer), and for gaming it could offer the end user a way to optimize their game threads without something like Process Lasso. And then also let virtualization guests translate that to host cores, for the win.
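A rough approximation of per-thread affinity is already possible from user space today. A minimal sketch using only the Python stdlib - note that `os.sched_setaffinity` is Linux-only (Windows would need `SetThreadAffinityMask` via ctypes), and the core numbers are purely illustrative:

```python
import os

def pin_to_cores(cores):
    """Pin the current process/thread to the given CPU cores (Linux only).

    Returns the resulting affinity set, or None where the API is
    unavailable (e.g. Windows, macOS).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None  # no portable stdlib equivalent on other platforms
    os.sched_setaffinity(0, set(cores))   # 0 = the calling process
    return os.sched_getaffinity(0)

# e.g. keep a latency-sensitive thread on hypothetical P-cores 0-7:
# pin_to_cores(range(8))
```

This is coarser than what's described above (the OS can't know which cores are "Best" without topology hints like Thread Director), which is exactly why a first-class per-thread preference API would be an improvement.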


----------



## nguyen (Aug 31, 2022)

Mussels said:


> Nah the problem is your warped views on what efficiency means - and your rabid fanboyism.
> Your 12700k runs cool because you barely use it
> 
> If I capped my FPS to 60 my "hot" 5800x would barely hit 40C, but it'd be stupid of me to tell everyone that all 5800x run that cold or that those barely above idle figures were 'efficient' as if that's how the CPU always runs in every task.
> ...






So with E-cores disabled, 8 P-cores get a ~20K score - that's 2500 points per P-core.
8 E-cores take up the same die space as 2 P-cores, so say Intel made a 10-P-core chip: that would get them ~25,000 points, while 8P+8E gets them 27,700.

There are non-K versions for people who prefer better efficiency at stock (much cheaper, too) and have no desire to tune their PC. Only idiots pay more for the K version and then don't tune it to their liking.


----------



## lilhasselhoffer (Aug 31, 2022)

So... most of the hype is pretty stupid.  You'll see interesting perspectives that E-cores are the future, that P-cores are the only thing that matters... and then in the same breath people might tell you that RISC-V will trounce any x86-64 processor on the planet.

The simplest bit of this is that Intel and AMD both experiment... with some experiments better than others.  Starting off... Itanium.  If anyone fanboys for Intel, ask how that turned out.
Now, the AMD fanboys are equally as stupid.  Ask them how Bulldozer worked out, when each pair of cores had to share part of the pipeline.  Read: terrible.


To the things at hand.  You have multi-core CPUs.  Each of these cores is scheduled to perform different tasks... based upon software.  E-cores, or efficiency cores, are stripped down to handle less intensive or otherwise lower-demand processes - think background processes with low priority.  The catch is that in most cases there's very little reason for home users to have so many processes running that they'd see a need for E-cores.
Is this a tactic for Intel to claim that their processors have the most cores?  I can't attest to their board meetings... but Intel used to sell itself on having the highest frequencies and greatest single-thread performance.  AMD has made huge leaps there... and Intel relatively little progress.  By cost, throwing twice the cores onto a chip and claiming the best bang for the buck is... pretty easy to call a win.  Given their history... I think Intel is trying for any win it can get.


Now... the final bit.  Is it worth having E-cores?  There are arguments for it... in the same way that ray tracing was the thing that sold Nvidia cards: in a very limited usage case, you could see relatively large performance gains.  So... why do I think it's not ready for prime time?  Well, right now, if you're a common user, a 5600X is a silly-good value for the money.  It has the same core and thread count as a $600 CPU from 2011... with better connectivity, it's up to date, and it's about 33% of that price (unadjusted for inflation).  In more than a decade we've only managed to start fully using 6 cores on the consumer market... up from the 2 that was the style at the time.
If it isn't clear, my opinion is that E-cores lack the integral scheduling needed to take advantage of them in anything but Windows 11, Windows 11 is currently suffering the every-other-version-of-Windows-sucks curse, and even if it weren't, you shouldn't expect things to be so clear-cut.

Personally, I haven't done Intel products for a few years.  They are treading water, and I think that relying on a software scheduler is going to make E-cores very hard to justify for anyone but power users, who may be better off considering different hardware for their workloads.  AMD fans should be getting flashbacks... because this is basically the argument that was put forward for Bulldozer.
Most of my games and software library runs on 1-3 cores... based upon the Windows scheduler.  I run one program at high priority, maybe a second at equal, and a bunch of stuff that can be stacked on a nearly hibernating core and still stay updated.  I have no reason to believe that what Intel is putting out with an inflated core count will serve me better than AMD... and the fact that Intel doesn't seem to base their marketing material on it (I see them going after gamers over core count about 5:1) is a tacit agreement.  While my experience is anecdote, you can find your own by simply doing what you do and recording the data from the Windows scheduler...  If you're genuinely pinging all cores, then it's probably worth adding more... assuming you've got far more busy processes than your core count.


----------



## R-T-B (Aug 31, 2022)

hat said:


> I don't see why it matters. From where I sit, with my 2600k, the e-cores look pretty good if they are indeed as strong as a Skylake core. I know we've had many iterations of Skylake already, but if we can now get those down into low power "efficiency" cores, that seems pretty good... and they're stronger than my 2600k, anyway. Without knowing much about the design, I suspect they're just bare CPU cores without much of the fancy stuff like AVX support added on...


E-cores support AVX, AVX2, etc.  The only things they lack are hyperthreading and AVX-512.


----------



## lilhasselhoffer (Aug 31, 2022)

nguyen said:


> View attachment 260079
> So with E-cores disabled, 8P cores get 20K scores, that 2500points per P-core
> 8E cores take up the same die space as 2P cores, let say Intel make 10P cores, that would get them 25000 points, meanwhile 8P+8E get them 27700 points.
> 
> There are non-K version for people who prefer better effieciency at stock (much cheaper too), and have no clue how to tune their PC. Only idiots pay more for K version and not tune their PC to how they like it.



So...help me here.

You're responding to someone saying that you're not really using your 12700K.  Your response is that you disabled a large chunk of the silicon... thus literally disabling the components this thread is meant to discuss.
You then link to a video comparing a 12900F and a 12900K.  One of the video's conclusions is that there's a 25% difference in power draw between the F and K SKUs... and the performance difference is 0-4%.  So the result is that you pay more for the K, you have a much higher power draw, and you get a boost that is functionally within the error of the testing methodology - reasonably chalked up to regular process variation.  You then say you can get the performance of the K down to that of the F by disabling the nice shiny new E-cores... and experience an uplift by tuning... despite the cited video stating that more frequency <> better performance???

You have to be trolling...right?


The alternative is that you are endorsing somebody spending extra money to get:
1) Slightly higher clocks at a much higher power draw.
2) No iGPU... so if you want Quick Sync, that's gone.  (Of course, it's the same die, just binned, so it's dark silicon.)
3) A much higher thermal envelope: a 125/241 W TDP versus 65/202 W.
4) The ability to disable stuff... because there's nothing quite like paying extra for power windows and then immediately ripping them out.
All of this is "acceptable" because anybody who wants easy over/underclocking knows they have to spend more - and if they buy a K-series part for non-clocking reasons, they're idiots.



I'm really having a hard time swallowing this when the argument should be about the E-cores... which seem to be entirely unused in the one example provided.  Of course, you could be a fanboy... or you could be arguing that, based on the numbers, E-cores aren't doing jack.

I mean: the 12600K has 6 P-cores and is clocked 48% higher than the 12400F, which also has 6 P-cores, yet in quad-core calculations it somehow only manages 20% faster speeds.  If you instead compare octo-core - where the processors either multithread or use E-cores - the gap becomes 33%... so a 48% clock increase, on a chip with 4 more physical cores, still delivers gains well short of that clock increase.
CPU numbers


Consider me skeptical.  I bought Sandy Bridge and I avoided Bulldozer, because artificially swelling core counts was stupid.  I'm buying Ryzen 3 and skipping big.LITTLE; it's the same stupidity.  You're welcome to continue to feed the Intel machine.  I'd prefer to vote with my wallet and tell Intel to make steps forward rather than invest their money in making a bad product that runs really fast.  If they could integrate the scheduler as a hardware component this would be a different story, but it seems like Intel is relearning the lesson AMD learned with Bulldozer: software is king, and hardware without software is blowing money on nothing.





Let me TL;DR this.  
My worst boss keeps telling me, "It's about how fast we can make the car go."  In this case, an analog for how many units of a thing we can make.  That's... cool.  The problem is that's 1950s thinking.
It's not about how fast the car can go, it should be about how far we can run on a tank of gas.  The analog there is that the production of parts has to be metered by how tightly we control inputs, how efficiently we can run, and how we balance resources to be as profitable as possible.  

Intel wants to sell more, and use that gas.  They run out of gas 10 miles into a 50-mile race, 10 minutes in.  AMD has given up on clocking to the moon and is less about single-thread performance; they make it 30 miles into the race, at 25 minutes.  Arm is a diesel vehicle: it could theoretically go the 50 miles, but it can't enter the race because the world isn't ready for diesel yet.  AMD is not the solution.  It is not finishing the race.  It is not the fastest.  Thing is, we pick from what we have, and right now AMD is the best option for most.  The only way we make Intel better is to vote with our wallets and force them to either fundamentally redesign their engine (what AMD did with Ryzen) or compete by undercutting on price.

To extend the metaphor just slightly, E-cores are like building a hybrid electric vehicle.  Theoretically it's powerful, but when you have to house two drivetrains plus the brains to make them work together, there's precious little that actually makes the car run better, even if you feel better.


----------



## nguyen (Aug 31, 2022)

lilhasselhoffer said:


> So...help me here.
> 
> You're responding to someone saying that you are not using your 12700k.  Your response is that you disabled a large chunk of the silicon...thus literally disabling the components that this thread is meant to discuss.
> You then link to a video that compares a 12900F and a 12900K.  One of the video's conclusions is that there's a 25% difference in power draw between the F and K SKUs, while the performance difference is 0-4%.  So the result is that you pay more for the K, you have a much higher power draw, and you get a boost that is functionally within the error of the testing methodology and can reasonably be chalked up to regular process variation.  You then say you can get the performance of the K down to that of the F by disabling the nice shiny new E-cores, and experience an uplift by tuning, despite the cited video showing that more frequency does not mean better performance???
> ...



I have no idea what you are arguing about, except that you want to argue.  I don't have a 12700K.

So you are ignored now.


----------



## tabascosauz (Aug 31, 2022)

Sounds like some of y'all need to re-read the first line of the OP:



wheresmycar said:


> *PLEASE*, NO INTEL/AMD FANBOY ANTAGONISM!
> 
> What is your opinion on intels motivation for introducing e-cores.



It's about e-cores, not whether your ego is in bed with AMD or Intel. There's no reason to be playing the fanboy card at all. Lay off the insults, or the thread gets closed.


----------



## ExcuseMeWtf (Aug 31, 2022)

I specifically went for the i3-12100F to avoid E-cores, because I am still using Windows 10, and AFAIK there are no planned scheduling improvements to account for them on that system.


----------



## AM4isGOD (Aug 31, 2022)

Whatever Intel's reason for introducing the E-cores, they greatly increase MT performance, which is surely a good thing. IMO there is no doubt that, once the scheduler is working correctly, they will pay off. As I have said before, and some people ignore, it is a waste to run background tasks on a P-core. When you are gaming, surely you only want your game running exclusively on your P-cores, and not all the other background crap.

With an AMD CPU, which has P-cores only, all the other crap tasks you have running while gaming are using your P-cores, which will surely have a detrimental effect on your game's performance.

Admittedly the E-cores are probably not being managed perfectly at the moment, but at some point they will be. Then, when Intel users are gaming, their background crap tasks will run properly on the E-cores, whereas AMD users will have all the same tasks eating game performance on their P-core-only CPUs.


----------



## ThrashZone (Aug 31, 2022)

Hi,
Doubt anyone would run out of gaming resources using an 8-core, and I'd probably call that a mainstream core count nowadays.


----------



## Easy Rhino (Aug 31, 2022)

The global cabal is hell-bent on getting off oil and gas, so energy prices are expected to skyrocket. Consumers won't be purchasing a new phone every year or a new PC every 3-4 years, and enterprises won't be replacing servers every 4-5 years, if their power bills are all doubling and tripling.


----------



## R0H1T (Aug 31, 2022)

R-T-B said:


> E-cores support AVX, AVX2, etc.  The only thing they lack is hyperthreading and AVX-512


And they won't have AVX-512 anytime soon, unless they follow the AMD approach. Not to mention that they essentially neuter the AVX-512 that's actually present on the P-cores! Then there's switching tasks between the various cores, and of course priority, which is less of a problem if the OS can handle it properly. Right now Intel is just throwing in E-cores for the sake of the name and, ironically, to counter AMD with "more cores".


----------



## P4-630 (Aug 31, 2022)

R0H1T said:


> AVX512


Isn't used for gaming anyway.


----------



## dgianstefani (Aug 31, 2022)

Mussels said:


> Nah the problem is your warped views on what efficiency means
> Your 12700k runs cool because you barely use it
> 
> If I capped my FPS to 60 my "hot" 5800x would barely hit 40C, but it'd be stupid of me to tell everyone that all 5800x run that cold or that those barely above idle figures were 'efficient' as if that's how the CPU always runs in every task.
> ...


They're only designed for area efficiency; power efficiency is a second priority.


----------



## Ellertis (Aug 31, 2022)

Mussels said:


> I agree - this is why I like the theory that AMD will mix generations of cores to make E and P cores less costly to implement, and use older stock.
> Zen 4 (P cores) + Zen 3 (E cores) for example
> 
> If they lock them in the area where Zen 3 has its best power efficiency (let's say 4.4GHz) they'd make some fantastic E cores for no real investment from AMD - they already have a plentiful supply of them, and they're already modular thanks to the CCX designs.
> ...


The leaks suggest Zen 5 (P) + Zen 4c (E).


----------



## TheoneandonlyMrK (Aug 31, 2022)

dgianstefani said:


> They're only designed for area efficiency, power efficiency is a second priority.


And that's why they're shit and implemented like shit, IMHO.

There should be no more than 8 E-cores.

They should be actually efficient.

They should be doing 95% of all the work,

with only time-sensitive apps pushed to P-cores.

But they are not, and do not.

Why? Because they're only there to increase the core-count number on the spec sheet.

I've got nothing but love for efficiency cores, but this tat ain't that.

Opinion over.


----------



## P4-630 (Aug 31, 2022)

Mobile phones have had it for at least a decade already.....
I wouldn't be surprised if the next-gen game consoles also come with "efficiency" and "performance" cores....


----------



## dgianstefani (Aug 31, 2022)

TheoneandonlyMrK said:


> And that's why they're shit and implemented like shit, IMHO.
> 
> There should be no more than 8 E-cores.
> 
> ...


Everything on that list can be fixed with kernel scheduling, software optimisation, and BIOS limits.  The hardware is fine.


----------



## ThrashZone (Aug 31, 2022)

Hi,
Yeah, the new HEDT chip.

The issue is waiting for MS to optimize it.
They can't even design a decent start menu.


----------



## TheoneandonlyMrK (Aug 31, 2022)

dgianstefani said:


> Everything on that list can be fixed with kernel scheduling, software optimisation and bios limits.  The hardware is fine.


But it won't be, and that's why this implementation won't win me over.

Their true nature is a spec-list booster, and I will admit they work well in that regard.

They're effective all right; without them Intel would be f£#@ed, simple.


----------



## dgianstefani (Aug 31, 2022)

TheoneandonlyMrK said:


> But won't be, and that's why this implementation won't win me over.


So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th gen Intel?


----------



## TheoneandonlyMrK (Aug 31, 2022)

dgianstefani said:


> So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th gen Intel?


No, my argument is that Intel won't put that effort in.
That is not how they are using those cores.

This isn't Arm's big.LITTLE; theirs works.


----------



## dgianstefani (Aug 31, 2022)

TheoneandonlyMrK said:


> No, my argument is that Intel won't put that effort in.
> That is not how they are using those cores.
> 
> This isn't Arm's big.LITTLE; theirs works.


So Intel's long-term strategy, which consists of parallel core development (something they are specifically betting on with many, if not all, of their upcoming architectural designs, including tile-based chiplets), is to, wait for it, completely discontinue optimizations for said designs?

Interesting opinion.


----------



## R0H1T (Aug 31, 2022)

dgianstefani said:


> Microsoft won't continue to update Windows 11 and its core scheduler


Some things can't be fixed with kernel optimizations. AMD's Bulldozer, for example, wasn't really "fixed" till Win8 *IIRC*.


----------



## dgianstefani (Aug 31, 2022)

R0H1T said:


> Some things can't be fixed with kernel optimizations. AMD's Bulldozer, for example, wasn't really "fixed" till Win8 *IIRC*.


AMD's Bulldozer was more like bull**** tbh, a joke architecture that almost bankrupted AMD; Zen literally saved them. Not sure the comparison is fair here, but I take your point.


----------



## TheoneandonlyMrK (Aug 31, 2022)

dgianstefani said:


> So Intel's long-term strategy, which consists of parallel core development (something they are specifically betting on with many, if not all, of their upcoming architectural designs, including tile-based chiplets), is to, wait for it, completely discontinue optimizations for said designs?
> 
> Interesting opinion.


How you turn what I say into your own theories is amusing.

So, written simply:

They work EXACTLY as Intel intended already.

Yes, they will continue to develop them, and things could change.

But as it is now,

they're pushing E-cores to higher core counts,

and higher frequencies,

solely to compete on multi-core performance and core count.

They're not using them right, IMHO, and never will in this design, with or without Microsoft's assistance.

Because if they did, their performance would be subpar versus AMD.

Later designs might differ, but that's not likely, since ATM they cannot economically make a design that competes in any other way at equal core counts and performance parity.


----------



## dirtyferret (Aug 31, 2022)

dgianstefani said:


> So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th gen Intel?


Whoa, slow down there, maestro. There's a Windows 11?  I think you meant to say Windows 1, 16-bit, right?  Software and hardware upgrades, patches, new releases?  When has that ever happened?


----------



## dgianstefani (Aug 31, 2022)

dirtyferret said:


> Whoa, slow down there maestro. There's a Windows 11?  I think you meant to say Windows 1 - 16 bit right?  Software and hardware upgrades, patches, new releases?  When has that ever happened?


I know right? Progress? Is that what they call what those pesky developers do?


----------



## dirtyferret (Aug 31, 2022)

P4-630 said:


> Mobile phones have it at least for a decade already.....


And Qualcomm says they want to enter the desktop market in 2024.  It will be interesting to see what design they (along with Nuvia, which they purchased) may bring.


----------



## phanbuey (Aug 31, 2022)

TheoneandonlyMrK said:


> How you turn what I say into your theology's is amusing.
> 
> So wrote simply.
> 
> ...



This is true for the K and the highest-end SKUs, where they have to yeet those products out for that 2% win, but the i5s and non-K SKUs, where they're actually tuned properly, really do benefit.  No one ever pays attention to those, yet that's where the performance per dollar is massive and the power is completely under control.


----------



## R0H1T (Aug 31, 2022)

dirtyferret said:


> And Qualcomm says they want to enter the desktop market in 2024.


Where QC is at today, they'd be pouring unleaded gasoline on their investment with anything in the desktop space. That market is already shrinking massively, and unless they have an x86 Apple-killer of a chip, they won't even get a consolation prize for competing against AMD or Intel at that point.


----------



## Dr. Dro (Aug 31, 2022)

I've brought it up before, but I strongly believe that the E-cores are a step Intel has taken towards increasing processor density in the same package area. The main target of this is obviously the server market, but largely thanks to Ryzen, we've also seen common desktop tasks beginning to take advantage of 8+ core processors. The idea is ingenious: increase density and develop the architecture into a high-performance one at the same time, reaping the rewards of both.

Currently, the E-cores perform worse, but as time goes on, Intel will increase their density and performance, eventually coupling them with Foveros 3D packaging and ever more advanced lithography nodes that permit transistor densities not currently feasible with 7 nm-class technology such as Intel 10 or TSMC N7. The result is that you may some day have a veritable multithreading monster within a regular desktop footprint. They already managed to double the E-core count in one generation with Raptor Lake, and that's just the desktop market. I would not be surprised if Intel eventually offers an advanced processor with, say, 8 P-cores and 120 E-cores targeted at something like the workstation/HEDT market, just like AMD does with Threadripper Pro today. Eventually, those will be our i9s, too.


----------



## dgianstefani (Aug 31, 2022)

Dr. Dro said:


> I've brought it up before but I strongly believe that the E-cores are a step Intel has taken towards increasing processor density in the same package area. The main target of this would obviously be the server market, but largely thanks to Ryzen, we've also seen common desktop tasks beginning to take advantage of 8+ core processors. The idea is ingenious, increase density and develop the architecture into a high-performance one at the same time, reaping the rewards for both.
> 
> Currently, the E-cores perform worse but as time goes on, they will increase the density and performance of these cores, eventually coupling this with Foveros 3D packaging and ever more advanced lithography nodes that permit transistor densities that are not currently feasible with 7 nm-class technology such as Intel 10 or TSMC N7, and the result is that you may some day have a veritable multithreading monster within a regular desktop footprint. They already managed to double E-core density in one generation with Raptor Lake and this is just the desktop market. I would not be surprised if Intel eventually offers an advanced processor with say, 8 P-cores and 120 E-cores targeted at something like a workstation/HEDT market, just like AMD does with Threadripper Pro today. Eventually, those will be our i9's, too.


I would take it even further and say the E-cores do not perform worse at all: they do exactly what they're designed to do, which is increase multithreaded throughput at the cost of much less die space than P-cores, which realistically do not scale past eight, in both chip-area cost and actual need.


----------



## Dr. Dro (Aug 31, 2022)

dgianstefani said:


> I would take it even further and say the E-cores do not perform worse at all: they do exactly what they're designed to do, which is increase multithreaded throughput at the cost of much less die space than P-cores, which realistically do not scale past eight, in both chip-area cost and actual need.



Agreed, E-cores are probably the future, and I think AMD will also eventually adopt a similar hybrid-architecture strategy. Zen 2 is a great candidate to be used for efficiency cores with Ryzen; as we've seen, it's a relatively unassuming yet high-performance architecture, and Mendocino showed us that Zen 2 on modern lithography nodes makes for exceptionally small chips that can pack a punch. Assuming AMD's engineers are able to modify and package it in such a way, I would definitely expect a product like that to eventually exist: say, a hypothetical processor mixing 16 Zen 4 cores with another 16 3D-stacked Zen 2 cores.


----------



## TheoneandonlyMrK (Aug 31, 2022)

dgianstefani said:


> I know right? Progress? Is that what they call what those pesky developers do?


Not once have any of your counterarguments in any way pushed aside any of my opinions or claims.

Instead you went with trying to prove me irrational,

all while the best you've got is "Microsoft will fix it."

We're six months in; no, they won't.

It's working AS intended: Intel fanboy types have the core-parity and highest-IPC arguments in the bag.

What else mattered to Intel when making efficiency cores?
It sure as shit wasn't power use, or temperature, or efficiency for that matter.

You've got nothing but a laugh. Great counter. Not.


----------



## R0H1T (Aug 31, 2022)

Or how about X3D with regular Zen cores? The issue with Intel's approach isn't just the E-cores or the mismatched instruction-set support; they're also dealing with things like switching a task from a 1c/2t HT core to a core without HT, which will also affect performance and efficiency negatively, *IMO*.


----------



## R-T-B (Aug 31, 2022)

Easy Rhino said:


> The global cabal is hell bent on getting off of oil and gas and therefore energy prices are expected to skyrocket. Consumers won't be purchasing that new phone every year or new PC every 3-4 years, and enterprise won't be replacing servers every 4-5 years if their power bills are all double and tripling.


Cool but what does it have to do with ecores?



P4-630 said:


> Isn't used for gaming anyway.


And never will be if this keeps up.



R0H1T said:


> they're also dealing with something like switching from 1c/2t HT cores to normal ones without HT


That's not how it works.  You switch a thread, not a core.


----------



## TheoneandonlyMrK (Aug 31, 2022)

R-T-B said:


> Cool but what does it have to do with ecores?
> 
> 
> And never will be if this keeps up.
> ...


I think his point is moving two interdependent threads from one HT core to two separate cores.


----------



## Dr. Dro (Aug 31, 2022)

TheoneandonlyMrK said:


> Not once has any of your counter arguments in any way pushed aside any of my opinions or claims.
> 
> Instead you went with trying to prove me irrational.
> 
> ...



We only use Windows in our day-to-day lives because of its large back catalog of supported legacy software and the commercial software committed to it. It's always been glacial in the pace at which it adapts to modern computing, and it carries decades of baggage in legacy code it simply cannot get rid of. I mean, really: Windows 11 22H2 still ships with the phone dialer application introduced in NT 4.0.

If almost everything I've ever used weren't designed squarely for Microsoft Windows, I'd be a long-time Linux user by now, and it is on Linux that you should expect to see proper support for the bleeding edge, for the corner cases, and for all sorts of wacky hardware that may appear someday. I don't think this is a discredit to Intel, but rather to Microsoft, and even then, it's not like Microsoft can do much about it. You can already imagine the endless whining and complaints if they ever decided to axe software backcompat and limit it to, say, apps designed for Windows 8.1 and later only. Damned if they do, damned if they don't.



R0H1T said:


> Or how about x3D with regular Zen cores? The issue with Intel's approach isn't just the E cores or mismatched instruction set support ~ they're also dealing with something like switching from 1c/2t HT cores to normal ones without HT, that also will affect performance & efficiency* IMO negatively*.



A properly optimized operating system should be able to discern between these two types of cores and assign suitable tasks to each; Android phones have been doing it for years, and now there's hardware-assisted thread scheduling in Alder Lake, so it really should be a matter of software optimization... which Windows is just not. Step outside of its comfort zone and anything can happen.
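To make that concrete, here's a toy Python sketch of what a hybrid-aware scheduler is conceptually doing; the core layout, task names, and the idea that tasks arrive pre-labelled as latency-sensitive are all made-up simplifications, nothing like how Windows or Thread Director actually decide:

```python
# Toy hybrid-aware scheduler: latency-sensitive tasks go to P-cores,
# background tasks to E-cores, spilling over only when one pool is full.
# Core counts below mirror an i7-12700K layout (8 P + 4 E) for illustration;
# it assumes fewer tasks than cores.

P_CORES = [f"P{i}" for i in range(8)]
E_CORES = [f"E{i}" for i in range(4)]

def assign(tasks):
    """tasks: list of (name, latency_sensitive) -> {core: [task names]}"""
    free_p, free_e = list(P_CORES), list(E_CORES)
    placement = {}
    for name, sensitive in tasks:
        if sensitive and free_p:
            core = free_p.pop(0)
        elif not sensitive and free_e:
            core = free_e.pop(0)
        else:                      # preferred pool exhausted: spill over
            core = (free_p or free_e).pop(0)
        placement.setdefault(core, []).append(name)
    return placement

tasks = [("game_render", True), ("game_audio", True),
         ("antivirus_scan", False), ("file_indexer", False)]
print(assign(tasks))
# game threads land on P0/P1, background jobs on E0/E1
```

The hard part in real life is that nothing hands the OS that `latency_sensitive` flag; it has to be inferred, and that inference is exactly where schedulers go wrong.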


----------



## R0H1T (Aug 31, 2022)

R-T-B said:


> That's not how it works. You switch a thread, not a core.


You're switching a thread from a P-core to an E-core; all ADL chips have HT on the P-cores *IIRC*. How is that wrong? I didn't say switching from just one core to another; it's more like P-cores to E-cores, which are quite different.



Dr. Dro said:


> A properly optimized operating system should be able to discern between these two types of architectures and assign tasks suitable for each type of core, Android phones have been doing it for years, and now there's hardware-assisted thread scheduling in Alder Lake so it really should be a matter of software optimization, which... Windows is just not, you step outside of its comfort zone and all things can happen.


Optimizations only come after the hardware is out there in the market, and this is the first real test case on a full-fledged desktop chip. Android is quite different in that regard and addresses such needs differently; you are, after all, doing it on a phone.


----------



## R-T-B (Aug 31, 2022)

TheoneandonlyMrK said:


> I think his point is moving two interdependent threads from one HT core to two separate cores.


If anything that would be a benefit.



R0H1T said:


> You're switching the thread from a P core to an E core, all ADL chips have HT on P cores *IIRC*.


Yes, but it's still just one thread.  Not two.  Not more.


----------



## R0H1T (Aug 31, 2022)

That depends on the application. Some applications spawn one thread per logical core, even the HT ones; 7-Zip, for example, and probably WinRAR as well. If threading and handling multiple cores were so easy, we would've had 1000c/2000t (E?) cores by now.
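For what it's worth, that "one worker per logical processor" sizing is trivial to express in code. A minimal Python sketch (the "compression" is a stand-in workload, not what 7-Zip actually does) shows why the pattern is completely blind to core types:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Spawn one worker per logical processor, the way many archivers size
# their pools. os.cpu_count() counts HT siblings and E-cores alike, so on
# a hybrid chip a naive pool treats a P-core HT sibling and an E-core as
# interchangeable workers.
n_workers = os.cpu_count() or 1

def compress_chunk(chunk: bytes) -> int:
    # Stand-in for real compression work: just count set bits.
    return sum(bin(b).count("1") for b in chunk)

chunks = [bytes([i] * 64) for i in range(8)]
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(compress_chunk, chunks))
print(results)
```

Nothing in that code, or in the archivers that do the equivalent, knows which worker ended up on which kind of core; that mapping is entirely the scheduler's problem.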


----------



## Dr. Dro (Aug 31, 2022)

R0H1T said:


> You're switching the thread from a P core to an E core, all ADL chips have HT on P cores *IIRC*. How is that wrong? I didn't say switching from just one core to another, more like P (cores) to E which are quite different.
> 
> 
> Optimizations only come after the hardware's out there in the market, this is their first real test case on a full fledged desktop chip. Android is quite different in that regard & addresses such needs differently, you're after all doing it on a phone.



I agree in general, and that also applies to Arc: hardware will only mature, and kinks will only be fixed, if there is wide deployment of a given hardware architecture. Alder Lake is the first such widely deployed product, and any inefficiencies should be iterated upon as newer generations roll out.

Limited field testing with hybrid architectures before Alder Lake had been done with the Lakefield CPUs, though that was restricted to just a few laptop models, mostly from Samsung. The i5-L16G7, for example, actually pioneered more than just the hybrid architecture (it was a 5-core chip with 1 Sunny Cove P-core and 4 Tremont E-cores); it was also the first Foveros 3D SKU to ship, complete with 8 GB of 3D-stacked, on-package LPDDR4X DRAM, and this was back in 2020.









Intel® Core™ i5-L16G7 Processor (4M Cache, up to 3.0GHz) - Product Specifications | Intel - www.intel.com
				




Microsoft likely had a sample of this even earlier, as they no doubt have an interest in supporting Intel's latest technological advancements. So, all things considered, they have known about this for some time, and it's been about a year since it became a mass-deployed product; being slow to adapt is entirely on Microsoft, if you ask me.



R0H1T said:


> That depends on the application. Some applications spawn one thread per core even if it's for HT, like 7zip for example probably winRAR as well. If threading & handling multiple cores was so easy we would've had 1000c/2000t (E?) cores by now.



Software should be able to adapt if the constraints of the operating system are lifted. 7-Zip and other archivers like WinRAR are quite multi-core friendly, and I suspect the developers of both could tune thread priorities to make the best of such an architecture; the same should go for video encoders (which already adapt based on CPU architecture and instruction set).


----------



## Denver (Aug 31, 2022)

wheresmycar said:


> *PLEASE*, NO INTEL/AMD FANBOY ANTAGONISM!
> 
> What is your opinion on intels motivation for introducing e-cores. Sometime ago I met a self-confessed AMD jock suggesting intels E-cores are just a poor attempt to overshadow AMD's "core count" which can't be achieved with performance cores alone. Although my buying decision is mostly based on benchmarks and price, i have to admit now everytime I come across e-cores I do end up second guessing the purpose behind them.
> 
> ...



In my view, the small-core strategy is directly linked to Intel's slowness in advancing chip manufacturing. With no extra space from denser lithography, they needed more effective cores per unit of area to match the massive MT performance of the Ryzen lineup.

This strategy is limited by TDP, heat, etc...


----------



## phanbuey (Aug 31, 2022)

Denver said:


> In my view, the small-core strategy is directly linked to Intel's slowness in advancing chip manufacturing. With no extra space from denser lithography, they needed more effective cores per unit of area to match the massive MT performance of the Ryzen lineup.
> 
> This strategy is limited by TDP, heat, etc...


This is exactly right.

Conversely, it shows that they CAN match that massive MT performance using this design on a less dense node (at the cost of TDP).

A chip with E-cores is far superior to one without, with die space and lithography held constant: basically, Intel is squeezing out the most performance they can through design and innovation because their node sucks.  Clearly it works; they took the ST performance crown and matched MT performance on an inferior node.  So, to answer the question of whether E-cores work or make sense: based on the raw numbers, the answer is clearly yes. ADL would not be in the same league without them.
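The area trade-off behind that argument can be put in rough numbers. Using the often-quoted (and deliberately simplified) assumptions that about four E-cores fit in the die area of one P-core and that each E-core delivers roughly half a P-core's multithreaded output, a quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope throughput-per-area comparison.
# Assumptions (illustrative, not measured): one P-core = 1.0 unit of area
# and 1.0 unit of MT throughput; an E-core takes ~0.25 units of area
# (four per P-core footprint) and delivers ~0.5 units of throughput.
P_AREA, P_PERF = 1.0, 1.0
E_AREA, E_PERF = 0.25, 0.5

def die_area(p_cores, e_cores):
    return p_cores * P_AREA + e_cores * E_AREA

def mt_throughput(p_cores, e_cores):
    return p_cores * P_PERF + e_cores * E_PERF

# Same silicon budget two ways: 10 P-cores, or 8 P-cores + 8 E-cores.
for p, e in [(10, 0), (8, 8)]:
    print(f"{p}P+{e}E  area={die_area(p, e)}  throughput={mt_throughput(p, e)}")
# 8P+8E matches the area of 10P but delivers 12 vs 10 throughput units.
```

Under those assumptions the hybrid layout buys roughly 20% more MT throughput from the same silicon, which is the whole pitch.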


----------



## R-T-B (Aug 31, 2022)

R0H1T said:


> Some applications spawn one thread per core even if it's for HT, like 7zip for example probably winRAR as well. If threading & handling multiple cores was so easy we would've had 1000c/2000t (E?) cores by now.


None of that matters to the scheduler, which works on a per-thread basis.


----------



## dragontamer5788 (Aug 31, 2022)

It's not the OS that needs to be rewritten for E-cores or P-cores, it's software in general. That's what people don't understand. If Blizzard/Activision decided to optimize for E-cores, then the next release of Overwatch (or whatever) would be faster on E-cores.

Microsoft can create libraries and algorithms to try to figure things out at the OS level. But it's not like the OS is an oracle that knows how programs work; it just collects dumb statistics and makes broad, general guesses. It's not Microsoft's responsibility to make more efficient use of E-cores.
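A toy illustration of that "dumb statistics" point (hypothetical thresholds and class names, not Windows internals): an OS-side heuristic can only classify threads by what it observes, so it can, and does, guess wrong:

```python
from collections import defaultdict

# Toy model of an OS making "broad general guesses": it cannot see intent,
# only behaviour, so it classifies threads by observed CPU utilisation and
# demotes low-usage threads to E-cores. The 0.5 threshold is made up.
class GuessingScheduler:
    def __init__(self, busy_threshold=0.5):
        self.busy_threshold = busy_threshold
        self.samples = defaultdict(list)   # thread name -> utilisation history

    def observe(self, thread, utilisation):
        self.samples[thread].append(utilisation)

    def core_type(self, thread):
        hist = self.samples[thread]
        avg = sum(hist) / len(hist)
        return "P" if avg >= self.busy_threshold else "E"

sched = GuessingScheduler()
for u in (0.9, 0.95, 0.85):        # consistently busy -> looks important
    sched.observe("game_thread", u)
for u in (0.05, 0.1, 0.02):        # mostly idle -> looks like background work
    sched.observe("telemetry", u)
print(sched.core_type("game_thread"), sched.core_type("telemetry"))
```

A bursty-but-important thread (an audio callback, say) would average low and get demoted by exactly this kind of heuristic, which is why app-level hints beat OS-side statistics.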

----------

And therein lies the problem. Zen 4 cores are more similar to P-cores, so it will make more sense for developers to optimize for Zen 4/P-cores rather than spend effort making things better for E-cores.


----------



## Assimilator (Aug 31, 2022)

From an engineering perspective, while Intel's cores are big and hot, they make up for this with raw performance, and since the vast majority of consumer workloads today are (still) single-threaded, 12 of Intel's cores can probably beat 16 of AMD's.

The problem there is simple marketing: because 12 < 16, and consumers are all about meaningless numbers, marketing departments are all about meaningless numbers too, and it therefore became impossible for Intel to launch products with inferior core counts, even though their cores are arguably better than AMD's.

But Intel has one thing that AMD doesn't: heterogeneous x86 CPU lines, with the powerful Core and the efficient Atom. So why not take those 12 powerful (P) cores, add 4 Atom (E) cores, and boom, you have your 16-core CPU and can compete again on meaningless numbers? And that's exactly what they did.

So the E-core concept was somewhat of a desperation move, driven by necessity rather than any desire for efficiency, but it is the kind of innovative thinking I honestly didn't believe Intel was capable of anymore. And going forward, it's ultimately a good thing for Intel:
* they can still claim core-count leads over AMD
* they don't have to throw their big/hot core design out the window and start again immediately (it is quite obvious that the Skylake-derivative architecture is at the end of its life and needs replacing, but the addition of E-cores has allowed Intel to eke another generation or two out of it)
* instead of Atom being the unloved red-haired stepchild, now that it is in all of Intel's CPUs as E-cores it should get significantly more resources to become a better architecture

And it's good for the industry and consumers, because scheduling improvements to handle heterogeneous core designs should make Arm's second attempt at breaking into the Windows desktop significantly easier than the last try, and competition is a good thing.

Of course, scheduling is the biggest stumbling block, because (a) it seems it still isn't very good, even after significant development, and (b) Microsoft has chosen to gate it behind Windows 11.


----------



## dragontamer5788 (Aug 31, 2022)

Assimilator said:


> But Intel has one thing that AMD doesn't: heterogenous x86 CPU lines, with powerful Core and efficient Atom. So why not take those 12 powerful (P) cores, add 4 Atom (E) cores, and boom you have your 16-core CPU and you can now compete again in meaningless numbers? And that's exactly what they did.



I disagree.

The iPhone / Android market has been swept up by big.LITTLE designs for the last decade. Its a proven design that physically makes sense: efficient cores really use grossly less power for only a little bit loss of performance. (Ex: 12% power usage for 50% performance, which is exactly what cell-phone users want). With the Apple M2 chip being released with *TWO DAYS* worth of battery life, Intel (who makes most of their money from laptops) is running scared, for good reason. If Intel's laptops are to compete against Apple laptops, they must use the big.LITTLE design, and P vs E-cores is an adequate version of the concept.

The issue is that Microsoft is likely not going to rewrite all their software (ex: Word, Office, etc. etc.) to efficiently use E-cores. And video gamers almost certainly won't use those E-cores either.

So we're in a situation where people won't get the huge battery life that the Apple crowd is getting, while AMD strikes from the other end with high-performance Zen 4 cores.



> Of course, the scheduling is the biggest stumbling block because (a) it seems it still isn't very good, even after significant development (b) Microsoft has chosen to gate it behind Windows 11.



I mean, if Microsoft had to spend a few million bucks on developers developing a new scheduler in the OS to handle P-cores / E-cores, it makes sense for Microsoft to try to make money off of it to recoup the costs. Schedulers are hard, and expert computer programmers who know how to write schedulers are expensive.

This is a totally different issue. Even in the Android / iPhone world, the proper scheduling of big vs LITTLE cores is very much a dark art and no one does it well yet. Decades of OS theory were built on the assumption of homogeneous (all the same) cores. When some cores are faster than others, very weird and non-intuitive math appears. OS design barely understands power-efficient scheduling (invented for this era of cell-phone apps), let alone this new era of "sometimes power-efficient, sometimes high-performance, based on the user's expectations".
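To make the non-intuitive part concrete, here's a toy model in Python (made-up numbers, not how Windows or Linux actually schedules) showing that the "best" core for the same task flips depending on whether you optimize for latency or for energy:

```python
# Toy model (made-up numbers, not a real OS scheduler) of why heterogeneous
# scheduling math is non-intuitive: the "best" core for the same task flips
# depending on what you optimize for.

def place_task(work_units, cores, goal="energy"):
    """Pick the core minimizing latency (seconds) or energy (joules).

    cores: dict of name -> (perf in work-units/sec, power in watts).
    """
    def latency(name):
        perf, _power = cores[name]
        return work_units / perf

    def energy(name):
        perf, power = cores[name]
        return latency(name) * power

    return min(cores, key=energy if goal == "energy" else latency)

# Assumed figures: E-core at half the speed but an eighth of the power.
cores = {"P-core": (100.0, 20.0), "E-core": (50.0, 2.5)}

print(place_task(1000, cores, goal="latency"))  # P-core (10 s vs 20 s)
print(place_task(1000, cores, goal="energy"))   # E-core (50 J vs 200 J)
```

A real scheduler has to guess the user's goal per thread on the fly, which is exactly where the dark art comes in.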


----------



## P4-630 (Aug 31, 2022)

dirtyferret said:


> And Qualcomm says they want to enter the desktop market in 2024.  It will be interesting to see what design they (along with Nuvia which they purchased) may bring.



Or:
_Surface Pro "9" ARM variant being powered by a custom Snapdragon 8cx Gen3 SoC, dubbed the Microsoft SQ3.

First Windows on ARM desktop PC in the form of a developer kit dubbed “Project Volterra,” which I’m told features the Snapdragon 8cx Gen3 SoC and includes the same neural processing unit (NPU) AI features and power that is expected to ship in the Surface Pro 9 with ARM._









Microsoft to merge Surface Pro X ARM and Surface Pro 9 Intel versions under one product line (www.windowscentral.com): Windows on ARM may be getting a promotion to the main Surface Pro line.


----------



## R0H1T (Aug 31, 2022)

R-T-B said:


> None of that matters to the scheduler which works on a per thread basis.


Nope, applications can override the built-in performance counters to set, for instance, priority themselves. SiSoft's Sandra *IIRC* fixes its service priority to high even when you manually lower it. It mainly depends on the application. While the software can't randomly choose which core (P or E) it runs on, it can definitely be programmed to prevent core parking/unparking or the system from entering lower power states et al. I'm thinking E-cores won't activate unless the OS/application wants the system to enter a lower power state, or whatever the thread director tells it to do.


dragontamer5788 said:


> Its not the OS that needs to be rewritten for E-cores or P-cores, its software in general. That's what people don't understand. If Blizzard / Activision decided to optimize for E-cores, then the next release of Overwatch or whatever will be faster on E-cores.


You can mitigate that to a certain extent using the registry.


----------



## dragontamer5788 (Aug 31, 2022)

R0H1T said:


> You can mitigate that to a certain extent using the registry.



Yes, Windows is surprisingly flexible.

But I assume most users won't know how to use NUMA settings or P vs E-cores, or thread affinity. They'll rely upon the developers to "set sane settings". Registry and/or application startup settings through PowerShell (and other ways of achieving this) are always possible, but "users are dumb" (or at least, users won't spend the time learning these features).
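For anyone curious what "setting affinity" actually involves, here's a hypothetical sketch of the bit arithmetic behind the affinity mask a power user could pass to `start /affinity <hex>` on Windows. It assumes an Alder Lake-style enumeration where the 8 hyperthreaded P-cores appear as logical CPUs 0-15 and the 8 E-cores as 16-23; verify the order on your own machine first.

```python
# Hypothetical sketch of the bit arithmetic behind an affinity mask.
# Assumes an Alder Lake-style enumeration: 8 hyperthreaded P-cores as
# logical CPUs 0-15, 8 E-cores as logical CPUs 16-23. Check your own
# machine (Task Manager / Sysinternals Coreinfo) before trusting the order.

def affinity_mask(logical_cpus):
    """OR together one bit per logical CPU index."""
    mask = 0
    for cpu in logical_cpus:
        mask |= 1 << cpu
    return mask

p_core_mask = affinity_mask(range(16))      # logical CPUs 0-15
e_core_mask = affinity_mask(range(16, 24))  # logical CPUs 16-23

# These are the hex values you'd hand to `start /affinity` on Windows.
print(hex(p_core_mask))  # 0xffff
print(hex(e_core_mask))  # 0xff0000
```

Which rather proves the point: nobody outside this forum is going to do that by hand.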


----------



## Vayra86 (Aug 31, 2022)

Toothless said:


> I swear some of you people love to make something harmless into something controversial. E-cores are fantastic at managing background processes, leaving more room for the P-cores to be open for heavier tasks.
> 
> This isn't about power usage. It's not about the Skylake equal of these smaller cores being, well, not as good. It's just support cores in a sense.



I agree. The problem isn't E-cores, it's the high peak power limits used to get benchmark results. And that goes for both camps, but Intel is the bigger offender.

That said, they do their job regardless, both in getting top bench results and in munching on MT loads. Though I'm pretty far from calling it a 'design win'. It shows itself purely as a desperate attempt to keep building monolithic CPUs when it's clear as day a chiplet concept is the future. It works, for now, until the moment people can really present heavy loads to these CPUs while gaming. That's when the ugly rears its head, as you bump into power limits and perhaps even throttling to base clock. What Intel does do for x86 is pave the road for chiplet-based CPUs with different core configurations. 'A chiplet of E-cores with your P's, sir?'

The real question is: will a gamer run into those limits? Not extremely likely; even a 4-year-old CPU is still major overkill. But if you run heavier parallelized loads, E-cores directly incur a penalty on P-core performance or vice versa. Ryzen does not have this issue, and it also carries a much lower TDP ceiling for peak performance.

Burst vs consistency, in a nutshell, when put under heavy stress.


----------



## cvaldes (Aug 31, 2022)

dragontamer5788 said:


> I mean, if Microsoft had to spend a few million bucks on developers developing a new scheduler in the OS to handle P-cores / E-cores, it makes sense for Microsoft to try to make money off of it to recoup the costs. Schedulers are hard, and expert computer programmers who know how to write schedulers are expensive.


Microsoft has the money to hire expert computer programmers. So does Intel. And AMD. And Nvidia.

If they fumble the ball, it won't be because of money. And being first out of the gate doesn't guarantee a win. Go ask former Windows Mobile programmers about their perspective.

So does Apple, and their operating system software has supported E-cores for years, first with A-series SoCs and now M-series SoCs. Of course, they were supremely motivated to do so by battery life constraints. Over 85% of Mac unit sales are notebook models; nothing new, it has been like this for over a decade.

It's harder for Microsoft since they are under pressure to support legacy software and they don't control the hardware.

In the end, Microsoft will figure this out. Today's Microsoft is much different than the Microsoft of the '90s. While they have Xbox, their primary focus is enterprise/datacenter/cloud today. They don't even make much money from Office. I bought a cheap lifetime key for Office Home & Student 2019 for less than $35.

My guess is that some of their datacenter customers are pressing for better functionality from CPUs that have E-cores. Some people here don't seem to understand that the datacenter business is substantial and growing far faster than the gaming PC business (which itself is a subset of the overall desktop PC market).

A lot of this comes down to power efficiency. Institutional and government customers also have strong interest in the performance-per-watt metric, something that many PC gamers don't care about.


----------



## Vayra86 (Aug 31, 2022)

dgianstefani said:


> I know right? Progress? Is that what they call what those pesky developers do?


One could argue Win 11 is a major step back, rather than progress.

The user base sure does.

Developers exist to get mistreated for such concepts by management.


----------



## cvaldes (Aug 31, 2022)

Vayra86 said:


> One could argue Win 11 is a major step back, rather than progress.
> 
> The user base sure does.
> 
> Developers exist to get mistreated for such concepts by management.


Which is why Microsoft needs to put effort into improving the task scheduler and other idiosyncrasies with Windows 11.

Again, it's not just about little ol' Joe Gamer living in his mom's basement.

It's more about that 5,000 Alder Lake (Windows 11 pre-installed) desktop PC order that Dell received from the General Accounting Office's purchasing department.

Microsoft will make more of a widespread impact by optimizing Windows 11 and Teams for some Alder Lake business PC than making sure all the graphical elements on Start Menu have rounded corners on Joe Gamer's RGB discotheque PC.

Hell, from a PC gaming perspective, if Microsoft can get OBS to run better on a CPU with E-cores in tandem with an AAA game versus a comparable CPU that only has P-cores, the Windows 11 task scheduler programmers can high-five each other.


----------



## dgianstefani (Aug 31, 2022)

Vayra86 said:


> One could argue Win 11 is a major step back, rather than progress.
> 
> The user base sure does.
> 
> Developers exist to get mistreated for such concepts by management.


The UI is trash, and so is the further erosion of privacy, but the underlying codebase is an evolution of 10's and superior in many regards.


----------



## ThrashZone (Aug 31, 2022)

Vayra86 said:


> One could argue Win 11 is a major step back, rather than progress.
> 
> The user base sure does.
> 
> Developers exist to get mistreated for such concepts by management.


Hi,
Yep, stripped to the bone and rebuilt, 11 isn't bad, but then you realize it's 10 with some settings moved around. So no biggie: use 10 and be done.


----------



## openbox1980 (Aug 31, 2022)

dragontamer5788 said:


> I disagree.
> 
> The iPhone / Android market has been swept up by big.LITTLE designs for the last decade. Its a proven design that physically makes sense: efficient cores really use grossly less power for only a little bit loss of performance. (Ex: 12% power usage for 50% performance, which is exactly what cell-phone users want). With the Apple M2 chip being released with *TWO DAYS* worth of battery life, Intel (who makes most of their money from laptops) is running scared, for good reason. If Intel's laptops are to compete against Apple laptops, they must use the big.LITTLE design, and P vs E-cores is an adequate version of the concept.
> 
> ...


Intel still needs to do something about the power draw of their processors in the laptop market. Most of their laptops are getting no more than 5 hours on battery, while AMD APUs normally get over 8 hours.

It's why I think E-cores aren't really that good, even for laptops.


----------



## Assimilator (Aug 31, 2022)

dragontamer5788 said:


> Intel (who makes most of their money from laptops)


Completely incorrect.


----------



## Dr. Dro (Aug 31, 2022)

openbox1980 said:


> AMD apus are normally getting over 8hrs.



It really depends on the laptop type, build, SKU... Intel's U- and Y-series processor SKUs are some of the lowest-power SoCs on the market. But perhaps most importantly, battery capacity... my particular laptop has that problem: the battery is so small that it feels closer to a battery backup (such as a UPS) than something you were actually intended to use as a power source for the computer. Buuut it's a gaming laptop, so there's that.


----------



## phanbuey (Aug 31, 2022)

openbox1980 said:


> Intel still needs to do something with the power draw of their processors in the laptop market. Most of their laptops are getting no more than 5hrs on battery, while AMD apus are normally getting over 8hrs.
> 
> Its why I think e-cores arent really that good, even for laptops.



E-cores are actually less efficient than P-cores - they are made to save die space - but because they are so slow compared to P/Zen 3/Zen 4 cores, even though they are 'low power', they actually have to work harder to finish given tasks and therefore use slightly more power. In laptops this is obvious: the Alder Lake laptop chips are aimed at performance at the cost of battery life. I think future iterations will get better at this, but I agree - in their current state they aren't good for laptops.


----------



## dragontamer5788 (Aug 31, 2022)

Assimilator said:


> Completely incorrect.



Unfortunately for you, I know how to read a 10-K document.



			https://www.intc.com/filings-reports/annual-reports/content/0000050863-22-000007/0000050863-22-000007.pdf
		


Page 86, breakdown of revenue by Intel's sectors. Client computing group is by far the largest. Of the CCG portion of Intel, the laptop group makes far more revenue than the desktop group.

Intel's #1 segment, by volume, is the laptop portion of their "CCG" / Client Computing Group. Desktop and Datacenter (and other portions) are smaller.

EDIT: That's $25 billion from laptops, $23 billion from datacenter, and only $11 billion from desktop.


----------



## R-T-B (Aug 31, 2022)

R0H1T said:


> Nope


I mean... you just went on a tangent about process priority and how it can be requested, which has nothing to do with the fact that the scheduler still operates on threads, sorry.



Vayra86 said:


> The user base sure does.


Meh.  It ain't perfect I'll be the first to admit, but I still think most of the whining is from people with no intention to use it longterm.  Not really the "user base."


----------



## cvaldes (Sep 1, 2022)

R-T-B said:


> Meh.  It ain't perfect I'll be the first to admit, but I still think most of the whining is from people with no intention to use it longterm.  Not really the "user base."


At some point most of them may not have a choice if both Intel and AMD commit to the E-core/P-core route for their consumer CPU stacks. Once properly implemented (hardware, manufacturing node, software, etc.), the pros will far outweigh the cons. We've already seen this in the mobile space.

You can't buy an M-series powered Mac with only E-cores or P-cores. If you want a Mac with only P-cores, you can choose between the Mac mini 2018 (Intel Core i5 or i7) or the Mac Pro (a selection of Intel Xeon). Soon those will go away.

I can't buy an iPhone with only Avalanche cores either.

These whiners may end up eventually being part of the user base whether they "like" it or not. Sure they have alternatives today if they don't want Alder Lake. Someday they probably won't.

My guess is that AMD has been running CPUs with E-cores for several years in some lab in Santa Clara. And half of the people in that building probably have iPhones. This concept of differentiated silicon isn't new anymore. AMD added ray-tracing cores to RDNA2 GPUs.

Maybe AMD is waiting on the sidelines, letting Intel stumble through with early Windows support until they can debut something more polished. Maybe it's more difficult getting E-cores and P-cores to play nicely on two CCDs. Who knows?

It's not like Dr. Lisa Su is suddenly going to slap her forehead while she's eating lunch at her desk and say "Hey, why don't we have different CPU cores for different tasks?!?" and it'll magically materialize a few months later.


----------



## wheresmycar (Sep 1, 2022)

dragontamer5788 said:


> Unfortunately for you, I know how to read a 10k document.
> 
> 
> 
> ...



interesting info...

data center revenue... does that include commercial desktops/small business desktop type servers or are these included in the Client Computing Group?



Denver said:


> In my view, the strategy of small cores is directly linked to intel's slowness in advancing in chip manufacturing. With no extra space from a denser lithograph, they needed effective cores per area to match the massive MT performance of the Ryzen line up
> 
> This strategy is limited by TDP and heating etc...



Yeah, this is the sort of stuff discussed when I initially added _"Sometime ago I met a self-confessed AMD jock suggesting intels E-cores are just a poor attempt to overshadow AMD's 'core count' which can't be achieved with performance cores alone"_. He did mention limitations in the chip's make-up due to higher temps or power consumption, hence E-cores are somewhat just filling the gap.

Can't complain though; with or without E-cores, ADL's done a fantastic job... it would be interesting to see if E-cores play a more refined role in the long run, especially if Intel sticks with monolithic designs... although not sure if this is correct: are the earlier rumours of Meteor Lake moving to multi-chip modules (MCM) now official, or is Intel playing DIE HARD with mono-bono-4-life?


----------



## AusWolf (Sep 1, 2022)

Mussels said:


> I mean sure, but I also felt the same way when i upgraded to my ryzen 1400 back in the day
> 
> 
> You can't be sure it's the E cores that makes it smoother, and not just being a faster CPU overall with more cores/threads


My Ryzen 3 3100 HTPC feels more responsive than my Core i7 11700 main desktop simply because it doesn't have half as many automatically starting background programs installed.

There are many things that make a PC responsive.


----------



## mama (Sep 1, 2022)

dragontamer5788 said:


> Its not the OS that needs to be rewritten for E-cores or P-cores, its software in general. That's what people don't understand. If Blizzard / Activision decided to optimize for E-cores, then the next release of Overwatch or whatever will be faster on E-cores.
> 
> Microsoft can create libraries / algorithms to try and figure things out at the OS level. But its not like the OS is an oracle that knows how programs work, it just collects dumb statistics and makes broad general guesses. Its not Microsoft's responsibility to make more efficient use of E-cores.
> 
> ...


e-cores do not help in gaming.


----------



## nguyen (Sep 1, 2022)

mama said:


> e-cores do not help in gaming.



Having more than 8C/16T doesn't help in gaming either, though; just look at the 10900K/5900X/5950X.

IMO, assuming the same price category:

8 cores + 3D V-Cache > 8P+8E > 8 cores


----------



## dragontamer5788 (Sep 1, 2022)

wheresmycar said:


> data center revenue... does that include commercial desktops/small business desktop type servers or are these included in the Client Computing Group?











Operating Segments (www.intc.com): Intel's data-centric and PC-centric operating segments include technologies and solutions for processing, data analysis, storage, and data transfer.






> Client Computing Group: CCG creates platforms designed for end-user form factors, focusing on higher growth segments of 2-in-1, thin-and-light, *commercial* and gaming, and growing opportunities in areas such as connectivity.



So yeah, CCG includes commercial desktops and commercial laptops. "Data Center" is...



> Datacenter and AI Group: DCAI focuses on developing leadership data center products, including Intel® Xeon® server and field programmable gate array (FPGA) products, as well as driving the company's overall artificial intelligence (AI) strategy.





mama said:


> e-cores do not help in gaming.


Depends on the game, depends on how the code was written. You can write code to do anything; the question is economics. The managers / technical leads *choose* what to be good at as they write the code, as they test it, as they debug and iterate it.

It seems like Apple's and Android's big.LITTLE cores do fine in various mobile-gaming tasks, because in those situations power-efficient gaming commands a bigger premium than in the desktop or laptop world.


----------



## Easy Rhino (Sep 1, 2022)

R-T-B said:


> Cool but what does it have to do with ecores?



E-cores are meant to improve the performance-per-watt ratio.


----------



## Assimilator (Sep 1, 2022)

dragontamer5788 said:


> Unfortunately for you, I know how to read a 10k document.
> 
> 
> 
> ...


I stand corrected - I was under the impression that data centre has always been the highest revenue generator.


----------



## TheoneandonlyMrK (Sep 1, 2022)

Dr. Dro said:


> We only use Windows in our day-to-day lives because of the large back catalog of supported legacy software and the commercial software pledge to it. It's always been glacial in the pace which it adapts to modern computing and it carries decades of baggage in legacy code it simply cannot get rid of. I mean; really, Windows 11 22H2 still ships with the phone dialer application introduced in NT 4.0.
> 
> If mostly everything I've ever used wasn't designed straight against Microsoft Windows, i'd be a long-time Linux user by now, and it is in Linux that you should expect to see proper support for the bleeding edge, for the corner cases and for all sorts of wacky hardware that may appear someday. I don't think this is a discredit towards Intel - but rather, towards Microsoft, and even then, it's not like Microsoft can do much about it - you can already imagine the endless whining and complaints if they ever decide to axe software backcompat and limit it to say, apps designed for Windows 8.1 and later only - damned if they do, damned if they don't.
> 
> ...


Yes but your rant on windows has completely missed the point I made.

There's no rework of scheduler or bios or anything else required.

The E-cores work exactly as Intel intended.

They're not bothered about efficiency (or their efforts border on offensive) so long as ST is competitive and MT is too.

Then job done, mic dropped, party time.

Believe what you want, as will I.


----------



## Dr. Dro (Sep 1, 2022)

TheoneandonlyMrK said:


> Yes but your rant on windows has completely missed the point I made.
> 
> There's no rework of scheduler or bios or anything else required.
> 
> ...



I didn't rant, though, and I didn't say that the E-cores don't work as Intel intended? I said that Windows doesn't work as it should.


----------



## Zach_01 (Sep 2, 2022)

Denver said:


> In my view, the strategy of small cores is directly linked to intel's slowness in advancing in chip manufacturing. With no extra space from a denser lithograph, they needed effective cores per area to match the massive MT performance of the Ryzen line up
> 
> This strategy is limited by TDP and heating etc...


I believe so...


wheresmycar said:


> Yeah this is the sort of stuff discussed when I initially added _"Sometime ago I met a self-confessed AMD jock suggesting intels E-cores are just a poor attempt to overshadow AMD's "core count" which can't be achieved with performance cores alone". _He did mention limitations in the chips make-up due to higher temps or power consumption hence e-cores are somewhat just filling the gap.
> 
> Can't complain though, with or without e-cores ADL's done a fantastic job... it would be interesting to see if e-cores play a more refined role in the long run especially if Intel sticks with monolithic designs... although not sure if this is correct, are earlier rumours of Meteor Lake moving to multi chip modules (MCM) now official or is intel playing DIE HARD with mono-bono-4-life?



Personal opinion...

Intel has been struggling to get their node(s) straight for more than 5 years now, and TSMC is waving at them from afar...
If Intel could help it, the E-cores concept wouldn't be on the table for at least another 5 years, maybe more. They had to be creative, and they looked to the mobile (phones/tablets and such) industry for their solution.

E-cores work... or don't work as intended... Who knows what Intel and AMD really intend? We can only assume, and we ought not to believe what they are serving us in their advertising.

The thing is, at the end of the day, that ADL, as benchmarks show, has the performance to compete. At the cost of power, yes, as they are on 10nm (they can call it whatever they like, BTW, it's still 10nm) against 5/7nm nodes. Most likely they could do a better job on efficiency, but that's not what today is about. Competition is good, but sometimes not... and you will understand what I'm saying further down.

An MCM design won't/can't help them reduce power. They still only have 10nm at their disposal for the moment, and there's only so much they can do.
It would mainly make fabrication a little more cost-effective, as smaller dies have better yields. But these things can't be done from one day to the next.

AMD has taken the MCM path clearly for:
1. The unified design across all segments
2. Smaller dies

Both lead to better profit margins and not better efficiency.

If they didn't have the 5/7nm nodes from TSMC, they would be in exactly the same position as Intel, power-wise. But AMD is working hard to catch up... lol

Hence...

Another matter (or not) that I haven't really seen discussed after the presentation of AM5:
AMD is catching up to Intel on power. Their top-tier desktop CPU is now in 200W (I'd say 200+W) territory.
We can work that out as follows: if a 105W TDP CPU has a 140+W PPT, then what PPT does a 170W TDP CPU have? 200W? 220W?

Those slides suggest the following to me...







*The 7950X will have a 220~230W PPT.*
If you do the math combining those two slides, plus the fact that the 12900K is a 240W PPT part, you'll find it.
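For reference, AMD has described AM4 socket power (PPT) as roughly 1.35x the rated TDP. A quick sanity check, assuming that ratio carries over to AM5, reproduces both the familiar AM4 number and the 220~230W estimate above:

```python
# Quick sanity check: AMD has described AM4's package power limit (PPT)
# as roughly 1.35x the rated TDP. Assuming the same ratio holds for AM5.

def ppt_from_tdp(tdp_watts, ratio=1.35):
    """Estimated socket power limit from rated TDP."""
    return round(tdp_watts * ratio, 1)

print(ppt_from_tdp(105))  # 141.8 -> the familiar "142 W" AM4 limit
print(ppt_from_tdp(170))  # 229.5 -> right in the estimated 220~230 W range
```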

Along with Intel, nVidia, Radeon... we can now welcome AMD CPUs to the inefficiency in the name of competition.

I wonder what will stop them...


Edit: typo(s)


----------



## MrJiggyDancer (Sep 9, 2022)

nguyen said:


> View attachment 260079
> So with E-cores disabled, 8P cores get 20K scores, that 2500points per P-core
> 8E cores take up the same die space as 2P cores, let say Intel make 10P cores, that would get them 25000 points, meanwhile 8P+8E get them 27700 points.
> 
> There are non-K version for people who prefer better effieciency at stock (much cheaper too), and have no clue how to tune their PC. Only idiots pay more for K version and not tune their PC to how they like it.


Apples to oranges.  You're comparing overclocked P+E scores to power limited P-only score.


----------



## MrJiggyDancer (Sep 9, 2022)

dgianstefani said:


> So Intel's long term strategy, which consists of parallel core development, which they are specifically betting on with many (if not all) their upcoming architectural designs with, including tile based chiplets, is also to, wait for it, completely discontinue optimizations for said designs?
> 
> Interesting opinion.





TheoneandonlyMrK said:


> No my argument is Intel won't put that effort in.
> It is not how they are using those core's.
> 
> This isn't arms big little, there's work's.


I'd argue even ARM's implementation is flawed. Do we really have any evidence that big.LITTLE adds any efficiency in their architecture?


----------



## MrJiggyDancer (Sep 9, 2022)

Assimilator said:


> I stand corrected - I was under the impression that data centre has always been the highest revenue generator.


12th gen laptops do not have better battery life than 11th gen laptops...


----------



## Gungar (Sep 9, 2022)

MrJiggyDancer said:


> I'd argue even ARM's implementation is flawed.  Do we really have any evidence that b.L really adds any efficiency in their architecture?



Efficiency in price (way less expensive than a real 8-core CPU); in energy consumption, probably not, or very little difference.


----------



## Bomby569 (Sep 9, 2022)

A CPU is a package; for the consumer, what matters is the performance, not the tech. Intel had to do this because they were running out of space on the die and had thermal problems. It's a win for them. A necessity for sure, but many breakthroughs come out of necessity.

AMD seems to be doing fine without them (trusting their own words); the new Ryzen will have a smaller die.

Two different approaches; neither is wrong or right. It's actually better that they are trying their own thing and not just copying each other. For the consumer it's not relevant at all.


----------



## Panther_Seraphin (Sep 9, 2022)

It's back to the days of the Athlon XP era and the GHz war.

We have just moved from raw speed to core counts.

Imagine trying to market a 10-core part vs a 16-core part to the average Joe at the same price. Who do you think people are going to go for? Or, more accurately, a 4-6 core Intel vs a 6-8-12 core AMD.

I can actually see there being an advantage in a server environment in certain designs. Think storage or VM hosting, where getting 80% of the core performance for 50-60% of the die size and power draw will be a big positive. Or where PCIe lanes are more beneficial and you aren't needing the latest and greatest CPU performance. I believe this is where Bergamo is possibly being aimed, with 128 "E" cores instead of the 96 "P" cores of Genoa.
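A quick sanity check on that trade-off, with made-up but representative numbers:

```python
# Illustrative numbers only: if a dense core delivers 80% of the big core's
# performance at 55% of its power (midpoint of the 50-60% guess above),
# perf-per-watt improves by roughly 45%.

def perf_per_watt_gain(perf_fraction, power_fraction):
    """Relative perf/watt of the dense part vs the big-core part."""
    return perf_fraction / power_fraction

gain = perf_per_watt_gain(0.80, 0.55)
print(f"{gain:.2f}x")  # 1.45x
```

For throughput-bound servers, where total work per rack matters more than single-thread latency, that kind of margin is exactly the pitch.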


----------



## Vario (Sep 10, 2022)

nguyen said:


> View attachment 260079
> So with E-cores disabled, 8P cores get 20K scores, that 2500points per P-core
> 8E cores take up the same die space as 2P cores, let say Intel make 10P cores, that would get them 25000 points, meanwhile 8P+8E get them 27700 points.
> 
> There are non-K version for people who prefer better effieciency at stock (much cheaper too), and have no clue how to tune their PC. Only idiots pay more for K version and not tune their PC to how they like it.


It is nice to have onboard video.  The price is close in the real world between the F and K.


----------



## Mussels (Sep 12, 2022)

lilhasselhoffer said:


> So...help me here.
> 
> You're responding to someone saying that you are not using your 12700k.  Your response is that you disabled a large chunk of the silicon...thus literally disabling the components that this thread is meant to discuss.
> You then link to a video that compare a 12900F and 12900k.  One of the video's conclusions is that there's a 25% difference in power draw between the F and K SKUs...and the performance difference is 0-4%.  So...the result is that you pay more for the k, you have a much higher power draw, and you have a boost that is functionally within the error for the testing methodology to be reasonably chalked-up to regular process variation.  You then say you can get the performance of the k to that of the f by disabling the nice shiny new E cores...and experience an uplift by tuning...despite the literal cited video stating more frequency<>better performance???
> ...


This is the trouble I've been having discussing this on the forums (and I keep getting called a fanboy, a shill, etc.).
If you need to change almost every aspect of the stock behaviour, that's not some elitist awesome superduper thing to brag about - it's a sign the product is bad.


No one argues Intel doesn't have great single-threaded performance - the problem is that they're making you pay for more and more terrible E-cores to get it, and then to sustain it you need to disable the E-cores to keep the TDP down...


----------



## nguyen (Sep 12, 2022)

Mussels said:


> This is the trouble i've been having discussing this on the forums (and keep getting called a fanboy, a shill, etc etc)
> If you need to change almost every aspect of the stock behaviour, that's not some elitist awesome superduper thing to brag about - it's a sign the product is bad.
> 
> 
> No one argues intel don't have great single threaded performance - the problem is that they're making you pay for more and more terrible E-cores to get it, and then to sustain it you need to disable the E-cores to keep the TDP down...



LOL, yeah, let's play pretend that Intel's non-K series doesn't exist, as those non-K models require zero tweaking to achieve great performance and efficiency, and then some people with a 5800X come in and say they can tweak their 5800X to achieve better efficiency


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> LOL yeah let play pretend that Intel non-K series don't exist   , as those non-K model require zero tweak to achive great performance and efficiency, and then some people with 5800X come in and say they can tweak their 5800X to achieve better efficiency



Non K also likes to boost over 5 Ghz and RPL is going further in that direction.









Intel® Core™ i9-12900 Processor (30M Cache, up to 5.10 GHz) - Product Specifications (www.intel.com): quick reference with specifications, features, and technologies.




Nobody is playing pretend here; the max turbo on this non-K part is *202 W*. Meanwhile, this is supposed to be a '65W TDP' CPU like in the old days.

So tell me, as a novice who knows the old Intel 65W non-K CPUs, how does this power behaviour match my cooling solution scaled to 65W? @Mussels is fully correct here. When you buy Ryzen, you get a CPU that _by default has full customization options, much like AMD CPUs have had historically._ But people buy Intel because 'it's good out of the box'...


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> The non-K parts also like to boost over 5 GHz, and RPL is going further in that direction.
> 
> 
> 
> ...



Show me one review where the 5800X can achieve its advertised performance with a 65W cooler then, I bet you can't

Meanwhile, in the 12700 review with the stock 65W cooler, it pretty much matches the 5800X (with a better cooler) in performance and comes out with better efficiency.

Then the discussion devolves into people saying they can tweak their 5800X, etc....which is just coping


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> Show me one review where the 5800X can achieve its advertised performance with a 65W cooler then, I bet you can't
> 
> Meanwhile, in the 12700 review with the stock 65W cooler


I don't care how it compares to AMD; we're talking about what the CPU does and 'how it works' within 65W.

Your very own review link shows up to a 28% perf loss when the 65W limit is enforced and running on the RM1 stock cooler. In every benchmark there is a noticeable loss of performance. The CPU also thermally throttles on that cooler even when the limit is above 65W. Stock, however, is *not limited to 65W peak.*

Just stop the red/blue pissing contest for a minute, god almighty.


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> When you buy Ryzen, you get a CPU that _by default has full customization options, much like AMD CPUs had historically._ But people buy Intel because 'its good out of the box'...





Vayra86 said:


> I don't care how it compares to AMD; we're talking about what the CPU does and 'how it works' within 65W.
> 
> Your very own review link shows up to a 28% perf loss when the 65W limit is enforced and running on the RM1 stock cooler. In every benchmark there is a noticeable loss of performance. The CPU also thermally throttles on that cooler even when the limit is above 65W. Stock, however, is *not limited to 65W peak.*
> 
> Just stop the red/blue pissing contest for a minute, god almighty.



Hm... I sense some sort of hypocrisy here; maybe it's just the wind.

The 12700 achieves commendable performance with its stock cooler, but it's bad because it lost performance vs maxing it out, and also maxing it out is bad because it loses efficiency; what a stupid argument


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> Hm... I sense some sort of hypocrisy here; maybe it's just the wind


You're the one bringing up the argument about non-K parts being nicely usable untweaked? I'm showing you that you'll lose a whole lot of performance that way, which is the point @Mussels was making. So 'stock settings' != 'stock performance' at all.

The fact this also goes for Ryzen, I'm not disputing at all...


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> You're the one bringing up the argument about non-K parts being nicely usable untweaked? I'm showing you that you'll lose a whole lot of performance that way, which is the point @Mussels was making. So 'stock settings' != 'stock performance' at all.



Try cooling a 5800X with the 65W Wraith Stealth cooler then


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> Try cooling a 5800X with the 65W Wraith Stealth cooler then


That's a 105W part, not 65.


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> That's a 105W part, not 65.



But the "242W" 12700 runs just fine with a 65W cooler? Matching 5800X performance?
Are you coping?


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> The 12700 achieves commendable performance with its stock cooler, but it's bad because it lost performance vs maxing it out, and also maxing it out is bad because it loses efficiency; what a stupid argument


See, this is where it's all going wrong: NOBODY is saying the 12700 is bad. Or any other Intel ADL part.

The point is the massive gap between the advertised TDP of 65W and the peak of 202W, or higher in the upcoming gen, and how this affects stock settings and the user experience. The gap on the competition's parts is smaller and the clock behaviour is less bursty as a result, which also affects cooler requirements.

Ryzen is going in a similar direction and I dislike it just as well.



nguyen said:


> But the "242W" 12700 runs just fine with a 65W cooler? Matching 5800X performance?
> Are you coping?


Can you grow up a little, maybe?


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> See, this is where it's all going wrong: NOBODY is saying the 12700 is bad. Or any other Intel ADL part.
> 
> The point is the massive gap between the advertised TDP of 65W and the peak of 202W, or higher in the upcoming gen, and how this affects stock settings and the user experience. The gap on the competition's parts is smaller and the clock behaviour is less bursty as a result, which also affects cooler requirements.
> 
> Ryzen is going in a similar direction and I dislike it just as well.



If you don't know how to make your equipment work as you intended, the problem is on you.


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> LOL, yeah, let's play pretend that Intel's non-K series don't exist, as those non-K models *require zero tweaks to achieve great performance and efficiency*, and then some people with a 5800X come in and say they can tweak their 5800X to achieve better efficiency





nguyen said:


> *If you don't know how to make your equipment work as you intended, the problem is on you.*


The point from your own review was *that you needed to adjust the stock boost behaviour and manually limit the CPU to 65W* out of the box, buddy. And we're full circle now, thanks.

Just so we don't get things mixed up, here:

_"When using the RM1 box cooler without any power limits, the 12700 was thermally limited to a score of 19714 points. That's an 8% reduction when compared to what we saw with the Corsair H170i.

*With the 65w spec enforced*, the score dropped to 16017 points, which is a similar level of performance to that of the Ryzen 7 5800X and Core i9-10900K. A very respectable result given how little power the 12700 is using here."_

See? No discussion about efficiency being _possibly great on ADL, _nobody is fighting that battle. *But it does require a tweak.*
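For anyone curious what that tweak actually looks like outside the BIOS: on Linux, the 65W long-duration limit (PL1) can be set through the standard RAPL powercap sysfs interface. Rough sketch only - the sysfs path follows the usual powercap layout but the domain index can differ per system, and writing the file needs root:

```python
import os

# Standard Linux powercap location for the package PL1 (long-duration) limit.
# The rapl domain index ("intel-rapl:0") can differ per system - verify first.
RAPL_PL1 = "/sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw"

def watts_to_uw(watts):
    """RAPL sysfs files take microwatts, so 65 W becomes 65000000."""
    return int(watts * 1_000_000)

def set_pl1(watts):
    """Write the long-duration power limit, but only if the interface
    is present and writable (i.e. running as root on an Intel system)."""
    value = watts_to_uw(watts)
    if os.path.exists(RAPL_PL1) and os.access(RAPL_PL1, os.W_OK):
        with open(RAPL_PL1, "w") as f:
            f.write(str(value))
    return value

print(set_pl1(65))  # 65000000
```

On Windows the same limit is usually set in the BIOS or with a vendor tool; the point is just that '65W' here is a configurable limit, not an intrinsic property of the chip.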


----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> The point from your own review was *that you needed to adjust the stock boost behaviour and manually limit the CPU to 65W* out of the box, buddy. And we're full circle now, thanks.
> 
> Just so we don't get things mixed up, here:
> 
> ...



How do you know that the 65W spec being enforced is not the default setting on every motherboard? Guessing?


----------



## Vayra86 (Sep 12, 2022)

nguyen said:


> How do you know that the 65W spec being enforced is not the default setting on every motherboard? Guessing?


Again from techspot:

_"Non-K parts on the other hand, like the Core i7-12700 which are technically 65W parts -- but also not really -- _*still suffer from a loosely defined spec*_, which can see them clock as low as 2.1 GHz on the P-cores for AVX workloads, though without power limits should maintain an all-core frequency of 4.5 GHz, which is a 114% increase over the base frequency."_

This is the point. We've seen it before with Intel and motherboard vendors where loosely defined specs led to unwanted behaviour. Back then it was about Z-boards. Now we're talking about non-OC boards suffering that situation.









PSA: Don't Buy This Asrock Motherboard
In testing new Intel B660 motherboards for an upcoming VRM roundup, we felt compelled to stop and look at the Asrock B660M-HDV, only because Asrock is at...
www.techspot.com

----------



## nguyen (Sep 12, 2022)

Vayra86 said:


> Again from techspot:
> 
> _"Non-K parts on the other hand, like the Core i7-12700 which are technically 65W parts -- but also not really -- _*still suffer from a loosely defined spec*_, which can see them clock as low as 2.1 GHz on the P-cores for AVX workloads, though without power limits should maintain an all-core frequency of 4.5 GHz, which is a 114% increase over the base frequency."_
> 
> ...



Yawn... some boards enforce the power limit by default and some don't, so again, if you don't know how to make your equipment work as intended, the problem is on you. I'm not going to be hypocritical and talk like the average PC DIYers are so clueless they don't know what TDP their CPUs are running at... or that they can't read reviews before buying something


----------



## dragontamer5788 (Sep 12, 2022)

Mussels said:


> This is the trouble I've been having discussing this on the forums (and I keep getting called a fanboy, a shill, etc. etc.)
> If you need to change almost every aspect of the stock behaviour, that's not some elitist awesome superduper thing to brag about - it's a sign the product is bad.
> 
> 
> No one argues Intel doesn't have great single-threaded performance - the problem is that they're making you pay for more and more terrible E-cores to get it, and then to sustain it you need to disable the E-cores to keep the TDP down...



Not necessarily?

The P vs E core thing, much like big.LITTLE in the ARM world, rests on physics-level principles that make sense. The software doesn't quite exist yet to fully take advantage of P vs E cores, meaning it's now the user's job to make up the difference. Windows has always had good, low-level controls for which processes/threads get *affinity* to which cores. Yes, it takes manual effort, but an advanced user can say "these threads use E-cores, those threads use P-cores".
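That affinity knob is scriptable, too. A minimal sketch in Python using Linux's `sched_setaffinity` (on Windows the equivalent is `SetProcessAffinityMask` or `start /affinity`); the logical-CPU layout is an assumption here - Alder Lake commonly enumerates the hyperthreaded P-cores first, then the E-cores - so verify against your own topology before using it:

```python
import os

def alder_lake_cpu_sets(p_cores=8, e_cores=8):
    """Return (p_set, e_set) of logical CPU indices, assuming the common
    Alder Lake enumeration: hyperthreaded P-cores first (two logical CPUs
    each), then single-threaded E-cores. Check the real topology with a
    tool like hwloc/lstopo before relying on this."""
    p_set = set(range(p_cores * 2))                         # e.g. 0..15
    e_set = set(range(p_cores * 2, p_cores * 2 + e_cores))  # e.g. 16..23
    return p_set, e_set

def pin_to_ecores(pid=0):
    """Pin a process (0 = the calling process) to the E-cores only.
    Linux-only call; requires the machine to actually have those CPUs."""
    _, e_set = alder_lake_cpu_sets()
    os.sched_setaffinity(pid, e_set)
```

e.g. pinning a background encode to the E-cores this way keeps the P-cores free for whatever is in the foreground.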

For pure single-threaded performance, you're likely correct that disabling the E-cores is best.

However, for multi-threaded performance, as well as power-efficient background processes, the E-cores will be best. We know this because it's already true in the big.LITTLE ARM world.

Intel tried to make automagical thread configuration, and so did Microsoft with Windows. Alas, there's only so much "auto-configuration" you can do. At the end of the day, it's the programmer/developer's responsibility to set sane defaults (and barring that, it becomes the IT administrator's job to set up the threads/processes correctly). Since the typical modern computer user doesn't have an IT degree or that understanding, they will inevitably set these knobs to an inefficient setting.

I don't think that makes this whole exercise useless, however. It's just the nature of a 1st-generation product tackling a very complicated scheduling/OS-level problem. The real issue is that Microsoft / Intel needed to be working on this 10 years ago, when ARM made their first big.LITTLE chips.

Especially because of Apple's advancements with the M1 and M2, with two-day battery life and whatnot. People are beginning to notice the poor power-efficiency of Intel (and AMD) machines compared to Apple these days. It will be worthwhile to perform a large optimization pass over the software run on laptops to allow for multi-day computing on one battery charge... it hasn't happened yet, though. First the hardware needs to come out (success: P vs E cores), then the software will change. There's no other way for this to move forward.


----------



## Mikael Andersson (Sep 12, 2022)

Psychoholic said:


> Anecdotal but my 12900K system is the smoothest and most responsive system I have ever had, this includes my previous 10900K and 3900X machines, So i guess the E-Cores are doing a good job at background tasks.


That's the advantage of having sixteen cores and twenty-four threads.


----------



## dirtyferret (Sep 12, 2022)

Mussels said:


> This is the trouble I've been having discussing this on the forums (and I keep getting called a fanboy, a shill, etc. etc.)


I would never call you a fanboy or shill; I would simply say you are best with a little white wine, shallots, and frites. I will say, reading the whole discussion, it's hard for people to understand some of the points made.


Mussels said:


> If you need to change almost every aspect of the stock behaviour, that's not some elitist awesome superduper thing to brag about - it's a sign the product is bad.


What stock behavior? Are we talking about some bad mobos from Asrock, or do all stock settings need to be changed, and for what user purpose? Real-world use or high-benchmark use?


Vayra86 said:


> The point is the massive gap between advertised TDP of 65W and peak of 202, or higher in the upcoming gen, and how this affects stock settings and user experience. The gap on competition is smaller and the clock behaviour is less bursty as a result, which also affects cooler requirements.


I'm looking at two 65W CPUs (Intel 12400 and 5600X), so I'm not playing favorites here. Both draw similar power while gaming in Cyberpunk, and temp-wise they are similar while using Blender, probably a worst-case scenario for most people. We (as educated consumers) have known the TDP recommendations from Intel and AMD have been pure BS for years, so why is that surprising now?


Vayra86 said:


> This is the point. We've seen it before with Intel and motherboard vendors where loosely defined specs led to unwanted behaviour. Back then it was about Z-boards. Now we're talking about non-OC boards suffering that situation.



Is this an Asrock issue or an issue with all Intel Z motherboards?


----------



## Vayra86 (Sep 12, 2022)

dirtyferret said:


> I would never call you a fanboy or shill; I would simply say you are best with a little white wine, shallots, and frites.  I will say reading the whole discussion, it's hard for people to understand some of the points made
> 
> What stock behavior? Are we talking about some bad mobos from Asrock, or do all stock settings need to be changed, and for what user purpose? Real-world use or high-benchmark use?
> 
> ...


You're looking at the wrong CPUs; the ten-plus posts above are about the higher-end non-Ks. Note how the temp graph goes up sharply from the 12600K onwards. We know the lower and mid range runs cool because the core count is lower and so is the peak frequency, by up to a full GHz even.

As for the TDP values being BS, this is untrue. The underlying principles have been changing over the past generations, curiously at the same pace as Intel was losing its leadership.



dirtyferret said:


> Is this an Asrock issue or an issue with all Intel Z motherboards?


I think this is an issue that can exist in the first place because Intel has defined the spec in a silly way, and again, this isn't about Z boards, where some tweaking can be expected from users, but about non-overclockable chipsets and parts.

I had the same criticism of AMD's approach in the past. Being loose with what MB vendors can do is always troublesome for end users. It really only serves to mislead customers.



nguyen said:


> Yawn... some boards enforce the power limit by default and some don't, so again, if you don't know how to make your equipment work as intended, the problem is on you. I'm not going to be hypocritical and talk like the average PC DIYers are so clueless they don't know what TDP their CPUs are running at... or that they can't read reviews before buying something



History is full of examples of just that, and really, the average PC DIY job is a complete learning process from beginning to end for most average PC DIYers.

So yes, some clarity would certainly be nice.


----------

