Apple has had close to a decade of experience with big.LITTLE. Yes, that was on iOS rather than the desktop, but are you telling me their years of experience with Axx chips and ARM won't help them here? Technically MS also had Windows on ARM, but we know where that went.
CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS hasn't managed to make the scheduler work decently with that in that time, it's their own fault.
No, of course not, but without the actual chips out there how can MS optimize for it? You surely don't expect Win11 to be 100% perfect right out of the gate with hardware that's basically releasing after the OS was RTMed. Real-world user feedback and subsequent telemetry data will be needed to tune for ADL properly; that's just reality. Would you say that testing AMD with those skewed L3 results was just as fair?
Perfect? No. Pretty good? Yes. See above.
And ... the AMD L3 bug is a bug. A known, published bug. Are there any known bugs for ADL scheduling? Not that I've heard of. If there are, reviews should be updated. Until then, the safe assumption is that the scheduler is doing its job decently, as performance is good. These aren't complex questions.
So why does the GPU test bench use 4000MHz modules with the 5800X? Also, previous benchmarks show even higher fps: 112 vs 96.
Because the GPU test bench is trying to eliminate CPU bottlenecks, rather than present some sort of representative example of CPU performance? My 5800X gives me WHEA errors at anything above 3800, so ... yeah.
According to Igor's Lab's review (linked here), where they measure CPU power consumption when gaming and calculate watts consumed per fps, Alder Lake is doing very, very well.
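For clarity, the metric Igor's Lab reports boils down to a simple ratio of package power to frame rate. This is just a sketch of that ratio; the numbers below are hypothetical placeholders, not measurements from the review:

```python
# Efficiency metric used in gaming power reviews: CPU package power
# divided by average fps = watts consumed per fps (lower is better).

def watts_per_fps(package_power_w: float, avg_fps: float) -> float:
    """Gaming energy efficiency: watts drawn per frame per second."""
    return package_power_w / avg_fps

# Hypothetical example: a CPU drawing 80 W while averaging 160 fps
print(watts_per_fps(80.0, 160.0))  # 0.5
```

The useful property of this metric is that it rewards chips that hit high fps at modest power, rather than just topping the fps chart at any cost.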
That looks pretty good - if that's representative, the E cores are clearly doing their job. I would guess that is highly dependent on the threading of the game and how the scheduler treats it though.
AnandTech does that IIRC, but for our enthusiast audience I feel it's reasonable to go beyond the very conservative memory spec and use something that's fairly priced and easily attainable.
Yep, as I was trying to say I see both as equally valid, just showing different things. It's doing anything else - such as pushing each chip as far as it'll go - that I have a problem with.
Wait, are those light blue numbers idle numbers? How on earth are they managing 250W idle power draw? Or are those ST load numbers? Why are there no legends for this graph? I can't even find them on their site, wtf? If the text below is supposed to indicate that the light blue numbers are indeed idle, there is something very wrong with either their configurations or how they measure. Modern PC platforms idle in the ~50W range, +/- about 20W depending on the CPU, RAM, GPU and so on.
LOL, I'd buy that. Extrapolating from the AnandTech review, a 32 E-core processor would use roughly 192W at max utilization.
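The extrapolation is just linear scaling of per-cluster power. A minimal sketch, assuming the reviewed 8 E-core cluster draws about 48 W at full load (an assumption chosen to be consistent with the post's 192 W figure, i.e. ~6 W per core, not a quoted AnandTech measurement):

```python
# Back-of-the-envelope linear scaling behind the "32 E-cores ~192 W" estimate.
# measured_power_w is an assumed cluster draw, not a published figure.

measured_cores = 8        # E-core count in the reviewed chip
measured_power_w = 48.0   # assumed max-load draw of that E-core cluster
target_cores = 32

estimate_w = measured_power_w * target_cores / measured_cores
print(estimate_w)  # 192.0
```

Note that linear scaling ignores uncore/fabric power, which grows with core count, so the real number would likely land somewhat higher.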
Well, you'd need to factor in a fabric capable of handling those cores, so likely a bit more. Still, looking forward to seeing these in mobile applications.