"In reality, the power limit is set to 241 W, which pretty much lets the processor suck as much power as it wants and negatively affects energy efficiency at the cost of higher performance."
Isn't that up to the motherboard to decide? I mean, if I enable "Asus Optimiser" on mine, my 65 W 11700 turns into a 200 W CPU.
From W1zzard's explanation, it's essentially a normalization of MCE for K-SKU chips: ignoring the on-paper 125 W spec is not just the norm but the expected power programming for motherboards. At least now there should be some modicum of standardization, if nothing else.
Alder Lake is currently heavily hamstrung by RAM. Gear 2 runs the IMC at half speed, so to match a 4000c16 kit, for example, you need something like a 7000+ DDR5 kit. When/if those come along, the gap between the 5950X and the 12900K will grow larger.
From my testing, to match a 3200c12 gear 1 RAM setup you need 4500c17+ in gear 2. Obviously 4500c17 is way better than the current DDR5 offerings. Sure, some games and apps actually prefer bandwidth over latency, but generally speaking, the DDR5 kits we have right now are really, really slow.
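As a sanity check on those kit numbers: first-word CAS latency in nanoseconds is CL divided by the memory clock (half the data rate). A quick sketch using the kits from the post (note this only covers CAS; the extra IMC latency that gear 2 adds is not modeled here):

```python
def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    """First-word CAS latency in ns: CL cycles / memory clock (MHz).
    For DDR, the memory clock is half the data rate."""
    return cl / (data_rate_mts / 2) * 1000

# Kits mentioned above
print(cas_latency_ns(3200, 12))  # 7.5 ns
print(cas_latency_ns(4500, 17))  # ~7.56 ns -> roughly equal CAS latency
```

So the two kits land at nearly identical absolute CAS latency, which is consistent with 4500c17 in gear 2 being needed just to keep up with 3200c12 in gear 1 once the halved IMC clock is factored in.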
3200c12? Who runs that? Has anyone even sold 3200c12 kits? The same goes for 4500c17. Sure, tuning to that level is possible with some RAM, but it's not even remotely normal. And this is a CPU review, trying to speak to generalizable, expected, normal performance, not "we binned and tuned our RAM to within an inch of its life" performance. Tuning things to the extreme is not what you want to do in a product review like this. There are two equally valid test methodologies for a review like this: use a fast, commonly available kit at XMP/DOCP settings, or stick to JEDEC settings. Anything else and you're leaving the realm of reproducible performance results.
Also, while DDR5 does come with a latency regression overall, you can't do a 1:1 comparison to DDR4 due to how differently the two types of RAM work - there are fundamental changes to how data is handled that will impact effective latencies differently across the generations. I'm not saying it's faster, but 1:1 comparisons are flawed. It's kind of obvious that a late-gen high-end DDR4 kit will be better than a first-gen DDR5 kit, even if that kit is "high end" for its generation. Still, several sites have tested the same CPU with both DDR4 and DDR5 and found relatively minor performance differences (though fast DDR4 is generally faster) - screenshots are in this thread. Calling it "heavily hamstrung" is an exaggeration.
Edit: Looking at AnandTech's memory and cache subsystem latency testing demonstrates how 1:1 DDR4-to-DDR5 latency spec comparisons are problematic. (Yes, they test at slow JEDEC specs, but that's irrelevant, as DDR4-3200c20 is still much lower latency than DDR5-4800c40 - 12.5 ns vs. 16.7 ns CAS.) The measured latency difference between DDR4 and DDR5 on the same CPU is less than the ~4.2 ns advantage indicated by CAS. Of course CAS latency is hardly the be-all, end-all of latency, and even at JEDEC specs memory training is left to the motherboard - but isn't it then safe to assume that motherboards would do a better job at optimally training mature DDR4 than brand-new DDR5? Yet the latency numbers are nearly identical.
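For anyone who wants to verify the CAS figures for the two JEDEC kits, the same first-word latency formula applies (latency in ns = CL cycles / memory clock, with memory clock being half the data rate):

```python
# JEDEC DDR4-3200 CL20 vs. JEDEC DDR5-4800 CL40
ddr4 = 20 / (3200 / 2) * 1000  # 12.5 ns
ddr5 = 40 / (4800 / 2) * 1000  # ~16.7 ns
print(f"{ddr4:.1f} ns vs {ddr5:.1f} ns, delta {ddr5 - ddr4:.1f} ns")
```

The on-paper delta is about 4.2 ns, which is the gap that the measured subsystem-latency results come in well under.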
This of course doesn't mean that you can't get much lower latencies with currently available DDR4 kits vs. currently available DDR5 kits - there are very fast DDR4 kits out there, after all, and DDR5 is so far quite slow. But it does show that 1:1 latency spec comparisons aren't really valid across these two memory generations.
Yeah they missed the mark on that one, currently. I think some refinements need to happen on the OS side of things to properly utilize the big/little dynamic. The potential for energy savings is there.
That sounds unrealistic to me. Minor improvements? Sure. But in an nT workload, you'll be loading all cores until you hit the power limit no matter what. Unless you want the OS to override the BIOS power limits dynamically, or to artificially limit power and performance by sequestering heavy nT loads to E cores only (or E cores plus some arbitrary, low number of P cores), there isn't much that can change there.
For more variably threaded workloads, there might be optimizations in the scheduler and thread handling, but ADL already seems to handle this reasonably well. Not to mention that any test of a variably threaded workload is inherently unrepresentative of other variably threaded workloads, so at the very least you'd need several (each producing reliable results) before you can make any claim to representativeness.