
Alder Lake CPUs common discussion

So, having read quite a lot of reviews here are some pertinent and important conclusions:
  • Intel has pushed their P-cores to extremes in MT scenarios to be the absolute performance king: at least faster than the 5900X, and rivaling the 5950X in many cases. This results in insane power consumption out of the box, but only for heavy MT tasks, e.g. video encoding, rendering, software compilation, math calculations - not something average people do daily.
  • This extreme power consumption does not translate into every day scenarios like modestly threaded applications or games - in fact many reviewers show that ADL CPUs are the most power efficient in games. Igor's Lab, AnandTech and computerbase.de have shown that limiting their TDP to 125W or even lower does not meaningfully affect frame rates.
  • It seems very likely that if the P-cores' maximum frequency were decreased by just 200-300 MHz, their efficiency would be incredible.
  • Factory OC'ing is not new and NVIDIA, AMD, Apple have been doing that for at least a couple of years. No one is crying foul because of that.
TLDR: Overall ADL CPUs are great, save for an extreme factory OC in heavy MT scenarios, which can easily be mitigated by capping power consumption via the PL1 limit in BIOS (see the sketch below). At the moment the only real issue is the price of the platform: even though the CPUs are competitively priced, you need a quite expensive motherboard, DDR5 RAM (the faster the better) and a decent cooling solution (preferably an AIO).
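For those who'd rather experiment before committing to a BIOS change, here's a minimal sketch of the same PL1 cap applied from Linux through the powercap/RAPL sysfs interface - assuming the intel_rapl driver is loaded and the package zone is intel-rapl:0 (paths can differ per system), run as root:

```python
# Minimal sketch: cap PL1 (the long-term power limit) from Linux via the
# powercap/RAPL sysfs interface instead of BIOS. Assumes the intel_rapl
# driver is loaded and the package zone is intel-rapl:0; run as root.
RAPL = "/sys/class/powercap/intel-rapl:0"

def set_pl1(watts: int) -> None:
    # constraint_0 is the long-term constraint (PL1); values are microwatts
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(watts * 1_000_000))

def read_pl1() -> int:
    with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
        return int(f.read()) // 1_000_000

set_pl1(125)  # the 125 W cap reviewers found barely affects frame rates
print(f"PL1 is now {read_pl1()} W")
```

(On Windows, Intel XTU exposes the equivalent sustained power limit knob.)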

Too many reviewers are fishing for views and ad revenue, so they run loud, disparaging headlines that aren't necessarily representative of the real world - which is quite sad.
While some users see only what they want to, from their own perspective, while saying everyone else is wrong. Hmm.
 
It's still 10nm... I mean, I know they're calling it 7 or whatever, but it's a 10nm lithography - it's not going to be as power efficient as Zen 3 at 7nm. But honestly, an amazing step forward in performance either way.
It's more or less the same as Zen's 7nm. That's why they renamed their process: similar transistor densities. For years TSMC and Samsung named their processes to make it look like they were on par with Intel, when in reality their density was always one step behind. Of course, this was all before Intel's 10nm "smashing success".

(quoting the "So, having read quite a lot of reviews…" post above)
This is what AMD used to do with their GPUs: push them to insane power requirements just to be able to claim they're on par with Nvidia. It's how underclocking/undervolting became a thing with AMD owners.

There's a lot more to learn about AL (e.g. a CPU that draws more power but finishes a task quicker may still use less energy than a CPU that draws less power for longer - see the toy numbers below). But the simple fact is that fully loaded, out of the box, AL burns through a lot of power. On the upside, using AVX512 doesn't result in even more power burnt. It stays within the same limits, which is a first for AVX512.
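A toy calculation of energy = power × time makes that point concrete - the numbers below are invented for illustration, not taken from any review:

```python
# Energy = power * time. Invented numbers, purely illustrative:
# a chip that "races to idle" at high power can still win on total energy.
fast_hot  = 240 * 60     # 240 W for 60 s  -> 14,400 J
slow_cool = 140 * 130    # 140 W for 130 s -> 18,200 J
print(fast_hot, slow_cool)  # the higher-power chip used less energy overall
```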

If I have a bone to pick with AL, it's the scheduler. Intel only went for Win11 support and apparently even that isn't foolproof yet. Win10 support falls well short (can be worked around manually, but that's subpar). And Linux patches haven't even begun to land, which is very uncharacteristic for Intel :(

That said, I still think a 12600K with a lower-specced mobo (or, even better, a 12600 with an H-series mobo) is/will be great value for the money.
 
(quoting the replies above)

I'm 100% sure Windows 10 support will arrive sooner or later; after all, the OS will be supported until 2029. For early ADL adopters who insist on running this CPU along with Windows 10 there are at least two solutions:
1) Pinning applications to the fast cores, which can even be automated - there are utilities for that, e.g. dAffinity, Bill's Processor Manager, Process Lasso, or even command line/.lnk editing with an affinity mask calculator (see the sketch below).
2) Disabling E-cores altogether.

No big deal as far as I can see.
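For illustration, here's a minimal sketch of option 1 using Python's psutil. The core indices are an assumption for an i5-12600K-style layout (logical CPUs 0-11 on the six hyperthreaded P-cores, 12-15 on the four E-cores) - verify your own topology first - and "game.exe" is a hypothetical process name:

```python
# Sketch: pin matching processes to Alder Lake P-cores on Windows 10.
# Assumed topology (i5-12600K-style): logical CPUs 0-11 = P-cores (with HT),
# 12-15 = E-cores. Verify against your own system before reusing.
import psutil

P_CORES = list(range(12))  # assumed P-core logical processors

def pin_to_p_cores(name: str) -> None:
    """Restrict every process called `name` to the P-cores."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            proc.cpu_affinity(P_CORES)
            print(f"pinned PID {proc.pid} to CPUs {P_CORES}")

# Hypothetical target; the equivalent one-shot launch from cmd would be
#   start /affinity FFF game.exe   (0xFFF = bitmask of CPUs 0-11)
pin_to_p_cores("game.exe")
```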
 
(quoting the "So, having read quite a lot of reviews…" post above)

Very much in line with Linus's review - I guess I watched the right one. Since he got that big guy (sorry, I don't know his name), his content has been really good.
 
Pair this with B660 and some decent DDR4 and you get the best budget gaming build.
 
I think Intel won't be happy with Alder Lake's sales figures.

1) Needs Windows 11, which some (if not many) people won't upgrade to because of perceived or real stability and quality concerns.
2) While Alder Lake's gaming power consumption is comparable to and not far from Zen 3's, the damage is already done, methinks. People already think Alder Lake is hot and power hungry while barely edging out Zen 3.
3) Generally unfavourable component cost: GPU, DDR5, motherboard, better cooler and PSU (because of the hot and power-hungry perception).
4) New architecture weirdness. Though I'll admit that if people could stick it out through Ryzen 1000 weirdness, they'll stick with Intel too. Then again, I personally know a few people who switched from AMD to Intel because of 1000 and 2000 series weirdness.

Intel's got an uphill battle. Though by sheer volume alone it'll trounce AMD anyway.

While I've got no bet on this race, imo the only silver lining is game publishers/developers removing the tumor that is Denuvo, because the tumor isn't compatible with Alder Lake. I bet they wouldn't have bothered if AMD had been the one to introduce big.LITTLE and face the same issue.
 
(quoting the "I'm 100% sure Windows 10 support will arrive…" post above)
If it were that simple, MS wouldn't have built a new scheduler for Win11 ;)
Pinning may work in some cases, but it won't in others. E.g. if I pin my browser to the P-cores, tabs I'm not watching can't be moved to the E-cores. Not a deal breaker, but a subpar experience for sure.
 
(quoting the "If it were that simple…" reply above)
It is simple when you know which applications should run on which cores. The real issue is doing that automatically/intelligently without either wasting watts unnecessarily or slowing everything down.

Even for Windows 11 it's not all rosy yet: I've seen reviews where e.g. databases ran dramatically slower than on RKL/CML/Zen 3 CPUs because Windows 11 decided they were background tasks and scheduled them on the E-cores.
 
It is simple when you know which applications should run on which cores.
I gave you an example above: same app, and you want it running on both core types depending on the circumstances. Not simple ;)
 
At 125 W the 12900K is quite competitive, though it's beaten by the 5950X at just 88 W. Looks like Intel wanted to achieve maximum performance at any cost :-( Let's see what Meteor Lake will bring.
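Back-of-the-envelope perf-per-watt math makes that kind of gap concrete - the scores below are invented purely to illustrate the arithmetic, not taken from any review:

```python
# Illustrative perf-per-watt comparison; the scores are made up.
def points_per_watt(score: float, package_watts: float) -> float:
    return score / package_watts

print(points_per_watt(24_000, 125))  # hypothetical 12900K capped at 125 W -> 192
print(points_per_watt(26_000, 88))   # hypothetical 5950X at 88 W -> ~295
```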


Here's in-depth coverage from computerbase.de.

Here's the problem with Computerbase.de's report - crippled DDR5 that no early adopter will run:

 
(quoting the "I think Intel won't be happy…" post above)
I don't know where the "needs Windows 11" claim comes from. Apparently the scheduler helps, but it doesn't need Windows 11. I think TechPowerUp are doing a Windows 10 review soon, and club386 have already tested it on Windows 10.

I think 241 W for a CPU released in 2021 is bad news; the PC industry is going against the current eco trend elsewhere. Looking at the power data with the E-cores disabled, they don't actually seem to be saving power - rather, they're just low-performance cores given a good marketing name.

Agreed on the component cost - the motherboard prices are completely unacceptable. Will DDR5 ever hit DDR4 pricing levels?

Agree also on the architecture changes. Windows 11 is the best-case scenario; you have to account for older and non-Windows OSes as well.
 
I think 241 W for a CPU released in 2021 is bad news; the PC industry is going against the current eco trend elsewhere. Looking at the power data with the E-cores disabled, they don't actually seem to be saving power - rather, they're just low-performance cores given a good marketing name.

In a normal use scenario they use as much power as a 5950X and are more powerful. Only if you're doing some specific tasks is the power usage higher than the 5950X's. Power is not a problem for 90% of the people buying Alder Lake.

 
(quoting the "I'm 100% sure Windows 10 support will arrive…" post above)

You probably already know this, but the capability is already there on Win 10 - a few registry tweaks enable it. Of course, that is probably not "supported".

There are already some scheduler issues in some games on AL, though. Of course, you can disable the E-cores on AL with the Scroll Lock key on some motherboards, so you don't even have to go into the BIOS.

It's worth noting that this type of thing is clearly lowering Alder Lake's comparative performance scores right now. As patches and fixes are deployed, it's going to get faster in the aggregated results.

Just take a look at the wPrime benchmark here, or the bizarre MS Flight Sim results at other sites. These things will get fixed and I think a lot of sites will need to revisit their reviews in a few months.

Given the number of new techs here - big.LITTLE, PCIe 5.0, an entirely new core uArch (in fact, two new uArchs on one chip), a chipset that has actually changed quite a bit from the previous gen - I'm surprised more problems haven't come up at this point.
 
In a normal use scenario they use as much power as a 5950X and are more powerful. Only if you're doing some specific tasks is the power usage higher than the 5950X's. Power is not a problem for 90% of the people buying Alder Lake.

See? This is what I'm talking about. Alder Lake only burns its undies in heavy non-gaming tasks, but the wrong first impression has already spread.

And @chrcoluk, Windows 11 is NEEDED for Alder Lake unless all you're gonna do is gaming, which isn't that much affected tbh.
Start at 6:41. TPU still can't do YT timestamps.

Seriously, there are so many hoops to jump through to get the best out of Alder Lake. I can see only dedicated fanboys and "enthusiasts" who want the current year's best-of-the-best clamoring to buy them.
 
(quoting the "See? This is what I'm talking about…" post above)
This is why I wanted a Windows 10 review ;)

So Windows 10 works, albeit with a significant performance hit. I suppose Windows 10 users will disable the E-cores and treat it as an 8c/16t CPU.

In regards to the power, the media does tend to over-represent content creation workloads. However, those workloads shouldn't be ignored completely - even I have started doing CPU-based encoding now, as I record a lot of my gaming using x264 mode in OBS (CPU-based encoding).
 
(quoting the "So, having read quite a lot of reviews…" post above)

So all the anti-Intel bullshit about massive power consumption was just the usual anti-Intel TPU crap - well, no surprise for me there. Well done Intel on your comeback.
 
(quoting the "crippled DDR5" post above)
It's not that. As Anand explained, DDR5 can only stretch its legs in memory-intensive and highly threaded workloads - i.e. mostly science stuff, not games.
Seriously, there are so many hoops to jump through to get the best out of Alder Lake. I can see only dedicated fanboys and "enthusiasts" who want the current year's best-of-the-best clamoring to buy them.
There are models in the pipeline that are all P-cores. If you don't like hoops, that is ;)
Or, as pointed out above, get a motherboard that will disable the E-cores at the press of Scroll Lock.

Tbh, AL has brought Intel back into the game and offers some solid choices (12600K and 12600KF), but it's not a must-have or a game-changing contender. Intel should have sat on it one or two more quarters and figured out the scheduler, instead of bringing it out in the form it's in today.

On the other hand, AL changes so much at once - CPU arch, DDR, PCIe and Thunderbolt support, turbo handling (generally a big engineering no-no) - that it's surprising it hasn't turned out to be even more of a mess.

So all the anti-Intel bullshit about massive power consumption was just the usual anti-Intel TPU crap - well, no surprise for me there. Well done Intel on your comeback.
How was it crap? Fully loaded, the 12900K burns through a lot of power - indefinitely, for K chips. The fact that not all workloads fully load the chip is not a power-saving feature.
 


Intel's articles regarding DRM on Alder Lake and games compatibility.

Be aware, compatibility is even more broken on Windows 10.
 
Regarding the W11 near-requirement to get the best out of it, I think it's fair tbh. It's new tech; MS is moving on and probably doesn't want the trouble of doing double work on 10 and 11.
 
Regarding the W11 near-requirement to get the best out of it, I think it's fair tbh. It's new tech; MS is moving on and probably doesn't want the trouble of doing double work on 10 and 11.
It's fair from a technical point of view. It's unfortunate when you have been behind for years and are looking to move as many SKUs as possible.
 
It's not that. As Anand explained, DDR5 can only stretch its legs in memory-intensive and highly threaded workloads - i.e. mostly science stuff, not games.

So are you saying that the use of DDR5-4400 didn't cripple their DDR5 Alder Lake setups? Because that is what I said. The low speed of the DDR5 they used affects latency too, since real latency in nanoseconds is a function of the CL (clocks) and the speed (clocks per second), and it affects their FPS results significantly - that's a 10% drop in FPS vs a tuned DDR4-3200 C12 setup.
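The arithmetic behind that latency point, as a quick sketch - the DDR5 CAS latency here is an assumed JEDEC-ish value, not taken from the review:

```python
# First-word latency: CL cycles at the memory clock, and the memory clock
# runs at half the transfer rate, so ns = 2000 * CL / (MT/s).
def first_word_latency_ns(cl: int, mt_per_s: int) -> float:
    return 2000 * cl / mt_per_s

print(first_word_latency_ns(12, 3200))  # tuned DDR4-3200 C12 -> 7.5 ns
print(first_word_latency_ns(36, 4400))  # DDR5-4400 C36 (assumed) -> ~16.4 ns
```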
 
In a normal use scenario they use as much power as a 5950X and are more powerful. Only if you're doing some specific tasks is the power usage higher than the 5950X's. Power is not a problem for 90% of the people buying Alder Lake.

Just watched it. I think he overplays PCIe Gen 5 when he says it will allow future GPUs to be fully utilised - Gen 4 makes no difference to the RTX 3000 series vs Gen 3. It's one thing for a manufacturer to overplay it, but a reviewer should know better.
 
It's fair from a technical point of view. It's unfortunate when you have been behind for years and are looking to move as many SKUs as possible.

It was the way to get ahead in the race; without those small cores it wouldn't stand a chance.

But I still don't see that as much of a problem (it's most certainly not a deal breaker). I mean, I haven't upgraded, but from what I've seen W11 is pretty good apart from some bugs, and they aren't at the point of destroying your experience. You can live with it.
 
So are you saying that the use of DDR5-4400 didn't cripple their DDR5 Alder Lake setups? Because that is what I said. The low speed of the DDR5 they used affects latency too, since real latency in nanoseconds is a function of the CL (clocks) and the speed (clocks per second), and it affects their FPS results significantly - that's a 10% drop in FPS vs a tuned DDR4-3200 C12 setup.
Cripple? Not by a long shot. Anand didn't see much difference between DDR4-3200 and DDR5-4800 (those are the officially supported speeds, sans overclocking), except for highly multithreaded stuff.
In other words, in the absence of very intensive memory access, DDR4 already offers all the bandwidth you need.

(quoting the "It was the way to get ahead…" reply above)
Not disputing any of that, but all those hitches are enough for everyone to remember the AL launch as "rushed" or "half-assed". Intel could have done without that.
 
Seriously, there are so many hoops to jump through to get the best out of Alder Lake. I can see only dedicated fanboys and "enthusiasts" who want the current year's best-of-the-best clamoring to buy them.

Why are we complaining about possibly jumping through hoops? We're enthusiasts; we're not strangers to tweaking. Whether Alder Lake is the profitable holy grail for Intel is not my concern, and isn't anything I have control over. What I want is control over how my CPU behaves, to the best possible extent, as CPUs get more complex and more intertwined with software and firmware.
  • When the Windows 11 scheduler forgot about Zen 3, I could only take it up the ass as games thought Core 7 (quality ranked #11/12, lmao) on CCD2 was their "preferred core". No control.
  • Even in normal operation, I watch as Windows constantly juggles between Core 0 (junk) and Core 1, because Windows is still infatuated with Core 0. No control.
  • After -30 Curve Optimizer, Core 2 is honestly 5.0GHz capable basically all the time, but I can't use it because neither AMD nor Windows thinks it's "preferred". No control.
There is no control or alternative, because in most benches changing core affinity in Task Manager makes the bench extremely unstable across repeated runs. CPU-Z doesn't even work, and CB R23 is dubious and will eventually crash. Don't like it? Buy another 5900X in hopes of getting a better one, or live with it, because AGESA doesn't change general CPPC behaviour beyond immediate post-launch fixes.

The obvious solution is for AMD to allow advanced users to edit the CPPC ranking, but that's a pipe dream and a half.

Here we have not 1 but 3 options to prevent "focus switching", or load undesirably jumping to E-cores. AT reiterated this twice, once in the preview and once in the review:

(screenshot: the "stop focus switching" options AnandTech highlighted)


And yet we're complaining about having some semblance of control? Granted, I understand that Thread Director isn't perfect at the moment - it's clearly visible in the occasional game/benchmark, and every reviewer has encountered at least one. But everything has to start somewhere, and ADL looks like it's starting from a better position than either Zen 2 or Zen 3 did at launch, firmware-wise.
 