Thursday, October 21st 2021

Intel Alder Lake Doesn't Look Like an Overclocker's Dream

Another day, another Intel Alder Lake leak, although this time it seems to be the same Core i9-12900K retail CPU being tested in China. Some additional details have been provided on its ability to overclock, and while it's perfectly possible to overclock these upcoming CPUs from Intel, it's going to be hard to cool them, even for very small gains in clock speed.

An all-core P-core overclock with the E-cores at default requires quite the voltage bump as well: according to the leaked information, going from 4.9 GHz at a power draw of 233 W and a CPU voltage of 1.275 V up to 5.2 GHz sees a jump of almost 100 W, with the CPU voltage raised to 1.38 V on the sample used. Pushing the CPU to 5.3 GHz requires 1.44 V and drives CPU power to a massive 400 W, which is high-end GPU territory. That said, we're hearing that not all CPUs need this much voltage to hit 5.2 GHz, although we also understand that 5.3 GHz is not a speed that will be easily attained. Apparently the best way to get the most performance out of these new CPUs will be to tune the turbo settings rather than to try to overclock them.
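
As a rough sanity check on those figures: dynamic CPU power scales approximately with frequency times voltage squared (P ≈ C·f·V²). The minimal Python sketch below applies that rule to the leaked numbers; treating the 4.9 GHz point as the baseline, and the first-order model itself, are our own illustrative assumptions rather than anything from the leak.

```python
# First-order estimate of CPU dynamic power scaling: P ~ C * f * V^2.
# Clock and voltage figures are from the leak; the model is illustrative.

def scaled_power(p_base, f_base, v_base, f_new, v_new):
    """Scale a baseline power figure by the classic f * V^2 relationship."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

BASE_W, BASE_GHZ, BASE_V = 233.0, 4.9, 1.275  # leaked all-P-core baseline

for ghz, volts in [(5.2, 1.38), (5.3, 1.44)]:
    est = scaled_power(BASE_W, BASE_GHZ, BASE_V, ghz, volts)
    print(f"{ghz} GHz @ {volts} V -> ~{est:.0f} W (first-order estimate)")
```

This prints roughly 290 W and 321 W, well short of the ~330 W and 400 W the leak reports, which would be consistent with static (leakage) power, not covered by this simple model, climbing steeply at these voltages and temperatures.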
Source: @OneRaichu

62 Comments on Intel Alder Lake Doesn't Look Like an Overclocker's Dream

#26
KarymidoN
freeagentThat is awesome man! I have the ZM-MFC2 and with my current rig I have seen just over 630 W. Just running WCG on the CPU, the system draws 300 W lol..
Your computer is a beast though. Mine is more low-end (R7 3700X, 32 GB DDR4-2933, GTX 1070, NVMe SSD for Windows and an HDD for the files). Most of the time I use the PC on the "Power saver" Windows profile, so when I'm just watching a movie or browsing the web it's around 160~180 W total system power.
When I see news like this with 300 W in the CPU alone, it really shocks me!
Posted on Reply
#27
Cobain
freeagentI have a nice AMD system too, also overclocked but undervolted lower than yours, so I am not disappointed in the slightest.

I didn't pledge allegiance to them or Intel though.. I have more Intel systems than AMD ones..

But honestly, if you are bored with hardware news, take up a new hobby for a while. I stepped away for a few years to play with other toys too.
So why make fun of someone who undervolts, if you are doing it yourself? Makes 0 sense to me.

Plus I'm not bored with hardware, I'm just bored with hardware discussions nowadays. You know, discussions like this one, or seeing people on forums using 100 watts more on their CPUs/GPUs for 4 FPS more, and PBO at 1.5 V for a 1% improvement. Or people judging a CPU by its AIDA64 FPU power consumption at the maximum, barely-stable super overclock, etc. It's just boring, man, and this kind of news only feeds that poor mentality
Posted on Reply
#28
freeagent
CobainSo why make fun of someone who undervolts, if you are doing it yourself? Makes 0 sense to me.

Plus I'm not bored with hardware, I'm just bored with hardware discussions nowadays. You know, discussions like this one or seeing people on forums using 100 watts more on their CPUs/GPUs for 4 FPS more. Or people judging a CPU by its AIDA64 FPU power consumption, etc. It's just boring, man
I wasn't making fun of anyone. You felt compelled to tell me about your AMD system, so I did the same.

That may be, but your attitude said otherwise. Feel free to discuss away, I am all for it, but don't start calling people shills and the like just because they chose one company over another. That isn't directed entirely at you btw, so don't feel singled out. I will be more choosy with my words next time; today I am a bit tired.. late night.
Posted on Reply
#29
yeeeeman
Is this something new, or...? The 11900K did the same, the 10900K the same, the 9900K the same. Do people have such short-term memories these days?
Posted on Reply
#30
Darmok N Jalad
With the complexity of current CPUs and GPUs, overclocking is starting to get built right into the design. We saw it with the 5700 XT: AMD basically said each card would hit different boost clocks based on a series of sensor readings, made in milliseconds, and the variability of each GPU. Intel threw something similar out there with Rocket Lake, where lifting power limits results in the chip boosting as long and as hard as thermals allow. This just seems to be the way both companies plan to arrive at peak performance numbers now.

There is nothing wrong with this idea in that it gives the user the most possible performance out of the hardware within its design limits. However, that means peak power consumption needs to be taken into account; it's becoming a key differentiator in how these companies reach peak performance with their products. If one company requires 400 W to outperform a competitor, that has to be taken into account when you are speccing parts for a system build. If you don't plan to need this performance, then it's a lot of expense for nothing. Energy isn't getting cheaper or more abundant; it's getting scarcer and more expensive. I'm honestly a little surprised we are seeing this approach to achieving more performance. I guess the assumption is most folks don't buy the highest-end products.
Posted on Reply
#31
Cobain
Darmok N JaladWith the complexity of current CPUs and GPUs, overclocking is starting to get built right into the design. We saw it with the 5700 XT: AMD basically said each card would hit different boost clocks based on a series of sensor readings, made in milliseconds, and the variability of each GPU. Intel threw something similar out there with Rocket Lake, where lifting power limits results in the chip boosting as long and as hard as thermals allow. This just seems to be the way both companies plan to arrive at peak performance numbers now.

There is nothing wrong with this idea in that it gives the user the most possible performance out of the hardware within its design limits. However, that means peak power consumption needs to be taken into account; it's becoming a key differentiator in how these companies reach peak performance with their products. If one company requires 400 W to outperform a competitor, that has to be taken into account when you are speccing parts for a system build. If you don't plan to need this performance, then it's a lot of expense for nothing. Energy isn't getting cheaper or more abundant; it's getting scarcer and more expensive. I'm honestly a little surprised we are seeing this approach to achieving more performance. I guess the assumption is most folks don't buy the highest-end products.
I personally take power consumption very seriously when I buy hardware. Yes, price versus performance is the first factor, but power is very important to me. However, I don't measure that power in Furmark or FPU benchmarks, because I use computers to play games and edit music. I want to know the power usage in those tasks, not when I am attempting to beat a benchmark world record.

People used to make a big deal about Ryzen 5000 power consumption compared to Intel, but if you play a game and compare 10700K vs. 5800X power usage, you will see maybe 15-20 W more on the Intel side, while you will see 150 W more in AIDA64. Useless, irrelevant.
Posted on Reply
#32
TheoneandonlyMrK
CobainHardware discussions are so boring these days man...

Really, no one cares if some people are trying to make Intel look bad. I would be more interested in undervolt numbers: stock clocks, going as low as possible on voltage, or even 100 MHz less.

I don't care if the CPU has 3% more performance using 300 or 400 watts; irrelevant to me. Plus, we don't buy CPUs to run Cinebench, AIDA or any other useless benchmark all day long. Show me a single game even using 225 W

Just wait for that Buildzoid or Frame Chasers guy to make a 50-minute video rambling about motherboards and telling you that you need to spend €400 if you want to use a 12900K, as if anyone would need such a VRM

Boring
Yeah, except some people do actually use their PC for more than occasional gaming. Since 2005 I think I have had a PC heater/folder-cruncher; it used to be an FX-8350, and it looks like this could show that heater a thing or two.

Point being, even underclocked as mine always are, running flat out 24/7 requires either a chip that sips power (100 W max) or spending the price of a f£#@&£ GPU on cooling, as I have.
And trust me, don't go cheap unless you're a casual gamer.

Incidentally, my entire system uses about 400 W to fold and crunch, downclocked a bit but with the GPU/CPU loaded up.
Posted on Reply
#34
yotano211
I like to overclock my laptop's processor 3%, or 82 W, to run porn .00003% faster.
Posted on Reply
#35
Cobain
TheoneandonlyMrKYeah, except some people do actually use their PC for more than occasional gaming. Since 2005 I think I have had a PC heater/folder-cruncher; it used to be an FX-8350, and it looks like this could show that heater a thing or two.

Point being, even underclocked as mine always are, running flat out 24/7 requires either a chip that sips power (100 W max) or spending the price of a f£#@&£ GPU on cooling, as I have.
And trust me, don't go cheap unless you're a casual gamer.

Incidentally, my entire system uses about 400 W to fold and crunch, downclocked a bit but with the GPU/CPU loaded up.
If someone needs a CPU for intensive tasks that take advantage of every instruction and every core on the chip, my advice is: don't bother with mainstream platforms.

There are wayyy better options for that kind of stuff.
Posted on Reply
#36
windwhirl
yotano211I like to overclock my laptops processor ³% or 82w to run porn .00003% faster.
I legit wasted a minute of my life trying to find any way to put those numbers into a formula that results in the number 69 :D :laugh:
Posted on Reply
#37
RJARRRPCGP
Am I seeing X99 and X299-like OC power consumption here? Or FX-like? (For example, the 5960X and FX-9590.) Albeit this has more cores, especially it being an i9!

So, I guess there's a way around it, especially for gamers...
Posted on Reply
#38
Aquinus
Resident Wat-man
My god, and I thought that my 3930K consumed a lot of power with a strong overclock. This is insane.
Posted on Reply
#39
TheoneandonlyMrK
RJARRRPCGPAm I seeing X99 and X299-like OC power consumption here? Or FX-like? (For example, the 5960X and FX-9590.) Albeit this has more cores, especially it being an i9!

So, I guess there's a way around it, especially for gamers...
Like an FX? Be serious, the FX used less power to get to 5.5 GHz. Mind you, it would only just about do the work of those E-cores, so there's that.
Posted on Reply
#40
InVasMani
It's the E-cores I'd be more interested in overclocking, but how far can they be pushed beyond stock? Is it like Haswell's 102 MHz on the BCLK, or Skylake's BCLK overclocking that can be pushed more heavily? I don't see much headroom to overclock the P-cores; they are already clocked high, and the wattage to push them further has to be substantial, and appears to be. I don't see the point in that, but the E-cores at 3.7 GHz I could totally see trying to push towards 4 GHz with a modest bump to voltage and wattage.
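
For what it's worth, BCLK overclocking is simple arithmetic: effective clock = BCLK × multiplier. A quick sketch under stated assumptions (a hypothetical 37× E-core multiplier giving the 3.7 GHz stock clock at 100 MHz, and the Haswell-style ~102.9 MHz BCLK ceiling mentioned above):

```python
# Effective core clock is simply base clock times multiplier.
# The 37x multiplier is a hypothetical stand-in for a 3.7 GHz stock E-core.

def effective_clock_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

E_CORE_MULT = 37  # assumed multiplier, not a confirmed Alder Lake figure

for bclk in (100.0, 102.9):
    mhz = effective_clock_mhz(bclk, E_CORE_MULT)
    print(f"BCLK {bclk:.1f} MHz x {E_CORE_MULT} = {mhz:.0f} MHz")
```

That 102.9 MHz BCLK only takes a 37× E-core from 3700 MHz to about 3807 MHz, a ~2.9% bump, so Haswell-style BCLK headroom alone wouldn't get the E-cores anywhere near 4 GHz; that would take multiplier headroom.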
Posted on Reply
#41
RJARRRPCGP
TheoneandonlyMrKLike an FX? Be serious, the FX used less power to get to 5.5 GHz. Mind you, it would only just about do the work of those E-cores, so there's that.
So FX turned out to be better this time, like I feared. Especially the 8370s! (Which I never got a chance to have, but I apparently had a golden-sample-looking FX-8350 from 2014, a late run, that I doubt I maxed out, as I only tried it at a paltry 4.4 GHz. The VID was only 1.2-something volts on my FX-8350!)
Posted on Reply
#42
bug
windwhirlNGL, I already accepted that "CPU overclock" is dead and buried for anything outside of merely demonstrating how far one can take a CPU, at least if you're not willing to invest in some very beefy cooling (step aside, 240 mm radiators, 360 mm is the new minimum for trying your hand at overclocking). Plus, with the way CPUs are getting smarter at handling their own clocks, it might as well become an exercise in futility soon.
My thoughts exactly. CPUs have learned how to overclock themselves over the past few generations, to the point that casual overclocking is all but dead. I mean, just look at some recent CPU reviews right on TPU: quite often a manually overclocked CPU will score lower than a CPU left at default, because the latter can boost just one or two cores when the workload demands it.
Posted on Reply
#43
InVasMani
It's probably more the case that PWM controllers have gotten better at supplying and managing voltage through the VRMs.
Posted on Reply
#44
Jism
Lew ZealandYeah, IMO it's more fun to find out what all architectures can do with some undervolting, including GPUs. I have CPUs and GPUs from all vendors and every one is undervolted except this 9700F, which simply will not run stably at its top turbo with any undervolt; the first pyrite sample I think I've ever received. Works at spec though, so I can't complain, and with a -0.05 V offset it'll run all-core 4.2 GHz for a lot of power savings and a minimal performance reduction from its typical 4.5 GHz all-core turbo.
When a CPU or GPU is taped out, they are not going to run a custom profile for each and every chip. Nope. All chips from one wafer aren't identical, so one chip might be perfectly happy at 1.2 V while another requires 1.25 V to operate. They just set something in the middle that works under all conditions. This is why there's headroom left in CPUs and GPUs; some GPUs might work without problems at 1090 mV while others require 1140 mV to do the very same. It's just luck of the draw, really.
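
As a ballpark of what that binning headroom is worth: at a fixed clock, dynamic power scales with the square of voltage, so undervolting from the conservative factory setting to what a good die actually needs saves a measurable amount. A minimal sketch using the 1140 mV and 1090 mV figures quoted above (treating them as same-clock operating points is an assumption):

```python
# Dynamic power at a fixed clock scales with V^2, so an undervolt from the
# "safe for every die" voltage to what a good sample needs is free savings.

worst_case_mv, good_sample_mv = 1140, 1090  # figures quoted above

savings = 1 - (good_sample_mv / worst_case_mv) ** 2
print(f"~{savings:.1%} lower dynamic power at the same clock")  # ~8.6%
```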

Other than that, the above doesn't surprise me. Intel's TDP isn't the TDP you're getting at AMD; the PL stages are what make these CPUs so damn hungry. And on top of that, the small node pretty much makes it very hard to cool. Lapping, liquid metal, a high-end water cooler for example; that's the territory you need to start looking at if you want marginal improvements over stock. I mean, we came from eras where your 300 MHz CPU could be overclocked to 450 MHz, or your 600 MHz Athlon was just a rebranded 750 MHz part, or the FX went from 3.2 GHz up to 5 GHz if your cooling and board allowed it.

Now it's just a case of ramping up large cooling and letting the chip decide what's best for it while keeping silicon health in check. This is pretty much how AMD's boost works: keep it constantly under 60 degrees and that boost will be there for a lifetime.
RJARRRPCGPSo FX turned out to be better this time, like I feared. Especially the 8370s! (Which I never got a chance to have, but I apparently had a golden-sample-looking FX-8350 from 2014, a late run, that I doubt I maxed out, as I only tried it at a paltry 4.4 GHz. The VID was only 1.2-something volts on my FX-8350!)
Well, in order to get past 4.4-4.5 GHz your board had to support the current the chip needed, and be free of any AMD pre-determined overcurrent limits. If I'm correct it was 25 A on the 12 V line or so; higher-end boards could yield all the way up to 35 to 40 amps. The whole FX line were great overclockers, not just core-clock wise but also the CPU/NB, which was responsible for the L3 cache speed as well. That is something most reviewers never really highlighted, but in relation to overclocking, that was the money shot for cranking up those minimum FPS in games.

The FX was just badly timed, really, in an era where single-core performance still held the crown in applications. You can tell because the FX still holds up to this day in various games when running a higher-end graphics card.
Posted on Reply
#45
RJARRRPCGP
JismWell, in order to get past 4.4-4.5 GHz your board had to support the current the chip needed, and be free of any AMD pre-determined overcurrent limits. If I'm correct it was 25 A on the 12 V line or so; higher-end boards could yield all the way up to 35 to 40 amps. The whole FX line were great overclockers, not just core-clock wise but also the CPU/NB, which was responsible for the L3 cache speed as well. That is something most reviewers never really highlighted, but in relation to overclocking, that was the money shot for cranking up those minimum FPS in games.

The FX was just badly timed, really, in an era where single-core performance still held the crown in applications. You can tell because the FX still holds up to this day in various games when running a higher-end graphics card.
I felt the same way too; I of course didn't expect it to do better in Halo, LOL.

4.4 GHz was without a Vcore increase. I needed more cooling before I even felt like giving it a run with an x264 2-pass encode. I think I had the Sabertooth 990FX R2.0 set to aggressive VRM settings.
Posted on Reply
#46
yotano211
windwhirlI legit wasted a minute of my life trying to find any way to put those numbers into a formula that results in the number 69 :D :laugh:
I ran your calculations on NASA's supercomputer; 69.69 is correct.
Posted on Reply
#47
Darmok N Jalad
JismThe FX was just badly timed, really, in an era where single-core performance still held the crown in applications.
What it really lacked was a Windows scheduler that knew the FX was a hybrid 2+1 ALU/FPU design. Windows wouldn't dispatch an FPU thread to an idle FPU, but would instead assign it to one of the ALUs that was already waiting on its shared FPU to complete a task. I think I got that right, anyway. Basically, the chips could have performed better in their day, but just didn't get the OS support; AMD was essentially broke at that time and couldn't get that kind of support. I bet Alder Lake will perform worse on Windows 10; it's just a question of how much, and that remains to be seen.
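
A toy sketch of that scheduling problem: with four modules of two integer cores sharing one FPU each (the FX-8350 layout), a module-unaware scheduler can stack two FPU-heavy threads onto one module while other modules sit idle. The placement policies below are illustrative only, not Windows' actual scheduler logic.

```python
# Toy model of module-aware vs. module-unaware thread placement on an
# FX-style chip: 4 modules x 2 integer cores, with 1 FPU shared per module.

MODULES = 4

def naive_placement(n_threads):
    """Fill cores in order: cores 0 and 1 share module 0, and so on."""
    return [core // 2 for core in range(n_threads)]

def module_aware_placement(n_threads):
    """Spread threads across modules first, only doubling up when full."""
    return [t % MODULES for t in range(n_threads)]

for place in (naive_placement, module_aware_placement):
    modules = place(4)  # four FPU-heavy threads
    conflicts = len(modules) - len(set(modules))
    print(f"{place.__name__}: modules {modules}, {conflicts} FPU conflict(s)")
```

The naive policy puts the four threads on modules [0, 0, 1, 1], so two pairs contend for a shared FPU; the module-aware policy lands them on [0, 1, 2, 3] with no contention, which is roughly the win the FX never got from Windows.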
Posted on Reply
#48
KarymidoN
freeagentThat is awesome man! I have the ZM-MFC2 and with my current rig I have seen just over 630 W. Just running WCG on the CPU, the system draws 300 W lol..

Just got home from work; here's my old ZM-MFC3. Still working (barely), though 3 of the 4 temp sensors are dead. 39°C on the exhaust, CPU die at 51°C with light work and 78°C at full load.
Posted on Reply
#49
Crackong
Darmok N JaladWhat it really lacked was a Windows scheduler that knew the FX was a hybrid 2+1 ALU/FPU design. Windows wouldn't dispatch an FPU thread to an idle FPU, but would instead assign it to one of the ALUs that was already waiting on its shared FPU to complete a task. I think I got that right, anyway. Basically, the chips could have performed better in their day, but just didn't get the OS support; AMD was essentially broke at that time and couldn't get that kind of support. I bet Alder Lake will perform worse on Windows 10; it's just a question of how much, and that remains to be seen.
The "leaks" says Win10 scheduler would only use the E-cores in Alder Lake, and leave all the P-cores idle.
Posted on Reply