
Intel Core i9-14900K Raptor Lake Tested at Power Limits Down to 35 W

AMD's mini Zen 4c cores explained: They're nothing like Intel's Efficient cores | PC Gamer

Intel's E-cores are not the same, and that's where they cause issues - those missing instructions break programs when a thread gets bounced from a P-core to an E-core, because suddenly it can't execute them and crashes.
There are some weird statements regarding scheduler complexity, core size ratios and missing instructions in that PCGamer article. After Intel disabled AVX512, the instruction sets are the same on P and E - but if you have any source that went into detail and discovered differences, I'm genuinely interested.

Optimal scheduling on an AMD hybrid CPU would be an extremely difficult task, you'd have cores with *four* distinct performance levels, namely Zen 4 without HT, 4c without HT, 4 with HT, and 4c with HT. Intel "only" has P, E, and P with HT. HT gets in the way all the time, I consider it a harder nut to crack than hybrid cores.
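To illustrate why those four tiers make scheduling awkward, here's a toy sketch (the core labels and speed ratios are made up purely for illustration, not measured values):

```python
# Toy illustration of the "four distinct performance levels" problem on a
# hypothetical Zen 4 + Zen 4c hybrid. Speeds are made-up relative numbers.
SLOT_SPEED = {
    "Zen4":        1.00,  # full core, SMT sibling idle
    "Zen4c":       0.75,  # dense core, SMT sibling idle
    "Zen4 + SMT":  0.60,  # second thread sharing a busy Zen 4 core
    "Zen4c + SMT": 0.45,  # second thread sharing a busy Zen 4c core
}

def place_threads(n_threads: int) -> list[str]:
    """Greedy placement: always pick the fastest remaining slot type."""
    free = {slot: 8 for slot in SLOT_SPEED}  # imaginary 8 + 8 part with SMT
    placement = []
    for _ in range(n_threads):
        best = max((s for s in free if free[s] > 0), key=SLOT_SPEED.get)
        free[best] -= 1
        placement.append(best)
    return placement

print(place_threads(20))  # the scheduler has to get this ordering right,
                          # and real workloads are not this uniform
```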
 
I think he just meant that Intel's efficiency and performance cores don't have the same IPC, and that they also stopped being efficiency cores when they were forced to run at higher clocks:



"Below 15 watts, Gracemont achieves higher performance while consuming less power than Golden Cove. Add around 6 watts for uncore power, and we’re roughly within the power targets of thin and light laptops.

Looking through the entire power range, Gracemont struggles to scale well past 3-4 watts per core. That’s completely expected from a microarchitecture targeting low power. Unfortunately, the i7-12700K’s stock settings push Gracemont way past its sweet spot. Golden Cove cores also run into diminishing returns, but show much better scaling with power"


Even Zen 2 already tramples the efficiency cores at what they're supposed to be good at.



Plus, AMD wouldn't have any major problems creating a hybrid chip. With Zen 4c they can put 16 cores in the same space as 8 regular Zen 4 cores, so to create a CPU focused on MT they would just need a configuration of 8 Zen 4 cores running at high frequency plus 16 Zen 4c cores at lower frequency on the other CCD; the logic would be no different from the 7950X3D.
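Putting rough numbers on that (a minimal sketch; the 2-to-1 density figure is taken from the statement above, and real die areas will differ):

```python
# Back-of-the-envelope area check for the proposed 8 Zen 4 + 16 Zen 4c part,
# assuming the claim above that 16 Zen 4c cores fit in the area of 8 Zen 4 cores.
zen4_area = 1.0             # one Zen 4 core, normalized
zen4c_area = zen4_area / 2  # implied by the 16-in-the-space-of-8 claim

standard_ccd = 8 * zen4_area    # 7950X-style CCD: 8 cores
dense_ccd = 16 * zen4c_area     # Bergamo-style CCD: 16 cores, same core area

print(standard_ccd, dense_ccd)  # 8.0 8.0 -> 24 cores in the silicon of 16
```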

 
After Intel disabled AVX512, the instruction sets are the same on P and E - but if you have any source that went into detail and discovered differences, I'm genuinely interested.
+1 on that
 
all this work for retesting a 13900ks basically.

The 13900KS actually seems to behave better under heavy load. Better quality silicon on average.

The 13700K currently represents about $60 USD in savings (having gone to discount), and I have yet to see a situation where I'd use the extra E-cores, based on current testing across multiple sites.

Fair enough, I suppose. If all you are interested in are the P-cores, you'll do just fine with the i7-13700K.

Plus, AMD wouldn't have any major problems creating a hybrid chip. With Zen 4c they can put 16 cores in the same space as 8 regular Zen 4 cores, so to create a CPU focused on MT they would just need a configuration of 8 Zen 4 cores running at high frequency plus 16 Zen 4c cores at lower frequency on the other CCD; the logic would be no different from the 7950X3D.


Such a processor configuration will never exist, not even in the server space. For all intents and purposes, Zen 4 is already done.

May I suggest you look at the performance compromises of Bergamo to understand precisely why such a hybrid would be a horrible idea? Even the 7950X3D's X3D+standard hybrid design is already a compromise; these CPUs already rely on a custom scheduler driver as it is. IMO, if anything, AMD should release a dual-CCD X3D instead of this hybrid garbage or, god forbid, chuck in "c" cores with half the cache capacity. This is a desktop processor, "density" be damned. 16 cores is more than adequate for anything in this space, and I'd much rather have had an i9 with 16 P-cores than this 8+16 configuration.
 
How can you completely disable these cores? Only those who don't have processors with E-cores have a problem with them.

Well that's just not true, whatever the reason may be. I am one of those people, and I'm not the only one.

Have you actually tested games with and without ecores? Everything I've tested runs considerably worse without them.

And they would run even better if you replaced the E-cores with P-cores.
You're seeing a benefit when 8 P-cores are not enough, and in that situation E-cores are better than HyperThreading (which allows for more threads, but it slows them down).
For ideal gaming conditions, you want to have as many identical cores as possible, without HT. So an ideal gaming CPU would have 16 monolithic P-cores with no HT. That way you can have up to 16 threads, all executed at the highest possible speed.


This has already been established. E-cores are not efficient in terms of power; they are efficient in terms of die size. An E-core cluster has more performance than a single P-core, but it also consumes more power.
So if you replaced 16 E-cores with 4 P-cores, you would have less performance but also lower power consumption, while the die size would stay roughly the same. So a 12 P-core CPU would be possible.
Having 16 P-cores supposedly isn't possible currently because of the ring bus. They'd need a dual ring bus design just like in previous HEDT CPUs, but that adds latency. They are supposed to have 16 P-cores in a future architecture after Arrow Lake.
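As a quick sanity check of that area math (a sketch that simply takes the one-cluster-per-P-core area figure above at face value):

```python
# Area bookkeeping for the swap described above, assuming a 4-core E-core
# cluster occupies roughly the same die area as one P-core.
p_cores, e_cores = 8, 16          # current i9 layout (8P + 16E)
e_clusters = e_cores // 4         # 4 clusters of 4 E-cores

area_in_p_units = p_cores + e_clusters   # 8 + 4 = 12 "P-core areas"
print(f"Same area spent purely on P-cores: {area_in_p_units} P-cores")  # 12
```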

Games are difficult to parallelize, and even if you can spread tasks into many threads, all those tasks can be completed faster if all your cores are faster. And there's always the main thread that controls all the other threads. That's exactly why newer CPUs with fewer cores perform better than older CPUs with more cores. Single-threaded performance has always been key, and it always will be.
Even if you cap your framerate, a CPU with higher single-threaded performance will reduce frametime spikes from stutters (streaming, shader compilation etc.).
 
I have zero interest in using them either. They're a bandaid to win in short benchmarks and not useful to consumers.
How do you explain that a non-K i5 with these cores crushes the Ryzen 5 in multi-threading?
I don't know which games you're talking about, I don't have them all. I have no problems even with the ones I play occasionally.
Yes, AMD's are the prettiest, but... where are they?
Let's honestly admit that you are chewing the same donuts as in the case of ray tracing and DLSS. Nothing is good if AMD doesn't have it, but it suddenly becomes delicious when AMD succeeds in creating a clone.

Well that's just not true, whatever the reason may be. I am one of those people, and I'm not the only one.
Yes, you are the exception that proves the rule.
 
Such a processor configuration will never exist, not even in the server space. For all intents and purposes, Zen 4 is already done.

May I suggest you look at the performance compromises of Bergamo to understand precisely why such a hybrid would be a horrible idea? Even the 7950X3D's X3D+standard hybrid design is already a compromise; these CPUs already rely on a custom scheduler driver as it is. IMO, if anything, AMD should release a dual-CCD X3D instead of this hybrid garbage or, god forbid, chuck in "c" cores with half the cache capacity. This is a desktop processor, "density" be damned. 16 cores is more than adequate for anything in this space, and I'd much rather have had an i9 with 16 P-cores than this 8+16 configuration.
Yeah, coming from the guy with an i9 CPU that integrates a ton of E-cores, I confess this is a very unexpected post.

I'm aligned with the idea that 8 fast cores is more than enough for gaming; 16 cores is just for specific cases and bragging rights. I also don't think they should follow Intel's dirty strategy, but IF AMD wants, there is the possibility of easily creating a hybrid design using the magic of modularity, so they would win in all aspects, including numbers (core count) and synthetic benchmarks, and the MT-focused design could even coexist with the standard line and X3D. There are many options on the table for AMD to use, including SMT2 and quad-channel.
 
Gentlemen, I recommend you go out into the street and protest in front of AMD headquarters. "Freedom for P-cores", "More P-cores", "No E-cores", "Down with E-cores"... you'll find something to write on the placard.
Starting with the 12th gen, we have tons of reviews. None of them conclude that these cores you blame sabotage the processor, in single- or multi-threading. On the contrary, they all agree these cores are a solution for the future, and I recommend you change your glasses if you can't see the results in those reviews.
Don't like it? Don't buy it!

In other words, if some terrorists sabotaged your computer, cutting 25% of your cores, or your frequency, or your RAM speed without your knowledge, you wouldn't even notice until a monitoring program revealed the "crime". It's just the eternal clique where everyone shouts and no one listens.

I repeat: if you don't like it, don't buy it! It's that simple.
 
My take is mixed. AMD were first to the game with uneven cores but seem largely forgiven for it by the TPU community (who happen to be mostly Ryzen owners). Their CCX issues led AMD to add a mode that changes scheduling so games would use the correct cores on some CPUs.

Because the review industry is so influential and has put a lot of focus on things like Cinebench in reviews, Intel, not having the ability to build a CPU with just P-cores that outperforms AMD, decided to go in a new direction with the E-cores. I was definitely sceptical when I first read about them; I won't pretend I wasn't.

However, I am now starting to see how they can be beneficial. There is some value in having cores designed for one task and cores designed for another. I treat E-cores as a set of background service cores, and P-cores as interactive cores, i.e. for games. To avoid confusion: I treat things like compiling and encoding as a service-type workload.

As I am now using one of these CPUs, the 13700K, this has led to me learning a ton about the Windows CPU scheduler, especially the hidden power scheme settings which allow heavy manipulation of CPU behaviour as well as the scheduling of processes. Sadly the TPU community hasn't taken to the little bits I shared, so I stopped sharing, but there is some value to be had in the ability to schedule certain things to cores that are better suited to the task.

However, I do also have criticism. One could argue that if I had a 12 P-core CPU instead of 8+8, I could still do what I am doing by e.g. assigning all background stuff and browsing to the last 4 cores and reserving the first 8 cores for gaming. It would likely give just as good a result, considering it would still be the same thread count with HT for the background stuff; the main losses would be the automatic E-core scheduling which Windows can take care of (so more manual affinity) and lower performance in heavily threaded tasks like compiling and software encoding. But the reality is I no longer software encode due to the energy crisis, and I don't compile anything on this PC. I do agree with the points that these are Cinebench CPUs: because the review industry now focuses so much on these types of benches, Intel has started lumping in these E-cores, and I have little doubt this is the main reason they exist, for marketing. Isn't it odd that the lowest-end chips have no E-cores, and by coincidence they are the same chips not sent to reviewers?
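For what it's worth, the kind of manual affinity assignment described above can be scripted; here's a minimal sketch using Python's psutil (the core index ranges are hypothetical and depend on how Windows enumerates P-cores, E-cores and HT siblings on a given chip):

```python
# Minimal affinity sketch: keep a game on the P-cores and push background
# processes onto the E-cores. Requires: pip install psutil
import psutil

# On an 8P+8E part with HT, Windows typically exposes 24 logical CPUs.
# The exact index mapping below is an assumption - verify it on your system
# (e.g. with Task Manager or CoreInfo) before relying on it.
P_CORE_THREADS = list(range(0, 16))    # assumed: P-cores and their HT siblings
E_CORE_THREADS = list(range(16, 24))   # assumed: E-cores

def pin(process_name: str, cpus: list[int]) -> None:
    """Set CPU affinity for every running process matching the given name."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name.lower():
            try:
                proc.cpu_affinity(cpus)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # some processes need admin rights or may have exited

pin("game.exe", P_CORE_THREADS)       # interactive workload -> P-cores
pin("handbrake.exe", E_CORE_THREADS)  # background/service workload -> E-cores
```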

Also, P-cores park by default on these hybrid CPUs. I tweaked this to keep the 2 preferred cores always unparked at no visible power draw, but the difference was noticeable when I tested keeping them all unparked by default. On my highest-performance profile the P-cores are always unparked, in case of potential issues in some applications and games. I'm still investigating the impact of core parking on scheduling; there is likely a latency penalty of some sort in specific workloads, hence keeping them all unparked on my highest-performance profile.
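On the core parking side, the usual way to poke at it is the hidden "Processor performance core parking min cores" setting; a rough sketch of the idea (the powercfg aliases here are the commonly documented ones, so verify them with powercfg /aliases on your own build before applying anything):

```python
# Sketch: unhide and raise the core-parking floor on the active power plan.
# CPMINCORES is the documented alias for "Processor performance core parking
# min cores"; setting it to 100 keeps all cores unparked on AC power.
import subprocess

def run(cmd: str) -> None:
    print(">", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Make the hidden setting visible in the Power Options UI (optional).
run("powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE")

# 100 = never park any cores; keeping only the two preferred cores unparked,
# as described above, would correspond to a smaller percentage.
run("powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR CPMINCORES 100")
run("powercfg /setactive SCHEME_CURRENT")
```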
 
Yeah, coming from the guy with an i9 CPU that integrates a ton of E-cores, I confess this is a very unexpected post.

I'm aligned with the idea that 8 fast cores is more than enough for gaming; 16 cores is just for specific cases and bragging rights. I also don't think they should follow Intel's dirty strategy, but IF AMD wants, there is the possibility of easily creating a hybrid design using the magic of modularity, so they would win in all aspects, including numbers (core count) and synthetic benchmarks, and the MT-focused design could even coexist with the standard line and X3D. There are many options on the table for AMD to use, including SMT2 and quad-channel.

Honestly, the standard processors should be phased out in favor of a full 3D stacked lineup in the future, except at the low-budget end.

Gentlemen, I recommend you go out into the street and protest in front of AMD headquarters. "Freedom for P-cores", "More P-cores", "No E-cores", "Down with E-cores"... you'll find something to write on the placard.
Starting with the 12th gen, we have tons of reviews. None of them conclude that these cores you blame sabotage the processor, in single- or multi-threading. On the contrary, they all agree these cores are a solution for the future, and I recommend you change your glasses if you can't see the results in those reviews.
Don't like it? Don't buy it!

The notion of "E-cores being detrimental" has to do with Alder Lake processors having the ring bus clock tied to the top E-core frequency. In Raptor Lake this does not apply: the ring and E-core clock domains are completely decoupled, and with the increases in E-core speed they are now comparable to a Skylake or Kaby Lake core, but with an impossibly small footprint. Not too bad!
 
And they would run even better if you replaced the E-cores with P-cores.
But that's irrelevant. I was responding to whether or not you should disable ecores for games. The answer is a resounding no.

Intel can't replace E-cores with P-cores, because people for some reason have different expectations and double standards. When Intel was heavily dominating in gaming performance (i.e. the 7700K), reviewers and end users heavily criticized the fact that those chips were severely lacking in MT performance. But when AMD is severely lacking in MT performance, it's irrelevant and 5% extra gaming performance is the end-all be-all (5800X3D, 7800X3D). What is Intel supposed to do? E-cores are the only solution.
 
The 13900KS actually seems to behave better under heavy load.
Because it has PL1 = PL2 = 320 W, while the 14900K has PL1 = PL2 = 253 W but boosts the other cores to 5.7 GHz instead of 5.6 GHz.
 
Because it has PL1 = PL2 = 350 W, while the 14900K has PL1 = PL2 = 253 W but boosts the other cores to 5.7 GHz instead of 5.6 GHz.

I believe it's actually 320 W (as per Intel's spec sheet) in the officially supported "Extreme Config"; otherwise it applies PL1 = PL2 = 253 W. If you are running the processor fully stock, that is the configuration it will use unless you enable the extended wattage range in the motherboard settings. The regular i9-13900K and i9-14900K use the standard S-line PL1 = 125 W and PL2 = 253 W.

From what the community's been able to gather thus far, it seems like the 13900KS has a slight edge in clock stability over time, possibly due to binning, or maybe because the clocks are far too aggressive for the standard-power S-line configuration; maybe a bit of both.
 
AMD chips can't run that low due to the IO die; performance plummets under 50 W. So - it doesn't matter.

Other Intel chips don't make a difference. I've tested a 12900K and a 13900K, and they both get similar performance at 35 W. Both undervolted, my CB R23 scores were 15,200 for the 12900K and 15,900 for the 13900K.
Anandtech got 49.3% of Cinebench R23 MT performance with the 7950X @ 35 W vs 30.6% for the 13900K.
Here we see the 14900K get 30.7%.

@65W the values are (7950X vs 13900K vs 14900K)
81.1% vs 56.6% vs 56.2%
 
Anandtech's numbers are wrong. They're using TDP on the AMD CPUs, not actual power draw.
 
Anandtech's numbers are wrong. They're using TDP on the AMD CPUs, not actual power draw.

Yes, the 7950X drew about 15% more power than the 13900K at the "35W" setting though both CPUs used >35W. And more importantly:

The 7950X was 50% faster at this setting.

So around 35W, the 7950X is considerably more efficient than the 13900K.
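For the sake of the math, the rough calculation behind that statement (using the ~15% and ~50% figures above):

```python
# Rough perf-per-watt comparison at the "35 W" setting from the figures above.
power_ratio = 1.15   # 7950X drew ~15% more power than the 13900K
perf_ratio  = 1.50   # 7950X was ~50% faster

efficiency_advantage = perf_ratio / power_ratio
print(f"7950X perf/W advantage: ~{(efficiency_advantage - 1) * 100:.0f}%")  # ~30%
```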

Though I fully expect completely disproven claims like this:

AMD chips can't run that low due to the IO die; performance plummets under 50 W. So - it doesn't matter.

...to be parroted here again and again like those people who repeat things to themselves in an attempt to make them true.
 
Personally, I concede Zen 4 is way ahead in power efficiency - but to Intel's merit, they are still using their 10 nm-class lithography (Intel 7) to build these CPUs while Ryzen is built on a much more advanced TSMC N5 node.

The 14900K is not an inefficient or bad processor, and for the cost it's adequately performant. It might lack the binning shine of the 13900KS (which is subjective, and definitely NOT worth the extra $250), and it may be the most horrifyingly low-effort garbage that Intel has ever released, but Raptor Lake is still as powerful and performant as it was a year ago when the original 13900K released. It's just not a new product, and Intel should be deeply ashamed of coming up with this nonsense. But I guess they are hurting pretty badly in their latest financial results, and admitting that they have failed to develop and release a new series of products for 2023 (and they have NOT) would likely cause more than a few heads to roll at the company, Gelsinger's included.

My singular complaint, really. What actual moron thought that it'd be a good idea to call this a 14th generation without a SINGLE. PHYSICAL. CHANGE. to the processor? Well, obviously the moron trying to save the company's face. There's no new CPUID, no new stepping, no new feature, no refinement in lithography, no improvement in the fabrication process - you get absolutely NOTHING coming from a 13th Gen i5 or i9, people who upgrade to i7 get a boring 4 extra E-cores, boo-hoo... and now Intel seems to be looking to keep APO gated to 14th gen in a hopeless upsell attempt.

It really stings too - their "14nm+++++" phase was due to their process node failing, but their situation right now is that, despite not having been able to finish developing a new processor, they already have the next two nodes up and running in their foundry services and sampling to customers. It's just baffling. I'd be willing to forgive it if they have Arrow Lake on the market within 6 months and don't withhold the upgraded software from the functionally and physically identical 13th Gen CPUs, but I somehow doubt that's going to happen.
 
Yes, the 7950X drew about 15% more power than the 13900K at the "35W" setting though both CPUs used >35W. And more importantly:

The 7950X was 50% faster at this setting.

So around 35W, the 7950X is considerably more efficient than the 13900K.

In Computerbase's testing the 7950X's score does plummet somewhere between 65 W and 45 W: the 7950X at 65 W is about 55% faster than at 45 W, while the 13900K is about 27% faster at 65 W compared to 45 W. That puts it from 8% slower to 7% faster at those power limits. But the 13900K seems to also have this issue below 40 W.
When comparing the Anandtech CB R23 numbers to the Computerbase ones, the 13900K numbers are pretty close at 65 W and above, with the 35 W and 45 W results being quite different, but the 7950X is pretty much always one tier above.

@35W(Anandtech) and 45W(ComputerBase)
13900K - 12370 / 18283
7950X - 18947 / 16831

@65W
13900K - 22911 / 23474
7950X - 31179 / 21538

@88W (Computerbase only)
13900K - 27872
7950X - 30194

Der8auer's 13900K power limit tests show similar scaling to Computerbase's. In CB R20 the 13900K scored 6572 @ 40 W in Der8auer's testing and 7028 @ 45 W in Computerbase's testing, and this holds all the way up to stock. Der8auer's tests also show the 13900K score dropping a lot when comparing 20 W to 40 W, which seems to match the scaling seen in both Anandtech's and TPU's testing.
 

I don't have a Zen 4 CPU, but I did have the Zen 3 flagship (5950X) back when I had my AM4 machine. If it remains the same, the IO die will use around 15 to 18 W by itself, which means these CPUs cannot realistically operate below 35 W, and performance just nosedives below 65 W.

If the PPT limit is set too low (e.g. 10 W), it will violate the configured spec and throttle as hard as it can; it will basically never exceed the minimum frequency floor (I believe it was 400 MHz), and I suspect there was some clock modulation (stretching) going on. It was just hilariously slow and chokey. Performance gets very nasty very fast.
 
So lew zealand was completely wrong.... :roll:
 
So lew zealand was completely wrong.... :roll:

No, but LOL you are so predictable. Please at least try to understand some details for once:

Dr. Dro is talking about a 5950X, which uses a 12 nm IO die, and about using 10 W, while Anandtech tested a 7950X, which uses a much lower-power 6 nm IO die, and at ~35 W it handily beat the 13900K in efficiency.
 
But it doesn't, because the 7950X wasn't running at 35 W but at 46 W....
 
On Raptor Lake it matters a lot which limit is used: the current limit (A) or the power limit (W). In my experience the current limit (A) gives better power efficiency than the power limit (W).
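One plausible way to reason about that difference, with purely illustrative numbers (the actual effect depends on your silicon and undervolt):

```python
# Why an amperage (IccMax) cap can pair better with an undervolt than a
# wattage cap: package power is roughly P = V * I, so at a fixed current
# limit any voltage you shave comes straight off the power draw.
ICC_LIMIT = 250       # A, hypothetical current limit
V_STOCK = 1.25        # V, hypothetical stock load voltage
V_UNDERVOLT = 1.15    # V, hypothetical undervolted load voltage

p_stock = V_STOCK * ICC_LIMIT        # ~312 W at the current limit
p_uv = V_UNDERVOLT * ICC_LIMIT       # ~288 W at the same current (same clocks)

print(f"{p_stock:.0f} W -> {p_uv:.0f} W at the same {ICC_LIMIT} A limit")
# Under a fixed wattage cap, by contrast, the CPU spends the freed-up budget
# on more current/clocks, so the wall power stays pinned at the cap.
```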
 