
Intel Core i9-14900K Raptor Lake Tested at Power Limits Down to 35 W

The 7950X3D is an improved version of the 7950X, not the other way around. AMD added 3D cache and lowered power consumption while keeping similar performance; that's progress, and it's a good thing.
The 7950X gets about the same performance as the 3D variant when set to the same power limit. The efficiency gains in MT at 142W are mostly due to the power limit being closer to that chip's sweet spot.
And the 7900X at 142W is on average only ~2% slower than stock in heavy MT workloads, an even smaller difference than the 7950X's ~5% average.
 
I don't get why such tests stop at 125W. I have seen multiple outlets doing this. We would like to see at least one data point between 125W and 253W, preferably two.
One important use case for tests like these is people who cannot run at 253W or higher due to thermal limitations (cooling cost, warm climate, small room), so they can understand how much they need to lower the power limit and what the associated performance loss is.
 
Why not a 14700K? Seems like this 8P+12E SKU is actually the one to get, with the pricing being what it is and all. The 14900K... well, it's cheaper than a 13900KS and is basically one, so I guess you can't knock it too much, other than for being about as low-effort as Intel could have made it.

The 13700K is currently about a $60 USD saving (having been discounted), and I have yet to see a situation where I'd use the extra E-cores, based on current testing across multiple sites.
 
I don't get why such tests stop at 125W. I have seen multiple outlets doing this. We would like to see at least one data point between 125W and 253W, preferably two.
One important use case for tests like these is people who cannot run at 253W or higher due to thermal limitations (cooling cost, warm climate, small room), so they can understand how much they need to lower the power limit and what the associated performance loss is.
Pick a number, I’ll do a round of testing just for you
 


So it's a $500+ Celeron. Got it.
 
Ain't the bottle a wonderful thing. :toast:



Honestly, at that strict a power limit, you can probably optimize per use case. I'd shave off seven of the P-cores and three clusters of E-cores, and try to maintain frequency on what's left.

That would probably still have plenty of kick for a lot of applications.

At that point, fewer cores with higher clock speeds come into play, and an i5-13600 non-K will win easily with good energy efficiency (especially as a non-K has lower PL1/PL2). And you won't have to perform all sorts of oddball tweaks.


Pick a number, I’ll do a round of testing just for you

I would be interested to see what you would get with that setting (seeing as non-K CPUs like the i7-13700 have a PL1 of 65W and a PL2 of 219W, which I believe is also the rumored/leaked spec of the 14700), or dropping to PL1 65W / PL2 200W with an undervolt that most CPUs would be expected to handle without issue (0.1V? 0.075V?). But my guess is there's probably little reason to, other than doing a non-K-to-K power/performance comparison.
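For anyone wanting to try settings like those programmatically instead of through the BIOS, here is a minimal sketch of how PL1/PL2 can be adjusted on Linux through the intel_rapl powercap sysfs interface. This is just an illustration under assumptions: the package domain path and constraint ordering (constraint_0 = long term/PL1, constraint_1 = short term/PL2) can differ between platforms, writes need root, the board firmware may clamp or override the values, and the 65W/219W figures are simply the example numbers from above; the undervolt part would still have to be done in the BIOS or XTU.

# Minimal sketch: set PL1/PL2 via the Linux intel_rapl powercap interface.
# Assumed paths; verify constraint names on your own system before writing.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package power domain (assumed)

def set_power_limits(pl1_watts: float, pl2_watts: float) -> None:
    # sysfs expects microwatts; requires root privileges to write
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(pl1_watts * 1_000_000)))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(int(pl2_watts * 1_000_000)))

if __name__ == "__main__":
    set_power_limits(65, 219)  # example figures from the post above
    for c in ("constraint_0", "constraint_1"):
        name = (RAPL / f"{c}_name").read_text().strip()
        watts = int((RAPL / f"{c}_power_limit_uw").read_text()) / 1_000_000
        print(f"{name}: {watts:.0f} W")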
 
I don't get why such tests stop at 125W. I have seen multiple outlets doing this. We would like to see at least one data point between 125W and 253W, preferably two.
One important use case for tests like these is people who cannot run at 253W or higher due to thermal limitations (cooling cost, warm climate, small room), so they can understand how much they need to lower the power limit and what the associated performance loss is.
I highly doubt anyone worried about thermal limitations such as cooling cost would even be thinking about buying a 14900K. It's like buying a Raptor and being concerned about how many miles per gallon you get on your daily commute :laugh: but really, on a serious note, these two don't go hand in hand for the intended buyer LOL. Especially when the relative gaming performance difference from 125W to 253W is already a tiny 1.4% at 1440p, with a 35.2°C drop in temps, which is what most tinkerers would do anyway. Why split hairs? But, you do you boo.
The 13700K is currently about a $60 USD saving (having been discounted), and I have yet to see a situation where I'd use the extra E-cores, based on current testing across multiple sites.
Yet another weird point. Save $25 ($393.99 vs $419.00 as current pricing for the 13700K and 14700K) instead of getting the latest option? That shouldn't even be a consideration on a new build :kookoo:. Heck, even the $60 you mention wouldn't be on my radar with such a minuscule difference in the overall cost. I don't see why anyone in their right mind would choose a 13th-gen X700K(F) on a fresh build now, regardless of whether the extra cores are being taken advantage of at this point in time. Even better, you could get the 14700KF and it would cost the same at $394.00. Who needs integrated graphics when you know you're going to buy a GPU anyway? A failsafe, I suppose, in case you get a DOA GPU. Of course, if you already have a 13th-gen i7, I wouldn't feel the need to jump on the 14th-gen bandwagon either. That makes sense.

Just my thoughts LOL fire away.
 
Looks like 65W-95W would be the sweet range... depending on what you're doing.

 
Yet another weird point. Save $25 ($393.99 vs $419.00 as current pricing for the 13700K and 14700K) instead of getting the latest option? That shouldn't even be a consideration on a new build :kookoo:. Heck, even the $60 you mention wouldn't be on my radar with such a minuscule difference in the overall cost. I don't see why anyone in their right mind would choose a 13th-gen X700K(F) on a fresh build now, regardless of whether the extra cores are being taken advantage of at this point in time. Even better, you could get the 14700KF and it would cost the same at $394.00. Who needs integrated graphics when you know you're going to buy a GPU anyway? A failsafe, I suppose, in case you get a DOA GPU. Of course, if you already have a 13th-gen i7, I wouldn't feel the need to jump on the 14th-gen bandwagon either. That makes sense.

Just my thoughts LOL fire away.

One, I usually don’t get F variants because I find QuickSync a useful feature, and you don’t get that on an F. Two, not $393.99, but $363.99 vs $419.99.

I have nothing against your thoughts, but my reasoning (I had an i5) works for me:

-The additional two P-cores (and their hyperthreading) are useful to me. I'd have bought an i7 at the time, but Raptor Lake Refresh was hinting at much more of a performance improvement than there turned out to be, so I bought the i5 to save a bit (I can now sell it, though) ahead of likely getting RL Refresh. I now believe that was a mistake, but hindsight is always 20/20.
-The 14700K, upon release, was shown to have little to no performance gain over a 13700K in what I do day-to-day. At the same time, it was shown to consume even more power and run even warmer. Why should I work on cooling four additional E-cores that provide me no tangible benefit?
-The money I save can be used for better cooling, or for upgrading from my EVGA RTX 3080 Ti down the road, either of which makes more sense to me (I'm skipping Ada Lovelace and waiting for the generation after).
-I wasn't going with a fresh build. I have a Gigabyte Z690 Aorus Master I got open-box for half price at Microcenter early this year, and it's the perfect board for me: a near-flagship board at a more normal price. I got open-box G.Skill Trident Z DDR5 RAM as well; between those two components I saved almost $300 off new prices. I'm picky about mainboards, and I see myself keeping this platform for a while.

One of the rumors that turned out to be false was that the i5-14600K would have eight P-cores and eight E-cores. Another was that there would be some new voltage regulation tech on the "14th" gen. That was what I was looking ahead to, along with possible process improvements, since the i5 has had much lower wattage than the i7 or i9. Now that the reality of the "14th" gen is here, all of the post-release research I've done leads me to believe my best choice is an i7-13700K tuned for efficiency. If I could have predicted all of this, I admit I'd likely have skipped the i5 purchase and just gone i7 in the first place.

You do you. My reasoning works for me; it doesn’t have to for anyone else (though Steve at GN, Jay, and others have largely said if the 13th saves you more than a few bucks, go with what’s cheaper as well). I take no offense, I just know what my needs are, and they aren’t yours.
 
I would be interested to see what you would get with that setting (seeing as non-K CPUs like the i7-13700 have a PL1 of 65W and a PL2 of 219W, which I believe is also the rumored/leaked spec of the 14700), or dropping to PL1 65W / PL2 200W with an undervolt that most CPUs would be expected to handle without issue (0.1V? 0.075V?). But my guess is there's probably little reason to, other than doing a non-K-to-K power/performance comparison.
You can't reliably simulate chips like that; we don't know their voltages, clocks, and binning.

If just 1 then 200W, if 2 then 165W and 210W. Thanks! :lovetpu:
Will see what I can do next week
 
Very cheap for the performance and efficiency it delivers; the diluted average means you won't see the games where the X3D beats the i9 by 30-40%, like TW. Plus, people who really need powerful CPUs because "time is money" should go for a TR workstation, not desktop CPUs.

What are you talking about? MSRP? Because I see the price of the X3D dropping to almost the price of the i5 13600k often.
Cheap? Seriously? In all the reviews, the X3D processors have the most disastrous performance-per-dollar ratio, even with an RTX 4090.
I tested all my installed games with their built-in benchmarks and I didn't see any difference between the i5-13500 and the 14700KF. None! The video card is a 3070 Ti, mainstream in 2021, mid-range now.
So if I replaced the 13500 with a 7800X3D for this video card, I'd get a big hit to my wallet and zero increase in performance in games and applications.
The tests reviewers avoid are X3D versus non-X3D comparisons using video cards from the entry-level to enthusiast range. We only have comparisons for enthusiasts, although over 90% of players use weaker video cards. I'm really curious to see what performance boost the 7800X3D brings over the 7700X with an RTX 4070 Ti or weaker video card, or an equivalent AMD card. We know for sure that the 7700X is cheaper (much cheaper) and beats the 7800X3D in applications.
 
I see an Arctic 420 mentioned in the setup; am I missing something, or are there no results with a water cooler involved?
 
I see an Arctic 420 mentioned in the setup; am I missing something, or are there no results with a water cooler involved?
Correct, I used the Noctua only. The Test Setup table has been updated.
 
"It's not only important how much power is consumed, but also how quickly a task is completed—taking both into account results in "efficiency."

Taking both into account, you get "energy".

Both are true in context. As you've noted,

Power x Time = Energy

In a fixed workload like a CB run, leveraging that relationship can reveal relative efficiency. To keep things simple, let's assume two configurations that use the same amount of power. Using the equation above, the setup that took less time would have consumed less energy, indicating greater efficiency.

The wording is maybe not the clearest, but I don't think it's wrong.
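To put some throwaway numbers on that relationship, here is a quick sketch of Power x Time = Energy for a fixed workload; the wattages and runtimes below are made up for illustration, not taken from the review's data.

# Energy = average power x time, for a fixed workload (hypothetical numbers).
def energy_wh(avg_power_w: float, runtime_s: float) -> float:
    return avg_power_w * runtime_s / 3600.0  # watt-seconds -> watt-hours

# Two made-up configurations finishing the same Cinebench-style run:
configs = {
    "253 W limit": {"power_w": 253.0, "runtime_s": 100.0},  # fastest finish
    "125 W limit": {"power_w": 125.0, "runtime_s": 112.0},  # slightly slower, far less power
}

for name, c in configs.items():
    print(f"{name}: {c['runtime_s']:.0f} s, {energy_wh(c['power_w'], c['runtime_s']):.2f} Wh")
# The run that uses less energy to complete the same workload is the more
# efficient one, even if it takes a bit longer.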
 
The only thing that bothers me is Intel's lack of software ingenuity in automating the fine-tuning process for users, however long it might take the software to do so.
A tool that selects a few operation modes and tunes them per-chip should have been part of XTU the moment 12th gen landed.

It is worth mentioning that people who want to absolutely min-max 13th and 14th gen capabilities can tune each core's peak voltage and frequency individually, really squeezing every last drop the silicon has to offer in terms of binning. Roughly setting PL1/PL2 peak wattage with a bit of undervolting bias is a simpler solution that should also be offered as a basic software function: not something you delve into, but a generic feature that lets you pick a package power and then runs stability tests in order to set a safe voltage.
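As a rough illustration of what such a generic function could look like, here is a minimal sketch of the pick-a-power-then-find-a-safe-undervolt flow. The apply_voltage_offset() and run_stability_test() helpers are entirely hypothetical placeholders; a real tool would have to drive the BIOS/XTU (or MSRs) and run an actual stress test.

# Sketch of an auto-undervolt search: walk the offset down until a stress test
# fails, then back off by a safety margin. Helpers below are placeholders.
def apply_voltage_offset(offset_mv: int) -> None:
    print(f"applying {offset_mv:+d} mV core offset")  # placeholder

def run_stability_test(minutes: int = 10) -> bool:
    return True  # placeholder: return False if the stress run errors or crashes

def find_safe_undervolt(step_mv: int = 10, floor_mv: int = -150, margin_mv: int = 10) -> int:
    offset = 0
    while offset > floor_mv:
        candidate = offset - step_mv
        apply_voltage_offset(candidate)
        if not run_stability_test():
            break                  # candidate was unstable; keep the last good offset
        offset = candidate         # candidate passed, keep walking down
    safe = offset + margin_mv if offset < 0 else 0
    apply_voltage_offset(safe)
    return safe

# The user would first pick a package power (say PL1 = PL2 = 125 W), then let
# the tool search for a safe undervolt at that power level.
print("safe offset:", find_safe_undervolt(), "mV")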
 
What's interesting with 13th gen, and perhaps if you get time you could try it with 14th gen, is that undervolting the cache seems as effective as vcore adjustments.

But yeah, this testing shows that the chips are basically factory overclocked: circa 10% performance gained by more than doubling the power consumption. Way too far up the curve.

This is why some of us don't buy the "any temp below 100C is wasted performance" nonsense; Intel has basically cranked it up no matter the cost.

So without undervolting, 125W is the sweet spot and probably should have been the official spec; the temps don't need an insane cooler either. And if you're prepared to undervolt, 95W.

Feels like they're the car industry of the late 1970s and early 80s, when it had to sort out its inefficient engines.

@dj-electric yeah, for the auto o/c tools, I would like to see auto-undervolt features added.
 
Cheap? Seriously? In all the reviews, the X3D processors have the most disastrous performance-per-dollar ratio, even with an RTX 4090.
I tested all my installed games with their built-in benchmarks and I didn't see any difference between the i5-13500 and the 14700KF. None! The video card is a 3070 Ti, mainstream in 2021, mid-range now.
So if I replaced the 13500 with a 7800X3D for this video card, I'd get a big hit to my wallet and zero increase in performance in games and applications.
The tests reviewers avoid are X3D versus non-X3D comparisons using video cards from the entry-level to enthusiast range. We only have comparisons for enthusiasts, although over 90% of players use weaker video cards. I'm really curious to see what performance boost the 7800X3D brings over the 7700X with an RTX 4070 Ti or weaker video card, or an equivalent AMD card. We know for sure that the 7700X is cheaper (much cheaper) and beats the 7800X3D in applications.
The price difference between the 7700X and 7800X3D is only US$40-50, which isn't a significant dent in your budget, and the 7800X3D is impressive among chips in the high-end gaming CPU segment.

But I agree that an i5 or even a Ryzen 7600/5600x/5700x is more than enough for most people.
 
I don't get why such tests stop at 125W. I have seen multiple outlets doing this. We would like to see at least one data point between 125W and 253W, preferably two.
One important use case for tests like these is people who cannot run at 253W or higher due to thermal limitations (cooling cost, warm climate, small room), so they can understand how much they need to lower the power limit and what the associated performance loss is.

Offices don't need maxed-out CPUs. That's why that whole option to cap power consumption is in the BIOS. In office PCs there's usually nothing high-end in terms of cooling, just stock CPU coolers.

It's amazing to see that between 125W and 253W there's barely any difference. Intel needs to stop with the 16 E-cores and get back to performance cores.
 
Offices don't need maxed-out CPUs. That's why that whole option to cap power consumption is in the BIOS. In office PCs there's usually nothing high-end in terms of cooling, just stock CPU coolers.

It's amazing to see that between 125W and 253W there's barely any difference. Intel needs to stop with the 16 E-cores and get back to performance cores.

Neither do they need kneecapped i9s; that's a waste of money and resources. They need 35-65W 6P+4E i5s, which are still overkill for the vast majority of office work.
 
The only thing that bothers me is Intel's lack of software ingenuity in automating the fine-tuning process for users, however long it might take the software to do so.
A tool that selects a few operation modes and tunes them per-chip should have been part of XTU the moment 12th gen landed.

It is worth mentioning that people who want to absolutely min-max 13th and 14th gen capabilities can tune each core's peak voltage and frequency individually, really squeezing every last drop the silicon has to offer in terms of binning. Roughly setting PL1/PL2 peak wattage with a bit of undervolting bias is a simpler solution that should also be offered as a basic software function: not something you delve into, but a generic feature that lets you pick a package power and then runs stability tests in order to set a safe voltage.
Motherboards and the silicon lottery make it impossible to establish an effective preset. You can manually set PL1/PL2 and the rest is handled by software, with automatic adjustment of frequencies and voltages. There is a whole palette of tools in the BIOS or XTU for fine-tuning. You can manually adjust the voltage for each P or E core and for the uncore (SA, MC, cache).

The price difference between the 7700X and 7800X3D is only US$40-50, which isn't a significant dent in your budget, and the 7800X3D is impressive among chips in the high-end gaming CPU segment.

But I agree that an i5 or even a Ryzen 7600/5600x/5700x is more than enough for most people.
You pay an extra $50... for what?
That $50 is almost enough to step up from a 7700 XT to a 7800 XT on the purchase list, and those are video cards the 7800X3D cannot help. And the 7700X, according to reviews, surpasses the 7800X3D in applications.
It's just one of the many examples where choosing an X3D over a non-X3D only hurts your wallet.

Intel needs to stop with the 16 E-cores and get back to performance cores.
AMD disagrees with you. The reason I don't use E-cores is that they are still working on making them viable; ray-tracing history repeats itself.
Besides, you can completely disable these cores; only those who don't own processors with E-cores seem to have a problem with them.

....................................

100 minutes with the 14700KF (stock settings), only with the web (news and forums) and YouTube in the background.
Average CPU: 6W
If we activate all the LED crap (I can't stand it), we triple the consumption.
I think I'll survive the next bill.
 
This goes to show that Intel really needs to release a P-core-only series for gamers; the concept of one "best of everything" processor is failing them badly.

Best single core performance
Best multi core performance
Low wattage


Pick one at a time, because it can't do them all at once.

Yet without the E-cores raising the wattage, they wouldn't need such high-end motherboards, cooling, PSUs, etc.

AMD disagrees with you. The reason I don't use E-cores is that they are still working on making them viable; ray-tracing history repeats itself.
Besides, you can completely disable these cores; only those who don't own processors with E-cores seem to have a problem with them.
Incorrect. AMD's C cores are an entirely different approach from Intel's E-cores, since AMD's C cores support the exact same instructions as their regular cores.
AMD's mini Zen 4c cores explained: They're nothing like Intel's Efficient cores | PC Gamer

Intel's E-cores are not the same, and that's where they cause issues - those missing instructions break programs when they get bounced from a P-core to an E-core, because suddenly they can't operate and they crash. This happens with a few games' anti-cheat mechanisms, as one common example.
I've got three CPUs here with E-cores, and while I don't disable them (because I need them for testing purposes), I have zero interest in using them either. They're a band-aid to win in short benchmarks and not useful to consumers.
 
Intel's E-cores are not the same, and that's where they cause issues - those missing instructions break programs when they get bounced from a P-core to an E-core, because suddenly they can't operate and they crash. This happens with a few games' anti-cheat mechanisms, as one common example.
I've got three CPUs here with E-cores, and while I don't disable them (because I need them for testing purposes), I have zero interest in using them either. They're a band-aid to win in short benchmarks and not useful to consumers.
Have you actually tested games with and without E-cores? Everything I've tested runs considerably worse without them.
 
Motherboards and the silicon lottery make it impossible to establish an effective preset. You can manually set PL1/PL2 and the rest is handled by software, with automatic adjustment of frequencies and voltages. There is a whole palette of tools in the BIOS or XTU for fine-tuning. You can manually adjust the voltage for each P or E core and for the uncore (SA, MC, cache).
My whole take is that you shouldn't establish voltage presets, only power ones, and let the software play with voltages during load tests.
 