Monday, September 18th 2023

Intel Core i5-14600K Benchmarked

An Intel Core i5-14600K processor has been benchmarked roughly a month before its expected retail rollout—previous leaks place this particular model as the most wallet-friendly offering in the upcoming range of 14th Gen Core "Raptor Lake Refresh" CPUs. A total of six SKUs, in K and KF variants, are anticipated to launch on October 17. An official unveiling of new processor product lineups is scheduled for tomorrow at Team Blue's Innovation event. China's ECSM has managed to acquire an engineering sample (ES) of the aforementioned i5-14600K model, and put it through the proverbial wringer in Cinebench R23, Cinebench 2024, and CPU-Z. The brief report did not disclose any details regarding exact test bench conditions, so some of the results could be less than reliable or accurate.

ECSM's screenshot from CPU-Z re-confirms the Core i5-14600K's already leaked specs—six high-performance Raptor Cove cores running at a 3.50 GHz base clock and boosting up to 5.30 GHz (a 200 MHz gain over its predecessor, the Core i5-13600K), plus eight efficiency-oriented Gracemont cores running at up to 4.0 GHz—100 MHz more than on the predecessor. The Core i5-14600K and i5-13600K share the same 24 MB L3 cache and 125 W PBP ratings—the leaked engineering sample was shown to have a core voltage of 1.2 V, while the previous-gen CPU operates at 1.14 V. ECSM noted that CPU package power consumption reached 160 W, and: "currently, the burn-in voltage is still quite out of control, especially for the two 8P models, both of which are at 1.4 V+. However, there is still a lot of room for manual voltage reduction."
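For quick reference, here is a minimal side-by-side of the figures quoted above, as reported in the leak (an illustrative sketch only, not an official specification table):

```python
# Figures as quoted in the leak above; not independently verified.
specs = {
    #                   i5-13600K  i5-14600K (ES)
    "P-core boost GHz": (5.1,      5.3),
    "E-core boost GHz": (3.9,      4.0),
    "L3 cache MB":      (24,       24),
    "PBP W":            (125,      125),
    "Vcore V":          (1.14,     1.2),
}

for item, (old, new) in specs.items():
    print(f"{item:18s}: {old:>6} -> {new:<6} (delta {new - old:+.2f})")
```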

Tom's Hardware and VideoCardz have produced some comparison charts based on ECSM's data and external material from Guru3D and CGDirector.
Sources: ECSM, VideoCardz, Tom's Hardware

60 Comments on Intel Core i5-14600K Benchmarked

#26
phanbuey
fevgatosSeriously, wtf?

You don't think there are many perf/W improvements? A 12900k at 35 watts is much faster than an 8700k at stock running at what, 140 watts if I remember correctly? How much improvement do you expect? The 14600k should be faster than your 8700k while consuming 1/4 of the wattage. That is INSANE, actually. If we had that progress on any other devices in that short amount of time, air conditioners would be consuming 50 watts now blasting at full load.
This. My OC'd 8700K at 5.1 GHz hit 190 W in stress tests.
Posted on Reply
#27
fevgatos
phanbueyThis. My OC'd 8700K at 5.1 GHz hit 190 W in stress tests.
Yeah, but at stock I remember mine hovering around 130-140 watts. I'm sure a 14600k at the same 130-140 watts will just smack it senseless.
phanbueyThis. My OC'd 8700K at 5.1 GHz hit 190 W in stress tests.
Ok, so I ran the numbers: a stock 8700k will score around 9-9.5k in CBR23, depending on whether or not you consider MCE stock behavior. For context, MCE on Coffee Lake didn't just disable power limits like it does nowadays, it also ran all cores at the single-core turbo boost, so basically 4.7 GHz. Anyway, a 13600k scores 23k in CBR23 @ 125 W, so it is 2.6 times faster while consuming less power. A 12900k scores 24,500 and a 13900k scores 32k. So yeah, efficiency has improved drastically.

Now if we want to make that comparison at ISO performance, a 13600k should be matching the 8700k's performance at around 30-35 watts. So basically 1/4th to 1/5th of the power consumption. If that doesn't impress you, I don't know what to tell you.
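For reference, a minimal sketch of that arithmetic, using the figures quoted in this thread (the CB R23 scores and the ~130-140 W stock draw are the posters' numbers, not new measurements):

```python
# Rough sanity check of the ratios above, using the figures quoted in this thread.
score_8700k,  power_8700k  = 9000, 135    # ~9-9.5k points at ~130-140 W stock
score_13600k, power_13600k = 23000, 125

perf_ratio = score_13600k / score_8700k                                   # ~2.6x
eff_ratio  = (score_13600k / power_13600k) / (score_8700k / power_8700k)  # ~2.8x

# ISO-performance estimate: power the 13600k would need to match the 8700k,
# if points-per-watt stayed constant (a big simplification).
iso_power = score_8700k / (score_13600k / power_13600k)                   # ~49 W

print(f"perf ratio:     {perf_ratio:.1f}x")
print(f"points/W ratio: {eff_ratio:.1f}x")
print(f"ISO-perf power: ~{iso_power:.0f} W")
# Linear scaling gives ~49 W; the 30-35 W figure in the post additionally assumes
# the chip gets more efficient at lower clocks (further down the V/F curve),
# which is plausible but not captured here.
```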
Posted on Reply
#28
phanbuey


Bone stock 13700KF with a small -25 mV undervolt, air cooled by a Peerless Assassin and a KryoSheet: 201 W, 30K.

If they can add efficiency with the new voltage regulation improvements, 160W actually might be high for that 14600K.
Posted on Reply
#29
lexluthermiester
fevgatosThe 14600k should be around 4 times faster than your 8700k.
Not quite. Double seems about the level of performance increase.
Posted on Reply
#30
Vayra86
Don't worry guys, I get the point ;) At the same time, we know all Intel CPUs these days are clocking high and then operating way beyond their optimal V/F curve. The efficiency is there, I won't deny it, but there is also a lot of waste. And I'm also looking primarily at gaming; for other workloads, especially synthetics, you will see the full depth of the advantage between CPUs, but in most general use cases you won't.

And then we get to gaming, where the CPU is brutally inefficient overall, as proven by the stellar wattage numbers produced by a 7800X3D in gaming, easily half to a third of what an equally performant Intel CPU draws. I underline this point because a lot of what we perceive to be advantages and improvements doesn't always pay off in practice, but at the same time, we do run CPUs now that are capable of burning twice the wattage we used to have.

There's a space there we aren't seeing in reviews/testing, for sure. And I'm not saying this just because I have an 8700K. I'm saying this because that 8700K still uses around 70-90 watts in most games today, even with a 7900XT, even in titles where I am severely CPU-limited, and even then I'm still scoring FPS remarkably close to the latest and greatest. The desire to upgrade my CPU isn't that big, even though I know I can get more than 20% higher frames in gaming here and there. CPUs got faster, CPUs got more efficient, but that efficiency is definitely not transparent enough to say it applies everywhere.
Posted on Reply
#31
fevgatos
lexluthermiesterNot quite. Double seems about the level of performance increase.
In multithreaded workloads it should be way higher than just double. The 8700k scores around 9k in CBR23, the 13600k already hits 25k.
phanbuey

Bone stock 13700KF with a small -25 mV undervolt, air cooled by a Peerless Assassin and a KryoSheet: 201 W, 30K.

If they can add efficiency with the new voltage regulation improvements, 160W actually might be high for that 14600K.
Air coolers ftw
Vayra86And then we get to gaming, where the CPU is brutally inefficient overall, as proven by the stellar wattage numbers produced by a 7800X3D in gaming, easily a third of what an equally performant Intel CPU draws. I underline this point because a lot of what we perceive to be advantages and improvements doesn't always pay off in practice, but at the same time, we do run CPUs now that are capable of burning twice the wattage we used to have.

There's a space there we aren't seeing in reviews/testing, for sure. And I'm not saying this just because I have an 8700K. I'm saying this because that 8700K still uses around 70-90 watts in most games today, even with a 7900XT, even in titles where I am severely CPU-limited, and even then I'm still scoring FPS remarkably close to the latest and greatest. The desire to upgrade my CPU isn't that big, even though I know I can get more than 20% higher frames in gaming here and there. CPUs got faster, CPUs got more efficient, but that efficiency is definitely not transparent enough to say it applies everywhere.
What if I told you a 12900k consumes between 50 and 70 W at 4K with a 4090?

Yes, the 13900k is a power hog if left unchecked running games at 720p with a 4090, but that's not a very realistic scenario. At 4K it usually hovers below 100 W.

E.g., the above numbers are based on the heaviest of games like TLOU and Cyberpunk. In your average game the numbers are much lower.
Posted on Reply
#32
Vayra86
fevgatosWhat if I told you a 12900k consumes between 50 and 70 W at 4K with a 4090?

Yes, the 13900k is a power hog if left unchecked running games at 720p with a 4090, but that's not a very realistic scenario. At 4K it usually hovers below 100 W.

E.g., the above numbers are based on the heaviest of games like TLOU and Cyberpunk. In your average game the numbers are much lower.
It's irrelevant; these are not high CPU-stress scenarios at all. That 720p result is where it's at. Not realistic? No; games don't push the CPU, simple as that, but the CPU does need a substantial amount of wattage to keep that chip going. There is waste, plain and simple, and a higher-wattage CPU is more likely to produce more waste as it boosts higher, but there isn't 'more work done' from the perspective of the gamer. It still runs the game, it just produces more frames. That's a key difference we keep forgetting. There is the balance against the GPU and the requirement of the game towards both CPU and GPU, but there is also the fact that we just always like to have more, even if we don't really need it. A faster CPU enables more, and then also uses more. Gaming is not a workload that runs from beginning to end where you simply finish it faster, like a synthetic bench.

The fun fact is, I would also see around 70 W at 4K with a 4090 on my 8700K.
I'm seeing 90 W today in Starfield, a game that truly loves to load the CPU (let's not speak of performance relative to that...), and it results in FPS pretty close to what's being benched for the 7900XT on recent CPUs. And there are many more examples where I am left wondering whether there is some magic I don't know of, or whether CPU performance in gaming has just pretty much hit a wall, not unlike how different CPUs felt in the past. It's either enough, too little, or overkill that barely pays off.
Posted on Reply
#33
fevgatos
Vayra86It's irrelevant; these are not high CPU-stress scenarios at all. That 720p result is where it's at. Not realistic? No; games don't push the CPU, simple as that, but the CPU does need a substantial amount of wattage to keep that chip going. There is waste, plain and simple, and a higher-wattage CPU is more likely to produce more waste as it boosts higher, but there isn't 'more work done' from the perspective of the gamer. It still runs the game, it just produces more frames. That's a key difference we keep forgetting. There is the balance against the GPU and the requirement of the game towards both CPU and GPU, but there is also the fact that we just always like to have more, even if we don't really need it. A faster CPU enables more, and then also uses more.

The fun fact is, I would also see 70 W at 4K with a 4090 on my 8700K.
Ok, here are some real numbers from fully CPU-bound scenarios at 720p resolution.

Warzone 2 = between 50 and 70 watts
Kingdom come = between 50 and 70 watts
Remnant 2 = between 33 and 52 watts
Hogwarts = between 40 and 60 watts

And the list goes on and on. The maximum power draw I've ever seen in a game was in 720p TLOU, where the 12900k hit 115 watts, but in most games it's half that or even less.

Now with the 13900k, yes, I've gone up to a whopping 170 watts in Cyberpunk, but again, those are very academic numbers. After all, when you are running 720p with a 4090, you are basically benching. I'm fairly confident you can power-limit it to 90 W and lose like 3% performance or something.
Vayra86I'm seeing 90 W today in Starfield, a game that truly loves to load the CPU (let's not speak of performance relative to that...), and it results in FPS pretty close to what's being benched for the 7900XT on recent CPUs. And there are many more examples where I am left wondering whether there is some magic I don't know of, or whether CPU performance in gaming has just pretty much hit a wall, not unlike how different CPUs felt in the past. It's either enough, too little, or overkill that barely pays off.
In Starfield, again a fully CPU-bound scenario, I'm between 90 and 100 W in that big city. In other areas I'm around 50-60 W. Nothing gets close to TLOU in terms of power draw.
Posted on Reply
#34
Vayra86
fevgatosOk, here are some real numbers from fully CPU-bound scenarios at 720p resolution.

Warzone 2 = between 50 and 70 watts
Kingdom come = between 50 and 70 watts
Remnant 2 = between 33 and 52 watts
Hogwarts = between 40 and 60 watts

And the list goes on and on. The maximum power draw I've ever seen in a game was in 720p TLOU, where the 12900k hit 115 watts, but in most games it's half that or even less.

Now with the 13900k, yes, I've gone up to a whopping 170 watts in Cyberpunk, but again, those are very academic numbers. After all, when you are running 720p with a 4090, you are basically benching. I'm fairly confident you can power-limit it to 90 W and lose like 3% performance or something.
You're just underlining the fact that higher-wattage CPUs at stock are going to use more watts for similar perf, you know. You are talking about tweaked CPUs here, and the moment you let a 13900K run stock on that same workload, the wattage explodes.

Can these CPUs use less? They certainly can! But it's not how they're delivered, and not every CPU can even be tweaked.
fevgatosIn Starfield, again a fully CPU-bound scenario, I'm between 90 and 100 W in that big city. In other areas I'm around 50-60 W. Nothing gets close to TLOU in terms of power draw.
Exactly the point... So have we really progressed a lot, then, if an 8700K does 90 W and you still use that today? I'm still seeing 50+ FPS in cities. You might see 90-100 with more variance. Both are perfectly playable.

Here's a screen - this is with crowd density maxed, and most other settings too, and not 4K (3440x1440).

Posted on Reply
#35
fevgatos
Vayra86Exactly the point... So have we really progressed a lot, then, if an 8700K does 90 W and you still use that today? I'm still seeing 50+ FPS in cities. You might see 90-100 with more variance. Both are perfectly playable.
I'm seeing an average of 120-130 in that big city. So, yes, more than twice the FPS for similar power draw?
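A rough FPS-per-watt sketch of those two self-reported Starfield data points follows; purely illustrative, since the GPUs, settings, and measurement methods differ between the two systems:

```python
# Self-reported numbers from this exchange for the same Starfield city area.
fps_8700k, watts_8700k = 50,  90    # Vayra86: "50+ FPS" at ~90 W
fps_newer, watts_newer = 125, 95    # fevgatos: 120-130 FPS at 90-100 W

print(f"8700K     : {fps_8700k / watts_8700k:.2f} FPS/W")
print(f"newer CPU : {fps_newer / watts_newer:.2f} FPS/W")   # roughly 2.4x higher
```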
Vayra86You're just underlining the fact that higher-wattage CPUs at stock are going to use more watts for similar perf, you know. You are talking about tweaked CPUs here, and the moment you let a 13900K run stock on that same workload, the wattage explodes.

Can these CPUs use less? They certainly can! But it's not how they're delivered, and not every CPU can even be tweaked.
So your problem with these CPUs is how they are delivered? Well, lucky you, that's an insanely easy problem to solve. I'm fully confident you know how to power-limit; it will take you what, 5-10 seconds?
Posted on Reply
#36
Vayra86
fevgatosI'm seeing an average of 120-130 in that big city. So, yes, more than twice the FPS for similar power draw?


So your problem with these CPUs is how they are delivered? Well, lucky you, that's an insanely easy problem to solve. I'm fully confident you know how to power-limit; it will take you what, 5-10 seconds?
You seem to keep thinking I have a problem with things; I don't. It's an observation.

In synthetics we're supposed to see 2.6x efficiency, as I read earlier, and you mentioned 1/4th the power. There's a substantial gap.
Posted on Reply
#37
fevgatos
Vayra86You seem to keep thinking I have a problem with things; I don't. It's an observation.

In synthetics we're supposed to see 2.6x efficiency, as I read earlier, and you mentioned 1/4th the power. There's a substantial gap.
I never mentioned synthetics. Cinebench isn't a synthetic workload. It's an actual application called Cinema 4D.

I tested the same area as your screenshot, locked the framerate to 62, and power draw was at 49 watts. So yeah, not a huge leap, but games are a different kind of beast altogether, especially this game in particular. I don't think Starfield should be used to compare this kind of thing.
Posted on Reply
#38
lexluthermiester
fevgatosIn multithreaded workloads it should be way higher than just double.
What it seems like it should do and what happens in real-world practice often don't jibe. Given that it's been just under 6 years and performance has still slightly more than doubled, things are still progressing.
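As a quick sketch, "slightly more than doubled in just under six years" works out to roughly the following per year, assuming simple compounding (the 8700K launched in late 2017; the six-year figure is approximate):

```python
years = 6
ratio = 2.0   # overall performance multiplier over that span
annual_gain = ratio ** (1 / years) - 1
print(f"implied compound annual improvement: ~{annual_gain * 100:.0f}% per year")
```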
fevgatosThe 8700k scores around 9k in CBR23, the 13600k already hits 25k.
But that is one synthetic benchmark. One benchmark does not a standard of performance make.
fevgatosCinebench isn't a synthetic workload. It's an actual application called Cinema 4D.
Cinebench is most definitely a synthetic workload. It is a realistic-workload benchmark, which is why it is given more merit than most, but it is still an artificial runtime.
Posted on Reply
#39
fevgatos
lexluthermiesterWhat it seems like it should do and what happens in real-world practice often don't jibe. Given that it's been just under 6 years and performance has still slightly more than doubled, things are still progressing.

But that is one synthetic benchmark. One benchmark does not a standard of performance make.
Cinebench is not synthetic. Do you understand what a synthetic workload is?
Posted on Reply
#40
lexluthermiester
fevgatosCinebench is not synthetic.
Yes, it is, stop arguing.
Posted on Reply
#41
fevgatos
lexluthermiesterYes, it is, stop arguing.
Ok, so you don't understand what synthetic is. Lol
Posted on Reply
#42
Vayra86
fevgatosOk, so you don't understand what synthetic is. Lol
No, you just like to argue about irrelevant points ;) Cinebench, the way it is run for this purpose, is most definitely a synthetic bench because you're rendering that same picture every time. It's not a real workload with variance. The same thing applies, effectively, to a canned game benchmark. You could argue it's synthetic just the same because players never play that canned run ever. They come into the game with similar game logic but have it manipulated (and more often than not: added to!) by whatever they do in the game. In the same way, Starfield's performance varies wildly depending on where you are and even what you do. In combat, FPS takes a nosedive. Indoors, FPS skyrockets. Etc.

Synthetic:
  • Relating to, involving, or of the nature of synthesis.
  • Produced by synthesis, especially not of natural origin.
  • Prepared or made artificially.
  • Not natural or genuine; artificial or contrived.
Posted on Reply
#43
fevgatos
Vayra86No, you just like to argue about irrelevant points ;) Cinebench, the way it is run for this purpose, is most definitely a synthetic bench because you're rendering that same picture every time. It's not a real workload with variance. The same thing applies, effectively, to a canned game benchmark. You could argue it's synthetic just the same because players never play that canned run ever. They come into the game with similar game logic but have it manipulated (and more often than not: added to!) by whatever they do in the game. In the same way, Starfield's performance varies wildly depending on where you are and even what you do. In combat, FPS takes a nosedive. Indoors, FPS skyrockets. Etc.

Synthetic:
  • Relating to, involving, or of the nature of synthesis.
  • Produced by synthesis, especially not of natural origin.
  • Prepared or made artificially.
  • Not natural or genuine; artificial or contrived.
Cinebench is made by Maxon, the company behind Cinema 4D. They created Cinebench so system integrators can easily benchmark their systems' performance in... you guessed it, Cinema 4D, which is a real-world application. Calling Cinebench synthetic is just wild.
Posted on Reply
#44
Mussels
Freshwater Moderator
phanbueyvs 8700K? yeah it's almost doubled performance (60-80% faster) on ST and over doubled on MT...

None of the above - 100% CPU usage and no Throttle state

My undervolted 13700K pulls 210 W @ 5.3 GHz (stock) during Cinebench, so these chips are actually pretty efficient if you don't yeet them at 1.4 V at 5.9 GHz.
We can't see anything about throttling in that image.

If it were HWiNFO showing effective clocks and the rest of the power figures it might be trustworthy, but FRAPS, or whatever that is, can easily show an incomplete picture.
I can go use HWMonitor and show CPUs at 8 GHz and 255°C; not all software is reliable.
fevgatosCinebench is made by Maxon, the company behind Cinema 4D. They created Cinebench so system integrators can easily benchmark their systems' performance in... you guessed it, Cinema 4D, which is a real-world application. Calling Cinebench synthetic is just wild.
It's absolutely synthetic. It's 100% the same test every single time.

It's based on a realistic workload so it's a USEFUL synthetic, but it's still synthetic.
Posted on Reply
#45
fevgatos
MusselsIt's absolutely synthetic. It's 100% the same test every single time.

It's based on a realistic workload so it's a USEFUL synthetic, but it's still synthetic.
Ok, it's a synthetic benchmark in which the performance directly correlates with real-world applications. Calling it just a "synthetic" to discard the performance numbers is wrong.
Posted on Reply
#46
las
Vayra86Synthetic performance, yes. Theory vs practice. How often can you actually extract that perf I wonder... when it comes to gaming, we see plateaus of performance more so than a major jump. And of course I was part joking about it...

But now consider the fact these turbo to double the wattage too. I doubt there are many real perf/w improvements in a vast number of workloads when both CPUs run stock.
Performance in both real-world applications and gaming has gone up big time as well. IPC improved a lot and clock speeds went up as well. The 8700K is nowhere near top CPUs today, and neither is my 9900K, even at 5.2 GHz. It is losing big to even a stock i5-13600K in both applications and gaming.

But sure, keep thinking your 8700K at just 4.6 GHz is close to new CPUs... I bet your 7900XT is even bottlenecked by it in many demanding games, and you have no Resizable BAR support on top. Tons of games can and will use 8 cores today, especially when paired with a higher-end GPU.
Posted on Reply
#47
phanbuey
MusselsWe can't see anything about throttling in that image.

If it were HWiNFO showing effective clocks and the rest of the power figures it might be trustworthy, but FRAPS, or whatever that is, can easily show an incomplete picture.
I can go use HWMonitor and show CPUs at 8 GHz and 255°C; not all software is reliable.


It's absolutely synthetic. It's 100% the same test every single time.

It's based on a realistic workload so it's a USEFUL synthetic, but it's still synthetic.
AIDA throttle monitor.
Posted on Reply
#48
Vayra86
lasPerformance in both real-world applications and gaming has gone up big time as well. IPC improved a lot and clock speeds went up as well. The 8700K is nowhere near top CPUs today, and neither is my 9900K, even at 5.2 GHz. It is losing big to even a stock i5-13600K in both applications and gaming.

But sure, keep thinking your 8700K at just 4.6 GHz is close to new CPUs... I bet your 7900XT is even bottlenecked by it in many demanding games, and you have no Resizable BAR support on top. Tons of games can and will use 8 cores today, especially when paired with a higher-end GPU.
I know it is.
Posted on Reply
#49
Aerpoweron
fevgatosThat is not how you compare efficiency. At all. Unless you run everything at the same wattage, any efficiency comparison is just nonsensical.
280 W vs. 253 W was close enough for me for a comparison. I could power-limit the TR to 253 W, though, and see how well it does.
But I would also like to see the real power draw of the 14600K in Cinebench, assuming the board is configured to allow 253 W all the time for it.
Posted on Reply
#50
Mussels
Freshwater Moderator
fevgatosOk, it's a synthetic benchmark in which the performance directly correlates with real-world applications. Calling it just a "synthetic" to discard the performance numbers is wrong.
This discussion merely came up because some people think 'synthetic' means 'bad'.

It's synthetic, but it's still a valid and useful benchmark. It's possible for hardware to be tweaked for it due to its popularity, and that's why TPU tests with more than one rendering program.
fevgatosThat is not how you compare efficiency. At all. Unless you run everything at the same wattage, any efficiency comparison is just nonsensical.
That's just ridiculous.
That's not how efficiency works in the slightest - that's IPC.
It's also a terrible idea, because components are designed to work in sync with each other, and if you set them outside their architecture's optimal ranges, they'll generally perform far worse.
It also has nothing to do with how any of these products are intended to run, so it's a data point for IPC and otherwise utterly useless to everyone. Doubly so when that IPC varies per workload and per design - SSE, AVX, AVX-512, etc.


Efficiency is the time taken to complete a task. A faster, higher-wattage part can complete the task quicker. THAT is efficiency. Energy efficiency is when you do the math on the time taken and the energy consumed.
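A minimal sketch of that last point, with made-up illustrative numbers rather than measurements of any particular CPU:

```python
# Energy to finish a fixed task is power x time, so a faster, higher-wattage
# part can still come out ahead on total energy if it finishes sooner.
def task_energy_wh(power_watts: float, seconds: float) -> float:
    return power_watts * seconds / 3600.0

examples = {
    "slower part, 95 W for 600 s":  task_energy_wh(95, 600),    # ~15.8 Wh
    "faster part, 180 W for 250 s": task_energy_wh(180, 250),   # ~12.5 Wh
}
for name, wh in examples.items():
    print(f"{name}: {wh:.1f} Wh to complete the task")
```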
Posted on Reply