Monday, September 18th 2023
Intel Core i5-14600K Benchmarked
An Intel Core i5-14600K processor has been benchmarked roughly a month before its expected retail rollout—previous leaks place this particular model as the most wallet-friendly offering within the range of 14th Gen Core "Raptor Lake Refresh" CPUs. A total of six SKUs, in K and KF variants, are anticipated to launch on October 17. An official unveiling of the new processor lineup is scheduled for tomorrow at Team Blue's Innovation event. China's ECSM has managed to acquire an engineering sample (ES) of the aforementioned i5-14600K and put it through the proverbial wringer in Cinebench R23, Cinebench 2024, and CPU-Z. The brief report did not disclose exact testbench conditions, so some of the results could be less than reliable or accurate.
ECSM's CPU-Z screenshot re-confirms the Core i5-14600K's already leaked specs—six high-performance Raptor Cove cores running at a 3.50 GHz base clock and boosting up to 5.30 GHz (a 200 MHz gain over its predecessor, the Core i5-13600K), plus eight efficiency-oriented Gracemont cores running at up to 4.0 GHz—100 MHz more than on the outgoing part. The Core i5-14600K and i5-13600K share the same 24 MB of L3 cache and 125 W PBP—the leaked engineering sample was shown running at a core voltage of 1.2 V, whereas the previous-gen CPU operates at 1.14 V. ECSM noted that CPU package power consumption reached 160 W, and added: "currently, the burn-in voltage is still quite out of control, especially for the two 8P models, both of which are at 1.4 V+. However, there is still a lot of room for manual voltage reduction." Tom's Hardware and VideoCardz have produced comparison charts based on ECSM's data, plus external material from Guru3D and CGDirector:
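For quick reference, here is a minimal sketch (in Python) of the generation-over-generation deltas implied by the leaked figures; all numbers are taken from the report above, and engineering-sample values may not match retail silicon:

```python
# Leaked i5-14600K figures vs. the i5-13600K, per the ECSM report above.
# Engineering-sample values are provisional and may differ at retail.
specs = {
    #                     (14600K, 13600K)
    "P-core boost (GHz)": (5.30, 5.10),
    "E-core boost (GHz)": (4.00, 3.90),
    "Core voltage (V)":   (1.20, 1.14),
}

for metric, (new, old) in specs.items():
    print(f"{metric}: {old} -> {new} ({(new - old) / old * 100:+.1f}%)")
```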
Sources:
ECSM, VideoCardz, Tom's Hardware
60 Comments on Intel Core i5-14600K Benchmarked
Now, if we want to make that comparison at ISO performance, a 13600K should match an 8700K's performance at around 30-35 watts. So basically 1/4th to 1/5th of the power consumption. Whoever is not impressed with this, I don't know what to tell you.
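For what it's worth, the ratio math checks out. A rough sketch, assuming the 8700K draws somewhere around 150 W for the same score (that baseline wattage is an assumption for illustration, not a figure from the post):

```python
# ISO-performance comparison sketched from the post above. The 8700K's
# draw at the shared performance level is an assumed placeholder (~150 W);
# only the ratio logic is the point.
power_8700k_w = 150.0
power_13600k_w = (30.0, 35.0)  # range quoted above

for p in power_13600k_w:
    ratio = power_8700k_w / p
    print(f"13600K at {p:.0f} W does the same work on 1/{ratio:.1f} of the power")
```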
Bone-stock 13700KF with a small -25 mV undervolt, air-cooled by a Peerless Assassin and a KryoSheet: 201 W, 30K points.
If they can add efficiency with the new voltage-regulation improvements, 160 W might actually be on the high side for that 14600K.
And then we get to gaming, where the CPU is brutally inefficient overall, as shown by the stellar wattage numbers a 7800X3D produces in games, easily a half to a third of the draw of an equally performant Intel CPU. I underline this point because a lot of what we perceive as advantages and improvements doesn't always pay off in practice, while at the same time we now run CPUs capable of burning twice the wattage we used to.
There's a space there we aren't seeing in reviews/testing, for sure. And I'm not saying this just because I have an 8700K. I'm saying it because that 8700K still uses around 70-90 watts in most games today, even with a 7900 XT, even in titles where I am severely CPU-limited, and even then I'm still scoring FPS remarkably close to the latest and greatest. The desire to upgrade my CPU isn't that big, even though I know I could get more than 20% higher frames in gaming here and there. CPUs got faster, CPUs got more efficient, but that efficiency is definitely not transparent enough to say it applies everywhere.
Yes, the 13900K is a power hog if left unchecked running games at 720p with a 4090, but that's not a very realistic scenario. At 4K it usually hovers below 100 W.
Edit 1: The above numbers are based on the heaviest games, like TLOU and Cyberpunk. In your average game the numbers are much lower.
Fun fact: I would also see around 70 W at 4K with a 4090 on my 8700K.
I'm seeing 90 W today in Starfield, a game that truly loves to load the CPU (let's not speak of performance relative to that...), and it results in FPS pretty close to what's being benched for the 7900 XT on recent CPUs. And there are many more examples that leave me wondering whether there is some magic I don't know of, or whether CPU performance in gaming has just pretty much hit a wall; different CPUs no longer feel as different from one another as they did in the past. It's either enough, too little, or overkill that barely pays off.
Warzone 2 = between 50 and 70 watts
Kingdom Come = between 50 and 70 watts
Remnant 2 = between 33 and 52 watts
Hogwarts = between 40 and 60 watts
And the list goes on and on. The maximum power draw I've ever seen in a game was in 720p TLOU, where the 12900K hit 115 watts, but in most games it's half that or even less.
Now with the 13900K, yes, I've gone up to a whopping 170 watts in Cyberpunk, but again, those are very academic numbers. After all, when you are running 720p with a 4090, you are basically benchmarking. I'm fairly confident you can power-limit it to 90 W and lose like 3% performance or something. In Starfield - again, a fully CPU-bound scenario - I'm between 90 and 100 W in the big city. In other areas I'm around 50-60 W. Nothing gets close to TLOU in terms of power draw.
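To put that power-limit claim in perspective, a quick back-of-envelope perf-per-watt calculation (the 3% loss is the estimate from above, not a measured figure):

```python
# Rough perf-per-watt gain from power-limiting a 13900K as described above.
# The 3% performance loss is the poster's guess, not a measurement.
stock = {"watts": 170.0, "perf": 1.00}   # Cyberpunk worst case quoted above
limited = {"watts": 90.0, "perf": 0.97}  # assumed ~3% loss at a 90 W limit

gain = (limited["perf"] / limited["watts"]) / (stock["perf"] / stock["watts"])
print(f"Perf/W at a 90 W limit: {gain:.2f}x stock")  # roughly 1.8x
```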
Can these CPUs use less? They certainly can! But it's not how they're delivered, and not every CPU can even be tweaked. Exactly the point... So, have we progressed a lot then, if an 8700K does 90 W and you still use that today? I'm still seeing 50+ FPS in cities. You might see 90-100 with more variance. Both are perfectly playable.
Here's a screenshot - this is with crowd density maxed, most other settings too, and not at 4K (3440x1440).
In synthetics, I read earlier that we're supposed to see 2.6x efficiency, and you mentioned 1/4th to 1/5th. There's a substantial gap.
I tested the same area as your screenshot, locked the framerate to 62, and power draw was at 49 watts. So yeah, not a huge leap, but games are a different kind of beast altogether. Especially this game in particular. I don't think Starfield should be used to compare this kind of thing.
Synthetic:
If it was HWiNFO showing effective clocks and the rest of the power figures, it might be trustworthy, but FRAPS or whatever that is can easily show an incomplete picture.
I can go use HWMonitor and show CPUs at 8 GHz at 255 °C; not all software is reliable. It's absolutely synthetic. It's 100% the same test every single time.
It's based on a realistic workload, so it's a USEFUL synthetic, but it's still synthetic.
But sure, keep thinking your 8700K at just 4.6 GHz is close to new CPUs... I bet your 7900 XT is even bottlenecked by it in many demanding games, and you have no Resizable BAR support on top. Tons of games can and will use 8 cores today, especially when paired with a higher-end GPU.
But I would also like to see the real power draw of the 14600K in Cinebench, assuming the board is configured to allow 253 W at all times.
It's synthetic, but it's still a valid and useful benchmark. It's possible for hardware to be tweaked for it due to its popularity, and that's why TPU tests with more than one rendering program. That's just ridiculous.
That's not how efficiency works in the slightest - that's IPC.
It's also a terrible idea, because components are designed to work in sync with each other, and if you set them outside their architecture's optimal ranges, they'll generally perform far worse.
It also has nothing to do with how any of these products are intended to run, so it's a data point for IPC and otherwise utterly useless to everyone. Doubly so when that IPC varies per workload and per design - SSE, AVX, AVX-512, etc.
Efficiency is the time taken to complete a task. A faster, higher-wattage part can complete the task quicker. THAT is efficiency. Energy efficiency is when you combine the time taken with the energy consumed.
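A minimal sketch of that definition in Python, with purely illustrative numbers (the wattages and completion times below are made up for the example, not measured):

```python
# Energy efficiency as defined above: energy consumed for a fixed task,
# where a faster, higher-wattage part can still come out ahead by
# finishing sooner. All numbers are illustrative, not measurements.
def task_energy_wh(power_w: float, seconds: float) -> float:
    """Energy used to complete the task, in watt-hours."""
    return power_w * seconds / 3600

fast = task_energy_wh(power_w=200, seconds=300)  # finishes in 5 minutes
slow = task_energy_wh(power_w=100, seconds=900)  # same task in 15 minutes

print(f"fast part: {fast:.1f} Wh, slow part: {slow:.1f} Wh")
# 16.7 Wh vs 25.0 Wh: the 200 W part is the more energy-efficient one here
```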