
Intel Core i9-12900KS

"this is exactly how Turbo Boost 3.0 is designed to work, the problem is that even though we're using Windows 11, and all communication protocols between the OS and CPU are active, there's still cases where threads don't end up on these two cores and thus lose a bit of performance." - It's strange, considering that I never run into this problem with my 11700 on Windows 10. Maybe Windows 11's scheduler is a bit flawed?

Good review otherwise. :)
 
Waiting for my KS to arrive; I bought the 9900KS and never regretted it for a moment.
I'm not as keen on the recently announced 3090 Ti though. I'm already gaming in my pants after a couple of hours running a 3090 OC - the heat is atrocious on a 3090, so a 3090 Ti will clearly be a lot worse, and then there is the power draw…
I think they need to start thinking very seriously about limiting power draw before governments legislate and ruin it - as they do everything else.
They are robbing us blind across Europe for electricity, and this comes as British MPs now get their electricity paid for by us taxpayers on top of a £2,200 pay rise, so clearly it doesn't matter to them, but "we are all in it together" eh…
 
Great review as always. I would like to suggest one new benchmark, though, that I think would be really cool to see: Civilization 6 time-between-turns. It probably just scales with single-threaded performance more than anything, but it would be nice to see.
 
Waiting for my KS to arrive; I bought the 9900KS and never regretted it for a moment.
I'm not as keen on the recently announced 3090 Ti though. I'm already gaming in my pants after a couple of hours running a 3090 OC - the heat is atrocious on a 3090, so a 3090 Ti will clearly be a lot worse, and then there is the power draw…
I think they need to start thinking very seriously about limiting power draw before governments legislate and ruin it - as they do everything else.
They are robbing us blind across Europe for electricity, and this comes as British MPs now get their electricity paid for by us taxpayers on top of a £2,200 pay rise, so clearly it doesn't matter to them, but "we are all in it together" eh…

Many truths in this post.
 
Honestly, meh. Disgraceful power draw and heat; a CPU for p!ssing contests only.

OCing Alder Lake should be illegal; it's a farcical waste of energy, and a stock 12900K needs no help. I hope Raptor Lake focuses on greatly reducing power draw, but I suspect we won't see huge improvements until Arrow Lake.
 
Hahahaha... wtf, this is worse than the 3090 Ti: +$150 for 1%? That's within margin of error... and it consumes more power too, what a joke.
This is the champion of useless hardware.
 
For fun I ran the Blender BMW test as well to see where my 5950X stands against that chip. Mine finished the run in 83.39 seconds while hovering around 209 W for CPU package power as reported by HWiNFO. Also, for those that are curious, while the test was running the all-core VID reported by HWiNFO was between 1.1 and 1.2 volts and the all-core clock speed was between 4.35 and 4.42 GHz.
 
I'm just curious... [generally speaking] looking at the power consumption charts, what is the best method to estimate power consumption whilst gaming? I'm assuming it's a mix of the single-threaded and multi-threaded charts? I'm also assuming it's way below the MT 298 W rating?

For what it's worth, I run a 12900K with a D15, set to 5.1 GHz all-core at 1.23 V fixed. Gaming can be between 35-85 W, with temps in the 60s usually. It's rare to see a spike to the low 80s.

The chip can suck down crazy power if you let it, and it will look scary on charts. But if it's gaming you are after, you don't need to let it or worry about it.

(*E: More relevant info:
I disabled the E-cores and left the ring on auto, which results in a typical 4700 MHz ring, about 500 MHz more than default.
Running for about half a year now without issue.)
 
In Prime95 the CPU draws more power than my PSU's official rating. If I got an identical PSU, I couldn't officially run a 12900KS and an RTX 4090 together, because combined they exceed my PSU's rated wattage. It would probably still work, along with some fans to aid in cooling them, but what a sh*t show between the two of them. It'll be embarrassing if AMD decides to play along too. This isn't where things were supposed to lead: smaller manufacturing nodes, yet even higher power draw. What next, memory makers pushing DDR5 to 2.8 V?!
 
A suggestion for reviews in the efficiency section (because right now it's just consumption).

You need a game demo that lasts 5 minutes and is a good mix of GPU and CPU usage.
By good mix I mean when the demo was running it would tax the GPU and CPU in such a way that the numbers would be close to the average of all the games benchmarked in the review.

Once you have it, you run it - measuring the machine's power consumption when you do. And like you do on the 1080/1440/4k results pages, adjust the review sample's FPS to 100%
12900KS: 5m Demo: 14.2kJ consumption, 100% performance
11900K: 5m Demo: 11.6kJ consumption, 93.2% relative

To figure out how much more energy the 12900KS used, you put its consumption over the 11900K's:

14.2/11.6, which is 122.4%

But to get efficiency, you need to adjust for how much less work the 11900K did.

And this is the last calculation: 122.4% × 0.932 = 114.1%.

The 12900KS takes 114.1% of the energy to do the same work as the 11900K (for example).

You can reduce this to a ratio: 1.141 to 1

Then you can multiply 1.141 by the 11900K's consumption during a different game to estimate the 12900KS's draw.

Obviously the more different the other game is, the less accurate the estimation is. But that's not the point.

The point is that the demo stays the same as the reviews progress, and you can compare efficiency. When saving for future use, you just need to remember to put the reviewed chip's frames per second beside its 100% so the results are transferable. Posting it in the review would be helpful too, though it isn't necessary, especially if you always exported results to the next review (multiply the results you want to include by the ratio of review 1's 100% FPS to review 2's 100% FPS).

The math isn't complicated. Just thought I'd throw out the idea in an easily useful, hopefully persuasive form.

Also, it wouldn't be much more work to run the demo at all the common resolutions. Other than the first time (gathering comparison points) it's not much work, and as time goes on the database grows.

We'd be able to see how much hardware is improving, both speed- and energy-wise, and compare any two points in time - until the benchmark becomes too old to stress the hardware. Unless Intel and AMD stagnate for another decade after this short reprieve.
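If it helps, here's the whole calculation as a throwaway script - the figures are just the made-up example numbers from above, nothing measured:

```python
# Sketch of the proposed efficiency metric, using the example numbers above (not real data).

def efficiency_ratio(energy_kj: float, rel_perf: float,
                     ref_energy_kj: float, ref_rel_perf: float) -> float:
    """Energy spent per unit of work, relative to a reference chip (reference = 1.0)."""
    return (energy_kj / rel_perf) / (ref_energy_kj / ref_rel_perf)

# 12900KS: 14.2 kJ over the 5-minute demo at 100% relative performance
# 11900K:  11.6 kJ over the same demo at 93.2% relative performance
ratio = efficiency_ratio(14.2, 1.00, 11.6, 0.932)
print(f"12900KS needs {ratio:.3f}x the energy of the 11900K for the same work")  # ~1.141

# Estimating the 12900KS's consumption in another game from the 11900K's measurement.
# The 10 kJ figure is purely hypothetical, just to show the idea:
estimate_kj = ratio * 10.0
print(f"Estimated 12900KS consumption: {estimate_kj:.1f} kJ")
```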
 
On the gaming testing front, is there any chance that some non-FPS metrics could be incorporated next time the suite gets an overhaul? Stuff like AI turn time for Civ 6 (or 7 if released), late-game tick rates (or an abstraction) for the Paradox grand strategy games like CK3, Stellaris and EU4, late-game city-builder tick rates, and so on.

While the 720p results give an indication, finding a way to actually benchmark those games where the CPU is far more important for the gameplay experience than the GPU would be a great addition to the CPU test suite.

For most of the games above my 2200G could do 4K at playable frame rates. The issues that part had (and still does) are late-game hitching during calculations, turn time, and the fact that the late-game 'fastest' game speed often becomes slower than early-game 'normal', making it drag. FPS is rarely an issue worth complaining about.

As for this CPU. Does not seem worth it over the 12900K. Barely any extra performance for a pretty hefty price increase.
 
I respect them for introducing the current-year value king - the 12100F - but this 12900KS is a joke processor. Even worse than both the 8086K and the 9900KS. +2% in games? What were they thinking? This is a product for people with more money than brains, or for extreme overclockers. But it's binned anyway and can't overclock notably higher.

12900KS+RTX3090TI anyone? :laugh:
 
Honestly, meh. Disgraceful power draw and heat; a CPU for p!ssing contests only.

OCing Alder Lake should be illegal; it's a farcical waste of energy, and a stock 12900K needs no help. I hope Raptor Lake focuses on greatly reducing power draw, but I suspect we won't see huge improvements until Arrow Lake.

Decreasing the number of P-cores and significantly increasing the number of low-clocked E-cores (perhaps in a higher-IPC iteration) may do a lot for power consumption in multithreaded tasks. A large number of cores that clock to low speeds (and thus low voltages) is mainly how the 5950X achieves relatively low power consumption in such loads.

Also, having fully independent voltage rails for all cores should help since right now on Alder Lake the voltage applied to all cores is the highest requested by any core and uncore. In other words, currently you cannot simultaneously have one fast core and many slow cores at separate voltages. The slow cores would have the same voltage as the fast one.

I think for short bursts (~seconds to perhaps a couple minutes) and single-core loads it's fine to use the CPU to the maximum of its capabilities, but by doing some analysis of performance versus configured PL1 (i.e. package power) it's clear that allowing the CPU to consume 50–100+% more than about 100-120W is definitely not going to yield corresponding performance increases and will just be wasteful (and noisy, stressful, etc) for significant compute tasks (video or 3D rendering, etc).

I overclocked my 12700K but mostly for single-core performance and multi-core bursts. For long-term compute loads I have a relatively low PL1 and Tau limits set in place. I think most people will just run their overclocked CPUs with no limits.
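Just to illustrate what I mean by a low PL1 for long loads: on Linux the same limits can be poked at runtime through the powercap interface, no BIOS trip needed. Rough sketch below, assuming the intel_rapl driver is loaded and the script runs as root; the 125 W value is only an example, not a recommendation:

```python
# Read and (optionally) lower the sustained package power limit (PL1) via Linux powercap.
# Assumes /sys/class/powercap/intel-rapl:0 exists (intel_rapl driver) and root privileges.
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 RAPL domain

def read_int(name: str) -> int:
    return int((PKG / name).read_text())

pl1_w = read_int("constraint_0_power_limit_uw") / 1_000_000   # constraint 0 = long term (PL1)
tau_s = read_int("constraint_0_time_window_us") / 1_000_000   # Tau, the PL1 time window
print(f"PL1 = {pl1_w:.0f} W, Tau = {tau_s:.1f} s")

# Example only: cap sustained package power at 125 W for a long render job.
(PKG / "constraint_0_power_limit_uw").write_text(str(125 * 1_000_000))
```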
 
A suggestion for reviews in the efficiency section (because right now it's just consumption).

You need a game demo that lasts 5 minutes and is a good mix of GPU and CPU usage.
By good mix I mean when the demo was running it would tax the GPU and CPU in such a way that the numbers would be close to the average of all the games benchmarked in the review.

Once you have it, you run it - measuring the machine's power consumption when you do. And like you do on the 1080/1440/4k results pages, adjust the review sample's FPS to 100%
12900KS: 5m Demo: 14.2kJ consumption, 100% performance
11900K: 5m Demo: 11.6kJ consumption, 93.2% relative

To figure out how much more energy the 12900KS used, you put its consumption over the 11900K's:

14.2/11.6, which is 122.4%

But to get efficiency, you need to adjust for how much less work the 11900K did.

And this is the last calculation: 122.4% × 0.932 = 114.1%.

The 12900KS takes 114.1% of the energy to do the same work as the 11900K (for example).

You can reduce this to a ratio: 1.141 to 1

Then you can multiply 1.141 by the 11900K's consumption during a different game to estimate the 12900KS's draw.

Obviously the more different the other game is, the less accurate the estimation is. But that's not the point.

The point is that the demo stays the same as the reviews progress, and you can compare efficiency. When saving for future use, you just need to remember to put the reviewed chip's frames per second beside its 100% so the results are transferable. Posting it in the review would be helpful too, though it isn't necessary, especially if you always exported results to the next review (multiply the results you want to include by the ratio of review 1's 100% FPS to review 2's 100% FPS).

The math isn't complicated. Just thought I'd throw out the idea in an easily useful, hopefully persuasive form.

Also, it wouldn't be much more work to run the demo at all the common resolutions. Other than the first time (gathering comparison points) it's not much work, and as time goes on the database grows.

We'd be able to see how much hardware is improving, both speed- and energy-wise, and compare any two points in time - until the benchmark becomes too old to stress the hardware. Unless Intel and AMD stagnate for another decade after this short reprieve.
Good idea, here's some data.

cyberpunk-2077-1280-720.png
power-gaming.png


Actually the power measurement is at 1440p, but to get more scaling let's just use 720p data for FPS

Dividing power by FPS for each of those:
power-frame.png


This does look promising indeed.

Thoughts on the unit "Watt per Frame" ? Technically it is Joules per Frame because 1 W = 1 J/s and 1 FPS = 1 Frame/s, so power / FPS = (J/s) / (F/s) = (J/s) * (s/F) = J/F.

Guess "Energy per Frame"?
 
Way too power hungry and way too expensive! Why go for this KS variant over the K variant? There is no point other than people having too much money lying around and wanting to just give it to Intel for nothing. Literally 1% faster than the K version, but way hotter and way more power hungry. This CPU consumes more power than top-of-the-line graphics cards!

It looks even worse compared to the 5950X, as it's again only 1% faster overall, yet consumes over 150 W more power and runs at temps of 100+ degrees Celsius. I'm still sticking with AMD to punish Intel for almost a decade of garbage 2% incremental speed increases when they had a monopoly, even though it's now a toss-up between their CPUs. For the last three years I'd say AMD had the better processors and far more value-oriented ones; now it's equal, but I'm still going with AMD, as I don't want to reward Intel just yet for a full decade of garbage products!

I'm looking at AMD's Ryzen 6000 or 7000, or whatever they call it, to see how it stacks up against Intel's latest offerings. From what I've read, it's supposedly going to be available in early autumn.
 
The max TDP of the i9-12900KS can't be 241 W - Intel defines it as 260 W...
 
The max TDP of the i9-12900KS can't be 241 W - Intel defines it as 260 W...
The default PL1 and PL2 values are 241 W; that's what the processor knows about. Everything else is just made-up numbers that are listed for one reason or another.

The 260 W number is from an earlier leak that apparently turned out to be false.
 
But 14900 isn't out yet.
Yes, but we can see, for example, that the 11900K was matched by the 12600K; these chips don't have a lifespan of years, but months.

I mean, sure, you could buy the hottest, most inefficient, steaming mess of a bloated CPU and blast it at 30 jigawatts for the next 10 years, but that doesn't seem very rational.
 
@W1zzard you're right, sorry, only the base TDP is higher
1648892374017.png
 
Intel just wants to be on top of everything. This CPU is just a waste of resources, time and money. Only stupid geeks with tons of money would buy this monstrosity.
 
Honestly, meh. Disgraceful power draw and heat; a CPU for p!ssing contests only.

OCing Alder Lake should be illegal; it's a farcical waste of energy, and a stock 12900K needs no help. I hope Raptor Lake focuses on greatly reducing power draw, but I suspect we won't see huge improvements until Arrow Lake.
OCing any modern CPU should be illegal. The built-in "multi-core enhancement" feature of motherboards should be plenty enough for everybody.

My 11700 (non-K) maxes out the 200 W limit set by my Asus TUF motherboard in Prime95 and draws around 160-170 W in Cinebench R23 at 4.4 GHz. I literally needed my 280 mm AIO just to enable MCE on a locked (non-K) CPU! I'm not complaining, because it never goes above 75-77 °C under normal loads (like CB R23, for example), but c'mon... who needs more than this for home use?

Way too power hungry and way too expensive! Why go for this KS variant over the K variant? There is no point other than people having too much money lying around and wanting to just give it to Intel for nothing. Literally 1% faster than the K version, but way hotter and way more power hungry. This CPU consumes more power than top-of-the-line graphics cards!

It looks even worse compared to the 5950X, as it's again only 1% faster overall, yet consumes over 150 W more power and runs at temps of 100+ degrees Celsius. I'm still sticking with AMD to punish Intel for almost a decade of garbage 2% incremental speed increases when they had a monopoly, even though it's now a toss-up between their CPUs. For the last three years I'd say AMD had the better processors and far more value-oriented ones; now it's equal, but I'm still going with AMD, as I don't want to reward Intel just yet for a full decade of garbage products!

I'm looking at AMD's Ryzen 6000 or 7000, or whatever they call it, to see how it stacks up against Intel's latest offerings. From what I've read, it's supposedly going to be available in early autumn.
Honestly, I think even K versions are a waste of money nowadays. One can just buy a no-suffix base version with a good B-series motherboard, unlock the power limits and call it a day. It's much cheaper than struggling for a +100 MHz boost on a wastefully expensive K or KS chip.
 
smooth experience
 