
14900K - Tuned for efficiency - Gaming power draw

Speaking of RAM, the dynamic memory frequency tuning looks like a neat way to tune performance and efficiency further.
I used to tune RAM with PiMod to chase efficiency. The goal for each iteration was to finish in nearly the same time as the previous one. Back then, on DDR2 systems, PiMod was heavy enough that if it passed, the RAM tuning could be called pretty stable.

Past few weeks I've been experimenting with 7-Zip. It's bundled with BenchMate; not a benchmark I'm used to running, but it seems pretty sensitive to memory tweaking. It's also good for building some heat.

Performance is going up if the times are going down. This isn't about burn-in testing, just efficiency and performance.
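If anyone wants to replicate that compare-the-times approach, here's a rough Python sketch of the idea - numpy is just a stand-in workload (PiMod or the 7-Zip benchmark would be the real thing), and the buffer size is a guess:

```python
# Time a memory-heavy task several times and watch the spread: a timing
# change that passes but slows the runs down or adds variance is a loss.
import time
import numpy as np

def run_once(n=20_000_000):
    a = np.random.rand(n)          # ~160 MB, far bigger than any cache
    t0 = time.perf_counter()
    a[::16].sum()                  # strided reads stress memory latency
    return time.perf_counter() - t0

times = [run_once() for _ in range(5)]
print([f"{t:.3f}s" for t in times],
      f"spread {max(times) - min(times):.3f}s")
```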
 
I'm playing with tightening latency a bit; 6800 CL30 seems to run at the moment, and the true latency isn't too bad by DDR5 standards.
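For anyone checking their own kit, true latency is just CAS cycles divided by the memory clock (which is half the MT/s rate). A quick sketch of the math for the config above:

```python
# CAS "true latency" in nanoseconds: cycle time is 2000/MT/s ns,
# so latency = CL * 2000 / MT/s.
def cas_ns(mt_s, cl):
    return cl * 2000 / mt_s

print(f"{cas_ns(6800, 30):.2f} ns")   # 6800 CL30 -> ~8.82 ns
print(f"{cas_ns(6000, 30):.2f} ns")   # a common DDR5-6000 CL30 -> 10.00 ns
```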
 
Just for comparison, my Ryzen 7900 gets around 18,000 points in Cinebench at 44 watts. At its stock 65 watts it's 24,000 points, and 30,000 points at 180 watts (at 15-20°C ambient). IMO it's an exponentially increasing waste of electricity, so it's wiser to lower performance a bit during gaming and get much better energy consumption.
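Putting those same quoted numbers into points per watt makes the shape of the curve obvious:

```python
# The quoted Ryzen 7900 Cinebench numbers as points per watt.
points = {44: 18_000, 65: 24_000, 180: 30_000}   # watts -> score

for watts, score in points.items():
    print(f"{watts:3d} W: {score:6d} pts -> {score / watts:5.1f} pts/W")
# 44 W: ~409 pts/W; 65 W: ~369 pts/W; 180 W: ~167 pts/W.
# Going from 65 W to 180 W nearly triples power for ~25% more score.
```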

You can simply tune the RAM timings to regain the lost 5-10 FPS with only 1-2 watts extra on the RAM.
20% extra performance for 200% extra power draw seems absurdly stupid, but unfortunately the manufacturers find this totally OK and ship the CPUs with these settings! And all that amid a general movement towards energy savings, global warming mitigation and environmental protection...

I'm afraid memory timing tuning is a matter for the most serious PC enthusiasts; I'm a technical enthusiast, for example, but in computers I never reached that level. So I will gladly accept the small drop in performance and not bother with this mysterious stuff...
 
Been playing FF15, a multi-threaded game, with HT off. Still no performance issues, and CPU package power is typically 10-20 W. (13700K)
 
Buying a 14900K for gaming is a pretty big waste though; I'd buy a 14700K any day over it. It ends up at the same clock speed on the P-cores after tweaking, and the money saved goes to other parts.

Hell, even an i5 delivers almost identical performance; personally I'd want 8 P-cores though.
 
Well, the 14700K certainly is better value for money than the 14900K, and very good value for a 20-core CPU generally.

The 14900K has a chance of being better-quality silicon than the 14700K; for a given frequency it can require lower voltage, and thus be more energy efficient.

Tuning a 14900K for efficiency is the topic of this thread.
 
I don't really see the 14900K and 14700K as 28- and 20-core CPUs. They have 8 powerful cores and a bunch of slow ones.
 
A bigger silicon is always easier to tune for efficiency. A smaller silicon (edit: from the same architecture), on the other hand, doesn't have to be tuned, and is way cheaper, too.
 
I am not sure if you are speaking about the 14700K, because in terms of out-of-the-box inefficiency it is nearly as bad as the 14900K.

These "slow cores" provide a majority of multithread performance and when the CPU is power limited contribute greatly to its mutithread performance efficiency.
 
If I needed multithreaded performance along with top-tier gaming performance, I would buy a 7950X3D, or Zen 5 in a few months. I wouldn't need a 360 AIO either, and I'd be able to drop in a newer AM5 chip in a few years too.

Efficiency cores are slow and mostly there to bump the core count on paper and raise synthetic benchmark numbers like Cinebench.

Big.LITTLE makes LITTLE sense for desktop chips outside of this. Pointless really.

The only reason Intel went this route is that they could not keep up in core count and multithreaded performance, but their performance cores are too power hungry to make a proper 12-16 P-core part. They were forced into big.LITTLE.

And I can see you bought into the marketing by calling the 14700K a 20-core CPU when in reality it's 8 cores + a bunch of Cinebench cores :laugh:
Not a single one of Intel's desktop chips has more than 8 perf cores, which is kinda sad. Thankfully 8 cores is the sweet spot for most people, especially gamers. Sadly the 7800X3D destroys even the 14900K in gaming while using 50 watts on average.

An Intel CPU with true 16C/32T would probably eat 700-800 watts.

This is why Arrow Lake and the 20A/18A process are going to be so important for Intel.

Intel 7 = 10 nm.
Intel 4 = 7 nm.
Intel 20A/18A = 4/3 nm. If they pull this off in 2024, they will be on par with TSMC. Let's hope Meteor Lake and Arrow Lake will be good.
 
They concluded that for a given silicon area, an 8P+16E config has better multithread performance than 12 P-cores alone. That is all.

A monolithic chip with 16 P-cores would probably be pretty large and expensive for consumer PCs. Such a chip would only draw as much power as you allowed it to.
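Back-of-envelope version of that area argument, using two common rules of thumb rather than die measurements: roughly 4 E-cores fit in one P-core's area, and an E-core gives ~0.55x a P-core's throughput:

```python
# Both figures below are rules of thumb, not measurements.
E_PER_P_AREA = 4
E_PERF = 0.55

area_12p = 12.0                        # in "P-core areas"
area_hybrid = 8 + 16 / E_PER_P_AREA    # 8P+16E -> also 12.0

mt_12p = 12 * 1.0                      # relative MT throughput
mt_hybrid = 8 * 1.0 + 16 * E_PERF      # = 16.8, ~40% more in same area
print(area_12p, area_hybrid, mt_12p, mt_hybrid)
```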

I am curious who will win the single-thread performance race: Arrow Lake or Zen 5?
 

They are 100% held back because the perf cores use so much power. Their biggest 13th and 14th gen chips can suck 400+ watts when power limits are removed, and even at stock they generally need a 360 AIO to keep cool enough; some will still see cores hit 100°C depending on airflow. I wonder if the 14900KS will hit 450 watts with power limits removed; 300-320 watts at stock or so.
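For anyone who wants to check package power on their own box rather than argue about it, a minimal sketch using the Linux RAPL powercap counter (this assumes the intel_rapl driver is loaded and that intel-rapl:0 is the package domain; it may need root, and counter rollover is ignored):

```python
# Read the package energy counter twice and convert the delta to watts.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def package_watts(interval_s=1.0):
    with open(RAPL) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL) as f:
        e1 = int(f.read())
    return (e1 - e0) / 1e6 / interval_s   # microjoules -> watts

print(f"package: {package_watts():.1f} W")
```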

Intel has been forced to push clockspeeds to the limits to compete really.

This is why the next-gen Meteor Lake/Arrow Lake chips and the Intel 20A/18A node are the most important move in a loooong time for Intel; hopefully they also increase cache size across the board.

Intel APO shows that the big.LITTLE approach doesn't really work, even on Windows 11 with Thread Director. Software still chooses the wrong cores, so Intel has to do this stuff MANUALLY on a per-chip basis (per perf-core count, more likely). This can turn into a big headache for Intel going forward. Hopefully, though, they will release APO for 12th and 13th gen too.

Single-thread performance doesn't matter much anymore. AMD won gaming by increasing cache while lowering clocks, and gamers should look for a balance here. The 7800X3D beats the 14900K in gaming with 1/3 the power usage and ~1 GHz lower clockspeeds.

Games and apps are much more multithreaded today than 5 years ago; however, many of them still fail to use the correct cores on big.LITTLE chips.

So while Intel's chips are not too bad, I don't like the idea of big.LITTLE on desktop because software compatibility will always be a problem here. You also see this with the 7950X3D, which loses to the 7800X3D in most games: on the 7800X3D all 8 cores have the 3D cache, while only one of the 7950X3D's two CCDs has it, and many games use the wrong cores there too.

Having a fixed number of IDENTICAL CORES is the best thing for DESKTOP. Software can then use whatever and perform as expected.
 
Huh? You are mixing two things together: general performance and the performance of a CPU specialised for gaming. Single-thread performance is the most important thing; everything depends on it.

Intel now has 3 different core types in Meteor Lake; the new ones are 2 small cores hidden in the SoC tile which can handle some system and low-performance tasks such as video playback. Really smart IMO.
 
I am not sure if you are speaking about the 14700K, because in terms of out-of-the-box inefficiency it is nearly as bad as the 14900K.
Sorry, I should have specified: I'm speaking about Core i5 and below.
 
I don't see why that is smart; my GPU handles video playback.

Single thread is not really the most important thing these days. Most apps and games are multithreaded at this point.
 
Probably because a single low-power CPU core can do the job drawing less power than most, if not any, GPU can. I'd love to see some numbers on that though.

Single thread is still a very relevant metric; while a lot of games can make use of multiple cores, single-thread performance often dictates the FPS cap.
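A toy illustration of that FPS-cap point, with made-up per-frame numbers: the serial main-thread work sets the ceiling no matter how many cores you add.

```python
# Hypothetical per-frame costs: some serial main/render-thread work
# plus parallelizable worker time spread across cores.
MAIN_MS = 6.0      # serial work per frame (made up)
WORKERS_MS = 40.0  # parallel work per frame (made up)

for cores in (4, 8, 16, 32):
    frame_ms = max(MAIN_MS, WORKERS_MS / cores)
    print(f"{cores:2d} cores -> {1000 / frame_ms:5.1f} FPS cap")
# 4 cores: 100.0 FPS; from 8 cores on it flatlines at ~166.7 FPS.
# Past that point only faster single-thread raises the cap.
```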

Beagle is right though: the 14700 might be 99% of the 14900, but better-binned silicon means a slightly higher clock, or slightly less voltage at the same clock, which is a win.

What I'd like to know is where the efficiency now peaks, and whether the best FPS/W has moved further up the clock-speed range. We've talked about the 14900K's metrics being good (thirsty untuned), but does a given wattage cap (say 65 or 95 W) give better FPS than on the 12900K/13900K (if you attempt to ignore IPC)?
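Something like this would answer it: log average FPS at each cap and compare FPS per watt. All the numbers below are placeholders, not measurements.

```python
# Swap in your own logged averages per CPU and per power cap.
results = {  # (cpu, power_cap_w): average_fps -- hypothetical values
    ("12900K", 65): 118, ("12900K", 95): 129,
    ("13900K", 65): 127, ("13900K", 95): 141,
    ("14900K", 65): 131, ("14900K", 95): 146,
}

for (cpu, cap), fps in sorted(results.items()):
    print(f"{cpu} @ {cap:3d} W: {fps} FPS -> {fps / cap:.2f} FPS/W")
```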
 
If you mean video playback, any Intel iGPU or Nvidia GPU (or even AMD up to RDNA 2) can do that with much less power than a CPU.
 
The 7800X3D has the lowest clock of all the newer CPUs and still wins in most games because of cache. Yeah, a few games tend to prefer high clockspeeds, but not a lot.

My GPU barely uses power when decoding video. Doesn't Intel use the iGPU for decoding as well? Seems pointless to have CPU cores do it.
 
This thread was about tuning a 14900K (and other Intel CPUs) for gaming, yet here we are talking about 700 W power draw on 16 P-core CPUs. Sure, a 16 P-core chip with a 125 W power limit will just decide to ignore it and pull 700 watts. Makes sense...
 
Didn't you read above?

"My 14900K will draw less power if I disable hyper-threading" is about as smart as "my body will require less food if I cut off both of my legs". If you buy a $600 CPU only to immediately disable half of its features just to get acceptable power draw, you're not being smart - you are, in fact, being the exact opposite.

I'm getting really tired of seeing these "Intel CPUs can be power efficient too" threads/posts. Nobody cares that they can be; the point is that, at stock, they are not. The fact that it's possible to make these CPUs consume sane amounts of power is not the saving grace that everyone who uses them seems to think it is. If it's not good out of the box (i.e. how the vast majority of users will experience it, since most users don't tweak CPU power consumption), it's not good, period.
Are you suggesting reviews should be done without XMP? That would be silly, right?

Nobody cares about what the majority of people will do. If the majority of people care about efficiency, then they shouldn't buy a K CPU and run it out of the box; that would be idiotic. There are non-K and T versions - the most efficient CPUs on planet Earth - buy those instead.
 
I too can make random shit up.
Funny enough, he's not. Intel is dropping HT for 'rentable units' as early as the next generation of Intel CPUs.

HT is outdated and too often hurts performance; logical cores will never be better than physical ones. Rentable Units aim to partition incoming instructions into separate partitions on separate physical cores (not logical cores) to ensure data keeps flowing and cores are utilized without delay.
 
Intel is dropping HT

Could they also be doing this because HT has been vulnerable to attacks in the past? Is that still a live concern with HT on, or has it been fixed via CPU microcode updates?
 
Are you suggesting reviews should be done without XMP? That would be silly, right?
XMP is another can of worms. If compatibility is poor, the user experience may be riddled with problems (hello AM4; not sure about modern Intel).
Another consumer-unfriendly area - it has improved, but it's still unfriendly.

Nobody cares about what the majority of people will do.
Someone should, in particular the vendors producing the product. The out-of-the-box experience affects brand reputation, so someone should care or you're going to lose money to a competitor.

If the majority of people care about efficiency, then they shouldn't buy a K CPU and run it out of the box; that would be idiotic. There are non-K and T versions - the most efficient CPUs on planet Earth - buy those instead.
Reviewers typically ignore non-K CPUs. When making a purchasing decision for building your own PC, are you more likely to go with a product nobody is providing feedback for?
 