
Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched

Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Sorry my edit came in late; check out the 12600K vs 5600X.
Which CPU is more efficient and more future-proof? I would say the 12600K; well, it came out a year later, after all.
Why does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take off the cap and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to run at 100% utilization at stock, with no cap on power or frequency.
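To put the capped-vs-uncapped point in concrete terms, here is a minimal sketch of the FPS-per-watt comparison being described. The FPS and wattage figures are invented for illustration, not measurements of either CPU:

Code:
# Hypothetical illustration: frames-per-watt for a power-capped vs. uncapped CPU.
# The FPS and wattage figures below are made up for the example.

def fps_per_watt(fps: float, package_power_w: float) -> float:
    """Efficiency metric: frames delivered per watt of package power."""
    return fps / package_power_w

capped   = fps_per_watt(fps=140, package_power_w=65)   # power-limited run
uncapped = fps_per_watt(fps=155, package_power_w=130)  # limits removed

print(f"capped:   {capped:.2f} FPS/W")   # ~2.15 FPS/W
print(f"uncapped: {uncapped:.2f} FPS/W") # ~1.19 FPS/W
# An ~11% FPS gain for double the power roughly halves the efficiency.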
 
Joined
Nov 11, 2016
Messages
3,476 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Why does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take off the cap and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to run at 100% utilization at stock, with no cap on power or frequency.

That's because the 12600K has an all-core turbo boost of 4.5 GHz and a single-core boost of 4.9 GHz. These CPUs are running at stock.
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.91/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Sorry my edit came in late; check out the 12600K vs 5600X.
Which CPU is more efficient and more future-proof? I would say the 12600K; well, it came out a year later, after all.
I'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores and double the TDP and wattage, and it requires a more expensive motherboard, RAM and cooling



Why does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take off the cap and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to run at 100% utilization at stock, with no cap on power or frequency.
This.
Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.
The problem is that the moment you're not GPU limited, that CPU wattage will climb exponentially.


Look at the Spider-Man Reddit threads:
360 mm AIO, 83% usage and 90°C, at 63 FPS.
Tell me this guy would be having a great time if his system weren't bottlenecked and CPU usage went higher as his FPS did?


Another one where the game is being sent to P-cores and E-cores but not SMT threads, and his performance is crippled, because his system wasn't providing enough wattage to boost the CPU.





Games will not get easier on CPUs over time.
If your CPU can't provide the performance long-term without throttling, you're going to have a shitty experience like all these people are: "It's not at 100% usage, why is it slowing down?!"


Marvel's Spider-Man Remastered PC Performance Analysis (dsogaming.com)

These guys got close:
What’s also interesting here is how Hyper-Threading/SMT can affect the game’s performance. On CPUs with less than six physical cores, we see performance improvements when HT is active. On the other hand, performance degrades on CPUs that are equipped with more than six physical cores.
Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down; disabling SMT saves power, letting them clock higher.
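If you want to check those PL1/PL2 limits on your own machine, here's a minimal sketch using the Linux powercap (intel_rapl) sysfs interface. It assumes a Linux box with the intel_rapl driver loaded; the domain paths can differ from system to system:

Code:
# Minimal sketch: read the package PL1/PL2 power limits via the Linux
# powercap (intel_rapl) sysfs interface. Assumes the intel_rapl driver
# is loaded; domain paths can differ from machine to machine.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0 power domain

def read_limit_w(name: str) -> float:
    """Read a microwatt sysfs value and return it in watts."""
    return int((RAPL / name).read_text()) / 1_000_000

pl1 = read_limit_w("constraint_0_power_limit_uw")  # long-term limit (PL1)
pl2 = read_limit_w("constraint_1_power_limit_uw")  # short-term limit (PL2)
print(f"PL1 = {pl1:.0f} W, PL2 = {pl2:.0f} W")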
 
Joined
Nov 11, 2016
Messages
3,476 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
I'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores and double the TDP and wattage, and it requires a more expensive motherboard, RAM and cooling



This.
Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.
The problem is that the moment you're not GPU limited, that CPU wattage will climb exponentially.


Look at the Spider-Man Reddit threads:
360 mm AIO, 83% usage and 90°C, at 63 FPS.
Tell me this guy would be having a great time if his system weren't bottlenecked and CPU usage went higher as his FPS did?


Another one where the game is being sent to P-cores and E-cores but not SMT threads, and his performance is crippled, because his system wasn't providing enough wattage to boost the CPU.


Games will not get easier on CPUs over time.
If your CPU can't provide the performance long-term without throttling, you're going to have a shitty experience like all these people are: "It's not at 100% usage, why is it slowing down?!"


Marvel's Spider-Man Remastered PC Performance Analysis (dsogaming.com)

These guys got close:

Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down; disabling SMT saves power, letting them clock higher.

Hm... if only there were ADL parts with a low TDP for people who don't know how to tweak their PC, like the 12600 or 12700 :roll:

A 12700 with the stock RM1 cooler, using 50 W less than a 5800X while performing the same in games:
[Chart: Cyberpunk 2077 CPU power consumption]
[Chart: 1080p average gaming performance]
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
I'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores and double the TDP and wattage, and it requires a more expensive motherboard, RAM and cooling



This.
Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.
The problem is that the moment you're not GPU limited, that CPU wattage will climb exponentially.


Look at the Spider-Man Reddit threads:
360 mm AIO, 83% usage and 90°C, at 63 FPS.
Tell me this guy would be having a great time if his system weren't bottlenecked and CPU usage went higher as his FPS did?

Another one where the game is being sent to P-cores and E-cores but not SMT threads, and his performance is crippled, because his system wasn't providing enough wattage to boost the CPU.


Games will not get easier on CPUs over time.
If your CPU can't provide the performance long-term without throttling, you're going to have a shitty experience like all these people are: "It's not at 100% usage, why is it slowing down?!"


Marvel's Spider-Man Remastered PC Performance Analysis (dsogaming.com)

These guys got close:

Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down; disabling SMT saves power, letting them clock higher.
Obviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but it comes at a huge cost in efficiency. Saying the 12900K is efficient at 45 W is like saying my 1970 Chevy Nova's (cool car, btw) 5 L engine is very efficient at 20 mph, or that my Hummer is very efficient when I don't drive it. Are we really stooping to that kind of reasoning to make our point?
What I have noticed is that Ryzen's power increase at full load is marginal, and the power needed to sustain a given load is fair. The 12000 series' power at full load is huge; the power to sustain a low load is low, but that is kind of obvious. Intel pushed these chips hard, and the power required is so damn high.
Another crazy justification is buying a 12900K, lowering its voltage, frequency and wattage, playing games, and then saying the CPU is very efficient. That is the stupidest thing I have ever heard, and some people say it thinking their conclusions hold as a general rule.
If you don't have enough cores and a game uses the E-cores, you are screwed, but maybe it's like this: you simply limit your CPU, and it needs to clock higher and use more power because the load is simply high. If a game used all available cores for whatever reason, the CPU would behave like a CB23 benchmark run. Use 60-70% and up of the CPU, and the power goes up sharply if you want to keep high FPS. It is that simple. I don't think measuring power consumption via gaming is the right choice; it should be measured by load scenario.

Hm... if only there were ADL parts with a low TDP for people who don't know how to tweak their PC, like the 12600 or 12700 :roll:
Yes, these two are efficient because they are locked down on power, but also on performance. In 1080p gaming performance (TPU's test), they land where the two-year-old 5600X and 5800X are, or even the non-X models. Lock a 12000-series CPU down to be efficient and its great performance largely vanishes with it. Those two are fine for gaming, but the 12900K is not that kind of part, and saying it is efficient when you play some games is ridiculous, and that goes for all CPUs, not just the 12900K or KS. Then again, you don't need a lot of CPU performance while gaming, which is why the non-K parts are good for it. Either way, this is changing, and time will show that games will demand more from CPUs, especially with newer, beefier graphics cards.
There is a correlation, though: GPUs use more power since companies squeeze as much as possible out of them, and CPUs have to do the same to keep up, or at least not bottleneck the GPU, given current standards like 144 Hz and 240 Hz monitors and the FPS to match.
 
Joined
Nov 11, 2016
Messages
3,476 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Obviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but it comes at a huge cost in efficiency. Saying the 12900K is efficient at 45 W is like saying my 1970 Chevy Nova's (cool car, btw) 5 L engine is very efficient at 20 mph, or that my Hummer is very efficient when I don't drive it. Are we really stooping to that kind of reasoning to make our point?
What I have noticed is that Ryzen's power increase at full load is marginal, and the power needed to sustain a given load is fair. The 12000 series' power at full load is huge; the power to sustain a low load is low, but that is kind of obvious. Intel pushed these chips hard, and the power required is so damn high.
Another crazy justification is buying a 12900K, lowering its voltage, frequency and wattage, playing games, and then saying the CPU is very efficient. That is the stupidest thing I have ever heard, and some people say it thinking their conclusions hold as a general rule.
If you don't have enough cores and a game uses the E-cores, you are screwed, but maybe it's like this: you simply limit your CPU, and it needs to clock higher and use more power because the load is simply high. If a game used all available cores for whatever reason, the CPU would behave like a CB23 benchmark run. Use 60-70% and up of the CPU, and the power goes up sharply if you want to keep high FPS. It is that simple. I don't think measuring power consumption via gaming is the right choice; it should be measured by load scenario.


Yes, these two are efficient because they are locked down on power, but also on performance. In 1080p gaming performance (TPU's test), they land where the two-year-old 5600X and 5800X are, or even the non-X models. Lock a 12000-series CPU down to be efficient and its great performance largely vanishes with it. Those two are fine for gaming, but the 12900K is not that kind of part, and saying it is efficient when you play some games is ridiculous, and that goes for all CPUs, not just the 12900K or KS.

Ryzen 5000 efficiency got destroyed by locked ADL

[Chart: Blender power consumption]


What is your justification now? That Ryzen 5000 can be tweaked, but apparently the 12900K shouldn't be tweaked at all? :roll:
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Ryzen 5000 efficiency got destroyed by locked ADL


What is your justification now? That Ryzen 5000 can be tweaked, but apparently the 12900K shouldn't be tweaked at all? :roll:
Locked ADL? Dude, we have been talking about this in this thread over and over. Are you really that blind that you think what you say is legit and has any merit?
How is it destroyed? I see ADL matching the 5000 series in consumption; now compare that consumption to performance as well. ADL will be faster, but not by much. I can lock any CPU to whatever power I like and claim it is efficient. What a stupid, unreflective argument. What Intel did is match, when locked, the efficiency of CPUs that are two years old, using E-cores to advertise a higher core count. It is called marketing and you are a victim of it. Grow up.
 
Joined
Nov 11, 2016
Messages
3,476 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Locked ADL? Dude, we have been talking about this in this thread over and over. Are you really that blind that you think what you say is legit and has any merit?
How is it destroyed? I see ADL matching the 5000 series in consumption; now compare that consumption to performance as well. ADL will be faster, but not by much. I can lock any CPU to whatever power I like and claim it is efficient. What a stupid, unreflective argument. What Intel did is match, when locked, the efficiency of CPUs that are two years old, using E-cores to advertise a higher core count. It is called marketing and you are a victim of it. Grow up.

You are complaining, in a thread about the locked RPL 13900 non-K, that the K models use so much power.

First you say people shouldn't tweak their 12900K for better efficiency, then you say you can tweak your Ryzen for better efficiency. What a hypocrite.
 
Last edited by a moderator:
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
You are complaining, in a thread about the locked RPL 13900 non-K, that the K models use so much power; please grow up and grow out of your AMD fanboyism.

First you say people shouldn't tweak their 12900K for better efficiency, then you say you can tweak your Ryzen for better efficiency. What a hypocrite.
I'm complaining? I'm pointing something out, giving insight about a product, not jerking off every time I hear "Intel".
You literally take everything out of context and twist and turn it to make your point. I'm not a fanboy, but you definitely are, and people recognize that. Especially when you fail to comprehend the difference between a tweaked CPU and stock, when you pay for the product's out-of-the-box performance, not for how efficient it can get once you tweak it. Maybe Intel should charge you Intel fanboys according to how far a CPU can be tweaked, and advertise it as such?
I never said you should not tweak; I tweak both my GPU and CPU. What I'm saying is that you should not measure the efficiency of a CPU or GPU when you have tweaked or limited its power and performance, in an environment that will not utilize the CPU's or GPU's full potential. That is exactly what you do, and you keep arguing about it.
I talk about the 12900K because the conversation steered in this direction, and I still talk about Intel's products nonetheless, unlike your endeavors in any AMD or NV thread, for that matter.
 
Last edited by a moderator:
Joined
Jan 27, 2015
Messages
1,747 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
FWIW, here is a 10850K at 5.1 GHz all-core running single-threaded CPU-Z. The 10850K is one of the least efficient chips made in the last decade; it's worse than a 10900K (it's a down-binned 10900K).

Max power draw was 67 W. If I were to set it to normal turbo instead of all-core, it would drop 20-25 W from there, putting it around 45 W. I also set it to 4 threads and got 107 W.

There's nothing at all out of line about a 65 W 13900 being able to run a single core, or even a couple of cores, at its 5.6 GHz boost under 65 W, given the better node and other enhancements (E-cores with lower clock speeds and so on). You should not even have to do any tweaks.
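For anyone who wants to reproduce this kind of package-power reading without vendor tools, here is a minimal sketch on Linux using the RAPL energy counter. It assumes the intel_rapl driver is loaded and, for brevity, ignores counter wraparound:

Code:
# Minimal sketch: estimate average package power over an interval from the
# Linux RAPL energy counter (microjoules). Assumes intel_rapl is loaded;
# the counter wraps at max_energy_range_uj, which is ignored here.
import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def avg_package_power(seconds: float = 5.0) -> float:
    e0 = int(ENERGY.read_text())
    time.sleep(seconds)          # run your workload here instead of sleeping
    e1 = int(ENERGY.read_text())
    return (e1 - e0) / 1_000_000 / seconds  # joules per second = watts

print(f"average package power: {avg_package_power():.1f} W")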





 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
not for OEM or stock boards
That's why this annoys me

Users either get locked to low power settings, and locked performance (look at all the pissed-off Intel laptop users we get in the ThrottleStop forum).

It's becoming:

1. CPUs are reviewed on high-end, unlocked, super-cooled platforms, and everyone bases performance off those values.
2. Home users get boards that lock the power limits down, and they never see that performance.


As long as they actually get more performance for that power consumption...

Newer Intel advertising got more accurate, or more honest, but these chips still have pretty shitty efficiency: the only time they aren't at the bottom of the charts is when the E-cores are used.
Intel's P-cores are not efficient by any metric.

Ironically, 11th gen was pretty good single-threaded, but pure garbage in MT.



You can't discuss the performance of the P-cores as if they have the efficiency of the E-cores; very little can or will use both, other than a few specific workloads and synthetic tests.
The E-cores do nothing for gamers, for example.


TDP is thermal design power, not "total wattage", so they do both have some leniency here.


Seeing a 65 W TDP become a 95 W peak or similar was fine when those peaks weren't constant, because short boosts wouldn't overwhelm a cooler designed for 65 W TDP.
Intel's 10700 broke that by turning 65 W into 215 W, and TDP has been meaningless ever since.
Why do you expect a CPU that's asked to use 240 W to be efficient? This makes no sense. Of course Alder Lake will not be efficient at 240 W. The real question is: if you care about efficiency, why are you running it at 240 W? It makes absolutely no sense. Limit it to 125 W and suddenly it's more efficient than any CPU out there in 99.9% of tasks. It will lose (by a tiny amount, btw, like 5%) to a 5950X, and only in heavy rendering.

Why does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take off the cap and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to run at 100% utilization at stock, with no cap on power or frequency.
Alder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...
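The reason capping to 125 W costs so little performance is the shape of the voltage/frequency curve; a rough CMOS dynamic-power model makes the trade-off visible. A minimal sketch, with voltage/frequency pairs invented for illustration (not measured Alder Lake values):

Code:
# Rough CMOS dynamic-power model, P ~ C * V^2 * f: the last few hundred MHz
# need extra voltage, so they cost disproportionate power. The V/f pairs
# below are invented for the illustration, not measured Alder Lake values.

def rel_power(v: float, f_ghz: float, c: float = 1.0) -> float:
    """Relative dynamic power for a given core voltage and clock."""
    return c * v * v * f_ghz

chasing_clocks = rel_power(v=1.35, f_ghz=4.9)  # unlimited boost
power_limited  = rel_power(v=1.05, f_ghz=4.2)  # capped operating point

print(f"clock given up: {1 - 4.2 / 4.9:.0%}")                       # ~14%
print(f"power saved:    {1 - power_limited / chasing_clocks:.0%}")  # ~48%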
 
Joined
Aug 18, 2022
Messages
202 (0.23/day)
Why do you expect a CPU that's asked to use 240 W to be efficient? This makes no sense. Of course Alder Lake will not be efficient at 240 W. The real question is: if you care about efficiency, why are you running it at 240 W? It makes absolutely no sense. Limit it to 125 W and suddenly it's more efficient than any CPU out there in 99.9% of tasks. It will lose (by a tiny amount, btw, like 5%) to a 5950X, and only in heavy rendering.


Alder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...

You are preaching to the deaf
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
You are preaching to the deaf
It's funny, though. It seems the only people running the 12900K at 240 W are the ones who care about peak performance, and the ones who want to complain that it's not efficient.
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Alder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...
Is there a CPU that you consider not efficient? Considering you evaluate efficiency on a tweaked CPU, with no limits on the tweaking and no criteria other than power consumption, there is no such thing as an inefficient CPU. It simply doesn't exist, since you can always make it "efficient" by lowering its power consumption. You clearly disregard every other criterion and focus mainly on power draw.
Does that 70% you mention come with the ADL CPU limited to 35 W, or what is it? Gaming is not the whole story, you know. Productivity, rendering, and so on, basically the whole suite, should be taken into account to evaluate a CPU's efficiency and performance, not just one scenario like gaming, and you chose games with barely any CPU utilization. This matters too, so why is it being omitted here? Being more efficient in games depends on the game; there are more demanding games that make ADL use more power, unless you get a CPU with fewer cores.
I can bet you that when NVIDIA's Ada Lovelace and AMD's RDNA3 show up and you have to test those cards at 1080p in certain games, ADL will use far more power than it does with a 3090. Why? Because CPU utilization will go up with much faster cards to feed at 100%, which will raise FPS and, with it, the CPU's power draw.
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Obviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but it comes at a huge cost in efficiency. Saying the 12900K is efficient at 45 W is like saying my 1970 Chevy Nova's (cool car, btw) 5 L engine is very efficient at 20 mph, or that my Hummer is very efficient when I don't drive it. Are we really stooping to that kind of reasoning to make our point?
And this is where you are just flat-out wrong. Yes, the 12900K with out-of-the-box settings is inefficient in heavy multithreading. That part is true. That's different from "the 12900K is inefficient", unless, again, you are talking specifically about out-of-the-box settings. But testing out of the box doesn't tell you anything about the actual architectural efficiency. For that you have to test two CPUs at the same wattage. There is no other way to do it.

For example, the 7950X will be less efficient at stock than the 5950X. Does that mean Zen 4 is less efficient than Zen 3? No. To figure that out you need to put the 7950X at the same wattage as the 5950X, and then you'll realize Zen 4 >> Zen 3.

Now apply the same logic to the 12900K. When you run it at the same wattage as the 5950X, it loses by 5% in heavy multithreading and wins in everything else.

And that's where the whole problem lies. When you say the 12900K is inefficient, what you really mean is "with the out-of-the-box settings, in heavy MT". With that statement I'm perfectly in agreement. But for me that type of comparison is absolutely useless; I don't care about out-of-the-box settings.

Is there a CPU that you consider not efficient? Considering you evaluate efficiency on a tweaked CPU, with no limits on the tweaking and no criteria other than power consumption, there is no such thing as an inefficient CPU.
Of course. The 11900K, for example. You can limit it to 125 W, test it against other CPUs at 125 W, and it will be the slowest, so the least efficient. On the other hand, the 12900K will be the fastest or second fastest, right next to the 5950X.

Does that 70% you mention come with the ADL CPU limited to 35 W, or what is it? Gaming is not the whole story, you know.
Nope, at stock. der8auer and Igor'sLAB tested the 12900K vs the 5950X; the former was up to 70% more efficient.
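What "test at the same wattage" means in practice can be sketched in a few lines: pin both chips to one power limit, run the same workload, compare points-per-watt. The helper names here (set_power_limit_w, run_benchmark) are hypothetical stand-ins for whatever tooling you actually use:

Code:
# Sketch of an iso-power efficiency comparison: same power limit, same
# workload, compare points-per-watt. `set_power_limit_w` and `run_benchmark`
# are hypothetical stand-ins for real tooling (e.g. powercap + a benchmark).
from typing import Callable, Tuple

def efficiency_at(limit_w: float,
                  set_power_limit_w: Callable[[float], None],
                  run_benchmark: Callable[[], Tuple[float, float]]) -> float:
    """Points-per-watt for one CPU pinned to `limit_w`."""
    set_power_limit_w(limit_w)             # e.g. write PL1 via powercap
    score, avg_power_w = run_benchmark()   # (points, measured package watts)
    return score / avg_power_w

# Same limit for both chips makes the comparison architectural:
# eff_a = efficiency_at(125, set_limit_cpu_a, run_cb23_on_cpu_a)
# eff_b = efficiency_at(125, set_limit_cpu_b, run_cb23_on_cpu_b)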
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
For example, the 7950X will be less efficient at stock than the 5950X. Does that mean Zen 4 is less efficient than Zen 3? No. To figure that out you need to put the 7950X at the same wattage as the 5950X, and then you'll realize Zen 4 >> Zen 3.
Really? And where do you get that from? I recall AMD's charts saying the 7950X is 37% more efficient than a 5950X. No, you don't put them at the same wattage, since they are different products, and it is obviously the natural course of things that the newer CPU will crunch data a bit better below its advertised wattage.
AMD claims 67% more efficiency or so when the 7950X is at 35 W or so compared to the 5950X, but what's the point of the 7950X being efficient at 35 W if the 5950X is advertised at 125 W and will be faster than a 35 W-capped 7950X? WTF is the point of that comparison in the desktop market? There is no benefit for desktop, but in a laptop environment it is a huge improvement, and you need to recognize these differences.
And this is where you are just flat-out wrong. Yes, the 12900K with out-of-the-box settings is inefficient in heavy multithreading. That part is true. That's different from "the 12900K is inefficient", unless, again, you are talking specifically about out-of-the-box settings. But testing out of the box doesn't tell you anything about the actual architectural efficiency. For that you have to test two CPUs at the same wattage. There is no other way to do it.
You are wrong, my friend. In general terms the 12900K is not efficient. You can't evaluate efficiency by choosing a scenario that suits you to prove a point; you evaluate everything and draw a conclusion from that. Games don't utilize a CPU in a manner that lets you evaluate its efficiency by any metric. Any CPU at 20% utilization will look efficient; literally any CPU will use low power.
Of course. The 11900K, for example. You can limit it to 125 W, test it against other CPUs at 125 W, and it will be the slowest, so the least efficient. On the other hand, the 12900K will be the fastest or second fastest, right next to the 5950X.
Yes, which means you artificially limit it and then evaluate its efficiency? lol.
Nope, at stock. der8auer and Igor'sLAB tested the 12900K vs the 5950X; the former was up to 70% more efficient.
Yes, in a game that uses what, 10% of the CPU? WTF is wrong with you? You can say it is efficient in games, or in low-utilization or light workloads, but it is not efficient in MT workloads, once you take into account all aspects of the CPU and all the metrics used to evaluate it. So in games it is efficient (depending on the game); under heavy workloads it is not efficient by any standard. In general, putting all of that together, it is not as efficient as other CPUs on the market.
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Do you play Cinebench all day? Because these CPUs don't need 200 W for their performance numbers when running games.
Some of us do more with our PCs than leave them off most of the day and then do two hours of light gaming.

I get your point, but PCs were not made just for gaming; that's a side hustle, and people like you overstate it.
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Really? And where do you get that from? I recall AMD's charts saying the 7950X is 37% more efficient than a 5950X. No, you don't put them at the same wattage, since they are different products, and it is obviously the natural course of things that the newer CPU will crunch data a bit better below its advertised wattage.

That's exactly what I'm saying. AMD claims 37% more efficiency AT THE SAME WATTAGE!!! They are not testing the CPUs at stock, they are testing at the same power level! Because that is the only way you can measure architectural efficiency; otherwise you are just testing out-of-the-box settings!

You are wrong, my friend. In general terms the 12900K is not efficient. You can't evaluate efficiency by choosing a scenario that suits you to prove a point; you evaluate everything and draw a conclusion from that. Games don't utilize a CPU in a manner that lets you evaluate its efficiency by any metric. Any CPU at 20% utilization will look efficient; literally any CPU will use low power.
No, you are wrong. AMD themselves tested architectural efficiency at the same wattage. Are you saying they are wrong for doing so?

Yes, which means you artificially limit it and then evaluate its efficiency? lol.
What do you mean, artificially limit it? All CPUs are artificially limited. The only way to test architectural efficiency is at the same wattage, the same way you only test IPC at the same clock speed. How hard is that to understand...

Yes, in a game that uses what, 10% of the CPU? WTF is wrong with you? You can say it is efficient in games, or in low-utilization or light workloads, but it is not efficient in MT workloads, once you take into account all aspects of the CPU and all the metrics used to evaluate it. So in games it is efficient (depending on the game); under heavy workloads it is not efficient by any standard. In general, putting all of that together, it is not as efficient as other CPUs on the market.
What difference does it make? It uses 10% on both CPUs and the 12900K is more efficient. Tough luck.

Any CPU at 20% utilization will look efficient; literally any CPU will use low power.
It is obvious from this sentence that you don't understand what efficiency is. It doesn't matter how much power a CPU uses; efficiency is how much work it does with that amount of power. So yes, every CPU can drop down to 20 watts, but that doesn't mean they are all efficient: a CPU that does 50 points at 20 W is half as efficient as one that does 100 points at 20 W.
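That 50-vs-100-points example, reduced to arithmetic:

Code:
# Efficiency is work per watt, not low power draw by itself.
cpu_a = 50 / 20    # 2.5 points per watt
cpu_b = 100 / 20   # 5.0 points per watt: twice as efficient at the same power
print(cpu_a, cpu_b)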
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
That's exactly what I'm saying. AMD claims 37% more efficiency AT THE SAME WATTAGE!!! They are not testing the CPUs at stock, they are testing at the same power level! Because that is the only way you can measure architectural efficiency; otherwise you are just testing out-of-the-box settings!
Sure, that is true, but what do you expect? I don't get it. You want it to be less efficient at the same wattage? It is a different node; what did you expect was going to happen? They show the efficiency, but there is no way it would have been the other way around.
No, you are wrong. AMD themselves tested architectural efficiency at the same wattage. Are you saying they are wrong for doing so?
So what have they tested? It is a different node, and that is what they are showing: node efficiency, not the CPU itself. The CPU is just a medium to show off TSMC's node efficiency. You want a CPU efficiency test? Make the 7950X on the same node the 5950X is on, then test it and see how much more efficient the CPU is. You want an ADL architecture efficiency test? Test it against the 11900K, with ADL built on 14 nm just like the 11900K. Then you will have the CPU efficiency and will know how much better the architecture is. Obviously you have to use two metrics, power and performance, to validate it across all workloads, not just one. Then and only then will you be able to tell how efficient one is versus the other.
What do you mean, artificially limit it? All CPUs are artificially limited. The only way to test architectural efficiency is at the same wattage, the same way you only test IPC at the same clock speed. How hard is that to understand...
No, you don't get it. You can't test efficiency at the same wattage, since these are different nodes. You are not testing CPU efficiency but node efficiency, using the CPU to demonstrate it. You need to be as close to the same environment as possible; you don't lock the CPUs to whatever wattage you think is best. The lower the wattage, the bigger the smaller node's advantage, and that is obvious. Remember when HWUB tested the impact of core count on gaming? They did not use several different Intel CPUs, right? Why not? Because those are still different, so they used the 11900K (if I remember correctly) and locked cores to test 4c, 6c and 8c scenarios. They did not use the 12600K because of the cache difference.
A node change will always come with better efficiency, no matter how you slice it. Check the 10900K vs the 11900K for efficiency when they are locked to the same wattage, because that comparison makes sense, and please don't pick 35 W as the wattage; that is ridiculous for a desktop processor. Keep in mind the 10900K has two more cores.
What difference does it make? It uses 10% on both CPUs and the 12900K is more efficient. Tough luck.
OK, and when it uses 60% on both, it is not as efficient as at 10%. So why does 10% have to be the better metric, the better evaluation point for a CPU's efficiency? Why not 60%, or why not 100%, which you know it will suck at? Why 10% and not 100%, when you know that at some point you will have to use 100% no matter what? How is 10% utilization a valid efficiency metric across the board? Maybe you will lock the CPU at 10-20% utilization forever?

It is obvious from this sentence that you don't understand what efficiency is. It doesn't matter how much power a CPU uses; efficiency is how much work it does with that amount of power. So yes, every CPU can drop down to 20 watts, but that doesn't mean they are all efficient: a CPU that does 50 points at 20 W is half as efficient as one that does 100 points at 20 W.
Do I really? You play with math, my friend, that is all.
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Sure that is true but what do you expect i dont get it. You want it to be less efficient at the same wattage? It is a different node what did you expect is going to happen? they show the efficiency but there is no way it would have been all the way around.

So what they have tested? It is a different node and that is what they are showing. node efficiency not the CPU itself. The CPU is just a medium to show TSMC's node efficiency. You want a CPU efficiency test? make the 7950x on the same node as the 5950x is and then test and see how much more efficient the CPU will be. You want AL efficiency test of the arch ? test it against 11900k and make the AL on a 14nm just like the 11900k is. Then you will have the CPU efficiency and you will know how much better the CPU architecture is. Obviously you will have to get 2 metrics power and performance to validate it across all workloads not just one. Then and only then you will be 100% able to tell how efficient one is from the other.

No you dont get it. You can't test efficiency at the same wattage since these are different nodes. You don't test CPU efficiency but Node efficiency using the CPU to demonstrate it. You need to be as close to the same environment as possible. you dont lock the CPUs to whatever wattage you think it's best. The lower the wattage the smaller the node will have an advantage and that is obvious. Remember when HWUB tested the core count impact on gaming? they did not use several different Intel CPUs right? Why not? Because they are still different so they used 11900k (if i remember correctly) and lock cores to test 4c 6c 8c scenarios. They have not used 12600k because of the cache difference.
Node change will always come with better efficiency no matter how you slice it. Check 10900k vs 11900k for efficiency when they are locked to same wattage because it makes sense and please dont get the wattage 35w. That is ridiculous for a desktop processor. Keep in mind 10900k has 2 core more.

OK and when it uses 60% on both it is not as efficient as 10% so why 10% has to be the better metric or better evaluation point for efficiency of a CPU? why not 60% or else why not 100% which you know it will suck at 100% utilization right? why 10% not 100% when you know you want to use or at some point you will have to use 100% no matter what. How 10% utilization is reflective and valid for efficiency metric across the board? Maybe you will lock the CPU at 10%-20% utilization forever?


Do I really? You play with math my friend that is all.
I'll repeat myself: the ONLY way to test architectural efficiency is at the same wattage. Period. There is no arguing with that. If you don't understand why, then I really don't know how to help you. Testing any other way leads to absurdities.

The 12900T runs at 65 W, and it's more efficient than the 12900K. Therefore, by your method, the Alder Lake architecture is more efficient than the Alder Lake architecture. That's where your way of comparing architectures leads. Can you explain to me how that makes any sense?

Also, according to your method, AMD is flat-out lying. The 7950X scores 38k at 230 W, while the 5950X scores 26k at 125 W. Therefore Zen 3 is more efficient than Zen 4 by your method, yet AMD says the opposite...
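Those stock figures, reduced to points-per-watt, show why stock-vs-stock comparisons mislead (the scores and wattages are the post's own numbers, not independent measurements):

Code:
# The quoted stock numbers as points-per-watt. Scores and wattages are the
# post's figures, not independent measurements.
zen4_stock = 38_000 / 230   # ~165 points per watt at a 230 W limit
zen3_stock = 26_000 / 125   # ~208 points per watt at a 125 W limit
# Naive stock points-per-watt favors the 5950X, yet at an equal power
# limit Zen 4 wins: stock-vs-stock measures power limits, not architectures.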
 
Joined
May 31, 2016
Messages
4,446 (1.42/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
I'll repeat myself: the ONLY way to test architectural efficiency is at the same wattage. Period. There is no arguing with that. If you don't understand why, then I really don't know how to help you. Testing any other way leads to absurdities.

The 12900T runs at 65 W, and it's more efficient than the 12900K. Therefore, by your method, the Alder Lake architecture is more efficient than the Alder Lake architecture. That's where your way of comparing architectures leads. Can you explain to me how that makes any sense?

Also, according to your method, AMD is flat-out lying. The 7950X scores 38k at 230 W, while the 5950X scores 26k at 125 W. Therefore Zen 3 is more efficient than Zen 4 by your method, yet AMD says the opposite...
I ask again.
First: why is 10% utilization, and the power an ADL CPU (or any other CPU, for that matter) draws there, reflective of how efficient the architecture is, rather than 100% utilization? That is what lets you quote over and over how efficient ADL is, in games you chose to showcase its efficiency, with low utilization and, obviously, low power draw.
Second: how can you evaluate one CPU architecture against another, knowing the two nodes are totally different and both chips are being evaluated at an arbitrary low power limit chosen by the evaluator, even though both are desktop-segment processors?

Side question: would you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, the highest possible, or the stock wattage set by the manufacturer, across a variety of benchmarks?
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I ask again.
First: why is 10% utilization, and the power an ADL CPU (or any other CPU, for that matter) draws there, reflective of how efficient the architecture is, rather than 100% utilization?
If you are running games, then obviously your concern is efficiency in games. There is no point in a gamer buying a CPU that is efficient in Cinebench when he is going to use it for gaming.

That of course applies to every workload. If a CPU is more efficient in, let's say, AutoCAD or Premiere, there is no point arguing over whether those programs push the CPU to 100% or to 1%. It's completely irrelevant.

I know my Renault Megane is very inefficient at 200 km/h, but I bought it because it's efficient at the 150 km/h I actually drive.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
But gaming plus streaming, or gaming while Windows is free to update and run other apps in the background, can be quite resource-heavy.
Never leave your PC with a core count that is only sufficient to run one game.
 
Joined
Jun 14, 2020
Messages
3,550 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Second: how can you evaluate one CPU architecture against another, knowing the two nodes are totally different and both chips are being evaluated at an arbitrary low power limit chosen by the evaluator, even though both are desktop-segment processors?

The node is a fundamental part of a CPU architecture; the architecture was designed with a specific node in mind. But even that is irrelevant. You can't change the node, but you can change the power limit. And nobody argued that you should test at a low power limit; I'm arguing that you should test at the SAME power limit. It can be 50 watts or 500 watts.

Would you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, the highest possible, or the stock wattage set by the manufacturer, across a variety of benchmarks?
I would evaluate it by its efficiency at the wattage I'm going to run it at. If I'm trying to decide between two CPUs, I'll test them both at the wattage I'm going to be running them at.

All of your points are still irrelevant, though. You CANNOT test at different wattages; that way you end up concluding that Alder Lake is more efficient than itself.
 