Friday, August 26th 2022

Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched

An Intel Core i9-13900 "Raptor Lake" (non-K) processor was spotted in the wild by Benchleaks. The non-K parts are expected to have a 65 W Processor Base Power and more aggressive power management than the unlocked i9-13900K, although the core configuration is identical: 8 P-cores and 16 E-cores. Besides tighter out-of-the-box power limits and a locked multiplier, the i9-13900 also has lower clocks, with its maximum P-core boost frequency set at 5.60 GHz, compared to 5.80 GHz for the i9-13900K. That is still a tad higher than the 5.40 GHz of the i7-13700K.

Tested in Geekbench 5.4.5, the i9-13900 scores 2130 points in the single-threaded test and 20131 points in the multi-threaded one. Wccftech tabulated these scores against the current-gen flagship i9-12900K: the i9-13900 ends up 10 percent faster in the single-threaded test and 17 percent faster in the multi-threaded test. The single-threaded uplift comes from the higher IPC of the "Raptor Cove" P-core and the slightly higher boost clock, while the multi-threaded score is helped not just by the higher IPC but also by the addition of 8 more E-cores.
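For reference, the stated uplifts imply the following i9-12900K baselines (a quick back-calculation from the percentages above, not scores taken from the source):

```python
# Back-of-the-envelope check of the reported uplifts.
# The i9-13900 scores are from the Geekbench 5.4.5 leak; the i9-12900K
# baselines are implied by the stated 10%/17% gaps, not measured values.
i9_13900_st, i9_13900_mt = 2130, 20131

implied_12900k_st = i9_13900_st / 1.10   # ~1936 points single-threaded
implied_12900k_mt = i9_13900_mt / 1.17   # ~17206 points multi-threaded

print(f"Implied i9-12900K ST: {implied_12900k_st:.0f}")
print(f"Implied i9-12900K MT: {implied_12900k_mt:.0f}")
```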
Sources: Benchleaks (Twitter), Wccftech

77 Comments on Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched

#51
nguyen
ratirtWhy does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take the cap off and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to be utilized 100%, given there is no cap on power or frequency and they both work at stock.
That's because the 12600K has an all-core turbo boost of 4.5 GHz; single-core boost is 4.9 GHz. These CPUs are running at stock.
Posted on Reply
#52
Mussels
Freshwater Moderator
nguyenSorry, my edit came in late; check out the 12600K vs the 5600X.
Which CPU is more efficient and more future-proof? I would say the 12600K, well, it came out a year later after all
I'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores, double the TDP and wattage, that requires a more expensive motherboard, RAM and cooling

ratirtWhy does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take the cap off and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to be utilized 100%, given there is no cap on power or frequency and they both work at stock.
This.
Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.
The problem is that the moment you're not GPU-limited, that CPU wattage will climb exponentially.

Look at the Spider-Man Reddit threads:
360 mm AIO, 83% usage and 90°C - at 63 FPS.
Tell me this guy would be having a great time if his system weren't bottlenecked and his CPU usage climbed along with his FPS?

Another one where the game is being sent to the P-cores and E-cores but not the SMT threads, and his performance is crippled - because his system wasn't providing enough wattage to boost the CPU.

Games will not get easier on CPUs over time.
If your CPU can't provide the performance long-term without throttling, you're going to have a shitty experience like all these people are. "It's not at 100% usage, why is it slowing down!"


Marvel's Spider-Man Remastered PC Performance Analysis (dsogaming.com)

These guys got close
What’s also interesting here is how Hyper-Threading/SMT can affect the game’s performance. On CPUs with less than six physical cores, we see performance improvements when HT is active. On the other hand, performance degrades on CPUs that are equipped with more than six physical cores.
Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down - disabling SMT saves power, letting the cores clock higher
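(As a side note, one way to see whether a chip is riding its PL1 limit is to watch sustained package power. Below is a minimal Linux-only sketch, assuming the intel_rapl powercap interface is exposed at the usual sysfs path; the domain index varies by system and reading may require root.)

```python
import time

# Common default path for the package-0 RAPL domain on Intel Linux
# systems; the "intel-rapl:0" index is an assumption and can differ.
ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj() -> int:
    # Cumulative energy counter in microjoules (it wraps eventually,
    # but a one-second sample won't hit that in practice).
    with open(ENERGY) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(1.0)
e1, t1 = read_uj(), time.time()

# Delta energy over delta time gives average package power in watts.
watts = (e1 - e0) / 1e6 / (t1 - t0)
print(f"Package power: {watts:.1f} W")  # compare against the PL1 value
```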
Posted on Reply
#53
nguyen
MusselsI'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores, double the TDP and wattage, that requires a more expensive motherboard, RAM and cooling

Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down - disabling SMT saves power, letting the cores clock higher
Hm... if only there were low-TDP ADL parts for people who don't know how to tweak their PC, like the 12600 or 12700 :roll:

A 12700 with the stock RM1 cooler, using 50 W less than a 5800X while performing the same in games.
Posted on Reply
#54
ratirt
MusselsI'd bloody hope so, seeing how it's a more expensive CPU with almost double the cores, double the TDP and wattage, that requires a more expensive motherboard, RAM and cooling

Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.

Because with those cores and threads active, they're passing PL1 limits and the CPU is clocking down - disabling SMT saves power, letting the cores clock higher
Obviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but that comes at a huge cost in efficiency. Saying that at 45 W the 12900K is efficient is like saying my 1970 Chevy Nova (cool car, btw) with its 5 L engine is very efficient at 20 mph, or that my Hummer is very efficient when I don't drive it. Are we really stooping to that kind of reasoning to make our point?
What I have noticed is that Ryzen's power increase at full load is marginal, and the power needed to sustain a given load is fair. Intel 12000-series power at full load is huge; the power to sustain a low load is low, but that is kind of obvious. Intel pushed it a lot, and the power required is so damn high.
Another crazy justification is buying a 12900K, lowering its voltage, frequency and wattage, playing games, and saying that this CPU is very efficient. That is the stupidest thing I have ever heard, and some people say it thinking their conclusions hold in general.
If you don't have enough cores and a game uses the E-cores, then you are screwed, but maybe it's like this: you simply limit your CPU, and it needs to clock higher and use more power because the load is simply high. If a game uses all available cores for whatever reason, that CPU would behave like a CB23 bench: use 60%-70% and up of the CPU, and the power goes up exponentially if you want to keep high FPS. It is that simple. I don't think measuring power consumption via gaming is the right choice; it should be measured per load scenario.
nguyenHm... if only there were low-TDP ADL parts for people who don't know how to tweak their PC, like the 12600 or 12700 :roll:
Yes, these two are efficient because they are locked in power, but also in performance. In games at 1080p (TPU test), they land where the two-year-old 5600X and 5800X are, or even their non-X models. Lock a 12000-series CPU to be efficient, and its great performance kind of vanishes with it. Those two are for gaming, but the 12900K is not, and saying it is efficient when you play some games is ridiculous; that goes for all CPUs, not just the 12900K or KS. Then again, you don't need a lot of performance while playing games, which is why the non-K parts are good for gaming. Either way, this is changing, and time will show that CPUs will be asked for more performance, especially with newer, beefier graphics cards.
There is a correlation, though. GPUs use more power since companies squeeze as much as possible out of them, and CPUs have to do the same to keep up, or at least not bottleneck the GPU, with current standards of playing like 144 Hz or 240 Hz monitors and the FPS to match.
Posted on Reply
#55
nguyen
ratirtObviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but that comes at a huge cost in efficiency.
Ryzen 5000 efficiency got destroyed by locked ADL.

What is your justification now? That Ryzen 5000 can be tweaked, but apparently the 12900K shouldn't be tweaked at all :roll:
Posted on Reply
#56
ratirt
nguyenRyzen 5000 efficiency got destroyed by locked ADL.

What is your justification now? That Ryzen 5000 can be tweaked, but apparently the 12900K shouldn't be tweaked at all :roll:
Locked ADL? Dude, we have been talking about this in this thread over and over. Are you really that blind that you think what you say is legit and has any merit?
How is it destroyed? I see ADL matching the 5000 series in consumption; now compare that consumption to performance as well. ADL will be faster, but not by much. I can lock any CPU to whatever power I like and claim it is efficient. What a stupid and unreflective argument. What Intel did is match the efficiency of two-year-old CPUs by locking parts that have E-cores and advertising them as higher-core-count CPUs. It is called marketing and you are a victim of it. Grow up.
Posted on Reply
#57
nguyen
ratirtLocked ADL? Dude, we have been talking about this in this thread over and over. Are you really that blind that you think what you say is legit and has any merit?
How is it destroyed? I see ADL matching the 5000 series in consumption; now compare that consumption to performance as well. ADL will be faster, but not by much. I can lock any CPU to whatever power I like and claim it is efficient. What a stupid and unreflective argument. What Intel did is match the efficiency of two-year-old CPUs by locking parts that have E-cores and advertising them as higher-core-count CPUs. It is called marketing and you are a victim of it. Grow up.
You are complaining, in a thread about the locked RPL 13900 non-K, that the K models use so much power.

First you say people shouldn't tweak their 12900K for better efficiency, then you say you can tweak your Ryzen for better efficiency. What a hypocrite.
Posted on Reply
#58
ratirt
nguyenYou are complaining, in a thread about the locked RPL 13900 non-K, that the K models use so much power; please grow up and grow out of your AMD fanboyism.

First you say people shouldn't tweak their 12900K for better efficiency, then you say you can tweak your Ryzen for better efficiency. What a hypocrite.
I'm complaining? I'm pointing something out and giving insight about a product, not jerking off every time I hear "Intel".
You literally take everything out of context, twisting and turning it to make your point. I'm not a fanboy, but you definitely are, and people recognize that. Especially when you fail to comprehend the difference between a tweaked CPU and stock, when you pay for the product's performance, not for how efficient it will get when you tweak it. Maybe Intel should instead charge you Intel fanboys according to how much a CPU can be tweaked, and advertise it as such?
I never said you should not tweak; I tweak mine, both GPU and CPU. What I'm saying is that you should not measure the efficiency of a CPU or GPU when you tweak or limit its power and performance, in an environment that will not utilize the CPU's or GPU's full potential. That is exactly what you do, and you keep arguing about it.
I talk about the 12900K because the conversation steered in this direction, and I still talk about Intel's products nonetheless, unlike your endeavors in any AMD or NV thread, for that matter.
Posted on Reply
#59
95Viper
Stay on topic.
Stop the insults.
Posted on Reply
#60
RandallFlagg
FWIW, here is a 10850K at 5.1 GHz all-core running single-threaded CPU-Z. The 10850K is one of the least efficient chips made in the last decade; it's worse than a 10900K (it's a down-binned 10900K).

Max power draw was 67 W. If I were to set it to the normal turbo behavior instead of all-core, it would drop 20-25 W from there, putting it around 45 W. I also set this to 4 threads and got 107 W.

There's nothing at all out of line with a 65 W 13900 being able to run a single core, or even a couple of cores, at 5.6 GHz boost under 65 W, given the better node and other enhancements (E-cores with lower clock speeds and so on). You should not even have to do any tweaks.
Posted on Reply
#61
JustBenching
Musselsnot for OEM or stock boards
That's why this annoys me

Users either get locked to low power settings - and locked performance (look at all the pissed-off Intel laptop users we get in the ThrottleStop forum)

It's becoming:

1. CPUs are reviewed on high-end, unlocked, super-cooled platforms, and everyone bases performance off those values
2. Home users get boards that lock the power limits down, and those users never see that performance


As long as they actually get more performance for that power consumption...

Newer Intel advertising got more accurate, or more honest, but they still have some pretty shitty efficiency: the only time they aren't at the bottom of the charts is when the E-cores are used.
Intel's P-cores are not efficient by any metric.

Ironically, 11th gen was pretty good single-threaded, but pure garbage in MT.

You can't discuss the performance of the P-cores as if they have the efficiency of the E-cores - very little can or will use both, other than a few specific workloads and synthetic tests.
The E-cores do nothing for gamers, for example.

TDP is thermal design power, not "total wattage", so they do both have some leniency here.

Seeing a 65 W TDP become a 95 W peak or similar was fine as long as those peak values weren't constant - because short boosts wouldn't overwhelm a cooler designed for a 65 W TDP.
Intel's 10700 broke that by letting 65 W become 215 W, and TDP has been meaningless ever since.
Why do you expect a CPU that's asked to use 240 W to be efficient??? This makes no sense. Of course Alder Lake will not be efficient at 240 W. The real question is, if you care about efficiency, why are you running it at 240 W??? It makes absolutely no sense. Limit it to 125 W and suddenly it's more efficient than any CPU out there in 99.9% of tasks. It will lose (by a tiny amount, btw, like 5%) to a 5950X, and only in heavy rendering.
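(A minimal sketch of what such a cap looks like in practice, assuming the Linux intel_rapl powercap interface; the sysfs path and constraint index are common defaults rather than guarantees, writing requires root, and most people would simply set PL1 in the BIOS instead.)

```python
# Hypothetical example: capping the long-term power limit (PL1) to 125 W
# through the Linux powercap/intel_rapl sysfs interface. Requires root,
# and the "intel-rapl:0" domain index can differ between systems.
RAPL = "/sys/class/powercap/intel-rapl:0"

# Sanity check: constraint_0 is usually the long-term (PL1) constraint;
# verify before writing anything.
with open(f"{RAPL}/constraint_0_name") as f:
    assert f.read().strip() == "long_term"

# The limit is expressed in microwatts.
with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
    f.write(str(125 * 1_000_000))
```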
ratirtWhy does the 12600K have a cap on frequency and power? It boosts lower and uses less power. Obviously it will be faster than a 5600X, but that is not the point @Mussels tried to make.
If you take the cap off and let it boost, it will deliver more FPS for sure, but the power it uses will be disproportionately higher.
The 12600K will be faster than a 5600X but not more efficient, that is for sure. Especially in the near future when, hypothetically, both of these CPUs have to be utilized 100%, given there is no cap on power or frequency and they both work at stock.
Alder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...
Posted on Reply
#62
AM4isGOD
fevgatosWhy do you expect a CPU that's asked to use 240 W to be efficient??? This makes no sense. Of course Alder Lake will not be efficient at 240 W. The real question is, if you care about efficiency, why are you running it at 240 W??? It makes absolutely no sense. Limit it to 125 W and suddenly it's more efficient than any CPU out there in 99.9% of tasks. It will lose (by a tiny amount, btw, like 5%) to a 5950X, and only in heavy rendering.

Alder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...
You are preaching to the deaf
Posted on Reply
#63
JustBenching
AM4isGODYou are preaching to the deaf
It's funny, though. It seems the only people running the 12900K at 240 W are the ones who care about the performance, and the ones who want to complain that it isn't efficient.
Posted on Reply
#64
ratirt
fevgatosAlder Lake is more efficient than Zen 3 in gaming, sometimes by up to 70%. Why are we still having this discussion? What the actual...
Is there a CPU that you consider not efficient? Considering you evaluate efficiency on a tweaked CPU, with no limits on tweaking and no criteria other than power consumption, there is no inefficient CPU. It simply doesn't exist, since you can always make a CPU efficient by lowering its power consumption. You clearly disregard every other criterion and focus mainly on power consumption.
Does that 70% you mentioned come with the ADL CPU being limited to 35 W, or what is it? Gaming is not the whole story, you know. Productivity, rendering, etc., basically the whole suite, should be taken into account to evaluate a CPU's efficiency and performance, not just one scenario like gaming, and you chose games with barely any CPU utilization. This is also important, so why is it being omitted here? Being more efficient in games depends on the game; there are more demanding games that make ADL use more power, unless you get a CPU with fewer cores.
I can bet you that when NV's Ada Lovelace and AMD's RDNA3 show up, and you have to check those cards' performance in a 1080p gaming scenario in certain games, ADL will use way more power than with a 3090. Why? Because CPU utilization will go up with much faster cards to feed, and using those cards at 100% will boost FPS and result in a higher power draw for the CPU.
Posted on Reply
#65
JustBenching
ratirtObviously. I did a similar thing with my 6900 XT: I undervolted it a bit and it runs very well. I'm not saying the 12000 series are bad CPUs at all. They have a lot of performance in them, but that comes at a huge cost in efficiency. Saying that at 45 W the 12900K is efficient is like saying my 1970 Chevy Nova (cool car, btw) with its 5 L engine is very efficient at 20 mph, or that my Hummer is very efficient when I don't drive it. Are we really stooping to that kind of reasoning to make our point?
And this is where you are just flat-out wrong. Yes, the 12900K with out-of-the-box settings is inefficient in heavy multithreading. That part is true. That's different from "the 12900K is inefficient", unless, again, you are specifically talking about out-of-the-box settings. But testing out of the box doesn't tell you anything about the actual architectural efficiency. For that you have to test two CPUs at the same wattage. There is no other way to do it.

For example, the 7950X will be less efficient at stock than the 5950X. Does that mean Zen 4 is less efficient than Zen 3? NO. In order to figure that out, you need to put the 7950X at the same wattage as the 5950X, and then you'll realize Zen 4 >> Zen 3.

Well, apply the same logic to the 12900K. When you run it at the same wattage as the 5950X, it loses by 5% in heavy multithreading and wins in everything else.

And that's where the whole problem lies. When you say the 12900K is inefficient, what you really mean is "with the out-of-the-box settings, in heavy MT". With that statement I'm perfectly in agreement. But for me that type of comparison is absolutely useless; I don't care about the out-of-the-box settings.
ratirtIs there a CPU that you consider not efficient? Considering you evaluate efficiency on a tweaked CPU, with no limits on tweaking and no criteria other than power consumption, there is no inefficient CPU.
Of course. The 11900K, for example. You can limit it to 125 W, test it against other CPUs at 125 W, and it will be the slowest, so the least efficient. On the other hand, the 12900K will be the fastest or the second fastest, right next to the 5950X.
ratirtDoes that 70% you mentioned come with the ADL CPU being limited to 35 W, or what is it? Gaming is not the whole story, you know.
Nope, at stock. der8auer and Igor'sLAB tested the 12900K vs the 5950X; the former was up to 70% more efficient.
Posted on Reply
#66
ratirt
fevgatosFor example, the 7950X will be less efficient at stock than the 5950X. Does that mean Zen 4 is less efficient than Zen 3? NO. In order to figure that out, you need to put the 7950X at the same wattage as the 5950X, and then you'll realize Zen 4 >> Zen 3.
Really? And where do you get that from? Since I recall AMD's charts saying the 7950X is 37% more efficient than a 5950X. No, you don't put them at the same wattage, since these are different products, and obviously it is the natural course of things that the newer CPU will be a bit better at crunching data at a lower-than-advertised wattage.
AMD claims 67% more efficient or so when the 7950X is at 35 W or so in comparison to the 5950X, but what's the point of the 7950X being efficient at 35 W if the 5950X at its advertised 125 W will be faster than a 7950X capped at 35 W? WTF is the point of that comparison in the desktop market? There is no benefit for desktop, but in a laptop environment it is a huge improvement, and you need to recognize these differences.
fevgatosAnd this is where you are just flat-out wrong. Yes, the 12900K with out-of-the-box settings is inefficient in heavy multithreading. That part is true. That's different from "the 12900K is inefficient", unless, again, you are specifically talking about out-of-the-box settings. But testing out of the box doesn't tell you anything about the actual architectural efficiency. For that you have to test two CPUs at the same wattage. There is no other way to do it.
You are wrong, my friend. In general terms, the 12900K is not efficient. You can't evaluate efficiency by choosing a scenario that suits you to prove a point; instead you evaluate everything and draw a conclusion from that. Games don't utilize a CPU in a manner that lets you evaluate its efficiency by any metric. Any CPU utilized at 20% will be efficient. Literally any CPU will use low power.
fevgatosOf course. The 11900K, for example. You can limit it to 125 W, test it against other CPUs at 125 W, and it will be the slowest, so the least efficient. On the other hand, the 12900K will be the fastest or the second fastest, right next to the 5950X.
Yes, which means you artificially limit it and then evaluate its efficiency? lol.
fevgatosNope, at stock. der8auer and Igor'sLAB tested the 12900K vs the 5950X; the former was up to 70% more efficient.
Yes, in a game that uses what, 10% of the CPU? WTF is wrong with you? You can say it is efficient in games, or in low-utilization or light workloads, but it is not efficient in MT workloads. Take into account all aspects of the CPU and all the metrics to evaluate it: in games it is efficient (depending on the game); in heavy workloads it is not efficient by any standard. In general, putting all of that together, it is not as efficient as other CPUs on the market.
Posted on Reply
#67
TheoneandonlyMrK
nguyenDo you play Cinebench all day? Because these CPUs don't need 200 W for their performance numbers when running games
Some of us do more with our PCs than leave them off most of the day and then do two hours of light gaming.

I get your point, but PCs were not made just for gaming; that's a side hustle, and people like you overstate it.
Posted on Reply
#68
JustBenching
ratirtReally? And where do you get that from? Since I recall AMD's charts saying the 7950X is 37% more efficient than a 5950X. No, you don't put them at the same wattage, since these are different products, and obviously it is the natural course of things that the newer CPU will be a bit better at crunching data at a lower-than-advertised wattage.
That's exactly what I'm saying. AMD claims 37% more efficient AT THE SAME WATTAGE!!! They are not testing the CPUs at stock; they are testing at the same power levels! Because that is the only way you can measure architectural efficiency; otherwise you are just testing out-of-the-box settings!
ratirtYou are wrong, my friend. In general terms, the 12900K is not efficient. You can't evaluate efficiency by choosing a scenario that suits you to prove a point; instead you evaluate everything and draw a conclusion from that. Games don't utilize a CPU in a manner that lets you evaluate its efficiency by any metric. Any CPU utilized at 20% will be efficient. Literally any CPU will use low power.
No, you are wrong. AMD themselves tested architectural efficiency at the same wattage. Are you saying they are wrong for doing so?
ratirtYes, which means you artificially limit it and then evaluate its efficiency? lol.
What do you mean, artificially limit it? All CPUs are artificially limited. The only way to test architectural efficiency is at the same wattage, the same way you only test IPC at the same clock speeds. How hard is that to understand...
ratirtYes, in a game that uses what, 10% of the CPU? WTF is wrong with you? You can say it is efficient in games, or in low-utilization or light workloads, but it is not efficient in MT workloads. Take into account all aspects of the CPU and all the metrics to evaluate it: in games it is efficient (depending on the game); in heavy workloads it is not efficient by any standard. In general, putting all of that together, it is not as efficient as other CPUs on the market.
What difference does it make? It uses 10% on both CPUs, and the 12900K is more efficient. Tough luck.
ratirtAny CPU utilized at 20% will be efficient. Literally any CPU will use low power.
It is obvious from this sentence that you don't understand what efficiency is. It doesn't matter how much power a CPU uses; efficiency is how much work the CPU does with that amount of power. So yes, every CPU can drop down to 20 watts, but that doesn't mean they are all efficient: a CPU that does 50 points at 20 W is way less efficient than one that does 100 points at 20 W.
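(That definition, as a minimal sketch; the scores and wattages are the illustrative numbers from the post above, not measurements.)

```python
# Efficiency is work done per unit of power, not raw power draw.
def points_per_watt(score: float, watts: float) -> float:
    return score / watts

cpu_a = points_per_watt(50, 20)    # 2.5 points per watt
cpu_b = points_per_watt(100, 20)   # 5.0 points per watt

# Identical 20 W draw, but CPU B does twice the work for it.
print(f"A: {cpu_a} pts/W, B: {cpu_b} pts/W")
```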
Posted on Reply
#69
ratirt
fevgatosThat's exactly what I'm saying. AMD claims 37% more efficient AT THE SAME WATTAGE!!! They are not testing the CPUs at stock; they are testing at the same power levels! Because that is the only way you can measure architectural efficiency; otherwise you are just testing out-of-the-box settings!
Sure, that is true, but what do you expect? I don't get it. You want it to be less efficient at the same wattage? It is a different node; what did you expect was going to happen? They show the efficiency, but there is no way it would have been the other way around.
fevgatosNo, you are wrong. AMD themselves tested architectural efficiency at the same wattage. Are you saying they are wrong for doing so?
So what have they tested? It is a different node, and that is what they are showing: node efficiency, not the CPU itself. The CPU is just a medium to show TSMC's node efficiency. You want a CPU efficiency test? Make the 7950X on the same node the 5950X is on, then test and see how much more efficient the CPU is. You want an ADL architecture efficiency test? Test it against the 11900K, with ADL built on 14 nm just like the 11900K. Then you will have the CPU efficiency, and you will know how much better the CPU architecture is. Obviously you have to take both metrics, power and performance, and validate them across all workloads, not just one. Then, and only then, will you be 100% able to tell how efficient one is versus the other.
fevgatosWhat do you mean, artificially limit it? All CPUs are artificially limited. The only way to test architectural efficiency is at the same wattage, the same way you only test IPC at the same clock speeds. How hard is that to understand...
No, you don't get it. You can't test efficiency at the same wattage, since these are different nodes. You aren't testing CPU efficiency, but node efficiency, using the CPU to demonstrate it. You need to be as close to the same environment as possible; you don't lock the CPUs to whatever wattage you think is best. The lower the wattage, the bigger the advantage for the smaller node, and that is obvious. Remember when HWUB tested the impact of core count on gaming? They did not use several different Intel CPUs, right? Why not? Because those are still different chips, so they used an 11900K (if I remember correctly) and locked cores to test 4c, 6c and 8c scenarios. They did not use a 12600K because of the cache difference.
A node change will always come with better efficiency, no matter how you slice it. Check the 10900K vs the 11900K for efficiency when they are locked to the same wattage, because that makes sense, and please don't pick 35 W as the wattage; that is ridiculous for a desktop processor. Keep in mind the 10900K has two more cores.
fevgatosWhat difference does it make? It uses 10% on both CPUs, and the 12900K is more efficient. Tough luck.
OK, and when it uses 60% on both, it is not as efficient as at 10%, so why does 10% have to be the better metric or the better evaluation point for the efficiency of a CPU? Why not 60%, or why not 100%, where you know it will suck? Why 10% and not 100%, when you know that at some point you will have to use 100% no matter what? How is 10% utilization reflective and valid as an efficiency metric across the board? Maybe you will lock the CPU at 10%-20% utilization forever?
fevgatosIt is obvious from this sentence that you don't understand what efficiency is. It doesn't matter how much power a CPU uses; efficiency is how much work the CPU does with that amount of power. So yes, every CPU can drop down to 20 watts, but that doesn't mean they are all efficient: a CPU that does 50 points at 20 W is way less efficient than one that does 100 points at 20 W.
Do I really? You play with math, my friend, that is all.
Posted on Reply
#70
JustBenching
ratirtSure, that is true, but what do you expect? I don't get it. You want it to be less efficient at the same wattage? It is a different node; what did you expect was going to happen?

You can't test efficiency at the same wattage, since these are different nodes. You aren't testing CPU efficiency, but node efficiency, using the CPU to demonstrate it.
I'll repeat myself: the ONLY way to test architectural efficiency is at the same wattage. Period. There is no arguing with that. If you don't understand why, then I really don't know how to help you. Testing any other way leads to absurdities.

The 12900T is at 65 W, and it's more efficient than the 12900K. Therefore, using your method, the Alder Lake architecture is more efficient than the Alder Lake architecture. That's where your method of comparing architectures leads. Can you explain to me how that makes any sense?

Also, according to your method, AMD is flat-out lying. The 7950X scores 38k at 230 W, while the 5950X scores 26k at 125 W. Therefore Zen 3 is more efficient than Zen 4 according to your method. Yet AMD says the opposite...
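(Running those stock-vs-stock figures through the same points-per-watt definition shows the absurdity being pointed at.)

```python
# fevgatos's stock figures: judged per watt at stock, the older chip
# "wins", which is exactly the conclusion he argues is absurd.
zen4 = 38_000 / 230   # 7950X at stock: ~165 points per watt
zen3 = 26_000 / 125   # 5950X at stock:  208 points per watt
print(f"7950X: {zen4:.0f} pts/W, 5950X: {zen3:.0f} pts/W")
```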
Posted on Reply
#71
ratirt
fevgatosI'll repeat myself: the ONLY way to test architectural efficiency is at the same wattage. Period. There is no arguing with that. If you don't understand why, then I really don't know how to help you. Testing any other way leads to absurdities.
I ask again.
1st:
Why is 10% utilization, and the power draw of an ADL CPU (or of any other CPU for that matter) at that level, reflective of how efficient the architecture is, rather than 100% CPU utilization? This is what makes you quote over and over how efficient ADL is, in games you choose to showcase its efficiency, at low utilization levels and obviously low power draws.
2nd:
How can you evaluate one CPU architecture against another and make a comparison between the two, knowing the nodes for both are totally different, and with both being evaluated at a random low power limit chosen by the evaluator, even though both CPUs are desktop-segment processors?

Side question:
Would you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, the highest possible, or the stock wattage set by the manufacturer, across a variety of benchmarks?
Posted on Reply
#72
JustBenching
ratirtI ask again.
1st:
Why is 10% utilization, and the power draw of an ADL CPU (or of any other CPU for that matter) at that level, reflective of how efficient the architecture is, rather than 100% CPU utilization? This is what makes you quote over and over how efficient ADL is, in games you choose to showcase its efficiency, at low utilization levels and obviously low power draws.
If you are running games, then obviously your concern is efficiency in games. There is no point for a gamer to buy a CPU that is efficient in Cinebench when he is going to use it for gaming.

That, of course, applies to every workload. If a CPU is more efficient in, let's say, AutoCAD or Premiere, there is no point arguing over whether those programs push the CPU to 100% or to 1%. It's completely irrelevant.

I know my Renault Megane is very inefficient at 200 km/h, but I bought it because it's efficient at the 150 km/h I actually drive at.
Posted on Reply
#73
ARF
But gaming plus streaming, or gaming while Windows is free to update and other apps run in the background, can be quite resource-heavy.
Never leave your PC with a core count that is only sufficient to run one game.
Posted on Reply
#74
JustBenching
ratirt2nd:
How can you evaluate one CPU architecture against another and make a comparison between the two, knowing the nodes for both are totally different, and with both being evaluated at a random low power limit chosen by the evaluator, even though both CPUs are desktop-segment processors?
The node is a fundamental part of a CPU architecture; when the architecture was designed, it was designed with a specific node in mind. But even that is irrelevant. You can't change the node, but you can change the power limit. And nobody argued that you should test at a low power limit. I'm arguing that you should test at the SAME power limit. It can be 50 watts or 500 watts.
ratirtWould you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, the highest possible, or the stock wattage set by the manufacturer, across a variety of benchmarks?
I would evaluate it by its efficiency at the wattage I'm going to run it at. If I'm trying to decide between two CPUs, I'll test them both at the wattage I'm going to be running them at.

All of your points are still irrelevant, though. You CANNOT test at different wattages; that way you'll end up concluding that Alder Lake is more efficient than itself.
Posted on Reply
#75
ARF
fevgatosThe node is a fundamental part of a CPU architecture; when the architecture was designed, it was designed with a specific node in mind. But even that is irrelevant. You can't change the node, but you can change the power limit. And nobody argued that you should test at a low power limit. I'm arguing that you should test at the SAME power limit. It can be 50 watts or 500 watts.
Yeah, certain architectural details, such as cache sizes, depend on the transistor density: the higher it is, the more cache you get.
But backporting also happens, when a given chip design is built on a previous node.
ratirt2nd:
How can you evaluate one CPU architecture against another and make a comparison between the two, knowing the nodes for both are totally different, and with both being evaluated at a random low power limit chosen by the evaluator, even though both CPUs are desktop-segment processors?

Side question:
Would you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, the highest possible, or the stock wattage set by the manufacturer, across a variety of benchmarks?
Every design has a sweet spot where its performance per watt is highest.
Also, you can compare nodes: transistors per mm².
Posted on Reply