Friday, August 26th 2022
Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched
An Intel Core i9-13900 "Raptor Lake" (non-K) processor was spotted in the wild by Benchleaks. The non-K parts are expected to have 65 W Processor Base Power and aggressive power-management compared to the unlocked i9-13900K, although the core configuration is identical: 8 P-cores, and 16 E-cores. Besides tighter power limits out of the box and a locked multiplier, the i9-13900 also has lower clocks, with its maximum boost frequency for the P-cores set at 5.60 GHz, compared to the 5.80 GHz of the i9-13900K. It's still a tad higher than the 5.40 GHz of the i7-13700K.
Tested in Geekbench 5.4.5, the i9-13900 scores 2130 points in the single-threaded test, and 20131 points in the multi-threaded one. Wccftech tabulated these scores in comparison to the current-gen flagship i9-12900K. The i9-13900 ends up 10 percent faster than the i9-12900K in the single-threaded test, and 17 percent faster in the multi-threaded one. The single-threaded uplift is thanks to the higher IPC of the "Raptor Cove" P-core and the slightly higher boost clock, while the multi-threaded score is helped not just by the higher IPC, but also by the addition of 8 more E-cores.
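As a rough sanity check, the quoted uplifts can be turned back into implied i9-12900K scores; the baseline figures below are not from the source, they are simply back-computed from the percentages above.

# Back-of-the-envelope check of the quoted Geekbench 5 uplifts (Python).
# The i9-12900K baselines are derived from the article's percentages,
# not taken from an actual benchmark run.
i9_13900_st, i9_13900_mt = 2130, 20131

implied_12900k_st = i9_13900_st / 1.10  # ~1936 points, single-threaded
implied_12900k_mt = i9_13900_mt / 1.17  # ~17206 points, multi-threaded

print(f"Implied i9-12900K ST score: {implied_12900k_st:.0f}")
print(f"Implied i9-12900K MT score: {implied_12900k_mt:.0f}")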
Sources:
Benchleaks (Twitter), Wccftech
77 Comments on Intel Core i9-13900 (non-K) Spotted with 5.60 GHz Max Boost, Geekbenched
This.
Yes, you can definitely tweak 12th gen to be efficient, just like I've done with my 3090.
The problem is that the moment you're not GPU-limited, that CPU wattage will climb sharply.
Look at the Spider-Man Reddit threads.
360 mm AIO, 83% usage and 90°C - at 63 FPS.
Tell me this guy would be having a great time if his system wasn't bottlenecked and his CPU usage climbed as his FPS did.
Another one where the game is being scheduled onto the P-cores and E-cores but not the SMT threads, and his performance is crippled - because his system wasn't providing enough wattage to boost the CPU.
Games will not get easier on CPUs over time.
If your CPU can't provide the performance long-term without throttling, you're going to have a shitty experience like all these people are. "It's not 100% usage, why is it slowing down!"
Marvel's Spider-Man Remastered PC Performance Analysis (dsogaming.com)
These guys got close, because with those cores and threads active they're passing PL1 limits and the CPU is clocking down - disabling SMT saves power, letting the cores clock higher.
12700 with the stock RM1 cooler, using 50 W less than a 5800X while performing the same in games.
What I have noticed is that Ryzen's power increase at full load is marginal, and the power needed to sustain a partial load is fair. With 12th-gen Intel, power at full load is huge while power to sustain a low load is low, but that is kinda obvious. Intel pushed it a lot, and the power required is so damn high.
Another crazy justification is buying a 12900K, lowering its voltage, frequency and wattage, playing games, and then saying that this CPU is very efficient. That is the stupidest thing I have ever heard, and some people say it thinking their conclusions hold as a general rule.
If you don't have enough cores and a game uses the E-cores, then you are screwed, but maybe it's like this: you simply limit your CPU, and then it needs to clock higher and use more power because the load is simply high. If a game used all available cores for whatever reason, that CPU would behave like a Cinebench R23 run: push 60-70% of the CPU or more and the power climbs steeply if you want to keep the FPS high. It is that simple.
I don't think measuring power consumption via gaming is the right choice; it should be measured by load scenario. Yes, these two are efficient, because they are locked in power but also in performance. In gaming performance at 1080p (TPU's tests), they stack up where the two-year-old 5600X and 5800X are, or even the non-X models. Lock a 12th-gen CPU to be efficient and its great performance kind of vanishes with it. These two are for gaming, but the 12900K is not, and saying it is efficient when you play some games is ridiculous - and that goes for all CPUs, not just the 12900K or KS. Then again, you don't need a lot of CPU performance while playing games, which is why the non-K parts are good for gaming. Either way this is changing, and time will show that games will demand more CPU performance, especially with newer, beefier graphics cards.
There is a correlation, though. GPUs use more power since companies squeeze as much as possible out of them, and CPUs have to do the same to keep up, or at least not bottleneck the GPU, given current standards of play like 144 Hz or 240 Hz monitors and the corresponding FPS.
What is your justification now? That Ryzen 5000 can be tweaked, but apparently the 12900K shouldn't be tweaked at all? :roll:
How is it destroyed? I see AL matched the 5000 series in consumption; now compare this consumption to performance as well. AL will be faster, but not by much. I can lock any CPU to whatever power and claim it is efficient - what a stupid and unreflective argument. What Intel did is match, when power-locked, the efficiency of CPUs that are two years old, while using E-cores to advertise a higher core count. It is called marketing and you are a victim of it. Grow up.
First you say people shouldn't tweak their 12900K for better efficiency, then you say you can tweak your Ryzen for better efficiency. What a hypocrite.
You literally take everything out of context and twist and turn it to make your point. I'm not a fanboy, but you definitely are, and people recognize that. Especially when you fail to comprehend the difference between a tweaked CPU and a stock one: you pay for the product's stock performance, not for how efficient it will get when you tweak it. Maybe Intel should instead charge you Intel fanboys according to how much a CPU can be tweaked, and advertise it as such?
I never said you should not tweak; I tweak mine, both GPU and CPU. What I'm saying is that you should not measure the efficiency of a CPU or GPU when you tweak or limit its power and performance in an environment that will not utilize the CPU's or GPU's full potential. That is exactly what you do and keep arguing about.
I talk about the 12900K because the conversation steered in this direction, and I'm still talking about Intel's products nonetheless, unlike your endeavors in any other AMD or NV thread for that matter.
Stop the insults.
Max power draw was 67W. If I were to set it to normal 'turbo' instead of all core, it would drop 20-25W from there, putting it around 45W. I also set this to 4 threads and got 107W.
There's nothing at all out of line with a 65 W 13900 being able to run a single core, or even a couple of cores, at its 5.6 GHz boost while staying under 65 W, given the refined node and other enhancements (E-cores with lower clock speeds and so on). You should not even have to do any tweaks.
Does that 70% you mentioned come with the AL CPU being limited to 35 W, or what is it? Gaming is not the whole story, you know. Productivity, rendering and so on - basically the whole suite should be taken into account to evaluate a CPU's efficiency and performance, not just one scenario like gaming, and you chose games with barely any CPU utilization. This is also important, so why is it being omitted here? Being more efficient in games depends on the game; there are more demanding games that make AL use more power, unless you get a CPU with fewer cores.
I can bet you that when NV's Ada Lovelace and AMD's RDNA3 show up and you have to check these cards' performance in a 1080p gaming scenario in certain games, AL will use way more power than it does with a 3090. Why? Because CPU utilization will go up: with much faster cards to feed, keeping them at 100% will boost FPS and result in higher power draw for the CPU.
For example, the 7950X will be less efficient at stock than the 5950X. Does that mean Zen 4 is less efficient than Zen 3? NO. In order to figure that out you need to put the 7950X at the same wattage as the 5950X, and then you'll realise Zen 4 >> Zen 3.
Well, apply the same logic to the 12900K. When you run it at the same wattage as the 5950X, it loses by 5% in heavy multithreading and wins in everything else.
And that's where the whole problem lies. When you are saying the 12900K is inefficient, what you really mean is "with the out-of-the-box settings in heavy MT". With that statement I'm perfectly in agreement, but for me that type of comparison is absolutely useless; I don't care about the out-of-the-box settings. Of course. Take the 11900K, for example: you can limit it to 125 W, test it against other CPUs at 125 W, and it will be the slowest, so the least efficient. On the other hand, the 12900K will be the fastest or the second fastest, right next to the 5950X. Nope, at stock. Der8auer and Igor's Lab tested the 12900K vs the 5950X, and the former was up to 70% more efficient.
AMD claims 67% better efficiency or so when the 7950X is run at 35 W or so compared to the 5950X, but what's the point of a 7950X being efficient at 35 W if the 5950X is advertised at 125 W and will be faster than a 7950X capped at 35 W? WTF is the point of that comparison in the desktop market? There is no benefit for desktop, but in a laptop environment it is a huge improvement, and you need to recognize these differences.
You are wrong, my friend. In general terms the 12900K is not efficient. You can't evaluate efficiency by choosing a scenario that suits you to prove a point; instead you evaluate everything and draw conclusions from that. Games don't utilize the CPU in a manner that lets you evaluate its efficiency by any metric - any CPU utilized at 20% will be efficient; literally any CPU will use low power. Yes, which means you artificially limit it and then evaluate its efficiency? lol. Yes, in a game that uses what, 10% of the CPU? WTF is wrong with you? You can say it is efficient in games or in low-utilization, light workloads, but it is not efficient in MT workloads. When you take into account all aspects of the CPU and all the metrics to evaluate it: in games it is efficient (depending on the game), in heavy workloads it is not efficient by any standard. In general, putting all of that together, it is not as efficient as other CPUs on the market.
I get your point, but PCs were not made just to game; that's a side hustle that people like you overstate.
A node change will always come with better efficiency, no matter how you slice it. Check the 10900K vs the 11900K for efficiency when they are locked to the same wattage, because that makes sense - and please don't pick 35 W as that wattage, which is ridiculous for a desktop processor. Keep in mind the 10900K has two more cores.
OK, and when it sits at 60% utilization it is not as efficient as at 10%, so why does 10% have to be the better metric or evaluation point for a CPU's efficiency? Why not 60%, or why not 100%, which you know it will suck at, right? Why 10% and not 100%, when you know that at some point you will have to use 100% no matter what? How is 10% utilization a reflective and valid efficiency metric across the board? Maybe you will lock the CPU at 10-20% utilization forever? Do I really? You play with math, my friend, that is all.
The 12900T is at 65 W, and it's more efficient than the 12900K. Therefore, using your method, the Alder Lake architecture is more efficient than the Alder Lake architecture. That's where your method of comparing architectures leads. Can you explain to me how the above makes any sense to you?
Also, according to your method, AMD is flat-out lying. The 7950X scores 38k at 230 W, while the 5950X scores 26k at 125 W. Therefore Zen 3 is more efficient than Zen 4 according to your method. Yet AMD says the opposite....
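For what it's worth, the arithmetic behind that comparison is just points per watt on the stock figures quoted above; a minimal sketch (assuming those rounded scores and the rated power limits) looks like this:

# Points-per-watt from the stock figures quoted in the post (Python).
# This is the "method" being criticized: it compares chips run at
# very different power limits, rather than at matched wattage.
chips = {
    "Ryzen 9 7950X (stock)": (38_000, 230),  # score, package power in watts
    "Ryzen 9 5950X (stock)": (26_000, 125),
}

for name, (score, watts) in chips.items():
    print(f"{name}: {score / watts:.0f} points per watt")

# ~165 pts/W for the 7950X vs ~208 pts/W for the 5950X at stock limits,
# which is why raw stock points-per-watt can make the newer chip look worse.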
1st:
Why is 10% utilization and the corresponding power draw of an AL CPU (or any other CPU, for that matter) reflective of how efficient the architecture is, rather than 100% CPU utilization? That is what lets you quote over and over how efficient AL is in the games you choose to showcase, which show low utilization levels and, obviously, low power draws.
2nd:
How can you evaluate one CPU architecture against another and compare the two, knowing the nodes for both are totally different, while both are being evaluated at an arbitrary low power limit chosen by the evaluator, even though both CPUs are desktop-segment processors?
Side question:
Would you evaluate the efficiency and performance of a server processor, for instance, at the lowest possible wattage the CPU can handle, at the highest possible, or at the stock wattage set by the manufacturer, across a variety of benchmarks?
That of course applies to every workload. If a CPU is more efficient in, let's say, AutoCAD or Premiere, there is no point arguing whether these programs push the CPU to 100% or 1%. It's completely irrelevant.
I know my Renault Megane is very inefficient at 200 km/h, but I bought it because it's efficient at the 150 km/h I actually drive.
Never leave your PC with a core count that is only just enough to run one game.
All of your points are still irrelevant, though. You CANNOT test at different wattages; that way you'll end up concluding that Alder Lake is more efficient than itself.
But backporting also happens, where a given chip design is built on a previous node. Every design has a sweet spot where its performance per watt is highest.
Also, you can compare nodes by density - transistors per mm².