Sunday, September 26th 2021

SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

It's not every day that a company specializing in benchmarking software decides to compile performance data on an unreleased product from its online database, but that's exactly what SiSoftware just did for the Intel Core i9-12900K. So far, only a limited set of tests has been run on the CPU, and what we're looking at here is a set of task-specific benchmarks. SiSoftware doesn't provide any system details, so take these numbers for what they are.

The benchmarks fall into three categories: Vector SIMD Native, Cryptographic Native and Financial Analysis Native. Not all tests have been run on the Core i9-12900K, and SiSoftware themselves admit that they don't have enough data points to draw any final conclusions. Unlike other supposedly leaked benchmark figures, the Core i9-12900K doesn't look like a clear winner here: it barely beats the AMD Ryzen 9 5900X in some tests, while it's beaten by it, and even by the Core i9-11900K, in others. It should be noted that the Core i9-11900K uses AVX-512 where supported, which gives it a performance advantage over the other CPUs in some tests. We'll let you make up your own mind here, but one thing is certain: we're going to have to wait for proper reviews before the race is over and a winner is crowned.

Update: As the original article was taken down and there were some useful references in it, you can find a screen grab of it here.
Sources: SiSoftware, via @TUM_APISAK

69 Comments on SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

#51
efikkan
All the current graphics APIs (OpenGL, Vulkan, Direct3D) work by submitting a queue of operations for the GPU to process, and the rendering thread will wait until the GPU is ready to accept more. If the CPU is not fast enough to keep up with the GPU, we call it CPU bottlenecked, and the GPU will spend a lot of cycles idling while waiting for more work to do. When the CPU is fully able to keep the GPU saturated, the workload is GPU bottlenecked, and this is what we want, since you then get the full scaling potential of your graphics hardware. That's about as well as I can explain it without diving into technical details and code examples, but I hope most of you get the point.

Currently an i5-11600K or a Ryzen 5 5600X is able to keep an RTX 3090 saturated in most games, and that's actually a good spot to be in, since gamers can put as much money into their graphics card as they can. So unless new and more demanding games arrive soon, a lot of people will be disappointed when Alder Lake arrives, despite it being a very performant CPU architecture. It will probably show great gains in most workloads except gaming, and that's not a bad thing; ideally, games shouldn't be CPU bottlenecked at all. But with pretty much everything else, including web browsing, office work and general responsiveness, Alder Lake is likely to provide noticeable improvements. Considering how bloated everything is getting these days, this should be exciting. I'm surprised to see how laggy even a simple spreadsheet in MS Office has gotten, not to mention the CPU load of basic web pages.
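The submit-and-wait model above can be caricatured in a few lines of Python. This is a toy illustration, not real graphics API code; the function names and millisecond figures are made up:

```python
# Toy model of a pipelined render loop: each frame, the CPU spends
# cpu_ms recording/submitting commands and the GPU spends gpu_ms
# executing them. Because the stages overlap frame-to-frame, the
# slower side dictates the frame time.

def frame_rate(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate FPS for a pipelined CPU/GPU frame loop."""
    return 1000.0 / max(cpu_ms, gpu_ms)

def bottleneck(cpu_ms: float, gpu_ms: float) -> str:
    """Name the limiting side of the pipeline."""
    return "CPU" if cpu_ms > gpu_ms else "GPU"

# A fast CPU (4 ms) feeding a slower GPU (8 ms): GPU bottlenecked,
# which is the healthy case -- the graphics card sets the frame rate.
print(frame_rate(4.0, 8.0), bottleneck(4.0, 8.0))  # 125.0 GPU
```

Note that making the CPU faster in the GPU-bound case changes nothing; that's why a faster CPU architecture may show no gaming gains at these settings.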
Posted on Reply
#52
95Viper
Behave!
And stop the insults and arguing.
Move on.

Quotes from the Guidelines:
Posting in a thread
Be polite and Constructive, if you have nothing nice to say then don't say anything at all.
This includes trolling, continuous use of bad language (ie. cussing), flaming, baiting, retaliatory comments, system feature abuse, and insulting others.
Do not get involved in any off-topic banter or arguments. Please report them and avoid instead.
Thank You, and, Have a Good Morning.
Posted on Reply
#53
docnorth
Hopefully when a new CPU-Z leak arrives (tomorrow?), the comments will be more objective.
OberonSuddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

I can only judge this benchmark by the results of the three already-known CPUs. In the first two slides of a multi-core oriented test, the 11900K easily beats the 10-core 10900K and the 8+8-core 12900K, and in the second slide in particular it performs better even compared to the 12-core 5900X. Most of us would expect different results for the 10900K, 5900X and 11900K.
Posted on Reply
#55
Tom Sunday
95ViperBehave!
And stop the insults and arguing.
Move on. Thank You, and, Have a Good Morning.
Yes, you are right all the way. Arguing or getting smart is most certainly not productive; informed productivity is what tech channels should be all about. As to the subject under discussion: it's not over until it's over, and probably not until November, when all the much-contested (AMD & Intel) data is in and has been regurgitated a few times over here on the tech channels. For me, right now the most important thing to know is what Intel stock will look like if they come out as the clear winner in this particular race over essentially hairline performance differences. Sheer product availability will have its song played as well. Interesting debate times ahead, and with Win 11 in tow, at least the tech channels will have something to really talk about besides AIO, memory, SSD and headphone upgrades, etc.
Posted on Reply
#56
docnorth
TheLostSwedeYou mean this one?
videocardz.com/newz/intel-core-i9-12900k-allegedly-scores-27-higher-than-ryzen-9-5950x-in-cpu-z-single-thread-benchmark

CPU-Z is not as trustworthy as SiSoft imho and this screenshot even less so.
Yes, and I'm afraid we'll see comments like in this thread again. Personally, I consider CPU-Z a simple and reliable benchmark (except maybe for RL, which could not fully convert its increased IPC into real-world gains for several reasons).
Posted on Reply
#57
kane nas
docnorthYes, and I'm afraid we'll see comments like in this thread again. Personally, I consider CPU-Z a simple and reliable benchmark (except maybe for RL, which could not fully convert its increased IPC into real-world gains for several reasons).
CPU-Z a reliable benchmark... Nope.
I have seen a 5950X with PBO and Curve Optimizer at 5.2 GHz reach 725, but honestly, on which planet can a 5600X reach 790? We are not talking about an overclock with some kind of exotic cooling; this number is with a 280 mm water cooler.
Posted on Reply
#58
lexluthermiester
The CPUZ benchmark is fine. It is but one among many useful metrics that can be used to gauge performance. Let's stop with the crapping on it, which is not the purpose of this thread.
Posted on Reply
#59
Patriot
lexluthermiesterThe CPUZ benchmark is fine. It is but one among many useful metrics that can be used to gauge performance. Let's stop with the crapping on it, which is not the purpose of this thread.
Yeah back to the fanboi war. I need something to watch while I eat my popcorn.

I hope Intel returns to competition... and reduces power usage to sane levels... and doesn't use Win 11 scheduler and compiler BS to do it.
I want price competition and performance competition. Then the consumer wins.
Posted on Reply
#60
docnorth
kane nasCPU-Z a reliable benchmark... Nope.
I have seen a 5950X with PBO and Curve Optimizer at 5.2 GHz reach 725, but honestly, on which planet can a 5600X reach 790? We are not talking about an overclock with some kind of exotic cooling; this number is with a 280 mm water cooler.
This must be a 280x280 water cooler...
Seriously, if the result is real, you should contact TPU so they can find and fix the bug.
Posted on Reply
#61
lexluthermiester
docnorthThis must be a 280x280 water cooler...
Seriously if the result is real, you should contact TPU so that can find and fix the bug.
Why would they contact TPU? Do you think TPU makes CPUZ? They do not; W1zzard makes GPUZ, but that's not the same utility.
Posted on Reply
#62
Darmok N Jalad
PatriotI hope intel returns to competition...and reduces power usage to sane levels.... And doesn't use win 11 scheduler and compiler bs to do it.
I don’t think most of those will be true. I think they will return to performance competition, but I’m afraid power consumption is here to stay, and any ADL SKU that uses both core types will absolutely depend on the scheduler in Win11. I hope reviews comprehensively test the CPU on both Win10 and Win11 so that we can see just how important the new scheduler will be on multi-threaded workloads. In many multi-threaded workloads, the threads jump around from core to core. Imagine playing a game where the efficiency cores start getting tasks inadvertently. I don’t see how Windows 10 will avoid this unless the scheduler is backported.

It reminds me of Bulldozer, where an 8-core CPU was an 8-ALU/4-FPU design. The problem was that two ALUs shared an FPU and Windows didn't know what to do, so the CPU often underperformed due to resource mismanagement. Lakefield was a precursor to Alder Lake, and its 1+4 big.LITTLE configuration performed quite terribly. You can bet Intel and MS worked closely together to get Windows 11 working right, and it's no coincidence that Windows 11 went from "just recently announced" to launching ahead of Alder Lake.
Posted on Reply
#63
docnorth
lexluthermiesterWhy would they contact TPU? Do you think TPU makes CPUZ? They do not, W1zzard makes GPUZ, but that's not the same utility..
Correct, but the meaning remains the same.
Posted on Reply
#64
KarymidoN
docnorthYes, and I'm afraid we'll see again comments like in this thread. Personally I consider CPU-Z one simple and reliable benchmark (maybe except for RL, which could not totally convert increased IPC to real world gains for several reasons).
I remember when first-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks.
Posted on Reply
#65
docnorth
KarymidoNI remember when first-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks.
Agreed.
Posted on Reply
#66
lexluthermiester
docnorthCorrect, but the meaning remains the same.
Wait, what? Are you saying CPUZ and GPUZ are the same?
Posted on Reply
#67
docnorth
lexluthermiesterWait, what? Are you saying CPUZ and GPUZ are the same?
No, what I meant is if the result is real, the developer should be informed, so the bug can be found and fixed.
Posted on Reply
#68
efikkan
KarymidoNI remember when first-gen Zen Ryzen was a beast on this bench; then they updated it and the scores dropped a LOT... let's wait and see, folks.
docnorthNo, what I meant is if the result is real, the developer should be informed, so the bug can be found and fixed.
To you both;
Several synthetic benchmarks changed after Zen launched, and they changed because the developers decided to change the weighting of the benchmark scores. I would assume they run the same code across different CPUs (otherwise a direct comparison would be pointless), which means it can't be a software bug. I'm fairly sure they changed the weighting because these benchmarks made Zen/Zen 2 CPUs look far better than real-world benchmarks reflected.

This exposes one of the fundamental problems of synthetic benchmarking; there is really no fair way to do it, especially if you want to generate some kind of total score for the CPU. There will always be disagreements on how to weight different workloads to create a total score. In reality, synthetic benchmarks are only interesting for theoretical discussions, and no one should base their purchasing decisions on them. What you should look for is real world benchmarks matching your needs, and if that can't be found, a weighted average of real world benchmarks.
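As a sketch of why the weighting matters (illustrative Python; the function name and sub-scores are made up): the same raw results produce a different total score, and potentially a different "winner", depending solely on the weights chosen.

```python
import math

def total_score(scores, weights):
    """Weighted geometric mean of per-workload benchmark scores.
    This weighting step is exactly what changes when a vendor
    'rebalances' a synthetic benchmark suite."""
    total_w = sum(weights)
    return math.exp(
        sum(w * math.log(s) for s, w in zip(scores, weights)) / total_w
    )

# One CPU's (made-up) sub-scores: SIMD, crypto, financial analysis.
scores = [200.0, 100.0, 120.0]
equal      = total_score(scores, [1.0, 1.0, 1.0])  # ~133.9
reweighted = total_score(scores, [0.5, 2.0, 1.0])  # ~116.3 -- same CPU, lower total
```

Neither weighting is "wrong"; each just encodes a different opinion about which workloads matter, which is the fairness problem in a nutshell.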

-----

Doing good benchmarking is actually fairly hard. Developers who try to optimize code face this challenge regularly, usually to see which code changes perform better, not which hardware is better. This usually means isolating small changes and running them through millions of iterations to exaggerate a difference enough to make it measurable, then combining a bunch of these small changes into something that may make a whole algorithm, or a whole application, 20-50% faster.
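That isolate-and-iterate workflow can be sketched like this (illustrative Python; the "optimization" shown, hoisting a method lookup out of a hot loop, is just a stand-in example):

```python
import timeit

def baseline(data):
    out = []
    for x in data:
        out.append(x * x)  # attribute lookup on every iteration
    return out

def optimized(data):
    out = []
    append = out.append    # the isolated micro-change: hoist the lookup
    for x in data:
        append(x * x)
    return out

data = list(range(1000))
assert baseline(data) == optimized(data)  # same result, only speed may differ

# A single run is unmeasurable noise; thousands of iterations
# exaggerate the difference enough to compare reliably.
t_base = timeit.timeit(lambda: baseline(data), number=5000)
t_opt = timeit.timeit(lambda: optimized(data), number=5000)
print(f"baseline {t_base:.3f}s  optimized {t_opt:.3f}s")
```

The assertion is the other half of the discipline: every micro-change must be proven behavior-preserving before its timing means anything.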

I find it very fascinating to see how much writing optimized code matters on newer architectures. I have some 3D math code that I've been using and improving for many years, and I've seen how my optimized implementations keep getting proportionally faster than their baseline counterparts on newer architectures, e.g. Sandy Bridge -> Haswell -> Skylake. It's obvious to me that the benefit of writing good code is growing, not shrinking, with faster and more superscalar architectures. So even though faster CPU front-ends help extract more from existing code, increasingly superscalar architectures can extract even more parallelism from better code. The other day I was comparing an optimization across a lot of code that got ~5-10% extra performance on Skylake. Then I tested it on Zen 3 and got similar improvements over unoptimized code, except in one edge case where I got something like 100% extra performance, and all from a tiny change. Examples like this make me more excited than ever to see what Golden Cove (Alder Lake) and Zen 4 bring to the table. We are nowhere near the limit of what performance we can extract per thread, and the upcoming architectures of the next 10 years or so should bring exciting improvements.
Posted on Reply
#69
First Strike
OberonSuddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

SiSoft Sandra does not represent overall PC performance well. It's more inclined toward HPC, serving as a basic common metric. If you have been following SiSoft Sandra, I think you know this. For years almost no one has used SiSoft Sandra in an MSDT review.

The problem is that they actually compiled their data and presented it as a product comparison. It's like someone searching a disinfectant chemistry database and then handing out injection advice. Again, it's okay to have data that isn't useful; it's not okay to pretend that it is. I'm glad they took down that article.

I read the original article you posted. It's pretty interesting that they acknowledge the presence of the hardware AES accelerator, yet still try to analyze something out of the cryptographic data. The writer knew very well this analysis was going nowhere.
Posted on Reply