Sunday, September 26th 2021

SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K
It's not every day that a company specializing in benchmarking software decides to compile performance data on unreleased products found in its online database, but that's what SiSoftware just did for the Intel Core i9-12900K. So far, only a limited set of tests has been run on the CPU, and what we're looking at here is a set of task-specific benchmarks. SiSoftware doesn't provide any system details, so take these numbers for what they are.
The benchmarks fall into three categories: Vector SIMD Native, Cryptographic Native, and Financial Analysis Native. Not all tests have been run on the Core i9-12900K, and SiSoftware itself admits it doesn't have enough data points to draw any final conclusions. Unlike other supposedly leaked benchmark figures, the Core i9-12900K doesn't look like a clear winner here: it barely beats the AMD Ryzen 9 5900X in some tests, while it's beaten by it, and even by the Core i9-11900K, in others. It should be noted that the Core i9-11900K uses AVX-512 where supported, which gives it a performance advantage over the other CPUs in some tests. We'll let you make up your own mind here, but one thing is certain: we're going to have to wait for proper reviews before the race is over and a winner is crowned.
Update: As the original article was taken down and there were some useful references in it, you can find a screen grab of it here.
Sources:
SiSoftware, via @TUM_APISAK
69 Comments on SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K
Currently an i5-11600K or a Ryzen 5 5600X is able to keep an RTX 3090 saturated in most games, and that's actually a good spot to be in, since gamers can put as much money into their graphics card as they can. So unless new and more demanding games arrive soon, a lot of people will be disappointed when Alder Lake arrives, despite it being a very performant CPU architecture. It will probably show great gains in most workloads except gaming, and that's not a bad thing: ideally, games shouldn't be CPU-bottlenecked at all. But with pretty much everything else, including web browsing, office work and general responsiveness, Alder Lake is likely to provide noticeable improvements. Considering how bloated everything is getting these days, this should be exciting. I'm surprised to see how laggy even a simple spreadsheet in MS Office has gotten, not to mention the CPU load of basic web pages.
And stop the insults and arguing.
Move on.
Quotes from the Guidelines. Thank you, and have a good morning.
videocardz.com/newz/intel-core-i9-12900k-allegedly-scores-27-higher-than-ryzen-9-5950x-in-cpu-z-single-thread-benchmark
CPU-Z is not as trustworthy as SiSoft imho, and this screenshot even less so.
I have seen a 5950X with PBO and Curve Optimizer at 5.2 GHz reach 725, but honestly, on which planet does a 5600X reach 790? We're not talking about an overclock with some kind of exotic cooling; this number was supposedly achieved with a 280 mm water cooler.
I hope Intel returns to competition... and reduces power usage to sane levels... and doesn't use Windows 11 scheduler and compiler BS to do it.
I want price competition and performance competition. Then the consumer wins.
Seriously, if the result is real, you should contact TPU so they can find and fix the bug.
It reminds me of Bulldozer, where an 8-core CPU was an 8-ALU/4-FPU design. The problem was that pairs of integer cores shared one FPU, and Windows didn't know what to do with that, so the CPU often underperformed due to resource mismanagement. Lakefield was a precursor to Alder Lake, and its 1+4 big.LITTLE configuration performed quite terribly. You can bet Intel and MS worked closely together to get Windows 11 working right, and it's no coincidence that Windows 11 went from "just recently announced" to launching ahead of Alder Lake.
Several synthetic benchmarks changed after Zen launched, because the developers decided to change the weighting of the benchmark scores. I would assume they run the same code across different CPUs (otherwise a direct comparison would be pointless), which would mean it can't be a software bug. I'm fairly sure they changed the weighting because these benchmarks made Zen/Zen 2 CPUs look far better than real-world benchmarks reflected.
This exposes one of the fundamental problems of synthetic benchmarking; there is really no fair way to do it, especially if you want to generate some kind of total score for the CPU. There will always be disagreements on how to weight different workloads to create a total score. In reality, synthetic benchmarks are only interesting for theoretical discussions, and no one should base their purchasing decisions on them. What you should look for is real world benchmarks matching your needs, and if that can't be found, a weighted average of real world benchmarks.
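The weighting problem described above is easy to demonstrate. The sketch below uses entirely hypothetical scores and weightings (none of these numbers come from SiSoftware or any real CPU) to show how two equally "reasonable" weightings of the same per-test results can crown different winners:

```python
# Illustrative sketch with hypothetical numbers: the same per-test results
# produce different "winners" depending on how a total score is weighted.

from math import prod

# Hypothetical per-test scores (higher is better) for two made-up CPUs.
scores = {
    "cpu_a": {"vector_simd": 120.0, "crypto": 95.0, "financial": 110.0},
    "cpu_b": {"vector_simd": 100.0, "crypto": 130.0, "financial": 100.0},
}

def weighted_geomean(tests, weights):
    """Weighted geometric mean, a common way to aggregate benchmark scores."""
    total_w = sum(weights.values())
    return prod(s ** (weights[name] / total_w) for name, s in tests.items())

# Two equally defensible weightings...
simd_heavy = {"vector_simd": 3, "crypto": 1, "financial": 1}
crypto_heavy = {"vector_simd": 1, "crypto": 3, "financial": 1}

# ...and they disagree on which CPU is "faster" overall.
for label, w in (("SIMD-heavy", simd_heavy), ("crypto-heavy", crypto_heavy)):
    a = weighted_geomean(scores["cpu_a"], w)
    b = weighted_geomean(scores["cpu_b"], w)
    print(f"{label}: winner is", "cpu_a" if a > b else "cpu_b")
```

With the SIMD-heavy weighting cpu_a wins; with the crypto-heavy weighting cpu_b wins, even though not a single per-test score changed. That is exactly why a total score depends on a judgment call no two reviewers need agree on.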
-----
Doing good benchmarking is actually fairly hard. Developers who try to optimize code face this challenge regularly, usually to see which code changes perform better, not to see which hardware is better. This usually means isolating small changes and running them through millions of iterations to exaggerate a difference enough to make it measurable, then combining a bunch of these small changes into something that may make a whole algorithm, or a whole application, 20-50% faster.
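The workflow described above can be sketched with a minimal microbenchmark: isolate one small change, run both variants through many iterations so the difference rises above measurement noise, and take the best of several runs. The two variants here are hypothetical stand-ins for a real code change:

```python
# Minimal microbenchmarking sketch: compare two variants of the same
# computation over many iterations, taking the best of several runs
# to reduce scheduling and timer noise.

import timeit

def baseline(data):
    # Builds the result with repeated append calls (method lookup per iteration).
    out = []
    for x in data:
        out.append(x * x)
    return out

def optimized(data):
    # Same result via a list comprehension (one candidate micro-optimization).
    return [x * x for x in data]

data = list(range(1_000))

# repeat=5 runs the whole measurement five times; min() keeps the cleanest run.
t_base = min(timeit.repeat(lambda: baseline(data), number=2_000, repeat=5))
t_opt = min(timeit.repeat(lambda: optimized(data), number=2_000, repeat=5))

print(f"baseline:  {t_base:.4f} s")
print(f"optimized: {t_opt:.4f} s  ({t_base / t_opt:.2f}x)")
```

Note that before comparing timings, the first thing worth asserting is that both variants actually produce the same result; a "faster" function that computes something different is not an optimization.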
I find it fascinating how much it matters to write optimized code on newer architectures. I have some 3D math code that I've been using and improving for many years, and I've seen my optimized implementations keep getting proportionally faster than their baseline counterparts on newer architectures, e.g. Sandy Bridge -> Haswell -> Skylake. It's obvious to me that the benefits of writing good code are growing, not shrinking, with faster and more superscalar architectures: even though faster CPU front-ends help extract more from existing code, increasingly superscalar architectures can extract even more parallelism from better code. The other day I was testing an optimization across a lot of code that got ~5-10% extra performance on Skylake. Then I tested it on Zen 3 and got similar improvements vs. unoptimized code, except in one edge case where I got something like 100% extra performance, and all from a tiny change. Examples like this make me more excited than ever to see what Golden Cove (Alder Lake) and Zen 4 bring to the table. We are nowhere near the limit of per-thread performance, and the upcoming architectures over the next 10 years or so should bring exciting improvements.
The problem is that they compiled their data and presented it as a product comparison. It's like someone searching a disinfectant chemistry database and then handing out injection advice. Again, it's fine to have data that isn't useful; it's not fine to pretend it is. I'm glad they took down that article.
I read the original article you posted. It's pretty interesting that they acknowledge the presence of the hardware AES accelerator while still trying to draw conclusions from the cryptographic data. The writer knew very well this analysis was going nowhere.