Sunday, September 26th 2021

SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

It's not every day that a company specializing in benchmarking software decides to compile performance data for unreleased products found in its online database, but that is what SiSoftware just did for the Intel Core i9-12900K. So far, only a limited set of tests has been run on the CPU, and what we're looking at here is a set of task-specific benchmarks. SiSoftware doesn't provide any system details, so take these numbers for what they are.

The benchmarks fall into three categories: Vector SIMD Native, Cryptographic Native and Financial Analysis Native. Not all tests have been run on the Core i9-12900K, and SiSoftware themselves admit that they don't have enough data points to draw any final conclusions. Unlike other supposedly leaked benchmark figures, the Core i9-12900K doesn't look like a clear winner here, as it barely beats the AMD Ryzen 9 5900X in some tests, while in others it is beaten by the 5900X and even the Core i9-11900K. It should be noted that the Core i9-11900K does use AVX-512 where supported, which gives it a performance advantage over the other CPUs in some tests. We'll let you make up your own mind here, but one thing is certain: we're going to have to wait for proper reviews before the race is over and a winner is crowned.
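
To put the AVX-512 point in context: benchmark suites typically pick the fastest instruction-set path the CPU reports at runtime. Below is a minimal sketch of that kind of dispatch in C++ using GCC/Clang builtins; the kernel functions are hypothetical, and Sandra's actual dispatch logic is its own.

#include <cstdio>

// Hypothetical compute kernels; a real benchmark would run SIMD code here.
static void run_avx512_path() { std::puts("AVX-512 path"); }
static void run_avx2_path()   { std::puts("AVX2 path"); }

int main() {
    __builtin_cpu_init();  // initialize CPU feature detection (GCC/Clang)
    if (__builtin_cpu_supports("avx512f"))
        run_avx512_path();  // e.g. a Core i9-11900K would take this branch
    else
        run_avx2_path();    // CPUs without AVX-512 fall back here
    return 0;
}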

Update: As the original article was taken down and there were some useful references in it, you can find a screen grab of it here.
Sources: SiSoftware, via @TUM_APISAK

69 Comments on SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

#26
TranceHead
R0H1TWell that's kind of the point ~ it's (mostly) useless unless mobos support it & even then it could be sketchy if not implemented properly!

This one supported both DDR2 and DDR3 (not both at once though)
I believe Alder Lake's DDR4/DDR5 support is more for the flexibility of motherboard manufacturers. Until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so manufacturers can just make DDR4 and DDR5 variants of their boards.
Posted on Reply
#27
watzupken
I feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 performance and 8 efficiency cores, that is not the same as 16 performance cores. Under loads that favor multithreading, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores. The Zen 3 cores are quite potent and easily beat the Skylake-based cores.
Posted on Reply
#28
Patriot
watzupkenI feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 performance and 8 efficiency cores, that is not the same as 16 performance cores. Under loads that favor multithreading, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores. The Zen 3 cores are quite potent and easily beat the Skylake-based cores.
All they really need to do is have better single-threaded performance and beat AMD's 12-core in multithreaded, IF they price it accordingly.
Then the AMD 16c becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
Posted on Reply
#29
Oberon
PatriotAll they really need to do is have better single-threaded performance and beat AMD's 12-core in multithreaded, IF they price it accordingly.
Then the AMD 16c becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it were going to be faster, it would be more expensive.
Posted on Reply
#30
sk8er
TranceHead
This one supported both DDR2 and DDR3 (not both at once though)
I believe Alder Lake's DDR4/DDR5 support is more for the flexibility of motherboard manufacturers. Until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so manufacturers can just make DDR4 and DDR5 variants of their boards.
Mine was the P45-8D Memory Lover (with a Q9550), still working normally the last time I checked, with an almost identical layout for DDR2 & 3.
The box says "The World's First 8-DIMM Mobo," and it came with a bonus Global Star CPU cooler. I like it a lot :D
Posted on Reply
#31
Darmok N Jalad
OberonLeaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it were going to be faster, it would be more expensive.
Perhaps. Didn't the 11-series pretty much see a price cut from MSRP almost right out of the gate?
Posted on Reply
#32
qcmadness
Darmok N JaladPerhaps. Didn't the 11-series pretty much see a price cut from MSRP almost right out of the gate?
That's just because the original RKL MSRPs were not competitive at all.
Posted on Reply
#33
katzi
Darmok N JaladIn all my years, I can't recall many boards supporting 2 kinds of memory. Granted, I might have missed something! The last time I can recall, it was when SDRAM came out over 25 years ago. I had a board that supported both RAM and SDRAM, though you could only use one or the other. Usually the dual support means the CPU will support both, but it's the vendor's choice which one to implement. I think Apollo Lake also had DDR3/DDR4 support, but no boards supported both. It would probably be a really complex design, too.
I had a Foxconn board back during the Core 2 Duo/Quad days that supported DDR2 and DDR3; it had two DIMM slots of each.
Posted on Reply
#34
R0H1T
MikeSnowWhich tool did you use to get to those settings?
Quick CPU. The settings are available through the registry as well, but I'm not sure all the important ones w.r.t. the scheduler are in one place, so that's why it comes in handy.
Posted on Reply
#35
yeeeeman
Quite certain this was run on only a single cluster (that is, 8 cores). Either the efficiency or the performance ones, but I would be inclined to think that these were run on the little cores.
Posted on Reply
#36
lexluthermiester
OberonSuddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

Thank You for that!
ShurikNConsidering Alder Lake was touted as the next coming of Jesus
It wasn't. But the results do show that there is merit to the design on x86.
TranceHeaduntil CAS latencies come down, DDR5 may be a hard sell for some enthusiasts
This. As with every new generation of memory, it takes time for the industry to refine its offerings to surpass the previous generation.
Posted on Reply
#37
Exilarch
The company I work for is very close to Intel. And everyone hates them: the devs, the devops, the guys from the datacenters; even the business side is starting to argue that the very high power consumption isn't worth the subsidies Intel gives us.
Posted on Reply
#38
TheLostSwede
News Editor
yeeeemanQuite certain this was run on only a single cluster (that is, 8 cores). Either the efficiency or the performance ones, but I would be inclined to think that these were run on the little cores.
Please read the provided link to the screen grab of the original source that I added; it explains a bit more.
Posted on Reply
#39
ZoneDymo
OberonLeaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it were going to be faster, it would be more expensive.
Idk, it depends entirely on what Intel wants to do. If they want to win some mindshare back, they might want to be faster yet cheaper than the 5900X. I mean, it is "only" 8 big cores, so I would think it would be a power move as well.
Posted on Reply
#40
Krzych
I just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gain on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter anymore; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games. Generational gains are just not across the board anymore; it will now be very common to see a new CPU getting massive gains in some benchmarks and then no gains in others.
Posted on Reply
#41
efikkan
PatriotAlder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
KrzychI just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gain on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games.
To you both:
Zen 3+'s 15% FPS gains were measured when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games, that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason Rocket Lake showed little gain in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
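
To make the bottleneck argument concrete, here is a toy frame-time model in C++ (all numbers hypothetical): each frame costs roughly max(CPU time, GPU time), so FPS stops scaling with the CPU once the GPU is the limiter, while the CPU-side ceiling itself doesn't move with resolution.

#include <algorithm>
#include <cstdio>

// Toy model: a frame is ready when both the CPU and GPU work for it are done.
static double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / std::max(cpu_ms, gpu_ms);
}

int main() {
    const double cpu_ms = 5.0;  // hypothetical CPU cost per frame: a 200 FPS ceiling
    // Raising the resolution only increases the GPU time per frame:
    std::printf("720p : %.0f FPS\n", fps(cpu_ms, 3.0));   // CPU-bound: 200 FPS
    std::printf("1440p: %.0f FPS\n", fps(cpu_ms, 8.0));   // GPU-bound: 125 FPS
    std::printf("4K   : %.0f FPS\n", fps(cpu_ms, 20.0));  // GPU-bound: 50 FPS
    return 0;
}

In this model, a 10x faster CPU (0.5 ms per frame) leaves the 1440p and 4K numbers unchanged, while the CPU-bound 200 FPS ceiling stays at 200 FPS regardless of resolution.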
Posted on Reply
#42
Krzych
efikkanTo you both:
Zen 3+'s 15% FPS gains were measured when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games, that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason Rocket Lake showed little gain in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
I've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
Posted on Reply
#43
Steevo
KrzychI've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
You must write for CNN.

It's a known fact that at higher resolutions the CPU isn't the bottleneck for FPS; the GPU is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".
Posted on Reply
#44
Krzych
SteevoYou must write for CNN.

It's a known fact that at higher resolutions the CPU isn't the bottleneck for FPS; the GPU is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".
This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck. The behavior of each game individually is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example, the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p, or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios), as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolutions, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting 2 posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but bringing something like that up in a more advanced discussion is rather embarrassing.
Posted on Reply
#45
Steevo
KrzychThis is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck. The behavior of each game individually is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example, the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p, or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios), as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolutions, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting 2 posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but bringing something like that up in a more advanced discussion is rather embarrassing.
Durrrrrr......

www.techpowerup.com/review/amd-ryzen-7-5800x/15.html



"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architecture differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so that cache hits are always 100%, or they find a way to reduce latency to 0 ns. Both are impossible, so it's down to cache and latency to cache.
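
As a back-of-envelope illustration of why cache behavior dominates, here is the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty, applied per cache level; all latencies and miss rates below are made-up ballpark figures.

#include <cstdio>

int main() {
    // Hypothetical latencies (ns) and per-level miss rates.
    const double l1 = 1.0, l2 = 4.0, l3 = 12.0, dram = 80.0;
    const double m1 = 0.05, m2 = 0.30, m3 = 0.20;

    // AMAT = hit_time + miss_rate * (next level's effective latency), nested per level.
    const double amat = l1 + m1 * (l2 + m2 * (l3 + m3 * dram));
    std::printf("AMAT = %.2f ns\n", amat);  // ~1.6 ns with these numbers
    return 0;
}

Improve the hit rates or trim any level's latency and the effective memory latency drops quickly, which is why cache size and latency to cache are where the remaining wins are.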


What have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech; you are merely getting your fingers wet. Let me know when you wake up at 6 AM, having fallen asleep at an on-site install waiting for a RAID array to rebuild.
Posted on Reply
#46
efikkan
KrzychI've heard that before. You are making up imaginary bottlenecks.
That's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.
KrzychThe reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades.
That's an absurd statement.
Rocket Lake does in fact have higher single-threaded performance. What the developers thought and felt during development is irrelevant.
KrzychThe reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
Clearly you are not familiar with Sunny Cove's design, or other CPU designs for that matter.
Both Sunny Cove and Golden Cove offer 19% IPC gains over their predecessors. The issues with Rocket Lake are tied to the inferior production node, not the underlying architecture.
KrzychThis is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck.
CPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck shrinks and the CPU bottleneck grows, since the frame rate increases. With most games today, you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.
Posted on Reply
#47
Krzych
SteevoDurrrrrr......

www.techpowerup.com/review/amd-ryzen-7-5800x/15.html



"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architecture differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so that cache hits are always 100%, or they find a way to reduce latency to 0 ns. Both are impossible, so it's down to cache and latency to cache.
This test only proves my point: it tests the maximum framerate that CPUs can hit in the tested games and clearly states that those CPUs are never going to reach higher FPS than that, regardless of resolution, which makes the statement that the "CPU is no longer a bottleneck at higher resolutions" obviously incorrect. How often that actually happens in practice is another matter entirely, because you will be GPU-bound in many games; that is where this oversimplification comes from, but it is still objectively incorrect.
SteevoWhat have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech; you are merely getting your fingers wet. Let me know when you wake up at 6 AM, having fallen asleep at an on-site install waiting for a RAID array to rebuild.
That's an unnecessary overreaction. Obviously I didn't ask about your personal life; I assumed this goes without saying. All I asked was how it is that you don't understand the behavior of games and are still throwing around oversimplified and incorrect principles despite almost two decades of experience around PCs and, presumably, games, if you enter the discussion about that specifically. There is no need for your personal information. Maybe I shouldn't have said that, but it was you starting the discussion with "CNN", which I assume is a US TV news channel, so that was a huge insult right off the bat.
efikkanThat's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.
efikkanCPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck shrinks and the CPU bottleneck grows, since the frame rate increases. With most games today, you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.
And yet again you are creating imaginary bottlenecks and going by the child logic of "you are going to be GPU-bound anyway, so there is no point in having a faster CPU", as if the few latest AAA games were all that you have ever seen and played. I don't really see the point of replying further; you are just flat out wasting my time at this point.
Posted on Reply
#48
lexluthermiester
KrzychThis is a very basic overgeneralization
No, his assessment was spot-on.
Krzychthat was created for the sake of easier explanation and then turned into a myth over time.
No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that in the last 12 years CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
Krzychyou are just flat out wasting my time at this point.
No, you did that to yourself...
Posted on Reply
#49
Krzych
lexluthermiesterNo, his assessment was spot-on.

No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that in the last 12 years CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
You are making the same mistake as he is, looking at this with child logic and through your subjective experience. The reality is that resolution has nothing to do with a CPU bottleneck, and the point of CPU bottleneck for a given game and CPU combination is the same for every resolution of the same aspect ratio. That is all I have been saying, and you keep bringing your subjective, technically incorrect, oversimplified versions of that, or trying to bring in software other than what was originally discussed: games and only games.
lexluthermiesterNo, you did that to yourself...
That much is true, but I am stuck for a few hours anyway, so what to do... :p What I really meant by that, though, is that a discussion is normally supposed to go somewhere, but if we cannot get past entry-level oversimplifications of something as basic as CPU bottlenecking in games, then it won't.

Nevermind I guess.
Posted on Reply
#50
lexluthermiester
KrzychYou are making the same mistake as he is, looking at this with child logic
LOL@"child logic" :rolleyes: Way to keep it classy there. :slap:
Krzychthrough your subjective experience.
No, this is an experience LOTS of people are having. Just because YOU have not experienced it doesn't mean that it is not being experienced by others. Try to be less self-centered, eh.
KrzychThe reality is that resolution has nothing to do with a CPU bottleneck
Yes, it does. And MASSIVE amounts of benchmarking prove that point. It is VERY common knowledge. Your ignorance of reality does not alter reality. Context much?
KrzychThat is all I have been saying, and you keep bringing your subjective, technically incorrect, oversimplified versions
Oh that's adorable. That "you keep bringing" statement directly implies I have repeatedly responded to you. I responded to you once before you made that statement. That does not qualify as repeated effort.
KrzychThat much is true, but I am stuck for a few hours anyway, so what to do...
Ahh, so you admit you're trolling everyone. Now where is that button.. CLICK!
Posted on Reply