Sunday, July 31st 2022
Intel Core i5-13600K and Core i7-13700K QS CPUs Benchmarked
Is there anything better than yet another benchmark leak of upcoming products? This time around we don't have to make do with Geekbench or some other useless benchmark, as a bilibili user in the PRC has posted a video in which the upcoming Intel Core i5-13600K and Core i7-13700K CPUs are put through 10 different games, plus 3DMark Fire Strike and Time Spy. The tests were run at 1080p, 1440p and 2160p, using a GeForce RTX 3090 Ti graphics card. Both CPUs are QS, or Qualification Samples, which means they should be close to identical to retail chips, unless some last-minute issues are discovered. The CPUs were tested on an ASRock Z690 Steel Legend WiFi 6E motherboard, or rather two of them, as both a DDR4 and a DDR5 version were used. The DDR4 RAM ran at 3600 MHz with slow-ish timings of 18-22-22 in Gear 1, whereas the DDR5 memory ran at 5200 MHz, most likely at 40-40-40 timings, although the modules were rated for 6400 MHz. In both cases we're looking at 32 GB.
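For context, here is a quick back-of-the-envelope sketch of our own (not from the video) comparing the theoretical peak bandwidth of those two memory configurations, assuming standard dual-channel operation:

```python
# Theoretical peak bandwidth of the two tested memory configurations.
# Assumes dual channel with an effective 64-bit (8-byte) bus per channel.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    # transfers/s x bytes per transfer per channel x channels, in GB/s
    return mt_per_s * bus_bytes * channels / 1000

ddr4 = peak_bandwidth_gbs(3600)  # DDR4-3600 -> 57.6 GB/s
ddr5 = peak_bandwidth_gbs(5200)  # DDR5-5200 -> 83.2 GB/s
print(f"DDR4-3600: {ddr4:.1f} GB/s")
print(f"DDR5-5200: {ddr5:.1f} GB/s (+{(ddr5 / ddr4 - 1) * 100:.0f}%)")
```

On paper that's roughly 44 percent more bandwidth for the DDR5 setup, which helps explain the larger gains seen in some of the DDR5 results below, latency differences notwithstanding.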
Courtesy of @harukaze5719, we have some much easier to read graphs than those provided by the person who tested the two CPUs, but we've included the full graphs below as well. Each CPU was compared to its current SKU equivalent from Intel, and in many of the games tested the gain ranged from a mere percent or less to three or four percent. However, in some games, at specific resolutions and especially when paired with DDR5 memory, the performance gain was as much as 15-20 percent. In a few of the games tested, such as Far Cry 6 at 4K, the game ends up GPU limited, so a faster CPU doesn't help, as you'll see in the graphs below. There are some odd results as well, where the DDR5-equipped systems saw a regression in performance, so it's hard to draw any final conclusions from this test. That said, as long as the game in question isn't GPU limited, both CPUs should offer a decent performance gain of around five percent at 1440p when paired with DDR5 memory.
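As a side note on reading per-game results like these: when averaging gains across a suite, the geometric mean is the usual choice, since it treats relative changes symmetrically. A minimal sketch with invented uplift figures, not the actual numbers from the leak:

```python
import math

# Hypothetical per-game performance ratios (new CPU / old CPU); these are
# invented for illustration, not the figures from the leaked benchmarks.
ratios = [1.01, 1.03, 1.15, 0.98, 1.05, 1.04]

geomean = math.prod(ratios) ** (1 / len(ratios))
print(f"Geometric mean uplift: {(geomean - 1) * 100:.1f}%")
```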
Sources:
bilibili video, bilibili graphs, @harukaze5719 graphs
84 Comments on Intel Core i5-13600K and Core i7-13700K QS CPUs Benchmarked
Also, the 12900KS does 5.5 GHz out of the box. I am hoping the 13900KS will do 6.0 GHz out of the box - that's pretty exciting - the only question is how much power :)
Trying to keep up with the claims here, but forgetting to name a CPU makes it harder
Going by the TPU review, the 5600x?
If so, I don't see the comparison... the 12600K is AU$130 more, as well as requiring more expensive boards, RAM and cooling. Miss any one of those and you get reduced performance, throwing all the equations off.
Unless you've redefined the meaning of the word, Intel are at a severe DISadvantage with energy efficiency, even with the E cores.
Did I misunderstand what you're trying to say, or have you got some warped info from somewhere?
Intel have some raw performance advantages, but performance pretty much lines up as:
5800X - 12600K
5900X - 12700K
5950X - 12900K (Intel does have an MT advantage here, the rest are more equal)
In MT they trade blows pretty well 'til the very top, where the extra cost can be worth it to someone who needs the extra rendering speed for work
In ST, Intel pulls ahead in certain workloads, but that's due to Zen 3's boost hitting heat-density issues - all Zen 3 chips boost to basically the same MHz value. Any improvements to the IHS or overall CPU design could easily improve on that.
Seriously, I'm so baffled trying to make sense of this, unless you're talking about something like the warped UserBenchmark results.

Heh, not here in Aus. The price cuts Intel made are very US-centric.
I'm seeing a lot of people gobbling up older B450 stock with a 5900X and 32 GB of DDR4, given the new GPU prices.

Didn't you hear that they're rumoured to be bringing Zen 4 to the AM4 platform?
AM5 is DDR5, AM4 is DDR4.
Doesn't mean they can't release Zen 4 on both, and that seems increasingly likely.
If you could interpret the graphs, you would understand it yourself. The 12600K at 125 W beats both the 5600X and the 5800X in performance, handily. So if you lowered its power limit to match their performance, it would handily beat them in efficiency as well. The 12600K can match the 5800X at 70-75 W.
Regarding efficiency, I was talking about the GC cores specifically. Golden Cove / P-cores, whatever you want to call them, walk all over Zen 3 in terms of efficiency. Just test 8 GC cores against 8 Zen 3 cores at the same wattage and... yeah. The gap would be huge. 8 Zen 3 cores need 150 W+ to match 8 GC cores at 65 W.
chipsandcheese.com/2022/01/28/alder-lakes-power-efficiency-a-complicated-picture/
And at a fixed 3.5 GHz, Zen 2 beat Alder Lake by more than a factor of two.
Now, I couldn't find a test that does something like encoding a 2-hour video while measuring the kWh consumed by the total system, but I'm certain that any Zen 3 would win vs. Alder Lake. You can also scroll a few comments up, where the kilojoules consumed during a Cinebench run are graphed from the TPU review; Zen 3 wins on efficiency, and isn't all that far behind in total performance either.
(edit: TPU saved my earlier post after all, yay! made edits to make sense)
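To put the kind of measurement being described here in concrete terms: total energy for a fixed task is just average system power multiplied by however long the task takes, so a part that draws more but finishes sooner can still come out ahead, or not. A sketch with hypothetical figures:

```python
# Energy for a fixed amount of work (e.g. encoding one video): a system
# that draws more power but finishes sooner can still use less in total.
# All wattages and durations below are hypothetical, not measured data.
def task_energy_kwh(avg_system_watts: float, hours: float) -> float:
    return avg_system_watts * hours / 1000

slow = task_energy_kwh(avg_system_watts=180, hours=2.0)  # 0.36 kWh
fast = task_energy_kwh(avg_system_watts=240, hours=1.4)  # 0.34 kWh
print(f"lower draw, slower:  {slow:.2f} kWh")
print(f"higher draw, faster: {fast:.2f} kWh")
```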
Modern processors can switch power states in a millisecond; Zen 3, for example, can change frequency and voltage states over 1,000 times a second, so you can easily understand how this mechanic works.
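As an aside, on Linux you can watch these transitions yourself by sampling the cpufreq sysfs interface; a minimal sketch, assuming a system that exposes the standard path:

```python
import time

# Watch DVFS in action: sample the kernel-reported clock of CPU 0 a few
# times per second. Requires Linux with the standard cpufreq sysfs files.
FREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

for _ in range(20):
    with open(FREQ_PATH) as f:
        khz = int(f.read())  # the value is reported in kHz
    print(f"{khz / 1000:.0f} MHz")
    time.sleep(0.25)
```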
I'm aware of Chips and Cheese, great place; it's actually written by an acquaintance of mine, we shared a Discord server. Great guys and always a great read. You should read their IPC projection article, it's actually amazing.
It is definitely true that you need to account for each architecture's characteristics, and even each sample's operating conditions, to come up with a precise estimate, but the general formula is there in every case. On this curve there will be a maximum-efficiency point and a balanced point, and past a certain point the energy requirements begin spiraling out of control, until you need massive power increases to tap into minimal extra performance.
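To illustrate with a toy model (my own, with invented coefficients, not data from any cited test): dynamic power scales roughly with voltage squared times frequency, voltage has to rise with clock speed, and a fixed static-power floor penalises very low clocks, so performance per watt peaks and then collapses:

```python
# Toy frequency/power curve: static floor + C * V^2 * f dynamic power,
# with voltage rising linearly with clock. All coefficients are invented.
def system_power_watts(freq_ghz: float) -> float:
    volts = 0.7 + 0.15 * freq_ghz        # voltage climbs with clock speed
    return 5 + 10 * volts**2 * freq_ghz  # static floor + dynamic power

for f in [1.0, 1.5, 2.5, 3.5, 4.5, 5.5]:
    p = system_power_watts(f)
    print(f"{f:.1f} GHz: {p:6.1f} W -> {f / p * 1000:4.0f} MHz/W")
```

In this made-up curve, efficiency peaks around 1.5 GHz and roughly halves by 5.5 GHz, which is the "spiraling" behaviour described above.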
"Completing a task faster doesnt matter if the task doesnt end"
Rendering work? Sure, higher power draw balances out.
Gaming? Hell no. It's a goddamn mess. If you're using 200 W more while playing the same game, it doesn't matter if the FPS is slightly higher; for a task like that, all that matters is the total energy consumed over the entire session. And on the Intel side, that's just insanely worse unless you throw in power limits that reduce the performance anyway.
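In the same terms as the encoding example earlier: a play session is a fixed-duration task, so energy is simply average draw times hours played, whatever the FPS counter says. Hypothetical figures:

```python
# For a fixed-length gaming session the task doesn't end sooner on a
# faster system, so energy is average draw times session length.
# The wattage figures below are hypothetical.
SESSION_HOURS = 3.0

for label, watts in [("lower-power system", 350), ("higher-power system", 550)]:
    print(f"{label}: {watts * SESSION_HOURS / 1000:.2f} kWh per session")
```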