Friday, April 3rd 2020

Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

Hong Kong-based tech publication HKEPC posted a performance review of a few 10th generation Core "Comet Lake-S" desktop processor engineering samples they scored. These include the Core i7-10700 (8-core/16-thread), the i5-10600K (6-core/12-thread), the i5-10500, and the i5-10400. The four chips were paired with a Dell-sourced OEM motherboard based on Intel's B460 chipset, 16 GB of dual-channel DDR4-4133 memory, and an RX 5700 XT graphics card to make the test bench. This bench was compared to several Intel 9th generation Core and AMD 3rd generation Ryzen processors.

Among the purely CPU-oriented benchmarks, the i7-10700 was found to be trading blows with the Ryzen 7 3700X. It's important to note here that the i7-10700 is a locked chip, possibly with a 65 W rated TDP. Its 4.60 GHz boost frequency is lower than that of the unlocked, 95 W i9-9900K, which ends up topping most of the performance charts where it's compared to the 3700X. Still, the comparison between the i7-10700 and the 3700X can't be dismissed, since the new Intel chip could launch at roughly the same price as the 3700X (if you go by i7-9700 vs. i7-9700K launch price trends).
The Ryzen 7 3700X beats the Core i7-10700 in Cinebench R15, but falls behind in Cinebench R20. The two end up performing within 2% of each other in the CPU-Z bench and in the 3DMark Time Spy and Fire Strike Extreme physics scores. The mid-range Ryzen 5 3600X has much better luck warding off its upcoming rivals, with significant performance leads over the i5-10600K and i5-10500 in both versions of Cinebench, the CPU-Z bench, and both 3DMark tests. The i5-10400 is within 6% of the i5-10600K. This is important, as the iGPU-devoid i5-10400F could retail at price points well under $190, two-thirds the price of the i5-10600K.
These performance figures should be taken with a grain of salt since engineering samples have a way of performing very differently from retail chips. Intel is expected to launch its 10th generation Core "Comet Lake-S" processors and Intel 400-series chipset motherboards on April 30. Find more test results in the HKEPC article linked below.
Source: HKEPC

97 Comments on Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

#26
londiste
watzupkenI read that apparently the next-gen Intel CPU is going to require a new socket. So this 4xx-series chipset is an upgrade dead end.
Next gen will be on the same socket. The one after that will be a new one. Intel is on a 2-year cadence with sockets and so far they seem to continue on the same path.
R0H1TThis isn't new, selling new chipsets is a big business for Intel & planned obsolescence arguably a bigger one!
Chipsets are cheap. I do not see how selling chipsets would be a big business for Intel. For motherboard manufacturers, maybe, but that is a different matter.
R0H1TThe performance hit will still be there, people need to get this out of their head that hardware fixes will not result in any performance loss!
This is simply incorrect. The bolded part is what you should try and get in your head. At least this has been the case with Intel's fixes so far.
Posted on Reply
#27
R0H1T
londisteChipsets are cheap. I do not see how selling chipsets would be a big business for Intel. For motherboard manufacturers, maybe, but that is a different matter.
I'm not sure where you're going with this? Intel makes a new chipset for each gen, many a time killing backward compatibility even when previous-gen chipsets could support the latest chips. Lots of WRs were broken on Z270 with the 8700K & 8600K even when Intel didn't officially support it. Sure, we could argue ad nauseam about power delivery, though AM4 also had the same limitations. More chipsets sold - more profits, as simple as that!
This is simply incorrect. The bolded part is what you should try and get in your head.
Is it, compared to unmitigated systems? Because that's what I was talking about. As for your claim, I assume you have numbers to prove the hypothesis? Fact is, while a truly apples-to-apples comparison is hard, given that the OS & software have also been updated, hardware mitigations do not negate the performance penalty that the "fixed" chip has baked in, especially compared to the original uarch!
londisteCPUs with hardware mitigations perform at the same level that original CPUs did without mitigations. Phoronix has tested and found exactly that.
There is some overall performance hit due to software changes to mitigate Spectre but that affects everyone across the board.
You have 10xxx review numbers then, as compared to totally unpatched 6xxx systems?
Intel chips have to be patched for Meltdown as well as a bunch of other vulnerabilities, including SGX; the patches aren't limited to Spectre :rolleyes:

This is the latest I could find on Phoronix. Again, I'll add that a truly apples-to-apples comparison is nigh impossible, but any mitigation, hardware or software, will have an impact on performance!
www.phoronix.com/scan.php?page=article&item=3900x-9900k-mitigations&num=8
Posted on Reply
#28
londiste
CPUs with hardware mitigations perform at the same level that original CPUs did without mitigations. Phoronix has tested and found exactly that.
There is some overall performance hit due to software changes to mitigate Spectre but that affects everyone across the board.

I can't find the exact test right now. Look for the 9900K R0 results in the mitigation performance article from before MDS was found/published.
The problem with finding a good comparison for this is that Intel has an increasing amount of mitigations across 3 or 4 different steppings, plus most of the time there is an issue that is not fixed in hardware :D
Posted on Reply
#29
Braggingrights
The i7-10700 tops out at 4.7 GHz initial turbo and 4.8 GHz on turbo 3.0... the diagram that keeps getting passed around has the single core and the all core numbers reversed... as if the all core turbo is going to be 4.8 GHz and the single core only 4.6 GHz... jeebus!
Posted on Reply
#30
midnightoil
londiste"Won't" might be a thing. Intel definitely can if they want to. Intel has smaller dies and more margins to cut especially if you consider Intel keeps the manufacturing profit as well which goes to TSMC for AMD CPUs.
Based on pictures in the source article Intel is still/again using the 6-core dies for 10600K. Think about it this way - Ryzen 3000 CPUs are 125mm^2 12nm IO die plus 75mm^2 7nm CCD die. Intel's 6-core is 149mm^2 14nm die. Intel 8-core die is 175mm^2 which should still be very good in terms of manufacturing cost. Hell, even 10-die is ~200mm^2 which is right where Zen/Zen+ dies were.
Wut?

Zen uses a chiplet design ... the yields are way better than Intel's. That's why AMD's costs are much lower, which has been discussed constantly for the last 3 years now.

It's way worse now for Intel than it was back in 2017. These 10xxx series chips push clocks and power draw way beyond what Intel's 14nm process was ever intended or supposed to reach.

It wouldn't surprise me if yield for i9 10xxx chips is less than 40%. I'd be absolutely amazed if it was much over 50%.

AMD can push their price way lower whilst still retaining a decent margin.

---

Anyway, these look like a very poor proposition vs Ryzen 3xxx .... and are likely to look outright Pentium 4-ish vs Ryzen 4xxx.
Posted on Reply
#31
Braggingrights
midnightoilWut?

Zen uses a chiplet design ... the yields are way better than Intel's. That's why AMD's costs are much lower, which has been discussed constantly for the last 3 years now.

It's way worse now for Intel than it was back in 2017. These 10xxx series chips push clocks and power draw way beyond what Intel's 14nm process was ever intended or supposed to reach.

It wouldn't surprise me if yield for i9 10xxx chips is less than 40%. I'd be absolutely amazed if it was much over 50%.

AMD can push their price way lower whilst still retaining a decent margin.

---

Anyway, these look like a very poor proposition vs Ryzen 3xxx .... and are likely to look outright Pentium 4-ish vs Ryzen 4xxx.
Ahh my Penny 4 1.6 GHz just coasting along at 3 GHz... those were the days
Posted on Reply
#32
efikkan
midnightoilZen uses a chiplet design ... the yields are way better than Intel's. That's why AMD's costs are much lower, which has been discussed constantly for the last 3 years now.
Just because chiplets are advantageous doesn't mean they beat the yields of another node. Also remember that the advantages of chiplets increase with die size. The yields of Intel's 14nm++ are outstanding, and a ~200mm² chip should have no issues there. TSMC's 7nm node is about twice as expensive as their 14nm node, and AMD still needs the IO die on 14nm, so cost should certainly be an advantage for Intel.
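
To put rough numbers on the die-size point, here is a minimal sketch in Python using the classic Poisson die-yield model, Y = exp(-A * D0). The defect density is purely an illustrative assumption (real D0 figures for Intel 14nm++ and TSMC N7 are not public), so treat the output as the shape of the curve, not a claim about actual yields.

from math import exp

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    # Classic Poisson die-yield model: Y = exp(-A * D0), area converted to cm^2
    return exp(-(area_mm2 / 100.0) * defects_per_cm2)

# 0.2 defects/cm^2 is a placeholder, not a real figure for either foundry.
D0 = 0.2
dies = [("7nm CCD (Zen 2)", 75), ("12nm IO die (Matisse)", 125),
        ("14nm 6-core", 149), ("14nm 8-core", 175), ("14nm 10-core", 200)]
for name, area in dies:
    print(f"{name:22s} {area:3d} mm^2 -> yield ~{poisson_yield(area, D0):.0%}")

The point it illustrates: small dies always yield better, but in the sub-200mm^2 range the gap between a 75mm^2 chiplet and a 150-200mm^2 monolithic die is modest, so chiplets alone don't settle the cost argument.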
Posted on Reply
#33
Metroid
If this is trading blows, imagine when the Ryzen 3xxx came out; more like spanked then hehe
Posted on Reply
#34
londiste
midnightoilZen uses a chiplet design ... the yields are way better than Intel's. That's why AMD's costs are much lower, which has been discussed constantly for the last 3 years now.
Do you realize that Intel's 4-core Skylake-ish CPUs are pretty much exactly the same size as Matisse's IO die? Even with the small die and good yields, that 7nm CCD is not cheaper than a 14nm chip that is 25-50mm^2 larger. These are all relatively small chips - we are talking under 200mm^2 here.
efikkanTSMC's 7nm node is about twice as expensive as their 14nm node
At the end of last year, AMD said 7nm costs 60% more.
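
Taking the die areas quoted above and that 60% figure at face value, a quick normalized comparison in Python looks like this. It assumes cost is proportional to die area, uses the "60% more" claim as a 1.6x multiplier, and deliberately ignores yield, packaging and actual wafer prices, all of which matter in practice.

# Normalized cost per mm^2: 14nm/12nm = 1.0, 7nm = 1.6 (the "60% more" figure).
COST_14NM, COST_7NM = 1.0, 1.6

matisse   = 125 * COST_14NM + 75 * COST_7NM   # IO die + one CCD
comet_6c  = 149 * COST_14NM
comet_8c  = 175 * COST_14NM
comet_10c = 200 * COST_14NM

print(f"Matisse (IO die + 1 CCD): {matisse:.0f} area-cost units")
print(f"Comet Lake 6-core:        {comet_6c:.0f}")
print(f"Comet Lake 8-core:        {comet_8c:.0f}")
print(f"Comet Lake 10-core:       {comet_10c:.0f}")

On those assumptions Matisse lands around 245 units against 149-200 for the monolithic 14nm dies, which is the silicon-cost argument being made here.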
R0H1TYou have 10xxx review numbers then, as compared to totally unpatched 6xxx systems?
Intel chips have to be patched for Meltdown as well as a bunch of other vulnerabilities, including SGX; the patches aren't limited to Spectre :rolleyes:

This is the latest I could find on Phoronix. Again, I'll add that a truly apples-to-apples comparison is nigh impossible, but any mitigation, hardware or software, will have an impact on performance!
www.phoronix.com/scan.php?page=article&item=3900x-9900k-mitigations&num=8
This is with the 9900K having MDS mitigations in software (mds: Mitigation of Clear buffers; SMT vulnerable).
The 9900K gets hit by 5.5% vs 3.7% for the 3900X. The 3900X should actually do even slightly better; for some reason Zen 2 seems to have a slightly heavier spectre_v2 mitigation enabled.

Edit:
My point was about hardware fixes. If you look at the enabled mitigations, both CPUs have several mitigations active for spec_store_bypass, spectre_v1 and spectre_v2:
- Ryzen 9 3900X:
l1tf: Not affected
mds: Not affected
meltdown: Not affected
spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: always-on RSB filling

- Core i9 9900K:
l1tf: Not affected
mds: Mitigation of Clear buffers; SMT vulnerable
meltdown: Not affected
spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
Edit2:
EPYC Rome vs Cascade Lake at similar mitigation setup - 2.2% vs 2.8% impact from mitigations:
www.phoronix.com/scan.php?page=article&item=epyc-rome-mitigations&num=1
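
For reference, the mitigation strings listed above are what the Linux kernel exposes under /sys/devices/system/cpu/vulnerabilities/ (one file per issue). A minimal Python sketch to dump them on any reasonably recent kernel:

from pathlib import Path

# Each file contains a status string such as "Not affected",
# "Vulnerable" or "Mitigation: ..." for the CPU the kernel is running on.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")

Running this on the two systems in the Phoronix test would produce listings like the ones above.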
Posted on Reply
#35
Braggingrights
Intel will still win gaming, AMD will have the moral win and everyone will ignore logic and common sense to support their color... it's sport for nerds
Posted on Reply
#36
londiste
BraggingrightsAMD will have the moral win
AMD will have the power efficiency win. Probably price/perf win also.
Posted on Reply
#37
Braggingrights
londisteAMD will have the power efficiency win. Probably price/perf win also.
And nobody will care, because gaming is sexy. And really, you've gotta hand it to them, they are still in the race with 14nm against 7nm; that's an achievement on its own, like taking an old Ford and hotting it up to beat a Ferrari. Hey, someone should make a movie like that... and before you call me a fanboi, my GOAT proccy is the Athlon Tbird 1000
Posted on Reply
#38
efikkan
BraggingrightsIntel will still win gaming, AMD will have the moral win and everyone will ignore logic and common sense to support their color... it's sport for nerds
Claiming there is such a thing as a "moral win" is bias.
Zen 2 has several advantages, including energy efficiency, and performance advantages in several areas, including (large) video encoding, Blender rendering, etc., which are relevant considerations for many buyers.
The decision should come down to which product is objectively better for the specific user's use case. Unlike a few years ago, when there was a single clear option, today the "winner" heavily depends on the use case.
Posted on Reply
#39
londiste
efikkanZen 2 has several advantages, including energy efficiency, and performance advantages in several areas, including (large) video encoding, Blender rendering, etc., which are relevant considerations for many buyers.
Energy efficiency remains a big plus, but Zen 2 is bound to lose its performance advantage at this point. In large part, it comes down to Intel shipping CPUs with SMT/HT again.
Posted on Reply
#40
Braggingrights
efikkanClaiming there is such a thing as a "moral win" is bias.
Zen 2 has several advantages, including energy efficiency, and performance advantages in several areas, including (large) video encoding, Blender rendering, etc., which are relevant considerations for many buyers.
The decision should come down to which product is objectively better for the specific user's use case. Unlike a few years ago, when there was a single clear option, today the "winner" heavily depends on the use case.
I used the term very loosely... to match my own
Posted on Reply
#41
londiste
R0H1TI'm not sure where you're going with this? Intel makes a new chipset for each gen, many a time killing backward compatibility even when previous-gen chipsets could support the latest chips. Lots of WRs were broken on Z270 with the 8700K & 8600K even when Intel didn't officially support it. Sure, we could argue ad nauseam about power delivery, though AM4 also had the same limitations. More chipsets sold - more profits, as simple as that!
H310 RCP is $26, B365 is $28 and Z370 is $47. The rest are probably somewhere in between.
Most of the 300-series chipsets are reportedly on a 14nm process and are 50-60mm^2.
I do not see how that would be hugely profitable for them. Even less so given the shortage of manufacturing capacity.
Posted on Reply
#42
petedread
Intel should not be allowed to have more than four cores for non-HEDT. Not after refusing to do it for so long and then calling chiplets glued-together cores. Lol, I'm still bitter about all those i7s I bought and the HEDT stuff that always felt like a letdown. Though every i7 I had from the 2700K to the 8700K hit 5 GHz.
Posted on Reply
#43
Slizzo
petedreadIntel should not be allowed to have more than four cores for non-HEDT. Not after refusing to do it for so long and then calling chiplets glued-together cores. Lol, I'm still bitter about all those i7s I bought and the HEDT stuff that always felt like a letdown. Though every i7 I had from the 2700K to the 8700K hit 5 GHz.
I dunno. My 4.8 GHz 7820X and 4.9 GHz 10940X haven't let me down yet.

Actually, I should make my 10940X a 5.0 GHz processor. Still have plenty of headroom.
Posted on Reply
#44
yeeeeman
R0H1TThe performance hit will still be there, people need to get this out of their head that hardware fixes will not result in any performance loss!
If you don't install the newest software and updates, you won't get any performance hit.
petedreadIntel should not be allowed to have more than four cores for non-HEDT. Not after refusing to do it for so long and then calling chiplets glued-together cores. Lol, I'm still bitter about all those i7s I bought and the HEDT stuff that always felt like a letdown. Though every i7 I had from the 2700K to the 8700K hit 5 GHz.
Intel's strategy was never more cores. That has been true since the Core 2 Duo. They had 4 cores from the Core 2 Quad chips up until Skylake.
Their strategy was advancing process technology as fast as possible, and for many years they were the best, easily 2-3 years ahead of every other manufacturer. This process advantage allowed them to increase core sizes/caches, keep frequencies relatively low (under 4 GHz usually) and improve performance each generation.
The improvements for each generation were weighed against various aspects, like cost, competition, software optimization for multi-core, etc.
Intel had basically 0 competition up until 2017. None. For that you can thank AMD.
Also, software wasn't very multithreaded up until 2015-2016, right when Skylake launched.
So, if you use your brain, you will see that there was no point in Intel launching a 20-core CPU in 2013, when games usually used at most 2-4 cores and even Windows 7 or 8 was limited to a small number of cores.
Professional users, on the other hand, had options in the form of HEDT products that increased the number of cores each year.

I think this still holds true today, with the exception that the sweet spot for the number of cores has now moved to 8 cores, thanks to consoles.
So use your brain and understand all the variables involved, and then start making judgements.
Posted on Reply
#45
R0H1T
yeeeemanIf you don't install the newest software and updates, you won't get any performance hit.
You don't get any of the latest optimizations either, like the recent MATLAB ones. Besides, not patching the system (OS) or installing the latest software is mostly a non-option, especially if you're dealing with any amount of critical data &/or sensitive information, unless you want a ton of class action lawsuits.
Posted on Reply
#46
ARF
yeeeemanIf you don't install the newest software and updates, you won't get any performance hit.


Intel's strategy was never more cores. That has been true since the Core 2 Duo. They had 4 cores from the Core 2 Quad chips up until Skylake.
Their strategy was advancing process technology as fast as possible, and for many years they were the best, easily 2-3 years ahead of every other manufacturer. This process advantage allowed them to increase core sizes/caches, keep frequencies relatively low (under 4 GHz usually) and improve performance each generation.
The improvements for each generation were weighed against various aspects, like cost, competition, software optimization for multi-core, etc.
Intel had basically 0 competition up until 2017. None. For that you can thank AMD.
Also, software wasn't very multithreaded up until 2015-2016, right when Skylake launched.
So, if you use your brain, you will see that there was no point in Intel launching a 20-core CPU in 2013, when games usually used at most 2-4 cores and even Windows 7 or 8 was limited to a small number of cores.
Professional users, on the other hand, had options in the form of HEDT products that increased the number of cores each year.

I think this still holds true today, with the exception that the sweet spot for the number of cores has now moved to 8 cores, thanks to consoles.
So use your brain and understand all the variables involved, and then start making judgements.
If you use your brain, you will maybe get the idea that games use as many cores as the most popular CPU SKUs currently on the market offer.
Example - if only 10% of the users have 12-core CPUs, of course the games won't utilise them.
The games will wait until 60% or more of the users have these 12-core CPU SKUs.
Posted on Reply
#47
Darmok N Jalad
yeeeemanIntel had basically 0 competition up until 2017. None. For that you can thank AMD.
You left out the part about Intel’s anti-trust behavior that kept AMD from gaining marketshare when there actually was competition. No doubt AMD made some big design missteps, but they did not fail in a vacuum. Intel forked over $1B+ to AMD to settle before a trial.
Posted on Reply
#48
londiste
ARFIf you use your brain, you will maybe get the idea that games use as many cores as the most popular CPU SKUs currently on the market offer.
Example - if only 10% of the users have 12-core CPUs, of course the games won't utilise them.
The games will wait until 60% or more of the users have these 12-core CPU SKUs.
Games have had a number of threads as a probable target since 2005/2006 with the Xbox 360/PS3. Since 2013, multithreading has been very heavily incentivized, with 6-7 available threads on the Xbox One and PS4 and low single-core performance. Changes to games are very, very gradual, and not all use cases for games are able to benefit to the same degree.
Posted on Reply
#49
efikkan
yeeeemanIntel strategy was never more cores. That is valid from Core 2 duo. They had 4 cores starting from Core 2 Quad chips up until Skylake…
While AMD's return to competition has certainly pushed some extra focus on more cores, which to some extent is useful, many are forgetting that there were plans for a 6-core Skylake before details of Zen were known to the public. While Intel's 14nm node is very good today, it was terrible in the beginning.
ARFIf you use your brain, you will maybe get the idea that games use as many cores as the most popular CPU SKUs are currently on the market with.
Example - if only 10% of the users have 12-core CPUs, of course the games won't utilise it.
The games will wait until 60% or more of the users have these 12-core CPU SKUs.
Those who understand how code works know it's the type of workload that limits the scaling potential. Asynchronous workloads, like large encoding workloads, non-realtime rendering, and many server workloads, can scale nearly linearly until you reach a hardware or OS bottleneck. Synchronous workloads however, like most applications and certainly games, have more limited scaling potential and will sooner or later reach a point of diminishing returns; precisely where this limit resides depends on the workload, and it can't really be eliminated even if you wanted to. Games for instance can't keep scaling the frame rate up to 16 threads, not today and not 10 years from now. More cores are certainly useful to offload background tasks and let the game run undisturbed, but games will not need more than 2-3 threads to feed the GPU (except edge cases) and a few threads to do game simulation, network, audio etc. Beyond that, increasing the thread count for the game will only add synchronization overhead, and considering modern game engines run at tick rates of ~100-200 Hz, there is not a lot of CPU time in each iteration.

As any good programmer can tell you, doing multithreading well is hard, and doing multithreading badly is worse than no multithreading at all. And just because an application spawns extra threads doesn't mean it benefits performance.
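
The diminishing-returns argument is essentially Amdahl's law. Here is a small illustrative sketch in Python; the 70% parallel fraction is an arbitrary assumption for a "synchronous" workload, not a measurement of any real game engine, and the model doesn't even include the synchronization overhead mentioned above, so real scaling would be worse.

def amdahl_speedup(parallel_fraction: float, threads: int) -> float:
    # Amdahl's law: the serial part of the work caps the achievable speedup.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

P = 0.70  # illustrative assumption, not a measured value
for n in (1, 2, 4, 6, 8, 12, 16):
    print(f"{n:2d} threads -> {amdahl_speedup(P, n):.2f}x")

With those numbers, going from 1 to 4 threads roughly doubles throughput, while doubling again from 8 to 16 threads adds only about 13%.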
Posted on Reply
#50
ARF
efikkanWhile AMD's return to competition has certainly pushed some extra focus on more cores, which to some extent is useful, many are forgetting that there were plans for a 6-core Skylake before details of Zen were known to the public. While Intel's 14nm node is very good today, it was terrible in the beginning.


Those who understand how code works know it's the type of workload that limits the scaling potential. Asynchronous workloads, like large encoding workloads, non-realtime rendering, and many server workloads, can scale nearly linearly until you reach a hardware or OS bottleneck. Synchronous workloads however, like most applications and certainly games, have more limited scaling potential and will sooner or later reach a point of diminishing returns; precisely where this limit resides depends on the workload, and it can't really be eliminated even if you wanted to. Games for instance can't keep scaling the frame rate up to 16 threads, not today and not 10 years from now. More cores are certainly useful to offload background tasks and let the game run undisturbed, but games will not need more than 2-3 threads to feed the GPU (except edge cases) and a few threads to do game simulation, network, audio etc. Beyond that, increasing the thread count for the game will only add synchronization overhead, and considering modern game engines run at tick rates of ~100-200 Hz, there is not a lot of CPU time in each iteration.

As any good programmer can tell you, doing multithreading well is hard, and doing multithreading badly is worse than no multithreading at all. And just because an application spawns extra threads doesn't mean it benefits performance.
Well, it seems the majority of the work is done purely by the GPUs, while the CPUs are responsible for supportive tasks like running the OS.

But with such powerful 16-core Ryzen CPUs, programmers can start realising that they can offload the heavy work from the GPU onto the CPU.
Physics, AI, etc. all need CPU acceleration.
Posted on Reply