Friday, April 3rd 2020

Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

Hong Kong-based tech publication HKEPC posted a performance review of a few 10th generation Core "Comet Lake-S" desktop processor engineering samples they scored. These include the Core i7-10700 (8-core/16-thread), the i5-10600K (6-core/12-thread), the i5-10500, and the i5-10400. The four chips were paired with a Dell-sourced OEM motherboard based on Intel's B460 chipset, 16 GB of dual-channel DDR4-4133 memory, and an RX 5700 XT graphics card to make the test bench. This bench was compared to several Intel 9th generation Core and AMD 3rd generation Ryzen processors.

Among the purely CPU-oriented benchmarks, the i7-10700 was found to be trading blows with the Ryzen 7 3700X. It's important to note here that the i7-10700 is a locked chip, possibly with a 65 W rated TDP. Its 4.60 GHz boost frequency is lower than that of the unlocked, 95 W i9-9900K, which ends up topping most of the performance charts where it's compared to the 3700X. Still, the comparison between the i7-10700 and the 3700X can't be dismissed, since the new Intel chip could launch at roughly the same price as the 3700X (if you go by i7-9700 vs. i7-9700K launch price trends).
The Ryzen 7 3700X beats the Core i7-10700 in Cinebench R15 but falls behind in Cinebench R20. The two perform within 2% of each other in the CPU-Z bench and in the 3DMark Time Spy and Fire Strike Extreme physics scores. The mid-range Ryzen 5 3600X has much better luck warding off its upcoming rivals, with significant performance leads over the i5-10600K and i5-10500 in both versions of Cinebench, the CPU-Z bench, and both 3DMark tests. The i5-10400 is within 6% of the i5-10600K. This is important, as the iGPU-devoid i5-10400F could retail at price points well under $190, two-thirds the price of the i5-10600K.
These performance figures should be taken with a grain of salt since engineering samples have a way of performing very differently from retail chips. Intel is expected to launch its 10th generation Core "Comet Lake-S" processors and Intel 400-series chipset motherboards on April 30. Find more test results in the HKEPC article linked below.
Source: HKEPC

97 Comments on Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

#51
efikkan
ARF: Well, it seems the majority of the work is done purely by the GPUs, while the CPUs are responsible for supportive tasks like running the OS.
In modern game engines, all the heavy lifting during rendering is done by the GPU. The CPU only needs to keep the GPU fed, building queues of commands which the GPU processes. Having a dozen threads to build such queues serves no purpose. The trend in GPU architectures is for the GPU to work with less interaction from the CPU, meaning that games in the future will be less CPU-bound.
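To make the "command queue" point concrete, here is a minimal sketch of a per-frame render loop, loosely modeled on Vulkan/D3D12-style command lists. The types and functions are hypothetical stand-ins, not any real API; the point is only the division of labour, where the CPU records cheap commands and the GPU does the heavy execution:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for a Vulkan/D3D12-style command API; real APIs
// differ, but the split is the point: the CPU only *records* commands,
// the GPU *executes* them asynchronously.
struct DrawCall      { int mesh = 0, material = 0; };
struct CommandBuffer { std::vector<DrawCall> commands; };

CommandBuffer record_commands(const std::vector<DrawCall>& scene) {
    CommandBuffer cb;
    for (const DrawCall& d : scene)
        cb.commands.push_back(d);            // cheap CPU-side encoding work
    return cb;
}

// In a real API, submission returns immediately and the GPU drains the queue.
void submit(const CommandBuffer& cb) {
    std::printf("submitted %zu draw commands\n", cb.commands.size());
}

int main() {
    std::vector<DrawCall> scene(1000);       // even thousands of draws are light CPU work
    for (int frame = 0; frame < 3; ++frame)  // per-frame loop: record, submit, repeat
        submit(record_commands(scene));
}
```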
ARF: But with such powerful 16-core Ryzen CPUs, programmers can start realising that they can offload the heavy work from the GPU and force it onto the CPU. Physics, AI, etc. all need CPU acceleration.
Well, that's pretty much the opposite of acceleration. :rolleyes:
Posted on Reply
#52
ARF
efikkan: In modern game engines, all the heavy lifting during rendering is done by the GPU. The CPU only needs to keep the GPU fed, building queues of commands which the GPU processes. Having a dozen threads to build such queues serves no purpose. The trend in GPU architectures is for the GPU to work with less interaction from the CPU, meaning that games in the future will be less CPU-bound.

Well, that's pretty much the opposite of acceleration. :rolleyes:
Have you seen how Cinebench renders an image? The more cores/threads you throw at it, the faster it gets.
Games should behave the same way; otherwise it's a pure waste of silicon.
Just run your games on a GPU, then.
Posted on Reply
#53
londiste
ARF: Have you seen how Cinebench renders an image? The more cores/threads you throw at it, the faster it gets.
Have you seen how long it takes to render one frame in Cinebench? Now imagine you want to do 60 or 120 of these every second, plus all the management, game logic, physics, animation, etc.
Cinebench is doing ray tracing. Games use far more efficient ways to render a scene, and that work is done on the GPU.
Posted on Reply
#54
ARF
londiste: Have you seen how long it takes to render one frame in Cinebench? Now imagine you want to do 60 or 120 of these every second, plus all the management, game logic, physics, animation, etc. Cinebench is doing ray tracing. Games use far more efficient ways to render a scene, and that work is done on the GPU.
I have seen 3DMark's CPU-accelerated footage, and it runs far faster than the ~1 FPS of Cinebench. Cinebench is very heavy, pure ray tracing.
Physics needs faster CPUs, and physics is done on the CPU.

So, no, it's not done on the GPU, and it will not and should not be.
Posted on Reply
#55
efikkan
ARF: Have you seen how Cinebench renders an image? The more cores/threads you throw at it, the faster it gets. Games should behave the same way; otherwise it's a pure waste of silicon. Just run your games on a GPU, then.
I certainly do, but Cinebench is not realtime rendering.
In a game at 120 FPS, each frame has an 8.3 ms window for everything. In modern OSes like Windows or Linux you can easily get latencies of 0.1-1 ms (or more) due to scheduling, since they are not realtime operating systems. Good luck having your 16 render threads sync up many times within a single frame without causing serious stutter.

You clearly didn't understand the contents of my previous post. I mentioned asynchronous vs. synchronous workloads. Non-realtime rendering jobs deal with "large" work chunks on the second or minute scale; they can work independently and only need to sync up when they need the next chunk. In that case the synchronization overhead becomes negligible, which is why such workloads can scale to an almost arbitrary number of worker threads.

Realtime rendering, however, is a pipeline of operations which must complete within a very tight performance budget on the millisecond scale, and individual steps of that pipeline are down on the microsecond scale. At that point any synchronization overhead becomes very expensive, and such overhead usually grows with thread count.
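The arithmetic behind that budget is easy to sketch. The per-sync cost below is an assumed, illustrative figure (not a measurement), but it shows how quickly synchronization points eat into a frame:

```cpp
#include <cstdio>

int main() {
    // The per-frame budget shrinks with the target frame rate.
    const double rates[] = { 60.0, 120.0, 240.0 };
    for (double fps : rates)
        std::printf("%5.0f FPS -> %5.2f ms per frame\n", fps, 1000.0 / fps);

    // Assumed, illustrative numbers: if each thread-sync point (waking and
    // joining workers on a non-realtime OS) costs ~0.1 ms, ten sync points
    // in a staged pipeline consume a meaningful slice of a 120 FPS budget.
    const double sync_ms   = 0.1;
    const int    sync_pts  = 10;
    const double budget_ms = 1000.0 / 120.0;             // ~8.33 ms
    std::printf("sync overhead: %.1f ms of %.2f ms (%.0f%% of the frame)\n",
                sync_ms * sync_pts, budget_ms,
                100.0 * sync_ms * sync_pts / budget_ms); // ~12%
}
```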
Posted on Reply
#56
londiste
Which 3DMark CPU-accelerated footage? The physics test? That is testing physics, which is different from rendering, ray-traced or otherwise.
Physics is both a complex and a simple problem at the same time. Parts of it run better on the CPU, parts run better on the GPU.

A GPU is a lot (A LOT) of simple compute units for parallel work, and largely a SIMD device.
A CPU is a complex compute device whose cores are far more powerful and independent.
Both have their strengths and weaknesses but especially in games they complement each other.

Edit:
By the way, what you see in 3DMark physics tests is still rendered on the GPU, although its load is deliberately kept as low as possible.
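A toy illustration of that CPU/GPU split, with made-up physics: uniform, branch-free integration maps well onto a GPU's wide SIMD lanes, while a serially dependent constraint pass favours a fast CPU core. This is purely illustrative, not any engine's actual code:

```cpp
#include <cstdio>
#include <vector>

struct Particle { float x, v; };

// GPU-friendly: every particle runs the exact same branch-free math, so
// thousands of them map cleanly onto wide SIMD lanes.
void integrate(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) { p.v += -9.81f * dt; p.x += p.v * dt; }
}

// CPU-friendly: a Gauss-Seidel style constraint pass where each element's
// correction depends on the previous element's *updated* value, a serial
// dependency chain that cannot be spread across thousands of GPU lanes.
void solve_chain(std::vector<Particle>& ps, float rest) {
    for (size_t i = 1; i < ps.size(); ++i) {
        float d = ps[i].x - ps[i - 1].x;   // reads the value written last iteration
        ps[i].x -= (d - rest) * 0.5f;      // relax towards the rest distance
    }
}

int main() {
    std::vector<Particle> chain(8, {0.0f, 0.0f});
    integrate(chain, 0.016f);              // one 60 FPS timestep
    solve_chain(chain, 1.0f);
    std::printf("last link at x=%.3f\n", chain.back().x);
}
```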
Posted on Reply
#57
midnightoil
efikkan: Just because chiplets are advantageous doesn't mean they beat the yields of another node. Also remember that the advantages of chiplets increase with die size. The yields of Intel's 14nm++ are outstanding, and a ~200mm² chip should have no issues there. TSMC's 7nm node is about twice as expensive as their 14nm node, and AMD still needs the IO die on 14nm, so cost should certainly be an advantage for Intel.
You can't be serious. Outstanding yields this far up the clock and voltage/current/power curve, on this many (monolithic) cores? 10nm was meant to take over Intel's leading-edge designs 3 years ago, and 14nm(++++++++) is stretched more and more thinly. Yields are alleged to be absolutely appalling on their -X and server platforms, and those run at much lower clocks.

Yields have probably never been worse on 'small' chips for a desktop platform than this 10xxx series. How could they have been? These are stretched to absolute breaking point. And why? Because they have no other choice.

It's why you have the obscenity of a desktop 16-core 3950X on Clevo's new laptop workstation platform being limited to a strict 65 W TDP, while Intel's new top 8-core laptop chip draws 176 W on a similar platform.
Posted on Reply
#58
GoldenX
btarunr: The key notches are different. It won't fit.
Dremel.
Posted on Reply
#59
TheinsanegamerN
watzupken: I think the question is how much power this i7-10700 really draws when outperforming the R7 3700X. It's locked against manual overclocking, but that does not mean it will not draw above its TDP when it boosts, since we already know how Intel's TDP works. Moreover, what is stopping people from getting extra performance by overclocking the 3700X, while you can't do the same with the i7-10700? There is no magic bullet here, since this is still pretty much a 14nm chip, no different from a Coffee Lake chip. At this point, the only way they can beat AMD's Zen 2 is by pushing clock speed hard and matching price.
Well, most DIY motherboards already push Ryzen 3000 near its limit. Ryzen 3000 is a total dud when it comes to overclocking: very little headroom, and rampant power consumption to maintain an all-core OC for a whopping ~3% gain over just letting the CPU manage itself.

Ryzen 4000 is a much greater threat than a 3700X OC is. Rumors are pointing to a 15% IPC increase and 300-500 MHz higher clock rates. Even if AMD only managed a 10% IPC jump at the same clocks, or a 5% IPC jump with their CPUs able to hit 4.7-4.8 GHz reliably instead of 4.5-4.6, they would take what remains of Intel's performance crown, especially in games, as AMD's cache changes should dramatically reduce per-core latency, which is what holds Ryzen back in gaming.

The 10 series from Intel is gonna bomb at this rate: bonkers power draw that makes the FX 9590 look civilized, and heat production that even 360 mm rads struggle to handle.
Posted on Reply
#60
R0H1T
It's already a Hindenburger at this point, might as well deflate it :nutkick:
Posted on Reply
#61
ppn
Intel should be able to retake it any time with Willow Cove. I'm waiting for DDR5 anyway. So the real fight will be Intel's 5nm vs. TSMC's 3nm.
Posted on Reply
#62
efikkan
TheinsanegamerN: Rumors are pointing to a 15% IPC increase and 300-500 MHz higher clock rates. Even if AMD only managed a 10% IPC jump at the same clocks, or a 5% IPC jump with their CPUs able to hit 4.7-4.8 GHz reliably instead of 4.5-4.6, they would take what remains of Intel's performance crown, especially in games, as AMD's cache changes should dramatically reduce per-core latency, which is what holds Ryzen back in gaming.
300-500 MHz higher sustained clocks is unlikely. AMD have themselves stated that they expect clock speeds to decrease over the coming years.

I'm not going to speculate about Zen 3's IPC gains, especially when such rumors are either completely bogus or based on cherry-picked benchmarks that have nothing to do with actual IPC. I've seen estimates ranging from ~7-8% to over 20% (+/- 5%), and such claims are likely BS, because anyone who actually knows would know precisely, not give a large range; IPC is already an averaged number. And it's very unlikely that anyone outside AMD actually knows until the last few months.

The good news for AMD and gaming is that the CPU only has to be fast enough to feed the GPU, and as we can already see with Intel's CPUs pushing ~5 GHz, the gains are minimal compared to the Skylakes boosting to ~4.5 GHz. Beyond that point you only really gain somewhat more stable minimum frame rates, edge cases aside. If Intel launched a new CPU today with 20% higher per-core performance, it wouldn't be much faster than the i9-9900K in gaming (1440p), at least not until games suddenly become much more demanding on the CPU side while feeding the GPU, which is not likely. Zen 2 is already fairly close to Skylake in gaming, so Zen 3 should have a good chance of achieving parity, even with modest gains. It really comes down to which areas improve. Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, the out-of-order window, etc., while areas like FPU performance matter much less for gaming. As I said, IPC is an averaged number across a wide range of workloads, so a 10% gain in IPC doesn't mean a 10% gain in everything; it could easily mean a 20% gain in video encoding and a 2% gain in gaming.
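To make that averaging point concrete, here is a toy calculation with made-up per-workload numbers; the workloads and gains are invented purely for illustration:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Made-up per-workload gains for a hypothetical new core:
    const char*  name[] = { "video encoding", "compression", "gaming" };
    const double gain[] = { 1.20,             1.09,          1.02     };

    // The headline "IPC uplift" is one averaged number (geometric mean here).
    double g = 1.0;
    for (double x : gain) g *= x;
    g = std::pow(g, 1.0 / 3.0);

    for (int i = 0; i < 3; ++i)
        std::printf("%-15s %+5.1f%%\n", name[i], (gain[i] - 1.0) * 100.0);
    std::printf("headline IPC:   %+5.1f%%\n", (g - 1.0) * 100.0);  // ~+10%
    // One averaged ~10% hides a 20% encoding win and a 2% gaming win.
}
```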
ppn: Intel should be able to retake it any time with Willow Cove. I'm waiting for DDR5 anyway. So the real fight will be Intel's 5nm vs. TSMC's 3nm.
I'm just curious, why wait for DDR5 of all things?
If you really need memory bandwidth, just buy one of the HEDT platforms, and you'll have plenty. Most non-server workloads aren't usually limited by memory bandwidth anyway, so that would be the least of my concerns for a build.

And then there is always the next big one…
I'm more interested in architectural improvements than nodes. Now that CPUs of 8-12 cores are already widely available as "mainstream", the biggest noticeable gain to end-users would be performance per core.
Posted on Reply
#63
londiste
efikkan: Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, the out-of-order window, etc., while areas like FPU performance matter much less for gaming.
Memory latency? Renoir should give an answer soon if that is the case.
midnightoil: You can't be serious. Outstanding yields this far up the clock and voltage/current/power curve, on this many (monolithic) cores? 10nm was meant to take over Intel's leading-edge designs 3 years ago, and 14nm(++++++++) is stretched more and more thinly. Yields are alleged to be absolutely appalling on their -X and server platforms, and those run at much lower clocks.

Yields have probably never been worse on 'small' chips for a desktop platform than this 10xxx series. How could they have been? These are stretched to absolute breaking point. And why? Because they have no other choice.

It's why you have the obscenity of a desktop 16-core 3950X on Clevo's new laptop workstation platform being limited to a strict 65 W TDP, while Intel's new top 8-core laptop chip draws 176 W on a similar platform.
All the 5.x GHz numbers are marketing. These chips will do 5.0 or a little above, and have done for a long while now. Intel is just content to push higher voltages into its chips, following AMD's example. Chips that do not clock as high will be sold as non-K or lower-tier models.

Yields on 14nm with chips this size are excellent; there is no doubt about that.
Servers are different. LCC (10-core) is 325 mm², HCC (18-core) is 485 mm², and XCC (28-core) is 694 mm². LCC yields are not a big problem, HCC is so-so, and XCC yields are definitely a problem.

That 3950X score is an ECO mode score, meaning a "65 W TDP" that actually allows 88-90 W.
The 10980HK's 107 W PL2 and 56 s tau are a disgrace, but not that unexpected.
There is a huge difference there; why the numbers get overblown to this degree is beyond me.
Posted on Reply
#64
ARF
efikkan: Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, the out-of-order window, etc., while areas like FPU performance matter much less for gaming.
londiste: Memory latency? Renoir should give an answer soon if that is the case.
Intel's last ace is the ring bus, which only scales to around 10-core processors, plus the bad Windows scheduler.

AMD's Zen 3 will likely have an 8-core CCX, so the incredible amounts of latency added by jumping between cores on different CCXes will be gone.

And Intel will be RIP.
londiste: That 3950X score is an ECO mode score, meaning a "65 W TDP" that actually allows 88-90 W.
It likely boosts up to the TDP you mention but then settles at its targeted 65 W limit!
Posted on Reply
#65
londiste
ARF: It likely boosts up to the TDP you mention but then settles at its targeted 65 W limit!
No. Ryzen 3000 runs at PPT = 135% of TDP unless other limits are hit.
What you are describing is Intel's system, where PL1 = TDP, PL2 = the boost power limit, and tau is the time the CPU may boost above PL1.

Both are simplified from how they actually function but that is the gist of it.
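A quick numeric sketch of both schemes. The 1.35x PPT multiplier is AMD's commonly reported desktop behaviour; the Intel PL2/tau figures below are the commonly reported defaults for a 65 W Comet Lake i7, and boards often override them:

```cpp
#include <cstdio>

int main() {
    // AMD Ryzen 3000 desktop: sustained package power (PPT) ~ 1.35 x TDP.
    const double amd_tdp = 65.0;
    std::printf("Ryzen '65 W TDP' -> PPT ~ %.0f W sustained\n",
                amd_tdp * 1.35);                       // ~88 W

    // Intel: PL1 = TDP (sustained), PL2 = short boost limit, tau = how long
    // the CPU may sit at PL2 before dropping back to PL1. Illustrative
    // defaults for a 65 W Comet Lake i7; board vendors frequently raise them.
    const double pl1 = 65.0, pl2 = 224.0, tau = 28.0;
    std::printf("Intel: up to %.0f W for %.0f s, then %.0f W sustained\n",
                pl2, tau, pl1);
}
```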
Posted on Reply
#66
ARF
londiste: No. Ryzen 3000 runs at PPT = 135% of TDP unless other limits are hit. What you are describing is Intel's system, where PL1 = TDP, PL2 = the boost power limit, and tau is the time the CPU may boost above PL1. Both are simplified from how they actually function, but that is the gist of it.
This is how the Ryzen 9 4900HS with its 35 W TDP behaves in the HU review (watch from 18:33 on).
Posted on Reply
#67
londiste
The 4900HS is different from desktop Ryzen 3000 CPUs in this regard.
Posted on Reply
#68
Braggingrights
Remember when 10 GHz was on a roadmap and seemed just around the corner? Bring on the quantum computers.
Posted on Reply
#69
ARF
Braggingrights: Remember when 10 GHz was on a roadmap and seemed just around the corner? Bring on the quantum computers.
Quantum computers can't operate in your living room.
In the best case, you may tap a little of their computing power over the cloud… but I doubt that will happen anytime soon.
Our internet connections are too slow.

And you can always take normal silicon semiconductor chips and build supercomputers for the very same purpose.

For now, AMD with Zen is your solution with multiple cores.

AMD CTO Mark Papermaster: More Cores Coming in the 'Era of a Slowed Moore's Law'
www.tomshardware.com/news/amd-cto-mark-papermaster-more-cores-coming-in-the-era-of-a-slowed-moores-law
Posted on Reply
#70
Totally
londiste: "Won't" might be a thing. Intel definitely can if they want to. Intel has smaller dies and more margin to cut, especially if you consider that Intel keeps the manufacturing profit as well, which goes to TSMC for AMD CPUs. Based on pictures in the source article, Intel is still/again using the 6-core die for the 10600K. Think about it this way: Ryzen 3000 CPUs are a 125 mm² 12nm IO die plus a 75 mm² 7nm CCD. Intel's 6-core is a 149 mm² 14nm die. Intel's 8-core die is 175 mm², which should still be very good in terms of manufacturing cost. Hell, even the 10-core die is ~200 mm², which is right where Zen/Zen+ dies were.
Isn't the chiplet a constant on AMD CPUs? There shouldn't be a difference in size between 4/6/8 cores, so that advantage disappears until AMD has to throw in another chiplet at 12/16 cores.
Posted on Reply
#71
Braggingrights
ARFQuantum computers can't operate in your living room.
I think you're underestimating my living room

Posted on Reply
#72
londiste
Totally: Isn't the chiplet a constant on AMD CPUs? There shouldn't be a difference in size between 4/6/8 cores, so that advantage disappears until AMD has to throw in another chiplet at 12/16 cores.
Chiplets are a constant. They are a big plus for two reasons:
1. Avoiding big dies. Think competing with and overshadowing 18/28-core Intel Xeons, which is what AMD EPYC is currently very successful at.
2. Yields on a cutting edge node. This is largely down to die size.

On smaller dies, chiplet design is not necessarily a benefit.
- Memory latency (and latency to cores on another die) has been talked about a lot, and it is the flip side of the chiplet coin. It is not generally a problem for server CPUs, as the environment, goals and software there are meant to be well parallelized and distributed; there are niches that get hit, but this is very minor. On desktop, a bunch of things do get affected; games are the most obvious one, both because of the way games work and because games are a big thing for the desktop market.
- At the same time, something like 200 mm² is not a large die for a mature manufacturing process, and yields are not a problem there. That is the size of a 10-core Intel Skylake-derived CPU. It is probably relevant to mention that AMD has competed well (and with good prices) with dies that size for the last 3 years. The 8-core Ryzen 3000 has a 125 mm² IO die (which by itself is the same size as an Intel 4-core CPU) and a 75 mm² CCD.
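For a feel of why yields are "largely down to die size", here is a back-of-the-envelope sketch using the simple Poisson yield model Y = e^(-D·A). The defect density is an assumed, illustrative number, and applying the same figure to both 7nm and 14nm is a deliberate simplification:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Assumed defect density, purely illustrative; real fab numbers are
    // secret and differ per node.
    const double d0 = 0.2;  // defects per cm^2

    struct Die { const char* name; double mm2; } dies[] = {
        { "Zen 2 CCD (7nm)",        75.0 },
        { "Intel 10-core (14nm)",  200.0 },
        { "Intel XCC Xeon (14nm)", 694.0 },
    };
    for (const Die& d : dies) {
        double y = std::exp(-d0 * d.mm2 / 100.0);  // Poisson model: Y = e^(-D*A)
        std::printf("%-22s %5.0f mm^2 -> ~%4.1f%% yield\n",
                    d.name, d.mm2, y * 100.0);
    }
    // Yield falls exponentially with area, so small dies win disproportionately.
}
```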
Posted on Reply
#73
ppn
So if Intel shrinks the 11 series to 10nm at double density, a 10-core Skylake will measure ~100 mm².
Posted on Reply
#74
ARF
ppn: So if Intel shrinks the 11 series to 10nm at double density, a 10-core Skylake will measure ~100 mm².
The 11 series is Rocket Lake, and pretty much all the information says it's 14nm.

10nm is scrapped for the S series.
Posted on Reply
#75
Cybrshrk
TheinsanegamerN: The 10 series from Intel is gonna bomb at this rate: bonkers power draw that makes the FX 9590 look civilized, and heat production that even 360 mm rads struggle to handle.
I've read through all 3 pages of comments here, and I see a lot of "speculation" about how these Intel chips are worse than AMD's offerings. But at the end of the day, nobody really cares about "fixes" that hurt performance, or "yields", or anything else you're using to justify the fact that even after 3 (and probably soon 4) CPU generations, AMD still can't take Intel down in what 80% of users really care about: gaming performance.

No matter how you try to spin it, Intel will still offer the highest FPS in a consumer chip for the majority of games, now and into the near future, and until AMD can claim this, people will not care about anything else you use to try to make AMD look like the "right" choice.

The only question for most gamers is: for "X" amount of dollars, which platform will give me the most FPS in the games I play?

It looks like even with all these "vulnerability fixes" and pushing things to their max, the Intel chips will still be the best for gamers, and until this changes AMD will always be fighting an uphill battle.

I've been ready to jump on the Ryzen train since the 1800X, but sadly, when benchmarks came out, the 7700K was the better gaming choice. Now, with a new upgrade looming for me, it still looks like even after 3+ years Intel will be where I go for a maximum gaming performance rig, along with whatever takes the top spot for GPU performance in the upcoming releases from either NVIDIA or AMD.

I'm no fanboy of anything but the highest performance, and nothing so far shows me any choice other than Intel once again.
Posted on Reply