
Ryzen 7 3700X Trades Blows with Core i7-10700, 3600X with i5-10600K: Early ES Review

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
While AMD's return to competition has certainly pushed some extra focus onto more cores, which to some extent is useful, many are forgetting that there were plans for a 6-core Skylake before details of Zen were known to the public. While Intel's 14nm node is very good today, it was terrible in the beginning.


Those who understand how code works know that it's the type of workload which limits the scaling potential. Asynchronous workloads, like large encoding jobs, non-realtime rendering, and many server workloads, can scale nearly linearly until you reach a hardware or OS bottleneck. Synchronous workloads, however, like most applications and certainly games, have more limited scaling potential and will sooner or later reach a point of diminishing returns. Precisely where this limit resides depends on the workload, and it can't really be eliminated even if you wanted to. Games, for instance, can't keep scaling the frame rate up to 16 threads, not today and not 10 years from now. More cores are certainly useful to offload background tasks and let the game run undisturbed, but games will not need more than 2-3 threads to feed the GPU (except edge cases) and a few threads to do game simulation, network, audio, etc. Beyond that, increasing the thread count for the game will only add synchronization overhead, and considering modern game engines run at tick rates of ~100-200 Hz, there is not a lot of CPU time in each iteration.

As any good programmer can tell you: doing multithreading well is hard, and doing multithreading badly is worse than no multithreading at all. And just because an application spawns extra threads doesn't mean it benefits performance.
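To put rough numbers on that scaling limit, here's a minimal sketch of Amdahl's law in Python (the 30% synchronous fraction is an arbitrary assumption for illustration, not a measurement of any real engine):

Code:
def amdahl_speedup(serial_fraction, threads):
    # Ideal speedup when serial_fraction of the work cannot be parallelized.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} threads -> {amdahl_speedup(0.30, n):.2f}x speedup")

# 1.00x, 1.54x, 2.11x, 2.58x, 2.91x, 3.11x: flattening fast, and this
# still ignores the synchronization overhead that grows with thread count.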

Well, it seems the majority of work is done purely by the GPUs, while the CPUs are responsible for supportive tasks like running the OS.

But with such powerful 16-core Ryzen CPUs, programmers can start realising that they can offload the heavy work from the GPU and force it onto the CPU.
Physics, AI, etc. all need CPU acceleration.
 
Joined
Jun 10, 2014
Messages
2,909 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Well, it seems the majority of work is done purely by the GPUs, while the CPUs are responsible for supportive tasks like running the OS.
In modern game engines, all the heavy lifting during rendering is done by the GPU. The CPU only needs to keep the GPU fed, building queues of commands which the GPU processes. Having a dozen threads to build such queues serves no purpose. The trend in GPU architectures is that the GPU can work with less interaction from the CPU, meaning that games in the future will be less CPU-bound.
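As a toy model of that division of labor (plain Python, no real graphics API; the command names and the queue are made up purely for illustration):

Code:
from queue import Queue

submit_queue = Queue()  # stands in for the driver's command submission

def record_frame_commands(frame):
    # Cheap CPU-side work: merely describing what to draw.
    cmds = [("bind_pipeline", "opaque")]
    cmds += [("draw", f"mesh_{i}", frame) for i in range(3)]
    return cmds

# "CPU" side: record a command list per frame, submit it, move on.
for frame in range(2):
    submit_queue.put(record_frame_commands(frame))

# "GPU" side: drains whole command lists and does the heavy lifting.
while not submit_queue.empty():
    for cmd in submit_queue.get():
        print(cmd)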

But with such powerful 16-core Ryzen CPUs, programmers can start realising that they can offload the heavy work from the GPU and force it onto the CPU.
Physics, AI, etc. all need CPU acceleration.
Well, that's pretty much the opposite of acceleration. :rolleyes:
 

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
In modern game engines, all the heavy lifting during rendering is done by the GPU. The CPU only needs to keep the GPU fed, building queues of commands which the GPU processes. Having a dozen threads to build such queues serves no purpose. The trend in GPU architectures is that the GPU can work with less interaction from the CPU, meaning that games in the future will be less CPU-bound.


Well, that's pretty much the opposite of acceleration. :rolleyes:

Have you seen Cinebench, and how the more cores/threads you throw at it, the faster it renders an image?
You must have the games behaving in the same way, otherwise it's a pure waste of silicon.
Just run your games on a GPU, then.
 
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Have you seen Cinebench, and how the more cores/threads you throw at it, the faster it renders an image?
Have you seen how long it takes to render one frame in Cinebench? Now imagine you want to do 60 or 120 of these in any given second. Plus all the management, game logic, physics, animation, etc.
Cinebench is doing ray tracing. Games use far more efficient ways to render a scene, and it is done on the GPU.
 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
Have you seen how long it takes to render one frame in Cinebench? Now imagine you want to do 60 or 120 of these. Plus all the management, game logic, physics, animation, etc.
Cinebench is doing ray tracing. Games use far more efficient ways to render a scene, and it is done on the GPU.

I have seen 3DMark CPU-accelerated footage and it's far faster than 1 FPS in Cinebench. Cinebench is very heavy, pure ray tracing.
Physics needs faster CPUs, and you get physics done on the CPU.

So, no, it's not done on the GPU, and it will not and should not be.



 
Joined
Jun 10, 2014
Messages
2,909 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Have you seen Cinebench, and how the more cores/threads you throw at it, the faster it renders an image?
You must have the games behaving in the same way, otherwise it's a pure waste of silicon.
Just run your games on a GPU, then.
I certainly do, but Cinebench is not realtime rendering.
In a game at 120 FPS, each frame has an 8.3 ms window for everything. In modern OSes like Windows or Linux you can easily get latencies of 0.1-1 ms (or more) due to scheduling, since they are not realtime operating systems. Good luck having your 16 render threads sync up many times within a single frame without causing serious stutter.

You clearly didn't understand the contents of my previous post. I mentioned asynchronous vs. synchronous workloads. Non-realtime rendering jobs deal with "large" work chunks on the second or minute scale, can work independently, and only need to sync up when they need the next chunk. In this case the synchronization overhead becomes negligible, which is why such workloads can scale to an almost arbitrary number of worker threads.

Realtime rendering, however, is a pipeline of operations which needs to be performed within a very tight performance budget on the millisecond scale, and steps of this pipeline are down on the microsecond scale. There, any synchronization overhead becomes very expensive, and such overhead usually grows with thread count.
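To make that budget concrete, a quick back-of-envelope in Python (the per-sync cost and the number of sync points are assumptions for illustration, not measurements):

Code:
fps = 120
budget_ms = 1000 / fps   # ~8.33 ms for everything in one frame

sync_cost_ms = 0.2       # assumed cost of one sync point (scheduler wakeups etc.)
syncs_per_frame = 10     # assumed times the worker threads must meet per frame

overhead_ms = sync_cost_ms * syncs_per_frame
print(f"budget {budget_ms:.2f} ms, sync overhead {overhead_ms:.2f} ms "
      f"({100 * overhead_ms / budget_ms:.0f}% of the frame gone before any real work)")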
 
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Which 3DMark CPU-accelerated footage? The physics test? That is testing physics, which is different from rendering, ray tracing or otherwise.
Physics is both a complex and a simple problem at the same time. Parts of it are better run on the CPU, parts are better run on the GPU.

A GPU is a lot (A LOT) of simple computation devices for parallel compute, and largely a SIMD device.
A CPU is a complex compute device that is a lot more powerful and independent per core.
Both have their strengths and weaknesses, but especially in games they complement each other.

Edit:
By the way, what you see in 3DMark physics tests is still rendered on the GPU, although its load is deliberately kept as low as possible.
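A rough analogy for the SIMD point, in Python with NumPy (an illustration of data-parallel vs. branchy work, not a GPU benchmark):

Code:
import numpy as np

positions = np.random.rand(1_000_000, 3)
velocities = np.random.rand(1_000_000, 3)
dt = 1 / 120

# Data-parallel step: one uniform rule for every particle (GPU-friendly).
positions += velocities * dt

# Branch-heavy step: per-object decisions (CPU-friendly), sketched:
def ai_decide(agent):
    if agent["health"] < 20:
        return "flee"
    return "attack" if agent["enemy_visible"] else "patrol"

print(ai_decide({"health": 50, "enemy_visible": True}))  # attack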
 
Last edited:
Joined
Mar 6, 2011
Messages
155 (0.03/day)
Just because chiplets are advantageous doesn't mean they beat the yields of another node. Also remember that the advantages of chiplets increase with die size. The yields of Intel's 14nm++ are outstanding, and a ~200mm² chip should have no issues there. TSMC's 7nm node is about twice as expensive as their 14nm node, and AMD still needs the IO die on 14nm, so cost should certainly be an advantage for Intel.

You can't be serious. Yields outstanding this far up the clock and voltage / current / power curve on this many (monolithic) cores? 10nm was meant to take over Intel's leading edge designs 3 years ago, and 14nm(++++++++) is stretched more and more thinly. Yields are alleged to be absolutely appalling on their -X and server platforms, and they're at much lower clocks.

Yields have probably never been worse on 'small' chips for a desktop platform than this 10xxx series. How could they have been? These are stretched to absolute breaking point. And why? Because they have no other choice.

It's why you have the obscenity of a desktop 16-core 3950X limited to a strict 65W TDP on Clevo's new laptop workstation platform, while Intel's new top 8-core laptop chip draws 176W on a similar platform.
 
Joined
Oct 2, 2015
Messages
3,013 (0.94/day)
Location
Argentina
System Name Ciel
Processor AMD Ryzen R5 5600X
Motherboard Asus Tuf Gaming B550 Plus
Cooling ID-Cooling 224-XT Basic
Memory 2x 16GB Kingston Fury 3600MHz@3933MHz
Video Card(s) Gainward Ghost 3060 Ti 8GB + Sapphire Pulse RX 6600 8GB
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB
Display(s) AOC Q27G3XMN + Samsung S22F350
Case Cougar MX410 Mesh-G
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W
Mouse EVGA X15
Keyboard VSG Alnilam
Software Windows 11
Joined
Dec 28, 2012
Messages
3,624 (0.86/day)
System Name Skunkworks
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software openSUSE tumbleweed/Mint 21.2
I think the question is how much power this i7-10700 is really drawing when it outperforms the R7 3700X. It's locked from manual overclocking, but that does not mean it will not draw over and above the TDP when it boosts, since we already know how this TDP works for Intel. Moreover, what is stopping people from getting extra performance by overclocking the 3700X, while you can't do the same for the i7-10700? There is no magic bullet here, since this is still pretty much a 14nm chip, no different from a Coffee Lake chip. At this point, the only way they can really beat AMD Zen 2 is by pushing clock speed hard and matching price.
Well, most DIY motherboards already get Ryzen 3000 near its limit. Ryzen 3000 is a total dud when it comes to overclocking: very little headroom and rampant power consumption to maintain an all-core OC, for a whopping ~3% gain over just letting the CPU manage itself.

Ryzen 4000 is a much greater threat than a 3700X OC is. Rumors are pointing to a 15% IPC increase and 300-500 MHz higher clock rates. Even if AMD only managed a 10% IPC jump with the same clocks, or a 5% IPC jump with their CPUs able to hit 4.7-4.8 GHz reliably instead of 4.5-4.6, they would take what remains of Intel's performance crown, especially in games, as AMD's cache changes should dramatically reduce per-core latency, which is what holds Ryzen back in gaming applications.

The 10 series from Intel is gonna bomb at this rate. Bonkers power draw that makes the FX 9590 look civilized, and heat production that even 360mm rads struggle to handle.
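For scale, here's a crude per-core performance model (performance scales roughly with IPC x clock) with those rumored numbers plugged in; the scenarios below just restate the figures from this post, nothing measured:

Code:
def rel_perf(ipc_gain, old_ghz, new_ghz):
    # Per-core performance scales roughly with IPC x clock.
    return (1 + ipc_gain) * (new_ghz / old_ghz)

scenarios = {
    "+15% IPC, 4.6 -> 5.0 GHz": rel_perf(0.15, 4.6, 5.0),
    "+10% IPC, same clocks": rel_perf(0.10, 4.6, 4.6),
    "+5% IPC, 4.6 -> 4.8 GHz": rel_perf(0.05, 4.6, 4.8),
}
for name, perf in scenarios.items():
    print(f"{name}: ~{100 * (perf - 1):.0f}% faster per core")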
 
Joined
Apr 12, 2013
Messages
6,991 (1.70/day)
It's already a Hindenburger at this point, might as well deflate it :nutkick:
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.38/day)
Intel should be able to retake it any time with Willow Cove. I'm waiting for DDR5 anyway. So the real fight is 5nm Intel vs 3nm TSMC.
 
Joined
Jun 10, 2014
Messages
2,909 (0.79/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Rumors are pointing to a 15% IPC increase and 300-500 MHz higher clock rates. Even if AMD only managed a 10% IPC jump with the same clocks, or a 5% IPC jump with their CPUs able to hit 4.7-4.8 GHz reliably instead of 4.5-4.6, they would take what remains of Intel's performance crown, especially in games, as AMD's cache changes should dramatically reduce per-core latency, which is what holds Ryzen back in gaming applications.
300-500 MHz higher sustained clocks is unlikely. AMD have themselves stated that they expect clock speeds to decrease over the coming years.

I'm not going to speculate about Zen 3's IPC gains, especially when such rumors are either completely bogus or based on cherry-picked benchmarks which have nothing to do with actual IPC. I've seen estimates ranging from ~7-8% to over 20% (+/- 5%), and such claims are likely BS, because anyone who actually knows would know precisely, not give a large range, as IPC is already an averaged number. And it's very unlikely that anyone outside AMD actually knows until the last few months.

The good news for AMD and gaming is that the CPU only has to be fast enough to feed the GPU, and as we can already see with Intel's CPUs pushing ~5 GHz, the gains are really minimal compared to the Skylakes boosting to ~4.5 GHz. Beyond that point you only really gain some more stable minimum frame rates, except for edge cases of course. If Intel today launched a new CPU with 20% higher performance per core, it wouldn't be much faster than the i9-9900K in gaming (1440p), at least not until games all of a sudden become much more demanding on the CPU side while feeding the GPUs, which is not likely. Zen 2 is already fairly close to Skylake in gaming, so Zen 3 should have a good chance to achieve parity, even with modest gains. It really comes down to what kind of areas are improving. Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, out-of-order window, etc., while other areas like FPU performance mean much less for gaming. As I said, IPC is already an averaged number across a wide range of workloads, which means a 10% gain in IPC doesn't mean a 10% gain in everything; it could easily mean a 20% gain in video encoding and a 2% gain in gaming, etc.
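To illustrate that averaging effect, a quick sketch with made-up per-workload gains (none of these numbers are real benchmark results):

Code:
from math import prod

# Hypothetical per-workload gains for some new core:
gains = {"video encoding": 1.20, "compression": 1.12,
         "compilation": 1.08, "gaming": 1.02}

geomean = prod(gains.values()) ** (1 / len(gains))
print(f"headline 'IPC' gain: ~{100 * (geomean - 1):.0f}%")  # ~10%
# ...yet the gaming entry above only moved 2%.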

Intel should be able to retake it any time with Willow Cove. I'm waiting for DDR5 anyway. So the real fight is 5nm Intel vs 3nm TSMC.
I'm just curious, why wait for DDR5 of all things?
If you really need memory bandwidth, just buy one of the HEDT platforms, and you'll have plenty. Most non-server workloads aren't usually limited by memory bandwidth anyway, so that would be the least of my concerns for a build.

And then there is always the next big one…
I'm more interested in architectural improvements than nodes. Now that CPUs of 8-12 cores are already widely available as "mainstream", the biggest noticeable gain to end-users would be performance per core.
 
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, out-of-order window, etc., while other areas like FPU performance mean much less for gaming.
Memory latency? Renoir should give an answer soon if that is the case.
You can't be serious. Yields outstanding this far up the clock and voltage / current / power curve on this many (monolithic) cores? 10nm was meant to take over Intel's leading edge designs 3 years ago, and 14nm(++++++++) is stretched more and more thinly. Yields are alleged to be absolutely appalling on their -X and server platforms, and they're at much lower clocks.

Yields have probably never been worse on 'small' chips for a desktop platform than this 10xxx series. How could they have been? These are stretched to absolute breaking point. And why? Because they have no other choice.

It's why you have the obscenity of a desktop 16-core 3950X limited to a strict 65W TDP on Clevo's new laptop workstation platform, while Intel's new top 8-core laptop chip draws 176W on a similar platform.
All the 5.x numbers are marketing. These chips will do 5.0 or a little above and have done for a long while now. Intel is just content pushing higher voltages to chips, following AMD's example. Chips that do not clock as high will be sold as non-K models or lower tier models.

Yields on 14nm with chips this size are excellent; there is no doubt about that.
Servers are different. LCC (10-core) is 325 mm², HCC (18-core) is 485 mm² and XCC (28-core) is 694 mm². LCC yields are not a big problem, HCC is so-so and XCC yields are definitely a problem.

That 3950X score is a 65W ECO Mode score, meaning "65W TDP" - that is 88-90W in practice.
10980X 107W PL2 and 56s tau are a disgrace, but not that unexpected.
There is a huge difference there; why overblow the numbers to this degree is beyond me.
 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
300-500 MHz higher sustained clocks is unlikely. AMD have themselves stated that they expect clock speeds to decrease over the coming years.

I'm not going to speculate about Zen 3's IPC gains, especially when such rumors are either completely bogus or based on cherry-picked benchmarks which have nothing to do with actual IPC. I've seen estimates ranging from ~7-8% to over 20% (+/- 5%), and such claims are likely BS, because anyone who actually knows would know precisely, not give a large range, as IPC is already an averaged number. And it's very unlikely that anyone outside AMD actually knows until the last few months.

The good news for AMD and gaming is that the CPU only has to be fast enough to feed the GPU, and as we can already see with Intel's CPUs pushing ~5 GHz, the gains are really minimal compared to the Skylakes boosting to ~4.5 GHz. Beyond that point you only really gain some more stable minimum frame rates, except for edge cases of course. If Intel today launched a new CPU with 20% higher performance per core, it wouldn't be much faster than the i9-9900K in gaming (1440p), at least not until games all of a sudden become much more demanding on the CPU side while feeding the GPUs, which is not likely. Zen 2 is already fairly close to Skylake in gaming, so Zen 3 should have a good chance to achieve parity, even with modest gains. It really comes down to what kind of areas are improving. Intel's success in gaming is largely due to the CPU's front-end: prefetching, branch prediction, out-of-order window, etc., while other areas like FPU performance mean much less for gaming. As I said, IPC is already an averaged number across a wide range of workloads, which means a 10% gain in IPC doesn't mean a 10% gain in everything; it could easily mean a 20% gain in video encoding and a 2% gain in gaming, etc.


I'm just curious, why wait for DDR5 of all things?
If you really need memory bandwidth, just buy one of the HEDT platforms, and you'll have plenty. Most non-server workloads aren't usually limited by memory bandwidth anyway, so that would be the least of my concerns for a build.

And then there is always the next big one…
I'm more interested in architectural improvements than nodes. Now that CPUs of 8-12 cores are already widely available as "mainstream", the biggest noticeable gain to end-users would be performance per core.

Memory latency? Renoir should give an answer soon if that is the case.

Intel's last ace is the ring bus, which is limited to roughly 10-core processors, and the bad Windows scheduler.

AMD's Zen 3 will likely have an 8-core CCX, so the incredible amounts of latency added by jumping from core to core across different CCXs will be gone.

And Intel will be RIP.


That 3950X score is a 65W ECO Mode score, meaning "65W TDP" - that is 88-90W in practice.

It likely boosts up to the figure you mentioned, but then settles into its targeted limit of 65 watts!
 
Last edited:
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
It likely boosts up to the figure you mentioned, but then settles into its targeted limit of 65 watts!
No. Ryzen 3000 runs at PPT = 135% of TDP unless any other limits are hit.
What you are talking about is Intel's system, where PL1 = TDP, PL2 = the boost power limit, and Tau is the time the CPU is allowed to boost above PL1.

Both are simplified from how they actually function but that is the gist of it.
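In numbers (the 1.35x factor is the AMD behavior described above; the Intel PL1/PL2/Tau values below are hypothetical placeholders, not any specific SKU's limits):

Code:
def amd_ppt(tdp_w):
    # Ryzen 3000: sustained package power limit (PPT) = 1.35 x TDP.
    return 1.35 * tdp_w

print(amd_ppt(65))   # 87.75 W: the "65W TDP is really 88-90W" point
print(amd_ppt(105))  # 141.75 W: the familiar ~142 W PPT figure

# Intel-style limits: PL2 for up to Tau seconds, then PL1 sustained.
pl1_w, pl2_w, tau_s = 125, 250, 56   # hypothetical placeholder values
load_s = 300                         # length of an all-core load
avg_w = (pl2_w * tau_s + pl1_w * (load_s - tau_s)) / load_s
print(f"~{avg_w:.0f} W average over {load_s}s")  # ~148 W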
 

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
No. Ryzen 3000 runs at PPT = 135% of TDP unless any other limits are hit.
What you are talking about is Intel's system, where PL1 = TDP, PL2 = the boost power limit, and Tau is the time the CPU is allowed to boost above PL1.

Both are simplified from how they actually function but that is the gist of it.

This is how the Ryzen 9 4900HS with its 35-watt TDP behaves during the HU review, watch from 18:33 on:

[screenshot of the 4900HS's power behavior from the HU review]


 
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
The 4900HS is different from desktop Ryzen 3000 CPUs in this regard.
 
Joined
Mar 25, 2020
Messages
52 (0.03/day)
System Name THE FORTRESS
Processor INTEL CORE i7-10700K
Motherboard MSI MPG Z490 GAMING PLUS
Cooling BE QUIET DARK ROCK 4
Memory CORSAIR VENGEANCE DDR4 3000MHz 16GB
Video Card(s) MSI RTX 2070 SUPER GAMING X TRIO 8GB
Storage SAMSUNG 970 PRO 1TB - CRUCIAL X8 SSD 1TB - ADATA HD770G 1TB
Display(s) SAMSUNG QA65Q7FN 4K 65 INCH TV (120HZ @ 1440p IN PC MODE)
Case BE QUIET DARK BASE PRO 900 REVISION 2
Audio Device(s) SOUND BLASTERX AE-5 - LOGITECH Z-5500 SPEAKERS - SENNHEISER HD598SE CANS
Power Supply SEASONIC PRIME 750W PLATINUM
Mouse RAZER DEATHADDER ELITE
Keyboard LOGITECH K800
Software WIN10 PRO 64
Benchmark Scores STABILITY SILENCE... SPEED
Remember when 10 GHz was on a roadmap and seemed just around the corner? Bring on the quantum computers.
 

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
Remember when 10 GHz was on a roadmap and seemed just around the corner? Bring on the quantum computers.

Quantum computers can't operate in your living room.
In the best case, you may tap a little bit of their computing power over the cloud... but I doubt that will happen anytime soon.
Our internet connections are too slow.

And you can always take normal silicon semiconductor chips and build supercomputers for the very same purpose.

For now, AMD with Zen is your solution with multiple cores.

AMD CTO Mark Papermaster: More Cores Coming in the 'Era of a Slowed Moore's Law'
 
Joined
Nov 21, 2010
Messages
2,297 (0.46/day)
Location
Right where I want to be
System Name Miami
Processor Ryzen 3800X
Motherboard Asus Crosshair VII Formula
Cooling Ek Velocity/ 2x 280mm Radiators/ Alphacool fullcover
Memory F4-3600C16Q-32GTZNC
Video Card(s) XFX 6900 XT Speedster 0
Storage 1TB WD M.2 SSD/ 2TB WD SN750/ 4TB WD Black HDD
Display(s) DELL AW3420DW / HP ZR24w
Case Lian Li O11 Dynamic XL
Audio Device(s) EVGA Nu Audio
Power Supply Seasonic Prime Gold 1000W+750W
Mouse Corsair Scimitar/Glorious Model O-
Keyboard Corsair K95 Platinum
Software Windows 10 Pro
"Won't" might be a thing. Intel definitely can if they want to. Intel has smaller dies and more margins to cut especially if you consider Intel keeps the manufacturing profit as well which goes to TSMC for AMD CPUs.
Based on pictures in the source article, Intel is still/again using the 6-core dies for the 10600K. Think about it this way: Ryzen 3000 CPUs are a 125mm² 12nm IO die plus a 75mm² 7nm CCD. Intel's 6-core is a 149mm² 14nm die. Intel's 8-core die is 175mm², which should still be very good in terms of manufacturing cost. Hell, even the 10-core die is ~200mm², which is right where Zen/Zen+ dies were.

Aren't the chiplets a constant on AMD CPUs? There shouldn't be a difference in size between 4/6/8 cores, so that advantage disappears until AMD has to throw in another chiplet at 12/16 cores.
 
Joined
Mar 25, 2020
Messages
52 (0.03/day)
System Name THE FORTRESS
Processor INTEL CORE i7-10700K
Motherboard MSI MPG Z490 GAMING PLUS
Cooling BE QUIET DARK ROCK 4
Memory CORSAIR VENGEANCE DDR4 3000MHz 16GB
Video Card(s) MSI RTX 2070 SUPER GAMING X TRIO 8GB
Storage SAMSUNG 970 PRO 1TB - CRUCIAL X8 SSD 1TB - ADATA HD770G 1TB
Display(s) SAMSUNG QA65Q7FN 4K 65 INCH TV (120HZ @ 1440p IN PC MODE)
Case BE QUIET DARK BASE PRO 900 REVISION 2
Audio Device(s) SOUND BLASTERX AE-5 - LOGITECH Z-5500 SPEAKERS - SENNHEISER HD598SE CANS
Power Supply SEASONIC PRIME 750W PLATINUM
Mouse RAZER DEATHADDER ELITE
Keyboard LOGITECH K800
Software WIN10 PRO 64
Benchmark Scores STABILITY SILENCE... SPEED
Joined
Feb 3, 2017
Messages
3,598 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Aren't the chiplets a constant on AMD CPUs? There shouldn't be a difference in size between 4/6/8 cores, so that advantage disappears until AMD has to throw in another chiplet at 12/16 cores.
Chiplets are a constant. Chiplets are a big plus for two reasons:
1. Avoiding big dies. Think competing with and overshadowing 18/28-core Intel Xeons, which is what AMD EPYC is currently very successful at.
2. Yields on a cutting edge node. This is largely down to die size.

On smaller dies, a chiplet design is not necessarily a benefit.
- Memory latency (and latency to cores on another die) has been talked about a lot, and this is the flip side of the chiplet coin. It is generally not a problem for server CPUs, as the environment, goals and software for those are meant to be well parallelized and distributed. There are niches that get hit, but this is very minor. On desktop, a bunch of things do get affected; games are the most obvious one, both due to the way games work and because games are a big thing for the desktop market.
- At the same time, something like 200mm² is not a large die for an old manufacturing process, and yields are not a problem with these (see the rough yield model below). This is the size of a 10-core Intel Skylake-derived CPU. It is probably relevant to mention that AMD has been competing well (and with good prices) with dies that size for the last 3 years. AMD's 8-core Ryzen 3000 has a 125mm² IO die (which by itself is the same size as an Intel 4-core CPU) and a 75mm² CCD.
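Here's that yield intuition with the classic Poisson die-yield model, yield = exp(-defect density x area); the defect densities are illustrative guesses, not foundry data:

Code:
from math import exp

def die_yield(defects_per_cm2, area_mm2):
    # Poisson model: fraction of dies that catch zero defects.
    return exp(-defects_per_cm2 * area_mm2 / 100)  # 100 mm^2 per cm^2

d_young, d_mature = 0.5, 0.1  # assumed defects/cm^2: young 7nm vs mature 14nm

print(f"75 mm^2 CCD, young node:    {die_yield(d_young, 75):.0%}")    # ~69%
print(f"200 mm^2 die, young node:   {die_yield(d_young, 200):.0%}")   # ~37%
print(f"200 mm^2 die, mature node:  {die_yield(d_mature, 200):.0%}")  # ~82%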
 

ppn

Joined
Aug 18, 2015
Messages
1,231 (0.38/day)
So if Intel shrinks the 11 series to 10nm at double density, a 10-core Skylake would measure ~100mm².
 

ARF

Joined
Jan 28, 2020
Messages
4,356 (2.67/day)
Location
Ex-usa | slava the trolls
So if Intel shrinks the 11 series to 10nm at double density, a 10-core Skylake would measure ~100mm².

The 11 series is Rocket Lake, and pretty much all information says it's 14nm.

10nm is scrapped for the S series.
 