Monday, October 21st 2019

AMD Ryzen 9 3950X Beats Intel Core i9-10980XE by 24% in 3DMark Physics
AMD's upcoming Ryzen 9 3950X socket AM4 processor beats Intel's flagship 18-core processor, the Core i9-10980XE, by a staggering 24 percent in 3DMark Physics, according to a PC Perspective report citing TUM_APISAK. The 3950X is a 16-core/32-thread processor that's drop-in compatible with any motherboard that can run the Ryzen 9 3900X. The i9-10980XE is an 18-core/36-thread HEDT chip that enjoys double the memory bus width of the AMD chip and is based on Intel's "Cascade Lake-X" silicon. The AMD processor doesn't hold a tangible clock-speed advantage: the 3950X has a maximum boost frequency of 4.70 GHz, while the i9-10980XE isn't far behind at 4.60 GHz, but things differ at all-core boost.
When paired with 16 GB of dual-channel DDR4-3200 memory, the Ryzen 9 3950X-powered machine scores 32,082 points in the CPU-intensive physics test of 3DMark. In comparison, the i9-10980XE, paired with 32 GB of quad-channel DDR4-2667 memory, scores just 25,838 points, as reported by PC Perspective. The graphics card is irrelevant to this test. It's pertinent to note that the 3DMark physics test scales across practically any number of CPU cores/threads, and the AMD processor could be benefiting from a higher all-core boost frequency than the Intel chip. Although AMD doesn't mention a number in its specifications, the 3950X is expected to have an all-core boost frequency north of 4.00 GHz, as its 12-core sibling, the 3900X, already offers 4.20 GHz all-core. In contrast, the i9-10980XE has an all-core boost frequency of 3.80 GHz. This difference in boost frequency apparently even negates the additional 2 cores and 4 threads that the Intel chip enjoys, in what is yet another example of AMD having caught up with Intel in the IPC game.
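For a quick sanity check of the numbers above, here is a back-of-the-envelope calculation; this is a sketch rather than part of the source report, and the 4.2 GHz all-core figure for the 3950X is an assumption based on the 3900X, as noted above:

```python
# Back-of-the-envelope check of the reported 3DMark Physics scores.
amd_score = 32_082      # Ryzen 9 3950X (16c/32t, dual-channel DDR4-3200)
intel_score = 25_838    # Core i9-10980XE (18c/36t, quad-channel DDR4-2667)

lead_pct = (amd_score / intel_score - 1) * 100
print(f"3950X lead over i9-10980XE: {lead_pct:.1f}%")   # ~24.2%

# Naive aggregate throughput as cores x all-core clock (GHz-cores).
# 4.2 GHz for the 3950X is an assumption; Intel's 3.8 GHz all-core is from the article.
amd_ghz_cores = 16 * 4.2     # = 67.2
intel_ghz_cores = 18 * 3.8   # = 68.4
print(f"Cores x clock ratio (AMD/Intel): {amd_ghz_cores / intel_ghz_cores:.2f}")  # ~0.98
```

On this naive model the two chips have roughly the same cores-times-clock budget, so the remaining gap would come down to per-clock throughput (plus SMT and memory behavior), which is consistent with the article's IPC argument.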
Sources:
TUM_APISAK (Twitter), PC Perspective
143 Comments on AMD Ryzen 9 3950X Beats Intel Core i9-10980XE by 24% in 3DMark Physics
Intel's top-of-the-line HEDT offerings are struggling against AMD's mainstream CPUs. Oh boy. That's a plot twist if I ever saw one; who would have envisioned this, say, three years ago? Makes sense, though: that 14nm+++ or whatever it is by this point isn't a bottomless well of scalability.
People totally buy HEDT chips and don't overclock them.
Someone can buy a workstation for editing video and have no knowledge whatsoever about overclocking and such; they just got a fast CPU that they use as it is out of the box.
So I'd imagine an all-core boost around 3.6-3.8 GHz, with hopes of 4 GHz, lol. Because it's 32c/64t compared to half that? It's also $1,500 compared to $1,000. The 3960X is $1,000 and 24c/48t. Depends on what you are trying to compare by: core/thread count, HEDT to HEDT, price... etc.
Edit: AMD's HEDT/Threadripper product stack is crazy if the rumors are true: a 64c/128t monster for $2,500 at the top, a $1,000 24c/48t chip at the bottom. Wowzas! What does the Zen 2 server lineup look like?!
Edit 2: read a better source... 16c/32t at the bottom, no price on that.
I was one of those crazy users who, you know, bought a high core-count chip for, well... its cores.
I'm not going to OC and pull a kW from the wall to watch Netflix.
72MB cache beats 24.75MB cache
~148W peak PBO power consumption beats an unrealistically low 165W base consumption before boost.
$750 beats $1000.
PCIe 4.0 beats PCIe 3.0
On top of all that, have Intel closed all of the exposed side-channel attack vulnerabilities in hardware yet, or are they still recycling old architecture with microcode/software patches?
I'm sure there are still plenty of benchmarks where Intel wins, either due to AVX-512 vs. 256-bit AVX2 or simply old code that was optimised with an Intel compiler, but Team Blue simply has no good cards on the table and a pretty weak hand at the moment.
I've overclocked nearly every CPU I've owned since my first upgrade from a 486 to an AM586 way back in the mid-'90s. Earlier this year I had a 7900X that I was planning to de-lid and see how far I could push it, but an opportunity to replace it with this (already de-lidded) 7960X came up, so I took it. All-core turbo on this chip is only 3.6 GHz, which is below my personal threshold for single-thread performance of at least 4 GHz on modern chips. Ideally, closer to 5 GHz, but I'm not pushing this chip to 5 GHz on ambient cooling. So I applied a moderate overclock to reach the desired level of performance, and it is perfectly stable in a system that serves as both a media server and a video production workstation.
Multi-threaded performance of course matters (otherwise why buy HEDT?), but single-thread performance still matters too. For example, the decoder engine for VC-1 is still single-threaded, so when someone queues up a VC-1 video to play on my media server, I want to know the system can fill the buffer fast enough to avoid stuttering/buffering. With a base clock as low as 3 GHz, that's not guaranteed. A boost clock of 3.6 GHz gets you closer, but I would rather have an over-engineered solution and not need the extra performance than vice versa. A rough sketch of that reasoning follows below.
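To illustrate the decode argument, here is a hypothetical sketch. It assumes single-threaded decode throughput scales roughly linearly with clock, and the 1.3x real-time figure at 4.0 GHz is an assumed reference point, not a measurement from the post:

```python
# Hypothetical check: can one core keep a single-threaded VC-1 decode ahead of playback?
REF_CLOCK_GHZ = 4.0   # assumed reference clock
REF_SPEEDUP = 1.3     # assumed decode speed relative to real time at the reference clock

def decode_margin(clock_ghz: float) -> float:
    """Estimated decode speed relative to real time, assuming linear scaling with clock."""
    return REF_SPEEDUP * clock_ghz / REF_CLOCK_GHZ

for clock in (3.0, 3.6, 4.0, 4.4):
    margin = decode_margin(clock)
    verdict = "keeps up" if margin >= 1.0 else "risks buffering"
    print(f"{clock:.1f} GHz -> {margin:.2f}x real time ({verdict})")
```

Under those assumptions a 3.0 GHz core lands just below real time, while 3.6 GHz and above leave a growing safety margin, which is the commenter's point about preferring headroom.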
There are many applications for an HEDT machine that require a lot of horsepower, but reliability and stability are at the top of the list.
A delidded 7980XE with a 4.6 GHz all-core OC sounds great.
However, I won't trust it for, let's say, a four-hour continuous 3D rendering task, day after day.
When money is on the table, overclocking is just another uncertainty and should be avoided.
Like what? Are your delids and OCs supposed to impress me?
www.techpowerup.com/forums/threads/7980xe-delid.251203/
Also, there is a stark difference between a mild OC such as the one I am running and the 4.6 GHz all-core OC you cite, wouldn't you say? Surely you don't believe that mild overclocking (from 3.6 GHz to 4 GHz) is at all analogous to pushing a chip to its limits. I'm not sure why you're responding to me in this way; I'm attempting to engage in a reasoned debate in good faith. It seems that you have taken offense at my statements of opinion, though I have offered none.
And your "mild" overclocking pushes the power consumption of the CPU to 300W + territory.
Then you need an AIO to calm the CPU down, thus introduces another point of failure to the system.
Don't get me wrong.
OC is fun, I do it in my personal rig.
But when stuff needs to be done quick yet reliably, OC should be out of the equation.
Intel disagrees, btw:
www.google.com/search?sxsrf=ACYBGNS88FhvmJm3zw9PuNT3MXucDGY4Iw%3A1571765306301&ei=OjyvXcn7EYjY-wTzv7eQBQ&q=intel+enthusiast&oq=intel+enthusiast&gs_l=psy-ab.3..35i39j0i67j0l6j0i22i30l2.5757.5757..7085...0.2..0.134.134.0j1......0....1..gws-wiz.......0i71.EVZY6DhnEhU&ved=0ahUKEwjJgIKGsrDlAhUI7J4KHfPfDVIQ4dUDCAs&uact=5
Person 1 spends $10,000 (HEDT) on a rig to play games.
Person 2 spends $10,000 (HEDT) on a rig to play games and does stuff like OCing.
Person 3 spends $500 on a rig to play games.
Person 4 spends $500 on a rig to play games and does stuff like OCing.
Which of these people are the enthusiasts, and why?
Not a clue what that's supposed to be. Did you just... google "Intel" and "enthusiast" hoping something would show up?
The point is, the company selling these products bothers to associate their products with the term "enthusiast". Who are you to say they're wrong? Clearly they're spending money marketing specific products to a distinct market segment. They might have a slightly better idea about this than you do...
Why look at Rome when we have Matisse at home?
Simple: Rome has all the stops pulled out. Those things run full tilt against Intel's best offerings (and price vs. price, you can buy two AMD systems for the price of one Intel).
Look at what AMD is doing in servers. Epyc 7452 vs Xeon Gold 6148 for example:
Time in seconds (lower is better): see the benchmark table in the ComputerBase source below.
Or any other Epyc vs Xeon (ignoring prices)
Worst-case AMD outperforms best-case Intel.
And there is also that one use case where AMD made a 127% performance jump over themselves (see the sources below).
Sources:
www.computerbase.de/2019-06/amd-epyc-7452-rome-cpu-32-kerne-benchmark/
www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen
1. Have theoretical conversations about products
2. Help the uninitiated with their problems
3. Have at least something from the current gen (NVMe expansion card)
4. Take courses or read literature on technology
5. Get excited for new product releases (TR3, Comet Lake)
6. Be passionate about your purchases
7. Be informed before you buy anything
8. Do your own testing (if you can) on products instead of watching Youtube videos and taking that as gospel
9. Have more equipment than you will ever need
10. Be biased by your experience in what you recommend, but not in a negative way.
...which can mean anything... But there are a few in the list that would disqualify a lot of people from being an enthusiast. Hilarious convo... only on TPU. o_O