
AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,149 (2.90/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
We should be clear that this is 9% better performance with an ~8% reduction in base clock and ~4% drop in boost clocks. This isn't 9% better performance at the same clocks.
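Rough, back-of-the-envelope math to illustrate (assuming performance scaled linearly with clock, which it doesn't exactly): 1.09 / 0.96 ≈ 1.14 against the ~4% lower boost clock, or 1.09 / 0.92 ≈ 1.18 against the ~8% lower base clock, so the implied per-clock gain from the cache would be somewhere in the 14-18% range, not 9%.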
 
Joined
May 2, 2017
Messages
7,762 (2.97/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
We should be clear that this is 9% better performance with an ~8% reduction in base clock and ~4% drop in boost clocks. This isn't 9% better performance at the same clocks.
That's a really important distinction.
Wanna bet it's an engineering sample? We don't know any facts, nor what build it's in, so this shouldn't be taken as fact; wait for the real reviews.
It wouldn't be recognized as a 5800X3D if it was an ES - those don't match the hardware IDs of retail CPUs.
 

SL2

Joined
Jan 27, 2006
Messages
1,982 (0.29/day)
I'd really wait for the gaming benchmarks.

It makes no sense to expect a 20% gaming uplift when the single-core Geekbench score (the result that usually represents gaming speed quite well) shows no uplift, even a regression.
Since when did Geekbench become relevant for gamers?
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,149 (2.90/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
That's a really important distinction.
It really is. If you want to go even further with this, just take a look at the EPYC chips. Look at the first 4 pages of this review over at Phoronix for the EPYC 7773X. If you think 96MB of cache helps, imagine 768MB of it. Some applications improve performance by an absolutely massive margin just by using that extra cache, without using extra power. It's insane. Granted, these are HPC applications, but it goes to show how much cache can help.
 
Joined
May 11, 2018
Messages
1,025 (0.46/day)
But the leaked Geekbench scores of Alder Lake predicted quite well that Intel has a competitive processor (if you disregard the downsides), in gaming and in productivity.
 

SL2

Joined
Jan 27, 2006
Messages
1,982 (0.29/day)
But the leaked Geekbench scores of Alder Lake predicted quite well that Intel has a competitive processor (if you disregard the downsides), in gaming and in productivity.
That doesn't mean anything. Alder was improved in more traditional ways. X3D is the same CPU as before, no other improvements besides the cache.
How would we know for sure that GB would be able to reflect the performance? We don't.
X3D might still be crap, but Gbench isn't the way to figure that out.

There are numerous examples of GB contradicting reality.

Have a look at the top list at Gbench. The first 90 entries are EPYCs only, but we all know that there are quite a few Core/Ryzen CPUs that would beat them in gaming.
 
Last edited:
Joined
May 20, 2020
Messages
1,316 (0.88/day)
A single-digit performance uplift? Nothing special, but since there is more data kept closer to the processor, there might still be more benefits to be had, if the price is right of course.
 
Joined
May 2, 2017
Messages
7,762 (2.97/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
But the leaked Geekbench scores of Alder Lake predicted quite well that Intel has a competitive processor (if you disregard the downsides), in gaming and in productivity.
Correlation does not imply causation. Just because one architecture sees an equal increase in two workloads doesn't mean that those two workloads are utilizing the same hardware in the same ways, especially for a system as complex as CPUs today.
A single-digit performance uplift? Nothing special, but since there is more data kept closer to the processor, there might still be more benefits to be had, if the price is right of course.
Context: It's the exact same architecture, at lower clocks, in a different type of workload than what it's being marketed towards. Hardly surprising. We'll have to wait for gaming benchmarks to tell what the change in gaming performance is like.
 
Joined
May 11, 2018
Messages
1,025 (0.46/day)
We should be clear that this is 9% better performance with an ~8% reduction in base clock and ~4% drop in boost clocks. This isn't 9% better performance at the same clocks.

The article doesn't point out that in single-core there is actually a regression, not 9% better performance.

And the stated base and boost clocks of Ryzen processors don't really correspond to the clocks the processors actually run at under single-core and multi-core loads; they're more of an abstract idea, and one that can change, which makes comparisons like that very hard.
 
Joined
Sep 20, 2021
Messages
290 (0.29/day)
Processor Ryzen 7 7900x
Motherboard Asrock B650E PG Riptide WiFi
Cooling Underfloor CPU cooling
Memory 2x32GB 6600
Video Card(s) RX 7900 XT OC Edition
Storage Kingston Fury Renegade 1TB, Seagate Exos 12TB
Display(s) MSI Optix MAG301RF 2560x1080@200Hz
Case Phanteks Enthoo Pro
Power Supply NZXT C850 850W Gold
Mouse Bloody W95 Max Naraka
It depends on the game and what the game scales with. Many newer games like Troy and Cyberpunk prefer bandwidth over latency; many older games care more about latency.


No, in some games they will be close, in other games the 5800X3D will be 20% faster. AMD compared the 5900X with the 5800X3D in their marketing slides.
That's very interesting. Would the 5800X3D be faster than a fine-tuned 5900X (or 5800X), and by how much?
In a few titles the 5800X3D will still be faster, but would it be worth it overall when it will be slower in everything else?
 
Joined
May 11, 2018
Messages
1,025 (0.46/day)
That doesn't mean anything. Alder was improved in more traditional ways. X3D is the same CPU as before, no other improvements besides the cache.
How would we know for sure that GB would be able to reflect the performance? We don't.
X3D might still be crap, but Gbench isn't the way to figure that out.

There are numerous examples of GB contradicting reality.

Have a look at the top list at Gbench. The first 90 entries are EPYCs only, but we all know that there are quite a few Core/Ryzen CPUs that would beat them in gaming.

And why, for God's sake, would you look at multicore synthetic test results for gaming?
 
Joined
Jan 18, 2020
Messages
713 (0.44/day)
Supposedly the extra cache is only useful in a very narrow range of applications, including gaming. So we'll see when gaming benchmarks come out what benefit the cache has.

That being said, the processor only matters in low-res / high-refresh-rate environments anyway. At 4K with full quality, pretty much any CPU from the last 5 years or even longer will produce similar results.
 

SL2

Joined
Jan 27, 2006
Messages
1,982 (0.29/day)
And why, for God's sake, would you look at multicore synthetic test results for gaming?
lol, really? Are we having that much trouble following a thread with text in it? You started it. Have you even read the OP?

You said, without specifying which GB benchmark:
But the leaked Geekbench scores of Alder Lake predicted quite well that Intel has a competitive processor (if you disregard the downsides), in gaming and in productivity.
Then I showed an example where GB doesn't predict gaming performance well, trying to point out how unreliable GB is to begin with, and now you're having issues with that? :roll:

The OP is about 9 % higher numbers in multithread GB, soo... what's the problem? Did someone just hijack your TPU account?

GB is crap for most things on TPU.
 
Joined
May 11, 2018
Messages
1,025 (0.46/day)
lol, really? Are we having that much trouble following a thread with text in it? You started it. Have you even read the OP?

You said, without specifying which GB benchmark:

Then I showed an example where GB doesn't predict gaming performance well, trying to point out how unreliable GB is to begin with, and now you're having issues with that? :roll:

The OP is about 9 % higher numbers in multithread GB, soo... what's the problem? Did someone just hijack your TPU account?

GB is crap for most things on TPU.

Before going into childish personal attacks: no one ever looks at synthetic multi-core results and expects gaming results from them, and hasn't for the entire lifetime of multi-core processors, 17 years. Everyone reads my comment about Alder Lake scores as single-core for gaming and multi-core for productivity. If you don't, don't blame it on me.

I stated that a 9% uplift in multi-core AND A REGRESSION in the single-core Geekbench result is a very bad prognosis for the gaming increase that AMD is promising, because multi-core synthetic results are still largely irrelevant in gaming. It doesn't matter which benchmarking tool you use.

Could Geekbench be relatively unaffected by the larger cache in the single-core test while games benefit from it greatly? It's possible; I have no idea how far a synthetic test is from a real-world load like a game. I'd rather expect the reverse: the benchmark benefitting more, since it would fit in cache, while real-world usage still struggles.

I imagine not all games will see this increase equally; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,251 (2.63/day)
Location
Ex-usa | slava the trolls
9% is a negligible performance improvement and literally very disappointing, in the ballpark of simple rebrands.
No user would ever notice a better user experience with this.

Why does AMD even waste its time and resources on this instead of pulling the next-generation Zen 4 launch forward?
 
Joined
May 31, 2016
Messages
4,352 (1.47/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
I imagine not all games will see this increase equally; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
Hmm, that is only one side of the coin. A CPU stalls when it's waiting for data, and this happens on any CPU regardless of its architecture or clock frequency. Higher cache capacity reduces those stalls, so you get more CPU performance: with two otherwise identical CPUs, the one with the larger cache and lower frequency can come out on top. Not everything is frequency, you know. Single-core performance might be lower (lower frequency, which is obvious), but with the larger cache the stalls don't happen nearly as often, so the CPU gets through its tasks faster. And, as has been proven, in some cases significantly faster when the cache capacity is increased.
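(Purely to illustrate the stall point, and not something from this thread: here is a minimal pointer-chasing sketch. The loop and instruction count are identical at every size; only the working set grows, and once it no longer fits in cache the time per load jumps because the core sits stalled waiting on memory. File name, sizes and step counts are arbitrary.)

Code:
/* Minimal pointer-chasing sketch (illustrative only).
 * Every load depends on the previous one, so the core can't hide miss
 * latency; once the working set outgrows the caches, ns/load jumps.
 * Build (Linux/glibc): cc -O2 chase.c -o chase
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static unsigned long long rng_state = 0x9E3779B97F4A7C15ULL;

static unsigned long long xorshift64(void)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

/* Average time of one dependent load over `steps` hops through a random cycle. */
static double chase(size_t n, size_t steps)
{
    size_t *next = malloc(n * sizeof *next);
    if (!next)
        return -1.0;

    /* Sattolo's algorithm: a random permutation that is one big cycle,
     * so the traversal touches all n elements and the hardware prefetcher
     * can't guess the next address. */
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)(xorshift64() % i);
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < steps; i++)
        p = next[p];                 /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = p;        /* keep the loop from being optimized away */
    (void)sink;
    free(next);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;       /* ns per dependent load */
}

int main(void)
{
    /* Working sets from 256 KB to 128 MB (8 bytes per element). */
    for (size_t n = (size_t)1 << 15; n <= (size_t)1 << 24; n <<= 1)
        printf("%8zu KB  %6.2f ns/load\n",
               n * sizeof(size_t) / 1024, chase(n, (size_t)1 << 25));
    return 0;
}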
 
Joined
Oct 12, 2005
Messages
683 (0.10/day)
Before going into childish personal attacks: no one ever looks at synthetic multi-core results and expects gaming results from them, and hasn't for the entire lifetime of multi-core processors, 17 years. Everyone reads my comment about Alder Lake scores as single-core for gaming and multi-core for productivity. If you don't, don't blame it on me.

I stated that a 9% uplift in multi-core AND A REGRESSION in the single-core Geekbench result is a very bad prognosis for the gaming increase that AMD is promising, because multi-core synthetic results are still largely irrelevant in gaming. It doesn't matter which benchmarking tool you use.

Could Geekbench be relatively unaffected by the larger cache in the single-core test while games benefit from it greatly? It's possible; I have no idea how far a synthetic test is from a real-world load like a game. I'd rather expect the reverse: the benchmark benefitting more, since it would fit in cache, while real-world usage still struggles.

I imagine not all games will see this increase equally; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
Cache scaling benchmarks are available on the internet. Hardware Unboxed made a great video showing how it was really the added cache on the higher Intel SKUs that helped gaming performance, and not so much the increased core count.

Geekbench is an aggregate of multiple workloads and cannot be used to extrapolate to another specific workload. It can be used as a global index, but it has little to no correlation to gaming.

Games are semi-large loops (one per frame) that need to run as fast as possible. That's somewhat different from many workloads that aren't that large or aren't that repetitive.

Games like CS:GO had their main loop mostly fitting into the L3 cache of Zen 3, giving them a huge performance boost way above the average IPC gain in benchmarks like GB. With this much cache, it's quite possible those gains will extend to many more games.

But that is mostly a debate for competitive gamers who play at 1080p low with a high-refresh screen. Most average gamers want maximum details at maximum resolution, and they will be GPU-limited anyway. That is also one of the main reasons why, for most people, ADL doesn't consume a huge amount of power in gaming: it has to wait all the time for the GPU to finish rendering the frame.

Anyway, you want a CPU fast enough that it won't be a problem, but beyond that you want to be GPU-limited, since that's generally less spiky than being CPU-limited.
 
Last edited:

Keullo-e

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
11,477 (2.72/day)
Location
Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550M Aorus Elite
Cooling Alphacool Eisbaer w/ 2x 240 rads
Memory 48GB Kingston Fury DDR4-3200
Video Card(s) Asus GeForce RTX 3080 TUF w/ EKWB FC
Storage ~4TB SSDs + 6TB external HDDs
Display(s) Acer 27" 4K120 + Lenovo 32" 4K60
Case Define Mini C
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech G MX518 + Asus TUF P1 mousepad
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis remastered at 4K
I'd really wait for the gaming benchmarks.

It makes no sense to expect a 20% gaming uplift when the single-core Geekbench score (the result that usually represents gaming speed quite well) shows no uplift, even a regression.
Exactly. When they market it as the fastest gaming processor, the gaming tests are what interest me the most.
 
Joined
Feb 20, 2020
Messages
9,340 (5.87/day)
Location
Louisiana
System Name Ghetto Rigs z490|x99|Acer 17 Nitro 7840hs/ 5600c40-2x16/ 4060/ 1tb acer stock m.2/ 4tb sn850x
Processor 10900k w/Optimus Foundation | 5930k w/Black Noctua D15
Motherboard z490 Maximus XII Apex | x99 Sabertooth
Cooling oCool D5 res-combo/280 GTX/ Optimus Foundation/ gpu water block | Blk D15
Memory Trident-Z Royal 4000c16 2x16gb | Trident-Z 3200c14 4x8gb
Video Card(s) Titan Xp-water | evga 980ti gaming-w/ air
Storage 970evo+500gb & sn850x 4tb | 860 pro 256gb | Acer m.2 1tb/ sn850x 4tb| Many2.5" sata's ssd 3.5hdd's
Display(s) 1-AOC G2460PG 24"G-Sync 144Hz/ 2nd 1-ASUS VG248QE 24"/ 3rd LG 43" series
Case D450 | Cherry Entertainment center on Test bench
Audio Device(s) Built in Realtek x2 with 2-Insignia 2.0 sound bars & 1-LG sound bar
Power Supply EVGA 1000P2 with APC AX1500 | 850P2 with CyberPower-GX1325U
Mouse Redragon 901 Perdition x3
Keyboard G710+x3
Software Win-7 pro x3 and win-10 & 11pro x3
Benchmark Scores Are in the benchmark section
Hi,
Wake me up when there's a real gaming benchmark run on it :sleep:
 
Joined
Feb 21, 2006
Messages
2,045 (0.31/day)
Location
Toronto, Ontario
System Name The Expanse
Processor AMD Ryzen 7 5800X3D
Motherboard Asus Prime X570-Pro BIOS 5013 AM4 AGESA V2 PI 1.2.0.Ca.
Cooling Corsair H150i Pro
Memory 32GB GSkill Trident RGB DDR4-3200 14-14-14-34-1T (B-Die)
Video Card(s) AMD Radeon RX 7900 XTX 24GB (24.6.1)
Storage WD SN850X 2TB / Corsair MP600 1TB / Samsung 860Evo 1TB x2 Raid 0 / Asus NAS AS1004T V2 14TB
Display(s) LG 34GP83A-B 34 Inch 21: 9 UltraGear Curved QHD (3440 x 1440) 1ms Nano IPS 160Hz
Case Fractal Design Meshify S2
Audio Device(s) Creative X-Fi + Logitech Z-5500 + HS80 Wireless
Power Supply Corsair AX850 Titanium
Mouse Corsair Dark Core RGB SE
Keyboard Corsair K100
Software Windows 10 Pro x64 22H2
Benchmark Scores 3800X https://valid.x86.fr/1zr4a5 5800X https://valid.x86.fr/2dey9c
Here is my PBO-tuned 5800X score:

[Attachment: geekbench52.PNG]
 
Joined
Aug 9, 2019
Messages
1,555 (0.87/day)
Processor 7800X3D 2x16GB CO
Motherboard Asrock B650m HDV
Cooling Peerless Assassin SE
Memory 2x16GB DR A-die@6000c30 tuned
Video Card(s) Asus 4070 dual OC 2610@915mv
Storage WD blue 1TB nvme
Display(s) Lenovo G24-10 144Hz
Case Corsair D4000 Airflow
Power Supply EVGA GQ 650W
Software Windows 10 home 64
Benchmark Scores Superposition 8k 5267 Aida64 58.5ns
That's very interesting. Would the 5800X3D be faster than a fine-tuned 5900X (or 5800X), and by how much?
In a few titles the 5800X3D will still be faster, but would it be worth it overall when it will be slower in everything else?
If you fine-tune both, I'm unsure. The large cache makes RAM tuning less important, and if PBO+CO is not available on the 5800X3D, that will at least let the 5900X get closer.
 
Joined
Oct 12, 2005
Messages
683 (0.10/day)
If fine-tuning were something everyone could get, it would be part of the stock performance. But stock is just a guaranteed baseline, and there will always be something left on the table.
 
Joined
Nov 13, 2007
Messages
10,304 (1.70/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 Pro
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
I have a feeling that if it's this good in Geekbench, it will be a monster in games.
 