Wednesday, March 23rd 2022

AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

Someone with access to an AMD Ryzen 7 5800X3D processor sample posted some of the first Geekbench 5 performance numbers for the chip, where it ends up 9% faster than the Ryzen 7 5800X on average. AMD claimed that the 5800X3D is "the world's fastest gaming processor," with the 3D Vertical Cache (3D V-Cache) technology offering gaming performance uplifts over the 5800X akin to a new generation, despite being based on the same "Zen 3" microarchitecture and running at lower clock speeds. The Ryzen 7 5800X3D is shown posting scores of 1633 points 1T and 11250 points nT in one run, and 1637/11198 points in the other, when paired with 32 GB of dual-channel DDR4-3200 memory.

These are about 9% faster than a typical 5800X score on this benchmark. AMD's own gaming performance claims have the 5800X3D scoring an uplift of above 20% over the 5800X, closing the gap with the Intel Core i9-12900K. The 3D V-Cache technology debuted earlier this week with the EPYC "Milan-X" processors, where the additional cache provides huge performance gains for applications with large datasets. AMD isn't boasting too much about the multi-threaded productivity performance of the 5800X3D because this is ultimately an 8-core/16-thread processor that's bound to lose to the Ryzen 9 5900X/5950X, and the i9-12900K, on account of its lower core count.
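For context, the arithmetic behind those figures (a minimal Python sketch; the 5800X baseline here is inferred from the article's own 9% average claim, not an independent run):

```python
# Sanity check on the leaked numbers; illustrative only.
x3d_runs = [(1633, 11250), (1637, 11198)]   # (1T score, nT score)

avg_1t = sum(s for s, _ in x3d_runs) / len(x3d_runs)   # ~1635
avg_nt = sum(s for _, s in x3d_runs) / len(x3d_runs)   # ~11224

# If the X3D is ~9% faster on average, the implied 5800X nT baseline is:
implied_5800x_nt = avg_nt / 1.09                       # ~10297

print(f"5800X3D averages: {avg_1t:.0f} 1T / {avg_nt:.0f} nT")
print(f"Implied 5800X nT baseline: {implied_5800x_nt:.0f}")
```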
Source: Wccftech

105 Comments on AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

#26
Valantar
AquinusWe should be clear that this is 9% better performance with an ~8% reduction in base clock and ~4% drop in boost clocks. This isn't 9% better performance at the same clocks.
That's a really important distinction.
DeathtoGnomesWanna bet it's an engineering sample? We don't know any facts, nor what build it's in, so this shouldn't be taken as fact. Wait for the real reviews.
It wouldn't be recognized as a 5800X3D if it were an ES; engineering samples don't match the hardware IDs of retail CPUs.
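To put rough numbers on that distinction (a quick sketch; it assumes the officially listed boost clocks and that scores scale linearly with sustained clock, which real Ryzen boost behavior doesn't guarantee):

```python
# Rough per-clock view of the leaked multi-thread uplift; illustrative only.
uplift = 0.09                       # ~9% higher nT score from the leak
boost_5800x, boost_x3d = 4.7, 4.5   # GHz, officially listed max boost clocks

clock_ratio = boost_x3d / boost_5800x
print(f"Boost clock drop: {1 - clock_ratio:.1%}")                       # ~4.3%
print(f"Implied per-clock gain: {(1 + uplift) / clock_ratio - 1:.1%}")  # ~13.8%
```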
#27
SL2
BwazeI'd really wait for the gaming benchmarks.

It makes no sense to be expecting a 20% gaming uplift when the single-core Geekbench score (the result that usually represents gaming speed quite well) shows no uplift, even a regression.
Since when did Geekbench become relevant for gamers?
#28
Aquinus
Resident Wat-man
ValantarThat's a really important distinction.
It really is. If you want to go even further with this, just take a look at the EPYC chips. Look at the first 4 pages of this review over at Phoronix for the EPYC 7773X. If you think 96MB of cache helps, imagine 768MB of it. Some applications improve performance by an absolutely massive margin, just by using that extra cache, without using extra power. It's insane. Granted, these are HPC applications, but it goes to show how much cache can help.
#29
Cutechri
Every time I see Geekbench, I ignore it.
#30
Bwaze
But the leaked Geekbench scores of Alder Lake predicted quite well that Intel had a competitive processor (if you disregard the downsides), in gaming and in productivity.
#31
SL2
BwazeBut the leaked Geekbench scores of Alder Lake predicted quite well that Intel had a competitive processor (if you disregard the downsides), in gaming and in productivity.
That doesn't mean anything. Alder was improved in more traditional ways. X3D is the same CPU as before, no other improvements besides the cache.
How would we know for sure that GB would be able to reflect the performance? We don't.
X3D might still be crap, but Gbench isn't the way to figure that out.

There are numerous examples of GB contradicting reality.

Have a look at the top list at Gbench. The first 90 entries are EPYCs only, but we all know that there are quite a few Core/Ryzen CPUs that would beat them in gaming.
browser.geekbench.com/v5/cpu/multicore
#32
pavle
A single-digit performance uplift? Nothing special, but since more data is kept closer to the processor, there might still be more benefits to be had, if the price is right of course.
#33
Valantar
BwazeBut the leaked Geekbench scores of Alder Lake predicted quite well that Intel had a competitive processor (if you disregard the downsides), in gaming and in productivity.
Correlation does not imply causation. Just because one architecture sees an equal increase in two workloads doesn't mean that those two workloads are utilizing the same hardware in the same ways, especially for a system as complex as CPUs today.
pavleA single-digit performance uplift? Nothing special, but since more data is kept closer to the processor, there might still be more benefits to be had, if the price is right of course.
Context: it's the exact same architecture, at lower clocks, in a different type of workload than what it's being marketed towards. Hardly surprising. We'll have to wait for gaming benchmarks to see what the change in gaming performance is like.
#34
Bwaze
AquinusWe should be clear that this is 9% better performance with an ~8% reduction in base clock and ~4% drop in boost clocks. This isn't 9% better performance at the same clocks.
The article doesn't point out that in single-core there is actually a regression, not 9% better performance.

And the stated base and boost clocks of Ryzen processors don't really correspond to the clocks at which they run single-core and multi-core loads; they're more of an abstract idea, which can change, making comparisons like that very hard.
#35
AVATARAT
TaraquinIt depends on the game and what the game scales with. Many newer games like Troy and Cyberpunk prefer bandwidth over latency. Many older games prefer lower latency.


No, in some games they will be close; in other games the 5800X3D will be 20% faster. AMD compared the 5900X with the 5800X3D in their marketing slides.
That's very interesting. Would the 5800X3D be faster than a fine-tuned 5900X (or 5800X), and by how much?
In a few titles the 5800X3D will still be faster, but overall, would it be worth it, since in everything else it will be slower?
#36
Bwaze
MatsThat doesn't mean anything. Alder was improved in more traditional ways. X3D is the same CPU as before, no other improvements besides the cache.
How would we know for sure that GB would be able to reflect the performance? We don't.
X3D might still be crap, but Gbench isn't the way to figure that out.

There are numerous examples of GB contradicting reality.

Have a look at the top list at Gbench. The first 90 entries are EPYCs only, but we all know that there are quite a few Core/Ryzen CPUs that would beat them in gaming.
browser.geekbench.com/v5/cpu/multicore
And why, for God's sake, would you look at multicore synthetic test results for gaming?
#37
mb194dc
Supposedly the extra cache is only useful in a very narrow range of applications, including gaming. So we'll see when gaming benchmarks come out what benefit the cache has.

That being said, the processor only matters in low-res / high-refresh-rate scenarios anyway. At 4K with full quality, pretty much any CPU from the last 5 years or even longer will produce similar results.
#38
SL2
BwazeAnd why, for God's sake, would you look at multicore synthetic test results for gaming?
lol, really? Are we having that much trouble following a thread with text in it? You started it. Have you even read the OP?

You said, without specifying which GB benchmark:
BwazeBut the leaked Geekbench scores of Alder Lake predicted quite well that Intel had a competitive processor (if you disregard the downsides), in gaming and in productivity.
Then I showed an example where GB doesn't predict gaming performance well, trying to point out how unreliable GB is to begin with, and now you're having issues with that? :roll:

The OP is about 9% higher numbers in multi-threaded GB, soo... what's the problem? Did someone just hijack your TPU account?

GB is crap for most things on TPU.
#39
Bwaze
Matslol, really? Are we having that much trouble following a thread with text in it? You started it. Have you even read the OP?

You said, without specifying which GB benchmark:

Then I showed an example where GB doesn't predict gaming performance well, trying to point out how unreliable GB is to begin with, and now you're having issues with that? :roll:

The OP is about 9% higher numbers in multi-threaded GB, soo... what's the problem? Did someone just hijack your TPU account?

GB is crap for most things on TPU.
Before going into childish personal attacks: no one ever looks at synthetic multi-core results and expects gaming results from them, and hasn't for the entire 17-year history of multi-core processors. Everyone reads my comment about Alder Lake scores as single-core for gaming and multi-core for productivity. If you don't, don't blame it on me.

I stated that a 9% uplift in multi-core AND A REGRESSION in the single-core Geekbench result is a very bad prognosis for the gaming increase AMD is promising, because multi-core synthetic results are still largely irrelevant in gaming, no matter which benchmarking tool you use.

Could Geekbench be relatively unaffected by the larger cache in the single-core test while games benefit from it greatly? It's possible; I have no idea how far a synthetic test is from a real-world load like a game. I'd rather expect the reverse: the benchmark benefiting more, since it would fit in cache, and real-world usage struggling.

I imagine not all games will see this increase; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
#40
ARF
9% is a negligible performance improvement and frankly disappointing, in the ballpark of simple rebrands.
No user would ever notice a better experience from this.

Why does AMD even waste its time and resources on this instead of pulling the next-generation Zen 4 launch forward?
#41
ratirt
BwazeI imagine not all games will see this increase; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
Hmm, that is only one side of the coin. A CPU stalls waiting for data, and this happens on any CPU regardless of its architecture or clock frequency. A higher cache capacity reduces those stalls, so you get more CPU performance; with two otherwise identical CPUs, the one with the larger cache and lower frequency can come out on top. Not all is frequency, you know. The single-core performance might be lower (lower frequency, which is obvious), but due to the cache capacity increase the stalls don't happen as often and the CPU can get through its tasks faster. And, as has been proven, in some cases significantly faster when the cache capacity is increased.
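A toy AMAT (average memory access time) model makes this concrete; all the numbers below are hypothetical, chosen purely to illustrate the trade-off, not Zen 3 measurements:

```python
# AMAT: average cycles per access = hit time + miss rate * miss penalty.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Same core; a bigger (and here slightly slower) L3 cuts the miss rate,
# and the effective cost per access drops even so.
small_l3 = amat(hit_time=40, miss_rate=0.10, miss_penalty=300)  # 70 cycles
big_l3   = amat(hit_time=44, miss_rate=0.04, miss_penalty=300)  # 56 cycles
print(f"small L3: {small_l3:.0f} cycles, big L3: {big_l3:.0f} cycles")
```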
#42
Punkenjoy
BwazeBefore going into childish personal attacks: no one ever looks at synthetic multi-core results and expects gaming results from them, and hasn't for the entire 17-year history of multi-core processors. Everyone reads my comment about Alder Lake scores as single-core for gaming and multi-core for productivity. If you don't, don't blame it on me.

I stated that a 9% uplift in multi-core AND A REGRESSION in the single-core Geekbench result is a very bad prognosis for the gaming increase AMD is promising, because multi-core synthetic results are still largely irrelevant in gaming, no matter which benchmarking tool you use.

Could Geekbench be relatively unaffected by the larger cache in the single-core test while games benefit from it greatly? It's possible; I have no idea how far a synthetic test is from a real-world load like a game. I'd rather expect the reverse: the benchmark benefiting more, since it would fit in cache, and real-world usage struggling.

I imagine not all games will see this increase; some will benefit more, some less, unlike a pure performance increase due to higher frequency, for instance.
Cache scaling benchmarks are available on the internet. Hardware Unboxed made a great video showing that it was really the added cache on higher Intel SKUs that helped gaming performance, not so much the increased core count.

Geekbench is an aggregate of multiple workloads and cannot be used to extrapolate to another specific workload. It can serve as a global index, but it has little to no correlation to gaming.

Games are semi-large loops (one per frame) that need to run as fast as possible, which is somewhat different from many workloads that aren't that large or that repetitive.

A game like CS:GO had its main loop mostly fitting into the L3 cache of Zen 3, giving it a huge performance boost way above the average IPC gain in benchmarks like GB. With this cache, it's quite possible those gains will extend to many more games.
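Here's a rough sketch of the kind of experiment that shows this cliff: pointer-chasing through working sets below and above the L3 size. Absolute numbers are machine-dependent, and Python overhead blunts the effect compared to C, but the shape is the point:

```python
import time
import numpy as np

def chase(size_bytes, steps=2_000_000):
    """Follow a random cycle through a working set of the given size."""
    n = size_bytes // 8                 # number of int64 slots
    order = np.random.permutation(n)    # random traversal order
    nxt = np.empty(n, dtype=np.int64)
    nxt[order] = np.roll(order, -1)     # each slot points to the next in order
    i = order[0]
    t0 = time.perf_counter()
    for _ in range(steps):              # same step count -> comparable times
        i = nxt[i]
    return time.perf_counter() - t0

for mb in (4, 32, 256):                 # below / around / above a 32 MB L3
    print(f"{mb:4d} MiB working set: {chase(mb * 2**20):.2f} s")
```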

But that is a debate mostly for competitive gamers who play at 1080p low with high-refresh screens. Most average gamers want maximum details at maximum resolution, and they will be GPU-limited anyway. That is also one of the main reasons why, for most people, ADL doesn't consume a huge amount of power in gaming: it has to wait all the time for the GPU to finish rendering the frame.

Anyway, you want a CPU fast enough that it won't be a problem, but then you want to be GPU-limited, since that's generally less spiky than being CPU-limited.
#43
Ruru
S.T.A.R.S.
BwazeI'd really wait for the gaming benchmarks.

It makes no sense to be expecting a 20% gaming uplift when the single-core Geekbench score (the result that usually represents gaming speed quite well) shows no uplift, even a regression.
Exactly. When they market it as the fastest gaming processor, the gaming tests are the ones that interest me the most.
#44
ThrashZone
Hi,
Wake me up when there's a real gaming benchmark run on it :sleep:
#45
Rouxenator
Now they just need to start stacking iGPU cores. Then they can bury the dGPU in the past where it belongs.
#46
Makaveli
Here is my 5800X PBO-tuned score:

#47
Taraquin
AVATARATThat's very interesting. Would the 5800X3D be faster than a fine-tuned 5900X (or 5800X), and by how much?
In a few titles the 5800X3D will still be faster, but overall, would it be worth it, since in everything else it will be slower?
If you fine-tune both, I'm unsure. The large cache makes RAM tuning less important, and if PBO+CO is not available on the 5800X3D, that will at least let the 5900X get closer.
#48
Punkenjoy
If fine-tuning were something everyone could get, it would be the stock performance. But stock is just a guaranteed baseline, and there will always be something left on the table.
#49
phanbuey
I have a feeling that if it's this good in Geekbench, it will be a monster in games.
#50
Bwaze
phanbueyI have a feeling if this is that good in geekbench, it will be a monster in games.
But that's the thing: it isn't.

A 9% increase just in multi-core and an even slightly lower score in single-core usually means better productivity (rendering, video encoding and other stuff that scales well), not better gaming.