Wednesday, March 23rd 2022

AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

Someone with access to an AMD Ryzen 7 5800X3D processor sample posted some of the first Geekbench 5 performance numbers for the chip, where it ends up 9% faster than the Ryzen 7 5800X on average. AMD claimed that the 5800X3D is "the world's fastest gaming processor," with the 3D Vertical Cache (3D V-Cache) technology offering gaming performance uplifts over the 5800X akin to a new generation, despite being based on the same "Zen 3" microarchitecture and running at lower clock speeds. The Ryzen 7 5800X3D is shown posting scores of 1633 points 1T and 11250 points nT in one run, and 1637/11198 points in the other, when paired with 32 GB of dual-channel DDR4-3200 memory.

These are 9% faster than a typical 5800X score on this benchmark. AMD's own gaming performance claims see the 5800X3D achieve an uplift of over 20% against the 5800X, closing the gap with the Intel Core i9-12900K. The 3D V-Cache technology debuted earlier this week with the EPYC "Milan-X" processors, where the additional cache provides huge performance gains for applications with large datasets. AMD isn't boasting too much about the multi-threaded productivity performance of the 5800X3D, because this is ultimately an 8-core/16-thread processor that's bound to lose to the Ryzen 9 5900X/5950X and the i9-12900K on account of its lower core count.
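The quoted uplift is easy to reproduce from the scores. The 5800X baseline used below (~1671 points 1T, ~10333 points nT) is a typical result for this benchmark rather than an official figure, so treat it as approximate:

```python
# Approximate uplift of the leaked 5800X3D runs over a typical 5800X result.
x3d_runs = [(1633, 11250), (1637, 11198)]   # (1T score, nT score)
base_1t, base_nt = 1671, 10333              # typical 5800X Geekbench 5 scores

for one_t, n_t in x3d_runs:
    print(f"1T: {one_t / base_1t - 1:+.1%}   nT: {n_t / base_nt - 1:+.1%}")
```

The multi-threaded runs land roughly 8-9% ahead, while single-thread is slightly behind the baseline, consistent with the lower clocks.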
Source: Wccftech

105 Comments on AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

#76
ratirt
eidairaman1The sweet spot chip, I say, is the 5800 OEM.
The 5800X is also good. The price has dropped as well. Actually, I see prices are dropping, and that includes GPUs as well.
To be fair, I don't think I will be going for Zen 4. If anything, that 5800X3D will have to do for an upgrade. Time will tell.
Posted on Reply
#77
Valantar
AquinusLike I said, go check out that EPYC review on Phoronix. In the server space, this kind of thing is making improvements far larger than a 9% gain. It does make me wonder how well more cache would scale.
That is some seriously impressive stuff. Some of those improvements are downright staggering. Definitely explains why AMD would go to the trouble of making a product like this - that first ASKAP 1.0 OpenMP Gridding benchmark has a more than 2x increase! That's clearly an outlier, but damn, if your workload can make use of that cache, some of these speedups are incredible, even when accounting for the marginal increase in power over the 7763.
Posted on Reply
#78
birdie
MaelwyseYour comparison shouldn't be run. Period. It has different OSes. That's like comparing a plum to a hamster. It doesn't work the same way.
facepalm.jpg

Geekbench is OS-agnostic (at least on Windows) and shows nearly the same performance under different versions of Windows.

Internally, Windows 11 identifies itself as Windows 10 and has a very similar task scheduler, which was updated to properly support ADL, but that doesn't affect Zen CPUs because they are not heterogeneous.

Lastly, it would be nice to see where Windows 10 and Windows 11 Geekbench results wildly differ, but I guess the most you can spit out is "a plum to a hamster" - which shows how much you know, how much you care, and what kind of "arguments" you have. I wouldn't call your comment outright asinine and inane, but it surely looks like it to me.

Here take this:

www.techspot.com/article/2349-windows-11-performance/
www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/11
hothardware.com/news/microsoft-windows-11-performance-preview

Some quantitative data from trusted reviewers.

It's astounding to see so many likes on your post. Shows the level of discussion here.
Posted on Reply
#79
Aquinus
Resident Wat-man
ValantarThat is some seriously impressive stuff. Some of those improvements are downright staggering. Definitely explains why AMD would go to the trouble of making a product like this - that first ASKAP 1.0 OpenMP Gridding benchmark has a more than 2x increase! That's clearly an outlier, but damn, if your workload can make use of that cache, some of these speedups are incredible, even when accounting for the marginal increase in power over the 7763.
I think people underestimate how much a cache miss can hurt and how quickly misses add up. At some point along the way memory speeds started mattering again, and I suspect that's because applications got larger, raising the ratio of cache misses and leaning on faster memory to pick up the slack. A big cache means a higher hit ratio, which improves effective memory access latency - and that is really what you want to focus on, because "waiting" for anything in a CPU is wasteful, and that includes cache misses.
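The hit-ratio point can be made concrete with the textbook average memory access time (AMAT) formula; the latency numbers below are illustrative placeholders, not measured Zen 3 figures:

```python
def amat(hit_ratio: float, hit_ns: float, miss_ns: float) -> float:
    """Average memory access time: hits served by cache, misses go to DRAM."""
    return hit_ratio * hit_ns + (1.0 - hit_ratio) * miss_ns

# Illustrative latencies: ~10 ns for an L3 hit, ~70 ns for a DRAM round trip.
for hr in (0.80, 0.90, 0.95, 0.99):
    print(f"hit ratio {hr:.0%} -> {amat(hr, 10.0, 70.0):.1f} ns average")
```

Going from a 90% to a 99% hit ratio cuts the average latency by about a third in this toy model, which is why a 3x larger L3 can pay off even at lower clocks.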
Posted on Reply
#80
stimpy88
Oh dear, AMD. 9% is not going to keep Intel at bay, is it? Is there a 9% cost difference between the 5800X and the 5800X3D? :D
Posted on Reply
#81
SL2
stimpy88Oh dear, AMD. 9% is not going to keep Intel at bay, is it? Is there a 9% cost difference between the 5800X and the 5800X3D? :D
If running GBench all day is what you do, then yes, you should be worried. Otherwise, wait for other tests.

I mean, it's 3D so it must be better, right? :roll:
Posted on Reply
#82
Dr. Dro
MaelwyseYour comparison shouldn't be run. Period. It has different OSes. That's like comparing a plum to a hamster. It doesn't work the same way.
I mean, if anything his 5800X result is at a disadvantage: Windows 11 is supposed to have improved CPU scheduling over Windows 10, and the 5800X3D was benched on 11. I have a hunch the hype train's out of brakes for some time and it may very well crash spectacularly very soon...
MatsBloody brilliant, don't tease me hahahaha, AMD needs to make this happen AND bring the ATI brand back
Posted on Reply
#83
Chrispy_
Dr. DroI have a hunch the hype train's out of brakes for some time and it may very well crash spectacularly very soon...
I wasn't aware there was a hype train!? There shouldn't be: this is last year's architecture on a 6-year-old platform with more cache.
All the cache can do is increase performance and cost. Performance will go up, but not as much as the price will. If you're not bothered by the reduced performance/$, then have at it!
Posted on Reply
#84
ThrashZone
Hi,
Most people would like to see AMD's silly high latency lowered by any means necessary, and maybe when proper testing is done it will show whether 3D cache does it.
MaxMem, if anyone has ever used it, is very responsive to high memory frequency; whether the amount of cache does it too is anyone's guess atm.
Posted on Reply
#85
Valantar
ThrashZoneMost people would like to see... better performance and don't care whatsoever about how that comes to pass
FTFY
Posted on Reply
#86
chrcoluk
Bwaze"These are 9% faster than a typical 5800X score on this benchmark. AMD's own gaming performance claims see the 5800X3D score a performance uplift above 20% over the 5800X, closing the gap with the Intel Core i9-12900K."

Isn't this uplift in Geekbench mainly in multicore? Why would we see an ever greater uplift in gaming, which mainly cares for single / low core speed?

Single core result actually seems to be even lower than standard 5800X:

5800X3D: 1637 single-core, 11250 multi-threaded points.

5800X: 1671 points single-core, 10333 points in the multi-core tests.
Interesting, so the boost primarily helps multi-core performance?

I wonder if there are any single-core tests from Geekbench.
TaraquinDue to higher cache. Cache is king in most games. Generally higher cache matters more for fps than how many cores you have :)

Interesting, all these years people were saying the i7 being faster than the i5 proved threads were important in games, but it may have been mostly down to the bigger cache.
Posted on Reply
#87
Dr. Dro
Chrispy_I wasn't aware there was a hype train!? There shouldn't be: this is last year's architecture on a 6-year-old platform with more cache.
All the cache can do is increase performance and cost. Performance will go up, but not as much as the price will. If you're not bothered by the reduced performance/$, then have at it!
Well, there is one, alright. A lot of people have very high expectations of the 5800X3D and expect it to be competitive with the Core i9-12900K while providing better value than the Core i7-12700K. The difficult thing will be doing that, especially at $450 SEP.

Mine are more tempered; I am primarily interested in a comparison of how a dual-CCD (so 2 x 32 MB) L3 cache would handle against this single contiguous 96 MB slice.
Posted on Reply
#88
Chrispy_
Dr. DroMine are more tempered; I am primarily interested in a comparison of how a dual-CCD (so 2 x 32 MB) L3 cache would handle against this single contiguous 96 MB slice.
AMD have already posted their own game benchmarks comparing the 5800X3D against the 5900X in their CES announcement slides.

www.techpowerup.com/290513/amd-ces-2022-liveblog-zen-3-rdna2-igp-6nm-rx-6500-xt-am5-zen-4-and-more



That's your 2x32MB vs 1x96MB cache comparison right there.
The 5900X clocks are likely to be 5-10% faster depending on boost and number of loaded cores, but in cache-bound scenarios that's of little relevance.

Games are definitely the application that AMD thinks will benefit most from the additional cache, I'm expecting 1.4x gains to be realized only in games (and synthetic L3 cache benchmarks, ofc). Given that these are likely cherry-picked games, I suspect the median game improvement is more like 1.1x over the 5900X. I am just guessing though and that's based on nothing other than gut feeling and distrust of marketing-department cherry-picking their benchmarks to make investors bend over and open their wallets some more.
Posted on Reply
#89
EatingDirt
chrcolukInteresting, all these years people were saying the i7 being faster than the i5 proved threads were important in games, but it may have been mostly down to the bigger cache.
That was not really "all these years". It was more specifically when the i5s were still 4-core, 4-thread CPUs without hyperthreading, which was 2011-2017 (2600K-7600K). They would be inadequate now for most games, and around when the 7600K came out they were becoming a less-than-ideal experience for well-optimized, multi-threaded games. This usually shows up as large drops in the 1% lows in those titles.
Posted on Reply
#90
Valantar
Chrispy_AMD have already posted their own game benchmarks comparing the 5800X3D against the 5900X in their CES announcement slides.

www.techpowerup.com/290513/amd-ces-2022-liveblog-zen-3-rdna2-igp-6nm-rx-6500-xt-am5-zen-4-and-more



That's your 2x32MB vs 1x96MB cache comparison right there.
The 5900X clocks are likely to be 5-10% faster depending on boost and number of loaded cores, but in cache-bound scenarios that's of little relevance.

Games are definitely the application that AMD thinks will benefit most from the additional cache, I'm expecting 1.4x gains to be realized only in games (and synthetic L3 cache benchmarks, ofc). Given that these are likely cherry-picked games, I suspect the median game improvement is more like 1.1x over the 5900X. I am just guessing though and that's based on nothing other than gut feeling and distrust of marketing-department cherry-picking their benchmarks to make investors bend over and open their wallets some more.
Those benchmarks are kind of interesting though - especially the inclusion of CS:GO. That's been a strong point for Zen3 (before ADL) after all, so it's interesting to see this tie regular Zen3 there, but I guess it also indicates that lightweight, older applications that already fit decently into the 32MB Zen3 cache won't see any real benefit from this (though tying at lower clocks is still fine overall). SotTR is getting old but is pretty CPU heavy, and everything else is relatively new and/or quite demanding, though I don't think any of them have much of a reputation for being CPU limited. Reviews will definitely be interesting for this.
Posted on Reply
#91
Dr. Dro
Chrispy_AMD have already posted their own game benchmarks comparing the 5800X3D against the 5900X in their CES announcement slides.

That's your 2x32MB vs 1x96MB cache comparison right there.
The 5900X clocks are likely to be 5-10% faster depending on boost and number of loaded cores, but in cache-bound scenarios that's of little relevance.

Games are definitely the application that AMD thinks will benefit most from the additional cache, I'm expecting 1.4x gains to be realized only in games (and synthetic L3 cache benchmarks, ofc). Given that these are likely cherry-picked games, I suspect the median game improvement is more like 1.1x over the 5900X. I am just guessing though and that's based on nothing other than gut feeling and distrust of marketing-department cherry-picking their benchmarks to make investors bend over and open their wallets some more.
I'm well aware, but I mean, I have a 5950X and know what it can or can't do. In this case there is still a little more oomph to it vs. the 5900X (due to having two full CCDs, and thus more data access pathways and the extra bits of associated L1/L2 from the extra four cores), but you know how pre-release first-party benchmarks go. I trust reputable reviewers (such as W1zz) and first-hand experience from actual owners more than AMD (or Intel, or NVIDIA, or whatever) marketing slides... if AMD says a median improvement of 20%, then 9% falls more in line with what I personally expect.

I think it will be a great processor, and it certainly heralds an innovation that will lead to wild successors in the future. But that is mostly because the 5800X itself is a great processor, and this is just a taste test for an upcoming packaging technology that is sure to revolutionize how we see the common desktop processor. :toast:
Posted on Reply
#92
Assimilator
FouquinRemember when generational improvements between entirely different architectures could barely scrape together a 9% improvement? I do. This chip isn't even a new architecture, it's literally a downclocked Zen 3 with a cache slice glued on, and it's putting up measurable improvements. Whatever fantasy land you want to live in doesn't negate that this strategy clearly works.
LOL.

This chip is Zen 3 at its base, except with deity knows how many tweaks to make the 3D cache work. Do you know how many hardware bugs picked up over the course of Zen 3's lifetime would have been fixed in this silicon at the same time? Do you know how many lessons learned from Zen 4 they would've belatedly applied to Zen 3 to try to squeeze some extra oomph out of it? Do you know how much it's benefited from literally years of process node refinements?
Posted on Reply
#93
Mussels
Freshwater Moderator
I'm glad TPU clarified it was only a Geekbench result, since it's technically a poor way to show the cache benefits.
That DOES make it look like games will get a 10% or higher improvement, which is great news.
stimpy88Oh dear, AMD. 9% is not going to keep Intel at bay, is it? Is there a 9% cost difference between the 5800X and the 5800X3D? :D
This is 9% for a program that AMD said is the wrong kind of program to benefit from the cache - heavily multi-threaded. Lower-threaded repetitive tasks like gaming benefit the most.

This is meant to be a latency win. Highly tuned DDR4 can get to about 60 ns on Zen 3.
This cache is meant to be around 20 ns.

Anything that fits in that cache and gets reused can see *massive* gains.
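Those latency numbers can be eyeballed with a crude dependent pointer-chase over growing working sets. Python's interpreter overhead inflates the absolute figures massively, so don't expect to read off 20 ns vs. 60 ns, but the jump in per-access time once the working set spills out of cache is usually still visible (a rough, machine-dependent sketch):

```python
import random
import time

def single_cycle_perm(n: int) -> list[int]:
    """Sattolo's algorithm: a random permutation that is one full n-cycle."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)    # j < i guarantees a single cycle
        p[i], p[j] = p[j], p[i]
    return p

def chase_ns(n_elems: int, steps: int = 200_000) -> float:
    """Time a dependent pointer-chase over n_elems slots; return ns/access."""
    perm = single_cycle_perm(n_elems)   # random order defeats the prefetcher
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = perm[i]                     # each load depends on the previous one
    return (time.perf_counter() - t0) / steps * 1e9

# Sweep from comfortably cache-resident to cache-busting working sets.
for n in (1 << 12, 1 << 16, 1 << 20, 1 << 22):
    print(f"{n:>8} slots: {chase_ns(n):6.1f} ns/access")
```

The single-cycle permutation ensures the chase actually visits the whole working set instead of looping inside a short cycle.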
Posted on Reply
#94
Valantar
Dr. DroI'm well aware, but I mean, I have a 5950X and know what it can or can't do. In this case there is still a little more oomph to it vs. the 5900X (due to having two full CCDs, and thus more data access pathways and the extra bits of associated L1/L2 from the extra four cores), but you know how pre-release first-party benchmarks go. I trust reputable reviewers (such as W1zz) and first-hand experience from actual owners more than AMD (or Intel, or NVIDIA, or whatever) marketing slides... if AMD says a median improvement of 20%, then 9% falls more in line with what I personally expect.

I think it will be a great processor, and it certainly heralds an innovation that will lead to wild successors in the future. But that is mostly because the 5800X itself is a great processor, and this is just a taste test for an upcoming packaging technology that is sure to revolutionize how we see the common desktop processor. :toast:
I think you're misjudging things here. While I entirely agree that we shouldn't blindly trust first party benchmarks, these are pretty conservative overall. They're also saying "~15%" average if you look at the slide, not 20, FWIW. IMO, the inclusion of examples with no improvement speaks to a degree of honesty in the benchmarks - though that is obviously also what they want to convey, so it still can't be taken at face value. Still, I see these as slightly more plausible than most first party benchmarks.

As for your 5950X comparison, there are some holes there. First off, L1 and L2 caches on Zen3 are per-core and do not whatsoever affect the performance of other cores. Unless those cores are being utilized, there is no advantage there - and arguably there's a minor disadvantage, as the L3 is divided across more cores (though that mainly makes a difference in heavy MT loads). Still, the advantages of the 5950X in gaming mainly come down to clocks and the ability to keep more high performance threads on the same CCX due to the extra cores. I don't know what you mean by "data access pathways" - the Infinity Fabric of each die is active no matter what, and the full L3 is accessible to all cores (the only difference is the ring bus has two stops disabled), so there's no real difference in that (except for the aforementioned advantage of more local workloads due to more cores, meaning less need to transfer data over IF).

But again: 9% in GB tells us nothing at all about gaming. It might be 9%, it might be -10%, it might be 15% - geekbench does not give a reliable indication of gaming performance. Period. Heck, even AMD's own untrustworthy data shows a range from 0% to 40%, giving an average in the lower bounds of the examples given. So, we can't know, and as you say, we need to see third party benchmarks. Skepticism is good, but you're latching onto an irrelevant comparison, seemingly because it seems to confirm your skepticism, which is a bad habit. Whether or not AMD's numbers are inaccurate, I would recommend trying not to argue so hard for the validity of data that is verifiably irrelevant just because it happens to align with your expectations.
AssimilatorLOL.

This chip is Zen 3 at its base, except with deity knows how many tweaks to make the 3D cache work. Do you know how many hardware bugs picked up over the course of Zen 3's lifetime would have been fixed in this silicon at the same time? Do you know how many lessons learned from Zen 4 they would've belatedly applied to Zen 3 to try to squeeze some extra oomph out of it? Do you know how much it's benefited from literally years of process node refinements?
Sounds to me like you're overestimating the silicon changes made to a chip throughout its production run. Yes, tweaks and bug fixes happen, but in general those things are quite small undertakings. And Zen3 has had the connection points for this extra cache since the first engineering samples after all. It's taken time to get it to market, but this is not "new" in that sense. It's been in the works since the first iterations of the architecture.
Posted on Reply
#95
chrcoluk
EatingDirtThat was not really "all these years". It was more specifically when the i5s were still 4-core, 4-thread CPUs without hyperthreading, which was 2011-2017 (2600K-7600K). They would be inadequate now for most games, and around when the 7600K came out they were becoming a less-than-ideal experience for well-optimized, multi-threaded games. This usually shows up as large drops in the 1% lows in those titles.
Yes, that era. Very few games I play even now in 2022 use more than 2-4 threads, and it would have been even lower back then. It's mostly the FPS genre that is highly threaded, and I have only played one FPS game in a decade.
Posted on Reply
#96
AVATARAT
I think that this comparison is interesting too :)
Ryzen 9 5900X vs Ryzen 9 5900X3D
Ryzen 9 5900X vs Ryzen 7 5800X3D


Posted on Reply
#97
Valantar
AVATARATI think that this comparison is interesting too :)
Ryzen 9 5900X vs Ryzen 9 5900X3D
Ryzen 9 5900X vs Ryzen 7 5800X3D


They are interesting, but remember that the first comparison is with both chips locked to 4GHz, so it's not technically a 5900X, and there's no such thing as a 5900X3D and never will be - it's an engineering sample used for demonstration purposes.
Posted on Reply
#98
Aquinus
Resident Wat-man
MusselsThis is 9% for a program that AMD said is the wrong kind of program to benefit from the cache - heavily multi-threaded. Lower-threaded repetitive tasks like gaming benefit the most.
Actually, that'd be an argument for a faster and smaller cache. When it comes to cache, it's all about hit ratios (and latency, to a lesser degree). It doesn't really matter whether you have 1 core or 16 cores doing something, because the task on 1 core might have a larger dataset than the 16-core workload does. So this kind of statement is misleading, because the extra cache might help based on the application's memory usage patterns regardless of how many cores are in operation. That said, my general observation is that more cores means more memory pressure, but that isn't to say a single core hitting a bunch of novel data is unrealistic.

The longer data can be kept in cache without evicting other things, the more opportunity the CPU has for cache hits when that data is used again. This will benefit applications across the board - some more than others - but it has the potential to improve performance on memory-heavy applications by a significant amount. Like I said before, go check out the Phoronix review of that EPYC chip. Some applications scale >50%, probably because they're heavily memory-bound and cache misses were taking a huge share of the time spent on the task.
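The eviction point can be made concrete with a toy LRU cache simulation. The classic worst case is a workload that cyclically sweeps a working set larger than the cache: under LRU it gets near-zero hits until capacity covers the whole set, at which point the hit ratio jumps (synthetic pattern, illustrative only):

```python
from collections import OrderedDict

def lru_hit_ratio(accesses: list[int], capacity: int) -> float:
    """Simulate an LRU cache of `capacity` lines and return the hit ratio."""
    cache: OrderedDict[int, bool] = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # refresh: now most recently used
        else:
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(accesses)

# Four sequential sweeps over a 48k-line working set.
workload = list(range(48_000)) * 4
for cap in (8_000, 32_000, 64_000):
    print(f"capacity {cap:>6}: hit ratio {lru_hit_ratio(workload, cap):.2f}")
```

With capacity below the working set, the sweep evicts every line just before it is reused (0% hits); once capacity covers the set, every pass after the first hits (75% here). Real access patterns sit between these extremes, which is why extra L3 helps some workloads enormously and others not at all.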
Posted on Reply
#99
EatingDirt
chrcolukYes, that era. Very few games I play even now in 2022 use more than 2-4 threads, and it would have been even lower back then. It's mostly the FPS genre that is highly threaded, and I have only played one FPS game in a decade.
Not sure what games you're playing in 2022 that don't scale to at least 4 threads. Maybe some indie titles don't, but the vast majority of games made today scale well to 6-12 threads. Cyberpunk 2077, F1 2020, Hitman 2, Battlefield V, Shadow of the Tomb Raider, Watch Dogs: Legion - the list of new games that take advantage of 6 threads or more goes on.
A 4/4 experience would be miserable today, whereas a 6/6 or 4/8 experience is still typically more than adequate, though not always ideal.
Posted on Reply
#100
chrcoluk
EatingDirtNot sure what games you're playing in 2022 that don't scale to at least 4 threads. Maybe some indie titles don't, but the vast majority of games made today scale well to 6-12 threads. Cyberpunk 2077, F1 2020, Hitman 2, Battlefield V, Shadow of the Tomb Raider, Watch Dogs: Legion - the list of new games that take advantage of 6 threads or more goes on.
A 4/4 experience would be miserable today, whereas a 6/6 or 4/8 experience is still typically more than adequate, though not always ideal.
I generally play mix of low and high budget jrpgs.
Posted on Reply