Wednesday, March 23rd 2022

AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

Someone with access to an AMD Ryzen 7 5800X3D processor sample posted some of the first Geekbench 5 performance numbers for the chip, which ends up about 9% faster than the Ryzen 7 5800X on average. AMD claimed that the 5800X3D is "the world's fastest gaming processor," with the 3D Vertical Cache (3D V-Cache) technology offering gaming performance uplifts over the 5800X akin to a generational upgrade, despite being based on the same "Zen 3" microarchitecture and running at lower clock speeds. The Ryzen 7 5800X3D is shown posting scores of 1633 points 1T and 11250 points nT in one run, and 1637/11198 points in the other, when paired with 32 GB of dual-channel DDR4-3200 memory.
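
For reference, the headline figure is simple percentage arithmetic. The sketch below, in C, averages the two leaked runs and compares them against a hypothetical "typical" 5800X baseline; the baseline numbers are placeholders for illustration, not figures from the source.

/* Quick check of the uplift math, using hypothetical "typical" 5800X
 * Geekbench 5 scores as the baseline -- substitute real averages. */
#include <stdio.h>

static double uplift(double sample, double baseline) {
    return (sample / baseline - 1.0) * 100.0;  /* percent gain over baseline */
}

int main(void) {
    /* Averaged 5800X3D scores from the two leaked runs. */
    double x3d_1t = (1633.0 + 1637.0) / 2.0;
    double x3d_nt = (11250.0 + 11198.0) / 2.0;

    /* HYPOTHETICAL baseline: a typical 5800X result on the same benchmark. */
    double base_1t = 1500.0;   /* assumed, not from the article */
    double base_nt = 10300.0;  /* assumed, not from the article */

    printf("1T uplift: %.1f%%\n", uplift(x3d_1t, base_1t));
    printf("nT uplift: %.1f%%\n", uplift(x3d_nt, base_nt));
    return 0;
}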

These scores are about 9% higher than typical 5800X results on this benchmark. AMD's own gaming performance claims have the 5800X3D posting uplifts of over 20% above the 5800X, closing the gap with the Intel Core i9-12900K. The 3D V-Cache technology debuted earlier this week with the EPYC "Milan-X" processors, where the additional cache provides huge performance gains for applications with large data sets. AMD isn't boasting much about the multi-threaded productivity performance of the 5800X3D, because this is ultimately an 8-core/16-thread processor that's bound to lose to the Ryzen 9 5900X/5950X and the i9-12900K on account of its lower core count.
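
The mechanism behind those gains is straightforward: once a workload's hot data spills out of the last-level cache, average memory latency rises sharply, and a larger L3 postpones that point. A minimal pointer-chasing sketch in C illustrates the effect; the buffer sizes and step counts are illustrative assumptions, not tuned to any particular CPU.

/* Minimal pointer-chasing sketch (compile: gcc -O2 chase.c): average
 * access latency climbs sharply once the working set spills out of a
 * cache level, which is exactly what a larger L3 postpones. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t bytes, size_t steps) {
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    if (!next) return 0.0;
    /* Sattolo's algorithm: a single random cycle, so the hardware
     * prefetcher can't predict the access pattern. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < steps; s++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = p; (void)sink;  /* keep the loop alive */
    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void) {
    /* Sweep from well inside a typical L2 to well past a 32 MB L3. */
    for (size_t kb = 256; kb <= 128 * 1024; kb *= 2)
        printf("%7zu KiB: %.2f ns/access\n",
               kb, chase_ns(kb * 1024, 10u * 1000 * 1000));
    return 0;
}
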
Source: Wccftech

105 Comments on AMD Ryzen 7 5800X3D Geekbenched, About 9% Faster Than 5800X

#101
Assimilator
chrcoluk: I generally play a mix of low- and high-budget JRPGs.
Then all you need is a literal potato.
Posted on Reply
#102
chrcoluk
Assimilator: Then all you need is a literal potato.
I wish that were the case; they can be demanding, just not highly threaded.
Posted on Reply
#103
Dr. Dro
Valantar: I think you're misjudging things here. While I entirely agree that we shouldn't blindly trust first-party benchmarks, these are pretty conservative overall. They're also saying "~15%" average if you look at the slide, not 20, FWIW. IMO, the inclusion of examples with no improvement speaks to a degree of honesty in the benchmarks - though that is obviously also what they want to convey, so it still can't be taken at face value. Still, I see these as slightly more plausible than most first-party benchmarks.

As for your 5950X comparison, there are some holes there. First off, L1 and L2 caches on Zen 3 are per-core and do not affect the performance of other cores whatsoever. Unless those cores are being utilized, there is no advantage there - and arguably there's a minor disadvantage, as the L3 is divided across more cores (though that mainly makes a difference in heavy MT loads). Still, the advantages of the 5950X in gaming mainly come down to clocks and the ability to keep more high-performance threads on the same CCX due to the extra cores. I don't know what you mean by "data access pathways" - the Infinity Fabric of each die is active no matter what, and the full L3 is accessible to all cores (the only difference is that the ring bus has two stops disabled), so there's no real difference there (except for the aforementioned advantage of more local workloads due to more cores, meaning less need to transfer data over IF).

But again: 9% in GB tells us nothing at all about gaming. It might be 9%, it might be -10%, it might be 15% - Geekbench does not give a reliable indication of gaming performance. Period. Heck, even AMD's own untrustworthy data shows a range from 0% to 40%, giving an average in the lower bounds of the examples given. So we can't know, and as you say, we need to see third-party benchmarks. Skepticism is good, but you're latching onto an irrelevant comparison, apparently because it seems to confirm your skepticism, which is a bad habit. Whether or not AMD's numbers are inaccurate, I would recommend not arguing so hard for the validity of data that is verifiably irrelevant just because it happens to align with your expectations.


Sounds to me like you're overestimating the silicon changes made to a chip throughout its production run. Yes, tweaks and bug fixes happen, but in general those are quite small undertakings. And Zen 3 has had the connection points for this extra cache since the first engineering samples, after all. It's taken time to get it to market, but it is not "new" in that sense; it's been in the works since the first iterations of the architecture.
Maybe you misread me; I agree with you and am aware of all that. By extra data access pathways I really meant the extra cores and the associated bits that a 5900X will, in comparison, lack. It would become immediately apparent on a benchmark that intentionally saturates L1 and L2, for example AIDA64 (which, again, is synthetic and could be irrelevant): the 3950X and 5950X are both consistently faster than their 12-core counterparts by about the same 25% core-count advantage that they have. On the other hand, only the L3 is larger in the 5800X3D; the L1 and L2 retain the same sizes, and there is only one CCX/CCD. That is what I actually wanted to understand: how a relatively straightforward single CCD with the extra L3 fares against what is effectively two vanilla 5800Xs on a single package.
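
To make the L1/L2 point concrete, here is a rough C sketch of the kind of probe I mean (my own approximation, not AIDA64's actual methodology; thread count and buffer size are assumptions to adjust per chip): each thread sums a buffer small enough to stay resident in its private L2, so aggregate throughput should scale roughly with core count - which is why a 16-core part beats a 12-core part by about the expected 25%.

/* Per-core cache bandwidth sketch (compile: gcc -O2 -pthread bw.c).
 * Each thread streams over its own 256 KiB buffer, which should sit in
 * a private L2, so throughput scales with thread count rather than
 * being limited by the shared L3 or memory. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_DOUBLES (32 * 1024)   /* 256 KiB per thread: assumed L2-resident */
#define PASSES      20000
#define MAX_THREADS 64

static void *worker(void *arg) {
    double *buf = malloc(BUF_DOUBLES * sizeof(double));
    for (size_t i = 0; i < BUF_DOUBLES; i++) buf[i] = 1.0;
    double acc = 0.0;
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < BUF_DOUBLES; i++) acc += buf[i];
    *(double *)arg = acc;          /* keep the summation observable */
    free(buf);
    return NULL;
}

int main(int argc, char **argv) {
    int nthreads = argc > 1 ? atoi(argv[1]) : 4;
    if (nthreads < 1) nthreads = 1;
    if (nthreads > MAX_THREADS) nthreads = MAX_THREADS;
    pthread_t tid[MAX_THREADS];
    double sink[MAX_THREADS];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int t = 0; t < nthreads; t++)
        pthread_create(&tid[t], NULL, worker, &sink[t]);
    for (int t = 0; t < nthreads; t++)
        pthread_join(tid[t], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gib = (double)nthreads * PASSES * BUF_DOUBLES * sizeof(double)
               / (1024.0 * 1024.0 * 1024.0);
    printf("%d threads: %.1f GiB/s aggregate read\n", nthreads, gib / sec);
    return 0;
}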

We know there is an inter-CCD penalty in the 5950X, however much milder it may be than in its predecessor, but in real-world scenarios I never really needed to be mindful of core affinity to get the best out of my system.

Either way, it's not really skepticism so much as quite tempered expectations. I find this technology exciting, even if I'm not expecting it to be revolutionary in this first iteration (i.e., I do not think it will dethrone Golden Cove). Regarding silicon changes, Vermeer B2, as AMD stated, brought no changes to the experience (those chips do not seem to clock better, or function any differently), and the 5800X3D is the only variant with a tangible difference in its design. For a long time I attributed the unusually good performance of my old 18-core Haswell Xeon to its (at the time) vast 45 MB L3, even though it has the clock speed of molasses (2.9 GHz non-AVX and 2.4 GHz in AVX) and a relative penalty from the dual ring bus and its semaphore signaling. So it's less a matter of overestimating silicon changes than simple curiosity about how the microarchitecture would respond to such changes in topology, and whether that would translate into any practical benefit for the end user.

We'll get the definitive answer when reviews land in mid-April, I suppose.

:toast:
Posted on Reply
#104
Mussels
Freshwater Moderator
EatingDirt: Not sure what games you're playing in 2022 that don't scale to at least 4 threads. Maybe some indie titles don't, but the vast majority of games made today scale well to 6-12 threads. Cyberpunk 2077, F1 2020, Hitman 2, Battlefield V, Shadow of the Tomb Raider, Watch Dogs: Legion, and the list goes on of new games that take advantage of 6 threads or more.
A 4/4 experience would be miserable today, whereas a 6/6 or 4/8 experience is still typically more than adequate, though not always ideal.
Ackshually, I used HWiNFO and tracked my average core usage after a day of gaming: it came out to around 3.5 cores across various modern titles.

They can use more threads, but very few truly do, and rarely for sustained periods.
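
If anyone wants to sanity-check that kind of number without HWiNFO, here's a rough Linux-only approximation of the idea in C: sample /proc/stat once a second and report non-idle CPU time over wall time as "busy cores". This is my simplification, not what HWiNFO actually logs.

/* Rough "busy cores" sampler: total non-idle jiffies across all cores,
 * divided by jiffies per second, over a one-second interval. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sum jiffies across the per-core cpuN lines of /proc/stat. */
static void read_stat(long long *total, long long *idle_out) {
    FILE *f = fopen("/proc/stat", "r");
    char line[512];
    *total = *idle_out = 0;
    if (!f) return;
    while (fgets(line, sizeof line, f)) {
        long long v[10] = {0};
        char name[16];
        if (sscanf(line,
                   "%15s %lld %lld %lld %lld %lld %lld %lld %lld %lld %lld",
                   name, &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]) < 5)
            continue;
        if (strncmp(name, "cpu", 3) != 0 || name[3] == '\0')
            continue;  /* per-core lines only: cpu0, cpu1, ... */
        for (int i = 0; i < 10; i++) *total += v[i];
        *idle_out += v[3] + v[4];  /* idle + iowait */
    }
    fclose(f);
}

int main(void) {
    long long t0, i0, t1, i1;
    long hz = sysconf(_SC_CLK_TCK);  /* jiffies per second per core */
    for (;;) {                        /* Ctrl-C to stop */
        read_stat(&t0, &i0);
        sleep(1);
        read_stat(&t1, &i1);
        double busy = (double)((t1 - t0) - (i1 - i0));
        printf("busy cores: %.2f\n", busy / (double)hz);
        fflush(stdout);
    }
}
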
Posted on Reply
#105
EatingDirt
Mussels: Ackshually, I used HWiNFO and tracked my average core usage after a day of gaming: it came out to around 3.5 cores across various modern titles.

They can use more threads, but very few truly do, and rarely for sustained periods.
Maybe "miserable" was the wrong wording. 4/4(and sometimes even 4/8) is certainly less than ideal if you happen to buy a game that does fully tax 4 cores. A revisit of the i3-8350k with the current game title lineup would be an interesting article if TPU measured 1%/0.1% lows.
Posted on Reply