Tuesday, April 12th 2022

AMD Ryzen 7 5800X3D Gets Full Set of Gaming Benchmarks Ahead of Launch

XanxoGaming has finally released its complete set of benchmarks for the AMD Ryzen 7 5800X3D, tested against an Intel Core i9-12900KF. This time both platforms were tested using an NVIDIA GeForce RTX 3080 Ti and four 8 GB sticks of 3200 MHz CL14 DDR4 memory. The only differences appear to be the OS drive, motherboard and cooling, although both systems rely on a 360 mm AIO cooler. Both systems were running Windows 10 21H2. The site has a full breakdown of the components used for those interested in the exact details.

The two platforms were tested in 11 different games at 720p and 1080p. To spoil the excitement, it's a dead heat between the two CPUs in most games at 1080p, with Intel ahead by about 1-3 FPS in the games where AMD loses out. However, in the games where AMD takes the lead, it's by a good 10 FPS or more, with games like The Witcher 3 and Final Fantasy XV seeing an advantage of 40-50 FPS. AMD often has an advantage in the one percent low numbers, even when Intel is ahead in average FPS, but this doesn't apply to all of the games. It's worth keeping in mind that the Intel CPU should gain extra performance when paired with DDR5 memory in some of these games, but we'll have to wait for more reviews to see by how much. The benchmarks displayed are mostly the games TPU normally tests with, but aren't the entirety of the games tested by XanxoGaming.
As for the 720p tests, AMD only loses out in Strange Brigade, though it's a loss of over 20 FPS in average FPS and over 10 FPS in the one percent low frames. As for the other games, it's mostly a dead heat here too, but with the 1-3 FPS advantage going to AMD instead of Intel. However, the 3D V-Cache seems to kick in when it comes to the one percent low frames, as AMD edges out Intel by a large margin in more games here, by at least 10 FPS and often by around 30 FPS or more. Take these benchmarks for what they are: an early, unconfirmed test of the Ryzen 7 5800X3D. We're just over a week away from the launch and we should be seeing a lot more benchmarks by then. Head over to XanxoGaming for the full set of tests and their conclusion, especially as they made an effort to write the test in English this time around.
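Since the results above (and much of the discussion below) hinge on average FPS versus one percent lows, here is a minimal sketch of how those figures are commonly derived from per-frame render times. Conventions vary between reviewers, and XanxoGaming has not published its exact method; this version computes the 1% low as the average frame rate over the slowest 1% of frames, one common definition, with illustrative names:

```python
def fps_metrics(frame_times_ms):
    """Return (average FPS, 1% low FPS) from per-frame times in milliseconds."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

    # 1% low: average frame rate over the slowest 1% of frames.
    slowest_first = sorted(frame_times_ms, reverse=True)
    n = max(1, len(frame_times_ms) // 100)
    one_pct_low = 1000.0 * n / sum(slowest_first[:n])
    return avg_fps, one_pct_low

# Example: 99 frames at 10 ms plus one 20 ms stutter frame.
avg, low = fps_metrics([10.0] * 99 + [20.0])
# avg stays near 99 FPS, but the single slow frame drags the 1% low to 50 FPS.
```

This is why two runs can look identical on average FPS yet differ sharply in the 1% low: the metric isolates the stutter frames the average smooths over.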
Source: XanxoGaming

139 Comments on AMD Ryzen 7 5800X3D Gets Full Set of Gaming Benchmarks Ahead of Launch

#51
Vya Domus
TaraquinLook at the 3100 vs 3300X at a fixed 4.0 GHz. The major difference is that the 3300X has access to double the cache vs the 3100, which must share it between CCXes.
There have been many other CPUs with way more cache than a 3300X and it never made that much of a difference.
Posted on Reply
#52
sixor
LeshyI don't get it... what's the point of these gaming benchmarks? :D Who's gonna game with a 3080 Ti at ultra-low 720p? :D Show some real benchmarks
Lol


Because that is the way to benchmark CPUs for gaming
Posted on Reply
#53
Cowboystrekk
Vya DomusThere have been many other CPUs with way more cache than a 3300X and it never made that much of a difference.
Cache often makes a large difference, but different architectures may not scale the same. Hardware Unboxed did a comparison of this:
Generally cache matters more for FPS than the number of cores. In RSS, just upping the cache from 12 to 20 MB gave an 18% boost to FPS when both CPUs ran 6 cores. Both Cyberpunk and F1 got around 10%, while the other games showed less than a 5% gain.
Posted on Reply
#54
JustBenching
ratirtActuall

You are correct. I double checked and it would seem Raptor Lake will be backwards compatible. Good for Intel fans.
About time something doesn't die with a release for Intel customers.
While AMD users have a grand time. They only had to go nuts on the Internet to force AMD to backtrack and give support for X470 (of course, 6 months after the launch of Zen 3), or just... wait with a six-year-old motherboard (X370) to get a BIOS update SOMETIME in the future for a soon-to-be two-year-old CPU. Absolutely amazing support, man..
Posted on Reply
#55
HD64G
Gaming crown returned to AMD.
Posted on Reply
#56
Tomorrow
usul1978That's what I'm wondering... Will the 5800X3D accept 3800 with a 1:1 fabric clock? Or will it be a bit unstable on that matter... It would be a deal breaker for me to have to clock my DR CL14 3800 DDR4 down!
Probably yes. Only CPU voltage and frequency are locked.
Posted on Reply
#57
Chrispy_
Tsukiyomi91what's gonna be even more interesting is pushing both the 5800X3D and 12900KS over their limits and running benchmarks at 1440p. It should be a really interesting one.
AMD have officially stated that the 5800X3D is not overclockable and have asked all motherboard vendors to explicitly lock down overclocking on the X3D because it will destroy the CPU.

It's to do with the 3D cache not being tolerant of the same voltages as the underlying chip. You overvolt your 5800X3D and AMD say that will toast the 3D V-cache, warranty null and void - Enjoy your $449 keychain.
HD64GGaming crown continues to be held by Nvidia because as long as you're not using a potato for a CPU you are going to be either GPU-bottlenecked, or capped by your monitor's max refresh rate.
FTFY ;)

If 1% lows are given more weighting, then yeah, 5800X3D looks like a real winner.
Posted on Reply
#58
InVasMani
From what I can see of this 3D cache and the results, it can have a fairly dramatic effect on the 1% lows, especially evident at lower resolutions. How that all translates with Infinity Cache and GPU upscaling should be neat as well. In fact I think GPU upscaling is only due to get better in future GPU generations, so this 3D stacked cache should help even further in the next generation of GPUs. Beyond upscaling for GPU tech, variable rate shading and/or mesh shading can bring down some of the peak on-demand bandwidth within scenes, which will also help with this overall cache design, because small chunks of data that fit within a cache and don't have to be fetched from slower system memory are much more desirable for overall performance. Individual frames up to 96 MB or a touch below will be able to fit within the cache, while on another CPU with a smaller L3 cache that wouldn't be possible, and that's a big gain to overall latency across many frames. This chip could open up a lot of improvements to post-process techniques that otherwise might be more taxing on the CPU side.

Something else to mention is NTFS compression. I stumbled upon a review the other day at Igor's Lab that had some ATTO Disk Benchmark results for an NVMe device on a 5950X CPU.

NVMe SSD benchmark with a 5950X CPU.

It was an NVMe review, but I don't see ATTO Disk Benchmark used much in general, and noticed a 5950X got utilized. The way that ties in with the results is right in line with what I'd suspected, but I hadn't seen anything to really verify much on a more capable system with a better L3 cache. If you look at the results, they top off at the 64 MB mark, which is exactly the size of the 5950X's L3 cache. From the results it appears Igor didn't utilize NTFS compression, which I believe is the right call for an NVMe benchmark test, so as to not skew results. If you were to compress it with NTFS compression and Windows' highest NTFS compression unit allocation size, though, the read performance would improve dramatically right up to a 64 MB I/O size and file size, and beyond that it would drop off dramatically as it then fetches from slower system memory.

In essence the L3 cache serves as a bit of a dynamic RAM disk at or below the L3 cache size and file sizes. I guess in the case of PrimoCache for block-level caching it would behave similarly with the block-level chunk sizes, and probably be a bigger deal for older, slower mechanical drives. Still, a 96 MB chunk size in the case of a 5800X3D is great for a mechanical drive and heavily alleviates their biggest drawback, or similarly a 64 MB chunk size with the 5950X.



How it translates to games is interesting: anything 96 MB or below, compressed or uncompressed, will be very quick at low latency. The larger cache enables quicker access to bigger files directly from the L3, bypassing the additional latency of slower system memory. The CPU can fit a larger image in the L3 cache at or below 96 MB, compressed or uncompressed, without having to even touch system memory. It also allows larger data sets for use with mesh shading/variable rate shading and upscaling, and general game-related files including audio, at or below 96 MB, without having to access higher-latency system memory. Just imagine how those 768 MB L3 cache EPYCs do in certain scenarios. Things are going to get really interesting in the coming years as more L3 cache is made available at more consumer-friendly price levels.
Xex360Interesting to see how they'd use this technology in the future, maybe they could offer special gaming CPUs, with the efficient cores (similar to the 5950x cores) pushed hard but fewer in number with 3d cache.
I made a post on the prospects of what AMD could do with its take on big.LITTLE about a week or two ago. What AMD could possibly do is utilize OS processor scheduling assignment and assign foreground/background to individual chiplets in the same manner. They could have your highly parallel chiplet and another chiplet that's got fewer cores, but much of the remaining die area devoted to a bit larger L2 cache and 3D stacked L3 cache. Both of those caches could have TSVs to connect and share them with the higher-core-count parallel chiplet as well. It might be a bifurcated, segmented assignment between two chiplets in a 25%/75% split, irreversible in terms of which gets the larger swath of L3 cache, or perhaps a neutral, balanced 50%/50% split. AMD would probably want to work in tandem with Microsoft a little on how that can be done and operate, but it seems like it would work nicely. The foreground/background CPUs might also have a +1 to +2 / -1 to -2 adjustment to the boost multiplier depending on foreground/background, while neutral perhaps doesn't adjust it.

If they wanted, two BCLKs might even be possible, assigning a separate one to each chiplet for efficiency reasons and/or the silicon lottery, and letting the BIOS set each chiplet up with its own. The BIOS could sync them or make them both dynamic for each chiplet. That could actually even allow you to mix different RAM speed kits together, using the faster RAM kit for the foreground chiplet. It would work equally well for performance and efficiency.

What I find interesting with the 5800X3D result is the 1% low percentile results. How this chip performs at 720p is indicative of where things are headed more and more with GPU technology as a whole. It'll tie in nicely with Infinity Cache as well, and with NTFS compression and GPU upscaling from 720p to higher resolution points. It'll obviously help in turn for 1080p and upscaling as well, but will be more pronounced at lower resolutions in particular, for now at least. Give it some time, however, and with better GPU compression they might eke a touch more out of it. I definitely anticipate even better upscaling in the coming years, and this cache will be able to readily make good use of it. How well we'll be able to upscale 720p upward in the next GPU architecture is something to look forward to. I look forward to seeing different example cases of where the cache makes a difference. I wonder if it's something that would impact RAID scaling performance tapering off or not.
Posted on Reply
#59
Chomiq
Chrispy_The 1% lows are the important result here.

Nobody really cares if they're getting 200 or 300 FPS average, but in a busy firefight, the minute it drops below vsync or whatever tick rate the engine/server runs at, you'll notice it and want to drop quality settings.
#frametimesmatter
Chrispy_AMD have officially stated that the 5800X3D is not overclockable and have asked all motherboard vendors to explicitly lock down overclocking on the X3D because it will destroy the CPU.

It's to do with the 3D cache not being tolerant of the same voltages as the underlying chip. You overvolt your 5800X3D and AMD say that will toast the 3D V-cache, warranty null and void - Enjoy your $449 keychain.


FTFY ;)

If 1% lows are given more weighting, then yeah, 5800X3D looks like a real winner.
Now, now, they haven't said anything about destroying the CPU.
Posted on Reply
#60
HD64G
Chrispy_FTFY ;)

If 1% lows are given more weighting, then yeah, 5800X3D looks like a real winner.
Some people need to learn how testing works. For the games where the differences are below 2%, we have a GPU bottleneck. The games that show bigger differences are the ones that reveal the gaming power of the CPUs. So, in all but one game, the 5800X3D shows a superiority in either average or minimum FPS that is too significant to go unnoticed.
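The reasoning above can be expressed as a small sketch (the 2% cutoff is the commenter's own heuristic, not a formal standard, and the function name is illustrative):

```python
def classify_result(fps_a, fps_b, threshold=0.02):
    """Label a head-to-head FPS result as GPU-bound or CPU-differentiated."""
    delta = abs(fps_a - fps_b) / max(fps_a, fps_b)
    return "GPU bottleneck" if delta < threshold else "CPU-limited difference"

# 200 vs 201 FPS: within 2%, so the GPU (or a refresh cap) is the limiter.
# 100 vs 140 FPS: far outside 2%, so the CPUs themselves are being measured.
```

By this heuristic, most of the 1080p results in the article read as GPU bottlenecks, while the big 720p gaps are the CPU-limited ones.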
Posted on Reply
#61
aQi
ZoneDymoermm plenty of reviews have already pointed out that DDR5 does not give a benefit as is...
Yeah, we all know DDR4 was premature, whereas DDR3 had remarkable marks everywhere. I am not supporting DDR5 here, but it's still in its very early stages. Similarly, the 12900KS might use its potential but will be restricted to limited games/apps (new titles will be optimised to use DDR5, just like Arc GPUs).
But one thing is for sure: the 12900KS is an overclocking bad boy and the 5800X3D is not. The above-mentioned benches are also at stock for both, where the 5800X3D admittedly beats the 12900KS, yet we all wanna see the 4K fun here (I personally want to see how the 5800X3D performs under 4K circumstances).
I don't get it, the bencher could have given us 4K benchmarks but did not. Still waiting. One thing more: the value per dollar will be prominent enough for users to select options.
fevgatosI'm pretty sure Raptor Lake will be supported on Z690.
You bet it will, that's what Intel does. Two chipsets and two generations, then jump socket :p
Posted on Reply
#62
InVasMani
The 1% lows are what matters most critically, and also where DDR4 still holds up best against DDR5. I look at DDR5 almost like micro stutter in SLI/CF against a single card. There are cases where a single card with a slightly lower average is so much smoother on micro stutter that it's still generally worth the trade-off. It's much like some cases of a dual core vs a quad core that can both generally run a game alright, but frame time variance on the dual core just craters at certain points while the quad core hums along smoothly. I'm looking at the 3D stacked cache the same way from the results I'm seeing thus far.
Posted on Reply
#64
ThrashZone
Hi,
Nice, Intel prices should tank unless AMD raises the 5800X3D price :eek:

Nice that they used 3200 CL14, but they should have used the more common 3200 CL16 instead.
Posted on Reply
#65
THU31
ThrashZoneHi,
Nice, Intel prices should tank unless AMD raises the 5800X3D price :eek:
The 12900K(S) is not just a gaming CPU, though.

What they should do is release an i7 without E-cores, but with maxed out clocks.
Posted on Reply
#66
ThrashZone
THU31The 12900K(S) is not just a gaming CPU, though.

What they should do is release an i7 without E-cores, but with maxed out clocks.
Hi,
The 5800X3D was said to be the gaming champ, not the business champ, though it still might be, seeing as default clocks are more efficient. I'd consider a 5800X3D in a laptop way before Intel.

Now you see why Intel pushed the 12900KS release sooner: so they could get the sucker buyers (which they would get anyway), but Intel can take the highest profits at nearly $800 before the 5800X3D release, instead of the $550 the 12900K recently dropped to :laugh:
Posted on Reply
#67
Unregistered
HD64GGaming crown returned to AMD.
Yeah, they stuck a foot out on the finishing line.
Posted on Reply
#68
SL2
I don't get all the "let's not compare actual cost" type of comments. It makes no sense.
Posted on Reply
#69
ThrashZone
Chrispy_AMD have officially stated that the 5800X3D is not overclockable and have asked all motherboard vendors to explicitly lock down overclocking on the X3D because it will destroy the CPU.

It's to do with the 3D cache not being tolerant of the same voltages as the underlying chip. You overvolt your 5800X3D and AMD say that will toast the 3D V-cache, warranty null and void - Enjoy your $449 keychain.


FTFY ;)

If 1% lows are given more weighting, then yeah, 5800X3D looks like a real winner.
Hi,
Last I read, Intel killed its overclock warranty policies too, so it seems a $450 keychain is cheaper than an $800 keychain :laugh:
Posted on Reply
#70
nexxusty
xorbePentium Pro reborn
Yes!

This is the best comment I've seen yet on the 5800X3D.
Posted on Reply
#71
zo0lykas
Why, but why? Why don't I see a benchmark of the 5800X vs the 5800X3D?

And you can add an Intel CPU too if you want..

ohhhhh :(
Posted on Reply
#72
HisDivineOrder
The thing I wonder is how many of these chips AMD will make. That'll be the real test of whether it's great or a unicorn. Either way, competition is grand. I want more of it.
Posted on Reply
#73
Punkenjoy
InVasManiFrom what I can see of this 3D cache and the results, it can have a fairly dramatic effect on the 1% lows, especially evident at lower resolutions. How that all translates with Infinity Cache and GPU upscaling should be neat as well. In fact I think GPU upscaling is only due to get better in future GPU generations, so this 3D stacked cache should help even further in the next generation of GPUs. Beyond upscaling for GPU tech, variable rate shading and/or mesh shading can bring down some of the peak on-demand bandwidth within scenes, which will also help with this overall cache design, because small chunks of data that fit within a cache and don't have to be fetched from slower system memory are much more desirable for overall performance. Individual frames up to 96 MB or a touch below will be able to fit within the cache, while on another CPU with a smaller L3 cache that wouldn't be possible, and that's a big gain to overall latency across many frames. This chip could open up a lot of improvements to post-process techniques that otherwise might be more taxing on the CPU side.

Something else to mention is NTFS compression. I stumbled upon a review the other day at Igor's Lab that had some ATTO Disk Benchmark results for an NVMe device on a 5950X CPU.

NVMe SSD benchmark with a 5950X CPU.

It was an NVMe review, but I don't see ATTO Disk Benchmark used much in general, and noticed a 5950X got utilized. The way that ties in with the results is right in line with what I'd suspected, but I hadn't seen anything to really verify much on a more capable system with a better L3 cache. If you look at the results, they top off at the 64 MB mark, which is exactly the size of the 5950X's L3 cache. From the results it appears Igor didn't utilize NTFS compression, which I believe is the right call for an NVMe benchmark test, so as to not skew results. If you were to compress it with NTFS compression and Windows' highest NTFS compression unit allocation size, though, the read performance would improve dramatically right up to a 64 MB I/O size and file size, and beyond that it would drop off dramatically as it then fetches from slower system memory.

In essence the L3 cache serves as a bit of a dynamic RAM disk at or below the L3 cache size and file sizes. I guess in the case of PrimoCache for block-level caching it would behave similarly with the block-level chunk sizes, and probably be a bigger deal for older, slower mechanical drives. Still, a 96 MB chunk size in the case of a 5800X3D is great for a mechanical drive and heavily alleviates their biggest drawback, or similarly a 64 MB chunk size with the 5950X.



How it translates to games is interesting: anything 96 MB or below, compressed or uncompressed, will be very quick at low latency. The larger cache enables quicker access to bigger files directly from the L3, bypassing the additional latency of slower system memory. The CPU can fit a larger image in the L3 cache at or below 96 MB, compressed or uncompressed, without having to even touch system memory. It also allows larger data sets for use with mesh shading/variable rate shading and upscaling, and general game-related files including audio, at or below 96 MB, without having to access higher-latency system memory. Just imagine how those 768 MB L3 cache EPYCs do in certain scenarios. Things are going to get really interesting in the coming years as more L3 cache is made available at more consumer-friendly price levels.


I made a post on the prospects of what AMD could do with its take on big.LITTLE about a week or two ago. What AMD could possibly do is utilize OS processor scheduling assignment and assign foreground/background to individual chiplets in the same manner. They could have your highly parallel chiplet and another chiplet that's got fewer cores, but much of the remaining die area devoted to a bit larger L2 cache and 3D stacked L3 cache. Both of those caches could have TSVs to connect and share them with the higher-core-count parallel chiplet as well. It might be a bifurcated, segmented assignment between two chiplets in a 25%/75% split, irreversible in terms of which gets the larger swath of L3 cache, or perhaps a neutral, balanced 50%/50% split. AMD would probably want to work in tandem with Microsoft a little on how that can be done and operate, but it seems like it would work nicely. The foreground/background CPUs might also have a +1 to +2 / -1 to -2 adjustment to the boost multiplier depending on foreground/background, while neutral perhaps doesn't adjust it.

If they wanted, two BCLKs might even be possible, assigning a separate one to each chiplet for efficiency reasons and/or the silicon lottery, and letting the BIOS set each chiplet up with its own. The BIOS could sync them or make them both dynamic for each chiplet. That could actually even allow you to mix different RAM speed kits together, using the faster RAM kit for the foreground chiplet. It would work equally well for performance and efficiency.

What I find interesting with the 5800X3D result is the 1% low percentile results. How this chip performs at 720p is indicative of where things are headed more and more with GPU technology as a whole. It'll tie in nicely with Infinity Cache as well, and with NTFS compression and GPU upscaling from 720p to higher resolution points. It'll obviously help in turn for 1080p and upscaling as well, but will be more pronounced at lower resolutions in particular, for now at least. Give it some time, however, and with better GPU compression they might eke a touch more out of it. I definitely anticipate even better upscaling in the coming years, and this cache will be able to readily make good use of it. How well we'll be able to upscale 720p upward in the next GPU architecture is something to look forward to. I look forward to seeing different example cases of where the cache makes a difference. I wonder if it's something that would impact RAID scaling performance tapering off or not.
The thing is, the 5950X is not a 64 MB L3 cache CPU, it's a 2×32 MB CPU. Same with Milan-X: it's an 8×96 MB CPU. Accessing the other CCD's L3 cache is as slow as accessing main memory. That cache is only really useful for the cores inside that CCD.
Posted on Reply
#74
ACE76
LeshyI don't get it... what's the point of these gaming benchmarks? :D Who's gonna game with a 3080 Ti at ultra-low 720p? :D Show some real benchmarks
This comment is getting old... they purposely used 720p to make the benchmark CPU-bound. That takes the GPU out of the equation so you can see which CPU is having the bigger impact on the score. They're not benchmarking the video card, right? So this is how you get a clear picture of which CPU is better.

AMD's Zen architecture is crazy good. This CPU is just a stopgap product that they're releasing to fill the void for the months before Zen 4 hits. They are literally releasing a CPU that can beat Intel's newest 12900KF without DDR5, and this CPU can run on motherboards that came out when Ryzen was first introduced. Intel is gonna be in trouble in Q3/Q4 when Zen 4 hits.
Posted on Reply
#75
Unregistered
ACE76This comment is getting old... they purposely used 720p to make the benchmark CPU-bound. That takes the GPU out of the equation so you can see which CPU is having the bigger impact on the score. They're not benchmarking the video card, right? So this is how you get a clear picture of which CPU is better.

AMD's Zen architecture is crazy good. This CPU is just a stopgap product that they're releasing to fill the void for the months before Zen 4 hits. They are literally releasing a CPU that can beat Intel's newest 12900KF without DDR5, and this CPU can run on motherboards that came out when Ryzen was first introduced. Intel is gonna be in trouble in Q3/Q4 when Zen 4 hits.
Then AMD will be in trouble when Intel releases Raptor Lake, then Intel will be in trouble when AMD releases................... See the pattern here? Just constantly keep upgrading, why even bother keeping anything for even 6 months when there is always something new.
Posted on Reply