Tuesday, April 12th 2022

AMD Ryzen 7 5800X3D Gets Full Set of Gaming Benchmarks Ahead of Launch

XanxoGaming has finally released its complete set of benchmarks for the AMD Ryzen 7 5800X3D, pitting it against an Intel Core i9-12900KF. This time both platforms were tested using an NVIDIA GeForce RTX 3080 Ti and four 8 GB sticks of 3200 MHz CL14 DDR4 memory. The only differences appear to be the OS drive, motherboard and cooling, although both systems rely on a 360 mm AIO cooler, and both were running Windows 10 21H2. The site has a full breakdown of the components used for those interested in the exact details.

The two platforms were tested in 11 different games at 720p and 1080p. To spoil the excitement, it's a dead heat between the two CPUs in most games at 1080p, with Intel ahead by about 1-3 FPS in the games where AMD loses out. However, in the games where AMD takes the lead, it's by a good 10 FPS or more, with games like The Witcher 3 and Final Fantasy XV seeing an advantage of 40-50 FPS. AMD often has an advantage in the one percent low numbers, even when Intel is ahead on average FPS, but this doesn't apply to all of the games. It's worth keeping in mind that the Intel CPU should gain extra performance when paired with DDR5 memory in some of these games, but we'll have to wait for more reviews to see by how much. The benchmarks displayed are mostly the games TPU normally tests with, but aren't the entirety of the games tested by XanxoGaming.
As for the 720p tests, AMD only loses out in Strange Brigade, though there by over 20 FPS on average and by over 10 FPS in the one percent lows. In the other games it's mostly a dead heat here too, but with the 1-3 FPS advantage going to AMD instead of Intel. However, the 3D V-Cache seems to kick in when it comes to the one percent lows, as AMD edges out Intel by a large margin in more games here, by at least 10 FPS and often by around 30 FPS or more. Take these benchmarks for what they are: an early, unconfirmed test of the Ryzen 7 5800X3D. We're just over a week away from the launch and we should be seeing a lot more benchmarks by then. Head over to XanxoGaming for the full set of tests and their conclusion, especially as they made an effort to write the test in English this time around.
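For readers unfamiliar with the metric, the "one percent low" figures above can be computed in a few ways; here's a hedged sketch using one common definition (the average FPS over the slowest 1% of frames), with made-up frame times rather than XanxoGaming's data:

```python
# One common definition of the "1% low": the average FPS over the slowest
# 1% of frames in a capture. Frame times below are invented for illustration.

def one_percent_low(frametimes_ms):
    """Average FPS over the slowest 1% of frames (longest frame times)."""
    slowest = sorted(frametimes_ms, reverse=True)   # longest frames first
    count = max(1, len(slowest) // 100)             # at least one frame
    avg_ms = sum(slowest[:count]) / count
    return 1000.0 / avg_ms

# 1000 frames at 8.3 ms (~120 FPS) with ten 20 ms stutters mixed in:
frametimes = [8.3] * 990 + [20.0] * 10
print(one_percent_low(frametimes))  # 50.0 FPS, despite a ~119 FPS average
```

This is why 1% lows punish stutter that a plain average hides; some reviewers use the 99th-percentile frame time instead, which gives similar but not identical numbers.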
Source: XanxoGaming

139 Comments on AMD Ryzen 7 5800X3D Gets Full Set of Gaming Benchmarks Ahead of Launch

#101
Chrispy_
Leshyso u are telling, that this board www.asrock.com/MB/AMD/B550M-HDV/index.fr.asp is capable of running 5950x, but this one rog.asus.com/motherboards/rog-crosshair/rog-crosshair-vi-hero-model/ isnt :D give me a break :D its not supported because they dont want it to be :D idc if its AMD or ASUS part ... :D end of story
100% yes, no joke.
Asus cheaped out on the BIOS chip:

I'm sure the VRM is up to scratch, but it doesn't have a big enough BIOS chip to support more CPUs without losing features or older CPU support. It's up to ASUS to make that call and they either can't be bothered, or feel that the losses don't justify the gains.

They're also not impartial; they want to sell you a new motherboard. If you don't like their behaviour, stop buying Asus motherboards.
Posted on Reply
#102
SL2
Chrispy_Yep.
Asus cheaped out.

I'm sure the VRM is up to scratch, but it doesn't have a big enough BIOS chip to support more CPUs without losing features or older CPU support.
You beat me to it and you're 100% right. It's all about ROM size, and it's a well-known fact.
Posted on Reply
#103
aQi
GURU7OF9"The 12900ks overclocking bad boy" .
I know Intel overclocks better, but seriously, this KS version is already binned and overclocked from the factory to its limit. It is only for Intel to still try and claim the so-called "Gaming Crown"! There is no more headroom left in it, which makes its overclocking capabilities pretty much nonexistent, or minimal at best!
So as far as overclocking goes, it's not really relevant!
All the high-end CPUs from Intel and AMD are cranked pretty hard straight out of the box with minimal headroom. Only from the lower-specced ones can any reasonable gains be made!

So far, from these preliminary tests, it appears that the Ryzen 7 5800X3D will be very competitive with the 12900KS in gaming!
I agree with you on this, but these preliminary tests were at stock Intel settings, yet the difference in FPS was quite impressive. Speaking of which, Intel is already cooking something with Raptor Lake. Though personally, the 5800X3D at its price might be the only gamer's love on the market until we get something on the AM5 socket.
Posted on Reply
#104
Punkenjoy
InVasManiFair point though it can still fit and access 2 files that are 32MB w/o having to fall back to memory. Not quite as good as true 64MB design that's not shared, but good regardless.
Filling the cache with a single file would be the worst usage of cache possible. Anyway, the CPU itself is not aware of files; it's just aware of memory addresses and registers.

The memory controller won't have issues streaming the file as it's being compressed or decompressed, and current memory is fast enough. The cache will be used for the dictionary or something like that. You want to have in cache the things the CPU won't be able to prefetch, and loading a file is not that at all.

Caches are useful for things that are accessed frequently, not for single-file access. In the case of a file server, it could cache a portion of the file table, portions of the ACLs, etc., but even there, CPUs are fast enough for most of these scenarios.

Or say you have a data set that is quite large (30 MB+) and you have to execute commands based on random input (a player playing a game). In this case the prefetcher won't know exactly what to prefetch into cache, and having a large L3 cache will help you save some time on memory accesses. If you don't reuse that data for any reason, then it's useless to have it in cache.
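Punkenjoy's point can be illustrated with a toy cache model. This is a hedged sketch with invented sizes and a crude LRU policy, nothing like a real Zen 3 cache hierarchy: sequential (prefetch-friendly) access misses rarely regardless of cache size, while random access over a large working set only gets fast once the whole set fits in cache.

```python
from collections import OrderedDict
import random

def miss_rate(accesses, cache_lines, line_words=8):
    """Simulate a tiny LRU cache holding `cache_lines` lines; return miss fraction."""
    cache = OrderedDict()
    misses = 0
    for addr in accesses:
        line = addr // line_words        # which cache line this word maps to
        if line in cache:
            cache.move_to_end(line)      # mark as most recently used
        else:
            misses += 1
            cache[line] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return misses / len(accesses)

random.seed(0)
working_set = 4096                                   # words; 512 cache lines total
seq = list(range(working_set))                       # streaming, prefetch-friendly
rnd = [random.randrange(working_set) for _ in range(working_set)]

small, big = 64, 1024                                # lines: 1/8 of the set vs all of it
print(miss_rate(seq, small))   # one miss per line regardless of cache size
print(miss_rate(rnd, small))   # random + small cache: mostly misses
print(miss_rate(rnd, big))     # random + big cache: only first-touch misses
```

The last two lines are the 5800X3D story in miniature: same access pattern, but a cache large enough to hold the working set turns repeated random accesses into hits.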
Posted on Reply
#105
Leshy
Chrispy_100% yes, no joke.
Asus cheaped out on the BIOS chip:

I'm sure the VRM is up to scratch, but it doesn't have a big enough BIOS chip to support more CPUs without losing features or older CPU support. It's up to ASUS to make that call and they either can't be bothered, or feel that the losses don't justify the gains.

They're also not impartial; they want to sell you a new motherboard. If you don't like their behaviour, stop buying Asus motherboards.
LOL and they just released new betabios with full support of 5000+3d :D ...clearly its a hw restraint

Take your blindfold off

edit: so 3d not supported yet .. we ll see what amd ll do :)
Posted on Reply
#106
Chrispy_
LeshyLOL and they just released new betabios with full support of 5000+3d :D ...clearly its a hw restraint

Take your blindfold off

edit: so 3d not supported yet .. we ll see what amd ll do :)
I'm not blind, you simply don't understand the issue and you're not comprehending what I'm explaining to you in quite clear language.

ASUS has had to either remove BIOS features or remove supported CPU models to fit new features into a small 16MB BIOS. It's not up on their official website yet so I can't say for sure what's suffered to make it possible.

Vendors like Asrock and Gigabyte choose to drop support for older AM4 CPUs like A4/A6/A8 instead when using smaller BIOS chips so that they can keep all of the original BIOS features intact. It's clearly listed what they've had to drop if you look at warnings and notes on each BIOS version available to download.

EDIT:
According to reddit, it's not an official ASUS BIOS, it's a crossflash from the Fatal1ty B450 gaming K4 and it strips almost all but basic boot compatibility for Pinnacle Ridge, Raven Ridge or Summit Ridge CPUs. (so 1000-series CPUs/APUs, 2000-series CPUs) There's also a massive list of caveats going all the way back to the last AGESA that fully supported the graphics output of those APUs.
Posted on Reply
#107
THU31
Are BIOS chips expensive or something? 16 MB seems kind of smol, for what is basically flash memory?
Posted on Reply
#108
Chrispy_
THU31Are BIOS chips expensive or something? 16 MB seems kind of smol, for what is basically flash memory?
Uh, relatively yes I think. I'm also not sure it's cheap like regular NAND, I think it's EEPROM which is different and expensive in ways I don't care to understand.

Apparently a 256Mb BIOS chip is about $1.05 more expensive. And given that the BOM cost of a $150 retail board might be only $30, that's a big deal. I think @TheLostSwede wrote an article for TPU on BOM cost of motherboards a few months back.
Posted on Reply
#109
TheLostSwede
News Editor
THU31Are BIOS chips expensive or something? 16 MB seems kind of smol, for what is basically flash memory?
It's normally some form of SPI Flash, NOR Flash is the most common type and per MB or GB, whichever way you want to look at it, it's comparatively costly.
It's by no means crazy money, but 128 Mbit or 16 MB of NOR Flash is about US$1.32 these days if you buy 2k units on a reel, 4k units only saves you a cent or so.
The cheapest 256 Mbit or 32 MB NOR Flash on a reel right now is about US$2.38. This is admittedly from a distributor and not directly from a memory manufacturer, but some of these companies only sell through distribution. Yes, you do get discounts as the volume increases, but that only goes so far.
16 MB was plenty, until AMD's AGESA grew in size and became an issue. It's also worth remembering that companies like Gigabyte had their Dual BIOS implementation that used two flash chips at double the cost.
These days the second flash chip seems to have been replaced by an MCU that allows the BIOS/UEFI to be flashed without a CPU in the board, which also adds cost, but hopefully reduces RMAs due to bad flashes.
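Plugging in the prices quoted above (distributor reel pricing, so treat these as ballpark figures, not BOM gospel), the arithmetic behind the "it's about a dollar" argument looks like this:

```python
# Ballpark BOM arithmetic from the reel prices quoted above (USD).
price_16mb = 1.32   # 128 Mbit (16 MB) NOR Flash
price_32mb = 2.38   # 256 Mbit (32 MB) NOR Flash

delta = price_32mb - price_16mb
print(f"Doubling the BIOS chip adds about ${delta:.2f} per board")
print(f"Per MB: ${price_16mb / 16:.4f} at 16 MB vs ${price_32mb / 32:.4f} at 32 MB")
```

About a dollar per board sounds trivial, but on a board whose entire BOM might be around $30, as mentioned earlier in the thread, every component gets this same scrutiny.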
Posted on Reply
#110
Leshy
Chrispy_I'm not blind, you simply don't understand the issue and you're not comprehending what I'm explaining to you in quite clear language.

ASUS has had to either remove BIOS features or remove supported CPU models to fit new features into a small 16MB BIOS. It's not up on their official website yet so I can't say for sure what's suffered to make it possible.

Vendors like Asrock and Gigabyte choose to drop support for older AM4 CPUs like A4/A6/A8 instead when using smaller BIOS chips so that they can keep all of the original BIOS features intact. It's clearly listed what they've had to drop if you look at warnings and notes on each BIOS version available to download.

EDIT:
According to reddit, it's not an official ASUS BIOS, it's a crossflash from the Fatal1ty B450 gaming K4 and it strips almost all but basic boot compatibility for Pinnacle Ridge, Raven Ridge or Summit Ridge CPUs. (so 1000-series CPUs/APUs, 2000-series CPUs) There's also a massive list of caveats going all the way back to the last AGESA that fully supported the graphics output of those APUs.
Not talking about the ASRock BIOS, as I mentioned before :) It's the ASUS beta BIOS 8503 that is working with 5000-series according to some forum posts ... only 1H later, but whatever, AMD ...

The language is clear: pay for a new board, we don't keep promises.

Clearly it's just greed, not HW restrictions.
Posted on Reply
#111
thesmokingman
QuietBob3080Ti @ 1080p ultra
5800X3D vs. 12900K 1% lows

Assassin's Creed: Origins -3%
Borderlands 3 -1%
Control +11%
Death Stranding +9%
F1 2020 +1%
Final Fantasy XV +26%
Metro Exodus +15%
Shadow of the Tomb Raider +28%
Middle-Earth: Shadow of War +7%
Strange Brigade -1%
The Witcher 3 +1%

I'd call it a tie in five games and a win for the 5800X3D in the other six. The difference is going to be less pronounced in higher resolutions, or with a weaker GPU, but still. Based on these results alone, AMD have delivered on their promise.
Until you look at the power draw, with one chip using up to 130 W and the other up to or over 300 W. The difference in perf per watt is bonkers.
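A hedged back-of-the-envelope on that claim: if the two chips deliver roughly the same FPS (an assumption, and the 300 W figure is a full-load peak rather than typical gaming draw, so this overstates the in-game gap), perf per watt differs by exactly the inverse power ratio.

```python
# Back-of-the-envelope perf-per-watt using the peak draw figures cited in
# reviews; the FPS value is an invented placeholder for "roughly equal".
fps = 150.0
watts_5800x3d = 130.0   # peak draw measured for the 5800X3D
watts_12900ks = 300.0   # peak full-load draw measured for the 12900KS

ppw_amd = fps / watts_5800x3d
ppw_intel = fps / watts_12900ks
print(round(ppw_amd / ppw_intel, 2))  # 2.31x, i.e. the inverse of the power ratio
```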
Posted on Reply
#112
THU31
thesmokingmanUntil you look at the power draw, with one chip using up to 130 W and the other up to or over 300 W. The difference in perf per watt is bonkers.
It is not going to reach 300 W in gaming, probably not even 200, but it does show how much wiser AMD's approach is. Instead of pushing the voltage and clocks, which ruins efficiency, they actually lowered both, but added something much more important in gaming.

As amazing as Alder Lake is, Intel is still forcing things that gamers do not care about. They need to change something about their i7 lineup. The i7 should be a top tier gaming CPU with more cache, the i9 should be dedicated to productivity competing with 12- and 16-core Ryzens.
Posted on Reply
#113
JustBenching
thesmokingmanUntil you look at the power draw, with one chip using up to 130 W and the other up to or over 300 W. The difference in perf per watt is bonkers.
Can you please show us the power draw where one consumes 300 and the other 130?
Posted on Reply
#114
thesmokingman
fevgatosCan you please show us the power draw where one consumes 300 and the other 130?
The reviews are out so it should be up everywhere with a thorough review.
Intel's short-lived advantage in gaming came at the cost of extra power, though: The Core i9-12900KS has a 150W processor base power (PBP), a record for a mainstream desktop processor, and we measured up to 300W of power consumption under full load. In contrast, the Ryzen 7 5800X3D has a 105W TDP rating and maxed out at 130W in our tests, showing that it is a far cooler processor that won't require as expensive accommodations, like a beefy cooler, motherboard, and power supply, as the Core i9-12900KS.
www.tomshardware.com/news/amd-ryzen-7-5800x3d-review

Also, I'd add this interesting bit of comparo to the 5800x.
The 5800X3D reached its peak 4.5 GHz frequency frequently, while the 5800X actually exceeded its 4.7 GHz spec and regularly hit 4.8 GHz. Temperatures and power draw aren't a major concern through most of this test, but there are a series of multi-threaded Geekbench workloads near the 1000-second mark. Again, the 5800X draws more power and runs at higher clocks than the 5800X3D during these periods of heavy load, but it has nearly identical temperatures.
Posted on Reply
#115
InVasMani
I wonder if all that extra cache will provide any upside on DIMM stability especially in regard to 4 DIMM slots being populated. It's a long shot, but would be quite nice if it did help.
Posted on Reply
#116
thesmokingman
InVasManiI wonder if all that extra cache will provide any upside on DIMM stability especially in regard to 4 DIMM slots being populated. It's a long shot, but would be quite nice if it did help.
It's doubtful that it would, and I wouldn't go into it with any sort of expectation for that. Look at Zen 4 with stacked cache, for example: they have to specifically work on tuning the arch for higher memory speeds.
Posted on Reply
#117
Chrispy_
InVasManiI wonder if all that extra cache will provide any upside on DIMM stability especially in regard to 4 DIMM slots being populated. It's a long shot, but would be quite nice if it did help.
I don't see how.

The IO die that accompanies the 5800X3D's updated CCD is unchanged, identical to every other Zen3 CPU with an MCM design, and in case you weren't aware, the memory controller for Zen3 is on that IO die.
Posted on Reply
#118
Punkenjoy
InVasManiI wonder if all that extra cache will provide any upside on DIMM stability especially in regard to 4 DIMM slots being populated. It's a long shot, but would be quite nice if it did help.
Maybe indirectly. The cache itself won't do anything for RAM stability, but it will help reduce the performance loss due to bad timings or slow RAM.
Posted on Reply
#119
InVasMani
What I'm saying is that if you aren't hammering the system memory as hard, the IMC is less taxed and stressed, so perhaps that helps a bit with stability. I think a lot of things around 4-DIMM stability and frequency scaling could stand to be investigated more in depth. As an example: pairing two 3200 MHz DDR4 kits, one per channel, where one kit is CL16 and the other CL14, and first training the memory to the CL16 settings. Would that help with 4-DIMM stability over two CL16 kits? It's worth considering from a technical standpoint, and probably worth investigating from a tech-industry standpoint as well. If it's as simple as putting a slightly higher-quality kit in the second pair of DIMM slots, that would be a nice solution to what's often a slight hit to stability when populating more DIMMs for higher capacity.

Mixing and matching DIMM kits isn't something that's been explored in very definitive terms. More DIMMs are harder to keep stable, so why not simply offset that with a stronger kit, if it's that simple!? You could also use a really high-quality larger kit in the first pair of slots and a smaller-capacity but higher-performing kit in the other pair; you'd end up with more capacity and more stability, if it works, provided you train the memory to the first kit's settings. I've seen some investigation of this, but not the serious deep dive I'd prefer, to really explore the possibilities and nail down how well it can work. Every kit varies, I understand that, but whether a stronger kit in the other two DIMM slots generally provides better stability is a legitimate question.

From a technical standpoint you'd imagine it could and would, provided you train the timings to the slower kit. Given that the second kit is a bit higher quality, it should be able to offset some of the signal-integrity issues of running four DIMMs, you would think and hope. I really can't think of any reason why a second kit with 1 CL tighter latency at the same frequency wouldn't generally help 4-DIMM stability.

Similarly, I think a larger CPU cache might help with DIMM stability in some cases, but it's hard to say definitively. If you don't test it, there is no way to know for certain what sort of impact it can have in these less orthodox scenarios. Even if the procedure is unorthodox and comes across as oddball, if it legitimately works for technical reasons, whether through signal integrity or the stress on the IMC being relieved by the cache or the number of DIMM slots populated, that's what's more important. Outside-the-box thinking, I suppose, but signal integrity is a technical hurdle, and if it works, who cares, as long as it gets the job done!?
Posted on Reply
#120
Chrispy_
InVasManiWhat I'm saying is that if you aren't hammering the system memory as hard, the IMC is less taxed and stressed, so perhaps that helps a bit with stability. I think a lot of things around 4-DIMM stability and frequency scaling could stand to be investigated more in depth. As an example: pairing two 3200 MHz DDR4 kits, one per channel, where one kit is CL16 and the other CL14, and first training the memory to the CL16 settings. Would that help with 4-DIMM stability over two CL16 kits? It's worth considering from a technical standpoint, and probably worth investigating from a tech-industry standpoint as well. If it's as simple as putting a slightly higher-quality kit in the second pair of DIMM slots, that would be a nice solution to what's often a slight hit to stability when populating more DIMMs for higher capacity.

Mixing and matching DIMM kits isn't something that's been explored in very definitive terms. More DIMMs are harder to keep stable, so why not simply offset that with a stronger kit, if it's that simple!? You could also use a really high-quality larger kit in the first pair of slots and a smaller-capacity but higher-performing kit in the other pair; you'd end up with more capacity and more stability, if it works, provided you train the memory to the first kit's settings. I've seen some investigation of this, but not the serious deep dive I'd prefer, to really explore the possibilities and nail down how well it can work. Every kit varies, I understand that, but whether a stronger kit in the other two DIMM slots generally provides better stability is a legitimate question.

From a technical standpoint you'd imagine it could and would, provided you train the timings to the slower kit. Given that the second kit is a bit higher quality, it should be able to offset some of the signal-integrity issues of running four DIMMs, you would think and hope. I really can't think of any reason why a second kit with 1 CL tighter latency at the same frequency wouldn't generally help 4-DIMM stability.

Similarly, I think a larger CPU cache might help with DIMM stability in some cases, but it's hard to say definitively. If you don't test it, there is no way to know for certain what sort of impact it can have in these less orthodox scenarios. Even if the procedure is unorthodox and comes across as oddball, if it legitimately works for technical reasons, whether through signal integrity or the stress on the IMC being relieved by the cache or the number of DIMM slots populated, that's what's more important. Outside-the-box thinking, I suppose, but signal integrity is a technical hurdle, and if it works, who cares, as long as it gets the job done!?
Stability has no correlation with usage:

Something unstable can seem stable if it's used so little that you don't trigger a crash, but it's still unstable and will fall over with greater loads.

To use an analogy, a weak bridge won't collapse if you don't drive heavy vehicles over it, but not driving heavy vehicles over it doesn't somehow reinforce the bridge - it's still a weak bridge.
Posted on Reply
#121
InVasMani
Chrispy_Stability has no correlation with usage:

Something unstable can seem stable if it's used so little that you don't trigger a crash, but it's still unstable and will fall over with greater loads.

To use an analogy, a weak bridge won't collapse if you don't drive heavy vehicles over it, but not driving heavy vehicles over it doesn't somehow reinforce the bridge - it's still a weak bridge.
I get what you're saying, but at the same time, if the additional cache means the bridge never sees the extremes of a heavy vehicle driving over it, then it does in fact provide additional stability, does it not!? It's not so much a case of whether you can add weight until the bridge collapses, but of how much weight you can add without it collapsing.
Posted on Reply
#122
Why_Me
More cache looks to be good for gaming but not so good for any other uses is what I get out of these benchmarks.
Posted on Reply
#123
ARF
Why_MeMore cache looks to be good for gaming but not so good for any other uses is what I get out of these benchmarks.
Which means that in gaming, the bottleneck is the lack of a large, close-to-the-CPU memory pool. Imagine how fast games would run if you had 16 GB of DDR4 with the access times and throughput of an L3 cache...
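The textbook way to quantify that intuition is average memory access time (AMAT). A hedged sketch with ballpark latencies (round numbers, not measured Zen 3 values) shows how a bigger cache narrows the gap ARF describes:

```python
# AMAT = hit_time + miss_rate * miss_penalty: the standard first-order
# model. Latencies in ns are ballpark figures, not measured Zen 3 values.
def amat(l3_hit_ns, miss_rate, dram_penalty_ns):
    return l3_hit_ns + miss_rate * dram_penalty_ns

l3_ns, dram_ns = 10.0, 70.0
print(amat(l3_ns, 0.50, dram_ns))  # smaller cache, 50% misses -> ~45 ns average
print(amat(l3_ns, 0.20, dram_ns))  # tripled L3 cuts misses   -> ~24 ns average
```

ARF's thought experiment of DRAM with L3-like latency is the limiting case: the miss rate stops mattering once the miss penalty approaches the hit time.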
Posted on Reply
#124
InVasMani
Where 3D-stacked cache would make even more sense is as a GPU's last-level cache.
Posted on Reply
#125
chrcoluk
Seen this image doing the rounds.

Posted on Reply