AMD Ryzen 7 5800X3D

Oooh my IMC looks nice too

It's not something you'll see gains in if you're GPU limited. If you run DLSS, or games that hit CPU limits? Yes.

As an example, your frametimes might be a lot smoother with less microstutter... not that I ever had any on my 5800X outside of game bugs.

Oh, and it seems with zero effort, the RAM that only ran at 3800 on my 5800X will now do 4000 1:1 on my X3D.
(No, the timings and latencies are not great here - they're at loose values because, duh, 4000 MT/s)


I am pleased.


The 5800X3D uses *less* power in gaming than the 5800X - 10 W, or 17% lower, if my math is correct.

TPU's 7700x review has some new testing methods and fancy graphs:
AMD Ryzen 7 7700X Review - The Best Zen 4 for Gaming - Power Consumption & Efficiency | TechPowerUp
No WHEA 19? Unless you must run ridiculous voltages on SOC/IOD or raise VDD18, 2000 FCLK is a good deal :)
 
Bruh, get some dual-rank B-die.

You can cut your latency by 15ns.

RAM still matters with the 5800X3D; it's the new cause of stutter. Before, the overall latency was the issue, so consistently low FPS etc.; now RAM is a bottleneck due to the massive difference between cache and memory access latency whenever games spill out of the cache.

E.g. the 5800X averages 100 FPS with lows of 50; better RAM can increase both of those.
The 5800X3D averages 140 with lows of 90; better RAM can increase those lows to around 100-110.
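One way to see the stutter angle in those example numbers is to compare the lows to the average. A quick sketch (the FPS figures are the illustrative ones from this post, not benchmark data):

```python
# Lows as a percentage of the average: a rough frame-pacing consistency metric.
# FPS figures are the illustrative ones from the post above, not measured data.
def consistency(avg_fps, low_fps):
    return low_fps / avg_fps * 100

x5800 = consistency(100, 50)  # lows are half the average
x3d = consistency(140, 90)    # lows sit much closer to the average
print(f"5800X:   {x5800:.0f}% of average")
print(f"5800X3D: {x3d:.0f}% of average")
```

The closer the lows sit to the average, the less visible the stutter, which is the practical benefit of the extra cache when a game spills out of it.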
 
I am wondering if running my 5800X3D at 4.5 GHz all-core with a -100 mV offset, where HWiNFO64 shows CPU package power at max 82.702 W, is bad when it originally uses 103.090 W at default.

Running Cinebench R23 I originally scored 13883 pts and I am down to 11962 pts, which if I can count is a 16.1% performance loss but about 24.7% power savings; not sure if I am totally off, I am not good at maths :(
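(For anyone checking that arithmetic: both figures depend on which value you take as the baseline. A quick sketch with the numbers above:)

```python
# Cinebench R23 scores and package power from the post: stock vs. -100 mV all-core.
stock_score, uv_score = 13883, 11962
stock_power, uv_power = 103.090, 82.702

# Relative to the stock (higher) values:
loss_vs_stock = (stock_score - uv_score) / stock_score * 100   # ~13.8% performance lost
saved_vs_stock = (stock_power - uv_power) / stock_power * 100  # ~19.8% power saved

# Relative to the undervolted (lower) values - this is where 16.1% / 24.7% come from:
loss_vs_uv = (stock_score - uv_score) / uv_score * 100         # ~16.1%
saved_vs_uv = (stock_power - uv_power) / uv_power * 100        # ~24.7%

print(f"{loss_vs_stock:.1f}% loss / {saved_vs_stock:.1f}% saved (stock baseline)")
print(f"{loss_vs_uv:.1f}% loss / {saved_vs_uv:.1f}% saved (undervolt baseline)")
```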



So what do people say good or bad?
 
-100 is too much and you lose performance.
-25 to -50 is OK, but it depends on the CPU silicon. Then you should score 14-15K.

The 5800X3D is already extremely efficient at gaming. There's no point in limiting the CPU power/voltage as long as the temps are reasonable.
 
The price for the Ryzen 7 7700X, which I really wanted, was too much once you add the outrageous motherboard cost and DDR5 EXPO memory. That's why I got drawn to the Ryzen 7 5800X3D. I got a good Asus ROG Crosshair VIII Dark Hero with one year of warranty left for half the price of a new board, and I'm reusing my DDR4 RAM, so it was a win for me power-wise too.
I just got my X3D in and I'm fine-tuning the PBO and voltage settings.

Lower wattage and higher performance is what an upgrade should be, and I can consider this a finalised DDR4 system.

I get very similar; I'm testing -75 right now.

It can scale from 80 W to 120 W depending on the chosen settings. You can also set a PPT limit to cap the wattage at, say, 105 W while still keeping most of the performance. I've found 3 different locations where I can enter it in my BIOS; even boards with the settings removed tend to have it hidden away in the generic AMD BIOS settings.

Cross posting with the zen garden thread:

Gaming results:
I reset HWiNFO after I was already in game and took the screenshot before quitting, so the average stats here are true gameplay averages, no idle time.

80 minutes of DRG, 4K ultra, 140 FPS (DX12, Unreal Engine 4)
Peak system wattage of 380 W, 32" monitor included.
CPU peaked at 63°C, 69 W
CPU averaged 48°C, 56 W
The 3090 was in the 200-250 W range (seriously, the gains from letting it go to the 375 W limit are totally not worth it)

It's the little things, like thread usage averaging 3 and peaking at 5, that show 6-core CPUs are definitely viable for gaming.
 
Well, my memory is only 3000 MT/s; it's a mixed setup with a 3000 kit and a 4400 Samsung B-die kit, and the 3000 kit doesn't work great at 3600.
 
Mixed memory makes it more fun, but every DDR4 platform I've used has had issues with more memory ranks.

IMO it's the reason why Intel locked the early DDR4 platforms down to 2133: to let them cut costs on the boards, instead of what AMD did, hoping end users would be understanding of problems if they tried to run it too fast...
 

I noticed that Asus actually accomplishes this task with mixed memory far better than MSI.
Even MSI also works, whereas Gigabyte is the worst: they say it's on the memory vendor. If they claim some RAM works on their board and the customer has issues, it's like Gigabyte wants the memory vendor to write the BIOS for the Gigabyte motherboard.

From my testing, my current DDR4 ranking of motherboard vendors goes like this:
1. Asus (B550/X570)
2. ASRock (Z370/Z390)
3. MSI (B450), though it could share a place with ASRock

and at the bottom is Gigabyte, because even when I tested my previous Gigabyte Z590 Vision G with compatible RAM I borrowed, it wouldn't run XMP, and Kingston ValueRAM ran best at 2133 MHz. Maybe I was just unlucky, not sure, but in the future I would go with another brand over visuals, because as nice as Gigabyte's features can be, also in the visual department, I do not want to face their support again with anything.
 
I would unmix and go B-die only; it can get you up to 20% in some games if tuned vs 3000 XMP.
 
You get free performance just from having four ranks; it can end up pretty even.
 
Maybe; depends on what die you get. If the 3000 XMP kit is Hynix C/D or Micron E/B then it will be close; if they are Hynix AFR, Samsung C, E etc. then no way ;) Once tuned, that is!
 

Most memory controllers on Ryzen 5000 only support 3600 MT/s 1:1 with the Infinity Fabric.

The Hynix dies on my 3000 MT/s kit are a pain to set up for anything other than XMP, and yes I tried, but lately I don't have the time, plus electricity is expensive, so I prefer to use my time elsewhere.

Plus I need more than 16 GB of memory.
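The 1:1 part is simple arithmetic: DDR4 transfers twice per clock, so the real memory clock (MCLK) is half the MT/s rating, and in 1:1 mode FCLK must match it. A minimal sketch (my own illustration, not from the post):

```python
# DDR4 is double data rate, so MCLK in MHz = (MT/s rating) / 2.
# On Zen 3, 1:1 mode means FCLK = UCLK = MCLK; ~1800 MHz FCLK is the common limit.
def fclk_for_1to1(mt_per_s):
    return mt_per_s / 2

print(fclk_for_1to1(3600))  # 1800.0 -> the typical Ryzen 5000 sweet spot
print(fclk_for_1to1(4000))  # 2000.0 -> needs an unusually strong IMC/IOD
```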
 
The 1usmus TPU article kinda says otherwise on single vs dual rank, and it was done all the way back in 2019.

You can pick just about any benchmark in there; some things like latency are obviously impacted by just clock speeds, but as far as gaming is concerned you get quite the boost from adding more ranks.

Even if you stay with just Samsung, 3200 CL14 dual rank outperforms everything above it.
I'm not 100% sure what "multi rank" is vs dual rank; I think it may mean mixing a single and a dual rank stick. For this discussion, focus on the SR and DR results.




Cropping the relevant ones from that for a side-by-side:
that's CL12 vs CL14, and the CL14 kit wins thanks to those ranks.



Yes, tuned RAM is faster - it's fantastic.

But the advantage of dual rank definitely makes up a lot of that ground.
3600 C14 SR: 204 FPS


There are oddities and outliers since these weren't tested as extensively as a normal TPU review; I'm just sharing the knowledge that dual rank memory (or 4 sticks), while it can hurt max clock speeds, genuinely *is* a big performance boost on AM4.
These tests are on Ryzen 2000; the gains got bigger on Zen 3 according to a lot of other reviews out there.


GN covered it as well, with 10% gains on a 5600X.

They ran their review setup (4x8 3200 C14) as well as a bunch of other setups; I'll paste the relevant ones near each other since the visuals are a mess otherwise.

Their review setup, 4x8 vs 2x8:
a 15 FPS drop from removing two sticks, no other changes.



3800 C18 (yay, it's me):
an 11 FPS drop.



Clearly, the tuned RAM with great timings is faster - but an average-speed kit with four ranks is going to beat a tuned kit with two ranks.


They show this in their more confusing all-together graph: looking at the bottom results, they're all 2-stick/2-rank memory setups.

The worst-performing setup (sigh, closest to mine) beat the best-performing one, as long as it had the extra memory ranks.
 
I don't disagree with you; especially if running XMP, DR will be significantly faster in most cases. But if you mix kits and one kit is rather poor, a tuned set of B-die will beat it easily, even single rank vs a poor kit of DR. Things like running tRFC at 240 vs 600, tRC at 40 vs 65, tFAW at 16 vs 28 etc. have a very high impact on performance. The tRFC portion alone can account for 5% performance, and that is usually what DR vs SR achieves.
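To put those timing numbers in perspective: BIOS timings are in memory-clock cycles, so the same cycle count means different real time at different speeds. A rough conversion sketch (my own, not from the post):

```python
# Convert a DRAM timing from memory-clock cycles to nanoseconds.
# One MCLK cycle lasts 2000 / (MT/s) ns, since MCLK in MHz = MT/s / 2.
def cycles_to_ns(cycles, mt_per_s):
    return cycles * 2000 / mt_per_s

# tRFC 240 vs 600 cycles at DDR4-3600:
print(f"{cycles_to_ns(240, 3600):.0f} ns")  # tuned B-die territory
print(f"{cycles_to_ns(600, 3600):.0f} ns")  # loose, XMP-style tRFC
```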
 
Mixing kits is bad, agreed.
I just mean that when buying, 4x8 3600 C18 cheapo RAM is going to work better than expensive 2x8 3200 C14.

You're better off filling those ranks to get 32 GB, and tuning that.

I got my 64 GB of 3600 C18 cheaper than I could have got 16 GB of 3200 C14.
 
(attached benchmark screenshot)

Can someone explain this?
 
They are. No one disagrees with that.

I meant: why does the 4090 perform the same as the 3090 Ti when paired with the 3D?
 
CPU bottleneck. The 4090 requires a fast CPU at 1080p (looking at these numbers I assume this is 1080p).
It's sometimes CPU bottlenecked even at 1440p.
 
Yes, the magic cache doesn't help in CS:GO.
The 3D performs similarly to the normal 5000-series CPUs.
 
Less than 800 fps CS:GO, literally unplayable!
 
Once you get into hundreds of frames per second, what matters most is instruction and cache latency in nanoseconds and VRAM latency/bandwidth; cache size will not help beyond a certain point.
You probably have 50-100-200 instructions repeating at that point; if you increase the instruction throughput of one of them, another becomes the bottleneck.

If you increase clock speed, they still need the same number of cycles to complete, but more cycles are completed per second = higher FPS.
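That last point can be put as a toy model: if a frame needs a roughly fixed number of CPU cycles, FPS scales with clock, while a bigger cache instead removes memory-stall cycles from each frame. A sketch with made-up numbers:

```python
# Toy model: FPS = clock rate / cycles needed per frame. All numbers are illustrative.
def fps(clock_hz, cycles_per_frame):
    return clock_hz / cycles_per_frame

baseline = fps(4.5e9, 15e6)      # 4.5 GHz, 15M cycles per frame
higher_clock = fps(5.0e9, 15e6)  # same work per frame, more cycles per second
bigger_cache = fps(4.5e9, 12e6)  # fewer stall cycles per frame at the same clock
print(baseline, higher_clock, bigger_cache)
```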
 
Hi guys, I own a Ryzen 3600, an MSI B450 Tomahawk Max and an Nvidia 3060 Ti. I play games at 1440p.
Will I see significant performance gains if I upgrade to the 5800X3D, or is it just not worth it?
Has anyone here made the same upgrade?
Cheers
 

Depends - which games do you play? Because there should be a difference of about 11% in FPS on average going from a 3600X to a normal 5800X.

What is the rest of your system?

You will still miss out on PCIe 4.0 with the B450 chipset, even though it's a fine board.
 