# Shadow Of The Tomb Raider - CPU Performance and general game benchmark discussions



## Felix123BU (Apr 3, 2021)

Hello World!

Since Shadow of the Tomb Raider is a particularly CPU-intensive game (or, more precisely, a game where CPU performance has a major impact on FPS), it would be interesting to see how our setups handle it.

For that, let's settle on very specific in-game settings:

- Fullscreen
- Exclusive Fullscreen
- DirectX 12
- DLSS OFF
- Vsync OFF
- Resolution 1920x1080
- Anti-Aliasing OFF
- Graphics Settings: Lowest profile (please leave it at Lowest without any changes for the purpose of this test)

We shall gather and centralize the scores (CPU and GPU) once a month and try to find a correlation between them.

Here is the first one.


----------



## Taraquin (Apr 4, 2021)

5600X @ 4.8 GHz, 4000 CL16 tweaked RAM. I set exclusive fullscreen but it keeps resetting :/


----------



## Felix123BU (Apr 4, 2021)

Speaking of CPU-centric benchmarks in this game, I noticed that Resizable BAR on my system (5800X and RX 6800 XT) gives no performance benefit at best, and on average actually a very slight reduction in FPS: out of a run of 5, an average of minus 2 FPS with Resizable BAR enabled. That could be run-to-run variance, but it's still a minute reduction.
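The run-to-run variance question above can be made concrete. Here's a minimal sketch (with made-up FPS numbers, not the actual runs) of averaging five runs per configuration and checking whether a 2 FPS delta clears the noise floor:

```python
# Hypothetical sketch: judging whether a small FPS delta (e.g. -2 FPS with
# Resizable BAR) exceeds run-to-run variance. All numbers are illustrative.
from statistics import mean, stdev

rebar_off = [232, 230, 231, 229, 233]  # avg FPS per run, ReBAR disabled
rebar_on = [230, 228, 229, 228, 230]   # avg FPS per run, ReBAR enabled

delta = mean(rebar_on) - mean(rebar_off)        # -2.0 FPS
noise = max(stdev(rebar_off), stdev(rebar_on))  # worst-case run-to-run stdev

# Crude rule of thumb: a delta within ~2 standard deviations of the
# run-to-run scatter may just be variance rather than a real regression.
significant = abs(delta) > 2 * noise
print(f"delta = {delta:+.1f} FPS, stdev = {noise:.2f}, significant: {significant}")
```

With these sample numbers, the 2 FPS drop does not clear the 2-sigma bar, which matches the "could be run-to-run variance" reading.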



Taraquin said:


> 5600X@4.8GHz 4000cl16 tweaked ram  I set exclusive fullscreen but it keeps resetting :/
> 
> View attachment 195219


There is sometimes some wonkiness when changing settings in-game, as in some settings needing a full game restart to apply correctly. I noticed it when switching between Lowest and Highest: I would get the same score on Highest as on Lowest, which is only possible if the setting did not really apply.


----------



## QuietBob (Apr 13, 2021)

Here's mine, with 4.5 GHz on all cores. IF and RAM running at 1866 MHz with 16-18-18-38-58 timings (secondary timings untouched). I'm using the demo; can somebody compare their results with the full version?


----------



## Felix123BU (Apr 13, 2021)

QuietBob said:


> Here's mine, with 4.5 on all cores. IF and RAM running 1866 at 16-18-18-38-58 (secondary timings untouched). Using the demo, can somebody compare their results with the full version?
> 
> View attachment 196597


Those are actually quite good CPU scores for a 3300X, especially keeping in mind this game loves moar cores. I also wonder if anyone else with a 3300X wants to post their results here; we get bombarded with too many last-gen CPUs and too few "older" ones, and this type of test puts generations into perspective.


----------



## QuietBob (Apr 13, 2021)

Felix123BU said:


> to few by "older" ones, this type of test brings generations into perspective


I've got a couple of older PCs, but they're all running Windows 7 or XP, so obviously no DX12. DX11 would completely skew the results, so here's hoping.


----------



## WHDS (Apr 14, 2021)

Deleted my old post because I forgot to turn off AA
4.525 all core, 3666 ram 16-16-19-16-36 plus a few secondary timings, +150ish gpu core


----------



## Felix123BU (Apr 14, 2021)

WHDS said:


> Deleted my old post because I forgot to turn off AA
> 4.525 all core, 3666 ram 16-16-19-16-36 plus a few secondary timings, +150ish gpu core
> View attachment 196669


Thanks, was just about to mention the TAA


----------



## cRs (Apr 18, 2021)

All stock, 16 GB 3600 MHz CL18


----------



## Deleted member 202104 (Apr 18, 2021)

5800x, 95w Eco Mode, 3600 CL18


----------



## Deleted member 202104 (Apr 24, 2021)

Testing some new RAM (and the latest GPU drivers)

5800x, 95w Eco Mode, 32GB (4x8) 3200 14-14-14-34


----------



## Felix123BU (Apr 24, 2021)

weekendgeek said:


> Testing some new RAM (and the latest GPU drivers)
> 
> 5800x, 95w Eco Mode, 32GB (4x8) 3200 14-14-14-34
> 
> View attachment 197925


Interesting. I was toying with the idea of lowering the RAM frequency (from 3733 currently) while also tightening the timings as much as possible; this game really loves RAM with fast timings and low latency.


----------



## Deleted member 202104 (Apr 24, 2021)

Felix123BU said:


> Interesting, I was playing with this thought in my head to lower the RAM frequency (from 3733 currently) and also lower the timings as much as possible, this game really loves ram with fast timings and low latencies



I've been trying to decide which kit I wanted to run, but I think this test made the decision. These are standard XMP timings with GDM disabled. I'd like to see what I could get the timings to at 3600.

Also just put the CPU back to stock (PBO disabled and the standard 105 W).


----------



## Felix123BU (Apr 24, 2021)

weekendgeek said:


> I've been trying to decide which kit I wanted to run, but I think test made the decision.  These are standard XMP timings with GDM disabled.  I'd like to see what I could get the timings to @ 3600.
> 
> Also just put the CPU back to stock (PBO Disabled and standard 105w)
> 
> View attachment 197926


For some reason I seem to get better CPU results in SOTTR with an all-core 4.7 GHz on the 5800X than with PBO boosting to 5 GHz, but it's very close. One thing I'm not really clear about: I get pretty poor CPU Render scores for the settings I run, and that bugs me; I have to find out why.

304 vs your 417, and that with 3733 dual-rank CL16 tuned timings...


----------



## Deleted member 202104 (Apr 24, 2021)

Felix123BU said:


> I seem to get better CPU results in SOTR with an all core of 4.7ghz on the 5800X vs PBO to 5ghz boost for some reason, but its very close. One thing that I am not really clear about, I get pretty crappy CPU Render scores for the settings I run, and that bugs me, have to find out why
> 
> 304 vs your 417, that with 3733 dual rank 16CL tuned timings...



Yeah, that doesn't make any sense.  That's a really large difference for such similar hardware.  Has me curious too.


----------



## Felix123BU (Apr 24, 2021)

This is a weird game engine. So I played a bit with RAM timings, especially tRFC, which I think was a bit too tight; I loosened it and tested again:





Initial result on the left, today's result on the right. What is really weird: I got a 40 FPS increase in Average CPU Render but a 40 FPS decrease in the GPU score, with the net result being the same overall FPS.
Granted, it's on different drivers, and there seems to be a slight performance edge for the previous AMD driver, 21.3.1, but still, this is odd.


----------



## Space Lynx (Apr 24, 2021)

Felix123BU said:


> This is a weird game engine, so, played a bit with RAM timings, especially Trfc which I think was a bit too tight, loosened it, tested again:
> 
> View attachment 197931
> 
> ...



yeah, the 21.3.1 AMD drivers are insanely good for FPS... I'm considering never upgrading from them. AMD seemed to nail something there, and I doubt they even know what they did.


----------



## Felix123BU (Apr 24, 2021)

lynx29 said:


> yeah the 21.3.1 AMD drivers are insanely good with FPS boost... I'm considering never upgrading them from that. AMD seemed to nail something right on them, and I doubt they even know what they did.


Me too. Even though I like the Vivid Gaming thing in the latest driver, 21.3.1 seems better for sustained performance for some reason. Really good driver.


----------



## Space Lynx (Apr 24, 2021)

Felix123BU said:


> Me too, even though I like the Vivid Gaming thing in the last driver, 21.3.1 seems better for sustained performance for some reason. Really good driver.



the vivid thing is terrible. look at techpowerup after you turn on vivid. the gray turns to pink... that's not right...

just calibrate your monitor better to begin with and you don't need vivid.


----------



## Felix123BU (Apr 24, 2021)

lynx29 said:


> the vivid thing is terrible. look at techpowerup after you turn on vivid. the gray turns to pink... that's not right...
> 
> just calibrate your monitor better to begin with and you don't need vivid.


Might be wrong for others, but on mine it really does look better; it doesn't really change colors, just makes things a bit more... "vivid". And I am speaking only about how games look, not Windows in general.

I am trying some more RAM timings to see how the CPU stats in SOTTR change. It's interesting in a way, though sort of unpredictable.


----------



## Pixrazor (Apr 24, 2021)

here it is


----------



## Felix123BU (Apr 24, 2021)

Pixrazor said:


> here it is
>
> View attachment 197933


awwww, the first HBM powered GPU


----------



## Space Lynx (Apr 24, 2021)

Felix123BU said:


> This is a weird game engine, so, played a bit with RAM timings, especially Trfc which I think was a bit too tight, loosened it, tested again:
> 
> View attachment 197931
> 
> ...



how are you getting so much of a higher score than me?


----------



## W1zzard (Apr 24, 2021)

Felix123BU said:


> Since Shadow Of The Tomb Raider is a particularly CPU intensive game, or better said, CPU performance has a major impact on FPS, it would be interesting to see how our setups handle the game.


How's the difference between the benchmark and actual gameplay?


----------



## Felix123BU (Apr 24, 2021)

lynx29 said:


> how are you getting so much of a higher score than me?


A couple of reasons. The first is the 6800 XT I have vs your 6800; mine is running at 2.7 GHz with raised power limits (400 W basically, water cooled). The second, and I know this for sure since I had a 5600X before the 5800X, is that I run a 5800X, and this game loooooooooooooves extra cores.

I was speaking to a mate who has the same setup as me except with a 5900X; he gets an extra 10-15 FPS at 1080p, and the 5950X gets another 10 on top of that. It's weird that every couple of extra cores keeps adding FPS.



W1zzard said:


> How's the difference between the benchmark and actual gameplay?


If you mean FPS only, I wouldn't know, since I play it at 3440x1440; I only test at 1080p in this thread to see the CPU impact on FPS.
I can speak to the difference at 3440x1440: the benchmark gives somewhat higher numbers than normal gameplay, but it fluctuates depending on the area, so I'd assume the same is true at 1080p.


----------



## WHDS (Apr 24, 2021)

lynx29 said:


> how are you getting so much of a higher score than me?


Looks like your graphics settings are on custom instead of lowest mate



W1zzard said:


> How's the difference between the benchmark and actual gameplay?


It's a good indication of in game performance, but some of the more populated areas can get a lot more taxing on the cpu than the built in benchmark


----------



## Felix123BU (Apr 24, 2021)

lynx29 said:


> how are you getting so much of a higher score than me?


Can you please rerun and post it with the Lowest settings so we all have the same settings for comparison? Thx mate



WHDS said:


> Looks like your graphics settings are on custom instead of lowest mate
> 
> 
> It's a good indication of in game performance, but some of the more populated areas can get a lot more taxing on the cpu than the built in benchmark


The biggest CPU hog, area-wise, is the big city near the end, but I'm guessing it's also more graphically taxing, since most areas are very small in comparison.

Another thing about this game that interests me is how GPU architecture impacts the CPU stats in the benchmark. I'm not 100% certain, but I noticed in some other threads that, generally speaking, Nvidia GPUs tend to get better CPU scores on similar setups. I was hoping we'd get more Nvidia-powered results to test this theory.



lynx29 said:


> how are you getting so much of a higher score than me?


Also, you might ask @Taraquin (see the beginning of the thread); he posted a result with a 5600X and gets a much higher CPU score than most, actually excellent for a 5600X. He might have an answer.


----------



## Space Lynx (Apr 24, 2021)

Felix123BU said:


> Can you please rerun an post it with Lowest setting so we all have the same settings for comparison? Thx mate
> 
> 
> The biggest CPU hog as in areas is the big city at the end, but I am guessing its also more graphically taxing also since most areas are very small in comparison.
> ...


----------



## Taraquin (Apr 25, 2021)

lynx29 said:


>


Most of the improvements come from RAM tuning. You can post a screenshot of ZenTimings and we can try to help you, if you want.


----------



## Space Lynx (Apr 25, 2021)

Taraquin said:


> Most of the improvements are based on ram tuning. You can post a screenshot of zentimings and we can try to help you if you want?



I mean, I'm only 10 fps away from his mega overclocked xt edition, so I'm good... lol


----------



## Taraquin (Apr 25, 2021)

lynx29 said:


> I mean, I'm only 10 fps away from his mega overclocked xt edition, so I'm good... lol


If you look at my score, I get 29 FPS more than you (and my RAM has a poor bin and poor cooling). With some tweaking that is possible for you as well, but if you don't want an extra free 10-15% performance, that's up to you.


----------



## Space Lynx (Apr 25, 2021)

Taraquin said:


> If you look at my score I get 29fps above you (and my ram has a poor bin and cooling), with some tweaking that is possible for you as well, but if you don't want an extra free 10-15% performance it's up to you



I'm willing to give it a shot...

this is what i have now, stable. if you can take me to 4000 CL16-17-17-17 it might be worth a try; i don't think these sticks can do 4000 16-16-16-16 though. they are B-die, 2x16 GB.


----------



## jesdals (Apr 25, 2021)

It did not look good on my Eyefinity setup, but here's my first try.


----------



## Space Lynx (Apr 25, 2021)

jesdals said:


> Did not look good on my Eyefinity setup but heres my first try
> View attachment 198085



I'm only 30 FPS slower than you on the average FPS score, and my PC was about half the cost of yours... assuming we both paid MSRP. Not bad on my end, though if I could afford what you have I would have done the same lol

edit: do you have SAM on? I do


----------



## jesdals (Apr 25, 2021)

I have SAM on, but no performance-boosting settings in the driver setup, so this is a baseline 6900 XT. I am not going to tinker with it today, but perhaps next week.


----------



## Space Lynx (Apr 25, 2021)

jesdals said:


> I have SAM on but not any settings giving boost in the driver setup so this is baseline 6900XT - I am not going to tinker with it today but perhaps next week
> View attachment 198088
> View attachment 198089



I've tested with boost off and on, and it was only a 1-2 FPS difference for this particular "lowest settings" test. It may be that boost just doesn't do much in the benchmark, since the benchmark is mostly slow-moving camera work and boost really only helps in fast-paced action... not sure. I find it useless so far; maybe it helps in actual gameplay, which I haven't tested yet.


----------



## Taraquin (Apr 25, 2021)

lynx29 said:


> I'm willing to give it a shot...
> 
> this is what i have now, stable.  if you can take me to 4000 cas 16-17-17-17 it might be worth a try, i don't think these sticks can do 4000 16-16-16-16 though. they are b-die  2x16gb.


How high is your Infinity Fabric stable? Are these Samsung B-die? Your tRFC is really lax. If you can do 2000 Infinity Fabric, try the following: copy my timings, except keep your current values for ProcODT and everything below it, and copy the voltage settings. I have a poorly binned B-die kit, so you might get it to work.


----------



## Space Lynx (Apr 25, 2021)

Taraquin said:


> How high is your infinity fabric stable? These are samsung B-die? Your tRFC is really lax. If you can do 2000 infinity fabric try the following: Just try copying my timings except use your current values on ProcODT and all below. Copy the voltage settings. I have a poor binned B-die kit som you might get it to work
> View attachment 198114



i always get confused about which SoC voltage to use for RAM overclocking, chipset or CPU. my mobo has both listed in the same area.

yeah, 2000 FCLK works just fine on the X570 Tomahawk. it's a great board.


----------



## mrthanhnguyen (Apr 25, 2021)




----------



## Space Lynx (Apr 25, 2021)

mrthanhnguyen said:


> View attachment 198128



those are some amazing system specs under your name there bud... wow lol  5.5 ghz, do you run that 24/7????


----------



## mrthanhnguyen (Apr 25, 2021)

lynx29 said:


> those are some amazing system specs under your name there bud... wow lol  5.5 ghz, do you run that 24/7????


Yes


----------



## Space Lynx (Apr 25, 2021)

mrthanhnguyen said:


> Yes



Please do me a favor? Trash that monitor, buy a bigger desk, and get an LG CX 48" OLED, or the new 48" C1 OLED that came out a few weeks ago, either one.

You can obviously afford it. Space it so it's just right for your eyes with a bigger desk, if you have room for a bigger desk anyway...

that would truly be the ultimate gaming setup then lol

edit: don't trash it... lol keep it as a backup


----------



## toilet pepper (Apr 26, 2021)

Here's mine with a heavily gimped 5800x. RAM at 3600CL16.






I cranked everything up, and for some reason the difference ain't that big.


----------



## Taraquin (Apr 26, 2021)

lynx29 said:


> i always get confused which soc voltage to use, chipset or cpu for ram overclocking. my mobo has both listed in the same area.
> 
> yeah 2000 flck works just fine on x570 tomahawk. its a great board.


Not sure, I've only got one SoC voltage on my setup.


toilet pepper said:


> Here's mine with a heavily gimped 5800x. RAM at 3600CL16.
> 
> View attachment 198149
> 
> ...


Done any RAM tuning? 250+ FPS should be doable with a bit of tuning.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Not sure, only got one soc voltage at mye setup.
> 
> Done any ram tuning? 250fps+ should be doable with a bit of tuning


It depends on how much tuning your RAM kit can take; I speak from experience. My kit is natively 3333 MHz CL16 (very loose default timings), and I tuned it to 3733 MHz 1:1 CL16 with timings as tight as I could get them (DRAM Calculator "Fast", then manual tightening on top). That got me an extra 30-35 FPS in this game at 1080p. So I'm at 230 FPS, and for the next 20 FPS up to 250 I'd need better RAM; no extra tuning will get me there since this RAM is already at its max.

But yeah, if the RAM kit is capable, tuning it can give you a looot of extra performance.


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> It depends on how much tuning your ram kit can take, I speak from experience, my ram kit is natively 3333Mhz CL16 (very loose default timings), I tuned it to 3733 Mhz 1:1:1 CL16 as tight as I could get timings (Dram Calculator Fast, then manually tightening some more). That got me an extra 30-35 fps in this game at 1080, so from 230fps, for the next 20fps to 250fps I need better ram, no extra tuning will give me that since the ram is at its max already
> 
> But yeah, if the ram kit is capable, tuning it can give you a looot of extra performance


I don't know what RAM you've got, but 3600 CL16 usually means B-die or highly binned Micron E/B or Hynix C/D. All of those could do a lot more, but his Infinity Fabric might not support more than 3800 if he's unlucky. Even my Micron Rev. E 3000 CL15 does 3733 CL15 with much tighter timings than DRAM Calculator's "Fast" preset; that got me 30 FPS. The B-die was better and got me over 40 FPS after tuning.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Don't know what ram you got, but 3600cl16 usually means B-die or high binned Micron E/B or Hynix C/D. All those could do a lot more, but his infinity fabric might not support more than 3800 if he's unlucky. Even my Micron rev E 3000cl15 does 3733cl15 with much tighter timings than fast at dram calc. That got me 30fps. The B-die was better and got me over 40fps after tuning


This kit: Corsair Vengeance RGB 32 GB (2 x 16 GB) DDR4 3333 MHz C16 XMP 2.0 (from Amazon UK).

In theory it's Samsung B-die, dual rank; at least that's what Thaiphoon reports. I did a lot of testing to find its limits, trying combos of max frequency and minimum latencies, and the best I can get is either 3266 CL14 tuned or 3733 CL16 tuned (the RAM frequency might go higher, but anything above IF 1866 results in no boot). My motherboard is excellent for RAM OC, so I'm 95% sure the sticks for whatever reason cannot go higher than they do now, though I could be very wrong.

The difference in Shadow of the Tomb Raider between 3266 CL14 and 3733 CL16 is around 17 FPS at 1080p Lowest.
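A side note on those two profiles: standard DDR arithmetic (a sketch, not data from the thread) shows 3266 CL14 and 3733 CL16 have almost identical first-word latency in nanoseconds, so the 17 FPS gap likely comes from the extra bandwidth and the higher Infinity Fabric clock rather than from CAS latency:

```python
# First-word (CAS) latency in ns: CL cycles divided by the memory clock.
# DDR4 transfers twice per clock, so DDR4-3733 runs a 3733/2 MHz memory clock.
def cas_latency_ns(ddr_rate_mts: float, cl: int) -> float:
    mem_clock_mhz = ddr_rate_mts / 2
    return cl / mem_clock_mhz * 1000  # cycles / MHz -> ns

lat_low = cas_latency_ns(3266, 14)   # ~8.57 ns
lat_high = cas_latency_ns(3733, 16)  # ~8.57 ns
print(lat_low, lat_high)
```

The two profiles land within about 0.001 ns of each other, which is why the faster profile wins on FPS: same latency, more throughput.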


----------



## bxcounter (Apr 26, 2021)

(9900K 5.0/4.6, HT off) + 1080 Ti stock/OC + DDR4 3800 C14 / 4100 C17


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> This: Corsair Vengeance RGB 32 GB (2 x 16 GB) DDR4 3333 MHz C16 XMP 2.0 Enthusiast RGB LED Illuminated Memory Kit - Black: Amazon.co.uk: Computers & Accessories
> 
> In theory its Samsung B-die, dual rank, at least that's what Tayphoon reports. I did a lot of testing to find its limits, tried combos of max frequency and min latencies, the best I can get it either 3266 CL14 tuned, or 3733 CL16 tuned (ram frequency might go higher, but anything about IF 1866 results in no boot) . My motherboard is excellent for ram oc, I am 95% sure the sticks for whatever reason can not go higher than what they do now, even though I could be very wrong
> 
> The difference in Shadow Of The Tomb Raider between 3266 CL14 and 3733 CL16 is around 17 FPS at 1080 Lowest.


What are the primary XMP timings? 16-16-16 tends to be B-die, but if the second and third timings are 18 they are very rarely B-die. Thaiphoon sometimes reads wrong; I have heard of Rev. E being read as B-die.

Edit: Googled a bit and found a similar kit at 3333 CL16-18-18-35. If that is what you have, I bet they are Micron Rev. B SR or Rev. E DR. They are both okay, but they struggle with low tRCDRD, which could explain why getting below 17 is difficult.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> What are the primary timings xmp? 16-16-16 tends to be B-die, but if timing 2 and 3 are 18 they are very rarely B-die. Thaiphoon sometimes reads wrong, I have heard of rev E being read as B-die.


16-17-16. I can set 16-16-16, but it's not really 100% stable. If you've got any advice on how to tighten further, I'm ready to listen.


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> 16-17-16, I can put 16-16-16, but its not really 100% stable. If you got some advice on how to tighten even more, I am ready to listen


Are the XMP timings 16-17-16? That is very unusual. What are you able to run tRFC at? Post your ZenTimings and I can try to help.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Are the xmp-timings 16-17-16? That is very unusual. What are you able to run tRFC at? Post your zentimings and I can try to help


"Are the xmp-timings 16-17-16? That is very unusual": yup, I know these timings are unusual, which makes me think they are B-dies that were deemed not the best, hence the looser applied timings. tRFC I can go as low as 290. Here are the ZenTimings as well:


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> "Are the xmp-timings 16-17-16? That is very unusual", yup, I know, these timings are unusual, which would make me think they are some B-dies that where deemed not the best, hence some looser timings where applied. Trfc I can go as low as 290. Here are also the zen timings:
> 
> View attachment 198164


That is B-die; nothing else can run tRFC that low at 3733. What AGESA are you on, 1.2.0.1? Have you tried running IF above 1866? Your RAM voltage is very low; try raising it, since tRCDRD scales positively with more voltage, and tRFC also likes higher voltage.

Try the following: 1.45 V DIMM, 900 mV VDDP, 940 mV VDDG CCD, 1020 mV VDDG IOD, 1110 mV SoC, 2T instead of GDM and 1T, CL15, tRCD 16, tRP 15, tRAS 31, tRC 46, tRFC 276, tRTP 6, tRDWR 9 or 10. Disable spread spectrum.

If that works and you want to go even tighter, try tRCDRD 15, tRP 14, tRC 44, tRFC 264.

If you don't want to go that much higher on voltage, set it to 1.4 V and do 16-16-16-32-48 with tRFC 288.
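For anyone translating those tRFC suggestions to a different DDR rate: tRFC here is given in memory-clock cycles, so its real duration depends on frequency. A small conversion sketch (the ~350 ns figure is the typical JEDEC spec for 8 Gb DDR4 dies, quoted only for rough comparison):

```python
# Convert tRFC from clock cycles to nanoseconds so settings can be compared
# across DDR rates. tRFC 276 at DDR4-3733 is roughly 148 ns, well under the
# ~350 ns a typical 8 Gb DDR4 die is specced for, i.e. tight B-die territory.
def trfc_ns(trfc_cycles: int, ddr_rate_mts: float) -> float:
    mem_clock_mhz = ddr_rate_mts / 2
    return trfc_cycles / mem_clock_mhz * 1000

print(trfc_ns(276, 3733))  # ~147.9 ns
print(trfc_ns(264, 3733))  # ~141.4 ns (the even tighter suggestion)
```

The same function shows why a tRFC value that is stable at one frequency may not be at another: the cycle count has to grow with the clock to keep the same real refresh time.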


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> That is B-die, no other can run tRFC that low at 3733. What agesa are you on? 1.2.0.1? Tried running IF above 1866? Your ram voltage is very low, try upping voltage as tRCDRD scales positively with more voltage, tRFC also likes higher voltage.
> 
> Try the following: 1.45Vdimm, 900mv vddp, 940mv vddg ccd, 1020 vddg iod, 1110mv soc, 2t instead of gdm and 1t, CL15, tRCD 16, tRP 15, tRAS 31, tRC 46, tRFC 276, tRTP 6, tRDWR 9 or 10. Disable spread spectrum.
> 
> ...


Thanks for trying to help; I'll probably try it later today when time allows. AGESA is 1.2.0.1.
Yes, I've tried running IF above 1866 and it won't work; I can't run it without some indecent voltages on the CPU, and even then it's not even close to stable (though I have not tried IF alone without the memory clock in sync; it might be that the CPU can run higher IF but the RAM can't, so I will test that). Spread spectrum is always disabled.
"2T instead of GDM and 1T", hmm, I did not think of trying that, but it's worth a shot.
There is one weird thing that happens with my RAM: if I raise the voltage over 1.4 V it becomes unstable, and I have to keep it under 1.4 for 100% stability. That is really weird, though I've read some people have the same "issue".

The only reason I am trying to tune everything just a bit more is for this test exclusively; it bugs me that I can't get over 230 FPS at 1080p Lowest, even though I don't play anything at 1080p.

Had 10 minutes free: GDM off now, and 1T (with GDM off it only goes to Auto, and it stuck with 1T). Tested again, and behold... 230 again.

Will play around some more later.

tRCD 16 finally brought me over 230 FPS, to 237, though the CPU scores remain basically identical.

It's not 100% stable at the current voltage, though it is game stable.


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> Thanks for trying to help  , will probably try it later today when time allows. Agesa is 1.2.0.1
> Yes, tried running IF above 1866, wont work, cant run it without some indecent voltages on the CPU  and even then its not even close to stable (though have not tried only IF without the Mem clock in sync, might be that the CPU can run higher IF but the ram cant, will test that) . Spread spectrum is always disabled.
> "2t instead of gdm and 1t" hmm, did not think of trying that, but its worth a try
> There is one weird thing that happens with my ram, namely, if I up the voltage over 1.4, it becomes unstable, I have to keep it under 1.4 for 100% stability, that is really weird, though I read some people have the same "issue".
> ...


Try 1.39 V; that will probably stabilize tRCDRD. Then try tRP 15, tRC 47, tRFC 282 and tRTP 6.


----------



## toilet pepper (Apr 26, 2021)

Taraquin said:


> Not sure, only got one soc voltage at mye setup.
> 
> Done any ram tuning? 250fps+ should be doable with a bit of tuning



Here's what I have currently. They are B-dies, and since I just upgraded from a Ryzen 3600 I know the RAM and mobo are perfectly capable of going higher.

For some reason the 5800X doesn't want to go any higher. I tried DRAM Calculator and it wouldn't boot, so I just plugged this in and called it a day.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Try 1.39V, that will probably stabilize tRCDRD. Try tRP 15, tRC 47 and tRFC 282, tRTP 6.


The voltage weirdness continues: at 1.37 V I could run the benchmark, but at 1.39 V the game crashed completely within seconds. I put it back to 1.37 and the game runs fine.
I would go higher with the voltage for testing purposes, except, as I said, the higher I go the more unstable it gets, which is weird. I tried 1.4, then 1.41, and it got worse each time.
What about MEM VTT voltage?


----------



## Taraquin (Apr 26, 2021)

toilet pepper said:


> Here's what I have currently. They are bdies and I just upgraded from a Ryzen 3600 and know the RAM and mobo are perfectly capable of going higher.
> 
> For some reason the 5800x doesnt want to go any higher. I tried DRAM calculator and it wouldnt boot. I just plugged this in and called it a day.
> 
> ...


Try tRRDS 4, tRRDL 6, tFAW 16, tRC 48, tWR 12, tRTP 6. If that works, you can try tRP 15, tRAS 31, tRC 47 and tRFC 282.



Felix123BU said:


> The voltage weirdness continues, at 1.37 I could run the benchmark, at 1.39 the game crashed completely within seconds   put it back to 1.37, game runs fine.
> And I would go higher with voltage for testing purposes, only that as said if I go higher, the higher I go, the more unstable it gets, weird. Tried 1.4, then 1.41, got worse each time
> What about MEM VTT voltage?


I don't know about MEM VTT. Is the RGB on the RAM active? If it is and you don't care about it, try disabling it; RGB raises the RAM temperature, and turning it off might make higher voltage possible.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Try trrds 4, trrdl 6, tfaw 16, trc 48, twr 12, trtp 6. If that works you can try trp 15, tras 31, trc 47 and trfc 282.
> 
> 
> Dunno about mem vtt. Are rgb on ram active? If it is and you dont care about it, try disabling it as it increases temp on ram, this might make it possible to run higher voltage.


As far as temps go, they are pretty cool; during normal operation the RAM never goes above 55°C, so that should not be an issue. Anyhow, I fixed my "issue" of not seeing anything over 230 FPS, so for now the itch is scratched.

Thanks a lot for the advice; I will probably revisit the RAM OC topic one day.

And speaking of CPU performance in SOTTR: whatever I did, the CPU stats stayed basically the same, and when there was an FPS increase it was the GPU section that went up. I wonder if that's down to a difference in architectures: you have an AMD + Nvidia combo, I have an AMD + AMD combo, so the game engine might behave differently.


----------



## rtwjunkie (Apr 26, 2021)

I thought we already had a very multipage SOTTR benchmark thread?


----------



## Deleted member 202104 (Apr 26, 2021)

rtwjunkie said:


> I thought we already had a very multipage SOTTR benchmark thread?



I believe this new thread was set up to focus on the CPU (1080p Lowest), vs the overall system (1080p/1440p Highest) in the other thread.

(There was a rather aggressive user who popped into the other thread disregarding the parameters for posting results, prompting this one.)


----------



## rtwjunkie (Apr 26, 2021)

weekendgeek said:


> I believe this new thread was set up to focus on CPU (1080 Lowest) vs overall system (1080/1440 Highest) in the other thread.
> 
> (There was a rather aggressive user who had popped into the other thread disregarding the parameters for posting results, prompting this one.)


I'm confused why a benchmark would focus on settings no one actually uses to play the game. The best purpose of a benchmark is to test, at least roughly, how a system plays the associated game, including visuals.


----------



## Felix123BU (Apr 26, 2021)

rtwjunkie said:


> I’m confused why a benchmark would focus on how no one actually plays the game? The best purpose of a BM is to test somewhat how a system plays in the associated game, including visuals.


Because some people think one thing is good and others think something else is. We had the same discussion in the other Shadow of the Tomb Raider benchmark thread, where 1080p Highest was the norm; some people were not pleased and wanted a benchmark at Lowest, hence I created this opposite thread so we can compare and discuss settings that let the CPU shine instead of the GPU. There will always be people who dislike one or the other; in the end, not everyone can be pleased.

And to answer your question of why: because it's fun to test the same thing from a different perspective.



weekendgeek said:


> I believe this new thread was set up to focus on CPU (1080 Lowest) vs overall system (1080/1440 Highest) in the other thread.
> 
> (There was a rather aggressive user who had popped into the other thread disregarding the parameters for posting results, prompting this one.)


To quote the user you are referring to: "The whole world uses this game as a CPU benchmark", and we cannot disappoint the whole world


----------



## Taraquin (Apr 26, 2021)

Felix123BU said:


> As far as temps go, they are pretty cool, during normal operations it never goes above 55c, that should not be an issue. Anyhow, i fixed my 'issue' of not seeing anything over 230fps, for now the itch is scratched.
> 
> Thanks a lot for the advices, will probably revisit the ram oc topic one day.
> 
> And speaking of Cpu performance in SOTR, whatever i did, the cpu stats basically remain the same, when there was a Fps increase, the section that increased was the Gpu section, i wonder if thats a difference of architectures since you have a Amd Nvidia combo, i have a Amd Amd combo, so the game engine might behave differently.


Temp is your stability issue. B-die prefers below 50C and tends to be unstable if timings are tight above 50C. When I put a fan directly on the RAM I could up the voltage to 1.5V and lower several timings. With no fan the RAM overheats above 1.45V. I've changed to a chassis with better cooling now, so it might be better.


----------



## Deleted member 202104 (Apr 26, 2021)

rtwjunkie said:


> I’m confused why a benchmark would focus on how no one actually plays the game?



I think W!zzard asked the same question as well (or at least how it related to game play).

I think it's in the same spirit of why 720p benchmarks are posted - CPU performance.



rtwjunkie said:


> The best purpose of a BM is to test somewhat how a system plays in the associated game, including visuals.



I agree that's a really good purpose.  I play 1440p and as high quality settings as decent frame rates allow.

From doing this, I saw the actual results in my system from swapping to a lower frequency (3600 to 3200)  but lower latency (18-22-22-42 to 14-14-14-34) RAM and changing CPU power limits.

It was more interesting than a Cinebench run.


----------



## rtwjunkie (Apr 26, 2021)

Felix123BU said:


> Because some people think something is good, others think something else is good. We had the same discussion on the other Shadow of the Tomb Raider benchmark, where 1080 Highest was the norm, some people where not pleased and wanted a benchmark for Lowest, hence I created this opposite threat so we can compare and discuss setting that let the CPU shine instead of the GPU. But there will always be people who don't like either one or the other, and in the end not all can be pleased


Thanks for the thorough explanation. I went to see what was up.  It seems rather than report and threadban someone who can’t follow the thread rules and test his system on a benchmark that TPU uses as a GAME benchmark, he gets catered to and gets the CPU benchmark thread he wants created?  

Ok, then, don’t worry I’ll just see myself out.


----------



## Felix123BU (Apr 26, 2021)

Taraquin said:


> Temp is your stability issue. B-die prefers below 50C and tends to be unstable if timings are tight above 50C. When I put a fan directly on the RAM I could up the voltage to 1.5V and lower several timings. With no fan the RAM overheats above 1.45V. I've changed to a chassis with better cooling now, so it might be better.


It could be, you might be onto something. I never bothered about temps since 55 was the absolute max in a stress test after 1 hour (and that's the max, normally it's around 45 ingame). The one thing that doesn't quite fit this theory is that after a RAM setting change and a reboot my RAM idles at around 33c, and it does not heat up that fast. But who knows, the temp sensor could sit on one part of the stick while another part gets hot really fast and is not covered by the sensor. I like the idea though; I have tried so many things, and putting a fan above the sticks for a couple of minutes to see if there is a difference would be a worthy test  


weekendgeek said:


> I think W!zzard asked the same question as well (or at least how it related to game play).
> 
> I think it's in the same spirit of why 720p benchmarks are posted - CPU performance.
> 
> ...


Deffo more interesting than a CB run 



rtwjunkie said:


> Thanks for the thorough explanation. I went to see what was up.  It seems rather than report and threadban someone who can’t follow the thread rules and test his system on a benchmark that TPU uses as a GAME benchmark, he gets catered to and gets the CPU benchmark thread he wants created?
> 
> Ok, then, don’t worry I’ll just see myself out.


It was not really to please him, he could have made his own thread for that. It was more out of curiosity about how certain CPUs reflect in the benchmark, a thing not clearly visible in a higher resolution scenario, especially since this game is quite CPU dependent. And that guy did not want this thread, he wanted THAT thread   

I can generally agree that game CPU benchmarks are not the best use of a game, but they have their place, and there are people who like to test these things.


----------



## toilet pepper (Apr 27, 2021)

Taraquin said:


> Try trrds 4, trrdl 6, tfaw 16, trc 48, twr 12, trtp 6. If that works you can try trp 15, tras 31, trc 47 and trfc 282.
> 
> 
> Dunno about mem vtt. Are rgb on ram active? If it is and you dont care about it, try disabling it as it increases temp on ram, this might make it possible to run higher voltage.


Thanks for the tip. Here's what I got with your suggestion. I'm heavily thermal limited as the rig is in an ITX case.









here's my timings on the same rig with a ryzen 3600. I can't do it with my 5800x.


----------



## Taraquin (Apr 27, 2021)

toilet pepper said:


> Thanks for the tip. Here's what I got with your suggestion. I'm heavily thermal limited as the rig is in an ITX case.
> 
> View attachment 198248
> View attachment 198247
> ...


3% more fps  Further suggestions: tWTRL 12 or 10, tRRDL 4, tWRRD 1. If you can run the voltage higher on the RAM you can tighten further. With 1.45V you can try 15-15-15-14-30 with tRC 44 and tRFC 264. That should give you a good boost. You might be able to run CL14, but that depends on how well binned your RAM is.

If you have the same RAM on the 3600, try tRP 15, tRAS 33, tRC 48, tRFC 288, tRRDS 4, tRRDL 6, tFAW 16, tWR 12, tRTP 6; that might work at 1.4V or a bit higher.


----------



## steevebacon (Apr 30, 2021)




----------



## Det0x (Jun 19, 2021)




----------



## Space Lynx (Jun 19, 2021)

Det0x said:


> View attachment 204512



300 fps... mighty impressive...


----------



## Det0x (Jun 30, 2021)

lynx29 said:


> 300 fps... mighty impressive...


No need to stop there..


----------



## Felix123BU (Jun 30, 2021)

Det0x said:


> No need to stop there..
> View attachment 205978
> View attachment 205979


350, go big or go home  

bet that 14-14-14 memory helps a lot    mine cant go lower than 16 at 3800mhz 1:1:1


----------



## CGi-Quality (Aug 2, 2021)




----------



## Felix123BU (Aug 2, 2021)

CGi-Quality said:


> View attachment 210858
> 
> View attachment 210859


Can you please re-upload keeping in mind the thread's topic? Please see the first page: 

*Fullscreen
Exclusive Fullscreen
DirectX 12
DLSS OFF
Vsync OFF
Resolution 1920 X 1080
Anti-Aliasing OFF

Graphic Settings - Lowest Profile (please leave it at Lowest without any changes for the purpose of this test)*

Thank you!


----------



## CGi-Quality (Aug 2, 2021)

Felix123BU said:


> Can you please re-upload keeping in mind the threads topic? Please see first page:
> 
> *Fullscreen
> Exclusive Fullscreen
> ...


Weird, I thought it was fixed (I saw the error myself and re-uploaded before you replied). It's fine though, I'll re-up shortly.

*Edit*: Done. Got the old one mixed up in the upload, I'm guessing.

That said → lower result than I expected with that proc, but hey, it is what it is.


----------



## Taraquin (Aug 3, 2021)

CGi-Quality said:


> Weird, I thought it was fixed (I saw the error myself and re-uploaded before you replied). It's fine though, I'll re-up shortly.
> 
> *Edit*: Done. Got the old one mixed up in the upload, I'm guessing.
> 
> That said → lower result than I expected with that proc, but hey, it is what it is.


Have you tweaked ram or OCed CPU? You have a lot of potential


----------



## CGi-Quality (Aug 3, 2021)

Taraquin said:


> Have you tweaked ram or OCed CPU? You have a lot of potential


RAM is running at its 2400MHz XMP, but with the proc, I figured I wouldn't need to. I OC'ed my Haswell-E (5960X) proc about 7 years ago, but haven't dabbled in OC since. 

I _am_ considering upgrading to 4000MHz RAM, though. 2400 may not cut it.


----------



## Felix123BU (Aug 3, 2021)

CGi-Quality said:


> RAM is running at its 2400MHz XMP, but with the proc, I figured I wouldn't need to. I OC'ed my Haswell-E (5960X) proc about 7 years ago, but haven't dabbled in OC since.
> 
> I _am_ considering upgrading to 4000MHz RAM, though. 2400 may not cut it.


Yup high-speed low latency RAM helps a lot, low latency maybe even more than speed


----------



## CGi-Quality (Aug 3, 2021)

Felix123BU said:


> Yup high-speed low latency RAM helps a lot, low latency maybe even more than speed


I also found a batch that's C16/3600. May go with it!


----------



## Felix123BU (Aug 3, 2021)

CGi-Quality said:


> I also found a batch that's C16/3600. May go with it!


C16/3600 is good, C14/3600 is great, but it's also damn expensive compared to C16/3600, and the performance difference is not that great unless you want to squeeze the absolute best out of your kit


----------



## CGi-Quality (Aug 3, 2021)

Felix123BU said:


> C16/3600 is good, C14/3600 is great, but its also damn expensive compared to C16/3600, and the performance difference is not that great if you don't want to squeeze the absolute best out of your kit


I assume though that C16/3600 should easily smoke C14/2400, and it will show? In the past, I was once led to believe that RAM speeds really didn't make THAT big a difference. I'd be happy to hear that this was totally false.
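For what it's worth, a rough back-of-the-envelope comparison (a simple first-word-latency formula with the two kits mentioned; real-world latency also depends on subtimings and the memory controller, so treat this as a sketch only):

```python
# Rough comparison of two DDR4 kits: first-word (CAS-only) latency
# and peak per-DIMM bandwidth. Real latency depends on many more timings.

def first_word_latency_ns(cas: int, mt_s: int) -> float:
    """CAS latency in ns = CL cycles * 2000 / (MT/s)."""
    return cas * 2000 / mt_s

def peak_bandwidth_gbs(mt_s: int, bus_bytes: int = 8) -> float:
    """Peak transfer rate: MT/s * 8 bytes per transfer, in GB/s."""
    return mt_s * bus_bytes / 1000

for name, cas, speed in [("C14/2400", 14, 2400), ("C16/3600", 16, 3600)]:
    print(f"{name}: {first_word_latency_ns(cas, speed):.2f} ns, "
          f"{peak_bandwidth_gbs(speed):.1f} GB/s peak")
```

By this simple measure, C16/3600 wins on both counts: roughly 8.9 ns vs 11.7 ns on CAS latency, and 1.5x the raw bandwidth.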


----------



## Taraquin (Aug 3, 2021)

Felix123BU said:


> Yup high-speed low latency RAM helps a lot, low latency maybe even more than speed


I bet you can get 20% better performance with better ram. Try finding a Samsung B-die kit. Ripjaws 3200cl14 is quite cheap and you should be able to OC those to 4000cl16 in most cases


----------



## Ja.KooLit (Aug 3, 2021)




----------



## damric (Aug 6, 2021)

Ryzen 5 3600@ 4.35GHz
4x8GB DDR4 @3800MT/s 16-19-19-39 CR1 (Hynix)
Vega 64 @1750/1100


----------



## Taraquin (Aug 6, 2021)

damric said:


> Ryzen 5 3600@ 4.35GHz
> 4x8GB DDR4 @3800MT/s 16-19-19-39 CR1 (Hynix)
> Vega 64 @1750/1100


You can probably get around 170 on CPU Game with a few tweaks to the RAM; post a ZenTimings screenshot and we can help you


----------



## damric (Aug 6, 2021)

Good luck. This is some very cheap RAM.


----------



## Taraquin (Aug 6, 2021)

damric said:


> Good luck. These are some very cheap RAM.


Have you tested the fast preset at 3800? Try it. If they are Hynix DJR they are some of the best chips out there  

If the fast preset doesn't work even at 1.45V, try tRAS 35, tRC 60, tFAW 20, tRRDS 5, tRRDL 8, tRFC 540, tWR 16, tRTP 8, SCLs 4. If that works, try tRC 58, tRFC 504, tFAW 16, tRRDS 4, tRRDL 6, tWR 12, tRTP 6.
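Incidentally, suggestion sets like these tend to follow the usual DRAM rules of thumb, e.g. tRC ≥ tRAS + tRP and tFAW ≥ 4 × tRRD_S. A tiny illustrative checker (tRP 15 is assumed here just for the example; boards may silently round violations up anyway):

```python
# Sanity-check a proposed set of DRAM subtimings against two common
# rules of thumb: tRC >= tRAS + tRP and tFAW >= 4 * tRRD_S.
# Illustrative only -- not a substitute for stability testing.

def check_timings(trp, tras, trc, trrds, tfaw):
    issues = []
    if trc < tras + trp:
        issues.append(f"tRC {trc} < tRAS+tRP {tras + trp}")
    if tfaw < 4 * trrds:
        issues.append(f"tFAW {tfaw} < 4*tRRD_S {4 * trrds}")
    return issues

# The first suggested set: tRAS 35, tRC 60, tFAW 20, tRRD_S 5 (tRP 15 assumed)
print(check_timings(trp=15, tras=35, trc=60, trrds=5, tfaw=20))  # → []
```

An empty list means the set is internally consistent; an overtightened set (say tRC 45 with tRAS 35 + tRP 15) would be flagged.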


----------



## damric (Aug 6, 2021)

Taraquin said:


> Have you tested the fast preset at 3800? Try it. If they are Hynix DJR they are some of the best chips out there
> 
> If fast preset don't work even at 1.45V try tras 35, tRC 60, tFAW 20, trrds 5, trrdl 8, tRFC 540, tWR 16, tRTP 8, scls 4. If that works try tRC 58, trfc 504, tfaw 16, trrds 4, trrdl 6, tWR 12, tRTP 6.


I've honestly tried so many settings. I think it's more of a limitation of this memory controller, and it's a weird one. With my Zen and Zen+ CPUs I could run tighter timings using the calculator with the same sticks, 3466/14 if I recall. This CPU can do almost a 1950 fabric clock, but it seems to hate tightened timings, which is weird to me; tRFC especially it hates tightening. Regardless, I'll give your suggested timings a shot tomorrow when I get a chance. My wife's rig has a shitty daisy-chain (series-wired) motherboard and CJRs, and can do 4x16GB way easier on her 5800X than I can even do 4x8GB with my T-topology (parallel) board.


----------



## Mbellantoni (Aug 11, 2021)

Hello all, I was browsing the internet looking for a good way to test my manual RAM OC in real time on my 5600X and came across this forum. Here are my results.

My RAM profile is 100 percent stable in the sense that it doesn't throw any critical errors. However, a 2000 FCLK is rough to get stable, and it will throw WHEA 19s (interconnect bus errors), which are corrected errors, ONLY under extremely heavy loads like the P95 large FFT test. I cannot replicate them under any game, benchmark or memory stress test besides P95 large FFT, so I'm rolling with it. There is a very narrow margin of SoC and CLDO voltages that minimizes these errors in P95 to near none, and it took more time than I care to admit to dial in lol. Too much voltage or too little resulted in more WHEA 19s. I don't run Prime much unless I'm off doing another OC, so I'm happy with it; I check for them daily and seem to have found luck with my voltages.

Anyways, this is on a 5600X at 4.7GHz (PBO) with custom PPT values and a negative curve. This 4000cl14 profile made a difference of 27 frames and raised my GPU productivity (GPU bound) 15% over the stock 3600cl16 G.Skill profile (F4-3600C16-8GTZN)


----------



## Felix123BU (Aug 11, 2021)

Mbellantoni said:


> Hello all, i was browsing the internet looking for a good way to test my manual ram OC in realtime on my 5600x and came across this forum. Here are my results. My ram profile is 100 percent stable in a sense it doesnt throw any critical errors. However, a 2000flck is rough to get stable and it will throw whea 19's( interconnect bus error) which are corrected errors ONLY under extreme heavy loads like p95 large fft test. I cannot replicate them under any game, benchmark or memory based stress test besides p95. Large fft so im rolling with it. There is a very narrow margin with soc and cldo voltages that minimize these errors in p.95 to near none that took more time than i care to admit to dial in lol. Too much voltage or too little resulted in more whea 19's. I dont run prime much unless im off doing another oc so im happy with it and check for them daily and seem to have found luck with my voltages.
> 
> Anyways this is on a 5600x at 4.7ghz (pbo) with custom ppt values and negative curve. This 4000cl14 profile made a difference of 27 frames and brought my gpu productivity (gpu bound) 15% over the stock 3600cl16 gskill profile (f4-3600c16-8gtzn)


Very respectable score for a 5700XT  This game and this benchmark love fast, tightened RAM.


----------



## Taraquin (Aug 11, 2021)

Mbellantoni said:


> Hello all, i was browsing the internet looking for a good way to test my manual ram OC in realtime on my 5600x and came across this forum. Here are my results. My ram profile is 100 percent stable in a sense it doesnt throw any critical errors. However, a 2000flck is rough to get stable and it will throw whea 19's( interconnect bus error) which are corrected errors ONLY under extreme heavy loads like p95 large fft test. I cannot replicate them under any game, benchmark or memory based stress test besides p95. Large fft so im rolling with it. There is a very narrow margin with soc and cldo voltages that minimize these errors in p.95 to near none that took more time than i care to admit to dial in lol. Too much voltage or too little resulted in more whea 19's. I dont run prime much unless im off doing another oc so im happy with it and check for them daily and seem to have found luck with my voltages.
> 
> Anyways this is on a 5600x at 4.7ghz (pbo) with custom ppt values and negative curve. This 4000cl14 profile made a difference of 27 frames and brought my gpu productivity (gpu bound) 15% over the stock 3600cl16 gskill profile (f4-3600c16-8gtzn)


Good score, very close to mine. I ran CO -30 and +200 PBO when I got my best score. You can probably run tRTP at 6 and tRP at 15 (requires 2T instead of GDM), which might give you a few frames. Comparing your score against mine, you run lower CL by 2, tRC by 7 and tRFC by 44, but 1 higher tRP, 2 higher tWR and 3 higher tRTP. What are your AIDA64 scores?


----------



## Splinterdog (Aug 11, 2021)

All hardware at default - no OC.
Ryzen 5600X
RX 5700 XT
Mem 3200MHz XMP 2.0


----------



## Mbellantoni (Aug 11, 2021)

Splinterdog said:


> All hardware at default - no OC.
> Ryzen 5600X
> RX 5700 XT
> Mem 3200MHz XMP 2.0
> ...


That's pretty damn impressive for being stock at those RAM speeds. Even on my best run (233fps) in non-exclusive fullscreen, my max CPU render capped at 697, which is far below yours, although the other CPU scores were still a little higher. Very nice!


Taraquin said:


> Good score, very close to mine. I ran CO -30 and +200 PBO when I got my best score. You can probably run trtp at 6 and trp at 15 (requires 2T instead of GDM), that might give you a few frames. When I compare your score against mine you run lower CL by 2, tRC by 7 and tRFC by 44, but 1 higher tRP, 2 higher WR and 3 higher tRTP. What is your aida64-scores?


I'm using my daily OCs here. Unfortunately I think I hit the crap end of the silicon lottery on my CPU, as it needs more voltage than some of the samples I've seen out there. My first core won't take any sort of negative curve at all or it will drop threads in Prime. 0 -19 -14 -20 -20 -20 at 4.7GHz, scalar 4x; PPT values are 125-75-105, as I found better boost behavior keeping the PPT values closer to stock rather than leaving them uncapped.

GPU is 2095 boost @ 1115mV and 1810MHz on the memory side.

As far as the RAM goes, I used the DRAM calc at the 3600 fast preset, then changed the primaries around and lowered tRC and tRFC quite a bit, as well as changing the SoC and CLDO voltages to "stabilize" the memory controller. ProcODTs and all that are still the same, as I found no reason to change them after stability testing, although in my limited testing swapping these around can have an extremely minor effect on bandwidth and latency. There seems to be a sweet spot.

I may see what I can extract further in terms of subtimings, but I feel like I may be at the bleeding edge at this point. I'm running these at 1.565V, which is right in line with 4000cl14 XMPs (1.5V), but with much tighter subtimings than those you buy off the shelf from G.Skill and such.

Pretty sure my saving grace is my fan cooling the DRAM. I used an AMD Wraith cooler fan I had laying around, custom made a bracket for my case and mounted the fan onto it using double-sided hanging tape. It sits directly over the RAM and it looks pretty damn clean tbh lol. Works like a dream. My DRAM doesn't pass 35c under heavy gaming loads or memory stress tests; it's probably the only reason I can run this profile stable. Before, my DRAM was getting up into the mid to high 40s, and Samsung B-die doesn't like that when you're pushing it. Although I feel my RAM is pretty voltage tolerant, and from my testing there's only a 1-2C difference in temps under load from my XMP profile.

As far as AIDA goes, my trial has expired and I'm too cheap right now to buy it. But judging by my last run (3800cl14) with looser tRC and tRFC, I would estimate I'm at about 59-60,000MB/s read and around 53.5ns of latency


----------



## Splinterdog (Aug 11, 2021)

Mbellantoni said:


> Thats pretty damn impressive for being stock at those ram speeds. Even my best run (233fps) in non exlusive fullscreen -  my max cpu render capped at 697. And is far below yours although the other cpu scores were still a little higher. Very nice!
> 
> Im using my daily Oc's here. Unfortunately i think i hit the crap end of the silicon lottery on my cpu as it needs more voltage then some of the samples ive seen out there. My first core wont take any sort of negative curve at all or it will drop threads in prime.  0 -19 -14 -20 -20 -20 at 4.7ghz, scalar 4x, ppt values are 125-75-105 as i found better boost behavior keeping the ppt values closer to stock rather than leaving them uncapped.
> 
> ...


Looks like we have the same hardware, but I have a tad more memory. Maybe that's the difference?
I haven't yet put my toes in the overclocking game in a serious way. Much to learn.


----------



## Cheese_On_tsaot (Aug 11, 2021)

Something is clearly off with this benchmark; my RAM is just 200MHz slower than the 5700 XT combo's.

Hmmm Windows build difference, mine is 1 iteration older.


----------



## Mbellantoni (Aug 11, 2021)

Splinterdog said:


> Looks like we have the same hardware, but I have a tad more memory. Maybe that's the difference?
> I haven't yet put my toes in the overclocking game in a serious way. Much to learn.


I honestly couldn't tell you lol. I've run it multiple times to confirm my scores were consistent. Maybe you had a freak run? I would check again 2 or 3 more times to make sure it's consistent.

Otherwise I wouldn't expect much of a difference between our GPUs, especially being as CPU bound as we are in this test. My GPU is OCed and undervolted with temps in mind. Performance gains are probably marginal as far as GPUs go, 1-2 fps over stock, but thermals are low (57-62C depending on the game).

If I recall correctly, a Gamers Nexus video said something about running dual-rank DIMMs (16x2), or 4 DIMMs total, resulting in a performance increase on the 5000 series architecture. If this is the case then I'm pretty blown away by how much of a difference it actually makes.

As far as the CPU overclock goes, I've heard Ryzen 5000 is pretty well optimized for gaming out of the box. Multicore leaves something to be desired, but single core doesn't leave much room for improvement even with PBO on. I've tried different speeds on this particular benchmark, such as 4725-4850 with PBO, and the improvements were marginal at best while the voltages were outrageous; even at 4725 compared to 4700 there was a difference of 3 volts just at the title screen alone.

This is all theory of course. Who knows, maybe you just have a badass CPU + mobo + RAM combo that works together really well.
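The "run it 2 or 3 more times" advice can be made concrete: average several runs and look at the spread before trusting a single number. A minimal sketch, with entirely made-up FPS values:

```python
# Average repeated benchmark runs and report the spread, so a single
# "freak run" doesn't get mistaken for a real gain. FPS values are invented.
from statistics import mean, stdev

runs = [231, 233, 229, 232, 230]  # hypothetical CPU Game avg FPS, 5 runs

avg = mean(runs)
spread = stdev(runs)
print(f"avg {avg:.1f} fps, stdev {spread:.2f}")

# A tweak is only convincingly real if it beats the average by more
# than run-to-run noise, e.g. ~2 standard deviations.
threshold = avg + 2 * spread
print(f"treat anything above {threshold:.1f} fps as a genuine improvement")
```

With these numbers the average is 231 fps with a stdev of about 1.6, so a "233" run is well within normal variance.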


----------



## Cheese_On_tsaot (Aug 11, 2021)

Mbellantoni said:


> I honestly couldnt tell you lol. Ive ran it multiple times to confirm my scores were consistant. Maybe you had a freak run? I would check again 2 or 3 more times to make sure its consistant.
> 
> Otherwise i wouldnt expect much of a difference between our gpu's especially being as cpu bound as we are in this test. My gpu is oced and undervolted with temps in mind. Performance gains are probably marginal as far as gpu' s go 1-2 fps over stock but thermals are low (57-62C depending on the game)
> 
> ...


I have single rank dimms and they can't OC at all regardless of being B die...


----------



## Mbellantoni (Aug 11, 2021)

Cheese_On_tsaot said:


> I have single rank dimms and they can't OC at all regardless of being B die...


Got a zentimings screenshot?


----------



## Cheese_On_tsaot (Aug 11, 2021)

Mbellantoni said:


> Got a zentimings screenshot?


----------



## Mbellantoni (Aug 11, 2021)

Cheese_On_tsaot said:


> View attachment 212140


Your SoC voltage is low. Set it to 1.1 at least, but not above 1.125, and set your SoC LLC to level 4 or so. You can also try setting your VDDG IOD and CCD up to around 950. You're running GDM + 1T, which will only accept even numbers; even though ZenTimings shows 17, I guarantee it's at 18 for your primaries. With that said, change your primaries to 16-16-16-36, RAM speed to 3400 and FCLK to 1700 manually, bring your DRAM voltage up to 1.37 or 1.38, and see if it posts and what ZenTimings shows



Cheese_On_tsaot said:


> View attachment 212140


For the record, keep GDM + 1T enabled. It helps a lot with stability. I can't even post without GDM on at anything above 3000MHz
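The GDM rounding described here (primaries displayed as 17 but effectively running at 18) amounts to rounding odd values up to the next even number. A tiny illustration, assuming that is indeed how the board applies it:

```python
# GearDown Mode (GDM) effectively rounds odd primary timings up to the
# next even value -- so a kit "running" 17-17-17-36 behaves like 18-18-18-36.
# Illustrative model of that rounding, not values read from real hardware.

def gdm_effective(timing: int) -> int:
    return timing + (timing % 2)  # odd -> next even, even unchanged

primaries = [17, 17, 17, 36]
print([gdm_effective(t) for t in primaries])  # → [18, 18, 18, 36]
```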


----------



## Cheese_On_tsaot (Aug 11, 2021)

Mbellantoni said:


> Your soc voltage is low. Set it to 1.1 at least but not above 1.125 and your soc llc to level 4 or something. Also you can try setting your vddg iod and ccd up to around 950. Your running gdm+ 1t which will only accept even numbers even though zen timings shows its at 17 i gaurentee its at 18 for your primarys. With that being said change your primarys to 16-16-16-36 andram speed to 3400 and flck to 1700 manually Bring up your dram voltage to 1.37 or 1.38 see if it post and what zen timings shows
> 
> 
> For the record keep gdm+1t enabled. It helps alot with stability. I cant even post without gdm on at anything above 3000mhz


I did the settings you mentioned and can't even boot into the BIOS; at 3000MHz with those settings it BSODs.
Useless memory.


----------



## Mbellantoni (Aug 11, 2021)

Cheese_On_tsaot said:


> Did the settings you mentioned, can't even boot into the bios, at 3000mhz with those settings it BSOD.
> Useless memory.


You might have faulty RAM, man. That should be easy for B-die. Do you have the latest BIOS?


----------



## Cheese_On_tsaot (Aug 11, 2021)

Mbellantoni said:


> You might have faulty ram man. That should be easy for bdie. Do you have the latest bios?


Yes, I am on the latest from the Gigabyte site, from shortly before the site became inaccessible.
This RAM was also bad on the B450 motherboard I had prior; no difference.


----------



## Splinterdog (Aug 11, 2021)

Cheese_On_tsaot said:


> Yes I am on the latest from the Gigabyte site shortly before the site became inaccessible.


Funny you should say that because I've been trying to get on Gigabyte's website for weeks 
Anyway, I may try the SOTTR bench on my secondary system - Ryzen 2600X/RX 580/32GB 2400MHz RAM


----------



## Taraquin (Aug 11, 2021)

Is 16-17-17 your stock speed? In general, B-die at that low a speed ALWAYS has even primary timings like 15-15-15 or 14-14-14; I have never heard of 16-17-17 at 3000. That sounds like Micron, Hynix or some low-tier Samsung like S-die. Thaiphoon reports wrong sometimes. An easy test to see if it's B-die is to bring tRFC down: it's very rare that non-B-die can do below 300 at 3000MHz. Up the SoC and IOD voltages and try tRFC 300. If that does not boot, I'm afraid your RAM is not B-die.

You should establish the max speed for your RAM. Set all timings to auto and try going up one step at a time at 1.35V. If it lands below 3600 it is one of the garbo tiers; 3600 and above is generally reserved for B-die, Micron E/B and Hynix C/D.
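The tRFC test above is easier to reason about in nanoseconds, since absolute refresh time is what separates the ICs: tRFC_ns = cycles × 2000 / MT/s. A quick conversion sketch (the per-IC thresholds are rough community rules of thumb, not spec values):

```python
# Convert tRFC from clock cycles to nanoseconds. As a rough rule of thumb,
# good Samsung B-die manages ~160 ns or less, while budget ICs often
# need 300+ ns regardless of the cycle count shown in the BIOS.

def trfc_ns(cycles: int, mt_s: int) -> float:
    return cycles * 2000 / mt_s

print(trfc_ns(300, 3000))  # → 200.0 ns: doable for B-die, unlikely for low-tier ICs
print(trfc_ns(264, 4000))  # → 132.0 ns: tight B-die territory
```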


----------



## Felix123BU (Aug 12, 2021)

Mbellantoni said:


> I honestly couldnt tell you lol. Ive ran it multiple times to confirm my scores were consistant. Maybe you had a freak run? I would check again 2 or 3 more times to make sure its consistant.
> 
> Otherwise i wouldnt expect much of a difference between our gpu's especially being as cpu bound as we are in this test. My gpu is oced and undervolted with temps in mind. Performance gains are probably marginal as far as gpu' s go 1-2 fps over stock but thermals are low (57-62C depending on the game)
> 
> ...


I initially used PBO for a single-core boost to 5.05GHz on my 5800X, but later realized that a 4.65GHz all-core manual OC gave me basically the same frames in games as PBO, with a lot less voltage and heat.

One thing about this game and benchmark, which I have seen in various other benchmarks too: extra core count gives an FPS boost by itself, besides the faster tuned RAM.


----------



## Zyll Goliat (Aug 12, 2021)

Here are the results from my old mule......
CPU = Xeon 2697 V2 12c/24t (3.45GHz all cores)
GPU = R9 Fury
Results from Low settings and Lowest settings are in the attachment......


----------



## Felix123BU (Aug 12, 2021)

Zyll Goliath said:


> Here are the result from my old mule......
> CPU=Xeon 2697 V2 12c/24t (3,45Ghz-All Cores)
> GPU=R9 Fury
> View attachment 212245


Fury powered mule   

Can you please reupload with 1080p and the Lowest quality preset, so we all have the same baseline for comparison? Yours is custom, and Level of Detail is Low vs Lowest, which changes the scope of the test a bit.

Thx!


----------



## Zyll Goliat (Aug 12, 2021)

Felix123BU said:


> Fury powered mule
> 
> Can you please reupload with 1080p and Lowest quality preset so we all have the same baseline for comparison? Yours is custom and Level of Detail is Low vs Lowest which would change the scope of the test a bit.
> 
> Thx!


Cheers and TY for noticing that... I just did a run on Lowest settings and it was actually a nice bump.......

Seems like my old mule is still kicking just fine..... Maybe I'll try some more tweaking and see if the results could be even better.....

I tweaked the memory timings a bit and managed to reach 130FPS!!!

Ahh... I just wish this CPU were unlocked. This OC is all via BCLK, though it's not a bad OC (115 bus), which gives me 3.45GHz (all cores) + Turbo up to 4.03GHz (a few cores). And yeah, my memory is also just regular Kingston 1333MHz, as I am all about price/performance (cheap bastard), though it's running at 1550MHz in quad channel + yeah, I did tighten the timings a bit.......


----------



## Mbellantoni (Aug 12, 2021)

Felix123BU said:


> I initially used PBO for single core boost to 5.05Ghz on my 5800X, but later realized that a 4.65ghz all core manual OC gave me basically the same frames in games as the PBO, at lots less voltage and heat produced.
> 
> One thing about this game and benchmark, have seen it in various other benchmarks, extra core count gives by itself a FPS boost, besides the faster tuned RAM.


For sure, having more cores helps lessen the bottleneck altogether.

What I'm picking up from this test is that increasing RAM speed, core clock or core count all lessen the bottleneck at stupidly high frame rates, to different degrees: core count is the biggest, and after testing this benchmark, CPU speed and RAM speed kind of trade blows. Normally the CPU renders more than the GPU can process anyway, but in this case the increased RAM speed and lower latency let the CPU work more efficiently, which in turn lets the GPU act more efficiently on the frames being rendered, lessening a CPU bottleneck and resulting in a raw performance increase.

When you're GPU bound, the GPU can't act on the extra frames anyway and the CPU is in a much more relaxed state. Only in certain situations can the extra RAM speed be utilized, resulting in increased 1% and 0.1% lows, so the improvement isn't very noticeable at all.

Overall, high-FPS games can utilize higher RAM speed and the CPU itself much more efficiently. And while this benchmark is fun, I've never met anyone who plays Tomb Raider on potato settings lol.
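The bottleneck reasoning above boils down to: displayed FPS is roughly the minimum of what the CPU can feed and what the GPU can draw, which is why 1080p/Lowest exposes the CPU. A toy model with invented numbers:

```python
# Toy bottleneck model: effective FPS ~= min(CPU feed rate, GPU draw rate).
# All figures are invented purely to illustrate why low settings expose
# the CPU while high settings hide RAM/CPU tweaks behind the GPU.

def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# 1080p/Lowest: the GPU has huge headroom, so the CPU sets the cap
print(effective_fps(cpu_fps=230, gpu_fps=500))  # → 230, CPU bound
# 1440p/Highest: the GPU becomes the limit, RAM/CPU tweaks barely show
print(effective_fps(cpu_fps=230, gpu_fps=110))  # → 110, GPU bound
```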


----------



## Felix123BU (Aug 13, 2021)

Mbellantoni said:


> For sure having more cores helps lessen the bottleneck all together.
> 
> What im picking up from this test is increasing ram speed, core clock or just cores in general just lessens the bottleneck at stupidly high frames rates to different degrees. Core count being the biggest and after testing this benchmark cpu speed and ram speed kind of trade blows. Normally the cpu renders more than the gpu can process anyways but in this case the increased ram speed and latency  allows the cpu to be more efficient which allows gpu to act more effeciently as well on the frames being rendered by lessening the bottleneck in a cpu bottlenecked situation resulting in a raw performance increase.
> 
> ...


"And while this benchmark is fun i never met anyone who plays tomb raider on potato settings lol." so very true   , yet nobody "plays" 3D Mark either, and people still loose hours on it, we are strange creatures


----------



## Splinterdog (Aug 13, 2021)

I ran the bench again just to be sure and then ran it on my second rig with Ryzen 2600x/RX 580. The performance difference is staggering.










The second one is with Windows 11 by the way, although it says Win 10 in the result.


----------



## Zyll Goliat (Aug 13, 2021)

Splinterdog said:


> I ran the bench again just to be sure and then ran it on my second rig with Ryzen 2600x/RX 580. The performance difference is staggering.
> View attachment 212468
> 
> 
> ...


I'm curious, are both of those systems at stock speeds or are they OC'd?


----------



## Taraquin (Aug 13, 2021)

Splinterdog said:


> I ran the bench again just to be sure and then ran it on my second rig with Ryzen 2600x/RX 580. The performance difference is staggering.
> View attachment 212468
> 
> 
> ...


If you look at the CPU Game avg, the 5600X is actually twice as fast. My 3600 with tweaked RAM is able to get around 170 on that one.


----------



## Det0x (Aug 13, 2021)

Playing around with new memory sticks and settings, I could make a run for 320 average CPU FPS..


----------



## Mbellantoni (Aug 14, 2021)

Det0x said:


> Playing around with new memory sticks and settings, could make a run for 320 average cpu fps..


WOW, those are cranked! What kind of voltage are you running through those bad boys to keep 'em stable? ;]

Taraquin, I finally got around to picking up AIDA; here are my results. I did have to back off my tRFC and increase my voltage to 1.60 V though. I'm not sure which one was causing a random error in Memtest at random times, so I adjusted both, since it would have taken too long to sit there and figure out which of the two it was. I'm leaning more towards the tRFC though, and I'll try to lower my voltage back down to around 1.58 V sometime soon. Either way I'm not too concerned: my RAM doesn't pass 35 °C under heavy loads with my fan installed. With the temps so much lower than uncooled RAM running even at stock XMP voltages, I think I can run this profile daily without worrying about my RAM crapping out any time soon.


----------



## Felix123BU (Aug 14, 2021)

Det0x said:


> Playing around with new memory sticks and settings, could make a run for 320 average cpu fps..
> View attachment 212479View attachment 212478


Big difference from the previous sticks? As far as I remember those were quite fast too.

51.7 latency....mmm....the best I could achieve with my meh sticks is 53.5 ns, and those settings were barely stable
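Side note for anyone comparing numbers like these: the AIDA figure is a full round-trip measurement, but the raw CAS portion falls out of CL and the data rate directly. A minimal sketch (the kits in the loop are just ones mentioned in this thread):

```python
# Sketch: convert a DDR4 CAS latency (in clock cycles) to nanoseconds.
# The memory clock runs at half the effective data rate (DDR = two
# transfers per clock), so: ns = CL / (MT/s / 2) * 1000.

def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    """First-word CAS latency in ns for a given CL and data rate (MT/s)."""
    memory_clock_mhz = data_rate_mts / 2
    return cl / memory_clock_mhz * 1000

if __name__ == "__main__":
    for cl, rate in [(16, 3800), (14, 4000), (16, 3333)]:
        print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(cl, rate):.2f} ns")
```

Keep in mind this is only the CAS component; the ~52 ns AIDA reports includes the fabric, memory controller, and the rest of the path, which is why FCLK and subtiming tuning move it far more than CL alone.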


----------



## Mbellantoni (Aug 14, 2021)

Felix123BU said:


> Big difference from the previous sticks? As far as I remember those where quite fast too.
> 
> 51.7 latency....mmm....the best I could achieve with my meh sticks is 53.5ns and those settings where barely stable


Yeah, that's pretty insane, what he's got going on there at 3800 MHz. My computer would laugh at me if I punched in something like that lol. That kit comes stock at 4000 CL14, so that's a premium bin for you $$ lol


----------



## Felix123BU (Aug 14, 2021)

Did any of you manage to get FCLK and UCLK fully stable at 2000 MHz with tight timings?


----------



## Taraquin (Aug 14, 2021)

Felix123BU said:


> Did any of you manage to get FCLK and UCLK fully stable at 2000mhz and tight timings?


Best I have got so far is 16-16-15 and 282 tRFC; the others are quite tight. I must up the voltage a lot to stabilize tighter timings :/



Mbellantoni said:


> WOW those are cranked! what type of voltage are you running to those badboys to keep em stable ;]
> 
> taraquin i finally got around to pickin up aida. heres my results. although i did have to back off my trfc and increase my voltage up to 1.60v im not sure which was causing a random error in memtest at random times. so i adjusted both since it took too long to sit there and figure it out i kind of knew its either one or the other. im leaning more towards the trfc though and ill try to lower my voltage back down sometime soon to around 1.58v im not too concerned either way my ram doesnt pass 35c under heavy loads with my fan installed. with the temps being so much lower than non cooled ram even at stock xmp voltages i think i can run this profile daily without worrying about my ram crapping out any time soon.
> View attachment 212491


My score with 16-16-15 and voltage at 1.47 V:
I'm running +200 PBO and 2T, which might explain the slightly better latency.


----------



## Splinterdog (Aug 14, 2021)

Zyll Goliath said:


> I am curious are both of those systems are on stock speeds or they are OC?


Everything at stock


----------



## Mbellantoni (Aug 14, 2021)

Taraquin said:


> Best I have got so far is 16-16-15 and 282 tRFC; the others are quite tight. I must up the voltage a lot to stabilize tighter timings :/
> 
> 
> My score with 16-16-15 and volt at 1.47V:
> ...


That's impressive. How is your cache so much faster than mine? EDIT: never mind, it's an EDC-related value bug with Precision Boost Overdrive. I've capped mine to 105 and that gives me better real-world performance but worse L3 cache scores in AIDA.



Felix123BU said:


> Did any of you manage to get FCLK and UCLK fully stable at 2000mhz and tight timings?


I wouldn't say mine is fully stable. It's stable in the sense that it won't crash my PC or corrupt any data.

I still get WHEA 19s (interconnect bus errors), which are corrected, non-fatal errors, in certain situations. A very small amount. I can reproduce them when stress testing P95 large FFTs, and that's about it; anything else, like gaming and memory stress testing, I can do without errors. Every once in a while when I start up a game I might get 1 or 2, but never during gaming or benchmarking.

There was a very narrow window of VSOC and sub voltages that greatly minimized my chances of getting a WHEA 19. Too much and I got more; too little and I had the same result. Even going as far as 10-15 mV on the sub voltages made a big difference, and it took a while to actually dial in.


----------



## Felix123BU (Aug 14, 2021)

Mbellantoni said:


> Thats impressive. How is your cache so much faster than mine? EDIT: nevermind its an EDC related value bug with precision boost overdrive. ive capped mine to 105 and that gives me better real world performance but worse l3cache scores in aida
> 
> 
> I wouldnt say mine is fully stable. Its stable in a sense that it wont crash my pc or corrupt any data.
> ...


Yup, same for me: @2000 I get WHEA interconnect bus errors whatever voltage I push through it, whatever timings; tried up to 1.65, did not help. Might be the CPU, might be the RAM for me.

My sticks are weird, they get unstable at high voltages. Anything above 1.45 V will spit out errors; a setting that is 100% stable at 1.38 V will be unstable at 1.5 V, regardless of any other consideration.
But it only needs 1.38 V for 3800 CL16, at least that is good  

This is the max I can do, and this is what I use on a daily basis. Not too bad for a meh RAM kit rated for a max of 3333 CL16-17-16 @ 1.35 V









Taraquin said:


> Best I have got so far is 16-16-15 and 282 tRFC; the others are quite tight. I must up the voltage a lot to stabilize tighter timings :/
> 
> 
> My score with 16-16-15 and volt at 1.47V:
> ...


"Best I have got so far is 16-16-15 and 282 trfc" - at 2000 FCLK? Cool  Is it stable?


----------



## Taraquin (Aug 14, 2021)

Felix123BU said:


> Yup, same for me, @2000 I get whea interconnect bus errors whatever voltage I would push through it, whatever timings, tried up to 1.65, did not help. Might be the CPU, might be the RAM for me.
> 
> My sticks are weird, they get unstable at high voltages, anything above 1.45v will spit out errors, a setting that is 100% stable at 1.38v will be unstable at 1.5v, regardless of any other consideration.
> But it does need only 1.38 for 3800 CL16, at least that is good
> ...


Yeah, I ran 25 TM5 cycles and got no WHEAs. No crashes since May (when I got AGESA 1.2.0.1; now on 1.2.0.3A). Too bad my RAM is a shitty bin; 1T is out of the question and I seem to need 0.05 V extra at the same timings compared to others.


----------



## Mbellantoni (Aug 14, 2021)

Felix123BU said:


> Yup, same for me, @2000 I get whea interconnect bus errors whatever voltage I would push through it, whatever timings, tried up to 1.65, did not help. Might be the CPU, might be the RAM for me.
> 
> My sticks are weird, they get unstable at high voltages, anything above 1.45v will spit out errors, a setting that is 100% stable at 1.38v will be unstable at 1.5v, regardless of any other consideration.
> But it does need only 1.38 for 3800 CL16, at least that is good
> ...


It's definitely your FCLK causing the WHEA 19s. All we can really do is hope an AGESA update eliminates them. That is weird about your RAM not being able to take higher voltages. Seems like you have a nice healthy profile there though



Taraquin said:


> Yeah, ran 25 TM5s, I gave no wheas. No crashes since May (when I got agesa 1.2.0.1, now on 1.2.0.3A). Too bad my ram us a shitty bin. 1t is out of the question and I seem to need 0.05V extra at same timings compared to others.


I can't even run GDM off at all on my sticks; it needs to be on. 1T or 2T results in memory so unstable that my BIOS can even crash, and I don't even try anymore due to the risk of corrupting my OS. I'm pretty stable now at 264 tRFC at 1.60 V, but I'm probably going to drop my voltage down to 1.575 today and run a long HCI Memtest. I had my tRFC at like 234 before and I would get an error regardless of voltage.


----------



## Felix123BU (Aug 14, 2021)

Mbellantoni said:


> It's definitely your FCLK causing the WHEA 19s. All we can really do is hope an AGESA update eliminates them. That is weird about your RAM not being able to take higher voltages. Seems like you have a nice healthy profile there though
> 
> 
> I cant even run gdm off at all on my sticks. It needs to be on. 1t or 2t results in memory so unstable that my bios can even crash. I dont even try anymore due to the risk of corrupting my OS. Im pretty stable now at 264 trfc at 1.60v but im probably going to drop my voltage down to 1.57.5 today and run a long hcimemtest. I had my trfc at like 234 before and i would get an error regardless of voltages.


Yes, I also think it's the FCLK. 2000 is said to be rather rare even for the 5000 series, and I also tried some super relaxed timings, same errors. The fact that my RAM also does not like high voltages is explained in some forums and RAM OC guides; some sticks just do not play nice with high voltage. Could also be the mobo, but others have pushed RAM much, much higher than me on the same model, sooo.....

I can't really complain about performance, it's where I want it. The only reason to get faster sticks would be to play with and tune them, which would be fun, but spending a couple of hundred on a really good pair is just a waste of money for me  Another reason would be to see how far I can push this benchmark: looking at the averages, I get 542 GPU and 352 CPU renderer, and I would sort of want to see how much super fast RAM would elevate the CPU game, and subsequently the whole score 

As for GDM off, I can run it at 3600 MHz max with somewhat looser timings, but the results are basically the same as GDM on plus higher frequency, so no point for me; anything higher than 3600 will not work with GDM off on my setup and would probably need a lot more voltage.


----------



## Mbellantoni (Aug 14, 2021)

Yeah, there aren't many people running 2000 FCLK error-free at the moment. I currently have mine on a stability test: I tightened the timings to 14-16-14-28-38 and the tRFC to 252, dropped the voltage to 1.585, and it looks promising so far. I played around with subtimings, but I'm trying to drop voltage, and at this point I'm only getting very minor benefits with decreased stability for every timing change, so I left them alone. This is how I'm cooling my RAM; my buddy is going to design me a 3D-printed bracket for a high-flow 120 mm fan like a Noctua, but as it currently sits, this old Wraith fan keeps my RAM at a cool 35 °C max under heavy loads.


----------



## Taraquin (Aug 14, 2021)

Could it be easier with RAM / fewer WHEAs for me due to a 2-DIMM-only motherboard? The 4 DIMM slots that almost all MBs have add complexity with signaling etc. If ITX MBs and the very few mATX boards with only 2 DIMMs also have fewer WHEAs, that might be a thing. I had a few WHEAs in May before I tuned voltages and got the latest AGESA; dunno what did it, but I run VDDG CCD and VDDP as low as I can, and also keep VDDG IOD and SOC at the lowest I can before performance drops.


----------



## Mbellantoni (Aug 15, 2021)

Taraquin said:


> Could it be easier with ram\less wheas for me due to 2-dimm only motherboard? 4-dimms which almost all MBs have add complexity with signals etc. If ITX-MBs and the very few mATX-boards with only 2 dimms also have fewer wheas that might be a thing? I had a few wheas in May before I tuned voltages and got the latest agesa. Dunno what did it, but I run VDDG CCD and VDDP as low as I can, also keep VDDG IOD and SOC at the lowest I can before performance drops.


Possibly. I know EVGA's new mobo is only coming with 2 DIMM slots; I'm not sure though. I would imagine that in general a 2x16 GB setup will clock better than a 4x8, because the board isn't driving power to 4 different DIMMs.


----------



## Zyll Goliat (Aug 15, 2021)

This is a bit strange....Can someone explain to me how it's possible to have more frames rendered but actually a lower avg FPS?


----------



## Felix123BU (Aug 15, 2021)

Zyll Goliath said:


> This is a bit strange....Can someone explain to me how is it possible to have more frames rendered and actually lower avg fps?


Because even if it says Frames Per Second, it's not really frames per second. I don't really know how the so-called FPS number is calculated either, but it's not true FPS
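The benchmark doesn't document how it computes that number, but one plausible mechanism (an assumption, for illustration only): if the tool averages per-frame instantaneous FPS rather than dividing total frames by total time, the two can disagree, and a run with more frames rendered can still report a lower average. A toy sketch with made-up frame times:

```python
# Sketch: two different "average FPS" definitions over per-frame times
# (in seconds). Averaging instantaneous per-frame FPS weights fast frames
# more heavily than frames / total time does, so the results differ.

def fps_frames_over_time(frame_times):
    """Total frames divided by total elapsed time."""
    return len(frame_times) / sum(frame_times)

def fps_mean_of_instantaneous(frame_times):
    """Arithmetic mean of each frame's instantaneous FPS."""
    return sum(1 / t for t in frame_times) / len(frame_times)

# 500 fast frames at 300 FPS plus 100 slow frames at 60 FPS
run = [1 / 300] * 500 + [1 / 60] * 100

print(f"frames / total time: {fps_frames_over_time(run):.0f} FPS")      # 180
print(f"mean per-frame FPS:  {fps_mean_of_instantaneous(run):.0f} FPS")  # 260
```

Which run "wins" can flip depending on which definition the tool uses, which would explain more frames rendered alongside a lower reported average.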


----------



## Mbellantoni (Aug 15, 2021)

I went back to the drawing board for the final time. I was able to drop the voltage to 1.58 V, and luckily had someone I consider an expert overclocker take a look at my previous profile; I was able to tighten some timings down without losing stability at all. As far as game performance goes, I think I'm at the point of diminishing returns though. I've had better runs with slightly worse timings, so it's coming down to margin of error between runs right now, and I think it's just based on the CPU's and GPU's performance per run rather than the RAM timings making an actual difference. It is slightly faster in any synthetic benchmark like Geekbench or MemBench, though. I'm pretty much spent at this point lol.


----------



## Felix123BU (Aug 15, 2021)

@Mbellantoni , funny, I was doing something similar just now   

I put on the latest BIOS to see if there was any change. Not a lot, except I can now run GDM off at 2T stable, with basically the same performance as GDM on 1T. GDM off 1T does not give WHEA errors, just apps randomly closing for no good reason 

I was also thinking about all the time lost on RAM tuning, so I did a comparison: pure stock vs XMP vs 3800 CL16 tuned.

Here goes, pure stock RAM 2133 CL15







XMP Ram 3333 CL16







difference vs pure stock +35 FPS

3800 CL16 tuned as far as I could







difference vs pure stock +58 FPS
difference vs XMP +23 FPS


I could probably get a lot more FPS here with a 5950X or a 10900K plus some 3800+ CL14 tuned RAM sticks, but realistically, in the real world and at higher resolutions, the differences would be too small to really matter if you have at least a decent RAM setup with XMP turned on
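To put those deltas in relative terms, here's a trivial sketch; the baseline value is a hypothetical stand-in since only the differences were posted, and the +35/+58 numbers are the ones from the runs above:

```python
# Sketch: express the posted FPS deltas as percentage gains over stock.
# BASELINE_FPS is hypothetical; only the deltas come from the runs above.

BASELINE_FPS = 180  # hypothetical stock 2133 CL15 result, for illustration

deltas = {
    "XMP 3333 CL16": 35,
    "3800 CL16 tuned": 58,
}

for profile, delta in deltas.items():
    gain_pct = delta / BASELINE_FPS * 100
    print(f"{profile}: +{delta} FPS ({gain_pct:.1f}% over stock)")
```

Even with the baseline guessed, the point stands: the jump from pure stock to XMP is the bulk of the gain, and manual tuning adds a smaller slice on top.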


----------



## Taraquin (Aug 15, 2021)

Felix123BU said:


> @Mbellantoni , funny, I was doing something similar just now
> 
> Put the latest bios to see if there where some change, not a lot except I can now run GDM off at 2T stable with basically the same performance as GDM on 1T. GDM off 1T does not give WHEA errors, just apps randomly closing for no good reason
> 
> ...


If you could run 4000 CL16 with higher voltage like me, that could yield up to 4 ns in AIDA and up to 20 FPS in SOTTR, is my guesstimate.


----------



## Zyll Goliat (Aug 15, 2021)

I turned HT off and gained even a few more frames, from 130 to 134.....


----------



## Felix123BU (Aug 16, 2021)

Taraquin said:


> If you could run 4000 CL16 with higher voltage like me, that could yield up to 4 ns in AIDA and up to 20 FPS in SOTTR, is my guesstimate.


If it works, possibly. I have a hunch that FCLK 2000 is not stable on my CPU; I might be wrong though, so I will probably test with super loose timings to rule out RAM instability and isolate the CPU


----------



## Taraquin (Aug 16, 2021)

Felix123BU said:


> If it works, possibly. I have a hunch that FCLK 2000 is not stable on my CPU; I might be wrong though, so I will probably test with super loose timings to rule out RAM instability and isolate the CPU


Find the IF limit first  Try everything on auto with 1.35 V RAM and step up to 3866/1933, 3933/1966, etc. If you have AGESA 1.2.0.1 or newer there is a much bigger chance of hitting 4000/2000+ vs AGESA 1.1.x.x or lower. If it won't boot, try SOC at 1.14 V or a bit higher, VDDG IOD at 1.06, VDDG CCD 0.94, VDDP 0.9; set ProcODT to 28-37 and let DrvStr/CAD etc. stay at auto. GDM and 1T is also recommended.

At 1.4 V, 4000 CL17-17-16 with 300 tRFC is 100% stable, and perf is not that much worse vs 4000 CL16. But you must use 2T and GDM off for CL17.


----------



## Felix123BU (Aug 16, 2021)

Taraquin said:


> Find the IF-limit first  Try everything on auto with 1.35V ram and up to 3866/1933, 3933/1966 etc. If you have agesa 1.2.0.1 or newer there is a much bigger chance of hitting 4000/2000+ vs agesa 1.1.x.x or lower. If it won't boot try soc at 1.14V or a bit higher, vddg iod at 1.06, vddg ccd 0.94, vddp 0.9, set procodt to 28-37 and let drvstr/cad etc stay at auto. Gdm and 1t is also recommended.
> 
> At 1.4V 4000cl17-17-16 300 tRFC is 100% stable, and perf is not that much worse vs 4000cl16. But you must use 2t and gdm off for cl17.


2000 boots and gets into Windows, but produces lots of WHEA errors (Bus Interconnect). Anyway, I will probably test sometime. At one point 3600 was my max, then a couple of BIOSes later 3800 was perfect, so maybe 4000 is possible


----------



## Taraquin (Aug 16, 2021)

Felix123BU said:


> 2000 boots and gets into Windows, but produces lots of WHEA errors (Bus interconnect), anyway, will probably test sometime. At one point in time 3600 was max, then a couple of bioses later 3800 was perfect, maybe 4000 is possible


Try with the voltages I recommended, that can resolve wheas on some setups


----------



## Cheese_On_tsaot (Aug 17, 2021)

Cheese_On_tsaot said:


> View attachment 212134
> 
> 
> Something is clearly off with this benchmark, my RAM is just 200mhz slower than the 5700 XT combo.
> ...



New result with Patriot Viper 3733 MHz CL17 RAM at the XMP profile.




At 3800 16-19-19-38-57 1.45v


----------



## Taraquin (Aug 17, 2021)

Cheese_On_tsaot said:


> New result with Patriot Viper 3733mhz CL17 RAM at XMP profile.
> View attachment 212942
> 
> At 3800 16-19-19-38-57 1.45v
> ...


I'm unable to read the CPU game avg due to the low-quality image. Remember that we test at 1080p lowest


----------



## Cheese_On_tsaot (Aug 17, 2021)

Taraquin said:


> I`m unable to read the CPU game avg due to low quality image. Remember that we test at 1080p lowest


Yes, but a 2060 is the bottleneck in parts of the bench at 1080p, so 800x600 is a real CPU load.


----------



## Felix123BU (Aug 17, 2021)

Cheese_On_tsaot said:


> Yes but a 2060 is the bottleneck in some of the bench at 1080p so 800x600 is a real CPU load.


Could be, but everybody else did the test at 1080p, so in the interest of keeping this thread clean, please be so kind and re-upload the bench at 1080.

Thank you


----------



## Cheese_On_tsaot (Aug 17, 2021)

Felix123BU said:


> Could be, but everybody else did the test at 1080p, so in the interest of keeping this thread clean, please be so kind and re-upload the bench at 1080.
> 
> Thank you


CPU Performance
Rules don't apply when you are going against the point of your own thread by bringing in this illusory ruling, which removes the CPU benchmark and turns it into CPU / GPU.



Zyll Goliath said:


> I turned HT off and gain even few more frames from 130 to 134.....
> 
> View attachment 212769


You are GPU bound almost half of the time in the benchmark.

The OP only considered his own hardware when making a CPU bench thread for people with a gamut of different GPU options.

Respect would be realizing that not everyone has a GPU that is fully untapped at 1080p hitting almost 300 FPS.


Respect is respectfully setting out a thread to include those who wish to join in.

As before in this thread, the OP dismissed a submission out of his own ignorance of another user; the user was correct, not the OP.


The people here make the thread.


----------



## Felix123BU (Aug 17, 2021)

Cheese_On_tsaot said:


> CPU Performance​
> Rules don't apply when you are going against the point of your own thread by bringing in this illusionary ruling which removes the CPU benchmark and then becomes CPU / GPU.
> 
> 
> ...


  I see, so you, being the only one who did not post a 1080p lowest result, are "the people" 

I am again very kindly asking that you, the people, respect all other previous posters in this thread by providing something meaningful that we all can compare against.

You probably know that, as you said, there are multiple possible setups; some will see a bottleneck at 1080p, others at 720p, others even lower than that if their GPU is too weak.

The point of this thread is to compare 1080p lowest and discuss how to improve or what holds us back at this specific setting. If everyone posted whatever settings they felt like, this thread would have no meaning at all.


----------



## the54thvoid (Aug 17, 2021)

LQ'd the 'argument'. 

For reference, we've been here before in the benchmark threads. If an OP states a rule, and the thread runs fine, we stick to that rule. Otherwise, there is no benchmark, only random, unrelated performances. And for clarity, any bickering about why the settings are wrong will get you barred. This isn't a democracy folks; this is the rules:

*Fullscreen
 Exclusive Fullscreen
 DirectX 12
DLSS OFF
 Vsync OFF
 Resolution 1920 X 1080
Anti-Aliasing OFF

 Graphic Settings - Lowest Profile (please leave it at Lowest without any changes for the purpose of this test)*

Obey, or leave.


----------



## Zyll Goliat (Aug 17, 2021)

I mean, it's really easy to see IF your GPU is the bottleneck in this benchmark....You can always run the test at a lower resolution (800x600) and compare your results. Here I did just that for fun, and actually the results are more or less identical (in my case), as you can see below




So HT off gives me 4 more FPS (134), and the lowest res, 800x600, only 2 (132), which is almost margin of error......


----------



## Cheese_On_tsaot (Aug 17, 2021)

Lets see.

Prior result at a truly CPU-bound 800x600: 200 FPS

Now following the OP's ruling and the moderator:







When rules are rules but no logic or intelligence is to be found in the rules, we get stupid people domineering the intelligent.

You cannot fix stupid, and I fully agree; you can try to tell them and guide them, but they will likely throw it in your face with even more perceived superiority, because "Rules"

Here is my result following your rules; my previous ones make me correct and are to the point of the thread, which is a CPU benchmark. I am smart, you are not, and there is nothing you can do about it other than try to silence me 


Have a good day.


----------



## Mbellantoni (Aug 17, 2021)

What the hell did I miss LOL. Felix, I still get WHEA 19s myself, maybe once on boot, and a few on app startups depending on the app, maybe 2 or so, sometimes none. In Prime95 I'll get one on large FFTs every 10 seconds or so though. I'm right on the edge of stability with my FCLK and it kind of turns me off knowing it lol. But I have the MSI Afterburner / RivaTuner overlay combined with HWiNFO, with Windows errors added into the overlay: not once during a gaming session did I get a WHEA 19, nor in Cinebench, Geekbench, or other synthetic benchmarks, so I'm cool with it. I haven't tried the newest AGESA revision B yet, as it's still in beta for my mobo, so I'm still on A. I think eventually 2000 FCLK will be achievable by "most".

It took more time than I care to admit setting my SOC voltage and sub voltages. If my SOC was too high, more errors; too low, more errors. Same with the other voltages: if they were too low I'd have 100 WHEAs on boot but very few during stress testing, and too much would have the opposite effect. I literally had to find a sweet spot balancing the two down to near none, and it's way off from what people recommend. It's a lot of work honestly.

With that being said, I think 4000 CL14-16 is about as good as it gets on Ryzen for this test, as it probably benefits the most from the bandwidth. Even fine-tuning my timings and subtimings results in very little benefit, if any at all. I actually got higher scores at 4000 14-16-16-28 and a lower tRFC (unstable though) than I did at 14-15-14-21 with the subs tightened even more. I feel like it ended up just being margin-of-error runs from the GPU at that point.

I personally think a really good 3800 CL14 profile can be just as effective as a 4000 CL16 profile in this test. I'll probably go home today and give it a shot just by switching the speed and FCLK down a notch and leaving everything else the same.


----------



## Felix123BU (Aug 17, 2021)

Mbellantoni said:


> What the hell did i miss LOL. Felix i still get whea 19's myself maybe once on boot and i get a few on app startups depending on the app maybe 2 or so sometimes none..prime 95 ill get one on large ffts every 10 seconds or so though. Im on right on the edge of stability with my flck and it kind of turns me off knowing it lol. But i have msi afterburner/ rivatuner overlay combined with hwinfo and have windows errors added into the overlay. Not once during a gaming session did i get a whea19 nor cinebench,  geekbench and other synthetic benchmarks so im cool with it.  i havnt tried the newest agesa revision b yet as its still in beta phase for my mobo so im still on A. I think eventually 2000 flck will be achievable by " most".
> 
> It took more time than i care to admit setting my soc voltage and sub voltages. If my soc was too high more errors..too low more errors. Same with the other voltages. If they were too low id have 100 wheas on boot but very little during stress test. Too much and the opposite effect would happen. I literally had to find a sweet spot to balance the two to near none and its way off from what people recommend. Its alot of work honestly.
> 
> ...



The drama you missed 

I have been trying to see what I can do to get above 240 FPS in the benchmark, and the only thing left is to see if I can get 2000 FCLK running. I am reasonably sure my CPU can't run at 2000 fully stable (at least with the current AGESA; I have the latest for my board), but as you say, fully stable does not mean a lot if everything runs fine and you just get some WHEA errors reported that don't cause harm.

I can boot into Windows with 2000 FCLK, but things get ugly pretty quickly after that (Prime95 large FFTs is an instant reboot), regardless of the voltages applied. I even tried it uncoupled from memory and still got Bus Interconnect errors, and since I also work on this PC, a semi-stable setting is not an option.

The thing you say about voltage ranges is interesting, and if I had confidence in my RAM sticks I would probably try to fine-tune those, but I think I am pretty close to the limit of what they can achieve by themselves. The thought of getting more capable sticks was appealing, then I remembered DDR5 is just around the corner, so I will be happy with the current ones for a while longer 

"I personally think a really good 3800 CL14 profile can be just as effective as a 4000 CL16 profile in this test. I'll probably go home today and give it a shot just by switching the speed and FCLK down a notch and leaving everything else the same." - that would be interesting to see.


----------



## Mbellantoni (Aug 17, 2021)

Felix123BU said:


> The drama you missed
> 
> I have been trying to see what I can do to get above 240 FPS in the benchmark, and the only thing I could do is to see if I can get to 2000 FCLK running, I am reasonably sure my CPU cant run at 2000 fully stable (at least with current AGESA, have the latest for my board), but as you say, fully stable does not mean a lot if everything runs fine but you get some WHEA errors reported that don't cause harm.
> 
> ...


I think DDR4 is still going to be relevant for a few more years at least. I've seen some testing of DDR5-4800, but the CAS latency is 40 and it only slightly outperformed 3200 MHz DDR4 (I forget that kit's CAS latency). That's entry-level DDR5, and 5000+ is expected down the line, which will start to take over as CAS latencies drop, from what I've seen.


----------



## bissag (Aug 18, 2021)

I think my score could be better, especially the Max


----------



## Felix123BU (Aug 18, 2021)

bissag said:


> I think my score could be better specially the Max


Interesting, your CPU scores are pretty much in line with my 5800X and 3800 MHz CL16 setup. The only difference might be that I OC'd my CPU to 4.75 GHz all-core, which gives some extra FPS, but not much vs PBO.


----------



## bissag (Aug 18, 2021)

Another run with no PBO and default CPU settings. Oddly, the score is better: CPU boost maxed at 4600, while before, with PBO, boost was 4900 but the score was lower


----------



## Taraquin (Aug 18, 2021)

bissag said:


> I think my score could be better specially the Max


A few timings should be tweaked:
tRRDS 4, tRRDL 6, tWR 12, tRTP 6, tRDWR 8, tWRRD 3. I bet that can boost your CPU game avg at least 10 FPS, maybe more. What RAM voltage are you running? Also try SOC 1.12 V, VDDG IOD 1.04, VDDG CCD 0.94, VDDP 0.9.
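One thing that helps when trading timing numbers across different RAM speeds, like the tRFC values in this thread: convert cycles to nanoseconds first, since a cycle count alone means nothing without the clock. A minimal sketch (the example values are tRFC numbers mentioned above, all assumed at DDR4-4000 for illustration):

```python
# Sketch: convert a DDR4 timing from clock cycles to nanoseconds so values
# at different data rates are directly comparable. The real memory clock is
# half the data rate, so: ns = cycles / (MT/s / 2) * 1000.

def timing_ns(cycles: int, data_rate_mts: int) -> float:
    """Timing duration in ns for a cycle count at a given data rate."""
    return cycles / (data_rate_mts / 2) * 1000

if __name__ == "__main__":
    for trfc in (252, 282, 300):
        print(f"tRFC {trfc} @ DDR4-4000 = {timing_ns(trfc, 4000):.0f} ns")
```

The same cycle count at a lower data rate lasts longer in real time, which is why a "loose" tRFC at 4000 can still be tighter in nanoseconds than a "tight" one at 3600.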


----------



## Mbellantoni (Aug 18, 2021)

bissag said:


> another run no pbo, default cpu settings, wondering the score is better, and cpu boost was maxed to 4600, before with pbo boost was 4900 but score is lower


There's a sweet spot with PBO, despite what people think. Cranking the max boost has serious diminishing returns, and the 5800X is already a hot CPU. PBO tends to overvolt a lot despite the curve optimizer. For example, I keep my 5600X at 4700, and on the title screen of SOTTR I sit around 1.356 V; when I increase my CPU speed to 4725 it bumps the voltage by about 30 mV on the title screen. Seriously, 30 mV for 25 MHz. 4800 puts me at like 1.44 V under load. It's stupid honestly.

My CPU runs best with PBO slightly above the stock boost clock: 4700 (4650 stock).

My PPT values are manually entered rather than letting the motherboard control them or leaving them uncapped: PPT 125 W (88 W stock), TDC 75 A (65 A), EDC 105 A (90 A). As you can see, I keep the values not too far from stock, as it leads to better boost behavior for some reason. Capping your EDC does bug AIDA64's L3 cache test, although the performance loss isn't actually there.

This way, through the curve optimizer, PPT values, and max boost clock, you can achieve the best result. It really is a balancing act that, in my opinion, takes longer to dial in than a static OC lol


----------



## Felix123BU (Aug 18, 2021)

Mbellantoni said:


> Theres a sweet spot with pbo. Despite what people think. Cranking the max boost has serious diminishing returns. The 5800x is already a hot cpu. Pbo tends to overvolt alot despite the curve optimizer. For example i keep my 5600x at 4700 and on the title screen of sotr i sit around 1.356v. When i increase my cpu speed to 4725 it increases the voltage 3 volts on the title screen. Seriously 3 volts for 25mhz. 4800 puts me at like 1.44 volts under load. Its stupid honestly.
> 
> My cpu runs best with pbo slightly above stock boost clock- 4700(4650 stock)
> 
> ...


Concur, PBO in games gives limited extra performance, at least in modern games that use multiple cores. PBO can be great in single-threaded apps like emulators that use one core, but it gives a very small boost to modern gaming performance, because the multiple cores games use are not boosted as much as a single core is with PBO. You can get better game perf with a high all-core overclock; I tested that in a couple of games, including SOTTR.

The reason for the diminishing returns with PBO is that one core can reach, let's say, 5 GHz, but when multiple cores are loaded the power and thermal limits kick in, and even though the cores might still boost a little over stock, the difference is too small to be meaningful. And yeah, AMD's auto PBO voltages are very trigger happy: PBO gives the CPU 1.37 V for a close-to-4.7 GHz multi-core boost, while I run a manual all-core OC at 4.7 GHz and 1.25 V, 100% stable for 2 months.

The best setting for unlocking multi-core boost is raising PPT, but on a 5800X that gets you into super hot territory, and you need a very beefy cooling solution to keep it from thermal throttling, i.e. reducing boost due to heat. With a high enough PPT you can potentially get close to an all-core OC, but that would also put you into overheating territory.
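The diminishing-returns argument above can be sketched numerically. This is a toy model, not AMD's actual boost algorithm: the V/f curve and the scaling constants below are made-up but Zen 3-shaped, purely to show how a fixed PPT cap turns extra watts into ever-smaller all-core clock gains.

```python
# Toy model (not AMD's algorithm): per-core power scales roughly with V^2 * f,
# and the voltage needed rises with frequency, so package power grows much
# faster than the clock. Under a PPT cap, extra watts buy fewer and fewer MHz.

def required_voltage(freq_ghz):
    """Assumed V/f curve, loosely Zen 3-shaped; illustrative only."""
    return 0.9 + 0.25 * (freq_ghz - 3.5) ** 2

def package_power(freq_ghz, cores=8, k=3.0, soc_w=20.0):
    """Approximate package power in watts (k and soc_w are arbitrary constants)."""
    v = required_voltage(freq_ghz)
    return cores * k * v * v * freq_ghz + soc_w

def max_allcore_clock(ppt_w, cores=8):
    """Highest 25 MHz step whose modeled power still fits under the PPT limit."""
    f = 3.0
    while package_power(f + 0.025, cores) <= ppt_w:
        f += 0.025
    return round(f, 3)

for ppt in (125, 142, 180):
    print(f"{ppt} W PPT -> ~{max_allcore_clock(ppt)} GHz all-core")
```

In this model the first 17 W over 125 W buys noticeably more clock per watt than the next 38 W does, which is the same shape of curve people see when raising PPT on real chips.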

And yeah, the 5800X is a really hot CPU. I have one, hottest chip I ever used, and I used to have a 4790K which, without a delid, could fry eggs with ease.
The 5800X could cook a whole menu if not cooled properly


----------



## Mbellantoni (Aug 18, 2021)

So I went back in one more time. Those random WHEAs were starting to bother me lol. I'm a stability guy and have always been into stability more than cranking it. I went back to 3800 but retained the timings from my 4000 profile. Unfortunately I couldn't tighten my subtimings any further, even at the same voltage as my 4000 profile, despite dropping 200 MHz, though I was able to lower the voltage. They wouldn't scale within the voltage range I consider my max, so I stopped trying. I lost about 3-4 fps, and my "GPU bound" went from 25% to 20%, so I guess we'll call that a 5 percent loss in performance. But hey, these framerates are way above what my monitor can support anyway. My 4000 profile will stay saved in my BIOS, just waiting for the day an AGESA update makes those WHEA 19s disappear. And it's still a pretty large leap over XMP at 3600CL16 (27 fps). Overall I learned the importance of RAM speed; I feel it is just as important as selecting a CPU and GPU when you have high refresh rates / competitive esports in mind.


----------



## Toss (Aug 19, 2021)

With cheapest 3200 Mhz 64 GB RAM.


----------



## Taraquin (Aug 19, 2021)

Mbellantoni said:


> There's a sweet spot with PBO. Despite what people think, cranking the max boost has serious diminishing returns. The 5800X is already a hot CPU, and PBO tends to overvolt a lot despite the Curve Optimizer. For example, I keep my 5600X at 4700 and on the title screen of SOTTR I sit around 1.356 V. When I increase my CPU speed to 4725 it raises the voltage by about 0.03 V on the title screen. Seriously, 30 mV for 25 MHz. 4800 puts me at around 1.44 V under load. It's stupid, honestly.
> 
> My cpu runs best with pbo slightly above stock boost clock- 4700(4650 stock)
> 
> ...


PBO without the Curve Optimizer is not worth it, in my opinion. Too much heat due to high voltages and currents.



Mbellantoni said:


> So I went back in one more time. Those random WHEAs were starting to bother me lol. I'm a stability guy and have always been into stability more than cranking it. I went back to 3800 but retained the timings from my 4000 profile. Unfortunately I couldn't tighten my subtimings any further, even at the same voltage as my 4000 profile, despite dropping 200 MHz, though I was able to lower the voltage. They wouldn't scale within the voltage range I consider my max, so I stopped trying. I lost about 3-4 fps, and my "GPU bound" went from 25% to 20%, so I guess we'll call that a 5 percent loss in performance. But hey, these framerates are way above what my monitor can support anyway. My 4000 profile will stay saved in my BIOS, just waiting for the day an AGESA update makes those WHEA 19s disappear. And it's still a pretty large leap over XMP at 3600CL16 (27 fps). Overall I learned the importance of RAM speed; I feel it is just as important as selecting a CPU and GPU when you have high refresh rates / competitive esports in mind.


I lost 6 fps going from 4000CL16 to 3800CL15 with generally tighter timings, RAM voltage being equal.


Toss said:


> View attachment 213221
> 
> With cheapest 3200 Mhz 64 GB RAM.


Good result! If you want a bit more performance, post your ZenTimings so we can help you.


----------



## Felix123BU (Aug 19, 2021)

Toss said:


> View attachment 213221
> 
> With cheapest 3200 Mhz 64 GB RAM.


Best proof of moar cores moar FPS in this game: we have the exact same GPU, and I get 237 FPS max with 3800CL16 medium-tuned RAM, while you get 253 FPS with stock 3200 RAM.
The 5950X is for some reason a gaming beast


----------



## Zyll Goliat (Aug 19, 2021)

Felix123BU said:


> Best proof of moar cores moar FPS in this game: we have the exact same GPU, and I get 237 FPS max with 3800CL16 medium-tuned RAM, while you get 253 FPS with stock 3200 RAM.
> The 5950X is for some reason a gaming beast


Honestly, I do not think this difference in performance is due to more cores/threads at this point, because 8c/16t is certainly more than enough for this game/benchmark. More likely the performance gap is down to the L3 cache difference (5800X = 32 MB vs 5950X = 64 MB) and possibly the higher turbo frequency of the 5950X.......


----------



## Felix123BU (Aug 19, 2021)

Zyll Goliath said:


> Honestly, I do not think this difference in performance is due to more cores/threads at this point, because 8c/16t is certainly more than enough for this game/benchmark. More likely the performance gap is down to the L3 cache difference (5800X = 32 MB vs 5950X = 64 MB) and possibly the higher turbo frequency of the 5950X.......


You could well be right. I was referring to the fact that for some reason you get an FPS jump from, let's say, a 5600X or 5800X to a 5900X or 5950X by itself, and extra cache could be exactly that. Memory write speeds are also double on the dual-CCD Zen 3s, though I'm not sure how much that would impact this benchmark; extra cache seems more likely, especially at low resolutions.


----------



## Mbellantoni (Aug 19, 2021)

Felix123BU said:


> You could well be right. I was referring to the fact that for some reason you get an FPS jump from, let's say, a 5600X or 5800X to a 5900X or 5950X by itself, and extra cache could be exactly that. Memory write speeds are also double on the dual-CCD Zen 3s, though I'm not sure how much that would impact this benchmark; extra cache seems more likely, especially at low resolutions.


Everything about the 5950X is superior to the 5600X and 5800X. I was checking out AIDA results from 5950Xs, and even at 3600 with looser timings they had more bandwidth than I could achieve, and lower latency. Out of the box it has better single-core performance than I can achieve overclocked. Higher cache speeds too. The only things in its class are the 5900X and its Intel counterparts. It's also significantly more expensive, as we all know.


----------



## Felix123BU (Aug 19, 2021)

Mbellantoni said:


> Everything about the 5950X is superior to the 5600X and 5800X. I was checking out AIDA results from 5950Xs, and even at 3600 with looser timings they had more bandwidth than I could achieve, and lower latency. Out of the box it has better single-core performance than I can achieve overclocked. Higher cache speeds too. The only things in its class are the 5900X and its Intel counterparts. It's also significantly more expensive, as we all know.


Yup. The good thing is those pluses for the 5950X and 5900X are only a "game changer" in very niche scenarios, like this test we are playing with, which is anything but realistic when it comes to the settings people would actually game at. In most gaming scenarios the difference between them is minimal.
But in the purest sense, they are superior CPUs

Oh, and one more important thing to note, taken from AnandTech's measurements, wattage per core:





Says it right there: the 5950X and 5900X are the good bins, the 5800X and 5600X the crappy bins, with a special bad-bin prize for the 5800X at more than twice the power used per core vs the 5950X


----------



## bissag (Aug 19, 2021)

This is my best so far, but I still couldn't pass 350 max


----------



## Taraquin (Aug 19, 2021)

bissag said:


> This is my best so far, but I still couldn't pass 350 max


You improved the CPU score by 11 fps avg, that's good. What voltage are you running on the RAM? You could try tRP 15, tRC 46, tRFC 276. Or, if you run 1.45 V on the RAM: disable gear-down mode, set 2T, CL 15, tRCDRD 15, tRP 15, tRC 45, tRFC 270.


----------



## Mbellantoni (Aug 20, 2021)

bissag said:


> This is my best so far, but I still couldn't pass 350 max


Looks like your GPU is running out of juice. Potato settings and that 5900X is still only 50% bottlenecked. You're almost there though


----------



## bissag (Aug 21, 2021)

Mbellantoni said:


> Looks like your GPU is running out of juice. Potato settings and that 5900X is still only 50% bottlenecked. You're almost there though


I don't know what's wrong with my GPU. It's on stock settings, since it will crash if I change the clock/memory at all, so I guess it's dying slowly.
I'm trying to improve my latency, and unfortunately I'm getting the same with CL14, so I'm not sure what's wrong. What do you think, guys?


----------



## phanbuey (Aug 21, 2021)

bissag said:


> I don't know what's wrong with my GPU. It's on stock settings, since it will crash if I change the clock/memory at all, so I guess it's dying slowly.
> I'm trying to improve my latency, and unfortunately I'm getting the same with CL14, so I'm not sure what's wrong. What do you think, guys?


What GPU is it?


----------



## bissag (Aug 21, 2021)

phanbuey said:


> What GPU is it?


GTX 1080 Ti


----------



## tabascosauz (Aug 21, 2021)

Zyll Goliath said:


> Honestly I do not think this difference in performance is due the more cores/threads at this point because 8c/16t is certainly more than enough for this game/benchmark more likely this performance gap its because L3 cache difference( 5800X=32768Kb Vs 5950X=65536Kb)and possibly higher Turbo frequency speed on 5950X .......



There is no cache advantage: it's 32MB of L3 per chiplet, and a core can't just access the other chiplet's L3.



bissag said:


> I don't know what wrong with my GPU, it is on stock settings as it will crash if I just change clock/memory so I guess it is dying slowly.
> I am trying to improve my latency and unfortunately I am getting same with cl14 so not sure what's wrong. What do you think guys?



AIDA clock speeds can vary a lot, especially if your CPU OC or Curve Optimizer settings are unstable. The latency number regularly flies all over the place on Zen 3; it's unpredictable.

As for the 3800CL16/CL14, those results straight up look unstable. AIDA does all sorts of weird shit if not stable. If you're running 4/4/16/4/8/10 for RRDS/RRDL/FAW/WTRS/WTRL/WR, you will usually need to increase VDIMM compared to if you were running 4/6/16/4/12/12. And that's assuming you actually stability tested the 3800CL16 profile.



Felix123BU said:


> View attachment 213297



The per-core power doesn't really correlate with silicon quality at all. I've seen plenty of well-binned 5600X/5800X chips and shit-tier 5900X/5950X ones (mine is somewhere in the middle of mediocrity). You can't just extrapolate silicon quality from their nT W/core metric; that's for all-core loads, which purely depends on how many watts can be run through the CPU under the PPT limit. A stock 5950X sits below 4.0 GHz on something like 60 W per chiplet. A stock 5800X runs a blistering 4.4-4.6 GHz all-core on something like 125-130 W in a single chiplet. Chop 20/30/40 W off a 5800X's stock power limit and chances are it'll get better performance while pulling significantly less per-core power.

The 7.85 W figure is also BS for a stock 5900X; no idea what "test" they ran (though it was launch day, so probably firmware issues). You'll easily see 8-10 W per core on just about any all-core SSE stress test, unless the test for some reason isn't maxing out the PPT envelope (Cinebench seems to do this).
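As a back-of-envelope check on why an nT W/core table says little about binning: under a shared package limit, per-core wattage is mostly just the power budget divided by core count. The 142 W stock PPT and the ~20 W SoC/IO-die share below are assumptions for illustration, not measured values.

```python
# Back-of-envelope: "watts per core" in an all-core test mostly reflects core
# count under a shared PPT limit, not silicon quality. Assumed numbers:
# 142 W stock PPT for these SKUs and ~20 W for the IO die / SoC.

PPT = {"5800X": 142.0, "5900X": 142.0, "5950X": 142.0}  # assumed stock PPT, W
CORES = {"5800X": 8, "5900X": 12, "5950X": 16}
SOC_W = 20.0  # assumed non-core share of package power

def watts_per_core(sku):
    """Per-core power budget once the chip sits at its PPT wall."""
    return (PPT[sku] - SOC_W) / CORES[sku]

for sku in PPT:
    print(f"{sku}: ~{watts_per_core(sku):.1f} W/core at the PPT wall")
```

This reproduces the roughly 2:1 gap between the 5800X and 5950X from budget arithmetic alone, with no silicon quality involved.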


----------



## bissag (Aug 21, 2021)

The 3800C16 profile was tested 25 cycles TM5 stable.
The 3800C14 hasn't been tested yet; it was just a for-fun run, but the results weren't as expected.


----------



## tabascosauz (Aug 21, 2021)

I'm.....not sure exactly what this benchmark is testing? This is HWInfo's data for the duration of the run.














4.45 GHz @ 64°C and 80% core usage........? I play indie games and cross-platform mobile games that tax the 5900X harder than this, and regularly run Core 0/1 up to 4.9 GHz. I've not seen even mildly CPU-bound games draw less than 13 W on Core 0 and Core 1. This just looks like it was running on the GPU the whole time, not even 30% CPU-bound like it claims.

Anyway, I ran it with the requested settings, but it seems like it isn't a very reliable CPU benchmark......not unless everybody has a 6800XT or better, it seems. I ran it a couple of times and it came out to the same 213 fps result.


----------



## harm9963 (Aug 21, 2021)

Ram at 3800 14-14-14-14


----------



## Mbellantoni (Aug 21, 2021)

tabascosauz said:


> I'm.....not sure exactly what this benchmark is testing? This is HWInfo's data for the duration of the run.
> 
> View attachment 213521View attachment 213520View attachment 213522View attachment 213519
> 
> ...


I think most of us are using this benchmark as a baseline to find out how much of a raw performance increase different speeds / manually overclocked RAM profiles give over stock XMP, just for this particular benchmark. Everyone's results vary with different hardware, so it's not really a comparison between each other. Most of us have been going back and forth seeing what works and what doesn't, and giving each other tips on how we might improve.

It also gives a rough idea of performance increases in other games, granted they are high-fps and CPU-heavy. It's a great way to share knowledge, push our overclocks further, and learn from others.


----------



## Taraquin (Aug 21, 2021)

bissag said:


> The 3800c16 profile was tested 25 cycles tm5 stable.
> The 3800c14 not tested yet, it was just for fun test but the results wasnt as expected.


You might need a bit higher IOD voltage. Try upping it by 0.02 V and see what happens.


----------



## Mbellantoni (Aug 21, 2021)

@Taraquin I got around to taking your advice and switching to 2T instead of GDM 1T. I wasn't convinced at all at first, but I was proven wrong. It took an extremely minor hit to bandwidth, but my latency is consistently around 52.4-52.6 ns now. Even that small change translated to this benchmark, and I consistently got my best runs. To make sure it wasn't a fluke I switched back to GDM 1T and got lower scores. This is at 4000/2000, CL 14-15-14-21.

I also experimented with turning off C-states and messed around with the P-states to see if I could get my latency lower. All it really did was smooth out the latency jitter between AIDA tests, at the cost of bandwidth, and the "sleep voltage" on my CPU was increased. So my CPU and FCLK wake a little faster from idle, pretty much, but it makes no difference at all during gaming, since the CPU is already awake at that point. Even then, two or three quick latency or read tests before running a full AIDA benchmark are enough to wake the CPU up completely and get your true results. It might help in certain scenarios, though I couldn't tell you which lol.


----------



## bissag (Aug 21, 2021)

harm9963 said:


> Ram at 3800 14-14-14-14


Can you share your ZenTimings settings?



Taraquin said:


> You might need a bit higher IOD voltage. Try upping it by 0.02 V and see what happens.


IOD voltage is already at 1.05; should I try adding more?


----------



## Zyll Goliat (Aug 21, 2021)

tabascosauz said:


> There is no cache advantage: it's 32MB of L3 per chiplet, and a core can't just access the other chiplet's L3.


L1 & L2 cache are dedicated per core and not shared between the cores; L3, as far as I know, is shared between the cores......




Now, I'm not sure how this works with the new Ryzen chips, but I assume it's pretty much the same.....


----------



## Taraquin (Aug 21, 2021)

bissag said:


> Can you share zenTimings settings?
> 
> 
> iod voltage already at 1.05, I try to add more?


The same latency at tighter timings can sometimes be caused by too little voltage. For instance, I get 3 ns higher latency in AIDA with the IOD voltage at 1.02; if I raise it to 1.03, everything is okay. You have 32 GB of RAM, which might require a bit more voltage than 16 GB does. Have you tried the DRAM Calculator test?


----------



## tabascosauz (Aug 21, 2021)

Zyll Goliath said:


> L1 cache is dedicated per core and not shared between the cores; L3, as far as I know, is shared between the cores......
> 
> Now, I'm not sure how this works with the new Ryzen chips, but I assume it's pretty much the same.....



No, I don't think they're the same in the way you think. On Ryzen 5000 they are literally separate dies under the heatspreader: there is 32MB of L3 per chiplet in the 5900X/5950X, but that doesn't mean a core gets access to 64MB. It doesn't get to go outside its own chiplet, travel across the substrate (on an IF link that doesn't even exist, because the chiplets are only linked to the I/O die), and dip into the other chiplet's L3.


----------



## Zyll Goliat (Aug 21, 2021)

tabascosauz said:


> No I don't think they are the same in the way you think. On Ryzen 5000 they are literally separate dies under the heatspreader - there are 32MB L3 per chiplet in 5900X/5950X, but that doesn't mean a core gets access to 64MB. It doesn't get to go outside of its own chiplet, travel across the substrate (on an IF link that doesn't even exist, because the chiplets are only themselves linked to the I/O die), and go into the other chiplet's L3.


Private L1/L2 caches and a shared L3 is hardly the only way to design a cache hierarchy, but it’s a common approach that multiple vendors have adopted. Giving each individual core a dedicated L1 and L2 cuts access latencies and reduces the chance of cache contention — meaning two different cores won’t overwrite vital data that the other put in a location in favor of their own workload. The common L3 cache is slower but much larger, which means it can store data for all the cores at once. Sophisticated algorithms are used to ensure that Core 0 tends to store information closest to itself, while Core 7 across the die also puts necessary data closer to itself.
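The hierarchy described above can be made visible from userland with a pointer-chase: time random hops through working sets that fit in L1/L2, in L3, and in neither. A rough Python sketch (interpreter overhead blunts the steps compared to C, and the absolute numbers are machine-dependent, but the upward trend with working-set size is usually still visible):

```python
# Pointer-chase latency sketch: hop through a random cyclic permutation so the
# hardware prefetcher can't help, and time the average cost per hop as the
# working set outgrows each cache level.
import random
import time

def chase_ns(n_slots, hops=200_000):
    """Average nanoseconds per hop through a random cycle over n_slots ints."""
    perm = list(range(n_slots))
    random.shuffle(perm)            # random visit order defeats prefetching
    nxt = [0] * n_slots
    for i in range(n_slots):        # link into one cycle covering every slot
        nxt[perm[i]] = perm[(i + 1) % n_slots]
    i = 0
    t0 = time.perf_counter()
    for _ in range(hops):
        i = nxt[i]
    return (time.perf_counter() - t0) / hops * 1e9

for slots in (1_000, 100_000, 2_000_000):   # ~KBs up to tens of MB
    print(f"{slots:>9} slots: {chase_ns(slots):.1f} ns/hop")
```

On a typical desktop the smallest set stays in L1/L2, the middle one lands in L3, and the largest spills to DRAM, which is exactly the private-fast/shared-slow structure the quoted paragraph describes.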


----------



## Selaya (Aug 21, 2021)

Yes, but look at the multi-CCD parts like the 5900X and 5950X: these have two CCDs, each with 32MiB of L3, and cores 0-7 on CCD0 cannot access CCD1's other 32MiB of L3.


----------



## Zyll Goliat (Aug 21, 2021)

Selaya said:


> Yes, but look at the multi-CCD parts like the 5900X and 5950X: these have two CCDs, each with 32MiB of L3, and cores 0-7 on CCD0 cannot access CCD1's other 32MiB of L3.


Well, as I said, I'm not so sure how all this goes with these new Ryzen CPUs, but this is what I found: "AMD’s Ryzen processors based on the Zen, Zen+, and Zen 2 cores all share a common L3, but the structure of AMD’s CCX modules left the CPU functioning more like it had 2x8MB L3 caches, one for each CCX cluster, as opposed to one large, unified L3 cache like a standard Intel CPU"


----------



## Selaya (Aug 21, 2021)

I'm not entirely sure about Zen(+) (which has 4-core CCXs, like Zen 2), but I know that Zen 2 caps out at 16MiB of L3 per CCX (up to 4x for a grand total of 64 on the 4-CCX parts, but only ever 16 usable per core).


----------



## tabascosauz (Aug 21, 2021)

Zyll Goliath said:


> Well as I said I am not so sure how all this goes with this new Ryzen CPU's but this is what I found:"AMD’s Ryzen processors based on the Zen, Zen+, and Zen 2 cores all share a common L3, but the structure of AMD’s CCX modules left the CPU functioning more like it had 2x8MB L3 caches, one for each CCX cluster, as opposed to one large, unified L3 cache like a standard Intel CPU"



Yes, in the past the L3 was further subdivided by CCX (because there were 2 CCXs per chiplet). Zen 3 no longer has that problem, because now 1 CCX = 1 chiplet.

But you don't have to overthink it. Half of the cores are literally not on the same piece of silicon as the other half; simple as that. A core on one chiplet can access only the L3 on its own piece of silicon. The two chiplets look physically close, but they aren't directly connected to each other.





Mbellantoni said:


> I think most of us are using this benchmark as a baseline to find out how much of a raw performance increase different speeds / manually overclocked RAM profiles give over stock XMP, just for this particular benchmark. Everyone's results vary with different hardware, so it's not really a comparison between each other. Most of us have been going back and forth seeing what works and what doesn't, and giving each other tips on how we might improve.
> 
> It also gives a rough idea of performance increases in other games, granted they are high-fps and CPU-heavy. It's a great way to share knowledge, push our overclocks further, and learn from others.



I completely get what you mean, but the point is that SoTTR is supposed to be a CPU-heavy test, thus fast memory should help. 4.6 GHz at 10 W per core on two cores isn't CPU-heavy by any stretch of the imagination. That's the kind of temps/clocks/volts/power I'd expect while working in Premiere or Photoshop, not anything demanding, maxed-out, or CPU-limited.

If that's how the benchmark really runs, then forget memory profiles; a 4.5 or 4.6 all-core OC would absolutely dominate, since the PB2 boost algorithm is literally sleeping on the job.

Are you (or anyone else) able to share some HWInfo logging during the test? Specifically effective clocks, power, and usage. From what I can tell, running HWInfo during the run has a negligible impact on performance. Curious to see if it's something on my end.

For the record, this isn't a dig at the OP's chosen graphics settings. 800x600 makes very little difference to CPU behaviour.


----------



## Zyll Goliat (Aug 21, 2021)

tabascosauz said:


> Yes, in the past the L3 was further subdivided by CCX (because there were 2 CCXs per chiplet). Zen 3 no longer has that problem, because now 1 CCX = 1 chiplet.
> 
> But you don't have to overthink it. Half of the cores are literally not on the same piece of silicon as the other half; simple as that. A core on one chiplet can access only the L3 on its own piece of silicon. The two chiplets look physically close, but they aren't directly connected to each other.
> 
> View attachment 213568


So in short, on the 5950X this means that 8 cores share 32MB of L3 and the other 8 cores share another 32MB, because the 64MB of L3 cache is split across two chiplets, right?


----------



## tabascosauz (Aug 21, 2021)

Zyll Goliath said:


> So in short, on the 5950X this means that 8 cores share 32MB of L3 and the other 8 cores share another 32MB, because the 64MB of L3 cache is split across two chiplets, right?



Just think of the 5950X as two 5800Xs glued together, and the 5900X as two 5600Xs glued together. Well, not to each other, but to the same substrate, connected to the same IO die. Obviously a rough concept, but you get the idea.

CPU-Z has the right idea: it lists the L3 as 2 x 32MB, not 64MB.
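The "2 x 32MB, not 64MB" view can be written down as a tiny topology sketch: group cores by which L3 instance they can actually reach. The core-to-L3 mapping below is an assumed 5950X-style layout (on Linux the real mapping is readable from /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list); the point is just that the grouping yields two disjoint pools.

```python
# Group cores by reachable L3 instance. Mapping is an assumed 5950X-style
# topology (cores 0-7 on CCD0's L3, cores 8-15 on CCD1's), not read from
# real hardware.

def l3_domains(core_to_l3):
    """Map each L3 instance id to the set of cores that share it."""
    domains = {}
    for core, l3 in core_to_l3.items():
        domains.setdefault(l3, set()).add(core)
    return domains

topo_5950x = {core: core // 8 for core in range(16)}  # assumed layout
doms = l3_domains(topo_5950x)
assert doms[0] == set(range(8)) and doms[1] == set(range(8, 16))
# Two independent 32MB pools; no single core ever sees 64MB.
```

The same function run on a single-CCD mapping (all 16 threads pointing at L3 id 0) would return one pool, which is exactly the 5800X case.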


----------



## bissag (Aug 21, 2021)

Taraquin said:


> The same latency at tighter timings can sometimes be caused by too little voltage. For instance, I get 3 ns higher latency in AIDA with the IOD voltage at 1.02; if I raise it to 1.03, everything is okay. You have 32 GB of RAM, which might require a bit more voltage than 16 GB does. Have you tried the DRAM Calculator test?


I tried IOD up to 1.075 and CCD up to 1.080, no change; I can't break 55 ns with that, while I should be getting around 53 ns.
I haven't run the stability test yet, just benchmarks.


----------



## Zyll Goliat (Aug 21, 2021)

tabascosauz said:


> Just think of the 5950X as two 5800Xs glued together, and the 5900X as two 5600Xs glued together. Well, not to each other, but to the same substrate, connected to the same IO die. Obviously a rough concept, but you get the idea.
> 
> CPU-Z has the right idea: it lists the L3 as 2 x 32MB, not 64MB.


Yeah, that's what I thought..... In short it's just split in two, but it's still shared between the cores (2x8c), which brings us back to the beginning, when I said the 5950X is probably faster because it has more cache (plus a higher turbo), not because it has more cores, since the other CPU also has 8c/16t. Even if that cache is split in two, it doesn't change the fact that more L3 cache is always better, especially when it comes to gaming....
Here, watch this


----------



## tabascosauz (Aug 21, 2021)

Zyll Goliath said:


> Yeah, that's what I thought..... In short it's just split in two, but it's still shared between the cores, which brings us back to the beginning, when I said the 5950X is probably faster because it has more cache (plus a higher turbo). Even if that cache is split into two pools, it doesn't change the fact that more L3 cache is always better when it comes to gaming....



What?

5600X: Core 0 can access 32MB of L3
5900X: Core 0 can access 32MB of L3
End of story: there is no "more" L3 cache available to any given core.

There's no magic IF connection between the two chiplets; they function independently. In fact, so independently that it often feels like 2 x 5600X rather than 1 x 5900X. Windows likes to use Core 0/1 on mine for its usual demanding tasks (opening apps, gaming, ST benchmarking), but it also loves to use Core 7 (on the other chiplet) for 95% of its background processing.


----------



## Felix123BU (Aug 21, 2021)

tabascosauz said:


> What?
> 
> 5600X: Core 0 can access 32MB of L3
> 5900X: Core 0 can access 32MB of L3
> ...


Yet there are some measurable cache differences between the 5600X/5800X and the 5900X/5950X.
See the screenshots below from the AIDA Cache and Memory benchmark, taken from this thread but repeatable across other threads' findings:

5600X                                                             5800x                                                              5950x









Now, I am no expert on CPU architecture, but those doubled cache-speed numbers jump out, and in certain situations, like this test, there is a noticeable gain between them. So my humble opinion, which could be wrong, is that there is a difference in cache behavior between single- and dual-CCD Ryzen 5000 CPUs, which in certain scenarios can give an uplift.

My best example is also found in this thread: two very similar systems, both with a 6800XT, one with 3800 MHz tuned RAM on a 5800X, the other with stock 3200 MHz RAM on a 5950X. Even though the 5950X is paired with slower RAM and worse latency, it gains ~20 FPS over the 5800X at more CPU-intensive game settings.


----------



## Det0x (Aug 21, 2021)

This benchmark uses more than 8 threads, especially in the last city scene, so dual-CCD Zen 3 CPUs score higher than the 5600X/5800X with their single CCDs thanks to more cores, not more cache (this is one of the few game benchmarks that actually uses that many threads).
Let me leave you with one of my old testing screenshots, which clearly shows the thread usage in the city scene:



On a single-CCD Zen 3 (8 cores) I can get around ~260-280 CPU fps average, while above 300 with a dual CCD at the same memory settings and CPU clock speed.

@tabascosauz
Your settings are just plain bad; that's why you're scoring low. Oh, and if you want to use HWInfo to check what's going on, you need to up the polling rate / understand the numbers
(it can't keep up with the changing threads and clock speeds)

@Felix123BU

My AIDA 5950X screenshot above?
Win11 makes the L3 look bugged in AIDA... If you want one from Win10 you can use this one:

Or if you want a faster Win11 screenshot:

But do note that my latency numbers are not normal for a dual-CCD 5900X/5950X (I would say the average is 55-58 ns latency in AIDA)


----------



## Selaya (Aug 21, 2021)

tabascosauz said:


> Just think of 5950X as two 5800X glued together, and 5900X as two 5600X glued together. Well, not together, but to the same substrate and connected to the same IO die. Obviously a rough concept but you get the idea
> 
> CPU-Z has the right idea, it lists L3 as 2 x 32MB, not 64MB.


Wait, isn't the 5900X 8+4...?


----------



## Felix123BU (Aug 21, 2021)

Selaya said:


> Wait isn't the 5900X 8+4 ... ?


I don't think anybody knows exactly whether it's 8+4 or 6+6; initial reports said 6+6. In theory 8+4 would be better, but who knows; it's probably selected based on yields and CCD characteristics. Though I am not sure how such a random CCD selection process would work, and it's not like AMD hasn't delivered 5800Xs with 2 CCDs, one of them disabled.


----------



## Zyll Goliat (Aug 21, 2021)

Felix123BU said:


> Yet there are some measurable cache differences between the 5600X/5800X and the 5900X/5950X.
> See the screenshots below from the AIDA Cache and Memory benchmark, taken from this thread but repeatable across other threads' findings:
> 
> 5600X                                                             5800x                                                              5950x
> ...


I don't know, but it seems like he still believes those cores have dedicated L3 per core (like L1/L2), and isn't getting that the L3 cache is actually a pool of fast memory shared between cores; even if it's split in half, you still have 2x32MB, which certainly adds some latency, but you have more cache overall....
P.S. In that video I posted above you can clearly, without a doubt, see the advantage of more cache.... Sure, it was an Intel CPU, and I doubt the difference is as big with these Ryzens; if I had to guess I'd say the gains are not as large as with Intel, but nevertheless you will still see the advantage of more L3 cache....


----------



## Felix123BU (Aug 21, 2021)

Det0x said:


> This benchmark uses more than 8 threads, especially in the last city scene, so dual-CCD Zen 3 CPUs score higher than the 5600X/5800X with their single CCDs thanks to more cores, not more cache (this is one of the few game benchmarks that actually uses that many threads).
> Let me leave you with one of my old testing screenshots, which clearly shows the thread usage in the city scene:
> View attachment 213588
> On a single-CCD Zen 3 I can get around ~260-280 CPU fps average, while above 300 with a dual CCD at the same memory settings and CPU clock speed.
> ...


Yes, it was from one of your AIDA tests


----------



## Taraquin (Aug 21, 2021)

Mbellantoni said:


> View attachment 213535
> @Taraquin I got around to taking your advice and switching to 2T instead of GDM 1T. I wasn't convinced at all at first, but I was proven wrong. It took an extremely minor hit to bandwidth, but my latency is consistently around 52.4-52.6 ns now. Even that small change translated to this benchmark, and I consistently got my best runs. To make sure it wasn't a fluke I switched back to GDM 1T and got lower scores. This is at 4000/2000, CL 14-15-14-21.
> 
> I also experimented with turning off C-states and messed around with the P-states to see if I could get my latency lower. All it really did was smooth out the latency jitter between AIDA tests, at the cost of bandwidth, and the "sleep voltage" on my CPU was increased. So my CPU and FCLK wake a little faster from idle, pretty much, but it makes no difference at all during gaming, since the CPU is already awake at that point. Even then, two or three quick latency or read tests before running a full AIDA benchmark are enough to wake the CPU up completely and get your true results. It might help in certain scenarios, though I couldn't tell you which lol.


2T on Ryzen 5000 seems to be generally faster, BUT stabilizing it is much harder than 1T GDM. Currently I'm running flat 16-32-48 with tRFC 288 on 1T, and can do that at 1.44 V. On 2T I get some errors in TM5 after a while if I try that. If you get 2T stable, go for it, but GDM is a nice fallback with almost the same performance and better stability.
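For the recurring 4000CL16 vs 3800CL15 comparisons in this thread, it helps to convert a timing from clock cycles into nanoseconds at the actual memory clock (half the MT/s rate on DDR). A quick sketch of that arithmetic:

```python
# Convert a DDR timing from cycles to nanoseconds: the memory clock in MHz is
# half the transfer rate (DDR4-3800 -> 1900 MHz), and time = cycles / clock.

def timing_ns(mt_per_s, cycles):
    """Absolute duration of a timing given the transfer rate in MT/s."""
    return cycles / (mt_per_s / 2) * 1000  # cycles / MHz -> us, *1000 -> ns

for mts, cl in [(3600, 16), (3800, 15), (3800, 16), (4000, 16)]:
    print(f"DDR4-{mts} CL{cl}: tCL = {timing_ns(mts, cl):.2f} ns")
```

By this arithmetic 3800CL15 has a slightly lower absolute CAS latency than 4000CL16 (~7.9 vs 8.0 ns), so the FPS differences people measured between those profiles presumably come from bandwidth, FCLK, and the other timings rather than CAS alone.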


----------



## Felix123BU (Aug 21, 2021)

Zyll Goliath said:


> I don't know, but it seems like he still believes those cores have dedicated L3 per core (like L1/L2) and isn't getting that the L3 cache is actually characterized as a pool of fast memory that is shared between cores. Even if it's split in half you still have 2x32 MB, which will certainly add some worst-case latency, but you will have more cache....
> P.S. In that video that I posted above you can clearly and without doubt see the advantage of more cache....sure, it was an Intel CPU, but I doubt the difference is that big with those Ryzens. If I had to guess, I would say the gains are not as large as with Intel, but nevertheless you will still see the advantage of more L3 cache....


I can't say for sure what makes a dual-CCD CPU faster in certain situations. I don't have the in-depth knowledge to make 100% statements; I can only look at differences in results with similar hardware and draw some (possibly flawed) conclusions. 

The thing is, in this specific test we run on this thread, there are noticeable differences between single- and dual-CCD Ryzens. Whether that's down to moar cores, or cache speed, or how the cache is used, that's just a guess


----------



## Mbellantoni (Aug 21, 2021)

Taraquin said:


> 2T on Ryzen 5000 seems to be generally faster, BUT stabilizing it is much harder than 1T GDM. Currently I'm running flat 16-32-48 with 288 tRFC on 1T and can do that at 1.44 V. On 2T I get some errors in TM5 after a while if I try that. If you get 2T stable, go for it, but GDM is a nice fallback with almost the same performance and better stability


Yeah, I noticed. I'm getting failures about half an hour into HCI memtest. I added .2 mv to it and am running a test now. I'm not willing to go any higher.

@Taraquin yeah, 2T is not happening lol. I updated my BIOS this morning thinking this might be the "one" and that's a big fat NOPE. Now I'm testing my daily overclocks again for stability.


----------



## tabascosauz (Aug 21, 2021)

Felix123BU said:


> Yet there are some measurable differences in cache between the 5600x, 5800x and 5900x and 5950x
> See below screenshots of Aida Cache and Memory bench, taken from this thread, but repeatable over other threads findings:
> 
> Now, I am no expert on CPU architecture, but those double numbers of cache speed jump out, and in certain situations, like this test also, there is a noticeable gain between them, so my humble opinion, which could be wrong, is that there is a difference in cache behavior between single- and dual-CCD Ryzen 5000 CPUs, which in certain scenarios can give an uplift.



It's not new, the difference has been there since Ryzen 3000 and L3 results are pretty much mirrored (though slightly faster usually). I'm 90% sure that the "cache differences" in AIDA are horseshit. AIDA is pretty meaningless on both cache and memory front (especially DRAM latency, it's wildly unpredictable compared to membench's latency counter), it's not hard to dupe AIDA with memory settings that are flat out unstable or tank performance in other benchmarks (LinpackXtreme, DRAM Calc).

As to L3 in AIDA, the infamous example was the L3 cache read "bug" with Renoir APUs. And no, before you ask, it wasn't an issue of boost or C-states; all-core and C-states off made zero difference on a 4650G. AMD "fixed" it with an AGESA patch that literally didn't do anything to performance in any other test in existence. I suspect AMD probably tweaked Precision Boost to prevent cores from parking during AIDA to make users feel better about themselves. Then Cezanne came around and now we're back to crappy 300-400 GB/s L3 in AIDA......see the pattern here? If AIDA were authoritative, we'd be claiming that Zen 3 has demonstrably slower L3 than Zen 2 Renoir of all things......AIDA is the single greatest pat-oneself-on-the-back machine; it's popular because it's easy, doesn't mean it indicates anything at all. When different people's stock 5900Xs have hundreds of GB/s' difference in L3 AIDA readings..........

Again, don't get me wrong, I'm not trying to discredit you or cast doubt on your choice of settings for the benchmark. But if it's supposed to be a CPU-heavy game, it should perform the part, and nothing that I can see so far shows that. Please provide more HWInfo if you can though, more is better.



Det0x said:


> @ tabascosauz
> Your settings are just plain bad, that's why you're scoring low..
> 
> But do note that my latency numbers are not normal for a dual ccd 5900x/5950x (average is 55-58ns latency in aida i would say)






Okay, make up your mind?

First you say that my settings are bad for 3800 14-15-15 (feel free to offer actionable feedback), then you say that my 54.8ns/101s membench is better than average for 2CCD and yours is significantly better than expected for some reason (are you implying board firmware or PEBCAK?). I never claimed to be running the tightest 3800CL14 setup in the world but neither are most of the other results in here, so, which one is it then?

I'm well aware of polling rate. That doesn't significantly change the test behaviour at all. Upping polling rate may cause a little more of the "high" boost clocks to translate into effective clock, but your own HWInfo screenshot indicates that usage and load are still nowhere near what's expected even from a mildly CPU-bound game (I have a LOT of those). Look at the disparity between your "clocks" and effective clocks, it's the classic symptom of mostly-idle cores and has little to do with polling rate. If anything, needing to increase polling rate to portray increased CPU usage just confirms how low average usage is...

Plus, while per-core clocks and power vary a lot and a loose polling rate may miss occasional peaks, polling rate can't fool temps. I've done a shitload of logging in a few other games on the 5900X trying to figure out the 10-15C temp spikes that Zen 3 chiplets seem to experience sometimes, particularly in MW19, where clocks/per-core power/temps jump around like a roller coaster. Insurgency: Sandstorm is an example of a game that works the CPU moderately but effective clock doesn't show it; only per-core power and temps do. You can up or down the polling rate all you like, if a game is actually CPU-intensive it makes no difference and will naturally show it in the data.

And one thing that polling rate *certainly* won't fool is the fact that the GPU is running full tilt during this benchmark for more than just part of it. It takes a real long time at 100% load to get to 72.5C edge temp, and 180W is literally max possible load. So from what I can tell, it's quite a bit more GPU bound than the vague "29%" number seems to imply, are you insinuating that "bad settings" are solely to blame for most of the test running on GPU?

Or are you implying that the GPU is bad (it certainly is no 3080, I never made any claims regarding GPU perf)? Which in itself would be an admission that the bench isn't nearly so CPU-bound as it should be?



Selaya said:


> Wait isn't the 5900X 8+4 ... ?



That's been rumored for a long time, but it's never made any sense. Ryzen has always functioned on symmetric CCDs. CCD1 and CCD2 cores are clearly demarcated by differences in per-core power during all-core loads, for example, and it does not paint a picture of 8+4 or anything that isn't 6+6.

Some games like MW19 run a heavy "all-core" AVX workload sometimes...but whereas on a 3700X it runs truly 8-core loads, on 5900X it automatically limits itself to the 6 cores of CCD1. And Windows scheduler seems to pick its favoured background processing core not based on core quality (mine is literally the worst core), but the fact that it's not on the same CCD1 as the two preferred performance cores inevitably are.


----------



## Selaya (Aug 21, 2021)

Makes sense. I don't know who made up the 8+4 claim/rumor first but it seemed quite outlandish to me from the get-go. There's a reason why 3600, 3900X, 5600X and 5900X had been the price/performance champs while the 3700X and especially the 5800X never could quite make it (nor the 3950X/5950X) - you could simply jam the slightly flawed CCDs into the former while the latter require flawless ones.


----------



## harm9963 (Aug 21, 2021)

bissag said:


> Can you share zenTimings settings?
> 
> 
> iod voltage already at 1.05, I try to add more?


----------



## Mbellantoni (Aug 21, 2021)

Felix123BU said:


> Yet there are some measurable differences in cache between the 5600x, 5800x and 5900x and 5950x
> See below screenshots of Aida Cache and Memory bench, taken from this thread, but repeatable over other threads findings:
> 
> 5600X                                                             5800x                                                              5950x
> ...


My L3 cache is actually faster than that, around 600+ GB/s, but I capped my EDC to 105 as I've gotten better performance from keeping my TDC/EDC values about 10-15 amps above stock when using PBO. The result is lowered L3 cache speeds per AIDA. I think it's a bug or something

Now that I think of it, it's probably something I'll want to test now. Cinebench is not the one-and-done CPU test. Capping my EDC might give me a better CB20 score but may also be hurting gaming performance

EDIT: ignore the memory side, the timings are different. The one on the left has an EDC cap of 105 (stock is 90 A), the one on the right has an EDC of 300, so it's basically uncapped. Notice the difference in L3 cache speeds. However, with the EDC uncapped I lose about 50-60 points in Cinebench, and performance falls within a margin of error in an actual gaming benchmark ...pretty weird


----------



## Felix123BU (Aug 21, 2021)

tabascosauz said:


> It's not new, the difference has been there since Ryzen 3000 and L3 results are pretty much mirrored (though slightly faster usually). I'm 90% sure that the "cache differences" in AIDA are horseshit. AIDA is pretty meaningless on both cache and memory front (especially DRAM latency, it's wildly unpredictable compared to membench's latency counter), it's not hard to dupe AIDA with memory settings that are flat out unstable or tank performance in other benchmarks (LinpackXtreme, DRAM Calc).
> 
> As to L3 in AIDA, the infamous example was the L3 cache read "bug" with Renoir APUs. And no, before you ask, it wasn't an issue of boost or C-states; all-core and C-states off made zero difference on a 4650G. AMD "fixed" it with an AGESA patch that literally didn't do anything to performance in any other test in existence. I suspect AMD probably tweaked Precision Boost to prevent cores from parking during AIDA to make users feel better about themselves. Then Cezanne came around and now we're back to crappy 300-400 GB/s L3 in AIDA......see the pattern here? If AIDA were authoritative, we'd be claiming that Zen 3 has demonstrably slower L3 than Zen 2 Renoir of all things......AIDA is the single greatest pat-oneself-on-the-back machine; it's popular because it's easy, doesn't mean it indicates anything at all. When different people's stock 5900Xs have hundreds of GB/s' difference in L3 AIDA readings..........
> 
> ...


It's fine, I love a healthy, polite disagreement 

Regarding the AIDA memory and cache benchmarks, I don't think any of us can make an educated judgement on the validity of the scores, if we are honest. I used that as a point to highlight differences between single- and multi-CCD CPUs that also affect the benchmark, and by that I mean the CCD layout, not the AIDA benchmarks.

Regarding whether the settings for this benchmark are the best for showing CPU differences, that's a very long and veeery subjective discussion. The fact is, the settings are good at highlighting differences between core counts, very good at highlighting differences between memory subsystems, and generally speaking not really GPU bound. As far as what has been posted on this thread, there are clear differences between different CPUs, and also differences between the same CPUs with faster memory. If that's not enough, I don't know what could be 

Now, one can always make the argument that for certain hardware combos there is a different set of settings that could show even greater differences, like older GPUs that are weak enough to be the bottleneck even at 1080p lowest graphical settings, but that would lead to an infinity of results and make it impossible to find a common denominator, actually compare results, and draw a conclusion.

I also argue there is no universal CPU test, and not because there aren't tests, or games, that could be used for such a thing; there is no universal one because people have different ideas about what a CPU bottleneck is and how it manifests.

I sometimes do not understand people who are hellbent on changing a thread so it suits their feelings about a certain situation. Everyone is free to start a new thread anytime and set a framework for another type of test if they cannot live with a thread they do not agree with. I did that myself with this thread, and mainly not for me, but for people who wanted to use this game as a CPU bench basically


----------



## bissag (Aug 22, 2021)

harm9963 said:


> harm9963



What Vdimm do you use, 1.55 V?


----------



## Mbellantoni (Aug 22, 2021)

So does anyone know what the determining factor is for having GDM on or off? I've only ever owned 1 set of RAM; I built my first PC in December of 2020. Anyways, I noticed some people have GDM on as standard and find it extremely hard to get it stable off, while others run GDM off 1T by default. Why is this?


----------



## Felix123BU (Aug 23, 2021)

Mbellantoni said:


> So does anyone know what the determining factor is for having GDM on or off? I've only ever owned 1 set of RAM; I built my first PC in December of 2020. Anyways, I noticed some people have GDM on as standard and find it extremely hard to get it stable off, while others run GDM off 1T by default. Why is this?


I can only tell you what I experienced with GDM on vs off. For me GDM on 1T gives a slightly higher calculated bandwidth, while GDM off and 2T gives a slightly better latency, but the overall performance is basically the same. My experience is limited with my current CPU as I could only run GDM off 2T with the latest Agesa, prior to that GDM off was a no go being very unstable.


----------



## Taraquin (Aug 23, 2021)

2T is more flexible when tuning, but a bit harder to stabilize; in most scenarios on Ryzen 5000 it seems like 2T is a bit faster. 1T with GDM has a limitation with the CL, CWL, WR and RP timings, which must be set to even numbers or else they will be rounded up. The best RP timing is impossible with GDM since it's 5. The commonly used CL 15 is also impossible with GDM due to this.
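The rounding rule described above can be sketched in a few lines; this is just an illustration of the behaviour as described in this thread, not a vendor-documented API (the function name is made up for the example):

```python
# Illustration of the GearDownMode (GDM) rule described above: odd values for
# the CL, CWL, WR and RP timings are rounded up to the next even number.

def gdm_effective(timing: int) -> int:
    """Return the timing the memory controller would actually use under GDM."""
    return timing if timing % 2 == 0 else timing + 1

# CL 15 silently becomes CL 16 under GDM, so a "CL15" profile really runs CL16:
print(gdm_effective(15))  # 16
# Even values pass through unchanged:
print(gdm_effective(14))  # 14
```

This is why a 4000 CL15 profile with GDM on effectively runs CL16, and why people chasing odd timings have to run GDM off.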


----------



## Mbellantoni (Aug 23, 2021)

Felix123BU said:


> I can only tell you what I experienced with GDM on vs off. For me GDM on 1T gives a slightly higher calculated bandwidth, while GDM off and 2T gives a slightly better latency, but the overall performance is basically the same. My experience is limited with my current CPU as I could only run GDM off 2T with the latest Agesa, prior to that GDM off was a no go being very unstable.


@Taraquin I screwed around last night trying to get GDM-off 1T stable. I pumped a few more volts into the RAM, but setting my memclkdrstr to 60 ohms and my ProcODT to 40 gave me significantly more stability than usual. Ultimately I still crashed after about 20 membench runs. But I would imagine 2T with the correct ProcODT should be achievable. I am stable where I'm at though, with GDM on and the stock resistances, so it might be something I'll try later on. I like the idea of being stable without having to worry about resistances being the only thing keeping me stable, for a minor perf increase lol


----------



## harm9963 (Aug 24, 2021)

bissag said:


> What Vdimm you use 1.55v ?


1.5v .


----------



## Taraquin (Aug 25, 2021)

Mbellantoni said:


> @Taraquin I screwed around last night trying to get GDM-off 1T stable. I pumped a few more volts into the RAM, but setting my memclkdrstr to 60 ohms and my ProcODT to 40 gave me significantly more stability than usual. Ultimately I still crashed after about 20 membench runs. But I would imagine 2T with the correct ProcODT should be achievable. I am stable where I'm at though, with GDM on and the stock resistances, so it might be something I'll try later on. I like the idea of being stable without having to worry about resistances being the only thing keeping me stable, for a minor perf increase lol


Yes, GDM is awesome stability-wise. 2T, or especially 1T w/o GDM, is for the patient people  You might gain a bit of performance, but it takes a bit of tinkering. 

Even though I have a working 2T setup, I prefer 1T GDM since it can run the RAM 0.02 V lower w/o errors in TM5. Fiddling with ProcODT etc. would probably fix it, but I don't have that much patience.


----------



## Zyll Goliat (Aug 28, 2021)

Regarding the core/cache scaling on Intel...here's more testing and a clear advantage for a larger amount of L3 cache....


----------



## harm9963 (Aug 28, 2021)

Best run.


----------



## Felix123BU (Aug 28, 2021)

harm9963 said:


> Best run.


Seeing your result makes me wonder what my 6800XT could do with the upcoming Alder Lake CPUs; the rumor is those will be gaming monsters


----------



## QuietBob (Aug 29, 2021)

Zyll Goliat said:


> Regarding the core/cache scaling on Intel...here more testing and a clear advantage of larger amount of L3 cache....


What I'd really like to see is how the fastest quads today compare for gaming.
Intel's Core i3-10325 has a 4.7 GHz ST boost and 8 MB of L3 cache. AMD's Ryzen 3 3300X boosts to 4.35 GHz but has double the amount of cache. I don't know what the all-core boost on the i3 is, but the 3300X does 4.2 GHz at stock. Would the higher clock speed of the Intel chip make more of a difference in games than the Ryzen's generous L3, or vice versa?
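For a rough sense of scale, the single-thread clock gap between those two chips is simple arithmetic on the boost clocks quoted above:

```python
# Back-of-envelope comparison of the peak single-thread clocks quoted above.
i3_10325_boost_ghz = 4.70   # Intel Core i3-10325 ST boost
r3_3300x_boost_ghz = 4.35   # AMD Ryzen 3 3300X ST boost

clock_advantage = i3_10325_boost_ghz / r3_3300x_boost_ghz - 1
print(f"i3 peak clock advantage: {clock_advantage:.1%}")  # ~8.0%
```

Whether that ~8% clock edge outweighs the 3300X's doubled L3 is exactly the cache-versus-frequency question being asked, and as noted below, the answer is likely game-dependent.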


----------



## MxPhenom 216 (Aug 29, 2021)

Felix123BU said:


> I don't think anybody knows exactly if it's 8+4 or 6+6; initial reports said 6+6. In theory 8+4 would be better, but who knows, it's probably selected based on yields and CCD characteristics. Though I am not sure how such a random CCD selection process could work, but it's not like AMD did not deliver 5800Xs with 2 CCDs, one of them being disabled


I am pretty certain it's 6+6. It's better this way too, as the heat is spread out more, rather than the majority of the heat being concentrated on one CCD due to the extra cores it would have.



QuietBob said:


> What I'd really like to see is how the fastest quads today compare for gaming.
> Intel's Core i3-10325 has a 4.7 GHz ST boost and 8 MB of L3 cache. AMD's Ryzen 3 3300X boosts to 4.35 GHz but has double the amount of cache. I don't know what the all-core boost on the i3 is, but the 3300X does 4.2 GHz at stock. Would the higher clock speed of the Intel chip make more of a difference in games than the Ryzen's generous L3, or vice versa?


It would depend on more than just core speed and the amount of cache the cores have access to. And it's largely dependent on the specific game.


----------



## Tanzmusikus (Aug 30, 2021)

Hello!


Zyll Goliat said:


> those cores have dedicated L3 per core (like L1/L2) and it's not getting that L3 cache is actually characterized as a pool of fast memory that is shared between cores, and even if it's split in half you still have 2x32 MB, which will certainly add some worst-case latency, but you will have more cache.


They have dedicated L3 per CCX (core complex), but there's also an interconnect (Infinity Fabric) between the two CCXs of a 5900X or 5950X.
A core from CCX0 can also read L3 data from CCX1, but it incurs an additional latency penalty.



tabascosauz said:


> When different people's stock 5900Xs have hundreds of GB/s' difference in L3 AIDA readings.


Yes, AIDA can show inconsistent results ... and the RAM OC could also be unstable, so varying amounts of data get transferred again due to error correction. The result is a lower value.



MxPhenom 216 said:


> I am pretty certain its 6+6. Its better this way too as the heat would be spread out more rather than majority of the heat concentrated on one CCD due to the increased cores it may have.


And I think that all combinations of 6+6, 7+5 and 8+4 cores are possible, because that makes more sense from a financial point of view. Companies love to save money.
So why would AMD waste CCXs with 7 healthy cores and 5 healthy cores? Maybe the CCXs with 4 healthy cores could be reused in Athlon processors.
But that's just my guess ...

... Oh yes, Igor confirms the existence of Ryzen 5600X/5800X with two CCX, which can come from a faulty Ryzen 5900X: https://www.igorslab.de/en/ryzen-5-...t-laesst-sich-die-cpu-zum-ryzen-9-unlocken-2/


Btw, this is what I found on Anandtech:


> The *Ryzen 9 5900X*: 12 Cores at $549
> 
> Squaring off against Intel’s best consumer grade processor is the Ryzen 9 5900X, with 12 cores and 24 threads, offering a base frequency of 3700 MHz and a turbo frequency of 4800 MHz (4950 MHz was observed).
> This processor is enabled through *two six-core chiplets*, but all the cache is still enabled at 32 MB per chiplet (64 MB total). The 5900X also has the same TDP as the 3900X/3900XT it replaces at 105 W.


So it should be two CCDs with 6 cores enabled each.

Best regards


----------



## tabascosauz (Aug 30, 2021)

Tanzmusikus said:


> They have dedicated L3 per CCX (core complex), but there's also an interconnect (IF) between the two CCX of a 5900X or 5950X.



Do you have a source for this? Anandtech's core-to-core latency testing showed zero difference between Ryzen 3000 and 5000 when venturing outside of their respective CCXs. Ryzen 3000 had no direct IF link between CCDs, so if there was suddenly a new avenue for inter-CCD communication in Ryzen 5000 wouldn't you expect even slightly better results?



 

 



A direct IF link between CCD1-CCD2 would be a massive change from Ryzen 3000. A "new feature" that I would've expected AMD to constantly brag about or other news/review sites to have reported on. Outwardly it doesn't appear as if they've drastically redesigned the substrate for that (honestly I'm not sure the substrate is any different at all outside of accommodating the new CCD).

If the design hasn't actually changed and there is no direct link, then even if the 2 CCDs can indirectly talk to each other, I'm pretty sure the sheer latency penalty associated with having to travel across the substrate not once but twice would basically invalidate any potential performance boost from theoretically having "more L3".


----------



## Tanzmusikus (Aug 30, 2021)

tabascosauz said:


> Anandtech's core-to-core latency testing showed zero difference between Ryzen 3000 and 5000 when venturing outside of their respective CCXs.





> *Inter-core latencies within the L3 lie in at 15-19ns, depending on the core pair*. One aspect affecting the figures here are also the boost frequencies of that the core pairs can reach as we’re not fixing the chip to a set frequency. This is a large improvement in terms of latency over the 3950X, but given that in some firmware combinations, as well as on AMD’s Renoir mobile chip this is the expected normal latency behaviour, it doesn’t look that the new Zen3 part improves much in that regard, *other than obviously of course enabling this latency over a greater pool of 8 cores within the CCD*.
> 
> *Inter-core latencies between cores in different CCDs still incurs a larger latency penalty of 79-80ns*, which is somewhat to be expected as the new Ryzen 5000 parts don’t change the IOD design compared to the predecessor, and traffic would still have to go through the infinity fabric on it.


The technical specs haven't changed much, so you still get the extra latency penalty on Ryzen 5000 "between cores in different CCDs/CCXs".



tabascosauz said:


> Ryzen 3000 had no direct IF link between CCDs


I didn't write anything about a "direct IF interconnection". That may be what you wanted to read.  Maybe that's our misunderstanding.
I mean the path over the IF interconnect. And yes, now I see clearly that it is a doubled (serial, not parallel) connection through the I/O die. Thanks for making that clear for me!



tabascosauz said:


> If the design hasn't actually changed and there is no direct link, even if the 2 CCDs can indirectly talk to each other, I'm pretty sure the sheer latency penalty associated with having to travelling across the substrate not once but twice, would basically invalidate any potential performance boost from theoretically having "more L3".


If you look at the two pictures of core-to-core latencies that you posted, you'll also see a big improvement.
The green fields of very low latencies cover twice the area on Ryzen 5000 compared to Ryzen 3000: half of the picture is marked green rather than only a quarter.

In addition, the highest latencies among the green-marked fields are almost half (6.6 ns - 19.8 ns) of the ones from Ryzen 3000 (6.7 ns - 33.1 ns).
And the orange-marked fields on Ryzen 5000 nevertheless show slightly better latencies in the end: 79.2 ns - 84.6 ns instead of 81.7 ns - 92.5 ns for the Ryzen 3000 series.

So nobody has to worry about bad performance, including L3 cache latencies, with the Ryzen 5000 series processors. They're just fine.


----------



## CGi-Quality (Sep 3, 2021)

Taraquin said:


> Have you tweaked ram or OCed CPU? You have a lot of potential


Figured I'd update you: I went out and grabbed 3600 MHz of Vengeance goodness! Gonna retest this benchy later!


----------



## Mbellantoni (Sep 4, 2021)

Just ordered a 2x8 GB set of Ripjaws V 3600 CL16 B-die for 100 bucks to add to my Neos. Hoping I can pop them in, load my BIOS/custom timing profile, and be set. Not too optimistic on hitting 4000 on 4 DIMMs though; hoping for at least 3800. Filling all 4 slots should net an increase even if I take a hit of 200 MHz and 100 FCLK on the Zen 3 architecture, according to GN. Really hoping I hit the lottery on these even though they were cheaper; B-die is B-die. I'll post a SOTTR run in about a week with the results


----------



## Tanzmusikus (Sep 8, 2021)

It also depends on the RAM topology of the board whether you can achieve high frequencies with four RAM modules.
Your ASUS X570 TUF Plus (Wifi) has a DaisyChain topology, so you can achieve higher frequencies with two modules.
With four modules it would be less, but you might end up with similar transfer rates.


----------



## Mbellantoni (Sep 10, 2021)

Tanzmusikus said:


> It also depends on the RAM topology of the board whether you can achieve high frequencies with four RAM modules.
> Your ASUS X570 TUF Plus (Wifi) has a DaisyChain topology, so you can achieve higher frequencies with two modules.
> With four modules it would be less, but you might end up with similar transfer rates.


You are correct. Slapped the new kit in along with my Trident Z Neos and it won't even post at 4000, even with high voltage and the sloppiest of timings. The mem controller simply can't take it, or it's my mobo itself.

Anyways, I'll be finishing up 3800 14-15-14-21 soon on HCI memtest. It's looking promising, and then I'll run a few SOTTR benchmarks and post the results

Dropped in the Ripjaws with the Neos for 32 gigs total. Played around with some voltages/CAD bus and ODTs and got this stable, the top run being 3800 CL14 @ 32 gigs and the bottom being 4000 CL14 at 16 gigs. The bottom result was more of a freak run; it was just the highest score I have ever gotten, but it's not really consistent (usually 230-233 fps). It's the screenshot I had, so I posted it. Otherwise they are both in line with each other, the dual-rank bonus from having all the DIMM slots filled negating the 200 MHz and 100 FCLK bonus from running only 2 sticks, and vice versa.

The timings are pretty much the same besides a change of tWRRD from 1 to 3 for stability. I know I have the screenshot somewhere, but I did a previous run at 3800 CL14 @ 16 GB with the same timings and it scored 226, so running dual rank is for sure an upgrade, speed for speed and timing for timing. My conclusion is that a dual-rank RAM config is equal in performance to a 2-stick single-rank config running approx. 200 MHz and 100 FCLK higher with the same timings in this benchmark, even though the 4000 CL14 profile is superior by about 3,500 MB/s of bandwidth and 2 ns of latency per AIDA
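That ~3,500 MB/s AIDA gap is roughly what the theoretical peak numbers predict. A quick sanity check, assuming a standard 64-bit (8-byte) bus per DDR4 channel in dual-channel mode (the helper function name is made up for the example):

```python
# Theoretical peak bandwidth for DDR4: transfers/s * 8 bytes per channel
# * number of channels. Rank count doesn't change this peak figure, which is
# why dual rank helps via interleaving efficiency rather than raw bandwidth.

def ddr4_peak_mb_s(mt_s: int, channels: int = 2) -> int:
    return mt_s * 8 * channels

print(ddr4_peak_mb_s(3800))                          # 60800 MB/s
print(ddr4_peak_mb_s(4000))                          # 64000 MB/s
print(ddr4_peak_mb_s(4000) - ddr4_peak_mb_s(3800))   # 3200 MB/s gap
```

The 3,200 MB/s theoretical delta is in the same ballpark as the ~3,500 MB/s difference AIDA reported between the two profiles.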









Here's another 4000 CL14


----------



## Taraquin (Sep 10, 2021)

Mbellantoni said:


> You are correct. Slapped the new kit in along with my Trident Z Neos and it won't even post at 4000, even with high voltage and the sloppiest of timings. The mem controller simply can't take it, or it's my mobo itself.
> 
> Anyways, I'll be finishing up 3800 14-15-14-21 soon on HCI memtest. It's looking promising, and then I'll run a few SOTTR benchmarks and post the results
> 
> ...


Running all core 4.7 or curve optimizer? If your binning is good you might be able to run 4.85 single and 4.7 all core with - 30 on CO and ppt around 90-100W.


----------



## Mbellantoni (Sep 10, 2021)

Taraquin said:


> Running all core 4.7 or curve optimizer? If your binning is good you might be able to run 4.85 single and 4.7 all core with - 30 on CO and ppt around 90-100W.


Curve optimizer. 0 , - 19 , -14 , - 18, -19, -19 my chip is BAD. Like...hilariously bad at all core.  Pbo is my saving grace. Lol


----------



## Taraquin (Sep 10, 2021)

Mbellantoni said:


> Curve optimizer. 0 , - 19 , -14 , - 18, -19, -19 my chip is BAD. Like...hilariously bad at all core.  Pbo is my saving grace. Lol


My chip is golden, but my RAM is really bad; yours seems very good, wanna trade RAM with me?


----------



## Felix123BU (Sep 10, 2021)

Taraquin said:


> My chip is golden, but my RAM is really bad; yours seems very good, wanna trade RAM with me?


In the same boat, buddy: great CPU, crappy RAM. Oh well, we can't have it all...not without spending tons of cash


----------



## Mbellantoni (Sep 11, 2021)

Taraquin said:


> My chip is golden, but my RAM is really bad; yours seems very good, wanna trade RAM with me?


Lol, I think I'm gonna end up selling this kit I just bought and try to recoup my money. I cannot get this kit stable and it's driving me nuts. I don't think it's the RAM either; I think my mobo is showing its limits with 4 DIMMs. Gonna try to sell these, buy a set of Ripjaws 3600 C14 16 GB x2 sticks, and then sell off my Neos lol


----------



## Mbellantoni (Sep 15, 2021)

AMD's new driver enabled SAM on the 5000 series. Just wanted to show it. CPU render takes a pretty large hit and yet performance is improved. Staying true to the thread, I'll post the 1080p results, but there are even larger gains at 1440p. Pretty interesting. The top run is regular, the bottom has SAM enabled


----------



## Taraquin (Sep 15, 2021)

Mbellantoni said:


> Amd's new driver enabled SAM on 5000 series. Just wanted to show. Cpu render takes a pretty large hit and yet, performance is improved. Staying true to the thread ill post the 1080 results. But there are even larger gains at 1440p. Pretty interesting. The top run is regular. Bottom has SAM enabled
> View attachment 217011View attachment 217012


Interesting. On my 5600X, enabling r-BAR reduces the CPU game avg by 5-10 fps; it seems like running an AMD GPU doesn't do that. Cool that the 5000 series gets an uplift. In retrospect, the 5700 and 5700 XT gave very good value for money, especially after FSR and if you got them at close to MSRP in Q3 2019. Performance seems close to the 2070S; the only downside for me is the lack of DLSS, and that's why I sold my 5700 XT and bought a 3060 Ti. How much was the fps improvement at 1440p?


----------



## Mbellantoni (Sep 15, 2021)

Taraquin said:


> Interesting. On my 5600X, enabling r-BAR reduces the CPU game avg by 5-10 fps; it seems like running an AMD GPU doesn't do that. Cool that the 5000 series gets an uplift. In retrospect, the 5700 and 5700 XT gave very good value for money, especially after FSR and if you got them at close to MSRP in Q3 2019. Performance seems close to the 2070S; the only downside for me is the lack of DLSS, and that's why I sold my 5700 XT and bought a 3060 Ti. How much was the fps improvement at 1440p?








Getting better fps than a 2070S these days in a lot of games. AMD is really stepping their game up


----------



## Felix123BU (Sep 16, 2021)

Mbellantoni said:


> View attachment 217013View attachment 217014
> 
> Getting better fps than 2070s these days in alot of games . Amd is really stepping their game up


It's interesting that you get an FPS increase with SAM; I get a minute 1-3 FPS reduction in this game with SAM enabled, tested a couple of times over different drivers out of curiosity.


----------



## harm9963 (Sep 16, 2021)

bissag said:


> Can you share zenTimings settings?
> 
> 
> iod voltage already at 1.05, I try to add more?


----------



## phanbuey (Sep 16, 2021)

that's a crazy 5600x tune.  wow.


----------



## Mbellantoni (Sep 16, 2021)

Are those stable with GDM off at 1T? I'm pretty convinced it's near impossible to truly get that stable on Ryzen, especially with a ClkDrvStr of 24 ohms @harm9963



Felix123BU said:


> Its interesting that you get a FPS increase with SAM, I get a minute 1-3 FPS reduction in this game with SAM enabled, tested a couple of time over different drivers out of curiosity.


Have you tried the latest revision? They also just released new chipset drivers the other day as well. May be worth a shot


----------



## Felix123BU (Sep 16, 2021)

Mbellantoni said:


> Are those stable with gdm off at 1t? Im pretty convinced its near impossible to truly get that stable on ryzen. Especially with a clkdrstr of 24ohms @harm9963
> 
> 
> Have you tried the latest revision? They also just released new chipset drivers the other day as well. May be worth a shot


Will surely do when AMD releases the next WHQL GPU driver   Chipset drivers are always up to date. But if I get bored enough I might give it a shot with the latest graphics driver as well 

Was doing a bit of testing with my RAM sticks these days. The tuning that I did mainly for the purpose of this test thread was "stable" for the last 3 months in everything, no crashes, no weird behavior, nothing, until this week when I got some free time and decided to replay Horizon Zero Dawn, which was crashing regularly every 10-15 minutes; it never did before this week. 
I basically went crazy because I was not sure what was causing it, but in the end narrowed it down to... memory instability  Changed tRFC from 265 to 294 and no more crashes after 5 hours and counting. Seems 265 at the set voltage was a bit optimistic. It's the first time a tested RAM OC gave me any crashes
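As a side note, converting tRFC from memory-clock cycles to nanoseconds makes it easier to judge how aggressive a value really is. A quick sketch (assuming my DDR4-3800, i.e. a 1900 MHz memory clock; the helper name is my own):

```python
# Convert a tRFC value in memory-clock cycles to nanoseconds.
# DDR4 transfers twice per clock, so the memory clock is MT/s divided by 2.
def trfc_ns(trfc_cycles: int, mt_per_s: int) -> float:
    mem_clock_mhz = mt_per_s / 2          # e.g. 3800 MT/s -> 1900 MHz
    cycle_time_ns = 1000 / mem_clock_mhz  # one clock period in ns
    return trfc_cycles * cycle_time_ns

# The old (crashing) and new (stable) settings at DDR4-3800:
print(round(trfc_ns(265, 3800), 1))  # 139.5 ns
print(round(trfc_ns(294, 3800), 1))  # 154.7 ns
```

So the fix only relaxed tRFC by about 15 ns, which shows how close to the edge the original value was.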


----------



## Mbellantoni (Sep 17, 2021)

Felix123BU said:


> Will surely do when AMD releases the next WHQL Gpu driver   Chipset ones all always up to date. But if I get bored enough might give it a shot with the latest graphics driver aswell
> 
> Was doing a bit of testing with my ram stick these days, the tuning that I did mainly for the purpose of this tests thread was "stable" for the last 3 months in everything, no crash, no weird behavior, nothing, until this week I got some free time and decided to replay Horizon Zero Dawn, that was crashing regularly every 10-15 minutes, never did before this week.
> I basically went crazy because I was not sure what was causing it, but in the end narrowed it down to...memory instability  Changed Trfc from 265 to 294 and no more crashes after 5 hours and counting. Seems 265 at the set voltage was a bit optimistic. Its the first time a tested ram oc gave me any crash


tRFC can be tricky. I've had tests pass 6 hrs straight and some fail within 30 minutes with the same tRFC. It's one of the trickier timings when it comes to stability for sure


----------



## AVATARAT (Sep 24, 2021)

Ryzen 5 5600x+PBO+CO Per Core
2x8GB DDR4@4066MHz 16-17-14-28-2T
PowerColor RX 6700 XT 12GB @2720MHz / Mem 2150MHz(17200)


----------



## jamse (Sep 27, 2021)

Haven't found any timings I could lower without actually losing performance. Anything I try to lower further just makes it run worse in all benchmarks for some reason


----------



## Felix123BU (Sep 27, 2021)

jamse said:


> View attachment 218394
> View attachment 218395
> 
> Haven't found any timings I could lower without actually losing performance. Anything I try to lower further will just make it run worse in all benchmarks for some reason


That's a nice score for the hardware combo


----------



## Taraquin (Sep 27, 2021)

jamse said:


> View attachment 218394
> View attachment 218395
> 
> Haven't found any timings I could lower without actually losing performance. Anything I try to lower further will just make it run worse in all benchmarks for some reason


Most of your timings are good. The only ones that should improve performance are: tRAS to 24, tRC to 42 / tRFC to 252 (or tRC/tRFC 44/264), tRRDS to 4, tFAW to 16, tWR to 14 or 12. Setting SCLs to 4 might improve stability without losing much if any performance. 

You could also try curve optimizer, which might give you a 3-4% boost in some games/apps.
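For what it's worth, the pairs above line up with common rules of thumb: tRC = tRP + tRAS, and tRFC as a small integer multiple of tRC (both suggested pairs happen to use 6x; whether that multiplier generalizes to other kits is an assumption on my part, as is the tRP value of 18 used for illustration):

```python
# Sanity-check the suggested timing pairs against common rules of thumb:
#   tRC  = tRP + tRAS   (row cycle time = precharge + row active time)
#   tRFC = 6 * tRC      (a popular starting multiplier; assumption)
def trc(trp: int, tras: int) -> int:
    return trp + tras

for t_rc, t_rfc in [(42, 252), (44, 264)]:
    assert t_rfc == 6 * t_rc  # both suggested pairs use a 6x multiplier

print(trc(18, 24))  # 42 -- a hypothetical tRP of 18 with the suggested tRAS of 24
```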


----------



## Felix123BU (Sep 27, 2021)

Taraquin said:


> Most of your timings are good. The only ones that should improve performance is setting tras to 24, trc to 42/trfc 252 or trc/trfc 44/264, trrds to 4, tfaw to 16, twr 14 or 12. Setting scls to 4 might improve stability without losing much if any performance.
> 
> You could also try curve optimizer that might give you a 3-4% boost in some games/apps.


For me the biggest boost I got, in this test and others too, was an all-core OC, slightly better than PBO + curve optimizer; even though PBO got me to 5.05 GHz, a 4.7-4.8 GHz all-core had better FPS every time. The differences are small anyway, memory tuning has by far the biggest impact.


----------



## Taraquin (Sep 27, 2021)

Felix123BU said:


> For me the biggest boost I got in this test and others too was an all-core OC, slightly better than the PBO+curve optimizer, and even though PBO got me to 5.05 Ghz, a 4.7-4.8 Ghz all-core had better FPS every time. The differences are small anyway, memory tuning has by far the biggest impact.


Yeah, I get 234fps at best with +200 PBO (4.6 avg all-core), but my all-core 4.8 got 248fps. I have resizable bar activated and that steals a few CPU fps though; the 4.8 run was without r-bar.

So I re-ran the test. The following is 4.8 all-core with r-bar. This lost me 8 fps avg on CPU game compared to the result without r-bar, 4000cl15 tuned.


Next up is Curve optimizer and pbo+200 and ram at 4000cl15 tuned. CO lost 6 fps avg compared to all core 4.8.



Last is the same as above but a slower 4000cl16 with slightly looser timings and almost 0.1V lower vdimm. For unknown reasons it beat the 4000cl15 with tighter timings 




Wrap-up: enabling r-bar reduces CPU performance by 3%, all-core 4.8 is about 2% faster than CO+200 PBO but uses about 10-15W more, and the CPU runs 5-10C warmer. CO might match 4.8 all-core if I disable the 76W limit.


----------



## Felix123BU (Sep 27, 2021)

Taraquin said:


> Yeah, I get 234fps at best with +200 pbo(4.6 avg allcore),  but my all core 4.8 got 248fps. I have activated resize bar and that steals a few CPU fps though, 4.8 was without r-bar.
> 
> So I re-ran the test. The following is 4.8 all core with r-bar. This lost me 8 fps avg on CPU-game compared to the result without r-bar, 4000cl15 tuned.View attachment 218418
> Next up is Curve optimizer and pbo+200 and ram at 4000cl15 tuned. CO lost 6 fps avg compared to all core 4.8.
> ...


I also noticed that r-bar slightly reduces CPU performance in this game on AMD CPUs   Not sure about r-bar with Intel CPUs.


----------



## Det0x (Oct 2, 2021)

tRFC 228 put up a really good fight, but in the end I managed to knock it back in line

Had multiple runs where testmem would simply stop running without giving any errors, with the timer still going, often at cycles between 15 and 24
What fixed it in the end was higher voltages.. +20mV to VDDP, CCD and IOD stopped testmem from stalling between cycles

I'm pretty sure this will be my finalized profile for a while now.. 



Lots of things going on in this screen, so I will also write out what it shows:

25 cycles testmem 1usmus cfg
3000% Karhu ramtest
30min OCCT Large AVX
Aida64 memory benchmark
CPU-Z cpu bench, ST 709 and MT above 14k

And a quick and easy gaming test in SotTR with *daily 24/7* settings. (done in Windows 10)



Seems like I have to wait for Alder Lake for someone else to cross the 300 fps line before I can start pushing again..
My old record with everything maxed and not stable was 314 average CPU fps, I think; with these new memory sticks and settings it should be good for ~320-330 fps when running bench settings
Looking forward to what the 5950 xt 3dnow! edition can do in this bench when I get it 

*For those running windows11:*
Windows 11 will hobble gaming performance by default on some prebuilt PCs​



Far Cry New Dawn is the outlier here, which barely shrugs at VBS, with just a 5% reduction in frame rate. But Horizon Zero Dawn drops by some 25%, Metro Exodus by 24%, and *Shadow of the Tomb Raider by 28%.* Interestingly, the 3DMark Time Spy score only dropped by 10%.

"In our testing with pre-release builds of Windows 11," UL tells us, "a feature called Virtualization-based Security (VBS) causes performance to drop.* VBS is enabled by default after a clean install of Windows 11, but not when upgrading from Windows 10*. This means the same system can get different benchmark scores depending on how Windows 11 was installed and whether VBS is enabled or not. "

If you are affected:

"HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceGuard and add a new DWORD value named EnableVirtualizationBasedSecurity and set its value to 0 "

or

Run gpedit.msc
Go to _Local Computer Policy > Computer Configuration > Administrative Templates > System > Device Guard_
Double click _Turn on Virtualization Based Security_
Select _Disabled_
Click OK
A reboot might be required.
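For convenience, the same DeviceGuard registry tweak as a single elevated command (a sketch of the manual regedit steps above; run from an administrator command prompt, then reboot):

```shell
:: Disable Virtualization-based Security via the DeviceGuard registry key.
:: Mirrors the manual regedit steps above; requires admin rights and a reboot.
reg add "HKLM\System\CurrentControlSet\Control\DeviceGuard" ^
    /v EnableVirtualizationBasedSecurity /t REG_DWORD /d 0 /f
```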


----------



## Felix123BU (Oct 2, 2021)

Det0x said:


> tRFC 228 put up a really good fight with me, but in the end i have managed to knock it back in line
> 
> Had multiple runs where testmem simply would stop running without giving any errors, with timer still running, often at cycles between 15 and 24
> What fixed it in the end was higher voltages.. +20mv to VDDP, CCD and IOD fixed testmem from stop running between cycles
> ...


Nice post! Also, if you get an Alder Lake CPU, I would be highly interested to see results with it


----------



## Cekho (Oct 2, 2021)

Hello everyone.
I've been following this thread with interest since I started playing Shadow of the Tomb Raider 2 days ago.
From the start I felt something was wrong: currently I play with an R7 3700X + 6700 XT at 1080p (atm I can't go for the 1440p I'm aiming for), but my average FPS in the game benchmark is pretty low for ultra settings -> like 87 fps average. So I set everything to lowest like all of you did in this thread and these are the results:








I'm not sure, but I think having 87 average fps on ultra and 102 on lowest settings with my current setup is not normal.
In the bottom-right corner, the first column of the table shows low CPU game fps compared to your previous posts and I can't find the reason. Even lower-end CPUs get higher CPU game fps.
I literally played Cyberpunk 2077 on ultra at 1080p without any strange results like this.

Does anyone know what the potential issues might be? Or maybe I'm just not reading the data correctly?

Thank you for the answers, hoping for some help  have a good day


----------



## Felix123BU (Oct 2, 2021)

Cekho said:


> Hello everyone.
> I followed with interest this thread since I started to play at Shadow of the tomb raider 2 days ago.
> Since the start of the game I started to feel something wrong: currently I play with a R7 3700x + 6700xt at 1080p (atm i cant go for 1440p which I aim), but my FPS average on game benchmark are pretty low for the ultra settings -> like 87 fps average. So i tried to set all at lowest like all of you did in this thread and these are the results:
> 
> ...


From what you posted, the results say your CPU-RAM combo is underperforming; a 3700X should give much better results. What RAM sticks do you use, and what are your RAM settings? I had a 3800X and it scored roughly 25% higher than your 3700X in this test, and the difference between a 3800X and a 3700X is minimal anyway.


----------



## Cekho (Oct 2, 2021)

Felix123BU said:


> From what you posted, the result say your CPU-RAM combo is underperforming. A 3700x should give much better results. What RAM sticks do you use, and what are your RAM settings? I had the 3800X, it was roughly 25% faster than your 3700x in this test, and the difference between a 3800x and 3700x is minimal anyway.


Thanks for the answer.

I'm actually using a pair of Crucial Ballistix 3000 MHz sticks, but you just made me realize I didn't re-enable DOCP after the BIOS update I did 2 weeks ago. Maybe that's the problem?


----------



## Felix123BU (Oct 2, 2021)

Cekho said:


> Thanks for the answer.
> 
> I'm actually using a pair of Crucial Ballistix 3000Mhz, but you just made me think i didnt activate DOCP after last BIOS update i did 2 weeks ago. Maybe is there the problem?


Most likely   . And even if you get it to 3000 MHz with DOCP, that is still rather low, because those stock timings are usually pretty relaxed. If the RAM kit allows, you can get a lot of extra performance by tuning it, but that takes some time and quite a lot of knowledge. A good way to get into tuning is DRAM Calculator for Ryzen; it's not perfect, but a decent starting point.


----------



## Cekho (Oct 2, 2021)

Felix123BU said:


> Most likely   . And even if you get it to 3000mhz with DOCP, that is still rather low because usually those timings are pretty relaxed. If the Ram kit allows, you can get a lot more extra performance by tuning it, but that takes some time and quite a lot of knowledge. A good way to enter tuning is Dram Calculator for Ryzen, its not perfect, but a decent starting point.



Well, I just rebooted the system and enabled DOCP.
Long story short, tomorrow I'll receive a new pair of 3600 MHz RAM sticks, and I didn't even know about this problem ^^'

I'm going to run the benchmark now just to see if I got some improvement

Thanks for the pretty quick answer, super kind


EDIT:
This is my latest benchmark; I got a 17 fps improvement, not impressive but still something. Maybe with the new pair of RAM I'll get a major improvement 


----------



## Felix123BU (Oct 2, 2021)

Cekho said:


> Well i just rebooted the system and enabled DOCP.
> Long story short tomorrow i'll receive new pair of RAM that have 3600mhz, and i didnt even know this problem ^^'
> 
> I'm going for a benchmark now just to see if i got some improvement
> ...


That's not bad, 17fps extra  

See here how much I gained just by RAM tuning. It doesn't mean you will get the same increases, but it shows how much RAM affects this game.




That 3600 MHz RAM should help, and if you can tune it even more, let's say to 3800 MHz with decent sub-timings, that would make it even better.


----------



## QuietBob (Oct 2, 2021)

Cekho said:


> Well i just rebooted the system and enabled DOCP.
> Long story short tomorrow i'll receive new pair of RAM that have 3600mhz, and i didnt even know this problem ^^'
> 
> I'm going for a benchmark now just to see if i got some improvement
> ...


Those scores are way off the mark. Your 3700X is performing worse than a quad-core Zen 2. Something's holding your CPU back, and faster RAM isn't going to magically double your fps. Your processor may be throttling, or it is being bogged down by heavy background processing.

Are you running any other software at the same time? What are your temperatures?


----------



## phanbuey (Oct 2, 2021)

Cekho said:


> Well i just rebooted the system and enabled DOCP.
> Long story short tomorrow i'll receive new pair of RAM that have 3600mhz, and i didnt even know this problem ^^'
> 
> I'm going for a benchmark now just to see if i got some improvement
> ...


Do you have virtualization enabled in the BIOS? Turning it off also gives me a few more fps in this game.


----------



## Cekho (Oct 2, 2021)

QuietBob said:


> Those scores are way off the mark. Your 3700X is performing worse than a quad core Zen 2. Something's holding your CPU back and faster RAM isn't going to magically double your fps. Your processor may be throttling or it is being bogged down by heavy background processing.
> 
> Are you running any other software at the same time? What are your temperatures?


That's what I feared.
But temperatures seem OK. This screen was captured during the benchmark. I wasn't running any other software, just one Chrome tab. :s


----------



## kane nas (Oct 2, 2021)

5600x Pbo+curve optimization@5.0GHz,Gskill flareX 3200@3733 cl15.
6800 Powercolor,MorePowerTools@6800xt Nitro+ bios=300W Power Limit,360 A TDC,55 Soc,Power Limit +15%,1875 MHz Fclk.
Custom settings/all ultra/Aniso x16 /motion blur off/AMD FidelityFX CAS off
1920X1080,2560X1440,3840X2160,5120x2880 and 3840x2160 + RT ultra shadows.


----------



## QuietBob (Oct 2, 2021)

Cekho said:


> That's what i feared.
> But temperatures seems ok. These screen is captured during the benchmark. I didnt use any software, just one tab of Chrome open. :s
> View attachment 219231


You should be seeing much higher CPU usage, SOTTR scales with eight cores easily. Also your temps would indicate that the processor isn't being fully utilized.
Could you fill out your hardware specs in the user profile?
Oh, and welcome to TPU!


----------



## Cekho (Oct 2, 2021)

QuietBob said:


> You should be seeing much higher CPU usage, SOTTR scales with eight cores easily. Also your temps would indicate that the processor isn't being fully utilized.
> Could you fill out your hardware specs in the user profile?
> Oh, and welcome to TPU!



Thanks for the advice and the welcome  , I'll fill in my user profile with my hardware specs.

I'm a bit frustrated because the processor is pretty new (I bought it like a month ago) and I don't really know what I can do to resolve the issue :\

EDIT: I added the specs  hope that helps


----------



## Taraquin (Oct 2, 2021)

Cekho said:


> Thanks for the answer.
> 
> I'm actually using a pair of Crucial Ballistix 3000Mhz, but you just made me think i didnt activate DOCP after last BIOS update i did 2 weeks ago. Maybe is there the problem?


I had that happen on my 3600, and that setup should give 130 fps avg CPU game; a 3700X should do 140+. Is your RAM in the A2-B2 slots? DOCP should help a lot


----------



## QuietBob (Oct 2, 2021)

Cekho said:


> Thanks for the advice and the welcome  , i'll fill my user profile with my hardware specs.
> 
> I'm a bit frustrated cause the processor is pretty new (i bought it like 1 month ago) and dont really know what i can do for resolve the issue :\
> 
> EDIT: I added the specs  hope can help


Great, you've got a nice setup there! The 3700X is a good CPU, but I wouldn't like to de-rail this thread. Maybe you could create your own new topic in General Hardware, so we can do some proper troubleshooting?


----------



## mrthanhnguyen (Oct 3, 2021)

I like Intel more because it's more user friendly: just plug and play and you get a good result. Meanwhile AMD takes too much time and I can't figure out what is going on. 47.75 CCX0 / 47 CCX1, 3800c13, with 33C water for daily usage.


----------



## Cekho (Oct 3, 2021)

Taraquin said:


> I had that on my 3600 and that should give 130 fps avg cpu game. 3700X should do 140+. Is your ram in A2-B2 slot? Dcop should help a lot


Oh god, I just checked the RAM slots and noticed they're running in single channel. I'll fix the placement when my new pair of 3600 MHz sticks arrives today.




QuietBob said:


> Great, you've got a nice setup there! The 3700X is a good CPU, but I wouldn't like to de-rail this thread. Maybe you could create your own new topic in General Hardware, so we can do some proper troubleshooting?


You're absolutely right, sorry, I posted here because I thought it was a problem related to the game. I'm going to fix the RAM placement today (atm they're in single channel, maybe that's what is limiting my CPU) and if the problem is not resolved I'll make a thread in the right section!

Thank you all for your answers


----------



## Taraquin (Oct 3, 2021)

Finally I was able to match my slightly unstable 4.8 / 4000cl15 overclock. This is curve optimizer + 200 PBO and 4000cl16 tuned 1T. Still 8fps lower than without resizable bar, but I'm satisfied


----------



## Felix123BU (Oct 3, 2021)

Cekho said:


> Oh god, just checked the slots ram and I noticed they're in single channel. I'll fix the position when my new pair of 3600mhz arrive today.
> 
> 
> 
> ...


Well, it was a RAM problem in the end, it just manifested in a different form. Glad you have a resolution; these things can be extremely frustrating


----------



## Taraquin (Oct 3, 2021)

Cekho said:


> Oh god, just checked the slots ram and I noticed they're in single channel. I'll fix the position when my new pair of 3600mhz arrive today.
> 
> 
> 
> ...


What 3600 RAM did you order? The Ballistix 3000 is Micron Rev E, which can overclock like a champ; I ran mine at 3733cl15 with tight subs for half a year at 1.43V. Buildzoid tested 10 of these sticks and all managed 4900-5100 at 1.55-1.7V. Running 3800cl15, which is your estimated max, is quite easy  Unless you get B-die, the new RAM probably won't be faster than the sticks you have.


----------



## Tanzmusikus (Oct 5, 2021)

Det0x said:


> "In our testing with pre-release builds of Windows 11," UL tells us, "a feature called Virtualization-based Security (VBS) causes performance to drop.* VBS is enabled by default after a clean install of Windows 11, but not when upgrading from Windows 10*. This means the same system can get different benchmark scores depending on how Windows 11 was installed and whether VBS is enabled or not. "


You could try deactivating VBS/Hyper-V without deactivating AMD-V, if you want to use it with VMs.
Open CMD as admin and type "bcdedit /set hypervisorlaunchtype off", then restart the PC.

But you still have to set the registry or GPO settings:


Det0x said:


> "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceGuard and add a new DWORD value named EnableVirtualizationBasedSecurity and set its value to 0 "
> 
> or
> 
> ...







Cekho said:


> I'm a bit frustrated cause the processor is pretty new (i bought it like 1 month ago) and dont really know what i can do for resolve the issue :\


Maybe you could get a bit more FPS by enabling the "exclusive fullscreen" option.
That's also one of the conditions listed in the start post.


----------



## Felix123BU (Oct 13, 2021)

This... is just weird 

Reran this test on Windows 11, exact same hardware and settings...




That's 47 extra FPS, and CPU game is 48 FPS higher vs my previous best of 240.

Has anybody else noticed such huge jumps? I still can't believe it; reran the test 3 times, same results


----------



## Taraquin (Oct 13, 2021)

Felix123BU said:


> This...is just weird
> 
> Reran this test on Windows 11, exact same hardware settings...
> 
> ...


Yeah, I've seen others report the same. Too bad Win11 in general performs worse in many other games :/


----------



## Felix123BU (Oct 13, 2021)

Taraquin said:


> Yeah, seen others report the same. Too bad Win11 in general performs worse i many other games :/


Well, I tested the games I have, and with the exception of this same game (but only at higher resolutions) and CP2077, it's basically the same performance as it was in Win 10 across the board.

That's why it's really weird that in Shadow of the Tomb Raider at 1080p lowest I get a huge 47 FPS boost (and it's not a fluke, I reran the test like 10 times), but at 3440x1440 and 4K I get a 4-5 FPS reduction. All other games are within 1-2 FPS either way, so run-to-run variance.

And coupled with the L3 cache bug that AMD reported, which, as they say, can affect memory-intensive games and should be patched this month, I can only see positives in Win 11. Well, except the usual Windows telemetry and "spying" shenanigans


----------



## Taraquin (Oct 13, 2021)

Felix123BU said:


> Well, I tested the games I have, and with the exception of this same game, but only at higher resolutions, and CP2077, its basically the same performance as it was in WIN 10 across the board.
> 
> That's why its really weird that in Shadow of the Tomb Raider at 1080p lowest I get a huge 47FPS boost (and its not a fluke, reran the test like 10 times), but in 3440x1440 and 4K I get a 4-5 FPS reduction. All other games are 1-2FPS +/-, so run to run variance.
> 
> And coupled with the L3 cache bug that AMD reported, that, as they say, can affect memory intensive games, that should be patched this month, I can only see positives in WIN 11, well, except the usual Windows telemetry and "spying" shenanigans


Although SOTTR is a great game for testing HW and scales well with frequency, cores, timings etc., it has some weird limitations on certain setups. Getting over 200fps on Ryzen 3000 seems impossible even at 720p low; there is a wall around 180-190 fps. On my 3600 with 3733 B-die I could get 165fps CPU avg at 1080p highest, but getting over 185fps even at 720p lowest was impossible. Maybe Windows 11 has improved some things; it would be cool to see if Ryzen 3000 fares better on Win11.


----------



## Hyderz (Oct 13, 2021)

heres my results highest settings 1080p


----------



## Felix123BU (Oct 13, 2021)

Hyderz said:


> heres my results highest settings 1080p


Hi mate, please use 1080p *Lowest Profile* here  (see first page)

*Fullscreen
Exclusive Fullscreen
DirectX 12
DLSS OFF
Vsync OFF
Resolution 1920 X 1080
Anti-Aliasing OFF*

For 1080p Highest there is another thread Shadow of the Tomb Raider benchmark | TechPowerUp Forums

Thx!


----------



## jamse (Oct 13, 2021)

This has probably been done a billion times but wanted to test it for myself. Stock, XMP, XMP 3800 and XMP 3800 with better timings


----------



## the54thvoid (Oct 13, 2021)

Looks like the resolution doesn't match the thread 'standard'. A benchmark thread needs to follow set parameters from which to benchmark against. Resolution needs to be at 1080p.


----------



## AVATARAT (Oct 13, 2021)

My 5600X limits my 6700 XT in some games, so I want the new 5600X with 3D cache


----------



## Hyderz (Oct 13, 2021)

scores


----------



## Felix123BU (Oct 13, 2021)

AVATARAT said:


> My 5600x limit in some games my 6700 XT, so I want the new 5600x with 3D cache


If you get it, it would be sweet if you could post some scores with it  I am curious about both Zen 3D and Alder Lake


----------



## phanbuey (Oct 13, 2021)

Felix123BU said:


> If you get it, would be sweet if you could post some scores with it  I am curios about both Zen 3D and Alder Lake


Same - seems like Zen3 scales like crazy with memory and latency so I imagine that 3D cache + tuned memory would be nuts.


----------



## Hyderz (Oct 14, 2021)

lappy scores


----------



## Det0x (Oct 14, 2021)

Intel Core i9-12900K @ DDR4 3866 C14-14-14-34 2T Gear1
(seems like the memory controller has the same limitations as Rocket Lake, and ~8000 MT/s is needed for DDR5 to be faster than the fastest DDR4 we have today)



Unsure if this was done at 1080p highest or lowest, but there is a maximum difference of around ~15 CPU game fps between those two settings..
Either way, not really impressive in my book, I'm afraid


----------



## Felix123BU (Oct 14, 2021)

Det0x said:


> Intel Core i9-12900K @ DDR4 3866 C14-14-14-34 2T Gear1
> (seems like memory controller have same limitations as with Rocket Lake, and ~8000 MT/s are needed for DDR5 to be faster then the fastest DDR4 we have today)
> View attachment 220830
> Unsure if this was done at 1080p highest or lowest, but there are a maximum difference around ~15 game cpu fps between these two settings..
> Either way, not really impressive in my book i'm afraid


Yeah, I was just reading something similar regarding the DDR5 memory speeds needed to beat DDR4, and some other not-so-good news regarding Alder Lake. Let's see when they launch, maybe it will be better. Would still love to see some comparative benchmarks though    Competition is needed.


----------



## phanbuey (Oct 14, 2021)

Not terrible, but not impressive either...  especially since 3866 C14 is pretty optimized. A current-gen 5600X with those settings would smash it....


----------



## jamse (Oct 14, 2021)

Det0x said:


> Intel Core i9-12900K @ DDR4 3866 C14-14-14-34 2T Gear1
> (seems like memory controller have same limitations as with Rocket Lake, and ~8000 MT/s are needed for DDR5 to be faster then the fastest DDR4 we have today)
> View attachment 220830
> Unsure if this was done at 1080p highest or lowest, but there are a maximum difference around ~15 game cpu fps between these two settings..
> Either way, not really impressive in my book i'm afraid


It was highest; look at that GPU bound % too, it's 99%. Reviewing a CPU with GPU-bound settings in a game is kinda weird


----------



## Det0x (Oct 14, 2021)

jamse said:


> It was highest, look at that GPU bound % too it's 99%. *Reviewing a CPU with GPU bound settings in a game is kinda weird*


Are you new to this thread? We are comparing "CPU game", which relies purely on CPU+memory performance, not the GPU. 
Looks like a mobile-chip memory subsystem that had to be repurposed for desktop, and compromises were made..


----------



## mrthanhnguyen (Oct 14, 2021)

How can you tell it's a 12900K?


----------



## 95Viper (Oct 15, 2021)

Stay on topic.
Thank You.


----------



## Taraquin (Oct 15, 2021)

Some further testing at 1080p lowest, CPU avg fps, 3 rounds, all PBO with 76W limit:
4000cl16 1T CO+200 PBO, r-bar on: 237
4000cl16 1T CO+200 PBO, r-bar off: 245
4000cl16 1T 4.8GHz@1.32V, r-bar on: 240
4000cl16 1T 4.8GHz@1.32V, r-bar off: 248 (consumption 5-15W above CO+200 PBO)
3800cl15 1T CO+200 PBO, r-bar on: 234 (3800 needs 40mV lower SoC and IOD voltage, so the CPU runs 25-50MHz higher vs 4000cl16)
4000cl16 1T stock, r-bar on: 225
At stock the CPU runs 4.65 SC and 4.3 MC. With PBO+200 it runs 4.85 SC and 4.55 MC. 

Wrap-up: activating r-bar reduces CPU avg fps by about 3%. CO+200 PBO increases performance by around 5%. 4000cl16 is 1% faster than 3800cl15 in raw fps, but the CPU frequency at 4000 is a bit lower (since the IO die uses 1-2W more), so 4000 is actually about 2% faster clock-for-clock. 
In Aida, 3800cl15 gets 57/30/53 and 53.5ns.
4000cl16 gives 60/32/55 and 52ns.
4000cl16 at 4.8GHz gives 0.5ns lower latency, but the same R/W/C.
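To put a number on the r-bar cost, it can be computed straight from the averages above; a tiny sketch (the helper name is my own):

```python
# Percentage of CPU avg fps lost by enabling resizable BAR,
# using the before/after averages listed above.
def pct_loss(off_fps: float, on_fps: float) -> float:
    return (off_fps - on_fps) / off_fps * 100

print(round(pct_loss(245, 237), 1))  # 3.3 -- CO+200 PBO config
print(round(pct_loss(248, 240), 1))  # 3.2 -- 4.8 GHz all-core config
```

Both configs land at roughly the same ~3% penalty, which matches the wrap-up.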


----------



## Felix123BU (Oct 15, 2021)

Taraquin said:


> Some further testing at 1080p lowest cpu avg fps,  3 rounds, all pbo with 76W limit:
> 4000cl16 1T CO+200pbo r-bar on: 237
> 4000cl16 1T CO+200pbo r-bar off: 245
> 4000cl16 1T 4.8GHz@1.32V r-bar on: 240
> ...


Is that on Win 10? I noticed that in Win 11 Rebar actually gives me a slight improvement, in Win 10 I always got a slight regression in this bench.


----------



## Taraquin (Oct 15, 2021)

Felix123BU said:


> Is that on Win 10? I noticed that in Win 11 Rebar actually gives me a slight improvement, in Win 10 I always got a slight regression in this bench.


Yes, Windows 10. There still seem to be some issues with 11, so I'll wait


----------



## Felix123BU (Oct 15, 2021)

Taraquin said:


> Yes, windows 10. There seems to be some issues with 11 still so I wait


Yeah, MS doing the "good" work again, patching a Ryzen issue and making it worse; looking at the Aida mem bench, one could cry   

After this week's MS patch, which was supposed to fix this, mem latencies are worse by 5ns, and L3 cache is up to 10x slower 







Also lost 26 FPS in this bench after the last W11 update, though it's still 22 FPS faster than anything I could achieve on Win 10.

Curious about the actual fix; it could end up even faster than before in Win11.


----------



## Det0x (Oct 15, 2021)

Did some benchmarks @ daily 24/7 settings vs the fastest 11900K I've seen (5.3GHz + Cache 4.8GHz + Gear1 DR 4000-CL13) on another forum; I can share the SotTR run here as well.
This run was performed in Win10; my old 313 max run was done in Windows 11, which gives higher scores in this benchmark for some strange reason.
By mistake I also had rebar enabled for this run (gives higher render and GPU numbers, but lower CPU game).



(benchmark was run at 1080p fullscreen, minimized afterwards to show my settings)


----------



## Felix123BU (Oct 15, 2021)

Det0x said:


> Did some benchmarks @ daily 24/7 settings vs the fastest 11900k ive seen (5.3Ghz + Cache 4.8Ghz + Gear1 DR 4000-CL13) on a other forum, can share the SotTR run here also.
> This run was performed in win10, my old 313 max run was done in windows11 which gives higher scores in this benchmark for some strange reason.
> By mistake i also had rebar enabled for this run. (gives higher render and gpu numbers, but lower cpu game)
> View attachment 221001
> (benchmark was ran in 1080p fullscreen, minimized after to show my settings)


Damn, your RAM timings are out of this world 
So, what was the fastest run with the 11900K?
And yeah, I also get a lot better results in W11 than W10 in this bench


----------



## Det0x (Oct 15, 2021)

Felix123BU said:


> Damn, your Ram timings are out of this world
> So, what was the fastest run with the 11900k?
> And yeah, I also get a lot better results in W11 in this bench vs W10


Just click the "spoiler" in the linked post on that other forum; there are more games and information there. 


CPU : 11900K@5312Mhz (SP89)
Cache : 4802Mhz
Memory OC : 3950Mhz-13-13-13-14-215-2T (1:1)
M/B : ASUS ROG MAXIMUS XIII APEX (BIOS : 1102)
Memory : G.SKILL Trident Z Royal 32GB (2 x 16GB) (F4-4000C14D-32GTRG)
Voltages (Bios) : CPU 1.420v / RAM 1.610v / SA 1.440v / Mem OC IO 1.420v / vppddr 2.485v



This is the fastest Rocket Lake I have seen.


----------



## Felix123BU (Oct 15, 2021)

Det0x said:


> Just click the "spoiler" in the linked post to that other forum, there are more games and information there.
> 
> 
> CPU : 11900K@5312Mhz (SP89)
> ...


Ok, so me with my measly 5800X and not-so-great RAM at 3800MHz CL16 getting 287 FPS, and more importantly a better CPU game score than an uber maxed-out 11900K at 5.3GHz ...   What has the world become


----------



## mrthanhnguyen (Oct 16, 2021)

There are more games to compare. This game favors AMD more. My binned 10900K in F1 2020 and Metro Exodus


http://imgur.com/a/dSHgSG3


Try Win11 with 24/7 settings. Linpack Xtreme passes.


----------



## Udyr (Oct 16, 2021)

*My decent lowest result*




*My modest highest result*


----------



## Det0x (Oct 16, 2021)

mrthanhnguyen said:


> There are more games to compare. This game favors AMD more. My binned 10900k in f1 2020 and metro exodus 2033
> 
> 
> http://imgur.com/a/dSHgSG3


Isn't your 5950X also binned?



Spoiler: Anyway here are my F1 + Metro scores:












Zen 3 seems just as strong compared to everything else in these game benches as well. (?)

Have saved all my new game benchmarks here, so I have a comparison against Alder Lake and Zen 3 V-cache when they get released


http://imgur.com/a/4K9Zob0


----------



## mrthanhnguyen (Oct 16, 2021)

I bought from CENS.


----------



## Taraquin (Oct 16, 2021)

Udyr said:


> *My decent lowest result*
> View attachment 221039
> 
> *My modest highest result*
> View attachment 221040


Post your ZenTimings and we can help you tweak, if you want. My 3600 with 3733 Micron Rev.E did about 155fps CPU game avg, and with 3733 B-die it did 165fps at 1080p highest. Even with budget RAM like Rev.E you can improve performance by 40% if you want.


----------



## Udyr (Oct 16, 2021)

Taraquin said:


> Post your zentimings and we can help you tweak if you want? My 3600 with 3733 micron rev E did about 155fps cpu game avg fps and with 3733 B-die it did 165fps 1080p highest. Even with budget ram like rev E you can improve performance by 40% if you want?


----------



## Taraquin (Oct 17, 2021)

Do you know what die your RAM is? You can download Thaiphoon Burner and read the SPD; it will say, for instance, Hynix A, Micron B, etc. 

General tip: try setting speed to 3200 and Infinity Fabric to 1600 with all timings on auto, then continue to raise it until it won't boot. Reset the BIOS and go back to the last config that booted; then you can begin to tweak timings


----------



## Udyr (Oct 17, 2021)

Taraquin said:


> Do you know what die your ram is? You can download thaiphoon burner and read spd, there it will say for instance hynix A, Micron B etc.
> 
> General tip, try setting speed to 3200 and infinity fabric 1600, all timings on auto, continiue to raise it until it won't boot. Reset bios and go back to last config that booted, then you can begin to tweak timings


Hynix A


----------



## Taraquin (Oct 17, 2021)

Udyr said:


> Hynix A


Okay, the usual max for Hynix A is 3200-3600. Try 3600/1800 with all timings on auto; if it doesn't boot, try 3533/1766, 3466/1733, etc. Once you find the highest speed, try the following safe settings: RAM voltage 1.45V, CL16, tRCDRD 20, tRCDWR 8, tRP 20, tRAS 40, tRC 60, tFAW 32, tRRDS 8, tRRDL 10, tWR 16, tRTP 8, tRFC 480, SCLs 4, tWTRS 4, tWTRL 12, rest on auto, and enable gear-down mode. You might be able to run tighter timings; try this first
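For reference, a timing given in clock cycles converts to nanoseconds as cycles × 2000 / transfer rate (the memory clock is half the MT/s), so the tRFC of 480 suggested above works out to roughly the same absolute time at each of those speeds. A quick sketch:

```python
def cycles_to_ns(cycles, mts):
    """Convert a DDR timing from clock cycles to nanoseconds.
    The memory clock is half the transfer rate, so one cycle = 2000/MT/s ns."""
    return cycles * 2000 / mts

for mts in (3600, 3533, 3466):
    print(f"tRFC 480 @ {mts} MT/s = {cycles_to_ns(480, mts):.0f} ns")
```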


----------



## Det0x (Oct 18, 2021)

Det0x said:


> Seems like i have to wait for Alder lake for someone else to cross the 300 fps line and i can start pushing again..
> My old record with everything maxed and not stable was 314 average cpu fps i think, with these new memory stick and settings it should be good for ~320-330 fps when running bench settings
> Looking forward what the 5950 xt 3dnow! edition can do in this bench when i get it


mrthanhnguyen also broke the 300fps barrier, so I had to do a new run with daily 24/7 settings

339 cpu average fps




Also recorded a video for those who want to see what kind of fps it takes to reach these average numbers 









*edit*
*Seems there was a new patch for SotTR released yesterday. Don't know how/if this affects performance!*


----------



## Taraquin (Oct 18, 2021)

Got a new patch yesterday; performance on CPU game avg is up by almost 30 fps  This run is with the 5600X, CO + 200MHz PBO, MB limits, 4000cl16 RAM and 1T  The best I had got earlier with these settings was 237 fps 






Det0x said:


> mrthanhnguyen also broke the 300fps barrier, so had to do a new run with daily 24/7 settings
> 
> *339 cpu average fps*
> View attachment 221388
> ...


They released a patch yesterday, I think, which boosted performance by at least 10%, in Windows 10 at any rate


----------



## Felix123BU (Oct 18, 2021)

Taraquin said:


> Got a new patch yesterday. performance on CPU game avg is up by almost 30 fps  This is run with 5600X CO + 200MHz pbo, MB-limits, 4000cl16 ram and 1T  The best I have got earlier with this setting was 237 fps
> View attachment 221405
> 
> 
> They released a patch yesterday I think which boosted peformance by 10% atleast in Windows 10


Hmm, that's the same amount of extra FPS I first got in Windows 11 in this bench, which makes me think that some extra performance for Ryzen CPUs was always on the table, but for years Microsoft did not bother to optimize W10 much, and now the Windows 11 gains are trickling down to Windows 10....


----------



## Det0x (Oct 18, 2021)

Felix123BU said:


> Hmm, that's the same amount of extra FPS I got first in Windows 11 in this bench, which makes me think that some extra performance for Ryzen CPU's was always on the table, but for years Microsoft did not bother to optimize the W10 system too much, and now the gains from Windows 11 trickle down to Windows 10....


Nizzen got the same ~+30 fps gain on Windows 11


----------



## Felix123BU (Oct 18, 2021)

Det0x said:


> Nizzen got same ~ +30 fps gain on windows11


That's weird, I was just looking at that result before seeing this post and asking myself if that was you


----------



## harm9963 (Oct 18, 2021)

Nice bump from 240 to 246


----------



## Det0x (Oct 19, 2021)

My new windows11 install


----------



## phanbuey (Oct 19, 2021)

Those fps numbers don't even make sense anymore.


----------



## harm9963 (Oct 19, 2021)

like fine wine


----------



## mrthanhnguyen (Oct 19, 2021)

Water temp was too high and I figured it would crash in F1, so I lowered settings a bit and got these. F1 is demanding like the Battlefield series; SotTR is not that demanding.


----------



## Taraquin (Oct 19, 2021)

Felix123BU said:


> Hmm, that's the same amount of extra FPS I got first in Windows 11 in this bench, which makes me think that some extra performance for Ryzen CPU's was always on the table, but for years Microsoft did not bother to optimize the W10 system too much, and now the gains from Windows 11 trickle down to Windows 10....


I wonder what goes on in Win11 that gives the 10% boost vs Win10. It seems the patch boosted both 10 and 11 by around 10%.


----------



## natr0n (Oct 19, 2021)

The newest update removes Denuvo. Should get higher fps now.


----------



## Taraquin (Oct 19, 2021)

mrthanhnguyen said:


> water temp is too high and figure out it would crash in f1 so lower setting a bit and got these. F1 is so demanding like Battlefield series. Sotr is not that demanding.
> 
> 
> View attachment 221448View attachment 221449


Do you run an all-core OC or Curve Optimizer? I really recommend the latter 



natr0n said:


> The newest update removes Denuvo. Should get higher fps now.


That makes sense, but Denuvo causing a 10% fps drop is very high!


----------



## mrthanhnguyen (Oct 19, 2021)

Taraquin said:


> You run allcore OC or curve optimizer? I really recommend the latter
> 
> 
> That makes sense, but Denovo causing 10% fps drop is very high!


Hydra.


----------



## Felix123BU (Oct 19, 2021)

Has anybody got the W11 update that's supposed to fix the L3 cache? It was supposed to come out today.


----------



## harm9963 (Oct 19, 2021)

240 old  vs 251 now


----------



## mrthanhnguyen (Oct 19, 2021)

35c water. Will try again with cold water.


----------



## Felix123BU (Oct 19, 2021)

mrthanhnguyen said:


> 35c water. Will try again with cold water.
> 
> View attachment 221505View attachment 221506


And the fight is ON!


----------



## harm9963 (Oct 20, 2021)

Cooler temps  today


----------



## Taraquin (Oct 20, 2021)

Crap, do not upgrade to the newest patch. DRM is apparently back, lost 27fps :/


----------



## harm9963 (Oct 20, 2021)

Taraquin said:


> Crap, do not upgrade to the newest patch. DRM is apparently back, lost 27fps :/


Just roll back to 449, that's what I did.
Also got a Win11 bench: 256 vs 253 on Win10.


----------



## mrthanhnguyen (Oct 20, 2021)

How to roll back?


----------



## harm9963 (Oct 21, 2021)

mrthanhnguyen said:


> How to roll back?


Go to the properties of the game in Steam, go to Betas, scroll down to 449, check the code, and it will patch back, that's it


----------



## mrthanhnguyen (Oct 21, 2021)

What code?


----------



## harm9963 (Oct 21, 2021)

ok


----------



## DanglingPointer (Oct 21, 2021)

Hi Window$ Lads,

Something different for you Windower$ to gaze at...

Linux box here:

Ubuntu 20.04 on Linux kernel 5.13.19
On-the-fly Feral proprietary Vulkan translation over DX12 (so there's Vulkan translation overhead)
Open-source Mesa drivers using RADV for Vulkan (this runs faster than AMD's own proprietary drivers!)  Basically what Valve will be using in their Steam Deck.
VSYNC off
AMD FidelityFX on max
SMAA T2x AA
Highest graphics settings for everything (well, whatever is available on Linux)
6900XT Liquid Devil Ultimate stock
Ryzen 5800 all-core OC at 4.6GHz
Tested at 1080p and 1440p
Underneath you will note that at 1080p the GPU is only 51% bound!  That's with Vsync off, so there's a CPU bottleneck here.  The added DX12-to-Vulkan translation is perhaps limiting the max compared to a native Vulkan title.
At 1440p the GPU is 97% bound!

However, all that said, the age of Linux gaming has WELL and TRULY ARRIVED!  Basically anything over 60 FPS is good!  You can arguably play almost every single game from DX9-12 with ease thanks to on-the-fly Vulkan translation, either through Proton on Steam, DXVK, or proprietary layers like those from Feral or Aspyr.  

So for those who want to bugger off Window$, now's the time if games are the only thing holding you back!

*1080 Graphics *



*1080 Display*



*1440 Graphics *



*1440 Display*


----------



## Taraquin (Oct 21, 2021)

DanglingPointer said:


> Hi Window$ Lads,
> 
> Something different for you Windower$ to geez at...
> 
> ...


Linux has potential. It's less complex than Windows, and Vulkan is usually superior to DX performance-wise


----------



## Felix123BU (Oct 21, 2021)

DanglingPointer said:


> Hi Window$ Lads,
> 
> Something different for you Windower$ to geez at...
> 
> ...


Nice, could you please upload one result with the 1080p Lowest preset, since that's the theme of this thread? Thx


----------



## Det0x (Oct 21, 2021)

Felix123BU said:


> And the fight is ON!


Last benchmarks from me for a while, going on holiday for a few weeks  

New SotTR run and more gamebenches @


http://imgur.com/a/RwxpB0T

(the 490 average fps in F1 2020 is also decent)

Also recorded the 353 average fps run


----------



## Felix123BU (Oct 21, 2021)

Win 11 update dropped, fixing the L3 cache bug, and whoops....





That being with a 4.85GHz all-core OC; with PBO it's only 290fps. Not bad for one Win update, plus 17fps here


----------



## phanbuey (Oct 21, 2021)

Might have to pick up a used 5xxx chip once they start dropping in price -- so much headroom in these.

I can get my 10850K to like ~152-160 CPU Game min FPS and that's pretty much the max.


----------



## Taraquin (Oct 21, 2021)

Det0x said:


> Last benchmarks from me in a while, going on holiday for a few weeks
> 
> New SotTR run and more gamebenches @
> 
> ...


Nice. What patch is this on? I tried rolling back to the 445 beta, I think it was, but avg fps sits around 245-250 and not the 260-265 I had on the previous official patch. The patch that came yesterday dropped fps to 237.


----------



## Det0x (Oct 21, 2021)

Taraquin said:


> Nice. What patch is this on? I tried rolling back to the 445 beta, I think it was, but avg fps sits around 245-250 and not the 260-265 I had on the previous official patch. The patch that came yesterday dropped fps to 237.





harm9963 said:


> Just roll back to 449 ,that's what I did.



You want 449


----------



## Taraquin (Oct 21, 2021)

Okay, I'll try that


----------



## DanglingPointer (Oct 22, 2021)

Felix123BU said:


> Nice, could you please upload one result with 1080p Lowest preset, since that the theme of this thread? Thx


Here you go, lowest settings, everything off...

It barely used the GPU, at only 12% bound!

*1080p*


----------



## Felix123BU (Oct 22, 2021)

DanglingPointer said:


> Here you go, lowest settings everything off,...
> 
> It barely used the GPU at 12% bound!
> 
> ...


That's really good, practically the same as my 5800X on Windows. Pity that Linux is still a giant pain in the ass for the normal user, techy people aside


----------



## DanglingPointer (Oct 22, 2021)

Felix123BU said:


> That's really good, practically same as my 5800x on Windows, pity that Linux is still a giant pain in the ass for the normal user, not talking about techy people


I think it is more about habit and custom.  My retired oldies are 70+ each and have only ever used Ubuntu for the last 8 years.

That said, they only use the browser for Facebook, Zoom to family, and LibreOffice for the odd document now and then.

My 5-year-old son has only ever known Ubuntu, Xubuntu, macOS and iPadOS.  He has never seen Windows!  His favourite toy is the Dolphin Wii emulator (Kirby and Mario games), and he knows how to get them all working.   So for my oldies and my son, going to Windows would probably feel like the opposite of what most Gen X-Y people feel when they load up a Linux distro.  I myself sometimes struggle with Windows when I have to dabble with it for work (some customers), but luckily most of my customers use Linux on their servers.


----------



## Lew Zealand (Oct 22, 2021)

My first go at the CPU side of this benchmark.  Only a 9700F, on a B360 mobo, so the K SKU wasn't worth it, and I'm stuck at DDR4-2666 CL13 as well.

Lowest settings, everything off.  Still 41% GPU bound; I expected better, and looking at Afterburner it seems way more GPU bound than that, with the GPU at 94-97% pretty much the whole time.


----------



## DanglingPointer (Oct 22, 2021)

Lew Zealand said:


> My first go at the CPU side of this benchmark.  Only a 9700f.  B360 Mobo so not worth the K sku and stuck to 2666 CL13 DDR4 only as well.
> 
> Lowest settings, everything off.  Still 41% GPU bound, I expected better and when you look at Afterburner, it seems way more GPU bound than that with GPU at 94-97% pretty much the whole time.
> 
> View attachment 221814


...


DanglingPointer said:


> If you want to really push the CPU only, turn on AMD FidelityFX CAS and slide the Resolution Modifier bar to the left.


Actually my bad, I don't think it made much of a difference with or without FidelityFX when it comes to CPU usage!  All turning it on did was use the GPU 3x more!  Makes sense logically, since FidelityFX is for the most part processed on the GPU.

Turning FidelityFX CAS 'off' brought it to 4% GPU bound (image below)!

With FidelityFX CAS 'on' it was 12% GPU bound (screenshot in one of my previous posts above https://www.techpowerup.com/forums/...ame-benchmark-discussions.280562/post-4632926).  But the CPU stats are roughly the same.






----------



## Taraquin (Oct 22, 2021)

Lew Zealand said:


> My first go at the CPU side of this benchmark.  Only a 9700f.  B360 Mobo so not worth the K sku and stuck to 2666 CL13 DDR4 only as well.
> 
> Lowest settings, everything off.  Still 41% GPU bound, I expected better and when you look at Afterburner, it seems way more GPU bound than that with GPU at 94-97% pretty much the whole time.
> 
> View attachment 221814


You can easily boost fps by 5-10% by tweaking timings even if you're stuck at 2666. Do you know what RAM die you have? I have an i5 8400 with crappy Hynix AFR running 2666 13-16-16. Set voltage to 1.4V. If you have Micron E/H or Hynix C/D/J I bet you can run 12-14-14. Try getting tRFC down, set tRRDS/tRRDL to 4/4 or 4/6, tFAW to 16, tWR to 12 or 10 and tRTP to half of that. tREFI as close to 65k as possible. If you have Hynix A/C/D/J try tRFC 304, 312 or 320; if Micron, try 384, 392 or 400. If you have Micron B 8Gb try 256, 264 or 272, and with Hynix A you can try 272, 280 or 288.

Undervolting your CPU will probably let it run all-core faster, since you free up more power budget. Try an offset of -50 on cache and core; they might do a lot more. My 8400 does -120 core, -160 cache.
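Assuming the standard DDR4 relationship of one cycle = 2000 / MT/s nanoseconds, the tRFC candidates and the tREFI target above translate into absolute times like this (a rough sketch, values from the post above):

```python
MTS = 2666  # transfer rate the kit is stuck at (MT/s)

def to_ns(cycles, mts=MTS):
    # memory clock is mts/2 MHz, so one cycle lasts 2000/mts nanoseconds
    return cycles * 2000 / mts

for trfc in (304, 312, 320):
    print(f"tRFC {trfc} = {to_ns(trfc):.0f} ns")

# tREFI as close to 65k as possible:
print(f"tREFI 65000 = {to_ns(65000) / 1000:.1f} us between refresh commands")
```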


----------



## k1xxlikeamule (Oct 22, 2021)

The GTX 1080 still gets the job done pretty well; it's a smooth ride at 1440p ULTRA settings, which is what I run when not benchmarking... here's my score, running the newly released Definitive Edition of SOTTR on Win11


----------



## Felix123BU (Oct 22, 2021)

k1xxlikeamule said:


> GTX 1080 still gets the job done pretty well, its a smooth ride @1440p ULTRA settings that I run when not benchmarking... here's my score, running the new released Definitive Editon of SOTTR on Win11


Yup, anything close to a GTX 1080 in today's crazy market is a keeper


----------



## OICU812 (Oct 22, 2021)

I'm trying to use the SOTTR demo as a benchmark for RAM/PBO tuning, but it has not been consistent for me.  Is this demo-version behavior, or does the full version do this too?  No changes between runs, one right after another, yet a 15fps difference between the two.  5800X - Crosshair VII - 2x16GB 3800CL14

Run 1



Run 2


----------



## Lew Zealand (Oct 22, 2021)

Taraquin said:


> You can easily boost fps by 5-10% by timingstweaking even if stuck at 2666. Do you know what ram-die you have? I have a i5 8400 with crappy Hynix AFR running 2666 13-16-16. Set volt to 1.4V. If you have Micron E/H or Hynix C/D/J I bet you can run 12-14-14 and Try getting tRFC down, get trrds and l to 4/4 or 4/6, tfaw to 16, twr to 12 or 10 and trtp half of that. Trefi as close to 65k as possible. If you have Hynix A/C/D/J try trfc 304, 312 or 320, if Micron try 384, 392 or 400. If you have Micron B 8gb try 256, 264 or 272 or Hynix A you can try 272, 280 or 288.
> 
> Undervolting you CPU will probably make it run allcore faster since you free up more pwr-budget. Try offset og -50 on cache and core, they might do a lot more. My 8400 do - 120 core - 160 cache.



I don't know what ram-die I have as every time I look for a tool, there are warnings that the tools may not be accurately reading the chipmaker.  I'd certainly like to optimize those RAM timings as currently it's running at 2666 13-13-13-30 (it's a 3200 16-16-16-36 set) set manually but nothing else set up.  Do you have an app/tool rec to pin down what I have, or do I need to remove a heatsink and look directly?  In any case, I'll try out some of those timings especially if they're lower than my auto-set ones.

I don't need to UV the CPU as I just raise the power limit in ThrottleStop, and anyway this CPU doesn't tolerate any UV when run at its 4.5GHz all-core turbo.  I game at full speed, but when running something that uses all cores >90% (Handbrake), I run it at 4.2GHz with a -0.04V undervolt, which saves some power (~115W vs. 135-140W) with minimal speed reduction.  No UV tolerance at 4.5GHz ACT is just sample variation (maybe a cheap mobo, too - ASRock B360M Pro4), as all the other Intel CPUs I've owned have UV'd very well, but they're also all 3.8GHz and lower at all-core turbo, including the previous i5-8400 in this same system.


----------



## Taraquin (Oct 23, 2021)

Lew Zealand said:


> I don't know what ram-die I have as every time I look for a tool, there are warnings that the tools may not be accurately reading the chipmaker.  I'd certainly like to optimize those RAM timings as currently it's running at 2666 13-13-13-30 (it's a 3200 16-16-16-36 set) set manually but nothing else set up.  Do you have an app/tool rec to pin down what I have, or do I need to remove a heatsink and look directly?  In any case, I'll try out some of those timings especially if they're lower than my auto-set ones.
> 
> I don't need to UV the CPU as I just raise the power limit in Throttlestop and anyway, this CPU doesn't tolerate any UV it when run at all core turbo 4.5GHz.  I game at full speed but when running something that does use all cores >90% (Handbrake), I use it at 4.2GHz with a -0.04V undervolt which saves some power (~115W vs. 135-140W) with minimal speed reduction.  No UV tolerance at 4.5GHz ACT is just sample variation (maybe cheap Mobo, too - ASRock B360M Pro4), as all other Intel CPUs I've owned have UV'd very well but they're also all 3.8GHz and lower @act, including the previous i5-8400 in this same system.


16-16-16 usually means B-die, but a low bin  Try for instance 1.4V DIMM voltage, 12-12-12-24, tRFC 200, tFAW 16, tRRDS 4, tRRDL 4, tWR 12, tRTP 6, tREFI 50000+.


----------



## Tanzmusikus (Oct 28, 2021)

@OICU812 
You are right, the SotTR benchmark is inconsistent in its FPS results.


----------



## phanbuey (Oct 28, 2021)

You may also have something running in the background... whenever my fps varies wildly in SOTTR, it's because Windows Search is trying to index while I'm benching.
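One way to tell tuning gains from background noise is to look at the spread across repeated runs; a rough sketch (the fps values here are made-up placeholders):

```python
import statistics

# CPU Game avg fps from back-to-back runs (made-up placeholder values)
runs = [275, 290, 288, 274, 289]

mean = statistics.mean(runs)
cv = statistics.stdev(runs) / mean * 100  # coefficient of variation, in %
print(f"mean {mean:.1f} fps, run-to-run spread +/-{cv:.1f}%")
# a spread much above ~1-2% usually points to background load, not the tweak being tested
```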


----------



## DanglingPointer (Oct 29, 2021)

After an upgrade to Linux kernel 5.14.14, custom-optimised for Zen 3 (with GCC 11), and a slight 25MHz bump to the all-core clock to 4.65GHz, I'm now consistently over 44k frames rendered and averaging 295 FPS...   
I'm looking forward to the kernel 5.15.x release.  AMD has put a tonne of optimisations and changes into the kernel for both Ryzen and the AMDGPU kernel driver.

Something to note is that it is now only 1% GPU bound!

[updated edit] - Using Mesa 21.2.4 - kisak-mesa drivers.


----------



## Taraquin (Oct 29, 2021)

Impressive Linux performance. Wish we had Vulkan on Windows as well.


----------



## Kissamies (Oct 29, 2021)

Everything on max and RT on medium


----------



## DanglingPointer (Oct 29, 2021)

Taraquin said:


> Impressive Linux performance. Wish we had Vulkan for Windows aswell.


GCC 12 is also due early next year. It would be interesting to benchmark the new Linux kernel built with it, optimised natively for Zen 3!

Also, what I failed to mention in the previous post is that the Mesa video drivers were the new kisak-mesa drivers built with LLVM-13/Clang-13!

It would be interesting to see if the game runs any faster with a kernel built with LLVM-13/Clang-13, but unfortunately I run it on a workstation that doubles as a 24/7 lab server, so I can't run LLVM/Clang-built Linux kernels because guest VMs don't work with them. So I'm stuck with GCC.


----------



## mrthanhnguyen (Oct 30, 2021)

24.2c ambient, 30c water.




Able to match the fastest 11900k in HZD


----------



## DanglingPointer (Oct 30, 2021)

Taraquin said:


> Impressive Linux performance. Wish we had Vulkan for Windows aswell.


The game is still DX12 but translated to Vulkan using a wrapper developed by Feral Interactive to run on Linux, so there's that extra layer of overhead.  If the game were native Vulkan, it would perhaps be closer to mrthanhnguyen's results, as he is also running a 6900XT, with a slightly more powerful CPU.


----------



## Det0x (Oct 31, 2021)

mrthanhnguyen said:


> 24.2c ambient, 30c water.
> 
> View attachment 222944
> Able to match the fastest 11900k in HZD
> View attachment 222969


Nicely done, welcome to the exclusive 353 fps club 
Looking forward to what you can do with Alder Lake if you can get the block mounted


----------



## mrthanhnguyen (Oct 31, 2021)

Det0x said:


> Nicely done, welcome to the exclusive 353 fps club
> Looking forward what you can do with Alder lake if you can get the block mounted


Nah, still can't find DDR5 to pre-order.


----------



## Det0x (Oct 31, 2021)

mrthanhnguyen said:


> Nah, still cant find ddr5 to pre order.


Not the best specs, but in stock atm and ready for shipping at least


----------



## mrthanhnguyen (Nov 4, 2021)

http://imgur.com/a/tGpfjqK


----------



## phanbuey (Nov 4, 2021)

Ddr4?


----------



## mrthanhnguyen (Nov 4, 2021)

phanbuey said:


> Ddr4?


Ddr5 6400c36. Not even tuned yet.


----------



## caroline! (Nov 4, 2021)

Meanwhile my 5700XT isn't able to get past 67 FPS for some reason, and the game randomly crashes. Thankfully I got it as a gift, because it's literally unplayable on my computer; I reinstalled the game and the drivers and tried tuning, so I guess it's another green-team exclusive. DX11 performance is even worse: it tops out at 32 FPS, and the benchmark runs at like 8 in the Mexico scene.
Any ideas, or is it unironically an nvidia-only game like Metro: Exodus Enhanced?


----------



## phanbuey (Nov 4, 2021)

caroline.v said:


> Meanwhile my 5700XT isn't able to get past 67 FPS for some reason and the game randomly crashes. Thankfully I got it as a gift because it's literally unplayable on my computer, reinstalled it, drivers, tried tuning, guess it's another green team exclusive. DX11 performance is even worse, tops at 32 FPS and the benchmark runs at like 8 in the Mexico scene.
> Any ideas or is it unironically an nvidia-only game like Metro:Exodus Enhanced?



What resolution are you at??  At 1080p you should be getting north of 100 fps.


----------



## DanglingPointer (Nov 4, 2021)

caroline.v said:


> Meanwhile my 5700XT isn't able to get past 67 FPS for some reason and the game randomly crashes. Thankfully I got it as a gift because it's literally unplayable on my computer, reinstalled it, drivers, tried tuning, guess it's another green team exclusive. DX11 performance is even worse, tops at 32 FPS and the benchmark runs at like 8 in the Mexico scene.
> Any ideas or is it unironically an nvidia-only game like Metro:Exodus Enhanced?


It's not a green-team exclusive.  It's on all the consoles, which are almost all AMD hardware.

My benchmark screenshots have all been on an AMD 6900XT



caroline.v said:


> Meanwhile my 5700XT isn't able to get past 67 FPS for some reason and the game randomly crashes. Thankfully I got it as a gift because it's literally unplayable on my computer, reinstalled it, drivers, tried tuning, guess it's another green team exclusive. DX11 performance is even worse, tops at 32 FPS and the benchmark runs at like 8 in the Mexico scene.
> Any ideas or is it unironically an nvidia-only game like Metro:Exodus Enhanced?


There's a 5700XT in these benchmarks running on Linux, using the Feral Vulkan translation layer on top of DX12...

You can see the 5700XT at 1080p High with AA = SMAA doing 132FPS!  So, as @phanbuey said, you should be well north of 100 FPS running natively with DX on Windows


----------



## caroline! (Nov 4, 2021)

phanbuey said:


> What resolution are you at??  At 1080P you should be getting north of 100 fps.


1440p, but I also tried 1080 and 720; performance always stays around those numbers, definitely not over 100 at all. I've disabled tessellation and set everything to low/off; apparently PureHair can't be disabled. 



DanglingPointer said:


> My benchmark screenshots have all been on a AMD 6900XT


Your card has raytracing but mine doesn't; it's the only thing I can think of. Everything else works just fine, even Skyrim with a ton of mods applied has decent performance. Been reading about versions, and mine is 1.0.449.0, which *should* perform better than the rest because it lacks the Shituvo DRM that hogs resources.

What's funny is that performance on Low and High is exactly the same, which looks like a hardware issue, but I'm not sure what exactly is causing it. Nothing's overclocked, XMP enabled, PBO disabled. Maybe I'll reinstall the game again tomorrow and make sure even the savegame is gone; I wasn't that far in anyway, just about 5%. Random crashes are definitely worse than the crap performance, which I'm used to since my old PC was terrible.


----------



## phanbuey (Nov 4, 2021)

caroline.v said:


> 1440p but also tried 1080 and 720, performance always stays around those numbers, definitely not over 100 at all. I've disabled tessellation and set everything to low/off, apparently PureHair can't be disabled.
> 
> 
> Your card has raytracing but mine doesn't, it's the only thing I can think of. Everything else works just fine, even Skyrim with a ton of mods applied to it has decent performance. Been reading about versions and mine is 1.0.449.0 which *should* have better performance than the rest because it lacks Shituvo DRM that hogs resources.
> ...


Are all your power settings to max performance?


----------



## Deleted member 202104 (Nov 4, 2021)

caroline.v said:


> 1440p but also tried 1080 and 720, performance always stays around those numbers, definitely not over 100 at all. I've disabled tessellation and set everything to low/off, apparently PureHair can't be disabled.
> 
> 
> Your card has raytracing but mine doesn't, it's the only thing I can think of. Everything else works just fine, even Skyrim with a ton of mods applied to it has decent performance. Been reading about versions and mine is 1.0.449.0 which *should* have better performance than the rest because it lacks Shituvo DRM that hogs resources.
> ...


Here's one from the other thread:

2700x / 5700 XT / 1080p - High









						Shadow of the Tomb Raider benchmark
					





					www.techpowerup.com


----------



## Athlonite (Nov 4, 2021)

caroline.v said:


> Meanwhile my 5700XT isn't able to get past 67 FPS for some reason and the game randomly crashes. Thankfully I got it as a gift because it's literally unplayable on my computer, reinstalled it, drivers, tried tuning, guess it's another green team exclusive. DX11 performance is even worse, tops at 32 FPS and the benchmark runs at like 8 in the Mexico scene.
> Any ideas or is it unironically an nvidia-only game like Metro:Exodus Enhanced?


Yeah, that's really weird. I had a non-XT 5700 and it got way better results than what you're getting, so there's definitely something fishy going on with your card. Maybe it's not actually an RX 5700 XT


----------



## Kurt63 (Nov 4, 2021)

Please forgive my ignorance here, but is this in the game itself, or a benchmark I can download?


----------



## DanglingPointer (Nov 4, 2021)

Kurt63 said:


> Please forgive my ignorance here, but is this in the game itself or a benchmark I can download ?????


The actual game. It comes with a built-in benchmark to test your settings.


----------



## QuietBob (Nov 4, 2021)

Kurt63 said:


> Please forgive my ignorance here, but is this in the game itself or a benchmark I can download ?????


You can download a free demo from Steam that also includes the benchmark. It supposedly runs on an older engine and produces lower results. Since I don't own the full game, I can't confirm this myself.


----------



## caroline! (Nov 4, 2021)

phanbuey said:


> Are all your power settings to max performance?


yup, I'm using the Ryzen power plan


Athlonite said:


> Yeah, that's really weird. I had a non-XT 5700 and it got way better results than what you're getting, so there's definitely something fishy going on with your card. Maybe it's not actually an RX 5700 XT


Card is about 2 years old already.
I've reinstalled it and now I'm getting about 70 FPS, plus the benchmark isn't stuttering anymore; crashes persist tho.

A friend of mine is running it with a 9600K + 3070 Ti and gets twice the FPS I do, weird.


----------



## Felix123BU (Nov 4, 2021)

caroline.v said:


> yup, I'm using the Ryzen power plan
> 
> Card is about 2 years old already.
> I've reinstalled it and now I'm getting about 70 FPS + the benchmark isn't stuttering anymore, crashes persist tho.
> ...


what CPU do you have? and what RAM?


----------



## caroline! (Nov 5, 2021)

Felix123BU said:


> what CPU do you have? and what RAM?


5800X and G.Skill Trident Z 2x8 @3733MHz

I've disabled fullscreen optimizations and gained an additional 5 FPS overall, feel like I'm going places.


----------



## Felix123BU (Nov 5, 2021)

caroline.v said:


> 5800X and G.Skill Trident Z 2x8 @3733MHz
> 
> I've disabled fullscreen optimizations and gained an additional 5 FPS overall, feel like I'm going places.


That's really weird; your GPU seems to be heavily underperforming. I would not rule out a faulty GPU, as in a hardware problem; normally with a 5800X and G.Skill Trident Z @ 3733 MHz it should be a lot faster. One thing to check: is that 3733 MHz RAM 100% stable? Is it clocked 1:1:1, i.e. are FCLK and UCLK running at 1866.5 MHz? You might want to run a couple of Y-Cruncher tests, all of the tests; it's very good at finding stability issues. It could be that a 1866 MHz FCLK is not stable, and that can cause a lot of weirdness and random crashes, including degraded GPU performance.
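For reference, the 1:1:1 relationship works out like this. A quick sketch, assuming Zen 3's coupled-clock scheme; `coupled_clocks` is my own illustration, not a real tool:

```python
# DDR4 is double data rate: the memory clock (MCLK) is half the DRAM data
# rate, and for 1:1:1 operation UCLK (memory controller) and FCLK (Infinity
# Fabric) must both match MCLK.
def coupled_clocks(data_rate_mts):
    """Return the MCLK/UCLK/FCLK (MHz) needed for 1:1:1 at a DDR4 data rate."""
    mclk = data_rate_mts / 2  # two transfers per clock
    return {"MCLK": mclk, "UCLK": mclk, "FCLK": mclk}

print(coupled_clocks(3733))  # {'MCLK': 1866.5, 'UCLK': 1866.5, 'FCLK': 1866.5}
```

So a "3733" kit needs FCLK around 1866/1867 to stay coupled; if FCLK falls back to a lower auto value, the fabric runs desynced and performance drops.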


----------



## caroline! (Nov 5, 2021)

Felix123BU said:


> That's really weird; your GPU seems to be heavily underperforming. I would not rule out a faulty GPU, as in a hardware problem; normally with a 5800X and G.Skill Trident Z @ 3733 MHz it should be a lot faster. One thing to check: is that 3733 MHz RAM 100% stable? Is it clocked 1:1:1, i.e. are FCLK and UCLK running at 1866.5 MHz? You might want to run a couple of Y-Cruncher tests, all of the tests; it's very good at finding stability issues. It could be that a 1866 MHz FCLK is not stable, and that can cause a lot of weirdness and random crashes, including degraded GPU performance.


Hmm, memory is running at 1866 and FCLK is set to auto; Ryzen Master reports it as 1866 as well. I could try manually setting everything in the BIOS, I don't like tinkering with the mobo via Windows.
I'll try Y-cruncher, haven't heard of that program before; if something fails I'll set everything via BIOS and try again.

About the game, I was able to play for about an hour with no issues -at nearly 80 fps with dips to 50-60 in parts of the jungle- and decided to pause to get some water, when I came back (roughly 5 minutes) I found my desktop with an empty taskbar meaning the game had crashed.


----------



## Det0x (Nov 11, 2021)

@SuperMumrik 
Maybe upload SotTR run here also ? 

Still on DDR4? Very good min and 95% fps


----------



## SuperMumrik (Nov 11, 2021)

Det0x said:


> @SuperMumrik
> Maybe upload SotTR run here also ?
> 
> Still on DDR4? Very good min and 95% fps


Yes, DDR4@4000c15. 
The D5s just arrived at the post office. Too bad they're Micron chips


----------



## mrthanhnguyen (Nov 11, 2021)

SuperMumrik said:


> Yes, DDR4@4000c15.
> The D5's just arrived at the postal office. To bad they are micron chips


Is that with 15c water?


----------



## SuperMumrik (Nov 11, 2021)

mrthanhnguyen said:


> Is that with 15c water?


nah, that was ambient with mo-ra


----------



## Det0x (Nov 19, 2021)

Current fastest SotTR run I've seen, performed by Carillo


> So, here is the SOTTR 4300 cl14 1T results


(12900k)


----------



## Felix123BU (Nov 19, 2021)

Det0x said:


> Current fastest SotTR run ive seen, performed by Carillo
> 
> (12900k)
> View attachment 225722


so basically from rather early data, a top tuned 5950x (353) vs a top tuned 12900k (366) is ~3% difference in favor of the Alder Lake chip in this bench?


----------



## mrthanhnguyen (Nov 19, 2021)

Det0x said:


> Current fastest SotTR run ive seen, performed by Carillo
> 
> (12900k)
> View attachment 225722


Side note: you need a binned CPU with a strong IMC and good RAM sticks to be able to run at those speeds. A random sample 12900K and a random stick won't be able to get you that result.


----------



## phanbuey (Nov 19, 2021)

Felix123BU said:


> so basically from rather early data, a top tuned 5950x (353) vs a top tuned 12900k (366) is ~3% difference in favor of the Alder Lake chip in this bench?



SOTTR is very much latency and ram limited.  Alder lake actually has a bit higher latency, especially with E cores enabled -- so it doesn't do as well in this bench as others.


----------



## SuperMumrik (Nov 19, 2021)

phanbuey said:


> SOTTR is very much latency and ram limited. Alder lake actually has a bit higher latency, especially with E cores enabled -- so it doesn't do as well in this bench as others.


This, and the fact that the whole SotTR engine seems to crap out at around 350 fps


----------



## Felix123BU (Nov 19, 2021)

phanbuey said:


> SOTTR is very much latency and ram limited.  Alder lake actually has a bit higher latency, especially with E cores enabled -- so it doesn't do as well in this bench as others.


I said in this benchmark; I did not say it's a general difference   (do not want to start a fanboy war )
Anyone tested Alder Lake with a 6900 XT in this benchmark?


----------



## Teex (Dec 10, 2021)

Proud of my hard work and results 

CPU: Ryzen 7 5800x -> PBO + CO optimized per core
RAM: G.SKILL 32 GB KIT DDR4 3200 MHz CL16 Ripjaws V CL16-18-18-38   1,35 V  OCed to -> 3733 MHz - 16-19-14-32-48    1,45 V
GPU: Sapphire Pulse 5700 XT OC + UV - > Core Clock - 2032 MHz Memory Clock - 1864 MHz CV 1111 mV

Edit: in the second screenshot AA is off, because I've seen many screenshots here with it turned off


----------



## harm9963 (Dec 11, 2021)

harm9963 said:


> Just roll back to 449 ,that's what I did.
> Also  got win11 bench , 256  vs win10  253.View attachment 221660View attachment 221657


Is anyone using the latest SOTTR update


----------



## Franz (Dec 16, 2021)

My old war horsey with memory at 2133 @ 11-12-12-36 1T (thanks Intel for the huge WHOOPY delta temp and consumption with the XMP profile on)

I was expecting more, but that's it


----------



## Teex (Dec 22, 2021)

My Old PC:
CPU: Intel i7 - 8700 
RAM: Kingston FURY Beast 2 x 8 GB KIT DDR4 2666 MHz CL16 + Corsair Vengeance LPX 2 x 8 GB KIT DDR4 2666 MHz CL16  CL16-18-18-38 1,20 V OCed to -> 2666 MHz - 13-16-16-35 1,35 V
GPU: NVIDIA GeForce GTX 1070 G1 Gaming 8G  - OC  - > Core Clock + 135 MHz Memory Clock - 335 MHz


----------



## Taraquin (Dec 26, 2021)

Update with 4000cl16 vs 3800cl15. 5600X +200 PBO and curve optimizer, 88W max limit.

4000cl16:




3800cl15:


----------



## Taraquin (Jan 7, 2022)

Even on my i5 8400 a bit of RAM tuning can have great impact! I advise all of you stuck on crappy Intel boards with RAM speed limits, like B360, B460, H310, H410, to tweak a bit.

Since the motherboard is a B360, max speed is 2666. Running the 2 sticks at XMP gave me 143 fps avg / 96 min on CPU game. Running tuned gave me 168 fps avg / 114 min on CPU game. That is an uplift of 17.5% on avg and 19% on min. For comparison, in AIDA, stock gave 39k, 38k, 34k MB/s and 55.5 ns; tuned: 41k, 41k, 37k and 50.9 ns.

Stock timings: 15-16-16-35 refi 8316 rfc 460, rrd 8-12, faw 34, wr 24, rtp 12, wtr 5-14 
Tweaked timings: 12-16-16-28 refi 65535 rfc 380, rrd 4-6, faw 16, wr 12, rtp 6, wtr 3-6 rest of timings are unchanged
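Those uplift percentages check out; a quick sketch of the arithmetic (`uplift` is just my own helper name):

```python
# Percent uplift from stock to tuned, using the fps numbers quoted above.
def uplift(stock, tuned):
    return (tuned / stock - 1) * 100

print(round(uplift(143, 168), 1))  # avg fps: 17.5
print(round(uplift(96, 114), 1))   # min fps: 18.8, i.e. ~19%
```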

Stock:




Tuned:


----------



## Felix123BU (Jan 7, 2022)

Taraquin said:


> Even on my i5 8400 a bit of RAM tuning can have great impact! I advise all of you stuck on crappy Intel boards with RAM speed limits, like B360, B460, H310, H410, to tweak a bit.
> 
> Since the motherboard is a B360, max speed is 2666. Running the 2 sticks at XMP gave me 143 fps avg / 96 min on CPU game. Running tuned gave me 168 fps avg / 114 min on CPU game. That is an uplift of 17.5% on avg and 19% on min. For comparison, in AIDA, stock gave 39k, 38k, 34k MB/s and 55.5 ns; tuned: 41k, 41k, 37k and 50.9 ns.
> 
> ...


Indeed, RAM tuning is forgotten by most these days, even though in certain scenarios it gives really nice boosts.


----------



## Franz (Jan 7, 2022)

XMP enabled, memory at 3200 15-17-17-37, good boost


----------



## Taraquin (Jan 7, 2022)

Franz said:


> XMP enabled, memory at 3200 15-17-17-37, good boost
> 
> 
> 
> View attachment 231579


An 18 fps boost on CPU game avg (that's what the CPU outputs); try tuning the other timings and you'll get 200+ fps. 3200cl15 is probably B-die, so you can tune tRFC a lot 



Teex said:


> Proud on my hard work and results
> 
> CPU: Ryzen 7 5800x -> PBO + CO optimized per core
> RAM: G.SKILL 32 GB KIT DDR4 3200 MHz CL16 Ripjaws V CL16-18-18-38   1,35 V  OCed to -> 3733 MHz - 16-19-14-32-48    1,45 V
> ...


Very good score for 5800X


----------



## Taraquin (Jan 15, 2022)

i5 12400F in tha house! Running 3600 rev E SR 2x8 1.42V: 15-19-19-34 - rc 53 - rrds\l\faw 4\4\16 - wr\rtp 12\6 - wtrs\l 3\6 - rfc 512 - refi 65536. No powerlimit and 10mv undervolt.

Running the stock 3000cl15 Asus XMP2 profile it did 198 fps avg CPU game, so tweaking the RAM and running it 600 MHz faster increased fps by 20%


----------



## Det0x (Feb 9, 2022)

Beware: if min render is low and "GPU bound" is high, the run is actually DX11, like my screens above


----------



## AVATARAT (Feb 9, 2022)

Det0x said:


> Current fastest SotTR run ive seen, performed by Carillo
> 
> (12900k)
> View attachment 225722


Your result here is with a lowered *Resolution Modifier*; you must not move it.


----------



## sam_86314 (Feb 9, 2022)

Main System: GPU is underclocked to 1850MHz at 950mV. System memory is 3600MHz 18-22-22-42 1.35V.






Testing System: All settings are stock.


----------



## Det0x (Feb 10, 2022)

AVATARAT said:


> Your result here is with a lowered *Resolution Modifier*; you must not move it.


No it's not, and *resolution modifier doesn't even affect CPU game* either way..
The only trick is that the run was done in DX11

My "legit" 5950x run are still the 353 fps one:


----------



## Taraquin (Feb 10, 2022)

Det0x said:


> No it's not, and *resolution modifier doesn't even affect CPU game* either way..
> The only trick is that the run was done in DX11
> 
> My "legit" 5950x run are still the 353 fps one:


How can you run in DX11 if the in-game setting is DX12?


----------



## Det0x (Feb 10, 2022)

Taraquin said:


> How can you run in DX11 if ingame sets dx12?


I saw a guy over in the "official-intel-ddr5-oc" thread posting ~375 fps CPU game average results with his 5950x, and was wondering how that was possible myself.. Yesterday I saw a guy post ~420 fps average with a top-tuned Alder Lake, but he had forgotten to hide the DX option..


Then I understood.. just run the benchmark as normal in DX11 mode, and switch the tab over to DX12 before you take the screenshot..
hence the:
hence the:


> Beware: if min render is low and "GPU bound" is high, the run is actually DX11
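That tell can be checked mechanically. A rough sketch of the heuristic from the posts above; the function name and thresholds are my own guesses for illustration, not values from the game:

```python
# A "DX12" screenshot whose run shows a low minimum render fps together with
# a high "GPU bound" percentage was likely actually run in DX11.
def looks_like_dx11(min_render_fps, cpu_game_avg, gpu_bound_pct,
                    min_ratio=0.3, bound_pct=50.0):
    suspicious_min = min_render_fps < min_ratio * cpu_game_avg
    return suspicious_min and gpu_bound_pct > bound_pct

print(looks_like_dx11(60, 400, 80))   # True: low min render + high GPU bound
print(looks_like_dx11(250, 400, 5))   # False: a plausible DX12 run
```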


----------



## Hyderz (Feb 10, 2022)

Heres my result .... with a laptop


----------



## Felix123BU (Feb 10, 2022)

Det0x said:


> I saw a guy over in the "official-intel-ddr5-oc" thread posting ~375 fps CPU game average results with his 5950x, and was wondering how that was possible myself.. Yesterday I saw a guy post ~420 fps average with a top-tuned Alder Lake, but he had forgotten to hide the DX option..
> View attachment 236043
> 
> Then I understood.. just run the benchmark as normal in DX11 mode, and switch the tab over to DX12 before you take the screenshot..
> hence the:
> hence the:


So you're saying some people like to cheat, who would have thought


----------



## Taraquin (Feb 10, 2022)

New record after tweaking the VDD18 voltage. Don't look at the GPU avg, that is with an undervolt/underclock for mining; see CPU game avg/min 



Also did a DX11 test


----------



## AVATARAT (Feb 12, 2022)

Det0x said:


> I saw a guy other was was posting about ~375fps cpu game average numbers results with his 5950x in the "official-intel-ddr5-oc" and was wondering how that was possible myself.. Yesterday i saw a guy posted ~420fps average with top tuned Alder lake, but he had forgot to hide dx option..
> View attachment 236043
> 
> Then i understood.. just run the benchmark as normal in dx11 mode, and switch the tab over to dx12 when before you take screenshot..
> hence the:


Yes, *resolution modifier* has the same effect with the Highest preset. I can't test it with the Lowest at the moment, or with DX11, but it can be switched the same way, except in the trial version where it is locked.


----------



## Det0x (Feb 12, 2022)

AVATARAT said:


> Yes, *resolution modifier* has the same effect with the Highest preset. I can't test it with the Lowest at the moment, or with DX11, but it can be switched the same way, except in the trial version where it is locked.


You don't give up

Resolution modifier has no effect on *CPU game average* numbers, which this thread is about.
No, I didn't use it/lower it, even though you claim I did for whatever reason


----------



## mrthanhnguyen (Feb 14, 2022)

ADL is not what I expected  because we have to turn off HT to get this kind of fps.


----------



## Felix123BU (Feb 14, 2022)

mrthanhnguyen said:


> ADL is not as I expected  because we have to turn off HT to get this kinda fps.
> 
> View attachment 236506


What's the diff between HT on and off?


----------



## mrthanhnguyen (Feb 14, 2022)

Felix123BU said:


> What's the diff between HT on and off?


Don't even know, but I see higher fps in gaming.


----------



## Felix123BU (Feb 14, 2022)

mrthanhnguyen said:


> dont even know but I see higher fps for gaming.


That would be interesting, since I guess the AL small cores are probably not used in this scenario, which would mean you're getting 350+ fps with 8 cores only (no HT); that's 16% better core for core vs my 8-core 5800x


----------



## SuperMumrik (Feb 14, 2022)

mrthanhnguyen said:


> ADL is not as I expected because we have to turn off HT to get this kinda fps


Umm... This isn't something new. 
As long as you've got "enough" cores for the given task, HT on will never be faster. If anything the scheduler will mess up and make shit slower..


----------



## Det0x (Feb 22, 2022)

Updated run with latest game version = *352* CPU average fps



Missed my old personal best by 1 fps  

Seems like I'm not allowed to post in the other SotTR thread (lol)




Anyway, I did some Highest runs too, before I knew I could not post them..


(Anyone feel free to post them in other thread)

I find it a little strange that you have zero % GPU bound but a lower overall "average fps", @mrthanhnguyen.. Don't know how that's possible with what should be GPU-limited settings (?)


----------



## mrthanhnguyen (Feb 22, 2022)

Det0x said:


> Updated run with latest game version = *352* CPU average fps
> View attachment 237602
> Missed my old personal best by 1 fps
> 
> ...


Maybe SLI fucks them up, or my Windows is BS due to too many BSODs.


----------



## Det0x (Feb 22, 2022)

mrthanhnguyen said:


> Maybe sli fucks them up or my window is bs due to too many bsod.


Oh, you're running SLI? lol, that explains the 0% GPU bound


----------



## AVATARAT (Mar 30, 2022)

Ryzen 5 5600x+PBO+CO Per Core
2x8GB DDR4@4000MHz 16-17-14-28-2T
RX 6800 XT Gaming OC 16GB @2690MHz / Mem 2150MHz(17200)

Score: 44618
Avg FPS: 288


----------



## QuietBob (Apr 8, 2022)

Re-tested with the full version
3300X @ 4.5 all core, IF @ 1866 MHz, RAM @ CL16, 6600XT @ default + SAM


----------



## Ibizadr (Apr 11, 2022)

What's the best score 5800x got?


----------



## Athlonite (Apr 12, 2022)

Felix123BU said:


> We shall gather and centralize the scores once a month, CPU and GPU, and try to find a correlation between them


This hasn't seemed to happen


----------



## Felix123BU (Apr 12, 2022)

Athlonite said:


> This hasn't seemed to happen


True, that is my bad, second kid came, things changed


----------



## Athlonite (Apr 12, 2022)

Felix123BU said:


> True, that is my bad, second kid came, things changed


excuse accepted  thankfully mine's 22 and can look after himself


----------



## Felix123BU (Apr 12, 2022)

Athlonite said:


> excuse accepted  thankfully mines 22 and can look after himself


 With a 2-year-old and a freshly born one it's a bit more tricky 
Anyway, on topic: I've been thinking of doing that for some time and hope to find the time as well; it should make for some interesting comparisons, keeping in mind the amount of results gathered here.


----------



## Ibizadr (Apr 14, 2022)

This is my best score with 5800x +200mhz pbo +CO per core +2x8gb 3800mhz cl14. Next step try to reach 200fps


----------



## Lew Zealand (Apr 15, 2022)

CPU re-test after replacing the GTX 1080 with an RX 6600XT, and the i7-9700F gets faster.  OK then.  Makes me want to try the 1080 again, and as I'm replacing its cooler with an aftermarket one this weekend, that does seem to be in the cards.

No, that was not a pun; your brain is just broken.


----------



## Ibizadr (Apr 15, 2022)

Lew Zealand said:


> CPU re-test after replacing GTX 1080 with RX 6600XT and i7-9700F gets faster.  OK then.  Makes me want to try the 1080 again and as I'm replacing it's cooler with an aftermaket one this weekend, that does seem to be in the cards.
> 
> No that was not a pun, your brain is just broken.
> 
> View attachment 243644


Please use TAA for AA, the settings are on the first page


----------



## Lew Zealand (Apr 15, 2022)

Ibizadr said:


> Pls use AA in taa its on first page the settings


Whoops!  I'll re-test this again tonight.  Lol when you think you're setting everything to Lowest and forget that it's actually another Lowest...


----------



## Ibizadr (Apr 15, 2022)

Lew Zealand said:


> Whoops!  I'll re-test this again tonite.  Lol when you think you're setting everything to Lowest and forget that it's actually another Lowest...


It's only to have a standard for everyone.


----------



## Lew Zealand (Apr 16, 2022)

Ibizadr said:


> Its only to be a standard for everyone.


The OP says to do this with AA off, and a lot of the previous posts (including my older ones) also have AA off. The other SotTR thread targeting GPUs uses TAA. But then I see quite a few people in this thread with TAA on as well, so it's not consistent either way. In any case, here are the same settings as above but with TAA on:

i7-9700F, 2666 CL13 (B360), power limit set to 130W (ie: no limit as this is an 8-core "65W" CPU, lol)
RX 6600XT, 2750 MHz cores, 17600 MHz memory, +20% Power limit





Still a faster CPU score than when tested with the 1080 earlier in the thread.  Should be building the 1080's cooler today and we'll see how that changes, if anything.


----------



## Ibizadr (Apr 17, 2022)

You are right bro, sorry for my mistake. Another best result after changing tRCDWR. This time I put AA off.


----------



## lawood (Apr 26, 2022)

5600x + 3070 ti


----------



## Det0x (Apr 27, 2022)

tweaking settings, not finalized scores








Things left to try: disable SMT, enable/disable ReBAR, a fresh bench-only Windows install, and giving it a shot with my old 2x8GB memory sticks.
Main bottleneck is CPU clocks; motherboard without an external clockgen -> stuck on 100 MHz baseclock = max 4550 MHz ST and 4450 MHz MT


----------



## lawood (Apr 27, 2022)

That software (PBO2 Tuner) looks useful.
Can you share where to download it? A quick Google search found nothing.


----------



## Det0x (Apr 27, 2022)

lawood said:


> That software looks useful (pbo2 tuner)
> Can you share where to download? A quick google search found nothing.


Hope this link works for you


----------



## Athlonite (Apr 27, 2022)

Det0x said:


> Main bottleneck is CPU clocks, motherboard without external clockgen -> stuck on 100mhz baseclock = max 4550mhz ST and 4450mhz MT


What mobo are you running?

Also, can you fill in your system specs in your profile so we don't have to keep asking what you're running?


----------



## Det0x (Apr 27, 2022)

Athlonite said:


> What mobo are you running with
> 
> also can you fill in your system specs in your profile so we don't have to keep asking what you're running





 ZenTimings also shows what memory I use


----------



## Athlonite (Apr 28, 2022)

You should still be able to push 110~115 MHz on your FSB though


----------



## IamVoo (Apr 28, 2022)

Det0x said:


> tweaking settings, not finalized scores
> View attachment 245109
> 
> View attachment 245110
> ...


I've been trying to figure out where the huge change in results is coming from, if you can help me out. I understand the hardware isn't the same and you've got a more aggressive overclock on the memory, and also CO on the cores, but I feel like that can't be a 12000-frame difference. I could be wrong but I'd like to know for sure. I see you are running Windows 11 while I'm on 10, so that too is another variable.



I'm on a 6800 XT with a mild OC around 2500 MHz, but in none of the tests I've run have I shown to be GPU bound, so I'm wondering if driver overhead from Radeon, or just how the 3090 performs, could be a factor here, or is it specifically due to your memory OC? Even with CO, a few extra megahertz shouldn't play that big of a role. I'm trying to determine if you are just that far ahead in performance, or if I'm running into technical problems somewhere I haven't realized yet.


----------



## Ibizadr (Apr 29, 2022)

IamVoo said:


> I've been trying to figure out where the huge change in results is coming from, if you can help me out. I understand the hardware isnt the same and youve got a more aggressive overclock on the memory and also CO on the cores but I feel like that cant be a 12000 frame difference. I could be wrong but I'd like to know for sure. I see you are running windows 11 while im on 10 so that too is another variable.
> View attachment 245363
> I'm on a 6800xt with a mild OC around 2500mhz but in none of the tests I've run have I shown to be GPU bound so im wondering if driver overhead from radeon or just how the 3090 performs could be a factor here or is it specifically due to your memory OC. Even with CO on a few extra megahertz shouldnt play that big of a role. I'm trying to determine if you are just that far ahead in performance, or if I'm running into technical problems somewhere I havent realized yet.


My first advice is to correct some of your timings: tRC(45) = tRP(15) + tRAS(30), and tFAW(16) = 2x tRTP(8). And try running with GDM disabled; even if you can't do 1T, go to 2T with GDM disabled. Try it again to see the improvement. You can improve the RAM even more, but you need to learn the basics and do some RAM testing (TestMem5, Karhu, HCI) to see if it's stable.
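Those two rules of thumb are easy to sanity-check before a tuning session. A tiny sketch; `check_timings` is a hypothetical helper of mine, not a real tuning tool, and it encodes the relations exactly as stated in the post (many guides use tFAW = 4x tRRD_S instead):

```python
# Check the timing relations quoted above: tRC = tRP + tRAS, tFAW = 2 x tRTP.
def check_timings(trp, tras, trc, trtp, tfaw):
    issues = []
    if trc != trp + tras:
        issues.append(f"tRC should be {trp + tras}, got {trc}")
    if tfaw != 2 * trtp:
        issues.append(f"tFAW should be {2 * trtp}, got {tfaw}")
    return issues or ["timings consistent"]

print(check_timings(trp=15, tras=30, trc=45, trtp=8, tfaw=16))
# ['timings consistent']
```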



lawood said:


> 5600x + 3070 ti


Nice result bro


----------



## phanbuey (Apr 29, 2022)

IamVoo said:


> I've been trying to figure out where the huge change in results is coming from, if you can help me out. I understand the hardware isnt the same and youve got a more aggressive overclock on the memory and also CO on the cores but I feel like that cant be a 12000 frame difference. I could be wrong but I'd like to know for sure. I see you are running windows 11 while im on 10 so that too is another variable.
> View attachment 245363
> I'm on a 6800xt with a mild OC around 2500mhz but in none of the tests I've run have I shown to be GPU bound so im wondering if driver overhead from radeon or just how the 3090 performs could be a factor here or is it specifically due to your memory OC. Even with CO on a few extra megahertz shouldnt play that big of a role. I'm trying to determine if you are just that far ahead in performance, or if I'm running into technical problems somewhere I havent realized yet.


Awesome result thx for sharing - that 5800X3D is a monster.

It's way faster than my 12600K w/ DDR5 (ITX) at 5.2 GHz 1.24 V, DDR5 6200 32-35-35-72... we do have different game versions tho... do you have the full game or the demo?

Edit: the difference in your results is that you're running the game demo - the demo is way, way slower. If you get the full version of the game you will get better results.

Here's mine:


----------



## IamVoo (Apr 29, 2022)

phanbuey said:


> Awesome result thx for sharing - that 5800X3D is a monster.
> 
> It's way faster than 12600K w DDR5 (itx) 5.2Ghz 1.24v DDR5 6200 32-35-35-72... we do have different game versions tho... do you have the full game or demo?
> 
> ...


Wait, is that it? All this time, no matter what, I couldn't reach similar scores. The demo is just slower? This would alleviate a lot of issues man, hope it's true.

I ran earlier with just XMP, and also with a tune I've used on my 5800x in the past that was stable at 3600cl14, and at best there is a 3 fps difference, at worst no change at all. I've heard that the cache can really cover up memory issues and excels at bringing rigs with slower memory back to par with faster-memory rigs. Maybe it's just game specific where I don't see much uplift?

Regardless, knowing the demo runs much slower makes me feel a lot better. That said, I can't see myself buying a game just for a benchmark.


----------



## phanbuey (Apr 29, 2022)

IamVoo said:


> wait is that it? All this time no matter what I couldnt reach similar scores. Demo is just slower? This would alleviate alot of issues man, hope it's true.
> 
> I ran earlier with just XMP and also with a tune I've used on my 5800x in the past that was stable 3600cl14 and at best there is a 3fps difference and at worse no change at all. I've heard that the cache can really cover up memory issues and excels in bringing rigs with slower memory back to par with other faster memory rigs. Maybe it's just game specific where I dont see much uplift?
> 
> Regardless, knowing the demo runs much slower makes me feel alot better. That said i cant see myself buying a game just for a benchmark.



Yeah, I wouldn't buy it just for the benchmark, but it is a great game. And yes, the demo is super slow compared to the real game; they've optimized it a bunch since the demo release.

Always look at the game version; different game versions are not comparable in the SOTTR benchmark. It's for sure true - I have both and they're totally different numbers.


----------



## Det0x (May 1, 2022)

Managed to break the magical 400 fps CPU game limit 

5800x3d @ stock 4450mhz
Average CPU Game = 406 fps




If only we could find a way to overclock these CPUs, for those of us without an external clockgen for baseclock..
Imagine what this CPU could do @ 4.8 to 5 GHz with an unlocked multiplier


----------



## harm9963 (May 1, 2022)

Det0x said:


> Managed to break the magical 400 fps CPU game limit
> 
> 5800x3d @ stock 4450mhz
> Average CPU Game = 406 fps
> ...


ASUS DOCS would do that; only two motherboards can do it - the Dark Hero and the Extreme.


----------



## QuietBob (May 12, 2022)

5800X3D with IF @ 1900, RAM @ CL14, 6600XT @ stock + SAM:


----------



## freestaler (May 17, 2022)

Trial Edition, 5800x3d BLCK 104. -30, 3750 c14 IF 1:1


----------



## Block10 (May 18, 2022)

Hi,
Here's mine, Stock. 12600k, 3070ti, DDR5 32GB RAM running 5600 mhz.


----------



## Det0x (Oct 8, 2022)

Nothing earth-shattering, but decent enough I guess

7950x
Memory @ 6100MT/s 28-36-36-28 1T
FCLK @ 2200mhz

1080p lowest
*CPU game average fps = 385*




Screen with hwinfo open and rest of memory settings shown


Getting pretty much the same numbers @ 1080p highest..
And ~fully GPU bound with a watercooled 3090, without using the resolution modifier.

1080p highest
*CPU game average fps = 383*




*edit*
And a CS benchmark score since there is no thread for that..

*1080p highest = 991 FPS*


----------



## Det0x (Oct 16, 2022)

With new higher performing ASUS bios:

The 7950x is the second CPU after the 5800x3d to break 400 average CPU fps! 

Memory @ 6100MT/s 28-36-36-28 2T
FCLK @ 2200mhz


----------



## BetrayerX (Oct 24, 2022)

5800X and 5700XT 16GB DDR4 3200


----------



## Franz (Oct 24, 2022)

BetrayerX said:


> 5800X and 5700XT 16GB DDR4 3200


You need to run in lowest settings


----------



## BetrayerX (Oct 24, 2022)

Franz said:


> You need to run in lowest settings


Ooopps, my bad. Fixed! ^_^ Thanks.


----------



## Franz (Oct 24, 2022)

BetrayerX said:


> Ooopps, my bad. Fixed! ^_^ Thanks.


Its overclocked?


----------



## Colddecked (Oct 24, 2022)

5800x3d (with -30 offset), 3080 with undervolt, 32gb 3200mhz ram clocked at 3800 @ 16-20-18-36 timing


----------



## BetrayerX (Oct 24, 2022)

Franz said:


> Its overclocked?


Yes! LLC on 4 on BIOS, the rest via Ryzen Master. CC was done per core.


----------



## Det0x (Nov 5, 2022)

Me and a few other guys over at the overclock.net forum seem to have reached the same conclusion.. The more GPU bound you are, the higher the CPU Game Average numbers you can get.

I can share some legit numbers with my decently clocked 3090 @ 520 W power limit:

1080p high = *412 fps* average cpu fps @ *98%* gpu limited


1080p lowest= *408 fps* average cpu fps @ *28%* gpu limited


720p lowest = *402 fps* average cpu fps @ *1%* gpu limited


But watch what happens with the average cpu fps numbers when i enforce a low powerlimit for the graphic card:
*441 fps* average cpu fps@ *100%* gpu limited




Conclusion: the more GPU limited you are, the more time the CPU has to push up the average CPU game numbers..
Going forward I suggest we either run with the resolution modifier at minimum, to limit the GPU bottleneck as much as possible, or run the benchmark @ 720p lowest like the Russians do..
Anything above 10% GPU limited should not count in my book.

(all this started with a guy showing too good to be true numbers, with a 4090 which was ~90% GPU limited @ 1080p lowest)
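As a rough sketch of that proposed validity rule (the helper name and the way the threshold is handled are mine, not from the benchmark), using the run data from this post:

```python
# Reject a "CPU Game" result when the run was more than 10% GPU bound,
# since GPU-limited runs inflate the CPU game average.
def is_valid_cpu_run(cpu_game_avg, gpu_bound_pct, threshold=10.0):
    return gpu_bound_pct <= threshold

runs = [
    ("1080p high",   412, 98),  # fps and GPU-bound % quoted above
    ("1080p lowest", 408, 28),
    ("720p lowest",  402, 1),
]
for name, fps, bound in runs:
    verdict = "valid" if is_valid_cpu_run(fps, bound) else "rejected"
    print(name, fps, verdict)  # only the 720p lowest run passes
```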


----------



## cRs (Nov 7, 2022)

i3 12100F with RTX 2080, 16GB DDR4 3200MHz, all stock


----------



## harm9963 (Jan 5, 2023)

PowerColor Red Dragon RX 6800, a nice upgrade from my 1080 Ti after 5 years; the 1080 Ti will go into my second rig, to replace the 290X CFX.


----------



## Det0x (Sunday at 2:23 PM)

Seems like the AMD Radeon GFX driver is a little faster than Nvidia's in this game..

1080p lowest = 409 CPU game average




Hardware and settings:


720p lowest = 413 CPU game average


----------



## harm9963 (Sunday at 3:55 PM)

harm9963 said:


> PowerColor Red Dragon RX 6800 , nice upgrade from my 1080Ti for 5 years , the 1080Ti will go into my second rig , to replace 290X CFX .​​View attachment 277610View attachment 277657View attachment 277662​


Went back to Micro Center and exchanged the 6800 for a PNY XLR8 4070 Ti!


----------



## Athlonite (Sunday at 8:36 PM)

harm9963 said:


> Went back to Micro Center , exchange the 6800 for PNY XLR8 4070Ti !


Why?


----------

