
Shadow Of The Tomb Raider - CPU Performance and general game benchmark discussions

What the hell did I miss LOL. Felix, I still get WHEA 19s myself, maybe once on boot, and I get a few on app startups depending on the app, maybe 2 or so, sometimes none. In Prime95 I'll get one on large FFTs every 10 seconds or so though. I'm right on the edge of stability with my FCLK and it kind of turns me off knowing it lol. But I have the MSI Afterburner/RivaTuner overlay combined with HWiNFO and have Windows errors added into the overlay. Not once during a gaming session did I get a WHEA 19, nor in Cinebench, Geekbench or other synthetic benchmarks, so I'm cool with it. I haven't tried the newest AGESA revision B yet as it's still in beta for my mobo, so I'm still on A. I think eventually 2000 FCLK will be achievable by "most".
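For anyone who wants to tally those outside the overlay, here's a minimal sketch that counts WHEA-Logger Event ID 19 entries in the Windows System log, assuming the stock wevtutil tool is available (not the overlay method described above, just a quick cross-check):

```python
# Count recent WHEA-Logger Event ID 19 (corrected hardware error) entries
# in the Windows System log using the built-in wevtutil tool.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger'] and (EventID=19)]]"

out = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{QUERY}", "/f:text", "/rd:true", "/c:200"],
    capture_output=True, text=True,
).stdout

# In text format, each rendered event begins with an "Event[n]:" header line.
count = out.count("Event[")
print(f"WHEA 19 events found in the last 200 queried: {count}")
```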

It took more time than I care to admit setting my SoC voltage and sub-voltages. If my SoC was too high, more errors; too low, more errors. Same with the other voltages. If they were too low I'd have 100 WHEAs on boot but very few during stress tests. Too much and the opposite would happen. I literally had to find a sweet spot to balance the two down to near none, and it's way off from what people recommend. It's a lot of work honestly.

With that being said, I think 4000 CL14-16 is about as good as it gets on Ryzen for this test, as it probably benefits the most from the bandwidth. Even fine-tuning my timings and subtimings results in very little benefit, if any at all. I actually got higher scores at 4000 14-16-16-28 with a lower tRFC (unstable though) than I did at 14-15-14-21 with the subs tightened even more. I feel like it ended up just being margin-of-error runs from the GPU at that point.

I personally think a really good 3800 CL14 profile can be just as effective as a 4000 CL16 profile in this test. I'll probably go home today and give it a shot just by switching the speed and FCLK down a notch and leaving everything else the same.
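Rough CAS-only arithmetic for why the two profiles can land so close (illustrative first-word latency only, ignoring tRCD/tRP and the FCLK change):

```python
# First-word latency in ns: CAS cycles / memory clock (half the data rate) * 1000
def cas_ns(mt_s: int, cl: int) -> float:
    return cl / (mt_s / 2) * 1000

for mt_s, cl in [(4000, 16), (4000, 14), (3800, 14), (3600, 16)]:
    print(f"DDR4-{mt_s} CL{cl}: {cas_ns(mt_s, cl):.2f} ns")
# DDR4-4000 CL16: 8.00 ns, DDR4-4000 CL14: 7.00 ns,
# DDR4-3800 CL14: 7.37 ns, DDR4-3600 CL16: 8.89 ns
```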
 

The drama you missed :laugh:

I have been trying to see what I can do to get above 240 FPS in the benchmark, and the only thing left is to see if I can get 2000 FCLK running. I am reasonably sure my CPU can't run 2000 fully stable (at least with the current AGESA, and I have the latest for my board), but as you say, fully stable does not mean a lot if everything runs fine and you only get some WHEA errors reported that don't cause harm.

I can boot into Windows with 2000 FCLK, but things get ugly pretty quickly after that (Prime95 large FFTs is an instant reboot), regardless of the voltages applied. I even tried it decoupled from memory and still get Bus/Interconnect errors, and since I also work on this PC, a semi-stable setting is not an option.
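For context, this is why DDR4-4000 drags the 2000 FCLK question in at all; a small sketch of the usual 1:1 (coupled) relationship:

```python
# In 1:1 ("coupled") mode on Zen 3, FCLK = UCLK = MCLK = DDR data rate / 2.
def fclk_for_1to1(ddr_mt_s: int) -> int:
    return ddr_mt_s // 2

for ddr in (3600, 3800, 4000):
    print(f"DDR4-{ddr}: needs FCLK {fclk_for_1to1(ddr)} MHz to stay 1:1")
# DDR4-3600 -> 1800, DDR4-3800 -> 1900, DDR4-4000 -> 2000
```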

The thing you say about voltage ranges is interesting, and if I had confidence in my RAM sticks I would probably try to fine-tune those, but I think I am pretty close to the limit of what they can achieve by themselves. The thought of getting more capable sticks was appealing, then I remembered DDR5 is just around the corner, so I will be happy with the current ones for a while longer :)

"I personally think a really good 3800cl14 profile can be just as effective at a 4000cl16 profile in this test. Ill probably go home today and give it a shot just by switching the speed and flck down a notch and leaving everything the same." that would be interesting to see.
 
I think DDR4 is still going to be relevant for a few years at most. I've seen some testing of DDR5-4800, but the CAS latency is 40 and it only slightly outperformed 3200 MHz DDR4; I forgot the CAS latency for that. That's entry-level DDR5, and 5000+ is to be expected down the line, which will start to take over as CAS latencies drop from what I've seen.
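Same CAS-only arithmetic as earlier, assuming a typical DDR4-3200 CL16 kit for the comparison since the exact kit isn't stated:

```python
# First-word latency in ns for the DDR5 vs DDR4 comparison above.
def cas_ns(mt_s: int, cl: int) -> float:
    return cl / (mt_s / 2) * 1000

print(f"DDR5-4800 CL40: {cas_ns(4800, 40):.2f} ns")  # ~16.7 ns
print(f"DDR4-3200 CL16: {cas_ns(3200, 16):.2f} ns")  # ~10.0 ns (assumed kit)
# DDR5 wins on raw bandwidth, but first-word latency stays worse until CL drops.
```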
 
I think my score could be better, especially the Max.
 

Attachments

  • tomb_cpu.png (3 MB)
  • ZenTimings_Screenshot.png (32.6 KB)
Interesting, your CPU scores are pretty much in line with my 5800X and 3800 MHz CL16 setup. The only difference might be that I OC'd my CPU to 4.75 GHz all-core, which gives some extra FPS, but not much vs PBO.
 
Another run, no PBO, default CPU settings. Wondering why the score is better: CPU boost maxed out at 4600, while before with PBO the boost was 4900 but the score was lower.
 

Attachments

  • 212_tomb_default.png (3.3 MB)
A few timings should be tweaked:
tRRDS 4, tRRDL 6, tWR 12, tRTP 6, tRDWR 8, tWRRD 3. I bet that can boost your CPU game average by at least 10 FPS, maybe more. What RAM voltage are you running? Also try SoC 1.12 V, VDDG IOD 1.04 V, VDDG CCD 0.94 V, VDDP 0.9 V.
 
There's a sweet spot with PBO. Despite what people think, cranking the max boost has serious diminishing returns. The 5800X is already a hot CPU, and PBO tends to overvolt a lot despite the Curve Optimizer. For example, I keep my 5600X at 4700, and on the title screen of SOTTR I sit around 1.356 V. When I increase my CPU speed to 4725 it raises the voltage by about 0.03 V on the title screen. Seriously, 30 mV for 25 MHz. 4800 puts me at like 1.44 V under load. It's stupid honestly.

My CPU runs best with PBO slightly above the stock boost clock: 4700 (4650 stock).

My PPT/TDC/EDC values are manually entered rather than letting the motherboard control them or leaving them uncapped: PPT 125 W (88 W stock), TDC 75 A (65 A), EDC 105 A (90 A). As you can see, I keep my limits not too far from stock, as that leads to better boost behavior for some reason. Capping your EDC does bug AIDA64's L3 cache test, although the performance loss isn't actually there.

This way, through the Curve Optimizer, the PPT/TDC/EDC limits and the max boost clock, you can achieve the best result. It really is a balancing act that in my opinion takes longer to dial in than a static OC lol
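Just to put those limits in perspective against the stock values quoted in this post (plain arithmetic on the numbers above, nothing board-specific):

```python
# Manual PBO limits vs the stock values listed in the post above (5600X).
limits = {            # (manual, stock)
    "PPT (W)": (125, 88),
    "TDC (A)": (75, 65),
    "EDC (A)": (105, 90),
}
for name, (manual, stock) in limits.items():
    print(f"{name}: {manual} vs {stock} stock (+{(manual / stock - 1) * 100:.0f}%)")
# PPT +42%, TDC +15%, EDC +17% -- raised, but nowhere near "uncapped".
```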
 
Concur, PBO in games gives limited extra performance, at least in modern games that use multiple cores. PBO can be great in single-threaded apps like emulators that use one core, but it gives a very small boost in modern games because the multiple cores they use are not boosted as much as a single core is with PBO. You can get better game performance with a high all-core overclock; I tested that in a couple of games, including SOTTR.

The reason for the diminishing returns with PBO is that one core can reach, let's say, 5 GHz, but when multiple cores are used the power and heat limits kick in, and even though the cores might boost a small bit over stock, the difference is too small to matter. And yeah, AMD's auto PBO voltages are very trigger-happy: PBO gives the CPU 1.37 V for a close-to-4.7 GHz multi-core boost, while I run a manual all-core OC at 4.7 GHz and 1.25 V, 100% stable for 2 months.
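A rough feel for what that voltage gap costs, using the textbook dynamic-power approximation P ∝ f·V² (a sketch with the two voltages above at the same clock, not a measurement):

```python
# Dynamic power scales roughly with frequency * voltage^2, so at the same
# 4.7 GHz clock, 1.37 V vs 1.25 V costs about:
ratio = (1.37 / 1.25) ** 2
print(f"~{(ratio - 1) * 100:.0f}% more power/heat for the same clock")  # ~20%
```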

The best setting for unlocking multi-core boost is raising PPT, but on a 5800X that gets you into super hot territory, and you need a very beefy cooling solution to keep it from thermal throttling, aka reducing boost due to heat. You can potentially get close to an all-core OC with a high enough PPT, but that would also put you into overheating territory.

And yeah, the 5800X is a really hot CPU. I have one, hottest chip I ever used, and I used to have a 4790K which, without a delid, could fry eggs easily :laugh:
The 5800X could cook a whole menu if not cooled properly :D
 
So I went back in one more time; those random WHEAs were starting to bother me lol. I'm a stability guy and have always been into stability more than cranking it. I went back to 3800 but retained my timings from my 4000 profile. Unfortunately I couldn't tighten my subtimings up any more, even at the same voltage I was at with my 4000 profile despite dropping 200 MHz, although I was able to lower my voltage. They wouldn't scale within the voltage range I consider my max, so I didn't try any further. I lost about 3-4 FPS, and my "GPU bound" went from 25% to 20%, so I guess we'll call that a 5 percent loss in performance. But hey, these framerates are way above what my monitor can support anyway. My 4000 profile will stay saved in my BIOS, waiting for the day an AGESA update makes those WHEA 19s disappear. It's still a pretty large leap over XMP at 3600 CL16 (27 FPS). Overall I learned the importance of RAM speed; I feel it is just as important as selecting a CPU and GPU when you have a high refresh rate / competitive esports in mind.
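For scale, if the average in this bench sits somewhere around 240 FPS like the runs posted earlier (an assumed round number, not this run's exact score), a 3-4 FPS drop is a lot less than 5%:

```python
# Rough percentage loss for a 3-4 FPS drop at a ~240 FPS average (assumed).
for drop in (3, 4):
    print(f"{drop} FPS out of ~240: {drop / 240 * 100:.1f}% loss")
# 3 FPS ≈ 1.3%, 4 FPS ≈ 1.7% -- small next to the 27 FPS gained over XMP 3600 CL16.
```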
 

Attachments

  • 2021-08-18 (1).png (32.3 KB)
  • 2021-08-18.png (5.9 MB)
PBO without Curve Optimizer is not worth it in my opinion. Too much heat due to high voltages and currents.

I lost 6 FPS going from 4000 CL16 to 3800 CL15 with generally tighter timings. RAM voltage equal.
View attachment 213221

With the cheapest 3200 MHz 64 GB RAM.
Good result! If you want a bit more performance, post your ZenTimings so we can help you.
 
Best proof of moar cores = moar FPS in this game: we have the exact same GPU, I can get max 237 FPS with 3800 CL16 medium-tuned RAM, you get 253 FPS with 3200 stock RAM :D
The 5950X is for some reason a gaming beast :love:
 
Honestly I do not think this difference in performance is due to more cores/threads at this point, because 8c/16t is certainly more than enough for this game/benchmark. More likely this performance gap is because of the L3 cache difference (5800X = 32768 KB vs 5950X = 65536 KB) and possibly the higher turbo frequency on the 5950X...
 
You could possibly be quite correct. I was referring to the fact that for some reason you get an FPS jump from, let's say, a 5600X or 5800X to a 5900X or 5950X by itself, and extra cache could be exactly that. Memory write speeds are also double on the dual-CCD Zen 3s, though I'm not sure how much that would impact this benchmark; extra cache seems more likely, especially at low resolutions.
 
Everything about the 5950X is superior to the 5600X and 5800X. I was checking out AIDA runs of 5950Xs, and even at 3600 and looser timings it had more bandwidth than I could achieve, and less latency. Out of the box it has better single-core than I can achieve overclocked. Higher cache speeds too. The only things in its class are the 5900X and its Intel counterparts. It's also significantly more expensive, as we all know :)
 
Yup, the good thing is those pluses for the 5950X and 5900X are only a "game changer" in very niche scenarios, like for example this test we are playing with, which is anything but realistic when it comes to the settings people would use when gaming :laugh: In most gaming scenarios the difference between them is minimal.
But in the purest sense, they are superior CPUs :)

Oh, and one more important thing to note, taken from Anandtech's measurements, wattage per core:

(attached image: wpc.jpg, Anandtech watts-per-core chart)

Says it right there: the 5950X and 5900X are the good bins, the 5800X and 5600X are the crappy bins, with a special bad-bin prize for the 5800X, more than twice the power used per core vs the 5950X :laugh:
 
This is my best so far, but I still couldn't pass 350 on the max.
 

Attachments

  • tomb_2.png (3 MB)
  • ZenTimings_Screenshot.png (32.5 KB)
You improved the CPU score by 11 FPS average, that is good. What voltage are you running on the RAM? You could try tRP 15, tRC 46, tRFC 276. Or, if you run 1.45 V on the RAM, disable Gear Down Mode, set 2T, CL 15, tRCDRD 15, tRP 15, tRC 45, tRFC 270.
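To sanity-check those tRFC suggestions in absolute time, here's the cycles-to-nanoseconds conversion at DDR4-3800 (memory clock 1900 MHz):

```python
# Convert tRFC from memory-clock cycles to nanoseconds at DDR4-3800.
def cycles_to_ns(cycles: int, mclk_mhz: float) -> float:
    return cycles / mclk_mhz * 1000

for trfc in (276, 270):
    print(f"tRFC {trfc} @ 1900 MHz = {cycles_to_ns(trfc, 1900):.1f} ns")
# 276 -> ~145 ns, 270 -> ~142 ns
```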
 
Looks like your GPU is running out of juice. Potato settings and that 5900X is still only 50% bottlenecked. You're almost there though.
I don't know what's wrong with my GPU; it is on stock settings, as it will crash if I just change the clock/memory, so I guess it is dying slowly.
I am trying to improve my latency, and unfortunately I am getting the same with CL14, so not sure what's wrong. What do you think, guys?
 

Attachments

  • 3800_c14.png (187.7 KB)
  • 3800_c16.png (159.6 KB)
What GPU is it?
 
Honestly I do not think this difference in performance is due to more cores/threads at this point, because 8c/16t is certainly more than enough for this game/benchmark. More likely this performance gap is because of the L3 cache difference (5800X = 32768 KB vs 5950X = 65536 KB) and possibly the higher turbo frequency on the 5950X...

There is no cache advantage. 32MB of L3 per chiplet, a core can't just access the other chiplet's L3.

I don't know what's wrong with my GPU; it is on stock settings, as it will crash if I just change the clock/memory, so I guess it is dying slowly.
I am trying to improve my latency, and unfortunately I am getting the same with CL14, so not sure what's wrong. What do you think, guys?

AIDA clock speeds can vary a lot, especially if your CPU OC or Curve Optimizer settings are unstable. The latency number regularly flies all over the place on Zen 3; it's unpredictable.

As for the 3800 CL16/CL14, those results straight up look unstable. AIDA does all sorts of weird shit if not stable. If you're running 4/4/16/4/8/10 for RRDS/RRDL/FAW/WTRS/WTRL/WR, you will usually need to increase VDIMM compared to running 4/6/16/4/12/12. And that's assuming you actually stability-tested the 3800 CL16 profile.
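One quick consistency check on those subtiming sets, using the standard rule that tFAW can't usefully be shorter than four tRRD_S windows (a sketch of the relationship, not a stability guarantee):

```python
# tFAW below 4 * tRRD_S buys nothing: four ACTIVATEs already take at least
# 4 * tRRD_S cycles, so the four-activate window can't effectively be shorter.
profiles = {"tight 4/4/16": (4, 16), "relaxed 4/6/16": (4, 16)}  # (tRRD_S, tFAW)
for name, (trrd_s, tfaw) in profiles.items():
    floor = 4 * trrd_s
    verdict = "consistent" if tfaw >= floor else "tFAW below the useful floor"
    print(f"{name}: 4*tRRD_S = {floor}, tFAW = {tfaw} -> {verdict}")
```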


The per-core power doesn't really correlate with silicon quality at all. I've seen plenty of well-binned 5600X/5800X and shit-tier 5900X/5950X (mine is somewhere in the middle of mediocrity). You can't just extrapolate silicon quality from the nT W/core metric - that's for all-core, which purely depends on how many watts can be run through the CPU under the PPT limit. A stock 5950X is below 4.0 GHz on something like 60 W per chiplet. A stock 5800X runs a blistering 4.4-4.6 GHz all-core on like 125-130 W in a single chiplet. Chop off 20/30/40 W from a 5800X's stock power limit and chances are it'll get better performance while pulling significantly less per-core power.

The 7.85 W figure is also BS for a stock 5900X; no idea what "test" they ran (though it was launch day, so probably firmware BS). You'll easily see 8-10 W per core on just about any all-core SSE stress test, unless the test for some reason isn't maxing out the PPT envelope (Cinebench seems to do this).
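The all-core arithmetic behind that point, using the rough wattage figures quoted in this post (assumed round numbers, not measurements):

```python
# Rough per-core wattage under all-core load, from the figures quoted above.
cases = {
    "5950X (per chiplet)": (60, 8),    # ~60 W spread across 8 cores per chiplet
    "5800X (stock)":       (128, 8),   # ~125-130 W, single chiplet, 8 cores
}
for name, (watts, cores) in cases.items():
    print(f"{name}: ~{watts / cores:.1f} W/core")
# ~7.5 W/core vs ~16 W/core -- mostly a PPT-per-core difference, not binning.
```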
 
The 3800 CL16 profile was tested 25 cycles TM5 stable.
The 3800 CL14 is not tested yet; it was just a for-fun test, but the results weren't as expected.
 