# Uncore frequency @ 4.8GHz vs 4.0GHz test in 8 games



## Enterprise24 (May 31, 2019)

Should you overclock the uncore frequency?
I think it depends.
If you already overclock the CPU, then why not?
On Haswell / Broadwell things are a bit more complicated because the uncore uses a separate voltage, but on Skylake (and its derivatives) the uncore runs on the core voltage, so it is basically free extra performance.
If you buy a K-series CPU and leave everything at stock (maybe just enable XMP), then just forget about it.

In my experience a high uncore frequency is essential for highly tweaked memory, such as for those who spend weeks or even months tightening every sub-timing parameter.
A typical uncore overclock runs 200-300MHz behind the core.
In this test my RAM runs at 3500MHz 16-18-18-36-2T with the most efficient sub-timings for 24/7 Hynix AFR.
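As an aside for anyone scripting this on Linux: the uncore target is just a multiplier of the 100 MHz reference clock, and on Skylake-family parts the limits live in MSR_UNCORE_RATIO_LIMIT (0x620), which you could read raw with `rdmsr 0x620` from msr-tools. A minimal sketch decoding such a value, assuming the documented bit layout (bits 6:0 = max ratio, bits 14:8 = min ratio); the example value is hypothetical:

```python
# Decode uncore (cache/ring) ratio limits from a raw
# MSR_UNCORE_RATIO_LIMIT (0x620) value. Assumes the
# Skylake-family layout: bits 6:0 = max ratio,
# bits 14:8 = min ratio, in multiples of the 100 MHz BCLK.
BCLK_MHZ = 100

def decode_uncore_limits(msr_value):
    max_ratio = msr_value & 0x7F
    min_ratio = (msr_value >> 8) & 0x7F
    return min_ratio * BCLK_MHZ, max_ratio * BCLK_MHZ

# e.g. a hypothetical value of 0x2A30 would mean min 42x / max 48x:
print(decode_uncore_limits(0x2A30))  # → (4200, 4800)
```

With a 5GHz core like in this test, the typical 200-300MHz gap would put the max uncore ratio around 47-48x.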






Test system
i7-8700K @ 5GHz
ASRock Z370 Taichi P4.00
2x8GB DDR4-3500 16-18-18-36-2T
EVGA GTX 1080 Ti @ 2126 core / 12474 mem
Corsair HX 750W
NZXT H440 White
Custom Water Cooling
Windows 10 64-bit 1607
NVIDIA 430.64
Recorded with ShadowPlay

Let's see how much of a difference it makes...










*TLDW*




----------



## infrared (May 31, 2019)

I freakin love it when you post these in-depth testing vids, very interesting.


----------



## Enterprise24 (May 31, 2019)

infrared said:


> I freakin love it when you post these in-depth testing vids, very interesting.



Thanks , glad you like it.


----------



## Zyll Goliat (May 31, 2019)

Cool benchmarking....+ It will be interesting to see the difference in video editing.....


----------



## MrGRiMv25 (May 31, 2019)

Interesting video. If I could try it on my OEM board I would, but I'll have to wait until I upgrade to have a play with uncore. It would be a pretty decent way to improve the older Xeons like the one I'm running, as they clock their uncore quite a bit slower than modern chips.

It also seems to hit a brick wall past a certain frequency in some of the games; the jump from 4.0 to 4.8 isn't quite as drastic in the test results as I would have thought.


----------



## EarthDog (May 31, 2019)

A summary at the end would be awesome. I have the attention span of a goldfish. 

Good work!


----------



## xkm1948 (May 31, 2019)

At work right now. A quick TL;DW?


----------



## Enterprise24 (Jun 1, 2019)

xkm1948 said:


> At work right now. A quick TL;DW?



Of course you will only see a benefit in non-GPU-limited situations.
It can help the 0.1% and 1% lows in some games while boosting average FPS in others.
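For anyone wanting to reproduce 0.1% / 1% low numbers from their own frame-time logs (e.g. a PresentMon or FRAPS CSV), one common definition is the average FPS over the slowest slice of frames. A minimal sketch, with made-up sample frame times:

```python
def percentile_low(frametimes_ms, fraction):
    """Average FPS over the slowest `fraction` of frames.

    One common definition of '1% low' / '0.1% low': sort frame
    times, average the slowest slice, convert back to FPS.
    """
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, int(len(worst) * fraction))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# Hypothetical log: 990 smooth frames plus 10 stutter spikes.
frames = [16.7] * 990 + [33.3] * 10
print(round(1000.0 / (sum(frames) / len(frames)), 1))  # → 59.3 (average FPS)
print(round(percentile_low(frames, 0.01), 1))          # → 30.0 (1% low)
```

Note how ten bad frames barely move the average but drag the 1% low down to half: that is why uncore/memory tuning can show up in the lows long before it shows in average FPS.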


----------



## MrGenius (Jun 1, 2019)

I tried to watch the whole thing. But if the last half is like the first half...which it undoubtedly is...what's the point? Doesn't do jack squat(half the time at least). Would I do it anyway? Guaranteed! I don't run anything that can be run balls-out...anything but balls-out...EVER!


----------



## xkm1948 (Jun 1, 2019)

MrGenius said:


> I tried to watch the whole thing. But if the last half is like the first half...which it undoubtedly is...what's the point? Doesn't do jack squat(half the time at least). Would I do it anyway? Guaranteed! I don't run anything that can be run balls-out...anything but balls-out...EVER!




What about warranty and longevity?



Also I assume you probably have lots of kids. I see so many “balls” and “out” lulz


----------



## MrGenius (Jun 1, 2019)

xkm1948 said:


> What about warranty and longevity?
> 
> Also I assume you probably have lots of kids. I see so many “balls” and “out” lulz


Warranties and longevity are known by the State Of Cancer to cause California.

About the kids...I keep the kids in the balls.


----------



## Vayra86 (Jun 1, 2019)

Pretty impressive IMO to see those 0.1% low gaps. The pattern that emerges is that it really does benefit the stuttery behavior you tend to get in open-world games from area transitions and asset pop-in.

Thanks for this, nice work!


----------



## trog100 (Jun 1, 2019)

nicely done.. 

thanks

trog


----------



## TTU (Jun 1, 2019)

@Enterprise24
Nice work, but I get a bit triggered when I see ''Test in games'' and then in-game benchmarks being used.
Out of the eight games, I think AC: Odyssey's, Far Cry 5's and Shadow of the Tomb Raider's benchmarks are not really meant for CPU and memory-subsystem performance testing. Maybe not even GTA V's combined bench, which is only good in its city section.
But I have to say I was most surprised that there is no difference in the Witcher 3 Novigrad run. Maybe that area doesn't hammer the memory subsystem that much, or it's your good work with the memory sub-timings. Maybe with just the XMP profile there would be a difference.
I hope you get good memory for your rig. I saw in your previous videos that you used to have some G.Skill 3600 CL17-18-18 kit? That kit, despite being a relatively low bin, should be quite the upgrade.


----------



## vega22 (Jun 1, 2019)

Dude, much love for what you've done, but this is a forum. If I was after watching a video on the subject I would have gone to YouTube to begin with.

Please add the data to the OP in text/chart format.


----------



## Enterprise24 (Jun 1, 2019)

vega22 said:


> Dude, much love for what you've done, but this is a forum. If I was after watching a video on the subject I would have gone to YouTube to begin with.
> 
> Please add the data to the OP in text/chart format.



Thanks for the suggestion. Just added it to the first post.



TTU said:


> @Enterprise24
> 
> Nice work, but I get a bit triggered when I see ''Test in games'' and then in-game benchmarks being used
> 
> ...



Thanks for the suggestion about using real in-game playing instead of the benchmarks.

You can see in the Witcher 3 Novigrad run that my 1080 Ti runs at 99% the whole time on both sides, so there is no benefit from a faster core, uncore, or memory.
For AC:O the GPU is quite the bottleneck, but in Far Cry 5, GTA V and SOTTR (DX11, because DX12 stutters) there are many scenes where the GPU is not fully utilized.

About my old video with the TDZ 3600 CL17: well, I sold it 2 years ago. Had a great time with it.


----------



## TTU (Jun 1, 2019)

@Enterprise24 It's not really about GPU usage, but about selecting the right section of the game that actually puts the CPU to the test. In such scenarios there would be a difference. It could even be an in-game benchmark, but in the case of the games I've mentioned, those suck. I think the in-game benchmarks of the previous Tomb Raider games were also quite bad for that, the reboot and RotTR; Origins's is also not the best choice, and I dunno about Primal's. Of course, maintaining consistency is also important.

I'm quite surprised that SotTR is stuttering. In that game DX12 is a game-changer for Ryzen CPUs, and it performs better even on high-core-count Coffee Lake.

It looks like only the 20nm B-die is being discontinued, so maybe you will be able to snag a relatively cheap kit. The prices are constantly coming down.


----------



## Enterprise24 (Jun 1, 2019)

TTU said:


> @Enterprise24 It's not really about GPU usage, but about selecting the right section of the game that actually puts the CPU to the test. In such scenarios there would be a difference. It could even be an in-game benchmark, but in the case of the games I've mentioned, those suck. I think the in-game benchmarks of the previous Tomb Raider games were also quite bad for that, the reboot and RotTR; Origins's is also not the best choice, and I dunno about Primal's. Of course, maintaining consistency is also important.
> 
> I'm quite surprised that SotTR is stuttering. In that game DX12 is a game-changer for Ryzen CPUs, and it performs better even on high-core-count Coffee Lake.
> 
> It looks like only the 20nm B-die is being discontinued, so maybe you will be able to snag a relatively cheap kit. The prices are constantly coming down.



The next test will be about PCIe 2.0 x16 vs 3.0 x16, still based on in-game benchmarks, but after that I will seriously consider real gameplay instead. Consistency is a must.

Thanks again for your valuable suggestions.


----------



## EarthDog (Jun 1, 2019)

TTU said:


> Nice work, but I get a bit triggered when I see ''Test in games'' and then in-game benchmarks being used


I don't. I'd rather have a consistent, 100% repeatable test. The point of these is to test the GPU's performance, not the game on the GPU... if that makes sense. Actual in-game performance will differ depending on where the 'benchmark' is captured anyway, so it's not like testing in-game is that much more accurate. In other words, you can test the game and give a result, and someone else can test the same game and give a different result because they tested in a different spot or played a similar area differently. With a canned benchmark, any Dick, Tom, or Jane can dial it up and compare.

Integrated benchmarks give a consistent and repeatable test which tells a user the relative performance of the card(s). It's already well known that integrated benchmarks at times do not 'exactly' reflect in-game performance. I'll take repeatable consistency over actual game performance (especially since the difference isn't much most of the time) in benchmarks. 

@Enterprise24  Was that summary added in the vid? Again, fishbowl attention span... would like to see all the results. 
Edit: I see each graph. Ty!

As far as pcie scaling goes - https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_PCI-Express_Scaling/7.html

Don't waste your time on such a little difference.


----------



## TTU (Jun 1, 2019)

EarthDog said:


> I don't. I'd rather have a consistent, 100% repeatable test. The point of these is to test the GPU's performance, not the game on the GPU... if that makes sense. Actual in-game performance will differ depending on where the 'benchmark' is captured anyway, so it's not like testing in-game is that much more accurate. In other words, you can test the game and give a result, and someone else can test the same game and give a different result because they tested in a different spot or played a similar area differently. With a canned benchmark, any Dick, Tom, or Jane can dial it up and compare.
> 
> Integrated benchmarks give a consistent and repeatable test which tells a user the relative performance of the card(s). It's already well known that integrated benchmarks at times do not 'exactly' reflect in-game performance. I'll take repeatable consistency over actual game performance (especially since the difference isn't much most of the time) in benchmarks.



Well, what's the point of testing CPU and memory-subsystem performance with benchmarks made for GPU performance? For me it doesn't make any sense. 
Repeatability and consistency are a completely different topic. Choosing the right spot can be tricky, but in the end doing proper and helpful tests all depends on the tester's hands.


----------



## EarthDog (Jun 1, 2019)

You're looking for the end result of a parameter change. If you want the result to be more like it is in-game, you do not want to exaggerate the conditions to get that result. 

It's similar to running CPU-bound tests at 720p. Sure, it's more of a CPU load down there, but it greatly exaggerates the results (which do not scale). For example, if I run at 720p and see CPU A is 10% faster than CPU B, then move up to, say, 1080p or 2560x1440, the results differ at each resolution. Similar story here... some games will be more affected than others; it just is. The best result, IMO, is a consistent and repeatable test which shows the difference.


----------



## R00kie (Jun 1, 2019)

nice work man.
so the long and short of it is i won't touch it then


----------



## lexluthermiester (Jun 1, 2019)

As shown by the results, this kind of OC will not show a great benefit in games. It will however show benefit in CPU intensive programs that utilize the extra uncore speed, F@H number crunching for example.


----------



## mouacyk (Jun 6, 2019)

Enterprise24 said:


> Of course you will only see a benefit in non-GPU-limited situations.
> It can help the 0.1% and 1% lows in some games while boosting average FPS in others.


That begs a second set of benchmarks -- 120fps+ 720p benchmarks for those who care about high fps AND smoothness.


----------



## Enterprise24 (Jun 7, 2019)

Something is coming.


----------

