# Gaming Memory/Timings/Latency for z370 gaming MOBO



## zanatos (Oct 31, 2017)

Hi Guys,

Getting my new Z370 rig, and I want to know what MHz speed and timings I should focus on, or if it doesn't matter. I would appreciate your feedback.

By the way, I will be getting the G.Skill Trident Z RGB RAM.


----------



## EarthDog (Oct 31, 2017)

2x8GB DDR4-3200 is my sweet spot.

Set XMP and go.


----------



## londiste (Nov 1, 2017)

Memory dependency varies by application and game; not all software benefits from faster memory.
In general, the lower the memory latency the better; the only problem/variable there is price.

As EarthDog already said, 3200 seems to be the sweet spot for price/speed right now.


----------



## Enterprise24 (Nov 1, 2017)

Get the best you can afford.

If you have some knowledge of overclocking, you can pick 3200 CL14, which is single-sided Samsung B-die, and then easily overclock to 4000+. A lot of my friends get 4133-4266 on those kits (Galax HOF, since the Trident Z 3200 CL14 is not available in my country).


----------



## EarthDog (Nov 1, 2017)

I wouldn't bother overclocking memory unless you benchmark.


----------



## Enterprise24 (Nov 1, 2017)

I consider it necessary for my games, like Cities: Skylines with 100K+ population and the Total War series with thousands of units on screen.


----------



## EarthDog (Nov 1, 2017)

One-off situations. What are your actual gains in those titles going from 3200 to 4000 memory?


----------



## metalfiber (Nov 2, 2017)

I got the G.SKILL Ripjaws V Series 32GB (4 x 8GB) DDR4-3333, 16-16-16-36 timings, model F4-3333C16Q-32GVR, for my Gigabyte Z370 AORUS Gaming 7. It was approved by Gigabyte for the motherboard. I finally got my 8700K; putting it all together this weekend... hopefully.


----------



## Vayra86 (Nov 3, 2017)

Enterprise24 said:


> Get the best you can afford.
> 
> If you have some knowledge of overclocking, you can pick 3200 CL14, which is single-sided Samsung B-die, and then easily overclock to 4000+. A lot of my friends get 4133-4266 on those kits (Galax HOF, since the Trident Z 3200 CL14 is not available in my country).



Looking at this, it's clear as day: you'll want to find 3200MHz CL16, for Skylake at least.


----------



## erocker (Nov 3, 2017)

Lower-latency 3200 is good. The 6-core chips are hit and miss with their IMCs. They can usually run higher-clocked RAM, but then require a bit more VCCSA voltage, which means more heat.


----------



## mouacyk (Feb 22, 2018)

EarthDog said:


> I wouldn't bother overclocking memory unless you benchmark.


I don't see a good reason why not, if you have the time and interest to do the research and avoid the pitfalls. Only then can you discover your true headroom. Normally, the people who chase these margins are already on the edge anyway -- like 144 Hz with matching GPU and CPU horsepower. They are pushing boundaries that the fledgling majority can't even fathom: why minimum frames, responsiveness, and just pure performance even matter.



Enterprise24 said:


> I consider it necessary for my games, like Cities: Skylines with 100K+ population and the Total War series with thousands of units on screen.


It's your time and I appreciate you sharing the results, even though many consider it niche.


----------



## EarthDog (Feb 22, 2018)

Old thread, lol!

For me, it's time and level of effort versus results. I know the time sink it can be, and I know what results it yields. If you are that person chasing after 1%, like benchmarking, it's worth it; otherwise, to me, it isn't. It's obviously up to the person making the effort. I disagree that the majority who want to overclock memory are pushing limits. I see more people wanting to do it because it's next on the list, believing one-off results are the norm.


----------



## Enterprise24 (Feb 22, 2018)

If I have plenty of time, I will retest with the 8700K. This time I think I will test 2400, 2666, 2933, 3200, 3466, 3733, 4000, and 4133.

I will probably include 720p low to see the full scaling potential, and 1440p with settings appropriate for an overclocked 980 Ti, like my earlier test.


----------



## basco (Feb 22, 2018)

Maybe, if you've got time, would you, Mr. Enterprise24, try one run at 2133 MHz (or 2400 or higher) with manually optimized secondary and tertiary timings?
TIA


----------



## EarthDog (Feb 22, 2018)

Enterprise24 said:


> If I have plenty of time, I will retest with the 8700K. This time I think I will test 2400, 2666, 2933, 3200, 3466, 3733, 4000, and 4133.
> 
> I will probably include 720p low to see the full scaling potential, and 1440p with settings appropriate for an overclocked 980 Ti, like my earlier test.


Don't test 720p... you are exaggerating a difference not found at normal resolutions just to find one....


----------



## Space Lynx (Feb 22, 2018)

DDR4-3200 with 14-14-14 timings is king, even better than 4000+ MHz modules according to some gaming benchmarks I have seen. Can't find the link right now. At 1440p the gains are minimal, I admit, but it is still the sweet spot IMO for overall snappiness in everything.
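The arithmetic behind that: first-word latency in nanoseconds is the CAS count divided by the real memory clock, which is half the rated DDR transfer rate. A minimal sketch in Python (the kit/CAS pairs below are illustrative examples, not specific products):

```python
# First-word latency in ns: CAS cycles / real clock (MHz) * 1000.
# DDR transfers twice per clock, so the real clock is half the rated MT/s.
def first_word_latency_ns(transfer_rate_mts, cas):
    clock_mhz = transfer_rate_mts / 2
    return cas / clock_mhz * 1000

# Illustrative kits, not specific products.
for rate, cas in [(3200, 14), (3200, 16), (3600, 15), (4000, 19)]:
    print(f"DDR4-{rate} CL{cas}: {first_word_latency_ns(rate, cas):.2f} ns")
```

By this measure, 3200 CL14 (8.75 ns) really does beat a loose 4000 CL19 kit (9.50 ns) on absolute latency, though the faster kit still wins on bandwidth.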


----------



## cucker tarlson (Feb 22, 2018)

Have a look at this, people:

https://www.purepc.pl/pamieci_ram/test_pamieci_ddr4_2133_3600_mhz_na_intel_core_i5_8600k?page=0,3

10 games in CPU-intensive locations on an 8600K at 4.8 GHz


----------



## EarthDog (Feb 22, 2018)

1080p with an overclocked 1080 Ti and no AA... interesting.

Seems like the RAM is tweaked as well to reach DDR4-3600 CL15...


I wonder what it's like with out-of-the-box RAM, testing with AA at 1080p (needed, and easily handled by a 1080 Ti) or at 1440p... lots of testing done in exaggerated environments...


----------



## basco (Feb 22, 2018)

Yeah, they use 1.50 V for the ADATA 3200 RAM at 3600 MHz.
Wow, 32 FPS more in Witcher 3?

Is this the average/max framerate or the minimum?


----------



## cucker tarlson (Feb 22, 2018)

EarthDog said:


> 1080p with an overclocked 1080ti and no AA testing.... interesting.
> 
> Seems like the ram is tweaked as well to reach ddr4 3600 cl15...
> 
> ...


You can call testing 1050/1060/1070 GPUs on i7-8700K/i9-7900X rigs, like most reviewers do, exaggerated as well, but nobody seems to have a problem with that. It's a RAM test, therefore they test settings and locations that expose differences in RAM. You're thinking of the GPU tests that you want.


----------



## mouacyk (Feb 22, 2018)

1.5V is a lot for 3600 CAS15, but they were probably dialing in a lazy OC. My semi-lazy OC of 3600 CAS15 1T is tested over a few weeks at 1.4V. Anyone know what part of W3 they ran their benchmark through?

They were obviously just testing the effect of speed on the same CAS latency, but it is slightly interesting that games are responding to it.

The figures on these benches are normally minimums and averages.


----------



## cucker tarlson (Feb 22, 2018)

Novigrad.


----------



## EarthDog (Feb 22, 2018)

cucker tarlson said:


> You can call testing 1050/1060/1070 GPUs on i7-8700K/i9-7900X rigs, like most reviewers do, exaggerated as well, but nobody seems to have a problem with that. It's a RAM test, therefore they test settings and locations that expose differences in RAM. You're thinking of the GPU tests that you want.


I'm not thinking of GPU testing. All I am saying is that I am a bit tired of seeing reviews which create unrealistic environments to exaggerate a difference that does not extrapolate to more realistic resolutions.

Running a 1080 Ti overclocked at 1080p without AA, how does that show me what will happen at 1440p, or at 1080p with AA, which is a lot more appropriate for the hardware?

As for your example being exaggerated: I don't see an issue with the latest CPUs, in particular the mainstream 8700K or 7700K, for this testing. I wouldn't mind seeing testing done with an 8700K and a 1050/1060/1070/1080/1080 Ti/Vega 56/Vega 64. That would show whether it makes a difference across the cards used, as that will change as well considering the difference in data coming through. 

This was a neat exercise and shows good info for these settings (that few run). Outside of that, it doesn't show much at all, considering it doesn't scale remotely like that.


----------



## cucker tarlson (Feb 22, 2018)

EarthDog said:


> Running a 1080 Ti overclocked at 1080p without AA, how does that show me what will happen at 1440p, or at 1080p with AA, which is a lot more appropriate for the hardware?


It doesn't, because it's a RAM test.
When I made my thread about eDRAM on the 5775C, I used 1080p low settings in possibly the most demanding CPU scenarios I could think of.
When you test RAM for games you have to be CPU bottlenecked, same as when you test a GPU you have to be GPU bottlenecked. Like I said, no one seems to have a problem with testing a 1050 Ti/1060-type card with a top-of-the-line CPU. Why? It's unrealistic as well.
It's good to have tests like that; what people do with the results is their choice. 99% will still try to find the sweet spot, and how are they going to do that without such a test? It proves 3000-3200 is still the sweet spot; if it's proved in the most CPU-bottlenecked conditions, then it's 100% true.


----------



## EarthDog (Feb 22, 2018)

cucker tarlson said:


> It doesn't, because it's a RAM test.


Yes, it's a RAM test to show how it improves FPS in a game. So wouldn't more valid testing at least use AA at 1080p (which is what most users who have at least half a clue do)? People don't run 1080 Tis at 1080p with no AA. To that end, it is exaggerating the results with an unrealistic testing environment. I don't care what card or CPU is used; 1080p with Ultra/High settings and AA is how people play (who aren't running a 1050, lol).



cucker tarlson said:


> When you test RAM for games you have to be CPU bottlenecked, same as when you test a GPU you have to be GPU bottlenecked. Like I said, no one seems to have a problem with testing a 1050 Ti/1060-type card with a top-of-the-line CPU. Why? It's unrealistic as well.


Why do you have to be CPU bottlenecked? You aren't when playing with normal settings. Why fabricate that environmental variable when testing?

You keep harping on low-end cards with high-end CPUs like it has something to do with this. I can't help how others test, nor is that any justification for testing in an equally lopsided manner (and to be clear, I am referring to HEDT CPUs, excluding the 7740X and its little brother; I believe testing with a 7700K or 8700K is absolutely fine, plenty buy this way). Can we focus on the discussion?



cucker tarlson said:


> It's good to have tests like that; what people do with the results is their choice. 99% will still try to find the sweet spot, and how are they going to do that without such a test?


There is no way everyone will be happy. My only concern is that people see this data and think they will achieve the same results at their settings and resolution. They won't. IMO, the best way to test is more realistic running situations. If you are going to use a 1080 Ti down to a 1060, at least run 1080p Ultra/High with gobs of AA, or better yet, 2560x1440. I feel complete testing would cover both this fabricated environment and something more realistic, so users have results closer to where they play. As this testing stands, people look at it and think they will see these gains where they play, and they won't.



cucker tarlson said:


> if it's proved in the most CPU-bottlenecked conditions, then it's 100% true.


It isn't, though. That is what I am saying. In many games, you are not CPU bottlenecked unless you rock Sandy Bridge or older on Intel, or any AMD CPU not named Ryzen. Some are, but then are they actually memory-bandwidth limited, which is what they are actually testing? Lots of variables... 


Oh well, a never-ending discussion again. Sorry I brought it up.


----------



## Vayra86 (Feb 22, 2018)

EarthDog said:


> Yes, it's a RAM test to show how it improves FPS in a game. So wouldn't more valid testing at least use AA at 1080p (which is what most users who have at least half a clue do)? People don't run 1080 Tis at 1080p with no AA. To that end, it is exaggerating the results with an unrealistic testing environment. I don't care what card or CPU is used; 1080p with Ultra/High settings and AA is how people play (who aren't running a 1050, lol).
> 
> Why do you have to be CPU bottlenecked? You aren't when playing with normal settings. Why fabricate that environmental variable when testing?
> 
> ...



1. Market share of CPUs has NEVER been an influential factor for reviewers of gaming GPUs. A lot of them used HEDT-class CPUs with mainstream GPUs, in fact.
2. You have a fundamentally different approach to 'reviews', it seems. Your reviews are aimed at 'what can I expect in real life' and not at 'what is the raw potential of this component'. Commendable, but not realistic. How do you decide what the 'average use case' is, and above all, what is 'average' or 'normal'? It's way too abstract, especially in the PC world.
3. This different approach of yours reduces the value of your style of reviewing to that of the simpleton: it is no more than a 'Can I run this?' benchmark focused on a specific component. I hate to break it to you, but in all fairness, we already have crappy websites for that basic info. Why write a whole review and analysis? I won't even read it, honestly. I care a lot more about that raw performance, because that is still relevant in other situations as an indicator of what the fastest solution will be.

We have butted heads on this on multiple occasions, but the pattern is clear, and again, I believe these are two radically different approaches at work: synthetic versus real world. While a synthetic data set is still valuable 10 years from now, real world only matters today, with the current drivers, games, components, and market reality. In my humble opinion, that is why we now see a 'Performance Analysis' per game. Because really, for these real-world performance numbers, the per-game basis is so much more valuable.

Focus on what's important for the subject of the article, and clearly divide the two, and there is no need for a never-ending discussion; instead we gain more valuable information both for the NOW and the future. When you review a component, focus on the component. When you review or benchmark a game, focus on the game.


----------



## EarthDog (Feb 22, 2018)

1. Tell that to Cucker... I really don't care what CPU is used. It makes sense to me to use a 7700K/8700K for this kind of testing. I can also go along with a 7900X or whatever... but then we get into a testing environment for the 1%. While we aren't talking 50% here, the 7700K/8700K are mainstream and affordable, which I believe is his underlying point.
2. I don't care about raw potential, as I never see it. I see the potential at reasonable settings and resolutions.
3. I disagree completely with that assertion... particularly the simpleton part. It isn't what you say it is. My method gives more realistic results, period. There's just no getting around the fact that the results here are fabricated to show a difference which isn't there in a normal testing environment.



Vayra86 said:


> I care a lot more about that raw performance, because that is still relevant in other situations as an indicator of what the fastest solution will be.


Buuuuuuuuuuut, it isn't. And it seems that is the big hang-up here. Those fabricated results don't scale or tell you anything about a realistic use scenario. Some seem to think that by isolating the RAM in this manner, the results are scalable to other settings or resolutions. They aren't. Not remotely. So looking back at a bad result still yields a bad result. For example, what if we see a 20 FPS (say 15%) difference in this test, but 'proper' settings show 5 FPS or no gains? What does that result really tell you? It's only good for its unrealistic test environment. 

And it will all be tested again when DDR5 comes out, or new CPUs, or new whatevers. The real world DOES matter with the current drivers, games, and components on the market... see the first sentence. This is why this fabricated test environment is so toxic: it skews perception, replacing actual results with results unable to scale.

Yes, I agree, focus on the component, but not to the point where the testing environment isn't remotely realistic. This is why I am sticking with 1080p and AA... because it can be a CPU-bound resolution. Going lower, or using low/no AA, particularly with a high-end GPU, skews reality. 

Anyhoo, continuing to pile on... again, the PM box is always open.


----------



## Vayra86 (Feb 22, 2018)

EarthDog said:


> 1. Tell that to Cucker... I really don't care what CPU is used. It makes sense to me to use a 7700K/8700K for this kind of testing. I can also go along with a 7900X or whatever... but then we get into a testing environment for the 1%. While we aren't talking 50% here, the 7700K/8700K are mainstream and affordable, which I believe is his underlying point.
> 2. I don't care about raw potential, as I never see it. I see the potential at reasonable settings and resolutions.
> 3. I disagree completely with that assertion... particularly the simpleton part. It isn't what you say it is. My method gives more realistic results, period. There's just no getting around the fact that the results here are fabricated to show a difference which isn't there in a normal testing environment.
> 
> ...



Let's pick up the example given in this thread, to turn it back on topic:

We see 1080p no-AA testing. You say 'not realistic'... I say 'this is my use case'.

I use a strong GPU with the fastest CPU at a relatively low resolution; hell, I even drop IQ settings if that is what's needed to hit 120 FPS. The linked review and situation are 100% relevant to me, and I can also tell you that increasing RAM speed does indeed make a huge difference in a CPU-bound scenario that is still gaming. I can also transplant these numbers to different games that are far more CPU/RAM bound and conclude that faster RAM will benefit me in those games as well. And, again, this is supported by my own experience switching to faster RAM in, for example, Guild Wars 2 World vs. World with loads of players on screen.

So again: who gets to decide what is realistic or relevant? Should the writer do that, or the reader? Perhaps I HATE AA because it can make games blurry.

EDIT: as to your points below:
1. I am already running into games at 1080p where a (non-Ti) 1080 won't do the trick, as in below-60 FPS performance. No less than a year ago, people yelled 'why would you ever use a 1080 for 1080p, it's a 1440p card'.
2. AA really isn't that much of an influence on recent GPUs, and there are tons of AA methods too, TAA being more and more prevalent, even combined with internal upscaling or downscaling. The AA line is becoming real blurry (lol!). Go look at Resident Evil 7 and you'll see what I mean... dynamic resolution is a thing these days, and it can be used in both directions.

Ah well. I'm dropping it, as before, let's agree to disagree ^^


----------



## EarthDog (Feb 22, 2018)

If you (the royal you, note) are the person who runs a 1080 Ti at 1080p without AA, then this is for you. If you have a clue about what you are doing, you...

1. Wouldn't be rocking a 1080 Ti at 1080p without AA in the vast majority of situations.
2. Would be using higher settings and AA (otherwise, go console).

I have a 1080 that pounds through nearly everything at 60+ FPS at 2560x1440, well more than that in many titles (high/ultra, 2xAA for this res). I have a 144 Hz monitor... I prefer the eye candy over getting another 20 FPS, as it's buttery smooth anyway. In most titles, I am around 100 FPS.

As I said, this is realistic for the fabricated environment it's in. Outside of that, it really isn't. It doesn't scale, and it doesn't tell you anything about the sweet spot at more realistic settings. I'd simply rather a review cover more realistic settings (again, can't please everyone) to show a more accurate picture of what to expect. If the review is outside of your use case, it really doesn't tell users much. 


EDIT: 





Vayra86 said:


> Ah well. I'm dropping it, as before, let's agree to disagree ^^



We'll never see eye to eye on this point. 

Dynamic resolution also places the load on the GPU, making RAM speed matter less... as does AA... as do the higher settings most strive to run. 

EDIT2: I mean, I said my PM box was open and you replied here, LOLOLOLOL!


----------



## cucker tarlson (Feb 22, 2018)

EarthDog said:


> Yes, it's a RAM test to show how it improves FPS in a game. So wouldn't more valid testing at least use AA at 1080p (which is what most users who have at least half a clue do)? People don't run 1080 Tis at 1080p with no AA. To that end, it is exaggerating the results with an unrealistic testing environment. I don't care what card or CPU is used; 1080p with Ultra/High settings and AA is how people play (who aren't running a 1050, lol).
> 
> Why do you have to be CPU bottlenecked? You aren't when playing with normal settings. Why fabricate that environmental variable when testing?
> 
> ...


Well, it's still a good test for people who know how to approach the outcome of such a test. And the people who you think might be confused or misinterpret these results don't read sources like that or delve deep enough into the subject anyway. I'm all for testing in fabricated environments, but even I use a lot of common sense when then purchasing components. Just because DDR4-3600 scales nicely over 3200 doesn't mean many people need it. If you're on a 240 Hz monitor and have money to blow, then why not, though; it's good to know it will give you a slight boost.


----------



## mouacyk (Feb 22, 2018)

Testing to differentiate two products that are alternatives to each other needs strict isolation. Suppliers love this, because they can charge for the "unrealistic" potential. Enthusiasts love this testing because, with the other variables controlled, they're free to chase and "realize" this potential, or deny it.

It's a huge waste of time and money for your average reader.


----------



## EarthDog (Feb 22, 2018)

cucker tarlson said:


> Well, it's still a good test for people who know how to approach the outcome of such a test. And the people who you think might be confused or misinterpret these results don't read sources like that or delve deep enough into the subject anyway. I'm all for testing in fabricated environments, but even I use a lot of common sense when then purchasing components. Just because DDR4-3600 scales nicely over 3200 doesn't mean many people need it. If you're on a 240 Hz monitor and have money to blow, then why not, though; it's good to know it will give you a slight boost.


That is the problem: you give people too much credit. 

Look around these (any) forums and see some of the questions people ask... and forums are where enthusiasts are supposed to be... Can you imagine a normal user with a Dell/prebuilt gaming PC thinking of upgrading reading that? "zOMG... I can haz 20 moar FPS from t3h memor13s? Where do I sign up?" More than half the people here wouldn't know what that testing is showing and how it extrapolates (or doesn't) to higher res/settings (AKA more realistic, not fabricated to exaggerate differences). 



cucker tarlson said:


> Just because DDR4-3600 scales nicely over 3200


But does it at the resolution and settings you play at, or only when you run low settings and no AA?



mouacyk said:


> It's a huge waste of time and money for your average reader.


Exactly... they cannot discern like many (but not a majority) here can. This is another good reason why more testing at more appropriate settings is helpful to the average Joe reading it. 



Dammit... stop replying, people... I'm trying to leave this thread alone and nobody seems to want to use the PM box!!!! Hahaha! lololol!


----------

