# Gaming benchmarks: DDR4 2133 MHz VS DDR4 3000 MHz (Core i7 6700K)



## Artas1984 (Mar 23, 2017)

It's been proven already that higher frequency RAM increases performance in gaming just as in content creation, whether by a little or considerably, depending on the game. My problem with all of those showcases is that far too few games are tested. So right here and now I will add more.

For this specific test I have assembled a new bench computer, with my own Asus GeForce GTX980 Ti Strix borrowed for the test. You can see the specs of the new PC in the screenshot. Don't mind the Windows 7 "not genuine" notification - I only installed a fresh ISO copy of Windows for testing and I am not activating it just for that! That said, I obviously use a legal copy of Windows 7 on my main Core i7 5775C PC.







Intel Core i7 6700K 4500 MHz OC.
Gigabyte GA-Z170XP-SLI
Corsair Vengeance 2X8 GB DDR4 3000 MHz 15-17-17-35 (XMP1)
Corsair Vengeance 2X8 GB DDR4 2133 MHz 15-15-15-36 (SPD)
Plextor M8PG 256 GB NVME PCI-E 3.0
Asus GeForce GTX980 Ti Strix 6 GB


25 games have been tested, at 1920x1080, using the maximum available in-game presets or simply the maximum available settings, with no AA where possible. Each testing scene was run many times in a row, and the best results were taken for both DDR4 frequencies using the same memory kit.
-------------------------------------------------------------------------------------------

*VIDEO PRESENTATION*
*ALAN WAKE AMERICAN NIGHTMARE*






Obvious improvement in minimal and average FPS. Tested 5 times for each frequency.
*ALIEN ISOLATION*






Not much of a difference. Tested 5 times for each frequency.
*ARMA 3 APEX*






Arma 3 is exceptional... The difference between DDR4 3000 MHz and DDR4 2133 MHz is actually bigger than the difference between a GTX980 Ti and a GTX970 - that is how much this game is CPU-bottlenecked, even with a Core i7 6700K at 4500 MHz! I was shocked! Tested 5 times for each frequency.

*ASHES OF SINGULARITY*






A CPU-heavy game? It certainly does not look like it from this benchmark. Tested 2 times for each frequency.
*ASSASSINS CREED SYNDICATE*






Little difference. Tested 5 times for each frequency.
*BATMAN ARKHAM ORIGINS*






Some performance gain is evident. Tested 3 times for each frequency.
*BATTLEFIELD 1*






Little difference. Tested 5 times for each frequency.
*CALL OF DUTY BLACK OPS 3*






Not much of a difference. Tested 12 times for each frequency.
*COMPANY OF HEROES 2*






The very first run is the most important and valuable, since every run after the first drops in performance consistently more and more. The difference is certainly evident.
*CRYSIS 3*






I have to say I have seen bigger gains reported on the internet; perhaps the test is not stressful enough... Tested 7 times for each frequency.
*DYING LIGHT*






Little difference. Tested 5 times for each frequency.
*DOOM*






Little difference. Tested 5 times for each frequency.
*DRAGON AGE INQUISITION*






There is no difference whatsoever. Tested 5 times for each frequency.
*FAR CRY 4*






Almost no difference. Tested 7 times for each frequency.
*MAD MAX FURY ROAD*






Little difference. Tested 5 times for each frequency.
*METRO LAST LIGHT REDUX*






This is huge! Unfortunately the maximum FPS does not matter that much, yet those high 300+ FPS for 3000 MHz DDR4 were constant in every test. Tested 12 times for each frequency. This test alone took me a whole hour to make!
*MIDDLE EARTH SHADOW OF MORDOR*






Funny how 2133 MHz DDR4 actually won here. Little difference. Tested 4 times for each frequency.

*MIRRORS EDGE CATALYST*






No difference. Tested 5 times for each frequency.
*PROJECT CARS*






Huge performance difference in this game. Tested 7 times for each frequency.
*QUANTUM BREAK*










Trash this game! This is the worst-optimized game I've ever seen, and it looks *nowhere* near as good as Crysis 3 or Battlefield 1 (speaking about objects, filtering and lighting, color tone, not the face work, which actually looks good). A Core i7 6700K at 4.5 GHz, 16 GB of DDR4 3000 MHz and an overclocked GTX980 Ti cannot run this game at 1080p with the highest preset and no AA? ARE YOU SERIOUS? WTH IS THIS SHIT? Anyway, you will want the highest frequency RAM available for this game. Tested 5 times for each frequency.
*RAINBOW SIX SIEGE*






It appears that high frequency RAM improves maximum FPS performance the most. Tested 7 times for each frequency.
*RISE OF TOMB RAIDER*






The FPS increase from high frequency RAM is obvious. Tested the "Mountain Peak" scene 7 times for each frequency.
*THIEF*






A similar FPS increase is evident in Thief, just like in Tomb Raider. Tested 4 times for each frequency.
*WATCH DOGS 2*






Little difference. Tested 5 times for each frequency.
*WITCHER 3 WILD HUNT*






I could not record any difference, although I've seen obvious improvements from high-MHz RAM in this game when it was tested elsewhere. Perhaps this testing scene is just not stressful enough. Tested 5 times for each frequency.
-------------------------------------------------------------------------------------------

*CONCLUSIONS*

1. Higher frequency RAM does increase gaming performance - mostly by a little, but in some cases notably.

2. It is not worth selling your *basic* DDR4 and getting new *high-frequency* DDR4 from a first-hand retailer at full price.

3. It is worth upgrading your RAM only if the extra money costs as much as the extra performance you get in return - that being said, it is not worth selling your 16 GB of DDR4 2133 MHz for 60 EUR just to get 16 GB of DDR4 3000+ for 120 EUR for an extra 10 % FPS improvement.
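The rule of thumb in point 3 can be sketched as a quick calculation. This is a minimal illustration of the stated rule, using the 60/120 EUR figures from the conclusion; the function name and threshold logic are my own, not anything from the thread:

```python
# Sketch of the upgrade rule in conclusion point 3: an upgrade is "worth it"
# only if the relative price increase does not exceed the relative FPS gain.
# All numbers are illustrative, taken from the post, not market data.

def upgrade_worth_it(current_price_eur: float, new_price_eur: float,
                     fps_gain_pct: float) -> bool:
    """Return True if the extra cost (as % of the current price) is at most
    the FPS improvement (in %)."""
    extra_cost_pct = (new_price_eur - current_price_eur) / current_price_eur * 100
    return extra_cost_pct <= fps_gain_pct

# The example from the conclusion: selling a 60 EUR kit to buy a 120 EUR kit
# for ~10 % more FPS is a 100 % price increase for 10 % more performance.
print(upgrade_worth_it(60, 120, 10))   # False
```

By this reading, a 5 % price premium for a 10 % FPS gain would pass, while doubling the spend for 10 % fails.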


----------



## yogurt_21 (Mar 23, 2017)

sweet, now do ryzen...


----------



## ZenZimZaliben (Mar 23, 2017)

I appreciate the effort it took to do this...now I am going to point out some issues inherent with RAM Benchmarking. 

With RAM you really need to show memory timings for both tested speeds. Pure bandwidth is only a portion of the whole picture when it comes to RAM. There is an apex where tighter timings and high bandwidth meet, and that is the sweet spot... even if clocked a little slower, with tighter timings you will see an improvement.

How did you get the RAM to run from 2133 MHz to 3000 MHz? XMP profiles? What did the timings look like?


----------



## Artas1984 (Mar 23, 2017)

I have finished editing the thread. 



ZenZimZaliben said:


> I appreciate the effort it took to do this...now I am going to point out some issues inherent with RAM Benchmarking.
> 
> With RAM you really need to show memory timings for both tested speeds. Pure bandwidth is only a portion of the whole picture when it comes to ram. There is an Apex where tighter timings and high bandwidth meet and that is the sweet spot...even if clocked a little slower with tighter timings you will see an improvement.
> 
> How did you get the ram to run from 2133Mhz to 3000Mhz? XMP Profiles? What did the timings look like.



Updated!


----------



## EarthDog (Mar 23, 2017)

Sometimes none at all, mostly by very little (negligible even... 1 FPS), and a notable amount in Project CARS.

Still hardly a difference for sure. But worth the meager (as I recall them) price difference between 2133/2400 to 3000.


----------



## fullinfusion (Mar 23, 2017)

@Artas1984 thanks for the time taken for testing all this 

It was a wonderful read!


----------



## HTC (Mar 23, 2017)

fullinfusion said:


> @Artas1984 *thanks for the time taken for testing all this*
> 
> It was a wonderful read!



I second that!

I wonder if a similar test for Ryzen platform would show any tangible difference to this.


----------



## fullinfusion (Mar 23, 2017)

HTC said:


> I second that!
> 
> I wonder if a similar test for Ryzen platform would show any tangible difference to this.


You know what, my fingers are crossed that Ryzen will show a heck of a lot more difference, but who knows right now.


----------



## Kanan (Mar 24, 2017)

Yeah well worth the minimal difference in price between ultra low 2133 and 3000/3200 DDR4. 

Notice: the badly coded games (Arma 3 + Project Cars) benefit the most, for obvious reasons. Also notice that in these games even more bandwidth would probably help even more. Basically you can't have enough bandwidth for dual-channel CPUs. Even more so for Ryzen, which ties the interconnect between its CPU complexes (Infinity Fabric) directly to RAM speed.


----------



## Artas1984 (Mar 25, 2017)

Thread updated.

Included a video presentation today on my YouTube channel. All of the pictures represent real testing places. I think you can all recognize which are engine benchmarks and which are custom Fraps benchmarks.

I also tested Left 4 Dead 2 on a local server, in which I "don't get 300 FPS"

DDR4 2133: 291/293/296 FPS
DDR4 3000: 292/296/300 FPS

Even in this 2009 game I can see a minimal performance boost, as I tested it 7 times in a row.


----------



## silentbogo (Mar 25, 2017)

Interesting. I was expecting a little more impact on newer titles, but I guess it is what it is....


----------



## EarthDog (Mar 25, 2017)

Artas1984 said:


> Thread updated.
> 
> Included a video presentation today in my youtube channel. All of the pictures represent real testing places. I think you can all recognize which are engine benchmarks and which are custom Fraps benchmarks.
> 
> ...


It falls within the ~1% margin of error.

Again, many of these titles show a one FPS difference... I wouldn't call that minimal, I'd call it margin of error (anything around 1% or so, honestly...).

The real question is if it's worth it to pay for notable increases in two games while the rest fall within margin of error..even if it is $10 or 10% more...
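EarthDog's ~1% rule of thumb can be sketched as a quick check. The threshold is his informal cutoff from this post, not a formal statistical test, and the helper names are mine:

```python
# Sketch of the "anything around 1% is noise" rule of thumb from the post.
# The 1% threshold is an informal cutoff, not a statistical margin of error.

def percent_gain(baseline_fps: float, new_fps: float) -> float:
    """FPS change as a percentage of the baseline."""
    return (new_fps - baseline_fps) / baseline_fps * 100

def within_margin_of_error(baseline_fps: float, new_fps: float,
                           threshold_pct: float = 1.0) -> bool:
    """True when the change is small enough to be treated as run-to-run noise."""
    return abs(percent_gain(baseline_fps, new_fps)) <= threshold_pct

# A 1 FPS gain at 144 FPS is ~0.7 % -- noise by this rule.
print(within_margin_of_error(144, 145))  # True
# A 3 FPS gain at 42 FPS is ~7.1 % -- a real difference by the same rule.
print(within_margin_of_error(42, 45))    # False
```

The same absolute FPS gain reads very differently depending on the baseline, which is exactly the point being argued.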


----------



## DeathtoGnomes (Mar 25, 2017)

What this testing shows me is how much effort game developers deliver when it comes to higher frame rates. The better-looking games have great frame rates without tweaking our nuclear-powered babies.


The sad part is this wasn't done with a Ryzen system. (yet?)


----------



## EarthDog (Mar 25, 2017)

There is a video here that was making its rounds on the forums... Let me link the thread:

It shows more improvements over Intel to 3K, and looks to scale a bit above that... however then you are getting into an 'is it worth it' situation there too...


----------



## Artas1984 (Mar 25, 2017)

EarthDog said:


> It falls ~1% within the margin of error.
> 
> Again, many of these titles are one FPS difference... I wouldn't call those minimal, I'd call it margin of error.. (anything 1% or so honestly...).
> 
> The real question is if it's worth it to pay for notable increases in two games while the rest fall within margin of error..even if it is $10 or 10% more...



Yes and no, EarthDog...

Take for example the same L4D2: in the 14 tests that I made with both frequencies, the DDR4 2133 MHz would score around 291 - 293 average FPS (7 attempts), and the DDR4 3000 MHz would score 294 - 296 average FPS (7 attempts). I took only the best results out of those 7 attempts. Even though that is a 1 % difference, it's not because of margin of error.

Certainly many games pose accidental results, as you say: Ashes of Singularity, Assassins Creed Syndicate, Mirrors Edge Catalyst, Middle Earth Shadow of Mordor, Mad Max Fury Road, Dragon Age Inquisition - which means in those games there is no benefit from higher MHz RAM. I am not counting Crysis 3, Far Cry 4 and Witcher 3, since it's proven elsewhere that higher MHz RAM improves FPS in them, even though in my test the results were minimal.


----------



## EarthDog (Mar 26, 2017)

1% is negligible/margin of error... especially in game 'run-throughs' where there isn't a consistent, repeated scene and set of actions on the run-through.

It's odd how you throw away your own testing for the other games... your results are your results. If you can't trust them all, you can't trust any of them. Stick to your guns next time... but mention you've seen others get different results.


----------



## DeathtoGnomes (Mar 26, 2017)

EarthDog said:


> 1% is negligible/margin of error... especially in game 'run-throughs' where there isn't a consistent, repeated, scene and actions on the run-through..
> 
> It's odd how you throw away your own testing for the other games...your results are your results. If you cant trust them all, you cant trust any of them. Stick to your guns next time... but mention you've seen other get different results.


So you're telling him to include every game he tested or none at all. LOL, that's a hell of a potentially long post, please no...


----------



## EarthDog (Mar 26, 2017)

He included those titles already, just 'threw away' the results in the discussion with me, it seems. But really... he did already 'stick to his guns' on those titles in the OP... just backed out in our conversation, I guess.

EDIT: Literally 18 of the 25 titles tested I would consider had margin-of-error-type increases or simply no gain. These titles had gains of 0, 1, or 3 FPS. The 3 FPS gains I counted as 'no gain/margin of error' were those reaching 262, 150, and 129 FPS respectively. 3 FPS on 42 FPS (which I counted as an increase) is what I would call a minimal gain.

Of the 0 FPS gains, their average FPS was 148, 144, and 89.

Of the 1 FPS difference games, their average FPS was 61/95/83/119/161/90/190/146/135/117/144/72 respectively. The games with a 1 FPS difference are clearly within the margin of error, particularly when they are at/above 100 FPS.

Literally, only 3-4 games will make a difference in performance out of the 25 tested.

I also get concerned when one guesses at things... for example "must not have stressed it enough"... ok... perhaps I can go along with that... but why make it up? Perhaps it, like the other titles that had 0 FPS difference, just didn't respond at all and averaged out to nothing? That shows bias in the result to me, considering you seemingly expect an increase on every title (and count 1 FPS increases as anything more than negligible or margin of error).

Taking the 'best' run is also concerning... why not the average? Or why not throw away the low and the high to get the mean? The 'best' run is typically anomalous and not the norm.

Sorry to be so blunt, Artas. I could really get on board with your testing if it weren't stale (testing things already tested ad nauseam), and the methods are curious to me. Please take this as constructive, and not me being a douche.


----------



## BiggieShady (Mar 26, 2017)

EarthDog said:


> being a douche


It was a good, fun read ... I liked the part about when to buy faster ram 


> ... if it will only cost as much extra money, as much extra performance you get in return


trouble is I never know what's today's price of 1 fps


----------



## EarthDog (Mar 26, 2017)

What is really funny... I think our answer is the same, but the work to get there is pretty different!!! DDR4 3000 is a sweet spot for price and performance now (remember, other reviews went over this since it was released)!!!! I agree with the end message, but perhaps not the points as stated (see my first post in the thread).


----------



## Artas1984 (Mar 26, 2017)

EarthDog said:


> 1% is negligible/margin of error... especially in game 'run-throughs' where there isn't a consistent, repeated, scene and actions on the run-through..



In all of my tests, EarthDog, the scenes and actions were *consistent and always the same*. But we do agree that a lot of results could have been within the margin of error, and I myself expected far greater differences between DDR4 2133 MHz and DDR4 3000 MHz. I am not saying they were, since I explained it already with the L4D2 example.

I will stick to the point that only the best results from many runs should be used, and that average results are wrong, because average results include both the worst and the best data. That being said, many times the worst results occur from accidental HDD lag, background program checks or similar nuisances. Also, the best results repeat themselves quite a few times and are never accidental. Here is an example of minimal FPS runs in Rise of Tomb Raider:

DDR4 2133 MHz: 54/71/*88*/75/*89*/*88*/45

DDR4 3000 MHz: 48/76/*94*/*95*/77/*95*/50

We clearly see that 3 times the GPU manages to pull out the best results for both RAM frequencies; sometimes the FPS falls short, and sometimes a lag occurs, resulting in 50 FPS or lower. Seeing this trend I clearly understood that only the best FPS should be taken, and the difference of 88/89/88 vs 94/95/95 clearly shows what DDR4 2133 MHz vs DDR4 3000 MHz is all about. I will stick to this method and no one will talk me out of it!
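For what it's worth, the two competing readings of the Rise of the Tomb Raider numbers above can be put side by side. This is only a sketch over the run values quoted in the post; the variable names are mine:

```python
from statistics import mean

# The minimum-FPS runs quoted above for Rise of the Tomb Raider.
runs_2133 = [54, 71, 88, 75, 89, 88, 45]
runs_3000 = [48, 76, 94, 95, 77, 95, 50]

# Best-run reading (the thread author's method): compare the peaks.
best_gain = max(runs_3000) - max(runs_2133)   # 95 - 89 = 6 FPS

# Plain-average reading (what the critics suggest): the one-off lag runs
# (45, 48, 50) pull both averages down and shrink the gap.
avg_gain = mean(runs_3000) - mean(runs_2133)

print(best_gain, round(avg_gain, 1))
```

On this data set the two methods agree on the direction of the difference; they disagree on its size, which is the whole argument in this thread.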


----------



## Kaynar (Mar 26, 2017)

Very nice, this should be pinned or something! You should add a 2666 MHz CL16 kit, since those are quite common and sit in between 2133 and 3000 MHz, though you've already made your point with this testing.


----------



## EarthDog (Mar 26, 2017)

Artas1984 said:


> In all of my tests earthdog the scenes and actions were *consistent and always the same*.
> 
> I will stick to the point that only the best results from many runs should be obtained, and average results are wrong, because average results include calculating. Also the  I will stick to this method and noone will talk me out of it!


Manual run-throughs are not repeatable. Something changes each and every time. The best you can do manually are scenes 'on rails' for consistency.

I don't agree that using the 'fastest' result is the best. I can't think of one website which does it that way... perhaps we are all wrong though...

You keep on keeping on, people eat your testing up like it's The Gospel.


----------



## Robert Bourgoin (Mar 26, 2017)

Thank you for doing this. I found it to be very interesting. As many have said, it's CPU overclocking that has the better results in improving FPS in games.


----------



## Artas1984 (Mar 26, 2017)

EarthDog said:


> Manual runs throughs are not repeatable. Something changes each and every time. The best you can do manually are scenes 'on rails' for consistency.
> 
> I don't agree that using the 'fastest' result is the best. I can't think of one website which does it that way... perhaps we are all wrong though...
> 
> You keep on keeping on, people eat your testing up like it's The Gospel.



Did I not explain to you, EarthDog, why I choose only the best results from the different contenders?

One more time:

DDR4 2133 MHz: 54/71/*88*/75/*89*/*88*/45

DDR4 3000 MHz: 48/76/*94*/*95*/77/*95*/50

If I was to calculate only the average data, I would include accidental lag caused by other hardware, and it might turn out that DDR4 2133 MHz is faster than DDR4 3000 MHz. *I must only take the best, yet repetitive, results that the tested hardware contender is able to produce without being hindered by any other hardware parts*. This applies to processors and video cards too. I cannot imagine how else I am to explain it. Don't act like a stubborn child, EarthDog, pretending you do not understand what I have thoughtfully explained, since I gave you a very good example of why I am doing it my way.


----------



## EarthDog (Mar 26, 2017)

Repeating something I understand and don't agree with doesn't help anything. Neither does your insult... I'm not a child (stubborn, sure).

Let me try the same thing...

By taking the best result, you are posting a best case scenario, not an average or a mean. By taking only the best result, this exaggerates the result. Accidental lag, as you say, is accounted for in average... that's why it's an average. You can also throw out the lowest and highest results and accomplish the same thing. As it stands though, you are discarding all results but the fastest.* You are manipulating results* by taking only highest values to get an expected outcome. I simply don't agree with that methodology. That's all.
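The throw-out-the-lowest-and-highest approach mentioned here can be sketched in a few lines. The helper is hypothetical (nothing in the thread defines it); the sample data are the Rise of the Tomb Raider minimum-FPS runs posted earlier in the thread:

```python
from statistics import mean

def trimmed_mean(fps_runs):
    """Average after discarding the single lowest and single highest run,
    one way to drop one-off stutters and lucky peaks alike."""
    if len(fps_runs) < 3:
        raise ValueError("need at least 3 runs to trim")
    ordered = sorted(fps_runs)
    return mean(ordered[1:-1])

# The Rise of the Tomb Raider minimum-FPS runs quoted earlier in the thread:
print(trimmed_mean([54, 71, 88, 75, 89, 88, 45]))  # 75.2
```

Compared with the plain mean (~72.9) this discards the 45 FPS stutter run, but unlike taking only the peak it also discards the single best run, so lucky spikes don't dominate the result.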

...does repeating the same thing help? Probably not. 


We agree to disagree is where we end up... and that's OK (particularly if you don't resort to insults...)! Lead the lemmings! 

I digress.


----------



## DeathtoGnomes (Mar 26, 2017)

The highest results are PEAK results - the best his test system can do. I don't think that's manipulating anything. It may be a false positive, but it's not deliberate manipulation, IMO.

edit: There is a niche of consumers that actively looks to buy products that give the best results vs. average results, while marketing promotes the above-average product, since it sells better over a wide range of consumers. This review shows results that niche might just want to see, vs. an average-results review.


----------



## FireFox (Mar 26, 2017)

Sorry for my ignorance, but why are most of the games tested at 1920x1080?

Honestly, that's something i have never understood.


----------



## DeathtoGnomes (Mar 26, 2017)

Knoxx29 said:


> Sorry for my ignorance but why most of the Game are tested at 1920x1080?
> 
> Honestly, that's something i have never understood.


Because 4K is not widespread enough yet?


----------



## FireFox (Mar 26, 2017)

DeathtoGnomes said:


> because 4k is not wide spread enough yet?



It doesn't necessarily have to be 4K, it could be 2560 x 1440


----------



## Kanan (Mar 26, 2017)

Taking the highest FPS results for the end result as data is not "manipulating", it's just going the "optimistic way". 

3200 RAM is the minimum I'd go for, whether i7 or Ryzen; the price difference isn't high compared to 2400 or 2666, and lower than that is a no-go anyway. So I don't see why people here are still discussing it. It's 2 games or more where it's scaling well, and it will probably be even more games if you take into account all games ever released, and some games that will be released in the future as well.


----------



## DeathtoGnomes (Mar 27, 2017)

One thing that is missing is the MMO factor, and the ancient question....

"How many other characters does it take to lag someone out and make them crash."

In EQ1 days, raiding meant your system/card had to survive 50-60 people, while mostly on dial-up, and continue to handle whatever animated content there was. Which meant spending $$$$ on anything and everything to accomplish that. That's where I see a discussion like this somewhat valuable. Sure, it probably has a minimal impact, but there are those with too much money to burn who would pay for that 1% just to have it.


----------



## Artas1984 (Mar 27, 2017)

Knoxx29 said:


> Sorry for my ignorance but why most of the Game are tested at 1920x1080?
> 
> Honestly, that's something i have never understood.



In my case it has to do with limiting the GPU bottleneck. Even at 1080p the GTX980 Ti is somewhat bottlenecked in many cases. When I do a video card benchmark next time, I will use a resolution appropriate to the performance of the video cards, but when it comes to other hardware parts there is no need to exaggerate, especially since 1080p is the mainstream resolution worldwide.



Kanan said:


> Taking the highest FPS results for the end result as data is not "manipulating", it's just going the "optimistic way".



That is correct. The highest recorded FPS means that the system component *can* do that level of performance. Of course, this is taking into consideration that I am searching for the best minimal FPS result, but writing down the average and maximum FPS from that same line, and not searching for the highest average and maximum in other lines... But I think you already expected this, guys.


----------



## EarthDog (Mar 27, 2017)

What do the numbers look like averaged? Or take out highest and lowest and average those? I'm curious to see if that changes results any...


----------



## Artas1984 (Mar 27, 2017)

EarthDog said:


> What do the numbers look like averaged? Or take out highest and lowest and average those? I'm curious to see if that changes results any...



I will give an example with Metro Last Light Redux to show:

1 run: 97/103/107
2 run: 101/106/108
3 run: 102/107/111
4 run: 103/108/112
5 run: 101/107/113
6 run: 102/109/112
7 run: 103/107/111

So out of these runs I select Nr. 4, since the hardware component scored the best minimal FPS of 103. In some other runs I see a higher average and maximum FPS, but I stick to one line, the one with the best minimum, and the average and maximum follow from it... Usually the very first runs in most games are not that good; perhaps the needed info is only being written into the CPU cache, and therefore only the following runs are accelerated due to fast offload from the CPU cache (this is just a theory).
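The selection rule described above can be sketched as follows. The run values are the Metro Last Light Redux numbers from the post, recorded as (min, avg, max); the code itself is my illustration, not the author's tooling:

```python
# Sketch of the author's selection rule: keep the whole run (min/avg/max)
# whose minimum FPS is highest, so the reported avg and max come from the
# same line as the best minimum.

metro_runs = [
    (97, 103, 107),
    (101, 106, 108),
    (102, 107, 111),
    (103, 108, 112),
    (101, 107, 113),
    (102, 109, 112),
    (103, 107, 111),
]

# max() with a key compares runs by their minimum FPS (field 0) and keeps
# the first of any tied runs intact.
best_run = max(metro_runs, key=lambda run: run[0])
print(best_run)  # (103, 108, 112) -- run Nr. 4
```

Note that run 7 ties on the minimum (103) but loses on avg/max; the rule as stated picks run 4, which is what the post reports.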


----------



## sneekypeet (Mar 27, 2017)

Artas1984 said:


> 1 run: 97/103/107
> 2 run: 101/106/108
> 3 run: 102/107/111
> 4 run: 103/108/112
> ...



If you were to average them as most reviewers would do, you will find run three is closest to reality, not run #4. I would not pick and choose results randomly, and I think this is what @EarthDog  was getting at as well.


----------



## Artas1984 (Mar 27, 2017)

sneekypeet said:


> If you were to average them as most reviewers would do, you will find run three is closest to reality, not run #4. I would not pick and choose results randomly, and I think this is what @EarthDog  was getting at as well.



Yes, I agree that such calculating would get the results closer to the "standard feeling", but by calculating average results the differences between the tested components would be much smaller and much more random than what they could really produce at their peaks. This average method might work well for video cards, but when it comes to measuring hardware that does not impact FPS as much as video cards do, I choose the best peak results from both sides. Granted, if I were running synthetic and productivity tests, I would measure the average results as well, but not in this case.

Here is an example: the Core i3 6320 and Core i5 2500 produced similar average FPS in games, yet in minimal FPS the Core i5 2500 was seen to be stronger. So if I were to measure the average (sum the rows) results, they would look much closer than what they really are, but if I measure the best minimal FPS from both of them (and in that case the Core i3 6320 would produce smaller numbers), I really would show why the Core i5 2500 is the better processor.

I appreciate both of your inputs, but i will stick to my methodology for some specific hardware parts like this.


----------



## sneekypeet (Mar 27, 2017)

Artas1984 said:


> I appreciate both of your inputs, but i will stick to my methodology for some specific hardware parts like this.



No offense, but save yourself the time then. If you just want a look at close to peak results, run the test once to get rid of the bad mojo, then just record the second run as best. There is no point to running it that many times not to take the average imho.


----------



## EarthDog (Mar 27, 2017)

sneekypeet said:


> If you were to average them as most reviewers would do, you will find run three is closest to reality, not run #4. I would not pick and choose results randomly, and I think this is what @EarthDog  was getting at as well.





sneekypeet said:


> No offense, but save yourself the time then. If you just want a look at close to peak results, run the test once to get rid of the bad mojo, then just record the second run as best. There is no point to running it that many times not to take the average imho.


Exactly.

Manipulating really had much more of a negative connotation than I wanted to relay (sorry about that, Artas). Choosing results randomly, I think, is more accurate. Or at least I don't agree with how it's chosen, based on theories of HDDs and cache and... etc. Do people start games twice to 'shake off the cache'? I don't think so. Besides, this is what running it multiple times does already. All you are doing is taking the best result of all, which isn't as accurate a representation of the results one will see. They play a game and run it.

I don't understand why these data sets are any different from measuring graphics cards etc. I mean, it's an FPS value you are measuring to determine if faster memory helps and all. The 'fastest' results don't mean a thing in this case... you'd still want to average it out to get rid of the anomalies (the fastest result you insist on using).

...thought I digressed somewhere in here already, LOL!


----------



## carlos1172 (Feb 19, 2018)

Sorry to dig this up, but am I right in assuming that 2133 RAM with CL13 timings would not lag behind by that much? I.e. it would be faster than the 2133 CL15 RAM used in these tests.
I moved from a single stick of 8 GB DDR4 2400 CL17 to a 16 GB kit (4x4) of Corsair Vengeance LPX 2133 CL13, and it was a huge difference. Using an i5 8400 and a GTX 1070, I used to get an average of about 70-75 in BF1 64-player MP, with dips to 50. After the upgrade, it went to an average of 80-90, with no dips below 70.

I just wanted to say this because I often see reviewers say we should never use 2133 MHz RAM since it'll hold you back, but most don't reveal the timings of their RAM: only a "2133 vs 3200" etc. comparison. My 2400 CL17 would have a true latency of about 14 ns, quite like the 2133 CL15 RAM used in these tests. The 2133 CL13 has a true latency of about 12.16 ns, forming somewhat of a midpoint between the 2133 CL15 used here and the 3000 CL15 (with a true latency of about 10 ns).
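The "true latency" figures quoted here follow from first-word latency = CAS cycles divided by the clock (half the effective data rate), i.e. CL x 2000 / data rate, in nanoseconds. A minimal sketch; the computed values land close to, though not exactly on, the quoted numbers, so the poster's source may have rounded differently:

```python
# First-word latency in nanoseconds: CL cycles at the real clock, which is
# half the DDR data rate, hence the factor of 2000 (2 transfers/cycle, ns/us).

def true_latency_ns(data_rate_mts: int, cas: int) -> float:
    return cas * 2000 / data_rate_mts

# The kits discussed in this post:
for rate, cl in [(2400, 17), (2133, 15), (2133, 13), (3000, 15)]:
    print(f"DDR4-{rate} CL{cl}: {true_latency_ns(rate, cl):.2f} ns")
```

This gives roughly 14.2 ns, 14.1 ns, 12.2 ns and exactly 10.0 ns respectively, matching the post's ordering: 2133 CL13 sits between 2133 CL15 and 3000 CL15.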


----------



## eidairaman1 (Feb 19, 2018)

Artas1984 said:


> It's been proven already that higher frequency RAM increases performance in gaming just like in content creation, whether just by little or considerably, dependent on the game. My problem with all of those showcases - there are far too little games tested. So right now and here i will add more.
> 
> For this specific test i have assembled a new bench computer with my own Asus GeForce GTX980 Ti Strix borrowed for the test. You can see the specs of my new PC in the screenshot. Don't mind the Windows 7 "not genuine" notification - i only installed a fresh ISO copy of Windows for testing and i am not activating it just for that! That being said, i obviously use legal Windows 7 for my main Core i7 5775C PC.
> 
> ...



If your 2133 kit will run at 2400, 2667, 2933, 3000 or 3200, go to your RAM maker's site, get the RAM timings and voltages, and try to overclock it.


----------



## Melvis (Feb 19, 2018)

Awesome as always. Do you have, or plan to get, any Ryzen APUs, like the 2400G? I would love to see someone run some benchmarks with different-speed RAM over 3200 MHz, as I haven't found any that seem to benchmark with RAM much over that speed, sadly.


----------



## claylomax (Feb 19, 2018)

Check out this video:


----------



## Space Lynx (Feb 19, 2018)

From what I understand, it is not just RAM speed but CAS latency. I think TechSpot or TechRadar or Guru3D did a review on it, but CAS 14 DDR4 3200 is the sweet spot; after you go higher than that, the gains are very minimal.

I paid $352 for 32 GB of Dark Pro CAS 14-14-14-31 3200 (8x4). A little pricey, but worth the extra cost to me. It will last me until DDR5 and my 2020-2021 build.


----------



## Artas1984 (Feb 23, 2018)

lynx29 said:


> From what I understand, it is not just ram speed but CAS latency. .



RAM latency is more important than speed. Speed is defined by how fast you accomplish a task, not by how many tasks you can accomplish ''at all''. It's like comparing a Formula 1 car vs. any supercar. Who gives a shit if the supercar can reach a higher top speed if it will be 40 % slower on a race track??

That being said

I've tested DDR3 2400 MHz CL11-13-13 vs DDR3 1600 MHz CL7-7-7 and yes, the DDR3 1600 MHz CL7 was way faster.

This DDR4 test, however, was not about speed vs. latency, since the latency on my tested 3000 MHz DDR4 is nothing special.



carlos1172 said:


> I moved from a single stick of 8Gb DDR4 2400 CL17, to a 16 Gb Kit (4x4) of Corsair Vengeance LPX 2133 CL 13, and it was a huge difference. Using an i5 8400, and a GTX 1070, I used to get an Average of about 70-75 in BF1 64 Player MP, with dips to 50. After the upgrade, it went to an average of 80-90, with no dips below 70.



This is true, although the results are staggering. I would expect a 5 FPS drop at minimum, not 20... Testing in multiplayer is not great, though, since the scenes are not consistent...



Melvis said:


> Awesome as always, do you plan or do you have an Ryzen APU's? like the 2400G?



No no no. I had two Xeon Broadwell-EP workstation builds on my shoulders in the past 3 months; I'm not spending money to test some Ryzen CPUs...


----------



## adamiakadam00 (Feb 24, 2018)

It's not surprising to me. It only proves that modern Intel CPUs already have a wide enough memory bus.

And yes! Low latency gives you the same benefit as big numbers next to "MHz". The point is that Intel and AMD designers are not willing to give normal customers a triple- or quad-channel memory bus. That's the issue! It would be easier to run slower, cheaper memory in triple or quad channel than to manufacture/buy expensive high-end memory - to gain what?

For now, I'm on a Ryzen 3 2200G with Vega, and higher memory transfer rates do the job. My OCed Vega with OCed DDR4 modules is fast enough that its 3D performance reached GT 1030 level (64-bit memory bus).


----------



## Camaxide (Feb 7, 2019)

Excellent data, and it frankly does not matter whether you pick best, min, or avg data, as long as you do the same for both RAM kits. It's not the actual FPS that matters, but the difference. And someone said 3 FPS on top of 42 doesn't matter... that is 7+ %, which is huge from simply running a higher-frequency RAM. If you happen to play two or three titles a lot which love high-speed memory (like PUBG), then it will be well worth it. Btw, don't listen to the guys who say latency is the only important value: games benefit more from MHz as long as latency isn't horrible. Bandwidth is important when moving huge amounts of data. Lower CAS will always be better, but higher frequency allows more bandwidth. A 1 ms ping doesn't help if you are on dial-up bandwidth...
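The "7+ %" figure above is easy to verify; a trivial sketch (the `pct_gain` helper is mine, just illustrating the arithmetic):

```python
def pct_gain(base_fps: float, delta_fps: float) -> float:
    """Relative gain in percent from adding delta_fps on top of base_fps."""
    return 100 * delta_fps / base_fps

# 3 extra FPS on a 42 FPS baseline
print(f"{pct_gain(42, 3):.1f} %")  # ≈ 7.1 %
```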


----------



## Vayra86 (Feb 8, 2019)

Camaxide said:


> Excellent data, and it frankly does not matter whether you pick best, min, or avg data, as long as you do the same for both RAM kits. It's not the actual FPS that matters, but the difference. And someone said 3 FPS on top of 42 doesn't matter... that is 7+ %, which is huge from simply running a higher-frequency RAM. If you happen to play two or three titles a lot which love high-speed memory (like PUBG), then it will be well worth it. Btw, don't listen to the guys who say latency is the only important value: games benefit more from MHz as long as latency isn't horrible. Bandwidth is important when moving huge amounts of data. Lower CAS will always be better, but higher frequency allows more bandwidth. A 1 ms ping doesn't help if you are on dial-up bandwidth...



That only flies if you think that the difference extracted by faster RAM is actually linear. And it is not - at least far from always - and especially not if you're GPU bound, which most people are most of the time.


----------



## HazelSmith (Aug 12, 2019)

Melvis said:


> Awesome as always. Do you plan to do, or do you already have, benchmarks for the Ryzen APUs, like the 2400G? I would love to see someone benchmark different RAM speeds above 3200 MHz, as I haven't found anyone who seems to test with RAM much over that speed, sadly.



Anyone thinking of getting a Ryzen APU: populate two RAM slots, e.g. 4 GB + 4 GB or 8 GB + 8 GB, and preferably with dual-rank modules (if you can find them). Dual-channel mode, in short, gives a 20-30% increase, because on an APU the system RAM is also the video memory, so its bandwidth (which depends on rank and channel count) is IMPORTANT.
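The dual-channel point can be sketched with the usual theoretical-peak formula (the `peak_bandwidth_gbs` helper and the DDR4-2933 example are my assumptions; 2933 MT/s is the officially supported speed for the 2400G): peak bandwidth = channels × bus width in bytes × transfer rate.

```python
def peak_bandwidth_gbs(channels: int, mts: int, bus_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s: channels x bus width (bytes) x MT/s."""
    return channels * (bus_bits / 8) * mts / 1000

print(peak_bandwidth_gbs(1, 2933))  # single-channel DDR4-2933: ~23.5 GB/s
print(peak_bandwidth_gbs(2, 2933))  # dual-channel DDR4-2933:  ~46.9 GB/s
```

Doubling the channels doubles the theoretical peak, which is why populating both slots matters so much when the iGPU shares that bandwidth.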


----------



## Camaxide (Aug 12, 2019)

Vayra86 said:


> That only flies if you think that the difference extracted by faster RAM is actually linear. And it is not - at least far from always - and especially not if you're GPU bound, which most people are most of the time.


Linear or not, the difference will be there whether you use the average over 5 test runs or the best/worst run; that has nothing to do with whether RAM speed benefits are linear. And of course the CPU/RAM side matters less if you are GPU bound, though at that point it's not the CPU/RAM benchmark you're looking at anymore.


----------



## Hyderz (Aug 14, 2019)

Awesome benchmarks there. So would you say it's worth the money for the FPS gained?


----------

