
Intel i7-8700K Coffee Lake Memory Benchmark Analysis

Great review, thanks.
Btw, I think the difference would be more visible in scenarios where the CPU is slower, like with the i5-8400.
 
So, things still haven't changed. 2666 MHz is still the sweet spot. Anything above gives gains, but they are so small they aren't worth the ridiculous price jumps and stability issues, whereas below 2666 MHz the drops in performance are slightly more dramatic.
 
It's certainly good to see that if I go Coffee Lake, I won't need expensive RAM to get the most out of it.

Cheap CPUs with expensive RAM don't compute.
 
I would say 3200 MHz is now the sweet spot, especially if you have tight timings.
It always has been, at least back before RAM prices became nutty.

One thing I wonder about with RAM reviews is the many arcane settings that motherboards typically set automatically at POST. Unless the reviewer locks all of those down and adjusts them manually, they can change from boot to boot, and boards will sometimes override the reviewer's settings at POST if they compromise stability too much. That makes RAM benchmarking quite challenging and inaccurate for most reviews, which leave the handling of many timings to the motherboard ("training" at boot). If the sticks have XMP 2.0 profiles (or whatever) that work, that helps, but they're not always reliable, especially with Ryzen.
 
The results seem pretty straightforward:
1) avoid 2133
2) the avg diff between 2666 and 3466 is like 2-3%, save money and get the 2666 CL14
3) buy Trident Z RGB anyway
 
Hello,
I'm planning my next rig (Z370 and 8700K).
Should I buy 3600 MHz CL17 or 2666 MHz CL13? 16 GB of either in dual channel, and both are around 10 ns latency.
It will mostly be for gaming at 1440p / 144 fps.
Thanks for the help.
 
Is 2% more fps on average worth the cost delta? Does the game you play most receive a notable boost?
 
Depends on the price of these options. If the price is similar, then I think the 3600 CL17 should be a tiny bit faster, less than 1%.
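For reference, here's a rough sketch of the latency math (nothing exotic, just CL divided by the I/O clock, which is half the MT/s rating):

# Quick first-word latency check for the two kits in question.
# DDR transfers twice per clock, so the I/O clock is half the MT/s rating.
def first_word_latency_ns(transfer_rate_mts: int, cas_latency: int) -> float:
    io_clock_mhz = transfer_rate_mts / 2        # DDR4-3600 -> 1800 MHz
    return cas_latency * 1000 / io_clock_mhz    # CL cycles -> nanoseconds

print(first_word_latency_ns(3600, 17))  # ~9.44 ns
print(first_word_latency_ns(2666, 13))  # ~9.75 ns

So access latency is basically a wash between the two, and the 3600 kit brings roughly 35% more raw bandwidth on top, which is presumably where that tiny edge comes from.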
 

The 3600 is $20 less expensive.
The games I play are not in the list, soooo
 
3600 then, or something slightly cheaper if you can use the leftover money to upgrade something else.
 
Seems like low-latency 3000 MHz memory would be the best bet, both for peak performance and the variety of choices available. I'm curious what raw bandwidth Coffee Lake gets at each speed setting.
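The theoretical ceiling is easy enough to work out (measured read/copy numbers from something like AIDA64 will land below this, so treat it as an upper bound rather than what the review recorded):

# Theoretical peak bandwidth for dual-channel DDR4 at the speeds that come
# up in this thread: 64-bit (8-byte) bus per channel, two channels on Z370.
BUS_WIDTH_BYTES = 8
CHANNELS = 2

for mts in (2133, 2666, 3000, 3200, 3466, 3600, 3733, 3866, 4000):
    peak_gbs = mts * BUS_WIDTH_BYTES * CHANNELS / 1000
    print(f"DDR4-{mts}: {peak_gbs:.1f} GB/s theoretical peak")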


Clearly it doesn't make that much difference. If it did, the multi-core encoding benchmarks would have shown massive boosts from higher-speed memory. Same with the compression and rendering benches. The performance improvements were in line with what Coffee Lake gains, suggesting that Infinity Fabric was not a bottleneck there.

I'm pretty sure AMD would have pushed the multiplier up if there was that much performance left on the floor.


Based on? Games have not been memory bandwidth limited for a very, very long time. DDR4 finally pushed the last holdouts, games like Supreme Commander, to their limits, and it is highly unlikely that going from 4.3 to 5 GHz would increase memory bandwidth demand enough to make the jump from 3000 to 3600 MHz memory noticeable. Going from 2666 to 3000 already shows practically no performance gain.

Game engines are simply not built to handle that much power. To benefit, you would need a game engine built to use that much bandwidth effectively, and such an engine would not work on consoles. Outside of perhaps Cloud Imperium, I can't imagine anybody modding a PC engine that much.

Er, there is definitely a difference between 4.3 GHz x6 and 5.2 GHz x6, provided there is a proper uncore overclock as well.

This is based on a lot of experience overclocking X58 systems.
 
Hi,

I'm currently having trouble deciding which RAM speed to choose for my new setup with the i7-8700K. The top performer of the test was the 3866, but before I saw the test I had already ordered 4000 MHz RAM. My first question was whether it's possible to simply downclock the 4000 RAM and run tighter timings, compared to a similar kit. I guess I'm just getting the 3733 RAM; the price difference from 3200 up to 3866 isn't much, while the difference between 3866 and 4000 is already bigger... What do you think?
 
I don't think these new posters even read the attached article.
 
That's a big help indeed, smartass. I'm just interested in the best configuration first; afterwards I can still change it based on price/performance. The biggest question for me was whether I should return the RAM I've already bought, keep it and downclock it for possibly better timings, or just go for 3866 if I do care about that 1 or 2%.
 
The results aren't pointing you in a particular direction already?

Honestly, the choice is yours, not ours. You have the data, know the price difference and level of effort, choose your poison. What is worth it to YOU?

Sure, it's possible to slow them down and tighten timings... ;)
 
Prices on DDR4 suck anyway. It's the worst time to build a system: you're already spending at least double on DDR4 memory, money that could've gone to a better CPU or GPU.
 
I don't think these new posters even read the attached article.
Just new o_O
Yeah, I'm not upgrading till DDR4 prices come down to a reasonable level. As it is, I overspent a bit on a laptop upgrade; I don't want to feed the DRAM gold rush as well :wtf:
 
I really appreciate that you're using 720p resolution for the CPU game tests to reduce the GPU bottleneck. So I would like to ask for a little more. Since game performance depends on both the CPU and the GPU, I suggest adding another metric alongside each fps figure you provide: the percentage of the test run during which GPU usage is above 97%, which describes how much the run is bottlenecked by the GPU. I think this metric would help both CPU and GPU tests. Cheers.
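Something like that is straightforward to compute if GPU usage is logged during each run. A minimal sketch, assuming the monitoring tool exports a CSV with a 'gpu_usage' column of percentages (the column name and file name are just placeholders):

import csv

def gpu_bound_percentage(log_path: str, threshold: float = 97.0) -> float:
    # Share of logged samples where GPU usage exceeds the threshold,
    # i.e. how much of the run was GPU-limited.
    samples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            samples.append(float(row["gpu_usage"]))
    if not samples:
        return 0.0
    bound = sum(1 for s in samples if s > threshold)
    return 100.0 * bound / len(samples)

# e.g. gpu_bound_percentage("run1_720p.csv") -> 12.4 would mean the run
# was GPU-limited for roughly 12% of the samples.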
 
With Coffee Lake, if you go in-depth on RAM speeds and latency for gaming, the sweet spot is somewhere around 3200 MHz / CL16 or, if available, lower latency at the same price. Go higher in speed and the avg and min fps results start going all over the place (from much higher to even reduced performance in some titles). Go lower in speed / higher in latency, and you see a steady loss in min fps as you go down the ladder.

This review doesn't touch on that, but when you consider that for most rigs upgraded from Haswell onwards the gain from a new CPU in gaming is 10-20% at best, a 3-5% performance gain from the RAM is in fact quite significant, even from a performance-per-dollar perspective relative to the old rig. More importantly, you get that performance where it matters most: in the minimums.

As for performance per dollar, we've been in a situation for a few years now where platform upgrades bring minimal boosts for high additional cost. It's the nature of the beast. The top-end performance is always the most expensive.
 
First up, nice review. It does a good job of showing the effect of timings vs frequency, even if it doesn't show the full potential that is there with higher-speed kits.

Not saying you're wrong, just that the review is meant to cover a cross section of popular games, not just the ones that are really CPU bound. I think that's fair and also shows which ones are and aren't.
I'm all for showing some games where RAM makes next to no difference, but to be fair a good RAM review really needs some CPU-bound open-world games like PUBG/Arma/Fallout, not just games that show little to no difference; otherwise it's a bit one-sided.

It's also good to see a few tests showing the effect of a change in CPU clock speed, so people know whether to allocate budget to better CPU cooling for a few hundred MHz higher OC, or to put it towards faster RAM.
[Chart: Arma III, CPU vs RAM scaling]

Likewise, the gains from improved GPU clock speed in games that are more GPU-bound than CPU/RAM-bound.
[Chart: Rainbow Six Siege, CPU vs RAM scaling]


As for the comments about the 8400 vs the 8700K, or stock vs a high OC for testing: a slower CPU like the 8400 is more likely to show a bigger difference in a game that is partially GPU-limited, but an overclocked 8700K will show a bigger difference in a CPU-bound game, because the faster the CPU, the larger the percentage of its time it spends waiting on data from RAM that isn't already preloaded into its cache.
 
This is the thing many people reading reviews and benchmarks miss - gains are usually highly situational. If you want to build a rig that does it all, you need to push all of its buttons.
 
I installed the Dishonored 2 demo, and at 1280x720 ultra detail the first two levels were capped at 120 fps, or GPU-limited when it dipped below that, with a GTX 1070 at 2088 MHz core / 4700 MHz memory and a 6700K @ 4.6 GHz.
In the third level I found a CPU-limited section during the boat ride to shore.
I cut the start and end of the ride, where it hit the 120 fps cap, as including them artificially halved the % increase between the RAM speeds.
Avg FPS:
Overclocked 3733 16-16-16 = 92.8 FPS (5.69% increase over 3200C14, 22.7% over 2133C15)
3200C14 XMP = 87.8 FPS (16.1% increase over 2133)
SPD 2133C15 = 75.6 FPS
[Chart: Dishonored 2 average FPS vs RAM speed]
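For what it's worth, those percentages line up if you re-derive them from the posted averages (just a quick sanity check, nothing more):

# Re-deriving the percentage gains from the posted Dishonored 2 averages.
avg_fps = {"SPD 2133C15": 75.6, "XMP 3200C14": 87.8, "OC 3733 16-16-16": 92.8}

def gain(new: float, old: float) -> float:
    return 100.0 * (new / old - 1)

print(gain(avg_fps["OC 3733 16-16-16"], avg_fps["XMP 3200C14"]))   # ~5.69%
print(gain(avg_fps["OC 3733 16-16-16"], avg_fps["SPD 2133C15"]))   # ~22.75%
print(gain(avg_fps["XMP 3200C14"], avg_fps["SPD 2133C15"]))        # ~16.14%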
 