
Intel i7-8700K Coffee Lake Memory Benchmark Analysis

I am confused. In this review you claim, "The minimum memory frequency we would recommend for a high-end Coffee Lake system is 2666 MHz." In the AMD Ryzen Memory Analysis you said: "We are happy to report that you can save some money by choosing a slower DDR4-2133 or DDR4-2666 memory [..] You lose practically no performance to slower memory on the Ryzen platform". And in the comment section here you claimed that Ryzen benefits from high-frequency RAM because of Infinity Fabric being tied to DRAM frequency at 0.5x. A lot of contradictions here.
 
I am confused. In this review you claim, "The minimum memory frequency we would recommend for a high-end Coffee Lake system is 2666 MHz." In the AMD Ryzen Memory Analysis you said: "We are happy to report that you can save some money by choosing a slower DDR4-2133 or DDR4-2666 memory [..] You lose practically no performance to slower memory on the Ryzen platform". And in the comment section here you claimed that Ryzen benefits from high-frequency RAM because of Infinity Fabric being tied to DRAM frequency at 0.5x. A lot of contradictions here.
Ryzen is weird in how it behaves depending on the app because of its cache organization. Memory speed does help if your app uses it in a specific way, but sometimes the ratio between the memory divider and the Infinity Fabric makes a memory increase hurt, too. Then there are instances of the memory controller just being funky... for example, my Ryzen CPUs can run the 3066 MHz memory divider without any problem, but struggle with 2933 MHz.
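If it helps, here is a minimal sketch (Python; the helper name is mine, and the 0.5x ratio is just what was stated above) of how a DDR4 data rate maps to the memory/fabric clock on first-gen Ryzen:

```python
# Sketch: first-gen Ryzen ties the Infinity Fabric clock to the memory
# clock (half the DDR4 data rate), so faster RAM also speeds up the fabric.
# The 0.5x ratio is taken from the comments above; helper name is made up.

def fabric_clock_mhz(ddr4_data_rate: int) -> float:
    """Fabric/memory clock in MHz for a given DDR4 data rate (MT/s)."""
    return ddr4_data_rate * 0.5

for rate in (2133, 2666, 2933, 3066, 3200):
    print(f"DDR4-{rate}: MEMCLK / fabric ~ {fabric_clock_mhz(rate):.0f} MHz")
```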

For guys like me that like to fiddle with memory, Ryzen is a lot of fun.
 
Anyone else find it ironic that the Intel platform that handles high-clocked memory well sees no real speed benefit, while Ryzen, with its RAM OC difficulties, sees a reasonable gain?
 
Anyone else find it ironic that the Intel platform that handles high-clocked memory well sees no real speed benefit, while Ryzen, with its RAM OC difficulties, sees a reasonable gain?
If you know how it works, it's not ironic at all. ;)
 
When deciding where we are going to go recommendation-wise for RAM, (CAS / Frequency) × 1000 = X ns is used as an initial performance ranking comparison ... then look to see how it pans out. The problem with most memory benchmark comparisons is that it is a ton of work.
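In code form, that ranking heuristic looks something like the sketch below (Python; the kits listed are made-up examples, not tested hardware). This assumes "Frequency" means the real memory clock, i.e. half the DDR4 data rate, so the result is the usual first-word latency in nanoseconds:

```python
# Sketch of the (CAS / frequency) * 1000 = ns ranking heuristic from the
# post above. "frequency" is the real memory clock in MHz (half the DDR4
# data rate); the kits below are invented examples for illustration only.

def first_word_latency_ns(cas: int, data_rate: int) -> float:
    return cas / (data_rate / 2) * 1000

kits = [("DDR4-3200 CL14", 14, 3200),
        ("DDR4-3600 CL16", 16, 3600),
        ("DDR4-4133 CL19", 19, 4133)]

# Rank kits from lowest (best) to highest first-word latency.
for name, cas, rate in sorted(kits, key=lambda k: first_word_latency_ns(k[1], k[2])):
    print(f"{name}: {first_word_latency_ns(cas, rate):.2f} ns")
```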

Most reviewers will pick 3-5 games with a moderate GFX card, measure FPS, and draw conclusions. But performance is always limited by the weakest link, and each game is limited by different constraints. Some games are GPU-bound, some CPU-bound, and some (at 4K and above) VRAM-bound. If it's one of those, memory will have no impact above a certain point. But when you widen the parameters, historically, things have changed:

a) Start looking at min FPS instead of average, and things can change (see the sketch below).
b) Add a 2nd GFX card, and things can change.

Unfortunately, widening the test parameters can triple the time investment ... fortunately, every once in a while someone will take a few games known to have been impacted by memory speed (i.e. F1) and undertake the effort. I haven't seen one for DDR4 yet, so if anyone sees one, please advise.
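As a concrete illustration of point (a), here is a minimal sketch (Python, with invented frame-time data) of how two runs with an identical average FPS can have very different 1% lows:

```python
# Sketch: average FPS can look identical while minimums differ. The frame
# times (ms) below are invented illustration data, not benchmark results.

def avg_fps(frame_times_ms):
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, len(worst) // 100)                 # worst 1% of frames
    return 1000 * n / sum(worst[:n])

smooth = [10.0] * 100              # steady ~100 fps
spiky  = [9.0] * 95 + [29.0] * 5   # same average, but with big stutters

for name, run in (("smooth", smooth), ("spiky", spiky)):
    print(f"{name}: avg {avg_fps(run):.0f} fps, "
          f"1% low {one_percent_low_fps(run):.0f} fps")
```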
 
An 8700K clocked at 5.2 GHz stable OC for most? I highly doubt that... some don't get past 4.8 GHz if you check the web.
Mine reaches 4.9 GHz with a fairly low voltage of 1.26 V.

We should re-do all tests now with the Meltdown and Spectre patches, imo; the whole situation could change a lot, especially timings-wise...
 
I don't imagine it has anything to do with it... The issue isn't in RAM.
 
It will be interesting to see the results and if it changes any. I'll bet not much... ;)
 
If a CPU has to constantly flush its cache, it also has to access RAM more often... got it?
Not constantly, because PCID support was added back in Westmere. PCID is why the impact isn't 30% across the board as originally predicted, and ended up being more like 5% maximum for modern Intel CPUs, except in IO-heavy workloads.
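For anyone curious whether their own CPU exposes it, here is a quick sketch (Python, Linux-only, assuming the usual /proc/cpuinfo layout) that checks for the pcid and invpcid flags:

```python
# Sketch: check for the PCID/INVPCID CPU flags on Linux. These reduce the
# TLB-flush cost of the Meltdown (KPTI) patches discussed above.

def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for flag in ("pcid", "invpcid"):
    print(f"{flag}: {'present' if flag in flags else 'missing'}")
```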
 
Thanks for this article. Seriously, these types of reference articles should be pinned to the front page for current-gen CPUs. Well, I guess it kind of is; it's the 4th entry in the popular sidebar.
 
Try a quick test on Total War: Attila at 720p low to ensure the GPU will not bottleneck.
4133 17-17-17-37-350-2T vs 3600 14-14-14-34-278-1T, both with extremely tight sub-timings, and performance is no different (both 3-run avg = 265 fps), but 4133 gets more bandwidth in AIDA64.
Also tried 4133 17-17-17-37-350-2T with auto subs vs tightened subs, and the performance difference is around 6% (250 fps vs 265 fps).
Tested on 8700K @ 5 GHz / cache 4.8 GHz / 980 Ti OC
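For what it's worth, those two configurations land within roughly 6% of each other in first-word latency, which is consistent with the identical FPS. A quick check (a rough Python sketch, using the (CAS / memory clock) × 1000 heuristic from earlier in the thread):

```python
# Sketch: first-word latency of the two tested configs, using the
# (CAS / memory clock) * 1000 ns formula mentioned earlier in the thread.

def latency_ns(cas, data_rate):
    return cas / (data_rate / 2) * 1000

print(f"4133 CL17: {latency_ns(17, 4133):.2f} ns")  # ~8.23 ns
print(f"3600 CL14: {latency_ns(14, 3600):.2f} ns")  # ~7.78 ns
```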
 
I observed that 2x16GB 3200 CL14 was better than 2x8GB 3600/4000 CL15/16. Guessing 16GB sticks keep more pages open or something like that.
 
I observed that 2x16GB 3200 CL14 was better than 2x8GB 3600/4000 CL15/16. Guessing 16GB sticks keep more pages open or something like that.
It was probably due to single vs. dual rank.
 
Awesome write-up. So many benches! As usual, unless you are benchmarking, higher-speed memory just doesn't make sense. Went with a G.Skill Trident 3000 MHz CAS 15 kit myself this time around.
 
Is something wrong with these benchmark results? How can the FPS be so low and vastly different in the gaming section compared to tests done here (particularly for The Witcher 3)? It seems like there's a setting (maybe CPU MCE disabled) coming into play? https://www.eurogamer.net/articles/digitalfoundry-2017-intel-coffee-lake-core-i7-8700k-review_1

Appreciate the benchmarks, but if another site has benched on video & shows almost a 30 fps difference from 2133 > 3000, with much larger jumps between speeds, I think it should be addressed.
There are also a few YouTube vids showing massive performance differences going from 2133 > 3000 MHz, while these benchmarks show negligible gains at best.

Could follow-up gaming testing be done? It wouldn't need as many speeds, just the common 2133/2400/3000 etc., with CPU-intensive titles such as Battlefield 1, V, Overwatch, Dota 2, CS:GO, PUBG, etc. It seems to make a pretty big difference in those titles.

Cheers!
 
Is something wrong with these benchmark results? How can the FPS be so low and vastly different in the gaming section compared to tests done here (particularly for The Witcher 3)? It seems like there's a setting (maybe CPU MCE disabled) coming into play? https://www.eurogamer.net/articles/digitalfoundry-2017-intel-coffee-lake-core-i7-8700k-review_1

Appreciate the benchmarks, but if another site has benched on video & shows almost a 30 fps difference from 2133 > 3000, with much larger jumps between speeds, I think it should be addressed. Cheers!
Different test scene, different graphics card, and they have HairWorks turned off for The Witcher.
 
Different test scene, different graphics card, and they have HairWorks turned off for The Witcher.

Still, the titles used aren't that popular, and the fact that
  • "Multi-core optimizations, overclocking, and Turbo tweaks were disabled" - most of which are *enabled* by default, particularly Turbo -
is definitely skewing the results & could easily lead a reader to believe RAM speed makes a small difference. Meanwhile, some of the most popular titles are showing significant performance gaps; even with the 7700K it made a big diff. https://www.eurogamer.net/articles/digitalfoundry-2017-intel-kaby-lake-core-i7-7700k-review (make sure to scroll down to the memory tests). You can even see them running through a scene, so how on earth did techspot find an area where the difference between 2133 & 4000 was <2 fps across all speeds, with no linear gains? To test that many speeds and not be comprehensive doesn't make sense.

It made a difference with a 6700K / GTX 1080 in Battlefield 1, for another example, yet ZERO scaling or consistent results in TPU's BF1 tests.

It even makes a difference with a (weaker) 6700K CPU & GTX 1080, but not an 8700K & GTX 1080?
https://www.gamersnexus.net/game-bench/2677-bf1-ram-benchmark-frequency-8gb-enough/page-2

But not with an 8600K & 1080 Ti?

Literally every test I've seen with 6700K > 7700K > 8700K, or even i5-model CPUs, & GTX 1060 or higher GPUs has shown linear performance scaling with RAM frequency; these are the only results I've found that show almost zero, in similar titles (BF1, The Witcher 3).

Even with weaker configurations (which, you'd logically think, have less memory bandwidth requirement), still linear scaling.

And since I doubt an 8700K + GTX 1080 are immune to the negative effects, further investigation needs to be done; only the Hitman results looked close to normal (though they went overboard with memory speeds). BF1 & Witcher performance *should* scale with RAM speed in a linear fashion, just like other similar open-world or large-map titles.

The only logical explanation is that the setup / lack of turbo / something else is affecting the memory results by a large amount, which people should know, or they'll be misled into thinking 2133 vs 3000 = 2 fps.

Regarding the gaming benchmarks in particular, I think turbo off might be similar to doing CPU benchmarks @ 4K ultra, which masks the difference completely & is why the 4000 MHz RAM is all over the place. If the CPU were running at its stock turbo potential, I believe it's enough to saturate DDR4 bandwidth when paired with a GTX 1080 in those titles, & we would see a much bigger difference & linear scaling as bandwidth improves with each RAM speed increment.
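For rough context on that bandwidth point, theoretical dual-channel DDR4 bandwidth scales linearly with data rate. A back-of-the-envelope sketch (Python; assumes two 64-bit channels, and real-world copy numbers land somewhat lower than these peaks):

```python
# Sketch: theoretical peak bandwidth of dual-channel DDR4 (two 64-bit,
# i.e. 8-byte, channels). Real-world measured bandwidth will be lower.

def peak_gbps(data_rate_mts, channels=2, bytes_per_channel=8):
    return data_rate_mts * channels * bytes_per_channel / 1000  # GB/s

for rate in (2133, 2666, 3000, 4000):
    print(f"DDR4-{rate}: ~{peak_gbps(rate):.1f} GB/s theoretical peak")
```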
 