
AMD Ryzen Memory Analysis: 20 Apps & 17 Games, up to 4K

I don't think that's accurate. The 2500K falls behind the i3s in a lot of tests, which (ignoring core clocks) are about as strong as the current Pentiums. It has to be overclocked to support the 1080 Ti.

No, it doesn't even have to be overclocked to support the 1080 Ti. In any modern game, using settings and resolutions that need a 1080 Ti, the 2500K will not be the bottleneck. The 1080 Ti will be; that is why we upgrade our GPUs far more often than our CPUs. I can't think of a single recently released game where this isn't true.

What he's asking for is a low-res test that is CPU-limited, because the CPU that gets the worse result will be the CPU that starts to bottleneck future GPUs first. It is a relevant test for people who plan to keep the CPU longer than the GPU (almost everyone).

In theory, yes. In real world use, almost never.
 
then in the future it will be 15% faster at higher resolutions with newer GPUs.
It won't... that is MY point (at least not any time soon enough for this to be a worry... four years, maybe). As you can see, it doesn't translate to higher resolutions... look at his results: 5.5% at 1080p down to 0.8% at 4K.

I personally have always said, for the Intel platform, to grab DDR4-3000 CL15... that really has been where the sweet spot was. Now it seems to have slid up a bit to 3000-3200... Much above that, prices skyrocket. When a 1480 Ti comes out in four years, you'll likely have more pressing issues to worry about than RAM. ;)
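For what it's worth, here is a quick back-of-the-envelope sketch (my own numbers, not from the review) of why CAS latency matters as much as raw frequency when hunting for that sweet spot:

```python
# First-word latency in ns is the CAS latency divided by the real clock,
# which is half the DDR transfer rate (DDR = double data rate).
def first_word_latency_ns(ddr_rate, cas_latency):
    real_clock_mhz = ddr_rate / 2              # e.g. DDR4-3000 -> 1500 MHz
    return cas_latency / real_clock_mhz * 1000  # cycles -> nanoseconds

for rate, cl in [(2400, 15), (3000, 15), (3200, 14), (3200, 16)]:
    print(f"DDR4-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.2f} ns")
# DDR4-2400 CL15 ~12.5 ns, DDR4-3000 CL15 ~10 ns, DDR4-3200 CL14 ~8.75 ns,
# DDR4-3200 CL16 ~10 ns - a cheaper 3000 CL15 kit matches a pricier
# 3200 CL16 kit on latency while giving up only a little bandwidth.
```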

I found his conclusion to be quite open-ended, actually... though here in the States (Newegg) I am finding a $20 difference between the same brand/model of RAM from 2400 to 3000. I couldn't find any 2133 kits (I was looking at G.Skill Trident Z).
No, it doesn't even have to be overclocked to support the 1080 Ti. In any modern game, using settings and resolutions that need a 1080 Ti, the 2500K will not be the bottleneck. The 1080 Ti will be; that is why we upgrade our GPUs far more often than our CPUs. I can't think of a single recently released game where this isn't true.
The 2500K can be a glass ceiling in some titles and settings with a high-end GPU. It doesn't happen in all titles, but it is beginning to show its age with high-end GPUs in games that lean on the CPU. You can see this in some TechSpot reviews:
http://www.techspot.com/review/1333-for-honor-benchmarks/page3.html
http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/page4.html

...and in some it doesn't...

http://www.techspot.com/review/1271-titanfall-2-pc-benchmarks/page3.html

...again, it depends...


But here we aren't testing 6-year-old CPUs; we're testing the fastest AMD has to offer (and mentally comparing it to the fastest Intel has to offer).
 
Nice review, but I would have liked to see more information about the RAM voltages, settings, and BCLK for 3200 MHz.
On their website https://www.gskill.com/en/press/vie...s-and-fortis-series-ddr4-memory-for-amd-ryzen
G.Skill seems to have configured these sticks to increase the bus speed to run at 3200, but I read that the Gigabyte Aorus 5 only has a multiplier you can change.

Could we have a CPU-Z screenshot to see how this frequency was achieved?
 
Nice review, but I would have liked to see more information about the RAM voltages, settings, and BCLK for 3200 MHz.
On their website https://www.gskill.com/en/press/vie...s-and-fortis-series-ddr4-memory-for-amd-ryzen
G.Skill seems to have configured these sticks to increase the bus speed to run at 3200, but I read that the Gigabyte Aorus 5 only has a multiplier you can change.

Could we have a CPU-Z screenshot to see how this frequency was achieved?
Just set the RAM to 3200, voltage, timings, done. Couldn't be easier. Gigabyte has no BCLK adjustments anyway.
 
In a RAM or CPU gaming benchmark, any time you introduce a GPU bottleneck, you no longer know what's really going on with the RAM/CPU. I feel like the gaming benchmarks don't really answer the overall controversial question of Ryzen's memory scaling capabilities. However, the non-gaming benchmarks, which are obviously not GPU-bound, don't show any significant scaling either and fully support the gaming results. Perhaps the RAM isn't being optimized by the BIOS for its particular speeds? I know that even with mature Z97 BIOSes, I can still tweak a few secondary or tertiary timings and blow the automatic timings out of the water when it comes to bandwidth performance.
 
Just set the RAM to 3200, voltage, timings, done. Couldn't be easier. Gigabyte has no BCLK adjustments anyway.

So the base clock stays at 100 MHz with the RAM at 3200 MHz? That's nice to hear. Seeing the screenshots G.Skill posted, I assumed they automatically increased it to reach that speed.
It would be interesting to see if the system stays stable with 3200 MHz RAM and a CPU overclock of ~4 GHz.
 
In a RAM or CPU gaming benchmark, any time you introduce a GPU bottleneck, you no longer know what's really going on with the RAM/CPU. I feel like the gaming benchmarks don't really answer the overall controversial question of Ryzen's memory scaling capabilities. However, the non-gaming benchmarks, which are obviously not GPU-bound, don't show any significant scaling either and fully support the gaming results. Perhaps the RAM isn't being optimized by the BIOS for its particular speeds? I know that even with mature Z97 BIOSes, I can still tweak a few secondary or tertiary timings and blow the automatic timings out of the water when it comes to bandwidth performance.
Different loads use different resources on the PC, bud.
 
So the base clock stays at 100 MHz with the RAM at 3200 MHz? That's nice to hear. Seeing the screenshots G.Skill posted, I assumed they automatically increased it to reach that speed.
It would be interesting to see if the system stays stable with 3200 MHz RAM and a CPU overclock of ~4 GHz.
Yes.

It is rock stable at 3200 CL14 using XFR, which automatically boosts the CPU beyond 4.0 GHz out of the box.
 
Thanks for the review.
Some have managed to get DDR4-3800+ working with Ryzen, and the results were quite good. Infinity Fabric runs at the IMC (memory controller) speed, so you are not just benefiting from faster RAM; the interconnect itself speeds up too.
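A minimal sketch of that relationship (my own illustration, assuming the Zen 1 data fabric clock tracks the memory clock, i.e. half the DDR4 transfer rate):

```python
# Infinity Fabric clock on Zen 1 is assumed to run in lockstep with the
# memory controller clock, which is half the DDR4 transfer rate.
def fabric_clock_mhz(ddr_rate):
    return ddr_rate / 2   # DDR4 is double data rate

for rate in (2133, 2666, 3200, 3800):
    print(f"DDR4-{rate}: fabric ~{fabric_clock_mhz(rate):.0f} MHz")
# Going from DDR4-2133 to DDR4-3800 raises the CCX-to-CCX interconnect
# from ~1066 MHz to ~1900 MHz, so faster RAM speeds up more than just
# memory bandwidth.
```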
 
Last edited:
Unfortunately, you have to keep going up in frequency to see the gains. 3600 looks nice in some videos.

If 4000 is achievable, then you're gonna see the dumb fabric working.
Calling Infinity Fabric dumb shows you know nothing about this unique, highly innovative technology. This fabric is far more than a high-speed interconnect, and nothing like HyperTransport.
 
Yes.

It is rock stable at 3200 CL14 using XFR, which automatically boosts the CPU beyond 4.0 GHz out of the box.

Nice. Do you think it would still be stable with 4 GHz on all eight cores at that RAM speed?
 
Nice. Do you think it would still be stable with 4 GHz on all eight cores at that RAM speed?
Yes.
But motherboards need to be fully optimized for stability and for faster speeds. This happens with every new generation, including Intel chips.
I would guesstimate that in about 2-3 months' time, Ryzen CPUs will grow in overall performance by about 15-20%. Real world. IMO.
 
Calling Infinity Fabric dumb shows you know nothing about this unique, highly innovative technology. This fabric is far more than a high-speed interconnect, and nothing like HyperTransport.

Yet it's still dumb, because it's slow. Lipstick on a pig. It needs twice the bus width, apparently, or just lower latency.

AMD cut corners to make the CPU. Hopefully, this is fixed in Zen 2.
 
It looks to me like, going from 2133 to 3200, 2400 gets you half of the available gain for probably the least amount of money and effort. [Except that the majority of 2400 kits are CL15 or CL16, so...]
 
Wiz, thanks for all the hard work and time you've put into testing all these configurations. Please ignore the ignorant people here who seem to miss the point of this article.

In the future, it would be interesting to see a Ryzen vs Intel comparison using the same or similar RAM speeds/timings for gaming as that's what most people use their rigs for.

Keep up the good work!
 
Yes.
But motherboards need to be fully optimized for stability and for faster speeds. This happens with every new generation, including Intel chips.
All true, but since this RAM kit was specifically advertised as built for Ryzen, I wanted to know what is so special about it and whether it guarantees the memory overclock while also allowing a decent OC on all CPU cores, without messing with the base clock.
 
Yet it's still dumb, because it's slow. Lipstick on a pig. It needs twice the bus width, apparently, or just lower latency.

AMD cut corners to make the CPU. Hopefully, this is fixed in Zen 2.
Well, I can't confirm whether they got lazy putting Infinity Fabric together or not, but I give Jim Keller a lot more credit. He is in fact one of the best CPU architects.
Hopefully they tighten up those IF latencies and push memory support up to DDR4-3800 or DDR4-4000+.

One thing I know is that Infinity Fabric, and Zen in general, still needs optimization.
Infinity Fabric. Great read:

AMD Infinity Fabric underpins everything they will make:
http://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/
 
Well, I can't confirm whether they got lazy putting Infinity Fabric together or not, but I give Jim Keller a lot more credit. He is in fact one of the best CPU architects.
Hopefully they tighten up those IF latencies and push memory support up to DDR4-3800 or DDR4-4000+.

One thing I know is that Infinity Fabric, and Zen in general, still needs optimization.
Infinity Fabric. Great read:

AMD Infinity Fabric underpins everything they will make:
http://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/

Not lazy, not in the least. They had to be economical (no thanks to Intel), and that makes performance suffer. Ideally you don't want a lot of inter-core communication, and moving primary threads across CCXs isn't helping, I'm sure.
 
In the meantime:


Really interesting findings
 
Had my first hands-on experience last night with a close friend's new Ryzen 1700X build. Overall I was impressed, but... I do have to say that at times it seemed sluggish; OK, maybe that's not the right word. Maybe it just seemed like it wasn't as fast as I thought it would be. His rig ran fine, no crashes or problems (he has a Fury Nano GPU). Maybe I can just chalk it up to the fact that it's still very early in its life cycle, and that I am used to my overclocked i7-7700K rig. We played about an hour or so of Ghost Recon Wildlands, and then some BF1.

But although I may have come away with slightly mixed feelings, I am glad to see AMD finally releasing something that is new, fast, and close to/equal to/better than Intel's offerings. I guess time will tell how it all plays out over the next year - and it looks like Intel is prepping a 10-core Skylake-X and a 6-core Coffee Lake before the end of the year, both of which look promising.

BTW, he did tell me that he now wishes he'd gone with the 1800X and is thinking about trading up. And he had some issues getting his G.Skill RAM to work, so he switched to Crucial, which now works fine. Very decent AMD rig though. Leaps and bounds ahead of their Piledriver procs.
 
@W1zzard @EarthDog

"The story repeats in our game-tests, where the most difference can be noted in the lowest resolution (1920 x 1080), all of 5.5 percent"

Again, as I've said before, it would be helpful if a low-res test could be added, e.g. 1024x768 or even less, so we can know the true fps performance of the processor. Testing only at 1080p and up, the CPU's performance is hidden by GPU limiting, which can kick in and out as different scenes are rendered, so you don't really know how fast it is.

Contrary to popular opinion, this really does matter. People don't change their CPUs as often as their graphics cards, so in the not-too-distant future we're gonna see 120 Hz 4K monitors along with graphics cards that can render 4K at well over 120 fps. The slower CPU will then start to bottleneck that GPU so that it perhaps can't deliver a solid 120+ fps in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or another brand that does deliver the required performance; instead, they're now stuck with the slower CPU because the review didn't test it properly. So again, yeah, it matters. Let's finally test this properly.
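To put the argument in a nutshell, here is a toy model with made-up numbers (mine, purely to illustrate the point):

```python
# The delivered frame rate is capped by whichever of the CPU or GPU is
# slower for a given scene; the numbers below are invented for illustration.
def delivered_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

# Today at 4K the GPU caps both CPUs, so the difference is hidden...
print(delivered_fps(cpu_fps=160, gpu_fps=60), delivered_fps(cpu_fps=130, gpu_fps=60))    # 60 60
# ...but with a future GPU pushing 150 fps at 4K, the slower CPU becomes the
# ceiling, and the gap a low-res test would have exposed finally shows up.
print(delivered_fps(cpu_fps=160, gpu_fps=150), delivered_fps(cpu_fps=130, gpu_fps=150))  # 150 130
```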

Good review otherwise and good to know that it's not worth spending loads on fast, expensive memory. I remember it being a similar situation with Sandy Bridge when I bought my 2700K all those years ago. Saved me a ton of money.

VERY informative video of Ryzen running code optimized for AMD, written back in 2003, YES, TWO THOUSAND THREE. The programmer followed the guidelines provided by AMD back when there were Athlon XPs and Athlon 64s/64 X2s. The results speak for themselves: Ryzen dominates (massively) with the optimized code vs. the i7-7700K, while only slightly lagging behind on the unoptimized code.

Remember, this wasn't optimized for Ryzen, so SMT/CCX are all irrelevant.

 
Thanks!

Wonder how this looks on the Intel side... :)

Any chance you do something similar, with like a 7700K?
 
BTW, he did tell me that he now wishes he'd gone with the 1800X and is thinking about trading up. And he had some issues getting his G.Skill RAM to work, so he switched to Crucial, which now works fine. Very decent AMD rig though. Leaps and bounds ahead of their Piledriver procs.

Do not trade in the Ryzen 7 1700X. You can reach 1800X speeds with no issues as soon as motherboard BIOSes are updated and Ryzen-optimized. Save the extra cash and buy something else for the rig.
 