
AMD Ryzen Memory Analysis: 20 Apps & 17 Games, up to 4K

Min fps figures are still missing; why is TPU still lagging behind on this?

Also, I don't concur with the conclusion; 5% is a lot coming from RAM alone. And that's just average FPS, and only up to 3200 MHz RAM. As I see it, the price/performance sweet spot is now DDR4-2933.
 
The 2500K can be a glass ceiling in some titles and settings with a high-end GPU. It doesn't happen in all titles, but it is beginning to show its age with high-end GPUs in games that lean on the CPU. You can see these results if you look at some TechSpot reviews.
http://www.techspot.com/review/1333-for-honor-benchmarks/page3.html
http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/page4.html

...and in some it doesn't...

http://www.techspot.com/review/1271-titanfall-2-pc-benchmarks/page3.html

...again, it depends...


But here we aren't testing six-year-old CPUs; we're testing the fastest AMD has to offer (and mentally comparing it to the fastest Intel has to offer).

You really just proved my point. Nothing you posted shows the 2500K being the bottleneck. No one is buying a Titan X(P) and running at 1080p, but that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario. In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits. And this will continue to be the case: the only scenarios that show a significant difference between CPUs are unrealistic ones.
 
Multiple other sites are reporting that while faster RAM doesn't increase average frame rates much, if at all, it does increase minimum frame rates and decrease frame times, thus leading to a smoother gaming experience.

It's a shame that minimum frame rates and frame times weren't tested or reported. If faster RAM alleviates bottlenecks in the most strenuous sections, where the system is being taxed the most, then it might very well be worthwhile to spend more on faster RAM. Such a bottleneck would only become more apparent with upgrades to faster GPUs.
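For anyone curious how those metrics relate, here's a minimal sketch (Python, with made-up frame times, not data from this review) of how average fps, 1% low fps and a high-percentile frame time all fall out of the same frame-time log, which is why sites that capture frame times can report all three:

```python
# Minimal sketch: deriving average fps, 1% low fps, and a ~99th-percentile
# frame time from one capture. The frame_times_ms list is a made-up example;
# a real capture would come from a PresentMon/OCAT/FRAPS-style log.
frame_times_ms = [16.7] * 950 + [25.0] * 40 + [40.0] * 10  # mostly smooth, a few spikes

def fps_metrics(frame_times_ms):
    n = len(frame_times_ms)
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = n / total_s                               # average fps over the run

    worst_first = sorted(frame_times_ms, reverse=True)
    worst_count = max(1, n // 100)
    worst_1pct = worst_first[:worst_count]              # slowest 1% of frames
    low_1pct_fps = 1000.0 / (sum(worst_1pct) / len(worst_1pct))

    p99_frame_time = worst_first[worst_count - 1]       # roughly the 99th percentile
    return avg_fps, low_1pct_fps, p99_frame_time

avg, low1, p99 = fps_metrics(frame_times_ms)
print(f"avg: {avg:.1f} fps, 1% low: {low1:.1f} fps, ~99th pct frame time: {p99:.1f} ms")
```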
 
There sure is a big difference with v2.20. I can see continued optimizations coming. NICE!

I believe Ryzen would benefit from DDR4-3600 and above.
 
Funny... I've made that exact point about those exact tests. ;)

But it can be the glass ceiling with an even lesser card... just keep going back in time there. :)
 
A lot of the games I run in Linux have single-threaded bottlenecks, but I think that's more because of how OpenGL works. There are situations where my 3820 will be "under-utilized", but really that's only because some games, when running under OpenGL, can't use more than a thread and a half's worth of resources. I find this to be true for Civ 5 and 6 along with Cities: Skylines. Other games with more emphasis on concurrency, or that have implemented Vulkan, run a lot better on my machine than their OpenGL equivalents. If I ran Windows, a lot of these same problems wouldn't exist, but that's probably more due to the 3D API than anything else.

My simple point is that there are cases where single-threaded performance on SB and SB-E no longer fits the bill without a significant overclock. I'm running my 3820 at 4.4 GHz just to keep the games mentioned above running a little more smoothly, because a lot of the time the rendering thread/process is the one eating a full core, at least for me.
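If anyone wants to check this on their own box, a rough sketch of the idea (assuming the third-party psutil package, nothing from the review itself) is to watch per-core load instead of the overall figure:

```python
# Rough sketch: spot a single-threaded bottleneck by sampling per-core load
# while the game runs. Overall CPU% can look low even when one core is pegged.
import psutil  # third-party: pip install psutil

for _ in range(10):                                   # sample for ~10 seconds
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    overall = sum(per_core) / len(per_core)
    busiest = max(per_core)
    print(f"overall {overall:5.1f}%  busiest core {busiest:5.1f}%  all: {per_core}")
    if busiest > 90 and overall < 50:
        print("  -> looks like a single-threaded (e.g. render/driver thread) bottleneck")
```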
 
Awesome review guys, love the amount of numbers. You really put the hours in there.

Tiny notes though:
- NVIDIA video cards on Ryzen don't perform that well. For fun, try the same with the RX 480 (or the RX 580 when it's out).
- You may want to add CPU/GPU load values to the charts (see the sketch below for one way to log them).
- You may want to add min/max next to the averages in the charts.

That said, I really like the numbers: for high-res gaming, not much benefit; for CPU-intensive tasks, yes, there are benefits.
I wonder when compilers will incorporate dedicated Ryzen optimisations (if ever :-) ).

Keep up the good work with these reviews, me ugha like!
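For the load-logging suggestion above, something as simple as this would do; a rough sketch assuming an NVIDIA card with nvidia-smi on the PATH and the third-party psutil package (an AMD card would need a different query tool):

```python
# Rough sketch: log CPU and GPU utilization once per second during a benchmark
# run, so load figures could sit next to the fps charts.
import subprocess
import time
import psutil  # pip install psutil

def gpu_util_percent():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.strip().splitlines()[0])   # first GPU only

with open("load_log.csv", "w") as log:
    log.write("time_s,cpu_pct,gpu_pct\n")
    start = time.time()
    for _ in range(60):                        # ~1 minute of samples
        cpu = psutil.cpu_percent(interval=1.0)
        gpu = gpu_util_percent()
        log.write(f"{time.time() - start:.1f},{cpu},{gpu}\n")
```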
 
Dual rx480's
 
That too, or wait for Vega :p. But the reason I want to see the CPU/GPU loads is that the NVIDIA driver doesn't take full advantage of the Ryzen multithreaded beast; you will probably see that the CPU is not maxed out and the GPU is not maxed out,
as seen in a previous video linked in this thread.

And perhaps the NVIDIA driver is the reason we're not seeing such an improvement in gaming.

But hey, more numbers would let us check those guesses.
 
I've seen the AdoredTV video too. Despite some controversy in the past, I think he deserves some credit on this one.
 
If it isn't a glass ceiling with the highest-end cards available, then it isn't one with a lesser card.

Yes, I already addressed games like this. If the CPU is going to hold back the GPU in these kinds of games that already exist, then we are already seeing it in the 1080p tests. There is no reason to go any lower.

Again, my argument isn't that we don't need a lower-resolution test to give us an idea of how the CPUs perform in a situation where the GPU isn't the bottleneck. My argument is that 1080p is low enough when the tests are done with a high-end card. It already gives the information you want. If there is going to be a CPU bottleneck in the future, it will show in the 1080p test. Going lower just wastes time.
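To make the point concrete, here's a toy model with purely illustrative numbers (not measurements from any review): treat the reported fps as roughly the lower of what the CPU and the GPU can each deliver. Once the GPU's ceiling at a given resolution sits above the CPU's, dropping the resolution further tells you nothing new:

```python
# Toy model, illustrative numbers only: fps is roughly capped by whichever of
# the CPU or GPU is slower at a given resolution.
CPU_CAP_FPS = 140          # hypothetical: what the CPU can feed, resolution-independent
GPU_CAP_FPS = {            # hypothetical GPU limits per resolution
    "4K": 60, "1440p": 110, "1080p": 170, "720p": 260,
}

for res, gpu_cap in GPU_CAP_FPS.items():
    fps = min(CPU_CAP_FPS, gpu_cap)
    limiter = "CPU" if fps == CPU_CAP_FPS else "GPU"
    print(f"{res:>6}: {fps:3d} fps ({limiter}-limited)")
# 1080p and 720p both print 140 fps (CPU-limited): the 1080p run already
# exposes the CPU ceiling, so testing lower resolutions adds no information.
```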
 
Sure, I wasn't disputing that. I was more trying to get at the fact that you can still have a bottleneck at 1080p with a lesser GPU on older hardware. I'm only driving a 390 with my 3820; it's not like I have a Titan X(P) or a 1080... but to your point: if I did, I probably wouldn't be playing games at 1080p.
 
Dual rx480's
That just introduces a whole set of other issues that could affect performance.

Support for multi-GPUs is waning, and was never that great to begin with. And you certainly wouldn't be able to see if faster memory had an impact on frame times with the frame time stutter introduced by a multi-GPU setup.
 
@W1zzard @EarthDog
Again, as I've said before, it would be helpful if a low-res test could be added, e.g. 1024x768 or even less, so we can know the true fps performance of the processor. Testing only at 1080p and up, it's being hidden by GPU limiting, which can kick in and out as different scenes are rendered, so you don't really know how fast it is.

Contrary to popular opinion this really does matter. People don't change their CPUs as often as their graphics cards, so in the not-too-distant future we're gonna see 120Hz 4K monitors along with graphics cards that can render at 4K at well over 120fps. The slower CPU will then start to bottleneck that GPU so that it perhaps can't render a solid 120fps+ in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or another brand that does deliver the required performance; instead they're now stuck with the slower CPU because the review didn't test it properly. So again, yeah, it matters. Let's finally test this properly.
Why stop at 1024x768? Why not test 800x600? Or 640x480? Or 320x240? Or better yet, why not test on a single pixel, thus completely eliminating the GPU as a bottleneck?

I understand your argument, but you're now effectively creating a synthetic benchmark not representative of real world performance. Where do you draw the line?

The CPU needs to work together with the GPU, so you can't take the GPU out of the picture entirely.

1080p is the lowest resolution that anyone with a modern gaming desktop will be playing at, so it makes sense to set that as the lowest resolution. A $500 CPU paired with a $700 GPU is already a ridiculous config for 1080p gaming.

And while it would be nice to predict performance five years out, we simply can't. Technological improvements will either continue to stagnate - with the GPU continuing to be the bottleneck - in which case CPU/memory performance won't matter, or there will be such a profound change that any older benchmarks will be meaningless and anyone not running a 12 core CPU with 40GB quint-channel RAM will be left in the dust.
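For what it's worth, the 120 Hz / 4K scenario quoted above boils down to a simple frame-time budget; a quick back-of-the-envelope check using hypothetical CPU-limited results:

```python
# Back-of-the-envelope check for the quoted 4K/120 Hz scenario: to hold a
# given refresh rate, the CPU-side frame time must stay under the frame budget.
target_hz = 120
budget_ms = 1000.0 / target_hz            # ~8.33 ms per frame
print(f"Frame budget at {target_hz} Hz: {budget_ms:.2f} ms")

# Hypothetical CPU-limited results from a low-res (CPU-bound) test: a CPU that
# tops out at 150 fps spends ~6.7 ms per frame and clears the budget; one that
# tops out at 100 fps (10 ms per frame) cannot, no matter how fast the GPU is.
for cpu_bound_fps in (150, 100):
    cpu_ms = 1000.0 / cpu_bound_fps
    verdict = "fits" if cpu_ms <= budget_ms else "misses"
    print(f"CPU-limited {cpu_bound_fps} fps -> {cpu_ms:.1f} ms/frame, {verdict} the budget")
```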
 
Good timing on this.

I just updated the UEFI on my ASRock X370 Gaming Pro from 1.60 to 1.93D. It cleans up the interface a lot and adds profile saving and such; however, here's the bad news:

Per AIDA64, with a ~3900 MHz OC on the CPU and RAM (3733 MHz Team TF running at 3200 MHz), bandwidth in 1.93D is down to ~44,000 MB/s versus ~51,000 MB/s in 1.60.

@W1zzard, if you have a chance it would be interesting to see if different Gigabyte UEFI versions also produce different results.

I wonder what ASRock changed, as I would like to get the speed back. If it's a compatibility thing, it would be great if they had an option to enable the "faster mode" again.
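AIDA64 is the proper tool for this, but as a rough cross-check between BIOS versions a simple copy-bandwidth test works too; a minimal sketch assuming NumPy is installed (absolute numbers will come out lower than AIDA64's read test, but a relative drop like ~51,000 vs ~44,000 MB/s should still be visible):

```python
# Rough memory-bandwidth sanity check (not AIDA64's method): time a large
# array copy and count both the bytes read and the bytes written.
import time
import numpy as np  # third-party: pip install numpy

N = 256 * 1024 * 1024 // 8          # 256 MiB of float64, well beyond L3 cache
src = np.ones(N)
dst = np.empty_like(src)

best = float("inf")
for _ in range(10):                 # keep the best of several runs
    t0 = time.perf_counter()
    np.copyto(dst, src)
    best = min(best, time.perf_counter() - t0)

bytes_moved = 2 * src.nbytes        # read src + write dst
print(f"copy bandwidth: {bytes_moved / best / 1e9:.1f} GB/s")
```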
 
@W1zzard:

So all it means is that BIOS updates & higher-frequency RAM get a "measly" 5.5% improvement in 1080p gaming & a 13-15% improvement in CPU-related tasks, but otherwise it's still largely unbeatable in productivity tasks? Hope that by the time August comes a-knockin' the BIOSes & Infinity Fabric have matured; I wanna see if the R7 1800X & GA-AX370-Gaming-K7 support 3600 MHz RAM without any quirks. Nice read regardless, loved the DOOM 1080p, 1440p & 4K results, cheers. :toast:
 
So what does that mean, that NVIDIA sucks at DX12 so much that it has basically affected all the Ryzen benchmarks so far?

Or that it is coded in a way that runs atrociously badly on the Ryzen uarch. I'm not saying this was done on purpose, but Intel has dominated for so many years that it kinda made sense for them to get the best and the most out of Intel CPUs. Hopefully when RX Vega comes out these theories can be put to the test.
 
My understanding is that there is something in the NVIDIA driver/architecture which still runs in a single thread, or at least relies a lot on single-threaded performance, while on AMD GPUs the load just spreads across multiple cores very nicely, and that's why Ryzen looks so much better.
 
That could be the case too. However, some of the fps differences being reported are way beyond what one would expect... That is why I posted that hypothesis. If one is testing a CPU's gaming performance, the AdoredTV tests just show that we must look at both sides of the fence. AdoredTV has posted some questionable things in the past, but IMO he touched on a very relevant point this time around.

Edit: Check this out; it seems that the AMD DX12 driver loves the cores, while NVIDIA's DX12 driver seems to be lagging behind... At least this seems to be the case with "The Division" too.

NVIDIA DX11: 161.7 fps CPU: 40% GPU: 80%
AMD DX11: 128.4 fps CPU: 33% GPU: 71%

NVIDIA DX12: 143.9 fps CPU: 42% GPU: 67%
AMD DX12: 189.8 fps CPU: 49% GPU: 86%
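Working the deltas out of those figures (quick arithmetic on the numbers posted above):

```python
# Quick arithmetic on the figures posted above ("The Division" results):
# DX11 -> DX12 change per vendor, plus the DX12 gap between vendors.
results = {
    ("NVIDIA", "DX11"): 161.7, ("NVIDIA", "DX12"): 143.9,
    ("AMD",    "DX11"): 128.4, ("AMD",    "DX12"): 189.8,
}

for vendor in ("NVIDIA", "AMD"):
    dx11, dx12 = results[(vendor, "DX11")], results[(vendor, "DX12")]
    print(f"{vendor}: DX11 -> DX12 = {100 * (dx12 / dx11 - 1):+.1f}%")

gap = results[("AMD", "DX12")] / results[("NVIDIA", "DX12")] - 1
print(f"AMD vs NVIDIA under DX12: {100 * gap:+.1f}%")
# NVIDIA loses ~11% going to DX12 while AMD gains ~48%, ending up ~32% ahead,
# which is what the "AMD DX12 driver loves the cores" reading is based on.
```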

 
Interesting ... maybe w1zzard could look into it :)
 
Thanks for this article, W1zzard. I actually had that motherboard, the Gigabyte Aorus Gaming 5, and returned it to the store because my G.Skill Trident Z DDR4-3200 CL14 32GB kit wouldn't break 2933 MHz, no matter what I did. It was using Samsung chips, but it would NOT clock the RAM any higher than 2933, which annoyed the crap out of me. I'm frantically looking for any post/article/review that shows this RAM hitting 3200 MHz or higher on any motherboard. I'm thinking I'll grab one of the four motherboards that currently have an external BCLK generator: the Asus Crosshair VI, Gigabyte Aorus Gaming 7, ASRock Taichi, or ASRock Fatal1ty Professional. Of these, the Gigabyte Gaming 7 is the only one with a dual BIOS (like the Gaming 5 has), but the Asus Crosshair will take my Corsair water cooler because it accepts socket AM3+ coolers.

Decisions, decisions!
 