
AMD Ryzen Memory Analysis: 20 Apps & 17 Games, up to 4K

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.03/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
Min FPS figures are still missing; why is TPU still lagging behind on this?

Also, I don't concur with the conclusion: 5% is a lot coming from RAM alone, and that's just average FPS and only up to 3200 MHz RAM. As I see it, the price/performance sweet spot is now DDR4-2933.
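
A quick way to sanity-check a sweet-spot claim like that is performance gained per extra dollar. A minimal sketch; the kit prices and scaling factors below are hypothetical placeholders, not figures from the review:

```python
# Hypothetical DDR4 kit prices (USD) and relative gaming performance
# vs. DDR4-2133. Both columns are illustrative placeholders.
kits = {
    "DDR4-2133": (100, 1.000),
    "DDR4-2400": (105, 1.015),
    "DDR4-2666": (110, 1.025),
    "DDR4-2933": (120, 1.040),
    "DDR4-3200": (145, 1.050),
}

base_price, base_perf = kits["DDR4-2133"]
for name, (price, perf) in kits.items():
    extra = price - base_price                 # dollars over the base kit
    gain = (perf - base_perf) * 100            # performance points gained
    if extra > 0:
        print(f"{name}: +{gain:.1f}% for ${extra} -> {gain / extra:.3f} points/$")
```

With placeholder numbers shaped like these, the marginal value drops off sharply past 2933, which is the shape of the argument.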
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
The 2500K can be a glass ceiling in some titles and settings with a high-end GPU. It doesn't happen in all titles, but it is beginning to show its age with high-end GPUs in games that lean on the CPU. You can see these results in some TechSpot reviews.
http://www.techspot.com/review/1333-for-honor-benchmarks/page3.html
http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/page4.html

...and in some it doesn't...

http://www.techspot.com/review/1271-titanfall-2-pc-benchmarks/page3.html

...again, it depends...


But here we aren't testing six-year-old CPUs; we're testing the fastest AMD has to offer (and mentally comparing it to the fastest Intel has to offer).

You really just proved my point. Nothing you posted shows the 2500K being the bottleneck. No one is buying a Titan X(P) and running at 1080p, yet that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario. In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits. And this will continue to be the case: the only scenarios that show a significant difference between CPUs are unrealistic ones.
 

Timobkg

New Member
Joined
Apr 1, 2017
Messages
3 (0.00/day)
Multiple other sites are reporting that while faster RAM doesn't increase average frame rates much, if at all, it does increase minimum frame rates and decrease frame times, thus leading to a smoother gaming experience.

It's a shame that minimum frame rates and frame times weren't tested or reported. If faster RAM alleviates bottlenecks in the most strenuous sections, where the system is being taxed the most, then it might very well be worthwhile to spend more on faster RAM. Such a bottleneck would only become more apparent with upgrades to faster GPUs.
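
For reference, all of these metrics fall out of the same raw capture, which makes the omission more frustrating. A minimal sketch of how average FPS, minimum FPS, and a percentile frame time could be derived from per-frame render times (sample values made up):

```python
# Derive average FPS, minimum FPS, and a percentile frame time from per-frame
# render times in milliseconds. Sample values are made up.
frame_times_ms = [16.6, 16.8, 16.5, 17.0, 33.9, 16.7, 16.4, 25.2, 16.6, 16.5]

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000 / max(frame_times_ms)              # slowest frame = minimum FPS

# 99th-percentile frame time via the nearest-rank method.
sorted_times = sorted(frame_times_ms)
rank = min(len(sorted_times) - 1, int(0.99 * len(sorted_times)))
p99_ms = sorted_times[rank]

print(f"avg: {avg_fps:.1f} fps, min: {min_fps:.1f} fps, 99th pct: {p99_ms:.1f} ms")
```

On this made-up capture the average is ~52 FPS while the minimum is ~30 FPS, which is exactly the gap averages alone can hide.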
 
Joined
Mar 23, 2005
Messages
4,092 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 144Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
There sure is a big difference with v2.20. I can see continued optimizations coming. NICE.

Multiple other sites are reporting that while faster RAM doesn't increase average frame rates much, if at all, it does increase minimum frame rates and decrease frame times, thus leading to a smoother gaming experience.

It's a shame that minimum frame rates and frame times weren't tested or reported. If faster RAM alleviates bottlenecks in the most strenuous sections, where the system is being taxed the most, then it might very well be worthwhile to spend more on faster RAM. Such a bottleneck would only become more apparent with upgrades to faster GPUs.
I believe Ryzen would benefit from DDR4-3600 and above.
 
Joined
Dec 31, 2009
Messages
19,372 (3.54/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
You really just proved my point. Nothing you posted shows the 2500K being the bottleneck. No one is buying a Titan X(P) and running at 1080p, yet that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario. In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits. And this will continue to be the case: the only scenarios that show a significant difference between CPUs are unrealistic ones.
Funny... I've made that exact point about those exact tests... ;)

But it can be the glass ceiling with an even lesser card... just keep going back in time. :)
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
You really just proved my point. Nothing you posted shows the 2500K being the bottleneck. No one is buying a Titan X(P) and running at 1080p, yet that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario. In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits. And this will continue to be the case: the only scenarios that show a significant difference between CPUs are unrealistic ones.
A lot of the games I run on Linux have single-threaded bottlenecks, but I think that's more because of how OpenGL works. There are situations where my 3820 will be "under-utilized", but really that's only because some games, when running under OpenGL, can't use more than a thread and a half worth of resources. I find this to be true for Civ 5 and 6 along with Cities: Skylines. Other games with more emphasis on concurrency, or that have implemented Vulkan, run a lot better on my machine than their OpenGL equivalents. If I ran Windows, a lot of these same problems wouldn't exist, but that's probably more due to the 3D API than anything else.

My simple point is that there are cases where single-threaded performance on SB and SB-E no longer fits the bill without a significant overclock. I'm running my 3820 at 4.4 GHz just to keep the games mentioned above running a little more smoothly, because a lot of the time the rendering thread/process is the one eating a full core, at least for me.
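
For anyone who wants to check for this pattern on their own box, sampling per-core load while the game runs is enough to spot it. A rough sketch assuming psutil is installed; the 90%/30% thresholds are arbitrary:

```python
# Flag a likely single-threaded bottleneck: one core pegged while the rest
# are mostly idle. Needs `pip install psutil`; the thresholds are arbitrary.
import psutil

for i in range(10):                                  # ~10 one-second samples
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    hottest = max(per_core)
    others = sorted(per_core)[:-1]
    rest_avg = sum(others) / max(len(others), 1)
    if hottest > 90 and rest_avg < 30:
        print(f"sample {i}: one core at {hottest:.0f}%, rest avg {rest_avg:.0f}% "
              "-> looks render-thread bound")
```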
 
Joined
Feb 16, 2014
Messages
39 (0.01/day)
Awesome review guys, love the amount of numbers. You really put the hours in there.

Tiny notes though:
- Nvidia video cards on Ryzen don't perform that well; for fun, try the same with the RX 480 (or the RX 580 when it's coming).
- You may want to add CPU/GPU load values to the charts.
- You may want to add min/max next to average in the charts.

That said, I really like the numbers: for gaming at high res, not so much benefit; for CPU-intensive tasks, yes, benefits.
I wonder when compilers will incorporate dedicated Ryzen optimisations (if ever :) ).

Keep up the good work with these reviews, me ugha like!
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
System Name No name / Purple Haze
Processor Phenom II 1100T @ 3.8Ghz / Pentium 4 3.4 EE Gallatin @ 3.825Ghz
Motherboard MSI 970 Gaming/ Abit IC7-MAX3
Cooling CM Hyper 212X / Scythe Andy Samurai Master (CPU) - Modded Ati Silencer 5 rev. 2 (GPU)
Memory 8GB GEIL GB38GB2133C10ADC + 8GB G.Skill F3-14900CL9-4GBXL / 2x1GB Crucial Ballistix Tracer PC4000
Video Card(s) Asus R9 Fury X Strix (4096 SP's/1050 Mhz)/ PowerColor X850XT PE @ (600/1230) AGP + (HD3850 AGP)
Storage Samsung 250 GB / WD Caviar 160GB
Display(s) Benq XL2411T
Audio Device(s) motherboard / Creative Sound Blaster X-Fi XtremeGamer Fatal1ty Pro + Front panel
Power Supply Tagan BZ 900W / Corsair HX620w
Mouse Zowie AM
Keyboard Qpad MK-50
Software Windows 7 Pro 64Bit / Windows XP
Benchmark Scores 64CU Fury: http://www.3dmark.com/fs/11269229 / X850XT PE http://www.3dmark.com/3dm05/5532432
Awesome review guys, love the amount of numbers. You really put the hours in there.

Tiny notes though:
- Nvidia video cards on Ryzen don't perform that well; for fun, try the same with the RX 480 (or the RX 580 when it's coming).
- You may want to add CPU/GPU load values to the charts.
- You may want to add min/max next to average in the charts.

That said, I really like the numbers: for gaming at high res, not so much benefit; for CPU-intensive tasks, yes, benefits.
I wonder when compilers will incorporate dedicated Ryzen optimisations (if ever :) ).

Keep up the good work with these reviews, me ugha like!

Dual RX 480s
 
Joined
Feb 16, 2014
Messages
39 (0.01/day)
That too, or wait for Vega :p. The reason I want to see the CPU/GPU loads is that the Nvidia driver doesn't take full advantage of the Ryzen multithreaded beast; you will probably see that neither the CPU nor the GPU is maxed out, as seen in a video linked earlier in this thread.

And perhaps the Nvidia driver is the reason gaming isn't showing such an improvement.

But hey, more numbers would let us check those guesses.
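
For anyone wanting to gather those load numbers themselves, a rough sketch is below. It assumes an Nvidia card with nvidia-smi on the PATH and psutil installed (both assumptions on my part); an AMD card would need a different query tool:

```python
# Log overall CPU load and GPU utilization once per second during a benchmark
# run. Assumes an Nvidia card with nvidia-smi on the PATH and psutil installed.
import subprocess
import time
import psutil

def gpu_util_percent():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

with open("load_log.csv", "w") as log:
    log.write("time_s,cpu_pct,gpu_pct\n")
    start = time.time()
    for _ in range(60):                              # one minute of samples
        cpu = psutil.cpu_percent(interval=1.0)       # blocks ~1 s per sample
        log.write(f"{time.time() - start:.0f},{cpu:.0f},{gpu_util_percent()}\n")
```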
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
That too, or wait for Vega :p. The reason I want to see the CPU/GPU loads is that the Nvidia driver doesn't take full advantage of the Ryzen multithreaded beast; you will probably see that neither the CPU nor the GPU is maxed out, as seen in a video linked earlier in this thread.

And perhaps the Nvidia driver is the reason gaming isn't showing such an improvement.

But hey, more numbers would let us check those guesses.

I've seen the AdoredTV video too. Despite some controversy in the past, I think he deserves some credit on this one.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Funny... I've made that exact point about those exact tests... ;)

But it can be the glass ceiling with an even lesser card... just keep going back in time. :)

If it isn't a glass ceiling with the highest-end cards available, then it isn't one with a lesser card.

A lot of the games I run on Linux have single-threaded bottlenecks, but I think that's more because of how OpenGL works. There are situations where my 3820 will be "under-utilized", but really that's only because some games, when running under OpenGL, can't use more than a thread and a half worth of resources. I find this to be true for Civ 5 and 6 along with Cities: Skylines. Other games with more emphasis on concurrency, or that have implemented Vulkan, run a lot better on my machine than their OpenGL equivalents. If I ran Windows, a lot of these same problems wouldn't exist, but that's probably more due to the 3D API than anything else.

My simple point is that there are cases where single-threaded performance on SB and SB-E no longer fits the bill without a significant overclock. I'm running my 3820 at 4.4 GHz just to keep the games mentioned above running a little more smoothly, because a lot of the time the rendering thread/process is the one eating a full core, at least for me.

Yes, I already addressed games like this. If the CPU is going to hold back the GPU in these types of games that already exist, then we are already seeing it in the 1080p test. There is no reason to go any lower.

Again, my argument isn't that we don't need a lower-resolution test to give us an idea of how the CPUs perform in a situation where the GPU isn't the bottleneck. My argument is that 1080p is low enough when the tests are done with a high-end card. It already gives the information you want. If there is going to be a CPU bottleneck in the future, it will show in the 1080p test. Going lower just wastes time.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
If it isn't a glass ceiling with the highest-end cards available, then it isn't one with a lesser card.

Yes, I already addressed games like this. If the CPU is going to hold back the GPU in these types of games that already exist, then we are already seeing it in the 1080p test. There is no reason to go any lower.

Again, my argument isn't that we don't need a lower-resolution test to give us an idea of how the CPUs perform in a situation where the GPU isn't the bottleneck. My argument is that 1080p is low enough when the tests are done with a high-end card. It already gives the information you want. If there is going to be a CPU bottleneck in the future, it will show in the 1080p test. Going lower just wastes time.
Sure, I wasn't disputing that. I was more trying to get at the fact that you can still have a bottleneck at 1080p with a lesser GPU on older hardware. I'm only driving a 390 with my 3820; it's not like I have a Titan X(P) or a 1080... but on to your point: if I did, I probably wouldn't be playing games at 1080p.
 

Timobkg

New Member
Joined
Apr 1, 2017
Messages
3 (0.00/day)
Dual RX 480s
That just introduces a whole set of other issues that could affect performance.

Support for multi-GPU setups is waning, and it was never that great to begin with. And you certainly wouldn't be able to tell whether faster memory had an impact on frame times with the frame-time stutter introduced by a multi-GPU setup.
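
That stutter is measurable, for what it's worth: alternate-frame rendering tends to produce large frame-to-frame swings even when the average FPS looks healthy. A minimal sketch with made-up sample values that mimic AFR pacing:

```python
# Quantify stutter as the average frame-to-frame pacing swing; AFR multi-GPU
# setups are notorious for this pattern. Sample values are made up.
frame_times_ms = [16.6, 8.1, 25.3, 8.4, 24.9, 8.2, 25.1, 8.3, 24.8, 8.5]

deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

print(f"average: {avg_fps:.1f} fps")                          # looks healthy
print(f"mean frame-to-frame swing: {sum(deltas) / len(deltas):.1f} ms")  # but paces badly
```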
 

Timobkg

New Member
Joined
Apr 1, 2017
Messages
3 (0.00/day)
@W1zzard @EarthDog
Again, as I've said before, it would be helpful if a low-res test could be added, e.g. 1024x768 or even lower, so we can know the true FPS performance of the processor. Testing only at 1080p and up, the CPU's speed is hidden by GPU limiting, which can kick in and out as different scenes are rendered, so you don't really know how fast it is.

Contrary to popular opinion, this really does matter. People don't change their CPUs as often as their graphics cards, so in the not-too-distant future we're gonna see 120 Hz 4K monitors along with graphics cards that can render 4K at well over 120 FPS. The slower CPU will then start to bottleneck that GPU so that it perhaps can't sustain a solid 120+ FPS in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or brand that does deliver the required performance, but they are now stuck with the slower CPU because the review didn't test it properly. So again, yeah, it matters. Let's finally test this properly.
Why stop at 1024x768? Why not test 800x600? Or 640x480? Or 320x240? Or better yet, why not test on a single pixel and completely eliminate the GPU as a bottleneck?

I understand your argument, but you're now effectively creating a synthetic benchmark not representative of real world performance. Where do you draw the line?

The CPU needs to work together with the GPU, so you can't take the GPU out of the picture entirely.

1080p is the lowest resolution that anyone with a modern gaming desktop will be playing at, so it makes sense to set that as the lowest resolution. A $500 CPU paired with a $700 GPU is already a ridiculous config for 1080p gaming.

And while it would be nice to predict performance five years out, we simply can't. Technological improvements will either continue to stagnate - with the GPU remaining the bottleneck - in which case CPU/memory performance won't matter, or there will be such a profound change that any older benchmarks will be meaningless and anyone not running a 12-core CPU with 40 GB of quint-channel RAM will be left in the dust.
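
For what it's worth, the model both sides of this argument rely on fits in a few lines: observed FPS is roughly the lower of what the CPU can feed and what the GPU can draw, and only the GPU side scales with resolution. A toy sketch with made-up ceilings:

```python
# Toy bottleneck model: observed FPS ~ min(CPU ceiling, GPU ceiling), where
# only the GPU ceiling scales with pixel count. All numbers are made up.
cpu_fps_ceiling = 140                   # what the CPU can feed, any resolution
gpu_fps_at_1080p = 200                  # what the GPU can draw at 1080p

resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}

for name, pixels in resolutions.items():
    gpu_fps = gpu_fps_at_1080p * (1920 * 1080) / pixels  # naive pixel scaling
    observed = min(cpu_fps_ceiling, gpu_fps)
    limiter = "CPU" if gpu_fps > cpu_fps_ceiling else "GPU"
    print(f"{name}: {observed:.0f} fps ({limiter}-bound)")
```

By this reading, once the 1080p run already exposes the CPU ceiling, dropping the resolution further can't change the answer; that's the crux of the disagreement above.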
 
Joined
Jan 17, 2006
Messages
932 (0.13/day)
Location
Ireland
System Name "Run of the mill" (except GPU)
Processor R9 3900X
Motherboard ASRock X470 Taichi Ultimate
Cooling Cryorig (not recommended)
Memory 32GB (2 x 16GB) Team 3200 MT/s, CL14
Video Card(s) Radeon RX6900XT
Storage Samsung 970 Evo plus 1TB NVMe
Display(s) Samsung Q95T
Case Define R5
Audio Device(s) On board
Power Supply Seasonic Prime 1000W
Mouse Roccat Leadr
Keyboard K95 RGB
Software Windows 11 Pro x64, insider preview dev channel
Benchmark Scores #1 worldwide on 3D Mark 99, back in the (P133) days. :)
Good timing on this.

I just updated the UEFI on my ASRock X370 Gaming Pro from 1.60 to 1.93D. It cleans up the interface a lot and adds profile saving and such; however, here's the bad news:

Per AIDA64, with a ~3.9 GHz OC on the CPU, RAM (Team TF 3733 MHz running @ 3200 MHz) bandwidth in 1.93D is down to ~44,000 MB/s versus ~51,000 MB/s in 1.60.

@W1zzard, if you have a chance it would be interesting to see if different Gigabyte UEFI versions also produce different results.

I wonder what ASRock changed, as I would like to get the speed back. If it's a compatibility thing, it would be great if they had an option to enable the "faster mode" again.
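
For context, that regression works out to roughly 14%:

```python
# Bandwidth regression between the two UEFI versions, from the AIDA64 numbers above.
old_bw, new_bw = 51_000, 44_000   # MB/s
print(f"{(old_bw - new_bw) / old_bw:.1%} loss")   # -> 13.7% loss
```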
 
Joined
May 12, 2009
Messages
973 (0.17/day)
System Name YautjaLord
Processor Ryzen 9 5900x @ 3700MHz
Motherboard Gigabyte X570 Aorus Xtreme rev. 1.1
Cooling EK-XE360mm front/SE360mm top/3xVardar 120mm top/3xbe quiet! Light Wings 120mm High Speed PWM/etc....
Memory HyperX Predator 4x8GB 4000MHz @ 4000MHz
Video Card(s) 1xGigabyte RTX 3080 Master 10GB rev. 3.0 (watercooled)
Storage Samsung 980 Pro 1TB (system)/Samsung 960 Pro M.2 NVMe (system) |Samsung T7 Shield 1TB USB Type-C
Display(s) LG 27GN950-B 4k IPS 1ms 144Hz HDR
Case be quiet! Dark Base Pro 900 rev. 2 Black/Orange
Audio Device(s) Integrated
Power Supply Corsair HX1200
Mouse Dragonwar ELE-G4.1 laser gaming mouse 9500dpi
Keyboard Corsair K60 Pro Low Profile
Software Windows 10 Pro 64-bit 21H2
@W1zzard:

So all it means is that BIOS updates & higher-frequency RAM get you a "measly" 5.5% improvement in 1080p gaming & a 13-15% improvement in CPU-bound tasks, but otherwise it's still largely unbeatable in productivity tasks? Hope by the time August comes-a-knockin' the BIOSes & Infinity Fabric will have matured; I wanna see if the R7 1800X & GA-AX370-Gaming-K7 support 3600 MHz RAM without any quirks n sh1t. Nice read regardless; loved the DOOM 1080p, 1440p & 4K results, cheers. :toast:
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
So what does that mean, that Nvidia sucks at DX12 so much that it has basically affected all the Ryzen benchmarks so far?

Or that it is coded in a way that runs atrociously badly on the Ryzen uarch. I'm not saying this was done on purpose, but Intel has dominated for so many years that it made sense for them to get the best and most out of Intel CPUs. Hopefully when RX Vega comes out these theories can be put to the test.
 
Joined
Nov 18, 2009
Messages
86 (0.02/day)
Location
Alberta, Canada
System Name Fluffy
Processor Ryzen 7 2700X
Motherboard Asus Crosshair VII Hero Wi-Fi
Cooling Wraith Spire
Memory 32Gb's Gskill Trident Z DDR4 3200 CAS 14
Video Card(s) Asus Strix Vega 64 OC
Storage Crucial BX100 500GB SSD/Seagate External USB 1TB
Display(s) Samsung CHG70 32" 144hz HDR
Case Phanteks ENTHOO EVOLV X
Audio Device(s) SupremeFX S1220 / Tiamat 7.1
Power Supply SeaSonic PRIME Ultra Titanium 750 W
Mouse Steel Series Rival 600
Keyboard Razer Black Widow Ultimate
Software Open Office, Win 10 Pro
Joined
Apr 18, 2015
Messages
234 (0.07/day)
Or that it is coded in a way that runs atrociously badly on the Ryzen uarch. I'm not saying this was done on purpose, but Intel has dominated for so many years that it made sense for them to get the best and most out of Intel CPUs. Hopefully when RX Vega comes out these theories can be put to the test.

My understanding is that there is something in the Nvidia driver/architecture that still runs in a single thread, or at least relies heavily on single-thread performance, while on AMD GPUs the load is spread across multiple cores very nicely, and that's why Ryzen looks so much better.
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
My understanding is that there is something in the Nvidia driver/architecture that still runs in a single thread, or at least relies heavily on single-thread performance, while on AMD GPUs the load is spread across multiple cores very nicely, and that's why Ryzen looks so much better.

It could be the case too. However, some of the FPS differences being reported are way beyond what one would expect... That is why I posted that hypothesis. If one is testing a CPU's gaming performance, the AdoredTV tests show that we must look at both sides of the fence. AdoredTV has posted some questionable things in the past, but IMO he touched on a very relevant point this time around.

Edit: Check this out. It seems that the AMD DX12 driver loves the cores, while Nvidia's DX12 driver seems to be lagging behind... At least this seems to be the case with "The Division" too.

NVIDIA DX11: 161.7 fps CPU: 40% GPU: 80%
AMD DX11: 128.4 fps CPU: 33% GPU: 71%

NVIDIA DX12: 143.9 fps CPU: 42% GPU: 67%
AMD DX12: 189.8 fps CPU: 49% GPU: 86%
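
Worked through, those figures make the point plainly: moving from DX11 to DX12, one vendor loses performance and the other gains a lot. A quick sketch using only the values quoted above:

```python
# DX11 -> DX12 scaling per vendor, computed from the FPS figures quoted above.
dx11 = {"NVIDIA": 161.7, "AMD": 128.4}
dx12 = {"NVIDIA": 143.9, "AMD": 189.8}

for vendor in dx11:
    change = (dx12[vendor] - dx11[vendor]) / dx11[vendor]
    print(f"{vendor}: {change:+.1%} going from DX11 to DX12")
```

That comes out to roughly -11% for Nvidia and +48% for AMD on these numbers.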

 
Joined
Apr 18, 2015
Messages
234 (0.07/day)
It could be the case too. However, some of the FPS differences being reported are way beyond what one would expect... That is why I posted that hypothesis. If one is testing a CPU's gaming performance, the AdoredTV tests show that we must look at both sides of the fence. AdoredTV has posted some questionable things in the past, but IMO he touched on a very relevant point this time around.

Edit: Check this out. It seems that the AMD DX12 driver loves the cores, while Nvidia's DX12 driver seems to be lagging behind... At least this seems to be the case with "The Division" too.

NVIDIA DX11: 161.7 fps CPU: 40% GPU: 80%
AMD DX11: 128.4 fps CPU: 33% GPU: 71%

NVIDIA DX12: 143.9 fps CPU: 42% GPU: 67%
AMD DX12: 189.8 fps CPU: 49% GPU: 86%


Interesting... maybe W1zzard could look into it :)
 
Joined
Sep 29, 2011
Messages
217 (0.04/day)
Location
Ottawa, Canada
System Name Current Rig
Processor Intel 12700K@5.1GHz
Motherboard MSI Pro Z790-P
Cooling Arctic Cooling Liquid Freezer II 360mm
Memory 2x16GB DDR5-6000 G.Skill Trident Z RGB
Video Card(s) MSI Gaming X Trio 6800 16GB
Storage 1TB SSD
Case Cooler Master Storm Striker
Power Supply Antec True Power 750w
Keyboard IBM Model 'M"
Thanks for this article, W1zzard. I actually had that motherboard, the Gigabyte Aorus Gaming 5, and returned it to the store because my G.Skill Trident Z DDR4-3200 CL14 32GB kit wouldn't break 2933 MHz no matter what I did. It uses Samsung chips, but the board would NOT clock the RAM any higher than 2933, which annoyed the crap out of me. I'm frantically looking for any post/article/review that shows this RAM hitting 3200 MHz or higher on any motherboard. I'm thinking I'll grab one of the four motherboards that currently have an external BCLK generator: the Asus Crosshair VI, Gigabyte Aorus Gaming 7, ASRock Taichi, or ASRock Fatal1ty Professional. Of these, the Gigabyte Gaming 7 is the only one with a dual BIOS (like the Gaming 5 has), but the Asus Crosshair will take my Corsair water cooler because it accepts socket AM3+ coolers.

Decisions, decisions!
 