# AMD Ryzen Memory Analysis: 20 Apps & 17 Games, up to 4K



## W1zzard (Mar 31, 2017)

We take a close look at memory performance on AMD Ryzen, using G.SKILL's Flare X modules, which are optimized for the new platform. Our testing includes memory frequencies ranging from 2133 MHz all the way to 3200 MHz, with timings from CL14 to CL18. All games are tested at their highest settings, at realistic resolutions used by gamers today: 1080p, 1440p, and 4K.

*Show full review*


----------



## londiste (Mar 31, 2017)

Something I noticed in a couple of earlier articles - could you sort the graphs so that the better results are at the top?
I know that every chart has "Higher/Lower is better" on it, but when both kinds are on the same page, all sorted from lower to higher, it sometimes gets difficult to read.


----------



## Adam Freeman (Mar 31, 2017)

Good review with lots of benchmarks. But for the gaming benchmarks you only measured average fps. Average fps doesn't tell the whole story;
you should measure min fps, or better yet the 1% and 0.1% lows. I'm sure memory frequency will have a big effect on those
measurements, which relate more to gameplay smoothness than average fps does.
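For readers unfamiliar with the metric, a "1% low" is usually computed from per-frame render times rather than from an FPS counter: take the slowest 1% (or 0.1%) of frames and convert their average frame time back to FPS. A minimal sketch of that idea (my own illustration; actual capture tools have their own exact definitions):

```python
def low_fps(frame_times_ms, fraction=0.01):
    """Average FPS over the slowest `fraction` of frames.

    frame_times_ms: per-frame render times in milliseconds.
    fraction=0.01 gives the '1% low'; use 0.001 for the '0.1% low'.
    """
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * fraction))         # use at least one frame
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms

# A mostly smooth 60 FPS run (16.7 ms/frame) with ten 50 ms stutter spikes:
frames = [16.7] * 990 + [50.0] * 10
print(low_fps(frames))                      # 1% low: 20.0 FPS
print(1000.0 * len(frames) / sum(frames))   # average: ~58.7 FPS
```

The average barely registers the spikes, while the 1% low drops to 20 FPS - which is why the percentile metrics track perceived smoothness much better than the mean.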


----------



## EarthDog (Mar 31, 2017)

Well done! I like this angle of testing!!!

And this, people, is exactly why you don't test at low settings and resolutions (read: below 1080p): it exaggerates the results, and those results do not extrapolate upward.

I'd like to see the same testing on Intel and see how the story differs there...


----------



## HD64G (Mar 31, 2017)

There are 5 games that benefit greatly, by 13-17%, when going from 2133 to 3200 MHz RAM (Hitman, FC Primal, Civ6, Fallout 4, Warhammer), while most of the others gain very little, with Dishonored 2 gaining 9%. It depends on the game engine, I suppose. So gaming performance on Ryzen clearly depends on RAM speed, along with game engine optimisations.


----------



## jabbadap (Mar 31, 2017)

Yeah, interesting read, thank you. So it seems to confirm that at UHD resolution memory speed does not matter; the system is bottlenecked by the GPU.

Btw, the MySQL bench should read "higher is better": TPS means Transactions Per Second, so more transactions means better performance.


----------



## Ubersonic (Mar 31, 2017)

"Add to that, the ongoing price-fixing short supply in the DRAM industry has caused tremendous escalations in memory costs."

I like what you did there ^^


----------



## Ferrum Master (Mar 31, 2017)

Awesome test!

The new AGESA updates (latency decrease) will spoil the epic work a bit...


----------



## TheGuruStud (Mar 31, 2017)

Unfortunately, you have to keep going up in frequency to see the gains. 3,600 looks nice from some vids.

If 4,000 is achievable, then you're gonna see the dumb fabric working.


----------



## hojnikb (Mar 31, 2017)

How about adding 1% and 0.1% percentiles for gaming? Average fps does not tell the whole story, especially with higher RAM frequencies.


----------



## qubit (Mar 31, 2017)

@W1zzard @EarthDog 

"The story repeats in our game-tests, where the most difference can be noted in the lowest resolution (1920 x 1080), all of 5.5 percent"

Again, as I've said before, it would be helpful if a low-res test could be added, e.g. 1024x768 or even less, so we can know the true fps performance of the processor. Testing only at 1080p and up, it's being hidden by GPU limiting, which can kick in and out as different scenes are rendered, so you don't really know how fast it is.

Contrary to popular opinion this really does matter. People don't change their CPUs as often as their graphics cards, so in the not too distant future we're gonna see 120Hz 4K monitors along with graphics cards that can render at 4K at well over 120fps. The slower CPU will then start to bottleneck that GPU so that it perhaps can't render a solid 120fps+ in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or another brand that does deliver the required performance, but are now stuck with the slower CPU because the review didn't test it properly. So again, yeah it matters. Let's finally test this properly.

Good review otherwise and good to know that it's not worth spending loads on fast, expensive memory. I remember it being a similar situation with Sandy Bridge when I bought my 2700K all those years ago. Saved me a ton of money.


----------



## EarthDog (Mar 31, 2017)

qubit said:


> @W1zzard @EarthDog
> 
> "The story repeats in our game-tests, where the most difference can be noted in the lowest resolution (1920 x 1080), all of 5.5 percent"
> 
> ...


It really doesn't matter. I can't agree at all, sorry. I don't understand what testing at lower res with lower settings shows, considering people don't play at that res with low settings and higher-end cards. So it's a dataset, sure, but I can't wrap my head around its relevance since people barely use it. Again, it exaggerates results which do not extrapolate to higher res/settings. It doesn't matter, and it is tested properly IMO.


----------



## qubit (Mar 31, 2017)

EarthDog said:


> It really doesn't matter. I can't agree at all. Sorry. I don't understand what testing lower res with lower settings shows considering people don't play at that res with low settings and higher end cards. So, its a dataset, sure, but I can't wrap my head around its relevance since people barely use it. Again, it exaggerates results which do not extrapolate to a higher res/settings. It doesn't matter and is tested properly IMO..


I just explained in detail why it matters. Not sure what more I can add to this. 

Again, I want to stress that these tests are in addition to the current tests, not to replace them.


----------



## Ubersonic (Mar 31, 2017)

Just finished reading the whole review and I don't mean to be rude but hasn't this review left out the most important information, the minimum frame rates? You know the ones that are reportedly heavily affected by RAM speed and the reason people are saying Ryzen gets gimped on 2133/2400 RAM...




EarthDog said:


> Sorry. I don't understand what testing lower res with lower settings shows considering people don't play at that res with low settings and higher end cards. So, its a dataset, sure, but I can't wrap my head around its relevance


He literally explained why it's relevant in the post you quoted...


----------



## EarthDog (Mar 31, 2017)

qubit said:


> I just explained in detail why it matters. Not sure what more I can add to this.
> 
> Again, I want to stress that these tests are in addition to the current tests, not to replace them.


I understand what you are saying. I 100% disagree with your assertion (that it's relevant)... it's just that simple.

What you said doesn't really matter for people (to me - and it shouldn't for the rest, lol). It shows nothing that extrapolates to a resolution and settings where people actually play. By testing in such an artificial environment, you have created an UNREALISTIC environment to capture what amounts to an IRRELEVANT data set. The faster CPU down low, at your lower-than-low settings and 1080p, will still be the faster chip up top at 4K.

I believe it's a waste of time to even add them to the review. Now, the MINIMUM FPS is a good thing to have here....


----------



## TheGuruStud (Mar 31, 2017)

Good luck with 120 fps at 4k. Between lazier coding and cramming in more textures/effects, it's not happening anytime soon.


----------



## newtekie1 (Mar 31, 2017)

qubit said:


> Contrary to popular opinion this really does matter. People don't change their CPUs as often as their graphics cards, so in the not too distant future we're gonna see 120Hz 4K monitors along with graphics cards that can render at 4K at well over 120fps. The slower CPU will then start to bottleneck that GPU so that it perhaps can't render a solid 120fps+ in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or another brand that does deliver the required performance, but are now stuck with the slower CPU because the review didn't test it properly. So again, yeah it matters. Let's finally test this properly.



Not really true.  As GPUs improve, so do the demands on them.  That isn't as true with CPUs.  The demand on the CPU, with the exception of a few games like Cities: Skylines, pretty much stays the same.  This is why a gaming rig with a 1080 Ti and a 4.4 GHz 2500K is still viable.

At the end of the day, as long as we are getting to the point where we are removing the GPU bottleneck, which is what the 1080p tests with a GTX1080 largely do, there is no point in going lower.


----------



## refillable (Mar 31, 2017)

I know it's not that big of a deal, but I can note that the *FPS gains from the memory speeds* are most seen in the games where *the biggest gaps between Ryzen and the 7700K* are. Some of them are Fallout 4, Hitman and Total War: Warhammer. No wonder these games usually give weird and inconsistent GPU results; these games are optimised like fried potatoes.



newtekie1 said:


> At the end of the day, as long as we are getting to the point where we are removing the GPU bottleneck, which is what the 1080p tests with a GTX1080 largely do, there is no point in going lower.



I don't think that's accurate. The 2500K falls behind in a lot of tests to the i3s, which (ignoring core clocks) is as strong as the current Pentiums. It has to be overclocked to support the 1080 Ti.


----------



## newtekie1 (Mar 31, 2017)

Oh just saw this:



> It's important to point out here, that at 1080p, games become more CPU-limited, and faster memory is somewhat rewarding (again, 5.5 percent). At 4K Ultra HD, the game is more GPU-limited, and hence the differences aren't *are* pronounced.



I think that is supposed to be "aren't as pronounced".


----------



## Ferrum Master (Mar 31, 2017)

refillable said:


> optimised like fried potatoes.



Why are you insulting fried potatoes?


----------



## Ubersonic (Mar 31, 2017)

EarthDog said:


> I understand what you are saying. I 100% disagree with your assertion (that its relevant)...its just that simple.


What he's asking for is a low-res test that is CPU-limited, because the CPU that gets the worse result will be the one that starts to bottleneck future GPUs first.  It is a relevant test for people who plan to keep the CPU longer than the GPU (almost everyone).


----------



## EarthDog (Mar 31, 2017)

Yep. Again, I get it. That is a completely different test from what is going on here, though. W1z isn't testing the CPU; he's testing the changes from memory speed/timings in games/apps. But again, in games, the faster CPU at 800x600 is still going to be the fastest CPU at 4K, right (Right.)? Now, if one were testing what qubit is suggesting, you would want a round-up of CPUs to test, not the same CPU with changing memory speeds as is done here... That is proper testing: isolating the memory speeds from everything else using a REALISTIC testing environment to yield REALISTIC results instead of contrived results from an UNrealistic testing environment.


----------



## Basard (Mar 31, 2017)

I notice stuttering in GTA5 a lot... even though fps rarely drops below 30 and mostly hovers above 50... my system is a bit dated, though.

Min FPS scores would be nice, though stuttering doesn't seem to show up in Steam's FPS counter. Dunno how they measure it scientifically.
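Frame-time capture tools typically quantify stutter exactly this way: record per-frame render times and compare a typical frame against a near-worst percentile, since averages hide hitches. A rough sketch of the idea (my own illustration, not any specific tool's method):

```python
def stutter_stats(frame_times_ms):
    """Median vs. 99th-percentile frame time: a large gap between the
    two means visible hitching even when the average FPS looks fine."""
    times = sorted(frame_times_ms)
    p50 = times[len(times) // 2]          # typical frame
    p99 = times[int(len(times) * 0.99)]   # near-worst frame
    return p50, p99

# ~48 FPS on average, but every 100th frame is a visible 100 ms hitch:
frames = ([20.0] * 99 + [100.0]) * 10
print(stutter_stats(frames))  # (20.0, 100.0)
```

An FPS counter averaged over a second would smooth those 100 ms hitches away, which matches the GTA5 experience described above.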


----------



## TheGuruStud (Mar 31, 2017)

refillable said:


> I know it's not that big of a deal, but I can note that the *FPS gains from the memory speeds *are most seen in games where *the biggest gap of Ryzen vs. the 7700K are*. Some of them are Fallout 4, Hitman and Total War Warhammer. No wonder these games usually give weird and inconsistent GPU results, these games are optimised like fried potatoes.
> 
> 
> 
> I don't think that's accurate. The 2500K falls behind in a lot of tests to the i3s, which (ignoring core clocks) is as strong as the current Pentiums. It has to be overclocked to support the 1080 Ti.



Who has a 2500k that's not OCed to 4.5+? That's like buying an i3...stupid lol


----------



## Ubersonic (Mar 31, 2017)

EarthDog said:


> But again, in games, the faster CPU at 800x600 is still going to be the fastest CPU at 4K, right (Right.)?


Yeah, that's the point: if, say (hypothetically), 3200 MHz was 15% faster at 800x600 than 2133 MHz, then in the future it will be 15% faster at higher resolutions with newer GPUs.

The conclusion of the review advises against buying faster RAM because the price increase is bigger than the performance increase shown in the GPU-limited tests, but you can bet anyone who follows that advice will be mad when their 1480 Ti is getting bottlenecked by their 2133 MHz RAM and DDR4 prices are higher than they were in 2017.


----------



## newtekie1 (Mar 31, 2017)

refillable said:


> I don't think that's accurate. The 2500K falls behind in a lot of tests to the i3s, which (ignoring core clocks) is as strong as the current Pentiums. It has to be overclocked to support the 1080 Ti.



No, it doesn't even have to be overclocked to support the 1080Ti.  In any modern game, using settings and resolutions that need a 1080Ti, the 2500K will not be the bottleneck.  The 1080Ti will be, that is why we upgrade our GPUs way more than we upgrade our CPUs.  I can't think of a single game released recently that this isn't true with.



Ubersonic said:


> What he's asking for is a low res test that is CPU limited because the CPU that gets the worse result will be the CPU that starts to bottleneck future GPUs first.  It is a relevant test for people who plan to keep the CPU longer than the GPU (almost everyone).



In theory, yes.  In real world use, almost never.


----------



## EarthDog (Mar 31, 2017)

Ubersonic said:


> then in the future it will be 15% faster at higher resolutions with newer GPUs.


It won't... that is MY point (at least not any time remotely soon that this is a worry; 4 years, maybe). As you can see, it doesn't translate to the higher resolutions... Look at his results!!! 5.5% at 1080p down to 0.8% at 4K.

I personally have always said, for the Intel platform, to grab DDR4-3000 CL15... that really has been where the sweet spot was. Now it seems to have slid up a bit to 3000-3200... Much above that, the prices skyrocket. When a 1480 Ti comes out in 4 years, you'll likely have other, more pressing issues to worry about than RAM.

I found his conclusion to be quite open-ended, actually... though here in the States (Newegg) I am finding a $20 difference between the same brand/model RAM from 2400 to 3000. I couldn't find any 2133 kits (I was looking at G.SKILL Trident Z).


newtekie1 said:


> No, it doesn't even have to be overclocked to support the 1080Ti.  In any modern game, using settings and resolutions that need a 1080Ti, the 2500K will not be the bottleneck.  The 1080Ti will be, that is why we upgrade our GPUs way more than we upgrade our CPUs.  I can't think of a single game released recently that this isn't true with.


The 2500K can be a glass ceiling in some titles and settings with a high end GPU. It doesn't happen on all titles, but, it is beginning to show its age with high end GPUs where a CPU is leaned on (along with the game). You can see these results if you look at some TechSpot reviews.
http://www.techspot.com/review/1333-for-honor-benchmarks/page3.html
http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/page4.html

...and in some it doesn't...

http://www.techspot.com/review/1271-titanfall-2-pc-benchmarks/page3.html

...again, it depends...


But, here we aren't testing 6 year old CPUs, but the fastest AMD has to offer (and mentally comparing it to the fastest Intel has to offer).


----------



## zoplon (Mar 31, 2017)

Nice review, but I would have liked to see more information about the RAM voltages, settings, and BCLK for 3200 MHz.
On their website https://www.gskill.com/en/press/vie...s-and-fortis-series-ddr4-memory-for-amd-ryzen
G.SKILL seems to have configured these sticks to increase the bus speed to run at 3200, but I read that the Gigabyte Aorus 5 only has a multiplier that you can change.

Could we have a cpu-z screenshot to see how this frequency was achieved?


----------



## W1zzard (Mar 31, 2017)

zoplon said:


> Nice review but I would have like to see more information about the RAM voltages, settings, BCLK for 3200Mhz.
> On their website https://www.gskill.com/en/press/vie...s-and-fortis-series-ddr4-memory-for-amd-ryzen
> Gskill seems to have configured these sticks to increase the bus speed to run at 3200 but I read that the Gigabyte Aorus 5 only has a multiplier that you can change.
> 
> Could we have a cpu-z screenshot to see how this frequency was achieved?


Just set RAM to 3200, voltage, timings, done. Couldn't be easier. Gigabyte has no BCLK adjustments anyway.


----------



## Fragment (Mar 31, 2017)

TheGuruStud said:


> Who has a 2500k that's not OCed to 4.5+? That's like buying an i3...stupid lol



I guess it must be all those people who also still play in 480p or 720p


----------



## mouacyk (Mar 31, 2017)

In a RAM or CPU gaming benchmark, anytime you introduce a GPU bottleneck, you no longer know what's really going on with the RAM/CPU.  I feel like the gaming benchmarks don't really answer the overall controversial question of Ryzen's memory scaling capabilities.  However, the non-gaming benchmarks, which are obviously not using the GPU, don't show any significant scaling either and fully support the gaming results.  Perhaps the RAM isn't optimized by the BIOS for its particular speeds?  I know that even with matured Z97 BIOSes, I can still tweak a few secondary or tertiary timings and blow automatic timings out of the water when it comes to bandwidth performance.


----------



## zoplon (Mar 31, 2017)

W1zzard said:


> Just set ram to 3200, voltage, timings, done. couldn't be easier. Gigabyte has no bclk adjustments anyway



So the base clock stays at 100 MHz with the RAM speed at 3200 MHz? That's nice to hear. Seeing the screenshots G.SKILL posted, I assumed they automatically increased it to reach that speed.
It would be interesting to see if the system stays stable with 3200 MHz RAM and with an overclock of the CPU at ~4 GHz.


----------



## EarthDog (Mar 31, 2017)

mouacyk said:


> In a RAM or CPU gaming benchmark, anytime you introduce a GPU bottleneck, you no longer know what's really going on with the RAM/CPU.  I feel like the gaming benchmarks don't really answer the overall controversial question of Ryzen's memory scaling capabilities.  However, the non-gaming benchmarks, which are obviously not using the GPU, don't show any significant scaling either and fully support the gaming results.  Perhaps, the RAM isn't optimized by the BIOS for their particular speeds?  I know that even with matured Z97 BIOSes, I can still tweak a few secondary or tirtiary timings and blow automatic timings out of the water when it comes to bandwidth performance.


Different loads use different resources on the PC, bud.


----------



## W1zzard (Mar 31, 2017)

zoplon said:


> So the baseclock stays at 100 Mhz with the RAM speed at 3200 Mhz? That's nice to hear. Seeing the screenshots Gskill posted I assumed they automatically increased it to reach that speed.
> It would be interesting to see if the system stays stable with 3200Mhz RAM and with an overclock of the CPU at ~ 4Ghz.


Yes.

It is rock stable at 3200 CL14 using XFR, which automatically boosts the CPU beyond 4.0 GHz out of the box.


----------



## Super XP (Mar 31, 2017)

Thanks for the review.
Some have managed to get DDR4-3800+ working with Ryzen, and the results were quite good. Infinity Fabric runs at the IMC speed, so the gains aren't just from faster RAM itself - the interconnect gets faster too.
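If the fabric clock really does track the memory controller clock, the arithmetic behind this is simple: a "DDR4-3200" rating counts transfers per second, and DDR memory makes two transfers per clock, so the actual memory clock (and, per the claim above, the fabric clock on first-gen Ryzen) is half the rating. A trivial illustration:

```python
def mem_clock_mhz(ddr_rating):
    """DDR ratings are transfer rates (MT/s); two transfers per clock
    means the real memory clock is half the rating."""
    return ddr_rating / 2

for rating in (2133, 2666, 3200, 4000):
    print(f"DDR4-{rating}: ~{mem_clock_mhz(rating):.0f} MHz memory/fabric clock")
```

So pushing from DDR4-2133 to DDR4-3200 also raises the interconnect clock from roughly 1066 MHz to 1600 MHz, which is why memory scaling on Ryzen is about more than raw bandwidth.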


----------



## Super XP (Mar 31, 2017)

TheGuruStud said:


> Unfortunately, you have to keep going up in frequency to see the gains. 3,600 looks nice from some vids.
> 
> If 4,000 is achievable, then you're gonna see the dumb fabric working.


Calling Infinity Fabric dumb shows you know nothing about this unique, highly innovative technology. This fabric is far more than a high-speed interconnect, and it is nothing like HyperTransport.


----------



## zoplon (Mar 31, 2017)

W1zzard said:


> Yes.
> 
> It is rock stable at 3200 cl14 using xfr which automatically boosts the cpu beyond 4.0 oob



Nice. Do you think it would still be stable with 4Ghz on all 8 cores with that RAM speed?


----------



## Super XP (Mar 31, 2017)

zoplon said:


> Nice. Do you think it would still be stable with 4Ghz on all 8 cores with that RAM speed?


Yes.
But motherboards need to be fully optimized for stability and for faster speeds. This happens with every new generation, including Intel chips.
I would guesstimate that in about 2-3 months' time, Ryzen CPUs will grow in overall performance by about 15-20%. Real world. IMO.


----------



## TheGuruStud (Mar 31, 2017)

Super XP said:


> Calling Infinity Fabric dumb shows you know nothing about this unique highly innovative technology. This fabric is far more than a high speed interconnect. And nothing like Hyper Transport.



Yet, it's still dumb, b/c it's slow. Lipstick on a pig. It needs twice the bus, apparently, or just lower latency.

AMD cut corners to make the CPU. Hopefully, this is fixed in Zen 2.


----------



## xorbe (Mar 31, 2017)

Looks like to me, from 2133 to 3200, that 2400 gets you half of the available gain for probably the least amount of money and effort. [Except that the majority of 2400 is CL15 or CL16, so ...]


----------



## Farmer Boe (Mar 31, 2017)

Wiz, thanks for all the hard work and time you've put into testing all these configurations. Please ignore the ignorant people here who seem to miss the point of this article.

In the future, it would be interesting to see a Ryzen vs Intel comparison using the same or similar RAM speeds/timings for gaming as that's what most people use their rigs for.

Keep up the good work!


----------



## zoplon (Mar 31, 2017)

Super XP said:


> Yes.
> But Motherboards need to be fully optimized for stability and for faster speeds. This happens with every new generation. Including Intel Chips.


All true, but since this RAM kit was specifically advertised as built for Ryzen, I wanted to know what is so special about it and whether it guarantees the overclock while also having a decent OC on all CPU cores, without messing with the base clock.


----------



## Super XP (Mar 31, 2017)

TheGuruStud said:


> Yet, it's still dumb, b/c it's slow. Lipstick on a pig. It needs twice the bus, apparently, or just lower latency.
> 
> AMD cut corners to make the CPU. Hopefully, this is fixed in Zen 2.


Well, I can't confirm whether they got lazy putting Infinity Fabric together or not, but I give Jim Keller a lot more credit. He is in fact one of the best CPU architects.
Hopefully they tighten up those IF latencies and push speed support up to DDR4-3800 to DDR4-4000+.

One thing I do know is that Infinity Fabric, and Zen in general, requires optimization.
Great read on Infinity Fabric:

*AMD Infinity Fabric underpins everything they will make!!!*
http://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/


----------



## TheGuruStud (Mar 31, 2017)

Super XP said:


> Well I can't confirm whether they got lazy putting Infinity Fabric together or not, but I give Jim Keller a lot more credit. He is in fact one of the best CPU Architects.
> Hopefully they tighten up those IF latencies, and speed support up to DDR4-3800 to DDR4-4000+.
> 
> One this I know is Infinity Fabric and ZEN in general requires optimizations in general.
> ...



Not lazy, not in the least. They had to be economical (no thanks to Intel), and that makes performance suffer. Ideally you don't want a lot of inter-core communication, and moving primary threads across CCXs isn't helping, I'm sure.


----------



## uuuaaaaaa (Mar 31, 2017)

In the mean time:

Really interesting findings


----------



## hapkiman (Mar 31, 2017)

Had my first hands-on experience last night with a close friend's new Ryzen 1700X build.  Overall I was impressed, but... I do have to say that at times it seemed sluggish; ok, maybe that's not the right word.  Maybe it just seemed like it wasn't as fast as I thought it would be.  His rig ran fine, no crashes or problems (he has a Fury Nano GPU).  Maybe I can just chalk it up to the fact that it's still very early in its life cycle, and that I am used to my overclocked i7 7700K rig.  We played about an hour or so of Ghost Recon Wildlands, and then some BF1.

But although I may have come away with slightly mixed feelings, I am glad to see AMD finally releasing something that is new, fast, and _close to/equal to/better than_ Intel's offerings.  I guess time will tell how it all plays out over the next year - and it looks like Intel is prepping the release of a 10-core Skylake-X and a 6-core Coffee Lake before the end of the year, both of which look promising.

BTW, he did tell me that he now wishes he'd gone with the 1800X and is thinking about trading up.  And he had some issues getting his G.SKILL RAM to work, so he switched to Crucial, which now works fine.  A very decent AMD rig, though.  Leaps and bounds ahead of their Piledriver procs.


----------



## nem.. (Mar 31, 2017)

qubit said:


> @W1zzard @EarthDog
> 
> "The story repeats in our game-tests, where the most difference can be noted in the lowest resolution (1920 x 1080), all of 5.5 percent"
> 
> ...



VERY informative video of Ryzen running code optimized for AMD back in 2003. YES, TWO THOUSAND THREE. The programmer followed the guidelines AMD provided back when there were Athlon XPs and Athlon 64s/64 X2s. The results speak for themselves: Ryzen dominates (massively) with the optimized code vs the i7-7700K, while only slightly lagging behind on the unoptimized code.

Remember, this wasn't optimized for Ryzen, so SMT/CCX are all irrelevant.


----------



## simlariver (Mar 31, 2017)

I was hoping for minimum framerates with the gaming tests. Still pretty informative.


----------



## ThomasS31 (Mar 31, 2017)

Thanks!

Wonder how this looks on the Intel side...

Any chance you'll do something similar with, say, a 7700K?


----------



## Super XP (Apr 1, 2017)

hapkiman said:


> BTW he did tell me that he now wishes he'd have went with the 1800x, and is thinking about trading them out.  And he had some issues getting his G-Skill RAM to work, so he switched to Crucial which now works fine.  Very decent AMD rig though.  Leaps and bounds ahead of their Piledriver procs.



Do not trade the Ryzen 7 1700X; it can reach 1800X speeds with no issues once motherboard BIOSes are updated and Ryzen-optimized. Save the extra cash and buy something else for the rig.


----------



## Kanan (Apr 1, 2017)

Min fps is still missing; why is TPU still lagging behind on this?

Also, I don't concur with the conclusion: 5% is a lot coming from RAM alone, and that's just average FPS and only up to 3200 MHz RAM. As I see it, the sweet spot price/performance-wise is now DDR4-2933.


----------



## newtekie1 (Apr 1, 2017)

EarthDog said:


> The 2500K can be a glass ceiling in some titles and settings with a high end GPU. It doesn't happen on all titles, but, it is beginning to show its age with high end GPUs where a CPU is leaned on (along with the game). You can see these results if you look at some TechSpot reviews.
> http://www.techspot.com/review/1333-for-honor-benchmarks/page3.html
> http://www.techspot.com/review/1263-gears-of-war-4-benchmarks/page4.html
> 
> ...



You really just proved my point.  Nothing you posted shows the 2500K being the bottleneck.  No one is buying a Titan X(P) and running at 1080p.  But that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario.  In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits.  And this will continue to be the case, the only scenario that will show a significant difference between CPUs are unrealistic scenarios.


----------



## Timobkg (Apr 1, 2017)

Multiple other sites are reporting that while faster RAM doesn't increase average frame rates much, if at all, it does increase minimum frame rates and decrease frame times, thus leading to a smoother gaming experience. 

It's a shame that minimum frame rates and frame times weren't tested or reported. If faster RAM alleviates bottlenecks in the most strenuous sections, where the system is being taxed the most, then it might very well be worthwhile to spend more on faster RAM. Such a bottleneck would only become more apparent with upgrades to faster GPUs.


----------



## nem.. (Apr 1, 2017)




----------



## Super XP (Apr 1, 2017)

nem.. said:


>


There sure is a big difference with v2.20. I can see continued optimizations coming. NICE.



Timobkg said:


> Multiple other sites are reporting that while faster RAM doesn't increase average frame rates much, if at all, it does increase minimum frame rates and decrease frame times, thus leading to a smoother gaming experience.
> 
> It's a shame that minimum frame rates and frame times weren't tested or reported. If faster RAM alleviates bottlenecks in the most strenuous sections, where the system is being taxed the most, then it might very well be worthwhile to spend more on faster RAM. Such a bottleneck would only become more apparent with upgrades to faster GPUs.


I believe Ryzen would benefit from DDR4-3600 and above.


----------



## EarthDog (Apr 1, 2017)

newtekie1 said:


> You really just proved my point.  Nothing you posted shows the 2500K being the bottleneck.  No one is buying a Titan X(P) and running at 1080p.  But that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario.  In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits.  And this will continue to be the case, the only scenario that will show a significant difference between CPUs are unrealistic scenarios.


Funny... I've made that exact point about those exact tests...

But it can be the glass ceiling with an even lesser card... just keep going back in time there.


----------



## Aquinus (Apr 1, 2017)

newtekie1 said:


> You really just proved my point.  Nothing you posted shows the 2500K being the bottleneck.  No one is buying a Titan X(P) and running at 1080p.  But that is the only scenario that shows the 2500K, or any decent CPU, making a difference. It is an unrealistic scenario.  In reality, if you are using a Titan X(P), you're running higher resolutions that actually push the GPU to its limits.  And this will continue to be the case, the only scenario that will show a significant difference between CPUs are unrealistic scenarios.


A lot of the games I run on Linux have single-threaded bottlenecks, but I think that's more because of how OpenGL works. There are situations where my 3820 will be "under-utilized", but really that's only because some games, when running under OpenGL, can't use more than a thread and a half worth of resources. I find this to be true for both Civ 5 and 6, along with Cities: Skylines. Other games with more emphasis on concurrency, or that have implemented Vulkan, run a lot better on my machine than their OpenGL equivalents. If I ran Windows, a lot of these same problems wouldn't exist, but that's probably more due to the 3D API than anything else.

My simple point is that there are cases where single-threaded performance on SB and SB-E will no longer fit the bill without a significant overclock. I'm running my 3820 at 4.4 GHz just to try to keep the games mentioned above running a little more smoothly, because a lot of the time the rendering thread/process is the one eating a full core, at least for me.


----------



## rippie (Apr 1, 2017)

awesome review guys, love the amount of numbers. you really put the hours in there.

tiny notes though
- nvidia video cards on Ryzen don't perform that well. for fun, try the same with the RX 480 (or the RX 580 when it's coming)
- you may want to add CPU/GPU load values to the charts
- you may want to add min/max next to average in the charts

that said, i really like the numbers: for gaming at high res not much benefit, for CPU-intensive tasks, yes, benefits.
i wonder when the compilers will incorporate dedicated Ryzen optimisations (if ever)

keep up the good work with these reviews, me ugha like!


----------



## uuuaaaaaa (Apr 1, 2017)

rippie said:


> awesome review guys, love the amount of numbers. you really put the hours in there.
> 
> tiny notes though
> -nvidia videocards on Ryzen dont perform that well. for fun try the same with the rx480 (or rx580 when its coming)
> ...



Dual rx480's


----------



## rippie (Apr 1, 2017)

that too, or wait for Vega. the reason i want to see the CPU/GPU loads is that the nvidia driver doesn't take full advantage of the Ryzen multithreaded beast; you will probably see that the CPU is not maxed out and the GPU is not maxed out,
as seen in a previous video linked in this thread.

and perhaps the nvidia driver is the reason for not showing such an improvement in gaming.

but hey, more numbers allow us to correlate those guesses


----------



## uuuaaaaaa (Apr 1, 2017)

rippie said:


> that too or wait for vega , but the reason i want to see the cpu/gpu loads is that the nvidia driver doesn't take full benefit of the ryzen multithreaded beast , u will probably see that the cpu is not maxed out, and the gpu is not maxed out.
> as seen in a previous video linked in this thread
> 
> and perhaps nvidia driver is the reason for not showing such an improvement on gaming.
> ...



I've seen the AdoredTV video too. Despite some controversy in the past, I think he deserves some credit on this one.


----------



## newtekie1 (Apr 1, 2017)

EarthDog said:


> Funny... I've made that exact point about those exact tests...
> 
> But it can be the glass ceiling with an even lesser card... just keep going back in time there.



If it isn't a glass ceiling with the highest-end cards available, then it isn't one with a lesser card.



Aquinus said:


> A lot of the games I run in Linux have single-threaded bottlenecks but, I think that's more because of how OpenGL works. There are situations where my 3820 will be "under utilized" but, really is only because some games, when running under OpenGL, can't use more than a thread and a half worth of resources. I find this to be true for both Civ 5 and 6 along with Cities: Skylines. Other games with more emphasis on concurrency or that have implemented Vulkan run a lot better on my machine than their OpenGL equivalents. If I ran Windows, a lot of these same problems wouldn't be but, that's probably more due to the 3D API than anything else.
> 
> My simple point is that there are cases when single-threaded performance on SB and SB-E will no longer fit the bill without a significant overclock. I'm running my 3820 at 4.4Ghz just to try to keep the games mentioned above running a little more smoothly because a lot of times the rendering thread/process is the one eating a full core, at least for me.



Yes, I already addressed games like this.  If the CPU is going to hold back the GPU in these types of games that already exist, then we are already seeing it in the 1080p test.  There is no reason to go any lower.

Again, my argument isn't that we don't need a lower-resolution test to give us an idea of how the CPUs perform in a situation where the GPU isn't the bottleneck.  My argument is that 1080p is low enough when the tests are done with a high-end card.  It already gives the information you want.  If there is going to be a CPU bottleneck in the future, it will show in the 1080p test.  Going lower just wastes time.


----------



## Aquinus (Apr 1, 2017)

newtekie1 said:


> If it isn't a glass ceiling with the highest-end cards available, then it isn't one with a lesser card.
> 
> 
> 
> ...


Sure, I wasn't disputing that. I was more trying to get at the fact that you can still have a bottleneck at 1080p with a lesser GPU on older hardware. I'm only driving a 390 with my 3820; it's not like I have a Titan X(P) or a 1080... but to your point: if I did, I probably wouldn't be playing games at 1080p.


----------



## Timobkg (Apr 1, 2017)

uuuaaaaaa said:


> Dual rx480's


That just introduces a whole set of other issues that could affect performance.

Support for multi-GPU is waning, and was never that great to begin with. And you certainly wouldn't be able to see whether faster memory had an impact on frame times with the frame-time stutter introduced by a multi-GPU setup.


----------



## Timobkg (Apr 1, 2017)

qubit said:


> @W1zzard @EarthDog
> Again, as I've said before, it would be helpful if a low res test could be added eg 1024x768 or even less, so we can know the true fps performance of the processor. Testing only at 1080p and up, it's being hidden by GPU limiting which can kick in and out as different scenes are rendered, so you don't really know how fast it is.
> 
> Contrary to popular opinion this really does matter. People don't change their CPUs as often as their graphics cards, so in the not too distant future we're gonna see 120Hz 4K monitors along with graphics cards that can render at 4K at well over 120fps. The slower CPU will then start to bottleneck that GPU so that it perhaps can't render a solid 120fps+ in the more demanding games, but the user didn't know about this before purchase. If they had, they might have gone with another model or another brand that does deliver the required performance, but are now stuck with the slower CPU because the review didn't test it properly. So again, yeah it matters. Let's finally test this properly.


Why stop at 1024x768? Why not test 800x600? Or 640x480? Or 320x240? Or better yet, why not test on a single pixel, thus completely eliminating the GPU as a bottleneck?

I understand your argument, but you're now effectively creating a synthetic benchmark not representative of real-world performance. Where do you draw the line?

The CPU needs to work together with the GPU, so you can't take the GPU out of the picture entirely.

1080p is the lowest resolution that anyone with a modern gaming desktop will be playing at, so it makes sense to set that as the floor. A $500 CPU paired with a $700 GPU is already a ridiculous config for 1080p gaming.

And while it would be nice to predict performance five years out, we simply can't. Technological improvements will either continue to stagnate - with the GPU continuing to be the bottleneck - in which case CPU/memory performance won't matter, or there will be such a profound change that any older benchmarks will be meaningless and anyone not running a 12 core CPU with 40GB quint-channel RAM will be left in the dust.


----------



## nemesis.ie (Apr 1, 2017)

Good timing on this.

I just updated the UEFI on my ASRock X370 Gaming Pro from 1.60 to 1.93D. It cleans up the interface a lot and adds profile saving and such; however, here's the bad news:

Per AIDA64, at a ~3900 MHz OC on the CPU, RAM bandwidth (3733 MHz Team TF running at 3200 MHz) in 1.93D is down to ~44,000 MB/s versus ~51,000 MB/s in 1.60.

@W1zzard, if you have a chance it would be interesting to see if different Gigabyte UEFI versions also produce different results.

I wonder what Asrock changed as I would like to get the speed back and if it's a compatibility thing, it would be great if they had an option to enable the "faster mode" again.


----------



## Gio4you (Apr 1, 2017)

Great work, absolutely my next cpu


----------



## YautjaLord (Apr 1, 2017)

@W1zzard:

So all it means is: with BIOS updates & higher-frequency RAM you get a "measly" 5.5% improvement in 1080p gaming & a 13-15% improvement in CPU-bound tasks, but otherwise it's still largely unbeatable in productivity tasks? Hope by the time August comes-a-knockin' the BIOSes & Infinity Fabric have matured; i wanna see if the R7 1800X & GA-AX370-Gaming-K7 support 3600 MHz RAM without any quirks n sh1t. Nice read regardless, loved the DOOM 1080p, 1440p & 4K results, cheers.


----------



## geon2k2 (Apr 1, 2017)

uuuaaaaaa said:


> In the mean time:
> 
> 
> 
> ...



So what does that mean, that nvidia sucks at DX12 so much that it has basically affected all the Ryzen benchmarks so far?


----------



## uuuaaaaaa (Apr 1, 2017)

geon2k2 said:


> So what does that mean, that nvidia sucks at DX12 so much that it has basically affected all the Ryzen benchmarks so far?



Or that it is coded in a way that runs atrociously badly on the Ryzen uarch. I'm not saying this was done on purpose, but intel has dominated for so many years that it kinda made sense for them to get the best and most out of intel CPUs. Hopefully when RX Vega comes out these theories can be put to the test.


----------



## IRQ Conflict (Apr 1, 2017)

Timobkg said:


> Why stop at 1024x768? Why not test 800x600? Or 640x480? Or 320x240? Or better yet, why not test on a single pixel, thus completely eliminating the GPU as a bottleneck?


Your wish has been granted.


----------



## geon2k2 (Apr 1, 2017)

uuuaaaaaa said:


> Or that it is coded in a way that runs atrociously badly on the Ryzen uarch. I'm not saying this was done on purpose, but intel has dominated for so many years that it kinda made sense for them to get the best and most out of intel CPUs. Hopefully when RX Vega comes out these theories can be put to the test.



My understanding is that there is something in the nvidia driver/architecture which still runs in a single thread, or at least relies a lot on single-thread performance, while on AMD GPUs the load just spreads across multiple cores very nicely, and that's why Ryzen looks so much better.


----------



## uuuaaaaaa (Apr 1, 2017)

geon2k2 said:


> My understanding is that there is something in the nvidia driver/architecture which still runs in single thread or at least rely a lot on single thread performance, while on amd gpu it just spreads the load on multiple cores very nicely and that's why Ryzen looks so much better.



It could be the case too. However, some of the fps differences being reported are way beyond what one would expect... That is why I posted that hypothesis. If one is testing a CPU's gaming performance, the AdoredTV tests show that we must look at both sides of the fence. AdoredTV has posted some questionable things in the past, but imo he touched a very relevant point this time around.

Edit: Check this out, it seems that the AMD DX12 driver loves the cores, while Nvidia's DX12 driver seems to be lagging behind... At least this seems to be the case with "The Division" too.

NVIDIA DX11: 161.7 fps CPU: 40% GPU: 80%
AMD DX11: 128.4 fps CPU: 33% GPU: 71%

NVIDIA DX12: 143.9 fps CPU: 42% GPU: 67%
AMD DX12: 189.8 fps CPU: 49% GPU: 86%


----------



## geon2k2 (Apr 2, 2017)

uuuaaaaaa said:


> It could be the case too. However some of the fps differences that are reported are way beyond what one would expect... That is why I posted that hypothesis. If one is testing a cpu gaming performance, adoredTV tests just show that we must look at both sides of the fence. AdoredTV has posted some questionable things in the past, but imo he touched a very relevant point this time around.
> 
> Edit: Check this out, it seems that the AMD Dx12 driver loves the cores, while Nvidias DX12 seems to be lagging behind... At least this seems to be the case with "The Division" too.
> 
> ...



Interesting ... maybe w1zzard could look into it


----------



## anubis44 (Apr 2, 2017)

Thanks for this article, W1zzard. I actually had that motherboard, the Gigabyte Aorus Gaming 5, and returned it to the store because my G.Skill Trident Z DDR4-3200 CL14 32GB kit wouldn't break 2933 MHz, no matter what I did. It was using Samsung chips, but it would NOT clock the RAM any higher than 2933, which annoyed the crap out of me. I'm frantically looking for any post/article/review that shows this RAM hitting 3200 MHz or higher on any motherboard. I'm thinking I'll grab one of the four (currently) motherboards that have an external BCLK generator: Asus Crosshair VI, Gigabyte Aorus Gaming 7, ASRock Taichi, or ASRock Fatal1ty Professional. Of all of these, the Gigabyte Gaming 7 is the only one with a dual BIOS (like the Gaming 5 has), but the Asus Crosshair will take my Corsair water cooler because it accepts socket AM3+ coolers. 

Decisions, decisions!


----------



## SageWolf (Apr 2, 2017)

1) "again... amd failed because 5.5%". But when nvidia or intel is 5.5% faster in selected games, it's also an AMD failure. So it's always an AMD failure, no matter what. I don't think so; most reviewers/benchmarkers say 5.5% is enough for a win and counts as an improvement. BUT... ----> go to point 2)

2) It's not really 5.5%, right? It's more like: some games improve nothing, and some others, many, improve by more than 10 FPS. I saw one about 12 FPS faster at 3200 MHz. Isn't that a lot just from a faster RAM stick? Looks like a lot.

3) "3200 MHz RAM, the limits of the impossible!!" Well... right now that would be 3600 MHz RAM. But just be patient; with time, 4000 MHz will be common currency in everyday R5 and R7 Ryzen builds.

Let's see a re-review of Ryzen with a 10-20% improvement in a few months, like *Hardware Canucks* did with Polaris, huh?


----------



## nemesis.ie (Apr 2, 2017)

Note, this is not aimed at you SageWolf, just a general observation I've been meaning to vent about that your post reminded me of. 

I also wish folks on the interwebs would stop going on about "xx fps faster". We all need to talk ONLY in percentages, because e.g. a 5 fps difference at 150 fps is barely noticeable, but at 25 fps it is huge.

Often there is no context given, just "blah blah, I got 5 more fps in xxx on my yyy".
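The point about context can be made concrete with a quick sketch (the fps numbers are just the illustrative ones from the post above, not benchmark data):

```python
def relative_gain(baseline_fps: float, delta_fps: float) -> float:
    """Return the percentage gain that a fixed fps delta represents."""
    return delta_fps / baseline_fps * 100

# The same 5 fps delta means very different things at different baselines:
print(f"{relative_gain(150, 5):.1f}%")  # 3.3% at 150 fps: barely noticeable
print(f"{relative_gain(25, 5):.1f}%")   # 20.0% at 25 fps: huge
```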


----------



## HD64G (Apr 2, 2017)

Look carefully at 2:54 in the video and get stunned when you learn how much faster Ryzen can get with just a BIOS update 

And then think of a few more updates, faster RAM compatibility, and proper CPU usage through better OS support.

It has a lot more to give imho.


----------



## Kanan (Apr 2, 2017)

Everyone who watched the Adored videos and similar stuff should watch this as well:

tl;dw: Nvidia does fine in DX12 with Ryzen 7.


----------



## Super XP (Apr 3, 2017)

There's "A LOT" more hidden performance coming to Ryzen soon: 
game optimizations, BIOS updates, GPU driver updates to work better with Ryzen chips, OS optimizations, faster RAM to also benefit the IF, and so on. Endless possibilities.  

So many angry people online are calling Ryzen the FailDozer, just because equivalent Intel chips get what, 10-15 more FPS out of 150+ FPS? Lol, ridiculous.  

At least there's one hard fact I'll continue to hammer away at: Ryzen guarantees you smooth gaming, regardless of frame rates. And this is based on 80% of official review sites, AnandTech, Guru3D, etc. included. 

So let the Ryzen AM4 platform get its continued updates and enjoy the fact that AMD has finally given us a CHOICE.


----------



## nemesis.ie (Apr 3, 2017)

I'm chomping at the bit to get some R7 paired with Vega data. 

I hope that Vega Fury X2 is real and available at launch ..... dribble, pass the tissues! 

/hypetrain


----------



## Super XP (Apr 3, 2017)

nemesis.ie said:


> I'm chomping at the bit to get some R7 paired with Vega data.
> 
> I hope that Vega Fury X2 is real and available at launch ..... dribble, pass the tissues!
> 
> /hypetrain


AMD is scrapping the Fury name, seeing how Fury didn't live up to the hype despite its massive 4096-bit memory interface.  

RX Vega should pound the Fury X with at least 2.5x its performance, if not more.


----------



## Kanan (Apr 3, 2017)

Super XP said:


> At least there's One hard Fact I'll continue to hammer away, Ryzen guarantees you Smooth Gaming, regardless of Frame Rates. And this is based on 80% of official review sites. Anantech, Guru3D, etc., included.


It's true Ryzen is doing great on 1% and 0.1% percentile FPS, way better than the i7 7700K for example. This has to do with the 8 cores Ryzen has compared to the 4 cores of the 7700K, as well as the better latency of its L1 + L2 cache.










This is strictly on topic and proves what I said as well.


----------



## EarthDog (Apr 4, 2017)

I really don't think core count has much to do with apparent smoothness in games... add cores to Intel, or take them away from Ryzen... it's still just as smooth...


----------



## Kanan (Apr 4, 2017)

EarthDog said:


> I really don't think core count has much to do with apparent smoothness in games... add cores to Intel, or take them away from Ryzen... it's still just as smooth...


I'm only repeating what reviewers say: they think in some heavy games it's due to more reserves on the 8-core CPUs (Ryzen), but I added my own theory, that it's possibly the better/faster L1/L2 cache of Ryzen compared to the entire Intel Core architecture (all gens). That said, I think 6/8-core Intels are smoother as well, unless it's the second reason.


----------



## Super XP (Apr 4, 2017)

Kanan said:


> It's true Ryzen is doing great on 1% percentile and 0.1% percentile FPS, way better than i7 7700K for example. This has to do with the 8 cores that Ryzen has compared to the 4 cores of 7700K as well as the better latency on its L1 + L2 cache.
> 
> 
> 
> ...


Actually I was only referring to the 7700K. For some reason, there are enough reports and reviewers claiming it causes micro-stutters. But the 6700K, for example, does not. So is there something wrong with the 7700K? Perhaps. 
Thanks for the video, yeah I've seen that; can't wait to see the RAM pushed even faster and what effect it will have on the IF.


----------



## Kanan (Apr 4, 2017)

Super XP said:


> Actually I was only referring to the 7700K. for some reason, there's enough reports and reviewers that claim it causes micro stutters. But the 6700K for example does not. So is there something wrong with the 7700K? Perhaps.
> Thanks for the video, ya I've seen that, can't wait to see the Ram pushed even faster. What effect it will have on IF.


I think Ryzen scales nicely up to 2933/3200; after that the curve flattens a bit, but scaling continues, and may continue indefinitely, lifting that Infinity Fabric bottleneck a small bit every time the RAM bandwidth is increased. I'd like to see it with the best possible DDR4 there is atm (4200?), but I guess the curve gets really flat after 3600, also depending on the game tested. It's also important to note that games Ryzen already runs very well in don't need very highly clocked RAM to run really well; they easily get along with 2666 or 2933 DDR4.

On the i7 7700K: wow, that would be bad. Well, I don't like the 7700K anyway; fucking rebrand for high money.


----------



## Relayer (Apr 4, 2017)

HD64G said:


> There are 5 games that greatly benefit by 13-17% when going from 2133 to 3200MHz RAM (Hitman, FC Primal, Civ6, Fallout4, Warhammer) and most of the others gain very little with Dishonored2 gaining 9%. It depends on the game engine I suppose. So, gaming performance of Ryzen clearly depends on RAM speed, along with game engine optimisations.


Yeah. I have to disagree with the technique of averaging them all out and then concluding RAM speed doesn't matter for gaming.


----------



## uuuaaaaaa (Apr 4, 2017)

Interesting!


----------



## medi01 (Apr 4, 2017)

hojnikb said:


> How about doing 1% and 0.1% percentile for gaming. Average fps does not tell the whole story, especially with higher ram frequencies.


Indeed, avg fps doesn't really tell the whole picture.

From other reviews, most of the boost from faster memory went to min fps / 1% / 0.1% lows.
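For anyone wanting to reproduce these metrics themselves, the 1% / 0.1% lows are usually computed from a recorded per-frame log (FRAPS/PresentMon-style captures give you this data). A minimal sketch, with hypothetical fps samples; note that definitions vary between reviewers, and this one uses the average of the slowest frames rather than the single percentile value:

```python
def percentile_low(fps_samples, fraction):
    """Average of the slowest `fraction` of frames, e.g. 0.01 for the 1% low."""
    ordered = sorted(fps_samples)             # slowest frames first
    n = max(1, int(len(ordered) * fraction))  # use at least one sample
    return sum(ordered[:n]) / n

# Hypothetical per-frame fps values from a short benchmark run:
samples = [60, 58, 61, 59, 12, 60, 57, 62, 60, 58]
print(percentile_low(samples, 0.01))  # with only 10 samples this is just the worst frame: 12.0
print(percentile_low(samples, 0.20))  # average of the two slowest frames: 34.5
```

The average fps of that run is close to 59, which hides the one 12 fps frame entirely; the lows surface exactly the stutter the posters above are talking about.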


----------



## medi01 (Apr 4, 2017)

TheGuruStud said:


> Yet, it's still dumb, b/c it's slow.


I wouldn't call it "dumb". The cache-access figures I've seen weren't that bad either: within a CCX (4-core block) it was faster than intel's cache, across CCXes (over the IF) about 2 times slower. Still pretty cool.

There is only a mild perf hit in the (emulated) 2+2 vs 4+0 scenario (games):











I also doubt it can be "addressed", as the whole concept is to have 4-core blocks, with other CPUs created from those blocks, connected via Infinity Fabric.


----------



## nemesis.ie (Apr 4, 2017)

@Super XP: re "Vega Fury", I was obliquely referring to the WCCFTech article on Vega, a bit tongue-in-cheek TBH, as we all know how reliable their stuff is; they do sometimes get it right too, though.

http://wccftech.com/vega-teaser-slides-leak-nda/


----------



## Super XP (Apr 4, 2017)

I did read that Zen+ or Zen 2 (same thing), coming in early 2018, will resolve any latency and/or speed-related issues. Let me see if I can find that info so I can post it. I think one change is allowing much higher RAM speeds to be supported. 

Infinity Fabric is in its infancy; I just find it quite innovative. What if it ran at the CPU speed instead of the IMC speed?


----------



## nemesis.ie (Apr 4, 2017)

What if it also had its own port connected to the GPU? Inside an APU that would make a lot of sense, and possibly also on multi-chip cards.

> Visions of a cable running from next to the CPU socket to the GPU card now ...


----------



## uuuaaaaaa (Apr 4, 2017)

nemesis.ie said:


> What if it also had its own port connected to the GPU? Inside an APU that would make a lot of sense and also possibly on multi-ship cards.
> 
> > Visions of a cable from next to the CPU socket to the GPU card now ...



I think Vega also has the Infinity Fabric thing...


----------



## Super XP (Apr 4, 2017)

uuuaaaaaa said:


> I think Vega also has the infinity fabric thing too...


Yes, Vega does have it, and it pushes 512 GB/s too. Curious to see how Vega will perform with Ryzen versus with an Intel CPU. Will it run faster with Ryzen? We will see.


----------



## medi01 (Apr 5, 2017)

Super XP said:


> I did read that ZEN+ or ZEN 2 (Same thing) coming in early 2018 will resolve any latency issues and/or any speed related issues.


Welp, what issues?
The perf difference (in games) between 4+0 and 2+2 is lower than 5%. How much lower should it be to not be considered an issue? In my humble opinion that's dayum good.

Zen 2 will show more consistent performance, according to AnandTech. When tuning all that "branch prediction" logic, Intel has the resources to do it using a much wider range of apps than AMD. That's what Zen 2 (still on the AM4 socket, f*ck you intel) will do.



uuuaaaaaa said:


> I think Vega also has the infinity fabric thing too...


Makes me wonder about the implications for a dual-chip "Vega Pro" connected using IF.


----------



## uuuaaaaaa (Apr 5, 2017)

medi01 said:


> Welp, what issues?
> Perf difference (in games) between 4 + 0 and 2 + 2 is lower than 5%. How far lower should it be not to be considered an issue? In my humble opinion that's dayum good.
> 
> Zen 2 will show more consistent performance, according to anand. When tuning all that "branch prediction" logic, Intel has enough resources to do it using much wider range of apps than AMD. That's what Zen 2 (using AM4 socket, f*ck you intel) will do.
> ...



Maybe it won't use a PLX bridge chip this time?


----------



## nemesis.ie (Apr 5, 2017)

That's my thought too: if they have a DF connection they wouldn't need a PCIe switch from PLX (or another company that makes them), and the DF should be faster by around 2x at least.

That said, they probably could have used the original HyperTransport in the older designs but didn't. Now that they have more of a need to integrate stuff across CPUs/GPUs and data-centre racks, it probably makes more sense.


----------



## HTC (Apr 5, 2017)

anubis44 said:


> Thanks for this article, W1zzard. I actually had that motherboard, the Gigabyte Auros Gaming 5, and returned it to the store because my G.Skill Trident Z DDR4-3200 CL14 32GB kit memory wouldn't break 2933MHz, no matter what I did. It was using Samsung chips, but it would NOT clock the ram any higher than 2933, which annoyed the crap out of me. I'm frantically looking for any post/article/review that shows this ram hitting 3200MHz or higher on any motherboard. I'm thinking I'll grab one of the 4 (currently) motherboards that have an external BCLK generator: Asus Crosshair VI, Gigabyte Auros Gaming 7, Asrock, Taichi or Asrock Fata1ity Professional, but of all of these, the Gigabyte Gaming 7 is the only one with a dual bios (like the Gaming 5 has), but the Asus Crosshair will take my Corsair water cooler because it accepts socket AM3+ coolers.
> 
> Decisions, decisions!



This dude managed to do 3200 with *dual-rank* 2×16 GB G.Skill RAM @ CAS 16.

The pic is a bit hard to see, though.


----------



## medi01 (Apr 6, 2017)

Vayra86 said:


> nonsense Youtuber


Oh dear, give me a break.



Vayra86 said:


> He's grasping at straws


He is stating facts, comparing effect of AMD GPU vs nVidia GPU in CPU performance context.



uuuaaaaaa said:


> Maybe it won't use a PLX bridge chip this time?


That's a mainboard manufacturer's choice.


----------



## Vayra86 (Apr 6, 2017)

medi01 said:


> Oh dear, give me a break.
> 
> 
> He is stating facts, comparing effect of AMD GPU vs nVidia GPU in CPU performance context.
> ...



'Facts' that change with every passing day as things get adapted towards Ryzen, which does not exclude Nvidia at all. Meanwhile, CPU tax in one game is different from another, and he tested it while being GPU-limited in every single title, reducing the value of these different CPU loads to 'oh look, my CPU is doing less' while it has no bearing on in-game FPS at all. Don't get me wrong, I like that Ryzen gaming benches are starting to look better, but it's very easy to see that the Ghz bottleneck isn't gone; this reality has not changed at all.

Thing is, for 60hz gaming, Ryzen is MORE than fine and it was more than fine at launch too. For 120hz/fps however, and on cpu-limited games, the reality has NOT changed, and the 7700k is still the go-to CPU. The net value of all these wonderful conclusions is quite precisely zero.


----------



## uuuaaaaaa (Apr 6, 2017)

medi01 said:


> That's a mainboard manufacturer's choice.



Not really; I was talking about the PLX chips that connect both GPUs on a dual-GPU card like the Pro Duo / R9 295X2 / HD 7990/6990/5970/4870X2. Since Vega will support Infinity Fabric, they might get rid of the PLX and bridge both GPUs directly.


----------



## nemesis.ie (Apr 6, 2017)

We should really call them PCIe switches; PLX are not the only makers of them, I think Broadcom for one has some. 

Maybe "PCIe-Sw" or something shorter would be good, though.


----------



## Super XP (Apr 6, 2017)

medi01 said:


> Welp, what issues?
> Perf difference (in games) between 4 + 0 and 2 + 2 is lower than 5%. How far lower should it be not to be considered an issue? In my humble opinion that's dayum good.
> 
> Zen 2 will show more consistent performance, according to anand. When tuning all that "branch prediction" logic, Intel has enough resources to do it using much wider range of apps than AMD. That's what Zen 2 (using AM4 socket, f*ck you intel) will do.
> ...



I wish I could locate that link with AMD discussing the Zen 2 enhancements over Zen 1, and how they explain that Zen 1's issues won't be present in Zen 2. Paraphrasing of course; let me look.


----------



## nemesis.ie (Apr 6, 2017)

As an aside, the UEFI with the new AGESA/R5 support just arrived for my ASRock. 

Installed but I've not played with it yet.


----------



## Kanan (Apr 7, 2017)

Super XP said:


> I wish I can locate that Link with AMD discussing about ZEN2 enhancements over ZEN1. And how they explain ZEN1 issues won't be present in ZEN2. Paraphrasing of course, let me look.


I can confirm this: Lisa Su said in an interview that they are *currently* working on fixing the biggest flaws of Ryzen for Ryzen II (or "Ryzen 7 2xxx"), and then going from there to the smaller kinks.


----------



## anubis44 (Apr 7, 2017)

Kanan said:


> I confirm this, Lisa Su said in an interview that they are *currently* working at fixing the biggest flaws of Ryzen for Ryzen II (or "Ryzen 7 2xxx"), and then going from there to the smaller kinks.



Just a semantic point, but I would find it hard to believe Lisa Su used the word 'flaw' in describing improvements in Ryzen II over Ryzen.


----------



## medi01 (Apr 7, 2017)

Vayra86 said:


> For 120hz/fps however, and on cpu-limited games, the reality has NOT changed, and the 7700k is still the go-to CPU.


You make it sound as if the 7700K wins vs 8-core CPUs (including Intel's) all the time. But that's not the case.

Paying $300+ for a 4-core CPU in 2017 is outright wrong, in my humble opinion, with major consoles going 8-core (first 6 and now 7 cores are usable in games) and even Blizzard optimizing games for 6 cores.


----------



## TheLaughingMan (Apr 7, 2017)

From what I saw in the games, memory speed had only two outcomes: it either made no difference, or going from 2133 to 3200 gained about 10 FPS. That can be a big deal. For example, in Fallout 4 at 1440p it pushed the average from 57.4 to 66.6, roughly a 16% improvement. The same goes for Hitman and Civ VI. And the price of RAM is not going down any time soon. I am going to go for the higher clocks first, and lower timings if I can afford it.
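The quoted Fallout 4 numbers work out to about a 16% gain when measured against the 2133 MHz baseline (dividing the 9.2 fps delta by the new 66.6 fps value instead is what yields a 13.8% figure):

```python
old, new = 57.4, 66.6  # Fallout 4 1440p avg fps at 2133 vs 3200 MHz, from the review
gain = (new - old) / old * 100  # improvement relative to the slower config
print(f"{gain:.1f}%")  # 16.0%
```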


----------



## Vayra86 (Apr 7, 2017)

medi01 said:


> You make it sound as if 7700k wins vs 8 cores (including Intel's) all the time. But that's not the case.
> 
> Paying 300$+ for a 4 core CPU in 2017 is outright wrong, in my humble opinion, with major consoles going 8 core (first 6 and now 7 are usable in games) and even Blizzard optimizing games for 6 cores.



Oh, but I agree on that second half! 4c/8t at that price is ridiculous. But still, to hit the highest min fps, and especially above 60, you really do want the GHz, not the cores. The reason Ryzen bottlenecks at high refresh rates is exactly because there is always that ONE game thread you can't divide across multiple cores, and that is exactly where the min fps takes a hit. The higher averages are the result of multiple cores taking the other threads while the main thread hits a GPU wall before it hits the CPU wall. But the main thread is still limited by single-core perf.

Also keep in mind that the comparison to consoles is broken by design, because consoles aim for 30 or 60 fps targets, as HDTVs are generally built for 50/60 Hz, and because the consoles use low-power Jaguar cores, an entirely different beast from Ryzen or Intel CPUs.


----------



## Kanan (Apr 8, 2017)

anubis44 said:


> Just a semantic point, but I would find it hard to believe Lisa Su used the word 'flaw' in describing improvements in Ryzen II over Ryzen.


Right, you got me there.


----------



## Super XP (Apr 8, 2017)

Kanan said:


> I confirm this, Lisa Su said in an interview that they are *currently* working at fixing the biggest flaws of Ryzen for Ryzen II (or "Ryzen 7 2xxx"), and then going from there to the smaller kinks.


I don't recall anybody at AMD mentioning the word "flaw". Though Lisa did mention they are hard at work ironing out minor kinks and such in Zen 1, and will include enhancements over Zen 1 in Zen 2.

I highly doubt she said "flaw" in any relation to Zen 1. These chips are great; they just need optimizations, modifications and tweaking.


----------



## TheLaughingMan (Apr 9, 2017)

Super XP said:


> I don't recall anybody at AMD mentioning the word "flaw". Though Lisa did mention they are hard at work ironing out minor kinks and such in Zen 1, and will include enhancements over Zen 1 in Zen 2.
> 
> I highly doubt she said "flaw" in any relation to Zen 1. These chips are great; they just need optimizations, modifications and tweaking.



The only "flaw" I can think of is that I think the Infinity Fabric needs to be DDR. It appears to be running at the base clock rate of the memory controller. I don't see any reason it can't move data on both the rising and falling edges like DDR memory does. Everything else will come with time.
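The back-of-the-envelope math behind that idea, with clearly labeled assumptions (the 32-byte link width and one-transfer-per-clock behavior are common community estimates, not official AMD figures):

```python
# Sketch of fabric bandwidth at MEMCLK vs. a hypothetical double-data-rate
# fabric. Assumptions: link is 32 bytes wide per direction and moves one
# transfer per clock; fabric clock equals MEMCLK (half the DDR4 data rate).

LINK_BYTES = 32  # assumed link width per direction, not an official number

def fabric_gbps(ddr_rate_mts: float, transfers_per_clock: int = 1) -> float:
    memclk_mhz = ddr_rate_mts / 2  # DDR4-3200 -> 1600 MHz MEMCLK
    return memclk_mhz * 1e6 * LINK_BYTES * transfers_per_clock / 1e9

print(fabric_gbps(3200))     # single data rate: 51.2 GB/s
print(fabric_gbps(3200, 2))  # hypothetical DDR fabric: doubles to 102.4 GB/s
```

Under these assumptions, moving data on both clock edges would double link bandwidth at the same memory clock, which is exactly why the suggestion is attractive.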


----------



## Kanan (Apr 9, 2017)

Super XP said:


> I don't recall anybody at AMD mentioning the word "flaw". Though Lisa did mention they are hard at work ironing out minor kinks and such in ZEN1, and will include enhancements over ZEN1 in ZEN2.
> 
> I highly doubt she said "flaw" in any relation to ZEN1. These chips are great. They just need optimizations, modifications, and tweaking.


Man, I didn't want to trigger your fanboy reflex, calm down. Yes, she didn't use the word "flaw", as someone else right before you already noted, but it's a flaw nonetheless; she just used a typically "politically correct" word for it instead. She essentially said it will get more optimized, which doesn't change a thing.

@TheLaughingMan That and its low clocks! Two big flaws. Right now, Intel processors simply win in games because of higher clocks. Ryzen with high-clocked RAM isn't bottlenecked in games, but 4000 MHz simply isn't enough to win vs. an overclocked 7700K (and a lot of reviewers are comparing like that).


----------



## EarthDog (Apr 9, 2017)

Makes sense to compare stock for stock, at the same clocks (for IPC, head-to-head), as well as overclocked. Since the Ryzen 1700/1700X/1800X doesn't really overclock at all past XFR (except to bring all cores there), it loses out by 25% clock speed.


----------



## Super XP (Apr 9, 2017)

Kanan said:


> Man, I didn't want to trigger your fanboy reflex, calm down. Yes, she didn't use the word "flaw", as someone else right before you already noted, but it's a flaw nonetheless; she just used a typically "politically correct" word for it instead. She essentially said it will get more optimized, which doesn't change a thing.
> 
> @TheLaughingMan That and its low clocks! Two big flaws. Right now, Intel processors simply win in games because of higher clocks. Ryzen with high-clocked RAM isn't bottlenecked in games, but 4000 MHz simply isn't enough to win vs. an overclocked 7700K (and a lot of reviewers are comparing like that).


Lol, you can remove that fanboy comment, I ain't a fanboy of either company. 

Have you seen the recent gaming benchmarks from various reviewers? Ryzen does well enough in gaming today compared to when it was first released.


----------



## TheLaughingMan (Apr 9, 2017)

Kanan said:


> @TheLaughingMan That and its low clocks! Two big flaws. Right now, Intel processors simply win in games because of higher clocks. Ryzen with high-clocked RAM isn't bottlenecked in games, but 4000 MHz simply isn't enough to win vs. an overclocked 7700K (and a lot of reviewers are comparing like that).



That chip was not the Ryzen 7's target. Nor was pure gaming. It is also the first generation of a brand-new architecture filled with brand-new tech. Clock speed will take a generation or two. That is not a flaw, but more or less the reality of any highly complex product.


----------



## techtard (Apr 9, 2017)

Nice writeup. Thinking about jumping on a Ryzen setup, maybe a 6-core with slow RAM for now, and if they iron out all the bugs (and when prices drop), get some faster RAM later.
I don't need a new setup, but I'm kinda bored with my 2500K and the Skylake stuff isn't that appealing. I was hoping Intel would have released a mainstream 6-core i5 and a 6c/12t i7 by now.


----------



## Aquinus (Apr 9, 2017)

Super XP said:


> Infinity Fabric Is in its infancy. I just find it quite innovative. What if that ran at the CPU speed instead of the IMC?


What if it was just HyperTransport and was able to run in its own clock domain with its own multiplier? I can see there being benefits to clocking different parts of the CPU at different speeds and having control over that, considering how important communication between each complex *can be*. Being able to clock it higher under certain conditions, or lower when it's not needed, could offer power-saving options that are a little more granular than they are now.

I personally find what AMD has produced to be fascinating, and there are classes of applications (such as web servers utilizing non-blocking I/O) that can realize the power of a multi-core system. My exploration has involved using between 75% and 95% of the 8 threads on my 3820. I need more cores to test how far this scales.


----------



## Kanan (Apr 10, 2017)

TheLaughingMan said:


> That chip was not the Ryzen 7 target. Nor was pure gaming. It is also the first generation of a brand new architecture filled with brand new tech. Clock speed will take a generation or two. That is not a flaw, but more or less the reality of any highly complex product.


I know, I know. Still, it's a flaw when the interconnect between CCXes is so narrow that you have to overclock the RAM to get better performance ("better", not saying "full"). The low clock speed is another flaw; low clocks aren't acceptable these days, when every other architecture (including FX) has high clocks out of the box or is overclockable to 4.5-5 GHz (e.g. the Core architecture, the entire line). I'm sure both flaws are well known at AMD and are being worked on right now.


----------



## techtard (Apr 10, 2017)

~4.0 GHz is not bad; it can still keep up pretty well, judging from the benches around the 'net. And it smokes my FX that's running @ 5.0 GHz. Maybe the clock-speed wars are finally over and we're solidly moving on to the core wars.

Could be that this is AMD's version of the original i7; maybe we'll get a 'Sandy Bridge'-type R7 that hits 5.0 with Ryzen 2, when they get all the kinks worked out and the process matures.


----------



## medi01 (Apr 10, 2017)

Vayra86 said:


> But still, to hit the highest min fps and especially above 60, you really do want the Ghz not the cores.



Actually...
There is that not-clearly-explained "better min fps" effect of Ryzen CPUs, (mostly) vs. Intel's 8-cores. 
I'd thought it was, perhaps, that XFR thing, but the "consensus" (among some anonymous dudes on the internet, lolz) is that the larger cache likely plays a role.


----------



## Vayra86 (Apr 10, 2017)

medi01 said:


> Actually...
> There is that not-clearly-explained "better min fps" effect of Ryzen CPUs, (mostly) vs. Intel's 8-cores.
> I'd thought it was, perhaps, that XFR thing, but the "consensus" (among some anonymous dudes on the internet, lolz) is that the larger cache likely plays a role.



Ryzen is a bit different, but it still hasn't got the performance to apply that statement (cache makes GHz irrelevant) as a general one, because it's too much of a black box as of yet and performance is going all over the place (better or worse, engine-specific, etc.), while high GHz does apply as a general guarantee of better min fps.

The eternal problem of only being able to choose one CPU is at play here


----------



## TheLaughingMan (Apr 10, 2017)

Kanan said:


> The low clock speed is another flaw; low clocks aren't acceptable these days, when every other architecture (including FX) has high clocks out of the box or is overclockable to 4.5-5 GHz (e.g. the Core architecture, the entire line).



You do realize that out of the box, Intel only has 3 chips with a clock speed higher than 4.0 GHz, right? That would be the infamous 7700K and the recent i3-7350K and i3-7320. So yes, it is not only 100% acceptable, it's normal for 99.9% of the market. 4.1 GHz being a brick wall for even the best-binned chips (short of LN2 cooling) is an issue that needs to be addressed. A flaw it is not. They built a brand-new chip, on a brand-new architecture, with brand-new tech, and old tech they have never used before. We are lucky they got to 4.0 GHz. You clearly just want to call something a flaw, so have at it man, but you are off base.


----------



## medi01 (Apr 10, 2017)

Vayra86 said:


> because


Welp


----------



## Vayra86 (Apr 10, 2017)

medi01 said:


> Welp



Welp, take a critical look and you see 82 °C versus 72 °C and a significant clock difference, while the cooler card has 4 fps over the hotter one that runs a higher clock.

Smells like awesome youtubers doing 'research'...


----------



## medi01 (Apr 10, 2017)

Vayra86 said:


> Welp, take a critical look and you see 82 °C versus 72 °C


Welp
https://www.reddit.com/r/Amd/comments/5z9weg/amd_confirms_20c_offset_thermal_reading_bug/


----------



## Aenra (Apr 10, 2017)

What a man my age thinks reading the last pages:

- Two... three? people 'stuck' in a conversation that leads nowhere. But at it nonetheless.
- An entire market of people somehow -convinced- that comparing an 8-core to a second-generation 4-core is... logical? Productive?

But please carry on


----------



## TheLaughingMan (Apr 10, 2017)

medi01 said:


> Welp
> https://www.reddit.com/r/Amd/comments/5z9weg/amd_confirms_20c_offset_thermal_reading_bug/



That refers to the CPU temperature, not the GPU. In your screenshot, the GPUs are running at different clock speeds, likely due to the drastic temp difference. One of these things is not like the other.


----------



## Relayer (Apr 10, 2017)

TheLaughingMan said:


> You do realize that out of the box, Intel only has 3 chips with a clock speed higher than 4.0 GHz, right? That would be the infamous 7700K and the recent i3-7350K and i3-7320. So yes, it is not only 100% acceptable, it's normal for 99.9% of the market. 4.1 GHz being a brick wall for even the best-binned chips (short of LN2 cooling) is an issue that needs to be addressed. A flaw it is not. They built a brand-new chip, on a brand-new architecture, with brand-new tech, and old tech they have never used before. We are lucky they got to 4.0 GHz. You clearly just want to call something a flaw, so have at it man, but you are off base.



Amazing, the people who think that a quad-core for more money is a better value than the R7 1700 because it's faster in some games, but slower in virtually everything else. Also amazing that AMD actually beats the hype and over-delivers on their promises, and some people still can't find anything to be impressed about because of the aforementioned game results. 

Anyone remember the quad-core vs. dual-core debates from the past? Lots of people made the wrong decision back then too. Why don't people learn?


----------



## EarthDog (Apr 11, 2017)

TheLaughingMan said:


> You do realize that out of the box, Intel only has 3 chips with a clock speed higher than 4.0 GHz right? That would be the infamous 7700K and the recent i3 7350K and i3 7320. So yes it is not only 100% acceptable, its 99.9% of the market normal. 4.1 GHz being a brick wall for even the best binned chips is an issue (short of LN2 cooling) that needs to be addressed. Flaw it is not. They built a brand new chip, on a brand new architecture, with brand new tech, and old tech they have never used before. We are lucky then got to the 4.0 GHz. You clearly just want to call something a flaw so have at it man, but you are off base.


I agree with your underlying point...

However... the 4790K was 4 GHz.

Next, all those chips (that can) overclock a lot more than their own boost with all cores, unlike Ryzen. I wouldn't call it a flaw either, but a pretty big disappointment to the enthusiast/overclocking crowd, agreed. Really, most don't overclock at all (I don't consider all cores at XFR overclocking). I hope they can hit 4.5 GHz+ at some point in their lifecycle.


----------



## Fluffmeister (Apr 11, 2017)

medi01 said:


> Welp
> https://www.reddit.com/r/Amd/comments/5z9weg/amd_confirms_20c_offset_thermal_reading_bug/



Honestly, I'm embarrassed for you.


----------



## Relayer (Apr 11, 2017)

EarthDog said:


> I agree with your underlying point...
> 
> However... the 4790K was 4 GHz.
> 
> Next, all those chips (that can) overclock a lot more than their own boost with all cores, unlike Ryzen. I wouldn't call it a flaw either, but a pretty big disappointment to the enthusiast/overclocking crowd, agreed. Really, most don't overclock at all (I don't consider all cores at XFR overclocking). I hope they can hit 4.5 GHz+ at some point in their lifecycle.


Intel still has the superior process and fabs don't they? Not much AMD can do about that.


----------



## medi01 (Apr 11, 2017)

TheLaughingMan said:


> That refers to the CPU temperature. No the GPU. In your screenshot, the GPU's are running at different clock speeds likely due to the drastic temp difference. One of these things is not like the other.



I see, here is the same thing in motion:


Fluffmeister said:


> Honestly, I'm embarrassed for you.


Honestly, I find it flattering.


----------



## Vayra86 (Apr 11, 2017)

medi01 said:


> I see, here is the same thing in motion:
> 
> 
> 
> ...



To each their own, I guess 

Look through the whole thing and you can see alarm bells left and right.
- 1440p Ultra, sub-60 FPS, GPU-limited gameplay is precisely what is NOT interesting to see. CPU load tops out at 90% on a single core.
- 99% GPU load makes this not a CPU test, by design.
- Streaming and playing online during a CPU test (LOL).
- GPU load is all over the place, showing as low as 50% at times on either system, and I see "Early Access" at the bottom of the screen. The game not only looks like shit, it runs badly.

Should I go on? I will tell you this: if this is how you form your opinions, don't tire us with them, please.


----------



## medi01 (Apr 11, 2017)

Vayra86 said:


> us


You, perhaps, should list "us", so that I know whom not to "tire".


----------



## EarthDog (Apr 11, 2017)

We need a bird catcher... red herring just landed.


----------



## Super XP (Apr 11, 2017)

Kanan said:


> I know, I know. Still, it's a flaw when the interconnect between CCXes is so narrow that you have to overclock the RAM to get better performance ("better", not saying "full"). The low clock speed is another flaw; low clocks aren't acceptable these days, when every other architecture (including FX) has high clocks out of the box or is overclockable to 4.5-5 GHz (e.g. the Core architecture, the entire line). I'm sure both flaws are well known at AMD and are being worked on right now.


Ryzen's clock speed has absolutely nothing to do with their precious FX generation, nor with what Intel has out to date, which is 7-8 generations mature. 
We are talking about a completely new micro-architecture. Sure, higher clocks benefit a central processing unit. 

Do you remember the legendary Athlon 64? Clocked 1,000 MHz lower than the P4, but it performed faster. Different designs make clock-speed comparisons irrelevant. 

Ryzen has zero flaws. It's brand new. Talk to me when it's a couple of generations mature. In the meantime, optimizations and motherboard BIOS updates will continue to enhance it.


----------



## Super XP (Apr 11, 2017)

http://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/

Infinity Fabric is AMD's secret weapon.


----------



## Kanan (Apr 12, 2017)

TheLaughingMan said:


> You do realize that out of the box, Intel only has 3 chips with a clock speed higher than 4.0 GHz, right? That would be the infamous 7700K and the recent i3-7350K and i3-7320. So yes, it is not only 100% acceptable, it's normal for 99.9% of the market. 4.1 GHz being a brick wall for even the best-binned chips (short of LN2 cooling) is an issue that needs to be addressed. A flaw it is not. They built a brand-new chip, on a brand-new architecture, with brand-new tech, and old tech they have never used before. We are lucky they got to 4.0 GHz. You clearly just want to call something a flaw, so have at it man, but you are off base.


By my definition, which easily makes more sense than yours, it is a flaw. High MHz is needed for high-end gaming, and there Ryzen isn't capable of doing the job, yes, compared to the 7700K or compared to ANY Intel CPU that can be overclocked (you also failed to understand me, so I repeated it). Your behaviour strikes me as biased anyway. It is an obvious flaw, and many reviewers have called it exactly that. AMD themselves have accepted it and are working on it (that, plus the CCX shortcomings, and general optimization, IPC, etc.).


----------



## Super XP (Apr 12, 2017)

Kanan said:


> By my definition, which easily makes more sense than yours, it is a flaw. High MHz is needed for high-end gaming, and there Ryzen isn't capable of doing the job, yes, compared to the 7700K or compared to ANY Intel CPU that can be overclocked (you also failed to understand me, so I repeated it). Your behaviour strikes me as biased anyway. It is an obvious flaw, and many reviewers have called it exactly that. AMD themselves have accepted it and are working on it (that, plus the CCX shortcomings, and general optimization, IPC, etc.).


FYI, Ryzen is the best gaming CPU out to date, because it beats Intel in 90% of benchmarks. 
Any game you play with Ryzen guarantees you smooth gaming. The 7700K causes in-game stuttering. Lol


----------



## TheLaughingMan (Apr 12, 2017)

Kanan said:


> By my definition, which easily makes more sense than yours, it is a flaw. High MHz is needed for high-end gaming, and there Ryzen isn't capable of doing the job, yes, compared to the 7700K or compared to ANY Intel CPU that can be overclocked (you also failed to understand me, so I repeated it). Your behavior strikes me as biased anyway. It is an obvious flaw, and many reviewers have called it exactly that. AMD themselves have accepted it and are working on it (that, plus the CCX shortcomings, and general optimization, IPC, etc.).



You're special. Did you read the review? Did you look at any of Wiz's graphs from the 1600X? Show me where Ryzen is completely incapable of "high-end" gaming. I really want to know. And no one I have seen or heard of has called the clock speed a flaw. "Disappointing", "lack of headroom", "will get better next generation", etc. Not one "this is a flaw". So educate me on your point; show me someone calling it a flaw. Please.


----------



## Kanan (Apr 12, 2017)

TheLaughingMan said:


> You're special. Did you read the review? Did you look at any of Wiz's graphs from the 1600X? Show me where Ryzen is completely incapable of "high-end" gaming. I really want to know. And no one I have seen or heard of has called the clock speed a flaw. "Disappointing", "lack of headroom", "will get better next generation", etc. Not one "this is a flaw". So educate me on your point; show me someone calling it a flaw. Please.


Nobody said it's "completely incapable of high-end gaming" - you're still overreacting or having problems properly reading my posts / understanding me. Your bias is ever so clear again; it's the 2nd or 3rd time now that you're overreacting and not understanding my point.

It is a flaw, I already described why. Go and read some more reviews, especially ones with a lot of game benches in them, educate yourself, and don't bother me again. I won't waste my time here again. You can also believe whatever you want; I'm not on these forums to educate obvious fanboys. Those are unteachable anyway. Bye-bye

/unsub


----------



## Super XP (Apr 12, 2017)

Kanan said:


> Nobody said it's "completely incapable of high-end gaming" - you're still overreacting or having problems properly reading my posts / understanding me. Your bias is ever so clear again; it's the 2nd or 3rd time now that you're overreacting and not understanding my point.
> 
> It is a flaw, I already described why. Go and read some more reviews, especially ones with a lot of game benches in them, educate yourself, and don't bother me again. I won't waste my time here again. You can also believe whatever you want; I'm not on these forums to educate obvious fanboys. Those are unteachable anyway. Bye-bye


Claiming ZEN is flawed is simply your opinion, nothing more. You do not provide fact-based info, just opinion. So there you have it. 
In the meantime.....


----------



## Vayra86 (Apr 12, 2017)

medi01 said:


> You, perhaps, should list "us", so that I know whom not to "tire".



Look at the people who thanked my post, gives you a solid idea I reckon. The real message here was: do some source checking and develop a more critical stance if you want to actually make statements. It allows us to discuss things for what they are, much more informative.


----------



## TheLaughingMan (Apr 12, 2017)

Kanan said:


> Nobody said it's "completely incapable of high-end gaming"/unsub



You did. You said that. Now I don't believe you read your own comment. I will repost it... without the disrespect and obvious trolling parts, though.



Kanan said:


> High MHz is needed for highend gaming, and there Ryzen isn't capable of doing the job, yes, compared to 7700K or compared to ANY Intel CPU that can be overclocked



And I am done giving you the attention you want. Take your own advice on this one and read a review or 10.


----------



## YautjaLord (Apr 14, 2017)

An R7 1800X/1700X/1700 & R5 1600X/1500X/1500 + 2x8 GB DDR4-4000 RAM scaling review/test would be all good and valid. Gigabyte released an AGESA update for the GA-AX370-Gaming K7 (F3, a non-beta EFI update) - actually changed from F3b to just F3. The next logical step is for AMD to send a few more samples to motherboard vendors so they can test & validate DDR4 at 4000 MHz and above; there's a f*ckload of time until July/August for AMD to do that. My 2 cents/pennies/agoras/etc.....


----------

