# AMD Responds to Ryzen's Lower Than Expected 1080p Performance



## Raevenlord (Mar 3, 2017)

The folks at PC Perspective have shared a statement from AMD in response to their question as to why AMD's Ryzen processors show lower than expected performance at 1080p resolution (despite posting good high-resolution, high-detail frame rates). Essentially, AMD is reinforcing the need for developers to optimize their games for AMD's CPUs (claiming that games have so far only been properly tuned for Intel's architecture). AMD also points out that it has already sent about 300 developer kits so that content creators can get accustomed to Ryzen, and expects that number to grow to about a thousand developers over 2017. AMD expects gaming performance to only improve from its launch-day level. Read AMD's statement after the break.



 



AMD's John Taylor had this to say:

"As we presented at Ryzen Tech Day, we are supporting 300+ developer kits with game development studios to optimize current and future game releases for the all-new Ryzen CPU. We are on track for 1000+ developer systems in 2017. For example, Bethesda at GDC yesterday announced its strategic relationship with AMD to optimize for Ryzen CPUs, primarily through Vulkan low-level API optimizations, for a new generation of games, DLC and VR experiences.

Oxide Games also provided a public statement today on the significant performance uplift observed when optimizing for the 8-core, 16-thread Ryzen 7 CPU design - optimizations not yet reflected in Ashes of the Singularity benchmarking. Creative Assembly, developers of the Total War series, made a similar statement today related to upcoming Ryzen optimizations.

CPU benchmarking deficits to the competition in certain games at 1080p resolution can be attributed to the development and optimization of the game uniquely to Intel platforms - until now. Even without optimizations in place, Ryzen delivers high, smooth frame rates on all "CPU-bound" games, as well as overall smooth frame rates and great experiences in GPU-bound gaming and VR. With developers taking advantage of the Ryzen architecture and the extra cores and threads, we expect benchmarks to only get better, and enable Ryzen to excel at next-generation gaming experiences as well.

Game performance will be optimized for Ryzen and continue to improve from at-launch frame rate scores."

Two game developers also chimed in.

 Oxide Games, creators of the Nitrous game engine that powers Ashes of the Singularity:

"Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive. These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases." - Brad Wardell, CEO Stardock and Oxide

And Creative Assembly, the creators of the Total War Series and, more recently, Halo Wars 2:

"Creative Assembly is committed to reviewing and optimizing its games on the all-new Ryzen CPU. While current third-party testing doesn't reflect this yet, our joint optimization program with AMD means that we are looking at options to deliver performance optimization updates in the future to provide better performance on Ryzen CPUs moving forward. "



----------



## caleb (Mar 3, 2017)

I don't think anybody will recode already-published titles to utilize more cores, but let's see.


----------



## RejZoR (Mar 3, 2017)

When games depend on a small thread count, a higher clock is needed to make up the difference. The concept is not new, and all highly threaded CPUs have that problem; they always come with lower clocks. That's why AMD is pushing more cores so hard. If they can't win on the IPC and clock front, they certainly can with multiple threads/cores.

And let's be realistic: most people won't be running a GTX 1080 Ti or Titan X Pascal with these, meaning frame rate differences to Intel will be minimal. And even with those cards the difference is small, and most people won't notice anything different. I stopped chasing tiny frame rate differences a long time ago. If a CPU or graphics card is about right compared to the competition and you like it, just go for it.


----------



## londiste (Mar 3, 2017)

RejZoR said:


> And lets be realistic, most people won't be running GTX 1080Ti or Titan X Pascal with these,


with $350-550 CPUs that might be used for gaming? as opposed to the current crop of gaming computers with generally cheaper CPUs that *are* running high-end GPUs?

something else to note with the gaming results is that Ryzen tends to lose to lower-clocked Broadwell-E as well.


----------



## RejZoR (Mar 3, 2017)

Most of these systems will end up running GTX 1060/RX480 or GTX 1070 grade graphic cards.


----------



## londiste (Mar 3, 2017)

RejZoR said:


> Most of these systems will end up running GTX 1060/RX480 or GTX 1070 grade graphic cards.


any particular reason why?


----------



## RCoon (Mar 3, 2017)

I just LOVE it when people say things like "developers should optimize their games better" and "developers should learn to multithread their games better" as if it were stupendously easy and totally viable in every case. For big studios, they could probably do better, although most of them are working on old ass APIs and engines that are simply limited in scope, and switching either of these would seriously disrupt their workflow, which in turn would set them back on timelines and $$$. For Indie games, they don't need 16 threads. Most of them barely need two.


----------



## Camm (Mar 3, 2017)

Ryzen's occasionally tanking performance comes down to two things.

A: Intel-specific compilers being used. Game developers can fix this by recompiling and using Zen-specific code paths.
B: Scheduler behavior. There's an insane bottleneck going out over the fabric between each 4-core complex. The scheduler needs to ensure threads that are using similar data (L2+L3) stay on the same core complex.

And fixing whatever bloody memory errata is going on at the moment wouldn't hurt either.
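The scheduler point above can be sketched concretely: on Linux, a process (or a game's worker pool) can be pinned to a single core complex so threads sharing L3 data never cross the fabric. This is a minimal Python sketch under an assumed layout (physical cores numbered 0-7, SMT siblings 8-15; check the real mapping with `lscpu -e`); `ccx_cpus` and `pin_to_ccx` are hypothetical helper names, not a real API.

```python
import os

CORES_PER_CCX = 4  # Zen groups physical cores into 4-core complexes (CCXs)

def ccx_cpus(ccx_index, smt_offset=8, smt=True):
    """Logical CPU ids belonging to one CCX.

    Assumes physical cores are numbered 0-7 and their SMT siblings
    8-15 -- an assumption; verify the topology with `lscpu -e`."""
    physical = set(range(ccx_index * CORES_PER_CCX,
                         (ccx_index + 1) * CORES_PER_CCX))
    if smt:
        physical |= {cpu + smt_offset for cpu in physical}
    return physical

def pin_to_ccx(ccx_index):
    """Restrict the calling process (and its threads) to one CCX so
    threads sharing L2/L3 data never pay the cross-fabric penalty.
    Linux-only: uses the sched_setaffinity syscall."""
    os.sched_setaffinity(0, ccx_cpus(ccx_index))
```

The same effect is available from the shell via `taskset`; a proper OS-scheduler fix would do this automatically rather than per-process.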


----------



## PerfectWave (Mar 3, 2017)

It is not about core count or low frequency, because the 1800X can go up to 4 GHz. I guess it's really down to the Intel compiler, or maybe, like Fury, it dislikes low resolutions and gives its best at high resolutions.


----------



## Xzibit (Mar 3, 2017)

RCoon said:


> I just LOVE it when people say things like "developers should optimize their games better" and "developers should learn to multithread their games better" as if it were stupendously easy and totally viable in every case. For big studios, they could probably do better, although most of them are working on old ass APIs and engines that are simply limited in scope, and switching either of these would seriously disrupt their workflow, which in turn would set them back on timelines and $$$. For Indie games, they don't need 16 threads. Most of them barely need two.



The XB1 and PS4 use more cores, at a much lower frequency and IPC, so optimization has to be spread over the cores.

On the other hand you have PCs, which are dominated by 2-core and 4-core CPUs (48% and 44%). You end up with ports that are done poorly for the same reason you just said (they dump the porting process onto a third party). Most publishers do consoles first because those are "assured sales," and PC is an afterthought unless it's an MMO.


----------



## BiggieShady (Mar 3, 2017)

RCoon said:


> I just LOVE it when people say things like "developers should optimize their games better" and "developers should learn to multithread their games better" *as if it were stupendously easy* and totally viable in every case. For big studios, they could probably do better, although most of them are working on old ass APIs and engines that are simply limited in scope, and switching either of these would seriously disrupt their workflow, which in turn would set them back on timelines and $$$. For Indie games, they don't need 16 threads. Most of them barely need two.


Yeah, when months pass after project development it's extremely difficult to get back in ... however, in this case, if the game already uses thread pools properly, it will probably be something as simple as going back to the latest version in the source code repository and re-compiling with the latest version of the compiler using Zen-specific optimizations in the compiler options ... for example, the latest Gears of War should run better on Ryzen since it's heavily multithreaded and DX12.


----------



## techy1 (Mar 3, 2017)

AMD Ryzen has better gaming performance than Intel (yes, I said it - read a few more lines before responding...). Let's say you want i7-6900K-class CPU performance (obviously your primary need is rendering and crunching, not gaming), and then you want to game on the same system. Let's put that in numbers:

Intel:
$1000 CPU (not a dime less for that level of CPU performance)
$300 (cheapest X99 mobos)
$600 GTX 1080 (because most reviews used only this card for their Ryzen gaming testing)
Let's assume the rest (case, cooling, RAM, PSU, fans, etc.) is equal for both systems.
Total = $1900 (excluding the rest, which should be the same)

AMD:
$399 1700X
$200 mobo
$1300 for one Titan XP, or even two GTX 1080 Tis

Now we have a ~$1900 vs ~$1900 system with similar CPU performance (for rendering and crunching, which is the primary objective for CPUs like these)... and which system will push more frames now? The one with a single GTX 1080, or the one with two GTX 1080 Tis or a Titan XP?


----------



## miki (Mar 3, 2017)

Intel really got in everybody's mind. Five years of stagnation in the CPU market and overcharging for their CPUs, and still people defend Intel. Even the reviewers. Tell me what game maxes out AMD Ryzen CPUs. I bet they handle even the most demanding games with 30%-40% of their resources used, not more. What game is unplayable on an AMD CPU? What game is bottlenecked on AMD CPUs? The 7700K is a 4-core CPU with a higher core frequency that is better only at 1920x1080; some reviews even state that as the resolution goes up, the disadvantage melts away in favour of the AMD CPUs.
My point is that AMD Ryzen CPUs are superior in every sense compared to the Intel Core i7 7700K, and even to Intel's HEDT parts price/performance-wise. So what if some game runs on an AMD CPU at 100 fps and on Intel at 110 fps - does that even matter?
For 5 years I've been waiting for an affordable 8-core CPU to replace my aging 3930K, and I am buying it.
No amount of Intel fanboyism is going to stop me in that regard.


----------



## londiste (Mar 3, 2017)

the disadvantage does not melt away at higher resolutions due to anything to do with the cpu. higher resolutions simply bring the gpu limit quite a bit lower.
with the 1080 ti (and hopefully vega) out soon, titan xp levels of performance will be more accessible than ever. that performance level is the same at 1440p as gtx 1080 performance is at 1080p.


----------



## vziera (Mar 3, 2017)

Dumbass snobs got rekt by @techy1 comment


----------



## bug (Mar 3, 2017)

Let's see (in no particular order):

Intel's CPUs to this day still run some apps faster with HT disabled, so disabling SMT to improve performance is not totally unexpected.
Ryzen seems to be fine with everything but games. Singling out games as needing optimization is to be taken with a grain of salt, imho.
In the face of the Athlon XP onslaught, Intel also claimed that NetBurst was faster if you compiled apps specifically for it. And that was true, but we all know how it ended.

I'll take Ryzen at face value and welcome any further improvements. But I will not count on them.


----------



## NeoGalaxy (Mar 3, 2017)

I think that AMD should first release a driver for the X370 chipset, plus drivers for Ryzen. Then we'll see how it goes. Also, while playing The Division (DX12 renderer) on an 1800X, the frame rate does not really fluctuate; I play at 60 FPS though, at 1080p. Not sure how important this is, but playing the game seems more fluid.


----------



## unsmart (Mar 3, 2017)

I don't get the focus on the 1080p gaming numbers. From the reviews I read, this platform is buggy as all hell! What good is gaming if your rig won't even run? Makes me wonder about AMD's relationship with its partners.


----------



## Solidstate89 (Mar 3, 2017)

To be honest, they said the exact same thing about Bulldozer and that never really came to fruition.

I expect the most "optimizations" we'll see coming out of this are those outlier games that get shittier frame rates with SMT enabled. I expect that to get fixed by developers, but not a whole lot happening beyond that.


----------



## ZoneDymo (Mar 3, 2017)

RejZoR said:


> Most of these systems will end up running GTX 1060/RX480 or GTX 1070 grade graphic cards.



quite a statement to make, what do you base this on?


----------



## noname00 (Mar 3, 2017)

For me, Ryzen does exactly what I was expecting it to do; maybe it's a bit faster than I was expecting.
With this response they are basically blaming the developers for the "poor" gaming performance. And this is exactly what they did with the FX-8150 and FX-8350. "Performance will increase after developers optimize their applications for our CPUs" - this is just a poor excuse. For now, this launch looks similar to Bulldozer.
I am not disappointed by Ryzen, but I won't buy one soon either; there is no reason to replace my 6700K. It's just a good upgrade for AMD fanboys that still have their FX-83xx CPUs (if you are going to buy an R7 now, but a year ago did not buy an i5 or i7 because it was too expensive, you are clearly a fanboy), or for Intel owners that have not upgraded in more than 4 years (considering the same price segment).
The real problem for gamers is that the 4- and 6-core variants won't clock higher.


----------



## FYFI13 (Mar 3, 2017)

In my eyes this is a "work in progress" product, unfinished and rushed out. I can't remember any other "unlocked" CPU that could barely reach its advertised speeds, with non-existent overclocking. On top of that, all the memory issues.

Eventually they will fix all that, but this has already left a mark on Ryzen's name.

Gaming performance is alright (in most titles) if you forget the fact that this thing costs nearly 500 pounds. Even so, ~200-pound Intel CPUs are a much better choice right now for those who mainly game on their PCs.

PS. Reviewers, how many of you were asked by AMD to "bench Ryzen in GPU-bound games"?


----------



## petepete (Mar 3, 2017)

Was on the hype train but after these reviews I got the 7700k; A little underwhelming


----------



## dozenfury (Mar 3, 2017)

techy1 said:


> amd Ryzen has better gaming performance than intel (yes I said it - read few more lines before respond...)... lest say you wanna i7-6900K type of CPU performance (obviously your primary need is rendering and crunching, not gaming)... and then you want to game with same system... lets put that in numbers:
> intels:
> 1000$ CPU (not a dime less for such a CPU performance)
> 300$ (cheapest X99 mobos)
> ...



That's hardly an apples-to-apples comparison. Ryzen isn't that much faster in rendering and crunching to justify it. Plus, the above still doesn't fix Ryzen for single-threaded, CPU-bottlenecked games like WoW. A fairer comparison would be a 7700K + GTX 1080 versus an 1800X + 1080 Ti. Those would be within $50 of each other and more comparable in performance.

I also don't buy the "devs have to code for our cpu" excuse.  AMD might sponsor or give cash to a dev for a particular game to tune it for their game (like AMD did with Hitman), but cpu performance shouldn't ever be reliant on that.  IPC is what it is, there's no magic wand in code to fix it.  When people want to run an app or game they want to run it knowing roughly what performance they can expect from their cpu, not have to check a list or wonder if the particular app or game happened to be tuned for Ryzen.  If that were the case people including me would barely consider Ryzen for 1/2 the current retail price. 

AMD would be more credible, imo, if they admitted the gaming shortcomings, focused on the crunching/rendering lead, and took the angle that this is the first wave of Ryzen chips and that faster-IPC chips will come in future Ryzen releases. Although it's not a great message, at least it's less disingenuous than insinuating major performance jumps in CPU benchmarks from a patch. Unless a patch can raise the Ryzen OC ceiling from 3.9/4.0 to 5.0 GHz, I don't see it happening. BTW, what happened to "5.0 on air" in their marketing? It turned out to be 5.0 on LN2...


----------



## RejZoR (Mar 3, 2017)

londiste said:


> any particular reason why?



Because that's what most gamers have. You see people here with GTX 1080s en masse, but in reality most gamers have mid-to-high tier cards. It's why AMD focused only on the RX 480 last round, and it worked out great for them. Because that's what the majority buys.



RCoon said:


> I just LOVE it when people say things like "developers should optimize their games better" and "developers should learn to multithread their games better" as if it were stupendously easy and totally viable in every case. For big studios, they could probably do better, although most of them are working on old ass APIs and engines that are simply limited in scope, and switching either of these would seriously disrupt their workflow, which in turn would set them back on timelines and $$$. For Indie games, they don't need 16 threads. Most of them barely need two.



Sure, you can't always use X threads. But the fact is, the majority of systems these days run crappy quad cores, thanks to Intel insisting on that design. With AMD pushing 8 cores and 16 threads so hard, things might, and likely will, change.

Also an interesting thing: yesterday I started playing "Infested Planet," an indie 2D strategy game with massive swarms of aliens, and it can utilize up to 4 cores. I think it actually needs that, because it pushes my 5820K quite hard when massive swarms of aliens flow through map obstacles. It's almost like watching liquid move around; it's quite nice actually. The reason they can't really use more is that not many machines could utilize it properly.


ZoneDymo said:


> quite a statement to make, what do you base this on?



Haven't you learned anything from the RX 480? The whole reason AMD focused on it is that the MAJORITY of people buy that tier. You see tons of us with high-end cards here, but out there among normies, they mostly buy big-ass CPUs with many cores and then buy the graphics card with the most VRAM. Even if it's a model with 16 shader units that barely runs Solitaire at 60 fps...


----------



## bug (Mar 3, 2017)

petepete said:


> Was on the hype train but after these reviews I got the 7700k; A little underwhelming


It can be underwhelming if you bought into the hype train. Otherwise, it's a good chip. It's just not a win across the board. Right now either Intel or AMD can be the better choice; it depends on your typical workload.
Imho, even 8 threads is more than a regular user needs for a home system (with exceptions, of course), so the interesting part will only come with Ryzen 3. If those can clock at 4 GHz and be had for ~$250 (or less), things will get really interesting.


----------



## bug (Mar 3, 2017)

RejZoR said:


> Sure, you can't always use X threads. But the fact is, when majority of systems these days run crappy quad cores, thanks to Intel insisting on such design. With AMD pushing 8 cores and 16 threads so hard, things might and also will change.



Hehe, you must have forgotten the time when we all had quad cores and were praying games would use a second core.
Availability of cores is not the issue here; multithreading is actually hard. And by hard, I mean expensive.


----------



## yogurt_21 (Mar 3, 2017)

You know I feel like a "Don't worry we're working on it" would have sufficed. But they as a company have to stop blaming everyone else when this happens. 

Instead of blaming devs they could simply say that "All of these titles were developed pre-Ryzen and as such won't be fully optimized for our brand spanking new architecture. We are working with major studios to alleviate the issue and hope to see better performance going forward as new games come out that have been developed with Ryzen in mind and new versions of our chips ship that are more gaming oriented."

That's all they had to say.  

Instead they accused game devs of developing and optimizing ONLY for Intel, despite the fact that AMD has had CPUs out and they are used for gaming. Without any optimization, the massive performance deficit between Bulldozer/Vishera-based chips and the Intel offerings would have made gaming improbable on any AMD CPU.

Obviously the game devs had to be doing some optimization for AMD. Throwing them under the bus like that and then saying "but we're counting on them to fix it" isn't a great idea. "Let's piss off the people we need to fix the issue" is a great way to ensure Ryzen's 1080p performance remains lackluster.

Had they gone with my quote above, it would have gone over better, and it's more honest.


----------



## RejZoR (Mar 3, 2017)

It's not. But it is if implementing it means you'll only be covering 10% of the units out there. Having been stuck on quad cores for so long, developers got kinda lazy in their comfort zone of designing with 4 threads in mind. No one even bothered to use more threads to crunch physics and make CPU physics more immersive, because up till now those few of us with 6 and 8 cores just weren't worth the effort. But now, with 8-core/16-thread CPUs going for only $499, it will change. Not overnight, but certainly at an accelerated pace.


----------



## bug (Mar 3, 2017)

RejZoR said:


> It's not. But it is, if implementing it means you'll only be covering 10% of the units out there. The fact we've been stuck on quad cores for so long, developers got kinda lazy in their comfort zone of 4 threads in mind. No one even bothered to use more threads to crunch physics and make CPU physics more immersive. Because up till now, those few of us with 6 and 8 cores just weren't worth the effort. But now with 8 core 16 threads CPU's going for only $499, it will change. Not over night, but certainly at accelerated pace.


Have you ever developed/delivered a piece of multithreaded code? It's hard to write, hard to test properly and even harder to maintain. The number of threads hardly matters.
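A minimal illustration of the point: even the simplest piece of shared state needs explicit synchronization, or updates are silently lost, and the bug only shows up under the right scheduling. A Python sketch (`run_counter` is a hypothetical helper for illustration, not from any library):

```python
import threading

def run_counter(n_threads=8, increments=10_000, locked=True):
    """Increment a shared counter from many threads.

    With locked=False, the read-modify-write of `total += 1` can
    interleave between threads and silently lose updates -- the kind
    of bug that passes most test runs and only surfaces under load."""
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(increments):
            if locked:
                with lock:  # serialize the read-modify-write
                    total += 1
            else:
                total += 1  # racy: load, add, store are separate steps

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

With the lock, the result is always `n_threads * increments`; without it, correctness depends on scheduling luck, which is exactly why multithreaded code is hard to test.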


----------



## londiste (Mar 3, 2017)

RejZoR said:


> Because that's what most gamers have. You see here that people have GTX 1080 en mass, but reality is, most gamers have mid-high tier cards. It's why AMD focused on RX480 only the last round and it worked out great for them. Because that's what majority buys.


agreed, but at the same time, the minority who has 1080s is the same crowd that buys 300+€ CPUs.


----------



## newtekie1 (Mar 3, 2017)

RejZoR said:


> And lets be realistic, most people won't be running GTX 1080Ti or Titan X Pascal with these, meaning framerate differences to Intel will be minimal.



I think it will be a mix. I largely agree with you though.  Since the 1060/480 segment is the most popular, most will be pairing these processors with that level of card, and probably a lot of 1070s too.

_However_, I'll say that whatever you pair it with, the Ryzen processors aren't that bad at gaming, even with a GTX 1080 Ti or a Titan XP. And I'm sure a good number of people will be buying GTX 1080 Ti cards and pairing them with Ryzen processors, because nobody buying those cards is going to be playing at 1080p. And that is where people are going wrong: they are focusing too much on the 1080p benchmarks. The 1080p benchmarks have a place, and they are done for a reason, I know that: reviewers use the beefiest card(s) possible at basically the lowest resolution people play at, to limit the GPU bottleneck as much as possible and move the bottleneck to the CPU. The problem is that it essentially becomes a synthetic benchmark to me, because it's a scenario that will almost never happen.



RejZoR said:


> It's not. But it is, if implementing it means you'll only be covering 10% of the units out there. The fact we've been stuck on quad cores for so long, developers got kinda lazy in their comfort zone of 4 threads in mind. No one even bothered to use more threads to crunch physics and make CPU physics more immersive. Because up till now, those few of us with 6 and 8 cores just weren't worth the effort. But now with 8 core 16 threads CPU's going for only $499, it will change. Not over night, but certainly at accelerated pace.



Actually, $329. And it isn't that I think they got lazy. I think it is the same reasoning behind why the Ryzen 1080p benchmarks aren't that representative: developers program for the majority, and the majority have 4-core or fewer processors. So that is what they program for.

Ryzen is going to force Intel to lower prices on its 6- and 8-core chips. So it isn't just that the AMD chips are reasonably priced now; it will make Intel's reasonable too, and overall a lot more gamers are going to be buying them.


----------



## Slizzo (Mar 3, 2017)

techy1 said:


> amd Ryzen has better gaming performance than intel (yes I said it - read few more lines before respond...)... lest say you wanna i7-6900K type of CPU performance (obviously your primary need is rendering and crunching, not gaming)... and then you want to game with same system... lets put that in numbers:
> intels:
> 1000$ CPU (not a dime less for such a CPU performance)
> 300$ (cheapest X99 mobos)
> ...





vziera said:


> Dumbass snobs got rekt by @techy1 comment



Here's my build from November of last year:

Intel: 
6800K (yeah, I know, not what you posted exactly) - $380
MSI X99A Raider - $159
GTX 1080 (I already had this from launch day, with my previous setup.) - $650

Total = $1189

Before chucking out numbers, you might want to check your facts; on the motherboard alone you're just crazy pants. Now, I would have liked to wait for the Ryzen launch, but after seeing the day-one reviews, I'm OK with having had my machine for 6-9 months before the bugs and issues get ironed out on the Ryzen platform.


----------



## Dave65 (Mar 3, 2017)

Dunno why some are bashing AMD, I think they "FINALLY" got something to compete with Intel and at a decent price..


----------



## EarthDog (Mar 3, 2017)

techy1 said:


> amd Ryzen has better gaming performance than intel (yes I said it - read few more lines before respond...)... lest say you wanna i7-6900K type of CPU performance (obviously your primary need is rendering and crunching, not gaming)... and then you want to game with same system... lets put that in numbers:
> intels:
> 1000$ CPU (not a dime less for such a CPU performance)
> 300$ (cheapest X99 mobos)
> ...


What you actually meant to say is that, price to performance, AMD is better... indeed. Because if you strapped the same hardware to the AMD CPU (as best you could, read: GPU and memory), it performs worse in some titles with the same GPU compared to Intel, as we have seen in many reviews.

Remember, boost is only 2c/4t.


----------



## unsmart (Mar 3, 2017)

Dave65 said:


> Dunno why some are bashing AMD, I think they "FINALLY" got something to compete with Intel and at a decent price..



Because they pissed on their own parade with a not-ready-for-prime-time platform. Really, sending reviewers systems that hardly work is just screwing themselves.


----------



## GoldenX (Mar 3, 2017)

You can't call gaming only results "hardly working". That's like saying a Xeon is a bad CPU for being bad at gaming.


----------



## Foobario (Mar 3, 2017)

petepete said:


> Was on the hype train but after these reviews I got the 7700k; A little underwhelming



If your Intel option was the 7700K, it seems you weren't in the market for any of the AMD CPUs released thus far. 8-core/16-thread CPUs are targeted at individuals who work in a software environment that has embraced multi-core performance.

No matter what the internet hype was in regard to gaming, 8-core CPUs are not targeted at gaming in the current single-core-bound environment.

Having said that, it is obvious that AMD has some work to do on compilers to bring some games up to the natural performance levels one would expect from chips with similar IPC to Intel's eight-core parts.


----------



## Mescalamba (Mar 3, 2017)

The only problem is that nobody will optimize current games for AMD.

Apart from the tiny fact that taking a game from dual-core to quad-, hexa-, or octa-friendly isn't "just like that." In many cases (FPS games mostly) it's near impossible. Never mind that even if it were possible, you don't have that much to occupy those extra cores with. Sure, you can probably build an FPS that uses six cores; the only problem will be that one core goes to 100% load and the rest sit at a whole freaking 5%.

The low-level approach (Vulkan, or Mantle, or whatever it's called now) won't help; the power isn't there. By that I mean single-core computing power. And if it's not there, you won't get more.
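The one-core-at-100% intuition above is Amdahl's law: the serial fraction of the work caps the speedup no matter how many cores you add. A quick sketch (the helper name `amdahl_speedup` is just for illustration):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    where p is the fraction of the work that can run in parallel.
    The serial fraction (1 - p) limits the benefit of extra cores."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)
```

For example, a game loop that is only 50% parallelizable gains at most 2x even with infinite cores; on 8 cores, `amdahl_speedup(0.5, 8)` is about 1.78, which matches the "six cores, one pegged" picture above.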

But IMHO these CPUs are great for the price, and do we all really need top CPUs for gaming? Not really. Lately I could actually use 6 cores, so I'm glad they made them.

Ryzen today is a "pretty good CPU for a pretty good price." If you want the best, sure, buy Intel. But the thing is, very few need the best. There are Ferraris, Bugattis, Koenigseggs... and the majority of people drive something from Ford, VW, or some Asian brand.


----------



## TheGuruStud (Mar 3, 2017)

Most reviewers are getting it wrong. I saw several stating that the gaming discrepancies are due to single-thread IPC... that's obviously false; Ryzen would at least be keeping up with Haswell, but it's not in a lot of games. They also compared it directly to Kaby Lake without a mention of the several-hundred-MHz clock difference. I guess they don't even know what IPC is.

Clearly, there are BIOS/CPU/scheduling issues, and/or the games are only meant to run well on Intel. Otherwise, they would be within a reasonable margin of the 6900K. That is not happening except in some games in some reviews. The reviews are all over the place.

And why did I see big losses at 1080p in some games, but noticeable leads at 1440p? That should be impossible. Shit is jank and needs to be sorted. Reviewers and AMD need to get their shit straight. This could be worse than all the stupid Bulldozer reviews that were inconsistent.


----------



## Lionheart (Mar 3, 2017)

RCoon said:


> I just LOVE it when people say things like "developers should optimize their games better" and "developers should learn to multithread their games better" as if it were stupendously easy and totally viable in every case. For big studios, they could probably do better, although most of them are working on old ass APIs and engines that are simply limited in scope, and switching either of these would seriously disrupt their workflow, which in turn would set them back on timelines and $$$. For Indie games, they don't need 16 threads. Most of them barely need two.



Alright then, let's be stuck on quad cores for another 10 years.


----------



## noname00 (Mar 3, 2017)

I am glad that on Monday a friend will receive his 1800X, and we will be able to compare it to my 6700K. I want to disable 4 cores on the Ryzen CPU and clock both of them to the same frequency, and then we will do some tests. I am really curious about the real IPC of both machines, and testing it myself is the best way to find out, even if I won't be able to disable half of the L3 cache on Ryzen.


----------



## TheGuruStud (Mar 3, 2017)

noname00 said:


> I'm glad a friend of mine will receive his 1800X on Monday, and we'll be able to compare it to my 6700K. I want to disable 4 cores on the Ryzen CPU and clock both of them to the same frequency, and then we will do some tests. I am really curious about the real IPC of both machines, and testing it myself is the best way to do this, even if I won't be able to disable half of the L3 cache on Ryzen.



It is disabled.


----------



## jigar2speed (Mar 3, 2017)

Dave65 said:


> Dunno why some are bashing AMD, I think they "FINALLY" got something to compete with Intel and at a decent price..



Because losing by 10 FPS when already at 140 FPS feels bad to them... They keep forgetting the price and the performance, where AMD is giving a freaking THOUSAND DOLLAR CPU a run for its money.


----------



## XiGMAKiD (Mar 3, 2017)

Dave65 said:


> Dunno why some are bashing AMD, I think they "FINALLY" got something to compete with Intel and at a decent price..



Because they're aboard the hype train expecting AMD to be flawless in every single way, except that's not what happened here, what with AMD's funky RAM implementation, barely-there overclocking headroom, just-like-the-old-days temp sensor, etc.

Or maybe they just love conflict, no matter what happens.


----------



## GhostRyder (Mar 3, 2017)

TheGuruStud said:


> Most reviewers are getting it wrong. I saw several stating that the gaming discrepancies are due to single-thread IPC... that's obviously false. Ryzen would at least be keeping up with Haswell, but it's not in a lot of games. They also compared it directly to Kaby Lake without a mention of the several-hundred-MHz clock difference. I guess they don't even know what IPC is.
> 
> Clearly, there are BIOS/CPU/scheduling issues and/or the games are only meant to run well on Intel. Otherwise, Ryzen would be within a reasonable margin of the 6900K. That is not happening except in some games in some reviews. The reviews are all over the place.
> 
> And why did I see big losses at 1080p in some games, but noticeable leads at 1440p? That should be impossible. Shit is jank and needs to be sorted. Reviewers and AMD need to get their shit straight. This could be worse than all the stupid Bulldozer reviews that were inconsistent.


Well said. That does seem to be the issue, because otherwise a lot of the programs and benchmarks they ran would be reflecting the same thing as the games. It's just going to take time for the correct code paths to be adjusted for by AMD and the developers. Intel has been the main game for quite some time and this is a brand-new architecture; everything will become more optimized in time.


----------



## blued (Mar 3, 2017)

I am not going to make any final judgments on which CPU is best for me at this point. I have put off any CPU upgrade decisions for another 2-3 months; let's see how things stand then. There are too many conflicting data points right now. I will let the dust settle and check on the CPU scene again after (hopefully) some clarity in 2-3 months.


----------



## rruff (Mar 3, 2017)

techy1 said:


> AMD Ryzen has better gaming performance than Intel (yes, I said it - read a few more lines before responding...)



Only problem is you are describing a very niche market that is also $$$-limited.

AMD needs to (eventually) sell 4- and 6-core chips to the average Joe and do so at a profit. That will be tough if the gaming performance leaves much to Intel.


----------



## TheoneandonlyMrK (Mar 3, 2017)

newtekie1 said:


> I think it will be a mix. I largely agree with you though.  Since the 1060/480 segment is the most popular, most will be pairing these processors with that level of card, and probably a lot of 1070s too.
> 
> _However_, I'll say that whatever you pair it with the Ryzen processors aren't that bad at gaming, even if you do pair it with a GTX1080Ti or a Titan XP.  And I'm sure there will be a good number of people buying GTX1080Ti cards and pairing them with Ryzen processors.  The reason being that no one buying those cards is going to be playing at 1080p.  And that is where people are going wrong.  They are focusing too much on the 1080p benchmarks.  The 1080p benchmarks have a place, they are done for a reason, I know that.  They use the beefiest card/s possible, at basically the lowest resolution people are playing at, to limit the GPU bottleneck as much as possible and try to move the bottleneck to the CPU.  The problem is that essentially becomes a synthetic benchmark to me.  Because that is a scenario that will almost never happen.
> 
> ...


I'd agree fully. I'm looking at all the reviews while owning 2x 480s and a 4K monitor. Hardly any reviewers did resolution performance-scaling tests, and fewer still went to 4K. Enthusiasts by and large will be the first to buy Ryzen, and they won't be pairing it with a Vega or a 1080 Ti attached to a 1080p monitor.

And as for blaming devs, AMD is stating facts: theirs is a NEW arch that needs some software optimisation to reach its full performance potential. And thank god, because the same old thing wasn't cutting it for me; I prefer my innovation to be both different and worthwhile, not just another 100 MHz bump. So, devs and engine devs: while you're at that DX12 render path to take advantage of moar cores, do us all a favour - look into Ryzen optimisation, and possibly don't use Intel's compiler, as I've no doubt its latest version will insist everything has to be 256-bit AES encrypted or some such nonsense.


----------



## Batou1986 (Mar 3, 2017)




----------



## TheoneandonlyMrK (Mar 3, 2017)

So Ryzen has also got its own band of meme trolls; how very original. Yawn, out.

@Batou1986, you get what you pay for, Intel or AMD. If you buy a low-end cheap motherboard you get low-end performance and will end up a moaner.

I overclocked my friend's 6320 on your board two nights ago because he finally got an Evo 212 like I told him.
Nightmare. His throttled all the time at stock settings, and I was forced into BCLK clocking it by the crapness of his board. I've had his chip easily do 4.5 in my rig, but not in his - 4.3 max.
Point being, your reference and perception have been affected by your purchase choices, and you should have chosen better imho.


----------



## CAPSLOCKSTUCK (Mar 3, 2017)

so they should have called it Simultaneous HyperThreading after all

SHT.....


----------



## DRDNA (Mar 3, 2017)

Still all in all this is SHITTY NEWS!


----------



## TheoneandonlyMrK (Mar 3, 2017)

I haven't gamed at peasant settings (1080p) for years, so it's not really for my needs, and at 4K Ryzen equals Intel's chops.


----------



## Solidstate89 (Mar 3, 2017)

theoneandonlymrk said:


> I haven't gamed at peasant settings (1080p) for years, so it's not really for my needs, and at 4K Ryzen equals Intel's chops.


Yes, but just keep in mind that's because you hit a GPU wall, not a CPU one. At 1080p you can witness a CPU bottleneck; at 1440p and beyond, you're witnessing a GPU bottleneck. Which is why even my 4770K can get similar if not the same frames as a 7700K or even a 6900K.
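That bottleneck logic is easy to sanity-check with a toy model: the frame rate you see is the minimum of what the CPU can prepare and what the GPU can draw. All numbers below are invented purely for illustration.

```python
def delivered_fps(cpu_fps, gpu_fps):
    # The slower of the two pipeline stages caps the delivered frame rate.
    return min(cpu_fps, gpu_fps)

cpu_fast, cpu_slow = 180, 150   # frames/s each CPU can prepare (invented)
gpu_caps = {"1080p": 160, "1440p": 100, "4K": 55}   # frames/s the GPU can draw (invented)

for res, g in gpu_caps.items():
    print(res, delivered_fps(cpu_fast, g), delivered_fps(cpu_slow, g))
# 1080p: 160 vs 150 - the CPUs differ, because the GPU still has headroom.
# 1440p and 4K: both CPUs read identically (100 and 55) - a pure GPU wall.
```

That is why 1080p testing with a top-end card isolates the CPU, and why the gap vanishes at higher resolutions.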


----------



## TheoneandonlyMrK (Mar 3, 2017)

Solidstate89 said:


> Yes, but just keep in mind that's because you hit a GPU wall, not a CPU one. At 1080p you can witness a CPU bottleneck; at 1440p and beyond, you're witnessing a GPU bottleneck. Which is why even my 4770K can get similar if not the same frames as a 7700K or even a 6900K.


I know - I agreed with a similar statement made by newtekie1 in another thread, and I do here too, but as he said, that makes it fully synthetic and not representative of in-use gaming.
More reviewers should have shown 1440p and 4K results, as I believe those are indicative of what these chips will actually game at, and AMD isn't apologizing for Ryzen's higher-res performance because it doesn't need to.
I'm seeing where those Intel calls were made, imho.


----------



## bug (Mar 3, 2017)

theoneandonlymrk said:


> I haven't gamed at peasant settings (1080p) for years, so it's not really for my needs, and at 4K Ryzen equals Intel's chops.





theoneandonlymrk said:


> I know - I agreed with a similar statement made by newtekie1 in another thread, and I do here too, but as he said, that makes it fully synthetic and not representative of in-use gaming.
> More reviewers should have shown 1440p and 4K results, as I believe those are indicative of what these chips will actually game at, and AMD isn't apologizing for Ryzen's higher-res performance because it doesn't need to.
> I'm seeing where those Intel calls were made, imho.



Can you read what the "Primary Display Resolution" line says? http://store.steampowered.com/hwsurvey/


----------



## TheoneandonlyMrK (Mar 3, 2017)

bug said:


> Can you read what the "Primary Display Resolution" line says? http://store.steampowered.com/hwsurvey/


It's exactly what it is: the majority.

Now read again what I said: what these will game at.
Is the majority represented by enthusiasts? No.

Who will buy Ryzen? That's right, enthusiasts.
In fact, look to Intel for those stats; to this day they saturate the market with lovely dual-core joy.


----------



## Grings (Mar 3, 2017)

bug said:


> Can you read what the "Primary Display Resolution" line says? http://store.steampowered.com/hwsurvey/



And the next most popular on there is 1366x768.

Steam has a hell of a lot of casual users, most likely with a laptop or Dell desktop. Can you read what the VRAM line says? Should we all ditch our RAM and get 1 GB cards?


----------



## WaroDaBeast (Mar 3, 2017)

bug said:


> Have you ever developed/delivered a piece of multithreaded code? It's hard to write, hard to test properly and even harder to maintain. The number of threads hardly matters.


I don't think so. He still has nightmares whenever he hears "CLI."


----------



## ADHDGAMING (Mar 3, 2017)

I'm not spending $400+ on a CPU just to pair it with a 1080p monitor, and if I did, I'd most likely upscale using AMD's and Nvidia's upscaling tools.


----------



## ADHDGAMING (Mar 3, 2017)

theoneandonlymrk said:


> I know - I agreed with a similar statement made by newtekie1 in another thread, and I do here too, but as he said, that makes it fully synthetic and not representative of in-use gaming.
> More reviewers should have shown 1440p and 4K results, as I believe those are indicative of what these chips will actually game at, and AMD isn't apologizing for Ryzen's higher-res performance because it doesn't need to.
> I'm seeing where those Intel calls were made, imho.




Ryzen's reviews will make it extremely easy to spot shills. If all they show you is 1080p scores, they're shills; it's that simple, really. If you want 1080p you can stick with an 8350/70. It's stupid to pair this CPU with a 1080p monitor; it's overkill, and you're not seeing any benefit from spending that kind of money to push that low a res.


----------



## rruff (Mar 3, 2017)

ADHDGAMING said:


> I'm not spending $400+ on a CPU just to pair it with a 1080p monitor



So true. But how does this look for the 4 and 6 core chips that will be more mainstream? If the gaming hit remains they may be a tough sell. I think the gaming will improve though. This isn't Bulldozer.


----------



## TheoneandonlyMrK (Mar 3, 2017)

rruff said:


> So true. But how does this look for the 4 and 6 core chips that will be more mainstream? If the gaming hit remains they may be a tough sell. I think the gaming will improve though. This isn't Bulldozer.


Could be; they will be priced aggressively, though.


----------



## Camm (Mar 3, 2017)

rruff said:


> So true. But how does this look for the 4 and 6 core chips that will be more mainstream? If the gaming hit remains they may be a tough sell. I think the gaming will improve though. This isn't Bulldozer.



The Core Complex fabric bandwidth hit won't be a problem on a quad. I don't know what the CCX layout looks like on a 6-core. It still has the memory errata problem, though.


----------



## noname00 (Mar 3, 2017)

theoneandonlymrk said:


> I know - I agreed with a similar statement made by newtekie1 in another thread, and I do here too, but as he said, that makes it fully synthetic and not representative of in-use gaming.
> More reviewers should have shown 1440p and 4K results, as I believe those are indicative of what these chips will actually game at, and AMD isn't apologizing for Ryzen's higher-res performance because it doesn't need to.
> I'm seeing where those Intel calls were made, imho.



The thing is, you are comparing CPUs, not real-world systems. If you use your PC mostly for gaming, why would you buy an R7 1800X when a 7600K would give you the same framerate on a 4K display with a Titan XP? If you have a GTX 1070 and game at 4K, you could even buy an i3 or an FX-8300 and overclock the s**t out of it.
It's just stupid to say "for gaming I will buy a $500 AMD CPU and not a $350 7700K or a $250 7600K because at 4K I get the same framerate, even if at lower resolutions the Intel chips are faster and cheaper". You can buy whatever you want, but don't try to justify your purchase with invalid reasons.

Maybe AMD will patch the AM4 platform, as I read in many places it's buggy, and gaming performance will get better. Until then, 1151 is the winner for gaming.

You are the same as my friend who just bought an R7 1800X (replacing his FX-8300 @ 4.5 GHz) - he only bought it because it's a new AMD CPU and he hates Intel. That is the only real reason people are buying Ryzen over Kaby Lake for gaming.


----------



## Camm (Mar 3, 2017)

noname00 said:


> The thing is, you are comparing CPUs, not real-world systems. If you use your PC mostly for gaming, why would you buy an R7 1800X when a 7600K would give you the same framerate on a 4K display with a Titan XP? If you have a GTX 1070 and game at 4K, you could even buy an i3 or an FX-8300 and overclock the s**t out of it.
> It's just stupid to say "for gaming I will buy a $500 AMD CPU and not a $350 7700K or a $250 7600K because at 4K I get the same framerate, even if at lower resolutions the Intel chips are faster and cheaper". You can buy whatever you want, but don't try to justify your purchase with invalid reasons.
> 
> Maybe AMD will patch the AM4 platform, as I read in many places it's buggy, and gaming performance will get better. Until then, 1151 is the winner for gaming.
> ...



The thing with Ryzen, however, is that it gives you options whereas the 7700K doesn't. Want to stream and play a game at the same time (without having your GPU do it and losing frames that way)? No problem. Want to render that video? It'll be twice as quick.

I don't think anyone's arguing that the 7700K isn't the better gaming CPU right now, but Ryzen's gaming performance is acceptable enough that the benefit it has everywhere else is worth it.


----------



## qubit (Mar 3, 2017)

I seem to remember AMD crying about "optimisation" with the Bulldozer fiasco, where in the end it wasn't, it was just poor design. It might be different this time round since Ryzen doesn't use that siamesed disaster, but I remain sceptical until I see these optimisations make up the difference in IPC like they claim. I reckon it will take Ryzen version 2 to fix this and I do have some confidence that AMD will achieve this.

In the meantime, if you want the best framerates in games, stick to a 7700K, an overclocked one in particular. If nothing else, all those older games you love to play will never be optimized for Ryzen.


----------



## m0nt3 (Mar 3, 2017)

Solidstate89 said:


> To be honest, they said the exact same thing about Bulldozer and that never really came to fruition.



Except in this case, some game devs, like Oxide, have come out and made statements in this regard as well. Bethesda and Sega are working with it too. This may have more to do with Vulkan/DX12, but we shall see. Game performance is still very acceptable, especially compared to the FX series. That is not justification for its current performance, but really, who plays 1080p at low settings? Or, like the HardOCP review, 640x480? I do understand trying to show CPU performance in games, but with the future moving towards Vulkan/DX12 and multithreading, these trends are likely to change. It's not like games are unplayable, as they can be with the FX series due to low minimums. Maybe I am just getting old and not caring as much about high numbers as about smooth gameplay - I don't watch FPS numbers when I'm fragging demons in Doom... unless it causes me to die.


----------



## Fluffmeister (Mar 3, 2017)

m0nt3 said:


> Except in this case, some game devs, like oxide have come out and made a statement in this regard as well.



Well Oxide have been in bed with AMD for years now, they were the original Mantle pimps after all.


----------



## m0nt3 (Mar 3, 2017)

Fluffmeister said:


> Well Oxide have been in bed with AMD for years now, they were the original Mantle pimps after all.


It is also a game where Ryzen really struggled. Their adoption of Mantle or working with AMD does not invalidate their claim that CPU optimizations can help Ryzen. I am sure they work with Nvidia and Intel as well; after all, those two have the majority in both markets.


----------



## Fluffmeister (Mar 3, 2017)

m0nt3 said:


> it is also a game where Ryzen really struggled. Their adoption of mantle or working with AMD does not invalidate their claim that CPU optimizations can help Ryzen. I am sure they work with nvidia and intel as well. After all they have the majority in both markets.



Of course, and I for one certainly appreciate the irony. I had just hoped, like many others, that it was more of a generic issue that could be addressed via platform updates - through the OS, BIOS updates, and the like.

If it means they have to work closer with every dev out there from now on, then it's gonna take time to gain traction. A strategic partnership with Bethesda is one thing, but I'm not sure I have the patience for the long game.

Thinking about it, I got a free copy of Ashes - must remember to try that... "game".


----------



## m0nt3 (Mar 3, 2017)

Fluffmeister said:


> Of course, and I for one certainly appreciate the irony. I had just hoped, like many others, that it was more of a generic issue that could be addressed via platform updates - through the OS, BIOS updates, and the like.
> 
> If it means they have to work closer with every dev out there from now on, then it's gonna take time to gain traction. A strategic partnership with Bethesda is one thing, but I'm not sure I have the patience for the long game.


AMD has to play the long-term game. It is a completely new uArch with a different implementation of SMT than Intel uses, so it will take work at the game-development level. To me, it also makes the most sense, because there is pretty good parity in single-threaded and multi-threaded performance in other applications - with the exception of outliers that prefer a different uArch, which there have always been.


----------



## Xzibit (Mar 3, 2017)

Fluffmeister said:


> If it means they have to work closer with every dev out there from now on, then it's gonna take time to gain traction. A strategic partnership with Bethesda is one thing, but *I'm not sure I have the patience for the long game*.



Your processor says otherwise. If you're still on spinners, I know you're BS'ing.

I still have a working i7 950 somewhere


----------



## Fluffmeister (Mar 3, 2017)

m0nt3 said:


> AMD has to play the long-term game. It is a completely new uArch with a different implementation of SMT than Intel uses, so it will take work at the game-development level. To me, it also makes the most sense, because there is pretty good parity in single-threaded and multi-threaded performance in other applications - with the exception of outliers that prefer a different uArch, which there have always been.



I agree, it's the first time they have been relevant in years, which is precisely why I'm seriously considering a 1700, but then the 7700K is frankly better for my needs as it stands.


Xzibit said:


> Your processor says otherwise. If you're still on spinners, I know you're BS'ing.
> 
> I still have a working i7 950 somewhere



Exactly, I've waited this long. I'd hate to think I've made the wrong decision buying a CPU that is ultimately worse for gaming in the long term.

Sorry.


----------



## noname00 (Mar 3, 2017)

m0nt3 said:


> ... Game performance is still very acceptable, especially comapred to FX series.  ...



You did not just compare a brand-spanking-new CPU with a 6-year-old CPU and basically say "at least it beats that"... I refuse to accept it.
I know Intel Core is also an old architecture, but at least it's still faster in gaming, and they just need to pack in more cores to match or exceed Ryzen's multithreaded performance and lower their prices.

The biggest problem AMD has is that gamers won't buy 8-core CPUs, as they don't provide better gaming performance and are more expensive; game developers won't invest too much in new engines because not enough users have 6/8/10-core CPUs; and moving to 2K and 4K, the GPU is the limiting factor anyway.


----------



## Xzibit (Mar 3, 2017)

Fluffmeister said:


> I agree, it's the first time they have been relevant in years, which is precisely why I'm seriously considering a 1700, but then the 7700K is frankly better for my needs as it stands.
> 
> 
> Exactly, I've waited this long. I'd hate to think I've made the wrong decision buying a CPU that is ultimately worse for gaming in the long term.
> ...



I really don't understand the leap, though.

You're a smart dude, I like to think. Why even look at the 1700 vs the 7700K if the primary reason is gaming? That mindset I don't understand.

The IPC difference is 6-8% give or take at the same clock. That's going to continue down the line: 7, 5, 3. I personally don't think they have room for higher clocks - it's GloFo. That's why they are priced the way they are: bang for the buck.


----------



## TheoneandonlyMrK (Mar 4, 2017)

noname00 said:


> The thing is, you are comparing CPUs, not real-world systems. If you use your PC mostly for gaming, why would you buy an R7 1800X when a 7600K would give you the same framerate on a 4K display with a Titan XP? If you have a GTX 1070 and game at 4K, you could even buy an i3 or an FX-8300 and overclock the s**t out of it.
> It's just stupid to say "for gaming I will buy a $500 AMD CPU and not a $350 7700K or a $250 7600K because at 4K I get the same framerate, even if at lower resolutions the Intel chips are faster and cheaper". You can buy whatever you want, but don't try to justify your purchase with invalid reasons.
> 
> Maybe AMD will patch the AM4 platform, as I read in many places it's buggy, and gaming performance will get better. Until then, 1151 is the winner for gaming.
> ...


I haven't bought one; I'm realistic about my use and the use of this, and I didn't say your precious Intel wasn't best for games, so stick the rest of your opinion where it belongs.
You don't know me or my usage, so back up.
And check the badges for an example.

I've been having the same i5 argument for a while and it doesn't change: for old DX11-era games, go i5; for mostly new-era games, don't get an i3.

Have you seen GTA V running on CrossFire 480s on many i3 or i5 PCs? I expect my present PC would beat them both even clocked at 5 GHz, running 4K as I am, ultra settings.

So there ya go: my use cases this eve on one page, and they state only an i7 or Ryzen is a step up. I'm not going to put cash in a clown's pocket though, no.


----------



## geon2k2 (Mar 4, 2017)

londiste said:


> Agreed, but at the same time, the minority who has 1080s is the same crowd that buys €300+ CPUs.



Interesting fact: http://store.steampowered.com/hwsurvey/cpus/

6 CPUs: *1.39%*
7 CPUs: *0.00%*
8 CPUs: *0.24%*

It looks like ~1.63% of the users on Steam have CPUs with six or more cores, which would normally be called enthusiast; however, I'm afraid that includes all the 6- and 8-core FXes and also some older parts like the 6-core Phenoms. It looks like the enthusiast market, at least for gaming, is very close to 0. Maybe 0.5%.
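The ~1.63% figure is just the sum of the quoted survey buckets, which a couple of lines make explicit:

```python
# Steam survey core-count buckets (percent of surveyed machines), as quoted above.
shares = {"6 cpus": 1.39, "7 cpus": 0.00, "8 cpus": 0.24}

six_plus = sum(shares.values())   # share of machines reporting six or more cores
print(f"{six_plus:.2f}%")         # 1.63%
```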


----------



## m0nt3 (Mar 4, 2017)

Fluffmeister said:


> I agree, it's the first time they have been relevant in years, which is precisely why I'm seriously considering a 1700, but then the 7700K is frankly better for my needs as it stands.
> 
> 
> Exactly, I've waited this long. I'd hate to think I've made the wrong decision buying a CPU that is ultimately worse for gaming in the long term.
> ...


In regards to the 7700K, I see it this way: Ryzen offers acceptable game performance that will likely get better. You also get a platform that will be upgradeable into the future - AMD stated the AM4 platform will be supported through 2020. You get other inherent advantages from 8 cores and 16 threads - streaming, encoding, and rendering, for example - and it is priced very well. As stated previously, things are moving to more-threaded workloads and will show greater advantages a few years from now, where the 7700K will likely fall behind. So it is kind of like in-the-moment vs. the future.


----------



## geon2k2 (Mar 4, 2017)

Xzibit said:


> I really don't understand the leap, though.
> 
> You're a smart dude, I like to think. Why even look at the 1700 vs the 7700K if the primary reason is gaming? That mindset I don't understand.
> 
> The IPC difference is 6-8% give or take at the same clock. That's going to continue down the line: 7, 5, 3. I personally don't think they have room for higher clocks - it's GloFo. That's why they are priced the way they are: bang for the buck.



The thing is, the 1700 is already cheaper than the 7700K. What if an R5 with 4 cores / 8 threads, at higher clock speeds than the R7s, costs half as much as an i7 7700K?
Will it be worth paying double for maybe 10% better gaming performance?
Hopefully by then they'll also have resolved the glitches the platform has.


----------



## Fluffmeister (Mar 4, 2017)

Xzibit said:


> You're a smart dude, I like to think.



Aha! Had you fooled all along!



Xzibit said:


> I really don't understand the leap, though. Why even look at the 1700 vs the 7700K if the primary reason is gaming? That mindset I don't understand.
> 
> The IPC difference is 6-8% give or take at the same clock. That's going to continue down the line: 7, 5, 3. I personally don't think they have room for higher clocks - it's GloFo. That's why they are priced the way they are: bang for the buck.



Honestly I'm just looking at it based on what I do, I mentioned in another thread many moons ago I like the idea of more cores because frankly that is how things are going, and if I intend to get another great innings out of a CPU like my beloved i7 920, an 8c 16t chip is right up my alley. The 7700K is the same price as the 1700 and more often than not wins *right now*.

I've been so tempted to pull the trigger on both options, but I'm gonna wait for the dust to settle and revisit the options in a couple of months.... patience is a virtue after all.


----------



## m0nt3 (Mar 4, 2017)

noname00 said:


> You did not just compare a brand-spanking-new CPU with a 6-year-old CPU and basically say "at least it beats that"... I refuse to accept it.
> I know Intel Core is also an old architecture, but at least it's still faster in gaming, and they just need to pack in more cores to match or exceed Ryzen's multithreaded performance and lower their prices.
> 
> The biggest problem AMD has is that gamers won't buy 8-core CPUs, as they don't provide better gaming performance and are more expensive; game developers won't invest too much in new engines because not enough users have 6/8/10-core CPUs; and moving to 2K and 4K, the GPU is the limiting factor anyway.



You do realize that Zen has been in development for over 5 years, right?

For a company as small as AMD, yes, I am comparing it to their previous architecture, and it is significantly better. It is a huge generational leap and better than they were telling us it was going to be. The gaming performance difference, at settings people will actually play at, is negligible, unlike the FX line, which suffers from terrible minimum frame rates and high frame latency. If you want to base a purchasing decision on game performance at settings you will not be playing at, be my guest - to each their own. AMD's biggest problem is not gamers buying 8-core CPUs; the rest of the Ryzen lineup has yet to be released. Also, things need to be put into perspective: AMD does not have the R&D budget Intel has, and yet look at what they have accomplished. Is it really fair to say that Ryzen just sucks at gaming because it doesn't match a quad-core CPU with a 4.2 GHz base clock whose architecture dates as far back as Nehalem, when they first introduced the integrated memory controller? This is the first revision of a from-scratch design that is already much better off than they were with Bulldozer. If it is not for you, great; that doesn't mean it is not for anyone.


----------



## TheoneandonlyMrK (Mar 4, 2017)

noname00 said:


> You did not just compare a brand-spanking-new CPU with a 6-year-old CPU and basically say "at least it beats that"... I refuse to accept it.
> I know Intel Core is also an old architecture, but at least it's still faster in gaming, and they just need to pack in more cores to match or exceed Ryzen's multithreaded performance and lower their prices.
> 
> The biggest problem AMD has is that gamers won't buy 8-core CPUs, as they don't provide better gaming performance and are more expensive; game developers won't invest too much in new engines because not enough users have 6/8/10-core CPUs; and moving to 2K and 4K, the GPU is the limiting factor anyway.


Also pure ass - almost every dev is developing for 8-core CPUs now.

For my ten pence worth on the actual issue: I think the change to a write-back cache is affecting the max frame throughput of the core in some game engines optimised to use write-through caches.
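To make the write-policy point concrete, here is a toy model of why the policy matters for memory traffic. Everything below is illustrative - the `eviction_interval`, the store pattern, and the functions are invented for this sketch, and it makes no claim about Zen's actual cache design:

```python
def write_through_traffic(stores):
    # Write-through: every store is propagated to memory immediately,
    # so memory writes equal the store count.
    return len(stores)

def write_back_traffic(stores, eviction_interval=8):
    # Write-back: a store only dirties the cached line; memory is touched
    # when dirty lines are evicted, so repeated stores to the same line
    # between evictions coalesce into one write.
    traffic, dirty = 0, set()
    for i, line in enumerate(stores, start=1):
        dirty.add(line)
        if i % eviction_interval == 0:   # crude stand-in for periodic eviction
            traffic += len(dirty)
            dirty.clear()
    return traffic + len(dirty)          # flush whatever is still dirty

# A hot game loop hammering the same four cache lines:
stores = [0, 1, 2, 3] * 100              # 400 stores to 4 distinct lines
print(write_through_traffic(stores))     # 400 memory writes
print(write_back_traffic(stores))        # 200 with this toy eviction model
```

An engine tuned around one policy's cost model could plausibly leave throughput on the table under the other, which is the gist of the comment above.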


----------



## Foobario (Mar 4, 2017)

Mescalamba said:


> Only problem is that nobody will optimize current games for AMD.
> 
> Apart from the tiny fact that making a game go from dual-core to quad-, hexa-, or octa-friendly isn't "just like that". In many cases (FPS games mostly) it's near impossible. Never mind that even if it were possible, you don't have that much to occupy those extra cores with. Sure, you can probably have an FPS that uses six cores; the only problem will be that one core goes to 100% load and the rest to a whole freaking 5%.
> 
> ...



Oxide and others are already working on optimizations. AMD has a team doing the same as we speak.

AMD's slightly inferior single-core capabilities are not the problem. The problem is compilers that direct the workload to a single core - or to multiple cores, for that matter - in an inefficient manner.

Oxide is actually working on getting the workload to distribute across multiple cores, as Ashes of the Singularity performed like crap on Ryzen in spite of AMD's apparent multi-core superiority, or equivalency, depending on which camp you fall into.

This is similar to games being optimized for Nvidia hardware, resulting in AMD having to create post-release drivers to overcome the advantage Nvidia had. Game developers have been optimizing for Intel CPUs for a long time since, quite frankly, there was no reason to put much effort into optimizing for AMD's FX line after AMD quit updating it after year one.

Between AMD's efforts and developer efforts the delta between Intel and AMD will narrow.  However, as we see in the GPU realm, there will be developers "persuaded" to not make an effort to optimize for AMD and the burden will be on AMD to find workarounds to close the performance gap.

However, as AMD's market share grows it will take more persuasion to coerce developers to ignore AMD optimizations in the future.

Gaming was never gonna be the catalyst for AMD to grow market share in the 8 core market.  It's the content creators that are going to drive that and some of them are gamers.  As their numbers grow, they will become a big enough market for the developer holdouts to accept AMD CPUs as a viable market to optimize for.


----------



## m0nt3 (Mar 4, 2017)

theoneandonlymrk said:


> also pure ass, Almost every dev is developing  for 8 core cpus Now.
> 
> For my ten pence worth on the actual issue,I think the change to a write back cache is affecting the max frame throughput of the core in some game engines optimised to use write through caches.



Agreed. Any developer developing on consoles is developing for 8 cores. The low-level API experience gained from consoles will carry over to low-level APIs on PC. Low-level APIs are greatly going to help ports, especially the Linux ports of games that we will start to see very soon utilizing Vulkan.


----------



## Prima.Vera (Mar 4, 2017)

They develop for 8 cores, OK. But does this include HT? Meaning anything more than a 4-core CPU with HT is useless?


----------



## m0nt3 (Mar 4, 2017)

Prima.Vera said:


> They develop for 8 cores, OK. But does this include HT? Meaning anything more than a 4-core CPU with HT is useless?


Of course not.


----------



## eidairaman1 (Mar 4, 2017)

theoneandonlymrk said:


> I haven't bought one. I'm realistic about my use and the use of this, and I didn't say your precious Intel wasn't best for games, so stick the rest of your opinion where it belongs.
> You don't know me or my usage, so back up.
> And check the badges for an example.
> 
> ...




Interestingly enough for you and me, mrk, we can only move up to 2011-v3 or 1331 (AM4) to have a true upgrade; 1151 is a sidegrade for us lol.


----------



## TheoneandonlyMrK (Mar 4, 2017)

Prima.Vera said:


> They develop for 8 cores, OK. But does this include HT? Meaning anything more than a 4-core CPU with HT is useless?


I've had this quad-stroke-8-core for years, since I bought it for 159 quid, and honestly it's been the core that could for me. That day and since, many have said a quad is enough; it isn't. There aren't any games whose usable list I'm not on yet, but next-gen VR and games need better everything....


----------



## Melvis (Mar 4, 2017)

I wouldn't be surprised if it is a BIOS issue; my mate's 8120 was terrible in games until the BIOS was updated, and it totally transformed the CPU.

Time will fix this issue, and the performance of these new Zen CPUs will only get faster over time.


----------



## HisDivineOrder (Mar 4, 2017)

If you have to have games be optimized for your less prevalent alternative, then that is NOT a good thing.


----------



## Xzibit (Mar 4, 2017)

m0nt3 said:


> In regards to the 7700K, I see it this way. Ryzen does offer acceptable game performance, that will likely get better. You also get a platform that will be upgradeable into the future. AMD stated the AM4 platform will be supported through 2020. You do get other inherent advantages to 8 cores and 16 threads, streaming, encoding, and rendering for example. It is priced very well. As stated previously, things are moving to more threaded workloads and will have greater advantages a few years now, that the 7700K will likely fall behind in. So  it is kind of like in the moment vs in the future.



I agree on the platform, no doubt. What I was implying, and I guess I wasn't clear, is that the 5s (1600X and 1500X) will still provide the same performance as the 7s in games that aren't utilizing the added cores, at a much lower cost.




geon2k2 said:


> The thing is the 1700 is already cheaper than the 7700K. What if the R5, with 4 cores/8 threads at a higher clock speed than the R7, will cost half as much as an i7 7700K?
> Will it be worth paying double for maybe 10% better gaming performance?
> Hopefully by then they'll also resolve the glitches the platform has.



The 5s aren't going to be higher clocked. That's already been established.







I myself am eyeing a 1600X, or a non-X if there is one.



Fluffmeister said:


> Honestly I'm just looking at it based on what I do, *I mentioned in another thread many moons ago I like the idea of more cores* because frankly that is how things are going, and if I intend to get another great innings out of a CPU like my beloved i7 920, an 8c 16t chip is right up my alley. The 7700K is the same price as the 1700 and more often than not wins *right now*.
> 
> I've been so tempted to pull the trigger on both options, but I'm gonna wait for the dust to settle and revisit the options in a couple of months.... patience is a virtue after all.



I didn't see those posts. I'm in the same situation, but I don't see much benefit to going full 8c at the moment for my use.  My position is I'll see how this plays out (things getting ironed out) over the next couple of months.


----------



## yoyo2004 (Mar 4, 2017)

Who wants a CPU that gets handicapped by one game? Definitely not me!

I am just gonna leave this here,





all credit to this reddit: https://www.reddit.com/r/Amd/comments/5xcc0r/jokers_bench_showing_7700k_cpu_bottleneck/


----------



## TheGuruStud (Mar 4, 2017)

HisDivineOrder said:


> If you have to have games be optimized for your less prevalent alternative, then that is NOT a good thing.



You certainly can't design a CPU to do well in gaming only (you could, but how would that pay off).



yoyo2004 said:


> Who wants a cpu that gets handicapped by one game?, definitely not me!
> 
> I am just gonna leave this here,
> 
> ...



Looks like a lot of time wasted where Ryzen is doing nothing, based on that pic, which is nice (to a degree lol).

BTW, don't anger the 7700K fanboys. How dare you insinuate that it can cause FPS drops or stuttering?!


----------



## Gundem (Mar 4, 2017)

Good for AMD! If enough people buy Ryzen, they'll have a bigger budget to work on improving the Ryzen 8/x2 series.
And development should move towards utilizing more cores?

Quad core is so old hat now; AMD are on the right road.

Also, this will hopefully force Intel to pull up their socks... the improvement between the 6700K and 7700K (for example) was quite sad and far from worth the price.


----------



## SUNmoon2020 (Mar 4, 2017)

AMD Ryzen is 8% behind Kaby Lake in IPC and 12% behind Kaby Lake in clock speed.

That means a 20% delta behind Kaby Lake in single-core performance.

*This information is from AMD, not from my pocket.*

Coffee Lake should, as usual, bring 5% in IPC and 10% in iGPU, 15% in total. That means Ryzen will be a 25~30% delta behind Coffee Lake in single-core performance, and that's a huge deal in games.

Coffee Lake will also bring 6C/12T and 24~30 PCIe 3.0 lanes to the mainstream, plus the iGPU will support 4K HDR10 and Dolby Vision [H.265 and VP9 4K 12-bit encode and decode].

AMD did a great job with Ryzen, but they have a lot of work to do with Zen 2.

First, they have to close the IPC gap between Zen and Coffee Lake (10% higher IPC would be really good news).

Second, they have to close the clock speed gap between Zen and Coffee Lake (10% would be amazing).
[That would put Zen 2 just an 8~10% delta behind Coffee Lake in single-core performance.]

Third, they have to add the features missing from the Zen architecture, like full-speed 256-bit AVX and 28~40 PCIe 3.0 lanes.

Fourth, they have to fix the memory bandwidth issue. From what AMD has said, it looks like they will not add quad-channel memory anytime soon, but they can increase bandwidth by supporting higher RAM speeds out of the box: 3200 MHz with two DIMM slots and 2666 MHz with four DIMM slots would bring a huge improvement to performance, plus support for RAM OC up to 4000 MHz.

AMD has to do that next year to close the gap, because in 2019 Intel will release Ice Lake, a new architecture on 10 nm. We will see at least 10% higher IPC than Coffee Lake, which would put Ryzen a 40% delta behind Ice Lake in single-core performance in just two years, with support for DDR5 and DDR4, PCI-Express 4.0, and many other features, not to mention that Optane will be available for consumers.

In 2019 AMD should release Zen 3 on 7 nm FinFET, not 14 nm FinFET, add all the features from Ice Lake, and keep to a 5~8% delta behind Intel in single-core performance.

AMD also has two years to work with game developers to support Ryzen 8C/16T, 6C/12T, and maybe also 12C/24T, to make sure first-generation Ryzen does not end up 40% behind Intel Ice Lake in games. That's possible, especially if the Xbox Scorpio ends up using a Ryzen CPU; that would help AMD a lot with optimization issues.

[If AMD does not bring down that 40% performance gap between Ryzen and Ice Lake, most people will jump on the Intel train and never go back to the AMD train, even if AMD offers a quantum computer for free to each person who bought Ryzen in 2017-2018.]

We also know that Tiger Lake in 2021 will be the last Intel Core i7/i5/i3 architecture; after that they will move to a new architecture from the ground up. AMD has a lot of work to do to keep up with Intel.

AMD's APUs before the end of the year should give us a big example of how AMD will fix the RAM bandwidth issues. If they use HBM2 to feed the CPU and GPU, they should do the same with Zen 2 as an L4 cache to avoid all the problems with RAM speed, timings, and channels.

If they use DDR4 dual-channel memory to feed the APU, like what they did with the old APUs and DDR3 single- and dual-channel, it will be an epic fail for AMD in performance. By the way, AMD's APUs are still better than Intel's iGPU, but both suffer from RAM bandwidth limitations, and Intel still has that problem.

The coming months will give us some answers about the next Zen architecture. [Just hope it's good, not bad.]


----------



## geon2k2 (Mar 4, 2017)

Xzibit said:


> The 5s aren't going to be higher clocked. That's already been established.
> 
> 
> 
> ...



You're right.
Previously published prices suggest ~$260 for the top-line 6-core part, ~$200 for the 4-core/8-thread, and ~$150 for the 4-core/4-thread.
I'm not sure what to make of it for now. I think the non-GPU-bound gaming price/performance ratio vs Intel will still be good; for 60% of the price you will probably buy 75% of the performance, with a few exceptions for games properly optimized for MT.

Nevertheless, for the best pure gaming performance, Intel will still be the best choice.
Let's see what the real pricing will be and how things unfold.

Anyway, in general, without discussing Ryzen specifically: when benchmarking games on various CPUs, I think review sites should consider using lesser GPUs for better consumer information.
With a GTX 1080, the 7700K will probably be on top; with a GTX 1060, probably all of them will be clumped together, and maybe even the lowest i5, the 7400, or the lowest Ryzen will provide adequate performance.


----------



## bug (Mar 4, 2017)

geon2k2 said:


> Interesting fact: http://store.steampowered.com/hwsurvey/cpus/
> 6 cpus
> *1.39%*
> 7 cpus
> ...


I'm pretty sure there are more 8-cores out there. Just not for gaming; games today don't need that many cores. And in the age of pictures and videos taken with a smartphone, there's not much photo or video editing going on at home either. So Steam is not where you'll find those CPUs.


----------



## geon2k2 (Mar 4, 2017)

bug said:


> I'm pretty sure there are more 8-cores out there. Just not for gaming; games today don't need that many cores. And in the age of pictures and videos taken with a smartphone, there's not much photo or video editing going on at home either. So Steam is not where you'll find those CPUs.



There are, for sure, but in general these parts are not used for gaming, so this whole lower-1080p-gaming-performance thing might not be an issue at all. At least not now. We will discuss this once more when the R5/R3 parts come.


----------



## efikkan (Mar 4, 2017)

It's funny to observe how the narrative changed from extreme hype to full crisis control.



Raevenlord said:


> The folks at PC Perspective have shared a statement from AMD in response to their question as to why AMD's Ryzen processors show lower than expected performance at 1080p resolution (despite posting good high-resolution, high-detail frame rates). Essentially, AMD is reinforcing the need for developers to optimize their games' performance to AMD's CPUs (claiming that these have only been properly tuned to Intel's architecture).


Seriously? Resorting to conspiracy theories? This is low, AMD!

I've never seen any games optimized for Intel; as a matter of fact, games commonly contain some of the worst CPU code. The reason Intel wins is that their prefetcher is better at handling crappy code.



Raevenlord said:


> "As we presented at Ryzen Tech Day, we are supporting 300+ developer kits with game development studios to optimize current and future game releases for the all-new Ryzen CPU. We are on track for 1000+ developer systems in 2017. For example, Bethesda at GDC yesterday announced its strategic relationship with AMD to optimize for Ryzen CPUs, primarily through Vulkan low-level API optimizations, for a new generation of games, DLC and VR experiences.


Imagine if AMD spent this kind of resources on designing a good prefetcher for their CPU…



SUNmoon2020 said:


> AMD Ryzen is 8% behind Kaby Lake in IPC and 12% behind Kaby Lake in clock speed.


During gaming all of these CPUs will boost, and AMD is not far behind in clock speed, if it's not ahead.
Ryzen 7 1800X boosts beyond 4.0 GHz
i7-7700K boosts to 4.5 GHz
i5-7600K boosts to 4.2 GHz
i7-6950X boosts to 4.0 GHz
i7-6900K boosts to 4.0 GHz
i7-6800K boosts to 3.8 GHz
Yet all of these Intel CPUs show marginal differences in games, while Ryzen struggles in a number of games. Something tells me it's not just a lack of clock speed. We already know there are few gains for Intel beyond 4.0 GHz, so AMD would have to do something with their prefetcher.

-----

Well, we all knew this was going to happen. AMD did a decent job by building a more superscalar processor, but they didn't prioritize building a proper front end/prefetcher. Their prefetcher is worse than the one in Sandy Bridge, and considering that most of the improvements from Sandy Bridge to Kaby Lake are in the prefetcher, they have some serious catching up to do.

The efficiency of the prefetcher matters a lot for some workloads, including gaming. And when it comes to cache misses, increasing the clock frequency wouldn't help mitigate the performance penalty.

It's not like this "problem" is going to blow over. It might not matter with a GTX 1060, but when Ryzen is too slow to saturate a GTX 1080, things are only going to get worse with the GTX 1080 Ti, Volta, etc. For buyers of a GTX 1070 or higher, the first-generation Ryzen is simply too slow. For _gaming_, an i7-6800K is a better deal, even if the Ryzen 7 1800X beats it in some workloads.


----------



## TheGuruStud (Mar 4, 2017)

And another incorrect post by efikkan.


----------



## FordGT90Concept (Mar 4, 2017)

caleb said:


> I don't think anybody will recode already published titles to utilize more cores but lets see.


If this dev kit includes a C/C++ compiler for Ryzen, all they should have to do is point it at their code base, compile, and push out a digital distribution update.  I think the only games that will get that treatment, though, are ones actively getting updates (CS:GO, DOTA2, PD2, MMOs, etc.).


FX didn't get special compiler treatment because that was putting lipstick on a pig.


----------



## Batou1986 (Mar 4, 2017)

theoneandonlymrk said:


> @Batou1986 ,you get what you pay for ,Intel or amd,if you buy a low end cheap motherboard you get low end performance and will end up a moaner.
> 
> I overclocked my friends 6320 on you're board two nights ago because he finally got a evo 212 like I told him.
> Nightmare, his throttled all the time at stock settings,I was forced into bclk clocking it by the crapness of his board, I've had his chip easily do 4.5 in my rig but not in his, 4.3 max.
> Point being you're reference and perception have been affected by your purchase choices and you should have chosen better imho.



My man, I think you need to stop making assumptions. The crapness of my board has nothing to do with the lack of performance from the FX series.
I easily beat the benchmarks for an 8350 because all 8 cores are running 4.2 as a non-turbo-boosted speed, and I have none of these throttling issues you mentioned, even when running Linpack for hours.
Running at 5 GHz is not going to make DCS or Star Citizen or any number of other games that have issues with AMD CPUs run any better for me.

It's great that AMD kinda caught up to Intel with Ryzen.
But if it's going to be the same as the FX series, where certain applications like DCS World and CryEngine games perform worse specifically on AMD CPUs, that's a major issue that can't just be ignored.

Also, you need to stop repeating that devs are all making games for 8 cores and using that as a reason your "8-core" CPU is still OK. It's a well-known fact that there are 4 full CPU cores and 4 limited CPU cores on the FX series, and this makes a HUGE difference in performance when comparing it with a true 8-core CPU.


----------



## efikkan (Mar 5, 2017)

FordGT90Concept said:


> FX didn't get special compiler treatment because that was putting lipstick on a pig.


Bulldozer got compiler optimizations from major compilers such as GCC and LLVM, in fact GCC alone has 4 levels of it.



FordGT90Concept said:


> If this dev kit includes a C/C++ compiler for Ryzen, all they should have to do is point it at their code base and compile then push it out a digital distribution update.  I think the only games that will get that treatment though are ones actively getting updates (CS:GO, DOTA2, PD2, MMOs, etc.).


Compiler optimizations have been available for a long time, ever since the ISA was planned. You can see some of the results from it here. Such optimizations usually help with specific edge cases and with vectorization, which can help a bit in some applications. But games are usually limited by cache misses and branch mispredictions, and compiler optimization wouldn't do much to help with this, so game developers can't just throw a compiler at it.


----------



## TheoneandonlyMrK (Mar 5, 2017)

Batou1986 said:


> My man I think you need to stop making assumptions, The crapness of my board has nothing to do with the lack of performance from the FX series.
> I easily beat the benchmarks for an 8350 because all 8 cores are running 4.2 as a non turbo boosted speed and I have none of these throttling issues you mentioned even when running linpac for hours.
> Running at 5ghz is not going to make DCS or Star Citizen or any number of other games that have issues with AMD CPU's run any better for me.
> 
> ...


And you think an HT core is a full core? You're so, so wrong. The FX series is a closer fit to dual cores, and that's their inherent problem: each core had fewer actual resources and no micro-op cache, so under-utilisation happens.
Intel, on the other hand, had micro-ops and could, if it wanted, use a whole core's worth (two Intel cores' worth) on one thread, leveraging a wider execution pipe, micro-ops, and better cache, plus being two node shrinks ahead. But those are all old advantages, and it's clear AMD has the raw per-core and multi-core performance, so with a few tweaks here and there on this brand-new uarch I'm sure it will be fine.

Then I might buy one. But as I said, if I bought one, running two 480s and 4K, I could not do better buying anything Intel in any metric, apparently, so I could happily dodge 1080p my whole life. Alas, I'm skint, so I'm dreaming still.


----------



## DeathtoGnomes (Mar 5, 2017)

caleb said:


> I don't think anybody will recode already published titles to utilize more cores but lets see.


I don't see why not, when Trion/Rift has.


----------



## Camm (Mar 5, 2017)

Well, to be fair, games don't need to be recoded to use more cores to benefit Ryzen, but they could do with being compiled with an AMD-friendly compiler. Contrary to belief, it's not quite as simple as just using the AMD compiler and off you go, but the work to do it wouldn't be extravagant either.

Depending on how well Ryzen sells, I can see plenty of recentish games getting patches.


----------



## BiggieShady (Mar 5, 2017)

efikkan said:


> game developers can't just throw a compiler at it.


To be fair, you are partially right; there are a number of compiler optimizations that can help with prefetch to reduce cache misses: http://ece-research.unm.edu/jimp/611/slides/chap5_3.html ... I say partially, because they either do so at the expense of added instructions or cover edge cases for this particular purpose (gaming) ... it's a win-some-lose-some situation. You can't know for certain until you fire up the CPU profiler on Ryzen for the specific game.
The trouble is game devs will find little incentive to do so for past projects ... and for the new ones, compilers will get tuned as time goes by because of Zen in the console space.


----------



## medi01 (Mar 5, 2017)

efikkan said:


> Resorting to conspiracy theories?


CPU optimization is a conspiracy theory?
Is needing to take into account that the 8-core chip is actually 4+4 (the OS would do that), each with its own L3, a conspiracy theory?

Is "lower than expected" a fact nowadays? Expected by whom?

I have seen Starcraft 2 benchmarks with Ryzen doing a minimum of 16 and an average of 31 fps (on a 980). *Are you freaking kidding me? *
This is plain and outright bullshit; there is no desktop CPU less than 4 years old that would score like that in that game.

There is an expected single-thread advantage that Intel's 4 cores have, and AMD has actually voiced it.
AMD states they are 6% behind Skylake in IPC; taking the higher clock into account, that's a flat 20% advantage for the 7700K in single-core tasks. Who "expected" something, pretty please?
Haswell was an "unlikely but hopefully" target. It ended up on Broadwell levels, jeez.


/double facepalm


----------



## efikkan (Mar 5, 2017)

BiggieShady said:


> To be fair, you are partially right, there are number of compiler optimizations that can help with prefetch to have less cache misses: http://ece-research.unm.edu/jimp/611/slides/chap5_3.html ...


This primarily refers to instruction sets other than x86, since modern x86 architectures have a prefetcher with a large instruction window. For a prefetching hint to give any help before dereferencing a pointer, the programmer has to insert the hint earlier in the code. Large numbers of cache misses usually occur when traversing a list, but using such hints inside a loop provides no benefit, since the CPU already sees what's inside the loop, and you can't know the memory address of data several iterations ahead without dereferencing pointers. Doing so will probably reduce cache efficiency and cause a performance penalty. For this reason, manual prefetching is discouraged.
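The pointer-chasing point above can be sketched in C. This is an illustrative toy, not code from any game engine: the `node` type is invented for the demo, and `__builtin_prefetch` is the GCC/Clang software-prefetch hint.

```c
#include <stddef.h>

struct node { long value; struct node *next; };

/* Linked-list traversal: the address of the next node is unknown until the
   current node has been loaded, so a software prefetch hint can only reach
   one hop ahead -- far too late to hide a ~250-cycle miss. */
long sum_list(const struct node *head) {
    long sum = 0;
    for (const struct node *n = head; n != NULL; n = n->next) {
        __builtin_prefetch(n->next);  /* GCC/Clang hint; mostly futile here */
        sum += n->value;
    }
    return sum;
}

/* Flat-array traversal: every address is computable far in advance, so the
   hardware prefetcher streams the data with no hints needed at all. */
long sum_array(const long *a, size_t len) {
    long sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += a[i];
    return sum;
}
```

The hardware prefetcher handles `sum_array` on its own; for `sum_list`, neither the hardware nor the hint can see more than one node ahead, which is why list-heavy code stalls regardless of compiler flags.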



BiggieShady said:


> I say partially, because they either do so in the expense of more added instructions or covering edge cases for this particular purpose (gaming) ... it's win some lose some situation. Can't know for certain until you fire up the cpu profiler on the ryzen for the specific game.


Please explain what this means.



BiggieShady said:


> Trouble is game devs will find little incentive to do so for past projects ... and for the new ones, compilers will get tuned as time goes by because of the zen in the console space


Compilers are already "tuned", so we wouldn't see any major change there, and as I've mentioned, optimizing compilers can't do much about branch mispredictions and cache misses.
For a compiler to eliminate some branching, the CPU has to offer instructions that allow certain conditionals to be converted into branchless code. Otherwise, a compiler can't help here.
Data cache misses usually occur because of traversal of lists, and the only way to eliminate them would be to rewrite the whole codebase to align the data in a native array; no compiler can ever do this. This is largely a result of how the developer chose to do OOP.
Code cache misses are, once again, usually a result of the code structure; OOP and lists of arbitrary elements are the greatest challenge here. Once again the solution is to restructure the code, which is outside the realm of a compiler. One kind of optimization I can think of that would help here is inlining small functions, but compilers already do that, like GCC with -O2, which enables _-finline-small-functions._


----------



## BiggieShady (Mar 5, 2017)

efikkan said:


> Please explain what this means.


I was pointing out that only running a CPU profiler while debugging a specific game on Ryzen can show where those nanoseconds are lost inside a frame compared to older CPU architectures. Then critical sections get either rewritten for the specific arch, or those libraries get compiled with different options. Mostly a combination of both.


efikkan said:


> Compilers are already "tuned", so we wouldn't see any major change there, but as I've mentioned optimizing compilers can't do much with branch mispredictions and cache misses.


Let's not forget this is a completely new arch. I'm not saying compilers should suddenly start doing the impossible ... but within the realm of what is possible, it seems there is headroom. I'm guessing here that a CPU with such a large cache shouldn't suffer much from cache misses; branch misprediction is another story, but anyway, both would produce a stuttery experience, and the FPS seems extremely steady, only lower on average.


----------



## efikkan (Mar 5, 2017)

BiggieShady said:


> I was pointing out that only running a cpu profiler while debugging a specific game on ryzen can show where those nanoseconds are lost inside a frame compared to less new cpu architectures. Then critical sections get either rewritten for specific arch, or those libraries can be compiled with different options. Mostly combination of both.


First of all, profilers will not accurately measure large problems like cache misses. There will hardly be any games which can get substantial benefits from AMD-specific tweaks this way, since compiler optimizations are limited to small patterns of instructions. Unless there are some big "bugs" in the Zen architecture here, there is little to gain from this. Almost all larger problems would need a rewrite, and are not AMD-specific in any way.



BiggieShady said:


> Let's not forget this is a completely new arch, I'm not saying compilers should suddenly start doing impossible ... but in the realm of what is possible, it seems there is a headroom.


Why does there seem to be headroom? Do you even know how a compiler works? You clearly don't seem to do so.



BiggieShady said:


> I'm guessing here that CPU with such large cache shouldn't suffer much from cache misses,


You know what a *kB* is, right?
Just the rendering of a single frame will process several hundred MB, and at 60 FPS there is a lot of data flowing through.
With 512 kB of L2 cache and 8 MB of shared L3 cache, not even 1% of that data is in there at any point.



BiggieShady said:


> branch misprediction is another story but anyway both would produce stuttery experience, and fps seems extremely steady only lower on average.


So since the FPS is "stable", there are no branch mispredictions and cache misses? I'm sorry, but you clearly don't even know at which scale these things happen. We are not talking about single stalls causing milliseconds of latency, known as stutter; we are talking about clock cycles, which are on the ns scale, and since there are so many thousands of them every second, they add up to a steady performance drop rather than noticeable stutter. A single local branch misprediction causes ~20 clocks of idle; a non-local one adds a cache miss as well (code cache miss), so +~250 clocks. A data cache miss is ~250 clocks on modern CPUs.


----------



## bug (Mar 5, 2017)

geon2k2 said:


> There are, for sure, but in general these parts are not used for gaming, so this whole lower-1080p-gaming-performance thing might not be an issue at all. At least not now. We will discuss this once more when the R5/R3 parts come.


Quite frankly, I'm not worried about FHD gaming at all (I game at 1920x1200 atm). In a couple of years I hope 4K will become much more affordable. What's giving me pause is that AMD has only matched an architecture that has been stagnant for years. AMD themselves said Zen is their workhorse for the next four years. And if Intel comes up with something by then (which they probably will), AMD may not get a chance to cash in properly. Then again, I was never that good at predicting things.


----------



## rruff (Mar 5, 2017)

bug said:


> And if Intel comes up with something till then (which they probably will), AMD may not get a chance to cash in properly. Then again, I was never that good at predicting things



On the other hand if Intel doesn't have some secret weapon, then I think AMD will stand to gain ~100% in market share by 2018 (would put them ~35%) based on Ryzen and refinements. This is *so* much better than Bulldozer. Ryzen may be down a little on raw speed but power efficiency is very good, which will bode well for laptops and servers.


----------



## BiggieShady (Mar 5, 2017)

efikkan said:


> So since FPS is "stable", there is not branch mispredictions and cache misses? I'm sorry, but you clearly don't even know at which scale this things even happen. We are not talking of single stalls causing ms of latency known as stutter, no we are talking about clock cycles which are in ns scale, and since there are so many thousands of them every second they add up to a steady performance drop rather than noticeable stutter. A single local branch misprediction causes ~20 clocks of idle, a non-local adds a cache miss as well(code cache miss), so +~250 clocks. A data cache miss is ~250 clocks on modern CPUs.


Of course I'm not talking about a single cache miss ... rather about max frame times ... with each cache miss at 62.5 ns, any excess of cache misses in one frame compared to the previous would show up as a much bigger variation in the maximum frame time ... you say it adds up to a steady performance drop, but I say it should affect measured frame time variations in a non-steady manner.


----------



## Patriot (Mar 5, 2017)

londiste said:


> the disadvantage does not melt away at higher resolution due to anything to do with the cpu; higher resolutions simply bring the gpu limit quite a bit lower.
> with the 1080 ti (and hopefully vega) out soon, titan xp-level performance will be more accessible than ever. that performance level is the same at 1440p as gtx 1080 performance is at 1080p.



Not necessarily true...  There are actually quite a few 8-12-threaded games on the market.
When there is falloff in framerate on the Intel side and the AMD side stays flat at higher res... that shows a CPU bottleneck plain and clear.   If the gap does anything other than stay constant... the difference is more than the GPU.


----------



## EarthDog (Mar 6, 2017)

@Patriot - Please list all titles which can utilize 8+ threads. 

Wondering how many you believe is 'quite a few'..


----------



## yoyo2004 (Mar 6, 2017)

EarthDog said:


> @Patriot - Please list all titles which can utilize 8+ threads.
> 
> Wondering how many you believe is 'quite a few'..


Rise of the Tomb Raider uses all my 8 cores/threads...


----------



## Patriot (Mar 6, 2017)

EarthDog said:


> @Patriot - Please list all titles which can utilize 8+ threads.
> 
> Wondering how many you believe is 'quite a few'..



The last two Tomb Raiders, CryEngine games, Frostbite engine games (BC2 used 9 threads, BF3 was up to 12 threads... BF4, Battlefront, BF1, and whatever else uses it),
GTA5; Sniper Elite is a showcase of it...

idk, how many AAA titles does it take?   I am sure there are more... and as DX12 and Vulkan become more prevalent, I am sure that is the trend.

The point stands... if the gap does anything other than stay constant when you change resolutions... the difference is more than the GPU.

Even in the games that just hit 4-6 threads hard... having spare threads means anything that hiccups in the background doesn't hurt you.


----------



## XiGMAKiD (Mar 6, 2017)

Well, there's hope that AMD's push to bring much more affordable multicore to the masses could result in developers making prettier, higher-performance games, even though it won't be easy - just like AMD's push on lower-level APIs with Mantle and, more recently, Vulkan/DX12.


----------



## Patriot (Mar 6, 2017)

XiGMAKiD said:


> Well, there's hope that AMD's push to bring much more affordable multicore to the masses could result in developers making prettier, higher-performance games, even though it won't be easy - just like AMD's push on lower-level APIs with Mantle and, more recently, Vulkan/DX12.


They are also pushing 1000+ dev units out... they are giving away Ryzen to game devs...


----------



## XiGMAKiD (Mar 6, 2017)

Patriot said:


> They are also pushing 1000+ dev units out... they are giving away ryzen to game devs...



Well that's a good start


----------



## akumod77 (Mar 7, 2017)

Why not compare Ryzen against an i7 7700K at the same clock speed, mem timings, and core/thread count?

For example, since Ryzen won't OC much, clock them both at 3.9-4.1 GHz, 4c/8t. I know we'd be gimping the i7 7700K, but I'm just curious what the result of an "almost the same" setup would be. Gaming & productivity benches needed.


----------



## bug (Mar 7, 2017)

@akumod77 Just wait for Anandtech. They'll do this right.


----------



## efikkan (Mar 8, 2017)

BiggieShady said:


> Of course I'm not talking about a single cache miss ... rather about max frame times ... with each cache miss at 62.5 ns, any excessive cache misses in one frame compared to the previous would show as a much bigger variation of the maximum frame time ... you say it adds up to a steady performance drop, but I say it should affect measured frame time variations in a non-steady manner.


You still don't understand the time scale here.
Fluctuations of around 1-2 ms are very noticeable, and I would claim anything below ~0.2 ms is hard to notice.
For comparison, 0.2 ms = 200 μs = 200,000 ns.
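As a back-of-envelope check of these figures (using only the numbers quoted in this thread - 62.5 ns per miss and the ~0.2 ms threshold above - and ignoring that real misses overlap in flight):

```python
# How many extra cache misses would it take to produce a frame-time
# fluctuation near the ~0.2 ms threshold claimed above?
# All figures are the ones quoted in this thread, not measurements.

MISS_COST_NS = 62.5          # assumed cost of one main-memory miss, in ns
THRESHOLD_MS = 0.2           # assumed perceptibility threshold, in ms

threshold_ns = THRESHOLD_MS * 1_000_000       # 0.2 ms = 200,000 ns
extra_misses_needed = threshold_ns / MISS_COST_NS

print(f"{extra_misses_needed:.0f} extra misses ≈ {THRESHOLD_MS} ms")
# → 3200 extra misses ≈ 0.2 ms
```

So if misses were fully serialized, a swing of a few thousand misses per frame would already sit at the edge of this threshold; in practice out-of-order execution hides much of that latency.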



Patriot said:


> Not necessarily true...  There are actually quite a few 8-12 threaded games on the market.
> When there is a falloff in framerate on the Intel side and the AMD side stays flat at higher res... that shows a CPU bottleneck plain and clear.   If the gap does anything other than stay constant... the difference is more than the GPU.





XiGMAKiD said:


> Well, there's hope that AMD's push to bring much more affordable multicore to the masses could result in developers making prettier, higher-performance games, even though it won't be easy - just like AMD's push on lower-level APIs with Mantle and, more recently, Vulkan/DX12.


For *both* of you:
Multithreading in games mainly comes down to freeing up the rendering thread to work undisturbed building a queue. Granted, Direct3D 12 allows you to use multiple threads to build a single queue, but there's really no point to it. Having several threads querying the driver this way will create a number of synchronization issues, so the gains will be minimal. So the gains of multiple threads will mostly be limited to having one thread per queue, and since most games use 1-2 queues for most of the load, there is not a huge potential here. It's not like we can just throw four threads at it and scale nicely.

If a game has a problem with a bottlenecked CPU, it's usually caused by the computations done between each API call. So e.g. precalculating animations in a different thread can help a bit, but of course it mostly comes down to the code structure in the game engine. This is why I started by mentioning "freeing up the rendering thread".
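A minimal, illustrative Python sketch of that pattern (all names here are hypothetical; a real engine would do this in native code): a worker thread precalculates the next frames' animation data while the "render" thread only consumes ready-made results, so it is never stalled on the math between API calls.

```python
# Sketch: free up the rendering thread by precalculating animation
# data on a worker thread. Illustrative only - not a real engine.
import queue
import threading

anim_results = queue.Queue(maxsize=2)   # worker -> render thread handoff

def animation_worker(num_frames):
    for frame in range(num_frames):
        # stand-in for expensive per-frame animation/skinning math
        poses = [bone * 0.5 + frame for bone in range(4)]
        anim_results.put((frame, poses))    # blocks if render falls behind

def render_thread(num_frames):
    submitted = []
    for _ in range(num_frames):
        frame, poses = anim_results.get()   # ready-made data, no math stall
        # stand-in for building and submitting one command queue
        submitted.append(frame)
    return submitted

worker = threading.Thread(target=animation_worker, args=(3,))
worker.start()
frames = render_thread(3)
worker.join()
print(frames)   # → [0, 1, 2]
```

The bounded queue is the design point: the worker runs at most a couple of frames ahead, and the render thread never does the heavy computation itself.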



Patriot said:


> They are also pushing 1000+ dev units out... they are giving away ryzen to game devs...


Too little, too late…
This is all about PR; sending out some dev kits is not going to make developers rewrite their games overnight. In ~99% of cases, reducing the bloat would require a major rewrite, which is not something that can be done in 10 hours or so.


----------



## BiggieShady (Mar 8, 2017)

efikkan said:


> You still don't understand the time scale here.


Are you sure about that? ... At a 3% cache miss rate there are 1 to 2 million cache misses inside each frame at 60 fps; you only need an extra 1% of misses to happen in the next frame to get that net variance
... besides, we already know the problem is cache latency when L3 usage gets over 8 MB
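Taking these numbers at face value (a sketch, not a measurement; it assumes misses serialize, which out-of-order CPUs largely avoid):

```python
# Frame-time swing from a 1% change in cache misses, using the
# figures quoted in this thread: 62.5 ns per miss and ~1.5 million
# misses per 60 fps frame (midpoint of the 1-2 million estimate).

MISS_COST_NS = 62.5
MISSES_PER_FRAME = 1_500_000
EXTRA_FRACTION = 0.01        # 1% more misses in the next frame

extra_ns = MISSES_PER_FRAME * EXTRA_FRACTION * MISS_COST_NS
extra_ms = extra_ns / 1_000_000

print(f"{extra_ms:.3f} ms extra per frame")
# 15,000 extra misses × 62.5 ns = 937,500 ns ≈ 0.94 ms
```

Under that (worst-case) assumption, a 1% swing lands in the 1-2 ms range that was called very noticeable earlier in the thread.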


----------

