
How is Intel Beating AMD Zen 3 Ryzen in Gaming?

@W1zzard
Just wanted to say awesome article; It's nice to get some thorough, original testing.

The really simple solution for people with Zen3 actually worried about losing 3% performance seems to be for them to just get a better graphics card. :D
 


“Nothing to see here, move along!”

:laugh: :D
 
Really. It's a measly single-digit % difference in the first place, likely completely imperceptible in the real world. Hence it should be discarded as meaningless, yet some folks blew it out of proportion.
You mean like people did when Intel was ahead?
 
Can someone explain how we get over 100% load on the x-axis? I can't really understand the part about "towards the right GPU limited, towards the left CPU limited". Relative to what?
 
Hope you get rid of those games that nobody plays on your bench list
Care to be more precise? As I don't know which ones those are.

For anyone looking for cheap dual rank 16gb (8gbx2) memory kits, Crucial Ballistix uses dual rank memory sticks. Also, Patriot Viper Steel uses single rank chips.
I've been running four sticks of the Viper Steels for about a year now at 3800MHz CL16. Very nice RAM for the money, once AMD fixed their AGESA.

It just shows you how tiny that difference is, though. A CPU difference of 20% between the top two CPUs when extremely CPU bound is not humanly perceptible.

When Ryzen was 20% behind Intel at 720p, AMD fanboys were all like "It games the same! Who plays at 720p?" Now that AMD is 20% ahead in the most academic scenarios, AMD fanboys are going "It CRUSHES INTEL IN GAMING" -- it does, but I think the moral of this story is you shouldn't spend $500+ on a CPU for gaming lol.
If I'm not blind, it seems like AMD is beating Intel at higher resolutions too, not just 720p...
It'll be interesting to see if the Intel fanbois own up now, as they claimed the performance at 720p meant their CPUs were more future-proof than AMD's, so does the reverse now apply?
 
For anyone wondering, the average difference across the games used, between the best results for each CPU, was 5% in favor of the AMD chip at 1080p. And it might get bigger with the upcoming Big Navi GPUs combined with the new Smart Access Memory feature.
 
If I'm not blind, it seems like AMD is beating Intel at higher resolutions too, not just 720p...
It'll be interesting to see if the Intel fanbois own up now, as they claimed the performance at 720p meant their CPUs were more future-proof than AMD's, so does the reverse now apply?

To the 3000? Yes, and the reverse does apply: in the somewhat distant future, the 5900X with a 7900 or 8900 XT may be 10% faster than a 10850K/10900K at high-FPS 1440p and below, and the 9900K will be X% faster than the 3700X.

But all of them will get absolutely destroyed by a $300 Ryzen 5 or Intel i5 of that current gen -- and if you're running ~120FPS or lower at 4k there won't be any meaningful difference between the top CPUs on this list.
 
When Ryzen was 20% behind Intel at 720p, AMD fanboys were all like "It games the same! Who plays at 720p?" Now that AMD is 20% ahead in the most academic scenarios, AMD fanboys are going "It CRUSHES INTEL IN GAMING" -- it does, but I think the moral of this story is you shouldn't spend $500+ on a CPU for gaming lol.

Now fanboys have similar claims with the sides switched, and it will always be like that. The moral of the story is: do not give any attention to fanboys, ever.

People should just look at the data and make the choice they want for whatever reasons they might have.

The only unreasonable thing to do is trying to dictate other people's choices or browsing internet forums for validation of your personal choices.
 
How about we see performance profiling? You know, disassembled, dissected and raw. Like, instruction-per-instruction profiling?

Also, it will be nice to have a re-run once the latest RDNA-based cards (6xxx) are out.
 
Once we move past 140% in the chart above, the FPS rates are pretty much identical. Is anyone surprised? The GPU is limiting the FPS here, which is typically referred to as the "GPU bottleneck". Having now thought about this for a while, isn't it kinda obvious? Did anyone expect Zen 3 to magically increase GPU rendering performance?

That increase can only be done through the miracle of RGB
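The GPU-bottleneck point from the quoted passage can be modelled crudely in a few lines: delivered FPS is roughly the minimum of what the CPU and the GPU can each push, so once the GPU-side load is heavy enough, every CPU converges on the same number. A toy Python sketch (all figures invented purely for illustration, not taken from the article):

```python
def delivered_fps(cpu_max_fps, gpu_max_fps, load_scale):
    """Toy bottleneck model: the slower of CPU and GPU sets the frame rate.

    load_scale > 1.0 means a heavier GPU workload (e.g. higher resolution),
    which lowers the GPU's achievable FPS but leaves the CPU's untouched.
    """
    return min(cpu_max_fps, gpu_max_fps / load_scale)

# Two hypothetical CPUs that differ by 20% when CPU-bound, one GPU:
fast_cpu, slow_cpu, gpu = 300, 250, 400

for scale in (1.0, 1.6, 2.0):  # increasing GPU load = moving right on the chart
    print(scale, delivered_fps(fast_cpu, gpu, scale), delivered_fps(slow_cpu, gpu, scale))
```

At scale 1.0 the CPUs show their full 20% gap (300 vs 250 FPS); by scale 1.6 and beyond both land on the GPU's limit, which is why the curves in the chart merge past a certain load.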
 
If you know the full power of certain hardware can only be shown by testing it with a fast GPU and fast memory, then you should simply do it.

For my future processor reviews, I'll definitely upgrade to Ampere, or RDNA2 if that turns out to be the more popular choice on the market.

When have you ever looked at "the most popular choice"? You never changed your test system to AMD despite AMD beating Intel in sales 10 to 1. You have always used "the fastest on the market" to eliminate bottlenecks. Now that there is a chance of AMD beating the 3090, you suggest using the "more popular choice" ( = Nvidia, because they have the biggest mindshare in the GPU market ).

I'm also thinking about upping the memory speeds a bit, DDR4-3600 seems like a good balance between cost and performance. I doubt I'll pick DDR4-3800 or 4000 just because AMD runs faster with it—guess I'll still get flak from the AMD fanboys.

Smart of you to already put a label on people who would criticize your test methods. You tested all the RTX 30xx cards with a system using DDR4-4000 despite it being far from the best balance between cost and performance. But now that you know for a fact that Ryzen 5000 benefits from high-speed memory ( and this is not exactly breaking news... ), you're thinking about upgrading to DDR4-3600 and not 3800 or 4000? Come on, man...

Either you always use the fastest hardware to eliminate possible bottlenecks, or you use hardware that most readers can relate to ( = most popular in sales or best price/performance ratio ). By cherry-picking the fastest hardware in one review and then using the "best value" hardware in another, you make yourself look less trustworthy.

But hey, because I don't agree with your arguments, I'll simply be marked as an AMD fanboy I guess.

And soon it's gonna be 2021; time to add 1% lows & frametime graphs to CPU reviews. When someone reads your reviews and sees the 5900X leading the 2700X by only 5% at 1440p in Battlefield V, they might actually think that's the whole story. In reality the 2700X has many frame drops and much lower 1% lows than a 5900X. I know because I noticed a massive improvement when I upgraded from the 2700X to the 3900X.
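For anyone curious what a "1% low" actually is: it is commonly computed as the average FPS over the slowest 1% of frames in a frametime log (one common definition among several; this is my sketch, not TPU's actual analysis code). A minimal Python version:

```python
def fps_stats(frametimes_ms):
    """Return (average FPS, 1% low FPS) from a list of frametimes in ms.

    '1% low' here = average FPS over the slowest 1% of frames,
    which is one common definition used by reviewers.
    """
    n = len(frametimes_ms)
    avg_fps = 1000.0 * n / sum(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps

# Two synthetic runs with the SAME average FPS but very different consistency:
smooth = [10.0] * 1000                 # steady 10 ms frames -> 100 FPS
stutter = [8.0] * 990 + [208.0] * 10   # mostly fast, with ten big hitches
```

Both runs average 100 FPS, but the stuttery run's 1% low collapses to under 5 FPS, which is exactly the difference an averages-only chart hides.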
 
Why can't we all agree that AMD won this round? The fact that gaming was the last bastion for Intel means it deserves this article. What AMD has told us bodes well for the near future, but Intel is serious about their dGPU, and I think while we focus on the high-end parts they are focusing on delivering an APU that trounces anything in AMD's stack, as the $300-and-under APUs will sell like Tim Hortons coffee in this anemic economy.
 
That's great, but the need for 4 DIMMs on AM4 not to lose performance still bugs me. I consider full ATX boards to be a waste of space and haven't used one in years, and, to be fair, Intel traditionally has a better choice of mini-ITX boards.
 
That's great, but the need for 4 DIMMs on AM4 not to lose performance still bugs me. I consider full ATX boards to be a waste of space and haven't used one in years, and, to be fair, Intel traditionally has a better choice of mini-ITX boards.
This could change as MB vendors try to entice us with new boards. If the chipset is the same, the form factor must be the next "innovation" we see. Having said that, 4 DIMMs is great for the vast number of users who already have 2x8 GB sticks. The way RAM prices are right now, it adds to the flexibility of Ryzen.
 
That's great, but the need for 4 DIMMs on AM4 not to lose performance still bugs me. I consider full ATX boards to be a waste of space and haven't used one in years, and, to be fair, Intel traditionally has a better choice of mini-ITX boards.

To be more specific, a total of 4 ranks. That can be achieved with dual rank memory sticks like the Crucial Ballistix (currently the best value) 16 GB (8 GB x2) kits. The Patriot Viper Steel uses single rank sticks, but I wouldn't count it out for being single rank. Still great value.

B-die RAM will typically be single rank per stick (8 GB). The newer G.Skill Trident Z Neo 3600/3800/4000 MHz B-die sticks are dual rank, though.

B-die single rank sticks are better than dual rank Micron E-die, Hynix CJR, etc. because of better frequencies and tighter timings.

Edit: Also, to be fair, a lot of games aren't affected by memory too much. The best value is going to be the Crucial Ballistix kits. B-die RAM is great and all, but it's not worth it value-wise unless you like overclocking/tuning and want to flex.
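The rank arithmetic in the post above is simple but trips people up, so here it is spelled out (my own illustration of the poster's point, nothing more):

```python
def total_ranks(sticks, ranks_per_stick):
    """Total memory ranks presented to the memory controller."""
    return sticks * ranks_per_stick

# Two different configurations both reach the 4 ranks discussed above:
assert total_ranks(2, 2) == 4  # 2x dual-rank sticks (e.g. a 2x16 GB kit)
assert total_ranks(4, 1) == 4  # 4x single-rank sticks (e.g. a 4x8 GB setup)
```

So an ITX board with only two DIMM slots can still hit 4 ranks, provided the two sticks are dual rank.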
 
To be more specific, a total of 4 ranks. That can be achieved with dual rank memory sticks like Crucial Ballistix (currently best value).

B-die RAM will typically be single rank per stick (8 GB). The newer G.Skill Trident Z Neo 3600/3800/4000 MHz B-die sticks are dual rank, though.

B-die is better than dual rank Micron E-die, Hynix CJR, etc. because of better frequencies and tighter timings.

This is important for the ITX guys in here ^.

I'm thinking of a 5900X mini box but was worried about 4 DIMMs vs 4 ranks -- would love to see more testing on this.
 
This is important for the ITX guys in here ^^.
I want to build an ITX that can fit in my suitcase the next time I travel to Barbados. Plus one of those foldable screens Samsung just debuted.

On STOCK, the Ryzen Zen 3 5000 line is better than Intel. Also, clearly, on productivity tasks (that's not new).

BUT, in terms of overclocked CPUs:
- the 8-core i7-10700K is better in games than the Ryzen 5800X (8 cores).
- an overclocked Intel i9-10900K is better than ANY Ryzen 5000 Zen 3 CPU in games (overclocked or not). By quite a serious margin in some games...

SO: Intel is still the king of gaming over the Zen 3 5000 series...
It doesn't matter how you try to spin it; the article reads "Why" and not "If".
 
Great ideas, keep them coming, I'll test them all over the weekend

Edit: note to self, from Jonny via email, force Zen 3 + Ampere to Gen 3 to more clearly see PCIe 3 vs 4
Well since you asked ;)
1T vs 2T cmd rate
Even vs odd timings
Fabric/memory speed synced vs non-synced 2133
ECC ram compatibility
Mem clocks vs mem timings

Etc etc :)
 
Awesome work. It's the dedication to the methods that's impressive: eliminating variables and pinning down core parking on AMD, when the CPUs are saturated with work, as the primary culprit in performance variation.
 
At most, I can tell you what it takes to have an Intel CPU better in games than AMD Ryzen Zen 3:
- for the 8-core battle: Intel i7-10700K clocked at 5.1-5.2 GHz all cores, at least 4.5 GHz cache speed, RAM at 4133 MHz or faster, tightly adjusted timings (e.g. CAS 16)
- for higher core counts: Intel i9-10900K clocked at 5.3-5.4 GHz all cores, cache speed at least 4.8 GHz, RAM at least 4266 MHz, tight timings.

Also important: 4 ranks of RAM (2x16 GB dual rank or 4x8 GB single rank, for e.g.)
 
@W1zzard - That was a fantastic and insightful write-up, thank you for taking the time. A couple of requests:

I thought that your custom Unreal Engine methodology/tests/charts were great. It easily gives a very clear picture of the performance relationship between CPU and GPU across the entire workload spectrum, for each different CPU. Do you think it is possible to utilize this in future reviews as a regular part of your testing suite?

"For my future processor reviews, I'll definitely upgrade to Ampere, or RDNA2 if that turns out to be the more popular choice on the market. I'm also thinking about upping the memory speeds a bit, DDR4-3600 seems like a good balance between cost and performance. I doubt I'll pick DDR4-3800 or 4000 just because AMD runs faster with it—guess I'll still get flak from the AMD fanboys."

I do have to agree with part of a previous comment above about this in your conclusion. In my opinion, the purpose of benchmarks is to measure absolute performance. Though I do understand that choosing a more mainstream RAM speed may be more reflective of the average gamer, I feel it is more important to show a best-case scenario for a given piece of hardware, and then let readers decide where to give and take to fit a particular budget. If we were to have a known bottleneck in the testing, it would leave readers who do purchase the highest-performing hardware with a "what if" question. If your next test bench utilizes AMD CPUs, I feel it should feature other components that allow them to achieve the fastest performance they are capable of, effectively taking them out of the equation as much as possible.
 
If you know the full power of certain hardware can only be shown by testing it with a fast GPU and fast memory, then you should simply do it.

For my future processor reviews, I'll definitely upgrade to Ampere, or RDNA2 if that turns out to be the more popular choice on the market.

When have you ever looked at "the most popular choice"? You never changed your test system to AMD despite AMD beating Intel in sales 10 to 1. You have always used "the fastest on the market" to eliminate bottlenecks. Now that there is a chance of AMD beating the 3090, you suggest using the "more popular choice" ( = Nvidia, because they have the biggest mindshare in the GPU market ).

I'm also thinking about upping the memory speeds a bit, DDR4-3600 seems like a good balance between cost and performance. I doubt I'll pick DDR4-3800 or 4000 just because AMD runs faster with it—guess I'll still get flak from the AMD fanboys.

Smart of you to already put a label on people who would criticize your test methods. You tested all the RTX 30xx cards with a system using DDR4-4000 despite it being far from the best balance between cost and performance. But now that you know for a fact that Ryzen 5000 benefits from high-speed memory ( and this is not exactly breaking news... ), you're thinking about upgrading to DDR4-3600 and not 3800 or 4000? Come on, man...

Either you always use the fastest hardware to eliminate possible bottlenecks, or you use hardware that most readers can relate to ( = most popular in sales or best price/performance ratio ). By cherry-picking the fastest hardware in one review and then using the "best value" hardware in another, you make yourself look less trustworthy.

But hey, because I don't agree with your arguments, I'll simply be marked as an AMD fanboy I guess.

And soon it's gonna be 2021; time to add 1% lows & frametime graphs to CPU reviews. When someone reads your reviews and sees the 5900X leading the 2700X by only 5% at 1440p in Battlefield V, they might actually think that's the whole story. In reality the 2700X has many frame drops and much lower 1% lows than a 5900X. I know because I noticed a massive improvement when I upgraded from the 2700X to the 3900X.
I just checked it. Your call is correct, Sir.
All of the reviews I found here of the 3070/3080/3090 (more than a dozen) from September/October had the following memory setup: Thermaltake TOUGHRAM, 16 GB DDR4 @ 4000 MHz 19-23-23-42.
 