
How is Intel Beating AMD Zen 3 Ryzen in Gaming?

Joined
Jan 24, 2020
Messages
107 (0.07/day)
You seem to be new to the game.

You are the one referencing people who can't set up their machines. -50% less performance? In what world? First you talk about stock, then you talk about overclocking. You are all over the place.
The TPU average is only a 3% difference. Most review outlets have the 5950X and 5900X slightly ahead on average.
It only comes down to price (motherboard included).

So far you haven't linked anything that backs up your claims.

JFYI, no review outlet reviews at non-stock settings because overclocks are not guaranteed.
I quoted a max-tuned AMD vs Intel comparison. It was considered "invalid" because a GamersNexus review with Intel at only 3600 MHz (which is way low compared to the speeds Intel can reach) said otherwise.

Talk about being all over the place. So let's recap: AMD vs Intel tests are only valid when you use all the favourable AMD tuning, even if it means spending €300 on RAM, while on Intel you only tune mildly.

Check the video I posted before.
 
Joined
Mar 21, 2009
Messages
45 (0.01/day)
Processor 9700k
Motherboard asrock z390 extreme4
Video Card(s) 5700
Display(s) benq xl2411p 144hz 1080p
OK, let me help you, all with testing methodology outlined.

@Cobain
There is nothing wrong. I followed his thread on another forum where many users also reported and showed that tuned Intel beats tuned Ryzen 5000 in games, be it a 240 Hz low-settings scenario or 4K. It is not wrong.
link me/us

SIMRACE pl contradicted himself with his Zen 2 vs Coffee Lake comment.

----------------------
the spiritual successor to pclab.pl


I don't see anything to get excited about with Intel. It really comes down to price.
AMD has €80-150 motherboards that can handle the 5950X (mostly without OC); Intel doesn't really.

--------------------------
Since you're so RAM-focused: same RAM for both systems.
https://www.sweclockers.com/test/30638-amd-ryzen-9-5950x-och-ryzen-9-5900x-vermeer/5
 
Joined
Oct 15, 2019
Messages
550 (0.32/day)
I quoted a max-tuned AMD vs Intel comparison. It was considered "invalid" because a GamersNexus review with Intel at only 3600 MHz (which is way low compared to the speeds Intel can reach) said otherwise.

Talk about being all over the place. So let's recap: AMD vs Intel tests are only valid when you use all the favourable AMD tuning, even if it means spending €300 on RAM, while on Intel you only tune mildly.

Check the video I posted before.
What? Gamers Nexus has both AMD and Intel running with 3200 MHz dual-rank RAM. Where did you get the idea that they used different RAM settings for different platforms?

The only way Intel outperforms AMD is if you overclock it to absurd (= unstable) levels and spend a lot of time and money on well-performing RAM for it. For AMD, 3200 MHz dual-rank is all you need.

If you'd like to see stock results, Anand has a great article about it, with 13 games tested: https://www.anandtech.com/show/1621...ive-review-5950x-5900x-5800x-and-5700x-tested
The performance difference is especially absurd when you take GPU bottlenecks out of the picture.


As for equally tuned systems, as in the same amount of time used to tune both an Intel and an AMD system at the same price point, the results are not good for Intel either. Just look at the following video:
 
Joined
Nov 19, 2019
Messages
103 (0.06/day)
@W1zzard Coming back to these results in prep for the 12th gen reviews...
Any chance you could re-generate this chart and add Intel 11th & 12th gen CPUs, maybe as a future article?
I think it will be interesting to see where things stand now.
We don't need all the other charts, and it doesn't specifically need to be with a 2080 Ti.

(screenshot of the chart attached)
 
Joined
Nov 13, 2007
Messages
10,326 (1.69/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 Pro
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
As for equally tuned systems, as in the same amount of time used to tune both an Intel and an AMD system at the same price point, the results are not good for Intel either. Just look at the following video:

Those results are great for Intel - that's a 14 nm 8700K OC matching a stock 7 nm 5600X with a 3090. Not bad at all.

Also, that is the current GPU limit cap. Basically, if you're GPU-limited at all, there is very little difference between tuned Intel and tuned AMD with a 3090, and even less with lesser video cards.

Anyone running an OC on an 8700K (a four-year-old chip), a 9900K, or a 10-series isn't going to be pressed to upgrade until RDNA 3, and even then only if they're going for a 7800 XT or above.
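To put the "GPU limit cap" idea in concrete terms, here is a minimal sketch of the mental model: delivered FPS is roughly the lower of the CPU-bound and GPU-bound frame rates. All figures below are hypothetical placeholders, not benchmark results.

Code:
# Rough bottleneck model: delivered FPS is capped by the slower side.
# All figures are hypothetical placeholders, not benchmark results.

def delivered_fps(cpu_cap: float, gpu_cap: float) -> float:
    """First-order approximation of in-game FPS."""
    return min(cpu_cap, gpu_cap)

tuned_intel_cap = 210.0  # hypothetical CPU-bound FPS, tuned Intel
tuned_amd_cap = 220.0    # hypothetical CPU-bound FPS, tuned AMD
gpu_cap_4k = 110.0       # hypothetical GPU-bound FPS, 3090 at 4K

# Once the GPU is the limiter, both CPUs deliver the same number:
print(delivered_fps(tuned_intel_cap, gpu_cap_4k))  # 110.0
print(delivered_fps(tuned_amd_cap, gpu_cap_4k))    # 110.0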
 
Joined
Oct 23, 2020
Messages
671 (0.49/day)
Location
Austria
System Name nope
Processor I3 10100F
Motherboard ATM Gigabyte h410
Cooling Arctic 12 passive
Memory ATM Gskill 1x 8GB NT Series (No Heatspreader bling bling garbage, just Black DIMMS)
Video Card(s) Sapphire HD7770 and EVGA GTX 470 and Zotac GTX 960
Storage 120GB OS SSD, 240GB M2 Sata, 240GB M2 NVME, 300GB HDD, 500GB HDD
Display(s) Nec EA 241 WM
Case Coolermaster whatever
Audio Device(s) Onkyo on TV and Mi Bluetooth on Screen
Power Supply Super Flower Leadx 550W
Mouse Steelseries Rival Fnatic
Keyboard Logitech K270 Wireless
Software Deepin, BSD and 10 LTSC
AMD wouldn't release it to consumers, but the real king for iGPU (1% slower than the 5600G's iGPU) and CPU overclocking would be the 5300G :toast:

From AMD's point of view: why should we release a consumer 5300G for $150 if users buy a $280 5600G like dumb monkeys anyway?
 
Joined
Jan 27, 2015
Messages
1,658 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
Those results are great for Intel - that's a 14 nm 8700K OC matching a stock 7 nm 5600X with a 3090. Not bad at all.

Also, that is the current GPU limit cap. Basically, if you're GPU-limited at all, there is very little difference between tuned Intel and tuned AMD with a 3090, and even less with lesser video cards.

Anyone running an OC on an 8700K (a four-year-old chip), a 9900K, or a 10-series isn't going to be pressed to upgrade until RDNA 3, and even then only if they're going for a 7800 XT or above.

I think gen 8 and anything short of a 9900K is becoming limiting on higher-end new GPUs. It will be interesting to see how new GPUs late next year affect the last of the Skylake derivatives.

You need a $1400 GPU to show them up right now, but in normal / past times that would be a $650 GPU, which is really why they're still viable. Zen 2 (3000 series) and below are ofc in worse shape on that front.

I think the next gen GPUs late next year will push anything less than Zen 3 and RKL to the bottom of the charts.

The SOTR benchmark thread here is a really good place to find information on how CPU-limited you may be.

And with a 6800XT, the 3800X is CPU-limited to 161 FPS (see the attached screenshot).

And a 10850K is CPU-limited to 181 FPS with a 3080 (see the attached screenshot).
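Here is a rough sketch of how to read numbers like those against your own results. The 5% tolerance and the sample readings are arbitrary assumptions, not data from the thread.

Code:
# If measured average FPS sits at (or near) the known CPU cap for your chip,
# the CPU is the limiter; if it is well below, the GPU or settings are.
def is_cpu_bound(measured_fps: float, cpu_cap_fps: float,
                 tolerance: float = 0.05) -> bool:
    return measured_fps >= cpu_cap_fps * (1.0 - tolerance)

# Hypothetical readings checked against the caps quoted above:
print(is_cpu_bound(159.0, 161.0))  # True  -> the 3800X is the bottleneck
print(is_cpu_bound(140.0, 181.0))  # False -> the 10850K still has headroom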
 
Joined
Nov 13, 2007
Messages
10,326 (1.69/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 Pro
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
I think gen 8 and anything short of a 9900K is becoming limiting on higher-end new GPUs. It will be interesting to see how new GPUs late next year affect the last of the Skylake derivatives.

You need a $1400 GPU to show them up right now, but in normal / past times that would be a $650 GPU, which is really why they're still viable. Zen 2 (3000 series) and below are ofc in worse shape on that front.

I think the next gen GPUs late next year will push anything less than Zen 3 and RKL to the bottom of the charts.

The SOTR benchmark thread here is a really good place to find information on how CPU-limited you may be.

And with a 6800XT, the 3800X is CPU-limited to 161 FPS (see the attached screenshot).

And a 10850K is CPU-limited to 181 FPS with a 3080 (see the attached screenshot).

I am pretty familiar with the SOTR bench; a couple of things:
1) That bench is poorly optimized for processors on the DEMO build, and the newer builds take much better advantage of processor performance.
2) Untuned Zen 3 (i.e. without a memory overclock) scores almost the same as Skylake on this build, despite being 25-30% faster.
3) A 6800XT will absolutely smoke a 3080 in average FPS at 1080p due to a bit of Nvidia driver overhead, which will impact the CPU score a bit (Zen 3 does better here).
Here is my 10850K w/ 3080 at 1080p Lowest on the real build of the game (screenshot attached).

Am I bottlenecking? Yes. I am bottlenecked anywhere from 215 FPS (highest settings) to 238 FPS (lowest) average at 1080p in this game...

The bottleneck is not really IPC but memory bandwidth.
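To put memory bandwidth in rough numbers: the theoretical DDR4 peak is transfer rate x 8 bytes per transfer x channel count. This is a back-of-the-envelope sketch; real sustained bandwidth is meaningfully lower.

Code:
# Theoretical peak DDR4 bandwidth: MT/s * 8 bytes/transfer * channels.
# Real-world sustained bandwidth is lower; this is just the ceiling.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000.0

print(peak_bandwidth_gbs(2933))  # ~46.9 GB/s, Comet Lake's stock JEDEC speed
print(peak_bandwidth_gbs(4000))  # ~64.0 GB/s, a typical tuned kit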
 
Joined
Jan 27, 2015
Messages
1,658 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
I am pretty familiar with the SOTR bench; a couple of things:
1) That bench is poorly optimized for processors on the DEMO build, and the newer builds take much better advantage of processor performance.
2) Untuned Zen 3 (i.e. without a memory overclock) scores almost the same as Skylake on this build, despite being 25-30% faster.
3) A 6800XT will absolutely smoke a 3080 in average FPS at 1080p due to a bit of Nvidia driver overhead, which will impact the CPU score a bit (Zen 3 does better here).
Here is my 10850K w/ 3080 at 1080p Lowest on the real build of the game (screenshot attached).

Am I bottlenecking? Yes. I am bottlenecked anywhere from 215 FPS (highest settings) to 238 FPS (lowest) average at 1080p in this game...

The bottleneck is not really IPC but memory bandwidth.


I intentionally only looked for benches that followed the rules: 1080p, highest settings, DLSS and RT off. The hardest part of using that thread to find useful information is that people don't follow the rules.

And yes, the CPU numbers can vary significantly and definitely favor OC / high memory speeds on the Skylake derivatives, which have always been the tweakers' favorites, and with good reason. Still, for those looking for knowledge as opposed to talking points, the information in that thread is quite valuable IMO.
 
Joined
Nov 13, 2007
Messages
10,326 (1.69/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 Pro
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
I intentionally only looked for benches that followed the rules: 1080p, highest settings, DLSS and RT off. The hardest part of using that thread to find useful information is that people don't follow the rules.

And yes, the CPU numbers can vary significantly and definitely favor OC / high memory speeds on the Skylake derivatives, which have always been the tweakers' favorites, and with good reason. Still, for those looking for knowledge as opposed to talking points, the information in that thread is quite valuable IMO.
If you look at that bench you will see there isn’t a large difference between processors at stock memory settings.
 
Joined
Jan 27, 2015
Messages
1,658 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 2S
Keyboard Logitech MX Keys
Software Lots
If you look at that bench you will see there isn’t a large difference between processors at stock memory settings.

Ya I know. I'm not sure why people buy K and X processors and run them at stock, kinda like people who buy boats and never go boating and whatnot. Buy a Hellcat and drive the speed limit. w/e

And I know where you are going with this: yes, the 10850K can get 20% better than stock pretty easily with minimal effort (literally a 220 CPU score vs 180ish stock) while the 3800X cannot, provided you're willing to spend an extra $100 on an AIO and $50 on faster RAM.

Still, a stock 10850K with power limits intact and no fancy cooling or RAM speed will not keep up with a 3080. This means 90% of the 10850Ks in existence can't keep up with a 3080, which means gen 10's time being able to push a GPU to its limit is probably going to end with the next generation of GPUs in 2022. This is not a slam on Intel, because frankly Zen 2 is in worse shape against a 3080 or 3090, and I saw a benchmark where a 2600X appeared to be a perfect match for a 2060 Super. It's just that Skylake's time is about to pass, too. The GPUs are just getting too fast.
 
Joined
Nov 13, 2007
Messages
10,326 (1.69/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 Pro
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Ya I know. I'm not sure why people buy K and X processors and run them at stock, kinda like people who buy boats and never go boating and whatnot. Buy a Hellcat and drive the speed limit. w/e

And I know where you are going with this: yes, the 10850K can get 20% better than stock pretty easily with minimal effort (literally a 220 CPU score vs 180ish stock) while the 3800X cannot, provided you're willing to spend an extra $100 on an AIO and $50 on faster RAM.

Still, a stock 10850K with power limits intact and no fancy cooling or RAM speed will not keep up with a 3080. This means 90% of the 10850Ks in existence can't keep up with a 3080, which means gen 10's time being able to push a GPU to its limit is probably going to end with the next generation of GPUs in 2022. This is not a slam on Intel, because frankly Zen 2 is in worse shape against a 3080 or 3090, and I saw a benchmark where a 2600X appeared to be a perfect match for a 2060 Super. It's just that Skylake's time is about to pass, too. The GPUs are just getting too fast.

But SOTR does not really show this. What SOTR shows is exactly the opposite: that game build and RAM speed are more important than IPC / processor power, to an extent. The 10850K has much lower IPC, less cache and fewer cores than a 5900X; here is SOTR. Even if I took my overclock off and ran at 4.8 GHz, I would beat that 5900X at stock.

Stock 5900X w/ 3080 (screenshot attached)

The 10850K w/ a 3080 (screenshot attached)
I understand what you're trying to say, but by this logic that 5900X also can't keep up with a 3080 (it can).

Different game build on the 10850K (screenshot attached).


I honestly think DDR5 and AMD's 3D cache are going to make a bigger difference than more IPC/cores. Zen 3 and even Skylake are bandwidth-starved -- if you feed them enough bandwidth and lower latency, they scale beyond 150-200 FPS, bottlenecking even the highest-end cards at low resolutions. Rocket Lake, for example, has about 20% higher IPC than Skylake, and basically ties (and even loses) some games to Skylake -- that, to me, shows that the current memory platform as a whole is the bottleneck. That extra processor power is just going to waste.
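The 3D-cache half of that prediction can be sketched with the classic average-memory-access-time formula. Every number below is an illustrative guess, not a measured latency or hit rate.

Code:
# AMAT = hit_rate * cache latency + (1 - hit_rate) * DRAM latency.
# A much larger L3 raises the hit rate, so fewer accesses pay the DRAM cost.
def amat_ns(hit_rate: float, cache_ns: float = 10.0, dram_ns: float = 70.0) -> float:
    return hit_rate * cache_ns + (1.0 - hit_rate) * dram_ns

print(amat_ns(0.80))  # 22.0 ns with an ordinary L3 (illustrative)
print(amat_ns(0.92))  # 14.8 ns if a much larger L3 lifts the hit rate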
 

Joined
Sep 17, 2014
Messages
21,552 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
But SOTR does not really show this. What SOTR shows is exactly the opposite: that game build and RAM speed are more important than IPC / processor power, to an extent. The 10850K has much lower IPC, less cache and fewer cores than a 5900X; here is SOTR. Even if I took my overclock off and ran at 4.8 GHz, I would beat that 5900X at stock.

Stock 5900X w/ 3080 (screenshot attached)

The 10850K w/ a 3080 (screenshot attached)

I understand what you're trying to say, but by this logic that 5900X also can't keep up with a 3080 (it can).

Different game build on the 10850K (screenshot attached).

I honestly think DDR5 and AMD's 3D cache are going to make a bigger difference than more IPC/cores. Zen 3 and even Skylake are bandwidth-starved -- if you feed them enough bandwidth and lower latency, they scale beyond 150-200 FPS, bottlenecking even the highest-end cards at low resolutions. Rocket Lake, for example, has about 20% higher IPC than Skylake, and basically ties (and even loses) some games to Skylake -- that, to me, shows that the current memory platform as a whole is the bottleneck. That extra processor power is just going to waste.

Exactly.

CPU performance in games comes down to optimization and how the API / abstraction is handled. We have literally a whole slew of DirectX versions' worth of proof. Every time, the burden is on developers to make use of what's possible and do it the right way. If they don't, it costs CPU cycles. GPU is simple: you just crunch pixel info as fast as you get it. CPU is a timing thing, and it directly relates to how code is used.

For gaming, raw CPU performance was NEVER a big issue. In DirectX 11, the problem wasn't primarily a lack of CPU grunt, but the lack of threading and trouble handling high draw-call counts, struggling with cache coherency, scheduling, etc. Closer-to-metal APIs fix this; the CPUs are still the same. Similarly, a different API can in that way also improve GPU performance on the same CPU. Why? Because there is less waiting to draw frames. The GPU and CPU are the same, and yet you get more pixels crunched, because code is executed faster early in the pipeline.

The top-down view of CPU performance in gaming, based on an FPS number and an in-game counter, was always flawed, and it always will be. Processing is a pipeline, and the weakest link is really never in hardware (you meet specs or you don't) but in coding things properly. And yes, you can use more hardware to mitigate the problem of shitty code, but you're not fixing anything. You're just throwing your money at a problem someone else should have fixed properly; and in many games where CPU efficiency is abysmal, complaints lead to improvements in due time. Especially now that the APIs do support threading. Developers can barely get away with games choking on one thread now.
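A toy model of the draw-call point; this is my own sketch, not how any real engine or API is implemented, and the per-call cost and thread counts are invented.

Code:
# Toy CPU frame-time model: serial vs threaded draw-call recording.
# Costs are invented for illustration; real engines are far messier.
def cpu_frame_ms(draw_calls: int, us_per_call: float,
                 threads: int = 1, fixed_ms: float = 2.0) -> float:
    return fixed_ms + (draw_calls * us_per_call / 1000.0) / threads

dx11_like = cpu_frame_ms(10_000, 1.5, threads=1)  # one submission thread
dx12_like = cpu_frame_ms(10_000, 1.5, threads=6)  # recording spread over cores
print(dx11_like, 1000.0 / dx11_like)  # 17.0 ms -> ~59 FPS CPU cap
print(dx12_like, 1000.0 / dx12_like)  # 4.5 ms -> ~222 FPS CPU cap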


Really, for PC gaming hardware, there is one overarching trend that echoes through the past few decades: we move to a new level of mainstream performance, and lagging way behind it OR jumping way above it is just a big, over-expensive waste of time. Gains suffer heavily from lack of optimization for those performance levels, and with the prevalence of console ports, that has only gotten worse. New console generations determine what becomes feasible on the PC, most of the time. That even goes for the adoption of RT right now. Neither Nvidia nor the PC gaming space can or will carry that on its own. VR: same rocky start for gaming, chicken-and-egg problems, and it still remains a niche. And that technology isn't new either; it has been attempted before. Even natively developed DX12 or Vulkan games are only now starting to pop up, because consoles have now adopted those APIs, so there is an economic advantage.

It's good to recognize these facts because they place the practical use of all your upgrading in a rather different perspective. Trailing just under the top end of performance is generally the best way to go, and lagging a gen or two behind is very cost-effective. Match the mainstream as long and as closely as possible, and you'll have the fewest problems, the best value for money and a good/great experience, simply because the market demands it.
 
Joined
Jan 14, 2019
Messages
10,655 (5.29/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Exactly.

CPU performance in games comes down to optimization and how the API / abstraction is handled. We have literally a whole slew of DirectX versions' worth of proof. Every time, the burden is on developers to make use of what's possible and do it the right way. If they don't, it costs CPU cycles. GPU is simple: you just crunch pixel info. CPU is a timing thing, and it directly relates to how code is used.

For gaming, raw CPU performance was NEVER a big issue. In DirectX 11, the problem wasn't primarily a lack of CPU grunt, but the lack of threading and trouble handling high draw-call counts. Closer-to-metal APIs fix this; the CPUs are still the same. Similarly, a faster API can in that way also improve GPU performance on the same CPU. Why? Because there is less waiting to draw frames. The GPU and CPU are the same, and yet you get more pixels crunched.

The top-down view of CPU performance in gaming, based on an FPS number and an in-game counter, was always flawed, and it always will be. Processing is a pipeline, and the weakest link is really never in hardware (you meet specs or you don't) but in coding things properly. And yes, you can use more hardware to mitigate the problem of shitty code, but you're not fixing anything. You're just throwing your money at a problem someone else should have fixed properly; and in many games where CPU efficiency is abysmal, complaints lead to improvements in due time. Especially now that the APIs do support threading. Developers can barely get away with games choking on one thread now.
One might argue that game tests are irrelevant because they employ different code that uses hardware differently. One might argue that X API is irrelevant compared to another one for the same reason. One might argue that testing X game is irrelevant because it's not well optimised. All of these are valid points from a strictly scientific point of view, but you can't describe real user experience in a scientific way.

Also, there is no golden way to test anything, as all hardware and software are different. You can establish trends at best, but there will always be some deviation. For that reason, I say every piece of hardware is good hardware if it serves your needs, and every test is a good test if it relates to your specific use case. Arguing about which test method is irrelevant and why is itself irrelevant.

I do agree about games not being CPU-dependent, though (testing at 720p minimum with an RTX 3090 really is... irrelevant).
 
Joined
Sep 17, 2014
Messages
21,552 (6.00/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
every test is a good test if it relates to your specific use case. Arguing about which test method is irrelevant and why is itself irrelevant.

I do agree about games not being CPU-dependent, though (testing at 720p minimum with an RTX 3090 really is... irrelevant).
This is key and far too often overlooked. And it begins with a proper specification of your use case; quite a few of those are rather dishonest (with their own selves), and it often rests on a lack of insight/knowledge of how PCs and games work. OTOH, we're making progress as a gaming community too; think of the revelations on frame pacing, etc. So if the experience is truly lacking, we'll move in to fix it, because we can collectively recognize it. This is what makes up mainstream progress.

Good examples are everywhere. New consoles now support higher refresh rates, have multiple quality/performance modes, and are somewhat tweakable. And PCs have also moved towards the console space. We're blending; another trend that has been going on for years but is now also technically feasible.
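On the frame-pacing point: the revelation was that average FPS hides stutter, so you look at the worst frame times instead. A small sketch with invented frame times:

Code:
# Two runs with identical average FPS can feel completely different.
# Frame times in milliseconds; values invented for illustration.
smooth = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
stutter = [6, 6, 6, 6, 6, 6, 6, 6, 6, 46]

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def worst_frame_fps(frame_times_ms):
    # FPS implied by the single worst frame (a crude stand-in for 1% lows).
    return 1000.0 / max(frame_times_ms)

print(avg_fps(smooth), worst_frame_fps(smooth))    # 100.0 and 100.0
print(avg_fps(stutter), worst_frame_fps(stutter))  # 100.0 and ~21.7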
 
Joined
Jan 14, 2019
Messages
10,655 (5.29/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
This is key and far too often overlooked. And it begins with a proper specification of your use case; quite a few of those are rather dishonest (with their own selves), and it often rests on a lack of insight/knowledge of how PCs and games work.
Agreed. :)

It is commonly overlooked by reviewers as well. They tend to say things like "X CPU is crap because it gets beaten by Y CPU by a margin of 5%". Then they show graphs of averages across a million games when, in fact, nobody notices 5% in real life, and nobody plays "Average Game", as there is no such thing.
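To make the "Average Game" point concrete, here is a sketch with invented per-game numbers; a 5% average deficit can describe a situation no single game actually exhibits.

Code:
# Relative performance of CPU X vs CPU Y per game (1.00 = identical).
# Numbers invented: the 5% average hides that half the games are a tie.
relative = {"game A": 0.90, "game B": 1.00, "game C": 1.00, "game D": 0.90}
average = sum(relative.values()) / len(relative)
print(f"average: {average:.2f}")  # 0.95 -> "X loses by 5%"
# If you only play game B or game C, the two CPUs are equal for you.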

They also say things like "X CPU uses a lot more power, therefore it runs hotter" when actually the two statements share no logical connection: primarily, higher heat density (more heat in a smaller area) and heat dissipation issues lead to hotter chips. I had to learn this the hard way, not being able to cool an R5 3600 at stock with the same case and cooler that my i7-11700 is happy with, even with a modified 125 W PL1.
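Rough numbers for the heat-density argument; the die areas are approximate public figures and the wattages are the chips' stock power limits, so treat this as an order-of-magnitude sketch.

Code:
# Heat flux = package power / die area: the same watts over a smaller die
# are harder to cool. Die areas are approximate, used here as assumptions.
def heat_flux_w_per_mm2(watts: float, die_area_mm2: float) -> float:
    return watts / die_area_mm2

print(heat_flux_w_per_mm2(88.0, 74.0))    # R5 3600: ~1.19 W/mm2 (~74 mm2 7 nm CCD at 88 W PPT)
print(heat_flux_w_per_mm2(125.0, 276.0))  # i7-11700: ~0.45 W/mm2 (~276 mm2 die at 125 W PL1)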

In our informationally saturated times, one must read between the lines of marketing BS and reviewers' (false) conclusions, find the information crumbs that are relevant to one's own use case, and build a picture from there. X CPU isn't good because it beats Y CPU, but because it fits your use case. Whether it's a well-optimised game or a shitty console port doesn't matter. The fact that your PC runs it and you enjoy playing it does.
 