
Is CPU game benchmarking methodology (currently) flawed?

Joined
Nov 26, 2013
Messages
816 (0.20/day)
Location
South Africa
System Name Mroofie / Mroofie
Processor Inte Cpu i5 4460 3.2GHZ Turbo Boost 3.4
Motherboard Gigabyte B85M-HD3
Cooling Stock Cooling
Memory Apacer DDR3 1333mhz (4GB) / Adata DDR3 1600Mhz(8GB) CL11
Video Card(s) Gigabyte Gtx 960 WF
Storage Seagate 1TB / Seagate 80GB / Seagate 1TB (another one)
Display(s) Philips LED 24 Inch 1080p 60Hz
Case Zalman T4
Audio Device(s) Meh
Power Supply Antec Truepower Classic 750W 80 Plus Gold
Mouse Meh
Keyboard Meh
VR HMD Meh
Software Windows 10
Benchmark Scores Meh
I'd say it's not completely flawed, but it is a bit. In the end, seeing the future for yourself instead of trying to force it with low-resolution testing is what made the difference. Real results are worth more than a low-resolution simulation. The FX 8350 certainly has more raw power than the 2500K, but only if it's used well, and exactly that took time. It's more on par with the 2600K than with the 2500K.

BTW, about the off-topic blather:
He said he talks in a more pronounced accent in his videos, not that he "fakes" anything - also, it's of no importance or consequence to the topic; haters gonna hate anyway.
It's been how many years now?
How long should I wait for a CPU to be used effectively? 12, 20 years? :rolleyes:

The Battlefield video he uses is from an AMD shill, so clearly that's not working in his favor.

Especially since he said they were friends, or that he knew him well, something like that, if I remember correctly (on mobile, can't check now).
 
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
It's been how many years now?
How long should I wait for a CPU to be used effectively? 12, 20 years? :rolleyes:

The Battlefield video he uses is from an AMD shill, so clearly that's not working in his favor.

Especially since he said they were friends, or that he knew him well, something like that, if I remember correctly (on mobile, can't check now).
Depends who's using it, and on what; mine's been flat out for the 4 years I've owned it. Again, yawn.
Hyperbole, eh? 12-20 years, ffs, really, are you 5?
 

HTC

Joined
Apr 1, 2008
Messages
4,664 (0.76/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
There seems to be some confusion here: I'm asking about the methodology!

The way the current methodology works is that you test CPUs with a very fast GPU @ low resolutions / details (to eliminate the GPU as a variable) in a variety of gaming scenarios: this will tell you whether CPU X is better than CPU Y in gaming (and whichever other CPUs are included in the review), and no faster card you test with in the future will change that outcome.

HOWEVER, Adored has found this is not the case: the example he showed has an FX-8350 going from over 10% slower to 10% faster, which contradicts the methodology's theory.

There's a BIG catch however, which is what I was trying to get tested: there were changes in the hardware used, as well as in drivers and even in the games themselves (I didn't mention the games bit in the OP: that's IMO a very BIG variable).

What I was trying to get answered is whether his findings still hold once as many variables as possible are eliminated.

This has serious implications because, if a proper review shows Adored is right, then the methodology is flawed and needs to be scrapped.
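
To picture why "eliminate the GPU as a variable" works at all, here is a minimal sketch (invented numbers, nobody's actual review data) of the bottleneck model behind low-resolution testing: the frame rate you observe is roughly the lower of the CPU-limited and GPU-limited rates, so raising the GPU-limited rate by dropping the resolution is what exposes the CPU difference.

```python
# Minimal sketch of the bottleneck model behind low-res CPU testing.
# All numbers are hypothetical and only serve to illustrate the idea.

def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The frame rate is capped by whichever stage finishes its per-frame work last."""
    return min(cpu_fps, gpu_fps)

# Hypothetical CPU-limited rates: how fast each CPU could feed frames to the GPU.
cpu_x, cpu_y = 140.0, 110.0

# At 4K the GPU is the limit, so both CPUs look identical:
print(observed_fps(cpu_x, gpu_fps=70.0), observed_fps(cpu_y, gpu_fps=70.0))    # 70.0 70.0

# At 720p the GPU limit is far higher, so the CPU difference becomes visible:
print(observed_fps(cpu_x, gpu_fps=300.0), observed_fps(cpu_y, gpu_fps=300.0))  # 140.0 110.0
```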
 
Joined
Apr 2, 2009
Messages
3,505 (0.61/day)
You know, all these gaming benchmarks are ultimately pointless.

What you really need is a benchmark that displays raw CPU power in single- and multi-threaded workloads - which is Cinebench.

That's really all you need. I can make decisions based on that single bench. No need for all these gaming benchmarks that differ from one another like night and day.
 
Joined
Nov 13, 2007
Messages
10,884 (1.74/day)
Location
Austin Texas
System Name stress-less
Processor 9800X3D @ 5.42GHZ
Motherboard MSI PRO B650M-A Wifi
Cooling Thermalright Phantom Spirit EVO
Memory 64GB DDR5 6400 1:1 CL30-36-36-76 FCLK 2200
Video Card(s) RTX 4090 FE
Storage 2TB WD SN850, 4TB WD SN850X
Display(s) Alienware 32" 4k 240hz OLED
Case Jonsbo Z20
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse DeathadderV2 X Hyperspeed
Keyboard 65% HE Keyboard
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
There seems to be some confusion here: I'm asking about the methodology!

The way the current methodology works is that you test CPUs with a very fast GPU @ low resolutions / details (to eliminate the GPU as a variable) in a variety of gaming scenarios: this will tell you whether CPU X is better than CPU Y in gaming (and whichever other CPUs are included in the review), and no faster card you test with in the future will change that outcome.

HOWEVER, Adored has found this is not the case: the example he showed has an FX-8350 going from over 10% slower to 10% faster, which contradicts the methodology's theory.

There's a BIG catch however, which is what I was trying to get tested: there were changes in the hardware used, as well as in drivers and even in the games themselves (I didn't mention the games bit in the OP: that's IMO a very BIG variable).

What I was trying to get answered is whether his findings still hold once as many variables as possible are eliminated.

This has serious implications because, if a proper review shows Adored is right, then the methodology is flawed and needs to be scrapped.

The assumptions are that 1) the game will scale to high rez very much the same way it performs at low rez, therefore testing at low rez will create a CPU bottleneck and more clearly show the differences; and 2) a future GPU will scale the way a current GPU performs at low rez... i.e. the 2080 that you will pop into your rig will behave the same way at 1440p that your current 1080 behaves at 1080p.

If that assumption breaks (i.e. the performance scaling doesn't hold and one proc actually ends up falling off more slowly than the other, and they change places), then the current methodology is flawed, and we should test at all resolutions (some sites do this).

However, the widely accepted belief is that low rez just highlights the bottlenecks of the CPU, which will still be somewhat present in the high rez tests, and also with future GPUs.

For a while after Sandy Bridge first came out, the CPU became almost irrelevant for games... you could get the cheapest quad-core i5 and the cheapest DDR3 kit and it wouldn't matter - everything was GPU-bound above 720p. Then CPUs scaled at +5-10% per generation and GPUs scaled at +60-80%. I just read a 1080 Ti review where they were seeing bottlenecking on an overclocked Skylake at 1440p... so the low rez tests are not all that trivial.
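
To make assumption 2, and the way it can break, concrete, here is a toy Amdahl-style model; the numbers are assumed, not measured, but it shows the kind of flip a low-rez test taken today cannot predict: once newer games parallelize more of the frame, a slower-per-core CPU with more cores can overtake the original winner.

```python
# Toy Amdahl-style model of the CPU-limited frame rate. All numbers are invented.

def cpu_limited_fps(per_core_speed: float, cores: int,
                    frame_work: float, parallel_share: float) -> float:
    """frame_work is ms of CPU work per frame at speed 1.0; part of it scales with cores."""
    serial = frame_work * (1.0 - parallel_share) / per_core_speed
    parallel = frame_work * parallel_share / (per_core_speed * cores)
    return 1000.0 / (serial + parallel)

quad_fast = dict(per_core_speed=1.3, cores=4)   # think 2500K-class: fewer, faster cores
eight_slow = dict(per_core_speed=1.0, cores=8)  # think FX-8350-class: more, slower cores

# 2012-era game: ~40% of the frame parallelizes -> the fast quad wins the low-rez test
print(cpu_limited_fps(frame_work=10, parallel_share=0.40, **quad_fast),
      cpu_limited_fps(frame_work=10, parallel_share=0.40, **eight_slow))

# Later game: ~85% of the frame parallelizes -> the ranking flips,
# which the original low-rez result could not have predicted
print(cpu_limited_fps(frame_work=10, parallel_share=0.85, **quad_fast),
      cpu_limited_fps(frame_work=10, parallel_share=0.85, **eight_slow))
```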
 
Joined
Nov 26, 2013
Messages
816 (0.20/day)
Location
South Africa
System Name Mroofie / Mroofie
Processor Inte Cpu i5 4460 3.2GHZ Turbo Boost 3.4
Motherboard Gigabyte B85M-HD3
Cooling Stock Cooling
Memory Apacer DDR3 1333mhz (4GB) / Adata DDR3 1600Mhz(8GB) CL11
Video Card(s) Gigabyte Gtx 960 WF
Storage Seagate 1TB / Seagate 80GB / Seagate 1TB (another one)
Display(s) Philips LED 24 Inch 1080p 60Hz
Case Zalman T4
Audio Device(s) Meh
Power Supply Antec Truepower Classic 750W 80 Plus Gold
Mouse Meh
Keyboard Meh
VR HMD Meh
Software Windows 10
Benchmark Scores Meh
Depends who's using it, and on what; mine's been flat out for the 4 years I've owned it. Again, yawn.
Hyperbole, eh? 12-20 years, ffs, really, are you 5?
old enough to call out bs :rolleyes:
 
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
old enough to call out bs :rolleyes:
I haven't waffled any, so whose, the OP's?
Either way, our opinions differ; I'm fine with that.
 
Joined
Jan 8, 2017
Messages
9,577 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Bottom line is that Ryzen clearly has a lot more power left in it while Kaby Lake does not; this is something important that should be included as a side note alongside whatever testing methodology you use.

Remember the Q6600 and the other first quad-core CPUs? They were completely pointless for gaming at the time they were released; that's what a lot of people said, strongly advising you to buy a dual core instead. Today a Q6600 runs GTA 5 decently with a good GPU, while on most dual cores from the same era, even much higher clocked ones, it is literally unplayable; no testing methodology told us that.

So to this:

How long should I wait for a CPU to be used effectively? 12, 20 years? :rolleyes:

Yes, it seems like it really took about 10 years to become relevant, but it did eventually happen. Future-proofing is a real thing with CPUs and people care about it, or they should, since not everyone can afford to simply buy whatever performs best each generation.

So, shockingly, I would take a 1700 over a 7700 any day, knowing that today I get 10 fewer frames in a game that already runs at 100 fps, but that a couple of years into the future performance on the 7700 might be seriously crippled. Again, no current testing methodology says this.

So don't just take every chart you see for granted if you really care about this stuff, and make some predictions yourself. Whatever insanely precise way of testing CPU performance you come up with is only going to hold for the type of software you run today, so yes, it is a flawed way of doing things.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
Bottom line is that Ryzen clearly has a lot more power left in it while Kaby Lake does not; this is something important that should be included as a side note alongside whatever testing methodology you use.


Meh. AMD said that bulldozer would do 5 GHz, and that only took like 4 years, so it took a long time for AMD to capitalize on that architecture, and I do not expect AMD to be able to capitalize on Ryzen any faster. So, that "potential" is meaningless. IF it was an actual priority for AMD, they'd have dealt with it prior to the launch, but they failed to do so. You can't tell me they had chips, tested them, knew they worked, saw the performance, and then unintentionally ignored the issues... they ignored the issues on purpose. I fully expect them to keep ignoring them, too.
 
Joined
Jan 8, 2017
Messages
9,577 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Meh. AMD said that bulldozer would do 5 GHz, and that only took like 4 years, so it took a long time for AMD to capitalize on that architecture, and I do not expect AMD to be able to capitalize on Ryzen any faster. So, that "potential" is meaningless. IF it was an actual priority for AMD, they'd have dealt with it prior to the launch, but they failed to do so. You can't tell me they had chips, tested them, knew they worked, saw the performance, and then unintentionally ignored the issues... they ignored the issues on purpose. I fully expect them to keep ignoring them, too.

I was referring simply to what the chip can do, not its issues. And I'm afraid it's not in their power to capitalize on what Ryzen can do from that point of view (at least not entirely); you didn't seriously expect them to run up to every software developer and make them rewrite their stuff just for this one product, did you? So sure, the potential extra power is meaningless today, but you can never know when the industry will take a big turn and it will prove useful; that's why I said this should be mentioned as a side note, not an actual guideline.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
I was referring simply to what the chip can do, not its issues. And I'm afraid it's not in their power to capitalize on what Ryzen can do from that point of view (at least not entirely); you didn't seriously expect them to run up to every software developer and make them rewrite their stuff just for this one product, did you? So sure, the potential extra power is meaningless today, but you can never know when the industry will take a big turn and it will prove useful; that's why I said this should be mentioned as a side note, not an actual guideline.
Not every dev, but Microsoft, for Windows 10? Yeah, I expect them to deal with such issues prior to a launch.

I don't care about "maybes", I care about what you get for sure.
 
Joined
Jan 8, 2017
Messages
9,577 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Not every dev, but Microsoft, for Windows 10? Yeah, I expect them to deal with such issues prior to a launch.

Well, after looking more into this, it seems there isn't much MS can do about it; every weak spot Zen has is due to its nature, not because of what the OS is doing. All that's left is for developers to take these into account. Even if they don't, there is still room for more performance to be had.

I don't care about "maybes", I care about what you get for sure.

That's a perfectly fine way of looking at this matter, but I for one would much rather pick a product that's good enough today and has some "maybes" than one that has none, as I would prefer not to be forced to upgrade it sooner.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.52/day)
Well, after looking more into this, it seems there isn't much MS can do about it; every weak spot Zen has is due to its nature, not because of what the OS is doing. All that's left is for developers to take these into account. Even if they don't, there is still room for more performance to be had.

Says who? Not AMD... they say everything is working fine. Like I said, they are purposefully ignoring any present "issues".



That's a perfectly fine way of looking at this matter, but I for one would much rather pick a product that's good enough today and has some "maybes" than one that has none, as I would prefer not to be forced to upgrade it sooner.

I can understand that opinion, and you holding it, but having been let down by AMD's "promises" time and again over the years has me quite hesitant to believe there will be any improvement.


You see, I already have Ryzen and boards and memory... actual AM4-rated memory... I have a system sitting here next to me running benchmarks as we speak for a board review.


And I see nothing wrong with Ryzen as is. So there is nothing left to improve upon. Ryzen is quite good. IS it a bit disappointing? Not to me. It is EXACTLY what I expected. Did the hype train kill the public's perception of Ryzen? Yeah, it did, and in a big way, but not my perception.
 
Joined
Jan 8, 2017
Messages
9,577 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Says who? Not AMD... they say everything is working fine. Like I said, they are purposefully ignoring any present "issues".





I can understand that opinion, and you holding it, but having been let down by AMD's "promises" time and again over the years has me quite hesitant to believe there will be any improvement.


You see, I already have Ryzen and boards and memory... actual AM4-rated memory... I have a system sitting here next to me running benchmarks as we speak for a board review.


And I see nothing wrong with Ryzen as is. So there is nothing left to improve upon. Ryzen is quite good. IS it a bit disappointing? Not to me. It is EXACTLY what I expected. Did the hype train kill the public's perception of Ryzen? Yeah, it did, and in a big way, but not my perception.

To be fair, all of these discussions could have been avoided if AMD had never said the word "gaming" and never shown any "live gaming benchmarks/comparisons". That would not have let the hype grow as big as it did. So I condemn them too for some of these promises; they've always been dumb in this regard and never learned. I cannot recall a truly crap product from them, but all the BS they said often made it look like crap to the public. I have learned to ignore this BS as well and take it for what it is.
 

Kanan

Tech Enthusiast & Gamer
Joined
Aug 22, 2015
Messages
3,517 (1.03/day)
Location
Europe
System Name eazen corp | Xentronon 7.2
Processor AMD Ryzen 7 3700X // PBO max.
Motherboard Asus TUF Gaming X570-Plus
Cooling Noctua NH-D14 SE2011 w/ AM4 kit // 3x Corsair AF140L case fans (2 in, 1 out)
Memory G.Skill Trident Z RGB 2x16 GB DDR4 3600 @ 3800, CL16-19-19-39-58-1T, 1.4 V
Video Card(s) Asus ROG Strix GeForce RTX 2080 Ti modded to MATRIX // 2000-2100 MHz Core / 1938 MHz G6
Storage Silicon Power P34A80 1TB NVME/Samsung SSD 830 128GB&850 Evo 500GB&F3 1TB 7200RPM/Seagate 2TB 5900RPM
Display(s) Samsung 27" Curved FS2 HDR QLED 1440p/144Hz&27" iiyama TN LED 1080p/120Hz / Samsung 40" IPS 1080p TV
Case Corsair Carbide 600C
Audio Device(s) HyperX Cloud Orbit S / Creative SB X AE-5 @ Logitech Z906 / Sony HD AVR @PC & TV @ Teufel Theater 80
Power Supply EVGA 650 GQ
Mouse Logitech G700 @ Steelseries DeX // Xbox 360 Wireless Controller
Keyboard Corsair K70 LUX RGB /w Cherry MX Brown switches
VR HMD Still nope
Software Win 10 Pro
Benchmark Scores 15 095 Time Spy | P29 079 Firestrike | P35 628 3DM11 | X67 508 3DM Vantage Extreme
There seems to be some confusion here: I'm asking about the methodology!

The way the current methodology works is that you test CPUs with a very fast GPU @ low resolutions / details (to eliminate the GPU as a variable) in a variety of gaming scenarios: this will tell you whether CPU X is better than CPU Y in gaming (and whichever other CPUs are included in the review), and no faster card you test with in the future will change that outcome.

HOWEVER, Adored has found this is not the case: the example he showed has an FX-8350 going from over 10% slower to 10% faster, which contradicts the methodology's theory.

There's a BIG catch however, which is what I was trying to get tested: there were changes in the hardware used, as well as in drivers and even in the games themselves (I didn't mention the games bit in the OP: that's IMO a very BIG variable).

What I was trying to get answered is whether his findings still hold once as many variables as possible are eliminated.

This has serious implications because, if a proper review shows Adored is right, then the methodology is flawed and needs to be scrapped.
TBH, I don't need more data to be quite sure that the FX 8350 scaled better over time. We had examples of this already when the CPU was quite new: when Crysis 3 utilized all its cores in one level (the jungle level), it matched the speed of the i7 2600K or 2700K. Now that more games take advantage of more cores, it's obvious to me that the "10% faster" instead of "10% slower" should hold true.

Ryzen is no disappointment to anyone who had realistic expectations. As for its gaming performance - I never expected it to be faster than the 7700K; I simply expected good to very good performance all around (games, apps, server) and I'm not disappointed at all. At the same time a lot of people were raging, and I was like "it's performing quite well; seeing where AMD achieved this kind of performance from, you should all rather be happy than behave like this".

Also, over time Ryzen will easily be better than the 7700K: that's just a quad core with high clocks, and it has no chance once more than 4 cores are properly utilized; even the 1700 should easily be faster then. The games where it performs a bit disappointingly are maybe, just maybe, utilizing the CPU the wrong way. We should know more about that once the 4-core Ryzens arrive, since games will then be prevented from crosstalking between quad-core modules (CCX), which adds latency and costs bandwidth, because there simply isn't a second CCX that could penalize this. As I see it, some games work properly, and some don't. A big disappointment for game lovers? I don't see why. When exactly did a new architecture happen to be fully utilized from the get-go? Not even the Core architecture (Core i7 etc.) was fully utilized from the get-go, and that despite it being related to the Core 2 architecture (Core 2 Quad etc.). Give it some freaking time. If the FX 8350 can scale to be rather "okay", Ryzen will scale to be "great", I'm sure.

AMD designed Ryzen to be, first of all, a great server / workstation CPU, and they fully delivered and are even winning there against Intel - the big money is there, not in the gaming department. If AMD makes some big money that way, you can expect AMD to design better CPUs for gaming as well in the future - I believe this is also why Lisa Su always said "the best is yet to come". This is the first of a new great line of CPUs that will deliver for everyone - in time.
 

HTC

Joined
Apr 1, 2008
Messages
4,664 (0.76/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 5800X3D
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Pulse 6600 8 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 20.04.6 LTS
Let me try and be a bit more clear:

This topic is NOT about Ryzen: it's about gaming CPU benchmarking and its current methodology!!!!

It just so happens that I used a video talking about the methodology AND about Ryzen.

Adored isn't the only one who can grab something out of someone else's review / video to try and make a point: in this case, about the validity (or lack thereof) of the current methodology for gaming CPU benchmarking.
 

SL2

Joined
Jan 27, 2006
Messages
2,463 (0.36/day)
IMO, dropping the resolution in CPU benchmarks is just as bad as using graphs that don't start at zero.
Also, why are you surprised? Because Intel told us that going back in time to 2009* is best? And never mind our ignoring almost a decade of technological progress? :)

*that's when the first quad core came out. Yeap.
Try 2006. ;)

https://www.techpowerup.com/reviews/Intel/QX6700/
(480p gaming benchmarks included :D)
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
low res, low setting, fast gpu testing BY ITSELF is NOT a good way to show CPU differences.
Settings are the real culprit here; some of them make a game much less CPU intensive, right? In fact, some of the most popular performance tweaks in games are view distance, detail distance and/or some clutter density setting... because those take a substantial amount of pressure off both the GPU and the CPU.
In my opinion, low res (+ frame scaling at 0.25 :laugh:), ultra settings (with AA/post-processing/geometry on low), fast GPU testing is a pretty good way to show CPU differences in games ... but you gotta find a way to replicate significant stress reliably (in real game play).
An exact methodology would require measuring the CPU and GPU hit of each setting, and then choosing settings so the GPU has the least possible amount of work and the CPU the maximum possible amount of work inside a frame.
The question is what we gain when all we do is analyze relative performance in this case... one goes through all that trouble just to show a very similar column graph, only slightly stretched :kookoo:
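
As a rough sketch of the "measure the CPU and GPU hit of each setting" idea (the per-setting costs below are invented, purely illustrative), picking a CPU-heavy test profile could look like this:

```python
# Hypothetical per-frame costs added by each setting value: (cpu_ms, gpu_ms).
# The numbers are made up; the point is the selection rule, not the data.
SETTINGS = {
    "view_distance": {"low": (1.0, 0.5), "high": (4.0, 2.0)},
    "clutter_density": {"low": (0.5, 0.3), "high": (3.0, 1.5)},
    "anti_aliasing": {"off": (0.0, 0.0), "msaa_8x": (0.2, 6.0)},
    "post_processing": {"off": (0.0, 0.0), "ultra": (0.3, 4.0)},
}

def pick_cpu_heavy_profile() -> dict:
    """For each setting, keep the value that adds the most CPU work relative to GPU work."""
    profile = {}
    for name, options in SETTINGS.items():
        profile[name] = max(options, key=lambda value: options[value][0] - options[value][1])
    return profile

print(pick_cpu_heavy_profile())
# -> view_distance/clutter_density high (CPU-heavy), AA and post-processing off (mostly GPU)
```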
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.86/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
You can't tell me they had chips, tested them, knew they worked, saw the performance, and then unintentionally ignored the issues... they ignored the issues on purpose. I fully expect them to keep ignoring them, too.
This is the sort of thing that fills me with such confidence in AMD products.

/s

I'll stick to Intel and avoid the headaches, thanks.
 
Joined
Dec 31, 2009
Messages
19,374 (3.53/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Settings are the real culprit here; some of them make a game much less CPU intensive, right? In fact, some of the most popular performance tweaks in games are view distance, detail distance and/or some clutter density setting... because those take a substantial amount of pressure off both the GPU and the CPU.
In my opinion, low res (+ frame scaling at 0.25 :laugh:), ultra settings (with AA/post-processing/geometry on low), fast GPU testing is a pretty good way to show CPU differences in games ... but you gotta find a way to replicate significant stress reliably (in real game play).
An exact methodology would require measuring the CPU and GPU hit of each setting, and then choosing settings so the GPU has the least possible amount of work and the CPU the maximum possible amount of work inside a frame.
The question is what we gain when all we do is analyze relative performance in this case... one goes through all that trouble just to show a very similar column graph, only slightly stretched :kookoo:
I still don't buy it... I don't play at low res... all this does, settings and all, is EXAGGERATE any effect... period. I want to see it with the settings/res we play at... not settings/res which exaggerate the variable we are testing.
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
is EXAGGERATE any effect... period.
Well, the point is to exaggerate the effect when there is not much spread between the graph bar values (to show any fps difference between Ryzen SKUs, for example)... the argument "why do it when the effect is already measurable at the settings/res we play at" is a valid one when it applies, and I pointed that out in my post.
 
Joined
Dec 31, 2009
Messages
19,374 (3.53/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I am saying exaggerating is the problem, as it doesn't extrapolate to higher settings and resolutions. It is trying to describe an issue which isn't much of one when run at 1080p with Ultra settings or higher (where people actually use these cards and those settings), so to MAKE it (more of) an issue, it's run at very low res with a high-end card and low settings. Makes no sense whatsoever to me. None.

So, if I am understanding you right, yes, it is a valid point, WHEN IT APPLIES, which is for those people who run a 1070+ at lower than 1080p res on low settings. Please tell me there are zero people in this world who do that...

In my opinion, low res (+ frame scaling at 0.25 :laugh:), ultra settings (with AA/post-processing/geometry on low), fast GPU testing is a pretty good way to show CPU differences in games
In the end, I totally disagree with this statement. (and of course, that is OK. :))
 
Joined
Sep 17, 2014
Messages
22,830 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
There seems to be some confusion here: I'm asking about the methodology!

The way the current methodology works is that you test CPUs with a very fast GPU @ low resolutions / details (to eliminate the GPU as a variable) in a variety of gaming scenarios: this will tell you whether CPU X is better than CPU Y in gaming (and whichever other CPUs are included in the review), and no faster card you test with in the future will change that outcome.

HOWEVER, Adored has found this is not the case: the example he showed has an FX-8350 going from over 10% slower to 10% faster, which contradicts the methodology's theory.

There's a BIG catch however, which is what I was trying to get tested: there were changes in the hardware used, as well as in drivers and even in the games themselves (I didn't mention the games bit in the OP: that's IMO a very BIG variable).

What I was trying to get answered is whether his findings still hold once as many variables as possible are eliminated.

This has serious implications because, if a proper review shows Adored is right, then the methodology is flawed and needs to be scrapped.


The methodology isn't flawed. The only thing flawed is the people READING benchmark results.

A benchmark is the following:
"The measure of performance on a specific piece of hardware at a specific point in time, in a specific setup"

Change parameters in these specifics, and you get different results.

There are several key differences in specifics when you pair an old CPU with a different GPU, or when you pair an old CPU with the same benchmark suite at a different point in time:

- GPU Drivers
- OS version & changes
- GPU architecture (for example, the difference in DX11 CPU load between AMD and Nvidia GPUs, even if they are at a similar performance level, would produce surprising results, even on similarly performing CPUs; like so: http://www.tomshardware.com/reviews/crossfire-sli-scaling-bottleneck,3471.html)
- API changes

So, bottom line: always use a benchmark that is:
- Relevant (to your use case and your system - the Ryzen release showed how people can forget this, judging a workstation-class CPU by mainstream/lower-core-count CPU standards)
- Recent (same architecture, same OS, same driver branch)

It ain't rocket science, but just quickly glancing over a few graphs DOES NOT give you any good information. Benchmark results need some attention to really grasp what they are trying to tell you.
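
A small sketch of the "relevant and recent" point (the field names and numbers below are made up, purely illustrative): a result only supports a CPU comparison when everything except the CPU matches between the runs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchResult:
    cpu: str
    gpu: str
    gpu_driver: str
    os_build: str
    game_version: str
    avg_fps: float

def comparable_for_cpu_test(a: BenchResult, b: BenchResult) -> bool:
    """True only if the CPU is the sole variable between the two runs."""
    return (a.gpu, a.gpu_driver, a.os_build, a.game_version) == (
        b.gpu, b.gpu_driver, b.os_build, b.game_version
    )

# Hypothetical results, purely illustrative:
r1 = BenchResult("FX-8350", "GTX 1070", "378.92", "Win10 1607", "1.07", 84.0)
r2 = BenchResult("i5-2500K", "GTX 1070", "378.92", "Win10 1607", "1.07", 97.0)
r3 = BenchResult("FX-8350", "GTX 1070", "381.65", "Win10 1703", "1.09", 91.0)

print(comparable_for_cpu_test(r1, r2))  # True  -> a fair CPU comparison
print(comparable_for_cpu_test(r1, r3))  # False -> driver, OS and game version changed too
```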
 
Joined
Feb 8, 2012
Messages
3,014 (0.64/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
So, if I am understanding you right, yes, it is a valid point, WHEN IT APPLIES, which is for those people who run a 1070+ at lower than 1080p res on low settings. Please tell me there are zero people in this world who do that...
You, me and everybody else are aware of how we, the people, play our games... we put everything on ultra and the resolution to native, sometimes we curse and swear because of the fps dips, then adjust the settings only as much as is needed :)
In other words, you are not understanding me right, so let me put it the other way... using high resolution when benching CPUs in games is OK (it applies) when the game is CPU-hungry enough that you get a meaningful value range for your graph (for example, if you don't like differences that are fractions of a frame)... if you want a wider value range for your graph in those couple of games (that's when it doesn't apply), you lessen the burden on the GPU and get a less compressed graph. It's fine because you are trying to analyze relative CPU performance executing that particular game code. It's kind of a game-by-game basis IMHO, to get a more readable graph.
Now, the argument that we should always and exclusively test CPUs in games at UHD res is reasonable, to an extent, only if somehow lowering the resolution lowers the amount of work the CPU has to do inside a frame. Does it? A valid question, because games usually adjust the level-of-detail system at higher resolutions, but LOD is all about geometry, not draw calls. Tough one.
 
Joined
Jan 8, 2017
Messages
9,577 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
but LOD is all about geometry, not draw calls.

But those are actually linked: higher levels of geometry = a higher number of draw calls. Depending on how the batches are handled by the engine it may not impact performance that much, but it still has an effect. Though most games today use a fixed level of geometry; that's why in some games low vs. high doesn't look that far apart and CPU usage remains almost the same. Things like tessellation can still have this effect if not implemented efficiently; it's the reason why AMD cards got outperformed by Nvidia in this area in the past, as their weak DX11 drivers were hammered by it.
 