After watching this video, I'm not so sure anymore:
He claims that the "ancient" FX8350 became faster than the i5 2500K simply by testing with a much faster graphics card, and he shows that, with each increase in GPU performance, the gap to the 2500K narrows until it's reversed. To be fair, I personally have some doubts, because there are variables he didn't take into account, such as differences in motherboards, memory, storage speed and drivers, which may or may not contribute to the overall result, not to mention that in the final part he used the results of the FX8370 instead (due to not having the FX8350 in the benchmarks).
This finding contradicts the current thinking that, in order to take the GPU out of the equation, one needs to test at lower resolutions / details, so that the results let one say processor X is better than processor Y at gaming, and that any change made to the graphics card will never change that outcome. I understand the logic ... but is this really true?
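Just to make that reasoning explicit, here's a minimal sketch of the bottleneck model behind it (the frame rates are toy numbers I made up, not benchmark data): the delivered frame rate is roughly whichever of the CPU and the GPU can push fewer frames.

```python
# Toy model of the "lower the resolution to expose the CPU" reasoning.
# All frame rates below are made-up illustrative numbers, not benchmark data.

def delivered_fps(cpu_fps_limit: float, gpu_fps_limit: float) -> float:
    """The frame rate you actually see is capped by the slower component."""
    return min(cpu_fps_limit, gpu_fps_limit)

# Hypothetical CPU limits: how many frames each CPU can prepare per second.
cpu_limits = {"CPU_A": 90.0, "CPU_B": 110.0}

# Hypothetical GPU limits: high resolution strangles the GPU,
# low resolution lets it run far ahead of either CPU.
gpu_limits = {"1440p": 80.0, "720p": 250.0}

for res, gpu_fps in gpu_limits.items():
    for cpu, cpu_fps in cpu_limits.items():
        print(f"{res} + {cpu}: {delivered_fps(cpu_fps, gpu_fps):.0f} fps")

# At 1440p both CPUs show 80 fps (GPU-bound, so they look identical);
# at 720p the 90 vs 110 fps difference becomes visible (CPU-bound).
```

In terms of that model, the video's claim is that the CPU limit itself isn't fixed: it shifts depending on which graphics card (and driver) is feeding the CPU, which is exactly what testing at a low resolution assumes cannot happen.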
You may think: why am I even thinking about this with "ancient" CPUs? Because the same principle applies to current CPUs: if the testing methodology is indeed wrong, then everyone referencing reviews based on it is being misled (even if not intentionally), and that means a new way to test a gaming CPU must be found and this one scrapped.
And so I propose that our very own @W1zzard tests this (at his convenience) and answers this question so that there's zero doubt. Use a 2500K and an 8350 and pair each with a 1080 Ti, a 980 Ti and a 680 (a two-generation gap between each card, I think): only change the boards and CPUs while keeping everything else the same, if at all possible (including drivers), so that there are fewer variables to interfere with the final results.
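To be clear about what I'm asking for, here's a rough sketch of the test matrix and the comparison I'd like to see; the CPU and GPU names are just the ones proposed above, and the analysis step is my own hypothetical illustration, not W1zzard's actual methodology:

```python
# Hypothetical sketch of the proposed test matrix: two CPUs, three GPU tiers,
# same RAM / storage / drivers wherever possible. results[(cpu, gpu)] would
# hold the average fps measured for one game; values here are placeholders.
from itertools import product

cpus = ["i5-2500K", "FX-8350"]
gpus = ["GTX 680", "GTX 980 Ti", "GTX 1080 Ti"]  # slowest to fastest

results = {(cpu, gpu): None for cpu, gpu in product(cpus, gpus)}  # fill in from testing

def cpu_gap_per_gpu(results):
    """For each GPU tier, report how far the FX-8350 trails (or leads) the 2500K."""
    for gpu in gpus:
        intel = results[("i5-2500K", gpu)]
        amd = results[("FX-8350", gpu)]
        if intel is None or amd is None:
            print(f"{gpu}: not tested yet")
            continue
        gap = (amd / intel - 1.0) * 100.0
        print(f"{gpu}: FX-8350 is {gap:+.1f}% vs i5-2500K")

cpu_gap_per_gpu(results)

# The video's claim predicts the gap keeps moving (and eventually flips) as the
# GPU gets faster; the classic low-resolution methodology predicts it stays
# roughly constant once both configurations are CPU-bound.
```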
EDIT
I think this topic can be closed now, because someone has tested this and found the 2500K to be faster in all but 1 title (of the 16 tested):
It seems the methodology still holds: as such, there's no point in keeping this topic open, I think.
EDIT #2
The plot thickens ...
I do admit that I missed the very important bit that testing different games was the point. I'm also to blame in the sense that I took HUB's numbers as "gospel" (as Adored calls it) in order to "prove" that the methodology wasn't flawed after all.
His more recent video shows that current CPU benchmarking is indeed flawed (in this title at the very least). Let me however say that what I'm trying to show is not the Intel vs AMD CPU performance bit, but the difference you get in a supposedly CPU-bottlenecked game when changing from an nVidia card to an AMD one on BOTH CPUs:
View attachment 85800
You can't have it both ways: either the CPU is being bottlenecked or it isn't. Adored showed that both the Intel and the AMD CPUs benefited from switching from nVidia to AMD in DX12, and both lost performance in DX11. That the gap shrunk in DX12 is not the issue I'm trying to address: that there is a gap at all is the issue. This proved the CPU wasn't being bottlenecked after all, or there wouldn't have been an increase on both CPUs: there was another variable that wasn't being accounted for.
But there are still variables here, which is why I think more testing is definitely required: he tested CrossFire vs a single card, and that introduces another variable that doesn't have to be present: CF scaling. And RotTR isn't the only game: the same happens in The Division under DX12, as far as I know so far.
In fact, you don't even need two CPUs to test whether this is true, but you do need an nVidia card, an AMD card and the RotTR game: just run RotTR in DX11 and DX12 with settings you're absolutely sure will bottleneck the CPU, with both cards at stock and then with the highest overclock you can get on them.
If the CPU is bottlenecked "properly", then going from a stock nVidia card to an overclocked one should yield margin-of-error differences, and the same should be true for the stock AMD card vs the overclocked one; but if the differences between manufacturers are a lot higher than margin of error, then you'll have your proof right there.
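A minimal sketch of that check, assuming a ~3% run-to-run margin of error and using placeholder fps values I made up just to show the comparison:

```python
# Sketch of the stock-vs-OC / vendor-vs-vendor sanity check described above.
# Replace the placeholder fps values with your own RotTR runs (DX11 and DX12).

MARGIN_OF_ERROR = 0.03  # assume ~3% run-to-run variance; adjust to your own data

# Average fps per configuration -- placeholder numbers, not real measurements.
runs = {
    ("nVidia", "stock"): 100.0,
    ("nVidia", "OC"):    101.5,
    ("AMD",    "stock"): 112.0,
    ("AMD",    "OC"):    113.0,
}

def within_margin(a: float, b: float, margin: float = MARGIN_OF_ERROR) -> bool:
    """True if two fps averages differ by no more than the assumed margin of error."""
    return abs(a - b) / min(a, b) <= margin

# 1) If the CPU is truly the limit, overclocking the GPU should change ~nothing:
for vendor in ("nVidia", "AMD"):
    same = within_margin(runs[(vendor, "stock")], runs[(vendor, "OC")])
    print(f"{vendor}: stock vs OC within margin of error? {same}")

# 2) ...but if swapping vendors at the same "CPU-bound" settings moves fps by far
# more than the margin of error, something other than the CPU is at play
# (driver overhead, for instance), which is exactly the point being argued.
vendor_gap_ok = within_margin(runs[("nVidia", "stock")], runs[("AMD", "stock")])
print(f"nVidia vs AMD at stock within margin of error? {vendor_gap_ok}")
```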
There's also this video that I found very interesting, about CPU overhead on nVidia vs AMD: