I don't think so; here comes the answer:
You guys were simply too fast. This guy is smarter than you thought, and he didn't really make a mistake; I guess people simply didn't understand his logic. It's a fact: the 8350 is faster now than at release, and it's probably faster than a 2500K when properly utilized. But then again, this is just versus a 2500K at *stock*, which means nothing. Who bought a K CPU, which cost extra, not to overclock it? Yeah. That said, the FX 8350 doesn't have the slightest chance once the 2500K is overclocked, even slightly, and the difference only grows with higher clocks. In the end, all that information was good for answering one thing: was the FX 8350 a futuristic CPU ahead of its time? Yes. Is it better now than at release? Probably, but it's still not good enough. An overclocked 2500K is barely good enough, but an FX 8350 isn't.
I do admit that I missed the very important bit that testing different games was the point. I'm also to blame in the sense that I took HUB's numbers as "gospel" (as Adored calls it) in order to "prove" that the methodology wasn't flawed after all.
His more recent video shows that current CPU benchmarking is indeed flawed (in this title, at the very least). Let me say, however, that what I'm trying to show is not the Intel vs. AMD CPU performance bit, but the difference you get in a supposedly CPU-bottlenecked game when changing from an nVidia card to an AMD one on BOTH CPUs:
You can't have it both ways: either the CPU is being bottlenecked or it isn't. Adored showed that both Intel and AMD benefited from changing from nVidia to AMD in DX12, and both lost in DX11. That the gap shrank in DX12 is not the issue I'm trying to address: that there is a gap at all is the issue. This proves the CPU wasn't being bottlenecked after all, or there wouldn't have been an increase on both CPUs: there was another variable that wasn't being accounted for.
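To illustrate with made-up numbers (purely hypothetical, not measurements): if a supposedly CPU-bound scene ran at, say, 90 FPS with the nVidia card and 100 FPS with the AMD card on *both* CPUs, the CPU clearly wasn't what was capping the frame rate, because swapping a component other than the CPU moved the result well beyond run-to-run variance.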
But there are variables here, because of which I think more testing is definitely required: he tested CrossFire vs. a single card, and that introduces another variable that doesn't have to be present: CF scaling. It also seems RotTR isn't the only game, because it happens in DX12's The Division too, as far as I know so far.
In fact, you don't even need two CPUs to test whether this is true, but you do need an nVidia card, an AMD card, and the RotTR game: just run RotTR in DX11 and DX12 with settings you're absolutely sure will bottleneck the CPU, first with both cards at stock and then with the highest overclock on the cards you can get.
If the CPU is bottlenecked "properly", then going from a stock nVidia card to an OCed one should yield margin-of-error differences, and the same should be true for a stock AMD card vs. an OCed one. But if the comparisons between manufacturers are a lot higher than margin of error, then you'll have your proof right there.
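To make that pass/fail criterion concrete, here's a minimal sketch of the comparison. All FPS numbers and the 3% margin-of-error threshold are made-up placeholders, not real benchmark results; you'd fill in your own averaged runs and your own measured run-to-run variance:

```python
# Hypothetical averaged FPS from repeated RotTR runs at supposedly CPU-bound settings.
results = {
    ("nvidia", "stock"): 92.0,   # placeholder
    ("nvidia", "oc"):    93.5,   # placeholder
    ("amd",    "stock"): 104.0,  # placeholder
    ("amd",    "oc"):    105.0,  # placeholder
}

MARGIN_OF_ERROR = 3.0  # percent; set from your own run-to-run variance


def pct_diff(a: float, b: float) -> float:
    """Percentage difference of b relative to a."""
    return (b - a) / a * 100.0


# Within a vendor, stock vs. OC should stay inside the margin of error
# if the CPU really is the limit.
for vendor in ("nvidia", "amd"):
    delta = pct_diff(results[(vendor, "stock")], results[(vendor, "oc")])
    verdict = "within" if abs(delta) <= MARGIN_OF_ERROR else "outside"
    print(f"{vendor}: stock -> OC = {delta:+.1f}% ({verdict} margin of error)")

# Across vendors (both at stock), a gap well above the margin of error
# means something other than the CPU is limiting, e.g. driver overhead.
gap = pct_diff(results[("nvidia", "stock")], results[("amd", "stock")])
verdict = "CPU-bound as assumed" if abs(gap) <= MARGIN_OF_ERROR else "not actually CPU-bound"
print(f"nvidia vs. amd at stock = {gap:+.1f}% ({verdict})")
```

Run it once per API (DX11 and DX12) with your own numbers: if the within-vendor deltas sit inside the margin of error but the cross-vendor gap doesn't, that's the proof.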
There's also this video that I found very interesting about CPU overhead on nVidia vs. AMD: