Anyway...

I only saw two cringeworthy moments in the FX 6300 vs i5 6400 vs i5 2400 comparison video. One of them is the stutterfest on the 2400 in BF5 at 0:46, which also presented itself on the 6400 at 0:55. The other one is when CP77 dipped below 30 fps on the 2400 at the end of the video. Neither of these is a deal breaker, though, as the general gameplay experience is largely stutter-free. Also, I wonder what point the video's maker wanted to make by including the middle game (was it GTA5?). It looks buttery smooth on all 3 systems. Maybe that's the point?
Because GTA 5 used to be heavy on CPUs. My own FX 6300 was only able to run it at 40-60 fps. As far as I know, no FX chip at stock can handle GTA 5 properly, except maybe the 9370 and 9590. In the mountains GTA runs fine, but not so much in the city, where you have lots of NPCs and lots of objects. One odd thing is that grass tends to reduce fps a lot, as it seemingly loads the CPU a lot more. I remember that in GTA 4, if I didn't turn off shadows (which, to be honest, looked a bit garbage anyway), the game was stuttering and dipping a lot, often into the 30s. Without shadows, the CPU ran GTA 4 at nearly 60 fps all the time. Another fun fact: after GTA 5 loads up, if you kill Rockstar's Social Club task, you can gain like 5-10 fps. I didn't test that on the FX 6300, but I tested it on an Athlon 870K, which is 4 FX cores with less L2 cache and no L3 cache. It does break the game a little; as far as I know, the in-game stocks stop working. Anyway, if you really want GTA 5 to be actually smooth on FX stuff, you have to overclock those chips. I now run GTA 5 on an i5 10400F with VSync; it makes 60 fps feel like 120 fps on a high refresh rate monitor and there's seemingly no input lag (I also use Radeon Anti-Lag, so maybe that's why).
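
If anyone wants to try the Social Club trick without digging through Task Manager every time, here's a rough Python sketch of the idea. The process name is my guess, not something I pulled from the game's files, so check what it's actually called on your own install before relying on it:

import subprocess

# Guessing the process name here -- check Task Manager on your own install,
# Rockstar has renamed the Social Club helper between versions.
PROCESS_NAME = "SocialClubHelper.exe"

def kill_social_club():
    """Force-kill the Social Club helper once GTA 5 has finished loading."""
    result = subprocess.run(
        ["taskkill", "/IM", PROCESS_NAME, "/F"],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(f"Killed {PROCESS_NAME}")
    else:
        print(f"Could not kill {PROCESS_NAME}: {result.stderr.strip()}")

if __name__ == "__main__":
    kill_social_club()

Same caveat as above, though: whatever depends on Social Club (the in-game stocks, as far as I know) stops working until you restart the game.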
For a trouble-free gaming experience, I agree. For any gaming experience, just 4 cores are still enough. It's still rather impressive, considering that the first 4-core 8-thread consumer-grade CPUs were released 12 years ago (Core i7-860 and 870), and also considering that you can get a modern 8-thread CPU for about £80 with delivery, brand new. That's £10 per thread, or £20 per core.

I remember the '90s and early 2000s when you had to upgrade almost every year just to be able to run the newest games at any fps - and I was still rocking a Celeron MMX 300 MHz with an S3 ViRGE "3D decelerator" for 6 long years, dreaming about owning a 3DFX or GeForce 3 one day. This might have something to do with why I don't give a damn about super high framerate gaming.
Maybe; for me it's just a massive expense. I still remember when I had my retro rig. Initially I used an nVidia GeForce FX 5200 128MB 128-bit. It was one of the better models of a legendary potato, but it was able to run CoD 1 in DirectX 7 mode at 1024x768 at a decent framerate. It was also able to run Far Cry at the lowest settings, 800x600, 16x aniso, maximum textures, at a quite stable 30 fps, and it ran GTA San Andreas okay at whatever settings I used. I later got an ATi X800 Pro and it was an all-around better card: much faster, with more up-to-date DX compatibility. It ran games a lot better and at far more respectable settings. I quite liked that and it more or less transformed the gaming experience. However, I later got an ATi X800 XT PE, and while it was even faster, it made me feel nothing. Sure, I got somewhat more fps and maybe a bit of an IQ bump, but it felt like money wasted, as it simply failed to do anything more than the X800 Pro did. It taught me a good lesson for life: chasing the best isn't always the answer, but getting the most for your money surely is. And sometimes just the minimum spec can be very enjoyable (FX 5200). For these and some other reasons, I'll probably never again buy the best card. nVidia's xx60 series is plenty, as are AMD's equivalents; sometimes even an xx50 Ti is good enough. The thing is, once you get a game running at some adequate framerate and a resolution high enough that you can see shit, any further improvements will do very little for your gaming experience.
And nowadays, getting a high-end card is about as much fun as getting a high-end car. You can buy an expensive sports car that is very fast and all, but what many people still don't understand is that those cars are high-maintenance affairs and they are awful for pretty much any daily driving. Same with the RTX 3090. It sure is fast, but it certainly runs hot, it certainly is not quiet and it consumes a lot of electricity. And on top of that it dumps a lot of heat into the room. Realistically, you need a beefy case, a decent cooling setup, maybe AC at home and a set of headphones. It requires a lot of re-engineering of your whole setup, and in return you only get more frames and maybe a resolution bump (depends on your monitor). Meanwhile something like an RTX 3060 barely requires anything else and it is very practical for most people. Cards like the RTX 3060 are like the VW Golf GTi. Everybody knows it's a basic card, it doesn't have many of the downsides of high-end cards, and yet it delivers good performance. The Golf GTi didn't become a good seller because it was really fast, but because it remained cheap to buy, relatively inexpensive to own and surprisingly quick. Same with graphics cards: the RTX 3060 is the Golf GTi of graphics cards now (or at least it should have been, if prices had remained as intended). And that's why every GTX xx60 sold so well: it's the most suitable card for the masses. Some people may not consider this, but high-end cards are quite bad for us in unexpected ways:
And if you are completely cynical today, then they do harm us in a very predictable and annoying way:
I don't know, maybe. I'll try my best to test it out, though, as I'm genuinely interested.
Here's another video to illustrate what I mean by stuttering and inconsistency:
The Cyberpunk part is an excellent demonstration, but I really mean the GTA5 and AC:V parts, where the average fps is quite alright in my opinion, yet the stutters (shown as frametime spikes) just make it uncomfortable for the eye.
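
To put a number on what I mean: the average can look fine while the spikes are what you actually feel. Here's a rough Python sketch of average fps vs 1% low, using made-up frametime values (not taken from the video), just to show how a handful of spikes barely moves the average:

# Rough sketch: average fps vs "1% low" from a list of frametimes in ms.
# The numbers below are invented for illustration, not measured from the video.
frametimes_ms = [16.7] * 95 + [50.0] * 5   # mostly smooth, a few big spikes

total_seconds = sum(frametimes_ms) / 1000.0
avg_fps = len(frametimes_ms) / total_seconds

# "1% low": the fps implied by the slowest 1% of frames.
worst = sorted(frametimes_ms, reverse=True)
one_percent = worst[:max(1, len(worst) // 100)]
one_percent_low_fps = 1000.0 / (sum(one_percent) / len(one_percent))

print(f"average: {avg_fps:.1f} fps, 1% low: {one_percent_low_fps:.1f} fps")
# The average comes out around 55 fps, but the 1% low is only 20 fps --
# that 20 is the stutter your eye catches even though the average looks fine.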
Well, that's easy: the Athlon X4 950 doesn't have any L3 cache at all and its L2 cache is tiny. Once you need cache, you stutter. On top of that, those Athlons are dual-module FX chips, and FX cores are a bit complicated. Each core isn't independent; they work more like 2C4T parts, or more precisely something in between 2C4T and 4C4T parts. So you have a typical FX being an FX, and then you chop the cache and shit happens. The Athlon 950 was one of the latest FX derivatives and had improved IPC and supported newer instructions, so the cores themselves are better than what Vishera had, but due to less cache and fewer cache levels, you get mostly better performance, but sometimes poopy performance. What most people also don't know is that late FX derivatives, basically anything beyond Vishera and Richland, tend to run much hotter and generally don't have nearly as much overclocking headroom either (maybe because they get much more voltage from the factory). So the Athlon X4 950 ends up being a poor CPU which also cannot be improved much.

The only good thing about late FX derivatives is that the undervolting potential is still quite high. I own an Athlon X4 845, which is Carrizo; stock it was at something like 1.45 volts and I managed to bring that down to 1.275 volts and a bit below. This doesn't mean much, because despite being FX derivatives, they still have too much FX soul in them and they still consume a lot of power for what they are. An Athlon 200GE consumes pretty much half of what the Athlon 950 does and delivers slightly better performance. Those FX derivatives were quite useless; the Richland APUs at least were good overclockers. You can clock those to 5GHz on modest cooling, but you still have the cut-down cache, so 90% good performance and 10% bad performance.

It wasn't the first time that AMD cut down the cache. Basically all Phenom-era Athlons didn't have L3 cache at all. With K8-era Athlons, depending on your luck and the PR rating, you either got more cache or more clock speed, and clock speed was far more valuable than the bigger-cache models. I don't know much about Intel, particularly in their malaise era of Pentium Ds and Celeron Ds, but Intel generally messed less with caches. And to make matters worse, AMD often remodeled caches on the FM2+ platform, so the 760K has more L2 cache than the 870K, but the 760K's cache was worse. They also messed with L1 sizes. And I don't remember this too well, but AMD may or may not have cut down PCIe lanes from 16 to 8. I know for sure that the AM1 stuff only had 4 PCIe lanes, and that was part of why it sucked so badly.
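
If you want to see the cache thing for yourself, here's a crude Python sketch (needs numpy): sum the same array over and over and watch the per-element cost jump once the working set no longer fits in cache. The sizes and the exact cutoff depend entirely on the chip, so treat it as an illustration, not a proper benchmark.

# Crude sketch of why a small cache hurts: per-element cost of summing an
# array rises once the working set spills out of L2/L3 into RAM.
# Sizes and results depend entirely on the CPU -- illustration only.
import time
import numpy as np

for size_kib in (64, 512, 4096, 65536):          # 64 KiB ... 64 MiB working sets
    data = np.ones(size_kib * 1024 // 8)         # float64 = 8 bytes per element
    reps = max(1, 65536 // size_kib)             # touch roughly the same total bytes
    start = time.perf_counter()
    for _ in range(reps):
        data.sum()
    elapsed = time.perf_counter() - start
    ns_per_elem = elapsed / (reps * data.size) * 1e9
    print(f"{size_kib:>6} KiB working set: ~{ns_per_elem:.2f} ns per element")

# On a cache-starved chip like the Athlon X4 950 the jump between the
# "fits in cache" rows and the "spills to RAM" rows is exactly the
# 90% good / 10% poopy performance split I'm talking about.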