I tried to explain this to someone on Reddit. I pointed out that a modern low-core-count chip will outperform an old high-core-count chip in many games, but he was adamant I was talking nonsense, so I eventually gave up on it. I think he is probably still running his Coffee Lake (or maybe Skylake, can't remember) chip to this day.
Enthusiasts tend to grossly overestimate the CPU's impact on games. I'd wager that if you sat a gamer in front of a black box with a 4080, 98% wouldn't be able to tell the difference between anything from a 7600X/13600K up.
I can only speak from personal experience.
On consoles, I think the biggest impact this gen is not the GPU uplift, but the move from Jaguar cores to the Zen (Ryzen) architecture.
On my PC, most of the games I play are lightly threaded and are a mixture of older titles and less technically advanced modern games, for which one might assume an old CPU would be adequate.
But I have had issues on older CPUs with stutters caused by CPU bottlenecks, and the bottleneck also affects loading speeds as well as things like asset streaming performance in games. Since I moved to a newer platform, the games I have tested have either had their issues completely resolved or show a significant improvement. It feels like a far bigger improvement for gaming than when I went from a 1080 Ti to an RTX 3080.
Note that I game at capped FPS (usually 30 or 60), so I am talking about things that many on here don't consider relevant to gaming performance.
It's interesting, as I ended up not buying the current market-leading gaming chip, the 7800X3D, so my improvements would probably be even greater with that chip.
We have seen reviews of games released in the past year where even moderately old chips had major performance issues.
That's true, but it's also true that reviews tend to have the game, and only the game, running, whereas most people will have other programs running concurrently, likely on a less-than-tuned OS.
AFAIK the tests are air-gapped too, so there's no network traffic or variability in game/software versions, etc., but also no additional load from that.
The point I'm making is that single-player, offline, clean-OS testing is a little less representative of the average gamer (online, background tasks, less-than-clean OS), but it can provide insight into what resources a game needs in a vacuum.
In my experience, eight cores without SMT/HT are enough: six for the game, two for everything else.
The only way to use a dual-CCD chip effectively for gaming is to pin the game to one CCD and everything else to the other. Using both CCDs for the game will result in worse performance. It's a similar story for P+E cores, but there's no latency penalty for moving off-die in that situation.
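As a rough illustration of the pinning idea (my own sketch, not from the post): you can set per-process CPU affinity with something like Python's psutil. The CCD-to-logical-CPU mapping and the process name below are assumptions, so check your own topology (e.g. lscpu on Linux or CoreInfo on Windows) before using anything like this.

```python
# Sketch: pin a game to CCD0 and (optionally) push everything else to CCD1.
# Assumes CCD0 = logical CPUs 0-15 and CCD1 = 16-31 (verify for your chip).
# Needs admin/root rights; some system processes will refuse affinity changes.
import psutil

CCD0 = list(range(0, 16))   # game lives here
CCD1 = list(range(16, 32))  # background apps could go here

GAME_EXE = "game.exe"       # hypothetical process name

for proc in psutil.process_iter(["name"]):
    try:
        if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
            proc.cpu_affinity(CCD0)
        # Uncomment to also herd everything else onto the second CCD:
        # else:
        #     proc.cpu_affinity(CCD1)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass  # skip processes we aren't allowed to touch
```

Tools like Process Lasso do essentially the same thing, but persistently and per-game.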
The final thing I'd like to point out is that a huge number of people have a 60/120/165 Hz monitor and are almost never CPU-limited in the first place. Try 240/360 Hz with a non-entry-level GPU, and they might start appreciating the actual differences in performance between CPUs.
You can be CPU-limited at low frame rates too, which is the case with many of my older games.
One thing not factored in is that many people just look at the maximum frame rate and use that to judge performance. But if a game needs to load in a texture, shader, or something else, that requires some processing, and the CPU can be saturated, albeit very briefly; this can manifest as a stutter or delayed loading (meanwhile, CPU monitoring tools would typically not report anywhere near 100% load because of their polling interval and how Windows schedules CPU tasks). There are also many older games that are just horrible ports. There is a game I have played multiple times, Lightning Returns, which will wreck CPUs manufactured several years after it was released: not even able to sustain 30 FPS in places, with a CPU core saturated.
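To put some illustrative numbers on the polling point (my own example, not measurements from the post): a core that is fully saturated for a short burst inside a one-second sampling window still averages out to a tiny reported load.

```python
# Sketch: why a 1 s polling interval hides a brief single-core saturation spike.
# All figures are illustrative assumptions.
POLL_INTERVAL_S = 1.0   # typical monitoring-tool sample period
BURST_S = 0.05          # 50 ms burst where one core is 100% busy (e.g. shader compile)
CORES = 8               # logical cores the "total CPU" figure averages over

core_load = BURST_S / POLL_INTERVAL_S * 100   # reading for that single core: 5%
total_load = core_load / CORES                # whole-CPU reading if the rest are idle: ~0.6%

print(f"Single-core reading: {core_load:.1f}%")
print(f"Whole-CPU reading:   {total_load:.1f}%")
# Yet during those 50 ms the game was fully CPU-bound, which is already
# enough to drop or delay several frames at 60 FPS (16.7 ms per frame).
```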
I agree with your point on background processes, of course. But I do think ir_cow is right for the most part: a huge number of games will run fine with a low core count, and they mostly depend on per-core performance rather than the number of cores.