When planning for cooling, motherboard, and PSU, you should go with the worst-case power consumption, not the idealized "main use case" average, to avoid accidentally frying components or triggering shutdowns.
And that's why they focus on those. It's fairly relevant.
Yes, it is relevant for PSU, but not for cooling, noise etc.
But having a little margin over sustained load is good anyway, as the PSU lasts longer that way, so PSU recommendations usually don't change because of a slightly higher peak power.
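To make that sizing logic concrete, here's a rough sketch (in Python, with made-up component wattages, not figures from any real build) of how worst-case draw plus a headroom margin turns into a PSU recommendation:

```python
# Rough PSU sizing sketch. All wattages are illustrative placeholders,
# not vendor specs; check the actual worst-case draw of your own parts.
worst_case_draw = {
    "cpu_peak": 200,        # short boost/turbo spikes, not the sustained TDP
    "gpu_peak": 300,
    "board_ram_ssd_fans": 75,
}

total_peak = sum(worst_case_draw.values())

# Common rule of thumb: leave ~30-40% headroom so the PSU stays in its
# efficient range and isn't stressed by transient spikes.
headroom = 0.35
recommended = total_peak * (1 + headroom)

print(f"Worst-case draw: {total_peak} W")
print(f"Recommended PSU: ~{round(recommended, -1):.0f} W")
```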
Completely agree with that. I've run perfmon on my box multiple times for an entire day, and even at the most intense moments during those measurements I would characterize the usage as 'lightly threaded' or 'single-core limited'. 80% of the time it's near idle, i.e. like right now, typing this.
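If anyone wants to reproduce that kind of observation without setting up perfmon counters, a quick sketch like this (using the third-party psutil package; the sampling interval and the "busy" threshold are arbitrary choices) logs how many cores are actually doing work:

```python
import psutil  # third-party package: pip install psutil

BUSY_THRESHOLD = 50.0  # percent; arbitrary cutoff for "this core is busy"

for _ in range(60):                      # sample for roughly a minute
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for load in per_core if load > BUSY_THRESHOLD)
    print(f"busy cores: {busy}/{len(per_core)}  loads: {per_core}")
```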
Well, of course there is little difference during idle; the question is the characteristics under a realistic load. And the reality is that even for most power users it's generally "medium threaded" at most, because synchronized workloads don't scale perfectly. "Running many things at once" is not a good excuse either, because it usually runs into other bottlenecks long before core count. Even when I try to stress my 6-core with a real workload, a couple of VMs, compiling a decent sized project and running a bunch of browser tabs, it rarely goes much over 50% load, and runs into scheduling bottlenecks long before maxing out the cores.
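The "doesn't scale perfectly" part is basically Amdahl's law. A tiny sketch (the 90% parallel fraction is just an assumed, optimistic example, not a measurement of any particular workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of the work parallelizes (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 90% of a build/VM workload parallelizes cleanly (optimistic).
for cores in (2, 4, 6, 8, 12, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
# Already at 6 cores you only get ~4x, and doubling from 8 to 16 cores
# adds less than 2x more, because the serial 10% starts to dominate.
```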
Chrome with 40 tabs, some of the pages just so poorly written, image processing, etc., all taxing it in the foreground. Then in the background Plex server, Resilio Sync, etc., which sometimes just decide to fully utilize a core or more, and I'm too lazy to hunt them down. I got rid of antivirus, but many folks still keep it, and those programs tend to have similar behavior. Way too many things to keep the CPU at idle.
Chrome spawns an incredible number of threads, seemingly up to one per CPU thread for each tab, which quickly adds up to hundreds of threads. This can "overload" the OS scheduler and cause serious stutter (on the desktop) even though there is barely any load on the CPU. Unless those tabs are all playing video or something, you are more likely to hit this kind of problem than to run out of CPU cores from web browsers. And having more CPU cores means Chrome will spawn even more worker threads per tab, bothering your scheduler even more.
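You can sanity-check the thread count yourself. A rough psutil sketch (the name matching is naive and differs per OS, e.g. "chrome" vs "chrome.exe" vs "chromium"):

```python
import psutil  # third-party package: pip install psutil

# Count OS threads owned by Chrome processes. The name check is simplistic
# and may need adjusting for your platform.
total_threads = 0
total_procs = 0
for proc in psutil.process_iter(["name", "num_threads"]):
    name = (proc.info["name"] or "").lower()
    if "chrome" in name:
        total_procs += 1
        total_threads += proc.info["num_threads"] or 0

print(f"{total_procs} Chrome processes, {total_threads} threads in total")
```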
As a software developer I both fully utilize CPUs/GPUs myself and deal with demanding programs that others write, including IDEs, which can be surprisingly taxing these days.
There are certainly many IDEs and text editors that are very heavy these days, but very few if any of them get much better with more threads. Especially the non-native ones are horrible: Eclipse has always been painfully sluggish, but even the JavaScript-based editors like Atom and VS Code, which are so popular these days, are incredibly laggy and unreliable, to the point where they can miss or misinterpret key presses. These problems are mostly due to cache misses, and there is little to do about that other than writing native, cache-optimized code. It's sad that these tools are worse than the tools we had back in the 90s.
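To show roughly what I mean by cache misses dominating (a crude illustration only: interpreter overhead muddies the numbers, and a native benchmark would show the gap far more starkly), compare summing the same data in memory order versus in a scattered order:

```python
import random
import time

N = 10_000_000
data = list(range(N))            # values laid out roughly in allocation order
sequential = list(range(N))      # indices in memory order
scattered = sequential[:]
random.shuffle(scattered)        # same indices, cache-hostile order

def timed_sum(indices):
    start = time.perf_counter()
    total = sum(data[i] for i in indices)
    return time.perf_counter() - start, total

t_seq, _ = timed_sum(sequential)
t_rand, _ = timed_sum(scattered)
print(f"sequential: {t_seq:.2f}s  scattered: {t_rand:.2f}s")
# The work is identical; only the memory access pattern differs, and the
# scattered version ends up noticeably slower because of cache misses.
```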
I'm personally "allergic" to lag when typing code; there is little that bugs me more than an unresponsive text editor (or IDE). I can deal with operations being slow, but unresponsive typing and button clicks become a constant annoyance, and lead to misinterpreted input plus loss of focus. I did in fact write most of my code (at work and at home) in Gedit from ~2009 to 2015, because I'll choose a plain text editor over sluggish "IDE features" any day; I only eventually switched to Sublime because newer Gedit was buggy. Still today I stick to a "plain" text editor whenever possible; that plus a dropdown terminal, tmux, grep and a few other specific tools and I'll "beat" an IDE in productivity any day, and I'm saying that as someone who was a "big IDE guy" in the 90s and early 2000s.
So I've been a developer for 30 years, and I'm a lead these days so I regularly have developers working for me on projects.
I'm not too impressed with a developer who runs 40 tabs on Chrome, Plex, an image editor (why, are you also a visual artist?),<snip>
There are edge cases and typically they are visual design and media artists. Most developers are neither. Your scenario is not credible.
While I have no reason to doubt your evaluation of your fellow developer, I seriously doubt his/her lack of professionalism has much to do with the specifics you mentioned, or at least it depends on how he/she uses the tools. So more context is needed to justify such claims.
While it may be less common for programmers to possess graphical skills, they can certainly be a useful asset. I consider myself an old-school programmer, yet I have basic image editing and drawing skills, which have been useful quite often for mockups, documentation, or graphical elements in software.
And don't you dare judge people based on tab usage
What's even crazier about such a pairing: there's this review site called Vortez that has just about the oldest test components on the planet. In all his reviews he uses an RX 480, some old Corsair LPX 3GHz DDR4, and an old 128GB Intel SSD I've never heard of.
On that setup, guess which CPU is the fastest at Tomb Raider?
10900K? nope. 5800X? Nope. 5600X? Nope.
Zen 1.5 2600X.
And an 8700K beats everything at Total War: Warhammer at 1080p with that setup, beating both Comet Lake and Zen 3.
So it's entirely possible that someone getting a 5600X for gaming but stuck with one of those old GPUs may actually get lower FPS than the guy who went cheap on the CPU.
It is very unlikely that an old game will scale better with an old CPU. There are no ISA differences or fundamental scaling characteristics between these CPUs that any "optimizations" could exploit.
This is probably just a bad test with poor testing procedures.
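To illustrate why a GPU-bound setup hides CPU differences, here's a toy frame-time model (the frame times are made up for illustration, not data from those reviews): the effective frame rate is capped by whichever side is slower, so swapping a 4 ms/frame CPU for a 6 ms one changes nothing when the GPU needs 14 ms, and any measured "win" for the slower CPU sits inside run-to-run noise.

```python
def effective_fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """Toy model: frame rate is limited by the slower of CPU and GPU."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Made-up frame times: an old GPU needing 14 ms/frame, CPUs needing 4-6 ms.
gpu_ms = 14.0
for cpu_name, cpu_ms in [("fast CPU", 4.0), ("mid CPU", 5.0), ("old CPU", 6.0)]:
    print(f"{cpu_name}: ~{effective_fps(cpu_ms, gpu_ms):.0f} FPS")
# All three land at ~71 FPS, so a 1-2 FPS "lead" for the older CPU in a
# review is almost certainly measurement noise, not real scaling.
```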