You cannot trust a single data point, no matter who it is from. Science 101.
Well, for starters I can read the graphs TPU puts out.
Handwaving away proof that disagrees with your own, and insinuating that readers of your website are complaining simply to complain because they are "not the target market," is both petty and disingenuous. That's already been written up in comments in this thread. Look at the way W1zzard responds to criticism vs. the way you do; it's rather eye-opening. The way you responded to me, claiming that I was questioning the expertise of a reviewer because I don't take one data point as holy text, and asking what my qualifications are, is incredibly inappropriate for a TPU staff member, and it reflects poorly on this site as a whole.
You're the one questioning results from a very respected source; the burden of evidence is on you to show why we should take that questioning seriously, which is why I asked what your qualifications are. TPU results are great and, in context, provide really reliable datasets that are useful for comparing different hardware. The issue is that we're in the middle of several huge generational product releases; CPUs and GPUs are practically all being refreshed at the same time.
There's a good reason why the TPU graphs are somewhat skewed: we had to use a 3080. Why exactly do you think we're retesting with a 4090? For fun? It's incredibly difficult to put modern CPUs in a position where they are actually the limiting factor in a system, which makes testing hard and justifies the use of academic 720p testing. Just look at the 4K results: you can infer the same thing from the other end of the extreme. Why bother getting anything above a 3700X when you'll only get a 4% performance increase for your money? Obviously the actual performance difference is far larger, but in that context, you wouldn't think so. Just as, in the context of not using a 4090, the 13900K seems silly. People upgrade their GPU a lot more often than they upgrade their CPU for this reason. It's also why we try to have so many different tests, so you can get an idea of actual chip performance, instead of chip performance while it's being bottlenecked by a different component.
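The bottleneck argument above can be sketched as a toy model (the FPS numbers here are illustrative, not TPU measurements): the frame rate you observe is capped by whichever component is slower, so a GPU-limited 4K test hides CPU differences that a 720p test exposes.

```python
def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Toy bottleneck model: the slower component sets the frame rate."""
    return min(cpu_fps, gpu_fps)

# Two hypothetical CPUs, one able to feed 300 FPS and one 500 FPS.
# At 4K a 3080-class GPU might only render ~120 FPS, hiding the CPU gap;
# at 720p the GPU is effectively unlimited, so the gap shows.
for gpu_cap, label in [(120, "4K"), (10_000, "720p")]:
    print(label, observed_fps(300, gpu_cap), observed_fps(500, gpu_cap))
```

At the 4K cap both CPUs report the same number, which is exactly the "why bother getting anything above a 3700X" illusion; only the 720p run separates them.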
You're still part of a community buddy. This was feedback.
For a staff member, a more open stance could be expected. We (or at least I) aren't jabbing at you; it's about the content we're discussing and what might make it more valuable. Insights, you know. I think that gaming energy efficiency result is a very interesting thing to highlight, because it points out the limited necessity of going this high up the Intel stack for "better gaming", seeing as we are speaking of a gaming champ. Could less actually be more?
Exactly why the 13600K is titled "best gaming CPU": it's an overclocking beast, and I see no reason why it can't achieve the same clocks as the 13900K when tuned a bit. It goes from 628 to 712 FPS in CS:GO with a 500 MHz OC to 5.6 GHz, which is pretty sweet. The 13900K gets 736 FPS at the same 5.6 GHz, so the difference of 24 FPS is probably down to the two extra P-cores and the extra cache.
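A quick arithmetic check on the numbers quoted above, assuming the "500 MHz OC to 5.6 GHz" implies a ~5.1 GHz starting clock (an inference, not a stated spec): clock scaling alone accounts for most of the 13600K's gain, and the remaining gap to the 13900K at matched clocks is small.

```python
# Numbers from the post; 5.1 GHz starting clock is an assumption.
stock_clock, oc_clock = 5.1, 5.6
stock_fps, oc_fps = 628, 712        # 13600K in CS:GO, stock vs. 5.6 GHz OC
fps_13900k = 736                    # 13900K at the same 5.6 GHz

# FPS predicted if performance scaled purely with clock speed
expected_from_clock = stock_fps * oc_clock / stock_clock
print(round(expected_from_clock))   # ~690, close to the observed 712

# Remaining gap at matched clocks: attributable to extra P-cores/cache
gap = fps_13900k - oc_fps
print(gap, f"{gap / oc_fps:.1%}")   # 24 FPS, about 3.4%
```

So at identical clocks the 13900K's two extra P-cores and larger cache buy roughly 3%, which is consistent with the "best gaming CPU" call for the 13600K.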
I believe I have a pretty open stance; I'm not planning on upgrading to this or any other next-gen CPU released by Intel or AMD for the next few years. I just dislike the criticisms of people who don't even have the use case a product is aimed at (the best HFR gaming experience, or making money with their processing power). If anyone has doubts about my own feelings on performance and energy efficiency, have a look at my specs: I use a 5800X3D. As you said, I'm part of a community, and these posts are my own; just because I'm a staff member doesn't mean that every opinion or post I make is the official representation of TPU, nor should it be, IMO. The responsibility I have as a staff member is to ensure my TPU work is of good quality and unbiased, which I believe it is. It's not like I'm slandering people here. The responsibility I have as a member of this community is to present my statements as best I can in order to foster greater understanding, which I will continue to do.
To clarify and restate: if you're not chasing 240 Hz gaming, don't care about e-peen, and don't have the time-is-money attitude where a little extra on the power bill is irrelevant next to the dollar value of each minute saved in your workflow, then this CPU isn't really for you.