The problem here is that you have what looks like blind faith in the advertisers - reviewers who get freebies from those big evil corporations and make money from people clicking on their videos and related ads. Yes, that's the advertising/marketing business.
You also don't understand that bias is not just something introduced via corporate pressures, the community itself can and does create bias. Say something the community doesn't like, and your revenue goes down. Ever notice how these review sites tend to say the same thing? Go somewhere the audience is different, and you'll see different conclusions, you might even see different benchmarks. Funny how that works, isn't it?
The same thing happens with all kinds of topics. Cars, motorcycles, TVs, skateboards, whatever.
All of this mashes the majority of review sites into one mold. It's why most of them run the same 'benchmarks', test the same games, and come to the same conclusions. Haven't you noticed that very, very few of these sites run an MS Office benchmark (TPU does, yes, I know)? And yet they all have an opinion on 'productivity'. Did you know there are over a billion instances of MS Office running in the world, and over 120 million active Office 365 accounts? What about MS Teams - there are over 500,000 *organizations* using that. How can anyone talk about productivity with a straight face and not talk about this?
So instead we're all going to run Cinema 4D benchmarks, but no one runs Premiere Pro? This is a freaking joke. Adobe's is by far the largest media creation tool set in existence.
So no, these people are not unbiased third-party observers. They are in the business of telling people what they want to hear, getting access to products, and putting food on the table for their families. There's nothing unethical about that - a lot of people are in that business - but people should understand what they are looking at. It is up to the viewer to find the truth in the mix, but it isn't going to be handed to you on a silver platter like you seem to think.
A) I'm a media researcher. I'm quite aware of how biases in media are formed, expressed, reinforced, challenged, and so on, thank you.
B) I don't tend to have blind faith in much of anything. My trust in the tech media is both moderate and well reasoned. Please check your condescending BS at the door.
C) You're overestimating the homogenizing effect of audience tastes - this has more of an effect on form and presentation than on content, though of course it has some effect on content also. But arguing that community pressures are causing, for example, reviewers to present Nvidia (or AMD) in a better light? Nah. At least not the ones I would take seriously - TPU, AnandTech, GN, and a handful of others. Those also, for the record, typically publish their test methodologies as well as the reasoning behind the development of those methodologies, allowing readers to examine this for themselves and see if they agree with the choices made.
D) There are absolutely some weird and poor choices of benchmarks out there. Cinebench is definitely one. But there are also many good ones, as well as good reasons for not choosing things that would have been very interesting. Multiplayer games and various online (i.e. constantly updated) games are, for example, nearly impossible to benchmark reliably, and if you can't benchmark reliably, not doing it at all is definitely the right choice. There are also valid questions about whether benchmarking a bot match in a multiplayer game is representative of real-world online play, etc. Benchmarking also tends to focus on challenging loads, which explains why esports games tend to have a very low priority (yes, some can be demanding, but most run passably on just about anything). There's also an argument to be made about when good performance at Ultra actually matters (I'd say never, as Ultra settings are inevitably wasteful), and AAA and story-heavy games are definitely places where I'd expect most players to care more about a game's style and aesthetics than they do in fast-paced competitive titles (not that style and aesthetics are of no importance there - just that fluidity matters more).
E) There are absolutely people running Office benchmarks (AnandTech, for example, uses BAPCo SYSmark extensively in their system testing). The reality, though, is that while there are billions of Office users out there, the vast majority of them use it for things that run fine on a 5-year-old 15W U-series laptop chip. There are of course some with massively demanding workloads - enormous spreadsheets and the like - but those are a tiny minority. Measuring the performance of office tasks relevant to the majority of users is rather meaningless given how light a load they represent, so it's a better use of reviewers' time to test more demanding common workloads.
F) Are there no Premiere Pro benchmarks out there? What? I'm sorry, but what reviews are you reading/watching? Premiere Pro export time benchmarks are a dime a dozen, and even LTT (who are definitely more in the "entertainment" than "trustworthy review" category) places a lot of focus on Premiere Pro timeline performance (logical for them, really).
Lastly, I never claimed that reviewers are unbiased third-party observers. There is no such thing as an unbiased human being. In any matter, in any situation, ever. Period. However, the observable level of bias in the tech review sites I follow is generally low enough not to make much of a difference, and many of them do a good job of explicitly addressing and countering their own biases. You, on the other hand, seem to be arguing that reviewers are both biased in favor of the companies "giving them free stuff" (which, to be honest, assumes a level of unprofessionalism on their part that says more about you than about them, unless you have actual proof) and biased in favor of whatever their audience prefers. That sounds ... problematic, to say the least. What do they do if those two don't align? Who are they more beholden to? Do they have no journalistic integrity or self-respect whatsoever? Your analysis seems shallow, simplistic, and overly black and white, and while, as I said previously, I agree with a lot of your base assumptions, I entirely disagree with your bombastic, all-encompassing conclusions.