I see interesting parallels between PC hardware and audio gear. People like to put certain measurements or ways of measuring things on a pedestal. Often that stuff is genuinely useful, but it's always a sliver of the bigger picture. Certain benchmarks become like a religion for the audiences of certain reviewers, who push their specific benchmarks over everyone else's because those others don't show this or only show that. No benchmark gives you everything, you know? That's true of all of them! People get carried away too easily. One won't tell you what another can, and vice versa. A number means nothing if you can't fully comprehend the implications and corroborate it with experience. Until then it's mostly just a number, as far as you're concerned. It may be good enough on its own. Or it may not.
After a certain point you kind of wonder whether they're trying to inform you or just sell you products, you know? If a reviewer can measure things in a way that's objective enough for people to trust it, but on some level favors certain products over others, people making those products will want to work with them. And then it takes on a life of its own. Companies start tweaking what they make to look better by that metric, and it almost doesn't matter whether the product actually lives up to it, because enough people believe in the metric to buy it and stand by it. Ideology is a powerful tool. It can draw people's attention away from less obvious negatives fairly easily. The reviewers are essentially telegraphing what they want to see before they will make a positive recommendation.
Not too long ago people didn't believe microstutter existed because they couldn't measure it, and the way we measured FPS suggested it was all in people's heads. In audio, people often believe that super-low THD guarantees good sound, when all it really guarantees is 'not bad' sound, specifically in whatever narrow band and power level was tested (gotta watch that too; I've seen people try to pass off distortion figures for headphones taken at 300 Vrms, which is basically what you want for starting fires). It could measure well there and still sound 'not good' because of all the other things not accounted for. And invariably, the people you see touting it as superior haven't actually been around to hear things for themselves and realize how much more isn't on paper. There's a disconnect from what all of the parameters mean, because you have to observe for yourself before they start sinking in.
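Just to put a number on how narrow a THD figure really is, here's a rough sketch (Python with numpy; the 1 kHz tone, 48 kHz sample rate, and harmonic levels are all made up for illustration, not from any real device) of what the measurement boils down to: the ratio of harmonic energy to the fundamental, for one tone, at one level, and nothing else.

```python
# Rough sketch: what a single THD number actually measures.
# All values here (1 kHz tone, 48 kHz sample rate, harmonic levels) are
# invented purely for illustration.
import numpy as np

fs = 48_000                 # sample rate, Hz
f0 = 1_000                  # test tone, Hz
t = np.arange(fs) / fs      # exactly one second of samples (1 Hz FFT bins)

# Pretend device output: the fundamental plus small 2nd and 3rd harmonics.
out = (np.sin(2 * np.pi * f0 * t)
       + 0.001 * np.sin(2 * np.pi * 2 * f0 * t)
       + 0.0005 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(out))
fundamental = spectrum[f0]                         # bin index == frequency in Hz
harmonics = spectrum[[2 * f0, 3 * f0, 4 * f0, 5 * f0]]

thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD at {f0} Hz, this one level: {thd * 100:.3f}%")   # ~0.112% here
```

Everything outside that one tone and level - intermodulation, noise, behavior into a different load, how it clips - simply isn't in that number.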
Of course when we started looking deeper into frametimes and plotting them out over time with more granularity, the microstutters were there for all to see. Before that, you were an idiot for even mentioning it. Stick to the science, K? I know it's right there in front of you when you're gaming, but we have charts that show it's fine, so it's definitely you!
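Here's a toy sketch of why the old averages hid it (Python; the frame times are completely invented, not real capture data): two runs with the exact same average FPS, where one hitches hard every 20th frame. The averages look identical; the 99th-percentile frame time is where the stutter actually shows up.

```python
# Toy example: two runs with identical average FPS, one smooth, one stuttery.
# Frame times are invented for illustration, not captured from anything real.
import numpy as np

rng = np.random.default_rng(0)
n = 600                                     # ~10 seconds at 60 FPS

# Run A: consistent ~16.7 ms frames.
smooth = np.full(n, 16.7) + rng.normal(0, 0.3, n)

# Run B: mostly faster frames, but every 20th frame spikes to ~50 ms,
# then scaled so its mean exactly matches run A's.
stutter = np.full(n, 15.0) + rng.normal(0, 0.3, n)
stutter[::20] = 50.0
stutter *= smooth.mean() / stutter.mean()

for name, ft in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 / ft.mean()
    p99 = np.percentile(ft, 99)             # the "1% worst" frame time
    print(f"{name:8s} avg {avg_fps:5.1f} FPS | 99th pct frame time {p99:5.1f} ms")
```

That, in a nutshell, is the shift from average-FPS charts to frametime and percentile reporting.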
IIRC we have HardOCP to thank for proliferating measurements showing microstutter. Now that is a big deal. People want to see that consistency in writing AND see people using the card reporting smoothness. People may say the benchmarks look good... usually followed by a "but we'll see."
What I'm getting at is... measuring things in different ways is important. It gets us closer to identifying everything that does and doesn't matter. There's always going to be something to see there. But no measurement paints the whole picture. There's never gonna be that one number or chart to rule them all. That's not really how science is supposed to work. You're always looking for more... always expanding rigor and refining method. This isn't hard science they're doing, but the principle still applies. Having data for one thing doesn't disprove the things your data didn't account for, things you may not even have identified - and maybe nobody ever has. There will always be those blind spots. Never assume total accuracy, consistency, or practicality.
Honestly... these kinds of measurements are better for troubleshooting. Say something isn't right with the cooler... it's not performing how your experience suggests it should. So you test it and find some things that stand out compared to a known-good unit. That's your clue to what to change. For someone just buying the same product, that same measurement may be totally useless.
My biggest pet peeve... and you see this most in new people, is the tendency to cling to one or two measurements as their standard for PC hardware. There are the standbys that give a decent starting point, but I feel like you're setting yourself up to be misled by thinking you know more than you do simply because you can read charts. You know the ones. They will shit on anything that falls short in one spec or benchmark, even in the face of obvious evidence from people who have real experience with the hardware. It's a mess.
Making good PC hardware is one part art, one part science, and one BIG part experience. Really, that's just engineering in a nutshell. You have to be able to make meaningful subjective observations, learn when not to blindly trust a number, and understand that there is only so much you can learn about a car by putting it on a dyno. For all you know, that perfectly performing machine actually drives like shit.
And with that in mind, I'll be interested in seeing how these benchmarks play out. But I'll still be checking out other ones too. I'm sure Steve and GN know what they're doing. But Steve is only Steve. He's got his own standards and determinations that make sense to him. What that actually amounts to in the real world, even he doesn't know.
If things work out, I'll try them for myself and ACTUALLY find out if they work for me. And if they don't, I'll tell people. This is where subjectivity comes in. How many times has something been lauded for big performance metrics, with all of this hype and aplomb and people who've never used it propping it up as the best, only to have a bunch of people come around later with other problems? By then, the cult mentality has taken hold and people argue about it for fucking years. It's a pissing contest. People are too scared of buyer's remorse to get their feet wet in this hobby, and there is no substitute for that firsthand experience. You can obsess over benchmarks for years and consistently miss the trees for the forest if you're not getting down and seeing how things actually play out. Otherwise it's kinda like living in a decently (but not amazingly) convincing VR simulation and thinking you understand reality because of it.
Sometimes I think it's actually more useful to ask people whose biases are known to you what they're using and how it's working out for them than it is to look at benchmarks. ESPECIALLY when it comes to coolers. Everyone measures those completely differently, rarely are they transparent enough about it, and quite often the circumstances they choose to replicate accentuate things that actually don't matter to you and obfuscate things that matter more than anything. They don't know what you're working with. Only you can know that. On that level, the more actual experience you have, the better things tend to go for you. That last one is really perilous... what seems to make sense on paper isn't always so relevant in practice. Experience is still way more valuable IMO, though benchmarks do count for something.
TLDR: Oh boy, a new way to benchmark coolers!