Meh, I beg to differ. It's really just a problem with the definition of 'turbo' or 'boost'.
In fact, I think we can safely say only Nvidia has its GPU boost story in good order. As in: you always get the advertised clock, and most of the time you get much higher clocks. Even if you nearly cook the GPU it will still run base clock, or you'd already be using it completely out of spec.
Perhaps what we need is an industry standard wrt boost clocks. I could imagine you'd set that at something like the total achieved frequency across all available threads divided by the number of threads. And when you do it like that, suddenly AMD doesn't look all that bad. Another approach could be the total deviation from advertised clocks - again, if you'd put the Intel and AMD spec sheets next to reality, AMD would come out much better. A rough sketch of both ideas is below.
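Just to make the two metrics concrete, here's a minimal Python sketch with made-up per-core numbers (the clock values are purely illustrative, not measurements from any real chip):

```python
# Hypothetical per-core achieved clocks (MHz) under an all-core load
achieved_mhz = [4350, 4325, 4375, 4300, 4350, 4325, 4375, 4350]
advertised_boost_mhz = 4400  # the number printed on the box

# Metric 1: total achieved frequency across all cores divided by the core count
avg_achieved = sum(achieved_mhz) / len(achieved_mhz)

# Metric 2: average deviation from the advertised boost clock
avg_deviation = advertised_boost_mhz - avg_achieved

print(f"Average achieved clock: {avg_achieved:.0f} MHz")
print(f"Average shortfall vs advertised boost: {avg_deviation:.0f} MHz")
```

With numbers like these the shortfall is only a few dozen MHz, which is exactly the point: measured that way, AMD's boost behaviour looks a lot less like false advertising.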
Let's not go blind on that peak clock number and define something around that. I really don't like Intel's Turbo and the spec sheet trickery they've deployed over the years (how recent parts handily turn the 'headroom' into 'used room', and how CPUs royally boost beyond TDP on stock BIOSes). They're playing the game for marketing; AMD is just bad at it (once again... it never ends, does it), and you're right that they've left a gap here for a lawsuit. But AMD does deploy a much better type of boost.
I mean, for this example here, what should AMD advertise, 4375 MHz on the box instead of 4.4 GHz? A whole 25 MHz...
As for the 'need' to see what conditions are required to get the advertised boost clocks... maybe. Maybe not... isn't the actual defining factor for a CPU, in the end, performance? You cannot grasp CPU performance from clocks alone, nor from the advertised turbo/boost clock on a spec sheet; after all, there are barely any use cases that only hit one core.