@GoldenX has been sly enough to not tell us what he was referring to when he deemed this "good". I'm not gonna try to guess what he meant.
For me, there's a clear cut: the iGPU is for desktop work; for games you need at least a mid-range dGPU. Anything in between is just money down the drain: it won't play games well, and it won't make Office or your browser of choice perform any better.
Let's just wait for benchmarks and see where this stands, instead of passing definitive judgment based on pictures and impressions, ok?
I'm with you on that last part, but I disagree a bit on the "value of an iGPU" part. Of course, I'm in the rather privileged position of having a gaming desktop, an HTPC, a desktop workstation and three laptops in a two-person household. As such, I don't exactly need the iGPU to be the be-all, end-all in gaming performance.
Still, given the good CPU performance we know Zen cores deliver, a "good enough for esports" iGPU would make these chips amazing budget-level gaming boxes, with the option of adding a dGPU down the line. The same argument was made for the previous A-series, but those fell flat: their weak CPU performance left them unable to keep up with anything beyond a bargain-bin dGPU. I'm hoping this generation lives up to the expectation (i.e. runs DOTA, CS:GO, RL and Overwatch at 1080p medium or so), as that could be a great market share win for AMD. Which, again, is why I find it worth my time to speculate on and discuss. Of course, we can't know until we see benchmarks, so the discussion is ultimately inconclusive. That doesn't mean it's a waste of time, though.
You can't use a framerate comparison between Xbox and PS4 based on recording their outputs, for the simple reason that they use different graphical settings.
You can compare which is smoother or better looking from a user-experience point of view, but it IS NOT, and CANNOT be, a performance benchmark unless the graphical details are 100% identical on both platforms (and that's complicated even further by games that use different settings for the various hardware models of each console).
This is a plain misunderstanding. Of course you can compare them even if they have different graphical settings. It's even quite simple, as they both have fixed (non-user-adjustable) graphical settings. It simply adds another variable, meaning that you have to stop asking "which performs better at X settings?" and start asking "which can deliver the best combination of performance and graphical fidelity for a given piece of software?". This does of course mean that the answers might also become more complex, with possibilities such as better graphical fidelity but worse performance. But how is that really a problem? This ought to be far easier to grasp for the average user than the complexities of frame timing and 99th percentile smoothness, after all.
You seem to say that performance benchmarks are only useful if they can provide absolute performance rankings. That's quite oversimplified: the point of a game performance benchmark is not ultimately to rank absolute performance, but to show which components/systems can run [software X] at the best possible settings
for the end user. Ultimately, absolute performance doesn't matter to the end user - perceived performance does. If a less powerful system lowers graphical fidelity imperceptibly and manages to match a more powerful system, they are for all intents and purposes identical to the end user. To put it this way: if a $100 GPU lets you play your favourite games at ~40fps, and a $150 GPU runs them at ~60fps, that's perceivable for most users. On the other hand, if a $120 GPU runs them at ~45fps, is there any reason to buy the more expensive GPU? Arguably not.
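To make that point concrete, here's a toy sketch using the same figures as above. Everything here is made up for illustration: the prices, the fps numbers, and the 25% "noticeability" threshold are arbitrary assumptions, not real data or an established perceptual model.

```python
# Hypothetical GPUs: name -> (price in dollars, average fps).
# Figures match the illustrative example in the post above.
gpus = {
    "$100 GPU": (100, 40),
    "$120 GPU": (120, 45),
    "$150 GPU": (150, 60),
}

def fps_per_dollar(price, fps):
    """Raw value metric: frames per second per dollar spent."""
    return fps / price

def perceivable_gain(fps_a, fps_b, threshold=0.25):
    """Crude stand-in for 'can the user notice?': treat a relative
    fps difference above the (arbitrary) threshold as perceivable."""
    return abs(fps_b - fps_a) / min(fps_a, fps_b) >= threshold

for name, (price, fps) in gpus.items():
    print(f"{name}: {fps_per_dollar(price, fps):.3f} fps/$")

# 40 -> 60 fps is a 50% jump: perceivable for most users.
print(perceivable_gain(40, 60))  # True
# 40 -> 45 fps is a 12.5% jump: arguably below the noise floor.
print(perceivable_gain(40, 45))  # False
```

The point of the sketch is that a raw fps/$ ranking and a "will the user notice?" question can give different answers, which is exactly why absolute rankings alone don't settle the buying decision.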
Absolute performance doesn't matter unless it's perceivable. Of course, this also brings into question the end user's visual acuity and sensitivity to various graphical adjustments, which is highly subjective (and at least partially placebo-ish, not entirely unlike the Hi-Fi scene), but that doesn't change if you instead try measuring absolute performance. The end user still has to know their own sensitivity to these things, and ultimately decide what matters to them. That's not the job of the reviewer. The job of the reviewer is to inform the end user in the
best way possible, which multi-variable testing arguably does better than absolute performance rankings (as those have a placebo-reinforcing effect, and ignore crucial factors of gameplay).