I'm neither super impressed nor surprised, but maybe that's just me.
I get it: some people really, really want to run 4K max settings right now, and perhaps a multiple of these cards will do it just fine. The rest of us will wait for more reasonable 14nm/16nm parts with 8GB of RAM and (likely) typical high-end card (R390/980 Ti-like?) prices.
That said, I really don't think any conclusions can be drawn on practicality until we know how the 980 Ti and 390X clock, as well as how memory bandwidth versus a larger buffer affects general performance at higher resolutions. That's pretty much completely unknown. Sure, this might be able to (barely) sustain 60fps at 1440p max settings in most games at stock, but can the cheaper alternatives do it overclocked? Which (390X or 980 Ti) will fare better? My gut says it's possible, and probably the 980 Ti (which I think will be slightly more expensive), rendering this card (for many people) moot because the tangible performance difference at playable settings would be negligible... but who knows.
*Note about 3DMark scores: they are very much swayed by AMD's stronger compute performance. While that well and truly is a practical advantage in some cases, it generally doesn't reflect core-architecture efficiency. 3072 shaders (plus 768 SFUs) at similar clocks should often perform just as well, if not better, in a similar power envelope (not accounting for memory type and amount), or offset to a higher clock if AMD stuck to 64 ROPs. This is one of the reasons I wish AMD would do something radical like 22 CUs / 24 ROPs (scaled) on 14/16nm. While they'd lose the compute advantage, it would surely be efficient in general, while perhaps meshing better with the power-efficient clock potential of those processes within the set TDP parameters.
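To put rough numbers on that compute gap, here's a back-of-the-envelope sketch. The shader counts are the public Fiji (4096) and GM200 (3072) figures, but the clocks are illustrative assumptions, and SFUs are left out since they don't count toward peak-FLOPS math:

```python
# Rough peak FP32 throughput: shaders * 2 FLOPs (one FMA) per clock * clock.
# Clock values are illustrative assumptions, not measured boost behavior.

def peak_tflops(shaders, clock_ghz):
    """Theoretical single-precision TFLOPS, counting one FMA per shader per clock."""
    return shaders * 2 * clock_ghz / 1000.0

fiji  = peak_tflops(4096, 1.05)  # 4096 shaders at an assumed ~1.05 GHz
gm200 = peak_tflops(3072, 1.00)  # 3072 shaders at an assumed ~1.00 GHz (SFUs excluded)

print(f"Fiji  (assumed): {fiji:.1f} TFLOPS")      # ~8.6
print(f"GM200 (assumed): {gm200:.1f} TFLOPS")     # ~6.1
print(f"Raw compute ratio: {fiji / gm200:.2f}x")  # ~1.40x
```

Under those assumed clocks, that's roughly a 40% raw-throughput edge for the compute-heavy case, which is exactly the kind of gap a synthetic benchmark can show without games ever reflecting it.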