This.
Also, 4K gaming is still too early (IMHO), and I am guessing most PC gamers play their games on a 21-27" desktop monitor, 2 feet from their face, instead of sitting on a couch 10 feet from the TV. 4K makes no sense unless you're playing games on a 50"+ screen.
As a member of the 8' from a 65'' nutjobs... I disagree. We're a growing segment of the population. This is evident not only in the Nano (mini-ITX), but in the excitement surrounding the Fury form factor in general. It is much more attuned to the performance required for such a setup while staying within a reasonable (microATX) form factor... something long coveted. People want this, and the industry knows it; look at the scrambling to make 'Steambox' accessible and quantifiable by levels of expected performance, or even things like AMD's Quantum, which (barring buffer-size concerns for future high-resolution or badly optimized titles, beyond what DX12 can address) should have the brute force for 4k30 and/or Oculus/Vive at 90Hz. The big players know exactly where the targets lie across the spectrum, and nothing perfectly satiates them at this juncture. That will change next gen.
The point I have long (going on three years) contended is that 28nm was never going to get us to any sustainable level of 4k, nor 1080p120; it was always going to be about a good 1440p experience (and it has pissed me off to no end that these solutions are being sold touting 4k). It's a combination of many factors: bandwidth, buffer size, the power requirements of units/clocks needed to make it feasible to scale throughout the life of this console generation... etc. We were always destined to get painfully close, although I very much thought both companies would end up more on par with what AMD achieved (with Hawaii/Fiji) than with what nvidia pulled off with GM204/GM200 (overclocked; absolute performance). There is certainly an argument to be made for those products as we wait for the next generation, but the fact remains they don't quite reach the next threshold per segment.
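To put rough numbers on those thresholds (purely illustrative arithmetic on pixel counts; the choice of 1440p60 as the baseline is just my framing), here's a quick sketch of how much further the 4k targets sit from the sweet spot these cards actually hit:

```python
# Pure pixel-rate arithmetic: how far each target sits from a solid 1440p60.
# Ignores bandwidth, AA, and engine overhead -- just ratios of pixels per second.
targets = {
    "1080p60": (1920, 1080, 60),
    "1440p60": (2560, 1440, 60),
    "4k30":    (3840, 2160, 30),
    "4k60":    (3840, 2160, 60),
}

baseline = 2560 * 1440 * 60  # treating 1440p60 as the 28nm sweet spot

for name, (w, h, fps) in targets.items():
    rate = w * h * fps
    print(f"{name}: {rate / 1e6:7.0f} Mpix/s ({rate / baseline:.2f}x 1440p60)")
```

Even by that crude measure, 4k60 is 2.25x the pixel rate of 1440p60 before you touch bandwidth or buffer size, which is exactly the gap a new process node has to close.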
That all said, that next generation is where things get real. 4k30 or 60 sustainable on a single card, without the fuss of multiple cards. Large buffers. Small card sizes; GPU performance lining up closer to the RAM configurations in consoles, scaled across resolutions (even if it shouldn't have to be that way) PER segment... power and price.
For a geek like me, the sheer amount of bandwidth and floating-point throughput these cards should bring is exciting in itself. Why, beyond gaming? madVR.
I'm (within a realistic budget) an image snob... I want my media to look good. The idea of what these cards should be able to achieve (remember, multi-card setups don't work for real-time image processing) is tremendously exciting, and should be leaps and bounds better than what is currently possible with most consumer cards. Piled on top of video blocks that actually decode highly-compressed (read: realistically streamable) VP8/9 (plastic) and h.264/h.265 (noisy) codecs, the idea remains in my head that scaling and processing many different aspects of video, even with FRC, should be possible while still looking good.
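For a sense of why single-card grunt matters here, a rough sketch (again just illustrative; the 1080p source, 4k display, and 60Hz figures are my assumptions) of the real-time budget this kind of processing has to fit inside:

```python
# Frame-time budget for real-time video processing on a single GPU:
# decode, chroma upscaling, image upscaling, dithering (and FRC if enabled)
# all have to finish within one refresh interval -- no SLI/CrossFire to lean on.
src_w, src_h = 1920, 1080   # assumed 1080p source
dst_w, dst_h = 3840, 2160   # assumed 4k display
fps = 60                    # assumed refresh / FRC target

scale_factor = (dst_w * dst_h) / (src_w * src_h)  # 4x the source pixels
frame_budget_ms = 1000 / fps                      # ~16.7 ms to do everything
output_pixels = dst_w * dst_h                     # pixels produced per frame
sustained_rate = output_pixels * fps              # shader output required, pixels/s

print(f"Upscale factor: {scale_factor:.1f}x")
print(f"Frame budget: {frame_budget_ms:.1f} ms per frame")
print(f"Output: {output_pixels / 1e6:.1f} Mpix/frame, "
      f"{sustained_rate / 1e6:.0f} Mpix/s sustained")
```

Every pass you stack on (better chroma upscaling, higher-quality image scaling, debanding, FRC) eats into that same ~16.7 ms on one GPU, which is why the raw floating-point of the next generation matters so much for this use case.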
It's all about the WHOLE package, and this generation just didn't get there compared to what they want us to believe. It's neither AMD's nor nvidia's fault; it's a process limitation. If this month of launches has me grateful for anything, it's simply that we're that much closer to 14/16nm. I've said it before, but I'll say it again: it doesn't matter what kind of user you are; high or low rez, high or low framerate, simple IoT or hardcore gamer/video snob; 2016 is going to be one hell of a year. The convergence of technology that should be possible is going to be nothing short of amazing.