Hmm, still not quite 4K ready. ~40 FPS isn't bad, but not quite there.
Like I keep saying: this gen is 1440p on a single card and 4K on dual cards. At most, this gen was going to get us about 75% of the way there (3200x1800 or slightly less?). Single-chip 4K will be 14nm/16nm, conceivably even the second gen of it (the first gen might just be shrinks/memory consolidation through 2nd-gen HBM, etc.).
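Just to make that pixel math explicit, here's a throwaway Python sketch using the resolutions mentioned above (nothing authoritative; real GPU scaling depends on the game and engine, not raw pixels alone):

```python
# Back-of-the-envelope pixel math for the "75% of the way to 4K" idea.
# Illustrative only; actual performance scaling is not purely pixel-bound.
def pixels(width, height):
    return width * height

uhd = pixels(3840, 2160)   # 4K / UHD
qhd = pixels(2560, 1440)   # 1440p
mid = pixels(3200, 1800)   # the in-between target mentioned above

print(f"3200x1800 = {mid / uhd:.0%} of the 4K pixel count")  # ~69%
print(f"4K pushes {uhd / qhd:.2f}x the pixels of 1440p")     # 2.25x
```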
It's also why AMD doesn't need to be as fast, just consistent across different titles. They literally just need to stay above 60 FPS at 1440p in most worst-case scenarios, and 30+ at 4K (whatever scaling generally turns out to be on average, which by extension means roughly 60 in a dual-card config), even if only through overclocking, because in practice that is all that matters. The question most people ask themselves (since most don't use adaptive sync) is whether it will generally stay above 30/60 FPS (or, put another way, below 33.33/16.67 ms per frame) and at what price. The question is not whether it will do 38 vs. 35 FPS. This is the NVIDIA GPU that mostly can (and, through overclocking, probably almost always will) accomplish that.
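For reference, those millisecond figures are just 1000 divided by the target frame rate; a quick sketch:

```python
# Convert the fps thresholds above into per-frame time budgets.
def frame_time_ms(fps):
    return 1000.0 / fps

for target in (30, 60):
    print(f"{target} fps -> {frame_time_ms(target):.2f} ms/frame")
# 30 fps -> 33.33 ms/frame, 60 fps -> 16.67 ms/frame
```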
Also,
Big thanks to W1zzard for this:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/33.html
It pretty much proves my point: 8GB is generally the sweet spot (conceivably for the life of the current consoles). That said, I respectfully disagree with his '4GB is enough for everybody' conclusion (shades of '640K ought to be enough for anybody'). I think more and more games will push up against that wall, or at least the 6GB one as Mordor does (scaling up from a 900p XB1 / 1080p PS4 game), since the data clearly shows which titles consume the most memory:
AC, Mordor, COD (even if busted), Rise, W_D, DR3. What do these games have in common? If you said they were developed primarily around the memory architecture of the PS4/XB1, you'd be right. That higher memory usage is not going to change; it will only become more apparent as developers not only target trickier uses of the console memory architecture to squeeze performance out of those otherwise-limited machines, but also start targeting lower resolutions on consoles, which will require greater texture scaling on PC.
For example, Mordor is currently 1080p on PS4 and 900p on XB1. What if next time they targeted 720p on the XB1 and 900p on the PS4 (which I think will end up being a very common occurrence for multiplatform titles as time passes, since it allows the XB1 to still remain HD)? You end up with 8GB making a lot more sense, if not the most sense. And while that may occur (worst-case scenario), the compute power of the consoles never changes, so performance scaling in that respect is a known quantity, which is why cards are aimed at the metrics outlined in my top-most comment.
At any rate, I was primarily eyeing the upcoming baby brother to this card because I assumed (and rightly so, it appears) that Mordor would consume just under 5.5GB at a higher resolution (judging by my memory use before the game starts doing that weird hiccup thing), and that amount would make sense for many more games with a similar mentality on console (900-1080p). I think it also verifies that 3.5GB is indeed an issue for that game at some resolutions where 4GB would not be, but also that 4GB could be an overall limiting factor where core performance may not be (4GB 390X?), even if it is more consistent because there are no (weird switching-to-a-separate-memory-partition) issues.
TLDR: If you could continue adding memory-usage numbers as more titles are released, it would be most appreciated. I would be willing to bet people would LOVE to know what Witcher will demand at various resolutions...
As always, I appreciate the thorough review with the nice graphs. The only thing I would mention is to please not forget us over-volters (at least for one SKU based on a given chip), although I can tell it's become less of a priority. The clock-scaling/voltage/leakage graphs you used to do for new parts were REALLY awesome/helpful. I hope those don't get left behind completely.