Yeah, but the question is, are there any games out there worth the investment of buying a new top video card??
Star Citizen? New refreshes of Battlefield and Call of Duty sequels? Assassin's Creed sequels? New MMOs with D3D12? 4K eye candy that will have diminished value and desire as time approaches 2017 and 2018?
The exception, of course, is Crysis, but that's about it.
Is there a new Crysis sequel coming out? C3 is a cakewalk on high-end computers...
I'm sorry if it came out a bit weird, but I meant the Pascal one, not the K20X. In any case, I really don't understand what the graph is trying to communicate. They don't even pit Pascal against the top Tesla dogs, the K40 and the K80 (dual-GPU).
Edit: To reiterate, I quote the OP: "NVIDIA's upcoming flagship GPU based on its next-generation 'Pascal' architecture, codenamed GP100." Specifically mentioning "flagship," then comparing it to the K20X and the 7970, is at the very least misleading.
1. The graph is basically showing a performance improvement in the 64-bit floating-point precision area over CPUs and others. As you can see, there's no major improvement for gaming if you focus on 64-bit FPP, but for rendering and number crunching, that's a different story. 32-bit FPP at 12 TFLOPS is actually pretty significant for gaming. You could say one of NVIDIA's points with this graph is that, this time, they didn't skimp on 64-bit FPP like they did on the last two or three Titan generations.
2. The graph points to the relationship between memory bandwidth and compute, i.e. bytes per flop. What NVIDIA is basically saying is that, relative to the rate at which the card executes 32- or 64-bit floating-point operations, the memory traffic per flop is actually lower than on other products with a similar relationship. Also, I think it's a typo when the graph shows 0,256 and 0,805 for SP and DP on the new Pascal; it probably means 0.256 bytes per flop SP and 0.805 bytes per flop DP.
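To make the bytes-per-flop idea concrete, here's a minimal sketch. The bandwidth and throughput figures below are purely hypothetical placeholders, not GP100 specs; the point is just how the ratio works out.

```python
# Bytes per flop = memory bandwidth / peak arithmetic throughput.
# A higher ratio means each flop has more memory bandwidth available to feed it.

def bytes_per_flop(bandwidth_gb_s: float, tflops: float) -> float:
    """Bandwidth in GB/s, throughput in TFLOPS -> bytes per flop."""
    return (bandwidth_gb_s * 1e9) / (tflops * 1e12)

# Hypothetical card: 1,000 GB/s of memory bandwidth,
# 4 TFLOPS single precision, 2 TFLOPS double precision.
sp = bytes_per_flop(1000, 4)  # 0.25 bytes/flop SP
dp = bytes_per_flop(1000, 2)  # 0.50 bytes/flop DP
print(sp, dp)
```

Note the DP ratio comes out higher than the SP ratio simply because DP throughput is lower while the bandwidth stays the same, which is why the graph's DP figure (0.805) is larger than its SP figure (0.256).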
3. From the 7970 onward, 64-bit FPP has actually gone up for AMD graphics cards, probably because AMD saw a small niche in the market where their customers would use discrete graphics cards to render videos and the like, at a time when NVIDIA was taking it away after the first Titan generation. NVIDIA was thinking that removing 64-bit FPP from gaming cards would boost sales of Quadro cards, but there wasn't really a big difference in sales (speculation), and you can see this with the M4000, where a Maxwell Titan and a workstation card provide about the same rendering performance/features. The only real difference is the driver, which was probably significant for the most part.
4. Tesla is more of a number cruncher, and its contender is Intel's Knights Landing or any server CPU. Simply put, it's an accelerator card, but it still acts like a graphics card: offload work to the GPU for processing and image rendering, use CUDA, blah blah blah. Some would say Knights Landing is a work in progress and Intel's failed attempt at an Intel graphics card. Intel's Xeon Phi is a future-proof toy that can't be used for many practical applications, because a lot of current software doesn't utilize multi-core coding, and to make it work you need to be someone who can code both for the program and for the Xeon Phi (in theory). From my understanding, you can't just load a PC game and expect 64 micro CPU cores from Knights Landing to make your bottleneck troubles disappear and give you 3,000 FPS in World of Warcraft on ultra-high settings. NO! The game is coded to run on your CPU's physical cores; other code has to be written for Knights Landing, and that's assuming it works properly when you do. While Intel has its multi-core coding for Knights Landing, NVIDIA's Tesla line uses CUDA. They say it's more efficient and provides better performance than Knights Landing. Overall, I think it's just a glorified GPU with some nitro or rocket boosters... Tesla can't act as a substitute CPU through your PCIe bus for increased performance, but it can improve rendering times for programs that utilize GPU rendering, and the coding is supposedly less complicated.
5. The majority of CPUs have poor 64-bit FPP in general. Take a look at the Sandy Bridge Xeon 2690 in the table: its 64-bit FPP is only what, 243.2 GFLOPS, versus the AMD 7970 at around 1,010 GFLOPS in DP alone.
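That Xeon figure lines up with the usual peak-FLOPS back-of-the-envelope math (cores x clock x DP flops per cycle). The 3.8 GHz turbo clock and 8 DP flops/cycle for AVX on Sandy Bridge are my assumptions here, not from the OP's table:

```python
# Peak double-precision throughput = cores * clock (GHz) * DP flops per cycle.
def peak_dp_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    return cores * ghz * flops_per_cycle

# Xeon E5-2690 (Sandy Bridge): 8 cores, 3.8 GHz max turbo (assumption),
# 8 DP flops/cycle with AVX (4-wide add + 4-wide multiply per cycle).
print(peak_dp_gflops(8, 3.8, 8))  # ~243.2 GFLOPS
```

GPUs win this comparison simply by brute force: thousands of small ALUs versus eight big cores, even after the GPU's DP rate is cut to a fraction of its SP rate.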
6. 64-bit FPP isn't a major function for everyday use and PC gaming. So in a sense, Intel and AMD can say "big F***en deal," but to renderers and CGI people who use NVIDIA's code to render particle effects, we'll be like, OMG, that's going to make my epeen super sexy. Frame render times cut down from 10 minutes to 10 seconds. Woot WOOT! I can hit the clubs a lot sooner.