I don't know, what's the point of placing the connector like that?
Probably has to do with freeing up real estate on the PCB for traces. The card is built from maybe $250-300 worth of parts and components, but with that you get firmware/software optimizations aimed at keeping gameplay at 60 FPS... right?
Can we keep using the normal maximum/average/minimum way of measuring performance when 60 FPS is always the target? From what I can see, this thing promotes the "average" and nothing higher. It's a way of chasing efficiency and lowering thermals, while giving it a shot of nitrous when it can't keep up with the mean. So will all the old data and graphs still translate apples-to-apples with what we've been given in the past? Will it look/play great? In theory yes, though Adaptive V-Sync may sire new glitchiness in place of the traditional lag and spikes as we've known them. Averaging at 60 FPS won't look all that different from, say, a 7970 that hits 80 FPS by the old way of measuring an average (that's what Nvidia is betting on). In my mind this really ends the whole "I'm fastest" contest and changes it to "I'm the same at providing the average", because no one can tell the difference. Kind of the old "more than a mouthful is a waste"!
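Just to illustrate what I mean about the averages, here's a rough sketch (plain Python, made-up frame-time numbers, not real benchmark data) of how a 60 FPS cap squashes the usual min/avg/max comparison:

```python
# Made-up per-frame render times in milliseconds: an uncapped card that
# averages roughly 80 FPS vs. the same workload capped at 60 FPS.

def fps_stats(frame_times_ms):
    """Return (min, avg, max) FPS from a list of per-frame render times."""
    fps = [1000.0 / t for t in frame_times_ms]
    # True average FPS over the run = total frames / total elapsed time.
    avg = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    return min(fps), avg, max(fps)

uncapped = [10.5, 11.2, 12.5, 14.0, 16.7, 11.8, 12.1, 13.3]   # hypothetical values
capped   = [max(t, 1000.0 / 60.0) for t in uncapped]          # no frame faster than 60 FPS

print("uncapped min/avg/max:", fps_stats(uncapped))
print("capped   min/avg/max:", fps_stats(capped))
```

The minimums barely move, but the average and maximum collapse toward 60, so two very different cards can end up posting nearly identical numbers.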
I don't know what this means going forward, but traditional testing like W1zzard has been doing may carry very little weight any longer. We might need more graphs like the "spiky" frame-by-frame plots [H]ard|OCP runs over the course of a game; although now there will just be this slightly wavy green line hugging 60 FPS. Well, that's boring, and it might really shrink graphics card reviews as we know them: sure, it plays BF3... 60 FPS.
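Those frame-by-frame plots would still show the difference, though. A quick sketch of what I mean (Python + matplotlib, assuming you've dumped per-frame times to a one-value-per-line log; the "frametimes.csv" name and format are just placeholders):

```python
import matplotlib.pyplot as plt

# Load per-frame render times (ms) from a simple one-column log file.
with open("frametimes.csv") as f:
    frame_times_ms = [float(line) for line in f if line.strip()]

# Build elapsed time on the x-axis and instantaneous FPS on the y-axis,
# like the "spiky" graphs in reviews.
elapsed_s = []
total = 0.0
for t in frame_times_ms:
    total += t
    elapsed_s.append(total / 1000.0)
fps = [1000.0 / t for t in frame_times_ms]

plt.plot(elapsed_s, fps, linewidth=0.8)
plt.axhline(60, linestyle="--", label="60 FPS target")
plt.xlabel("Time (s)")
plt.ylabel("Instantaneous FPS")
plt.legend()
plt.show()
```

On a capped card that line just hugs 60; on an uncapped one you still see the dips and spikes the averages hide.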
It feels like Nvidia took the ball, ran into left field, and is saying "the game as you knew it has changed". Which isn't wrong or bad, but it really feels like the old way of figuring out the best card has changed... while the price remains the same! Now, here's a question: why even have enthusiast cards anymore? In theory, any GPU strong enough to run the newest or most demanding titles, as long as it can clock up fast enough for the few milliseconds needed to keep the game from dipping below 60 FPS, is all the best new offerings need to be. The new mantra will be "we can render that @ XXXX resolution with this level/type of AA" (as in TXAA, Nvidia's anti-aliasing algorithm). It's no longer about being fastest or faster. So if AMD takes Cape Verde and does the same, are we all okay with it?