Thursday, April 10th 2014
NVIDIA GeForce GTX 880 Detailed
NVIDIA's next-generation GeForce GTX 880 graphics card is shaping up to be a true successor to the GTX 680. According to a Tyden.cz report, the GTX 880 will be based on NVIDIA's GM204 silicon, which occupies the same position in the "Maxwell" product stack that GK104 does in the GeForce "Kepler" family. It won't be the biggest chip based on the "Maxwell" architecture, but it will have what it takes to outperform even the GK110, in the same way GK104 outperforms GF110. The DirectX 12-ready chip will feature an SMM (streaming multiprocessor, Maxwell) design identical to that of the GeForce GTX 750 Ti, only with more SMMs, spread across multiple graphics processing clusters (GPCs) and probably cushioned by a large slab of cache.
Sources: PCTuning, Tyden.cz, Expreview
This is what the GTX 880 is shaping up to be:
- 20 nm GM204 silicon
- 7.9 billion transistors
- 3,200 CUDA cores
- 200 TMUs
- 32 ROPs
- 5.7 TFLOP/s single-precision floating-point throughput
- 256-bit wide GDDR5 memory interface
- 4 GB standard memory amount
- 238 GB/s memory bandwidth
- Clock speeds of 900 MHz core, 950 MHz GPU Boost, 7.40 GHz memory
- 230W board power
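For what it's worth, those headline figures hang together. Here is a quick sanity check in Python (purely illustrative; the inputs are just the rumored specs above, and the formulas are the standard throughput and bandwidth ones):

cuda_cores = 3200
base_mhz = 900               # rumored base clock
mem_ghz = 7.40               # effective GDDR5 data rate
bus_bits = 256

# single precision: 2 FLOPs (one fused multiply-add) per core per cycle
tflops = cuda_cores * 2 * base_mhz / 1e6
print(round(tflops, 2))      # 5.76, in line with the quoted 5.7 TFLOP/s

# bandwidth = effective data rate x bus width in bytes
gb_s = mem_ghz * bus_bits / 8
print(round(gb_s, 1))        # 236.8, close to the quoted 238 GB/s (rounding in the leak, presumably)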
102 Comments on NVIDIA GeForce GTX 880 Detailed
What do you want me to say, that you are right or even thank you for your kind words????
Let's stop it here, because every post you make just makes it worse.
Also, there are nut jobs, and then there are nut jobs.
Whatever you like to believe.
P.S. And no. When commenting on someone you don't know, there is only one kind of "nut job": the kind that insults.
Look at what has been said about next-generation GPUs over the years; both this and the Pirate Islands info are realistic possibilities, that's all. Final steppings and binning might yet destroy all hope of a good 2015 (as I said ages ago), as 20 nm is not looking stable.
With Maxwell, we didn't know for sure what the cards would look like one week before the official presentation, let alone months. Do they have a six-pin connector? Do they need it? Do they not? Just as an example.
AMD's cards are easier to predict.
Nvidia farts out an imaginary Titan Z for $3,000 and AMD follows suit with an actual R9 295X2 for the low low price of only $1,500. At least we get a suitcase with this one. :p
Welcome to the Circus of Values!
Am I buying it? Wait. The problem isn't with the chip. The problem is with the price. Nvidia is intentionally increasing prices in the high-end sector. What you could buy for $500-600 in the past costs $700-$1,000 today, or that's how much it will cost you, me, everybody tomorrow. So if that small chip, with 50% extra performance over the GTX 780, comes as a GTX 880 at $900, well, they know what they can do with it, or better, what to do with the whole card, for better satisfaction.
Then the mining madness happened, and the extra dollars from the price hikes went not into AMD's pockets but into the retailers'. So I am guessing that at AMD they were hitting their heads against the wall over losing the chance to sell the cards at much better margins.
The result is AMD following the leader, Nvidia. So here we are, with a metal case, a hydro cooler, and a $1,500 price that makes even Nvidia's marketing department at Tom's happy with the card (the price was in line with Nvidia's plans).
www.techpowerup.com/reviews/ASUS/GTX_750_Ti_OC/23.html
There is a huge increase in the core voltage needed to attain the last bit of clock speed, and it becomes exponential as the core gets hotter and leakage increases; look at any GPU review and you will notice this. The only exception to this rule is the minimal switching power required at low speeds, and the 750 Ti does a good job there: at lower core speeds the leakage is so minimal that the 0.95 V it runs at means almost nothing in real power consumption, and that is only possible on this core because it is so small, with so few shaders. If we attempted the same with a larger core, the voltage droop on the first increase in clock speed would cause the hardware to fail during the clock-state transition.
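To put rough numbers behind that voltage/leakage argument, here is a toy CMOS power model in Python. The constants are made up purely for illustration (this is nobody's actual silicon data): dynamic power scales with C*V^2*f, while leakage grows with voltage and roughly doubles for every ~25 degrees C of core temperature, which is why the last few MHz cost so much.

def board_power(v_core, f_mhz, temp_c, c_eff=0.06, leak_0=5.0):
    # dynamic (switching) power ~ C_eff * V^2 * f
    dynamic = c_eff * v_core ** 2 * f_mhz
    # leakage grows with voltage and compounds with temperature
    leakage = leak_0 * v_core * 2 ** ((temp_c - 40) / 25.0)
    return dynamic + leakage

print(round(board_power(0.95, 1020, 60)))   # ~64 W: small, cool core at 0.95 V
print(round(board_power(1.20, 1300, 85)))   # ~133 W: ~27% more clock, over 2x the power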
Now we have two options here. We can say: hey, look at the 750 Ti, it costs $150, and the R7 265 also costs $150.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_750_Ti/28.html
Even overclocked, it doesn't reach the R7, whose silicon is two years old. So bad on them. And if you are concerned about a few watts' difference at idle and full load, why are you even talking about a high-power GPU?
The other option is to say that the 750 Ti is a great first foray for Nvidia to test something new, and despite its lack of GPU power for actual use since 2009, I hope they learn from it. But power consumption speculation and raw GPU power have been centered around the 300 W ceiling for both companies for the last few generations, and I don't see any reprieve there. All I can see is that the shrink should bring power consumption per transistor down, allowing more transistors and more raw compute power; core improvements and features will no doubt improve IPC. Anything more than this is buffalo chips.