# NVIDIA GeForce GTX 880 and GTX 870 to Launch This Q4



## btarunr (Jun 19, 2014)

NVIDIA is planning to launch its next high-performance single-GPU graphics cards, the GeForce GTX 880 and GTX 870, no later than Q4 2014, in the neighborhood of October and November, according to a SweClockers report. The two will be based on the brand new "GM204" silicon, which, most reports suggest, is built on the existing 28 nm silicon fab process. Delays by NVIDIA's principal foundry partner TSMC in implementing its next-generation 20 nm process have reportedly forced the company to design a new breed of "Maxwell" based GPUs on the existing 28 nm process. The architecture's good showing with efficiency on the GeForce GTX 750 series probably gave NVIDIA hope. When 20 nm is finally smooth, it wouldn't surprise us if NVIDIA optically shrinks these chips to the new process, as it did with the G92 (from 65 nm to 55 nm). The GM204 chip is rumored to feature 3,200 CUDA cores, 200 TMUs, 32 ROPs, and a 256-bit wide GDDR5 memory interface. It succeeds the company's current workhorse chip, the GK104.







----------



## d1nky (Jun 19, 2014)

already?!


----------



## XSI (Jun 19, 2014)

I would be happy to change my 8800GT to GTX 880


----------



## The Von Matrices (Jun 19, 2014)

btarunr said:


> When 20 nm is finally smooth, it wouldn't surprise us if NVIDIA optically shrinks these chips to the new process, as it did with the G92 (from 65 nm to 55 nm).



I have to disagree with you here.  20nm isn't going to be less expensive than 28nm per transistor, so there's no financial incentive for a die shrink and thus it won't be done.  It makes more financial sense to sell a large 28nm chip than a smaller 20nm chip.

20nm will only be for the extreme high end this generation and will only be used in cases where it's impossible to manufacture a larger 28nm chip (e.g. you can't make a 28nm, 15 billion transistor, 1100mm^2 GM100).  20nm won't become mainstream until NVidia (or anyone else) can't achieve their performance targets on 28nm, which likely will not happen until the generation after this.


----------



## GreiverBlade (Jun 19, 2014)

btarunr said:


> The GM204 chip is rumored to feature 3,200 CUDA cores, 200 TMUs, 32 ROPs, and a *256-bit* wide GDDR5



I can't wait to see how an 880 does against the 780/780 Ti and R9 290/290X... if the gain is minimal (15-25%) and the TDP is the major selling point, then no regrets (especially if NV does the pricing "à la NVIDIA").


----------



## THE_EGG (Jun 19, 2014)

Earlier than I thought. I thought it would be coming out around December 2014 to February 2015 sometime. Looking forward to it!


----------



## MxPhenom 216 (Jun 19, 2014)

I expect GM210, big die Maxwell to debut 20nm.


----------



## ZoneDymo (Jun 19, 2014)

Will be interesting to see how it performs, whether it handles 4K well enough, and what the power usage is like.
But that 28 nm vs. 20 nm situation makes it feel like an in-between thing you don't want, IMO.


----------



## alwayssts (Jun 19, 2014)

d1nky said:


> already?!



LOL...
___

I don't understand why people think a 256-bit/32 ROP chip is going to have something like 3200sp.  That makes absolutely no sense.  Half that (according to nvidia-speak), at most, is feasible.

One of those components, at least, is wrong.  It could be 256-bit/32 ROPs/1536(1920), or, since we supposedly know it is 8GB (and sixteen 4Gb chips is a lot for a mid-range part), 512-bit/64/3200, or some combo of more cache/256-bit/64 ROPs/3200, because the design probably will indeed be shrunk to 20 nm, where size will prohibit a larger bus.  You gotta remember 3200sp, or 25 SMM, is essentially similar to 4000sp from AMD.  That's a lot of chip, more than actually needed for 64 ROPs on average (whereas Hawaii would be optimal for 48, if the design allowed it)...and again, if true, we can probably more realistically expect 23-24 SMM (3072sp) parts, as that makes the most efficient sense.  Not unlike Titan, for instance, and the full design is probably a safety net.

I agree it will be shrunk, but I think a more suitable comparison would be G80->G92b...because, if accurate, we're talking a huge chip (~4x GM107) transitioning to a process that's supposed to allow somewhere around 1.9x density, granted around 1.2-1.3x performance/power savings.  That means going from behemoth size (GT200 was 576 mm²) to large 256-bit size (like GK104, which is 294 mm², and probably the largest really feasible before switching to a wider controller with slower RAM).  I certainly see how it could be conceivable to have such a large design on 28 nm, and then scale size down and clock speed up as we move to newer processes.  That doesn't necessarily mean its market will change...a small(ish) chip on 20nm/16nm (20nmFF) will likely be very expensive, but the clock improvement/power savings could, at least on the latter, make the change worth it.

I'm really curious how they could get a 3072sp part (equivalent to 3840sp from AMD) with 8GB of RAM within a decent power envelope, especially in a feasible manner (meaning at least 0.9 V and around 876 MHz, the minimum voltage for the process and average clocks at that voltage).  I don't doubt the design is 'possible', especially with low-speed/low-voltage, higher-density RAM on a smaller bus (cache is probably more power efficient), but damn....that's pushing it to the edge of feasibility on pretty much all counts.
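For anyone checking the unit math in the post above: first-generation Maxwell groups CUDA cores into SMMs of 128 cores each (as on GM107), so the rumored totals map to SMM counts like this (a quick sketch; the core counts other than GM107's 640 are rumors, not confirmed parts):

```python
# A first-gen Maxwell SMM contains 128 CUDA cores (as on GM107).
CORES_PER_SMM = 128

def smm_count(cuda_cores):
    """Number of SMMs implied by a given CUDA core count."""
    return cuda_cores / CORES_PER_SMM

# GM107 itself, a hypothetical "4x GM107", the 3072sp cut-down, and the rumored 3200:
for cores in (640, 2560, 3072, 3200):
    print(f"{cores} cores -> {smm_count(cores):g} SMMs")
```

Note that a straight "4x GM107" would be 20 SMMs (2,560 cores), while the rumored 3,200 works out to 25 SMMs, which is where the "25 SMM" figure above comes from.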


----------



## Dejstrum (Jun 19, 2014)

Finally....... I need to change my  gtx 570


----------



## RCoon (Jun 19, 2014)

Alright, I don't expect any miracles then. Same process, but more cores? It's just Kepler with ~320 more cores on a slightly more energy-efficient architecture. So they might offset the heat increase from adding more cores by using the slightly more efficient archi, and in turn gain a small performance increase going from 2880 cores to 3200. I'm assuming the 870 will have ~3000 cores to hit a price point between the two.

Call me cynical, but I don't see the 780 Ti lowering in price and the 880 taking its place. The 880 is going to hit a higher price point. Then there's the simple fact that the 860 is probably going to be a rebranded 780 Ti, and everything else below will likely be a rebrand too. Ugh... new GPU releases are so disappointing these days... nothing to get excited about, especially when you know the price gouging is imminent.


----------



## arbiter (Jun 19, 2014)

RCoon said:


> Alright, I don't expect any miracles then. Same process, but more cores? It's just Kepler with ~320 more cores on a slightly more energy-efficient architecture. So they might offset the heat increase from adding more cores by using the slightly more efficient archi, and in turn gain a small performance increase going from 2880 cores to 3200. I'm assuming the 870 will have ~3000 cores to hit a price point between the two.



"Slightly more efficient"? You should check the 750 Ti and see how its power usage compares. It used less than 50% of the power the 650 Ti used, even though the 650 Ti had 768 cores and the 750 Ti only 640. The 650 non-Ti had 384 cores and used 4 more watts than the 750 Ti is rated at. I don't expect it to be 50% of what 780s use (listed around 250 W), but it could very well be in the ~150-175 W range, maybe a little higher.


----------



## The Von Matrices (Jun 19, 2014)

alwayssts said:


> I don't understand why people think a 256-bit/32 ROP chip is going to have something like 3200sp.  That makes absolutely no sense.  Half that (according to nvidia-speak), at most, is feasible.
> 
> One of those components, at least, is wrong.  It could be 256-bit/32 ROPs/1536(1920), or, since we supposedly know it is 8GB (and sixteen 4Gb chips is a lot for a mid-range part), 512-bit/64/3200, or some combo of more cache/256-bit/64 ROPs/3200, because the design probably will indeed be shrunk to 20 nm, where size will prohibit a larger bus.  You gotta remember 3200sp, or 25 SMM, is essentially similar to 4000sp from AMD.  That's a lot of chip, more than actually needed for 64 ROPs on average (whereas Hawaii would be optimal for 48, if the design allowed it)...and again, if true, we can probably more realistically expect 23-24 SMM (3072sp) parts, as that makes the most efficient sense.  Not unlike Titan, for instance, and the full design is probably a safety net.
> 
> ...



I think the much simpler explanation is the one that Cadaveca posted at the last leak.  The different SKUs are getting mixed up, and 3200 SP and 8 GB are for a dual-GPU card, the successor to the GTX 690.  The single-GPU part, successor to the GTX 680/GTX 770, would therefore have 4 GB and 1600 SP.  To me, this is much more reasonable.

Remember, the GTX 750 Ti outperforms the GTX 650 Ti by 20% and yet it has 20% fewer shaders, so assuming the same scaling, a 1600 SP GTX 880 would have almost 50% more performance than the GTX 770/680, completely in line with a generational improvement.

Edit: updated correct card names
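The scaling argument above works out roughly like this (a back-of-envelope sketch using the post's own figures; it assumes performance scales linearly with per-shader throughput, which is optimistic):

```python
# Per the post: GTX 750 Ti is ~20% faster than GTX 650 Ti with ~20% fewer shaders.
perf_ratio = 1.20      # 750 Ti vs 650 Ti performance
shader_ratio = 0.80    # 750 Ti vs 650 Ti shader count
per_shader_gain = perf_ratio / shader_ratio   # Maxwell vs Kepler per-shader throughput

# Hypothetical 1600 SP Maxwell part vs the 1536 SP GK104 (GTX 680/770):
est_speedup = per_shader_gain * (1600 / 1536)
print(f"Per-shader gain: {per_shader_gain:.2f}x; "
      f"a 1600 SP part would be ~{est_speedup - 1:.0%} faster than GK104")
```

Which lands in the mid-50s percent, consistent with the "almost 50% more performance" claim.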


----------



## RCoon (Jun 19, 2014)

arbiter said:


> "Slightly more efficient"? You should check the 750 Ti and see how its power usage compares. It used less than 50% of the power the 650 Ti used, even though the 650 Ti had 768 cores and the 750 Ti only 640. The 650 non-Ti had 384 cores and used 4 more watts than the 750 Ti is rated at. I don't expect it to be 50% of what 780s use (listed around 250 W), but it could very well be in the ~150-175 W range, maybe a little higher.



Yeah, I understand the 750 Ti was a total baller for energy efficiency, but it wasn't just down to cores. This 880 has more of everything, wider memory bus, etc., so while it will undoubtedly use less power than the 780 Ti, I don't foresee it being a massive amount. Like you said, the difference of 250 W and 175 W, I reckon ~50 W or more in savings sounds about right.


----------



## techy1 (Jun 19, 2014)

will it run Crysis *in 4K*? if the answer is "no" - why should we bother and even talk about this useless hardware. if the answer is "yes" - then shut up and take my money


----------



## HumanSmoke (Jun 19, 2014)

The Von Matrices said:


> I have to disagree with you here.  20nm isn't going to be less expensive than 28nm per transistor, so there's no financial incentive for a die shrink and thus it won't be done.  It makes more financial sense to sell a large 28nm chip than a smaller 20nm chip.


It does make financial sense to go with 28 nm, but I doubt it is for the reason you've given.
Transistor density for 20 nm (16 nm FEOL + 20 nm BEOL) is estimated at 1.9-2.0x that of 28 nm.
Wafer costs: 28 nm: $4,500-5,000 per wafer. 20 nm: $6,000 per wafer...1.3x that of 28 nm.

Reasons to go with 28 nm?
Available capacity
Yields
Would the GPU design benefit from, or require, increased transistor density over increased GPU silicon cost for the given price points of the product being sold? The GTX 870/880 (presumably followed by the GTX 860 Ti) would still likely reside in the $350/$500 segment brackets. Why add to the manufacturing cost when you're under no pressure to do so (since AMD will also go with 28 nm for their next iteration of GPUs)?

My guess is that neither Nvidia nor AMD trusts TSMC to deliver a large IC in commercial quantity based on TSMC's projections. Given the woes of 32 nm and the slow, problematic ramp of 28 nm, who could blame them?
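Plugging the figures above into a per-transistor cost ratio (a rough sketch that ignores yield, which in practice dominates early-node economics):

```python
# Estimates quoted above: 20nm offers ~1.9-2.0x the transistor density of 28nm
# at ~1.3x the wafer cost.
density_gains = (1.9, 2.0)
wafer_cost_ratio = 1.3

for d in density_gains:
    cost_per_transistor = wafer_cost_ratio / d   # 20nm cost relative to 28nm
    print(f"density {d}x -> 20nm cost/transistor = {cost_per_transistor:.2f}x of 28nm")
# ~0.65-0.68x on paper -- but only at comparable yield,
# which a freshly ramped 20nm process won't have.
```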


----------



## Squuiid (Jun 19, 2014)

What I most want to know is: do these cards do HDMI 2.0 and DisplayPort 1.3?
Until both video cards and 4K monitors support BOTH of these standards, I won't be dumping my GTX 590 any time soon.
These two standards are a must for 4K, IMO.


----------



## Roel (Jun 19, 2014)

I am hoping for cards with 3 DisplayPort connections.


----------



## FrustratedGarrett (Jun 19, 2014)

arbiter said:


> Sighting more efficient? you should check on 750ti and see how its power usage compares. It used less then 50% the power 650ti used, yea 650ti had 768 cores and 750ti only had 640. 650 non-ti had 384 cores and it used 4 more watts then 750ti was rated it. I don't expect it to be 50% of what 780's use which is listed around 250 watts but very possible it could be around ~150-175watt range maybe little higher.



Yeah, but the Maxwell GM107 is ~160 mm² and it only packs half the performance of the GK104, which measures ~300 mm², so Maxwell doesn't improve efficiency area-wise.  I expect the new chips to be big, and while not as power hungry as the GK110 chips, performance is not going to be much better.

BTW, I think 3,200 CUDA cores is impossible. If GM107 can pack 640 CUDA cores onto a ~160 mm² chip, then a 450 mm² chip can't pack more than ~2,000 cores.
I expect 15%-20% better performance than the 780 Ti at lower prices, which is great nevertheless!
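The core-count ceiling argued above follows from simple area scaling (a crude estimate: it assumes cores-per-mm² stays constant, ignoring that uncore, cache, and memory PHYs don't grow in proportion to core count):

```python
# Per the post: GM107 packs 640 CUDA cores into ~160 mm^2 on 28nm.
GM107_CORES, GM107_AREA = 640, 160.0

def cores_for_area(area_mm2, cores_per_mm2=GM107_CORES / GM107_AREA):
    """Naive core estimate assuming GM107's core density holds."""
    return area_mm2 * cores_per_mm2

print(f"~450 mm^2 die -> ~{cores_for_area(450):.0f} cores at GM107 density")
# ~1800 by straight scaling; a bigger die amortizes uncore better,
# which is roughly how you get to the "~2000" figure above.
```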


----------



## The Von Matrices (Jun 19, 2014)

HumanSmoke said:


> It does make financial sense to go with 28 nm, but I doubt it is for the
> reason you've given.
> Transistor density for 20 nm (16 nm FEOL + 20 nm BEOL) is estimated at 1.9-2.0x that of 28 nm.
> Wafer costs: 28 nm: $4,500-5,000 per wafer. 20 nm: $6,000 per wafer...1.3x that of 28 nm.
> ...



I should clarify my point.  I was making my comment based upon NVidia's own press slide showing the transition to cost-effective 20nm occurring in Q1 2015.







The difference in cost per transistor between 20nm and 28nm is minimal, making me question whether it's worth putting engineering effort toward shrinking GPUs for a marginal cost savings per GPU (that may never make up the capital expenditure to make new masks and troubleshoot issues) rather than concentrating engineering on completely new GPUs at that smaller process.  Unlike in the past, there's a lot more to be gained from a newer, more efficient architecture than from a die shrink.


----------



## RejZoR (Jun 19, 2014)

People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.


----------



## Constantine Yevseyev (Jun 19, 2014)

techy1 said:


> will it run Crysis *in 4K*? if the answer is "no" - why should we bother and even talk about this useless hardware. if the answer is "yes" - then shut up and take my money


Dude, you have _so much_ to learn about computer software, I don't even know where you should start...


----------



## robert3892 (Jun 19, 2014)

ZoneDymo said:


> Will be interesting to see how it performs, whether it handles 4K well enough, and what the power usage is like.
> But that 28 nm vs. 20 nm situation makes it feel like an in-between thing you don't want, IMO.



I don't think you'll see good 4K support until 2015


----------



## Kissamies (Jun 19, 2014)

I'll just guess that the full GM204 has 2560 shaders.


----------



## HumanSmoke (Jun 19, 2014)

FrustratedGarrett said:


> Yeah, but the Maxwell GM107 is ~160 mm² and it only packs half the performance of the GK104, which measures ~300 mm², so Maxwell doesn't improve efficiency area-wise.


GM107 is 148mm², GK104 is 294mm².
You can say that Maxwell is half the size for slightly better than half the performance, although the comparison is somewhat flawed: the Maxwell chip is hampered by a constrained bus width, and it devotes a larger percentage of its die area to uncore than GK104 does (the L2 cache is a significant increase, but not particularly relevant to gaming at this time).
As you say, I'd be very sceptical over the 3200 core claim. The GM204 is obviously designed to supplant GK104, not GK110.


----------



## TheDeeGee (Jun 19, 2014)

techy1 said:


> will it run Crysis *in 4K*? if the answer is "no" - why should we bother and even talk about this useless hardware. if the answer is "yes" - then shut up and take my money



Just like "Mom" jokes, the Crysis ones also getting old.

And btw Crysis is a turd that can't be polished.


----------



## techy1 (Jun 19, 2014)

Svarog said:


> Just like "Mom" jokes, the Crysis ones also getting old.
> 
> And btw Crysis is a turd that can't be polished.


Mom jokes never get old. But the Crysis question is still relevant, because if this card can't run the newest titles (I don't give a damn about Crysis) at 4K, then why should someone upgrade from cards like the HD 5870 or GTX 580?


----------



## HumanSmoke (Jun 19, 2014)

The Von Matrices said:


> I should clarify my point.  I was making my comment based upon NVidia's own press slide showing the transition to cost-effective 20nm occurring in Q1 2015.


Fair enough, although I personally don't put much stock in a vendor's slides, especially when it's an Nvidia/TSMC thing - they're like some masochistic couple - the Richard Burton and Elizabeth Taylor of the semicon world. I'd also note that some of the numbers for those projections have changed, or at least been made public, since the TSMC vs. Intel transistor density spat earlier in the year.


The Von Matrices said:


> The difference in cost per transistor between 20nm and 28nm is minimal, making me question whether it's worth putting engineering effort toward shrinking GPUs for a marginal cost savings per GPU (that may never make up the capital expenditure to make new masks and troubleshoot issues) rather than concentrating engineering on completely new GPUs at that smaller process.  Unlike in the past, there's a lot more to be gained from a newer, more efficient architecture than from a die shrink.


The whole deal with 20/16 nm, like (almost) any new process, is the ability to dial in either a lower power budget or higher clocks. Lower power isn't a "must have" for the high-end GPU market judging by recent events, and higher clocks mean higher localized temps in a more densely packed die, which is a bit of a compromise on a large die (it's problematic enough on the small-die Ivy Bridge/Haswell). If the current architecture can't fully utilise the higher clocks without pipeline bottlenecks, is it worthwhile moving to a smaller node? That's what I meant by "Would the GPU design benefit from, or require, increased transistor density over increased GPU silicon cost for the given price points of the product being sold?" How much real-world performance is gained versus overclock for the 750 Ti, for example (percentage to percentage)? W1zzard measured the difference as 14% more performance from an 18.2-22.7% clock boost between a stock 750 Ti (980-1150 core) and an OC'd card (1202-1359 core), so there is a point where the higher clocks don't earn their keep.
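Those 750 Ti numbers translate into a clock-scaling efficiency like this (a quick sketch using only the percentages quoted above):

```python
# Quoted above: +14% performance from an 18.2-22.7% core clock boost on a 750 Ti.
perf_gain = 0.14
clock_gain_low, clock_gain_high = 0.182, 0.227

eff_best = perf_gain / clock_gain_low    # perf gain vs the smallest clock gain
eff_worst = perf_gain / clock_gain_high  # perf gain vs the largest clock gain
print(f"Scaling efficiency: {eff_worst:.0%} - {eff_best:.0%} of the clock increase")
# Only ~62-77% of the extra clock shows up as performance -- past that point
# something other than core clock is the bottleneck.
```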


----------



## The Von Matrices (Jun 19, 2014)

techy1 said:


> Mom jokes never get old. But the Crysis question is still relevant, because if this card can't run the newest titles (I don't give a damn about Crysis) at 4K, then why should someone upgrade from cards like the HD 5870 or GTX 580?



The question should not be "can it run the newest games" because a graphics card from 10 years ago can do that.  The question should be "can it run the newest games faster or at better quality" to which the answer is a definite yes.

There is a correlation between the power of GPUs and the complexity of games.  If you're waiting to buy a GPU that can play the latest games at highest settings then you will never buy one because the latest games will always be setting the bar higher.


----------



## RCoon (Jun 19, 2014)

techy1 said:


> Mom jokes never get old. But the Crysis question is still relevant, because if this card can't run the newest titles (I don't give a damn about Crysis) at 4K, then why should someone upgrade from cards like the HD 5870 or GTX 580?



VRAM usage is only going to go up. I upgraded from my 3 x 570s because 1280 MB of VRAM wasn't enough to run Max Payne 3 at ultra settings at 1080p.


----------



## btarunr (Jun 19, 2014)

alwayssts said:


> I don't understand why people think a 256-bit/32 ROP chip is going to have something like 3200sp.  That makes absolutely no sense.



I know, right? It made sense for the GTX 560 Ti (GF114) to have 384sp and 256-bit/32 ROPs. A chip with four times the cores (1536sp) and 256-bit/32 ROPs is so totally unimaginable! NVIDIA would never make such a chip. 

Oh wait...it did. The GK104.

Ermagerd...3200 SP and 256-bit/32 ROPs? Totally unimaginable and borderline blasphemous!


----------



## ZoneDymo (Jun 19, 2014)

Constantine Yevseyev said:


> Dude, you have _so much_ to learn about computer software, I don't even know where you should start...



I really would like to know where that came from because it does not seem to be relevant in any way to what the guy stated.


----------



## ZoneDymo (Jun 19, 2014)

Svarog said:


> Just like "Mom" jokes, the Crysis ones also getting old.
> 
> And btw Crysis is a turd that can't be polished.



Crysis can be taken to mean the Crysis series, and as of yet those are still among the heaviest games out there, so using them as a standard for the next gen to overcome is not such an odd request.
Also, nobody was talking about the quality of the games.


----------



## W1zzard (Jun 19, 2014)

RejZoR said:


> People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.


today's top-performing cards are limited by power consumption = heat output. so by your analogy, today's Ferraris will always be limited by the engine's rpm limiter, which can only be relaxed if you improve engine tech


----------



## RejZoR (Jun 19, 2014)

Not really. RPM isn't everything. If you increase the engine capacity, you don't need as many RPM to compensate. Besides, if it was really a thermal issue, all the GPUs would use water cooling as the stock cooling solution. But they still use crappy little coolers. So there is plenty of headroom...


----------



## RCoon (Jun 19, 2014)

RejZoR said:


> Not really. RPM isn't everything. If you increase the engine capacity, you don't need as many RPM to compensate. Besides, if it was really a thermal issue, all the GPUs would use water cooling as the stock cooling solution. But they still use crappy little coolers. So there is plenty of headroom...



Less power initially used means not only a smaller thermal envelope, but also a better prospective overclock, as you have more power available before the thing pops, and more headroom to use for OC'ing before you start needing water cooling.

That and I prefer companies at least trying not to destroy our planet for the sake of bigger numbers. Granted, high-end GPUs as a concept still destroy the planet, but at least it's a dent.

I'm glad you don't care about power consumption. BUT MOST OF THE REST OF US DO.


----------



## blibba (Jun 19, 2014)

RejZoR said:


> People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.



You don't care about power consumption, but you do care about price? You realise electricity costs money? I care about consumption insofar as the money saved on electricity will allow me to get a better card.


----------



## Deleted member 24505 (Jun 19, 2014)

People who buy 2x 780ti do not care about cost of electric.


----------



## ZoneDymo (Jun 19, 2014)

tigger said:


> People who buy 2x 780ti do not care about cost of electric.



And you know this because you asked every person who owns such a config, right?
*Removed (language)*
If you are just browsing the internet or doing any of the other stuff you can and will do on a PC besides gaming, which barely requires a GPU of any kind, you don't want your PC to use up tons of electricity.


----------



## rtwjunkie (Jun 19, 2014)

So, it seems to me this is just an incremental upgrade on current gen, with the addition of much better energy efficiency.  That means the really big increase in performance will come with the 9-series, right?


----------



## ZoneDymo (Jun 19, 2014)

rtwjunkie said:


> So, it seems to me this is just an incremental upgrade on current gen, with the addition of much better energy efficiency.  That means the really big increase in performance will come with the 9-series, right?



its taking waaay too long for someone who wants to upgrade but is waiting for a proper power upgrade ><


----------



## RCoon (Jun 19, 2014)

ZoneDymo said:


> its taking waaay too long for someone who wants to upgrade but is waiting for a proper power upgrade ><



Pretty much like the processor market. People who are still on Sandybridge STILL have no reason at all to upgrade from their stupendously overclocked processors to the new Devil's Canyon chips, besides a small performance increase which isn't necessarily needed. GPU market is stagnating and no real performance improvement is coming because we're stuck on the 28nm process and everything is just a rebranded architecture.


----------



## midnightoil (Jun 19, 2014)

Personally I'm much more excited to see what AMD's next cards can do, since they'll be on GF 28nm instead of TSMC.

Oh, and the specs listed here are completely inaccurate / made up.  No way a card with that many shaders could function on a 256bit memory bus.


----------



## CookieMonsta (Jun 19, 2014)

RejZoR said:


> People are still obsessed with stupid power consumption. It's like buying a Ferrari and then driving around at 50 km/h to conserve petrol. Or worse, driving a Ferrari and constantly bitching about MPG. Idiotic. Give me a cheaper top-performing card and I don't give a toss about consumption.



That's what I thought too, until I got 2 AMD 290s. It's the first time I've actually ever heard my 1000 W PSU's fan, and it was really, really loud....so much for SilentPro.


----------



## rtwjunkie (Jun 19, 2014)

RCoon said:


> Pretty much like the processor market. People who are still on Sandybridge STILL have no reason at all to upgrade from their stupendously overclocked processors to the new Devil's Canyon chips, besides a small performance increase which isn't necessarily needed. GPU market is stagnating and no real performance improvement is coming because we're stuck on the 28nm process and everything is just a rebranded architecture.


 
Exactly!  Other than for more VRAM, I see really zero incentive to upgrade from my 780 until the 9-series.  The VRAM is gonna be the killer, so I'll probably stay at 1080P so I can still maximize visuals.

Now, my fiance's rig (Frankenrig, below) on the other hand, probably could see some really good improvement with the 8's going from a 660Ti.


----------



## yogurt_21 (Jun 19, 2014)

Power consumption and heat arguments typically come with noise arguments, too, and some people only care about the latter. So sure, you can afford to pay for the power used and don't mind dropping the A/C down a few degrees to counteract the heat being added to the room, but if that thing idles above 45 dBA it's going to be annoying. So even those with unlimited funds and a "gimme more powa" mentality will still want a quieter rig when they're just browsing the net. 

After 4 years of Fermi SLI, I'm ready for something with less of all the above. I'm also interested in semi-portability, so a gaming laptop is where it's at for me. 

But say you've been sitting on 580 SLI and the rest of your rig runs fine. I can see an 880/870 that runs at less than half the power while offering more performance being a very attractive proposition.


----------



## tehehe (Jun 19, 2014)

The Von Matrices said:


> I have to disagree with you here.  20nm isn't going to be less expensive than 28nm per transistor, so there's no financial incentive for a die shrink and thus it won't be done.  It makes more financial sense to sell a large 28nm chip than a smaller 20nm chip.
> 
> 20nm will only be for the extreme high end this generation and will only be used in cases where it's impossible to manufacture a larger 28nm chip (e.g. you can't make a 28nm, 15 billion transistor, 1100mm^2 GM100).  20nm won't become mainstream until NVidia (or anyone else) can't achieve their performance targets on 28nm, which likely will not happen until the generation after this.



Cost is just one variable. Being ahead of the competition by providing a cooler, less power-hungry, and faster chip is another, because clients will be more likely to buy from you if the competition is not up to par. If that were not the case, companies would not invest in fabs at all. Competition is the magic in all of this.


----------



## xorbe (Jun 19, 2014)

3200 cores and 256-bit mem bus?  Seems unlikely.


----------



## RejZoR (Jun 19, 2014)

blibba said:


> You don't care about power consumption, but you do care about price? You realise electricity costs money? I care about consumption insofar as the money saved on electricity will allow me to get a better card.



Sorry, but I call BS on that. With your more "power efficient" graphics card you'd save maybe €10 a year compared to me, and I'm being very optimistic here. Will you really keep it for 10 years so you can save up €100 and buy a better graphics card with it? Sure, power efficiency is nice if it happens to be cheap, but otherwise everyone charges such ridiculous premiums for it that you never make it back through usage alone. Not for the primary usage graphics cards were intended for, anyway, and that's gaming. Bitcoin mining is something completely different, and I'll still stand by my statement that ALL bitcoin mining is a total and entire waste of the world's resources: grinding pointless algorithms and burning electricity so you can spend the result as real money, a one-sided, non-productive use of the world's goods. The only ones who actually made a profit out of it were the graphics card makers.
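The order of magnitude being argued over is easy to check (an illustrative sketch; the wattage gap, daily hours, and tariff are assumptions, not figures from the thread):

```python
def annual_savings_eur(watts_saved, hours_per_day, eur_per_kwh):
    """Yearly electricity savings for a given power-draw difference."""
    kwh_per_year = watts_saved * hours_per_day * 365 / 1000
    return kwh_per_year * eur_per_kwh

# Example: a card drawing 75 W less, gamed 3 h/day, at EUR 0.20/kWh:
print(f"EUR {annual_savings_eur(75, 3, 0.20):.2f} per year")
# Around EUR 16/year -- the same ballpark as the ~EUR 10 figure above.
```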


----------



## GhostRyder (Jun 19, 2014)

These rumored specs seem to be, as stated, just rumor, because as we've seen, the new Maxwell is about efficiency and better performance per CUDA core instead of just cramming more onto the die.  I would find it hard to believe that after, what, 4 generations of cards they would drop the bus width down to 256-bit.  Too much of this seems like a wish list that got mixed up.  Now, maybe these specs are 100% accurate and we're all just blabbering, but it seems very suspicious because it would not follow the route GM seemed to be going.

Now, as far as the die-shrink obsession goes, I really do not think it's a big deal.  I would rather wait for better stability than shrink just for the sake of shrinking.  GM already proved with the 750 Ti that you can use less power and fewer cores yet achieve better performance on 28 nm.

As for the power consumption debate, lower power consumption is something to strive for, since we have started to go a little crazy in that area.  However, the differences you're talking about in most cases amount to zip/zilch/nada on a power bill, even in some of the more expensive regions.  With things like ZeroCore and the like, if the computer is idle it's not using much power, and even under gaming or 100% load the power consumption is not outrageous even on the craziest of machines.  The difference it would make on a power bill under general use would take years to add up to anything that looks like actual savings.  So unless you're running your computer 24/7 under load, power consumption as money savings is a meh topic.


----------



## Roel (Jun 19, 2014)

The technology keeps getting more energy efficient, so I don't know what you're complaining about. If you want low power consumption, you can go for a 750 Ti, which has a TDP of 60W and performs about the same as a high-end gaming laptop or the 4-year-old GTX 470, which had a TDP of 215W in its day. It can play all games if you just turn the settings down a bit. The top cards are for those who want maximum performance and don't care about power consumption.

I am building a water cooling loop with lots of radiators so I won't have any trouble with the noise. The heat will keep me warm so the central heating system doesn't have to work as hard. I would buy a 500W card if the performance was double that of a 250W card, it would be better than SLI.


----------



## Octavean (Jun 19, 2014)

Meh,....

I just want a cheaper GTX 760 or GTX 770,....

I'm on a GTX 670 now and I find it fine for my needs but I need to upgrade another system.


----------



## debs3759 (Jun 19, 2014)

Hmm, might be time to upgrade that Matrox Mystique...


----------



## TheGuruStud (Jun 19, 2014)

The Von Matrices said:


> I think the much simpler explanation is the one that Cadaveca posted at the last leak.  The different SKUs are getting mixed up and 3200SP and 8GB is for a dual-GPU card, the successor to GTX 690.  The single GPU part, successor to the GTX 680/GTX 770 would therefore have 4GB and 1600SP.  To me, this is much more reasonable.
> 
> Remember, GTX 750 Ti outperforms the GTX 660 Ti by 20% and yet it has 20% fewer shaders, so assuming the same scaling, a 1600SP GTX 880 would have almost 50% more performance than GTX 770/680, completely in line with a generational improvement.



No. Just no.


----------



## The Von Matrices (Jun 19, 2014)

btarunr said:


> I know, right? It made sense for GTX 560 Ti (GF114) to have 384sp, and 256-bit/32 ROP. A chip with four times the cores (1536sp) with 256-bit/32 ROP is so totally unimaginable! NVIDIA would never make such a chip.
> 
> Oh wait...it did. The GK104.
> 
> Ermagerd...3200 SP and 256-bit/32 ROP? Totally unimaginable and borderline blasphemous!



If we had no idea what the Maxwell architecture was, then I could see your point.  But since we have GM107 and we know it has fewer shaders than GK106 (for more performance), then it would be a very unexpected move for GM104 to have double the shaders of GK104.



TheGuruStud said:


> No. Just no.



Care to explain your reasoning?


----------



## TheGuruStud (Jun 19, 2014)

The Von Matrices said:


> If we had no idea what the Maxwell architecture was, then I could see your point.  But since we have GM107 and we know it has fewer shaders than GK106 (for more performance), then it would be a very unexpected move for GM104 to have double the shaders of GK104.
> 
> 
> 
> Care to explain your reasoning?



750 is not competition for a 660 nor faster than a 660.


----------



## The Von Matrices (Jun 19, 2014)

TheGuruStud said:


> 750 is not competition for a 660 nor faster than a 660.



You're right in that I stated the wrong name of the card; I meant the GTX 650 Ti.  However, I was still using the correct numbers when comparing it to the GTX 750 Ti, so the math is still valid.
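The scaling argument being corrected here can be written out as arithmetic. The shader counts are the real GK106/GM107/GK104 figures; the 20% performance gap and the 1600-shader "GTX 880" are the poster's assumptions:

```python
# Perf-per-shader comparison: GM107 (GTX 750 Ti) vs GK106 (GTX 650 Ti).
gtx650ti_shaders = 768   # Kepler GK106-based GTX 650 Ti
gtx750ti_shaders = 640   # Maxwell GM107-based GTX 750 Ti
claimed_uplift = 1.20    # poster's claim: 750 Ti ~20% faster than 650 Ti

# Maxwell's relative performance per shader, under that claim
per_shader = claimed_uplift * gtx650ti_shaders / gtx750ti_shaders  # = 1.44

# Project a hypothetical 1600-shader GTX 880 against the 1536-shader GK104
projected = per_shader * 1600 / 1536
print(f"~{(projected - 1) * 100:.0f}% faster than GTX 680/770")  # ~50%
```

That ~50% figure is what makes the "1600 shaders is the single-GPU part" reading of the leak plausible as a normal generational improvement.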


----------



## Deleted member 24505 (Jun 20, 2014)

ZoneDymo said:


> And you know this because you asked every person who owns such a config right?
> *Removed (language)*
> If you are just browsing the internet or any of those other stuff you can and will do on a pc other then gaming which barely require a gpu of any kind, you dont want your pc to use up tons of electricity.



I know this because they have 2x 780 Ti, not exactly low-powered. I think it's pretty obvious anyone who has two of these is gonna have a pretty high-end rig with at least a 1kW PSU. So do you really think they care about power usage? If they did, they would not have such a high-end rig. Or maybe they just use it for browsing the internet or any of that other stuff you can and will do on a PC other than gaming. Or maybe you are just a *Removed (language)*


----------



## LeonVolcove (Jun 20, 2014)

No upgrade for me, I am still satisfied with my current rig.


----------



## bloodyriders (Jun 20, 2014)

The specs are still just rumors.
Did you guys forget about the 750 Ti? The rumors said it would land between the 660 and 660 Ti on the Kepler architecture,
but about two weeks before release (CMIIW)
it all turned out wrong.
So, rumored NVIDIA specs = not to be trusted.


----------



## Prima.Vera (Jun 20, 2014)

robert3892 said:


> I don't think you'll see good 4K support until 2015



Make that 2016, and realistically speaking 2017. 

Long gone are the times when you could get a 75%-90% performance increase over the previous generation (3870-4870-5870, anyone?).
Now we should be lucky if there is a stunning 25% increase.
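The pessimism can be put into numbers. A small sketch, where the 25% per-generation uplift is the poster's figure and the 2x target (roughly "dual-GPU, 4K-ready" territory) is an assumption:

```python
import math

# Generations needed to double single-GPU performance at a given
# compounding per-generation uplift.
def gens_to_double(uplift):
    return math.ceil(math.log(2) / math.log(1 + uplift))

print(gens_to_double(0.25))  # 4 generations at a +25% pace
print(gens_to_double(0.90))  # 2 even at the old +75-90% pace
```

At roughly one generation a year from 2014, four generations at 25% lands around 2017, which is where the post puts realistic 4K support.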


----------



## ZoneDymo (Jun 20, 2014)

Prima.Vera said:


> Make that 2016 and realistically speaking 2017.
> 
> Long have been the times when you could have a 75%-90% performance increase over previews generation (3870-4870-5870 anyone?).
> Now we should be lucky if there is a stunning 25% increase.



Seems unlikely. The R9 295X2 does a fine job at 4K; if the next single-GPU AMD card (R9 380X?) is as fast as that one, we should be well on our way.


----------



## FreedomEclipse (Jun 20, 2014)

Well, if this brings the price of 770s or 780s down then I'm all for it. I need some 3 or 4 GB cards to drive my 1440p monitor.


----------



## vagxtr (Jun 20, 2014)

btarunr said:


> will be based on the brand new "GM204" silicon, which most reports suggest, is based on the existing 28 nm silicon fab process (...) When 20 nm is finally smooth, it wouldn't surprise us if NVIDIA optically shrinks these chips to the new process, like it did to the G92 (from 65 nm to 55 nm). The GM204 chip is rumored to feature 3,200 CUDA cores, 200 TMUs, 32 ROPs, and a 256-bit wide GDDR5 memory interface. It succeeds the company's current workhorse chip, the GK104.



Right. Once again, nicely rigged numbers make themselves the news. How about that rumor site tells us how SweClockers imagines NVIDIA would manage to design an overstuffed chip with 2.1× the shaders of GK104 on the same *28 nm* processing node, then stick only 200 TMUs on a measly 256-bit bus to accompany it. Maxwell might be great, as GM*1*07 already showed us, but that was achieved mostly by reconfiguring the available resources, which is why we enjoy quite massive gaming improvements over Kepler. And really, nobody even tries to speculate on that. It's always some rigged bigger numbers that aren't possible when NVIDIA already has a pretty big die to start with.


----------



## vagxtr (Jun 20, 2014)

The Von Matrices said:


> I have to disagree with you here.  20nm isn't going to be less expensive than 28nm per transistor, so there's no financial incentive for a die shrink and thus it won't be done.  It makes more financial sense to sell a large 28nm chip than a smaller 20nm chip.
> 
> 20nm will only be for the extreme high end this generation and will only be used in cases where it's impossible to manufacture a larger 28nm chip (e.g. you can't make a 28nm, 15 billion transistor, 1100mm^2 GM100).  20nm won't become mainstream until NVidia (or anyone else) can't achieve their performance targets on 28nm, which likely will not happen until the generation after this.



Actually, by those same jumping-jack-flash TSMC rules, 20 nm will never become "mainstream" at all; it's just an available node for a year or so, and we all expect die shrinks from an eager AMD, with real GPUs already fabbed on it. NVIDIA is betting all its cards on the over-promised but probably quite ready 16 nm/14 nm FinFET HKMG TSMC node, which has been ready for testing since early this year. So if they really had Maxwell done and working, they might just delay it and jump onto the more promising but riskier 16/14 nm node. Didn't anybody ask themselves why AMD rebranded the HD 7000 series last year when that obviously plagued 20 nm node was available to them? A node that has been "production ready" since April 2013 or so!


----------



## vagxtr (Jun 20, 2014)

btarunr said:


> I know, right? It made sense for GTX 560 Ti (GF114) to have 384sp, and 256-bit/32 ROP. A chip with four times the cores (1536sp) with 256-bit/32 ROP is so totally unimaginable! NVIDIA would never make such a chip.
> 
> Oh wait...it did. The GK104.
> 
> Ermagerd...3200 SP and 256-bit/32 ROP? Totally unimaginable and borderline blasphemous!




You should learn more about those two totally different chip design approaches. This thing is pretty much, what-do-you-call-it, "blasphemous": we already previewed the Maxwell architecture in GM107, and it really didn't BOOST THE NUMBER OF SPs, so we can "wildly guess" that this isn't the approach NVIDIA is positioning the whole Maxwell lineup around, as long as we're talking about the SAME NODE here (hint: 28 nm).


----------



## AnticiudadanoNumer1 (Jun 20, 2014)

I don't know what to do: buy a GTX 780 GHz Edition, or wait for the GTX 800 series and buy a 750 Ti in the meantime? I don't know what to do, I'm confused; if someone can help me with this existential question, please do.


----------



## Prima.Vera (Jun 21, 2014)

ZoneDymo said:


> Seems unlikely, the R9 295x2 does a fine job at 4k, if the next AMD card (r9 380x?) single card is as fast as that one we should be well on our way


Read again what I wrote. Your logic applies only if the next-generation cards are 100% faster than the previous ones (the 295X2 is a dual-GPU card, btw...)


----------



## RejZoR (Jun 21, 2014)

They usually are, though not always. The HD 4800 to HD 5800 transition was awesome, because the boost was actually 100%. I wish every graphics iteration went that way. Otherwise we pay twice the price for 30% bumps that are almost an insult to gamers.


----------



## GhostRyder (Jun 21, 2014)

RejZoR said:


> They usually are, though not always. The HD4800 to HD5800 transition was awesome, because the boost was actually 100%. I wish every gfx iteration would be in such a way. Otherwise we pay twice the price for 30% bumps that are almost an insult to gamers.


That's probably how it's going to remain for the foreseeable future, sadly.  On all fronts, the walls are starting to slow us to a crawl when it comes to making things bigger, faster, stronger (etc., joke).  CPUs barely get faster, GPUs jump by small increments; it's just the way things are going to keep going.  Likely things will only change once they finish working power consumption down, then eventually throw that out the window and turn the cards loose.  Or maybe there is something coming that will cause a huge jump; who knows, really...


----------



## Prima.Vera (Jun 21, 2014)

GhostRyder said:


> That's just probably how its going to remain for the foreseeable future sadly.  On all fronts the walls are starting to slow us down to crawls when it comes to making things bigger, faster, stronger, (etc joke).  CPU's barely get faster, GPU's jump by small increments, its just the way things are going to keep going.  Likely things will only change once they start working on power consumption and getting it down then eventually throwing that out the window and turning the cards loose.  Or maybe there is something coming that will cause a huge jump, who knows in reality...


It's the limit of the technology, really. They are almost at the atomic level, and there is no technology yet to further shrink the printed circuit components of CPUs/GPUs. Adding extra transistors is not going to work well either, because good yield percentages will decrease even more, meaning an even higher price for an only marginally better product.

Let's face it, the semiconductor industry is getting stuck, and there's nothing good on the horizon, unfortunately.


----------



## ZoneDymo (Jun 23, 2014)

Prima.Vera said:


> Read again what I've wrote. Your logic applies only if the next generation cards are 100% faster than the previews one (295x2 is a dual GPU btw...)



I know what you wrote, and I know the 295X2 is a dual-GPU card; I did not say R9 380X for nothing. That would be the logical name for their new single-GPU card following the current R9 280X, and the next dual-GPU card should be called the R9 395X2.

Anywho, for a long time the next-gen single-GPU high-end card from AMD was as fast as the previous-gen dual-GPU card.
If that streak continues now, with the current R9 295X2 doing quite well at 4K, the next generation should be well on its way to being 4K-ready.


----------



## heydan83 (Jun 23, 2014)

RCoon said:


> Alright, I don't expect any miracles then. Same process, but more cores? It's just Kepler with 400 more cores on a slightly more energy efficient architecture. So they might deal with the heat increase by adding more cores by using the slightly more efficient archi, and in turn gain a small performance increase from 2880 cores to 3200. I'm assuming the 870 will have ~3000 cores to hit a price point between the two.
> 
> Call me cynical, but I don't see the 780ti lowering in price and the 880 taking its place. The 880 is going to hit a higher price point. Then there's the simple fact that the 860 is probably going to just be a rebranded 780ti and everything else below will likely be a rebrand too. Ugh... new GPU releases are so disappointing these days... nothing to get excited about, especially when you know the price gouging is imminent.



I think it's too optimistic to expect the 860 to be a re-branded 780 Ti. Maybe, if things go well, the 870 would be the re-branded 780 Ti and we will only get the 880 and 880 Ti as new chips. Hopefully not, but I understand that neither NVIDIA nor AMD trusts TSMC to deliver a large IC in commercial quantities, so this generation would be like "in the meantime, while we wait".


----------

