# NVIDIA to Launch GeForce GTX 880 in September



## btarunr (Aug 1, 2014)

NVIDIA is expected to unveil its next-generation high-end graphics card, the GeForce GTX 880, in September 2014. The company could tease its upcoming products at Gamescom. It is reportedly holding a huge media event in California this September, where it's widely expected to discuss high-end graphics cards based on the "Maxwell" architecture. Much like AMD's Hawaii press event, which preceded the actual launch of its R9 290 series by several weeks, NVIDIA's event is expected to be a paper launch of one or more graphics cards based on its GM204 silicon, with market availability expected in time for Holiday 2014 sales.

The GM204 is expected to be NVIDIA's next workhorse chip, which will be marketed as high-end in the GeForce GTX 800 series, and performance-segment in the following GTX 900 series, much like how the company milked its "Kepler" based GK104 across two series. It's expected to be built on the existing 28 nm process, although one cannot rule out an optical shrink to 20 nm later (as NVIDIA did when it shrank the G92 from 65 nm to 55 nm). The GTX 880 reportedly features around 3,200 CUDA cores, and 4 GB of GDDR5 memory.

*View at TechPowerUp Main Site*


----------



## RejZoR (Aug 1, 2014)

I'm interested. Mostly because current gen high end hardly offers any significant upgrade over my current card. I just hope they won't cost 500+ EUR like last gen did. It's ridiculous.


----------



## Scatler (Aug 1, 2014)

RejZoR said:


> I'm interested. Mostly because current gen high end hardly offers any significant upgrade over my current card. I just hope they won't cost 500+ EUR like last gen did. It's ridiculous.


As if Nvidia ever released top-end GPUs at reasonable prices. When you have "top" performance, you ask top dollar for it. /sarcasm


----------



## RejZoR (Aug 1, 2014)

Well, I'm not asking for a Titan. Just a high-end card (GTX 880). Not interested in the top end (Titan).


----------



## NC37 (Aug 1, 2014)

204 for sure being milked...ok nothing to see here folks. 

AMD...surprise me...I'm waiting for it...come on...you can do it...


----------



## arbiter (Aug 1, 2014)

Scatler said:


> As if Nvidia ever released top-end GPUs at reasonable prices. When you have "top" performance, you ask top dollar for it. /sarcasm




Um, the GTX 680 was $50 cheaper than AMD's 7970 when they released it, so take the AMD fanboy crap somewhere else.


----------



## Lionheart (Aug 1, 2014)

arbiter said:


> Um, the GTX 680 was $50 cheaper than AMD's 7970 when they released it, so take the AMD fanboy crap somewhere else.



What!


----------



## arbiter (Aug 1, 2014)

Lionheart said:


> What!


The launch price of the 680 was $500; at the time, AMD was shoving the 7970 at people for $550.  Given the lack of any info from AMD on what they are working on, I doubt they have anything close to being ready before Q1 2015, unless they rush it out the door with a crappy cooler on it like they did last time.


----------



## RejZoR (Aug 1, 2014)

arbiter said:


> Um, the GTX 680 was $50 cheaper than AMD's 7970 when they released it, so take the AMD fanboy crap somewhere else.



I find that very unlikely, since every card that has carried a GTX name has been more expensive than the same-grade AMD card. That's why I haven't touched a single GeForce since then. Maybe it was 500 bucks on paper launch, but in retail, there is no way it was cheaper.

Though I have to see what AMD will do with the R9-390X...


----------



## ZoneDymo (Aug 1, 2014)

arbiter said:


> Um, the GTX 680 was $50 cheaper than AMD's 7970 when they released it, so take the AMD fanboy crap somewhere else.



Slow down there, cowboy. Nowhere did he say he found a $500(+) price tag OK for the competition either, so take your fanboy crap somewhere else.


----------



## HumanSmoke (Aug 1, 2014)

RejZoR said:


> I find that very unlikely, since every card that has carried a GTX name has been more expensive than the same-grade AMD card. That's why I haven't touched a single GeForce since then.


The GTX 680 was initially in short supply, but even a cursory check of the review thread shows that cards were available at MSRP.


RejZoR said:


> Maybe it was 500 bucks on paper launch, but in retail, there is no way it was cheaper.


Unless you're playing the "where I am..." card, I think you'll find that isn't the case


----------



## alwayssts (Aug 1, 2014)

RejZoR said:


> Well, I'm not asking for a Titan. Just a high-end card (GTX 880). Not interested in the top end (Titan).



I don't mean to state the obvious, but when NVIDIA, if not both companies, retain an efficient 16 ROP chip for around $100-150 (GM107/Bonaire), an efficient 32 ROP chip for $200-300 (GK104/Tonga or Tahiti), and a comparably efficient 48 ROP chip for $400-600 (Hawaii/GK110), where do you expect these (probably efficient 64 ROP parts at low clocks, with somewhere around 3,840 [AMD] to 4,000 [NVIDIA] units) to be priced?  I personally can't fathom the current high end dropping much under $400-500 until there is a new process on the horizon.  These coming super-high-end 28 nm chips will probably be very expensive in whatever form they take, even before you consider the increasingly ridiculous prices NVIDIA charges on a high-end card (at least until they have competition).

Going by the reasonable assumption that these chips will be shooting for ~4x the spec of the PS4 in some form (think ~1 GHz x 3840 SP/60 CUs, or 24 SMM/3072 SP + 768 SFU), you could expect them to be around a third faster than a well-cooled stock Hawaii. But really, how much clock headroom could these realistically have versus something like an overclocked 780/290X while staying within 300 W?  Suddenly any value equation drops to somewhere around 20% or so more performance for whatever massive premium, granted the distinct possibility that a larger frame buffer (8 GB), useful for 4K, will largely be held hostage by such parts to justify them.

It's all cool I guess, if that's your jam. The point of diminishing returns from 32 ROP parts to GK110/Hawaii is one thing, but this will likely be another.  I don't know about you, but I'll just buy the most efficient thing with similar performance on the next process, thanks.    I think it may be wise to consider doing the same, if not finding a couple of the unicorns that are a decently priced 780/Hawaii with a larger buffer.


----------



## arbiter (Aug 1, 2014)

ZoneDymo said:


> Slow down there, cowboy. Nowhere did he say he found a $500(+) price tag OK for the competition either, so take your fanboy crap somewhere else.



He said "As if Nvidia ever released top-end GPUs at reasonable prices." It was cheaper than AMD's top card. The GTX 680 did stay at about the $500 mark even with the short supply. It wasn't like AMD's prices during the coin-mining price gouging. Newegg kept their prices around $500-520 for most models, depending on brand. Amazon is the only one I have ever seen blatantly gouge people like that.


----------



## MxPhenom 216 (Aug 1, 2014)

RejZoR said:


> I find that very unlikely, since every card that has carried a GTX name has been more expensive than the same-grade AMD card. That's why I haven't touched a single GeForce since then. Maybe it was 500 bucks on paper launch, but in retail, there is no way it was cheaper.
> 
> Though I have to see what AMD will do with the R9-390X...


 
Jesus, where have you been? NVIDIA released their mid-range silicon GPU, GK104, since GK110 wasn't working out, and it ended up beating AMD's top-tier 7970, which was being sold for $550 at the time. NVIDIA undercut them with the 680 at $499. I would not have gotten the 680 I got at the time if it wasn't cheaper than the 7970. Then when NVIDIA got GK110 out the door, I bought the 780.


----------



## TheoneandonlyMrK (Aug 1, 2014)

Some are very quick with the fanboy crap here; it's only PR news, let's not get too excited, as the cards aren't out yet.
I can't see a GTX 880 ending up less than $500, but it looks like it might have reasonable performance for the price. I'm not convinced by the shader count either; IMHO that would be the full-bin shader count, and it's rare for a GPU to be built using the max bin straight away, but I suppose if it's 28 nm it is a mature process.


----------



## hardcore_gamer (Aug 1, 2014)




----------



## Fluffmeister (Aug 1, 2014)

Looking forward to it, should be a nice upgrade from my awesome little GTX 670.


----------



## 64K (Aug 1, 2014)

Yes, you could get a GTX 680 for $500 after launch. I started following the price as soon as they showed up on Amazon and about a month later for some strange reason they put the EVGA GTX 680 on sale for $400 for two days. I think it was a mistake but I got one at that price. It wasn't a high end Kepler then nor is it now. Yes, it was a damn good GPU then and it still is pretty solid now but it's after all a GK104 not a GK110.


----------



## rtwjunkie (Aug 1, 2014)

I'm definitely not excited.  It's the same crap they did with the 680, selling a mid-line chip (GK104) as their top end.  Not until the 780 did Nvidia actually release the top-end chip (GK110) as the top-end GPU.  I'm not gonna buy an 880 with a GM204 marketed as top of the line, when we all know that GM210 will actually be the top-of-the-line 980.  I'll keep my 780 and wait till the 980.  I'm at 1080p, so no loss waiting an extra year.


----------



## the54thvoid (Aug 1, 2014)

If true....
To be fair, they have every right to sell this as the high end 8 series chip, if performance is there or thereabouts. It's the space in time until 980 comes out that makes it relevant. If 880 is their best card for the best part of a year, then that's cool but if they release another Titan in Spring, that will be a pisser.


----------



## rtwjunkie (Aug 1, 2014)

I see your point, but it feels like too much bait-and-switch to me, because a year later Nvidia will say "we were just kidding... the 880 isn't top-of-the-line Maxwell, the 980 is, with a proper GM210."  In the meantime, they will demand top dollar for 880s, with their middle-of-the-pack chip (as far as Maxwell goes).  Sure, GM204 may outperform GK110 in GTX 880 form, but that doesn't make it worth top-tier money.  They merely need to be honest and say "this is our mid-grade chip, so we won't charge you your first-born son to buy it, but it WILL outperform GK110."  Make sense?


----------



## 64K (Aug 1, 2014)

The author of the article that the OP sourced speculates that the price of the GTX 880 may be set at between $400 and $450. If it is and it does outperform the GTX 780 non Ti then that would be a good price. If it outperforms the GTX 780 Ti then I suspect it will be priced more than $500 but who knows. It's all speculation at this point. The GTX 680, even though it was a mid range Kepler GPU, outperformed the Fermi flagship refresh GTX 580 so it's possible the GTX 880 may outperform the Kepler flagship GTX 780 Ti.


----------



## rtwjunkie (Aug 1, 2014)

@64K: yep, you are probably right about it outperforming 780, and I expect it to, like the 680 outperformed 580.  If in fact it only prices between $400 and $450, then it is in effect admitting their GM204 equipped card is not the top of the line in the Maxwell series, and I am fine with that.  Still waiting for 980 tho!


----------



## LAN_deRf_HA (Aug 1, 2014)

Just how far off is the die shrink? If they have time, history will repeat: 880, T2, 980, because this has proven incredibly profitable for them. But if the die shrinks come soon, they might not have time for that whole cycle.


----------



## LightningJR (Aug 1, 2014)

I remember clearly that when I bought my 670 at launch it was $399. I wasn't going to be stupid enough to pay an extra $100 for 3-5% more performance. That was my thinking at the time. There's a possibility it'll be ~$500, but I doubt it, since the 780 Ti is so expensive and it'll eclipse that in performance for sure. NVIDIA won't cut the prices of those cards by 50% to keep its new card from ruining the others' chances of selling.


----------



## Roel (Aug 1, 2014)

I'm getting tired of hearing that the 880 will be mid-range. It's likely going to be the fastest card available at launch, which defines top-end. It doesn't matter if they could have released something faster, or if the 980 will be faster next year; that's how technology works: there will always be something faster next year.


----------



## micropage7 (Aug 1, 2014)

When they offer a better card with lower power consumption and better performance at a friendly price.


----------



## MxPhenom 216 (Aug 1, 2014)

There has been talk that Nvidia could skip 20nm altogether and go to 16nm.


----------



## rtwjunkie (Aug 1, 2014)

Roel said:


> I'm getting tired of hearing that the 880 will be mid-range. It's likely going to be the fastest card available at launch, which defines top-end. It doesn't matter if they could have released something faster, or if the 980 will be faster next year; that's how technology works: there will always be something faster next year.


 
You are correct, it will be the top end of the 8 series.  But it's not the top-end Maxwell.  GM204 is the mid-range chip for Maxwell.  It's the same state of affairs that happened with the GTX 680.  Most people who didn't know anything about the die simply bought 680s thinking that was the top-of-the-line Kepler, unaware that Nvidia had held back that chip until the next lineup.  So, Nvidia was able to perpetrate the illusion to the uninitiated that this was top-tier, when in reality, all they were buying was an unlocked 660 with good die selection.


----------



## MxPhenom 216 (Aug 1, 2014)

theoneandonlymrk said:


> Some are very quick with the fanboy crap here; it's only PR news, let's not get too excited, as the cards aren't out yet.
> I can't see a GTX 880 ending up less than $500, but it looks like it might have reasonable performance for the price. I'm not convinced by the shader count either; IMHO that would be the full-bin shader count, and it's rare for a GPU to be built using the max bin straight away, but I suppose if it's 28 nm it is a mature process.


 
How about we all stop with calling people out on their alleged fanboyism and try have a civil conversation?


----------



## THE_EGG (Aug 1, 2014)

If GTX 880 pricing starts around the $750-800 AUD mark here in Australia (around entry-level 780 Ti prices), I think it will sell well. The current and increasing demand for adequate 4K performance will hopefully be enough to get people to buy a new video card, and hopefully drop prices a bit (after all, lower prices normally equal higher demand).

I find it interesting that the launch date seems to get earlier and earlier as time goes on. A few months ago it was predicted the 880 would launch Q1 2015, then it was slated for a December release, then October and now September. Although it is a paper launch so as the article says, we probably won't see availability till the end of the year.


----------



## Roel (Aug 1, 2014)

rtwjunkie said:


> You are correct, it will be the top end of the 8 series.  But it's not the top-end Maxwell.  GM204 is the mid-range chip for Maxwell.  It's the same state of affairs that happened with the GTX 680.  Most people who didn't know anything about the die simply bought 680s thinking that was the top-of-the-line Kepler, unaware that Nvidia had held back that chip until the next lineup.  So, Nvidia was able to perpetrate the illusion to the uninitiated that this was top-tier, when in reality, all they were buying was an unlocked 660 with good die selection.


Yes, it's the same story as with the GTX 680. It really depends on how much you're in need of an upgrade. If you wait for the 980, then you will be stuck with a card that is slower than the 880 for another year. However, if you don't want to wait that long, then it's probably a better choice to get the 880 instead of going for the "old" 780 Ti (for those that don't have the 780 Ti).


----------



## rtwjunkie (Aug 1, 2014)

Roel said:


> Yes, it's the same story as with the GTX 680. It really depends on how much you're in need of an upgrade. If you wait for the 980, then you will be stuck with a card that is slower than the 880 for another year. However, if you don't want to wait that long, then it's probably a better choice to get the 880 instead of going for the "old" 780 Ti (for those that don't have the 780 Ti).


 
True, true!  I agree with your assessment.


----------



## Casecutter (Aug 1, 2014)

I don't know, but it just feels like the GM107 all over again... a smaller die size and efficiency as the guiding principles.

The 880 will be above the 780 (perhaps getting up into 290X range) while not encroaching on the 780 Ti.  If really nice, Nvidia may price it at $450, because this die is perhaps smaller than a GK104, while nowhere near a GK110, which Nvidia won't/can't sell consistently on cards discounted 10-15% below the $500 MSRP.

So we all know how this goes… a "tech paper teaser" in September, a mid-October launch with the normal reference-brigade cards, and AIB customs by mid-to-end November for the customary 10-15% upcharge… do the math.  Other than efficiency, there'd be no real justification to run out and get this over the 780.

This just provides a path to facilitate the EOL of the GK104 (760/770) while increasing margins, so they remain price-competitive with AMD, while permitting GK110 (2304 SP) stock to dwindle down. I could see them producing a cost-effective Quadro part to finish those off, rather than discounting them to gamers.


----------



## PatoRodrigues (Aug 1, 2014)

I wonder... how much has the increase in CUDA cores and die shrinks actually mattered for gamers since Kepler came out? I can't see a reason to upgrade from a 780 or an R9 290 unless it surprises the s**t out of everybody with 2x the performance of a 780 in a single GPU, with better power efficiency and only +$50 in price.


----------



## MxPhenom 216 (Aug 1, 2014)

PatoRodrigues said:


> I wonder... how much has the increase in CUDA cores and die shrinks actually mattered for gamers since Kepler came out? I can't see a reason to upgrade from a 780 or an R9 290 unless it surprises the s**t out of everybody with 2x the performance of a 780 in a single GPU, with better power efficiency and only +$50 in price.



Now, the focus is getting powerful enough GPUs to accelerate 4k. That is going to be Nvidia and AMD's focus for a while. More memory, bandwidth and overall GPU grunt to make use of the increased memory. 

I can definitely see Nvidia going right to 16nm with the rate 20nm is going, and releasing GM210 (big die Maxwell) on 16nm. Actually, this is what I hope for mostly.


----------



## rtwjunkie (Aug 1, 2014)

MxPhenom 216 said:


> I can definitely see Nvidia going right to 16nm with the rate 20nm is going, and releasing GM210 (big die Maxwell) on 16nm. Actually, this is what I hope for mostly.


 
Now THAT would probably be a big increase, and one well worth waiting for!!


----------



## MxPhenom 216 (Aug 1, 2014)

rtwjunkie said:


> Now THAT would probably be a big increase, and one well worth waiting for!!


 
All dependent on TSMC really.


----------



## Casecutter (Aug 1, 2014)

GK106 221mm2 / 980 Cudas > GM107 148mm2 / 640 Cudas.

By this they could reduce the die 25-30% and the CUDA core count almost 35% (if it scales the same), and it should/could still be at GTX 770 performance.  I can't see them delivering anything with 3,200 CUDA cores; if it's anything over 2,000 CUDA cores, or a die bigger than 300mm2, I'll be surprised and perhaps a little disillusioned.


----------



## bpgt64 (Aug 1, 2014)

I really want to see a single GPU solution that can drive a 4k display solo.


----------



## Hilux SSRG (Aug 1, 2014)

MxPhenom 216 said:


> There has been talk that Nvidia could skip 20nm altogether and go to 16nm.



I've read such talk online, but money talks at the end of the day.  I don't see that happening.  16nm chips will be more expensive than 20nm, and AMD/NVIDIA won't jump to squash their profit margins.  Heck, both are still releasing 28nm parts in a few months rather than 20nm, not that they had a choice.


----------



## HumanSmoke (Aug 1, 2014)

Casecutter said:


> I don't know but it just feels like the GM107 all over... *A smaller die size* and efficiency the guiding principles.


Unlikely. GPU architectures have a design lead-in time measured in years. It has been pretty much established that TSMC's process node cadence is now out of step with both AMD's and Nvidia's product cycles (20nm late/CLN20G cancelled, 16nm FEOL+20nm BEOL ahead of schedule). Just as with TSMC's cancelled 32nm node, both vendors will likely produce a kludge: 28nm ports of designs that were originally intended for 20nm/20+16nm.


Casecutter said:


> The 880 will be above the 780 (perhaps getting up into 290X range) while not encroaching on the 780 Ti.  If really nice, Nvidia may price it at $450, because this die is perhaps smaller than a GK104, while nowhere near a GK110, which Nvidia won't/can't sell consistently on cards discounted 10-15% below the $500 MSRP.


Then again, if the GM 204 cards are 256-bit/4GB, then it is quite possible to market GTX 880/870 and GTX 780/780Ti with the same basic performance alongside each other, especially if the 384-bit cards are aimed at high res gaming and come with a 6GB framebuffer. It wouldn't surprise me to see the 3GB GTX 780 EOL'ed, and Nvidia sanction 6GB for use with the 780 Ti


Casecutter said:


> So we all know how this goes… a "tech paper teaser" in September, a mid-October launch with the normal reference-brigade cards, and AIB customs by mid-to-end November


Really?
GTX 580 - Vendor custom boards available at launch
GTX 680 - Vendor custom boards available at launch
GTX 770 - Vendor custom boards available at launch
GTX 780 - Vendor custom boards available at launch
GTX 780 Ti - Vendor custom boards available at launch
First Maxwell cards - Vendor custom boards available at launch

If you're looking at historical precedent, the only cards that aren't available as vendor custom are dual GPU cards and cards not included in Nvidia's series-market segment numerical naming convention ( GTX Titan/Titan Black, Tesla, Quadro)


Casecutter said:


> for the customary 10-15% charge… do the math


You buy online from Tajikistan ?
Gigabyte Windforce OC - same price as reference (reviewed by W1zzard on launch day)
EVGA Superclocked ACX - $10 more than reference (1.5% more) (reviewed by W1zzard on launch day)


Casecutter said:


> Other than efficiency, there'd be no real justification to run out and get this over the 780.


So, just recapping: this card, in your opinion, doesn't have a market even though the specifications aren't known, the price isn't known, its performance isn't known, its actual entry date isn't known, and its feature set isn't known, because it conflicts with a card which may or may not be EOL'ed at the time of launch (either in its entirety or as a $500 3GB iteration).


Casecutter said:


> I couldn't see them delivering anything with 3,200 CUDA cores, if anything over 2,000 Cudas or a die bigger than 200mm2, I'll be surprised and perhaps a little disillusioned.


In what world is a performance GPU only 35% larger than the same vendor's low-end chip? If GM107 is 148mm² packing 640 cores, how the **** is GM204 supposed to pack anything close to 2000 into 200mm²?
I forgot, the actual mathematics are unimportant... your personal disappointment is what you're trying to get across by setting an unrealistic target. Well, for my part, I'll be disillusioned if Intel's next desktop CPU doesn't have a thermal envelope of 2 watts and AMD's next flagship GPU doesn't stay under 35C under full gaming load. When you stock up on Xanax in preparation for this graphics Armageddon, grab me some.
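That scaling objection can be made concrete with a quick back-of-envelope check. This is a rough sketch only: it assumes GM107's core density carries over unchanged, whereas a larger die would actually do somewhat better, since fixed-size uncore (PCIe interface, display logic, etc.) is amortized over more area.

```python
# Core-density sanity check using the GM107 figures quoted above.
gm107_cores = 640
gm107_area = 148.0                    # die area in mm²
density = gm107_cores / gm107_area    # ≈ 4.3 CUDA cores per mm²

# Die area needed to fit 2,000 cores at that same density:
needed = 2000 / density
print(f"{needed:.1f} mm²")            # ≈ 462.5 mm², far beyond 200 mm²
```

Even granting Maxwell better density at scale, 2,000 cores in 200mm² would require more than doubling GM107's cores-per-mm², which is the point being made.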


Hilux SSRG said:


> I've read such talk online, but money talks at the end of the day.  I don't see that happening.  16nm chips will be more expensive than 20nm, and AMD/NVIDIA won't jump to squash their profit margins.  Heck, both are still releasing 28nm parts in a few months rather than 20nm, not that they had a choice.


Depends upon whether the die shrink outweighs the wafer cost, as it usually does. 16nmFF (20nm BEOL + 16nm FEOL) is supposed to bring a ~15% reduction in die size over the same design rendered at 20nm. A 15% reduction does not equate to 15% more die candidates per wafer; the gain depends upon the actual die size (you could try inputting various sizes into a die-per-wafer calculator to see the variances). Latest estimates put 16nmFF at ~21% more expensive per wafer than 20nm. Even with the known parameters, you would still need to factor in what kind of deal each vendor has in place regarding yield. The usual arrangement is per-wafer with guaranteed minimum yields, or per viable die.
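The die-candidates-per-wafer point can be sketched with the standard gross-die approximation (illustrative values only, not actual GM204 or 20nm figures; assumes a 300 mm wafer and ignores defect yield):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross die candidates per wafer, via the common approximation:
    dies ≈ π·(d/2)²/A − π·d/√(2A), where the second term accounts for
    partial dies lost around the wafer edge."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# A 15% area shrink applied to a hypothetical 400 mm² die:
big, small = dies_per_wafer(400.0), dies_per_wafer(400.0 * 0.85)
print(big, small, f"{small / big - 1:.0%}")   # → 143 171 20%
```

Note that the ~15% area reduction yields roughly 20% more candidates here; at other die sizes the gain differs, which is exactly why the economics depend on the actual die size.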


----------



## arbiter (Aug 1, 2014)

micropage7 said:


> When they offer a better card with lower power consumption and better performance at a friendly price.



Better performance and lower power consumption cost money in R&D, hence the increased price. AMD tends to be on the other end of that scale: a slower part, but they bump the clocks up to match the competition, which in effect eats more power.



HumanSmoke said:


> You buy online from Tajikistan ?
> Gigabyte Windforce OC - same price as reference (reviewed by W1zzard on launch day)
> EVGA Superclocked ACX - $10 more than reference (1.5% more) (reviewed by W1zzard on launch day)



Yeah, I bought a GTX 670 Gigabyte Windforce card when they were released ($399) and it was the same price as the reference card, but with one of the best air coolers on the market. Most gaming never topped 65C even OC'ed to 1275 MHz. Only a few games pushed it to 70-75C.


----------



## Casecutter (Aug 1, 2014)

I'm just looking at it as a GK104 replacement, in step with what they provided with Maxwell over Kepler. Isn't this what we all understand Nvidia is looking at?

GK106 @ 221mm2 Vs GM107 @ 148mm2 = ~35% reduction
GK104 @ 294mm2  - 30% = 205mm2
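For reference, that projection worked through as a quick script (a sketch of the post's own arithmetic; note the measured GK106→GM107 reduction actually comes to ~33%, between the 30% and 35% figures used above, so the projected figure lands near 197 mm² rather than 205 mm²):

```python
gk106_area, gm107_area = 221.0, 148.0        # mm², Kepler vs. first Maxwell
reduction = 1.0 - gm107_area / gk106_area    # observed area reduction ≈ 33%
gk104_area = 294.0
projected = gk104_area * (1.0 - reduction)   # same shrink applied to GK104
print(f"reduction: {reduction:.1%}, projected die: {projected:.0f} mm²")
# → reduction: 33.0%, projected die: 197 mm²
```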


----------



## xorbe (Aug 1, 2014)

RejZoR said:


> Well, I'm not asking for a Titan. Just a high-end card (GTX 880). Not interested in the top end (Titan).



Now would that be the GTX880SE, GTX880, GTX880Ti, or GTX880Ti Black that you want?


----------



## HumanSmoke (Aug 2, 2014)

Casecutter said:


> I'm just looking at it as a GK104 replacement, in step with what they provided with Maxwell over Kepler. Isn't this what we all understand Nvidia is looking at?
> GK106 @ 221mm2 Vs GM107 @ 148mm2 = ~35% reduction
> GK104 @ 294mm2  - 30% = 205mm2


Well, that's some deductive logic right there 
GK 106 in its fully enabled form (the GTX 660) has *23% more* performance than a fully enabled GM 107 (GTX 750 Ti). You expect GM 204 to be *20% slower* than the GPU it is replacing in the product stack (GK 104) ???


----------



## Prima.Vera (Aug 2, 2014)

bpgt64 said:


> I really want to see a single GPU solution that can drive a 4k display solo.


Probably in 2016 or 2017. The tech has slowed down too much and prices have gotten ridiculously higher, so my bet is 2018 for an affordable one.


----------



## a_ump (Aug 2, 2014)

HumanSmoke said:


> Well, that's some deductive logic right there
> GK 106 in its fully enabled form (the GTX 660) has *23% more* performance than a fully enabled GM 107 (GTX 750 Ti). You expect GM 204 to be *20% slower* than the GPU it is replacing in the product stack (GK 104) ???



If all those numbers are true, it would mean that Maxwell is ~15% more efficient with die space than Kepler.  Of course that was just their first Maxwell chip; I'm sure (hope) they have made many more improvements to the Maxwell line since the GTX 750 Ti release. 
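Working the quoted numbers through (a crude metric, since it ignores the 192-bit vs. 128-bit memory-bus difference), the straight performance-per-area ratio comes out a bit above the ~15% figure:

```python
gk106_area, gm107_area = 221.0, 148.0   # die areas in mm²
perf_660, perf_750ti = 1.23, 1.00       # relative performance (GTX 750 Ti = 1.0)
kepler_eff = perf_660 / gk106_area      # performance per mm², GK106
maxwell_eff = perf_750ti / gm107_area   # performance per mm², GM107
print(f"Maxwell advantage: {maxwell_eff / kepler_eff - 1:.0%}")   # → 21%
```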

@HumanSmoke : Very aggressive with statistics you are lol


----------



## HumanSmoke (Aug 2, 2014)

a_ump said:


> If all those numbers are true, it would mean that Maxwell is ~15% more efficient with die space than Kepler.


The better comparison would be with GK 107 (118mm²)  since it, like GM 107, features a 128-bit memory I/O.  GK 106 not only is 192-bit (more of the uncore devoted to memory interfaces), but sits one rung higher in the respective GPU hierarchy.


a_ump said:


> Of course that was just their first Maxwell chip; I'm sure (hope) they have made many more improvements to the Maxwell line since the GTX 750 Ti release.


Yes. Regardless of the chips already known, it is a sure bet that the ratios change with incoming parts. Cache size and structure will likely play a large part in performance-per-watt, and of course, some aspects of the smaller GPU don't need scaling up for larger GPUs - the PCI-Express interface, command processor, and video encode/transcode engines are fixed size for instance. 


a_ump said:


> @HumanSmoke : Very aggressive with statistics you are lol


Well, if you stay with the facts and known (verifiable) numbers it's usually a better base to work from than pulling supposition out of thin air based on a wish list - or in some cases here, the most pessimistic scenario imaginable. The downside is that the eventual products generally conform to the laws of physics and expectation - which can be not quite as exciting I guess if you're of the "school of wild guessing" method of prediction- although that method generally leads people to be desperately disappointed (unfounded optimism for the vendor they love) to openly resentful (unfounded pessimism for the vendor they dislike). All a bit bipolar from my PoV.


----------



## jagd (Aug 2, 2014)

Nvidia will use whatever is available to them at TSMC (TSMC, that is, if they don't change their foundry). There aren't many foundry fabs, and switching to a new process (28 nm to 20 nm to 16 nm, etc.) costs more with every step and brings its own difficulties and problems.
If you look at TSMC's 20 nm struggles, you'll see what I mean. Nvidia can't simply decide to skip to 16 nm: it will take more time, getting 16 nm installed in the fabs looks a long way off, and Nvidia would be stuck at 28 nm with a power and price disadvantage.



MxPhenom 216 said:


> There has been talk that Nvidia could skip 20nm altogether and go to 16nm.


----------



## 64K (Aug 2, 2014)

jagd said:


> Nvidia will use whatever is available to them at TSMC (TSMC, that is, if they don't change their foundry). There aren't many foundry fabs, and switching to a new process (28 nm to 20 nm to 16 nm, etc.) costs more with every step and brings its own difficulties and problems.
> If you look at TSMC's 20 nm struggles, you'll see what I mean. Nvidia can't simply decide to skip to 16 nm: it will take more time, getting 16 nm installed in the fabs looks a long way off, and Nvidia would be stuck at 28 nm with a power and price disadvantage.



Good points. I can't see Nvidia being able to stretch the 28nm Maxwells out for another year or more after the release of the GTX 880 either, and the engineering for the 20nm Maxwell has already been paid for by Nvidia, so they will probably want to release a line of 20nm Maxwells to recoup their investment.


----------



## Steevo (Aug 2, 2014)

We are at the verge of what silicon can do for us. Until we make a breakthrough in graphene or another substance to replace it, we are reaching the limit of how much performance we can get without going bigger on power use and heat dissipation. Essentially, we are close to the buy-a-Prius, Evo, or Ferrari of performance.


----------



## GhostRyder (Aug 2, 2014)

I think many of y'all are giving too little credit here... We have already been well versed in what the current process is capable of with the GTX 750 Ti.  While that card is nothing amazing performance-wise, it's the power consumption difference, and the performance with fewer cores, that matter.  Comparing the 640 cores in the 750 Ti versus the 768 in the 650 Ti, while the 750 Ti pretty much blows the 650 Ti out of the water, shows that you can improve on current processes.  Why should we be so upset that we're not dropping down to a smaller process now, and be focused on disappointment before anything is even shown to the public?

Even using basic logic: by the standard that the 750 Ti, using fewer cores than the 650 Ti, was able to outperform the older-generation chip while using less power, we can assume that if the GTX 880 has the same number of cores, or even ~15% fewer, performance would still be above its predecessor's.  Of course that's assuming core clocks remain roughly the same; depending, it could end up showing higher clock and memory speeds (of course, that's just an assumption).

We also are basing so much off of rumors, speculations, and possible release dates.
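A rough way to sanity-check that core-count comparison is raw shader throughput (cores × 2 FLOPs per cycle × clock). This is only an illustrative sketch, assuming the published reference base clocks (928 MHz for the GTX 650 Ti, 1020 MHz for the GTX 750 Ti); real game performance depends on far more than paper FLOPS:

```python
# Peak single-precision throughput: cores * 2 FLOPs/cycle * clock (MHz -> GFLOPS).
def peak_fp32_gflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * 2 * clock_mhz / 1000.0

gtx_650_ti = peak_fp32_gflops(768, 928)   # Kepler GK106, ~1425 GFLOPS on paper
gtx_750_ti = peak_fp32_gflops(640, 1020)  # Maxwell GM107, ~1306 GFLOPS on paper

# GM107 actually has *lower* paper throughput, yet wins in games --
# the gain comes from per-core efficiency, not raw FLOPS.
print(f"650 Ti: {gtx_650_ti:.0f} GFLOPS, 750 Ti: {gtx_750_ti:.0f} GFLOPS")
```

On paper the 750 Ti is slightly behind, which makes its real-world lead all the more telling: scaling those more efficient Maxwell cores up to the rumored ~3,200 in GM204 is where the expected gains would come from.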


----------



## mcraygsx (Aug 2, 2014)

rtwjunkie said:


> I'm definitely not excited.  It's the same crap they did with the 680, selling a mid-line chip (GK104) as their top end.  Not until the 780 did Nvidia actually release the top-end chip (GK110) as the top-end GPU.  I'm not gonna buy an 880 with a GM204 marketed as top of the line, when we all know the GM210 will actually be the top-of-the-line 980.  I'll keep my 780 and wait for the 980.  I'm at 1080p, so no loss waiting an extra year.




+1 OP

Exactly what this guy said. They could have easily released the GK110, but instead they were selling the 680 at a high-end price. We should know by now what NVidia does, given their past.


----------



## arbiter (Aug 2, 2014)

jagd said:


> Nvidia will use whatever is available to them at TSMC (assuming they don't change foundries). There aren't many foundry fabs, and switching to a new process (28 nm to 20 nm to 16 nm, etc.) costs more with every step and brings added difficulty and problems.
> If you look at TSMC's 20 nm struggles you'll see what I mean. Nvidia can't simply decide to skip to 16 nm, for a simple reason: it would take more time. Getting 16 nm installed in the fabs looks a long way off, and Nvidia would be stuck at 28 nm with a power and price disadvantage.



One thing you forget: AMD will be stuck on the same process if they release a new GPU soon as well. They would face the same issue, so it's not just a problem for Nvidia; it could be a bigger problem for AMD.


----------



## 64K (Aug 2, 2014)

mcraygsx said:


> +1 OP
> 
> Exactly what this guy said. They could have easily released the GK110, but instead they were selling the 680 at a high-end price. We should know by now what NVidia does, given their past.



I disagree. TSMC failed with the 28nm process and Nvidia had to pay per wafer for them. This resulted in the gimped GTX 780. The 28nm process is much more refined now so we should see plenty of un-gimped GM210 chips a few months after the GTX 880 actually becomes available for purchase.

Edit: I'm not defending their price structure though. It's gone balls-up.


----------



## mcraygsx (Aug 2, 2014)

64K said:


> I disagree. TSMC failed with the 28nm process and Nvidia had to pay per wafer for them. This resulted in the gimped GTX 780. The 28nm process is much more refined now so we should see plenty of un-gimped GM210 chips a few months after the GTX 880 actually becomes available for purchase.
> 
> Edit: I'm not defending their price structure though. It's gone balls-up.



I hope you are right, but considering what we have all seen over the past several years of how NVidia has been treating us, it seems too good to be true that NVidia won't sell the 880 labeled as a premium product. Of course it will have some advantages over the GeForce 780, but time will tell.


----------



## 64K (Aug 2, 2014)

mcraygsx said:


> I hope you are right, but considering what we have all seen over the past several years of how NVidia has been treating us, it seems too good to be true that NVidia won't sell the 880 labeled as a premium product. Of course it will have some advantages over the GeForce 780, but time will tell.



Yes, and that's the downside.

The GTX 880 should hammer on the GTX 780 and I think it will. It may roll right over the GTX 780 Ti in performance. Time will tell.

If anyone needs to upgrade their GPU in the next couple of months and wants to go Nvidia then the GTX 880 priced at around $425 will probably be a good deal. Otherwise wait for the 20nm Maxwells.


----------



## arbiter (Aug 2, 2014)

64K said:


> Yes, and that's the downside.
> 
> The GTX 880 should hammer on the GTX 780 and I think it will. It may roll right over the GTX 780 Ti in performance. Time will tell.
> 
> If anyone needs to upgrade their GPU in the next couple of months and wants to go Nvidia then the GTX 880 priced at around $425 will probably be a good deal. Otherwise wait for the 20nm Maxwells.



Likely $500 at the lowest, probably $600 being a new GPU. If it has the ~30% gain like rumored, it would be within that price range.


----------



## Fluffmeister (Aug 2, 2014)

mcraygsx said:


> +1 OP
> 
> Exactly what this guy said. They could have easily released the GK110, but instead they were selling the 680 at a high-end price. We should know by now what NVidia does, given their past.



This whole "nVidia are the bad guys" thing is just nonsense. Again, the GK104-based 680 was more than a match for the 7970; they literally had zero reason to release a consumer-oriented card based on the GK110 at that time [regardless of whether it was ready or not].

I guess it would have been funny to see a GK110-powered 680 vs the underclocked Tahiti 7970. Embarrassing... but funny.


----------



## GhostRyder (Aug 2, 2014)

Fluffmeister said:


> This whole "nVidia are the bad guys" thing is just nonsense. Again, the GK104-based 680 was more than a match for the 7970; they literally had zero reason to release a consumer-oriented card based on the GK110 at that time [regardless of whether it was ready or not].
> 
> I guess it would have been funny to see a GK110-powered 680 vs the underclocked Tahiti 7970. Embarrassing... but funny.


Wow dude, I cannot believe you really believe that.  Do you really think Nvidia's strategy was to release a GPU that was merely even, instead of releasing something way more powerful?  If Nvidia had had the GK110 ready, they would have happily released it, and at a fitting price point (probably close to the $1k mark).

This is the same strategy that has been used before, so I do not get why people are so shocked.  Compare Fermi to Kepler in terms of releases and the chips used and you will see the strategy remains the same.  Each cycle follows a similar pattern; you could call it a tick-tock cycle.  They release the introduction to a new architecture, show off how well it performs, gather data, and then release the full-powered version of the architecture the next cycle.

VLIW (the Terascale series) from ATI also followed a similar pattern.  This is no different from the strategies we're all used to (well, not much at least), and we can compare the GCN architecture the same way.

Also, anyone assuming the GTX 880 is going to be weaker than the 780 Ti is going to be either disappointed or impressed (depending on your outlook).  It would not make much sense to release a less powerful GPU as your next-gen GPU...


----------



## 64K (Aug 3, 2014)

GhostRyder said:


> Wow dude, I cannot believe you really believe that.  Do you really think Nvidia's strategy was to release a GPU that was merely even, instead of releasing something way more powerful?  If Nvidia had had the GK110 ready, they would have happily released it, and at a fitting price point (probably close to the $1k mark).
> 
> This is the same strategy that has been used before, so I do not get why people are so shocked.  Compare Fermi to Kepler in terms of releases and the chips used and you will see the strategy remains the same.  Each cycle follows a similar pattern; you could call it a tick-tock cycle.  They release the introduction to a new architecture, show off how well it performs, gather data, and then release the full-powered version of the architecture the next cycle.
> 
> ...



Well said GhostRyder.


----------



## Fluffmeister (Aug 3, 2014)

GhostRyder said:


> Wow dude, I cannot believe you really believe that.  Do you really think Nvidia's strategy was to release a GPU that was merely even, instead of releasing something way more powerful?  If Nvidia had had the GK110 ready, they would have happily released it, and at a fitting price point (probably close to the $1k mark).



Why would they need to put all their cards on the table right away? That makes absolutely zero sense.

Fact is when the GK110 was ready, it was in the form of the K20X for Oak Ridge, low yield, high returns, much more sense than appeasing forum warriors at TPU.


----------



## HumanSmoke (Aug 3, 2014)

GhostRyder said:


> Wow dude, I cannot believe you really believe that.  Do you really think Nvidia's strategy was to release a GPU that was merely even, instead of releasing something way more powerful?  If Nvidia had had the GK110 ready, they would have happily released it, and at a fitting price point (probably close to the $1k mark).


Nvidia did have the GK110 up and running in the same time frame. For some reason I have yet to fathom, people seem to think that releasing the GPU as a $1000 card makes more sense than selling it in a $4500-5000 package to a client who was the damned launch customer (contract signed October 2011) and needed nineteen thousand of the things, under a contract that would severely penalize Nvidia if the boards weren't supplied on schedule.
If Nvidia had intended the GK110 for desktop from the outset - which they could have managed as a paper/soft launch with basically no availability but plenty of PR (i.e. a green-team scenario mirroring the HD 7970's 22nd December 2011 "launch") - they in all likelihood could have had parts out in time. GK110 taped out in early January 2012 (even noted Nvidia-haters tend to agree on this point). Fabrication, testing/debug, die packaging, board assembly, and shipping to distributors take 8-12 weeks for a consumer GPU - production GK110s are A1 silicon, so no revision was required - which means early to mid March 2012 was a possible launch date IF the GTX 680 hadn't proved sufficient... and the launch date for the GTX 680? March 22nd, 2012.
Oak Ridge National Labs started receiving their first Tesla K20s in September 2012 (1000 or so in the first tranche), which tallies with the more stringent runtime validation process required for professional boards in general and mission-critical HPC in particular.

Unbelievable that so much FUD exists about this considering most of the facts are well documented by third parties.



Fluffmeister said:


> Why would they need to put all their cards on the table right away? That makes absolutely zero sense.


History tells us that the GTX 680 was sufficient. The competition (the 7970) was a known factor, so there was actually zero need to hastily put together a GK110 card. I doubt that a GK110 GTX card would have been any more than a PR stunt in any case, since Oak Ridge's contract superseded any consumer pissing contest. 


Fluffmeister said:


> Fact is when the GK110 was ready, it was in the form of the K20X for Oak Ridge, low yield, high returns, much more sense than appeasing forum warriors at TPU.


True enough. ORNL's Titan was the high-profile large-order customer, but more than a few people forget that Nvidia was also contracted to supply the Swiss supercomputing institute's Todi system and the Blue Waters system for the National Center for Supercomputing Applications - around 22,000 boards required, without taking replacements into consideration.


----------



## Fluffmeister (Aug 3, 2014)

^ Exactly. And once those contracts were fulfilled and yields gradually improved, what did we see some 5-6 months later... *drum roll*... the $1000 GTX Titan, still without any fear of direct competition, and a price as much about protecting Nvidia's professional product stack as anything else... why the fuck not?

But no, it should have been $400 bucks and called the 680. Wonders never cease.


----------



## rtwjunkie (Aug 3, 2014)

Well explained by @Fluffmeister and @HumanSmoke why the mid-level chip ended up as the premiere Kepler card (and remained there so long)!!

Still, since I bought the 780 before the price drop, I prefer to keep the top of the chip line in my main rig. For me it just makes sense to wait for GM210, whenever that is (GTX 980?). Gotta get my money's worth!!

So anyway, I take back some of my false-advertising statements about the 680 and the corollary to the 880, with neither top-of-the-line card having the top-of-the-line chip in the lineup. It all relates to readiness as well as business commitments by Nvidia.


----------



## TheoneandonlyMrK (Aug 3, 2014)

You two sound like Nvidia board members some days. It would be nice to get back on topic at some point.
No new news on the hybrid board, GTX 880, or anything else going on then, I guess.


----------



## Fluffmeister (Aug 3, 2014)

theoneandonlymrk said:


> You two sound like Nvidia board members some days. It would be nice to get back on topic at some point.
> No new news on the hybrid board, GTX 880, or anything else going on then, I guess.



Really? Talking sense is being on the board of nVidia?

I guess you're right.


----------



## SIGSEGV (Aug 3, 2014)

arbiter said:


> One thing you forget: AMD will be stuck on the same process if they release a new GPU soon as well. They would face the same issue, so it's not just a problem for Nvidia; it could be a bigger problem for AMD.



According to various sources, AMD has already stated that they will introduce products on TSMC's 20nm process, including GPUs, by next year (2015).


----------



## 64K (Aug 3, 2014)

If you need a GPU upgrade and you want 4 GB of VRAM, then go with the GTX 880. I am 100% convinced at this point that it will smoke the GTX 780, but know what you're buying. It's not the Maxwell flagship; it's a mid-range GPU still on the 28nm process, so consider the price and don't be scammed.


----------



## GhostRyder (Aug 3, 2014)

Fluffmeister said:


> Why would they need to put all their cards on the table right away? That makes absolutely zero sense.
> 
> Fact is when the GK110 was ready, it was in the form of the K20X for Oak Ridge, low yield, high returns, much more sense than appeasing forum warriors at TPU.


It makes more sense than releasing an equal product with less VRAM that performs almost exactly the same on average (except when you take higher resolutions into account)...


Fluffmeister said:


> ^ Exactly. And once those contracts were fulfilled and yields gradually improved, what did we see some 5-6 months later... *drum roll*... the $1000 GTX Titan, still without any fear of direct competition, and a price as much about protecting Nvidia's professional product stack as anything else... why the fuck not?
> 
> But no, it should have been $400 bucks and called the 680. Wonders never cease.


5-6 months... Try almost a year later, dude...

GTX 680 released: March 22, 2012
GTX Titan released: February 19, 2013

Yeah, they released the Titan as a $1k card almost 11 months later; obviously they had no problem releasing a $1k desktop-grade video card. If they had wanted to get that card out sooner, they would have been happy to and charged accordingly, but they had enough trouble even getting the GTX 680 out, which was out of stock and basically required camping at your computer night and day to get one.



theoneandonlymrk said:


> You two sound like Nvidia board members some days. It would be nice to get back on topic at some point.
> No new news on the hybrid board, GTX 880, or anything else going on then, I guess.


I am getting just as tired as you of people dragging these threads off into off-topic fanboy arguments.

But then what is going to be the excuse this time with the 880? Since everyone is convinced that an unreleased card, with very little known about it, is going to be inferior to the current lineup...



64K said:


> If you need a GPU upgrade and you want 4 GB of VRAM, then go with the GTX 880. I am 100% convinced at this point that it will smoke the GTX 780, but know what you're buying. It's not the Maxwell flagship; it's a mid-range GPU still on the 28nm process, so consider the price and don't be scammed.



Exactly. I'm at a loss how certain people keep claiming this chip sucks before we have even seen anything...


----------



## Xzibit (Aug 3, 2014)

HumanSmoke said:


> History tells us that the GTX 680 was sufficient. The competition (the 7970) was a known factor, so there was actually zero need to hastily put together a GK110 card. I doubt that a GK110 GTX card would have been any more than a PR stunt in any case, since Oak Ridge's contract superseded any consumer pissing contest.



They pulled the PR stunt anyway with the intro of the TITAN brand.

The 680 was good enough for them and they saw a $ benefit. The 580 was FP64=1/8 and since then all GeForce have gone to FP64=1/24. AMD, meanwhile, stuck to FP64=1/4 on Tahiti until Hawaii, where they lowered it to FP64=1/8.

Tahiti had FP64=1/4, so it was AMD's "Titan", the successor to the 580 if you don't take sides, released a year after the 580. Not to mention the prices:
11/2010 - GTX 580 = $500
1/2012 - HD 7970 = $550
2/2013 - GTX Titan = $1000
The whole "TITAN" argument applies to Tahiti within that same time frame, with the notable exception of CUDA of course.

Now both companies are further cutting FP64 in the gaming line, where if Nvidia had stuck to its old ways the TITAN would have been the 580's successor, not the 680 or 780.

I hope Maxwell goes back to the old ways but I highly doubt it.
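Those ratios translate directly into peak double-precision throughput: FP32 peak (cores × 2 FLOPs per cycle × clock) divided by the ratio's denominator. A rough sketch using the published reference core counts and base/shader clocks, with no boost, just to illustrate the scale of the gap:

```python
# Peak FP64 throughput = FP32 peak / ratio denominator.
# FP32 peak (GFLOPS) = cores * 2 FLOPs/cycle * clock in MHz / 1000.
def peak_fp64_gflops(cores: int, clock_mhz: float, fp64_divisor: int) -> float:
    fp32_peak = cores * 2 * clock_mhz / 1000.0
    return fp32_peak / fp64_divisor

# GTX 580:   512 cores @ 1544 MHz shader clock, FP64 = 1/8  -> ~198 GFLOPS
# GTX 680:  1536 cores @ 1006 MHz base clock,   FP64 = 1/24 -> ~129 GFLOPS
# GTX Titan: 2688 cores @ 837 MHz base clock,   FP64 = 1/3  -> ~1500 GFLOPS
for name, cores, mhz, div in [("GTX 580", 512, 1544, 8),
                              ("GTX 680", 1536, 1006, 24),
                              ("GTX Titan", 2688, 837, 3)]:
    print(f"{name}: {peak_fp64_gflops(cores, mhz, div):.0f} GFLOPS FP64")
```

This is why the 680 actually regressed below the 580 in double precision despite tripling the core count, and why the Titan's 1:3 rate was the hook for its $1000 price.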


----------



## HumanSmoke (Aug 3, 2014)

Xzibit said:


> The whole "TITAN" argument applies to


...nothing being talked about here....but since you're hanging out the bait...


Xzibit said:


> They pulled the PR stunt anyways with the intro of TITAN brand


Sure did. Seems like a marketing winner.


Xzibit said:


> 680 was good enough for them and they saw a $ benefit.* 580 was FP64=1/8 and since then all Geforce have gone to a FP64=1/24.*


Spends a whole tortured introduction trying to get the Titan into the topic... then screws it up.
GeForce GTX Titan: FP64 1:3 rate (with boost disabled - which stands to reason, since overclocking and double precision aren't mutually beneficial from either an error or a power standpoint)
GeForce GTX Titan Black: FP64 1:3 rate w/boost disabled
GeForce GTX Titan Z: FP64 1:3 rate w/boost disabled


Xzibit said:


> While AMD stuck to a FP64=1/4 on Tahiti until Hawaii where they lowered it to FP64=1/8.


Thanks for reminding me that AMD halved the double-precision ratio for the desktop high end in the current series - though I was already aware of the fact. How about not offering double precision at all on GPUs other than the top one for the Evergreen and Northern Islands series, after offering FP64 on the HD 4000 series' RV770? Crazy shit, huh? Or limiting Pitcairn and Curacao to 1:16 FP64 to save die space and keep power demand in check? It's called tailoring the feature set to the segment.

Horses for courses. FP64 is a die-space luxury largely unrequired in gaming GPUs.
Nvidia figured out a while ago that the monolithic big die really isn't that economic when sold at consumer prices, which is why the line was bifurcated after the Fermi architecture - who would have thought selling a 520 mm² GPU for $290 (GTX 560 Ti 448) and $350 (GTX 570) wouldn't result in a financial windfall! AMD will likely do the same, since they will need a big die for pro/HSA apps (and Fiji sounds like 500 mm²+ by all accounts), and keep the second tier and lower die areas ruled by gaming considerations (just as Barts, Pitcairn, and Curacao are now).


Xzibit said:


> I hope Maxwell goes back to the old ways but I highly doubt it.


The old ways of reverting to the 1:8 FP64 rate of Fermi, or the 1:3 rate of the *current* GTX Titan range?


----------



## Xzibit (Aug 3, 2014)

HumanSmoke said:


> ...nothing being talked about here....but since you're hanging out the bait...
> 
> Sure did. Seems like a marketing winner.
> 
> ...



WOW.

Even when I'm not arguing with you, you still come off as a jerk.

I didn't include the TITAN because it was the exception on their top-series card, even though it has different "branding". I assumed you would know the difference. Sheesh. Didn't think crossing the T's and dotting the I's was needed for you to understand.

The old way was not to change FP64 within a chip in the gaming series. GK110 was their first to do that; the TITAN and the 780 differ. They saw an opportunity to make $ off so many chips that didn't meet standards. It was a smart business move, but not so good for the consumer.



P.S.
I need to stay away from culinary school.  Apparently it turns you into an even greater ass.


----------



## HumanSmoke (Aug 3, 2014)

Xzibit said:


> I didn't include the TITAN because it was the exception on their top-series card, even though it has different "branding".


Ah, I see.
So when you said...


Xzibit said:


> 580 was FP64=1/8 and since then all Geforce have gone to a FP64=1/24.


...what you actually meant was "all GeForces have gone to 1:24 *except* the ones that are 1:3".
Makes sense. Might have been apropos to include that... but then it would make the rest of your post redundant.
I'm still not sure why you actually brought up double precision in any case, since GM204 likely won't be any more compute/pro focused than any other sub-300 mm² GPU, and it isn't actually apropos to anything anyone, including myself, was talking about - so why bother quoting my post, which wasn't in any way related to what you are talking about?


Xzibit said:


> jerk...ass


Still can't hold a discussion without resorting to name calling? Some things never change.


----------



## Xzibit (Aug 3, 2014)

HumanSmoke said:


> Ah, I see.
> So when you said...
> 
> ...what you actually meant was "all GeForces have gone to 1:24* except* the ones that are 1:3"
> ...



I should have pointed you here, but I doubt that would stop your usual grandiose reply:

*GEEKS3D - AMD Radeon and NVIDIA GeForce FP32/FP64 GFLOPS Table*

Really, I thought most of that post I quoted you from was referring to *GK110*? Silly me.

Name calling? More like observation. It's not like I'm the only one in this thread with such an observation.

I'll leave you to your HPD


----------



## HammerON (Aug 3, 2014)

Time to move along folks. Not a suggestion...


----------



## the54thvoid (Aug 3, 2014)

To be honest, I've never read a post from HumanSmoke that isn't logically planned and argued. Likewise, Xzibit generally does the same. But the argument about FP is pretty irrelevant.
Two guys with good GPU history knowledge, slightly ignoring literal freedoms and being obtuse. But kudos to both for knowing their stuff.
As for the 880: it has to be more powerful than the 780, and if it's not faster than the 780 Ti it will need to come in at an appropriate price point. What will its FP64 rate be? Doesn't matter; gamers don't need it. What matters in the market now is the power draw/performance ratio. For 4K we need better power efficiency for the dual-GPU setups it looks like we'll need to run until maybe a couple of generations from now.
I know folks say power consumption is irrelevant to the gamer, but it isn't to the manufacturer. Mobile markets are dictating the trend, and whoever gets the most power-efficient architecture in their graphics will win. It's the only reason AMD isn't buried on the CPU front, with their APUs beating Intel's onboard graphics solutions.


----------



## jagd (Aug 3, 2014)

I did not forget AMD; it was irrelevant to this topic, not the matter discussed here. All fabless chip makers have this problem; they have to use what is available to them. You missed one simple point: AMD is not skipping 20 nm, so Nvidia would be at a disadvantage the whole time 16 nm remains unrealized at TSMC. That was the point.



arbiter said:


> One thing you forget: AMD will be stuck on the same process if they release a new GPU soon as well. They would face the same issue, so it's not just a problem for Nvidia; it could be a bigger problem for AMD.


----------



## Xzibit (Aug 3, 2014)

the54thvoid said:


> To be honest, I've never read a post from Humansmoke that isn't logically planned and argued. Likewise, Xzibit generally does the same. But the argument about FP is pretty irrelevant.
> Two guys with good GPU history knowledge, slightly ignoring literal freedoms and being obtuse. But kudos to both for knowing their stuff.
> As for 880, it has to be more powerful than 780  and if it's not faster than 780ti, will need to come in an appropriate price point.  What will its FP be? Doesn't matter, gamers don't need it. What matters in the market now is power draw/performance ratio.  For 4k we need better power efficiency for the dual gpu's it looks like we need to run them till maybe a couple generations away?
> I know folks say power consumption is irrelevant to the gamer but it isn't to the manufacturer. Mobile markets are dictating the trend and whoever gets the best power efficient architecture in their gfx will win. It's the only reason AMD aren't buried on the CPU front, with their APU's beating Intel's on board gfx solutions.



FP64 was a reference to value, similar to how the Titan brand (and the infamous Z) was marketed toward more than just gamers to justify its high price. Some think that Nvidia making 50%+ margins wouldn't allow them to sell at a lower MSRP, or at one similar to the past.

If there was no value in FP64, the TITAN brand wouldn't exist. I was simply implying that during Tahiti's run the value was always there, from the launch of the 7970 through the 7990 until the Titan, and at a more value-oriented price, granted you weren't tied down to CUDA. That is a benefit for Nvidia, which can lure you into spending more $ if you're locked into their CUDA ecosystem. I know some of the green faithful don't even want to look at the other side, but it was there if you weren't biased.

The comparisons people are making are just wacky, especially when they include the GK110 Titan, because then you're being a hypocrite in upholding its value while not seeing the GK104's similar downside when compared to Tahiti. That's my gripe.
By all means compare the GK110 780/Ti all day long in a gaming context, since it's better, but by the looks of it those people are more interested in swinging their favorite-color purse at someone.

I will be looking at Maxwell, but in a more cautious way given the talk of V1 at 28nm and V2 at 20nm.


----------



## Fluffmeister (Aug 3, 2014)

GhostRyder said:


> It makes more sense than releasing an equal product with less VRAM that performs almost exactly the same on average (except when you take higher resolutions into account)...



They only have to be more or less equal, but then I guess AMD felt the need to release a GHz Edition for a reason. And people don't just take resolution into account but feature sets and the like; as Tech Report has shown, higher FPS doesn't always equal a better experience... people hated hearing that too.

Both have come a long way regardless.



GhostRyder said:



> 5-6months...Try almost a year later dude...
> 
> GTX 680 Released: March 22, 2012
> GTX Titan Released: February 19, 2013
> ...



You're right, I got my dates wrong, but once again: they were perfectly happy selling the K20 / K20X for thousands a pop. Once your high-end pro market is happy, you can start to trickle that technology down to the consumer. Even coming almost a year after the 680, it still had the market to itself for another, what, 8 months?


----------



## HumanSmoke (Aug 3, 2014)

the54thvoid said:


> As for 880, it has to be more powerful than 780  and if it's not faster than 780ti, will need to come in an appropriate price point.


I think a price realignment is in order anyway. The margin between the 780 and 780 Ti is thin as it is, and certainly doesn't warrant the $150 price differential, especially as cards such as the Gigabyte GHz Edition sit at 780 Ti (stock) performance and can be had for $480 after MIR; if you shop around as I did for mine, you can knock another forty off that.

If you split the difference between the 780 and 780 Ti, and say for argument's sake that the 880 sits between them, then it either sells for $549/599, or it takes the more usual $500 price point and the 780 drops officially to $400-450. Even that may be a little high, since the GTX 870 would presumably be around the same performance level if previous salvage-part performance is any guide.

The only non-variable at the moment seems to be GTX 770/760 pricing, since for all the talk about GM204 there hasn't been much (if any) real info on the GM206, which would EOL the 770/760. If GM206 parts are far enough off not to warrant rumours, then Nvidia would need stability in the higher-volume mainstream. That is pretty much why I think the 780 is the odd man out: newer B1 silicon cards have considerable OC headroom, so how low can you realistically price them?


the54thvoid said:


> What will its FP be? Doesn't matter, gamers don't need it.


Exactly. Double precision is a workstation feature for the most part, and it becomes a liability for power consumption and die space - two aspects that are much more important: power consumption because most sales are to OEMs (low power means being able to cheap out on the PSU), and die space as foundry pricing takes a hike (probably important for both AMD and Nvidia if they plan to basically do an optical shrink for the next process node).


the54thvoid said:


> What matters in the market now is power draw/performance ratio.  For 4k we need better power efficiency for the dual gpu's it looks like we need to run them till maybe a couple generations away?


I'd guess further out than that. Both AMD and Nvidia need to keep discrete graphics alive and well, and both need to keep distance between themselves and Intel's efforts. Generally software runs ahead of hardware to drive graphics sales, and I don't see that changing. Even if gaming stayed at 4K for a while, it only takes the addition of path tracing (or ray tracing), and the almost certain addition of voxel-based global illumination (UE4 almost had it, except it requires too much graphics horsepower for current architectures), to bring the next generations of cards to their knees.


----------



## GhostRyder (Aug 3, 2014)

Fluffmeister said:


> They only have to be more or less equal, but then I guess AMD felt the need to release a GHZ edition for a reason. But then people don't just take resolution into account but feature sets and the like, and as Tech Report had shown, higher FPS doesn't always equal a better experience... people hated hearing that too.
> 
> Both have come a long way regardless.


The GHz Edition 7970s and the like are no different from the normal counterparts. All that really changed was the core clock being bumped to the 1 GHz mark, essentially to match Nvidia running clock speeds so high on the 680. Anyone could have clocked a normal 7970 to the same levels (not counting the heavily binned non-reference models).




Fluffmeister said:


> You're right I got my dates wrong, but once again they were perfectly happy selling K20 \ K20X for thousands a pop. Once your high end pro market is happy you can start to trickle that tecnonlogy down to the consumer, even coming almost a year after the 680 it still had the market to itself for another what 8 months?


That is the definition of the professional market. Professional cards always cost more because of the feature set, software, RAM, and the fact that they are built and rated for 24/7 use. They are meant to be beaten up and keep working under the heaviest of tasks, hence why they charge a fortune for them. The Teslas (K20/X) also have no display output and are designed for straight-up compute.



Xzibit said:


> FP was a reference to value. Similar to Titan brand marketing and the infamous Z towards more then just gamers to justify its high price.  Some think that Nvidia making 50+ on margins wouldn't allow them to sell at a lower MSRP price or that of similar in the past.
> 
> If there was no value in FP64 TITAN brand wouldn't exist.  I was simply implying that during Tahiti the value was always there since the launch of 7970->7990 until Titan at a more value oriented granted you weren't tied down to CUDA.  Which is a benefit for Nvidia which can lure you into spending more $ if your locked into their CUDA Eco-System.  I know some green faithful don't even want to look at the other side but it was there if you weren't bias.
> 
> ...


It's because people think that completely segmenting the market is the thing to do... I remember people buying 3 GB GTX 580s like hotcakes for professional work, simply because they were so good at it for the money.  Truth is, the only reason NVIDIA did that was to create more niche markets (Titan).  The professional cards always have had, and always will have, their reasons for existing and being priced the way they are, because they come far better prepared for that work.  Desktop GPUs always carry some risk when used professionally, which is why I think the Titan branding is foolish: you get basically the same attributes the GTX 580 offered as standard, minus a load of extras (including the 24/7 rating), at a premium.

You were correct as per usual in your argument.


----------



## HumanSmoke (Aug 3, 2014)

Well, I wouldn't ordinarily trust Videocardz as any kind of legitimate news outlet, but they seem to have picked up on a vendor (Gigabyte) spokesman's interview at a Chinese event.
From Google Translation of the original Expreview article:


> Gigabyte shared some good news in an interview: this September they will launch a new G1 Gaming graphics card, based on the flagship of NVIDIA's latest GTX 800 series...


----------



## Xzibit (Aug 4, 2014)

If you follow the link, it refers to the 880 as GM204. We are more likely seeing a repeat of the Kepler cycle (the 880 is GM204, which then becomes the 970) and won't see full Maxwell until the next cycle in 2015.

Worst case scenario = a refined Kepler with an 800 series designation.

Best case scenario = full Maxwell on 28 nm (no gimps).

At least we've got two months of speculation.  What if Maxwell comes with a built-in alien receiver? My bad, that was Tesla. Maxwell should come with a camera and a picture of little green men.


----------



## Fluffmeister (Aug 4, 2014)

If Kepler is anything to go by, Maxwell is going to be a huge success for the big meanies from Santa Clara.

Don't worry guys, at least you have Tonga to look forward to.


----------



## Fluffmeister (Aug 4, 2014)

GhostRyder said:


> The GHz Edition 7970s and the like are no different than their normal counterparts.  All that really changed was the core clock being bumped to the 1 GHz mark, essentially to match NVIDIA's high clock speeds on the 680.  Anyone could have clocked a normal 7970 to the same levels (not counting the heavily binned non-reference models).



Of course, but AMD felt the need to respond because the GK104-based 680 was more than enough to compete. I feel like I'm banging my head against a wall here.



GhostRyder said:


> That is the definition of the professional market.  Professional cards always cost more because of the feature set, software, RAM, and the fact that they are built and rated for 24/7 use.  They are meant to be beaten up and keep working under the heaviest of loads, hence the fortune charged for them.  The Teslas (K20/K20X) don't even have display outputs; they are designed purely for compute.



Hmm, bit of a random statement there, none of which changes what I said.

Can't believe the GK110 is already nearly two years old. What a monster.


----------



## HumanSmoke (Aug 4, 2014)

Fluffmeister said:


> If Kepler is anything to go by, Maxwell is going to be a huge success for the big meanies from Santa Clara.


It won't just be desktop: the latest NVIDIA driver branch identifies *nine* different Maxwell (N16E-) SKUs, which, if they conform to NVIDIA's usual nomenclature, equate to GTX 940M through GTX 980M models. AFAIK, the top dog GTX 980M arrives in October.


----------



## erocker (Aug 4, 2014)

HumanSmoke and Xzibit can leave this thread now. Thread has been purged of off-topic nonsense. Move along.


----------



## GhostRyder (Aug 4, 2014)

Fluffmeister said:


> Of course, but AMD felt the need to respond because the GK104-based 680 was more than enough to compete. I feel like I'm banging my head against a wall here.


First of all, the bump in clocks was mostly to conform to the whole "GHz race".  As with CPUs in the past, it was a race, and NVIDIA decided to push the clocks.  The mere 75 MHz was just so there was no longer a claim that NVIDIA was the only one with a reference 1 GHz core clock.  It also proved to be more than enough to best the 680 clock for clock (since both overclock about the same).




Fluffmeister said:


> Hmm, bit of a random statement there, none of which changes what I said.


You did not get my point... The Tesla series of cards are not normal and do not follow the same rules the desktop and Quadro series do. In most cases they cannot even output video to a monitor (recent C-series cards have a DVI port, but the K20s do not).  They are also designed in a way that makes them very hard to run in normal, or even many professional, environments without modification (passive coolers) or special rack-mount servers.  The K20X, for instance, has a massive open passive heatsink designed to receive airflow from blower fans inside the chassis.

The primary focus of these cards is as follows:
Large-scale calculations (floating point)
CUDA/OpenCL
Large-scale image generation

With those series, NVIDIA is essentially just saying, "Here is a powerful GPU, have fun, we will see you later." You're not getting the same kind of package as with a Quadro or desktop card...

The Teslas are a special breed of card designed with supercomputers in mind, and they do not need certain attributes that consumer cards do.  They are meant for professionals who program workloads to use the GPU cores for calculations.  NVIDIA can ship GK110 chips in these even when they are not ready for the mainstream, because even with a poor early binning process they do not expect many sales.  It is a very limited market (even the Oak Ridge supercomputer's 18,688 K20X GPUs are an insignificant share of the GPUs out there), and NVIDIA knew that putting GK110 into a niche market early still let them work on and improve the chip for the mainstream.

I'll repeat my earlier point: they had enough trouble just getting the GTX 680 out, which was perpetually out of stock and basically required camping at your computer night and day to get one.  They were not ready, and even the K20X did not have the full-powered core (the K40 does), because producing those chips was still as difficult as it was for GK104.  If they had been fully ready to release the chip, they would have done so, even shipping cut-down GK110 chips (like the 780), but they were not ready to push it onto the market (just as AMD was not ready with Hawaii, or they would have done the same).

A company likes to launch with the best product it can.  Neither wants to release products that do not meet standards: they want to avoid wasting money and to maximize profits, and a batch of poor-quality chips that cannot run at full power is a sure-fire way to waste money.



Fluffmeister said:


> Can't believe the GK110 is like 2 years old already, what a monster.


This surprises you how?  Most GPUs, CPUs, and other chips have existed in development for quite some time (a year or so) before they are ready to be used.  There are exceptions, of course, but most GPUs are designed well in advance and go through rigorous testing, including mundane work like learning to manufacture the GPU cost-effectively with the fewest failures (or poorly performing chips).  Neither NVIDIA nor AMD just plops out a chip the month they announce it.  It's not as if they experiment, the chip appears in a chemical reaction, and they cry, "BY GOD, WE HAVE DONE IT!!! Quick, make the announcement!" It's a long, tiring process of testing and refining.

GM204 is the same: it takes a long time, and they have been working on it for quite a while.  They were well aware at some point that they could not drop down a node, and began building GM204 on the old process.  It has existed for quite some time, and we will see all the work they have put into it very soon.

Again, hating on GM204 and calling it a poor GPU is going to prove foolish.  GM204 is going to beat GK110 by a decent margin, probably at least the way the GTX 680 beat the GTX 580.  Even though it's on the "tick" cycle, where they introduce the new architecture and save the big chip until they have improved and refined the whole process, you're going to get a better-performing chip.  Until we have more information, however, most of this is still speculation.  The only thing I am sure of (unless something really weird happens) is that it will be NVIDIA's top-performing single-GPU chip at the time of its release!


----------



## Fluffmeister (Aug 4, 2014)

I know all this, it seems we have our wires crossed here.

You implied that if GK110 had been ready they would have released it; HumanSmoke and I said it was, but they were fulfilling big-buck contracts first, which made much more sense than rushing it to the consumer market.

http://www.techpowerup.com/forums/t...x-880-in-september.203661/page-3#post-3144739

http://www.techpowerup.com/forums/t...x-880-in-september.203661/page-3#post-3144743

I then said:



			
Fluffmeister said:

> ^ Exactly, and once those contracts were fulfilled and yields gradually improved what did we see some 5-6 months later.... *drum roll*.... the $1000 GTX Titan



You corrected me on the release date, but I was referring to the thousands of GK110s installed and running at Oak Ridge. So no, I wasn't referring to the 680 release at all, and I was right in the first place, so yeah, shrug.

The comment about the age of GK110 is merely to emphasize how competitive the chip still is, even against new AMD silicon. So again, my point is simply that they didn't need to rush it to the consumer market, that's all... yes, I really believe that.


----------



## GhostRyder (Aug 4, 2014)

Fluffmeister said:


> I know all this, it seems we have our wires crossed here.
> 
> You implied if the GK110 was ready they would release it, I and HumanSmoke said it was but they were fulfilling big buck contracts first which made much more sense then rushing it to the consumer market.


First of all, it was not ready, which is why it was *NOT* released to the normal consumer market, or even the average professional market.  Professionals and supercomputer operators do different things: they program and work on the machines constantly, work more closely with the hardware, and most of their software is custom-built for the machine, which is part of why Teslas exist as they do.  NVIDIA can release something not yet ready for the consumer market to these professionals because they know it won't sell in high quantities and those buyers don't need as many extras.  The K20 and K20X did not even carry the full chip (the K40 did), which is even more evidence it was not ready.  It was the tick cycle: they released GK104 so they could field-test much more while continuing work on the GK110 silicon.  If GK110 had been fully ready, they would have been more than happy to wreck AMD with it, just as AMD would have done to NVIDIA had Hawaii been ready.

Just because a chip exists does not make it ready for the average market, or even the average professional market (hence no Quadros; otherwise they would have catered to all the professional markets and not one strictly niche market).  If they had released a Quadro card, this would be a different discussion, but as with the desktop cards, the GK110 chip came out much later to allow more testing, improved binning, and improved software, among other things.  Hawaii was the same, which is why we saw Tahiti released before the Hawaii chips, just as we saw GK104 before GK110.  Those are the chips that can be made ready first on a new architecture; they are used to test the field while work continues on the bigger chips and the processes are refined for the next release.



Fluffmeister said:


> You corrected me on the release date, but I was referring to the thousands of GK110 installed and up and running at Oak Ridge, so no I wasn't referring to the 680 release at all, and was right in the first place, so yeah shrug.


OK, well, either way, GK110 being in a supercomputer among professionals does not make it a consumer release, nor does it prove it was ready for the big time...



Fluffmeister said:


> The comment about the age of GK110 is merely to emphasize how competitive the chip is, even against new AMD silicon, so again my point is simply they didn't need to rush it to the consumer market that's all... yes i really believe that.


Again, you are saying that as if this chip is two years old and the AMD silicon a few months old, which is not the case.  These chips may have been made at different dates, and the exact ones are hard to pinpoint outside the executives of both companies.  But the fact remains that both are probably a lot closer in age than you would expect (or seem to think)...

Just like GM204, it has probably existed much longer than we give it credit for...

I am also done arguing this at this point...


----------



## Fluffmeister (Aug 4, 2014)

Again, you're going off on a bit of a tangent here with your walls of text. I'm talking about working silicon being shipped and used by paying clients.

Besides, I never said it was ready for the consumer market; I said they didn't need to rush it to the consumer market. Different things.

I'm glad you're done too, for my own sake.

PS. Hi Xzibit!


----------



## Sony Xperia S (Aug 5, 2014)

the54thvoid said:


> But the argument about FP is pretty irrelevant.
> What will its FP be? Doesn't matter, gamers don't need it.



I am a gamer and I need my fully enabled FP rate! Be it 1/3 or 1/4, but not crippled to an ugly 1/8, or even worse, 1/24.

NVIDIA (and AMD, though AMD is the lesser evil) are indeed arses and money-grabbing jerks. And some guys make it sound as if everything they do should be justified.

After all, HumanSmoke's argument (about die space and power consumption) would have made some sense if they used different dies instead of crippling it in the driver, but they use the same die for professional and consumer cards!
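To put those rate fractions in context, here is a back-of-the-envelope sketch of what 1/3 versus 1/24 double precision means for peak throughput, using the published reference core counts and clocks for the GTX 680 and the original Titan (simple peak arithmetic only; real workloads land well below these numbers):

```python
# Peak-throughput arithmetic: cores x clock x 2 (an FMA counts as two FLOPs).
def peak_gflops(cores, clock_ghz, flops_per_clock=2):
    """Theoretical peak GFLOPS for a GPU at the given core count and clock."""
    return cores * clock_ghz * flops_per_clock

gtx680_fp32 = peak_gflops(1536, 1.006)   # ~3090 GFLOPS single precision
gtx680_fp64 = gtx680_fp32 / 24           # 1/24 DP rate -> ~129 GFLOPS
titan_fp32  = peak_gflops(2688, 0.837)   # ~4500 GFLOPS single precision
titan_fp64  = titan_fp32 / 3             # 1/3 DP rate  -> ~1500 GFLOPS

print(f"GTX 680 FP64: {gtx680_fp64:.0f} GFLOPS")
print(f"Titan   FP64: {titan_fp64:.0f} GFLOPS")
```

Roughly a 12x gap in peak double precision between the two cards, which is the whole argument for the Titan's compute positioning.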


----------



## xenocide (Aug 6, 2014)

Sony Xperia S said:


> I am a gamer and I need my fully enabled FP rate! Be it 1/3 or 1/4, but not crippled to an ugly 1/8, or even worse, 1/24.
> 
> NVIDIA (and AMD, though AMD is the lesser evil) are indeed arses and money-grabbing jerks. And some guys make it sound as if everything they do should be justified.
> 
> After all, HumanSmoke's argument (about die space and power consumption) would have made some sense if they used different dies instead of crippling it in the driver, but they use the same die for professional and consumer cards!


 
The point he was trying to make was clearly that there are almost no games (if any) that benefit from FP64 performance.  I would also say it's far more likely that, even though they are the same die, the dies that performed better became Teslas, whereas the lower-performing (slightly defective but still functional) dies became Titans.  In the world of semiconductor manufacturing, not all parts are created equal.
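That binning idea can be sketched with a toy model (the thresholds, clock distribution, and SKU names here are entirely made up for illustration; real binning also weighs leakage, power, and defect maps):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical bin thresholds: each die gets a simulated maximum stable
# clock, and the best-performing silicon goes to the premium SKU.
def bin_die(max_clock_mhz):
    if max_clock_mhz >= 900:
        return "premium (Tesla-grade)"
    if max_clock_mhz >= 800:
        return "consumer"
    return "salvage (units disabled)"

# Simulate a wafer run: stable clocks roughly normally distributed.
dies = [random.gauss(850, 60) for _ in range(1000)]
print(Counter(bin_die(d) for d in dies))
```

Same mask, same die, three different products, purely by where each chip lands on the distribution.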


----------



## Sony Xperia S (Aug 6, 2014)

xenocide said:


> The point he was trying to make was clearly that there are almost no games (if any) that benefit from FP64 performance.



I know that the GTX 680 and its professional iteration, the K10, actually have one and the same FP performance. It is hardware-castrated from the very beginning. My question is: how does NVIDIA sell this as a Tesla K10, and who would buy 1/24-rate double precision?

As a gamer, I didn't say I need my fully enabled double precision for games. There are multiple other applications I would be glad to use the same card for.

AnandTech has a pretty nice showcase page called "Compute: What You Leave Behind?" in its GTX 680 review.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17

You will see PrimeGRID Genefer 1.06, as well as AESEncryptDecrypt, SmallLuxGPU, and Civilization V.

I will leave you to draw your own conclusions.


----------



## THU31 (Aug 8, 2014)

Scatler said:


> As if Nvidia ever released top end gpu's at reasonable prices. When you have "top" performance you ask top dollar for it /sarcasm.



The GTX 480, 580, and 680 all launched at $500, which was not unreasonable. They went bullshit with the whole Titan/780 thing.


----------

