Monday, September 15th 2014
NVIDIA GeForce GTX 980 and GTX 970 Pricing Revealed
Apparently, NVIDIA is convinced that it has a pair of winners on its hands with its upcoming GeForce GTX 980 and GTX 970 graphics cards, and is preparing to price them steeply. The GeForce GTX 980 is expected to start at US $599, nearly the same price as the GeForce GTX 780 Ti. The GTX 970, on the other hand, will start at US $399, danger-close to cannibalizing the GTX 780.
Across the brands, the GTX 980 is launching at the same price at which AMD's Radeon R9 290X launched, and the GTX 970 at that of the R9 290. AMD's cards have since settled down to $449 for the R9 290X and $350 for the R9 290. Both the GTX 980 and GTX 970 will be available in non-reference board designs, although reference-design GTX 980 cards will dominate day-one reviews. Based on the 28 nm GM204 silicon, the GTX 980 features 2,048 CUDA cores, 128 TMUs, and 32 ROPs, while the GTX 970 features 1,664 CUDA cores and 104 TMUs. Both feature 256-bit wide memory interfaces holding 4 GB of GDDR5 memory.
Source:
3DCenter.org
71 Comments on NVIDIA GeForce GTX 980 and GTX 970 Pricing Revealed
GK110 > GF100: 104% improvement
GF100 > GT200: 58.3% improvement
GT200 > G80: 38.2% improvement
GK104 > GF104: 104% improvement
GF104 > G92: 56.8% improvement
Hawaii > Tahiti: 33.3% improvement
Tahiti > Cayman: 38.9% improvement
Cayman > Cypress: 16.3% improvement
Cypress > RV770: 100% improvement
Pitcairn > Barts: 51.5% improvement
Barts > Juniper: 66.7% improvement
Juniper > RV740: 51.5% improvement
They are asking a lot of money for a mid-tier product. Within the next 6 months they will be introducing the GM210.
About the chart, though...
The way I saw the NVIDIA side (maybe I'm crazy, I don't know):
GK110 (GTX 780) > GK104: 19% improvement; MSRP $650 vs $500; launched May 2013 (you could say that GK104 is a midrange part, and you'd be right, but to US customers it was sold at a high-end price, sorry.)
GK104 (GTX 680) > GF110: 19% improvement; MSRP $500 vs $500; launched March 2012
GF110 (GTX 580) > GF100: 11% improvement; MSRP $500 vs $500; launched November 2010
GF100 (GTX 480) > GT200: 49.2% improvement; MSRP $500 vs $650; launched March 2010
GT200 (GTX 280) > G80: 37% improvement; MSRP $650 vs $520; launched June 2008
Now Radeon
Hawaii (R9 290X) > Tahiti: 41.3% improvement; MSRP $550 vs $550; launched October 2013
Tahiti (HD 7970) > Cayman: 29.8% improvement; MSRP $550 vs $370; launched December 2011
Cayman (HD 6970) > Cypress: 13.4% improvement; MSRP $370 vs $400; launched December 2010
Cypress (HD 5870) > RV770: 49% improvement; MSRP $400 vs $300; launched September 2009
RV770 (HD 4870) > RV670: 57% improvement; MSRP $300 vs $250; launched June 2008
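The percentages in the charts above follow from a simple relative-performance calculation. A minimal sketch, using hypothetical performance index numbers (not real benchmark data) purely to show how a generation-over-generation figure like the GK110-over-GK104 19% is derived:

```python
# Hypothetical relative-performance indices, for illustration only.
perf_index = {
    "GTX 680 (GK104)": 100.0,
    "GTX 780 (GK110)": 119.0,  # assumed index, chosen to match the 19% figure
}

def improvement(new: float, old: float) -> float:
    """Percent improvement of `new` over `old`."""
    return (new / old - 1.0) * 100.0

pct = improvement(perf_index["GTX 780 (GK110)"], perf_index["GTX 680 (GK104)"])
print(f"GK110 over GK104: {pct:.0f}% improvement")  # prints 19%
```

Note that the result depends heavily on which resolution the underlying index is built from, which is exactly the 2560x1440-vs-overall quibble raised below.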
Even if you could compare a 20%-castrated GK110 (GTX 780), the closest analogue in GK104 is the OEM GTX 660 or 760 (1,152 of 1,536 shaders active). Regardless, I was talking about GPU hierarchy, not price segment. The GTX 480 also isn't a full-die part. I used the highest resolution for the percentages (2560x1440) where possible; overall figures include resolutions like 1024x768, hardly indicative of the GPUs being discussed. I also only looked at fully enabled GPUs. Salvage parts aren't indicative, since the degree to which they are castrated differs between architectures.
Nice use of colour - very vibrant.
While there's a part of this that isn't taken in context, there are also price increases that both incur from TSMC, like the rumored 20-25% increase enacted on 28 nm wafers. Sure, it's moot in the pricing between the two of them, but it is still a factor passed on to us, so the escalation in pricing isn't just AMD/NVIDIA.
Another piece that needs consideration: up until Kepler, Nvidia purchased individually the chips that achieved their requirements. Whether or how that helped or hindered, I can't say. Now they buy the wafers and harvest many more variants, as was the case with the GK104. I'm not saying there's anything wrong with it... it's just a different business structure, and it would lead one to figure it has enhanced the margins they can extract from each wafer. It's also why the GTX 680 price was maintained even though 28 nm costs went up, as it was the midrange part.
I thought that in the old arrangement Nvidia didn't pay for parts that couldn't meet the specification? And that they went later on shrinks because TSMC would work through "risk production" before getting Nvidia's parts going?
The RV670 of the HD 3870 was 55 nm in Nov 2007; wasn't the G92b of the 9800 GTX+ Nvidia's first 55 nm part, released July 2008?
The RV740 (pipe cleaner) of the HD 4770 was 40 nm in May 2009, and even the HD 5870 (Cypress) was Sept 2009; meanwhile, wasn't the GTX 480 Nvidia's first iteration on 40 nm, and that was March 2010?
And for 28 nm, the 7970 was out a good 3 months before the GTX 680 (both being afflicted by what I understood were TSMC teething pains). That was the first time Nvidia took the reins to test and bin each chip themselves.
I could be mistaken on those releases, and others may point to different GPUs (professional parts) or dates, but for gamers this is the trend I recall.
There is an article somewhere that refers to this, let me see if I can find it...
They both purchase wafers at basically a set price, no matter the number of parts or the complexity. Traditionally there's been a determined yield % for tier 1 and tier 2 parts after risk production is satisfied. There are cost allocations (they may not pay full price, depending on the problems) for at-risk wafers, but once they project that production yields per wafer will hold to "X" percentages, things start. There's the unwritten eventual objective that more tier 1 & 2 parts should be "harvested" as production matures. Parts outside the agreed "yields" are not so much "defects"; they're parts that can be salvaged/cut back/fused/gelded to produce other, lower variants.
I believe early in the old deal TSMC would go to Nvidia and show they had found "tier 3/4 chips" in volumes that could be used for a GSO or some other variants. It wasn't absolute that Nvidia had to contractually take them, but given the margins they paid for tier 1 & 2 parts, it was very lucrative to find homes for them. However, with each shrink the sorting was less worthwhile for TSMC, and Nvidia was compelled to make use of remnants or see their part costs skyrocket. With Fermi, I see Nvidia embracing a thoughtful architecture layout to provide them even more flexibility past the traditional 2 tiers, working toward the imminent day when the arrangement with TSMC would no longer be flexible enough. It had long since morphed into basically how AMD operated, and my synopsis still is: basically just 2 tiers. Don't get me wrong, the old arrangement was shrewd and aided Nvidia for a good many years, but TSMC no longer wanted to sort chips, and Nvidia had long recognized they needed to change.
Now this is where Nvidia gets accolades: when transitioning to 28 nm (a point at which TSMC not only raised prices but started true parity in wafer cost for both), they smashed the "just 2 tiers" yield concept with Kepler, fully embracing the notion of multiple specs that can be discerned from a wafer. I think Nvidia also worked very studiously on developing apparatuses (machines) that rapidly test, sort, and bin in one quick operation. The long-established method was (is) to segregate tiers 1 & 2, set the rest aside, then come back later and see what the remnants might give; that is not what Nvidia does now. Nvidia, I believe, not only builds this flexibility in "by design", but more importantly identifies almost immediately the various iterations they could release, recognizing when a particular spec can be offered in volume and priced to slot into the market. They realize their potential product stack much earlier, meaning they can be much more forward-thinking from the first good wafers, rather than reactionary weeks later.
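The single-pass, multi-tier harvesting idea described above can be sketched as a simple binning routine. The tier names, cluster counts, and clock thresholds here are entirely hypothetical (nothing in the thread specifies NVIDIA's actual criteria); the point is only to illustrate sorting every die into one of several SKUs in one pass, rather than pulling tiers 1 & 2 and revisiting the remnants later:

```python
# Illustrative sketch of multi-tier die binning. Tiers and thresholds are
# made up for this example, not NVIDIA's real test criteria.
from dataclasses import dataclass

@dataclass
class Die:
    good_clusters: int   # functional shader clusters out of 8 (hypothetical)
    max_clock_mhz: int   # highest stable clock found during test

def bin_die(die: Die) -> str:
    """Assign a die to a (hypothetical) product tier in a single pass."""
    if die.good_clusters == 8 and die.max_clock_mhz >= 1100:
        return "tier-1: full die, high clock"
    if die.good_clusters == 8:
        return "tier-2: full die, lower clock"
    if die.good_clusters >= 6:
        return "tier-3: salvage, clusters fused off"
    return "scrap: held for further salvage review"

# One pass over a (tiny, made-up) wafer's worth of dies:
wafer = [Die(8, 1150), Die(8, 1050), Die(7, 1100), Die(4, 900)]
for d in wafer:
    print(d, "->", bin_die(d))
```

The design point the comment makes is in that single loop: every die gets a destination SKU immediately at test time, so the vendor knows its potential product stack from the first good wafers.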
Back in the day (we're talking 800 nm - 250 nm here), TSMC had competition, lots of it: ST Micro (Nvidia, 3DLabs, ATI, VideoLogic/PowerVR), LSI (Number Nine), LG/Toshiba (Chromatic Research), UMC (S3/XGI, ATI, Matrox, Nvidia), Mitsubishi (Rendition), NEC (Matrox, VideoLogic/PowerVR), IBM (3DLabs, Nvidia), UICC (Trident), MiCRUS (Cirrus Logic), Fujitsu (S3), SMC (SiS), Texas Instruments (Chromatic, 3DLabs). Those are the main ones I remember, along with the GPU vendors usually associated with them, but there are quite a few others (Lockheed-Martin I think only produced for Intel).
As the vendors disappeared, the foundries specializing in large ICs decreased too; some got out of the business by choice, some because their contracts weren't sufficient to remain competitive, but basically TSMC ended up ruling the pure-play foundry business, and ATI/AMD and Nvidia became hamstrung. That's what I was talking about earlier, when TSMC's 110 nm process got into difficulties. The previous 130 nm node had ATI and Nvidia launch at the same time (within a week of each other). 110 nm was delayed, which forced Nvidia to choose between IBM's 130 nm FSG process and TSMC's 130 nm low-k; Nvidia chose IBM for the FX 5700s, and ATI chose low-k for the 9600 XT as a stop-gap until 110 nm came onstream. Basically, as soon as TSMC slipped, Nvidia and ATI were scrambling, and it's been that way since early 2003.