Monday, September 15th 2014

NVIDIA GeForce GTX 980 and GTX 970 Pricing Revealed

Apparently, NVIDIA is convinced it has a pair of winners on its hands with its upcoming GeForce GTX 980 and GTX 970 graphics cards, and is preparing to price them steeply. The GeForce GTX 980 is expected to start at US $599, nearly the same price as the GeForce GTX 780 Ti. The GTX 970, on the other hand, will start at US $399, dangerously close to cannibalizing the GTX 780.

Across brands, the GTX 980 is launching at the same price AMD's Radeon R9 290X launched at, and the GTX 970 at that of the R9 290. AMD's cards have since settled down to $449 for the R9 290X and $350 for the R9 290. Both the GTX 980 and GTX 970 will be available in non-reference board designs, although reference-design GTX 980 cards will dominate day-one reviews. Based on the 28 nm GM204 silicon, the GTX 980 features 2,048 CUDA cores, 128 TMUs, and 64 ROPs, while the GTX 970 features 1,664 CUDA cores and 104 TMUs. Both feature a 256-bit wide memory interface holding 4 GB of GDDR5 memory.
Source: 3DCenter.org

71 Comments on NVIDIA GeForce GTX 980 and GTX 970 Pricing Revealed

#51
rtwjunkie
PC Gaming Enthusiast
Slizzo: As long as I'm able to get another 780 for around $300, I'll be happy.
Indeed! I'm awaiting the mad rush of 780s and 770s being dumped into for-sale threads and onto eBay as the masses can't wait to hand over their cash for a "two-generational-upgrade-so-it-must-be-good" video card.
#52
Slizzo
64K: Yeah, the $600 price tag on the GTX 980, if true, is too high. The only point I was trying to make is that the GTX 980 isn't a top-end card. It's a midrange Maxwell GPU that won't even have the benefit of greater efficiency from a die shrink. The top-end Maxwell will be GM210. If Nvidia were releasing a die-shrunk 250 W GM210, it would run all over a GTX 780 Ti (GK110). That won't come until sometime next year, though.
The same thing happened with the GTX 680, though. GK104 wasn't the top chip (that would have been GK100), but it still performed quite well and was quite an upgrade from GF110.
#53
GhostRyder
64K: Yeah, the $600 price tag on the GTX 980, if true, is too high. The only point I was trying to make is that the GTX 980 isn't a top-end card. It's a midrange Maxwell GPU that won't even have the benefit of greater efficiency from a die shrink. The top-end Maxwell will be GM210. If Nvidia were releasing a die-shrunk 250 W GM210, it would run all over a GTX 780 Ti (GK110). That won't come until sometime next year, though.
Could not agree more. I just wish for more performance and power in the top-tier-labeled GPUs, but I am still reserving my full judgment until everything is out in the open.
#54
ironwolf
Now an equally important question: how much of a price premium (read: price hike) will places like Newegg put on these cards in the first few days or weeks?
#55
64K
ironwolf: Now an equally important question: how much of a price premium (read: price hike) will places like Newegg put on these cards in the first few days or weeks?
Depends on supply and demand. If the reviews are favorable, I expect demand will be high.
#56
claes
Since when did anything sell at MSRP? Doesn't a $600 MSRP mean ~$500-$520 for reference cards at launch and $550 for aftermarket?
#57
GhostRyder
claes: Since when did anything sell at MSRP? Doesn't a $600 MSRP mean ~$500-$520 for reference cards at launch and $550 for aftermarket?
If Nvidia says the price of a card is $600, that is the reference model's price; aftermarket boards will then carry a markup, depending on what the company that built the cooler decides the overclock and new cooler should cost on top of the base price. Things are always subject to change, and of course this is still just a leak.
#58
HumanSmoke
N3M3515: +1
Remember when a new-generation video card was something like 70% more performance than the one it was replacing, at the same price? Good old times... (Add to that, no more than a year and a half passed between generations; right now they make a big show of saying "new generation" when in reality these are no more than miserable refreshes and incremental updates.)
Big improvements usually accompany a new process node plus a new architecture, and the reality is that this combination seldom eventuates. The other major point is that silicon gains become more incremental as process costs increase. You also won't see the vast improvements of the past, for the simple reason that there was much more to learn and implement back in the day; CPUs today, for instance, don't show the huge leaps of the 8086 -> 80286 era.
GK110 > GF100.........104% improvement
GF100 > GT200.........58.3% improvement
GT200 > G80.............38.2% improvement

GK104 > GF104.........104% improvement
GF104 > G92.............56.8% improvement

Hawaii > Tahiti.........33.3% improvement
Tahiti > Cayman.......38.9% improvement
Cayman > Cypress...16.3% improvement
Cypress > RV770......100% improvement

Pitcairn > Barts........51.5% improvement
Barts > Juniper.........66.7% improvement
Juniper > RV740......51.5% improvement
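For what it's worth, the arithmetic behind figures like these is just the relative change between average benchmark results. A minimal Python sketch, with made-up fps numbers standing in for real review data:

def improvement(new_fps, old_fps):
    # Relative gain of the newer GPU over the older one, in percent.
    return (new_fps / old_fps - 1.0) * 100.0

# Hypothetical averages: if a GK110 card averaged 81.6 fps where a GF100
# card averaged 40.0 fps, that is the "104% improvement" listed above.
print(round(improvement(81.6, 40.0), 1))  # -> 104.0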
#59
mcraygsx
Sony Xperia S: You will be very badly disappointed, since GM204 doesn't offer anything new performance-wise compared to GK110. What GK110 was capable of delivering will now merely be delivered at lower power requirements. And nothing more.

nvidia is a really very very shitty company.

the 970 should be $250 and the 980 not more than $400.

nothing to see here, folks, move on and forget it. meh :rolleyes:
What this person said!

They are asking a lot of money for a mid-tier product. Within the next six months they will be introducing GM210.
#60
Scrizz
The Von Matrices: The launch price of the R9 290X was $549, not $599.
I was getting ready to post the same thing.
#61
N3M3515
HumanSmoke: Big improvements usually accompany a new process node plus a new architecture, and the reality is that this combination seldom eventuates. The other major point is that silicon gains become more incremental as process costs increase. You also won't see the vast improvements of the past, for the simple reason that there was much more to learn and implement back in the day; CPUs today, for instance, don't show the huge leaps of the 8086 -> 80286 era.

GK110 > GF100.........104% improvement
GF100 > GT200.........58.3% improvement
GT200 > G80.............38.2% improvement

GK104 > GF104.........104% improvement
GF104 > G92.............56.8% improvement

Hawaii > Tahiti.........33.3% improvement
Tahiti > Cayman.......38.9% improvement
Cayman > Cypress...16.3% improvement
Cypress > RV770......100% improvement

Pitcairn > Barts........51.5% improvement
Barts > Juniper.........66.7% improvement
Juniper > RV740......51.5% improvement
You're right about node and architecture; things advance at a much slower pace nowadays...

About the chart, though...
The way I saw the NVIDIA part (maybe I'm crazy, I don't know):

GK110 (GTX 780) > GK104.........19% improvement - MSRP: $650 vs $500 - launched May 2013
GK104 (GTX 680) > GF110.........19% improvement - MSRP: $500 vs $500 - launched March 2012 (you could say that GK104 is a midrange part, and you're right, but to US customers it was sold at a high-end price, sorry.)
GF110 (GTX 580) > GF100.........11% improvement - MSRP: $500 vs $500 - launched November 2010
GF100 (GTX 480) > GT200.........49.2% improvement - MSRP: $500 vs $650 - launched March 2010
GT200 (GTX 280) > G80.............37% improvement - MSRP: $650 vs $520 - launched June 2008

Now Radeon

Hawaii (R9 290X) > Tahiti.........41.3% improvement - MSRP: $550 vs $550 - launched October 2013
Tahiti (HD 7970) > Cayman......29.8% improvement - MSRP: $550 vs $370 - launched December 2011
Cayman (HD 6970) > Cypress...13.4% improvement - MSRP: $370 vs $400 - launched December 2010
Cypress (HD 5870) > RV770.....49% improvement - MSRP: $400 vs $300 - launched September 2009
RV770 (HD 4870) > RV670.......57% improvement - MSRP: $300 vs $250 - launched June 2008
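One way to read that chart is to fold the MSRP into the gain, i.e. how performance per dollar moved between generations. A rough Python sketch using the approximate figures quoted above (the inputs are the chart's numbers, not fresh benchmarks):

def perf_per_dollar_change(improvement_pct, new_msrp, old_msrp):
    # Percent change in performance per dollar versus the outgoing card.
    perf_ratio = 1.0 + improvement_pct / 100.0
    price_ratio = new_msrp / old_msrp
    return (perf_ratio / price_ratio - 1.0) * 100.0

# GTX 680 vs GTX 580: +19% at the same $500 MSRP -> +19.0% perf per dollar
print(round(perf_per_dollar_change(19.0, 500, 500), 1))
# HD 7970 vs HD 6970: +29.8% but $550 vs $370 -> perf per dollar fell ~12.7%
print(round(perf_per_dollar_change(29.8, 550, 370), 1))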
#62
HumanSmoke
N3M3515: You're right about node and architecture; things advance at a much slower pace nowadays...

About the chart, though...
The way I saw the NVIDIA part (maybe I'm crazy, I don't know):

GK110 (GTX 780) > GK104.........19% improvement - MSRP: $650 vs $500 - launched May 2013
The GTX 780 isn't a full-die part. I also specified process node and architecture; your post that I quoted made no mention of pricing, which can be subjective depending upon availability, geographic distribution, and competition.
Even if you did compare a 20%-cut-down GK110 (GTX 780), the closest GK104 analogue is the OEM GTX 660 or the GTX 760 (1,152 of 1,536 shaders active).
N3M3515: GK104 (GTX 680) > GF110.........19% improvement - MSRP: $500 vs $500 - launched March 2012 (you could say that GK104 is a midrange part, and you're right, but to US customers it was sold at a high-end price, sorry.)
Regardless, I was talking about GPU hierarchy, not price segment.
N3M3515: GF110 (GTX 580) > GF100.........11% improvement - MSRP: $500 vs $500 - launched November 2010
GF100 (GTX 480) > GT200.........49.2% improvement - MSRP: $500 vs $650 - launched March 2010
The GTX 480 also isn't a full-die part.
N3M3515: Now Radeon

Hawaii (R9 290X) > Tahiti.........41.3% improvement - MSRP: $550 vs $550 - launched October 2013
Tahiti (HD 7970) > Cayman......29.8% improvement - MSRP: $550 vs $370 - launched December 2011
Cayman (HD 6970) > Cypress...13.4% improvement - MSRP: $370 vs $400 - launched December 2010
Cypress (HD 5870) > RV770.....49% improvement - MSRP: $400 vs $300 - launched September 2009
RV770 (HD 4870) > RV670.......57% improvement - MSRP: $300 vs $250 - launched June 2008
I used the highest resolution (2560x1440) for the percentages where possible; the overall figures include resolutions like 1024x768, hardly indicative of the GPUs being discussed. I also only looked at fully enabled GPUs. Salvage parts aren't indicative, since the degree to which they are cut down differs between architectures.
Nice use of colour - very vibrant.
#63
Casecutter
N3M3515: You're right about node and architecture; things advance at a much slower pace nowadays...

About the chart, though...
The way I saw the NVIDIA part (maybe I'm crazy, I don't know):
I see the argument of MSRPs for "like" replacements (segments) as very valid. Take the GTX 480: not a full part, but it was still the utmost replacement offered to the gaming market for the previous high end. You can say you have a higher-horsepower motor back at the garage, but you have to race what you brung.

While there's a part of this that isn't being taken into context: there are also the price increases both incur from TSMC, like the rumored 20-25% increase enacted on 28 nm wafers. Sure, it's moot in the pricing between the two of them, but it is still a factor passed on to us, so the escalation in pricing isn't just AMD/Nvidia.

Another piece that needs consideration: up until Kepler, Nvidia purchased individually only the chips that achieved their requirements. Whether or how that helped or hindered, I can't say. Now they buy whole wafers and harvest many more variants, as was the case with GK104. I'm not saying there's anything wrong with it... it's just a different business structure, and one would figure it has enhanced the margins they can extract from each wafer. It's also why the GTX 680 price was maintained even though it was the midrange part and 28 nm costs went up.
#64
The Von Matrices
Casecutter: Another piece that needs consideration: up until Kepler, Nvidia purchased individually only the chips that achieved their requirements. Whether or how that helped or hindered, I can't say. Now they buy whole wafers and harvest many more variants, as was the case with GK104. I'm not saying there's anything wrong with it... it's just a different business structure, and one would figure it has enhanced the margins they can extract from each wafer. It's also why the GTX 680 price was maintained even though it was the midrange part and 28 nm costs went up.
Remember that this pricing structure also disincentivizes NVidia from being an early adopter of a new process node (with its higher defect rate), since NVidia now directly pays for all the defects; that is one of the reasons we have a 28 nm GM204.
#65
Casecutter
The Von Matrices: Remember that this pricing structure also disincentivizes NVidia from being an early adopter of a new process node (with its higher defect rate), since NVidia now directly pays for all the defects; that is one of the reasons we have a 28 nm GM204.
I might be missing your point.
I thought that under the old arrangement Nvidia didn't pay for parts that couldn't meet the specification? They went later on shrinks because TSMC would work through "risk production" before getting Nvidia's stuff going?

The RV670 (HD 3870) was 55 nm in Nov 2007; wasn't the G92+ of the 9800 GTX+ the first 55 nm Nvidia part, released July 2008?
The RV740 (pipe cleaner) of the HD 4770 was 40 nm in May 2009, and even the HD 5870 (RV870) was out in Sept 2009, while wasn't the GTX 480 Nvidia's first big-chip iteration on 40 nm, in March 2010?
And on 28 nm, the HD 7970 was out a good three months before the GTX 680 (both being afflicted by what I understood as TSMC teething pains). That was the first time Nvidia took the reins to test and bin each chip itself.

I could be mistaken on those releases, and others may show different GPUs (professional) or dates, but for gamers this is the trend I recall.
#66
GhostRyder
Casecutter: I might be missing your point.
I thought that under the old arrangement Nvidia didn't pay for parts that couldn't meet the specification? They went later on shrinks because TSMC would work through "risk production" before getting Nvidia's stuff going?

The RV670 (HD 3870) was 55 nm in Nov 2007; wasn't the G92+ of the 9800 GTX+ the first 55 nm Nvidia part, released July 2008?
The RV740 of the HD 4770 was 40 nm in May 2009, and even the HD 5870 (RV870) was out in Sept 2009, while wasn't the GTX 480 Nvidia's first big-chip iteration on 40 nm, in March 2010?
And on 28 nm, the HD 7970 was out a good three months before the GTX 680 (both being afflicted by what I understood as TSMC teething pains). That was the first time Nvidia took the reins to test and bin each chip itself.

I could be mistaken on those releases, and others may show different GPUs or dates, but for gamers this is the trend I recall.
Nvidia and AMD do not, to an extent, pay for parts that fail to meet the full quality standards; that is why we sometimes end up with cut-down variants of chips, and why some cards have so many variants using the same part (GK110, for instance). Normally a chip receives rigorous testing by AMD or Nvidia to see whether it meets a given set of requirements, and depending on the result it becomes the higher part or moves down the line. If it fails the first set of requirements, it is tested against the next set, and the process repeats until the chip finds its slot. It's a rinse-and-repeat cycle, which is why we end up with cards like the R9 290 and GTX 780.

There are exceptions to that rule: at times chips are simply turned into lower parts, or checked against only the lower requirements, because demand for the lower part is so high. An example is the early R9 290, which had variants that were neither laser-cut nor contained a bad SMX, and so could later be unlocked to the full core; I believe that was done because people were buying the 290 like hot cakes, more so than the 290X, and while yields may not have been as good, they ended up selling a lot more cards. That is a corner case, of course, not something that happens every generation, but the HD 6950 had a similar story, if you will.
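The cascade described above (test a die against the top SKU's requirements, and on failure retest it against the next tier down) can be sketched as a simple loop. The bin names and thresholds below are invented for illustration, not NVIDIA's or AMD's actual criteria:

# Toy model of a binning cascade: a die lands in the first (highest)
# bin whose requirements it meets. All names and numbers are invented.
BINS = [
    ("flagship", 16, 1126),   # (bin, min working units, min stable MHz)
    ("cut-down", 13, 1050),
    ("salvage",  10, 900),
]

def bin_die(working_units, stable_mhz):
    for name, min_units, min_mhz in BINS:
        if working_units >= min_units and stable_mhz >= min_mhz:
            return name
    return "scrap"

print(bin_die(16, 1200))  # -> flagship
print(bin_die(14, 1100))  # -> cut-down: fails the top bin, passes the next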

There is an article somewhere that refers to this; let me see if I can find it...
#67
HumanSmoke
Casecutter: I thought that under the old arrangement Nvidia didn't pay for parts that couldn't meet the specification? They went later on shrinks because TSMC would work through "risk production" before getting Nvidia's stuff going?
Pretty much correct. This applied in the 80 nm to 40 nm era, when ATI (AMD) led on process. Most of that stemmed from the fact that Nvidia used to lead on process before then, but when TSMC screwed up its 110 nm process, Nvidia had to jump onto IBM's 130 nm FSG, while ATI persevered with TSMC's existing 130 nm Lo-K and transitioned to 110 nm when it was fixed (R430 I think, so maybe early-to-mid 2004), so ATI had a lead of a couple of months or so on Nvidia. Nvidia has taken a more cautious approach with TSMC since then, although both companies had 28 nm wafer starts in the same time frame.
#68
Casecutter
This got much more long-winded than I originally intended... I could be off in left field, but here's what I see; straighten me out as needed.

They both purchase wafers at a basically set price, no matter the number of parts or their complexity. Traditionally there's a predetermined yield percentage for tier-1 and tier-2 parts once risk production is satisfied. There are cost allocations (they may not pay full price, depending on the problems) for at-risk wafers, but once projected production yields show the wafer will hold to "X" percentages, things start. There's the unwritten objective that more tier-1 and tier-2 parts should be "harvested" as production matures. Parts outside the agreed yields aren't so much "defects" as parts that can be salvaged/cut back/fused off to produce other, lower variants.

I believe that early in the old deal TSMC would go to Nvidia and show they had found "tier 3/4" chips in volumes that could be used for a GSO or some other variant. Nvidia wasn't contractually obliged to take them, but given the margins paid for tier-1 and tier-2 parts, it was very lucrative to find homes for them. However, with each shrink the sorting was less worthwhile for TSMC, and Nvidia was compelled to make use of remnants or see its part costs skyrocket. With Fermi, I see Nvidia embracing a thoughtful architectural layout to give itself even more flexibility beyond the traditional two tiers, working toward the imminent day when the arrangement with TSMC would no longer be flexible enough. It had long since morphed into basically how AMD operates, and my synopsis remains: basically just two tiers. Don't get me wrong, the old arrangement was shrewd and aided Nvidia for a good many years, but TSMC no longer wanted to sort chips, and Nvidia had long recognized it needed to change.

Now this is where Nvidia gets accolades. When transitioning to 28 nm (a point at which TSMC not only raised prices but established true parity in wafer cost for both companies), they smashed the "just two tiers" yield concept with Kepler, fully embracing the notion of multiple specs that can be discerned from a wafer. I think Nvidia also worked very studiously on developing apparatus (machines) that rapidly test, sort, and bin in one quick operation. The long-established method was (is) to segregate tiers 1 and 2, set the rest aside, and come back later to see what the remnants might give; that is not what Nvidia does now. Nvidia, I believe, not only designs the variants in, but more importantly identifies almost immediately the various iterations it could release, recognizing when a particular spec can be offered in volume and priced to slot into the market. They learn their potential product stack much earlier, meaning they can be forward-thinking from the first good wafers rather than reactionary weeks later.
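The per-wafer arithmetic behind that harvesting argument is easy to sketch. All yields and prices below are invented, purely to show why selling the remnants lifts the return once the wafer is already paid for:

# Toy per-wafer economics: a fixed wafer price, recovered across whatever
# mix of bins the dies fall into. Every number here is invented.
WAFER_COST = 5000.0
DIES_PER_WAFER = 200

tiers = [                 # (fraction of dies, selling price per die)
    (0.30, 90.0),         # tier 1: fully enabled
    (0.35, 60.0),         # tier 2: one unit fused off
    (0.20, 35.0),         # tier 3: remnants salvaged as a lower variant
    (0.15, 0.0),          # unusable
]

revenue = sum(frac * DIES_PER_WAFER * price for frac, price in tiers)
print(revenue - WAFER_COST)  # -> 6000.0 over the wafer cost
# Scrapping tier 3 instead of selling it would cut revenue by 1400.0
# while the wafer cost stays fixed: harvesting more tiers is pure upside.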
#69
HumanSmoke
^^^^ That is part of the scenario that played out, for sure. The other part is that when GPUs first started out, there were a lot of vendors and a lot of pure-play foundry companies catering to them. TSMC's growth, allied with the ever-decreasing range of vendors, basically cut the options of both Nvidia and ATI.
Back in the day (we're talking 800 nm to 250 nm here), TSMC had competition, lots of it: ST Micro (Nvidia, 3DLabs, ATI, VideoLogic/PowerVR), LSI (Number Nine), LG/Toshiba (Chromatic Research), UMC (S3/XGI, ATI, Matrox, Nvidia), Mitsubishi (Rendition), NEC (Matrox, VideoLogic/PowerVR), IBM (3DLabs, Nvidia), UICC (Trident), MiCRUS (Cirrus Logic), Fujitsu (S3), SMC (SiS), and Texas Instruments (Chromatic, 3DLabs). Those are the main ones I remember, with the GPU vendors usually associated with them, but there were quite a few others (Lockheed-Martin, I think, only produced for Intel).

As the vendors disappeared, the foundries specializing in large ICs decreased too; some got out of the business by choice, some because their contracts weren't sufficient to remain competitive, but basically TSMC ended up ruling the pure-play foundry business, and ATI/AMD and Nvidia became hamstrung. That's what I was talking about earlier with TSMC's 110 nm difficulties. On the previous 130 nm node, ATI and Nvidia launched at the same time (within a week of each other); 110 nm was delayed, forcing a choice between IBM's 130 nm FSG process and TSMC's 130 nm Lo-K. Nvidia chose IBM for the FX 5700s, and ATI chose Lo-K for the 9600 XT as a stop-gap until 110 nm came onstream. Basically, as soon as TSMC slipped, Nvidia and ATI were scrambling, and it's been that way since early 2003.
#70
Casecutter
HumanSmoke: ^^^^ Basically, as soon as TSMC slipped, Nvidia and ATI were scrambling, and it's been that way since early 2003.
This is why I had so hoped, almost blindly, that AMD had gone to GloFo with Tonga. If that part had been produced by GloFo and come out like that, I could have gotten behind it more, if for no other reason than it could perhaps have been the first true change in the market, something I know I'm waiting for. But there's tomorrow... :)
#71
Slizzo
Casecutter: This is why I had so hoped, almost blindly, that AMD had gone to GloFo with Tonga. If that part had been produced by GloFo and come out like that, I could have gotten behind it more, if for no other reason than it could perhaps have been the first true change in the market, something I know I'm waiting for. But there's tomorrow... :)
The problem with GlobalFoundries is that they're consistently behind the curve in process technology, and AMD can ill afford to have its GPUs wait for GloFo to catch up on smaller processes while nVidia soaks up TSMC's capacity at each node. AMD's CPU business is getting beaten up because of this as well; Intel can afford to push process technology at an alarming pace, while GloFo just doesn't have the cash to sink into that kind of development cycle.