# GTX 970 Memory Drama: Plot Thickens, NVIDIA has to Revise Specs



## btarunr (Jan 27, 2015)

It looks like NVIDIA's first response to the GeForce GTX 970 memory allocation controversy came from engineers who were pulled out of their weekend plans, and was hence too ambiguously technical (even for us). It was only on Monday that NVIDIA PR swung into action, offering a more user-friendly explanation of what the GTX 970 issue is, and how exactly the company carved up the GM204 when creating the card.

According to an AnandTech report, which cites that simpler explanation from NVIDIA, the company was not truthful about the specs of the GTX 970 at launch. For example, the non-public document NVIDIA gave out to reviewers (which lists detailed tech specs) clearly stated the ROP count of the GTX 970 as 64. Reviewers used that count in their reviews. TechPowerUp GPU-Z shows the ROP count as reported by the driver, but it has no way of telling how many of those "enabled" ROPs are actually "active." The media reviewing the card were hence led to believe that the GTX 970 was carved out by simply disabling three of the sixteen streaming multiprocessors (SMMs), the basic indivisible subunits of the GM204 chip, with no mention of other components, such as the ROP count and L2 cache amount, being changed from the GTX 980 (a full-fledged implementation of this silicon).

NVIDIA explained to AnandTech that there was a communication gap between the engineers (the people who designed the GTX 970 ASIC) and the technical marketing team (the people who write the Reviewer's Guide document and draw the block diagram). The marketing team was unaware that with "Maxwell," you could segment components previously thought indivisible, or "partially disable" components.

It turns out that in addition to three SMM units being disabled (resulting in 1,664 CUDA cores), NVIDIA reduced the L2 cache (last-level cache) on this chip to 1.75 MB, down from 2 MB, and also disabled a few ROPs. The ROP count is effectively 56, not 64. The last 8 ROPs aren't "disabled"; they're active, but not used, because their connection to the crossbar is too slow (we'll get to that in a bit). The L2 cache is a key component of the "crossbar." Think of the crossbar as a town square for the GPU, where the various components of the GPU talk to each other by leaving and picking up data labeled with "from" and "to" addresses. The crossbar routes data between the four Graphics Processing Clusters (GPCs) and the eight memory controllers of 32-bit bus width each (which together make up the 256-bit wide memory interface), and is cushioned by the L2 cache.
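
As a sanity check, the cut-down figures above are self-consistent. Here is a quick back-of-the-envelope sketch; the unit counts are taken from the article, and the script itself is purely illustrative:

```python
# Derive the GTX 970's active unit counts from the full GM204 (GTX 980)
# configuration, using the figures quoted in the article.

SMM_TOTAL = 16            # streaming multiprocessors on the full chip
CORES_PER_SMM = 128       # CUDA cores per SMM
SMM_DISABLED = 3          # SMMs fused off on the GTX 970

cuda_cores = (SMM_TOTAL - SMM_DISABLED) * CORES_PER_SMM

L2_TOTAL_MB = 2.0         # the full chip carries 2 MB of L2
L2_SEGMENTS = 8           # one L2 slice per 32-bit memory controller
l2_active_mb = L2_TOTAL_MB / L2_SEGMENTS * (L2_SEGMENTS - 1)

ROPS_TOTAL = 64
ROPS_PER_SEGMENT = 8      # the ROP partition tied to the disabled L2 slice

rops_effective = ROPS_TOTAL - ROPS_PER_SEGMENT

print(cuda_cores, l2_active_mb, rops_effective)  # 1664 1.75 56
```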

The L2 cache itself is segmented; it isn't a monolithic slab of SRAM. Each of the eight memory controllers on the GM204 is ideally tied to its own segment of the L2 cache, and to a segment of ROPs. NVIDIA reduced the L2 cache amount by disabling one such segment; the memory controller belonging to it is instead rerouted to the cache segment of a neighbouring memory controller, so that controller's access to the crossbar is slower. To make sure the interleaving of these memory controllers (which adds up to the big memory figure the driver can address) isn't disrupted, NVIDIA partitioned the 4 GB of memory into two segments. The first is 3.5 GB large, and is made up of memory controllers with access to their own segments of the L2; the second segment is 512 MB in size, and is tied to the memory controller that is rerouted.

The way this partitioning works, the 3.5 GB partition can't be read while the 512 MB one is being read. The GPU is either addressing the 3.5 GB segment or the 512 MB one, never both at once. An app that's actively using the entire 4 GB of memory will therefore see a drop in performance, because the two segments can't be read at the same time.
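
A toy model of that either/or addressing, assuming the 3.5 GB + 0.5 GB split described above (the segment names and the lookup helper are hypothetical, purely for illustration; this is not how the driver actually works):

```python
# Toy model of the GTX 970's segmented 4 GB address space.

GIB = 1024 ** 3
FAST_BYTES = int(3.5 * GIB)   # backed by controllers with their own L2/crossbar ports
TOTAL_BYTES = 4 * GIB         # full aperture advertised to the driver

def segment_for(addr):
    """Return which partition a VRAM byte address falls into."""
    if not 0 <= addr < TOTAL_BYTES:
        raise ValueError("address outside the 4 GB aperture")
    return "fast" if addr < FAST_BYTES else "slow"

# A workload that stays within the first 3.5 GB never touches the slow
# partition; one that spills past it forces the GPU to alternate segments.
print(segment_for(1 * GIB))            # fast
print(segment_for(TOTAL_BYTES - 1))    # slow
```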

While it's technically correct that the GTX 970 has a 256-bit wide memory interface, which at its 7.00 GHz (GDDR5-effective) memory clock translates to 224 GB/s of bandwidth on paper, not all of that memory is uniformly fast. 3.5 GB of it has normal access to the crossbar (the town square of the GPU), and 512 MB of it has slower access. The 3.5 GB segment therefore really has just 196 GB/s of memory bandwidth (7.00 GHz x 7 crossbar paths x 32-bit width per controller); that much can be said with certainty. We can't tell how fast the 512 MB second segment really is, nor how it affects the performance of the memory controller whose crossbar port it's borrowing when the card uses its full 4 GB. But it's impossible for the second segment to make up the remaining 28 GB/s (of the 224 GB/s), since NVIDIA itself says this segment runs slower. Therefore, NVIDIA's claim that the GTX 970's memory bandwidth is 224 GB/s at reference clocks is inaccurate.
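
The bandwidth arithmetic works out as follows (clock and bus widths are the ones quoted above; the 7-of-8-channels split is the article's reasoning, sketched here for clarity):

```python
# Paper-spec bandwidth vs. what the 3.5 GB segment can certainly deliver.

EFFECTIVE_CLOCK = 7.0e9   # GDDR5-effective data rate, transfers/s
CHANNEL_BITS = 32         # bus width per memory controller
CHANNELS_TOTAL = 8        # 8 x 32-bit = 256-bit interface
CHANNELS_FAST = 7         # controllers with their own crossbar port

# bits/s -> bytes/s (divide by 8), then -> GB/s (divide by 1e9)
paper_gbps = EFFECTIVE_CLOCK * CHANNELS_TOTAL * CHANNEL_BITS / 8 / 1e9
fast_gbps = EFFECTIVE_CLOCK * CHANNELS_FAST * CHANNEL_BITS / 8 / 1e9

print(paper_gbps, fast_gbps)  # 224.0 196.0
```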

Why NVIDIA chose to reduce the cache size and ROP count will remain a mystery. We can't imagine that the people designing the chip would not have sufficiently communicated this to the driver and technical marketing teams. The claim that technical marketing didn't get this the first time around seems like a hard sell. We're pretty sure that NVIDIA engineers read reviews, and if they saw "64 ROPs" on a first-page table, they would have reported it up the food-chain at NVIDIA. An explanation of this hardware change should have taken up an entire page in the technical documents the first time around, and NVIDIA could have saved itself a lot of explaining, much of it through the press.

*View at TechPowerUp Main Site*


----------



## RCoon (Jan 27, 2015)

All the benchmarks in all the reviews are still accurate of course, so everything about how it performs in games at various resolutions is still true.

But NVidia has basically lied about hardware specifications. I don't believe for a second this was all one big mistake of somebody not saying to marketing that the card did not in fact have 64 ROPs and 224GB/s bandwidth. By all accounts it's pretty crappy business practice, and they should be punished accordingly.

That being said, I still like my 3.5GB 970 for the price I got it at.


----------



## Selene (Jan 27, 2015)

RCoon said:


> All the benchmarks in all the reviews are still accurate of course, so everything about how it performs in games at various resolutions is still true.
> 
> But NVidia has basically lied about hardware specifications. I don't believe for a second this was all one big mistake of somebody not saying to marketing that the card did not in fact have 64 ROPs and 224GB/s bandwidth. By all accounts it's pretty crappy business practice, and they should be punished accordingly.
> 
> That being said. I still like my 3.5GB 970 for the price I got it at.




This is no different from the dual-GPU cards: they physically have double the memory, but only half is usable. This changes nothing; the card does have 4 GB, and as you said, all the benchmarks are still the same. I don't agree with them doing this and not telling people, but if you got the card based on reviews and benchmarks, you got what you paid for.

The truth is, once the card gets to a resolution where 4 GB would even be worth having, the GPU can't handle it, and it would make maybe a 1-2 fps difference at best. It's been shown time and time again that a 256-bit bus can really only handle 2 GB.


----------



## 64K (Jan 27, 2015)

Good points, especially this one



btarunr said:


> We're pretty sure that NVIDIA engineers read reviews, and if they saw "64 ROPs" on a first-page table, they would have reported it up the food-chain at NVIDIA.



Nvidia needs to do something to make this right with people who already bought a GTX 970 before the truth came out. I haven't run into any problems with my 970 but I would like a partial refund. Maybe a $50 Newegg gift card.

If this customer backlash gains traction, it could result in a class action lawsuit. I'm certain that with a market cap of $11 billion, it would attract a top-gun law firm to handle it.


----------



## xkche (Jan 27, 2015)

because 4GB is more fun than 3.5GB


----------



## v12dock (Jan 27, 2015)

Class action lawsuit?


----------



## 64K (Jan 27, 2015)

v12dock said:


> Class action lawsuit?



Possibly. Here in the USA we sue each other for anything and everything. It would be far cheaper for Nvidia to issue a partial refund.


----------



## DeathtoGnomes (Jan 27, 2015)

Sounds like NVIDIA might have been planning future sales of "new cards" as a rebrand of this card, by unlocking more "stuff" later on. They just got caught doing it; some might call it cheating.


----------






## pr0fessor (Jan 27, 2015)

As if 4 GB is needed for today's games and 3.5 GB is not enough. Most customers won't notice any difference. Still, it leaves a dirty feeling about NVIDIA hardware. Quality is something else.


----------



## ShurikN (Jan 27, 2015)

"The way it's meant to be played"


----------



## Ferrum Master (Jan 27, 2015)

A false ad is a false ad. It is not as it should be; that's it, it is a lie.

It isn't a question of speed, but of common sense. NVIDIA played foul.

For example, when AMD had the TLB bug on the K10, they force-fixed it via a kernel patch for everyone, even though it caused BSODs only in a few specialized tasks. They played clean.

Intel also plays clean: errata documents are available on Intel's site describing what each stepping corrected for each CPU, so kernels are patched, aware, and disable the many broken features, mostly virtualization sub-features. All consumer semiconductor makers do this with their device data sheets; it's been that way since the 1950s.

This time it is more than fishy. They intentionally made such an obscure design; they could save more on that one single memory chip, as it really gives a 2-5% max performance delta, as they say. Doing that just for marketing and for the sake of a round 4 GB number? (A noob user actually thinks that's the main criterion, OK, yes.) And spoofing the ROP count, just why? (A noob user doesn't know what it is.) Gosh, this is low... I'm actually a bit disappointed now at having had NVIDIA cards in the past.

Although... everyone remembers the FX series blooper with the broken DX9c shaders. They acted much the same back then; was there a recall for those?

Well, I guess it will bring AMD more users, and they need the dough really badly in order to stay alive and maintain the competition. The green camp is getting funny ideas, and their marketing team is smoking way too much green stuff.


----------



## bpgt64 (Jan 27, 2015)

pr0fessor said:


> As if 4 GB is needed for today's games and 3.5 GB is not enough. Most customers won't notice any difference. Still, it leaves a dirty feeling about NVIDIA hardware. Quality is something else.



You are correct, it's likely not needed or required. However, it is still a lie, or at worst a negligent mistake. The 970 represented the possibility of a very playable experience at 4K@60 fps (with AA and such turned off) for $600, without needing to invest $1k+. So I would assume people who bought two of these were planning on either that resolution or 1440p @ 120 fps (arguably just as demanding). They saw this as an opportunity to achieve that, and that's where the RAM is relevant.

Ironically, I downgraded from a pair of Titans to a pair of 980s (side grade? iono). For a simple reason: Shadowplay works at 4K@60 fps. It does not with AMD, and it is not possible using Fraps, Dxtory, or the like (see my rig: 4x256 GB SSDs in RAID 0 on an LSI 9266-4i). I can record and upload 4K videos now that look good.


----------



## GhostRyder (Jan 27, 2015)

64K said:


> Possibly. Here in the USA we sue each other for anything and everything. It would be far cheaper for Nvidia to issue a partial refund.


There is a high chance that will happen, actually. Knowing how things have gone in the past, and how anyone is willing to do anything for some publicity and a buck, this would not surprise me at all. It's been too long since the launch of the cards for this explanation (the "whoops, there was a miscommunication" line) to fly at this point, and they are likely going to feel backlash from it.



pr0fessor said:


> As if 4 GB is needed for today's games and 3.5 GB is not enough. Most customers won't notice any difference. Still, it leaves a dirty feeling about NVIDIA hardware. Quality is something else.


3.5 GB is plenty for most scenarios; however, some people bought this card for extreme resolutions, where the 4 GB would be helpful in the future. I know at least one person who bought three, intending a 4K rig on a bit of a lower budget ($900 for three cards versus $1,100 for two 980's), and this is something that might have changed his mind and caused him to upgrade or look at the alternatives (actually, I have not heard from him yet; if he doesn't know about it, I'll have to ask him at the next LAN party).

While I doubt many people here or anywhere were concerned with the ROP count, the L2 cache, or such things, it is still not right to lie to your customers. Performance has not changed, and the numbers seen before still stand; however, the 3.5 GB is the most concerning part to those in the extreme areas of gaming, and it still could have affected a small number of users' decisions (I am being conservative with that). Even if just 5% would have changed their minds based on this information, that is 5% of the people who purchased the card who feel ripped off in some way (random number, not an actual figure). I don't find the way they are handling this smart, nor the way it started out to begin with.


----------



## Ferrum Master (Jan 27, 2015)

bpgt64 said:


> where the ram is relevant.



The problem is with reasonable buyers who bought the card to be future-proof, taking the VRAM amount into their reasoning. And games tend to eat more VRAM lately... if you play old ones (except Skyrim) it's OK, but those who bought a 970 won't just play CS:GO. It would be a shame if, after six months, The Witcher 3 and GTA V bring this card to its knees and a new card is needed again... but hey, that was the plan.


----------



## MAXLD (Jan 27, 2015)

It's not super hard to believe that a marketing mistake was made initially when giving info to reviewers and so on (even if it's reported around the web that those marketing dudes have deep knowledge of GPU tech).

What is very hard to believe is that after months, and hundreds of reviews and articles, nobody at NVIDIA noticed the errors in so many reviews until now. (Which, if it were the case, would prove they just don't give a damn what the reviews say, as long as they give a nice final grade and pro/con appraisal.)

That said, it might mean they did notice the info/marketing "mistake" but didn't say anything until now, because the card was working as intended anyway, getting mega hype and big sales, and pointing out the mistake would actually have been a marketing "downgrade," since the card was doing so well and AMD has no new response until Q2. So they just kept quiet; since this was detected only under specific games and settings, and reviewers didn't catch the info issue, they just shrugged their shoulders... hoping that the user reports being made would be considered inaccurate, or just not relevant enough to make a fuss about. A gamble that failed completely.


----------



## the54thvoid (Jan 27, 2015)

Makes me happy I opted out. I was looking at going 4K, still am, and SLI 970's looked like a good option.
I'd have bought the cards for the potential 4GB memory. If my experience could have been marred by this, in scenarios where 4K used over 3.5GB, I would be angry.
But I read reviews, and SLI 970's seemed weaker than other options. I stayed with my 780 Ti and SLI'd that instead.
This is very poor PR for NV. Kind of impossible to defend the lie. Great card, but it needs to be formally rebranded as 3.5GB.


----------



## matar (Jan 27, 2015)

Two GTX 970 OEMs were on my shopping list for next month, mainly the OEM because I love the OEM cooler and how it looks just like the 980. But now that I've read all this, I will wait for the GTX 970 Ti editions, 20 nm or whatever they will be called.


----------



## RCoon (Jan 27, 2015)

Just to let you guys know, retailers and AIB partners (Gigabyte, Asus, MSI) are not accepting returns over this problem at this time. I presume they will be in active communication with NVidia first, before we get a response on where to go from here.


----------



## yogurt_21 (Jan 27, 2015)

So they're pretty much saying the 970, as paper-spec'd, would have been within a few percentage points of the 980. Seriously, if it's as fast as it is with 56 ROPs and less L2 than we thought, full memory specs and 64 ROPs would further close the 10-12% performance gap between the two cards. That puts pressure on 980 sales and further distances the 970 from the 960, which was already a massive gap to begin with.

False advertising aside, they had to neuter it. Next time, though, a little heads-up will save them a lot of PR crap.


----------



## Beertintedgoggles (Jan 27, 2015)

I still call BS on this... from the front-page news yesterday, NVIDIA claimed that both the 980 and the 970 suffer slowdowns over 3.5 GB of VRAM usage. Today they are claiming that this "issue" was created by the way they disabled some of the components to create the 970 line. Something still doesn't add up here.

Edit: After checking the article from yesterday, the table included showed that the performance hit from running <3.5 GB and >3.5 GB was almost identical on both the 980 and the 970. If that is true, then someone is still lying.


----------



## looniam (Jan 27, 2015)

"Why NVIDIA chose to reduce cache size and ROP count will remain a mystery."

idk, it seemed the TR, PCPer, and esp. the AnandTech articles made it quite clear. though i do seem to have a talent for solving murder mysteries within the first chapter of the book.

" We can't imagine that the people designing the chip will not have sufficiently communicated this to the driver and technical marketing teams."

do you think they go out and have after-work drinks? i'd be surprised if they're in the same building, let alone on the same floor. in a perfect world all departments communicate well w/each other. however, in the real world that is lacking.

"To claim that technical marketing didn't get this the first time around, seems like a hard-sell. We're pretty sure that NVIDIA engineers read reviews, and if they saw "64 ROPs" on a first-page table, they would have reported it up the food-chain at NVIDIA."

word on the street is the engineers were too busy watching kitty cat videos while eating cheetos.

"An explanation about this hardware change should have taken up an entire page in the technical documents the first time around, and NVIDIA could have saved itself a lot of explanation, much of it through the press."

yeah, and i am surprised that technology journalists who have reported for years didn't see the asymmetrical design either. hopefully they will learn from nvidia's mistake as well.


edit: oh yeah HI, i am new


----------



## Ja.KooLit (Jan 27, 2015)

so NVIDIA is in trouble? I wonder what the AMD camp is thinking about this.

I mean, if nobody had found out about this "anomaly", NVIDIA would not have said anything. That's for sure. I mean, it's been months since the 970 came out, and they only found out now?

And the response was a communication gap between NVIDIA departments? For sure they locked these parts on purpose, so when they unlock them, they can call it the 970 Ti. Look at the 780 and 780 Ti. The 780 Ti was just an unlocked 780, am I right?


----------



## ironwolf (Jan 27, 2015)

Any word from the AMD camp over this?  I'd be curious if they might try to pull some PR stuff using this.  Or if they will just keep their traps shut for the time being.


----------



## RejZoR (Jan 27, 2015)

Selene said:


> This is no different from the dual-GPU cards: they physically have double the memory, but only half is usable. This changes nothing; the card does have 4 GB, and as you said, all the benchmarks are still the same. I don't agree with them doing this and not telling people, but if you got the card based on reviews and benchmarks, you got what you paid for.
> 
> The truth is, once the card gets to a resolution where 4 GB would even be worth having, the GPU can't handle it, and it would make maybe a 1-2 fps difference at best. It's been shown time and time again that a 256-bit bus can really only handle 2 GB.



SLI/CrossFire actually has a legit explanation behind it. When you merge two or more cards, they still have to process the same frames, either alternating or via some other method, on all of them. The fact is, you effectively only have as much memory as each individual card has.

In theory, they could merge the memory pool and share it over PCIe, but I don't think the OS really supports that, and GPUs aren't at the level where shaders could cooperate between GPUs seamlessly, in a way where you could just stack things up together.


----------



## Ferrum Master (Jan 27, 2015)

RejZoR said:


> In theory, they could merge the memory pool and share it through PCIe



Nada, too much latency for high FPS rates and frame-time costs... we are already arguing about stutter when the same GPU accesses its other memory partition via the crossbar; imagine the mess if a second GPU wanted to access unified pool data over PCIe and back to the second card.


----------



## Sasqui (Jan 27, 2015)

v12dock said:


> Class action lawsuit?



No doubt there are a ton of lawyers working on this one, looking at the EULA to see if there's a clause that states you can't sue NV if you install the drivers (half joking).

Refund calcs (from previous thread)... 3.5 vs 4.0, I say a 13% refund, lol


----------



## RejZoR (Jan 27, 2015)

Ferrum Master said:


> Nada, too much latency for high FPS rate and frame time costs... we are arguing about stutter when same gpu accesses via crossbar the other memory partition of itself, it would be a mess if a second GPU wan't to access unified pool data via PCIE and back to second card?



That's why I said "in theory". But we'd have to replace sluggish PCIe with something like fiber optics to achieve that. And even then it's questionable.


----------



## Ikaruga (Jan 27, 2015)

RCoon said:


> But NVidia has basically lied about hardware specifications. I don't believe for a second this was all one big mistake of somebody not saying to marketing that the card did not in fact have 64 ROPs and 224GB/s bandwidth.



I have to disagree. I can't imagine a single reason why an engineer would claim 64 ROPs when there are only 56. These monster companies always have dedicated teams for communications with the outside world (press, developers, retailers, etc.). The only thing I can imagine is that somebody in that department failed big time (regardless of whether it was deliberate or just a stupid mistake). I can't see why company management would lie about the 970; if they needed to lie about something to get more sales, they would lie about the flagship product, imo.
The engineers probably had a lot of 980s with bad yields from production, and they just laser-cut them into 970s. It has been the practice for many generations, and they never lied about it before. Why would they start now? Seriously, why?


----------



## btarunr (Jan 27, 2015)

Sasqui said:


> No doubt there are a ton of lawyers working on this one, looking at the EULA to see if there's a clause that states you can't sue NV if you install the drivers (half joking).
> 
> Refund calcs (from previous thread)... 3.5 vs 4.0, I say a 13% refund, lol



Check out our Facebook page.


----------



## Ferrum Master (Jan 27, 2015)

RejZoR said:


> That's why I said "in theory". But we'd have to replace sluggish PCIe with something like fiber optics or something to achieve that. And even then it's questionable.



Actually, no... they'd just need a proper old-school northbridge. It could be done on dual-GPU single-PCB cards: they have a PLX chip now; it just needs a memory controller, with the PLX wired to the RAM chips on the same board. Imho it wouldn't even need a special driver, and if the bus width is wide enough it could actually work without rewriting drivers.

On a classic motherboard... without some revolution, i.e. a proprietary connector to the motherboard, nope.


----------



## NightOfChrist (Jan 27, 2015)

RCoon said:


> All the benchmarks in all the reviews are still accurate of course, so everything about how it performs in games at various resolutions is still true.
> 
> But NVidia has basically lied about hardware specifications. I don't believe for a second this was all one big mistake of somebody not saying to marketing that the card did not in fact have 64 ROPs and 224GB/s bandwidth. By all accounts it's pretty crappy business practice, and they should be punished accordingly.
> 
> That being said. I still like my 3.5GB 970 for the price I got it at.


Agreed. It is still a great card. The segmented VRAM and how it performs surprised me, but it does not change the fact that it is a great card for ultra 1080p gaming.



Ferrum Master said:


> The problem is with reasonable buyers who bought the card to be future-proof, taking the VRAM amount into their reasoning. And games tend to eat more VRAM lately... if you play old ones (except Skyrim) it's OK, but those who bought a 970 won't just play CS:GO. It would be a shame if, after six months, The Witcher 3 and GTA V bring this card to its knees and a new card is needed again... but hey, that was the plan.


NVIDIA made a mistake with the design, intentionally or otherwise, so it is fair for people to blame them for it. But many customers can be blamed too. From many western forums I have read so far, when they bought the card they expected a GTX 980 at the price of a GTX 970, something I never believed existed in the first place. I always thought there was more to it than just being cheaper and somewhat slower, but a lot of people believed they were "future-proofing" with this card. I was even surprised when I read that several owners claimed they bought the card (a single card) for a 4K monitor gaming setup.


----------



## THE_EGG (Jan 27, 2015)

It is a little grey, but I guess this would fall under fraudulent misrepresentation, which as far as I know is illegal in Australia (covered by section 18 of the Australian Consumer Law). I'm sure it is also illegal in most, if not all, other countries. Nvidia should take appropriate action (e.g. some kind of compensation) to resolve this issue fairly.


----------



## FreedomEclipse (Jan 27, 2015)

RCoon said:


> Just to let you guys know, retailers and AIB partners (Gigabyte, Asus, MSI) are not accepting returns for this problem at this time. I presume they will be in avid communications with NVidia first before we get a response on where to go from here.



Pity... I was hoping to get a slight refund, as I do have two 970s....

Kinda stumped on what to do next, tbh. Might try to return these two 970s and pick up two 780 Tis off eBay or something on the cheap.


----------



## Ferrum Master (Jan 27, 2015)

THE_EGG said:


> I'm sure it is also illegal in most - if not all - other countries too



Even a cookie must list every ingredient used; who knows, maybe someone is quite allergic to a cut-down ROP count.


----------



## Sasqui (Jan 27, 2015)

btarunr said:


> Check out our Facebook page.



LOL, cute.  It's all good man.

I saw this in downtown Providence and snapped a photo, he should start a video card ad campaign.


----------



## Ferrum Master (Jan 27, 2015)

NightOfChrist said:


> But many customers can be blamed too.



Nada, never blame the customers. They are fools, maybe, but they didn't commit a crime by believing the spec sheet.

And 4K on a 970? Where is the problem for older games? All UT3 games? GRID 2? Civ? Source-based games? And the millions of users still on WoW?


----------



## rtwjunkie (Jan 27, 2015)

looniam said:


> "Why NVIDIA chose to reduce cache size and ROP count will remain a mystery."
> 
> idk, it seemed the TR, PCper and esp. anand tech articles made it quite clear. though i do seem to have a talent at solving murder mysteries within the first chapter of the book.
> 
> ...


 
Welcome to TPU!  Great first post.


----------



## Parn (Jan 27, 2015)

While the story of a misunderstanding between NV's engineering and PR departments is hard to believe, and NV should be taught a lesson for this questionable business practice, I doubt they will get into any big trouble over it, other than a few consumers returning their GTX 970s.

If Intel could get away with the TSX errata found on Haswell CPUs (the feature was removed through a microcode update after the bug was discovered by the community), I can't see why NV won't, considering the performance of the GTX 970 hasn't changed a bit before or after this marketing fiasco.


----------



## rruff (Jan 27, 2015)

NightOfChrist said:


> NVIDIA made a mistake with the design, intentionally or otherwise, so it is fair if people blame them for it.



Not a mistake in design... that was surely intentional, and the card performs well enough. It *needs* to be significantly slower than the 980. A mistake in marketing and presentation? Maybe.

I've worked in a few large corporations, and marketing tends to live in its own little world of schmoozing and BS. But they wouldn't be allowed to mod the specs without the consent of the top brass. The tech guys would likely have had no say in the matter at all.

The big question in my mind is... did everyone involved expect this scenario to play out like it is, or were they really dumb enough to think no one would notice? The ROPs and cache thing is really weird, because nobody would give a damn about those specs on their own. It's the gimped VRAM that is bothersome. Maybe they were afraid those specs would tip someone off that the architecture was funny and prompt them to investigate further? And they wanted the 970 to have a few months of "honeymoon" where it sold like crazy and forced AMD to slash prices on the 290 and 290X. Then they'd do the inevitable dance and damage control later.

I don't know... what would have happened if NVIDIA had told everyone it was 3.5 GB+ at the start? I can see marketing's point of view: you don't want to present your hot new product and put "gimped architecture weirdness" in everyone's mind. It definitely gives the card one issue that makes it inferior to the competing AMD products, which have a full 4 GB of VRAM. I'm certain it would have hurt sales and tainted reviews. But it's also hard to imagine they'd prefer what is happening now. The 970 is still early in its life cycle, and if NVIDIA compensates customers, that will certainly cost them more money than honesty at the start would have.

Doesn't anybody know someone who works there and can tell us what's really going on?


----------



## Protagonist (Jan 27, 2015)

With all this info out in the open... it leads me to say that the GTX 980 is an overpriced card that does not deserve its price tag. The GTX 980 should have been $399 at most.

NVIDIA overpriced the GTX 980 by publishing false GTX 970 specs, so that the 970 appeared close to the 980's specs, to justify the price they put on the GTX 980.

If the real specs for the GTX 970 had been known from the start, I bet the GTX 980 would not be selling at those high prices; it would be more like $399-ish, to justify the small performance gain it has over the GTX 970.


----------



## Batou1986 (Jan 27, 2015)

I don't understand all the people here yelling WELL ALL THE BENCHMARKS ARE STILL TRUE ETC ETC ETC.
In the future, if someone running this card hits a game that uses 4GB of memory, and it runs slower because it's using all 4GB, that's an issue, an issue that should have been clearly explained by Nvidia.

The fact is people bought a card that advertised 4GB of available vram, not a card with 3.5GB of vram and the possibility to use 4GB at reduced performance.
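For what it's worth, the gap between the two segments can be sketched with simple arithmetic, assuming the reference 7 Gbps effective GDDR5 data rate and the 7+1 controller split described in the article above (these are theoretical peaks, not measured figures):

```python
# Rough peak-bandwidth estimate for the GTX 970's segmented memory.
# Assumes reference 7.0 Gbps effective GDDR5 and eight 32-bit controllers.

GDDR5_GBPS = 7.0        # effective data rate per pin, Gbit/s
CONTROLLER_WIDTH = 32   # bits per memory controller
CONTROLLERS = 8         # eight controllers = 256-bit bus

per_controller = GDDR5_GBPS * CONTROLLER_WIDTH / 8  # GB/s per controller
total = per_controller * CONTROLLERS                # full 256-bit bus

fast_segment = per_controller * 7   # 3.5GB segment: seven controllers
slow_segment = per_controller * 1   # 0.5GB segment: one controller

print(f"per controller: {per_controller} GB/s")  # 28.0
print(f"full bus:       {total} GB/s")           # 224.0
print(f"3.5GB segment:  {fast_segment} GB/s")    # 196.0
print(f"0.5GB segment:  {slow_segment} GB/s")    # 28.0
```

Since the two segments can't be striped together, each figure is a per-segment peak, which is why the fast partition tops out around 196 GB/s rather than the advertised 224 GB/s.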


----------



## Uplink10 (Jan 27, 2015)

Cheapness is going to be their doom. Disabling part of a GPU just to sell it for less, and then selling the fully enabled GPU at a higher price. Intel does the same, but at least they get the specifications right. Refunds are going to cost them greatly.

Quote from Transformers (2007):

00:18:53,432 --> 00:18:56,458
Wow. You are so cheap.


----------



## FordGT90Concept (Jan 27, 2015)

It doesn't look like there's a class action out yet but I'm positive it is coming.  NVIDIA misrepresented their product.  NVDA shares took a 3% dive today.


----------



## NightOfChrist (Jan 27, 2015)

Ferrum Master said:


> Nada, never blame the customers, they are fools maybe, but they didn't commit a crime in believing the spec sheet.
> 
> And 4K on 970? Where is the problem for older games? All UT3 games? Grid2? Civ? Source based games? And millions of users still for WoW?


Many customers who bought GTX 970 and expected no more than GTX 970 are blameless, it is true, but from their comments or rather complaints several customers expected the card's vRAM to perform like that of GTX 980 and they argued it is the same vRAM configuration utilised by GTX 980 so at least it should perform at the same level. There are obvious reasons why one is named 970 and the other 980. Although I did not expect there would be segmented 3.5+0.5 vRAM with the smaller segment slower than the rest, I did actually expect overall a 970 would be slower than a 980, despite the stats written on the sheet. I would blame them for being naive. Not all customers, of course. Just the naive ones.

As for older games on 4K, I do not think it is going to be a problem. Quite the contrary, it will be a great experience, and a single GTX 970 should suffice, if not more than suffice. But from what I have read some people bought a single GTX 970 and expected to run games like Assassin's Creed: Unity and even Dragon Age: Inquisition on 4K resolution. SLI setup, perhaps. But a single card?

And I apologise if my English is not fluent and understandable. I tried not to use a machine translator. Hopefully nobody would get confused by my poor choice of words.


----------



## newtekie1 (Jan 27, 2015)

So in the end, it all comes down to an incorrect amount of L2 cache given on a non-public spec sheet.  And it matters so little, most reviews don't even mention the L2 cache size in the 970 reviews.  It isn't mentioned in any of the 970 reviews here on TPU.  That is how little L2 cache matters to the public.

They didn't lie about ROP count.  The card has 64 Active ROPs.  That is not a lie.  It only uses 56 of them because using the others would actually slow the card down.  But there are 64 active ROPs.

They didn't lie about memory amount, it has 4GB and all 4GB can be accessed if needed.  The last 0.5GB is slower than the first 3.5GB, but so what?  It is still faster than accessing system RAM.  If they had designed the card as a strict 224-Bit 3.5GB card, it would have been slower than the 970 we got.  There is no getting around that.  They made the decisions they did with the extra 0.5GB because it improves the performance of the card.
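The "still faster than accessing system RAM" claim above can be sanity-checked with public peak figures; the 28 GB/s for the slow segment assumes one 32-bit GDDR5 controller at the reference 7 Gbps, and both numbers are theoretical maxima:

```python
# Compare the 970's slow 0.5GB segment against fetching data over PCIe.
# Assumes reference GDDR5 clocks and PCIe 3.0 x16 peak bandwidth.

slow_segment_gbs = 7.0 * 32 / 8             # one 32-bit GDDR5 controller: 28 GB/s
pcie3_x16_gbs = 16 * 8.0 * (128 / 130) / 8  # 16 lanes, 8 GT/s, 128b/130b encoding

print(f"0.5GB segment: {slow_segment_gbs:.1f} GB/s")   # 28.0
print(f"PCIe 3.0 x16:  {pcie3_x16_gbs:.2f} GB/s")      # ~15.75
assert slow_segment_gbs > pcie3_x16_gbs  # the slow segment still wins on paper
```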

Yeah, there is some marketing sleight of hand going on here.  But the fact is the card performs great, even at 4k.

Personally, I think they could have left all the specs they listed the same (except the L2 size, but again, I wouldn't even have listed that).  But they should have given this explanation to the reviewers from the beginning, so they could include in their reviews how the memory subsystem works from the start.


----------



## rruff (Jan 27, 2015)

newtekie1 said:


> But they should have given this explanation to the reviewers from the beginning, so they could include in their reviews how the memory subsystem works from the beginning.



Yes, but that isn't good for marketing, because the reviewer will be focused on the weird architecture. The press would have surely been less favorable if that had happened. 

I'm guessing they really thought no one would notice, or at least not until later this year.


----------



## Batou1986 (Jan 27, 2015)

newtekie1 said:


> But they should have given this explanation to the reviewers from the beginning, so they could include in their reviews how the memory subsystem works from the beginning.


This is where the real issue lies. Nvidia goes out of their way to explain all their technical features and stuff, but somehow skims over this part "accidentally". IMO Nvidia was intentionally misleading, because they knew it would affect sales.


----------



## FordGT90Concept (Jan 27, 2015)

newtekie1 said:


> So in the end, it all comes down to an incorrect amount of L2 cache given on a non-public spec sheet.  And it matters so little, most reviews don't even mention the L2 cache size in the 970 reviews.  It isn't mentioned in any of the 970 reviews here on TPU.  That is how little L2 cache matters to the public.
> 
> They didn't lie about ROP count.  The card has 64 Active ROPs.  That is not a lie.  It only uses 56 of them because using the others would actually slow the card down.  But there are 64 active ROPs.
> 
> ...


Those are misrepresentations.  If a car manufacturer sold cars advertising 4 wheels and, after you buy one, you discover one of those is the spare, you'd be a little pissed too.  Technically the manufacturer didn't lie, but they still misrepresented what they were selling.

"Active" matters little here.  That's like super gluing a turbo charger on to the hood of a car and selling it as "turbo charged" when it is not.  If the hardware is there but deliberately designed to not be used, it shouldn't be advertised as being there.


----------



## v12dock (Jan 27, 2015)

FordGT90Concept said:


> It doesn't look like there's a class action out yet but I'm positive it is coming.  NVIDIA misrepresented their product.  NVDA shares took a 3% dive today.



On the flip side, AMD is up 4%.  One of the few companies that is not taking a massive hit today.


----------



## TRWOV (Jan 27, 2015)

ironwolf said:


> Any word from the AMD camp over this?  I'd be curious if they might try to pull some PR stuff using this.  Or if they will just keep their traps shut for the time being.




Next headline on TPU:

*Choose R9 290 Series for its uncompromised 4GB memory: AMD*



----------



## the54thvoid (Jan 27, 2015)

v12dock said:


> On the flip side AMD is up 4%  One of the few companies that is not taking a massive hit today.



Unfortunately the AIBs for AMD are reducing orders.  They need to start peddling the new tech coming in the 2nd half of 2015.


----------



## Uplink10 (Jan 27, 2015)

newtekie1 said:


> They didn't lie about ROP count. The card has 64 Active ROPs. That is not a lie. It only uses 56 of them because using the others would actually slow the card down. But there are 64 active ROPs.


They could activate all disabled parts of the GPU and advertise it as they do the 980, but activated parts are *not important unless they can be used*. It's like having a car with a V8 (eight-cylinder) engine where you can only use six of the cylinders.


----------



## No Nrg (Jan 27, 2015)

Everyone who bought a 970 because of the ROP count raise their hands. *crickets*

The fact that they got the spec wrong does suck but the benchmarks are what most everyone based their purchase on and those numbers won't change. 

Hopefully there is no underlying issue like this with the 980 I bought.....


----------



## newtekie1 (Jan 27, 2015)

Uplink10 said:


> They could activate all disabled parts of the GPU and advertise it as they do 980 but activated parts are* not important unless they can be used*. You have a car with V8 (eight cylinders) engine but you can only use six cylinders (V6).



My car has 8 Cylinders, and only uses 4 most of the time because it gets better gas mileage.  It is still advertised as an 8 Cylinder, but not using all 8 has benefits.

In theory the ROPs can actually be used, but as said they would actually slow the card down.


----------



## Casecutter (Jan 27, 2015)

Here's the thing: if disabling 3 of the SMMs in some way made the memory controller unstable, and they found it best to fuse off part of the L2 to keep performance up, great. Then just explain that it's part of the difference between the two cards, and show how and why it was done, as they have now. We also have to ask: does the GTX 980M (12 SMMs) have any L2 fused off? Perhaps disabling a complete block doesn't affect the controller in the same way, I don't know. Or perhaps the explanation is simpler...

Nvidia found they had a large volume of chips with at least one damaged L2 block, and to fulfill the volume of 970s they needed to put on the market, they fused off one L2 block on all cards... Which is not the issue "in and of itself", but the more I look at it, the more it appears to be a scenario of defective/damaged L2, with Nvidia figuring out a way to "weasel" around it, because nobody ever checks or questions L2 in reviews and would take them at their word (specs)... so why bring it up?

Either way, Nvidia misrepresented the product in more ways than one.  It wasn't just one obscure specification; they went as far as touting that *the GTX 970 ships with THE SAME MEMORY SUBSYSTEM AS OUR FLAGSHIP GEFORCE GTX 980*.  So Nvidia should feel obligated to make it right for anyone who purchased a 970 and considers it a misrepresented product. They should be eligible for some amount of reimbursement, or a complete refund if they no longer want the product.


----------



## Toothless (Jan 27, 2015)

Aaanndd there goes my wishlist.


----------



## BiggieShady (Jan 27, 2015)

No Nrg said:


> Everyone who bought a 970 because of the ROP count raise their hands.



You don't buy a car because it has four wheels, but you don't buy a car if it doesn't.... see?


----------



## Casecutter (Jan 27, 2015)

BiggieShady said:


> You don't buy car because it has four wheels, but you don't buy a car if it doesn't .... see?


Actually just 3-1/2, and then the 1/2 spare causes you to run slower.


----------



## hyp36rmax (Jan 27, 2015)

Selene said:


> *This is no different than the dual GPU cards, they physically have double the memory but only half is usable. *This changes nothing, the card does have 4gb, and as you said all the benchmarks are still the same. I dont agree with them doing this and not telling people but if you got the card based on reviews and benchmarks you got what you paid for.
> 
> The truth is once the card gets a to a rez where 4gb would even be worth having the GPU cant handle it and it would make maybe 1-2fps difference at best, its been show time and time again, 256bit bus really can only handle 2gb.



I agree with most of your statement, however this is nothing like a dual-GPU card, where the memory allocation is split and mirrored between the two GPU cores, as opposed to the single-GPU GTX 970 with 4GB of GDDR5 on board and 3.5GB usable at full speed.  A dual-GPU card is akin to CrossFire or SLI on a single PCB.  You don't claim 8GB of total usable GDDR5 on a CrossFire pair of R9 290X 4GB cards (unless of course you have two 8GB models).
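The mirroring point is worth spelling out: in SLI/CrossFire each GPU holds its own full copy of the working set, so usable VRAM is the per-card amount, not the sum. A trivial sketch:

```python
# SLI/CrossFire mirror the frame data on every GPU, so usable VRAM
# is per-card, not the total physically installed.

cards_gb = [4, 4]  # e.g. two R9 290X 4GB cards

installed = sum(cards_gb)  # 8GB physically present across the PCB(s)
usable = min(cards_gb)     # 4GB effectively available to a game

print(f"installed: {installed}GB, usable: {usable}GB")  # installed: 8GB, usable: 4GB
```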


----------



## FreedomEclipse (Jan 27, 2015)

For those that care - there's a petition up to mount a class action lawsuit against Nvidia. The objective of it is to get refunds; they haven't mentioned a percentage, so I'm guessing a full refund rather than a partial one.

- Nvidia could save themselves so much hassle if they just gave away some game keys to people who purchased a 970. Sure, it won't 'fix' the 970, but for most of us the card performs flawlessly despite the misadvertised specs.

Here's the link to the Nvidia forums where the petition is posted - it will be interesting to see what steps Nvidia will take to fix the issue.

Already some users are suggesting that Nvidia should accept their 970s back and step them up to 980s for a little cash on top - I wouldn't mind this option, though I'll be happy with a partial refund.


----------



## john_ (Jan 27, 2015)

I like how some people see things.

Hawaii GPU throttling: 
Who cares about performance? Throw AMD into the fire.

NVidia lying about GTX 970 specs:
Oh, come on, it's only -3%!


----------



## 64K (Jan 27, 2015)

FreedomEclipse said:


> Already some suggestions from some of the users saying that Nvidia should accept their 970s back and step them up to 980s for a little cash on top - I wouldn't mind this option  though i'll be happy with a partial refund.



I wouldn't mind putting $100 with my 970 return to step up to a GTX 980. That would make it $450 US for me which is what the GTX 980 should have been anyway imo considering the performance increase over the 970.


----------



## Xzibit (Jan 27, 2015)

FreedomEclipse said:


> For those that care - theres a petition up to make a class action lawsuit against Nvidia. Though, the objective of it is to get refunded. They havent mentioned a percentage so im guessing a full refund rather than a partial one.
> 
> - Nvidia could just save themselves so much hassle if they just gave away some game keys to people who purchased a 970, Sure it wont 'fix' the 970 but for most of us the card performs flawlessly despite the misadvertised specs.
> 
> ...



I'd go for the refund or the step-up.

What if the game key they offer is for a game that allocates over 3.5GB of memory?  It would add insult to injury.


----------



## REAYTH (Jan 27, 2015)

newtekie1 said:


> My car has 8 Cylinders, and only uses 4 most of the time because it gets better gas mileage.  It is still advertised as an 8 Cylinder, but not using all 8 has benefits.
> 
> In theory the ROPs can actually be used, but as said they would actually slow the card down.


Most people buy a card off of performance reviews, not how many ROPs it has. As a matter of fact, I would be willing to bet 99.999999999% of people don't even know WTF a ROP is. But they do know they get 20 FPS more in Battlefield 4.

Is this an issue in advertisement? Yes. Should Nvidia address it? Yes. Should people be able to cry foul and return the card......No. They bought it for the performance. Not the damn ROP count.


----------



## L337One91 (Jan 27, 2015)

All I want to know is whether or not I will experience stuttering playing at 1440/1600p.


----------



## 64K (Jan 27, 2015)

Well, it's a little early to tell how this will all work out. Nvidia is just putting fluff responses out there to stall until they can figure out what is the smartest long term decision that they can make.


----------



## Xzibit (Jan 27, 2015)

64K said:


> Well, it's a little early to tell how this will all work out. Nvidia is just putting fluff responses out there to stall until they can figure out what is the smartest long term decision that they can make.



Well, they could be putting this out with the intention of minimizing penalties down the road in a false advertising lawsuit.  One of the severe penalties that can be levied by the courts applies when the advertiser acted with intent.  The first statement Nvidia made was "miscommunication", to try to minimize public perception, and an early sign of Cover Your own A**.


----------



## Uplink10 (Jan 27, 2015)

64K said:


> I wouldn't mind putting $100 with my 970 return to step up to a GTX 980.


Nvidia would not mind this, since they would get another 100 USD, and we all know that even if the GTX 980 cost 450 USD, that would still be too much. I mean, they deserve this, since they intentionally disabled parts of the GPU on the GTX 970.



REAYTH said:


> Most people buy a card off of performance reviews.


And because of specifications: with more resources it should be faster. Specifications are a technicality (in contrast to performance), and when you violate them it is going to stab you in the back.


----------



## newtekie1 (Jan 27, 2015)

REAYTH said:


> Most people buy a card off of performance reviews. Not how many ROPSs it has. As a matter of fact I would be willing to bet 99.999999999% of people don't even know WTF a ROP is. But they do know they get 20 FPS more in Battlefield 4.
> 
> Is this an issue in advertisement? Yes. Should Nvidia address it? Yes. Should people be able to cry foul and return the card......No. They bought it for the performance. Not the damn ROP count.



That is exactly my point.  When I read the reviews for the 970 I didn't really even care about the specs.  In fact I jumped straight to the performance section.  In the end, that is all that matters.



L337One91 said:


> All I want to know is whether or not I will experience stuttering playing at 1440/1600p.



I haven't yet @1440p.



Uplink10 said:


> And because of specifications, with more resources it should be faster. This is technicality (in contrast to performance) and when you violate them it is going to stab you in the back.



I've never bought a card based on specs.  If it used 64 ROPs and was slower, I'd be more disappointed than only using 56 of the 64 active and being faster.


----------



## HumanSmoke (Jan 27, 2015)

Ikaruga said:


> I have to disagree. I can't imagine a single reason why would an engineer lie 64 ROPs when there is only 56.  These monster companies always have dedicated teams for communications with the outside world (press, developers, retailers, etc). The only thing I can imagine is that somebody in that department failed big time, (regardless if it was deliberate or just a stupid a mistake from that person).


I'm inclined to agree. Hardware vendors know their products go under the microscope of the community they sell to. If I have a choice between believing a labyrinthine conspiracy theory, or that communication between engineering and marketing screwed up, I'm inclined to go with the latter. As I pointed out in another thread, wasn't Bulldozer's missing 800 million transistors just such a case? The alternative conspiracy theory (which some oddballs gave credence to) would be that AMD boosted the transistor count to make it seem like a more complex chip.


v12dock said:


> Class action lawsuit?


Certainly a shitty situation, but is it actionable in court? It might be good leverage from a bad-PR standpoint to get some action, but it has to be an all-or-nothing scenario: refund or trade-up. The heart of the matter would seem to be the memory and bus-width discrepancy, but even the testing shows that, albeit slower, the memory is all active. The only revised specs I've seen are the ROP count and L2 cache, neither of which appear in the official product specifications (this is a cached copy of the originally listed spec sheet) of Nvidia or their partners. It might be a shitty situation, but I'd be a little dubious that it alone constitutes a case. The only evidence I've seen that might be indictable is the claim in the reviewer's guide stating that the GTX 970 shares the same memory subsystem as the 980 - but that isn't part of the official product specification. If reviewer's guides, with their cherry-picked best-case benchmarks, are litigation fodder, then I don't think many products are safe from civil suit.


----------



## the54thvoid (Jan 27, 2015)

A class action lawsuit might not get very far.  Even if it was going somewhere, it would be settled out of court with the small number of complainants and then be signed off under a confidentiality clause.  Nvidia would ride that out and take the hit to its image.  It'd bounce back by releasing its next chip and charging lower than expected, bringing back the love. 

For those that keep bitching about people saying it's not making a difference: the game reviews still stand.  The performance is still there.  Nvidia have entered a PR nightmare and they cannot come out looking clean.  They cocked up for sure - no doubt - let's not defend them.

The irony is those people that have not noticed any problems with their cards suddenly thinking: is my card bust?  No, it's not - it's running as designed.  The problem is not the design but the marketing of it.  There were no lies as such, only technical ambiguity (that's why the lawsuit won't work).  The dual-GPU cards have set a precedent there.  Someone said elsewhere that it's not relevant, but it really is.  A 12GB Titan Z or an 8GB R9 295 does not have 12 or 8 GB of functioning memory as we know it, but they both have what is stated.  This would be used as a defence by NV.  The 970 does have all the things listed, but they're just not used as we 'assumed'.

Have Nvidia been arseholes (and are they continuing to make themselves look even worse)?  Yes, of course.

Are some forum posters being fanatically childish about it?  Of course.

Should you be pissed if you have a 970?  Only if it's actually affecting you.  FFS, my 780 Tis only have 3GB of memory but they still pull 5083 in Firestrike Ultra (at stock).

The best thing about this entire debate is watching those it affects sort of going 'meh' and watching (sorry guys) AMD loyalists getting their knickers in a twist.   Can't we all agree - Nvidia are dicks, but the cards still work.


----------



## Casecutter (Jan 27, 2015)

newtekie1 said:


> That is exactly my point.  When I read the reviews for the 970 I didn't really even care about the specs.  In fact I jumped straight to the performance section.  In the end, that is all that matters.


If Nvidia called it 3.5Gb Active Boost... or 4Gb Memory Compression... or something like that I think a lot of folk would've... But we weren't privy to that information.


----------



## TRWOV (Jan 27, 2015)

HumanSmoke said:


> As I pointed out in another thread, wasn't Bulldozers missing 800 million transistors just such a case? The alternative conspiracy theory (which some oddballs gave credence to) would be that AMD boosted the trans count to make it seem like a more complex chip.



Sorry for countering so many of your posts,  but in that case AMD actually did what people are saying nVidia should have done. AMD saw the error in a review and corrected the mistake. It's kind of a typo if you think about it: I can easily see 1.2 becoming 2 if at some point someone just wrote ".2", forgetting the "1", and then someone "corrected" that to 2 (a 0.2B-transistor CPU wouldn't make sense, but 2B wasn't much of a stretch). Marketing didn't catch it, and after reviews went up AMD engineers must have caught the error and sent the mentioned correction.



It's quite amusing how people react to mix-ups. Here we are crucifying nVidia for not telling people, and qubit actually berated AMD for making the correction:



qubit said:


> If this is an attempt to make the processor look better by showing it "doing more with less", then this PR stunt has backfired spectacularly and it would have been better to have left the "error" as it was. Paradoxically, FX processors are a sales success and are flying off the shelves as we just reported, here.



Bad if you do, bad if you don't.


----------



## HumanSmoke (Jan 27, 2015)

TRWOV said:


> Sorry for countering so many of your posts


Well, that is perfectly fine. Healthy debate never killed anyone. The point I was making is that screw-ups in communication (esp. involving marketing) are quite prevalent - certainly more so than vast conspiracies. Even the fallout (the delay in recognition and acknowledgement) from major screw-ups on a hardware level, such as Intel's FDIV bug, arose through lower-level management going missing on basic protocol. The day it made it to board level, Board Chairman Arthur Rock instituted a mea culpa.

If anyone is actually interested in the performance aspects of this issue, PCGH have quite an interesting analysis (German) comparing the GTX 970 with a GTX 980 downclocked to simulate the reduced bandwidth/lower shader count etc. but with a full 4GB of vRAM. The results tend to show significant separation in the 4K benchmark (although since neither offer playable frame rates I'm not too sure of the relevancy).


----------



## damric (Jan 28, 2015)

Maybe Nvidia will give you $10 off your next Titan purchase


----------



## xfia (Jan 28, 2015)

AMD should make a commercial side by side with 970+290 that shows what happens when you go over 3.5gb..  I could see the ending now!


----------



## qubit (Jan 28, 2015)

NVIDIA normally make such great graphics cards which is why I've been buying them for years. Yet, they also have a tendency to pull a fast one like this, trashing their reputation and customers' trust in their products, including mine. What's worse is that this isn't the first time we've seen underhand tactics like this result in scandals for them. A good class action win against them for this would be great at helping to keep them honest in the future. It's not gonna happen though, is it?

I never like the idea of a gimped GPU with a bit of it disabled since it reduces performance and unbalances the design and this is a graphic example of exactly why this is bad. This is the sort of reason why I always insist on getting the top GPU in the range. Yeah, it costs me a lot of money, but I also don't have to put up with garbage like this.


----------



## Bad Bad Bear (Jan 28, 2015)

I purchased 2 of these cards. Not happy.

I have lodged a formal complaint with the ACCC (I'm in Australia) and have forwarded my formal complaint to Nvidia and my local retailer Umart. Under Australian consumer law I'm entitled to a refund.

Let's see what eventuates from this.....


----------



## damric (Jan 28, 2015)

Bad Bad Bear said:


> I purchased 2 of these cards. Not happy.
> 
> I have lodged a formal complaint with the ACCC ( I'm in Australia ) Have forwarded my formal complaint to Nvidia and my local retailer Umart.
> 
> Let's see what eventuates from this.....



Yeah I imagine that SLI would take advantage of more VRAM so I would be pissed too. I never thought my HD 7850s could use the full 2GB but they do in games like Far Cry 3 1080p even with no AA.


----------



## silapakorn (Jan 28, 2015)

damric said:


> Yeah I imagine that SLI would take advantage of more VRAM so I would be pissed too. I never thought my HD 7850s could use the full 2GB but they do in games like Far Cry 3 1080p even with no AA.



But SLI won't stack VRAM of both cards.


----------



## Bad Bad Bear (Jan 28, 2015)

damric said:


> Yeah I imagine that SLI would take advantage of more VRAM so I would be pissed too. I never thought my HD 7850s could use the full 2GB but they do in games like Far Cry 3 1080p even with no AA.



The Nvidia moderators just deleted my post off the official forums. 

They are in damage control for sure. Too late now !


----------



## AsRock (Jan 28, 2015)

I wanted a 4GB card and the 290\X was the only real option. If I had waited, I probably would have got a 970, and this finding would not make me happy, whether a performance hit was there or not.

I play a fair few games that take over 3.5GB of ram, and it would just annoy me being sold something that is not truly what it claims.


----------



## damric (Jan 28, 2015)

silapakorn said:


> But SLI won't stack VRAM of both cards.



We know this.


----------



## Fluffmeister (Jan 28, 2015)

AsRock said:


> I wanted a 4GB card and the 290\X was the only real option, if i had waited i probably would of got a 970 and this finding would not make happy if a performance hit was there or not.
> 
> A play a fair few games that take over 3.5GB ram and it would just annoy me just being sold some thing that is not truly what it claims.



Out of interest what are those games? I'm a sucker for creating problems for myself.


----------



## ensabrenoir (Jan 28, 2015)

....ok, all Nvidia has to do is release some awesomesauce that fully activates all of the 970's potential (scrapping the Ti version) and everyone's happy.  Either way I'm gonna buy a nice card really cheap due to crowd hysteria.


----------



## Steevo (Jan 28, 2015)

Fluffmeister said:


> Out of interest what are those games? I'm a sucker for creating problems for myself.


 
It could be modded Skyrim, modded GTA4, modded Farcry 3, Watch Dogs, Far Cry 4 without mods and high res, future games like GTA5 that want more vmem to hold textures.


----------



## Xzibit (Jan 28, 2015)

ensabrenoir said:


> ....ok all Nvidia has to do is release some awesomesauce  that fully activate all of the 970's  potential (scapping the ti version) and everyone's happy.  Either way I'm gonna buy a nice card really cheap due to crowd hysteria.



Naw, They just have to rename the 970 to 970 SE (Segmented Edition). Marketing gold!!! $$$

Maybe Peter will sell you his.


----------



## Justin_you (Jan 28, 2015)

Can we bring NVIDIA to court because of their false advertisement of the GTX 970 specs?
Heck, can we even ask for a refund because we have bought something that is not according to the spec?


----------



## FordGT90Concept (Jan 28, 2015)

REAYTH said:


> Most people buy a card off of performance reviews. Not how many ROPSs it has. As a matter of fact I would be willing to bet 99.999999999% of people don't even know WTF a ROP is. But they do know they get 20 FPS more in Battlefield 4.


Most people don't know the difference between GB and GiB either.  Seagate got sued for it and lost even though math was clearly on their side.
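For anyone fuzzy on that distinction, the decimal/binary gap is easy to compute:

```python
# GB (decimal, 10^9 bytes) vs GiB (binary, 2^30 bytes):
# the gap that marketing and the OS argue over.

GB = 10**9
GiB = 2**30

advertised_bytes = 4 * GB           # "4GB" on the box
shown_in_gib = advertised_bytes / GiB

print(f"4 GB = {advertised_bytes} bytes")
print(f"reported as {shown_in_gib:.2f} GiB")  # ~3.73 GiB
```

The ~7% shortfall is mathematically correct decimal marketing, yet it still drew the lawsuit mentioned above.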


----------



## Ralfies (Jan 28, 2015)

Read this on the NVIDIA forum:


PeterS@NVIDIA said:

> Hey,
> 
> First, I want you to know that I'm not just a mod, I work for NVIDIA in Santa Clara.
> 
> ...


----------



## Xzibit (Jan 28, 2015)

Ralfies said:


> Read this on the NVIDIA forum:
> It's good that they're trying to work on a fix through drivers, but the performance of the card isn't the big issue. The fact they lied about the specs is why most folk are upset.



Good catch.  *Here is the link to his post...*

Just so people know it's real and you're not flamed..


----------



## Prima.Vera (Jan 28, 2015)

Steevo said:


> It could be modded Skyrim, modded GTA4, modded Farcry 3, Watch Dogs, Far Cry 4 without mods and high res, future games like GTA5 that want more vmem to hold textures.


Let's relax a little. Those games, even uber-modded, will only go beyond 4GB of VRAM if you play at 4K resolution with 4xMSAA enabled. Which, even if you had 1TB of VRAM, would still be unplayable due to insufficient horsepower 

http://www.digitalstormonline.com/unlocked/video-memory-usage-at-4k-uhd-resolutions-idnum146/


----------



## xfia (Jan 28, 2015)

Wow.. he is actually contacting manufacturers for people. They should just hash out a deal that lets you trade up to a 980.


----------



## Vario (Jan 28, 2015)

It's unfortunate, but at the same time the card does perform well for a lot of what you guys are using it for.


----------



## damric (Jan 28, 2015)

Ugh, this is like finding out the doctor took a little more off the tip than was needed.


----------



## AsRock (Jan 28, 2015)

Fluffmeister said:


> Out of interest what are those games? I'm a sucker for creating problems for myself.



FC4, Watch Dogs and Arma 3 to name a few.



Ralfies said:


> Read this on the NVIDIA forum:



Convenient. I cannot imagine them saying that there's a chance your game could take a dump when it goes over 3.5GB of usage.

EDIT:


Prima.Vera said:


> Let's relax a little. Those games, even ubber modded, they will only go beyound 4GB of VRAM, only if you play on 4K resolution with 4xMSAA enabled. Which even if you had 1TB of VRAM would still be unplayable due to insufficient horse power
> 
> http://www.digitalstormonline.com/unlocked/video-memory-usage-at-4k-uhd-resolutions-idnum146/



You can hit 3.5+GB with them without mods.  And I know WD can hit that even at 1080p.


----------



## sergionography (Jan 28, 2015)

So, to summarize what we know so far:

1. Each Maxwell SMM can only address 4 pixels per clock, meaning that even with the ROPs present, they are pretty much useless.
2. Maxwell has this dilemma because of the trade-off NVIDIA made when they made Maxwell a design more dependent on cache than on external bus width.
3. Bus width is meaningless when the GPU's compute units can't handle the throughput; it's almost like having 5 lanes open at a movie theater counter but only 2 employees working.

That being said, this is more reason to stop looking at theoretical specs and actually measure real performance. Had memory bandwidth been properly measured, it would have been obvious that the GTX 970 has lower throughput than a GTX 980 despite sharing the same memory width and "ROPs".

And last but not least, I want to see more benchmarks on AMD cards measuring the effectiveness/efficiency of GCN's memory width vs. throughput, and a lower-level look at architecture in general, because I can't help but think AMD just goes extreme on certain aspects and slaps random hardware on their chips. I don't know if anyone noticed, but GCN cards have insane variance in efficiency between different chips because the design has become "less modular"? Or should I say too modular? Meaning the parts on the chip are too independent of each other, while NVIDIA has a more focused graphics module containing pretty much everything at the perfect ratio, with as few bottlenecks as possible, so when they scale up and down, the ratio of rendering to throughput remains unchanged across the board. The downside is less flexibility to customize specs, which caused this GTX 970 dilemma, but the benefit is consistency and less chance of a "hit or miss".
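The "measure throughput, not specs" point can be sketched with a toy model: on a 970, once an evenly spread working set crosses 3.5 GiB, the blended bandwidth drops below the on-paper figure. The 196/28 GB/s split is my own derivation from 7 Gbps GDDR5 over the 224-bit and 32-bit partitions, and the even-spread assumption is a deliberate simplification:

```python
# Naive blended-bandwidth model for the GTX 970's segmented memory.
# Assumes accesses are spread evenly over the allocation; real drivers
# avoid this by steering hot data into the fast segment, and the two
# partitions can't actually be read simultaneously, so this is only
# a rough illustration.
FAST_BW_GBS = 196.0  # 7 Gbps GDDR5 across the 224-bit partition
SLOW_BW_GBS = 28.0   # 7 Gbps GDDR5 across the remaining 32-bit partition

def blended_bandwidth_gbs(working_set_gib, fast_gib=3.5, slow_gib=0.5):
    if working_set_gib <= fast_gib:
        return FAST_BW_GBS
    slow_used = min(working_set_gib - fast_gib, slow_gib)
    total = fast_gib + slow_used
    return (fast_gib * FAST_BW_GBS + slow_used * SLOW_BW_GBS) / total

print(blended_bandwidth_gbs(3.0))  # 196.0 (all in the fast segment)
print(blended_bandwidth_gbs(4.0))  # 175.0 (full card, evenly spread)
```

A plain bandwidth benchmark that touched the full 4 GiB would have exposed the gap to the 980 immediately, which is exactly the point above.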


----------



## FordGT90Concept (Jan 28, 2015)

xfia said:


> wow.. he is actually contacting manufacturers for people. they should just hash out a deal that lets you trade up for a 980.


Would probably cost them more than the class action suit. The last one, several years ago, was settled for around $18 million if memory serves. That's chump change for NVIDIA. NVIDIA also can't force third-party manufacturers to do things not in their agreement. The manufacturers would have to do that of their own accord, and I doubt they'd jump in front of the bus for NVIDIA, because they have no reason to.


----------



## RCoon (Jan 28, 2015)

The most hilarious thing about all this is that most of the people who are successfully RMA'ing their 970s are then buying a 980 as a replacement from the same retailer. It's not like NVidia is even losing anything out of this, if anything retailers and AIB's are probably going to make more profit from 980 sales now.

The world we live in.


----------



## Xzibit (Jan 28, 2015)

RCoon said:


> The most hilarious thing about all this is that most of the people who are successfully RMA'ing their 970s are then buying a 980 as a replacement from the same retailer. It's not like NVidia is even losing anything out of this, if anything retailers and AIB's are probably going to make more profit from 980 sales now.
> 
> The world we live in.



The consumer can't sue Nvidia directly unless it's a card made by Nvidia, since Nvidia doesn't put the memory, which is what's at issue, on the board. The fury will have to be directed at the AIBs. The AIBs are extremely unlikely to sue Nvidia to recoup losses, and they will take the financial hit on returns and lawsuits. Nvidia will in return compensate them and probably discount future shipments to make nice. Depending on how this plays out, AIBs will probably be more cautious when ordering next go-around.


We'll see how long "performance is the only thing that matters" sticks around.  Next GPU rumor/release thread -->


----------



## ensabrenoir (Jan 28, 2015)

.....you know, a trade-up offer at a discounted price would be an epic win for everyone. Then sell the 970's as refurbished.


----------



## HumanSmoke (Jan 28, 2015)

RCoon said:


> The most hilarious thing about all this is that most of the people who are successfully RMA'ing their 970s are then buying a 980 as a replacement from the same retailer. It's not like NVidia is even losing anything out of this, if anything retailers and AIB's are probably going to make more profit from 980 sales now.
> The world we live in.


A very practical demonstration of the power of mindshare. People may (and do) have issues with the company, myself included, but the product and the brand tend to stay at the forefront of the consumer's perception... something that Intel has also been built on.

Having said that, how many mirrors must AMD's hierarchy have broken, and how many black cats crossed their paths? Every time the competition slips up (this, Intel's Cougar Point SATA bug, Nvidia's 40nm woes, and the list goes on), AMD is never in a position to capitalize.


----------



## MustSeeMelons (Jan 28, 2015)

So, they messed up; I really want to see how they are going to fix this. If they do nothing and I am affected, I doubt my next card will be from Nvidia; I have no problem jumping ship. All that's left to do is grab some popcorn and watch this unravel.


----------



## Parn (Jan 28, 2015)

HumanSmoke said:


> A very practical demonstration of the power of mindshare. People may (and do) have issues with the company - myself included, but the product and the brand tend to stay in the forefront of the consumers perception...something that Intel has also been built on.
> 
> Having said that, how many mirrors must AMD's hierarchy broken, and how many black cats crossed their paths? Every time the competition slips up - this, Intel's Cougar Point SATA bug, Nvidia's 40nm woes (and the list goes on), and AMD are never in a position to capitalize.



The reason AMD cannot capitalize on their competitors' recent slip-ups is that they do not have any competitive products on the market. A couple of examples: 290/290X being slower and less efficient than 970/980, and Bulldozer/Piledriver being less performant than Sandy/Ivy.

When AMD WAS competitive, back in the days of the P3 1.13GHz recall and horrendous 1st-gen P4 efficiency, their K7- and K8-based CPUs did manage to capitalize on those.


----------



## GhostRyder (Jan 28, 2015)

RCoon said:


> The most hilarious thing about all this is that most of the people who are successfully RMA'ing their 970s are then buying a 980 as a replacement from the same retailer. It's not like NVidia is even losing anything out of this, if anything retailers and AIB's are probably going to make more profit from 980 sales now.
> The world we live in.


That is the funniest part; it's really hard to imagine how some people's logic works in that sense. But then again, one thing happening like that is not a reason to stop buying components from them altogether, unless they do it consistently.



Parn said:


> The reason AMD cannot capitalize on their competitors' recent slip ups is they do not have any competitive products on the market. A couple of examples are 290/290X being slower & less efficient than 970/980, Bulldozer/Piledriver being less performant than Sandy/Ivy.
> 
> When AMD WAS competitive, back in the days of the P3 1.13GHz recall and horrendous 1st-gen P4 efficiency, their K7- and K8-based CPUs did manage to capitalize on those.


The R9 290X is right near the performance of a 980, and as an older card released over a year ago, it's no shocker it's not as good. Especially at higher resolutions, though, they are on par, and it's better than the 970 in performance numbers while costing about the same, less, or slightly more depending on the variant. If both the 980 and 970 were completely blowing away the 290X, that would be a different story, but they really are not; even the 780 Ti is not much weaker than a 980 and bests the 970.

Either way, in the end, all this can only make things better for everyone, as it shows people will at least put their foot down (well, at least enough to make a difference). This will help NVidia avoid the same mistakes with their next products.


----------



## Casecutter (Jan 28, 2015)

sergionography said:


> And last but not least I wanna see more benchmarks on amd cards measuring the effectiveness/efficiency of the memory width vs throughput of gcn and have a more lower level look at architecture in general because I can't help but think amd just goes extreme on certain aspects and slaps random hardware on their chips, idk if anyone noticed but gcn cards have insane variance in efficiency between different chips  because the design has become "less modular?"


Not to bust your hump... but you made me recall my 8th-grade spinster grammar teacher going ballistic 



sergionography said:


> Or should I say too modular?


As btarunr said in this article, "This team (PR) was unaware that with "Maxwell," you could segment components previously thought indivisible, or that you could "partial disable" components." Nvidia (and AMD), if going this route, will need to make such "disablements" more seamless and unnoticeable in upcoming product releases, or better yet, just tell us.


----------



## inferKNOX (Jan 28, 2015)

I'm really bummed about all this! Not excited about the prospect of stuttering at higher rezos... and I was just on my way to stepping up my monitor to a ULMB, higher rez monitor.

The GTX970 is my first foray to the green team since my 8600GT and it crossed oceans to get from the store to me, so swaps/refunds ain't really an option for me.
Give us free Witcher 3 preorder keys (and some other decent options like Mortal Kombat X / Arkham Knight / Resident Evil for those that want others) and let's call it a day, nVIDIA.


----------



## rruff (Jan 28, 2015)

Parn said:


> When AMD WAS competitive back in the days of P3 1.13GHz recall & horrendus 1st Gen P4 efficiency, their K7 and K8 based CPU did manage to capitalize on those.



AMD isn't competitive because they stopped investing in R&D. 2014 R&D was 58% of 2008. Nvidia now outspends their whole operation, and Intel outspends them many times over. It seems to me that AMD decided to quit competing many years ago, but they are dragging on as long as possible with older architecture. Like when the GTX 960 came out, they bragged that their 3 year old GPUs were still "competitive"! Sure, your top of the line GPUs from 3 years ago that you've been rebadging and dropping the price on, are competitive in FPS/$ at the low-mid range now. Nvidia could do the same thing, just invest in the top range, and rebadge and downgrade everything else, but I'm glad they don't. 

I didn't really pay any attention to computers until I needed a new machine and built one last year. I initially looked at only AMD, because they were good last time I checked (2008), and I like the underdog. You could have called me a fan-boy. But as I got into the details I changed my mind.


----------



## rruff (Jan 28, 2015)

RCoon said:


> The most hilarious thing about all this is that most of the people who are successfully RMA'ing their 970s are then buying a 980 as a replacement from the same retailer. It's not like NVidia is even losing anything out of this, if anything retailers and AIB's are probably going to make more profit from 980 sales now.



Nvidia is basically selling a 970 and a 980 for the price of a 980 in that case, and possibly getting something for the old 970s. Will we see a flood of used/refurbs on the market soon? Or will Nvidia just destroy them rather than dilute sales of new 970s? And how will Nvidia compensate the EVGAs and Neweggs for refunds that *aren't* upgrades but just cash?

I think Nvidia is losing a chunk of money even in the most ideal scenario.


----------



## rruff (Jan 28, 2015)

Xzibit said:


> The consumer can't sue Nvidia directly unless it's a card made by Nvidia, since Nvidia doesn't put the memory, which is what's at issue, on the board.



The architecture is the same on all, isn't it? It's Nvidia's design, not something the AIB's control.


----------



## RejZoR (Jan 28, 2015)

rruff said:


> AMD isn't competitive because they stopped investing in R&D. 2014 R&D was 58% of 2008. Nvidia now outspends their whole operation, and Intel outspends them many times over. It seems to me that AMD decided to quit competing many years ago, but they are dragging on as long as possible with older architecture. Like when the GTX 960 came out, they bragged that their 3 year old GPUs were still "competitive"! Sure, your top of the line GPUs from 3 years ago that you've been rebadging and dropping the price on, are competitive in FPS/$ at the low-mid range now. Nvidia could do the same thing, just invest in the top range, and rebadge and downgrade everything else, but I'm glad they don't.
> 
> I didn't really pay any attention to computers until I needed a new machine and built one last year. I initially looked at only AMD, because they were good last time I checked (2008), and I like the underdog. You could have called me a fan-boy. But as I got into the details I changed my mind.



Wasteful R&D spending doesn't necessarily result in superior end products.


----------



## rruff (Jan 28, 2015)

RejZoR said:


> Wasteful R&D spending doesn't necessarily result in superior end products.



Of course not, but what does drastically cutting R&D signal? That isn't a move any company intending to be competitive long-term would make. AMD's direct competition is now outspending them >10x. Even if AMD's engineers are more clever than Nvidia's and Intel's, this seriously affects their ability to deliver new products.

Oh... and AMD is currently losing money while Nvidia and Intel are not. How to fix that? Cut "wasteful" R&D even more?


----------



## Xzibit (Jan 28, 2015)

rruff said:


> The architecture is the same on all, isn't it? It's Nvidia's design, not something the AIB's control.



Yes, it's the same. You can sue both, and at least in the US, individual states have additional consumer laws on top of Federal law.

*FTC - Truth In Advertising*

Nvidia is based in California.
*California - Truth In Advertising*

Lawyers will decide where the money can best be squeezed out.


----------



## HumanSmoke (Jan 28, 2015)

Parn said:


> The reason AMD cannot capitalize on their competitors' recent slip ups is* they do not have any competitive products on the market*. A couple of examples are 290/290X being slower & less efficient than 970/980, Bulldozer/Piledriver being less performant than Sandy/Ivy.
> When AMD WAS competitive, back in the days of the P3 1.13GHz recall and horrendous 1st-gen P4 efficiency, their K7- and K8-based CPUs did manage to capitalize on those.


Really?
So what is the excuse for when AMD launched the Evergreen series (HD 5000) in September 2009? Between the launch of the HD 5970 (fastest card during virtually its entire lifetime), 5870, 5850 (best bang-for-buck card of its era), 5770 (best mainstream card of its era), and 5750, and the arrival of Nvidia's Fermi architecture at the end of March 2010, Nvidia fielded nothing newer than an 18-month-old GT200 and an almost 2.5-year-old G92... yet AMD's discrete graphics market share actually *declined*.
It also doesn't explain how AMD's market share is steadily declining while the company is still generally first to market with new graphics architectures (Evergreen launched before Fermi, Southern Islands launched before Kepler).
And K7 and K8? At the height of AMD's competitiveness, the company never accounted for more than 25.3% of the x86 processor market, achieved in Q1 2006.


----------



## 64K (Jan 28, 2015)

Parn said:


> The reason AMD cannot capitalize on their competitors' recent slip ups is they do not have any competitive products on the market. A couple of examples are 290/290X being slower & less efficient than 970/980



It's not fair to compare the newer Maxwell architecture to the older 290/290X. When AMD releases their R9 380 in a couple of months, then we will see a GPU that's probably a good bit faster than the GTX 980.

IMO the reason AMD struggles so much is that they charged too little for their chips. This is evidenced in the profit margins for AMD, Nvidia, and Intel.

*NVIDIA Corporation: key stats and ratios*

| | Q4 (Oct '14) | 2014 |
|---|---|---|
| Net profit margin | 14.12% | 10.65% |
| Operating margin | 17.41% | 12.01% |
| EBITD margin | - | 18.51% |
| Return on average assets | 10.04% | 6.44% |
| Return on average equity | 16.31% | 9.48% |
| Employees | 8,808 | - |
| CDP score | - | 87 B |

http://www.google.com/finance?cid=662925

*Advanced Micro Devices, Inc.: key stats and ratios*

| | Q3 (Sep '14) | 2013 |
|---|---|---|
| Net profit margin | 1.19% | -1.57% |
| Operating margin | 4.41% | 1.92% |
| EBITD margin | - | 6.10% |
| Return on average assets | 1.59% | -1.99% |
| Return on average equity | 13.16% | -15.34% |
| Employees | 10,671 | - |
| CDP score | - | *67 C* |

http://www.google.com/finance?q=NASDAQ:AMD&ei=hEjJVMCCFOOWsge1goDYCw

*Intel Corporation: key stats and ratios*

| | Q3 (Sep '14) | 2013 |
|---|---|---|
| Net profit margin | 22.79% | 18.25% |
| Operating margin | 30.97% | 23.09% |
| EBITD margin | - | 39.01% |
| Return on average assets | 14.59% | 10.89% |
| Return on average equity | 23.07% | 17.58% |
| Employees | 105,600 | - |
| CDP score | - | 85 B |

http://www.google.com/finance?q=NASDAQ:INTC&ei=yEnJVNn3FJOTsgfRnYHYCg


----------



## Parn (Jan 28, 2015)

HumanSmoke said:


> Really?
> So what is the excuse when AMD launched the Evergreen series (HD 5000) in September 2009? Between the launching of the HD 5970 (fastest card during virtually its entire lifetime), 5870, 5850 (best bang for buck card of its era), 5770 (best mainstream card of its era), and 5750, and the arrival of Nvidia's Fermi architecture at the end of March 2010, Nvidia fielded nothing newer than an 18-month old GT200 and an almost 2.5 year old G92.....yet AMD's discrete graphics market share actually* declined*.
> It also doesn't explain how AMD's market share is steadily declining while the company still generally being first to market with new graphics architectures (Evergreen launched before Fermi, Southern Islands launched before Kepler).
> And K7 and K8? At the height of AMD's competitiveness, the company never accounted for more than 25.3% of the x86 processor market, which was achieved in Q1 2006.



Forgot the Fermi delay, my fault. 

Well, regarding AMD's market share in x86, their historical 25.3% figure was achieved because of the major success of K7 and K8. Don't forget Q1 2006 was about 6 months before Intel managed to return to dominance with their Core 2 series of CPUs. During the period between the P3 Coppermine and Conroe, most enthusiasts and IT-literate gamers had an Athlon XP, 64, 64 X2, FX, or Opteron in their PCs. And I almost forgot that NV also got a free ride, with their nForce chipsets, on AMD's success at the time.


----------



## HumanSmoke (Jan 28, 2015)

Parn said:


> Forgot the Fermi delay, my fault.


It isn't an isolated case by any means.


Parn said:


> Well, regarding AMD's market share in x86, their historical 25.3% figure was achieved because of the major success of K7 and K8. Don't forget Q1 2006 was about 6 months before Intel managed to return to dominance with their Core 2 series of CPUs.


I'm well aware of the timeline. I referenced this and a fair bit more in a series I wrote a few months back.
The reason of AMD's market share mediocrity is actually fairly simple. The company was founded on a sales/marketing foundation, Intel and Nvidia (and ATI for that matter) were founded on engineering. When you have substantial saleable IP from the get-go and it arrives at market in good shape, the company has a decided advantage (AMD's early years are mostly about making licensed copies of Fairchild TTL chips, Intel ROM/EPROM/SRAM/processors, analog circuits) in mindshare - firstly from the engineers in the companies they deal with, and later the roll-on effect to end users with the establishment of the brand. Once the brand has been established, any rival has to not only deliver better product, but the incumbent market leader also has to fail. Aside from the occasional blip, these two things have never occurred simultaneously to give AMD any advantage. Without the brand and IP basis, the company is playing follow-the-leader, it can't charge as much for products, it's margins are lower (see 64K's post above as example), and with lower revenue it can't take the financial risks on expansion and R&D without a greater risk of compromising the integrity of the company - you then sit at the crux of a damned if you do, damned if you don't situation. Play it safe (Cyrix, Mostek, S3, Matrox, Chips & Technologies etc.) and undergo death by a thousand cuts, or buy IP as a shortcut but incurring debt - as AMD did with the purchases of ATI, NexGen, the DEC licenses etc.

It also doesn't help that AMD promote the "underdog" persona - plenty of people root for the underdog, but a much larger percentage of people want to allied with the dominant brand.


----------



## rruff (Jan 29, 2015)

HumanSmoke said:


> Nvidia fielded nothing newer than an 18-month old GT200 and an almost 2.5 year old G92.....yet AMD's discrete graphics market share actually* declined*.



Your link shows that AMD gained market share that year, while Nvidia declined a lot. Intel was the big winner.

2006 was the year that Nvidia jumped way ahead of ATI/AMD, and they've held about a 60% vs 40% market share lead in discrete graphics since then. Recently though, Nvidia's share has jumped to 72%. 40% to 28% in half a year is a steep decline for AMD. You could say that is a temporary thing and AMD will come on strong after they introduce new cards this year... but that remains to be seen. I fully expect AMD's top card to beat the GTX 980 in performance, but it will be 28nm and likely no more power efficient than previous AMD offerings, which means it will be a beast. Lower down the pecking order I don't expect their new cards to be any more impressive than the R9 285.


----------



## xfia (Jan 29, 2015)

rruff said:


> Your link shows that AMD gained market share that year, while Nvidia declined a lot. Intel was the big winner.
> 
> 2006 was the year that Nvidia jumped way ahead of ATI/AMD, and they've held about a 60% vs 40% market share lead in discrete graphics since then. Recently though, Nvidia's share has jumped to 72%. 40% to 28% in half a year is a steep decline for AMD. You could say that is a temporary thing and AMD will come on strong after they introduce new cards this year... but that remains to be seen. I fully expect AMD's top card to beat the GTX 980 in performance, but it will be 28nm and likely no more power efficient than previous AMD offerings, which means it will be a beast. Lower down the pecking order I don't expect their new cards to be any more impressive than the R9 285.



all the rumors seem to point at the new radeons having hbm and 20nm cores


----------



## HumanSmoke (Jan 29, 2015)

rruff said:


> Your link shows that AMD gained market share that year, while Nvidia declined a lot. Intel was the big winner.


Look past the graphs (and BTW, the reason Nvidia's overall graphics shipments slipped was because Intel basically killed Nvidia's IGP business with the release of the Nehalem architecture)


> Broken down by form factor, AMD gained integrated mobile GPU market share *but slipped in discrete in both desktop and mobile*. *NVIDIA's share of the discrete desktop market grew in the fourth quarter*,


or another similar take:


> It seems the Radeon HD 5870, HD 5850, HD 5770, HD 5750 and even the almighty HD 5970 were just not enough – or simply just not in enough supply – to tempt punters away from the more widely available 55nm offerings from Nvidia, and that’s even with the temptations of EyeFinity and DX11. Perhaps people don’t care about flashy features as much as AMD thinks they should.





rruff said:


> Nvidia's share has jumped to 72%. 40% to 28% in half a year is a steep decline for AMD. You could say that is a temporary thing and AMD will come on strong after they introduce new cards this year... but that remains to be seen.


AMD has released new graphics series before, and as I noted, some had little or no competition from Nvidia... yet the historical data and the balance sheet don't lie


----------



## newtekie1 (Jan 29, 2015)

ensabrenoir said:


> Then sell the 970's as refurbished.



Yeah, so I can buy a second on the cheap!


----------



## rruff (Jan 29, 2015)

xfia said:


> all the rumors seem to point at the new radeons having hbm and 20nm cores



This is where I got 28nm:


----------



## xfia (Jan 29, 2015)

14 to 28 is a huge jump in lithography..  guess we just need to wait for the real deal to hit the market.


----------



## Prima.Vera (Jan 29, 2015)

64K said:


> ...When AMD releases their R9 380 in a couple of months then we will see a GPU that's probably a good bit faster than the GTX 980.


And then, nVidia will release in 1 week time the 980*Ti*. Same ol' same 'ol.


----------



## xfia (Jan 29, 2015)




----------



## TRWOV (Jan 29, 2015)

You know, this will make Matthew's work harder. I don't know if the GPU database would be able to show all this info in an easy-to-understand format. Maybe change 256-bit to "224+32-bit"? Memory size to "3.5GB+0.5GB"? Maybe add notes? Actually, even "224+32-bit" is wrong, since nVidia said that the two partitions can't be accessed at the same time, so the bus width is either 224-bit or 32-bit depending on the partition being accessed. Maybe "224-bit/32-bit"???
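One hypothetical way a spec database could model a split bus like this (a sketch only; this is not how TechPowerUp's GPU-DB is actually structured):

```python
from dataclasses import dataclass

@dataclass
class MemorySegment:
    size_mib: int
    bus_width_bits: int

@dataclass
class MemorySpec:
    segments: list  # of MemorySegment, fastest partition first

    def total_mib(self):
        return sum(s.size_mib for s in self.segments)

    def label(self):
        # Renders e.g. "3584 MiB @ 224-bit + 512 MiB @ 32-bit"
        return " + ".join(f"{s.size_mib} MiB @ {s.bus_width_bits}-bit"
                          for s in self.segments)

gtx970 = MemorySpec([MemorySegment(3584, 224), MemorySegment(512, 32)])
print(gtx970.label())      # 3584 MiB @ 224-bit + 512 MiB @ 32-bit
print(gtx970.total_mib())  # 4096
```

A single-segment card would just have one entry in the list, so the common case stays simple.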


----------



## Parn (Jan 29, 2015)

HumanSmoke said:


> It also doesn't help that AMD promote the "underdog" persona - plenty of people root for the underdog, but a much larger percentage of people want to allied with the dominant brand.



Totally agree with that. This underdog strategy is really hurting AMD's profit margins.

When I said AMD was less competitive recently, it was more from a power-efficiency and cost point of view. Performance-wise the 290X is only marginally worse than the 980, but because of its poorer efficiency and higher manufacturing cost (~20% larger die), the 290X has to be sold at a far smaller margin than the 980.

I fully expect the new 380X to be faster than the 980. However, what's more important for AMD is to find a way to bring the die size (cost) and power consumption down to a level that is closer to or better than the Maxwells. If not, then the new flagship will eventually be priced as a mid-range product again once the 980 Ti is out, and that's not good for the profit margin.


----------



## Warology (Jan 30, 2015)

I bought two 970's in SLI. Can't complain, they were working awesome, but I returned them, since I was on day 13 of 14 of Best Buy's return limit, and got a 980 along with a small cash refund. I just didn't like what people were saying about higher-res gaming, and the being-lied-to part kind of sucks. But still, the 970 is an awesome card, and there's really no reason to feel bad for buying one or two of these; you people all got really good video cards. Yes, the lying part just gives it a bad taste all around. I just got lucky that I was able to return them, no questions asked, through Best Buy, and that they had both cards in stock at their stores. I was having small issues with older games running SLI, which is normal, but for me SLI is overkill and one card just runs a bit smoother. But lol, use Best Buy to test SLI and different model cards; they take anything back! And yes, I overclocked the f#$&ing shit out of the two SLI 970s before I returned them; pretty good benchmarks, and they took it like a champ.


----------



## _Flare (Feb 14, 2015)




----------



## W1zzard (Feb 14, 2015)

TRWOV said:


> You know, this will make Matthew's work harder. I don't know if the GPU database would be able to show all this info in an easy-to-understand format. Maybe change 256-bit to "224+32-bit"? Memory size to "3.5GB+0.5GB"? Maybe add notes? Actually, even "224+32-bit" is wrong, since nVidia said that the two partitions can't be accessed at the same time, so the bus width is either 224-bit or 32-bit depending on the partition being accessed. Maybe "224-bit/32-bit"???


We have no plans to expand the gpudb in such a way (I do all the coding). There is a text field for comments which could be suited for that kind of info though.


----------



## vega22 (Feb 21, 2015)

suit has been filed in cali.

never saw that coming...


----------



## Xzibit (Feb 21, 2015)

*PCWorld - Nvidia hit with false advertising suit over GTX 970 performance*

*KitGuru - Nvidia slammed with class-action lawsuit over GeForce GTX 970 specifications*

*IncGamers - Nvidia faces lawsuit over GTX 970 specification claims *


You can read the lawsuit at *Scribd here.*


----------



## 64K (Feb 21, 2015)

Xzibit said:


> *PCWorld - Nvidia hit with false advertising suit over GTX 970 performance*
> 
> *KitGuru - Nvidia slammed with class-action lawsuit over GeForce GTX 970 specifications*
> 
> ...



Lawyers will suck up most of the money in legal fees and this could be very expensive for Nvidia if the judge allows it to be a class action suit on behalf of the entire USA. It would have been cheaper to allow 970 owners the option of returning for a refund or giving a partial rebate in exchange for promising not to sue if this does turn into a class action.


----------



## Xzibit (Feb 21, 2015)

Three law firms aren't going after chump change, that's for sure. The one that was investigating isn't listed. They might jump on this one or file a second suit in another state.


----------



## HumanSmoke (Feb 22, 2015)

Xzibit said:


> *KitGuru - Nvidia slammed with class-action lawsuit over GeForce GTX 970 specifications*


Kitguru...should be Kitnoob


> _ Keeping in mind that mobile Nvidia GeForce GTX 980M graphics processor is also affected by the same memory and ROP issues as the GeForce GTX 970, expect notebook gamers to slam Nvidia too… _


Basic failure in understanding the architecture





[Source]


----------



## TheoneandonlyMrK (Feb 22, 2015)

HumanSmoke said:


> Kitguru...should be Kitnoob
> 
> Basic failure in understanding the architecture
> 
> ...




Easily done apparently, NVIDIA's marketing department did worse.

On that note, are we to believe that a new, specially developed option (a linked IMC port, an optional post-bin), put into a chip's design specifically to raise yields and create more profit (or fewer scrapped parts, essentially) within a new-generation arch, was NOT talked about in presentations to the company's own staff well before release?

Very very feckin' dubious... all of it reeks of incompetence or fraud (via the see-no-evil pact). Either way, I would never buy anything on a whim from Nvidia again, and I was going to (i.e. a $69 Tegra 3 7'' pad was going to be mine). I'll look elsewhere first now, as will a few others.


----------



## HumanSmoke (Feb 22, 2015)

theoneandonlymrk said:


> Easily done apparently, NVIDIA's marketing department did worse.


+1, wry observation. 
Still, it's setting the bar pretty low if an "enthusiast" site's knowledge base is deemed acceptable because it rivals that of a marketing/PR dept.


theoneandonlymrk said:


> very very feckin dubious,,, all of it reek's of incompetence or fraud (via the see no evil pact)


There's actually a low-cost solution staring everyone in the face. Why not just give those affected an R9 290 or 290X? AMD's board partners (many of whom also sell Nvidia cards) have an AMD inventory backlog thanks to the channel stuffing Rory instituted. With AMD's designs not exactly flying off the shelves, why not cut a deal? 


theoneandonlymrk said:


> I would never buy anything on a whim from nvidia again, and i was going to (ie 69$ tegra 3 7'' pad, Was going to be mine) ill look elsewhere first now as will a few others.


Happy shopping. The only thing at this stage with a Tegra inside that interests me is an Audi.


----------



## Octopuss (Feb 23, 2015)

http://enbdev.com/index_en.html
Look at the "post" from 28 january. I don't know what to think, is the guy full of shit or not?


----------

