# NVIDIA Responds to GTX 970 Memory Allocation 'Bug' Controversy



## btarunr (Jan 26, 2015)

The GeForce GTX 970 memory allocation bug, discovered late last Friday, wrecked some NVIDIA engineers' weekends as they composed a response to what they say is a non-issue. A bug was discovered in the way the GeForce GTX 970 allocates its 4 GB of video memory, giving some power users the impression that the GPU isn't addressing the last 500-700 MB of its memory. NVIDIA, in its response, explained that the GPU is fully capable of addressing its 4 GB, but does so in an unusual way. Without further ado, the statement.



> The GeForce GTX 970 is equipped with 4GB of dedicated graphics memory. However the 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.



The statement continues:




> We understand there have been some questions about how the GTX 970 will perform when it accesses the 0.5GB memory segment. The best way to test that is to look at game performance. Compare a GTX 980 to a 970 on a game that uses less than 3.5GB. Then turn up the settings so the game needs more than 3.5GB and compare 980 and 970 performance again.
> 
> Here's an example of some performance data:



<table class="tputbl hilight" cellspacing="0" cellpadding="3">
 <caption>
GTX 970 vs. GTX 980 Memory-Intensive Performance Data
 </caption>
 <tr>
 <th scope="col">&nbsp;</th>
 <th scope="col">GeForce <br />
 GTX 980</th>
 <th scope="col">GeForce <br />
 GTX 970</th>
 </tr>
 <tr>
 <th scope="row">Shadow of Mordor</th>
 <td align="right"></td>
 <td align="right"></td>
 </tr>
 <tr class="alt">
 <th scope="row">&lt;3.5GB setting = 2688x1512 Very High</th>
 <td align="right">72 fps</td>
 <td align="right">60 fps</td>
 </tr>
 <tr>
 <th scope="row">&gt;3.5GB setting = 3456x1944</th>
 <td align="right">55 fps (-24%)</td>
 <td align="right">45 fps (-25%)</td>
 </tr>
 <tr class="alt">
 <th scope="row">Battlefield 4</th>
 <td align="right"></td>
 <td align="right"></td>
 </tr>
 <tr>
 <th scope="row">&lt;3.5GB setting = 3840x2160 2xMSAA</th>
 <td align="right">36 fps</td>
 <td align="right">30 fps</td>
 </tr>
 <tr class="alt">
 <th scope="row">&gt;3.5GB setting = 3840x2160 135% res</th>
 <td align="right">19 fps (-47%)</td>
 <td align="right">15 fps (-50%)</td>
 </tr>
 <tr>
 <th scope="row">Call of Duty: Advanced Warfare</th>
 <td align="right"></td>
 <td align="right"></td>
 </tr>
 <tr class="alt">
 <th scope="row">&lt;3.5GB setting = 3840x2160 FSMAA T2x, Supersampling off</th>
 <td align="right">82 fps</td>
 <td align="right">71 fps</td>
 </tr>
 <tr>
 <th scope="row">&gt;3.5GB setting = 3840x2160 FSMAA T2x, Supersampling on</th>
 <td align="right">48 fps (-41%)</td>
 <td align="right">40 fps (-44%)</td>
 </tr>
</table>



> In Shadow of Mordor, performance drops about 24% on GTX 980 and 25% on GTX 970, a 1% difference. In Battlefield 4, the drop is 47% on GTX 980 and 50% on GTX 970, a 3% difference. In CoD: AW, the drop is 41% on GTX 980 and 44% on GTX 970, a 3% difference. As you can see, there is very little change in the performance of the GTX 970 relative to the GTX 980 in these games when it is using the 0.5GB segment.
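
NVIDIA's quoted percentages can be sanity-checked directly against the fps figures in the table above; a quick sketch (the numbers below are just the table's, re-derived):

```python
# Recompute the relative drops NVIDIA quotes, using the fps figures
# from the table above (<3.5GB setting vs. >3.5GB setting).

def pct_drop(fps_before, fps_after):
    """Rounded percentage change going from the light to the heavy setting."""
    return round((fps_after - fps_before) / fps_before * 100)

# (fps at <3.5GB setting, fps at >3.5GB setting), per card, per game.
table = {
    "Shadow of Mordor":      {"GTX 980": (72, 55), "GTX 970": (60, 45)},
    "Battlefield 4":         {"GTX 980": (36, 19), "GTX 970": (30, 15)},
    "CoD: Advanced Warfare": {"GTX 980": (82, 48), "GTX 970": (71, 40)},
}

for game, cards in table.items():
    d980 = pct_drop(*cards["GTX 980"])
    d970 = pct_drop(*cards["GTX 970"])
    print(f"{game}: GTX 980 {d980}%, GTX 970 {d970}%, gap {abs(d980 - d970)} points")
```

Run as-is, the per-card drops and the 1-, 3-, and 3-point gaps come out as NVIDIA states (the gaps are taken after rounding each drop, as NVIDIA evidently did).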



*View at TechPowerUp Main Site*


----------



## Ikaruga (Jan 26, 2015)

They really should have gone the other way around with this whole story. The 970 is still a hell of a card. If they had said it only has 3.5GB, it would all be fine now.
They could have just "leaked" how to enable the last 0.5GB, and enthusiasts would all try it and argue about it on every tech forum, debating whether it's worth it given the performance hit (which is about 2-3 fps?).

There would be no drama now.


----------



## 1c3d0g (Jan 26, 2015)

I don't see what the big deal is. The 970 still performs within spitting distance of the 980, for a lot less $. All of those who have their panties twisted over some non-standard memory configuration need to go back and read the 970 reviews again. This discovery does not suddenly make the 970 slower than it was a month ago. Besides, if you wanted higher performance, you'd have to open up your wallet; there's no way around this, especially with GM200 on the horizon.


----------



## robert3892 (Jan 26, 2015)

On NVIDIA's own forums NVIDIA has stated more information will be forthcoming and that this explanation was to let everyone understand 'how' the memory allocation works.


----------



## Pasha (Jan 26, 2015)

Ikaruga said:


> They really should have gone the other way around with this whole story. The 970 is still a hell of a card. If they had said it only has 3.5GB, it would all be fine now.
> They could have just "leaked" how to enable the last 0.5GB, and enthusiasts would all try it and argue about it on every tech forum, debating whether it's worth it given the performance hit (which is about 2-3 fps?).
> 
> There would be no drama now.



FPS is OK, but you get stutter when using all the memory.


----------



## Ikaruga (Jan 26, 2015)

Pasha said:


> FPS is OK, but you get stutter when using all the memory.


Yes, but it really depends on the actual scenario and the engine in question. If it's allocated as texture memory, for example, most modern engines would just draw the frame without waiting for the texture to be streamed. But yes, you would get stutter if it's a frame buffer, a surface, or compute memory, etc. It can be bad, I agree.


----------



## RejZoR (Jan 26, 2015)

Cranking up settings is, imo, not the way it should be tested, because you change the conditions. Make the textures occupy more memory at the same resolution instead...


----------



## Xzibit (Jan 26, 2015)




----------



## ShredBird (Jan 26, 2015)

Nvidia is a competent company. They're likely prioritizing latency-sensitive tasks to the 3.5GB segment and keeping lower-priority memory in the 0.5GB segment. I mean, their logic is solid; the only time you'll really notice any degradation is when a single frame requires more than the entire 3.5GB segment, because the software has to work out some extra overhead with virtual addressing.

The likelihood of any game needing more than 3.5GB for a single frame is pretty low for where most people game. Nvidia has to make some tradeoffs in their architecture to achieve their massive efficiency gains over Kepler, and tying the processor's addressing capabilities to SMs is one of those tradeoffs. If anything, I think it was a smart tradeoff; it's taken this long for people to even notice there was a compromise made at all.

I think at the end of the day people are going to say: well, this is still one hell of a card for 95% of gamers out there, and the other 5% who want to squeeze every single performance percentage out of 4K and multi-monitor setups are just gonna throw down cash on GTX 980s or 290Xs.


----------



## silapakorn (Jan 26, 2015)

RejZoR said:


> Cranking up settings is, imo, not the way it should be tested, because you change the conditions. Make the textures occupy more memory at the same resolution instead...



How exactly do we do that?


----------



## ShredBird (Jan 26, 2015)

silapakorn said:


> How exactly do we do that?



Seriously, I agree. The graphics API and driver are handling all the memory allocations, and I'm sure Nvidia is handling it in such a way that this handicapped scenario is minimized. I would imagine the only way to expose it is to brute-force the memory capacity, as in Nvidia's examples.


----------



## xfia (Jan 26, 2015)

You guys are talking like it's not both the 970 and the 980. They clearly show both suffer the same performance loss when going over 3.5GB, and it's as much as half in the games shown.

It's not fair to people who have bought two or more for high res and multiple monitors. What would really make everyone happy again is if they could magically make VRAM stack in SLI.


----------



## Xzibit (Jan 26, 2015)

ShredBird said:


> Seriously, I agree. The graphics API and driver are handling all the memory allocations, and I'm sure Nvidia is handling it in such a way that this handicapped scenario is minimized. I would imagine the only way to expose it is to brute-force the memory capacity, as in Nvidia's examples.



Their example of CoD doesn't mention quality. The only one that does is SoM, with less than 3.5GB. In CoD:AW at 4K VHQ you're already supposed to be at 4GB. BF4 at 4K VHQ uses 2.3GB. Details are lacking.

CoD:AW can allocate 3.2GB of memory on a 980 at 1080p VHQ, according to *GameGPU*. Testing at 1440p would have been more convincing, provided they show a detailed rundown. Not that they need to convince me, but not being forthcoming is what's got them here.


----------



## ShredBird (Jan 26, 2015)

Xzibit said:


> Their example of CoD doesn't mention quality. The only one that does is SoM, with less than 3.5GB. In CoD:AW at 4K VHQ you're already supposed to be at 4GB. BF4 at 4K VHQ uses 2.3GB. Details are lacking.
> 
> CoD:AW can allocate 3.2GB of memory on a 980 at 1080p VHQ, according to *GameGPU*. Testing at 1440p would have been more convincing, provided they show a detailed rundown. Not that they need to convince me, but not being forthcoming is what's got them here.



I agree, more thorough testing is required; I was mainly getting at the fact that their initial reasoning and simple example made sense to me. I think this will all blow over and people will go, "Ah, yeah, that's hardly a big deal in the big picture; the GTX 970 is still a killer deal."


----------



## matar (Jan 26, 2015)

NVIDIA should start offering price cuts or rebates on the GTX 970; that should make everybody happy.


----------



## SamuilM (Jan 26, 2015)

These numbers look OK... but my experience with a 2GB card is that FPS drops only when the game needs significantly more memory than you have. The first thing that happens is actually stuttering. Usually when I play a game that saturates the framebuffer with, say, 2.1GB, fps is almost unaffected... but it stutters like crazy. When I try to play a game like Shadow of Mordor on Ultra, fps plummets to the low 20s, from about 60 on High.
To think that I was actually waiting on a double-framebuffer variant of the 970 to upgrade my 2GB 770. The only reason I want to upgrade is that I made the mistake of getting a 2GB card instead of a 4GB one. The performance is enough for my needs, but the stuttering is not something I can live with. Some more in-depth tests would be nice; I know stuttering can be subjective, but there must be a way to test that aspect of the performance.


----------



## ZoneDymo (Jan 26, 2015)

Xzibit said:


>



just golden


----------



## HTC (Jan 26, 2015)

matar said:


> NVIDIA should start offering price cuts or rebates on the GTX 970; that should make everybody happy.



More testing needs to be done before that.

*If* this is indeed the result of this card's problem, then no amount of price cuts or rebates can make up for it, and a recall SHOULD be made for cards showing this.


----------



## RejZoR (Jan 26, 2015)

I don't think they'll do that, because that would mean the most successful chip would become the biggest flop...


----------



## Fluffmeister (Jan 26, 2015)

The price will likely drop a bit anyway once AMD actually have a new product to sell, the current fire sale of products in a market already flooded with cheap ex-miners clearly doesn't make much difference.

The card will already be 6+ months old by then too.


----------



## Prima.Vera (Jan 26, 2015)




----------



## GhostRyder (Jan 26, 2015)

The issue is only a big deal because it was hidden and can cause issues. The solution, right from the beginning, should have been to advertise the card as a 3.5GB card, with the last .5GB that can cause the issues locked out or prioritized for basics, as a nice bonus for people. When you hide a fact from people it tends to make them mad and make things bigger than they might be. Not everyone will be affected by this, and many (if not most) who bought the card probably would not care when weighing a 3.5GB card versus a 4GB card at this price point and performance. It's the people who bought the card specifically for tasks that use that extra .5GB (like 4K, surround, or serious texture packs at high resolutions and FPS) who might see this issue come into play and might have changed their mind about the purchase.

Making this out to be no big deal is how you end up with this type of issue as a continued thing to deal with. Voicing issues and opinions is how you end up with better products, which is great for everyone and drives things forward.


----------



## LAN_deRf_HA (Jan 26, 2015)

For anyone who finds this response lacking: you're not alone. http://www.anandtech.com/show/8931/nvidia-publishes-statement-on-geforce-gtx-970-memory-allocation


----------



## Easy Rhino (Jan 26, 2015)

Ha! There were a lot of armchair engineers on here yapping about this and that. /rekt

Just because you can Google doesn't mean you have any idea what you are talking about.


----------



## XL-R8R (Jan 26, 2015)

Easy Rhino said:


> Ha! There were a lot of armchair engineers on here yapping about this and that. /rekt
> 
> Just because you can Google doesn't mean you have any idea what you are talking about.



A voice of reason!


----------



## xfia (Jan 26, 2015)

OK, so they are saying the 980 doesn't have the partition, right? And they both have the same performance loss in game testing. Hell, if that is the case there is seriously nothing wrong. I mean, what happens to any other GPU (or CPU, for that matter) if you throw more at it than it can handle?
NVIDIA is probably laughing about it, because it took them an hour to figure it out and forward a statement to tech sites. Something so simple is hard for a 20-year professional to quickly figure out when they were expecting something much more complex.


----------



## newtekie1 (Jan 26, 2015)

Ikaruga said:


> They really should have gone the other way around with this whole story. The 970 is still a hell of a card. If they had said it only has 3.5GB, it would all be fine now.
> They could have just "leaked" how to enable the last 0.5GB, and enthusiasts would all try it and argue about it on every tech forum, debating whether it's worth it given the performance hit (which is about 2-3 fps?).
> 
> There would be no drama now.



But why say a card with 4GB, with access to all 4GB, is a 3.5GB card? I don't believe this is the first time we've seen something like this. All those asymmetrical cards out there, with memory amounts that don't fit the memory bus, have the same problem once you cross a certain memory amount. The final bit of memory has much lower bandwidth. But the latency is still super low compared to offloading that data to system RAM and reading it from there.



Pasha said:


> FPS is OK, but you get stutter when using all the memory.



Stuttering when you hit the extra 500MB partition shouldn't be that bad; I'd venture to say it is probably going to be unnoticeable. That extra partition still has way better bandwidth and latency than when the GPU has to start swapping things out to system RAM. That is when the real stuttering starts to happen.
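
For a sense of scale: even the slow segment should still beat spilling over the PCIe bus into system RAM on raw bandwidth. A rough, theoretical-peak sketch, assuming the 970's reference 7 Gbps GDDR5 memory and a PCIe 3.0 x16 link (latency, which also matters here, isn't captured):

```python
# Rough theoretical-peak comparison of where "overflow" data can live.
# Assumed figures: reference GTX 970 memory (7 Gbps GDDR5 on a 256-bit
# bus split into eight 32-bit controllers) and PCIe 3.0 x16.

fast_pool_gbs = 7 * 32 / 8 * 7    # seven 32-bit controllers -> 196.0 GB/s
slow_pool_gbs = 1 * 32 / 8 * 7    # one 32-bit controller    -> 28.0 GB/s
pcie3_x16_gbs = 15.75             # PCIe 3.0 x16 peak, one direction

print(slow_pool_gbs)                  # 28.0
print(slow_pool_gbs / pcie3_x16_gbs)  # ~1.8x the PCIe path's peak
```

Which is the point being made here: the 0.5GB pool acts as a middle tier between full-speed VRAM and the system-RAM path.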



HTC said:


> *If* this is indeed the result of this card's problem, then no amount of price cuts or rebates can make up for it and a recall SHOULD be made for cards showing this.



It isn't.  The video is either fake or he has something else going on causing the issue.  Just look at his memory usage in the second part.  His system memory usage is under 6GB in the first half and over 13GB in the second...


----------



## xorbe (Jan 26, 2015)

I would rather have a straight-up 3.5GB card than a 4GB card that gets wonky on the last 0.5GB.


----------



## buggalugs (Jan 26, 2015)

xfia said:


> OK, so they are saying the 980 doesn't have the partition, right? And they both have the same performance loss in game testing. Hell, if that is the case there is seriously nothing wrong. I mean, what happens to any other GPU (or CPU, for that matter) if you throw more at it than it can handle?
> NVIDIA is probably laughing about it, because it took them an hour to figure it out and forward a statement to tech sites. Something so simple is hard for a 20-year professional to quickly figure out when they were expecting something much more complex.



It isn't just FPS, though. The difference is the 980 still renders fluid gameplay above 3.5GB, while the 970 turns into a stuttering mess above 3.5GB, and some users get weird artifacts and flashes of purple and stuff across the screen.


----------



## xfia (Jan 26, 2015)

If you read between the lines, they basically said: look, nubs, there is nothing wrong with the memory, and this is how you test it.

NVIDIA would probably say the 980 can deliver smoother gameplay because it is more powerful and harder to push to its limit, and as a result costs more.

I think people with really bad issues overclocked too high and messed their cards up.
Calling out the experts to end the nonsense. Let's see the test benches fire up!


----------



## XL-R8R (Jan 26, 2015)

buggalugs said:


> It isn't just FPS, though. The difference is the 980 still renders fluid gameplay above 3.5GB, *the 970 turns into a stuttering mess above 3.5GB*, and some users get weird artifacts and flashes of purple and stuff across the screen.



Got any of your own evidence, or, like the majority of the people here, are you just repeating FUD from around the interwebz?

Having *used my 970 extensively*, and even *broken the mythical 3.5GB barrier* on a few AAA titles, it doesn't bring performance down to the ground like the naysayers and trolls would LOVE people to believe... or at least, I'm yet to see any discernible difference.

I somehow think nVidia's recent graphs on performance characteristics are very close to, if not exactly on par with, the real numbers after going over 3.5GB.

More testing is needed, this is obvious, but it isn't nearly as bad as it appears... about the 'worst' thing is that (once again) a company played a sneaky PR move lol


----------



## Sasqui (Jan 26, 2015)

GhostRyder said:


> The issue is only a big deal because it was hidden and can cause issues. The solution, right from the beginning, should have been to advertise the card as a 3.5GB card, with the last .5GB that can cause the issues locked out or prioritized for basics, as a nice bonus for people.



So, doing some math 3.5 vs. 4.0... they should quickly offer a 13% refund to folks who purchased the card prior to the news


----------



## 64K (Jan 26, 2015)

Sasqui said:


> So, doing some math 3.5 vs. 4.0... they should quickly offer a 13% refund to folks who purchased the card prior to the news



I will take the 13% ($45.50), or Jen-Hsun Huang can come to my house and detail my Camry.


----------



## the54thvoid (Jan 26, 2015)

From Hexus...

http://hexus.net/tech/news/graphics/79925-nvidia-explains-geforce-gtx-970s-memory-problems/



> This Nvidia-provided slide gives brief insight into how the GTX 970 is constructed. The three disabled SMs are shown at the top and 256KB L2s and pairs of 32-bit memory controllers on the bottom. Notice the greyed-out right-hand L2 for this GPU? Tied into the ROPs as they are this is a direct consequence of reducing the overall ROP count. GTX 970 has 1,792KB of L2 cache, not 2,048KB, but, as Alben points out, still has a greater cache-to-SMM ratio than GTX 980.
> 
> Historically, including up to the Kepler generation, cutting off the L2/ROP portion would require the entire right-hand quad section to be deactivated too. Now, with Maxwell, Nvidia is able to use some smarts and still tap into the 64-bit memory controllers and associated DRAM even though the final L2 is missing/disabled. In other words, compared to previous generations, it can preserve more of the performance architecture even though a key part of a quad is purposely left out. This is good engineering.
> 
> ...



This seems pertinent: 



> The Lazygamer test at the >3.5GB metric simply probes bandwidth on a single DRAM, which is admittedly low, or 1/8th of the total speed, while in-game code, according to Nvidia, doesn't pinpoint memory in this way
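
The "1/8th of the total speed" figure follows directly from the memory layout: the 0.5GB segment sits behind a single 32-bit controller out of eight. A minimal sketch, assuming the 970's reference 7 Gbps GDDR5 on a 256-bit bus:

```python
# Why a single-DRAM probe reads 1/8th of the card's peak bandwidth.
# Assumes reference GTX 970 memory specs.
bus_width_bits = 256              # eight 32-bit memory controllers in total
data_rate_gbps = 7                # effective GDDR5 data rate per pin
controllers = 8

total_gbs = bus_width_bits / 8 * data_rate_gbps   # GB/s across the whole bus
per_controller_gbs = total_gbs / controllers      # one 32-bit slice

print(total_gbs, per_controller_gbs)  # 224.0 28.0
```

So a benchmark hammering only the 0.5GB partition sees roughly 28 GB/s, while game allocations striped across the seven-controller 3.5GB pool do not.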


----------



## HumanSmoke (Jan 26, 2015)

Sasqui said:


> So, doing some math 3.5 vs. 4.0... they should quickly offer a 13% refund to folks who purchased the card prior to the news


Like AMD did with Bulldozer when people found out that 4 modules isn't necessarily the same as 8 cores?





EDIT: Anandtech's Ryan Smith has a decent write-up of the issue for anyone interested - it certainly beats the flailing around in the dark that some people are indulging in.


----------



## trenter (Jan 26, 2015)

It's not a "bug", it's a total slap in the face of 970 owners. Nvidia just realized that this whole time they were selling 970s, the specifications they were quoting were actually wrong. They admitted the 970 only really has 3.5GB of usable memory, because the other .5GB is accessed at 28 GB/s... WTF. Now let's hear the nvidia fangirls find a way to defend this joke of a company... starting with W11zzardd.


----------



## 64K (Jan 26, 2015)

trenter said:


> It's not a "bug", it's a total slap in the face of 970 owners. Nvidia just realized that this whole time they were selling 970s, the specifications they were quoting were actually wrong. They admitted the 970 only really has 3.5GB of usable memory, because the other .5GB is accessed at 28 GB/s... WTF. Now let's hear the nvidia fangirls find a way to defend this joke of a company... starting with W11zzardd.



Flame much?


----------



## HumanSmoke (Jan 26, 2015)

trenter said:


> It's not a "bug", it's a total slap in the face of 970 owners. Nvidia just realized that this whole time they were selling 970s, the specifications they were quoting were actually wrong. They admitted the 970 only really has 3.5GB of usable memory, because the other .5GB is accessed at 28 GB/s... WTF. Now let's hear the nvidia fangirls find a way to defend this joke of a company... starting with W11zzardd.


How auspicious. 2 posts in and you've already called out two staff members.


----------



## Xzibit (Jan 26, 2015)

64K said:


> I will take the 13% ($45.50), or Jen-Hsun Huang can come to my house and detail my Camry.



Don't forget the difference in ROP and L2 cache.

GTX 970 now has 52 ROPs instead of 64 and 1792KB of L2 Cache instead of 2048KB



			
NVIDIA’s Senior VP of GPU Engineering said:

> _To those wondering how peak bandwidth would remain at 224 GB/s despite the division of memory controllers on the GTX 970, Alben stated that it can reach that speed only when memory is being accessed in both pools._



This is entertaining to see develop


----------



## rtwjunkie (Jan 26, 2015)

HumanSmoke said:


> How auspicious. 2 posts in and you've already called out two staff members.


 
Of course, his bravery is questionable, since neither post actually tagged them. In fact, it seems he deliberately misspelled our site owner's name. LOL!


----------



## Rahmat Sofyan (Jan 26, 2015)

For Fun, W1zz should change the Value and Conclusion for GTX 970 right now 
Oh, and nVidia seems fucked


----------



## HumanSmoke (Jan 26, 2015)

Xzibit said:


> GTX 970 now has 52 ROPs instead of 64 and 1792KB of L2 Cache instead of 2048KB


The revised total of ROPs is 56 not 52.


Rahmat Sofyan said:


> For Fun, W1zz should change the Value and Conclusion for GTX 970 right now
> Oh, and nVidia seems fucked


Why? What changed? Is the performance of the card any less? Does it use more power than it did when it launched?


----------



## Steevo (Jan 26, 2015)

Xzibit said:


> Don't forget the difference in ROP and L2 cache.
> 
> GTX 970 now has 52 ROPs instead of 64 and 1792KB of L2 Cache instead of 2048KB
> 
> ...




Considering it cannot physically access both pools, as the cache crossbar is used to access the last .5GB, and that cannot be done while it's busy reading its own memory in the current configuration...

Looks like Nvidia made up specs and hoped no one would notice, and now that people have noticed, they are going to "correct" the specs to what they knew them to be from the start.


----------



## Sasqui (Jan 26, 2015)

HumanSmoke said:


> Like AMD did with Bulldozer when people found out that 4 modules isn't necessarily the same as 8 cores?



50% refund right there!


----------



## Xzibit (Jan 26, 2015)

Here is PcPer's take on it.


----------



## the54thvoid (Jan 26, 2015)

Xzibit said:


> Here is PcPer's take on it.



Kinda what Anandtech are saying too.

Nvidia has egg on its face for, ahem, 'lying' about its card, no doubt, but the performance of it isn't an issue. Each reviewer looking at it in turn (PCPer, Anandtech, Hexus) has the same threefold conclusion:
1) Nvidia have slipped up, and undoubtedly their PR and engineering sections have 'misled' the public somewhat. (IMO, I don't believe it was innocent, but hey.)
2) The real performance impact isn't there. The card, according to all sites so far, is still great.
3) People are trying, and so far failing, to find a real-world gaming example that kills the card's performance, outside of a load that would do that anyway based on its SMM units etc.

FWIW, IMO, Nvidia knew fine well what they were releasing and probably expected no fallout from it, given it has no impact on real scenarios. But techy people like to dig, and found an anomaly. Now NV have to explain it, and it's hard to make this one sound like a genuine 'miss'. Even if it was a genuine lapse, it's very hard to sell to us, the public.

But hey, this ugly truth (bad move NV, but still a great card) won't stop people throwing those ignorance stones.


----------



## FX-GMC (Jan 26, 2015)

> With that said, our 4K test did pick up a potential discrepancy in _Shadows of Mordor_. While the frame rates were equivalently positioned at both 4K and 1080p, the frame _times_ weren't. The graph below shows the 1% frame times for _Shadows of Mordor_, meaning the _worst_ 1% times (in milliseconds). http://www.extremetech.com/extreme/...idias-penultimate-gpu-have-a-memory-problem/2








Memgate or Stuttergate?


----------



## Ja.KooLit (Jan 26, 2015)

Phew. This issue is all over the net now.


----------



## Rahmat Sofyan (Jan 26, 2015)

HumanSmoke said:


> The revised total of ROPs is 56 not 52.
> 
> Why? What changed? Is the performance of the card any less? Does it use more power than it did when it launched?



Just for fun dude ...


----------



## GhostRyder (Jan 26, 2015)

Xzibit said:


> Don't forget the difference in ROP and L2 cache.
> 
> GTX 970 now has 52 ROPs instead of 64 and 1792KB of L2 Cache instead of 2048KB
> 
> ...


Yea, I wonder how many more changes to cards we will get down the line, LOL. It's a little annoying to hide certain little details about cards and then expect to just sweep it under the rug. It still would have been easier to be much more up front about it; then the number of people who bought the card and changed their mind would probably be a small margin (heck, some might have ended up going to the GTX 980). It just makes you look bad not saying something, and now doubt lingers on the mind no matter how you want to think about it.


Xzibit said:


> Here is PcPer's take on it.


Good vid.


----------



## Xzibit (Jan 26, 2015)

the54thvoid said:


> Kinda what Anandtech are saying too.
> 
> Nvidia has egg on its face for, ahem, 'lying' about its card, no doubt, but the performance of it isn't an issue. Each reviewer looking at it in turn (PCPer, Anandtech, Hexus) has the same threefold conclusion:
> 1) Nvidia have slipped up, and undoubtedly their PR and engineering sections have 'misled' the public somewhat. (IMO, I don't believe it was innocent, but hey.)
> ...



Right now the "miscommunication" is more interesting, along with the debate.

Performance-wise it's not visually apparent in the majority of games, but if the so-called "next-gen PS4/XB1 ports" ever get here with "DX12", the issues will be more apparent to the majority. At least that's how I see it. Most games are catching up to DX10+, and the new PS4/XB1 games are being ported with texture packs that are coming in at 3GB at VHQ @ 1080p. Who knows, by then Nvidia might have a 1070 that doesn't have these issues.

I go back to my displeasure at both camps minimizing their offerings; the 970 looks like it was more of a just-good-enough replacement for the 780s. 280->285, 760->960. As consumers we are going to keep getting screwed, and it seems more and more of the majority are willing to spread cheeks and take it, and brag about what a wonderful experience it was.

It will eventually play itself out or continue being a thorn.


----------



## Steevo (Jan 26, 2015)

http://www.dsogaming.com/news/nvidi...able-to-efficiently-allocate-more-than-3-5gb/


Looks to be one of the original sources, and like many others have noted, when modding games such as Skyrim you run out of memory long before GPU power. Wonder how well GTA5 will do with the lack of more vmem for Nvidia users, again?


Also describes many of the users' issues http://forums.evga.com/Games-stuttering-with-GTX-970-and-Vsync-m2222444.aspx with stuttering as the required textures are pulled out of the slower memory. https://forums.geforce.com/default/topic/777475/geforce-900-series/gtx-970-frame-hitching-/


Current user fix? Run at 30 FPS, just like a console!!!!


----------



## 64K (Jan 26, 2015)

How exactly is the GTX 970 a failure in real world performance in games?


----------



## newtekie1 (Jan 26, 2015)

GhostRyder said:


> Good vid.


I agree. And it gives a good idea of why they decided to do it this way from an engineering standpoint. That 0.5GB is still much faster than having to access system RAM, so it acts as a final buffer before the card has to start offloading to system RAM. It does in fact make the card faster in situations where you exceed 3.5GB of memory used.

The wrong ROP and L2 amounts do suck; that is something nVidia should address. (Give me a free game and I'll be happy.)


----------



## Steevo (Jan 26, 2015)

64K said:


> How exactly is the GTX 970 a failure in real world performance in games?




For most stock games that use less than 3.5GB of vmem it's great. As soon as you step over that, it starts to stutter even though it has the GPU horsepower to run it. So the issue is all the people who bought a 4GB card to run modded Skyrim and other games that use vmem to the full, who can't.

If you don't understand that, it's OK, but there are some who mod games and buy hardware to support it.


----------



## Sasqui (Jan 26, 2015)

newtekie1 said:


> It does in fact make the card faster in situations where you exceed 3.5GB of memory used.



Faster than paging, yes.  Sorry, I find the whole thing kind of amusing.  Truth be told, if I could trade my 290x for a 270 I'd still consider it.


----------



## Xzibit (Jan 26, 2015)

Sasqui said:


> *Faster than paging, yes*.  Sorry, I find the whole thing kind of amusing.  Truth be told, if I could trade my 290x for a 270 I'd still consider it.



Still haven't gotten your msg yet






Heh. I've been meaning to replace an aging 480 with a 270/X, but decided to wait for the 960, which disappointed. Now I'll wait to see what the 370 has to offer. I don't want to spend too much on it, because I'm still up in the air about either keeping the PC as an HTPC or giving it away.


----------



## newtekie1 (Jan 26, 2015)

Steevo said:


> For most stock games that use less than 3.5GB of VRAM it's great. As soon as you step over that it starts to stutter, even though the GPU has the horsepower to run it. So the issue is for all the people who bought a 4GB card to run modded Skyrim and other games that use VRAM to the full, who now can't.
> 
> If you don't understand that, it's OK, but there are some who mod games and buy hardware to support it.



I play modded Skyrim, it often goes over 3.5GB memory usage on my 970, and the stuttering that people say is so horrible just simply isn't.  It isn't nearly as bad as when the GPU has to access system RAM, because that 0.5GB is still way faster than accessing system RAM.

And think about it, people playing these games that go over 3.5GB haven't been complaining about stuttering.  The only reason we even noticed this problem was because some people noticed that some programs were saying they were only using 3.5GB and no more when they knew they should be using more.  It wasn't because they were experiencing stuttering.


----------



## Casecutter (Jan 26, 2015)

In the PCPer video at 8:30 they say, "It makes sense if you have to disable one of the L2s for binning purposes, which it was... all for binning purposes..." and the other guy replies, "Yes, it was all for binning purposes."

I think what we need to know is: was that L2 cut only because the 3 SM cores were disabled? Or was it that one L2 more often came out defective, and they could only make the chip run at its rated speed by disabling it?

If we knew the truth either way... If one of the L2s is unutilized due to the lower SM count, and they burn one off so that all 970s deliver the same level of performance, then fine, and PR/marketing might have just goofed. But if the SM count has nothing to do with the L2 being defective, then I would think someone thought they could pull the wool over folks' eyes. Trying to avoid admitting they had to burn off the L2 is messing with the published specs.


----------



## trenter (Jan 26, 2015)

It seems they didn't just lie about the memory: the 970 only has 56 ROPs and 1.75 MB of L2 cache.


HumanSmoke said:


> How auspicious. 2 posts in and you've already called out two staff members.


Only one member, the one that seems to be infatuated with the nvidia corporation.


----------



## Eroticus (Jan 26, 2015)

Some surprising news came from PCPerspective today. After a long debate and hundreds of reports about the GTX 970's slower memory buffer, NVIDIA officially admitted that there was a miscommunication between the marketing and engineering teams.

NVIDIA GeForce GTX 970 3.5 GB memory issue
The GM204 diagram below was made by NVIDIA's Jonah Alben (SVP of GPU engineering) specifically to explain the differences between the GTX 970 and GTX 980 GPUs. What was not known until today, and was falsely advertised by NVIDIA, is that the GTX 970 only has 56 ROPs and a smaller L2 cache than the GTX 980. Updated specs clarify that the 970 has one of its eight L2 modules disabled, so the total L2 cache is not 2048 KB but 1792 KB. That alone probably wouldn't change anything; however, this particular L2 module is directly connected to a 0.5 GB DRAM segment.

To put this as simply as possible: the GeForce GTX 970 has two memory pools: 3.5 GB running at full speed, and 0.5 GB only used once the 3.5 GB pool is exhausted. However, the second pool runs at 1/7th the speed of the main pool.

So technically, until you deplete the memory available in the first pool, you will be using a 3.5 GB buffer with a 224-bit interface.

Ryan Shrout explains:

In a GTX 980, each block of L2 / ROPs directly communicates through a 32-bit portion of the GM204 memory interface and then to a 512MB section of on-board memory. When designing the GTX 970, NVIDIA used a new capability of Maxwell to implement the system in an improved fashion that would not have been possible with Kepler or previous architectures. Maxwell's configurability allowed NVIDIA to disable a portion of the L2 cache and ROP units while using a "buddy interface" to continue to light up and use all of the memory controller segments. Now, the SMMs use a single L2 interface to communicate with both banks of DRAM (on the far right), which does create a new concern. (…)

And since the vast majority of gaming situations occur well under the 3.5GB memory size this determination makes perfect sense. It is those instances where memory above 3.5GB needs to be accessed where things get more interesting.

Let’s be blunt here: access to the 0.5GB of memory, on its own and in a vacuum, would occur at 1/7th of the speed of the 3.5GB pool of memory. If you look at the Nai benchmarks (EDIT: picture here) floating around, this is what you are seeing.
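The "1/7th" figure follows directly from the bus-width split described above. A back-of-envelope sketch, assuming the GTX 970's published 7 Gbps effective GDDR5 pin rate (the stripe figures are the 224-bit/32-bit split from the article, not NVIDIA-supplied numbers):

```python
# Back-of-envelope bandwidth arithmetic for the GTX 970's segmented memory.
# Assumes the published 7 Gbps effective GDDR5 data rate per pin; the
# 224-bit / 32-bit split is the one described in the article above.

PIN_RATE_GBPS = 7  # effective data rate per pin, GDDR5 @ 7 Gbps

def peak_bandwidth_gbs(bus_width_bits):
    """Peak bandwidth in GB/s for a memory interface of the given width."""
    return bus_width_bits // 8 * PIN_RATE_GBPS

full_bus  = peak_bandwidth_gbs(256)  # advertised figure: 224 GB/s
fast_pool = peak_bandwidth_gbs(224)  # 3.5 GB segment, 7 channels: 196 GB/s
slow_pool = peak_bandwidth_gbs(32)   # 0.5 GB segment, 1 channel:   28 GB/s

print(full_bus, fast_pool, slow_pool)  # 224 196 28
print(fast_pool // slow_pool)          # 7 -> the "1/7th speed" claim
```

Note that, per the PCPer description, the two pools are not accessed simultaneously, so the two segment figures don't simply add back up to the advertised 224 GB/s in practice.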






**NVIDIA GeForce GTX 970 Corrected Specifications**

| | GeForce GTX 970 | GeForce GTX 970 'Corrected' |
| --- | --- | --- |
| GPU | 28nm GM204-200 | 28nm GM204-200 |
| CUDA Cores | 1664 | 1664 |
| TMUs | 104 | 104 |
| ROPs | 64 | 56 |
| L2 Cache | 2048 KB | 1792 KB |
| Memory Bus | 256-bit | 256-bit |
| Memory Size | 4GB | 4GB (3.5GB + 0.5GB) |
| TDP | 145W | 145W |
Check this video from PCPerspective:


Source: PCPerspective






-----------

NOT GOING TO BE FIXED =[ TO ALL 970 OWNERS UPDATE UR BOX WITH PEN OR SOMETHING....


----------



## Xzibit (Jan 26, 2015)

newtekie1 said:


> I play modded Skyrim, it often goes over 3.5GB memory usage on my 970, and the stuttering that people say is so horrible just simply isn't.  It isn't nearly as bad as when the GPU has to access system RAM, because that 0.5GB is still way faster than accessing system RAM.
> 
> And think about it, people playing these games that go over 3.5GB haven't been complaining about stuttering.  The only reason we even noticed this problem was because some people noticed that some programs were saying they were only using 3.5GB and no more when they knew they should be using more.  It wasn't because they were experiencing stuttering.



The vast majority overlook things. Take the recent G-Sync 30fps flickering issue: it was even in the PCPer review graphs, and they didn't bother looking at it until well after production ASUS ROG Swift models had been out for several months. Ryan didn't catch it, Allyn didn't; I believe it was one of his friends, even after the modules were updated to minimize the issue. Blur Busters and other forums had users complaining about it, but not until it hits certain people in the face does it get widely exposed. Heck, I'm sure people still don't know about it, or just forgot, because they deem it something that won't affect them.

It comes down to how individuals are affected by it. Like this current debate: we have people saying it's no big deal and others saying it's horrible. I'd rather have the information out there, whether it affects me or not. The more information we have, the better-informed decisions we can make.


----------



## trenter (Jan 26, 2015)

the54thvoid said:


> Kinda what Anandtech are saying too.
> 
> Nvidia has egg on it's face for 'ahem' lying about it's card, no doubt but the performance of it isn't an issue.  Each reviewer looking at it in turn, (PCper, Anand, Hexus) has the same conclusion which is threefold:
> 1) Nvidia have slipped up and undoubtedly their PR and engineering sections have 'misled' the public somewhat.  (IMO, I don't believe it was innocent but hey)
> ...


Ignorance stones? Only a fanboy in denial, or a complete moron, would take Nvidia's explanation as truth with zero skepticism. Performance at the time of review isn't the problem; the problem is that people were sold a GPU expecting its specifications to be the same four months later as they were at release. Also, people don't buy a GPU just to play the games on the market up to the point of release; they buy with future performance in mind. Don't you think some people might have decided to skip the 970 if they had known there could be problems addressing the full 4GB of RAM in the future, especially when console ports are using up to 3.5GB at 1080p now? What about the people who were worried about that 256-bit memory bus? Nvidia pointed out to reviewers that their improved L2 cache would keep memory requests to a minimum, allowing more "achieved" memory bandwidth even with a smaller memory interface. The point is they lied about the specs, and it's almost impossible to believe this was some kind of big misunderstanding between the engineering and marketing teams that they just happened to notice after 970 users discovered it.


----------



## HTC (Jan 26, 2015)

newtekie1 said:


> It isn't.  The video is either fake or he has something else going on causing the issue.  Just look at his memory usage in the second part.  His system memory usage is under 6GB in the first half and over 13GB in the second...



I did say more testing was required, and emphasised the literally big "If".


----------



## Rahmat Sofyan (Jan 26, 2015)

???



> if the specs for the gtx 970 are wrong how come gpu z shows 64 rops



from geforce forums



> GPU-Z reads card bios, so the bios has deliberately had this wrong info.



true or not?


----------



## HumanSmoke (Jan 27, 2015)

Xzibit said:


> Performance-wise it's not visually apparent with the majority of games, but if the so-called "next-gen PS4/XB1 port" games ever get here with "DX12", the issues will be more apparent to the majority. At least that's how I see it. Most games are catching up to DX10+, and the new PS4/XB1 games are being ported with texture packs that come in at 3GB at VHQ @ 1080p. Who knows, by then Nvidia might also have a 1070 that doesn't have these issues.


Well, if 4GB is the new black now, it is almost certain that the requirement in the enthusiast segment will go higher with the headroom available to consoles. With 4Gbit chips now in production, it seems likely that 256-bit/8GB could well be the next stepping stone - at least as far as Nvidia is concerned until HBM gen2 arrives. Not sure how AMD gets around the 4GB limitation for HBM gen1 though.
vRAM capacity in the lower segments has always been more about marketing than real-world gain - you still need the GPU power to fully utilize the framebuffer, otherwise (technically) you could release a 256-bit card with 16GB of vRAM ( 16 chips @ 4Gbit with dual 16-bit I/O - the same reduced I/O that allows a FirePro W9100 to carry 16GB) - might be marketable, but it sure won't be a balanced design.


Xzibit said:


> I go back to my displeasure of both camps minimizing the offerings and the 970 looks like its was more of a just good enough to replace the 780s. 280->285, 760->960.  As consumers we are going to keep getting screwed and it seems more and more of the majority are willing to spread cheeks and take it and brag about how a wonderful experience it was.


Well, both vendors are constrained by the process node, transistor density, die size, and power budget. Any gains made on GPUs using the same 28nm process aren't going to be significant compared to moving to a new process. Technically, both vendors could go for broke and churn out 650mm² GPUs, but the pricing to recoup costs, lower yields, and limited market would be a killer - and of course a quantum leap in single-GPU performance basically starts killing the market for dual cards and multi-card SLI/CFX, unless the software evolves at a similar (or faster) rate. It also doesn't address the far larger and more lucrative markets - the low-power mobile sector, and shoehorning the latest "must have" features into mainstream products.


----------



## Steevo (Jan 27, 2015)

newtekie1 said:


> I play modded Skyrim, it often goes over 3.5GB memory usage on my 970, and the stuttering that people say is so horrible just simply isn't.  It isn't nearly as bad as when the GPU has to access system RAM, because that 0.5GB is still way faster than accessing system RAM.
> 
> And think about it, people playing these games that go over 3.5GB haven't been complaining about stuttering.  The only reason we even noticed this problem was because some people noticed that some programs were saying they were only using 3.5GB and no more when they knew they should be using more.  It wasn't because they were experiencing stuttering.




Look at the dates in the threads I listed before you start spewing, will ya? I was contemplating a 970 until I started reading about all the stuttering and glitching issues they were plagued with. I had money in hand; instead I spent it on close performance in this card for half the price, and a bunch of new games.


----------



## newtekie1 (Jan 27, 2015)

Eroticus said:


> NOT GOING TO BE FIXED =[ TO ALL 970 OWNERS UPDATE UR BOX WITH PEN OR SOMETHING....



I was going to use a sharpie, but then I realized none of the affected specs are actually listed on the box, because GPUs aren't marketed based on L2 cache size and ROPs...


Steevo said:


> Look at the dates in the threads I listed before you start spewing, will ya? I was contemplating a 970 until I started reading about all the stuttering and glitching issues they were plagued with. I had money in hand; instead I spent it on close performance in this card for half the price, and a bunch of new games.



What the f*(k are you talking about? I'm not going to look through a bunch of threads to find your useless posts.


----------



## TRWOV (Jan 27, 2015)

This thread has been awarded 3 popcorn MJs  Congrats!!!


----------



## Steevo (Jan 27, 2015)

newtekie1 said:


> I was going to use a sharpie, but then I realized none of the affected specs are actually listed on the box, because GPUs aren't marketed based on L2 cache size and ROPs...
> 
> 
> What the f*(k are you talking about? I'm not going to look through a bunch of threads to find your useless posts.


Not mine, but threads at Nvidia about stuttering from right after release, 18 pages long, and game threads about stuttering.


----------



## lukesky (Jan 27, 2015)

Xzibit said:


>


Nvidia surely is a master of the 'force.


----------



## 15th Warlock (Jan 27, 2015)

http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation

Holy shit, the cat's out of the bag: the hardware spec sheet given to review sites was wrong. The 970 does in fact feature fewer ROPs and less cache than the 980, in addition to the divided VRAM partition mentioned before.

The card uses the first 3.5GB of VRAM at full bandwidth, with the memory crossbar accessing 7 memory modules, but the remaining 512MB has to be accessed at a much lower bandwidth due to the single-channel nature of its separate crossbar port - faster than regular PCIe bandwidth, but many times slower than the high-performance 3.5GB of VRAM in the first partition. So the card technically speaking has 4GB of VRAM, but the uppermost segment of it is almost an order of magnitude slower than the first chunk of memory.

The card is still a solid performer, and probably the best bang for your buck for gaming at 1440p and below, but Nvidia made a big no-no here, and they must be in full damage control mode.


----------



## Xzibit (Jan 27, 2015)

15th Warlock said:


> http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation
> 
> Holy shit, the cat's out of the bag: the hardware spec sheet given to review sites was wrong. The 970 does in fact feature fewer ROPs and less cache than the 980, in addition to the divided VRAM partition mentioned before.
> 
> ...



Here is that page






If you really want a chuckle look at this page.









			
Nvidia GeForce GTX 980/970 Reviewer's Guide said:

> _Equipped with 13 SMX units and 1664 CUDA Cores the GeForce GTX 970 also has the rendering horsepower to tackle *NEXT GENERATION GAMING*. And with its 256-bit memory interface, 4GB frame buffer, and 7Gbps memory *the GTX 970 ships with THE SAME MEMORY SUBSYSTEM AS OUR FLAGSHIP GEFORCE GTX 980*, allowing gamers to crank up the settings and resolutions in graphic-intensive games like Assassin's Creed: Unity and still enjoy fluid frame rates._


----------



## xorbe (Jan 27, 2015)

Am I reading into it wrongly, or is the 970 basically operating in 224-bit mode most of the time, and not 256-bit mode?


----------



## Steevo (Jan 27, 2015)

Xzibit said:


> Here is that page
> 
> 
> 
> ...




NOOOO NVIDIA IS OUR BULL GOD!!!!!!!


Seriously though, for most it's still a good deal.


----------



## R-T-B (Jan 27, 2015)

Fluffmeister said:


> The price will likely drop a bit anyway once AMD actually have a new product to sell, the current fire sale of products in a market already flooded with cheap ex-miners clearly doesn't make much difference.



Wait a second, ex-miners?  That was so 2013.  I haven't seen cards used in mining at any profitable margins since mid-2014, and even then nearly no one was doing it anymore.

There might still be a few on the market, but I doubt it.


----------



## Caring1 (Jan 27, 2015)

xorbe said:


> Am I reading into it wrongly, or is the 970 basically operating in 224-bit mode most of the time, and not 256-bit mode?


Yep, you're reading it wrong, that's the bandwidth in Gb/s


----------



## HumanSmoke (Jan 27, 2015)

Steevo said:


> NOOOO NVIDIA IS OUR BULL GOD!!!!!!!
> Seriously though, for most it's still a good deal.


Pretty much, the numbers might change but the performance hasn't magically altered since it launched.

Having said that, I wouldn't mind one bit if Nvidia dropped prices across the board, offered AAA title game keys, and intro'd the GM 200 early just to erase the bad taste.


----------



## xorbe (Jan 27, 2015)

Caring1 said:


> Yep, you're reading it wrong, that's the bandwidth in Gb/s



Are you sure?  If it's only using 7 of 8 channels for 3.5GB that's 224-bit mode.  Then the 8th channel is like 32-bit mode, and the two modes are exclusive.  I thought AnandTech made it clear that it strides 1-2-3-4-5-6-7 - 1-2-3-4-5-6-7 and doesn't touch the remaining 8th channel until more than 3.5GB.
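A toy model of the striding xorbe describes makes the exclusivity concrete. The 1 KB stripe size and the exact address-to-channel mapping here are illustrative assumptions, not NVIDIA's actual scheme; only the 7-channel/1-channel split comes from the coverage:

```python
# Toy model of the address-to-channel striding described above: addresses
# in the first 3.5 GB round-robin over channels 0-6 only, and channel 7
# serves nothing but the last 0.5 GB. Stripe size is an assumption.

STRIPE = 1024                                   # bytes per stripe (assumed)
FAST_CHANNELS = 7                               # channels backing 3.5 GB
FAST_POOL_BYTES = FAST_CHANNELS * 512 * 2**20   # 3.5 GB

def channel_for(addr):
    """Which memory channel serves a given byte address."""
    if addr < FAST_POOL_BYTES:
        return (addr // STRIPE) % FAST_CHANNELS  # stride 0,1,...,6,0,1,...
    return 7                                     # the lone 0.5 GB channel

# The 8th channel is never touched below 3.5 GB:
low = sorted({channel_for(a) for a in range(0, 64 * STRIPE, STRIPE)})
print(low)                            # [0, 1, 2, 3, 4, 5, 6]
print(channel_for(FAST_POOL_BYTES))   # 7
```

In this model any access pattern confined to the first 3.5 GB can only ever spread across seven 32-bit channels, which is why "224-bit most of the time" is a fair way to read the AnandTech description.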


----------



## the54thvoid (Jan 27, 2015)

trenter said:


> Ignorance stones? Only a fanboy in denial, or a complete moron would take nvidia's explanation as truth with zero skepticism. Performance at the time of review isn't the problem, the problem is that people were sold a gpu expecting the specifications of that gpu to be the same four months later as they were at release. Also, people don't just buy a gpu to play only the games on market up to the point of release, they buy them with future performance in mind. You don't think there are some people that may have decided to skip the 970 if they had known there could be problems addressing the full 4gb of ram in the future, especially when console ports are using up to 3.5gb at 1080p now? What about the people that were worried about that 256 bit memory bus? Nvidia pointed out to reviewers that their improved L2 cache would keep memory requests to a minimum, therefore allowing more "achieved" memory bandwidth even with a smaller memory interface. The point is they lied about the specs, and it's almost impossible to believe that this is some kind of big misunderstanding between the engineering and marketing team that they just happened to realize after 970 users discovered it.



Mmm. Read my posts much? I did say I don't believe NV were genuine. Yawn...


----------



## HumanSmoke (Jan 27, 2015)

R-T-B said:


> Wait a second, ex-miners?  That was so 2013.  I haven't seen cards used in mining at any profitable margins since mid-2014, and even then nearly no one was doing it anymore.
> There might still be a few on the market, but I doubt it.


There are still more than a few out there. A quick browse of eBay shows multiple listings. This guy has *8* used R9 290's for sale, used "in an air conditioned environment", which kind of screams scrypt mining, even if the number of boards being sold isn't a big enough clue.


----------



## R-T-B (Jan 27, 2015)

Ah. Must be some alt-coin craze I was unaware of after litecoin, then. Wow, you'd think people would have realized GPU mining profit wasn't all that great after that and cut it out way before now. *shrugs*

For what it's worth, when I sold my mining cards, I did so here and sold them dirt cheap with big warnings on them.  Thanks for the correction, anyhow.


----------



## buggalugs (Jan 27, 2015)

I'm waiting for an apology to the community from all the guys here who said there was no issue..... Or at least just say you were wrong.

Nvidia advertised the 970 as having the same memory subsystem as the 980. That is clearly a big fat lie, and it's not even a little different, it's completely different. I can't see how they could "overlook" that in the specs.

Edit: ...and why does GPU-Z show 64 ROPs when there are 56? Does GPU-Z actually detect them, or are the specs filled in based on the model number?


----------



## W1zzard (Jan 27, 2015)

buggalugs said:


> why does GPU-Z show 64 ROPs when there are 56? Does GPU-Z actually detect them, or are the specs filled in based on the model number?


GPU-Z asks the NVIDIA driver. Even if I queried the hardware units, it would still show them as active, because they are not really disabled (they can be used for AA)


----------



## rooivalk (Jan 27, 2015)

It's amusing that what many people want is:
a. an apology
b. a free game
c. a price drop

rather than anything that would solve, negate, or prevent further problems.


----------



## Xzibit (Jan 27, 2015)

W1zzard said:


> GPU-Z asks the NVIDIA driver. Even if I queried the hardware units, it would still show them as active, because they are not really disabled (they can be used for AA)



If you're querying the driver and it's telling you 64, does that mean the marketing department is making drivers now?

We were all just told that the engineers knew but never communicated the correct specs to the marketing team.


----------



## buggalugs (Jan 27, 2015)

W1zzard said:


> GPU-Z asks the NVIDIA driver. Even if I queried the hardware units, it would still show them as active, because they are not really disabled (they can be used for AA)



Ok, I see..... and GPU-Z doesn't detect cache, so that went unnoticed.

The more I think about this, the worse it looks for Nvidia. There is a very long thread on the Nvidia forums about this, going back to just after the 970's release late last year. Nvidia only made a comment once the story was reported by major tech sites. I can't believe they didn't know about this much earlier... it's like they held off as long as they could to keep the hype going and get 970 sales over the Christmas period.

The 970 memory subsystem is unique; I'm not aware of a similar system, at least on mid-to-high-end graphics cards - unless they have been doing it and we weren't aware. I just don't accept that they overlooked a memory subsystem this unique and forgot to mention it... and I don't accept that it took them three months to figure it out.


----------



## HumanSmoke (Jan 27, 2015)

buggalugs said:


> The more I think about this, the worse it looks for Nvidia. There is a very long thread on the Nvidia forums about this, going back to just after the 970's release late last year. Nvidia only made a comment once the story was reported by major tech sites. I can't believe they didn't know about this much earlier..........and I don't accept it took them 3 months to figure it out.


It's actually quite believable.
What percentage of GTX 970 buyers encountered the problem? Sometimes people (especially the disgruntled) post multiple times over multiple forums, but for the most part 970 owners don't seem that affected - even owners here and elsewhere say it hasn't impacted them personally (in fact the biggest outcry is from people who own AMD cards, go figure!). Nvidia stated they'd sold a million GTX 970s and 980s, and I'll go out on a limb and say that the bulk of those sales are the former. Hundreds of thousands of cards sold, and how many individual issues (as opposed to multiple postings by individuals) were reported?
You don't have to look very far for a precedent. AMD's Evergreen series owners started questioning the stability of the cards almost from launch in September 2009 (I returned two cards of mine personally and 3-4 from builds I was doing for others). It wasn't until the issue became better publicized in early 2010 that AMD started work on trying to locate and remedy the problem (PowerPlay state voltage settings).
It would be nice to entertain the thought that this kind of stuff is acted on as soon as it rears its head, but that seldom happens unless the issue is pervasive.


----------



## sergionography (Jan 27, 2015)

xorbe said:


> Are you sure?  If it's only using 7 of 8 channels for 3.5GB that's 224-bit mode.  Then the 8th channel is like 32-bit mode, and the two modes are exclusive.  I thought AnandTech made it clear that it strides 1-2-3-4-5-6-7 - 1-2-3-4-5-6-7 and doesn't touch the remaining 8th channel until more than 3.5GB.




Another thing that was mentioned is that each SMM on Maxwell can output 4 pixels per clock, meaning that even though it has 56 ROPs, in reality it only has a 52 pixel/clock fill rate due to having 13 SMMs. In other words it's more like 208-bit, only able to feed about 6.5 of the channels; had Nvidia enabled only 12 SMMs, a 192-bit bus would have fed the GPU just fine. With all this brought to attention, it only makes me realize how ignorant I've been about so many of the details of GPUs. How does this translate to other architectures like GCN, for example? Yes, Hawaii has a 512-bit bus and 64 ROPs, but do they get fed efficiently, or are they there more for compute rather than graphics? Because that's what it feels like. And with this being said, everyone complained about the GTX 960 having only 128-bit, but come to think of it, with 32 ROPs it gets fed exactly according to how much it can handle without any bottlenecks.
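The arithmetic in that post checks out. The SMM count, the 56 ROPs, and the 4 pixels/clock-per-SMM figure come from the corrected specs; the "effective bus width" scaling is the poster's heuristic, not an NVIDIA number:

```python
# Sanity-check of the fill-rate argument above. SMM count, ROP count, and
# 4 pixels/clock per SMM are from the corrected GTX 970 specs; scaling the
# bus width by the fed fraction is the heuristic used in the post.

SMM_COUNT = 13
PIXELS_PER_SMM_PER_CLOCK = 4
ROP_COUNT = 56          # corrected figure (64 was advertised)
BUS_WIDTH_BITS = 256

pixel_rate = SMM_COUNT * PIXELS_PER_SMM_PER_CLOCK  # 52 pixels/clock

# The SMMs can feed only 52 of the 64 ROP-equivalents a full GM204 has,
# so scale the 256-bit bus by that ratio:
effective_bus = BUS_WIDTH_BITS * pixel_rate // 64  # 208 bits

print(pixel_rate, ROP_COUNT, effective_bus)  # 52 56 208
```

So by this reasoning the front end, not the 56 ROPs, is the binding limit: the SMMs saturate at 52 pixels/clock regardless of how many ROPs sit behind them.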


----------



## xorbe (Jan 27, 2015)

When I say 256-bit vs 224-bit, I'm referring to the effective vram width (not sm/core/rop guts of the GPU, that can vary a lot as usual).  I'm surprised everyone is hung up on the 4 vs 3.5, and not the 256 vs 224.


----------



## TRWOV (Jan 27, 2015)

HumanSmoke said:


> AMD's Evergreen series owners started questioning the stability of the cards almost from launch in September 2009 ( I returned two cards of mine personally and 3-4 from builds I was doing for others). It wasn't until the issue became better publicized in early 2010 that AMD started work on trying to locate and remedy the problem ( PowerPlay state voltage settings).



Yeah, but in AMD's case they didn't really know what was going on and had to investigate, and since it wasn't repeatable 100% of the time, that made things more difficult. In this case nVidia knew everything beforehand, and in fact they themselves caused the confusion by providing the press with wrong specs. I'm not saying AMD hasn't done things wrong in the past, just that the example you used isn't comparable IMO.

I mean, we all know AMD and nVidia cherry pick benchmarks and stuff to make their products look better but flat out giving wrong specs is a different thing.


----------



## Casecutter (Jan 27, 2015)

What stinks is that the more I read, the more we unearth the smell of rotten fish... Nvidia looks to have found plenty of chips with 3 SMs that needed to be fused off, as planned. However, they also found too many chips with one defective L2, and appear to have figured out a way to weasel around it because "nobody ever checks or questions L2"; we just take their word (the specs). I don't care if it performs all right; they duped folks because their equipment wasn't as advertised.

Nvidia screwed the pooch on this by saying *the GTX 970 ships with THE SAME MEMORY SUBSYSTEM AS OUR FLAGSHIP GEFORCE GTX 980*, and I'm not buying the... _*PR didn't get the message.*_ Nvidia doesn't need to lower prices or give away games... it's the owners who purchased beforehand who should have a way to be compensated if they want. Nvidia should just step up, come out, and say: if you want to return them, we'll refund all your money, or you can apply for some form of settlement. If Nvidia can't bring themselves to do that, then a class action suit should be brought so that the owners who were duped are provided some compensation.

What's so funny are the guys here defending Nvidia, saying it's not a big deal, or that they all do it... OMG! If this was AMD or Apple, those same folks would be calling for their heads. Allowing Nvidia to sweep this under the rug would just encourage other tech companies to deceive with impunity.


----------



## HumanSmoke (Jan 27, 2015)

TRWOV said:


> Yeah, but in AMDs case they didn't really know what was going on and had to investigate ans since it wasn't repeatable 100% of the time that made things more difficult, in this case nVidia knew everything beforehand and in fact they themselves caused the confusion by providing the press with wrong specs. Not saying that AMD hasn't done things wrong in the past just that the example you used isn't comparable IMO.


I was using comparable in the sense that there is a lag between the first signs of detection and subsequent action. If you wanted a more apples-to-apples comparison of a company having prior knowledge of a performance discrepancy but declining to publicize it, a more apt example would be AMD's advertising of Bulldozer as an eight-core processor, knowing full well that the limitations of shared resources meant a degradation in per-core performance.


TRWOV said:


> I mean, we all know AMD and nVidia cherry pick benchmarks and stuff to make their products look better but flat out giving wrong specs is a different thing.


Agreed.


----------

