# NVIDIA to Tune GTX 970 Resource Allocation with Driver Update



## btarunr (Jan 28, 2015)

NVIDIA plans to release a fix for the GeForce GTX 970 memory allocation issue. In an informal statement to users of the GeForce Forums, an NVIDIA employee said that the company is working on a driver update that "will tune what's allocated where in memory to further improve performance." The employee also stressed that the GTX 970 is still the best performing graphics card at its price-point, and if current owners are not satisfied with their purchase, they should return it for a refund or exchange.





*View at TechPowerUp Main Site*


----------



## puma99dk| (Jan 28, 2015)

I doubt we will see a worldwide recall of this card, because so many people (myself included) purchased it last year when it was released. But if it does happen, it wouldn't stop me from getting mine exchanged and paying a little extra for a GTX 980.


----------



## esrever (Jan 28, 2015)

Sounds like damage control more than anything. From the look of it, it's a serious hardware problem when you can't use the 0.5GB at the same time as the 3.5GB. It may not show up in NVIDIA's PR numbers, but use of the last 0.5GB seems to be riddled with microstutter, because the bus has to switch from one memory pool to the other with some latency. The drivers already try to avoid the last 0.5GB unless it's absolutely needed; the best they could do is hard-lock the card to 3.5GB, removing any use of the last 0.5GB, if they want to eliminate the stutter.
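The two-pool behavior described above can be pictured as a spill-over allocator that only touches the slow segment once the fast one is full. A toy sketch in Python (this is not NVIDIA's actual driver logic; the segment sizes come from public reporting, everything else is illustrative):

```python
# Toy model of a segmented VRAM allocator: prefer the fast 3.5GB pool,
# spill into the slow 0.5GB pool only when the fast pool is exhausted.
FAST_POOL_MB = 3584   # 3.5GB segment, full crossbar bandwidth
SLOW_POOL_MB = 512    # 0.5GB segment, single 32-bit controller

class SegmentedVram:
    def __init__(self):
        self.fast_used = 0
        self.slow_used = 0

    def alloc(self, size_mb):
        """Return which pool an allocation lands in, or None if full."""
        if self.fast_used + size_mb <= FAST_POOL_MB:
            self.fast_used += size_mb
            return "fast"
        if self.slow_used + size_mb <= SLOW_POOL_MB:
            self.slow_used += size_mb
            return "slow"   # this is where the reported stutter begins
        return None

vram = SegmentedVram()
pools = [vram.alloc(512) for _ in range(8)]  # request 4GB in 512MB buffers
# the first seven buffers land in the fast pool, the eighth spills over
```

In this model the stutter risk appears exactly when an allocation returns "slow", which is why a hard 3.5GB cap would trade capacity for consistency.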


----------



## Pickles Von Brine (Jan 28, 2015)

Definitely damage control. 

I own a 970 and am happy with it.


----------



## HumanSmoke (Jan 28, 2015)

puma99dk| said:


> I doubt we will see a worldwide recall of this card, because so many people (myself included) purchased it last year when it was released. But if it does happen, it wouldn't stop me from getting mine exchanged and paying a little extra for a GTX 980.


I'm guessing Nvidia, its partners, and sellers will close that down pretty promptly. It probably isn't that big a deal to add a  superscript and footnoted conditions in fine print to the spec sheet in much the same way that AMD does. Doesn't seem particularly logical to continue a bad situation when the remedy should be easy enough to implement.


----------



## Naito (Jan 28, 2015)

Lord Xeb said:


> I own a 970 and am happy with it.



Rightfully so. The performance hasn't changed since the initial batch of reviews; just, as AnandTech put it, our perception of the GPU has.


----------



## zzzaac (Jan 28, 2015)

True, performance is still top notch (and it's business as usual up to 3.5GB). It is disappointing that the NVIDIA spec sheet was wrong; I'm sure it wouldn't have ended up like this if they had described it properly.

Not sure about recalls (I highly doubt it, though). If I had to hazard a guess, at best buyers might get a free game.


----------



## buggalugs (Jan 28, 2015)

Naito said:


> Rightfully so. The performance hasn't changed since the initial batch of reviews; just, as AnandTech put it, our perception of the GPU has.



Yeah, but most reviews missed this phenomenon, and if you're in the category of user with a high-resolution display who needs 4GB of memory, it's not good. Those users might have chosen a 290X or something else instead. Nvidia deserves the heat from this because they were dishonest about the specs, and they waited until after the Christmas sales, and until the tech media reported on it, before they would admit it.

Nvidia advertised the 970 as "having the same memory subsystem as the 980" when it clearly doesn't. There has been a thread on the Nvidia forums about stuttering over 3.5GB for the three months since the 970 came out; I don't believe Nvidia only just figured this out.


----------



## Recus (Jan 28, 2015)

When you say Nvidia should compensate GTX 970 owners, remember that others should do it too.


----------



## john_ (Jan 28, 2015)

People just found the GTX 970's "kill switch" that Nvidia didn't want found. I bet that optimization was already happening. And when Nvidia wants to push 970 owners to upgrade, that optimization will become history.


----------



## Xzibit (Jan 28, 2015)

john_ said:


> People just found the GTX 970's "kill switch" that Nvidia didn't want found. I bet that optimization was already happening. And when Nvidia wants to push 970 owners to upgrade, that optimization will become history.



Isn't that the typical cycle?

I read it more as a directional path in architecture/software/binning. I think the end goal was to salvage more chips by using software to work around a borderline-unsalvageable part, like a bad L2 slice, without disabling the paired L2, memory controller, and DRAM along with it. They probably would have preferred to keep everything enabled, but too many chips that didn't qualify for the 980 were showing one bad L2. They saw a way to make it work with software and were hoping it would go unnoticed, so they could just disable the two L2s + MCs and 1GB for a "960 Ti". If they had cut off the two L2s outright, that would mean fewer salvageable chips, and the "960 Ti" would be a 2GB salvage with less L2. That would have put them in another marketing dilemma, given AMD still sells the 280/280X with 3GB and the 290 with 4GB.






Time will tell if we see a similar segmented GPU down the line with GM200, Pascal or Volta.


----------



## bogami (Jan 28, 2015)

That Nvidia has failed us is nothing new, especially with products continually priced 100% over their class value! However, I am not surprised; stealth aircraft prototypes grow in China like mushrooms after rain, right where the main contract manufacturers' factories are. They sell us failure-cut processors with locked shaders on the best models. I look forward to AMD's 300 series, because it looks like it will again be better than the Titan X and half the price. I buy Nvidia only because of their acquisition of Ageia's physics unit, and now AMD is solving that problem with its own engine. Beyond that it's crazy prices, lies, fraud, and manipulation... I have nothing good to say about Nvidia recently. Horror.


----------



## Fluffmeister (Jan 28, 2015)

Good news, I look forward to the driver update(s).


----------



## HumanSmoke (Jan 28, 2015)

bogami said:


> That Nvidia have failed us nothing new. Especially with the continued excessive products 100% over the class value ! .However, I am not surprised how they grow prototypes of STELTH aircraft in China like mushrooms after rain where are the main co-operating manufacturers factories ..Fuckers  to us sell. failure cut
> processors that have so locked sherders on best models ....I look forward to AMD 300 best because it looks like it will be again better than the TITAN X and a half cheaper. nvidia I buy only due to the acquisition of AGEIA physics unit, now is the AMD solve this problem with its engine .Except as crazy prices lie fraud manipulations are not here to say something about nVidia  recently horor.


It's like trying to read a message in fridge magnet letters during a violent earthquake.


----------



## john_ (Jan 28, 2015)

Fluffmeister said:


> Good news, I look forward to the driver update(s).



Yes, very good news. With any driver update there are performance gains, of course. What Nvidia will do is take every performance gain that also affects the GTX 970 and call it "better resource allocation". Placebo pills for the owners of the cards, but OK, good news; who am I to disagree?


----------



## Fluffmeister (Jan 28, 2015)

Good point, more performance, can't wait!


----------



## ShurikN (Jan 28, 2015)

Recus said:


> When you say Nvidia should compensate GTX 970 owners, remember that others should do it too.


The PS4 was advertised as having 8GB of GDDR5 total system RAM; some is used by the CPU, some by the GPU. I don't see your point.


----------



## Severus (Jan 28, 2015)

I also own the 970 Strix and am very pleased with it. I haven't encountered the problem in my daily use.


----------



## CounterZeus (Jan 28, 2015)

The card already performs great for me (1080p), but I imagine the update will be good for those using SLI who actually have the GPU power to use more than 3.5GB of video memory. (Yes, I know, Skyrim with mods is an exception.)

I just wish Nvidia had told us everything from the start, because the real-life performance still stands out for the price.

It would have been nice to see reviews testing scenarios where the extra disabled parts actually have an impact.


----------



## HumanSmoke (Jan 28, 2015)

CounterZeus said:


> It would have been nice to see reviews testing scenarios where the extra disabled parts actually have an impact.


PCGH have done some testing pitting the GTX 970 against a GTX 980 simulating a 970 able to use the full 4GB vRAM allocation at full speed.


----------



## Mathragh (Jan 28, 2015)

So what is there to tune? Making more certain that the least-used data gets put in the extra 0.5GB?

From all I've read, it seems kind of unlikely they haven't already spent a significant amount of effort doing this, but time will tell, I guess!

This is all becoming quite a complicated and difficult story. I hope people won't be put off simply by the massive amount of (mis)information going around, causing them to forget the real issue: trust has once again been broken.


----------



## bogami (Jan 28, 2015)

HumanSmoke said:


> It's like trying to read a message in fridge magnet letters during a violent earthquake.


I hope so. 100% too expensive and locked (meaning failure-cut processors), and they will do the same with the Titan X. GTX 480 cores were sold the following year as GTX 580 processors. The GTX 680 was a middle-class chip sold as the highest class, just as the GTX 980 is now: 28nm, not 20nm, a mid-range processor sold as top of the line. The Titan shipped with only 2,660 cores unlocked (unsuccessfully cut) yet was sold as the best, with a useless 100% price increase on top... lies, fraud, manipulation.
When I hear Nvidia has screwed up again, I don't see a good product, just a lot of greed. I own 5 generations of Nvidia cards, but I think I'll wait for AMD's new generation. I started with AMD (ATI) and it looks like I'll be going back. Good luck with the GTX 970, a card with a 100% overestimated price, I think. 20nm is at the door.


----------



## iO (Jan 28, 2015)

Let's hope they don't just limit the card to 3.5GB, multiply the memory read-out by 12%, and then say "Hey, look, there are your 4 gigs of vRAM. Now move along"...


----------



## Mathragh (Jan 28, 2015)

iO said:


> Let's hope they don't just limit the card to 3.5GB, multiply the memory read-out by 12%, and then say "Hey, look, there are your 4 gigs of vRAM"...


Don't forget also multiplying the memory bandwidth by 12.5%! Can't have people thinking the memory is slower than it would've been with the full 4GB!


----------



## 64K (Jan 28, 2015)

I'm fine with Nvidia doing what they can with the drivers. I really don't want to return my 970, as it's a very good performer, unless they offered me a trade-up to a 980 for $100 more with my return. But I still think Nvidia owes us something for their misrepresentation. Hell, even EA gave a free game to people who pre-ordered that mess of a game SimCity.


----------



## Rahmat Sofyan (Jan 28, 2015)

It'll be a miracle if this problem can be fixed by a driver update alone... fingers crossed.

Oh yeah, this just in

*Truth about the G-sync Marketing Module (NVIDIA using VESA Adaptive Sync Technology – Freesync)*



> Basically, what NVIDIA is trying to force you to do is to buy their Module license while using VESA Adaptive-Sync Technology !.
> 
> Let’s be more clear. CUDA was made to force the developers to work on a NVIDIA GPU instead of an AMD GPU. The reason is that, if CUDA was capable to be used more widely, like DirectCompute or OpenCL, CUDA will certainly work better on AMD GPU. This is not good for NVIDIA, and the same goes for PhysX that could work on AMD GPU (actually working on CPU and NVIDIA GPU only).
> 
> ...





> Basically the NVIDIA drivers control everything between the G-sync Module and the Geforce GPU. The truth is that the G-sync Module does nothing else than confirm that the module is right here.
> 
> Which Monitors are compatible with the G-sync Modded Drivers ?
> 
> ...




Source

I wish this wasn't true...

PS: Sorry if this is off-topic; I'm really bored with the 3.5GB hype.


----------



## ryun (Jan 28, 2015)

Rahmat Sofyan said:


> Source
> 
> I wish this wasn't true...
> 
> PS : Sorry if OOT , really bored with 3.5GB hype..



Good news, then! It's not: http://www.overclock.net/t/1538208/nvidia-g-sync-free-on-dp-1-2-monitors/10#post_23466190


----------



## xfia (Jan 28, 2015)

Seems like he may be onto something. I mean, AMD is going free with FreeSync, and that with the practically empty bank account they have. I think if he really digs into the code he'll come out with something more like FreeSync, and show that G-Sync is a bloated technology made to earn them more money when the hardware at hand was already capable of producing the same end result: basically what AMD is already showing to be true.

@ryun don't be so fast to dismiss this. People have been complaining about the 970 on the Nvidia forum since day 1 with no real answers.


----------



## john_ (Jan 28, 2015)

If the above is even partially true, it will mean that FreeSync/Adaptive-Sync produces the same result as G-Sync at no cost.


----------



## GhostRyder (Jan 28, 2015)

Recus said:


> When you say Nvidia should compensate GTX 970 owners, remember that others should do it too.


If you're going to make a joke like that, you need to at least get it right. There are *8* cores inside an FX 8XXX-series and 9XXX-series chip; it's 4 modules / 8 cores, because there are 2 cores per module, meaning it really is an 8-core.

Also, the PS4 and Xbox One do have 8GB of memory inside, but some is reserved at times for system resources. However, they have recently stated that they are/will be allowing more and more of it to be accessed by games.

Anyway, any update will help the GTX 970, but it will never alleviate the root cause completely. It might be better if they find a way to utilize the ~500MB for a different purpose that relieves the main RAM of some of its tasks. That could be a good way to get some extra performance, making the 3.5GB feel a bit larger and be used effectively, though that is just a random thought. Any performance work is good for the owners or for the future, so if they can help it at all, that's nice.

Also, why are we discussing G-Sync vs. FreeSync in this thread? I don't see the relevance.


----------



## 64K (Jan 28, 2015)

GhostRyder said:


> Also, why are we discussing G-Sync vs. FreeSync in this thread? I don't see the relevance.



This happens on a lot of Nvidia/AMD/Intel threads. People who dislike a particular company and like another will dredge up anything to dump into the thread. There will probably be people 10 years from now still talking about the 970 misrepresentation.


----------



## Casecutter (Jan 28, 2015)

GhostRyder said:


> Anyway, any update will help the GTX 970, but it will never alleviate the root cause completely.


Yeah, Nvidia will continue to press the people originally tasked with finding a "workaround" for fusing off defective/damaged L2 to keep refining this new process feature. This won't be an isolated case going forward; what Nvidia terms "partial disabling" will propagate into the future. As btarunr's article said yesterday, "This team (PR) was unaware that with 'Maxwell,' you could segment components previously thought indivisible, or that you could 'partial disable' components." Nvidia will need to make such "disablements" more seamless and unnoticeable in upcoming product releases.

As for them finding any "across the board" FPS gains: highly doubtful. Other than in a few instances, perhaps they'll make it more usable for those with SLI. Either way, it seems Nvidia is sweeping this under the rug.


----------



## wickedcricket (Jan 28, 2015)

Rahmat Sofyan said:


> It'll be a miracle if this problem can be fixed by a driver update alone... fingers crossed.
> 
> Oh yeah, this just in
> 
> ...




haha read that just a minute ago!  Well, well what d'ya know! :O


----------



## alwayssts (Jan 28, 2015)

I haven't weighed in on this whole thing as I really don't know what to add that's constructive.

I certainly have noticed micro-stutter in Mordor that I associate with this, and it sucks.  It's very noticeable, even in benchmarking (where you see a very quick plummet of framerate).  It would be great if I could use higher resolution/textures (memory-heavy options) and simply have a lower, if consistent experience....I feel the resolution options are granular enough (say from 2560x1440->2688x1512->etc) the space between 3.5-4 would be useful.  I would have also appreciated if the L2C/SYS/XBAR clocks would have been set to GPC clocks out of the box (like previous generations?), as raising those up seemed to help things quite a bit for me.  There are simply quite a few nvidia and aib (mostly evga) kerfluffles that really rub me the wrong way about this product since launch, and are only being fixed/addressed now because of greatly-appreciated community digging.

That allll said, and this doesn't excuse it, they are being addressed...and that means something.  Yes, nvidia screwed the pooch by not disclosing this information at launch, but not only does this revelation not change the product, it also holds their feet to the fire for optimizations more-so than if it had been disclosed from the start.  It also does not change the fact when I run at 2560x1440 (etc), which is really all I need from my viewing distance, I am still getting better performance than any other 9.5'' card on the market.  I feel for the price I paid, the performance is fair.  Had I paid more than I did, or adversely had they cut the product to 12SMMs/192-bit, I would likely be disappointed.  This is obviously a very well thought-out and placed product, and still deserves praise for the weird amalgamation that it is.

Edit: added a pic of how I changed the L2C etc. clocks. I know this is a common mod among the community, but I'm curious whether it helped others stabilize things as much as it helped me.


----------



## RejZoR (Jan 28, 2015)

Recus said:


> When you say Nvidia should compensate GTX 970 owners, remember that others should do it too.



If you see 8 cores in Windows, any system will detect 8 cores and you'll have 8 physical threads, meaning it is in fact an 8-core CPU and they weren't lying. What the design behind those 8 threads is, is irrelevant, because the fact still stands that you have 8 physical threads. Unlike the missing ROPs, which are just that: missing (or shall I say non-functional) PHYSICALLY. NVIDIA advertised the GTX 970 as a 64 ROP card, only for it to be uncovered as a 56 ROP card. Not quite the same, aye?

And as for the memory capacity: it says 8GB, and guess what, the PS4 has 8GB of memory. Are you saying it isn't on the PCB? Oh wait... No one said the GTX 970 doesn't have 4GB of memory, because we all know that it does. But the way it accesses and utilizes that memory, that's the shitty part, and the reason for the outrage beyond 3.5GB. Get your facts straight, man...

Would you mind telling us where you got the 1.2 billion transistor count? From speculation about desktop Jaguar cores? The PS4 isn't a desktop system, you know; it's custom-designed specifically for the PS4 and based upon desktop components...


----------



## GhostRyder (Jan 28, 2015)

RejZoR said:


> If you see 8 cores in Windows, any system will detect 8 cores and you'll have 8 physical threads, meaning it is in fact an 8-core CPU and they weren't lying. What the design behind those 8 threads is, is irrelevant, because the fact still stands that you have 8 physical threads. Unlike the missing ROPs, which are just that: missing (or shall I say non-functional) PHYSICALLY. NVIDIA advertised the GTX 970 as a 64 ROP card, only for it to be uncovered as a 56 ROP card. Not quite the same, aye?
> 
> And as for the memory capacity: it says 8GB, and guess what, the PS4 has 8GB of memory. Are you saying it isn't on the PCB? Oh wait... No one said the GTX 970 doesn't have 4GB of memory, because we all know that it does. But the way it accesses and utilizes that memory, that's the shitty part, and the reason for the outrage beyond 3.5GB. Get your facts straight, man...
> 
> Would you mind telling us where you got the 1.2 billion transistor count? From speculation about desktop Jaguar cores? The PS4 isn't a desktop system, you know; it's custom-designed specifically for the PS4 and based upon desktop components...


The transistor count he's referring to is the Bulldozer figure, which was corrected from 2 billion to 1.2 billion a little way down the road when a mistake was caught; it concerned the chip, not the PS4, unless I have misinterpreted. But your assertion is correct, since there are 8 cores on the chip, and the PS4 (and Xbox, for that matter) both have the memory there, and it's functional and used.



Casecutter said:


> Yeah, Nvidia will continue to press the people originally tasked with finding a "workaround" for fusing off defective/damaged L2 to keep refining this new process feature. This won't be an isolated case going forward; what Nvidia terms "partial disabling" will propagate into the future. As btarunr's article said yesterday, "This team (PR) was unaware that with 'Maxwell,' you could segment components previously thought indivisible, or that you could 'partial disable' components." Nvidia will need to make such "disablements" more seamless and unnoticeable in upcoming product releases.
> 
> As for them finding any "across the board" FPS gains: highly doubtful. Other than in a few instances, perhaps they'll make it more usable for those with SLI. Either way, it seems Nvidia is sweeping this under the rug.


Yep, the problem is more than just the wrong original specs; sweeping this under the rug as a "miscommunication", especially in the memory area where most of the problems lie, is the root of the issue. It is an area they need to work on, and next time they should just say it up front instead of shipping something with issues that cannot be (easily) resolved. That doesn't make the card bad, just incorrectly advertised, especially to those wanting to go to extreme resolutions on a budget, since the area where this probably has the biggest impact is SLI; many people I have heard from chose to purchase two of these over a GTX 980 (or R9 290/X).



64K said:


> This happens on a lot of Nvidia/AMD/Intel threads. People who dislike a particular company and like another will dredge up anything to dump into the thread. There will probably be people 10 years from now still talking about the 970 misrepresentation.


Yeah, that has been like half of the threads regarding this. It's all "hey, this is similar to how the other company did (insert thing here)"; it's not relevant in this instance, but I have to make the other side look bad as well or the balance of the universe is out of whack. We just need to focus on the subjects at hand, and these wars will lessen as the people starting them start being ignored.


----------



## newtekie1 (Jan 28, 2015)

RejZoR said:


> If you see 8 cores in Windows, any system will detect 8 cores and you'll have 8 physical threads, meaning it is in fact an 8-core CPU and they weren't lying.




Looks like Windows sees 4 cores...







RejZoR said:


> NVIDIA advertised the GTX 970 as a 64 ROP card, only for it to be uncovered as a 56 ROP card. Not quite the same, aye?



The card does in fact have 64 ROPs; it just uses only 56, because using the others would actually make the card slower.
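For what it's worth, the "56 is enough" argument checks out on paper if you take the published SMM count at face value: 13 SMMs at 4 pixels per clock can only feed 52 pixels per clock to the ROPs anyway. A back-of-the-envelope check (figures from launch reviews; treat it as an illustration rather than a spec sheet):

```python
# GTX 970 pixel-throughput sanity check (figures from launch reviews).
smm_count = 13              # enabled SMMs on the GTX 970
pixels_per_smm_clock = 4    # pixels each SMM can emit per clock
rops_enabled = 56           # ROPs actually used
rops_advertised = 64

shader_limit = smm_count * pixels_per_smm_clock  # 52 px/clock
# The enabled ROPs can already retire more pixels per clock than the
# SMMs can produce, so the disabled ROPs would add no real fill rate.
assert shader_limit < rops_enabled < rops_advertised
print(f"shader-limited fill: {shader_limit} px/clk vs {rops_enabled} ROPs")
```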


----------



## xorbe (Jan 28, 2015)

Everyone knew the PS4 was a shared mem architecture.  NV hid the brain damaged design.  Two different scenarios.


----------



## Uplink10 (Jan 28, 2015)

> The employee also stressed that the GTX 970 is still the best performing graphics card at its price-point


If you exclude AMD cards, which have very low prices right now.


----------



## Xzibit (Jan 28, 2015)

Uplink10 said:


> If you exclude AMD cards, which have very low prices right now.



He is an engineer.  Nvidia established that they don't read web sites.


----------



## alwayssts (Jan 28, 2015)

Uplink10 said:


> If you exclude AMD cards, which have very low prices right now.



I was going to say that was arguable, depending on how a given 290X clocks, but then I checked the prices at Newegg.

Holy shit. The 290X got super cheap. Like... wow. I was thinking they were $360-370 (or whatever the last price drop was), in which case I think the premium over a vanilla 290 was (and is) still worth it. At $280 for a 290X though (!), you're totally right. If you can handle one of those beasts, they are one hell of a deal.

I still stand by the nicety that is a short card, though, as well as being able (even if via BIOS mods) to draw a huge amount of power over 2x 6-pin. Having a high-end card in a mid-range package is super nice... granted, it's totally outside all kinds of specs and perhaps bound one day to release the magic smoke. The fact that it exists and CAN do it, though, is worth the overall smallish premium on $/perf to me; others may see perf/W at stock as the boon. It's all relative (and in regard to PR-speak at that), but yeah... those 290Xs are a nice deal.



Xzibit said:


> He is an engineer.  Nvidia established that they don't read web sites.



................


----------



## HumanSmoke (Jan 28, 2015)

Uplink10 said:


> > The employee also stressed that the GTX 970 is still the best performing graphics card at its price-point
> 
> 
> If you exclude AMD cards, which have very low prices right now.


Nvidia (and Intel) rarely if ever reference AMD by means of comparison. As market leaders they don't acknowledge a smaller player in their respective markets - standard market position strategy.


----------



## ShurikN (Jan 28, 2015)

newtekie1 said:


> The card does in fact have 64 ROPs.  It just only uses 56 *because using the others would actually make the card slower*.


Haha, that's even worse.


----------



## L337One91 (Jan 28, 2015)

So what happens to those who can't return the card due to the return period having expired?


----------



## Casecutter (Jan 28, 2015)

To put an upside on this... It is just a more "granular" implementation of what Nvidia has done with segmented/asymmetrical/unbalanced memory configurations going back to the GTX 500 series.

While NVIDIA has never fully explained in depth how memory allocation is handled on those cards, it worked, and we knew it was there (perhaps we've now gotten closer to it than they ever wanted). Nvidia should have at least presented a very top-level overview, saying this is engineering based on multiple generations of experience. This time, I feel, they didn't want it known that they'd implemented it on such a high-performance card. It's not that big a deal given the price/performance; trade-offs are made for yields and for delivering the enormous volume they needed...

But the fact is that engineering crafted it (as customers finally unearthed), while marketing, I really believe, couldn't bring themselves to admit the gritty truth, and they carry the blame. I've said there should be some type of restitution, but as I'm not affected, others can figure that out.
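The clearest earlier example of this is usually cited as the GTX 550 Ti, which put 1GB on a 192-bit bus by mixing DRAM densities. Assuming the commonly reported chip mix (so treat the exact numbers as illustrative), the arithmetic looks like this:

```python
# GTX 550 Ti: 1GB on a 192-bit bus via mixed DRAM densities
# (commonly reported config: four 128MB chips + two 256MB chips).
channels = 6                                    # six 32-bit channels = 192-bit
chip_sizes_mb = [128, 128, 128, 128, 256, 256]  # one chip per channel

assert len(chip_sizes_mb) == channels
total_mb = sum(chip_sizes_mb)
assert total_mb == 1024          # a "1GB" card despite the 192-bit bus
# A uniform 6 x 128MB layout would have given only 768MB, so the two
# denser chips create an unevenly striped region: the same kind of
# asymmetry as the 970's 3.5 + 0.5GB split, just less granular.
```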


----------



## TheGuruStud (Jan 28, 2015)

If you can sue Jimmy John's over literally nothing, then you can definitely sue Nvidia for lying in their advertising of the product.


----------



## HumanSmoke (Jan 28, 2015)

Casecutter said:


> But the fact is engineering crafted it (as customer finally unearth), while marketing I really believe couldn't bring themselves to admit gritty truth and they have the blame.  I said there should be some type of restitution, but as I'm not affected others can figure that out.


Well, by the sounds of it, Nvidia (judging by the statements of their forum reps) are moving in that direction. As for not being affected, I wouldn't let that stop you; plenty of people here don't use Nvidia products, let alone the GTX 970, and it doesn't stop them shouting down the owners of the actual card being discussed. It's a pity that the maturity shown by the community when AMD* falsely advertised that their flagship offered video-decode hardware is now largely missing. We truly live in a Golden Age of Enlightenment Entitlement.

* AMD acquired ATI in October 2006; the 2900 XT launched in May 2007.


----------



## Casecutter (Jan 28, 2015)

HumanSmoke said:


> We truly live in a Golden Age of Enlightenment Entitlement


Folks who buy because they like/want/gravitate to the features being advertised, and then find that the product is not as described/promised, are _entitled_ to some course of restitution. If companies act with impunity, what stops them the next time? And when others see _those guys got away with it_, we are doomed to see it again, as you point out. Is this the time (the line in the sand) where we finally stand enlightened to those goings-on?

It's wrong, and not something anyone should take lightly or slight.


----------



## HumanSmoke (Jan 29, 2015)

Casecutter said:


> Folks who buy because they like/want/gravitate to the features being advertised, and then find that the product is not as described/promised, are _entitled_ to some course of restitution.


I think that is actually being addressed, is it not?


Casecutter said:


> If companies act with impunity, what stops them the next time? And when others see _those guys got away with it_, we are doomed to see it again, as you point out.


So, you are of the opinion that it was a planned strategy from the get-go as well? Just as a counterpoint to that notion, here is AMD and B3D guru Dave Baumann's take:


> Perfectly understandable that this would be "discovered" by end users rather than theoretical noodling around. For one, the tools for the end user have got better (or at least, more accessible) to be able to discover this type of thing. Second, the fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about





Casecutter said:


> Is this the time (the line in the sand) where we finally stand enlightened to those goings-on?


Well, we have five different threads devoted to the subject here. I guess the next advertising/marketing misstep (assuming there is one, by your reckoning) should be a doozy.


Casecutter said:


> It's wrong and not something anyone should take lightly or slight.


The issue certainly isn't, but the level of outrage being shown over a hardware component whose performance hasn't deviated one iota since it was launched and reviewed is certainly cause for humour.


----------



## xfia (Jan 29, 2015)

alwayssts said:


> I haven't weighed in on this whole thing as I really don't know what to add that's constructive.
> 
> I certainly have noticed micro-stutter in Mordor that I associate with this, and it sucks.  It's very noticeable, even in benchmarking (where you see a very quick plummet of framerate).  It would be great if I could use higher resolution/textures (memory-heavy options) and simply have a lower, if consistent experience....I feel the resolution options are granular enough (say from 2560x1440->2688x1512->etc) the space between 3.5-4 would be useful.  I would have also appreciated if the L2C/SYS/XBAR clocks would have been set to GPC clocks out of the box (like previous generations?), as raising those up seemed to help things quite a bit for me.  There are simply quite a few nvidia and aib (mostly evga) kerfluffles that really rub me the wrong way about this product since launch, and are only being fixed/addressed now because of greatly-appreciated community digging.
> 
> ...




Not really much of a performance difference from a 290X at 1440p, though.


----------



## Kyuuba (Jan 29, 2015)

Can software repair broken hardware?


----------



## xfia (Jan 29, 2015)

Broken? Maybe... if each segment can't be accessed simultaneously, then I don't see optimization going very far.


----------



## TRWOV (Jan 29, 2015)

Recus said:


> When you say Nvidia should compensate GTX 970 remember that others should do it too.



FX 8xxx CPUs have 8 integer cores, and AMD always showed how the FPU resources were shared. The 2B transistor thing was corrected by themselves a mere day after reviews hit, not 4 months after. And the PS4 has 8GB of GDDR5; having a chunk reserved for the operating system is expected nowadays (the XMB used 80 of the 256MB the PS3 had). We are not in the SNES era.




xfia said:


> Broken? Maybe... if each segment can't be accessed simultaneously, then I don't see optimization going very far.



They could use the 0.5GB partition for window composition and such, kind of like having a secondary 512MB GPU for the OS or something.

nVidia says that the last 512MB chunk is 4 times faster than system RAM over PCIe... but you have to take into account that if said 512MB were accessible over the same link as the rest, you wouldn't need to use system RAM in the first place, making the comparison moot.
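That comparison can be put in rough numbers. A back-of-the-envelope sketch, using the bandwidth figures commonly reported for the GTX 970 at the time; treat the exact values as assumptions, not official specs:

```python
# Approximate peak bandwidths for the GTX 970's two memory segments versus
# PCIe. Figures are the commonly reported ones, not official Nvidia specs.

GDDR5_EFFECTIVE_GBPS = 7.0   # ~7 Gbps effective per pin
BUS_WIDTH_FAST_BITS = 224    # 7 of 8 x 32-bit memory controllers (3.5GB segment)
BUS_WIDTH_SLOW_BITS = 32     # the lone controller behind the 0.5GB segment
PCIE3_X16_GBPS = 15.75       # rough PCIe 3.0 x16 peak, one direction

def bandwidth_gb_s(bus_bits: float, rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width in bits / 8 * per-pin rate in Gbps."""
    return bus_bits / 8 * rate_gbps

fast = bandwidth_gb_s(BUS_WIDTH_FAST_BITS, GDDR5_EFFECTIVE_GBPS)  # 3.5GB segment
slow = bandwidth_gb_s(BUS_WIDTH_SLOW_BITS, GDDR5_EFFECTIVE_GBPS)  # 0.5GB segment

print(f"3.5GB segment: {fast:.0f} GB/s")
print(f"0.5GB segment: {slow:.0f} GB/s")
print(f"0.5GB segment vs PCIe 3.0 x16: {slow / PCIE3_X16_GBPS:.1f}x")
```

With these assumed figures the slow segment lands at 196 vs 28 GB/s, i.e. still a multiple of the PCIe link, which is the shape of nVidia's claim, even if touching it costs fast-segment access time.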


----------



## mouacyk (Jan 29, 2015)

Recus said:


> ...
> When you say Nvidia should compensate GTX 970 remember that others should do it too.



Each company's/product's customers are the most credible critics of that company or product.  Otherwise, the company or product has no incentive to budge.



Kyuuba said:


> Can software repair a broken hardware?



This is essentially a bridge-out problem. If one of two bridges goes out, you need to re-route traffic onto the remaining bridge. The only way to maintain the same throughput (people per crossing) is to either double the speed on the bridge or shrink the people to half size while moving at the same speed. NVidia's software solution does not seem to accomplish either of those, because it's a hardware/physical problem. At best, they can smartly use the extra 0.5GB as a level-3 cache to avoid hampering the performance of the primary 3.5GB.
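The bridge intuition can be turned into a toy model of why spilling into the slow segment reads as stutter rather than a uniform slowdown. The penalty numbers below are invented purely for illustration:

```python
# Toy model: a frame that never touches the slow 0.5GB segment renders at a
# steady cost; a frame that switches into it pays a fixed penalty per switch,
# which shows up as isolated spikes (microstutter), not a flat fps drop.
# All numbers here are illustrative assumptions, not measurements.

def frame_time_ms(base_ms: float, slow_hits: int, switch_penalty_ms: float) -> float:
    """Frame time = base render time plus a penalty per switch into the slow segment."""
    return base_ms + slow_hits * switch_penalty_ms

BASE = 16.7      # ~60 fps baseline
PENALTY = 4.0    # assumed cost per switch into the slow segment

# A workload that fits in 3.5GB never switches; one that spills does so in bursts.
fits = [frame_time_ms(BASE, 0, PENALTY) for _ in range(6)]
spills = [frame_time_ms(BASE, hits, PENALTY) for hits in (0, 0, 3, 0, 4, 0)]

print("fits in 3.5GB :", fits)    # flat frame times
print("spills to 0.5GB:", spills) # occasional spikes = perceived stutter
```

The average fps barely moves in the spilling case, which is consistent with benchmarks looking fine while the experience feels choppy.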


----------



## Kyuuba (Jan 29, 2015)

> This is essentially a bridge-out problem. If one of two bridges goes out, you need to re-route traffic onto the remaining bridge. The only way to maintain the same throughput (people per crossing) is to either double the speed on the bridge or shrink the people to half size while moving at the same speed. NVidia's software solution does not seem to accomplish either of those, because it's a hardware/physical problem. At best, they can smartly use the extra 0.5GB as a level-3 cache to avoid hampering the performance of the primary 3.5GB.


Sad news then.


----------



## Yorgos (Jan 29, 2015)

Recus said:


>



Another idiot with MS Paint skills.
A CMT core/module (call it whatever you want) has fully functional integer cores/threads.
The pipeline is complete for each one, as opposed to SMT.

2 billion transistors vs 1.2 billion. REALLY, is this even an argument? It's not like you wake up one day and decide to buy 2 billion transistors. It's not like they told you there are 2 billion transistors but 0.8 billion of them are overbaked and a third of the CPU is not working.


----------



## Naito (Jan 29, 2015)

buggalugs said:


> Yeh, but most reviews missed this phenomenon, and if you're in the category of user with a high-res display who needs 4GB of memory, it's not good.  Those users may have chosen a 290X instead, or something else.  Nvidia deserves heat for this because they were dishonest about specs... and they waited until after the Christmas sales, until the tech media reported on it, before they would admit it.
> 
> Nvidia advertised the 970 as "having the same memory subsystem as the 980" when it clearly doesn't. There has been a thread on the Nvidia forums for 3 months, since the 970 came out, about stuttering over 3.5GB; I don't believe Nvidia just figured this out now.



That's a fair point. Nvidia should not have misled customers; however, 4GB is just another number. Just because you have 4GB of VRAM doesn't mean the card is suddenly capable of running high resolution/high quality games; many other factors come into play. For example, a 4GB GTX 670 doesn't perform much better than the standard 2GB SKU at, say, 1440p just because it has double the VRAM, because there are other limiting factors. This also holds true for the GTX 780/Ti: a full 1GB less VRAM than the GTX 970/980, but they still perform at a similar level even at 4K.

The way I see it, there are two issues here:

1. Nvidia misled customers. If you purchased the GTX 970 purely on the basis that it has the same memory subsystem as the GTX 980 then sure, by all means be pissed off. However, the GTX 970 still performs where it did in the initial reviews and still competes with the competition's products. Its ability to run high resolution/high quality content hasn't changed; it just has optimization issues in very particular scenarios.

2. Perhaps the biggest problem here is not that it has a 3.5GB primary partition and a 0.5GB secondary partition, but rather how they are utilized. From what I have read, only one partition can be accessed at a time in hardware, and due to optimization issues this can, on occasion, cause stuttering. Not knowing anything about the inner workings and latencies of VRAM-to-system-RAM _vs_ partition-one-to-partition-two transfers and vice versa, I'd hazard a guess and say that while using the secondary partition as a cache of sorts (with the idea that it saves time accessing system RAM) may seem like a good idea, if it limits access to the primary partition and still incurs similar or greater latency than VRAM-to-system-RAM access, it may not be such a good idea after all (i.e. it causes more stuttering than it prevents). Either way, Nvidia really needs to work out the optimization issues.


----------



## xorbe (Jan 29, 2015)

Software fix in nv control panel 3D hw settings:    Gimped 0.5GB segment      [ Enabled / Disabled ]


----------



## RejZoR (Jan 29, 2015)

You forgot the "Auto" setting...


----------



## Fluffmeister (Jan 29, 2015)

Yorgos said:


> 2 billion transistors vs 1.2 billion. REALLY, is this even an argument? It's not like you wake up one day and decide to buy 2 billion transistors. It's not like they told you there are 2 billion transistors but 0.8 billion of them are overbaked and a third of the CPU is not working.



The number of transistors is of course irrelevant, but people are questioning how a large company can have such miscommunication between departments (surely not a revelation?), or whether they even read reviews of their own products. AMD seems to be no different in this regard.



			
TRWOV said:


> The 2B transistor thing was corrected by themselves a mere day after reviews hit, not 4 months after.



http://www.techpowerup.com/156123/a...million-less-transistors-than-it-thought.html


----------



## FYFI13 (Jan 29, 2015)

I hope Nvidia will slash the price of the GTX 970 because of this issue, and then I'll buy one for myself.


----------



## THU31 (Jan 29, 2015)

This card needs 8 GiB of VRAM with eight 8-Gbit GDDR5 chips (instead of eight 4-Gbit ones). The price would not be much higher, but we would get 7 GiB of full bandwidth. That would be enough for pretty much anything until Pascal comes along.


----------



## TRWOV (Jan 29, 2015)

Fluffmeister said:


> The number of transistors is of course irrelevant, but people are questioning how a large company can have such miscommunication between departments (surely not a revelation?), or whether they even read reviews of their own products. AMD seems to be no different in this regard.
> 
> 
> 
> http://www.techpowerup.com/156123/a...million-less-transistors-than-it-thought.html




Yeah, but the "loss" of 0.8B transistors didn't impact how the CPU performed, and AMD sent a correction right after reviews hit, so they must have read them. The memory partitioning on the 970 has the potential to lower the card's performance on some setups, and people didn't find out in advance or in a timely manner.

Now, I know you haven't had any problems with yours in any games, but there are people that have had them, so the algorithm nVidia uses isn't 100% reliable. You can see lots of videos on YouTube showing random stuttering on the 970. Granted, several were uploaded this week so some could be fake, but there are some that were uploaded weeks before the drama erupted:

(embedded video) Uploaded 29/12/2014

(embedded video) Uploaded 04/10/2014

(embedded video) Uploaded 07/01/2015

So the card has problems in some setups. I've often had to help people with a problem I can't reproduce; not every PC behaves the same, even two with the same components, OS, etc.


----------



## daftshadow (Jan 29, 2015)

FYFI13 said:


> I hope Nvidia will slash the price of GTX 970 because of this issue and then I'll buy one for myself



Prices won't be slashed until the upcoming holiday season this year, or whenever AMD launches its R300 series. In my opinion, Nvidia will not cut prices on the GTX 970 because of this "issue", although I would prefer some kind of compensation, such as a trade-up program for exchanging the 970 for the 980 at a reduced cost.


----------



## Casecutter (Jan 29, 2015)

HumanSmoke said:


> I think that is actually being addressed, is it not?


I've yet to hear that Nvidia is going to do any sort of restitution for those who are seeing the problems.
I suppose if the driver can mitigate the slower 0.5GB secondary partition and make transitions between the partitions more seamless, then they'll have redemption.



HumanSmoke said:


> So , you are of the opinion that is was a _planned strategy from the get-go_ as well?


*Well YES!* Nvidia engineering said they fused off the L2.  Just because the company says that marketing was _unaware_ doesn't mean it wasn't "planned".
"This team (PR) was unaware that with "Maxwell," you could segment components previously thought indivisible, or that you could "partial disable" components."
They're big boys, and darn well knew at executive-level meetings (marketing isn't part of that?) what and how those chips needed to be "segmented" to achieve the volumes marketing asked for.



HumanSmoke said:


> Well, we have five different threads devoted to the subject here. I guess the next advertising/marketing misstep (assuming there is one, by your reckoning) should be a doozy.


I don't start the threads.  As above, this can't be sloughed off as just some "misstep" by low-level marketing nerds.



HumanSmoke said:


> The issue certainly isn't, but the level of outrage being shown over a hardware component whose performance hasn't deviated one iota since it was launched and reviewed is certainly cause for humour.


Agreed, though owners who run into this might see some in the community as whitewashing legitimate concerns just because the "canned" reviews didn't expose it (Nvidia darn well knew they wouldn't). If it has no effect, as you contend, why couldn't Nvidia just have published information (at release, or since) about the "segmented" memory implementation, something they've done in the past? I can't believe there weren't plenty of engineers and executives reading reviews who recognized this "supposed marketing oversight", yet neither those employees nor the company came out right then, or months ago when they saw the issues piling up on their own forums!


----------



## RealNeil (Jan 29, 2015)

NVIDIA Said:  "and if current owners are not satisfied with their purchase, they should return it for a refund or exchange"

Yeah! All of you guys start returning your 970s and I'll buy them used or refurbished for a lot less!


----------



## newtekie1 (Jan 29, 2015)

I see a lot of people saying they've lost faith in nVidia and are buying AMD now because of this.  But what gets me is this has no effect on the performance we saw in reviews.  These are just specs on paper; yes, they were reported wrong, but they are just the specs on paper.  The performance does not change just because the specs on paper change.  Just like changing the name doesn't make a difference.  And some of you know I'm very much not against renaming cards.  I didn't have a problem with it when nVidia did it with G92, and I didn't have a problem with it when AMD did it with Tahiti.  Again, the performance for the money is what matters, and that remains unchanged.

But the question I have for everyone trying to say AMD is somehow above doing what nVidia did is, do you remember when AMD released a series of graphics cards and then 2 months later after all the reviews were done and published, released a driver that sneakily reduced performance to stop the cards from overheating and dying prematurely? Do you remember that?

So, which is worse, revising some specs on paper and leaving the performance the same, or purposely lowering the performance with a driver update and not telling anyone?


----------



## HumanSmoke (Jan 29, 2015)

newtekie1 said:


> I see a lot of people saying they've lost faith in nVidia and are buying AMD now because of this.


A lot of that will be AMD shilling and general trolling. Even a cursory look at the forums here will show the most vociferous posters aren't using Nvidia, let alone the GTX 970. I also note that many of the disgruntled "users" on many sites are first-time posters (the same can be said for similar issues that befall AMD, Intel, and Apple). From our own membership of people that actually own the card, I don't see many in a hurry to return it; the larger sentiment seems to be a hope that prices of refurbed cards allow them to buy a second (or more). Of the people I know that actually own them, most are pretty happy; they still haven't got over the buzz of what is, for them, an affordable, quiet, and impressive piece of kit (they all bought Gigabyte G1 Gaming cards).
An indication might be seen with the poll TPU are conducting. The comments seem to indicate that while Nvidia did wrong, the hardware is OK - but the poll indicates an overwhelming difference of opinion.


newtekie1 said:


> But what gets me is this has no effect on the performance we saw in reviews.


Matters not a jot. People gotta get their hate on. You aren't living unless you embrace armchair/hashtag activism, and join the raging against everyone from Obama to why Japan has more flavours of Kit-Kats than your country.


----------



## xorbe (Jan 29, 2015)

nVidia has altered the deal.  Pray that they don't alter it any further.


----------



## alwayssts (Jan 29, 2015)

newtekie1 said:


> I see a lot of people saying they've lost faith in nVidia and are buying AMD now because of this.  But what gets me is this has no affect on the performance we saw in reviews.  These are just specs on paper, yes they were reported wrong, but they are just the specs on paper.  The performance does not change just because the specs on paper change.  Just like changing the name doesn't make a difference.  And some of you know I'm very much not against renaming cards.  I didn't have a problem with it when nVidia did it with G92, and I didn't have a problem with it when AMD did it with Tahiti.  Again, the performance for the money is what matters, and that remains unchanged.
> 
> But the question I have for everyone trying to say AMD is somehow above doing what nVidia did is, do you remember when AMD released a series of graphics cards and then 2 months later after all the reviews were done and published, released a driver that sneakily reduced performance to stop the cards from overheating and dying prematurely? Do you remember that?
> 
> So, which is worse, revising some specs on paper and leaving the performance the same, or purposely lowering the performance with a driver update and not telling anyone?



The problem that I think many non-owners can't seem to grasp (and as shown in the videos kindly posted above) is not general performance, which hasn't changed; you're right that it doesn't change the performance shown at launch.  *The problem is that the transition from the 3.5GB to the .5GB segment causes stutter*.  This is very real and it is *extremely* annoying.  This was not showcased/highlighted in many (any?) reviews, perhaps because they didn't think to look for it or wrote off any hiccups as some other personal anomaly.  Maybe most tested at resolutions that could be contained within 3.5GB (again, this is a great 1080p->1440p card as it is), or in scenarios where the core was bottlenecked before vram became the bottleneck.  The fact remains, there are scenarios where the core can handle settings that would utilize that partition for a fluid experience (in essence I disagree with many who say it is moot because it can't).  There are instances where the bottleneck is that .5GB, or rather switching to it causes stutter (i.e. resolutions/settings in Mordor that would otherwise run solidly above 30fps; I'm sure there are others), and that is a problem, especially because we were lied to about its capabilities.  Had we known about that, it may have caused some people to buy a 290X, as at higher resolutions (while otherwise a similar-performing core) the AMD cards will not have this problem.  I have said it about 37 times in this thread: no 290(X) would fit in my case; the 970 is the best option for me regardless.  That doesn't change the fact the stutter is annoying.

From my understanding, nvidia is doing its best to shove everything typically resident in VRAM even at idle (driver/OS stuff) into that .5GB partition so games will generally not utilize it.  That *should* help, granted I have no idea what is typical in that regard.  As I sit idle at my desktop currently, the OS is using 1GB of video RAM (but that is also at 4K60).  I have no idea how much of that is generally taken away from the OS when a game takes exclusive control of the GPU.
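What "tune what's allocated where" might look like can be sketched as a purely hypothetical placement policy. The segment sizes below match the reported 3.5GB/0.5GB split; everything else (the allocation names, the hot/cold flag, the greedy rule) is made up for illustration, since the real driver's heuristics are not public:

```python
# Hypothetical sketch of a segment-aware placement policy: steer cold,
# rarely-touched allocations (OS/driver surfaces) into the slow 0.5GB
# segment so hot game assets keep the fast 3.5GB. Not the actual driver
# logic, just an illustration of the idea.

FAST_MB, SLOW_MB = 3584, 512  # reported 3.5GB fast / 0.5GB slow split

def place(allocs):
    """allocs: list of (name, size_mb, hot) tuples.
    Hot data prefers the fast segment; cold data goes to the slow one
    while it has room. Spilling hot data to the slow segment is what
    would surface as stutter."""
    fast_free, slow_free = FAST_MB, SLOW_MB
    placement = {}
    # Place hot allocations first so they get first claim on the fast segment.
    for name, size, hot in sorted(allocs, key=lambda a: a[2], reverse=True):
        if not hot and size <= slow_free:
            placement[name] = "slow"; slow_free -= size
        elif size <= fast_free:
            placement[name] = "fast"; fast_free -= size
        else:
            placement[name] = "slow"; slow_free -= size  # hot spill: stutter risk
    return placement

demo = [("desktop composition", 300, False),
        ("driver scratch", 100, False),
        ("game textures", 3200, True),
        ("render targets", 350, True)]
print(place(demo))
```

Under this toy policy the ~400MB of cold OS/driver allocations land in the slow segment, leaving the full fast segment for the game, which is roughly the outcome the quoted statement describes.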


----------



## Xzibit (Jan 29, 2015)

xorbe said:


> nVidia has altered the deal.  Pray that they don't alter it any further.



There is no updated driver coming to save the day anymore.

*PeterS@Nvidia has edited his original post.*

Looks like Nvidia is erasing any admission of culpability ahead of any potential legal action to come.

Once an engineer, now customer care.   Poor Peter is going to be tossed to the wolves.


----------



## alwayssts (Jan 29, 2015)

Xzibit said:


> There is no updated driver to come fix the day anymore.
> 
> *PeterS@Nvidia has edited his original post.*
> 
> ...




Yeah, that's unfortunate.  I truly wish large corporations realized a little honesty/culpability can go a long way towards customer loyalty.  Honest/thorough/grounded (less PR-speak, more actual no-BS dialogue) interaction between providers and customers is an important tool in this day and age; that's the point of the internet and all that.  Can't say I'm surprised; nvidia has never been much for giving the personal leeway you see in the blunt exposé-style Richard Huddy/Eric Demers/Wavey Dave approach. Granted, I have noticed even AMD gives Huddy a handler these days when doing interviews.


----------



## HumanSmoke (Jan 29, 2015)

alwayssts said:


> Can't say I'm surprised, nvidia has never been much for giving the personal leeway you see in the blunt expose'-style Richard Huddy/Eric Demers/Wavey Dave approach....Granted I have noticed even AMD gives Huddy a handler these days when doing interviews.


Probably a dying breed, given the microscope companies are examined under these days. Is Dave still at AMD? I thought he gave that gig away (commenting on the GTX 970 issue would tend to support that). As for Huddy, I'm surprised AMD doesn't have a ball gag on hand whenever there is a microphone in close proximity; he does have a habit of putting his employer in difficult situations (whether AMD, Intel, or Nvidia).


----------



## RejZoR (Jan 30, 2015)

newtekie1 said:


> I see a lot of people saying they've lost faith in nVidia and are buying AMD now because of this.  But what gets me is this has no affect on the performance we saw in reviews.  These are just specs on paper, yes they were reported wrong, but they are just the specs on paper.  The performance does not change just because the specs on paper change.  Just like changing the name doesn't make a difference.  And some of you know I'm very much not against renaming cards.  I didn't have a problem with it when nVidia did it with G92, and I didn't have a problem with it when AMD did it with Tahiti.  Again, the performance for the money is what matters, and that remains unchanged.
> 
> But the question I have for everyone trying to say AMD is somehow above doing what nVidia did is, do you remember when AMD released a series of graphics cards and then 2 months later after all the reviews were done and published, released a driver that sneakily reduced performance to stop the cards from overheating and dying prematurely? Do you remember that?
> 
> So, which is worse, revising some specs on paper and leaving the performance the same, or purposely lowering the performance with a driver update and not telling anyone?



Now it doesn't. But do you know for a fact that it will all work fine in a year, when games that demand more memory start coming out? Do you think NVIDIA will care if it performs like crap then? So why exactly this defense mode from the users? If you actually bought the damn thing you should be even more outraged than those of us who were going to buy it before this shit came up. Just a thought...


----------



## Xzibit (Jan 30, 2015)

More confirmation no Driver Fix for 970 is coming...

*Nvidia GeForce twitter response*


			
				Nvidia GeForce Twitter said:
			
		

> _We are always improving performance through drivers but there are no plans for an update specifically for the GTX 970_



Since this has blown up they are going into shutdown mode. Any statement they make will have to be vetted through their legal team, and I'm sure they are telling them not to reference the 970, as it could be seen and used as an admission of culpability.


----------



## newtekie1 (Jan 30, 2015)

alwayssts said:


> The problem that I think many non-owners can't seem to grasp (and as shown in the videos kindly posted above) is not that general performance hasn't changed, as you're right it doesn't change the performance shown at launch. *The problem is the transition from the 3.5GB to .5GB segment causes stutter*. This is very real and it is *extremely *annoying. This was not showcased/highlighted in many (any?) reviews, perhaps as they didn't think to look for it or saw any potential hiccups as some other personal anomaly. Maybe most tested it at resolutions that could be contained within 3.5GB (again, this is a great 1080p->1440p card as it is), or scenarios the core was bottlenecked before vram became the bottleneck. The fact remains, there are scenarios where the core can put up with gaming scenarios that would utilize that partition for a fluid experience (in essence I disagree with many that say it is moot because it can't). There are instances where the bottleneck is that .5GB, or rather switching to it causes stutter (ie resolutions/settings in Mordor that would otherwise run solidly above 30fps, I'm sure there are others) and that is a problem, especially because we were lied to about it's capabilities. Had we known about that, it may have caused some people to buy a 290x, as at higher resolutions (while otherwise a similar-performing core) the AMD cards will not have this problem. I have said it about 37 times in this thread: no 290(x) would fit in my case; the 970 is the best option for me regardless. That doesn't change the fact the stutter is annoying.



I've been playing FC4@1440p MSAA4 since I got my GTX970(and on my 4GB 670s before that).  Memory usage is often over 3.7GB, the stuttering really isn't bad, or even noticeable.  The odd thing is those videos show the GPU usage drop to 0% when the stuttering happens, and that doesn't happen with my card.  The GPU usage is pegged at 100% always.

Plus there were plenty of opportunities where this should have come up in the reviews.  W1z did a lot of testing at 4k with the card both single card and SLI.  You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k.  He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.


----------



## Xzibit (Jan 30, 2015)

newtekie1 said:


> Plus there were plenty of opportunities where this should have come up in the reviews.  W1z did a lot of testing at 4k with the card both single card and SLI.  You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k.  He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.



Might just be a benchmark suite he lets run and gets FPS results from, since I don't recall W1zzard ever commenting on playability experience in his reviews. Maybe outside of his reviews from personal experience, but he hasn't commented, has he?


----------



## TRWOV (Jan 30, 2015)

I've never experienced the so-called Radeon black screen hardlock (*knocks wood*), but that doesn't mean every other guy that has had that problem is lying or delusional. Not everyone will experience a problem, even if their setups are similar.


----------



## HumanSmoke (Jan 30, 2015)

newtekie1 said:


> I've been playing FC4@1440p MSAA4 since I got my GTX970(and on my 4GB 670s before that).  Memory usage is often over 3.7GB, the stuttering really isn't bad, or even noticeable.  The odd thing is those videos show the GPU usage drop to 0% when the stuttering happens, and that doesn't happen with my card.  The GPU usage is pegged at 100% always.
> 
> Plus there were plenty of opportunities where this should have come up in the reviews.  W1z did a lot of testing at 4k with the card both single card and SLI.  You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k.  He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.


HardOCP did some pretty intensive 4K benchmarks using SLI at max playable settings, and also didn't really find that much discrepancy in playability, and they pegged the 970 setup between the 290X and 290 Crossfire. Techspot also did 4K testing with SLI. Funnily enough I mentioned the lack of texture fill vs the 980 in the comments (as dividebyzero, post #2).
It is definitely going to come down to games/image quality on a case-by-case basis.


----------



## xfia (Jan 30, 2015)

newtekie1 said:


> I've been playing FC4@1440p MSAA4 since I got my GTX970(and on my 4GB 670s before that).  Memory usage is often over 3.7GB, the stuttering really isn't bad, or even noticeable.  The odd thing is those videos show the GPU usage drop to 0% when the stuttering happens, and that doesn't happen with my card.  The GPU usage is pegged at 100% always.
> 
> Plus there were plenty of opportunities where this should have come up in the reviews.  W1z did a lot of testing at 4k with the card both single card and SLI.  You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k.  He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.



Whoever says 4GB is enough for a 4K gaming rig is on crack and didn't test enough games. I don't play games with low standards; if I spend thousands of dollars I don't want just high settings with busted minimum frames.
I have two 1080p monitors (60Hz and 144Hz), and there is no going back to a lower refresh rate for me just to get a pixel density that only matters at, what, 40 inches or more... more like a TV.


----------



## Hayder_Master (Jan 30, 2015)

Bahaha, the funniest video ever about 970 VRAM.


----------



## Red_Machine (Jan 30, 2015)

I just saw this on twitter, not sure what to make of it.


----------



## Casecutter (Jan 30, 2015)

alwayssts said:


> I truly wish large corporations realized a little a *honesty/culpability* can go a long way towards customer loyalty.


This is an unfortunate ebb-and-flow at companies, especially those corporations compelled to demonstrate quarter-over-quarter gains.

AMD seems to have treaded discreetly, and certainly shouldn't be seen as "piling on"... even that "4 GB means 4 GB" is too much. They should know dang well (as any smart company knows) that this kind of "D'oh" moment could be just around the corner for them too, and they don't want their past transgressions dredged up in such conversations.

Honestly, Dave Baumann's comment _(and I can't confirm he's still with AMD)_ was perhaps more that companies don't have to tell us, nor do we have a right to know: *"Fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about; additionally ASIC "harvesting".*  In and of itself he's right, as long as the specifications presented are correct and/or the information provided isn't a pretense for concealing such weaknesses.  It was reckless in this case, because this was something consumers might encounter; as he said, it's "understandable that this would be "discovered" by end users."

Any company, especially at such a level, must maintain an ethical rapport, not just with the end-user customer but for its overall long-term health in other segments, as this might adversely affect OEMs' consideration of them as an engineered-solution provider, and the professional markets.


----------



## HumanSmoke (Jan 30, 2015)

Casecutter said:


> This is an unfortunately ebb-and-flow at companies, especially those corporations that are compelled to demonstrate qtr-qtr gains.
> AMD seems to have treaded discreetly and certainly shouldn't be seen as "piling-on"... even that "4 GB means 4 GB" is too much. They should know dang well (as any smart company knows) this kind of "Doh" moment could be just around the corner, while they don't want to see their past digressions dredged-up in such conversations.


That's kind of what I was alluding to earlier. Not sure whether it's budget cuts/R&D trimming, or just the effort needed to get the console APU parts to market, but AMD is starting to fall behind in some of the very time-sensitive markets it has targeted. As an example (there are others, but I won't spoil the need to play tech detective), AMD's push into ARM servers (the reason they acquired SeaMicro) seems to be leading to a climb-down from earlier lofty claims. Remember that Seattle (the Opteron A1100 series) was due in the second half of 2014, fully wired for SeaMicro's Freedom Fabric interconnect? A few months later, Freedom Fabric was quietly dropped from at least the first generation, and while the development kits have been around since mid-2014, Seattle is for the most part still MIA, delayed (according to AMD) because of a lack of software support.


Casecutter said:


> Honestly, Dave Baumann _(and not finding for sure he’s still with AMD)_ comment was perhaps more that companies don't have to tell us or right to know saying, *"Fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about; additionally ASIC "harvesting".*  In and of itself he’s right, as long as specifications presented are correct and/or the information provide isn't a pretense for concealling such weaknesses.  It's was reckless in this case, because this was something that consumers might encounter as he said, "understandable that this would be "discovered" by end users."


I think Dave was alluding to the sensitivity of the information to other vendors (AMD specifically in this case) as well as the mainstream user base, because widely publicizing the information would allow AMD an insight into Nvidia's binning strategy. If the dies/defects per wafer and wafer cost are known, it becomes a relatively easy task to estimate yields of any ASIC. To use the previous example, AMD is similarly tight-lipped about Seattle's cache coherency network protocol, even though it is supposedly a shipping product. The problem with tech is that industrial secrecy has a tendency to spill over into the consumer arena, some cases more disastrously than others, where it invariably comes to light because it is in the nature of tech enthusiasts to tinker and experiment (as an example, albeit a very minor one in the greater scheme of things: it wasn't AMD that alerted the community that their APUs perform worse with single-rank memory DIMMs).


Casecutter said:


> Any company especially at such a level must maintain an ethical rapport, not just for the end-user customer, but for their overall long-term health in other segments. As it might have an adverse effect on OE's consideration for engineered solution provider, and professional markets.


Agreed, but I think the ethical relationship between vendor and OEM/ODM only extends as far as it costing either of them money. Hardware components have such a quick product cycle that individual issues - even major ones like Nvidia's eutectic underfill problem - tend to pass from the greater consumer consciousness fairly quickly. I would hazard a guess and say that 90% or more of consumer computer electronics buyers couldn't tell you anything substantive about the issue, or any of the others that have befallen vendors (FDIV, f00f, TLB, Enduro, GSoD, Cougar Point SATA, AMD Southbridge I/O, and god knows how many others). What does stick in the public consciousness are patterns (repeat offending), so for Nvidia's sake (and that of any other vendor caught in the same mire) it has to become a lesson learned - and nothing makes a vendor take notice quicker than a substantial hit to the pocketbook.


----------



## RejZoR (Jan 31, 2015)

Basically they'll make it go over 3.5GB even more rarely than it does now...


----------



## RealNeil (Jan 31, 2015)

Give us ~this~ driver for them,.............

*http://www.pcper.com/reviews/Graphics-Cards/Mobile-G-Sync-Confirmed-and-Tested-Leaked-Alpha-Driver*


----------



## MetalRacer (Feb 1, 2015)

RealNeil said:


> Give us ~this~ driver for them,.............
> 
> *http://www.pcper.com/reviews/Graphics-Cards/Mobile-G-Sync-Confirmed-and-Tested-Leaked-Alpha-Driver*
> 
> ...


----------



## TRWOV (Feb 1, 2015)

RealNeil said:


> Give us ~this~ driver for them,.............
> 
> *http://www.pcper.com/reviews/Graphics-Cards/Mobile-G-Sync-Confirmed-and-Tested-Leaked-Alpha-Driver*



So basically G-sync is like FreeSync, just that nVidia developed a module that enabled DP1.2a features on non DP1.2a displays??? Judging by the article that seems to be the case.


----------



## HumanSmoke (Feb 1, 2015)

TRWOV said:


> So basically G-sync is like FreeSync, just that nVidia developed a module that enabled DP1.2a features on non DP1.2a displays??? Judging by the article that seems to be the case.


Seems to be, which would make sense since AMD didn't request the Adaptive Sync addition to the DisplayPort spec until after G-Sync launched.


----------



## Xzibit (Feb 1, 2015)

We will eventually discover that Nvidia's sync method is different from DP 1.2a+, and that's why it disables audio.


----------

