Wednesday, January 28th 2015

NVIDIA to Tune GTX 970 Resource Allocation with Driver Update

NVIDIA plans to release a fix for the GeForce GTX 970 memory allocation issue. In an informal statement to users of the GeForce Forums, an NVIDIA employee said that the company is working on a driver update that "will tune what's allocated where in memory to further improve performance." The employee also stressed that the GTX 970 is still the best performing graphics card at its price-point, and if current owners are not satisfied with their purchase, they should return it for a refund or exchange.
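For context, the behavior being tuned stems from the GTX 970 exposing its 4 GB as a 3.5 GB fast segment plus a 0.5 GB slower segment, and end users originally surfaced it by timing memory operations block by block as allocations approached 4 GB. Below is a minimal sketch of that kind of probe, assuming a CUDA-capable card and the CuPy library; it is not NVIDIA's tool nor the exact benchmark that circulated, and whether the slow segment shows up depends entirely on where the driver happens to place each block.

```python
# Rough per-block VRAM write-bandwidth probe (illustrative sketch only).
# Assumes CuPy is installed and a single NVIDIA GPU is present.
import cupy as cp

BLOCK_MB = 128                               # size of each test block
NUM_BLOCKS = 30                              # ~3.75 GB total, enough to reach the upper segment
block_elems = BLOCK_MB * 1024 * 1024 // 4    # float32 elements per block

blocks = []
for _ in range(NUM_BLOCKS):
    try:
        blocks.append(cp.empty(block_elems, dtype=cp.float32))
    except cp.cuda.memory.OutOfMemoryError:
        break                                # stop once the card refuses further allocations

blocks[0].fill(0.0)                          # warm-up so kernel launch overhead isn't timed

for i, buf in enumerate(blocks):
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    buf.fill(1.0)                            # device-side write across the whole block
    end.record()
    end.synchronize()
    ms = cp.cuda.get_elapsed_time(start, end)
    gbps = (BLOCK_MB / 1024) / (ms / 1000.0)
    print(f"block {i:2d} (~{(i + 1) * BLOCK_MB} MB cumulative): {gbps:6.1f} GB/s")
```

On an unaffected card every block should report roughly the same bandwidth; a sharp drop on the last few blocks is the signature the videos and forum threads have been demonstrating.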
Source: GeForce Forums

89 Comments on NVIDIA to Tune GTX 970 Resource Allocation with Driver Update

#76
Xzibit
newtekie1: Plus there were plenty of opportunities where this should have come up in the reviews. W1z did a lot of testing at 4k with the card both single card and SLI. You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k. He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.
It might just be a benchmark suite he lets run to get FPS results, since I don't recall W1zzard ever commenting on the playability experience in his reviews. Maybe outside of his reviews, from personal experience, but he hasn't commented, has he?
Posted on Reply
#77
TRWOV
I've never experienced the so-called Radeon black screen hardlock *knocks wood*, but that doesn't mean every other guy who has had that problem is lying or delusional. Not everyone will experience a problem, even if their setups are similar.
Posted on Reply
#78
HumanSmoke
newtekie1: I've been playing FC4@1440p MSAA4 since I got my GTX970 (and on my 4GB 670s before that). Memory usage is often over 3.7GB, the stuttering really isn't bad, or even noticeable. The odd thing is those videos show the GPU usage drop to 0% when the stuttering happens, and that doesn't happen with my card. The GPU usage is pegged at 100% always.

Plus there were plenty of opportunities where this should have come up in the reviews. W1z did a lot of testing at 4k with the card both single card and SLI. You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k. He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.
HardOCP did some pretty intensive 4K benchmarks using SLI at max playable settings and also didn't really find that much discrepancy in playability; they pegged the 970 setup between the 290X and 290 Crossfire. Techspot also did 4K testing with SLI. Funnily enough, I mentioned the lack of texture fill rate versus the 980 in the comments (as dividebyzero, post #2).
It is definitely going to come down to games/image quality on a case-by-case basis.
Posted on Reply
#79
xfia
newtekie1: I've been playing FC4@1440p MSAA4 since I got my GTX970 (and on my 4GB 670s before that). Memory usage is often over 3.7GB, the stuttering really isn't bad, or even noticeable. The odd thing is those videos show the GPU usage drop to 0% when the stuttering happens, and that doesn't happen with my card. The GPU usage is pegged at 100% always.

Plus there were plenty of opportunities where this should have come up in the reviews. W1z did a lot of testing at 4k with the card both single card and SLI. You'd think he would have mentioned the stuttering instead of praising the card as a great card for 4k. He even tested BF4 and Watch_Dogs at 4k, both of which I know use more than 3.5GB.
Whoever says 4GB is enough when you're spending all that money on a 4K gaming rig is on crack and didn't test enough games. Hell no, 4GB is not enough. I don't play games with low standards. If I spend thousands of dollars, I don't want just high settings with busted minimum frames.
I have two 1080p monitors, 60Hz and 144Hz, and there is no going back to a lower refresh rate for me just so I can have a pixel density that only matters at, what, 40 inches or more - more like a TV.
Posted on Reply
#81
Red_Machine
I just saw this on Twitter, not sure what to make of it.

Posted on Reply
#82
Casecutter
alwayssts: I truly wish large corporations realized a little honesty/culpability can go a long way towards customer loyalty.
This is unfortunately an ebb and flow at companies, especially those corporations that are compelled to demonstrate quarter-over-quarter gains.

AMD seems to have tread discreetly and certainly shouldn't be seen as "piling on"... even that "4 GB means 4 GB" is too much. They should know dang well (as any smart company does) that this kind of "D'oh" moment could be just around the corner, and they don't want to see their own past transgressions dredged up in such conversations.

Honestly, Dave Baumann's comment (and I can't confirm he's still with AMD) was perhaps more that companies don't have to tell us, and that we have no particular right to know: "Fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about," in addition to ASIC "harvesting." In and of itself he's right, as long as the specifications presented are correct and/or the information provided isn't a pretense for concealing such weaknesses. It was reckless in this case, because this was something consumers might encounter, as he himself said it was "understandable that this would be 'discovered' by end users."

Any company, especially one operating at such a level, must maintain an ethical rapport, not just with the end-user customer but for its overall long-term health in other segments, as a lapse might have an adverse effect on OEMs' willingness to consider it as an engineered-solution provider, and on its standing in professional markets.
Posted on Reply
#83
HumanSmoke
Casecutter: This is unfortunately an ebb and flow at companies, especially those corporations that are compelled to demonstrate quarter-over-quarter gains.
AMD seems to have tread discreetly and certainly shouldn't be seen as "piling on"... even that "4 GB means 4 GB" is too much. They should know dang well (as any smart company does) that this kind of "D'oh" moment could be just around the corner, and they don't want to see their own past transgressions dredged up in such conversations.
That's kind of what I was alluding to earlier. Not sure whether it's budget cuts/R&D trimming, or just the effort needed to get the console APU parts to market, but AMD is starting to fall behind in some of the very time-sensitive markets they've targeted. As an example (there are others, but I won't spoil the need to play tech detective), AMD's push into ARM servers - the reason they acquired SeaMicro - seems to be leading to a climb-down from earlier lofty claims. Remember that Seattle (Opteron A1100 series) was due in the second half of 2014, fully wired for SeaMicro's Freedom Fabric interconnect? A few months later, Freedom Fabric was quietly dumped from at least the first generation, and while the development kits have been around since mid-2014, Seattle is for the most part still MIA - delayed (according to AMD) because of a lack of software support.
Casecutter: Honestly, Dave Baumann's comment (and I can't confirm he's still with AMD) was perhaps more that companies don't have to tell us, and that we have no particular right to know: "Fundamental interconnects within a GPU are not the parts that are ever discussed, because largely they aren't necessary to know about," in addition to ASIC "harvesting." In and of itself he's right, as long as the specifications presented are correct and/or the information provided isn't a pretense for concealing such weaknesses. It was reckless in this case, because this was something consumers might encounter, as he himself said it was "understandable that this would be 'discovered' by end users."
I think Dave was alluding to the sensitivity of the information to other vendors (AMD specifically in this case) as well as to the mainstream user base, because widely publicizing the information would allow AMD an insight into Nvidia's binning strategy. If the dies/defects per wafer and the wafer cost are known, it becomes a relatively easy task to estimate yields of any ASIC. To use the previous example, AMD are similarly tight-lipped about Seattle's cache coherency network protocol, even though it is supposedly a shipping product. The problem with tech is that industrial secrecy has a tendency to spill over into the consumer arena - some cases more disastrously than others - where it invariably comes to light, because it is in the nature of tech enthusiasts to tinker and experiment (as an example, albeit a very minor one in the greater scheme of things, it wasn't AMD that alerted the community that their APUs perform worse with single-rank memory DIMMs).
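To put a rough number on that yield point, here is a minimal sketch using the textbook gross-dies-per-wafer approximation and a simple Poisson defect model. The die area is GM204's published ~398 mm²; the wafer cost and defect density are placeholder assumptions, not known figures.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    """Common first-order approximation for gross dies per wafer."""
    d, s = wafer_diameter_mm, die_area_mm2
    return (math.pi * (d / 2) ** 2) / s - (math.pi * d) / math.sqrt(2 * s)

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

# Illustrative numbers only -- wafer cost and defect density are assumptions.
wafer_cost = 5000.0      # USD per 300 mm wafer (assumed)
die_area = 398.0         # mm^2, GM204's published die size
defect_density = 0.12    # defects per cm^2 (assumed)

gross = dies_per_wafer(300, die_area)
good = gross * poisson_yield(die_area, defect_density)
print(f"gross dies/wafer: {gross:.0f}, good dies: {good:.0f}, "
      f"cost per good die: ${wafer_cost / good:.0f}")
```

Plug in real wafer pricing and defect density and that same arithmetic is what would let a competitor back out how aggressively dies are being harvested into cut-down SKUs.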
Casecutter: Any company, especially one operating at such a level, must maintain an ethical rapport, not just with the end-user customer but for its overall long-term health in other segments, as a lapse might have an adverse effect on OEMs' willingness to consider it as an engineered-solution provider, and on its standing in professional markets.
Agreed, but I think the ethical relationship between vendor and OEM/ODM only extends as far as it costs either of them money. Hardware components have such a quick product cycle that individual issues - even major ones like Nvidia's eutectic underfill problem - tend to pass from the greater consumer consciousness fairly quickly. I would hazard a guess and say that 90% or more of consumer computer electronics buyers couldn't tell you anything substantive about the issue, or any of the others that have befallen vendors (FDIV, f00f, TLB, Enduro, GSoD, Cougar Point SATA, AMD Southbridge I/O, and god knows how many others). What does stick in the public consciousness are patterns (repeat offending), so for Nvidia's sake (and that of any other vendor caught in the same mire) it has to become a lesson learned - and nothing makes a vendor take notice quicker than a substantial hit to the pocketbook.
Posted on Reply
#84
RejZoR
Basically they'll make it go over 3.5GB even more rarely than it does now...
Posted on Reply
#88
HumanSmoke
TRWOV: So basically G-Sync is like FreeSync, just that nVidia developed a module that enabled DP 1.2a features on non-DP 1.2a displays??? Judging by the article, that seems to be the case.
Seems to be, which would make sense since AMD didn't request the Adaptive Sync addition to the DisplayPort spec until after G-Sync launched.
Posted on Reply
#89
Xzibit
We will eventually discover that Nvidia's sync method is different from DP 1.2a+, and why it disables audio.
Posted on Reply