Friday, January 23rd 2015

GeForce GTX 970 Design Flaw Caps Video Memory Usage to 3.3 GB: Report

It may be the most popular performance-segment graphics card of the season, and it may offer unreal levels of performance for its $329.99 price, but the GeForce GTX 970 suffers from a design flaw, according to an investigation by power users. GPU memory benchmarks run on the GeForce GTX 970 show that the GPU is not able to properly address the last 700 MB of its 4 GB of memory.
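The tests being circulated are variations on a simple idea: carve the card's VRAM into fixed-size chunks, then measure bandwidth into each chunk and watch where it falls off. Below is a minimal sketch of that style of test, assuming a CUDA toolchain; the chunk size, kernel, and program structure are illustrative, not the actual benchmark code used in the OCN investigation.

```cuda
// Illustrative per-chunk VRAM bandwidth probe (not the OCN testers' tool).
// Allocates 128 MB blocks until the GPU is full, then times a streaming
// write through each block to estimate bandwidth at that VRAM offset.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void touch(float *p, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) p[i] = (float)i;  // stream writes through the chunk
}

int main() {
    const size_t chunkBytes = 128u << 20;              // 128 MB per chunk
    const size_t n = chunkBytes / sizeof(float);
    std::vector<float*> chunks;

    float *p = nullptr;
    while (cudaMalloc(&p, chunkBytes) == cudaSuccess)  // grab VRAM until full
        chunks.push_back(p);
    cudaGetLastError();  // clear the expected out-of-memory error

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEventRecord(start);
        touch<<<(unsigned)((n + 255) / 256), 256>>>(chunks[c], n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("chunk %3zu (~%4zu MB in): %6.1f GB/s\n",
               c, c * (chunkBytes >> 20), (chunkBytes / 1e9) / (ms / 1e3));
    }
    for (float *q : chunks) cudaFree(q);
    return 0;
}
```

On a healthy card every chunk should land near the rated memory bandwidth (the OS reserves some VRAM, so the loop never reaches the full 4 GB). The results being posted show chunks past roughly the 3.3 GB mark collapsing to a fraction of that figure.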

The "GTX 970 memory bug," as it's now being called on tech forums, is being attributed to user-reports of micro-stutter noticed on GTX 970 setups, in VRAM-intensive gaming scenarios. The GeForce GTX 980, on the other hand, isn't showing signs of this bug, the card is able to address its entire 4 GB. When flooded with posts about the investigation on OCN, a forum moderator on the official NVIDIA forums responded: "we are still looking into this and will have an update as soon as possible."
Sources: Crave Online, LazyGamer

192 Comments on GeForce GTX 970 Design Flaw Caps Video Memory Usage to 3.3 GB: Report

#51
nickbaldwin86
I read this on OCN right when it was posted... I thought, I am sure glad I went with the GTX980 :P
#52
xfia
There have been shitloads of complaints on the NVIDIA forums since the 970 was launched. Either the issue has now been publicly found out, or it's really multiple issues.
The thought that a team of engineers is scratching their heads over a GPU is laughable. It's not a robot or a space shuttle. At this point they're getting pressure to do anything they can to stop a recall, but if they can't, NVIDIA will just do what they have to and apologize.
#55
Fluffmeister
Beat you to it :p If what they say is true, and you have to assume it is, it does make sense and doesn't sound like too much of a big deal to me, frankly.
#56
The N
That's where QC matters. If a company like NVIDIA can't pay for QC, then it's shameful, I guess. They've been charging premium prices for their cards for years now, especially the Kepler series. The 770/780/780 Ti were all among the best-selling products of the past year or two.
#58
xfia
Oh jeez... I can see where this is going :laugh: I was right, they put pressure down the lines at least :laugh:
#59
DarkOCean
There's another problem with NVIDIA architectures since Fermi that no one seems to talk about: the number of ROPs these GPUs can actually access. I've only seen the kind of tests that reveal this on sites like hardware.fr.
For example, a GTX 970 only uses ~40 of its 64 ROPs, a GTX 580 only 32 of 48, a GTX 780 32 of 48, and so on.
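(For readers wondering how a synthetic test can "reveal" an effective ROP count: divide the measured pixel fillrate by the core clock. The figures below are rough illustrations, not hardware.fr's exact measurements.)

```latex
% Effective ROPs inferred from a pixel-fillrate test (illustrative numbers):
\[
\text{effective ROPs} \approx \frac{\text{measured fillrate}}{\text{core clock}}
  = \frac{45~\text{Gpixels/s}}{1.1~\text{GHz}} \approx 41
\]
```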
#60
Digital Dreams
Fluffmeister: doesn't sound like too much of a big deal to me, frankly.
They are counting on people just like yourself to lie down and take it regardless of what was advertised to you or what you paid for.
#61
Steevo
Perhaps they should rewrite the driver so that the desktop, secondary monitors, and other lower-priority tasks use that memory space, while the main portion is used for gaming and high-performance tasks. Who needs 150 GB/s of memory bandwidth for their desktop?
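(For illustration only, here is a toy model of the placement policy being described: steer low-priority surfaces into the slow tail of VRAM and keep the fast region for games. The 3.3 GB split, the priority classes, and every name below are hypothetical; real drivers manage VRAM internally, and none of this is an NVIDIA API.)

```cuda
// Toy VRAM placement policy: prefer the fast heap for game allocations
// and the slow tail for desktop surfaces, spilling over when one is full.
// Purely hypothetical -- sizes, names, and policy are invented for this sketch.
#include <cstdio>
#include <cstdint>

constexpr uint64_t FAST_BYTES = 3300ull << 20;  // fast region: ~3.3 GB
constexpr uint64_t SLOW_BYTES =  700ull << 20;  // slow tail:   ~0.7 GB

enum class Priority { Game, Desktop };          // hypothetical classes

struct Heap { uint64_t used; uint64_t capacity; };

struct VramPool {
    Heap fast{0, FAST_BYTES};
    Heap slow{0, SLOW_BYTES};

    // Place an allocation in its preferred heap, spilling to the other if full.
    const char* place(uint64_t bytes, Priority p) {
        Heap* pref  = (p == Priority::Game) ? &fast : &slow;
        Heap* other = (p == Priority::Game) ? &slow : &fast;
        if (pref->used + bytes <= pref->capacity)   { pref->used  += bytes; return pref  == &fast ? "fast" : "slow"; }
        if (other->used + bytes <= other->capacity) { other->used += bytes; return other == &fast ? "fast" : "slow"; }
        return "fail";
    }
};

int main() {
    VramPool pool;
    printf("desktop surface -> %s heap\n", pool.place(256ull  << 20, Priority::Desktop));
    printf("game textures   -> %s heap\n", pool.place(3000ull << 20, Priority::Game));
    return 0;
}
```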
#63
Fluffmeister
Digital Dreams: They are counting on people just like yourself to lie down and take it regardless of what was advertised to you or what you paid for.
Like I said, it hasn't affected me at all since I've owned the card, so yeah, it is kinda hard to be that upset about it.
#65
xfia
haha my bad... it was tiny but still a little strange.
#67
xfia
Shhh, it was tiny haha. I guess if it suggests anything at all, it's that the 960 has better memory management than the 760.
#68
GorbazTheDragon
Here's the way I'd look at it. The architecture is designed to operate as a full chip; when you start cutting bits off, the balance between the processors/schedulers/registers/controllers/etc. changes, and as NV says, the same goes for the interconnects. The way the architecture is laid out assigns certain areas into blocks, each with their own components, buses, etc., and cutting one of those blocks away, or part of it, gimps the performance of that block. The crossbars partially solve this by allowing some intercommunication between the different parts of the chip, but there is a tradeoff to be made, because more crossbars mean more cost, and they are not utilized as much in a full-chip configuration. A way to potentially alleviate this issue is something similar to what Intel does with their extremely large server chips: a ring bus configuration. However, this has (AFAIK) never been implemented on a chip that has neither the size (talking GM200/GK110/GF110 here) nor the bandwidth requirements, so it could end up even worse than the current configuration due to the extra die area such a system needs. And I don't think the performance benefit would justify a ring bus architecture on the smaller chips.

What it comes down to is optimizations on the architecture level and tradeoffs they will most certainly have taken into account when designing the full and cut chips. They have spent way more R&D time on it than we have, and I'm sure they have a lot more resources to use too, so I don't feel we are in a position to question HOW they lay out their architecture. I am also quite sure that these kinds of issues exist with almost any architecture, especially with cut dies, both CPU and GPU (or any other processor for that matter).

BUT, and here is a big but (and it's underlined too, guess that makes it an important but...) I have to question NVidia's way of marketing this. OK, sure, there are 4 GB of accessible memory, and the memory bus operates at the stated speed, but I feel there should at least be a side note that not all of the memory is addressed at the stated speed. Then again, this complicates things for the less tech-savvy, and results in more confusing numbers.
#69
HumanSmoke
GorbazTheDragon: BUT, and here is a big but (and it's underlined too, guess that makes it an important but...) I have to question NVidia's way of marketing this. OK, sure, there are 4 GB of accessible memory, and the memory bus operates at the stated speed, but I feel there should at least be a side note that not all of the memory is addressed at the stated speed. Then again, this complicates things for the less tech-savvy, and results in more confusing numbers.
Welcome to the wonderful world of marketing!
I think you'd run out of superscript/asterisk notations if you fully tried to explain* the finer points of IC architectures. Both vendors tout DX12 support - Nvidia is careful to append their support with "API", while AMD's fine print reads (paraphrased) "at this time- based on known specifications". AMD were very quick off the mark in publicizing the FX series as the world's first desktop 8-core CPU, but declined to mention the shared resources and compromises involved that separate it from truly being 8 independent cores.
What it comes down to in most instances is how much the user is affected within the space between truth and claims.

* I've always found it astounding the vast number of buyers that don't even read the spec sheet, let alone the fine print and reasons behind it. The number of people who buy based on a few marketing bullet points seems to far outweigh those who research their prospective purchases. Maybe it's the nature of an industry manipulated by built-in obsolescence (or its illusion).
xfia: Shhh, it was tiny haha. I guess if it suggests anything at all, it's that the 960 has better memory management than the 760.
Sleeping Dogs is an AMD Gaming Evolved title tailored to GCN and heavily coded for post-process compute - something Kepler wasn't particularly well suited for.
#70
BiggieShady
Fluffmeister: Beat you to it :p
Ah, fellow ninja master of the Ctrl+V technique :respect: I have yet to learn to use it alone and by itself only :p
Fluffmeister: If what they say is true, and you have to assume it is, it does make sense and doesn't sound like too much of a big deal to me, frankly.
I think it's the best they could do with this kind of asymmetric memory configuration: they get to use the greater part of the VRAM at full bandwidth most of the time... and when a game needs the full 4 GB, they get that with a 1-3% overall performance penalty.
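(A back-of-the-envelope check on that claim, using assumed figures since NVIDIA hadn't published segment speeds at this point: suppose 7/8 of a 224 GB/s bus serves the fast region, 1/8 serves the tail, and a worst-case frame sends 5% of its memory traffic to the tail.)

```latex
% Blended bandwidth under an asymmetric VRAM split (all figures assumed):
\[
B_{\text{eff}} = f_{\text{fast}}B_{\text{fast}} + f_{\text{slow}}B_{\text{slow}}
  = 0.95 \times 196 + 0.05 \times 28 \approx 188~\text{GB/s}
\]
```

That is only about 4% below the fast segment's 196 GB/s, and since games are not purely bandwidth-bound, the frame-time hit would be smaller still, which is consistent with a low single-digit penalty.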
#71
maximoor
HumanSmoke: What u smokin'?
Said by HumanSmoke! It's funny... but I agree with you :)
#72
anubis44
yapchagi: a recall or a simple update might fix the issue. A recall perhaps... with a nice free upgrade to GTX 980 :)
A recall? LOL. Do you realize how expensive that would be? The profit margin on the card is less than the amount it would cost to pay for the shipping back to the manufacturer and send out a replacement. Then there's the cost to nVidia of replacing all those chips with new ones. If this issue can't be resolved with a software patch/driver update, nVidia is going to be spending hundreds of millions of dollars to fix this.
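(A rough sense of the scale involved, with entirely invented inputs: say two million cards in the channel, $40 in two-way shipping and handling per card, and a $60 replacement GPU.)

```latex
% Hypothetical recall cost (every input invented for illustration):
\[
2{,}000{,}000 \times (\$40 + \$60) = \$200\,\text{M}
\]
```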
#73
Lionheart
Nvidia why you do dis :cry:

But seriously I hope Nvidia look into this ASAP. :pimp:
#74
fusionblu
Damn, I bought a Gigabyte GTX 970 4GB G1 Gaming graphics card as part of a PC I will ship to my sister as a present. This is somewhat awkward...
#75
sumludus
If you're gaming on a single 60 Hz 1080p monitor, this issue probably won't affect you for the life of the card (of course, if that's your setup, why blow so much on a 970 to begin with?).

So who is going to feel the pain of this card as it ages? 144 Hz 1080p? 60 Hz 1440p? People with multi-monitor setups are seeing the issues now, but going forward, which consumers will be at risk?