Tuesday, November 11th 2014

NVIDIA GeForce GTX 960 to Retain Memory Bus from GTX 970

Among other things, like CUDA core and TMU counts, NVIDIA was expected to give its next mid-range graphics card, the GeForce GTX 960, a narrower memory bus with 3 GB of memory, if not less. A sample sniffed out by India's overly transparent customs department, en route to testing facilities in the country, reveals that this is not the case.

The GeForce GTX 960, according to the description given in the shipping manifest of the sample, features 4 GB of memory with a full 256-bit wide memory interface. It also reveals clock speeds in the neighborhood of 993 MHz core, with 6.00 GHz (GDDR5-effective) memory. It doesn't, however, confirm that the GTX 960 is based on a cut-down GM204 silicon. This could still be a different chip, the so-called GM206, which succeeds the GK106.
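For context, the leaked bus width and memory clock pin down the card's theoretical memory bandwidth. A quick back-of-the-envelope calculation, assuming the manifest figures are accurate:

```python
# Theoretical memory bandwidth from the leaked specs. Figures come from
# the shipping manifest and are unconfirmed by NVIDIA.
bus_width_bits = 256        # memory interface width
data_rate_gbps = 6.00       # effective GDDR5 data rate per pin (Gbit/s)

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8    # bits -> bytes
print(f"{bandwidth_gb_s:.0f} GB/s")  # 192 GB/s, vs. 224 GB/s on the GTX 970
```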

22 Comments on NVIDIA GeForce GTX 960 to Retain Memory Bus from GTX 970

#1
GhostRyder
Interesting proposal, Nvidia; this could rock the mid-range world, but honestly it was about time for the mid-range to get some love in the memory department. I think games were getting to the point where more memory is a necessity to make the mid-range offering appealing. Guessing the price will be about $250, so that is going to be a great value!
#2
Animalpak
Very few people are interested in the GTX 960; that sounds strange to me.
#3
apoe
Animalpak: Very few people are interested in the GTX 960; that sounds strange to me.
Says who?

$250 price range cards have almost always been the sweet spot for price/performance, and they usually get the most sales.

Even though you get huge value for money with the 970, the 960 will still be of interest to a lot of people.
#4
HumanSmoke
Animalpak: Very few people are interested in the GTX 960; that sounds strange to me.
Because this is a tech site, and there is currently a boatload of rumour flying around about GM200 and Fiji. Consequently, the lower-tier parts (GM206 and Bermuda) that aren't the performance kings get marginalized. Pricing will be the motivating factor for interest in these parts, more so than hardware specs. Most people should expect the incoming parts to be cheaper while offering roughly the same performance as cards one rung up the hierarchy from the previous generation.
#5
NC37
GhostRyder: Interesting proposal, Nvidia; this could rock the mid-range world, but honestly it was about time for the mid-range to get some love in the memory department. I think games were getting to the point where more memory is a necessity to make the mid-range offering appealing. Guessing the price will be about $250, so that is going to be a great value!
Kinda have to if they want to tout 4K stuff. I don't think it's all about the games. 4 GB of VRAM on a midrange card... that's totally for pushing high resolutions, IMO. Of course, it's not that it can handle 4K in games, just that it can support the resolution, so they can say they have another GPU capable of it. SLI might be a different matter.
#6
GhostRyder
NC37: Kinda have to if they want to tout 4K stuff. I don't think it's all about the games. 4 GB of VRAM on a midrange card... that's totally for pushing high resolutions, IMO. Of course, it's not that it can handle 4K in games, just that it can support the resolution, so they can say they have another GPU capable of it. SLI might be a different matter.
Well, honestly, in this area I would not expect much in the realm of 4K, just because the upper parts have a difficult time as it is, but for 1440p it's going to be very nice not to be stuck at 2 GB anymore, on top of having a decent-sized memory bus that matches the upper-tier parts. The compression method that has become a staple of the new cards helps in that area as well, so I have a feeling this is going to be a great value overall no matter what (though I would have thought 3 GB is what we would be looking at, honestly, even though that would be an oddity in its own right). I am glad Nvidia is finally getting over being stingy on the RAM, because that is something I have had an issue with for years.
#7
The Von Matrices
NC37: Kinda have to if they want to tout 4K stuff. I don't think it's all about the games. 4 GB of VRAM on a midrange card... that's totally for pushing high resolutions, IMO. Of course, it's not that it can handle 4K in games, just that it can support the resolution, so they can say they have another GPU capable of it. SLI might be a different matter.
I wouldn't think that Nvidia has plans to target the 4K market with this card. In my judgment the optimal amount of memory for a 2015 mid-range card would be 3 GB, but on a 256-bit bus that would result in an asymmetric memory configuration (not that Nvidia hasn't done it before). It would seem that the only reason to put 4 GB on the card is because 2 GB is too little, not because Nvidia wants to target 4K or expects all 4 GB to be needed.
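To unpack the asymmetry point: a 256-bit bus is built from eight 32-bit channels, so a capacity like 3 GB can't be reached with uniform chip densities. A minimal sketch, assuming typical GDDR5 chip densities (not leaked specs):

```python
# A 256-bit bus is eight 32-bit channels, so uniform GDDR5 densities land
# on 2 GB or 4 GB. Hitting 3 GB means mixing chip densities, leaving part
# of the memory reachable at reduced bandwidth (cf. the asymmetric
# GTX 660, which put 2 GB on a 192-bit bus).
channels = 256 // 32                  # eight 32-bit channels
for chip_mb in (256, 512):            # 2 Gbit and 4 Gbit GDDR5 chips
    total_gb = chip_mb * channels / 1024
    print(f"{chip_mb} MB x {channels} chips = {total_gb:.0f} GB")
```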
#8
thebluebumblebee
GhostRyder: Guessing the price will be about $250
15,747 INR = $256, so yeah.
This is from the end of September. Maybe these will be out before the end of the year.
I was hoping that the GTX 960 would be under 100 watts, but with 4 GB of VRAM, I doubt it. I'll predict: 25 watts less than the 970. :p
#9
Champ
Whenever you go on eBay or CL to look at gaming PCs, they always have 560s, 660s, and 760s. Power gamers like ourselves are uncommon.
#10
Easy Rhino
Linux Advocate
If the specs are correct, the 960 will outsell the 970 and 980 in spades.
#11
xorbe
3072 is only 6.7% more than 2880 (780Ti / Titan Black).
#12
RealNeil
At the $250 price point, two GTX 960s in SLI would be attractive to some of us.
#13
HalfAHertz
xorbe: 3072 is only 6.7% more than 2880 (780 Ti / Titan Black).
Maxwell packs fewer shaders into more clusters and thus manages to increase efficiency by quite a bit compared to Kepler. Comparing shader counts is not a good indicator of performance differences.
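For reference, the reorganization behind that efficiency point, sketched with the published per-cluster core counts. The per-core gain figure is NVIDIA's approximate first-generation Maxwell whitepaper claim (an SMM delivering roughly 90% of an SMX's performance from two-thirds the cores), used here only as an illustration:

```python
# Kepler vs. Maxwell shader organization: fewer CUDA cores per cluster,
# more clusters, better utilization per core.
KEPLER_SMX_CORES = 192     # CUDA cores per Kepler SMX
MAXWELL_SMM_CORES = 128    # CUDA cores per Maxwell SMM

print("GTX 780 Ti:", 15 * KEPLER_SMX_CORES, "cores")    # 2880
print("GTX 980:   ", 16 * MAXWELL_SMM_CORES, "cores")   # 2048

# NVIDIA's rough claim: an SMM hits ~90% of SMX throughput with 2/3 the
# cores, i.e. ~1.35x per-core throughput (illustrative, not measured).
per_core_gain = 0.90 * (KEPLER_SMX_CORES / MAXWELL_SMM_CORES)
print(f"Approx. per-core throughput gain: {per_core_gain:.2f}x")
```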
#14
renz496
RealNeil: At the $250 price point, two GTX 960s in SLI would be attractive to some of us.
Indeed. It will be a good candidate to replace my current 660s.
#15
GhostRyder
RealNeil: At the $250 price point, two GTX 960s in SLI would be attractive to some of us.
I agree completely, especially if it's just another cut-down chip. Though if it is, I bet there will sadly be some hard limits in the way of overclocking.
#16
Casecutter
Just perhaps... if AMD is doing enough HBM, there may be a good amount of GDDR5 production freed up, such that the memory will be abundant and less costly. Nvidia sees the marketing advantage of claiming 4 GB for not that much more cost (as when DDR3 was packed on to project a "WOW" on the packaging).

AMD using just 2 GB for the 285 got a bunch of prejudicial comments, even if it showed their compression technology made 3 GB unnecessary for the bulk of games. Although, I imagine whatever a Tonga XT shows up as, it will pack 4 GB.
#17
Hilux SSRG
The 970 is already too cut down from the mid-high range 980 chip. A 960 at $200 would be the better price point for the consumer.
#18
Nabarun
I'd rather have a few more TMUs, ROPs, etc., than a useless 4 GB of VRAM. People who opt for cards in this price range are not going to be using anything more than 1080p.
#19
The Von Matrices
Casecutter: AMD using just 2 GB for the 285 got a bunch of prejudicial comments, even if it showed their compression technology made 3 GB unnecessary for the bulk of games.
You're confused about GPU memory compression.

The current implementation of GPU memory compression does not allow more data to fit in the same memory space; a 2GB GPU with compression will still only allow 2GB of uncompressed data to fit within VRAM. This is because you can't know the compression ratio beforehand, so you have to assume the worst case scenario - that all data is not compressible - so that you do not unexpectedly run out of VRAM and have to begin flushing to system memory.

What the compression does do is allow the same data to be stored in fewer bytes in the VRAM, increasing effective memory bandwidth. For example, if you can compress a 2MB texture to 1.5MB before storing it in memory, that is functionally equivalent to a 33% increase in memory bandwidth.

An application that is short on VRAM and is overflowing to system memory will not benefit from memory compression. There is still the same incentive as ever to put more VRAM on a card if the application can use it. The problem with benchmarking GPUs is that memory bus width and memory capacity are correlated (you need more chips to populate a wider bus) so it's easy to confuse additional performance as being related to memory capacity as opposed to the increased bandwidth a wider bus offers.

When you see benchmarks where a 2GB Tonga performs the same as a 3GB Tahiti, there are two factors involved. The card is achieving a high memory compression ratio, compensating for the 33% decrease in memory bus width, and, more importantly, the application never needed more than 2GB in the first place. An application that needs 3GB of memory will show an improvement on a card with 3GB of VRAM no matter how well compression works on a 2GB card.
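A toy model of the distinction drawn above, using the 2 MB → 1.5 MB texture example for the ratio (all figures are illustrative, not tied to a specific card):

```python
# Compression buys bandwidth, not capacity.
raw_bandwidth_gb_s = 176.0       # illustrative raw bandwidth figure
compression_ratio = 2.0 / 1.5    # the 2 MB -> 1.5 MB example (~1.33x)

effective_bandwidth = raw_bandwidth_gb_s * compression_ratio
print(f"Effective bandwidth: {effective_bandwidth:.0f} GB/s (+33%)")

# Capacity is budgeted for the worst case (incompressible data), so a
# 2 GB card still only guarantees room for 2 GB of uncompressed data.
vram_gb = 2.0
print(f"Guaranteed capacity: {vram_gb:.0f} GB, with or without compression")
```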
#20
Casecutter
Casecutter: even if it showed their compression technology made 3 GB unnecessary for the bulk of games.
Thanks for that clarity... I probably should've used better wording than "not necessary", but I believe I was holding that to the context of 1080p resolution. There aren't that many titles that make use of 3 GB @ 1080p, and that is all the 285 was predestined to encompass.
#21
xorbe
The Von Matrices: You're confused about GPU memory compression.

The current implementation of GPU memory compression does not allow more data to fit in the same memory space; a 2GB GPU with compression will still only allow 2GB of uncompressed data to fit within VRAM. This is because you can't know the compression ratio beforehand, so you have to assume the worst case scenario - that all data is not compressible - so that you do not unexpectedly run out of VRAM and have to begin flushing to system memory.
That doesn't sound right to me. VRAM management should be transparent to the app.
#22
Disparia
It is transparent. Why do you think otherwise?