Monday, December 29th 2014

NVIDIA GeForce GTX 960 Launch Date Revealed

NVIDIA was originally expected to launch its mid-range GeForce GTX 960 graphics card on the sidelines of the 2015 International CES in early January, but is now expected to launch it on the 22nd of that month. The card will be based on NVIDIA's new GM206 silicon, built on its "Maxwell" architecture. Among its known features are a 128-bit wide GDDR5 memory interface, 2 GB of memory, and significantly lower power draw compared to its predecessor. The card will draw power from a single 6-pin PCIe power connector. It's expected to be priced around the $200 mark.
Source: Hermitage Akihabara

70 Comments on NVIDIA GeForce GTX 960 Launch Date Revealed

#26
newtekie1
Semi-Retired Folder
The 980 has a 256-bit bus; I don't know why anyone would expect anything more than 128-bit for the mid-range. Plus, the 256-bit bus handles 4K just fine, so there should be no question that 128-bit will handle 1080p just fine. I'd bet two of these in SLI would kill 1440p.

Memory bandwidth is almost never the limiting factor.
Posted on Reply
#27
McSteel
newtekie1Memory bandwidth is almost never the limiting factor.
Somehow I think this will die together with "you don't need more than 500W for single GPU systems" - i.e. never...
Posted on Reply
#28
ZoneDymo
DeSantaExactly, because everyone has $350, $550, or a grand for a graphics card.
Better off just getting a higher-performing second-hand older card, or waiting for the new gen to come out, than wasting money on this mediocre-performing thing. Why buy a new card when you know it's going to be "meh" at best?
Posted on Reply
#29
Fluffmeister
ZoneDymoBetter off just getting a higher-performing second-hand older card, or waiting for the new gen to come out, than wasting money on this mediocre-performing thing. Why buy a new card when you know it's going to be "meh" at best?
Think of this card as Tonga... but good.
Posted on Reply
#30
Drac
Year 2015, 128-bit bus, 2 GB, and asking $200 (or more) is overpriced for me.
Posted on Reply
#31
newtekie1
Semi-Retired Folder
FluffmeisterThink of this card as Tonga... but good.
Basically, this. The 960 should be able to do about 60% of the performance of the 980, based on the specs we know and the rumors of shader counts. But its power consumption should be amazing. I'd be surprised if it used 100 W.
Posted on Reply
#32
peche
Thermaltake fanboy
Affirmative, I'll keep my current GTX 760…
970 maybe next December…:roll:
Posted on Reply
#33
Xzibit
newtekie1The 980 has a 256-bit bus; I don't know why anyone would expect anything more than 128-bit for the mid-range. Plus, the 256-bit bus handles 4K just fine, so there should be no question that 128-bit will handle 1080p just fine. I'd bet two of these in SLI would kill 1440p.

Memory bandwidth is almost never the limiting factor.
4K resolution, yes, but all Nvidia GeForce cards are limited to 8-bit output, and 4K content is 10-bit 4:4:4 (even 4K streams will be 10-bit 4:2:2), so attaching a GeForce to a 10-bit panel or a decent TV isn't the smartest thing. Not to mention they are already compressing 4K output to 8-bit 4:2:2 instead of 8-bit 4:4:4, so you're already getting degraded results if content quality matters to you when you're not gaming.
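To put some rough numbers behind the bit-depth and chroma-subsampling talk, here is a small back-of-the-envelope sketch in Python. It shows only two things: how many tone levels per channel each bit depth gives you (fewer levels is what makes banding visible), and how much raw, uncompressed bandwidth a 4K signal needs at each combination. The 3840x2160 @ 60 Hz figures and the subsampling factors are illustrative assumptions, not measured GeForce behaviour.

```python
# Back-of-the-envelope numbers for the 8-bit vs 10-bit discussion.
# Illustrative only; assumes 3840x2160 at 60 Hz and ignores blanking intervals.

WIDTH, HEIGHT, FPS = 3840, 2160, 60

# Chroma subsampling -> average number of samples carried per pixel
# (4:4:4 keeps full chroma, 4:2:2 halves it horizontally, 4:2:0 halves it both ways)
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def tone_levels(bit_depth: int) -> int:
    """Distinct levels per color channel; fewer levels means more visible banding."""
    return 2 ** bit_depth

def raw_gbps(bit_depth: int, subsampling: str) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    bits_per_pixel = bit_depth * SAMPLES_PER_PIXEL[subsampling]
    return WIDTH * HEIGHT * FPS * bits_per_pixel / 1e9

if __name__ == "__main__":
    print("8-bit levels per channel :", tone_levels(8))   # 256
    print("10-bit levels per channel:", tone_levels(10))  # 1024
    for depth, sub in [(8, "4:2:2"), (8, "4:4:4"), (10, "4:2:2"), (10, "4:4:4")]:
        print(f"{depth}-bit {sub}: {raw_gbps(depth, sub):6.2f} Gbit/s raw")
```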
Posted on Reply
#34
newtekie1
Semi-Retired Folder
Xzibit4K resolution, yes, but all Nvidia GeForce cards are limited to 8-bit output, and 4K content is 10-bit 4:4:4 (even 4K streams will be 10-bit 4:2:2), so attaching a GeForce to a 10-bit panel or a decent TV isn't the smartest thing. Not to mention they are already compressing 4K output to 8-bit 4:2:2 instead of 8-bit 4:4:4, so you're already getting degraded results if content quality matters to you when you're not gaming.
I think you'd be hard pressed to find anyone that could actually tell the difference in a blind test.
Posted on Reply
#35
Xzibit
newtekie1I think you'd be hard pressed to find anyone that could actually tell the difference in a blind test.
It's pretty straightforward unless you have bad eyesight. There are countless examples on the web for comparison.

[8-bit vs. 10-bit comparison images]

If the output is not native you'll experience more banding (color smearing).

Blu-rays are also going to be providing 4K content in 10-bit 4:4:4.
Posted on Reply
#36
xenocide
DracYear 2015, 128-bit bus, 2 GB, and asking $200 (or more) is overpriced for me.
Hey, the HD2900XT had a 512-bit bus. Honestly, what do the numbers matter as long as the card performs? Things like larger memory buses and more VRAM take up die space and power, and Nvidia is trying to cut down on both. If this card lands between the GTX770 and GTX780, it will be an absolute steal.
XzibitIt's pretty straightforward unless you have bad eyesight. There are countless examples on the web for comparison.

Your proof/example in discussing image quality is seriously a 285x158 jpg?
Posted on Reply
#37
newtekie1
Semi-Retired Folder
XzibitIt's pretty straightforward unless you have bad eyesight. There are countless examples on the web for comparison.

[8-bit vs. 10-bit comparison images]

If the output is not native you'll experience more banding (color smearing).

Blu-rays are also going to be providing 4K content in 10-bit 4:4:4.
And everything you see on the internet is artificially made to look a lot worse than the real world difference because most are viewing them on 8-bit panels with 8-bit consumer cards. Heck, the pictures you posted are 8-bit jpegs. How can you show the difference 10-bit makes in an 8-bit picture?

I've actually seen them side by side; it is pretty much impossible to tell the difference.
Posted on Reply
#38
Xzibit
newtekie1And everything you see on the internet is artificially made to look a lot worse than the real world difference because most are viewing them on 8-bit panels with 8-bit consumer cards. Heck, the pictures you posted are 8-bit jpegs. How can you show the difference 10-bit makes in an 8-bit picture?

I've actually seen them side by side; it is pretty much impossible to tell the difference.
xenocideYour proof/example in discussing image quality is seriously a 285x158 jpg?
No point in providing native 10-bit examples if you're viewing them through an 8-bit GPU process... :rolleyes:
Posted on Reply
#39
newtekie1
Semi-Retired Folder
XzibitNo point in providing native 10-bit examples if you're viewing them through an 8-bit GPU process... :rolleyes:
Exactly. It is far better to show an 8-bit image, make an artificially terrible copy of it, and claim the difference is what you see between 8-bit and 10-bit... :rolleyes:

Look how much smoother the 10-bit image is! Oh wait, that is how smooth an 8-bit image is, because it IS an 8-bit image... so what does a 10-bit image look like? Answer: pretty much the same as the 8-bit one.
Posted on Reply
#40
wolf
Better Than Native
I'm glad a few people jumped in and shut down the 128-bit jibber jabber. 128 is just a number; what matters is performance. This is a new generation, with a new architecture, a new memory controller, and even new memory itself. The proof is always in the pudding and only so much can be taken from a spec sheet, so let's just wait and see, eh?

As for 2 GB only... I feel there will be 4 GB cards. Texture sizes are growing and so are memory needs. 2 GB wasn't enough for my GTX 670 IMO, and it won't be enough for this card if you intend to keep it for 2-3 years.

You only need to be using 2.1 GB of VRAM for the difference between 2 and 4 to be obvious. 2 GB is clearly the stock amount to help this card reach its price point, and also to help push people to go for the 970 if they want 4 GB.
Posted on Reply
#41
RejZoR
Those 10-bit examples above are greatly exaggerated just to prove a point. The same as 4K upscaling examples where you supposedly turn grainy crap video into a super sharp image. And we know it's never like that, but you'll see fancy side-by-side comparisons, just to prove a point.

And with 10-bit output you also need a monitor that is actually capable of displaying 10-bit; otherwise, it's like sticking a 750 HP V10 engine on a bicycle...
Posted on Reply
#42
rtwjunkie
PC Gaming Enthusiast
Those who say this is "meh" are missing the point. This card is meant for the resolution the overwhelming majority of gamers have: 1080p. For THAT resolution it should be a screamer at minimal power usage. You can't compare it and say it won't do 2560x1600 with all visual features.

With compression, this Maxwell can perform just as well on a 128-bit bus as older dies did on 256-bit. I know it's hard to let go, but times and technology change and get more efficient, so the old standards no longer apply.
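As a rough illustration of that compression argument (a sketch only; the 7 Gbps memory speed and the 25-30% compression savings below are assumptions for this example, not confirmed GTX 960 figures), the effective bandwidth of a narrow bus scales up with whatever fraction of the traffic the color compression removes:

```python
# Rough illustration of how framebuffer compression changes effective bandwidth.
# The 7 Gbps memory speed and the 25-30% savings are assumed, not confirmed figures.

def raw_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Raw memory bandwidth in GB/s: pins * per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

def effective_bandwidth_gbs(raw_gbs: float, compression_savings: float) -> float:
    """If compression removes a fraction of the traffic, the same physical bus
    behaves as if it delivered this much uncompressed data."""
    return raw_gbs / (1 - compression_savings)

if __name__ == "__main__":
    narrow = raw_bandwidth_gbs(128, 7.0)  # hypothetical 128-bit, 7 Gbps GDDR5 -> 112 GB/s
    wide   = raw_bandwidth_gbs(256, 6.0)  # GTX 680-class 256-bit, 6 Gbps -> 192 GB/s
    for savings in (0.0, 0.25, 0.30):
        print(f"128-bit with {savings:.0%} compression savings -> "
              f"{effective_bandwidth_gbs(narrow, savings):.0f} GB/s effective "
              f"(vs {wide:.0f} GB/s raw on 256-bit)")
```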
Posted on Reply
#43
Thimblewad
Okay, just an example for everyone who doesn't get the bandwidth on these GPUs. The Titan has a 384-bit bus while a GTX 680 only has 256-bit, hence 50% more memory bandwidth (assuming clocks and latencies are identical).

I'll try to explain the whole concept a bit more: the following is a simplified model of the factors that determine the performance of RAM (not only on graphics cards).

Factor A: Frequency

RAM runs at a clock speed. RAM running at 1 GHz "ticks" 1,000,000,000 (a billion) times a second. With every tick, it can receive or send one bit on every lane. So a theoretical RAM module with only one memory lane running at 1 GHz would deliver 1 gigabit per second; since there are 8 bits to the byte, that means 125 megabytes per second.

Factor B: "Pump Rate"

DDR-RAM (Double Data Rate) can deliver two bits per tick, and there are even "quad-pumped" buses that deliver four bits per tick, but I haven't heard of the latter being used on graphics cards.

Factor C: Bus width

RAM doesn't have just one single lane to send data. Even the Intel 4004 had a 4-bit bus. The graphics cards discussed here have 256 and 384 bus lanes respectively.

All of the above factors are multiplied to calculate the theoretical maximum at which data can be sent or received:

**Maximum throughput in bytes per second = Frequency * PumpRate * BusWidth / 8**

Now let's do the math for these two graphics cards. They both seem to use the same type of RAM (GDDR5 with a pump rate of 2), both running at 3 GHz.

GTX-680: 3 GHz * 2 * 256 / 8 = 192 GB/s

GTX-Titan: 3 GHz * 2 * 384 / 8 = 288 GB/s
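The same arithmetic as a tiny Python helper, so other configurations can be plugged in. The GTX 680 and Titan lines reproduce the figures above; the 128-bit line is only the rumoured GTX 960 configuration at an assumed 3.5 GHz (7 Gbps effective) GDDR5, not a confirmed spec.

```python
# Theoretical peak memory bandwidth = Frequency * PumpRate * BusWidth / 8
# (this ignores latency, the "Factor D" discussed below).

def peak_bandwidth_gbs(freq_ghz: float, pump_rate: int, bus_width_bits: int) -> float:
    """GB/s: clock in GHz * bits per tick per lane * number of lanes, over 8 bits/byte."""
    return freq_ghz * pump_rate * bus_width_bits / 8

if __name__ == "__main__":
    print("GTX 680   (256-bit):", peak_bandwidth_gbs(3.0, 2, 256), "GB/s")  # 192.0
    print("GTX Titan (384-bit):", peak_bandwidth_gbs(3.0, 2, 384), "GB/s")  # 288.0
    # Rumoured GTX 960 config, assuming 3.5 GHz (7 Gbps effective) GDDR5:
    print("128-bit @ 7 Gbps   :", peak_bandwidth_gbs(3.5, 2, 128), "GB/s")  # 112.0
```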

Factor D: Latency - or reality kicks in

This factor is a LOT harder to calculate than all of the above combined. Basically, when you tell your RAM "hey, I want this data", it takes a while until it comes up with the answer. This latency depends on a number of things and is really hard to calculate, and usually results in RAM systems delivering way less than their theoretical maxima. This is where all the timings, prefetching and tons of other stuff comes into the picture. Since it's not just numbers that could be used for marketing, where higher numbers translate to "better", the marketing focus is mostly on other stuff.

Conclusion
So, since NVIDIA is making use of advanced texture compression, I see absolutely no problem regarding the smaller memory bus. The new architecture gives them the ability to decrease the bus width, and that really shouldn't be considered a problem. Since more than half of the "gaming community" uses less than 1080p (most of them are on 1680x1050), there is absolutely nothing wrong with the 2 GB of VRAM. Okay, enough.
Posted on Reply
#45
HisDivineOrder
I'm sure the cut-down GM204 will be in the 960 Ti right around the time when the 970 sales slow, which will probably be around March at this rate. It was rumored to be released a few months back, but now we get this version of the 960 instead.

I presume this was meant to be lower priced, but when they saw the sales of the 970 (and, to a lesser degree, the 980), they decided not to kill the golden goose by making a part that bludgeoned its sales the way the 970 bludgeons the 980. If not for the widespread complaints of coil whine and the lack of a reference design (with nVidia reference cooler), the 970 would be virtually the only card selling.

So the last thing nVidia really wants on a new GPU die is one that lets people forego the 970 in favor of a 960...
Posted on Reply
#46
bubbleawsome
HisDivineOrderthe lack of a reference design (with nVidia reference cooler)
The 970 actually has one I think, though it could just be marketing. I know EVGA said they might have one out by early 2015 since there was so much interest.
Posted on Reply
#48
Dave65
I have NO coil whine with my Gigabyte 970; I am very happy with it!
Posted on Reply
#49
Xzibit
RejZoRThose 10-bit examples above are greatly exaggerated just to prove a point. The same as 4K upscaling examples where you supposedly turn grainy crap video into a super sharp image. And we know it's never like that, but you'll see fancy side-by-side comparisons, just to prove a point.

And with 10-bit output you also need a monitor that is actually capable of displaying 10-bit; otherwise, it's like sticking a 750 HP V10 engine on a bicycle...
At least you understand what it takes for it to work.

People running 8-bit GPUs with 6-bit TN panels want to see a difference. I'm pretty sure it's the same old "I want to argue for the sake of it" mentality :rolleyes:

All Nvidia has to do is enable 10-bit processing on their GeForce line like they do on their Quadro cards. AMD has been doing it for a while, and they can be future-proof for true 4K content. I'm sure a lot of people will appreciate it down the line, even those eyeing this GTX 960.
Posted on Reply
#50
newtekie1
Semi-Retired Folder
XzibitAll Nvidia has to do is enable 10-bit processing on their GeForce line like they do on their Quadro cards. AMD has been doing it for a while, and they can be future-proof for true 4K content. I'm sure a lot of people will appreciate it down the line, even those eyeing this GTX 960.
Thing is, almost no one is going 4K to get 10-bit. If you asked most people, they wouldn't even know that 4K also includes 10-bit. And like I said, if you put two identical monitors running identical content, but one running 8-bit on a GeForce and one running 10-bit on a Quadro, 90% probably couldn't tell the difference. And if the content isn't a still image, I'd be willing to bet 99% couldn't tell the difference.
Posted on Reply