Monday, December 29th 2014
NVIDIA GeForce GTX 960 Launch Date Revealed
NVIDIA was originally expected to launch its mid-range GeForce GTX 960 graphics card on the sidelines of the 2015 International CES, in early January, but is now expected to launch the card on the 22nd of the month. The card will be based on NVIDIA's new GM206 silicon, built on its "Maxwell" architecture. Among its known features are a 128-bit wide GDDR5 memory interface, 2 GB of memory, and significantly lower power draw than its predecessor. The card will draw power from a single 6-pin PCIe power connector, and is expected to be priced around the $200 mark.
Source: Hermitage Akihabara
70 Comments on NVIDIA GeForce GTX 960 Launch Date Revealed
Memory bandwidth is almost never the limiting factor.
970 maybe next December…:roll:
If the output is not native, you'll experience more banding (color smearing).
Blu-rays are also going to be providing 4K content in 10-bit 4:4:4.
I've actually seen them side by side; it is pretty much impossible to tell the difference.
Look how much smoother the 10-bit image is! Oh wait, that's how smooth an 8-bit image is, because it IS an 8-bit image... so what does 10-bit look like? Answer: pretty much the same as the 8-bit.
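If anyone wants to see what the extra bits buy on paper, here's a minimal numpy sketch (purely illustrative, no real display pipeline involved) that quantizes a smooth ramp at a few bit depths and counts how many distinct steps survive — more steps means less visible banding:

```python
import numpy as np

def quantize(gradient: np.ndarray, bits: int) -> np.ndarray:
    """Round a 0.0-1.0 gradient to the nearest representable level."""
    levels = 2 ** bits - 1          # 255 for 8-bit, 1023 for 10-bit
    return np.round(gradient * levels) / levels

gradient = np.linspace(0.0, 1.0, 4096)   # a 4096-pixel-wide smooth ramp

for bits in (6, 8, 10):
    steps = len(np.unique(quantize(gradient, bits)))
    print(f"{bits}-bit: {steps} distinct levels across the ramp")

# 6-bit:   64 levels (typical TN panel)
# 8-bit:  256 levels
# 10-bit: 1024 levels
```

Whether those extra levels are actually visible on a given panel is a separate question, which is the point being argued above.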
As for 2 GB only... I feel there will be 4 GB cards. Texture sizes are growing, and so are memory needs. 2 GB wasn't enough for my GTX 670, IMO, and it won't be enough for this card if you intend to keep it for 2-3 years.
You only need to be using 2.1 GB of VRAM for the difference between 2 GB and 4 GB to be obvious. 2 GB is clearly the stock amount to help this card reach its price point, and also to push people toward the 970 if they want 4 GB.
And with 10-bit output you also need a monitor that is actually capable of displaying 10-bit; otherwise it's like sticking a 750 HP V10 engine on a bicycle...
With compression, this Maxwell part can perform just as well on a 128-bit bus as older dies did on 256-bit. I know it's hard to let go, but times and technology change and get more efficient, so the old standards no longer apply.
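For a rough sense of the numbers, here's a back-of-the-envelope sketch. The 7 Gbps / 128-bit figures are the rumored GTX 960 memory specs, and the ~25% average savings is roughly the figure NVIDIA has cited for Maxwell's delta color compression; real savings vary per scene:

```python
# Back-of-the-envelope: how far compression can stretch a narrow bus.
# All card-specific numbers here are rumored/assumed, not confirmed specs.

def raw_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak in GB/s: per-pin data rate x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

gtx960_raw = raw_bandwidth_gbs(7.0, 128)   # rumored GTX 960: 7 Gbps GDDR5, 128-bit
gtx760_raw = raw_bandwidth_gbs(6.0, 256)   # GTX 760: 6 Gbps GDDR5, 256-bit

savings = 0.25                              # assumed average compression savings
gtx960_effective = gtx960_raw / (1 - savings)

print(f"GTX 960 raw:       {gtx960_raw:.0f} GB/s")                    # 112 GB/s
print(f"GTX 960 effective: {gtx960_effective:.0f} GB/s with ~25% compression")
print(f"GTX 760 raw:       {gtx760_raw:.0f} GB/s")                    # 192 GB/s
```

So compression narrows the raw gap considerably (~149 vs. 192 GB/s under these assumptions) without fully closing it; whether that matters depends on how bandwidth-bound the workload actually is.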
I'll try to explain the whole concept a bit more: the following is a simplified model of the factors that determine the performance of RAM (not only on graphics cards).
Factor A: Frequency
RAM runs at a clock speed. RAM running at 1 GHz "ticks" 1,000,000,000 (a billion) times per second. With every tick, it can receive or send one bit on every lane. So a theoretical RAM module with only one memory lane, running at 1 GHz, would deliver 1 gigabit per second; since there are 8 bits to the byte, that means 125 megabytes per second.
Factor B: "Pump Rate"
DDR-RAM (Double Data Rate) can deliver two bits per tick, and there are even "quad-pumped" buses that deliver four bits per tick, but I haven't heard of the latter being used on graphics cards.
Factor C: Bus width
RAM doesn't just have one single lane to send data. Even the Intel 4004 had a 4-bit bus. The graphics cards discussed here have 256 and 384 bus lanes respectively.
All of the above factors are multiplied to calculate the theoretical maximum at which data can be sent or received:
**Maximum throughput in bytes per second = Frequency × Pump rate × Bus width / 8**
Now let's do the math for these two graphics cards. They both seem to use the same type of RAM (GDDR5, with a pump rate of 2), both running at 3 GHz.
GTX 680: 3 GHz × 2 × 256 / 8 = 192 GB/s
GTX Titan: 3 GHz × 2 × 384 / 8 = 288 GB/s
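The same model as a tiny Python function, just to reproduce the two results above (a sketch of the simplified model, not a benchmark):

```python
def max_throughput_gbs(freq_ghz: float, pump_rate: int, bus_width_bits: int) -> float:
    """Theoretical peak in GB/s = frequency (GHz) x pump rate x bus width (bits) / 8."""
    return freq_ghz * pump_rate * bus_width_bits / 8

print(max_throughput_gbs(3.0, 2, 256))   # GTX 680:   192.0 GB/s
print(max_throughput_gbs(3.0, 2, 384))   # GTX Titan: 288.0 GB/s
```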
Factor D: Latency - or reality kicks in
This factor is a LOT harder to calculate than all of the above combined. Basically, when you tell your RAM "hey, I want this data", it takes a while until it comes up with the answer. This latency depends on a number of things and is really hard to pin down, and it usually results in RAM systems delivering far less than their theoretical maxima. This is where all the timings, prefetching, and tons of other stuff come into the picture. And since latency doesn't reduce to a single number where bigger reads as "better", the marketing focus is mostly on the other factors.
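To give a feel for how badly latency can eat into the peak, here's a toy model (both input numbers are assumptions for illustration, not measurements) where every request pays a fixed latency before its burst transfers at the theoretical rate:

```python
# Toy model of Factor D: small scattered reads land far below the headline
# number because each request pays a fixed latency up front.

PEAK_GBS = 192.0      # theoretical maximum from the GTX 680 example above
LATENCY_NS = 50.0     # assumed total access latency per request

def effective_gbs(burst_bytes: int) -> float:
    """Achieved throughput when each burst pays the fixed latency first."""
    transfer_ns = burst_bytes / PEAK_GBS   # 1 GB/s == 1 byte/ns
    return burst_bytes / (LATENCY_NS + transfer_ns)

for burst in (64, 1024, 65536):
    print(f"{burst:>6} B bursts: {effective_gbs(burst):6.1f} GB/s")

# ->    64 B:   1.3 GB/s | 1024 B: 18.5 GB/s | 65536 B: 167.5 GB/s
```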
Conclusion
So, since NVIDIA is making use of advanced texture compression, I see absolutely no problem with the smaller memory bus. The new architecture gives them the ability to decrease the bus width, and it really shouldn't be considered a problem. And since more than half of the "gaming community" plays at less than 1080p (most of them are on 1680x1050), there is absolutely nothing wrong with the 2 GB of VRAM. Okay, enough.
I presume this was meant to be lower priced, but when they saw the sales of the 970 (and, to a lesser degree, the 980), they decided not to kill the golden goose by making a part that bludgeoned the 970's sales the way the 970 bludgeons the 980's. If not for the widespread complaints about coil whine and the lack of a reference design (with NVIDIA's reference cooler), the 970 would be virtually the only card selling.
So the last thing NVIDIA really wants on a new GPU die is one that lets people forgo the 970 in favor of a 960...
www.overclockers.co.uk/showproduct.php?prodid=GX-205-OK
People running 8-bit GPUs with 6-bit TN panels want to see a difference. I'm pretty sure it's the same old arguing-for-the-sake-of-it mentality :rolleyes:
All NVIDIA has to do is enable 10-bit processing on their GeForce line like they do on their Quadro cards. AMD has been doing it for a while, and it would be future-proof for true 4K content. I'm sure a lot of people will appreciate it down the line, even those eyeing this GTX 960.