Wednesday, May 23rd 2018
NVIDIA GeForce "Volta" Graphics Cards to Feature GDDR6 Memory According to SK Hynix Deal
NVIDIA's upcoming GeForce GTX graphics cards based on the "Volta" architecture could feature GDDR6 memory, according to a supply deal SK Hynix struck with NVIDIA, which sent the Korean memory manufacturer's stock price surging by 6 percent. It's not known whether GDDR6 will be deployed on all SKUs, or whether, like GDDR5X, it will be exclusive to a handful of high-end SKUs. The latest version of the SK Hynix memory catalogue points to an 8 Gb (1 GB) GDDR6 memory chip supporting speeds of up to 14 Gbps at 1.35V, and up to 12 Gbps at 1.25V.
Considering NVIDIA already got GDDR5X to run at 11 Gbps, it could choose the faster option. Memory remains a cause for concern. If 8 Gb is the densest chip from SK Hynix, then the fabled "GV104" (GP104-successor), which could likely feature a 256-bit wide memory interface, will only feature up to 8 GB of memory, precluding the unlikely (and costly) option of piggy-backing chips to achieve 16 GB.
Source:
Appuals
55 Comments on NVIDIA GeForce "Volta" Graphics Cards to Feature GDDR6 Memory According to SK Hynix Deal
Not boring but also not really exciting..
But I hope Volta was just a mid-step, and that Turing or whatever comes next has received quite some architectural and memory-compression improvements in the year of development since GV100 launched.
It's not unlikely that "Volta" will offer a performance increase of over 30%, which isn't bad at all. But the interesting chip will be the big one, "GV102", which probably won't launch anytime soon.
The exciting part will be the potential additional features "Volta" may bring to the consumer space, such as RTX, fp16, etc. I just want to add that increasing the resolution matters relatively little to memory consumption and bandwidth usage today (if other parameters are unchanged). It did matter when GPUs had ~1 GB of memory, but not today, when even a 4K framebuffer with 8xAA would only consume about 256 MB.
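That framebuffer figure is easy to verify with back-of-the-envelope arithmetic. The sketch below assumes a 32-bit (4-byte) color value per sample, which is the common case; it is an illustration of the claim above, not a statement about any particular GPU:

```python
# Rough framebuffer size: pixels x samples x bytes per sample.
# Assumes a 4-byte (32-bit) color buffer; a depth/stencil buffer
# would add a comparable amount on top.
def framebuffer_mb(width, height, samples, bytes_per_sample=4):
    return width * height * samples * bytes_per_sample / 2**20

# 4K (3840x2160) with 8x multisampling
print(f"{framebuffer_mb(3840, 2160, 8):.0f} MB")  # 253 MB
```

Even with a matching multisampled depth buffer, the total stays near half a gigabyte — small change against an 8 GB card.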
In most cases computational load grows more quickly than memory bandwidth and capacity requirements. And as we've seen over the last few generations, Nvidia has yet to release a GPU actually bottlenecked by memory, despite what some in the public believe. Well, probably not.
Fragment processing is internally done at fp32 in GPUs, and many games already use HDR internally which they tone map to 8-bit per channel in post-processing.
So it actually comes down to whether games will use higher-resolution assets (textures) because of displaying HDR or not; some games already use HDR textures.
It was Ms. GeForce. Do your research.
Curie is an architecture too before Tesla
if turing was real it would be there already
your slide is from 2013, updated with Pascal and no Volta in 2015; then later it shows both
Celsius - NV10+
Kelvin - NV20+
Rankine - NV30+
Curie - NV40/G70+
Tesla - G80+
Tesla 2.0 - GT200+
Fermi - GF100+
Fermi 2.0 - GF110+
Kepler - GK100+
Kepler 2.0 - GK200+
Maxwell - GM100+
Maxwell 2.0 - GM200+
Pascal - GP100+
Volta - GV100+
Ampere - Fake
Turing - Rumor
going from "to feature" in the title to "could feature"
Peeps have been misinterpreting these utilities for ages and, if ya try real hard, you can create issues, but they just don't pop up under normal usage except on rare occasions ... and when they do, it doesn't matter. What benefit is it if ya can get a 31% increase in fps, when it takes you from an unplayable 13 fps to an unplayable 17 fps? In each case when that happens, the GPU can't handle the resolution / settings anyway. Putting more VRAM in is like putting a 2,000 pound chain on a 1,000 pound hoist:
600 Series - www.pugetsystems.com/labs/articles/Video-Card-Performance-2GB-vs-4GB-Memory-154/
"So, what can we glean from all of that? For one thing, with any single monitor the 2GB video card is plenty - even on the most demanding games and benchmarks out today. When you scale up to three high-res screens the games we tested were all still fine on 2GB, though maxing out some games’ settings at that resolution really needs more than a single GPU to keep smooth frame rates. "
700 series -
The original web site is gone and you have to put up with the non-English narration, but the data is there. They tested the 2 GB and 4 GB 770s in some 45 games at up to 5760 x 1080. Again, they found no significant difference in performance, with the 2 GB often beating the 4 GB. And again, in any game where the 4 GB showed a substantial gain over the 2 GB model, the settings and resolution rendered the game unplayable anyway. The icing on the cake was Max Payne 2, where the game would not install at 5760 x 1080 on the 2 GB card ... it did install on the 4 GB card, and then they went back and tried the 2 GB card ... once the install routine was "tricked", the game ran with no significant decrease in fps and no decrease in playability or image quality.
900 series - www.extremetech.com/gaming/213069-is-4gb-of-vram-enough-amds-fury-x-faces-off-with-nvidias-gtx-980-ti-titan-x
"[Insert any VRAM utilization tool here] doesn’t actually report how much VRAM the GPU is actually using — instead, it reports the amount of VRAM that a game has requested. We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. They all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available.
We began this article with a simple question: “Is 4GB of RAM enough for a high-end GPU?” .... Based on our results, I would say that the answer is yes — but the situation is more complex than we first envisioned. ...
First, there’s the fact that out of the fifteen games we tested, only four could be forced to consume more than 4GB of RAM. In every case, we had to use high-end settings at 4K to accomplish this. ..... provoking this problem in current-generation titles required us to use settings that rendered the games unplayable on any current GPU ”
For the 1000 series, I had one user give me the link below to prove that 6GB was faster than 3 GB
www.techpowerup.com/reviews/MSI/GTX_1060_Gaming_X_3_GB/26.html
Problem is, the 6GB 1060 has 10% more shaders, so it has a faster GPU. If VRAM were in fact a contributor to performance, we could show this by looking at how much of a performance decrease the 3GB model incurred when we went from 1080p to 1440p. Problem is, there is none. The 6GB model is 6% faster at 1080p and it's still 6% faster at 1440p. Now of course, it is certainly going to be an issue at 2160p ... but as before, the GPU itself is not suited for 2160p, so what's the point?
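That reasoning can be framed as a quick sanity check: if VRAM capacity were the bottleneck, the 3 GB card's deficit should widen as resolution (and with it, memory pressure) rises. The numbers below are purely illustrative, not figures taken from the review:

```python
# Hypothetical average fps illustrating the argument: the 6 GB card's
# lead stays constant across resolutions, which points to its ~10%
# extra shaders, not its extra VRAM, as the source of the gap.
fps = {
    "1080p": {"3GB": 100.0, "6GB": 106.0},
    "1440p": {"3GB": 70.0, "6GB": 74.2},
}
for res, cards in fps.items():
    lead = cards["6GB"] / cards["3GB"] - 1
    print(f"{res}: 6GB model leads by {lead:.1%}")
```

A VRAM-limited card would instead show the gap growing with resolution, which is exactly what the review data does not show.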
Yes ... certain games, poor console ports for example, can have issues with inadequate VRAM ... and yes, if you try hard enough you can create issues. But the fact remains, these are the exception rather than the rule. Card offerings with multiple VRAM choices just have not historically shown wide variations ... exceptions exist, but they are rare. So while I'm all for having as much memory as possible paired with any GPU, having more than what the card can actually benefit from doesn't serve a purpose. When doing builds we recommend the following:
Minimum for Budget Limited Builds / Recommended Minimum
1080p - 3 GB / 4 GB
1440p - 6 GB / 8 GB
2160p - 12 GB / 16 GB
Yes, no gaming-oriented card exists past the Ti's 11GB, which is why I don't consider 4K ready for prime time. But logic dictates that if 11 GB is OK for 2160p (which has the same pixels as four 1080p screens), then 1/4 of 11 GB should be fine for 1080p.
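That pixel arithmetic checks out directly:

```python
# 2160p is exactly four 1080p screens' worth of pixels, so scaling
# the 11 GB figure proportionally gives the 1080p equivalent.
pixels_2160p = 3840 * 2160   # 8,294,400
pixels_1080p = 1920 * 1080   # 2,073,600
ratio = pixels_2160p / pixels_1080p
print(ratio)        # 4.0
print(11 / ratio)   # 2.75 (GB)
```

So a proportional rule of thumb puts 1080p at under 3 GB, in line with the minimums in the table above.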
Now, I'm not saying when paying top $$$ it's ok to get a card that only gets by. Just that the inference "less VRAM => sucky gaming" is wrong.
And as some have already mentioned, allocated and needed memory are not the same thing. I'm pretty confident that if Nvidia decide to launch "GTX 1180" with just 8 GB, it will still be a very balanced card.
Look at the past:
GTX 580 1.5 GB
GTX 680 2 GB
GTX 780 3 GB
GTX 980 4 GB
GTX 1080 8 GB
I think GTX 1080 is ahead of the memory requirements.
"According to SK Hynix deal, Nvidia Volta could feature GDDR6"
Those are not equivalent statements. One is indefinite; the other is not.