Monday, November 14th 2016
NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?
Air-cargo shipping manifest descriptions point to the possibility that NVIDIA's upcoming high-end graphics card based on the GP102 silicon, the GeForce GTX 1080 Ti, could feature 10 GB of memory, rather than the previously expected 12 GB, which would have matched the TITAN X Pascal. NVIDIA is apparently getting 10 GB to work over a 384-bit wide memory interface, likely using chips of different densities. The GTX 1080 Ti is also rumored to feature 3,328 CUDA cores, 208 TMUs, and 96 ROPs.
NVIDIA has, in the past, used memory chips of different densities to hit its desired capacity targets over limited memory bus widths. The company achieved 2 GB of memory across a 192-bit wide GDDR5 memory interface on the GeForce GTX 660, for example. The product number for the new SKU, as mentioned in the shipping manifest, is "699-1G611-0010-000," which differs from the "699-1G611-0000-000" of the TITAN X Pascal, indicating that this is the second product based on the PG611 platform (the PCB on which NVIDIA built the TITAN X Pascal). NVIDIA is expected to launch the GeForce GTX 1080 Ti in early 2017.
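As a rough illustration of how such a configuration could work, here is a minimal arithmetic sketch; the specific chip mixes below are hypothetical examples and are not taken from the shipping manifest.

```python
# Rough arithmetic sketch: how mixed-density memory chips can hit a
# non-"round" capacity on a given bus width. The chip mixes below are
# hypothetical illustrations, not figures from the shipping manifest.

def total_capacity_gb(chips_gbit):
    """Sum a list of per-chip densities (in gigabits) into gigabytes."""
    return sum(chips_gbit) / 8.0

# A 384-bit bus is twelve 32-bit channels, i.e. one chip per channel.
# One possible mix for 10 GB: eight 8 Gb chips plus four 4 Gb chips.
gtx_1080_ti_guess = [8] * 8 + [4] * 4
print("Hypothetical GTX 1080 Ti:", total_capacity_gb(gtx_1080_ti_guess), "GB")  # 10.0

# GTX 660 principle: a 192-bit bus (six 32-bit channels) reaching 2 GB
# instead of 1.5 GB by putting more density on some channels than others.
gtx_660_example = [2, 2, 2, 2, 4, 4]  # illustrative mix of 2 Gb and 4 Gb parts
print("GTX 660 style:", total_capacity_gb(gtx_660_example), "GB")  # 2.0
```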
Source: VideoCardz
45 Comments on NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?
I'll pass on this round.
When I was weighing up the Fury X vs the 980 Ti, the red team's card just fell short for me, along with the lack of HDMI 2.0 (on a card aimed at 4K gaming, that makes sense...?), and the fear that 4 GB of VRAM - however special - was going to quickly hit a wall. Anyway, this time around, DX12 is a more relevant argument than it was a year and a half ago. I will wait to see what Vega looks like and frankly am willing it to be my next card. I'm sick of NVIDIA (but damn they make nice cards...).
Likewise AMD people can upgrade to whatever AMD is offering next just the same.
And it's not like we have had any game so far where this 4 GB limit has had consequences.
But if you look at the benchmarks, the 4 GB Fury X does not perform any worse than the 6 GB 980 Ti.
That's the GPU's job.
You are mistaken in believing the GPU is the only part that consumes a considerable amount of power. The link between the GPU and the memory is one of the most power-hungry components of a graphics card, and as memory demands continue to increase, so will the need for efficiency. The more RAM, the more power drawn (as shown by the 8 GB RX 480 fiasco). The higher the bandwidth, the more power drawn (as shown by the R9 290X, which had a larger memory bus than it needed, wasting power).
Here is an interesting piece on memory, bandwidth, and power consumption with comparisons; it kind of supports both views... differently:
wccftech.com/nvidia-geforce-amd-radeon-graphic-cards-memory-analysis/
But I think the amount of VRAM a game needs is governed by the amount that's available on the average mid- to high-end cards. Once 8 GB becomes the norm, 4 GB for sure won't be enough.
trog
The 390X has super-high voltages added because driving that 512-bit bus at 6 GHz was nothing more than desperation to increase performance (Hawaii's memory controller was not designed for that speed in the first place).
Usually, ~10% of the power budget goes to memory, and maybe half of that for HBM. That is still an improvement, but it's nowhere near the amount of gain that you believe.
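To put those percentages into perspective, here is a quick back-of-the-envelope sketch; the 250 W board power is an assumed figure, purely for illustration.

```python
# Back-of-the-envelope sketch of the claim above: if roughly 10% of a
# card's power budget goes to memory, and HBM cuts that share in half,
# the absolute savings are modest. The board power is an assumed figure.

board_power_w = 250.0            # assumed total board power, for illustration only
gddr5_share = 0.10               # ~10% of the budget to memory, per the comment above
hbm_share = gddr5_share / 2      # "maybe half of that for HBM"

gddr5_w = board_power_w * gddr5_share   # ~25 W
hbm_w = board_power_w * hbm_share       # ~12.5 W
print(f"GDDR5 memory: ~{gddr5_w:.0f} W, HBM: ~{hbm_w:.0f} W, "
      f"saving ~{gddr5_w - hbm_w:.0f} W")
```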
The 970 fiasco, OTOH, was due to NVIDIA lying about how its 4 GB / 256-bit bus was implemented. It didn't have enough ROPs/L2, and as a result one of its memory segments didn't perform like the others did. NVIDIA screwed up hard there.
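For context, a simplified sketch of the arithmetic behind the 970's widely reported 3.5 GB + 0.5 GB split; the figures are the publicly reported ones, and the layout is condensed rather than a full model of the L2/crossbar arrangement.

```python
# Simplified sketch of the GTX 970's segmented memory, as publicly reported:
# 4 GB total on a 256-bit bus, but a disabled L2/ROP partition leaves
# 0.5 GB on a slower path than the other 3.5 GB.

total_gb = 4.0
chips = 8                       # eight 512 MB chips on eight 32-bit channels
per_chip_gb = total_gb / chips  # 0.5 GB each

fast_segment_gb = 7 * per_chip_gb   # 3.5 GB with full-speed access
slow_segment_gb = 1 * per_chip_gb   # 0.5 GB behind the shared, slower path
print(f"Fast segment: {fast_segment_gb} GB, slow segment: {slow_segment_gb} GB")
```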