Monday, November 14th 2016

NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?

Air-cargo shipping manifest descriptions point to the possibility that NVIDIA's upcoming high-end graphics card based on the GP102 silicon, the GeForce GTX 1080 Ti, could feature 10 GB of memory, and not the 12 GB previously thought, which would have matched the TITAN X Pascal. NVIDIA is apparently getting 10 GB to work over a 384-bit wide memory interface, likely by using chips of different densities. The GTX 1080 Ti is also rumored to feature 3,328 CUDA cores, 208 TMUs, and 96 ROPs.

NVIDIA has, in the past, used memory chips of different densities to achieve its desired memory amount targets over limited memory bus widths. The company achieved 2 GB of memory across a 192-bit wide GDDR5 memory interface on the GeForce GTX 660, for example. The product number for the new SKU, as mentioned in the shipping manifest, is "699-1G611-0010-000," which is different from the "699-1G611-0000-000" of the TITAN X Pascal, indicating that this is the second product based on the PG611 platform (the PCB on which NVIDIA built the TITAN X Pascal). NVIDIA is expected to launch the GeForce GTX 1080 Ti in early-2017.
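A 384-bit GDDR5/5X interface is typically wired with twelve 32-bit chips, so one way to land on 10 GB is to mix chip densities. The sketch below enumerates the possibilities under the assumption that NVIDIA mixes 8 Gb (1 GB) and 4 Gb (512 MB) chips; the chip counts and densities are speculation, not confirmed specs.

```python
# Hypothetical sketch: reaching 10 GB over a 384-bit bus with mixed-density
# GDDR5/5X chips. Twelve 32-bit chips populate a 384-bit interface; we try
# every split between 1 GB and 512 MB parts. These densities are assumptions.
CHIPS = 12  # 384-bit bus / 32-bit per chip

for high_density in range(CHIPS + 1):
    low_density = CHIPS - high_density
    total_gb = high_density * 1.0 + low_density * 0.5
    if total_gb == 10.0:
        print(f"{high_density} x 1 GB + {low_density} x 512 MB = {total_gb:.0f} GB")
# -> 8 x 1 GB + 4 x 512 MB = 10 GB
```

Under those assumptions, only one split works: eight 1 GB chips plus four 512 MB chips.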
Source: VideoCardz

45 Comments on NVIDIA GeForce GTX 1080 Ti Features 10GB Memory?

#26
Prima.Vera
ZoneDymoAnd its not like we have had any game so far where this 4gb limit has had consequences.
Well, the latest COD:IW uses all of the 8 GB of VRAM at 3440x1440. Not sure if it really needs that much, or if it's just the engine caching...
Posted on Reply
#27
xorbe
btarunrThat's why you read the post before commenting.
With all due respect my dearest moderator, that reply was not called for. I did read the post -- I guess I was unaware of how nvidia has configured memory before. I don't see any explanation following, though. Let me try:

1: 1GB + 512MB
2: 1GB + 512MB
3: 1GB + 512MB
4: 1GB + 512MB
5: 1GB + 1GB
6: 1GB + 1GB

Asymmetrical like that? I'm looking at the AnandTech and TPU GTX 660 reviews. 192-bit and 8 memory chips for 2GB. Must be two 512MB channels, then one channel with 1GB.
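That two-512MB-channels-plus-one-1GB-channel guess can be sanity-checked with a quick sketch. Assuming eight 2 Gb (256 MB) chips across three 64-bit controllers (the exact wiring is my assumption, not from the reviews):

```python
# Sketch of the speculated GTX 660 asymmetric layout: three 64-bit
# controllers, eight 2 Gb (256 MB) GDDR5 chips in total. One controller
# carries four chips, the other two carry two chips each. The wiring
# here is an assumption for illustration.
CHIP_MB = 256  # 2 Gb chip

controllers = {"A": 4 * CHIP_MB, "B": 2 * CHIP_MB, "C": 2 * CHIP_MB}
total_mb = sum(controllers.values())
print(controllers, f"total = {total_mb} MB")
# -> {'A': 1024, 'B': 512, 'C': 512} total = 2048 MB
```

So the chip count and capacity from the reviews do fit a 1 GB + 512 MB + 512 MB split.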
Posted on Reply
#28
Casecutter
FordGT90ConceptThis sort of memory tomfoolery is what got them into trouble with GTX 970.
When I first read this I was thinking of the 970's 3.5 GB + 0.5 GB. It's kind of the same thing this time: fusing off L2 and using crossbars to get back orphaned memory controllers. This time they are not "fooling with marketing"; it's 10 GB (a nice round number, not 3.5 GB), while there's 2 GB that could be accessed if need be. I'll bet it's the same type of gelding of GP102 chips, though. Could they use regular GDDR5 for the last 2 GB (not 5X) and save cost?
Posted on Reply
#29
Franzen4Real
FordGT90ConceptShadows of Mordor comes to mind.
trog100the game as it came played fine on 4 gigs.. the textures that needed 6 gig were offered as a downloadable extra.. most people would not have bothered..

but i think the amount of vram a game needs is governed by the amount thats available on the average mid to high level cards.. once 8 gig becomes the norm 4 gigs for sure wont be enough..

trog
I have had GPUTweak running during Warcraft Legion and in some zones (Val'Sharah) I have peaked at 5.1 GB of VRAM used, and I don't know that I have ever seen it drop below 4 GB in any zone (it's usually between 4.2 and 4.7 GB used). This is on a GTX 1080 at 2560x1440, all settings maxed except 1.0x resolution scale (no DSR enabled either). Yes, it can run fine on less VRAM with the settings turned down, but I was surprised to see this particular title, which is about as mainstream as you can get for a AAA game, use as much VRAM as it does.
Posted on Reply
#30
efikkan
evernessinceWaiting for Vega and HBM2. We've already seen the power savings of HBM1, AMD's top end cards should have excellent watt / performance that should compare to Nvidia.
Even without HBM, Nvidia is still ~50-80% more power efficient.

There will be no need for HBM in Pascal, Vega, and possibly Volta for gaming. What matters is real-world performance, not theoretical performance figures you'll never get to experience.
qubitYup, designs are so much more efficient when sticking to powers of 2. Unfortunately the chips can get so large that physical constraints force them to make compromises with the higher end models so we end up with lopsided memory buses and memory amounts.
I guess you mean powers of two per memory controller. In a GPU each 64-bit memory controller works independently, and it doesn't matter if the total is not a power of two, or even odd for that matter. It is however preferable that the memory capacity and bandwidth are identical for each controller, unlike products like GTX 970, GTX 660 Ti and GTX 660 among others.
The following configurations are unproblematic:
128-bit (2×64-bit)
192-bit (3×64-bit)
256-bit (4×64-bit)
320-bit (5×64-bit)
384-bit (6×64-bit)
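The configurations above can be sketched in a few lines: the bus is just a stack of independent 64-bit controllers, and as long as every controller carries the same capacity, the total need not be a power of two. The 1 GB per controller used below is an illustrative assumption.

```python
# Sketch of the "unproblematic" bus configurations: each bus is built from
# independent 64-bit controllers. With identical capacity behind every
# controller (1 GB assumed here for illustration), totals like 3 GB, 5 GB,
# or 6 GB are perfectly valid despite not being powers of two.
PER_CONTROLLER_GB = 1

for bus_bits in (128, 192, 256, 320, 384):
    controllers = bus_bits // 64
    print(f"{bus_bits}-bit = {controllers} x 64-bit -> "
          f"{controllers * PER_CONTROLLER_GB} GB")
# -> 128-bit = 2 x 64-bit -> 2 GB
#    ...
#    384-bit = 6 x 64-bit -> 6 GB
```

Problems like the GTX 970's arise only when one controller is given a different capacity or bandwidth than its siblings.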

-----

"GTX 1080 Ti"* will be an exciting product, and will finally be an "affordable" powerhouse in the Pascal family. If the claim of 10 GB of memory is correct, it will surely be more than enough.

Just like its big brother Titan, GTX 1080 Ti will feature full-performance fp16, with the possibility of great performance gains for games utilizing it. Vega is supposed to get the same feature.

*) Product naming is not confirmed. Personally I think "GTX 1090" would be more fitting, to indicate the gain over GTX 1080.
Posted on Reply
#31
$ReaPeR$
as always Nvidia is extremely meticulous in segmenting its products. i would put 12 gig on the card and be done with it. the price is more than enough for 12 gig..
Posted on Reply
#32
Prima.Vera
$ReaPeR$as always Nvidia is extremely meticulous in sectioning its products. i would put 12 gig on the card and be done with it. the price is more than enough for 12 gig..
that's overkill...
efikkan*) Product naming is not confirmed. Personally I think "GTX 1090" will be more suiting, to indicate the gain over GTX 1080.
Neh. It will be slower than Titan X, and Titan is not that fast compared to the 1080.
Posted on Reply
#33
qubit
Overclocked quantum bit
efikkanI guess you mean powers of two per memory controller. In a GPU each 64-bit memory controller works independently, and it doesn't matter if the total is not a power of two, or even odd for that matter. It is however preferable that the memory capacity and bandwidth is identical for each controller, unlike products like GTX 970, GTX 660 Ti and GTX 660 among others.
The following configurations are unproblematic:
128-bit (2×64-bit)
192-bit (3×64-bit)
256-bit (4×64-bit)
320-bit (5×64-bit)
384-bit (6×64-bit)
I did actually mean the whole chip, but yes, per memory controller seems to be the logical place to cut it back from if the physical silicon gets too large. In this instance, the GP102 GPU is only 1.5x as large as the GP104 GPU instead of the expected 2x which would have delivered double the performance, for this very reason. On top of that, the GP102 is still crippled to improve yields, reducing its performance further. It's a shame, but there you go. :ohwell:
Prima.VeraNeh. It will be slower than Titan X, and Titan is not that fast compared to the 1080.
That's what I thought, but I also did notice in a few places in TPU's review where the card was being held back by the CPU so we weren't seeing its true performance.
Posted on Reply
#34
efikkan
Prima.VeraNeh. It will be slower than Titan X, and Titan is not that fast compared to the 1080.
It will be very close to Titan X, and Titan X performs 35% better than GTX 1080 (in a closed case with pre-warming before benchmarking).
qubitI did actually mean the whole chip, but yes, per memory controller seems to be the logical place to cut it back from if the physical silicon gets too large. In this instance, the GP102 GPU is only 1.5x as large as the GP104 GPU instead of the expected 2x which would have delivered double the performance, for this very reason.
I need to correct you there; GP102 is 30-35% faster than GP104, not twice as fast.
If you are thinking about fp16 performance, fp16 only needs half the bandwidth of fp32…
Posted on Reply
#35
qubit
Overclocked quantum bit
efikkanI need to correct you there; GP102 is 30-35% faster than GP104, not twice as fast.
If you are thinking about fp16 performance, fp16 only needs half the bandwidth of fp32…
No, I think you've misunderstood what I said there.

I'm simply saying that if the GP102 had been double the size of GP104 that it would have been twice as fast (at the same clock speeds and not crippled in any way). However, it's 1.5x as big and crippled too, so we don't get so much performance gain.
Posted on Reply
#36
Beastie
There doesn't seem to be a problem with the 980 Ti's "odd" 6 GB.
Posted on Reply
#37
Captain_Tom
evernessinceWaiting for Vega and HBM2. We've already seen the power savings of HBM1, AMD's top end cards should have excellent watt / performance that should compare to Nvidia.
Yeah, that 4 GB was just a bit too little. I fully expect VEGA with HBM II to be very competitive so long as AMD doesn't go crazy on the price. They also need to work out overclocking, as both the original FURY lineup and Polaris can't overclock well at all.
Please stop perpetuating these myths.

1) Pascal is a horrid overclocker compared to GCN 1.0, Maxwell, and even Kepler. Polaris isn't great, but it isn't any worse than Pascal (Especially now that Polaris is fully supported in TRIXX overclocking).

2) Fury was never bad. It launched with near-zero overclocking support, which looked bad next to the 980 Ti. However, now it is fully unlocked, and 10-20% performance increases are quite common.
Posted on Reply
#38
xorbe
BeastieThere doesn't seem to be a problem with the 980ti's "odd" 6GB.
That's because it's not an odd size for a 384-bit bus ...
Posted on Reply
#39
$ReaPeR$
Prima.Verathat's overkill...
Neh. It will be slower than Titan X, and Titan is not that fast compared to the 1080.
well.. i remember the time when 1 gig was considered "overkill". for that price, even if it was made out of pure gold it wouldn't be overkill.
Posted on Reply
#40
Captain_Tom
$ReaPeR$well.. i remember the time when 1gig was considered "overkill". for that price, even if it was made out of pure gold it wouldnt be an overkill.
The 1GB thing was always hilarious to me. I remember when people said "Doesn't matter if the 6950 has 2GB. The 570 doesn't need more than 1.25GB". Then by the last BF3 expansion the chat log was filled with "Why is my game stuttering?!"

It's stuttering because your card has half as much RAM as it should. Oh you paid more for it too? Why?
Posted on Reply
#41
Prima.Vera
Captain_TomThe 1GB thing was always hilarious to me. I remember when people said "Doesn't matter if the 6950 has 2GB. The 570 doesn't need more than 1.25GB". Then by the last BF3 expansion the chat log was filled with "Why is my game stuttering?!"

It's stuttering because your card has half as much RAM as it should. Oh you paid more for it too? Why?
I can play ANY game with my 780 Ti (3GB) in 1080p with 0 (zero) stuttering. Games usually cache a lot of VRAM, but don't necessarily need that much to run properly.
Posted on Reply
#42
Captain_Tom
Prima.VeraI can play ANY game with my 780 Ti (3GB) in 1080p with 0 (zero) stuttering. Games usually cache a lot of VRAM, but don't necessarily need that much to run properly.
LMAO ok buddy. This has been proven wrong in a lot of the latest games, even at 1080p.

3GB was for enthusiasts in bloody 2011. By 2013 4GB was the minimum and still barely is.
Posted on Reply
#43
Prima.Vera
Captain_TomLMAO ok buddy. This has been proved wrong in a lot of the latest games in even 1080p.

3GB was for enthusiasts in bloody 2011. By 2013 4GB was the minimum and still barely is.
What about 2017?
Posted on Reply
#44
efikkan
Even at 10 GB (320-bit memory bus), GTX 1080 Ti will run into other bottlenecks, like computational power, long before memory restrictions.
Posted on Reply
#45
TheHunter
Captain_TomLMAO ok buddy. This has been proved wrong in a lot of the latest games in even 1080p.

3GB was for enthusiasts in bloody 2011. By 2013 4GB was the minimum and still barely is.
He is right. I had an OC'ed GTX 780 up until Feb 2016 and it ran everything just fine.

RoTR ran np with ultra textures, even though it can fill up to 6120 MB; COD Ghosts went from 3040 MB to up to 6000 MB; Shadow of Mordor also ran OK at ultra; Assassin's Creed, etc..

But I do remember BF3 with a GTX 570: although I could run it fine at 1080p ultra with 2xMSAA, BF4 on the other hand was a stutter fest, and I had to drop textures from ultra to high.
Posted on Reply