Tuesday, February 28th 2017
NVIDIA Announces the GeForce GTX 1080 Ti Graphics Card at $699
NVIDIA today unveiled the GeForce GTX 1080 Ti graphics card, its fastest consumer graphics card based on the "Pascal" GPU architecture, positioned to be more affordable than the flagship TITAN X Pascal at $699, with market availability from the first week of March 2017. Based on the same "GP102" silicon as the TITAN X Pascal, the GTX 1080 Ti is slightly cut down. While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is lower, at 11 GB, over a slightly narrower 352-bit wide GDDR5X memory interface. This translates to 11 memory chips on the card. On the bright side, NVIDIA is using newer memory chips than the ones it deployed on the TITAN X Pascal, which run at 11 Gbps (GDDR5X-effective), so the memory bandwidth works out to 484 GB/s.
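The quoted 484 GB/s figure follows directly from the bus width and per-pin data rate, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the 484 GB/s memory bandwidth figure.
# 352-bit bus = 11 chips x 32 bits; 11 Gbps is the effective
# per-pin data rate of the new GDDR5X chips.
bus_width_bits = 352
data_rate_gbps = 11
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(bandwidth_gbs)  # 484.0
```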
Besides the narrower 352-bit memory bus, the ROP count is lowered to 88 (from 96 on the TITAN X Pascal), while the TMU count is unchanged at 224. The GPU core is clocked at a boost frequency of up to 1.60 GHz, with the ability to overclock beyond the 2.00 GHz mark. It gets better: the GTX 1080 Ti features memory advancements not found on other "Pascal" based graphics cards: newer memory chips and an optimized memory interface running at 11 Gbps.

NVIDIA's Tiled Rendering technology has also finally been announced publicly. A feature NVIDIA has kept under wraps since the GeForce "Maxwell" architecture, it is one of the secret sauces behind NVIDIA's performance lead. Tiled Rendering brings about huge improvements in memory bandwidth utilization by optimizing the render process to work on square-sized chunks of the screen, instead of drawing whole polygons at once. Thus, the geometry and textures of a processed object stay on-chip (in the L2 cache), which reduces cache misses and memory bandwidth requirements. Together with its lossless memory compression tech, NVIDIA expects Tiled Rendering and its companion storage tech, Tiled Caching, to more than double, or even nearly triple, the effective memory bandwidth of the GTX 1080 Ti over its physical bandwidth of 484 GB/s.

NVIDIA is making sure it doesn't run into the thermal and electrical issues of previous-generation reference-design high-end graphics cards by deploying a new 7-phase dual-FET VRM that reduces the load (and thereby temperature) per MOSFET. The underlying cooling solution is also improved, with a new vapor-chamber plate and a denser aluminium channel matrix. Watt-for-watt, NVIDIA claims the GTX 1080 Ti will hence be up to 2.5 dBA quieter than the GTX 1080, or up to 5°C cooler. The card draws power from a combination of 8-pin and 6-pin PCIe power connectors, with the GPU's TDP rated at 250 W.
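The tile-based approach described above can be sketched in a few lines. This is a hypothetical, purely illustrative binning pass, not NVIDIA's actual (undisclosed) implementation: the screen is split into fixed-size tiles, each triangle is assigned to the tiles its bounding box overlaps, and rasterization then proceeds tile by tile so a tile's pixels and the geometry touching them stay cache-resident.

```python
# Minimal sketch of tile binning (illustrative only; real hardware
# tile sizes and the binning algorithm are not public).
TILE = 16  # assumed tile edge in pixels

def bin_triangles(triangles, width, height):
    """Map each screen-space triangle to the tiles its bounding box overlaps."""
    bins = {}  # (tile_x, tile_y) -> list of triangle indices
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins

# A triangle spanning two horizontally adjacent 16x16 tiles:
tris = [[(2, 2), (30, 2), (2, 12)]]
print(bin_triangles(tris, 64, 64))  # {(0, 0): [0], (1, 0): [0]}
```

Rendering each bin's triangle list to completion before moving to the next tile is what keeps the working set small enough to fit in the L2 cache.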
The GeForce GTX 1080 Ti is designed to be anywhere between 20-45% faster than the GTX 1080 (35% on average). It is widely expected to be faster than the TITAN X Pascal out of the box, despite its narrower memory bus and fewer ROPs; the higher boost clocks and 11 Gbps memory make up for the deficit. What's more, the GTX 1080 Ti will be available in custom-design boards at factory-overclocked speeds, so it will end up being the fastest consumer graphics option until there's competition.
160 Comments on NVIDIA Announces the GeForce GTX 1080 Ti Graphics Card at $699
It was disabling the L2 that caused the issue. Each memory controller and its ROPs are linked to a block of L2. When they disabled a block of L2 in the GTX 970, that block's memory controller and ROPs had to be jumpered over to another block of L2.
The ROPs in the jumpered section were technically still active. However, nVidia designed their driver to not use them, because using them would have actually resulted in slower performance.
In the case of the GTX 1080 Ti, they likely also lowered the amount of L2. We won't know for sure, because L2 is not an advertised spec. And you are probably right: in this case, they also just went ahead and disabled the memory controller and its associated ROPs to avoid any kind of fiasco.

The only way I see his statement making sense is if he was using a DP -> DVI adapter. I've had some of those really suck.
But he's really complaining about nothing for two reasons:
1.) This is just the reference output design. AIBs can change it however they want, and I'm sure some will add a DVI port.
2.) It has an HDMI port. Since DVI and HDMI use the exact same signal, he can just pick up a cheap HDMI -> DVI adapter or cable. Just like so many great GPUs before it.

It doesn't matter which axis is which. The graph would read the same. However, the point nVidia was making was that the 1080 Ti cooler gives lower temperatures than the 1080 cooler. So most people expect the 1080 Ti line to be lower on the graph than the 1080, not shifted a little to the left. Visually, if your point is that something is lower than something else, you orient your graph axes so that it is visually lower on the graph.
And they did put, in clear as day letters, that they were testing both at 220w.
On the cooler subject, I'd like to point out that there are 2 things that favor the 1080 Ti's cooling over the 1080's:
1. No dvi port - increased flow, lower turbulence.
2. Bigger die area - higher heat transfer at same power.
I bet those two make up for most of this 5°C difference at the same power, and nvidia changed nothing or almost nothing.
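Point 2 can be roughly quantified with public die sizes (GP102 at ~471 mm², GP104 at ~314 mm²; both cards assumed to be dissipating the same 220 W the slide compared, which is a simplification):

```python
# Rough power-density comparison at equal board power (sketch;
# die sizes taken from NVIDIA's published GP102/GP104 specs).
power_w = 220
gp102_mm2 = 471  # GTX 1080 Ti die
gp104_mm2 = 314  # GTX 1080 die

print(round(power_w / gp102_mm2, 3))  # ~0.467 W/mm^2
print(round(power_w / gp104_mm2, 3))  # ~0.701 W/mm^2
```

At the same power, the larger die spreads the heat over roughly 50% more silicon area, which by itself eases the job of the vapor chamber.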
Interesting $499?
Edit: you meant the 1080...in a 1080ti thread.... and didn't say it. LOL!
Vega should have no trouble defeating this if they want it to.
Nvidia are doomed.
there is not much you can do to a blower type cooler beyond what nvidia currently has on 1080/1080ti/titanxp. at least not in reasonable price range.
Stop being delusional.
Also, it looks amazing, when are the actual reviews out?
They have done that every generation lol.
however, amd's previous flagship was fury x. 50% on top of fury x would put performance at only slightly faster than gtx1080.
depending on where exactly they want vega to be it might not be enough.
In all seriousness, AMD should be able to do way more than +50% over Fury X, which came with a gimped HBM infrastructure. My bet though, top Vega won't beat 1080Ti, but come to 10% neighborhood.
now that i looked closer at that temp/noise graph though, 1080@220w and 1080ti@220w is misleading as hell. the 1080's reference tdp is 180w, while the 1080 ti's is 250w. even with a better cooler the actual end result will be worse
This is what many are ignoring, also the reason why the graph didn't make sense to me at first, not to mention I couldn't recall 1080's TDP immediately.
Interested in how this thing will perform and how this cut up card handles things in the memory department. For the price, I may trade up my Titan XP for a pair of these instead of grabbing a second Titan XP (Also depends on how it overclocks).