# NVIDIA Announces the GeForce GTX 1080 Ti Graphics Card at $699



## btarunr (Mar 1, 2017)

NVIDIA today unveiled the GeForce GTX 1080 Ti graphics card, its fastest consumer graphics card based on the "Pascal" GPU architecture, positioned to be more affordable than the flagship TITAN X Pascal at USD $699, with market availability from the first week of March 2017. Based on the same "GP102" silicon as the TITAN X Pascal, the GTX 1080 Ti is slightly cut down. While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is lower, at 11 GB, over a slightly narrower 352-bit wide GDDR5X memory interface. This translates to 11 memory chips on the card. On the bright side, NVIDIA is using newer memory chips than the ones it deployed on the TITAN X Pascal, which run at 11 Gbps (GDDR5X-effective), so memory bandwidth works out to 484 GB/s.
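The bandwidth figure follows directly from the bus width and data rate; a quick arithmetic sanity check (numbers taken from this article, not from an official spec sheet):

```python
# Sanity-check the GTX 1080 Ti memory figures quoted above.
# GDDR5X chips have a 32-bit interface each; figures are from the article.
chips = 11
bus_width_bits = chips * 32                           # 11 x 32 = 352-bit bus
data_rate_gbps = 11                                   # 11 Gbps effective per pin
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes

print(bus_width_bits)  # 352
print(bandwidth_gb_s)  # 484.0
```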

Besides the narrower 352-bit memory bus, the ROP count is lowered to 88 (from 96 on the TITAN X Pascal), while the TMU count is unchanged at 224. The GPU core is clocked at a boost frequency of up to 1.60 GHz, with the ability to overclock beyond the 2.00 GHz mark. It gets better: the GTX 1080 Ti features memory advancements not found on other "Pascal" based graphics cards, namely newer memory chips and an optimized memory interface running at 11 Gbps. NVIDIA has also finally announced its Tiled Rendering technology publicly; a feature NVIDIA has kept quiet since the GeForce "Maxwell" architecture, it is one of the secret sauces behind NVIDIA's performance lead.



The Tiled Rendering technology brings about huge improvements in memory bandwidth utilization by optimizing the render process to work in square-sized chunks (tiles), instead of drawing whole polygons at once. Thus, the geometry and textures of a processed object stay on-chip (in the L2 cache), which reduces cache misses and memory bandwidth requirements.
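As a rough illustration only (NVIDIA's actual tiler is undisclosed and certainly far more sophisticated), binning geometry into screen-space tiles can be sketched like this; the `TILE` size and the triangle format are invented for the example:

```python
# Toy sketch of tile binning: assign each triangle's bounding box to the
# screen tiles it overlaps, so each tile's geometry can stay on-chip while
# that tile is shaded. TILE and the data format are invented for this demo.
TILE = 16  # hypothetical tile size in pixels

def bin_triangles(triangles, width, height):
    """triangles: list of ((x0, y0), (x1, y1), (x2, y2)) in pixel coords."""
    tiles = {}
    for tri in triangles:
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Clamp the bounding box to the screen, then find covered tiles.
        tx0, tx1 = max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE
        ty0, ty1 = max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                tiles.setdefault((tx, ty), []).append(tri)
    return tiles

# One triangle spanning a 2x2 block of 16-px tiles on a 64x64 screen.
bins = bin_triangles([((0, 0), (20, 0), (0, 20))], 64, 64)
print(sorted(bins))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```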


Together with its lossless memory compression tech, NVIDIA expects Tiled Rendering and its caching scheme, Tiled Caching, to more than double, or even nearly triple, the effective memory bandwidth of the GTX 1080 Ti over its physical bandwidth of 484 GB/s.


NVIDIA is making sure it doesn't run into the thermal and electrical issues of previous-generation reference design high-end graphics cards, by deploying a new 7-phase dual-FET VRM that reduces loads (and thereby temperatures) per MOSFET. The underlying cooling solution is also improved, with a new vapor-chamber plate, and a denser aluminium channel matrix. 


Watt for watt, the GTX 1080 Ti will hence be up to 2.5 dBA quieter than the GTX 1080, or up to 5°C cooler. The card draws power from a combination of one 8-pin and one 6-pin PCIe power connector, with the GPU's TDP rated at 220 W. The GeForce GTX 1080 Ti is designed to be anywhere between 20-45% faster than the GTX 1080 (35% on average).


The GeForce GTX 1080 Ti is widely expected to be faster than the TITAN X Pascal out of the box, despite its narrower memory bus and fewer ROPs; the higher boost clocks and 11 Gbps memory make up for the deficit. What's more, the GTX 1080 Ti will be available in custom-design boards at factory-overclocked speeds, so it will end up being the fastest consumer graphics option until there's competition.



*View at TechPowerUp Main Site*


----------



## The Von Matrices (Mar 1, 2017)

The performance doesn't mean much without a price.


----------



## KarymidoN (Mar 1, 2017)

price?


----------



## raptori (Mar 1, 2017)

$699


----------



## ZoneDymo (Mar 1, 2017)

Now we just need Vega to come out and Ill finally have the information for an upgrade


----------



## mrthanhnguyen (Mar 1, 2017)

$699 is a very good price, but whoever paid $1k for the Titan X must not be happy.


----------



## gigantor21 (Mar 1, 2017)

raptori said:


> $699



Cheaper than I expected, which is a nice change of pace for an NVIDIA card. That's only $50 more than the 980 Ti was at launch.

Still WAY out of my price range, but I'm glad it wasn't $800+, LOL.


----------



## NTM2003 (Mar 1, 2017)

Where's the pre order on Amazon lol take my money


----------



## MrGenius (Mar 1, 2017)

Why does it look like it runs quieter the hotter it gets? Can that be right? Am I hallucinating?


----------



## OneCool (Mar 1, 2017)

Is it just me, or is there some weird stuff going on here? The odd memory count really stands out, for one.... Seems like Nvidia is rushing it out to make as much as possible... No bias, it just seems wrong..


----------



## Kalevalen (Mar 1, 2017)

MrGenius said:


> Why does it look like it runs quieter the hotter it gets? Can that be right? Am I hallucinating?



your reading the graph backwards then


----------



## ZoneDymo (Mar 1, 2017)

MrGenius said:


> Why does it look like it runs quieter the hotter it gets? Can that be right? Am I hallucinating?



It's about the cooler: it's hot because the fan is barely spinning, and therefore it's quieter.
The fan spins up to cool it down, so noise increases.


----------



## champsilva (Mar 1, 2017)

Kalevalen said:


> your reading the graph backwards then



Yeah, i tried to understand what he saying haha


----------



## yoyo2004 (Mar 1, 2017)

MrGenius said:


> Why does it look like it runs quieter the hotter it gets? Can that be right? Am I hallucinating?



I think it simply means at a given fixed fan noise (Speed) = it runs 5C cooler.


----------



## SpartanM07 (Mar 1, 2017)

mrthanhnguyen said:


> $699 is a very good price, but who paid $1k for the Titan X must not be happy.



$1278 actually with tax and I'm very happy... Sell the Titan X for more than or equal to what I paid on eBay (checkout current selling prices, although it will most likely sell for less meow), buy new 1080 ti, pocket the extra, and all that after I've maxed every game for the past 7 months.


----------



## ShurikN (Mar 1, 2017)

"While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is now lower, at 11 GB, over a slightly narrower 352-bit with GDDR5X memory interface. This translates to 11 memory chips on the card."
Nice hack job


----------



## swirl09 (Mar 1, 2017)

Actually had fingers crossed for $799, so Im happy


----------



## arnoo1 (Mar 1, 2017)

Does this mean the GTX 1070 will get a price drop, so I can finally buy a new GPU??

A few years back, the x70 card was around 370-420 at launch; now the GTX 1070 is 480+ euros, which is way too expensive


----------



## Blacksm1le (Mar 1, 2017)

Even if it's a novelty, this technology is already outdated: it doesn't even have HBM2 memory, which is very useful for gaming at 4K resolution. On the AMD side, the RX 500 series will have it, with a release date in May. The famous Pascal architecture is a real fiasco across the whole line. This is just an overclocked graphics card at a rip-off price


----------



## Prima.Vera (Mar 1, 2017)

swirl09 said:


> Actually had fingers crossed for $799, so Im happy


You will get it at approx. $850-900, based on the experience with the 1080


----------



## Melvis (Mar 1, 2017)

lol $699? yeah right, I bet it wont be, still be over a grand here easy!


----------



## MrGenius (Mar 1, 2017)

So...I'm reading the graph backwards? Ok. Then let me draw some straight lines and see where I went wrong.





*32.5 dBA @ 88°C
35.5 dBA @ 82°C*

Well looky there. The hotter it runs the quieter the cooler is. So the fan slows down and makes less noise as the card heats up. And the fan speeds up and makes more noise as the card cools down. Yep...makes perfect sense!


----------



## OneCool (Mar 1, 2017)

11gb of VRAM screams something isn't right...W1z...Come on back my old ass up here....  Core math doesn't hold up on this lol...2+2 isn't almost 4


----------



## cdawall (Mar 1, 2017)

What dicks. It's like they sat and had a good laugh when they did the ram. 

"hey guys how do we charge more at a higher margin? 

Let's just pull a ram chip  off they'll still buy it. "


----------



## evernessince (Mar 1, 2017)

MrGenius said:


> Why does it look like it runs quieter the hotter it gets? Can that be right? Am I hallucinating?



I think someone at Nvidia messed up...

The X axis is Noise level increasing from left to right.
The Y axis is Temp increasing from bottom to top

For some reason they have the card starting at around 88c with only 32.5 dBA and the noise increasing as temps go up.


----------



## Kyuuba (Mar 1, 2017)

OneCool said:


> 11gb of VRAM screams something isn't right...W1z...Come on back my old ass up here....  Core math doesn't hold up on this lol...2+2 isn't almost 4


Probably to justify that at least it has less memory than the Titan XP?
I found this interesting on these three cards:
1080         8GB           256-bit Bus width - very common.
1080ti       11GB         352-bit Bus width - this is uncommon on graphics cards.
Titan XP    12GB         384-bit Bus width - common on high end gpus like the 780ti and 980ti.


----------



## evernessince (Mar 1, 2017)

SpartanM07 said:


> $1278 actually with tax and I'm very happy... Sell the Titan X for more than or equal to what I paid on eBay (checkout current selling prices, although it will most likely sell for less meow), buy new 1080 ti, pocket the extra, and all that after I've maxed every game for the past 7 months.



No enthusiast who is looking at a Titan X is going to pay anywhere near what you paid for it now.  These are people who follow the news because they want the latest and greatest.  Why would they buy a Titan X when they can wait and get a higher performing aftermarket 1080 Ti?

If you look at sold eBay listings, a few Titan X video cards sold for 1k today, which is a pretty big drop.  And that's assuming none of those buyers return the cards for a refund.  If you wanted to sell your Titan X, you should have sold it yesterday to get full value back.


----------



## theGryphon (Mar 1, 2017)

MrGenius said:


> So...I'm reading the graph backwards? Ok. Then let me draw some straight lines and see where I went wrong.
> View attachment 84654
> 
> 
> ...




You're getting the causal direction wrong, which is baffling considering the overall high sophistication in this forum.

In terms of correlation, yes, temperature and dBA are (and have always been, since the Big Bang established the laws of physics and thermodynamics) negatively correlated.

Causal relation here though is NOT that the fan slows down as the heat goes up, it IS that as you slow down the fan the heat goes up.

That's why you were told you're reading the graph backwards...

EDIT: The more direct way to read the graph is that as you increase the fan speed (and hence the dBA) the card runs cooler.


----------



## theGryphon (Mar 1, 2017)

evernessince said:


> I think someone at Nvidia messed up...
> 
> The X axis is Noise level increasing from left to right.
> The Y axis is Temp increasing from bottom to top
> ...



No, they did not mess anything up.
Along with my post above, recall that the X axis is typically the control parameter (here, the fan speed) and the Y axis is the response parameter (here, the temperature realized in response to the fan speed the user decides on).

So, you guys need to learn how to read a graph and reconcile it with the laws of physics and thermodynamics, of which I believe everyone here at least has an intuition


----------



## kruk (Mar 1, 2017)

No founders edition tax? Only $699? 11 GB of RAM? Does anybody else feel AMD tricked them into releasing this card so early with their Vega "reveal"? They could keep charging that much for the 1080 and get more profits. Also, people who just bought the 1080 FE or custom models must be really mad


----------



## RejZoR (Mar 1, 2017)

Camm said:


> Somewhat inflammatory. As much as I wish AMD would just get Vega the fuck out of the door, its still readily apparent that as Pascal clocks higher, its efficiency per mhz drops with what looks like cache idiling. So a 50% claim is highly overrated.
> 
> Oh well, where are the reviews?



The thing is, if you compare the RX 480 and GTX 1060, the latter needs what, an extra 400 MHz to match a 1400 MHz RX 480? Meaning we're dealing with two quite different architectures. NVIDIA's isn't as efficient in terms of raw power per clock and needs to compensate with really high GPU clocks. It has actually always been like this lately. Even with the GTX 900 series: R9 Fury X was what, 1050 MHz? My GTX 980 runs at 1400 MHz and it's about matching the R9 Fury X in performance. Sometimes. You can't just say uh oh, RX Vega will suck because AMD can't clock it that high. That's kinda irrelevant.

@theGryphon 
What NVIDIA is basically saying is the following...

With GTX 1080, it was required to run cooler fan at speeds that create 32.5dB of noise to achieve 94°C and 35.5dB to achieve 87°C.

With GTX 1080Ti, they've achieved same noise levels, but at lower temperatures of 88°C and what's that, 82°C ?

So, technically, they won't make it quieter out of the box, but it can be quieter because they created this temperature gap. You could theoretically lower the fan speed to achieve old temperatures and gain in quieter operation.


----------



## evernessince (Mar 1, 2017)

theGryphon said:


> No, they did no mess anything up.
> Along with my post above, recall that the X axis is typically the control parameter (which here is the fan speed) and the Y axis is the response parameter (which here is the temperature that is realized as response to the fan speed the user decides on).
> 
> So, you guys need to learn how to read a graph and reconcile it with the laws of physics and thermodynamics, which I believe everyone here at least has an intuition of



No, X axis is clearly labeled dBA.  If they were going to include 3 variables they should have just done a 3d graph.  How you assume people would draw the conclusion that there is a 3rd unmentioned variable without prior context is the baffling part.  This graph was obviously never meant to be taken without the fan context you have provided.


----------



## EarthDog (Mar 1, 2017)

OneCool said:


> 11gb of VRAM screams something isn't right...W1z...Come on back my old ass up here....  Core math doesn't hold up on this lol...2+2 isn't almost 4


That's why there is the odd bus width and back end. 



kruk said:


> No founders edition tax? Only $699? 11 GB of RAM? Does anybody else feed AMD tricked them into releasing this card so early with their Vega "reveal"? They could keep charging that much for the 1080 and get more profits. Also, people who just bought the 1080 FE or custom models must be really mad


Perhaps they want to get it out before Vega for sales without competition. I have to imagine Vega will be as fast as a Titan XP/1080ti


----------



## qubit (Mar 1, 2017)

A slightly crippled GPU and a weird 11 GB of RAM on their top GTX? Now that's just fugly.  I'll wait for the reviews and Vega before buying, but this puts me off the card and I might just stick with a 1080. That thing was plenty fast anyway.

It wouldn't surprise me if NVIDIA release something like a 2080 Ti with the full GPU and 12GB RAM + higher clocks when Vega comes out, for significantly better performance. They might then be able to hike the price, too...


----------



## Aenra (Mar 1, 2017)

kruk said:


> Also, people who just bought the 1080 FE or custom models must be really mad



Naah...
For starters, we knew there'd be a Ti version since like forever. Also, this was originally meant to happen in January or thereabouts, if you recall.. So the people buying a 1080 in the last few months (myself included) knew everything they needed to know. Either they could not wait, or they had decided in advance that the money required was outside their budget/sense of reason.
(don't pay attention to the price tags mentioned here.. don't even pay attention to the price tags + VAT. You need to add customs [a world exists outside the US] and you need to add the extra bucks charged for the 'improved' models that will be coming out*. For millions of people, this card translates to about 1.5k)

* in case you're too far gone into the techie side of the force, most folks don't give a rat's behind about founder editions. They will go buy the EVGA/Gigabyte/Asus model that will be 20-25% faster and signatures be damned


----------



## Prima.Vera (Mar 1, 2017)

To be honest, even 8 GB of VRAM would have sufficed. I think they used 11 not because of game requirements (overkill), but in order to widen the bus to 11x32 = 352 bits, instead of the 1080's 8x32 = 256.


----------



## Patriot (Mar 1, 2017)

ShurikN said:


> "While it features the same 3,584 CUDA cores as the TITAN X Pascal, the memory amount is now lower, at 11 GB, over a slightly narrower 352-bit with GDDR5X memory interface. This translates to 11 memory chips on the card."
> Nice hack job




Well... I guess they didn't want to get sued again...  They still haven't paid out the settlement cost they agreed upon.


----------



## theGryphon (Mar 1, 2017)

evernessince said:


> No, X axis is clearly labeled dBA.  If they were going to include 3 variables they should have just done a 3d graph.  How you assume people would draw the conclusion that there is a 3rd unmentioned variable without prior context is the baffling part.  This graph was obviously never meant to be taken without the fan context you have provided.



Yes, dBA, which is very clearly a proxy for fan speed. I mean, given the context, it's just too obvious, don't you think? So why would they need a 3D graph? They assumed, as they should, that whoever reads this graph would automatically understand that dBA stands for the sound levels coming out of the card... 

I mean, seriously, I won't waste my time on this anymore.


----------



## ZoneDymo (Mar 1, 2017)

fynxer said:


> VEGA is fucked!!! 1080Ti with OC is over 50% faster than stock 1080, VEGA is blown out of the water before it even hits the market. All this at $699, just saying, good luck AMD.
> 
> AMD waited to long with VEGA and will now pay the ultimate price.



man, rarely seen so much fanboy in one post...


----------



## evernessince (Mar 1, 2017)

theGryphon said:


> Yes, dBA, which is very clearly a proxy parameter for fan speed. I mean, given the context, it is just too obvious don't you think? So, why would they need a 3rd graph? They assumed, which they should, whoever reads this graph would understand automatically that dBA stands for the sound levels coming out of the card...
> 
> I mean, seriously, I won't waste my time on this anymore.



I'm sorry, I just didn't make the assumption that RPMs are the exact same between the two cards.  If you read the graph with the assumption that all fan variables are the same then it works fine.


----------



## chr0nos (Mar 1, 2017)

maybe another gtx970 memory fiasco


----------



## johnspack (Mar 1, 2017)

Wow,  only 1000can?  I'll take 2!


----------



## RejZoR (Mar 1, 2017)

evernessince said:


> I'm sorry, I just didn't make the assumption that RPMs are the exact same between the two cards.  If you read the graph with the assumption that all fan variables are the same then it works fine.



If you're hitting same noise levels, the chances are, fan speed is identical. Don't you think? In terms of noise, this just means you can lower the fan speed, achieving old temperatures, but lower noise. You can't have both at once unless you pick a temperature and fan noise half way through both axis... In that case, you'd make it tiny bit quieter and tiny bit cooler than the old GTX 1080.


----------



## petedread (Mar 1, 2017)

I would have been excited about this card had I not bought a 980ti classified on release. The card has put me off Nvidia. Having to lower game settings 3 or 4 times now since I bought it. Fallout 4 was fantastic at 4k to start with, admittedly I could not play at max settings to start with but now I have so many settings turned down or off. I know some people are talking about Nvidia hampering performance on cards, but I am not interested in anybody else's experience. My own experience with Fallout 4 and Dying Light has shown me. I will buy the top Vaga card regardless of how it performs compared to Nvidia's offerings. Instead of performance that goes down, I will have a card with performance that goes up over time lol.


----------



## ratirt (Mar 1, 2017)

Strange this card may be( yeah I watched star wars yesterday  ) I'm waiting for benchmarks but still how they designed the card is weird indeed. Somebody mentioned that NV releases TI version now cause they want to get some shiny penny outta it? I think that's quite right. I think NV was pretty in a hurry to release it. Let's wait and see what this card can do


----------



## the54thvoid (Mar 1, 2017)

petedread said:


> I would have been excited about this card had I not bought a 980ti classified on release. The card has put me off Nvidia. Having to lower game settings 3 or 4 times now since I bought it. Fallout 4 was fantastic at 4k to start with, admittedly I could not play at max settings to start with but now I have so many settings turned down or off. I know some people are talking about Nvidia hampering performance on cards, but I am not interested in anybody else's experience. My own experience with Fallout 4 and Dying Light has shown me. I will buy the top Vaga card regardless of how it performs compared to Nvidia's offerings. Instead of performance that goes down, I will have a card with performance that goes up over time lol.



You break your PC? My 980ti hasn't crippled itself downwards...


----------



## petedread (Mar 1, 2017)

fynxer said:


> VEGA is fucked!!! 1080Ti with OC is over 50% faster than stock 1080, VEGA is blown out of the water before it even hits the market. All this at $699, just saying, good luck AMD.
> 
> AMD waited to long with VEGA and will now pay the ultimate price.


It does not matter how Vaga performs compared to Nvidia cards. I just want a card that does what I want it to do. I can not wait to replace my 980ti.


----------



## R0H1T (Mar 1, 2017)

chr0nos said:


> maybe another gtx970 memory fiasco


Watch this space for more 


johnspack said:


> Wow,  only 1000can?  I'll take 2!


That'll be 2k plus *taxes*, if any


----------



## TheLostSwede (Mar 1, 2017)

Yay! No DVI port


----------



## petedread (Mar 1, 2017)

the54thvoid said:


> You break your PC? My 980ti hasn't crippled itself downwards...


LoL, Like I said, my personal experience has put me off.


----------



## MrGenius (Mar 1, 2017)

Oh for crying out loud. How hard can this be to understand people? They made a mistake. Plain and simple.

Here. I fix.






How difficult was that?


----------



## nguyen (Mar 1, 2017)

MrGenius said:


> Oh for crying out loud. How hard can this be to understand people? They made a mistake. Plain and simple.
> 
> Here. I fix.
> 
> ...



I think your mom made a mistake lol. Their graph is completely fine; the dBA corresponds to the fan duty cycle, which means that at constant graphical load (220W as in the graph), increasing the fan duty cycle reduces the GPU temperature.


----------



## MrGenius (Mar 1, 2017)

Yeah...I know. Fans get louder as their speed increases. Higher dBA = louder. Common knowledge. Therefore the graph is completely and totally wrong.


----------



## ZoneDymo (Mar 1, 2017)

nguyen said:


> I think your mom made a mistake lol. Their graph is completely fine, the dBA correspond to the fan duty cycle, that mean at constant graphical load (220W as in the graph), increasing fan duty cycle reduces GPU temperature.



Makes that "MrGenius" username kinda ironic now doesn't it 



MrGenius said:


> Yeah...I know. Fans get louder as their speed increases. Higher dBA = louder. Common knowledge. Therefore the graph is completely and totally wrong.



You do realize that in your "improved" graph the fans spin down, aka make less noise, while also somehow reducing the temperature of the card more than when they were spinning faster, right?


----------



## erixx (Mar 1, 2017)

Just came by to add 2 lines of information not included in the TPU news, forgive me if I am wrong:

- No DVI port: a nice clean backpanel 

- Founders Edition available next week 


BTW: "Founders edition" sounds soooo Karl May to me... (maybe only Germans will understand... or let's say "The Last of the Mohicans")


----------



## R0H1T (Mar 1, 2017)

ZoneDymo said:


> You do realize that in your "improved" graph the fans spin down, aka *make less noise while also reducing the temperature of the card* somehow more then when there were spinning faster right?


Not necessarily, if the default *fan curve* is set aggressively the fan speed ramps up quickly depending on the gpu load & temperature. If it's (relatively) conservative then you'll see a flat lining wrt noise & how the fan speed isn't lowered as quickly with the gpu load/temp drop. He's right btw IMO, someone goofed up the original graph.


----------



## wurschti (Mar 1, 2017)

That bus and core count seem not to match. Like everything else. Also: a 352-bit bus and 11 GB of memory. I think this will be a 970-like GPU, unable to use some of its potential.


----------



## Crap Daddy (Mar 1, 2017)

erixx said:


> Just came by to add 2 lines of information not included in the TPU news, forgiveme if I am wrong:
> 
> - No DVI port: a nice clean backpanel
> 
> ...



It's rather Old Shatterhand.


----------



## erixx (Mar 1, 2017)

Crap Daddy said:


> It's rather Old Shatterhand.



Exactly!


----------



## theGryphon (Mar 1, 2017)

R0H1T said:


> He's right btw IMO, someone goofed up the original graph.




You guys are embarrassing yourselves. Maybe today, maybe tomorrow, and I hope one day, you'll look at what you've written and slap yourselves while looking into the mirror.

I don't have anything else to say, other than 'wow'...


----------



## londiste (Mar 1, 2017)

RejZoR said:


> The thing is, if you compare RX480 and GTX 1060, the later needs like what, extra 400MHz to match a 1400MHz RX480? Meaning we're dealing with two quite different architectures. NVIDIA's isn't as efficient in terms of raw power per clock and needs to compensate that with really high GPU clocks. It has actually always been like this lately. Even with GTX 900 series, R9 Fury X was what, 1050MHz ? My GTX 980 is running at 1400MHz and it's about matching R9 Fury X in performance. Sometimes. You can't just say uh oh, RX Vega will suck because AMD can't clock it that high. That's kinda irrelevant.


gtx1060 is ~15% smaller with ~25% less transistors. also, at pascal launch nvidia was talking about having to specifically alter the architecture to make it clock higher, so that clock speed is not just coming out of nowhere. they even admitted that pascal might have lost a bit of ipc in the process compared to maxwell.

and the clock speed difference is even more than 400mhz. rx480 runs at 1266mhz boost and gtx1060 tends to stay above 1800mhz for all but excessive load.

furyx-s clocked to around 1.15-1.2ghz. your gtx980 at 1.4ghz is already pretty good, i think on average they went to 1.4-1.45ghz. i am surprised it'd match furyx at that clock though, fury should still beat it in most cases. fiji is simply much larger.


----------



## RejZoR (Mar 1, 2017)

On Fury X launch, they were about neck and neck. Later, as drivers matured, the R9 Fury X pulled ahead. And even more so in D3D12 and Vulkan, where it just obliterates it. It's why I regret a tiny bit having gone with the GTX 980. I mean, it's not a bad card, but the Fury X is just better, despite certain other limitations.


----------



## Aenra (Mar 1, 2017)

(off topic, but..)

Why the joy at no DVI?
I still use the DVI connection in mine and never mind DP, mini or proper.
Knowing full well in advance how each and every one of you here is going to come and correct me, lol, I can tell you, for visual signal only:

If my DVI gives me 100%, the DP connection gives me about 75-80%. A bit fuzzier, a bit less crisp, the whites not as intense as they were with DVI; like someone took the contrast down a notch or two. No, I didn't forget to set the monitor right (choose PC rather than 'TV'); no, I didn't forget to set the frequency to 144; yes, the comparison was at default settings, no calibrating, before or after.
The DVI-D is a piss-poor cheap cable that came bundled with my monitor (the one in my specs). The DP cable cost a fortune; we're talking a big brand used in high-end audiovisual equipment. Ended up giving it away.


----------



## ArdWar (Mar 1, 2017)

Seriously guys, please learn to read a graph properly.

As said before, generally X-axis is the control parameter (the source, input) and Y-axis is the response parameter (the result, output). Changing parameter in X results in a change in Y parameter.

In this case the graph depicts *COOLER PERFORMANCE* when dissipating 220 watts of heat. Here the performance of a cooler is characterized by how cool the GPU runs and how quiet the cooling system is.

For a given cooler design (there are two in the graph, the 1080 cooler and the 1080 Ti cooler) and a constant heat load, the resultant junction temperature (Y-axis) is correlated with the noise generated by the cooling system (here it is assumed that noise correlates with fan speed).
- By varying the noise (fan speed), the resultant temperature will vary too.
- Take a sample at low noise (low fan speed): at 32.5 dB, the 1080 cooler results in ~95°C, the 1080 Ti cooler in 88°C.
- Take a sample at high noise (high fan speed): at 35.5 dB, the 1080 cooler results in ~87°C, the 1080 Ti cooler in 82°C.

- Notice that as fan speed goes up (noise increases), the temperature drops. Intuitive, isn't it?
- Notice that at the same noise level, the 1080 Ti cooler results in a lower temperature than the 1080 cooler; *this is the point the graph is trying to get across*.

I sometimes wonder if this kind of error is the cause of some baffling decision from governments and companies.


----------



## Fluffmeister (Mar 1, 2017)

You're move AMD. 
You're move AMD.
You're move AMD.

Seriously, it's you're move AMD.

Anyway, looking forward to seeing custom AIB cards/reviews.


----------



## the54thvoid (Mar 1, 2017)

Fluffmeister said:


> You're move AMD.
> You're move AMD.
> You're move AMD.
> 
> ...



Someone's going to pull up the 'you're' usage. May as well be me!

Your move Fluffmeister.


----------



## Fluffmeister (Mar 1, 2017)

the54thvoid said:


> Someone's going to pull up the 'you're' usage. May as well be me!
> 
> Your move Fluffmeister.



Mobile phone on a Train!


----------



## denixius (Mar 1, 2017)

Looks very good. But unfortunately its price may be $1,375 in Turkey. Tax policies are not good in here. Damn!


----------



## jabbadap (Mar 1, 2017)

Hmm, 11 Gbps GDDR5X; I thought Micron had skipped the faster GDDR5X grades in favor of GDDR6. But now they are back on their site.

The price is surprisingly good, so how long will it take them to release a Titan X Black with the full GP102 and maybe 12 Gbps GDDR5X?


----------



## chaosmassive (Mar 1, 2017)

The reason we see a weird memory configuration here is that it has 8 ROPs disabled. We know that ROPs are tied to the memory controllers, as on the GTX 970, where the disabled ROPs caused the last 32-bit portion to hitch a ride on another ROP's memory controller (the 3.5+0.5 GB split), IIRC.

Now Nvidia has learned its lesson: to avoid a GTX 970-style fiasco, they simply disabled the last 32-bit MC completely along with the ROPs, hence the 11 GB over a 352-bit bus you see now. If they had tried to advertise a 'normalized' 12 GB (11+1), it would have revived the old stigma.
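A sketch of the arithmetic behind that configuration, assuming one 1 GB GDDR5X chip per 32-bit controller as on the full GP102 (the per-controller split is an assumption for illustration):

```python
# Arithmetic behind the 11 GB / 352-bit configuration described above.
# Assumption for illustration: the full GP102 pairs each 32-bit memory
# controller with one 1 GB GDDR5X chip (12 controllers on the TITAN X).
full_controllers = 12
disabled = 1                        # the 1080 Ti drops one controller whole
active = full_controllers - disabled

bus_bits = active * 32              # 352-bit bus
vram_gb = active * 1                # 11 GB, all of it at full speed
print(bus_bits, vram_gb)            # 352 11
```

Dropping the whole controller (rather than partially crippling it) is what keeps every remaining gigabyte at full speed, unlike the GTX 970's 3.5+0.5 GB split.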


----------



## medi01 (Mar 1, 2017)

Fluffmeister said:


> Mobile phone on a Train!


~~You're~~ Your apologies were accepted.


----------



## BiggieShady (Mar 1, 2017)

MrGenius said:


> Therefore the graph is completely and totally wrong.


LOL at the wrong-graph conspiracy... your "fixed" graph says that somehow, if you make your card quieter (by lowering the fan speed), it will also drop temps. Maybe in the bizarro universe.


----------



## londiste (Mar 1, 2017)

i wonder if that improved cooling graph is only due to removing the dvi port. that should give a good 40-50% extra exhaust space, which is what blowers live by...

in general though, the 1080ti is better than expected. with very few details, rumors were around a more cut-down chip, gddr5 and a higher price. none of that came to pass. performance close to titan xp is not bad at all.


----------



## EarthDog (Mar 1, 2017)

> Why the joy at no DVI?
> I still use the DVI connection in mine and never mind DP, mini or proper.
> Knowing full well in advance how each and every one of you here is going to come and correct me, lol, i can tell you that for visual signal only?
> 
> ...


wuh...what? What's this 100% 80% nonsense you are talking about?

DP is a digital signal. It either gets there or it doesn't. I have some of the cheapest DP cables I could find driving a 4K and a 2560x1440 monitor... DP is equal or better in every way, last I understood.



3rold said:


> That bus and core count seems not to match. Like everything else. Also. 352-bit bus and 11GB of memory. I think this will be a 970-like GPU, being unable to use some of it's potential.


It matches. Math above you.


----------



## bogami (Mar 1, 2017)

Remarkable progress on the outputs, and the cooling will be greatly improved. It is great for micro-ATX builds wanting SLI with liquid cooling blocks, since each card takes only one slot. Nicely done. Given the ~80% better utilization of the RAM, 1 GB less won't hurt! However, the price here will be higher because of 22% tax, and special versions will add another $100 to $200. I paid €900 for a GTX 1080 with a block on it (MSI EK); the GTX 1080 Ti will be €1000. I'm good with a pair of GTX 1080s, playing at 3440x1440 Ultra and enjoying it.


----------



## R0H1T (Mar 1, 2017)

ArdWar said:


> Seriously guys, please learn to read a graph properly.
> 
> As said before, generally X-axis is the control parameter (the source, input) and Y-axis is the response parameter (the result, output). Changing parameter in X results in a change in Y parameter.
> 
> ...


Seriously, I'd expect anyone on this site to be able to read the graph.

How about adding something like *at constant GPU load*? If NVIDIA were really smart they would've simply put the noise delta on the X axis; otherwise this is how people can interpret it, and they're not wrong either ~


----------



## newtekie1 (Mar 1, 2017)

chaosmassive said:


> The reason we see this weird memory configuration is that 8 ROPs are disabled. We know ROPs are tied to the memory controllers, as on the GTX 970, where the disabled ROPs forced the last 32-bit portion of the bus to be piggy-backed onto another ROP's memory controller (the 3.5+0.5 GB split), IIRC.
> ...



Not exactly.  You are correct that the ROPs are tied to the memory controllers.  However, nVidia didn't actually disable any ROPs on the GTX970.  The GTX970 still had all 64 ROPs enabled, because it had all 8 memory controllers enabled.
It was disabling the L2 that caused the issue.  Each memory controller, and its ROPs, are linked to a block of L2.  When they disabled a block of L2 in the GTX970, that block's memory controller and ROPs had to be jumpered over to another block of L2.
The ROPs in the jumpered section were technically still active.  However, nVidia designed their driver not to use them, because using them would have actually resulted in slower performance.

In the case of the GTX1080Ti, they likely also lowered the amount of L2.  We won't know for sure, because L2 is not an advertised spec.  And you are probably right: in this case, they also just went ahead and disabled the memory controller and its associated ROPs to avoid any kind of fiasco.



EarthDog said:


> wuh...what? What's this 100% 80% nonsense you are talking about?
> 
> DP is a digital signal. It either gets there or it doesn't. I have some of the cheapest DP cables I could find driving a 4K and a 2560x1440 monitor... DP is equal or better in every way, last I understood.



The only way I see his statement making sense is if he use using a DP -> DVI adapter.  I've had some of those really suck.

But he's really complaining about nothing for two reasons:

1.) This is just the reference output design.  AIBs can change it however they want, and I'm sure some will add a DVI port.
2.) It has an HDMI port.  Since DVI and HDMI use the exact same signal, he can just pick up a cheap HDMI -> DVI adapter or cable.



qubit said:


> A slightly crippled GPU and a weird 11GB RAM on their top GTX? Now that's just fugly.



Just like so many great GPUs before it.



R0H1T said:


> Seriously, I'd expect anyone on this site to be able read the graph.
> 
> How about adding something like *at constant GPU load. If Nividia were really smart they would've simply put the noise delta on the X axis otherwise this is what people can interpret & they're not wrong either ~



It doesn't matter which axis is which.  The graph would read the same.  However, the point nVidia was making was that the 1080Ti cooler gives lower temperatures than the 1080 cooler.  So most people expect the 1080Ti line to be lower on the graph than the 1080's, not shifted a little to the left.  Visually, if your point is that something is lower than something else, you orient your graph axes so that it is visually lower on the graph.

And they did put, in clear-as-day letters, that they were testing both at 220 W.
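The comparison being argued about can be sketched numerically: hold power constant, treat each cooler as a set of (noise, temperature) points, and compare temperatures at equal noise. The data points below are invented purely for illustration; NVIDIA hasn't published the underlying numbers.

```python
# Toy illustration of reading NVIDIA's temp-vs-noise graph: at a fixed 220 W
# load, each cooler traces a curve of (noise dB(A), temperature °C) points as
# fan speed varies. All values below are hypothetical.

def temp_at_noise(curve, noise_db):
    """Linearly interpolate a cooler's temperature at a given noise level."""
    pts = sorted(curve)
    for (n0, t0), (n1, t1) in zip(pts, pts[1:]):
        if n0 <= noise_db <= n1:
            f = (noise_db - n0) / (n1 - n0)
            return t0 + f * (t1 - t0)
    raise ValueError("noise level outside measured range")

gtx1080_cooler = [(32.0, 90.0), (36.0, 84.0), (40.0, 79.0)]    # hypothetical
gtx1080ti_cooler = [(32.0, 85.0), (36.0, 79.0), (40.0, 74.0)]  # hypothetical

# At the same noise level, the Ti curve sits lower on the temperature axis —
# which is exactly the comparison the graph invites, left-shift and all.
print(temp_at_noise(gtx1080_cooler, 34.0))    # 87.0
print(temp_at_noise(gtx1080ti_cooler, 34.0))  # 82.0
```

Same slide, same conclusion: pick any noise level, read off both curves, the Ti cooler is cooler.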


----------



## Air (Mar 1, 2017)

After the "do you get lower room temperatures with a bigger CPU cooler?" debate, the new TPU complex science challenge is understanding the NVIDIA cooler graph. Seriously, I can't even understand what people aren't understanding.

On the cooler subject, I'd like to point out that there are two things that favor the 1080 Ti's cooling over the 1080's:

1. No DVI port - increased flow, lower turbulence.
2. Bigger die area - higher heat transfer at the same power.

I bet those two make up most of this 5 °C difference at the same power, and NVIDIA changed nothing, or almost nothing.


----------



## Jetster (Mar 1, 2017)

https://www.bit-tech.net/hardware/graphics/2017/03/01/nvidia-announces-geforce-gtx-1080-ti/1

Interesting  $499?


----------



## EarthDog (Mar 1, 2017)

Says $699 in what you linked... and it is correct?

Edit: you meant the 1080... in a 1080 Ti thread... and didn't say it. LOL!


----------



## Captain_Tom (Mar 1, 2017)

So 35% stronger than the 1080?   It's just a Titan with a different name.


Vega should have no trouble defeating this if they want it to.


----------



## Fluffmeister (Mar 1, 2017)

Yeah Vega is a beast, at least 50% faster I've heard, should cost only $400 too and come with a free HTC Vive.

Nvidia are doomed.


----------



## newtekie1 (Mar 1, 2017)

Air said:


> After the "do you get lower room temperatures with a bigger CPU cooler?" debate, the new TPU complex science challenge is understanding the NVIDIA cooler graph. Seriously, I can't even understand what people aren't understanding.
> 
> On the cooler subject, I'd like to point out that there are two things that favor the 1080 Ti's cooling over the 1080's:
> 
> ...



I think the interesting thing is that everyone is arguing about the reference cooler.  Something almost none of us will use, because it's still crap compared to the 3rd party coolers that will be used by the AIBs.


----------



## londiste (Mar 1, 2017)

i would not call it exactly crap. there are tradeoffs to be made with cooler having to be a blower.
there is not much you can do to a blower type cooler beyond what nvidia currently has on 1080/1080ti/titanxp. at least not in reasonable price range.


----------



## ZeroFM (Mar 1, 2017)

The 1080 Ti and Titan XP PCBs should be the same? I want to order a waterblock.


----------



## Steevo (Mar 1, 2017)

If anyone honestly thinks the price is due to anything other than wanting to capture market share and fend off competition, even from older cards and AMD...

Stop being delusional.

Also, it looks amazing, when are the actual reviews out?


----------



## Captain_Tom (Mar 1, 2017)

Fluffmeister said:


> Yeah Vega is a beast, at least 50% faster I've heard, should cost only $400 too and come with a free HTC Vive.
> 
> Nvidia are doomed.



Do you realize how silly you sound?   You are acting like it's insane that after 2 years AMD could make a card 50% stronger than their previous flagship.


They have done that every generation lol.


----------



## londiste (Mar 1, 2017)

Captain_Tom said:


> Do you realize how silly you sound?   You are acting like it's insane that after 2 years AMD could make a card 50% stronger than their previous flagship.
> They have done that every generation lol.


you are right that amd should be able to do 50% over previous flagship.
however, amd's previous flagship was fury x. 50% on top of fury x would put performance at only slightly faster than gtx1080.

depending on where exactly they want vega to be it might not be enough.


----------



## dalekdukesboy (Mar 1, 2017)

OneCool said:


> 11gb of VRAM screams something isn't right...W1z...Come on back my old ass up here....  Core math doesn't hold up on this lol...2+2 isn't almost 4



With modern math, AKA Common Core math, it certainly is! Probably more like -14.5 with where our math education is going. As to 2+2, that probably = wtf you want it to!


----------



## dalekdukesboy (Mar 1, 2017)

MrGenius said:


> Oh for crying out loud. How hard can this be to understand people? They made a mistake. Plain and simple.
> 
> Here. I fix.
> 
> ...



Ask Nvidia, they fucked up a simple graph.


----------



## theGryphon (Mar 1, 2017)

Captain_Tom said:


> Do you realize how silly you sound?   You are acting like it's insane that after 2 years AMD could make a card 50% stronger than their previous flagship.
> 
> 
> They have done that every generation lol.



In his sarcasm, I bet he meant 50% faster than the 1080 Ti.

In all seriousness, AMD should be able to do way more than +50% over the Fury X, which came with a gimped HBM infrastructure. My bet, though: top Vega won't beat the 1080 Ti, but will come within 10% of it.


----------



## theGryphon (Mar 1, 2017)

dalekdukesboy said:


> Ask Nvidia, they fucked up a simple graph.



oh I so hope this was sarcasm...


----------



## dalekdukesboy (Mar 1, 2017)

Yes... yet no. As you pointed out, they didn't technically screw up the graph or get it "wrong"; they show the proper relationship between cooler noise and temperature. So yes, I was just being funny and sarcastic. But as others have stated, it isn't an incorrect graph necessarily, just a poorly executed one, with parameters like fan speed not set or stated, which would have stopped everyone from questioning it. The graph seems accurate and I get what it is trying to say, but it did take me a moment of looking to see what they were doing. It works; it just could have been better executed.


----------



## londiste (Mar 1, 2017)

noise level and fan speed would be on the same axis and the graph would largely be the same. maybe they chose noise as it drew straighter lines?

now that i looked closer at that temp/noise graph though, 1080@220w vs 1080ti@220w is misleading as hell. the 1080's reference tdp is 180w, the 1080ti's is 250w. even with a better cooler the actual end result will be worse


----------



## R0H1T (Mar 1, 2017)

londiste said:


> noise level and fan speed would be on the same axis and the graph would largely be the same. maybe they chose noise as it drew straighter lines?
> 
> now that i looked closer at that temp/noise graph though, 1080@220w vs 1080ti@220w is misleading as hell. the 1080's reference tdp is 180w, the 1080ti's is 250w. even with a better cooler the actual end result will be worse


Of course; news at 11, the 180 W TDP cooler is less efficient/more noisy than the 220 W TDP cooler.
This is what many are ignoring, and also the reason why the graph didn't make sense to me at first, not to mention I couldn't recall the 1080's TDP immediately.


----------



## Air (Mar 1, 2017)

dalekdukesboy said:


> Yes...yet no. As you pointed out they didn't technically screw up the graph or get it "wrong" they show proper relationship of noise of cooler and temperature. So yes I was just being funny and sarcastic, but as others' have stated it isn't an incorrect graph necessarily but it is poorly executed with parameters not set or stated like fan speed etc which would have stopped everyone from questioning it. So yes the graph seems accurate I get what it is trying to say, but it did take me a moment to look at it and see what they were doing. It works, just could have been better executed.


The graph is perfect. The RPM value is meaningless; what matters is noise levels and cooling performance, both shown on the graph. But, as I said before, it is not an apples-to-apples comparison because of the difference in die area and outlet design. So I'm not buying the "better cooler" claim.


----------



## nickbaldwin86 (Mar 1, 2017)

throw a water block on it and you get a single slot card!!! FINALLY!! the day of DVI is over!


----------



## rtwjunkie (Mar 1, 2017)

nickbaldwin86 said:


> throw a water block on it and you get a single slot card!!! FINALLY!! the day of DVI is over!



It's not. AIBs are 99% likely to put one on there, because the vast majority of buyers have DVI monitors.  Yes, you can use an adapter, but they won't want to alienate buyers.


----------



## Steevo (Mar 1, 2017)

Probably 11GB due to the die cuts directly affecting the memory interface, and Nvidia trying to avoid a much slower 12th GB.


----------



## Prince Valiant (Mar 1, 2017)

I love official graphs for this stuff, always making minor differences look huge. The percentage comparison at the end is solid gold. It'll be interesting to see what the non-reference boards manage.


----------



## GhostRyder (Mar 1, 2017)

nickbaldwin86 said:


> throw a water block on it and you get a single slot card!!! FINALLY!! the day of DVI is over!


Yea, I am thankful that at least the reference design has no DVI.  I prefer this 1 HDMI and 3 DP layout (same with AMD) well over having anything on the top row.

Interested in how this thing will perform and how this cut-down card handles things in the memory department.  For the price, I may trade up my Titan XP for a pair of these instead of grabbing a second Titan XP (also depends on how it overclocks).


----------



## Hotobu (Mar 1, 2017)

Just posting for posterity in this quasi tragedy called a thread. LOL @ people who can't read the most basic of graphs and thought the original was reversed. Like... how hard is it to correlate a higher noise level with higher fan speed and lower temps? 

What really sucks is that this embarrassment has to be on the main page. Hard to be a reputable site when a guy with genius (lol?) in his name goes through the trouble of fixing something that isn't broken. What's even worse is that some people probably STILL haven't figured it out.


----------



## EarthDog (Mar 1, 2017)

If 256-bit handled it just fine, 352-bit will...


----------



## dalekdukesboy (Mar 1, 2017)

Hotobu said:


> Just posting for posterity in this quasi tragedy called a thread. LOL @ people who can't read the most basic of graphs and thought the original was reversed. Like... how hard is it to correlate a higher noise level with higher fan speed and lower temps?
> 
> What really sucks is that this embarrassment has to be on the main page. Hard to be a reputable site when a guy with genius (lol?) in his name goes through the trouble of fixing something that isn't broken. What's even worse is that some people probably STILL haven't figured it out.



Also, ironically, he originally misread the graph and was the first to be confused by it... "genius" guy, that is. The site is very reputable; I don't think TPU is any less or more so due to certain posters, and hey, maybe they just didn't have their coffee before posting.


----------



## NTM2003 (Mar 1, 2017)

Any idea when pre-orders start? I keep refreshing the Amazon page, but nothing yet, not even price drops.


----------



## Captain_Tom (Mar 1, 2017)

londiste said:


> you are right that amd should be able to do 50% over previous flagship.
> however, amd's previous flagship was fury x. 50% on top of fury x would put performance at only slightly faster than gtx1080.
> 
> depending on where exactly they want vega to be it might not be enough.




Not at all true.

https://tpucdn.com/reviews/Gigabyte/GTX_1080_Aorus_Xtreme_Edition/images/perfrel_3840_2160.png


That's 15%+ higher than a stock 1080, and that's using a list that includes many older games.  All of this is just the bare minimum, too.  If you actually look at the architectural enhancements and leaked specs, it could be as much as twice as strong - *we just don't know yet.*

The point is that the 7970 was practically twice as strong as the 6970, and the 290X was 50-65% stronger than the 7970.  We should _expect_ at least that much from AMD considering how long it has been.


----------



## Hotobu (Mar 1, 2017)

I was considering waiting for Vega, but now I don't see a reason to. Even if it does beat out this card I don't see that happening with a large enough performance increase or price difference to justify the wait.


----------



## GhostRyder (Mar 1, 2017)

NTM2003 said:


> any Idea when pre order starts I keep refreshing the amazon page but nothing yet not even price drops


You need to buy from Nvidia's store.  They are doing the FE versions first only on the NVidia store.

Still debating if I want to trade up.


----------



## TheoneandonlyMrK (Mar 1, 2017)

Wow, I've a comment but I'm keeping it to myself, it errr... lolz lots. Enough said.


----------



## JJJJJamesSZH (Mar 1, 2017)

Now I feel sad for my TTXP


----------



## NTM2003 (Mar 1, 2017)

I want the Ti, but the normal 1080 is a major upgrade from my 960. Then again, the price on the Ti makes me want that one more now. Can't wait. I just hope my CPU will handle it; I'm sure it will.


----------



## Grings (Mar 1, 2017)

I was expecting 799 (didn't the last-gen Titan cost that?).

Annoyingly, Nvidia have finally gone for a 1:1 $-to-£ currency conversion, so it's £699; I was hoping it might be £649.

Seems they took the opportunity to jack the price of the Titan from £1099 to £1179 too.


----------



## NTM2003 (Mar 1, 2017)

I thought they went around $1099 or $1199 when the 980ti was released


----------



## Kyuuba (Mar 1, 2017)

Perfect card to upgrade from 780ti.
Wallet ready to put one inside my case!


----------



## efikkan (Mar 1, 2017)

This is by far the greatest top consumer model Nvidia has released in ages, with a great price, 35% extra performance, outstanding efficiency and thermals, and still some decent overclocking headroom. Many of you dismiss this product, yet you create hype about Vega, which will not even compete with this one.



kruk said:


> No Founders Edition tax? Only $699? 11 GB of RAM? Does anybody else feel AMD tricked them into releasing this card so early with their Vega "reveal"? They could keep charging that much for the 1080 and get more profits.


Nvidia was not tricked; in fact the GTX 1080 Ti was postponed. It was supposed to arrive at the end of 2016 but was delayed due to supply issues. Why are so many people complaining about the memory size/bus width? The memory controllers in a GPU are separate and work independently; you can have as many 32-bit controllers as you want, it's not a technical problem.



qubit said:


> A slightly crippled GPU and a weird 11GB RAM on their top GTX? Now that's just fugly.  I'll wait for the reviews and Vega before buying, but this puts me off the card and might just stick to a 1080. The thing was plenty fast anyway.


And exactly how did the 11 GB memory put you off?
It's not like Vega is going to beat this anyway.



chr0nos said:


> maybe another gtx970 memory fiasco


How will this be a fiasco?
None of the memory controllers or chips are crippled in any way.



Aenra said:


> (off topic, but..)
> Why the joy at no DVI?


Because the DVI port is blocking ~30% of the exhaust area of the GTX 1060/1070/1080, even though it's not that useful any more.


----------



## londiste (Mar 1, 2017)

Grings said:


> I was expecting 799 (didnt last gen titan cost that?)
> Annoyingly, Nvidia have finally gone for 1:1 $>£ currency conversion, so its £699, i was hoping it might be £649
> Seems they took that opportunity to jack the price of the Titan from £1099 to £1179 too


nvidia has very little to do with that.
for £ prices, thank brexit. especially the titan x one. nvidia actually has a press release or something about that, saying they needed to adjust prices due to £/$ rate changes.
for the rest - usd has been gaining a lot against other currencies, € is also almost 1:1 now.



efikkan said:


> The memory controllers in a GPU are separate and work independently, you can have as many 32-bit controllers you want, it's not a technical problem.


not really separate controllers.
idea is correct though, memory controller can work with somewhat arbitrary width of memory bus. memory bus width increments are defined by the data bus width of a single memory chip, in case of gddr5(x) that is 32 bits.
nitpicky, i know. sorry.


----------



## kruk (Mar 1, 2017)

efikkan said:


> This is by far the greatest top consumer model Nvidia has released in ages, with a great price, 35% extra performance, outstanding efficiency and thermals, and still some decent overclocking headroom. Many of you dismiss this product, yet you create hype about Vega, which will not even compete with this one.



Man, you sure know a lot about a card which hasn't been released and benchmarked yet. And about Vega. Do you work in/for the GPU industry? Just curious, not accusing you of anything ...


----------



## Kyuuba (Mar 1, 2017)

I need some advice here. My monitor is 144 Hz capable, but in order to run at 144 Hz it needs the DVI port. Will the DVI adapter found in the 1080 Ti package contents allow my VG248QE to run at 144 Hz?


----------



## efikkan (Mar 1, 2017)

londiste said:


> not really separate controllers.
> idea is correct though, memory controller can work with somewhat arbitrary width of memory bus. memory bus width increments are defined by the data bus width of a single memory chip, in case of gddr5(x) that is 32 bits.
> nitpicky, i know. sorry.


I'm sorry, but that's incorrect.
Modern GPUs work by having multiple separate 32-bit memory controllers, each complete with their own ROPs. GTX 1080 Ti has one of these disabled, which is why it also has fewer ROPs. This is one of the nice modular features of modern GPUs.

When a cluster of cores wants to access a block of memory it addresses the respective memory controller.
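As a toy model of that per-controller addressing (the real address mapping is undocumented; the round-robin interleave and 256-byte stripe size below are assumptions purely for illustration):

```python
# Toy model: physical addresses interleaved across independent 32-bit memory
# controllers in fixed-size stripes. The 256-byte stripe size is an assumption
# for illustration only; NVIDIA's actual mapping is proprietary.
STRIPE = 256
ACTIVE_CONTROLLERS = 11  # GP102 has twelve; the 1080 Ti has one disabled

def controller_for(addr):
    """Return which controller services a given physical address."""
    return (addr // STRIPE) % ACTIVE_CONTROLLERS

# Consecutive stripes land on consecutive controllers, so a sequential read
# keeps all eleven 32-bit channels busy in parallel.
hits = [controller_for(a) for a in range(0, 11 * STRIPE, STRIPE)]
print(hits)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The point of the modularity: remove one controller from the modulus and the rest keep working at full speed, just with one less channel.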


----------



## iO (Mar 1, 2017)

Yeah all nice and fast but 819€ for a cut down GPU in a ref design is still not cool...


----------



## efikkan (Mar 1, 2017)

iO said:


> Yeah all nice and fast but 819€ for a cut down GPU in a ref design is still not cool...


Why does it matter if it's "cut down"?
Isn't all that matters what it _actually_ gives you?
If Nvidia needs to disable some cores/controllers/etc. to get decent yields and keep the prices low, _precisely how_ is this bad for the end user?


----------



## the54thvoid (Mar 1, 2017)

Captain_Tom said:


> Not at all true.
> 
> https://tpucdn.com/reviews/Gigabyte/GTX_1080_Aorus_Xtreme_Edition/images/perfrel_3840_2160.png
> 
> ...



Using your own rationale, the increase in performance from a 390X to a Fury X was only 30%.  Vega has the same core count as Fiji, so the arch tweaks and clock speeds will be the difference.  I can't see Vega being 100% faster than the Fury X.  Not even 75% faster.  I'd love to be wrong, but the history doesn't back it up.


----------



## TheoneandonlyMrK (Mar 1, 2017)

the54thvoid said:


> Using your own rational, the increase in performance from a 390X to Fury X was only 30%.  Vega has the same core count as Fiji.  So the arch tweaks and clockspeeds will be the difference.  I can't see Vega being 100% faster than Fury X.  Not even 75% faster.  I'd love to be wrong but the history doesn't back it up.


in some cases it will be 100%, but in how many, being realistic? all yawn and no action for me, this Ti


----------



## efikkan (Mar 1, 2017)

If 1080 Ti is _boring_ (even though it's the most exciting high end model in recent history), then Vega is going to bore you to death.


----------



## jabbadap (Mar 1, 2017)

BTW, there's now a giveaway on the NVIDIA site:


> *SIGN UP FOR GEFORCE EXPERIENCE AND GET REWARDED*
> Being a member of the GeForce Experience community means you can receive a ton of great giveaways—from game codes to graphics cards and more!
> 
> *CHECK OUT OUR NEWEST GIVEAWAY!*
> ...







----------



## Fluffmeister (Mar 1, 2017)

efikkan said:


> If 1080 Ti is _boring_ (even though it's the most exciting high end model in recent history), then Vega is going to bore you to death.



Indeed, all credit to the AMD fanboys for their patience waiting for that mythical beast.


----------



## Ascalaphus (Mar 2, 2017)

This was the card I was waiting for. 

Going to get 2 of these bad boys to replace my SLI 980Tis.


----------



## W1zzard (Mar 2, 2017)

NTM2003 said:


> any Idea when pre order starts I keep refreshing the amazon page but nothing yet not even price drops


We wanted to let you know that the GTX 1080 Ti will officially be available for pre-orders starting at 8:00 a.m. PST tomorrow morning. For folks that want to get in the action, the pre-order link is: https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/

Got this from NVIDIA earlier today.


----------



## GhostRyder (Mar 2, 2017)

efikkan said:


> If 1080 Ti is _boring_ (even though it's the most exciting high end model in recent history), then Vega is going to bore you to death.


How so?  While it's definitely a better deal, it's essentially a Titan XP without 1 GB of RAM and with better voltage control/boost clocks.  That's not all that interesting, even if it's a good price.



Fluffmeister said:


> Indeed, all credit to the AMD fanboys for their patience waiting for that mythical beast.


So you're saying people waiting for a GPU release are dumb fanboys?  Oh right, it's just the people who are waiting on an AMD card.  Heaven forbid waiting and seeing before making an expensive purchase.



W1zzard said:


> We wanted to let you know that the GTX 1080 Ti will officially be available for pre-orders starting at 8:00 a.m. PST tomorrow morning. For folks that want to get in the action, the pre-order link is: https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/
> 
> Got this from NVIDIA earlier today.


Wow, that fast? Really considering offloading my Titan XP for a pair of these now!!!


----------



## qubit (Mar 2, 2017)

newtekie1 said:


> Just like so many great GPUs before it.


Ya. 11GB of RAM just doesn't sit right, though, lol. Maybe I'll get one just for the perversity of it...


----------



## medi01 (Mar 2, 2017)

efikkan said:


> This is by far the greatest top consumer model Nvidia has released in ages


Yeah. Ages.
Unless you count the card it released in August 2016, in which case it is a whopping 5 months. Exciting.



efikkan said:


> 35% extra performance


"extra", eh?



efikkan said:


> Many of you dismiss this product, yet you create hype about Vega...


It's not hard to see the difference.
nVidia released a rebranded Titan.
AMD is expected to release a brand-new card.



efikkan said:


> ...which will not even compete with this one.


This is simply fanboi-ism. We don't know yet, but the pricing Huang has opted for hints at something rather close.


----------



## newtekie1 (Mar 2, 2017)

qubit said:


> Ya. 11GB RAM just doesn't sit right though lol. Maybe I'll get one just for the pervesity of it...



Honestly, neither does 12GB.  I'm still stuck on 2, 4, 8, 16, 32.  But even system RAM isn't sticking to that anymore... I'm just too old fashioned.


----------



## qubit (Mar 2, 2017)

newtekie1 said:


> Honestly, neither does 12GB.  I'm still stuck on 2, 4, 8, 16, 32.  But even system RAM isn't sticking to that anymore... I'm just too old fashioned.


I know. When it's not a power of 2 it gets my OCD going like crazy, too. It creates such inefficiencies to make such designs. I know why they do it though, since chip sizes would otherwise grow exponentially which is unsustainable.


----------



## EarthDog (Mar 2, 2017)

Do tell why an 11GB setup is 'less efficient' than a 12GB one...


----------



## dalekdukesboy (Mar 2, 2017)

Air said:


> The graph is perfect. RPM value is meaningless, what matters is noise levels and cooling performance, both shown on the graph. But, as I said before, is not an apples to apples comparison because of the difference in die area and outlet design. So I'm not buying the "better cooler" claim.



The later point you make is fine, but I call BS on "the graph is perfect"... you can fool some of the TechPowerUp people all the time, but not most of them most of the time, as Honest Abe said. If the graph were so "perfect", no one (or hardly anyone) would question what it said, and I, among others, at least questioned the format, which frankly was bad and incomplete.


----------



## dalekdukesboy (Mar 2, 2017)

efikkan said:


> I'm sorry, but that's incorrect.
> Modern GPUs work by having multiple separate 32-bit memory controllers, each complete with their own ROPs. GTX 1080 Ti has one of these disabled, which is why it also has fewer ROPs. This is one of the nice modular features of modern GPUs.
> 
> When a cluster of cores wants to access a block of memory it addresses the respective memory controller.



What would you expect from someone who wants to stay in the EU?


----------



## Air (Mar 2, 2017)

dalekdukesboy said:


> Later point you make is fine, but I call BS on "graph is perfect"...you can fool some of the techpowerup people all the time, but not most of them most of the time as Honest Abe said; if the graph were so "perfect" no one would question what it said or hardly anyone would and myself included at least questioned the format which frankly, was bad and not complete.


How is it incomplete? It has 4 noise and temperature data points, both correctly labeled, with the correct units, and correctly plotted. It gets the point across that at the same power you can run 5 °C cooler or 3.5 dB(A) quieter on the Ti. You could argue, maybe, that it has too much information for the average audience and should be simpler, perhaps just a bar graph for a single data point.


----------



## dalekdukesboy (Mar 2, 2017)

Call it whatever you want. It showed the causality of temps between the two cards fine, but it's not the complexity of it; rather, the left-to-right nature of how we read sentences is broken on this graph... it even has an arrow pointing right to left. This may work for languages or people that go right to left, but at least for English, Spanish, etc., we are trained to go left to right.


----------



## Air (Mar 2, 2017)

dalekdukesboy said:


> Call it whatever you want, it showed causality of temps between the two cards fine but it's not the complexity of it, but more the left to right nature of how we read sentences is broken on this graph....it even has arrow pointing right to left. This may work for certain languages or people who go right to left but at least for English, Spanish, etc we are trained to go left to right.


But the X axis values increase from left to right, and the Y axis values increase from bottom to top. That's the standard for graphs, independent of language. Pretty intuitive, I would say. If they made the values for noise decrease from left to right, now THAT would be confusing. You can't expect all graphs to have ascending lines.


----------



## dalekdukesboy (Mar 2, 2017)

Ok, then why would you have a big obvious arrow going from right to left to throw off the left-to-right continuity? No, but ascending lines tend to work best; 2 of 3 graphs here are that way, for example. I simply am pointing out the graph is far from "perfect", and many people not "getting" it pretty much proves that.


----------



## efikkan (Mar 2, 2017)

GhostRyder said:


> How so?  While its definitely a better deal its essentially a Titan XP without 1gb of ram and with better voltage control/Boost clocks.  That's not completely interesting even if its a good price.


Well, for starters it's 35% better than the GTX 1080, and secondly it's reducing the price of the GTX 1080. Thirdly, it's roughly the same performance per dollar as the GTX 1080 after the price adjustment, and it's amazing to see a high-end model retaining this awesome value while delivering the best performance. Fourth, it's the best high-end Ti model ever, much better than the 980 Ti and 780 Ti. Fifth: great energy efficiency and hopefully some overclocking headroom.


----------



## qubit (Mar 2, 2017)

EarthDog said:


> Do tell why an 11GB setup is 'less efficient' than 12GB...


I didn't say that. It's less efficient than a power of 2 design. In this case, you'd need to have 16GB RAM for a "perfect" design. You always need to go to the next power of 2 up.

It'll be interesting to see if that 11GB RAM has a similar issue as the GTX 970 with that slow memory due to the cut down GPU. I suspect it won't though as NVIDIA have learned their lesson from that particular scandal.


----------



## Air (Mar 2, 2017)

dalekdukesboy said:


> Ok, then why would you have a big obvious arrow going from the right to left to throw off the left to right continuity? No, but ascendant lines tend to work best, 2 of 3 graphs here are that way for example. I simply am pointing out graph is far from "perfect" and many people not "getting" it pretty much proves that.


The only flaw I could point out is that it does not state the ambient temperature (which I think can safely be assumed to be 25 °C).

Honestly, I can't think of any other way to portray the same amount of information in a simpler way. You can't just make a graph have an ascending line; it's not something you can choose. Noise vs. temperature graphs for coolers will always have a descending line. Change it to fan speed vs. temperature and it will be similar.

Well, I guess you could change it to fan noise vs. heat dissipated at a fixed GPU temperature instead of power, which would result in ascending lines. But that's really not the usual information people look for when choosing coolers.

What's causing confusion is not the graph per se, but some lack of experience in interpreting graphs. If you take your time and observe the information it presents, it's pretty clear.


----------



## efikkan (Mar 2, 2017)

qubit said:


> I didn't say that. It's less efficient than a power of 2 design. In this case, you'd need to have 16GB RAM for a "perfect" design. You always need to go to the next power of 2 up.


That's completely wrong. Each memory controller is still accessing a power of 2 amount of memory. Memory controllers in GPUs work independently. There is no scientific basis for claiming that the total number of resources has to be a power of 2. Just look at the core counts in modern GPUs; almost none of them add up to a power of 2.



qubit said:


> It'll be interesting to see if that 11GB RAM has a similar issue as the GTX 970 with that slow memory due to the cut down GPU. I suspect it won't though as NVIDIA have learned their lesson from that particular scandal.


That will never happen. You are conflating two unrelated design choices.
The "issue" with the GTX 970 was that two 32-bit chips shared a single 32-bit bus, with the first chip having priority, resulting in "unreliable" memory performance. This is actually not new; the GTX 660/660 Ti did a similar thing, but nobody complained then.


----------



## GhostRyder (Mar 2, 2017)

efikkan said:


> Well, for starters it's 35% better than GTX 1080, and secondly it's reducing the price of GTX 1080. Thirdly, it's roughly the same price per dollar as GTX 1080 after the price adjustment, and it's amazing to see a high-end model retaining this awesome value while delivering the best performance. Forth, it's the best high-end Ti model ever, much better than 980 Ti and 780 Ti. Fifth; great energy efficiency and hopefully some overclocking headroom.


Yes, but it's still just a Titan XP at the end of the day with better voltage control, better stock clocks, and less RAM. Even though it's cheaper, it's just a more affordable Titan; that is good, but not as exciting as a brand new never-before-seen chip. It would be interesting if, say, it had more cores unlocked, but that is not the case. I am only interested in it for its price and whether it can overclock further.



qubit said:


> I didn't say that. It's less efficient than a power of 2 design. In this case, you'd need to have 16GB RAM for a "perfect" design. You always need to go to the next power of 2 up.
> 
> It'll be interesting to see if that 11GB RAM has a similar issue as the GTX 970 with that slow memory due to the cut down GPU. I suspect it won't though as NVIDIA have learned their lesson from that particular scandal.


My thoughts as well. I am considering trading my Titan XP in for a pair (maybe just one), but only after reviews and some time has passed. I want to see if it has those issues as well, and what other versions come out (better VRM versions).


----------



## EarthDog (Mar 2, 2017)

qubit said:


> I didn't say that. It's less efficient than a power of 2 design. In this case, you'd need to have 16GB RAM for a "perfect" design. You always need to go to the next power of 2 up.
> 
> It'll be interesting to see if that 11GB RAM has a similar issue as the GTX 970 with that slow memory due to the cut down GPU. I suspect it won't though as NVIDIA have learned their lesson from that particular scandal.


Ok... but why is a power of 2 more efficient? My apologies here for being dense...

Again, it shouldn't have that 970 issue. The back-end ROPs (read: the math) all seem to jibe to me?


----------



## efikkan (Mar 3, 2017)

GhostRyder said:


> While its definitely a better deal its essentially a Titan XP without 1gb of ram and with better voltage control/Boost clocks.  That's not completely interesting even if its a good price.





GhostRyder said:


> Yes, but its still just a Titan XP at the end of the day with better voltage control, better stock clocks, and less ram.  Even though its cheaper its just a more affordable Titan, that is good but not exciting as a brand new never before seen chip.  It would be interesting if say it had more cores unlocked but that is not the case.


Why isn't it interesting that you can get the high-end consumer card for 58% of the price of the professional card?
Why isn't GTX 1080 Ti interesting when it reduces the prices of the remaining lineup as well?
Anyone interested in buying a decent card soon should be cheering, it's in fact the biggest news of the year.



EarthDog said:


> Ok... but why is a power of 2 more efficient? My apologies here for being dense...


Power of 2 matters for certain things when it comes to building integrated circuits. Allocations in system memory, allocations in GPU memory, sizes of sectors on SSDs/HDDs, etc. are all powers of 2 because it decreases the complexity of the integrated circuits.

Let me create a small example:


Spoiler: Example



You have 4 memory modules of 16 kB (16,384 bytes) each.
Now, let's look at the address space in binary:

```
0   First address:  0000 0000 0000 0000
    Last address:   0011 1111 1111 1111

1   First address:  0100 0000 0000 0000
    Last address:   0111 1111 1111 1111

2   First address:  1000 0000 0000 0000
    Last address:   1011 1111 1111 1111

3   First address:  1100 0000 0000 0000
    Last address:   1111 1111 1111 1111
```
Do you see the pattern?

You can use the first two bits to check which memory module the address belongs to, so the memory controller just needs a few transistors to calculate this, instead of some complex transformation of the address. The remaining 14 bits become the internal address space of a module. Then the module does the same thing to find out which chip ("memory bank") the address belongs to.

Now let's compare this with an address space that's a power of 10 instead: four modules of 10,000 bytes each:

```
0   First address:  0000 0000 0000 0000
    Last address:   0010 0111 0000 1111

1   First address:  0010 0111 0001 0000
    Last address:   0100 1110 0001 1111

2   First address:  0100 1110 0010 0000
    Last address:   0111 0101 0010 1111

3   First address:  0111 0101 0011 0000
    Last address:   1001 1100 0011 1111
```
Even though an address space that's a power of 10 is much simpler for us humans, it's obviously much harder for computers.
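To make the hardware cost concrete, here's a small Python sketch mirroring the example above (illustrative only; the function names are made up). With power-of-2 modules, splitting an address into module and offset is just a shift and a mask; with 10,000-byte modules it requires a full division, which is far more expensive to implement in silicon:

```python
MODULE_BITS = 14  # 16 kB modules -> 14-bit internal address space

def route_pow2(addr):
    """Split an address into (module, offset) using only shift/mask ops."""
    module = addr >> MODULE_BITS              # top bits select the module
    offset = addr & ((1 << MODULE_BITS) - 1)  # low bits are the offset
    return module, offset

def route_decimal(addr):
    """Same split for 10,000-byte modules: needs a real division."""
    return addr // 10_000, addr % 10_000

# First address of module 1 in the power-of-2 layout (0100 0000 0000 0000):
print(route_pow2(0b0100_0000_0000_0000))  # -> (1, 0)
```

The shift/mask version corresponds to "a few transistors"; the division version is the "complex transformation of the address".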



-----

So back to the question at hand: does it matter that the GTX 1080 Ti has a total memory bus of 352 bits? No. As I've said numerous times already, it has 11 separate 32-bit controllers, each accessing a power of 2 address space, adding up to a continuous address space without any kind of performance penalty. So 352 bits total is not any kind of problem, and if you do the math you'll notice that even 384-bit is not a power of 2!

An analogy: your hard drive consists of sectors, each of which is a power of 2 in size, but the total count never is.

*Edit:*
Memory controllers for GPUs are in fact even simpler than CPU memory controllers. Not only is the size of allocations a power of 2, but when allocating buffers for textures etc., each dimension has to be a power of 2. If you create a texture of 144×129, your API will pad it to 256×256.


----------



## EarthDog (Mar 3, 2017)

Well said... thank you!


----------



## GhostRyder (Mar 3, 2017)

efikkan said:


> Why isn't it interesting that you can get the high-end consumer card for 58% of the price of the professional card?
> Why isn't GTX 1080 Ti interesting when it reduces the prices of the remaining lineup as well?
> Anyone interested in buying a decent card soon should be cheering, it's in fact the biggest news of the year.


Because it's something we can already expect and see out in the open. It's good that it reduces prices across the board; however, the card itself is not that interesting. That does not mean it won't sell well or that it's not a good card; it means it's boring because we already know its basics and where it's going to sit on the performance chart. The most interesting part is how it's going to react to having higher voltage control, because that will result in better clocks over the Titan XP. The price is not interesting, just great news because it's more affordable to the masses. I am still interested in it, but it's not a mystery for the most part.


----------



## qubit (Mar 3, 2017)

EarthDog said:


> Ok... but why is a power of 2 more efficient? My apologies here for being dense...
> 
> Again, it shouldnt have that 970 issue. The back end ROPs (read: the math) seems to all jive to me?


Basically, it's all to do with addressing and building the infrastructure for it inside the chip. I'm going to assume that you're familiar with base 2 (binary) and number bases in general here.

To make for a really simple example, imagine that you have a memory chip with just 4 locations. These will take 2 bits to address, ie a 2-bit address bus. The value of the bottom (first) address will be zero (00 binary) and the last (top) address 3 (11 binary).

Now imagine a lopsided memory chip with just 3 locations. You will still need to build the infrastructure for 4 addresses into the chip, since the top bit is still being set, ie value 2 (10 binary) with the top address of 3 (11 binary) pointing nowhere and likely having to be masked off to avoid a crash. Hence the chip will still take the same number of transistors as if it had 4 locations, but not actually _have_ that extra location in it and therefore the chip will not be an optimal design. Of course, what you get back is that the extra circuitry for the 4th location is missing, saving space, hence making for a compromise.
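The arithmetic of that narrow point can be checked in a couple of lines (illustrative sketch; `address_bits` is a made-up helper): a 3-location chip needs the same 2-bit decoder as a 4-location one, with one address code left unused.

```python
def address_bits(locations):
    """Address bits needed to select among `locations` distinct locations."""
    return max(1, (locations - 1).bit_length())

print(address_bits(4), address_bits(3))  # -> 2 2  (code 0b11 unused for 3)
```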

You have a similar situation regardless of what you're addressing, whether it's CUDA units and the number of bits they each handle in a GPU, or the number of CUDA units in the GPU, or whatever aspect of a digital circuit.

The problem in the real world of course, is that building a perfect power of 2 chip causes the number of transistors and physical size of that chip to double each time it's expanded, ie to grow exponentially which is unsustainable.

When you get to the large sizes of modern GPUs with their billions of transistors, it would tend to quickly outgrow the manufacturing capabilities of current technology. Or if not for a particular design, it would just be excessively large, such as being, for example, 40 millimeters on a side which is impractical for a commercial product that's supposed to make a profit.

No doubt it would also use a tremendous amount of power and emit a correspondingly tremendous amount of heat, making things difficult. Therefore, we see the lopsided GPUs of today to avoid this fate, or at least reduce its impact. Think of the GTX 480 and the tremendous amount of power and heat it used, despite being such a lopsided design. It's a shame and I really don't like this lopsidedness, but there's no choice for a real world GPU.

If you're curious, check out the designs of older entry level GPUs, where you'll see that quite often everything is a perfect power of 2, eg data bus, CUDA cores etc, since it's practical to do so at the smaller sizes.

The 970 memory issue came about because NVIDIA nibbled a bit off the GPU, giving rise to a compartmentalized memory addressing design, where they chose to use slow RAM for that last 500MB but didn't declare it, leading to the scandal.

When I saw the 1080 Ti with its weird 11GB RAM and crippled GPU, it occurred to me that NVIDIA could potentially have the same design issue. However, it all really depends on the details of the design whether this happens or not, and we'll soon know once the official reviews are out. I doubt they'd repeat the same mistake, especially on their flagship product.

@efikkan back there thinks I'm "completely wrong" about a power of 2 chip being optimal, but I'm not, as I've explained above. He just didn't quite understand what I was saying.

Oh and you asked for it - check my sig!


----------



## efikkan (Mar 3, 2017)

qubit said:


> Now imagine a lopsided memory chip with just 3 locations. You will still need to build the infrastructure for 4 addresses into the chip, since the top bit is still being set, ie value 2 (10 binary) with the top address of 3 (11 binary) pointing nowhere and likely having to be masked off to avoid a crash. Hence the chip will still take the same number of transistors as if it had 4 locations, but not actually have that extra location in it and therefore the chip will not be an optimal design. Of course, what you get back is that the extra circuitry for the 4th location is missing, saving space, hence making for a compromise.


No memory controller works the way you describe.
For starters, the memory controllers on GPUs all have power of 2 address spaces, as I've said a number of times already; how hard is this to understand?
But for the hypothetical scenario where 3 out of 4 memory slots are occupied, the memory controller will never check whether a memory address is inside the range on read/write; that would be too costly anyway. The whole "problem" is solved on allocation of memory (which is costly anyway, and done very rarely compared to read/write), and the only thing to check then is whether the memory address is above the maximum size, so the problem you describe doesn't exist.

Just to illustrate how wrong you are, I checked two of the machines I'm running here:
i7-3930K: 46-bit controller, 65,536 GB (64 TB) theoretical physical address space, but the CPU is "limited" to 64 GB.
i5-4690K: 39-bit controller, 512 GB theoretical physical address space, but the CPU is "limited" to 32 GB.
(This is fetched directly from the CPU's cpuid instruction, so it's what the OS sees and is guaranteed to be correct.)
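The arithmetic behind those two figures is simply 2^bits. A quick sanity check (not a cpuid reader, just the math):

```python
GB = 1 << 30
TB = 1 << 40

def physical_address_space(bits):
    """Bytes addressable by a physical address of `bits` bits."""
    return 1 << bits

print(physical_address_space(46) // TB)  # -> 64   (i7-3930K: 64 TB)
print(physical_address_space(39) // GB)  # -> 512  (i5-4690K: 512 GB)
```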



qubit said:


> You have a similar situation regardless of what you're addressing, whether it's CUDA units and the number of bits they each handle in a GPU, or the number of CUDA units in the GPU, or whatever aspect of a digital circuit.


Number of bits of what? Data bus? Memory bus? Register width?
If you look at GPU architectures you'll see that most of them don't have a core count which adds up to a power of 2, like 256, 512, 1024, 2048, 4096, etc. Just scroll through here and here; a power of 2 is more the exception than the rule.



qubit said:


> The problem in the real world of course, is that building a perfect power of 2 chip causes the number of transistors and physical size of that chip to double each time it's expanded, ie to grow exponentially which is unsustainable.


The term "a perfect power of 2 chip" doesn't make any sense.



qubit said:


> The 970 memory issue came about, because NVIDIA nibbled a bit off the GPU, giving rise to a compartmentalized memory addressing design, where they chose to use slow RAM for that last 500MB, but didn't declare it, leading to this scandal.
> 
> When I saw that the 1080 Ti with its weird 11GB RAM and crippled GPU, it brought back to me that NVIDIA could potentially have the same design issue…


The GTX 970 "issue" was that two memory chips shared one 32-bit controller, while the others didn't, creating an address space where some of it was slower without the allocator taking this into account. *Power of 2 had absolutely nothing to do with it*.

If the GTX 1080 Ti were to do the same thing, it would have to put 12 memory chips on 11 controllers, which we know it doesn't, so we know it can't happen. If you still think it's a problem, then you're having a problem understanding how processors and memory work.



qubit said:


> Oh and you asked for it - check my sig!


Your avatar is cool though.


----------



## StefanM (Mar 4, 2017)

First benchmark at https://compubench.com/result.jsp


----------



## SaltyFish (Mar 5, 2017)

We've seen 1.5GB, 3GB, and 6GB, but, yeah, that 11GB still looks especially odd. Maybe because it doesn't follow any previous multiple, power of two or otherwise; 1.5GB was an odd configuration back when Nvidia's 5xx line came out, but we're used to it and its multiples now. I'm pretty sure that some other manufacturer (Palit, MSI, Gigabyte, ASUS, etc.) will release a GTX 1080 Ti 12GB version sooner or later, since it's one of the few things they can muck around with (maybe even a 16GB version, but that's probably pushing it). Regardless of VRAM configuration, I wonder if the performance of the GTX 1080 Ti over that of the Titan X Pascal will affect the Titan line down the road. Maybe we'd all get used to it, like Intel's top-of-the-line HEDT CPUs that cost 1K USD, since I see parallels with it.

Now that the Pascal line is more or less done, is it too soon to enthuse about Vega and/or Volta? Especially among those disappointed by the 1080 Ti.


----------



## Hotobu (Mar 5, 2017)

SaltyFish said:


> We've seen 1.5GB, 3GB, and 6GB but, yeah, that 11GB still looks especially odd. Maybe because it's doesn't follow any previous multiple, power of two or otherwise; 1.5GB was an odd configuration back when Nvidia's 5xx line came out but we're used to it and its multiples now. I'm pretty that some other manufacturer (Palit, MSI, Gigabyte, ASUS, etc.) will release a GTX 1080 Ti 12GB version sooner or later since it's one of the few things they can muck around with (maybe even a 16GB version but that's probably pushing it). Regardless of VRAM configuration, I wonder if the performance of the GTX 1080 Ti over that of Titan X Pascal will affect the Titan line down the road. Maybe we'd all get used to it like Intel's top-of-the-line HEDT CPUs that cost 1K USD since I see parallels with it.
> 
> Now that the Pascal line is more or less done, is it too soon to enthuse about Vega and/or Volta? Especially among those disappointed the 1080 Ti.




Question: How can anyone really be disappointed in the 1080 Ti with the price and performance it's offering relative to the current market?


----------



## medi01 (Mar 5, 2017)

Hotobu said:


> ...relative to the current market?


This is the key here.
The only reason a 314mm² chip was sold for $700 is the lack of competition in the mid/high end.

To see why Huang has decided to cannibalize the Titans, one needs to wait until Vega (likely early May).


----------



## efikkan (Mar 5, 2017)

SaltyFish said:


> Now that the Pascal line is more or less done, is it too soon to enthuse about Vega and/or Volta? Especially among those disappointed the 1080 Ti.


For anyone disappointed with GTX 1080 Ti, what are you disappointed about?
And then how is Vega going to be any more exciting when it's not going to be better?


----------



## kruk (Mar 5, 2017)

efikkan said:


> For anyone disappointed with GTX 1080 Ti, what are you disappointed about?
> And then how is Vega going to be any more exciting when it's not going to be better?



We have already seen what Pascal can do, so there is nothing that can surprise us with 1080 Ti. 

We, however, don't know a lot about Vega, and it will be exciting to see what performance they will be able to squeeze out of it. Of course, fanboys really don't care about new tech from the opposition; they will buy and defend their favorite brand no matter what ...


----------



## EarthDog (Mar 5, 2017)

Well, they have plenty of time to tweak its performance in response anyway.

If they can't bin enough to adjust and beat the ti, then it was never meant to be.


----------



## efikkan (Mar 5, 2017)

kruk said:


> We have already seen what Pascal can do, so there is nothing that can surprise us with 1080 Ti.
> 
> We however don't know a lot about Vega, and it will be exciting to see what performance will they be able to squeeze out of it. Of course, fanboys really don't care about the new tech from the opposition, they will buy and defend their favorite brand no matter what ...


So performance per dollar, performance per watt, lowering the price of the product range, etc. is not at all exciting?
AMD has demonstrated what we can expect from Vega, and since we know they'll have to almost double their efficiency to beat GP102, we can pretty safely assume it's not going to happen.


----------



## kruk (Mar 5, 2017)

efikkan said:


> So performance per dollar, performance per watt, lowering the price of the product range, etc. is not at all exciting?
> AMD has demonstrated what we can expect from Vega, and since we know they'll have to almost double their efficiency to beat GP102, we can pretty safely assume it's not going to happen.



We will see about the first two in the benchmarks, but lower prices for the 1080 are ok. I'm just saying that we saw the Titan X Pascal review months ago, but Vega is still a mystery ...


----------



## Captain_Tom (Mar 6, 2017)

the54thvoid said:


> Using your own rational, the increase in performance from a 390X to Fury X was only 30%.  Vega has the same core count as Fiji.  So the arch tweaks and clockspeeds will be the difference.  I can't see Vega being 100% faster than Fury X.  Not even 75% faster.  I'd love to be wrong but the history doesn't back it up.



It goes 290X to Fury X, first of all, and the Fury X is 40% stronger than the 290X. Also keep in mind that's the 3rd gen on the same process node, so it's also an unfair comparison (the 980 Ti is only about 40% stronger than the 780 Ti as well).



Comparing Fury X to Vega 10 has the following differences (At least):

-~50% higher clockspeeds
-~50%+ higher memory compression
-2x the geometry IPC
-Massively streamlined memory system (Hard to quantify yet, but we know games need half the RAM now)
-Dozens of architectural tweaks, improvements, and just straight up changes.


I'm sorry, but I see those as big enhancements. But I never said this would be twice as strong as the Fury X, and it doesn't have to be, considering the Titan X is only ~60% stronger than the Fury X. Overall, though, Polaris is the newest (released) arch from AMD, so comparing Vega to the Fury is stupid if we can compare it to Polaris instead.


----------

