Thursday, September 15th 2016

NVIDIA GeForce GTX 1080 Ti Specifications Leaked, Inbound for Holiday 2016?

NVIDIA is putting the finishing touches on its next enthusiast-segment graphics card based on the "Pascal" architecture, the GeForce GTX 1080 Ti. Its specifications were allegedly screengrabbed by a keen-eyed enthusiast snooping around the NVIDIA website before they were redacted. The spec sheet reveals that the GTX 1080 Ti is based on the same GP102 silicon as the TITAN X Pascal, but is further cut down from it. Given that the GTX 1080 is holding firm at its $599-$699 price point, with some custom-design cards even selling at over $800, the GTX 1080 Ti could either be positioned around the $850 mark, or be priced lower, disrupting currently overpriced custom GTX 1080 offerings. By pricing the TITAN X Pascal at $1200, NVIDIA appears to have given itself headroom to price the GTX 1080 Ti in a way that doesn't cannibalize premium GTX 1080 offerings.

The GTX 1080 Ti is carved out of the GP102 silicon by disabling 4 of its 30 streaming multiprocessors, resulting in 3,328 CUDA cores. The resulting TMU count is 208. The card could retain the chip's ROP count of 96. The card will be endowed with 12 GB of GDDR5 memory across the chip's 384-bit wide memory interface, instead of the GDDR5X used on the TITAN X Pascal. This should yield 384 GB/s of memory bandwidth, significantly less than the 480 GB/s the TITAN X Pascal enjoys with its 10 Gbps memory chips. The GPU is clocked at 1503 MHz, with 1623 MHz GPU Boost. The card's TDP is rated at 250 W, the same as the TITAN X Pascal.
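The quoted bandwidth figures follow from the standard bus-width × data-rate calculation. Below is a minimal sketch of that arithmetic using the bus width and memory speeds mentioned above; the helper function and names are purely illustrative, not from NVIDIA's documentation.

```python
# Memory bandwidth = (bus width in bytes) x (effective data rate in GT/s).
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a GDDR memory subsystem."""
    return bus_width_bits / 8 * data_rate_gtps

gtx_1080_ti = memory_bandwidth_gb_s(384, 8.0)      # 8 Gbps GDDR5   -> 384 GB/s
titan_x_pascal = memory_bandwidth_gb_s(384, 10.0)  # 10 Gbps GDDR5X -> 480 GB/s

print(f"GTX 1080 Ti:    {gtx_1080_ti:.0f} GB/s")
print(f"TITAN X Pascal: {titan_x_pascal:.0f} GB/s")
```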
GeForce GTX 1080 Ti Specifications:
  • 16 nm GP102 silicon
  • 3,328 CUDA cores
  • 208 TMUs
  • 96 ROPs
  • 12 GB GDDR5 memory
  • 384-bit GDDR5 memory interface
  • 1503 MHz core, 1623 MHz GPU Boost
  • 8 GHz (GDDR5-effective) memory
  • 384 GB/s memory bandwidth
  • 250W TDP
Source: OC3D

176 Comments on NVIDIA GeForce GTX 1080 Ti Specifications Leaked, Inbound for Holiday 2016?

#101
$ReaPeR$
BiggieShady: As you can see, in DX12 game made using GCN asynchronous engines (which is the best case for AMD) they're not even close
right..


as for performance in Vulkan, which is much closer to what AMD hoped for in an API



you can see that the fury x has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such, we should be outraged with the insane prices of both Nvidia and intel. i suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.

www.overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_review/6
www.guru3d.com/articles_pages/gigabyte_radeon_rx_480_g1_gaming_review,16.html
#102
Prima.Vera
Maybe this will be what 780Ti was for the first Titan, instead of what 980Ti was for the next Titan ...
Knowing nVidia this would hardly be called a surprise.
#103
xorbe
Prima.Vera: Maybe this will be what 780Ti was for the first Titan, instead of what 980Ti was for the next Titan ...
Knowing nVidia this would hardly be called a surprise.
I was wondering the same thing, i.e., maybe the 1080 Ti will have 3,840 cores, fully enabled like the 780 Ti was.
#104
dalekdukesboy
...methinks Captain Tom has been in space too long; must be Major Tom from "Space Oddity"/David Bowie.
#105
dalekdukesboy
$ReaPeR$: as for performance in Vulkan, which is much closer to what AMD hoped for in an API



you can see that the fury x has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such, we should be outraged with the insane prices of both Nvidia and intel. i suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.


I removed the graphs; they don't seem necessary since everyone can go back and see the obvious, plus you state it. The point for me was simply to state that I have no idea how Tom's original statement is even vaguely close to reality. That was really it; it's not pro-green or anti-red, it's simply me looking at the facts as I have them and how the cards perform even in best-case scenarios. You proved that with your graph, which is arguably the best you can do for ATI at this point in time and the worst you can do for team green. Even with that you still get, as you said, 2/3 of the performance... hardly "close or even more" as Tom's original post said; that is what I was addressing.
#106
dalekdukesboy
Prima.Vera: Maybe this will be what 780Ti was for the first Titan, instead of what 980Ti was for the next Titan ...
Knowing nVidia this would hardly be called a surprise.
Just picked up a 980 Ti actually :). It seemed to be the best performance I could get at a relatively reasonable price, especially second-hand. Truthfully, other than efficiency gains (massive, granted), I'm not at all impressed with the performance of the 1000 series; you can basically overclock a 980 Ti and a 1080, and the Ti nips at its heels, or at least stays closer than 2/3 :). Regardless, it is good enough that at first they didn't include Ti numbers in the 1000-series reviews because it would look too good compared to their new GPUs' performance. As I said, the only real hit on last gen vs. this gen of Nvidia is that efficiency is greatly improved.
#107
Captain_Tom
BiggieShady: As you can see, in DX12 game made using GCN asynchronous engines (which is the best case for AMD) they're not even close
Nice cherry picking. Tomb Raider got its DX12 support LONG after launch, and it was more or less a half implementation.
#108
dalekdukesboy
Captain_Tom: Nice cherry picking. Tombraider got its DX12 support LONG after launch, and it was more or less a half implementation.
Ok, what about Reaper, who argued both camps suck on pricing and railed against us being on any side... is he cherry picking? It doesn't sound like he would, based on his own words and sentiments. He picked figures a bit more favorable than BiggieShady's, but as he pointed out, AMD still only got to about 2/3 of the performance of the Titan. If the Fury X were so wonderful, and AMD were half as confident as you that they could have ANY, and I mean ANY, of their cards vaguely compete with the Titan for obviously way less cash, I think they'd be touting it to the hills, which obviously they aren't.
#109
efikkan
Captain_Tom: Dude it's already close to the 1080 in Vulkan/DX12, and lol it smokes the old Titans.


Call me crazy all you want - but the 7970 is stronger than the original Titan, and thus it isn't insane to think it's possible the Fury X will come close to the new Titan in a while. Seems to happen to all of Nvidia's cards.
Crazy is indeed the word.
There is nothing in Direct3D 12 or Vulkan which will greatly benefit GCN more than Pascal. The primary reason why AMD shows greater relative gains in some games is that Nvidia brought most of the Direct3D 12 improvements to all APIs.
All the games shown thus far favoring GCN have been AMD exclusives and are clearly biased. And there will be a handful more of these, as there are many console ports ahead.
Even with these biased games, it still wouldn't make a GPU twice as fast. That's just a crazy idea spread by fans.
#110
qubit
Overclocked quantum bit
Captain_Tom: Nice cherry picking. Tombraider got its DX12 support LONG after launch, and it was more or less a half implementation.
Look, it's so easy for you to put @BiggieShady in his place: these are objective measurements, so just show some graphs of AMD beating or even equaling NVIDIA in DX12 and include a link to their origin. You'll then have won your argument hands down and he'll look a fool.

I predict a deathly silence follows or more strawman arguments. Place your bets!
#111
Captain_Tom
qubit: Look, it's so easy for you to put @BiggieShady in his place: these are objective measurements, so just show some graphs of AMD beating or even equaling NVIDIA in DX12 and include a link to their origin. You'll then have won your argument hands down and he'll look a fool.

I predict a deathly silence follows or more strawman arguments. Place your bets!
www.guru3d.com/articles_pages/deus_ex_mankind_divided_pc_graphics_performance_benchmark_review,9.html

www.techpowerup.com/reviews/ASUS/GTX_1060_STRIX_OC/12.html

Those are the latest well-built new-API games. BF1 will get DX12 and then we will have another good comparison.

I am here saying that in a year the Fury X will match the 1080 in most of the latest games. If I am wrong you can say so :D
#112
BiggieShady
Captain_Tom: Nice cherry picking. Tombraider got its DX12 support LONG after launch, and it was more or less a half implementation.
No problem, let's cherry pick from the cherry-picked scenarios ... meaning the good-case scenarios for AMD (DX12 or Vulkan) ... what we get is: the best-case scenario for AMD is when the Titan XP is 1.5 times faster than the Fury X, and the worst is when it's double the performance.
Interestingly enough there are several newer DX11 titles where Titan XP is only 1.5 times faster. (mindblown, I know, seems like you can optimize for gpu architecture even in dx11)
The point is that the gap is way too big, and the Fury X is a 28 nm chip ffs :laugh:
$ReaPeR$: you can see that the fury x has 2/3 of the performance for half the price.
Yeah, the price is a completely different argument here, because every company sets its product's price at the highest amount consumers are ready to pay given the market at the time. Price changes much more than relative performance does, and high fps in games isn't the only thing that makes this kind of product desirable ;)
#113
Captain_Tom
BiggieShady: No problem, let's cherry pick from the cherry picked scenarios ... meaning the good case scenarios for AMD (dx12 or vulkan) ... what we get is: the best case scenario for AMD is when Titan XP is 1.5 times faster than Fury X and worst when it's double the performance.
Interestingly enough there are several newer DX11 titles where Titan XP is only 1.5 times faster.
The point is that gap is way too big and fury x is 28 nm chip ffs :laugh:
The Fury is a $300 28nm chip. The Titan is a $1200 16nm chip. It is 50% stronger. That is pathetic.
#114
BiggieShady
Captain_Tom: The Fury is a $300 28nm chip. The Titan is a $1200 16nm chip. It is 50% stronger. That is pathetic.
Oh Captain, my Captain, aren't you repeating what I just said? Why are you comparing them in the first place then ;) also, don't you know the Maxwell Titan is faster than the Fury? Additionally, don't you know what price is, and what value is?
You see, the way you value graphics cards is somewhat limited ... and that also goes for all people that use a Titan for gaming.
If you ask Nvidia, having a GPU the market is willing to pay $1200 for is exactly the opposite of pathetic. (How is this possible, have people never heard of how good the Fury X is? How could Nvidia brainwash so many people at once, have they been putting chemicals into the water supply? I wonder how well the Radeon Pro Duo sells ... but at least you don't see people gaming on those :laugh:)
#115
qubit
Overclocked quantum bit
Captain_Tom: www.guru3d.com/articles_pages/deus_ex_mankind_divided_pc_graphics_performance_benchmark_review,9.html

www.techpowerup.com/reviews/ASUS/GTX_1060_STRIX_OC/12.html

Those are the latest well-built new-API games. BF1 will get DX12 and then we will have another good comparison.

I am here saying that in a year the Fury X will match the 1080 in most of the latest games. If I am wrong you can say so :D
Ok, I'm pleasantly surprised. :)

Those Guru3D results clearly show it comfortably beating a GTX 1080, which is what we wanna see. Perhaps it should actually be beating the TITAN X Pascal if we are comparing the top models of both brands? Not sure on this one, but it's still a really good result and the kind of competition that I wanna see. Just imagine, a reasonably priced high-end NVIDIA card that doesn't sport a crippled GPU, lol.

We need all new games to perform like this ideally and keep the two companies head-to-head for the best deals. But then they'll get into a little cartel... No, let's not go there lol.

The TPU graph isn't really valid as the best NVIDIA card there is only a GTX 1070 which is some way behind the 1080.
#116
Captain_Tom
qubit: Ok, I'm pleasantly surprised. :)

Those Guru3D results clearly shows it comfortably beating a GTX 1080 which is what we wanna see. Perhaps it should actually be beating the TITAN X Pascal if we are comparing the top models of both brands? Not sure on this one, but it's still a really good result and the kind of competition that I wanna see. Just imagine, a reasonably priced high end NVIDIA card that doesn't sport a crippled GPU, lol.

We need all new games to perform like this ideally and keep the two companies head-to-head for the best deals. But then they'll get into a little cartel... No, let's not go there lol.

The TPU graph isn't really valid as the best NVIDIA card there is only a GTX 1070 which is some way behind the 1080.
LOL I am so tired of feeling like an AMD fanboy when I just flat out am not. I have owned plenty of Nvidia cards, and some of them I liked a lot.

But the fact is that it is obvious to me that these Paxwell cards will fall off a cliff in performance by spring.


When it comes to actual final performance numbers (once the dust settles), I think the best indicators you can look at are a combination of TFLOPS and bandwidth.

-Fury OC / Fury X will = 1080

-480 will be like 10% behind the 1070

-470 will beat the 1060 by at least 20%

-460 will probably equal the 1050


What you really need to think about is that Vega should easily be 50% faster than the Fury X, and that will likely put it a tad above the Titan XP. Then Nvidia will launch the 1180 with HBM in July 2017 ;). The real question is if Nvidia can get Volta (With true DX12 support) out before 2018. If not....I am not so sure the 1180 will be able to beat Vega.
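For context, here is a rough sketch of the peak-FP32 arithmetic that a "TFLOPS and bandwidth" comparison rests on (two FMA operations per shader per clock). The shader counts and boost clocks are approximate reference specs from public datasheets, not figures taken from this thread.

```python
# Peak FP32 throughput = 2 ops (one fused multiply-add) per shader per clock.
def peak_fp32_tflops(shaders: int, boost_clock_mhz: float) -> float:
    return 2 * shaders * boost_clock_mhz * 1e6 / 1e12

cards = {
    "Fury X (28 nm)":   peak_fp32_tflops(4096, 1050),  # ~8.6 TFLOPS
    "GTX 1080 (16 nm)": peak_fp32_tflops(2560, 1733),  # ~8.9 TFLOPS
    "RX 480":           peak_fp32_tflops(2304, 1266),  # ~5.8 TFLOPS
    "GTX 1060":         peak_fp32_tflops(1280, 1708),  # ~4.4 TFLOPS
}
for name, tflops in cards.items():
    print(f"{name:18s} {tflops:4.1f} TFLOPS")
```

On paper the Fury X and GTX 1080 land within a few percent of each other, which is what the prediction above leans on; whether games actually realize that throughput is exactly what is disputed in the replies below.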
#117
dalekdukesboy
"Fury X = 1080" after/will be etc., so not a totally known quantity, #1; and #2, you said the Fury was essentially on par with the Titan... now you're saying the 1080, and there's a big difference between those two chips, even with the crap stock cooler the Titan comes with that limits it. Anyway, all good, I want AMD competitive, but at the moment, with games and DirectX/Vulkan etc. as they are, that just isn't the case. Will it be? Well, maybe what you said is accurate, but again we don't know for sure exactly how it will shake out; you're guesstimating, however well educated the guess is based on the facts.
#118
efikkan
Captain_Tom: But the fact is that it is obvious to me that these Paxwell cards will fall off a cliff in performance by spring.

When it comes to actual final performance numbers (once the dust settles), I think the best indicators you can look at are a combination of TFLOPS and bandwidth.

-Fury OC / Fury X will = 1080

-480 will be like 10% behind the 1070

-470 will beat the 1060 by at least 20%

-460 will probably equal the 1050
You are talking about peak FLOP/s, which is computational power, not rendering performance.
For an AMD GPU to scale as well as Pascal they need to overcome the following:
1) Saturate the GPU
Computational power is useless unless your scheduler is able to feed it, analyze data dependencies, and avoid stalls. Nvidia is excellent at this, while GCN is not. Nothing in Mantle, Direct3D 12, or Vulkan exposes these features, so no such API will have any impact on this.
2) Efficient rendering avoiding bottlenecks
One of the clearest examples of Nvidia choosing a more efficient path is rasterizing and fragment processing. AMD processes it in screen space, which means the same data has to travel back and forth between GPU memory and L2 cache multiple times while rendering a single frame, so memory bandwidth, cache misses, and data dependencies become an issue. Nvidia, on the other hand, has since Maxwell rasterized and processed fragments in regions/tiles, so the data can mostly be kept in L2 cache until it's done, keeping the GPU at peak performance throughout rasterizing and fragment processing, which after all is most of the load when rendering.

For AMD to achieve its peak computational power during rendering, it would need to overhaul its architecture. Only then can this performance level be reached. It doesn't matter if you have the most theoretical power in the world if you are not able to utilize it.

So the RX 480 will always perform close to the GTX 1060; it will never rise above it.
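To make the screen-space versus tiled distinction above more concrete, here is a toy sketch of tile binning, assuming axis-aligned pixel-space bounding boxes and a 16-pixel tile. The names and structure are illustrative only, not either vendor's actual pipeline.

```python
from collections import defaultdict

TILE = 16  # tile edge length in pixels

def bin_triangles(triangles, width, height):
    """Assign each triangle (given by its pixel-space bounding box) to every
    screen tile it may touch. Returns {(tile_x, tile_y): [triangle, ...]}."""
    bins = defaultdict(list)
    for tri in triangles:
        x0, y0, x1, y1 = tri["bbox"]
        for ty in range(max(0, y0) // TILE, min(height - 1, y1) // TILE + 1):
            for tx in range(max(0, x0) // TILE, min(width - 1, x1) // TILE + 1):
                bins[(tx, ty)].append(tri)
    return bins

def render_tiled(triangles, width, height):
    for tile, tris in bin_triangles(triangles, width, height).items():
        # All fragment work for this 16x16 region is done together, so the
        # tile's color/depth data can stay resident in cache while it is shaded.
        for tri in tris:
            shade_fragments_in_tile(tri, tile)

def shade_fragments_in_tile(tri, tile):
    pass  # placeholder for the coverage test and fragment shading

if __name__ == "__main__":
    demo = [{"bbox": (5, 5, 40, 20)}, {"bbox": (100, 60, 130, 90)}]
    print({t: len(v) for t, v in bin_triangles(demo, 1920, 1080).items()})
```

The point of the binning step is that the working set for one small region is touched once and stays cache-resident, instead of the whole framebuffer being streamed through the cache once per triangle.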
Captain_Tom: What you really need to think about is that Vega should easily be 50% faster than the Fury X, and that will likely put it a tad above the Titan XP. Then Nvidia will launch the 1180 with HBM in July 2017 ;). The real question is if Nvidia can get Volta (With true DX12 support) out before 2018. If not....I am not so sure the 1180 will be able to beat Vega.
Both Maxwell and Pascal have more complete Direct3D 12 support than any other architecture. Stop spinning the lie of a "missing feature" when everybody knows it has been proven that Nvidia supports it.
Captain_Tom: LOL I am so tired of feeling like an AMD fanboy when I just flat out am not. I have owned plenty of Nvidia cards, and some of them I liked a lot.
The problem is that you are clearly misguided and biased when discussing the subject. A person can own something and still be biased against them ;)
#119
$ReaPeR$
dalekdukesboy: as for performance in Vulkan, which is much closer to what AMD hoped for in an API



you can see that the fury x has 2/3 of the performance for half the price.
to the point though, all this conversation is a pointless red vs green fight. we are the consumers, and as such, we should be outraged with the insane prices of both Nvidia and intel. i suggest voting with our wallets and leaving aside any personal feelings of misguided loyalty to whatever corporation.


I removed the graphs don't seem necessary everyone can go back and see the obvious plus you state it, the point for me was simply to state I have no idea how Tom's original statement is even vaguely close to reality. That was really it, it's not pro-green or anti-red it's simply me looking at the facts as I have them and how the cards' perform even in best case scenarios. You proved that with your graph which is arguably the best at this point in time you can do with ATI and worst you can do for team green. Even with that you still get as you said 2/3's the performance...hardly close or even more as Tom's original post said, that is what I was addressing.
Indeed, but my point was mainly a price/perf argument, meaning the Titan is clearly overpriced even when one takes into account the extra perf. Based on perf versus the Fury X, the Titan should cost less than $1000. As for the "Fury X vs Titan X" argument, for me it's a non-starter since both cards are in different price segments and their perf is differentiated by a large margin (as expected), so there's no point in comparing them directly, unless one makes the comparison relative to their architectures and how they perform in different APIs.
dalekdukesboy: Ok what about Reaper who argued both camps suck on pricing and railed against us being on any side...is he cherry picking? Doesn't sound like he would based on his own words and sentiments. He picked figures a bit more favorable than BiggieShady but still as he pointed out AMD still only got to about 2/3's the performance of the titan. If the fury X was so wonderful and AMD was half as confident as you that they could have ANY and I mean ANY of their cards vaguely compete with titan for obviously way less cash I think they'd be touting it to the hills which obviously they aren't.
Indeed they aren't, because yes, it is not faster than the Titan, but that wasn't their goal to begin with. For a card one generation behind, it's doing pretty well IMO. Also, I think it's a bit pointless for a company to brag about how well their older-gen card is ageing.
#120
64K
efikkan: A person can own something and still be biased ;)
Well said.
#121
dalekdukesboy
$ReaPeR$: indeed, but my point was mainly price/perf argument, meaning, the titan is clearly overpriced even when one takes into account the extra perf. based on perf versus the fury x, the titan should cost less than 1000$. as for the argument "fury x vs titan x", for me, its a non-starter since both cards are in different price segments and their perf is differentiated by a large margin (as expected), so no point in comparing them directly. unless one makes the comparison relative to their architectures and how they perform as such in different APIs.

Yes, I know, but that isn't what Tom was saying; he didn't mention prices at all, just started with the idea that a Fury X was as good as or even better than a Titan by suggesting that if we wanted the performance of what the Titan might do in the future (I assume with driver updates etc.) we should actually get a Fury X... so obviously that implies the Fury X is as good as, and even better than, the Titan, so we should buy it. I agree; I won't ever get a Titan, the price is way above what I'd ever pay for a card. Yes, something else I didn't say but should have is that they aren't even in the same class/price, so that is another reason why I thought it was a joke.



$ReaPeR$: indeed they aren't, because yes it is not faster than the titan, but that wasn't their goal to begin with. for a 1 gen behind card its doing pretty well IMO. also, i think its a bit pointless for a company to brag about how well their older gen card is ageing.
Yes, not only is it not faster than a Titan, it can't even come close to tying a Titan. True, it is an older-gen card, but at this point it's all they've got, literally. So yeah, maybe they wouldn't be touting older cards, but I was simply making the point that maybe they would if it showed the card favorably and minimized the Titan's value/relative performance etc. So yeah, a non-starter is the best way to put it, for all the reasons you cited, as well as I and others did. For now we only have Nvidia in high-end new cards, and we have to wait for AMD to get the fork out of its ass and make something vaguely comparable.
#122
Captain_Tom
efikkan: You are talking about peak FLOP/s, which is computational power, not rendering performance.
For an AMD GPU to scale as well as Pascal they need to overcome the following:
1) Saturate the GPU
Computational power is useless unless your scheduler is able to feed it, analyze data dependencies and avoid stalls. Nvidia is excellent at this, while GCN is not. Nothing in either Mantle, Direct3D 12 nor Vulkan exposes these features, so no such API will have any impact on this.
2) Efficient rendering avoiding bottlenecks
One of the most clear examples where Nvidia chose a more efficient path is when it comes to rasterizing and fragment processing. AMD processes it in screen space, which means the same data has to travel back and forth between GPU memory and L2 cache multiple times during one frame rendering, which means memory bandwidth, cache misses and data dependencies becomes an issue. Nvidia on the other hand, has since Maxwell rasterized and processed fragments in regions/tiles, so the data can be mostly kept in L2 cache until it's done, and thereby keeping the GPU at peak performance all throughout rasterizing and fragment processing, which after all is most of the load when rendering.

If AMD were to achieve their peak computational power during rendering, they would need to overhaul their architecture. Only then can this performance level be achieved. It doesn't matter if you have the most theoretical power in the world, if you are not able to utilize it.

So RX 480 will always perform close to GTX 1060, it will never rise above it.


Both Maxwell and Pascal have more complete Direct3D 12 support than any other. Stop spinning the lie of a "missing feature", when everybody knows it has been proven that Nvidia supports it.


The problem is that you are clearly misguided and biased when discussing the subject. A person can own something and still be biased against them ;)
I don't inherently disagree with the points you are making, but I have to say that your counter-argument is deeply flawed.

Everything you just said is based on the idea that what I am saying is theoretical. But it isn't - look at some bloody benchmarks of the latest games. In DX12/Vulkan it seems like AMD is indeed taking full advantage of the computational power of their GPUs. In fact your 1060 vs 480 argument is a perfect example - already the 480 is "rising above the 1060", and in fact at launch they were already trading blows.

Furthermore it seems like you haven't noticed that once games get harder to run they do in fact saturate AMD's hardware. Just look at how the 7970 beat the 680, then the 780, and now the 780 Ti / 970. Also the 290X crushes the 780 Ti now, and the 390X is close to matching the 980 Ti. There is a pattern of AMD cards rising FAR above their initial competition a year after launch, and it isn't because Nvidia is gimping anything.
#123
mcraygsx
Seems to be a good 30% boost over the 1080, but I wish NVIDIA would stick to GDDR5X; the GTX 1080 still has an outstanding 320 GB/s.
#124
efikkan
Captain_Tom: Everything you just said is based in the idea that what I am saying is theoretical. But it isn't - look at some bloody benchmarks of the latest games. In DX12/Vulkan it seems like AMD is indeed taking full advantage of the computational power of their GPU's. In fact your 1060 vs 480 argument is a perfect example - already the 480 is "Rising above the 1060", and in fact at launch they were already trading blows.

Furthermore it seems like you haven't noticed that once games get harder to run they do in fact saturate AMD's hardware. Just look at how the 7970 beat the 680, then the 780, and now the 780 Ti / 970. Also the 290X crushes the 780 Ti now, and the 390X is close to matching the 980 Ti. There is a pattern of AMD cards rising FAR above their initial competition a year after launch, and it isn't because Nvidia is gimping anything.
What you are describing is totally impossible. The new APIs will not and cannot counter the inefficiencies in the GCN architecture, and will not result in a 50% relative gain for AMD vs Nvidia. The architectural inefficiencies in GCN are not software; they are hardware design.

The only path forward is an architectural overhaul. Volta is going to be a bigger architectural change than Pascal, while AMD has stuck with GCN since Nvidia's Kepler days.
#125
the54thvoid
Intoxicated Moderator
Comparing cards using the Deus Ex MD benchmark isn't demonstrative of actual game performance. There are other Deus Ex MD reviews that show the Fury X behind the 1080. Cherry-picking Guru3D, whom AMD fans often slag off for some reason, doesn't illustrate anything.

Also, Doom under Vulkan is an excellent gauge for the future. Made with explicit AMD extensions (because Nv don't have them), it shows about the best-case scenario, IMO, for AMD's future performance. So, given that it's hard to see how much farther GCN can go (and Navi won't have it), and that the Titan XP (an unrealistic card, but it shows Nvidia's fastest) is far ahead even in Vulkan, it's very hard to see Captain Tom's future.

Then there is the elephant in the room which few have had the reasoning to spot. The AMD resurgence is clearly based on the one-time move from DX11 to DX12 and one game using Vulkan (again with explicit AMD extensions). Once we're on this new paradigm, we can expect no similar further performance improvements from AMD over Pascal within these APIs.
The situation with graphics cards will remain as it has with DX11: a game developed with assistance from AMD or Nv will favour that card. Hitman and Deus Ex both favour AMD. Both were developed in the classic Nvidia style of 'let's hamper the competition', just like TWIMTBP games tend to highlight Nv abilities at the expense of AMD.
DX12 etc. will help AMD achieve greater parity, but given that the Titan XP, with fewer shaders than the Fury X, still soundly beats it in everything (faster clocks, but like peeps say, no async or DX12 magic), you have to wonder how bad it might be when Nvidia brings back a little parallel async-compute hardware...

And yes. I can compare Pascal to Fiji because all a die shrink does is (simplistically) reduce power use and increase the ability to throw on more hardware. Nv used the shrink to keep the die reasonably clean but bring up clocks.

Anyway, it'll be fun when Vega arrives because, with a Fury X level of cores on 14 nm, it should be clocked far higher. That alone, with some GCN tweaks, should overtake the 1080. But then Nvidia will react with 'something'. 2017 is worth talking about because Vega will give us some solid numbers to discuss, but this will ring true: if, in 2017, a Titan XP beats Vega in an AMD Vulkan game, AMD are in trouble. If, on the other hand, Vega beats the Titan, AMD will rightly be confident of a rosy future.

Until Vega is out, all of these awful conversations (including mine) are about as insightful as a cat farting. The proof of science is in the testing and we can't test that future till it's here.