# AMD Vega 10, Vega 20, and Vega 11 GPUs Detailed



## btarunr (Sep 20, 2016)

AMD's CTO, speaking at an investor event organized by Deutsche Bank, recently announced that the company's next-generation "Vega" GPUs, its first high-end parts in close to two years, will launch in the first half of 2017. AMD is said to have made significant performance/Watt refinements with "Vega" over its current "Polaris" architecture. VideoCardz posted probable specifications of three parts based on the architecture. 

AMD will begin the "Vega" lineup with "Vega 10," an upper-performance-segment part designed to disrupt NVIDIA's high-end lineup, with a performance positioning somewhere between the GP104 and GP102. The chip is expected to be endowed with 4,096 stream processors and up to 24 TFLOP/s of 16-bit (half-precision) floating-point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s of memory bandwidth. AMD is looking at typical board power (TBP) ratings of around 225W. 
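Those leaked figures hang together, for what it's worth. A back-of-the-envelope sketch (the clock speed is inferred from the leaked numbers, not an announced spec; one FMA per stream processor per clock and 2x packed FP16 are assumptions based on how GCN and GP100 count throughput):

```python
# Sanity-check the leaked Vega 10 throughput figures.
# Assumptions: one FMA (2 FLOPs) per stream processor per clock,
# and packed half-precision doubling FP16 throughput.

stream_processors = 4096
fp16_tflops_claim = 24.0

# Clock implied by the claimed FP16 number (FMA = 2 FLOPs, packed FP16 = 2x)
clock_ghz = fp16_tflops_claim * 1e12 / (stream_processors * 2 * 2) / 1e9
print(f"implied clock: {clock_ghz:.2f} GHz")        # ~1.46 GHz

# Single-precision throughput at that clock
fp32_tflops = stream_processors * 2 * clock_ghz * 1e9 / 1e12
print(f"implied FP32:  {fp32_tflops:.1f} TFLOP/s")  # ~12.0 TFLOP/s
```

A roughly 1.46 GHz clock would be a healthy jump over Fiji's 1.05 GHz, which is about what a 14 nm shrink plus architecture work could plausibly buy.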






Next up is "Vega 20." This is a part we hadn't heard of until today, and it's likely scheduled for much later. "Vega 20" is a die-shrink of "Vega 10" to the 7 nm process being developed by GlobalFoundries. It, too, will feature 4,096 stream processors, but likely at higher clocks, along with up to 32 GB of HBM2 memory running flat-out at 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150W. 

The "Vega 11" part is a mid-range chip designed to replace "Polaris 10" in the product stack, offering slightly higher performance at vastly better performance/Watt. AMD expects to roll out its "Navi" architecture some time in 2019, and so it will hold out for the next two years with "Vega." There's even talk of a dual-GPU "Vega" product featuring a pair of "Vega 10" ASICs.

*View at TechPowerUp Main Site*


----------



## AlienIsGOD (Sep 20, 2016)

already announcing a RX 480/470 replacement? lame  Vega 11 should be faster hopefully


----------



## the54thvoid (Sep 20, 2016)

Did AMD say they're positioning between GP104 and GP102? If so, that's bad news. Given the time frame of 2nd half 2017, not first quarter, it's almost definitely a Summer release for 2017. That gives Nvidia a huge leeway for current pricing and Volta development.
Also, of tremendous importance is that this is an investors conference so they need to say all the absolute best things.
I have a bad feeling about Vega. Even if it's better than GTX1080, AMD are saying it won't beat Titan X?
Sad face.


----------



## FordGT90Concept (Sep 20, 2016)

If that picture is accurate, it's going to have a lot more than 4096 stream processors.  That's how many the 28nm Fury X had and these chips appear to have the same die space (perhaps even more because the HBM chips appear smaller).

Is Global Foundries really that close to 7nm?  How can GloFo be making such rapid improvements when Intel appears to be stuck?  If this is accurate (I'd gander that Vega 20 is smoke and mirrors), GloFo could pass up Intel in process tech and that's quite unfathomable.

AMD not having an answer to Pascal for another three quarters is dire news.  Vega 10 is no doubt beyond Pascal's reach but by the time it launches, it will have to contend with Volta.

If AMD's goal is to Nano (huge chip, low power) their entire product line up, that eats directly into AMD's profit margins.  This news post is making me...


----------



## psyph3r (Sep 20, 2016)

AlienIsGOD said:


> already announcing a RX 480/470 replacement? lame  Vega 11 should be faster hopefully


Not at all. Vega is the next part of the market. 480 is budget low end. Vega is enthusiast high end.


----------



## $ReaPeR$ (Sep 20, 2016)

interesting developments on the AMD side of things. it seems that all available resources are thrown towards the release of zen, this will not bode well for the GPU market.


----------



## hardcore_gamer (Sep 20, 2016)

If this is true, an affordable 4K card will continue to be just a dream for the next couple of years.


----------



## qubit (Sep 20, 2016)

I'd love to see this trading blows with Volta, but I'm not holding my breath.

AMD really has to prove itself to us now.


----------



## hardcore_gamer (Sep 20, 2016)

qubit said:


> I'd love to see this trading blows with Volta, but I'm not holding my breath.
> 
> AMD really has to prove itself to us now.



Looking at the specs, Vega 10 will have similar performance to a GTX 1080.

Nvidia can easily charge $800 for their next mid-range GV104.


----------



## Vayra86 (Sep 20, 2016)

FordGT90Concept said:


> If that picture is accurate, it's going to have a lot more than 4096 stream processors.  That's how many the 28nm Fury X had and these chips appear to have the same die space (perhaps even more because the HBM chips appear smaller).
> 
> Is Global Foundries really that close to 7nm?  How can GloFo be making such rapid improvements when Intel appears to be stuck?  If this is accurate (I'd gander that Vega 20 is smoke and mirrors), GloFo could pass up Intel in process tech and that's quite unfathomable.
> 
> ...



Intel is already expected to lose their process advantage. Intel is taking steps back from being the x86 leader 'no matter what', and it's clear for all to see: they have moved away from tick/tock and now do three steps between shrinks, with Kaby Lake showing that third step to be pretty much no improvement whatsoever. They have already scaled down several business units, sold some acquisitions (McAfee, for example), and are definitely a little bit worried, even though they will never admit it. Look at Intel right now: it has its dominance, but only in parts of the market that are stagnant or out of the consumer's eyesight (server, HPC). They know they can't live on those markets alone and they've been making moves to counter that, but so far their GPUs are still nothing special and way too expensive for what they offer, their IoT stuff is being overwhelmed by ARM offerings, and those smaller systems like the NUC... well, I doubt that'll ever be more than a niche.

I think the market may very well turn around in AMD's favor. AMD is positioned a LOT better at the moment and they are making efforts to get into the picture for new markets, or already are in it, for example: consoles (again, PS4Pro and Scorpio), APIs, and custom SOCs. AMD is also becoming much more 'lean' than Intel and the funny thing is that AMD is actually ahead of Intel with regards to streamlining the company.

However in terms of process nodes, we know that the '7nm' and FD-SOI processes are not a 'real' node shrink in the original sense of the word. Intel still currently has the smallest 'real' node even on 14nm.

About Vega 10: positioning between GP104 and GP102 seems like a very smart move, because by Q2 2017 the GP104 and GP102 will have aged a bit. They will probably push that GPU as the bang/buck high-end offering and do another HD 7950 with it, probably leaning heavily on overclockability to push it towards GP102. With current pricing, the GP104 and GP102 won't be selling like hotcakes anyway, and the only way they will sell in bigger numbers is through price drops. AMD only needs to position their GPU just a little bit better: they will offer the HBM2 product versus the GDDR5 product that no one was really waiting for, at a similar price point and with the added advantage of being 9 months newer than its competitors (this matters; look at all those who sold a 980 Ti to get an almost similar and situationally even weaker 1070).

Look at how Fury X excels on certain games in newer APIs and you can see how HBM2 on an even wider GPU will absolutely be king of the hill, making positioning between 104 and 102 more of a worst case scenario than anything else. In addition, all that matters for AMD is that they push large amounts of products with a little bit of margin, not some Titan XP equivalent for the halo effect that will be out of the picture for 99% of the potential market.

To compete, you don't need the top end product, you just need a product people want to buy.


----------



## Basard (Sep 20, 2016)

$ReaPeR$ said:


> interesting developments on the AMD side of things. it seems that all available resources are thrown towards the release of zen, this will not bode well for the GPU market.



Bodes just fine for a lot of us though.  There are plenty of GPUs to choose from, just not so many CPUs.


----------



## AlienIsGOD (Sep 20, 2016)

btarunr said:


> The "Vega 11" part is a mid-range chip designed to replace "Polaris 10" in the product stack, offering slightly higher performance at vastly better performance/Watt. AMD expects to roll out its "Navi" architecture some time in 2019, and so it will hold out for the next two years with "Vega." There's even talk of a dual-GPU "Vega" product featuring a pair of "Vega 10" ASICs.





psyph3r said:


> Not at all. Vega is the next part of the market. 480 is budget low end. Vega is enthusiast high end.



Polaris 10 is RX 480/470, possibly read the article all the way through


----------



## 64K (Sep 20, 2016)

I was thinking the high end Vega 10 would come in close to as fast as the Pascal Titan X. It will be somewhat disappointing if it doesn't but we won't know until reviews from legitimate sites like this one tell the full story. Looks like a long wait for that though.


----------



## Caring1 (Sep 20, 2016)

What's with reducing the FP first from double to single, now half?
"with up to 24 TFLOP/s 16-bit (half-precision) floating point performance"
Obviously reducing the compute side increases gaming usability, as proven by Nvidia cards doing the same.


----------



## Xzibit (Sep 20, 2016)

the54thvoid said:


> Did AMD say they're positioning between GP104 and GP102? If so, that's bad news. *Given the time frame of 2nd half 2017, not first quarter,* it's almost definitely a Summer release for 2017. That gives Nvidia a huge leeway for current pricing and Volta development.
> Also, of tremendous importance is that this is an investors conference so they need to say all the absolute best things.
> I have a bad feeling about Vega. Even if it's better than GTX1080, AMD are saying it won't beat Titan X?
> Sad face.



The source for the article:

VideoCardz said:

> *Vega 10 will be released in first quarter of 2017,* it has 64 Compute Units and 24TF 16-bit computing power. Vega 10 is based on 14nm GFX9 architecture. It comes with 16GB of HBM2 memory with a bandwidth of 512 GB/s. The TBP is currently expected at around 225W. Meanwhile, *dual Vega 10 will be released in second quarter of 2017* and TBP should be around 300W.


----------



## Kaotik (Sep 20, 2016)

AMD hasn't announced "Polaris 10 replacement" or anything similar, it's all just VideoCardz speculation of how the chips will be. Earlier rumors actually suggest that Vega 11 is bigger than Vega 10, and as we know from what Raja said earlier, the numbering used since Polaris only tells you which chip was designed first, not their performance.



FordGT90Concept said:


> If that picture is accurate, it's going to have a lot more than 4096 stream processors.  That's how many the 28nm Fury X had and these chips appear to have the same die space (perhaps even more because the HBM chips appear smaller).


The chip in the image _is_ Fiji, used in the Fury X. The HBM1 chips on it are actually smaller than the HBM2 chips used by GP100 (and by future Vega).


----------



## dyonoctis (Sep 20, 2016)

Oh. So Vega will actually feature a mid-range GPU with HBM. I wasn't expecting that, and didn't expect a replacement for Polaris 10 so fast. Captain Tom was almost right.


----------



## FordGT90Concept (Sep 20, 2016)

Vayra86 said:


> ...their IoT stuff is being overwhelmed by ARM offerings, and those smaller systems like NUC... well, I doubt that'll ever be more than a niche.


As more and more people move from cable and satellite TV to IPTV/internet streaming, those products are in great demand and that demand is growing.  The thing is, the profit margins on them are so small that Intel would rather use their fabs to build Core i# and Xeon processors they can sell at a hefty markup.  NUCs basically get the leftovers (older fabs) because Intel really doesn't care.  The fact of the matter is that that market will never have huge profit margins; it's always destined to be a volume seller.  This is the same reason why Intel doesn't care about smartphones.



Vayra86 said:


> I think the market may very well turn around in AMD's favor. AMD is positioned a LOT better at the moment and they are making efforts to get into the picture for new markets, or already are in it, for example: consoles (again, PS4Pro and Scorpio), APIs, and custom SOCs. AMD is also becoming much more 'lean' than Intel and the funny thing is that AMD is actually ahead of Intel with regards to streamlining the company.


Consoles (they are custom SOCs, so addressing both here...) are in the same boat as NUCs, tablets, and smartphones: volume products with tight profit margins.  AMD may dominate the console market but NVIDIA dominates the desktop market.  Games are developed on Windows, which overwhelmingly runs on NVIDIA hardware.  Developers may be quite familiar with the ins and outs of GCN because of optimization for consoles but they're optimized on NVIDIA first because that's what they're coding on.

AMD is open-sourcing virtually all of its APIs.  That's great for Linux, but that isn't going to translate to profits for AMD.  Well, it could, because AMD is more appealing now on Linux, but realize we're talking about a minority of a minority of systems here.



Caring1 said:


> What's with reducing the FP first from double to single, now half?
> "with up to 24 TFLOP/s 16-bit (half-precision) floating point performance"
> Obviously reducing the compute side increases gaming usability, as proven by Nvidia cards doing the same.


It's a DirectX 12 thing.  Some things, especially textures, don't require 32 bits of precision.  At 16-bit, the hardware should be able to process two calculations for the price of one.

Developers aren't using 16-bit yet because hardware support is iffy.  Five years from now, using 16 bits to handle textures will likely be commonplace.
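The storage half of that "two for the price of one" trade is easy to show with Python's built-in half-float support; this is only a sketch of the format trade-off, not of how a GPU actually schedules packed math:

```python
import struct

# Two half-precision (FP16) values pack into the same 4 bytes as a single
# single-precision (FP32) value -- the basis of 2:1 packed FP16 math.
one_fp32 = struct.pack('<f', 1.0)        # 4 bytes, one value
two_fp16 = struct.pack('<2e', 1.0, 1.0)  # 4 bytes, two values
print(len(one_fp32), len(two_fp16))      # 4 4

# The catch is precision: FP16 has an 11-bit significand, so integers
# above 2048 stop being exactly representable -- fine for texture and
# color math, bad for anything that accumulates.
roundtrip = struct.unpack('<e', struct.pack('<e', 2049.0))[0]
print(roundtrip)  # 2048.0
```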


----------



## the54thvoid (Sep 20, 2016)

So Vega is 1st quarter (thanks @Xzibit). That's far better. Launch of Zen and Vega should light a fire in the stagnant brush of PC tech.


----------



## medi01 (Sep 20, 2016)

FordGT90Concept said:


> Is Global Foundries really that close to 7nm? How can GloFo be making such rapid improvements when Intel appears to be stuck?



My version:

1) By "skipping 10nm to save time" (lol)
2) "Collaboration with IBM" magic (lol)
_“We are well positioned to deliver a differentiated 7nm FinFET technology by tapping our years of experience manufacturing high-performance chips, the talent and know-how *of our former IBM Microelectronics colleagues* and the world-class R&D pipeline from our *research alliance.* No other foundry can match this legacy of manufacturing high-performance chips.”_
3) Cause VCDZ made it up (welp)
However:
_“The technology is expected to be ready for customer product *design starts in the second half of 2017, with ramp to risk production in early 2018*”._
http://techfrag.com/2016/09/16/7nm-finfet-chip-production/




FordGT90Concept said:


> Developers may be quite familiar with the ins and outs of GCN because of optimization for consoles but they're optimized on NVIDIA first because that's what they're coding on.



Other explanations can exist.
Consoles are rather weaksauce, so it makes sense/you are forced to optimize for them.
As for PC, oh well, dudes can buy another $1.5k card, who fecks.




the54thvoid said:


> So Vega is 1st quarter (thanks @ Xzibit). That's far better. Launch of Zen and Vega should light a fire in the stagnant brush of PC tech.


Amen.

And we need AMD to get back in the game.


----------



## qubit (Sep 20, 2016)

the54thvoid said:


> Did AMD say they're positioning between GP104 and GP102? If so, that's bad news. Given the time frame of 2nd half 2017, not first quarter, it's almost definitely a Summer release for 2017. That gives Nvidia a huge leeway for current pricing and Volta development.
> Also, of tremendous importance is that this is an investors conference so they need to say all the absolute best things.
> I have a bad feeling about Vega. Even if it's better than GTX1080, AMD are saying it won't beat Titan X?
> Sad face.


Looks like they're once again gonna be forced into playing the "value card" against NVIDIA.   At least NVIDIA are still releasing decent cards, even if they're competing with themselves mostly and we have to pay more for less. 

We can only hope that AMD pull a rabbit out of the hat with DX12 performance, as we've seen one or two benchmarks show a significant performance increase. Somehow I suspect that NVIDIA is gonna neutralize that advantage though.


----------



## Captain_Tom (Sep 20, 2016)

Usually we use single precision to calculate gaming compute right?

If so I am reading 512GB/s and 12 TFLOPs.  That would put it at or slightly above the Titan.


----------



## m1dg3t (Sep 20, 2016)

What? No wood screws?


----------



## Captain_Tom (Sep 20, 2016)

qubit said:


> Looks like they're once again gonna be forced into playing the "value card" against NVIDIA.   At least NVIDIA are still releasing decent cards, even if they're competing with themselves mostly and we have to pay more for less.
> 
> We can only hope that AMD pull a rabbit out of the hat with DX12 performance, as we've seen one or two benchmarks show a significant performance increase. Somehow I suspect that NVIDIA is gonna neutralize that advantage though.




The only way Nvidia can neutralize that advantage is if AMD continues to fail to capture marketshare.  Early reports show AMD capturing marketshare with Polaris, but imo they will need at least 40% to stop worrying about Nvidia's numbers advantage.


Otherwise, note that this will have 12 TFLOPS vs. the Titan X's 11, and 512 GB/s - 1 TB/s of bandwidth beats the Titan X's 480 GB/s.   Even if it doesn't soundly beat the Titan X, it will easily trade blows if these specs are true.

P.S.   Where is this 512 GB/s coming from?   HBM2 comes in 720 GB/s and 1 TB/s flavors so I am calling BS on that spec.
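For what it's worth, all the bandwidth figures floating around drop out of the same arithmetic: pins times per-pin data rate, divided by 8 bits per byte. The stack configurations below are guesses chosen to reproduce the leaked numbers, not confirmed specs:

```python
def hbm_bandwidth_gbs(stacks, pin_rate_gbps, bus_bits_per_stack=1024):
    """Peak bandwidth in GB/s: total pins * per-pin rate / 8 bits per byte."""
    return stacks * bus_bits_per_stack * pin_rate_gbps / 8

print(hbm_bandwidth_gbs(2, 2.0))  # 512.0  -- two HBM2 stacks at 2.0 Gbps/pin
print(hbm_bandwidth_gbs(4, 2.0))  # 1024.0 -- four stacks (the GP100 layout)
print(hbm_bandwidth_gbs(4, 1.0))  # 512.0  -- Fury X: four HBM1 stacks at 1 Gbps
```

So 512 GB/s isn't out of line for HBM2 as such; it would just imply a two-stack package rather than four.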


----------



## bug (Sep 20, 2016)

the54thvoid said:


> Did AMD say they're positioning between GP104 and GP102? If so, that's bad news. Given the time frame of 2nd half 2017, not first quarter, it's almost definitely a Summer release for 2017. That gives Nvidia a huge leeway for current pricing and Volta development.
> Also, of tremendous importance is that this is an investors conference so they need to say all the absolute best things.
> I have a bad feeling about Vega. Even if it's better than GTX1080, AMD are saying it won't beat Titan X?
> Sad face.


Even worse, it's a 225W part. GP102 is 250W, but that's without HBM and Vega 10 is supposed to be "between GP104 and GP102". Meaning weaker than GP102.
AMD is skating to where the puck is, not where it will be 

Also, first half of 2017 can easily turn into a July or back-to-school launch. Oh well, I wasn't planning on buying any of these anyway.


----------



## RejZoR (Sep 20, 2016)

I certainly hope Vega will be a success. We need to get the market share of both closer to 50%, for obvious reasons. 2017 will certainly be interesting for both consumers and AMD.

EDIT:
People are dramatizing like crazy again, not realizing even the Fury X runs all current games at 4K over 30fps. I'd say that's pretty damn good considering it's last year's card. Whatever Vega turns out to be, you can be assured it'll run things well even if it's not the absolute king of the hill.


----------



## Captain_Tom (Sep 20, 2016)

bug said:


> Even worse, it's a 225W part. GP102 is 250W, but that's without HBM and Vega 10 is supposed to be "between GP104 and GP102". Meaning weaker than GP102.
> AMD is skating to where the puck is, not where it will be
> 
> Also, first half of 2017 can easily turn into a July or back-to-school launch. Oh well, I wasn't planning on buying any of these anyway.



Can we drop the whole HBM vs GDDR5 thing?


It doesn't matter if they make the card with potatoes or jet packs.   What matters is the final product.  

Furthermore, it kinda sounded like Nvidia was looking at GDDR6 over HBM2.   GDDR6 will offer performance ranging from HBM1 to cheap HBM2, but with 3x the power usage.


----------



## Captain_Tom (Sep 20, 2016)

RejZoR said:


> I certainly hope Vega will be a success. We need to get product share of both closer to 50%. For obvious reasons. 2017 will certainly be interesting, for both, consumers and AMD.



Nvidia is a company that is 2-4x bigger than Radeon (keep in mind most of AMD is devoted to CPU tech).   If Radeon captures even 40% again, like they used to, it would be a big deal imo.  Especially as Intel iGPUs continue to slowly eat away at marketshare.


----------



## ensabrenoir (Sep 20, 2016)

.....so from Hype train:





to silence:





to slowly backpedaling....




then a "Second place Champions" /great value / 
a little  something something for our loyal sheep  release......





...... yep....typical  Amd.....


----------



## bug (Sep 20, 2016)

Captain_Tom said:


> Can we drop the whole HBM vs GDDR5 thing?



No, we cannot. If the Titan X were HBM2-powered, it would be a 225W part with more power than Vega 10.

If we drop "the whole HBM vs GDDR5 thing", all that's left is that AMD will have a part that's as powerful as the GTX 1080, only a year later. Are you happier now?


----------



## Vayra86 (Sep 20, 2016)

Captain_Tom said:


> Can we drop the whole HBM vs GDDR5 thing?
> 
> 
> It doesn't matter if they make the card with potatoes or jet packs.   What matters is the final product.
> ...



GDDR6 doesn't exist. They call it GDDR5X and it's already here.


----------



## Captain_Tom (Sep 20, 2016)

Vayra86 said:


> GDDR6 doesn't exist. They call it GDDR5X and it's already here.



No they already announced GDDR6 buddy.  Google is your friend.


----------



## Captain_Tom (Sep 20, 2016)

bug said:


> No, we cannot. If the Titan X were HBM2-powered, it would be a 225W part with more power than Vega 10.



But it doesn't.  So it doesn't matter what it COULD BE if it isn't.


----------



## Vayra86 (Sep 20, 2016)

Captain_Tom said:


> No they already announced GDDR6 buddy.  Google is your friend.



Actually, all they did was vaguely hint at GDDR6; just a year ago, GDDR5X 'was the new GDDR' and GDDR6 was being touted too. Release for this presumed GDDR6 is somewhere in 2018 - in other words, this is completely up in the air at the moment.

Google is my friend - the only confirmation of sorts is 'we will probably'. Seeing as GP102 launched with good ol' GDDR5, I think we shouldn't get ahead of ourselves right now. This falls into the same category as GlobalFoundries being all happy about how they are going to push 7nm very soon. Meanwhile, we sat at 28nm for over two years longer than expected, and we saw Intel move away from tick/tock.

Perspective


----------



## phanbuey (Sep 20, 2016)

Let's wait for benchies and pricing.

But yeah bummer...


----------



## m1dg3t (Sep 20, 2016)

Vayra86 said:


> Perspective



If it were up to nVidia we'd prolly still be using GDDR3


----------



## LightningJR (Sep 20, 2016)

Captain_Tom said:


> Usually we use single precision to calculate gaming compute right?
> 
> If so I am reading 512GB/s and 12 TFLOPs.  That would put it at or slightly above the Titan.



You can't compare TFLOPS across GPU manufacturers. If you could, the RX 480 with 5.8 TFLOPS would wreck a GTX 980 with only 5 TFLOPS, and it doesn't. The card falls somewhere in between the 980 and a 970 (with 4 TFLOPS).

The TFLOPS are there, but they just don't translate into equivalent performance. 12 TFLOPS looks sexy; it just doesn't translate 1:1 with nVidia, unfortunately.
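The honest way to frame that is performance delivered per rated TFLOP; a tiny sketch (the FPS numbers are placeholders for illustration, not real benchmark results - plug in figures from an actual review):

```python
# Normalize measured FPS by rated FP32 TFLOPS to compare architectures.
# FPS values below are hypothetical placeholders, not benchmark data.
cards = {
    "RX 480":  {"tflops": 5.8, "fps": 60.0},
    "GTX 980": {"tflops": 5.0, "fps": 62.0},
    "GTX 970": {"tflops": 4.0, "fps": 52.0},
}

for name, c in cards.items():
    print(f"{name}: {c['fps'] / c['tflops']:.1f} FPS per TFLOP")
```

If the FPS-per-TFLOP ratios differ, raw TFLOPS can't be compared 1:1 across vendors; if an API change (e.g. Vulkan) closes the gap, the ratios converge.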


----------



## Steevo (Sep 20, 2016)

FordGT90Concept said:


> If that picture is accurate, it's going to have a lot more than 4096 stream processors.  That's how many the 28nm Fury X had and these chips appear to have the same die space (perhaps even more because the HBM chips appear smaller).
> 
> Is Global Foundries really that close to 7nm?  How can GloFo be making such rapid improvements when Intel appears to be stuck?  If this is accurate (I'd gander that Vega 20 is smoke and mirrors), GloFo could pass up Intel in process tech and that's quite unfathomable.
> 
> ...




7nm is looking like a go, as they are already in the risk/validation phase with test patterns. The supposed process is X-ray double-patterned immersion lithography, which requires two passes with polarization to get better fin patterns. http://www.geek.com/chips/x-ray-lithography-continues-to-advance-551433/ It's the thing IBM had been working on for a long time that GloFo got.

https://en.wikipedia.org/wiki/LIGA


----------



## qubit (Sep 20, 2016)

Captain_Tom said:


> The only way Nvidia can neutralize that advantage is if AMD continues to fail to capture marketshare.  Early reports show AMD capturing marketshare with Polaris, but imo they will need at least 40% to stop worrying about Nvidia's numbers advantage.
> 
> 
> Otherwise, note that this will have 12 TFLOPS vs. the Titan X's 11, and 512 GB/s - 1 TB/s of bandwidth beats the Titan X's 480 GB/s.   Even if it doesn't soundly beat the Titan X, it will easily trade blows if these specs are true.
> ...


AMD might capture market share if they're priced well, but NVIDIA want the performance crown at all costs and that's what I'm talking about, since they then set the prices and get the prestige that goes with that performance crown.

How Vega fares against Volta remains to be seen, and until I see reviews of the final product on TPU that show it beating NVIDIA's best (Volta), or at least equalling it, I'm going to remain sceptical. All the usual stuff about power consumption, heat and noise still applies, too. Personally, I'd rather a card was 10% slower, but much quieter. Protecting what little sanity I have left is very important to me!


----------



## HD64G (Sep 20, 2016)

LightningJR said:


> You can't compare TFLOPS across GPU manufacturers. If you could, the RX 480 with 5.8 TFLOPS would wreck a GTX 980 with only 5 TFLOPS, and it doesn't. The card falls somewhere in between the 980 and a 970 (with 4 TFLOPS).
> 
> The TFLOPS are there, but they just don't translate into equivalent performance. 12 TFLOPS looks sexy; it just doesn't translate 1:1 with nVidia, unfortunately.



Wrong. They translate perfectly into true gaming performance in Doom using Vulkan. Divide FPS/TFLOPS there and check again. Software was keeping GCN behind until Vulkan. DX11 and OpenGL's time will be over in 2017 for any new game, imho. That's why nVidia pushed Volta to late 2017, after all.


----------



## Vayra86 (Sep 20, 2016)

HD64G said:


> Wrong. They translate perfectly into true gaming performance in Doom using Vulkan. Divide FPS/TFLOPS there and check again. Software was keeping GCN behind until Vulkan. DX11 and OpenGL's time will be over in 2017 for any new game, imho. That's why nVidia pushed Volta to late 2017, after all.



Those APIs are still too rare in real world applications to say anything substantial about that.


----------



## bug (Sep 20, 2016)

m1dg3t said:


> If it were up to nVidia we'd prolly still be using GDDR3


Yes, because it's Nvidia that sells rebranded products from 3 years ago.
Neither Nvidia nor AMD will innovate unless pushed to. Baseless extrapolations do not help.

@HD64G Are you really inferring the future of GPUs from one title or am I missing something?


----------



## Captain_Tom (Sep 20, 2016)

LightningJR said:


> You can't compare TFLOPS across GPU manufacturers. If you could, the RX 480 with 5.8 TFLOPS would wreck a GTX 980 with only 5 TFLOPS, and it doesn't. The card falls somewhere in between the 980 and a 970 (with 4 TFLOPS).
> 
> The TFLOPS are there, but they just don't translate into equivalent performance. 12 TFLOPS looks sexy; it just doesn't translate 1:1 with nVidia, unfortunately.



It does in Vulkan lol. 

http://i.imgur.com/OITaDBd.jpg

I never claimed that it directly translates, though.   But if you look at the Fury's TFLOPS, this will be a 30-50% increase over that, and Polaris actually offers more perf/TFLOP than Fiji did - so this is quite promising.


----------



## Captain_Tom (Sep 20, 2016)

bug said:


> Yes, because it's Nvidia that sells rebranded products from 3 years ago.
> Neither Nvidia nor AMD will innovate unless pushed to. Baseless extrapolations do not help.
> 
> @HD64G Are you really inferring the future of GPUs from one title or am I missing something?



Hey when a higher binned 7970 can beat a 780 why release something new? 

To be fair I think AMD is done with its near-constant rebranding.  However they wouldn't have rebranded cards 3 times if Nvidia could make an arch that ages better than fruit.


----------



## bug (Sep 20, 2016)

Captain_Tom said:


> Hey when a higher binned 7970 can beat a 780 why release something new?
> 
> To be fair I think AMD is done with its near-constant rebranding.  However they wouldn't have rebranded cards 3 times if Nvidia could make an arch that ages better than fruit.


Right, *when* can it beat the 780? https://www.techpowerup.com/reviews/ASUS/GTX_780_STRIX_6_GB/25.html


----------



## Captain_Tom (Sep 20, 2016)

Vayra86 said:


> Those APIs are still too rare in real world applications to say anything substantial about that.



Really rare?   Mantle was "rare", but this is different.

Tomb Raider, Hitman, DOOM, DEUS EX, and now BF1.  

It's pretty clear that this will be the new standard of 2017.


----------



## Fluffmeister (Sep 20, 2016)

bug said:


> Even worse, it's a 225W part. GP102 is 250W, but that's without HBM and Vega 10 is supposed to be "between GP104 and GP102". Meaning weaker than GP102.
> AMD is skating to where the puck is, not where it will be
> 
> Also, first half of 2017 can easily turn into a July or back-to-school launch. Oh well, I wasn't planning on buying any of these anyway.



Seems they rounded down to get that 150W Polaris too.

Still 5-6+ months away anyway, can't get excited over this yet.


----------



## thesmokingman (Sep 20, 2016)

FordGT90Concept said:


> If that picture is accurate, it's going to have a lot more than 4096 stream processors.  That's how many the 28nm Fury X had and these chips appear to have the same die space (perhaps even more because the HBM chips appear smaller).
> 
> Is Global Foundries really that close to 7nm?  How can GloFo be making such rapid improvements when Intel appears to be stuck?  If this is accurate (I'd gander that Vega 20 is smoke and mirrors), GloFo could pass up Intel in process tech and that's quite unfathomable.
> 
> ...



That pic is from the Fiji release. Big grain of salt...



m1dg3t said:


> What? No wood screws?



lol, did you go to the link? Any pics there?


----------



## cdawall (Sep 20, 2016)

bug said:


> Yes, because it's Nvidia that sells rebranded products from 3 years ago.
> Neither Nvidia nor AMD will innovate unless pushed to. Baseless extrapolations do not help



8800GTS 512, 9800GTX, 9800GTX+, GTS  250. Know what those 4 different graphics cards have in common? They are all the same damn card. Don't forget nvidia has rebranded and carried over cards for generations just as often as AMD.


----------



## the54thvoid (Sep 20, 2016)

Captain_Tom said:


> Hey when a higher binned 7970 can beat a 780 why release something new?
> 
> To be fair I think AMD is done with its near-constant rebranding.  However they wouldn't have rebranded cards 3 times if Nvidia could make an arch that ages better than fruit.



And on the flip side Nvidia haven't had to innovate too much because of a lack of serious competition. Anything AMD released, Nvidia matched or beat within weeks, they're always holding back. Until Vega, Nvidia have the high ground and will (ab)use that position. If Vega doesn't dislodge Titan X as the undisputed champion (especially in Vulkan Doom) all hope is lost.


----------



## thesmokingman (Sep 20, 2016)

the54thvoid said:


> And on the flip side Nvidia haven't had to innovate too much because of a lack of serious competition. Anything AMD released, Nvidia matched or beat within weeks, they're always holding back. Until Vega, Nvidia have the high ground and will (ab)use that position. If Vega doesn't dislodge Titan X as the undisputed champion (especially in Vulkan Doom) all hope is lost.



That's harsh, but it's more or less true. Vega is win or die trying.


----------



## Captain_Tom (Sep 20, 2016)

bug said:


> Right, *when* can it it beat the 780? https://www.techpowerup.com/reviews/ASUS/GTX_780_STRIX_6_GB/25.html



Good ol' Nvidia fanboy posting old benches. Man, are you guys scared of the future.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html

They are roughly equal, and that isn't even including new games like Deus Ex.


----------



## Fx (Sep 20, 2016)

hardcore_gamer said:


> Looking at the specs, Vega 10 will have similar performance of a GTX 1080.
> 
> Nvidia can easily charge  $800 for their next mid range GV 104.



Shiet... I'd NEVER pay $800 for even a high end graphics card, and that isn't because I couldn't afford it but because I refuse to be played that hard.


----------



## qubit (Sep 20, 2016)

cdawall said:


> 8800GTS 512, 9800GTX, 9800GTX+, GTS  250. Know what those 4 different graphics cards have in common? They are all the same damn card. Don't forget nvidia has rebranded and carried over cards for generations just as often as AMD.


I'd love to add all of those cards to my NVIDIA collection and bench them lol. Oh look! They all perform to within 5% of each other!


----------



## Captain_Tom (Sep 20, 2016)

the54thvoid said:


> And on the flip side Nvidia haven't had to innovate too much because of a lack of serious competition. Anything AMD released, Nvidia matched or beat within weeks, they're always holding back. Until Vega, Nvidia have the high ground and will (ab)use that position. If Vega doesn't dislodge Titan X as the undisputed champion (especially in Vulkan Doom) all hope is lost.




I want to be clear that I do think the rebranding is stupid. Just because they can doesn't mean they should. Sure, the 280X humiliated the 760, but they could have had the full Tonga GPU ready instead (380X). That would have nearly matched the 960's efficiency and launched before it with an even more commanding performance lead.


As for "all hope is lost": I think people need to realize that (at least for now) AMD's current strategy seems to be working. Market share is far more important to AMD than a halo product right now. Anyone remember the days when Radeon had 40-52% market share and was profiting like crazy? That was back in the 4000/5000/6000 era, when they weren't trying to win at absolute performance. I loved the 7970/290X, but apparently it didn't make AMD much money... :/


----------



## bug (Sep 20, 2016)

Captain_Tom said:


> Good ol Nvidia fanboy posting old benches.  Man are you guys scared of the future
> 
> https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html
> 
> They are roughly equal,  and that isn't even including the new games like Deus Ex.


How did you get from "higher binned 7970 beats 780" to posting 1070 benchmarks? Maybe your logic is too nimble for me to keep up, help me out here.


----------



## KainXS (Sep 20, 2016)

What AMD really needs is a better driver team; they have good hardware but don't utilize it properly. All of their recent high-end offerings have been beefier than Nvidia's, and they simply can't take the crown, and that's why they're pushing Vulkan. With Vulkan you can get closer to that theoretical performance mark, but even then you're still relying on the developer to program correctly for that title, since they are in control, and there are only what, one or two games out that can use Vulkan? It sounds promising, but that's about all it is. Maybe in the future there will be real support behind it, but right now I don't see it, as everyone is flocking to DX12. I'm no fortune teller and neither are you guys.

On a different note, Nvidia did rebrand years ago because their cards were outselling AMD's (and they still do, though it's more of a refresh now), but AMD has been rebranding more than Nvidia has recently while doing worse in graphics sales.


----------



## Captain_Tom (Sep 20, 2016)

qubit said:


> I'd love to add all of those cards to my NVIDIA collection and bench them lol. Oh look! They all perform to within 5% of each other!



I think everyone here can agree that we all hate rebrands.   However they aren't going anywhere for either company.


----------



## Captain_Tom (Sep 20, 2016)

bug said:


> How did you get from "higher binned 7970 beats 780" to posting 1070 benchmarks? Maybe your logic is too nimble for me to keep up, help me out here.



Erhhh can you not read a chart?

The 780 and 280X are on it buddy...


----------



## thesmokingman (Sep 20, 2016)

Captain_Tom said:


> I loved the 7970/290X, but apparently it didn't make AMD much money... :/



I don't know about that; in context, it kept selling when they were stretched thin with no new product stack for years until Fiji. It's a testament to the design of Tahiti that it was pushed well beyond typical product cycles.


----------



## Captain_Tom (Sep 20, 2016)

thesmokingman said:


> I don't know about that, in context it kept selling when they were stretched thin with no new product stack for years till fiji. It's a testament to the design of tahiti for it to be pushed well beyond typical product cycles.



I mean, it is a fact that sales went down after the 290X (although the 290 series itself sold really well). The problem is AMD was stretched thin (they are a smaller company), so their mid-range, while objectively better than Nvidia's, was still full of rebrands. People only like buying new things, even if they are worse, lol.


----------



## bug (Sep 20, 2016)

Captain_Tom said:


> Erhhh can you not read a chart?
> 
> The 780 and 280X are on it buddy...


They are, but they're absent from the individual game benchmarks, so I don't know where the overall performance value comes from or whether any of the tested titles were actually playable.


----------



## Patriot (Sep 20, 2016)

Caring1 said:


> What's with reducing the FP first from double to single, now half?
> "with up to 24 TFLOP/s 16-bit (half-precision) floating point performance"
> Obviously reducing the compute side increases gaming usability, as proven by Nvidia cards doing the same.



Because the P100 lists it that way.
Deep learning/neural networks can use half precision... 
Gaming doesn't have a 1:1 correlation with TFLOPS anyway, so they just go with the bigger number.
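For context on that half-precision figure: on GPUs with packed FP16 math (the P100 being the reference point here), each FP32 lane can process two half-precision values per cycle, so the FP16 peak is simply double the FP32 peak. A rough sketch in Python — the 4,096-shader count is from the article, while the ~1.465 GHz clock is a back-calculated assumption, not a leaked spec:

```python
# Peak FLOP/s = shaders * 2 ops (one FMA counts as two) * clock.
# Packed FP16 math doubles the peak on hardware that supports it.
def peak_tflops(shaders, clock_ghz, ops_per_fma=2):
    return shaders * ops_per_fma * clock_ghz / 1000.0

fp32 = peak_tflops(4096, 1.465)  # clock ASSUMED to hit the rumored FP32 peak
fp16 = 2 * fp32                  # two half-precision ops per FP32 lane
print(round(fp32, 1), round(fp16, 1))  # 12.0 24.0 -> matches the "24 TFLOP/s FP16" rumor
```

So the quoted 24 TFLOP/s FP16 implies roughly 12 TFLOP/s FP32, which is how the bigger number gets onto the slide.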


----------



## 64K (Sep 20, 2016)

Captain_Tom said:


> I mean it is a fact that sales went down after the 290X (Although the 290 series itself sold really well).  Problem is AMD was stretched thin (they are a smaller company) and so their mid range well objectively better than Nvidia's, was still full of rebrand.  People only like buying new things even if they are worse lol



The R9 290 was "best bang for the buck" there for a little while. I've never seen a breakdown of AMD's profit by category, but given the number of people I see on the Steam Hardware Survey running entry-level and mid-range GPUs, I suspect AMD makes more profit in those categories.


----------



## Recon-UK (Sep 20, 2016)

FordGT90Concept said:


> As more and more people move from cable and satellite TV to IPTV/internet streaming, those products are in great demand and that demand is growing.  The thing is, the profit margins on them are so small, Intel would rather use their fabs to build Core I# and Xeon processors that they can sell for a hefty mark up.  NUCs basically get the left overs (older fabs) because Intel really doesn't care.  The fact of the matter is that market will never have huge profit margins--it's always destined to be a volume seller.  This is the same reason why Intel doesn't care about smartphones.
> 
> 
> Consoles (they are custom SOCs so addressing both here...) are in the same boat as NUCs, tablets, and smartphones: volume products with tight profit margins.  *AMD may dominate the console market but NVIDIA dominates the desktop market.  Games are developed on Windows which overwhelmingly run on NVIDIA hardware.*  Developers may be quite familiar with the ins and outs of GCN because of optimization for consoles but they're optimized on NVIDIA first because that's what they're coding on.
> ...




Regarding the part I bolded: any competition is a great thing. I don't exactly like seeing Nvidia or AMD plastered all over my games either, but if that's how it's going to be, then I want solid competition from AMD too. I'm a warm-blooded soul, so red runs through my veins whilst I cut my green grass in the garden; without the blood I can't live, and a person who isn't alive can't cut that grass.


----------



## thesmokingman (Sep 20, 2016)

Captain_Tom said:


> I think everyone here can agree that we all hate rebrands.   However they aren't going anywhere for either company.



I remember when the 280X dropped. People were gushing over it; some thought they were just plain cooler because they bought that and looked down on 7970s, roflmao. In reality the 7970s were much better: clocked higher, no stupid boost, and not neutered by AIB partners. So much for the new bins, lol. But once the masses make up their minds, there's no discouraging it.


----------



## Recon-UK (Sep 20, 2016)

thesmokingman said:


> I remember when the 280x dropped. PPL were gushing over it, some thought they were just plain cooler because they bought that and looked down on 7970s roflmao. In reality 7970s were much better, clocked higher, no stupid boost, and did not get neutered by AIB partners. So much for the new bins lol. But the once the masses make up their minds, there's no discouraging it.



The 280X was a fine card; it was a middle-of-the-road option to fill the gap between the 270 and the 290X.


----------



## m1dg3t (Sep 20, 2016)

bug said:


> Yes, because it's Nvidia that sells rebranded products from 3 years ago.
> Neither Nvidia nor AMD will innovate unless pushed to. Baseless extrapolations do not help.



Both nVidia & AMD are currently selling re-branded older products. From what I can recall, though, ATi/AMD have consistently been first to adopt/implement, and even directly assist in advancing, GFX memory.

I see a bunch of others share my sentiments, good thing I held off on hitting 'post reply' LoLoLoLoL


----------



## Recon-UK (Sep 20, 2016)

From evidence and experience, AMD are the only company to adopt new memory tech first and push for it.

Nvidia always threw a larger bus at inferior memory.

HD 4870 vs GTX 280, for example: GDDR5 on a 256-bit bus vs GDDR3 on a 512-bit bus.

And the ATi 3000 series with GDDR4.


----------



## KainXS (Sep 20, 2016)

Recon-UK said:


> From evidence and experience AMD are the only company to start using new memory tech first and will push for it.
> 
> Nvidia always threw a larger bus at inferior memory.
> 
> ...



True, but GDDR4 is a bad example; that was a complete flop due to its crazy latency. They did push GDDR5, though, when Nvidia was not planning to use it. Nvidia really tries not to take risks. That does slow down progression in graphics a little, but with the way AMD was performing until Polaris, it didn't really matter.


----------



## Chaitanya (Sep 20, 2016)

KainXS said:


> What AMD really needs is a better driver team, they have good hardware but don't utilize it properly. All of their recent high end offering have been more beefier than Nvidia's and they simply can't take the crown and thats why they're pushing vulcan. With vulcan you can get closer to that theoretical performance mark but even then your still relying on the developer to program correctly for that title as they are in control and theres only what 1 or 2 games out that can use vulcan. It sounds promising but thats about all its promising and maybe in the future there will be real support behind it but right now I don't see it as everyone is flocking to DX12. I'm no fortune teller and neither are you guys.
> 
> On a different note Nvidia did rebrand years ago because they were selling cards better than amd and still does but refresh now but amd has been doing that more than nvidia has recently and are doing worst in graphics sales.


Yep, that's exactly where AMD is lagging, and why they are betting on low-level APIs.


----------



## Recon-UK (Sep 20, 2016)

KainXS said:


> True but GDDR4 is a bad example, that was a complete flop due to its crazy latency. Nvidia tries to not take risks really. That does slow down progression in graphics a little though but with the way AMD was performing recently until polaris in graphics it didn't really matter


I never spoke of the performance of their GPUs, just that AMD would push for better tech rather than undercutting new tech and only focusing on who benches higher.

AMD showed us that with the Bulldozer arch... crap CPU, but the idea was cool.


----------



## ZoneDymo (Sep 20, 2016)

Let's hope they make good on these claims.

An RX 480 with consistent R9 390X levels of performance at lower power usage sounds like music to my ears.
And anything that brings down prices in the higher segment is great.


----------



## sith'ari (Sep 20, 2016)

Vayra86 said:


> ................................
> Look at how Fury X excels on certain games in newer APIs and you can see how HBM2 on an even wider GPU will absolutely be king of the hill,.....................................




Here we go all over again, the same story as the period prior to the Fury X's release!!
I still remember all the glorious comments from AMD about the use of HBM memory, and after months and months of hype and brainwashing, this supreme card struggled to compete with a reference 980 Ti (*and stayed far behind the aftermarket Tis).
I said it back then and I'll say it again: HBM technology was already known to these companies years ago. There was no chance a colossus like NVIDIA hadn't done its own research on HBM, so its decision not to use it (*back then, at least) made me suspect there were disadvantages to HBM.
Indeed, HBM was limited to 4GB, which led to the downfall of the Fury X's run at the top.
(Just like last time, there is no way NVIDIA hasn't done its own research on HBM2.)


----------



## the54thvoid (Sep 20, 2016)

Captain_Tom said:


> I want to be clear that I do think the rebranding is stupid.  Just because they can doesn't mean they should.   Sure the 280X humiliated the 760, but they could have had the full tonga GPU ready instead (380X).  That would have nearly matched the 960's efficiency and launched before it with an even more commanding performance lead.
> 
> 
> As for "all hope is lost".  I think people need to realize that (at least for now) it seems like AMD'S current strategy is working.  Marketshare is far more important to AMD than a halo product right now.   Also anyone remember the days when RADEON had 40-52% marketshare and was profiting like crazy?  Well that was back in the 4000/5000/6000 Era when they werent trying to win a absolute performance.   I loved the 7970/290X, but apparently it didn't make AMD much money... :/



I'm surprised the 7970 lost them money as it launched at a surprisingly high price point (I know - I bought two).  But I think that also was a bit of their problem - they went from the better value option to being the same price as the contemporary Nvidia offering, the GTX680. From there on it allowed Nvidia to 'justifiably' hike prices if their next card beat AMD's.

I do believe though that from a shareholder point of view, Vega HAS to perform well.  I look forward to seeing it in action.


----------



## geon2k2 (Sep 20, 2016)

What AMD needs to do is implement tile-based rendering at once, and only after that is done think about ridiculous buses and memory architectures.
Maxwell and Pascal have this, and that's why they are so efficient: they use much narrower buses and yet perform on par with or better than AMD parts with twice the bus width.

Tile-based rendering was introduced by PowerVR back in 1996.
Read more:
http://www.anandtech.com/show/735/3

https://en.wikipedia.org/wiki/Tiled_rendering


----------



## thesmokingman (Sep 20, 2016)

sith'ari said:


> Here goes all over again the same story just like the period prior FuryX's release!!
> I still remember all the glorious comments from AMD about the use of HBM memory, and after months & months of hype and brainwash, this supreme card struggle to compete with a reference 980Ti (*and stayed far behind the aftermarket Ti s).
> I've said it back then and i'll say it again. HBM technology was already known to the companies years ago. There was no chance a colossus company like NVidia not having done their own research with HBM. So in order not to use them (*back then at least), made me suspect that there were disadvantages at HBM's usage.
> Indeed, the HBM memory was only limited to 4GB size, which led to the downfall of FuryX's effort for the top.
> ( Just like last time, there is no way that again NVidia not having made their own research for the HBM2.)



What is your point? Did you state anything in particular? Nvidia didn't make HBM; they put their efforts into HMC with Micron, and they lost out when HBM was adopted as the standard. HBM was limited to 4GB in its first generation, and that ultimately limited Fiji. You want to knock the Fury for 4GB, but that's a limitation of bleeding-edge tech. Now that it's on gen 2, both Nvidia and AMD are racing to capitalize on HBM. Fury not scaling higher, or not competing with the 980 Ti in certain areas, isn't indicative of the true potential of HBM. That said, in some instances the Fury X runs toe to toe with 1080s today in DX12, omg?


----------



## Jism (Sep 20, 2016)

Looks on par with the release of Microsoft's new console.

Edit: Don't assume HBM2 is a different beast from HBM in general. The interface is still the same 1024 bits per stack (though per-pin speeds roughly double); the big difference is the taller stacking of chips, which takes capacity up to 16GB of HBM2 memory, perhaps even more.


----------



## sith'ari (Sep 20, 2016)

thesmokingman said:


> What is your point? ......................



My point is that, just like I didn't believe AMD's ultra-hype for HBM in the past, I'm also not going to believe anything about HBM2 until I see reviews first. It's AMD's standard policy to create great hype with mediocre results.
(P.S. You might want to check the Fury X's & RX 480's pathetic performance in VR benchmarks: http://www.hardocp.com/reviews/vr/
So much again for Raja's hype about "premium VR performance"!!)


----------



## RejZoR (Sep 20, 2016)

It's not hype. HBM actually works; just look at the Fury X. In most cases it goes up against 8GB graphics cards with ease, because it has basically twice the bandwidth of any graphics card on the market, or at least a third more. Being stuck with 4GB was simply a technological limit, and one that isn't causing much trouble. Now they have the full potential of 8-16GB, which is plenty even for professional usage. I mean, I have a GTX 980, which used to be the king, but I kind of regret not taking a Fury X or a vanilla Fury. Dunno why, they just have some sort of charm with HBM because it's so exotic.


----------



## dyonoctis (Sep 20, 2016)

Guru3D's version of the same news isn't really the same: they don't talk about Vega 11 being a replacement for Polaris, but rather a much bigger chip with something like 6,144 stream processors.
http://www.guru3d.com/news-story/amd-vega-10-vega-20-and-vega-11-gpus-mentioned-by-cto.html

They said the article is still being rewritten, but they seem to be the only ones making that claim.


----------



## Recon-UK (Sep 20, 2016)

I understand fetishes and all that but having it off over a GPU brand is really something else...


----------



## thesmokingman (Sep 20, 2016)

sith'ari said:


> My point is that just like in the past i didn't believe AMD's ultra-hype for HBM, i'm also not going to believe anything about HBM2 untill i see reviews first. It's AMD's standard policy to create great hype with mediocre results.
> ( P.S. You might want to check FuryX's & RX480's pathetic performance at VR benchmarks. http://www.hardocp.com/reviews/vr/
> So much again for Raja's hype about "premium VR performance" !!)



It's clear you bleed green, but one doesn't have anything to do with the other.


----------



## sith'ari (Sep 20, 2016)

thesmokingman said:


> It's clear you bleed green but one doesn't have everything to do with the other.



Yeah, I also have to mention that I wrote the text about this pathetic VR performance of AMD's cards in HardOCP's VR benchmarks with my green blood!!
(You probably didn't even bother to check the link I posted, because your thoughts are pure red!)


----------



## thesmokingman (Sep 20, 2016)

sith'ari said:


> Yeah, i also have to mention that i wrote the text about this pathetic VR performance of AMD's cards at Hardocp's VR benchmarks with my green blood!!
> (Probably you didn't even bother to check the link i put because your thoughts are pure red! )



Now you continue to troll? I don't need to see your link because we all know already. You seem to think you are spouting news?


----------



## sith'ari (Sep 20, 2016)

Yeah, you know it, but you just "neglected" to mention it then. You only mentioned what suited you best.
Next time I'll ask for your approval before I post a link. You seem to be an objective person after all.


----------



## thesmokingman (Sep 20, 2016)

sith'ari said:


> yeah you know it but you just "neglected" to mention it then. You only mentioned what suited you best.
> Next time i'll ask for your approval before i post a link. You seem to be an objective person after all.



What drugs are you on? HBM, good or bad, doesn't have a lot to do with the Fiji chip; it was limited because it was first-gen tech. It's now on the second gen, and both NV and AMD will be pushing it. Wtf does that have to do with your link, troll?


----------



## sith'ari (Sep 20, 2016)

I was talking about AMD's ultra-hype... troll!! And I posted a link about AMD's pathetic results in VR benchmarks, contrary to Raja's and AMD's claims about "premium VR performance"!! Is that clear to you now?


----------



## looncraz (Sep 20, 2016)

Captain_Tom said:


> P.S.   Where is this 512 GB/s coming from?   HBM2 comes in 720 GB/s and 1 TB/s flavors so I am calling BS on that spec.



A 2048-bit bus and two HBM2 stacks. I've been suggesting this for at least a month now; it's the most prudent option for AMD to take. They simply don't need more bandwidth... or capacity... than you can get from two HBM2 stacks in the consumer market.

Smaller interposer, reduced cost, and many knock-on benefits follow from this.

Some models MAY have four stacks (16GB, IIRC, would actually require that), but I don't think we need more than 8GB for consumer cards for the next couple of years - even at the high end.  We have yet to see 8GB be stressed... it's sometimes hard enough just to max out 4GB.
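The 512 GB/s figure in the article is consistent with two stacks under the JEDEC HBM2 numbers (1024-bit interface per stack, up to 2 Gb/s per pin); a quick sanity check:

```python
# HBM bandwidth = stacks * bus width per stack (bits) * per-pin rate (Gb/s) / 8 bits-per-byte.
def hbm_bandwidth_gbs(stacks, pin_rate_gbps, bus_bits_per_stack=1024):
    return stacks * bus_bits_per_stack * pin_rate_gbps / 8

print(hbm_bandwidth_gbs(2, 2.0))  # two full-speed HBM2 stacks -> 512.0 GB/s (Vega 10 rumor)
print(hbm_bandwidth_gbs(4, 2.0))  # four stacks -> 1024.0 GB/s (the "1 TB/s" Vega 20 figure)
```

The same formula also reproduces Fiji's 512 GB/s: four first-gen HBM stacks at 1 Gb/s per pin.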


----------



## looncraz (Sep 20, 2016)

bug said:


> No, we cannot. If Titan X was HBM2 powered it would be a 225W part with more power than Vega 10.



Don't be so certain that Vega 10 will be that weak.

It is updated from GCN4 (which is already 15% faster than the GCN in Fury X), has all of the updated geometry engines, schedulers, etc... of Polaris - all updated yet another generation.  It is using the second generation HBM memory, with double the bandwidth of RX 480 (but only 77% more shaders to feed).

So, at WORST, Vega 10 is Fury X + 15% + 15% + 15% = ~50% faster than Fury X.

That is to say:

~15% higher IPC
~15% boost from clock speed
~15% boost from bandwidth

And that's just assuming a scaled-up Polaris GPU... which Vega 10 is not.

Still, maybe only add 10% for architectural improvements after Polaris - Vega only has another six months of development on it compared to Polaris... but that's 67% faster than Fury X...
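For anyone checking the arithmetic: the "+15% three times" estimate compounds multiplicatively, and the percentages themselves are the poster's assumptions rather than measured figures:

```python
# Compound a list of fractional gains multiplicatively (not by simple addition).
from functools import reduce

def compound(gains):
    return reduce(lambda acc, g: acc * (1 + g), gains, 1.0)

base = compound([0.15, 0.15, 0.15])  # assumed IPC, clock, and bandwidth gains
print(round((base - 1) * 100))       # -> 52, i.e. the "~50% faster than Fury X" floor

with_arch = compound([0.15, 0.15, 0.15, 0.10])  # plus an assumed +10% architectural gain
print(round((with_arch - 1) * 100))  # -> 67, the "67% faster" upper estimate
```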


----------



## NDown (Sep 20, 2016)

They really need something like what I'd call the "290X/290 release moment": Titan performance for ~$450-600 less.

Everything after that has been pretty underwhelming for me.

Regardless of what the naysayers said about the noise and heat, it was the best GPU release I've ever witnessed.

Can't speak much about the probability of it happening with the Vega release, but one can hope at least :^(

I don't want to see x70 cards in the $500-700 range next time Nvidia releases something new again.


----------



## qubit (Sep 20, 2016)

Recon-UK said:


> I understand fetishes and all that but having it off over a GPU brand is really something else...


Oh no, what has been imagined cannot be unimagined...


----------



## m1dg3t (Sep 20, 2016)

qubit said:


> Oh no, what has been imagined cannot be unimagined...



What you think DVI stand for? D1nk33V4#g33n41n73rf4c3


----------



## yotano211 (Sep 20, 2016)

Recon-UK said:


> *I understand fetishes* and all that but having it off over a GPU brand is really something else...


Especially the feet kind.


----------



## BiggieShady (Sep 20, 2016)

LightningJR said:


> You can't compare TFLOPS across GPU manufacturers if you could then the RX480 with 5.8TFLOPS would wreck a GTX 980 with only 5TFLOPS and it doesn't. The card falls somewhere in between the 980 and a 970 (with 4TFLOPS)
> 
> The TFLOPS are there but it just doesn't translate in to equivalent performance so 12TFLOPS looks sexy it just don't translate 1:1 with nVidia unfortunately.



That's because those values represent peak theoretical compute performance. Across all the different shader/compute code in the wild running on both GPU architectures, Nvidia on average operates closer to its peak maximum (better GPU cache hierarchy).
There are cases where code can actually pull an effective 5.8 TFLOPS out of the RX 480, but that happens rarely even for GPGPU compute devs, let alone gamers.



geon2k2 said:


> What AMD needs to do is implement at once tile based rendering, and only after this is done think about ridiculous buses and memory architectures.
> Maxwell and Pascal have this, and that's why they are so efficient and have such lower buses and yet perform on par or better with double bus on AMD side.
> 
> Tile based rendering was introduced by PowerVR back in 1996.
> ...



Bingo. We had a little discussion about it here on TPU: https://www.techpowerup.com/forums/...secretly-use-tile-based-rasterization.224773/
Their tile-based rendering isn't completely "tile based"; only the rasterization part is. They keep the tile size small enough that it fits completely into first-level cache, all for a magical efficiency increase.
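As a toy illustration of the binning idea behind tiled rasterization (not how Maxwell actually implements it; the 32-pixel tile size is an arbitrary assumption): triangles are bucketed into fixed-size screen tiles, and each tile's worth of work then fits in cache:

```python
# Toy sketch of tile binning: primitives are assigned to fixed-size screen
# tiles (chosen small enough to fit in on-chip cache), then each tile is
# rasterized independently with good locality.
TILE = 32  # tile edge in pixels (assumed, for illustration only)

def tiles_touched(bbox, tile=TILE):
    """Yield (tx, ty) tile coordinates overlapped by a screen-space bounding box."""
    (x0, y0), (x1, y1) = bbox
    for ty in range(y0 // tile, y1 // tile + 1):
        for tx in range(x0 // tile, x1 // tile + 1):
            yield tx, ty

# Bin two triangles (represented just by their bounding boxes) into tiles.
bins = {}
for tri_id, bbox in enumerate([((5, 5), (40, 20)), ((60, 60), (63, 90))]):
    for key in tiles_touched(bbox):
        bins.setdefault(key, []).append(tri_id)

print(bins)  # {(0, 0): [0], (1, 0): [0], (1, 1): [1], (1, 2): [1]}
```

Rasterizing one bin at a time is what keeps the working set inside the cache instead of thrashing the external memory bus.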


----------



## renz496 (Sep 20, 2016)

bug said:


> *Even worse, it's a 225W part. GP102 is 250W,* but that's without HBM and Vega 10 is supposed to be "between GP104 and GP102". Meaning weaker than GP102.
> AMD is skating to where the puck is, not where it will be
> 
> Also, first half of 2017 can easily turn into a July or back-to-school launch. Oh well, I wasn't planning on buying any of these anyway.



If you look at the rumor at VCZ, this info comes from AMD's server roadmap, hence the specs mention the FirePro variant and not the regular consumer (Radeon) variant. I still remember when there was a leak about a FirePro (Tonga-based) card rated at 150W; some people on another forum touted that the Maxwell killer had finally arrived. But in the end the 285's power rating was way more than 150W.


----------



## renz496 (Sep 20, 2016)

Captain_Tom said:


> Really rare?   Mantle was "rare", but this is different.
> 
> Tomb Raider, Hitman, DOOM, DEUS EX, and now BF1.
> 
> It's pretty clear that this will be the new standard of 2017.



Yeah, a new standard for "broken" games?


----------



## Vayra86 (Sep 20, 2016)

sith'ari said:


> My point is that just like in the past i didn't believe AMD's ultra-hype for HBM, i'm also not going to believe anything about HBM2 untill i see reviews first. It's AMD's standard policy to create great hype with mediocre results.
> ( P.S. You might want to check FuryX's & RX480's pathetic performance at VR benchmarks. http://www.hardocp.com/reviews/vr/
> So much again for Raja's hype about "premium VR performance" !!)



You don't really seem to get it.

VR performance? Hand in the air if you really care about VR performance right now. Worried that tech demo run will not be smooth?? VR is a niche, that is all. Let's move on.

Fury X was showing lackluster performance and meh overclockability when it was just released.

TODAY the Fury X is showing its true power, and there is actually a lot more still in the can (the stock-cooled Fury X throttles). It is still competing against top-end GPUs like the 980 Ti (comfortably beating it at 4K, equal or better in almost any DX12/Vulkan game, 2-3% behind in DX11 at lower res), the 1070, and in edge cases even the 1080. So let's turn this around: AMD's ONLY HBM offering from last year's (!) lineup is now competing with Nvidia's shiniest new arch, while the entire rest of AMD's lineup, including the RX 480, is getting swamped by competitors with fewer shaders.

And there you are, saying HBM is junk and not of use for GPUs. Mkay 

Also, a general note and to unburden our kind mods... STOP DOUBLE POSTING AND USE AN EDIT BUTTON


----------



## Recon-UK (Sep 20, 2016)

I will be the first to admit just how great GCN is; it's not up there with Nvidia's newest, but it's gained so much performance over time.


----------



## BiggieShady (Sep 20, 2016)

Recon-UK said:


> I will be the first admit just how great GCN is, it's not up there with Nvidia's newest but it's gained so much performance over time.


True, and when it comes to async compute it's vice versa... that's where Nvidia has some catching up to do.


----------



## bug (Sep 20, 2016)

looncraz said:


> Don't be so certain that Vega 10 will be that weak.
> 
> It is updated from GCN4 (which is already 15% faster than the GCN in Fury X), has all of the updated geometry engines, schedulers, etc... of Polaris - all updated yet another generation.  It is using the second generation HBM memory, with double the bandwidth of RX 480 (but only 77% more shaders to feed).
> 
> ...


This very leak (or whatever it is) says Vega 10 is between GP104 and GP102, placing a hard cap on the upper end.


----------



## looncraz (Sep 20, 2016)

bug said:


> This very leak (or whatever it is) says Vega 10 is between GP104 and GP102, placing a hard limit on the upper limit.



Not exactly. 12TFLOPS single precision doesn't tell you everything you need to know about the GPU's performance.

For example, it doesn't take into account, at all, the impact of memory bandwidth.  It doesn't take into account things such as primitive discard, improved scheduling, and so on...  It's just an accumulation of the processing power in the shaders.

12 TFLOPS tells us only two things that are likely: GCN4 SPs are still in use, and ~1.266GHz is the target clock speed:

+15% - GCN SPs vs Fiji SPs
+20% - Higher clock speed than Fury X (1.26GHz vs 1.05GHz)

That takes Fury X's 8.6T to right about 12T.  A few minor improvements (and some rounding in the rumor) will cover the minor gap.

Given how constrained performance on Polaris 10 is by memory bandwidth, we should expect an additional gain in FPS from that - 15% is fair game (you can nearly get that just by overclocking RX 480's memory... gains are effectively linear). Vega includes the new primitive discard and other benefits seen in Polaris 10 as well, so this will push effective performance compared to Fury X even more.

If you add 60% more performance to Fury X, where do you land?  Bingo.  Still, from 40% to 60% over Fury X is a pretty broad range - and only the top of this will catch up to the top Pascal GPU.
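As a quick sanity check, the arithmetic in the post above can be worked through directly. The scaling factors here (per-clock gain, clock gain, bandwidth gain) are the poster's assumptions, not confirmed specs:

```python
# Back-of-envelope check of the Vega 10 estimate above.
# The +15% per-clock and +15% bandwidth factors are assumptions from
# the post, not confirmed specs.

fury_x_tflops = 2 * 4096 * 1.05 / 1000  # 2 FLOPs/clock x 4096 SPs x 1.05 GHz ~= 8.6

ipc_gain = 1.15            # assumed GCN4-vs-Fiji per-clock improvement
clock_gain = 1.266 / 1.05  # ~1.27 GHz vs 1.05 GHz, roughly +21%
bandwidth_gain = 1.15      # assumed FPS gain from extra memory bandwidth

vega_tflops = fury_x_tflops * ipc_gain * clock_gain
total_fps_gain = ipc_gain * clock_gain * bandwidth_gain

print(round(vega_tflops, 1))     # ~11.9, close to the rumored 12 TFLOPS
print(round(total_fps_gain, 2))  # ~1.59, i.e. roughly +60% over Fury X
```

That lands right at the "60% more performance" figure the post arrives at.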


----------



## HD64G (Sep 20, 2016)

bug said:


> Yes, because it's Nvidia that sells rebranded products from 3 years ago.
> Neither Nvidia nor AMD will innovate unless pushed to. Baseless extrapolations do not help.
> 
> @HD64G Are you really inferring the future of GPUs from one title or am I missing something?


Just pointing out the mistake most people make by judging HW instead of SW when comparing archs. Only by making full use of an arch can you compare its efficiency. And Vulkan is the best-case scenario for AMD GPUs to get them fully utilised. DX12 games built from the ground up might achieve the same. DX11 and OpenGL already do the same for Nvidia's.


----------



## Captain_Tom (Sep 20, 2016)

AlienIsGOD said:


> already announcing a RX 480/470 replacement? lame  Vega 11 should be faster hopefully



Polaris is just a cheap stop-gap.  I believe next year AMD plans to have its line-up nearly top-to-bottom Vega HBM cards.

Hence the 475 and 485 will be 4GB HBM cards, and the RX Furies will be 8GB HBM cards.  The 465 may even be a 2GB HBM card.


----------



## ensabrenoir (Sep 20, 2016)

......got it!!! No need for further bickering guys by using this simple equation:






I have determined that Nvidia will maintain the performance crown.....at a hefty price, and AMD will design something ridiculously advanced that won't be fully utilized for years to come, but at a great price.
/thread.


----------



## $ReaPeR$ (Sep 20, 2016)

Basard said:


> Bodes just fine for a lot of us though.  There are plenty of GPUs to choose from, just not so many CPUs.


How come? It's another 6 months till Vega comes out, and until then AMD has only 3 cards on the market, none of them in the high-end upper section of the market.


----------



## Captain_Tom (Sep 20, 2016)

$ReaPeR$ said:


> how come? its another 6 months till vega comes out and until then AMD has only 3 cards in the market and none of them is in the high-end upper section of the market.



Fury is $300 and trades blows with the 1070.  


I definitely wish Vega was here as an alternative to the Titan, but I would rather AMD wait and release a monster card with HBM2 than rush out a half-baked GDDR5X card in the Ultra-High end.


----------



## $ReaPeR$ (Sep 20, 2016)

Captain_Tom said:


> Fury is $300 and trades blows with the 1070.
> 
> 
> I definitely wish Vega was here as an alternative to the Titan, but I would rather AMD wait and release a monster card with HBM2 than rush out a half-baked GDDR5X card in the Ultra-High end.


I'm not arguing with that, but you know how the market generally works, recent=new.


----------



## the54thvoid (Sep 20, 2016)

Vayra86 said:


> Fury X was showing lackluster performance and meh overclockability when it was just released.
> 
> TODAY Fury X is showing its true power and there is actually a lot more still in the can (stock cooled Fury X throttles) - it is still competing against top end GPUs like 980ti (comfortably beating it at 4K, and equal or better in almost any DX12/Vulkan game, 2-3% lower in DX11 lower res)



Stock cooled Fury X is water cooled....

Also, when Fury X was released *it was meant to be awesome* but it was a bit of a disappointment.  When you say it matches a 980 Ti and beats it (game environment dependent, massive 20% OC of the 980 Ti notwithstanding), it actually should match it - they were price-point stablemates.  They were last year's top gen.  A Fury X was meant to take on Nvidia.  Now it is, a year later.  The 1070 replaces the 980 Ti and the 1080 takes the 2nd top crown in the vast majority of titles.  The 1080 even beats Fury X in AotS, Hitman, ROTR and Warhammer (all DX12).   Remember, generationally the 1080 replaces the 980 (which Fury X pisses on).

No arguing Fury X is a great card but it's hard to listen to people saying how poorly NVidia cards age when my 980ti is still trouncing framerates.  I'm on DX11 of course so I don't get to see the fuss but so far, DX12 hasn't delivered better graphics.

So, to recap, when you say



> it is still competing against top end GPUs like 980ti



It came out after it and was 'meant' to beat it back then.  It's only now performing as it was meant to back in 2015 (and mainly in specific, not all, DX12 and one Vulkan title) but it's still weaker than Nvidia's 2nd top card (1080).

And to clarify - I want Vega to beat the Titan X.


----------



## Basard (Sep 20, 2016)

ensabrenoir said:


> .....so from Hype train:
> View attachment 79031
> 
> to silence:
> ...



I'll have one AMD please!


----------



## Captain_Tom (Sep 20, 2016)

$ReaPeR$ said:


> I'm not arguing with that, but you know how the market generally works, recent=new.




Haha I know buddy.   Imo the new cards are just crazy overpriced and not at all future-proofed (At least Nvidia's).

The 480 is fantastic, but just not quite strong enough for me.   Thus I ended up buying a Fury Nitro and overclocking it to 1135/518.   I wish I had more, but for $300 I can't complain.


----------



## dyonoctis (Sep 20, 2016)

Captain_Tom said:


> Polaris is just a cheap stop-gap.  I believe next year plans to have their line-up nearly top-bottom Vega HBM cards.
> 
> Hence the 475 and 485 will be 4GB HBM cards, and the RX Furies will be 8GB HBM cards.  The 465 may even be a 2GB HBM card.



More like 8GB. It would be a shame to launch a Polaris replacement with (we can only assume) twice the bandwidth but only 4GB.

But that also means another thing: people are going to wait to get their mid-range HBM GPU, meaning Polaris GPUs will still have a lot of stock when the refresh launches. I have a bad feeling about the pricing of the refresh.


----------



## Fluffmeister (Sep 20, 2016)

This stinks of 980 Ti vs Fury X all over again: sure, the Fury might come good eventually, but everyone else has already moved on.

I'm already looking forward to Volta.


----------



## Captain_Tom (Sep 20, 2016)

the54thvoid said:


> Stock cooled Fury X is water cooled....
> 
> Also, When Fury X was released* it was meant to be awesome *but it was a bit of a disappointment.  When you say it matches a 980ti and beats it (game environment dependent, massive 20% OC of 980ti not withstanding) it actually should match it - they were price point stable mates.  They were last years top gen.  A Fury X was meant to take on Nvidia.  Now it is, a year later.  The 1070 replaces the 980 ti and the 1080 takes the 2nd top crown in the vast majority of titles.  1080 even beats Fury X in AotS, Hitman, ROTR and Warhammer (all DX12).   Remember, generationally the 1080 replaces the 980 (which Fury X pisses on).
> 
> ...





I have to ask what games it is "trouncing" in:

https://www.techpowerup.com/forums/...a-11-gpus-detailed.226012/page-5#post-3526293


On average the Fury X is beating it in every resolution but 1080p (And even then it loses by 4%).   Furthermore the best indicators of this fall and next year's game performance are DOOM and Deus EX where the Fury X does in fact TROUNCE the 980 Ti by 20%.


I will agree it would have been nice if the Fury X performed this well last year, but at the end of the day who really cares?!   Both the 980 Ti and the Fury X destroyed all games last year, and it never mattered the 980 Ti was like 5% stronger.   Yes it overclocked WAY better at launch, but then again the Fury series overclocked perfectly fine once voltages were unlocked (Although of course Maxwell overclocks better on average still).


----------



## Captain_Tom (Sep 20, 2016)

dyonoctis said:


> More like 8Gb, It would be a shame to lauch a polaris replacement with (we can only assume ) twice the bandwidth, with only 4Gb.
> 
> But that's also mean another thing: people are going to wait to get their mid-rang hbm gpu, meaning that polaris gpu will still get a lot of stock when the refresh launch. I have a bad feeling about the pricing of the refresh.



I mean HBM2 would also make it perform like 25% better while reducing power draw by at least 50W.   Totally worth it for a 1440p card.


----------



## $ReaPeR$ (Sep 20, 2016)

Captain_Tom said:


> Haha I know buddy.   Imo the new cards are just crazy overpriced and not at all future-proofed (At least Nvidia's).
> 
> The 480 is fantastic, but just not quite strong enough for me.   Thus I ended up buying a Fury Nitro and overclocking it to 1135/518.   I wish I had more, but for $300 I can't complain.


Well, future-proofing is a long conversation... but to the point, the Fury is an EOL product and as such isn't really relevant to the market today - not for the vast majority of it, at least.


----------



## danbert2000 (Sep 20, 2016)

Captain_Tom said:


> Fury is $300 and trades blows with the 1070.




https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_Z/26.html

Yeah, I see the Fury somewhere on that list. Somewhere very far away from the 1070 stock, let alone OC. I guess you'll retort that all the recent AMD Gaming Evolved (totally not GameWorks guyz) games have been doing pretty well on the Fury series; too bad that comes with crazy frame spikes on AMD cards. Curious that only AMD-sponsored games have had such differing performance, when AotS, TW: Warhammer, Tomb Raider, etc. have been doing just fine on Nvidia.

https://techreport.com/review/30639...x-12-performance-in-deus-ex-mankind-divided/3

Who knows, maybe vendor-neutral DirectX 12 and Vulkan games will continue to help the Fury reach up near 1070 performance. All I know is that the vast majority of games being played today perform 20% better on the 1070 and don't suck up 300 watts while doing it.


----------



## Captain_Tom (Sep 20, 2016)

$ReaPeR$ said:


> well, future proofing is a long conversation.. but, to the point, the fury is an eol product and as such is not really relevant to the market today, not for the vast majority of it at least,



Is the 1070 EOL?  The Fury Nitro matches it, and the Fury X trades blows with the 1080 in the latest games.   Say whatever you want - an EOL AMD product matches Nvidia's new high-end cards.


All future proofing means for me is that the product will continue to provide the same experience, or even a better one, for 2-3 years.  People who bought an i7 5 years ago continue to get 60-100 FPS in the latest games while the old i3s need to be replaced (SB i3s can't really do 60 FPS anymore).    Of course nothing is completely future proof, but I expect my products to maintain their relative value for 3 years, and frankly they should never lose performance to their launch competition.


----------



## TheGuruStud (Sep 21, 2016)

Captain_Tom said:


> Can we drop the whole HBM vs GDDR5 thing?
> 
> 
> It doesn't matter if they make the card with potatos or Jet packs.   What matters is the final product.
> ...



Didn't samsung say it won't be ready until 2018 (I would guess mid year)? I guess that could work. Nvidia just won't put out any new cards except 1080ti and maybe a 1060/1070 refresh. There will be no reason for Volta if AMD doesn't deliver something good.


----------



## Captain_Tom (Sep 21, 2016)

danbert2000 said:


> https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_Z/26.html
> 
> Yeah, I see the Fury somewhere on that list. Somewhere very far away from the 1070 stock, let alone OC. I guess you'll retort that all the recent AMD Gaming Evolved (totally not GamWorks guyz) games have been doing pretty well on the Fury series, too bad that comes with crazy frame spikes on AMD cards. Curious that only AMD sponsored games have had such differing performance, when AotS, TW: Warhammer, Tomb Raider, etc have been doing just fine on Nvidia.
> 
> ...



So you can include a list of benchmarks that includes GameWorks games, but I can't list Gaming Evolved games?  Ok, yeah, that makes sense, buddy.   I also love that you act like AMD-sponsored games are the same as GameWorks even though it has been proven time and again that they don't actively hamper Nvidia's performance.  This is before we even get into the fact that AMD's Evolved packages are open-sourced, while almost all Nvidia-sponsored games include a black box of code that nukes AMD's performance and makes the game unplayable FOR EVERYONE at launch.   Maybe you liked the launch of Batman.



We can talk about AoTS and Warhammer all you want lol:

http://www.guru3d.com/articles-page...-graphics-performance-benchmark-review,6.html

The way it's meant to be played: At a lower framerate!


----------



## Captain_Tom (Sep 21, 2016)

TheGuruStud said:


> Didn't samsung say it won't be ready until 2018 (I would guess mid year)? I guess that could work. Nvidia just won't put out any new cards except 1080ti and maybe a 1060/1070 refresh. There will be no reason for Volta if AMD doesn't deliver something good.



I can't honestly remember off the top of my head.   I thought it was 2017.  Provide a link if you please


----------



## TheGuruStud (Sep 21, 2016)

Captain_Tom said:


> I can't honestly remember off the top of my head.   I thought it was 2017.  Provide a link if you please



https://www.techpowerup.com/225255/samsung-bets-on-gddr6-for-2018-rollout


----------



## danbert2000 (Sep 21, 2016)

Captain_Tom said:


> The way it's meant to be played: At a lower framerate!



...except the benchmark you shared proves that the 1070 does better than the Fury, and that's before OC. So what are you trying to prove again?

EDIT: Actually, Techpowerup themselves just did that bench again and the stock 1070 beats the Fury X at all resolutions. Looks like your argument is crumbling away. Yikes, check out those No Man's Sky benches too.

https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_Z/23.html


----------



## ensabrenoir (Sep 21, 2016)

.....I, like many, find it fun to poke at AMD extremists from time to time.... But the delusion is palpable in here.  Kinda scary....every video card has at least one game it excels at.....but its true measure is determined over a variety of titles.


----------



## Audiophizile (Sep 21, 2016)

thesmokingman said:


> What is your point? Did you state anything in particular? Nvidia didn't make HBM, they put their efforts in HMC and Micron. They lost out with HMC. HBM was adopted as the standard. HBM was limited to 4GB in its first generation, thus ultimately limiting the Fiji. You want to down the Fury for 4GB, but that's a limitation of the bleeding edge tech. Now that its on Gen 2, both Nvidia and AMD are now racing to capitalize on HBM. Fury not scaling higher or not competing with 980ti in certain areas isn't indicative of the true potential of HBM. That said, in some instances the Fury X runs toe to toe with 1080s today in dx12, omg?



Do you mean comes close to 1080 levels in vulkan/1 game?


----------



## phanbuey (Sep 21, 2016)

danbert2000 said:


> ...except the benchmark you shared proves that the 1070 does better than the Fury, and that's before OC. So what are you trying to prove again?
> 
> EDIT: Actually, Techpowerup themselves just did that bench again and the stock 1070 beats the Fury X at all resolutions. Looks like your argument is crumbling away. Yikes, check out those No Man's Sky benches too.
> 
> https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_Z/23.html



Eh don't even try...  There will always be some excuse/reason, and then point to the 3 games out of 20 where AMD comes close and be like 'SEE! The same!'

They will also tell you that their 4GB Fury X is more future proof than an 8GB 1070 "because shaders/Vulkan/HBM".

In this case, though, I really do hope AMD isn't too far below that 1080... I have the 1080 and it's 100% artificially crippled...  Nvidia is already holding back and gouging.



Audiophizile said:


> Do you mean comes close to 1080 levels in vulkan/1 game?



10-15% slower but yes - that game.


----------



## Audiophizile (Sep 21, 2016)

While it is proven AMD makes up a very small amount of ground in DX12 (non-Vulkan) and does really well with the one Vulkan title, it seems silly to buy a card that is STILL SLOWER in almost every (or every) game. I am trying to hold off for Vega (originally wanted to rebuild before the BF1 release) but my hopes are understandably not crazy high. If replacing a GPU every 2 or so years, right now Nvidia is a better bet, as AMD has really only proven it can win in Vulkan. How many titles that you actually play will be Vulkan before your next GPU upgrade? Probably WAY fewer than non-Vulkan titles. I want Vega to be good. I want competition. I would prefer not to wait a full year from the 1080 release to get competition. If Vega doesn't outright beat a 1070 at $300-350 or beat a 1080 at $450-525, I will pretty much have no hope left in AMD. Volta will release in less time than between Pascal and Vega, so AMD will need to work miracles on pricing to be a valid option.


----------



## Divide Overflow (Sep 21, 2016)

Somewhere between all the hate and the hype exists the real product, for which I will patiently wait for W1zzard to review.


----------



## Captain_Tom (Sep 21, 2016)

danbert2000 said:


> ...except the benchmark you shared proves that the 1070 does better than the Fury, and that's before OC.



Proves what?  The 980 Ti is below even the $300 Fury, and the 1070 ties the Fury X!!!   


And overclock?!   ROFL!   My Fury is at 1135 MHz - I get 10% more performance.  Pascal overclocks usually yield a 7% boost - WOW!


----------



## Audiophizile (Sep 21, 2016)

Captain_Tom said:


> Proves what?  The 980 Ti is below even the $300 Fury, and the 1070 ties the Fury X!!!
> 
> 
> And overclock?!   ROFL!   My Fury is at 1135 MHz - I get 10% more performance.  Pascal overclocks usually yield a 7% boost - WOW!



The 1070 beats a Fury X in most games stock vs stock. As it should, being a gen later and a step below. That's how generations work usually... next-gen step down = last-gen step up +/- a few %, and much cheaper than the last-gen step up's launch price. But let's talk about current gen vs current gen high end: 1070, 1080 and Titan X. That ends this discussion.

Edit: just noticed a big mistake. The 1070 is actually 2 steps below a Fury X, as the Fury X was meant to compete with the Ti model. So things are looking pretty good there.


----------



## looncraz (Sep 21, 2016)

Captain_Tom said:


> And overclock?!   ROFL!   My Fury is at 1135 MHz - I get 10% more performance.  Pascal overclocks usually yield a 7% boost - WOW!



Yeah, there's usually not much point in overclocking video cards.

The only real exception is with Polaris... GPU overclocks aren't worth much alone, but memory overclocks combined with higher GPU clocks can be quite good.  Still, even with that, you're only talking about 15% more performance over reference... and I consider a 30% improvement to be the smallest worthwhile improvement to consider spending money on... unless you're just at the cusp of stable frame rates or just barely outside of your monitor's FreeSync range, when even 5~10% more performance can make the difference between stutters and silky-smooth experiences.

For me, though, I usually underclock my video cards.  My RX 480 is currently running 640MHz on the core and 1GHz on the RAM.  I can play most simple games and do all of the normal tasks that I want - while consuming only ~35W under load (above idle usage of ~20W).

I then have three other performance presets (using Afterburner).

900/1500 - Somewhat more demanding games (Crysis [Warhead] mostly)
1000/2000 - Demanding games (BF4, Hitman [Absolution], Civ V)
1288/2150 - Seriously Demanding Games (BF1 the only one so far)

Mind you, I have to keep frame rates at or below 146Hz and above 30Hz for FreeSync, so I tune things appropriately. All of my profiles have significant under-volting applied and have had at least a few hours of games played with them.



Audiophizile said:


> The 1070 is actually 2 steps below a fury x as a fury x was meant to compete with the ti model. So things are looking pretty good there.



This is expected, Pascal and Polaris both represent a double-generation jump (well, Polaris anyway... Pascal is basically just a shrunken Maxwell with clock speed improvements enough for two generations of performance increase).

The only effort this generation that should really be applauded, IMHO, is AMD's.  GCN4 is ~15% faster per clock, more efficient, brings quite a bit of meaningful new tech, and still clocks 15%+ higher... the fact that it was only used on a mid-range GPU is irrelevant from a technology perspective.  Pascal is just the same old stuff with a few tweaks and clock speed improvements... boring stuff, really, albeit the feat is still impressive... it's basically just teams looking for critical paths and optimizing them away.


----------



## Audiophizile (Sep 21, 2016)

looncraz said:


> This is expected, Pascal and Polaris both represent a double-generation jump (well, Polaris anyway... Pascal is basically just a shrunken Maxwell with clock speed improvements enough for two generations of performance increase).
> 
> The only effort this generation that should really be applauded, IMHO, is AMD's.  GCN4 is ~15% faster per clock, more efficient, brings quite a bit of meaningful new tech, and still clocks 15%+ higher... the fact that it was only used on a mid-range GPU is irrelevant from a technology perspective.  Pascal is just the same old stuff with a few tweaks and clock speed improvements... boring stuff, really, albeit the feat is still impressive... it's basically just teams looking for critical paths and optimizing them away.



While it is a good step for AMD, and possibly better from a technology standpoint, the efficiency is still not as good as Nvidia's. A 480 uses as much or more power than a 1070 while being substantially slower. That part is not so good.


----------



## Captain_Tom (Sep 21, 2016)

Audiophizile said:


> The 1070 beats a furyx most games stock v stock. As it should being a gen later and a step below. That's how generations work usually... Next gen step down = last gen step up +/- a few % and much cheaper than the last gen step up launch price. But let's talk about current gen v current gen high end. 1070, 1080 and titan x. That ends this discussion.
> 
> Edit: just noticed a big mistake. The 1070 is actually 2 steps below a fury x as a fury x was meant to compete with the ti model. So things are looking pretty good there.



Edit: Thank you 



looncraz said:


> This is expected, Pascal and Polaris both represent a double-generation jump (well, Polaris anyway... Pascal is basically just a shrunken Maxwell with clock speed improvements enough for two generations of performance increase).
> 
> The only effort this generation that should really be applauded, IMHO, is AMD's.  GCN4 is ~15% faster per clock, more efficient, brings quite a bit of meaningful new tech, and still clocks 15%+ higher... the fact that it was only used on a mid-range GPU is irrelevant from a technology perspective.  Pascal is just the same old stuff with a few tweaks and clock speed improvements... boring stuff, really, albeit the feat is still impressive... it's basically just teams looking for critical paths and optimizing them away.



It's very clear that AMD is just buying dirt-cheap silicon to spit out chips so they can capture marketshare.   If they were using the more expensive TSMC 16nm, or if GloFo 14nm was mature... it would be a bloodbath, and Nvidia knows it.


----------



## looncraz (Sep 21, 2016)

Audiophizile said:


> While it is a good step for AMD and possibly better from a technology standpoint the efficiency is still not as good as nvidias. A 480 uses as much or more power than a 1070 while being substantially slower. That part is not so good.



If AMD were to strip their GPU they could be "efficient," too!

Jokes aside, AMD does need to make some more serious efforts towards improving architectural efficiency.  It mostly comes down to under-utilization, IMHO.  If AMD had the same utilization rates as nVidia, they would be pretty close to just as efficient... instead AMD crams 35%+ more computing power to break even.

Per TFLOP, AMD isn't less efficient... but per FPS, they certainly are.


----------



## Audiophizile (Sep 21, 2016)

looncraz said:


> If AMD were to strip their GPU they could be "efficient," too!
> 
> Jokes aside, AMD does need to make some more serious efforts towards improving architectural efficiency.  It mostly comes down to under-utilization, IMHO.  If AMD had the same utilization rates as nVidia, they would be pretty close to just as efficient... instead AMD crams 35%+ more computing power to break even.
> 
> Per TFLOP, AMD isn't less efficient... but per FPS, they certainly are.



The 480 has the same TFLOPS as the 1070? I guess with Vulkan we are starting to see more utilization with AMD, which does start to level things out, or even tip them in AMD's favor at their price points, but we're a long way off Vulkan becoming the standard API. Again, though, I think AMD's pricing for Vega is going to have to be extremely low, because Volta is most likely going to be a pretty large step over Pascal. Vega is coming out over half a year after Pascal, with Volta most likely following even more closely than that... AMD needs to beat the 1070 and/or 1080 performance AND cost $50-75 less per card compared to the 1070/1080, or Volta will annihilate it.


----------



## Captain_Tom (Sep 21, 2016)

Audiophizile said:


> The 480 has the same TFLOP as the 1070?



No, the 1070 has 20% more TFLOPS.  However they have the same 256 GB/s of bandwidth, and as such it should even out to the 1070 being 10% stronger if the 480 is fully utilized.  However, the wild card here is async compute.  I wonder if that could allow AMD to gain even more than Nvidia per TFLOP (but I would highly doubt it for now).


Consider though that by the time Vega launches almost all AAA games will be DX12/Vulkan, and Vega will have at least 12 TFLOPS while the Titan has 11.   Vega will also have more bandwidth too...


----------



## thesmokingman (Sep 21, 2016)

Captain_Tom said:


> Consider though that by the time Vega launches almost all AAA games will be DX12/Vulkan, and Vega will have at least 12 TFLOP's while the Titan has 11.   Vega will also have more bandwidth too...



That's a bit, no... way too freaking optimistic, man. I want pure DX12 games as much as the next guy who's low-level-API biased, but devs are not going to jump on that gravy train - because it's not a gravy train when it cuts out many of their potential customers. I do think it will happen... eventually, years from now.


----------



## Captain_Tom (Sep 21, 2016)

thesmokingman said:


> That's a bit, no... way freaking optimistic man. I want pure DX12 games as much as the next guy whose low level API biased, but devs are not going to jump on that gravy train because its not a gravy train since it cuts out many of their potential customers. I do think it will happen... eventually, years from now.



I mean, what games won't be?   Dishonored 2 and a freaking Skyrim re-release.  Those games don't need it lol.


----------



## looncraz (Sep 21, 2016)

Audiophizile said:


> The 480 has the same TFLOP as the 1070? I guess with vulkan we are starting to see more utilization with AMD which does start to level or even tip in amds favor at their price points but we're a long way off vulkan becoming the standard api. Again though, I think amds pricing for Vega is going to have to be extremely low because volta is most likely going to be a pretty large step over pascal. Vega coming out over a half year after pascal with Volta following most likely more closely than that... AMD needs to beat the 1070 and/or 1080 performance AND cost $50-75 less per card compared to the 1070/1080 or Volta will annihilate it.



RX 480 has 5.8 TFLOPS @ 1.266GHz - GTX 1070 has 6.5 TFLOPS @ 1.683GHz.  GTX 1060 has 4.4 TFLOPS @ 1.709GHz... and is about the same performance as an RX 480 if both are pegged at the stated clocks (the GTX 1060 usually runs at >1.8GHz stock, the RX 480 at ~1.2GHz stock, giving the GTX 1060 about a 10% lead in performance).
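For reference, those peak-FLOPS figures all come from the standard two-FLOPs-per-shader-per-clock formula (the shader counts below are the published specs for each card):

```python
# Peak single-precision throughput = 2 FLOPs per shader per clock.
def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

print(round(tflops(2304, 1.266), 1))  # RX 480   -> 5.8
print(round(tflops(1920, 1.683), 1))  # GTX 1070 -> 6.5
print(round(tflops(1280, 1.709), 1))  # GTX 1060 -> 4.4
```

Note this is peak shader throughput only; as the rest of the post argues, actual FPS depends heavily on how well those shaders are kept fed.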

Vulkan/DX 12 are allowing AMD GPUs to fill in the gaps left by their scheduler windows... gaps which really shouldn't exist as much as they do in the first place.  AMD needs a new ABI, scheduler, and driver in order to get rid of more of those gaps using DX11... but they could, conceivably, then see a ~25% boost in performance... without providing more hardware processing power...

Vega will need to come out with a low price, no doubt.  This is one of the major reason I think AMD will use a 2048-bit HBM2 bus... two stacks are cheaper than four.. and even if the costs are not so much better, you still have a smaller die/interposer or both to help.

Volta will trash Vega 10, no doubt.  The only way this wouldn't happen is if Vega is using the long-rumored new ABI... then we could see Vega 10 being as much as twice as fast as the Fury X.  I have serious doubts about this, though... the same team that created Polaris created Vega just six months later, so it's most certainly a direct descendant of GCN 4... and there's some indication in my own circle that AMD has abandoned the new ABI altogether, as the software is catching up and they anticipate DX11 performance becoming irrelevant - as well as their utilization issues... all with no work on their part.


----------



## Audiophizile (Sep 21, 2016)

thesmokingman said:


> That's a bit, no... way freaking optimistic man. I want pure DX12 games as much as the next guy whose low level API biased, but devs are not going to jump on that gravy train because its not a gravy train since it cuts out many of their potential customers. I do think it will happen... eventually, years from now.



Yeah. Not even BF1 is pure DX12 - it's DX11 with "some DX12 features". And DX12 alone doesn't greatly favor AMD. Vulkan becoming a standard is, sadly, not anytime soon. The fact that Volta will follow Vega more closely than Vega followed Pascal, and that Volta will have async, is AMD's biggest problem. Their timing is very poor.


----------






## looncraz (Sep 21, 2016)

Audiophizile said:


> Yeah. Not even bf1 is pure dx12, dx11 with "some dx12 features". And dx12 alone doesn't greatly favor AMD. Vulkan being a standard is not anytime soon sadly. The fact Volta will follow Vega closer than Vega followed pascal and Volta will have a sync is AMDs biggest problem. Their timing is very poor.



nVidia just doesn't have as much to gain from DX12/Vulkan as AMD.  nVidia spent a great deal of resources optimizing their drivers for DX11 and OpenGL - so their performance is already quite great in those APIs relative to their hardware capabilities.  Async compute is good, but its greatest benefit will be seen in architectures with large empty time slices in its execution - AMD's hardware has large periods of time when it's just waiting for work... nVidia's does not. I don't see Volta changing that dynamic (though there are always ways to improve asynchronous workloads).

Timing, however, is a major issue for AMD.  Unless Vega 10 is much better than anticipated, Volta will wipe the floor with Vega... and AMD's own timeline doesn't have Navi until 2018... giving nVidia the top tiers for nearly two years.


----------



## Audiophizile (Sep 21, 2016)

looncraz said:


> nVidia just doesn't have as much to gain from DX12/Vulkan as AMD.  nVidia spent a great deal of resources optimizing their drivers for DX11 and OpenGL - so their performance is already quite great in those APIs relative to their hardware capabilities.  Async compute is good, but its greatest benefit will be seen in architectures with large empty time slices in its execution - AMD's hardware has large periods of time when it's just waiting for work... nVidia's does not. I don't see Volta changing that dynamic (though there are always ways to improve asynchronous workloads).
> 
> Timing, however, is a major issue for AMD.  Unless Vega 10 is much better than anticipated, Volta will wipe the floor with Vega... and AMD's own timeline doesn't have Navi until 2018... giving nVidia the top tiers for nearly two years.



Agreed. I have hope, but even if it doesn't work out badly for them financially, their timing is hurting us (consumers) no matter which way you look at it. Unless, as you stated, Vega is much better than anticipated. That's rarely ever the case with any company, however.


----------



## HTC (Sep 21, 2016)

looncraz said:


> nVidia just doesn't have as much to gain from DX12/Vulkan as AMD.  nVidia spent a great deal of resources optimizing their drivers for DX11 and OpenGL - so their performance is already quite great in those APIs relative to their hardware capabilities.  Async compute is good, but its greatest benefit will be seen in architectures with large empty time slices in its execution - AMD's hardware has large periods of time when it's just waiting for work... nVidia's does not. I don't see Volta changing that dynamic (though there are always ways to improve asynchronous workloads).
> 
> Timing, however, is a major issue for AMD.  *Unless Vega 10 is much better than anticipated, Volta will wipe the floor with Vega...* and AMD's own timeline doesn't have Navi until 2018... giving nVidia the top tiers for nearly two years.



Not necessarily: remember why nVidia dropped a lot of its compute power in the first place? Volta will have async, but it will come at a cost: speed AND efficiency.

I've got serious doubts nVidia will manage to keep its speed advantage over AMD (their cards run at much higher clocks than AMD's) and, if so, that also means efficiency will suffer: time will tell. If they do manage to keep the speed WHILE incorporating async, then I agree with you.


----------



## Prima.Vera (Sep 21, 2016)

I mean, even if AMD launches something in spring 2017, it doesn't mean that nVidia won't do the same. I have a feeling we're going to see the 11xx series much earlier next year, probably launched on a time frame similar to AMD's. Or maybe we'll see a 1080 Ti delayed until next year?


----------



## thesmokingman (Sep 21, 2016)

When Vegans attack is when we will see the rumored Ti.


----------



## laszlo (Sep 21, 2016)

over 100 posts arguing & debating what?

it's like a never-ending soap TV show where a new rumor appears every week...

"The future has a way of arriving unannounced." George Will


----------



## ppn (Sep 21, 2016)

It's just a 25% overclock on top of the 14nm shrink; that's all it is, and all it will be. In 2018, 7nm is just another shrink. Don't count on the ~15% of GCN optimizations. Fury X wasn't particularly efficient compared to the 390X; add 25% to it and you get the picture. Assuming the memory won't overclock (memory bandwidth is the same), the gain could be as low as 15%. That's where the ~15% from GCN optimizations and memory compression comes into play, pulling it back up to 25%.


----------



## the54thvoid (Sep 21, 2016)

Captain_Tom said:


> I have to ask what games it is "trouncing" in:
> 
> https://www.techpowerup.com/forums/...a-11-gpus-detailed.226012/page-5#post-3526293
> 
> ...



I meant that my card trounces fps, not that it trounces the Fury X. I'm getting framerates (@1440p) high enough to max out all settings, or close to it. Also, using TPU graphs and checking benchmarks, my card is 15-25% faster (in actual fps) than the TPU benches (I'm at 1501 under water). So my card is holding up very well indeed; so much so that I don't see any point in upgrading (the 1080 is only 20% faster and the Titan X is £1,100). On my card's performance, a Fury X is a downgrade/sidegrade. And again, for me, I use DX11. I'm moving to W10 when I get a new CPU build (Zen or Kaby Lake).

I did see this tidbit from WCCFtech that got me thinking WTF?



> And the recently leaked Vega 20 flagship GPU. A cutting edge graphics chip debuting on 7nm FinFet with 12 teraflops of single precision compute horsepower and double that in half precision compute. Making it the fastest ever graphics chip we’ve learned about to date, edging out Nvidia’s monstrous GP100 GPU and Tesla P100 accelerator by a fair margin.



It's hype to the max when a website looks at a card to be released in 3 years' time and compares it against a card out now (not AMD's fault). This is part of an ongoing problem: when AMD starts to release info, the rest of the web starts to pump it up, unfortunately.

Anyhoo, I've said enough here. Suffice to say, let's not diss the 980 Ti, as it's still a very powerful card when let loose, especially if you're still on DX11, and lots of us still are. And in the UK, the main suppliers don't have many (if any) Fury X cards on sale, and they're still £500 (not $300). Conversely, the 980 Tis are under £350.

Funny ol' world.

ps- can people stop double and triple posting?  Use multi-quote.

EDIT:  I had to come back

Given this was a year ago, I checked the only 2 games left in the TPU suite (Witcher 3 and BF4), and both give gains to the 980 Ti a year on (as does the Fury X). To understand my point, and that of enthusiasts with overclocked 980 Tis: my card runs pretty much where the OC Lightning is. You need to understand that performance to see why people defend their beloved 980 Tis so much. A stock 980 Ti is meaningless when looked at alone. That's why I never bought a Fury X: it released in an awful OC scenario. Even with voltage, it pales in comparison to a 980 Ti.

So for some performance perspective, this is why you shouldn't write off a 980 Ti. In the example below it's >30% above stock. That's where my card games.


----------



## $ReaPeR$ (Sep 21, 2016)

Captain_Tom said:


> Is the 1070 EOL?  Fury Nitro matches it, and the Fury X trades blows with the 1080 in the latest games.   Say whatever you want - an EOL AMD product matches Nvida's new High-end cards.
> 
> 
> All future poofing means for me is that the product will continue to provide the same experience, or even a better experience for 2-3 years.  People who bought an i7 5 years ago continue to get 60-100 FPS in the latest games while the old i3's need to be replaced (SB i3's can't really do 60 FPS anymore).    Of course nothing is completely future proof, but I expect my products to maintain their relative value for 3 years, and frankly they should never lose performance to their launch competition.


mate, my point was that it is pretty much pointless to compare an EOL product to a brand new one for the majority of people. AMD at this point needs numbers, market share, and it won't get it from an EOL product, no matter how good it is.


----------



## LemmingOverlord (Sep 21, 2016)

the54thvoid said:


> Did AMD say they're positioning between GP104 and GP102? If so, that's bad news. Given the time frame of 2nd half 2017, not first quarter, it's almost definitely a Summer release for 2017. That gives Nvidia a huge leeway for current pricing and Volta development.
> Also, of tremendous importance is that this is an investors conference so they need to say all the absolute best things.
> I have a bad feeling about Vega. Even if it's better than GTX1080, AMD are saying it won't beat Titan X?
> Sad face.



I've been listening to Papermaster's presentation at the DBTC, but I failed to hear any reference to Vega... The presentation was short (10 mins or so) with the rest being Q&A. I haven't heard it all yet, but... if true it's a massive letdown.


----------



## Prima.Vera (Sep 21, 2016)

laszlo said:


> over 100 posts arguing & debating what?
> 
> is like a never ending soap tv show when every week a new rumor appear...
> 
> "The future has a way of arriving unannounced." George Will


...and you just added more to the value. Just like me also


----------



## $ReaPeR$ (Sep 21, 2016)

Prima.Vera said:


> ...and you just added more to the value. Just like me also


LOL this conversation has become rather ridiculous at this point..


----------



## 64K (Sep 21, 2016)

This may have already been posted in this thread, but according to the source for this thread there is some more info about Vega 20, and according to Videocardz it won't come until the 2nd half of 2018 (about 2 years out). Volta will have come out long before that, and I'm not sure exactly what to expect as far as an improvement over Pascal, but I speculate for now that it will be a good bit faster than Vega 10. I hope we have more DX12 games by then to get an idea of what will be required.

http://videocardz.com/63715/amd-vega-and-navi-roadmap


----------



## rtwjunkie (Sep 21, 2016)

Captain_Tom said:


> Consider though that by the time Vega launches almost all AAA games will be DX12/Vulkan



You're dreaming.  I applaud your optimism, but cut back on the weed, man! 

Seriously, you won't see that kind of adoption rate in 1 year.


----------



## Ungari (Sep 21, 2016)

Captain_Tom said:


> People who bought an i7 5 years ago continue to get 60-100 FPS in the latest games while the old i3's need to be replaced (SB i3's can't really do 60 FPS anymore).



I was roundly attacked by i3 users after posting this. You must carry a lot of weight on this board.



Audiophizile said:


> A 480 uses as much or more power than a 1070 while being substantially slower. That part is not so good.



Every time I see the point brought up, I think of a parrot; "_Power Consumption, the Power Consumption...caw caw rawwwk!_" .

The RX 480/470 is the card of choice in the Ethereum Mining community, where lower energy costs are crucial to profitability.
One of the main reasons Polaris is sought after by miners _is_ the power efficiency.
https://etherscan.io/ether-mining-calculator


----------



## Captain_Tom (Sep 21, 2016)

the54thvoid said:


> So for some performance perspective, this is why you shouldn't write off a 980ti.  In that example below it's >30% above stock.  That's where my card games.



Dear lord, look at the very graph you posted!   That 30% overclock got you an 11% performance increase!   That's about the same I get with my 13% overclock on my Fury, and it's because Nvidia has been memory-starving its cards for 4 generations now.



Ungari said:


> I was roundly attacked by i3 users after posting this. You must carry a lot of weight on this board.



Hahahaha, that's pretty funny.    Look, I love the i3 as a gaming CPU, but for a long-term system it's only meant for a midrange build.


For example, I built my cousin a system 4 years ago with an i3 and a 560 Ti in it, and that thing is still chugging along just fine in every game that comes out.   Soon he should upgrade, and throwing an RX 470 in there will be fine, but throwing in a Fury or (lol) a 1080 would just be a complete waste.   But if he had an i7, he could still take that 1080.



$ReaPeR$ said:


> mate, my point was that it is pretty much pointless to compare an EOL product to a brand new one for the majority of people. AMD at this point needs numbers, market share, and it won't get it from an EOL product, no matter how good it is.



And the 480, 470, and 460 are capturing marketshare right now.   Polaris is allowing them to capture the biggest chunk of the market while they wait for API's and 14nm to mature.   As AMD sees it there is no point in releasing Enthusiast cards if they won't be at their peak potential, and if developers won't fully utilize them.


----------



## 64K (Sep 21, 2016)

@Captain_Tom and @Ungari you could use edit. Just saying.


----------



## rtwjunkie (Sep 21, 2016)

64K said:


> @Captain_Tom and @Ungari you could use edit. Just saying.


That is their modus operandi: spamming the shit out of every thread they are in with double and triple posting.


----------



## Ungari (Sep 21, 2016)

64K said:


> @Captain_Tom and @Ungari you could use edit. Just saying.



Sorry, are there some grammatical or spelling errors I need to correct?


----------



## 64K (Sep 21, 2016)

Ungari said:


> Sorry, are there some grammatical or spelling errors I need to correct?



I was referring to the double-posting.

From the Forum Guidelines
"If you reply to multiple posts use the "multi quote" button, that way the forum is easier to read."

https://www.techpowerup.com/forums/threads/forum-guidelines.197329/


----------



## looncraz (Sep 21, 2016)

ppn said:


> It's just 25% overclock over 14nm shrink, all it is. will be. 2018, 7nm is just another shrink.  15% GCN optimizations don't count on it. FuryX wasn't particularly efficient compared to 390X. add 25% to it and you get the picture., assuming memory will not overclock (memory bandwidth is the same) it could be as low as 15%. Here where 15% GCN+memory compression comes into play to pull it back to 25%.



20% higher clocks and 15% architectural improvement.  That gives about 12TFLOP/s.... or 40% more than Fury X.  This doesn't tell you everything you need to know about gaming performance, though.

Polaris is insanely bottlenecked by memory - I estimated that RX 480 should have nearly 320GB/s of bandwidth for peak performance... and it would be about 20% faster as a result.  Vega 10 is 77% larger than Polaris 10, so needs that much extra bandwidth to see the GCN improvements... or about 566GB/s.  With 512 GB/s, we should see about 15% better relative performance than we see with Polaris - partly eaten up by GCN scaling... but we're still looking at a card that is more than double the RX 480 with only 77% more resources... but double the bandwidth.

That means Vega 10 will range from ~40% to nearly 70% faster than the Fury X, depending on how much other hamstringing of performance AMD does (ROPs, mainly).  The strongest probability cloud is nearer to 50% than to anything else.
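The arithmetic behind that 12 TFLOP/s figure can be sketched out; this is a minimal check, assuming GCN's 2 FLOPs per clock per shader (one FMA) and the Fury X's ~1.05 GHz clock, not any official Vega spec:

```python
# Rough sketch of the estimate above (assumed figures, not official specs):
#   TFLOP/s = shaders x 2 FLOPs/clock x clock (GHz) / 1000
fury_x_tflops = 4096 * 2 * 1.05 / 1000   # Fury X at ~1.05 GHz -> ~8.6 TFLOP/s

# Vega 10 estimate: ~20% higher clocks and ~15% architectural improvement.
vega10_tflops = fury_x_tflops * 1.20 * 1.15
print(round(vega10_tflops, 1))  # -> 11.9, i.e. roughly the quoted 12 TFLOP/s
```

That lands about 38% above the Fury X on paper, which is where the "about 40% more" figure comes from.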



64K said:


> Vega 20 and according to Videocardz it won't come until 2nd half of 2018 (about 2 years). Volta will have come out long long before that and I'm not sure exactly what to expect as far as an improvement over Pascal but I speculate for now that it will be a good bit faster than Vega 10.



Vega 20 is a mid-range 2018 card and a test chip for 7nm.  Looks like AMD plans on some pretty impressive improvements with Navi (or will tighten the gap between mid-range and the top tier).



rtwjunkie said:


> Seriously, you won't see that kind of adoption rate in 1 year.



The majority of the market can now run DX12.  Game developers address as much of the market as they can - with most being able to now enjoy DX12 or Vulkan, all new game engines will make use.  We are already seeing nearly every new AAA title have support for one or the other.  In a year's time, many top CPU-bound DX11 games will have DX12 patches and nearly every new game will have support at or shortly after launch.

Microsoft's free upgrade offer pushed a lot of people onto Windows 10 (that and their underhanded upgrade tactics...).  I didn't anticipate moving to Windows 10 at all, but my ability to make it like Windows 7 (stripping most of the new Windows 10 garbage) while still benefiting from the core updates made it a worthwhile endeavor.


----------



## Ungari (Sep 21, 2016)

64K said:


> I was referring to the double-posting.
> 
> From the Forum Guidelines
> "If you reply to multiple posts use the "multi quote" button, that way the forum is easier to read."
> ...



I wasn't replying to multiple posts, only that specific point. That is why I only quoted that part, see?


----------



## looncraz (Sep 21, 2016)

Ungari said:


> I wasn't replying to multiple posts, only that specific point. That is why I only quoted that part, see?



Multi-posting really isn't an issue, IMHO, unless you are doing more than three.  It's sometimes simply unavoidable or undesirable if you're behind and trying to catch up on the comments.

You did two and got called out, I did three just a few pages back and no one said anything... but I like to post really long comments, which I think changes the dynamic.


----------



## 64K (Sep 21, 2016)

looncraz said:


> Vega 20 is a mid-range 2018 card and a test chip for 7nm.  Looks like AMD plans on some pretty impressive improvements with Navi (or will tighten the gap between mid-range and the top tier).



I wasn't aware of that about Vega 20. Thanks.

The problem that I'm seeing is that Nvidia is getting away with charging a lot, imo, for the 1080 and Titan XP, and most likely the 1080 Ti as well. They can do this because there is no competition from AMD for these GPUs. I see that Vega 10 is coming in the first half of next year (some are saying the first quarter), but until then there is no competition for Nvidia. Volta will be coming near that time (some are saying Q2 2017), and if it beats Vega 10, then Volta has no competition until the 2nd half of 2018; and if Vega 20 isn't competition for Volta, then it will be 2019 with Navi, is what I'm seeing.

Everybody loses in a lopsided market due to lack of competition.


----------



## rtwjunkie (Sep 21, 2016)

Ungari said:


> I wasn't replying to multiple posts, only that specific point. That is why I only quoted that part, see?



That's what multi quoting does. It allows you to respond to different people within one post, even if different thoughts.


----------



## Ungari (Sep 21, 2016)

rtwjunkie said:


> That's what multi quoting does. It allows you to respond to different people within one post, even if different thoughts.



I wasn't responding to different people, nor was I responding to the poster's entire quote, only that phrase.
There is an option to either _Multi-Quote_ or _Reply_, and I chose to _Reply_ only to what I specifically highlighted, see?


----------



## 64K (Sep 21, 2016)

Ungari said:


> I wasn't responding to different people, nor was I responding to the poster's entire quote, only that phrase.
> There is an option to either _Multi-Quote_ or _Reply_, and I chose to _Reply_ only to what I specifically highlighted, see?



You responded to Captain_Tom and then, in a new post, you responded to Audiophizile. What we are trying to tell you is that you could have combined them into one post and avoided double posting. If you only want to respond to a certain part of a person's post, then delete the parts that you don't want to respond to within the quote, as I did in #169.


----------



## looncraz (Sep 21, 2016)

64K said:


> You responded to Captain_Tom and then on a new post you responded to Audiophizile. What we are trying to tell you is that you could have combined them into one post and avoided double posting. If you only want to respond to a certain part of a persons post then delete the parts that you don't want to respond to within the quote as I did in #169.



Stop being an etiquette Nazi, he made two posts... big whoop.  It's really only a problem when someone makes four or five posts like that.

And you've now taken up a page with this nonsense...


----------



## 64K (Sep 21, 2016)

looncraz said:


> Stop being an etiquette Nazi, he made two posts... big whoop.  It's really only a problem when someone makes four or five posts like that.
> 
> And you've now taken up a page with this nonsense...



Only because of a lack of understanding of a simple thing like double posting and using multi-quote instead, but it's not my job to moderate. I will tell you that one mod in particular really doesn't like double posting.


----------



## Tatty_One (Sep 21, 2016)

64K said:


> Only because of a lack of understanding a simple thing like double posting and using multi quote instead but it's not my job to moderate. I will tell you that one mod in particular really doesn't like double posting.


I especially don't like triple and quadruple posting; it alters the flow of conversations within a thread and is lazy, which is why I spend time going through tidying them up, to the point that if it continues I just stop people doing it. But thank you for pointing it out; as you correctly say, it's in the guidelines, so let's draw the line at this point and see if anyone causes me any further work.

It may also be a good opportunity at this point to suggest that a couple of pages of off-topic chitchat is probably enough. This is a news thread about Vega; let's keep it that way, please.


----------



## Audiophizile (Sep 21, 2016)

Captain_Tom said:


> Dear lord look at the very graph you posted!   That 30% overclock got you an 11% performance increase!   That's about the same I get with my 13% overclock on my Fury, and it is because Nvidia has been memory starving its cards for 4 generations



102fps to 138fps is not 11%
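For what it's worth, the percentage implied by those two figures (taking the 102 and 138 fps numbers at face value) is easy to check:

```python
# Percentage gain implied by the quoted fps figures.
stock_fps, oc_fps = 102, 138
gain = (oc_fps - stock_fps) / stock_fps
print(f"{gain:.0%}")  # -> 35%
```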


----------



## the54thvoid (Sep 21, 2016)

Audiophizile said:


> 102fps to 138fps is not 11%



Thanks for saving me the bother.  And here I thought Captain Tom wasn't that 'silly'.  So quick to slag off Nvidia...


----------



## phanbuey (Sep 21, 2016)

Basically, AMD is going to become the new minimum standard in the mid-high end.  NV will basically move the goalposts as far as it needs to and then charge through the nose for anything past that... which is basically what it's doing now, so no change there.
Reminds me of back in the day with the 2900 XT vs the 8800 GTS/8800 GTX/Ultra.


----------



## HD64G (Sep 21, 2016)

If Vega 10 goes on sale in Q1 2017, they will have over 10 months until Volta gets to market. And nobody knows which Volta chip will be first to come out, after all. If it's not the biggest possible one, Vega 10 could be close to that too, while beating the 1080 for less money. How is that wrong for us? Only extreme gamers need a 1080 Ti, and have the cash for it atm.


----------



## Dbiggs9 (Sep 21, 2016)

I picked up 100 Jan 18, 2019 options on AMD; I also picked up shares at $2.80. Zen, Vega, PS, Xbox: this might be a good 2 years for them, or it could be a repeat of the last 5.


----------



## Captain_Tom (Sep 21, 2016)

64K said:


> I was referring to the double-posting.
> 
> From the Forum Guidelines
> "If you reply to multiple posts use the "multi quote" button, that way the forum is easier to read."
> ...



LOL I honestly didn't know about that.   Thanks 



HD64G said:


> If Vega 10 gets sold from 1st Q of 2017 they will have over 10 months until Volta gets on the market. And nobody knows which Volta will be 1st to come out after all. If not the biggest possible, Vega 10 could be too close to that also and beating 1080 for less money. How is that wrong for us? Only extreme gamers need 1080Ti  and have the cash for it atm.



If the rumored specs are true, it will absolutely crush the 1080 and trade blows with or beat the Titan.    The problem is timing, HBM2 selection, and pricing.

If AMD selects the dirt cheap 512 GB/s HBM2 then they better not price this more than $899, and I would hope they would choose 8GB to keep prices down.  On the other hand if they select the 720 or 1000 GB/s HBM then they could put 16GB on it and charge $1000+.


Also, timing is key.   It seems like if they can launch an UBER card before March, they may have a full year before Nvidia can respond with Volta, and the worst they would have to put up with is some 1180 with 8GB of HBM in July.   But if Vega launches in June, it will give Nvidia supremacy of the high end for FAR too long; hopefully a 490 with GDDR5(X?) will launch this fall either way.


----------



## Vayra86 (Sep 21, 2016)

Captain_Tom said:


> LOL I honestly didn't know about that.   Thanks





Captain_Tom said:


> If the rumored specs are true it will absolutely crush the 1080 and trade blows or beat the Titan.    The problem is timing, HBM2 selection, and pricing.
> 
> If AMD selects the dirt cheap 512 GB/s HBM2 then they better not price this more than $899, and I would hope they would choose 8GB to keep prices down.  On the other hand if they select the 720 or 1000 GB/s HBM then they could put 16GB on it and charge $1000+.
> 
> ...



Bonus points if you can see the irony here


----------



## Captain_Tom (Sep 22, 2016)

Vayra86 said:


> Bonus points if you can see the irony here



Oh I can.  I realized it the second I posted.  Old habits die hard :/


----------



## ensabrenoir (Sep 22, 2016)

Captain_Tom said:


> *If the rumored specs are true* it will absolutely crush the 1080 and trade blows or beat the Titan.    The problem is timing, HBM2 selection, and pricing.
> 
> If AMD selects the dirt cheap 512 GB/s HBM2 then they better not price this more than $899, and I would hope they would choose 8GB to keep prices down.  On the other hand if they select the 720 or 1000 GB/s HBM then they could put 16GB on it and charge $1000+.
> 
> ...



.....every Amd launch starts out like this......and then....


----------



## Captain_Tom (Sep 22, 2016)

ensabrenoir said:


> .....every Amd launch starts out like this......and then....



Nah the 290X overperformed.   


The Fury X also had the correct specs and more-or-less performed as well as it should, it just needed more optimized drivers.


----------



## $ReaPeR$ (Sep 22, 2016)

Captain_Tom said:


> And the 480, 470, and 460 are capturing marketshare right now.   Polaris is allowing them to capture the biggest chunk of the market while they wait for API's and 14nm to mature.   As AMD sees it there is no point in releasing Enthusiast cards if they won't be at their peak potential, and if developers won't fully utilize them.



Well, yes, and I don't disagree with their move, but they need something in the upper end of the market if they want to gain substantial market share, because marketing works that way.


----------



## Captain_Tom (Sep 22, 2016)

$ReaPeR$ said:


> well, yes, and i don't disagree with their move but they need something in the upper end of the market if they want to gain substantial market share because marketing works that way.



I guess they are testing if it does.


They had plenty of marketshare with the weaker 4870, 5870 (this one did dominate for a while), and 6970.   Then, when the 7970 and 290X were kicking ass, they lost marketshare slowly but surely.   Technically, Radeon's best days were when they went purely for price/perf and efficiency. 

Having said that, the performance gulf between even a cut-down Vega and Polaris will be absolutely massive.  I think they will be quite silly if they don't release at least an RX 490 this fall.    They wouldn't even need a new die, either; as long as 14nm is maturing, they could just release P10 clocked at 1450MHz with GDDR5X and an 8-pin connector.   That would give it a 20% boost over the 480 and allow it to take on the 1070 in at least DX12 games.


----------



## Slizzo (Sep 22, 2016)

HD64G said:


> If Vega 10 gets sold from 1st Q of 2017 they will have over 10 months until Volta gets on the market. And nobody knows which Volta will be 1st to come out after all. If not the biggest possible, Vega 10 could be too close to that also and beating 1080 for less money. How is that wrong for us? Only extreme gamers need 1080Ti  and have the cash for it atm.



Don't forget that nVidia is delivering Volta to the government to fulfill a contract in early 2017, so it could come a lot sooner than people think if Vega turns out to be crazy.  Though honestly, I think it'll probably be around GTX 1080/Titan X(P) performance.


----------



## looncraz (Sep 22, 2016)

Captain_Tom said:


> They wouldn't even need a new die either, as long as 14nm is maturing they could just release P10 clocked at 1450MHz with GDDR5X and an 8-pin connector.   That would give it a 20% boost over the 480 and allow it to take on the 1070 in at least DX12 games.



Polaris 10 at just 1.3GHz and GDDR5X would have a 20% improvement... the GPU is just that starved for bandwidth.







We can see that 1.6GHz GDDR5 (204GB/s) is the right amount for a *750MHz* Polaris 10 GPU...

2.2GHz GDDR5 (280GB/s) would be ideal for 1Ghz P10 GPU...
2.8GHz GDDR5 (360GB/s) would be ideal for 1.25GHz P10 GPU
2.9GHz GDDR5 (370GB/s) would be ideal for 1.3GHz P10 GPU.

RX 480 has just 256GB/s of bandwidth... not ideal even at 1GHz GPU clocks.

Vega 10 will need about 75% more bandwidth to feed its CUs... so about 630GB/s would be "ideal."
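The rule of thumb in those figures is roughly linear: "ideal" bandwidth scales with core clock at about 285 GB/s per GHz. A minimal sketch, where the 285 factor is fitted to the numbers in this post rather than being any official AMD figure:

```python
# Linear bandwidth-vs-clock rule of thumb inferred from the figures above.
# The ~285 GB/s-per-GHz factor is an estimate fit to this post's numbers,
# not an official AMD specification.
GBPS_PER_GHZ = 285.0

def ideal_bandwidth(core_ghz: float) -> float:
    """Estimated memory bandwidth (GB/s) needed at a given core clock."""
    return GBPS_PER_GHZ * core_ghz

for clk in (1.0, 1.25, 1.3):
    print(f"{clk} GHz -> ~{ideal_bandwidth(clk):.0f} GB/s")
# Tracks the 280 / 360 / 370 GB/s figures quoted above reasonably well.
```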

EDIT:

Also, you can see the diminishing returns from GPU frequency from all of the tests I ran:

1250MHz and 1350MHz are much closer together than 1150MHz and 1250MHz...


----------



## Captain_Tom (Sep 22, 2016)

looncraz said:


> Polaris 10 at just 1.3Ghz and GDDR5X would have a 20% improvement... the GPU is just that starving for bandwidth.
> 
> 
> 
> ...



That's actually really interesting, and similar to the gains I found overclocking the 7970's memory (stock was 1375MHz, and even at 1850MHz the card clearly wanted more bandwidth).  


I think AMD will have 2 Vega GPU's early next year:

-Cut-down 3584-SP with 768-GB/s HBM   - $500 - $600

-Full-die 4096-SP with 1 TB/s HBM - $650 - $800

Keep in mind that nothing is ideal right now, so even if the chips are bandwidth starved AMD has to release something.


----------



## looncraz (Sep 22, 2016)

Slizzo said:


> Don't forget that nVidia is delivering Volta to the Gov. to fulfill a contract in early 2017. So it could come a lot sooner than people think if Vega turns out to be crazy.  Though honestly I think it'll probably be around GTX1080/Titan X(P) performance.



Indeed, I think Volta will probably be focused on bringing compute improvements more than anything.  This is allegedly a whole new architecture, though, which could mean anything from a gaming performance perspective.


----------



## $ReaPeR$ (Sep 22, 2016)

Captain_Tom said:


> I guess they are testing if it does.
> 
> 
> They had plenty of marketshare with the weaker 4870, 5870 (This one did dominate for while), and 6970.   Then when the 7970 and 290X were kicking ass they lost marketshare slowly but surely.   Technically Radeon's best days were when they purely whent for price/perf and efficiency.
> ...



For all of our sakes, I hope you are right. The GPU market needs some competition and lower prices.


----------



## Captain_Tom (Sep 22, 2016)

$ReaPeR$ said:


> for all of our sakes i hope you are right. the GPU market needs some competition and lower prices.



Well, we have great prices in the mid-range right now, period.  But yeah, Nvidia will continue to price-gouge in the high end as long as their only high-end competition is 1-year-old Fury cards.  Hopefully AMD launches the 490 this year.

The funny thing is AMD has also been able to price-gouge in the mid-range.    The 480 is beating the 1060 by quite a bit (it's still constantly selling out).   As such, the 480 is about $20-$40 more than it was supposed to be, and the 470 is being sold for the same price as lower-tier 480s since stock issues are such a problem.  4GB 460 prices are laughable, but again, it's because it has zero competition in the space.


----------



## $ReaPeR$ (Sep 22, 2016)

Captain_Tom said:


> Well we have great prices in the mid-range right now, period.  But yeah Nvidia will continue to price gouge in the High-End as long as their only high-end competition is 1-year old Fury cards.  Hopefully AMD launches the 490 this year.
> 
> The funny thing is AMD has also been able to price gouge -> in the Mid-Range.    The 480 is beating the 1060 by quite a bit (It's still constantly selling out).   As such the 480 is about $20-$40 more than it was supposed to be, and the 470 is being sold for the same price as lower-tier 480's since stock issues are such a problem.  4GB 460 prices are laughable, but again it is because it has zero competition in the space.


I don't know if this is AMD's fault or just a vendor issue, since low supply and high demand tend to have this effect on products.


----------



## Captain_Tom (Sep 22, 2016)

$ReaPeR$ said:


> i dont know if this is AMD's fault or just a vendor issue since low numbers and high demand tend to have this effect on products.



I really wish we could get some concrete numbers on GPU sales.  All I have to go by is how often things are out of stock, and unverified leaks from vendors saying "We have 25x the 480's ready for launch compared to how many 1080's were ready for launch".


----------



## $ReaPeR$ (Sep 22, 2016)

Captain_Tom said:


> I really wish we could get some concrete numbers on GPU sales.  All I have to go by is how often things are out of stock, and unverified leaks from vendors saying "We have 25x the 480s ready for launch compared to how many 1080s were ready for launch".


Well, yes, that's a tad difficult, but it would indeed clarify the situation.


----------



## midnightoil (Sep 22, 2016)

Much of this 'news' is made up.  At least the detail of it is.

NAVI is still expected for early 2018, a year after Vega.

Vega 20 (if it is to be called Vega) is a real part and is indeed 2019 ... but it won't have 32GB of HBM2.  It would make no sense whatsoever to have that much VRAM with that many shaders, unless 4K 144Hz / 8K is the norm by then and the 7nm process takes stock clocks to 2.5-3 GHz.  Neither of which is likely.

Vega 11 is in no way replacing Polaris 10.  It will sit at or above Polaris 10, probably at or above Fury X performance.  They'll be sold in the same segments that you'd expect them to inhabit today.

Vega 10 details are mostly right.

The thing that bothers me about Vega 10 & 11 is GF's (not very exact, in fact quite bad) copy of Samsung's 14nmFF LP+ process.  If they're using it again for Vega (the high-performance part), I expect power consumption and clocks may disappoint a little again.   

We know Samsung are building something for AMD ... surely AMD should be using their new high-performance 14nmFF process(es) for Vega 10, Vega 11, and Zen?


----------



## rvalencia (Sep 25, 2016)

Caring1 said:


> What's with reducing the FP first from double to single, now half?
> "with up to 24 TFLOP/s 16-bit (half-precision) floating point performance"
> Obviously reducing the compute side increases gaming usability, as proven by Nvidia cards doing the same.


GP104's SM has only a single dedicated FP16 unit alongside its 128 FP32 units, which is different from GP100's SM design, where FP16 runs at twice the FP32 rate.



looncraz said:


> RX 480 has 5.8 TFLOPS @ 1.266 GHz - GTX 1070 has 6.5 TFLOPS at 1.683 GHz.  GTX 1060 has 4.4 TFLOPS at 1.709 GHz... and is about the same performance as RX 480 at 1.266 GHz if both are pegged at the stated clocks (GTX 1060 usually runs at > 1.8 GHz stock, RX 480 at ~1.2 GHz stock, giving the GTX 1060 about a 10% lead in performance).
> 
> Vulkan/DX 12 are allowing AMD GPUs to fill in the gaps left by their scheduler windows... gaps which really shouldn't exist as much as they do in the first place.  AMD needs a new ABI, scheduler, and driver in order to get rid of more of those gaps using DX11... but they could, conceivably, then see a ~25% boost in performance... without providing more hardware processing power...
> 
> ...



From https://developer.nvidia.com/dx12-dos-and-donts

_On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible_.


NVIDIA's DX11 driver already has multiple threads and asynchronous tasks.



midnightoil said:


> Much of this 'news' is made up.  At least the detail of it is.
> 
> 
> *Vega 11 is in no way replacing Polaris 10.*  It will sit at or above Polaris 10, probably at or above Fury X performance.  They'll be sold in the same segments that you'd expect them to inhabit today.


Vega 11 seems to be Scorpio's GPU, with its worst-yielding working chips at around 6 TFLOPS, versus Polaris 10's worst-yielding working chips at 4.9 TFLOPS for the RX 470.  Vega 11 effectively replaces Polaris 10.

For Scorpio, MS waited for AMD's medium-size GPU chip to reach 6 TFLOPS with good yields.

A Scorpio based on Polaris 10 targeting 6 TFLOPS would have terrible working-chip yields.
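
For what it's worth, all the TFLOPS figures in this thread come from the same peak-throughput formula: shaders × ops per clock (2 for FP32 FMA, 4 for double-rate FP16) × clock speed. A quick sketch, where the clock speeds are assumptions taken from public specs and the rumors above, not confirmed numbers:

```python
# Peak shader throughput in TFLOPS. Each ALU does one FMA (2 ops) per clock
# at FP32; double-rate FP16 hardware does 4 ops per clock.
def peak_tflops(shaders: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    return shaders * ops_per_clock * clock_ghz / 1000.0

# RX 470: 2048 shaders at ~1.206 GHz boost -> ~4.9 TFLOPS FP32
print(round(peak_tflops(2048, 1.206), 1))                   # -> 4.9

# Rumored Vega 10: 4096 shaders; hitting 24 TFLOPS FP16 at double rate
# implies a clock of roughly 1.46 GHz (assumed, not confirmed)
print(round(peak_tflops(4096, 1.465, ops_per_clock=4), 1))  # -> 24.0
```

So the rumored 24 TFLOPS half-precision figure is consistent with 4096 shaders only if Vega 10 clocks well above Polaris and has double-rate FP16.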


----------



## lorraine walsh (Sep 26, 2016)

DX11 is pretty good with them today, mind you; they can only do so much with an architecture that REQUIRES synchronous loads, unlike Nvidia's, which is built to process specific loads all together in one go.


----------



## Captain_Tom (Sep 26, 2016)

midnightoil said:


> Much of this 'news' is made up.  At least the detail of it is.
> 
> 
> Vega 11 is in no way replacing Polaris 10.  It will sit at or above Polaris 10, probably at or above Fury X performance.  They'll be sold in the same segments that you'd expect them to inhabit today.



Why not?


A lot of reports stated that Samsung is working on slower (768 GB/s) HBM2 chips that are far cheaper than HBM1.   Having one unified memory architecture means they don't have to segment between GDDR and non-GDDR dies, which is incredibly helpful when it comes to utilizing early yields.

Furthermore, it would shave about 25-50 W off all of their models, and it would give them all a nice 20-50% performance boost over last year's models.
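
Those bandwidth figures all fall out of the same formula, by the way: each HBM stack has a 1024-bit interface, so bandwidth = stacks × 128 bytes/clock × per-pin data rate in Gbps. A minimal sketch; the stack counts and pin speeds are my assumptions, inferred from the numbers quoted in this thread:

```python
# HBM/HBM2 bandwidth: stacks * (1024-bit bus / 8 bits per byte) * per-pin rate.
def hbm_bandwidth_gbs(stacks: int, pin_gbps: float, bus_bits: int = 1024) -> float:
    return stacks * (bus_bits / 8) * pin_gbps

print(hbm_bandwidth_gbs(4, 1.0))  # HBM1 on Fury X: 4 stacks @ 1 Gbps -> 512.0 GB/s
print(hbm_bandwidth_gbs(2, 2.0))  # rumored Vega 10: 2 stacks @ 2 Gbps -> 512.0 GB/s
print(hbm_bandwidth_gbs(4, 2.0))  # rumored Vega 20: 4 stacks @ 2 Gbps -> 1024.0 GB/s (~1 TB/s)
print(hbm_bandwidth_gbs(4, 1.5))  # slower/cheaper HBM2: 4 stacks @ 1.5 Gbps -> 768.0 GB/s
```

So the 768 GB/s "cheap HBM2" figure would line up with four stacks running at a reduced 1.5 Gbps per pin.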


----------

