# AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak



## Raevenlord (Jan 12, 2017)

According to WCCFTech, AMD's next-generation Vega graphics architecture will see its launch on consumer graphics solutions by May 2017. The website claims AMD will have Vega GPUs available in several SKUs, based on at least two different chips: Vega 10, the high-end part with apparently stupendous performance, and a lower-performance part, Vega 11, which is expected to succeed Polaris 10 in AMD's product stack, offering slightly higher performance at vastly better performance/Watt. WCCFTech also points out that AMD may show a dual-chip Vega 10×2-based card at the event, though they say it may only be available at a later date.


AMD is expected to begin the "Vega" architecture lineup with the Vega 10, an upper-performance segment part designed to disrupt NVIDIA's high-end lineup, with a performance positioning somewhere between the GP104 and GP102. This chip should carry 4,096 stream processors, with up to 24 TFLOP/s 16-bit (half-precision) floating point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s memory bandwidth. AMD is looking at typical board power (TBP) ratings around 225W.

Next up is "Vega 20", which is expected to be a die-shrink of Vega 10 to the 7 nm GF9 process being developed by GlobalFoundries. It will supposedly feature the same 4,096 stream processors, but likely at higher clocks, up to 32 GB of HBM2 memory at 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150 W.

AMD plans to roll out the "Navi" architecture some time in 2019, which means the company will use Vega as their graphics architecture for two years. There's even talk of a dual-GPU "Vega" product featuring a pair of Vega 10 ASICs.

*View at TechPowerUp Main Site*


----------



## hojnikb (Jan 12, 2017)

So a year behind the big green


----------



## cdawall (Jan 12, 2017)

hojnikb said:


> So a year behind the big green



Not really, these are targeted at the 1080 Ti, not the 1080. This would be about average for the AMD vs. Nvidia release schedule. I do, however, find the hype train a bit annoying, as per usual. I was expecting a Jan-Feb release.


----------



## the54thvoid (Jan 12, 2017)

Oh, that's quite far away. 

One year after Pascal. FWIW, a Titan X (not a full GP102 core) is 37% faster than a 1080 in 4K Vulkan Doom. It's just that when people call Vega's performance stupendous and then repeat such things, it's a bit baity. Once Vega is out, it's on a smaller node, with more cores and more compute, with more 'superness' than Pascal. So if it doesn't beat the Titan X, it's not superb enough, frankly.



cdawall said:


> Not really, these are targeted at the 1080 Ti, not the 1080. This would be about average for the AMD vs. Nvidia release schedule. I do, however, find the hype train a bit annoying, as per usual. I was expecting a Jan-Feb release.



The architecture is a year old.


----------



## phanbuey (Jan 12, 2017)

Lol... called it...

Although I thought April... but May!? Ouch.


----------



## IceScreamer (Jan 12, 2017)

Even though I'm not the target customer for this segment, I have to say this looks like it will be a disappointing show from AMD. Arriving almost half a year late with (IF the CES showing is anything to go by) similar performance to Nvidia's current offering will probably not be enough.
But nothing is set in stone yet; I'm really interested in how this thing will perform.


----------



## ssdpro (Jan 12, 2017)

AMD has had some delays mounting lately.  The admin over at the ASUS ROG forums also stated there were missing AM4 boards at CES because of "delays with the product schedule" and "major changes to the existing samples".  From the first statement we don't know if that means CPU, chipset, or motherboard.  The second, it means something they wanted to include is getting chopped so the schedule isn't hurt too badly.

https://rog.asus.com/forum/showthre...What-about-AM4&p=626123&viewfull=1#post626123


----------



## Casecutter (Jan 12, 2017)

Consider that it was back in June/July that Raja announced RTG had hit the Vega milestone. I'd say that was the first spin of silicon out of GloFo. They revise the process and silicon and work on the interposer and HBM2. Figure they just got the final stepping in December (which is what's being shown) and now they need to build a war chest. Say 2-3 months of production at GloFo, then some time to mount chips on interposers and build actual cards so there's a full pipeline. I expected it before May, but I thought I read RTG would be doing a full launch with full product availability. Looking back at these various points, I think it's coming into view. Don't rush, do it right, look to beat the competition, but do what you planned and let the chips fall where they may.

While I don't expect a 1080 Ti killer, I anticipate a card that stays in the hunt and projects RTG's forward focus for their technologies.


----------



## RejZoR (Jan 12, 2017)

I was hoping for a Ryzen and Vega duo launch in March. May feels so late and so far away...


----------



## GhostRyder (Jan 12, 2017)

Maybe it's coming under the RX 5XX name instead and there will not be an RX 490... Seems a bit odd either way, as something is not lining up.


----------



## atomicus (Jan 12, 2017)

cdawall said:


> Not really, these are targeted at the 1080 Ti, not the 1080...



Errr, no. Sorry to burst your bubble but VEGA is not going to touch the 1080Ti for performance. It's going to scrape past the 1080 if you're lucky, and HOPEFULLY at a good price, but knowing AMD I wouldn't be surprised if they price it identically.


----------



## owen10578 (Jan 12, 2017)

Vega 11 replacing Polaris 10? So is this going to be a complete next-gen product stack, aka RX 5xx? I thought it's supposed to add to the current product stack at the high end. The information seems off.


----------



## cdawall (Jan 12, 2017)

atomicus said:


> Errr, no. Sorry to burst your bubble but VEGA is not going to touch the 1080Ti for performance. It's going to scrape past the 1080 if you're lucky, and HOPEFULLY at a good price, but knowing AMD I wouldn't be surprised if they price it identically.



And you know this how?


----------



## theeldest (Jan 12, 2017)

I'd expect big Vega to be about 10% slower than the Titan X.

Take a look at the review of the TitanX and use the performance summary numbers as a metric.





Look at the FLOPS numbers over the past few generations and we can do a performance-per-FLOP comparison (AMD historically has higher theoretical GFLOPS numbers than Nvidia for a given performance level).

Do the backwards math given that we know Vega is 12 TFLOPS and AMD averaged 7.6 performance points per teraflop. From there, 91.6% performance vs. the Titan X.

Actually seems pretty reasonable, so it'll be about where the 1080 Ti is likely to be, and it'll come down to specific optimizations.

*I know this methodology is super hand-wavey, but it gives us another piece of information.
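For what it's worth, the back-of-envelope estimate above fits in a few lines of Python. The 7.6 points-per-teraflop average and 12 TFLOPS figure are the post's own numbers; the Titan X baseline score of ~99.5 is an assumed value on the same performance-summary scale:

```python
# Hand-wavy scaling estimate: projected performance from theoretical TFLOPS.
VEGA_TFLOPS = 12.0        # rumored FP32 throughput of big Vega
PERF_PER_TFLOP = 7.6      # post's historical AMD average (summary points/TFLOP)
TITAN_X_SCORE = 99.5      # assumed Titan X score on the same summary scale

vega_score = VEGA_TFLOPS * PERF_PER_TFLOP    # 91.2 points
relative = 100 * vega_score / TITAN_X_SCORE  # roughly 91-92% of Titan X

print(f"Projected Vega score: {vega_score:.1f} ({relative:.1f}% of Titan X)")
```

The method is only as good as the assumed average, which is the hand-wavey part the footnote admits to.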


----------



## cdawall (Jan 12, 2017)

theeldest said:


> I'd expect big Vega to be about 10% slower than the Titan X.
> 
> Take a look at the review of the TitanX and use the performance summary numbers as a metric.
> 
> ...



Oh shit so you mean it will be right where I said it was going? Man pure magic. I bet that it manages to beat the Ti in DX12/Vulkan and loses in DX11/openGL


----------



## zo0lykas (Jan 12, 2017)

owen10578 said:


> Vega 11 replacing Polaris 10? So is this going to be a complete next-gen product stack, aka RX 5xx? I thought it's supposed to add to the current product stack at the high end. The information seems off.


----------



## theeldest (Jan 12, 2017)

cdawall said:


> Oh shit so you mean it will be right where I said it was going? Man pure magic. I bet that it manages to beat the Ti in DX12/Vulkan and loses in DX11/openGL



On the other hand, if you pull all the data from the Titan X review, we see that both manufacturers see decreasing performance per FLOP as they go up the scale, and it's worse for AMD:




So if we plot all of these cards as GFLOPS vs. performance and fit a basic trend line, we get that big Vega would be around 80% of Titan performance at 12 TFLOPS (this puts it even with the 1080).






So for AMD to be at 1080TI levels, they'll need to have improved their card efficiency by 10 - 15 percent for this architecture.

Given the number of changes they've talked about with this architecture, I don't think that's infeasible but it is a hurdle to overcome.
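The trend-line approach described above can be sketched with an ordinary least-squares fit. The (TFLOPS, score) points below are illustrative placeholders standing in for the review's summary numbers, not the actual data, so only the method carries over:

```python
# Fit a performance-vs-theoretical-TFLOPS trend line with least squares,
# then extrapolate to big Vega at 12 TFLOPS. Data points are illustrative
# stand-ins, NOT the review's actual summary numbers.
amd_cards = [
    ("RX 470", 4.9, 43.0),     # (name, TFLOPS, perf score; Titan X = 100)
    ("RX 480", 5.8, 50.0),
    ("R9 Fury X", 8.6, 68.0),
]

xs = [t for _, t, _ in amd_cards]
ys = [p for _, _, p in amd_cards]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

vega_est = slope * 12.0 + intercept  # extrapolated score at 12 TFLOPS
print(f"Trend: score = {slope:.2f} * TFLOPS + {intercept:.2f}")
print(f"Extrapolated big-Vega score: {vega_est:.1f} (Titan X = 100)")
```

Swapping in the actual review numbers is what yields the ~80%-of-Titan figure quoted above; with different inputs the extrapolation lands elsewhere, which is why the fit is only as trustworthy as its data.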


----------



## cdawall (Jan 12, 2017)

And vulkan/dx12 does exactly that...


----------



## TheGuruStud (Jan 12, 2017)

cdawall said:


> And vulkan/dx12 does exactly that...



Filling the SPs with the much enhanced scheduler should do it easily.


----------



## dozenfury (Jan 12, 2017)

People wildly overestimating performance of an AMD product based on vague marketing slides, months before release?  I seem to have seen this before...


----------



## Camm (Jan 12, 2017)

I actually don't think AMD is late; AMD just didn't play in the high end this generation, for various reasons. The problem will be what sort of performance Vega offers up. If it's faster than a Titan XP, we'll praise AMD for its foresight; if it isn't, most of us will be going wtf. lol.


----------



## Kofoed (Jan 12, 2017)

Perhaps a 17% discount on the 1080 currently is not such a bad deal after all, if Vega only comes around in May.
Also, is it a driver thing that makes AMD faster in the Vulkan API? Or will Nvidia need a brand-new architecture in order to catch up with that?


----------



## cdawall (Jan 12, 2017)

dozenfury said:


> People wildly overestimating performance of an AMD product based on vague marketing slides, months before release?  I seem to have seen this before...



Their video cards typically deliver what was promised. This isn't bulldozer.


----------



## Camm (Jan 12, 2017)

Kofoed said:


> Perhaps a 17% discount on the 1080 currently is not such a bad deal after all, if Vega only comes around in May.
> Also, is it a driver thing that makes AMD faster in the Vulkan API? Or will Nvidia need a brand-new architecture in order to catch up with that?



Branch, there's a reason why Volta was pushed back (and Pascal didn't exist on engineering slides until a year and a bit before release). Vulkan\DX12 caught Nvidia with their pants down, so Pascal ended up being a Maxwell+ arch to use as a stepping whilst Volta is rearchitected.


----------



## efikkan (Jan 12, 2017)

cdawall said:


> And you know this how?


Vendors always preview their products with the most favorable benchmarks. AMD chose to use DOOM, which is an AMD-favoring game. They even said of the DOOM benchmark that this is how Vega 10 will perform, which means it's a GTX 1080 competitor, not a GTX 1080 Ti competitor. Vega 10 is also roughly the size of GP102, but consider that Pascal is 80% more efficient than Polaris. Let's give AMD the benefit of the doubt and assume they manage to cut this advantage in half, which would mean Vega 10 will roughly perform in the range of GP104.


----------



## the54thvoid (Jan 12, 2017)

Camm said:


> Branch, there's a reason why Volta was pushed back (and Pascal didn't exist on engineering slides until a year and a bit before release). Vulkan\DX12 caught Nvidia with their pants down, so Pascal ended up being a Maxwell+ arch to use as a stepping whilst Volta is rearchitected.



The slide below is from GTC2014








The slide below is from GTC 2015







In 2013, Volta was present, but its HMC design stalled somewhat.

In 2014 (that would be almost 3 years ago), Volta had disappeared from the roadmap.

In 2015, Volta reappeared, and it seems very much set for a late 2017 release, though it's far enough out to be 2018.

So really, Vega, DX12 etc. have got nothing to do with Volta. The memory arrangement affected its position.

Please bear in mind that the only reason the Titan X is £1,100 is because AMD have NOTHING to touch it with. Nvidia couldn't care less about DX12 and Vulkan for its GFX cards; their own mid-range GP104 (not GP102 and not GP100) is still top dog.

Nvidia, by your own definition (a revamped Maxwell masquerading as Pascal), doesn't even need to try to stay King of Cards.


----------



## jabbadap (Jan 12, 2017)

GhostRyder said:


> Maybe its coming under the RX 5XX name instead and there will not be an RX 490...  Seems a bit odd either way as something is not lining up.



Well, that depends on how they define "generation" in their naming structure.


Spoiler: Off topic



AFAIK Pascal only exists because TSMC's 20 nm process was crap. The original plan was Kepler@28nm, Maxwell@20nm and then Volta@16nm.


----------



## cdawall (Jan 12, 2017)

efikkan said:


> Vendors always preview their products with the most favorable benchmarks. AMD chose to use DOOM, which is an AMD-favoring game. They even said of the DOOM benchmark that this is how Vega 10 will perform, which means it's a GTX 1080 competitor, not a GTX 1080 Ti competitor. Vega 10 is also roughly the size of GP102, but consider that Pascal is 80% more efficient than Polaris. Let's give AMD the benefit of the doubt and assume they manage to cut this advantage in half, which would mean Vega 10 will roughly perform in the range of GP104.



Vega 10 was not at release clocks nor was it using a driver designed for Vega. What is favorable about that situation?


----------



## efikkan (Jan 12, 2017)

cdawall said:


> Vega 10 was not at release clocks nor was it using a driver designed for Vega. What is favorable about that situation?


What? There is no way they will run a driver on Vega hardware if it was not designed for it. If you don't understand this then you'll need to learn what a driver is.
DOOM is a highly AMD favoring game, meaning that Vega will look better in DOOM than the average game.


----------



## kruk (Jan 12, 2017)

efikkan said:


> What? There is no way they will run a driver on Vega hardware if it was not designed for it. If you don't understand this then you'll need to learn what a driver is.
> DOOM is a highly AMD favoring game, meaning that Vega will look better in DOOM than the average game.



PCGamesHardware says otherwise (http://www.pcgameshardware.de/AMD-Radeon-Grafikkarte-255597/Specials/Vega-10-HBM2-GTX-1080-1215734/):



> Zu beachten gilt jedoch, dass erstens kein Vega-optimierter Treiber genutzt wurde, sondern einfach ein Fiji-Treiber mit ein wenig zusätzlicher Debugging-Arbeit.



So Fiji driver + some tweaks.


----------



## Aquinus (Jan 12, 2017)

kruk said:


> So Fiji driver + some tweaks.


For those who can't use Google translate and aren't bilingual:


> Note, however, that first, no Vega-optimized driver was used, but simply a Fiji driver with a little additional debugging work.


----------



## efikkan (Jan 12, 2017)

kruk said:


> So Fiji driver + some tweaks.


This is just typical journalists failing to use the appropriate technical terms.
It will use a driver derived from the previous drivers, it's not like they are writing one from scratch. There will be continuous tweaks in the coming months, but these benchmarks still represent the "best case" of what we can expect from Vega 10.


----------



## theeldest (Jan 12, 2017)

efikkan said:


> This is just typical journalists failing to use the appropriate technical terms.
> It will use a driver derived from the previous drivers, it's not like they are writing one from scratch. There will be continuous tweaks in the coming months, but these benchmarks still represent the "best case" of what we can expect from Vega 10.



Current best case. But that is different than the NOW + 4 Months best case.


----------



## efikkan (Jan 12, 2017)

theeldest said:


> Current best case. But that is different than the NOW + 4 Months best case.


The driver they have now is the same they'll have in four months minus some tweaks. It will not change a lot, just minor tweaks here and there.

If this was a bad demonstration of Vega, they wouldn't have used it you know.


----------



## kruk (Jan 12, 2017)

efikkan said:


> The driver they have now is the same they'll have in four months minus some tweaks. It will not change a lot, just minor tweaks here and there.



AMD usually increases performance in the months following a release, and you are saying that the card will be only slightly faster? If they are able to exceed the 1080 by 10% *now*, they need only *15%* to catch up with the Titan XP. Why do you think it's impossible? A month or two before launch, Polaris 10 was rumored to have trouble achieving 850 MHz; to the surprise of many, it launched with 30+% higher clocks. Why do you think they won't be able to achieve better clocks in 4 months with Vega?


----------



## efikkan (Jan 12, 2017)

kruk said:


> AMD usually increases performance in the months following a release, and you are saying that the card will be only slightly faster? If they are able to exceed the 1080 by 10% *now*, they need only *15%* to catch up with the Titan XP. Why do you think it's impossible? A month or two before launch, Polaris 10 was rumored to have trouble achieving 850 MHz; to the surprise of many, it launched with 30+% higher clocks. Why do you think they won't be able to achieve better clocks in 4 months with Vega?


These are just wild guesses. AMD wanted to showcase the performance of Vega, so this demo is close to the final product. If this were an underclocked sample, they would most certainly emphasize that the "production model would be xx% better".


----------



## theeldest (Jan 12, 2017)

kruk said:


> AMD usually increases performance in the months following a release, and you are saying that the card will be only slightly faster? If they are able to exceed the 1080 by 10% *now*, they need only *15%* to catch up with the Titan XP. Why do you think it's impossible? A month or two before launch, Polaris 10 was rumored to have trouble achieving 850 MHz; to the surprise of many, it launched with 30+% higher clocks. Why do you think they won't be able to achieve better clocks in 4 months with Vega?




First Zen demo: September '16 -> 3.0 GHz
First Ryzen demo: December '16 -> 3.4 GHz
Second Ryzen demo: January '17 -> 3.6 GHz

So 20% clock increase in 4-5 months.

Precedent shows that getting the 10% - 15% increase in the next few months isn't unreasonable.
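The ramp above is simple to check (dates and clocks are the demo figures from this post):

```python
# Clock progression across AMD's public Zen/Ryzen demos, per the post.
demos = [
    ("Sep '16", 3.0),  # first Zen demo, GHz
    ("Dec '16", 3.4),  # first Ryzen demo
    ("Jan '17", 3.6),  # second Ryzen demo
]

first_clock = demos[0][1]
last_clock = demos[-1][1]
gain_pct = 100 * (last_clock - first_clock) / first_clock

print(f"Clock gain over ~4-5 months: {gain_pct:.0f}%")
```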


----------



## kruk (Jan 12, 2017)

efikkan said:


> These are just wild guesses. AMD wanted to showcase the performance of Vega, so this demo is close to the final product. If this were an underclocked sample, they would most certainly emphasize that the "production model would be xx% better".



I use past available data to extrapolate what *could* happen. AMD already got burned with the FuryX vs 980Ti situation, so if they are smart they will keep their cards hidden until 1080Ti releases.


----------



## theeldest (Jan 12, 2017)

kruk said:


> I use past available data to extrapolate what *could* happen. AMD already got burned with the FuryX vs 980Ti situation, so if they are smart they will keep their cards hidden until 1080Ti releases.



Agree. It would behoove AMD to discuss Vega, let nVidia release the 1080 Ti, then release Vega X (though I'm not optimistic that this is actually happening).


----------



## efikkan (Jan 12, 2017)

kruk said:


> I use past available data to extrapolate what *could* happen. AMD already got burned with the FuryX vs 980Ti situation, so if they are smart they will keep their cards hidden until 1080Ti releases.


Believing that Vega 10 will compete with GP102 is pure fantasy; that would require AMD to regain all of Nvidia's advantage, and all of it in a single iteration.
Vega 10 is a GP104 competitor.


----------



## theeldest (Jan 12, 2017)

efikkan said:


> Believing that Vega 10 will compete with GP102 is pure fantasy; that would require AMD to regain all of Nvidia's advantage, and all of it in a single iteration.
> Vega 10 is a GP104 competitor.



Given that most of nVidia's advantage is a larger die, it seems pretty realistic that AMD's larger die would be more competitive.

Titan Xp = 471 mm²
1080 = 314 mm² (33% smaller than the Titan Xp but 21% slower)
1060 = 200 mm² (36% smaller than the 1080 but 41% slower)
480 = 232 mm² (16% larger than the 1060 but 10% slower)
Titan Xm = 601 mm²
Fury X = 596 mm² (1% smaller but 5% slower)

Estimates for Vega are between 475mm and 525mm. (https://www.reddit.com/r/hardware/comments/5m6tfu/vega_die_size_475mm2/)

So at the same die efficiency Vega should be between 20% slower than the TitanX and 10% faster than the TitanX.
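The die-size bracketing can be made explicit with a quick sketch. The areas are the post's own numbers; the 0.95 perf-per-mm² handicap for AMD is a guess loosely based on the Fury X vs. Titan X (Maxwell) comparison above, not a measured figure:

```python
# Bracket big Vega vs. Titan X (Pascal) by die area, with an assumed
# perf-per-mm^2 handicap for AMD (0.95 is a guess, not a measurement).
TITAN_XP_AREA = 471.0                         # mm^2, Titan X (Pascal)
VEGA_AREA_LOW, VEGA_AREA_HIGH = 475.0, 525.0  # rumored range, mm^2
AMD_EFFICIENCY = 0.95                         # assumed perf/mm^2 ratio vs. Pascal

low = VEGA_AREA_LOW / TITAN_XP_AREA * AMD_EFFICIENCY
high = VEGA_AREA_HIGH / TITAN_XP_AREA * AMD_EFFICIENCY

print(f"Vega vs. Titan X (Pascal): {low:.0%} to {high:.0%}")
```

Changing the efficiency assumption shifts the whole bracket, which is the crux of the disagreement in this thread.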


----------



## Camm (Jan 12, 2017)

I would not be surprised, given driver advances. Take the 480 for example: a 10% deficit against the 1060 in DX11 at release, and now it's on an even keel in DX11 and beats it in DX12. Theoretical TFLOPS levels in AMD cards have always pointed to a card with better legs than its release performance suggests.

However, neither do I think this will 'destroy' Titan XP, and we'll likely have another case of superior in DX12, slower in DX11.


----------



## cdawall (Jan 12, 2017)

efikkan said:


> What? There is no way they will run a driver on Vega hardware if it was not designed for it. If you don't understand this then you'll need to learn what a driver is.
> DOOM is a highly AMD favoring game, meaning that Vega will look better in DOOM than the average game.



They actually said in the PR for it that they were using a modified Fiji driver that was in no way, shape, form or fashion optimal.



efikkan said:


> This is just typical journalists failing to use the appropriate technical terms.
> It will use a driver derived from the previous drivers, it's not like they are writing one from scratch. There will be continuous tweaks in the coming months, but these benchmarks still represent the "best case" of what we can expect from Vega 10.



Yes because the original driver for every video card is the best one. They are the most tweaked drivers out there and no one ever sees performance increases from updated drivers.


----------



## Evo85 (Jan 12, 2017)




----------



## efikkan (Jan 12, 2017)

cdawall said:


> Yes because the original driver for every video card is the best one. They are the most tweaked drivers out there and no one ever sees performance increases from updated drivers.


That doesn't make any sense at all.
AMD will not make a _new_ driver before the release of Vega. It will be a new release of the same driver, people commonly keep confusing this. The driver implementation for Vega was started a while ago, from now on they will just be doing minor tweaks.


----------



## theeldest (Jan 12, 2017)

efikkan said:


> That doesn't make any sense at all.
> AMD will not make a _new_ driver before the release of Vega. It will be a new release of the same driver, people commonly keep confusing this. The driver implementation for Vega was started a while ago, from now on they will just be doing minor tweaks.



The point others are trying to make is that initial testing occurs with basic changes to the existing driver: just enough changes that the new architecture will run, though new features will not be taken advantage of.

As the product is developed, those new features are added in pieces.

So depending on when a demo occurs, there are different hardware optimizations that may or may not be enabled in the driver being used at that point in time.

Yes, Vega driver development began at the same time as the chip development, but both are still being worked on today. Seeing another 10+% increase before the official launch is not out of the question, and 10% is the difference between a 1080 and a 1080 Ti.


----------



## cdawall (Jan 12, 2017)

efikkan said:


> That doesn't make any sense at all.
> AMD will not make a _new_ driver before the release of Vega. It will be a new release of the same driver, people commonly keep confusing this. The driver implementation for Vega was started a while ago, from now on they will just be doing minor tweaks.



Yes they will? There will be an updated AMD driver in *6 months.*


----------



## efikkan (Jan 12, 2017)

cdawall said:


> Yes they will? There will be an updated AMD driver in *6 months.*


Try reading my post again.


----------



## cdawall (Jan 12, 2017)

efikkan said:


> Try reading my post again.



What do you mean? There will be different code for Vega than for Fiji, one that is optimized for Vega and not Fiji.


----------



## Th3pwn3r (Jan 13, 2017)

efikkan said:


> Try reading my post again.



Is there any chance you can stop posting? That'd be great.


----------



## Cybrnook2002 (Jan 13, 2017)

Looking forward to seeing what Vega has. I always like AMD's top GPUs.


----------



## Totally (Jan 13, 2017)

theeldest said:


> On the other hand, if you pull all the data from the Titan X review, we see that both manufacturers see decreasing performance per FLOP as they go up the scale, and it's worse for AMD:
> 
> So if we plot all of these cards as GFLOPS vs. performance and fit a basic trend line, we get that big Vega would be around 80% of Titan performance at 12 TFLOPS (this puts it even with the 1080).
> ...



Your conclusion based on the data is wrong. You need to break the data into its proper components: look at the 9XX, 10XX, 3XX, and 4XX series separately, since they are all different architectures. When you lump them together as AMD vs. Nvidia like that, you hide the fact that the scaling of the 10XX (1060 -> 1080) is pretty bad and is being propped up by the 9XX.


----------



## Prima.Vera (Jan 13, 2017)

1 word: VAPORWARE


----------



## blued (Jan 13, 2017)

cdawall said:


> Not really these are targeted at the 1080Ti not 1080. This would be about average for AMD vs Nvidia release schedule. I do however find it to be a bit annoying on the hype train as per usual. I was expecting a Jan-Feb release.


What has been released vs the 1080 in the last year? Would you call that average?


----------



## owen10578 (Jan 13, 2017)

zo0lykas said:


>



Right, OK... I was just saying what I've seen a lot of people and publications say: Vega = 490/Fury 2. I guess we'll see what it will be.


----------



## renz496 (Jan 13, 2017)

Camm said:


> Branch, there's a reason why Volta was pushed back (and Pascal didn't exist on engineering slides until a year and a bit before release). Vulkan\DX12 caught Nvidia with their pants down, so Pascal ended up being a Maxwell+ arch to use as a stepping whilst Volta is rearchitected.



Vulkan/DX12 is not the reason Pascal exists. Also, Pascal is a bit more complicated than a simple "Maxwell+". Nvidia did not want to repeat the problem they had with Kepler, so they actually ended up making two versions of Pascal: compute Pascal (GP100) and gaming Pascal (GP102 and the rest). Kepler excelled at GPGPU-related work, especially DP, but as a gaming architecture not so much. The Maxwell design is probably the best design Nvidia can come up with right now for gaming purposes.


----------



## Blueberries (Jan 13, 2017)

theeldest said:


> So if we plot out all of these cards as Gflops vs Performance and fit a basic trend line we get that big Vega would be around 80% of Titan performance at 12 TFlops. (this puts it even with the 1080).



What are you defining "Performance" as? TFLOPS is in itself performance so these graphs make no sense to me.


----------



## Kanan (Jan 13, 2017)

I'm sure Vega will be great and a success. Why, because AMD changed a lot and they are surrounded by success rather than failure like before. If Ryzen can be a success, Vega can be too. 

And yes, those drivers aren't even remotely optimal. With 10% faster speed than the 1080 in Doom, and with further optimizations and higher clock speeds (it was probably thermal throttling in that closed cage, with fans arranged in a way that doesn't help get the heat out of the case), it is indeed possible that it could rival the Titan XP / GP102.


----------



## Ivaroeines (Jan 13, 2017)

A launch from AMD is when AMD ships their chips to manufacturers; then the card makers have to produce the cards, with the reference designs that should start shipping quite soon after launch. If the purchase start is a coordinated, worldwide event, we may see/buy cards in early autumn (probably some time later than this).

It feels like the time between an AMD product announcement and the first purchase date is as long as an entire generation for Nvidia and Intel. My guess is that promoting their products is much more important to AMD than product development.


----------



## cdawall (Jan 13, 2017)

blued said:


> What has been released vs the 1080 in the last year? Would you call that average?



September 2014 was the release of the 980, June 2015 was the release of the Ti, and July 2015 was the release of the Fury.

The Vega cards replace the Furys, not the 290/390 SKUs, from what I have seen in the roadmap.

I honestly think AMD banked on Polaris clocking a good bit higher and being competitive with the 1070, but they were horribly let down by GloFo. AMD has a weird gap, yes, but I do not think it was Vega; I think it was Polaris failing them.


----------



## mtcn77 (Jan 13, 2017)

'Stupendous', lol.


----------



## Totally (Jan 13, 2017)

Blueberries said:


> What are you defining "Performance" as? TFLOPS is in itself performance so these graphs make no sense to me.



Good catch, I didn't see that till you pointed it out. I had assumed it was relative performance (%) between the best-performing card and all the others, but seeing the top-performing card at 98.4 indicates that is not the case.



cdawall said:


> The Vega cards replace the fury's not the 290/390 SKU's from what I have seen in the road map.
> 
> I honestly think AMD banked on Polaris clocking a good bit higher and being competitive with the 1070, but were horribly let down by GloFo. AMD has a weird gap yes, but I do not think it was Vega I think it was Polaris failing them.



But considering they aren't doing anything to address the issue and are awfully quiet about it, it seems like a self-defeating strat to me. I'm not interested in paying $$$$ for a Fury replacement, nor in a 480, as it falls short of what I need from a card.


----------



## cdawall (Jan 13, 2017)

Totally said:


> But considering they aren't doing anything to address the issue and are awfully quiet about it, it seems like a self-defeating strat to me. I'm not interested in paying $$$$ for a Fury replacement, nor in a 480, as it falls short of what I need from a card.



They can't exactly dump glofo. Vega 10 will be your fury and Vega 11 should be the "490"


----------



## robert3892 (Jan 13, 2017)

If AMD releases their flagship GPU in May, NVIDIA will already have a head start on them, should NVIDIA announce the 1080 Ti in March. That isn't very wise marketing, in my opinion.


----------



## R0H1T (Jan 13, 2017)

renz496 said:


> Vulkan/DX12 is not the reason Pascal exists. Also, Pascal is a bit more complicated than a simple "Maxwell+". Nvidia did not want to repeat the problem they had with Kepler, so they actually ended up making two versions of Pascal: compute Pascal (GP100) and gaming Pascal (GP102 and the rest). Kepler excelled at GPGPU-related work, especially DP, but as a gaming architecture not so much. The Maxwell design is probably the best design Nvidia can come up with right now for gaming purposes.


They've always had a gaming flagship & a separate (compute) flagship ever since the days of Fermi. That they've neutered DP on subsequent Titans is something entirely different; the original Titan had excellent DP capabilities, but every card that's followed had DP cut down massively, & yet many call it a workstation card.

The GP102 & GP100 are different because of HBM2. Nvidia probably felt that they didn't need "next-gen" memory for their gaming flagship, or that saving a few more $ with GDDR5X was a better idea, & a single die cannot support these two competing memory technologies.


----------



## Totally (Jan 13, 2017)

R0H1T said:


> They've always had a gaming flagship & a separate (compute) flagship ever since the days of Fermi. That they've neutered DP on subsequent Titans is something entirely different; the original Titan had excellent DP capabilities, but every card that's followed has had DP cut down massively, & yet many call it a workstation card.
> 
> The GP102 & GP100 are different because of HBM2. Nvidia probably felt that they didn't need "next-gen" memory for their gaming flagship, or that saving a few more $ with GDDR5X was a better idea, & a single chip cannot support these two competing memory technologies.



The reason they crippled it was that Titan was cutting into their own workstation graphics business. People aren't going to give up their hard-earned cash when they don't have to, and the original Titan presented a very viable alternative to their Quadro line, so Titan in that form had to go. I say it was more a self-preservation move than cost-cutting.


----------



## R0H1T (Jan 13, 2017)

Totally said:


> The reason they crippled it was that Titan was cutting into their own workstation graphics business. People aren't going to give up their hard-earned cash when they don't have to, and the original Titan presented a very viable alternative to their Quadro line, so Titan in that form had to go. I say it's *self-preservation* and not cost cutting.


No one said it wasn't, but they could've gone the route of AMD & given 16/32 GB of HBM2 to the 1080Ti/Titan, & yet they didn't, & IMO that's down a lot to costs.
The Titan is still marketed as a workstation card; I do wonder why.


----------



## KrisCo (Jan 13, 2017)

cdawall said:


> I honestly think AMD banked on Polaris clocking a good bit higher and being competitive with the 1070, but were horribly let down by GloFo. AMD has a weird gap yes, but I do not think it was Vega I think it was Polaris failing them.



I just chalked up Polaris as a stopgap before AMD's real show, same with Pascal for Nvidia. Seems kinda reminiscent of the Kepler days. Although I may just be bitter at paying flagship prices for my 680s, only to discover they were merely the midrange cards a very short time later.


----------



## RejZoR (Jan 13, 2017)

The only way I can think of for this HBC (High Bandwidth Cache) to work is that the driver would analyze per-game memory behavior and adapt operation accordingly, meaning game performance would improve as you play. Or via game profiles, much like the ones for CrossFireX. This way it would know what gets shuffled around regularly and what only needs rare access, so they could stuff the framebuffer and frequently accessed data in HBM2, less frequent data in DDR4, and even less frequent data on the SSD. Question is, how efficiently can this be managed by the driver without the need to predefine all this during game design...


----------



## Brusfantomet (Jan 13, 2017)

RejZoR said:


> The only way I can think of for this HBC (High Bandwidth Cache) to work is that the driver would analyze per-game memory behavior and adapt operation accordingly, meaning game performance would improve as you play. Or via game profiles, much like the ones for CrossFireX. This way it would know what gets shuffled around regularly and what only needs rare access, so they could stuff the framebuffer and frequently accessed data in HBM2, less frequent data in DDR4, and even less frequent data on the SSD. Question is, how efficiently can this be managed by the driver without the need to predefine all this during game design...



What you are describing here is cache prefetching, which has been done in hardware for over 15 years by both AMD and Intel. It could also be a hybrid approach with both hardware and software, and that may be the part of the Vega drivers that was not finished for the Doom Vega demo.


----------



## RejZoR (Jan 13, 2017)

Not necessarily. I'm talking direct memory access, not prefetching. Prefetching still relies on storing data into main GPU memory (HBM2 in this case) based on some sort of prefetching algorithm. HBC, from what I understand of AMD's slides, means direct, seamless access to these resources. Not sure how, but apparently it can be done. We'll know more when they actually release this thing.

We've seen something similar with HyperMemory and TurboCache on low-end cards, where system memory seamlessly behaved like on-board graphics memory. It saved costs while delivering nearly the same performance. With systems having quad-channel DDR4 RAM, I can see that as viable game data storage memory.


----------



## Brusfantomet (Jan 13, 2017)

The hardware prefetcher uses DMA; the software one does not need to (since it already runs on the CPU).

This is not the same as HyperMemory/TurboCache: those used system memory as an extension of the frame buffer to increase the effective memory of the card. The dynamic part was only changing the size of the system-RAM portion.

Quad-channel DDR4 is not viable for high-end gaming: your 2400 MHz RAM has a theoretical bandwidth of 76.8 GB/s, which is about the same as an R7 250 at 73.6 GB/s.
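Those figures follow from the usual theoretical-peak formula (channels × bus width × transfer rate); a quick sketch, just to check the arithmetic (illustration only, not from any vendor spec sheet):

```python
# Theoretical peak memory bandwidth = channels * bus_width_bytes * transfer_rate
def peak_bandwidth_gbs(channels, bus_width_bits, mt_per_s):
    """Return theoretical peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return channels * (bus_width_bits / 8) * mt_per_s * 1e6 / 1e9

# Quad-channel DDR4-2400: 4 channels, 64 bits each, 2400 MT/s
print(peak_bandwidth_gbs(4, 64, 2400))  # 76.8 GB/s, as quoted above
```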


----------



## Slizzo (Jan 13, 2017)

the54thvoid said:


> The slide below is from GTC2014
> 
> 
> 
> ...



Please keep in mind that Volta's release has been pulled up for a few reasons. This was covered by our very own @btarunr here: https://www.techpowerup.com/224413/nvidia-accelerates-volta-to-may-2017

It's sticking to 16 nm and will not be moving to 10 nm. Also, there are rumors that they needed to pull it up to meet a government contract.


----------



## Prima.Vera (Jan 13, 2017)

Kanan said:


> I'm sure Vega will be great and a success. Why? Because AMD has changed a lot, and they are surrounded by success rather than failure like before. If Ryzen can be a success, Vega can be too.
> 
> And yes, those drivers aren't even remotely optimal. With 10% faster speed than the 1080 in Doom, and with further optimizations and higher clock speeds (it was probably thermal throttling in that closed cage, with fans that don't help get the heat out of the case), it is indeed possible that it could rival the Titan XP / GP102.


A lot of "IF"s and "Could"s in there....


----------



## efikkan (Jan 13, 2017)

RejZoR said:


> The only way I can think of for this HBC (High Bandwidth Cache) to work is that the driver would analyze per-game memory behavior and adapt operation accordingly, meaning game performance would improve as you play. Or via game profiles, much like the ones for CrossFireX. This way it would know what gets shuffled around regularly and what only needs rare access, so they could stuff the framebuffer and frequently accessed data in HBM2, less frequent data in DDR4, and even less frequent data on the SSD. Question is, how efficiently can this be managed by the driver without the need to predefine all this during game design...


The algorithm you describe is not possible. In games, the part of the allocated memory that's not used in every frame would be the parts of the game world outside the bounds of the camera. But it's impossible for the GPU to know which parts will be needed next.



RejZoR said:


> Not necessarely. I'm talking direct memory access, not prefetching. Prefetching still relies on storing whatever data into main GPU memory based on some sort of prefetching algorithm (HBM2 in this case). HBC, from what I understand AMD's slides is mentioning direct, seamless access to these resources. Not sure how, but apparently it can be done. We'll know more when they actually release this thing.


The purpose of caching is to hide the latency of a larger storage pool. The two basic principles are manual and automatic prefetching. Manual prefetching would require implementation in every game, but AMD has indicated they are talking about automatic prefetching. Automatic prefetching can only work if it's able to detect (linear) patterns in memory accesses. This can work well for certain compute workloads, where the data processed is just a long linear stream. It is, however, impossible to do with random accesses, like rendering. If they try to do this, it will result in either unstable performance or popping resources, depending on how they handle missing data on the hardware side.
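To make the "linear pattern" point concrete, here's a toy sketch (my own illustration, not anything AMD has described) of a stride prefetcher: it only predicts anything when recent accesses share a constant stride, and gives up on random, render-like access patterns:

```python
def predict_next(accesses, lookahead=4):
    """Toy stride prefetcher: if the recent accesses share a constant
    stride, predict the next few addresses; otherwise predict nothing."""
    if len(accesses) < 3:
        return []
    strides = [b - a for a, b in zip(accesses, accesses[1:])]
    if len(set(strides)) != 1:   # no clear linear pattern -> no prediction
        return []
    stride, last = strides[0], accesses[-1]
    return [last + stride * i for i in range(1, lookahead + 1)]

print(predict_next([0x1000, 0x1040, 0x1080]))   # linear stream: predicts ahead
print(predict_next([0x1000, 0x8f20, 0x0400]))   # random accesses: predicts nothing
```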


----------



## RejZoR (Jan 13, 2017)

Yeah, well, they apparently found a way, because traditional "prefetching" just wouldn't make any kind of sense; we already do that in existing games. All Unreal 3.x and 4.x games use texture streaming, which is location-based around the player, so you don't store textures in memory for the entire level, just for the segment the player is in and has a view of. The rest is streamed into VRAM on an as-needed basis as you move around, handled by the engine itself.


----------



## Captain_Tom (Jan 13, 2017)

the54thvoid said:


> Oh, that's quite far away.
> 
> One year after Pascal.  FWIW, a Titan X (not full GP102 core) is 37% faster than a 1080 at 4k Vulkan Doom.  It's just when people call Vega's performance stupendous and then repeat such things, it's a bit baity.  Once Vega is out it's on a smaller node, has more cores and more compute with more 'superness' than Pascal.  So if it doesn't beat Titan X, it's not superb enough frankly.
> 
> ...




I fully expect Vega 10 to beat the Titan XP.  In fact I would say the real question is if it beats the Titan XP Black.


----------



## efikkan (Jan 13, 2017)

RejZoR said:


> Yeah, well, they apparently found a way, because traditional "prefetching" just wouldn't make any kind of sense; we already do that in existing games. All Unreal 3.x and 4.x games use texture streaming, which is location-based around the player, so you don't store textures in memory for the entire level, just for the segment the player is in and has a view of. The rest is streamed into VRAM on an as-needed basis as you move around, handled by the engine itself.


Manual prefetching in a game is possible, because the developer might be able to reason about possible movement several frames ahead; I've done this myself in simulators.
But it's impossible for the GPU to do this itself, since it only sees memory accesses and instructions, and can only act on clear patterns in these.


----------



## RejZoR (Jan 13, 2017)

efikkan said:


> Manual prefetching in a game is possible, because the developer might be able to reason about possible movement several frames ahead; I've done this myself in simulators.
> But it's impossible for the GPU to do this itself, since it only sees memory accesses and instructions, and can only act on clear patterns in these.



You do know a GPU isn't just hardware these days; there's also the software part, and drivers sure can know a whole lot about what and how a game shuffles stuff through VRAM.


----------



## efikkan (Jan 13, 2017)

RejZoR said:


> You do know a GPU isn't just hardware these days; there's also the software part, and drivers sure can know a whole lot about what and how a game shuffles stuff through VRAM.


Even the driver is not able to reason about how the camera might move and which resources might be needed in the future. The only things the driver and the GPU see are simple buffers, textures, meshes, etc. The GPU has no information about the internal logic of the game, so it has no ability to reason about what might happen. The only way to know this is if the programmer intentionally designs the game engine to tell the GPU this somehow.


----------



## RejZoR (Jan 13, 2017)

It does. It sees which resources are being used on what basis and can arrange the usage accordingly, on a broader scale rather than down to the individual texture level.


----------



## Steevo (Jan 13, 2017)

efikkan said:


> Even the driver is not able to reason about how the camera might move and which resources might be needed in the future. The only things the driver and the GPU see are simple buffers, textures, meshes, etc. The GPU has no information about the internal logic of the game, so it has no ability to reason about what might happen. The only way to know this is if the programmer intentionally designs the game engine to tell the GPU this somehow.


If you played through a game and identified all the data points where FPS dropped below 60, you could go find the reasons and preemptively correct the issue, by prefetching data, precooking some physics (just like Nvidia still does with PhysX on their GPUs), or addressing whatever else caused the slowdown, and implement that in drivers to fetch or start processing said data.


----------



## theeldest (Jan 13, 2017)

Totally said:


> Good catch, I didn't see that till you pointed it out. I had assumed it was relative performance (%) between the best performing card and all the others, but seeing the top performing card at 98.4 indicates that is not the case.




I mentioned it in my first post with charts/tables.

Performance is from the "Performance Summary" table in the Titan X Pascal review for 1440p. And I made the disclaimer that this methodology is super "hand-wavey".

Yeah, I get it. It's imperfect for many reasons and wrong for others. But it at least provides some sort of methodology for trying to make predictions.


----------



## theeldest (Jan 13, 2017)

Totally said:


> Your conclusion based on the data is wrong. You need to break the data into its proper components: you need to look at the 9XX, 10XX, 3XX, and 4XX separately, since they are all different architectures. When you lump them together like that, you are hiding the fact that the scaling of the 10XX (1060->1080) is pretty bad and being propped up by the 9XX when lumped together as AMD vs. Nvidia.



It's not 'wrong'. Those data points hold meaningful information.

If you think that a different methodology would be better, then put the info into a spreadsheet and show us what you come up with.

EDIT:
Once you plot everything out, it's pretty easy to see that those changes don't actually mean much for either company. The linear model actually fits pretty well over the past couple generations.






Personally, I think the biggest issue with this model is that the FuryX drastically skews the projection down for high FLOPs cards. If we set the intercepts to 0 (as makes logical sense) the picture changes a bit:





If you want to point out how this methodology is flawed/wrong/terrible, it would help to show us what you think is better. With pictures and stuff.
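For anyone who wants to reproduce this kind of projection: the "hand-wavey" methodology is just a least-squares fit of relative performance against TFLOP/s. A minimal sketch with made-up example numbers (NOT the actual chart data), fitting through the origin as argued above:

```python
import numpy as np

# Hypothetical (TFLOP/s, relative-performance) points -- NOT the real chart data
tflops = np.array([4.0, 5.8, 8.2, 10.2, 11.0])
perf   = np.array([35.0, 50.0, 73.0, 88.0, 98.4])

# Least-squares fit through the origin (intercept = 0):
# minimizing ||perf - slope*tflops||^2 gives slope = <t,p> / <t,t>
slope = float(tflops @ perf) / float(tflops @ tflops)

# Project a hypothetical 12.5 TFLOP/s card
projected = slope * 12.5
print(round(projected, 1))
```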


----------



## jabbadap (Jan 13, 2017)

The problem with your GFLOPS chart is that GFLOPS depends on GPU clock, and you have taken the vendors' quoted numbers. As a rule of thumb, AMD's quoted GFLOPS are probably a little too high: the card throttles clocks back below the quoted "up to" clock (the Fury X being an exception because of its water cooling; also, the GFLOPS for the RX 480 are not from the "up to" clock, which would give 5,834 GFLOPS). And Nvidia's quoted GFLOPS figure is too small, because the real GPU clock is higher than the quoted boost clock.

I.e. Titan X GFLOPS are given as 2*1.417GHz*3584cc = 10,157 GFLOPS, while in gaming the GPU clock is higher than that; in TPU's review the boost clock for the Titan XP is ~1.62GHz, which means 2*1.62GHz*3584cc = 11,612 GFLOPS. This matters because the real boost clock of Nvidia's cards varies more per card, and thus the real GFLOPS differs more from the quoted value.
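The formula being used here (2 FLOPs, i.e. one FMA, per shader per clock) is easy to sanity-check:

```python
def sp_gflops(clock_ghz, shaders):
    """Peak single-precision GFLOPS: 2 FLOPs (one FMA) per shader per clock."""
    return 2 * clock_ghz * shaders

print(sp_gflops(1.417, 3584))  # rated boost clock: ~10,157 GFLOPS
print(sp_gflops(1.62, 3584))   # observed ~1.62 GHz gaming clock: ~11,612 GFLOPS
```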


----------



## efikkan (Jan 14, 2017)

RejZoR said:


> It does. It sees which resources are being used on what basis and can arrange the usage accordingly, on a broader scale rather than down to the individual texture level.


Neither the driver nor the GPU knows anything about the internal state of the game engine.



Steevo said:


> If you played through a game and identified all the data points of FPS less than 60 you could go find the reasons, and preemptively correct the issue by either prefetching data


The GPU has no ability to reason about why data is missing; it's simply a device processing the data it's ordered to.


----------



## rruff (Jan 14, 2017)

Camm said:


> AMD just didn't play in the high end this generation for various reasons.



Reasons being they are a very small company building both CPUs and GPUs. They have to pick their battles, and the bleeding-edge highest-performing product isn't it.

I'm more interested in Vega 11. It's a more mainstream product, and if it really does deliver good performance/watt (better than Pascal), that would be something. It would also give AMD a true laptop card.


----------



## Divide Overflow (Jan 14, 2017)

I'm amused how some people are highly skeptical of some info from WCCFTech but other info is considered gospel.


----------



## Kanan (Jan 14, 2017)

Totally said:


> The reason they crippled it was that was Titan cutting into their own workstation graphics business. People aren't going to give up their hard earned when they don't have to and the original Titan presented a very viable alternative to their Quadro line, so Titan in that form had to go. I say it was more a self-preservation move than cost-cutting.


Nah, nothing got crippled after the original Titan. It's just that the Titan X (Maxwell and Pascal) are based on gaming architectures; they never had DP in the first place.


----------



## theeldest (Jan 14, 2017)

jabbadap said:


> The problem with your GFLOPS chart is that GFLOPS depends on GPU clock, and you have taken the vendors' quoted numbers. As a rule of thumb, AMD's quoted GFLOPS are probably a little too high: the card throttles clocks back below the quoted "up to" clock (the Fury X being an exception because of its water cooling; also, the GFLOPS for the RX 480 are not from the "up to" clock, which would give 5,834 GFLOPS). And Nvidia's quoted GFLOPS figure is too small, because the real GPU clock is higher than the quoted boost clock.
> 
> I.e. Titan X GFLOPS are given as 2*1.417GHz*3584cc = 10,157 GFLOPS, while in gaming the GPU clock is higher than that; in TPU's review the boost clock for the Titan XP is ~1.62GHz, which means 2*1.62GHz*3584cc = 11,612 GFLOPS. This matters because the real boost clock of Nvidia's cards varies more per card, and thus the real GFLOPS differs more from the quoted value.



That doesn't actually change the prediction, because nothing in the model depends on Nvidia's "performance/flop" when 'predicting' Vega's performance. All it does is drop the dot for the Titan X a little further below the linear fit line.

The graphs & charts I made aren't 'wrong' or 'flawed'. They follow standard practices.

What you're doing--making specific modifications due to information you have--is more in line with Bayesian statistics. 

It's not better, it's not worse. It's different.


----------



## ensabrenoir (Jan 14, 2017)

......doesn't matter when they release it cause Nvidia has a counterpunch ready.  Honestly, in spite of how good the "10" series cards are, I feel as though they were just a fundraiser for what's coming next
......but in the meantime, even after AMD's new power-up, Nvidia's like


----------



## efikkan (Jan 14, 2017)

ensabrenoir said:


> ......doesn't matter when they release it cause Nvidia has a counterpunch ready.


I'm just curious, what is the counterpart from Nvidia? (Except for 1080 Ti which will arrive before Vega)


----------



## ensabrenoir (Jan 14, 2017)

efikkan said:


> I'm just curious, what is the counterpart from Nvidia? (Except for 1080 Ti which will arrive before Vega)



....counter "punch" .......Nvidia could just drop the price of the 1080s, or start talking about the 20 series, or do nothing and let the 1080 Ti ride.  They seem confident that it'll retain the crown even after Vega......


----------



## the54thvoid (Jan 14, 2017)

Captain_Tom said:


> I fully expect Vega 10 to beat the Titan XP.  In fact I would say the real question is if it beats the Titan XP Black.



Your enthusiasm is... Optimistic. Would be lovely if true.


----------



## efikkan (Jan 14, 2017)

ensabrenoir said:


> ....counter "punch" .......Nvidia could just drop the price of the 1080s, or start talking about the 20 series, or do nothing and let the 1080 Ti ride.  They seem confident that it'll retain the crown even after Vega......


They may lower the price, but they have no new chips until Volta next year.


----------



## jahramika (Jan 14, 2017)

cdawall said:


> Not really these are targeted at the 1080Ti not 1080. This would be about average for AMD vs Nvidia release schedule. I do however find it to be a bit annoying on the hype train as per usual. I was expecting a Jan-Feb release.


----------



## jahramika (Jan 14, 2017)

Hype = Pascal DX12 hardware support, HBM2. When will Pascal justify the 1080/Titan pricing?


----------



## Captain_Tom (Jan 15, 2017)

owen10578 said:


> Vega 11 replacing Polaris 10? So is this going to be a complete next-gen product stack, aka RX 5xx? I thought it's supposed to add to the current product stack at the high end. The information seems off.



No, the information is dead on with literally everything we have been told by AMD.


Look at their product roadmap:

2015 = Fury HBM, R9 300 28nm

2016 = Polaris 10, 11

2017 = Vega 10, 11


Just read their bloody map!!!   2016 had the 480 as a stopgap while they got their nearly all HBM2 line-up ready.


----------



## dalekdukesboy (Jan 15, 2017)

TheGuruStud said:


> Filling the SPs with the much enhanced scheduler should do it easily.



I'm no expert here, but just from looking at the specs and being on 7 nm tech, especially if it clocks like hell (plus whatever they set standard clocks at), this card I would honestly suspect is going to be in 1080 Ti territory, and depending on the application/game and drivers etc. it may beat it.


----------



## efikkan (Jan 15, 2017)

Captain_Tom said:


> Just read their bloody map!!!   2016 had the 480 as a stopgap while they got their nearly all HBM2 line-up ready.


I've not seen the details of Vega 11 yet, but I assume it will be GDDR5(X).

HBM has so far been a bad move for AMD, and it's not going to help Vega 10 for gaming either. GP102 doesn't need it, and it's still going to beat Vega 10. HBM or better will be needed eventually, but let's see if even Volta needs it for gaming. AMD should have spent their resources on the GPU rather than memory bandwidth they don't need.


----------



## Blueberries (Jan 15, 2017)

If AMD can release a dual-Vega card with HBM and a closed loop cooler around $600 or so it would blow nVidia's gaming market out of the water until 2018.

It's not likely though

(I'm assuming Vega 10 will beat Pascal in Perf/watt but not in throughput)


----------



## efikkan (Jan 15, 2017)

Blueberries said:


> (I'm assuming Vega 10 will beat Pascal in Perf/watt but not in throughput)


Vega will not beat Pascal in efficiency. Pascal is currently 80% more efficient than Polaris; there is no way AMD can improve like that overnight.
And if Vega were more efficient, it would scale past Pascal.


----------



## ensabrenoir (Jan 15, 2017)

Blueberries said:


> If AMD can release a dual-Vega card with HBM and a closed loop cooler around $600 or so it would blow nVidia's gaming market out of the water until 2018.
> 
> It's not likely though
> 
> (I'm assuming Vega 10 will beat Pascal in Perf/watt but not in throughput)



...dual-GPU cards have been quite cringeworthy (295X2 & Titan Z) and haven't benefited either team in quite a while..... unless you bought them for under $400.......even then you'd still be that odd fellow (kidding, both GPUs have a cult-like following and a serious fan club)


----------



## TheGuruStud (Jan 15, 2017)

efikkan said:


> Vega will not beat Pascal in efficiency. Pascal is currently 80% more efficient than Polaris; there is no way AMD can improve like that overnight.
> And if Vega were more efficient, it would scale past Pascal.



The new silicon alone is going to match/beat Pascal. It just gets worse with dx12/vulkan.

And 80%? Wut?


----------



## efikkan (Jan 15, 2017)

TheGuruStud said:


> The new silicon alone is going to match/beat Pascal. It just gets worse with dx12/vulkan.


How? Please explain yourself. If true, it would be the greatest achievement ever, and much larger progress than anything in over a decade; in fact, it would require about the same improvement as ~HD 4000 vs. Polaris in a single jump, without any node shrinks. Dream on…


----------



## dalekdukesboy (Jan 15, 2017)

TheGuruStud said:


> The new silicon alone is going to match/beat Pascal. It just gets worse with dx12/vulkan.
> 
> And 80%? Wut?



Admittedly this. I was reading everyone saying 1080 if lucky, probably not even 1080 Ti or Titan certainly, and I'm thinking to myself: yeah, we don't know for sure, but that seems awfully cynical for a GPU that AMD seems confident enough in to leak info about, one with promising physical specs, plus a die shrink that, if all put together properly, could be super efficient and fast.


----------



## Captain_Tom (Jan 15, 2017)

efikkan said:


> I've not seen the details of Vega 11 yet, but I assume it will be GDDR 5(X).
> 
> HBM has so far been a bad move for AMD, and it's not going to help Vega 10 for gaming either. GP102 doesn't need it, and it's still going to beat Vega 10. HBM or better will be needed eventually, but let's see if even Volta needs it for gaming. AMD should have spent their resources on the GPU rather than memory bandwidth they don't need.



Nope, it's HBM2.   RX 470 - RX Fury will ALL be HBM2.   HBM isn't expensive anymore; Nvidia is just milking their customers.


All you need to do is remember back to the HD 4870 that cost HALF as much as the 280.  It had that "Expensive and new"  GDDR5.


----------



## TheGuruStud (Jan 15, 2017)

efikkan said:


> How? Please explain yourself. If true, it would be the greatest achievement ever, and much larger progress than anything in over a decade; in fact, it would require about the same improvement as ~HD 4000 vs. Polaris in a single jump, without any node shrinks. Dream on…



Take the 480 down to the reported 95W with the refresh. Even if a little generous, that matches its competition. Vega will have IPC gains (potentially good gains) while using the same silicon.


----------



## Captain_Tom (Jan 15, 2017)

dalekdukesboy said:


> Admittedly this.  I was kinda reading everyone saying 1080 if lucky probably not even 1080ti or titan certainly and I'm kind of thinking to myself yeah, we don't know for sure but seems awfully cynical for a GPU that AMD seems pretty confident is worth leaking info about and has promising physical specs plus the die shrink alone if all put together properly could be super efficient and fast.



Have you looked at the leaked specs?  It's easily 2-3x stronger than the 480, and that puts it firmly in Titan XP territory.  


Now I don't have a crystal ball, but that's how strong the Vega 10 card SHOULD be.   If it isn't, it will be a complete failure unless it uses like 100w.


----------



## dalekdukesboy (Jan 15, 2017)

Captain_Tom said:


> Have you looked at the leaked specs?  It's easily 2-3x stronger than the 480, and that puts it firmly in Titan XP territory.
> 
> 
> Now I don't have a crystal ball, but that's how strong the Vega 10 card SHOULD be.   If it isn't, it will be a complete failure unless it uses like 100w.



Um...yes, I looked at the specs, and I think you misread me entirely. I was saying I'm not sure why people are putting the card down; it seems to me it should be treated with great optimism, and looking at those specs and the die-shrink gains in efficiency, YEAH, I'm saying I think it could be quite the card. Again, I don't know for sure, no crystal ball here either, but I agree with you: I think this will be serious competition for whatever Nvidia has out there right now, if it is done right.


----------



## Captain_Tom (Jan 15, 2017)

efikkan said:


> How? Please explain yourself. If true, it would be the greatest achievement ever, and much larger progress than anything in over a decade; in fact, it would require about the same improvement as ~HD 4000 vs. Polaris in a single jump, without any node shrinks. Dream on…



lmao are you high?!  


The 1080, for reference, is only 15% stronger than the Fury X.   So you think AMD will have trouble making a card more than 15% stronger than their 1.5-year-old flagship?


Dude stop drinking Nvidia's marketing koolaid...


----------



## Captain_Tom (Jan 15, 2017)

dalekdukesboy said:


> Um...yes, I looked at the specs, and I think you misread me entirely. I was saying I'm not sure why people are putting the card down; it seems to me it should be treated with great optimism, and looking at those specs and the die-shrink gains in efficiency, YEAH, I'm saying I think it could be quite the card. Again, I don't know for sure, no crystal ball here either, but I agree with you: I think this will be serious competition for whatever Nvidia has out there right now, if it is done right.



Buddy I was backing you up lol.  I am on your side, sorry if it sounded like I was attacking you.

As for why people are so skeptical?  I really am not sure, but my guess is Nvidia fanboys have just completely bought into Nvidia's marketing.  


I mean, at best Nvidia has a 33% efficiency advantage over AMD (Titan X vs. 480).  Just switching to HBM alone would make them equally efficient, and we already know that 14 nm has become considerably more efficient as it has matured.

To compete with the Titan X, AMD would have just had to scale the 480 up to a die twice as big. They have done that, and then they overhauled the entire architecture on top of it.  The RX Fury (or 590?) should be a monster.


----------



## efikkan (Jan 15, 2017)

Captain_Tom said:


> Nope it's HBM2.   RX 470 - RX Fury will  ALL be HBM2.   HBM isn't expensive anymore, Nvidia is just milking their customers.


So why will Vega 10 (the consumer version) only have 8 GB?
We all know the supply of HBM2 is low.



TheGuruStud said:


> Take the 480 down to the reports of 95w with the refresh. Even if a little generous that matches its competition. Vega will have IPC gains (potentially good gains) and using same silicon.


There are mythical reports of super binnings of Polaris, but that's not a fair comparison.
This is a real-world comparison:



 
(Note: Picture is cut)


----------



## TheGuruStud (Jan 15, 2017)

efikkan said:


> So why will Vega 10 (the consumer version) only have 8 GB?
> We all know the supply of HBM2 is low.
> 
> 
> ...



Now, you're just blatantly trolling and need banned.


----------



## dalekdukesboy (Jan 15, 2017)

Captain_Tom said:


> Buddy I was backing you up lol.  I am on your side, sorry if it sounded like I was attacking you.
> 
> As for why people are so skeptical?  I really am not sure, but my guess is Nvidia fanboys have just completely bought into Nvidia's marketing.
> 
> ...



HA, that's funny, I almost said "hey buddy" etc. in response to you and said I'm on your side, etc. I ended up rephrasing it, but I saw your post and thought it was mine, because I was still thinking it and forgot which way I had typed it. Anyway, I don't care about being "attacked", but thanks; I just misread your tone and admittedly was confused, since it sounded like you agreed but yet didn't, so no worries. I like arguing my point anyway, so no worries about my feelings lol. Anyway, I agree: I saw the specs, the Doom test, and various other leaks, and the fact they are confident enough to leak it months in advance is a good sign by itself; all the evidence points to a very strong high-end GPU that will finally compete with Nvidia's high-end offerings.

Also, that last slide showing efficiency, or the lack thereof, is well known for the RX 480. I admit it's a disappointment, but this is NOT the RX 480, no? If it were, we wouldn't even be having this discussion about the hint of potential for AMD to compete; no one would argue for it, and no one would bother arguing against it, for it would be obvious to all that it was like an AMD CPU nowadays: OK for the price point only, not performance.


----------



## efikkan (Jan 15, 2017)

TheGuruStud said:


> Now, you're just blatantly trolling and need banned.


I'm getting tired of this old lie.
I've checked >60 RX 480 reviews, both the initial and the more recent ones, and they all show power usage of ~150W +/-10%; in fact, Techpowerup's measurement is slightly below average. There is no significant difference between the first batches and the more recent ones, and no significant difference between reference and non-reference boards, just the usual variance you'll get with any card. What matters is what you get when you buy a card, not what one card in a thousand can do. It's disingenuous to claim that buyers will get a "95W" card when we all know they'll get a ~150W card.


----------



## rruff (Jan 15, 2017)

TheGuruStud said:


> Now, you're just blatantly trolling and need banned.



Do you have any legit info on 95W RX 480s? I'm having no luck with google.


----------



## TheGuruStud (Jan 15, 2017)

efikkan said:


> I'm getting tired of this old lie.
> I've checked >60 RX 480 reviews, both the initial and the more recent ones, and they all show power usage of ~150W +/-10%, in fact Techpowerup's measurements is slightly below average. There are no significant difference between the first batches and the more recent ones, and there are no significant difference between reference and non-reference boards, just the usual variance you'll get with every card. What matters is what you'll get when you buy a card, not what one out of every thousand cards claims to perform. It's disingenuous to claim that buyers will get a "95W" card, when we all know they'll get a ~150W card.



No, now you're BSing and changing the subject after being called out. A 480 does not use 80% more power than a 1060 (you're trying to slip in the 1070/1080 as a replacement). You're not being disingenuous or ignorant. You're lying intentionally. STFU.


----------



## kruk (Jan 15, 2017)

efikkan said:


> Vega will not beat Pascal in efficiency. Pascal is currently 80% more efficient than Polaris, there is no way AMD can improve like that over night.
> And if Vega were more efficient, it would scale past Pascal.



Nice try:
https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1080_Amp_Extreme/30.html
Compare RX 470 with 1060 3GB and RX 480 with 1060 6 GB at 1080p.


----------



## Captain_Tom (Jan 15, 2017)

efikkan said:


> So why will Vega 10 (the consumer version) only have 8 GB?
> We all know the supply of HBM2 is low.



Why is anyone complaining about 8GB?  That is definitely enough.

Furthermore, let's see if AMD's new memory tech really does cut usage in half.  If 8 GB on an AMD card acts as good as 16 GB on an Nvidia one, Nvidia is the one who will need to step their game up.


----------



## rruff (Jan 15, 2017)

kruk said:


> Compare RX 470 with 1060 3GB and RX 480 with 1060 6 GB at 1080p.



The 1060 is 48% better perf/W than the 480. The 470 doesn't compete with the 1060, but it is better than the other AMD cards.

The 1050 Ti is 62% better than the 470, though. https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1060_Mini_3_GB/30.html
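As an aside, the "X% better perf/W" figures in these posts are just ratios of performance-per-watt numbers. A minimal sketch of the arithmetic, using made-up values rather than TPU's actual chart data:

```python
# How an "X% better perf/W" figure is derived from two cards' numbers.
# The inputs below are illustrative placeholders, not actual review data.

def pct_better(a_perf_per_watt: float, b_perf_per_watt: float) -> float:
    """Percent by which card A's perf/W exceeds card B's."""
    return (a_perf_per_watt / b_perf_per_watt - 1.0) * 100.0

# e.g. if the 1060 scored 1.48 in relative perf/W and the 480 scored 1.00:
print(round(pct_better(1.48, 1.00)))  # -> 48
```

Note the asymmetry: a card that is 48% better in perf/W draws about 32% less power for the same work (1/1.48 ≈ 0.68), while the other card draws 48% more, which is one reason these percentage arguments talk past each other.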


----------



## Captain_Tom (Jan 15, 2017)

kruk said:


> Nice try:
> https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1080_Amp_Extreme/30.html
> Compare RX 470 with 1060 3GB and RX 480 with 1060 6 GB at 1080p.



Bingo!  If you remove the cherry-picking, Nvidia really isn't that far ahead.


It's also hilarious considering how GF's 14 nm was clearly rushed into use before it was ready.  Just look at how the "power hungry" Fury is only 15% behind the 1060.  Pathetic that Nvidia can barely stay ahead of AMD's 28 nm.


----------



## kruk (Jan 15, 2017)

rruff said:


> The 1060 is 48% better perf/W than the 480. The 470 doesn't compete with the 1060, but it is better than the other AMD cards.
> 
> The 1050 Ti is 62% better than the 470, though. https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1060_Mini_3_GB/30.html



If the 470 doesn't compete with the 1060, then the 1050 Ti also doesn't compete with the 470. *Period!*
Still waiting for the 80% difference ...


----------



## rruff (Jan 15, 2017)

kruk said:


> If 470 doesn't compete with 1060, then 1050 Ti also doesn't compete with 470. *Period!*
> Still waiting for the 80% difference ...



Didn't say they did compete. The 470 is AMD's most efficient card, and the 1050 Ti, 1070, and 1080 all beat it by >60%. I think 60% is a lot. 

And I'd love to see some verified testing of new Polaris cards that shows they really have 50% greater efficiency than when first introduced. The only thing I can find are rumors from 3 months ago.


----------



## kruk (Jan 15, 2017)

rruff said:


> Didn't say they did compete. The 470 is AMD's most efficient card, and the 1050 Ti, 1070, and 1080 all beat it by >60%. I think 60% is a lot.
> 
> And I'd love to see some verified testing of new Polaris cards that shows they really have 50% greater efficiency than when first introduced. The only thing I can find are rumors from 3 months ago.



But against the 1060 3GB, which is closest in performance, it's a sub-20% difference. What's your point?


----------



## efikkan (Jan 15, 2017)

kruk said:


> Still waiting for the 80% difference ...





 


 
These charts are not that hard to read!


----------



## kruk (Jan 15, 2017)

efikkan said:


> View attachment 83100
> View attachment 83101
> These charts are not that hard to read!



No, they are not. But people with *common sense* don't cut them out and compare different tiers. I really can't see an 80% difference between the 480 and the 1060. Try harder ...


----------



## efikkan (Jan 15, 2017)

kruk said:


> No, they are not. But people with common sense don't cut them out and compare different tiers.


Now who's the one cherry-picking? We are comparing efficiency here, not price, and you won't find two chips that are exactly the same size. If you don't like a comparison with GP104 and GP106, then you have a problem.

I think you have forgotten what we were discussing here. Some claim Vega will be more efficient than Pascal, and since Vega 10 will compete with GP104 it's a fair comparison. GP106 is 55% more efficient, GP104 ~80% more efficient, and GP102 ~85% more efficient, so AMD has their work cut out for them.


----------



## rruff (Jan 15, 2017)

kruk said:


> But with the 1060 3GB that is closest with the performance it's sub 20% difference. What's your point?



You are comparing the least efficient Pascal card to the most efficient Polaris. Average them all and you'll get Pascal being ~50% more efficient.

EDIT: Actually did the calculations and *Pascal is 59% more efficient on average*. 460,470, and 480 vs 1050 Ti, 1060 3, 1060 6, 1070, and 1080.
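The lineup-wide averaging described above can be sketched like this; the perf/W values are hypothetical stand-ins chosen only to illustrate the method, not readings from the actual charts:

```python
# Mean perf/W per lineup, then the ratio of the means.
# All values are hypothetical placeholders, not actual chart data.

def lineup_advantage(pascal_ppw, polaris_ppw):
    """Percent by which average Pascal perf/W exceeds average Polaris perf/W."""
    avg_pascal = sum(pascal_ppw) / len(pascal_ppw)
    avg_polaris = sum(polaris_ppw) / len(polaris_ppw)
    return (avg_pascal / avg_polaris - 1.0) * 100.0

pascal_ppw = [1.62, 1.20, 1.48, 1.65, 1.60]   # 1050 Ti, 1060 3GB, 1060 6GB, 1070, 1080
polaris_ppw = [0.90, 1.00, 0.95]              # 460, 470, 480

print(f"{lineup_advantage(pascal_ppw, polaris_ppw):.0f}%")  # prints 59%
```

Averaging the per-card ratios instead of taking the ratio of the averages gives a slightly different number when the spread is wide, which is one reason two people can honestly compute different lineup-wide figures from the same chart.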


----------



## the54thvoid (Jan 15, 2017)

Stop bitching over the usual crap. All this conjecture is pointless; there's no point getting worked up and feisty over it.  There is no Vega for months.

Edited because there's no point talking performance.


----------



## Camm (Jan 15, 2017)

rruff said:


> You are comparing the least efficient Pascal card to the most efficient Polaris. Average them all and you'll get Pascal being ~50% more efficient.
> 
> EDIT: Actually did the calculations and *Pascal is 59% more efficient on average*. 460,470, and 480 vs 1050 Ti, 1060 3, 1060 6, 1070, and 1080.



The other thing is: were these calculations done with the new driver sets? We've already seen AMD gain another 10% performance with Polaris under DX11. But let's face it, Nvidia does so well efficiency-wise because compute features have been stripped off the die since Kepler. That's also half the reason why even 10xx-series Nvidia GPUs are still not fully DX12 and Vulkan spec-level compliant, and why they gain nothing from either API.


----------



## the54thvoid (Jan 16, 2017)

Camm said:


> The other thing is, are these calculations done under new driver sets? Already we've seen AMD gain another 10% perf with Polaris under DX11. But lets face it, Nvidia does so well efficiency wise as compute features have been stripped off die since Kepler. And is half the reason why even 10xx series Nvidia gpu's are still not fully DX12 and Vulkan spec level compliant, and why they gain nothing from either api's.



AMD is not fully DX12 compliant either. The only reason AMD performs better in DX12 and Vulkan is better use of compute and parallelism of sorts.
Too many people wave the DX12 flag as if it's magical. It's not: it brings no better gfx settings and is so far proving hard to code for.
People get pissed off that Nvidia performs so well with such a lean, efficient design, and people get pissed off because AMD are doing better.
People ought to look at things from outside the arena and see how fantastic both are for us consumers in the long run.


----------



## Xzibit (Jan 16, 2017)

the54thvoid said:


> AMD is not fully DX12 compliant either. The only reason AMD perform better in DX12 and Vulkan is better use of compute and parallelism of sorts.
> Too many people wave the DX12 flag as if it's magical. It's not. It brings no better gfx settings and is so far proving hard to code.
> People get pissed off Nvidia performs so well with such a lean efficient design and people get pissed off because AMD are doing better.
> People ought to look at things from outside the arena and see how fantastic both things are for us consumers in the long run.



There is not one PC game so far that requires the D3D12 runtime; most DX12 games run with feature level 11_1 or 11_3, which are DX12 compliant.  It's going to take a few more years for things to sort themselves out with DX12, and even longer to see a "real DX12" game rather than a DX11 game with a DX12 stamp. For that:

Hardware has to be available
Developers have to be willing to program for it
There has to be enough of a customer base to make it worth it


----------



## the54thvoid (Jan 16, 2017)

Xzibit said:


> There is not one PC game so far that requires D3D12 runtime most DX12 games run with 11_1 or 11_3 which are DX12 compliant.  Its going to take a few more years for things to sort out with DX12 and even see a "real DX12" game rather then a DX11 game with a DX12 stamp.
> 
> Hardware has to be available
> Developers willing to program for
> Enough of a customer base to make it worth it



Yes, that's why folk need to stop saying NV isn't compliant, as if AMD is. It's post-truth.

Nobody has it nailed down, and as you rightly say, it's going to take quite a while to get right. A lot of folk are still on W7 as well (raises hand), which doesn't help the cause.


----------



## kanecvr (Jan 16, 2017)

hojnikb said:


> So a year behind the big green



Why are you happy about this? If AMD falls too far behind, we'll be seeing an Nvidia monopoly and GPU prices will skyrocket. Just look at the price of the 1080 vs. the 980 at launch: the 1080 is a whopping $775 (3100 lei, tax included) in my country, while the 980 was about $550-600 (2000-2450 lei depending on model) soon after launch. That's a HUGE price increase of $200 for a high-end card over a period of, what, two years?

This is what happens when there's no competition. Not to mention lack of competition is BORING for hardware enthusiasts like myself.



the54thvoid said:


> AMD is not fully DX12 compliant either. The only reason AMD perform better in DX12 and Vulkan is better use of compute and parallelism of sorts.
> Too many people wave the DX12 flag as if it's magical. It's not. It brings no better gfx settings and is so far proving hard to code.
> People get pissed off Nvidia performs so well with such a lean efficient design and people get pissed off because AMD are doing better.
> People ought to look at things from outside the arena and see how fantastic both things are for us consumers in the long run.



Agreed.

The thing is, there's a huge improvement in performance using DX12/Vulkan. I've had the opportunity to test an 8GB RX 480 (Asus Strix) and a 4GB RX 470 (PowerColor Red Dragon V2), and I have to say I was impressed.


----------



## renz496 (Jan 25, 2017)

R0H1T said:


> They've always had a gaming flagship & a separate (compute) flagship ever since the days of Fermi. That they've neutered DP on subsequent Titan's is something entirely different, the original Titan had excellent DP capabilities but every card that's followed had DP cut down massively & yet many call it a workstation card.
> 
> The GP102 & GP100 are different because of HBM2, Nvidia probably felt that they didn't need "nexgen" memory for their gaming flagship that or saving a few more $ was a better idea i.e. with GDDR5x & the single card cannot support these two competing memory technologies.



Not really. With Fermi and Kepler, GF100/GF110/GK110 were still used in gaming cards, but with Pascal, GP100 is not used in any gaming card at all. And even though chips like GK110 were more compute-oriented and GK104 more gaming-oriented, the SM arrangement was still the same; that is not the case for GP100 versus the other Pascal chips.

The Maxwell Titan X might not have the excellent DP capability of the Kepler-based Titan, but its spec is exactly the same as the Quadro M6000's, so if you don't need the certified drivers that come with the Quadro, a Titan X could easily replace an M6000 at a far cheaper price (5k vs 1k). In fact, as AnandTech pointed out, some companies actually built their products using the Maxwell Titan X instead of a Quadro or Tesla. The Pascal Titan X was sold for its unlocked INT8 performance.



R0H1T said:


> No one said it wasn't, but they could've gone the route of AMD & given 16/32 GB of HBM2 to 1080Ti/Titan & yet they didn't & IMO that's down a lot to costs.
> The Titan is still marketed as a workstation card, I do wonder why?



The Maxwell Titan X is a bit weird, since it had no advantage over regular GeForce-based cards; the only real advantage was its 12GB of VRAM, which gave it the same spec as the Quadro M6000. But I've heard talk of some people seeking it out for its FP16 performance, which might be why Nvidia ended up crippling FP16 on non-Tesla cards with Pascal.


----------



## medi01 (Jan 25, 2017)

hojnikb said:


> So a year behind the big green


Feels so good, doesn't it? I mean, paying $700 for 314 mm^2 chips.



theeldest said:


> On the other hand, if you pull all the data from the Titan X review we see that both manufacturers see decreasing performance per flop as they go up the scale and it's worse for AMD:
> View attachment 82981
> 
> So if we plot out all of these cards as Gflops vs Performance and fit a basic trend line we get that big Vega would be around 80% of Titan performance at 12 TFlops. (this puts it even with the 1080).
> ...




It's an interesting take, but you need to explain why Vega would have worse perf/TFLOP than Polaris (after AMD explicitly claimed more "non-TFLOP" stuff in it).

Also, the 1060 outperforming the 480 in your chart is... somewhat outdated.




theeldest said:


> 480 = 232mm (15% larger than 1060 but 10% slower)



Well, it's interesting to mention that six months after release, the 480 took over.
https://www.techpowerup.com/forums/...ver-improving-over-time-is-not-a-myth.228443/



rruff said:


> You are comparing the least efficient Pascal card to the most efficient Polaris. Average them all and you'll get Pascal being ~50% more efficient.
> 
> EDIT: Actually did the calculations and *Pascal is 59% more efficient on average*. 460,470, and 480 vs 1050 Ti, 1060 3, 1060 6, 1070, and 1080.



Bullshit:





https://www.computerbase.de/2017-01...si-gaming-test/3/#abschnitt_leistungsaufnahme

Thanks to TPU benchmarks (I have yet to find a site that claims the 480 consumes more than a 1070), in practice negligible power-consumption differences are hyperbolized into orbit.


----------



## Captain_Tom (Jan 25, 2017)

medi01 said:


> It's an interesting take, but you need to explain why Vega would have worse perf/tflop than Polaris (after AMD explicitly claimed more "non tflop" stuff in it).
> 
> Also, 1060 outperforming 480 in your chart is... somewhat outdated.
> 
> ...



Exactly.

The odd arguments these people keep making are quite puzzling, no?

-Why do people keep comparing Hawaii and Fiji TFLOPS to Vega, when Polaris is much closer architecturally?

-Why do people keep exaggerating how powerful the Titan is?!   As usual, people seem to be misled by the name: it's only ~45% stronger than the Fury X, depending on the game and resolution.  Do people really think it's going to be hard for AMD to make a card 50% stronger than one they made two years ago?!

-Overall, why do people continue to assume AMD can't make powerful cards?   The 5870 and 5970 practically had supremacy in the enthusiast market for a FULL YEAR.  The 7970, and then the GHz Edition, essentially held the performance crown for an entire year until Nvidia released a larger card that cost $1000, and then the 290X crushed the Titan and 780 Ti HARD.    AMD is no stranger to performance wins.


----------



## the54thvoid (Jan 26, 2017)

Captain_Tom said:


> -Overall, why do people continue to just assume AMD can't make powerful cards?   The 5870 and 5970 practically had supremacy in the Enthusiast market for a FULL YEAR.  The 7970 and then GHz edition essentially held the performance crown for an entire year until Nvidia released a larger card that cost $1000, and then the *290X crushed the Titan and 780 Ti HARD*.    AMD is no stranger to performance wins.



Are you Donald Trump? The misuse of historical accuracy is disturbing.  For reference, I simply googled "290X review" and went to the first one, a TPU review of the 290X Lightning model.

I don't see the 780 Ti being crushed HARD, as you put it. I appreciate that you prefer AMD (I've seen you on other sites lauding AMD to the moon), but your abuse of the truth is silly.

Again, I hope the Vega card is more powerful than most people are expecting.  I'll be very pissed off if a card I'm stalling an entire build for doesn't match (or beat) the rumoured 1080 Ti.  I'm in a win-win here: I want Vega to perform well, and I'm not a zealot like some Nvidia owners can be.  But you're always a wee bit too red-pumped to be taken seriously.


----------



## Captain_Tom (Jan 26, 2017)

the54thvoid said:


> Are you Donald Trump? The misuse of historical accuracy is disturbing.  For reference, I simply googled 290x review and went to the first one, a TPU review on the 290X Lightning model.
> 
> I don't see the 780ti being crushed HARD, as you put it. I appreciate you prefer AMD (I've seen you on other sites lauding AMD to the moon) but your abuse of truth is silly.
> 
> Again, I hope the Vega card is more powerful than most people are expecting.  I'll be very pissed off if a card I'm stalling an entire build for doesn't match  (or beat) the rumoured 1080ti.  I'm in a win, win here - I want Vega to perform well - I'm not a zealot like some Nvidia owners can be.  But you're always a wee bit overly red pumped to be taken seriously.



HAHAHAHAHA, a 900p benchmark.  Nice "alternative facts", shill.  Or perhaps that's the ideal resolution for Nvidiots.

Here's something from 2016:

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/perfrel_3840_2160.png

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/perfrel_2560_1440.png

The 780 Ti almost losing to a 7970 GHz (which beat the pathetic 780).


To be clear, I didn't cherry-pick at all.  I simply googled "290X 780 Ti 2016"; that's the first thing that popped up.


----------



## Kanan (Jan 27, 2017)

Captain_Tom said:


> -Overall, why do people continue to just assume AMD can't make powerful cards? The 5870 and 5970 practically had supremacy in the Enthusiast market for a FULL YEAR. The 7970 and then GHz edition essentially held the performance crown for an entire year until Nvidia released a larger card that cost $1000, and then the 290X crushed the Titan and 780 Ti HARD. AMD is no stranger to performance wins.



You're talking release performance, right? It seems so.

a) The 7970 lost to the GTX 680 when the 680 was released, having held the performance crown for only about ~3 months. Later, the 7970 GHz Edition equalled the GTX 680's performance, or was a bit faster, but with far higher power consumption.

b) The 290X was faster than the GTX 780 at release and had a good fight with the GTX Titan. Nvidia then released the 780 Ti to counter the 290X/Hawaii GPUs, and the 780 Ti easily won vs. the 290X. Later, custom cards came closer to the 780 Ti's performance, but custom 780 Ti versions were again faster.

c) The only thing you had right was the HD 5870 and 5970 being unmatched until the GTX 480 and the GTX 500 series came. The GTX 480 came late and was faster, but consumed a hell of a lot of power. The GTX 580 easily won vs. the HD 5000 and HD 6000 series.

This is still talking about release, and at most six months after it. It's pretty irrelevant to compare performance now while disregarding release performance, as nobody buys a GPU to have more performance than the competitor's GPU three years later.

Also @the54thvoid is right, here are the other tables:











https://www.techpowerup.com/reviews/MSI/R9_290X_Lightning/24.html

That's (one of the best) custom 290Xs losing to a reference 780 Ti. A custom 780 Ti is even faster. The 290X simply had no chance.


----------



## Captain_Tom (Jan 27, 2017)

Kanan said:


> You're talking release performance, right? It seems so.




No, I am talking about right now.  Right now the 290X destroys the 780 Ti and even trades blows with the 980.   Heck, even in late 2014 the 290X was curb-stomping the 780 Ti.


Sorry, but I don't need to read your long fanboy rant.   Continue to live in the past; all Nvidiots do by necessity.


----------



## Kanan (Jan 27, 2017)

Captain_Tom said:


> No I am talking about right now.  Right now the 290X destroys the 780 Ti and even trades blows with the 980.   Heck even in late 2014 the 290X was curb stomping the 780 Ti.
> 
> 
> Sorry but I don't need to read your long fanboy rant.   Continue to live in the past - All Nvidiots do by necessity.


Any evidence for your bold statements? So far you've just been behaving like a childish fanboy, nothing more.


----------



## Captain_Tom (Jan 27, 2017)

Kanan said:


> Any evidence for your bold statements? Until now you're just behaving like a childish fanboy, nothing more.



lol, scroll up, crybaby.  I already posted TechPowerUp benches from 2016.


Oh... he can't read.   That's sad.


----------



## Kanan (Jan 27, 2017)

Captain_Tom said:


> lol scroll up cry baby.  I already posted TechPowerup  benches from 2016.
> 
> 
> Oh.... He can't read.   That's sad.


As I already stated, nobody cares about performance 2-3 years out when he chooses to buy a GPU in anno 2014; if you're unable to understand this, I'm going to end this discussion. And I thought fanboys like this were prominent elsewhere.

Also, calling someone else an "Nvidiot" while behaving like a fanboy yourself is really funny. It seems you don't see your own behaviour as fanboyism, but it is.


> Sorry but I don't need to read your long fanboy rant.





> Oh.... He can't read. That's sad.



You're a really funny person. I'd say the only fanboy here, is you.


----------



## Captain_Tom (Jan 27, 2017)

Kanan said:


> I already stated, nobody cares about performance in 2-3 years when he buys a 290X in anno 2014,



Anno 2014?  What are you talking about, lmao!  You can't read!

I posted the aggregate framerate, and that includes Nvidia's broken "The Way It's Meant to Not Boot" games.

Most people keep their cards for 2-3 years on average.  Even at launch the 780 Ti was tied at 4K, and only won by 10% at most at 1080p (1440p depended on the game).  Nobody with half a brain pays 30% more money for 0-10% more performance that will only last a year.   Considering the 780 Ti came out AFTER the 290X, I would call that pretty pathetic.


----------



## Kanan (Jan 27, 2017)

Captain_Tom said:


> Anno 2014?  What are you talking about lmao!  You can't read!
> 
> I posted the aggregate framerate - that includes Nvidia's broken "The Way It's Meant to not Boot" games.
> 
> Most people keep their cards for 2-3 years on average.  Even at launch the 780 Ti was tied in 4K, and only won by 10% at most in 1080p (1440p depended on the game).  Nobody with half a brain pays 30% more money for 0-10% more performance that will only last a year.   Considering the 780 Ti came out AFTER the 290X, I would call it pretty pathetic.


Yeah, of course: for a fanboy like yourself, anything Nvidia does is pathetic, and you hate people who use Nvidia cards, calling them "Nvidiots", but you know nothing about them. That is pathetic behaviour.

Yes, the 290X is better now than years before, but that doesn't change the fact that nobody can see the future. People paying over 500 bucks for a GPU mostly chose the 780 Ti because it was simply the all-around better GPU: power-consumption-wise, performance-wise by far (custom vs. custom), and because Nvidia simply had better drivers, at least until a few months ago.

4K didn't play the slightest role back then; it's not even really important now. At 1440p and 1080p, which I posted, even the reference 780 Ti had an easy win vs. the Lightning 290X, one of the best 290Xs, and this was many months after the 290X's release, so drivers were already a lot better.

You can say the 290X/390X is on the same level or better now, but power consumption is still a mess (idle, average gaming, multi-monitor, Blu-ray/web). So in the end it's still not really better in my books: I use multiple monitors, and I don't want to waste 35-40W. I also don't want a GPU that consumes 250-300W of power, and that's especially true for the 390X, which is even more power-hungry than the 290X. Nvidia GPUs are far more sophisticated with power gating and much more flexible with core clocking, and only Vega can change that, because Polaris didn't. And yes, I hope Vega is a success.

Comparing the 290X/390X with the 980 is just laughable. The 980 is a ton more efficient. Maybe efficiency isn't important to you, but for millions of users it is. It also has to do with noise: the 780 Ti/980 aren't as loud as the 290X/390X.

But I'm still laughing about you calling me an "Nvidiot fanboy". I'm a regular poster on the AMD reddit, I've owned several Radeon cards, and I know ~everything about AMD. If anything, I'm more likely an AMD fanboy than an Nvidia one, but that doesn't change certain facts that you can't change either.


----------



## kruk (Jan 27, 2017)

Kanan said:


> Yeah of course, for a fanboy like yourself, anything Nvidia does is pathetic and you hate people that use Nvidia cards, calling them "Nvidiots", but you know nothing of them. That is pathetic behaviour.
> 
> Yes 290X is now better than years before, but that doesn't change the fact nobody can see the future. People paying over 500 bucks for a GPU, mostly chose the 780 Ti, because it was simply the all around better GPU, power consumption wise, performance wise by far (custom GPU vs. custom) and also Nvidia had simply better drivers at least until a few months ago. 4K didn't play the slightest role back then, it's not even really important now. 1440p and 1080p which I posted, even the ref 780 Ti had a easy win VS. Lightning 290X there, which is one of the best 290X, and this was many months after the first release of 290X, so drivers were already a lot better. You can say that the 290X/390X is on same level or better now, but power consumption is still a mess (idle, average gaming, multi monitor, blu ray/web).  So in the end, it's still not really better in my books, as I'm using Multi Monitor and I don't want to waste 35-40W. I also don't want a GPU that consumes 250-300W of power, that's especially true for the 390X that's even more power hungry than the 290X. Nvidia GPUs are way more sophisticated power gating wise, much more flexible with core clocking, and only Vega can change that, because Polaris didn't. And yes I hope that Vega is a success.



Just a moment here. Did you even read the reviews? The reference 780 Ti consumed 15 watts less when gaming (muh better efficiency!!111), was 8% faster, and was 28% more expensive than the reference 290X. And even if the 290X was 8% slower, it still managed to push 60+ FPS in most games tested here on TPU at 1080p. People only bought the 780 Ti because it was nVidia, not because it was that much better, as you say. The only two problems with the 290X were its multi-monitor power consumption and poor reference cooling. Otherwise it was a great card! Stop making things up ...


----------



## theeldest (Jan 27, 2017)

medi01 said:


> It's an interesting take, but you need to explain why Vega would have worse perf/tflop than Polaris (after AMD explicitly claimed more "non tflop" stuff in it).
> 
> Also, 1060 outperforming 480 in your chart is... somewhat outdated.



It's because, as AMD's dies get larger and clocks go up, the performance per TFLOP decreases.

¯\_(ツ)_/¯
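The approach quoted earlier in the thread (plot GFLOPs against measured performance, fit a basic trend line, read off where a 12 TFLOP big Vega would land) can be sketched with a least-squares fit. The sample points here are invented placeholders, not the data behind the original attachment:

```python
# Linear trend of peak TFLOPs vs measured relative performance, extrapolated
# to a hypothetical 12 TFLOP part. The (tflops, perf) pairs are invented
# placeholders, not the data from the attachment in the quoted post.
import numpy as np

tflops = np.array([5.2, 5.8, 6.5, 8.2, 8.6, 11.0])   # peak FP32 TFLOPs
perf   = np.array([100, 108, 115, 135, 140, 160])    # relative performance index

slope, intercept = np.polyfit(tflops, perf, 1)       # degree-1 least-squares fit
vega_estimate = slope * 12.0 + intercept
print(f"estimated relative performance at 12 TFLOPs: {vega_estimate:.0f}")
```

A linear fit bakes in the assumption that perf per TFLOP keeps falling at the same rate for bigger chips, which is exactly the premise being disputed here, so the extrapolation is only as good as that assumption.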


----------



## Captain_Tom (Jan 27, 2017)

Kanan said:


> Yeah of course, for a fanboy like yourself, anything Nvidia does is pathetic and you hate people that use Nvidia cards
> 
> Comparing the 290/390X with the 980 is just laughable. The 980 is a ton more efficient. Maybe efficiency is not important for you, but for millions of users it is. It has also to do with noise, 780 Ti/980 aren't as loud as 290/390X.
> 
> But I'm still laughing about you calling me a "Nvidiot fanboy".




No, I have nothing inherently against Nvidia or any company that makes a product.  I don't call people "Nvidiots" because they buy Nvidia cards; I do so when I truly believe they're fanboys.    And yeah, I assume most people who defend Kepler are in fact fanboys, because Kepler was a big joke if you actually know what you're talking about.


I have owned plenty of Nvidia cards (haven't owned any for a few years now, though).  However, I honestly don't believe you when you say you own AMD cards, considering the continued noob arguments I keep hearing.


The 390X is noisy, huh?  Um, no: they were all whisper-quiet AIB cards.  Of course you probably don't know that, because you're clearly uninformed on all of these cards from top to bottom.  I mean, the 290X was hot, but not loud using its default fan settings; and again, that's the cheap launch cards.  If you bought the plentifully available AIB cards, you'd know they were very quiet.  Quite funny that you bring up noise, when the Titan series (and now the 1070/1080 FE) have been heavily criticized for their under-performing, noisy fan systems.


Also, the efficiency argument is my favorite myth.   Only with the release of Maxwell did Nvidia start to have any efficiency advantage at all, and that was only against the older-generation AMD cards.  I will leave you with this:






^WOW!  A full 5-10% more efficient (depending on the card)!    Anyone who thinks that is worth mentioning is simply looking for reasons to support "their side."

Pascal was really the first generation where Nvidia won on efficiency in any meaningful way.


----------



## TheHunter (Jan 27, 2017)

So... Vega.

Yeah, it's a nice little solar system.



March for NV, May for AMD; seems reasonable. Maybe that's just enough time to fine-tune it against the 1080 Ti...


----------



## BiggieShady (Jan 27, 2017)

theeldest said:


> performance per TFLOP decreases


That's completely nonsensical: performance is measured in TFLOPs, so there is no such thing as performance per TFLOP.
You probably meant that the ratio of average delivered performance in TFLOPs to peak theoretical performance in TFLOPs decreases.


----------



## Kanan (Jan 28, 2017)

kruk said:


> Just a moment here. Did you even read the reviews? The reference 780 Ti consumed 15 Watt less when gaming (muh better efficiency!!111), it was 8% faster and 28% more expensive vs the reference 290x. And even if the 290x was 8% slower, it still managed to push 60+ in most games tested here on TPU at 1080p. People only bought the 780 Ti because it was nVidia, not because it was that much better as you say. The only two problems with 290x were it's multimonitor power consumption and poor reference colling. Otherwise it was a great card! Stop making things up ...


Did you even read or understand what I've written? I was mainly talking about CUSTOM 780 Ti cards, and those were a great deal faster back then, not only 8%. And afaik the 290X, even being a reference card, ran pretty well, because it was tweaked by W1zzard and afaik was a cherry-picked GPU from AMD too. But you can't say that about the reference 780 Ti, which runs at pretty low clocks compared to custom ones. Over 200 MHz lower than decent customs.



Captain_Tom said:


> No I have nothing inherently against Nvidia, or any company that makes a product.  I don't call people "Nvidiots" because they buy Nvidia cards, I do so when I truly believe they are a fanboy.    And yeah I assume most people who defend Kepler are in fact fanboys, because kepler was a big joke if you actually know what you are talking about.


It's not, and I still know what I'm talking about. And you're still behaving like a fanboy, or someone who actually has no clue himself. Just FYI: the 1536-shader GTX 680, which consumed way less power than the HD 7970, was faster than AMD's 2048-shader GPU. AMD had only one way to react to it: push the HD 7970 to absurd clocks, calling it the "GHz Edition" at 1050 MHz, but also increasing its power consumption a lot along the way. The whole HD 7970 and R9 290X GPUs were about brute power with wide buses (384/512 bit) and high shader counts. Basically, the mistakes Nvidia made before with the GTX 280/480/580 were copied by AMD onto their HD 7000 and later lineups, while Nvidia basically tried to do what AMD pulled off with the HD 5000/6000 series, which were pretty efficient compared to the GTX 200/400/500 series. Only when the 290X was released did it put enough stress on Nvidia to counter it with their own full-force GPU, the GK110 with all of its shaders enabled (2880). It was more expensive, but also better in every single aspect.



> I have owned plenty of Nvidia cards  (Haven't owned any for a few years now though).  However I honestly don't believe you when you say you own AMD cards considering the continued noob arguments I keep hearing.


Said the angry fanboy. Who cares. I owned an HD 5850 from release until 2013, when I replaced it with an HD 5970 and used it until it started to become defective at the end of 2015. I own an HD 2600 XT now. Basically I owned two of the best AMD/ATI GPUs ever made. The reason I chose the HD 5000 over the GTX 200 series back then was simple: it was a lot better. Because I don't care about brands.



> The 390X is noisy huh?  Um no they were all AIB whisper quiet cards.


I said they are 'noisier', not noisy. Also I was mostly talking about the R9 200 series, not the 300s, which are pretty irrelevant to this discussion. But if you want to talk about the R9 300 series: yes, compared to the GTX 900 series they were noisy. You can't compare the R9 300 series to the GTX 700 series because they are different generations. Go and read some reviews.



> Of course you probably don't know that because you are clearly uninformed on all of these cards from top-to-bottom.


Childish behaviour all the way.



> I mean the 290X was hot, but not loud if using its default fan settings; and again, that's for the cheap launch cards.  If you bought the plentifully available AIB cards you would know they were very quiet.  Quite funny you bring up noise when the Titan series (and now 1070/80 FE) have been heavily criticized for their under-performing and noisy fan systems.


We are not talking about the Titan series. And the 1070/1080 are doing relatively well for what they are (exhaust-style coolers). AMD's exhaust coolers were just a mess after the HD 5000 series. Loud and hot. And later ineffective (the R9 290X fiasco).

I can bring up noise every time, since I'm absolutely right about Nvidia cards being quieter and way less power hungry, which has to do with the noise as well.



> Also the efficiency argument is my favorite myth.   Only with the release of Maxwell did Nvidia start to have any efficiency advantage at all, and that was only against the older generation AMD cards.  I will leave you with this:
> 
> View attachment 83498
> 
> ...


Then compare a custom R9 290X vs. a custom 780 Ti and you'll see I'm right. They are way more efficient and a lot faster. Also multi-monitor, idle and Blu-ray/web power consumption is still a mess on those AMD cards. Efficiency isn't only about when you're playing games.

I'm gonna drop this discussion now, because you're a fanboy and just a waste of time. Safe to say, you didn't prove anything I said wrong. The opposite is true, and you totally confirmed my point with your cherry-picking. Try that with someone else.

You're on ignore, bye-bye.


----------



## kruk (Jan 28, 2017)

Kanan said:


> Did you even read or understand what I've written? I was mainly talking about CUSTOM 780 Ti cards, and those were a great deal faster back then, not only 8%. And afaik the 290X, even being a reference card, ran pretty well, because it was tweaked by W1zzard and afaik was a cherry-picked GPU from AMD too. But you can't say that about the reference 780 Ti, which runs at pretty low clocks compared to custom ones. Over 200 MHz lower than decent customs.



Here are two benchmarks made at the same time with custom 780 Ti and custom 290x:
https://www.techpowerup.com/reviews/ASUS/R9_290X_Direct_Cu_II_OC/
https://www.techpowerup.com/reviews/MSI/GTX_780_Ti_Gaming/

The custom 780 Ti was 15% faster than the custom 290x at 1080p and *cost 18% more*, which gives the custom 290x the better performance/price. The custom 780 Ti also consumed *6% more power* in an average gaming session, making it *only 10%* more efficient at gaming than the custom 290x. Nowadays, in modern games, the reference 780 Ti and 290x are on par in performance at 1080p, which we can probably safely extrapolate to custom cards. You do the math.

As I said before, you either didn't read the reviews or you have a really bad memory. Pick one. I'm pulling out of this debate as it's not the topic of this thread; however, I did enjoy proving you wrong.


----------



## Kanan (Jan 28, 2017)

kruk said:


> Here are two benchmarks made at the same time with custom 780 Ti and custom 290x:
> https://www.techpowerup.com/reviews/ASUS/R9_290X_Direct_Cu_II_OC/
> https://www.techpowerup.com/reviews/MSI/GTX_780_Ti_Gaming/
> 
> ...


You didn't prove me wrong, but thanks for proving me correct. And while you're at it, don't forget the bad power consumption of the R9 290X at idle/multi-monitor and Blu-ray/web. And looking into the future is still impossible. Maybe more people would've bought the 290X knowing it would be better in 2-3 years, maybe not, because Nvidia had better drivers back then. I'm also pretty sure most people didn't care; they wanted the best performance NOW, not in years to come. Most enthusiast users who pay over 600 bucks for a GPU don't keep it for 2-3 years anyway; they replace GPUs much faster than normal users. AMD was always a good brand for people who kept their GPU a long time: their GPUs had more memory and future-oriented technology, like DX11 on the HD 5000 series and low-level API prowess on the R9 200/300 series. That doesn't change the fact that the last AMD GPU that was a real "winner" was the HD 5000 series. Everything after that was just a follow-up to Nvidia, always 1 or 2 steps behind.


----------



## kruk (Jan 28, 2017)

Kanan said:


> You didn't prove me wrong, but thanks for proving me correct. And while you're at it don't forget the bad power consumption of R9 290X on idle/multi monitor and BD/Web. And looking into the future is still impossible. Maybe more people would've bought the 290X knowing it's better in 2-3 years, maybe not, because Nvidia had better drivers back then. I'm also pretty sure most people didn't care, they wanted the best performance NOW, and not in years to come, most enthusiast users that pay over 600 bucks for a GPU don't keep it for 2-3 years anyway, they replace GPUs way faster than normal users.



Look, I know you've got choice-supportive bias because you own a 780 Ti, but avoiding the numbers won't make the nVidia card better. In my first post, which you obviously didn't read properly, I said clearly that multi-monitor was one of the two problems with the 290x.

Let's analyze the power consumption in other areas (*single monitor only*, of course, since the number of multi-monitor users is way lower):
The difference in single-monitor idle is *10 Watts*. This is also the difference between the cards at *average gaming*. If you leave the 290x idling, it can drop into its *ZeroCore Power* mode, which consumes virtually no power (I measured it myself on my 7750 and you can find measurements online), but I'm putting *2 W* there so you won't say I'm cheating ...

If you leave the computer running 24/7 and it *idles 12 hours*, you *game 6 hours*, watch a *Blu-ray movie for 2 hours* and do other things for *4 hours*, the 780 Ti will average (12h·10W + 6h·230W + 2h·22W + 4h·10W)/24h = *66 W* and the 290X will average (12h·2W + 6h·219W + 2h·74W + 4h·20W)/24h ≈ *65 W*. Virtually the same! You can say whatever you want, but the numbers are on my side.
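The 24-hour weighted average above can be checked in a few lines. This is just a sketch plugging in the per-state wattages and hours assumed in the post (the card names and dictionary layout are my own):

```python
# Sketch: verify the 24-hour time-weighted average power draw from the post.
# Per-state wattages (idle, gaming, Blu-ray playback, other/light use) are
# the figures assumed in the post above; hours sum to a full day.

hours = {"idle": 12, "gaming": 6, "bluray": 2, "other": 4}

gtx_780_ti = {"idle": 10, "gaming": 230, "bluray": 22, "other": 10}
r9_290x = {"idle": 2, "gaming": 219, "bluray": 74, "other": 20}  # 2 W assumes ZeroCore kicks in

def average_watts(card_watts):
    """Time-weighted average power over the 24 h usage profile."""
    total_hours = sum(hours.values())
    return sum(hours[state] * card_watts[state] for state in hours) / total_hours

print(average_watts(gtx_780_ti))  # 66.0
print(average_watts(r9_290x))     # 65.25
```

So with these assumed usage hours the two cards really do land within a watt of each other on average, which is the post's point.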


----------



## Kanan (Jan 28, 2017)

kruk said:


> Look, I know you've got choice-supportive bias because you own a 780 Ti, but avoiding the numbers won't make the nVidia card better. In my first post, which you obviously didn't read properly, I said clearly that multi-monitor was one of the two problems with the 290x.
> 
> Let's analyze the power consumption in other areas (*single monitor only*, of course, since the number of multi-monitor users is way lower):
> The difference in single-monitor idle is *10 Watts*. This is also the difference between the cards at *average gaming*. If you leave the 290x idling, it can drop into its *ZeroCore Power* mode, which consumes virtually no power (I measured it myself on my 7750 and you can find measurements online), but I'm putting *2 W* there so you won't say I'm cheating ...
> ...


Maybe so, because I'm kinda tired of this discussion. And yeah, I have to admit some bias, though I bought this 780 Ti used after the 390X was already released, and I still don't regret it a single bit. Maybe I mixed up the power consumption numbers of the 290X and 390X in my mind, because the 390X consumes more than the 290X (8 GB vs. 4 GB, higher clocks all around), and because I compared the 780 Ti with the 390X back then. So it's okay, I don't disagree with your numbers. However, I still don't like the multi-monitor power consumption on those GPUs; I didn't like it when I had the HD 5850/5970 either, those had the same problem, and web power consumption is way too high too (I don't care about BD, I just use the wording from TPU). For me it matters, for others maybe not. I didn't have the choice between a 390 and a 780 Ti anyway; I bought this from a friend at a discount. Originally I had an R9 380 delivered to me, but it was defective from the start, so I asked him if he wanted to sell me his 780 Ti, because he had just bought a 980 Ti, and I returned the 380. That's it.


----------



## medi01 (Jan 28, 2017)

That "people buy good products" crap is annoying in 2017. We have seen it with Prescott we have seen it with Fermi, it is clearly not the case.


----------



## cdawall (Jan 28, 2017)

medi01 said:


> That "people buy good products" crap is annoying in 2017. We have seen it with Prescott we have seen it with Fermi, it is clearly not the case.



Fermi was the superior-performing card? It took *two* AMD cards in Crossfire to best the GTX 480.


----------



## BiggieShady (Jan 28, 2017)

cdawall said:


> Fermi was the superior performing card? It took *two* AMD cards in crossfire to best the GTX480.


Fermi was hot but great, a big improvement over the 200 series ... however, it wasn't as good as the Evergreen series until the 580 model. The timeline went something like this:



 
Fermi had a huge die size compared to Evergreen and its successors ... AMD already had the 5870, which the 480 couldn't quite dethrone without heavy tessellation (Evergreen's Achilles heel) ... the thing is, the 580 was a proper Fermi improvement (performance, power, temperature and noise), while the 6000 series wasn't an improvement at all over Evergreen.
Two 5000- or 6000-series GPUs in Crossfire had huge issues with frame pacing back then (and it was an issue all the way up to the Crimson driver suite), so they would actually beat any Fermi in average frame rate, but the measured frame-time variations made it feel almost the same. Maybe that's what you are referring to?


----------



## Kanan (Jan 28, 2017)

cdawall said:


> Fermi was the superior performing card? It took *two* AMD cards in crossfire to best the GTX480.


The GTX 480 wasn't bested anyway, because Crossfire back then was a complete mess, with frame variances all over the place; the only difference being it wasn't public knowledge like it is now. But the GTX 480 bested itself by being loud, hot and power hungry; it wasn't a good GPU if you ask me. Maybe 20-30% faster than the HD 5870, but not really worth the hassle. The best *card* was the HD 5970 if you ignore the Crossfire problems. It was actually pretty nice with frame pacing; too bad that feature was introduced in 2013, not Nov. 2009 when the GPU was released.


----------



## cdawall (Jan 28, 2017)

Kanan said:


> The gtx 480 wasn't bested anyway because crossfire back then was a complete mess, frame variances all over the place, only difference being it wasn't public knowledge like it is now. But the gtx 480 bested itself by being loud, hot and power hungry, it wasn't a good gpu if you ask me. Maybe 20-30% faster than HD 5870 but not really worth the hassle. The best *card* was the HD 5970 if you ignore the crossfire problems. It was actually pretty nice with frame pacing, too bad that feature was introduced in 2013 not Nov.  2009 when the GPU was released.



I agree; my point was that, performance-wise, it was almost two full AMD GPUs ahead.


----------



## BiggieShady (Jan 28, 2017)

I remember the HD 5870 vs GTX 480 being neck and neck at launch ... the Radeon was even better in Crysis, which was the uber benchmark at the time ...
I believe the performance lead for Fermi came later with driver maturity ... and by that time it was GTX 580 vs HD 6970, a win for Nvidia, and very soon after GTX 580 vs 7970, a win for AMD, until Kepler and so on.


----------



## Kanan (Jan 28, 2017)

BiggieShady said:


> I remember HD5870 vs GTX480 at launch being neck to neck ... radeon was even better in Crysis which was the uber benchmark at the time ...
> I believe performance lead for fermi came later with driver maturity ... and by that time it was GTX 580 vs HD 6970 win for nvidia and very soon after GTX 580 vs 7970 win for amd until kepler and so on.


Off the top of my head, I only remember the GTX 480 being mostly faster, with a few games that favoured ATI tech going to the HD 5870, and the HD 5970 being always on top (except for the few games that didn't support CF).

btw.
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html


----------



## cdawall (Jan 29, 2017)

Kanan said:


> I only remember out of my head, the GTX 480 being mostly faster, with a few games that favoured ATI tech and the HD 5870 and the HD 5970 being always on top (always but a few games that didn't support CF).
> 
> btw.
> https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html



It was faster in almost everything lol


----------



## Kanan (Jan 29, 2017)

cdawall said:


> It was faster in almost everything lol


Yeah, I screwed up the wording lol. Such a nice card, and the cooler was designed like a grill too, so it was ready for barbecue. I think they did a great job fixing the issues with the GTX 580 though.


----------



## BiggieShady (Jan 29, 2017)

Kanan said:


> I only remember out of my head, the GTX 480 being mostly faster but with a few games that favoured ATI tech with the HD 5870 and the HD 5970 being always on top (always but a few games that didn't support CF).
> 
> btw.
> https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html


Right ... my memory isn't what it used to be


----------



## cdawall (Jan 29, 2017)

Kanan said:


> Yeah I screwed the wording lol. Such a nice card and the cooler was designed like a grill too, so it was ready for barbecue. I think they did a great job fixing the issues with the GTX 580 though.



I loved my 470s, to the point where I still have three of them with DD blocks, just in case I want to feel nostalgic.


----------



## medi01 (Jan 30, 2017)

cdawall said:


> I agree my point was performance wise it was almost two full AMD GPU's ahead



What on earth:








----------



## cdawall (Jan 30, 2017)

medi01 said:


> What on earth:



Wait, let's look at this graph. A 100% performance match to the 5970. Would you mind telling everyone how many GPUs the 5970 contained?


----------



## theeldest (Jan 30, 2017)

BiggieShady said:


> That's completely nonsensical, performance is measured in TFLOPs, there is no such thing as performance per TFLOP.
> You probably meant that ratio of average performance in TFLOPs versus peak theoretical performance in TFLOPs decreases.



I'm referencing my previous posts with the graphs and tables. There I define performance as the Average Performance as listed in the summary of the Titan X Pascal review.

Post 1: https://www.techpowerup.com/forums/...-launch-in-may-2017-leak.229550/#post-3584964

Post 2: https://www.techpowerup.com/forums/...h-in-may-2017-leak.229550/page-4#post-3585723


The table for reference:





AMD's larger dies have lower performance per FLOP.

Before you reply that this is a stupid methodology, please put together something based on a methodology that you think is good and use that supporting data.
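For clarity, the metric in that table is just relative average performance (the review-summary percentage) divided by peak theoretical TFLOPS. A minimal sketch of the calculation, using made-up example numbers rather than the actual table values:

```python
# Sketch of the "performance per TFLOP" metric being debated: relative
# average performance (a review-summary percentage) divided by peak
# theoretical single-precision TFLOPS. Card names and numbers below are
# purely illustrative, not the actual table values.

cards = {
    # name: (relative_performance_pct, peak_tflops)
    "card_a": (100.0, 11.0),
    "card_b": (70.0, 8.5),
}

def perf_per_tflop(rel_perf, tflops):
    """Ratio of relative performance to peak theoretical throughput."""
    return rel_perf / tflops

for name, (perf, tflops) in cards.items():
    print(f"{name}: {perf_per_tflop(perf, tflops):.2f}")
```

The absolute numbers are unitless and arbitrary; only the relative comparison between cards (the trend in the graph) carries meaning.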


----------



## EarthDog (Jan 30, 2017)

Can you tell us why the relationship of die size to FLOPS matters to people? It's not like it translates to anything tangible for the user, like FPS or compute power. It's just some math that divides TFLOPS by die size...

Am I missing something?


----------



## cdawall (Jan 30, 2017)

EarthDog said:


> Can you tell us why the relationship of die size to flop matters to people? Its not like that translates to anything tangible for the user, like FPS, or compute power. Its just some math that divides TFlops by die size...
> 
> Am I missing something?



No, you are missing nothing. He posted graphs in one of the other threads claiming there is such a thing as performance per FLOP, etc. Basically he used Excel and is proud of it. Let him have his Excel moment; remember, not everyone can use Excel.


----------



## EarthDog (Jan 30, 2017)

It just seems so arbitrary.. like comparing the size of the rims on a car to how many windows it has. It really has no bearing on anything. I mean, cool metric, but what is it actually telling us? How can a consumer use that data to gauge anything??

Consumers couldn't care less if there was something the size of a postage stamp or an 8.5"x11" die under there... really.


----------



## cdawall (Jan 30, 2017)

EarthDog said:


> It just seems so arbitrary.. like, the size of the rims on the car compared to how many windows it has. It really has no bearing on anything. I mean, cool metric, but, what is it actually telling us? How can a consumer use that data to gauge anything??
> 
> Consumers could care less if there was something the size of a postage stamp or a 8.5"x11" die under there... really.



Consumers don't care about anything but RGB lights.


----------



## BiggieShady (Jan 30, 2017)

theeldest said:


> Before you reply that this is a stupid methodology, please put together something based on a methodology that you think is good and use that supporting data.



Ah, that little table from your original post explains where you're coming from ... I wouldn't use the word stupid though, just slightly unscientific ... FLOPS is a unit, and performance is measured using that unit (among other things, such as fps, but that's resolution-dependent, unlike FLOPS). That little table has, in column 4, the ratio "average frames per second / peak theoretical floating-point operations per second" ... and it would be great to have a unitless ratio for that purpose, but that's difficult because actual performance is measured in fps.

Since you are depicting a trend in your graph, it should work out *if and only if* everything is measured at the same resolution (either that or the all-resolution summary) and GPU usage is the same in all games/samples. The second condition is the difficult one.
Why usage? Because, for example, an 8 TFLOPS GPU at 75% usage skews your perf/FLOPS ratio compared to an 8 TFLOPS GPU at 100% usage.
As I said, it's a good thing you extrapolated a linear trend out of the data points to combat the GPU-usage skew.

I'm not even going into how one game may need X floating-point operations to calculate a single pixel while another needs 2X ... but again, the same games were used, so the relative trend analysis works out.

I may have come across as a dick, which wasn't my intention, but I admit that the fact that you are looking for a trend, and a relative AMD-vs-Nvidia relation between trends (using averages and the same games), helps make your methodology work.


----------



## medi01 (Jan 31, 2017)

On par with the Titan X will be good enough for AMD (and is actually reasonable to expect).



cdawall said:


> Wait lets look at this graph. 100% performance match to the 5970. Would you mind telling everyone how many GPU's the 5970 contained?



The 5870, a single chip, is listed at 91%. (Ignoring the fact that TPU's percentage charts were messed up in a major way until recently.)


----------



## EarthDog (Jan 31, 2017)

medi01 said:


> (ignoring the fact that TPU % charts were messed up in a major way until recently)


Sorry.. what now?


----------



## cdawall (Jan 31, 2017)

medi01 said:


> 5870, a single chip, is listed at 91%. (ignoring the fact that TPU % charts were messed up in a major way until



It still took two to equal it at 100%, and that's a 1024x768 chart. I mean, really, come on.


----------



## Kanan (Jan 31, 2017)

The HD 5970, being a CF card, scales very poorly at 1024x768. Safe to say the GTX 480 was far from performing like two ATI GPUs. HD 5850 CF / the 5970 (which is essentially almost the same at stock clocks), and especially HD 5870 CF, wiped the floor with the GTX 480. The GTX 580 did a much better job there, but still wasn't 2x as fast. This is purely talking fps at high resolutions, not real picture quality. I think in that generation the HD 5850 was the best GPU price/performance-wise, and the HD 5870 the best performance-wise without costing too much energy.


----------



## cdawall (Jan 31, 2017)

Kanan said:


> HD 5970 being a CF card scales very poorly on 1024x . Save to say GTX 480 was far away from performing like two ATI Gpus.  HD 5850 CF / 5970 which is essentially almost the same without overclock and especially HD 5870 CF wiped the floor with the GTX 480. The GTX 580 made a much better job at that but still wasn't 2x as fast. This is purely talking fps on high resolutions, not real picture quality. I think in that generation the HD 5850 was the best GPU price to performance wise and HD 5870 the best gpu performance wise without costing too much energy.



I had Crossfire back then. In the three games it actually worked in, it was awesome. Fermi was a turning point for Nvidia and to this date one of my favorite cards.

As for power consumption... um, it wasn't that high, just high for the time. The 290 consumes more power, and so do the rest of the current top-tier cards. The whole "it uses less energy" crutch is annoying. I didn't buy a gaming PC to save the ecosystem. Performance is performance, and Fermi offered more of it. They also consumed less power once you got the temps down.


----------



## Kanan (Jan 31, 2017)

cdawall said:


> I had crossfire back then. In the three games it actually worked in awesome. Fermi was a turning point for nvidia and to this date one of my favorite cards.
> 
> As for power consumption...um it wasn't that high, just high for the time. The 290 consumes more power so do the rest of the current top tier cards. The whole it uses less energy crutch is annoying. I didn't buy a gaming PC to save the ecosystem. Performance is performance and fermi offered more of it. They also consumed less power once you got the temps down.


That doesn't change the fact that the GTX 480 was not an efficient GPU and the HD 5870 was much better all around.

Comparing older generations with new ones is kinda moot. Its energy consumption for its performance was way too high. The GTX 580 was okay; the GTX 480 was a mess. It's like comparing a reference 290 with a custom 290 or the 390s.

You find it annoying that I'm talking about efficiency and power consumption, fine. I'm not using my PC to save the environment either (look at my specs), but I'd choose the more efficient GPU any day; that's my whole point. I always try to find a good balance.


----------



## cdawall (Jan 31, 2017)

580 overclocked had a habit of consuming just as much power as the 480 if not more...


----------



## Kanan (Jan 31, 2017)

cdawall said:


> 580 overclocked had a habit of consuming just as much power as the 480 if not more...


But it was so much faster too. 32 more CUDA cores, much more efficient, everything. I had a GTX 570 revised edition with the smaller PCB; that was a great card. Very efficient for a big Fermi.


----------



## cdawall (Jan 31, 2017)

Kanan said:


> But it was so much faster too. 32 Cuda cores more, much more efficient. Everything. I had a gtx 570 revised edition with smaller pcb that was a great card. Very efficient for a big Fermi.



It was an additional cluster unlocked over the 480 and the same basic core, just on a better process. You know, the same issue we are seeing with the current AMD 480: piss-poor process, higher power consumption, etc.


----------



## Kanan (Feb 1, 2017)

cdawall said:


> It was an additional cluster unlocked over the 480 and the same basic core just a better process. You know the same issue we are seeing with the current AMD 480. Piss poor process, higher power consumption etc.


I heard they reworked the architecture; I don't think it was the process, or only partly, as they already had experience with the process from the revamped GTX 275 series before it (the usual test-drive GPUs for new processes). In the end it had the additional cluster plus a far better functioning GPU in general. GF110 != GF100.


----------



## theeldest (Feb 2, 2017)

cdawall said:


> No you are missing nothing. He posted graphs in one of the other threads claiming how there is performance per flop etc. Basically he used excel and is proud of it. Let him have his excel moment remember not everyone can use excel.


----------



## medi01 (Feb 12, 2017)

EarthDog said:


> Sorry.. what now?



Good morning.
https://www.techpowerup.com/forums/threads/tpu-math-check.224326/page-2#post-3493258


----------

