# Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%



## btarunr (Dec 19, 2016)

Over the weekend, gamers began testing the new DirectX 12 renderer of Tom Clancy's "The Division," released through a game patch. Testing by GameGPU (a Russian media site) shows that the AMD Radeon RX 480 is about 16 percent faster than the NVIDIA GeForce GTX 1060 6GB with the game running in the new DirectX 12 mode. "The Division" was tested with its new DirectX 12 renderer on an ASUS Radeon RX 480 STRIX graphics card driven by Crimson ReLive 16.12.1 drivers, and compared with an ASUS GeForce GTX 1060 6GB STRIX driven by GeForce 376.19 drivers. Independent testing by German tech site ComputerBase.de supports these findings.


*View at TechPowerUp Main Site*


----------



## cryohellinc (Dec 19, 2016)

Impressive work from AMD! Nvidia really needs to stop slacking and start optimising their drivers/GPUs for next-gen (or current-gen; I'd say next-gen, as there are currently only one or two games built from scratch on DX12/Vulkan) APIs. Otherwise, if AMD has much better performance at a lower price on the new APIs, they might easily win a big chunk of the market back from Nvidia. I mean honestly, DX12 should give INSANE performance gains, yet what we see in all of those "DX12 updates" for various titles is actually NEGATIVE (wtf) scaling. Shame!

However, all in all this is good news; competition always drives development and makes them work for their money. Can't wait to see Nvidia's reaction after the release of Vega.


----------



## P4-630 (Dec 19, 2016)

But nowhere near GTX 1070 performance, as some people claim an RX 480 can match a GTX 1070...
RX 480 is GTX 1060 territory.


----------



## btarunr (Dec 19, 2016)

P4-630 said:


> But nowhere near GTX1070 performance as some people claim an RX480 can match a GTX1070....
> RX480 is GTX1060 territory.



I don't think anybody claims that.


----------



## IceScreamer (Dec 19, 2016)

This is great news for AMD users, but only when DX12 is properly implemented, which was sadly a lottery so far.


----------



## P4-630 (Dec 19, 2016)

btarunr said:


> I don't think anybody claims that.



Some people in the RX480 club thread iirc...

"_A binned and OC'd RX480_"


----------



## btarunr (Dec 19, 2016)

P4-630 said:


> Some people in the RX480 club thread iirc...



Ahh, fanclub threads' views can't be made "general perception." RX 480 is in GTX 1060 territory...and is faster in "The Division" DX12.


----------



## john_ (Dec 19, 2016)

cryohellinc said:


> Impressive work from AMD! Nvidia really needs to stop slacking and start optimising their drivers/GPUs for next-gen (or current-gen; I'd say next-gen, as there are currently only one or two games built from scratch on DX12/Vulkan) APIs. Otherwise, if AMD has much better performance at a lower price on the new APIs, they might easily win a big chunk of the market back from Nvidia. I mean honestly, DX12 should give INSANE performance gains, yet what we see in all of those "DX12 updates" for various titles is actually NEGATIVE (wtf) scaling. Shame!
> 
> However, all in all this is good news; competition always drives development and makes them work for their money. Can't wait to see Nvidia's reaction after the release of Vega.



It's NOT Nvidia's fault. We're just seeing a reversal of roles compared to five years ago. Five years ago, everyone was developing on Intel+Nvidia platforms. Even the first Xbox One demos were running on PCs with Intel and Nvidia hardware. So games were getting all their optimization and stability checks done on Nvidia, and AMD looked like a company with incompetent programmers. Fast forward to today, and you have games getting optimized for DirectX 12 (not much different from Mantle) and the GCN architecture. Nvidia is in the same shoes AMD was in five years ago: waiting for developers to improve their games/game engines for its Maxwell and Pascal architectures with various patches, while also having to run a modern API that was tailored for its main competitor's hardware.


----------



## thevoiceofreason (Dec 19, 2016)

It looks like RX470 might be very close then, which seems pretty good considering the prices.


----------



## Lionheart (Dec 19, 2016)

Well that's good news, but I'm more impressed with the FX 8370 comparison.


----------



## Prima.Vera (Dec 19, 2016)

Something's wrong with nVidia's drivers. At 1080p they gain just the same as AMD, but at 1440p or more they actually lose a lot. WTH nVidia??


----------



## erixx (Dec 19, 2016)

Please, someone remind me of the benefit of DX12 I am missing... as I do not hold any stock in AMD.


----------



## btarunr (Dec 19, 2016)

erixx said:


> Please, someone remind me the benefit of DX12 I am missing... as I do not hold any stock in AMD



www.techradar.com/news/gaming/DirectX-12-what-is-it-and-why-it-matters-to-PC-gamers/articleshow/51749548.cms


----------



## erixx (Dec 19, 2016)

Yo, Btarunr, thanks, but it was more of a polemic question. Anyway: dead link.

*Sorry! Page not found.*
The article you requested has either been moved or removed from the site.


----------



## btarunr (Dec 19, 2016)

erixx said:


> Yo, Btarunr, thanks but it was more a polemic question, anyway: dead link.
> 
> *Sorry! Page not found.*
> The article you requested has either been moved or removed from the site.



http://www.in.techradar.com/news/ga...matters-to-PC-gamers/articleshow/51749548.cms


----------



## refillable (Dec 19, 2016)

It seems clear by now that the RX 480 is (slightly) the better card of the two. It was far from that at launch. Not good, Nvidia.


----------



## ShurikN (Dec 19, 2016)

I've been reading some Reddit posts, and people are reporting huge gains on low-end and/or old CPUs, mostly with AMD cards.


----------



## ZeppMan217 (Dec 19, 2016)

The minimum FPS is notably higher with the 1060, though: 42 vs 35 FPS.


----------



## qubit (Dec 19, 2016)

I tested it and got blue screens all over. Glorious.

Even when the BSODs stopped, the game still didn't work (crashed). Not sure if it was a faulty game, dodgy NVIDIA drivers, or a Windows fault. Never mind; I haven't bought the game and the trial period has expired, so the point is moot. It's very likely to work the next time I try it, in several months' time.

It worked in DX11, but even there it sometimes crashed.


----------



## EzioAs (Dec 19, 2016)

While this is impressive and a good job by Massive, it's only with the FX 8370 at 1080p that the lead is around 16%. In other cases, it's closer than the headline might suggest.


----------



## bug (Dec 19, 2016)

refillable said:


> It seems that it is clear right now that the RX 480 is (slightly) the better card among the two. It was far from that at launch. Not good Nvidia.


Not necessarily. When you're done benchmarking, you still play in DX11 mode, 'cause it's faster.
I've said it from the beginning, but few were listening: by the time DX12/Vulkan become relevant, both Polaris and Pascal will be obsolete. Futureproofing is nice (says the guy who has held on to his 660 Ti for four years), but what you're playing today takes precedence.

And before someone burns me at the stake: I know a 480 offers pretty much the same gaming experience as a 1060. I only went with Nvidia for the Linux support.


----------



## Hokum (Dec 19, 2016)

I think the performance of the Fury X should be commented on as well: only just behind the 1070 and 1080 at 1080p, and only behind the 1080 at higher resolutions.
Not bad for a last-gen card.


----------



## refillable (Dec 19, 2016)

bug said:


> Not necessarily. When you're done benchmarking, you still play in DX11 mode. Cause it's faster.
> I've said it from the beginning, but few were listening: by the time DX12/Vulkan become relevant, both Polaris and Pascal will be obsolete. Futureproofing is nice (says the guy that has held on to his 660Ti for four years), but what you're playing today takes precedence.
> 
> And before someone burns me to the stake: I know that a 480 offers pretty much the same gaming experience as a 1060. I only went with Nvidia for the Linux support.



I keep using DX12/Vulkan after benchmarking, as I'm on AMD cards; unlucky Nvidia guys can't do the same (hehe, jk). The DX11 renderer isn't significantly faster anywhere, though. Since DX12/Vulkan have made the RX 480 faster on average, they've been relevant from the day they did so.


----------



## Basard (Dec 19, 2016)

Lionheart said:


> Well that's good news, but I'm more impressed with the FX 8370 comparison.



Looks GPU-bound to me, until you get into GTX 1080 territory; then the faster CPU helps. Still looks nice if you own an FX chip.


----------



## Melvis (Dec 19, 2016)

Lionheart said:


> Well that's good news, but I'm more impressed with the FX 8370 comparison.



Yes indeed, I didn't even bother to look at those graphs till you said that, and it was surprising to see. Not bad at all! Must be a game that uses a lot of threads?


----------



## NeDix! (Dec 19, 2016)

Mmm, I'm more shocked by the 380X vs. the 970 :\


----------



## birdie (Dec 19, 2016)

It's a little bit funny how AMD fanatics and NVIDIA fans slam NVIDIA for its DX12 performance, especially when comparing the GTX 1060 and RX 480, but everyone fails to notice the elephant in the room:

The GTX 1060 in DX12 has a *higher* minimum FPS than the RX 480, which usually translates into smoother gameplay.

Here's another revelation: DX12 works just fine for Pascal.

Thirdly, the RX 480 has 5.8 TFLOPS while the GTX 1060 has less, at 3.8 TFLOPS, so naturally the RX 480 should _not_ be slower than the GTX 1060, at least in cases where raw performance (Direct3D 12/Vulkan/Mantle) matters. At the same time, NVIDIA did an impeccable job with its D3D9/10/11 drivers, because it always beats more powerful AMD GPUs while having fewer transistors and lower raw performance.
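For reference, those raw-throughput numbers fall straight out of shader count × 2 FLOPs per clock × clock speed. A quick sketch; the shader counts and clocks below are the commonly cited spec-sheet figures (my assumption, not from this thread):

```python
# Peak FP32 throughput: shaders * 2 FLOPs per clock * clock rate.
# Shader counts and clocks are reference spec-sheet figures; real cards vary.
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

rx480 = tflops(2304, 1266)    # RX 480 at its 1266 MHz boost clock
gtx1060 = tflops(1280, 1506)  # GTX 1060 at its 1506 MHz base clock

print(f"RX 480:   {rx480:.2f} TFLOPS")   # ~5.83
print(f"GTX 1060: {gtx1060:.2f} TFLOPS") # ~3.86
```

The 3.8 figure matches the 1060's base clock; at its 1708 MHz boost clock the same formula gives roughly 4.4 TFLOPS, so the raw-throughput gap is somewhat smaller in practice.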



NeDix! said:


> mmm i am more shocked by the 380x vs 970 :\



I'm not. D3D12/Vulkan were modelled after AMD GPUs, so naturally AMD has had a huge head start in terms of performance in these new APIs, while NVIDIA's first attempt at them was the Pascal architecture. Remember how it took AMD four-plus years to match NVIDIA's tessellation performance? So why should NVIDIA's first attempt at D3D12/Vulkan be as fast as its competitor's?


----------



## LightningJR (Dec 19, 2016)

Idk who is making these graphs, but wow, the GameGPU ones are bad... yikes. The scale between bars is horribly off: a 46 fps average is drawn higher than a 47 fps average, and a 61 fps average has a large gap ahead of 60 fps...


----------



## john_ (Dec 19, 2016)

birdie said:


> AMD fanatics and NVIDIA fans


Can someone spot the difference?


----------



## rtwjunkie (Dec 19, 2016)

bug said:


> I've said it from the beginning, but few were listening: by the time DX12/Vulkan become relevant, both Polaris and Pascal will be obsolete.



Right there with you; I have been since the middle of last year. I publicly stated a number of times that by the end of this year, DX12 would still not be mainstream or widely adopted. I got flamed so much it's not even funny. Well, who's laughing now?

BOTH camps' current GPUs will be obsolete before DX12 actually matters.


----------



## ADHDGAMING (Dec 19, 2016)

btarunr said:


> I don't think anybody claims that.


Yeah, no one has ever claimed that, and if they did, they did so to try to misinform the masses.


----------



## EarthDog (Dec 19, 2016)

rtwjunkie said:


> Right there with you; I have been since the middle of last year. I publicly stated a number of times that by the end of this year, DX12 would still not be mainstream or widely adopted. I got flamed so much it's not even funny. Well, who's laughing now?
> 
> BOTH camps' current GPUs will be obsolete before DX12 actually matters.


Lol, I was in that party... couldn't care less about the flames; flames from those without the foresight to read what the market is saying, and has said in the past.

But hey... a game or so here and there is improving... go AMD! Lol



btarunr said:


> I don't think anybody claims that.


Not in this thread.


----------



## ShurikN (Dec 19, 2016)

birdie said:


> GTX 1060 at DX12 has a *higher* minimum FPS than RX 480 which usually translates into smoother gameplay.


https://www.overclock3d.net/reviews...e_division_directx_12_pc_performance_review/2

I'll just leave this here, then... never trust one single review.


----------



## TheinsanegamerN (Dec 19, 2016)

ShurikN said:


> https://www.overclock3d.net/reviews...e_division_directx_12_pc_performance_review/2
> 
> I'll just leave this here, then... never trust one single review.


So, two reviews vs. one review. Almost as if different test benches can reveal different results, hence why we consider the 1060 and 480 equal in performance.

My only question is when AMD is going to bother releasing a bigger GPU to take full advantage of DX12. All that low-level goodness doesn't matter if you are constantly GPU-bound.


----------



## BiggieShady (Dec 19, 2016)

rtwjunkie said:


> BOTH camps' current GPUs will be obsolete before DX12 actually matters.


Indeed. Remember the first DX11 game, BattleForge (yeah, me neither)? DX11 mode (just the tessellation) was added in winter '09, a few months after the debut of AMD's Evergreen architecture. NVIDIA's Fermi came the next year as the 400 series, and a full year after Evergreen came the 500 series with the motto "DX11 done right". Fast forward another year to 2011, and you had a total of 10 to 15 DX11 games up to that point. And DX11 was only a couple of features tacked onto DX10, just as DX10 was a couple of features tacked onto DX9: no massive API rework, no huge pipeline change, and everything still managed by the driver.
So it took two years to adopt the incremental API changes in DX11; it should take a few more for effective DX12 adoption by devs, who (for some reason) still need to learn how to use it efficiently for both AMD and NVIDIA, because as usual the best case for AMD is the worst case for NVIDIA and vice versa.


----------



## birdie (Dec 19, 2016)

I feel like it'll take more than two years for devs to switch from the D3D9/10/11 mindset to D3D12. Then it'll probably take even longer than that, because the install base of D3D11 is just too large to ignore (and many gamers refuse to switch to Windows 10, to which D3D12 is exclusive), so at the moment it's financially unfeasible to target D3D12 only (as Quantum Break has already demonstrated). I for one would like devs to embrace Vulkan instead of the vendor-locked D3D12, but so far only id Software and Valve have embraced this open cross-platform/cross-OS API.

Remember, D3D12 and Vulkan are *completely different* APIs from everything that came before. To give you an analogy, it's like switching from Java/C# to something akin to C or even assembler. It's extremely hard.

Lastly, why do people want to belong to AMD/NVIDIA camps so much? These companies don't give a crap about what you think and believe in, yet people vehemently praise "their" vendor over the other one. Could we show a little more respect for each vendor's best features without turning into a branch of WCCFTech?


----------



## Prima.Vera (Dec 19, 2016)

qubit said:


> I tested it and got blue screens all over. Glorious.
> 
> Even when the bsods stopped, the game still didn't work (crashed). Not sure if the game was faulty, dodgy NVIDIA drivers or a Windows fault. Never mind, I haven't bought the game and the trial period has expired, so the point is moot. It's very likely to work the next time I try it in several months time.
> 
> It worked in DX11, but even there it sometimes crashed.


Oh. Where can you download the trial?


----------



## nguyen (Dec 19, 2016)

Lol, 20% faster minimum and 7% slower average at 1080p; if anything, this is still a win for Nvidia in my book. Reminds me of AMD's obnoxious CrossFire performance a few years back, where scaling was more than 100% in average FPS but with massive micro-stuttering that only AMD fanatics could tolerate; this looks to be the same.
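Worth spelling out why minimum and average can disagree like that: the average is dominated by the many fast frames, while the minimum reflects the single worst one. A toy sketch with made-up frame times (not measured data from this game):

```python
# Average FPS vs minimum FPS computed from per-frame render times (ms).
# Both frame-time traces below are invented purely to illustrate the point.
def avg_fps(frame_ms):
    return len(frame_ms) / (sum(frame_ms) / 1000.0)  # frames / total seconds

def min_fps(frame_ms):
    return 1000.0 / max(frame_ms)  # rate implied by the worst single frame

smooth = [16.0] * 60                 # 60 perfectly even frames
stutter = [12.0] * 57 + [40.0] * 3  # mostly fast, but three big hitches

print(f"smooth : avg {avg_fps(smooth):.1f}, min {min_fps(smooth):.1f}")
print(f"stutter: avg {avg_fps(stutter):.1f}, min {min_fps(stutter):.1f}")
```

The stuttery trace wins on average (~74.6 vs 62.5 FPS) yet its minimum (25.0 vs 62.5 FPS) is far worse, which is why reviews quote both numbers.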


----------



## bug (Dec 19, 2016)

birdie said:


> I feel like it'll take more than two years for devs to switch from the D3D9/10/11 mindset to D3D12. Then it'll probably take even longer than that, because the install base of D3D11 is just too large to ignore (and many gamers refuse to switch to Windows 10, to which D3D12 is exclusive), so at the moment it's financially unfeasible to target D3D12 only (as Quantum Break has already demonstrated). I for one would like devs to embrace Vulkan instead of the vendor-locked D3D12, but so far only id Software and Valve have embraced this open cross-platform/cross-OS API.
> 
> Remember, D3D12 and Vulkan are *completely different* APIs from everything that came before. To give you an analogy, it's like switching from Java/C# to something akin to C or even assembler. It's extremely hard.
> 
> Lastly, why do people want to belong to AMD/NVIDIA camps so much? These companies don't give a crap about what you think and believe in, yet people vehemently praise "their" vendor over the other one. Could we show a little more respect for each vendor's best features without turning into a branch of WCCFTech?



That's the exact analogy I made. And sticking with that analogy: just because C can make things run faster doesn't mean everything is written in C. That's why I think not all developers can or will switch to the low-level APIs.
If some choose to be early adopters, good for them. But I'll sit this one out and see what happens.


----------



## qubit (Dec 19, 2016)

Prima.Vera said:


> Oh. Where can you download the trial?


Nah, sorry dude, it was a free weekend on Steam. Simply install and play for a limited time and that time has now expired. 

You might not be missing anything though, as one of my mates says this game is a grind and doesn't like it. I never had the chance to find out, lol.


----------



## BiggieShady (Dec 19, 2016)

birdie said:


> ... at the moment it's financially unfeasible to target D3D12 only (as Quantum Break has already demonstrated). I for one would like devs to embrace Vulkan instead of the vendor-locked D3D12, but so far only id Software and Valve have embraced this open cross-platform/cross-OS API.


Interesting you mention it, because both id Software and Valve are game-engine devs.
For example, Unreal Engine has an experimental DX12 mode, and when you turn it on, FPS goes from 110 to 80.
And that is going to be the most prevalent way of using DX12: turning it on in a ready-made engine.
Also interesting to note: the async feature in UE4 was implemented by Lionhead Studios... for a game that was cancelled.
In other words, it's a mess.


----------



## alucasa (Dec 19, 2016)

qubit said:


> I tested it and got blue screens all over. Glorious.



You need to quit piloting that plane of yours (ur avatar). Of course, you see a blue sky (screen) in that thing.


----------



## qubit (Dec 19, 2016)

alucasa said:


> You need to quit piloting that plane of yours (ur avatar). Of course, you see a blue sky (screen) in that thing.


Oh, that SR-72 does Mach 6 and is a tad hard to control with my keyboard and mouse.  One wrong move and it dives into the ground in the blink of an eye. No console could ever handle it.


----------



## phanbuey (Dec 19, 2016)

they need to roll out the 2080 already


----------



## medi01 (Dec 19, 2016)

P4-630 said:


> But nowhere near GTX1070 performance...



"But nowhere near as fast as a card that costs twice as much (in some markets, more than twice as much)."
Not sure if serious.




qubit said:


> I tested it and got blue screens all over. Glorious.
> 
> Even when the bsods stopped, the game still didn't work (crashed). Not sure if the game was faulty, dodgy NVIDIA drivers or a Windows fault. Never mind, I haven't bought the game and the trial period has expired, so the point is moot. It's very likely to work the next time I try it in several months time.
> 
> It worked in DX11, but even there it sometimes crashed.




"*DX12's focus is on enabling a dramatic increase in visual richness through a significant decrease in API-related CPU overhead*," said Nvidia's Henry Moreton last year.


----------



## Vayra86 (Dec 19, 2016)

Prima.Vera said:


> Something's wrong with nVidia's drivers. At 1080p they gain just the same as AMD, but at 1440p or more they actually lose a lot. WTH nVidia??



That's why I didn't like Pascal much. You get far too little GPU for your money. A very narrow bus utterly destroys Pascal at anything over 1440p. Ironically, the Titan X is the only well-balanced GPU; the 1070 and 1080 are bandwidth-limited.

Pascal's got too much oomph for 1080p and just doesn't cut it for 4K, while being waaay overpriced for the silicon you're getting. As time progresses, this will become more and more apparent. With every % AMD can squeeze out of driver updates or DX12, they get to use their much wider GPU arch better; it just scales well at any res.


----------



## P4-630 (Dec 19, 2016)

medi01 said:


> "but nowhere as fast as card that costs twice as much (in some markets more than twice)"
> Not sure if serious.



A while ago there were a few in the RX 480 thread saying that a binned, OC'd RX 480 could touch GTX 1070 performance; that's all.


----------



## Vayra86 (Dec 19, 2016)

rtwjunkie said:


> Right there with you, I have been since middle of last year. I publicly stated a number of times that by the end of this year, DX12 would still not be mainstream or widely adopted. I got flamed so much it's not even funny. Well who is laughing now?
> 
> BOTH camps current GPU's will be obsolete before DX12 actually matters.



Correct, but AMD is already showing promise in DX12, while we haven't got any clue about Volta. Meanwhile AMD is still rocking GCN, and Nvidia has burned up its dev cycle going Kepler > Maxwell > Pascal. They (likely) can't clock any higher, and AMD still can.


----------



## Captain_Tom (Dec 19, 2016)

As usual, the Fury is falling in between a 1070 and a 1080... for $140 less than a 1070. LOL, 28nm > 16nm?


----------



## TheinsanegamerN (Dec 19, 2016)

Vayra86 said:


> That's why I didn't like Pascal much. You get far too little GPU for your money. A very narrow bus destroys Pascal utterly at anything over 1440p. Ironically the Titan X is the only well balanced GPU, 1070 and 1080 are bandwidth limited.
> 
> Pascal's got too much oomph for 1080p, and just doesn't cut it for 4K, while being waaay overpriced for the silicon you're getting. As time progresses, this will become more and more apparent. With every % AMD can squeeze out of driver updates or DX12, they get to use their much wider GPU arch better, it just scales well at any res.


I just wish AMD would get a bigger chip out already. The 480 isn't enough, and CrossFire support, much like SLI, is lacking.

Even a 3072-core Polaris-based chip would be a nice improvement.


----------



## Vayra86 (Dec 19, 2016)

TheinsanegamerN said:


> I just wish AMD would get a bigger chip out already. The 480 isnt enough, and crossfire support, much like SLI, is lacking.
> 
> Even a 3072 core polaris based chip would be a nice improvement.



QFT


----------



## Captain_Tom (Dec 19, 2016)

Vayra86 said:


> QFT



The fact is AMD _can_ make a 3072 or even a 4096+ shader chip now (or could months ago). But AMD sees no point in doing so unless games fully take advantage of their arch, with DX12, CrossFire, and perfected drivers from AMD. Just look at how the Fury X was 5% weaker than the 980 Ti at launch, and now it is nearly 20% stronger!

Most people just read the OG reviews and fail to read recent ones when they hunt for a GPU upgrade. Don't worry, Vega 10 will be 30-50% stronger than the Fury X, and it will be out in a few months. But they don't want to release a new Fury, 490, or 495x2 until they can all curb-stomp the competition.


----------



## Fluffmeister (Dec 19, 2016)

Captain_Tom said:


> The fact is AMD _can_ make a 3072 or even a 4096+ chip now (Or months ago).  But AMD sees no point in doing this unless all games fully take advantage of their arch with DX12, Crossfire, and perfected drivers from AMD.  Just look at how the Fury X was 5% weaker than the 980 Ti at launch, and now it is nearly 20% stronger!
> 
> 
> Most people just read the OG reviews and fail to read recent reviews when they hunt for a GPU upgrade.  Don't worry Vega 10 will be 30 -50% stronger than the Fury X, and it will be out in a few months.  But they don't want to release a new Fury, 490, or 495x2 until they all curb-stomp the competition.



That's very generous of AMD, letting Nvidia have no competition in the high end for what's it been like... 7 months already?


----------



## medi01 (Dec 19, 2016)

People forget there are gamers like me who don't upgrade for ages.

Actually, most buyers of mid/low-end GPUs keep them for years.
"DX12/Vulkan is irrelevant, since GPUs will be obsolete" isn't a serious argument even if it were true; if I play Doom, it matters to me here and now.


----------



## Captain_Tom (Dec 19, 2016)

Fluffmeister said:


> That's very generous of AMD, letting Nvidia have no competition in the high end for what's it been like... 7 months already?



Haha, nothing generous about it. If you look at GPU history, you'll see that AMD has always been most successful when they focus on the mid-to-high end and ignore the ultra-enthusiast tier. For some reason people ignored the gems that were the 7970 and 290X, and AMD gets that now. It's sad but true.


----------



## Fluffmeister (Dec 19, 2016)

Captain_Tom said:


> Haha nothing generous about it.  If you would look at GPU history, you would see that AMD has always been most successful when they focus on the mid-high end, and ignore Ultra Enthusiast.   For some reason people ignored the gems that were the 7970 and 290X, and AMD gets that now.  It's sad but true.



People ignored the 7970 and the 290X?

Have you been living in a cave or something?

It seems that, like some others here, you opt for playing the "victim card".


----------



## bug (Dec 19, 2016)

Captain_Tom said:


> As usual the Fury is falling in between a 1070 and 1080... For $140 less than a 1070.  LOL 28nm > 16nm?


Depends on your definition of "usual", I guess. According to this: https://www.techpowerup.com/reviews/Zotac/GeForce_GTX_1080_Amp_Extreme/29.html
not even Fury X finishes between the 1070 and 1080. Usually.


----------



## Captain_Tom (Dec 19, 2016)

Fluffmeister said:


> People ignored the 7970 and the 290X?
> 
> Have you been living in a cave or something?
> 
> It seems that, like some others here, you opt for playing the "victim card".



Their sales went down compared to the 4000, 5000, and 6000 series.  Are you saying they didn't?


----------



## bug (Dec 19, 2016)

medi01 said:


> People forget there are gamers like me who don't upgrade for ages.
> 
> Actually, most buyers of mid/low end GPUs keep them for years.
> "DX12/Vulkan is irrelevant, since GPUs will be obsolete" is not serious even if that wouldn't be the case, as if I play Doom it matters to me, here and now.


Let's try to use our brains here a bit, OK?

The "DX12/Vulkan is irrelevant, since GPUs will be obsolete" statement is true, because that's the situation with most titles available now. It does not mean "do not buy a 480 no matter what". IF you happen to play Doom and only Doom, then yes, the 480 is probably the card to get. If you play Doom and something else, things change. And guess what, most gamers don't play just Doom.

Another reason to buy the 480 could be "it's cheaper than the 1060", which is the case at MSRP, but usually you can't get the 480 at MSRP. If you can, however, then even though the 1060 is technically faster in many titles, that rarely (if at all) translates into an ability to play the same game at a higher resolution. Or another reason people buy the 480 is "I want to support AMD's open-source Linux driver effort".


----------



## TheGuruStud (Dec 19, 2016)

You can see why I'm ditching the 980Ti when Vega comes out.


----------



## Fluffmeister (Dec 19, 2016)

TheGuruStud said:


> You can see why I'm ditching the 980Ti when Vega comes out.



I'll have it. Also... can I borrow your crystal ball?


----------



## TheGuruStud (Dec 19, 2016)

Fluffmeister said:


> I'll have it, also... can I borrow your crystal ball.



Nvidia will gain nothing, and I can only go up with Vega (I also suspect async has been beefed up in Vega). Who knows how long this next cycle will last. My card is now 1.5 years old, and there won't be anything to upgrade to from either camp for months. Plus, it's new-monitor time; Nvidia deserves less than 0 cents, and I want adaptive sync.

Dumping this shitty Haswell, too, lol.


----------



## Fluffmeister (Dec 19, 2016)

TheGuruStud said:


> Nvidia will gain nothing and I can only go up with Vega (I also suspect Async has been beefed up in Vega). Who knows how long this next cycle will last. My card is now 1.5 yrs old and there won't be anything to upgrade to from either camp for months. Plus, it's new monitor time. Nvidia deserves less than 0 cents and I want adaptive sync.



Fair enough, you go girl... fight the power!

I still want your GTX 980 Ti.


----------



## TheGuruStud (Dec 19, 2016)

Fluffmeister said:


> Fair enough, you go girl... fight the power!
> 
> I still want your GTX 980 Ti.



You'll have to wine and dine me 8>  *kisses*


----------



## Vayra86 (Dec 20, 2016)

Captain_Tom said:


> The fact is AMD _can_ make a 3072 or even a 4096+ chip now (Or months ago).  But AMD sees no point in doing this unless all games fully take advantage of their arch with DX12, Crossfire, and perfected drivers from AMD.  Just look at how the Fury X was 5% weaker than the 980 Ti at launch, and now it is nearly 20% stronger!
> 
> 
> Most people just read the OG reviews and fail to read recent reviews when they hunt for a GPU upgrade.  Don't worry Vega 10 will be 30 -50% stronger than the Fury X, and it will be out in a few months.  But they don't want to release a new Fury, 490, or 495x2 until they all curb-stomp the competition.



That is highly unlikely and no more than an (uneducated) guess of yours. The thing is, AMD had been rebranding old stuff for too long, the Fury X didn't fly too well at release, and the RX 480 did not fill the entire void. There is no sane business practice in that; they just didn't have anything, and they focused on other efforts to gain traction. I think the Ryzen reveal is a decent example of that, along with their latest driver update.

AMD is now filling a different kind of void on the CPU side, which is far more important for them financially, and on the GPU software end they now also have a nice, rounded set of tools. All of this will benefit a next high-end GPU. They are also finally turning around the negative PR that's surrounded them for so long.


----------



## Captain_Tom (Dec 20, 2016)

Vayra86 said:


> That is highly unlikely and no more than an (uneducated) guess of yours. The thing is, AMD has been rebranding old stuff for too long, Fury X didn't fly too well at release, and RX480 did not fill the entire void. There is no sane business practice in that, they just didn't have anything and focused on other efforts to gain traction, and I think the Ryzen reveal is a decent example of that, along with their latest driver update.
> 
> AMD is now filling a different kind of void on the CPU side, which is far more important for them financially, and on the GPU software end they now also have a nice, rounded set of tools. All of this will benefit a next high-end GPU. They are also finally turning around the negative PR that's surrounded them for so long.



You do understand that the points you mentioned have added to my argument.... Right?

AMD is waiting to release cards when they will be fully taken advantage of. Software and PR are a big part of that. Nothing uneducated about my guess; on the contrary, it's common sense what is going on.


----------



## Vayra86 (Dec 20, 2016)

Captain_Tom said:


> You do understand that the points you mentioned have added to my argument.... Right?
> 
> AMD is waiting to release cards when they will be fully taken advantage of.  Software and PR are a big part of that.   Nothing uneducated about my guess, and on the contrary it is common sense what is going on.



No, I disagree that they had some top-end GPU just waiting to be released. Nothing points to that. They used resources to get other things done and Vega has been on the map for years.


----------



## TheGuruStud (Dec 20, 2016)

Captain_Tom said:


> You do understand that the points you mentioned have added to my argument.... Right?
> 
> AMD is waiting to release cards when they will be fully taken advantage of.  Software and PR are a big part of that.   Nothing uneducated about my guess, and on the contrary it is common sense what is going on.



HBM is holding them back about as much as getting decent clocks without stupid power consumption on FinFET LP. HBM only went into volume production very recently (if it indeed has; I haven't seen any updates).

They could've shit out a card, I'm sure, but the gains and power consumption wouldn't have been worth the money (lost) and the flop it would be.

Some more well optimized DX12 (à la DX:MD)/Vulkan games and Nvidia will be crying about profit margin loss.


----------



## RealNeil (Dec 20, 2016)

I just bought an 8GB Gigabyte Radeon RX-480 Gaming G-1 for the secondary system. I'm getting another for crossfire early in January.
Two of them should be good to go.
I tried it out in my favorite games and I'm impressed. It's faster than my 8-GB Sapphire R9-390X Toxic card is.

I don't know if I'm gonna have DX-12 though. Win-10 sucks. (and yeah, I've tried it out for a long time)


----------



## gupsterg (Dec 20, 2016)

Hokum said:


> I think the performance of the Fury X should be commented on as well: only just behind the 1070 and 1080 at 1080p, and only behind the 1080 at higher resolutions.
> Not bad for a last-gen card.



Indeed, hope more such patches come about  .


----------



## ffleader1 (Dec 20, 2016)

nguyen said:


> lol, 20% faster minimum and 7% slower on avg at 1080p; if anything this is still a win for Nvidia in my book. Reminds me of AMD's obnoxious xfire performance a few years back, where scaling was more than 100% in avg fps but with massive micro stuttering that only AMD fanatics could tolerate; this looks to be the same.


This kind of altitude made me register an account here just to comment. Obviously, if the case were reversed, aka if Nvidia were 7% faster on average, you would have gone all out and bashed AMD for being bad at DX12.


----------



## Captain_Tom (Dec 20, 2016)

Vayra86 said:


> No, I disagree that they had some top-end GPU just waiting to be released. Nothing points to that. They used resources to get other things done and Vega has been on the map for years.



I should probably clarify my earlier statements:  14nm was (and still seems to be a little) behind 16nm. 

So by no means am I saying some 4096-SP/HBM card would have been ready months ago; but I am saying that AMD could without a doubt have released something stronger by _now_.

Some 3072-3584-SP chip with 8-12GB of GDDR5X would have been fairly easy to make by now.  But it wouldn't beat the 1080 all that easily, and AMD's perception isn't quite where they want it to be to fully capitalize on a performance (or price/perf) enthusiast win.  And again, the 480 has captured market share better than the Fury X ever did.


----------



## RealNeil (Dec 20, 2016)

ffleader1 said:


> This kind of _altitude_ made me register an account here just to comment. Obviously, if the case were reversed, aka if Nvidia were 7% faster on average, you would have gone all out and bashed AMD for being bad at DX12.



Such a high flying comment.

Welcome to TPU. Stick around a while and check it out.


----------



## xkm1948 (Dec 20, 2016)

Developer: DX12 gives Better performance for everyone!

nv/ati fans: Yay!!

Developer: ATi benefits more from DX12

nv fans: bull shit this is fake unfair cheating DX12 irrelevant etc. etc.  Bunch of cry babies.


----------



## the54thvoid (Dec 20, 2016)

Captain_Tom said:


> And again, the 480 has captured marketshare better than the Fury X ever did.



FuryX had quite a limited run to be fair. Very few if any still exist in UK retailer stocks. Has been that way for months and months. (I was looking to grab one cheap). HBM was an experiment on that front.

One thing people are quite hypocritical or ignorant about is the hardware inside 'comparable' cards. The 480 should easily beat the 1060, so there's no doubt it's getting better. Likewise, Fiji had 4096 shader cores and decent ACE units. That's why it also required water cooling from the start.
In terms of hardware power AMD are still not using their hardware well at all. They should be a lot better than Nvidia.


----------



## renz496 (Dec 20, 2016)

cryohellinc said:


> Impressive work from AMD! Nvidia really needs to stop slacking and start optimising their drivers/Gpu's for next gen (or current gen, however id say next gen as currently there are like 1-2 games built from scratch on Dx12/ Vulcan) API's. Otherwise if AMD has much better performance for cheaper price on new API's they might easily win a big chunk of the market back from Nvidia. I mean honestly, Dx12 should give INSANE performance gains, however what we see in all of those "Dx12 updates" for various titles is actually NEGATIVE (wtf) scaling. Shame!
> 
> However all in all this is good news, competition always drives development and makes them Work for their money. Can't wait to see Nvidia's reaction after release of Vega.



The game was faster on Radeon hardware even in DX11, and honestly I don't think there is any problem with Nvidia hardware. DX12 is mostly solving AMD's problem of extracting more performance out of their cards. The RX 480, for example, is rated at 5.1 TFLOPS while the GTX 1060 is rated at 3.85 TFLOPS, but in the majority of games RX 480 performance did not reflect the raw-throughput gap between the two.
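The raw-throughput figures quoted above can be sanity-checked from public spec-sheet numbers (shader count × 2 FLOPs per clock for FMA × clock). A minimal sketch, assuming base clocks of 1120 MHz for the RX 480 and 1506 MHz for the GTX 1060:

```python
# Back-of-the-envelope FP32 throughput from public spec sheets:
# TFLOPS = shaders * 2 FLOPs/clock (FMA) * clock_MHz / 1e6
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

rx480 = tflops(2304, 1120)    # RX 480: 2304 SPs at 1120 MHz base
gtx1060 = tflops(1280, 1506)  # GTX 1060: 1280 SPs at 1506 MHz base
print(f"RX 480:   {rx480:.2f} TFLOPS")
print(f"GTX 1060: {gtx1060:.2f} TFLOPS")
print(f"Raw-throughput gap: {rx480 / gtx1060 - 1:.0%}")
```

The quoted 5.1 and 3.85 TFLOPS line up with base clocks; at boost clocks both numbers rise, but the roughly one-third gap remains.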


----------



## FMinus (Dec 20, 2016)

The Fiji chips didn't sell well because they were overpriced for what they were. They still are. In mainland Europe the Fury X sold for 850 EUR ($882); months later it settled at 650-700 EUR ($674-726), same with the Fury Nano and the cut-down Fury. HBM didn't help them much, and I call the Fiji lineup an experimental joke from AMD at best.
Even today they still hold prices in Europe from 430-700 EUR, which is insane.


----------



## malitze (Dec 20, 2016)

FMinus said:


> The Fiji chips didn't sell well because they were overpriced for what they were. They still are. In mainland Europe the Fury X sold for 850 EUR ($882); months later it settled at 650-700 EUR ($674-726), same with the Fury Nano and the cut-down Fury. HBM didn't help them much, and I call the Fiji lineup an experimental joke from AMD at best.
> Even today they still hold prices in Europe from 430-700 EUR, which is insane.



I bought my Fury X as soon as it was in stock, for 698€. So as far as Germany counts as mainland Europe, this is not quite true.


----------



## medi01 (Dec 20, 2016)

bug said:


> Let's try to use our brains here a bit, ok?
> 
> The "DX12/Vulkan is irrelevant, since GPUs *will be* obsolete" statement is true, because that's the situation with most titles available *now*



Yeah, let's use our brains here a bit, shall we...




bug said:


> IF you happen to play Doom and only Doom, then yes, the 480 is probably the card to get. If you play Doom and something else, things change.


No, they don't change. Four months after release, the 1060's lead in DX11 games has shrunk to negligible, while the 480's DX12 lead has widened.



bug said:


> Other reasons to buy the 480 could be "it's cheaper than 1060"


It isn't cheaper.
Another reason is "I don't want to be bent over when buying monitor with adaptive sync".

Oh, and the main point, which you seem to have COMPLETELY MISSED: AMD GPUs age gracefully, while nVidia's GPUs end their lives in shame, with the 960 beating the 780 Ti.



renz496 said:


> RX480 for example is rated at 5.1tflops while GTX1060 was rated at 3.85tflops. but


But GPUs are not only about flops; that figure doesn't cover things such as geometry processing, for instance.


----------



## Nergal (Dec 20, 2016)

AMD is undeniably gaining more than NV with DX12. 

The cost of the 1060 is still 10% more than the 480 in some regions; in others they cost the same.
Why? Because they have about the same performance now (across DX11/12 titles, give or take).
So those owners who initially bought a 1060 at a premium got a worse deal than the 480 buyers,

making the early 480 adopters "right" and the 1060 ones "wrong"
*(basically, whichever card was cheapest for you at the moment was the correct card)*.

And this discrepancy will only increase with time.

It's rather plain to see that the flops are way higher. In the end, that combined with DX12 will give 480 owners a longer, better experience with their card.

Even the rushed launch of the 1060 by NV, which was clearly specced higher than they originally intended, can't hide the strength of the 480 in the end.


----------



## gupsterg (Dec 20, 2016)

the54thvoid said:


> FuryX had quite a limited run to be fair. Very few if any still exist in UK retailer stocks. Has been that way for months and months. (I was looking to grab one cheap). HBM was an experiment on that front.



I'm not surprised TBH.

Fiji was 28nm, as was Grenada. Grenada was pretty much being phased out of retail supply by the time Polaris hit retail, so I reckon Fiji had also stopped being produced. It was nowhere near as popular as Hawaii/Grenada, so it was unlikely to keep selling at that demand and pricing. I would assume margins were also smaller compared with Hawaii/Grenada, so I doubt the supply channel had room to keep making it and lowering the price to make it sell.

For the past 5 months, off and on, the Fury has been under £300 at e-tailers; I just reckon that even where relative performance has been better on Fiji, buyers just go RX 480 or GTX 1060.



the54thvoid said:


> One thing people are quite hypocritical or ignorant about is the hardware inside 'comparable' cards. The 480 should easily beat the 1060, so there's no doubt it's getting better. Likewise, Fiji had 4096 shader cores and decent ACE units. That's why it also required water cooling from the start.
> In terms of hardware power AMD are still not using their hardware well at all. They should be a lot better than Nvidia.



Hmm, I don't think AMD went AIO on the Fury X just due to the spec of the GPU; I think it was to give a better, quieter product.

290X TDP 290W - reference design had blower, which was noisy, etc.
390X TDP 275W - reference design had blower, which was noisy, etc.
Fury X TDP 275W - reference design AIO unit, not noisy, etc.

I'll be honest, I got a Fury/X in March '16 just to try, and I ended up keeping it. I had a Vapor-X 290X at the time; as I could sell that at no loss and swap to the Fury X, which didn't need the cost of custom water cooling, it was a no-brainer to keep it. Due to the promo price/cashback site I used, the Fury X cost me ~£250 back in March '16.

The Fury Tri-X and Fury X were way quieter and cooler-running cards than any of the Hawaii cards I have owned. I have owned 3x 290 Tri-X, 1x Vapor-X 290X and 1x Asus DCUII 290X. The Fury Tri-X I even unlocked to 3840 SPs, which, when benched against a genuine Fury X, was on par for performance in some things I tested.

I do agree that for the SP count Fiji really should perform so much better. I do not know much about GPU architecture, but in a small discussion I had with The Stilt it boiled down to ROPs.


----------



## Pewzor (Dec 20, 2016)

qubit said:


> I tested it and got blue screens all over. Glorious.
> 
> Even when the bsods stopped, the game still didn't work (crashed). Not sure if the game was faulty, dodgy NVIDIA drivers or a Windows fault. Never mind, I haven't bought the game and the trial period has expired, so the point is moot. It's very likely to work the next time I try it in several months time.
> 
> It worked in DX11, but even there it sometimes crashed.



Why don't you just play in DX11? I mean, it's not news that nVidia isn't very good in next-gen APIs.
I would only play games in DX12 on nVidia if it's a GameWorks or nVidia-paid game like Tomb Raider and shit; that's the only time nVidia crap works well in DX12.


----------



## Captain_Tom (Dec 20, 2016)

the54thvoid said:


> FuryX had quite a limited run to be fair. Very few if any still exist in UK retailer stocks. Has been that way for months and months. (I was looking to grab one cheap). HBM was an experiment on that front.
> 
> One thing people are quite hypocritical or ignorant about is the hardware inside 'comparable' cards. The 480 should easily beat the 1060, so there's no doubt it's getting better. Likewise, Fiji had 4096 shader cores and decent ACE units. That's why it also required water cooling from the start.
> In terms of hardware power AMD are still not using their hardware well at all. They should be a lot better than Nvidia.



Don't ignore the fact that the 1060 has 50% more ROPs than the 480, and that Nvidia also uses more transistors per SP than AMD (and almost always has).  AMD's cards are built to do everything and last a long time, while Nvidia builds their cards for VERY specific tasks (and as such they often hit terrible bottlenecks).

For sure, though, AMD has failed to optimize as well as they could in the past on easier-to-run games; they just waited for games to get harder to run for full SP saturation (instead of programming so they can run lighter loads more efficiently).  However, I would say AMD is improving on this front quite a lot lately.  Once again I point out that AMD doesn't want to release enthusiast cards until they fully nip that driver optimization problem in the bud.


----------



## Captain_Tom (Dec 20, 2016)

gupsterg said:


> I'm not surprised TBH.
> 
> Fiji was 28nm, so was Grenada. Grenada was pretty much being phased out in retail supply when Polaris was gonna hit retail. So I reckon Fiji had also stopped being produced. It was no way as popular as Hawaii/Grenada, so unlikely to keep selling due to demand and pricing. I would assume margins must have also been smaller compared with Hawaii/Grenada, so doubt that whole supply channel had room to keep making and lowering the price to make it sell.
> 
> ...



It 100% boiled down to ROPs!  Notice the 290X only has about 40% more SPs and like 20-30% more bandwidth than the 7970, and yet it performed 50-75% better!  That is because it had DOUBLE the ROPs and ACEs.   If they had doubled the ROPs again, the Fury X would likely have crushed the 980 Ti, but AMD was running out of die space and figured the extra bandwidth would help make up for the deficiency.
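The ratios in that comparison can be checked against public launch specs; a quick sketch (the spec-sheet bandwidth figures, 264 vs 320 GB/s, actually come out nearer +21% than +30%):

```python
# Spec ratios behind the 290X-vs-7970 comparison (public launch specs).
specs = {
    "HD 7970": {"sp": 2048, "rops": 32, "bw_gbps": 264},
    "R9 290X": {"sp": 2816, "rops": 64, "bw_gbps": 320},
}
old, new = specs["HD 7970"], specs["R9 290X"]
for key in ("sp", "rops", "bw_gbps"):
    gain = new[key] / old[key] - 1
    print(f"{key:8s}: +{gain:.0%}")  # sp ~ +38%, rops +100%, bandwidth ~ +21%
```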


----------



## refillable (Dec 20, 2016)

You know, you might be quite clueless about how things have progressed in the AMD vs Nvidia world. I've been following since back in the days when AMD (still called ATi) cards were wrecking similar Nvidia cards. 4870 vs 280, 5870 vs 470, 6970 vs 570, etc.

But sadly, AMD never made a decisive profit from those cards, yet Nvidia was swimming in money. AMD lost the advantage with Maxwell, but sooner or later, with Raja at the helm, I hope they get what they deserve.

Anyway, apologies for my rather foolish and childish posts, but with the RX 480 getting better and better with each update, people should to some extent avoid judging cards on launch-day performance.


----------



## xkm1948 (Dec 20, 2016)

Nope, not ROPs. It is all about asynchronous compute. In short, ATi's engineers built it this way on purpose: under DX11 and OpenGL, most of Fiji XT's 4096 SPs sit idle, not fully utilized at all.

Some good explanations here:

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading
http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9


----------



## bug (Dec 20, 2016)

Captain_Tom said:


> It 100% boiled down to ROPs!  Notice the 290X only has 40% more SP's and like 30% more bandwidth than the 7970, and yet it performed 50 - 75% better!  That is because it had DOUBLE the ROP's and ACE's.   If they would have doubled the ROP's again the Fury X would have likely crushed the 980 Ti, but AMD was running out of die space and figured the extra bandwidth would help make up for this deficiency.


Though admittedly it would seem that way, when asked about, some developer from AMD said "we have no indication Fury X is ROP limited".


----------



## LightningJR (Dec 20, 2016)

So AMD has been behind nVidia for years because their cards weren't being utilized properly, because Async Compute is "required"?

I would love to know who makes these decisions...


----------



## TheGuruStud (Dec 20, 2016)

LightningJR said:


> So AMD has been behind nVidia for years because their cards weren't being utilized properly, because Async Compute is "required"?
> 
> I would love to know who makes these decisions...



Async isn't required (but it definitely helps). They could never get the chips fed in DX11; I'm sure that's both a driver and a hardware problem.


----------



## Captain_Tom (Dec 20, 2016)

bug said:


> Though admittedly it would seem that way, when asked about, some developer from AMD said "we have no indication Fury X is ROP limited".



Hey, I'm no PC hardware expert, but I think there are enough examples to the contrary to make me question whether this dev is full of sh*t.   I mean, consider:

-The example I gave with the 290X vs 7970

-Based on TFLOPS and effective bandwidth, the 390X shouldn't really be any stronger than the 480 (it should be 10% weaker, based on the IPC increase in GCN 4.0).  And yet the 390X still maintains a 10% _lead_.

-Consider how SO MANY cut-down AMD cards perform almost the same as their full brethren (7950, 290, Fury, 470) even with as much as 15% fewer SPs.  (It isn't just because they share the same bandwidth.)

-Nvidia cards almost always have FAR more ROPs than their AMD counterparts, and this explains how they can get along with less bandwidth (until they choke a year after they come out, lol).  After all, ROPs are what feed the bandwidth.

^I would like this dev to explain these things.
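The 390X-vs-480 point above can be eyeballed with public launch numbers (boost clocks; note that "effective" bandwidth also depends on delta color compression, which this sketch ignores):

```python
# Spec comparison behind the 390X-vs-480 point (public launch specs):
# similar theoretical FP32 TFLOPS, but very different memory bandwidth.
cards = {
    "R9 390X": {"sp": 2816, "clock_mhz": 1050, "bw_gbps": 384},
    "RX 480":  {"sp": 2304, "clock_mhz": 1266, "bw_gbps": 256},
}
for name, c in cards.items():
    tflops = c["sp"] * 2 * c["clock_mhz"] / 1e6
    print(f"{name}: {tflops:.2f} TFLOPS, {c['bw_gbps']} GB/s")
```

Both land around 5.8-5.9 TFLOPS, while the 390X has 50% more raw bandwidth, which is why the comparison is used as evidence that something other than shader throughput separates them.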


----------



## GhostRyder (Dec 20, 2016)

Not really shocked; we all know that AMD is much better at DX12 than Nvidia.  It makes the RX 480 start to feel like a better buy; however, DX11 is still too dominant to take it out of the picture yet, and the two are pretty even on that front.


----------



## RealNeil (Dec 20, 2016)

My 480 outperformed my 390 in benchmarks using the same PC.


----------



## bug (Dec 20, 2016)

LightningJR said:


> So AMD has been behind nVidia for years because their cards weren't being utilized properly, because Async Compute is "required"?
> 
> I would love to know who makes these decisions...


Surprisingly, yes.
When Nvidia went for tiled rendering (a feature that allows more efficient use of resources and thus lower power usage), AMD went and stuffed so many shaders onto the GPU that they couldn't feed them effectively (they needed async for that). Apparently this is hailed as futureproofing these days.


----------



## qubit (Dec 20, 2016)

Pewzor said:


> Why don't you just play in DX11? I mean, it's not news that nVidia isn't very good in next-gen APIs.
> I would only play games in DX12 on nVidia if it's a GameWorks or nVidia-paid game like Tomb Raider and shit; that's the only time nVidia crap works well in DX12.


I explained above that it wasn't all that stable in DX11 and BSODed Windows in DX12. Without further troubleshooting it's hard to say whether the fault is with the game, the NVIDIA driver or Windows. I'm thinking the game, as my other games work fine. And again, the point is moot, because the trial period has expired and I haven't bought it, certainly not in that dysfunctional state. I've heard it's a bit of a grind anyway.


----------



## ADHDGAMING (Dec 20, 2016)

P4-630 said:


> Some people in the RX480 club thread iirc...
> 
> "_A binned and OC'd RX480_"



I mean, some of these recent 480s are putting up some insane power and OC numbers compared to the launch ones, like AMD refined a few things but didn't bother to label the cards XT.


----------



## ADHDGAMING (Dec 20, 2016)

xkm1948 said:


> Nope not ROPs. It is all about Asynchronous computing. In short. ATi's engineers built it specifically this way. With DX11 and OpenGL, most of FijiXT's 4096SP will be in idle mode, not fully utilized at all.
> 
> Some good explanations here:
> 
> ...



haha i remember this video


----------



## P4-630 (Dec 20, 2016)

@ADHDGAMING learn to use the "Multi-Quote" button instead of double posting...


----------



## Captain_Tom (Dec 20, 2016)

RealNeil said:


> My 480 outperformed my 390 in benchmarks using the same PC.



Only at 1080p (on average), and yeah, a 390X is 10% stronger than a 390.


----------



## RealNeil (Dec 20, 2016)

Captain_Tom said:


> Only at 1080p (on average), and yeah, a 390X is 10% stronger than a 390.



I stated that incorrectly. What I meant is that my 480 extreme gaming performs better than my 390X Toxic card.
Also, my two 390X Toxic GPUs didn't score much more than my 290X Tri-X cards did. The 390Xs were a big waste of money.


----------



## ADHDGAMING (Dec 20, 2016)

P4-630 said:


> @ADHDGAMING learn to use the "Multi-Quote" button instead of double posting...



thats just tells me to Dble post some more xD


----------



## P4-630 (Dec 20, 2016)

ADHDGAMING said:


> thats just tells me to Dble post some more xD



Good luck with that; a mod will send you on holiday if you keep doing that.


----------



## qubit (Dec 20, 2016)

P4-630 said:


> Good luck with that; a mod will send you on holiday if you keep doing that.


^^What he said. The mods will notice and take action.



ADHDGAMING said:


> thats just tells me to Dble post some more xD


It's better that we tell you than they do. This is a heavily moderated forum. I recommend you read the rules so you don't come unstuck.


----------



## Captain_Tom (Dec 20, 2016)

RealNeil said:


> I stated incorrectly. What I mean is that my 480 extreme gaming performs better than my 390X Toxic card.
> Also, my two 390X toxic GPUs didn't score much more than my 290X Tri-X cards did. The 390s were a big waste of money.



That's odd.  When I had a 390X it performed very well.  Hmmmm.


----------



## RealNeil (Dec 20, 2016)

The 390X cards were expensive. 
One of them died when a Silverstone AIO leaked onto it, but I don't see it as much of a loss. 
It wasn't that good, to begin with. 

I plan to post a few benchmark results once I get the second (and maybe a third) RX-480 installed.


----------



## Captain_Tom (Dec 21, 2016)

RealNeil said:


> The 390X cards were expensive.
> One of them died when a Silverstone AIO leaked onto it, but I don't see it as much of a loss.
> It wasn't that good, to begin with.
> 
> I plan to post a few benchmark results once I get the second (and maybe a third) RX-480 installed.



When I switched to a 480, most games saw worse performance than on my 390X.  The only time that changed was when I overclocked the ever-living sh*t out of the 480.  Although once both were overclocked, the 390X was still probably 5-10% ahead [390X @ 1125/1650, 480 @ 1400/2250].


----------



## RealNeil (Dec 21, 2016)

What brand was the 480 and was it an 8GB memory card?


----------



## Captain_Tom (Dec 21, 2016)

RealNeil said:


> What brand was the 480 and was it an 8GB memory card?



SAPPHIRE Nitro 8GB.


----------



## cdawall (Dec 21, 2016)

ADHDGAMING said:


> I mean, some of these recent 480s are putting up some insane power and OC numbers compared to the launch ones, like AMD refined a few things but didn't bother to label the cards XT.



And there are already scores on HWBOT showing these cards equaling a stock 1070 in 3DMark... It's almost as if the 480, being closer TFLOPS-wise to the 1070 than to the 1060, could start catching up with enough of an overclock. Almost as if, with some form of math, you could figure it out.


----------



## Captain_Tom (Dec 21, 2016)

cdawall said:


> And there are already scores on HWBOT showing these cards equaling a stock 1070 in 3DMark... It's almost as if the 480, being closer TFLOPS-wise to the 1070 than to the 1060, could start catching up with enough of an overclock. Almost as if, with some form of math, you could figure it out.



AMD cards have historically under-performed relative to TFLOPS at launch, and then, once harder-to-run games come out (that can fully saturate the massive number of SPs AMD packs into their cards), AMD always catches up to how Nvidia's cards perform per TFLOP.

Just look at how the 7970 outperforms the OG Titan in the latest games.  Truly pathetic.

Based on specs, I expect the 480 to perform about 10% worse than the 1070 in the majority of games by the end of 2017.


----------



## cdawall (Dec 21, 2016)

Captain_Tom said:


> AMD cards have historically under-performed relative to TFLOPS at launch, and then, once harder-to-run games come out (that can fully saturate the massive number of SPs AMD packs into their cards), AMD always catches up to how Nvidia's cards perform per TFLOP.
> 
> Just look at how the 7970 outperforms the OG Titan in the latest games.  Truly pathetic.
> 
> Based on specs, I expect the 480 to perform about 10% worse than the 1070 in the majority of games by the end of 2017.



I agree completely. My point, which someone likes to constantly misquote, was that if you overclock an RX 480 enough it will catch a stock 1070. People on here act like this is some sort of unfathomable, impossible feat.

Give AMD time to actually produce some well-binned chips and I think we will see some much better clocks out there. Get a 480 into the 1600-1700 MHz range and it equals a stock 1070.
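As a rough sketch of that math, using public spec numbers: matching a stock 1070's raw FP32 rate takes a 480 at roughly 1.4 GHz, so the 1600-1700 MHz figure effectively budgets extra headroom for the 480 delivering fewer game frames per TFLOP.

```python
# What clock would an RX 480 need to match a stock GTX 1070's raw FP32 rate?
# Public specs; per-TFLOP game performance differs, so this is an upper bound
# on raw throughput, not a frame-rate prediction.
GTX1070_SP, GTX1070_BOOST_MHZ = 1920, 1683
RX480_SP = 2304

gtx1070_gflops = GTX1070_SP * 2 * GTX1070_BOOST_MHZ / 1000
needed_mhz = gtx1070_gflops * 1000 / (RX480_SP * 2)
print(f"GTX 1070 boost throughput: {gtx1070_gflops / 1000:.2f} TFLOPS")
print(f"RX 480 clock to match:     {needed_mhz:.0f} MHz")
```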


----------



## Nergal (Dec 21, 2016)

cdawall said:


> I agree completely. My point, which someone likes to constantly misquote, was that if you overclock an RX 480 enough it will catch a stock 1070. People on here act like this is some sort of unfathomable, impossible feat.
> 
> Give AMD time to actually produce some well-binned chips and I think we will see some much better clocks out there. Get a 480 into the 1600-1700 MHz range and it equals a stock 1070.



Hmm, my estimate is that by then it will sit neatly between the 1060 and the 1070, but the newest 480 (485?) will by then have the same speed as the 1070 (in newer DX12 games).


----------



## cdawall (Dec 21, 2016)

Nergal said:


> Hmm, my estimate is by then it will score neatly between the 1060 and the 1070. But the newest 480 (485?) will by then have the same speed as the 1070. (in newer DX12 games)



If it is the same Polaris 10 core, it's all a 480 to me. Binning doesn't mean a new GPU in my book, but then again Nvidia did that for four generations with G92.


----------



## Fabio Pisco (Dec 30, 2016)

P4-630 said:


> But nowhere near GTX1070 performance as some people claim an RX480 can match a GTX1070....
> RX480 is GTX1060 territory.



Yes, "some people" say that, not AMD. Btw, if you can overclock your RX 480 to a 1400 MHz core (most current batches can do it), you get performance well above a GTX 1060 at 2000 MHz.


----------



## Fluffmeister (Dec 30, 2016)

Fabio Pisco said:


> Yes, "some people" say that, not AMD. Btw, if you can overclock your RX 480 to a 1400 MHz core (most current batches can do it), you get performance well above a GTX 1060 at 2000 MHz.



http://www.hardocp.com/article/2016...480_o8g_gaming_video_card_review#.WGZtjxuLTIU


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> http://www.hardocp.com/article/2016...480_o8g_gaming_video_card_review#.WGZtjxuLTIU



That site is a fucking joke. Somehow the entire 480/470/460 Club can manage to get >1400 on stock voltage, and yet it takes him literally all of the voltage to do it.


----------



## TheGuruStud (Dec 30, 2016)

cdawall said:


> That site is a fucking joke. Somehow the entire 480/470/460 Club can manage to get >1400 on stock voltage and it takes him literally all of the voltage to do it.



I can't speak about current stuff, but they used to be one of the biggest AMD haters (along with Tom's and Anand).


----------



## Fluffmeister (Dec 30, 2016)

I knew the results wouldn't go down too well.

/shrug.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> I knew the results wouldn't go down too well.
> 
> /shrug.



His results match no one else's on the web... post-16.12 update, next to no game shows a performance advantage for the 1060, yet his review shows a stock 1060 beating a 480 @ 1400. Seems fishy.


----------



## Fluffmeister (Dec 30, 2016)

cdawall said:


> His results match no one else's on the web...post 16.12 update next to no game shows a performance advantage to the 1060 yet his review shows a stock 1060 beating a 480@1400. Seems fishy.



It's not a stock 1060.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> It's not a stock 1060.



Ah, GOD DAMN, that is a poorly written review... Why would they not link this on the test setup page? Or list the clock speed when mentioning which card is being used?







Instead you have to scroll down to the bottom of the page






click a link, then follow that link to their images to find out that the card was at 2164 MHz boost / 9200 MHz on the memory






That clears a couple of things up for me. Go figure, [H] is being [H] again. Hell, they still can't figure out how to measure actual card draw as opposed to whole-system draw (mind you, they have been doing "reviews" for almost 20 years and have been denied cards from AMD in the recent past). So one of a few things happened: [H] is just telling lies to people, they cannot overclock AMD cards to save their lives, or they got a card that actually clocks worse than the lowest-end RX 480 on the market.

AKA this one,






The XFX RX 480 RS model, which is essentially an RX 480 on an RX 460 PCB, has been consistently hitting 1390-1430 on stock voltage. Yet the Asus Strix card (which currently holds ALL of the 480 WRs) can only hit 1410 @ 1.3 V. I am not saying to take this review with a grain of salt; I am saying they are so full of shit it is coming out of Kyle Bennett's mouth, as per usual.


----------



## Fluffmeister (Dec 30, 2016)

That's quite a rant, but you're just shooting the messenger; take it up with Kyle.


----------



## cdawall (Dec 30, 2016)

I would just prefer if he stopped doing AMD reviews. I mean why spend time and bandwidth to do such a shit job?


----------



## Fluffmeister (Dec 30, 2016)

Dunno, just look at the pretty graphs then.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> Dunno, just look at the pretty graphs then.



His graphs are so awful though lol


----------



## Fluffmeister (Dec 30, 2016)

Anyway Fabio myth blown, time to move on.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> Anyway Fabio myth blown, time to move on.



Get w1z to do a review with the new driver and both cards overclocked, and I'll care. Posting anything off [H] typically just makes the world believe the opposite.


----------



## Fluffmeister (Dec 30, 2016)

cdawall said:


> Get w1z to do a review with the new driver and both overclocked and I'll care. Posting anything off the [h] typically just makes the world believe the opposite.



I'd love to, if it stopped you not caring this much.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> I'd love to, if it stopped you not caring this much.



It bothers me that people believe what [H] puts out. It reminds me way too much of people believing CNN.

Now, on the other end of the spectrum, has anyone seen the 2800-3000 MHz clocks they are getting out of the Galax version of the 1060? HOLY HELL.


----------



## Fluffmeister (Dec 30, 2016)

I posted about that Galax card too, but they hated on it for being under LN2.

I can't win.


----------



## cdawall (Dec 30, 2016)

Fluffmeister said:


> I posted about that Galax card too, but they hated on it for being under LN2.
> 
> I can't win.



Never can LOL


----------



## anubis44 (Feb 2, 2017)

IceScreamer said:


> This is great news for AMD users, but only when DX12 is properly implemented, which was sadly a lottery so far.



A lottery with increasingly excellent odds of winning for AMD Radeon users.

nVidia left out hardware schedulers from Pascal back in the design phase, before nVidia realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped the adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.

So much for nVidia's plans. They were hoping to milk not-so-bright nVidiots over a longer time frame before they lost the gaming war with AMD (an inevitability, as AMD now has >25% of the overall x86 gaming market), but AMD had other plans, and Microsoft is a willing accomplice. Now nVidia is pushing like mad to get into self-driving cars and high-performance computing, because their days of making big $$ from add-in PC gaming GPUs are coming to an end, much like the add-in sound card days for SoundBlaster.

Once AMD releases Zen APUs with Vega graphics and their new memory fabric, the market for mid-range add-in GPUs will begin to evaporate, just as the low-end add-in GPU board market mostly has. nVidia is getting painted into an ever-smaller unit-volume market at the high end, which is ironic, really, considering what a rip-off the price-performance proposition of a $600 GTX1080 is. Why anyone continues to funnel that kind of money to the Green Goblin is beyond me. I never spend more than $300 (maybe $320) on a graphics card, and that's my hard limit.


----------



## Nergal (Feb 2, 2017)

anubis44 said:


> A lottery with increasingly excellent odds of winning for AMD Radeon users.
> 
> nVidia left out hardware schedulers from Pascal back in the design phase, before nVidia realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped the adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.
> 
> ...



I would just love to have you be right.

However, M$ wasn't "pushed" into being an accomplice; they stole the concept of Mantle and incorporated it to counter AMD (and cost AMD market share & influence).

What is true is that the ngreedia fellows tried to milk this stagnation for as long as possible, but suddenly came face to face with a rapid DX12 roll-out.

Seeing as NV has tons more money for R&D than AMD does, my guess is they have already made headway in development that is kept hush-hush. I am sure they have some tech on the shelf they can pull out of their *$$ to combat DX12.

I imagine that whilst AMD will have better DX12 usage, we will see NV using "something" to just crank out more power from what they have. 
A reverse situation where NV has much more TFLOPs than AMD in their cards is certainly possible.


----------



## Vayra86 (Feb 3, 2017)

anubis44 said:


> A lottery with increasingly excellent odds of winning for AMD Radeon users.
> 
> nVidia left out hardware schedulers from Pascal back in the design phase, before nVidia realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped the adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.
> 
> ...



So then you've never been into buying high-end cards, you will always choose midrange, and you blame AMD's failure to keep pushing a high-end portfolio on Nvidia, who still IS able to squeeze 30% more performance into a 225 W TDP every year. Meanwhile, you consider 25% a healthy market share when there are only two companies in competition.

In the meantime, the majority of DX12 ports are not really showing any gains on DX12 for either company; only a small handful of games do. Native DX12 games are still extremely rare.

Sense, it makes none

It's good that DX12 is paying off for AMD, but the fact is that AMD counted on it for waaaay too long, which is why they lost market share under DX11. The company much closer to market reality is actually Nvidia, because even today they can still easily transition to DX12, and across the board their cards still do more with less power. The dedicated GPU is here to stay for at least a few more decades; if not for gaming, then for GPGPU and scientific purposes, or deep learning, AI, etc. The GPU is a Swiss Army knife, and the companies will always find a use for it, just like the CPU is already super versatile. Gaming is just a tiny slice of the GPU pie, even if we would love to believe otherwise.


----------



## AsRock (Feb 3, 2017)

P4-630 said:


> But nowhere near GTX1070 performance as some people claim an RX480 can match a GTX1070....
> RX480 is GTX1060 territory.



Wishful thinking.

How I see it, it's a $200 card vs. a $400 card, and for what? About 20 fps extra. So is it worth it? To be honest, only the buyer can decide that.


----------



## bug (Feb 3, 2017)

AsRock said:


> Wishful thinking.
> 
> How I see it, it's a $200 card vs. a $400 card, and for what? About 20 fps extra. So is it worth it? To be honest, only the buyer can decide that.


Expressing the difference between two video cards in FPS is wrong. What's 20 fps? The difference between 2 and 22? 42 and 62? 202 and 222? See how it doesn't work?
Percentages make more sense. But realistically speaking, the 1070 enables you to play at QHD while the 480 can only do FHD. Whether that's worth the price difference is indeed for the user to judge.
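To make the relative-vs-absolute point concrete, here is a minimal Python sketch (the helper name is mine, not from the thread): the same 20 fps delta translates to wildly different relative gains depending on the baseline.

```python
def fps_gain_percent(base_fps: float, new_fps: float) -> float:
    """Relative gain of new_fps over base_fps, in percent."""
    return (new_fps - base_fps) / base_fps * 100.0

# The same absolute 20 fps delta is a very different relative jump:
for base, new in [(2, 22), (42, 62), (202, 222)]:
    print(f"{base} -> {new} fps: +{fps_gain_percent(base, new):.0f}%")
# prints +1000%, +48%, +10%
```

Which is exactly why reviews quote percentage deltas rather than raw frame counts.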


----------



## RealNeil (Feb 3, 2017)

When you double-up and run Crossfire, you begin to see much better performance numbers. This is where the $235.00 for each RX480 8GB GPU begins to make sense. You can get two of these for not much more than one 1070.
A pair of RX480 8GB cards do all of my games nicely and without any lag to speak of. My 4K screen is running at 60Hz speed, and these two RX480s saturate it so that 4K resolutions are playable without jerking me around. (Shmooth!)
Yes, I know that the 1070 and 1080 cards will be quicker. (but for a lot more money)
1060s don't even factor in because NVIDIA chose to hobble the 60 series of GPUs in SLI this time around. (probably because they ~knew without a doubt~ that we would have jumped at the chance to SLI a pair of GTX-1060s and keep some money at home to eat with)
I have a pair of GTX-980Ti cards that I pulled out of this PC just to test out the 480s for a while. 980Ti cards in SLI are wonderful and probably what I'll keep buying instead of being disemboweled by NVIDIA for the newest thing.


----------



## AsRock (Feb 3, 2017)

bug said:


> Expressing the difference between two video cards in FPS is wrong. What's 20 fps? The difference between 2 and 22? 42 and 62? 202 and 222? See how it doesn't work?
> Percentages make more sense. But realistically speaking, the 1070 enables you to play at QHD while the 480 can only do FHD. Whether that's worth the price difference is indeed for the user to judge.



For the most part, yes, but some of us have one dedicated game that must improve. To me, most benchmarks, even in-game ones, mean pretty much nothing; they're just a rough guide, as games like Arma cannot be benchmarked correctly.

In the end it's down to the user, and if the benchmarked game in question were my go-to game, the 480 would be good enough and the 1070 would be a waste.

Some require 30+ fps, some require 60+, and these days even more want 100+. As long as I get 35+ fps, I am a happy gamer.

It's all about user requirements; I don't require 4K. My 290X is showing its age a little, but that still doesn't justify $200 over the 480; although, that said, the 480 is totally pointless for me anyway.


----------



## bug (Feb 3, 2017)

AsRock said:


> For the most part, yes, but some of us have one dedicated game that must improve. To me, most benchmarks, even in-game ones, mean pretty much nothing; they're just a rough guide, as games like Arma cannot be benchmarked correctly.
> 
> In the end it's down to the user, and if the benchmarked game in question were my go-to game, the 480 would be good enough and the 1070 would be a waste.
> 
> ...


The original argument was 480 vs 1070 and the price difference in general.
Sure, there are corner cases and exceptions to everything, but in this particular case, your statement does not hold.


----------



## FordGT90Concept (Feb 3, 2017)

Nergal said:


> However, M$ wasn't "pushed" into being an accomplice; they stole the concept of Mantle and incorporated it to counter AMD (and cost AMD market share & influence).


Developers demanded Mantle and, as an extension of that, Xbox One required DirectX 12.  Both technologies were closer to the metal, so developers could squeeze more performance out of them.  DirectX 12 was almost the last 3D API to go closer to the metal.  The trend started, I believe, back with the PlayStation 3, which had an OpenGL-based closer-to-the-metal implementation to squeeze more power out of the CELL processor.  Sony ported that library to the PlayStation 4, adapting and expanding it for AMD's APU.  The push for DirectX 12 really began in earnest when Microsoft saw the performance figures of the Xbox One compared to the PlayStation 4.  Not only did the PlayStation 4 have a beefier APU, it also had a closer-to-the-metal API for accessing it, which further accelerated its performance.  Microsoft looked to Mantle as an example of what they could accomplish with the APU they already had.  Xbox One apparently received the DirectX 12 API update at the end of 2015.

AMD has a tight working relationship with Microsoft because of the Xbox One which gave AMD a leg up in terms of DirectX 12.  The API was practically designed to run on GCN (which Xbox One has).  That said, NVIDIA has demonstrated that Pascal can run DirectX 12 just as well when optimized for it with Futuremark's TimeSpy.



Nergal said:


> A reverse situation where NV has much more TFLOPs than AMD in their cards is certainly possible.


AMD tends to have more TFLOPs on paper (exception right now because AMD hasn't launched a response to Pascal until Vega) but they also tend to leave more shaders idle.


Back to topic: I tried Direct3D 12 in The Division, and it does seem to get a few more frames, but it breaks alt+tab functionality, causing the game to crash when attempting to restore the window.  Because of that, I run the game in Direct3D 11.


----------



## bug (Feb 3, 2017)

FordGT90Concept said:


> Developers demanded Mantle and, as an extension of that, Xbox One required DirectX 12.  Both technologies were closer to the metal, so developers could squeeze more performance out of them.  DirectX 12 was almost the last 3D API to go closer to the metal.  The trend started, I believe, back with the PlayStation 3, which had an OpenGL-based closer-to-the-metal implementation to squeeze more power out of the CELL processor.  Sony ported that library to the PlayStation 4, adapting and expanding it for AMD's APU.  The push for DirectX 12 really began in earnest when Microsoft saw the performance figures of the Xbox One compared to the PlayStation 4.  Not only did the PlayStation 4 have a beefier APU, it also had a closer-to-the-metal API for accessing it, which further accelerated its performance.  Microsoft looked to Mantle as an example of what they could accomplish with the APU they already had.  Xbox One apparently received the DirectX 12 API update at the end of 2015.



What developers failed to realise is that on consoles they only had to deal with a handful of configurations at most, while the PC world is an entirely different beast. Now that developers have to put their money where their mouths were, we get DX11 titles with DX12 bolted on instead.


----------



## FordGT90Concept (Feb 3, 2017)

That's going to change pretty quickly. 48.3% of machines that participated in the Steam Hardware Survey run a DirectX 12 GPU on Windows 10.  An additional 25% have a DirectX 12 GPU but are running Windows 7 or 8.1, and could upgrade to fully support the API.

That said, major engines like Unreal Engine 4 are still struggling to adopt DirectX 12.  DirectX 12 represents a pretty major paradigm shift in 3D engine design, so adoption in software is going to be slower than even the DirectX 9 to DirectX 10 transition.  Games like The Division bolting on Direct3D 12 support are examples of developers dipping their toes into the API.  Eventually we'll start seeing Direct3D 12 games built from the ground up, with Direct3D 11 bolted on for backwards compatibility.  Those are the games that will seriously benefit from the closer-to-the-metal APIs.
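Summing the two survey figures above (taken at face value; the variable names below are my own), roughly three quarters of surveyed machines already have DX12-capable hardware:

```python
# Combined DirectX 12-capable share from the Steam survey figures
# quoted above (the survey only covers machines that opt in).
dx12_gpu_win10 = 48.3   # DX12 GPU + Windows 10: can use the API today
dx12_gpu_legacy = 25.0  # DX12 GPU + Windows 7/8.1: needs an OS upgrade first
total_dx12_capable = dx12_gpu_win10 + dx12_gpu_legacy
print(f"{total_dx12_capable:.1f}% of surveyed machines have a DX12-capable GPU")
# prints 73.3% of surveyed machines have a DX12-capable GPU
```

The software side, as noted, is what lags behind the hardware.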


----------



## bug (Feb 3, 2017)

FordGT90Concept said:


> That's going to change pretty quickly. 48.3% of machines that participated in the Steam Hardware Survey run a DirectX 12 GPU on Windows 10.  An additional 25% have a DirectX 12 GPU but are running Windows 7 or 8.1, and could upgrade pretty quickly to fully support the API.
> 
> That said, major engines like Unreal Engine 4 are still struggling to adopt DirectX 12.  DirectX 12 represents a pretty major paradigm shift in 3D engine design, so adoption in software is going to be slower than even the DirectX 9 to DirectX 10 transition.  Games like The Division bolting on Direct3D 12 support are examples of developers dipping their toes into the API.  Eventually we'll start seeing Direct3D 12 games built from the ground up, with Direct3D 11 bolted on for backwards compatibility.  Those are the games that will seriously benefit from the closer-to-the-metal APIs.


My guess is many (smaller) developers either won't have the resources or the will to implement (and test) DX12 and will stick to DX11 instead. Same for Vulkan vs. OpenGL.


----------



## FordGT90Concept (Feb 3, 2017)

Smaller developers are dependent on the engine they're using.  The bulk of them are on Unreal Engine (discussed previously) and Unity.  Unity is supposed to support DirectX 12, but I don't know whether, or when, they implemented it.  I do know Unity 5.6 supports Vulkan.


----------

