Monday, December 19th 2016

Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%

Over the weekend, gamers began testing the new DirectX 12 renderer of Tom Clancy's "The Division," released through a game patch. Testing by Russian tech site GameGPU shows the AMD Radeon RX 480 running about 16 percent faster than the NVIDIA GeForce GTX 1060 6GB in the new DirectX 12 mode. The test pitted an ASUS Radeon RX 480 STRIX driven by Crimson ReLive 16.12.1 drivers against an ASUS GeForce GTX 1060 6GB STRIX driven by GeForce 376.19 drivers. Independent testing by German tech site ComputerBase.de supports these findings.
Sources: GameGPU, ComputerBase.de, Expreview

142 Comments on Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%

#126
Fluffmeister
cdawallGet w1z to do a review with the new driver and both overclocked and I'll care. Posting anything off the [h] typically just makes the world believe the opposite.
I'd love to, if it stopped you not caring this much.
Posted on Reply
#127
cdawall
where the hell are my stars
FluffmeisterI'd love to, if it stopped you not caring this much.
It bothers me that people believe what [H] puts out. It reminds me way too much of people believing CNN.

Now on the other end of the spectrum, has anyone seen the 2800-3000 MHz clocks they are getting out of the Galax version of the 1060? HOLY HELL.
Posted on Reply
#128
Fluffmeister
I posted about that Galax card too, but they hated on it for being under LN2.

I can't win.
Posted on Reply
#129
cdawall
where the hell are my stars
FluffmeisterI posted about that Galax card too, but they hated on it for being under LN2.

I can't win.
Never can LOL
Posted on Reply
#130
anubis44
IceScreamerThis is great news for AMD users, but only when DX12 is properly implemented, which has sadly been a lottery so far.
A lottery with increasingly excellent odds of winning for AMD Radeon users.

nVidia left hardware schedulers out of Pascal back in the design phase, before it realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.

So much for nVidia's plans. They were hoping to milk not-so-bright nVidiots over a longer time frame before they lost the gaming war with AMD (an inevitability, as AMD now has >25% of the overall x86 gaming market), but AMD had other plans, and Microsoft is a willing accomplice. Now nVidia is pushing like mad to get into self-driving cars and high-performance computing, because their days of making big $$ from add-in PC gaming GPUs are coming to an end, much like the add-in sound card days for Sound Blaster.

Once AMD releases Zen APUs with Vega graphics and their new memory fabric, the market for mid-range add-in GPUs will begin to evaporate, just as the low-end add-in GPU board market mostly has. nVidia is getting painted into an ever-smaller unit-volume market at the high end, which is ironic, really, considering what a rip-off the price-performance proposition is for a $600 GTX 1080. Why anyone continues to funnel that kind of money to the Green Goblin is beyond me. I never spend more than $300 (maybe $320) on a graphics card, and that's my hard limit.
Posted on Reply
#131
Nergal
anubis44A lottery with increasingly excellent odds of winning for AMD Radeon users.

nVidia left hardware schedulers out of Pascal back in the design phase, before it realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.

So much for nVidia's plans. They were hoping to milk not-so-bright nVidiots over a longer time frame before they lost the gaming war with AMD (an inevitability, as AMD now has >25% of the overall x86 gaming market), but AMD had other plans, and Microsoft is a willing accomplice. Now nVidia is pushing like mad to get into self-driving cars and high-performance computing, because their days of making big $$ from add-in PC gaming GPUs are coming to an end, much like the add-in sound card days for Sound Blaster.

Once AMD releases Zen APUs with Vega graphics and their new memory fabric, the market for mid-range add-in GPUs will begin to evaporate, just as the low-end add-in GPU board market mostly has. nVidia is getting painted into an ever-smaller unit-volume market at the high end, which is ironic, really, considering what a rip-off the price-performance proposition is for a $600 GTX 1080. Why anyone continues to funnel that kind of money to the Green Goblin is beyond me. I never spend more than $300 (maybe $320) on a graphics card, and that's my hard limit.
I would just love to have you be right.

However, M$ wasn't "pushed" as an accomplice; they stole the concept of Mantle and incorporated it to counter AMD (and avoid losing market share and influence).

What is true is that the ngreedia fellows tried to milk this stagnation for as long as possible, but suddenly came face to face with a rapid DX12 roll-out.

Seeing as NV has tons more money for R&D than AMD does, my guess is they have already made headway in developments that are kept hush-hush. I am sure they have some tech on the shelf they can pull out of their *$$ to combat DX12.

I imagine that while AMD will have better DX12 utilization, we will see NV using "something" to just crank out more power from what they have.
A reverse situation, where NV has far more TFLOPs in their cards than AMD, is certainly possible.
Posted on Reply
#132
Vayra86
anubis44A lottery with increasingly excellent odds of winning for AMD Radeon users.

nVidia left hardware schedulers out of Pascal back in the design phase, before it realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.

So much for nVidia's plans. They were hoping to milk not-so-bright nVidiots over a longer time frame before they lost the gaming war with AMD (an inevitability, as AMD now has >25% of the overall x86 gaming market), but AMD had other plans, and Microsoft is a willing accomplice. Now nVidia is pushing like mad to get into self-driving cars and high-performance computing, because their days of making big $$ from add-in PC gaming GPUs are coming to an end, much like the add-in sound card days for Sound Blaster.

Once AMD releases Zen APUs with Vega graphics and their new memory fabric, the market for mid-range add-in GPUs will begin to evaporate, just as the low-end add-in GPU board market mostly has. nVidia is getting painted into an ever-smaller unit-volume market at the high end, which is ironic, really, considering what a rip-off the price-performance proposition is for a $600 GTX 1080. Why anyone continues to funnel that kind of money to the Green Goblin is beyond me. I never spend more than $300 (maybe $320) on a graphics card, and that's my hard limit.
So then you've never been into buying high-end cards, you will always choose midrange, and you blame AMD's failure to keep pushing a high-end portfolio on Nvidia, who still IS able to squeeze 30% more performance into a 225 W TDP every year. Meanwhile, you consider 25% a healthy market share when there are only two companies in competition.

In the meantime, the majority of DX12 ports are not really showing any gains under DX12 for either company; only a small handful of games do. Native DX12 games are still extremely rare.

Sense, it makes none

It's good that DX12 is paying off for AMD, but the fact is that AMD counted on it for waaaay too long, which is the reason they lost their market share under DX11. The company that is much closer to market reality is actually Nvidia, because even today they can still easily transition to DX12, and across the board their cards still do more with less power. The dedicated GPU is here to stay for at least a few more decades; if not for gaming, then for GPGPU and scientific purposes, or deep learning, AI, etc. The GPU is a Swiss Army knife and the companies will always find a use for it, just like the CPU is already super versatile. Gaming is just a tiny slice of the GPU pie, even if we would love to believe otherwise.
Posted on Reply
#133
AsRock
TPU addict
P4-630But nowhere near GTX 1070 performance, as some people claim an RX 480 can match a GTX 1070... :p
The RX 480 is GTX 1060 territory. ;)
Wishful thinking.

The way I see it, it's a $200 card vs. a $400 card, and for what? About 20 FPS extra. So is it worth it? To be honest, only the buyer can decide that.
Posted on Reply
#134
bug
AsRockWishful thinking.

The way I see it, it's a $200 card vs. a $400 card, and for what? About 20 FPS extra. So is it worth it? To be honest, only the buyer can decide that.
Expressing the difference between two video cards in FPS is wrong. What's 20 FPS? The difference between 2 and 22? 42 and 62? 202 and 222? See how it doesn't work?
Percentages make more sense. But realistically speaking, the 1070 enables you to play at QHD while the 480 can only do FHD. Whether that's worth the price difference is indeed for the user to judge.
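To put numbers on that, here is a quick sketch (toy baselines, not benchmark data) of what the same +20 FPS gap means in percent and in frame time:

```cpp
#include <cstdio>

int main() {
    // The same +20 FPS means wildly different things depending on the
    // baseline, both as a percentage and as frame time actually saved.
    const double baselines[] = { 2.0, 42.0, 202.0 };
    for (double fps : baselines) {
        double pct  = 20.0 / fps * 100.0;                   // relative uplift
        double dtMs = 1000.0 / fps - 1000.0 / (fps + 20.0); // ms shaved per frame
        std::printf("%5.0f -> %5.0f FPS: +%6.1f%%, %7.2f ms less per frame\n",
                    fps, fps + 20.0, pct, dtMs);
    }
    return 0;
}
```

Going from 2 to 22 FPS is a 1000% uplift and ~455 ms saved per frame; from 202 to 222 it's under 10% and less than half a millisecond, which is exactly why the bare FPS delta tells you nothing.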
Posted on Reply
#135
RealNeil
When you double up and run CrossFire, you begin to see much better performance numbers. This is where the $235.00 for each RX 480 8GB GPU begins to make sense. You can get two of these for not much more than one 1070.
A pair of RX 480 8GB cards handles all of my games nicely, without any lag to speak of. My 4K screen is running at 60 Hz, and these two RX 480s saturate it so that 4K resolutions are playable without jerking me around. (Shmooth!)
Yes, I know that the 1070 and 1080 cards will be quicker (but for a lot more money).
1060s don't even factor in, because NVIDIA chose to hobble the 60 series of GPUs in SLI this time around (probably because they ~knew without a doubt~ that we would have jumped at the chance to SLI a pair of GTX 1060s and keep some money at home to eat with).
I have a pair of GTX 980 Ti cards that I pulled out of this PC just to test out the 480s for a while. 980 Ti cards in SLI are wonderful and probably what I'll keep buying instead of being disemboweled by NVIDIA for the newest thing.
Posted on Reply
#136
AsRock
TPU addict
bugExpressing the difference between two video cards in FPS is wrong. What's 20 FPS? The difference between 2 and 22? 42 and 62? 202 and 222? See how it doesn't work?
Percentages make more sense. But realistically speaking, the 1070 enables you to play at QHD while the 480 can only do FHD. Whether that's worth the price difference is indeed for the user to judge.
For the most part, but some of us have one particular game that must improve. To me, most benchmarks, even in-game ones, mean pretty much nothing; they're just a rough guide, since games like Arma can't be benchmarked correctly.

In the end it's down to the user; if the benchmarked game in question were my go-to game, the 480 would be good enough and the 1070 would be a waste.

Some require 30+, some require 60+, and these days even more want 100+. As long as I get 35+ FPS, I am a happy gamer.

It's all about user requirements; I don't require 4K. My 290X is showing its age a little, but that still doesn't justify spending $200 more than the 480 costs. That said, the 480 is totally pointless to me.
Posted on Reply
#137
bug
AsRockFor the most part, but some of us have one particular game that must improve. To me, most benchmarks, even in-game ones, mean pretty much nothing; they're just a rough guide, since games like Arma can't be benchmarked correctly.

In the end it's down to the user; if the benchmarked game in question were my go-to game, the 480 would be good enough and the 1070 would be a waste.

Some require 30+, some require 60+, and these days even more want 100+. As long as I get 35+ FPS, I am a happy gamer.

It's all about user requirements; I don't require 4K. My 290X is showing its age a little, but that still doesn't justify spending $200 more than the 480 costs. That said, the 480 is totally pointless to me.
The original argument was 480 vs 1070 and the price difference in general.
Sure, there are corner cases and exceptions to everything, but in this particular case, your statement does not hold.
Posted on Reply
#138
FordGT90Concept
"I go fast!1!11!1!"
NergalHowever, M$ wasn't "pushed" as an accomplice; they stole the concept of Mantle and incorporated it to counter AMD (and avoid losing market share and influence).
Developers demanded Mantle and, as an extension of that, Xbox One required DirectX 12. Both technologies were closer to the metal, so developers could squeeze more performance out of them. DirectX 12 was almost the last 3D API to go closer to the metal. The trend started, I believe, back with PlayStation 3, which had an OpenGL-based closer-to-the-metal implementation to squeeze more power out of the CELL processor. Sony ported that library to PlayStation 4, adapting and expanding it for AMD's APU. The push for DirectX 12 really began in earnest when Microsoft saw the performance figures of Xbox One compared to PlayStation 4. Not only did PlayStation 4 have a beefier APU, it also had a closer-to-the-metal API for accessing it, which further accelerated its performance. Microsoft looked to Mantle as an example of what they could accomplish with the APU they already had. Xbox One apparently received the DirectX 12 API update at the end of 2015.

AMD has a tight working relationship with Microsoft because of the Xbox One, which gave AMD a leg up in terms of DirectX 12. The API was practically designed to run on GCN (which Xbox One has). That said, NVIDIA has demonstrated with Futuremark's Time Spy that Pascal can run DirectX 12 just as well when optimized for it.
NergalA reverse situation where NV has much more TFLOPs than AMD in their cards is certainly possible.
AMD tends to have more TFLOPs on paper (the exception being right now, because AMD won't have a response to Pascal until Vega), but they also tend to leave more shaders idle.
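For reference, paper TFLOPs are just shader count × clock × 2 (one FMA counts as two FP32 ops). A quick sketch using the two cards' reference boost clocks (spec-sheet values, not measured game clocks):

```cpp
#include <cstdio>

int main() {
    // Peak FP32 throughput = 2 ops (FMA) x shader count x clock.
    struct Gpu { const char* name; int shaders; double clockGHz; };
    const Gpu gpus[] = {
        { "RX 480",   2304, 1.266 },  // reference boost clock
        { "GTX 1060", 1280, 1.708 },  // reference boost clock
    };
    for (const Gpu& g : gpus)
        std::printf("%s: %.2f TFLOPs peak FP32\n",
                    g.name, 2.0 * g.shaders * g.clockGHz / 1000.0);
    return 0;
}
```

That's roughly 5.8 TFLOPs for the RX 480 against 4.4 TFLOPs for the GTX 1060, which is why keeping those shaders fed (or not) under DX11 vs. DX12 swings the results so much.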


Back to topic: I tried Direct3D 12 in The Division, and it does seem to get a few more frames, but it breaks alt+tab functionality, causing the game to crash when attempting to restore the window. Because of that, I run the game on Direct3D 11.
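For what it's worth, here is a minimal sketch (an assumed common pattern, not The Division's actual code) of how a D3D12 renderer can survive that kind of device loss around alt+tab instead of crashing; RecreateDeviceResources is a hypothetical helper:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecreateDeviceResources(); // hypothetical: rebuild device, queue, swap chain

void PresentFrame(const ComPtr<IDXGISwapChain3>& swapChain,
                  const ComPtr<ID3D12Device>& device)
{
    HRESULT hr = swapChain->Present(1, 0); // vsync on
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // Ask why the device went away (TDR, driver update, mode switch...),
        // then rebuild everything rather than letting the next call blow up.
        HRESULT reason = device->GetDeviceRemovedReason();
        (void)reason; // a real engine would log this
        RecreateDeviceResources();
    }
}
```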
Posted on Reply
#139
bug
FordGT90ConceptDevelopers demanded Mantle and, as an extension of that, Xbox One required DirectX 12. Both technologies were closer to the metal, so developers could squeeze more performance out of them. DirectX 12 was almost the last 3D API to go closer to the metal. The trend started, I believe, back with PlayStation 3, which had an OpenGL-based closer-to-the-metal implementation to squeeze more power out of the CELL processor. Sony ported that library to PlayStation 4, adapting and expanding it for AMD's APU. The push for DirectX 12 really began in earnest when Microsoft saw the performance figures of Xbox One compared to PlayStation 4. Not only did PlayStation 4 have a beefier APU, it also had a closer-to-the-metal API for accessing it, which further accelerated its performance. Microsoft looked to Mantle as an example of what they could accomplish with the APU they already had. Xbox One apparently received the DirectX 12 API update at the end of 2015.
What developers failed to realise is that on consoles they only had to deal with a handful of configurations at most, while the PC world is an entirely different beast. Now that developers have to put their money where their mouth is, we get DX11 titles with DX12 bolted on instead.
Posted on Reply
#140
FordGT90Concept
"I go fast!1!11!1!"
That's going to change pretty quickly. 48.3% of machines that participated in the Steam Hardware Survey have a DirectX 12 GPU and run Windows 10. An additional 25% have a DirectX 12 GPU and are running Windows 7 or 8.1, which could upgrade to fully support the API.

That said, major engines like Unreal Engine 4 are still struggling to adopt DirectX 12. DirectX 12 represents a pretty major paradigm shift in 3D engine design, so adoption in software is going to be slower than even the DirectX 9 to DirectX 10 transition. Games like The Division bolting on Direct3D 12 support are examples of developers dipping their toes into the API. Eventually we'll start seeing Direct3D 12 games built from the ground up with Direct3D 11 bolted on for backwards compatibility. Those are the games that will seriously benefit from the closer-to-the-metal APIs.
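To make the paradigm shift concrete, here is a minimal sketch (generic boilerplate; device/queue/command-list setup assumed done elsewhere) of the explicit CPU/GPU synchronization that D3D11 drivers used to handle implicitly:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Under D3D12 the engine, not the driver, decides when the GPU is done.
void SubmitAndWait(ID3D12Device* device,
                   ID3D12CommandQueue* queue,
                   ID3D12GraphicsCommandList* commandList)
{
    commandList->Close();                        // finish recording on the CPU
    ID3D12CommandList* lists[] = { commandList };
    queue->ExecuteCommandLists(1, lists);        // hand the batch to the GPU

    // Fence: a value the GPU writes when it reaches this point in the queue.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);

    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);        // wake the CPU at that value
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
}
```

Multiply that by manual resource state barriers, pipeline state objects, and explicit memory management, and it's easy to see why engine adoption is slow.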
Posted on Reply
#141
bug
FordGT90ConceptThat's going to change pretty quickly. 48.3% of machines that participated in the Steam Hardware Survey have a DirectX 12 GPU and run Windows 10. An additional 25% have a DirectX 12 GPU and are running Windows 7 or 8.1, which could upgrade to fully support the API.

That said, major engines like Unreal Engine 4 are still struggling to adopt DirectX 12. DirectX 12 represents a pretty major paradigm shift in 3D engine design, so adoption in software is going to be slower than even the DirectX 9 to DirectX 10 transition. Games like The Division bolting on Direct3D 12 support are examples of developers dipping their toes into the API. Eventually we'll start seeing Direct3D 12 games built from the ground up with Direct3D 11 bolted on for backwards compatibility. Those are the games that will seriously benefit from the closer-to-the-metal APIs.
My guess is many (smaller) developers either won't have the resources or the will to implement (and test) DX12 and will stick with DX11 instead. The same goes for Vulkan versus OpenGL.
Posted on Reply
#142
FordGT90Concept
"I go fast!1!11!1!"
Smaller developers are dependent on the engine they're using. The bulk of them are on Unreal Engine (discussed previously) and Unity. Unity is supposed to support DirectX 12, but I don't know whether or when they implemented it. I do know Unity 5.6 supports Vulkan.
Posted on Reply