Monday, December 19th 2016
Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%
Over the weekend, gamers began testing the new DirectX 12 renderer of Tom Clancy's "The Division," released through a game patch. Testing by Russian tech site GameGPU shows the AMD Radeon RX 480 running about 16 percent faster than the NVIDIA GeForce GTX 1060 6GB in the new DirectX 12 mode. GameGPU tested the new renderer on an ASUS Radeon RX 480 STRIX graphics card driven by Crimson ReLive 16.12.1 drivers, compared against an ASUS GeForce GTX 1060 6GB STRIX on GeForce 376.19 drivers. Independent testing by German tech site ComputerBase.de supports these findings.
Sources:
GameGPU, ComputerBase.de, Expreview
142 Comments on Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%
Now, on the other end of the spectrum, has anyone seen the 2,800-3,000 MHz clocks they're getting out of the Galax version of the 1060? HOLY HELL.
I can't win.
nVidia left hardware schedulers out of Pascal back in the design phase, before it realized AMD had pressured Microsoft into incorporating DX12 (essentially Mantle) into Windows 10. By the time nVidia knew this, it was too late to change Pascal. nVidia hoped adoption of Windows 10 would be slow, but Microsoft gave it away for free for nearly a year.
So much for nVidia's plans. They were hoping to milk not-so-bright nVidiots over a longer time frame before they lost the gaming war with AMD (an inevitability, as AMD now has >25% of the overall x86 gaming market), but AMD had other plans, and Microsoft is a willing accomplice. Now nVidia is pushing like mad to get into self-driving cars and high-performance computing, because their days of making big $$ from add-in PC gaming GPUs are coming to an end, much like the add-in sound card days for Sound Blaster.
Once AMD releases Zen APUs with Vega graphics and their new memory fabric, the market for mid-range add-in GPUs will begin to evaporate, just as the low-end add-in GPU board market mostly has. nVidia is getting painted into an ever-smaller unit-volume market at the high end, which is ironic, really, considering what a rip-off the price-performance proposition of a $600 GTX 1080 is. Why anyone continues to funnel that kind of money to the Green Goblin is beyond me. I never spend more than $300 (maybe $320) on a graphics card, and that's my hard limit.
However, M$ wasn't "pushed" into being an accomplice; they took the concept of Mantle and incorporated it to counter AMD (and avoid losing market share and influence).
What is true is that the ngreedia fellows tried to milk this stagnation for as long as possible, but suddenly came face to face with a rapid DX12 roll-out.
Seeing as NV has tons more money for R&D than AMD does, my guess is they've already made headway in development that's kept hush-hush. I am sure they have some tech on the shelf they can pull out of their *$$ to combat DX12.
I imagine that whilst AMD will have better DX12 utilization, we will see NV using "something" to just crank more power out of what they have.
A reversed situation, where NV has far more TFLOPs in its cards than AMD, is certainly possible.
In the meantime, the majority of DX12 ports aren't really showing gains on DX12 for either company; only a small handful of games do. Native DX12 games are still extremely rare.
Sense, it makes none
It's good that DX12 is paying off for AMD, but the fact is that AMD counted on that for waaaay too long, which is the reason they lost market share under DX11. The company much closer to market reality is actually Nvidia, because even today they can still easily transition to DX12, and across the board their cards still do more with less power. The dedicated GPU is here to stay for at least a few more decades; if not for gaming, then for GPGPU and scientific purposes, or deep learning, AI, etc. The GPU is a Swiss Army knife, and the companies will always find a use for it, just like the CPU is already super versatile. Gaming is just a tiny slice of the GPU pie, even if we would love to believe otherwise.
How I see it is a $200 card vs. a $400 card, and for what? About 20 fps extra. So is it worth it? To be honest, only the buyer can decide that.
Percentage makes more sense. But realistically speaking, the 1070 enables you to play at QHD while the 480 can only do FHD. Whether that's worth the price difference is indeed for the user to judge.
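To put a rough number on that trade-off, here is a minimal sketch of the dollars-per-frame math, using only the round figures quoted in these comments ($200 vs. $400, about 20 fps apart) rather than any benchmark data:

    #include <cstdio>

    int main() {
        // Round figures from the comments above, not measured data.
        const double price480  = 200.0;  // cheaper card
        const double price1070 = 400.0;  // faster card
        const double extraFps  = 20.0;   // claimed frame-rate gap

        // Marginal cost of the faster card per additional frame per second.
        std::printf("$%.0f per extra fps\n",
                    (price1070 - price480) / extraFps); // prints $10
        return 0;
    }

Whether $10 per extra fps is worth it is exactly the judgment call left to the buyer.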
A pair of RX 480 8GB cards handles all of my games nicely, without any lag to speak of. My 4K screen runs at 60 Hz, and the two RX 480s saturate it so that 4K resolutions are playable without jerking me around. (Shmooth!)
Yes, I know that the 1070 and 1080 cards will be quicker. (but for a lot more money)
1060s don't even factor in, because NVIDIA chose to drop SLI support from the 60 series of GPUs this time around (probably because they ~knew without a doubt~ that we would have jumped at the chance to SLI a pair of GTX 1060s and keep some money at home to eat with).
I have a pair of GTX-980Ti cards that I pulled out of this PC just to test out the 480s for a while. 980Ti cards in SLI are wonderful and probably what I'll keep buying instead of being disemboweled by NVIDIA for the newest thing.
In the end it's down to the user. If the benchmarked game in question were my go-to game, the 480 would be good enough and having the 1070 would be a waste.
Some require 30+ fps, some require 60+, and these days even more want 100+. As long as I get 35+ fps, I am a happy gamer.
It's all about user requirements; I don't require 4K. My 290X is showing its age a little, but that still doesn't justify the $200 premium over the 480, although, that said, the 480 is totally pointless for me anyway.
Sure, there are corner cases and exceptions to everything, but in this particular case, your statement does not hold.
AMD has a tight working relationship with Microsoft because of the Xbox One, which gave AMD a leg up in terms of DirectX 12; the API was practically designed to run on GCN (which the Xbox One uses). That said, NVIDIA has demonstrated with Futuremark's Time Spy that Pascal can run DirectX 12 just as well when software is optimized for it. AMD tends to have more TFLOPs on paper (the current exception being that AMD has no answer to high-end Pascal until Vega), but they also tend to leave more shaders idle.
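For reference, the "TFLOPs on paper" figure both camps quote is just shader count x 2 ops (fused multiply-add) x clock. A quick sketch using the reference boost clocks (assumed stock figures; partner cards clock higher, and sustained clocks vary):

    #include <cstdio>

    // Peak single-precision throughput: shaders x 2 ops (FMA) x clock (GHz).
    static double tflops(int shaders, double boostGHz) {
        return shaders * 2.0 * boostGHz / 1000.0;
    }

    int main() {
        std::printf("RX 480:   %.1f TFLOPs\n", tflops(2304, 1.266)); // ~5.8
        std::printf("GTX 1060: %.1f TFLOPs\n", tflops(1280, 1.708)); // ~4.4
        return 0;
    }

The RX 480's ~5.8 TFLOPs versus the GTX 1060's ~4.4 is exactly why "more shaders idle" matters: under DX11 the two trade blows despite the paper gap, and it takes DX12-style scheduling to put those extra shaders to work.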
Back on topic: I tried Direct3D 12 in The Division, and it does seem to get a few more frames, but it breaks alt+tab functionality, causing the game to crash when attempting to restore the window. Because of that, I run the game on Direct3D 11.
That said, major engines like Unreal Engine 4 are still struggling to adopt DirectX 12. DirectX 12 represents a major paradigm shift in 3D engine design, so adoption in software is going to be slower than even the DirectX 9 to DirectX 10 transition. Games like The Division bolting on Direct3D 12 support are examples of developers dipping their toes into the API. Eventually we'll start seeing Direct3D 12 games built from the ground up, with Direct3D 11 bolted on for backwards compatibility. Those are the games that will seriously benefit from the closer-to-the-metal APIs.
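To give a concrete sense of that paradigm shift, here is a minimal sketch (assumed names and setup, not code from The Division or any shipping engine) of the per-frame work Direct3D 12 makes explicit. Under Direct3D 11 the driver handled the equivalent bookkeeping behind the immediate context; under Direct3D 12 the engine records, submits, and synchronizes command lists itself:

    #include <windows.h>
    #include <d3d12.h>

    // One frame's worth of explicit work. Assumes the queue, allocator,
    // command list, fence, and event were created at startup
    // (CreateCommandQueue, CreateCommandAllocator, CreateFence, etc.).
    void RenderFrame(ID3D12CommandAllocator* allocator,
                     ID3D12GraphicsCommandList* cmdList,
                     ID3D12CommandQueue* queue,
                     ID3D12Fence* fence,
                     HANDLE fenceEvent,
                     UINT64& fenceValue)
    {
        // D3D11 recycled command memory behind the scenes; in D3D12 the
        // engine resets and reuses allocators and lists explicitly.
        allocator->Reset();
        cmdList->Reset(allocator, nullptr);

        // ... record resource barriers, pipeline state, draw calls ...

        cmdList->Close();
        ID3D12CommandList* lists[] = { cmdList };
        queue->ExecuteCommandLists(1, lists);

        // Explicit CPU/GPU sync: the engine, not the driver, decides
        // when it is safe to reuse this frame's resources.
        ++fenceValue;
        queue->Signal(fence, fenceValue);
        if (fence->GetCompletedValue() < fenceValue) {
            fence->SetEventOnCompletion(fenceValue, fenceEvent);
            WaitForSingleObject(fenceEvent, INFINITE);
        }
    }

Every one of those steps is a place an engine built around D3D11's implicit model can get wrong when D3D12 is bolted on, which is a plausible reason retrofits like The Division's show bugs such as the alt+tab crash mentioned above.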