Monday, December 19th 2016
Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%
Over the weekend, gamers began testing the new DirectX 12 renderer of Tom Clancy's "The Division," released through a game patch. Testing by Russian tech site GameGPU shows the AMD Radeon RX 480 running about 16 percent faster than the NVIDIA GeForce GTX 1060 6GB in the new DirectX 12 mode. The game was tested on an ASUS Radeon RX 480 STRIX graphics card driven by Crimson ReLive 16.12.1 drivers, and compared against an ASUS GeForce GTX 1060 6GB STRIX driven by GeForce 376.19 drivers. Independent testing by German tech site ComputerBase.de supports these findings.
Sources:
GameGPU, ComputerBase.de, Expreview
142 Comments on Tom Clancy's "The Division" Gets DirectX 12 Update, RX 480 Beats GTX 1060 by 16%
The GTX 1060 in DX12 has a higher minimum FPS than the RX 480, which usually translates into smoother gameplay.
Here's another revelation: DX12 works just fine on Pascal.
Thirdly, the RX 480 has 5.8 TFLOPS while the GTX 1060 has just 3.8 TFLOPS, so naturally the RX 480 should not be slower than the GTX 1060, at least in cases where raw performance matters (Direct3D 12/Vulkan/Mantle). At the same time, NVIDIA has done an impeccable job with its D3D9/10/11 drivers: it consistently beats more powerful AMD GPUs while having fewer transistors and lower raw throughput. D3D12/Vulkan were modelled after AMD GPUs, so naturally AMD has had a huge head start in these new APIs, while NVIDIA's first attempt at them was the Pascal architecture. Remember how it took AMD four-plus years to match NVIDIA's tessellation performance? Then why should NVIDIA's first attempt at D3D12/Vulkan be as fast as its competitor's?
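For reference, those TFLOPS figures fall straight out of shader count times clock speed, counting each fused multiply-add as two operations (the clocks below are reference values, so exact numbers vary by board):

    RX 480:   2304 shaders x 1266 MHz x 2 ops = 5.83 TFLOPS
    GTX 1060: 1280 shaders x 1506 MHz x 2 ops = 3.86 TFLOPS (roughly 4.4 at its 1708 MHz boost)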
BOTH camps' current GPUs will be obsolete before DX12 actually matters.
But hey... a game here and there is improving... go AMD! Lol. Not in this thread. ;)
I'll just leave this here then... never trust a single review.
My only question is when AMD is going to bother releasing a bigger GPU to take full advantage of DX12. All that low-level goodness doesn't matter if you are constantly GPU-bound.
So it took two years to adopt incremental API changes in DX11... it should take a few more for effective DX12 adoption by the devs, who (for some reason) still need to learn how to use it efficiently for both AMD and NVIDIA, because as usual the best case for AMD is the worst case for NVIDIA and vice versa.
Remember, D3D12 and Vulkan are completely different APIs from everything that came before. To give you an analogy, it's like switching from Java/C# to something akin to C or even assembly: the driver no longer manages memory, hazards, and synchronization for you. It's extremely hard.
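To make that analogy concrete, here's a minimal C++ sketch (assuming a device and command queue created elsewhere; the helper name is my own) of the kind of manual CPU/GPU fence synchronization every D3D12 application has to write itself, work the D3D11 driver used to do implicitly:

    #include <windows.h>
    #include <d3d12.h>

    // Hypothetical helper: block the CPU until the GPU has drained
    // everything submitted to 'queue' so far.
    void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
    {
        ID3D12Fence* fence = nullptr;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        HANDLE done = CreateEventEx(nullptr, nullptr, 0, EVENT_ALL_ACCESS);

        // Ask the queue to raise the fence to 1 once all prior work completes...
        queue->Signal(fence, 1);

        // ...then have the fence fire our event at that value and wait on it.
        fence->SetEventOnCompletion(1, done);
        WaitForSingleObject(done, INFINITE);

        CloseHandle(done);
        fence->Release();
    }

In D3D11 you never see any of this; in D3D12 you also juggle resource state barriers, descriptor heaps, and allocator lifetimes on top of it.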
Lastly, why do people want to belong to the AMD/NVIDIA camps so much? These companies don't give a c r a p about what you think and believe in, yet people vehemently praise "their" vendor over the other one. Could we show a little more respect for each vendor's best features without turning into a branch of WCCFTech?
If some choose to be early adopters, good for them. But I'll sit this one out and see what happens.
You might not be missing anything, though; one of my mates says this game is a grind and he doesn't like it. I never got the chance to find out, lol.
For example, Unreal Engine has an experimental DX12 mode, and when you turn it on, FPS drops from 110 to 80.
And that is going to be the most prevalent way of using DX12 :laugh: just turning it on in a ready-made engine (see the launch switch below).
Also interesting to note: the async compute feature in UE4 was implemented by Lionhead Studios... for a game that was canceled.
In other words, it's a mess :laugh:
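For anyone who wants to try that UE4 mode themselves: as far as I know (from memory, so treat it as a pointer rather than gospel), the experimental D3D12 RHI is off by default and is enabled per launch with a command-line switch, e.g.:

    MyGame.exe -dx12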
Not sure if serious. "DX12's focus is on enabling a dramatic increase in visual richness through a significant decrease in API-related CPU overhead," said NVIDIA's Henry Moreton last year.
Pascal's got too much oomph for 1080p, and just doesn't cut it for 4K, while being waaay overpriced for the silicon you're getting. As time progresses, this will become more and more apparent. With every percent AMD can squeeze out of driver updates or DX12, they get to use their much wider GPU architecture better; it just scales well at any resolution.
Even a 3072-core Polaris-based chip would be a nice improvement.