Tuesday, August 18th 2015
AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"
Stardock's "Ashes of the Singularity" may not be particularly pathbreaking as an RTS, in the Starcraft era, but has the distinction of being the first game to the market with a DirectX 12 renderer, in addition to its default DirectX 11 one. This gave gamers the first peak at API to API comparisons, to test the tall bare-metal optimizations of DirectX 12, and as it turns out, AMD GPUs do seem to benefit big.
In a GeForce GTX 980 vs. Radeon R9 390X comparison by PC Perspective, the game performs rather poorly on its default DirectX 11 renderer with the R9 390X; when switched to DirectX 12, the card not only takes a big leap (in excess of 30%) in frame rates, but also outperforms the GTX 980. A skeptical way of looking at these results is that the R9 390X isn't well optimized for the D3D 11 renderer to begin with, and merely returns to its expected performance relative to the GTX 980 under the D3D 12 renderer. Comparing the two GPUs at CPU-bound resolutions (900p and 1080p) across various CPUs (including the i7-5960X, i7-6700K, dual-core i3-4330, FX-8350, and FX-6300) reveals that the R9 390X sees only a narrow performance drop with fewer CPU cores, and slight gains as the core count rises. Find the full, insightful review at the source link below.
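For context on why a "bare-metal" API can make such a difference in CPU-bound scenarios: under DirectX 11, draw calls funnel through a single immediate context, while DirectX 12 lets an engine record command lists on multiple worker threads and submit them in one batch, which is what allows frame rates to scale with core count. The C++ sketch below illustrates that pattern in broad strokes; the function name and thread count are illustrative assumptions, not code from the game.

```cpp
// Minimal sketch (not Oxide's actual code): DX12-style multi-threaded
// command recording. Device/queue creation and the actual draw calls
// are elided.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    const int kWorkers = 4; // illustrative thread count
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kWorkers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkers);
    std::vector<std::thread> threads;

    for (int i = 0; i < kWorkers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&lists, i] {
            // Each worker records its slice of the frame's draw calls here.
            // Under DX11, all of this would serialize through one
            // immediate context on a single thread.
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // One batched submission from the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Spreading recording across threads like this is consistent with the core-scaling behavior PC Perspective measured under DX12.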
Source:
PC Perspective
118 Comments on AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"
Win 10 has been downloaded more than 50 million times already, including DX12, which makes DX11 a thing of the past.
Move on and forget: just leave DX11 where it already is, that's good enough, and start developing for DX12 right away.
In Heavy "Low" seams to gain performance where "High" you loose.
1080p "Low" looks to be consistent outcome where DX12 is faster then DX11 on Nvidia using a powerful CPU.
The more interesting or FUN part of it is probably the spat between OXIDE and Nvidia right now...
I'm also pretty sure that HD 6000 series card owners aren't keen to have their gaming marginalized any further than it already is. In this instance, yes...although this instance seems to be an alpha build of a game engine, from a developer very closely allied with AMD. No doubt, given the range of DX12/DX11.3 feature and hardware coding options, you'll see game engines and individual games vary wildly depending upon what is implemented and what is not.
One thing seems certain from the PC Per graphs Xzibit posted. AMD really need to pour some of those optimizations into CPUs.
but for those people who are "stuck", there is also XBMC, far superior to anything out there.
By this time, after all we've been through in this dirty market, I would totally believe that these great numbers on DX12 are either because DX11 got purposely butchered, or because they are genuinely so well-optimized for AMD that no one actually bothered to suit this thing to NVIDIA cards.
They wanna please their red friends. Those get to sell cards, and these get to sell their zero-cost-PR-driven game by having posts like these.
I'mma wait for UE4 to make the transition to DX12 (not this half-ported test thing).
Anyway, "Ashes of the Singularity" shows two things about drivers. First, Nvidia's DX11 are miles ahead of AMD's. Second, AMD's DX12 drivers start from a more concrete base. If AMD's DX11 drivers where started with the wrong foot and latter there was no programmers or money to fix it, at least with DX12 AMD starts from a better position. Let's hope they don't mess up down the road. If they think that with DX12 they have the upper hand, Nvidia does have the resources and the talent, to show them that they are wrong.
Two last things. I was saying in the past that Mantle was made to fix Bulldozer's pathetic performance in games compared to Intel CPUs, and not so much to improve AMD's GPU performance. Seeing the results of PCPerspective's CPU tests and how badly the FX processors score, I can say that I was completely wrong. FX is something that cannot be saved, at least based on this specific test. The last thing is the results of the Radeon R7 370. As we can see here, the 370 (265, 7850) does not gain much. There could be two reasons for this: GCN 1.0 and 2 GB of RAM. I would like to remind everyone here that Mantle did NOT perform well with cards that had less than 3 GB of RAM. It seems that DX12 has the same problem. We might have to consider 3 GB of RAM as the minimum for DX12 performance in the future.
If AMD is doing something wrong now, you have to blame Nvidia for it, because Nvidia started it or invented it.
Perhaps Nvidia lacks software optimization in DX12 altogether?!
Obviously, these results are not very promising if they do not reveal performance gains everywhere and for everyone.
For some of us who have been 3D graphics enthusiasts for a while, we can remember when ATI outed the "all new" Rage Pro Turbo...with supposedly "40% more performance" than the outgoing Rage Pro in February 1998. The 40% improvement was only driver optimization for Winbench 98. As this review shows, real world gains were nil. The Pro Turbo eventually added some performance for actual games, but by then it had been overtaken by a whole swathe of newer cards including those of ATI itself.
If you're going to make a statement of fact, it should actually be factual.
Most of the graphics vendors indulged in benchmark shenanigans at one time or another - Nvidia included - but ATI's deliberate optimization for a single benchmark actually set the tone.
I think it shows that AMD put all of their effort into Mantle/DX12/Vulkan over DX11 which really shouldn't surprise anyone.
I bet the CPU usage under DX12 with AMD is a lot higher than it is with Nvidia. Not that this is necessarily a bad thing, but it also isn't more efficient per se. The 5960X just has enough crunching power to not be the bottleneck here.
I hope AMD doesn't take its DX12 superiority for granted and stop developing and optimizing its DX12 drivers.
If it works out that AMD has the edge in most DX12 games, then it's time to go red, but how long will it be before it's clear which company has the best offerings? It may possibly be years before there is a good selection of DX12 games.