Tuesday, August 18th 2015
AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"
Stardock's "Ashes of the Singularity" may not be particularly pathbreaking as an RTS in the StarCraft era, but it has the distinction of being the first game on the market with a DirectX 12 renderer in addition to its default DirectX 11 one. This gives gamers their first peek at API-to-API comparisons and a chance to test the much-touted bare-metal optimizations of DirectX 12, and as it turns out, AMD GPUs do seem to benefit significantly.
In a GeForce GTX 980 vs. Radeon R9 390X comparison by PC Perspective, the game performs rather poorly on its default DirectX 11 renderer with the R9 390X; when switched to DirectX 12, the card not only takes a big leap in frame rates (in excess of 30%), but also outperforms the GTX 980. A skeptical reading of these results is that the R9 390X isn't well optimized for the D3D 11 renderer to begin with, and merely returns to its expected performance relative to the GTX 980 under the D3D 12 renderer.
Comparing the two GPUs at CPU-intensive resolutions (900p and 1080p), across various CPUs (including the i7-5960X, i7-6700K, the dual-core i3-4330, FX-8350, and FX-6300), reveals that the R9 390X sees only a narrow performance drop with fewer CPU cores, and slight performance gains as the core count increases. Find the full, insightful review at the source link below.
Source:
PC Perspective
118 Comments on AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"
Microsoft with their Xbox One (running AMD hardware) and DX12 being the big example - Mantle was *proof* that the existing hardware would benefit.
Oxide Developer on Nvidia's request to turn off certain settings:
“There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”
“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown Async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.
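For readers unfamiliar with the mechanics the quote describes, here is a minimal, hypothetical C++ sketch (not Oxide's actual code) of what a vendor ID check and a resource-binding-tier query look like in D3D12: the adapter's vendor ID comes from DXGI, the binding tier from CheckFeatureSupport, and the engine then decides whether to route work down an async compute path. The ShouldUseAsyncCompute helper and the policy it encodes are illustrative assumptions based on the quote.
```cpp
// Hypothetical sketch: read the adapter's vendor ID from DXGI and the
// resource binding tier from D3D12, then pick a rendering path. The policy
// below illustrates the quote; it is not Oxide's actual logic.
#include <d3d12.h>
#include <dxgi.h>

bool ShouldUseAsyncCompute(IDXGIAdapter1* adapter, ID3D12Device* device)
{
    // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel.
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // Resource binding tier is a capability the driver reports, not a
    // vendor-specific path: Tier 2 (Maxwell) needs slightly more CPU work
    // per draw than Tier 3 (GCN).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    const bool tier3 = options.ResourceBindingTier == D3D12_RESOURCE_BINDING_TIER_3;
    (void)tier3; // informational here; the binding tier alone doesn't change the path

    // Vendor-specific exception described in the quote: the driver reports
    // async compute as functional, but using it regresses badly, so skip it.
    const UINT kVendorIdNvidia = 0x10DE;
    return desc.VendorId != kVendorIdNvidia;
}
```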
NVIDIA is just ticking the box for Async compute without any real practical performance.
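As background on what "ticking the box" would mean in practice: in D3D12, async compute amounts to creating a second command queue of type COMPUTE and fencing it against the graphics queue. The sketch below is illustrative and not taken from the game; creating the queue always succeeds, and whether the GPU actually overlaps compute with graphics work is up to the hardware and driver.
```cpp
// Illustrative sketch of the D3D12 setup that "async compute" refers to:
// a dedicated COMPUTE queue alongside the graphics (DIRECT) queue, with a
// fence to synchronize the two on the GPU timeline.
#include <windows.h>
#include <d3d12.h>

HRESULT CreateAsyncComputeQueue(ID3D12Device* device,
                                ID3D12CommandQueue** computeQueue,
                                ID3D12Fence** fence)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // separate from the DIRECT queue

    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));
    if (FAILED(hr))
        return hr;

    // The fence lets the graphics queue wait on compute results (or vice
    // versa) without stalling the CPU.
    return device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(fence));
}
```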
In general, NVIDIA Gameworks restricts source code access to Intel and AMD.
From slide 23 developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.
This Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features into DX12.1 and called their GPUs "DX12 compatible," which isn't entirely true. The base features of DX12 compatibility are Async Compute and better CPU scaling.
Oxide's full reply from
www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995
AMD's reply on Oxide's issue
www.reddit.com/r/AdvancedMicroDevices/comments/3iwn74/kollock_oxide_games_made_a_post_discussing_dx12/cul9auq
Cue the claims that what I said was BS, but the reality of it is pretty damn plausible. So now Mantle, in its dead form, could be causing crippling performance. ^ Pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?
I will head this one off before it's said: I bet someone will say "well, it's a standard." It may be, but so is DX11 tessellation, and that didn't stop AMD from whining about it when HairWorks used it.
With GameWorks on the other hand there is CERTAINTY that NO ONE has access but Nvidia to the source code and that the game ABSOLUTELY favors specific Nvidia hardware (I wouldn't say all Nvidia hardware here, because Kepler owners could have a different opinion on that).
Can you see the difference?
From www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
Being fair to all the graphics vendors
Often we get asked about fairness, that is, usually in regards to treating Nvidia and AMD equally. Are we working closer with one vendor than another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.
To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.
We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).
THAT's "for over a year," hence your "wasn't an option till recently" assertion is wrong.
NVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.
It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which in turn influences Async usage in the PS4's multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.
The "more then a year" argument is not crap. Async compute is huge, not useless. It is useless when you try to fake it in the drivers, but not when it is implemented in the hardware. And no, it is not just for the consoles.
Also, your convenient stories about Mantle.exe and DX12.exe are not facts, only not-so-believable excuses.