Tuesday, August 18th 2015

AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

Stardock's "Ashes of the Singularity" may not be particularly pathbreaking as an RTS, in the Starcraft era, but has the distinction of being the first game to the market with a DirectX 12 renderer, in addition to its default DirectX 11 one. This gave gamers the first peak at API to API comparisons, to test the tall bare-metal optimizations of DirectX 12, and as it turns out, AMD GPUs do seem to benefit big.

In a GeForce GTX 980 vs. Radeon R9 390X comparison by PC Perspective, the game performs rather poorly on the R9 390X with its default DirectX 11 renderer; when switched to DirectX 12, the card not only takes a big leap (in excess of 30%) in frame rates, but also outperforms the GTX 980. A skeptical way of looking at these results would be that the R9 390X isn't well optimized for the D3D11 renderer to begin with, and merely returns to its expected performance versus the GTX 980 under the D3D12 renderer.
Comparing the two GPUs at CPU-bound resolutions (900p and 1080p), across various CPUs (including the i7-5960X, i7-6700K, the dual-core i3-4330, FX-8350, and FX-6300), reveals that the R9 390X sees only a narrow performance drop with fewer CPU cores, and slight performance gains as the core count increases. Find the full, insightful review at the source link below.
Source: PC Perspective

118 Comments on AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

#101
john_
xenocideAMD just released Mantle before DX12 as a bit of a PR stunt
I don't think they had the resources and money just for that kind of PR stunt. They didn't just throw out a new "Wonder Driver," they created an API. I don't know, maybe this is something simple that anyone can do? Microsoft was delaying a low-level API, so I think AMD came out with Mantle to guarantee there would be pressure on Microsoft to include DX12 with Windows 10. AMD was the only company losing because of the absence of a low-level API. I also find it funny that people believe AMD came out with a low-level API in no time while Microsoft took almost two more years, and I find it funny because everyone and their dog thinks AMD is completely incompetent at creating anything in software. Not to mention the difference between AMD and Microsoft: one company has no money, the other is swimming in it; one is a hardware company, the other a software company.
#102
Mussels
Freshwater Moderator
AMD wanted to prove that Mantle's technology worked, so that others would adopt it.

Microsoft, with their Xbox One (running AMD hardware) and DX12, being the big example - Mantle was *proof* that the existing hardware would benefit.
#103
xenocide
I wouldn't describe creating an API as simple, but they also worked with a bunch of very skilled game developers and, from the looks of it, grabbed stuff they were already planning to contribute to DX12.
#104
ValenOne
FordGT90ConceptYeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12_0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12_0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12_1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12_0.

So if the game supports feature level 12_1 and is using it on the GTX 980 but 12_0 on the 290X, it's not an apples-to-apples comparison. We'd have to know that both cards are running feature level 12_0.
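For reference, a minimal sketch (C++, standard D3D12 headers, error handling trimmed) of how an application can ask the driver for the highest feature level a card supports, which is how one would pin both cards to 12_0 for a fair comparison:

```cpp
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Create a device at the 11_0 baseline; the query below reports anything higher.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device)))
        return 1;

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS query = {};
    query.NumFeatureLevels = sizeof(levels) / sizeof(levels[0]);
    query.pFeatureLevelsRequested = levels;

    // The driver fills MaxSupportedFeatureLevel with the highest entry it supports.
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &query, sizeof(query));
    printf("Max supported feature level: 0x%x\n",
           (unsigned)query.MaxSupportedFeatureLevel);
    device->Release();
    return 0;
}
```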
www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/

Oxide Developer on Nvidia's request to turn off certain settings:

“There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”

“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown Async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports.”

NVIDIA is just ticking the box for Async compute without any real practical performance benefit.
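To illustrate the vendor-specific path the Oxide developer describes: the engine reads the adapter's PCI vendor ID and switches its rendering path on it. A hypothetical C++ sketch (this is not Oxide's code; AllowAsyncCompute is a made-up helper):

```cpp
#include <dxgi.h>
#pragma comment(lib, "dxgi.lib")

static const UINT VENDOR_NVIDIA = 0x10DE; // PCI vendor IDs
static const UINT VENDOR_AMD    = 0x1002;

// Returns false on hardware where async compute should be shut down,
// even if the driver reports the feature as functional.
bool AllowAsyncCompute()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return false;

    bool allow = true;
    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter))) {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);
        if (desc.VendorId == VENDOR_NVIDIA)
            allow = false; // vendor-specific fallback to the synchronous path
        adapter->Release();
    }
    factory->Release();
    return allow;
}
```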
#105
Mussels
Freshwater Moderator
rvalenciaNVIDIA is just ticking the box for Async compute without any real practical performance benefit.
and blaming all problems on the game dev for not making the game around their hardware.
#106
arbiter
Musselsand blaming all problems on the game dev for not making the game around their hardware.
That reminds me so much of a certain company that competes with Nvidia.
#107
rtwjunkie
PC Gaming Enthusiast
arbiterThat reminds me so much of a certain company that competes with Nvidia.
Lol! Too true. I have to say, both companies play that card equally.
#108
ValenOne
Musselsand blaming all problems on the game dev for not making the game around their hardware.
The difference is that the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, nVidia and MS equal access to the source code.

In general, NVIDIA GameWorks withholds source code access from Intel and AMD.

From slide 23 developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.


This Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features onto DX12.1 and called their GPUs "DX12 compatible," which isn't entirely true. The base feature of DX12 compatibility is Async Compute and better CPU scaling.


Oxide's full reply from
www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995


AMD's reply on Oxide's issue
www.reddit.com/r/AdvancedMicroDevices/comments/3iwn74/kollock_oxide_games_made_a_post_discussing_dx12/cul9auq
#109
arbiter
rvalenciaThe difference is that the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, nVidia and MS equal access to the source code.

In general, NVIDIA GameWorks withholds source code access from Intel and AMD.

From slide 23 developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.


This Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features onto DX12.1 and called their GPUs "DX12 compatible," which isn't entirely true. The base feature of DX12 compatibility is Async Compute and better CPU scaling.


Oxide's full reply from
www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995


AMD's reply on Oxide's issue
www.reddit.com/r/AdvancedMicroDevices/comments/3iwn74/kollock_oxide_games_made_a_post_discussing_dx12/cul9auq
Here is the question about that "equal" access. I doubt it included DX12, since they couldn't add DX12 back then, which means what you see here is AMD and Oxide doing what AMD whined Nvidia was doing: crippling performance on their cards. The problem is that even with source access, a DX12 exe for the game likely wasn't an option until recently, but since the game had Mantle in it from day 1, that let them set the game up for AMD cards specifically, and in this case cripple performance on Nvidia.

Cue the claims that what I said is BS, but in reality it's pretty damn plausible. So now Mantle, in its dead form, could be crippling performance.
rvalenciaThis Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12,
^ pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?

I will head this one off before it's said: I bet someone will say "well, it's a standard". Maybe so, but so is DX11 tessellation, and that didn't stop AMD from whining about it when HairWorks used it.
#110
john_
arbiterHere is the question about that "equal" access. I doubt it included DX12, since they couldn't add DX12 back then, which means what you see here is AMD and Oxide doing what AMD whined Nvidia was doing: crippling performance on their cards. The problem is that even with source access, a DX12 exe for the game likely wasn't an option until recently, but since the game had Mantle in it from day 1, that let them set the game up for AMD cards specifically, and in this case cripple performance on Nvidia.

Cue the claims that what I said is BS, but in reality it's pretty damn plausible. So now Mantle, in its dead form, could be crippling performance.


^ pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?

I will head this one off before it's said: I bet someone will say "well, it's a standard". Maybe so, but so is DX11 tessellation, and that didn't stop AMD from whining about it when HairWorks used it.
You can question the equal access and you can guess that the game favors AMD.

With GameWorks, on the other hand, there is CERTAINTY that NO ONE but Nvidia has access to the source code, and that the game ABSOLUTELY favors specific Nvidia hardware (I wouldn't say all Nvidia hardware here, because Kepler owners might have a different opinion on that).

Can you see the difference?
#111
ValenOne
arbiterHere is the question about that "equal" access. I doubt it included DX12, since they couldn't add DX12 back then, which means what you see here is AMD and Oxide doing what AMD whined Nvidia was doing: crippling performance on their cards. The problem is that even with source access, a DX12 exe for the game likely wasn't an option until recently, but since the game had Mantle in it from day 1, that let them set the game up for AMD cards specifically, and in this case cripple performance on Nvidia.

Cue the claims that what I said is BS, but in reality it's pretty damn plausible. So now Mantle, in its dead form, could be crippling performance.


^ pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?

I will head this one off before it's said: I bet someone will say "well, it's a standard". Maybe so, but so is DX11 tessellation, and that didn't stop AMD from whining about it when HairWorks used it.
Your "even with source access DX12 exe for game likely wasn't an option til recently" statement is false.

From www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Being fair to all the graphics vendors

Often we get asked about fairness, that is, usually in regards to treating Nvidia and AMD equally? Are we working closer with one vendor than another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).



THAT's "for over a year" hence your "wasn't an option til recently" assertion is wrong.
#112
64K
That game Ashes of the Singularity sure is getting a lot of free publicity. I bet Stardock is loving it. I hadn't even heard of it before this.
#113
FordGT90Concept
"I go fast!1!11!1!"
rvalenciaThe difference is that the Async feature on Maxwell v2 is faked, AND Oxide has given Intel, AMD, nVidia and MS equal access to the source code.

In general, NVIDIA GameWorks withholds source code access from Intel and AMD.

From slide 23 developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.


This Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features onto DX12.1 and called their GPUs "DX12 compatible," which isn't entirely true. The base feature of DX12 compatibility is Async Compute and better CPU scaling.


Oxide's full reply from
www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995


AMD's reply on Oxide's issue
www.reddit.com/r/AdvancedMicroDevices/comments/3iwn74/kollock_oxide_games_made_a_post_discussing_dx12/cul9auq
Makes sense. Maxwell doesn't get the FPS boost that AMD's GCN 1.0 and newer cards get in DX12. That, in turn, explains why the 290X goes from about equal to the GTX 970 in DX11 to about 30% faster in DX12. NVIDIA will probably get it fixed for Pascal, but NVIDIA users aren't going to see the major performance jump AMD users are seeing until then.


NVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.
#114
arbiter
rvalenciaYour "even with source access DX12 exe for game likely wasn't an option til recently" statement is false.

From www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
The sad part is that the story you posted didn't prove what I said was false; there is no date listed for when it was available. So what I said is still valid.
rvalenciaTo this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.
The question with that is: was that back when DX11 and (proprietary, locked) Mantle were the two options for the game? Wouldn't shock me if it was.
FordGT90ConceptNVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.
That is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
#115
ValenOne
arbiterThe sad part is that the story you posted didn't prove what I said was false; there is no date listed for when it was available. So what I said is still valid.


The question with that is: was that back when DX11 and (proprietary, locked) Mantle were the two options for the game? Wouldn't shock me if it was.


That is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
The sad part is that "for over a year" counts back from when the blog was posted, i.e. to at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.

It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which in turn will influence Async usage in PS4 multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.
#116
arbiter
rvalenciaThe sad part is that "for over a year" counts back from when the blog was posted, i.e. to at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.

It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which in turn will influence Async usage in PS4 multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.
As much as people love to point out that "for over a year" crap: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary API, which was closed source, so yeah. The thing with Async on console is that it could be useful, but on desktop it's not needed, since those console APUs are pretty low-end, weak AMD hardware that they've got to squeeze every possible thing out of. As I said, Async was in the Mantle version but likely wasn't in the DX version of the game until the DX12 exe was released, hence my point, if you don't ignore that fact, which wouldn't shock me if you do.
#117
FordGT90Concept
"I go fast!1!11!1!"
arbiterThat is, if games even use async; it's still up for debate how many will, outside the ones paid for by AMD.
Async compute is a cornerstone of Direct3D 12/Mantle/Vulkan. It isn't required (the commands will simply execute synchronously), but having it available means pretty big framerate gains, because less of the GPU sits idle.
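At the API level, "having it available" just means the engine creates a compute-type command queue next to the usual direct (graphics) queue; hardware with real async compute can overlap work from the two. A minimal C++/D3D12 sketch (fence synchronization and error handling omitted):

```cpp
#include <d3d12.h>
#pragma comment(lib, "d3d12.lib")

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;  // graphics + compute + copy work
    device->CreateCommandQueue(&desc, __uuidof(ID3D12CommandQueue),
                               (void**)graphicsQueue);

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    device->CreateCommandQueue(&desc, __uuidof(ID3D12CommandQueue),
                               (void**)computeQueue);

    // Work submitted to the compute queue may run concurrently with graphics
    // on hardware that supports async compute; otherwise the scheduler
    // effectively serializes the two, as described above. Cross-queue
    // ordering is handled with ID3D12Fence Signal/Wait pairs.
}
```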
#118
john_
arbiterAs much as people love to point out that "for over a year" crap: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary API, which was closed source, so yeah. The thing with Async on console is that it could be useful, but on desktop it's not needed, since those console APUs are pretty low-end, weak AMD hardware that they've got to squeeze every possible thing out of. As I said, Async was in the Mantle version but likely wasn't in the DX version of the game until the DX12 exe was released, hence my point, if you don't ignore that fact, which wouldn't shock me if you do.
All this time that Nvidia looked superior, you and others really seemed to enjoy trolling AMD fans, being in an advantageous position. Now you put your head in the sand and try to ignore reality.
The "for over a year" argument is not crap. Async compute is huge, not useless. It is useless when you try to fake it in the drivers, but not when it is implemented in hardware. And no, it is not just for the consoles.
Also, your convenient stories about Mantle.exe and DX12.exe are not facts, only not-so-believable excuses.