Tuesday, August 18th 2015

AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

Stardock's "Ashes of the Singularity" may not be particularly pathbreaking as an RTS in the Starcraft era, but it has the distinction of being the first game to market with a DirectX 12 renderer, in addition to its default DirectX 11 one. This gave gamers their first peek at API-to-API comparisons, testing the touted bare-metal optimizations of DirectX 12, and as it turns out, AMD GPUs do seem to benefit in a big way.

In a GeForce GTX 980 vs. Radeon R9 390X comparison by PC Perspective, the game performs rather poorly on its default DirectX 11 renderer with the R9 390X, which, when switched to DirectX 12, not only takes a big leap (in excess of 30%) in frame rates, but also outperforms the GTX 980. A skeptical way of looking at these results would be that the R9 390X isn't optimized for the D3D 11 renderer to begin with, and merely returns to its expected performance relative to the GTX 980 with the D3D 12 renderer.
Comparing the two GPUs at CPU-intensive resolutions (900p and 1080p), across various CPUs (including the i7-5960X, i7-6700K, dual-core i3-4330, FX-8350, and FX-6300), reveals that the R9 390X sees only a narrow performance drop with fewer CPU cores, and slight performance gains as the number of cores increases. Find the full, insightful review at the source link below.
Source: PC Perspective

118 Comments on AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

#76
Uplink10
arbiterHard to sell people a 2-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series since it's pretty much the same cards.
Even though it is an older model, it is still at the top of the game like you said, and at that price it is a steal.
Posted on Reply
#77
HumanSmoke
john_By the way. Did Nvidia also show numbers for professional cards, or were the numbers there not as pretty? AMD was gaining in the Pro market thanks to the Macs.
Neither Nvidia nor AMD list professional graphics shipments separately these days. Professional discrete, discrete custom (FirePro D series), and discrete mobile (MXM) shipments are used in both Mercury's and JPR's figures.

AMD's professional board figures aren't a straightforward extrapolation of market share as the consumer market largely is. Nvidia boards sell at vastly greater ASPs. To use the "MAC" (Mac Pro) example you cited:
The top Mac Pro comes standard with Dual FirePro D500's and can be upgraded to Dual FirePro D700's for $600.
The FirePro D500 is a custom part analogous to the HD 7870XT (so it sits between the Pitcairn-based FirePro W7000 and the Tahiti PRO-based W8000). The W7000 is priced at around $600 each, the W8000 at ~$1,000 each.
The FirePro D700 is a FirePro W9000 (also in a custom form factor, as per the other FirePro D boards). The W9000 retails for around $3,000 each... and Apple offers an upgrade to two of them, from boards priced at less than $1K, for $600.

Given that Apple has to factor in their own profit and amortization from warranty replacements, how much do you think AMD's unit price contract is for these custom SKUs?
AMD basically purchased market share (and the marketing cachet of being allied to the Apple Mac Pro). If AMD relies on this business model, it will build market share all the way to the poorhouse.
Posted on Reply
#78
john_
HumanSmokeNeither Nvidia nor AMD list professional graphics shipments separately these days. Professional discrete, discrete custom (FirePro D series), and discrete mobile (MXM) shipments are used in both Mercury's and JPR's figures.

AMD's professional board figures aren't a straightforward extrapolation of market share as the consumer market largely is. Nvidia boards sell at vastly greater ASPs. To use the "MAC" (Mac Pro) example you cited:
The top Mac Pro comes standard with Dual FirePro D500's and can be upgraded to Dual FirePro D700's for $600.
The FirePro D500 is a custom part analogous to the HD 7870XT (so it sits between the Pitcairn-based FirePro W7000 and the Tahiti PRO-based W8000). The W7000 is priced at around $600 each, the W8000 at ~$1,000 each.
The FirePro D700 is a FirePro W9000 (also in a custom form factor, as per the other FirePro D boards). The W9000 retails for around $3,000 each... and Apple offers an upgrade to two of them, from boards priced at less than $1K, for $600.

Given that Apple has to factor in their own profit and amortization from warranty replacements, how much do you think AMD's unit price contract is for these custom SKUs?
AMD basically purchased market share (and the marketing cachet of being allied to the Apple Mac Pro). If AMD relies on this business model, it will build market share all the way to the poorhouse.
Nvidia wins market share. Absolutely logical. Nothing to analyze.
AMD wins market share. "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD".

Posted on Reply
#79
ValenOne
arbiterHow many times has AMD failed and refused to admit fault, only to turn around and blame Nvidia for it?


Wouldn't be surprised if they didn't bother too much with the game, since it is an Alpha-stage game and any work done now might not even matter when it's finally released.


Considering the 390X is a 2-year-old GPU, yeah, it should be cheaper than a much newer one. Even with the price difference, DX12 games will trickle out here and there over the next 6-12 months.
From www.dsogaming.com/news/the-witcher-3-developer-its-up-to-nvidia-to-let-amd-users-enjoy-physx-hair-and-fur-effects/

With NVIDIA's Gameworks, CDPR lost control over their Witcher 3 PC source code.

Our dear friends over at PCGamesHardware had an interesting interview with CD Projekt RED’s lead engine developer, Balázs Török, in which Balázs claimed that AMD users will be able to enjoy the new fur and hair GPU-accelerated effects, provided Nvidia lets them.

When asked about the new hair and fur effects, Balázs said that – at this moment – these aforementioned effects run on AMD Radeon GPUs.

“At the moment it works, but whether it can run in the end is Nvidia's decision. What matters is the direction in which they continue to develop the technology, and whether and what barriers they install. But I think it will also run on Radeon graphics cards.”
Posted on Reply
#80
john_
arbiterAMD's credibility is at an all-time low; they have what looks like superior performance in one game that is only in Alpha testing, which isn't really gonna change that.
That one game will make some people be more skeptical about their next upgrade. It's not something serious, but it could become more serious in the future. Of course Nvidia doesn't have to do much. Just $persuade$ the game developers to not take advantage of DX12 just yet. Wait a little longer, until Pascal comes. They did it before anyway.
Hard to sell people a 2-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series since it's pretty much the same cards.
There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands on more expensive cards than Nvidia does (the GT 730 is one example). At least they do not sell cards with the same name and totally different performance and features. I mean, you talk about credibility. Well, if the press treated Nvidia as it treats AMD, Nvidia's credibility would be in no better position. The typical example is the GTX 970, but let's also add the GT 730 I mentioned.
So we have three GT 730s.
One is 96 Fermi cores, 128-bit DDR3. <<< This one is not even DX12 yet.
One is 384 Kepler cores, 64-bit GDDR5. <<< This one is the good one.
And the last one is 384 Kepler cores, 64-bit DDR3. <<< This one you throw out the window. 12.8 GB/s? (See the quick math below.) Even Minesweeper will have performance problems.
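For reference, peak memory bandwidth is just bus width times effective transfer rate. A quick back-of-the-envelope sketch in C++ (assuming DDR3-1600 and 5 GT/s GDDR5, the effective rates typically quoted for these cards) reproduces that 12.8 GB/s figure:

#include <cstdio>

// peak bandwidth (GB/s) = bus width in bytes * effective transfer rate (GT/s)
static double bandwidth_gbps(int bus_bits, double gtps) {
    return (bus_bits / 8.0) * gtps;
}

int main() {
    std::printf("GT 730  64-bit DDR3 : %4.1f GB/s\n", bandwidth_gbps(64, 1.6));  // 12.8
    std::printf("GT 730  64-bit GDDR5: %4.1f GB/s\n", bandwidth_gbps(64, 5.0));  // 40.0
    std::printf("GT 730 128-bit DDR3 : %4.1f GB/s\n", bandwidth_gbps(128, 1.6)); // 25.6
    return 0;
}

A roughly threefold spread in memory bandwidth between the slowest and fastest card sold under the same "GT 730" name, which is the credibility point being made.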
Posted on Reply
#81
Aquinus
Resident Wat-man
arbiterHard to sell people a 2-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series since it's pretty much the same cards.
I don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, as overclocking is a thing we like to do, but it's not like the 300 series are bad. A lot of the things people said were bad about AMD (like multi-monitor idle usage and frame latency) are a little overstated. So despite not being new technology, it's worth the price you pay for it.

As for DX12, I don't think we can judge everything by one game. It's too early to say much about DX12 other than it potentially can offer some significant improvements. AMD's drivers are known to have more overhead than nVidia's and DX12 might make that less of a problem than it is now.

Honestly, I don't think anyone should get angry or lose sleep over this.
Posted on Reply
#82
HumanSmoke
john_Nvidia wins market share. Absolutely logical. Nothing to analyze.
Really? And why would that be? ...and why bring up Nvidia? I certainly didn't. You were the one telling the world + dog how great AMD's pro graphics were doing. Market share with a plummeting bottom line is hardly a cause for cheerleading... or any real critique at all really, given that this thread is about DX12 - which I'm pretty sure pro graphics and math co-processors aren't leveraging.
john_AMD wins market share. "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD".
It was you that brought up the "MAC".
All I did was to point out the pricing of AMD parts used in the "MAC".
If you want to have a good cry about it, be my guest - but between whines, maybe you can explain how AMD's FirePro market share is growing (in a fashion) - largely, as you've already said, because of Apple, yet AMD still bleeds red ink.
john_
Nice AMD-supplied PPS. Why not use the latest figures, which show that AMD's pro graphics have slipped back to 20%?
Meanwhile, Nvidia remained the dominant force in professional GPUs, responsible for 79.4% of units, while AMD picked up the remaining 20.6%, including a fair number of units sold to Apple to outfit Mac Pros.
So, a large chunk of AMD's pro graphics market is predicated upon a single customer getting boards at knock-down pricing - around 1/10th of retail MSRP. As I said, AMD (or anyone for that matter) can grow market share if they offer deals like that - and let's face it, AMD has been in fire-sale mode for FirePro for some time. I'm also pretty sure if AMD offered Radeons at $5 each, they'd quickly gain massive market share - but it doesn't mean **** all if it isn't sustainable. Not every entity can look forward to an EU bailout.
Posted on Reply
#83
arbiter
rvalenciaFrom www.dsogaming.com/news/the-witcher-3-developer-its-up-to-nvidia-to-let-amd-users-enjoy-physx-hair-and-fur-effects/

With NVIDIA's Gameworks, CDPR lost control over their Witcher 3 PC source code.

Our dear friends over at PCGamesHardware had an interesting interview with CD Projekt RED’s lead engine developer, Balázs Török, in which Balázs claimed that AMD users will be able to enjoy the new fur and hair GPU-accelerated effects, provided Nvidia lets them.

When asked about the new hair and fur effects, Balázs said that – at this moment – these aforementioned effects run on AMD Radeon GPUs.

“At the moment it works, but whether it can run in the end is Nvidia's decision. What matters is the direction in which they continue to develop the technology, and whether and what barriers they install. But I think it will also run on Radeon graphics cards.”
Well, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at doing it. Nvidia, yeah, could have used it on purpose knowing AMD cards are slow at that, just like AMD used OpenCL for TressFX knowing their cards were a lot faster in OpenCL than Nvidia's. Both sides did it at one point. Should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.
AquinusI don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, as overclocking is a thing we like to do, but it's not like the 300 series are bad.
They ain't bad, no, but AMD has tried to claim the 390(X) wasn't a rebrand when it is. They even tried to give reasons that it wasn't, but it's kinda hard to believe that when a GPU-Z screenshot someone posted shows a GPU release date of 2013.
wccftech.com/amd-radeon-r9-390-390x-not-rebadges-power-optimization/
john_There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands on more expensive cards than Nvidia does (the GT 730 is one example).
Really, how many people care about those cards being rebrands or not? No one cares if the R5 240 or whatever is a rebrand of the 230. They are low-end cards with very low power draw. No one who cares about performance buys them.
Posted on Reply
#85
Fluffmeister
Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day, I'm literally getting upset AMD make no money!
Posted on Reply
#86
Xzibit
FluffmeisterErm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day, I'm literally getting upset AMD make no money!
Here are the comparable numbers, if graphs and numbers are too difficult for you (different systems: PC Perspective & Ars)

             290X   390X   980   980 Ti
1080p         48     53     50     50
1080p Heavy   40     46     44     43

Posted on Reply
#87
arbiter
XzibitHere are the comparable numbers, if graphs and numbers are too difficult for you (different systems: PC Perspective & Ars)

             290X   390X   980   980 Ti
1080p         48     53     50     50
1080p Heavy   40     46     44     43

FluffmeisterErm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day, I'm literally getting upset AMD make no money!
Wow, so AMD won something in a game that is in Alpha stages and won't be out for like a year; oh yeah, it's also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game that is an Alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that's likely not gonna stay where it is, like AMD did with the Fury X.

Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess though, if people haven't learned by now, they never will.
Posted on Reply
#88
ValenOne
arbiterWell, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at doing it. Nvidia, yeah, could have used it on purpose knowing AMD cards are slow at that, just like AMD used OpenCL for TressFX knowing their cards were a lot faster in OpenCL than Nvidia's. Both sides did it at one point. Should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.
Wrong, TressFX uses Microsoft's DirectCompute.

From www.techpowerup.com/180675/amd-tressfx-technology-detailed.html

"Technically, TressFX is a toolset co-developed by AMD and Crystal Dynamics, which taps into DirectCompute"

Should at least be happy that TressFX uses a standard, which everyone including NVIDIA was crying for.

I'm aware of the technical aspect. The problem was excessive tessellation that doesn't substantially improve the graphics' appearance. The workaround was to re-use the same AMD driver-side tessellation override feature that was used for NVIDIA-patched Crysis 2.


The difference with Ashes of the Singularity vs. Witcher 3 is the source code availability for AMD (Red Team), NVIDIA (Green Team) and Intel (Blue Team). All interested IHVs can contribute to the same source code without being blocked by an exclusivity contract.

The Witcher 3 XBO/PS4 builds use TressFX instead of NVIDIA's HairWorks.

An unreleased Witcher 3 PC build has TressFX enabled, but it was blocked by the NVIDIA exclusivity contract. A Witcher 3 PC build with TressFX would have benefited lesser NVIDIA GPU cards.


PS: I own an MSI 980 Ti OC along with my MSI 290X OC.
arbiterWow, so AMD won something in a game that is in Alpha stages and won't be out for like a year; oh yeah, it's also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game that is an Alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that's likely not gonna stay where it is, like AMD did with the Fury X.

Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess though, if people haven't learned by now, they never will.
The pattern is similar to 3DMark's API overhead test results.
XzibitArsTechnica UK - DirectX 12 tested: An early win for AMD, and disappointment for Nvidia
First DX12 gaming benchmark shows R9 290X going toe-to-toe with a GTX 980 Ti.
It could also indicate AMD's DX11 driver is sub-par relative to its TFLOPS potential.
Posted on Reply
#89
OneMoar
There is Always Moar
in other news: cpu-bound game benefits from reduction in cpu load... more at 11
please don't feed into the hypetrain
Posted on Reply
#90
Tatty_Two
Gone Fishing
On that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work... dunno, maybe I am wrong there.
Posted on Reply
#91
Mussels
Freshwater Moderator
Tatty_OneOn that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work... dunno, maybe I am wrong there.
with these early DX12 titles it's not likely they'll even use those features, so the graphical quality/load would be the same.
Posted on Reply
#92
FordGT90Concept
"I go fast!1!11!1!"
Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12_0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12_0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12_1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12_0.

So if the game supports feature level 12_1 and is using it on the GTX 980 but 12_0 on the 290X, it's not an apples-to-apples comparison. We'd have to know that both cards are running feature level 12_0.
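Rather than guessing, the runtime can be asked directly. A minimal sketch in C++ (assuming the Windows 10 SDK; error handling omitted) that queries the highest Direct3D feature level a card reports:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Creating any D3D12 device requires at least feature level 11_0 hardware.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels        = _countof(requested);
    info.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info));

    // 0xc000 = 12_0 (e.g. Hawaii), 0xc100 = 12_1 (GM2xx), per the lists above.
    std::printf("Max feature level: 0x%x\n", info.MaxSupportedFeatureLevel);
    return 0;
}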
Posted on Reply
#93
Xzibit
FordGT90ConceptSo if the game supports feature level 12_1 and is using it on the GTX 980 but 12_0 on the 290X, it's not an apples-to-apples comparison. We'd have to know that both cards are running feature level 12_0.
DX12 is more application-dependent than DX11 was. DX12 moves some management that the driver was doing to the application (see the sketch below).

AMD GCN cards can handle more resources as well, so it's not always going to be an apples-to-apples comparison.
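To make that concrete, here is a minimal sketch in C++ (assuming the Windows 10 SDK; error handling omitted) of the bookkeeping DX12 hands to the application: the app creates its own command queue and fences the GPU itself, work the DX11 driver did implicitly.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC qdesc = {};  // zeroed desc = a DIRECT queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... record command lists and call queue->ExecuteCommandLists() here ...

    queue->Signal(fence.Get(), 1);            // the app marks completion
    HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, evt);      // the app asks to be woken
    WaitForSingleObject(evt, INFINITE);       // the app waits for the GPU
    CloseHandle(evt);
    return 0;
}

Under DX11, all of this (plus hazard tracking, barriers and residency) lived inside the driver, which is where much of the driver overhead discussed above lives.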
Posted on Reply
#94
ValenOne
FordGT90ConceptYeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12_0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12_0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12_1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12_0.

So if the game supports feature level 12_1 and is using it on the GTX 980 but 12_0 on the 290X, it's not an apples-to-apples comparison. We'd have to know that both cards are running feature level 12_0.
Resource Binding: Maxwell Tier 2, GCN Tier 3.

Feature level: Maxwell 12_1, GCN 12_0.



CR (Conservative Rasterization) feature

Read: community.amd.com/message/1308478#1308478

Question:

I need my application to produce at least one pixel of output for every draw (even if it is an empty triangle = 3 identical points). NVIDIA has an extension (GL_NV_conservative_raster) to enable such a mode (on Maxwell+ cards). Is there a similar extension on AMD cards?

Answer (from AMD):

Some of our hardware can support functionality similar to that in the NVIDIA extension you mention, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time.


For ROV feature

AMD already supports Intel's "GL_INTEL_fragment_shader_ordering" in OpenGL.

From twitter.com/g_truc/status/581224843556843521

It seems that shader invocation ordering is proportionally a lot more expensive on GM204 than S.I. or HSW.
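All of these capabilities are queryable from the same API. A minimal sketch in C++ (assuming the Windows 10 SDK; error handling omitted) that reads the resource binding, CR and ROV capabilities discussed above:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // One struct covers the capabilities in question: resource binding tier,
    // conservative rasterization (CR) tier, and rasterizer-ordered views (ROV).
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Resource binding tier : %d\n", opts.ResourceBindingTier);
    std::printf("Conservative raster   : %d\n", opts.ConservativeRasterizationTier);
    std::printf("ROVs supported        : %d\n", opts.ROVsSupported);
    return 0;
}

Expected results per the above: Tier 3 binding but no CR on the 2015 GCN parts, Tier 2 binding with CR and ROV on GM2xx Maxwell.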
Posted on Reply
#95
rtwjunkie
PC Gaming Enthusiast
EboThe way I see it is: why optimize for DX11 anymore?

Win 10 has been downloaded more than 50 million times already, including DX12, which makes DX11 a thing of the past.
Move on and forget; just keep DX11 where it already is, that's good enough, and start developing for DX12 right away.
You ARE aware, I assume, that 50 million is a drop in the bucket? And that's downloads, not installs. I personally know several people staying on 7 and 8.1. Even here, an enthusiast community, I've seen probably 10% of those that upgraded to W10 go back. So no, DX11 isn't going anywhere.
Posted on Reply
#96
FordGT90Concept
"I go fast!1!11!1!"
I downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD

The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental. I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.
Posted on Reply
#97
ValenOne
FordGT90ConceptI downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD

The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental. I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.
There are other APIs besides DX11, e.g. the PS4's lower-level APIs.

AMD's Mantle and the PS4's lower-level APIs have laid the groundwork for DX12.
Posted on Reply
#98
Uplink10
rvalenciaAMD's Mantle and the PS4's lower-level APIs have laid the groundwork for DX12.
And Microsoft just comes in and starts building off it. The same way Sony used BSD to make the OS for the PS4; damn BSD licensing.
Posted on Reply
#99
xenocide
Uplink10And Microsoft just comes in and starts building off it. The same way Sony used BSD to make the OS for the PS4; damn BSD licensing.
Microsoft has been working on DX12 since before DX11.1 actually went live. They started work on it with Intel/AMD/Nvidia before Mantle was even announced; AMD just released Mantle before DX12 as a bit of a PR stunt. As far as I know, OpenGL did have extensions that supported some of the features new to DX12. It's more accurate to say Khronos built on Mantle, but Microsoft built DX12 alongside Mantle.

Better examples of low-level APIs would have been Glide and Metal, but most people block those out of their memory because it was a frustrating time in the PC world.
Posted on Reply
#100
ValenOne
Uplink10And Microsoft just comes in and starts building off it. The same way Sony used BSD to make the OS for the PS4; damn BSD licensing.
I was referring to game engine development work.

Well-known 3D engines already have DX12 versions, e.g. Epic's Unreal Engine 4.9, Crytek's CryEngine, Unity, Square Enix's Luminous Engine, etc.
Posted on Reply