
AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"

AMD really needs something to boost sales; if these guys are right and AMD does prove to be superior in DX12, then maybe that will do it.

http://www.dsogaming.com/news/amdnv...ering-4-out-of-5-pc-gamers-own-an-nvidia-gpu/


4 out of 5 gamers buying Nvidia GPUs. It doesn't look good right now @Sony Xperia S
AMD's credibility is at an all-time low. What looks like superior performance in one game that is still in alpha testing isn't really going to change that.

They should; the R9 290X and 290 have such good performance per dollar. The 290X is only 290 USD.
Hard to sell people a two-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series, since they're pretty much the same cards.
 
Hard to sell people a two-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series, since they're pretty much the same cards.
Even though it is an older model, it is still at the top of the game like you said, and at that price it is a steal.
 
By the way, did Nvidia also show numbers for professional cards, or were the numbers there not as pretty? AMD was gaining in the pro market thanks to the Macs.
Neither Nvidia nor AMD list professional graphics shipments separately these days. Professional, discrete, discrete custom (FirePro D series), and discrete mobile (MXM) shipments are all lumped into both Mercury's and JPR's figures.

AMD's professional board figures aren't a straightforward extrapolation of market share the way the consumer market largely is. Nvidia boards sell at vastly greater ASPs. To use the "Mac" (Mac Pro) example you cited:
The top Mac Pro comes standard with dual FirePro D500s and can be upgraded to dual FirePro D700s for $600.
The FirePro D500 is a custom part analogous to the HD 7870 XT (so it sits between the Pitcairn-based FirePro W7000 and the Tahiti PRO-based W8000). The W7000 is priced at around $600 each, the W8000 at ~$1,000 each.
The FirePro D700 is a FirePro W9000 (also in a custom form factor, as per the other FirePro D boards). The W9000 retails for around $3,000 each... and Apple offers an upgrade to two of them - from boards priced at less than $1K - for $600.

Given that Apple has to factor in its own profit and amortization for warranty replacements, how much do you think AMD's unit price is on the contract for these custom SKUs?
AMD basically purchased market share (and the marketing cachet of being allied with the Apple Mac Pro). If AMD relies upon this business model, it will build market share all the way to the poorhouse.
 
Neither Nvidia nor AMD list professional graphics shipments separately these days. Professional, discrete, discrete custom (FirePro D series), and discrete mobile (MXM) shipments are all lumped into both Mercury's and JPR's figures.

AMD's professional board figures aren't a straightforward extrapolation of market share the way the consumer market largely is. Nvidia boards sell at vastly greater ASPs. To use the "Mac" (Mac Pro) example you cited:
The top Mac Pro comes standard with dual FirePro D500s and can be upgraded to dual FirePro D700s for $600.
The FirePro D500 is a custom part analogous to the HD 7870 XT (so it sits between the Pitcairn-based FirePro W7000 and the Tahiti PRO-based W8000). The W7000 is priced at around $600 each, the W8000 at ~$1,000 each.
The FirePro D700 is a FirePro W9000 (also in a custom form factor, as per the other FirePro D boards). The W9000 retails for around $3,000 each... and Apple offers an upgrade to two of them - from boards priced at less than $1K - for $600.

Given that Apple has to factor in its own profit and amortization for warranty replacements, how much do you think AMD's unit price is on the contract for these custom SKUs?
AMD basically purchased market share (and the marketing cachet of being allied with the Apple Mac Pro). If AMD relies upon this business model, it will build market share all the way to the poorhouse.

Nvidia wins market share: absolutely logical, nothing to analyze.
AMD wins market share: "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD".

[Attached image: AMD-Slash-2.png]
 
How many times has AMD failed, refused to admit fault, and then turned around and blamed Nvidia for it?


Wouldn't be surprised if they didn't bother too much with the game, since it's still in alpha and any work done now might not even matter when it's finally released.


Considering the 390X is a two-year-old GPU, yeah, it should be cheaper than a much newer one. Even with the price difference, DX12 games will only trickle out here and there over the next 6-12 months.

From http://www.dsogaming.com/news/the-w...t-amd-users-enjoy-physx-hair-and-fur-effects/

With NVIDIA's Gameworks, CDPR lost control over their Witcher 3 PC source code.

Our dear friends over at PCGamesHardware had an interesting interview with CD Projekt RED's lead engine developer, Balázs Török, in which Balázs claimed that AMD users will be able to enjoy the new fur and hair GPU-accelerated effects, provided Nvidia lets them.

When asked about the new hair and fur effects, Balázs said that – at this moment – these aforementioned effects run on AMD Radeon GPUs.

"At the moment it works, but whether it can run in the end is Nvidia's decision. What matters is the direction in which they continue to develop the technology, and whether and what barriers they install. But I think it will also run on Radeon graphics cards."
 
AMD's credibility is at an all-time low. What looks like superior performance in one game that is still in alpha testing isn't really going to change that.
That one game will make some people more skeptical about their next upgrade. It's not something serious yet, but it could become more serious in the future. Of course, Nvidia doesn't have to do much. Just $persuade$ the game developers not to take advantage of DX12 just yet. Wait a little longer, until Pascal comes. They did it before, anyway.

Hard to sell people a two-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series, since they're pretty much the same cards.
There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands on more expensive cards than Nvidia is doing (the GT 730 is one example). At least they do not sell cards with the same name and totally different performance and features. I mean, you talk about credibility. Well, if the press treated Nvidia the way it treats AMD, Nvidia's credibility would be in no better position. The typical example is the GTX 970, but let's also add here the GT 730 I mentioned.
So we have three GT 730s:
One is 96 Fermi cores, 128-bit DDR3. <<< This one isn't even DX12-capable.
One is 384 Kepler cores, 64-bit GDDR5. <<< This one is the good one.
And the last one is 384 Kepler cores, 64-bit DDR3. <<< This one you throw out the window. 12.8 GB/s (see the quick math below)? Even Minesweeper will have performance problems.
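(Quick sanity check on that last figure, as an aside that isn't from the original post: a 64-bit bus moves 8 bytes per transfer, so at an effective DDR3 rate of 1600 MT/s you get 8 × 1.6 G = 12.8 GB/s, versus roughly 40 GB/s for the 5 GT/s GDDR5 variant.)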
 
Hard to sell people a two-year-old GPU as a new product. Most people that own a 200 series already have no reason to get a 300 series, since they're pretty much the same cards.
I don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, since overclocking is a thing we like to do, but it's not like the 300 series is bad. A lot of the things people said were bad about AMD (like multi-monitor idle usage and frame latency) are a little overstated. So despite not being new technology, it's worth the price you pay for it.

As for DX12, I don't think we can judge everything by one game. It's too early to say much about DX12 other than it potentially can offer some significant improvements. AMD's drivers are known to have more overhead than nVidia's and DX12 might make that less of a problem than it is now.

Honestly, I don't think anyone should get angry or lose sleep over this.
 
Nvidia wins market share: absolutely logical, nothing to analyze.
Really? And why would that be? ...and why bring up Nvidia? I certainly didn't. You were the one telling the world + dog how great AMD's pro graphics were doing. Market share with a plummeting bottom line is hardly a cause for cheerleading... or for any real critique at all, really, given that this thread is about DX12 - which I'm pretty sure pro graphics and math co-processors aren't leveraging.
AMD wins market share: "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD".
It was you that brought up the "Mac".
All I did was point out the pricing of the AMD parts used in the "Mac".
If you want to have a good cry about it, be my guest - but between whines, maybe you can explain how AMD's FirePro market share is growing (in a fashion) - largely, as you've already said, because of Apple - yet AMD still bleeds red ink.

Nice AMD-supplied PPS. Why not use the latest figures, which show that AMD's pro graphics have slipped back to 20%?
Meanwhile, Nvidia remained the dominant force in professional GPUs, responsible for 79.4% of units, while AMD picked up the remaining 20.6%, including a fair number of units sold to Apple to outfit Mac Pros.
So, a large chunk of AMD's pro graphics market is predicated upon a single customer getting boards at knock-down pricing - around 1/10th of retail MSRP. As I said, AMD (or anyone, for that matter) can grow market share if they offer deals like that - and let's face it, AMD has been in fire-sale mode for FirePro for some time. I'm also pretty sure that if AMD offered Radeons at $5 each, they'd quickly gain massive market share - but it doesn't mean **** all if it isn't sustainable. Not every entity can look forward to an EU bailout.
 
From http://www.dsogaming.com/news/the-w...t-amd-users-enjoy-physx-hair-and-fur-effects/

With NVIDIA's Gameworks, CDPR lost control over their Witcher 3 PC source code.

Our dear friends over at PCGamesHardware had an interesting interview with CD Projekt RED's lead engine developer, Balázs Török, in which Balázs claimed that AMD users will be able to enjoy the new fur and hair GPU-accelerated effects, provided Nvidia lets them.

When asked about the new hair and fur effects, Balázs said that – at this moment – these aforementioned effects run on AMD Radeon GPUs.

"At the moment it works, but whether it can run in the end is Nvidia's decision. What matters is the direction in which they continue to develop the technology, and whether and what barriers they install. But I think it will also run on Radeon graphics cards."
Well, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at doing it. Nvidia, yeah, could have used it on purpose knowing AMD cards are slow at it, just like AMD used OpenCL for TressFX knowing their cards were a lot faster in OpenCL than Nvidia's. Both sides did it at one point. Should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.

I don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, since overclocking is a thing we like to do, but it's not like the 300 series is bad.

They ain't bad, no, but AMD tried to claim the 390(X) wasn't a rebrand when it is. They even tried to give reasons why it wasn't, but that's kind of hard to believe when the GPU-Z screenshot someone posted of it shows a GPU release date of 2013.
http://wccftech.com/amd-radeon-r9-390-390x-not-rebadges-power-optimization/

There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands on more expensive cards than Nvidia is doing (the GT 730 is one example).

Really, how many people care about those cards being rebrands or not? No one cares if the R5 240 or whatever is a rebrand of the 230. They are low-end cards with very low power draw. No one who cares about performance buys them.
 
Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day. I'm literally getting upset that AMD makes no money!
 
Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day. I'm literally getting upset that AMD makes no money!

Here are the comparable numbers, if the graphs are too difficult for you (different systems: PCPerspective & Arc).

1080p: 290X 48, 390X 53, 980 50, 980 Ti 50

1080p Heavy: 290X 40, 390X 46, 980 44, 980 Ti 43

[Attached images: percentile.001-1.png, percentile.002.png]
 
Here are the comparable numbers, if the graphs are too difficult for you (different systems: PCPerspective & Arc).

1080p: 290X 48, 390X 53, 980 50, 980 Ti 50

1080p Heavy: 290X 40, 390X 46, 980 44, 980 Ti 43

[Attached images: percentile.001-1.png, percentile.002.png]
Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day. I'm literally getting upset that AMD makes no money!
Wow, so AMD won something in a game that is in alpha, won't be out for like a year, and, oh yeah, is also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game in an alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that likely won't stay where it is, like AMD did with the Fury X.

Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess, though, if people haven't learned by now they never will.
 
Well, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at doing it. Nvidia, yeah, could have used it on purpose knowing AMD cards are slow at it, just like AMD used OpenCL for TressFX knowing their cards were a lot faster in OpenCL than Nvidia's. Both sides did it at one point. Should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.
Wrong, TressFX uses Microsoft's DirectCompute.

From http://www.techpowerup.com/180675/amd-tressfx-technology-detailed.html

"Technically, TressFX is a toolset co-developed by AMD and Crystal Dynamics, which taps into DirectCompute"

Should at least be happy that TressFX uses a standard, which everyone including NVIDIA was crying for.

I'm aware of the technical aspect. The problem was excessive tessellation that doesn't substantially improve the graphics' appearance. The workaround was to re-use the same AMD driver-side tessellation override feature that was used for NVIDIA-patched Crysis 2.


The difference between Ashes of the Singularity and The Witcher 3 is the source code availability for AMD (Red Team), NVIDIA (Green Team) and Intel (Blue Team). All interested IHVs can contribute to the same source code without being blocked by an exclusivity contract.

The Witcher 3 XBO/PS4 builds use TressFX instead of NVIDIA's HairWorks.

An unreleased Witcher 3 PC build has TressFX enabled, but it was blocked by the NVIDIA exclusivity contract. A Witcher 3 PC build with TressFX would have benefited lesser NVIDIA GPUs.


PS: I own an MSI 980 Ti OC alongside my MSI 290X OC.

Wow, so AMD won something in a game that is in alpha, won't be out for like a year, and, oh yeah, is also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game in an alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that likely won't stay where it is, like AMD did with the Fury X.

Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess, though, if people haven't learned by now they never will.
The pattern is similar to 3DMark's API overhead test results.




It could also indicate that AMD's DX11 driver is sub-par relative to its TFLOPS potential.
 
in other news, cpu-bound game benefits from reduction in cpu load .... more at 11
please don't feed into the hype train
 
On that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work.... dunno, maybe I am wrong there.
 
On that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work.... dunno, maybe I am wrong there.

With these early DX12 titles it's not likely they'll even use those features, so the graphical quality/load would be the same.
 
Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

https://en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12.0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12.0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12.1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12.0.

So if the game supports feature level 12.1 and it is using it on GTX 980 but using 12.0 on 290X, it's not an apples to apples comparison. We'd have to know that both cards are running feature level 12.0.
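For anyone curious, this is roughly how an engine finds that out at runtime. A minimal sketch of the standard D3D12 feature-level query (my own illustration, not anything from Oxide's code):

Code:
// Sketch: ask D3D12 which feature level the GPU/driver actually exposes.
// Whatever comes back is what gates 12_1-only paths on a 980 vs. a 290X.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device at the minimum level we intend to support.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12-capable adapter/driver found.\n");
        return 1;
    }

    // Then ask the driver for the highest level it can actually run.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
    {
        // 0xc100 = 12_1, 0xc000 = 12_0, 0xb100 = 11_1, 0xb000 = 11_0
        std::printf("Max supported feature level: 0x%x\n",
                    (unsigned)levels.MaxSupportedFeatureLevel);
    }
    return 0;
}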
 
So if the game supports feature level 12.1 and it is using it on GTX 980 but using 12.0 on 290X, it's not an apples to apples comparison. We'd have to know that both cards are running feature level 12.0.

DX12 is more application-dependent than DX11 was. DX12 moves some of the management the driver used to do into the application.

AMD's GCN cards can handle more resources as well, so it's not always going to be an apples-to-apples comparison.
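As a rough illustration of "management moving into the application" (a sketch of the standard D3D12 pattern, not anything from Ashes): under DX12 the engine itself has to record the resource state transitions the DX11 driver used to track behind the scenes.

Code:
#include <d3d12.h>

// Sketch: the kind of explicit state transition a D3D12 renderer records
// before sampling something it just rendered to. Under D3D11 the driver
// tracked this hazard for you; under D3D12 it's the application's job.
void transitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* sceneColor)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags                  = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = sceneColor;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}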
 
Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements. Do we even know what feature level they're using?

https://en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12.0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12.0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12.1 compliant (which the GTX 980 Ti is). Even Skylake's GPU is 12.0.

So if the game supports feature level 12.1 and it is using it on GTX 980 but using 12.0 on 290X, it's not an apples to apples comparison. We'd have to know that both cards are running feature level 12.0.
Resource Binding: Maxwell Tier 2, GCN Tier 3.

Feature level: Maxwell 12_1, GCN 12_0.
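To put those tiers in code terms, this is roughly the capability query an application would run (a sketch of the standard D3D12 check, not from any particular engine):

Code:
// Sketch: assumes 'device' is an ID3D12Device* created as usual. Reads back
// the resource binding tier plus the 12_1-level options mentioned above
// (conservative rasterization, ROVs) instead of hard-coding vendor paths.
#include <d3d12.h>
#include <cstdio>

void reportCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        std::printf("Resource binding tier:    %d\n", (int)opts.ResourceBindingTier);
        std::printf("Conservative raster tier: %d\n",
                    (int)opts.ConservativeRasterizationTier);
        std::printf("ROVs supported:           %s\n",
                    opts.ROVsSupported ? "yes" : "no");
    }
}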



CR (Conservative Rasterization) feature

Read https://community.amd.com/message/1308478#1308478

Question:

I need for my application that every draw produces at least one pixel of output (even if this is an empty triangle = 3 identical points). NVIDIA has an extension (GL_NV_conservative_raster) to enable such a mode (on Maxwell+ cards). Is there a similar extension on AMD cards?

Answer (from AMD):

Some of our hardware can support functionality similar to that in the NVIDIA extension you mention, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time.


For ROV feature

AMD already supports Intel's "GL_INTEL_fragment_shader_ordering" extension in OpenGL.

From https://twitter.com/g_truc/status/581224843556843521

It seems that shader invocation ordering is proportionally a lot more expensive on GM204 than S.I. or HSW.
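For reference, this is roughly how an application would probe for those vendor extensions at runtime before relying on them - a sketch assuming a GL 3.0+ context and a loader like GLEW or glad is already initialized:

Code:
#include <GL/glew.h>   // any loader that exposes glGetStringi works
#include <cstring>

// Returns true if the running driver advertises the named GL extension.
bool hasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const GLubyte* ext = glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && std::strcmp(reinterpret_cast<const char*>(ext), name) == 0)
            return true;
    }
    return false;
}

// e.g. gate the conservative-raster or shader-ordering code paths:
// bool nvCR = hasExtension("GL_NV_conservative_raster");
// bool fso  = hasExtension("GL_INTEL_fragment_shader_ordering");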
 
The way I see it, why optimize for DX11 anymore?

Win 10 has been downloaded more than 50 million times already, and it includes DX12, which makes DX11 a thing of the past.
Move on and forget it: just keep DX11 where it already is, that's good enough, and start developing for DX12 right away.

You ARE aware, I assume, that 50 million is a drop in the bucket? And that's downloads, not installs. I personally know several people staying on 7 and 8.1. Even here, an enthusiast community, I've seen probably 10% of those that upgraded to W10 go back. So no, DX11 isn't going anywhere.
 
I downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD

The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental. I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.
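To be precise, the D3D12 runtime itself doesn't fall back for you; the engine has to try the newer API and drop down when device creation fails. A rough sketch of that pattern (the create*Renderer helpers are hypothetical stand-ins for the engine's own code):

Code:
#include <d3d12.h>
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical engine hooks - stand-ins for whatever the renderer does next.
bool createD3D12Renderer(ID3D12Device* device);
bool createD3D11Renderer(ID3D11Device* device, D3D_FEATURE_LEVEL level);

bool initRenderer()
{
    // Try D3D12 first (needs Windows 10 plus a WDDM 2.0 driver).
    ComPtr<ID3D12Device> dev12;
    if (SUCCEEDED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                    IID_PPV_ARGS(&dev12))))
        return createD3D12Renderer(dev12.Get());

    // Otherwise fall back to D3D11 and let the runtime pick the best
    // feature level the hardware supports (11_0, 10_1, 10_0, ...).
    ComPtr<ID3D11Device> dev11;
    D3D_FEATURE_LEVEL level = {};
    if (SUCCEEDED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                    0, nullptr, 0, D3D11_SDK_VERSION,
                                    &dev11, &level, nullptr)))
        return createD3D11Renderer(dev11.Get(), level);

    return false;
}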
 
I downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD

The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental. I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.
There are other APIs besides DX11, e.g. the PS4's lower-level APIs.

AMD's Mantle and the PS4's lower-level APIs set the groundwork for DX12.
 
And Microsoft just comes in and starts building off it. The same way Sony used BSD to make the OS for the PS4 - damn BSD licensing.

Microsoft had been working on DX12 since before DX11.1 actually went live. They started work on it with Intel/AMD/Nvidia before Mantle was even announced; AMD just released Mantle before DX12 as a bit of a PR stunt. As far as I know, OpenGL did have extensions that supported some of the features new to DX12. It's more accurate to say Khronos built on Mantle, but Microsoft built DX12 alongside Mantle.

Better examples of low-level APIs would have been Glide and Metal, but most people block those out of their memory because it was a frustrating time in the PC world.
 