# AMD GPUs Show Strong DirectX 12 Performance on "Ashes of the Singularity"



## btarunr (Aug 18, 2015)

Stardock's "Ashes of the Singularity" may not be particularly groundbreaking as an RTS in the Starcraft era, but it has the distinction of being the first game to market with a DirectX 12 renderer, in addition to its default DirectX 11 one. This gave gamers their first peek at API-to-API comparisons, to test the much-touted bare-metal optimizations of DirectX 12, and as it turns out, AMD GPUs do seem to benefit significantly.

In a GeForce GTX 980 vs. Radeon R9 390X comparison by PC Perspective, the game performs rather poorly on the R9 390X with its default DirectX 11 renderer; when switched to DirectX 12, the card not only takes a big leap (in excess of 30%) in frame rates, but also outperforms the GTX 980. A skeptical way of looking at these results would be that the R9 390X isn't optimized for the D3D11 renderer to begin with, and merely returns to its expected performance vs. the GTX 980 with the D3D12 renderer.



 




Comparing the two GPUs at CPU-intensive resolutions (900p and 1080p), across various CPUs (including the i7-5960X, i7-6700K, the dual-core i3-4330, FX-8350, and FX-6300), reveals that the R9 390X sees only a narrow performance drop with fewer CPU cores, and slight performance gains with an increasing number of cores. Find the full, insightful review in the source link below.
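The CPU-scaling result makes sense if you model frame time as whichever is slower: the GPU, or the CPU-side draw-call submission. A toy model of that idea (all numbers below are illustrative assumptions, not taken from the review):

```python
# Toy model: DX11-style drivers serialize most draw-call submission on one
# thread, while DX12 lets the engine spread recording across several cores.
# Frame time is bounded by the slower of the CPU submit path and the GPU.

def frame_time_ms(draw_calls, cost_per_call_us, gpu_time_ms, threads):
    """CPU submit cost divides across threads; frame time is the max."""
    cpu_ms = draw_calls * cost_per_call_us / 1000 / threads
    return max(cpu_ms, gpu_time_ms)

# Hypothetical workload: 10,000 draw calls at 4 us each, GPU needs 16 ms.
dx11 = frame_time_ms(10_000, 4, 16, threads=1)  # submission on one thread
dx12 = frame_time_ms(10_000, 4, 16, threads=4)  # spread over four cores

print(f"DX11-like: {dx11:.0f} ms ({1000/dx11:.0f} fps)")
print(f"DX12-like: {dx12:.0f} ms ({1000/dx12:.0f} fps)")
```

Under these made-up numbers the DX11 path is CPU-bound at 40 ms while the DX12 path hits the 16 ms GPU limit, which is the shape of the gains PC Perspective measured on slower CPUs.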

*View at TechPowerUp Main Site*


----------



## FordGT90Concept (Aug 18, 2015)

It just brings the AMD up to where it should be though, yeah?  It's not like Fury X is going to curb stomp Titan X on DX12, amiright?


----------



## laszlo (Aug 18, 2015)

So AMD is again ahead with hardware and lacking in software... it's not the first time they've done this, and it seems nothing was learned from the past...


----------



## MxPhenom 216 (Aug 18, 2015)

So AMD cards shine in DX12, and Nvidia cards stay about the same.


----------



## Ebo (Aug 18, 2015)

The way I see it is: why optimize for DX11 anymore?

Win 10 has already been downloaded more than 50 million times, DX12 included, which makes DX11 a thing of the past.
Move on and forget; just keep DX11 where it already is, that's good enough, and start developing for DX12 right away.


----------



## Xzibit (Aug 18, 2015)

MxPhenom 216 said:


> So AMD cards shine in DX12, and Nvidia cards stay about the same.



It loses performance on fast CPUs beyond 1080p "Low" Average settings.






In Heavy, "Low" seems to gain performance whereas "High" loses.





1080p "Low" looks to be the one consistent outcome where DX12 is faster than DX11 on Nvidia using a powerful CPU.

The more interesting or FUN part of it is probably the spat between OXIDE and Nvidia right now...


----------



## Ubersonic (Aug 18, 2015)

Colour me skeptical, but is this really a case of DX12 offering much better FPS than DX11 when paired with a decent CPU, or is it more a case of a game in alpha offering much better FPS now that its launch API has been implemented rather than its placeholder API?


----------



## HumanSmoke (Aug 18, 2015)

Ebo said:


> The way I see it, is that why optimize for dx11 anymore ?


Not everyone will upgrade to Win10. There are plenty of people who - for whatever reasons (they seem to include technophobia, making a stand against subscription models, abhorrence of "apps", and world-government conspiracy theories, among others) - will only give up their Win7 serials when they're prised from their cold, dead hands.
I'm also pretty sure that HD 6000 card series owners aren't keen to have their gaming marginalized any further than it already is.


MxPhenom 216 said:


> So AMD cards shine in DX12, and Nvidia cards stay about the same.


In this instance yes...although this instance seems to be an alpha build from a game engine, and a developer and game engine very closely allied with AMD. No doubt given the range of DX12 /DX11.3 feature and hardware coding options, you'll see game engines and individual games vary wildly depending upon what is implemented, and what is not.
One thing seems certain from the PC Per graphs Xzibit posted. AMD really need to pour some of those optimizations into CPUs.


----------



## FordGT90Concept (Aug 18, 2015)

Most people I know that aren't upgrading to Windows 10 are staying with what they are on because of Windows Media Center.  If you don't have an Xbox One and you rely on Windows Media Center, you're basically stuck.


----------



## Yorgos (Aug 18, 2015)

FordGT90Concept said:


> Most people I know that aren't upgrading to Windows 10 are staying with what they are on because of Windows Media Center.  If you don't have an Xbox One and you rely on Windows Media Center, you're basically stuck.


I know a lot of people running around with donkeys instead of using some modern kind of transportation,
but for those people who are "stuck" there is also XBMC, far superior to anything out there.


----------



## dj-electric (Aug 18, 2015)

This is one game, a game whose devs have been petted by AMD since it was announced.
By this time, after all we've been through in this dirty market, I would totally believe that these great numbers on DX12 are either because DX11 got purposely butchered, or because they are genuinely so well-optimized for AMD that no one actually bothered to suit this thing to NVIDIA cards.

They wanna please their red friends. The former get to sell cards, and the latter get to sell their free-PR-driven game by having posts like these.
I'm gonna wait for UE4 to make the transition to DX12 (not this half-ported test thing).


----------



## Xzibit (Aug 18, 2015)

Dj-ElectriC said:


> This is one game, a game whose devs have been petted by AMD since it was announced.
> By this time, after all we've been through in this dirty market, I would totally believe that these great numbers on DX12 are either because DX11 got purposely butchered, or because they are genuinely so well-optimized for AMD that no one actually bothered to suit this thing to NVIDIA cards.
> 
> They wanna please their red friends. The former get to sell cards, and the latter get to sell their free-PR-driven game by having posts like these.
> I'm gonna wait for UE4 to make the transition to DX12 (not this half-ported test thing).



Did you read the article, watch the video, or read Ryan's comments on it?



			
PCPerspective Ryan Shrout said:

> NVIDIA has had source code access for a year. There are no excuses then.


----------



## john_ (Aug 18, 2015)

Nvidia didn't miss the chance for its usual marketing stunt. They published the usual "Game Ready" driver to show that, for every new game, they are ready. Then they followed with their advice to not consider the benchmark results important, or maybe even to ignore them completely. If this were AMD - if AMD had come out with a very specific driver for one specific game and then failed at those benchmarks - this thread would be flooded with mocking and derogatory comments about AMD, even hate.

Anyway, "Ashes of the Singularity" shows two things about drivers. First, Nvidia's DX11 drivers are miles ahead of AMD's. Second, AMD's DX12 drivers start from a more solid base. If AMD's DX11 drivers started off on the wrong foot and later there were no programmers or money to fix them, at least with DX12 AMD starts from a better position. Let's hope they don't mess up down the road. If they think that with DX12 they have the upper hand, Nvidia has the resources and the talent to show them that they are wrong.

Two last things. I was saying in the past that Mantle was made to fix Bulldozer's pathetic gaming performance compared to Intel CPUs, and not so much to improve AMD's GPU performance. Seeing the results of PCPerspective's CPU tests and how badly the FX processors score, I can say that I was completely wrong. FX is something that cannot be saved, at least based on this specific test. The last thing is the results of the Radeon R7 370. As we can see, the 370 (265, 7850) does not gain much. There could be two reasons for this: GCN 1.0 and 2GB of RAM. I would like to remind everyone that Mantle did NOT perform well with cards that had less than 3GB of RAM. It seems that DX12 has the same problem. We might have to consider 3GB of RAM the minimum for DX12 performance in the future.


----------



## Tensa Zangetsu (Aug 18, 2015)

Figures why AMD dropped Mantle for DX12. They knew they'd kill nVidia at this


----------



## Xaled (Aug 18, 2015)

Dj-ElectriC said:


> This is one game, a game that its devs been petted by AMD since it was announced.
> By this time after all we've been through in this dirty market, i would totally believe that these great numbers on DX12 are either because DX11 got purposely butchered or becuase they are genuinely so well-optimized for AMD that no-one actually bothered to suite this thing to NVIDIA cards.
> 
> They wanna please their red friends. Those get to sell cards, and those get to sell there 0-PR driven game by having posts like these.
> Imma wait fo UE4 to make the transission to DX12 (not this half-ported test thing)


Hahha, dirty market? It wasn't a dirty market when nVidia made the big DX10 lie with Crysis and fooled millions of people with a DX9.0c game? nVidia has been cheating for ages.
If AMD is doing something wrong now, you have to blame nvidia for it, because nvidia started it or invented it.


----------



## FordGT90Concept (Aug 18, 2015)

Yorgos said:


> I know a lot of people running around with donkeys instead of using some modern kind of transportation,
> but those people who are "stuck" there is also xbmc, far superior than anything out there.


It's inferior to Windows Media Center.  Kodi's DVR and EPG (in USA) features leave a lot to be desired; moreover, Kodi's DLNA support is horrendous.  Xbox One functions as a DVR, obtains EPG, and functions as a DLNA server.  A $400 Xbox One, $200 for an external HDD, and $50 for a Hauppauge USB TV tuner for Windows 10 isn't exactly appealing next to a $100 Tuner and Windows Media Center.



Tensa Zangetsu said:


> Figures why AMD dropped Mantle for DX12. They knew they'd kill nVidia at this


Mantle code is in Direct3D 12 and Vulkan.  There's no sense in AMD continuing to develop a proprietary API when it is becoming the standard.


----------



## Sony Xperia S (Aug 18, 2015)

laszlo said:


> so AMD is again ahead with hardware and lack software .....is not the 1st time they do this and seems nothing was learned from past...



Hmm, I don't know exactly how to read these results.
Perhaps nvidia lacks software optimisation in DX12 altogether?!
Obviously, these results are not very promising if they do not reveal performance gains everywhere and for everyone.


----------



## HumanSmoke (Aug 18, 2015)

Xaled said:


> If AMD is doing something wrong now you have to blame nvidia for it, because nvidia started it or invented it


No, I really, really, really wouldn't go there.
For some of us who have been 3D graphics enthusiasts for a while, we can remember when ATI outed the "all new" Rage Pro Turbo...with supposedly "*40% more performance*" than the outgoing Rage Pro in February 1998. The 40% improvement was only driver optimization for Winbench 98. As this review shows, real world gains were nil. The Pro Turbo eventually added some performance for actual games, but by then it had been overtaken by a whole swathe of newer cards including those of ATI itself.

If you're going to make a statement of fact, it should actually be _factual_.
Most of the graphics vendors indulged in benchmark shenanigans at one time or another - Nvidia included - but ATI's deliberate optimization for a single benchmark actually set the tone.


----------



## FordGT90Concept (Aug 18, 2015)

Sony Xperia S said:


> Hmm, I don't know how exactly to read these results.
> Perhaps, nvidia lacks software optimisation at all in DX12 ?!
> Obviously, these results are not very promising, if they do not reveal performance gains everywhere and with everyone.


AMD has major CPU bottlenecks on low settings on DX11.  Those CPU bottlenecks are gone in DX12 so AMD cards perform like NVIDIA cards in DX12 when they're pretty far behind in DX11.

I think it shows that AMD put all of their effort into Mantle/DX12/Vulkan over DX11 which really shouldn't surprise anyone.


----------



## Breit (Aug 18, 2015)

All I take away from this is that AMD's drivers are crap, and with DX12 this doesn't matter as much as it did with DX11.

I bet the CPU usage under DX12 with AMD is a lot higher than it is with nVidia. Not that this is necessarily a bad thing, but it also isn't more efficient per se. The 5960X just has enough crunching power not to be the bottleneck here.

I hope AMD doesn't take their DX12 superiority for granted and stop developing and optimizing their DX12 drivers further.


----------



## 64K (Aug 18, 2015)

What does this article tell us? That Ashes of the Singularity runs better on AMD than Nvidia. There are games that run better on AMD, and games that run better on Nvidia; it is common. One company or the other works closer with the developer and gains an edge. It doesn't mean much if Nvidia chose not to, for whatever reason, if that is what happened. IMO we can't look at the performance increase in one DX12 game and extrapolate that to most DX12 games.

If it works out that AMD has the edge in most DX12 games, then it's time to go red, but how long will it be before it's clear which company has the best offerings? It may possibly be years before there is a good selection of DX12 games.


----------



## Breit (Aug 18, 2015)

FordGT90Concept said:


> AMD has major CPU bottlenecks on low settings on DX11. Those CPU bottlenecks are gone in DX12 so AMD cards perform like NVIDIA cards in DX12 when they're pretty far behind in DX11.



I think due to the multithreaded nature of DX12, those bottlenecks (or inefficiencies) are better hidden, but they are still there.


----------



## LAN_deRf_HA (Aug 18, 2015)

There's all sorts of wonkiness going on with all sides of this chart. The only takeaway from this test is that we need more tests.


----------



## Vayra86 (Aug 18, 2015)

64K said:


> What does this article tell us? That Ashes of Singularity runs better on AMD than Nvidia. There are games that run better on AMD or Nvidia. It is common. One company or the other works closer with the developer and gains an edge. It doesn't mean much that Nvidia chose not to for whatever reason if that is what happened. imo we can't look at the performance increase in one DX12 game and extrapolate that to most DX12 games.
> 
> If it works out that AMD has the edge in most DX12 games then it's time to go red but how long will it be before it's clear which company has the best offerings? It may be years possibly before there are a good selection of DX12 games.



This. Overall, this is not news; this is just a stir of the AMD-Nvidia fanrage bowl, with a game that nobody really plays or has even heard of.


----------



## raptori (Aug 18, 2015)

It's better to say AMD is catching up in DX12.


----------



## Relayer (Aug 18, 2015)

Xzibit said:


> It loses performance on fast CPUs beyond 1080p "Low" Average settings
> 
> 
> 
> ...



I agree. Wonder what nVidia will do dealing with a dev who doesn't need their trips, free hardware, or precompiled code?


----------



## Prima.Vera (Aug 18, 2015)

This just proves even more, that AMD's CPUs are pure crap for gaming. Like junk.


----------



## Jborg (Aug 18, 2015)

Prima.Vera said:


> This just proves even more, that AMD's CPUs are pure crap for gaming. Like junk.


 
Yes, they do not perform as well as Intel per core.... this is pretty obvious by now.....

Pure crap for gaming? Dunno what you're smoking.... With my FX 8350 I have had almost no issues playing any game on High-Ultra detail at 50-60 FPS...

Granted, MMO games DO run better on Intel, but they are still 100% playable on an AMD CPU.

Crysis 3 has honestly been the only game that kinda runs like crap, and that seems to be a game issue...

So, while buying an AMD CPU right now is basically investing in old tech, it still runs games better than you say. Keep in mind, too, that if I had the option to build another rig I would buy Intel.... But I just wanted to set that comment straight... because it's false.


----------



## RejZoR (Aug 18, 2015)

Until I see some other benchmarks, these numbers mean nothing. It has been developed with AMD in mind since day one, so the results aren't all that surprising (even though I know AMD is strong in this regard). There are also games where NVIDIA absolutely destroys AMD. But we aren't running around saying how AMD sucks and how NVIDIA is awesome.

Either way, if AMD is this good, it will force NVIDIA to enhance their drivers (I think hardware-wise NVIDIA is already prepared). Anyone remember the DX11 boost NVIDIA made when AMD started fiddling with Mantle more aggressively? Yeah, that.


----------



## the54thvoid (Aug 18, 2015)

Relayer said:


> I agree. Wonder what nVidia will do dealing with a dev who doesn't need their trips, free hardware, or precompiled code?



They'll ignore them, and instead focus on the AAA titles. That's not a good thing.
BTW, the comment from Relayer could also be pointed at AMD and Stardock. AMD had a big push with this dev.

Time will tell but I doubt much will change in the grand scheme of things.


----------



## Mussels (Aug 18, 2015)

Still surprised at the lack of information, and at the fanboyism out there.

A few facts (you can run the 3DMark DX12 API test for the little repeatable DX12 information we have publicly available):

1. AMD ran poorly in DX11 (and older) for multi-threading. Nvidia had a lead there for a long time.
2. This made AMD GPUs work better with a faster IPC/single-threaded CPU.
3. AMD do not sell CPUs with leading single-threaded performance.
4. DX12 addresses the poor multi-threading, bringing AMD GPUs up to par with Nvidia and greatly increasing performance in some cases.
5. This also makes AMD CPUs more attractive to gamers, as the extra threads now help out.
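Point 4 is the key structural change: the engine can record command lists on several threads and submit them once, instead of funneling everything through one driver thread. A rough pure-Python stand-in for that pattern (no real graphics API involved; the D3D12 names in the comments are just the interfaces it imitates):

```python
# Sketch of DX12-style multi-threaded command recording: each worker
# records its own command list independently, and the main thread
# submits them all in one batch afterwards.
from concurrent.futures import ThreadPoolExecutor

def record_commands(chunk):
    # Stand-in for ID3D12GraphicsCommandList recording: each thread
    # builds its own list, with no shared driver lock.
    return [f"draw(object_{i})" for i in chunk]

objects = list(range(12))
chunks = [objects[i::4] for i in range(4)]  # split work across 4 "cores"

with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_commands, chunks))

# Single submission point, like ID3D12CommandQueue::ExecuteCommandLists.
submitted = [cmd for cl in command_lists for cmd in cl]
print(len(submitted), "commands submitted from", len(command_lists), "lists")
```

In DX11 the equivalent of `record_commands` mostly ran on one thread inside the driver, which is why the extra cores of an FX-8350 helped so little there.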


----------



## DeadSkull (Aug 18, 2015)

Bwahahaha, nvidia failing once again and refusing to admit fault. How typical is that...


----------



## RejZoR (Aug 18, 2015)

Extra cores on AMD's CPUs don't seem to help much. However, the Core i7 5820 all of a sudden becomes a lot more interesting, I think...


----------



## Prima.Vera (Aug 18, 2015)

Jborg said:


> Yes, they do not perform as well as Intel per core.... this is pretty obvious by now.....
> 
> Pure crap for gaming? Dunno what you're smoking.... With my FX 8350 I have had almost no issues playing any game on High-Ultra detail at 50-60 FPS...
> 
> ...


I was just talking based on those charts. When your top AMD processor is slower than a cheapo i3, then you know, IMO, that your CPU is crap.


----------



## Mussels (Aug 18, 2015)

Prima.Vera said:


> I was just talking based on those charts. When your top AMD processor is slower than a cheapo i3, then you know, IMO, that your CPU is crap.



For some titles that is true, and the i3s cost a lot more... so, you know, personal preference.


----------



## Assimilator (Aug 18, 2015)

Everything we've seen so far is showing that DX12 may level the playing field for AMD, and/or that they've been playing the long game by optimising for DX12 while letting DX11 support fall behind. Either way, a few graphs from an alpha piece of software aren't going to sway me. Let's see some released game titles, some optimised for AMD and some for nVIDIA, before we draw any conclusions.


----------



## Sempron Guy (Aug 18, 2015)

Looking forward to TPU's review on this one. I personally don't trust PCPER with any AMD-related review.


----------



## the54thvoid (Aug 18, 2015)

Mussels said:


> Still surprised at the lack of information and fanboyism out there.
> 
> A few facts (you can run the 3Dmark DX12 API test for the little repeatable information on DX12 we have publicly available)
> 
> ...



Hope that fanboy comment covers the post immediately after yours...

I don't think Nvidia tried too hard with this game code. It seems odd that they lose some performance in DX12 where AMD gains.
Again, AMD worked closely with Stardock, so Nvidia may have neglected it. However, in general I imagine Nvidia will try harder once the PR starts to look bad for them.
They have the financial clout to 'buy' titles, DX12 or not. The last thing we want is Gameworks everywhere.


----------



## Mussels (Aug 18, 2015)

AMD have an advantage with DX12 this time around, because it uses their code from Mantle. Nvidia could well take the lead again with future driver updates, but more likely with future hardware releases.

I bet exactly $1 that DX12 being used on the Xbox One, and AMD hardware being in that console, has something to do with it.


----------



## R-T-B (Aug 18, 2015)

MxPhenom 216 said:


> So AMD cards shine in DX12, and Nvidia cards stay about the same.



Looks more like NVIDIA cards take a hit to me.  WTF?



> AMD have an advantage with DX12 this time around, because it uses their code from mantle.



Not really.



Vayra86 said:


> This. Overall, this is not news, this is just a stir in the AMD-Nvidia fanrage bowl, with a game that nobody really plays or even heard of.



Ah come on man.  It's plastered all over Stardock's homepage like it's Christmas.  There's no way you are doing it justice with that phrase.


----------



## Kemar Stewart (Aug 18, 2015)

I find it interesting that their testing methodology doesn't elaborate on which drivers they are using for these tests. That being said, I will wait for another reviewer before I speculate.


----------



## yogurt_21 (Aug 18, 2015)

So either AMD cards have always been optimized for DirectX 12, or for some reason this game cripples AMD cards in DX11.

Either way it will be interesting. Love that this is shown on the 390X rather than just existing on the Fury. It means there is a slim chance my 290 could receive a performance boost in future games, adding life to it.


----------



## efikkan (Aug 18, 2015)

Mussels said:


> AMD have an advantage with DX12 this time around, because it uses their code from mantle. Nvidia could well take the lead again with future driver updates, but most likely future hardware releases.


No, AMD have an advantage primarily because this game was developed for GCN (Mantle) and then ported to Direct3D 12. The changes in Direct3D 12 allow for more tailored game engines, which will probably result in AMD-"friendly" and Nvidia-"friendly" games, depending on which vendor the developers cooperate with during development.

Keep in mind that many of the planned driver improvements for Direct3D 12 were implemented by Nvidia a while ago, and since Nvidia implements Direct3D and OpenGL through their native superset Cg, these improvements have been universal. Also remember that the R9 390X greatly surpasses the GTX 980 in _theoretical_ performance, so it's about time AMD tried to utilize some of this potential.
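For context on that theoretical-performance point, a back-of-the-envelope FP32 comparison from the published reference specs (shader count × clock × 2 FLOPs per shader per cycle; clocks are approximate reference values, so treat the results as ballpark figures only):

```python
# Rough single-precision throughput from public reference specs.
# Real-world utilization varies widely, so these are upper bounds.
def tflops(shaders, clock_mhz):
    # shaders * MHz * 2 FLOPs (fused multiply-add) per cycle -> TFLOPS
    return shaders * clock_mhz * 2 / 1e6

r9_390x = tflops(2816, 1050)  # 2816 stream processors @ ~1050 MHz
gtx_980 = tflops(2048, 1126)  # 2048 CUDA cores @ ~1126 MHz base

print(f"R9 390X ~{r9_390x:.1f} TFLOPS, GTX 980 ~{gtx_980:.1f} TFLOPS")
```

On paper the 390X has roughly a quarter more raw shader throughput than the GTX 980, which is the gap a lower-overhead API can let it start to express.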



Mussels said:


> I bet exactly $1 that DX12 being used on the Xbox one and AMD hardware being in that console has something to do with it.


Probably not, it has to do with game engine design.


----------



## Octopuss (Aug 18, 2015)

Isn't it a "little" too early to jump at *any* conclusions?...


----------



## john_ (Aug 18, 2015)

I am just copying and pasting a comment from PCPerspective's article:



> http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/
> 
> Baker wrote a specific section just for you.
> 
> ...


That's for everyone talking about companies in bed, probably having Nvidia and Ubisoft in mind.

If you don't like PCPerspective's article, the comments about PCPerspective being AMD-biased are at least amusing; there are at least a dozen other reviews with results that come to the same conclusions. I bet AMD doesn't have the money to buy coffees for one site, let alone to convince a dozen sites to play with the results.


----------



## alwayssts (Aug 18, 2015)

john_ said:


> If you don't like PCPercpective's article, comments about PCPerspective being AMD biased is at least amusing, there are a dozen other reviews at least with results that come to the same conclusions. I bet AMD doesn't have the money to buy the coffees for one site, not to convince a dozen sites to play with the results.



With respect, it appears you clearly don't understand the public history PCP has regarding AMD.  What you took from his comments is the exact opposite of what anyone who has followed them over the years would assume.

They have gotten MUCH better recently, just as TR has; I think Shrout got an immense amount of heartburn over the outcry at some point (if you've listened to their podcast over the last year, you could tell it was really getting to him). Now they just joke about him being "BIOS'd".

...at least I think that's what they called him on the last nvidia/AIB-sponsored livestream, while clearly referencing an inside joke about haters gonna hate (which one can assume was about the exact same thing: his perceived bias against AMD).

Enough about that though...

I agree this is just one game.  I agree this was obviously always meant to showcase AMD's improvements in DX/Mantle, and was probably programmed with that in mind.  Surely there are other areas of DX12 where Nvidia's hardware excels and/or their driver is more mature.

That said, I think it's great that this is Nvidia once again showcasing the mindset that absolutely nothing is their fault, as demoed by Roy Taylor's cutting tweet, as well as Stardock's response to Nvidia claiming multiple times in their memo that everything is on the dev to fix (rather than immature drivers/DX runtime that should handle situations at default better than they do, granted the dev can fix it).


https://twitter.com/amd_roy/status/633535969015275521
https://twitter.com/draginol/status/632957839556874240


----------



## Fluffmeister (Aug 18, 2015)

If anything I'm just really impressed with Nv's DX11 performance!


----------



## Slizzo (Aug 18, 2015)

Fluffmeister said:


> If anything I'm just really impressed with Nv's DX11 performance!



This. This test showed how abysmal AMD's optimizations were for DX11. Now, for this game, that could be entirely because the game was originally Mantle and was ported to DX12, and thus AMD didn't pay any mind to DX11, as their users would have been using the Mantle API, but....

Makes you wonder if they aren't leaving performance on the table in existing DX11 titles.


----------



## Casecutter (Aug 18, 2015)

*NVIDIA GeForce 355.60 WHQL Driver:
Release notes:*
Just in time for the preview of Ashes of the Singularity, this new GeForce *Game Ready* driver ensures you’ll have the best possible gaming experience.
*Game Ready*
Best gaming experience for Ashes of the Singularity preview

Honestly, Nvidia has done nothing indecorous here; so big deal, their driver for what to them is an "Alpha Preview" is still in the works.  It's known that AMD has worked closely with Oxide Games and its Nitrous engine, so AMD has undoubtedly been working hard to make their asynchronous-shader opportunities work in this game, and to leverage GCN's support for executing work from multiple queues.  It's honestly not surprising it's rosy for AMD.

Nor should Nvidia be considered behind in matching AMD on what's an "Alpha Preview".  Nvidia's newest hardware, "Maxwell 2", has 32 queues (composed of 1 graphics queue and 31 compute queues), so Nvidia has hardware that can be made to do such work.  As to "Nvidia has had source code access for a year", the only place that comes from is Ryan Shrout signifying he agreed with another post by saying "^ this" to someone else's post.  It would be good to have the veracity of such a statement confirmed. _*Edit:* After getting this written I saw 'john_'s post above regarding this! Thanks._

Honestly, the only gaffe I see was Nvidia attaching the *"Game Ready"* moniker to this driver release.  Had they kept the iteration "Ashes of the Singularity Preview Driver", they would've had no issue.  And even Brian Burke's (Senior PR Manager) statement could've been less cynical, just saying it like... 'according to Oxide, this Alpha-stage preview is still being optimized, and is not approaching Beta.  Nvidia has the resources to deliver a final *"Game Ready"* driver well before the game's final release.'

Honestly, it's a lot about nothing.


----------



## alwayssts (Aug 18, 2015)

Slizzo said:


> This. This test showed how abysmal AMD's optimizations were for DX11. Now, for this game, that could be entirely because the game was originally Mantle and was ported to DX12, and thus AMD didn't pay any mind to DX11, as their users would have been using the Mantle API, but....
> 
> Makes you wonder if they aren't leaving performance on the table in existing DX11 titles.



It's long been known that how AMD deals with DX11 CPU overhead/bottlenecking isn't exactly the greatest, and much of that was deferred to Mantle/DX12 to solve (which it appears to have done).

DX11 performance has been a feather in Nvidia's cap since the 337.50 beta drivers in April 2014.  Their response to Mantle was to tighten up their DX11 performance with optimizations aimed at alleviating that specific bottleneck, something AMD never did; it gave somewhat similar results to AMD using Mantle (or now DX12), granted obviously not as profound as the new API itself.

I think that very much explains why Nvidia's DX11 and AMD's DX12 performance are as they appear.


----------



## HumanSmoke (Aug 18, 2015)

the54thvoid said:


> I don't think Nvidia tried too hard with this game code. It seems that odd that they lose some perf at DX12 where AMD gain.


Another point to consider is that the Nitrous engine might require some specialized coding. From the developer himself:


> We've offered to do the optimization for their DirectX 12 driver on the app side that is in line with what they had in their DirectX 11 driver.


From that comment, it sounds like optimization by a third party is a non-trivial matter - so Nvidia preferred to do the driver coding themselves. Oxide and Nvidia's coding partnership when Star Swarm launched was abysmal, and wasn't rectified until Nvidia's 337.50 arrived IIRC some months later. The reasons for Nvidia deciding to go it alone could be many and varied, but the first go round as BFF wasn't an unmitigated success.


the54thvoid said:


> Again, AMD worked closely with Stardock so Nvidia may have neglected it. However in general I imagine Nvidia will try harder once the PR starts to look bad for them.


Might be a case of Star Swarm Redux. Initial drivers were a basket case, but once the driver team got to grips with the code, the performance picked up. Probably depends on a lot of variables though - driver coding priority by the vendor, whether the game code undergoes any significant revision before release (as per Tomb Raider), or whether the game engine is simply not tailored that well for Nvidia's architecture (I'd assume that Oxide and AMD's close relationship would make GCN a primary focus for optimization).


----------



## medi01 (Aug 18, 2015)

raptori said:


> It's better to say AMD catching up in dx12.



Uh, doesn't the 390X cost about $100 less than the 980?


----------



## Casecutter (Aug 18, 2015)

alwayssts said:


> With respect, it appears you clearly don't understand the public history PCP has regarding AMD.  What you took from his comments are the exact opposite of what anyone that has followed them over the years would assume.
> 
> They have gotten MUCH better recently, just as TR has, as I think Shrout got an immense amount of heartburn (if you've listened to their podcast over the last year you could tell it was really getting to him) over the outcry at some point.  Now, they just joke about him being "BIOS'd.'
> 
> ...


I'd second exactly what was said; Shrout / PCP weren't known to have much of a predisposition toward AMD, but I've seen some fair and balanced information come about as of late.  As with their article on this, they are out ahead of the subject, although it's a lot about nothing.


----------



## Uplink10 (Aug 18, 2015)

Ebo said:


> The way I see it is, why optimize for DX11 anymore?
> 
> Win 10 has been downloaded more than 50 million times already, including DX12, which makes DX11 a thing of the past.
> Move on and forget; just keep DX11 where it already is, that's good enough, and start developing DX12 right away.





HumanSmoke said:


> Not everyone will upgrade to Win10. There are plenty of people - for whatever reasons (although they seem to include technophobia, making a stand against subscription models, abhorrence of "apps", world Government conspiracy theories among others), that will only give up their Win7 serials when they're prised from their cold, dead hands.
> I'm also pretty sure that HD 6000 card series owners aren't keen to have their gaming marginalized any further than it already is.



It is perfectly understandable if you do not want to upgrade to Windows 10, mostly because you cannot turn off automatic updating, you cannot opt out of the Customer Experience Improvement Program (sending info about your PC), and the EULA is weird (unclear, invasive).

Windows Apps suck because they take up more space for no apparent reason and have these unusual windows which are different from classic windows.

Microsoft is forcing people who want to use DirectX 12 to use Windows 10, and they are the ones guilty of requiring developers to make a DirectX 11.0 version. I am all for DX12, but I will not use it if it means that I have to follow a Google-like business model; you know, I PAID FOR THE DAMN OS AND DESERVE THE DAMN PRIVACY AND CONTROL!


----------



## john_ (Aug 18, 2015)

alwayssts said:


> With respect, it appears you clearly don't understand the public history PCP has regarding AMD.  What you took from his comments are the exact opposite of what anyone that has followed them over the years would assume.
> 
> They have gotten MUCH better recently, just as TR has, as I think Shrout got an immense amount of heartburn (if you've listened to their podcast over the last year you could tell it was really getting to him) over the outcry at some point.  Now, they just joke about him being "BIOS'd.'
> 
> ...at least I think that's what they called him on the last nvidia/aib-sponsored livestream, while clearly referencing an inside joke about haters gonna hate (which one can assume was about the exact same thing; his perceived bias against AMD).


You are kidding. PCPer is Nvidia's Tom Petersen's favorite site to give interviews to; they rushed to post Nvidia's PR about the GTX 970 and agreed with the company's excuses; they have attacked FreeSync in any way possible with their fanboy there, Allyn, calling it vaporware every time he could; and they didn't forget to write multiple negative articles about the Fury X's pump, or the lack of WHQL drivers. I mean, are you kidding me? The last time they were doing AMD promotions was when AMD released Kaveri. Back then they were creating little YouTube videos about Kaveri's superior iGPU over the i3's every week. But today they are Green to the bone. I don't know how many years ago you stopped visiting them, but they have totally changed from what you might remember.


----------



## Xzibit (Aug 18, 2015)

john_ said:


> You are kidding. PCPer is Nvidia's Tom Petersen's favorite site to give interviews to; they rushed to post Nvidia's PR about the GTX 970 and agreed with the company's excuses; they have attacked FreeSync in any way possible with their fanboy there, Allyn, calling it vaporware every time he could; and they didn't forget to write multiple negative articles about the Fury X's pump, or the lack of WHQL drivers. I mean, are you kidding me? The last time they were doing AMD promotions was when AMD released Kaveri. Back then they were creating little YouTube videos about Kaveri's superior iGPU over the i3's every week. But today they are Green to the bone. I don't know how many years ago you stopped visiting them, but they have totally changed from what you might remember.



I think both of you are saying the same thing...

Also, I think that's the reason why most of the talk is going towards PC Perspective when others have done comparisons with AotS and reached the same conclusion. It's surprising, or refreshing, whichever way you look at it, that Ryan dismissed Nvidia's stance on the MSAA issue and Oxide called Nvidia out on it.

Even if one believes for some reason that Nvidia's MSAA issue is real (and not PR damage control), it doesn't explain the graph I posted earlier at all.






"Low" has MSAA turned off, and DX12 is still slower than DX11 at 1600p with a fast CPU.


----------



## arbiter (Aug 19, 2015)

DeadSkull said:


> Bwahahaha, nvidia failing once again and refusing to admit fault. How typical is that...



How many times has AMD failed and refused to admit fault, only to turn around and blame Nvidia for it?



HumanSmoke said:


> Another point to consider might be that the Nitrous engine might require some specialized coding. From the developer himself:


Wouldn't be surprised if they didn't bother too much with the game, since it's an alpha-stage game and any work done now might not even matter when it's finally released.



medi01 said:


> Uh, doesn't the 390X cost about $100 less than the 980?


Considering the 390X is a two-year-old GPU, yeah, it should be cheaper than a much newer one. Even with the price difference, DX12 games will only trickle out here and there over the next 6-12 months.


----------



## Mussels (Aug 19, 2015)

Kemar Stewart said:


> I find it interesting that their testing methodology doesn't elaborate on which drivers they are using for these tests. That being said, I'll wait for another reviewer before I speculate.



with Windows 10 you don't get much of a choice for drivers, so they likely used whatever they had at the time.


----------



## kn00tcn (Aug 19, 2015)

why aren't people mentioning the fact that MANY games are fine on AMD when using a fast cpu, even with the lack of multithreading (obviously low-IPC cpus are screwed, as Digital Foundry has shown in the 750/270 or whatever it was review)

which means... this game is simply more draw calls than ever, or a lot of general cpu load

notice how the weaker cpus don't get much of a boost from dx12, so it's mostly a game design limitation rather than a lack of API optimization

why aren't there more tests renaming the exe so that driver profiles don't get enabled anyway? watch the nvidia dx11 numbers drop
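A quick sketch of the renaming trick described above, assuming the driver keys its per-application profile on the executable's filename; the paths and the neutral name are made up for illustration:

```python
import shutil
from pathlib import Path

def copy_with_neutral_name(exe_path, neutral_name="profile_dodge.exe"):
    """Copy a game executable under a neutral filename so per-application
    driver profiles (matched by exe name) no longer apply to the copy."""
    src = Path(exe_path)
    dst = src.with_name(neutral_name)
    shutil.copy2(src, dst)  # copy contents and metadata alongside the original
    return dst
```

Benchmarking the copy against the original would then isolate how much of a DX11 result comes from game-specific driver optimizations.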


----------



## arbiter (Aug 19, 2015)

kn00tcn said:


> notice how the weaker cpus don't get much of a boost from dx12, so it's mostly a game design limitation rather than a lack of API optimization


Well, most games will have to have low, medium, and high CPU settings, or they will have to detect the CPU and scale the number of draw calls based on the CPU in question.
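That kind of scaling could be as simple as bucketing the detected core count into a tier; the tiers and draw-call budgets below are invented for illustration, not taken from any real engine:

```python
import os

# Hypothetical draw-call budgets per CPU tier; a real engine would tune
# these per platform from profiling data.
BUDGETS = {"low": 5_000, "medium": 20_000, "high": 50_000}

def draw_call_budget(logical_cores):
    """Map a core count to a coarse tier and return its draw-call budget."""
    if logical_cores <= 2:
        tier = "low"
    elif logical_cores <= 4:
        tier = "medium"
    else:
        tier = "high"
    return BUDGETS[tier]

# e.g. pick a budget for the machine the game is running on
budget = draw_call_budget(os.cpu_count() or 1)
```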


----------



## Viruzz (Aug 19, 2015)

Ebo said:


> The way I see it is, why optimize for DX11 anymore?
> 
> Win 10 has been downloaded more than 50 million times already, including DX12, which makes DX11 a thing of the past.
> Move on and forget; just keep DX11 where it already is, that's good enough, and start developing DX12 right away.



LOL mate, we will have 2-3 DX12 games this year: this one, Fable, and, if MS releases it this year, Gears of War.
That means 99% of games are still DX11 (and I'm sure there are lots of DX9 games being developed by indies and such).
It will stay the same next year and the year after; by the time 50% of all PC games are released with DX12, our GPUs will be OBSOLETE.

I have a feeling the next Nvidia series will be so fast that the GTX 1060 (or whatever the x60 model will be called) will beat the 980 or match it 1:1; if Pascal is everything we hear about, it might be even faster.


----------



## FordGT90Concept (Aug 19, 2015)

You forgot XCOM 2, which is supposed to be coming out for Christmas. XCOM 2 is an Unreal Engine 4 title, which means any other UE4 games coming out could also use DX12.


----------



## Mussels (Aug 19, 2015)

the number of DX12 games will climb rapidly: some game devs will release DX12 patches, others will do it for the publicity, and all Xbox One ports from here on should include it.


----------



## john_ (Aug 19, 2015)

I am pretty sure people who spent $500-1000 on a graphics card this period don't care only about the DX12 games that will come out in 2015. They also care about 2016, and maybe 2017 if they don't change cards often.

That said, Nvidia still has an advantage in DX11, but AMD seems to have the hardware to negate their drivers' inefficiencies. It's just that here, in this specific test, their DX11 problems are so obvious. In other games we see AMD cards being, more or less, competitive with Nvidia cards. Nvidia had the opportunity to fix their own problems thanks to Star Swarm and Mantle (funny, but probably the reason behind Nvidia's ultra DX11 optimizations in the 337.50 driver was Mantle). AMD just ignored doing any extra work there. I think they already knew that Microsoft was coming with DX12, and they already knew that DX12 would run fine on their GCN architecture. So they just waited (they are experts in waiting, unfortunately).


----------



## Sony Xperia S (Aug 19, 2015)

FordGT90Concept said:


> It just brings the AMD up to where it should be though, yeah?  It's not like Fury X is going to curb stomp Titan X on DX12, amiright?



Nope because it is too early to say. Time will tell, in 2-3-4 years.


----------



## Mussels (Aug 19, 2015)

Sony Xperia S said:


> Nope because it is too early to say. Time will tell, in 2-3-4 years.



those cards won't even be relevant then; we'll know within 6 months.


----------



## 64K (Aug 19, 2015)

If these guys are right then AMD really needs something to boost sales. If they do prove to be superior in DX12 then maybe that will do it.






http://www.dsogaming.com/news/amdnv...ering-4-out-of-5-pc-gamers-own-an-nvidia-gpu/


4 out of 5 gamers are buying Nvidia GPUs. It doesn't look good right now @Sony Xperia S


----------



## Uplink10 (Aug 19, 2015)

64K said:


> 4 out of 5 gamers buying Nvidia GPUs. It doesn't look good right now @Sony Xperia S


They should; the R9 290X and 290 have such good performance per dollar. The 290X is only $290.


----------



## Sony Xperia S (Aug 19, 2015)

Uplink10 said:


> They should, R9 290X and 290 have such good performance per dollar. 290X is only 290 USD.



And even less with a $30 rebate card: $260. http://www.newegg.com/Product/Product.aspx?Item=N82E16814150696&cm_re=290x-_-14-150-696-_-Product

The only sensible explanation for that graph is that more and more people, sadly, tend to join the dark side of Nvidia.
I have literally no idea what exactly they want from AMD, or why they are so mean to the company and turn their backs on it...


----------



## john_ (Aug 19, 2015)

64K said:


> If these guys are right then AMD really needs something to boost sales. If they do prove to be superior in DX12 then maybe that will do it.
> 
> 
> 
> ...


It should be true; these aren't Nvidia's numbers.
Anyway, it was expected. AMD fans were just waiting to see the 300 series and Fury; Nvidia fans were buying like any other day. That's why AMD dropped so low. It wouldn't surprise me if they went back to 20% in the 3rd quarter. It will be bad if it stays there or drops more.

By the way, did Nvidia also show numbers for professional cards, or were the numbers there not as pretty? AMD was gaining in the Pro market thanks to the Macs.


----------



## 64K (Aug 19, 2015)

john_ said:


> It should be true; these aren't Nvidia's numbers.
> Anyway, it was expected. AMD fans were just waiting to see the 300 series and Fury; Nvidia fans were buying like any other day. That's why AMD dropped so low. It wouldn't surprise me if they went back to 20% in the 3rd quarter. It will be bad if it stays there or drops more.
> 
> By the way, did Nvidia also show numbers for professional cards, or were the numbers there not as pretty? AMD was gaining in the Pro market thanks to the Macs.



Probably has something to do with the numbers. The availability of the Fury/Fury X isn't good right now either for people who want one, at least in the USA. Gouging is taking place, especially at Amazon:

http://www.amazon.com/dp/B01012TLSS/?tag=tec06d-20

http://www.amazon.com/dp/B0108U6JYC/?tag=tec06d-20


----------



## yogurt_21 (Aug 19, 2015)

64K said:


> Probably has some to do with the numbers. The availability of the Fury/Fury X isn't good right now either for people that want one, at least in the USA. Gouging is taking place especially at Amazon
> 
> http://www.amazon.com/dp/B01012TLSS/?tag=tec06d-20
> 
> http://www.amazon.com/dp/B0108U6JYC/?tag=tec06d-20




Even if it were available, I seriously doubt the 980 Ti or the 980 are making up the bulk of Nvidia's sales. I think Nvidia's emphasis on a full line of new cards this round, combined with the focus on temperature, noise, and power consumption, is really what's causing this. The Fury X could have destroyed the 980 Ti and sales would still swing Nvidia's way.


----------



## john_ (Aug 19, 2015)

64K said:


> Probably has some to do with the numbers. The availability of the Fury/Fury X isn't good right now either for people that want one, at least in the USA. Gouging is taking place especially at Amazon
> 
> http://www.amazon.com/dp/B01012TLSS/?tag=tec06d-20
> 
> http://www.amazon.com/dp/B0108U6JYC/?tag=tec06d-20


I think we are talking about 10 million cards every quarter; 1% is 100,000 cards. AMD would have to sell 500,000 cards to keep that 5%. Fury could not save the day in just a few weeks, even if there were availability.
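The arithmetic is easy to sanity-check; note the 10-million-per-quarter market size is the poster's estimate, not an official figure:

```python
def units_for_share(total_units, share_pct):
    """Units a vendor must ship in a period to hold a given market share."""
    return round(total_units * share_pct / 100)

quarterly_market = 10_000_000  # estimated discrete cards shipped per quarter

one_percent = units_for_share(quarterly_market, 1)   # 100,000 cards
five_percent = units_for_share(quarterly_market, 5)  # 500,000 cards
```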


----------



## 64K (Aug 19, 2015)

yogurt_21 said:


> even if it were available I seriously doubt the 980 ti or the 980 are making up the bulk of nvidia's sales. I think the nvidia emphasis on a full line of new cards this round combined with the focus on temperature, noise, and power consumption is really what's causing this. The Fury X could have destroyed the 980 Ti and sales would still swing Nvidia's way.





john_ said:


> I think we are talking about 10 million cards every quarter. 1% is 100.000 cards. AMD would have to sell 250.000 cards to keep that 5%(+2.5% for AMD, -2.5% for Nvidia). Fury could not save the day in just a few weeks, even if there was availability.



Yeah, I know. I have referenced the Steam hardware survey many times in posts to show from their figures that very, very few people buy a high-end card.


----------



## profoundWHALE (Aug 19, 2015)

64K said:


> Yeah, I know. I have referenced the Steam hardware survey many times in posts to show in their figures that very very few people buy a high end card.


That still leaves out anyone who went ahead and got Battlefield 4 or something like that and just stuck with Origin/Uplay, whatever.

Still, it's a good number to reference. (I think)


----------



## arbiter (Aug 19, 2015)

64K said:


> If these guys are right then AMD really needs something to boost sales. If they do prove to be superior in DX12 then maybe that will do it.
> 
> http://www.dsogaming.com/news/amdnv...ering-4-out-of-5-pc-gamers-own-an-nvidia-gpu/
> 
> ...


AMD's credibility is at an all-time low; what looks like superior performance in one game that is only in alpha testing isn't really gonna change that.



Uplink10 said:


> They should, R9 290X and 290 have such good performance per dollar. 290X is only 290 USD.


Hard to sell people a two-year-old GPU as a new product. Most people who own a 200 series card already have no reason to get a 300 series, since it's pretty much the same card.


----------



## Uplink10 (Aug 19, 2015)

arbiter said:


> Hard to sell people a two-year-old GPU as a new product. Most people who own a 200 series card already have no reason to get a 300 series, since it's pretty much the same card.


Even though it is an older model, it is still at the top of the game like you said, and at that price it is a steal.


----------



## HumanSmoke (Aug 19, 2015)

john_ said:


> By the way, did Nvidia also show numbers for professional cards, or were the numbers there not as pretty? AMD was gaining in the Pro market thanks to the Macs.


Neither Nvidia nor AMD lists professional graphics shipments separately these days. Professional, discrete, discrete custom (FirePro D series), and discrete mobile (MXM) shipments are all rolled into both Mercury's and JPR's figures.

AMD's professional board figures aren't a straightforward extrapolation of market share the way the consumer market largely is; Nvidia boards sell at vastly greater ASPs. To use the "MAC" (Mac Pro) example you cited:
The top Mac Pro comes standard with dual FirePro D500s and can be upgraded to dual FirePro D700s for $600.
The FirePro D500 is a custom part analogous to the HD 7870 XT (so it sits between the Pitcairn-based FirePro W7000 and the Tahiti PRO-based W8000). The W7000 is priced at around $600 each, the W8000 at ~$1,000 each.
The FirePro D700 is a FirePro W9000 (also in a custom form factor, as per the other FirePro D boards). The W9000 retails for around $3,000 each... and Apple offers an upgrade to *two* of them - from boards priced at less than $1K - for $600.

Given that Apple has to factor in its own profit and amortization from warranty replacements, how much do you think AMD's unit contract price is for these custom SKUs?
AMD basically purchased market share (and the marketing cachet of being allied with the Apple Mac Pro). If AMD relies upon this business model, they'll build market share all the way to the poor house.


----------



## john_ (Aug 19, 2015)

HumanSmoke said:


> Neither Nvidia nor AMD list professional graphics shipments separately these days. Professional, discrete, discrete custom ( FirePro D series), and discrete mobile (MXM) shipments are used in both Mercury and JPR's figures.
> 
> AMD's professional board figures aren't a straightforward extrapolation of market share as the consumer market largely is. Nvidia boards sell at vastly greater ASPs. To use the "MAC" (Mac Pro) example you cited:
> The top Mac Pro comes standard with Dual FirePro D500's and can be upgraded to Dual FirePro D700's for $600.
> ...



Nvidia wins market share: absolutely logical, nothing to analyze.
AMD wins market share: "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD."


----------



## rvalencia (Aug 19, 2015)

arbiter said:


> How many times has AMD failed and refused to admit fault, only to turn around and blame Nvidia for it?
> 
> 
> Wouldn't be surprised if they didn't bother too much with the game, since it's an alpha-stage game and any work done now might not even matter when it's finally released.
> ...



From http://www.dsogaming.com/news/the-w...t-amd-users-enjoy-physx-hair-and-fur-effects/

With NVIDIA's GameWorks, CDPR lost control over their Witcher 3 PC source code.

_Our dear friends over at PCGamesHardware had an interesting interview with CD Projekt RED's lead engine developer, Balázs Török, *in which Balázs claimed that AMD users will be able to enjoy the new GPU-accelerated fur and hair effects, provided Nvidia lets them.*

When asked about the new hair and fur effects, Balázs said that - at this moment - the aforementioned effects run on AMD Radeon GPUs.

"At the moment it works, but whether it can run in the end is *Nvidia's decision. What matters is the direction in which they continue to develop the technology, and whether, and what, barriers they install*. But I think it will also run on Radeon graphics cards."_​


----------



## john_ (Aug 19, 2015)

arbiter said:


> AMD's credibility is at an all-time low; what looks like superior performance in one game that is only in alpha testing isn't really gonna change that.


That one game will make some people more skeptical about their next upgrade. It's not something serious yet, but it could become more serious in the future. Of course, Nvidia doesn't have to do much, just $persuade$ the game developers not to take advantage of DX12 just yet; wait a little longer, until Pascal comes. They have done it before anyway.



> Hard to sell people a two-year-old GPU as a new product. Most people who own a 200 series card already have no reason to get a 300 series, since it's pretty much the same card.


There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands on more expensive cards than Nvidia does (the GT 730 is one example). At least they do not sell cards with the same name and totally different performance and features. I mean, you talk about credibility; well, if the press treated Nvidia as it treats AMD, Nvidia's credibility would be in no better position. The typical example is the GTX 970, but let's also add the GT 730 I mentioned.
So we have three GT 730s:
One is 96 Fermi cores, 128-bit DDR3. <<< This one is not even DX12 capable yet.
One is 384 Kepler cores, 64-bit GDDR5. <<< This one is the good one.
And the last one is 384 Kepler cores, 64-bit DDR3. <<< This one you throw out the window. 12.8 GB/s? Even Minesweeper will have performance problems.


----------



## Aquinus (Aug 19, 2015)

arbiter said:


> Hard to sell people a 2 year old gpu as a new product. Most people that own a 200 series already have no reason to get a 300 series since its pretty much same card's.


I don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, as overclocking is a thing we like to do, but it's not like the 300 series is bad. A lot of the things people said were bad about AMD (like multi-monitor idle usage and frame latency) are a little overstated. So despite not being new technology, it's worth the price you pay for it.

As for DX12, I don't think we can judge everything by one game. It's too early to say much about DX12 other than that it can potentially offer some significant improvements. AMD's drivers are known to have more overhead than nVidia's, and DX12 might make that less of a problem than it is now.

Honestly, I don't think anyone should get angry or lose sleep over this.


----------



## HumanSmoke (Aug 19, 2015)

john_ said:


> Nvidia wins market share. Absolutely logical. Nothing to analyze.


Really? And why would that be? ...and why bring up Nvidia? I certainly didn't. You were the one telling the world + dog how great AMD's pro graphics were doing. Market share with a plummeting bottom line is hardly cause for cheerleading... or any real critique at all, really, given that this thread is about DX12 - which I'm pretty sure pro graphics and math co-processors aren't leveraging.


john_ said:


> AMD wins market share: "Let me explain to you why there is nothing to see here, or even worse, why it is bad for AMD."


It was you that brought up the "MAC".
All I did was point out the pricing of the AMD parts used in the "MAC".
If you want to have a good cry about it, be my guest - but between whines, maybe you can explain how AMD's FirePro market share is growing (in a fashion) - largely, as you've already said, because of Apple - yet AMD still bleeds red ink.


john_ said:


>



Nice AMD-supplied PPS. Why not use the latest figures, which show that AMD's pro graphics have slipped back to 20%?


> Meanwhile, Nvidia remained the dominant force in professional GPUs, responsible for 79.4% of units, while AMD picked up the remaining 20.6%, including a fair number of units sold to Apple to outfit Mac Pros.


So, a large chunk of AMD's pro graphics market share is predicated upon a single customer getting boards at knock-down pricing - around 1/10th of retail MSRP. As I said, AMD (or anyone for that matter) can grow market share by offering deals like that - and let's face it, AMD has been in fire-sale mode for FirePro for some time. I'm also pretty sure that if AMD offered Radeons at $5 each, they'd quickly see a massive market share gain - but it doesn't mean **** all if it isn't sustainable. Not every entity can look forward to an EU bailout.


----------



## arbiter (Aug 20, 2015)

rvalencia said:


> From http://www.dsogaming.com/news/the-w...t-amd-users-enjoy-physx-hair-and-fur-effects/
> 
> With NVIDIA's Gameworks, CDPR lost control over their Witcher 3 PC source code.
> 
> ...


Well, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at it. Yeah, Nvidia could have used it on purpose knowing AMD cards are slow at it, just like AMD used OpenCL for TressFX knowing their cards were a lot faster at OpenCL than Nvidia's. Both sides have done it at one point. You should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.



Aquinus said:


> I don't know, I found the 390 to be a worthy successor to my 6870s in CFX. In all seriousness, despite being rebrands, they've been tuned. This doesn't mean much for power users, as overclocking is a thing we like to do, but it's not like the 300 series is bad.



They ain't bad, no, but AMD tried to claim the 390(X) wasn't a rebrand when it is. They even tried to give reasons why it wasn't, but that's kinda hard to believe when the GPU-Z screenshot someone posted shows a GPU release date of 2013.
http://wccftech.com/amd-radeon-r9-390-390x-not-rebadges-power-optimization/



john_ said:


> There are plenty of rebrands in the market. It's just that AMD's financial position forced them to do rebrands in more expensive cards than Nvidia is doing (GT 730 is one example).



Really, how many people care about those cards being rebrands or not? No one cares if the R5 240 or whatever is a rebrand of the 230. They are low-end cards with very low power draw; no one who cares about performance buys them.


----------



## Xzibit (Aug 20, 2015)

*ArsTechnica UK - DirectX 12 tested: An early win for AMD, and disappointment for Nvidia
First DX12 gaming benchmark shows R9 290X going toe-to-toe with a GTX 980 Ti.
*


----------



## Fluffmeister (Aug 20, 2015)

Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?

This gets more hilarious by the day, I'm literally getting upset AMD make no money!


----------



## Xzibit (Aug 20, 2015)

Fluffmeister said:


> Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?
> 
> This gets more hilarious by the day, I'm literally getting upset AMD make no money!



Here are the comparable numbers, if graphs are too difficult for you (different systems: PC Perspective & Ars):

| Card | *1080p* | *1080p Heavy* |
| --- | --- | --- |
| 290X | 48 | 40 |
| 390X | *53* | *46* |
| 980 | 50 | 44 |
| 980 Ti | 50 | 43 |


----------



## arbiter (Aug 20, 2015)

Xzibit said:


> Here the comparable numbers if graphs and numbers are too difficult for you (different systems PCPerspective & Arc)
> 
> *1080p*
> 290X
> ...





Fluffmeister said:


> Erm..... what happened to the 390X and the GTX 980 trading blows? Now the 290X trades blows with the 980 Ti?
> 
> This gets more hilarious by the day, I'm literally getting upset AMD make no money!


Wow, so AMD won something in a game that is in alpha and won't be out for like a year; oh yeah, it's also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game in an alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that's likely not gonna stay where it is, like AMD did with the Fury X.

Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess if people haven't learned by now, they never will.


----------



## rvalencia (Aug 20, 2015)

arbiter said:


> Well, fur (which is HairWorks) relies on tessellation, which is part of the DX11 standard. Can't really blame Nvidia for AMD cards being very slow at it. Yeah, Nvidia could have used it on purpose knowing AMD cards are slow at it, just like AMD used OpenCL for TressFX knowing their cards were a lot faster at OpenCL than Nvidia's. Both sides have done it at one point. You should at least be happy that HairWorks uses a standard, which everyone including AMD was crying for.


Wrong, TressFX uses Microsoft's DirectCompute.

From http://www.techpowerup.com/180675/amd-tressfx-technology-detailed.html

*"Technically, TressFX is a toolset co-developed by AMD and Crystal Dynamics, which taps into DirectCompute"
*
You should at least be happy that TressFX uses a standard, which everyone including NVIDIA was crying for.

I'm aware of the technical aspect. The problem was excessive tessellation that didn't substantially improve the graphics' appearance. The workaround was to re-use the same AMD driver-side tessellation override feature that was used for the NVIDIA-patched Crysis 2.


The difference between Ashes of the Singularity and Witcher 3 is source code availability for AMD (Red Team), NVIDIA (Green Team), and Intel (Blue Team). All interested IHVs can contribute to the same source code without being blocked by an exclusivity contract.

Witcher 3 XBO/PS4 builds uses TressFX instead of NVIDIA's Hairworks.

*The unreleased Witcher 3 PC build has TressFX enabled, but it was blocked by the NVIDIA exclusivity contract.* A Witcher 3 PC build with TressFX would have benefited lesser NVIDIA GPUs.


PS: I own an MSI 980 Ti OC alongside my MSI 290X OC.



arbiter said:


> Wow, so AMD won something in a game that is in alpha and won't be out for like a year; oh yeah, it's also an AMD-sponsored game. Those numbers are to be taken with a grain of salt, as it's just one game in an alpha build. That game still has a long way to go coding-wise before we can take those numbers too seriously. Best not to hype up something that's likely not gonna stay where it is, like AMD did with the Fury X.
> 
> Point is, don't start that hype train rolling just yet. It might turn into a train wreck of disappointment like it has many times before. I guess if people haven't learned by now, they never will.


The pattern is similar to 3DMark's API overhead test results.






Xzibit said:


> *ArsTechnica UK - DirectX 12 tested: An early win for AMD, and disappointment for Nvidia
> First DX12 gaming benchmark shows R9 290X going toe-to-toe with a GTX 980 Ti.*


It could also indicate that AMD's DX11 driver is sub-par relative to its TFLOPS potential.


----------



## OneMoar (Aug 20, 2015)

in other news, cpu-bound game benefits from reduction in cpu load... more at 11
please don't feed into the hype train


----------



## Tatty_One (Aug 20, 2015)

On that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work... dunno, maybe I am wrong there.


----------



## Mussels (Aug 20, 2015)

Tatty_One said:


> On that 290X comparison in DX12, my understanding is that the 290X is only "DX12 ready" and therefore will not support the full DX12 feature set. It certainly indicated that on AMD's site a couple of weeks ago when I looked (because I have a 290X and was interested to know its DX12 capability), whereas the 980 will have the works, so I can only guess that the 290X was only doing half the work... dunno, maybe I am wrong there.



with these early DX12 titles it's not likely they'll even use those features, so the graphical quality/load would be the same.


----------



## FordGT90Concept (Aug 20, 2015)

Yeah, it's not DirectX 12 that needs to be looked at; it's the Direct3D feature level the game implements. Do we even know which feature level they're using?

https://en.wikipedia.org/wiki/Graphics_Core_Next

GCN 1.0 is 11_1:
Oland
Cape Verde
Pitcairn
Tahiti

GCN 1.1 is 12_0:
Bonaire
Hawaii
Temash
Kabini
Liverpool
Durango
Kaveri
Godavari
Mullins
Beema

GCN 1.2 is 12_0:
Tonga (Volcanic Islands family)
Fiji (Pirate Islands family)
Carrizo

Only NVIDIA's GM2xx chips are 12_1 compliant (which the GTX 980 Ti is).  Even Skylake's GPU is 12_0.

So if the game supports feature level 12.1 and it is using it on GTX 980 but using 12.0 on 290X, it's not an apples to apples comparison.  We'd have to know that both cards are running feature level 12.0.


----------



## Xzibit (Aug 20, 2015)

FordGT90Concept said:


> So if the game supports feature level 12.1 and it is using it on GTX 980 but using 12.0 on 290X, it's not an apples to apples comparison.  We'd have to know that both cards are running feature level 12.0.



DX12 is more application-dependent than DX11 was. DX12 moves some management that the driver was doing into the application.

AMD GCN cards can handle more resources as well, so it's not always going to be an apples-to-apples comparison.


----------



## rvalencia (Aug 21, 2015)

FordGT90Concept said:


> Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements.  Do we even know what feature level they're using?
> 
> https://en.wikipedia.org/wiki/Graphics_Core_Next
> 
> ...


Resource Binding: Maxwell Tier 2, GCN Tier 3.

Feature level: Maxwell 12_1, GCN 12_0.



CR (Conservative Rasterization) feature

Read https://community.amd.com/message/1308478#1308478

Question:

I need for my application that every drawing produce at least one pixel output (even if this is an empty triangle = 3 identical points). NVidia have an extension (GL_NV_conservative_raster) to enable such a mode (on Maxwell+ cards). Is there a similar extension on AMD cards?

Answer (from AMD):

_*Some of our hardware can support functionality similar to that in the NVIDIA extension you mention*, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time._


For ROV feature

AMD already supports Intel's GL_INTEL_fragment_shader_ordering extension in OpenGL.

From https://twitter.com/g_truc/status/581224843556843521

_It seems that shader invocation ordering is proportionally a lot more expensive on GM204 than S.I. or HSW._


----------



## rtwjunkie (Aug 21, 2015)

Ebo said:


> The way I see it, is that why optimize for dx11 anymore ?
> 
> Win 10 has been DL more than 50 million times allready including dx12 which makes dx11 a thing of the past.
> Move on and forget, just keep dx11 where it allready is, thats good enough and start developing dx12 right away.


 
You ARE aware, I assume, that 50 million is a drop in the bucket?  And that's downloads, not installs.  I personally know several people staying on 7 and 8.1.  Even here, an enthusiast community, I've seen probably 10% of those that upgraded to W10 go back.  So no, DX11 isn't going anywhere.


----------



## FordGT90Concept (Aug 21, 2015)

I downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD

The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental.  I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.


----------



## rvalencia (Aug 24, 2015)

FordGT90Concept said:


> I downloaded twice (Home and Pro) and installed 7 times (Pro once, Home six times). XD
> 
> The change from DX11 to DX12 should be pretty swift for developers because it is nothing monumental.  I believe DX12 automatically falls back to 11 and 10 depending on hardware capabilities.


There are other APIs besides DX11, e.g. the PS4's lower-level APIs.

AMD Mantle and the PS4's lower-level APIs laid the groundwork for DX12.


----------



## Uplink10 (Aug 24, 2015)

rvalencia said:


> AMD Mantle and the PS4's lower-level APIs laid the groundwork for DX12.


And Microsoft just comes in and starts building off it. The same way Sony used BSD to make OS for PS4, damn BSD licensing.


----------



## xenocide (Aug 24, 2015)

Uplink10 said:


> And Microsoft just comes in and starts building off it. The same way Sony used BSD to make OS for PS4, damn BSD licensing.



Microsoft has been working on DX12 since before DX11.1 actually went live.  They started work on it with Intel/AMD/Nvidia before Mantle was even announced; AMD just released Mantle before DX12 as a bit of a PR stunt.  As far as I know, OpenGL did have extensions that supported some of the features new to DX12.  It's more accurate to say Khronos built on Mantle, but Microsoft built DX12 alongside Mantle.

Better examples of low-level APIs would have been Glide and Metal, but most people block those out of their memory because it was a frustrating time in the PC world.


----------



## rvalencia (Aug 25, 2015)

Uplink10 said:


> And Microsoft just comes in and starts building off it. The same way Sony used BSD to make OS for PS4, damn BSD licensing.


I was referring to game engine development work.

Well-known 3D engines already have DX12 versions, e.g. Epic's Unreal Engine 4.9, Crytek's CryEngine, Unity, Square Enix's Luminous Engine, etc.


----------



## john_ (Aug 25, 2015)

xenocide said:


> AMD just released Mantle before DX12 as a bit of a PR stunt


I don't think they had the resources and money for that kind of PR stunt. They didn't just throw out a new "Wonder Driver", they created an API. I don't know, maybe this is something simple that anyone can do? Microsoft was delaying a low-level API, so I think AMD came out with Mantle to guarantee there would be pressure on Microsoft to include DX12 with Windows 10. AMD was the only company losing because of the absence of a low-level API. I also find it funny that people believe AMD came up with a low-level API in no time while Microsoft took almost two more years, and I find it funny because everyone and their dog thinks AMD is completely incompetent at creating anything in software. Not to mention the difference between AMD and Microsoft: one company with no money, the other swimming in money; one a hardware company, the other a software company.


----------



## Mussels (Aug 25, 2015)

AMD wanted to prove that Mantle's technology worked, so that others would adopt it.

Microsoft with their Xbox One (running AMD hardware) and DX12 being the big example - Mantle was *proof* the existing hardware would benefit.


----------



## xenocide (Aug 25, 2015)

I wouldn't describe creating an API as simple, but they also worked with a bunch of very skilled game developers, and from the looks of it grabbed stuff they were already planning to contribute to DX12.


----------



## rvalencia (Aug 30, 2015)

FordGT90Concept said:


> Yeah, it's not DirectX 12 that needs to be looked at, it's the Direct3D feature level the game implements.  Do we even know what feature level they're using?
> 
> https://en.wikipedia.org/wiki/Graphics_Core_Next
> 
> ...


http://www.dsogaming.com/news/oxide...to-disable-certain-settings-in-the-benchmark/
_
*Oxide Developer* on Nvidia's request to turn off certain settings:

“There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.”

“Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only ‘vendor’ specific code is for Nvidia where we had to shutdown Async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn’t really have Async Compute so I don’t know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don’t think it ended up being very significant. This isn’t a vendor specific path, as it’s responding to capabilities the driver reports._



NVIDIA is just ticking the box for Async compute without any real practical performance benefit.


----------



## Mussels (Aug 30, 2015)

rvalencia said:


> NVIDIA is just ticking the box for Async compute without any real practical performance benefit.



and blaming all problems on the game dev for not making the game around their hardware.


----------



## arbiter (Aug 30, 2015)

Mussels said:


> and blaming all problems on the game dev for not making the game around their hardware.


That reminds me so much of a certain company that competes with Nvidia.


----------



## rtwjunkie (Aug 30, 2015)

arbiter said:


> That reminds me so much of a certain company that competes with Nvidia.



Lol! Too true. I have to say, both companies play that card equally.


----------



## rvalencia (Aug 31, 2015)

Mussels said:


> and blaming all problems on the game dev for not making the game around their hardware.


The difference is the Async feature on Maxwell v2 is faked, *AND* Oxide has given Intel, AMD, nVidia and MS *equal access* to the source code.

*In general, NVIDIA Gameworks restricts source code access to Intel and AMD.*

From slide 23 https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_DirectX Advancements in the Many-Core Era Getting the Most out of the PC Platform.pdf
NVIDIA talks about DX12's Async.


This Oxide news really confirmed for me that DX12 came from Mantle origins, since Async Compute was a core feature of that API. It must have caught NV off guard when Mantle was to become DX12, so they appended some Maxwell features into DX12.1 and called their GPUs "DX12 compatible", which isn't entirely true. The base features of DX12 compatibility are Async Compute and better CPU scaling.


Oxide's full reply from
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995


AMD's reply on Oxide's issue
https://www.reddit.com/r/AdvancedMi...ide_games_made_a_post_discussing_dx12/cul9auq


----------



## arbiter (Aug 31, 2015)

rvalencia said:


> The difference is the Async feature on Maxwell v2 is faked, *AND* Oxide has given Intel, AMD, nVidia and MS *equal access* to the source code.
> 
> *In general, NVIDIA Gameworks restricts source code access to Intel and AMD.*
> 
> ...


Here is the question on that "equal" access: I doubt it included DX12, since they couldn't add DX12 yet, which means what you see here is AMD and Oxide doing what AMD whined Nvidia was doing, and crippling performance on their cards. The problem is that even with source access, a DX12 exe for the game likely wasn't an option until recently, but since the game had Mantle in it from day 1, that let them set the game up for AMD cards specifically and, in this case, cripple performance on Nvidia.

Cue the claims that what I said was BS, but the reality of it is pretty damn plausible. So now Mantle, in its dead form, could be causing crippling performance.



rvalencia said:


> This Oxide news really confirmed for me that DX12 came from Mantle origins since Async Compute was a core feature of that API. It must have caught NV offguard when Mantle was to become DX12,


^ pretty much confirmation of it.
Unlike GameWorks, it doesn't look like it can be turned off?

I will head this one off before it's said: I bet someone will say "well, it's a standard". It may be, but so is DX11 tessellation, and that didn't stop AMD from whining when HairWorks used it.


----------



## john_ (Aug 31, 2015)

arbiter said:


> Here is the question on that "equal" access. I doubt that included Dx12 since well they couldn't add DX12 which means what you see here is AMD and Oxide doing what amd whined nvidia was doing and crippling performance on their cards. Problem with this even with source access DX12 exe for game likely wasn't an option til recently but since game had Mantle in it from Day 1 that let them set the game up for AMD cards specifically and in this case cripple performance on nvidia.
> 
> Cue the claims that what i said was BS but reality of it is pretty damn plausible. So now Mantle in its dead form could be causing crippling performance.
> 
> ...


You can question the equal access and you can guess that the game favors AMD.

With GameWorks on the other hand there is CERTAINTY that NO ONE has access but Nvidia to the source code and that the game ABSOLUTELY favors specific Nvidia hardware (I wouldn't say all Nvidia hardware here, because Kepler owners could have a different opinion on that).

Can you see the difference?


----------



## rvalencia (Aug 31, 2015)

arbiter said:


> Here is the question on that "equal" access. I doubt that included Dx12 since well they couldn't add DX12 which means what you see here is AMD and Oxide doing what amd whined nvidia was doing and crippling performance on their cards. *Problem with this even with source access DX12 exe for game likely wasn't an option til recently *but since game had Mantle in it from Day 1 that let them set the game up for AMD cards specifically and in this case cripple performance on nvidia.
> 
> Cue the claims that what i said was BS but reality of it is pretty damn plausible. So now Mantle in its dead form could be causing crippling performance.
> 
> ...


Your *"even with source access DX12 exe for game likely wasn't an option til recently"* statement is false.

From http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

*Being fair to all the graphics vendors *

_Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone’s machine, regardless of what hardware our players have._

_*To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year.* We have received a huge amount of feedback.* For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware*, they offered an optimized shader that made things faster which we integrated into our code._

_We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn't move the engine architecture backward (that is, we are not jeopardizing the future for the present)._



*THAT's "for over a year", hence your "wasn't an option til recently" assertion is wrong.*


----------



## 64K (Aug 31, 2015)

That game Ashes of the Singularity sure is getting a lot of free publicity. I bet Stardock is loving it. I hadn't even heard of it before this.


----------



## FordGT90Concept (Aug 31, 2015)

rvalencia said:


> The difference is Async feature on Maxwellv2 is faked *AND *Oxide has given  Intel, AMD, nVidia and MS *equal access* to the source code.
> 
> *In general, NVIDIA Gameworks restricts source code access to Intel and AMD.*
> 
> ...


Makes sense.  Maxwell doesn't get the FPS boost that AMD GCN 1.0 and newer cards get in DX12.  That, in turn, explains why the 290X goes from about equal to the GTX 970 in DX11 to about 30% faster in DX12.  NVIDIA will probably get it fixed for Pascal, but NVIDIA users aren't going to see the major performance jump AMD users are until then.


NVIDIA did what they always do when contacted by AMD: hang up.  AMD got the last laugh this time.


----------



## arbiter (Aug 31, 2015)

rvalencia said:


> Your * "even with source access DX12 exe for game likely wasn't an option til recently" *statement is false.
> 
> From http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/


The sad part is that the story you posted didn't prove what I said was false; there is no date listed for when it was available. So what I said is still valid.



rvalencia said:


> *To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year.* We have received a huge amount of feedback.* For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware*, they offered an optimized shader that made things faster which we integrated into our code.


The question with that is whether it was from when DX11 and (proprietary, locked) Mantle were the two options for the game. Wouldn't shock me if it was.



FordGT90Concept said:


> NVIDIA did what they always do when contacted by AMD: hang up. AMD got the last laugh this time.


that is if games even use async, but how many will outside the ones paid by AMD is still up for debate.


----------



## rvalencia (Sep 1, 2015)

arbiter said:


> Sad part about that is the story you posted didn't prove what i said was false, there is no date listed when it was available. So what i said is still valid.
> 
> 
> The question with that was that when it was DX11 and (proprietary locked) Mantle was 2 options for game? Wouldn't shock me if it was.
> ...


The sad part is it's "more than a year" from when the blog was posted, i.e. back to at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.

It doesn't need to be paid for by AMD, since the XBO will get its DirectX 12 with its Windows 10 update, which in turn will influence Async usage in PS4 multi-platform games. Once the XBO gains full-featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwell v2 will age like the Kepler GTX 780.


----------



## arbiter (Sep 1, 2015)

rvalencia said:


> The sad part is it's "more than a year" from the blog was posted i.e. back at least August 16, 2014. Furthermore, NVIDIA made changes to their own code path.
> 
> It doesn't need to be paid by AMD since XBO will get it's DirectX12 with it's Windows 10 which in-turn influence Async usage with PS4's multi-platform games. Once XBO gains full featured Async APIs, it will be the new baseline programming model. If Pascal gains proper Async, Maxwelv2 will age like Kelper 780 GTX.


As much as people love to point out the "more than a year" thing: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary, closed-source API, so yeah. Async on console could be useful, but on desktop it's not needed, since those console APUs are pretty low-end, weak AMD hardware that they have to squeeze every possible thing out of. Async, as said, was in the Mantle version but likely wasn't in the DX version of the game until the DX12 exe was released; hence my point, if you don't ignore that fact, which wouldn't shock me if you do.


----------



## FordGT90Concept (Sep 1, 2015)

arbiter said:


> that is if games even use async but that is still up for debate how many will outside the ones paid by AMD.


Async compute is a cornerstone of Direct3D 12/Mantle/Vulkan.  It isn't required (the work will simply be executed synchronously), but having it available means pretty big framerate gains because less of the GPU sits idle.


----------



## john_ (Sep 1, 2015)

arbiter said:


> As much as people love to point out the "more than a year" thing: Async may have been enabled in the Mantle version of the game, but that was AMD's proprietary, closed-source API, so yeah. Async on console could be useful, but on desktop it's not needed, since those console APUs are pretty low-end, weak AMD hardware that they have to squeeze every possible thing out of. Async, as said, was in the Mantle version but likely wasn't in the DX version of the game until the DX12 exe was released; hence my point, if you don't ignore that fact, which wouldn't shock me if you do.


All this time that Nvidia looked superior, you and others really seemed to enjoy trolling AMD fans from an advantageous position. Now you put your head in the sand and try to ignore reality.
The "more than a year" argument is not crap. Async compute is huge, not useless. It is useless when you try to fake it in the drivers, but not when it is implemented in the hardware. And no, it is not just for the consoles.
Also, your convenient stories about a Mantle exe and a DX12 exe are not facts, only not-so-believable excuses.


----------

