# Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12



## btarunr (Aug 31, 2015)

It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one which tilts the scales in favor of AMD's competing Graphics CoreNext architecture as the one better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.

Async compute is a standardized API-level feature added to Direct3D by Microsoft, which lets an app better exploit the number-crunching resources of a GPU by breaking its rendering workload into tasks that can run concurrently. Since the NVIDIA driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but when it attempted to use the feature on Maxwell, the result was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, the developer alleges.
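Conceptually, the win is simple: independent graphics and compute work submitted to separate queues can overlap in time instead of serializing. The sketch below is a loose illustration in Python (threads standing in for GPU queues, `sleep` standing in for work), not D3D12 code:

```python
# Illustration only: two independent workloads overlap when submitted to
# separate "queues", which is the utilization gain async compute offers
# when a GPU supports concurrent graphics + compute natively.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    time.sleep(0.05)  # stand-in for rasterization work
    return "graphics done"

def compute_pass():
    time.sleep(0.05)  # stand-in for e.g. lighting/particle compute
    return "compute done"

# Serial submission: the compute pass waits for the graphics pass.
t0 = time.perf_counter()
serial_results = [graphics_pass(), compute_pass()]
serial_time = time.perf_counter() - t0

# Overlapped ("async") submission: both passes in flight at once.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(graphics_pass), pool.submit(compute_pass)]
    overlap_results = [f.result() for f in futures]
overlap_time = time.perf_counter() - t0

print(serial_results, overlap_results)
print(overlap_time < serial_time)  # overlapping should finish sooner
```

The catch, per Oxide, is that a driver can report the capability while the hardware cannot actually run the queues concurrently, so "overlapped" submission ends up no faster (or slower) than serial.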






"Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of Singularity" benchmark in its press communications (presumably to VGA reviewers).

Given its growing market share, NVIDIA could use similar tactics to steer game developers away from industry-standard API features that it doesn't support and that rival AMD does. NVIDIA's drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1; we wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.

*View at TechPowerUp Main Site*


----------



## Alexandrus (Aug 31, 2015)

Who the heck writes these and how do they pass to be posted on the website ?
Pressurized, pressurizing, does the writer even know what that means ?
I mean really.....


----------



## Shihab (Aug 31, 2015)

Alexandru Spataru said:


> Who the heck writes these and how do they pass to be posted on the website ?
> Pressurized, pressurizing, does the writer even know what that means ?
> I mean really.....



Huh, I thought it was a pun. "Oxide" 'n all.. 

Need a new humour-meter...


----------



## jihadjoe (Aug 31, 2015)

That's interesting, and now I wonder how well the old Kepler cards do in that DX12 benchmark. Presumably those come with their compute capabilities intact.


----------



## RejZoR (Aug 31, 2015)

I think NVIDIA just couldn't be bothered with the driver implementation till now because, frankly, async compute units weren't really needed till now (or shall I say till DX12 games are here). Maybe the drivers "bluff" the support just to prevent crashing if someone happens to try and use it now, and they'll implement it properly at a later time. Until NVIDIA confirms that the GTX 900 series has no async units, I call it BS.


----------



## Cybrnook2002 (Aug 31, 2015)

Sneaky Sneaky

So, what you're saying is it's easy to shine so long as the path has been laid for you. The moment you take a detour, flaws get exposed. Have to commend AMD on this one: while they are hurting, at least they are truly baking in support for the features they claim to support, not just doing a quick once-over to get to market (obviously they are not rushed on that front...)

Bluffing driver.... Wonder what other features we "think" we are using.


----------



## Ferrum Master (Aug 31, 2015)

Oxide guys truly ignited some fire on nVidia's turf... 

Bravo guys... bravo... I need the damn prices going down!


----------



## ZeppMan217 (Aug 31, 2015)

Shihabyooo said:


> Huh, I thought it was a pun. "Oxide" 'n all..
> 
> Need a new humour-meter...


It should be _pressured._


----------



## cadaveca (Aug 31, 2015)

Reminds me of good ol' Gabe Newell laying the smack down on NVidia many, many moons ago (like, well over a decade ago).



ZeppMan217 said:


> It should be _pressured._



Which the original copy (text) had. It was changed on purpose, as humor, it seems.


----------



## Mr.Newss (Aug 31, 2015)




----------



## FrustratedGarrett (Aug 31, 2015)

Here's the original article at WCCFTECH: http://wccftech.com/oxide-games-dev-replies-ashes-singularity-controversy/

There are two points here to look at: first, Oxide is saying that Nvidia has had access to the source code of the game for over a year, and that they've been getting updates to the source code on the same day as AMD and Intel.

*"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."
*
Second, they claim they're using the so-called async compute units found on AMD's GPUs. These are parts of AMD's recent GPUs that can be used to asynchronously schedule work to underutilized GCN clusters.

"*AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not. Weather or not Async Compute is better or not is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to it’s scheduler is hard to say*."


----------



## AsRock (Aug 31, 2015)

Well, I believe nVidia is already on to this and knows full well there are no games yet; they'll just add it later when there are some, and say tough crap to those who got the current hardware.


----------



## 64K (Aug 31, 2015)

Looks like AMD got the opening salvo in DX12. I'm not sure what Nvidia can do at this point or how much they will care even if Maxwell sales do fall off some. The entry level Maxwells have been out for a year and a half and the mid range Maxwells for about a year. They've probably sold a ton of them already and the profit was made to cover R&D by now I would imagine.

We are late in the game on Maxwell. Pascal should be here in maybe 6 months and I think I read that they planned to add compute back in on the Pascals.

Hope it helps AMD to sell more cards.


----------



## EarthDog (Aug 31, 2015)

If people had half a clue, they would hold on to current NVIDIA cards until the next generation lands in 2016 that will likely perform better. There are going to be a couple of DX12 titles out by that time at least.


----------



## TheMailMan78 (Aug 31, 2015)

Alexandru Spataru said:


> Who the heck writes these and how do they pass to be posted on the website ?
> Pressurized, pressurizing, does the writer even know what that means ?
> I mean really.....


Well how it is written makes more sense....



> NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges



If you wanted to use "pressured" it would read like this....



> Oxide alleges NVIDIA pressured them to remove parts of its code that use async compute altogether.


----------



## ShurikN (Aug 31, 2015)

> This came to light when game developer Oxide Games claimed that it was pressured by NVIDIA's marketing department to remove certain features in its "Ashes of the Singularity" DirectX 12 benchmark.





> it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that NVIDIA driver bluffs its support to apps.





> NVIDIA instead started pressuring Oxide to remove parts of its code that use async compute altogether, it alleges.





> Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does.





> NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver-level, like async compute.





> The company is already drawing flack for using borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.


Why am I not surprised by any of this...


----------



## rtwjunkie (Aug 31, 2015)

Alexandru Spataru said:


> Who the heck writes these and how do they pass to be posted on the website ?
> Pressurized, pressurizing, does the writer even know what that means ?
> I mean really.....


 
Welcome to TPU.  Quite an auspicious start:
-Complain about article content, Check.
-Insult moderator, Check.


----------



## DeathtoGnomes (Aug 31, 2015)

Cybrnook2002 said:


> Sneaky Sneaky
> 
> *
> 
> Bluffing driver.... Wonder what other features we "think" we are using.



This. We all know Nvidia has cheated at benchmarks in the past.

so its like this:

Oxide: Look! we got benchmarks!
AMD: oh my we almost beat The Green Meanies
Nvidia: @oxide you cheated on the benchmarks
Oxide: did not. nyah.
Nvidia: disable competitive features so our non-async bluff works right!
Oxide: not gonna happen
Nvidia: F*** you AMD! you're not better than us! we'll fix you with our l33t bluffing skillz
AMD: *poops pants* and hides in a corner eating popcorn.


----------



## Assimilator (Aug 31, 2015)

I don't understand why a game company would write a benchmark that favours or doesn't favour a specific card... that seems like the opposite purpose of a benchmark.

Be that as it may, I'd also like to know which version of Maxwell allegedly doesn't support async shaders - v1, v2, or both?

If this is true, then it's a massive c**k-up on nVIDIA's part, but one that probably won't affect them until the end of this year, when big-name DX12 games arrive for the holiday season. Even so, said games will still probably have DX11 rendering paths, and once Pascal arrives in March/April 2016 this will all be forgotten... assuming Pascal isn't delayed.


----------



## yogurt_21 (Aug 31, 2015)

ShurikN said:


> Why am I not surprised by any of this...



Because it's been done time and time again. Whichever company doesn't have support for the new shiny goes out of their way to downplay its importance and states that their competition is foolish for being early adopters by supporting it.

Later, when they do support the new shiny, they go out of their way to claim it's now more important and you should want it.

From a business standpoint, not supporting the latest and greatest features of DirectX makes sense. How much did nv spend on SM3.0 support in their 6000 series? By the time the games that supported it were out, most were too resource-heavy for anything short of the 6800 Ultra.

What about the DirectX 10 fiasco? How much was spent on getting compliance, only to have next to no games support it and instead stick with DirectX 9 until 11 came out?

Though I am curious, like the previous poster, whether Kepler also suffers from this or if it's something that was nerfed for Maxwell.


----------



## TheMailMan78 (Aug 31, 2015)

EarthDog said:


> If people had half a clue, they would hold on to current NVIDIA cards until the next generation lands in 2016 that will likely perform better. There are going to be a couple of DX12 titles out by that time at least.


Still rocking two 670's here 

Not that I have a clue........I'm just broke.


----------



## TRWOV (Aug 31, 2015)

Maybe that explains Ars Technica's results?


----------



## RejZoR (Aug 31, 2015)

It's again a SINGLE game. Only when I see more (that aren't exclusive to either camp, like this one is to AMD) will I accept the info...


----------



## Mr McC (Aug 31, 2015)

No support, no problem, just pay them to gimp it on AMD cards: welcome to the wonderful world of the nVidia console, sorry, pc gaming, the way it's meant to be paid.


----------



## Legacy-ZA (Aug 31, 2015)

*Sigh*

I am so tired of things like this...


----------



## Casecutter (Aug 31, 2015)

So where does this leave Nvidia? I thought it had been pretty well purported that Nvidia "confirmed" some time ago that Maxwell 2 (GM200, 204, 206) can utilize the feature. Is it now that they don't have (enough) async compute in hardware, tried to emulate it through software, and are seeing bad teething pains? Is it now surfacing that perhaps Nvidia covered up the truth to sell GPUs?

It's been known that AMD has baked async compute into their hardware since GCN, and even into the PS/Xbox consoles. And it was known well before Maxwell that async compute was going to be something DX12 would be able to leverage.

So Nvidia has been selling a shit load of cards, and now all those cards, it turns out, might only get emulated support in due time. Did they intend to design hardware that offers so little (or no) native support that one might see it as negligent? And what were they expecting most Maxwell (and even Kepler) owners to do: wait for Pascal and be happy to throw more money at them, while they watch the resale value of their cards plummet? Sure, right now owners can just convince themselves they can make do with some half-baked support till more DX12 games start to show.

Nvidia must provide a clear and truthful statement as to the goings-on with this, or... IDK


----------



## truth teller (Aug 31, 2015)

its not like it will be the first time nbadia will resort to a slow software implementation for hardware lacking

*cough*fx5200*cough*

this will keep on happening and their customers wont complain because "muh better in gaymes" payed shilling after these events like always


----------



## nem (Aug 31, 2015)

ahahah nv fanboys today. :B


----------



## Xaled (Aug 31, 2015)

rtwjunkie said:


> Welcome to TPU.  Quite an auspicious start:
> -Complain about article content, Check.
> -Insult moderator, Check.


He is one of those nvidia guys who created multiple accounts to turn the "what do you think of nvidia's 3.5 GB thing" poll in nvidia's favor...


----------



## lilhasselhoffer (Aug 31, 2015)

Slow clap....



Benchmark says a new feature isn't embraced by an Nvidia GPU.  Benchmark says that AMD is ahead, on a standard that they helped shape when Mantle functionally became DX12.



Sorry, but this is kinda derp.  There are no games that actually show this in action.  The people who wrote the code for the benchmark are cagey as to whether real-world performance will bear out the superiority as a real asset.  Sorry, red team, this isn't a win for you.

On the other hand, this is a loss for Nvidia.  They're desperately trying to play this off as an internal communication issue, but the benchmark writers are claiming they pushed for the feature to be disabled.  Honestly, they could have played this like AMD played the tessellation results snafu, but they went full Mcintosh.  Gotta say, objectively that's Nvidia walking into a room and slamming their heads on the table.  It's a loss, but only a minor one.






Again, let's be objective.  Our current node has supported three generations of hardware.  AMD and Nvidia are both running on empty when it comes to real performance improvement.  AMD admitted it with a functional rebrand of the 2xx series, and Nvidia did it by crippling the features in Maxwell not directly related to today's games.  Neither option is good, and it's an admission that they're just treading water until 2016.


----------



## Prima.Vera (Aug 31, 2015)

Good job guys!


----------



## the54thvoid (Aug 31, 2015)

Casecutter said:


> Nvidia must provide a clear and truthful statement as to the goings on with this, or… IDK



^^This.

Two (and a half) situations here - simple as that:

*1)* What Oxide say is true, and Nvidia have a poor hardware-level implementation and the driver-level Async is not well equipped.  This gives AMD a massive boost in titles using fully fledged DX12 that utilise this part of the API (and if it's open, it should be used unless it's not required).
*1 and a half)* Nvidia don't require Async as the cards are well equipped to deal with all other aspects of DX12, therefore in other scenarios the lack of Async won't hamper them (but may still give AMD more legroom if Async is the be-all and end-all).
*2)* Async isn't actually the best thing ever and isn't used or required, giving no edge to AMD in other titles.  Nvidia-sponsored titles will certainly do this.

Frankly, if this is a real case of bad Nvidia shenanigans, it won't hurt them until the slew of AAA DX12 titles arrives.  If they have new cards out by then, I guarantee they'll have addressed this and taken no prisoners.  Problem is, with Win10 still in adoption mode, DX12 won't matter for the bulk of the market for quite a while, until at least a healthy percentage of titles is DX12-coded.  This is still too early to be truly meaningful.  But kudos to AMD for pushing Mantle and helping get hardware to do the work.


----------



## Sony Xperia S (Aug 31, 2015)

ShurikN said:


> *Why am I not surprised by any of this...*





> Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does. *NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver-level, like async compute.* The company is already drawing flack for using borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.



Why am I not surprised at all by this too ?

Nvidia is so dirty like pigs in the mud.

I keep telling you that this company is not good but how many listen to me ?
The world will become a better place when we get rid of nvidia.

Monopoly of AMD (with good hearts) will be better than monopoly of nvidia (who only look how to screw technological progress).


----------



## Agility (Aug 31, 2015)

All I see is fanboyism being anal about the post. You shallow creatures need to look at the bigger picture.



> Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware.



It was mentioned that Nvidia tried to use async compute, which their chip totally does not support, a.k.a. Tier 3. Why would Nvidia then tell the world that their GPU does support it? Have you guys totally forgotten the whole big PR bullshit of the GTX 970 memory scandal?

Maybe you guys need some *reminders*



> NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver-level, like async compute.





> This came to light when game developer Oxide Games claimed that it was pressured by NVIDIA's marketing department to remove certain features in its "Ashes of the Singularity" DirectX 12 benchmark.



Apparently, it shows that Nvidia just tried to bluff its way through in software via driver support, and it backfired. I wouldn't be surprised if Nvidia starts blaming their so-called marketing department for the fault mentioned.

All in all, this entire post clearly tells us how Nvidia handles their marketing and business. Simply put, they lack proper business etiquette, doing under-the-table nonsense and being opaque to consumers.


----------



## KarymidoN (Aug 31, 2015)

the54thvoid said:


> ^^This.
> 
> Two (and a half) situations here - simple as that:
> 
> ...



Today in DX12: AMD > Nvidia.
When DX12 gets popular and has a lot of games: Nvidia (new generation) > AMD.

Nvidia's marketing department is so stupid; they said Maxwell would be compatible with all DX12 feature levels... haha, joke's on you, Nvidia fanboys.


----------



## Sony Xperia S (Aug 31, 2015)

KarymidoN said:


> Nvidia Marketing department is so stupid, they said Maxwell will be compatible with all DX12 feature levels... haha, jokes on you Nvidia fanboy.



The marketing department is the least responsible. Their job is just to make what is crap look a little bit better.

The real monsters are those like JHH and the upper management, who NEVER EVER in their lives took the initiative to improve DX or provide more efficient layers like Mantle, etc...


----------



## Jborg (Aug 31, 2015)




----------



## vega22 (Aug 31, 2015)

brace yourselves, here comes the butthurt green team xD

doh!

too late :rofl:


----------



## john_ (Aug 31, 2015)

DirectX 10.1 was a bug, Async Compute is also a bug. At least Oxide is not Ubisoft.


----------



## rtwjunkie (Aug 31, 2015)

To me it matters not. Both sides take turns telling lies. I'm still on Kepler, which does have compute, and still trying to delay upgrading, whether to AMD or Nvidia...haven't decided.


----------



## raptori (Aug 31, 2015)

That is some serious disappointment. I still remember what Nvidia said: "DX12 will be supported by all cards down to Fermi", and that's Fermi, Kepler and Maxwell. Is it all bluff now? I can see myself turning DX12 options "off" in the future. What a mess.


----------



## geon2k2 (Aug 31, 2015)

Suddenly ... the Fury price looks justified.


----------



## lilhasselhoffer (Aug 31, 2015)

Sony Xperia S is here, brace yourselves and prepare cyanide tabs.





Sony Xperia S said:


> Why am I not surprised at all by this too ?
> 
> Nvidia is so dirty like pigs in the mud.
> 
> ...





AMD isn't some sort of angel.  Perhaps you've got blinders on.

AMD sold Bulldozer as a Sandy Bridge killer.  They cherry picked performance figures, and ran testing that specifically favored their processors.  That's lying.
Nvidia sold a 4 GB video card where touching the last 0.5 GB basically blows its kneecaps out.  That's lying.
Intel regularly engages in shady business practices, and has utilized its near monopoly to stagnate progress ever since Sandy Bridge.


If Nvidia folded tomorrow AMD would gouge on GPU pricing, while justifying it as a means to make Zen a competitor to whatever Intel is offering.  They could theoretically do this, assuming their management wasn't incompetent.  If AMD folded tomorrow the price of all computer hardware would basically gain another zero.  No competition means we'd all be screwed.  Both Nvidia and Intel have a documented track record of these practices.  Intel won't fold.  They could release a Dorito and still have people buy it.  They proved it with the thermal paste crap on Ivy Bridge and Haswell.



Sony Xperia S said:


> The marketing department is the last responsible. Their job is just to make what is crap looking a little bit better.
> 
> The real monsters are those like JHH and the upper management who NEVER EVER in their lives gave the initiative for improving DX or providing more efficient layers like Mantle etc...



Jesus, I'm going to make the same argument here.  Prove it.
1) Are there any games using the feature?  Nope.
2) Do the people writing the code for the benchmark think it'll be a game changer?  Nope.
3) Who was responsible for pressuring the coders to omit code that they knew made their cards suck?  That's marketing.

As per your usual display, golf clap.  You've somehow managed to pin a conspiracy onto the evil overlords at Nvidia.  The absolute angels that are currently stripping the wall paper out of their AMD offices are poor, misunderstood, giving souls.  They couldn't possibly be riding the company into the ground with painful decision after painful decision.  They aren't responsible for stripping value out of the company, to meet a company valuation report that would earn them a bonus check worth more than some of their employees make in a year.  They aren't cutting good, hard working individuals to meet financial objectives.



Christ!  I'm between atheism and agnosticism, but that's all I can come up with.  I've seen people on bad trips that made more coherent sense than you.


----------



## TheMailMan78 (Aug 31, 2015)

lilhasselhoffer said:


> Sony Xperia S is here, brace yourselves and prepare cyanide tabs.
> 
> 
> 
> ...


WALL OF TEXT ACTIVATE!

I love reading your posts lilhasselhoffer, but man, you gotta learn how to articulate your points into torpedoes and not a long spray of .22 caliber rat shot.

The only person who types more in a response than you is Ford.

Learn to hit hard and fast.


----------



## KarymidoN (Aug 31, 2015)

The thing i love about Nvidia is the way they Lie to us, and some people still loving their Lies...










GTX 970, Now Async, who's next?


----------



## rtwjunkie (Aug 31, 2015)

KarymidoN said:


> The thing i love about Nvidia is the way they Lie to us, and some people still loving their Lies...
> 
> 
> 
> ...


 
See posts 40 and 43.  Lies are something both sides do very well.  To think otherwise would show one to be extremely gullible.


----------



## MxPhenom 216 (Aug 31, 2015)

Sony Xperia S said:


> Why am I not surprised at all by this too ?
> 
> Nvidia is so dirty like pigs in the mud.
> 
> ...



Any monopoly is bad. Doesn't matter who has the control over it, its bad for everyone.


----------



## Ikaruga (Aug 31, 2015)

Why are people surprised by this? DirectX 12 (and Vulkan too) both have AMD roots in them (Microsoft has two consoles on the market with AMD GPUs). This is like saying that *"lack of <random feature like proper tessellation> on GCN makes Maxwell better in GameWorks titles."* Ofc Nvidia won't be faster at everything if the API is made by people from the other side. It is a bit slower in async compute atm. The question is whether they can correct it within their drivers (using a little more CPU time, of which we have plenty unused on the PC, so it would make zero difference).

I would still go for NV cards this generation (more ROPs, better tessellation, faster AA, etc.), but perhaps I will change my mind with the next one.

Just my two cents


----------



## GhostRyder (Aug 31, 2015)

Whether or not this becomes a big deal has yet to be seen at least from a more than one game standpoint.  I believe more information is needed on the subject from both sides before we can make our final decisions, but so far this is just another bit of "Misinformation" that has been put out from their side which seems to be happening a lot recently (Just like how there was a "Misunderstanding" about the GTX 970).

Either way, its definitely nothing we have not seen before as far as any of the sides have made up their own shares of lies over the last few years.


----------



## Ikaruga (Aug 31, 2015)

GhostRyder said:


> Whether or not this becomes a big deal has yet to be seen at least from a more than one game standpoint.  I believe more information is needed on the subject from both sides before we can make our final decisions, but so far this is just another bit of "Misinformation" that has been put out from their side which seems to be happening a lot recently (Just like how there was a "Misunderstanding" about the GTX 970).
> 
> Either way, its definitely nothing we have not seen before as far as any of the sides have made up their own shares of lies over the last few years.


*There is no misinformation at all*; most of the DX12 features will be supported in software on most of the cards, and there is no GPU on the market with 100% top-tier DX12 support (I'm not sure the next generation will have one either, but maybe). This is nothing but a very well directed marketing campaign to level the field, but I expected more insight into this from some of the TPU vets, tbh (I don't mind it, btw; AMD needs all the help it can get anyway).


----------



## lilhasselhoffer (Aug 31, 2015)

TheMailMan78 said:


> WALL OF TEXT ACTIVATE!
> 
> I love reading your posts lilhasselhoffer, but man, you gotta learn how to articulate your points into torpedoes and not a long spray of .22 caliber rat shot.
> 
> ...



Fair.  My only response is that too often not explaining yourself makes you look like an ass.

More than once I've been guilty of that...sigh....


----------



## ShurikN (Aug 31, 2015)

GhostRyder said:


> Whether or not this becomes a big deal has yet to be seen at least from a more than one game standpoint.  I believe more information is needed on the subject from both sides before we can make our final decisions, but so far this is just another bit of "Misinformation" that has been put out from their side which seems to be happening a lot recently (Just like how there was a "Misunderstanding" about the GTX 970).
> 
> Either way, its definitely nothing we have not seen before as far as any of the sides have made up their own shares of lies over the last few years.


That's a lot of "misunderstanding" from NV in a short amount of time...
When AMD does it it's the fucking guillotine for em. When NV does it, it's a misunderstanding.


----------



## TheMailMan78 (Aug 31, 2015)

lilhasselhoffer said:


> Fair.  My only response is that too often not explaining yourself makes you look like an ass.
> 
> More than once I've been guilty of that...sigh....


Me looking like an ass to people on the internet is the least of my concerns. I make my point and bolt. Either you get it or you don't. That's not directed at you, by the way. I'm just saying when you respond you don't HAVE to explain yourself unless people ask you to elaborate. FYI, I read your posts, but I can tell you that most people do not. Rule of thumb in marketing is most people don't read past the first paragraph unless it pertains to their wellbeing (money or entertainment).

That's why I tell people to hit and run when you "debate" on the interwebz. You stick longer in people's heads. My degree is in illustration. However, I have worked in marketing/apparel for a LONG time. Why do you think I troll so well? It's all about the delivery. You'll either love me or hate me but, YOU WILL READ what I say.


----------



## Loosenut (Aug 31, 2015)

lilhasselhoffer said:


> Fair.  My only response is that too often not explaining yourself makes you look like an ass.
> 
> More than once I've been guilty of that...sigh....



Damned if you do, damned if you don't...


----------



## 64K (Aug 31, 2015)

ShurikN said:


> That's a lot of "misunderstanding" from NV in a short amount of time...
> When AMD does it it's the fucking guillotine for em. When NV does it, it's a misunderstanding.



AFAIK Nvidia is still dealing with a class-action suit over the GTX 970 in California. I haven't seen any recent news on it, but you never know with juries. It could be an expensive lesson for them.


----------



## Fluffmeister (Aug 31, 2015)

Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.

Are AMD seeing bigger gains? Definitely, but then they are coming from further back because their DX11 performance is much worse compared to the meanies at nV.

It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.


----------



## geon2k2 (Aug 31, 2015)

Ikaruga said:


> Why are people surprised by this? DirectX 12 (and Vulkan too) both have AMD roots in them (Microsoft has two consoles on the market with AMD GPUs).



It's about time for the console deal "with no profit in it" to pay off.

It's on NV that they are out of it. Next time they should get more involved in the gaming ecosystem if they want to have a say in its development, even if it doesn't bring them the big bucks immediately.


----------



## rtwjunkie (Aug 31, 2015)

Fluffmeister said:


> Seems like a bit of a storm in a teacup to me; this is based on one dev's engine after all, the same dev AMD used to pimp Mantle.
> 
> Are AMD seeing bigger gains? Definitely, but then they are coming from further back because their DX11 performance is much worse compared to the meanies at nV.
> 
> It would be nice to at least *have a baseline of 5 or so titles using different engines before jumping to any real conclusions*.


 
What? Rational thought?!  Thank-you!!


----------



## Xzibit (Aug 31, 2015)

Ikaruga said:


> *There is no misinformation at all*; most of the DX12 features will be supported in software on most of the cards. There is no GPU on the market with 100% top-tier DX12 support (and I'm not sure the next generation will have one, but maybe). This is nothing but a very well directed marketing campaign to level the field, but I expected more insight into this from some of the TPU vets tbh (I don't mind it btw, AMD needs all the help it can get anyway).



There are bigger implications if either side is up to mischief.

If the initial game engine is being developed under AMD GCN with async compute for consoles, then we PC gamers are getting even more screwed.

We are already getting bad ports: non-optimized, downgraded from improved API paths, GameWorks middleware, driver-side emulation, developers who don't care = the PC version of Batman: Arkham Knight.


----------



## GreiverBlade (Aug 31, 2015)

oohhh time to go back to the Reds ... not that i am unhappy with my GTX 980 ... but if i find a good deal on a Fury/Fury X/Fury Nano i would gladly re-switch camp ... (it's not like i care what GPU is in my rig tho ... )


----------



## mastrdrver (Aug 31, 2015)

The reason async compute is important is that it has to do with why some people get nauseated when using VR.

David Kanter says it needs to stay under a 20 ms response time; John Carmack agrees.

Quotes from some articles linked on the Anandtech forum:


> David Kanter:
> Asynchronous shading is an incredibly powerful feature that increases performance by more efficiently using the GPU hardware while simultaneously reducing latency. In the context of VR, it is *absolutely essential* to reducing motion-to-photon latency, and is also beneficial for conventional rendering..........To avoid simulation sickness in the general population, the motion-to-photon latency should *never exceed* 20 milliseconds (ms).
> 
> John Carmack:
> 20 milliseconds or less will provide the *minimum level* of latency deemed acceptable.



In other words, if you want VR without motion sickness, then AMD's LiquidVR is currently the only way. If nVidia does not add a quicker way to do async compute in Pascal (whether in hardware or software), it's going to hurt them when it comes to VR.
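For a sense of scale, the 20 ms ceiling can be turned into a rough frame budget. The refresh rate and per-stage costs below are illustrative assumptions for a back-of-the-envelope check, not measurements of any real headset:

```python
# Back-of-the-envelope motion-to-photon latency for VR.
# All stage costs below are assumptions for illustration only.
REFRESH_HZ = 90                  # typical HMD refresh rate (assumed)
frame_ms = 1000.0 / REFRESH_HZ   # ~11.1 ms between displayed frames

sensor_ms = 2.0                  # head-tracking sensor read (assumed)
render_ms = frame_ms             # GPU takes one full refresh to render
scanout_ms = frame_ms / 2        # average wait until the panel scans out

total_ms = sensor_ms + render_ms + scanout_ms
print(round(total_ms, 1))        # -> 18.7, just inside the 20 ms ceiling
```

Even under these generous assumptions there is barely 1.5 ms of slack, so any stall that pushes rendering past one refresh interval blows straight through the 20 ms ceiling; that is the failure mode async compute (and tricks built on it, like asynchronous timewarp) is meant to avoid.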


----------



## Random Murderer (Aug 31, 2015)

Wow, this thread has been a fun read. Not because of the blatant fanboyism, or the supremely amusing yet subtle trolling, but because everybody seems to be missing the point here.
Both sides have lied, neither side is innocent, this isn't a win for AMD nor a loss for NV, but most importantly (and I'm putting this in all caps for emphasis, not because I'm yelling) :
NOBODY BUYS A GPU THINKING "THIS IS GOING TO BE THE BEST CARD IN A YEAR." You buy for the now and _hope_ that it'll hold up for a year or two (or longer, depending on your build cycles). _Nobody_ can accurately predict what the tech front holds in the next few years. Sure, we all have an idea, but even the engineers working on the next two or three generations of hardware at this very moment cannot say for certain what's going to happen between now and release, or how a particular piece of hardware will perform in a year's time, whether the reason is drivers, software, APIs, power restrictions (please Intel, give up the 130W TDP limit on your "Extreme" chips and give us a _proper_ Extreme Edition again!), etc., or something completely unforeseeable.

TL;DR: Whether NV lied about its DX12 support on Maxwell is a moot point; the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on par with the top-tier cards of the future that _will_ have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape _might_ look once DX12 is widely adopted.

And now, before I get flamed to death by all the fanboys, RM OUT!


----------



## Roph (Aug 31, 2015)

They should just leave the benchmark trying to use async compute whenever the card in the system claims to support it.

It's Nvidia's fault, not Oxide's.


----------



## semantics (Aug 31, 2015)

What, you mean the game engine that started off as a Mantle tech demo runs better on AMD cards than Nvidia's? Who would have thunk it?


----------



## ensabrenoir (Aug 31, 2015)

..........._* THOSE FIENDS!!!!!!!!!!!!!*_    Anyone running a 980Ti or Titan X should immediately sell me their card for next to nothing and go buy one from AMD.........that'll show 'em. But seriously...............by the time this becomes relevant to gaming.......a whole new architecture will be out......so while this is kinda dastardly...............it's massively moot


wow by the time i typed this.....Random Murderer said it a lot better.


----------



## NC37 (Aug 31, 2015)

AMD may be a bunch of deceptive scumbags when it comes to their paper launches, but nVidia is still the king of dirty trick scumbaggery. Ha


----------



## rtwjunkie (Aug 31, 2015)

ensabrenoir said:


> ..........._* THOSE FIENDS!!!!!!!!!!!!!*_    Anyone running a 980Ti or Titan X should immediately sell me their card for next to nothing and go buy one from AMD.........that'll show 'em. But seriously...............by the time this becomes relevant to gaming.......a whole new architecture will be out......so while this is kinda dastardly...............it's massively moot


 
Love your sarcasm!  

Seriously though, if this does become a real issue: while a whole new architecture will be out by then, there will be a huge number of people still on Maxwell, since it has sold so well.  So, not really moot either, though I see your thought process.


----------



## Sasqui (Aug 31, 2015)

TRWOV said:


> Maybe that explains Ars technica's results?



Wow.


----------



## HD64G (Aug 31, 2015)

IF the Oxide dev team is correct about nVidia not having the async compute feature embedded in their hardware, and just letting everyone think they do by exposing it at the driver level only, we are talking about fraud here. AMD has made many mistakes at the PR level by trying to hide disadvantages in raw performance. But the 970 VRAM fiasco and this (if true) should make anyone who wants to see advances in PC gaming, or computing in general, MAD at the green team's practices. On the other side, we need to praise AMD for all the features they have brought us over the years: native 64-bit support on CPUs, 4-core CPUs on a die, tessellation, DX 10.1, HBM VRAM, GCN, APUs with some graphics power at a reasonable price, etc. They sometimes fall short on performance due to a low R&D budget, but they try to advance the PC world and they bring forward open-source features too; FreeSync, OpenCL and Mantle are the recent ones.

To sum up, if I were in a position to buy a new GPU now, I would choose a GCN one without second thoughts, just to be sure I would enjoy the most features and the higher performance level of DX12 games over the next 2-3 years.


----------



## the54thvoid (Aug 31, 2015)

KarymidoN said:


> Today in DX12 = AMD > Nvidia.
> When DX12 gets popular and has a lot of games = Nvidia (new generation) > AMD.
> 
> Nvidia's marketing department is so stupid, they said Maxwell will be compatible with all DX12 feature levels... haha, joke's on you, Nvidia fanboy.



I can't seem to lower my IQ far enough to understand anything you have said.  If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.


----------



## ShurikN (Aug 31, 2015)

Random Murderer said:


> Whether NV lied about its DX12 support on Maxwell is a moot point, the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's current top-end cards still perform on-par with the top-tier cards of the future that _will _have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape _might_ look once DX12 is widely-adopted.



And because of that way of thinking, NV can get away with this type of bullshit over and over and over again:


> During the course of its developer correspondence with NVIDIA to try and fix this issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that *the NVIDIA driver bluffs its support to apps.* *NVIDIA instead started pressuring Oxide to remove parts of its code *that use async compute altogether, it alleges.


----------



## N3M3515 (Aug 31, 2015)

the54thvoid said:


> I can't seem to lower my IQ far enough to understand anything you have said.  If I could I would perhaps converse with the slow growing slime that forms around my bath plug. Until then I'll just stick to communicating with slugs, which after all, are still an evolutionary jump over your synaptic development.



Wow.......what's up with the hostility man. Is it really you? Doesn't seem like it.

I wonder why Humansmoke hasn't commented on this news....


----------



## rvalencia (Aug 31, 2015)

FrustratedGarrett said:


> Here's the original article at WCCFTECH: http://wccftech.com/oxide-games-dev-replies-ashes-singularity-controversy/
> 
> There are two points here to look at:  first, Oxide is saying that Nvida has had access to the source code of the game for over a year and that they've been getting updates to the source code on the same day as AMD and Intel.
> 
> ...


Here is the original post from Oxide: http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995


----------



## Frick (Aug 31, 2015)

Cherry picked benchmarks are the only benchmarks from a marketing perspective. This is true.

Me I'm sailing the seas of cheese, not caring and obviously being better off by it.


----------



## Xzibit (Aug 31, 2015)

How long before AMD puts a Free-aSync logo on the box?

How long before people start saying AMD's inclusion of async compute was a stupid move, that they should have charged more, and that Nvidia is doing the right thing by making you upgrade for it, if Pascal incorporates it?


----------



## HumanSmoke (Aug 31, 2015)

N3M3515 said:


> Wow.......what's up with the hostility man. Is it really you? Doesn't seem like it.
> I wonder why Humansmoke hasn't commented on this news....


1. Because the whole thing is based upon a demo of an unreleased game which may - or may not - have any significant impact on PC gaming.
2. Because, as others have said, the time to start sweating is when DX12 games actually arrive.
3. As I've said in earlier posts, there are going to be instances where game engines favour one vendor or the other - it has always been the case, and it will very likely continue to be so. Nitrous is built for GCN. No real surprises, since Oxide's Star Swarm was the original Mantle demo poster child. AMD gets its licks in early. Smart marketing move. It will be interesting how they react when they are at the disadvantage, and what games draw what mix of the hardware and software features available to DX12.
4. With the previous point in mind, *Unreal launched UE 4.9 yesterday*. The engine supports a number of features that AMD has had problems with (GameWorks), or has architectural/driver issues with. 4.9 I believe has VXGI support, and ray tracing. My guess is that the same people screaming "Nvidia SUCK IT!!!!!" will be the same people crying foul when a game emerges that leverages any of these graphical effects.....of course, Unreal Engine 4 might be inconsequential WRT AAA titles, but I very much doubt it.

PC Gaming benchmarks and performance - vendors win some, lose some. Wash.Rinse.Repeat. I just hope the knee-jerk comments keep on coming - I just love bookmarking (and screencapping for those who retroactively rewrite history) for future reference.
The UE4.9 notes are pretty extensive, so here's an editor shot showing the VXGI support.


----------



## the54thvoid (Aug 31, 2015)

N3M3515 said:


> Wow.......what's up with the hostility man. Is it really you? Doesn't seem like it.



Because I tire, to my old bones, of idiots spouting ill-thought-out shite.  This bit:



> haha, jokes on you Nvidia fanboy



is where my ire is focused, because my post isn't in any way Nvidia-centric.  I believe I say 'kudos' to AMD for bringing hardware-level support to the fore.  This forum is all too often littered with idiotic and childish schoolyard remarks that would otherwise be met with a smack on the chops.  I'm a pleasant enough chap but I'm no pacifist, and the veil of internet anonymity is just one area where cowards love to hide.

So while you decry my hostility - which was in fact a simple retort of intellectual deficit (aimed at myself as well, having the IQ of a slug) - _why are you not attacking the tone of the post_ from the fud that laughs in my face and calls me a fanboy?  I'm not turning the other cheek if someone intentionally offends me.

EDIT: where I come from, my post wasn't even a tickle near hostility.


----------



## qubit (Aug 31, 2015)

A while back, when it was being said that Kepler and Maxwell were DX12 compliant, I said no way, only partial at most, and that we should wait for Pascal for full compliance, since these GPUs precede the DX12 standard and hence cannot possibly fully support it. Nice to see this article prove me right on this one.

It's inevitably the case that the most significant feature of a new graphics API will require new hardware to go with it and that's what we have here.

It also doesn't surprise me that NVIDIA would pressure a dev to remove problematic code from a DX12 benchmark in order not to be shown up. 

What should really happen is that the benchmark points out what isn't supported when run on pre-Pascal GPUs (and pre-Fury ones for AMD), but that's not happening, is it? It should then run that part of the benchmark on AMD Fury hardware, since it does support it. _However, that part of the benchmark is simply not there at all, and that's the scandal._


----------



## Frick (Aug 31, 2015)

the54thvoid said:


> EDIT: where I come from, my post wasn't even a tickle near hostility.



Incest-spawned shitface.


----------



## the54thvoid (Aug 31, 2015)

Frick said:


> Incest-spawned shitface.



Love you too!  But I was borne of an ill conceived ICI and brewery conglomerate.  I'm a turpentine spawned expletive.


----------



## Aquinus (Sep 1, 2015)

All this tells me is that GCN has untapped resources that DX12 (in this case) can take advantage of. Probably a great example of how engines and rendering libraries in the past did a piss-poor job of utilizing resources properly. nVidia caught on fast and started throwing away the parts of the GPU that games weren't using, while crippling things like DP performance, whereas GCN has always been biased toward compute-heavy workloads. If anything, this is just another example of how DX12 may well utilize resources better than DX11 and earlier did. The real question is: as more DX12 games and benchmarks start cropping up, how many of them will show similar results?


----------



## qubit (Sep 1, 2015)

Aquinus said:


> All this tells me is that GCN has untapped resources that DX12 (in this case,) could take advantage of. Probably a great example of how engines and rendering libraries in the past did a piss poor job of utilizing resources properly. nVidia caught on fast and started throwing away parts of the GPU that games weren't needing while crippling things like DP performance where GCN has always been biased toward compute-heavy workloads. If anything, this is just another example of how DX12 very well might utilize resources better than DX11 and earlier have. The real question is as more DX12 games and benchmarks start cropping up, how many of them will show similar results?


What I don't want to see are games purporting to be DX12 catering to the lowest common denominator, i.e. graphics cards with only partial DX12 support - and there are an awful lot of those about, from both brands.

If DX12 with the latest games and full GPU DX12 features (e.g. Pascal) doesn't have a real wow factor compelling users to upgrade, then this becomes a distinct possibility.


----------



## Fluffmeister (Sep 1, 2015)

After all this fuss let's hope *Ashes of the Singularity* isn't shit.


----------



## MonteCristo (Sep 1, 2015)

RejZoR said:


> It's again a SINGLE game. Until I see more (that aren't exclusive to either camp like this one is to AMD), then I'll accept the info...



So this is what you understand from the article? That "Ashes" is exclusive to AMD? Please take off the green glasses!


----------



## okidna (Sep 1, 2015)

I think the title is a little bit misleading. "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for Ashes of the Singularity" would be better 

And I think it's not surprising that AMD have the upper hand in async compute. They just have more "muscle" to throw at it, especially if the game devs spam the GPU with a lot of graphics and compute tasks.

As far as I understand, NVIDIA GPUs will still do async compute, BUT they are limited to 31 command queues to be effective (a.k.a. not overloading their scheduler), while AMD can go up to 64 command queues and still be effective.

NVIDIA = 1 graphics engine, 1 shader engine, with a 32-deep command queue (total of 32 queues, 31 usable for graphics/compute mode)
AMD = 1 graphics engine, 8 shader engines (they coined them ACEs, or Async Compute Engines), each with an 8-deep command queue (total of 64 queues)

So if you spam a lot of graphics and compute commands (to the GPU, in a non-sequential way) at an NVIDIA GPU, it will end up overloading its scheduler, and it will then do a lot of context switching (from graphics commands to compute commands and vice versa). This results in increased latency, hence the increased processing time. This is what happened in this specific game demo (Ashes of the Singularity, AoS): they use our GPU(s) to process the graphics commands (to render all of those little spaceship thingies) AND also to process the compute commands (the AI for every single spaceship thingy), and the more spaceship thingies, the more NVIDIA GPUs will suffer.

And you're all thinking: "AMD can only win in DX12, async compute rulezz!" Well, the fact is we don't know yet. We don't know how most game devs will deal with the graphics and compute sides of their games: whether they think it would be wise to offload most compute tasks to our GPUs (freeing CPU resources, a.k.a. removing most of the CPU bottleneck), or just let the CPU do the compute tasks (less hassle in coding and especially in synchronizing).
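The scheduling argument above can be sketched as a toy cost model. The queue limits, work sizes and switch penalty are illustrative assumptions, not measured hardware behavior:

```python
# Toy model of the queue-limit argument: command lists that fit within the
# hardware's concurrent-queue limit are assumed to run fully in parallel;
# anything beyond the limit serializes and pays a context-switch penalty.
# All numbers are made up for illustration.
def drain_time_ms(num_lists, queue_limit, work_ms=1.0, switch_ms=0.1):
    """Estimated time to drain equally sized mixed graphics/compute lists."""
    if num_lists <= queue_limit:
        return work_ms                       # everything runs concurrently
    overflow = num_lists - queue_limit
    # each overflowing list waits for a context switch, then runs serially
    return work_ms + overflow * (switch_ms + work_ms)

# 48 mixed command lists against a 64-queue limit vs a 32-queue limit:
print(drain_time_ms(48, 64))   # -> 1.0  (all concurrent)
print(drain_time_ms(48, 32))   # -> ~18.6 (16 lists overflow and serialize)
```

The point of the sketch is only the shape of the curve: below the limit the cost is flat, above it the cost grows with every extra command list, which is the behavior being attributed to Maxwell here.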

Oh and UE4 wrote in their documentation for async compute implementation in their engine : 



> As more APIs expose the hardware feature, we would like to make the system more cross-platform. Features that make use of AsyncCompute should always be able to run without it (console variable / define), to run on other platforms and for easier debugging and profiling. *AsyncCompute should be used with caution as it can cause more unpredictable performance and requires more coding effort for synchronization.*



From here : https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html


----------



## lilunxm12 (Sep 1, 2015)

okidna said:


> I think the title is a little bit misleading. "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for Ashes of the Singularity" would be better
> 
> And I think it's not surprising that AMD have the upper hand in async compute. They just have more "muscle" to throw at it, especially if the game devs spam the GPU with a lot of graphics and compute tasks.
> 
> ...



I don't think it's the command queues causing the problem. On AMD's side, only Hawaii, Tonga and Fiji have 8 ACEs. Older GCN parts have 1 or 2 ACEs. If a huge number of command queues were causing the problem, then not only NVIDIA's cards but also those GCN cards with fewer ACEs would show a performance downgrade in DX12. But the benchmark results do not support this argument at all.


----------



## okidna (Sep 1, 2015)

lilunxm12 said:


> I don't think it's the command queues causing the problem. On AMD's side, only Hawaii, Tonga and Fiji have 8 ACEs. Older GCN parts have 1 or 2 ACEs. If a huge number of command queues were causing the problem, then not only NVIDIA's cards but also those GCN cards with fewer ACEs would show a performance downgrade in DX12. But the benchmark results do not support this argument at all.



I can't find any AoS benchmark results with older GCN cards like Tahiti, Pitcairn, etc. 
It would be much appreciated if you could provide one for reading purposes, thank you very much.


----------



## lilunxm12 (Sep 1, 2015)

okidna said:


> I can't find any AoS benchmark results with older GCN cards like Tahiti, Pitcairn, etc.
> It would be much appreciated if you could provide one for reading purposes, thank you very much.


http://www.computerbase.de/2015-08/...of-the-singularity-unterschiede-amd-nvidia/2/


----------



## R-T-B (Sep 1, 2015)

DeathtoGnomes said:


> This. We all know Nvidia has cheated at benchmarks in the past.
> 
> so its like this:
> 
> ...



This is a more accurate summary than I'd like to admit.  I always felt AMD would be more partial to eating paste than popcorn, however.


----------



## rtwjunkie (Sep 1, 2015)

MonteCristo said:


> So this is what you understand from the article? That "Ashes" is exclusive to AMD? Please take off the green glasses!



See, now this is the danger in making assumptions about people's hardware preferences when you know nothing about them.

What you don't know is that @RejZoR has been a long time AMD supporter, and only recently got a 980 out of frustration.


----------



## R-T-B (Sep 1, 2015)

qubit said:


> A while back when it was being said that Kepler and Maxwell were DX12 compliant



They are.  Heck, frickin' Fermi is COMPLIANT.

Compliant being the key word.  They aren't FULLY SUPPORTED.  Careful with your words there, man. 



rtwjunkie said:


> See, now this is the danger in making assumptions about people's hardware preferences when you know nothing about them.
> 
> What you don't know is that @RejZoR has been a long time AMD supporter, and only recently got a 980 out of frustration.



Indeed.  He did it shortly after the "I'll eat my shoes if AMD makes the R9 390X GCN 1.1" debacle.  He was like our AMD poster boy, only he's proven he'll go green if that's what it takes to get a good game.  I would not target him for assumptions, if I were you.


----------



## btarunr (Sep 1, 2015)

Fluffmeister said:


> After all this fuss let's hope *Ashes of the Singularity* isn't shit.



Oh it is shit. But that's not the point.



okidna said:


> I think the title is a little bit misleading. "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for Ashes of Singularity" would be better




Async compute is a DirectX feature, not an AotS feature.


----------



## Ikaruga (Sep 1, 2015)

geon2k2 said:


> Its about time for the console deal "with no profit in it" to pay back.
> 
> Its on NV that they are out of it. Next time they should get more involved in the gaming ecosystem if they want to have a word in its development, even if it doesn't bring them the big bucks immediately.


They tried with the last gen and Sony chose them (sadly they couldn't wait a few months for the G80). This gen would have been way more expensive without APUs, which Nvidia lacks, so there was no real price war at all; AMD simply made a bad deal imo.



Xzibit said:


> There bigger implications if either side is up to mischief.
> 
> If the initial game engine is being developed under AMD GNC with Async Compute for consoles then we PC gamers are getting even more screwed.
> 
> We are already getting a bad port, non-optimize, down-graded from improved API paths. GameWorks middle-ware. driver side emulation, developer who don't care = PC version of Batman Arkham Knight


Please read my two earlier posts, I suggested a slightly different conclusion with those, but to reply to you: *We are already screwed because the consoles (the main target platforms for developers) have Jaguar cores*. That's why it doesn't really matter what things will be supported in software or hardware, because we will have so many free CPU cycles on the PC that some driver magic won't matter much. I bet even Fermi cards will do just "fine" under DX12 (and yes, Fermi DX12 support is probably coming by the end of this year, where nvidia will simply implement the missing features in software, just as AMD will do with their older architectures).


----------



## xenocide (Sep 1, 2015)

Is anyone surprised AMD's architecture does better at a compute-centric task?  They have been championing compute for the better part of the past 5 years, while Nvidia was shedding it until the technology was actually relevant.  I think this is a good indicator that Pascal is going to usher in the return of a lot of compute functionality.


----------



## RejZoR (Sep 1, 2015)

MonteCristo said:


> So this is what you understand from the article? That "Ashes" is exclusive to AMD? Please take off the green glasses!



Ahahaha, did this dude just flag me as an NVIDIA fanboy? Kiddo, you are hilarious. For the last 10 years, I've owned nothing but ATi/AMD graphics cards. This is the first NVIDIA card after all those years. Nice try. XD

And I never said it's "exclusive". All I said is that this very specific game has been developed in cooperation with AMD basically since day one, using Mantle. And when that fell through, it became DX12. No one says Project Cars is NVIDIA exclusive, but we all know it has been developed with NVIDIA basically since day one and, surprise surprise, it runs far better on NVIDIA than on any AMD card. Wanna call someone an AMD fanboy for that one? One single game doesn't reflect performance in ALL games.


----------



## KainXS (Sep 1, 2015)

Not really surprised, but it's starting to look like Nvidia has not made any significant strides in async compute since Kepler (async with compute operations only, not with compute + graphics).


----------



## the54thvoid (Sep 1, 2015)

Here's a balanced opinion.

AMD have focused on Mantle to get better hardware-level implementation to suit their GCN 1.1+ architecture.  From this they have lit a fire under MS and got DX12 to be closer to the metal.  This focus has left Nvidia to keep on top of things at the DX11 level. 
Following Kepler, Nvidia have focused on efficiency and performance, and Maxwell has brought them that in spades with DX11.  Nvidia have effectively taken the opposite gamble to AMD: Nvidia has stuck with a DX11 focus and AMD has forged on toward DX12. 

So far so neutral.

They have both gambled and they will both win and lose. AMD have gambled that DX12 adoption will be rapid and that this will allow their GCN 1.1+ to provide a massive performance increase, quite likely surpassing Maxwell architecture designs.  Possibly even, in best-case scenarios, with rebranded _Hawaii_ matching top-level Maxwell (bravo AMD).  Nvidia have likely thought that DX12 implementation will not occur rapidly enough until 2016, so they have settled for Maxwell's DX11 performance efficiency.  Nvidia for their part have probably 'fiddled' to pretend they have the most awesome DX12 support when in reality it's a driver thing (as AoS apparently shows).

So, if DX12 implementation is slow, Nvidia's gamble pays off.  If DX12 uptake is rapid and occurs before Pascal, Nvidia lose (and will most definitely cheat, with massive developer pressure and incentives).  If DX12 comes through in bits and bobs, it will come down to what games you play (as always).  However, as a gamer, I'm not upgrading to W10 until MS patches the 'big brother' updating mechanisms I keep reading about.

TL;DR = Like everyone has been saying: despite AMD's GCN advantage, without a slew of top AAA titles, the hardware is irrelevant.  If DX11 games are still being pumped out, GCN won't help.  If DX12 comes earlier, AMD win.


----------



## ThorAxe (Sep 1, 2015)

From what I have read, Maxwell is capable of async compute (and async shaders), and is actually faster when it can stay within its work-order limit (1+31 queues).

The GTX 980 Ti is twice as fast as the Fury X, but only when it is under 31 simultaneous command lists.

The GTX 980 Ti performed roughly equal to the Fury X at up to 128 command lists.

This is why we need to wait for more games to be released before we jump to conclusions.


----------



## rvalencia (Sep 1, 2015)

_


ThorAxe said:



			From what I have read Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues).

The GTX 980 Ti is twice as the Fury X but only when it is under 31 simultaneous command lists.

The GTX 980 Ti performed roughly equal to the Fury X at up to 128 command lists.

This is why we need to wait for more games to be released before we jump to conclusions.
		
Click to expand...

That Beyond3D benchmark is pretty simple and designed to be in-order.
_
*Maxwellv2 is not capable of concurrent async + rendering without incurring context penalties and it's under this context that Oxdie made it's remarks.*
_
AMD's ACE units are designed to run concurrently with rendering without context penalties and includes out-of-order features.
_
_https://forum.beyond3d.com/threads/dx12-performance-thread.57188/page-10_
_
From sebbbi:
The latency doesn't matter if you are using GPU compute (including async) for rendering. You should not copy the results back to CPU or wait for the GPU on CPU side. Discrete GPUs are far away from the CPU. You should not expect to see low latency. Discrete GPUs are not good for tightly interleaved mixed CPU->GPU->CPU work.

To see realistic results, you should benchmark async compute in rendering tasks. For example render a shadow map while you run a tiled lighting compute shader concurrently (for the previous frame). Output the result to display instread of waiting compute to finish on CPU. For result timing, use GPU timestamps, do not use a CPU timer. CPU side timing of GPU results in lots of noise and even false results because of driver related buffering.
---------------------

AMD APU would be king for tightly interleaved mixed CPU->GPU->CPU work e.g. PS4's APU was designed for this kind of work.

PS4 sports the same 8 ACE units as Tonga, Hawaii and Fury.




Ikaruga said:



			They tried with the last gen and Sony choose them (sadly they couldn't wait a few months for the g80), this gen would have been way much more expensive without APUs, what Nvidia lacks, so there was no real price-war at all, AMD simply made a bad deal imo.

Please read my two earlier posts, I suggested a slightly different conclusion with those, but to reply to you; We are already screwed because the consoles (the main target platforms for developers) have Jaguar cores. That's why it doesn't really matter what things will be supported from software or hardware, because we will have so many free CPU cycles on the PC, some driver magic won't matter much. I bet even Fermi cards will do just "fine" under dx12 (and yes, Fermi dx12 support is coming probably the end of this year where nvidia will simply implement the missing features from software just how *AMD will do with their older architectures*).
		
Click to expand...

XBO is the baseline DirectX12 GPU and it has two ACE units with 8 queues per unit as per Radeon HD 7790 (GCN 1.1).

The older GCN 1.0 still has two ACE units with 2 queues per unit but it's less capable than GCN 1.1.

GCN 1.0 parts such as the 7970/R9 280X are still better than Fermi and Kepler in the concurrent async + render category.



RejZoR said:



			Ahahaha, did this dude just flag me as NVIDIA fanboy? Kiddo, you are hilarious. For last 10 years, I've owned nothing but ATi/AMD graphic cards. This is the first NVIDIA after all those years. Nice try. XD

And I never said it's "exclusive". All I said is that this very specific game has been developed in cooperation with AMD basically since day one, using Mantle. And when that dropped in the water, it's DX12. No one says Project Cars is NVIDIA exclusive, but we all know it has been developed with NVIDIA since day one basically and, surprise surprise, it runs far better on NVIDIA than on any AMD card. Wanna call someone an AMD fanboy for that one? A single game doesn't reflect performance in ALL games.
		
Click to expand...

With Project Cars, AMD's lower DX11 draw call limit is the problem.

Read SMS PC Lead's comment on this issue from
http://forums.guru3d.com/showpost.php?p=5116716&postcount=901

For our mix of DX11 API calls, the API call consumption rate of the AMD driver is the bottleneck. 

In Project Cars the range of draw calls per frame varies from around 5-6000 with everything at low up-to 12-13000 with everything at Ultra. Depending on the single threaded performance of your CPU there will be a limit of the amount of draw calls that can be consumed and as I mentioned above, once that is exceeded GPU usage starts to reduce. On AMD/Windows 10 this threshold is much higher which is why you can run with higher settings without FPS loss. 

I also mentioned about 'gaps' in the GPU timeline caused by not being able to feed the GPU fast enough - these gaps are why increasing resolution (like to 4k in the Anandtech analysis) make for a better comparison between GPU vendors... In 4k, the GPU is being given more work to do and either the gaps get filled by the extra work and are smaller.. or the extra work means the GPU is now always running behind the CPU submission rate.

So, on my i7-5960k@3.0ghz the NVIDIA (Titan X) driver can consume around 11,000 draw-calls with our DX11 API call mix - the same Windows 7 System with a 290x and the AMD driver is CPU limited at around 7000 draw-calls : On Windows 10 AMD is somewhere around 8500 draw-calls before the limit is reached (I can't be exact since my Windows 10 box runs on a 3.5ghz 6Core i7)

In Patch 2.5 (next week) I did a pass to reduce small draw-calls when using the Ultra settings, as a concession to help driver thread limitations. It gains around 8% for NVIDIA and about 15% (minimum) for AMD. 
...
*For Project Cars the 1040 driver is easily the fastest under Windows 10 at the moment - but my focus at the moment is on the fairly large engineering task of implementing DX12 support...*


----------------------------------------


Project Cars with DX12 is coming.


----------



## ThorAxe (Sep 1, 2015)

rvalencia said:


> Maxwellv2 is not capable of concurrent async + rendering without incurring context penalties.
> 
> https://forum.beyond3d.com/threads/dx12-performance-thread.57188/page-10
>
> ...



https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


----------



## buggalugs (Sep 1, 2015)

But the thing is, DX12 is going to take off faster than any other iteration. With the big performance gains of DX12 and free Windows 10, DX12 games are going to be everywhere soon. Pascal is at least 6 months away, after Christmas, when no doubt some big DX12 games will be released... and AMD has priority for HBM2, so this is going to hurt Nvidia.

Those people that spent top dollar on high-end Nvidia cards recently are going to be disappointed over the coming months as new games are released.

I don't understand why some people are still defending Nvidia, just like they did with the 3.5 GB debacle. Nvidia has been dishonest here; if the game developer has gone public, Nvidia must have been assholes, trying to force him to disable the function. They wanted to keep the consumer in the dark.

No wonder Nvidia keeps pulling stuff like this: their fanboys will always defend them. Or maybe Nvidia pays a bunch of people to troll the forums and defend them; it wouldn't surprise me. haha


----------



## rtwjunkie (Sep 1, 2015)

buggalugs said:


> But the thing is, DX12 is going to take off faster than any other iteration. WIth the big performance gains of DX12, and free windows 10,  DX 12 games are going to be everywhere soon. Pascal is going to be at least 6 months away, after Christmas when no doubt some big DX12 games will be released. .......and AMD has priority for HBM2 so this is going to hurt nvidia.
> 
> Those people that spent top dollar on highend Nvidia cards recently are going to be disappointed over the coming months as new games are released.
> 
> ...



What big games will I be missing out on in the next 6 months? All the big titles announced so far into next year are DX11.  I really don't see the crazy fast DX 12 adoption rate yet.


----------



## rvalencia (Sep 1, 2015)

ThorAxe said:


> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


Against that small amateur Beyond3D latency benchmark, refer to https://www.reddit.com/r/nvidia/comments/3i6dks/maxwell_cant_do_vr_well_an_issue_of_latency/ for latency numbers done by professionals.


----------



## Xzibit (Sep 1, 2015)

rvalencia said:


> Against that small amateur beyond3d latency benchmark. refer to https://www.reddit.com/r/nvidia/comments/3i6dks/maxwell_cant_do_vr_well_an_issue_of_latency/
> latency numbers done by the professionals.



Not only that, the person who put the story up isn't convinced.



> *Edit - Some additional info*
> This program is created by an amateur developer (this is literally his first DX12 program) and there is _not_ consensus in the thread. In fact, a post points out that due to the workload (1 large enqueue operation) the GCN benches are actually running "serial" too (which could explain the strange ~40-50ms overhead on GCN for pure compute). *So who knows if v2 of this test is really a good async compute test?*


----------



## EarthDog (Sep 1, 2015)

buggalugs said:


> But the thing is, DX12 is going to take off faster than any other iteration. WIth the big performance gains of DX12, and free windows 10,  DX 12 games are going to be everywhere soon. Pascal is going to be at least 6 months away, after Christmas when no doubt some big DX12 games will be released. .......and AMD has priority for HBM2 so this is going to hurt nvidia.
> 
> Those people that spent top dollar on highend Nvidia cards recently are going to be disappointed over the coming months as new games are released.
> 
> ...


You have seen one title perform well in DX12. Outside from that, what other DX12 games are said to be here in the next few months? Perhaps I missed something?

Also, can you link to something that states AMD has priority for HBM2?


----------



## HumanSmoke (Sep 1, 2015)

buggalugs said:


> But the thing is, DX12 is going to take off faster than any other iteration. WIth the big performance gains of DX12, and free windows 10,  DX 12 games are going to be everywhere soon.


Doubtful. Game developers aren't that energetic. The announced list of DX12 games is actually pretty short...and not every DX12 game uses the same resources - which should be fairly obvious, any more than every DX11 game is identical in its feature set. I'm betting Gears of War Ultimate won't be another AotS, or Fable Legends for that matter


buggalugs said:


> Pascal is going to be at least 6 months away, after Christmas when no doubt some big DX12 games will be released. .......and AMD has priority for HBM2 so this is going to hurt nvidia.


So AMD are going to buy up all the HBM to piss on Nvidia's chips? So, after buying up all SK Hynix's HBM production, how much are they going to borrow to buy up all of Samsung's HBM ? AMD to do a full Nelson Bunker Hunt (substitute memory IC's for silver of course) ! buggalugs for AMD CFO.


----------



## 64K (Sep 1, 2015)

EarthDog said:


> You have seen one title perform well in DX12. Outside from that, what other DX12 games are said to be here in the next few months? Perhaps I missed something?
> 
> Also, can you link to something that states AMD has priority for HBM2?



Around mid July a lot of tech sites started reporting that AMD was rumored to have priority access to HBM2. There's a lot of articles spreading this rumor. wccftech seems to be the origin of the rumor so there's that to consider.


----------



## HumanSmoke (Sep 1, 2015)

64K said:


> wccftech seems to be the origin of the rumor so there's that to consider.


I try not to consider WTFtech in any way, shape, or form. Seems like one of those clickbait sites whose links always start with "YOU WILL NEVER BELIEVE..." The comments section seems to be where mental retardation goes to get refresher courses.


----------



## rvalencia (Sep 1, 2015)

HumanSmoke said:


> Doubtful. Game developers aren't that energetic. The announced list of DX12 games is actually pretty short...and not every DX12 game uses the same resources - which should be fairly obvious, any more than every DX11 game is identical in its feature set.* I'm betting **Gears of War Ultimate** won't be another AotS, or **Fable Legends** for that matter*


*Gears of War Ultimate* is a remaster of an existing game, while *Fable Legends* is a new DX12 game that was stated to use async shaders. With DX12, it depends on how many independent objects they throw on the screen; e.g. wide-scale destructive physics where each piece has its own individual light source would be a similar case.



HumanSmoke said:


> I try not to consider WTFtech in any way, shape, or form. Seems like one of those clickbait sites whose links always start with "YOU WILL NEVER BELIEVE..." The comments section seems to be where mental retardation goes to get refresher courses.


The only mental retardation here is yours. On behalf of the posters in the comments section who can't reply to you, I'll take you on. *You started the personality-based attacks; I'll gladly continue them.*
As posted earlier in this thread,  the original post was from Oxide i.e. read the full post from
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995


----------



## AsRock (Sep 1, 2015)

Remember they are planning to add DX12 to ARK: Survival Evolved too, and as they are claiming a 20% gain, it sounds to me like this has been tested a fair bit already.

Sure, the game needs much more than a 20% boost, that's for sure, but still.

http://steamcommunity.com/app/346110/discussions/0/594820656447032287/


----------



## the54thvoid (Sep 1, 2015)

rvalencia said:


> *Gears of War Ultimate* is a remaster from existing games while *Fable Legends* DX12 is new game and it was stated to use Async shaders. With DX12, it depends how many independent objects they throw on the screen e.g wide scale destructive physics with it's own individual light source would be similar.
> 
> 
> The only mental retardation is you. On behalf of posters in the comments section who can't reply against you.  I'll take you on.  *You started the personality based attacks, I'll gladly continue it.
> ...



Nah, WCCFTECH comments section is pretty bestial, sorry dude. Makes the worst of TPU look civilised in comparison.


----------



## cyneater (Sep 1, 2015)

Meh who cares?

Typical hype bleeding edge crap anyway...

No one seems to remember the GeForce FX and how those were not fully DX9 cards...
By the time games started using DX9, everyone had a GeForce 6xxx or 7xxx.

DX11 came out in 2010?
How many games use DX11 now? A few more, but it took a few years....

On top of that...
Hopefully Steam pulls their head out of their arse and makes SteamOS decent... and developers start developing for Linux.

As I don't like some of the features in Windows 10....


----------



## Ikaruga (Sep 1, 2015)

rvalencia said:


> _XBO is the baseline DirectX12 GPU and it has two ACE units with 8 queues per unit as per Radeon HD 7790 (GCN 1.1).
> 
> The older GCN 1.0 still has two ACE units with 2 queues per unit but it's less capable than GCN 1.1.
> 
> GCN 1.0 such as 7970/R9-280X is still better than Fermi and Kelper in concurrent Async+Render category._


I did not contradict anything in your post addressed to me, but hey, it's a great subject:

This is not how it works. We are talking about performance in games, not features or tech-demo-like engines abusing a single API feature. Think about the performance impact and usage of DX11.1 or 11.2 in games; I can't recall either ever making a huge impact in any game, and I read everything from dev documents through Beyond3D to Reddit and this forum. There are features which Intel does best, and one could write a program abusing such a feature (let's say ROV) to beat both NV and AMD, yet nobody in their right mind would think that Intel has a chance in games against the big boys. On the subject, Kepler and Fermi might indeed be slower, but don't forget that they had a wider bus, which could come in handy to keep up a bit with the younger chips. I think they won't be that bad with DX12 titles, but we will see.


----------



## Sony Xperia S (Sep 1, 2015)

cyneater said:


> DX11 came out in 2010?
> How many games now use DX 11? A few more but it took a few years....



Yeah, sure, I guess Nvidia is one of the main reasons and causes for this stupid inconvenience.

AMD came with DX10.1 back then and what happened?!
Our "pigs in the mud" Nvidia damaged its progress as well.

When they throw so much money around so that some titles run better on their hardware, why didn't they even a single time think of the option to make some progress, instead of spoiling all customers' experience?!


----------



## 64K (Sep 1, 2015)

Sony Xperia S said:


> AMD came with DX10.1 back then and what's happened ?!
> Our "pigs in the mud" nvidia damaged its progress as well.



Is this what you are referring to?

http://www.anandtech.com/show/2549/7


----------



## Ikaruga (Sep 1, 2015)

Sony Xperia S said:


> Yeah, sure, I guess nvidia is one of the main reasons and causes for this stupid inconvenience.
> 
> AMD came with DX10.1 back then and what's happened ?!
> Our "pigs in the mud" nvidia damaged its progress as well.
> ...


No offense, but do you even realize that *all you do is whine*... Evil Nvidia did this, evil Nvidia did that... Man the *!#& up, and stop blaming Nvidia for consoles ruining PC gaming for almost a decade now. Nvidia might be a business monster indeed, I can agree with that, but at least they provide great progress, and they didn't win their market-share lead or their money in the lottery; they actually provided great products to their customers. I'm one of those customers, and my only complaint about Nvidia is the high price they ask for their products; other than that, I'm very satisfied. I wish all the best to AMD, and if they have better products I will buy those, but I'm pretty happy with my Maxwell atm, it runs everything smooth and fluid, thanks.


----------



## FordGT90Concept (Sep 1, 2015)

HumanSmoke said:


> So AMD are going to buy up all the HBM to piss on Nvidia's chips? So, after buying up all SK Hynix's HBM production, how much are they going to borrow to buy up all of Samsung's HBM ? AMD to do a full Nelson Bunker Hunt (substitute memory IC's for silver of course) ! buggalugs for AMD CFO.


I don't know about that, but it does seem to confirm the theory that Fiji is AWOL because HBM chips are in very short supply until next year. AMD may have gotten async compute right, but their decision to adopt HBM now... doesn't seem like it was a good one. They should have pulled a Skylake, putting a GDDR5 + HBM memory controller on Fiji, so they could sell bulk orders of Fiji with GDDR5 and sell HBM models as available. They really shot themselves in the foot by not leaving GDDR5 as an option.

As for HBM2, I have a sneaking suspicion there will be no mass production of HBM, only mass production of HBM2. AMD is the only client for HBM, no? Unless SK Hynix can get more buyers, ramping up production doesn't make much sense. The bulk of memory orders are still DDR3, DDR3L, DDR4, and GDDR5. I wonder what SK Hynix said to AMD to get them to sign up. HBM2 may be amazing, but HBM seems like more trouble than it's worth.


----------



## Sony Xperia S (Sep 1, 2015)

Ikaruga said:


> I wish all the best to AMD, if they will have better products I will buy those



AMD have always had THE BETTER products (even though that stupid FPS metric might show otherwise), but you are too blind to appreciate it.

I am going to buy for my friends the R9 280 for 168 euros now.


----------



## Ikaruga (Sep 1, 2015)

Sony Xperia S said:


> AMD have always had THE BETTER products (even though that stupid metric FPS might show different) but you are blind to appreciate.
> 
> I am going to buy for my friends the R9 280 for 168 euros now.


OK! I'm happy for you, enjoy your new card!


----------



## Sony Xperia S (Sep 1, 2015)

Ikaruga said:


> OK! I'm happy for you, enjoy your new card!



I don't need you to be happy for me. Just be honest with yourself and the justice in this world.

And it won't be my new card; I will recommend and buy several R9 280s for my friends, because there is nothing better.


----------



## Ikaruga (Sep 1, 2015)

Sony Xperia S said:


> I don't need that you be happy for me. Just be honest in front of yourself and the justice in this world.
> 
> And it won't be my new card - I will recommend and buy several R9 280 because there is nothing better, for my friends.


I'm always honest. Nvidia is the Apple of GPUs: they are evil, they are greedy, there is almost nothing you could like about them, but they make very good stuff which works well and also performs well, so they win. If they sucked, nobody would have bought their products for decades. The GPU market is not like the music industry; only a very small tech-savvy percentage of the population buys dedicated GPUs, and no Justin Biebers can keep themselves on the surface for long without actually delivering good stuff.


----------



## EarthDog (Sep 1, 2015)

Sony Xperia S said:


> AMD have always had THE BETTER products (even though that stupid metric FPS might show different) but you are blind to appreciate.
> 
> I am going to buy for my friends the R9 280 for 168 euros now.


I really hate to feed the nonsensical, but I wonder how you define better...

It can't be in performance /watt...
It can't be in frame time in CFx v SLI...
It can't be in highest FPS/performance... 

Bang for your buck? CHECK.
Utilizing technology to get (TOO FAR) ahead of the curve? CHECK.

I'm spent.


----------



## FordGT90Concept (Sep 1, 2015)

Sony Xperia S said:


> I am going to buy for my friends the R9 280 for 168 euros now.


Beware, I'm hearing about problems with R9 280(X) from all over the place.  Specifically, Gigabyte and XFX come up.


----------



## HumanSmoke (Sep 1, 2015)

FordGT90Concept said:


> As for HBM-2, I have a sneaking suspicion there will be no mass production of HBM and will only be mass production of HBM-2.


Well the Samsung HBM is second generation. Note the density and bandwidth in the slide I posted earlier.


FordGT90Concept said:


> AMD is the only client for HBM, no?


That seems to be the case. Everyone else seems to be waiting for the technology to become more viable. Higher density of HBM2 means less stacks per given capacity >>> smaller interposer required >>> lower production cost and defect rate with lower pin-out. Waiting for HBM2 also means that the memory vendors get more experience with the process, and interposer packaging becomes a more mature process - so a higher production ramp. With SK Hynix the only source initially, I doubt many would have jumped on board. Most IHV's, AIB/AIC would wait for a second source of supply to maintain supply in the eventuality that Hynix couldn't/wasn't able to meet orders


FordGT90Concept said:


> Unless SK Hynix can get more buyers, ramping up production doesn't make much sense.


I get the distinct impression that HBM1 was a proof of concept exercise.


FordGT90Concept said:


> The bulk of memory orders are still DDR3, DDR3L, DDR4, and GDDR5.  I wonder what SK Hynix said to AMD to get them to sign up.  HBM-2 may be amazing but HBM seems like more trouble than its worth.


Well, someone had to get the ball rolling and take a hit in the short term to ensure a long term future. Waiting, waiting, waiting for HBM2 to launch before products ship may have bigger implications for SK Hynix ( proof of concept with HBM1 might have been required to get other vendors onboard with HBM2). HBM1 is in all likelihood a small risk for Hynix given the volumes of the other memory you quoted - all the major risk would be assumed by AMD, since without HBM and Fiji having no GDDR5 memory controllers, Fiji would be stillborn.


rvalencia said:


> *Gears of War Ultimate* is a remaster from existing games while *Fable Legends* DX12 is new game and it was stated to use Async shaders. With DX12, it depends how many independent objects they throw on the screen
> 
> 
> HumanSmoke said:
> ...


So, basically what I just said.


rvalencia said:


> On behalf of posters in the comments section who can't reply against you.  I'll take you on.  *You started the personality based attacks, I'll gladly continue it. *


Don't kid yourself, the only reason they aren't here is that they're too busy eating their crayons.
bon appétit


the54thvoid said:


> Nah, WCCFTECH comments section is pretty bestial, sorry dude. Makes the worst of TPU look civilised in comparison.


I think rvalencia is attempting to bridge that divide.


----------



## FordGT90Concept (Sep 1, 2015)

HumanSmoke said:


> Well, someone had to get the ball rolling and take a hit in the short term to ensure a long term future. Waiting, waiting, waiting for HBM2 to launch before products ship may have bigger implications for SK Hynix ( proof of concept with HBM1 might have been required to get other vendors onboard with HBM2). HBM1 is in all likelihood a small risk for Hynix given the volumes of the other memory you quoted - all the major risk would be assumed by AMD, since without HBM and Fiji having no GDDR5 memory controllers, Fiji would be stillborn.


Indeed, but AMD is the worst company in the world to be taking a gamble like that. I think the console market had more to do with AMD's decision than discrete GPUs. Fiji is just their viability test platform. Maybe AMD expects to ship second-generation APUs for the Xbox One and PlayStation 4 with a die shrink and HBM, and expects to be able to pocket the savings instead of Sony and Microsoft.


----------



## HumanSmoke (Sep 1, 2015)

FordGT90Concept said:


> Indeed but AMD is the worst company in the world to be taking a gamble like that.  I think the console market had more to do with AMD's decision than discreet GPUs.


Possible, but the console APUs are Sony/MS turf. AMD is the designer. Any deviation in development is ultimately Sony/MS's decision.


FordGT90Concept said:


> Fiji is just their viability test platform.


A test platform that represents AMD's only new GPU in the last year (and for the next six months at least). Without Fiji, AMD's lineup is straight up rebrands with some mildly warmed over SKUs added into the mix, whose top model is the 390X (and presumably without Fiji, there would be a 395X2). Maybe not a huge gulf in outright performance, but from a marketing angle AMD would get skinned alive. Their market share without Fiji was in a nose dive.


FordGT90Concept said:


> Maybe AMD excepts to ship second generation APUs for Xbox One and PlayStation 4 with die shrink and HBM and AMD expects to be able to pocket the savings instead of Sony and Microsoft.


Devinder Kumar intimated that the APU die shrink would mean AMD's net profit would rise, so it is a fair assumption that any saving in manufacturing cost aids AMD, but even with the APU die and packaging shrink, Kumar expected gross margins to break $20/unit from the $17-18 they are presently residing at. Console APUs are still a volume commodity product, and I doubt that Sony/MS would tolerate any delivery slippage due to process/package deviation unless the processes involved were rock solid - especially if the monetary savings are going into AMD's pocket rather than the risk/reward being shared.


----------



## FordGT90Concept (Sep 1, 2015)

HumanSmoke said:


> Their market share without Fiji was in a nose dive.


It still is, because Fiji is priced as a premium product and the bulk of discrete card sales are midrange and low end. Those cards are all still rebrands.



HumanSmoke said:


> Devinder Kumar intimated that the APU die shrink would mean AMD's net profit would rise, so that is a fair assumption that any saving in manufacturing cost aids AMD, but even with the APU die and packaging shrink, Kumar expected gross margins to break $20/unit form the $17-18 they are presently residing at. Console APUs are still a volume commodity product, and I doubt that Sony/MS would tolerate any delivery slippage due to process/package deviation unless the processes involved were rock solid - especially if the monetary savings are going into AMD's pocket rather than the risk/reward being shared.


Sony/Microsoft would save in other areas like the power transformer, cooling, and space. They can make a physically smaller console, which translates to materials savings. Everyone wins--AMD the most, because instead of just getting paid for the APU, they'd also get paid for the memory (most of which would go to the memory manufacturer, but it is still something AMD can charge more for).


----------



## HumanSmoke (Sep 1, 2015)

FordGT90Concept said:


> It still is because Fiji is priced as a premium product and the bulk of discreet card sales are midrange and low end.  Those cards are all still rebrands.


Well, for definitive proof you'd need to see the Q3 market share numbers, since Fiji barely arrived before the close of Q2.
Sales of the top end aren't generally the only advantage to sales. They indirectly influence sales of lower parts due to the halo effect. Nvidia probably sells a bunch of GT (and lower end GTX) 700/900 series cards due to the same halo effect from the Titan and 980 Ti series - a little reflected glory if you like.
AMD obviously didn't foresee GM200 scaling (clocks largely unaffected by increasing die size) as well as it did when it laid down Fiji's design, and had Fiji been unreservedly the "worlds fastest GPU" as they'd intended, it would have boosted sales of the lower tier. AMD's mistake was not taking into account that the opposition also have capable R&D divisions, but when AMD signed up for HBM in late 2013, they had to make a decision on estimates and available information.


FordGT90Concept said:


> Sony/Microsoft would save in other areas like power transformer, cooling, and space.  They can make a physically smaller console which translates to materials saving.  Everyone wins--AMD the most because instead of just getting paid for the APU, they'd also have to get paid for memory too (most of which would be sent to the memory manufacturer but it is still something AMD can charge more for).


Hynix's own rationale seemed to be to keep pace with Samsung ( who had actually already begun rolling out 3D NAND tech by this time). AMD's involvement surely stemmed from HSA and hUMA in general - of which, consoles leverage the same tech to be sure, but I think were only part of the whole HSA implementation strategy.


----------



## EarthDog (Sep 2, 2015)

Sorry to be completely off-topic here, but Human, that quote from that douche canoe Charlie is PRICELESS.


----------



## HumanSmoke (Sep 2, 2015)

EarthDog said:


> Sorry to be completely off-topic here, but Human, that quote from that douche canoe Charlie is PRICELESS.


I'd say that you simply can't buy insight like that, but you can. For a measly $1000 year subscription, the thoughts and ramblings of Chairman Charlie can be yours! Charlie predicts...he dices...he slices, juliennes, and mashes, all for the introductory low, low price!


----------



## rvalencia (Sep 2, 2015)

HumanSmoke said:


> So, basically what I just said.
> 
> Don't kid yourself, they only reasons they aren't here is because they're too busy eating their crayons.
> bon appétit
> ...


1. With Fable Legends, you asserted an unsupported claim.

2. Your personality-based attacks show you are a hypocrite, i.e. not much different from certain parts of WCCFTech's comment section.




FordGT90Concept said:


> Beware, I'm hearing about problems with R9 280(X) from all over the place.  Specifically, Gigabyte and XFX come up.


Do you have a view that Gigabyte and XFX Maxwell v2 cards are trouble-free? Your assertion shows you are a hypocrite.


----------



## xenocide (Sep 2, 2015)

The comment section on WCCFTech is literally a wasteland of human intellect.  I suppose it's fitting for a site that publishes every stray theory and tweet from an engineer as breaking news.  They are second only to S|A on the shortlist of tech sites I cannot stand seeing cited.


----------



## FordGT90Concept (Sep 2, 2015)

rvalencia said:


> Do you have a view that Gigabyte and XFX Maxwellv2s are trouble free? Your assertion shows you are a hypocrite.


I have no idea.  All I know is RMA'ing 280(X) graphics cards is trendy right now.  Specifically, Gigabyte 280X and XFX 280.


----------



## HumanSmoke (Sep 2, 2015)

rvalencia said:


> 1. With Fables, you asserted a unsupported claim .


The only assertion I made was that different games use different resources to different extents. My claim is no more unsupported than yours that they use identical resources to the same extent. I have historical precedent on my side (different game engines, different coders etc), you have hysterical supposition on yours.


rvalencia said:


> FordGT90Concept said:
> 
> 
> > Beware, I'm hearing about problems with R9 280(X) from all over the place.  Specifically, Gigabyte and XFX come up.
> ...


What has one to do with the other? Oh, that's right... nothing!
I'd suggest you calm down; you're starting to sound just like the loons at WTFtech... assuming your violent defense of them means you aren't already a fully paid-up Disqus member.


xenocide said:


> *The comment section on WCCFTech is literally a wasteland of human intellect.*  I suppose it's fitting for a site that publishes every stray theory and tweet from an engineer as breaking news.  They are second only to S|A on the shortlist of tech sites I cannot stand seeing cited.


Quoted for truth.


----------



## rvalencia (Sep 2, 2015)

HumanSmoke said:


> The only assertion I made was that different games use different resources to different extents. My claim is no more unsupported than yours that they use identical resources to the same extent. I have historical precedent on my side (different game engines, different coders etc), you have hysterical supposition on yours.


Have you played the new Fable Legends DX12?




HumanSmoke said:


> What has one to do with the other? Oh, that's right... nothing!
> I'd suggest you calm down, you're starting to sound just like the loons at WTFtech... assuming your violent defense of them means you aren't a fully paid-up Disqus member already.


You started it. You calm down.  




HumanSmoke said:


> Quoted for truth.


As posted earlier in this thread, the WCCFTech post was from Oxide i.e. read the full post from
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995




xenocide said:


> The comment section on WCCFTech is literally a wasteland of human intellect.  I suppose it's fitting for a site that publishes every stray theory and tweet from an engineer as breaking news.  They are second only to S|A on the shortlist of tech sites I cannot stand seeing cited.


As posted earlier in this thread, the WCCFTech post was from Oxide i.e. read the full post from
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995




FordGT90Concept said:


> I have no idea.  All I know is RMA'ing 280(X) graphics cards is trendy right now.  Specifically, Gigabyte 280X and XFX 280.


That's a double-standard viewpoint.  http://forums.evga.com/GTX-970-Black-Screen-Crash-during-game-SOLVED-RMA-m2248453.aspx


----------



## nem (Sep 2, 2015)




----------



## FordGT90Concept (Sep 2, 2015)

I'm sure the async compute features of GCN are intrinsically linked to Mantle.  Because AMD supported Mantle and NVIDIA couldn't be bothered to even look into it, AMD has a huge advantage when it comes to DirectX 12 and Vulkan.  It makes sense.  The question is how long will it take for NVIDIA to catch up?  Pascal?  Longer?




rvalencia said:


> That's a double-standard viewpoint.  http://forums.evga.com/GTX-970-Black-Screen-Crash-during-game-SOLVED-RMA-m2248453.aspx


I made no mention of NVIDIA in the context of 280(X).


----------



## HumanSmoke (Sep 2, 2015)

rvalencia said:


> Have you played the new Fable Legends DX12?


No but I can read, and more to the point I obviously understand the content of the links you post more than you do.


rvalencia said:


> As posted earlier in this thread, the WCCFTech post was from Oxide i.e. read the full post from
> http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
> .....[_and again...same link twice in the same post yet you still failed to make the connection_]
> As posted earlier in this thread, the WCCFTech post was from Oxide i.e. read the full post from
> http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995



Allow me to point out the obvious (for most people) from the post you linked to twice... consecutively:


> Our use of Async Compute, however, pales with comparisons to some of the things which the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end being pretty disruptive in a year or so as these GCN built and optimized engines start coming to the PC. *I don't think Unreal titles will show this very much though*, so likely we'll have to wait to see. Has anyone profiled Ark yet?


You keep linking to the words of the Oxide developer, so you obviously place some store in what he's saying - why keep linking otherwise? Yet the developer doesn't see Unreal using the same levels of async compute, and it's almost a certainty that Nitrous and UE4 aren't identical.

Here's the kicker, in case you don't understand why I'm singling out the Unreal Engine 4 part of his post: Fable Legends *USES Unreal Engine 4*.


----------



## FordGT90Concept (Sep 2, 2015)

It should be noted that Oxide isn't going to know much about Unreal Engine development.  Considering Unreal Engine 4 is used on PlayStation 4 and Xbox One and both of those support async compute, I think it is quite silly to believe Epic wouldn't employ async compute where possible.  The only way it wouldn't is if their engine can't be made to use it without pushing out major architectural changes.  In which case, Unreal Engine 5 will be coming sooner rather than later.


----------



## HumanSmoke (Sep 2, 2015)

FordGT90Concept said:


> It should be noted that Oxide isn't going to know much about Unreal Engine development.  Considering Unreal Engine 4 is used on PlayStation 4 and Xbox One and both of those support async compute, I think it is quite silly to believe Epic wouldn't employ async compute where possible.  The only way it wouldn't is if their engine can't be made to use it without pushing out major architectural changes.  In which case, Unreal Engine 5 will be coming sooner rather than later.


It's up to individual developers which version of UE4 (or any other UE engine, including the DX12-patched UE3 that Gears of War Ultimate Edition uses) they use. As I noted earlier in post #76, UE4.9 supports DX12 and a number of DX12 features - including async compute and ROVs. The point I was attempting to make - and seemingly failed at - is that depending upon which version is used, and how much effort the developer puts into coding, the feature set used is going to differ from engine to engine, from engine version to engine version, and from game to game. Larger studios with larger development teams are likely to exercise a greater degree of control over the final product. Small game studios might either use fewer features to save expenditure, or rely upon third-party dev teams to incorporate elements of their gaming SDK into the final product. rvalencia is putting forward the notion that all DX12 releases are going to be performance clones of AotS - which I find laughable in the extreme given how the gaming industry actually works.

Now, given that Epic haven't exactly hidden their association with Nvidia's game development program:


> "Epic developed Unreal Engine 4 on NVIDIA hardware, and it looks and runs best on GeForce." - *Tim Sweeney*, founder, CEO and technical director of Epic Games.



What are the chances that Unreal Engine 4 (and patched UE3 builds) operates exactly the same way as Oxide's Nitrous Engine, as rvalencia asserts? A game engine that was overhauled (if not developed) as a demonstrator for Mantle.


FordGT90Concept said:


> It should be noted that Oxide isn't going to know much about Unreal Engine development.


Well, that would make them pretty lazy considering UE4 builds are easily sourced, and Epic runs a pretty extensive forum.  I would have thought that a game developer might have more than a passing interest in one of the most widely licensed game engines considering evaluation costs nothing.
I can appreciate that you'd concentrate on your own engine, but I'd find it difficult to imagine that they wouldn't keep an eye on what the competition are doing, especially when the cost is minimal and the information is freely available.


----------



## Sony Xperia S (Sep 2, 2015)

Ikaruga said:


> l'm always honest. Nvidia is the Apple of GPUs, they are evil, they are greedy, there is almost nothing you could like about them, but they make very good stuff, which works well and also performs well, so they win. If they would suck, nobody would buy their products for decades, the GPU market is not like the music industry, only a very small tech savy percentage of the population buys dedicated GPUs, no Justin Biebers can keep themselves on the surface for a long time without actually delivering good stuff.





EarthDog said:


> I really hate to feed the nonsensical but, I wonder how you define better...
> 
> It can't be in performance /watt...
> It can't be in frame time in CFx v SLI...
> ...





FordGT90Concept said:


> Beware, I'm hearing about problems with R9 280(X) from all over the place.  Specifically, Gigabyte and XFX come up.



That's called brainwashing. I have never seen any technological competitive advantages in Apple's products compared to the competition. Actually, the opposite - they break like shit.

Anyways, you guys are so mean. I can't comprehend how it's even possible that such people exist.



EarthDog said:


> Bang for your buck? CHECK.
> Utilizing technology to get (TOO FAR) ahead of the curve? CHECK.



Yes, and Image quality CHECK.


----------



## xenocide (Sep 2, 2015)

HumanSmoke said:


> A game engine that was overhauled (if not developed) as a demonstrator for Mantle.



It was developed with Mantle in mind, and it was pretty apparent when AMD dragged a rep from Oxide around like a puppy dog to every tech convention to talk about how superior it was to DirectX and OpenGL at the time.



Sony Xperia S said:


> I have never seen any technological competitive advantages in Apple's products compared to the competition.



Ease of Use (Ensured Compatibility, User-Friendly Software, Simple Controls\Navigation), Aesthetics (form over function), Reliability, Build Quality (Keyboards, Trackpads, Durable Materials), Ease of Development\Standardized Hardware--and those are just broad terms.  If you want to get down to it Apple's iPhones and iOS by extension continue to be superior to their competitors, and offer features that are more well-rounded than their competitors.  Companies like Samsung and HTC have been trying to chip away at Apple by offering handsets that are cheaper or have a single feature better than Apple's equivalent in the iPhone, but they almost never have offered a more well-rounded product.  Apple's Cinema Displays are some of the best IPS's you can buy, and they have given up on lower resolutions even in their low-end laptops.  They do a lot of things better than their competitors, that's why people keep buying them.


----------



## the54thvoid (Sep 2, 2015)

I actually just read the extremetech review (from 17th Aug) - the one that places 980ti versus Fury X.  What's the fuss about?

http://www.extremetech.com/gaming/2...-singularity-amd-and-nvidia-go-head-to-head/2

Effectively, the Fury X (a £550 card) pretty much runs the same as (or 5-10% better than, workload dependent; ironically, heavy workload = 980 Ti better) a 980 Ti (a £510 card). *stock design prices

This is using an engine Nvidia says has an MSAA bug, yet here's the 4xMSAA bench at 1080p:







and 4k:






So is this whole shit fest about top end Fiji and top end Maxwell being.... *EQUAL*, OMG, stop the freaking bus.... (I should've read these things earlier).

This is actually hilarious.  All the AMD muppets saying all the silly things about Nvidia and all the Nvidia muppets saying the same about AMD when the reality of DX12 is.......

They're the same.

wow.

Why aren't we all hugging and saying how great this is?  Why are we fighting over parity?

Oh, one caveat from extremetech themselves:



> but _Ashes of the Singularity_ and possibly Fable Legends are the only near-term DX12 launches, and neither is in finished form just yet. *DX11 and even DX9 are going to remain important for years to come*, and AMD needs to balance its admittedly limited pool of resources between encouraging DX12 adoption and ensuring that gamers who don't have Windows 10 don't end up left in the cold.



That bit in bold is very important... DX12 levels the field, even 55-45 in AMD's favour, but DX11 is AMD's Achilles' heel, worse still in DX9.

lol.


----------



## Sony Xperia S (Sep 2, 2015)

xenocide said:


> *Ease of Use (Ensured Compatibility, User-Friendly Software, Simple Controls\Navigation), Aesthetics (form over function), Reliability, Build Quality (Keyboards, Trackpads, Durable Materials), Ease of Development\Standardized Hardware*--and those are just broad terms.  If you want to get down to it Apple's iPhones and iOS by extension continue to be superior to their competitors, and offer features that are more well-rounded than their competitors.  Companies like Samsung and HTC have been trying to chip away at Apple by offering handsets that are cheaper or have a single feature better than Apple's equivalent in the iPhone, but they almost never have offered a more well-rounded product.  Apple's Cinema Displays are some of the best IPS's you can buy, and they have given up on lower resolutions even in their low-end laptops.  They do a lot of things better than their competitors, that's why people keep buying them.



About resolutions I agree. But their disadvantage is the extremely high price tag. So anyways - they don't qualify in terms of cost of investment.

All other so called by you "broad terms" are so subjective. About materials specifically I told you that you can NOT rely on iphone because one or two times on the ground will be enough to break the screen.

* There is an anecdote in my country. That people buy super expensive iphones for 500-600-700 euros and then they don't have 2-3 euros to sit in the cafe.


----------



## Frick (Sep 2, 2015)

I think DX12 should have a faster adoption rate than earlier versions. For one thing, it's pushed by consoles, isn't it? And secondly, if they can get higher performance out of it, I mean significant performance, would that make it more interesting to use as well? And Win10 is free for many people, so currently it's not associated with money, as it was with say DX10. People have probably made the same argument before.


----------



## Aquinus (Sep 2, 2015)

Sony Xperia S said:


> About resolutions I agree. But their disadvantage is the extremely high price tag. So anyways - they don't qualify in terms of cost of investment.


That's so dumb. It's like saying that anything doesn't qualify unless it's cheap. You know the old adage, "You get what you pay for." That's true of Apple to a point. Their chassis are solid, they've integrated everything to a tiny motherboard so most of Apple's laptops are battery, not circuitry. Having locked down hardware enables devs to have an expectation with respect to what kind of hardware is under the hood. Simple fact is that there are a lot of reasons why Apple is successful right now. Ignoring that is just being blind.


Sony Xperia S said:


> All other so called by you "broad terms" are so subjective. About materials specifically I told you that you can NOT rely on iphone because one or two times on the ground will be enough to break the screen.


Drop any phone flat on the screen hitting something and I bet you the screen will crack. With that said, my iPhone 4S has been dropped a lot without a case and it is still perfectly fine...

Lastly, this is a thread about AMD. Why the hell are you talking about Apple? Stick to the topic and stop being a smart ass. AMD offers price and there have been *arguments in the past* about IQ settings. Simple fact is that nVidia cards can look just as good, they're just tuned for performance out of the box. Nothing more, nothing less.


----------



## Sony Xperia S (Sep 2, 2015)

Aquinus said:


> That's so dumb. It's like saying that anything doesn't qualify unless it's cheap. You know the old adage, "You get what you pay for." That's true of Apple to a point. Their chassis are solid, they've integrated everything to a tiny motherboard so most of Apple's laptops are battery, not circuitry. Having locked down hardware enables devs to have an expectation with respect to what kind of hardware is under the hood. Simple fact is that there are a lot of reasons why Apple is successful right now. Ignoring that is just being blind.
> 
> Drop any phone flat on the screen hitting something and I bet you the screen will crack. With that said, my iPhone 4S has been dropped a lot without a case and it is still perfectly fine...
> 
> Lastly, this is a thread about AMD. Why the hell are you talking about Apple? Stick to the topic and stop being a smart ass. AMD offers price and there have been *arguments in the past* about IQ settings. Simple fact is that nVidia cards can look just as good, they're just tuned for performance out of the box. Nothing more, nothing less.



This is a thread about nvidia and AMD, and somebody brought the comparison that apple is nvidia.

There is no need to have something hit on the ground - just a flat surface like asphalt will be enough. And that's not true - there are videos which you can watch that many other brands offer phones which don't break when hitting the ground.

Oh, and apple is successful because it's an american company and those guys in usa just support it on nationalistic basis.


----------



## HumanSmoke (Sep 2, 2015)

the54thvoid said:


> I actually just read the extremetech review (from 17th Aug) ...


You may want to give *this review* a look-see. Very comprehensive.


the54thvoid said:


> Oh, one caveat from extremetech themselves:
> That bit in bold is very important... DX12 levels the field, even 55-45 in AMD's favour, but DX11 is AMD's Achilles' heel, worse still in DX9.


The review I just linked to has a similar outlook, and one that I alluded to regarding finances/interest in game devs coding for DX12


> Ultimately, no matter what AMD, Microsoft, or Nvidia might say, there’s another important fact to consider. DX11 (and DX10/DX9) are not going away; the big developers have the resources to do low-level programming with DX12 to improve performance. Independent developers and smaller outfits are not going to be as enamored with putting in more work on the engine if it just takes time away from making a great game. And at the end of the day, that’s what really matters. Games like _StarCraft II_, _Fallout 3_, and the _Mass Effect_ series have all received rave reviews, with nary a DX11 piece of code in sight. And until DX11 is well and truly put to rest (maybe around the time Dream Machine 2020 rolls out?), things like drivers and CPU performance are still going to be important.





the54thvoid said:


> This is actually hilarious.  All the AMD muppets saying all the silly things about Nvidia and all the Nvidia muppets saying the same about AMD when the reality of DX12 is.......They're the same.


Pretty much. Like anything else graphics game engine related, it all comes down to the application and the settings used. For some people, if one metric doesn't work, try another...and another...and another...and when you find that oh so important (maybe barely) discernible difference, unleash the bile that most sane people might only exhibit on finding out their neighbour is wanted by the International Criminal Court for crimes against humanity.


Frick said:


> I think DX12 should have a faster adoption rate than earlier versions. For one thing, it's pushed by consoles, isn't it? And secondly, if they can get higher performance out of it, I mean significant performance, would that make it more interesting to use as well? And Win10 is free for many people, so currently it's not associated with money, as it was with say DX10. People have probably made the same argument before.


DX12 might be available to the OS, but game developers still have to code (and optimize that code) for their titles. That isn't necessarily a given, as game devs have pointed out themselves. DX12, as has been quoted many times, puts more control into the hands of the developer, and the developer still needs to get to grips with it. I can see many smaller studios not wanting to expend the effort, and many more may prefer to use a simplified engine of known quantity (DX9 or 11) for time-to-market reasons. Consoles taking up DX12 is all well and good, but porting a console game to PC isn't a trivial matter.


----------



## Aquinus (Sep 2, 2015)

Sony Xperia S said:


> This is a thread about nvidia and AMD, and somebody brought the comparison that apple is nvidia.


Actually, you removed something that *you said* that started that argument. See this post. Don't try to change history then lie about it.


Sony Xperia S said:


> Oh, and apple is successful because it's an american company and those guys in usa just support it on nationalistic basis.


I don't support Apple, I just said that they have a quality product that you pay for. Also making claims about the American people in general is really bad idea. I have an iPhone because work pays for the monthly bill and I have a Macbook Pro because work gave me one.

This is the nice way of me telling you to shut up and stop posting bullshit but, it appears that I needed to spell that out for you.


----------



## Sony Xperia S (Sep 2, 2015)

Aquinus said:


> Actually, you removed something that *you said* that started that argument. See this post. Don't try to change history then lie about it.



What are you speaking about? What did I remove and what am I trying to change?

This is the line which introduced apple:



Ikaruga said:


> l'm always honest. Nvidia is the Apple of GPUs, they are evil, they are greedy, there is almost nothing you could like about them


----------



## Aquinus (Sep 2, 2015)

Sony Xperia S said:


> What are you speaking about? What did I remove and what am I trying to change?
> 
> This is the line which introduced apple:


He said nVidia is the Apple of GPUs. He wasn't talking about Apple itself. He went on to say (talking about nVidia):


Ikaruga said:


> they are evil, they are greedy, there is almost nothing you could like about them, but they make very good stuff, which works well and also performs well, so they win. If they would suck, nobody would buy their products for decades, the GPU market is not like the music industry, only a very small tech savy percentage of the population buys dedicated GPUs


You're only digging yourself a deeper hole...


----------



## Sony Xperia S (Sep 2, 2015)

Aquinus said:


> He said nVidia is the Apple of GPUs. He wasn't talking about Apple itself. He went on to say (talking about nVidia)



Really, I never knew and actually don't wanna know that this fruit the apple is so divine. 

Seriously, how would I have known that ? When this is the first time I hear someone speaking like that ?


----------



## Aquinus (Sep 2, 2015)

Sony Xperia S said:


> Really, I never knew and actually don't wanna know that this fruit the apple is so divine.
> 
> Seriously, how would I have known that ? When this is the first time I hear someone speaking like that ?


Then maybe you should learn to read before making assumptions about what people are saying. Considering this is not a thread about Apple, you should have been able to put one and one together to make two.

Remember how I said:


Aquinus said:


> You're only digging yourself a deeper hole...





Aquinus said:


> This is the nice way of me telling you to shut up and stop posting bullshit but, it appears that I needed to spell that out for you.



Well, that all still stands and is only even more relevant now.


----------



## Sony Xperia S (Sep 2, 2015)

Aquinus said:


> Remember how I said:



*I am not in a hole*, and I don't understand what exactly you are speaking about and how in hell you know what that person meant?

Are you threatening me or what?

My reading skills are ok. I am reading. 
What I want to kindly ask you is to leave me alone without all the time analysing in a very negative way my posts and instead trying to respect my opinion.


----------



## Aquinus (Sep 2, 2015)

Sony Xperia S said:


> *I am not in a hole*, and I don't understand what exactly you are speaking about and *how in hell you know what that person meant*?


I can read. It doesn't take a rocket scientist to figure out what he was saying.


Sony Xperia S said:


> My reading skills are ok. I am reading.


Then there is nothing further to discuss because I know English and I understood him just fine.


Sony Xperia S said:


> Are you threatening me or what?


No, just pointing out that you've been pulling the thread off topic because you didn't understand what someone else posted.


Sony Xperia S said:


> What I want to kindly ask you is to leave me alone without all the time analysing in a very negative way my posts and instead trying to respect my opinion.


Then maybe you should stay on topic like I said in the first place. If you want to be left alone, a public forum is not the place to be. Calling you out on BS is not persecution, it's called accountability.


----------



## Captain_Tom (Sep 2, 2015)

RejZoR said:


> I think NVIDIA just couldn't be bothered with driver implementation till now because frankly, async compute units weren't really needed till now (or shall I say till DX12 games are here). Maybe drivers "bluff" the support just to prevent crashing if someone happens to try and use it now, but they'll implement it at later time properly. Until NVIDIA confirms that GTX 900 series have no async units, I call it BS.



Yeah and how long did it take them to admit the 970 has 3.5GB of VRAM?  Heck they still haven't fully fessed up to it.


----------



## cadaveca (Sep 2, 2015)

Uh, FYI guys, on reddit there is a thread with an apparent AMD guy saying that NO GPU ON THE MARKET TODAY is fully DX12 compliant. So...

What AMD does, NVidia doesn't. Also, vice versa.

Now, about that rumour that you could use NVidia and AMD GPUs together in the same system... would that somehow overcome these "issues"?


----------



## EarthDog (Sep 2, 2015)

It has 4GB of VRAM though... it's just that the last 0.5GB is much slower.


----------



## EarthDog (Sep 2, 2015)

Sony Xperia S said:


> Anyways, you guys are so mean. I can't comprehend how it's even possible that such people exist.
> 
> 
> 
> Yes, and Image quality CHECK.


Mean? How are we mean when we(I) shower you with facts? I like how you cherry-pick the two good things (it was one actually) I mentioned, but completely disregard the rest yet still think it's better. 

Image quality? You need to prove that Sony... 

You have your head shoved so far up AMD's ass you are crapping AMD BS human caterpillar style (THAT was the first mean thing I have said) and you don't even know it. Since TPU doesn't seem to want to perma ban this clown, I'm just going to put him on ignore. Have fun with this guy people. I can't take the nonsense anymore and risk getting in trouble myself.


----------



## RejZoR (Sep 2, 2015)

*Interesting read on Async Shaders and GTX 900 series:*
https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/

EDIT:
Removed the info, read it yourself...


----------



## Frick (Sep 2, 2015)

RejZoR said:


> *Interesting read on Async Shaders and GTX 900 series:*
> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/
> 
> EDIT:
> Removed the info, read it yourself...



tl;dr "we still don't know lol"


----------



## RejZoR (Sep 2, 2015)

Well, from what I can see so far, NVIDIA is capable of doing async compute, just more limited by the queue scheduler. Still need to read further...


----------



## FordGT90Concept (Sep 2, 2015)

Sony Xperia S said:


> That's called brainwashing. I have never seen any technological competitive advantages in Apple's products compared to the competition. Actually, the opposite - they break like shit.


For the record, I just installed my PowerColor PCS+ 290X yesterday and first impressions are excellent.  FurMark (100% load) only took it to 64C.


----------



## rvalencia (Sep 2, 2015)

RejZoR said:


> Well, from what I can see so far, NVIDIA is capable of doing async compute, just more limited by the queue scheduler. Still need to read further...


*Maxwell v2 is not capable of concurrent async compute + rendering without incurring context-switch penalties, and it's in this context that Oxide made its remarks.*


cadaveca said:


> Uh, FYI guys, on reddit there is a thread with an apparent AMD guy saying that NO GPU ON THE MARKET TODAY is fully DX12 compliant. So...
> 
> What AMD does, NVidia doesn't. Also, vice versa.
> 
> Now, about that rumour that you could use NVidia and AMD GPUs together in the same system... would that somehow overcome these "issues"?




An Intel Xeon with 18 CPU cores per socket running the DirectX 12 reference driver is a full DirectX 12 renderer.


----------



## FordGT90Concept (Sep 2, 2015)

Direct3D has feature levels.  12_0 is basic DirectX 12 support, which AMD GCN, Intel's iGPU, and NVIDIA all support.  Maxwell officially supports 12_1, meaning the cards won't freak out if they see 12_1 features, but all NVIDIA cards that support 12_0 take a performance penalty when software uses async compute.  They support it, but they do a really bad job of supporting it.

I'm curious if Intel's iGPU takes a performance penalty when using async compute too.
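The capability-versus-performance gap being argued in this thread (the driver reports async compute, but actually using it hurts) is exactly the case a per-feature override handles. A minimal Python sketch of that idea; the cap names and values here are hypothetical illustrations, not a real D3D12 API:

```python
# Sketch of capability-driven path selection: trust what the driver
# reports, but allow a per-feature override when the reported support
# turns out to be unusable in practice (the situation Oxide describes).
reported_caps = {"async_compute": True, "rovs": True}  # hypothetical driver report
overrides = {"async_compute": False}                   # disabled after profiling

def use_feature(name: str) -> bool:
    """Use a feature only if the driver reports it and it isn't overridden."""
    if name in overrides:
        return overrides[name]
    return reported_caps.get(name, False)
```

Under this model the renderer still responds to reported capabilities, as Oxide says, and the override is the "shut it down on their hardware" step rather than a vendor-ID check.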


----------



## Sony Xperia S (Sep 2, 2015)

FordGT90Concept said:


> For the record, I just installed my PowerColor PCS+ 290X yesterday and first impressions are excellent.  FurMark (100% load) only took it to 64C.



Excellent news ! It is great to hear you have a new card.

Why do you stress her under FurMark?


----------



## FordGT90Concept (Sep 2, 2015)

Make sure it is stable and the temperatures are reasonable.  I'm only keeping it installed for about a week then I'm going back to 5870 until I can get my hands on a 6700K.  I need to make sure I don't have to RMA it.


----------



## the54thvoid (Sep 2, 2015)

Aquinus said:


> I can read. It doesn't take a rocket scientist to figure out what he was saying.
> 
> Then there is nothing further to discuss because I know English and I understood him just fine.
> 
> ...



lol, I thought "why the hell is @Aquinus triple posting and wtf are these people talking about" then realised - ah it's _him_.  I can't see their posts - still blocked to me - thankfully it seems.  I agree with @EarthDog though - should be banned - simple as that.


----------



## BiggieShady (Sep 2, 2015)

rvalencia said:


> *Maxwell v2 is not capable of concurrent async compute + rendering without incurring context-switch penalties, and it's in this context that Oxide made its remarks.*


That is a claim presented at the beginning of the article. By the end, if you read it, the benchmark shows it is *not true* (number of queues on the horizontal axis, time spent computing on the vertical - lower is better).



 
Maxwell is faster than GCN up to 32 queues, then evens out with GCN by 128 queues, while GCN holds the same speed all the way to 128 queues.
It's also shown that with async shaders it's extremely important how they are compiled for each architecture. 
Good find @RejZoR


----------



## FordGT90Concept (Sep 2, 2015)

Fermi and newer apparently can handle 31 async commands (jumps up at 32, 64, 96, 128) before the scheduler freaks out.  GCN can handle 64 at which point it starts straining.  GCN can handle far more async commands than Fermi and newer.

The question is how does this translate to the real world?  How many async commands is your average game going to use?  31 or less?  1000s?
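The step pattern described in that benchmark (latency jumping at 32, 64, 96, 128 queues) is what you'd expect from a scheduler that drains kernels in fixed-size batches. A minimal Python sketch of that model; the batch sizes are illustrative assumptions, not measured hardware limits:

```python
# Toy model of the benchmark's step pattern: if a scheduler can run at
# most `batch` kernels concurrently, N kernels take ceil(N / batch)
# sequential waves, so total time jumps at every batch boundary.
import math

def waves(n_kernels: int, batch: int) -> int:
    """Sequential waves needed to drain n_kernels at a given batch size."""
    return math.ceil(n_kernels / batch)

# Hypothetical batch sizes: 32 (Maxwell-like) vs 64 (GCN-like).
for n in (31, 32, 33, 64, 65, 128):
    print(f"{n:3d} kernels -> {waves(n, 32)} waves @32, {waves(n, 64)} waves @64")
```

Under this model a 33rd command costs a whole extra wave on the 32-wide scheduler but nothing extra on the 64-wide one, which is the "freaks out past 31" behaviour in miniature.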


----------



## Ikaruga (Sep 2, 2015)

cadaveca said:


> Uh, FYI guys, on reddit there is a thread with an apparent AMD guy saying that NO GPU ON THE MARKET TODAY is fully DX12 compliant. So...
> 
> What AMD does, NVidia doesn't. Also, vice versa.


This is what I said in this thread a day earlier than that reddit post, but some people are in write-only mode and don't actually read what others are saying.


Ikaruga said:


> *There is no misinformation at all*, most of the dx12 features will be supported by software on most of the cards, there are no GPU on the market with 100% top tier dx12 support (and I'm not sure if the next generation will be one, but maybe). This is nothing but a very well directed market campaign to level the fields, but I expected more insight into this from some of the TPU vets tbh (I don't mind it btw, AMD needs all the help he can get anyways).


----------



## BiggieShady (Sep 2, 2015)

FordGT90Concept said:


> The question is how does this translate to the real world? How many async commands is your average game going to use? 31 or less? 1000s?


The answer to that question is the same as the answer to this one: how many different kinds of parallel tasks besides graphics can you imagine in a game? Say you don't want to animate leaves in the forest using only geometry shaders, but want real global wind simulation, so you use a compute shader for that. Next you want wind on the water geometry: do you go with a new async compute shader or append to the existing one? As you can see, the real-world number of simultaneous async compute shaders is the number of different kinds of simulations in use: hair, fluids, rigid bodies, custom GPU-accelerated AI ... all of these would benefit from each being its own async shader, rather than one huge shader with a bunch of branching (there is no branch prediction in GPU cores; worse, GPU cores often execute both if-else paths).
All in all, I'd say 32 is more than enough for gaming ... more than that might benefit pure compute workloads.
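The both-paths penalty mentioned above can be sketched with a toy cost model. This is only an illustration of the idea, not a measurement of any real GPU; the cycle counts are invented:

```python
# Toy model of SIMD branch divergence: a GPU wavefront/warp executes all its
# lanes in lockstep, so a wavefront whose lanes diverge on an if/else pays
# the cost of BOTH paths, while splitting the work into separate uniform
# dispatches lets each one pay only its own path. Cycle counts are invented.
IF_COST, ELSE_COST = 8, 5

def diverged_cost(lanes_taking_if: int, lanes_taking_else: int) -> int:
    """One big branching shader: a wavefront with lanes on both sides
    of the branch executes both code paths serially."""
    cost = 0
    if lanes_taking_if:
        cost += IF_COST
    if lanes_taking_else:
        cost += ELSE_COST
    return cost

def split_dispatch_cost() -> int:
    """Two separate async dispatches, each uniform: run concurrently,
    wall time is just the slower of the two paths."""
    return max(IF_COST, ELSE_COST)

print(diverged_cost(32, 32))   # diverged wavefront pays 8 + 5 = 13
print(diverged_cost(32, 0))    # uniform wavefront pays only 8
print(split_dispatch_cost())   # split into async dispatches: 8
```

Under this toy model, splitting one branching megashader into several uniform async shaders avoids paying for both sides of the branch, which is the argument for one async shader per simulation kind.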


Ikaruga said:


> most of the dx12 features will be supported by software on most of the cards, there are no GPU on the market with 100% top tier dx12 support (and I'm not sure if the next generation will be one, but maybe)


The point is good and all, but let's not forget that most of us here are well used to the difference between marketing badges on colorful boxes and spotty support for a new API. Major game engine developers will find a well-supported feature subset on both architectures and use it ... hopefully every major engine will have optimized code paths for each architecture and automatic fallback to DX11. Let's keep our fingers crossed for a couple of years.


 
In August, Win10 adoption among DX12-capable GPU owners went from 0% to 16.32% ... hmm ... games using full-blown DX12 features - maybe in a year


----------



## truth teller (Sep 2, 2015)

the info in that reddit thread is not really the truth.

the main goal of having *async* compute units is not massive parallelization of the workload, but having the gpu compute said workload *while still* performing rendering tasks, which *nvidia hardware can't do* (all the news floating around seems to indicate so, and the company hasn't addressed the issue in any way, which is pretty much admitting fault).

leave that reddit guy with his c source file full of cuda preprocessor tags alone, it's going nowhere


----------



## Ikaruga (Sep 3, 2015)

BiggieShady said:


> Point is good and all but let's not forget how we are here (mostly) very well used to difference between marketing badges on colorful boxes and spotty support for a new API. Major game engine developers will find a well supported feature subset on both architectures and use them ... hopefully every major engine will have optimized code paths for each architecture and automatic fallback to DX11. Let's try keeping out fingers crossed for a couple of years.
> 
> In august adoption of win10 with dx12 gpu owners went from 0% to 16.32% ... hmm ... using full blown dx12 features - maybe in a year


I agree and did not forget at all, but my conclusions were a little different, as I wrote somewhere here earlier. Anyway, most multiplatform games will most likely pick the features which are fast and available on the major consoles, but this is not the end of the world; you will still be able to play those games, perhaps setting 1-2 sliders to high instead of ultra in the options to get optimal performance. Other titles might use GameWorks or an even more direct approach exclusive to the PC, and those will run better on Nvidia or on AMD depending on the path they take.


truth teller said:


> the info on that reddit thread is not really the truth.
> 
> the main goal of having *async* compute units is not the major parallelization of workload, but having the gpu compute said workload *while still* performing rendering tasks, which *nvidia hardware can't do* (all the news floating around seeems to indicate so, also the company hasnt addressed the issue in any way so that pretty much admitting fault).


I don't think that's correct. Nvidia does have a disadvantage with async compute at the hardware level, but we don't know the performance impact if that gets properly corrected with the help of the driver/CPU (and properly means well optimized here); and there are other features the Nvidia architecture does faster, so engines using those might easily gain back what they lost on the async compute part.

We just don't know yet.


----------



## eidairaman1 (Sep 3, 2015)

NV been cons for so long


----------



## BiggieShady (Sep 3, 2015)




----------



## truth teller (Sep 3, 2015)

turns out nvidia async implementation is just a wrapper








source: ka_rf @ beyond3d forum

their marketing department should have a full schedule over the next few days


----------



## FordGT90Concept (Sep 3, 2015)

We kind of already gathered that, no?  Async on AMD cards is executed asynchronously while async on NVIDIA cards is executed synchronously.

Interesting that on both architectures, 100 threads appears to be the sweet spot.
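The serial-vs-concurrent distinction above can be sketched with a toy timing model. Python threads stand in for GPU queues and the sleep durations are arbitrary; this only illustrates why overlap hides latency, it is not a GPU benchmark:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_work():
    time.sleep(0.2)  # stand-in for a frame's rendering workload

def compute_work():
    time.sleep(0.2)  # stand-in for an async compute job (e.g. SSAO)

# "Synchronous" model: compute waits until graphics finishes.
t0 = time.perf_counter()
graphics_work()
compute_work()
serial = time.perf_counter() - t0

# "Asynchronous" model: both workloads in flight at the same time.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(graphics_work), pool.submit(compute_work)]
    for f in futures:
        f.result()  # wait for both to complete
concurrent = time.perf_counter() - t0

print(serial > concurrent)  # True: overlapping hides the compute latency
```

With equal workloads, the serial model takes roughly the sum of both and the concurrent model roughly the longer of the two, which is the whole selling point of async compute.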


----------



## Xzibit (Sep 4, 2015)

truth teller said:


> turns out nvidia async implementation is just a wrapper
> 
> 
> 
> ...




Someone has to find out what kind of performance can be expected if the developer codes for async and adds multi-GPU scalability to the game engine.


----------



## BiggieShady (Sep 4, 2015)

Those nvidia cons managed to build a gpu architecture that is good for dx12 while being good for dx11 at the same time, right in the middle of the transition from dx11 to dx12 ... damn them. 
Those bastards may even engineer Pascal completely with dx12 in mind.

But seriously, the Maxwell architecture seems to handle concurrency among the async tasks themselves just fine (latencies are in accordance with the 32 queue depth) ... the problem is the graphics workload being synchronous with respect to the async compute workload - if there is no architectural reason for it to be that way, this could be solved through a driver update. The troubling thing is, if nvidia knew they could fix it in a driver update, they'd have been faster with their response. Maybe Jen-Hsun Huang is writing a heartwarming letter.


----------



## qubit (Sep 5, 2015)

Sony Xperia S said:


> Why am I not surprised at all by this too ?
> 
> Nvidia is so dirty like pigs in the mud.
> 
> ...


Say whut?!   Especially the bold bit. With idiotic statements like that, no wonder you're getting criticized by everyone.


----------



## GC_PaNzerFIN (Sep 5, 2015)

What a huge pile of dog turd over something which seems to have been a driver issue, entirely to be expected with an alpha-level implementation in the first ever DX12 title. The NVIDIA DX12 driver does not yet seem to fully support async shaders, although the Oxide dev thought it did. 

Yeah, AMD technical marketing might not be your best source for info about competitor products. Combine that with a meltdown from a game dev... and we have some good old-fashioned NVIDIA bashing. 

http://wccftech.com/nvidia-async-compute-directx-12-oxide-games/

"We actually just chatted with Nvidia about Async Compute, indeed the driver *hasn’t fully implemented it yet,* but it appeared like it was. We are *working closely with them as they fully implement Async Compute*.
"


----------



## Ikaruga (Sep 6, 2015)




----------



## BiggieShady (Sep 6, 2015)

@Ikaruga I was wondering what took toothless Spanish Nvidia employee so long

... meanwhile in real nvidia http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html


----------



## Ikaruga (Sep 6, 2015)

BiggieShady said:


> @Ikaruga I was wondering what took toothless Spanish Nvidia employee so long


But he has some teeth


----------



## rtwjunkie (Sep 6, 2015)

That comedy team has been running these gags for almost 10 years.  There have been a few people in other threads who actually think he works for Nvidia, so I figured I would throw this up here:


----------



## BiggieShady (Sep 6, 2015)

Ikaruga said:


> But he has some teeth


He has a tooth, you are being generous with the plural there


----------



## P-40E (Sep 7, 2015)

Sony Xperia S said:


> Really, I never knew and actually don't wanna know that this fruit the apple is so divine.
> 
> Seriously, how would I have known that ? When this is the first time I hear someone speaking like that ?



Because you are supposed to be smart enough to comprehend it. (No offense) But that's how life works.


----------



## anubis44 (Sep 7, 2015)

Mr McC said:


> No support, no problem, just pay them to gimp it on AMD cards: welcome to the wonderful world of the nVidia console, sorry, pc gaming, the way it's meant to be paid.



More like the way WE'RE meant to be played.


----------



## Vlada011 (Sep 7, 2015)

I don't see a single DX12 game yet, only some calculations for possible scenarios.
By the time 10 DX12 games are on the market, Pascal will be more than a year old.
Because of that I see no reason for panic; a card supporting DX12 is not the same as it offering playable fps.
I remember when the 5870 with 2GB showed up, I bought it immediately as the first card with DX11 support.
The card was excellent, but only in DX9 and a few DX10 environments; the first card with playable fps, and much better tessellation and DX11, was the GTX 580. 
I went through the ATI 5870 and ATI 6970, but only with the GTX 580 did the situation really improve, and later with Tahiti. Between the ATI 5870 and the AMD 7970, AMD didn't improve anything on the DX11 front, and people who waited and upgraded to the GTX 580 played much better, until the HD 7950/HD 7970.
So there's no reason to panic; NVIDIA will be ready when the time comes...
The only other bad thing is NVIDIA's tendency to write drivers only for the latest architecture.
If they keep doing that, people will turn their backs on them - at least the middle segment.
That's a much bigger reason for worry than Maxwell and DX12. We won't play nice DX12 games for at least 2 years.
Maybe some very rich people with multi-GPU setups will. But I'm talking about people who, as in the beginning, play games on a single powerful graphics card.


----------



## Ikaruga (Sep 8, 2015)

Vlada011 said:


> Only one other thing is bad and that's tendency to NVIDIA write driver only for last architecture.
> If they continue to do that people will turn them back. At least middle segment.
> That's much bigger reason for worry than Maxwell and DX12. We will not play nice DX12 games at least 2 years.
> Maybe some very rich people with multi GPU. But I talk for people who play games as on beginning with single powerful graphic.


Nvidia has very good drivers for older generations; even Fermi cards run recent games quite nicely, one just needs to reduce some settings, but that's always the case as time goes by: you get a new card or reduce some settings. I recently helped somebody who has a 560 Ti + a 2500K (he bought those from me), and most games still look and run quite nicely on _medium_ settings, and some even run fine on _high_. I can't imagine my Maxwell 2 card needing a replacement any time soon because of performance problems; if I replace it, it will only be because I couldn't resist the upgrade itch again.


----------



## rvalencia (Sep 10, 2015)

BiggieShady said:


> That is a claim presented at the beginning of the article. Through the end, if you read it, it is proven in benchmark that it is *not true* (number of queues horizontally and time spent computing vertically - lower is better)
> View attachment 67772
> Maxwell is faster than GCN up to 32 queues, and it evens out with GCN to 128 queues, where GCN has same speed up to 128 queues.
> It's also shown that with async shaders it's extremely important how they are compiled for each architecture.
> Good find @RejZoR



From  https://forum.beyond3d.com/posts/1870374/



For pure compute, AMD's compute latency (green color areas) rivals NVIDIA's compute latency (refer to the attached file).

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1710#post_24368195

*Here's what I think they did at Beyond3D*:

They set the amount of threads, per kernel, to 32 (they're CUDA programmers after-all).
They've bumped the Kernel count to up to 512 (16,384 Threads total).
They're scratching their heads wondering why the results don't make sense when comparing GCN to Maxwell 2

*Here's why that's not how you code for GCN*


Why?:

Each CU can have 40 Kernels in flight (each made up of 64 threads to form a single Wavefront).
That's 2,560 Threads total PER CU.
An R9 290x has 44 CUs or the capacity to handle 112,640 Threads total.

If you load up GCN with kernels made up of 32 threads, you're wasting resources, and if you're not pushing GCN you're wasting compute potential. Slide number 4 stipulates that latency is hidden by executing overlapping wavefronts. This is why GCN appears to have a high degree of latency, but you can execute a ton of work on GCN without affecting that latency. With Maxwell/2, latency rises like a staircase the more work you throw at it. I'm not sure if the folks at Beyond3D are aware of this or not.
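The occupancy arithmetic above can be checked directly. The figures are the post's own (R9 290X numbers); this just reproduces the multiplication:

```python
# Occupancy ceiling of GCN, using the numbers quoted in the post above.
WAVEFRONT_SIZE = 64   # threads per GCN wavefront
WAVES_PER_CU = 40     # kernels/wavefronts in flight per compute unit
CUS = 44              # compute units on an R9 290X

threads_per_cu = WAVEFRONT_SIZE * WAVES_PER_CU
total_threads = threads_per_cu * CUS
print(threads_per_cu)  # 2560 threads per CU
print(total_threads)   # 112640 threads across the whole GPU

# A kernel launched with only 32 threads still occupies a full 64-lane
# wavefront, so half the lanes in every wavefront sit idle:
utilization = 32 / WAVEFRONT_SIZE
print(utilization)     # 0.5
```

Which is the post's complaint in miniature: a CUDA-style 32-thread-per-kernel test leaves half of each GCN wavefront empty.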


Conclusion:

I think they geared this test towards nVIDIAs CUDA architectures and are wondering why their results don't make sense on GCN. If true... DERP! That's why I said the single Latency results don't matter. This test is only good if you're checking on Async functionality.


GCN was built for parallelism, not for serial workloads like nVIDIA's architectures. This is why you don't see GCN taking a hit with 512 kernels.

What did Oxide do? They built two paths: one with shaders optimized for CUDA, the other with shaders optimized for GCN. On top of that, GCN has async working. Therefore it is not hard to see why GCN performs so well in Oxide's engine. It's the better architecture if you push it and code for it. If you're only using light compute work, nVIDIA's architectures will be superior.

This means the burden is on developers to ensure they're optimizing for both. In the past, this hasn't been the case. Going forward... I hope they do. As for GameWorks titles, don't count on them being optimized for GCN. That's a given. Oxide played fair; others... might not.


----------



## BiggieShady (Sep 10, 2015)

rvalencia said:


> This test is only good if you're checking on Async functionality.


That's exactly what the test is for ... checking how much latency async functionality introduces on both architectures.
GCN has a constant latency: good enough for compute loads made of a small number of async tasks, and great for a huge number of them. Additionally, GCN mixes the async compute load and the graphics load in near-perfect parallelism.
Maxwell shows varying latency that is extremely low for a small number of async tasks and only exceeds GCN's past 128 async tasks. What's really bad is that with current drivers the async compute load and the graphics load are executed serially.
Mind you, every single async compute task is parallel in itself and can occupy 100% of the GPU if the job is suitable (parallelizable), so in most cases the penalty boils down to how many times, and how costly, context switching is. Maxwell has a nice cache hierarchy to help with that. 
GCN should destroy Maxwell in special cases where a huge number of async tasks depend on results calculated by a huge number of other async tasks that vary greatly in computational complexity.


----------



## Xzibit (Sep 12, 2015)

*Gears of War Ultimate Will Have Unlocked Frame Rate; Devs Explain How They’re Using DX12 & Async Compute*

To begin with, Cam McRae (Technical Director for the Windows 10 PC version) explained how they’re going to use DirectX 12 and even Async Compute in Gears of War Ultimate.



> We are still hard at work optimising the game. DirectX 12 allows us much better control over the CPU load with heavily reduced driver overhead. Some of the overhead has been moved to the game where we can have control over it. Our main effort is in parallelising the rendering system to take advantage of multiple CPU cores. Command list creation and D3D resource creation are the big focus here. We’re also pulling in optimisations from UE4 where possible, such as pipeline state object caching. On the GPU side, we’ve converted SSAO to make use of async compute and are exploring the same for other features, like MSAA.


----------



## rtwjunkie (Sep 13, 2015)

Xzibit said:


> *Gears of War Ultimate Will Have Unlocked Frame Rate; Devs Explain How They’re Using DX12 & Async Compute*
> 
> To begin with, Cam McRae (Technical Director for the Windows 10 PC version) explained how they’re going to use DirectX 12 and even Async Compute in Gears of War Ultimate.



So is MS pulling another Halo 2, and making this W10 exclusive? It sounds like it, but doesn't outright say it.


----------



## FordGT90Concept (Sep 13, 2015)

I'm sure they did.  They did that to the Windows version of Minecraft already.  Of course there's no technical reason for doing so.


----------



## Xzibit (Sep 13, 2015)

rtwjunkie said:


> So is MS pulling another Halo 2, and making this W10 exclusive? It sounds like it, but doesn't outright say it.



It's Xbox One & Windows 10.  Fable Legends is the same.


----------

