Monday, August 31st 2015
Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one that tilts the scales in favor of AMD's competing Graphics CoreNext (GCN) architecture as the better-prepared one for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.
Async compute is a standardized API-level feature added to Direct3D 12 by Microsoft, which allows an app to better exploit the number-crunching resources of a GPU by breaking its rendering workload down into parallel graphics and compute tasks. Since the NVIDIA driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but when the benchmark attempted to use it on Maxwell, the result was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, Oxide learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, the developer alleges.
"Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of the Singularity" benchmark in its press communications (presumably to VGA reviewers).
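For context, "using async compute" in Direct3D 12 boils down to submitting work on a separate compute queue alongside the usual graphics queue. Here is a minimal C++ sketch of that setup (our own illustration, not Oxide's code):

```cpp
// Illustrative sketch: how a D3D12 app "opts into" async compute.
// Assumes the Windows 10 SDK; link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12-capable device found.");
        return 1;
    }

    // Graphics (and everything else) goes to the usual DIRECT queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Async compute is expressed as a second, independent COMPUTE queue.
    // The API always accepts this; whether the GPU truly runs both queues
    // concurrently is up to the hardware and driver.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    std::puts("Direct and compute queues created.");
    return 0;
}
```

Note that the API happily accepts the second queue on any D3D12 device; whether the GPU actually executes the two queues concurrently is entirely up to the hardware and driver, which is exactly where the Maxwell dispute lies.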
Given its growing market share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support and that rival AMD does. NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.
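For the curious, the self-reported capabilities mentioned above, the resource binding tier from Oxide's statement and the feature level, can be queried directly. A short, illustrative sketch:

```cpp
// Illustrative sketch: querying what the driver *claims* to support.
// Assumes the Windows 10 SDK; link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Resource binding tier (Tier 2 on Maxwell vs. Tier 3 on GCN, per Oxide).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    std::printf("Resource binding tier: %d\n",
                static_cast<int>(options.ResourceBindingTier));

    // Highest feature level the driver reports, out of the ones we ask about.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    std::printf("Max reported feature level: 0x%X\n",
                static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
    return 0;
}
```

Notably, there is no dedicated capability bit for async compute in these structures; an app can only create a compute queue and observe how it behaves, which is part of why driver behavior, rather than a reported flag, is what developers like Oxide end up discovering the hard way.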
Sources:
DSOGaming, WCCFTech
196 Comments on Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
Pressurized, pressurizing, does the writer even know what that means?
I mean really.....
Need a new humour-meter...
So, what you're saying is it's easy to shine, so long as the path has been laid for you. The moment you take a detour, flaws get exposed. Have to commend AMD on this one: while they are hurting, at least they are truly baking in support for the features they claim to support. Not just a quick once-over to get to market (obviously they are not rushed on that front...)
Bluffing driver.... Wonder what other features we "think" we are using.
Bravo guys... bravo... I need the damn prices going down!
There are two points here to look at: first, Oxide is saying that Nvidia has had access to the source code of the game for over a year, and that they've been getting updates to the source code on the same day as AMD and Intel.
"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."
Second, they claim they're using the so-called async compute units found on AMD's GPUs. These are parts of AMD's recent GPUs that can be used to asynchronously schedule work onto underutilized GCN clusters (a rough sketch of that queue setup follows the quote below).
"AFAIK, Maxwell doesn’t support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it then to not. Weather or not Async Compute is better or not is subjective, but it definitely does buy some performance on AMD’s hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to it’s scheduler is hard to say."
We are late in the game on Maxwell. Pascal should be here in maybe six months, and I think I read that they planned to add async compute back in on Pascal.
Hope it helps AMD to sell more cards.
-Complain about article content, Check.
-Insult moderator, Check.
so its like this:
Oxide: Look! we got benchmarks!
AMD: oh my we almost beat The Green Meanies
Nvidia: @oxide you cheated on the benchmarks
Oxide: did not. nyah.
Nvidia: disable competitive features so our non-async bluff works right!
Oxide: not gonna happen
Nvidia: F*** you AMD! you're not better than us! we'll fix you with our l33t bluffing skillz
AMD: *poops pants* and hides in a corner eating popcorn.
Be that as it may, I'd also like to know which version of Maxwell allegedly doesn't support async shaders - v1, v2, or both?
If this is true, then it's a massive c**k-up on nVIDIA's part, but one that probably won't affect them until the end of this year, when big-name DX12 games arrive for the holiday season. Even so, said games will still probably have DX11 rendering paths, and once Pascal arrives in March/April 2016 this will all be forgotten... assuming Pascal isn't delayed.
Later, when they do support the new shiny, they go out of their way to claim it's now more important and you should want it.
From a business standpoint, not supporting the latest and greatest features of DirectX makes sense. How much did NV spend on SM3.0 support in their 6000 series? By the time the games that supported it were out, most were too resource-heavy for anything short of the 6800 Ultra.
What about the DirectX 10 fiasco? How much was spent on getting compliance, only to have next to no games support it and instead stick with DirectX 9 until 11 came out?
Though I am curious, like the previous poster, whether Kepler also suffers from this or if it's something that was nerfed for Maxwell.
Not that I have a clue........I'm just broke.
I am so tired of things like this...
It's been known that AMD has baked async compute into their hardware since GCN, and even into the PS/Xbox consoles. And it was known well before Maxwell that async compute was going to be something DX12 would be able to leverage.
So Nvidia has been selling a shitload of cards, and now all those cards are found to offer, at best, emulated support somewhere down the line. Did they intend to design hardware with so little native support (if any) that one might see it as negligent? And what were they expecting most Maxwell (and even Kepler) owners to do, wait for Pascal and be happy to throw more money at them… while they watch the resale value of their cards plummet? Sure, right now owners can just convince themselves they can make do with some half-baked support till more DX12 games start to show.
Nvidia must provide a clear and truthful statement as to the goings-on with this, or… IDK