Monday, August 31st 2015
Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one that tilts the scales in favor of AMD's competing Graphics CoreNext (GCN) architecture as the one better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even as the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.
Async Compute is a standardized API-level feature added to Direct3D by Microsoft, which lets an app better exploit a GPU's number-crunching resources by breaking its rendering workload into graphics and compute tasks that can run concurrently. Since NVIDIA's driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but when it attempted to use the feature on Maxwell, the result was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix the issue, it learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, the developer alleges. "Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of the Singularity" benchmark in its press communications (presumably to VGA reviewers).
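Oxide's statement touches on three things a Direct3D 12 application can actually inspect: the adapter's Vendor ID, the resource binding tier the driver reports, and whether a dedicated compute queue can be created (D3D12 has no explicit "async compute" capability bit; engines simply submit to a compute-type queue alongside the graphics queue, and concurrency is up to the hardware and driver). The following is a minimal C++ sketch of those queries, not Oxide's actual code; variable names are illustrative, and the program must be linked against d3d12.lib and dxgi.lib on Windows.

// Minimal sketch: query vendor ID, resource binding tier, and create a
// compute queue on the first adapter. Illustrative only.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))   // first adapter
        return 1;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    // 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel
    printf("VendorId: 0x%04X\n", desc.VendorId);

    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Resource binding tier: Oxide notes NVIDIA reports Tier 2, AMD Tier 3.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    printf("ResourceBindingTier: %d\n", (int)options.ResourceBindingTier);

    // A separate COMPUTE queue is how "async compute" is expressed in D3D12.
    // Creating it succeeding says nothing about whether the GPU actually
    // overlaps it with graphics work - which is the crux of the complaint.
    D3D12_COMMAND_QUEUE_DESC computeQueueDesc = {};
    computeQueueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    HRESULT hr = device->CreateCommandQueue(&computeQueueDesc,
                                            IID_PPV_ARGS(&computeQueue));
    printf("Compute queue creation: %s\n", SUCCEEDED(hr) ? "OK" : "failed");
    return 0;
}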
Given its growing market share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does. NVIDIA's drivers tell Windows that its GPUs support DirectX 12 feature level 12_1. We wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.
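For reference, a feature-level claim like 12_1 is also just something the driver reports through CheckFeatureSupport. Below is a minimal sketch of that query, assuming an already-created ID3D12Device; the helper function name is our own, not part of the API.

// Minimal sketch: ask the runtime/driver for the highest supported feature level.
#include <d3d12.h>

bool SupportsFeatureLevel12_1(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                           &levels, sizeof(levels))))
        return false;

    // This only tells you what the driver *reports*; as the article argues,
    // reported support doesn't guarantee efficient hardware support.
    return levels.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_1;
}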
Sources:
DSOGaming, WCCFTech
196 Comments on Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
When AMD does it, it's the fucking guillotine for 'em. When NV does it, it's a misunderstanding.
That's why I tell people to hit and run when you "debate" on the interwebz. You stick longer in people's heads. My degree is in illustration; however, I have worked in marketing/apparel for a LONG time. Why do you think I troll so well? It's all about the delivery. You'll either love me or hate me, but YOU WILL READ what I say.
Are AMD seeing bigger gains? Definitely, but then they are coming from further back, because their DX11 performance is much worse compared to the meanies at nV.
It would be nice to at least have a baseline of 5 or so titles using different engines before jumping to any real conclusions.
It's on NV that they are out of it. Next time they should get more involved in the gaming ecosystem if they want to have a say in its development, even if it doesn't bring them the big bucks immediately.
If the initial game engine is being developed under AMD GCN with Async Compute for consoles, then we PC gamers are getting even more screwed.
We are already getting bad ports: non-optimized, downgraded from improved API paths, GameWorks middleware, driver-side emulation, developers who don't care = the PC version of Batman: Arkham Knight.
David Kanter says it needs to be under 20 ms of response time; John Carmack agrees.
A quote from some articles linked on the Anandtech forum. In other words, if you want VR without motion sickness, then AMD's Liquid VR is currently the only way. If nVidia does not add a quicker way to do async compute in Pascal (whether hardware or software), it's going to hurt them when it comes to VR.
Both sides have lied, neither side is innocent, and this is neither a win for AMD nor a loss for NV, but most importantly (and I'm putting this in all caps for emphasis, not because I'm yelling):
NOBODY BUYS A GPU THINKING "THIS IS GOING TO BE THE BEST CARD IN A YEAR." You buy for the now and hope that it'll hold up for a year or two (or longer, depending on your build cycles). Nobody can accurately predict what the tech front holds in the next few years. Sure, we all have an idea, but even the engineers that are working on the next two or three generations of hardware at this very moment cannot say for certain what's going to happen between now and release, or how a particular piece of hardware will perform in a year's time, whether the reason is drivers, software, APIs, power restrictions (please Intel, give up the 130 W TDP limit on your "Extreme" chips and give us a proper Extreme Edition again!), etc., or something completely unforeseeable.
TL;DR: Whether NV lied about its DX12 support on Maxwell is a moot point; the hardware is already in the wild. I would be extremely surprised if, by the time we have at least three AAA DX12 titles, today's top-end cards still perform on par with the top-tier cards of the future that will have DX12 support. As far as DX12 stands right now, we have a tech demo and a pre-beta game that is being run as a benchmark. Take the results with a grain of salt, and only as an indicator of how the landscape might look once DX12 is widely adopted.
And now, before I get flamed to death by all the fanboys, RM OUT!
It's nvidia's fault, not Oxide's.
Wow, by the time I typed this... Random Murderer said it a lot better.
Seriously though, if this does become a real issue: while a whole new architecture will be out by then, there will be a huge number of people with Maxwells, since they have sold so well. So it's not really moot either, though I see your thought process.
To sum up, if I were now in a position to buy a new GPU, I would choose a GCN one without second thoughts, just to be sure I will enjoy the most features and the higher performance level of DX12 games in the next 2-3 years.
I wonder why Humansmoke hasn't commented on this news....
Me I'm sailing the seas of cheese, not caring and obviously being better off by it.
How long before people start saying AMD's inclusion of Async Compute was a stupid move, that they should have charged more, and that Nvidia is doing the right thing by making you upgrade for it if Pascal incorporates it?
2. Because as others have said, the time to start sweating is when DX12 games actually arrive.
3. As I've said in earlier posts, there are going to be instances where game engines favour one vendor or the other - it has always been the case, and it will very likely continue to be so. Nitrous is built for GCN. No real surprises, since Oxide's Star Swarm was the original Mantle demo poster child. AMD gets its licks in early. Smart marketing move. It will be interesting to see how they react when they are at the disadvantage, and what mix of the hardware and software features available to DX12 different games draw on.
4. With the previous point in mind, Unreal launched UE 4.9 yesterday. The engine supports a number of features that AMD has had problems with (GameWorks), or has architectural/driver issues with. 4.9 I believe has VXGI support, and ray tracing. My guess is that the same people screaming "Nvidia SUCK IT!!!!!" will be the same people crying foul when a game emerges that leverages any of these graphical effects.....of course, Unreal Engine 4 might be inconsequential WRT AAA titles, but I very much doubt it.
PC Gaming benchmarks and performance - vendors win some, lose some. Wash. Rinse. Repeat. I just hope the knee-jerk comments keep on coming - I just love bookmarking (and screencapping, for those who retroactively rewrite history) for future reference.
The UE4.9 notes are pretty extensive, so here's an editor shot showing the VXGI support.