Monday, August 31st 2015
Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
It turns out that NVIDIA's "Maxwell" architecture has an Achilles' heel after all, one that tilts the scales in favor of AMD's competing Graphics Core Next (GCN) architecture as the one better prepared for DirectX 12. "Maxwell" lacks support for async compute, one of the three highlight features of Direct3D 12, even though the GeForce driver "exposes" the feature's presence to apps. This came to light when game developer Oxide Games alleged that it was pressured by NVIDIA's marketing department to remove certain features from its "Ashes of the Singularity" DirectX 12 benchmark.
Async Compute is a standardized API-level feature added to Direct3D by Microsoft, which allows an app to better exploit a GPU's number-crunching resources by breaking its rendering workload into parallel graphics and compute tasks. Since the NVIDIA driver tells apps that "Maxwell" GPUs support it, Oxide Games simply built its benchmark with async compute support, but when it attempted to use the feature on Maxwell, the result was an "unmitigated disaster." During the course of its developer correspondence with NVIDIA to try and fix this issue, Oxide learned that "Maxwell" doesn't really support async compute at the bare-metal level, and that the NVIDIA driver bluffs its support to apps. NVIDIA instead started pressuring Oxide to remove the parts of its code that use async compute altogether, the developer alleges.

"Personally, I think one could just as easily make the claim that we were biased toward NVIDIA as the only "vendor" specific-code is for NVIDIA where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that NVIDIA does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports," writes Oxide, in a statement disputing NVIDIA's "misinformation" about the "Ashes of the Singularity" benchmark in its press communications (presumably to VGA reviewers).
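To make the mechanics concrete, below is a minimal, illustrative D3D12 sketch (not Oxide's actual code) of how an engine sets up async compute. Notably, the D3D12 API has no explicit capability bit for async compute: an app simply creates a compute queue alongside its direct (graphics) queue, and whether the two streams of work actually execute concurrently is up to the driver and hardware - which is why Oxide could only discover Maxwell's behavior by measuring it.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The usual "direct" (graphics) queue that every renderer has.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Submitting compute work here while the
    // direct queue renders is what "async compute" means in D3D12. This call
    // succeeds on every driver; whether the two queues truly overlap depends
    // on the hardware scheduler.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```

On GCN, the hardware's Asynchronous Compute Engines can schedule the compute queue alongside graphics work; the allegation here is that Maxwell accepts the same submissions but fails to execute them concurrently with acceptable performance.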
Given its growing market share, NVIDIA could use similar tactics to steer game developers away from industry-standard API features that it doesn't support and that rival AMD does. NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1; we wonder how much of that support is faked at the driver level, like async compute. The company is already drawing flak for borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game.
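The feature-level and binding-tier claims, at least, are things any app can query for itself. Here is a quick sketch using the standard ID3D12Device::CheckFeatureSupport call (again illustrative; as this story shows, what the driver reports and what the hardware executes well are not always the same thing):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Highest Direct3D feature level the driver claims to support
    // (e.g. 12_1, which NVIDIA reports for Maxwell v2).
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    printf("Max feature level: 0x%x\n", levels.MaxSupportedFeatureLevel);

    // Resource binding tier: the Tier 2 (Maxwell) vs. Tier 3 (GCN)
    // difference Oxide mentions is reported here.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &opts, sizeof(opts));
    printf("Resource binding tier: %d\n", opts.ResourceBindingTier);
}
```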
Sources:
DSOGaming, WCCFTech
196 Comments on Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12
Top-end sales generally aren't the only benefit of the top end; they indirectly drive sales of lower parts through the halo effect. Nvidia probably sells a bunch of GT (and lower-end GTX) 700/900 series cards thanks to the same halo effect from the Titan and 980 Ti - a little reflected glory, if you like.
AMD obviously didn't foresee GM200 scaling as well as it did (clocks largely unaffected by the increased die size) when it laid down Fiji's design, and had Fiji been unreservedly the "world's fastest GPU" as intended, it would have boosted sales of the lower tiers. AMD's mistake was not taking into account that the opposition also has capable R&D divisions, but when AMD signed up for HBM in late 2013, it had to make a decision on estimates and the information available. Hynix's own rationale seemed to be keeping pace with Samsung (who had actually already begun rolling out 3D NAND tech by this time). AMD's involvement surely stemmed from HSA and hUMA in general - consoles leverage the same tech, to be sure, but I think that was only part of the whole HSA implementation strategy.
2. Your personality-based attacks show you are a hypocrite, i.e. not much different from a certain WCCFTech comment section. Do you really believe that Gigabyte and XFX Maxwell v2s are trouble-free? Your assertion says otherwise.
I'd suggest you calm down, you're starting to sound just like the loons at WTFtech...assuming your violent defense of them means you aren't a fully paid up Disqus member already. Quoted for truth.
As posted earlier in this thread, the WCCFTech post was from Oxide, i.e. read the full post at www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995
That's a double-standard viewpoint: forums.evga.com/GTX-970-Black-Screen-Crash-during-game-SOLVED-RMA-m2248453.aspx
Here's the kicker, in case you don't understand why I'm singling out the Unreal Engine 4 part of his post: Fable Legends USES Unreal Engine 4.
Now, given that Epic haven't exactly hidden their association with Nvidia's game development program: what are the chances that Unreal Engine 4 (and patched UE3 builds) operates exactly the same way as Oxide's Nitrous Engine - an engine overhauled (if not developed) as a demonstrator for Mantle - as rvalencia asserts? That would make Epic pretty lazy, considering UE4 builds are easily sourced and Epic runs a pretty extensive forum. I would have thought that a game developer might take more than a passing interest in one of the most widely licensed game engines, considering evaluation costs nothing.
I can appreciate that you'd concentrate on your own engine, but I'd find it difficult to imagine that they wouldn't keep an eye on what the competition are doing, especially when the cost is minimal and the information is freely available.
Anyways, you guys are so mean. I can't comprehend how it's even possible that such people exist. Yes, and Image quality CHECK. ;)
www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head/2
Effectively, the Fury X (a £550 card) runs pretty much the same as a 980 Ti (a £510 card), or 5-10% better depending on the workload (ironically, the heavier the workload, the better the 980 Ti fares). *stock design prices
This is using an engine Nvidia say has an MSAA bug, yet the 4xMSAA benchmarks at 1080p and 4K (charts omitted) tell the same story.
So is this whole shit fest about top-end Fiji and top-end Maxwell being... EQUAL? OMG, stop the freaking bus... (I should've read these things earlier.)
This is actually hilarious. All the AMD muppets saying all the silly things about Nvidia and all the Nvidia muppets saying the same about AMD when the reality of DX12 is.......
They're the same.
wow.
Why aren't we all hugging and saying how great this is? Why are we fighting over parity?
Oh, one caveat from extremetech themselves: that bit in bold is very important... DX12 levels the field, even 55-45 in AMD's favour, but DX11 is AMD's Achilles' heel, and it's worse still in DX9.
lol.
All the other so-called (by you) "broad terms" are subjective. About materials specifically, I told you that you can NOT rely on an iPhone, because one or two drops on the ground will be enough to break the screen.
* There is an anecdote in my country: people buy super-expensive iPhones for 500-600-700 euros and then don't have 2-3 euros to sit in a cafe. :laugh:
Lastly, this is a thread about AMD. Why the hell are you talking about Apple? Stick to the topic and stop being a smart-ass. AMD competes on price, and there have been arguments in the past about IQ settings. The simple fact is that nVidia cards can look just as good; they're just tuned for performance out of the box. Nothing more, nothing less.
There is no need for a hard impact - just hitting a flat surface like asphalt is enough. And that's not true - there are videos you can watch showing that many other brands offer phones which don't break when they hit the ground.
Oh, and Apple is successful because it's an American company, and those guys in the USA just support it on a nationalistic basis.
This is the nice way of me telling you to shut up and stop posting bullshit but, it appears that I needed to spell that out for you.
This is the line which introduced Apple: