Wednesday, December 21st 2016
Futuremark Readies New Vulkan and DirectX 12 Benchmarks
Futuremark is working on new game tests for its 3DMark benchmark suite. One of these is a game test that takes advantage of DirectX 12 but isn't as taxing on the hardware as "Time Spy." Its target hardware is notebook graphics and entry-level to mainstream graphics cards. It will be to "Time Spy" what "Sky Diver" is to "Fire Strike."
The next, more interesting move by Futuremark is a benchmark that takes advantage of the Vulkan 3D graphics API. The company will release this Vulkan-based benchmark for both Windows and Android platforms. Lastly, we've learned that development of the company's VR benchmarks is coming along nicely, and the company hopes to release new VR benchmarks for PC and mobile platforms soon. Futuremark is expected to reveal these new game tests and benchmarks at its 2017 International CES booth in early January.
29 Comments on Futuremark Readies New Vulkan and DirectX 12 Benchmarks
Didn't Time Spy get criticized for its application of async compute?
If so, I hope the Vulkan bench gets better treatment.
Read the summary from Futuremark themselves
www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy
In other words, Nvidia video cards only had to use the async feature if it increased performance; otherwise, they could completely ignore it. It's very bad marketing by Futuremark, and it isn't a proper test of async compute performance.
Right now Doom is really the only game out designed from the ground up for Async and you can see the performance benefits from that.
Besides, it's wrong to say a game must use tonnes of async compute simply because AMD's ACE units can exploit it. Devs need to code for the market, and putting in some async is fine; any async helps GCN.
However, when it all comes down to it, it's a case of people constantly whining about one API over another, as if it's somehow wrong for a suite not to try its hardest to fully utilise AMD hardware at the expense of Nvidia, and vice versa.
You can't expect DX12 or Vulkan applications to simply use all of AMD's hardware when it disadvantages Nvidia. Software devs have to code with all vendors in mind.
Edit: found this.
FYI, async compute does not disadvantage Nvidia hardware. It doesn't really give it any performance loss or gain, so the whole argument that it hurts Nvidia's performance is out the window. Nvidia have had two generations of cards in which they should have implemented async but still have not; AMD have had async in their cards since the 7000 series. At this point it's like having a processor without Hyper-Threading: it's a huge feature of DX12 and Vulkan.
It's all down to the same old story, if you put the work in (which requires resources) you benefit.
Edit: I'm at work so can't keep this discussion going :laugh:
Edit: As long as visual impact is not unduly lowered by omission of API features.
As the article you linked clearly states, the application cannot control if the card uses async compute in DirectX 12. Period.
There is no way in DirectX 12 to force a card to use async compute. All the application can do is to submit the work in multiple queues, with Compute work labeled as such, which in practice means "this work here, this is compute stuff, it is safe to run it in parallel with graphics. You are free to do so. Do your best!". The rest is up to the drivers.
With DirectX 12, the video card driver always makes the decision as to how to process multiple DX12 command queues. A benchmark developer cannot force that, short of re-writing the driver - which is obviously somewhat beyond the capabilities of an application developer...
It is possible to force a system not to use Async Compute by submitting all work, even compute work, in a single DIRECT queue, essentially claiming that all this work can only be run sequentially, but 3DMark Time Spy does this only if you specifically turn on a setting in a Custom Run that is there so you can compare between the two. All Default Runs use Async Compute.
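The submission model FM_Jarnis describes can be sketched roughly as follows. This is pseudocode using real D3D12 identifiers (`CreateCommandQueue`, `ExecuteCommandLists`, the `DIRECT`/`COMPUTE` queue types), not Futuremark's actual code, and it omits the device setup, command-list recording, and fence synchronization a real application needs:

```cpp
// Two queues: a DIRECT queue for graphics and a COMPUTE queue whose
// contents the app declares safe to run in parallel with graphics.
D3D12_COMMAND_QUEUE_DESC directDesc  = {};
directDesc.Type  = D3D12_COMMAND_LIST_TYPE_DIRECT;
D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

device->CreateCommandQueue(&directDesc,  IID_PPV_ARGS(&directQueue));
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// Default run: compute work is submitted on its own queue. The driver
// MAY overlap it with graphics, but nothing in the API forces that.
directQueue->ExecuteCommandLists(1, &graphicsList);
computeQueue->ExecuteCommandLists(1, &computeList);

// "Async off" custom run: everything goes into the DIRECT queue,
// effectively declaring the work sequential.
directQueue->ExecuteCommandLists(2, allLists);
```

The key point is in the middle block: labeling work as COMPUTE on a separate queue is a hint, and whether the two queues actually execute concurrently is entirely the driver's decision.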
Many games are using async shaders for the wrong purpose to begin with. Async shaders were intended to utilize different hardware resources for different tasks, while many games (like AotS) use them for compute shaders, which mostly use the same resources as rendering. So basically games are optimizing for inferior hardware. As AMD progresses with Vega, Navi and so on, they'll have to create better schedulers, and then there will be less and less gain from doing this, so there is no point in writing games or benchmarks targeting bad hardware.
I'm really glad 3DMark "noticed" Vulkan. Being them, I'd even make it a primary benchmark, but then I understand they don't want to be enemies with Microsoft. Pascal does support Async Compute. End of story.
FM_Jarnis posted a pretty clear explanation two posts above yours, but somehow you managed to miss it.
Async will be beneficial in the future, but today, off the top of my head, we have Nvidia, Intel and consoles all doing just fine without async.
On one hand you have a whole new API, on the other you have async which (from an API point of view) is the overload of a function to accept a queue as an argument.
It's only 50% if you count those as words in the dictionary.
D3D12's biggest feature is a completely new, very close-to-metal API that lets you extract more performance from your GPU and get predictable results by running your 3D/shader/compute code directly on the hardware, versus D3D11 and earlier, which employ very complex OS drivers that translate all your API calls into hardware instructions.
Pure bottleneck benchmarks serve no purpose other than satisfying curiosity: measuring "API overhead", GPU memory bandwidth, etc. Just a few years ago, many reviews included benchmarks at 1024x768 just to expose CPU bottlenecks, but those benchmarks are worthless when no one runs a high-end GPU at that resolution. As always, the only thing that matters is real-world performance. It doesn't matter if AMD's comparable products have more Gflop/s, more memory bandwidth, or more "gain" from certain features; at the end of the day, actual performance is the only measurement.