Friday, June 24th 2016
Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark
Futuremark teased its first benchmark for DirectX 12 graphics, 3DMark "Time Spy." Likely to be marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene that's rich in geometric, textural, and visual detail. The benchmark is also ready for new-generation displays, including resolutions beyond 4K Ultra HD. Existing 3DMark users will get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.
Under the hood, "Time Spy" takes advantage of Direct3D feature-level 12_0, including Asynchronous Compute, heavily multi-threaded CPU workloads (it can make use of as many CPU cores as you can throw at it), and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with input from AMD, Intel, NVIDIA, Microsoft, and other partners of the Futuremark Benchmark Development Program. A teaser trailer video follows.
43 Comments on Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark
Multi-vendor is an alternate term I've seen a lot that makes sense too.
I think the term should be "mixed/matched," meaning either or.
I knew they're mammals.
This benchmark could be quite good to show a more 'perceived' neutral DX12 environment. If AMD and Nvidia were involved it might give us a better bench to argue with instead of AotS.
Nvidia hasn't really clarified whether it's classifying compute preemption as async compute. Nvidia has also said "async" is still not active in their driver.
Maybe they'll come out with a driver now that this is out.
The big question here is whether or not Futuremark will disable async shading when an NVIDIA card is present. I hope not.
Can't wait to see how my GTX 980 will be tortured. Again :D
Not going to upgrade anytime soon.
Hitman DX12
Highest frames and lowest frametimes despite Hitman being an AMD partnered game.
community.amd.com/thread/196920 Even Ashes:
Worst-case scenario for Nvidia
A 1080 that cannot do Async still manages to be equal to the Async King.
Better case scenario (really depends on where you look I guess) - I know it's only 1440p.
My point isn't to argue for Nvidia in the whole async debate, but rather to ask: if the GTX 1080, which can't do async, can match and even be a lot faster than a Fury X with its GCN arch and awesome async capability, why are people so bothered about it?
That's why Firestrike might be quite a good marker. And for the naysayers, all the RX 480 threads have been listing Firestrike scores from left to right, so people on both sides do give it credit.
If Pascal can perform as it does without async (on extreme workloads as well) then why try harder?
EDIT: lol, apologies for monster graph image in middle - must be AMD promoted :roll:
And I bet Futuremark was like "sure".
Intel was like: ¯\_(ツ)_/¯
And I bet Nvidia had a meeting to find a way out of it: "can't we just pay them with bags of money to remove it? Like we always do?"
And I bet Futuremark was like "best we can do is give you an option to disable it. Now hand over the money"
Implementing it in a game or benchmark means that the cards that have it will run FASTER. It doesn't mean that an NVIDIA card without it will run slower than it would if there were no async. It simply makes NO CHANGE for NVIDIA card owners. This technology isn't maiming NVIDIA cards; it only makes stuff run faster on GCN cards.
There is no reason to "turn off" async compute. There will be no performance GAIN on NVIDIA either way.
I believe the only game that lets the user decide is Ashes of the Singularity, and there were some good benchmarks done in it a while back comparing the on and off states. On the other hand, games like Hitman likely have it disabled on NVIDIA hardware with no option to enable it.
I wouldn't be surprised, at all, if NVIDIA "working with" Futuremark means async shaders will be disabled on NVIDIA hardware just like everywhere else. It comes down to context switching. AMD can switch compute units between tasks on demand almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.
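As a toy sketch of the scheduling difference the post above describes (a deliberate simplification of real GPU scheduling; the function names and millisecond figures are hypothetical, not measurements): with fine-grained switching a compute task can start almost immediately, while draw-call-boundary preemption makes it wait for the in-flight work to finish.

```python
# Toy model of preemption granularity (hypothetical milliseconds,
# not measurements of any real GPU).

def compute_start_delay(remaining_draw_ms, fine_grained):
    """Delay before a newly submitted compute task can start executing.

    Fine-grained switching starts it almost immediately;
    draw-call-boundary preemption waits out the in-flight draw.
    """
    return 0.0 if fine_grained else remaining_draw_ms

def worst_case_delay(draw_durations_ms, fine_grained):
    """Worst case over a frame: the compute task arrives just as the
    longest draw begins."""
    return 0.0 if fine_grained else max(draw_durations_ms)

print(compute_start_delay(3.0, fine_grained=True))            # 0.0 ms
print(compute_start_delay(3.0, fine_grained=False))           # 3.0 ms
print(worst_case_delay([1.5, 6.0, 2.5], fine_grained=False))  # 6.0 ms
```

The takeaway of the model: under coarse preemption the stall scales with the length of whatever draw happens to be executing, which is why long draws hurt the most.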
hooray for my hoarding OCD!
Async compute is all about doing simultaneous compute workloads without affecting frame latency. If you start increasing the amount of work beyond the supported number of queues, you should see an increase in frame latency as the GPU can't keep up. Nvidia can sort of do async compute up to a certain point, beyond which it gets bogged down by the amount of work thrown at it (Pascal should be able to keep up with standard async compute implementations if it performs as expected, even though it's not truly async compute capable). Granted, AMD also has a limit (obviously), but it's much higher than Nvidia's.
So in theory, the software should never overburden the card with more queues than it can support. In that case, there will be a performance benefit from parallelizing the work instead of doing it serially as older APIs did, and all GPUs will benefit.
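A toy latency model of the two posts above (all numbers are assumed milliseconds, not real GPU timings): serial execution adds the graphics and compute times together, while async execution overlaps compute jobs with the graphics queue up to the supported queue count, with any excess spilling over serially.

```python
# Toy frame-latency model for async compute (hypothetical milliseconds).

def frame_time_serial(graphics_ms, compute_ms):
    """Older-API behavior: compute work runs after graphics work."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_jobs_ms, max_queues):
    """DX12-style behavior: up to max_queues compute jobs overlap the
    graphics queue; any excess jobs spill over and run serially."""
    overlapped = compute_jobs_ms[:max_queues]
    spilled = compute_jobs_ms[max_queues:]
    return max([graphics_ms] + overlapped) + sum(spilled)

# A 10 ms graphics pass plus 8 ms of compute, split into two 4 ms jobs:
print(frame_time_serial(10, 8))                    # serial: 18 ms
print(frame_time_async(10, [4, 4], max_queues=2))  # fully overlapped: 10 ms
print(frame_time_async(10, [4, 4], max_queues=1))  # one job spills: 14 ms
```

The model captures both claims: when the job count stays within the queue limit, latency drops toward the longest single workload; past the limit, the spilled work starts adding back serial time.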
You're talking GPU-side, I'm talking software-side.
Nvidia did remove hardware schedulers in Maxwell but that's not what I'm referring to.
I was referring to AotS initially throwing work at Maxwell cards that they couldn't complete in a timely fashion due to the lack of proper async compute support, resulting in slightly reduced frame rates (and probably poor frame timing), so it was disabled in software so that the Maxwell cards weren't having a virtual handbrake pulled.
It doesn't happen all the time, but it does happen. That's why I'm saying more benchmarks will let us know where Nvidia actually stands.