Friday, June 24th 2016
Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark
Futuremark teased its first benchmark for DirectX 12 graphics, 3DMark "Time Spy." Likely to be marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene rich in geometric, textural, and visual detail. The benchmark is also ready for new-generation displays, including resolutions beyond 4K Ultra HD. Existing 3DMark users get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.
Under the hood, "Time Spy" takes advantage of Direct3D feature-level 12_0, including Asynchronous Compute, heavily multi-threaded CPUs (which can make use of as many CPU cores as you can throw at it), and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with inputs from AMD, Intel, NVIDIA, Microsoft, and other partners of the Futuremark Benchmark Development Program.A teaser trailer video follows.
Under the hood, "Time Spy" takes advantage of Direct3D feature-level 12_0, including Asynchronous Compute, heavily multi-threaded CPUs (which can make use of as many CPU cores as you can throw at it), and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with inputs from AMD, Intel, NVIDIA, Microsoft, and other partners of the Futuremark Benchmark Development Program.A teaser trailer video follows.
43 Comments on Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark
What we don't know is what happens when async compute is used more sparingly.
Nvidia also claims/implies that their pipeline is already used (close) to its fullest without async compute. I'm not sure whether a benchmark can verify that, but I'd sure like someone to shed some light on that area, too.
And, of course, there are those who, like Mussels above, have already decided that if async compute turns out to be just hot air, then it's Nvidia's fault for not letting developers use enough of it in their games ;)
Async compute is likely the reason why the PS4 and XB1 went with GCN. They could put really crappy CPUs in them because they knew they could hand off a lot of heavy workloads to the GPU with async compute (case in point: physics). Async compute isn't going away. It is the direction GPU and API design has been going for the last decade (OpenCL and DirectCompute). NVIDIA needs to address it because, unlike PhysX, async compute isn't a gimmick. The sad irony is that PhysX could always have been done asynchronously as well, but NVIDIA never bothered to put that effort into their GPUs.
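To illustrate the "hand physics to the GPU" idea, here is a hypothetical sketch of recording a physics-style dispatch on a compute command list, so it can be submitted to a compute queue like the one in the earlier sketch and overlap with rendering on the direct queue. The pipeline state, root signature, and particle buffer are placeholders, not code from any shipping engine or from 3DMark.

// Hypothetical sketch: a per-frame physics pass recorded on a COMPUTE-type
// command list. All resources passed in are assumed to exist; the command
// list is assumed to be open for recording.
#include <d3d12.h>

void RecordPhysicsPass(ID3D12GraphicsCommandList* cl,        // compute-type command list
                       ID3D12PipelineState* physicsPSO,      // compute PSO (assumed)
                       ID3D12RootSignature* physicsRootSig,  // root signature (assumed)
                       D3D12_GPU_VIRTUAL_ADDRESS particleBuffer, // particle UAV buffer (assumed)
                       UINT particleCount)
{
    cl->SetPipelineState(physicsPSO);
    cl->SetComputeRootSignature(physicsRootSig);
    // Bind the particle buffer as a root UAV (root parameter 0 in this sketch).
    cl->SetComputeRootUnorderedAccessView(0, particleBuffer);

    // One thread per particle, 256 threads per group, matching a shader
    // declared with [numthreads(256, 1, 1)] in this sketch.
    const UINT groups = (particleCount + 255) / 256;
    cl->Dispatch(groups, 1, 1);
    cl->Close();
}

// The closed list is then executed on the compute queue, with a fence signal
// so the graphics queue only waits at the point it reads the updated particles.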
so yeah, expect this conversation to be going on for at least another 2-3 years.