Friday, June 24th 2016

Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

Futuremark teased its first benchmark for DirectX 12 graphics, 3DMark "Time Spy." Likely marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene that's rich in geometric, textural, and visual detail. The benchmark is also ready for new-generation displays, including resolutions beyond 4K Ultra HD. Existing users of 3DMark get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.

Under the hood, "Time Spy" takes advantage of Direct3D feature level 12_0, including asynchronous compute, heavily multi-threaded command submission (it can make use of as many CPU cores as you can throw at it), and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with input from AMD, Intel, NVIDIA, Microsoft, and other partners of the Futuremark Benchmark Development Program.
A teaser trailer video follows.


43 Comments on Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

#26
Mussels
Freshwater Moderator
not to mention game devs won't stupidly cripple their games to punish nvidia users - who are a much larger group than AMD users (as much of an AMD fanboy as i am, i can admit nvidia are more popular)
Posted on Reply
#27
bug
rtwjunkie: I don't think you'll get any arguments from anyone that we need more than AotS. :D
AotS is good. It puts an upper limit on what gains are to be expected when using async compute heavily.
What we don't know is what happens when async compute is used more sparingly.

Nvidia also claims/implies their pipeline is already used (close) to its fullest without async compute. I'm not sure whether a benchmark can verify that, but I'd surely like for someone to shed some light in that area, too.

And, of course, there are those who, like Mussels above, have already decided that if async compute turns out to be just hot air, then it's Nvidia's fault for not letting developers use enough of it in their games ;)
Posted on Reply
#28
Hiryougan
Mussels: not to mention game devs won't stupidly cripple their games to punish nvidia users - who are a much larger group than AMD users (as much of an AMD fanboy as i am, i can admit nvidia are more popular)
As I said, it won't "punish" nvidia users. They just won't get the benefits.
Posted on Reply
#29
FordGT90Concept
"I go fast!1!11!1!"
bug: And you've never ever seen a single-threaded program beat a multithreaded one because of the multithreading overhead?
That is only the case when the multithreading is done poorly. GPUs are parallel by nature, so multithreading is already ingrained in them. AMD's implementation is more CPU-like than NVIDIA's, in that AMD can juggle lots of commands inside the GPU simultaneously. As demonstrated by GCN cards, the gains are significant. And FPS isn't the only way to judge async compute either: there's also what they are doing with it. In the case of Ashes of the Singularity, they do physics on the weapons. You're gaining FPS and realism.

Async compute is likely the reason why PS4 and XB1 went with GCN. They could put really crappy CPUs in them because they knew they could hand off a lot of heavy workloads to the GPU with async compute (case in point: physics). Async compute isn't going away. It is the direction GPU and API design has been going for the last decade (OpenCL and DirectCompute). NVIDIA needs to address it because, unlike PhysX, async compute isn't a gimmick. The sad irony is that PhysX could always have been done asynchronously as well, but NVIDIA never bothered to put that effort into their GPUs.
Posted on Reply
#30
truth teller
Mussels: nvidia users - who are a much larger group than AMD users
that's only true on the PC master race side, and in the big picture, PC is nothing compared to console. so, yeah... if games are developed with a "console first" methodology, stuff being disabled and/or unavailable for green camp users will happen (which is only fair; "enhanced" PhysX effects have been a green camp thing only for quite some time)
Posted on Reply
#31
rtwjunkie
PC Gaming Enthusiast
truth teller: pc is nothing compared to console. so, yeah...
Not really quite true. It's a fairly common misconception that console numbers are bigger, especially if we are not only talking gamers. Those numbers alone are fairly close though, IIRC.
Posted on Reply
#32
fullinfusion
Vanguard Beta Tester
I'm looking forward to this!
Posted on Reply
#33
D007
Must.. Have.. All the benchmarks!
Posted on Reply
#34
Roph
Quote: "Futuremark stated that the benchmark was developed with inputs from AMD, Intel, NVIDIA..."
Presumably nvidia begging for there to be no async, and instead using overly aggressive tessellation? :laugh:
Posted on Reply
#35
PP Mguire
Oh, is TPU beating the dead Async horse again?
Posted on Reply
#36
rtwjunkie
PC Gaming Enthusiast
PP Mguire: Oh, is TPU beating the dead Async horse again?
:D New cards. It's apparently that time again.
Posted on Reply
#37
Mussels
Freshwater Moderator
in honour of the gods of async, we have to do it really chaotically and out of order.

so yeah, expect this conversation to be going on for at least another 2-3 years.
Posted on Reply
#38
FordGT90Concept
"I go fast!1!11!1!"
I wonder how many PS4 and XB1 games use the ACEs.
Posted on Reply
#39
D007
Any ETA on a release date?
Posted on Reply
#40
HD64G
PP Mguire: Oh, is TPU beating the dead Async horse again?
Async horse was just born last year with DX12. Many years to live and reign. Next nVidia GPU gen will have it also.
Posted on Reply
#41
PP Mguire
HD64G: Async horse was just born last year with DX12. Many years to live and reign. Next nVidia GPU gen will have it also.
As in quoting AOTS benchmarks over and over in an endless battle that means squat really. The tech itself isn't a dead horse, it's something we should have had for a while. The debate is a dead horse.
Posted on Reply
#42
AsRock
TPU addict
Mussels: i have a 290 and a 970 here on very similar systems, so i'll happily compare AMDpples to Nvoranges when this is out
I'd be interested in seeing that, however it will mean nothing to me really, being a gamer and not a benchmarker.
Posted on Reply
#43
Parn
Is it just me or does the scene look completely washed out?
Posted on Reply