Friday, June 24th 2016

Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

Futuremark teased its first benchmark for DirectX 12 graphics, the 3DMark "Time Spy." Likely marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene that's rich in geometric, textural, and visual detail. The benchmark is also ready for new-generation displays, including resolutions beyond 4K Ultra HD. Existing users of 3DMark will get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.

Under the hood, "Time Spy" takes advantage of Direct3D feature level 12_0, including Asynchronous Compute; heavy CPU multi-threading (it can use as many CPU cores as you can throw at it); and DirectX explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with input from AMD, Intel, NVIDIA, Microsoft, and other partners in the Futuremark Benchmark Development Program.
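For context, the snippet below is a minimal sketch (not Futuremark's code) of how any application can test an adapter for Direct3D feature level 12_0 support and enumerate every GPU that explicit multi-adapter could use, via the public D3D12/DXGI APIs:

// Illustrative only: probe each adapter for Direct3D feature level 12_0 support,
// the level "Time Spy" targets, and list everything explicit multi-adapter could use.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Passing nullptr for the device output only tests whether the adapter
        // can create a device at the requested feature level (12_0 here).
        const bool fl12_0 = SUCCEEDED(D3D12CreateDevice(
            adapter.Get(), D3D_FEATURE_LEVEL_12_0, __uuidof(ID3D12Device), nullptr));

        wprintf(L"Adapter %u: %s - feature level 12_0 %s\n",
                i, desc.Description, fl12_0 ? L"supported" : L"not supported");
    }
    return 0;
}

Explicit multi-adapter then lets the application drive each of those devices directly, rather than relying on driver-level SLI or CrossFire.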
A teaser trailer video follows.

43 Comments on Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

#1
Mussels
Freshwater Moderator
BTA, the term is "mis-matched" or mixed, not "mixed matched"

Multi-vendor is an alternate term I've seen a lot that makes sense too.
Posted on Reply
#2
Caring1
MusselsMulti-vendor is an alternate term
You find him at the fish market? :roll:
I think the term should be "mixed/matched", meaning either-or.
Posted on Reply
#3
Mussels
Freshwater Moderator
Caring1You find him at the fish market? :roll:
my good sir, contrary to rumours btarunr is not a fishmonger.
Posted on Reply
#4
ViperXTR
including Asynchronous Compute
Here we go...
Posted on Reply
#5
the54thvoid
Super Intoxicated Moderator
Musselsmy good sir, contrary to rumours btarunr is not a fishmonger.
But he does have a porpoise in life.

I knew they're mammals.

This benchmark could be quite good for showing a more neutral (or at least perceived-as-neutral) DX12 environment. If AMD and Nvidia were both involved, it might give us a better bench to argue over than AotS.
Posted on Reply
#6
Mussels
Freshwater Moderator
i have a 290 and a 970 here on very similar systems, so i'll happily compare AMDpples to Nvoranges when this is out
Posted on Reply
#7
Xzibit
the54thvoidBut he does have a porpoise in life.

I knew they're mammals.

This benchmark could be quite good for showing a more neutral (or at least perceived-as-neutral) DX12 environment. If AMD and Nvidia were both involved, it might give us a better bench to argue over than AotS.
When has 3DMark ever equated to in-game performance? DX12 is even worse due to developers' implementations.

Nvidia hasn't really clarified whether it's classifying Compute Pre-emption as Async Compute. Nvidia has also said "Async" is still not active in its driver.

Maybe they'll come out with a driver now that this is out.
Posted on Reply
#8
FordGT90Concept
"I go fast!1!11!1!"
ViperXTRHere we go...
Indeed, I wonder if there is going to be a big rift between Maxwell, GCN, and Pascal.
XzibitWhen has 3DMark ever equated to in-game performance? DX12 is even worse due to developers' implementations.

Nvidia hasn't really clarified whether it's classifying Compute Pre-emption as Async Compute. Nvidia has also said "Async" is still not active in its driver.

Maybe they'll come out with a driver now that this is out.
That's the big question. Async shading isn't something a driver alone can do; its implementation should be 90% in silicon.

The other big question is whether or not Futuremark will disable async shading when an NVIDIA card is present. I hope not.
Posted on Reply
#9
RejZoR
If anything, AMD will be very happy about the async shader support, where they still reign supreme. I think NVIDIA still doesn't have functional async in the GTX 1080 lineup. If they had, they'd be bragging about it everywhere. But they are suspiciously quiet instead...

Can't wait to see how my GTX 980 will be tortured. Again :D
Posted on Reply
#10
P4-630
Well I'm unable to run this, still on win 8.1 :p
Not going to upgrade anytime soon.
Posted on Reply
#11
the54thvoid
Super Intoxicated Moderator
XzibitWhen has 3DMark ever equated to in-game performance? DX12 is even worse due to developers' implementations.

Nvidia hasn't really clarified whether it's classifying Compute Pre-emption as Async Compute. Nvidia has also said "Async" is still not active in its driver.

Maybe they'll come out with a driver now that this is out.
It doesn't actually matter a whole lot which way Nvidia handles asynchronous tasks. People ought to ease up on the whole argument. If Nvidia doesn't do Async, this should really worry people:

Hitman DX12 (benchmark chart): highest frames and lowest frame times despite Hitman being an AMD-partnered game.

community.amd.com/thread/196920
Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality without compromising performance.
Even Ashes:

Worst-case scenario for Nvidia (chart): a 1080 that cannot do async still manages to be equal to the async king.

Better-case scenario (chart; really depends on where you look, I guess): I know it's only 1440p.
My point isn't to argue for Nvidia in the whole async debate, but rather to say that if the GTX 1080 can match, and even be a lot faster than, a Fury X with its awesome async capability on the GCN arch, why are people so bothered about it?

That's why Firestrike might be quite a good marker. And for the naysayers: all the RX 480 threads have been listing Firestrike scores left and right, so people on both sides do give it credit.

If Pascal can perform as it does without async (on extreme workloads as well) then why try harder?

EDIT: lol, apologies for monster graph image in middle - must be AMD promoted :roll:
Posted on Reply
#12
Shamalamadingdong
I bet AMD was adamant about having Async Compute implemented.

And I bet Futuremark was like "sure".
Intel was like: ¯\_(ツ)_/¯

And I bet Nvidia had a meeting to find a way out of it: "can't we just pay them with bags of money to remove it? Like we always do?"

And I bet Futuremark was like "best we can do is give you an option to disable it. Now hand over the money"
Posted on Reply
#13
Hiryougan
I see a lot of people still misunderstand AC.
Implementing it in a game or benchmark means that the cards that have it will run FASTER. It doesn't mean that if an Nvidia card doesn't have it, it will run slower than it would if there were no async. It simply makes NO CHANGE for Nvidia card owners. This technology isn't maiming Nvidia cards, it's only there to make stuff run faster on GCN cards.

There is no reason to "turn off" async compute. There will be no performance GAIN on Nvidia.
Posted on Reply
#14
bug
FordGT90ConceptThat's the big question. Async shading isn't something a driver alone can do; its implementation should be 90% in silicon.
Actually, async compute is an API requirement. DX says nothing (nor should it) about how it is to be implemented. If Nvidia can honour the async compute contract without actually having the feature in their silicon AND without a hefty performance penalty, good for them. Hopefully this benchmark will be able to shed a little more light on the matter.
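Purely as an illustration of that contract (a sketch under standard D3D12 assumptions, helper name made up): any conforming D3D12 device has to accept a dedicated compute queue, and the API leaves it entirely to the driver and silicon whether work on that queue truly overlaps the graphics queue or quietly gets serialized.

// Sketch only: the API-level "async compute" contract is just a second queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    // The call succeeds on every D3D12 device; whether submissions to this
    // queue actually run alongside the direct (graphics) queue is up to the
    // hardware scheduler, which is exactly what a benchmark like this probes.
    return queue;
}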
Posted on Reply
#15
FordGT90Concept
"I go fast!1!11!1!"
the54thvoidIt doesn't actually matter a whole lot which way Nvidia handles asynchronous tasks. People ought to ease up on the whole argument. If Nvidia doesn't do Async, this should really worry people:

*snip*
Game developers tend to disable async compute on NVIDIA hardware, so benchmarks are invalid for checking async compute unless the developer has said they aren't disabling it on NVIDIA hardware. Case in point: that Ashes of the Singularity benchmark pretty clearly has it enabled for AMD cards (note the pretty significant FPS boost) while NVIDIA cards have it disabled (more or less equal FPS).

I believe the only game that lets the user decide is Ashes of the Singularity, and there were some good benchmarks done in it a while back comparing the on and off states. On the other hand, games like Hitman likely have it disabled on NVIDIA hardware with no option to enable it.

I wouldn't be surprised, at all, if NVIDIA "working with" Futuremark means async shaders will be disabled on NVIDIA hardware just like everywhere else.
bugActually, async compute is an API requirement. DX says nothing (nor should it) about how it is to be implemented. If Nvidia can honour the async compute contract without actually having the feature in their silicon AND without a hefty performance penalty, good for them. Hopefully this benchmark will be able to shed a little more light on the matter.
It's context switching. AMD can switch compute units between tasks on demand almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.
Posted on Reply
#16
Mussels
Freshwater Moderator
FordGT90ConceptGame developers tend to disable async compute on NVIDIA hardware, so benchmarks are invalid for checking async compute unless the developer has said they aren't disabling it on NVIDIA hardware. Case in point: that Ashes of the Singularity benchmark pretty clearly has it enabled for AMD cards (note the pretty significant FPS boost) while NVIDIA cards have it disabled (more or less equal FPS).

I believe the only game that lets the user decide is Ashes of the Singularity, and there were some good benchmarks done in it a while back comparing the on and off states. On the other hand, games like Hitman likely have it disabled on NVIDIA hardware with no option to enable it.


It's context switching. AMD can switch compute units between tasks on demand almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. You can write software to accept two threads and merge them into one thread, but it is obvious, in terms of performance, that the software is not multithreaded. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.
and this is why i have one GPU from each team!

hooray for my hoarding OCD!
Posted on Reply
#17
P4-630
Musselsand this is why i have one GPU from each team!

hooray for my hoarding OCD!
Now if only you could hot-swap cards :D
Posted on Reply
#18
FordGT90Concept
"I go fast!1!11!1!"
Musselsand this is why i have one GPU from each team!

hooray for my hoarding OCD!
I bet you're just giddy for games to start supporting D3D12 multi-GPU, aren't you?
Posted on Reply
#19
Mussels
Freshwater Moderator
FordGT90ConceptI bet you're just giddy for games to start supporting D3D12 multi-GPU, aren't you?
you bet my over-strained PSU i am!
Posted on Reply
#20
Shamalamadingdong
HiryouganI see a lot of people still misunderstand AC.
Implementing it in a game or benchmark means that the cards that have it will run FASTER. It doesn't mean that if an Nvidia card doesn't have it, it will run slower than it would if there were no async. It simply makes NO CHANGE for Nvidia card owners. This technology isn't maiming Nvidia cards, it's only there to make stuff run faster on GCN cards.

There is no reason to "turn off" async compute. There will be no performance GAIN on Nvidia.
I'm pretty sure that is somewhat wrong. After all, Nvidia did forcibly disable it in drivers to avoid increased frame latencies on Maxwell.

Async Compute is all about doing simultaneous compute workloads without affecting frame latency. If you start increasing the amount of work beyond the supported number of queues, you should see an increase in frame latency as the GPU can't keep up. Nvidia can sort of do Async Compute up to a certain point, beyond which it gets bogged down by the amount of work thrown at it (Pascal should be able to keep up with standard Async Compute implementations if it performs as expected, even though it's not truly Async Compute capable). Granted, AMD also has a limit (obviously), but it's much higher than Nvidia's.

So in theory, the software should never overburden the card with more queues than it can support. In that case, there will be a performance benefit from parallelizing the work instead of doing it serially as with previous APIs, and all GPUs will benefit.
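As a rough sketch of what that parallelized submission looks like from the application side (hypothetical function and variable names, error handling omitted): compute work goes to its own queue, independent graphics work is submitted at the same time, and a fence marks the one point where the two must meet.

// Sketch: overlap independent graphics and compute work on separate queues,
// synchronizing only where the graphics side consumes the compute results.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* gfxQueue,      // D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12CommandQueue* computeQueue,  // D3D12_COMMAND_LIST_TYPE_COMPUTE
                 ID3D12CommandList* gfxList,        // pre-recorded graphics commands
                 ID3D12CommandList* computeList)    // pre-recorded compute commands
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off the compute work; a GPU with real async hardware can run it
    // concurrently, one without simply schedules it around the graphics work.
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence.Get(), 1);

    // Independent graphics work is submitted without waiting.
    gfxQueue->ExecuteCommandLists(1, &gfxList);

    // Only the point that consumes the compute output waits on the fence.
    gfxQueue->Wait(fence.Get(), 1);
}

The fewer and later those wait points are, the more a GPU with a capable hardware scheduler can actually overlap the two queues.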
Posted on Reply
#21
rtwjunkie
PC Gaming Enthusiast
ShamalamadingdongAfter all, Nvidia did forcibly disable it in drivers to avoid increased frame latencies on Maxwell.
Actually, it was removed in hardware. This is how they were able to run higher clocks at cooler temps with less power than Kepler while still at 28nm.
Posted on Reply
#22
FordGT90Concept
"I go fast!1!11!1!"
Kepler doesn't support it either, though (it does it serially). That said, GCN 1.0 cards only had 2 ACEs versus 8 in GCN 1.1 and up.
Posted on Reply
#23
Shamalamadingdong
rtwjunkieActually, it was removed in hardware. This is how they were able to run higher clocks at cooler temps with less power than Kepler while still at 28nm.
That's a different thing.
You're talking GPU-side, I'm talking software-side.

Nvidia did remove hardware schedulers in Maxwell but that's not what I'm referring to.

I'm referring to the fact that AotS was initially throwing work at Maxwell cards that they couldn't complete in a timely fashion, due to the lack of proper Async Compute support, resulting in a slightly reduced frame rate (and probably poor frame timing); so it was disabled in software so that the Maxwell cards weren't running with a virtual handbrake pulled.
Posted on Reply
#24
bug
FordGT90ConceptIt's context switching. AMD can switch compute units between tasks on demand almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.
And you've never ever seen a single-threaded program beat a multithreaded one because of the multithreading overhead?
It doesn't happen all the time, but it does happen. That's why I'm saying more benchmarks will let us know where Nvidia actually stands.
Posted on Reply
#25
rtwjunkie
PC Gaming Enthusiast
bugThat's why I'm saying more benchmarks will let us know where Nvidia actually stands.
I don't think you'll get any arguments from anyone that we need more than AotS. :D
Posted on Reply