Thursday, July 14th 2016

Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark

Futuremark has released the latest addition to the 3DMark benchmark suite, the new "Time Spy" benchmark and stress test. All existing 3DMark Basic and Advanced users get limited access to "Time Spy," while existing 3DMark Advanced users have the option of unlocking its full feature-set with an upgrade key priced at US $9.99. The price of 3DMark Advanced for new users has been revised from $24.99 to $29.99, as new 3DMark Advanced purchases include the fully unlocked "Time Spy." Futuremark also announced limited-period offers, running until July 23rd, under which the "Time Spy" upgrade key for existing 3DMark Advanced users can be had for $4.99, and the 3DMark Advanced Edition (minus "Time Spy") for $9.99.

Futuremark 3DMark "Time Spy" has been developed with inputs from AMD, NVIDIA, Intel, and Microsoft, and takes advantage of the new DirectX 12 API. For this reason, the test requires Windows 10. The test almost exponentially increases the 3D processing load over "Fire Strike," by leveraging the low-overhead API features of DirectX 12, to present a graphically intense 3D test-scene that can make any gaming/enthusiast PC of today break a sweat. It can also make use of several beyond-4K display resolutions.

DOWNLOAD: 3DMark with TimeSpy v2.1.2852

91 Comments on Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark

#2
Caring1
So only works on Windows 10, and no free test version? No thanks.
Posted on Reply
#3
xkm1948
Caring1: So only works on Windows 10, and no free test version? No thanks.
There is a free test version. You need to pay to get custom runs, though.
Posted on Reply
#4
arbiter
Caring1: So only works on Windows 10, and no free test version? No thanks.
Yeah, DirectX 12 requires Windows 10.


7100 on GPU for mine; noticed a few others with a GTX 1080 got 7800, but their CPU is clocked a bit higher, so it's either that or some little setting they have set differently from me.
www.3dmark.com/spy/17592

edit: just reran the test using nearly the same clocks as the other guy and got 7650
www.3dmark.com/3dm/13220015?
Posted on Reply
#5
Divide Overflow
Know it's not your mistake, but there's a typo in the second image of test details.
3DMark Time Spy Graphics "text" 2

This ain't your granddaddy's text!
Posted on Reply
#6
Assimilator
This benchmark uses async, and the GTX 1080, GTX 1070 and GTX Titan X outperform everything from the red camp, according to Guru3D.
Posted on Reply
#7
the54thvoid
Super Intoxicated Moderator
Assimilator: This benchmark uses async, and the GTX 1080, GTX 1070 and GTX Titan X outperform everything from the red camp, according to Guru3D.
As @Ferrum Master has pointed out in the forum thread, the Titan X score is actually 7 points lower than Fury X. Hilbert has got them round the wrong way.
Posted on Reply
#8
john_
Maybe the first time we see Nvidia cards gaining something from async. Futuremark will have to give the world a few explanations if, a year from now, their benchmark is the only thing that shows gains on Pascal cards.
On the other hand, this is good news. If that dynamic load balancing Nvidia cooked up there works, it means developers will have NO excuse not to use async in their games, which will mean at least 5-10% better performance in ALL future titles.
Posted on Reply
#9
the54thvoid
Super Intoxicated Moderator
john_: Maybe the first time we see Nvidia cards gaining something from async. Futuremark will have to give the world a few explanations if, a year from now, their benchmark is the only thing that shows gains on Pascal cards.
On the other hand, this is good news. If that dynamic load balancing Nvidia cooked up there works, it means developers will have NO excuse not to use async in their games, which will mean at least 5-10% better performance in ALL future titles.
It's perfectly okay to use async in games. Nvidia's current cards (and last gen, I suppose) already work at near-optimal performance for their architectural design. AMD's implementation of Asynchronous Compute Engines (ACEs) meant that the GCN architecture worked far below its potential in DX11. So what you really have is this: before DX12, and async in particular, Nvidia gave you 95% (metaphorically speaking) of their chip's performance, while AMD could only give you 85%. With DX12, Nvidia can't give much more, but AMD can utilise the other 10-15% of the card's design.

There's a massive philosophical and scientific misunderstanding about asynchronous compute and async hardware. ACEs are proprietary to AMD and only help in DX12 async settings. Nvidia's design works on all APIs at its near-optimum. It's as if, in a 10K road race, Nvidia always starts fast and keeps that speed up. AMD, on the other hand, starts slower and without shoes on (in a pre-DX12 race) but gets into a good stride as the race progresses, starting to catch up on Nvidia. In a DX12 race, AMD actually starts with good shoes and, terrain dependent, might start faster.

ACEs are AMD's DX12 async running shoes. Nvidia hasn't changed shoes because theirs still work.
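
(In API terms, the analogy boils down to something like the hypothetical D3D12 fragment below: compute work goes to its own queue and only fences against the graphics queue where a real dependency exists, so the GPU is free to overlap the two. The device, queues and command lists are assumed to exist already; this is an illustration, not engine code.)

// Hypothetical sketch: overlapping graphics and compute work in D3D12.
// Assumes the device, both queues and both command lists already exist.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* gfxList,
                 ID3D12CommandList* computeList)
{
    // In real code the fence would be created once, not per frame.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off compute work (e.g. particles, lighting) on its own queue...
    ID3D12CommandList* compute[] = { computeList };
    computeQueue->ExecuteCommandLists(1, compute);
    computeQueue->Signal(fence.Get(), 1);   // mark the compute work as done

    // ...while graphics work runs independently on the direct queue.
    ID3D12CommandList* gfx[] = { gfxList };
    gfxQueue->ExecuteCommandLists(1, gfx);

    // Only the point that consumes the compute results waits on the fence,
    // so the GPU may overlap the two queues up to here.
    gfxQueue->Wait(fence.Get(), 1);
}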
Posted on Reply
#10
R-T-B
I see actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it on NVIDIA drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect its abilities. That said, it's lazy and should stop if that's really what's going on.
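
(Purely for illustration, and with no claim about what AotS actually ships: "reading a GPU's brand" versus "detecting its abilities" roughly amounts to the difference sketched below, where the vendor-ID check is the lazy path being criticised.)

// Illustrative sketch only, not taken from any real engine.
// "Reading the brand": keying behaviour off the PCI vendor ID via DXGI.
#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool IsNvidiaAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) != S_OK)
        return false;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.VendorId == 0x10DE;   // 0x10DE is NVIDIA's PCI vendor ID
}

// "Detecting abilities" would instead measure or query what the driver
// actually does with a second compute queue, rather than assuming behaviour
// from the vendor ID.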
Posted on Reply
#11
Recon-UK
R-T-B: I see actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it on NVIDIA drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect its abilities. That said, it's lazy and should stop if that's really what's going on.
I will run my 670 on it... I'm ready for laughable performance :D
Posted on Reply
#12
R-T-B
Recon-UK: I will run my 670 on it... I'm ready for laughable performance :D
I'm waiting for Fermi to come and play... :laugh:
Posted on Reply
#14
Frankness

Think I need a new system :fear::laugh:
Posted on Reply
#16
Aquinus
Resident Wat-man
the54thvoid: ACEs are AMD's DX12 async running shoes.
ACEs are used regardless of the API; the problem is that older APIs like DX11 and OpenGL can't take advantage of how many of them there are, so tasks aren't scheduled optimally. Async compute can also be used by any API driver that implements it, which includes Vulkan, and we've seen what kind of boost AMD cards can get going from OpenGL 4.5 to Vulkan in Doom. What blew my mind is that Doom at 5760x1080 was almost playable on my 390 at full graphics settings, when in OpenGL 4.5 it wasn't anywhere close (20-40 FPS in OpenGL versus 40-60 FPS in Vulkan). Stranger still, changing AA modes in Doom under Vulkan seems to have minimal performance impact on my 390, but I'm not exactly sure why yet.
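
(A small, hypothetical Vulkan snippet showing how those hardware queues surface to applications: each physical device advertises queue families, and GCN cards typically expose compute-only families alongside the graphics one, which is where the ACEs come into play. Illustration only.)

// Hypothetical sketch: listing compute-capable queue families with Vulkan.
// Compute-only families (COMPUTE without GRAPHICS) are how dedicated
// hardware compute queues are exposed to applications.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main()
{
    VkApplicationInfo app = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    ici.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS)
        return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, nullptr);
        std::vector<VkQueueFamilyProperties> families(familyCount);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, families.data());

        for (uint32_t i = 0; i < familyCount; ++i) {
            bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
            bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
            if (compute && !graphics)
                printf("family %u: %u dedicated compute queue(s)\n",
                       i, families[i].queueCount);
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}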
Posted on Reply
#19
Nokiron
R-T-B: I see actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it on NVIDIA drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect its abilities. That said, it's lazy and should stop if that's really what's going on.
It is implemented, but not in the way you think it is.

Tech Report wrote some interesting things about this in their GTX 1080 review.

techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/2
Posted on Reply
#21
GhostRyder
Well, glad we finally have something to test out DX12 with as a benchmark (at least a decent one). Would love to see more of this and how it translates/compares to games that eventually use DX12.
Posted on Reply
#22
efikkan
the54thvoid: It's perfectly okay to use async in games. Nvidia's current cards (and last gen, I suppose) already work at near-optimal performance for their architectural design. AMD's implementation of Asynchronous Compute Engines (ACEs) meant that the GCN architecture worked far below its potential in DX11. So what you really have is this: before DX12, and async in particular, Nvidia gave you 95% (metaphorically speaking) of their chip's performance, while AMD could only give you 85%. With DX12, Nvidia can't give much more, but AMD can utilise the other 10-15% of the card's design.
It's correct that AMD's architecture is vastly different (in terms of queues and scheduling) compared to Nvidia's. But the reason AMD may draw larger benefits from async shaders is that their scheduler is unable to saturate their huge core count. If you compare the GTX 980 Ti to the Fury X, we are talking about:
GTX 980 Ti: 2816 cores, 5632 GFlop/s
Fury X: 4096 cores, 8602 GFlop/s
(The relation is similar with other comparable products with AMD vs. Nvidia)
Nvidia is getting the same performance from far fewer resources using a much more advanced scheduler. In many cases their scheduler achieves more than 95% computational utilization, and since the primary purpose of async shaders is to put idle resources to work on different tasks, there is really very little left over for compute (which mostly uses the same resources as rendering). Multiple queues are not overhead-free either, so for them to serve any purpose there has to be a significant performance gain. This is basically why AotS gave up on Nvidia hardware and just disabled the feature, and their game was fine-tuned for AMD in the first place.

It has very little to do with Direct3D 11 vs 12.
btarunr: www.3dmark.com/3dm/13240083

My 2x GTX 970 is about as fast as a single R9 Fury X. Yay for NVIDIA's async compute illiteracy.
This benchmark proves Nvidia can utilize async shaders, ending the lie about lacking hardware features once and for all.

AMD is drawing larger benefits because they have more idle resources. Remember e.g. Fury X has 53% more Flop/s than 980 Ti, so there is a lot to use.

This benchmark also ends the myth that Nvidia is less fit for Direct3D 12.
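
(A quick back-of-the-envelope check of those figures, assuming peak FP32 = shader count x 2 FMA ops per clock x clock speed, with reference clocks of roughly 1000 MHz for the 980 Ti and 1050 MHz for the Fury X:)

// Back-of-the-envelope check of the figures above (reference clocks assumed:
// ~1000 MHz for the GTX 980 Ti, 1050 MHz for the Fury X).
#include <cstdio>

int main()
{
    // Peak FP32 = shader count * 2 ops per clock (FMA) * clock in GHz
    double gtx980ti = 2816 * 2 * 1.000;   // = 5632 GFLOP/s
    double furyx    = 4096 * 2 * 1.050;   // = 8601.6 GFLOP/s

    printf("980 Ti: %.0f GFLOP/s\n", gtx980ti);
    printf("Fury X: %.0f GFLOP/s\n", furyx);
    printf("Fury X advantage: %.0f%%\n", (furyx / gtx980ti - 1.0) * 100.0);
    // Prints roughly 53%, matching the comparison in the post.
    return 0;
}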
Posted on Reply
#23
Tomgang
Fluffmeister: Nice result, still rocking a 920 too, scary to think it will be 8 years old this November.
Yeah, I don't complain at all. This November it is indeed 8 years since the i7 920 came out. Freaking best money spent on any hardware in a long time back then.
X58 just won't die or give up, darn solid platform. I mean, my CPU has been clocked at 4 GHz for almost 4 years now and it still keeps going and going.
Posted on Reply
#24
ShurikN
efikkan: This benchmark proves Nvidia can utilize async shaders, ending the lie about lacking hardware features once and for all.

AMD is drawing larger benefits because they have more idle resources. Remember e.g. Fury X has 53% more Flop/s than 980 Ti, so there is a lot to use.

This benchmark also ends the myth that Nvidia is less fit for Direct3D 12.
I think NV does pre-emption through drivers, while AMD does it through hardware.
As Rejzor stated, NV is doing it with brute force. As long as they can, of course, that's fine. Anand should have shown Maxwell with async on/off for comparison.
Posted on Reply
#25
Hood
A middlin' result for my 2-3 year old system (i7-4790K/GTX 780 Ti)
www.3dmark.com/spy/38286
I need a GPU upgrade; seriously considering a 980 Ti, since prices are around $400 even for hybrid water-cooled models. And they're available right now, unlike the new cards. If I wait a few months, they'll get even lower...
Posted on Reply