
Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark

btarunr

Editor & Senior Moderator
Futuremark has released the latest addition to the 3DMark benchmark suite, the new "Time Spy" benchmark and stress test. All existing 3DMark Basic and Advanced users have limited access to "Time Spy," while existing 3DMark Advanced users have the option of unlocking its full feature set with an upgrade key priced at US $9.99. The price of 3DMark Advanced for new users has been revised from $24.99 to $29.99, as new 3DMark Advanced purchases include the fully unlocked "Time Spy." Futuremark also announced limited-period offers lasting until 23rd July, during which the "Time Spy" upgrade key for existing 3DMark Advanced users can be had for $4.99, and the 3DMark Advanced Edition (minus "Time Spy") for $9.99.

Futuremark 3DMark "Time Spy" has been developed with inputs from AMD, NVIDIA, Intel, and Microsoft, and takes advantage of the new DirectX 12 API. For this reason, the test requires Windows 10. The test almost exponentially increases the 3D processing load over "Fire Strike," by leveraging the low-overhead API features of DirectX 12, to present a graphically intense 3D test-scene that can make any gaming/enthusiast PC of today break a sweat. It can also make use of several beyond-4K display resolutions.



DOWNLOAD: 3DMark with TimeSpy v2.1.2852

 


Not too bad.
 
So only works on Windows 10, and no free test version? No thanks.
 
I know it's not your mistake, but there's a typo in the second image of the test details.
3DMark Time Spy Graphics "text" 2

This ain't your granddaddy's text!
 
This benchmark uses async, and the GTX 1080, GTX 1070 and GTX Titan X outperform everything from the red camp, according to Guru3D.
 
This benchmark uses async, and the GTX 1080, GTX 1070 and GTX Titan X outperform everything from the red camp, according to Guru3D.

As @Ferrum Master has pointed out in the forum thread, the Titan X score is actually 7 points lower than Fury X. Hilbert has got them round the wrong way.
 
This may be the first time we see Nvidia cards gaining something from async. Futuremark will have to give the world a few explanations if, a year from now, their benchmark is the only thing that shows gains on Pascal cards.
On the other hand this is good news. If that dynamic load balancing Nvidia cooked up there works, it means developers will have NO excuse not to use async in their games, which will mean at least 5-10% better performance in ALL future titles.
 
This may be the first time we see Nvidia cards gaining something from async. Futuremark will have to give the world a few explanations if, a year from now, their benchmark is the only thing that shows gains on Pascal cards.
On the other hand this is good news. If that dynamic load balancing Nvidia cooked up there works, it means developers will have NO excuse not to use async in their games, which will mean at least 5-10% better performance in ALL future titles.

It's perfectly okay to use async in games. Nvidia's current cards (and last gen, I suppose) already work out at near optimal performance for the architectural design. AMD's implementation of Asynchronous Compute Engines (ACEs) meant that the GCN architecture worked far below its potential in DX11. So what you really have is: before DX12 and async in particular, Nvidia gave you 95% (metaphorically speaking) of their chip's performance, while AMD could only give you 85%. With DX12, Nvidia can't give much more, but AMD can utilise the other 10-15% of the card's design.

There's a massive philosophical and scientific misunderstanding about asynchronous compute and async hardware. ACEs are proprietary to AMD and only help in DX12, async settings. Nvidia's design works on all APIs at its near optimum. It's as if, in a 10K road race, Nvidia always starts the race fast and keeps that speed up. AMD, on the other hand, starts slower and without shoes on (in a pre-DX12 race) but gets into a good stride as the race progresses, starting to catch up on Nvidia. In a DX12 race, AMD actually starts the race with good shoes and, terrain dependent, might start faster.

ACEs are AMD's DX12 async running shoes. Nvidia hasn't changed shoes because theirs still work.
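
To put the running-shoes analogy in API terms, here is a rough D3D12 sketch (purely illustrative, nothing from 3DMark's actual code) of what "using async compute" means for an application: a second command queue of type COMPUTE that the engine can feed in parallel with the graphics queue. Whether the GPU truly overlaps the two streams is up to the driver/hardware scheduler, i.e. AMD's ACEs or Pascal's dynamic load balancing.

```cpp
// Illustrative only: creating a graphics queue plus a separate compute queue in D3D12.
// Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The "direct" queue accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // The "compute" queue accepts compute and copy work only. Work submitted
    // here *may* run concurrently with the direct queue, depending on what
    // the driver and hardware scheduler decide to do with it.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```

As I understand it, the async toggle in Time Spy basically controls whether compute workloads are submitted to a queue like this or folded back into the single direct queue.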
 
I see an actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it in Nvidia drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect a GPU's abilities. That said, it's lazy and should stop if that's really what's going on.
 
I see an actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it in Nvidia drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect a GPU's abilities. That said, it's lazy and should stop if that's really what's going on.

I will run my 670 on it... I'm ready for laughable performance :D
 

Think I need a new system :fear::laugh:
 

ACEs are AMD's DX12 async running shoes.
ACEs are used regardless of the API being used; the problem is that older APIs like DX11 and OpenGL can't take advantage of how many of them there are, so tasks aren't getting scheduled optimally. Also, async compute can be utilized by any API driver that implements it, which includes Vulkan as well, and we've seen what kind of boost AMD cards can get between OpenGL 4.5 and Vulkan with Doom. What blew my mind is that Doom at 5760x1080 was almost playable on my 390 at full graphics, when in OpenGL 4.5 it wasn't anywhere close (20-40 FPS for OGL versus 40-60 FPS in Vulkan). What is even stranger is that changing AA modes in Doom running under Vulkan seems to have minimal performance impact on my 390, but I'm not exactly sure why yet.
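
To illustrate how that looks on the Vulkan side (just a sketch of the standard queue-family query, not id Software's actual code): an engine asks the driver for a queue family that supports compute but not graphics, which on GCN is what maps onto the ACEs.

```cpp
// Illustrative only: finding a dedicated compute queue family in Vulkan.
#include <vulkan/vulkan.h>
#include <vector>

int FindAsyncComputeQueueFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        // Compute-capable but without the graphics bit: a queue the engine
        // can feed independently of the main graphics queue.
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return static_cast<int>(i);
    }
    return -1; // no dedicated compute family exposed by this driver/GPU
}
```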
 
I see an actual performance loss from enabling async on my Maxwell Titan X. My guess is that async is not implemented in Maxwell, but is in Pascal, just as NVIDIA said.

The reason we see no Pascal gains in other apps using async compute (AotS in particular) is probably because they blatantly blacklist it in Nvidia drivers, even Pascal ones. Why? Few people are running Pascal, and it's easier to read a GPU's brand than to properly detect a GPU's abilities. That said, it's lazy and should stop if that's really what's going on.
It is implemented, but not in the way you think it is.

TechReport wrote some interesting things regarding this in their GTX 1080 review.

https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/2
 
Amazing to see that run at playable frames!

 

Well, glad we finally have something to test out DX12 as a benchmark (at least a decent one). Would love to see more of this and how it translates/compares to games that eventually use DX12.
 
It's perfectly okay to use async in games. Nvidia's current cards (and last gen, I suppose) already work out at near optimal performance for the architectural design. AMD's implementation of Asynchronous Compute Engines (ACEs) meant that the GCN architecture worked far below its potential in DX11. So what you really have is: before DX12 and async in particular, Nvidia gave you 95% (metaphorically speaking) of their chip's performance, while AMD could only give you 85%. With DX12, Nvidia can't give much more, but AMD can utilise the other 10-15% of the card's design.
It's correct that AMD's architecture is vastly different (in terms of queues and scheduling) compared to Nvidia's. But the reason AMD may draw larger benefits from async shaders is that their scheduler is unable to saturate their huge core count. If you compare the GTX 980 Ti to the Fury X, we are talking about:
GTX 980 Ti: 2816 cores, 5632 GFlop/s
Fury X: 4096 cores, 8602 GFlop/s
(The relation is similar for other comparable AMD vs. Nvidia products.)
Nvidia is getting the same performance from far fewer resources using a much more advanced scheduler. In many cases their scheduler reaches more than 95% computational utilization, and since the primary purpose of async shaders is to utilize idle resources for different tasks, there is really very little left over for compute (which mostly uses the same resources as rendering). Multiple queues aren't overhead-free either, so for them to have any purpose there has to be a significant performance gain. This is basically why AotS gave up on Nvidia hardware and just disabled the feature, and their game was fine-tuned for AMD in the first place.

It has very little to do with Direct3D 11 vs 12.
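
For reference, those GFlop/s figures line up with the usual cores x 2 FLOPs per clock (FMA) x clock estimate, taking the reference clocks of roughly 1000 MHz for the 980 Ti and 1050 MHz for the Fury X:

GTX 980 Ti: 2816 x 2 x 1.00 GHz ≈ 5632 GFlop/s
Fury X: 4096 x 2 x 1.05 GHz ≈ 8602 GFlop/s
8602 / 5632 ≈ 1.53, i.e. roughly 53% more raw throughput for the scheduler to keep fed.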

http://www.3dmark.com/3dm/13240083

My 2x GTX 970 is about as fast as a single R9 Fury X. Yay for NVIDIA's async compute illiteracy.
This benchmark proves Nvidia can utilize async shaders, ending the lie about lacking hardware features once and for all.

AMD is drawing larger benefits because they have more idle resources. Remember e.g. Fury X has 53% more Flop/s than 980 Ti, so there is a lot to use.

This benchmark also ends the myth that Nvidia is less fit for Direct3D 12.
 
Nice result. Still rocking a 920 too; scary to think it will be 8 years old this November.

Yeah, I don't complain at all. This November it is indeed 8 years since the i7 920 came out. Freaking best money spent on any hardware in a long time back then.
X58 just won't die or give up, darn solid platform. I mean, my CPU has been clocked at 4 GHz for almost 4 years now and it still keeps going and going.
 
This benchmark proves Nvidia can utilize async shaders, ending the lie about lacking hardware features once and for all.

AMD is drawing larger benefits because they have more idle resources. Remember e.g. Fury X has 53% more Flop/s than 980 Ti, so there is a lot to use.

This benchmark also ends the myth that Nvidia is less fit for Direct3D 12.
I think NV does pre-emption through drivers, while AMD does it through hardware.
As Rejzor stated, NV is doing it with brute force. As long as they can, of course, it's fine. Anand should have shown Maxwell with async on/off for comparison.
 