
3DMark Timespy is out!

I'll post some scores when I get back to the house, but just in case anyone is curious why AMD cards are not seeing the dramatic increase we've seen in the Doom Vulkan patch or in AotS, it appears that Futuremark implemented VERY limited use of Async workloads.

"In the case of async compute, Futuremark is using it to overlap rendering passes, though they do note that 'the asynchronous compute workload per frame varies between 10-20%.' " Source:http://www.anandtech.com/show/10486/futuremark-releases-3dmark-time-spy-directx12-benchmark

It appears that if the async workloads were more fully utilized, say up to 60-70% per frame, the results would be very dramatic and weighted heavily in AMD's favor.
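
Rough numbers to put that in perspective (my own back-of-the-envelope, not anything from Futuremark): if a given share of the frame's GPU work can run asynchronously and, in the best case, hides entirely in idle gaps on the graphics queue, the frame can only shrink by that share.

```cpp
// Back-of-the-envelope bound (my numbers, not Futuremark's): if a fraction
// "share" of a frame's GPU work can go down the async compute path, and in
// the best case all of it hides inside idle gaps on the graphics queue, the
// frame can shrink by at most that fraction.
#include <cstdio>

int main()
{
    const double frameMs = 16.7;               // ~60 fps frame budget
    const double shares[] = {0.15, 0.65};      // ~Time Spy's 10-20% vs. a 60-70% workload

    for (double share : shares) {
        double bestCaseMs = frameMs * (1.0 - share);        // perfect overlap, zero sync cost
        double maxGainPct = share / (1.0 - share) * 100.0;  // upper bound on the speed-up
        std::printf("async share %2.0f%% -> frame time %.1f ms -> at most ~%.0f%% faster\n",
                    share * 100.0, bestCaseMs, maxGainPct);
    }
    return 0;
}
```

Real gains sit well under that ceiling (there has to be idle hardware to fill, and synchronization isn't free), but it shows why 10-20% per frame can't produce Doom-Vulkan-style swings while 60-70% could.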

Edit: For clarity, I might be misinterpreting the quote, and would love someone more knowledgeable than me to correct me if this is a misconception on my part.

Just my 2 cents.

JAT


The timing of the Time Spy release is just right, right after id released Vulkan support for DOOM. Part of the reason is to get people's attention away from how async boosted AMD's cards so much.

Yes it is a conspiracy theory of course.
 
I'll post some scores when I get back to the house, but just in case anyone is curious why AMD cards are not seeing the dramatic increase we've seen in the Doom Vulkan patch or in AotS, it appears that Futuremark implemented VERY limited use of Async workloads.

"In the case of async compute, Futuremark is using it to overlap rendering passes, though they do note that 'the asynchronous compute workload per frame varies between 10-20%.' " Source:http://www.anandtech.com/show/10486/futuremark-releases-3dmark-time-spy-directx12-benchmark

It appears that if the async workloads were more fully utilized, say up to 60-70% per frame, the results would be very dramatic and weighted heavily in AMD's favor.
This isn't surprising. If 3DMark were not favorable to Nvidia it would end up being replaced by some other benchmark for the most part.

Also of potential interest:

Anandtech reviewer: "Though whenever working with async, I should note that the primary performance benefit as implemented in Time Spy is via concurrency, so everything here is dependent on a game having additional work to submit and a GPU having execution bubbles to fill."
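
For anyone wondering what "concurrency" means at the API level here, a minimal sketch (my own illustration, not Futuremark's code): under D3D12 the application creates a second, COMPUTE-type command queue next to the graphics queue, and whatever it submits there is work the GPU is allowed to run alongside the rendering passes, filling those execution bubbles.

```cpp
// Minimal sketch (my illustration, not Futuremark's code) of async compute
// under D3D12: a second, COMPUTE-type command queue whose submissions the GPU
// is free to overlap with work on the graphics (DIRECT) queue.
#include <d3d12.h>
#include <wrl/client.h>
#include <stdexcept>

using Microsoft::WRL::ComPtr;

struct FrameQueues {
    ComPtr<ID3D12CommandQueue> graphics;   // rendering passes
    ComPtr<ID3D12CommandQueue> compute;    // async compute passes
    ComPtr<ID3D12Fence>        fence;      // cross-queue synchronization
    UINT64                     fenceValue = 0;
};

FrameQueues CreateQueues(ID3D12Device* device)
{
    FrameQueues q;
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.graphics))))
        throw std::runtime_error("graphics queue creation failed");

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.compute))))
        throw std::runtime_error("compute queue creation failed");

    if (FAILED(device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&q.fence))))
        throw std::runtime_error("fence creation failed");
    return q;
}

// Per frame: hand the compute work and the graphics work to their own queues.
// Nothing forces the GPU to serialize them, so hardware that can co-issue
// compute alongside graphics fills its idle bubbles and finishes sooner;
// hardware that can't simply runs them back to back and gains little.
void SubmitFrame(FrameQueues& q,
                 ID3D12CommandList* computeWork,
                 ID3D12CommandList* graphicsWork)
{
    q.compute->ExecuteCommandLists(1, &computeWork);
    q.graphics->ExecuteCommandLists(1, &graphicsWork);

    // Any graphics work submitted after this Wait will hold until the compute
    // queue signals, so passes that consume the compute results stay ordered
    // without a full pipeline flush.
    q.compute->Signal(q.fence.Get(), ++q.fenceValue);
    q.graphics->Wait(q.fence.Get(), q.fenceValue);
}
```

Whether the two queues actually overlap is up to the hardware and driver, which is why the same code path can show very different gains from one architecture to another.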

Hruska of ExtremeTech:

The RX 480 is just one GPU, and we’ve already discussed how different cards can see very different levels of performance improvement depending on the game in question — the R9 Nano picks up 12% additional performance from enabling versus disabling async compute in Ashes of the Singularity, whereas the RX 480 only sees a 3% performance uplift from the same feature.

So, although the Anandtech reviewer says "for now it's the best look we can take at async on Pascal", using just the 480 for the comparison leaves the picture far from clear.
 

Those figures can't be right; that just doesn't add up to me. Over 50% of people on the planet run Windows 7 and only 10% run Windows 10, yet over 90% of Windows 7 users jumped to Windows 10 (for gaming) within a year? Nah, that's impossible; those numbers have to be wrong.
 
Those figures can't be right; that just doesn't add up to me. Over 50% of people on the planet run Windows 7 and only 10% run Windows 10, yet over 90% of Windows 7 users jumped to Windows 10 (for gaming) within a year? Nah, that's impossible; those numbers have to be wrong.
I say they didn't jump, they were pushed! ;)
 
AMD sucks under the DX11 API; I tend to think the biggest jumps are coming from the API switch more than anything.
Driver overhead and single-threaded drivers are what make AMD stay behind in DX11.
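
A minimal sketch of the structural difference being described (my own illustration, nothing from any shipping game): in D3D11 every draw goes through one immediate context, so a driver with heavy per-call overhead is stuck on a single CPU thread, while D3D12 lets the application record command lists on as many threads as it likes and submit them in one go.

```cpp
// Minimal sketch (my illustration, not any game's shipping code) of why the
// API switch itself helps: D3D11 funnels all state changes and draws through
// one immediate context, while D3D12 lets the app record command lists on
// several CPU threads and hand them to the queue in a single submission.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordFrameInParallel(ID3D12Device* device,
                           ID3D12CommandQueue* graphicsQueue,
                           unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    // One allocator and one command list per worker thread; command lists are
    // cheap CPU-side objects, so spreading recording across cores scales well.
    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            // Each thread would record its slice of the frame here:
            // set pipeline state, bind resources, issue draws, etc.
            lists[i]->Close();   // recording done; list is ready to execute
        });
    }
    for (auto& w : workers) w.join();

    // Single submission of everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    graphicsQueue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

That shifts most of the "driver overhead" problem onto the application, which is why the API switch alone can lift a card that was CPU-limited under DX11.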

Why be so aggressive against AMD? "Sucks" means something terrible, too bad. In W1Z's reviews here on TPU, AMD shows good performance in almost all games (nVidia-sponsored titles aside), especially at 1440p and 4K. So even with DX11 not suiting GCN so well, they compare directly to the perfectly optimised (your words) nVidia GPUs. In conclusion, by the end of 2016, with more games running DX12 and Vulkan, AMD will be ahead in average FPS/$ in every price category where they sell GPUs. The past is the past; DX12 and Vulkan are already here and will only get better. Proof of that is that nVidia will alter their architecture to compete with AMD in async compute.
 
Those figures can't be right; that just doesn't add up to me. Over 50% of people on the planet run Windows 7 and only 10% run Windows 10, yet over 90% of Windows 7 users jumped to Windows 10 (for gaming) within a year? Nah, that's impossible; those numbers have to be wrong.

They aren't wrong. The Steam Hardware Survey is about as reliable a gauge as anything you'll find in the gaming community.

I guess DX12 has appeal.
 
Why be so aggressive against AMD? "Sucks" means something terrible, too bad. In W1Z's reviews here on TPU, AMD shows good performance in almost all games (nVidia-sponsored titles aside), especially at 1440p and 4K. So even with DX11 not suiting GCN so well, they compare directly to the perfectly optimised (your words) nVidia GPUs. In conclusion, by the end of 2016, with more games running DX12 and Vulkan, AMD will be ahead in average FPS/$ in every price category where they sell GPUs. The past is the past; DX12 and Vulkan are already here and will only get better. Proof of that is that nVidia will alter their architecture to compete with AMD in async compute.

He's not being aggressive by saying something 'sucks'. His point is valid to the extent that GCN hardware does not utilise itself well under DX11. It's very well suited to what's available through async and DX12 (namely the compute-oriented tasks). As for Nvidia changing its architecture, it would actually mean going back to it. Like I said elsewhere, Nvidia dropped compute after Fermi (with, I think, the first Titan holding onto it as well). With DX9-DX11 the need for compute on a desktop part was trivialised, and what's better for a company is to increase power efficiency, increase clocks and improve performance. Nvidia did all that by dropping unnecessary compute.

AMD kept plugging away at compute, and it held them back in chip utilisation until it was heavily optimised after literally years of driver updates (thus the old debate about AMD cards aging better). Low-level APIs still don't require Nvidia to do anything about compute on the desktop. If Pascal just wants to run fast and stay efficient, I'm sure Nvidia will be happy as long as it's up there at the top. I'd bet there's a running joke amongst Nvidia engineers wondering why on earth people keep harping on about async when the GTX 1080 has shown how fast it is without dedicated hardware as good as AMD's.

Look at the hardware specs below. (excuse my ugly cut and pasted TPU chart)


[attached: TPU spec chart comparing Fury X and GTX 1080]


Look at Fury X's shader count (filled with those goody ACEs).
Look at Fury X's transistor count.
Look at the bus width.
Look at Pascal's clocks.

The node shrink from 28 nm to 16 nm for Pascal allows the clocks, but there is no more hardware in there. AMD cards should be performing close to Nvidia in games. It's no surprise that they'll be getting back up again as their design is finally starting to work for them, but don't forget, it's very situation- and card-dependent. Nvidia's architecture is still flat-out fast across the board. Jack-of-all-trades speed versus compute speciality? I know what I'd choose.
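
To put rough numbers on "should be performing close" (my own back-of-the-envelope from public spec sheets, using reference clocks; real boost clocks sit higher, as a later reply points out): peak FP32 throughput is roughly shader count x 2 FLOPs per clock (one FMA) x clock speed.

```cpp
// Back-of-the-envelope peak FP32 throughput from public spec-sheet numbers
// (reference clocks; real boost clocks run higher, as a later reply notes).
// Peak = shader count x 2 FLOPs per clock (one FMA) x clock speed.
#include <cstdio>

struct Gpu {
    const char* name;
    int         shaders;
    double      clockGHz;
};

int main()
{
    const Gpu gpus[] = {
        {"Fury X (28 nm)",          4096, 1.050},  // spec-sheet clock
        {"GTX 1080 (16 nm, base)",  2560, 1.607},
        {"GTX 1080 (16 nm, boost)", 2560, 1.733},
    };

    for (const Gpu& g : gpus) {
        double tflops = g.shaders * 2.0 * g.clockGHz / 1000.0;
        std::printf("%-26s %4d shaders @ %.3f GHz -> ~%.1f TFLOPS\n",
                    g.name, g.shaders, g.clockGHz, tflops);
    }
    return 0;
}
```

On paper the two land within a few percent of each other; the rest of the argument is about how much of that theoretical throughput each architecture actually manages to use.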
 
The node shrink from 28 nm to 16 nm for Pascal allows the clocks, but there is no more hardware in there. AMD cards should be performing close to Nvidia in games. It's no surprise that they'll be getting back up again as their design is finally starting to work for them, but don't forget, it's very situation- and card-dependent.
On paper it looks like that, but the reality of how the chip is built says otherwise.
 
He's not being aggressive by saying something 'sucks'. His point is valid to the extent that GCN hardware does not utilise itself well under DX11. It's very well suited to what's available through async and DX12 (namely the compute-oriented tasks). As for Nvidia changing its architecture, it would actually mean going back to it. Like I said elsewhere, Nvidia dropped compute after Fermi (with, I think, the first Titan holding onto it as well). With DX9-DX11 the need for compute on a desktop part was trivialised, and what's better for a company is to increase power efficiency, increase clocks and improve performance. Nvidia did all that by dropping unnecessary compute.

AMD kept plugging away at compute, and it held them back in chip utilisation until it was heavily optimised after literally years of driver updates (thus the old debate about AMD cards aging better). Low-level APIs still don't require Nvidia to do anything about compute on the desktop. If Pascal just wants to run fast and stay efficient, I'm sure Nvidia will be happy as long as it's up there at the top. I'd bet there's a running joke amongst Nvidia engineers wondering why on earth people keep harping on about async when the GTX 1080 has shown how fast it is without dedicated hardware as good as AMD's.

Look at the hardware specs below. (excuse my ugly cut and pasted TPU chart)


[attached: TPU spec chart comparing Fury X and GTX 1080]


Look at Fury X's shader count (filled with those goody ACEs).
Look at Fury X's transistor count.
Look at the bus width.
Look at Pascal's clocks.

The node shrink from 28 nm to 16 nm for Pascal allows the clocks, but there is no more hardware in there. AMD cards should be performing close to Nvidia in games. It's no surprise that they'll be getting back up again as their design is finally starting to work for them, but don't forget, it's very situation- and card-dependent. Nvidia's architecture is still flat-out fast across the board. Jack-of-all-trades speed versus compute speciality? I know what I'd choose.

1) The Fermi architecture isn't suitable for DX12 and Vulkan. It is an archaic architecture by now, good for general computing tasks; for gaming, not so efficient at all. AMD is the only one of the two GPU makers that built an architecture for future APIs. nVidia clearly has to catch up now, no matter what anyone says. Not too hard to do, though, but Pascal is just an efficient architecture, as Maxwell already was. Nothing more. That's why Vega will be the real deal of this GPU gen, imho.

2) Why did you put up this table to compare those GPUs when it clearly doesn't show the real clocks the 1080 runs at when it isn't throttling, which are the ones to compare in order to determine efficiency? 1898 MHz was the clock of the 1080 when it was being used for gaming. Now compare the ratios again.
 
1) The Fermi architecture isn't suitable for DX12 and Vulkan. It is an archaic architecture by now, good for general computing tasks; for gaming, not so efficient at all. AMD is the only one of the two GPU makers that built an architecture for future APIs. nVidia clearly has to catch up now, no matter what anyone says. Not too hard to do, though, but Pascal is just an efficient architecture, as Maxwell already was. Nothing more. That's why Vega will be the real deal of this GPU gen, imho.

Present NVIDIA cards aren't based on Fermi in any way other than the compute setup (which is really just NVIDIA's preference for how to build a card). Heck, they don't even have a separate shader clock anymore...
 
[attached: Time Spy result screenshot]


normal gaming settings with a 4790K CPU at 4.5 GHz.. a mild low-voltage overclock..

not sure what the validation warning is about..

trog
 
That's why Vega will be the real deal of this GPU gen, imho.

I do sincerely hope you are right. Because it (Vega) or GP102/100 is my next card. And if Vega isn't better than full Pascal, my wallet will hurt.
 