
Futuremark Readies New Vulkan and DirectX 12 Benchmarks

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,288 (7.53/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Futuremark is working on new game tests for its 3DMark benchmark suite. One of these is a game test that takes advantage of DirectX 12 but isn't as taxing on the hardware as "Time Spy." Its target hardware is notebook graphics and entry-level to mainstream graphics cards. It will be to "Time Spy" what "Sky Diver" is to "Fire Strike."

The next, more interesting move by Futuremark is a benchmark that takes advantage of the Vulkan 3D graphics API. The company will release this Vulkan-based benchmark for both Windows and Android platforms. Lastly, we've learned that development of the company's VR benchmarks is coming along nicely, and the company hopes to release new VR benchmarks for PC and mobile platforms soon. Futuremark is expected to reveal these new game tests and benchmarks at its 2017 International CES booth in early January.



 
Joined
Jan 27, 2015
Messages
1,065 (0.29/day)
System Name loon v4.0
Processor i7-11700K
Motherboard asus Z590TUF+wifi
Cooling Custom Loop
Memory ballistix 3600 cl16
Video Card(s) eVga 3060 xc
Storage WD sn570 1tb(nvme) SanDisk ultra 2tb(sata)
Display(s) cheap 1080&4K 60hz
Case Roswell Stryker
Power Supply eVGA supernova 750 G6
Mouse eats cheese
Keyboard warrior!
Benchmark Scores https://www.3dmark.com/spy/21765182 https://www.3dmark.com/pr/1114767
serious question:

didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.12/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
I'm super curious how Vulkan performs on Android. That is interesting.
 
Joined
Jul 13, 2016
Messages
3,319 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
serious question:

didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.

It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance; otherwise, they could completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

Right now Doom is really the only game out designed from the ground up for Async and you can see the performance benefits from that.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,105 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX 3.0)
Software W10
It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance; otherwise, they could completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

Right now Doom is really the only game out designed from the ground up for Async and you can see the performance benefits from that.

I thought Doom simply had a far greater use of AMD extensions compared to very few from Nvidia.
Besides, it's wrong to say a game must use tonnes of Async compute simply because it's used by AMD's ACE units. Devs need to code for the market and putting in some Async is fine. Any Async helps GCN.
However, when it all comes down to it, it's a case of people constantly whining about one API over another, where it's somehow wrong if a suite doesn't try its hardest to fully utilise AMD hardware at the expense of Nvidia, and vice versa.

You can't expect DX12 or Vulkan applications to simply use all of AMD's hardware when it disadvantages Nvidia. The software Devs have to code with ALL vendors in mind.

Edit: found this.

Various general purpose AMD Vulkan extensions were quickly finished to enable specific optimizations for DOOM. This effort at AMD involved first working with id Software to understand their need, writing extension specs, getting prototype glslangValidator.exe support for those extensions for GLSL to SPIR-V translation (later sending a pull request to incorporate into the public tool), implementation from the shader compiler and driver teams, and finally testing efforts from the driver QA team.
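
For anyone wondering what that looks like from the application side, here's a rough Vulkan sketch (my own illustration, not id Software's code; the extension name is just one example of the AMD extensions referred to above). The app checks whether the driver offers the extension and enables it at device creation; on other vendors the list is simply shorter:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the physical device advertises the named device extension.
bool HasDeviceExtension(VkPhysicalDevice gpu, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());
    for (const VkExtensionProperties& p : props)
        if (std::strcmp(p.extensionName, name) == 0)
            return true;
    return false;
}

// Creates the logical device, enabling the AMD extension only when the driver offers it.
VkDevice CreateDevice(VkPhysicalDevice gpu, const VkDeviceQueueCreateInfo& queueInfo)
{
    std::vector<const char*> extensions;
    const char* amdExt = "VK_AMD_shader_trinary_minmax";  // example extension name only
    if (HasDeviceExtension(gpu, amdExt))
        extensions.push_back(amdExt);  // taken on AMD drivers, skipped elsewhere

    VkDeviceCreateInfo info = {};
    info.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.queueCreateInfoCount    = 1;
    info.pQueueCreateInfos       = &queueInfo;
    info.enabledExtensionCount   = static_cast<uint32_t>(extensions.size());
    info.ppEnabledExtensionNames = extensions.data();

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &info, nullptr, &device);
    return device;
}
```

The shaders have to opt in as well; the glslangValidator work mentioned in the quote is about translating those GLSL extensions to SPIR-V, which is where much of the engineering effort went.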
 
Last edited:
Joined
Jul 13, 2016
Messages
3,319 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
I thought Doom simply had a far greater use of AMD extensions compared to very few from Nvidia.
Besides, it's wrong to say a game must use tonnes of Async compute simply because it's used by AMD's ACE units. Devs need to code for the market and putting in some Async is fine. Any Async helps GCN.
However, when it all comes down to it, it's a case of people constantly whining about one API over another, where it's somehow wrong if a suite doesn't try its hardest to fully utilise AMD hardware at the expense of Nvidia, and vice versa.

You can't expect DX12 or Vulkan applications to simply use all of AMD's hardware when it disadvantages Nvidia. The software Devs have to code with ALL vendors in mind.

No one's saying a game must use a ton of Async compute; what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well, but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?

FYI, Async compute does not disadvantage Nvidia hardware. It doesn't really give it any performance loss or gain, so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-Threading; it's a huge feature of DX12 and Vulkan.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,105 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX 3.0)
Software W10
Async can be slightly detrimental to Nvidia. Enough benches show that. But while we're at it, Async isn't a game. It's a thing that can be utilised by AMD and Nvidia. AMD get a better performance from it. But still, it doesn't need to be heavily coded in. Also, the Vulkan quote I posted in my edit shows that AMD were very (rightly) keen to get Vulkan Doom working excellently with their hardware. They worked hard to develop the extensions to make it run as fast as it did. If AMD don't do the same work in other Vulkan titles, they won't have such a performance uplift.
It's all down to the same old story, if you put the work in (which requires resources) you benefit.

Edit: I'm at work so can't keep this discussion going :laugh:
 
Joined
Sep 2, 2011
Messages
1,019 (0.21/day)
Location
Porto
System Name No name / Purple Haze
Processor Phenom II 1100T @ 3.8Ghz / Pentium 4 3.4 EE Gallatin @ 3.825Ghz
Motherboard MSI 970 Gaming/ Abit IC7-MAX3
Cooling CM Hyper 212X / Scythe Andy Samurai Master (CPU) - Modded Ati Silencer 5 rev. 2 (GPU)
Memory 8GB GEIL GB38GB2133C10ADC + 8GB G.Skill F3-14900CL9-4GBXL / 2x1GB Crucial Ballistix Tracer PC4000
Video Card(s) Asus R9 Fury X Strix (4096 SP's/1050 Mhz)/ PowerColor X850XT PE @ (600/1230) AGP + (HD3850 AGP)
Storage Samsung 250 GB / WD Caviar 160GB
Display(s) Benq XL2411T
Audio Device(s) motherboard / Creative Sound Blaster X-Fi XtremeGamer Fatal1ty Pro + Front panel
Power Supply Tagan BZ 900W / Corsair HX620w
Mouse Zowie AM
Keyboard Qpad MK-50
Software Windows 7 Pro 64Bit / Windows XP
Benchmark Scores 64CU Fury: http://www.3dmark.com/fs/11269229 / X850XT PE http://www.3dmark.com/3dm05/5532432
Async can be slightly detrimental to Nvidia. Enough benches show that. But while we're at it, Async isn't a game. It's a thing that can be utilised by AMD and Nvidia. AMD get a better performance from it. But still, it doesn't need to be heavily coded in. Also, the Vulkan quote I posted in my edit shows that AMD were very (rightly) keen to get Vulkan Doom working excellently with their hardware. They worked hard to develop the extensions to make it run as fast as it did. If AMD don't do the same work in other Vulkan titles, they won't have such a performance uplift.
It's all down to the same old story, if you put the work in (which requires resources) you benefit.

Edit: I'm at work so can't keep this discussion going :laugh:

id Software wanted to get the most out of the consoles, so it made sense to optimize it for GCN... Anyway, the game runs great on both AMD and Nvidia. I reckon async benefits AMD's uarch more than Nvidia's due to the massively parallel nature of GCN.
 

bug

Joined
May 22, 2015
Messages
13,836 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

Read the summary from Futuremark themselves

http://www.futuremark.com/pressreleases/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy

In other words, Nvidia video cards only had to use the Async feature if it increased performance; otherwise, they could completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

You'd think that's how developers are supposed to use async (or any other feature) anyway. Forcing async on/off only makes sense if you're trying to benchmark async specifically.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,105 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX 3.0)
Software W10
You'd think that's how developers are supposed to use async (or any other feature) anyway. Forcing async on/off only makes sense if you're trying to benchmark async specifically.

Very good point. The relevant thing is, does the API run and how fast, irrespective of what features the vendor is or is not using.

Edit: As long as visual impact is not unduly lowered by omission of API features.
 
Last edited:
Joined
Feb 6, 2013
Messages
23 (0.01/day)
It did create an issue because ultimately the benchmark allowed the video card vendor to decide whether or not to use Async compute.

This is so grossly incorrect that I must hop in.

As the article you linked clearly states, the application cannot control if the card uses async compute in DirectX 12. Period.

There is no way in DirectX 12 to force a card to use async compute. All the application can do is to submit the work in multiple queues, with Compute work labeled as such, which in practice means "this work here, this is compute stuff, it is safe to run it in parallel with graphics. You are free to do so. Do your best!". The rest is up to the drivers.

With DirectX 12, the video card driver always makes the decision as to how to process multiple DX12 command queues. A benchmark developer cannot force that, short of rewriting the driver - which is obviously somewhat beyond the capabilities of an application developer...

It is possible to force a system not to use Async Compute by submitting all work, even compute work, in a single DIRECT queue, essentially claiming that all this work can only be run sequentially, but 3DMark Time Spy does this only if you specifically turn on a setting in a Custom Run that is there so you can compare between the two. All Default Runs use Async Compute.
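
In code terms, here's a minimal D3D12 sketch of that (illustrative only - the function names are made up and the command lists are assumed to be recorded already):

```cpp
#include <d3d12.h>

// Created once at startup: one DIRECT queue for graphics and one COMPUTE queue
// for the work that may run asynchronously.
void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** directQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only

    device->CreateCommandQueue(&directDesc, __uuidof(ID3D12CommandQueue),
                               reinterpret_cast<void**>(directQueue));
    device->CreateCommandQueue(&computeDesc, __uuidof(ID3D12CommandQueue),
                               reinterpret_cast<void**>(computeQueue));
}

// Per frame: submit graphics and compute work to their respective queues.
// This only labels the compute work as safe to overlap with graphics;
// whether it actually overlaps is entirely up to the driver and hardware.
void SubmitFrame(ID3D12CommandQueue* directQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* graphicsList,
                 ID3D12CommandList* computeList)
{
    directQueue->ExecuteCommandLists(1, &graphicsList);
    computeQueue->ExecuteCommandLists(1, &computeList);

    // The "async compute off" custom run described above would instead submit
    // both lists, in order, to directQueue alone.
}
```

There is no flag anywhere that says "run this asynchronously" - the second queue is the whole mechanism.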
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
didn't time spy get criticized for its application of Async compute?

if so, i hope the vulkan bench gets better treatment.
It was criticized because it didn't "show similar gains as other (AMD-optimized) games".

No one's saying a game must use a ton of Async compute; what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well, but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?
Stop this BS now. The API traces clearly show async in action.

FYI, Async compute does not disadvantage Nvidia hardware. It doesn't really give it any performance loss or gain, so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-Threading; it's a huge feature of DX12 and Vulkan.
As I've told you guys dozens of times: async works by utilizing idle resources. Nvidia has better scheduling and fewer bottlenecks, keeping the GPU busier to begin with. That's why there is little gain in many games. If a GPU has 2-3% idle resources, then the overhead of multiple queues and synchronization is going to be greater than the benefits.

Many games are using async shaders for the wrong purpose to begin with. Async shaders were intended to utilize different hardware resources for different tasks, while many games (like AotS) use them for compute shaders, which mostly use the same resources as rendering. So basically, games are optimizing for inferior hardware. As AMD progresses with Vega, Navi and so on, they'll have to create better schedulers, and then there will be less and less gain from doing this, so there is no point in writing games or benchmarks targeting bad hardware.
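
For what it's worth, the "multiple queues" above are something the application has to go looking for explicitly. A rough Vulkan sketch (my illustration) of finding a dedicated compute queue family - if there is no such family, an engine would typically just fall back to the graphics queue, and there is no cross-queue synchronization cost either:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Finds a queue family that supports compute but not graphics, i.e. a candidate
// "async compute" queue. Returns -1 if the device has no dedicated compute family.
int FindDedicatedComputeFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return static_cast<int>(i);
    }
    return -1;  // no dedicated compute queue family
}
```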
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of async compute as well. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. That's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or another. Just stop it.
 
Joined
Apr 18, 2013
Messages
1,260 (0.30/day)
Location
Artem S. Tashkinov
In other words, Nvidia video cards only had to use the Async feature if it increased performance; otherwise, they could completely ignore it. It's very bad marketing by Futuremark and isn't a proper test of Async Compute performance.

Why would a hardware vendor use a feature which makes its products perform slower than without it? Can we stop with the AMD/Asynchronous Compute fanboyism? D3D12 is not about Async Compute - it's just one of its features, and not the most crucial one. In fact, D3D12 applications may run just fine when Async Compute requests are executed synchronously. I vividly remember how everyone hated NVIDIA for using GameWorks, which still used standard D3D11 features yet made better use of NVIDIA hardware. Now we have the same situation with D3D12/AMD, and everyone has suddenly forgotten this recent vendor-specific "debacle" and praises AMD for basically becoming the NVIDIA of the past. Ew!

I'm really glad 3DMark "noticed" Vulkan. If I were them, I'd even make it the primary benchmark, but then I understand they don't want to make an enemy of Microsoft.

No one's saying a game must use a ton of Async compute; what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all.

Pascal does support Async Compute. End of story.
 
Last edited:

bug

Joined
May 22, 2015
Messages
13,836 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of async compute as well. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. That's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or another. Just stop it.
Hm, you seem to be under the impression async is taxing by itself. It's not. It's just a different way of scheduling the actual work to be done.
FM_Jarnis posted a pretty clear explanation two posts above yours, but somehow you managed to miss it.
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
I didn't miss anything. Benchmarks are benchmarks for a reason. Async is the hottest stuff these days, so it's kinda expected that benchmarks utilize it heavily in order to show hardware flaws, which in this case means scheduling and branching capability. It's not so much a performance hit as it is a performance benefit if done right.
 

bug

Joined
May 22, 2015
Messages
13,836 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I didn't miss anything. Benchmarks are benchmarks for a reason. Async is the hottest stuff these days, so it's kinda expected that benchmarks utilize it heavily in order to show hardware flaws, which in this case means scheduling and branching capability. It's not so much a performance hit as it is a performance benefit if done right.
You've certainly missed this:
There is no way in DirectX 12 to force a card to use async compute. All the application can do is to submit the work in multiple queues, with Compute work labeled as such, which in practice means "this work here, this is compute stuff, it is safe to run it in parallel with graphics. You are free to do so. Do your best!".

Besides, async is not the hottest thing right now. Outside of AMD, async is a non-issue.
Async will be beneficial in the future, but today, off the top of my head, we have Nvidia, Intel and consoles all doing just fine without async.
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?
 

bug

Joined
May 22, 2015
Messages
13,836 (3.96/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?
Because it's not 50%?
On one hand you have a whole new API, on the other you have async which (from an API point of view) is the overload of a function to accept a queue as an argument.
It's only 50% if you count those as words in the dictionary.
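
In Vulkan terms that's quite literally what it looks like - a rough sketch (names are mine) of the same submit call aimed at two different queues:

```cpp
#include <vulkan/vulkan.h>

// Same vkQueueSubmit call; only the queue argument differs. Submitting the
// compute command buffer to a dedicated compute queue is what "async compute"
// amounts to at the API level - any overlap is then up to the driver/hardware.
// Sketch only: assumes computeWork comes from a command pool whose queue family
// matches whichever queue it is submitted to.
void SubmitCompute(VkQueue graphicsQueue, VkQueue computeQueue,
                   VkCommandBuffer computeWork, bool useAsync)
{
    VkSubmitInfo submit = {};
    submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &computeWork;

    // "Async off": the work queues up behind graphics and runs in order.
    // "Async on":  the work goes to the compute queue and may overlap.
    vkQueueSubmit(useAsync ? computeQueue : graphicsQueue, 1, &submit, VK_NULL_HANDLE);
}
```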
 
Joined
Apr 18, 2013
Messages
1,260 (0.30/day)
Location
Artem S. Tashkinov
Well, if async is not hot stuff, then DX12 is pretty much irrelevant. DX12 brings two important things: async and a closer-to-the-metal API. Why on earth would you sack 50% of its most important features?

You're an idiot. Sometimes it helps to at least Google for a minute or two to stop being arrogantly illiterate.

D3D12's biggest feature is a completely new, very close-to-the-metal API which allows you to extract more performance from your GPU and get predictable results by running your 3D/shader/compute code much more directly on your hardware, vs. D3D11 and earlier, which employ very complex OS drivers that translate all your API calls into hardware instructions.
 
Joined
Nov 23, 2016
Messages
80 (0.03/day)
id Software wanted to get the most out of the consoles, so it made sense to optimize it for GCN... Anyway, the game runs great on both AMD and Nvidia. I reckon async benefits AMD's uarch more than Nvidia's due to the massively parallel nature of GCN.

It benefits both manufacturers, it's just that AMD did it better... Let's call it what it is, because if it were the other way around, AMD would never hear the end of it and the sky would be falling.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Well, isn't the point of benchmarks to be as taxing as possible? This means heavy use of async compute as well. If your card can't deal with it, tough F luck. Fix your lame hardware and try again. That's how benchmarks and hardware vendors have always worked. Now they're gonna pander to one or another. Just stop it.
No, the point of benchmarks like the ones from Futuremark is to represent a typical load, not obscure extremes that users will never run into. But still, sometimes Futuremark weighs certain new features too heavily compared to actual games.

Pure bottleneck benchmarks serve no purpose other than curiosity, like measuring "API overhead", GPU memory bandwidth, etc. Just a few years ago many reviews included benchmarks at 1024x768, just to expose CPU bottlenecks. But those kinds of benchmarks are worthless if no one cares about running a high-end GPU at that resolution. As always, the only thing that matters is real-world performance. It doesn't matter if AMD's comparable products have more GFLOP/s, more memory bandwidth, or more "gain" from certain features; at the end of the day, actual performance is the only measurement.
 
Joined
Mar 24, 2012
Messages
533 (0.11/day)
No one's saying a game must use a ton of Async compute; what we're saying is that a benchmark that advertises Async as something it tests is wrong when one card isn't really doing Async at all. Why does the Nvidia card get a pass just because it can't do Async well, but on other benchmarks that are heavy on tessellation (Nvidia's strength) AMD cards don't get a pass? So it's fair for a benchmark to essentially give one card an alternate rendering path to avoid giving everyone an accurate rating of its Async compute ability?

FYI, Async compute does not disadvantage Nvidia hardware. It doesn't really give it any performance loss or gain, so that whole argument that it hurts Nvidia's performance is out the window. Nvidia have had 2 generations of cards where they should have implemented Async but still have not. AMD have had Async in their cards since the 7000 series. At this point it's like having a processor without Hyper-Threading; it's a huge feature of DX12 and Vulkan.

Because async is mostly AMD's issue, not Nvidia's. Why do you think the first GCN cards had the hardware even though the API (DX11) did not support its usage? Why was async compute only incorporated into DirectX with DX12? Because AMD had been pushing for async compute to be part of the API since the very first iteration of GCN, but it did not become a reality until DX12. So for Nvidia there was no such thing as implementing the proper hardware before it was part of the API. Also, MS did not dictate exactly how async compute must be designed, which actually led to different GPU makers implementing async compute in different ways.
 