
Futuremark Teases 3DMark "Time Spy" DirectX 12 Benchmark

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,194 (7.56/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Futuremark has teased its first benchmark for DirectX 12 graphics, 3DMark "Time Spy." Likely to be marketed as an add-on to the 3DMark (2013) suite, "Time Spy" tests DirectX 12 features in a silicon-scorching 3D scene rich in geometric, textural, and visual detail. The benchmark is also ready for new-generation displays, including resolutions beyond 4K Ultra HD. Existing 3DMark users get "Basic" access to "Time Spy" when it comes out, with the option to purchase its "Advanced" and "Professional" modes.

Under the hood, "Time Spy" takes advantage of Direct3D feature level 12_0, with support for asynchronous compute, heavy CPU multi-threading (it can use as many CPU cores as you can throw at it), and DirectX 12 explicit multi-adapter (native multi-GPU, including mixed setups). Futuremark stated that the benchmark was developed with input from AMD, Intel, NVIDIA, Microsoft, and other partners in the Futuremark Benchmark Development Program.
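Explicit multi-adapter means the application, not the driver, decides how to divide a frame between GPUs, which is what makes mixed setups workable. As a rough illustration (not Futuremark's implementation; all numbers invented), one common strategy splits work in proportion to each GPU's measured throughput:

```python
# Hypothetical sketch of proportional frame splitting across mismatched GPUs.
# Under DX12 explicit multi-adapter, the app owns this decision.

def split_frame(total_rows, throughputs):
    """Divide scanline rows among GPUs in proportion to throughput."""
    total = sum(throughputs)
    shares = [round(total_rows * t / total) for t in throughputs]
    shares[-1] = total_rows - sum(shares[:-1])  # absorb rounding error
    return shares

def frame_time(shares, throughputs):
    """The frame finishes when the slowest GPU finishes its share."""
    return max(rows / t for rows, t in zip(shares, throughputs))

rows = 2160                      # a 4K frame's worth of scanlines
gpus = [250.0, 150.0]            # rows/ms: a faster and a slower card
shares = split_frame(rows, gpus)
print(shares)                    # [1350, 810]
print(frame_time(shares, gpus))  # 5.4 ms, vs 8.64 ms for the fast card alone
```

With a balanced split, both cards finish at the same time, so even a mismatched pair beats the faster card running alone.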



A teaser trailer video follows.


View at TechPowerUp Main Site
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.96/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
BTA, the term is "mis-matched" or mixed, not "mixed matched"

"Multi-vendor" is an alternate term I've seen a lot that makes sense too.
 
Joined
Oct 22, 2014
Messages
14,062 (3.83/day)
Location
Sunshine Coast
System Name Lenovo ThinkCentre
Processor AMD 5650GE
Motherboard Lenovo
Memory 32 GB DDR4
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Lenovo
Software W11 Pro 64 bit
Multi-vendor is an alternate term
You find him at the fish market? :roll:
I think the term should be "mixed/matched," meaning either-or.
 


the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,012 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
My good sir, contrary to rumours, btarunr is not a fishmonger.

But he does have a porpoise in life.

I knew they're mammals.

This benchmark could be quite good for showing a more neutral (or at least perceived-as-neutral) DX12 environment. If AMD and Nvidia were both involved, it might give us a better bench to argue over than AotS.
 

Mussels

Freshwater Moderator
I have a 290 and a 970 here in very similar systems, so I'll happily compare AMDpples to Nvoranges when this is out.
 
Joined
Apr 30, 2012
Messages
3,881 (0.85/day)
But he does have a porpoise in life.

I knew they're mammals.

This benchmark could be quite good for showing a more neutral (or at least perceived-as-neutral) DX12 environment. If AMD and Nvidia were both involved, it might give us a better bench to argue over than AotS.

When has 3DMark ever equated to in-game performance? DX12 is even worse due to each developer's implementation.

Nvidia hasn't really clarified whether it's classifying compute preemption as async compute. Nvidia has also said "async" is still not active in their driver.

Maybe they'll come out with a driver now that this is out.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.47/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
Here we go...
Indeed, I wonder if there is going to be a big rift between Maxwell, GCN, and Pascal.

When has 3DMark ever equated to in-game performance? DX12 is even worse due to each developer's implementation.

Nvidia hasn't really clarified whether it's classifying compute preemption as async compute. Nvidia has also said "async" is still not active in their driver.

Maybe they'll come out with a driver now that this is out.
That's the big question. Async shading isn't something just a driver can do. Its implementation should be 90% in silicon.

The big question here is whether or not Futuremark will disable async shading when a NVIDIA card is present. I hope not.
 
Joined
Oct 2, 2004
Messages
13,791 (1.88/day)
If anything, AMD will be very happy about the async shader support, where they still reign supreme. I think NVIDIA still doesn't have functional async in the GTX 1080 lineup. If they had, they'd be bragging about it at every turn. But they are suspiciously quiet instead...

Can't wait to see how my GTX 980 will be tortured. Again :D
 
Joined
Jan 5, 2006
Messages
18,584 (2.70/day)
System Name AlderLake
Processor Intel i7 12700K P-Cores @ 5Ghz
Motherboard Gigabyte Z690 Aorus Master
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MT/s CL36
Video Card(s) MSI RTX 2070 Super Gaming X Trio
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p
Case Be quiet! Silent Base 600 - Window
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W
Mouse Logitech MX Anywhere 2 Laser wireless
Keyboard RAPOO E9270P Black 5GHz wireless
Software Windows 11
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
Well I'm unable to run this, still on win 8.1 :p
Not going to upgrade anytime soon.
 

the54thvoid

Super Intoxicated Moderator
Staff member
When has 3DMark ever equated to in-game performance? DX12 is even worse due to each developer's implementation.

Nvidia hasn't really clarified whether it's classifying compute preemption as async compute. Nvidia has also said "async" is still not active in their driver.

Maybe they'll come out with a driver now that this is out.

It doesn't actually matter a whole lot which way Nvidia handles asynchronous tasks. People ought to ease up on the whole argument. If Nvidia doesn't do Async, this should really worry people:

Hitman DX12



Highest frame rates and lowest frame times, despite Hitman being an AMD-partnered game.

https://community.amd.com/thread/196920

Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality without compromising performance.

Even Ashes:

Worst-case scenario for Nvidia:


A 1080 that cannot do Async still manages to be equal to the Async King.

Better-case scenario (really depends on where you look, I guess) - I know it's only 1440p.



My point isn't to argue Nvidia's side of the whole async debate, but rather to ask: if the GTX 1080 can match, and even be a lot faster than, a Fury X with its GCN architecture's excellent async capability, why are people so bothered about it?

That's why Fire Strike might be quite a good marker. And for the naysayers: all the RX 480 threads have been listing Fire Strike scores left and right, so people on both sides do give it credit.

If Pascal can perform as it does without async (on extreme workloads as well) then why try harder?

EDIT: lol, apologies for monster graph image in middle - must be AMD promoted :roll:
 
Joined
Jun 23, 2016
Messages
74 (0.02/day)
I bet AMD was adamant about having Async Compute implemented.

And I bet Futuremark was like "sure".
Intel was like: ¯\_(ツ)_/¯

And I bet Nvidia had a meeting to find a way out of it: "can't we just pay them with bags of money to remove it? Like we always do?"

And I bet Futuremark was like "best we can do is give you an option to disable it. Now hand over the money"
 
Joined
Dec 22, 2014
Messages
101 (0.03/day)
Processor i5-4670k
Motherboard MSI Z87-GD65
Cooling Magicool G2 Slim 240+360, Watercool Heatkiller 3.0, Alphacool GPX 290 M05
Memory G.Skill RipjawsX DDR3 2x4GB 2133MHz CL9
Video Card(s) Gigabyte Radeon R9 290
Storage Samsung 850 EVO 250GB, 320GB+1TB HDDs
Case SilentiumPC Aquarius X90 Pure Black
Power Supply Chieftec Navitas GPM-1000C
I see a lot of people still misunderstand async compute.
Implementing it in a game or benchmark means that cards that support it will run FASTER. It doesn't mean that an NVIDIA card lacking it will run slower than it would if there were no async at all. It simply makes NO CHANGE for NVIDIA card owners. This technology isn't maiming NVIDIA cards; it only makes things run faster on GCN cards.

There is no reason to "turn off" async compute. There would be no performance GAIN on NVIDIA.
 

bug

Joined
May 22, 2015
Messages
13,731 (3.97/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
That's the big question. Async shading isn't something just a driver can do. Its implementation should be 90% in silicon.

Actually, async compute is an API requirement. DX says nothing (nor should it) about how it is to be implemented. If Nvidia can honour the async compute contract without actually having the feature in their silicon AND without a hefty performance penalty, good for them. Hopefully this benchmark will be able to shed a little more light on the matter.
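To make the contract-versus-implementation point concrete, here is a toy sketch (purely illustrative, not how any driver is written): two schedulers that both accept multiple command queues and complete all their work, and therefore both honour the same API contract; only one actually overlaps the queues.

```python
# Hypothetical sketch: the API requires that independent queues be accepted
# and completed, but says nothing about concurrent execution.

class SerialScheduler:
    """Accepts multiple queues but drains them one after another."""
    def submit(self, queues):
        done = []
        for q in queues:
            done.extend(q)          # no overlap: pure serial execution
        return done

class ConcurrentScheduler:
    """Interleaves queues round-robin, modelling hardware-level overlap."""
    def submit(self, queues):
        done = []
        qs = [list(q) for q in queues]   # don't mutate caller's lists
        while any(qs):
            for q in qs:
                if q:
                    done.append(q.pop(0))
        return done

graphics = ["draw1", "draw2"]
compute = ["comp1", "comp2"]
print(SerialScheduler().submit([graphics, compute]))
print(ConcurrentScheduler().submit([graphics, compute]))
```

Both return every command, so a benchmark can only tell them apart by timing, which is exactly why a purpose-built async test is interesting.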
 

FordGT90Concept

"I go fast!1!11!1!"
It doesn't actually matter a whole lot which way Nvidia handles asynchronous tasks. People ought to ease up on the whole argument. If Nvidia doesn't do Async, this should really worry people:

*snip*
Game developers tend to disable async compute on NVIDIA hardware, so benchmarks are invalid for checking async compute unless the developer has said they aren't disabling it on NVIDIA hardware. Case in point: that Ashes of the Singularity benchmark pretty clearly has it enabled on AMD cards (note the significant FPS boost) while NVIDIA cards have it disabled (more or less equal FPS).

I believe the only game that lets the user decide is Ashes of the Singularity, and there were some good benchmarks done with it a while back comparing the on and off states. On the other hand, games like Hitman likely have it disabled on NVIDIA hardware with no option to enable it.

I wouldn't be surprised at all if NVIDIA "working with" Futuremark means async shaders will be disabled on NVIDIA hardware, just like everywhere else.

Actually, async compute is an API requirement. DX says nothing (nor should it) about how it is to be implemented. If Nvidia can honour the async compute contract without actually having the feature in their silicon AND without a hefty performance penalty, good for them. Hopefully this benchmark will be able to shed a little more light on the matter.
It's context switching. AMD can switch compute units between tasks on demand, almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.
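The switching difference described above can be modelled as preemption granularity. A toy model (invented numbers in microseconds; not real driver behaviour): a high-priority compute task arrives mid-frame, and either starts immediately or waits for the in-flight draw call's boundary.

```python
# Hypothetical sketch: delay before a compute task can start, depending on
# whether the GPU can preempt anywhere or only at draw-call boundaries.

def compute_start_delay(arrival, draw_call_len, preempt_anywhere):
    """Delay (in us) before the compute task begins executing."""
    if preempt_anywhere:
        return 0.0                        # switch compute units on demand
    # otherwise wait for the currently executing draw call to finish
    elapsed_in_call = arrival % draw_call_len
    return draw_call_len - elapsed_in_call

# compute task arrives 30 us into a 100 us draw call
print(compute_start_delay(130.0, 100.0, True))   # 0.0
print(compute_start_delay(130.0, 100.0, False))  # 70.0
```

The longer the draw calls, the worse the boundary-only case gets, which is why fine-grained preemption matters for mixing graphics and compute.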
 
Last edited:

Mussels

Freshwater Moderator
Game developers tend to disable async compute on NVIDIA hardware, so benchmarks are invalid for checking async compute unless the developer has said they aren't disabling it on NVIDIA hardware. Case in point: that Ashes of the Singularity benchmark pretty clearly has it enabled on AMD cards (note the significant FPS boost) while NVIDIA cards have it disabled (more or less equal FPS).

I believe the only game that lets the user decide is Ashes of the Singularity, and there were some good benchmarks done with it a while back comparing the on and off states. On the other hand, games like Hitman likely have it disabled on NVIDIA hardware with no option to enable it.


It's context switching. AMD can switch compute units between tasks on demand, almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. You can write software to accept two threads and merge them into one thread, but it is obvious, in terms of performance, that the software is not multithreaded. NVIDIA is going to need a major redesign to catch up, and Pascal doesn't represent that.

And this is why I have one GPU from each team!

Hooray for my hoarding OCD!
 

FordGT90Concept

"I go fast!1!11!1!"
and this is why i have one GPU from each team!

hooray for my hoarding OCD!
I bet you're just giddy for games to start supporting D3D12 multi-GPU, aren't you?
 

Joined
Jun 23, 2016
Messages
74 (0.02/day)
I see a lot of people still misunderstand async compute.
Implementing it in a game or benchmark means that cards that support it will run FASTER. It doesn't mean that an NVIDIA card lacking it will run slower than it would if there were no async at all. It simply makes NO CHANGE for NVIDIA card owners. This technology isn't maiming NVIDIA cards; it only makes things run faster on GCN cards.

There is no reason to "turn off" async compute. There would be no performance GAIN on NVIDIA.

I'm pretty sure that is somewhat wrong. After all, Nvidia did forcibly disable it in drivers to avoid increased frame latencies on Maxwell.

Async compute is all about doing compute workloads simultaneously with graphics without affecting frame latency. If you start increasing the amount of work beyond the supported number of queues, you should see frame latency rise as the GPU can't keep up. Nvidia can sort of do async compute up to a certain point, beyond which it gets bogged down by the amount of work thrown at it (Pascal should keep up with typical async compute implementations if it performs as expected, even though it's not truly async-compute capable). Granted, AMD also has a limit (obviously), but it's much higher than Nvidia's.

So in theory, the software should never overburden the card with more queues than it supports. In that case, there is a performance benefit from parallelizing the work instead of doing it serially as in previous APIs, and all GPUs benefit.
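That queue-limit argument can be sketched numerically. In this toy model (all numbers invented), compute jobs up to the hardware queue limit hide behind the graphics workload, and any excess serializes and adds directly to frame latency:

```python
# Hypothetical sketch of frame latency vs. number of concurrent compute jobs,
# given a hardware limit on how many queues can overlap the graphics work.

def frame_latency(graphics_ms, compute_jobs, job_ms, hw_queues):
    """Latency when up to hw_queues jobs overlap the graphics work."""
    overlapped = min(compute_jobs, hw_queues)
    serialized = compute_jobs - overlapped
    # overlapped jobs hide behind graphics; the rest add directly
    return max(graphics_ms, job_ms) + serialized * job_ms

# 16 ms of graphics work, 2 ms compute jobs, 8 hardware queues
print(frame_latency(16.0, 4, 2.0, 8))   # 16.0 -> all jobs hidden
print(frame_latency(16.0, 12, 2.0, 8))  # 24.0 -> 4 jobs spill over
```

A card with a higher queue limit keeps the flat 16 ms line for longer before latency starts to climb, which matches the post's point that well-behaved software should stay under the limit.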
 

rtwjunkie

PC Gaming Enthusiast
Supporter
Joined
Jul 25, 2008
Messages
13,980 (2.35/day)
Location
Louisiana
Processor Core i9-9900k
Motherboard ASRock Z390 Phantom Gaming 6
Cooling All air: 2x140mm Fractal exhaust; 3x 140mm Cougar Intake; Enermax ETS-T50 Black CPU cooler
Memory 32GB (2x16) Mushkin Redline DDR-4 3200
Video Card(s) ASUS RTX 4070 Ti Super OC 16GB
Storage 1x 1TB MX500 (OS); 2x 6TB WD Black; 1x 2TB MX500; 1x 1TB BX500 SSD; 1x 6TB WD Blue storage (eSATA)
Display(s) Infievo 27" 165Hz @ 2560 x 1440
Case Fractal Design Define R4 Black -windowed
Audio Device(s) Soundblaster Z
Power Supply Seasonic Focus GX-1000 Gold
Mouse Coolermaster Sentinel III (large palm grip!)
Keyboard Logitech G610 Orion mechanical (Cherry Brown switches)
Software Windows 10 Pro 64-bit (Start10 & Fences 3.0 installed)
After all, Nvidia did forcibly disable it in drivers to avoid increased frame latencies on Maxwell.

Actually, it was removed in hardware. That's how they were able to run higher clocks at cooler temperatures with less power than Kepler, while still on 28 nm.
 

FordGT90Concept

"I go fast!1!11!1!"
Kepler doesn't support it either, though (it does it serially). That said, GCN 1.0 cards only had 2 ACEs, versus 8 in GCN 1.1 and up.
 
Joined
Jun 23, 2016
Messages
74 (0.02/day)
Actually, it was hardware removed. This is how they were able to run higher clocks at cooler temps with less power than Kepler while still at 28nm.
That's a different thing.
You're talking GPU-side, I'm talking software-side.

Nvidia did remove hardware schedulers in Maxwell, but that's not what I'm referring to.

I was referring to AotS initially throwing work at Maxwell cards that they couldn't complete in a timely fashion due to the lack of proper async compute support, resulting in slightly reduced frame rates (and probably poor frame timing), so it was disabled in software so that Maxwell cards weren't running with a virtual handbrake pulled.
 

bug

It's contextual switching. AMD can switch compute units between tasks on demand almost instantaneously. NVIDIA, on the other hand, has to wait for the executing command to finish before it can switch. AMD's design is truly multithreaded where NVIDIA's is not. NVIDIA is going to need a major redesign to catch up and Pascal doesn't represent that.

And you've never seen a single-threaded program beat a multithreaded one because of the multithreading overhead?
It doesn't happen all the time, but it does happen. That's why I'm saying more benchmarks will let us know where Nvidia actually stands.
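The point above is easy to reproduce: when work items are tiny, the cost of creating and joining threads can exceed the work itself. A quick Python sketch (timings vary by machine, so no specific numbers are claimed):

```python
# Compare a single-threaded loop against one-thread-per-task for tiny tasks.
import threading
import time

def work(n):
    return sum(i * i for i in range(n))

def single(chunks):
    return [work(n) for n in chunks]

def threaded(chunks):
    results = [None] * len(chunks)
    def run(i, n):
        results[i] = work(n)
    threads = [threading.Thread(target=run, args=(i, n))
               for i, n in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

chunks = [100] * 200               # many tiny tasks: overhead-dominated
t0 = time.perf_counter(); a = single(chunks);   t1 = time.perf_counter()
b = threaded(chunks);              t2 = time.perf_counter()
assert a == b                      # same answer either way
print(f"single: {t1 - t0:.4f}s, threaded: {t2 - t1:.4f}s")
```

On overhead-dominated workloads like this, the single-threaded version often wins, which is the same shape of argument being made about async scheduling overhead on GPUs.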
 