Yeah, about async compute ... it's super easy on GCN because the hardware is perfectly happy to accept compute commands in the 3D queue; there's no penalty for mixing draw calls and compute commands there.
With Maxwell you have performance penalties from using compute commands concurrently with draw calls, so compute queues are mostly used to offload and execute compute commands in batch.
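For anyone who hasn't touched D3D12: here's a minimal sketch of that "batch compute on its own queue" pattern that suits Maxwell. The function name and the two pre-recorded command lists are made-up placeholders; only the queue/fence calls are actual D3D12 API.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: submit compute work on a dedicated queue and make the 3D queue
// wait on it, instead of interleaving dispatches with draw calls.
// Assumes an initialized device and already-recorded command lists
// (computeList recorded from a COMPUTE-type allocator); error handling omitted.
void SubmitBatchedCompute(ID3D12Device* device,
                          ID3D12GraphicsCommandList* gfxList,
                          ID3D12GraphicsCommandList* computeList)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // 3D queue: draws + compute

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Batch all the compute on its own queue...
    ID3D12CommandList* cs[] = { computeList };
    computeQueue->ExecuteCommandLists(1, cs);
    computeQueue->Signal(fence.Get(), 1);

    // ...and only gate the 3D queue where the results are actually needed.
    gfxQueue->Wait(fence.Get(), 1);
    ID3D12CommandList* gs[] = { gfxList };
    gfxQueue->ExecuteCommandLists(1, gs);
}
```

On GCN the same dispatches could just be recorded into the direct-queue command list between draws; the separate queue and fence are only worth the plumbing when the hardware rewards it.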
Essentially, if you want to use async compute efficiently on Nvidia, you've got to cleanly separate the render pipeline into batches, and even consider shipping CUDA.dll to fully use high-priority jobs and independent scheduling (on GK110 and later, CUDA bypasses the graphics command processor and is handled by a dedicated hardware function unit that runs uncoupled from the regular compute and graphics engines; it even supports multiple asynchronous queues in hardware). It's a complete mess, and it's all detailed here: http://ext3h.makegames.de/DX12_Compute.html
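To make the CUDA route concrete, here's a rough sketch (needs nvcc) of the prioritised-stream mechanism the article is on about. myKernel and the launch dimensions are hypothetical stand-ins; the stream-priority calls are the real CUDA runtime API.

```cpp
#include <cuda_runtime.h>

// Placeholder kernel standing in for work that would otherwise go
// through a DX12 compute queue.
__global__ void myKernel(float* data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] = data[i] * 2.0f;   // stand-in for real compute work
}

void LaunchHighPriorityCompute(float* devData)
{
    int lowest, highest;  // numerically lower value = higher priority
    cudaDeviceGetStreamPriorityRange(&lowest, &highest);

    // A high-priority stream the driver can schedule ahead of
    // default-priority work, independently of the graphics engine.
    cudaStream_t hiPrio;
    cudaStreamCreateWithPriority(&hiPrio, cudaStreamNonBlocking, highest);

    myKernel<<<64, 256, 0, hiPrio>>>(devData);

    cudaStreamSynchronize(hiPrio);
    cudaStreamDestroy(hiPrio);
}
```

The point being: priorities and scheduling live entirely in CUDA's driver/hardware path, completely outside the DX12 compute queues.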
Nice read. Sort of.
So a dev could work with CUDA to make async compute work, then. That would require Nvidia to sponsor titles and help with the CUDA coding to prioritise the batches to suit their hardware. The article said that would be the worst case for AMD but bring good gains for Nvidia, as the CUDA route lets the hardware handle async batches better. The reverse is the hardware-only solution GCN was designed for, which is the worst case for Nvidia.
So, AMD can sponsor titles and Nvidia lose out or Nvidia can sponsor titles and AMD can lose out.
No change then!