Friday, March 21st 2025

Microsoft DirectX Raytracing 1.2 and Neural Rendering Bring up to 10x Speedup for AMD, Intel, and NVIDIA GPUs
Microsoft's DirectX Raytracing (DXR) 1.2 announcement at GDC 2025 introduces two technical innovations that address fundamental ray tracing performance bottlenecks. Opacity micromaps (OMM) reduce the computational overhead of alpha-tested geometry by storing pre-computed opacity data, eliminating redundant ray-geometry intersection tests. Shader execution reordering (SER) tackles the inherent GPU inefficiency caused by incoherent ray behavior by dynamically grouping shader invocations with similar execution paths, minimizing the thread divergence that has historically plagued ray tracing workloads. The real-world implications extend beyond Microsoft's claimed 2.3x (OMM) and 2x (SER) performance improvements: both techniques shift development away from brute-force computation toward more intelligent resource management. Notably, both features require specific hardware support.
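As a rough mental model of what SER changes, here is a CPU-side C++ sketch that groups pending ray hits by the shader they need before shading them, so each batch runs coherently; the hardware feature achieves a comparable coherence-sorting effect for in-flight shader invocations on the GPU. This is a conceptual illustration, not the DXR 1.2 API; the RayHit struct and the runHitShader dispatch are hypothetical stand-ins.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Each traced ray resolves to a hit that wants a particular hit shader.
struct RayHit {
    uint32_t shaderId;   // which closest-hit/material shader to run (hypothetical)
    uint32_t payload;    // per-ray data, stand-in for the real payload
};

// Without reordering, neighbouring threads run arbitrary shaderIds and diverge.
// Grouping hits by shaderId first means each batch executes one shader coherently,
// which is the effect SER aims for in hardware.
void shadeReordered(std::vector<RayHit>& hits) {
    std::sort(hits.begin(), hits.end(),
              [](const RayHit& a, const RayHit& b) { return a.shaderId < b.shaderId; });
    for (const RayHit& hit : hits) {
        // runHitShader(hit.shaderId, hit.payload); // hypothetical dispatch
        (void)hit;
    }
}
```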
Hardware vendors' implementation timelines remain undefined despite NVIDIA's announced support across RTX GPUs, raising questions about broader ecosystem adoption. Separately, Microsoft's Shader Model 6.9 introduces cooperative vectors, a hardware-acceleration path that dramatically improves matrix computation performance inside shaders: Microsoft cites a 10x speedup in neural texture compression while reducing memory footprint by up to 75% compared to traditional methods. Cooperative vectors bridge the gap between conventional rendering and neural rendering, and Intel, AMD, and NVIDIA have already demonstrated implementations that combine path tracing with neural denoising algorithms, potentially making computationally intensive graphics accessible on mid-range consumer hardware by late 2025. While the technical merit of these advancements is clear, the April 2025 preview release of the Agility SDK means developers face at least several months before these features can be meaningfully implemented in production environments.
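For a sense of what cooperative vectors accelerate, here is a scalar C++ reference of the core operation inside a per-pixel neural decoder: one small fully connected layer, out = ReLU(W·in + b). The dimensions and names are illustrative, not taken from the SDK; the point is that cooperative vectors let shaders hand exactly this matrix-vector work to the GPU's matrix units instead of grinding through it lane by lane.

```cpp
#include <cstddef>
#include <vector>

// One fully connected layer: out = ReLU(W * in + b).
// Neural texture decoders evaluate a few such layers per pixel; this scalar
// loop is the work that cooperative vectors offload to hardware matrix units.
std::vector<float> denseLayer(const std::vector<float>& weights, // rows*cols, row-major
                              const std::vector<float>& bias,    // rows
                              const std::vector<float>& input,   // cols
                              std::size_t rows, std::size_t cols)
{
    std::vector<float> out(rows, 0.0f);
    for (std::size_t r = 0; r < rows; ++r) {
        float acc = bias[r];
        for (std::size_t c = 0; c < cols; ++c)
            acc += weights[r * cols + c] * input[c];
        out[r] = acc > 0.0f ? acc : 0.0f; // ReLU
    }
    return out;
}
```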
Sources:
Microsoft, via Wccftech
42 Comments on Microsoft DirectX Raytracing 1.2 and Neural Rendering Bring up to 10x Speedup for AMD, Intel, and NVIDIA GPUs
DirectX 12 is getting too old, and it's still a completely unoptimized mess compared to DX11.
Ridiculous how in 2025 DX12 is still way slower and buggier than DX11.
www.techpowerup.com/320547/amd-posts-super-early-work-graphs-render-time-numbers-posts-39-render-time-improvements#g320547
developer.nvidia.com/blog/machine-learning-acceleration-vulkan-cooperative-matrices/ Why? You mean making DXR, neural rendering and other bits a mandatory part and fashioning that into a new version? Why? What makes DX12 unoptimized? What bugs and slowness do you mean?
Guys, DX12 is an API. How it is used, or needs to be used, is a different matter from the API itself. If you are talking about games, it is not the API that is buggy - in most cases anyway; there have been some relatively small bugs, obviously - but the game or application the developer made. DX12 is a comparatively lower-level API, same as Vulkan, which means the API and the IHVs' driver implementations of it will not hold your hand the way older APIs like OpenGL or DX11 did. While there is more room for optimization, there is also more room for shooting yourself in the foot.
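To make the "no hand-holding" point concrete, here is a minimal D3D12 sketch of something the D3D11 driver used to track implicitly: transitioning a texture's resource state before it is sampled. Getting barriers like this wrong is a typical application-side mistake that ends up blamed on "DX12"; the function below assumes the texture was just used as a render target.

```cpp
#include <d3d12.h>

// Transition a texture from render-target state to shader-resource state.
// In D3D11 the driver handled this bookkeeping; in D3D12 the application must.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}
```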
I don't think there will be any big rush to implement graphics-only features; all the architecture changes now are made for AI/LLM acceleration, even if that means problems with graphics on PCs (dropping 32-bit PhysX, not noticing missing ROPs, which are used only for graphics...).
Will also be interesting to see what sort of legs RDNA 4 has...
That was also the case with OpenGL, which was there before DirectX even existed, yet DirectX became the standard for PC gaming. To be fair, developers have a habit of not using performance improvements to make the same old thing run faster, but to push the graphics even further. When you hear "39% faster when you use that new feature," what you need to read is "we are going to spend that performance gain on pushing the detail level even further. Yes, it won't bring any clearly noticeable visual improvement, but trust us, after a few years of adding even more details, it will make sense, we swear."
Assuming that statement is correct, why has it not been used already? A factor of 10? Seriously? I highly doubt it. Note the "up to" in "up to a factor of 10".
Whataboutism: that is similar to those M.2 NVMe drives with "up to 7000 MB/s" write rates which, at the end of the day, deliver 600 MB/s. I see a factor of up to 12 in that example.
I hardly buy Windows games, so I have not been supporting DirectX-only games. They should use something with an open, license-free specification so anyone can use it on any operating system.
Microsoft should focus on a fast Vulkan implementation.
"Notably, both features require specific hardware support. Hardware vendors' implementation timelines remain undefined despite NVIDIA's announced support across RTX GPUs"
On the MS devblog it's clearer:
"We’re thrilled that our hardware partners are fully embracing these cutting-edge features. NVIDIA has committed driver support across GeForce RTX™ GPUs, and we’re actively working with other hardware vendors, including AMD, Intel, and Qualcomm, to ensure widespread adoption"
Here is Copilot trying to make some graphics:
Note the melting faces and fingers conjoining.
See, it's an imitation.
Not an MS issue either; they're not responsible for Vulkan.
I guess it's more a matter of engines not exploiting Vulkan as thoroughly as they do DirectX. I don't develop with either of those APIs, but I can imagine it being something really simple: either DX is easier to use, be it in actual code or in how extensions are consumed (VK is known to require tons of boilerplate), or engines simply focus more on DX because that's the standard for games on Windows, and that's it.
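For what it's worth on the boilerplate point, this is roughly the very first step of any Vulkan program (instance creation only; physical-device selection, queue families, swapchain and so on all come afterwards), whereas D3D11 got you a device and swapchain from a single call. A minimal sketch with error handling trimmed.

```cpp
#include <vulkan/vulkan.h>

// Create a bare Vulkan instance - step one of many before a triangle appears.
VkInstance createInstance()
{
    VkApplicationInfo appInfo = {};
    appInfo.sType              = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName   = "Demo";
    appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.pEngineName        = "None";
    appInfo.engineVersion      = VK_MAKE_VERSION(1, 0, 0);
    appInfo.apiVersion         = VK_API_VERSION_1_3;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    // Layers and extensions (e.g. for the window system) would be listed here.

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance); // check VkResult in real code
    return instance;
}
```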