Saturday, January 11th 2025
Microsoft Lays DirectX API-level Groundwork for Neural Rendering
Microsoft announced updates to the DirectX API that pave the way for neural rendering. Neural rendering is a concept where portions of a frame in real-time 3D graphics are drawn using a generative AI model that works in tandem with the classic raster 3D graphics pipeline, along with other advancements, such as real-time ray tracing. This is different from AI-based super resolution technologies: here, the generative AI is involved in rendering the input frames that a super resolution technology then upscales. One of the nuts and bolts of neural rendering is cooperative vectors, which enable an information pathway between the conventional graphics pipeline and the generative AI, telling it what the pipeline is doing, what needs to be done by the AI model, and what the ground truth for the model is.
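The exact HLSL surface for cooperative vectors is still in preview, so the following is a purely hypothetical sketch of the idea rather than the shipping API — the function and type names below are illustrative only. The concept is that each shader thread holds a small vector of learned features, and the wave cooperatively evaluates a tiny neural-network layer (matrix-vector multiply plus bias and activation) on the GPU's matrix hardware:

```hlsl
// HYPOTHETICAL sketch -- names are illustrative, not the final API.
// Each pixel/thread carries a small feature vector; a cooperative
// matrix-vector multiply evaluates one MLP layer across the wave,
// letting the raster pipeline hand intermediate data to a tiny
// neural network mid-shader.
ByteAddressBuffer WeightsAndBiases : register(t0); // trained MLP parameters

float4 EvaluateNeuralMaterial(float4 features)
{
    // Conceptually: out = activation(W * features + b), where the
    // W * features step is accelerated by cooperative-vector hardware
    // shared across the wave rather than done per-lane in scalar code.
    float4 hidden = CoopVec_MatVecMulAdd(WeightsAndBiases, /*layer*/ 0, features);
    return max(hidden, 0.0f); // ReLU activation, done per-lane as usual
}
```

The point of the design is the middle line: instead of every lane doing its own full matrix multiply in scalar ALU code, the wave cooperates and the work maps onto the GPU's matrix/tensor units.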
Microsoft says that its HLSL team is working with AMD, Intel, NVIDIA, and Qualcomm to bring cross-vendor support for cooperative vectors in the DirectX ecosystem. The very first dividends of this effort will be seen in the upcoming NVIDIA GeForce RTX 50-series "Blackwell" GPUs, which will use cooperative vectors to drive neural shading. "Neural shaders can be used to visualize game assets with AI, better organize geometry for improved path tracing performance and tools to create game characters with photo-realistic visuals," Microsoft says.
Source:
Microsoft DirectX Blog
37 Comments on Microsoft Lays DirectX API-level Groundwork for Neural Rendering
That said, when GPU sales / Windows sales start slumping, MS/NV/AMD will get together and repackage everything as DX13, which will only be supported on the latest Windows and the latest GPUs to force obsolescence and drive up sales.
Mission accomplished
Forget Lucas Industrial Lights and Magic, say hello to Nvidia Gaslight
The thing about neural networks is they may take ages to train, but, once trained, they will recognize stuff in a jiffy. If you can leverage that, why not?
And it's not Nvidia anything, this is DX. Available for everyone.
There is also the need to preserve some compatibility because of the install base. The number of people playing games was much smaller back then, and hardware was much slower.
We're at two years per new architecture, whereas in those early days it was six months.
Most of it has also been Microsoft's fault, as game developers may not want to support newer standards. Hardware aside, IIRC Windows 10 didn't get DirectX 12 Ultimate for quite some time, as Microsoft wanted to push Win 11 adoption.
So, developers tend to stick to a common platform until it is possible to move forward. They are not going to alienate 50%+ of the user base just because Microsoft is stubborn.
I want to have empirical ray tracing. There's nobody at Microsoft who obsesses over a high-performance graphics pipeline and low-end optimizations, because it's Microsoft.
Games built with PT as a core requirement, like the new Indiana game, show how much more convincing a scene can be versus all the traditional rasterisation hacks of the past. Once they solve the 'noise' issue and performance hits, it will be a very bright future for gaming indeed.
I for one long to see the day when things like distracting screen-space hacks and cube maps or inaccurate lighting from light/shadow maps are a thing of the past.
The annoying thing is that NVIDIA gets to rampantly capitalise on the transition because its primary competitor really dropped the ball on the tech. RDNA4 does look like a step in the right direction, but we'll have to wait for reviews.
DirectX 12 / Shader Model 6 brought wave intrinsics: a bunch of instructions named ballot, vote, etc., that allowed programmers to better think in terms of wavefronts and optimize shaders even further. With new hardware features (ray tracing, FP16, neural nets, etc.), the new instructions need a place in HLSL (the programming language you use in DirectX to program GPUs). Furthermore, GPUs may have other changes (ex: Work Graphs) that are more efficient than older ways of launching kernels.
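For the curious, the wave intrinsics mentioned above are real Shader Model 6.0 HLSL. A minimal compute-shader sketch of a wave-level reduction — the kind of wavefront-aware optimization those intrinsics enable (buffer names are my own):

```hlsl
// Requires Shader Model 6.0 (wave intrinsics).
RWStructuredBuffer<float> Data        : register(u0);
RWStructuredBuffer<float> PartialSums : register(u1);

[numthreads(64, 1, 1)]
void CSMain(uint gid : SV_DispatchThreadID)
{
    float v = Data[gid];

    // All lanes in the wave cooperate: one hardware reduction
    // instead of a shared-memory tree reduction.
    float waveTotal = WaveActiveSum(v);

    // Only the first lane of each wave writes the result.
    if (WaveIsFirstLane())
        PartialSums[gid / WaveGetLaneCount()] = waveTotal;
}
```

Before SM 6, the same reduction needed groupshared memory and explicit barriers; the intrinsic version lets the programmer think directly in wavefronts, which is exactly the mental model the comment describes.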
In short: OS-level system stuff. Way back in the '90s, you'd tell the GPU about every triangle in immediate mode. Today, Mesh Shaders calculate triangles programmatically on the GPU itself, and the CPU may never even know those triangles existed as a concept (particle effects, new geometries, and more). This only exists because today's GPUs can issue new GPU commands and new GPU programs themselves, a concept that didn't exist 10 years ago.
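To make the contrast with immediate mode concrete, here is a minimal HLSL mesh shader that generates a triangle entirely on the GPU — the CPU only issues the dispatch and never sees the vertex data (struct and entry-point names are my own):

```hlsl
// Requires Shader Model 6.5 (mesh shaders).
struct VertexOut
{
    float4 pos : SV_Position;
};

[outputtopology("triangle")]
[numthreads(3, 1, 1)]
void MeshMain(uint tid : SV_GroupThreadID,
              out vertices VertexOut verts[3],
              out indices  uint3     tris[1])
{
    // Declare how many vertices/primitives this group emits.
    SetMeshOutputCounts(3, 1);

    // Each of the 3 threads computes one vertex procedurally --
    // no vertex buffer was ever uploaded from the CPU.
    const float2 corners[3] = { float2(0, 0.5), float2(0.5, -0.5), float2(-0.5, -0.5) };
    verts[tid].pos = float4(corners[tid], 0, 1);

    if (tid == 0)
        tris[0] = uint3(0, 1, 2);
}
```

Compare this with '90s immediate mode, where the CPU fed every vertex across the bus each frame; here the geometry is born and dies on the GPU.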
DirectX12 (and newer) have to add new API calls to Windows so that the game programmers can access these new features.
I'm not playing better games here because of these graphics. That's the bottom line, to me.