Saturday, January 11th 2025
Microsoft Lays DirectX API-level Groundwork for Neural Rendering
Microsoft announced updates to the DirectX API that pave the way for neural rendering. Neural rendering is a concept where portions of a frame in real-time 3D graphics are drawn by a generative AI model that works in tandem with the classic raster 3D graphics pipeline, along with other advancements such as real-time ray tracing. This is different from AI-based super resolution technologies. The generative AI here is involved in rendering the input frames for a super resolution technology. One of the nuts and bolts of neural rendering is cooperative vectors, which enable an information pathway between the conventional graphics pipeline and the generative AI, telling it what the pipeline is doing, what needs to be done by the AI model, and what the ground truth for the model is.
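The HLSL plumbing is still in preview, but the workload cooperative vectors exist to accelerate is small matrix-vector multiplies issued from inside a shader, e.g., a tiny neural network evaluated per pixel. Below is a minimal CPU-side C++ sketch of that computation; the network sizes, names, and structure are invented for illustration and are not the actual DirectX API:

```cpp
// Minimal CPU-side sketch of the kind of work cooperative vectors accelerate:
// a tiny two-layer MLP evaluated per pixel. On GPUs, cooperative vectors let
// shader threads hand these matrix-vector multiplies to dedicated matrix
// hardware. All sizes, names, and weights here are invented for illustration.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int IN = 8, HID = 16, OUT = 3;

struct MlpWeights {                 // trained offline, uploaded as a buffer
    float w1[HID][IN];  float b1[HID];
    float w2[OUT][HID]; float b2[OUT];
};

// One "neural shader" evaluation: matvec -> ReLU -> matvec.
std::array<float, OUT> evalNeuralShader(const MlpWeights& w,
                                        const std::array<float, IN>& x) {
    std::array<float, HID> h{};
    for (int i = 0; i < HID; ++i) {
        float acc = w.b1[i];
        for (int j = 0; j < IN; ++j) acc += w.w1[i][j] * x[j];
        h[i] = std::max(acc, 0.0f);                       // ReLU
    }
    std::array<float, OUT> y{};
    for (int i = 0; i < OUT; ++i) {
        float acc = w.b2[i];
        for (int j = 0; j < HID; ++j) acc += w.w2[i][j] * h[j];
        y[i] = acc;
    }
    return y;
}

int main() {
    MlpWeights w = {};                         // zero weights, demo only
    std::array<float, IN> gbufferFeatures{};   // e.g. normal, depth, albedo
    auto rgb = evalNeuralShader(w, gbufferFeatures);
    std::printf("out: %f %f %f\n", rgb[0], rgb[1], rgb[2]);
    return 0;
}
```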
Microsoft says that its HLSL team is working with AMD, Intel, NVIDIA, and Qualcomm to bring cross-vendor support for cooperative vectors in the DirectX ecosystem. The very first dividends of this effort will be seen in the upcoming NVIDIA GeForce RTX 50-series "Blackwell" GPUs, which will use cooperative vectors to drive neural shading. "Neural shaders can be used to visualize game assets with AI, better organize geometry for improved path tracing performance and tools to create game characters with photo-realistic visuals," Microsoft says.
Source: Microsoft DirectX Blog
20 Comments on Microsoft Lays DirectX API-level Groundwork for Neural Rendering
Edit: Here are the release intervals between past DirectX versions:
1.0 to 2.0: 1 year
2.0 to 3.0: 1 year
3.0 to 5.0: 1 year (4.0 was never released)
5.0 to 6.0: 1 year
6.0 to 7.0: 1 year
7.0 to 8.0: 1 year
8.0 to 9.0: 2 years
9.0 to 10.0: 4 years
10.0 to 11.0: 3 years
11.0 to 12.0: 6 years
12.0: going on 10 years now!!
On the actual article: I'm still not clear on what this is supposed to be.
"Neural rendering," apart from being yet another stupid marketing name, seems to imply it's doing something with the actual... ya know... rendering of the frame. So maybe half is done by traditional rasterization so the AI has something to build off, and then it makes the rest of the image?
On the other hand, we get this fantastic sentence:
"This is different from AI-based super resolution technologies. The generative AI here is involved in rendering the input frames for a super resolution technology."
Why does it say it's rendering the input frames... for an upscaler?
So, like, it provides the motion vectors or some crap?
It's like a game that runs on Windows 10 but not on 7: OK, good to know, but I don't know the actual reason for it. What tech does 10 have that 7 does not?
Likewise, DirectX versions have never meant anything to me; I can run Warframe from the launcher in DX11 and DX12 mode with no visual difference between them.
So when a game runs on DX12, that's just like a game running on Unreal Engine 5: it COULD use certain features, but it might just as well not. The product will show that eventually. So yeah, for all I care it stays DX12 from now on, because whatever the API supports doesn't mean the product will use it anyway.
And just to make clear, I'm not against any change either; it's all good for me. I just don't share the feeling that it's needed.
Nvidia seems to provide more detail about that. RTX neural rendering seems to be to DX neural rendering what RTX IO is to DirectStorage: aka the same thing.
NVIDIA RTX Neural Rendering Introduces Next Era of AI-Powered Graphics Innovation | NVIDIA Technical Blog So you train a model on the rasterized game and use AI to improve some areas of the render. Seems promising in theory, but the demo doesn't look visually stable. That answers a few of my questions, though: I was wondering how they would pull off stuff like real-time SSS or caustics, but it seems that anything too heavy will be AI-generated.
Edit: It seems that neural rendering will also accelerate RT/PT. If I understand correctly, the GPU isn't going to brute-force every single bounce of light, but rather infer data from the first few bounces... which would also justify Nvidia being stingy on VRAM :D
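A toy sketch of that idea (this is just a reading of the blog post, not NVIDIA's actual pipeline): trace the first few bounces for real, then substitute an inferred estimate from a learned radiance cache instead of chasing the path further. Every type and function below is an invented stand-in:

```cpp
// Toy illustration of truncating a path trace and letting a learned radiance
// cache stand in for the remaining bounces. All scene functions are dummy
// stubs; a real renderer would do actual intersection/BRDF/network work.
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 pos, normal; bool valid; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

Hit  intersectScene(const Ray&)     { return {{0,0,0}, {0,1,0}, true}; } // stub
Vec3 directLighting(const Hit&)     { return {0.1f, 0.1f, 0.1f}; }       // stub
Ray  sampleBounce(const Hit& h)     { return {h.pos, h.normal}; }        // stub
Vec3 queryRadianceCache(const Hit&) { return {0.3f, 0.3f, 0.3f}; }       // stub: small trained net

// Classic path tracing recurses until the path dies; here we cut it off
// after maxRealBounces and ask the cache for the rest (inference, not rays).
Vec3 trace(Ray ray, int depth, int maxRealBounces) {
    Hit hit = intersectScene(ray);
    if (!hit.valid) return {0, 0, 0};
    Vec3 radiance = directLighting(hit);
    if (depth >= maxRealBounces)
        return add(radiance, queryRadianceCache(hit));
    return add(radiance, trace(sampleBounce(hit), depth + 1, maxRealBounces));
}

int main() {
    Vec3 c = trace({{0, 0, 0}, {0, 0, 1}}, 0, /*maxRealBounces=*/2);
    std::printf("pixel: %f %f %f\n", c.x, c.y, c.z);
    return 0;
}
```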
The face one, for example, is in no way better; they just made a more beautiful person lol. And that AI co-player, yikez.
Better compression is cool, I guess, and if AI somehow does this well... I don't know. I guess that's also something I don't get about AI: it's not actively learning, right? So it's just an algorithm like any other, one that happened to be trained, in AI terms, but after that it's just done: a better compression technique. Which is cool, but why would anything need "AI" hardware to use such a thing?
And if it doesn't, then who gives a crap how the algorithm was made? I assume some calculations were done and now it's just here for me to use...
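For what it's worth, the usual answer is that the weights are indeed frozen after training, but decoding then means running the network again for every sample, every frame, which adds up to a lot of matrix math. A back-of-the-envelope sketch, with the network sizes invented purely for illustration:

```cpp
// Rough arithmetic on why a "frozen" neural decompressor still benefits from
// matrix hardware: each texel decode is a forward pass through a small MLP.
// The sizes below are invented for illustration, not from any vendor.
#include <cstdio>

int main() {
    long long in = 16, hidden = 32, out = 3;              // tiny decoder MLP
    long long macsPerTexel = in * hidden + hidden * out;  // multiply-adds
    long long texels = 3840LL * 2160;                     // one 4K frame
    long long fps = 60;
    double gmacPerSec = macsPerTexel * texels * fps / 1e9;
    std::printf("%lld MACs per texel -> %.0f GMAC/s for texture decode alone\n",
                macsPerTexel, gmacPerSec);
    return 0;
}
```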
It's also my understanding that a generalist AI upscaler/frame generator doesn't exist because each vendor is using very specialized ML hardware tailor-made for its own API: XeSS doesn't use the ML hardware of other vendors, but falls back on something more generalist that doesn't perform as well as native XeSS. DirectX neural rendering is supposed to avoid that clusterfuck, so Intel and AMD have probably developed their next-gen ML hardware with the required stuff to run all of this.
Give me TFLOPS and optimize your spaghetti-code Frankenstein monster of a game...
I think internally the API is at version 12.2 (feature level 12_2) currently.
But it'd be no surprise if Microsoft never changes the major version number internally, since they avoid compatibility issues that way (just as Windows 11 doesn't report a major version of 11 internally; it remains 10). Basically, software doesn't like it when you respond to a request in a manner that wasn't expected.
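That's also why the headline number matters less than it seems: Direct3D 12 applications mostly probe individual capabilities rather than a version. A minimal C++ sketch of that pattern, using real D3D12 feature queries (error handling trimmed for brevity):

```cpp
// Capability detection in Direct3D 12: rather than gating on a marketing
// version number ("DX12" vs "DX13"), applications query individual features.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Ray tracing support is a per-feature tier, not a DirectX version.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));
    std::printf("Raytracing tier: %d\n", (int)opts5.RaytracingTier);

    // Highest supported shader model, again queried independently.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_6 };
    device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                &sm, sizeof(sm));
    std::printf("Shader model: 0x%x\n", (int)sm.HighestShaderModel);
    return 0;
}
```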
That aside, I'm not sure if I understand the point of this technology other than for Nvidia et al to sell us hardware we wouldn't need otherwise.
As soon as I get into the hobby, it goes to shit. Lovely.
It's hurting everyone, so you're not alone.
Raster is faster to begin with because it uses lots of tricks, whereas fully path-traced graphics are all about brute force. Neural rendering is about to add even more tricks to rasterization in the hope of reducing the computational load required to achieve a similar level of graphics, and that includes lowering the load for path tracing as well.
Neural Supersampling and Denoising for Real-time Path Tracing - AMD GPUOpen
PowerPoint Presentation (about AMD neural texture compression)
All of them are looking for ways to improve graphics with more software tricks rather than simply throwing more raw power at the problem.