Saturday, January 11th 2025

Microsoft Lays DirectX API-level Groundwork for Neural Rendering

Microsoft announced updates to the DirectX API that would pave the way for neural rendering. Neural rendering is a concept in which portions of a frame in real-time 3D graphics are drawn using a generative AI model that works in tandem with the classic raster 3D graphics pipeline, along with other advancements such as real-time ray tracing. This is different from AI-based super resolution technologies: here, the generative AI is involved in rendering the input frames that a super resolution technology then upscales. One of the nuts and bolts of neural rendering is cooperative vectors, which enable an information pathway between the conventional graphics pipeline and the generative AI, telling the model what the pipeline is doing, what needs to be done by the AI model, and what the ground truth for the model is.
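To make the concept concrete: the per-pixel workload of a neural shader is, at its core, a very small neural network evaluation. The sketch below runs a tiny, made-up MLP for every pixel in plain CUDA; cooperative vectors are, roughly speaking, the plumbing that would let shader code hand such small matrix-vector products to the GPU's matrix/AI units instead of grinding through them in scalar code. All layer sizes, weights, and names here are invented for illustration and do not represent Microsoft's actual API.

```cuda
// Sketch of the per-pixel math of a "neural shader": a tiny MLP
// (4 inputs -> 8 hidden -> 3 outputs) evaluated for every pixel.
// Plain scalar CUDA; the real cooperative-vectors path would feed these
// matrix-vector products to the GPU's matrix/AI units from shader code.
// All sizes, weights, and names are invented for illustration.
#include <cstdio>
#include <cuda_runtime.h>

#define IN 4
#define HID 8
#define OUT 3

__constant__ float W1[HID][IN], B1[HID];   // layer 1 (would be trained offline)
__constant__ float W2[OUT][HID], B2[OUT];  // layer 2

__global__ void neural_shade(float* rgb, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Inputs the raster pipeline would supply: UVs plus two toy features.
    float in[IN] = { x / (float)width, y / (float)height, 0.5f, 1.0f };

    float h[HID];
    for (int i = 0; i < HID; ++i) {          // hidden layer, ReLU
        float s = B1[i];
        for (int j = 0; j < IN; ++j) s += W1[i][j] * in[j];
        h[i] = s > 0.0f ? s : 0.0f;
    }
    int p = (y * width + x) * 3;
    for (int i = 0; i < OUT; ++i) {          // output layer -> RGB
        float s = B2[i];
        for (int j = 0; j < HID; ++j) s += W2[i][j] * h[j];
        rgb[p + i] = s;
    }
}

int main()
{
    const int w = 256, h = 256;
    float* rgb;
    cudaMallocManaged(&rgb, w * h * 3 * sizeof(float));

    // Dummy "trained" weights, just so the kernel produces something.
    float w1[HID][IN], b1[HID] = {}, w2[OUT][HID], b2[OUT] = {};
    for (int i = 0; i < HID; ++i)
        for (int j = 0; j < IN; ++j) w1[i][j] = 0.1f * (i - j);
    for (int i = 0; i < OUT; ++i)
        for (int j = 0; j < HID; ++j) w2[i][j] = 0.05f * (j - i);
    cudaMemcpyToSymbol(W1, w1, sizeof(w1));
    cudaMemcpyToSymbol(B1, b1, sizeof(b1));
    cudaMemcpyToSymbol(W2, w2, sizeof(w2));
    cudaMemcpyToSymbol(B2, b2, sizeof(b2));

    dim3 block(16, 16), grid(w / 16, h / 16);
    neural_shade<<<grid, block>>>(rgb, w, h);
    cudaDeviceSynchronize();

    int c = (128 * w + 128) * 3;
    printf("pixel (128,128) = %.3f %.3f %.3f\n", rgb[c], rgb[c + 1], rgb[c + 2]);
    cudaFree(rgb);
    return 0;
}
```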

Microsoft says that its HLSL team is working with AMD, Intel, NVIDIA, and Qualcomm to bring cross-vendor support for cooperative vectors in the DirectX ecosystem. The very first dividends of this effort will be seen in the upcoming NVIDIA GeForce RTX 50-series "Blackwell" GPUs, which will use cooperative vectors to drive neural shading. "Neural shaders can be used to visualize game assets with AI, better organize geometry for improved path tracing performance and tools to create game characters with photo-realistic visuals," Microsoft says.
Source: Microsoft DirectX Blog

37 Comments on Microsoft Lays DirectX API-level Groundwork for Neural Rendering

#26
GodisanAtheist
bug: Somebody hasn't been paying attention. DX12 is "lower level" than DX11 (in a somewhat Vulkan way); it gives more control to the game engines, precisely so it doesn't need to be upgraded as often. The current DX12 is not the same DX12 that was released initially; several extensions have been added since: en.wikipedia.org/wiki/DirectX#Version_history
-Yep, a close-to-metal API with fully programmable shaders in theory means it's the "last" API. Anything you add to it can *technically* be done via hardware shaders if necessary (albeit not well), so there really isn't a need for an updated version number.

That said, when GPU sales / Windows sales start slumping, MS/NV/AMD will get together and repackage everything as a DX13 that will only be supported on the latest Windows and the latest GPUs to force obsolescence and drive up sales.
Posted on Reply
#27
swaaye
I wonder what kind of performance we would be at now for traditional programmable shading and rasterization if RT and "neural processing" weren't consuming transistor budget. We are essentially back in the days of non-unified architectures.
Posted on Reply
#28
Drash
RT was a thing in the early 90's for RF wave propagation. H/W was even weaker then, hence .....
Posted on Reply
#29
notoperable
Hilarious. So algorithmically correct ray tracing is a real computing headache; let's drop the hard stuff, put a model on it, teach it, and it will invent the hard parts that might or might not look that way around the corner.

Mission accomplished

Forget Lucas Industrial Lights and Magic, say hello to Nvidia Gaslight
Posted on Reply
#30
bug
notoperable: Hilarious. So algorithmically correct ray tracing is a real computing headache; let's drop the hard stuff, put a model on it, teach it, and it will invent the hard parts that might or might not look that way around the corner.

Mission accomplished

Forget Lucas Industrial Lights and Magic, say hello to Nvidia Gaslight
If possible, why not?
The thing about neural networks is they may take ages to train, but, once trained, they will recognize stuff in a jiffy. If you can leverage that, why not?
And it's not Nvidia anything, this is DX. Available for everyone.
Posted on Reply
#31
Evrsr
Daven: DirectX 12 is almost 10 years old. Are we still going to be on DirectX 12 in another ten years? I mean why can't AI, RT, super sampling, upscaling, neural render, direct storage, etc. be considered enough of a change to warrant calling it DirectX 13? Does Microsoft get some sort of benefit by changing so much but keeping the version number the same?

Edit: Here are the release times for past versions:

1.0 to 2.0 1 year
2.0 to 3.0 1 year
3.0 to 5.0 1 year (4.0 never released)
5.0 to 6.0 1 year
6.0 to 7.0 1 year
7.0 to 8.0 1 year
8.0 to 9.0 2 years
9.0 to 10.0 4 years
10.0 to 11.0 3 years
11.0 to 12.0 6 years
12.0 going on 10 years now!!
To be fair, DX 12 Ultimate could very well be named DX13.

There is also the need to preserve some compatibility because of the install base. Back then the number of people playing games was much smaller than it is now, and hardware moves much more slowly today: we're at two years per new architecture, whereas in those early days it was six months.

Much of it has also been Microsoft's fault, since game developers may not want to support newer standards. Besides the hardware requirements, IIRC Windows 10 didn't get 12 Ultimate for quite some time, because Microsoft wanted to push Win 11 adoption.
So developers tend to stick to a common platform until it is possible to move forward; they are not going to alienate 50%+ of the user base just because Microsoft is stubborn.
Posted on Reply
#32
Rightness_1
A sad reflection of Microsoft's decline. This had to be in the works at NV for at least 4 years, yet it's still not ready to be rolled out via DirectX. I really wonder if any Blackwell card will ever render a single frame (outside of a demo) in the first year after the cards are released.
Posted on Reply
#33
notoperable
bug: If possible, why not?
The thing about neural networks is they may take ages to train, but, once trained, they will recognize stuff in a jiffy. If you can leverage that, why not?
And it's not Nvidia anything, this is DX. Available for everyone.
Of course, but my user experience with prediction engines (colloquially known as AI) is traumatizing, so if this "leverage" ends up with similar quality, then no thanks.
I want to have empirical ray tracing.
Rightness_1: A sad reflection of Microsoft's decline. This had to be in the works at NV for at least 4 years, yet it's still not ready to be rolled out via DirectX. I really wonder if any Blackwell card will ever render a single frame (outside of a demo) in the first year after the cards are released.
There are no people at Microsoft who care about the high-performance graphics pipeline and low-level optimizations, because it's Microsoft.
Posted on Reply
#34
Dr. Dro
Daven: DirectX 12 is almost 10 years old. Are we still going to be on DirectX 12 in another ten years? I mean why can't AI, RT, super sampling, upscaling, neural render, direct storage, etc. be considered enough of a change to warrant calling it DirectX 13? Does Microsoft get some sort of benefit by changing so much but keeping the version number the same?
DirectX 13 is simply called "12 Ultimate". DirectX development has always been tied to the latest versions of Windows, not to mention that DirectX 11 is still the most widely used graphics API overall, and for good reason: it's the most versatile of them, its capabilities align with most budgets and gamers' interests, OS compatibility is good, and it isn't as low-level as DirectX 12 tends to be, allowing more programmers to work with it. Vendors that aren't green and clad in a fancy leather or snakeskin jacket also have great trouble keeping up with the DirectX spec; in most cases they are late to adopt the latest driver or shader models and simply opt not to support features that aren't absolutely mandated by Microsoft and are considered optional extensions, unless their hand is forced by an ISV that desperately wants access to said feature.
Posted on Reply
#35
Naito
Vayra86: As for RT... I'm still trying to wrap my head around the economical sense of it, because there isn't one
There's always an early adopter fee and teething issues associated with the first few generations of something. Sure, PT/RT as a tech is nothing new, but calculating physically based lighting in modern games at playable frame rates takes a lot of resources. Even today, a prerendered frame in a CGI film can take days to render, so it's impressive the tech performs in games as well as it does, even using lower-fidelity techniques.

Games built with PT as a core requirement, like the new Indiana game, show how much more convincing a scene can be versus all the traditional rasterisation hacks of the past. Once they solve the 'noise' issue and performance hits, it will be a very bright future for gaming indeed.
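To put a rough number on that 'noise' issue: a path tracer estimates each pixel from a handful of random samples, and the error only falls with the square root of the sample count, which is why low samples-per-pixel images look grainy without a denoiser. The toy Monte Carlo estimator below (not a path tracer; the kernel and constants are invented for the demo) shows the effect: quadrupling the samples only halves the noise.

```cuda
// Each thread is one "pixel" estimating the same simple integral (area of a
// quarter circle, a stand-in for "how much light reaches this pixel") from N
// random samples. The host then measures pixel-to-pixel noise for various N.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__device__ float rand01(unsigned& s)
{
    // Tiny PCG-style hash; good enough for a demo, not for production.
    s = s * 747796405u + 2891336453u;
    unsigned w = ((s >> ((s >> 28) + 4u)) ^ s) * 277803737u;
    return ((w >> 22) ^ w) * (1.0f / 4294967296.0f);
}

__global__ void estimate(float* out, int pixels, int samples)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= pixels) return;
    unsigned seed = 1234u + p * 9781u + samples * 6271u;
    float hits = 0.0f;
    for (int i = 0; i < samples; ++i) {
        float x = rand01(seed), y = rand01(seed);
        if (x * x + y * y < 1.0f) hits += 1.0f;   // "ray reached the light"
    }
    out[p] = hits / samples;                       // per-pixel estimate of pi/4
}

int main()
{
    const int pixels = 1 << 16;
    float* est;
    cudaMallocManaged(&est, pixels * sizeof(float));
    for (int samples = 1; samples <= 256; samples *= 4) {
        estimate<<<(pixels + 255) / 256, 256>>>(est, pixels, samples);
        cudaDeviceSynchronize();
        double mean = 0, var = 0;
        for (int i = 0; i < pixels; ++i) mean += est[i];
        mean /= pixels;
        for (int i = 0; i < pixels; ++i) var += (est[i] - mean) * (est[i] - mean);
        printf("%3d spp -> noise (std dev) %.4f\n", samples, sqrt(var / pixels));
    }
    cudaFree(est);
    return 0;
}
```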

I for one long to see the day when things like distracting screen-space hacks and cube maps or inaccurate lighting from light/shadow maps are a thing of the past.

The annoying thing is that NVIDIA is allowed to rampantly capitalise on the transition because their primary competitor really dropped the ball on the tech. RDNA4 does look like a step in the right direction, but we'll have to wait for reviews.
Posted on Reply
#36
dragontamer5788
windwhirl: That aside, I'm not sure I understand the point of this technology other than for Nvidia et al to sell us hardware we wouldn't need otherwise.
GPUs gain new instructions faster than CPUs do these days.

DirectX 12 / Shader Model 6 brought wave intrinsics: a bunch of instructions named ballot, vote, and so on that allowed programmers to reason in terms of wavefronts and optimize shaders even further. With new hardware capabilities (ray tracing, FP16, neural nets, and so on), the new instructions need a place in HLSL (the programming language you use in DirectX to program GPUs). Furthermore, GPUs may have other changes (ex: Work Graphs) which are more efficient than older ways of launching kernels.
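CUDA's warp intrinsics are probably the closest widely available analogue to those HLSL wave intrinsics, so here's a rough sketch of the pattern they enable (the kernel name and the brightness threshold are invented for the example): each warp ballots a per-thread test and folds 32 results into a single atomic instead of 32.

```cuda
// Illustrative only: CUDA warp intrinsics standing in for HLSL wave
// intrinsics (WaveActiveBallot and friends). Names and data are made up.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void count_bright_pixels(const float* luma, int n, int* bright_total)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    bool is_bright = (i < n) && (luma[i] > 0.8f);   // per-thread predicate

    // "Ballot": every lane learns which lanes in its warp passed the test.
    unsigned mask = __ballot_sync(0xFFFFFFFFu, is_bright);

    // Lane 0 of each warp folds the 32 answers into one atomic add instead of
    // 32 separate atomics -- the kind of wavefront-level optimization that
    // wave intrinsics make expressible in shaders.
    if ((threadIdx.x & 31) == 0)
        atomicAdd(bright_total, __popc(mask));
}

int main()
{
    const int n = 1 << 20;
    float* luma;
    int* total;
    cudaMallocManaged(&luma, n * sizeof(float));
    cudaMallocManaged(&total, sizeof(int));
    for (int i = 0; i < n; ++i) luma[i] = (i % 10) / 10.0f;  // toy data
    *total = 0;

    count_bright_pixels<<<(n + 255) / 256, 256>>>(luma, n, total);
    cudaDeviceSynchronize();
    printf("bright pixels: %d\n", *total);

    cudaFree(luma);
    cudaFree(total);
    return 0;
}
```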

In short: OS-level system stuff. Way back in the '90s you'd tell the GPU about every triangle in immediate mode. Today, Mesh Shaders calculate triangles programmatically on the GPU itself, and the CPU may never even know those triangles existed as a concept (particle effects, new geometries, and more). This only exists because today's GPUs can issue new GPU commands and new GPU programs themselves, a concept that didn't exist 10 years ago.

DirectX 12 (and newer) has to add new API calls to Windows so that game programmers can access these new features.
Posted on Reply
#37
Vayra86
Naito: There's always an early adopter fee and teething issues associated with the first few generations of something. Sure, PT/RT as a tech is nothing new, but calculating physically based lighting in modern games at playable frame rates takes a lot of resources. Even today, a prerendered frame in a CGI film can take days to render, so it's impressive the tech performs in games as well as it does, even using lower-fidelity techniques.

Games built with PT as a core requirement, like the new Indiana game, show how much more convincing a scene can be versus all the traditional rasterisation hacks of the past. Once they solve the 'noise' issue and performance hits, it will be a very bright future for gaming indeed.

I for one long to see the day when things like distracting screen-space hacks and cube maps or inaccurate lighting from light/shadow maps are a thing of the past.

The annoying thing is that NVIDIA is allowed to rampantly capitalise on the transition because their primary competitor really dropped the ball on the tech. RDNA4 does look like a step in the right direction, but we'll have to wait for reviews.
So far, the reality is that the price of admission is far too high, and even then, the performance is still abysmal and the visual quality suboptimal. That's literally what you're saying, just painted in a rosier way. I agree there's always an early adopter fee and teething issues; the question is whether they ever go away. So far, we've just seen a sharp decline in FPS and a so-so result in return, while games haven't gotten better because of these graphics either. If the graphical expense leeches crucial dev time out of the product's core features and you notice it as a player, something's wrong. And lately this happens a lot.

I'm not playing better games here because of these graphics. That's the bottom line, to me.
Posted on Reply