Thursday, March 13th 2025

NVIDIA and Microsoft Open Next Era of Gaming With Groundbreaking Neural Shading Technology

Ahead of the Game Developers Conference (GDC), NVIDIA today announced groundbreaking enhancements to NVIDIA RTX neural rendering technologies. NVIDIA has partnered with Microsoft to bring neural shading support to the Microsoft DirectX preview in April, giving developers access to the AI Tensor Cores in NVIDIA GeForce RTX GPUs to accelerate neural networks from within a game's graphics pipeline. Neural shading represents a revolution in graphics programming, combining AI with traditional rendering to dramatically boost frame rates, enhance image quality and reduce system resource usage.

"Microsoft is adding cooperative vector support to DirectX and HLSL, starting with a preview this April," said Shawn Hargreaves, Direct3D development manager at Microsoft. "This will advance the future of graphics programming by enabling neural rendering across the gaming industry. Unlocking Tensor Cores on NVIDIA RTX will allow developers to fully leverage RTX Neural Shaders for richer, more immersive experiences on Windows."

Neural Shaders Enable Photorealistic, Living Worlds With AI
The next era of computer graphics will be based on NVIDIA RTX Neural Shaders, which allow the training and deployment of tiny neural networks from within shaders to generate textures, materials, lighting, volumes and more. This results in dramatic improvements in game performance, image quality and interactivity, delivering new levels of immersion for players.
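
To make the idea concrete, the sketch below (purely illustrative C++, with placeholder names, layer sizes and weights) shows the kind of computation a tiny neural network inside a shader performs: a two-layer MLP that maps a handful of shading inputs, such as UV coordinates and a view direction, to an RGB value in place of a conventional texture or material lookup. In practice this would be written in HLSL, with the matrix-vector products offloaded to RTX Tensor Cores via the new cooperative vector support.

```cpp
// Illustrative only: a tiny 2-layer MLP of the kind a neural shader evaluates
// per sample. Layer sizes, weights and inputs are hypothetical placeholders.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int IN = 5, HIDDEN = 16, OUT = 3;

// In a real neural shader these weights would come from training;
// they are zero-initialised placeholders here.
std::array<std::array<float, IN>, HIDDEN>  W0{};
std::array<float, HIDDEN>                  b0{};
std::array<std::array<float, HIDDEN>, OUT> W1{};
std::array<float, OUT>                     b1{};

// One forward pass: input -> hidden (ReLU) -> output RGB.
// The two matrix-vector products below are exactly the work the GPU's
// matrix units (Tensor Cores) would accelerate.
std::array<float, OUT> evaluate(const std::array<float, IN>& x)
{
    std::array<float, HIDDEN> h{};
    for (int i = 0; i < HIDDEN; ++i) {
        float acc = b0[i];
        for (int j = 0; j < IN; ++j) acc += W0[i][j] * x[j];
        h[i] = std::max(acc, 0.0f);               // ReLU
    }
    std::array<float, OUT> rgb{};
    for (int i = 0; i < OUT; ++i) {
        float acc = b1[i];
        for (int j = 0; j < HIDDEN; ++j) acc += W1[i][j] * h[j];
        rgb[i] = acc;
    }
    return rgb;
}

int main()
{
    // Hypothetical shading input: UV coordinates plus a view direction.
    std::array<float, IN> input{0.25f, 0.75f, 0.0f, 0.0f, 1.0f};
    auto rgb = evaluate(input);
    std::printf("rgb = %.3f %.3f %.3f\n", rgb[0], rgb[1], rgb[2]);
}
```

Trained weights would replace the zero-initialised placeholder arrays; the shape of the computation, a few small matrix-vector multiplies per sample, is the point, and is what the Tensor Cores accelerate.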

At the CES trade show earlier this year, NVIDIA introduced RTX Kit, a comprehensive suite of neural rendering technologies for building AI-enhanced, ray-traced games with massive geometric complexity and photorealistic characters.

Now, at GDC, NVIDIA is expanding its lineup of neural rendering technologies with Microsoft DirectX support and plug-ins for Unreal Engine 5.

NVIDIA is partnering with Microsoft to bring neural shading support to the DirectX 12 Agility software development kit preview in April, providing game developers with access to RTX Tensor Cores to accelerate the performance of applications powered by RTX Neural Shaders.

Plus, Unreal Engine developers will be able to get started with RTX Kit features such as RTX Mega Geometry and RTX Hair through the experimental NVIDIA RTX branch of Unreal Engine 5. These enable the rendering of assets with dramatic detail and fidelity, bringing cinematic-quality visuals to real-time experiences.

Now available, NVIDIA's "Zorah" technology demo has been updated with new, incredibly detailed scenes filled with millions of triangles, complex hair systems and cinematic lighting rendered in real time - all by tapping into the latest technologies powering neural rendering, including:
  • ReSTIR Path Tracing
  • ReSTIR Direct Illumination
  • RTX Mega Geometry
  • RTX Hair

And the first neural shader, Neural Radiance Cache, is now available in RTX Remix.

12 Comments on NVIDIA and Microsoft Open Next Era of Gaming With Groundbreaking Neural Shading Technology

#1
Notional
There are currently as many 5000 series cards for sale as games that support this feature.
#2
londiste
Oh god those names.
RTX Mega Geometry seems like an important one based on what has been claimed previously. If it really does help with BVH processing that might be big.
#3
LastDudeALive
Can someone ELI5 Neural rendering? These press releases only have buzzwords and never explain what something actually does.
#4
L0stS0ul
Seriously, do we need so many separate news posts from Nvidia? Couldn't they be grouped into one or two articles, e.g. about the games themselves and similar solutions? This is some kind of Nvidia spam or product placement :banghead:
#5
Prima.Vera
Of course those features will ONLY be available on the nVidia RTX 60x0 series, and will only be supported via an exclusive DLSS 5 for the 6000 series.
You heard it here first, folks!
#6
Epaminombas
This is really crazy.

AMD uses technologies that are already present in DirectX.

Meanwhile, Nvidia keeps creating exclusive APIs for CUDA Cores and RTX.

CUDA needs DirectX libraries. But it releases exclusive stuff.

If developers only used DirectX, there would be no need to use exclusive CUDA.

But it is clear that Nvidia PAYS developers to use CUDA and this has been going on for over 10 years.
#7
londiste
LastDudeALive: Can someone ELI5 Neural rendering? These press releases only have buzzwords and never explain what something actually does.
AI/ML stuff in DX?
Epaminombas: This is really crazy.
AMD uses technologies that are already present in DirectX.
Meanwhile, Nvidia keeps creating exclusive APIs for CUDA Cores and RTX.
CUDA needs DirectX libraries. But it releases exclusive stuff.
If developers only used DirectX, there would be no need to use exclusive CUDA.
But it is clear that Nvidia PAYS developers to use CUDA and this has been going on for over 10 years.
How do you think APIs evolve? Where did Vulkan come from? And what triggered DX12 to be created?
Additions to DirectX are pretty much the opposite of exclusive. DXR is not exclusive, nor are neural shaders. RTX isn't an API. CUDA has a pretty different use case. In fact, you could say this addition of neural shaders moves that part of the ML-related workload out of CUDA and into DirectX.

Nvidia does not pay developers to use CUDA. Nvidia spends a lot of money on developing CUDA so that developers would want to use it :)

Also, stagnation is not a good thing...
#8
Epaminombas
Vulkan came from DirectX.

Microsoft gave AMD money to get back on its feet and gave it the idea to create an Open API based on DirectX.

So much so that 90% of the Vulkan library CAME FROM DIRECTX. But you can't copy everything 100% because DirectX is patented.

CUDA is a separate API, yes, it's even on Nvidia's website.
So much so that the RTX 50 has a different version of CUDA than the RTX 30 and 40.

If Intel or Nvidia or AMD or whoever creates a new library, Microsoft is obliged to add it to DirectX. And so it does with each new version of WDDM and with each new version of Windows, new features are added to DirectX.

devblogs.microsoft.com/directx/
#9
LastDudeALive
Epaminombas: This is really crazy.

AMD uses technologies that are already present in DirectX.

Meanwhile, Nvidia keeps creating exclusive APIs for CUDA Cores and RTX.

CUDA needs DirectX libraries. But it releases exclusive stuff.

If developers only used DirectX, there would be no need to use exclusive CUDA.

But it is clear that Nvidia PAYS developers to use CUDA and this has been going on for over 10 years.
RT was integrated into DirectX from the very beginning. Microsoft works with Nvidia because Nvidia is the only company that can think of new things. If AMD thinks of something new, I'm sure Microsoft will be more than happy to work on putting it into DirectX. But AMD is stupid and can't think of anything new, so Nvidia has become responsible for improving DirectX.
londiste: AI/ML stuff in DX?
So basically, instead of keeping a rock texture in the game file, it would generate a rock texture picture on-demand using the tensor cores? Is that it? Or generate NPC dialogue on-demand?
#10
londiste
LastDudeALive: So basically, instead of keeping a rock texture in the game file, it would generate a rock texture picture on-demand using the tensor cores? Is that it? Or generate NPC dialogue on-demand?
The key bit from the DirectX perspective is an API for matrix multiplication - think Nvidia's Tensor Cores or AMD's AI accelerators.
I think this MS post was rather good in explaining what and why:
devblogs.microsoft.com/directx/enabling-neural-rendering-in-directx-cooperative-vector-support-coming-soon/

Texture shenanigans, NPC dialogue, BVH updates, etc. are applications/middleware/algorithms that can now use these units through DirectX for whatever they need to do. It's not that the same ML-based functions cannot be used today, but right now you'd need to use the manufacturers' own APIs for that - CUDA for Nvidia cards; not sure what AMD needs for WMMA, something in the GPUOpen library?
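
Roughly, the pattern looks like this - a minimal C++ sketch, not the actual cooperative-vector API (which is still in preview and not named here). It uses the existing ray tracing tier query purely as a stand-in to show how an engine detects an optional hardware capability through plain D3D12 and picks a code path; the new cooperative-vector capability is surfaced through the same CheckFeatureSupport mechanism.

```cpp
// Sketch of the general D3D12 capability-detection pattern: create a device,
// query an optional feature tier, and pick a code path accordingly. The ray
// tracing tier is used purely as a stand-in; the cooperative-vector feature
// added by the Agility SDK preview is queried the same way, but its specific
// feature enum is not shown here.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::puts("DXR supported: use the hardware-accelerated path");
    } else {
        std::puts("DXR not supported: fall back to a non-RT path");
    }
    return 0;
}
```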
Epaminombas: Vulkan came from DirectX.
Microsoft gave AMD money to get back on its feet and gave it the idea to create an Open API based on DirectX.
So much so that 90% of the Vulkan library CAME FROM DIRECTX. But you can't copy everything 100% because DirectX is patented.
Vulkan is managed by the Khronos Group, the same entity that is behind OpenGL. They needed a low-level API to keep up with the times; OpenGL and Vulkan are direct competitors of Microsoft's Direct3D.
From the technical side of things, at least initially, Vulkan was a slightly repainted version of AMD's Mantle. AMD and DICE created Mantle to have a lower-level API compared to OpenGL and DirectX 9/11. This brought on the current crop of APIs - Vulkan and DirectX 12.
Epaminombas: CUDA is a separate API, yes, it's even on Nvidia's website.
So much so that the RTX 50 has a different version of CUDA than the RTX 30 and 40.
CUDA is an API. RTX is not - RTX is a brand, a marketing term for their current platform, which includes or uses a number of APIs for various purposes: DXR, OptiX, CUDA and a smattering of others.
Epaminombas: If Intel or Nvidia or AMD or whoever creates a new library, Microsoft is obliged to add it to DirectX. And so it does with each new version of WDDM and with each new version of Windows, new features are added to DirectX.
That is not how it works at all. Microsoft is quite nicely in control of DirectX. They say what goes or doesn't go. Naturally, the new stuff comes from graphics manufacturers but it is entirely on Microsoft.
Vulkan is where that is different - it has a more interesting model for innovation. A manufacturer can create an extension for Vulkan, implement it, ship it and start using it. Then, if whatever they added looks generally useful, there is a round of discussions and the extension can become an official Vulkan extension.
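
For illustration, this is roughly what consuming such an extension looks like from the application side - a minimal C++ sketch assuming the Vulkan SDK headers are installed and a VkPhysicalDevice has already been selected; hasDeviceExtension is just an illustrative helper name. VK_NV_cooperative_matrix is a fitting example of the model: it started as an NVIDIA extension for exactly this kind of matrix math and was later promoted to VK_KHR_cooperative_matrix.

```cpp
// Sketch: check whether a given (e.g. vendor-prefixed) Vulkan device extension
// is available. hasDeviceExtension is an illustrative helper; the caller is
// assumed to have already picked a VkPhysicalDevice.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool hasDeviceExtension(VkPhysicalDevice gpu, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0)
            return true;
    return false;
}

// Usage (illustrative): a vendor extension is queried exactly like a KHR one;
// if present, its name is added to the list passed to vkCreateDevice.
//   bool hasNvCoopMatrix = hasDeviceExtension(gpu, "VK_NV_cooperative_matrix");
```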
#11
Visible Noise
Epaminombas: Vulkan came from DirectX.

Microsoft gave AMD money to get back on its feet and gave it the idea to create an Open API based on DirectX.

So much so that 90% of the Vulkan library CAME FROM DIRECTX. But you can't copy everything 100% because DirectX is patented.

CUDA is a separate API, yes, it's even on Nvidia's website.
So much so that the RTX 50 has a different version of CUDA than the RTX 30 and 40.

If Intel or Nvidia or AMD or whoever creates a new library, Microsoft is obliged to add it to DirectX. And so it does with each new version of WDDM and with each new version of Windows, new features are added to DirectX.

devblogs.microsoft.com/directx/
You really have no idea how any of this works. Just go play games.
#12
londiste
LastDudeALive: RT was integrated into DirectX from the very beginning. Microsoft works with Nvidia because Nvidia is the only company that can think of new things. If AMD thinks of something new, I'm sure Microsoft will be more than happy to work on putting it into DirectX. But AMD is stupid and can't think of anything new, so Nvidia has become responsible for improving DirectX.
AMD does and has come up with new things - among the big DX12 features, async compute comes to mind. Microsoft will work with whoever can come up with something beneficial.
With RT, Nvidia did what everyone basically always asks for - it brought in something new, and it was not a proprietary thing but part of a general, publicly available, commonly used API from the get-go. Vulkan's ray tracing extensions were not far behind.