
MSI GeForce RTX 5090 Suprim Liquid SOC Review


Neural Rendering


Neural Rendering promises to be as transformative to modern graphics as programmable shaders themselves. 3D graphics rendering evolved from fixed-function hardware at the turn of the century to programmable shaders, HLSL, geometry shaders, compute shaders, and ray tracing over the past couple of decades. In 2025, NVIDIA is writing the next chapter in this journey with Blackwell neural shaders. These allow for a host of neural-driven effects, including neural materials, neural volumes, and even neural radiance fields. Microsoft introduced the new Cooperative Vectors API for DirectX in a recent update, making it possible to access Tensor cores from within a graphics API. Combined with a new shading language, Slang, this breakthrough enables developers to integrate neural techniques directly into their workflows, potentially replacing parts of the traditional graphics pipeline. Slang splits large, complex functions into smaller pieces that are easier to handle. Given that Cooperative Vectors is a standard DirectX API feature, nothing stops AMD and Intel from supporting Neural Rendering in their graphics drivers as well.
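To make the concept concrete, here is a minimal sketch of what a neural shader boils down to mathematically: a tiny multilayer perceptron evaluated per shaded point, built from exactly the kind of small matrix-vector products that Cooperative Vectors exposes to the Tensor cores. This is an illustrative Python mock-up; the layer sizes, inputs, and weights are assumptions, not NVIDIA's actual model or API.

import numpy as np

# Illustrative sketch: a neural shader is essentially a tiny MLP run per
# shaded point. Cooperative Vectors exposes these small matrix-vector
# operations to the GPU's Tensor cores. All sizes/weights are placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 8)), np.zeros(32)   # hidden layer
W2, b2 = rng.standard_normal((3, 32)), np.zeros(3)    # RGB output layer

def neural_shade(features):
    """features: per-pixel shading inputs (UVs, view/light directions, etc.)."""
    h = np.maximum(W1 @ features + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2                       # decoded RGB value

pixel_features = rng.standard_normal(8)      # stand-in shading inputs
print(neural_shade(pixel_features))          # the network's "shade" for one pixel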

RTX Neural Materials works to significantly reduce the memory footprint of materials in 3D scenes. Under conventional rendering, the memory footprint of a material is bloated by complex shader code and multiple texture layers. Neural materials convert that shader code and those texture layers into a compressed neural representation. This results in up to a 7:1 compression ratio and enables small neural networks to generate stunning, film-like materials in real time. For example, silk rendered with traditional shaders might lack the multicolored sheen seen in real life. Neural materials, however, capture intricate details like color variation and reflections, bringing such surfaces to life with unparalleled realism, and at a fraction of the memory cost.
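Some back-of-the-envelope arithmetic shows what a 7:1 ratio means in practice. The material resolution and layer count below are assumed for illustration, not figures from NVIDIA:

# Illustrative memory arithmetic for neural material compression.
# Assumptions: a 4096x4096 material with 5 texture layers (albedo,
# normal, roughness, etc.) at 4 bytes per texel per layer.
texels = 4096 * 4096
layers, bytes_per_texel = 5, 4
conventional = texels * layers * bytes_per_texel        # raw footprint
neural = conventional / 7                               # NVIDIA's up-to-7:1 claim
print(f"conventional: {conventional / 2**20:.0f} MiB")  # ~320 MiB
print(f"neural:       {neural / 2**20:.0f} MiB")        # ~46 MiB

Scaled across the dozens of materials in a modern scene, savings of this order add up quickly.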


The new Neural Radiance Cache dynamically trains a neural network during gameplay on the user's GPU, allowing light transport to be cached spatially and enabling near-infinite light bounces in a scene. This results in realistic indirect lighting and shadows with minimal performance impact. NRC partially traces just one or two bounces per ray before storing the result in the radiance cache, then infers the remaining rays and bounces for a more accurate representation of indirect lighting in the game scene.
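The control flow is easier to see as code. Below is a conceptual Python sketch of the idea, with a stub class standing in for the online-trained network; none of these classes or functions come from an actual NVIDIA SDK.

import numpy as np

# Conceptual sketch of Neural Radiance Cache control flow (not NVIDIA code).
# Trace a short real path, then query a continuously trained cache that
# stands in for the effectively infinite tail of remaining light bounces.

class RadianceCache:
    """Stand-in for the small network trained online during gameplay."""
    def query(self, position, direction):
        # A real cache would run a tiny MLP here; we return a constant.
        return np.array([0.1, 0.1, 0.1])
    def train(self, position, direction, target_radiance):
        pass  # online gradient step from fresh fully traced samples

def shade_with_nrc(ray_origin, ray_dir, cache, max_real_bounces=2):
    radiance = np.zeros(3)
    throughput = np.ones(3)
    pos, d = ray_origin, ray_dir
    for _ in range(max_real_bounces):
        # ...one real ray-traced bounce would go here, accumulating emitted
        # light into `radiance` and attenuating `throughput`...
        pos = pos + d  # placeholder "hit point"
    # Terminate the path into the cache: the network infers the rest.
    radiance += throughput * cache.query(pos, d)
    return radiance

print(shade_with_nrc(np.zeros(3), np.array([0.0, 0.0, 1.0]), RadianceCache()))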

DLSS 4 and Multi Frame Generation


DLSS 4 introduces a major leap in image quality and performance. It isn't just a version bump with the introduction of a single new feature, namely Multi Frame Generation, but brings updates to nearly all DLSS sub-features. DLSS has relied on AI from its very beginning to reconstruct detail in super resolution, and with DLSS 4, NVIDIA is introducing a new transformer-based AI model to succeed the convolutional neural networks previously used, with double the parameters, four times the compute performance, and significantly improved image quality. Ray Reconstruction, introduced with DLSS 3.5, gets a significant image quality update from the new transformer-based model.


To understand Multi Frame Generation, you need to understand how DLSS Frame Generation, introduced with GeForce Ada, works. An Optical Flow Accelerator component feeds the DLSS algorithm the data needed to generate an entire frame with a neural network, using information from a previously rendered frame, effectively doubling the frame rate. In Multi Frame Generation, AI takes over the function of optical flow to predict up to three frames following a conventionally rendered frame, effectively drawing four frames from the rendering effort of one.
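As a rough illustration of the optical-flow idea, the sketch below pushes each pixel of a rendered frame along its motion vector to extrapolate a predicted frame. It is a deliberately simplified stand-in, not the DLSS algorithm; the hole-filling and AI prediction that make the real technique work are omitted.

import numpy as np

# Simplified motion-based frame extrapolation (not the DLSS algorithm):
# push each pixel along its motion vector to predict where it lands in
# the next, non-rendered frame. Disoccluded pixels are left as holes.

def extrapolate_frame(frame, motion_x, motion_y):
    """frame: HxW image; motion_x/motion_y: per-pixel displacement
    in pixels toward the predicted frame."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(xs + motion_x, 0, w - 1).astype(int)
    ty = np.clip(ys + motion_y, 0, h - 1).astype(int)
    predicted = np.zeros_like(frame)
    predicted[ty, tx] = frame          # scatter pixels forward
    return predicted

frame = np.arange(16.0).reshape(4, 4)
mx = np.ones((4, 4), dtype=int)        # everything moves 1 px right
my = np.zeros((4, 4), dtype=int)
print(extrapolate_frame(frame, mx, my))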


Now, assuming this rendered frame is itself a product of Super Resolution, with the maximum performance setting generating four output pixels from every rendered pixel, you're looking at a scenario where the rendering effort of a quarter of a frame goes into drawing four frames, or 15 in every 16 pixels being generated entirely by DLSS. When generating so many frames, frame pacing becomes a problem: irregular frame intervals impact smoothness. DLSS 4 addresses this with a dedicated flip-metering hardware unit inside Blackwell's display engine, reducing frame display variability by 5-10x.
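The pixel arithmetic behind that 15-in-16 figure is easy to verify:

# Back-of-the-envelope check of the "15 in 16 pixels" figure.
# DLSS Performance mode renders 1 pixel for every 4 output pixels,
# and Multi Frame Generation adds 3 AI frames per rendered frame.
upscale_factor = 4          # Super Resolution: 1 rendered -> 4 output pixels
frames_per_render = 4       # 1 rendered frame + 3 generated frames
total_pixels = upscale_factor * frames_per_render   # 16 output pixels
rendered_pixels = 1
print(f"{total_pixels - rendered_pixels} of every {total_pixels} "
      "pixels are AI-generated")   # -> 15 of every 16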

NVIDIA Reflex 2


The original NVIDIA Reflex brought a significant improvement to the responsiveness of maxed-out graphics in competitive online gameplay by compacting the rendering queue, with the goal of reducing whole-system latency by up to 50%. Reflex is mandatory with DLSS 3 Frame Generation, given the latency cost imposed by the technology. Multi Frame Generation calls for an equally capable piece of technology, hence Reflex 2. NVIDIA claims a 75% reduction in latency with Frame Warp, which updates the camera (viewport) position based on user input in real time, and then uses temporal information to reconstruct the frame to display.
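A simplified sketch of the Frame Warp idea: shift the finished frame to match the newest camera input just before display, and fill the strip of newly exposed pixels from held history. Real Frame Warp performs a full perspective reprojection with AI-based inpainting; this toy version only handles a horizontal pan and is entirely illustrative.

import numpy as np

# Toy late-reprojection sketch in the spirit of Frame Warp (not NVIDIA code).
# Before display, shift the finished frame by the latest input delta, then
# fill the disoccluded strip from held history (stand-in for AI inpainting).

def frame_warp(frame, history, dx_pixels):
    warped = np.roll(frame, dx_pixels, axis=1)
    if dx_pixels > 0:
        warped[:, :dx_pixels] = history[:, :dx_pixels]
    elif dx_pixels < 0:
        warped[:, dx_pixels:] = history[:, dx_pixels:]
    return warped

frame = np.ones((4, 8))      # toy 4x8 "rendered frame"
history = np.zeros((4, 8))   # toy previous-frame data
print(frame_warp(frame, history, 2))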