Thursday, April 11th 2019
NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs
NVIDIA today announced that it is extending DXR (DirectX Raytracing) support to several GeForce GTX graphics card models beyond its GeForce RTX series. These include the GTX 1660 Ti, GTX 1660, GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 6 GB. The GTX 1060 3 GB and lower "Pascal" models don't support DXR, nor do older generations of NVIDIA GPUs. NVIDIA has implemented real-time raytracing on GPUs that lack specialized hardware such as RT cores and tensor cores by running the raytracing workload on compute shaders, in other words on the CUDA cores. DXR support will be added through a new GeForce graphics driver released later today.
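From an application's point of view, nothing changes: a game simply asks Direct3D 12 whether the installed driver exposes raytracing. A minimal sketch of that feature-support query is shown below (function name and error handling are illustrative, not NVIDIA's code):

```cpp
// Sketch: querying DXR (raytracing) support on the installed driver/GPU.
// Assumes a valid ID3D12Device* has already been created elsewhere.
#include <windows.h>
#include <d3d12.h>

bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Tier 1.0 or higher means the runtime/driver exposes DXR, whether it is
    // backed by RT cores (Turing RTX) or by a compute-shader path
    // (Pascal / GTX 16-series after this driver update).
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```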
The GPU's CUDA cores now have to handle BVH traversal, intersection testing, reflection, and refraction. The GTX 16-series chips have an edge over "Pascal" despite lacking RT cores, as the "Turing" CUDA cores support concurrent INT and FP execution, allowing more work to be done per clock. In a detailed presentation, NVIDIA listed the kinds of real-time ray-tracing effects available through the DXR API, namely reflections, shadows, advanced reflections and shadows, ambient occlusion, global illumination (unbaked), and combinations of these. The company put out detailed performance numbers for a selection of GTX 10-series and GTX 16-series GPUs, and compared them to RTX 20-series SKUs that have specialized hardware for DXR. Update: Article updated with additional test data from NVIDIA.
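To give a rough idea of the per-ray arithmetic that lands on the CUDA cores when there is no fixed-function raytracing hardware, here is a minimal ray-triangle intersection test (the standard Möller-Trumbore algorithm). This is only an illustrative sketch of the kind of work involved, not NVIDIA's actual shader code; in the compute path it runs millions of times per frame alongside BVH traversal:

```cpp
// Sketch: the kind of per-ray arithmetic a shader-based DXR path must perform.
// Möller-Trumbore ray/triangle intersection; RT cores run the equivalent test
// (plus BVH box tests) in fixed-function hardware instead.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                              a.z*b.x - a.x*b.z,
                                              a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true (and the hit distance t) if the ray (orig, dir) hits triangle v0-v1-v2.
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;   // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > kEps;                            // hit in front of the ray origin
}
```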
According to NVIDIA's numbers, GPUs without RTX hardware are significantly slower than the RTX 20-series. No surprises here. But at 1440p, the resolution NVIDIA chose for these tests, you need at least a GTX 1080 or GTX 1080 Ti for playable frame-rates (above 30 fps). This is especially true of Battlefield V, in which only the GTX 1080 Ti manages 30 fps. The gap between the GTX 1080 Ti and GTX 1080 is vast, with the latter serving up only 25 fps. The GTX 1070 and GTX 1060 6 GB spit out really fast PowerPoint presentations, at under 20 fps. It's important to note that NVIDIA tested at the highest DXR settings for Battlefield V, and lowering the DXR Reflections quality could improve frame-rates, although we remain skeptical about the slower SKUs such as the GTX 1070 and GTX 1060 6 GB.

The story repeats with Shadow of the Tomb Raider, which uses DXR shadows, although frame-rates are marginally higher than in Battlefield V; you still need a GTX 1080 Ti for 34 fps. Atomic Heart uses Advanced Reflections (reflections of reflections, and non-planar reflective surfaces), and unfortunately no GeForce GTX card manages more than 15.4 fps. The same goes for 3DMark Port Royal, which uses both Advanced Reflections and DXR Shadows: single-digit frame-rates for all GTX cards. Performance is better in the Justice tech-demo, although far from playable, as only the GTX 1080 and GTX 1080 Ti manage over 20 fps. Advanced Reflections plus AO in the Star Wars RTX tech-demo is another torture test for these GPUs - single-digit frame-rates all around. Global Illumination in Metro Exodus is another slog for these chips.

Overall, NVIDIA has managed to script the perfect advertisement for the RTX 20-series. Real-time ray-tracing on compute shaders is horrendously slow, and it pays to have specialized hardware such as RT cores for it, while tensor cores accelerate DLSS to improve performance even further. It remains to be seen if AMD takes a swing at DXR on GCN stream processors any time soon. The company has had a technical effort underway for years with Radeon Rays, and is reportedly working on DXR.
Update:
NVIDIA posted its test data for 4K and 1080p in addition to 1440p, as well as for medium and low DXR settings. The entire data set is posted below.
111 Comments on NVIDIA Extends DirectX Raytracing (DXR) Support to Many GeForce GTX GPUs
All this data tells me is that if games are made with compute versions of RT optimised for it, then performance can potentially be quite close. I don't understand what NVIDIA is doing here; it seems out of desperation they enabled support for the large Pascal userbase to try and entice developers. RT cards may be dead consumer tech within 1-2 generations.
To date, NVIDIA hasn't given a proper technical description of what the RT cores are.
I guess we'll find out when AMD debuts DXR support.
Fallout 4 (FleX added in 2017), the Witcher 3 family (2016), and COD Ghosts (2013) really benefited from that - at least I personally liked the effects. I couldn't stop using grenades :)
Was it doomed by being closed source? Maybe. But that doesn't mean it didn't work. Unreal Engine 4 still uses it.
I would like an identical approach for RTX: Let me add another card and dedicate it to RTX. That will make me maybe take the bait to upgrade sooner to a single-card solution.
Compare results from BF5, which uses little RT, Metro/SoTR, which use a little more, and benchmarks like Port Royal or tech-demos that use a lot of RT. The more RT is used, the bigger the performance gap gets.
The other part is that Nvidia chose to put front and center results with DXR Low/Medium and modest resolutions. These paint Pascal in a better light than DXR High/Ultra results.
For a visual representation of what I am trying to say, look at the Metro Exodus frame graphs from Nvidia's original announcement; the middle part represents the portion that the RT cores deal with:
www.techpowerup.com/253759/nvidia-to-enable-dxr-ray-tracing-on-gtx-10-and-16-series-gpus-in-april-drivers-update
www.techpowerup.com/img/Qr86CtLnbFWCRcfc.jpg
They have not described the units very precisely. However, it is not quite correct to say we do not know what the RT cores do. They run a couple of operations for raytracing implemented in hardware. Anandtech's article has a pretty good overview:
www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5
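As a rough illustration of one of those hardware operations, the sketch below is the standard ray/axis-aligned-box "slab" test that a BVH traversal evaluates at every node. This is illustrative only, not how NVIDIA's RT cores are actually implemented:

```cpp
// Sketch: ray vs. axis-aligned bounding box "slab" test, the operation a BVH
// traversal performs at every node. RT cores accelerate this box test and the
// triangle test in hardware; on GTX cards it runs as shader code on CUDA cores.
#include <algorithm>

struct Box { float min[3], max[3]; };

// invDir holds the per-axis reciprocal of the ray direction (1 / dir[axis]).
bool RayHitsBox(const float orig[3], const float invDir[3], const Box& b,
                float tMin, float tMax)
{
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (b.min[axis] - orig[axis]) * invDir[axis];
        float t1 = (b.max[axis] - orig[axis]) * invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;   // entry/exit intervals no longer overlap: miss
    }
    return true;                          // ray passes through the box
}
```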
Remember, RTX has an *extremely* limited capability to ray trace: it complements existing rendering techniques in games rather than replacing them.
As RTX cards are the only GPUs with tensor cores right now, it would run like shit, driving upgrades to RTX.
That is his point in a nutshell. They would need to add tensor cores first.
Nvidia described Tensor cores as "specialized execution units designed specifically for performing the tensor / matrix operations that are the core compute function used in Deep Learning".
They have nothing to do with Ray Tracing.
Full "Damage Control" mode.
Thus, they have everything to do with it.
RTRT doesn't require a denoiser to work.
The denoiser is an after-effect applied to the final image.
They trace only a couple of rays per pixel and mix some textures in, hence they have to apply a de-noise pass to make it look good.
I guess you didn't read the excellent article linked above? Here you go a quote:
Nvidia offers an AI-based de-noiser powered by tensor cores.
It will de-noise any image given to it, no matter whether it is an in-game image or a photo.
If it is just the de-noiser which matters, then it is the de-noiser, NOT the Tensor cores.
If AMD could come up with an efficient de-noise method without any dedicated hardware, Tensor cores also become utterly pointless.
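For context, a denoise pass does not inherently require tensor cores. The sketch below is a deliberately naive spatial filter that averages each pixel of a noisy 1-sample-per-pixel raytraced buffer with its neighbours; shipping denoisers (AI-based or hand-tuned temporal/bilateral filters) are far more sophisticated, and this only illustrates that denoising is, in principle, ordinary shader or CPU math:

```cpp
// Sketch: a deliberately naive spatial denoiser. It averages each pixel of a
// noisy raytraced buffer with its 3x3 neighbourhood (clamped at the borders).
// Real denoisers are far smarter, but nothing here requires tensor cores.
#include <vector>
#include <algorithm>

std::vector<float> Denoise3x3(const std::vector<float>& noisy, int width, int height)
{
    std::vector<float> out(noisy.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = std::clamp(x + dx, 0, width - 1);
                    int ny = std::clamp(y + dy, 0, height - 1);
                    sum += noisy[ny * width + nx];
                    ++count;
                }
            }
            out[y * width + x] = sum / count;   // simple box average
        }
    }
    return out;
}
```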
Of course, the models rendered there are simple, but still. This boils down to "it does in RT cores what the DXR API is about", namely intersection testing.
Uh, who would have thought.
Without any comparison data from the red team, we have no idea if the Pascal cards received optimization for RTRT, or no optimization at all.
After all, NVIDIA naturally wants to sell more Turing cards; optimizing old Pascal cards for the selling feature of Turing is the exact opposite of that.
It's way too early for indie devs to be adding raytracing as a cost-saving measure.