Wednesday, May 8th 2019
Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed
Judging by your reaction, you certainly remember Crytek's Neon Noir raytracing scene that we shared with you back in March. At the time, the fact that raytracing was running at such mesmerizing levels on AMD hardware was arguably the biggest part of the news piece: AMD's Vega 56 graphics card, with no dedicated raytracing hardware, was pushing the raytraced scene in a confident manner. Now, Crytek have shared some details on how exactly Neon Noir was rendered.
The AMD Radeon Vega 56 pushed the demo at 1080p/30 FPS, with full-resolution rendering of raytraced effects. Crytek further shared that raytracing can be rendered at half the resolution of the rest of the scene, and that doing so on AMD's Vega 56 would allow a 1440p resolution at 40+ FPS. The raytraced path wasn't running on any modern, lower-level API such as DX12 or Vulkan, but rather on a custom branch of Crytek's CryEngine, version 5.5. Crytek said that RTX support will be implemented, which should improve performance on NVIDIA graphics cards, allowing for rendering at up to 4K with full-resolution effects. RTX support won't add more features, but rather improve the performance and quality of the already-implemented ones. There is some interesting information in Crytek's blog post on their current raytracing implementation, the choice to apply the technology based on an object's "glossiness" level rather than as a full-blown solution everywhere (for improved performance), and the mixing of voxel and raytracing workloads for the best possible optimization. Take a look at the source link.
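To make the "glossiness"-threshold idea concrete, here is a minimal, purely illustrative C++ sketch. The types, function names and the 0.25 roughness cutoff are our own assumptions for illustration, not CryEngine code.

```cpp
// Illustration only: a hypothetical roughness-threshold selector between a
// ray-traced reflection and a cheaper voxel cone trace. All names, types and
// the 0.25 cutoff are invented for this sketch.
#include <array>

using Color = std::array<float, 3>;

struct Ray { float origin[3]; float dir[3]; };

struct SurfaceSample {
    float roughness;   // 0 = perfect mirror, 1 = fully diffuse
    // position, normal and material data would live here in a real renderer
};

// Stubs standing in for the two reflection back-ends.
Color TraceReflectionRay(const SurfaceSample&, const Ray&)       { return {0.f, 0.f, 0.f}; }
Color VoxelConeTraceReflection(const SurfaceSample&, const Ray&) { return {0.f, 0.f, 0.f}; }

Color ShadeReflection(const SurfaceSample& s, const Ray& viewRay)
{
    const float kRayTraceRoughnessCutoff = 0.25f;  // assumed tunable threshold

    if (s.roughness < kRayTraceRoughnessCutoff) {
        // Glossy enough that a mirror-like ray-traced reflection pays off
        // (optionally rendered at half resolution and upsampled, per the article).
        return TraceReflectionRay(s, viewRay);
    }
    // Rough surfaces: a blurry voxel cone trace looks adequate and is far cheaper.
    return VoxelConeTraceReflection(s, viewRay);
}
```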
Source:
Crytek
33 Comments on Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed
Ray-tracing is not a complex thing at its core and is a well-understood concept. The only thing RTX cards have to improve matters is a couple of hardware operations that are faster/more efficient than running the same operations on shaders. Battlefield V, Metro Exodus and Shadow of the Tomb Raider are all using DXR, which is a feature of DX12. This is a standard. Things with Vulkan are a bit more dicey, as currently the operations are exposed through Nvidia-specific extensions - that is what Quake 2 VKPT uses and Quake 2 RTX will use if/when Nvidia decides to release it.
If AMD so wishes, they can write an implementation of DXR and provide it to us in drivers. I am very sure they have one running in-house, but there is no logical reason for them to release it. Vega should be able to compete with Pascal very favorably in DXR, but that is pointless in the grand scheme of things and would only highlight the lack of RT hardware compared to RTX cards, thus weakening AMD's own position. Could you please elaborate on what exactly you mean by different?
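For reference, whether a given driver exposes DXR (and at what tier) can be queried at runtime through the standard D3D12 feature check, which is the mechanism a driver-side implementation from any vendor would plug into. A minimal sketch (error handling and adapter enumeration omitted); the enum and struct names are the actual D3D12 ones.

```cpp
// Minimal check for DXR support on the default adapter via the standard
// D3D12 feature query. Any vendor that ships a DXR implementation in its
// driver (hardware-accelerated or not) reports a raytracing tier here.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))) {
        if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
            std::puts("DXR (tier 1.0 or higher) is exposed by this driver.");
        else
            std::puts("No DXR support reported.");
    }
    return 0;
}
```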
The RT technique used in Neon Noir is practically identical to what Battlefield V implements, right down to the optimization choices both Crytek and DICE went for.
Reality check. What is NVIDIA RTX Technology? What is DirectX DXR? Here's what they can and cannot do
Nvidia's RTX ray-tracing has a de-noise pass, which is pixel re-construction
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".
Also worth noting is that the linked article (could you please fix the actual link: cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong - the bits in RTX technology that help accelerate real-time raytracing also help to accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
These feature the recently announced real-time ray tracing tool-set of Microsoft DirectX 12 as well as the (claimed) performance benefits proposed by NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give the developers new tools for achieving never before seen realism in games and real-time visual applications.
At that point it remained somewhat unclear what RTX technology was or what it brought to the table. Turns out, RTX is mostly just a marketing term. Volta had no RT Cores. Tensor cores were there, but denoising on Tensor cores is only one option (one that we are not sure DXR games even currently use). The main purpose was and is to bring RTRT to the general public. The primary drivers are DXR and implementations/solutions both within GameWorks and outside of it.
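To illustrate what a denoise pass conceptually does (reconstructing each pixel from a sparse, noisy set of ray-traced samples), here is a toy 3x3 spatial box filter in C++. Real denoisers, whether temporal/bilateral filters or Nvidia's Tensor-core AI denoiser, are far more sophisticated; treat this purely as a sketch of the idea, with all names invented here.

```cpp
// Toy spatial "denoise" pass over a single-channel noisy buffer: each output
// pixel is reconstructed as the average of its 3x3 neighbourhood (edges clamped).
#include <vector>
#include <algorithm>
#include <cstddef>

std::vector<float> DenoiseBoxFilter(const std::vector<float>& noisy,
                                    int width, int height)
{
    std::vector<float> out(noisy.size(), 0.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    const int nx = std::clamp(x + dx, 0, width - 1);
                    const int ny = std::clamp(y + dy, 0, height - 1);
                    sum += noisy[static_cast<std::size_t>(ny) * width + nx];
                    ++count;
                }
            }
            out[static_cast<std::size_t>(y) * width + x] = sum / count;
        }
    }
    return out;
}
```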
RTX as a term is Nvidia's marketing failure. It started out meaning a set of RTRT-related technologies, then became the prefix for a graphics card series, with DLSS bundled under the same RTX moniker as well. The meaning of the term got more and more muddled, and along with the reaction to RTX cards that makes RTX quite meaningless as a term.
DirectX 12's DirectML extension exposes the GPU's rapid packed math features.