Wednesday, May 8th 2019

Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed

Judging by your reaction, you certainly remember Crytek's Neon Noir raytracing scene that we shared with you back in March. At the time, the fact that raytracing was running at such a mesmerizing level on AMD hardware was arguably the biggest part of the news piece: AMD's Vega 56 graphics card, with no dedicated raytracing hardware, was pushing the raytraced scene in a confident manner. Now, Crytek has shared some details on how exactly Neon Noir was rendered.

The AMD Radeon Vega 56 pushed the demo at 1080p/30 FPS, with full-resolution rendering of the raytraced effects. Crytek further shared that raytracing can be rendered at half the resolution of the rest of the scene, and that if they did so on AMD's Vega 56, the card could push a 1440p resolution at 40+ FPS. The raytraced path wasn't running on any modern, lower-level API such as DX12 or Vulkan, but rather on a custom branch of Crytek's CryEngine, version 5.5.
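Rendering raytraced effects at half resolution essentially means tracing one ray per 2x2 pixel block and upsampling the result before compositing, which cuts the ray count to roughly a quarter and is presumably what frees up the headroom for 1440p at 40+ FPS. A minimal CPU-side sketch of the idea (not Crytek's actual code; traceReflection and the buffer layout are placeholders) could look like this:

```cpp
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

// Stand-in for the real thing: a renderer would cast a ray from the
// G-buffer surface against the scene's acceleration structure here.
Color traceReflection(int x, int y)
{
    return Color{ x * 0.001f, y * 0.001f, 0.5f };
}

// Trace reflections at half resolution (one ray per 2x2 block), then
// upsample into the full-resolution target. Assumes even dimensions for brevity.
void renderReflectionsHalfRes(int width, int height, std::vector<Color>& fullRes)
{
    const int hw = width / 2, hh = height / 2;
    std::vector<Color> halfRes(hw * hh);

    for (int y = 0; y < hh; ++y)
        for (int x = 0; x < hw; ++x)
            halfRes[y * hw + x] = traceReflection(x * 2, y * 2); // 1 ray per 4 pixels

    // Naive upsample: each full-res pixel reuses its nearest half-res sample.
    // Real implementations weight by depth/normal similarity so reflections
    // don't smear across geometry edges.
    fullRes.assign(width * height, Color{});
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            fullRes[y * width + x] = halfRes[(y / 2) * hw + (x / 2)];
}
```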
Crytek said that RTX support will be implemented, which should improve performance on NVIDIA graphics cards, allowing raytraced effects to be rendered at up to full 4K resolution. RTX won't enable additional features, but rather improved performance and quality for the already-implemented ones. Crytek's blog post has some interesting information on their current raytracing implementation, on the choice to apply raytracing based on an object's "glossiness" level rather than as a full-blown solution (for improved performance), and on mixing voxel and raytracing workloads for the best possible optimization. Take a look at the source link.
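The glossiness-based approach boils down to only spending rays on surfaces shiny enough for a mirror-like reflection to actually be visible, and falling back to a cheaper voxel-based lookup everywhere else. A rough, hypothetical sketch of that per-surface decision (the material fields, threshold, and function names are made up, not CryEngine API):

```cpp
// Hypothetical material description; a real engine reads this from the G-buffer.
struct Material {
    float glossiness;   // 0 = fully rough, 1 = perfect mirror
};

struct Color { float r, g, b; };

// Stand-ins for the two reflection paths discussed in the article.
Color traceReflectionRay(const Material&)    { return { 1.0f, 1.0f, 1.0f }; } // expensive, sharp
Color sampleVoxelReflection(const Material&) { return { 0.5f, 0.5f, 0.5f }; } // cheap cone trace / probe

// Only sufficiently glossy surfaces get real rays; the rest use the voxel fallback.
Color shadeReflection(const Material& m, float glossThreshold = 0.7f)
{
    return (m.glossiness >= glossThreshold) ? traceReflectionRay(m)
                                            : sampleVoxelReflection(m);
}
```

The payoff is that rough surfaces, where a blurry voxel or probe lookup is visually indistinguishable from a traced result, never pay for ray traversal at all.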

Source: Crytek

33 Comments on Crytek's Hardware-Agnostic Raytracing Scene Neon Noir Performance Details Revealed

#26
londiste
@RH92, "RTX-powered games" is a marketing term that should die and burn in a fire, both because it is technically incorrect and because it immediately sparks flame wars in forums/comments.

Ray-tracing is not a complex thing at its core and is a well-understood concept. The only thing RTX cards have to improve matters is a couple of hardware operations that are faster/more efficient than running the same operations on shaders. Battlefield V, Metro Exodus and Shadow of the Tomb Raider all use DXR, which is a feature of DX12. This is a standard. Things with Vulkan are a bit more dicey, as the operations are currently exposed through Nvidia-specific extensions - that is what Quake 2 VKPT uses and what Quake 2 RTX will use if/when Nvidia decides to release it.
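Because DXR is just another D3D12 feature, whether a given GPU/driver combination exposes it is a single capability query away. Roughly, an engine only has to do something like this (standard D3D12 API; shown purely as an illustration):

```cpp
#include <d3d12.h>

// Returns true if the driver reports any DXR raytracing tier for this device,
// regardless of whether that tier is backed by dedicated RT hardware or by a
// shader/driver-based implementation.
bool supportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

If a vendor ships a driver-side DXR implementation, this same query simply starts returning a supported tier on their cards; the game code doesn't have to change.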

If AMD so wishes, they can write an implementation of DXR and provide it to us in drivers. I am very sure they have one running in-house, but there is no logical reason for them to release it. Vega should be able to compete with Pascal very favorably in DXR, but that is pointless in the grand scheme of things and would only highlight the lack of RT hardware compared to RTX cards, thus weakening AMD's own position.
medi01: Given how different the RT technique used by the demo is from what Turing RT is doing, uh, really?
Could you please elaborate on what exactly you mean by different?
The RT technique used in Neon Noir is practically identical to what Battlefield V implements, right down to the optimization choices both Crytek and DICE went for.
#27
Xzibit
RH92: You are mixing up stuff ...

The point here is that in order to achieve 1080p30 on a Vega 56, the Crytek team is already using lower graphical quality in their demo than what can be observed in most RTX-powered games. For instance: "All the objects in the Neon Noir Demo use low-poly versions of themselves for reflections," Frölich says. "As a few people have commented, it is noticeable on the bullets." That's not the case in RTX titles, or at least not when RTX Ultra is enabled, as far as I'm aware. With this in mind, lowering the graphical quality of that demo even further in order to achieve higher framerates/resolutions doesn't make much sense, because you are already offering less graphical quality than the reference (which is RTX-powered games) at 1080p30.

Hence, comparing the performance of the Vega 56 in that demo with the performance of Nvidia cards in RTX-powered titles at 1080p makes little sense.

Regardless, the Crytek team is basically confirming that in order to enjoy proper raytracing you need hardware support!
That's LODs; you kept saying resolution. There's nothing new about that kind of "optimization" either: BFV had to look into reducing its LODs further, if you recall (there's a rough sketch of that idea after the quotes below).
Expect to see more granularity added to the DXR settings, perhaps with a focus on culling distance and LODs

However, there are discussions internally to change what each individual settings do; we could do more, like play with LODs and cull distances as well as perhaps some settings for the new hybrid ray tracer that is coming in the future.

We are also looking into reducing the LOD levels for alpha tested geometry like trees and vegetation and we are also looking at reducing memory utilisation by the alpha shaders like vertex attribute fetching (using our compute input assembler).
Not sure how one can even say "proper ray tracing" when talking about a hybrid method.
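"Reducing LODs for ray tracing" works because the meshes fed into the raytracing acceleration structure don't have to be the same ones that get rasterized, so an engine can hand the BVH a coarser LOD per object. A purely illustrative sketch of such a policy (the types and the "one LOD lower" rule are made up, not DICE's or Crytek's actual code):

```cpp
#include <vector>

// Hypothetical mesh with a precomputed LOD chain; LOD 0 is the most detailed.
struct Mesh {
    std::vector<int> lods;   // stand-in for per-LOD geometry handles
};

// Rasterization picks its LOD by screen coverage as usual; the raytracing BVH
// can be built from a deliberately coarser level to cut BVH build cost and
// ray/triangle intersection work, at the price of chunkier reflections.
int pickRaytracingLod(const Mesh& mesh, int rasterLod, int extraReduction = 1)
{
    const int last = static_cast<int>(mesh.lods.size()) - 1;
    const int lod = rasterLod + extraReduction;   // e.g. reflections use one LOD lower
    return lod > last ? last : lod;
}
```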
#29
londiste
rvalencia: Nvidia's RTX ray-tracing has a de-noise pass, which is pixel re-construction
Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong - the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
#30
Xzibit
londiste: Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong - the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
The article is dated around the same time as GTC 2018. Nvidia announced the RTX tech at that time and even held in-depth dev talks.
#31
ValenOne
londiste: Let me fix that for you.
To get a more correct sentence, you could probably replace the strikethrough part with "Real-time".

Also worth noting is that the linked article (could you please fix the actual link: cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr ) is from March 2018. Turing and RT Cores were not a thing at that point. Plus, the guy has some things wrong - the bits in RTX technology that help accelerate real-time raytracing also help accelerate offline raytracing. He mentioned OptiX, which has exactly that as its purpose.
If you actually read cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr, it mentions "real time", i.e., I quote:

These feature the recently announced real-time ray tracing tool-set of Microsoft DirectX 12 as well as the (claimed) performance benefits proposed by NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give the developers new tools for achieving never before seen realism in games and real-time visual applications.
#32
londiste
rvalencia: If you actually read cgicoffee.com/blog/2018/03/what-is-nvidia-rtx-directx-dxr, it mentions "real time", i.e., I quote:

These feature the recently announced real-time ray tracing tool-set of Microsoft DirectX 12 as well as the (claimed) performance benefits proposed by NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give the developers new tools for achieving never before seen realism in games and real-time visual applications.
Yes, Nvidia did harp on RTRT already in the initial announcement, but it was clearly aimed at the professional market and it was clear the real-time part of it would be limited. Half a year later, Turing brought a rather large improvement on the performance side of things.

At that point it remained somewhat unclear what RTX technology was or what it brought to the table. Turns out, RTX is mostly just a marketing term. Volta had no RT Cores. Tensor cores were there, but denoising on Tensor cores is only one option (one that we are not sure DXR games even currently use). The main purpose was and is to bring RTRT to the general public. The primary drivers are DXR and implementations/solutions both within GameWorks and outside of it.

RTX as a term is Nvidia's marketing failure. It started out meaning a set of RTRT-related technologies, then became the prefix of a graphics card series, along with DLSS being bundled under the same RTX moniker. The meaning of the term got more and more muddled, and together with the reaction to RTX cards, that makes RTX as a term quite meaningless.
#33
ValenOne
londiste: Yes, Nvidia did harp on RTRT already in the initial announcement, but it was clearly aimed at the professional market and it was clear the real-time part of it would be limited. Half a year later, Turing brought a rather large improvement on the performance side of things.

At that point it remained somewhat unclear what RTX technology was or what it brought to the table. Turns out, RTX is mostly just a marketing term. Volta had no RT Cores. Tensor cores were there, but denoising on Tensor cores is only one option (one that we are not sure DXR games even currently use). The main purpose was and is to bring RTRT to the general public. The primary drivers are DXR and implementations/solutions both within GameWorks and outside of it.

RTX as a term is Nvidia's marketing failure. It started out meaning a set of RTRT-related technologies, then became the prefix of a graphics card series, along with DLSS being bundled under the same RTX moniker. The meaning of the term got more and more muddled, and together with the reaction to RTX cards, that makes RTX as a term quite meaningless.
Besides the tensor and RT units, the RTX 2080 Ti still has non-RT upgrades, e.g. improved raster performance, rapid packed math (on the CUDA cores), the proper hardware async scheduler from Volta, dedicated integer cores alongside the CUDA cores, double the L2 cache, variable rate shading, etc.

DirectX 12's DirectML extension exposes the GPU's rapid packed math features.
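"Exposing rapid packed math" at the API level mostly means letting applications query and use native FP16: D3D12 reports native 16-bit shader ops, and DirectML lets you ask whether FP16 tensors are supported before building a model with them. A rough sketch of both checks (standard D3D12/DirectML calls, shown only as an illustration):

```cpp
#include <d3d12.h>
#include <DirectML.h>

// Does the GPU expose native 16-bit shader ops (rapid packed math) to D3D12?
bool hasNative16BitOps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS4 options4 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS4,
                                           &options4, sizeof(options4))))
        return false;
    return options4.Native16BitShaderOpsSupported == TRUE;
}

// Does the DirectML device accept FP16 tensors for ML workloads?
bool dmlSupportsFp16(IDMLDevice* dml)
{
    DML_FEATURE_QUERY_TENSOR_DATA_TYPE_SUPPORT query = { DML_TENSOR_DATA_TYPE_FLOAT16 };
    DML_FEATURE_DATA_TENSOR_DATA_TYPE_SUPPORT support = {};
    if (FAILED(dml->CheckFeatureSupport(DML_FEATURE_TENSOR_DATA_TYPE_SUPPORT,
                                        sizeof(query), &query,
                                        sizeof(support), &support)))
        return false;
    return support.IsSupported == TRUE;
}
```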