Wednesday, August 1st 2018
NVIDIA Unveils Adaptive Temporal Anti-Aliasing with Ray-Tracing
NVIDIA published the first documentation of Adaptive Temporal Anti-Aliasing (ATAA), an evolution of TAA that incorporates real-time ray-tracing, or at least the low ray-count method NVIDIA implemented with RTX. Its "adaptive" nature also lets it overcome many of the image-quality and performance challenges users encounter with TAA in high-framerate, rapidly changing 3D scenes, such as games. Non-gaming scenes, such as the architectural visualizations used by real-estate developers, don't face these challenges.
To developers, ATAA promises image quality comparable to 8x supersampling at a frame-time cost of under 33 ms. These numbers were derived on a TITAN V ("Volta") using Unreal Engine 4. It could take a while for ATAA to make it to games, as developers will need a few months to learn the technique before implementing it in their ongoing or future projects. NVIDIA will introduce ATAA support through driver updates.
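The "adaptive" part is the interesting bit: rather than ray-tracing every pixel, the paper describes segmenting the frame so that only pixels where TAA's temporal history breaks down get the expensive treatment. Here is a minimal Python sketch of that per-pixel routing; the heuristics and names are illustrative, not NVIDIA's actual implementation:

```python
from enum import Enum

class Strategy(Enum):
    TAA = 1   # temporal history is valid: reuse it (cheapest path)
    FXAA = 2  # history is borderline: cheap post-process fallback
    RAY = 3   # history failed (disocclusion, new geometry): ray-traced supersampling

def classify(history_valid: bool, motion_too_fast: bool) -> Strategy:
    """Pick an anti-aliasing strategy per pixel, ATAA-style.

    The two boolean inputs stand in for the real segmentation heuristics,
    which are considerably more involved in the paper."""
    if history_valid and not motion_too_fast:
        return Strategy.TAA
    if history_valid:
        return Strategy.FXAA
    return Strategy.RAY

# A freshly disoccluded pixel gets routed to the expensive ray-traced path,
# while the rest of the frame stays on cheap TAA.
print(classify(history_valid=False, motion_too_fast=True))  # Strategy.RAY
```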
Sources:
NVIDIA (PDF), Videocardz
71 Comments on NVIDIA Unveils Adaptive Temporal Anti-Aliasing with Ray-Tracing
Sorry, but looking closer at this pic, the technique is crappy to say the least. Just check out the blur. Again. It's actually twice as blurry as FXAA. WTF!?
nVidia loves the blur for some retarded reason....
There are some GPU times at Full HD for the Titan V too.
I like to think a Kawase filter can be combined with some sort of predication bit array (like how VLIW handled multiple streams of code in flight) that assigns a kernel mask for active edge awareness (like a QR code). It 'could' approximate a fixed sampling target, say 8x SSAA, if the bit mask is given a LUT of which pixel gradients to pass. There could even be an AI algorithm to optimize the shader.
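For what this comment gestures at, here is a minimal NumPy sketch of a Kawase-style blur gated by a per-pixel bit mask: the mask plays the role of the "predication bit array", restricting the blur to flagged edge pixels. The integer offsets and the hand-made mask are assumptions for illustration; a real implementation would run in a shader with bilinear half-texel taps.

```python
import numpy as np

def kawase_pass(img: np.ndarray, i: int) -> np.ndarray:
    """One Kawase blur iteration: average four diagonal taps. The usual
    (i + 0.5)-texel offset is rounded to an integer for this CPU sketch."""
    o = i + 1
    p = np.pad(img, o, mode="edge")
    h, w = img.shape
    return 0.25 * (p[0:h, 0:w] + p[0:h, 2*o:2*o+w] +
                   p[2*o:2*o+h, 0:w] + p[2*o:2*o+h, 2*o:2*o+w])

def masked_kawase(img, edge_mask, iterations=3):
    """Blur only where edge_mask is set; pass other pixels through unchanged.
    The boolean mask is the 'predication bit array' from the comment above."""
    out = img.astype(np.float32)
    for i in range(iterations):
        out = np.where(edge_mask, kawase_pass(out, i), out)
    return out

# Toy usage: soften a hard vertical edge only inside a flagged band.
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0
edge_mask = np.zeros(img.shape, dtype=bool)
edge_mask[:, 3:6] = True  # pretend an edge detector flagged this band
print(masked_kawase(img, edge_mask).round(2))
```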
If you have a moving average, you can approximate a Gaussian, which you can use to build a fast bilateral filter.
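That idea can be made concrete: repeated moving averages (box blurs) converge to a Gaussian by the central limit theorem, and that fast Gaussian can drive a piecewise-linear bilateral filter in the style of Durand and Dorsey. A 1D NumPy sketch, with parameter choices that are purely illustrative:

```python
import numpy as np

def box_blur_1d(x: np.ndarray, r: int) -> np.ndarray:
    """Moving average of radius r via a running sum: O(n) regardless of r."""
    p = np.pad(x, r, mode="edge")
    c = np.concatenate(([0.0], np.cumsum(p)))
    return (c[2*r + 1:] - c[:-(2*r + 1)]) / (2*r + 1)

def fast_gaussian_1d(x: np.ndarray, r: int, passes: int = 3) -> np.ndarray:
    """Repeated box blurs converge to a Gaussian (central limit theorem)."""
    for _ in range(passes):
        x = box_blur_1d(x, r)
    return x

def fast_bilateral_1d(x: np.ndarray, r: int, sigma_r: float, levels: int = 8):
    """Piecewise-linear bilateral filter: spatially blur a few range-weighted
    copies of the signal with the fast Gaussian, then interpolate per sample."""
    centers = np.linspace(x.min(), x.max(), levels)
    spacing = centers[1] - centers[0]
    out = np.zeros_like(x)
    total = np.zeros_like(x)
    for c in centers:
        w = np.exp(-0.5 * ((x - c) / sigma_r) ** 2)  # range (intensity) weight
        num = fast_gaussian_1d(w * x, r)             # spatial blur of weighted signal
        den = fast_gaussian_1d(w, r)                 # spatial blur of weights
        hat = np.maximum(1 - np.abs(x - c) / spacing, 0)  # interpolation weight
        out += hat * num / np.maximum(den, 1e-6)
        total += hat
    return out / np.maximum(total, 1e-6)

# Toy usage: a noisy step keeps its edge but loses the noise.
sig = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * np.random.randn(64)
print(fast_bilateral_1d(sig, r=2, sigma_r=0.2).round(2))
```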