Please play the game yourself and offer your own opinion before attacking others who gave theirs based on real gameplay.
And Digital Foundry didn't think too highly of FSR either; are they Nvidia fanboys to you as well?
FSR is very bad for fine detail. Look at the source code and you'll see that the people pointing this out are not being toxic; they appear to be correct. The code uses neighbor clamping, which removes ringing but causes a loss of detail compared to normal Lanczos. In Cyberpunk 2077, for example, where two power lines cross, FSR removes enough detail that the cables no longer appear to cross: one cable looks normal while the other breaks on both sides of the intersection. Other issues are inherent to Lanczos itself; these are covered in NVIDIA's video on DLSS/NIS and why NIS can't match DLSS image quality.
// FSR - [EASU] EDGE ADAPTIVE SPATIAL UPSAMPLING
// EASU provides a high quality spatial-only scaling at relatively low cost.
// Meaning EASU is appropriate for laptops and other low-end GPUs.
// Quality from 1x to 4x area scaling is good.
// The scalar uses a modified fast approximation to the standard lanczos(size=2) kernel.
// EASU runs in a single pass, so it applies a directionally and anisotropically adaptive radial lanczos.
// This is also kept as simple as possible to have minimum runtime.
// The lanczos filter has negative lobes, so by itself it will introduce ringing.
// To remove all ringing, the algorithm uses the nearest 2x2 input texels as a neighborhood,
// and limits output to the minimum and maximum of that neighborhood.
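To illustrate the trade-off that comment describes, here is a minimal 1-D sketch (my own illustration, not AMD's code): a Lanczos(size=2) interpolation whose output is clamped to the nearest input samples, the 1-D analogue of EASU's 2x2 neighborhood clamp.

```python
import numpy as np

def lanczos2_clamped(samples, t):
    """Evaluate a Lanczos(size=2) interpolation of 1-D samples at position t,
    then clamp the result to the two nearest input samples (1-D analogue of
    EASU's 2x2 neighborhood clamp). Returns (raw, clamped)."""
    i0 = int(np.floor(t))
    acc, wsum = 0.0, 0.0
    for i in range(i0 - 1, i0 + 3):          # the four Lanczos-2 taps
        if 0 <= i < len(samples):
            # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
            w = np.sinc(t - i) * np.sinc((t - i) / 2.0)
            acc += w * samples[i]
            wsum += w
    raw = acc / wsum
    # Neighborhood clamp: limit output to the nearest texels' min/max.
    a = samples[max(i0, 0)]
    b = samples[min(i0 + 1, len(samples) - 1)]
    clamped = min(max(raw, min(a, b)), max(a, b))
    return raw, clamped

# Near a hard edge, plain Lanczos undershoots (ringing); the clamp removes it.
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
raw, clamped = lanczos2_clamped(edge, 1.5)   # raw < 0 (ringing), clamped == 0
```

The same clamp that kills the ringing also flattens any genuine high-frequency detail that falls outside the local neighborhood's range, which is exactly what happens to the crossing power lines.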
FSR 1 can't match DLSS for image quality, as in this video, because higher-resolution texture detail cannot be reconstructed by a spatial upscaler. DLSS already has this detail because it starts with full-detail texture information at the upscaled target resolution: its render uses the same textures and the same LOD settings a native 4K render would. The temporal part then uses information accumulated over a number of frames to reconstruct the final frame at the target resolution. This supplies the required detail but has one drawback: temporal artifacts. DLSS's AI network (likely the best current method for removing temporal artifacts) suppresses those artifacts and adds in some detail inferred from its training. This is 100% the reason DLSS 2 will always be superior to FSR 1, or even to DLSS 1's spatial upscaling algorithm. See again
video
AMD states that FSR 2 is better than FSR 1 for the very same reason DLSS 2 is better than FSR 1: the temporal method is better in every regard image-quality-wise, regardless of whether the temporal artifacts are removed by engineered or AI methods. It's obvious.
A Survey of Temporal Antialiasing Techniques, by Lei Yang, Shiqiu Liu, and Marco Salvi:
"Temporal upsampling essentially accumulates lower-resolution shading results, and produces higher resolution images that often contain more details than pure spatial upsampling results." (page 7)
"Amortizing sampling and shading across multiple frames does sometimes lead to image quality defects. Many of these problems are either due to limited computation budget (e.g. imperfect resampling), or caused by the fundamental difficulty of lowering sampling rate on spatially complex, fast changing signals." (page 9)
8.3. Machine learning-based methods
"Salvi [Sal17] enhances TAA image quality by using stochastic gradient descent (SGD) to learn optimal convolutional weights for computing the color extents used with neighborhood clamping and clipping methods (see Section 4.2). Image quality can be further improved by abandoning engineered history rectification methods in favor of directly learning the rectification task. For instance, variance clipping can be replaced with a recurrent convolutional autoencoder which is jointly trained to hallucinate new samples and appropriately blend them with the history data [Sal17]." (page 12)
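For reference, the "engineered" variance clipping that the paper says can be replaced by a learned network looks roughly like this (a sketch of the general technique, not NVIDIA's or any engine's actual code):

```python
import numpy as np

def variance_clip(history, neighborhood, gamma=1.0):
    """Clip a reprojected history sample to the current frame's local
    color extents, estimated as mean +/- gamma * stddev of a small
    neighborhood (e.g. the 3x3 around the pixel)."""
    mu = neighborhood.mean()
    sigma = neighborhood.std()
    lo, hi = mu - gamma * sigma, mu + gamma * sigma
    return float(np.clip(history, lo, hi))

# A stale history value far outside the current neighborhood (e.g. after a
# disocclusion) gets pulled back into the plausible range.
current_3x3 = np.array([0.4, 0.5, 0.45, 0.5, 0.55, 0.5, 0.45, 0.5, 0.6])
clipped = variance_clip(2.0, current_3x3, gamma=1.0)
```

This hand-tuned rectification is exactly the step a learned network can do better, because the right clipping behavior depends on the content.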
NVIDIA's website states that DLSS 2.x uses a convolutional autoencoder.
Because temporal artifacts have the same causes regardless of the source, DLSS 2.x can be seen as a more generalized network that can be applied to multiple games.