Monday, March 14th 2022
AMD Potentially Preparing to Announce FSR 2.0 at GDC 2022
AMD is scheduled to hold a session titled "Next-Generation Image Upscaling for Games" at the Game Developers Conference on March 23. The listing includes only a brief description stating that "AMD will present some of the results of their research in the domain of next-generation image upscaling technology," but the developer of CapFrameX has recently claimed to have seen footage of FidelityFX Super Resolution (FSR) 2.0, so AMD may be preparing to announce the technology imminently.
The developer claims that FSR 2.0 switches to a temporal upscaling approach with optimized anti-aliasing that, unlike DLSS and XeSS, doesn't require AI acceleration, meaning it can work with GPUs from multiple vendors. The technology can also allegedly improve image quality beyond native resolution, but we will need to wait for the official announcement and independent reviews before reaching any conclusions.
Sources:
GDC, @CapFrameX
34 Comments on AMD Potentially Preparing to Announce FSR 2.0 at GDC 2022
FSR Performance mode runs at 25% of the original resolution (50% per dimension).
25% of the original resolution is also what Unreal Engine 5's TSR uses by default. It's shaping up to be a TSR equivalent with the advantage of being cross-engine, with an open-source implementation that is properly documented in the FidelityFX suite.
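The resolution math above (25% of the pixels, 50% per dimension) follows directly from the per-axis scale factor. A minimal sketch, assuming FSR 2.0 reuses the same per-axis scale factors as FSR 1.0's quality modes (2.0x for Performance, which matches the 25%-of-pixels figure; the other factors here are assumptions, not confirmed for 2.0):

```python
# Assumed per-axis downscale factors (divisors), borrowed from FSR 1.0's modes.
MODE_DIVISOR = {
    "Quality": 1.5,      # ~67% per axis -> ~44% of the pixels
    "Balanced": 1.7,
    "Performance": 2.0,  # 50% per axis -> 25% of the pixels
}

def render_resolution(display_w, display_h, mode):
    """Return the internal render resolution for a given display resolution."""
    d = MODE_DIVISOR[mode]
    return round(display_w / d), round(display_h / d)

# Steam Deck's 1280x800 panel in Performance mode:
print(render_resolution(1280, 800, "Performance"))  # (640, 400)
```

The pixel-count saving is the square of the per-axis factor, which is why halving each dimension yields a quarter of the shading work.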
This could be huge for devices like the Steam Deck where FSR1.0 doesn't work great because of the low base resolution, for example. If it's a temporal upscaling method then it probably needs motion vectors to be implemented.
The good news is that games using UE4/5 and/or DLSS already have those in place, so the development effort to implement FSR 2.0 in such games should be small.
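The motion vectors mentioned above are what let a temporal upscaler reuse last frame's pixels: each output pixel looks up where its content was in the previous frame and fetches that color. A minimal sketch of that reprojection step (the function name and array layout are illustrative, not AMD's actual implementation):

```python
import numpy as np

def reproject_history(history, motion, out_h, out_w):
    """Fetch last frame's color at each output pixel's previous position.

    history: (H, W, 3) previous upscaled frame
    motion:  (H, W, 2) per-pixel motion vectors in output pixels
             (how far the content moved since the previous frame)
    """
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    # Previous position = current position minus motion; nearest-neighbor
    # fetch with edge clamping (real implementations filter bilinearly).
    prev_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, out_w - 1)
    prev_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, out_h - 1)
    return history[prev_y, prev_x]
```

The reprojected history is then blended with the current low-resolution frame's samples to accumulate detail over time, which is why engines that already export motion vectors (for DLSS or TAA) need little extra work.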
Cyberpunk 2077 is one of those cases, but we'd still need developer intervention regardless.
I also happen to be playing CP2077 at the moment on a 6900 XT, and I have to say a better upscaling/upsampling technology would be nice so I can turn up the ray tracing settings.
Some more information about FSR 2.0. It looks like it will launch on Deathloop.
videocardz.com/newz/amd-fsr-2-0-next-level-temporal-upscaling-officially-launches-q2-2022-rsr-launches-march-17th
AMD seems to be claiming that "Performance Mode", which runs at 25% resolution, offers similar quality to native with close to 2x the performance.
This is great on discrete GPUs, but on the Steam Deck it means the GPU only needs to render at 640*400, which is a game changer considering the device's limited fillrate and memory bandwidth.
The question is whether there's any discernible ghosting, though.
DLSS doesn't use its neural network to create pixels from scratch; the network is used to select which pixels to reuse. I wonder if AMD will come up with a "normal" algorithm that does the same and is able to identify and remove ghosting.
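One common hand-written (non-AI) heuristic for rejecting stale history is neighborhood clamping: the reprojected color is clamped to the min/max of the current frame's local neighborhood, so a color that no longer matches anything nearby (a ghost trail) gets pulled back toward the current frame. A minimal sketch of that idea, not AMD's confirmed method:

```python
import numpy as np

def clamp_history(history_color, current_neighborhood):
    """Clamp a reprojected history color to the per-channel min/max of the
    current frame's local neighborhood. If the old color falls outside that
    range (e.g. a disoccluded background), it is pulled back into range,
    which suppresses ghost trails at the cost of some temporal stability."""
    lo = current_neighborhood.min(axis=0)
    hi = current_neighborhood.max(axis=0)
    return np.clip(history_color, lo, hi)

# A bright stale sample (0.9) surrounded by dark current pixels gets clamped:
neigh = np.array([[0.20, 0.20, 0.20],
                  [0.30, 0.30, 0.30],
                  [0.25, 0.25, 0.25]])
print(clamp_history(np.array([0.9, 0.9, 0.9]), neigh))  # [0.3 0.3 0.3]
```

TAA implementations have used variants of this (clamping or variance clipping) for years, so a vendor-agnostic upscaler can plausibly handle ghosting without a neural network.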
Back to my first point: real-time 3D rendering has always been about cheating to get results, and anything that improves performance and/or image quality without requiring more resources will make its way into new games. Upscalers are here to stay; they will continue to improve, and at some point they will consistently deliver better-than-native performance and image quality. In the future they might do variable upscaling, a bit like VRS: upscaling that focuses rendering on the parts that benefit most from a new frame and reuses the previous frame for parts that are less visible.
As long as they're available to all and not locked to a single vendor, these technologies will become standard.
FSR 2.0 and XeSS probably spell the death of DLSS. Not because it's not good, but because it's locked to a single brand.
As for DLSS, it's the same story: Nvidia has deep pockets, but in the future it will mostly appear in Nvidia-sponsored games.
Though I guess they will push developer relations as hard as they can to keep DLSS relevant, and in the case of Unreal Engine it's apparently an automatic toggle. I bet most developers will prefer to adopt upsampling/reconstruction methods that they can implement across PC and 9th-gen consoles.
Unless FSR 2.0 is significantly inferior to DLSS (though we should expect it to be close to UE5's TSR), it's DLSS that might become something confined to Nvidia-sponsored titles.
Made me laugh too but then I got RTX and gotta admit that DLSS, DLAA, and DL scaling technology is here to stay.
Why can you take incredible photos with teeny weeny smartphone image sensors? Because AI-combined shots from three tiny sensors are better than a single native medium-sized sensor. DL does something similar with motion vectors and temporal magic.
As for ghosting, three frames of ghosting at 240 Hz is better than native 60 Hz.
Our brains do DL all the time; thanks to that we're often able to see more than our eyes' optics permit - it's pretty natural.
DLSS has been widely adopted already; it's hard for any game dev not to know about it. FSR 2.0 and XeSS, on the other hand, are still unknown upscalers. The UE 4.26 and UE5 game engines have a DLSS plugin, so there's no reason for game devs not to tick the DLSS box unless they're paid not to (by AMD/Intel).
Most game devs should have experience with DLSS by now, while FSR 2.0 and XeSS are unknowns. Heck, most current FSR 1.0 implementations are unusable.
That should make them more competitive on power usage, as they won't have to power silicon that goes largely unused in a gaming scenario.
Being open source, devs can use them at will without additional cost.
But yeah, I wouldn't buy AAA games with an old GPU only to rely on upscalers to play at 1080p; that's just self-torment.
I honestly could not believe what I was seeing. I will be reprocessing some of my better landscape photos now.
So it might make you laugh now, but I guarantee it is possible.
Another example is noisy images. Topaz Denoise AI can actually restore features lost to noise because the AI knows what should be there.
AI image processing is already doing amazing things, and it's still early days.
Now, one way you can slightly improve IQ is to upscale an image (no AI), do some image processing, and then resample back to native resolution. So I expect they mean that if you ran FSR 2.0 without downscaling, the resulting image could look better than native, but not by much.
FSR, being spatial, didn't offer the same IQ, but at least it was an option that wasn't temporal-based. Now it seems that's coming to an end. RIP.
Seeing is believing.
The added detail comes from slight differences between frames: the renderer shifts the sampling grid slightly each frame to resolve detail, and it pushes the mipmap level further out so textures get more detail than the backend resolution alone would provide. It's clever, and it does the job very well. If you don't have to use raw power to do something, don't; that's how you do effective real-time rendering.
Don't hold your breath! Some people here really are shills. Let them be, they don't hurt anyone really.
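The two tricks mentioned above (a per-frame shift of the sampling grid, and a mip bias so textures are sampled at display-resolution sharpness) can be sketched as follows. The Halton-sequence jitter and the log2 bias formula are standard techniques from temporal AA/upscaling literature, shown here as an illustration rather than AMD's confirmed implementation:

```python
import math

def halton(index, base):
    """Low-discrepancy sequence commonly used for sub-pixel camera jitter."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter(frame_index, sample_count=8):
    """Sub-pixel offset in [-0.5, 0.5) applied to the projection each frame,
    so successive frames sample different positions within each pixel."""
    i = (frame_index % sample_count) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5

def mip_bias(render_w, display_w):
    """Negative texture LOD bias: sample mips as if rendering at display
    resolution, so upscaled textures keep display-resolution detail."""
    return math.log2(render_w / display_w)

# Rendering at 640 wide for a 1280-wide display -> bias of -1 mip level:
print(mip_bias(640, 1280))  # -1.0
```

Accumulating those jittered samples over several frames is what lets the upscaler reconstruct detail the backend resolution alone couldn't resolve.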