
AMD FSR FidelityFX Super Resolution Quality & Performance

I don't need your sarcasm about what year it is.

I ask you again: could you show me a DIRECT comparison between FSR and DLSS 2.0, in the same screenshot?
Because the videos you linked don't show any.
They show DLSS or FSR, not the two in the same game.

Thank you.
Literally the last video has a small section directly comparing FSR and DLSS 2. Like I said, we aren't in the dial-up era, and you can google anything you want yourself.
 
Please allow me to ignore the baseless personal insults in your post; I won't go that deep, sorry.
That being said, if I understand right, DLSS reconstructs and supersamples while FSR upscales, hence my post about the comparison; my opinion is that they're not comparable at this time, and reviews should not treat them on the same level.
I would be happiest if AMD offered better streaming options, better ray tracing options, a better DLSS alternative, etc., but so far they only offer better rasterization speeds. I honestly hope they will, because that would mean more competition, which would lead to lower prices and better products for us customers.
I just pointed out that we should still stay objective while we are at it.

DLSS 1.0 was Spatial + GAN regeneration. DLSS 2.0 is Temporal + general GAN + motion vector data, used to reduce false hits where the GAN produces excessive false detail (this is why DLSS 2.0 upscales so well but smears badly in motion).

FSR is Spatial + Adaptive Sharpening.

Honestly, if AMD can do this much without requiring any GAN model or previous-frame data, Nvidia should be worried. AMD's Spatial is miles ahead of DLSS 1.0's attempt at the same, and its sharpening is top notch.
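For intuition, here's a toy sketch of that "spatial upscale + adaptive sharpening" pipeline in Python. This is my own simplified illustration, not AMD's actual shader code: a plain bilinear upscale followed by a contrast-aware sharpening pass that, in the spirit of CAS, sharpens flat regions harder and backs off near existing high-contrast edges.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Spatial upscale: sample the low-res image at fractional coordinates."""
    h, w = img.shape
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def adaptive_sharpen(img, strength=0.5):
    """Contrast-adaptive sharpening: blend in a Laplacian, scaled down
    where local contrast is already high (values assumed in [0, 1])."""
    padded = np.pad(img, 1, mode='edge')
    n = padded[:-2, 1:-1]; s = padded[2:, 1:-1]
    w = padded[1:-1, :-2]; e = padded[1:-1, 2:]
    contrast = (np.maximum(np.maximum(n, s), np.maximum(w, e))
                - np.minimum(np.minimum(n, s), np.minimum(w, e)))
    amount = strength * (1.0 - contrast)   # flat areas get more sharpening
    neighbor_mean = (n + s + w + e) / 4
    return np.clip(img + amount * (img - neighbor_mean), 0.0, 1.0)
```

The point of the adaptive weighting is exactly the trade-off discussed above: it adds apparent crispness without the ringing a fixed-strength unsharp mask produces on already-sharp edges.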

As it stands, it appears DLSS 2.0 is the image quality king, but FSR is the master of image stability in motion. Leaked AMD patents may also show where this is going next. I commented on this and what it might be proposing.

AMD's 2019 "Gaming Super Resolution" Patent Could be the Blueprint for FidelityFX Super Resolution? | TechPowerUp Forums

The approach looks really smart IMO: use an algorithmic upscale for the broad image and apply a DNN for detail restoration.

The talk of ground truth for the DNN sounds like AMD might be 'resolution spiking' every x frames to feed reference data back to the DNN; if so, I'm intrigued to see what impact that may have on frame times.

If we look at FSR, it looks suspiciously like AMD just delivered the algorithmic upscale for the broad image. The next stage would be applying a DNN for detail restoration.
 
Being easy to implement and open source doesn't give it objectively superior image quality.
@Vya Domus didn't claim superior image quality, but superior in general, probably because of the easy implementation.
 
@Vya Domus didn't claim superior image quality, but superior in general, probably because of the easy implementation.
I guess my point is that saying "objectively superior" overall is quite vague and doesn't tell the whole story, when in reality the story has a lot of elements and is far more nuanced. "Easier to implement" makes it objectively superior in ease of implementation; not requiring specific hardware makes it objectively superior in compatibility. But to say those things alone make it objectively superior overall is borderline deliberately vague and misleading, when there are multiple other existing techniques around that trade blows across the spectrum of characteristics, including implementation, hardware requirements, and IQ.
 
I'm curious as to what, in your eyes, makes it "objectively" superior overall.
Straightforward implementation, works on basically any hardware, and achieves image quality and performance comparable to DLSS.

Whereas if you value the preservation of fine detail or text sharpness, it would be objectively inferior, in that aspect of image quality, to some alternatives.

That's straight-up false, though: if you use the maximum quality setting, it often looks sharper and cleaner than native.
 
Literally the last video has a small section directly comparing FSR and DLSS 2. Like I said, we aren't in the dial-up era, and you can google anything you want yourself.
Did you watch the video?
Maybe it's just me, but I can't find any direct comparison.
 
Looks very promising. Perfect timing to release this during the current GPU shortage. For those who are still stuck with their old cards because upgrades aren't available at their MSRP, this is a godsend if they want to play new titles.
 
That's straight-up false, though: if you use the maximum quality setting, it often looks sharper and cleaner than native.

Don't confuse "clearer than native" with awful temporal implementations. Both Death Stranding and Control have terrible native temporal AA that destroys IQ. Funnily enough, DLSS does a better job than those travesties, even with an upscale.
 
I guess my point is that saying "objectively superior" overall is quite vague and doesn't tell the whole story, when in reality the story has a lot of elements and is far more nuanced. "Easier to implement" makes it objectively superior in ease of implementation; not requiring specific hardware makes it objectively superior in compatibility. But to say those things alone make it objectively superior overall is borderline deliberately vague and misleading, when there are multiple other existing techniques around that trade blows across the spectrum of characteristics, including implementation, hardware requirements, and IQ.
I really wonder: what would you say if, for instance (and some people have already suggested it), NV opened up DLSS and it worked just fine on 1000-series cards? Would you say DLSS is better in every aspect? Do you think NV will do it? Any other thoughts if that were the case?
Open-source superiority is not only about image quality but also about how many people can actually take advantage of it. FSR can be used on a lot of graphics cards regardless of vendor, and that, my friend, is a huge advantage.
 
That's straight up false though, if you use the maximum quality setting it often looks sharper and cleaner than native.
I don't think we're looking at the same content; I have yet to see one Ultra Quality vs. native comparison that looks sharper and clearer than native.
I really wonder: what would you say if, for instance (and some people have already suggested it), NV opened up DLSS and it worked just fine on 1000-series cards? Would you say DLSS is better in every aspect? Do you think NV will do it? Any other thoughts if that were the case?
Open-source superiority is not only about image quality but also about how many people can actually take advantage of it. FSR can be used on a lot of graphics cards regardless of vendor, and that, my friend, is a huge advantage.
I don't think they can or will open DLSS 2.0+ to pre-20-series cards, no. It would be cool, though, even if the results weren't as robust.

I see Nvidia more likely doing their own implementation/adaptation of FSR as a lower-tier DLSS; I think there's already talk that their Sharpen+ filter may use similar algorithms.
Straightforward implementation, works on basically any hardware, and achieves image quality and performance comparable to DLSS.
I'm not only talking about DLSS but also other built-in methods like TAAU. I'm willing to agree on some points, but not image quality, most specifically when viewed across the spectrum of input and output resolutions, not just best-case scenarios.

Again, I think this is freaking awesome, but I can't agree with a sweeping, all-encompassing statement like "objectively superior". I do appreciate the polite and objective discourse, though.
 
I don't think they can or will open DLSS 2.0+ to pre-20-series cards, no. It would be cool, though, even if the results weren't as robust.

I see Nvidia more likely doing their own implementation/adaptation of FSR as a lower-tier DLSS; I think there's already talk that their Sharpen+ filter may use similar algorithms.
I'd like to see NV's version of an FSR-style implementation that supports all modern GPUs. If that turns out to be the case, then you can easily say DLSS is going to be put on the back burner or even dropped.
 
I'd like to see NV's version of an FSR-style implementation that supports all modern GPUs. If that turns out to be the case, then you can easily say DLSS is going to be put on the back burner or even dropped.
Maybe? It depends on how well it's done, and we're not at the end of the road for DLSS refinement and adoption yet either; if it is going to die, IMO we haven't seen its peak yet. In any case, I'm here more to talk FSR.
 
Maybe? It depends on how well it's done, and we're not at the end of the road for DLSS refinement and adoption yet either; if it is going to die, IMO we haven't seen its peak yet. In any case, I'm here more to talk FSR.
Yeah, we haven't seen its peak yet, and you already say NV is going to switch direction from proprietary DLSS to something open that works across different graphics cards. What's the conclusion for DLSS here, then? Prosperity or the ditch?
 
 
Yeah, we haven't seen its peak yet, and you already say NV is going to switch direction from proprietary DLSS to something open that works across different graphics cards. What's the conclusion for DLSS here, then? Prosperity or the ditch?
You asked me a what-if question, so I gave a hypothetical answer. I think if they did that, it would have noticeably worse IQ than DLSS, but they might just sit back and let AMD / game devs do the work, who knows.

Hypothetically, if they made their own FSR equivalent which did not need tensor cores, and it had comparable IQ, then yeah, maybe that would be the end of DLSS as we know it today. If DLSS as we know it were to retain a distinct IQ advantage, especially in the lower resolution tiers, it could well stick around longer. I dunno, man, I don't have a crystal ball; I'm just happy I even got an RTX 3080 and get to play with them all for years to come.
 
I don't think we're looking at the same content; I have yet to see one Ultra Quality vs. native comparison that looks sharper and clearer than native.

Really? https://www.techpowerup.com/review/...solution-quality-performance-benchmark/6.html

Look at the brick walls in the background; it's extremely obvious that the highest quality mode looks cleaner and sharper. I think this is pretty funny: it's the complete opposite of when DLSS was released, when people didn't want to admit that it looked like rubbish; now they don't want to admit that FSR looks good.
 
You asked me a what-if question, so I gave a hypothetical answer. I think if they did that, it would have noticeably worse IQ than DLSS, but they might just sit back and let AMD / game devs do the work, who knows.

Hypothetically, if they made their own FSR equivalent which did not need tensor cores, and it had comparable IQ, then yeah, maybe that would be the end of DLSS as we know it today. If DLSS as we know it were to retain a distinct IQ advantage, especially in the lower resolution tiers, it could well stick around longer. I dunno, man, I don't have a crystal ball; I'm just happy I even got an RTX 3080 and get to play with them all for years to come.
So there are no plans for NV to create an open, hardware-independent DLSS equivalent?
If that is the case, then FSR is, or will be, superior in my eyes and in the eyes of a lot of developers. For devs it is simple extrapolation: a feature usable by more users means more people will willingly buy the product, and it's easier to handle, which means more money coming in. Call it what you will, but FSR looks great in my opinion. DLSS 2.0 may be a notch better in some cases, but not by much, and FSR's availability, good quality, and ease of implementation are huge pros in my eyes, in contrast to DLSS's difficulty, the resources it demands, and its restriction to certain hardware.
 
Really? https://www.techpowerup.com/review/...solution-quality-performance-benchmark/6.html

Look at the brick walls in the background; it's extremely obvious that the highest quality mode looks cleaner and sharper. I think this is pretty funny: it's the complete opposite of when DLSS was released, when people didn't want to admit that it looked like rubbish; now they don't want to admit that FSR looks good.
Not sure how I missed it, but I'll give you that one. It does seem mostly due to the sharpening filter doing its job well, and I use sharpening filters on the majority of games I play these days, upscaled or native.

It is funny, I guess. I didn't really like DLSS at release, but I didn't have a 20-series card either, only a 30-series and 2.0+ games. What I find funny is that similar types of people, with different preferences, didn't want to admit DLSS 2.0 looked good, or was even worth enabling on a balance of IQ : FPS, because of personal preferences and nitpicked drawbacks, yet are very happy to accept some visual drawbacks in FSR, perhaps because it's free? It certainly has some bonus points in its favour.

I do think FSR looks good. I said on these very forums that my bar for success is being better than simple upscaling, i.e. just lowering the render resolution, and it does indeed seem to be, all things considered. Its ease of implementation makes it an immediate success and a great feature; you'd be a fool to deny it. More options to tweak performance and IQ should be welcomed with open arms.
 
It is funny, I guess. I didn't really like DLSS at release, but I didn't have a 20-series card either, only a 30-series and 2.0+ games. What I find funny is that similar types of people, with different preferences, didn't want to admit DLSS 2.0 looked good, or was even worth enabling on a balance of IQ : FPS, because of personal preferences and nitpicked drawbacks, yet are very happy to accept some visual drawbacks in FSR.
You should not listen to these people. DLSS 2.0 is very good. Of course, one implementation can differ from another (a native 4K image in one game can differ from a 4K image in a different game), and thus the DLSS 2.0 implementation can look better or worse in one game than in another.

FSR does look good, and I'm waiting for more games to support it. If AMD pulls out all the stops, it might look even better in time. The first implementation is great, and I hope we will see more and better.
 
Thanks, AMD, for including Nvidia support too. :clap:
As per Reddit, it runs on a Haswell IGP too...

supersamples while FSR upscales
I don't think this word means what you think it means.
Supersampling is the opposite of upscaling: it renders at a higher resolution and downsamples to the target resolution.
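To illustrate the distinction with a toy sketch (my own illustration, nothing to do with either vendor's actual pipeline): supersampling starts from more pixels than the target and averages them down, while upscaling starts from fewer pixels and fills in the gaps.

```python
import numpy as np

def supersample_2x(rendered_hi):
    """Supersampling (SSAA-style): render at 2x the target resolution,
    then average each 2x2 block down to one output pixel."""
    h, w = rendered_hi.shape
    return rendered_hi.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale_2x_nearest(rendered_lo):
    """Upscaling: render at half the target resolution, then replicate
    each pixel (nearest-neighbour, the simplest possible upscale)."""
    return np.repeat(np.repeat(rendered_lo, 2, axis=0), 2, axis=1)
```

So supersampling spends extra GPU work for more information per output pixel, while upscaling saves GPU work and has to invent the missing information, which is exactly the gap FSR's sharpening (or DLSS's reconstruction) tries to paper over.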
Evil Genius 2 added support as well, I just noticed. For those who might own that game.
I think one of the game devs commented that it took him about 2 hours to integrate FSR.
Great stuff.
 
my opinion is that they're not comparable at this time, and reviews should not treat them on the same level.
You should have said that in the beginning; if you had been clearer, I wouldn't have replied as I did.

They are not comparable because they process differently, achieving similar results.
 
Not sure how I missed it, but I'll give you that one. It does seem mostly due to the sharpening filter doing its job well, and I use sharpening filters on the majority of games I play these days, upscaled or native.

How do you know what's sharpened up and what isn't?
 
Where can I download AMD's FSR? I want to try it on my antique. :)
 
How do you know what's sharpened up and what isn't?
This is basically just AMD's existing CAS (Contrast Adaptive Sharpening), part of the whole FSR package.
It's always there.
 
This is basically just AMD's existing CAS (Contrast Adaptive Sharpening), part of the whole FSR package.
It's always there.

I know it's there; I'm asking how one would know, looking at an image, whether there's actual detail in it or it's just sharpened.
 