Thursday, March 18th 2021

Confronting NVIDIA's DLSS: AMD Confirms FidelityFX Super Resolution (FSR) to Launch in 2021

AMD, via Scott Herkelman, CVP & GM at AMD Radeon, confirmed in a video interview with PCWorld that the company's counterpart to NVIDIA's DLSS technology - which he describes as the most important piece of software currently in development from a graphics perspective - is coming along nicely. Launch of the technology is currently planned for later this year. Herkelman further confirmed that there is still a lot of work to do before it's ready for prime time, but in the meantime, it has an official acronym: FSR (FidelityFX Super Resolution). If you're unfamiliar with DLSS, it's essentially a proprietary, NVIDIA-locked upscaling algorithm, now implemented in a number of games, that leverages machine-learning hardware (tensor cores) to upscale a game with minimal impact on visual quality. It's important because it allows for much higher performance in even the latest, most demanding titles - especially when they implement raytracing.
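For a rough sense of where that performance comes from, consider the pixel math: the GPU shades each frame at a lower internal resolution, and the upscaler reconstructs the target resolution. Below is a small, illustrative Python sketch using the per-axis scale factors commonly cited for DLSS 2.0's modes (treat the exact figures as approximate):

# Illustrative only: per-axis render-scale factors commonly cited for DLSS 2.0.
TARGET = (3840, 2160)  # 4K output

modes = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

for name, scale in modes.items():
    w, h = int(TARGET[0] * scale), int(TARGET[1] * scale)
    saved = 1 - (w * h) / (TARGET[0] * TARGET[1])
    print(f"{name}: renders {w}x{h} internally, ~{saved:.0%} fewer pixels shaded")

At 4K Performance mode, for instance, the GPU shades only a quarter of the pixels of a native frame - which is exactly the headroom that makes raytracing viable.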

True to form, AMD's stance on upscaling technologies favors a multiplatform, compatible approach that requires only the implementation of open standards to run on users' systems. The idea is to reach the broadest possible spectrum of game developers and gamers, with tight, seamless integration into the usual game-development workflow. This is done mostly by taking advantage of Microsoft's DirectML implementation, which is baked straight into DX12.
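As a purely illustrative example of what that DirectML route can look like in practice, here is a minimal Python sketch that runs a hypothetical super-resolution model on any DX12-capable GPU through ONNX Runtime's DirectML execution provider. The model file and tensor shapes are assumptions - AMD has not published FSR's actual implementation:

# Hypothetical sketch: vendor-agnostic ML upscaling via DirectML.
# Requires the onnxruntime-directml package; "upscaler.onnx" is a
# placeholder model, not anything AMD has shipped.
import numpy as np
import onnxruntime as ort

# The DirectML execution provider runs on any DX12-capable GPU -
# AMD, NVIDIA, or Intel - which is the hardware-agnostic point here.
session = ort.InferenceSession("upscaler.onnx",
                               providers=["DmlExecutionProvider"])

# Fake 1080p RGB frame in NCHW layout; a real model defines its own I/O.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
input_name = session.get_inputs()[0].name
upscaled = session.run(None, {input_name: frame})[0]
print(upscaled.shape)  # e.g. (1, 3, 2160, 3840) for a 2x model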
One detail doesn't instill confidence in how soon we'll see this technology out in the wild: in the video, Scott Herkelman says that there are multiple approaches to such an upscaling solution and that they are being evaluated in the lab. This either means that AMD hasn't yet decided which technologies to leverage for upscaling via Microsoft's DirectML, or that the company is actively working on two or more different approaches so it can measure their benefits, drawbacks, and suitability for large-scale deployment. All in all, though, it's great to know that things are coming along nicely, as such a technology has immense potential not only for PC gamers (perhaps even NVIDIA-toting ones, if AMD's solution truly is hardware-agnostic), but also for console players. If the performance increases we can expect from FSR are comparable to those of DLSS, we can expect an immense amount of power to be unlocked in current-gen consoles. And that, in turn, benefits everyone.

Watch the full PCWorld video below:

Source: via Videocardz

89 Comments on Confronting NVIDIA's DLSS: AMD Confirms FidelityFX Super Resolution (FSR) to Launch in 2021

#76
INSTG8R
Vanguard Beta Tester
nguyen: Huh, you mean putting both Hitman 2 and 3 in the same benchmarking suite? That is not how a fair reviewer would choose their games.
LOL, this from the guy who, to put it “politely”, was “insistent” that your precious DLSS should be included in every benchmark - so basically the limited number of games that support it - and that was “fair”, right?
#77
AusWolf
ratirt: FidelityFX works nicely with Cyberpunk; it does boost performance, considering how demanding that game is. I wish AMD would hurry with the FSR release, but I know it's better to wait a bit longer to get it done right. I'm really curious how this one will work.


I kinda have the feeling that, with you, any game that favors AMD should be disregarded immediately as a GPU performance indicator. I can't see you saying the same about NV.
Since we have two major companies making GPUs, wouldn't it be healthy to have both companies' "favored" games in the benchmark mix?
Every game favours one architecture or another. This has been true since the dawn of GPUs. Does anybody remember 3DFX and their proprietary API, Glide? Or S3D? I do.

It's only that we don't have proprietary APIs anymore, just proprietary technologies. As every game works with one or more of these technologies, there's no such thing as an impartial game for benchmarking. I think we should all forget about labelling games "AMD-favoured" or "nvidia-favoured", and either focus on the specific game that each one of us wants to play (that's what I usually do while skimming through reviews), or take an average of as many games as possible - which is an objective, but not very useful way to measure GPU performance, imo.
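To illustrate what that average usually looks like in practice - typically a geometric mean of per-game relative results, so a single outlier title doesn't dominate - here's a quick Python sketch with invented numbers:

# Relative performance (card A fps / card B fps) per game - numbers are made up.
import math

relative = {
    "Hitman 3": 1.12,
    "Cyberpunk 2077": 0.91,
    "Death Stranding": 1.03,
    "Control": 0.95,
}

# Geometric mean: the n-th root of the product of the ratios, so a 2x win
# and a 0.5x loss cancel out instead of skewing the average.
geomean = math.exp(sum(math.log(r) for r in relative.values()) / len(relative))
print(f"Overall: card A at {geomean:.2f}x of card B")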
#78
INSTG8R
Vanguard Beta Tester
AusWolf: Every game favours one architecture or another. This has been true since the dawn of GPUs. Does anybody remember 3DFX and their proprietary API, Glide? Or S3D? I do.

It's only that we don't have proprietary APIs anymore, just proprietary technologies. As every game works with one or more of these technologies, there's no such thing as an impartial game for benchmarking. I think we should all forget about labelling games "AMD-favoured" or "nvidia-favoured", and either focus on the specific game that each one of us wants to play (that's what I usually do while skimming through reviews), or take an average of as many games as possible - which is an objective, but not very useful way to measure GPU performance, imo.
Well, I agree for the most part, but an API has a feature set that's agnostic. The engine that uses it is where the proprietary stuff gets added.
#79
AusWolf
INSTG8R: Well, I agree for the most part, but an API has a feature set that's agnostic. The engine that uses it is where the proprietary stuff gets added.
True, though games use more or less of certain agnostic features that run better or worse on certain GPU architectures. I mean, games can be shader-intensive, memory-intensive, etc., which can be favoured by one specific kind of GPU over another. Add all the proprietary stuff to this, and you've got a very mixed landscape of games, where nothing can be discarded as a "sponsored title". Every title is sponsored, or at least favoured, by one GPU brand or another.

Edit: That's why I don't look at average values when I'm reading hardware reviews. Instead, I only look at the games that I play, or want to play.
#80
nguyen
INSTG8R: LOL, this from the guy who, to put it “politely”, was “insistent” that your precious DLSS should be included in every benchmark - so basically the limited number of games that support it - and that was “fair”, right?
Turning off DLSS 2.0 in a supported game is like making an aircraft race a car on a road for the sake of an apples-to-apples comparison; sure, the aircraft can run on the road, but that's not the full extent of its capability, get it?
DLSS Quality should be the default setting in supported games for RTX owners, yet it is turned off for the sake of an apples-to-apples comparison, LOL.
Same thing with ray tracing: now that the RX 6000 series also supports DXR, turning it off is just stupid. How about testing every game at Low settings to make it an apples-to-apples comparison? Then every GPU is equal, because they're all CPU-bound, LOL.

I was never against Hitman 3 being included in the benchmark suite; I was just saying Hitman 3 performs exactly like Hitman 2 would, so I have no idea why @ratirt thinks I was against Hitman 3 being included.
#81
INSTG8R
Vanguard Beta Tester
nguyen: Turning off DLSS 2.0 in a supported game is like making an aircraft race a car on a road for the sake of an apples-to-apples comparison; sure, the aircraft can run on the road, but that's not the full extent of its capability, get it?
DLSS Quality should be the default setting in supported games for RTX owners, yet it is turned off for the sake of an apples-to-apples comparison, LOL.

I was never against Hitman 3 being included in the benchmark suite; I was just saying Hitman 3 performs exactly like Hitman 2 would, so I have no idea why @ratirt thinks I was against Hitman 3 being included.
Well, no; my point is that any game that can use DLSS will obviously have advantages that can't be compared.
#82
medi01
nguyen: making an aircraft race a car on a road for the sake of an apples-to-apples comparison
Using a car vs. aircraft analogy when talking about a puny TAA derivative... just when I was wondering whether things could get any crazier in the green lands. :D
#83
ratirt
INSTG8R: Yeah, I mean, I don't know if 95% was the default, but I was well into the game when I noticed it, so I just left it there - I was already used to the IQ. Death Stranding as a whole was a really well-optimized game, but I always turn on the “special sauce” anyway.
To be honest, I used FidelityFX at 80% and it looked OK, and the boost in FPS was substantial. It's pretty good, but I wouldn't go below 70%; otherwise you'll see some noise on the ground, which kinda ruins the game. Unless you don't have a choice.
nguyen: Huh, you mean putting both Hitman 2 and 3 in the same benchmarking suite? That is not how a fair reviewer would choose their games.
It is a benchmark suite; somebody decided those two belong in it, whether you like it or not. Dismissing them just because NV sucks in those titles is not right. Besides, if you were to dismiss all the games that favor one side over the other, you would have a very mediocre picture of the cards. Also, "fair" for you is when NV is leading; if it isn't, then RT and DLSS get brought to light. It is kinda funny. Besides, this is not about NV, but about FidelityFX.
I've run some tests with CP2077 and I like it.
AusWolf: Every game favours one architecture or another. This has been true since the dawn of GPUs. Does anybody remember 3DFX and their proprietary API, Glide? Or S3D? I do.

It's only that we don't have proprietary APIs anymore, just proprietary technologies. As every game works with one or more of these technologies, there's no such thing as an impartial game for benchmarking. I think we should all forget about labelling games "AMD-favoured" or "nvidia-favoured", and either focus on the specific game that each one of us wants to play (that's what I usually do while skimming through reviews), or take an average of as many games as possible - which is an objective, but not very useful way to measure GPU performance, imo.
Well, as I see it, the more proprietary something is, the more money you have to pay for it, and incompatibility creeps in. Does "favors" mean it runs better on one than the other? Or does it mean the game was sponsored and the other company's architecture is crippled? Saying a game "favors" one architecture is very misleading.
These graphics cards are different, and obviously a game will run better on one architecture than the other. It's just that nowadays, it seems, if something runs better on one card, it must "favor" that architecture - whatever hides behind this "favoring". It's no longer about benchmarking and which is faster; a game MUST favor one or the other. That's an incorrect framing in my book.
That's just the way it is now.
#84
AusWolf
ratirt: Well, as I see it, the more proprietary something is, the more money you have to pay for it, and incompatibility creeps in. Does "favors" mean it runs better on one than the other? Or does it mean the game was sponsored and the other company's architecture is crippled? Saying a game "favors" one architecture is very misleading.
These graphics cards are different, and obviously a game will run better on one architecture than the other. It's just that nowadays, it seems, if something runs better on one card, it must "favor" that architecture - whatever hides behind this "favoring". It's no longer about benchmarking and which is faster; a game MUST favor one or the other. That's an incorrect framing in my book.
That's just the way it is now.
Yes, I mean X game runs better on Y hardware because of reasons - it might prefer memory bandwidth, it might prefer a certain GPU architecture, or the developers might be sponsored by nvidia or AMD. It doesn't matter. What matters is, if we disregard the games that run better on nvidia or AMD hardware, then we pretty much end up with no game to benchmark.
#85
ratirt
AusWolf: Yes, I mean X game runs better on Y hardware because of reasons - it might prefer memory bandwidth, it might prefer a certain GPU architecture, or the developers might be sponsored by nvidia or AMD. It doesn't matter. What matters is, if we disregard the games that run better on nvidia or AMD hardware, then we pretty much end up with no game to benchmark.
I think memory is key here. You've changed your narrative: no longer "favors", but "performs better". I think there's a huge difference between "favors" and "performs better". Reviewers should benchmark games that perform better on each architecture - maybe the same number of games from each camp. I wouldn't bring sponsored games into that equation; it has been established that sponsored games run better on one architecture than the other, and there's no shock there.
#86
voltage
Always copying, always releasing tech similar to Intel's and NVIDIA's. When will AMD invent something completely unique, before anyone else? Never.
#87
londiste
AusWolf: Yes, I mean X game runs better on Y hardware because of reasons - it might prefer memory bandwidth, it might prefer a certain GPU architecture, or the developers might be sponsored by nvidia or AMD. It doesn't matter. What matters is, if we disregard the games that run better on nvidia or AMD hardware, then we pretty much end up with no game to benchmark.
Determining which vendor's hardware a game prefers is not straightforward, so what exactly would you disregard? Where exactly is the zero point for "prefers" (or "performs better on")? For example, there are fairly general-level things like Unreal Engine, which is claimed to prefer NVIDIA, or CryEngine, which AMD GPUs seem to have trouble with.
#88
Unregistered
voltage: Always copying, always releasing tech similar to Intel's and NVIDIA's. When will AMD invent something completely unique, before anyone else? Never.
Always posting comments with little to no substance. When will voltage post something that contributes to the topic? Never.

This is literally the fourth comment I've seen from you that is the exact same thing. Be more original and less of a fanboy.

"AMD invented x86-64. Intel is always copying, always releasing CPUs similar to AMD. When will Intel invent something completely unique before anyone else? Never". There you go, I made a statement that is about as baseless as yours. Yes, whoever makes the technology first pioneered it. But that doesn't mean everybody else making their own versions to create something called competition is inherently copying.
#89
AusWolf
londiste: Determining which vendor's hardware a game prefers is not straightforward, so what exactly would you disregard? Where exactly is the zero point for "prefers" (or "performs better on")? For example, there are fairly general-level things like Unreal Engine, which is claimed to prefer NVIDIA, or CryEngine, which AMD GPUs seem to have trouble with.
That's exactly my point. Instead of pointing fingers at "nvidia titles" or "AMD titles", it's better to benchmark whatever we can, or whatever we're interested in. A good game is a good game regardless of what platform it prefers, or who sponsored the developers.