Friday, August 25th 2023

AMD Announces FidelityFX Super Resolution 3 (FSR 3) Fluid Motion Rivaling DLSS 3, Broad Hardware Support

In addition to the Radeon RX 7800 XT and RX 7700 XT graphics cards, AMD announced FidelityFX Super Resolution 3 Fluid Motion (FSR 3 Fluid Motion), the company's performance enhancement designed to rival NVIDIA DLSS 3 Frame Generation. The biggest piece of news here is that unlike DLSS 3, which is restricted to the GeForce RTX 40-series "Ada" architecture, FSR 3 enjoys the same kind of cross-brand hardware support as FSR 2. It works on the latest Radeon RX 7000 series, on previous-generation RX 6000 series RDNA 2 graphics cards, and on NVIDIA GeForce RTX 40-series, RTX 30-series, and RTX 20-series GPUs. It might even be possible to use FSR 3 with Intel Arc A-series, although AMD wouldn't confirm it.

FSR 3 Fluid Motion is a frame-rate doubling technology that generates alternate frames by estimating an intermediate between two frames rendered by the GPU (which is essentially what DLSS 3 does). The company did not detail the underlying technology behind FSR 3 in its pre-briefing, but showed an example of FSR 3 implemented in "Forspoken": where the game puts out 36 FPS at 4K native resolution, it is able to run at 122 FPS with the FSR 3 "performance" preset (upscaling + Fluid Motion + Anti-Lag). At 1440p native with ultra-high RT, "Forspoken" puts out 64 FPS, which rises to 106 FPS at native resolution (no upscaling) with Fluid Motion frames and Anti-Lag enabled. The Maximum Fidelity preset of FSR 3 is essentially AMD's version of DLAA, using the detail-reconstruction and anti-aliasing features of FSR without dropping the render resolution.
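To make the "estimate an intermediate between two rendered frames" idea concrete, here is a deliberately naive sketch. Real frame generation (FSR 3, DLSS 3) warps pixels along motion vectors and optical flow before blending; this toy version (function names are ours, not AMD's) just linearly blends two frames, which would smear moving objects but shows where generated frames slot into the output sequence:

```python
import numpy as np

def naive_intermediate_frame(frame_a: np.ndarray, frame_b: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Blend two rendered frames into one 'generated' frame.

    Toy illustration only: real frame generation motion-compensates
    each pixel before blending instead of averaging in place.
    """
    assert frame_a.shape == frame_b.shape
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.astype(frame_a.dtype)

def double_frame_rate(frames: list) -> list:
    """Insert one generated frame between each rendered pair:
    A, B, C ... becomes A, gen(A,B), B, gen(B,C), C ..."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(naive_intermediate_frame(a, b))
    out.append(frames[-1])
    return out
```

Note that every second frame in the output is synthesized rather than rendered, which is why interpolation-based frame generation adds a frame of latency and is typically paired with a latency reducer such as Anti-Lag or Reflex.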
AMD announced just two debut titles for FSR 3 Fluid Motion: the already-released "Forspoken," and "Immortals of Aveum," which released earlier this week. The company announced that it is working with game developers to bring FSR 3 support to "Avatar: Frontiers of Pandora," "Cyberpunk 2077," "Warhammer 40,000: Space Marine 2," "Frostpunk 2," "The Alters," "Squad," "Starship Troopers: Extermination," "Black Myth: Wukong," "Crimson Desert," and "Like a Dragon: Infinite Wealth." The company is working with nearly all leading game publishers and game engine developers to add FSR 3 support, including Ascendant Studios, Square Enix, Ubisoft, CD Projekt Red, Saber Interactive, Focus Entertainment, 11 bit studios, Epic Games (Unreal Engine), Sega, Bandai Namco, and Reflector.
AMD is also working to make FSR 3 Fluid Motion frames part of the HYPR-RX feature the company is launching soon. This is big, as it would bring Fluid Motion frames to pretty much any DirectX 11 or DirectX 12 game, launching in Q1 2024.

Both "Forspoken" and "Immortals of Aveum" will get FSR 3 patches this Fall.

362 Comments on AMD Announces FidelityFX Super Resolution 3 (FSR 3) Fluid Motion Rivaling DLSS 3, Broad Hardware Support

#251
apoklyps3
In my GTX 970 days I had the daily black screen with the "Display Driver Stopped Responding and Has Recovered" error for years.
Don't talk to me about Nvidia's perfect driver stability. Just check their driver & hotfix release changelogs.
Posted on Reply
#252
Lew Zealand
AusWolfThat's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
If Nvidia could enable RT on cards that have zero dedicated RT hardware (GTX 1000 series), then this shouldn't be a problem, either.
I think the difference is that one is adding a new feature, RT. The other is not and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having it cut to 20-25% on a 1xxx series card was merely 2-3x worse than a Turing card but still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards hence why RT got added to those cards and not FG.
Posted on Reply
#253
oxrufiioxo
Lew ZealandI think the difference is that one is adding a new feature, RT. The other is not and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having it cut to 20-25% on a 1xxx series card was merely 2-3x worse than a Turing card but still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards hence why RT got added to those cards and not FG.
The argument now is that if AMD, despite being nearly a year late, can get it working with asynchronous compute, surely Nvidia could too. We will have to wait until the technology is actually out before critiquing how close it actually comes to the competing technology. So far, according to Digital Foundry's hands-off demonstration, it's promising.
Posted on Reply
#254
AusWolf
oxrufiioxoMy guess is they are in a damned-if-they-do, damned-if-they-don't situation and decided against releasing it on Turing/Ampere.

Let's say they did release it, and it performs like crap and has a ton of artifacts; people will say they gimped it on purpose, so basically the same situation they are in now.

Whenever I buy a GPU, I buy it for the performance it gives me that day. I think most people just buy whatever performs best within their budget. Anyone buying a GPU because the box is red or green, when there are better options at the same price point, is only doing themselves a disservice.
Lew ZealandI think the difference is that one is adding a new feature, RT. The other is not and is instead adding perceived smoothness, FrameGen.

RT sucks the frames out of your card no matter which one you use, so having it cut to 20-25% on a 1xxx series card was merely 2-3x worse than a Turing card but still allowed the user to "see what they were missing." It's a decent advertising gimmick.

Frame Generation exists specifically to make more frames to increase perceived smoothness. If adding FrameGen to Turing and Ampere ends up adding few or no additional frames, then you are getting nothing yet taking a hit on latency in the process.

One (RT) adds something while the other (FG) adds nothing on "unsupported" cards hence why RT got added to those cards and not FG.
Maybe, maybe not. We won't know unless they do decide to roll it out for Turing and Ampere in the future.

Personally, I don't like all this "new tech" Nvidia introduces with every generation. One may see it as something new and exciting, but to me, it's just gimmicks to make people spend money on an upgrade even if they wouldn't have to otherwise. I'm more of an advocate of unified, hardware-agnostic standards, and a level playing field where the only major qualities of a graphics card are its computing power and price. If Nvidia is really a software company, as some may claim, then they should develop software that runs on everything, instead of dedicated hardware whose purpose is to take away people's choice when buying a GPU.
Posted on Reply
#255
JustBenching
AusWolfThat's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
If Nvidia could enable RT on cards that have zero dedicated RT hardware (GTX 1000 series), then this shouldn't be a problem, either.
But nvidia themselves said that yes, it can work on older hardware. It will just look like crap.
Posted on Reply
#256
AusWolf
fevgatosBut nvidia themselves said that yes, it can work on older hardware. It will just look like crap.
I'd rather judge that for myself than believe Nvidia without any evidence presented.
Posted on Reply
#257
dyonoctis
AusWolfMaybe, maybe not. We won't know unless they do decide to roll it out for Turing and Ampere in the future.

Personally, I don't like all this "new tech" Nvidia introduces with every generation. One may see it as something new and exciting, but to me, it's just gimmicks to make people spend money on an upgrade even if they wouldn't have to otherwise. I'm more of an advocate of unified, hardware-agnostic standards, and a level playing field where the only major qualities of a graphics card are its computing power and price. If Nvidia is really a software company as some may claim, then they should develop software that runs on everything instead of hardware dedicated for not giving people a choice when buying a GPU.
You need to imagine how someone who isn't a big tech nerd might react to a lesser implementation of DLSS 3: they will toggle the setting out of curiosity, see that it looks/performs like crap, and base their whole opinion of the tech on that personal experience. They are not going to research how DLSS 3 performs best starting from a specific generation because the hardware used for FG is more powerful. Letting the client try out everything they want is a double-edged sword: if it doesn't work, they will still expect you to fix it somehow, and if you don't fix it, you hurt your brand image and the product will be deemed crap. Nvidia isn't the first brand that would rather not poke that bear. The more mainstream something is meant to be, the less control you'll have over it, because tech support doesn't want to get swarmed by people who don't understand what "not officially supported" means :D

Sometimes the industry needs a push. Vulkan was born from Mantle, and everything TressFX and GameWorks did is now a standard feature in game engines.
Posted on Reply
#258
AusWolf
dyonoctisSometimes the industry needs a push. Vulkan was born from Mantle, and everything TressFX and GameWorks did is now a standard feature in game engines.
Yes, but those were all hardware-agnostic, just like they are now. The industry needs a push, but not a push by company X to buy only company X's cards.

As for the longer part of your post: I guess I see the point. It's just not how I would prefer it. Nvidia could at least release some footage of a Turing GPU running FG like crap.
Posted on Reply
#259
dyonoctis
AusWolfYes, but those were all hardware-agnostic, just like they are now. The industry needs a push, but not a push by company X to buy only company X's cards.
Honestly, the impression I'm getting from the GPU market right now is that there are growing pains around machine learning hardware. For a while Nvidia was alone in having it; Intel followed, but their ML hardware isn't software-compatible with Nvidia's, and AMD doesn't seem to think that dev-accessible AI on a consumer GPU is the future, and it's MIA on the consoles.
-So Nvidia wants to power everything with machine learning.
-Intel wants to do it as well, but they still offer an agnostic solution because they can't make XeSS work with the tensor cores, apparently.
-AMD just wants to use the basic GPU hardware, since that seems to be the only workable agnostic solution at the moment.
-DirectML is supposed to be hardware-agnostic, but no one uses it for upscaling and frame generation? (genuine question)

Upscaling/FG seems to suffer from a difference of philosophy about the means to achieve it, and from the fact that each company seems unable to make use of the other's specialised hardware. So there's something to clean up and standardise there... but I think Microsoft would need to make a DirectX 12_3 ("DirectX Ultimate ML") where every vendor would have a guideline about what the ML hardware needs to be able to do to be compliant.
Posted on Reply
#260
Patriot
Assimilator3dfx killed itself. Stop making up stupid bullshit to justify your lack of actual argument.

Ah yes the good old "I don't actually have an argument so I'm going to bring up everything that I think NVIDIA has ever done wrong". I could do the same for ATI/AMD, but I won't, because I'm smart enough to know that that's not an argument, it's just stupid whining by a butthurt AMD fanboy.
You asked for a history lesson, don't be annoyed you got one.
Posted on Reply
#261
SunWukong
Are you not happy with your 13900KS/4090 setup? The inevitable and excessive crying never ends for team green/blue. :D
Posted on Reply
#262
Assimilator
AusWolfThat's proof that it works better on Ada. It's not proof that it doesn't work on Turing and Ampere.
Of course it isn't. But you keep claiming that it will work on Turing and Ampere, also without any proof. Do you see your hypocrisy?
PatriotYou asked for a history lesson, don't be annoyed you got one.
I didn't ask for a history lesson, and the stupid bullshit you made up in a pathetic attempt to support your not-argument wasn't one. Unless it's a history of your inability to make a coherent argument.
Posted on Reply
#263
Metroid
The way AMD presented it, it seems too good to be true.
Posted on Reply
#264
Patriot
AssimilatorI didn't ask for a history lesson, and the stupid bullshit you made up in a pathetic attempt to support your not-argument wasn't one. Unless it's a history of your inability to make a coherent argument.
Since you forgot,
www.techpowerup.com/forums/threads/amd-announces-fidelityfx-super-resolution-3-fsr-3-fluid-motion-rivaling-dlss-3-broad-hardware-support.312786/#post-5087087

You asked for a history of anti-consumer behavior.
And frankly, I'm not sure why you're in denial of it, both of the historical facts and of having asked for it, lol.
Posted on Reply
#265
AusWolf
AssimilatorOf course it isn't. But you keep claiming that it will work on Turing and Ampere, also without any proof. Do you see your hypocrisy?
No, I keep claiming that we have proof that Turing and Ampere have the necessary hardware, and that we have no proof that it doesn't work. Nvidia is kindly asking us to take whatever they say at face value. If they provided a video comparing how it runs across Turing/Ampere/Ada, so we could see with our own eyes why they chose to only enable it on Ada, it would make a night-and-day difference.

Edit: Here's a little info morsel on the topic:
www.extremetech.com/gaming/340298-redditor-enables-dlss-3-on-turing-gpu-with-simple-config-file
dyonoctisHonestly, the impression I'm getting from the GPU market right now is that there are growing pains around machine learning hardware. For a while Nvidia was alone in having it; Intel followed, but their ML hardware isn't software-compatible with Nvidia's, and AMD doesn't seem to think that dev-accessible AI on a consumer GPU is the future, and it's MIA on the consoles.
-So Nvidia wants to power everything with machine learning.
-Intel wants to do it as well, but they still offer an agnostic solution because they can't make XeSS work with the tensor cores, apparently.
-AMD just wants to use the basic GPU hardware, since that seems to be the only workable agnostic solution at the moment.
-DirectML is supposed to be hardware-agnostic, but no one uses it for upscaling and frame generation? (genuine question)

Upscaling/FG seems to suffer from a difference of philosophy about the means to achieve it, and from the fact that each company seems unable to make use of the other's specialised hardware. So there's something to clean up and standardise there... but I think Microsoft would need to make a DirectX 12_3 ("DirectX Ultimate ML") where every vendor would have a guideline about what the ML hardware needs to be able to do to be compliant.
That makes perfect sense. And I agree - standardisation is needed.
Posted on Reply
#266
Patriot
AusWolfNo, I keep claiming that we have proof that Turing and Ampere have the necessary hardware, and that we have no proof that it doesn't work. Nvidia is kindly asking us to take whatever they say at face value. If they provided a video comparing how it runs across Turing/Ampere/Ada, so we could see with our own eyes why they chose to only enable it on Ada, it would make a night-and-day difference.


That makes perfect sense. And I agree - standardisation is needed.
I would say AMD didn't go the DirectML route as that would cut out all their old cards; RX 7000 is the first with "tensor cores".
gpuopen.com/learn/wmma_on_rdna3/ edit: it looks like DirectML could work, just slowly, on old cards...
Posted on Reply
#267
wolf
Better Than Native
AusWolfwe have proof that Turing and Ampere have the necessary hardware
It's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
Posted on Reply
#268
Patriot
wolfIt's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
That is the problem with a lack of transparency in product details.
AMD has this same issue with WMMA: they have dedicated AI cores, but... that's all we know; they will do things sometime in the future...
In the same way, Nvidia doesn't mention the differences between its consumer tensor core implementation and its workstation tensor cores.
Posted on Reply
#269
AusWolf
wolfIt's a piece of hardware with the same name, but lesser capability by a significant amount, so at least by their own measure, it's not the necessary hardware.
I would still like to see how it handles (or doesn't handle) FG instead of believing Nvidia's claims without a second thought.
Posted on Reply
#270
wolf
Better Than Native
AusWolfI would still like to see how it handles (or doesn't handle) FG instead of believing Nvidia's claims without a second thought.
I too would be very interested to know. Benefit of the doubt goes in all directions, though: I need to see certain claims tested before I'm willing to accept AMD's word for it, especially after they've shown extensive "we can be dodgy and anti-consumer" chops, particularly recently.
Posted on Reply
#271
AusWolf
wolfI too would be very interested to know. Benefit of the doubt goes in all directions, though: I need to see certain claims tested before I'm willing to accept AMD's word for it, especially after they've shown extensive "we can be dodgy and anti-consumer" chops, particularly recently.
Absolutely. Marketing material is never to be believed from any company.
Posted on Reply
#272
Tek-Check
SteevoI admit, I said mean things about the queen and some other royal family and was told by someone on Twitter they were going to phone me into the local police. However living in Montana I really don't give a fuck who they call, and even offered to donate to get the local constable a row boat to make the trip.
Some places aren't speech nazis.
Montana? The voice from the depth of whale's belly.
Posted on Reply
#273
chrcoluk
So AMD just boosted Ampere cards on behalf of Nvidia lol.

Nvidia meanwhile will continue to use software to sell hardware.
apoklyps3In my GTX 970 days I had the daily black screen with the "Display Driver Stopped Responding and Has Recovered" error for years.
Don't talk to me about Nvidia's perfect driver stability. Just check their driver & hotfix release changelogs.
I used to get that; I now routinely increase the driver timeout out of paranoia.
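(For readers wondering what "driver timeout" refers to here: that error comes from Windows' Timeout Detection and Recovery (TDR), and the value people usually raise is the `TdrDelay` registry key under `GraphicsDrivers`. A sketch of the typical edit follows; the 10-second value is just an example, the Windows default is 2 seconds, and changing TDR settings masks hangs rather than fixing them, so tweak at your own risk and reboot afterwards.)

```
Windows Registry Editor Version 5.00

; Example only: raise the GPU preemption timeout from the 2-second
; default to 10 seconds before Windows resets the display driver.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a
```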
Posted on Reply
#274
kapone32
dyonoctisSeems like you were saying that AMD software is more refined because their UI is more modern, when he was saying that you should also take into consideration the software stack beyond the UI.
nvidia made a big bet on software long before ATI/AMD did, and I'm not just talking about gaming. With CUDA/OptiX, nvidia became a must-have for a lot of content creation apps; OpenCL was either deprecated or just avoided entirely by a few major developers. Pixar, for example, even with their historical ties to Apple, eventually developed a set of internal tools that only works with Nvidia's APIs, because it was just the best thing around.

AMD spent a lot of years just sitting around and waiting. Apple figured out fast that OpenCL (which was their creation) wasn't about to become the absolute industry standard and decided to do things the Nvidia way. AMD HIP is just barely starting to get traction, which is good, but they have a decade of Nvidia optimisation to catch up with.

For as long as I can remember, Nvidia has always been more aggressive on the software side of things; AMD looked much more laid-back, with the exception of TressFX, Mantle and TrueAudio.

Now it's hard to talk about driver stability without falling into speculation or anecdotal experience, unless someone can list the number of bugs reported by each side for the past few years.
Nvidia's software was created to optimize its hardware. Look at FreeSync vs G-Sync: one was created with hardware requirements, while the other was a driver-level software innovation that worked without specific hardware. I could also use CrossFire as an example, but you would have to have had Polaris in CrossFire to appreciate that. Of course, all the negative aspects of SLI were applied to it, when there was no issue with Polaris's multi-GPU implementation. Now we have the FSR/DLSS argument, which somehow tries to establish that even though AMD is giving you all of Nvidia's selling points, it is somehow a bad thing and FSR is somehow already garbage. The truth is that if, like me, you have a 7900 XT, you wonder what all the noise about upscaling is about. And when the time comes that games no longer support my generation to the fullest, I will have a software package to fully mitigate that.
Posted on Reply
#275
TheoneandonlyMrK
fevgatosYeah right, cause in the hypothetical scenario that amd did pay them for that they would have admitted it....right right, silly me.
You really are the worst version of Poirot, Clouseau, or Agatha Christie. Imagine: you would bang up more innocent people than anyone, hypothetically. "It's him, he was there, I said so."
oxrufiioxoMy guess is they are in a damned-if-they-do, damned-if-they-don't situation and decided against releasing it on Turing/Ampere.

Let's say they did release it, and it performs like crap and has a ton of artifacts; people will say they gimped it on purpose, so basically the same situation they are in now.

Whenever I buy a GPU, I buy it for the performance it gives me that day. I think most people just buy whatever performs best within their budget. Anyone buying a GPU because the box is red or green, when there are better options at the same price point, is only doing themselves a disservice.
You're lucky you can afford to.

If some hadn't chosen to go with the competition regardless, for their own reasons, you wouldn't be able to afford a GPU now.


Now imagine if Huang had had his way, a monopoly, and THEN this AI boom kicked in.

As I said, you're lucky to have that option.
Posted on Reply