Thursday, December 14th 2023
AMD Releases FSR 3 Source Code on GPUOpen
AMD on Thursday announced the first release of FidelityFX Super Resolution 3 (FSR 3) source code through the company's GPUOpen initiative. The company has set up an FSR 3 source code repository on GitHub that game developers everywhere can take advantage of. This includes the complete source for DirectX 12, and the source of an FSR 3 Unreal Engine 5 plugin. With it, the company also released extensive documentation that helps developers understand the inner workings of FSR 3, so they can better integrate the tech with their games and applications. With this announcement, AMD also unveiled FSR 3 support for even more new and upcoming games, which include "Black Myth: Wukong;" the three latest titles from the "Warhammer" franchise, namely "Darktide," "Space Marine II," and "Realms of Ruin;" plus "Starfield," "Pax Dei," and "Crimson Desert."
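For a sense of what developers are getting, here is a rough sketch of what a DirectX 12 integration could look like, modeled on the pattern of AMD's published FSR 2 API; the FSR 3-specific type and function names below are assumptions and should be checked against the headers in the released repository.

```cpp
// Hedged sketch of an FSR 3 DX12 integration, following the FSR 2 pattern.
// All FSR 3-specific names here are assumptions; consult the GPUOpen repo.
#include <cstdint>
#include "ffx_fsr3.h" // assumed header name from the FSR 3 repo

static FfxFsr3Context g_fsrContext; // assumed type, mirroring FfxFsr2Context

void CreateUpscalerContext(uint32_t renderW, uint32_t renderH,
                           uint32_t displayW, uint32_t displayH)
{
    // As with FSR 2, the context is created once with render and display sizes.
    FfxFsr3ContextDescription desc = {};         // assumed type
    desc.maxRenderSize = { renderW, renderH };   // pre-upscale resolution
    desc.displaySize   = { displayW, displayH }; // final output resolution
    // ...device/backend setup omitted; see the repo's DX12 sample...
    ffxFsr3ContextCreate(&g_fsrContext, &desc);  // assumed function
}

void DispatchUpscale(float jitterX, float jitterY, float frameTimeMs)
{
    // Per frame, the upscaler consumes color, depth, and motion vectors
    // (the same inputs FSR 2 needed); frame generation then interpolates
    // on top of the upscaled output.
    FfxFsr3DispatchUpscaleDescription dispatch = {}; // assumed type
    dispatch.jitterOffset   = { jitterX, jitterY };
    dispatch.frameTimeDelta = frameTimeMs;
    // ...color/depth/motion-vector resources omitted for brevity...
    ffxFsr3ContextDispatchUpscale(&g_fsrContext, &dispatch); // assumed
}
```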
Source: GPUOpen
50 Comments on AMD Releases FSR 3 Source Code on GPUOpen
We'll see once the feature is back up, if it comes back, but I don't think it will stay driver-side. Just my humble opinion.
Regarding the DLSS and FG claims:
AMD's FSR 3 in Avatar is the best implementation and is very impressive, FG included, but on what basis does that mean Nvidia's implementation doesn't actually need the optical flow accelerator for DLSS FG?
You say AMD made FG work; sure, and Avatar is impressive, but frame generation has been in most TVs for years, the so-called "soap opera" effect, yet that's neither AMD's nor Nvidia's implementation (see the toy sketch below for why motion data is the real difference).
Why would AMD managing it differently imply that Nvidia doesn't really need dedicated hardware for THEIR frame-gen tech? You have no evidence of this, only that AMD manages it without an optical flow accelerator.
Nvidia's bet is on AI. AI improves over time, and using a dedicated AI hardware architecture, like in Apple silicon, Nvidia GPUs, or the newest Qualcomm SoCs, is nothing shocking.
Just saying "I don't believe Nvidia really needs the AI hardware because AMD's solution doesn't" is far too simplistic, and it isn't really evidence to me.
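To make the TV comparison concrete, here is a toy, single-channel sketch, illustrative only and not any vendor's actual algorithm, of the difference between a blind blend (roughly what cheap TV interpolation degrades to) and motion-compensated interpolation, which is why game FG wants motion vectors from the engine, or an optical-flow estimate, in the first place.

```cpp
// Toy single-channel sketch contrasting a naive blend with
// motion-compensated interpolation. Illustrative only; this is not
// AMD's or Nvidia's actual frame-generation algorithm.
#include <algorithm>
#include <vector>

struct Frame {
    std::vector<float> pixels; // row-major, w * h values
    int w = 0, h = 0;
};

// Naive blend at interpolation factor t in [0, 1]: ghosts on anything
// that moves, which is part of the TV "soap opera" look.
float blendPixel(float prev, float next, float t) {
    return (1.0f - t) * prev + t * next;
}

// Motion-compensated variant: sample the previous frame along a per-pixel
// motion vector before blending, so moving objects land where they should.
float mcPixel(const Frame& prev, const Frame& next,
              int x, int y, float mvX, float mvY, float t) {
    const int px = std::clamp(x - static_cast<int>(mvX * (1.0f - t)), 0, prev.w - 1);
    const int py = std::clamp(y - static_cast<int>(mvY * (1.0f - t)), 0, prev.h - 1);
    return blendPixel(prev.pixels[py * prev.w + px],
                      next.pixels[y * next.w + x], t);
}
```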
That doesn't change the fact that if they wanted to, they could have made it work with older cards. The decision not to was purely a business one: to sell more 40-series cards, because without FG, buying a 4070 or anything other than a 4090 makes even less sense.
Nvidia's whole marketing was based on giving the performance numbers of those cards with DLSS 3 FG enabled versus a 3090 without it. If the 3090 were able to do it, those graphs would not look that impressive.
And "proper hardware"? Aren't DLSS, FSR, and XeSS software layers that tell the hardware what to do?
I'd rather just have a card that can run the monitor's default refresh rate & resolution... and I'd pay more for it ;)
I don't know why it's hard for you to accept that Nvidia simply lied on this one: they claimed the hardware acceleration was necessary, and it wasn't. FSR FG works on Nvidia GPUs as well, in case you didn't know, and most people agree it's basically the same. There is plenty of evidence; Intel also has an "AI upscaler" that works on everything. Strange how only Nvidia's implementation needs their own unique hardware to work, isn't it? lol
One thing I definitely don't do is look at the Fortune Global, Forbes, or Bloomberg indexes, etc., to determine where to inject my hard-earned cash :p
I do like Apple though... healthy fruit
This is especially true because it's evident that, like upscaling, each camp's unique approach differs in R&D, execution, and quality (and perhaps more). To just say these two companies "did the same thing, so one is a liar" is far too vague and accounts for none of the many differences and nuances between them from conception to completion. Also spot on: not nearly enough evidence to draw that as a definitive conclusion, given the above factors especially. Take XeSS, the example thrown out: sure, it has a compatibility code path that works across vendors... one that delivers unequivocally worse results in both performance and image quality. Side note, but it's impressive that XeSS (v1.2) has already been able to produce better IQ than FSR 2.2.x at equal input resolutions, and subjectively it is still often preferable when performance-normalised.
I'm very keen to see AMD's FG continue to grow and be implemented, and to test it out for myself. I have no real issue with the cycle of "innovative but locked" followed by "copied (hackjob) but open" that we've been getting lately. I'd certainly prefer that over no innovation.
github.com/Nukem9/dlssg-to-fsr3
Looking at the difference in visual quality, RTX 4000-series owners still have the advantage with FG, just like RTX owners in general have access to the superior DLSS.
Just to show you that it's the open solution breaking the artificial barriers set by Nvidia.
And good on them, good for everyone! AMD have done well to get it working to this level so quickly.
In Avatar, where it's implemented officially, it looks very good. Even when I ran it at native 4K with unobtainium settings, so the base framerate was like 20-30, the image itself showed almost no artifacts apart from the obvious latency.
Funny, considering Nvidia's whitepapers say you need AI to interpolate shadows and particles :roll:
Even funnier: if their OFA is so good without motion vectors, then I guess they'll have a driver-level option for it as well, like AMD does with the driver preview now and the official driver soon.
It has little to do with perception; if they wanted "perception," they would do marketing to brainwash people.
Also, PM me when they actually start caring about their "open" Linux driver.
AMD actually has a driver that interfaces with open components well. NVIDIA does not and has actively resisted this. End of story in my eyes.
You need to realize Nvidia isn't doing you any favors; their profit is going into more marketing, so that you see their pamphlets everywhere, and if you see them everywhere, it must be true.
I don't recall anybody official or otherwise saying that on behalf of AMD, nor anybody on Nvidia's side officially stating the opposite for the 4090 for that matter.
Nobody really knew until, first, the launch announcements and, second, the NDAs were lifted.
I mean, I get it... the 4090 is the best card bar none, at the £1500 level for any old "who, again?" brand, or closer to and over £2000 for the known ones. But against well-known-brand 7900 XTXs dropping as low as £850 or so, still undercutting 4080s by some £500+ at that point... do they really need to be 1:1 comparisons to be good for x or y? Tbh they've been good for years at this point, not merely months.
That being said, that's not really that awful, just a different way of doing things. Fact is, Nvidia's software products are high quality, and that affects my purchase choice a lot more.
AMD is good in open-source land, but honestly, the Windows driver still has a few issues I'd like dealt with. That's, IMO, the worse issue.
I'm not familiar with all of them, but I do use two of those open-source frameworks for work: Apache Spark (through RAPIDS) for distributed processing, and PyTorch for machine learning. Both have a lot of Nvidia contributions, and they work wonderfully well with Nvidia GPUs.
We don't need to use proprietary Nvidia tools to perform ML or data ETL; they just really did their homework for AI/ML/data scientists on many open-source projects.
For consumers, I think AMD has the upper hand in open source, of course, but right now DLSS 3.5 is what I prefer.
Frankly, I don't care about closed versus open upscalers, because I feel that many studios want a little paycheck from either AMD or Nvidia to implement them anyway, and aren't really waiting for things to go open source, at least in AAA (GPU sponsorship is omnipresent in AA/AAA PC gaming).
I'd even tell you that the DLSS SDK you use to implement DLSS in your game is open source. The algorithm itself is not, but anyone can integrate it using an open-source, transparent SDK in their code; the rest is in the driver and hardware, but at least what you ship in your executable is known:
github.com/NVIDIA/DLSS
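For what it's worth, the integration flow that SDK exposes looks roughly like the sketch below; the NGX names are quoted from memory of the public headers and should be verified against the repo, and error handling is omitted.

```cpp
// Hedged sketch of the NGX-style DLSS integration flow; verify the names
// against github.com/NVIDIA/DLSS. The SDK side is the visible source;
// the trained network itself is evaluated inside the driver.
#include <d3d12.h>
#include "nvsdk_ngx.h"         // core NGX API
#include "nvsdk_ngx_helpers.h" // NGX_D3D12_* convenience macros

static NVSDK_NGX_Parameter* g_params = nullptr;
static NVSDK_NGX_Handle*    g_dlss   = nullptr;

void InitDlss(ID3D12Device* device, ID3D12GraphicsCommandList* cmdList)
{
    // One-time init, then query the capability/parameter block.
    NVSDK_NGX_D3D12_Init(0 /* app id */, L".", device);
    NVSDK_NGX_D3D12_GetCapabilityParameters(&g_params);

    // Create the DLSS feature for a given render -> output resolution pair.
    NVSDK_NGX_DLSS_Create_Params create = {};
    create.Feature.InWidth        = 1920; // render resolution
    create.Feature.InHeight       = 1080;
    create.Feature.InTargetWidth  = 3840; // output resolution
    create.Feature.InTargetHeight = 2160;
    NGX_D3D12_CREATE_DLSS_EXT(cmdList, 1, 1, &g_dlss, g_params, &create);
}
```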
I care about quality and the implementation rate in games. DLSS is in far more games; overall it's a really great upscaler, relatively speaking. It doesn't introduce vulnerabilities or anti-cheat false positives, and it's continuously improved, with stable features but also experimental ones like ray reconstruction.
FSR 3 I can use on Nvidia, while DLSS can't be used on AMD. Still, despite what some people say here, Nvidia's upscalers are HARDWARE based; they made that choice. It's AI, a neural network that improves over time, at a pace a deterministic algorithm like FSR may not be able to match at some point.
Having frame generation locked behind a hardware optical flow accelerator sucks for anything below the RTX 4000 series, absolutely; they could at least have worked on a software-based fallback. But saying, as in the comments above, that it's all a lie and the hardware architecture is useless is naive and baseless. The whole point of this architecture is to be trained over time by Nvidia to continuously improve the neural network behind DLSS and FG, something a software solution cannot do as well.
Although Nvidia may have the advantage, dedicated tensor cores come at a cost, one which may be less rewarding in the long run if deployed hardware features fall short of compatibility with later DLSS iterations. Full respect to Nvidia though; it takes more sweat for hardware-based solutions to pay off, and so far they're championing that race. I'm usually more concerned with native price-to-performance, hence I don't really get into all the upscaler skirmishes, but it's great to see innovations from Nvidia/AMD (...Intel) opening new doors on all levels with various price points, features, etc.... something for everyone... that's progress!
Not sure how many hours I've slept last night (maybe a couple of months)... are we already on 3.5? Sizeable improvement?
Link: RTX 3080 FSR3 FrameGen A Plague Tale