Monday, March 4th 2024
AMD Working on an AI-powered FSR Upscaling Algorithm
AMD CTO Mark Papermaster confirmed that AMD is working on a new upscaling technology that leverages AI. A key technological difference between AMD FSR and its competitors, NVIDIA DLSS and Intel XeSS, has been AMD's remarkable restraint in implementing AI in any part of the upscaler's pipeline. Unlike FSR, both DLSS and XeSS utilize AI deep neural networks (DNNs) to overcome temporal artifacts in their upscalers. The Radeon RX 7000 series GPUs and Ryzen 7000 CPUs are AMD's first products with accelerators or ISA extensions that speed up AI workloads; and with the RX 7000 series capturing a sizable install base, AMD is finally turning to AI for the next generation of its FSR upscaling tech. Papermaster outlined his company's plans for AI in upscaling technologies in an interview with No Priors.
To a question from No Priors on exploring AI for upscaling, Papermaster responded: "2024 is a giant year for us because we spent so many years in our hardware and software capabilities for AI. We have just completed AI-enabling our entire portfolio, so you know cloud, edge, PCs, and our embedded devices, and gaming devices. We are enabling gaming devices to upscale using AI and 2024 is a really huge deployment year."

In short, Papermaster walked the interviewer through the two-step, hardware-first process by which AMD is getting into AI. AMD spent 2022-23 introducing ISA-level AI enablement in its Ryzen 7000 desktop processors and EPYC "Genoa" server processors. For notebooks, it introduced the Ryzen 7040 and 8040 series mobile processors with NPUs (dedicated AI acceleration hardware), and gave its Radeon RX 7000 series RDNA 3 GPUs AI accelerators. Around this time, AMD also introduced the Ryzen AI stack for Windows PC applications that leverage AI for certain client productivity experiences. 2024 will see the company implement AI across its technologies, and Papermaster couldn't have been clearer that a new-generation FSR leveraging AI is in the works.
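Neither AMD nor NVIDIA publishes the internals of these upscalers, but the general shape of an AI temporal upscaler is well understood: a small network ingests the current low-resolution frame together with a motion-warped history of its previous output and predicts the higher-resolution result. Below is a purely illustrative sketch in PyTorch; all layer sizes and names are invented and real implementations also consume motion vectors and depth, and run at far lower precision.

```python
# Illustrative only: FSR/DLSS internals are not public. This toy network
# shows the data flow of an AI temporal upscaler: current low-res frame
# plus motion-warped history in, high-res frame out.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTemporalUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        # 3 channels for the current frame + 3 for the warped history frame
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )

    def forward(self, low_res: torch.Tensor, warped_history: torch.Tensor) -> torch.Tensor:
        x = torch.cat([low_res, warped_history], dim=1)
        x = self.body(x)
        # Pixel shuffle rearranges the extra channels into spatial resolution
        return F.pixel_shuffle(x, self.scale)

model = ToyTemporalUpscaler(scale=2)
frame = torch.rand(1, 3, 540, 960)    # current 960x540 render
history = torch.rand(1, 3, 540, 960)  # previous output, motion-warped
print(model(frame, history).shape)    # torch.Size([1, 3, 1080, 1920])
```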
Sources:
No Priors (YouTube), VideoCardz
70 Comments on AMD Working on an AI-powered FSR Upscaling Algorithm
How about working on cards that can run native res at decent frame rates for a reasonable price?
Personally, I have no interest in the image quality and input lag trade-offs from any kind of upscaling.
No idea how it's gained traction. Massive step backwards.
I wonder if Microsoft's DirectSR has anything to do with this...
We are literally at all time historical lows of AMD discrete GPU market share. By a huge margin.
That is not AI. But whatever. In the land of the blind...
Perhaps the magic of AI is that if you print it enough times in your PR, it magically becomes a thing because it's been repeated so often. No man, the RX 7000 is capturing all that market share for AMD! :roll: It's not AI that is making DLSS better. In the end you just use a DLL with an algorithm. NVIDIA in the early DLSS days tried to sell us the idea that whole farms were calculating DLSS frames for every single game so that it could work, but you don't need to be a rocket scientist to know that's complete and utter nonsense, especially when both Intel and AMD got to the same point without any of that.
As for early DLSS, it used a flawed approach, training on specific games, and it just didn't work.
On the GPU side, yes, you are just executing some algorithm, just one whose execution can be accelerated by instructions optimized to run it.
On the side of creating the model, though, it's a very different process between the two.
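To make that asymmetry concrete, here's a rough sketch (PyTorch, every name invented) of the train-once / run-many split: model creation is an expensive optimization loop run on the vendor's hardware, while what ships is just fixed weights executed as a plain function.

```python
# Hedged sketch of the train-once / run-many split (all names invented).
import torch
import torch.nn as nn

net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for an upscaling network

# --- Model creation (done offline, on the vendor's hardware) ---
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for low_res, target in [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))]:
    loss = nn.functional.mse_loss(net(low_res), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
torch.save(net.state_dict(), "weights.pt")  # this file is what ships

# --- Deployment (what your GPU executes every frame) ---
net.load_state_dict(torch.load("weights.pt"))
net.eval()
with torch.no_grad():                    # no learning here, just fixed math,
    out = net(torch.rand(1, 3, 64, 64))  # accelerated by AI-oriented instructions
```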
1 - It's public knowledge that AMD's been researching this for a while, as they first applied for patents on AI-assisted upscaling as far back as 2020.
worldwide.espacenet.com/patent/search/family/075908263/publication/EP4062360A4?q=pn%3DEP4062360A4%3F
2 - The RX 7000 GPUs aren't the first AMD GPUs with ISA meant to accelerate AI tasks; all RX 6000 GPUs support DP4a, which is clearly for that end (see the sketch after this list).
3 - I doubt the Phoenix APUs (Ryzen 7x40/8x40) will be using their NPUs for this. There's no word on data latency between the NPU and system RAM or any special interconnect to the GPU, and they seem to have been designed for low-power applications regardless.
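For reference on point 2: DP4a multiplies four packed 8-bit integers pairwise, sums the products, and accumulates the result into a 32-bit register, all in a single instruction, which is exactly the inner loop of quantized neural-network inference. A minimal emulation of its semantics (NumPy, illustrative only):

```python
# Emulation of DP4a semantics: dot(a, b) over four int8 lanes,
# accumulated into an int32 value c. Hardware does this in one instruction.
import numpy as np

def dp4a(a: np.ndarray, b: np.ndarray, c: np.int32) -> np.int32:
    assert a.dtype == np.int8 and b.dtype == np.int8 and a.shape == (4,)
    return np.int32(c + np.dot(a.astype(np.int32), b.astype(np.int32)))

acc = dp4a(np.array([1, -2, 3, 4], dtype=np.int8),
           np.array([5, 6, -7, 8], dtype=np.int8),
           np.int32(10))
print(acc)  # 10 + (5 - 12 - 21 + 32) = 14
```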
As for the AI, even AMD is well aware that it produces better results. They explain why it was not used for either FSR 1 or FSR 2 here, at the 6:40 mark.
And I believe it was not that farms were calculating frames, but that they were training the algorithm on their supercomputer by feeding it frames from games.
Overall it was bad because you can only go so far without temporal data, and that's how DLSS 2 was born. Still, their algorithm needs to learn from something.
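On the temporal side, the core idea (stripped of jitter, rejection heuristics, and any learned components) is just accumulating detail from the motion-warped previous output. A toy sketch of that accumulation, with everything beyond the blend itself omitted:

```python
# Toy temporal accumulation (NumPy): blend the motion-warped previous
# output with the current frame. Real upscalers add sub-pixel jitter and
# per-pixel history rejection (or, with AI, a learned network) on top.
import numpy as np

def accumulate(current: np.ndarray, warped_prev: np.ndarray,
               alpha: float = 0.1) -> np.ndarray:
    # Exponential moving average: a low alpha keeps more history, which
    # is where the extra detail over any single frame comes from.
    return alpha * current + (1.0 - alpha) * warped_prev

frame = np.random.rand(1080, 1920, 3).astype(np.float32)
history = np.random.rand(1080, 1920, 3).astype(np.float32)
output = accumulate(frame, history)  # becomes the next frame's history
```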
Good that they are now in a position to go that way.
That's probably just some VRAM allocation that both the upscaling and the framegen tasks will use.
Just give me more
If DLSS weren't so widely agreed upon by reviewers as a "valuable feature", then this stuff wouldn't be implemented or important.
When consumers read reviews, they want things that add value to their purchases. So, the consumers buying into all these techs are also at fault for the support.
From what I've understood about DLSS 1.0, each game had to be trained on NVIDIA's servers first, and that data was then used to help your GPU upscale those specific games locally. Since DLSS 2.0, they apparently figured out a more efficient model that doesn't need to be trained on a specific game to work.
It's obvious by now that you have a big issue with the definition of "commercial A.I.", but if for you a real A.I. must be a full 1:1 match with biological intelligence, then it doesn't exist, and it might never exist. Just the process of making a machine "see" is tedious; a machine cannot "see" without algorithms and lines of code. People have been trying to make machines recognize human faces for over 20 years, but they're still not even close to being as accurate as human perception of faces. (Let's not even talk about trying to recognize a caricature of a real person.) They have to use tricks, teach the machine how it's supposed to recognize a face... heck, they even have to tell the machine how it's supposed to learn. They basically look at how the brain works and see if it's possible to emulate it on something that is fundamentally different from a brain. (Spoilers: recent findings suggest that a new type of computer/hardware needs to be developed to handle some aspects of human learning behavior.)
www.ox.ac.uk/news/2024-01-03-study-shows-way-brain-learns-different-way-artificial-intelligence-systems-learn
Movies sold us a fantasy of A.I., but the reality of commercial A.I. is just the illusion of intelligence: being able to automate something that previously absolutely required human input. Like how lip-syncing for animation can be automated: they trained a machine on data relevant to that task, and eventually figured out an efficient model that's able to do lip-syncing on a simple computer. But creating said model requires an insane amount of computing power, from what I've understood. What our computers run is the digested version; the datacenter is where everything is figured out and improved.
I don't really see how a machine is ever supposed to do anything without having to rely on a man-made algorithm. At that level, you are not making a computer anymore, but a whole new life form that isn't biological. :D All the papers that I've seen so far show that people are aware that current A.I./ML is not even close to a biological brain, but they also show that scientists haven't fully figured out the human brain yet either, and it's unsure if they ever will. I'm no computer scientist, but I like to read about this; it's a rabbit hole that's so deep, and mingled with neuroscience. It's assumed that developing neuromorphic computing might help us understand the human brain better and bring advancement in both computing and neuroscience, but it's also unsure if going for a 1:1 is really the best thing to do.
www.nature.com/collections/jaidjgeceb
Please. If that's not enough to supply your next computer, then you must use direct nuclear power.
It will itself need more raw performance, at least at half the electricity cost.
Even the 4090 still struggles in some games at 4k.
More power is always the answer!