Monday, March 4th 2024

AMD Working on an AI-powered FSR Upscaling Algorithm

AMD CTO Mark Papermaster confirmed that AMD is working on a new upscaling technology that leverages AI. A key technological difference between AMD FSR and the competing solutions, NVIDIA DLSS and Intel XeSS, has been AMD's remarkable restraint in implementing AI anywhere in the upscaler's pipeline. Unlike FSR, both DLSS and XeSS use AI DNNs to overcome temporal artifacts in their upscalers. AMD's Radeon RX 7000 series GPUs and Ryzen 7000 CPUs are the company's first with accelerators or ISA extensions that speed up AI workloads, and with the RX 7000 series capturing a sizable install base, AMD is finally turning to AI for the next generation of its FSR upscaling tech. Papermaster highlighted his company's plans for AI in upscaling technologies in an interview with No Priors.

To a question from No Priors on exploring AI for upscaling, Papermaster responded: "2024 is a giant year for us because we spent so many years in our hardware and software capabilities for AI. We have just completed AI-enabling our entire portfolio, so you know cloud, edge, PCs, and our embedded devices, and gaming devices. We are enabling gaming devices to upscale using AI and 2024 is a really huge deployment year." In short, Papermaster walked the interviewer through the two-step, hardware-first process by which AMD is getting into AI.
AMD spent 2022-23 introducing ISA-level AI enablement in its Ryzen 7000 desktop processors and EPYC "Genoa" server processors. For notebooks, it introduced the Ryzen 7040 and 8040 series mobile processors with NPUs for accelerated AI, and gave its Radeon RX 7000 series RDNA 3 GPUs AI accelerators. Around this time, AMD also introduced the Ryzen AI stack for Windows PC applications that leverage AI for certain client productivity experiences. 2024 will see the company implement AI into its technologies, and Papermaster couldn't be more clear that a new-generation FSR that leverages AI is in the works.
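To make the technical distinction concrete, here is a minimal sketch in Python/NumPy of the history-blend step at the heart of a temporal upscaler. All function names and constants are illustrative assumptions, not AMD's or NVIDIA's actual code: a hand-tuned upscaler derives the blend weight from fixed heuristics, while an AI-based one predicts it with weights learned from training data.

```python
import numpy as np

def heuristic_blend_weight(curr, warped_hist):
    # Hand-tuned approach (FSR 2-style idea, illustrative only):
    # distrust history where it disagrees with the current frame.
    diff = np.abs(curr - warped_hist)
    return np.clip(0.9 - 4.0 * diff, 0.1, 0.9)

def learned_blend_weight(curr, warped_hist, weights, bias):
    # AI approach (DLSS/XeSS-style idea, illustrative only): a trained
    # model maps per-pixel features to a blend weight. Real upscalers
    # use deep convolutional networks; this linear layer is a stand-in.
    feats = np.stack([curr, warped_hist, np.abs(curr - warped_hist)], axis=-1)
    return 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))  # sigmoid

def resolve(curr, warped_hist, weight):
    # Both approaches then resolve the output the same way: blend the
    # new low-sample frame with the accumulated history.
    return weight * warped_hist + (1.0 - weight) * curr

rng = np.random.default_rng(0)
curr = rng.random((4, 4))              # current frame (toy-sized luma)
hist = rng.random((4, 4))              # history reprojected to this frame
out_hand = resolve(curr, hist, heuristic_blend_weight(curr, hist))
out_ai = resolve(curr, hist, learned_blend_weight(curr, hist, rng.normal(size=3), 0.0))
```

The difference Papermaster is pointing at is where that weight comes from: heuristics shipped in the shader versus a model trained offline on large sets of rendered frames.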
Sources: No Priors (YouTube), VideoCardz

70 Comments on AMD Working on an AI-powered FSR Upscaling Algorithm

#1
qlum
Good to see AMD going down that road; the approach of FSR2 has clear limitations. It would be nice if this tech had backends compatible with both NVIDIA and Intel, to actually have something universal. Using the AI cores on the CPU is also interesting.
#2
mb194dc
Has "AI" been mentioned in the op enough?

How about working on cards that can use native res at decent frame rates for a reasonable price,?

Personally no interest in the image quality and input lag trade offs from any kind of upscaling.

No idea how it's got traction. Massive step backwards.
#3
wNotyarD
As much as I'm against upscalers from the get-go, at least it is good to see AMD trying to catch up.
I wonder if Microsoft's DirectSR has anything to do with this...
#4
Firedrops
with the RX 7000 series capturing a sizable install base
:laugh:

We are literally at all-time historical lows of AMD discrete GPU market share. By a huge margin.
#5
Vayra86
Maybe I missed something, but nothing in that diagram shows even the vaguest presence of AI. It looks more like they've copied an optical flow accelerator from Green.

That is not AI. But whatever. In the land of the blind...

Perhaps the magic of AI is that if you print it enough times in your PR, it magically becomes a thing because it's been called so often.
Firedrops: :laugh:

We are literally at all-time historical lows of AMD discrete GPU market share. By a huge margin.
No man, RX 7000 is capturing all that market share from AMD! :roll:
qlum: Good to see AMD going down that road; the approach of FSR2 has clear limitations. It would be nice if this tech had backends compatible with both NVIDIA and Intel, to actually have something universal. Using the AI cores on the CPU is also interesting.
It's not AI that is making DLSS better. In the end you just use a DLL with an algorithm. Nvidia in the early DLSS days tried to sell us the idea that whole farms are calculating DLSS frames for every single game so that it can work, but you don't need to be a rocket scientist to know that's complete and utter nonsense, especially when both Intel and AMD got to the same point without any of that.
#6
Redwoodz
Firedrops: :laugh:

We are literally at all-time historical lows of AMD discrete GPU market share. By a huge margin.
At an all-time high in datacenter market share. By a huge margin. That's why they worked on the hardware first. AI in gaming is there now because the competitors use it as a marketing tool to claim their product is better. The whole community has clearly stated they buy Nvidia over AMD because they make fake frames better.
#7
qlum
Vayra86: It's not AI that is making DLSS better. In the end you just use a DLL with an algorithm. Nvidia in the early DLSS days tried to sell us the idea that whole farms are calculating DLSS frames for every single game so that it can work, but you don't need to be a rocket scientist to know that's complete and utter nonsense, especially when both Intel and AMD got to the same point without any of that.
I am not saying AI is some kind of magic; both Intel and Nvidia use an approach with training data to arrive at a general model, which seems to provide better results than the mostly hand-coded model AMD uses.
As for early DLSS, it used a flawed approach of training on specific games, and it just didn't work.

On the GPU side, yes, you are just executing some algorithm, but one whose execution can be accelerated by instructions optimized for it.
On the model-creation side, it's a very different process between the two.
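A minimal sketch of that point, with all names hypothetical: at inference time a trained model really is "just an algorithm", fixed weights applied as matrix math, and that is exactly the shape of work AI-oriented instructions are built to speed up.

```python
import numpy as np

# The weights below would come from offline training on a render farm;
# shipping the upscaler means shipping these numbers plus the math.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def infer(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # matmul + ReLU: exactly the kind of
    return h @ W2 + b2                # op that AI instructions accelerate

y = infer(rng.normal(size=(1, 8)))    # runs fine on any CPU, just slower
```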
#8
wNotyarD
Vayra86: Maybe I missed something, but nothing in that diagram shows even the vaguest presence of AI. It looks more like they've copied an optical flow accelerator from Green.
Isn't that diagram just what FSR3 is already doing?
#9
AnarchoPrimitiv
Firedrops: :laugh:

We are literally at all-time historical lows of AMD discrete GPU market share. By a huge margin.
And this latest generation of video cards had extremely poor price-to-performance... gee, I wonder if those two things are related?
#10
ToTTenTranz
3 things:

1 - It's public knowledge that AMD's been researching this for a while, as they first applied for patents on AI-assisted upscaling as far back as 2020.
worldwide.espacenet.com/patent/search/family/075908263/publication/EP4062360A4?q=pn%3DEP4062360A4%3F

2 - The RX 7000 GPUs aren't the first AMD GPUs with an ISA meant to accelerate AI tasks, as all RX 6000 GPUs support DP4a, which is clearly aimed at that end (a sketch of its semantics follows below).

3 - I doubt the Phoenix APUs (Ryzen 7x40/8x40) will be using their NPUs for this. There's no word on data latency between them and system RAM or any special interconnect to the GPU, and they seem to have been designed for low-power applications regardless.
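For reference, a sketch of what a DP4a instruction computes, with the semantics emulated in Python rather than any vendor's intrinsic: a dot product of four packed signed 8-bit values accumulated into a 32-bit integer, the core primitive of int8 inference.

```python
import numpy as np

def dp4a(a4, b4, acc):
    # a4, b4: four signed 8-bit lanes, as the GPU would pack into one
    # 32-bit register; acc: the 32-bit accumulator.
    a = np.asarray(a4, dtype=np.int8).astype(np.int32)
    b = np.asarray(b4, dtype=np.int8).astype(np.int32)
    return int(acc + np.dot(a, b))    # a single instruction in hardware

print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], 10))  # 10 + (5 - 12 - 21 + 32) = 14
```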
#11
Redwoodz
ToTTenTranz: 3 things:

1 - It's public knowledge that AMD's been researching this for a while, as they first applied for patents on AI-assisted upscaling as far back as 2020.
worldwide.espacenet.com/patent/search/family/075908263/publication/EP4062360A4?q=pn%3DEP4062360A4%3F

2 - The RX 7000 GPUs aren't the first AMD GPUs with an ISA meant to accelerate AI tasks, as all RX 6000 GPUs support DP4a, which is clearly aimed at that end.

3 - I doubt the Phoenix APUs (Ryzen 7x40/8x40) will be using their NPUs for this. There's no word on data latency between them and system RAM or any special interconnect to the GPU, and they seem to have been designed for low-power applications regardless.
Isn't the dotted grey line marked "FSR3 Internal Resource Sharing Path" evidence of a special interconnect?
#12
jesdals
mb194dc: Has "AI" been mentioned in the OP enough?

How about working on cards that can run native res at decent frame rates for a reasonable price?

Personally, I have no interest in the image quality and input lag trade-offs from any kind of upscaling.

No idea how it's got traction. Massive step backwards.
I second that. No need for FSR - just give us more power.
#13
remekra
Vayra86: Maybe I missed something, but nothing in that diagram shows even the vaguest presence of AI. It looks more like they've copied an optical flow accelerator from Green.

That is not AI. But whatever. In the land of the blind...

Perhaps the magic of AI is that if you print it enough times in your PR, it magically becomes a thing because it's been called so often.


No man, RX 7000 is capturing all that market share from AMD! :roll:


It's not AI that is making DLSS better. In the end you just use a DLL with an algorithm. Nvidia in the early DLSS days tried to sell us the idea that whole farms are calculating DLSS frames for every single game so that it can work, but you don't need to be a rocket scientist to know that's complete and utter nonsense, especially when both Intel and AMD got to the same point without any of that.
That diagram is just taken from FSR3, from GPUOpen. And optical flow is not something created by Nvidia; they have an accelerator for it, but that word is not mentioned in the slide.

As for the AI, even AMD is well aware that it produces better results. They explain why it was not used for either FSR1 or 2 here, at the 6:40 mark.
And I believe it was not that farms were calculating frames, but that they were training the algorithm on their supercomputer by feeding it frames from games.
Overall it was bad because you can only go so far without temporal data, and that's how DLSS2 was born. Still, their algorithm needs to learn from something.


Good that they are now in a position to go that way.
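A hedged sketch of what "temporal data" buys, not any vendor's code: the previous output is reprojected with the game's motion vectors so the upscaler can accumulate detail across frames, something a purely spatial approach never had.

```python
import numpy as np

def reproject(prev_out, motion):
    # Gather each pixel from where it was last frame, using per-pixel
    # motion vectors (dx, dy) supplied by the game engine. Illustrative
    # nearest-neighbor gather; real upscalers filter and validate it.
    h, w = prev_out.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(xs - motion[..., 0], 0, w - 1)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1)
    return prev_out[src_y, src_x]

rng = np.random.default_rng(2)
prev = rng.random((6, 6))                # last frame's output
motion = np.ones((6, 6, 2), dtype=int)   # whole scene moved +1 px in x and y
hist = reproject(prev, motion)           # history aligned to the new frame
```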
#14
Vya Domus
mb194dc: How about working on cards that can run native res at decent frame rates for a reasonable price?
You need to take this to Nvidia, they started this.
#15
ToTTenTranz
Redwoodz: Isn't the dotted grey line marked "FSR3 Internal Resource Sharing Path" evidence of a special interconnect?
The OP's diagram only refers to regular FSR3 and AFAICT has nothing to do with the news.

That's probably just some VRAM allocation that both the upscaling and the framegen tasks will use.
#16
Makaveli
jesdals: I second that. No need for FSR - just give us more power.
Same; I run native with no upscaling or FG, I don't care for it at all.

Just give me more

#17
DemonicRyzen666
Vya Domus: You need to take this to Nvidia, they started this.
Except it's more than just Nvidia doing this.
If DLSS wasn't so widely agreed upon by reviewers as a "valuable feature", then this stuff wouldn't be implemented or important.
When consumers read reviews, they want things that add value to their purchases. So the consumers buying into all these techs are also at fault for the support.
#18
Vya Domus
DemonicRyzen666: So the consumers buying into all these techs are also at fault for the support.
They didn't; it was more or less forced upon them. Soon most games will start to integrate upscaling in a way that users won't be able to turn off, like in Alan Wake.
#19
Frick
Fishfaced Nincompoop
jesdals: I second that. No need for FSR - just give us more power.
Considering you need an RTX 4090 to push 60 FPS @ 4K maxed out in all games (not even considering RT!), "more power" is not the answer.
Vya Domus: You need to take this to Nvidia, they started this.
Oh yes let us just undo this industry-wide shift.
#20
Noyand
Vayra86: Maybe I missed something, but nothing in that diagram shows even the vaguest presence of AI. It looks more like they've copied an optical flow accelerator from Green.

That is not AI. But whatever. In the land of the blind...

Perhaps the magic of AI is that if you print it enough times in your PR, it magically becomes a thing because it's been called so often.


It's not AI that is making DLSS better. In the end you just use a DLL with an algorithm. Nvidia in the early DLSS days tried to sell us the idea that whole farms are calculating DLSS frames for every single game so that it can work, but you don't need to be a rocket scientist to know that's complete and utter nonsense, especially when both Intel and AMD got to the same point without any of that.
That's just the old diagram used to explain how FSR 3 works; it's not really related to their new ML upscaler.

From what I've understood about DLSS 1.0, each game had to be trained on Nvidia's servers first, and then that data was used to help your GPU upscale those specific games locally. Since DLSS 2.0, they apparently figured out a more efficient model that doesn't need to be trained on a specific game to work.
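A toy sketch of that training-side difference (nothing here is Nvidia's actual pipeline, and TinyModel is a made-up stand-in): a DLSS 1-style recipe fits one model per game, while a DLSS 2-style recipe trains a single model on frame pairs from many games, which is why it generalizes to titles it never saw.

```python
import numpy as np

class TinyModel:
    # Stand-in for an upscaling network: learns a single "sharpen"
    # parameter by gradient steps on (low-res, high-res) pairs.
    def __init__(self):
        self.k = 0.0
    def fit_step(self, low, high, lr=0.1):
        pred = low * (1.0 + self.k)
        self.k -= lr * float(np.mean((pred - high) * low))  # MSE gradient step

def train(model, pairs):
    for low, high in pairs:
        model.fit_step(low, high)
    return model

rng = np.random.default_rng(3)
frames = {g: [(rng.random(16), rng.random(16)) for _ in range(8)]
          for g in ("game_a", "game_b")}

# DLSS 1-style: one model per game, trained only on that game's frames.
per_game = {g: train(TinyModel(), frames[g]) for g in frames}

# DLSS 2-style: one general model trained across all games' frames,
# reusable on titles it has never seen.
general = train(TinyModel(), [p for g in frames for p in frames[g]])
```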

It's obvious by now that you have a big issue with the definition of "commercial A.I.", but if for you a real A.I. must be a full 1:1 with biological intelligence, then it doesn't exist, and it might never exist. Just the process of making a machine "see" is tedious; a machine cannot "see" without algorithms and lines of code. People have been trying to make machines recognize human faces for over 20 years, but they're still not even close to being as accurate as human perception of faces (let's not even talk about trying to recognize a caricature of a real person). They have to use tricks, teach the machine how it's supposed to recognize a face... heck, they even have to tell the machine how it's supposed to learn. They basically look at how the brain works and see if it's possible to emulate it on something that is fundamentally different from a brain. (Spoilers: recent findings suggest that a new type of computer/hardware needs to be developed to handle some aspects of human learning behavior.)
www.ox.ac.uk/news/2024-01-03-study-shows-way-brain-learns-different-way-artificial-intelligence-systems-learn

Movies sold us a fantasy of A.I., but the reality of commercial A.I. is just the illusion of intelligence: being able to automate something that previously absolutely required human input, like how lip-syncing for animation can be automated. They trained a machine on data relevant to that task, and they eventually figured out an efficient model that's able to do lip-syncing on a simple computer. But creating said model requires an insane amount of computing power, from what I've understood. What our computers are running is the digested version; the datacenter is where everything is figured out and improved.

I don't really see how a machine is ever supposed to do anything without relying on a man-made algorithm. At that level, you are not making a computer anymore, but a whole new life form that isn't biological. :D All the papers that I've seen so far show that people are aware that current A.I./ML is not even close to a biological brain, but they also show that scientists haven't fully figured out the human brain yet either, and it's unclear if they ever will. I'm no computer scientist, but I like to read about this; it's a rabbit hole that's deep and mingled with neuroscience. It's assumed that developing neuromorphic computing might help us understand the human brain better and bring advances in both computing and neuroscience, but it's also unclear whether going for a 1:1 is really the best thing to do.
Stepping back, challenging the Turing model of computation as the most effective model to describe neural computation is more than just a philosophical question for theoretical computer science. It brings up a more fundamental question for the neuroscience field in general. After our decades-long pursuit to describe the brain explicitly or implicitly through the lens of Turing computation and von Neumann architectures, it may be worth asking whether we have lost sight of what makes the brain special. Perhaps we have unknowingly abstracted away the very things that we need for understanding cognition and intelligence, and in the process unintentionally handcuffed ourselves in our pursuit of the brain for the purposes of efficient computing and improved health. “Does the brain use algorithms?” is not the right question. The right question is, “Are we even positioned to understand what a neural algorithm even is?”
www.nature.com/collections/jaidjgeceb
#22
TumbleGeorge
jesdals: Just give us more power

Please. If there isn't enough power to supply your next computer, then you'll have to use direct nuclear power.

I myself need more raw performance at half the electricity cost, at least.
#23
Event Horizon
Sorely needed, to reduce the annoying shimmering and artifacting in FSR compared to DLSS and even XeSS.
#24
Vya Domus
Frick: Oh yes let us just undo this industry-wide shift.
I know there is no undoing this, but if people have an issue with it, there is really only one company that pushed for this.
#25
Makaveli
Frick: Considering you need an RTX 4090 to push 60 FPS @ 4K maxed out in all games (not even considering RT!), "more power" is not the answer.
This is what's coming; the 5090 will indeed be more powerful, regardless of upscaling and FG.

Even the 4090 still struggles in some games at 4K.

More power is always the answer!