# AMD FSR 2.0 Quality & Performance



## W1zzard (May 11, 2022)

We're reviewing AMD's new FSR 2.0 upscaling technology, which offers amazing image quality improvements that are able to match the visual finesse of NVIDIA's DLSS. The great thing is that the new FSR doesn't require any special hardware and works even on NVIDIA GeForce and Intel GPUs.



----------



## Denver (May 11, 2022)

Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol


----------



## Al Chafai (May 11, 2022)

i love this part:
"I really have to applaud AMD for democratizing upscaling without additional hardware requirements, and all we need now is widespread game developer support."


----------



## FeelinFroggy (May 11, 2022)

Darksider92 said:


> all we need now is widespread game developer support


This is the key.  It's hard to get too excited about FSR or DLSS when so few games support it.


----------



## harm9963 (May 11, 2022)

Going AMD is compelling.


----------



## DeathtoGnomes (May 11, 2022)

Thank you for the review @W1zzard. It took a while for DLSS to gain traction with developers at first, too. FSR 1.0 had an easier time due to its ease of implementation and backwards compatibility, if/when developers took the time to patch it in. I think FSR 2.0 will be implemented more quickly than DLSS and FSR 1.0 were.


----------



## Al Chafai (May 11, 2022)

FeelinFroggy said:


> This is the key.  It's hard to get too excited about FSR or DLSS when so few games support it.


Oh, believe me, this is going to be well supported for sure, especially since this technology can be used on current-gen consoles like the PS5/Xbox Series X/S as well.


----------



## phanbuey (May 11, 2022)

DLSS 2.0 was basically the main reason I went with a 3080.  This is huge.


----------



## Bavor (May 11, 2022)

In some of the screenshots I noticed the same issue that FSR 1.0 had.  Objects such as wires, fences, railings, handrails, etc... looked worse with FSR 2.0 than with DLSS.  They looked like they were either missing pieces or had jagged edges with FSR 2.0 when the equivalent DLSS setting didn't have that issue.  FSR 1.0 had the same issue.


----------



## 3211 (May 11, 2022)

Bavor said:


> In some of the screenshots I noticed the same issue that FSR 1.0 had.  Objects such as wires, fences, railings, handrails, etc... looked worse with FSR 2.0 than with DLSS.  They looked like they were either missing pieces or had jagged edges with FSR 2.0 when the equivalent DLSS setting didn't have that issue.  FSR 1.0 had the same issue.



Those are the limits. This is not as good as the article makes it out to be, based on this one cherry-picked game from AMD. Once you have fine objects in motion, or hair on characters, the artifacts will be even more apparent. It's good, but it's no DLSS killer of any kind. It looks worse, it runs worse, and we'll see how the adoption goes with the additional work required. DLSS is in something like three times more games than the junk FSR 1.0, which should have been the easiest thing in the world to implement.


----------



## tabascosauz (May 11, 2022)

I'm all ears - but AMD needs to commit to this in the same way that Nvidia has been stubbornly stuck on DLSS like flies on a turd. As in, getting more devs onboard to expand the game support list (which Nvidia isn't done with either honestly), frequently releasing drop-in incremental improvements (e.g. DLSS 2.x .dll versions), and not treating this like a one-off science experiment as they have in the past (TressFX, standalone "FidelityFX sharpening").

For the longest time MW2019 was the only title, but DLSS has been slowly creeping into my games library (War Thunder, Siege, No Man's Sky), and I hope FSR 2.0 will soon follow. Lazy/incompetent/overworked devs will still be the challenge, i.e. DLSS "2.0" in War Thunder, the sad abandoned excuse of DLSS in BFV, DLSS performance in BF2042, "FSR support" but Linux-only in BL3...


----------



## zlobby (May 11, 2022)

Another great, free, open technology by AMD. 

TenZZor cores my aZZ!


----------



## Xuper (May 11, 2022)

Hmm, FSR 2.0 at 1080p is way too sharp. I was surprised that DLAA/DLSS Quality retains more blur than FSR 2.0 + sharpen (1080p).

Edit: the image slider is wrong.


----------



## Al Chafai (May 11, 2022)

3211 said:


> Those are the limits.  This is not as good as the article makes it out to be, from this one cherry picked game from amd.   Once you will have fine objects in motion,  hair on characters the artifacts will be even more apparent.  Its good, but its no dlss killer of any kind. It looks worse, it runs worse and we'll see how the adoption is going to be with additional work required. DLSS is in like three times more games than the junk fsr 1.0 which should have been the easiest thing in the world to implement


Woooooh, so did you actually try this tech in other games? Where did you come up with all these conclusions so fast? You sound like a hater, not gonna lie....
During gameplay I am sure spotting the difference between the two will be hard even for the most experienced. People are forgetting that these reconstruction techniques aren't perfect by any means, and for AMD to provide such improvements without the need for any proprietary hardware is a massive step forward.


----------



## rutra80 (May 11, 2022)

Well done. Now the only feature AMD is missing is performant ray tracing.


----------



## ARF (May 11, 2022)

rutra80 said:


> Well done. Now the only feature AMD is missing, is performant raytracing.



DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
The PS5 and new XBox do not support RT, so the gamers actually do not need it.

AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


----------



## qlum (May 11, 2022)

Denver said:


> Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol


It is kind of the other way around: the cards had AI cores, and DLSS was developed in an effort to find a use for them.


----------



## AnarchoPrimitiv (May 11, 2022)

Al Chafai said:


> wooooooh so did you actually try this tech on other games? where did you come up with all these conclusion so fast? you sound like a hater not gonna lie....
> during gameplay i am sure spotting the difference between the two will be so hard even for the most experienced. people are forgetting that these reconstruction techniques aren't perfect by any means, and for AMD to provide such improvements without the need for any proprietary hardware is a massive step forward.


Does sound like a hater, plus, everyone seems to forget that for all intents and purposes, Nvidia has limitless financial resources when compared to AMD, but expect AMD to not only compete, but to do better (while also not seeking profit in the same way as Nvidia....so many people think AMD should be a non-profit company and hold them to standards they hold nobody else to)....this is a great big step, and should only get better as long as Nvidia doesn't pull an intel and instead of innovating, just use vast amounts of money to box AMD out and get developers to be exclusive to Nvidia IP....which they will


----------



## Nordic (May 11, 2022)

I really enjoyed the introduction to the technology. It was a great write up.

It is too bad that the minimum requirements are so high. I could really use this with my 1060, but it is below min spec for 1080p upscaling. Those who need it the most can't even use it.


----------



## AnarchoPrimitiv (May 11, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


Again, compare the R&D budgets for Nvidia and AMD, Nvidia is at $5 billion, AMD is at $2 billion, and we KNOW more than half of that $2 billion is being spent on x86, so in reality, AMD has a fifth of the budget Nvidia has to compete....and still manages to do what they do, in terms of bang for the buck, AMD is the most impressive.


----------



## ARF (May 11, 2022)

AnarchoPrimitiv said:


> so many people think AMD should be a non-profit company



This is a radical opinion which doesn't reflect the reality. Probably you'd agree that there is a difference between "moderate" profit margin of 15%, and aggressive, crazy profit margin of 65%, yes or no?


----------



## Chrispy_ (May 11, 2022)

Still not sure if Tensor cores really do anything for DLSS; They were architected for AI workloads but none of the DLSS "AI" happens on your RTX card, it's performed by Nvidia on their large server farms and then baked into the game-ready driver as a preset.


----------



## ARF (May 11, 2022)

AnarchoPrimitiv said:


> Again, compare the R&D budgets for Nvidia and AMD, Nvidia is at $5 billion, AMD is at $2 billion, and we KNOW more than half of that $2 billion is being spent on x86, so in reality, AMD has a fifth of the budget Nvidia has to compete....and still manages to do what they do, in terms of bang for the buck, AMD is the most impressive.



AMD's mistake is that it answers these dirty initiatives by nvidia. Tessellation, and now RT... Do you remember when nvidia paid a game developer to REMOVE the DX 10.1 implementation (Assassin's Creed DX10.1) in which the Radeons were better?


----------



## Prima.Vera (May 11, 2022)

We also need a comparison test with the upscaler now implemented by Epic in Unreal Engine 5.


----------



## zx128k (May 11, 2022)

Basically, as per XeSS for Intel: XMX increases performance and allows more processing, which increases image quality. The same goes for DLSS and Tensor cores. From Intel's slides, XeSS on DP4a instructions runs on all GPUs but is both slower and has reduced image quality. One of the big issues with temporal upscaling is that a lack of processing power leads to blurring. Also, AI is better at removing artifacts and can make guesses when there is a lack of samples, further improving image quality.

In the Matrix Awakens demo, DLSS Quality mode appears to outperform TSR in UE5 by 10 fps. So I expect DLSS and XeSS to both outperform TSR and FSR 2 in both quality and performance with a 4K upscale, with DLSS having a fixed cost because of the number of Tensor cores. I expect the performance to be close at lower resolutions. The issue, I would guess, is that FSR 2 won't upscale from 1080p to 4K because the quality won't match. Dynamic resolution scaling is going to be the best bet to keep quality close to NVIDIA's DLSS: you simply increase or decrease the resolution, so you can have better quality while maintaining performance. With temporal upscaling there should be little need for sharpening.

In the article we can see that a 4K result requires a 1440p input for FSR 2 to upscale in Quality mode. Like DLSS, this is better than FSR 1. Internal resolution affects the number of rays needed in ray tracing games, so rendering at the lowest resolution possible before upscaling is very important. I can see that FSR 2 is about 5 fps behind, given that there is a performance gap in RT games as well. This means AMD cannot close the gap with NVIDIA.

Image quality is good in still shots; we need to see raw video to judge quality while the image is moving, as that's when most of the artifacts happen. FSR 2 looks a little more blurred compared to DLSS, but it's close. DLSS 2 does look closer to native. Stills won't tell you much. I would not use FSR 1, but I would use FSR 2. FSR 2 strikes the right balance where FSR 1 did not.
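The dynamic resolution scaling idea above can be sketched as a simple feedback loop: measure frame time, nudge the internal render scale toward a budget, and hand the result to the temporal upscaler. This is a hypothetical illustration only; the constants, function names, and damping factor are assumptions, not any engine's actual API:

```python
# Hypothetical dynamic-resolution controller: hold a frame-time target by
# adjusting the internal render resolution, then let a temporal upscaler
# (FSR 2 / DLSS / TSR) reconstruct the output resolution.

TARGET_MS = 16.7                  # ~60 fps frame-time budget (assumption)
MIN_SCALE, MAX_SCALE = 0.5, 1.0   # per-axis render-scale limits (assumption)

def adjust_scale(scale: float, frame_ms: float) -> float:
    """Nudge the per-axis render scale toward the frame-time budget."""
    # GPU cost grows roughly with pixel count (scale squared), so step the
    # per-axis scale by the square root of the cost ratio, heavily damped
    # to avoid visible oscillation between frames.
    step = (TARGET_MS / frame_ms) ** 0.5
    new_scale = scale * (0.9 + 0.1 * step)
    return max(MIN_SCALE, min(MAX_SCALE, new_scale))

def render_resolution(out_w: int, out_h: int, scale: float) -> tuple:
    """Internal resolution fed to the upscaler for a given output size."""
    return int(out_w * scale), int(out_h * scale)

# Example: a slow 22 ms frame at 4K output pulls the render scale down a bit.
scale = adjust_scale(1.0, frame_ms=22.0)
print(render_resolution(3840, 2160, scale))
```

The point of the damping is the trade-off the post describes: rather than locking a fixed quality preset, the input resolution drifts up when there is headroom and down when there is not, keeping frame rate steady at the cost of some reconstruction quality.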


----------



## thegnome (May 12, 2022)

It is good that this is finally getting up to the DLSS standard while not requiring any more die space. Imagine the costs NVIDIA could save by not having to include Tensor cores (or including far fewer) while still offering the same features. Or they could just find more uses for them than DLSS and some pro workloads, on a separate die for that MCM future.


----------



## kapone32 (May 12, 2022)

zx128k said:


> Basically as per the XeSS for Intel, XMX increases performance and allow more process which increases image quality.  This is the same for DLSS and tensor cores.  From Intel slides, XeSS on DP4 instructions runs on all GPUs but is both slower and has reduced image quality.  One of the big issues with temporal upscaling is a lack of processing power leads to blurring.  Also AI is better at removing artifacts and can make guess when there is a lack of samples further improving image quality.
> 
> In the Matrix Awakened demo DLSS quality mode appears to outperform TSR in UE5 by 10 fps.  So I expect DLSS and XeSS to both outperform TSR and FSR 2 in both quality and performance with a 4k upscale.  With DLSS having a fixed cost because the number of tensor cores.  I expect the performance to be close at lower resolutions.   The issue I would guess with FSR 2 is it wot upscale from 1080p to 4k because the quality wont match.  Dynamic resolution scaling is going to be the best bet to keep quality close the nvidia DLSS.  you samply increase the resolution or decrease it.  Thus you can have better quality and maintain performance.  With temporal upscaling there should be little need for sharpening.
> 
> ...


Thanks Kreskin


----------



## wolf (May 12, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.


Care to explain that a bit more? Just because it's of no value to you doesn't mean it's of no value 'for the user who can think'. And is that a typo about the consoles?


ARF said:


> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


Indeed, Nvidia (or others) innovate, and AMD plays catch up, but I suppose it's nice to get respectable open source alternatives a year or two late.


----------



## zx128k (May 12, 2022)

kapone32 said:


> Thanks Kreskin



If you are going to follow me to each thread I post in and post like this, you better create a backup account.  No one expects others to agree but making personal remarks is another story.


----------



## kapone32 (May 12, 2022)

zx128k said:


> If you are going to follow me to each thread I post in and post like this, you better create a backup account.


............


----------



## swirl09 (May 12, 2022)

I still prefer DLSS; both image quality and performance are ahead, though by a small margin.

Having said that, FSR 2.0 is a noticeable step up from 1.0. Looking at FSR 2.0 Balanced vs. FSR 1.0 Quality (for closer performance parity), FSR 2.0 was very easily the winner despite the lower setting! This is only based on one game, though. Still nice to see these improvements, and that it's not hardware dependent. DLSS as a selling feature has been diminished.


----------



## wolf (May 12, 2022)

Definitely going to be trying this for myself, but I'd also like a broader sample than one title to draw broad conclusions from.

Based on what I'm seeing here though, there are still things it does worse, and I'd easily call it worse overall (just watch the video), just by a MUCH slimmer margin, at least far slimmer than FSR 1.0, and that still doesn't make it a DLSS killer. 

The title is borderline clickbait. AMD's marketing has been stressing pretty clearly that it's easy to implement in games that support DLSS, so... why not both? And that goes both ways: if you already planned on supporting FSR 2.0, then you should add DLSS support, and if you already planned on DLSS support, then you should add FSR 2.0 support as well.

AMD needs to get it into lots of games people care about, because if NVIDIA can keep doing that (and AMD can't), this is all for nothing.

The road to DLSS actually being killed may have finally started to be walked, but we're nowhere near NVIDIA crying about it and pulling the pin; it could take years to actually _make_ the kill.


----------



## wheresmycar (May 12, 2022)

I just don't know what all the fuss is about. 

I sit one foot further away from my screen and I've got DLSS 3.0/FSR 2.5 enabled. If games need a little sprucing up with ray tracing, I just open the window and let the sun in. You guys just need to catch up!


----------



## Minus Infinity (May 12, 2022)

Massive improvements at 1080p for FSR 2.0; FSR 1.0 was god awful. Now it's every bit as good, if not a wee bit better, than DLSS. They are so close no one would notice in real gameplay IMO. Now if RDNA3 massively improves RT, then NVIDIA's last advantage is gone, and all they'll rely on is brute force, i.e. huge power draw, to win bragging rights. RDNA3 might be the first AMD GPUs I buy in 10 years.


----------



## Jism (May 12, 2022)

Denver said:


> Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol



We're not far off from AMD competing in all possible markets and being able to release better products overall.


----------



## Oberon (May 12, 2022)

ARF said:


> This is a radical opinion which doesn't reflect the reality. Probably you'd agree that there is a difference between "moderate" profit margin of 15%, and aggressive, crazy profit margin of 65%, yes or no?


I imagine you'd be shocked to see how much higher NVIDIA's margins are than AMD's...


----------



## btarunr (May 12, 2022)

Denver said:


> Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol


NVIDIA uses Tensor cores for ray tracing, too, for the AI denoiser. This probably offloads the shaders quite a bit.

Though I agree. NVIDIA looks for problems to solve using the most difficult/expensive approach possible, which it can sell as exclusive features for a generation or two. AMD looks at the problem NVIDIA discovered, the solution, and tries to find the most frugal alternative solution. Happened with FreeSync vs. G-SYNC; happened with FSR 2.0 vs. DLSS; will happen with Ray Tracing.


----------



## Crackong (May 12, 2022)

This is day one of FSR 2.0 and it is already matching most of DLSS 2.x's quality, and that's with DLSS 2.x having been polished for over a year now.

Once again the market will favor an open solution without proprietary hardware requirement.


----------



## Trunks0 (May 12, 2022)

btarunr said:


> NVIDIA uses Tensor cores for ray tracing, too, for the AI denoiser. This probably offloads the shaders quite a bit.
> 
> Though I agree. NVIDIA looks for problems to solve using the most difficult/expensive approach possible, which it can sell as exclusive features for a generation or two. AMD looks at the problem NVIDIA discovered, the solution, and tries to find the most frugal alternative solution. Happened with FreeSync vs. G-SYNC; happened with FSR 2.0 vs. DLSS; will happen with Ray Tracing.



Do they? Has any ray tracing game implemented Tensor-core-accelerated denoising of the ray tracing?

And no, they don't. Using AI to implement DLSS was the easier and more affordable method for NVIDIA, as training an AI model can be easier than hand-working code, and NVIDIA has massively invested in AI. So it made a ton of sense for them to do it. It also helped them unify the GPUs they make: now everything has Tensor cores on it, and they don't have to create something separate just for the AI business.


----------



## btarunr (May 12, 2022)

Trunks0 said:


> Has any RayTracing game implemented Tensor core accelerated de-noising of the RayTracing?


Every single ray tracing application that has ever run on a GeForce RTX GPU uses that AI denoiser.

AI denoising is integral to the ray tracing pipeline on GeForce RTX. AMD GPUs use a compute-shader based denoiser.


----------



## Trunks0 (May 12, 2022)

btarunr said:


> Every single ray tracing application that has ever run on a GeForce RTX GPU uses that AI denoiser.
> 
> Denoising is integral to the ray tracing pipeline.



I'm not sure that's really accurate. Just a little googling suggests they don't use the tensor cores for that yet, but it's being worked on.


----------



## InVasMani (May 12, 2022)

FSR 2.0, especially with Sharpen, does a nice job; the combination helps with shading and lighting while enhancing reflections. Overall, FSR 2.0 does a good enough job at preserving color depth. Zoomed in, I personally feel it looks better than DLAA and DLSS, and better than native with no AA. I have to agree with W1zzard on sharpening control: AMD would be wise to include good control options around it, since people have very varied preferences on that subject.

I'm hoping the code won't be too hard to adjust the per-dimension scaling for the different presets, as well. I'd probably make some adjustments to the scaling to work a little better within the 0-255 8-bit color space; the 1.7x per-dimension factor is fine, but the others would be more optimal at 1.53x, 2.04x, and 3.06x, since they align better, plus the minor per-dimension uplift will work better with the additional temporal data. For adaptive scaling I'd prefer 51%/68%/85% of screen resolution myself.

This is a good step forward. The depth/motion/color buffers are nice; I hope AMD expands on them in newer hardware with additional optimized configuration profiles for each, combined into a single frame buffer. They could basically target scene highlights/mid-tones/shadows with each buffer type, loosely targeting 0/127.5/255 black/grey/white, to provide better image clarity when combined with the upscale to higher resolution. They could probably brute-force that now at around native-resolution performance, with better image quality for older titles.
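The per-dimension factors being tuned above translate directly into internal render resolutions. A quick sketch for a 4K output; the 1.5/1.7/2.0/3.0 factors are FSR 2.0's published per-dimension presets (Quality mode's 1.5x matches the review's 1440p-to-4K observation), while the function name and rounding choice are my own:

```python
# FSR 2.0's published per-dimension scale factors for each quality preset.
PRESETS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def input_resolution(out_w: int, out_h: int, factor: float) -> tuple:
    """Internal render resolution the upscaler works from, per axis."""
    return round(out_w / factor), round(out_h / factor)

# Internal resolutions for a 3840x2160 (4K) output target.
for name, factor in PRESETS.items():
    w, h = input_resolution(3840, 2160, factor)
    print(f"{name}: {w}x{h}")
```

Quality mode comes out to 2560x1440, which is exactly the 1440p input the review reports for the 4K result; Performance mode lands on 1920x1080, i.e. a quarter of the output pixel count.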


----------



## renz496 (May 12, 2022)

ARF said:


> AMD's mistake is that it answers these dirty initiatives by nvidia. Tessellation, and now RT... Do you remember when nvidia paid a game developer to REMOVE the DX 10.1 implementation (Assassin's Creed DX10.1) in which the Radeons were better?


It is AMD that pushed MS to make tessellation part of the DX11 spec, not NVIDIA. NVIDIA simply beat AMD at tessellation performance on their first try. Ever since then I have seen people accusing NVIDIA of pushing tessellation into the DX spec to cripple AMD, when in fact it is AMD that had been pushing for tessellation since the DX10 era.


----------



## Khonjel (May 12, 2022)

wheresmycar said:


> I just don't know what all the fuss is about.
> 
> I sit 1 feet further away from my screen and I've got DLSS 3.0/FSR 2.5 enabled. If games need a little sprucing up with ray tracing, i just open the window and let the sun in. You guys just need to catch up!


Yeah. Listen to us old timers, kids. Touch grass and get some sunshine. Gives you vitamin D.


----------



## Makaveli (May 12, 2022)

rutra80 said:


> Well done. Now the only feature AMD is missing, is performant raytracing.


This will be fixed in RDNA 3 and things should get real interesting.


----------



## Punkenjoy (May 12, 2022)

RT is not a proprietary feature of NVIDIA. NVIDIA was first to it and branded it as "RTX", but it's a standard that is in both Vulkan and DirectX 12 Ultimate. AMD's implementation is just different: they made the choice not to have dedicated fixed-function hardware for it, but rather to reuse the texture and compute units. It's a more flexible approach, but you lose overall performance when using RT, since it uses resources you already need to render the frame.

The big advantage is that in a game that doesn't use RT, you have no silicon space sitting unused; on the other side, you will lose out when RT is enabled. Their goal was probably to achieve much higher raster performance than NVIDIA so they would get on-par RT performance, but they were not there. We will see how it is on Navi 3x. RT is still in its early phase, and I do not think there is enough good hardware for it to really be used widely. A bit like when pixel shaders came out: it took DX9c and a few generations of hardware before they really got widespread.


----------



## Akkedie (May 12, 2022)

btarunr said:


> Every single ray tracing application that has ever run on a GeForce RTX GPU uses that AI denoiser.
> 
> AI denoising is integral to the ray tracing pipeline on GeForce RTX. AMD GPUs use a compute-shader based denoiser.


There is no such thing as AI denoising used in any game afaik, where did you read this?


		https://www.reddit.com/r/hardware/comments/bcsktj


----------



## Kaotik (May 12, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


PS5, Xbox SS and SX all support RT just like every other RDNA2 product.



3211 said:


> Those are the limits.  This is not as good as the article makes it out to be, from this one cherry picked game from amd.   Once you will have fine objects in motion,  hair on characters the artifacts will be even more apparent.  Its good, but its no dlss killer of any kind. It looks worse, it runs worse and we'll see how the adoption is going to be with additional work required. DLSS is in like three times more games than the junk fsr 1.0 which should have been the easiest thing in the world to implement


Thanks for the laughs 
ps. FSR got to over 1/3 of DLSS's official game count despite being out for under a year, while DLSS has been out for over 3 years. Also, there are tools to apply FSR to literally any game (like RSR now allows, too).



btarunr said:


> Every single ray tracing application that has ever run on a GeForce RTX GPU uses that AI denoiser.
> 
> AI denoising is integral to the ray tracing pipeline on GeForce RTX. AMD GPUs use a compute-shader based denoiser.


Quite sure nothing outside NVIDIA demos ever used their "AI denoiser", it's possible there's an odd exception somewhere, but that's it.


----------



## Trunks0 (May 12, 2022)

Punkenjoy said:


> RT is not a proprietary feature of Nvidia, Nvidia was first on it and branded it as "RTX" but it's a standard that is in both Vulkan and Direct X12 ultimate. AMD implementation is just different and they made the choice to not have dedicated hardware function for it but rather reuse the texture and compute unit to do it. It's a more flexible approach but you loose overall performance when using RT since it use things you already need to render the frame.
> 
> The big advantage is when there is a game that do not use RT, you do not have silicon space unused. on the other side, if you are even in a non-RT setup, you will lose when RT is enabled. Their goal was probably to achieve much higher raster performance than Nvidia so they would get on par RT performance but they were not there. We will see how it is on Navi 3x. RT is still in its early phase and i do not think there is enough good hardware to really be used widely. A bit like when pixel shaders got out. it took dx9c and few gen of hardware before they really got widespread.



Sorta... AMD does have dedicated RT hardware, but the RT units are part of the compute units, whereas nVidia's are more dedicated to just RT and the compute/CUDA part is completely separate. The funny part is AMD's method isn't bad; it's as fast or faster than Turing, but slower than Ampere. Early days stuff as you said though, it will get there.


----------



## AsRock (May 12, 2022)

Denver said:


> Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol



Oddly, nVidia's Linux drivers just went open source.


----------



## 3211 (May 12, 2022)

Al Chafai said:


> wooooooh so did you actually try this tech on other games? where did you come up with all these conclusion so fast? you sound like a hater not gonna lie....
> during gameplay i am sure spotting the difference between the two will be so hard even for the most experienced. people are forgetting that these reconstruction techniques aren't perfect by any means, and for AMD to provide such improvements without the need for any proprietary hardware is a massive step forward.



We know how techniques similar to this work in other games. We just recently had Epic's TSR in Ghostwire, which shows the same faults. There's a reason AMD kept using Deathloop as the example, with its stylized graphics and softer-looking image. Also, remember W1zzard claiming that FSR 1 stacked up well against DLSS when it launched? We all know how that turned out.

Why are we still acting like "proprietary hardware" means much when every NVIDIA card in the last four years has had RT and DLSS? It is not possible to purchase an NVIDIA card now, or in the future, without DLSS and RT.


----------



## Verpal (May 12, 2022)

Wouldn't call it a DLSS killer when it can only provide comparable IQ at 4K, and any game that can support FSR 2.0 could also support DLSS. However, it is free and doesn't use die space; AMD deserves all the congratulations here!


----------



## wolf (May 12, 2022)

btarunr said:


> NVIDIA looks for problems to solve using the most difficult/expensive approach possible, which it can sell as exclusive features for a generation or two.
> AMD looks at the problem NVIDIA discovered, the solution, and tries to find the most frugal alternative solution.


QFT. Very succinctly put, and broadly what people could have expected, and perhaps should continue to expect unless market share shifts significantly.

Certainly not a bad thing, and it caters to both sides: if you want the new and shiny, there's a choice; if you want free and open, there's another, both with their own 'costs'.


----------



## ratirt (May 12, 2022)

To be fair, this FSR 2.0 looks better than I expected. Really looking forward to trying it out.


----------



## spnidel (May 12, 2022)

the fuck man, FSR 2.0 + sharpen often looks better than native
there has to be some caveat, like lots of blur in motion, I refuse to believe it works this good (I'm happy, mind you, as I've a 6800 XT, but still)


----------



## ioannis (May 12, 2022)

Maybe the time has come to return to AMD after many years, with their next GPU generation? FSR 2.0 looks great, identical to DLSS. I am guessing the 7000-series GPUs will be able to do 60 fps with RT + FSR 2.0.


----------



## wolf (May 12, 2022)

Also not sure how this gets the Innovation award, it's literally doing something that already exists and was innovated already by other people, I don't think it actually meets the definition of innovative.


----------



## VulkanBros (May 12, 2022)

Never heard that word before ...


> the game can automagically increase the rendering resolution again


----------



## ratirt (May 12, 2022)

wolf said:


> Also not sure how this gets the Innovation award, it's literally doing something that already exists and was innovated already by other people, I don't think it actually meets the definition of innovative.


Well, if you consider what FSR 2.0 does, and that it can be used by any graphics card out there with no restrictions, it is innovative, since there is nothing else like it for what it does.


----------



## Markosz (May 12, 2022)

Very nice improvements over 1.0; it's head to head with DLSS, but without the 'AI magic'.
With the current GPU market and stagnating technology, developers really need to pick up the pace with implementing these kinds of upscalers if they want to keep improving graphics at playable performance.


----------



## The red spirit (May 12, 2022)

tabascosauz said:


> and not treating this like a one-off science experiment as they have in the past (TressFX, standalone "FidelityFX sharpening").


If we are talking about RIS, then it's broken AF. It says it will run in DX9 and DX11 games, but my success rate with it actually working has been way lower. Only Genshin Impact seems to work with it; it doesn't work with Horizon 5 or anything else I tried. Meanwhile, DXR's sharpening works in any game. Other forgotten features like Chill or Radeon Boost have been either crap or non-functional; Chill has been buggy with the RX 580 since 2019. This lack of maintenance and improvement is easily one of the worst "features" of AMD software.


----------



## wolf (May 12, 2022)

ratirt said:


> well if you consider what FSR 2.0 does and that it can be used by any graphics card there is with no restrictions is innovative since there is nothing like that for what it does.


My 2c... If we take what you said as absolutely true, it's still not really innovative... at all. 

But, I accept your opinion, just not really interested in debating it.


----------



## ratirt (May 12, 2022)

wolf said:


> My 2c... If we take what you said as absolutely true, it's still not really innovative... at all.
> 
> But, I accept your opinion, just not really interested in debating it.


That's the thing with today's world. Everything is debatable, since most stuff is an extension of something that was already there, or uses ideas that emerged a long time ago.


----------



## W1zzard (May 12, 2022)

wolf said:


> Also not sure how this gets the Innovation award, it's literally doing something that already exists and was innovated already by other people, I don't think it actually meets the definition of innovative.


Have you watched the GDC FSR 2.0 video?


----------



## Recus (May 12, 2022)

Where is the crowd that was always claiming DLSS is blurry? Now that FSR achieves DLSS image quality, does it mean it has the same amount of blur?




Denver said:


> Oh My God, it really looks great... Nvidia must be sweating blood seeing their solution that needs a dedicated ASIC, wasting precious space on the die, being matched via open source software. lol



Just like Nvidia's software async gave AMD's hardware async a run for its money?



zlobby said:


> Another great, free, open technology by AMD.
> 
> TenZZor cores my aZZ!





zlobby said:


> Always remember: if it is free, the product is you!





ARF said:


> AMD's mistake is that it answers these dirty initiatives by nvidia. Tessellation, and now RT... Do you remember when nvidia paid a game developer to REMOVE the DX 10.1 implementation (Assassin's Creed DX10.1) in which the Radeons were better?



Stop the Russian propaganda. https://techreport.com/news/14707/ubisoft-comments-on-assassins-creed-dx10-1-controversy-updated/


----------



## ARF (May 12, 2022)

Instead of wasting precious developer time on mimicking lower settings, why don't you simply change the settings from ultra high to very high? The result will be the same with regards to the FPS improvement.


----------



## wolf (May 12, 2022)

W1zzard said:


> Have you watched the GDC FSR 2.0 video?


I will and I'll report back 

I might have misunderstood part of this or just straight-up missed it; willing to admit it if that's the case.


----------



## Ferrum Master (May 12, 2022)

kudos to AMD


----------



## Charcharo (May 12, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.



Consoles support RT.


----------



## ARF (May 12, 2022)

Charcharo said:


> Consoles support RT.



Only "ray-traced" reflections... though.


----------



## THU31 (May 12, 2022)

ARF said:


> Only "ray-traced" reflections... though.


Does Metro Exodus EE on consoles not support RTGI? I think it does. And the Matrix demo? Some games have RT shadows as well.

I think the problem is performance. You just cannot enable all the effects without sacrificing performance. That is why we might be looking at 30 FPS targets for UE5 games.


Anyway, FSR 2.0 is a huge upgrade. DLSS still resolves distant sub-pixel detail better, but you have to really look for it. Tensor cores are probably where the 3 FPS difference comes from.

I am still hoping for that SDK that will include all three technologies and will be easy to implement in all games.


----------



## Maranak (May 12, 2022)

ARF said:


> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


You mean things like AMD Mantle (which is basically Vulkan now) or HBM (which they co-developed with SK Hynix) which literally everyone is using in the server/HPC segments?


----------



## ARF (May 12, 2022)

THU31 said:


> Does Metro Exodus EE on consoles not support RTGI? I think it does. And the Matrix demo? Some games have RT shadows as well.
> 
> I think the problem is performance. You just cannot enable all the effects without sacrificing performance. That is why we might be looking at 30 FPS targets for UE5 games.
> 
> ...



Yes, performance is the problem. We don't have the transistor budget on these manufacturing processes to make everything ray-traced while keeping performance above 30 FPS.


----------



## tomcug (May 12, 2022)

btarunr said:


> NVIDIA uses Tensor cores for ray tracing, too, for the AI denoiser. This probably offloads the shaders quite a bit.


What you refer to is probably NVIDIA's NRD denoising library. The source code is public:








GitHub - NVIDIAGameWorks/RayTracingDenoiser: NVIDIA Ray Tracing Denoiser (github.com)
				



And it's just shader code, doesn't use Tensor cores.


----------



## dyonoctis (May 12, 2022)

Chrispy_ said:


> Still not sure if Tensor cores really do anything for DLSS; They were architected for AI workloads but none of the DLSS "AI" happens on your RTX card, it's performed by Nvidia on their large server farms and then baked into the game-ready driver as a preset.


I thought that was only for DLSS 1.0? You can use DLSS in the real-time preview of Unity and Unreal Engine, and I really doubt that Nvidia's servers are computing every single project being made. :D

https://docs.unity3d.com/Manual/deep-learning-super-sampling.html


----------



## Charcharo (May 12, 2022)

ARF said:


> Only "ray-traced" reflections... though.


Nope. All RT effects. Reflections are what devs USUALLY use, but not always.
For Metro it's RTGI. For other games it's shadows or RTAO.


----------



## r9 (May 12, 2022)

I hope both technologies are here to stay, and that they become an industry standard that all games ship with out of the box.


----------



## Kaotik (May 12, 2022)

ARF said:


> Only "ray-traced" reflections... though.


Huh? They support literally anything you can do with RT, just like AMD or NVIDIA (or Intel Arc) PC cards can.
The reason they might seem more "limited" is simply a performance issue; consoles aren't top-end PC hardware, where the GPU alone consumes more power than the whole console.


----------



## Chrispy_ (May 12, 2022)

Some games are starting to add a sharpening slider for DLSS, and honestly this is what DLSS has sorely needed. At least in Deathloop, FSR 2.0 + sharpness is the best output here by a wide margin.

I think FSR 2.0 is close to DLSS 2.3 in terms of output quality; it doesn't really matter if there are minor differences, because the effectiveness of FSR and DLSS varies from game to game and from scene to scene. The basic mechanics of FSR 2.0 much more closely match DLSS now: temporal sampling of jittered camera positions, with a couple of features designed to combat the two worst drawbacks of this technique (thin-feature shimmer and motion trail artifacts).
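The "jittered camera positions" part is easy to picture with a bit of code. A minimal sketch follows; the Halton(2,3) sequence is a common choice for this kind of jitter, and the function names here are illustrative, not from either SDK:

```python
def halton(index: int, base: int) -> float:
    """Radical inverse of `index` in the given base; returns a value in [0, 1)."""
    result, fraction = 0.0, 1.0
    while index > 0:
        fraction /= base
        result += fraction * (index % base)
        index //= base
    return result

def jitter_offsets(num_frames: int):
    """Sub-pixel camera offsets in [-0.5, 0.5) from a Halton(2,3) sequence.

    Each frame the projection matrix is nudged by one of these offsets, so
    that over `num_frames` frames the renderer samples many distinct
    positions inside every output pixel -- the raw material the temporal
    upscaler accumulates into a higher-resolution image.
    """
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5)
            for i in range(num_frames)]
```

The same offset also has to be handed to the upscaler each frame, so it can un-jitter the sample before blending it into the history buffer.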

Where I think FSR is vastly superior to DLSS is the adaptive resolution. Finally you can run something demanding at a target framerate and not have to pause, sacrifice some graphics options, potentially restart the game to apply them, and then wait until the next big firefight and hope it's enough. You'll (maybe) notice it getting a bit blurry in the heat of the moment, but it's only temporary and you don't have to sacrifice those higher quality settings or resolution for 99% of the gameplay just to cover those 1% peak demands.
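The adaptive-resolution loop described above boils down to a small feedback controller. Here's a toy sketch; the target frame time, scale bounds, and square-root step are illustrative assumptions, not AMD's actual heuristic:

```python
class DynamicResolution:
    """Toy feedback controller that nudges the render scale so GPU frame
    time converges on a target, illustrating the adaptive-resolution idea."""

    def __init__(self, target_ms: float = 16.6,
                 min_scale: float = 0.5, max_scale: float = 1.0):
        self.target_ms = target_ms
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale  # per-axis scale; 1.0 = native resolution

    def update(self, gpu_frame_ms: float) -> float:
        # Pixel cost is roughly quadratic in the per-axis scale, so step
        # by the square root of the timing error.
        error = self.target_ms / gpu_frame_ms
        self.scale *= error ** 0.5
        self.scale = min(self.max_scale, max(self.min_scale, self.scale))
        return self.scale
```

Each frame the game renders at `scale` times the output resolution on each axis and lets the upscaler fill in the rest, so the quality settings never have to change.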


----------



## spnidel (May 12, 2022)

Nordic said:


> I really enjoyed the introduction to the technology. It was a great write up.
> 
> It is too bad that the minimum requirements are so high. I could really use this with my 1060 but it is below min spec for 1080p up scaling. Those who need it the most can't even use it.


why not try it first before getting upset


----------



## Chrispy_ (May 12, 2022)

r9 said:


> I hope both technologies are here to stay and hopefully they become industry standard where all games will come with out of the box.


I am confident that FSR will become industry standard fast - as it's the only tech that runs on all platforms. Game developers may continue to implement both as long as Nvidia make it easy for them and incentivise them to do so, but look at it from the game developer's perspective:

Do you: 

Implement a single solution (FSR) which works for all three of your target markets with no restrictions, and is officially endorsed by the exclusive GPU vendor for the two console markets.

OR


Do all the work to implement FSR but *also* do additional work to add DLSS, which is only usable by about one quarter of one of your three target markets, and adding it is redundant because it doesn't really do anything special that FSR doesn't already do.

Please, find me a good reason why a dev would pick the second option from now on. Outside of financial incentives from Nvidia, you just wouldn't.


----------



## THU31 (May 12, 2022)

Chrispy_ said:


> Where I think FSR is vastly superior to DLSS is the adaptive resolution. Finally you can run something demanding at a target framerate and not have to pause, sacrifice some graphics options, potentially restart the game to apply them, and then wait until the next big firefight and hope it's enough. You'll (maybe) notice it getting a bit blurry in the heat of the moment, but it's only temporary and you don't have to sacrifice those higher quality settings or resolution for 99% of the gameplay just to cover those 1% peak demands.


DLSS in Deathloop does support dynamic resolution scaling. It was tested by someone on TPU months ago. It seems to be a game-specific feature.


----------



## Chrispy_ (May 12, 2022)

THU31 said:


> DLSS in Deathloop does support dynamic resolution scaling. It was tested by someone on TPU months ago. It seems that is a game specific feature.


Ah okay, I've not seen it as a feature in any games yet, though I have seen games that let you combine the in-game dynamic resolution scaling with DLAA.

If the dynamic scaling actually affects the internal render resolution of DLSS, then that's good. DLAA or DLSS on top of the game's own dynamic resolution is just a simple per-frame upscale, without any of the motion-vector or temporal buffers that make DLSS 2.3 and FSR 2.0 better.


----------



## Punkenjoy (May 12, 2022)

dyonoctis said:


> I thought that it was only for DLSS 1.0 ? you can use dlss in the real time preview of unity and unreal engine, and I really doubt that Nvidia servers are computing every single project that are being made
> 
> 
> https://docs.unity3d.com/Manual/deep-learning-super-sampling.html:D



DLSS 2.x still uses a neural network, but it's not trained per game. Nvidia ships the trained network, and inference is what runs on the Tensor cores.

One of the things is that people think you need AI for a lot of things where a good algorithm can do the work just fine and be much easier and cheaper to run. But writing an algorithm requires more work than training an AI, so AI is being used right now to brute-force many things that could have been done with good programming.

There are areas where AI is really useful and cannot be beaten by a clever algorithm, but those areas are just a small subset of what people try to apply AI to.


----------



## Oberon (May 12, 2022)

Nordic said:


> It is too bad that the minimum requirements are so high. I could really use this with my 1060 but it is below min spec for 1080p up scaling. Those who need it the most can't even use it.



It will still work. Unlike NVIDIA, AMD doesn't stop you from trying things that aren't explicitly supported.


----------



## The red spirit (May 12, 2022)

ARF said:


> Instead of wastin precious developer time on mimicing lower settings, why don't you simply change the settings from ultra high to very high? The result will be the same with regards to the FPS improvement..


Do you really think that everyone does that, or has the hardware for it? I don't even use presets; I manage with low-to-high settings, with most set to medium. The technology was more interesting for low-end gamers, who might be able to run a game on weaker hardware that otherwise couldn't handle it. The problem is still picture quality.

The main problem with gaming, and it has been a problem for at least a decade, is that games need faster and faster hardware to run, but very often there's nearly no visual quality gain in newer games. You can run a 10-year-old game at very high settings and it will look better than a new game at low, but the old game runs well on a GTX 650 while the new game is a slideshow. I frankly don't want games to look awful, but I also don't care too much about visual quality. However, I hate when newer games are more demanding and look or run worse for no obvious reason. It's a damn shame that many devs don't know how to optimize properly.


----------



## Od1sseas (May 12, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


Jesus you're delusional.
RT is not proprietary, it existed since the 1980's
Ray Tracing is the future of graphics.
Xbox and PS5 DO support Ray Tracing.


----------



## Lateshow (May 12, 2022)

AnarchoPrimitiv said:


> Does sound like a hater, plus, everyone seems to forget that for all intents and purposes, Nvidia has limitless financial resources when compared to AMD, but expect AMD to not only compete, but to do better (while also not seeking profit in the same way as Nvidia....so many people think AMD should be a non-profit company and hold them to standards they hold nobody else to)....this is a great big step, and should only get better as long as Nvidia doesn't pull an intel and instead of innovating, just use vast amounts of money to box AMD out and get developers to be exclusive to Nvidia IP....which they will


Yeah, it's like the pot and the kettle around here at times... His comment was typical of an NVIDIA fanboy, but you trying to say people want AMD to run as a non-profit is just as fanboyish.

AMD, and their record-setting quarters, clearly show they are as focused on profits as NVIDIA or Intel. Digging deeper, they got rid of their sub-$300 CPUs last generation and sold a silly number of slower 3000-series chips to those who couldn't budget $300+ for the improved 5000-series chips. Yesterday's GPU refresh, offering around 5% more performance for 10%+ more money, is as bad as any of their competitors' tactics. Well, NVIDIA's original MSRP on the 20 series being super high to help sell excess 10-series GPUs was worse, but it's along the same lines. The AMD 6500 XT was a bad joke, right? There's more if you want.

A lack of competition is the only thing keeping tech prices at these inflated levels despite the mining and PC booms being over. I'm hoping Intel jumps into the GPU game soon, and succeeds hard. Both GPU makers took advantage of their customers, and any decent third option would help us consumers.


----------



## RedBear (May 12, 2022)

Oberon said:


> It will still work. Unlike NVIDIA, AMD doesn't stop you from trying things that aren't explicitly supported.


If it "works" but the impact is severe enough to leave you with a (nearly) unplayable frame rate, then it doesn't work. I mean, those minimum recommendations weren't thrown out just for fun.

EDIT: Speaking of 1080p, is there going to be a 1080p comparison?


----------



## ARF (May 12, 2022)

Od1sseas said:


> RT... it existed since the 1980's
> Ray Tracing is the future of graphics.



It has been the "future" since the '80s and it never came.
We don't have the technology to make ray tracing work for real gaming, unless every computer is connected via the internet to many powerful ray-tracing supercomputers in order to deliver the constant frame rate you want, between 60 FPS and 144 FPS (for example).



Od1sseas said:


> you're delusional.



Am I delusional, or do you believe in unicorns?


----------



## Xex360 (May 12, 2022)

AMD destroyed NVIDIA and their stupid Tensor cores; if AMD could do it without extra wasted silicon, so could NVIDIA.


----------



## Od1sseas (May 12, 2022)

ARF said:


> It had been the "future" since 80s and never came
> You don't have the technology to make ray-tracing working for real gaming. Unless every computer is connected via the internet with many powerful ray-tracing supercomputers in order to give you that constant framerate that you wish between 60 FPS and 144 FPS (for example).
> 
> 
> ...


Dude, you didn't even understand what I said. You claimed that RT is useless and proprietary, when in reality it's a technology that has existed since the 1980s and is the only way to get photorealistic graphics. What does real-time RT in games have to do with this?


----------



## zx128k (May 12, 2022)

Od1sseas said:


> Jesus you're delusional.
> RT is not proprietary, it existed since the 1980's
> Ray Tracing is the future of graphics.
> Xbox and PS5 DO support Ray Tracing.


DirectX 12 has been expanded to cover ray tracing, _machine learning_ and faster storage. This is why these features should be a normal part of any benchmarking, not a special sub-section of tests. The reason these features are treated differently is the perception that NVIDIA supports them better and delivers more performance, and that AMD must therefore be protected from negative benchmark results.


----------



## sith'ari (May 12, 2022)

> For developers *that already support DLSS 2.0,* adding FSR 2.0 support will be easy, AMD talks about days.



Exactly that!
When DLSS is present, AMD has stated that implementing FSR 2.0 is very easy. Deathloop is a game that already has DLSS implemented, so what we are seeing here is the best-case scenario for FSR 2.0.
Of course, the next logical question is how FSR 2.0 performs *when DLSS is not present*.
AMD has stated that this will be a much lengthier procedure (05:17 in the following video), and of course we don't know yet what kind of quality that implementation will have.


----------



## zx128k (May 12, 2022)

sith'ari said:


> Exactly that !!
> When DLSS is present AMD have stated that implementing FSR 2.0 is very easy. "Deathloop" is a game that already has DLSS implemented , so what we are seeing here is the best case scenario for FSR 2.0
> Of course , the next logical question is "how FSR 2.0 performs *when DLSS is not present*".
> AMD has stated that this will be a much lengthier procedure ( 05:17 at the following video) ,and of course we don't know yet what kind of quality will this implementation have.


Customers win, I guess. Now that FSR 2.0 is out, benchmarks will have to accept DLSS/FSR 2.0 results. There is no reason not to accept RT and upscaling now.


----------



## Punkenjoy (May 12, 2022)

RedBear said:


> If it "works" but the impact is severe enough to leave you with (nearly) unplayable frame rate, then it doesn't work. I mean, those minimum recommendations weren't thrown out just for fun.


There is still a big performance hit versus running the game at the lower resolution directly.

For example, from the tests above:
1440p native with TAA: 70 FPS
4K with FSR 2.0 Quality (1440p internal resolution): 52 FPS, a 26% loss vs. native 1440p
4K with DLSS 2.0 Quality (1440p internal resolution): 54 FPS, a 23% loss vs. native 1440p

So if you can take a roughly 25% hit at that internal resolution, you can probably run FSR 2.0, from what I see.
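Those percentages check out; here's the same arithmetic as a quick throwaway calculation:

```python
def loss_vs_native(native_fps: float, upscaled_fps: float) -> float:
    """Percent frame-rate loss of the upscaled 4K output versus rendering
    the same internal resolution natively."""
    return (1 - upscaled_fps / native_fps) * 100

# Frame rates quoted above (1440p internal resolution in all three cases):
fsr_loss = loss_vs_native(70, 52)   # about 26%
dlss_loss = loss_vs_native(70, 54)  # about 23%
```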


----------



## mouacyk (May 12, 2022)

So wtf is the DL in DLSS these days? Apparently AMD is doing it without any neural-network shenanigans.


----------



## MxPhenom 216 (May 12, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


RT is not proprietary to Nvidia. Nvidia RTX was before RT became a feature set of DX12.


----------



## ARF (May 12, 2022)

MxPhenom 216 said:


> RT is not proprietary to Nvidia. Nvidia RTX was before RT became a feature set of DX12.



Okay, so Nvidia RTX is proprietary; sorry for missing the "X" at the end.

AMD said that you can get your ray tracing only in the cloud. Good luck!


----------



## Od1sseas (May 12, 2022)

ARF said:


> Okey, so nvidia rtx is proprietary, sorry for missing the "x" in the end..
> 
> AMD said that you can your ray-tracing only in the cloud. Good luck!


RTX is just a marketing term for Ray Tracing. Nothing proprietary


----------



## zx128k (May 12, 2022)

mouacyk said:


> So wtf is DL in DLSS these days?  Apparently AMD is doing it without any neural network shenanigans.


As per the white paper "A Survey of Temporal Antialiasing Techniques", section 8.3, "Machine learning-based methods":



> Salvi [Sal17] enhances TAA image quality by using stochastic gradient descent (SGD) to learn optimal convolutional weights for computing the color extents used with neighborhood clamping and clipping methods (see Section 4.2). *Image quality can be further improved by abandoning engineered history rectification methods in favor of directly learning the rectification task. For instance, variance clipping can be replaced with a recurrent convolutional autoencoder which is jointly trained to hallucinate new samples and appropriately blend them with the history data [Sal17].*



Thus, DLSS uses a convolutional autoencoder for better-quality output, and Tensor cores help by providing more processing power.



> 6. Challenges
> Amortizing sampling and shading across multiple frames does sometimes lead to image quality defects. *Many of these problems are either due to limited computation budget* (e.g. imperfect resampling), or caused by the fundamental difficulty of lowering sampling rate on spatially complex, fast changing signals. In this section we review the common problems, their causes, and existing solutions.


Tensor cores increase the computational budget, and the convolutional autoencoder helps with the second part, the lowered sampling rate, by hallucinating new samples.

You can process ML tasks any way you like, but real time puts a limit on how long you can spend processing the image, which can reduce quality if there is not enough processing power. Tensor cores are faster at this task than normal shader cores. You can fall back to normal processing, but as Intel states for the DP4a version of XeSS, both quality and performance are reduced compared with the XMX (Intel's tensor-core equivalent) version.
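For reference, the "variance clipping" rectification the survey contrasts with DLSS's learned approach is simple to sketch. Here's a toy single-channel version (`gamma` and `alpha` are assumed tuning constants; real implementations work on color, often in YCoCg space):

```python
def variance_clip(history: float, neighborhood: list, gamma: float = 1.0) -> float:
    """Clamp the history sample to a confidence interval built from the mean
    and standard deviation of the current frame's 3x3 neighborhood. This is
    the hand-engineered TAA rectification step that DLSS replaces with a
    learned autoencoder."""
    n = len(neighborhood)
    mean = sum(neighborhood) / n
    variance = sum((c - mean) ** 2 for c in neighborhood) / n
    sigma = variance ** 0.5
    lo, hi = mean - gamma * sigma, mean + gamma * sigma
    return min(hi, max(lo, history))

def taa_resolve(history: float, current: float,
                neighborhood: list, alpha: float = 0.1) -> float:
    """Blend the rectified history with the current sample (an exponential
    moving average), as in a typical TAA resolve pass."""
    rectified = variance_clip(history, neighborhood)
    return (1 - alpha) * rectified + alpha * current
```

DLSS swaps `variance_clip` for a trained recurrent autoencoder; FSR 2.0 keeps an engineered rectification step broadly along these lines.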


----------



## ARF (May 12, 2022)

Od1sseas said:


> RTX is just a marketing term for Ray Tracing. Nothing proprietary



It is proprietary, though.



> *Nvidia GeForce RTX* (Ray Tracing Texel eXtreme)





> RTX runs on Nvidia Volta-, Turing- and Ampere-based GPUs, specifically utilizing the Tensor cores (and new RT cores on Turing and successors) on the architectures for ray-tracing acceleration.



Nvidia RTX - Wikipedia


----------



## zx128k (May 12, 2022)

ARF said:


> It is proprietary, though..
> 
> 
> 
> ...


DX12U and DXR are proprietary to Microsoft. They are the standard both AMD and NVIDIA must follow; RT, ML and DirectStorage are covered by it. There is no point to be made here.

And he has an AMD CPU and GPU. Starting to see a pattern...


----------



## Od1sseas (May 12, 2022)

ARF said:


> It is proprietary, though..
> 
> 
> 
> ...


Wikipedia is wrong here. First of all, Tensor cores are not used for ray-tracing acceleration; that's the RT cores' job. As I've said, RTX is a marketing term for ray tracing, which is NOT proprietary.


----------



## kapone32 (May 12, 2022)

zx128k said:


> DX12u and DXR are proprietary to microsoft.  They are the standard both AMD and nvidia must follow.  RT, ML and DirectStorage are covered in this standard. There is no point to be made here.


The Tensor cores are the part of RTX that's proprietary.


----------



## zx128k (May 12, 2022)

Tensor cores exist on cards without ray-tracing cores. There were no mainstream GeForce graphics cards based on Volta, for example, and it was NVIDIA's first chip to feature Tensor cores; Volta's Tensor cores are first generation, while Ampere has third-generation Tensor cores. RT cores are on Turing and its successors.

> RTX runs on Nvidia Volta-, Turing- and Ampere-based GPUs, specifically utilizing the Tensor cores (and new RT cores on Turing and successors) on the architectures for ray-tracing acceleration.


----------



## tomcug (May 12, 2022)

AFAIK on Volta, the Tensor cores are only used in a ray-tracing context for DLSS processing. There is no evidence they are used for RT acceleration.


----------



## zx128k (May 12, 2022)

tomcug said:


> AFAIK on Volta the Tensor cores are only used in ray-tracing context for DLSS processing. There is no evidence they are used for RT acceleration.


I read some papers where denoising can use AI, and in the future you could reduce the number of rays needed in a scene. The AI works most of the rays out, so you don't have to process as much. It's like a kind of DLSS, but for the rays in a scene.

This video is just easier to watch, or you can go to the source.

I believe this won't be ready for NVIDIA's next-generation cards, but the generation afterwards will likely use AI to massively speed up ray tracing, as their Tensor cores will support this method. If NVIDIA's next-generation cards do have this feature, AMD will be far behind in ray tracing. This is why AMD's lack of Tensor- or XMX-like cores is a big deal for the future.


----------



## tomcug (May 12, 2022)

zx128k said:


> I read some papers were denoising can use AI, and in future you could reduce the number of Rays needed in a scene.


Yes, the Tensor cores can be used for AI-based denoising. However current games don't do that and whether the future ones will, we shall see.


----------



## zx128k (May 12, 2022)

tomcug said:


> Yes, the Tensor cores can be used for AI-based denoising. However current games don't do that and whether the future ones will, we shall see.


I believe it's being researched right now and will appear in the future. The output of this method can be seen here. Once this technology hits the mainstream, ray tracing will look magical.



> We propose the concept of neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration for solving integral equations. So far, the core challenge of applying the method of control variates has been finding a good approximation of the integrand that is cheap to integrate. We show that a set of neural networks can face that challenge: a normalizing flow that approximates the shape of the integrand and another neural network that infers the solution of the integral equation.
> 
> We also propose to leverage a neural importance sampler to estimate the difference between the original integrand and the learned control variate. To optimize the resulting parametric estimator, we derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
> 
> When applied to light transport simulation, neural control variates are capable of matching the state-of-the-art performance of other unbiased approaches, while providing means to develop more performant, practical solutions. Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of insignificant visible bias.



Ignoring RT and AI is not what the mainstream should be doing, and we should not let companies like AMD ignore it either. Screaming that FSR 2.0 does not need Tensor cores basically misses the point of what is happening in computer graphics.

The end result is going to be something like this:

Note how this AI network enhances GTA 5 so that it looks almost photoreal in real time. Ignoring AI and downplaying it misses the whole direction graphics is currently heading.


----------



## Punkenjoy (May 12, 2022)

zx128k said:


> I read some papers were denoising can use AI, and in future you could reduce the number of Rays needed in a scene.  The AI works most of the rays out and you dont have to process as much.  Its like a kind of DLSS but for the rays in a scene.
> 
> This video is just easier to watch or you can go to the source.
> 
> ...


Denoising in RT is already being used right now without AI. Again, as with this thread's topic: just because you can do something with AI doesn't mean you must do it with AI.

One thing to keep in mind is that there is very little time to run inference (applying what the neural network has learned) in real time. You can save some time by running things asynchronously, but only up to a point (and Nvidia is already doing that).

There are plenty of good, relatively cheap denoising algorithms you can use without resorting to AI.

AI has more of a future in areas where it can save a huge amount of work, for example helping to create 3D maps like in the Nvidia demo. For real-time rendering, I don't think we are there yet; there is not enough frame time for AI to really make a difference, and it would have to do much more work.

If you want to use AI to get some result, it must deliver that result way faster than it would have taken to compute with a proper algorithm.

Right now, I think Nvidia is trying to sell AI to gamers when in reality they want (for good reason) to get a foothold in the AI market. They have Tensor cores to sell to gamers, and they are trying to do things with them that don't need to be done with them.

On top of that, they can stamp the AI word on it and be trendy. People will think it's magic.


----------



## zx128k (May 12, 2022)

Punkenjoy said:


> Denoising in RT is already been used right now without the use of AI. Again, this is similar to this thread, it's not because you can do something with AI that you must do it with AI.
> 
> One of the thing to keep in mind is there is very few time to run the inference (applying what the neural network have learn) in Realtime. you can save some time by trying to run things asynchronously, but up to a point. (and Nvidia is already doing that).
> 
> ...


They explain it all here.


----------



## 80-watt Hamster (May 12, 2022)

Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.


----------



## zx128k (May 12, 2022)

80-watt Hamster said:


> Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.


It's cheaper to develop games with ray tracing.  You don't have to have the goal of photorealism.  You can use upscaling to better realize your artistic vision by adding more detail and then upscaling to maintain performance.  Tensor cores can also power better in-game AI through DX12's support for machine learning.


----------



## Nordic (May 12, 2022)

FSR 2.0 is working on the Steam Deck. This tech is going to be more useful in the console / fixed-hardware space than on PCs.


----------



## THU31 (May 12, 2022)

80-watt Hamster said:


> Am I the only one here who is okay with their video games looking like, you know, video games? Photorealism is a bad target, IMO.


Photorealistic look and natural look are two different things.

I like realism when it comes to the behavior of lighting, shadows and reflections. But I do not like when games go for a natural, bland, washed out look. Art style is very important and I definitely prefer a unique design with an interesting color palette over something that tries to look like the real world. If I want to see the real world, I just go outside.

You get the same thing with movies and tv shows. While they all look inherently realistic, they usually do not look natural, thanks to the use of apertures and filters.


----------



## thepath (May 12, 2022)

Why compare only still images? What about quality in motion?


----------



## zx128k (May 12, 2022)

thepath said:


> Why compare only still images? What about quality in motion?


That's the big issue with the temporal method; DLSS had all kinds of issues in motion.  Both look good in YouTube videos, but so did FSR 1, and it was complete garbage.


----------



## ARF (May 12, 2022)

zx128k said:


> DX12u and DXR are proprietary to microsoft.  They are the standard both AMD and nvidia must follow.  RT, ML and DirectStorage are covered in this standard. There is no point to be made here.
> 
> And he has an AMD CPU and GPU.  Starting to see a pattern....



DXR is the standard which everyone must follow. Not only AMD and nvidia, but also Intel, hopefully you didn't forget about it.

RTX is NOT supported by non-nvidia graphics cards...


----------



## zx128k (May 12, 2022)

ARF said:


> DXR is the standard which everyone must follow. Not only AMD and nvidia, but also Intel, hopefully you didn't forget about it.
> 
> RTX is NOT supported by non-nvidia graphics cards...


That was my....

More FSR 2 videos for anyone that cares. 

DLSS has more fine detail and FSR 2 looks sharper. - HWU  
In motion, FSR 2 loses more detail and is less stable than native. DLSS is more stable and has more fine detail compared to native.


----------



## Assimilator (May 12, 2022)

Please ignore the troll ARF, who doesn't know anything about what they post.


----------



## ModEl4 (May 12, 2022)

W1zzard said:


> In Deathloop, the modes available are "Quality," Balanced," and "Performance." *"Ultra Quality" from FSR 1.0 has been removed because it was just "Quality" with "Sharpening,"* which can now be adjusted separately.


Hi @W1zzard, can you elaborate?
I thought "Ultra Quality" mode used 1.3x scaling per dimension and "Quality" mode 1.5x.
How is "Ultra Quality" mode just the same as "Quality" mode plus sharpening, as you said?
Is AMD giving developers the option to use a lower-than-advertised resolution in "Ultra Quality" mode while still letting them call it "Ultra Quality" mode?



			https://gpuopen.com/wp-content/uploads/2021/07/fsr_qualitytable_1200px.jpg?imbypass=true
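For reference, the per-dimension factors from the linked GPUOpen quality table can be turned into render resolutions with a quick sketch (mode names and factors are taken from that table; the rounding is my assumption):

```python
# Per-dimension scale factors from AMD's FSR 1.0 quality table (gpuopen.com).
# Render resolution = output resolution / factor, per axis.
FSR_SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution for a given output size and quality mode."""
    f = FSR_SCALE[mode]
    return round(out_w / f), round(out_h / f)

# At 4K output, "Quality" renders at 2560x1440 while "Ultra Quality"
# renders at roughly 2954x1662 -- genuinely different internal resolutions.
print(render_resolution(3840, 2160, "Quality"))        # (2560, 1440)
print(render_resolution(3840, 2160, "Ultra Quality"))  # (2954, 1662)
```

So if the table is taken at face value, the two modes are not the same input resolution, which is exactly why the question above is worth asking.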


----------



## Imsochobo (May 13, 2022)

zx128k said:


> That was my....
> 
> More FSR 2 videos for anyone that cares.
> 
> ...



Some fine detail renders better on NV, some better on AMD. He brought up points like the wires for the balloons being aliased in motion on Nvidia but not on AMD, while a metal fence on top of a building was better on Nvidia.

The edge for fine detail goes, as you say, to Nvidia, but in motion I didn't see or hear a clear winner.

I need to bring out the Kepler 780 Ti and 7970 and see how they run, or if they run at all 
FSR 1.0 worked on the 7970 just fine, which I think is really the game changer for these technologies: they just run and work.


----------



## kapone32 (May 13, 2022)

zx128k said:


> That was my....
> 
> More FSR 2 videos for anyone that cares.
> 
> ...


Pinnacle of technology knowledge.


----------



## wolf (May 13, 2022)

Bear in mind that this is just one game, and so far multiple reviewers/sites/tech-tubers are only covering this one because it seems it's the only one they can cover.

We've been seeing the preview images for weeks now, so AMD has had considerable time to work with the developer to, at least in theory, make this a best-case showing for FSR 2.0. Especially given that they talk about integration taking mere days when a game already has DLSS, they've been fine-tuning the heck out of this one.

I would love for this to be the result we can generally expect, mind you, but we all need to be appropriately cautious until the consistency of results starts to come together.


----------



## Minus Infinity (May 13, 2022)

Last year Lisa Su said something along the lines that we could expect FSR hardware acceleration in RDNA3, so we might see large gains in performance compared to RDNA2, although how you'd equalize performance for comparison, other than through clocks, I don't know.


----------



## InVasMani (May 13, 2022)

Recus said:


> Where is crowd who always claiming that DLSS is blurry? Now if FSR achieves DLSS image quality does it mean it has same amount of blur?
> 
> 
> 
> ...


Those people will probably circle back around to "it's too sharp."


----------



## pantherx12 (May 13, 2022)

ARF said:


> AMD's mistake is that it answers these dirty initiatives by nvidia. Tessellation, and now RT... Do you remember when nvidia paid a game developer to REMOVE the DX 10.1 implementation (Assassin's Creed DX10.1) in which the Radeons were better?


AMD introduced tessellation before Nvidia, I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.


----------



## cobhc (May 13, 2022)

This looks very promising and being GPU platform agnostic/open source is great news. I went for an Nvidia laptop (during the GPU shortage) and DLSS was a big reason for that, but now I might stay with AMD for my next desktop upgrade if they manage to nail RT performance with RDNA3.

I really can't believe some of the massively ill-informed comments in this thread though. A lot of you might want to fact check yourself before you claim that current gen consoles can't do RT or that they only do RT reflections, etc.


----------



## tomcug (May 13, 2022)

pantherx12 said:


> AMD introduced tessellation before Nvidia, I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.


They used to call it TruForm, and it actually dates back to the Radeon 8500, which was released in 2001.


----------



## chrcoluk (May 13, 2022)

thegnome said:


> It is good that this is finally getting on dlss standard while not requiring any more die space. Imagine the costs Nvidia could save by not having to include tensor cores (or having much less) while still offering the same stuff. Or they just find more uses for it than just DLSS and some pro stuff, on a seperate die for that MCM future


Save lots of money, but maybe also sell fewer GPUs.

I always felt these special cores were there to try to lock people into Nvidia as a vendor.  Although I have used Nvidia for many years and currently have a 3080, I have never been a fan of RT, and DLSS is OK but is weak in that it requires game devs to support it alongside the special cores.

DLSS/RT hardware cost may be the reason why AMD can add more VRAM and compete at similar price points.


----------



## zx128k (May 13, 2022)

Imsochobo said:


> Some fine detail renders better on NV, some better on AMD. He brought up points like the wires for the balloons being aliased in motion on Nvidia but not on AMD, while a metal fence on top of a building was better on Nvidia.
> 
> The edge for fine detail goes, as you say, to Nvidia, but in motion I didn't see or hear a clear winner.
> 
> ...


I quote from the video: they state that in motion, DLSS was better; the image was more stable and had more fine detail.  In the still image, DLSS had more fine detail.  This makes 100% sense given the extra processing power available to DLSS via the tensor cores.  FSR 2 looks completely usable and not too bad, really.  FSR 1, the original "DLSS killer," was complete garbage for image quality.

His system:
Ryzen 7 5800X3D
XFX RX 6800 Speedster



pantherx12 said:


> AMD introduced tessellation before Nvidia, I think it was on the HD 3000 series cards. They had a weird frog dude demo to show it off.


ATI Radeon 8500, in 2001.  Then Nvidia had better performance, so there were special tessellation benchmarks.  There were complaints because AMD cards were hit harder performance-wise.



> However, with all of its geometric detail, the DX11 upgraded version of _Crysis 2_ now manages to push that envelope. The guys at Hardware.fr found that enabling tessellation dropped the frame rates on recent Radeons by 31-38%. The competing GeForces only suffered slowdowns of 17-21%.





> Radeon owners do have some recourse, thanks to the slider in newer Catalyst drivers that allows the user to cap the tessellation factor used by games. Damien advises users to choose a limit of 16 or 32, well below the peak of 64.
> 
> *As a publication that reviews GPUs, we have some recourse, as well. One of our options is to cap the tessellation factor on Radeon cards in future testing.
> 
> Yet another is to exclude Crysis 2 results from our overall calculation of performance for our value scatter plots, as we’ve done with HAWX 2 in the past.*- source



So the "too much tessellation" narrative began, and it was "corrected" by messing with driver settings and cherry-picking benchmarks, all so Radeon cards could perform better.  Now, like then, ray tracing is too heavy on Radeon cards and needs another "special treatment".

Thus began the "people want a 'better' implementation of DX11 and thus tessellation" era.

Then there was "you can't use PhysX results because Nvidia was so far ahead."  Thus came the same attacks against it: it's not open source, they don't work on the CPU code.



> Physx is a joke, and a detriment to the community, so I don’t get why you are bothering to defend it.* Whether or not Physx had potential doesn’t matter as long as it’s being manipulated to sell video cards.* *You should support open alternatives instead.*


That would be non-ATI/AMD video cards being sold.  "Open alternatives instead."  Sounds like DLSS vs. FSR.

The same with DLSS, but Nvidia won that one for now.  The same with ray tracing.  Thus innovations get destroyed, just so a Radeon card can perform better, then get accepted once Radeon performance reaches parity with Nvidia.

source



> l33t-g4m3r
> 11 years ago
> Trolls trolling trolls. If you don’t like my post editing, don’t argue with me. I’m just clarifying my views and fixing grammar. Neither of you have anything good to say anyway, since your points depend on shaky evidence, or don’t matter since Ageia is defunct. Once you look at the whole picture, the arguments fall through. All I gotta do is point out the hole in the boat, and voila, it sinks.
> Physx is a joke, and a detriment to the community, so I don’t get why you are bothering to defend it. The cat’s been out of the bag for a while too, so arguing about it now is like beating a dead horse.
> Whether or not Physx had potential doesn’t matter as long as it’s being manipulated to sell video cards. You should support open alternatives instead.


----------



## ratirt (May 13, 2022)

HUB stated that DLSS wins over FSR 2.0 in some cases with better picture quality, loses in others, or the two are equal. The clear advantage for DLSS over FSR 2.0 is speed: DLSS is a notch faster, by up to 6%. FSR 2.0, on the other hand, supports every graphics card out there, and that is a huge advantage.


----------



## zx128k (May 13, 2022)

ratirt said:


> HUB stated that DLSS wins over FSR 2.0 in some cases with better picture quality, loses in others, or the two are equal. The clear advantage for DLSS over FSR 2.0 is speed: DLSS is a notch faster, by up to 6%. FSR 2.0, on the other hand, supports every graphics card out there, and that is a huge advantage.


Open and supporting all GPUs is an old AMD PR trick.  It was used to attack PhysX, which gave a massive performance uplift, just not on AMD.  AMD uses it to kill off innovations that hurt its performance.


----------



## Trunks0 (May 13, 2022)

zx128k said:


> I quote from the video: they state that in motion, DLSS was better; the image was more stable and had more fine detail.  In the still image, DLSS had more fine detail.  This makes 100% sense given the extra processing power available to DLSS via the tensor cores.  FSR 2 looks completely usable and not too bad, really.  FSR 1, the original "DLSS killer," was complete garbage for image quality.
> 
> His System
> R9 5800x3d
> ...


Some HUGE gaps in time there and some weird remembering of how things played out.

Tessellation was first intro'ed in hardware in 2001 by ATi with the 8500, as TruForm, but tessellation didn't see wide use till years and many DX versions later, when it became programmable vs. fixed. Then, when nVidia got a performance advantage, they used GameWorks to get developers to implement nVidia's own coded effects that used tessellation. The problem, however, is those effects used tessellation in extremely over-the-top ways. But because it was black-box code from nVidia, developers couldn't optimize or adjust its performance. The only thing ATi/AMD could do about it in the short term was control the tessellation levels driver-side.

PhysX was BS. nVidia bought it and locked its hardware acceleration to CUDA. They also blocked using a dedicated GeForce card for it if another vendor's card was detected in the system. Furthermore, they purposefully didn't optimize the CPU path, making the effects only perform well with CUDA acceleration. And later they all but abandoned the hardware-accelerated functionality in favour of wider adoption as a general physics engine that ran CPU-side anyway.

The rest on innovation getting destroyed is kinda just hyperbole.


----------



## zx128k (May 13, 2022)

Trunks0 said:


> Some HUGE gaps in time there and some weird remembering of how things played out.
> 
> Tessellation was first intro'ed in hardware in 2001 by ATi with the 8500, as TruForm, but tessellation didn't see wide use till years and many DX versions later, when it became programmable vs. fixed. Then, when nVidia got a performance advantage, they used GameWorks to get developers to implement nVidia's own coded effects that used tessellation. The problem, however, is those effects used tessellation in extremely over-the-top ways. But because it was black-box code from nVidia, developers couldn't optimize or adjust its performance. The only thing ATi/AMD could do about it in the short term was control the tessellation levels driver-side.
> 
> ...


It was called "over-tessellation" because AMD cards could not handle the feature well.  So there was a massive misinformation campaign about how the feature was overused in games, and thus it was "justified" to reduce tessellation settings.  The reality was that this was an issue with AMD's performance; Nvidia had no problems.  AMD brought out hacks in their drivers to restore performance.  Yes, restore performance. lol

Remember when DXR was nothing but a fad and you should just get a 10 series card, or stay on a 10 series card?  Get a 1660.  Don't buy a 20 series card.  You can't see the difference between RT and raster anyway.  DLSS is all blurred and useless.  Did you play Control and Metro Exodus in raster mode because of that lie?  Just because you never got it doesn't mean it was untrue.  Did you get that it was a con as well?

PhysX was amazing if you played the games that supported it. You could use your old Nvidia card for it. I remember it in Fallout 4, Mirror's Edge, Star Citizen, the Batman games, the Metro series and others. It was you who got conned into thinking it was crap, along with all the arguments against it. You're still so invested in that con that you can't admit it to yourself. Also, PhysX used to affect the physics score for the 3DMark (2011?) benchmark. This meant that Nvidia cards all had the best overall scores, which caused a hatred from AMD owners like you would not believe. This started the attack on PhysX to protect AMD and their lack of innovation.  So why did it disappear? Direct _Physics_ is officially a part of DirectX 12. This was to use _Havok_ Physics, but it disappeared from the pages of history afterwards. NVidia GameWorks.  List of games, not complete. Note The Witcher 3. Note the use of the term PhysX.



> "We have invested over 500 engineering-years of work to deliver the most comprehensive platform for developing DirectX 12 games, including the world's *most advanced physics simulation engine*," said Tony Tamasi, senior vice president of content and technology at NVIDIA. "These resources will ensure that GeForce gamers can enjoy the very best game experience on DirectX 12 titles, just as they have on DirectX 11 games."



Remember "DLSS won't catch on, it's closed and only supports Nvidia cards"?  "FSR 1 is better than DLSS" (all versions of it, supposedly) "and open source."  Let's not forget the videos now showing FSR 1 behind FSR 2, and FSR 2 not quite as good as DLSS.

Remember when the AMD 6000 series was an Nvidia killer, yet the Nvidia 30 series basically now controls the market?  There are more cards with tensor cores than ever.  DLSS support won't be a problem for most gamers; after all, they bought either 20 or 30 series cards.  Not bad for an "RTX fad that will pass"; it's now in DX12U (so much for being a fad that will pass).  How does FSR 2 help, then, by being open to any GPU?  After all, if you really need FSR 2 for an old GPU, then maybe upgrade.  If the market is anything to go by, they will upgrade to an Nvidia GPU with tensor cores and thus get DLSS support.

This hyperbole goes on forever.

Stop believing this nonsense and then trying to convince others.  I am tired of it.


----------



## 80-watt Hamster (May 13, 2022)

zx128k said:


> It was called "over-tessellation" because AMD cards could not handle the feature well.  So there was a massive misinformation campaign about how the feature was overused in games, and thus it was "justified" to reduce tessellation settings.  The reality was that this was an issue with AMD's performance; Nvidia had no problems.  AMD brought out hacks in their drivers to restore performance.  Yes, restore performance. lol
> 
> Remember when DXR was nothing but a fad and you should just get a 10 series card or stay on a 10 series card.  Get a 1660 card. Don't buy a 20 series cards.  You cant see the difference between RT and raster anyway. DLSS is all blurred and useless.  Did you play Control and Metro Exodus because of that lie in raster mode.  Just because you never got it was untrue.  Did you get it was a con as well.
> 
> ...



Nvidia has basically controlled the market for at least ten years.  The 6000 series had no chance to be a "killer" of anything; the mindshare, production and distribution for AMD simply weren't there.  What it did manage, though, was superior efficiency and price/performance in certain cases.  Except in ray tracing, of course, which seems to be the only thing you care about.  Correct me if I'm wrong, but do ray-traced engines not still run on a raster core?  In any event, raster performance is still _very_ relevant.  And what the AMD partisans said about FSR 1.0 and tessellation doesn't particularly matter anymore. 

Let's focus on the subject at hand, then: based on the information available RIGHT NOW, FSR 2 has the potential to give DLSS a run for its money.  That's it.  RT doesn't enter into it.  FSR 1 doesn't enter into it.  Tessellation doesn't enter into it.

Seriously, it's like AMD ran over your dog and then Nvidia gave you a puppy or something.  Chill out.


----------



## zx128k (May 13, 2022)

80-watt Hamster said:


> Nvidia has basically controlled the market for at least ten years.  The 6000 series had no chance to be a "killer" of anything; the mindshare, production and distribution for AMD simply weren't there.  What it did manage, though, was superior efficiency and price/performance in certain cases.  Except in ray tracing, of course, which seems to be the only thing you care about.  Correct me if I'm wrong, but do ray-traced engines not still run on a raster core?  In any event, raster performance is still _very_ relevant.  And what the AMD partisans said about FSR 1.0 and tessellation doesn't particularly matter anymore.
> 
> Let's focus on the subject at hand, then: based on the information available RIGHT NOW, FSR 2 has the potential to give DLSS a run for its money.  That's it.  RT doesn't enter into it.  FSR 1 doesn't enter into it.  Tessellation doesn't enter into it.
> 
> Seriously, it's like AMD ran over your dog and then Nvidia gave you a puppy or something.  Chill out.


RT is the only thing the whole market cares about, or did you miss the fact that it's center stage for the consoles and for DX12?  All the 3D engines are updating to use DXR.  Unreal Engine 5 brings ray tracing to nearly all platforms.  It's you who's conning yourself.  Whether FSR 2 lasts is basically in doubt.  No one really bought an AMD 6000 series card, and this is not an opinion.  It only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market.  Unreal Engine 5 supports TSR, which leaves FSR 2, well, looking for a place to live.  Sure, AMD will pay for a few developers to use FSR 2, like with FSR 1, and it's useful for AMD cards in DXR games, so some big AAA titles may support it, but that's really it as far as I can see.  There is a small part of the market that will use FSR 2 and a much bigger part (almost all of it) that will use DLSS.

As far as I can see, FSR 2 is slower than DLSS.  It has less fine detail and is less stable, more so in motion.  PCWorld stated:



> If you want to game at 1440p with FSR 2.0, you'll probably need at least a *Radeon RX 5600 or Vega GPU, or a GTX 1080 or RTX 2060 on Nvidia's side*—though that's not a universal truth.



So people on low-end hardware are not really going to use FSR 2 to its fullest.

Also, AMD cared so much about how well FSR 2 runs on other hardware that they tuned it only for RDNA2: *AMD FSR 2.0 upscaling is tuned to run faster on RDNA 2-powered graphics cards.*


----------



## 3211 (May 13, 2022)

Digital Foundry released a very competent video about this.  It makes this AMD PR article completely useless.  FSR has the usual faults of this technique, from movement, to particles, to hair, to transparency and detail stability.  It has a much higher compute cost on AMD cards, and almost double that on Ampere.  DLSS is universally better than FSR 2, and every RTX owner should choose DLSS.  But it remains a success for people with older hardware.


----------



## R0H1T (May 13, 2022)

zx128k said:


> *RT is the only thing the whole market cares about* or did you miss the fact its center stage for the consoles and for DX12.


----------



## zx128k (May 13, 2022)

3211 said:


> Digital Foundry released a very competent video about this.  It makes this AMD PR article completely useless.  FSR has the usual faults of this technique, from movement, to particles, to hair, to transparency and detail stability.  It has a much higher compute cost on AMD cards, and almost double that on Ampere.  DLSS is universally better than FSR 2, and every RTX owner should choose DLSS.  But it remains a success for people with older hardware.


They stated it's not a DLSS killer.  DLSS has better performance and quality, which is to be expected with the extra performance of tensor cores.  That said, this is a win for people with AMD hardware.  FSR 2.0 is tuned for RDNA2 hardware, so it should be even slower on current Nvidia hardware and older hardware in general.  If you are on an RTX 2060-2070, you are going to use DLSS.  The fact that some reviews state a GTX 1080 for 1440p would imply older hardware is not really the goal: this is meant to run on RDNA2 GPUs and have far worse performance on others by design.  The cache on RDNA2 is what makes 4K possible with good performance and speeds up FSR 2 in general.



> AMD says FSR 2.0 will still work on even older GPUs, but your mileage may vary. Due to FSR 2.0's additional computing requirements on the GPU, *you might not get a performance uplift at all when using FSR 2.0 on older GPUs*, especially if you try running at higher resolutions. As we showed with our GTX 970, GTX 1080, RX 480, and RX Vega 64 testing, you should expect lower performance gains if you use FSR 2.0 on older hardware.
> The second caveat is GPU support. According to AMD, there are limits to what FSR 2.0 can do on older hardware. AMD recommends an RX 5700 or RTX 2070 as a baseline for 4K upscaling with FSR 2.0. For 1440p, AMD recommends at least an RX 5600 XT or GTX 1080 for optimal performance, and at 1080p AMD recommends an RX 590 or GTX 1070.
> source


So I guess older hardware is not really the focus, and it's just RDNA2 GPUs that get the full benefit.  So on Nvidia, going for 4K upscaling, you would only use DLSS, as FSR 2 is worse for you.  1440p is not really low-end hardware territory.  1080p is outside the GTX 1060's abilities if AMD is correct, and a GTX 1070 is not really low-end hardware either.

Seems the only low-end hardware could be RDNA2 APUs....

FSR 2.0 is basically as complex as DLSS 2.x now; both need frame data, motion vectors, and depth buffers.  Maybe it too will lose to FSR 1 and its easy integration/lower development time.
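As a rough illustration of that shared input set, a temporal upscaler's per-frame inputs can be sketched like this (field names are hypothetical, not the actual FSR 2 or DLSS API):

```python
# Hypothetical sketch of the per-frame data a temporal upscaler such as
# FSR 2.0 or DLSS 2.x consumes. Field names are illustrative only, not
# the real FSR 2 / DLSS API.
from dataclasses import dataclass

@dataclass
class TemporalUpscalerInputs:
    color: object           # current frame, rendered at the lower resolution
    depth: object           # depth buffer, used to reject stale history
    motion_vectors: object  # per-pixel motion, to reproject the last frame
    jitter: tuple           # sub-pixel camera jitter offset for this frame
    exposure: float         # scene exposure, for tone-consistent blending

def upscale(inputs: TemporalUpscalerInputs, history):
    """Conceptually: reproject the history buffer with the motion vectors,
    validate it against depth/color, then blend with the jittered frame."""
    ...

frame = TemporalUpscalerInputs(color=None, depth=None, motion_vectors=None,
                               jitter=(0.25, -0.25), exposure=1.0)
```

The point being: integrating either technique means wiring up the same engine-side buffers, which is why FSR 2.0's integration effort now resembles DLSS 2.x's rather than FSR 1.0's simple spatial filter.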


----------



## Trunks0 (May 13, 2022)

zx128k said:


> It was called over-tessellation because AMD cards could not handle the feature well.  So there was a massive misinformation campaign about how the feature was over used in games.  Thus it was justified to reduce tessellation settings.  The reality was this was an issue with AMD's performance and nvidia had no problems.  AMD brought out hacks in their drivers to restore performance.  Yes restore performance. lol
> 
> Remember when DXR was nothing but a fad and you should just get a 10 series card or stay on a 10 series card.  Get a 1660 card. Don't buy a 20 series cards.  You cant see the difference between RT and raster anyway. DLSS is all blurred and useless.  Did you play Control and Metro Exodus because of that lie in raster mode.  Just because you never got it was untrue.  Did you get it was a con as well.
> 
> ...



...where to start... I'll just do it by paragraph and quote the start of each to make it easier to follow. I'll leave it fully quoted above 

"It was called over-tessellation..." No, the performance on nVidia hardware sucked as well, but was mostly playable. That was intentional, because nVidia had the performance advantage. The black-box code was un-modifiable by the developers, so even if they wanted to adjust it to improve performance, they couldn't. You can google this; it's a known part of graphics history. It wasn't always the case, though; sometimes nVidia honestly just nailed it.

"Remember when DXR was nothing but a fad..." Yeah, because the 20xx series' performance wasn't really much better than the 10xx series at first, and RT was barely in use yet. DLSS 1.x was a blurry, ugly mess; DLSS 2.x addressed the issue. And RT was kinda poorly implemented in a lot of early games, so buying hardware to use it wasn't really worth it yet. It took a while before devs really started to get the hang of where to use it and where not to. It was/is totally worth it in some games.

Also, a fun little factoid: Control's DLSS implementation didn't use the tensor cores. Often referred to as DLSS "1.9", Control later updated to DLSS 2.x and was commonly used to compare DLSS 1 and 2.

"PhysX was amazing..." Yes, it was pretty damn cool, and yes, you could dedicate your old card to it, but only if you had nVidia GPUs in your system. And 3DMark dropped it in 2008, the year nVidia bought Ageia and locked down PhysX. You can google this history, but it basically boiled down to nVidia hardware being able to potentially cheat the PhysX-powered benchmark, so 3DMark just removed it.

The rest is just regular GPU wars stuff. People should buy whatever hardware matches what they want out of it. *shrug* same as always.

*late edit*
Also, Havok's GPU acceleration died when Intel bought Havok. It was hardware agnostic, btw, and was demoed on both GeForce and Radeon hardware the year before they were bought by Intel.


----------



## 80-watt Hamster (May 13, 2022)

zx128k said:


> RT is the only thing the whole market cares about, or did you miss the fact that it's center stage for the consoles and for DX12?  All the 3D engines are updating to use DXR.  Unreal Engine 5 brings ray tracing to nearly all platforms.  *It's you who's conning yourself.*  Whether FSR 2 lasts is basically in doubt.  No one really bought an AMD 6000 series card, and this is not an opinion.  It only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market.  Unreal Engine 5 supports TSR, which leaves FSR 2, well, looking for a place to live.  Sure, AMD will pay for a few developers to use FSR 2, like with FSR 1, and it's useful for AMD cards in DXR games, so some big AAA titles may support it, but that's really it as far as I can see.  There is a small part of the market that will use FSR 2 and a much bigger part (almost all of it) that will use DLSS.
> 
> As far as I can see, FSR 2 is slower than DLSS.  It has less fine detail and is less stable, more so in motion.  PCWorld stated:
> 
> ...



Conning myself about what, exactly?


----------



## zx128k (May 13, 2022)

Trunks0 said:


> ...where to start... I'll just do it by paragraph? and just quote the start of them to make it easier to follow. I'll leave it fully quoted above
> 
> "It was called over-tessellation..." No, the performance on nVidia hardware sucked as well, but was mostly playable. That was intentional, because nVidia had the performance advantage. The black-box code was un-modifiable by the developers, so even if they wanted to adjust it to improve performance, they couldn't. You can google this; it's a known part of graphics history. It wasn't always the case, though; sometimes nVidia honestly just nailed it.
> 
> ...


Control went through lots of DLSS versions; only version 1.9 was not tensor based, and only in Control.  It also looked like complete crap.  I was playing Control at the time and was not happy with DLSS 1.9.

As a person who really did play with DLSS 1.x: it was bad at the start.  Then you would get an update and it was magic.  FSR 1 was complete garbage and could not match DLSS 1.



> Evidently it was as Metro Exodus received an update that was said to improve DLSS and improve it did. Below we're including several screenshots taken in game with performance overlays included so that you can see scene for scene the performance change when running the game at 4K with DLSS on vs off (Ultra settings, Hairworks and PhysX enabled). One thing to take away here is that first impressions are hard to shake, but sometimes deserve a second look once ironed out. *Let us know down in the comment section if this changes your mind on what is possible with DLSS because clearly it can improve and with the click of a button you can get comparable image quality with healthy performance gains.* The hotfix updates the Steam game version to 1.0.0.1 while the Epic store version will be updated to version 1.0.1.1.  source 1 source 2





> *UPDATE 21 FEBRUARY 2019*
> *Build numbers:*
> 
> Epic build (verify in-game from Main Menu Options or Pause menu) – 1.0.1.1
> DLSS fixes and improvements to sharpness


You can tell apart the people who learned about DLSS 1 from propaganda and the people who actually played games with DLSS 1. DLSS 1.0 was released in February 2019 with Metro Exodus and Battlefield V, so this massive image-quality increase happened within a month of DLSS 1's release.

PhysX cards could do the same; it was not just Nvidia cards. This is not cheating, as there have been accelerator cards since the dawn of computing, and they are not cheating either. After all, GPUs are one type of accelerator card. By your argument, we have all been cheating in Time Spy by installing a high-end GPU instead of running everything on the CPU. The only problem was AMD owners. That said, I had two 7970s and two 290X GPUs. This is the stuff that turned me off AMD: all the lies and reviews that can't be trusted.

The issue was too much tessellation, not AMD performance (source). We all know it was AMD cards; I had two 7970s and two 290Xs. The reason it was so bad is in bold: that is why they were going to make changes in the drivers and ban games from benchmarks.


> Unnecessary geometric detail slows down all GPUs, of course, but it just so happens to have a much larger effect on DX11-capable AMD Radeons than it does on DX11-capable Nvidia GeForces. The Fermi architecture underlying all DX11-class GeForce GPUs dedicates more attention (and transistors) to achieving high geometry processing throughput than the competing Radeon GPU architectures. We’ve seen the effect quite clearly in synthetic tessellation benchmarks. Few games have shown a similar effect, simply because they don’t push enough polygons to strain the Radeons’ geometry processing rates. However, with all of its geometric detail, the DX11 upgraded version of _Crysis 2_ now manages to push that envelope. *The guys at Hardware.fr found that enabling tessellation dropped the frame rates on recent Radeons by 31-38%. The competing GeForces only suffered slowdowns of 17-21%.*
> 
> Radeon owners do have some recourse, thanks to the slider in newer Catalyst drivers that allows the user to cap the tessellation factor used by games. Damien advises users to choose a limit of 16 or 32, well below the peak of 64.
> 
> As a publication that reviews GPUs, we have some recourse, as well. One of our options is to cap the tessellation factor on Radeon cards in future testing. Another is simply to skip _Crysis 2_ and focus on testing other games. Yet another is to exclude _Crysis 2_ results from our overall calculation of performance for our value scatter plots, as we’ve done with _HAWX 2_ in the past.


----------



## Trunks0 (May 13, 2022)

zx128k said:


> Control used lots of DLSS versions, only version 1.9 was not tensor based and only in Control.  It also looked like complete crap.  I was playing Control at the time, was not happy with DLSS 1.9.
> 
> As a person that really did play with DLSS 1.x, its was bad at the start.  Then you would get an update and it was magic.  FSR 1 was complete garbage and could not match DLSS 1.
> 
> ...



DLSS 1 sucked. It's well documented, and anyone can easily google what happened. Updates helped increase its fidelity, but it still sucked. It was funny at the time that in some scenarios a simple bilinear upsample with CAS sharpening looked better than DLSS 1. But that was mostly down to DLSS 1 just not really panning out. Metro got some special attention, so it's easily the best-looking version of DLSS 1 we saw. But DLSS 2.x is when DLSS really came into its own and became what we always hoped it would be.

"PhysX cards could do the same, its was not just nvidia cards" — google what happened in 2008 with the 3DMark Vantage benchmark. Not sure you're remembering what happened. It made sense to remove it.


----------



## zx128k (May 13, 2022)

Trunks0 said:


> DLSS 1 sucked. It's well documented and anyone can easily google what happened. Updates helped increase it's fidelity, but it still sucked. It was funny at the time, that in some scenarios that a simple bilinear upsample with CAS Sharpening looked better than DLSS 1. But that was mostly down to DLSS 1 just not really panning out. Metro got some special attention, so it's easily the best looking version of DLSS 1 we saw. But DLSS 2.x is when DLSS really came into it's own and has been what we always hoped it would be.
> 
> "PhysX cards could do the same, its was not just nvidia cards" Google what happen in 2008 with the 3DMark Vantage benchmark. Not sure your remembering what happened. It made sense to remove it.


That is only true in your head; objectively it is not. I already posted evidence that DLSS 1 was not crap. Just because support for PhysX hardware acceleration was removed does not mean removing it was right. AMD scores were a joke at the time because of hardware acceleration. PhysX had hardware support from the start: anyone could buy a PhysX card, and then Nvidia cards got PhysX support. That was the truth of it, and AMD scores suffered. Action was taken to protect AMD performance numbers from widespread PhysX hardware support, a pattern that repeats right up to DXR and DLSS.

Cheating is what Nvidia did with the GeForce FX series of cards. It's in the video where they dropped colour depth to 16-bit to increase performance and then told no one about it.


----------



## Trunks0 (May 13, 2022)

I followed my own advice and googled it, because reading your posts made me think I should look back. I was being too harsh (on DLSS 1).

The PhysX bit though... It had to be removed. You can't allow a vendor locked, closed sourced feature that is top to bottom controlled by a competing vendor be used as a comparison benchmark.  nVidia modified the PhysX API to do something the Benchmark wasn't supposed to test for. The benchmark was designed for CPU and PPU. nVidia then changed the API to use CUDA acceleration on the GPU. That skewed the results and is exactly the sorta reason you can't allow it, it's an obvious conflict of interest.


----------



## zx128k (May 13, 2022)

Trunks0 said:


> I followed my own advice and googled it. Because reading your posts made me think I should look back. I was being too harsh, it wasn't great though.
> 
> The PhysX bit though... It had to be removed. You can't allow a vendor locked, closed sourced feature that is top to bottom controlled by a competing vendor be used as a comparison benchmark.  nVidia modified the PhysX API to do something the Benchmark wasn't supposed to test for. The benchmark was designed for CPU and PPU. nVidia then changed the API to use CUDA acceleration on the GPU. That skewed the results and is exactly the sorta reason you can't allow it, it's an obvious conflict of interest.


The benchmark supported hardware acceleration, and PhysX cards were supported in CPU test 2 only.



> Since only the second CPU test in 3DMark Vantage from the PPU (Physics Processing Unit) can benefit, the performance increases logically only in this part. With the PPU a 34 percent higher computing power is achieved, whereby the CPU result increases from 16,139 points to 17,820 points. The overall result remains pretty unimpressed, however, as the CPU value only has a marginal influence on the overall score.


This was known but the problem was what happened next.


> The Inquirer posted something up about driver cheating this week and that got the industry buzzing. The Inq claimed that NVIDIA was using in-house PhysX API’s and that they were able to manipulate the score in 3DMark Vantage since they can make the graphics card, drivers and physics API that is used for the benchmark. Our test scores showed the significant performance increase that everyone was up in arms about, but from what we could tell it was just off-loading the workload from the CPU to the GPU. The new NVIDIA drivers allow GPUs to do what once took a dedicated AGEIA PhysX card! The days of running a PPU and a GPU in a system are soon to be long gone!





> NVIDIA *is not* using driver optimizations to cheat on benchmarks, they are just doing what someone with a PhysX card could do months ago. source


----------



## MxPhenom 216 (May 13, 2022)

Od1sseas said:


> RTX is just a marketing term for Ray Tracing. Nothing proprietary



Well, now it is. It was Nvidia's way of naming GPUs to differentiate between RTX and GTX video cards: RTX cards have RT core hardware, GTX cards don't. But that was only really relevant for the Turing generation; with Ampere, even low-to-mid-end cards are all RTX.



ARF said:


> Okey, so nvidia rtx is proprietary, sorry for missing the "x" in the end..
> 
> AMD said that you can get your ray-tracing only in the cloud. Good luck!
> 
> View attachment 247161


What's your point? That might be the most honest press release slide I have ever seen from a corporation in this industry.

But make no mistake: RDNA2 has RT hardware in it. It's first-gen, a different implementation from Nvidia's, and it appears to be not as good, but acting like RT is proprietary to Nvidia is absurd and false.

We are a long way off from fully, natively ray-traced scenes in video games, and if that requires cloud and/or AI to help, so be it. Why the dig at AMD about that? Ray tracing will remain only part of the local rendering pipeline for quite a while, I suspect.


----------



## kapone32 (May 13, 2022)

zx128k said:


> Control used lots of DLSS versions, only version 1.9 was not tensor based and only in Control.  It also looked like complete crap.  I was playing Control at the time, was not happy with DLSS 1.9.
> 
> As a person that really did play with DLSS 1.x, its was bad at the start.  Then you would get an update and it was magic.  FSR 1 was complete garbage and could not match DLSS 1.
> 
> ...


First of all, who created PhysX, and why was it favorable to Nvidia? As for Crossfire support, in those days it did bring a compelling improvement. It does not matter, though, because you are not convincing anyone with your revised edition of history.


----------



## InVasMani (May 13, 2022)

zx128k said:


> RT is the only thing the whole market cares about or did you miss the fact its center stage for the consoles and for DX12.  All the 3d engines are being update or updating to use DXR. That Unreal Engine 5 brings Ray Tracing to nearly all platforms.  Its you who conning yourself.  FSR 2 is basically in doubt if it lasts.  No one really bought an AMD 6000 series card and this is not an opinion.  Its only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market.  Unreal Engine 5 supports TSR which leaves FSR 2 well looking for a place to live.  Sure AMD will pay for a few developers to use FSR 2, like with FSR 1 and its useful for AMD cards in DXR games so some big AAA titles may support it but thats really it as far as I can see.  There is a small part of market that will use FSR 2 and a much bigger part (almost all the market) that will use DLSS.
> 
> As far as I can see FSR 2 is slower than DLSS.  It has less fine details and is less stable.  This is also more so in motion.  PC world stated.
> 
> ...



It's a core feature, not center stage; 4K 120 Hz is the center-stage defining feature of current-generation consoles. Also, PhysX being disabled when other-brand GPUs were detected was very anti-competitive and anti-consumer. Imagine the reverse: Intel/AMD doing that with the CPU when detecting an Nvidia GPU. Oops, there goes your computer's functionality; shouldn't have installed Nvidia, better luck next time.


----------



## zx128k (May 13, 2022)

InVasMani said:


> It's a core feature not center stage 4K 120Hz is the center stage defining feature of current generation consoles. Also in terms of Physx being disabled when other brand GPU's were detected that was very anti-competitive and anti-consumer. Imagine in reverse Intel/AMD doing that with the CPU when detecting a Nvidia GPU oops there goes your computer functionality shouldn't have installed Nvidia better luck next time.


For a while it worked, and then it was disabled: disabled as soon as you installed an AMD GPU. I was not happy about that either.

Anyway, RT and consoles: Unreal Engine 5's Lumen, its software ray tracing, was designed from the ground up with consoles in mind. Basically, Lumen for RT support and TSR for upscaling.
Also you have games like this one.









@kapone32 asks a question already answered.


> First of all who created Physx and why was it favorable to Nvidia. As far as Crossfire support in those days it did bring a compelling improvement. It does not matter though because you are not convincing anyone with your revised edition of History.


As posted above.


> *AGEIA *PhysX card


BFG was one of the companies that manufactured the Ageia PhysX card. After Ageia's acquisition by Nvidia, dedicated PhysX cards were discontinued in favor of GeForce GPUs. 3DMark Vantage used a PPU (Physics Processing Unit) in the second CPU test, so at first Ageia PhysX card owners got a speed increase; after the acquisition, GeForce GPUs could be used instead of Ageia PhysX cards. Nvidia was then accused of cheating because Nvidia systems now had PhysX acceleration and were scoring higher. As per the sources given above, all that had happened was that GeForce GPUs were now acting as a PPU for the second CPU test, since Nvidia had changed the PhysX API to support CUDA processing. Thus Nvidia had better performance.


----------



## Trunks0 (May 14, 2022)

The CUDA acceleration just highlighted an issue with having a test that used "PhysX" in the suite. That's why it got removed: not because Radeon users were whining or because AMD needed some sort of protection. That's hyperbole. It got removed because once nVidia had control of the PhysX stack, it wasn't a fair benchmark anymore. Way too much conflict of interest.

UE5 designed its entire system around being as hardware-agnostic as possible. It makes sense for them, as they want their engine to be as adaptable as possible.


----------



## zx128k (May 14, 2022)

Trunks0 said:


> The CUDA acceleration just highlighted an issue with having a test that used "PhysX" in the suite. It's why it got removed. Not because Radeon users where whining or because AMD need some sort of protection. That's hyperbole. It got removed because once nVidia had control of the PhysX stack, it wasn't a fair benchmark anymore. Way to much conflict of interest.
> 
> UE5 designed its entire system around being as hardware agnostic as possible. It makes sense for them, as they want thier engine to be as adaptable as possible.


NVIDIA Responds To GPU PhysX Cheating Allegation  Why would AMD have anything to say? It's ATI at this point.
NVIDIA, PhysX, and the “C” Word
nVidia PhysX overwrites Vantage .dll's - Results now removed from Hall of Fame
GPU PhysX Doesn't get you to 3DMark Vantage Hall of Fame Anymore


> With NVIDIA releasing their GeForce PhysX drivers, users of the PhysX accelerating GeForce cards were at an advantage over their Radeon counterparts...





> The relation of GPU acceleration for gaining higher 3DMark scores in physics tests has been controversial to say the least. Futuremark has now decided to update its Hall of Fame to exclude all results using PhysX on a GPU, simply because this was not how they intended it to work. It has also been updated to organise the results better for easier comparison. You will be able to use GPU physics processing to get a 3DMark score, you will not be able to make it to the Hall of Fame using it. You can use an Ageia PhysX card to assist your 3DMark score to make it to the Hall of Fame, as that's how Futuremark intended PhysX processing scores to assist your final scores.


The issue was that the GeForce GPU had the feature, but you could still use an Ageia PhysX card.

TechPowerUp comment at the top:


> exodusprime1337
> that is the stupidest shit ever.   once again futuremark comes out with a way to uneven the scores.  All because it's not an ageia physx processor the scores don't count, what a crock of shit.   The fact of the matter is that amd cards are unable to do the physx processing on they're own so now the whole lot has to suffer cause amd cards just can't cut it anymore.  Bullshit





> Kursah
> 
> 
> > ghost101No its because nvidia had more money and effectively bought the performance crown in this benchmark. If I was involved with futuremark, i'd be pissed as well. If AMD had the money, the scenario could have been the other way around. *How does this tell me which card is actually better?*
> ...





> mullered07
> the whole point futuremark are making is that the physX test in 3dmark vatage were made for the cpu not a gpu and it doesnt represent real world gaming as the nvidia gpus are only being used for the physx in the test and not for rendering graphics at the same time which is what would happen in a real world scenario (ie gpu would be rendering graphics and physx at the same time)
> 
> and ati gpu are fully capable of doing physx only they would have to create there own api as cuda belongs to nvidia (who didnt create it either before the nvidia fanboys start)





> 1c3d0g
> This is why I dislike Futuremark and their stupid benches so much. It's just some rabid fan boys trying to measure who's e-penis is the biggest, but at the end of the day, what did they really "win"? Even if they get the highest score, they're still retarded...nobody with an ounce of sanity wastes so much time and energy into such a pointless benchmark.





> farlex85
> 
> 
> > warhammer3dMark vantage is not real world gaming or performance..
> ...





> Hayder_Master
> ati cards now is very good and have high score in 3d mark , so nvidia find the weak point in vantage and they took it and develop software like hacking on 3d mark to increase the score , and we must not forget the 3d mark 2006 score Affected with high cpu , so it is weak point too


Really goes on forever.

Remember, PhysX cards were meant to be supported; that means you could buy an AGEIA PhysX card. The big deal was that it became an Nvidia feature on an Nvidia GPU.


> Tero Sarkkinen, Futuremark CEO, has written that Futuremark will drop GPU PhysX from 3DMark Vantage next time. The reason given is that it is very unfortunate that the GPU affected the CPU test.





> yeh! i was expected this to happen. Afterall, replacing Drivers is ofcourse a Cheating..hyeah:
> :rofl: source



Update on PhysX on ATI Radeon cards – NVIDIA offers help on the project


> *It seems that AMD still is not being cooperative*, we get the *feeling that they want this project to fail*. Perhaps their plans are to strangle PhysX since *AMD and Intel have Havok*. The truth is… *Nvidia is now helping us with the project and it seems they are giving us their blessings*. It’s very impressive, inspiring and motivating to see Nvidia’s view on this.


PhysX also runs on ATI Radeon HD cards!
Then it died; guess who killed off PhysX on AMD cards.

But it never really ended: GameWorks has PhysX built in. It must be cheating too.
Nvidia Gameworks "Cheating" in Final Fantasy XV Benchmark



> *burn420247* 4 years ago#1
> 
> 
> 
> ...





> *JKatarn* 4 years ago#3
> Not surprising, given the fact that the whole point of "GameWorks" is to sap so much performance that e-peeners will hopefully run out and splurge on the top-end card in the hopes of bruteforcing decent performance out of games with it enabled. Ditto PhysX in most games.


UE5 was designed for the consoles; Lumen, which is software ray tracing, was designed for next-generation consoles. It says so all over the engine's documentation, and this is not the only engine that provides ray tracing on the latest consoles.
Well, enough of this; back to FSR 2.


----------



## Trunks0 (May 14, 2022)

zx128k said:


> NVIDIA Responds To GPU PhysX Cheating Allegation  Why would AMD have anything to say, its ATI at this point.
> NVIDIA, PhysX, and the “C” Word
> nVidia PhysX overwrites Vantage .dll's - Results now removed from Hall of Fame
> GPU PhysX Doesn't get you to 3DMark Vantage Hall of Fame Anymore
> ...


"Why would AMD have anything to say, its ATI at this point." because where talking about 2008 and AMD bought ATi in 2006.

The cheating allegations didn't really matter. Once nVidia owned PhysX it was never going to work out.

The offer to let AMD have CUDA was BS from the start and is laughable. No one is going to implement a competitor's closed-source standard. The circus at this point was nuts, as you can see from all your links. The reality was that nVidia had PhysX and they weren't honestly going to let anyone else in. They didn't, and they killed PPU support about a year after they bought Ageia, with version 2.8.3 of PhysX.

Some dates of note
AMD Buys ATi - 2006
CUDA intro'ed - 2006
nVidia buys Ageia - 2008
CUDA PhysX acceleration - 2008
OpenCL Intro'ed - 2009
nVidia ends PPU acceleration with PhysX with version 2.8.3 - 2009

Once nVidia owned PhysX it was game over.


----------



## zx128k (May 14, 2022)

Trunks0 said:


> Dude it AMD by 2008, because they bought ATi in 2006.
> 
> "Why would AMD have anything to say, its ATI at this point." because where talking about 2008 and AMD bought ATi in 2006. So it's AMD, not ATi.
> 
> ...


Even if NVidia had made it open for all, AMD would not take part. Even if DLSS were completely open source (it is, after the hack), AMD would not have anything to do with it. They can't anyway, as it won't work on AMD hardware: without tensor or XMX cores, the performance would not be there. Once NVidia was the source of DLSS, AMD wouldn't touch it. AMD only made FSR open source because they are desperate to kill off DLSS; they want Intel etc. to support their standard. This time it won't work, because Nvidia owns enough market share to go it alone.

It's 100% the same thing AMD did with PhysX: once it was Nvidia's child, it was dead to them.


----------



## InVasMani (May 14, 2022)

It doesn't matter, because Nvidia didn't in fact make it open. AMD isn't the only one that could have taken part; others like Microsoft could just as easily. The point is more that Nvidia had no f*cking intention of doing so.


----------



## zx128k (May 14, 2022)

Trunks0 said:


> You talk as if nVidia is some bastion of open source who is known to be easy to collaborate with.


NVidia likely knew that AMD would stop PhysX on their cards, so nVidia was happy to help. AMD is playing the same game with DLSS: they claim the moral high ground for being open source with FSR 1, but they are really just attacking nVidia's DLSS. FSR 1 is basically dead now and the code is worthless.


----------



## InVasMani (May 14, 2022)

I ate all the cake because I knew someone else might want a piece. I was happy to help.


----------



## 80-watt Hamster (May 14, 2022)

InVasMani said:


> I ate all the cake because I knew someone else might want a piece. I was happy to help.



Aww, I wanted cake.  I hope you get indigestion. 

/s


----------



## Trunks0 (May 14, 2022)

I wasn't gonna respond... but we're talking about FSR now, so why not.



zx128k said:


> NVidia likely knew that AMD would stop PhysX on their cards.  So nVidia are happy to help.  AMD is playing the same game with DLSS.  They are the moral high ground for being open source with FSR 1 but they are really just attacking nVidia's DLSS.  FSR 1 is basically dead now and the code is worthless.



"NVidia likely knew that AMD would stop PhysX on their cards.  So nVidia are happy to help."

What nonsense. It's a completely obvious and laughable PR stunt, bullshit. You're not that naive.

"AMD is playing the same game with DLSS.  They are the moral hi ground for being open source with FSR 1 but they are really just attacking nVidia's DLSS"

They had to respond to DLSS with something; even Intel did/is with XeSS. FidelityFX CAS alone wasn't really going to cut it, so they started multiple efforts (google it). The project that won out first was Lotte's (if you don't know who this is, you should; the guy is awesome sauce). His spatial method won out because of its ease of implementation and the performance/quality ratio it struck, which is kinda his thing if you look at his past projects. It uses EASU (Edge-Adaptive Spatial Upsampling) in combo with CAS sharpening.

"FSR 1 is basically dead now and the code i worthless."

Hardly; it's a very good, fast, and highly compatible scaler. And because it's open source, anyone can implement it, iterate on it, and use it, which is why it has popped up in a lot of places and been very helpful (the adoption rate has been nuts). Notably, other places it showed up include emulation, VR, and consoles. That will keep happening, as the EASU scaler portion is great for what it is before you start going toward more advanced methods. The CAS sharpening filter it combos with is also awesome sauce.

Furthermore, AMD implemented it driver-side as RSR (Radeon Super Resolution), handy for plenty of situations. Far from dead.
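To make the CAS half of that combo concrete, here is a minimal sketch of contrast-adaptive sharpening on a grayscale image with values in [0, 1]. The real FidelityFX CAS shader works per RGB channel with a carefully tuned weight formula; the function name, the `strength` parameter, and the exact weights below are illustrative assumptions, not AMD's code.

```python
import numpy as np

def cas_sharpen(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Sharpen a [0, 1] grayscale image, scaling the effect by local contrast."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Cross-shaped neighbourhood around the pixel
            n, s = img[y - 1, x], img[y + 1, x]
            e, wv, c = img[y, x + 1], img[y, x - 1], img[y, x]
            lo = min(n, s, e, wv, c)
            hi = max(n, s, e, wv, c)
            # Adaptive amount: large in low-contrast areas, backed off near
            # already-high-contrast edges (the "contrast adaptive" part)
            amp = np.sqrt(max(0.0, min(lo, 1.0 - lo, hi, 1.0 - hi)) / max(hi, 1e-5))
            wgt = -amp / (8.0 - 3.0 * strength)  # negative ring weight
            # Weighted combination of centre and neighbours, renormalised
            out[y, x] = np.clip(
                ((n + s + e + wv) * wgt + c) / (4.0 * wgt + 1.0), 0.0, 1.0)
    return out
```

Even in this toy form the key behaviour survives: flat regions pass through unchanged (the weights cancel exactly), while pixel values on either side of an edge are pushed apart.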


----------



## zx128k (May 14, 2022)

Trunks0 said:


> I wasn't gonna respond... but where talking about FSR now, so why not.
> 
> 
> 
> ...


I mean FSR 1 is dead because, going forward, AAA games will use FSR 2. I agree FSR 2 is the response they should have had to DLSS. Nvidia already has an FSR 1 replacement, NIS, built into the drivers.

I don't think anyone expects AMD to support CUDA and get locked into a standard they don't control just so their drivers can support PhysX. To quote the Dune movie: sometimes gifts are not given out of love.


----------



## kapone32 (May 14, 2022)

zx128k said:


> I mean FSR 1 is dead because going forward AAA games will use FSR 2.  I agree FSR 2 was the responce that they should have had to DLSS.  Nvidia already have a FSR 1 replacement built into the drivers NIS.
> 
> I dont think anyone expects AMD to support CUDA and get locked into a standard they dont control.  Just so their drivers can support PhysX.  A quote from Dune the moive, sometimes gifts are not given out of love.


A standard they don't control? What ?


----------



## Trunks0 (May 14, 2022)

kapone32 said:


> A standard they don't control? What ?


If AMD adopted CUDA acceleration, they would be adopting a standard they have zero say or control over. There was a point in 2008, around PhysX, where nVidia essentially played a PR stunt of saying they were open to AMD adopting CUDA so they could have PhysX acceleration.


----------



## kapone32 (May 14, 2022)

Trunks0 said:


> If AMD adopted CUDA acceleration, they would be adopting a standard they have zero say or control over. There was a point in 2008, around PhysX, where nVidia essentially played a PR stunt of saying they where open to AMD adopting CUDA so they could have PhysX acceleration.


After Nvidia bought PhysX, they basically ruined all its promise. I believe there was only one game other than Arkham that had full PhysX deployment. Then, when Nvidia found out that people were buying their cheapest cards to run PhysX, they basically abandoned it. The thing about it is that PhysX was really cool for what it was; it definitely made Batman more enjoyable.


----------



## zx128k (May 14, 2022)

kapone32 said:


> After Nvidia bought Physx and basically ruined all the promise. I believe there was only one Game other than Arkham that had full Physx deployment. Then Nvidia found out that people were buying their cheapest cards to run Physx they basically abandoned it.  The thing about it is Physx was really cool for what it was, definitely made Batman more enjoyable.


PhysX is used in Metro Exodus. The GameWorks library includes guides for the Core SDK, Direct3D and OpenGL graphics/compute samples, as well as information on both OptiX and *PhysX tools*.


----------



## Trunks0 (May 14, 2022)

There were plenty of cool implementations of PhysX that could use hardware acceleration.


----------



## wolf (May 15, 2022)

So more content has surfaced about FSR 2.0, and it appears that it handles ghosting really well in this title; perhaps nvidia have something to learn when they delve into the source code. I do hope both, or indeed any, technique can learn from what this one does well, perhaps improve on it, and have it run faster.

It also appears that it has more obvious visual glitches and falls further behind DLSS the lower the output res and input res drops. Then I also saw that Ampere cards take less of a frametime penalty to run FSR? Was not expecting that. The plot thickens.


----------



## zx128k (May 15, 2022)

wolf said:


> So more content has surfaced about FSR 2.0 and it appears that it handles ghosting really well in this title, perhaps nvidia have something to learn when they delve into the source code. I do hope both or indeed any technique can learn from what it does well, and perhaps improve on it and have it runs faster.
> 
> It also appears that it has more obvious visual glitches and falls further behind DLSS the lower the output res and input res drops. Then I also saw that Ampere cards take less of a frametime penalty to run FSR? Was not expecting that. The plot thickens.


This is why using a recurrent convolutional autoencoder(1), i.e. the DLSS method, is better. FSR 2 will show more obvious visual glitches and will fall further behind DLSS as the output and input resolutions drop.

1.  A recurrent convolutional autoencoder is one which is jointly trained to hallucinate new samples and appropriately blend them with the history data. source
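The temporal-accumulation core that FSR 2 and DLSS share can be sketched in a few lines. A hedged 1-D toy follows, where the neighbourhood clamp stands in for FSR 2's hand-tuned heuristics; DLSS instead trains a network to do the blending and rejection, as the footnote above describes. The function name and `alpha` value are illustrative assumptions.

```python
import numpy as np

def temporal_accumulate(history: np.ndarray, current: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Blend reprojected history with the new frame, clamping out ghosts."""
    clamped = history.copy()
    for i in range(len(current)):
        # Min/max of the current frame's neighbourhood around sample i
        lo = current[max(0, i - 1):i + 2].min()
        hi = current[max(0, i - 1):i + 2].max()
        # Reject stale history that falls outside the current neighbourhood
        clamped[i] = np.clip(history[i], lo, hi)
    # Exponential moving average: mostly history, a little new sample
    return (1.0 - alpha) * clamped + alpha * current
```

The clamp is the interesting part: when the scene changes and the history becomes stale, it is snapped into the current frame's value range immediately instead of ghosting for many frames, at the cost of occasionally discarding valid detail, which is exactly the fine-detail/stability trade-off being argued about here.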


----------



## medi01 (May 15, 2022)

So there goes "you need tensor cores" eh? Color me surprised.

Also note the missing "AI" bit.



rutra80 said:


> Well done. Now the only feature AMD is missing, is performant raytracing.


It is really about who is closer to the developers:




 







btarunr said:


> NVIDIA uses Tensor cores for ray tracing, too, for the AI denoiser.


Is that part of DXR? I thought DXR was exclusively about path tracing.


----------



## Icy1007 (May 15, 2022)

ARF said:


> DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
> The PS5 and new XBox do not support RT, so the gamers actually do not need it.
> 
> AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.


RT is not proprietary to Nvidia. AMD GPUs support RT and so do both PS5 and Xbox Series X.


FSR 2.0 is inferior to DLSS 2.0 both in quality and in performance.


----------



## medi01 (May 15, 2022)

zx128k said:


> Its cheaper to develop games with ray tracing.


It is a promise, not a fact. 
And then, we have other promises too, zero hardware path tracing here:


----------



## zx128k (May 15, 2022)

medi01 said:


> It is a promise, not a fact.
> And then, we have other promises too, zero hardware path tracing here:


Basically, in raster games you have to create extra light sources to fake bounce lighting, and you spend hours getting the lighting to look natural. With ray tracing you model real lighting: once you create the source, the engine computes all the bounce lighting and effects you need.

The demo you linked to uses Unreal Engine 5; this engine uses Lumen, which is a method of ray tracing. Lumen will also use hardware support for ray tracing. You pointed at a demo and did not get that Lumen is a form of ray tracing. Lumen is software, so it hits the CPU hard, but it also supports hardware ray tracing. Lumen is Unreal Engine 5's fully dynamic global illumination and reflections system, designed for *next-generation consoles*. High-end PCs will use their hardware support for better image quality. UE5 also supports path tracing. Software mode also has big limitations not found in Lumen's hardware-supported mode. It's not like Lumen is a replacement for hardware ray tracing; hardware ray tracing is an integral part of Lumen.
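The point above about bounce lighting can be shown numerically. This is a toy sketch with made-up values: one real light automatically yields an indirect term, which a raster pipeline would instead fake with a hand-placed fill light. The function names are our own, not any engine's API.

```python
def lambert(intensity: float, cos_theta: float) -> float:
    # Direct Lambertian term: light scaled by the angle of incidence
    return intensity * max(cos_theta, 0.0)

def shade(intensity: float, cos_direct: float,
          cos_bounce: float, bounce_albedo: float) -> float:
    direct = lambert(intensity, cos_direct)
    # One indirect bounce: light reflected off a neighbouring surface,
    # attenuated by that surface's albedo, arrives "for free" with RT
    indirect = bounce_albedo * lambert(intensity, cos_bounce)
    return direct + indirect

# A point facing away from the light (cos_direct = 0) still receives
# bounce_albedo * intensity from the bounce
shadowed = shade(1.0, 0.0, 1.0, 0.5)
```

That shadowed-but-lit result is exactly the effect artists spend hours faking with fill lights in a pure raster pipeline.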


----------



## wolf (May 16, 2022)

medi01 said:


> So there goes "you need tensor cores" eh? Color me surprised.


Years later, a solution comes out that doesn't look as good or run as fast and doesn't need tensor cores, colour me surprised.


medi01 said:


> we have other promises too, zero hardware path tracing here


Lumen absolutely can and does leverage hardware, just not in that demo.


----------



## medi01 (May 16, 2022)

zx128k said:


> Basically in raster games you have to create more light sources for bounce


That is jumping to another horse, hilariously; so now hardware RT is faster, eh?

RT needs multiple steps, and ray intersection is just one of them. Creating the object structure to check for intersections, updating it when the scene changes, and doing all the other steps (denoising is one of them) come on top. (Basically ALL of that, bar the actual intersection tests, is good old shader code.)

It has the inherent issue of being unstable in terms of how many rays you need to get palatable results.

And, hey, wait a sec, all of that works ONLY ON SOME GPUs that support hardware DXR.  Which means doing RT that way today is GUARANTEED to be more effort. And look at Cyberpunk: for what? Just for fun, for devs to learn something new, I guess, in case "in the future" RT of that kind will become a thing.
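For context on what "the actual intersection tests" means: a toy sketch, in plain Python, of the kind of ray-primitive test that DXR hardware accelerates. The function name and scene are illustrative, not from any engine:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest hit distance t along the ray, or None on a miss.
    Solves |o + t*d - c|^2 = r^2 for t, with d assumed normalized."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c            # discriminant of the quadratic (a == 1)
    if disc < 0.0:
        return None                   # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None     # ignore hits behind the ray origin

# A ray along +z from the origin hits a unit sphere centered at z=5 at t=4
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Hardware RT units accelerate exactly this kind of test (plus traversal of the acceleration structure); everything around it (building and updating that structure, shading, denoising) still runs as regular shader code, which is the point being made above.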



zx128k said:


> this engine uses Lumen which is a method of ray tracin


Yeah. A shocker. To trace rays we need some sort of ray tracing. As if it were about how things work in the real world.

Hold on, yeah, it is!



zx128k said:


> Lumen will also use hardware support for ray tracing.


It could, if NV's approach weren't so bad. So, as it stands, perhaps on the AMD platform only (where they can use shader code to traverse the structure).


----------



## zx128k (May 16, 2022)

medi01 said:


> That is jumping to another platform again, hilariously; so now hardware RT is faster, eh?
> 
> RT needs multiple steps, and ray intersection is just one of them. Creating the object structure to check for intersections, updating it when the scene changes, and doing all the other steps (denoising is one of them) come on top. (Basically ALL of that, bar the actual intersection tests, is good old shader code.)
> 
> ...


In raster games, to avoid very dark areas (no bounce lighting), the level designer has to create extra light sources.  They have to work hard and take their time to get decent-looking lighting.  With ray tracing, once you place the light source the game engine does all that time-consuming work for you: it models the light and creates the bounce lighting.  Thus it takes less time for a level designer to create lighting effects with ray tracing.

The downside of ray tracing is that it takes more processing, but you get a more realistic lighting model in the game engine.  Raster is faster because it is a very simplistic and unnatural lighting system; it takes a lot of work to make it look natural.  Ray tracing (which includes path tracing) comprises methods to model how real-world light interacts with objects.  Many of the features of path tracing, for example, are not possible with other lighting methods.

Example at 1:25:17










Look at the graphics in the original Metro Exodus, which uses raster lighting, then compare with Metro Exodus Enhanced Edition, which uses a fully ray-traced lighting method.  The difference is massive; image quality is hugely improved.


----------



## AusWolf (May 16, 2022)

This is awesome news - and an awesome article!

If AMD can optimise their raytracing performance with RDNA 3, I'll see no reason to stay with Nvidia during my next upgrade.


----------



## medi01 (May 16, 2022)

zx128k said:


> In raster games, to avoid very dark areas (no bounce lighting), the level designer has to create extra light sources. They have to work hard and take their time to get decent-looking lighting. With ray tracing, once you place the light source the game engine does all that time-consuming work for you: it models the light and creates the bounce lighting. Thus it takes less time for a level designer to create lighting effects with ray tracing.


No, thus it takes less time for a level designer to address that CHERRY PICKED ISSUE that you decided to single out.



zx128k said:


> The downside of ray tracing is that it takes more processing, but


There are many and they have already been mentioned.

"it takes less time to develop" - lies (as of today)
"it looks better" - lies (as of today)
"it tanks performance" - yeah, true, it does 



zx128k said:


> Raster takes a lot of work to make it look natural. Ray tracing (which includes path tracing) comprises methods to model how real-world light interacts with objects.


That touches on another lie: that path tracing is enough for photorealism. No, it isn't. 
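For reference, the equation the linked article covers, which path tracing only estimates:

$$L_o(x,\omega_o) = L_e(x,\omega_o) + \int_\Omega f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\, (\omega_i \cdot n)\, d\omega_i$$

Here $L_o$ is outgoing radiance, $L_e$ emitted radiance, $f_r$ the BRDF, and the integral sums incoming light $L_i$ over the hemisphere $\Omega$ around the surface normal $n$.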








Rendering equation - Wikipedia (en.wikipedia.org)
				






zx128k said:


> uses a full Ray Tracing method


Oh boy...



AusWolf said:


> If AMD can optimise their raytracing performance with RDNA 3, I'll see no reason to stay with Nvidia during my next upgrade.


It is quite likely that AMD GPUs are already on par or faster, and it's just a war of "which GPU was the game optimized for".

Note how having an AMD GPU in both major consoles means any cross-platform title would be stupid not to optimize for AMD.


----------



## zx128k (May 16, 2022)

medi01 said:


> No, thus it takes less time for a level designer to address that CHERRY PICKED ISSUE that you decided to single out.
> 
> 
> There are many and they have already been mentioned.
> ...



You don't have to capture every aspect of light reflection.  Movies use path tracing for their effects, and most of the time the human brain can't see the difference.  You won't.  From your source:


> **Applications**: Solving the rendering equation for any given scene is the primary challenge in realistic rendering. *One approach to solving the equation* is based on finite element methods, leading to the radiosity algorithm. Another approach using *Monte Carlo methods* has led to many different algorithms including *path tracing*, photon mapping, and Metropolis light transport, among others.


Note that path tracing is one of the methods for solving that equation.

This is an example of a real-time path-traced scene, and to the human eye it is near photorealistic.  It is not the same quality as an offline render, which would be much higher.
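Path tracing is one of those Monte Carlo methods: it estimates the rendering equation's integral by averaging random samples. A toy 1D sketch of the idea (the integrand and names here are illustrative, not from any engine):

```python
import math
import random

def mc_estimate(n):
    """Monte Carlo estimate of the 1D integral of cos(t)*sin(t)
    over [0, pi/2]; the exact value is 0.5.  Path tracers estimate
    the rendering equation's integral the same way: average
    f(sample) / pdf(sample) over n random samples."""
    width = math.pi / 2.0            # uniform pdf over [0, width] is 1/width
    total = 0.0
    for _ in range(n):
        t = random.uniform(0.0, width)
        total += math.cos(t) * math.sin(t) * width   # f(t) / pdf(t)
    return total / n

random.seed(0)
for n in (16, 256, 65536):
    print(n, mc_estimate(n))         # estimates settle toward 0.5 as n grows
```

The estimate's noise shrinks only as 1/sqrt(n), which is why ray counts matter so much in real-time path tracing and why denoisers exist at all.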










One of the issues raised in that link is subsurface scattering; a technique for it called texture-space diffusion was pioneered for the movie The Matrix Reloaded.  Real-time computer games can model this effect too.



> Separable Subsurface Scattering is a novel technique to add real-time subsurface light transport calculations for computer games and other real-time applications.
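
The "separable" part of that technique refers to splitting one expensive 2D screen-space blur into two cheap 1D passes. A toy numpy sketch of why that factorization works for a Gaussian-like kernel (all names here are illustrative):

```python
import numpy as np

# A Gaussian-like 2D kernel factors into the outer product of two 1D kernels,
# so one O(k^2)-per-pixel blur becomes two O(k)-per-pixel passes.
k1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k1 /= k1.sum()
k2 = np.outer(k1, k1)                      # equivalent full 2D kernel

img = np.random.default_rng(0).random((16, 16))   # stand-in "screen" buffer

def blur_1d(a, k, axis):
    """Zero-padded 1D convolution along one axis, same output size."""
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), axis, a)

two_pass = blur_1d(blur_1d(img, k1, axis=1), k1, axis=0)  # horizontal, then vertical

# Naive full 2D convolution for comparison (k2 is symmetric, so no flip needed)
p = np.pad(img, 2)
direct = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        direct[i, j] = np.sum(p[i:i + 5, j:j + 5] * k2)

print(np.allclose(two_pass, direct))       # True: two 1D passes match the 2D blur
```

For a k-wide kernel this turns k*k taps per pixel into 2*k, which is what makes the effect affordable in a game's post-processing budget.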











Anyway, have fun with your own strawman argument.


----------



## zx128k (May 16, 2022)

I remember Gamers Nexus talking about SSS years ago.


----------



## kapone32 (May 16, 2022)

zx128k said:


> In raster games to avoid very dark areas(no bounce lighting) the level designer has to create extra light sources.  They have to work hard and take their time to get decent looking lighting.  With Ray Tracing, once you place the light source the game engine does all that time consuming work for you, it models the light and creates the bounce lighting for you.  Thus it takes less time for a level designer to create the lighting effects with ray tracing.
> 
> The down side of ray tracing is it takes more processing but you get a more realistic lighting model in the game engine.  Raster is faster because its very simplistic and unnatural lighting system.  Raster takes alot of work to make it look natural.  Ray Tracing (which imcludes path tracing) are methods to model how real world light interacts with objects.  Many of the features of path tracing for example are not possible in other methods of lighting.
> 
> ...


And I give a shit about that in a game where I can be killed in 2 seconds from any direction while I am actually playing? Those things only matter in games like RTS or RPG where you can actually appreciate that kind of fidelity. Having said that, The Division 2 looks pretty good too, but that game has an AMD splash screen, so I guess I am biased.

Butter-smooth gameplay is the key; there have been innovations since the start of this race. Look at games like Wizard of Wor or Tempest in the arcade (certainly not graphically inspiring). Asteroids was not successful because it looked good but because it was hard and played butter smooth. BTW, you still have not explained how Spider-Man looks so good on the PS4.

Wait, Wait oh oh you will now tell me that I am going to be banned.


----------



## zx128k (May 16, 2022)

My opinion remains that benchmarks shouldn't excuse AMD's lack of DXR performance: if a AAA game at maximum settings has DXR, that's the only thing that gets tested.  Maximum settings, with no sub-sections splitting DXR and raster.  My opinion about a large enough sample size and an unbiased choice of games also remains.


----------



## tabascosauz (May 17, 2022)

This is the FSR 2.0 showcase. Stay on topic, or don't come back to this thread. Last warning.


----------



## Oberon (May 19, 2022)

RedBear said:


> If it "works" but the impact is severe enough to leave you with (nearly) unplayable frame rate, then it doesn't work. I mean, those minimum recommendations weren't thrown out just for fun.
> 
> EDIT: Talking about 1080p, is there going to be a 1080p comparison?


NVIDIA will never give you the chance to find out, and if it doesn't work, who cares? It's not a supported configuration.


----------



## RedBear (May 19, 2022)

Oberon said:


> NVIDIA will never give you the chance to find out, and if it doesn't work, who cares? It's not a supported configuration.


And who cares about finding out something that doesn't work? Nvidia did "unlock" raytracing on Pascal and GTX 16** Turing; does anyone care about it?


----------



## Oberon (May 19, 2022)

RedBear said:


> And who cares about finding out something that doesn't work? Nvidia did "unlock" raytracing on Pascal and GTX 16** Turing, does anyone care about it?


It's about not artificially limiting the technologies consumers have access to, as NVIDIA has a history of doing. You may not care about it because it's slow, but that's not a reason to block people from trying it.


----------



## Baba Yetu VI (May 25, 2022)

Sorry, I didn't read all the comments above because it might have taken hours, so if I've missed something, let me know.

I have seen some videos on YouTube showing the new effects, which are indeed impressive.
Now my thoughts relate to something far greater.

Given that the gap between the two technologies, FSR 2 and DLSS 2, has closed considerably, what are the implications for the hardware: AMD RX series or Nvidia RTX series?

At the moment seemingly fewer games support FSR, but given that integrating the technology into existing and future games is relatively easy, the community will certainly witness the rise of FSR 2 eventually. And as it gains momentum, AMD cards, which are seen as considerably behind in ray tracing performance, will match if not outperform Nvidia cards in ray tracing. In that case, will the entire landscape of the cake divided between Nvidia and AMD change? Or is that just AMD fans' unilateral wet dream?


----------



## akaloith (May 25, 2022)

Can I try an FSR 2.0 demo somewhere on my GTX 1060?


----------



## W1zzard (May 25, 2022)

akaloith said:


> can i try somewhere a fsr 2.0 demo on my gtx 1060?


Gotta buy Deathloop. I also think MS Flight Simulator has FSR 2.0 now


----------



## akaloith (May 26, 2022)

So there's no free way, demo, or app to test FSR 2.0 on my GTX 1060?


----------



## chrcoluk (May 27, 2022)

Thought I would try out FSR in RPCS3 for eternal sonata.

Resolution scaling at 200% works very nicely for the 720p game on my 1440p screen, but when I tried FSR instead, it appeared to do absolutely nothing, which I guess is an RPCS3 bug at the moment. Ironically, the guides I found said resolution scaling was broken for the game, so to use FSR instead.


----------



## Baba Yetu VI (Jun 1, 2022)

Cyberpunk 1.5 patch works great with FSR 1.0. I guess with FSR 2.0, the game will perform even better.

I'm beginning to find out that AMD ray tracing is not bad at all!


----------



## Shatun_Bear (Jun 1, 2022)

If FSR 2.0 matches DLSS 2.2, which Nvidia have been polishing for a year with their unlimited budget on top, FSR 2.2 will likely be superior. Who would have thought that would be possible 6 months ago?


----------



## arni-gx (Jun 3, 2022)

IMO......for now, i will still choose >>> RT ultra ALL + DLSS quality = best of the best..........


----------



## medi01 (Jun 7, 2022)

Shatun_Bear said:


> If FSR 2.0 matches DLSS 2.2, which Nvidia have been polishing for a year with their unlimited budget on top, FSR 2.2 will likely be superior. Who would have thought that would be possible 6 months ago?


It is on par per the TPU review, image-quality wise.

It is SUPERIOR as an offering, as a whole package, since you implement it once and it just runs on cards from any vendor, including older cards.

The G-Sync vs FreeSync story again.


----------



## chrcoluk (Jun 7, 2022)

OK, so this works in Vulkan mode on RPCS3, and since resolution scaling is broken in FF13-2 there, I've been testing the game with FSR at 70% sharpness (any higher and I get too much distortion).

I don't know whether it's implemented poorly on RPCS3, or whether it's 1.0 or 2.0, but here is my impression.

It's definitely better than modern AA such as TAA and FXAA, but I don't think it's as good as MSAA.  It also struggles a lot with thin lines and fast-moving objects.  So my overall impression is that it's good for those who lack the grunt to render at higher resolution and when there's no MSAA/SSAA support.  But if those options are available, they are better.


----------



## TheoneandonlyMrK (Jun 7, 2022)

ARF said:


> AMD's mistake is that it answers these dirty initiatives by nvidia. Tessellation, and now RT... Do you remember when nvidia paid a game developer to REMOVE the DX 10.1 implementation (Assassin's Creed DX10.1) in which the Radeons were better?


Firstly, AMD led with tessellation, not followed. Secondly, AMD do support RT on newer cards.

Thirdly, DX12 Ultimate means any new GPU needs features like RT to support it, which AMD's do, so at least catch up with reality.


----------



## InVasMani (Jun 7, 2022)

chrcoluk said:


> OK, so this works in Vulkan mode on RPCS3, and since resolution scaling is broken in FF13-2 there, I've been testing the game with FSR at 70% sharpness (any higher and I get too much distortion).
> 
> I don't know whether it's implemented poorly on RPCS3, or whether it's 1.0 or 2.0, but here is my impression.
> 
> It's definitely better than modern AA such as TAA and FXAA, but I don't think it's as good as MSAA.  It also struggles a lot with thin lines and fast-moving objects.  So my overall impression is that it's good for those who lack the grunt to render at higher resolution and when there's no MSAA/SSAA support.  But if those options are available, they are better.



It should work identically to DSR, where 2.00x resolution looks best at 50% and 4.00x looks best at 25%. Still, 34%/68% are reasonable alternatives if you don't mind some distortion trade-off for more blur/sharpness. Obviously DSR blurs while FSR sharpens, but they work entirely inverted from each other: upscale/downscale, dilate and erode.
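For reference on the scale-factor arithmetic: the upscaler's internal render resolution is just the output resolution divided by the mode's per-axis ratio. A quick sketch using FSR 2.0's published mode ratios (the int() truncation is mine; real implementations may round differently):

```python
# FSR 2.0 per-axis scale factors as published by AMD
MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w, out_h, mode):
    """Internal resolution the game renders before upscaling to out_w x out_h."""
    s = MODES[mode]
    return int(out_w / s), int(out_h / s)

for mode in MODES:
    print(mode, render_resolution(2560, 1440, mode))
# e.g. Performance mode at 1440p renders internally at 1280x720
```

DSR runs the same arithmetic in the other direction: a 4.00x factor renders at twice the output width and height, then downsamples.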


----------



## chrcoluk (Jun 7, 2022)

InVasMani said:


> It should work identically to DSR, where 2.00x resolution looks best at 50% and 4.00x looks best at 25%. Still, 34%/68% are reasonable alternatives if you don't mind some distortion trade-off for more blur/sharpness. Obviously DSR blurs while FSR sharpens, but they work entirely inverted from each other: upscale/downscale, dilate and erode.


Well, there was no way to tune the resolution multiplier; it's on/off for FSR plus a sharpness slider.

At the default 50%, it had too many sticking-out pixels on the characters' bodies (hard to explain what I mean); 70% kind of normalised it and made it look closer to resolution scaling.  Any higher, though, and I had distortion at edges, especially on text.

The resolution scaler is way superior, but in FF13-2 for some reason the textures flicker with that option, hence using FSR. Resolution scaling, when it works properly, is really nice on RPCS3.

I have never played a game that has DLSS before (I own FF15, but I played it before they added it, and I know that only has DLSS 1.0), so it was interesting to see how this worked versus the hype.


----------

