
AMD FSR 2.0 Quality & Performance

Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
RT is the only thing the whole market cares about, or did you miss the fact that it's center stage for the consoles and for DX12? All the 3D engines have been updated, or are being updated, to use DXR, and Unreal Engine 5 brings ray tracing to nearly all platforms. It's you who's conning yourself. Whether FSR 2 even lasts is in doubt. Hardly anyone bought an AMD 6000 series card, and that's not an opinion. It only takes a few clicks to add the DLSS plugin to Unreal Engine 5 and support most of the PC market, and Unreal Engine 5 also supports TSR, which leaves FSR 2 looking for a place to live. Sure, AMD will pay a few developers to use FSR 2, like with FSR 1, and it's useful for AMD cards in DXR games, so some big AAA titles may support it, but that's really it as far as I can see. There is a small part of the market that will use FSR 2 and a much bigger part (almost all of it) that will use DLSS.

As far as I can see, FSR 2 is slower than DLSS, has less fine detail, and is less stable, even more so in motion. PCWorld said as much.



So people on low-end hardware are not really going to use FSR 2 to its fullest.

Also, as if AMD cared how well FSR 2 runs on other hardware: they tuned it only for RDNA 2. AMD FSR 2.0 upscaling is tuned to run faster on RDNA 2-powered graphics cards.

It's a core feature, not center stage; 4K 120Hz is the center-stage defining feature of the current generation of consoles. Also, disabling PhysX when another brand's GPU was detected was very anti-competitive and anti-consumer. Imagine the reverse: Intel or AMD doing that with the CPU when detecting an Nvidia GPU. Oops, there goes your computer's functionality; shouldn't have installed Nvidia, better luck next time.
 
Last edited:
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
It's a core feature, not center stage; 4K 120Hz is the center-stage defining feature of the current generation of consoles. Also, disabling PhysX when another brand's GPU was detected was very anti-competitive and anti-consumer. Imagine the reverse: Intel or AMD doing that with the CPU when detecting an Nvidia GPU. Oops, there goes your computer's functionality; shouldn't have installed Nvidia, better luck next time.
For a while it worked, and then it was disabled: PhysX was turned off if you installed an AMD GPU. I was not happy about that either.

Anyway, RT and consoles: Unreal Engine 5's Lumen, its software ray tracing, was designed from the ground up with consoles in mind. Basically Lumen for RT support and TSR for upscaling.
Also you have games like this one.

@kapone32 asks a question that has already been answered.
First of all, who created PhysX, and why was it favorable to Nvidia? As for Crossfire support, in those days it did bring a compelling improvement. It does not matter, though, because you are not convincing anyone with your revised edition of history.
As posted above.
AGEIA PhysX card
BFG was one of the companies that manufactured the Ageia PhysX card. After Ageia's acquisition by Nvidia, dedicated PhysX cards were discontinued in favor of GeForce GPUs. 3DMark Vantage used a PPU (Physics Processing Unit) in the second CPU test, so at first Ageia PhysX card owners got a speed increase. After Ageia's acquisition, GeForce GPUs could be used instead of Ageia PhysX cards, and Nvidia was accused of being a cheater because Nvidia systems now had PhysX acceleration and were scoring higher. As per the sources given above, all that had happened was that GeForce GPUs were now acting as a PPU for the second CPU test: Nvidia had changed the PhysX API to use CUDA for processing, and thus got better performance.
 
Last edited:
Joined
May 12, 2022
Messages
54 (0.06/day)
The CUDA acceleration just highlighted an issue with having a test that used "PhysX" in the suite. That's why it got removed: not because Radeon users were whining or because AMD needed some sort of protection, which is hyperbole, but because once nVidia had control of the PhysX stack, it wasn't a fair benchmark anymore. Way too much conflict of interest.

UE5 designed its entire system around being as hardware-agnostic as possible. It makes sense for them, as they want their engine to be as adaptable as possible.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
NVIDIA Responds To GPU PhysX Cheating Allegation. Why would AMD have anything to say? It's ATI at this point.
NVIDIA, PhysX, and the “C” Word
nVidia PhysX overwrites Vantage .dll's - Results now removed from Hall of Fame
GPU PhysX Doesn't get you to 3DMark Vantage Hall of Fame Anymore
With NVIDIA releasing their GeForce PhysX drivers, users of the PhysX accelerating GeForce cards were at an advantage over their Radeon counterparts...
The relation of GPU acceleration for gaining higher 3DMark scores in physics tests has been controversial to say the least. Futuremark has now decided to update its Hall of Fame to exclude all results using PhysX on a GPU, simply because this was not how they intended it to work. It has also been updated to organise the results better for easier comparison. You will be able to use GPU physics processing to get a 3DMark score, you will not be able to make it to the Hall of Fame using it. You can use an Ageia PhysX card to assist your 3DMark score to make it to the Hall of Fame, as that's how Futuremark intended PhysX processing scores to assist your final scores.
The issue was the GeForce GPU had the feature, but you could still use an Ageia PhysX card.

TechPowerUp comments, from the top:
exodusprime1337
that is the stupidest shit ever. Once again Futuremark comes out with a way to uneven the scores. All because it's not an Ageia PhysX processor the scores don't count, what a crock of shit. The fact of the matter is that AMD cards are unable to do the PhysX processing on their own, so now the whole lot has to suffer because AMD cards just can't cut it anymore. Bullshit
Kursah
ghost101
No, it's because Nvidia had more money and effectively bought the performance crown in this benchmark. If I was involved with Futuremark, I'd be pissed as well. If AMD had the money, the scenario could have been the other way around. How does this tell me which card is actually better?
Hopefully looking at more than just a benchmark score! :D :laugh:

Okay, this is all well and good, but if FM knew that GPU PhysX was going to happen, and maybe even had it available to them, why not make a test for it that actually supports PhysX while rendering, to give more accurate results for starters...

So one's score is fair as long as they went out and purchased an Ageia PhysX card then? LoL... I'm sorry, but maybe FM should've kept this program under wraps until they knew better what was going to happen with NV's and AMD's more recently released technologies...

Really, as long as my games look and play well on my rig, PhysX on or off... Vantage scores don't bother me at all. I feel bad for the world that feeds off of bench scores if this has those communities up in arms. Yeah, I agree NV kinda bullied its way in on this, but if they have the technology, they might as well use it! Claimed unfair or not, it's nice to see for consumers using PhysX-enabled cards, I'm sure... and hell, now they can go check out much more entertaining pieces of software called games that support PhysX and hopefully use it. TWIWMTBP!

Interesting story, not surprising. I just hope AMD can get some official PhysX too so fanbois stop pissing and moaning and FM can then update Vantage so the PhysX test is "more fair" to both companies' cards! lol... :roll:

:toast:
mullered07
the whole point Futuremark is making is that the PhysX tests in 3DMark Vantage were made for the CPU, not a GPU, and they don't represent real-world gaming: the Nvidia GPUs are only being used for PhysX in the test and not for rendering graphics at the same time, which is what would happen in a real-world scenario (i.e. the GPU would be rendering graphics and PhysX at the same time)

and ATI GPUs are fully capable of doing PhysX, only they would have to create their own API, as CUDA belongs to Nvidia (who didn't create it either, before the Nvidia fanboys start)
1c3d0g
This is why I dislike Futuremark and their stupid benches so much. It's just some rabid fanboys trying to measure whose e-penis is the biggest, but at the end of the day, what did they really "win"? Even if they get the highest score, nobody with an ounce of sanity wastes so much time and energy on such a pointless benchmark.
farlex85
warhammer
3DMark Vantage is not real-world gaming or performance..
Real world gaming is just another benchmark, and is subject to the same biases and differences that 3dmark is. I really don't know why that argument always gets brought up. Just b/c a game is popular doesn't make it a better bench than a program like 3dmark.

I personally find it a little hard to believe FM didn't intend for this sort of effect on scores, as PhysX is built into the final, difference-making test. PhysX being there completely changes the way the test is run. Did they not know Nvidia was putting PhysX on their cards? Did they think this would just be for dedicated physics cards? I doubt it. The bench really is done poorly, and planned very poorly. That last test should be a separate category for physics; calling it a CPU score is the cause of all the frenzy. I also agree with mulder, I think it doesn't look very good at all. They need to figure out something new to accommodate this changing graphics-processing arena.
Hayder_Master
ATI cards now are very good and have high scores in 3DMark, so Nvidia found the weak point in Vantage and exploited it, developing software that is like hacking 3DMark to increase the score. And we must not forget the 3DMark 2006 score was affected by a strong CPU, so that is a weak point too.
It really goes on forever.

Remember, PhysX cards were meant to be supported; that means you could buy an AGEIA PhysX card. The big deal is that it became an Nvidia feature on an Nvidia GPU.
Tero Sarkkinen, Futuremark's CEO, has written that Futuremark will drop GPU PhysX in 3DMark Vantage next time, the reason being that it is very unfortunate that the GPU affected the CPU test.
yeh! I was expecting this to happen. After all, replacing drivers is of course cheating.. :eek:hyeah:
:rofl: source

Update on PhysX on ATI Radeon cards – NVIDIA offers help on the project
It seems that AMD still is not being cooperative, we get the feeling that they want this project to fail. Perhaps their plans are to strangle PhysX since AMD and Intel have Havok. The truth is… Nvidia is now helping us with the project and it seems they are giving us their blessings. It’s very impressive, inspiring and motivating to see Nvidia’s view on this.
PhysX also runs on ATI Radeon HD cards!
Then it died, guess who killed off PhysX on AMD cards.

But it never really ended. GameWorks has PhysX built in. It must be cheating.
Nvidia Gameworks "Cheating" in Final Fantasy XV Benchmark

burn420247 4 years ago#1


GameWorks skewing results vs AMD in FF XV: all hair is still being "rendered" even when not on-screen, causing massive performance drops for the competition lacking the driver tech.

the video shows how this type of stuff has been going on for years on both sides of the battle. Radeon is just as guilty of optimizing FPS over screen quality with driver optimizations in targeted applications.

give it a watch, learn some GPU history!
JKatarn 4 years ago#3
Not surprising, given that the whole point of "GameWorks" is to sap so much performance that e-peeners will hopefully run out and splurge on the top-end card in the hopes of brute-forcing decent performance out of games with it enabled. Ditto PhysX in most games.
UE5 was designed for the consoles; Lumen, which is software ray tracing, was designed for the next-generation consoles. It says so all over the engine's documentation. This is not the only engine that provides ray tracing on the latest consoles.
Well enough of this and back to FSR 2.
 
Last edited:
Joined
May 12, 2022
Messages
54 (0.06/day)
"Why would AMD have anything to say? It's ATI at this point." Because we're talking about 2008, and AMD bought ATi in 2006.

The cheating allegations didn't really matter. Once nVidia owned PhysX it was never going to work out.

The offer to let AMD have CUDA was BS from the start and is laughable. No one is going to implement a competitor's closed-source standard. The circus at this point was nuts, as you can see with all your links. The reality was nVidia had PhysX and they weren't going to honestly let anyone else in. They didn't, and they killed PPU support about a year after they bought Ageia, with version 2.8.3 of PhysX.

Some dates of note
AMD Buys ATi - 2006
CUDA intro'ed - 2006
nVidia buys Ageia - 2008
CUDA PhysX acceleration - 2008
OpenCL Intro'ed - 2009
nVidia ends PPU acceleration with PhysX with version 2.8.3 - 2009

Once nVidia owned PhysX it was game over.
 
Last edited:
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
Dude, it's AMD by 2008, because they bought ATi in 2006.

Even if Nvidia made it open for all, AMD would not take part. Even if DLSS were completely open source (it is, after the hack), AMD would not have anything to do with it. They can't anyway, as it won't work on AMD hardware: without tensor or XMX cores the performance would not be there. Once Nvidia was the source of DLSS, AMD wouldn't touch it. AMD only made FSR open source because they are desperate to kill off DLSS; they want Intel etc. to support their standard. This time it won't work, because Nvidia owns enough market share to go it alone.

It's 100% the same thing AMD did with PhysX: once it was Nvidia's child, it was dead to them.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
It doesn't matter, since Nvidia didn't in fact make it open. AMD isn't the only one that could take part; others like Microsoft could just as easily. The point is that Nvidia had no f*cking intention of doing so.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
You talk as if nVidia is some bastion of open source who is known to be easy to collaborate with.
Nvidia likely knew that AMD would stop PhysX on their cards, so Nvidia were happy to "help". AMD is playing the same game with DLSS: they take the moral high ground for being open source with FSR 1, but they are really just attacking Nvidia's DLSS. FSR 1 is basically dead now and the code is worthless.
 
Last edited:
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I ate all the cake because I knew someone else might want a piece. I was happy to help.
 
Joined
Aug 21, 2015
Messages
1,725 (0.51/day)
Location
North Dakota
System Name Office
Processor Ryzen 5600G
Motherboard ASUS B450M-A II
Cooling be quiet! Shadow Rock LP
Memory 16GB Patriot Viper Steel DDR4-3200
Video Card(s) Gigabyte RX 5600 XT
Storage PNY CS1030 250GB, Crucial MX500 2TB
Display(s) Dell S2719DGF
Case Fractal Define 7 Compact
Power Supply EVGA 550 G3
Mouse Logitech M705 Marthon
Keyboard Logitech G410
Software Windows 10 Pro 22H2
I ate all the cake because I knew someone else might want a piece. I was happy to help.

Aww, I wanted cake. :( I hope you get indigestion. :mad:

/s
 
Joined
May 12, 2022
Messages
54 (0.06/day)
I wasn't gonna respond... but we're talking about FSR now, so why not.


"Nvidia likely knew that AMD would stop PhysX on their cards, so Nvidia were happy to help."

What nonsense. It's a completely obvious and laughable PR stunt, bullshit. You're not that naive.

"AMD is playing the same game with DLSS. They take the moral high ground for being open source with FSR 1, but they are really just attacking Nvidia's DLSS"

They had to respond to DLSS with something; even Intel did, with XeSS. FidelityFX CAS alone wasn't going to cut it, so they started multiple efforts (google it). The project that won out first was Lottes' (if you don't know who this is, you should; the guy is awesome sauce). His spatial method won out because of its ease of implementation and the performance/quality ratio it struck, which is kinda his thing if you look at his past projects. It uses EASU (Edge-Adaptive Spatial Upsampling) in combination with CAS sharpening.

"FSR 1 is basically dead now and the code is worthless."

Hardly. It's a very good, fast, and highly compatible scaler, and because it's open source anyone can implement it, iterate on it, and use it. That's why it's popped up in a lot of places and been very helpful (the adoption rate has been nuts); notably it has also shown up in emulation, VR, and consoles. That will keep happening, as the EASU scaler portion is great for what it is before you start going toward more advanced methods. The CAS sharpening filter it combos with is also awesome sauce.

Furthermore, AMD implemented it driver-side as RSR (Radeon Super Resolution). Handy for plenty of situations. Far from dead.
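The EASU-plus-CAS pipeline described above can be sketched in miniature. This is a toy grayscale illustration, not AMD's actual shader code: real EASU uses a 12-tap edge-adaptive kernel, which is replaced here by plain bilinear sampling, and the `strength` parameter and clamp behavior are made up for the example. The key idea that survives is the two-pass structure: upscale spatially, then apply a sharpening amount that adapts to local contrast.

```python
# Toy sketch of an FSR1-style two-pass pipeline on a 2D grayscale image
# (list of lists of floats in [0, 1]). Pass 1 upscales, pass 2 sharpens
# adaptively: pixels in high-contrast regions receive less extra sharpening.

def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image by an integer factor (stand-in for EASU)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        src_y = min(y / factor, h - 1)
        y0 = int(src_y)
        y1 = min(y0 + 1, h - 1)
        fy = src_y - y0
        row = []
        for x in range(w * factor):
            src_x = min(x / factor, w - 1)
            x0 = int(src_x)
            x1 = min(x0 + 1, w - 1)
            fx = src_x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def cas_sharpen(img, strength=0.2):
    """Contrast-adaptive sharpen: less sharpening where local contrast is high."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
            contrast = max(n + [img[y][x]]) - min(n + [img[y][x]])
            amount = strength * (1.0 - contrast)   # adapt to local contrast
            blurred = sum(n) / 4.0
            sharpened = img[y][x] + amount * (img[y][x] - blurred)
            out[y][x] = min(1.0, max(0.0, sharpened))
    return out

frame = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0]]
upscaled = cas_sharpen(bilinear_upscale(frame, 2))
print(len(upscaled), len(upscaled[0]))  # 8 8
```

Because both passes are purely spatial (no motion vectors, no history), this kind of scaler is cheap and trivially portable, which is exactly why FSR 1 spread to emulators, VR, and drivers.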
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
I mean FSR 1 is dead because, going forward, AAA games will use FSR 2. I agree FSR 2 was the response they should have had to DLSS from the start. Nvidia already has an FSR 1 replacement built into the drivers: NIS.

I don't think anyone expects AMD to support CUDA and get locked into a standard they don't control just so their drivers can support PhysX. A quote from the movie Dune: sometimes gifts are not given out of love.
 
Joined
Jun 2, 2017
Messages
9,201 (3.36/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
A standard they don't control? What ?
 
Joined
May 12, 2022
Messages
54 (0.06/day)
A standard they don't control? What ?
If AMD adopted CUDA acceleration, they would be adopting a standard they have zero say in or control over. There was a point in 2008, around PhysX, where nVidia essentially played a PR stunt of saying they were open to AMD adopting CUDA so they could have PhysX acceleration.
 
Joined
Jun 2, 2017
Messages
9,201 (3.36/day)
If AMD adopted CUDA acceleration, they would be adopting a standard they have zero say in or control over. There was a point in 2008, around PhysX, where nVidia essentially played a PR stunt of saying they were open to AMD adopting CUDA so they could have PhysX acceleration.
After Nvidia bought PhysX, they basically ruined all the promise. I believe there was only one game other than Arkham that had a full PhysX deployment. Then, when Nvidia found out that people were buying their cheapest cards just to run PhysX, they basically abandoned it. The thing about it is, PhysX was really cool for what it was; it definitely made Batman more enjoyable.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
After Nvidia bought PhysX, they basically ruined all the promise. I believe there was only one game other than Arkham that had a full PhysX deployment. Then, when Nvidia found out that people were buying their cheapest cards just to run PhysX, they basically abandoned it. The thing about it is, PhysX was really cool for what it was; it definitely made Batman more enjoyable.
PhysX is used in Metro Exodus. The GameWorks library includes guides for the Core SDK, Direct3D and OpenGL graphics/compute samples, as well as information on both the OptiX and PhysX tools.
 
Joined
May 12, 2022
Messages
54 (0.06/day)
There were plenty of cool implementations of PhysX that could use hardware acceleration.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,178 (1.27/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
So more content has surfaced about FSR 2.0, and it appears that it handles ghosting really well in this title; perhaps Nvidia have something to learn when they delve into the source code. I do hope both, or indeed any, technique can learn from what it does well, improve on it, and have it run faster.

It also appears that it has more obvious visual glitches and falls further behind DLSS the lower the output and input resolutions drop. Then I also saw that Ampere cards take less of a frametime penalty to run FSR? Was not expecting that. The plot thickens.
 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
So more content has surfaced about FSR 2.0, and it appears that it handles ghosting really well in this title; perhaps Nvidia has something to learn when they delve into the source code. I do hope both, or indeed any, technique can learn from what the other does well, improve on it, and run faster.

It also appears that FSR 2.0 has more obvious visual glitches and falls further behind DLSS as the output and input resolutions drop. Then I also saw that Ampere cards take less of a frametime penalty to run FSR? Was not expecting that. The plot thickens.
This is why using a recurrent convolutional autoencoder(1), i.e. the DLSS method, is better. FSR 2 will show more obvious visual glitches and will fall further behind DLSS as the output and input resolutions drop.

1. A recurrent convolutional autoencoder is one which is jointly trained to hallucinate new samples and appropriately blend them with the history data. source
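To make the history-blending idea concrete, here is a minimal sketch in plain Python (the function and the alpha value are illustrative, not any vendor's actual code). Temporal upscalers reproject the accumulated history and blend it with the new jittered sample each frame; FSR 2 picks the blend weight with hand-tuned heuristics, while DLSS predicts the blend with the trained recurrent convolutional autoencoder described above.

```python
# Illustrative sketch of the core loop of temporal accumulation:
# blend the reprojected history with the current frame's sample.
def temporal_accumulate(history, current, alpha=0.1):
    """Exponential blend of per-pixel history with the current frame.

    history, current: flat lists of pixel values (a toy 1D "frame").
    alpha: weight of the new sample; this fixed heuristic is what a
    learned model would replace with a per-pixel prediction.
    """
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Accumulating a constant signal converges toward the true value.
frame = [0.0] * 4                  # initial (empty) history
for _ in range(50):
    frame = temporal_accumulate(frame, [1.0, 1.0, 1.0, 1.0])
print([round(p, 2) for p in frame])  # approaches [1.0, 1.0, 1.0, 1.0]
```

The fixed alpha is exactly where the two approaches diverge: a heuristic must trade ghosting (alpha too low) against shimmer (alpha too high), whereas a trained network can make that trade-off per pixel, per frame.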
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
So there goes "you need tensor cores" eh? Color me surprised.

Also note the missing "AI" bit.

Well done. Now the only feature AMD is missing is performant ray tracing.
It is really about who is closer to the developers:

(attached screenshots)


NVIDIA uses Tensor cores for ray tracing, too, for the AI denoiser.
Is that part of DXR? I thought DXR was exclusively about path tracing.
 
Joined
Jul 27, 2019
Messages
20 (0.01/day)
DLSS and RT are proprietary "features" by nvidia with no value for the user who can think.
The PS5 and new XBox do not support RT, so the gamers actually do not need it.

AMD's mistake is that it follows instead of thinking proactively about new unique features with real value.
RT is not proprietary to Nvidia. AMD GPUs support RT, and so do both the PS5 and the Xbox Series X.


FSR 2.0 is inferior to DLSS 2.0 both in quality and in performance.
 
Last edited:
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
It's cheaper to develop games with ray tracing.
It is a promise, not a fact.
And then we have other promises too: zero hardware path tracing here:

 
Joined
Feb 20, 2022
Messages
175 (0.17/day)
System Name Custom Watercooled
Processor 10900k 5.1GHz SSE 5.0GHz AVX
Motherboard Asus Maximus XIII hero z590
Cooling XSPC Raystorm Pro, XSPC D5 Vario, EK Water Blocks EK-CoolStream XE 360 (Triple Fan) Radiator
Memory Team Group 8Pack RIPPED Edition TDPPD416G3600HC14CDC01 @ DDR4-4000 CL15 Dual Rank 4x8GB (32GB)
Video Card(s) KFA2 GeForce RTX 3080 Ti SG 1-Click OC 12GB LHR GDDR6X PCI-Express Graphics Card
Storage WD Blue SN550 1TB NVME M.2 2280 PCIe Gen3 Solid State Drive (WDS100T2B0C)
Display(s) LG 3D TV 32LW450U-ZB and Samsung U28D590
Case Full Tower Case
Audio Device(s) ROG SupremeFX 7.1 Surround Sound High Definition Audio CODEC ALC4082, ESS SABRE9018Q2C DAC/AMP
Power Supply Corsair AX1000 Titanium 80 Plus Titanium Power Supply
Mouse Logitech G502SE
Keyboard Logitech Y-BP62a
Software Windows 11 Pro
Benchmark Scores https://valid.x86.fr/2rdbdl https://www.3dmark.com/spy/27927340 https://ibb.co/YjQFw5t
It is a promise, not a fact.
And then we have other promises too: zero hardware path tracing here:

Basically, in raster games you have to create extra light sources to fake bounce lighting, and you spend hours getting the lighting to look natural. With ray tracing you model real lighting: once you create the source, the engine generates all the bounce lighting and effects you need.

The demo you linked to uses Unreal Engine 5, which uses Lumen, a method of ray tracing. You pointed at a demo without realising that Lumen is a form of ray tracing. Lumen is Unreal Engine 5's fully dynamic global illumination and reflections system, designed for next-generation consoles. It runs in software by default, which hits the CPU hard, but it also supports hardware ray tracing, and high-end PCs will use their hardware support for better image quality. UE5 also supports path tracing. Software mode also has big limitations not found in Lumen's hardware mode. Lumen is not a replacement for hardware ray tracing; hardware ray tracing is an integral part of Lumen.

 
Last edited:

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,178 (1.27/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
So there goes "you need tensor cores" eh? Color me surprised.
Years later, a solution comes out that doesn't look as good or run as fast and doesn't need tensor cores; colour me surprised.
we have other promises too, zero hardware path tracing here
Lumen absolutely can and does leverage hardware, just not in that demo.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Basically in raster games you have to create more light sources for bounce
That is jumping to another horse; hilariously, now hardware RT is faster, eh? :D

RT involves multiple steps, and ray intersection is just one of them. Building the acceleration structure used to check for intersections, updating it when the scene changes, and doing all the other steps (denoising is one of them) come on top. Basically all of that, bar the actual intersection tests, is good old shader code.

It also has the inherent issue of being unstable in terms of how many rays you need to get palatable results.

And, hey, wait a sec: all of that works ONLY ON SOME GPUs that support hardware DXR, which means doing RT that way today is GUARANTEED to be more effort. And look at Cyberpunk: for what? Just for devs to learn something new for fun, I guess, in case RT of that kind becomes a thing "in the future".
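To illustrate the point about the pipeline, the intersection test itself really is just a few lines of math; here is a hedged, self-contained sketch of a ray/sphere test (illustrative names and a toy scene, not engine code). Everything around it — building and refitting the acceleration structure, shading, denoising — is where the real engineering effort and cost sit.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t.
    direction is assumed to be normalized, so the quadratic's a == 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0         # nearer of the two roots
    return t if t > 0 else None              # ignore hits behind the ray

# A ray fired down +z from the origin hits a unit sphere at (0, 0, 5).
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Hardware DXR units accelerate exactly this kind of test (plus the structure traversal that finds candidate primitives); the rest of the frame is still ordinary shader work, which is why the overall effort argument above doesn't hinge on the intersection math.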

this engine uses Lumen which is a method of ray tracing
Yeah. A shocker. To trace rays we need some sort of ray tracing. As if it were about how things work in the real world.

Hold on, yeah, it is!

Lumen will also use hardware support for ray tracing.
It could, if NV's approach weren't so bad. So as it stands, perhaps on the AMD platform only (where they can use shader code to traverse the structure).
 