They do have a more open reputation. I never said Nvidia was more open; my point was that it's not like Nvidia is doing nothing in the open-source area, especially around AI/ML.
I am not familiar with all of them, but I do use two of those open-source frameworks for work: Apache Spark (through RAPIDS) for distributed processing and PyTorch for machine learning. Both have a lot of Nvidia contributions and both work wonderfully well with Nvidia GPUs.
We don't need proprietary Nvidia tools to do ML or data ETL; they really did their homework for AI/ML/data scientists across many open-source projects.
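To illustrate that point with a minimal sketch (assuming PyTorch is installed): the exact same open-source user code runs on an Nvidia GPU when one is present and falls back to CPU otherwise, with no proprietary tooling in the code itself.

```python
import torch

# Pick an Nvidia GPU if one is available; otherwise fall back to CPU.
# PyTorch is open source -- nothing proprietary is needed in user code
# to target CUDA; the CUDA backend is selected at runtime.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny matrix multiply, the bread and butter of ML workloads.
x = torch.randn(1024, 1024, device=device)
y = x @ x.T

print(f"ran on {y.device.type}, result shape {tuple(y.shape)}")
```

The same idea applies to Spark with the RAPIDS accelerator: the Spark job is ordinary open-source Spark code, and the GPU acceleration is a drop-in plugin.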
For consumers, AMD has the upper hand in open source, of course, but right now DLSS 3.5 is what I prefer.
Frankly, I don't care whether upscalers are closed or open, because I feel many studios just want a paycheck from either AMD or Nvidia to implement one and aren't really waiting for things to go open source, at least in AAA (GPU sponsorship is omnipresent in AA/AAA PC gaming).
I'd even point out that the DLSS SDK you use to implement DLSS in your game is open source. The algorithm itself is not, but anyone can integrate it through an open-source, transparent SDK; the rest lives in the driver and hardware, but at least what ships in your executable is known:
https://github.com/NVIDIA/DLSS
I care about quality and implementation rate in games. DLSS is in far more games, and overall it's a really great upscaler, relatively speaking: it doesn't introduce vulnerabilities or anti-cheat false positives, and it's continuously improved with stable features as well as experimental ones like ray reconstruction.
FSR3 I can use on Nvidia, whereas DLSS can't be used on AMD. Again, despite what some people say here, Nvidia's upscalers are HARDWARE based; they made that choice. It's AI, a neural network that improves over time, at a pace a deterministic algorithm like FSR may not be able to match at some point.
Having frame generation locked behind a hardware optical flow accelerator sucks for pre-RTX 4000 cards, absolutely; they could at least have worked on a software fallback. But claiming, as comments above do, that the hardware architecture is all a lie and useless is naive and baseless: the whole point of this architecture is to be trained over time by Nvidia to continuously improve the neural network behind DLSS and FG, something a software solution cannot do as well.