I'm all on the side of bashing Nvidia for their practices, but I honestly can't say anything bad about the tech itself. DLSS has been good in most games since around 2.1. Yes, it alters the image, but at high resolutions, in actual play, it's not that noticeable, and the net result ends up comparable to your typical AA solutions, if not a little more natural-looking across the whole image. To me, the frame rate boost and the ability to run what are, these days, much more refined and better-understood RT techniques make it worth trying out. The biggest problem was that only Quality or *sometimes* Balanced mode was really any good outside of 4K. That's changing too, though.
And I KNOW it sounds pretentious. I don't really expect people to just believe me, but perhaps you can at least see how it's possible in a logical sense. When you put these technologies under the looking glass and try to take samples that encapsulate what they're doing, they tend to fall short. With all of these effects, the core strength is in what they push out of the way. If you look for something overt or earth-shattering, you often find boring and strange instead; that's a matter of missing the full context of those elements within the entire scene. The main strength of RT effects is in the way they sneak in and raise the overall plausibility factor: it's just less of a strain to believe you're there. Many different kinds of games can benefit from that. Lighting has always been faked; RT is simply a better way to fake it.
The DLSS tech goes hand in hand with that. The challenge has always been having the grunt to get enough throughput for practical amounts of real-time accuracy (or at least correction, which is more attainable right now). It's not ideal, and there are tradeoffs, but from every experience I've had with it, it's hard for me not to see it as very worthwhile tech. Bridging that performance gap through sneaky unburdening approaches allows for an increase in overall plausibility, at the cost of the last layer or two of image fidelity.
And you know what? That IS subjective. It's fair not to like what you lose in the image. But I think that could change in time, and I don't think what RT offers in impact is worth entirely discarding. It's pretty interesting as a tool, and who knows what other uses devs might find for it. If it allows for better fidelity of conveyance, more convincing expression in the visuals, that stands out to me as something very valuable. It doesn't automatically make a game's visuals better, but it DOES make for a better platform for conveying visuals that are already fundamentally good. For instance, some classic games really look great with a proper RT conversion, and the reason they look so good is that the RT works in concert with very good visual design.
Anything with the potential to elevate the experiences possible in games is worth keeping on the radar, at the least. It IS prohibitively expensive, and that's worth some outrage. I can't even use this stuff myself; I'll be stuck with my 3060 Ti for a while, and I only got lucky that a friend cut me a deal on a spare he happened to snipe. Nvidia really, really is not a great company, and I don't care about DLSS or RTX as brands. But the technology itself is good, has IMO proven its worth, and to me shows promising future potential. If tricks like machine-learning super-scaling and frame generation can open the door to it, I can't see that as a bad thing. The only bad thing about it is the absurd cost and availability.