I know it's intended and advertised as such, but I really doubt it delivers on what it was intended for. I have a hard time believing this upscaling works as claimed.
I don't. Machine learning is supposed to be good at filling in the blanks, and this isn't a case where the filling needs to be perfect. It's literally an ideal situation for machine learning (if there's enough time to do the math). If a GPU is under heavy load, it's going to spend a relatively long time rendering each frame. DLSS just renders a sizable portion of the frame while the tensor cores "fill in the blanks" after the fact, to give the illusion that you're running at, say, 4K instead of 1440p. Dropping the resolution makes it easier for the rest of the GPU to render the frame, and the otherwise unused tensor cores are then given a task after the frame has been rendered and stored in the framebuffer. This lets the tensor cores do their work while the next frame is already being rendered.
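To make the pipelining idea concrete, here's a minimal sketch of the overlap being described. The render_at() and upscale() functions are hypothetical stand-ins I'm inventing for illustration (on real hardware these stages would run on the shader cores and tensor cores respectively); here they're simulated with CPU threads just to show how the upscale of frame N can proceed while frame N+1 is already rendering.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

struct Frame { int id; int width; int height; };

// Hypothetical stand-in for rendering frame `id` at a reduced internal resolution.
Frame render_at(int id, int width, int height) {
    std::this_thread::sleep_for(std::chrono::milliseconds(8)); // pretend GPU work
    return Frame{id, width, height};
}

// Hypothetical stand-in for the "fill in the blanks" upscale pass (1440p -> 4K).
Frame upscale(Frame low_res) {
    std::this_thread::sleep_for(std::chrono::milliseconds(4)); // pretend inference
    return Frame{low_res.id, 3840, 2160};
}

int main() {
    std::future<Frame> in_flight_upscale; // frame N-1 still being "filled in"

    for (int id = 0; id < 5; ++id) {
        // Render frame N at the lower internal resolution.
        Frame low = render_at(id, 2560, 1440);

        // Collect the previous frame's finished upscale (if any) for display.
        if (in_flight_upscale.valid()) {
            Frame shown = in_flight_upscale.get();
            std::cout << "present frame " << shown.id << " at "
                      << shown.width << "x" << shown.height << "\n";
        }

        // Kick off the upscale of frame N; the next loop iteration renders
        // frame N+1 while this upscale runs in the background.
        in_flight_upscale = std::async(std::launch::async, upscale, low);
    }

    // Drain the last in-flight upscale.
    if (in_flight_upscale.valid()) {
        Frame shown = in_flight_upscale.get();
        std::cout << "present frame " << shown.id << " at "
                  << shown.width << "x" << shown.height << "\n";
    }
    return 0;
}
```

The point of the sketch is only the scheduling: the expensive low-resolution render and the cheaper upscale of the previous frame overlap, so the upscale cost is mostly hidden rather than added to the frame time.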
This actually makes perfect sense; the problem, though, is that you shouldn't need this kind of trickery with hardware that's this fast and this expensive. This really should be a feature for slower GPUs, where the GPU itself being the bottleneck is much more likely. It's also entirely possible that you need enough tensor cores to pull it off, in which case you already need a lot of tensor resources available to do it. If that's really the case, its benefit (to me) seems marginal.
NVidia really likes its smoke and mirrors.