Well, now we'll never know whether it actually was DLSS running on my RX 6800, since the latest demo update removed it from the game options.
Also, not having "Tensor Marketing Cores" doesn't mean the same math/matrix multiplications can't be run on other GPUs. I wonder how the world did matrix multiplications for AI before Nvidia invented the Marketing Cores :>
In hindsight I'm pretty certain it was TSR.
However, I agree that the Tensor Core bullshit has to stop. Sheeple will literally eat anything...
A "Tensor" operation here is just a matrix * matrix multiply-accumulate. While it has its uses, especially for neural/AI workloads, it's not wizardry. Proof is that AMD doesn't have giant matrix cores in its cards, yet it still has a crappy-but-functional ray tracing technology that runs on the TMUs plus a software solution. It's bad, but it does work. The math checks out in the end: slower, but accurate.
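To make that concrete, here's a minimal sketch in plain NumPy (nothing vendor-specific assumed) of what a tensor core actually computes: a fused matrix multiply-accumulate, D = A·B + C, typically with FP16 inputs and FP32 accumulation over a small tile. Any GPU or CPU can evaluate the exact same expression; dedicated cores just retire it in far fewer cycles.

```python
import numpy as np

# A tensor-core op is just fused matrix multiply-accumulate:
# D = A @ B + C, typically FP16 inputs with FP32 accumulation.
# Nothing here requires specific hardware; tensor cores only make
# this faster by packing the whole tile into one instruction.

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)).astype(np.float16)  # FP16 input tile
B = rng.standard_normal((16, 16)).astype(np.float16)  # FP16 input tile
C = np.zeros((16, 16), dtype=np.float32)              # FP32 accumulator

# FP16 in, FP32 accumulate -- the numeric contract of a Turing-era
# tensor core MMA, expressed as ordinary math.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (16, 16)
```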
I have 100% confidence that the only reason the Tensor meme was pushed back in the Turing days is that Nvidia wanted to do the Nvidia thing and force everyone to upgrade from the 1000 series, without having to say "we refuse to let DLSS run on the 1000s, pay up, peasants".
They've always been like this, and the sheeple have always obediently bleated along, but the ever-growing list of clues that this was corporate policy and not a technical necessity should at least have made them react a bit. If anything, the fact that FSR 3 currently runs on a 3080 while DLSS 3 doesn't is another smoking gun, but at this point, trying to gauge the level of Nvidia's fuckery with "bullshit reasons to force you to upgrade" means standing in a smog of smoking guns. FSR 3 and FSR 2 run everywhere; DLSS 3 FG is locked to the 4000s while DLSS 3.5 is somehow a-ok on the 3000s; the "Tensor cores" are so magically Tensor that if the DLSS source code were released, AMD could probably get it running on their cards plus the consoles within a few months... the list is long.
It would've been interesting to see whether DLSS running on AMD had any real perf impact. I expect the performance would've been lower, but how much lower is the question, and I suspect very little. Nvidia's justifications for gatekeeping everything have always been flimsy, and this would've been a great occasion to show just how deep the bullshit runs.
Not that it would've changed much. The sheeple would've found another excuse, whispered straight from Nvidia's marketing on Reddit, the bullshit would've been repeated until it became accepted fact, and the caravan would've passed, same as ever.
But as @theouto said, it would've been a fun weekend.
You can already run XeSS. It's not as performant or as good quality-wise as on Arc with its XMX cores, since it falls back to a generic DP4a path on other vendors' GPUs, but you can.
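For context on why that fallback is slower: DP4a computes one packed 4-element int8 dot product per instruction, so the network gets built out of tiny dot products, whereas XMX units chew through whole matrix tiles at a time. A rough sketch of the DP4a primitive (the function name is mine, not a real API):

```python
import numpy as np

# What a single DP4a instruction computes: the dot product of two
# packed int8x4 vectors, accumulated into an int32. XeSS's fallback
# path builds its convolutions out of these one at a time, while the
# XMX path gets whole matrix tiles per instruction instead.

def dp4a(a4: np.ndarray, b4: np.ndarray, acc: int) -> int:
    """One DP4a: acc += a4[0]*b4[0] + ... + a4[3]*b4[3] (int8 in, int32 accumulate)."""
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

a = np.array([1, -2, 3, 4], dtype=np.int8)
b = np.array([5, 6, -7, 8], dtype=np.int8)
print(dp4a(a, b, 0))  # 5 - 12 - 21 + 32 = 4
```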
TBH, rather than Intel or Nvidia doing something, I would rather have AMD finally jump on the bandwagon and utilize its AI Accelerators (as it even says on my 7900XTX box) to improve FSR2.
Have a fallback layer to standard FSR2 for older RDNA cards, but release a new version for RDNA3, along the lines of the sketch below.
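Something like this dispatch layering is all it would take at the API level. A hypothetical sketch (none of these names are real AMD APIs), with WMMA standing in for the RDNA3 "AI Accelerator" matrix instructions:

```python
from dataclasses import dataclass

@dataclass
class GpuCaps:
    has_wmma: bool  # RDNA 3 exposes WMMA matrix ("AI Accelerator") instructions

def fsr2_upscale(frame):
    # Existing hand-written analytical upscaler: runs on any RDNA card.
    return frame  # placeholder for the real FSR 2 passes

def ml_upscale(frame):
    # Hypothetical RDNA 3-only path: network inference built on WMMA ops.
    return frame  # placeholder for the real ML passes

def upscale(frame, caps: GpuCaps):
    # One entry point; a capability probe picks the path, so older
    # RDNA 1/2 cards transparently keep the standard FSR 2 behavior.
    return ml_upscale(frame) if caps.has_wmma else fsr2_upscale(frame)

print(upscale("frame", GpuCaps(has_wmma=False)))  # falls back to FSR 2 path
```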
I agree that by now, either AMD pushes on for FSR 2.3 and keeps tryharding to get upscaling working on a hand-written algorithm yet again, or they just go with the solution literally everyone else has gone with and do some AI work in their upscaler.
I feel like FSR 1/2 were amazing for their time, but their "time" was the RX 500-6000 / GTX 1000-RTX 3000 era. Every GPU from here on will ship with some AI capability. It's great that a very decent solution was provided to upscale games on older hardware, but now it's time to focus on the future and aim for higher quality. ML seems like the way to go to accelerate their progress on image quality.