RT could be nice when implemented right, that is, when the whole scene is ray traced. The problem is that current-gen RT cores from both AMD and Nvidia are too slow for that. That's why game devs limit what they do with RT (shadows only, for example) and fall back to traditional rasterization for everything else - which is what makes RT in its current state kind of pointless, imo. Otherwise, it would be great.
As for Tensor cores, I agree. Everybody praises Nvidia for DLSS, when in fact, if the RT cores were strong enough to do their job with minimal performance loss vs rasterization, nobody would need DLSS in the first place. Besides, AMD has shown that the whole DLSS thing can be done without Tensor cores, so... meh.
Exactly. Nearly every monitor is FreeSync-capable nowadays, whereas you have to hunt for G-Sync ones and pay extra for them. This is where the slogan "Nvidia - The Way It's Meant To Be Paid" rings true.
Well, the point of DLSS is to increase performance with minimal image quality loss. But then the question is, why don't you have enough performance in the first place? Is it the gamer's fault for riding the 120+ fps hype train, or is it Nvidia's way of selling their otherwise useless Tensor cores when they could have spent the same die space on more raster cores? RT is nice, but like I said above...
They have a much harder time convincing me, for sure. I honestly think the high-refresh-rate craze is probably the stupidest thing the gaming industry has invented so far.