Where does that almost surreal "it's gotta be dedicated HW" weirdo thought come from?
Crytek demoed that we can have the RT gimmick right there, with current tech.
Hey look, you and I agree on this: I'm no fan either of dedicating a large percentage of the GPU die to just RT performance. But with the facts available to us now, we also have a few things to deal with...
- The RT / Tensor core implementation in Turing has much higher perf/watt and absolute performance potential than any other implementation available today.
- Turing shows us it can be done in tandem with a full fat GPU, within a limited power budget.
- RTX / DXR can and will be used to speed up the very things you see in the Crytek demo.
... now, that last point is an important one. It means Nvidia, with a hardware solution, is likely to be faster at exactly the kind of tech you saw in that Crytek demo. After all, the dedicated hardware handles a piece of the workload with far greater efficiency, which leaves TDP budget for the rest of the GPU to run as usual. With a software implementation that runs on the 'entire' GPU, a hypothetical AMD GPU might offer a similar performance peak for non-RT gaming (the normal die at work), but it can never be faster at doing both in tandem.
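To put some toy numbers on that "in tandem" argument (a minimal sketch with made-up per-frame costs, not measurements from any real GPU): if ray traversal has to share the shader cores with shading, the two costs add up; if traversal runs on dedicated units alongside shading, the frame time is bounded by whichever part finishes last.

#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs, purely illustrative.
    const double shade_ms    = 10.0; // shading / raster work on the shader cores
    const double traverse_ms = 4.0;  // BVH traversal / ray intersection work

    // "Software RT": traversal competes with shading for the same units,
    // so the costs simply add up.
    const double software_frame_ms = shade_ms + traverse_ms;

    // "Hardware RT": traversal runs on dedicated units concurrently with
    // shading, so the frame is bounded by the slower of the two parts.
    const double hardware_frame_ms = std::max(shade_ms, traverse_ms);

    std::printf("software RT frame: %.1f ms\n", software_frame_ms); // 14.0 ms
    std::printf("hardware RT frame: %.1f ms\n", hardware_frame_ms); // 10.0 ms
    return 0;
}

Obviously real GPUs share more than a power budget (memory bandwidth, caches, schedulers), but it shows why offloading even a slice of the work frees up the rest of the chip.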
End result: Nvidia, with that weirdo thought, wins again.
The real question is what the market will accept in terms of visual gain versus additional cost / performance hit. Nobody has that answer yet, but Turing so far isn't selling like hotcakes, which is a sign. In that sense, if we can see in-game, live footage of that Crytek implementation adding to visual quality at minimal performance cost, that would be the real game changer. A tech demo is just that: a showcase of potential. But you can't sell potential.
I think the more interesting development with hardware solutions for RT is how well that hardware can be utilized for other tasks; that would make RT adoption easier. Nvidia tried something along those lines with DLSS, but that takes too much per-game effort.