Yes. Been saying it since day one. A hardware implementation that eats that much die space is so grossly inefficient that simple economics will destroy it, if not with Turing then later down the line. It's just not viable, and current sales numbers only underline that sentiment. I'm not the only one frowning at this; already with the first gen and a meagre implementation we're looking at a major price bump because the die is simply bigger. The market ain't paying it, and devs won't spend time on it as a result. Another aspect: I'm not looking to sell my soul to Nvidia's overpriced proprietary bullshit, and I'm not paying for inefficiency. Efficiency has been the reason I've bought Nvidia the past few generations... their wizardry with VRAM, for example, and how well they balanced out (most of) the GPUs in the stack is quite something. Turing is a 180-degree turn from that.
This, however... yes. Simply yes. Attacking the performance problem from the angle of a software-based implementation that can scale across the entire GPU instead of just a part of it, while the entire GPU also stays available should you want the performance elsewhere. Even if this runs at 5 FPS in realtime on a Vega 56 today, it's already more promising than dedicated hardware. This is the only way to avoid a PhysX situation: RT needs widespread adoption to get the content to go along with it. If I can see a poorly running glimpse of my RT future on a low-end GPU, this will catch on, and it will be an immense incentive for people to upgrade, and keep upgrading. Thát is viable in a marketplace.
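To be clear about why "the entire GPU" matters: the inner loop of a ray tracer is just arithmetic, so it can run on ordinary shader cores via compute instead of needing dedicated units. A minimal, made-up sketch of that kind of intersection test (not Crytek's code, purely to show the flavour of the math):

```cpp
// Illustrative only: the core of a ray tracer is plain float math like this
// ray-sphere test, which runs fine on regular shader ALUs (e.g. in a compute
// shader) with no fixed-function RT hardware. Names are invented for the sketch.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the distance along the ray to the nearest hit, or -1.0f on a miss.
// 'dir' is assumed to be normalized.
float intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;          // ray misses the sphere
    float t = -b - std::sqrt(disc);         // nearest root of the quadratic
    return (t > 0.0f) ? t : -1.0f;
}
```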
Another striking difference, I feel, is the quality of this demo compared to what Nvidia has put out with RTX. This feels like a next step in graphics in every way; the fidelity and the atmosphere simply feel right. With every RTX demo thus far, even in Metro Exodus, I don't get that same feeling. It truly feels like some weird overlay that doesn't come out quite right, which, in reality, it also is. Metro's badly lit 'cinematic' scenes only emphasize that when you put them side by side with the non-RT versions. The latter may not always be 'correct', but they sure are a whole lot more playable.
*DXR. In the end Nvidia is using a customized setup that works for them; it remains to be seen how well AMD can plug into DXR with their solution, how Crytek does it now, and/or whether they even want or need to. The DX12 requirement sure doesn't help, and DXR will be bogged down by rasterization as well, since it sits within the same API. There is a chance the overall trend moves away from DXR altogether, leaving RTX in the dust or out to find a new point of entry.
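For what it's worth, from the application side "plugging into DXR" is vendor-neutral: the game just asks D3D12 whether a raytracing tier is exposed, and whatever sits behind it (RT cores, compute, or whatever AMD ships) is the driver's problem. A rough sketch of that capability check, error handling trimmed:

```cpp
// Hedged sketch of the standard D3D12 capability query, nothing vendor-specific.
// Any driver (Nvidia, AMD, or otherwise) that reports a raytracing tier here is
// reachable through the same DXR entry points.
#include <d3d12.h>

bool SupportsDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```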