Raytracing is simple in theory; in practice, it is not. Case in point: if a ray bounces into the void just beyond the visible player space, how does the GPU know to terminate that ray instead of searching forever for a collision that doesn't exist? Not only do you have to purge all of the artificial lighting from the existing world, you have to update the world to work inside a framework that contains rays. It sounds simple to the layman but is very complex for game engine developers, modelers, and level designers.
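The usual answer to the "ray into the void" problem is that a tracer carries two explicit termination rules: a maximum distance (past which a miss handler returns a sky/background color) and a maximum bounce count. Here is a minimal CPU-side sketch of those two rules; the scene, names, and colors are all made up for illustration, not from any real engine:

```python
import math

MAX_BOUNCES = 4        # hard cap so a ray can't bounce forever
MAX_DISTANCE = 1000.0  # beyond this, the ray is considered lost in the void

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class Sphere:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def hit(self, origin, direction):
        # Standard quadratic ray/sphere intersection; returns the hit
        # distance, or None if there is no hit within MAX_DISTANCE.
        oc = [o - c for o, c in zip(origin, self.center)]
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - self.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if 0.001 < t < MAX_DISTANCE else None

def trace(origin, direction, scene, depth=0):
    # Termination rule 1: too many bounces -> give up, return black.
    if depth >= MAX_BOUNCES:
        return (0.0, 0.0, 0.0)
    hits = [(s.hit(origin, direction), s) for s in scene]
    hits = [(t, s) for t, s in hits if t is not None]
    # Termination rule 2: nothing within MAX_DISTANCE -> act like a GPU
    # "miss shader" and return a background color instead of searching on.
    if not hits:
        return (0.2, 0.2, 0.3)
    t, sphere = min(hits, key=lambda h: h[0])
    # ...shading and the recursive bounce (depth + 1) would go here...
    return (1.0, 1.0, 1.0)

scene = [Sphere((0.0, 0.0, -5.0), 1.0)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))  # hits the sphere
print(trace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), scene))   # misses -> sky color
```

On real hardware (DXR/Vulkan ray tracing) the same idea shows up as the ray's TMax parameter and a dedicated miss shader, so "the void" costs one failed acceleration-structure lookup, not an endless search.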
Running through it in my head, there's zero chance of backporting this stuff. They would have to pull up the old code, delete all of the old lighting, add ray emitters in place of lights, make sure the model that represents each light lets the rays through, update all meshes to reflect or absorb rays correctly according to material type, then check every corner of every map to make sure it is adequately lit. At the same time, they have to make sure nothing is too bright (pure white is as unplayable as pitch black). Oh, and models have to be updated too so that they can react to the rays.
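The "reflect/absorb according to material type" step is usually authored as per-material reflectance values, with the ray's carried energy attenuated at each bounce and the ray killed once that energy drops below a threshold. A tiny sketch of that bookkeeping, with made-up material names and numbers (a real engine would author these per asset):

```python
# Hypothetical material table: fraction of light each surface reflects.
# Values are invented for illustration only.
REFLECTANCE = {
    "metal": 0.9,
    "painted_wall": 0.5,
    "cloth": 0.2,
}

MIN_THROUGHPUT = 0.01  # once a ray carries this little energy, absorb it

def bounce(material, throughput):
    """Attenuate a ray's energy by the material it hit and report
    whether the ray is still worth bouncing again."""
    throughput *= REFLECTANCE[material]
    alive = throughput >= MIN_THROUGHPUT
    return throughput, alive

# A ray bouncing off cloth loses energy fast and dies by the third hit,
# while metal keeps rays alive for many bounces:
t, alive = bounce("cloth", 1.0)   # 0.2 left, still alive
t, alive = bounce("cloth", t)     # ~0.04 left, still alive
t, alive = bounce("cloth", t)     # ~0.008, below threshold: absorbed
print(t, alive)
```

This is also where the "too bright" problem lives: if authored reflectance values are too high, energy never dies off between highly reflective surfaces and the scene blows out toward white, which is why every map corner has to be re-checked after the conversion.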