Tuesday, August 28th 2018
UL's Raytracing Benchmark Not Based on Time Spy, Completely New Development
After we covered news of UL's (the company previously known as Futuremark, developer of 3DMark) plan to add a raytracing benchmark mode to Time Spy, the company contacted us and other members of the press to clarify its message and intentions. As it stands, UL will not be updating the Time Spy test suite with raytracing technologies. Part of the reason is that this would require an immense rewrite of the benchmark itself, which would be counterproductive; more importantly, such a significant change would invalidate previous results that were obtained without the raytracing mode active.
As such, UL has elected to develop a totally new benchmark, built from the ground up to use Microsoft's DirectX Raytracing (DXR). This new benchmark will be added to the 3DMark app as an update. The new test will produce its own benchmarking scores, much like Fire Strike and Time Spy do, and will provide users with yet another ladder to climb on their way to the top of the benchmarking scene. Other details are scarce, which makes sense, but the test should still be available on or around the time of NVIDIA's 20-series launch, come September 20th.
Sources: UL. None of the images herein are representative of UL's benchmark; they are simply examples of raytracing.
EDIT: Here's a better one from the Remedy channel
And before someone comes to nitpick: sparse raytracing is used to generate some stages of things like lightmaps/shadowmaps/reflections, and the sparse result is then upsampled with machine learning (DLSS) for use in the normal rasterized graphics pipeline. Doing that for the entire frame is simply not viable at this time.
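The hybrid approach described above can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration, not code from any real engine or from UL's benchmark: `shade_ray` stands in for actually tracing a ray, and nearest-neighbour filling stands in for the ML-based reconstruction (e.g. DLSS) and denoising a real renderer would use.

```python
# Hybrid pipeline sketch: trace only a sparse grid of pixels for an effect
# (reflections, shadows, etc.), upsample the sparse buffer, then composite
# it over the rasterized frame. All names here are illustrative.

def shade_ray(x, y):
    """Stand-in for tracing one ray; returns a scalar 'brightness'."""
    return (x + y) % 256

def trace_sparse(width, height, step):
    """Trace every `step`-th pixel only: a (h/step) x (w/step) buffer."""
    return [[shade_ray(x * step, y * step)
             for x in range(width // step)]
            for y in range(height // step)]

def upsample(buf, step):
    """Nearest-neighbour upscale, standing in for ML reconstruction."""
    h, w = len(buf), len(buf[0])
    return [[buf[y // step][x // step]
             for x in range(w * step)]
            for y in range(h * step)]

def composite(raster, effect, weight=0.5):
    """Blend the upsampled effect into the rasterized frame."""
    return [[(1 - weight) * raster[y][x] + weight * effect[y][x]
             for x in range(len(raster[0]))]
            for y in range(len(raster))]
```

The key point the comment makes is visible in the cost structure: tracing a quarter-resolution buffer casts 1/16th the rays of a full-frame trace, which is why the sparse-then-upsample split is viable today while full-frame raytracing is not.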
The precision as well as the algorithms used are already heavily optimized; technically, they are doing a variant of path tracing.