Thursday, May 2nd 2019
Intel Xe GPUs to Support Raytracing Hardware Acceleration
Intel's upcoming Xe discrete GPUs will feature hardware acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how Intel's Rendering Framework will work with the upcoming Xe architecture. The blog only confirms the feature for the company's data-center GPUs, without saying whether its client-segment ones get it too. The data-center Xe GPUs are targeted at cloud gaming and cloud-computing providers, as well as those building large rendering farms.
"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. a fixed-function hardware that computes intersection of rays with triangles or surfaces (which in NVIDIA's case are the RT cores), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, by deploying tensor cores (matrix-multiplication units), which accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
Source:
Intel
"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. a fixed-function hardware that computes intersection of rays with triangles or surfaces (which in NVIDIA's case are the RT cores), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, by deploying tensor cores (matrix-multiplication units), which accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
59 Comments on Intel Xe GPUs to Support Raytracing Hardware Acceleration
They don't want to miss the bus on market relevance at launch and end up with another Larrabee this time.
As for the hardware ray tracing - Intel is one of the biggest FPGA players. This is definitely within their reach even at this moment. How it fares against RTX is another story. Ray tracing works the same either way: you either push frames fast enough to be considered "real time" or you don't.
What do you mean by "rendered after rasterization"? Cloud gaming platform.
The signal is encoded and decoded, goes through multiple routers and switches, and travels hundreds of kilometers over wire or wireless.
And you worry about denoising on a die 20 cm away. The first part of the article is about their existing products. Rendering movies and complex static 3D models happens on CPUs.
GPUs are too slow for it (tracing each ray is a sequential, branch-heavy walk through the scene, which GPUs handle poorly). Also, complicated models are way too big for the RAM available on GPUs.
The article mentions a few CPU features and libraries that were created to accelerate ray tracing.
The second part is about future GPUs. It doesn't mention cloud gaming explicitly, but that's the best use case.
Intel Embree, the rendering library described in the text, is a CPU solution that heavily utilizes the AVX instruction set. You can't just move it to GPUs.
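For context, a minimal Embree usage sketch (assuming Embree 3's C API, with a single hard-coded triangle) looks roughly like this; the library dispatches to SSE/AVX/AVX-512 kernels internally:

```cpp
// Minimal Embree 3 sketch: build a one-triangle scene on the CPU and trace
// a single ray against it. Sketch under the assumption of the Embree 3 C API.
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main()
{
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene  scene  = rtcNewScene(device);

    // One triangle in the XY plane.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                               RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    v[0] = 0.f; v[1] = 0.f; v[2] = 0.f;
    v[3] = 1.f; v[4] = 0.f; v[5] = 0.f;
    v[6] = 0.f; v[7] = 1.f; v[8] = 0.f;
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);  // builds the BVH acceleration structure on the CPU

    // Shoot one ray from z = -1 straight at the triangle.
    RTCRayHit rh{};
    rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.f;
    rh.ray.dir_x = 0.f;  rh.ray.dir_y = 0.f;  rh.ray.dir_z = 1.f;
    rh.ray.tnear  = 0.f;
    rh.ray.tfar   = std::numeric_limits<float>::infinity();
    rh.ray.mask   = 0xFFFFFFFF;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    rtcIntersect1(scene, &ctx, &rh);  // tfar is set to the hit distance on a hit

    if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        std::printf("hit at t = %f\n", rh.ray.tfar);
    else
        std::printf("miss\n");

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```

Both the BVH build and the ray traversal run on the CPU here; that AVX-tuned traversal core is what can't simply be moved to a GPU.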
And really... GPUs are awful at ray tracing. That's why Nvidia made an ASIC and sacrificed some GPGPU potential in RTX cards.
Intel may be planning new PCIe RT accelerators to speed up ray-tracing nodes, but those would not be GPUs - they would be a new generation of x86 coprocessors. In fact, Xeon Phi is currently used for rendering. But that would just be improving products in a segment Intel already dominates.
RTRT for cloud gaming would mean entering a new market - something Intel must do to grow.
RIP Tensor cores
I just wish their parts were as disruptive as i740 was.
oh wait.
"de-noising" (Nvidia DLSS) is one thing, ray tracing is another. The phenomenon you're describing is called "programming". It makes it possible to build universal hardware that can do different things. Great stuff!
...
Yes, "hardware raytracing" means a dedicated chip that does exactly what's supposed to be doing (just like hardware encoding/decoding, hardware compressing, hardware random number generating).