Thursday, May 2nd 2019
Intel Xe GPUs to Support Raytracing Hardware Acceleration
Intel's upcoming Xe discrete GPUs will feature hardware acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how Intel's Rendering Framework will work with the upcoming Xe architecture. The blog only mentions that the company's data-center GPUs will support the feature, not whether its client-segment ones will. The data-center Xe GPUs are targeted at cloud-based gaming services and cloud-computing providers, as well as those building large rendering farms.
"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. a fixed-function hardware that computes intersection of rays with triangles or surfaces (which in NVIDIA's case are the RT cores), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, by deploying tensor cores (matrix-multiplication units), which accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
Source: Intel
"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. a fixed-function hardware that computes intersection of rays with triangles or surfaces (which in NVIDIA's case are the RT cores), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, by deploying tensor cores (matrix-multiplication units), which accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
59 Comments on Intel Xe GPUs to Support Raytracing Hardware Acceleration
You can see it's aimed at everything from data center to integrated graphics. Doesn't mean everything will be released at once, though.
GJ Huang. Neither can Turing. All it does is some basic gimmicks with heavy denoising.
Of course there are segments where they have been active already: integrated chips and datacenters. These will be natural for them. But since they don't need much advertising, we won't know much before the launch.
Gaming GPUs are heavily advertised already. They will happen for sure.
Tbf, Intel wants a piece of the AI pie.
software.intel.com/en-us/articles/get-started-with-neural-compute-stick
2. "milk the market" Have you seen the GPU sales of NV? Dropped by 50%. "milk" :D
Me, I just take that as a measure of (lack of) objectivity.
And yes, RTRT is a huge transformation. It will simplify the graphics pipeline and in turn graphics APIs while yielding results closer to reality. But it will take the better part of a decade or more to get there. That is why I root for speedy adoption, not because I automatically like everything Nvidia does. Yes, that. That's why all Hollywood blockbusters make heavy use of rasterization. Oh, wait...
NV needs other new innovations to keep its value.
heehee
It's not just an additional block in the pipeline (like AA). It's a major development - like when we moved from 2D to 3D.
If you don't understand the difference, I'd suggest some reading. Otherwise you'll have a very hard time understanding the changes gaming will undergo in the next few years. As of today Intel's RT officially exists as a mention on a blog and AMD's RT as a PS5-related rumor. Nvidia's product is on a shelf in the PC store near you. That's the difference.
And Tesla's "better AI chip" is a render (nomen omen). As of today all Teslas leaving the factory are still equipped with the "dumped" Nvidia chip. Gaming is slightly under 60% of Nvidia's revenue. Automotive is 5%. The rest is Datacenters, pro cards and OEM. That's the whole point of being an innovative company. You have to keep making new stuff. And don't worry. Nvidia will be fine.
And there is a cost aspect as well that will keep adoption problematic for quite a while. Those first-gen Intel GPUs won't do much for that problem, and neither will Navi. The fact that Intel right here is announcing a focus on professional RT use cases is telling - it obviously doesn't see a market in the consumer segment yet.
And about the simplified rendering, there's no assumption. RT gives you shadows (AO included) and reflections in a single step, for example.
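To make that point concrete, here is a toy, self-contained shading routine (a single unit sphere lit by one directional light; all names and constants are illustrative, not any engine's actual code) showing how shadows and reflections fall out of simply spawning extra rays at the hit point, rather than requiring separate shadow-map or reflection-probe passes as in a rasterizer:

// Toy example: in a ray tracer, shadows and reflections are just secondary
// rays spawned from the primary hit, shaded in the same pass.
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray { Vec3 origin, dir; };   // dir is assumed normalized

// Ray vs. unit sphere at the origin: returns the nearest positive hit distance.
std::optional<float> hitSphere(const Ray& r) {
    float b = r.origin.dot(r.dir);
    float c = r.origin.dot(r.origin) - 1.0f;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;
    float t = -b - std::sqrt(disc);
    if (t < 1e-4f) t = -b + std::sqrt(disc);      // origin inside the sphere
    return (t > 1e-4f) ? std::optional<float>(t) : std::nullopt;
}

float shade(const Ray& ray, int depth) {
    const Vec3 toLight = {0.0f, 0.6f, -0.8f};     // normalized light direction
    auto t = hitSphere(ray);
    if (!t) return 0.1f;                          // background
    Vec3 p = ray.origin + ray.dir * (*t);
    Vec3 n = p;                                   // unit sphere: normal == position
    // Shadow: one extra ray toward the light (here it can only self-shadow).
    bool lit = !hitSphere({p + n * 1e-3f, toLight}).has_value();
    float color = lit ? std::fmax(0.0f, n.dot(toLight)) : 0.0f;
    // Reflection: one more ray, shaded recursively - no separate render pass.
    if (depth > 0) {
        Vec3 refl = ray.dir - n * (2.0f * ray.dir.dot(n));
        color = 0.8f * color + 0.2f * shade({p + n * 1e-3f, refl}, depth - 1);
    }
    return color;
}

int main() {
    float c = shade({{0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f}}, 2);
    std::printf("shaded value: %.3f\n", c);
}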
I wish for the best, but won't hold my breath.
It's no wonder they're desperate to try GPU. If they lose CPUs, then they're toast lol
Next time they get a bright idea, they should just give it to AMD to develop and buy their stock lolz