Friday, August 26th 2022
Intel Posts XeSS Technology Deep-Dive Video
Intel Graphics today posted a technological deep-dive video presentation into how XeSS (Xe Super Sampling), the company's rival to NVIDIA DLSS and AMD FSR, works. XeSS is a gaming performance-enhancement technology in which the GPU renders your game at a lower resolution than your display's native resolution, while a high-quality upscaling algorithm scales the output up to native resolution, minimizing the quality losses associated with classical upscaling methods.
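To make the render-resolution relationship concrete, here is a minimal sketch (not Intel's code) that derives the internal render resolution from the display resolution for an assumed per-axis upscale factor; the 1.5x value below is purely illustrative.

```cpp
#include <cstdio>

// Illustrative only: derive the internal resolution the GPU would render at,
// given the display (output) resolution and an assumed per-axis upscale factor.
struct Resolution { int width; int height; };

Resolution renderResolution(Resolution output, float upscaleFactor)
{
    return { static_cast<int>(output.width  / upscaleFactor),
             static_cast<int>(output.height / upscaleFactor) };
}

int main()
{
    const Resolution display{3840, 2160};                      // native 4K target
    const Resolution internal = renderResolution(display, 1.5f); // assumed 1.5x factor
    // The GPU shades far fewer pixels (2560x1440 here) and the upscaler
    // reconstructs the image at the full 3840x2160 output.
    std::printf("Render at %dx%d, present at %dx%d\n",
                internal.width, internal.height, display.width, display.height);
    return 0;
}
```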
The video mostly covers what we had gathered from our older articles on how XeSS works. A game's raster and lighting passes are rendered at a lower resolution; that frame data, along with motion vectors, is fed to the XeSS upscaling algorithm, and the upscaled output is then passed on to the renderer's post-processing, after which the native-resolution HUD is applied. The XeSS upscaler takes not just the motion vectors and the all-important frame inputs, but also temporal data from previously processed (upscaled) frames, so that a pre-trained AI model can better reconstruct detail.

What's new in today's presentation is a set of performance numbers obtained on the flagship Arc A770 desktop graphics card, showing how XeSS impacts performance across a set of games that includes Ghostwire Tokyo, Hitman 3, Arcadegeddon, SoTR, Diofield Chronicle, Super People, Redout 2, and Chivalry 2. Intel also walked us through the various quality presets of XeSS: Ultra Quality, Quality, Balanced, and Performance.

Also revealed is that Intel Graphics is working with UL Benchmarks to add a new XeSS Feature Test to 3DMark, much like the suite's existing DLSS feature test. Much like AMD FSR, Intel claims XeSS is easy to implement across a variety of game engines, and while it benefits tremendously from the XMX matrix-math accelerators of Xe-HPG GPUs, it has a DP4a fallback path that lets it support not just older Intel architectures (such as Xe-LP), but also rival GPU brands. Intel once again detailed the broad list of games that will support XeSS, and the big AAA name here is the upcoming "Call of Duty: Modern Warfare II," which will get XeSS support at launch (October 28).
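As a rough illustration of the frame ordering described above, the sketch below uses placeholder function names (renderSceneLowRes, xessUpscale, applyPostProcessing, drawHUD are hypothetical and do not reflect the actual XeSS SDK API) to show where the upscaler sits in the pipeline.

```cpp
#include <iostream>

// Conceptual frame loop only; function names are placeholders, not the real
// XeSS SDK. The point is the ordering: the upscaler consumes the low-resolution
// frame (plus motion vectors and the previous upscaled frame) before the
// renderer's post-processing and the native-resolution HUD.
struct Frame { int width; int height; /* color, depth, motion vectors ... */ };

Frame renderSceneLowRes()
{
    std::cout << "raster + lighting at reduced resolution\n";
    return {2560, 1440};
}

Frame xessUpscale(const Frame& /*lowRes*/, const Frame& /*previousUpscaled*/)
{
    std::cout << "AI upscale using motion vectors + temporal history\n";
    return {3840, 2160};
}

void applyPostProcessing(const Frame&) { std::cout << "post-processing at native resolution\n"; }
void drawHUD(const Frame&)             { std::cout << "HUD composited at native resolution\n"; }

int main()
{
    Frame history{};                                // previous upscaled frame (temporal data)
    Frame lowRes   = renderSceneLowRes();           // 1. render at lower resolution
    Frame upscaled = xessUpscale(lowRes, history);  // 2. upscale to native resolution
    applyPostProcessing(upscaled);                  // 3. renderer's post-processing
    drawHUD(upscaled);                              // 4. native-resolution HUD
    history = upscaled;                             // feed back into the next frame
    return 0;
}
```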
The video can be watched here:
Source:
HotHardware
32 Comments on Intel Posts XeSS Technology Deep-Dive Video
If you check three months back, I said, based on Intel's delays up to that point, what Intel's strategy would have to be regarding launch date/optimization etc.
It was the end of September.
I also said that the actual clocks of the OC models (like an ASRock Phantom Gaming variant) shouldn't exceed 2.5 GHz (without extra OC applied).
Regarding performance, I said that these OC variants, if paired with 18 Gbps memory, should allow Intel to reach 95% of RTX 3060 Ti performance, and if the silicon isn't pushed as hard (let's say 2.4 GHz) and paired with 16 Gbps memory, about -5% from that.
And I gave a ±5% margin of error, since we didn't have any benchmarks and the drivers were constantly improving.
So the range goes from a best case of 100% of RTX 3060 Ti performance (2.5 GHz with 18 Gbps memory and good driver improvements) to a worst case of 86% of RTX 3060 Ti performance (2.4 GHz with 16 Gbps memory and small driver improvements).
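A minimal sketch of how those two endpoints work out, assuming the -5% step and the ±5% margin of error are applied multiplicatively to the 95% baseline:

```cpp
#include <cstdio>

// Assumption: the adjustments multiply. Start from the 95%-of-RTX-3060-Ti
// baseline, optionally apply -5% for the 2.4 GHz / 16 Gbps configuration,
// then apply the +/-5% margin of error.
int main()
{
    const double baseline         = 0.95;  // OC model, 18 Gbps memory
    const double slowerSkuPenalty = 0.95;  // -5% for 2.4 GHz / 16 Gbps
    const double margin           = 0.05;  // +/-5% margin of error

    const double bestCase  = baseline * (1.0 + margin);                    // ~0.9975 -> ~100%
    const double worstCase = baseline * slowerSkuPenalty * (1.0 - margin); // ~0.857  -> ~86%

    std::printf("Best case:  %.0f%% of RTX 3060 Ti\n", bestCase * 100.0);
    std::printf("Worst case: %.0f%% of RTX 3060 Ti\n", worstCase * 100.0);
    return 0;
}
```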
Regarding the A380, I also said three months ago that it would land (at 1080p in this case) at -2% from the GTX 1650, again with a ±5% margin of error, based on the 2,250 MHz actual clocks and 16 Gbps memory that were rumored at the time.
The A380 prediction got close enough, so I expect the A770 to be close to my original assumptions too, depending on clocks/memory and how much the drivers will have improved a month from now.
Intel absolutely is not a newcomer to graphics; they have been the largest GPU manufacturer since at least the late 2000s. Yes, that is in integrated graphics, but integrated graphics need drivers too, and they have introduced new architectures in iGPUs as well, so they should know how to develop and release functional graphics drivers for new architectures at launch.
At an architectural level, there is little difference between an iGPU and a dGPU; one is just a larger version of the other. It isn't as if Intel is having issues with their dGPU memory controllers, which are among the only new elements in a dGPU. The issues they are having are not a symptom of trying something new, just bad mismanagement and incompetence, which is a common theme for Intel outside their core x86 market, especially given their massive engineering and financial resources.
Since we are talking about optimization from developers, we are talking about games that are concurrent with or released after the CPU/iGPU solution, not games released 3-5 years before the CPU launch...
On the other side, it also didn't make sense for Intel, based on performance and target market, to invest in software engineers to support developers or to fund optimization/marketing deals.
Of course, Intel underdelivered even taking the above into account, since driver quality and functionality weren't at all where they should have been.
Now they are trying (I guess by necessity, if you consider where things are going regarding CPU/GPU interoperability) to compete in game-oriented graphics solutions as well, so they know what they have to do. I'm willing to cut them some slack; you're not, and there's nothing wrong with that...
Intel could've added it to their iGPU instead.