Thursday, August 19th 2021

Intel's DLSS-rivaling AI-accelerated Supersampling Tech is Named XeSS, Doubles 4K Performance

Intel plans to go full tilt with gaming graphics, with its newly announced Arc line of graphics processors designed for high-performance gaming. The top Arc "Alchemist" part meets all requirements for the DirectX 12 Ultimate logo, including real-time raytracing. During the technology's reveal earlier this week, the company also said that it's working on an AI-accelerated supersampling technology, which it is calling XeSS (Xe SuperSampling). It likely went with Xe in the name because it possibly plans to extend the technology even to its Xe LP-based iGPUs and the entry-level Iris Xe MAX discrete GPU.

Intel claims that XeSS cuts 4K frame render times in half. By all accounts, 1440p appears to be the target use case of the top Arc "Alchemist" SKU. XeSS would make 4K possible (i.e., display resolution set to 4K, rendering at a lower resolution, with AI-accelerated supersampling restoring detail). The company revealed that XeSS uses a neural network-based temporal upscaling technology that incorporates motion vectors. In the rendering pipeline, XeSS sits before most post-processing stages, similar to AMD FSR.
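Intel hasn't published the internals of its network, but the general shape of temporal upscaling with motion vectors can be sketched in a few lines. The following is a hypothetical, minimal Python illustration of temporal accumulation (not Intel's actual algorithm): each output pixel blends the current frame's sample with the previous frame's result, fetched along that pixel's motion vector.

```python
def temporal_accumulate(curr, prev, motion, alpha=0.5):
    """Toy temporal accumulation: blend the current frame with the
    previous frame's value reprojected along each pixel's motion vector.
    Real reconstructors (DLSS, XeSS) add a learned or heuristic step to
    reject stale history; this sketch just falls back to the current
    sample when the reprojected position lands off-screen."""
    h, w = len(curr), len(curr[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]          # where this pixel was last frame
            px, py = x - dx, y - dy
            if 0 <= px < w and 0 <= py < h:
                # valid history: accumulate detail over time
                out[y][x] = alpha * curr[y][x] + (1 - alpha) * prev[py][px]
            else:
                # no valid history: use the current sample as-is
                out[y][x] = curr[y][x]
    return out
```

Accumulating several jittered low-resolution frames this way is what lets a temporal upscaler recover detail a single-frame spatial filter cannot.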

While AMD's FSR technology is purely shader based, the Intel algorithm can use either XMX hardware units (new in Intel Xe HPG) or DP4a instructions (available on nearly all modern AMD and NVIDIA GPUs). XMX stands for Xe Matrix Extensions and is basically Intel's version of NVIDIA's Tensor Cores, speeding up the matrix math used in many AI-related tasks. The Intel XeSS SDK using XMX hardware will be available in open source this month; the DP4a version will follow "later this year".
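For context, a DP4a instruction computes the dot product of four packed 8-bit lanes and accumulates the result into a 32-bit integer — the core operation of low-precision neural-network inference. A rough scalar emulation in Python (illustrative only; the GPU does this in a single instruction):

```python
def dp4a(a, b, acc):
    """Emulate DP4a: dot product of four signed 8-bit lanes,
    accumulated into an integer (illustrative sketch only)."""
    def s8(v):                       # reinterpret a raw byte as signed int8
        return v - 256 if v > 127 else v
    for x, y in zip(a, b):           # four packed lanes per operand
        acc += s8(x & 0xFF) * s8(y & 0xFF)
    return acc

# e.g. int8 weights times int8 activations: 1*5 + 2*6 + 3*7 + 4*8 = 70
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0))  # prints 70
```

Dedicated matrix units like XMX perform many such multiply-accumulates per clock, which is why Intel expects the XMX path to outperform the DP4a fallback.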
Source: VideoCardz

46 Comments on Intel's DLSS-rivaling AI-accelerated Supersampling Tech is Named XeSS, Doubles 4K Performance

#1
Imsochobo
riiight.

So is it blurry, with temporal artefacts everywhere?

We'll never know. Intel really needs to start showing tech instead of talking about tech five years down the line, what a screaming baby they've become.

I just want to see Products!
Posted on Reply
#2
dj-electric
ImsochoboWe'll never know. Intel really needs to start showing tech instead of talking about tech five years down the line, what a screaming baby they've become.
Wait like... literally 30 minutes
Posted on Reply
#3
Punkenjoy
Am I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
Posted on Reply
#4
toilet pepper
PunkenjoyAm I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
Well, AMD's tech is pretty much open, but marketing wouldn't allow promotion of tech from other brands because it would make them look inferior.
Posted on Reply
#5
juular
toilet pepperWell, AMD's tech is pretty much open but marketing wouldn't allow promotion of tech from other brands coz it will make them look inferior.
FSR is not temporal; it's regular downscaling+upscaling with sharpening filters slapped together. That's why it's so bad, especially in motion.
Posted on Reply
#6
Keullo-e
S.T.A.R.S.
juularFSR is not temporal; it's regular downscaling+upscaling with sharpening filters slapped together. That's why it's so bad, especially in motion.
I should try that in RE Village as it supports it.
Posted on Reply
#7
watzupken
In my opinion, with another new AI upscaling option, ultimately something's gonna give. No way will game developers want to waste time optimising their games for DLSS, FSR, and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling technology, the odds of Intel squeezing another one in with success are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR, for RDNA3 onwards, may also start leveraging AI to do the job.
Posted on Reply
#8
Mistral
Intel graphics and Intel press materials... keep your expectations in check, people :)
Posted on Reply
#9
Dredi
watzupkenIn my opinion, with another new AI upscaling option, ultimately something's gonna give. No way will game developers want to waste time optimising their games for DLSS, FSR, and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling technology, the odds of Intel squeezing another one in with success are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR, for RDNA3 onwards, may also start leveraging AI to do the job.
XeSS will be implemented using open standards and should work across all platforms similarly to FSR.
Posted on Reply
#10
R00kie
This infographic is confusing. If it's an upscaler and it improves performance, shouldn't frametime be lower rather than higher with it enabled? :confused::confused::confused:
Posted on Reply
#11
dir_d
juularFSR is not temporal; it's regular downscaling+upscaling with sharpening filters slapped together. That's why it's so bad, especially in motion.
You should read up on FSR and think about what you typed
Posted on Reply
#12
Anymal
They should stay away from the "SS" mark.
Posted on Reply
#13
ZoneDymo
juularFSR is not temporal; it's regular downscaling+upscaling with sharpening filters slapped together. That's why it's so bad, especially in motion.
Ermm, it really isn't "so bad"... and there are plenty of artifacts with DLSS as well.
In fact, I don't even understand that comment, because TAA IS temporal and that has artifacts in motion....

On Topic:
I would like to know what the difference is between this and Nvidia's implementation, because in basic terms it all sounds the same.
Posted on Reply
#14
Punkenjoy
It looks like Intel is stating that XeSS will work with any GPU vendor. That is good, very good. I do not know if the GPU needs to have special functions or if any DX12 Ultimate GPU will do...

As for artefacts in temporal solutions, it's way harder to do a good implementation, but it's not impossible. It's probably more something AAA studios will be able to pull off, though. And that is also one of the great points of FSR: not as good as a temporal solution, but easy to work with and quick to implement for smaller studios.

So with XeSS and FSR, there is probably no reason for a game not to implement one or both of them in the future...
Posted on Reply
#15
chodaboy19
We need some kind of baseline to be able to quantify this "double the performance" claim... hehe
Posted on Reply
#16
londiste
dir_dYou should read up on FSR and think about what you typed
What do you mean? FSR is quite literally a slightly modified Lanczos upscaling plus a slightly toned-down CAS.

A temporal component with motion vectors is basically the more contemporary method for upscaling (or reconstruction) that both DLSS and now XeSS are using. FSR will no doubt evolve to include a temporal component, but today it does not have any of that.
PunkenjoyAm I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
From what it looks like, this is what Intel intends to exploit. If they open up XeSS, it is quite likely to become the "standard" way of upscaling quite fast.
PunkenjoyIt looks like Intel is stating that XeSS will work with any GPU vendor. That is good, very good. I do not know if the GPU needs to have special functions or if any DX12 Ultimate GPU will do...
From what Intel disclosed, Xe-HPG seems to go pretty heavy on matrix math, even heavier than Nvidia. IIRC, AMD is going to have something similar in RDNA3.
No doubt everything is doable with shaders, but purpose-built units are likely faster for these ML-based techniques.
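For the curious, the Lanczos filter mentioned above is a classical windowed-sinc resampling kernel. A textbook Python sketch follows (FSR's actual upscaling pass is documented as a hand-optimized approximation of this kind of filter, so treat this as the idealized form, not AMD's code):

```python
import math

def lanczos(x, a=2):
    """Textbook Lanczos-a kernel: sinc(x) windowed by sinc(x/a),
    zero outside |x| < a. Larger 'a' keeps more detail but rings more."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, t, a=2):
    """Evaluate a 1-D signal at fractional coordinate t by summing
    the 2a nearest samples, weighted by the (normalized) kernel."""
    lo = math.floor(t) - a + 1
    num = den = 0.0
    for i in range(lo, lo + 2 * a):
        if 0 <= i < len(samples):
            w = lanczos(t - i, a)
            num += w * samples[i]
            den += w
    return num / den if den else 0.0
```

Note the kernel looks only at neighboring samples within one frame — there is no history buffer anywhere, which is exactly the "no temporal component" point being made here.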
Posted on Reply
#17
defaultluser
watzupkenIn my opinion, with another new AI upscaling option, ultimately something's gonna give. No way will game developers want to waste time optimising their games for DLSS, FSR, and XeSS. Sure, we will see sponsored games running a specific upscaling technology, but with DLSS having a significant head start in "smart" upscaling technology, the odds of Intel squeezing another one in with success are very low, unless it is significantly easier to integrate than DLSS. FSR for now remains the "dumb" upscaling that is the easiest to integrate into games, but I suspect future FSR, for RDNA3 onwards, may also start leveraging AI to do the job.
Yeah, this just sounds like "too little, too late."

They even made their own FXAA competitor you can force in the driver (it does almost nothing to edges, and it costs nearly as much performance).

Because (unlike SMAA and FXAA) we will never be able to inject FSR from outside the game, game devs are going to have to choose one or two techs to bother with!
Posted on Reply
#18
Cheese_On_tsaot
Jill ValentineI should try that in RE Village as it supports it.
It's awful.
Posted on Reply
#19
Keullo-e
S.T.A.R.S.
Cheese_On_tsaotIt's awful.
Still going to try it; haven't downloaded it yet, as I wiped all my SSDs clean because I was too lazy to organize them.
Posted on Reply
#20
juular
ZoneDymoErmm, it really isn't "so bad"... and there are plenty of artifacts with DLSS as well.
In fact, I don't even understand that comment, because TAA IS temporal and that has artifacts in motion....
Comparing FSR and DLSS 2.0, the major difference isn't obvious only if you're blind or if you're comparing screenshots of static scenes. In motion, DLSS 2.0 made a big improvement over 1.0; it's almost transparent. Sure, there are artifacts from its temporal nature, but the performance improvement is worth it in most cases, at least in scenic games where the quality of the static image and the ability to run the game at maximum settings with good enough FPS are more important.
And while TAA is, by definition, temporal, FSR isn't, so it isn't really a fair comparison in the first place. People and media comparing them and concluding "yeah, they're comparable" are simply lying to themselves. Like that HU video where they compare a bunch of static screenshots while saying nothing about FSR's performance in motion, where it's all but a blurry mess. Better than nothing, I guess, but not at all comparable to even first-gen DLSS.
dir_dYou should read up on FSR and think about what you typed
I don't know what you were reading, but FSR is very much not temporal. It's a relatively simple downscaling-upscaling based on Lanczos, a well-known classical upscaling algorithm. There is no way to make it temporal while retaining its ease of implementation by game developers; it's simply a post-process shader and was developed as such. Whether AMD will eventually come up with an actual alternative to DLSS is another question. I sure hope so, but first they would need to add DL acceleration blocks to their hardware.
Posted on Reply
#21
dir_d
londisteWhat do you mean? FSR is quite literally a slightly modified Lanczos upscaling plus a slightly toned-down CAS.

A temporal component with motion vectors is basically the more contemporary method for upscaling (or reconstruction) that both DLSS and now XeSS are using. FSR will no doubt evolve to include a temporal component, but today it does not have any of that.

From what it looks like, this is what Intel intends to exploit. If they open up XeSS, it is quite likely to become the "standard" way of upscaling quite fast.
From what Intel disclosed, Xe-HPG seems to go pretty heavy on matrix math, even heavier than Nvidia. IIRC, AMD is going to have something similar in RDNA3.
No doubt everything is doable with shaders, but purpose-built units are likely faster for these ML-based techniques.
What I meant for him to read up on was the "that's why it's so bad, especially in motion" part of his statement, because there is no temporal component, as you stated.
Posted on Reply
#22
dicktracy
DLSS 2.0 is best in class. FSR sucks. This one will likely suck. We need DirectML ASAP to standardize AI upscaling.
Posted on Reply
#23
Verpal
DP4a instructions (available on nearly all modern AMD and NVIDIA GPUs)
I am aware that NVIDIA has supported DP4a since Pascal, but is there any documentation on when AMD started supporting DP4a?
Posted on Reply
#24
Vayra86
PunkenjoyAm I the only one who dislikes that we don't have a standard AI/temporal upscaling tech supported on all hardware with the required features, instead of each vendor having their own recipe?
Nope. The same goes for physics engines, another underrated thing in games that works here and there but never as a standard. We could've had so much more of it.

It's a reason not to fund proprietary bullshit, but that requires competitors in the market, and so far all we had was two giants trying to invent the best sauce for the same plate of food. But it's the same plate of food.

Now that there are three, it gets a whole lot harder to defend that idea. Three inventors of a wheel where two are destined to fail is quite a bit more risk than the odd 50% adjusted for market(ing) share. Devs won't go about supporting three technologies either; they want them fed to them, or they're not happening. Another marketing idea that won't work anymore is the good old difference of "AMD open, Nvidia closed". It's no longer a real way to stand out from the crowd when a third player deploys yet another policy. Three is a great number for us.
Posted on Reply
#25
TheUn4seen
So, now we have three proprietary upscaling technologies, with one being less proprietary but fairly primitive. I don't think this kind of segmentation will last for long since developers don't want to limit their target audience and certainly don't want to spend money on implementing three separate technologies to achieve a single goal. So now:
- If the deciding factor is a financial incentive for developers, nVidia will win.
- If it's ease of implementation, FSR will win.
- If it's performance, Intel has the upper hand, if their first-party benchmarking is to be believed. But they will have to deliver a really polished product, since they have zero market share and brand recognition as far as dedicated GPUs go.

Personally I think if AMD can create FSR 2.0 with improved quality and performance AND make it hardware agnostic, it will be a clear winner for developers and consumers. As much as I admire the complexity and elegance of AI, the mass market works on the KISS principle.
Posted on Reply