
Microsoft DirectSR Runtime Based on AMD FSR 2.2

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,476 (7.66/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
Microsoft's DirectSR (Direct Super Resolution) API, which seeks to standardize super resolution-based performance enhancement in games, ships with a hardware-independent default code path that is essentially based on AMD FSR 2.2, a Microsoft development manager revealed at GDC. DirectSR provides a common set of APIs for game developers to integrate super resolution, so that developers don't have to separately implement DLSS, FSR, and XeSS. Instead, these upscalers, and others, can register themselves with the DirectSR API and get fed a dozen input parameters that they may (or may not) use to improve upscaling quality. Since AMD has open-sourced the FSR 2.2 code on GPUOpen, and it is entirely shader-based and doesn't rely on exotic technologies such as AI, Microsoft chose FSR 2.2 as the base algorithm for DirectSR. If other algorithms like DLSS are available on the user's system, they too can be activated by the user, and supporting them requires no extra work on the developer's side.



Update 18:15 UTC: Updated the news post to make it clear that the FSR 2.2 code path is merely a default, and other upscalers are free to hook into DirectSR to provide upscaling.
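To make the described model concrete: the pattern is a default FSR 2.2 path plus optional vendor upscalers that register against one common entry point and consume one common set of inputs. The sketch below is purely illustrative; all names (SuperResDispatcher, SuperResInputs, registerVariant) are hypothetical stand-ins, not the actual DirectSR API.

```cpp
// Illustrative sketch of the registration-and-dispatch idea behind DirectSR.
// NOT the real API: every name here is invented for the example.
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Common per-frame inputs every registered variant receives (a small subset;
// real upscalers also take color, depth, motion vectors, jitter, etc.).
struct SuperResInputs {
    int renderWidth;
    int renderHeight;
    int targetWidth;
    int targetHeight;
};

// A variant is modeled as a callable that consumes the common inputs.
using UpscalerVariant = std::function<bool(const SuperResInputs&)>;

class SuperResDispatcher {
public:
    SuperResDispatcher() {
        // Hardware-independent default, standing in for the FSR 2.2 path:
        // accepts any job where the target is at least the render size.
        variants_["default-fsr2"] = [](const SuperResInputs& in) {
            return in.targetWidth >= in.renderWidth &&
                   in.targetHeight >= in.renderHeight;
        };
    }
    // Vendors register extra variants (DLSS, XeSS, ...) at runtime; the game
    // itself does not implement any of them separately.
    void registerVariant(const std::string& name, UpscalerVariant v) {
        variants_[name] = std::move(v);
    }
    // The game makes one call; whichever variant is selected gets the inputs.
    bool upscale(const std::string& name, const SuperResInputs& in) {
        auto it = variants_.find(name);
        if (it == variants_.end()) return false;  // variant not installed
        return it->second(in);
    }
private:
    std::map<std::string, UpscalerVariant> variants_;
};
```

The point of the pattern is that adding a new upscaler touches only the registration step, never the game's rendering code.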

View at TechPowerUp Main Site | Source
 
Joined
Feb 20, 2019
Messages
7,447 (3.89/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Oh dear.
It makes sense, I guess, because FSR is relatively mature and open, but as the lowest common denominator it's also the lowest quality of the three options.
 
Joined
May 26, 2021
Messages
122 (0.11/day)
My god. What an uninformed article. Microsoft's decision is to use an API interface similar to the one used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you like into it, to magically work with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn't support both.

I expected better from TPU.
 
Joined
Nov 8, 2017
Messages
151 (0.06/day)
My god. What an uninformed article. Microsoft's decision is to use an API interface similar to the one used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you like into it, to magically work with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn't support both.

I expected better from TPU.
Yeah... from the source article itself:
First, Hargreaves stated, "We are not trying to unify super-resolution upscalers into a one-size-fits-all specification; the goal is to make them easier for developers and users to work with." [...] However, although each technology has its own advantages and disadvantages, they all perform the same function of converting the resolution of game images while restoring and enhancing detail, so their processing pipelines are very similar to each other.
In this case, there is no need to implement all of the DLSS/FSR/XeSS shader code in the game itself. Microsoft thought it would be a good idea to put these core processing parts under DirectSR.
With this mechanism, the game can handle DLSS, FSR, and XeSS alike by simply calling DirectSR, performing only the preliminary processing the super-resolution upscaler needs and preparing the parameters.
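For what it's worth, the "preliminary processing" in that quote largely amounts to rendering at a reduced resolution and preparing the common inputs before the single upscaler call. Here's a rough sketch of just the resolution math, using FSR 2's published quality-mode scale ratios from AMD's GPUOpen documentation; the enum and function names are mine, not DirectSR's.

```cpp
// Sketch of picking a render resolution for a temporal upscaler.
// QualityMode names and the helper functions are illustrative; the
// per-axis ratios (1.5x / 1.7x / 2.0x) are FSR 2's documented values.
#include <cassert>
#include <utility>

enum class QualityMode { Quality, Balanced, Performance };

// Per-axis upscale ratio for each mode.
constexpr double scaleFor(QualityMode m) {
    switch (m) {
        case QualityMode::Quality:     return 1.5;
        case QualityMode::Balanced:    return 1.7;
        case QualityMode::Performance: return 2.0;
    }
    return 1.0;  // unreachable with a valid mode
}

// The game renders at this lower resolution, then hands the frame (plus
// depth, motion vectors, and jitter) to the upscaler, which reconstructs
// the target-resolution image.
std::pair<int, int> renderResolution(int targetW, int targetH, QualityMode m) {
    const double s = scaleFor(m);
    return { static_cast<int>(targetW / s), static_cast<int>(targetH / s) };
}
```

So for a 4K target in Performance mode the game renders at 1920x1080, and the upscaler call does the rest; that per-frame call is the part DirectSR standardizes.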
 
Joined
Nov 22, 2023
Messages
147 (0.82/day)
So long as the underlying upscaling algorithms can eventually be updated to ones that have been trained, it's not a huge issue at the moment.

I figure FSR will be on its way to being an AI-trained algorithm before long, and it's a good way to cross-market the MI300, etc.

The thing about DLSS is that its feature gating is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than on tensor cores, but it's more of a tail-wagging-the-dog situation, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
 
Joined
Nov 8, 2017
Messages
151 (0.06/day)
So long as the underlying upscaling algorithms can eventually be updated to ones that have been trained, it's not a huge issue at the moment.

I figure FSR will be on its way to being an AI-trained algorithm before long, and it's a good way to cross-market the MI300, etc.

The thing about DLSS is that its feature gating is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than on tensor cores, but it's more of a tail-wagging-the-dog situation, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
You could make the same argument about Intel. XeSS doesn't need the XMX cores, but it runs better on them. The difference is that Nvidia is in a position where it can afford to lock DLSS down, while Intel isn't.
 
Joined
Jun 2, 2017
Messages
8,055 (3.17/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
My god. What an uninformed article. Microsoft's decision is to use an API interface similar to the one used in FSR 2.2, not FSR itself. Hence, not the upscaling technology.

Once this interface is established, you can plug whichever upscaling tech you like into it, to magically work with the games. That is it.

Quite similar to how DLL swaps work today to bring one tech in lieu of another into a game that doesn't support both.

I expected better from TPU.
A better example would be how Vulkan got integrated into DX12.
 
Joined
Oct 2, 2015
Messages
2,997 (0.95/day)
Location
Argentina
System Name Ciel
Processor AMD Ryzen R5 5600X
Motherboard Asus Tuf Gaming B550 Plus
Cooling ID-Cooling 224-XT Basic
Memory 2x 16GB Kingston Fury 3600MHz@3933MHz
Video Card(s) Gainward Ghost 3060 Ti 8GB + Sapphire Pulse RX 6600 8GB
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB
Display(s) AOC Q27G3XMN + Samsung S22F350
Case Cougar MX410 Mesh-G
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W
Mouse EVGA X15
Keyboard VSG Alnilam
Software Windows 11
Oh dear.
It makes sense, I guess, because FSR is relatively mature and open, but as the lowest common denominator it's also the lowest quality of the three options.
It's the fallback; it's the best option for GPUs that can't support anything better, like Pascal. XeSS would murder them.

FSR 2.2? Lol pass.
Reading comprehension is necessary in life.
 
Joined
Feb 3, 2017
Messages
3,498 (1.31/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
The thing about DLSS is that its feature gating is entirely a management decision by NV. There's no reason you couldn't run DLSS on shaders. It might run slower than on tensor cores, but it's more of a tail-wagging-the-dog situation, where NV needed to find a reason to use the tensor cores, than the cores really being required to run DLSS.
I really wish we could see a test of that somehow. The same argument was made for DXR, and Nvidia did release drivers implementing it on Pascal. That did not turn out too well. I'm not saying DLSS would run into problems to the same degree, but depending on what exactly DLSS does with tensor operations, running it on shaders could be anywhere between negligible and quite a large problem.
 

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,756 (1.70/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Enermax ETX-T50RGB
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard EVGA Z15
Software Windows 11 +startisallback
GPUOpen, and it is entirely shader-based,

in short it's ass
the only reason DLSS 2 was a smash hit was because the image quality is so good
FSR 2 is not, nor will it ever be; it's a glorified Lanczos resize
AI training is a must for this kind of upscaling to look good
 