
Starfield: FSR 3 vs DLSS 3 vs XeSS Comparison

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
5,041 (1.99/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans replaced with Noctua A14x25 G2
Cooling Optimus Block, HWLabs Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 1x A14G2, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear, MX900 dual gas VESA mount
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front, LINKUP Ultra PCIe 4.0 x16 white
Audio Device(s) Audeze Maxwell Ultraviolet w/upgrade pads & LCD headband, Galaxy Buds 3 Pro, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU-R CNC Alu/Brass, SS Prismcaps W+Jellykey, LekkerV2 mod, TLabs Leath/Suede
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores Legendary
Sort of.

Ray Reconstruction runs after the ray tracing calculations themselves (which are done at the API level and are vendor-independent), but replaces the conventional denoiser with a custom pre-trained AI denoiser. That's the "reconstruction" part.

Technically, AMD and Intel could come up with their own denoisers that leverage their own tensor/vector/matrix cores.

It's just math.
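To make that concrete, here's a minimal conceptual sketch of the split (every type and function name below is hypothetical, invented for illustration, not any vendor's actual API): the trace stage is defined by DXR/Vulkan RT and behaves the same on any GPU, while the denoiser is a swappable component.

```cpp
#include <functional>
#include <vector>

// Hypothetical types, for illustration only.
struct NoisyImage { std::vector<float> samples; };
struct CleanImage { std::vector<float> pixels; };

// Stage 1: ray traversal/shading. Defined by the API (DXR / Vulkan RT),
// vendor-independent "just math". Stubbed with placeholder data here.
NoisyImage trace_rays() { return NoisyImage{std::vector<float>(4, 0.5f)}; }

// Stage 2: denoising is a pluggable component: a hand-tuned spatiotemporal
// filter, NVIDIA's pre-trained AI denoiser (the "reconstruction" part),
// or, in principle, an AMD/Intel ML denoiser running on their matrix units.
using Denoiser = std::function<CleanImage(const NoisyImage&)>;

CleanImage render_frame(const Denoiser& denoise) {
    NoisyImage noisy = trace_rays(); // vendor-independent
    return denoise(noisy);           // the only vendor-specific stage
}

int main() {
    // Trivial stand-in "denoiser": pass the samples through unchanged.
    Denoiser passthrough = [](const NoisyImage& in) {
        return CleanImage{in.samples};
    };
    CleanImage out = render_frame(passthrough);
    return out.pixels.empty() ? 1 : 0;
}
```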



That information comes from two MLID videos/podcasts.
  • Tom covered that Microsoft is trying to pressure AMD for lower pricing on next-gen custom silicon and threatened to go to ARM or Intel.
    • Microsoft also did this last-gen and AMD called their bluff.
    • Intel is super-hungry for Microsoft's business, which is where Tom and Dan started spitballing the idea of the Xbox using XeSS.
    • They also mentioned why it's highly unlikely that Microsoft would switch to Intel unless Intel just made them a ridiculous at-cost deal.
  • Tom and his guest in a more recent podcast mentioned that Sony is rumored to be developing an upscaling tech. It's important to point out that:
    • The PS4 and PS5 don't use DirectX or Vulkan. They have their own low-level APIs, GNM and GNMX, and their shader language is PSSL.
    • They're developing their own hardware-accelerated upscaling tech for the same reason Microsoft is.

So why are Microsoft and Sony developing their own tech?
  • To prevent vendor lock-in (e.g. switching from AMD to Intel)
  • To make it easier to develop for and port games from console to PC and vice versa.
  • Developing for three different proprietary upscaling technologies means committing extensive engineering time to three different code paths and at least three more quality-control efforts. That's ridiculously expensive (see the sketch after this list).
    • Coincidentally, that's also why most studios ship with FSR initially - they can hit both consoles and the PC in one swimlane.
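To illustrate what those three code paths look like in practice, here's a rough sketch of the abstraction layer a studio ends up writing. The class names are made up for illustration; the real DLSS/FSR/XeSS SDKs each have their own initialisation, resource formats and quality presets hiding behind each stub.

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical engine-side interface, for illustration only.
struct UpscalerInputs { /* color, depth, motion vectors, jitter... */ };

struct Upscaler {
    virtual ~Upscaler() = default;
    virtual void evaluate(const UpscalerInputs& in) = 0;
};

// Each wrapper below stands in for a real vendor SDK integration:
// its own init sequence, resource requirements, presets and quirks.
// Every one is engineering plus QA work, multiplied per platform.
struct DlssUpscaler : Upscaler { void evaluate(const UpscalerInputs&) override {} };
struct Fsr3Upscaler : Upscaler { void evaluate(const UpscalerInputs&) override {} };
struct XessUpscaler : Upscaler { void evaluate(const UpscalerInputs&) override {} };

std::unique_ptr<Upscaler> make_upscaler(const std::string& vendor) {
    if (vendor == "dlss") return std::make_unique<DlssUpscaler>();
    if (vendor == "fsr3") return std::make_unique<Fsr3Upscaler>();
    if (vendor == "xess") return std::make_unique<XessUpscaler>();
    throw std::runtime_error("unsupported upscaler: " + vendor);
}
```

A single FSR path, by contrast, covers both consoles and every PC GPU at once, which is exactly the one-swimlane appeal.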
I hope I explained it more clearly this time rather than just making a simple statement. I sometimes forget that most people don't have a lot of experience working in software or hardware development.
That's true.

However, with RT's "just math" performance requirements being as high as they are, I don't think the technical separation between the RT algorithms and the proprietary tech that makes that math fast enough for real-time gaming is a significant one. At least not currently, or in the near future. I also think it's not realistic to assume open standards will win just because they're open; rather, they provide a minimum implementation that vendors can agree on. In fact, I'd argue that the likely further segmentation of AI upscaling tech is there to compensate for supporting open-standard DirectX/Vulkan RT, while still offering the advantages of massively funded, heavily developed proprietary tech that delivers significant IQ and performance gains, all while keeping local hardware cheap. Sure, companies could expect everyone to run a supercomputer to get native RT at high FPS, but the vast majority won't spend that much.

Cheaper NVIDIA cards lower in the stack having comparable performance with RT engaged to otherwise faster AMD cards higher in the stack, plus having the advantage of superior proprietary tech resulting in better IQ, promotes dominance. I don't think it's a stretch to say that dominance is, in a large part, due to said proprietary tech, and therefore it's unlikely to disappear.

Arguing technicalities is fun, we are on a technical forum, but I just don't think you can meaningfully separate those RT and upscaling/AI technologies.

The PlayStation 5 not using DirectX or Vulkan is a vulnerability, in my opinion. Sure, it's outsold the Xbox, but I wonder how much of that is down to brand recognition and comfortable repeat-purchase habits. The PS5 has a serious shortage of exclusive games, and while that could be attributed to everything being cross-platform these days, you could also argue that it's because of its uncommon API.

Xbox made its own compromise/mistake in having two massively different tiers of processing power with the Series X and the Series S. From what I hear, developers hate it, since every Xbox game has to run on the Series S.

Nintendo likely getting DLSS in the Switch 2 is also a big boon for proprietary upscaling tech in my opinion, so I really do think that the statement "all proprietary upscaling tech will be shit canned" is misleading.
 
Joined
Dec 29, 2021
Messages
67 (0.06/day)
Location
Colorado
Processor Ryzen 7 7800X3D
Motherboard Asrock x670E Steel Legend
Cooling Arctic Liquid Freezr II 420mm
Memory 64GB G.Skill DDR5 CAS30 fruity LED RAM
Video Card(s) Nvidia RTX 4080 (Gigabyte)
Storage 2x Samsung 980 Pros, 3x spinning rust disks for ~20TB total storage
Display(s) 2x Asus 27" 1440p 165hz IPS monitors
Case Thermaltake Level 20XT E-ATX
Audio Device(s) Onboard
Power Supply Super Flower Leadex VII 1000w
Mouse Logitech g502
Keyboard Logitech g915
Software Windows 11 Insider Preview
Arguing technicalities is fun, we are on a technical forum, but I just don't think you can meaningfully separate those RT and upscaling/AI technologies.
Yeah, I post on here because I like to discuss this stuff. That, and it's slow this month, so I've got an excess of free time.
Let's begin.

However, with RT's "just math" performance requirements being as high as they are, I don't think the technical separation between the RT algorithms and the proprietary tech that makes that math fast enough for real-time gaming is a significant one. At least not currently, or in the near future.

It's not that the maths for RT isn't complex; it's that the hardware can and will catch up, very soon.
As you mentioned, this is correlated with AI performance, so it's driving a LOT of engineering investment from the big three.

I also think it's not realistic to assume open standards will win just because they're open; rather, they provide a minimum implementation that vendors can agree on.
DirectX and Sony's APIs are not open standards at all. They're "if you want to develop software on our consoles you will obey" mandates. That's why they're getting bent out of shape; they're afraid of being beholden to Nvidia.

Vulkan is open but it seems like most of the commercial effort these days is on platform agnosticism and feature parity.

Vulkan works like OpenGL did: GPU vendors submit extensions that are vetted and eventually absorbed into the core API. That's how SGI's multitexturing became part of OpenGL's core, and how Nvidia's VK_NV_ray_tracing evolved into the cross-vendor KHR ray tracing extensions (VK_KHR_ray_tracing_pipeline and friends).
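Here's a minimal sketch of how that looks from the application side (assumes the Vulkan SDK headers; error handling elided): a game asks the driver whether the KHR ray tracing pipeline extension is present, without caring which vendor originated it.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Query a physical device's extension list and look for the
// cross-vendor ray tracing extension that grew out of NVIDIA's
// original VK_NV_ray_tracing.
bool supports_khr_ray_tracing(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    for (const auto& e : exts) {
        // Macro expands to "VK_KHR_ray_tracing_pipeline".
        if (std::strcmp(e.extensionName,
                        VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME) == 0)
            return true;
    }
    return false;
}
```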

In fact, I'd argue that the likely further segmentation of AI upscaling tech is there to compensate for supporting open-standard DirectX/Vulkan RT, while still offering the advantages of massively funded, heavily developed proprietary tech that delivers significant IQ and performance gains, all while keeping local hardware cheap. Sure, companies could expect everyone to run a supercomputer to get native RT at high FPS, but the vast majority won't spend that much.
That's not unreasonable at all. I think we're all tired of GPU vendor shenanigans like using DLSS/FSR to present their products as more performant than they actually are. Hopefully this is limited to this generation.

Cheaper NVIDIA cards lower in the stack having comparable performance with RT engaged to otherwise faster AMD cards higher in the stack, plus having the advantage of superior proprietary tech resulting in better IQ, promotes dominance.
I completely disagree with you here. I actually own both cards (see pic at bottom for proof), and the 7900 XTX consistently performs better than the 4080 in raster and is within a single-digit delta with RT enabled in most games I've tested. It handily outperforms my 4070 even with RT enabled.

The PlayStation 5 not using DirectX or Vulkan is a vulnerability, in my opinion. Sure, it's outsold the Xbox, but I wonder how much of that is down to brand recognition and comfortable repeat-purchase habits. The PS5 has a serious shortage of exclusive games, and while that could be attributed to everything being cross-platform these days, you could also argue that it's because of its uncommon API.

Sony having their own API is all about control. They've not experienced any disadvantage in the market, so they think they're 100% correct.

Nintendo likely getting DLSS in the Switch 2 is also a big boon for proprietary upscaling tech in my opinion, so I really do think that the statement "all proprietary upscaling tech will be shit canned" is misleading.
DLSS is pretty much the only option they have, isn't it? Why expend the resources to mess with anything else if they're sticking with Nvidia?
Nintendo has been pleasantly stable in their hardware configurations as of late, which is a big win for developers.

Microsoft is getting stomped by Sony, so they're more than willing to go with somebody like Intel or Qualcomm for the next Xbox if they can get a better deal.
Sony knows their PlayStation brand is responsible for a massive chunk of AMD's revenue, and they know Intel will grovel for it.


Cheers, and it's been good discussing this with you.


Postscript: This is how I know Nvidia and AMD fanboys are full of shit.

all-cards-no_exif.jpg
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
5,041 (1.99/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans replaced with Noctua A14x25 G2
Cooling Optimus Block, HWLabs Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 1x A14G2, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear, MX900 dual gas VESA mount
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front, LINKUP Ultra PCIe 4.0 x16 white
Audio Device(s) Audeze Maxwell Ultraviolet w/upgrade pads & LCD headband, Galaxy Buds 3 Pro, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU-R CNC Alu/Brass, SS Prismcaps W+Jellykey, LekkerV2 mod, TLabs Leath/Suede
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores Legendary
I agree with most of what you're saying.

Two points of contention. I don't see a single-digit or comparable RT performance delta between the 7900 XTX and the 4080/Super; it's more like a flat 20% on average, and it can be much more than that, especially if you test path tracing / full ray tracing games. Although, to be fair, considering AMD's price cuts, the 4070 Ti Super is the XTX's competitor.


Personally, I'm aware of the marketing shenanigans, but those are nothing new and will always exist. I see DLSS, XeSS and especially DLAA as unique value-adding tech, and I ignore non-native or unfair-comparison marketing slides.

Also, not in response to anything you've said, but I find the "fake frames" argument against frame generation and upscaling pretty pathetic, since almost by definition, all computer graphics are fake imitations to some extent. We're getting to the point where a good DLSS implementation with the right settings and resolution is virtually indistinguishable from native, and in some cases looks better, especially with DLAA, even disregarding the performance advantage.
 
Joined
Jan 18, 2021
Messages
180 (0.13/day)
Processor Core i7-12700
Motherboard MSI B660 MAG Mortar
Cooling Noctua NH-D15
Memory G.Skill Ripjaws V 64GB (4x16) DDR4-3600 CL16 @ 3466 MT/s
Video Card(s) AMD RX 6800
Storage Too many to list, lol
Display(s) Gigabyte M27Q
Case Fractal Design Define R5
Power Supply Corsair RM750x
Mouse Too many to list, lol
Keyboard Keychron low profile
Software Fedora, Mint
Also, not in response to anything you've said, but I find the "fake frames" argument against frame generation and upscaling pretty pathetic, since almost by definition, all computer graphics are fake imitations to some extent. We're getting to the point where a good DLSS implementation with the right settings and resolution is virtually indistinguishable from native, and in some cases looks better, especially with DLAA, even disregarding the performance advantage.
This is a good argument in principle, but I find it misleading to lump frame generation in with the rest. You're not the first to do this; I've seen Digital Foundry make precisely the same argument.

Upscaling, sure--we can argue that various forms of that have existed for thirty years. If upscaling is "cheating," then we're on a slippery slope that goes nowhere. One could argue that any number of other techniques, LODs, occlusion culling, anything that improves performance by lightening the load on the hardware, are also "cheating," by the same reasoning. But Frame Generation does not improve performance in the same way that other "cheating" methods do. It's a purely visual smoothing feature, like motion blur without the blur. If the extra "frame rate" doesn't reduce input latency, then those extra frames cannot be equated with extra performance. Arguing otherwise is akin to saying that you can make your games run faster by changing the color temperature on your monitor.
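Rough numbers make the point, under a deliberately simplified model that ignores render queues, display scanout and the cost of the interpolation pass itself:

```latex
\[
t_{\text{render}} = \tfrac{1}{30\ \text{fps}} \approx 33.3\ \text{ms},
\qquad
t_{\text{display}} = \tfrac{1}{60\ \text{fps}} \approx 16.7\ \text{ms}
\]
```

Input is still sampled once per rendered frame, every 33.3 ms, and interpolation additionally holds frame N back until frame N+1 exists, adding up to one more render interval of delay. So responsiveness is at best that of the 30 fps base, while a true 60 fps render would sample input every 16.7 ms.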

Of course, frame gen is still impressive and, occasionally, useful, and some people take the "fake frames" objection way too far--but let's not pretend that the criticism is entirely without merit. The "fake frames" meme probably wouldn't feature so heavily in these discussions if the latest GPU generation weren't so resoundingly underwhelming by every traditional metric. Frame generation was asked to carry too heavy a marketing load, presented more or less instead of a performance uplift for Ada and (later) RDNA 3. No surprise then that thoroughly jaded GPU consumers roll their eyes at it.

You're right that Nvidia and AMD and maybe even Intel will continue to craft first-party GPU benchmark numbers for maximum marketing hype, nothing new, etc. I can't even say that marketing frame gen as performance is the most egregious example, but it is pretty pathetic, to use your word, all the more so because of the timing.
 

bazoka2023

New Member
Joined
Jan 24, 2023
Messages
1 (0.00/day)
So what's the bottleneck for NVIDIA DLSS frame generation at 4K vs AMD FSR (121 FPS vs 106), when at other resolutions they're the same or show little difference? Any explanation?
 
Joined
Mar 23, 2005
Messages
4,089 (0.57/day)
Location
Ancient Greece, Acropolis (Time Lord)
System Name RiseZEN Gaming PC
Processor AMD Ryzen 7 5800X @ Auto
Motherboard Asus ROG Strix X570-E Gaming ATX Motherboard
Cooling Corsair H115i Elite Capellix AIO, 280mm Radiator, Dual RGB 140mm ML Series PWM Fans
Memory G.Skill TridentZ 64GB (4 x 16GB) DDR4 3200
Video Card(s) ASUS DUAL RX 6700 XT DUAL-RX6700XT-12G
Storage Corsair Force MP500 480GB M.2 & MP510 480GB M.2 - 2 x WD_BLACK 1TB SN850X NVMe 1TB
Display(s) ASUS ROG Strix 34” XG349C 180Hz 1440p + Asus ROG 27" MG278Q 144Hz WQHD 1440p
Case Corsair Obsidian Series 450D Gaming Case
Audio Device(s) SteelSeries 5Hv2 w/ Sound Blaster Z SE
Power Supply Corsair RM750x Power Supply
Mouse Razer Death-Adder + Viper 8K HZ Ambidextrous Gaming Mouse - Ergonomic Left Hand Edition
Keyboard Logitech G910 Orion Spectrum RGB Gaming Keyboard
Software Windows 11 Pro - 64-Bit Edition
Benchmark Scores I'm the Doctor, Doctor Who. The Definition of Gaming is PC Gaming...
All these frame generation and resolution enhancements to squeeze out more performance are only an option because of lazy development. IMO, even RT is useless at this point.

Just look at how incredible DOOM Eternal looks; you can play that game on Ultra High PQ settings at 1080p, 1440p and 4K with zero issues on affordable graphics cards. That's how you design games.
 
Joined
Nov 22, 2020
Messages
70 (0.05/day)
Processor Ryzen 5 3600
Motherboard ASRock X470 Taichi
Cooling Scythe Kotetsu Mark II
Memory G.SKILL 32GB DDR4 3200 CL16
Video Card(s) EVGA GeForce RTX 3070 FTW3 Ultra (1980 MHz / 0.968 V)
Display(s) Dell P2715Q; BenQ EX3501R; Panasonic TC-P55S60
Case Fractal Design Define R5
Audio Device(s) Sennheiser HD580; 64 Audio 1964-Q
Power Supply Seasonic SSR-650TR
Mouse Logitech G700s; Logitech G903
Keyboard Cooler Master QuickFire TK; Kinesis Advantage
VR HMD Quest 2
If DLSS3 looks good, and FSR3 looks bad, how does dlssg-to-fsr3 fare? It could indicate whether the problem is mostly with FSR3, or with Bethesda's implementation.
 
Joined
Aug 7, 2019
Messages
361 (0.19/day)
How come you can see heavy artifacting in the video (even highlighted) for all the implementations, and yet read something else in the written review? I honestly don't know how it's possible to make these comparisons without first stating that "ALL OF THEM ARE GARBAGE", but "this one is slightly better".

At least we've left behind the "upscaling can produce better IQ than native" claims.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I dunno, XeSS (on the right) makes the text, indicators and lights in the cockpit look very washed out and, idk, faded (FSR3 on the left).
View attachment 336035

Seems to be a trade-off between crispness and noise. FSR3 looks a bit noisier, but certainly crisper, at least in the still screenshot. How they both look in motion is very important contextually as well.
 
Joined
Nov 11, 2016
Messages
3,417 (1.16/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Weird that AMD is locking FSR3 frame generation to FSR upscaling in official implementations, while the FSR3 mod can use XeSS/DLSS, which are preferable to FSR2.
 
Joined
Oct 6, 2021
Messages
1,605 (1.39/day)
The assertion that any upscaling technique approaches native resolution is grossly misleading. Comparing a blurred image with TAA to a slightly less blurred image from another upscaling method is not a fair comparison. You're just showing that TAA is garbage.
 
Joined
Feb 1, 2019
Messages
3,607 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
All these frame generation and resolution enhancements to squeeze out more performance is an option due to lazy development. IMO even RT is useless at this point.

Just look how incredible DOOM Eternal looks like and you can play that game on Ultra High PQ settings at 1080p, 1440p and 4K with zero issue on affordable graphics cards. That's how you design games.
I've been taking screenshots of FF15 lately (even this game is an optimisation mess, just not on the same level as the new stuff like Cyberpunk etc.).

Outside scene with lighting reflections on water, great scenery, 4K textures. Rendered natively at 4K: no DLSS, no FSR, no frame interpolation, no RT. Done the proper way. Does use TAA though :(

Most of these tools have turned into cheat mode for game developers.

With RT, there's no need to manually handle lighting anymore; just create light sources and objects. The penalty for this is borne by consumers, in more expensive hardware and lost performance, not by developers. (For me, RT lighting is no prettier than older games that had high-quality legacy reflections and lighting; it's just that games which implement RT usually have lazy non-RT lighting, which makes it seem god-like.)
With DLSS, performance targets can be lowered at the optimisation stage of development; instead of letting low-end hardware hit the performance of higher-end hardware, DLSS is now used just to hit legacy performance targets.
With frame generation you can cheat on performance even further, and do even less optimisation.

So, as an example, an RTX 4060, instead of running a new game natively at 1080p 60 FPS, might only need to hit 720p 30 FPS, since it can just upscale with DLSS and insert the missing interpolated frames with frame generation.
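Back-of-the-envelope shaded-pixel throughput for that example (my arithmetic, ignoring the nonzero cost of the upscaling and interpolation passes themselves):

```latex
\[
1920 \times 1080 \times 60 \approx 124.4\ \text{M px/s},
\qquad
1280 \times 720 \times 30 \approx 27.6\ \text{M px/s},
\qquad
\frac{124{,}416{,}000}{27{,}648{,}000} = 4.5
\]
```

So the optimisation target drops to roughly 1/4.5 of the shading work for a similar-looking 1080p60 presentation.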

If games had the same level of optimisation as 10 years ago, an RTX 4060 would be able to do 4K at something like 120 FPS in new games. New hardware just breeds new software inefficiencies.

I also think DX12 is inferior to DX11. The few games that support both seem to always run better on 11.

The same applies to AA, as Denver mentioned. TAA and FXAA are both lazy, poor versions of AA that allow less optimisation by developers. Of the proper AA methods, the worst is probably MSAA and the best mainstream one is SSAA; both are practically extinct now, and SGSSAA, the god-mode AA, was never officially utilised.

I wouldn't be surprised if in 10 years a $5000 GPU is struggling to natively render new games at 480p 30 FPS, while Nvidia and co. tout loads of cheat features that give a 4K 60 FPS-like presentation.
 
Joined
May 12, 2022
Messages
54 (0.06/day)
Weird that AMD is locking FSR3 frame generation to FSR upscaling in official implementations, while the FSR3 mod can use XeSS/DLSS, which are preferable to FSR2.

The way the mods hook in to do it... is unofficial. It's not AMD locking anything. Any developer could code their FSR3 frame generation implementation to do it, but you're very unlikely to see it, as it would require custom coding, something that is rarely done these days. Most developers plop in the code, make the necessary connections so it works, do some refinement so it's not broken, and that's it. Very, very few go any further than that.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,178 (1.27/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Nintendo likely getting DLSS in the Switch 2 is also a big boon for proprietary upscaling tech in my opinion, so I really do think that the statement "all proprietary upscaling tech will be shit canned" is misleading.
Yeah, I just don't see that happening either, not yet at least. I've said it before and I'll say it again: DLSS won't last forever, its days absolutely are numbered in the long term, but end of 2024? Doubt.

I can absolutely relate to wanting that to be true, don't get me wrong, but I don't think 1 Jan 2025 is a realistic timeframe given what we know today.
 
Joined
Nov 27, 2023
Messages
2,373 (6.41/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (24H2)
Same applies to AA as Denver mentioned. TAA, FXAA both lazy poor versions of AA to allow less optimisation by developers, the worst proper AA is probably MSAA, and SSAA being the best mainstream one, both of these are practically extinct now, SGSSAA the god mode AA never officially utilised.
Less “lazy” and more “no real other option”. MSAA doesn’t work with deferred renderers and doesn’t handle transparency well at all, and SSAA in any form is just prohibitively expensive in terms of frame-time cost. Post-processing AA methods are garbage for the obvious reason of blur and not working well in motion. Temporal AA can theoretically be quite good, but needs a good implementation and HEAVILY improves with higher resolutions and framerates. In pure theory, the holy grail of AA would be something that combines all three methods for different parts of the frame, but I assume that coding something like that and implementing it in-engine would be insanely difficult and potentially heavy on performance. There were attempts to create a “combo” AA solution, Nvidia's TXAA being the most prominent, but those quickly died off.
In practice, DLAA without upsampling is… fine at 1440p and absolutely adequate at 4K and above. Certainly better than most base TAA implementations and any post-processing techniques. Sure, SSAA would be better quality-wise, but even 2X at 1440p is a death knell for performance. At 4K? Good luck.
 
Joined
May 12, 2022
Messages
54 (0.06/day)
Yeah... we're kinda stuck with TAA in its various forms for now. The only way to really get better TAA more universally would be for a proper standard to be set by either DX or Vulkan. Without that, we will be continually stuck with vendor lock-in and various other silliness.
 
Joined
Feb 1, 2019
Messages
3,607 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Less “lazy” and more “no real other option”. MSAA doesn’t work with deferred renderers and doesn’t handle transparency well at all, and SSAA in any form is just prohibitively expensive in terms of frame-time cost. Post-processing AA methods are garbage for the obvious reason of blur and not working well in motion. Temporal AA can theoretically be quite good, but needs a good implementation and HEAVILY improves with higher resolutions and framerates. In pure theory, the holy grail of AA would be something that combines all three methods for different parts of the frame, but I assume that coding something like that and implementing it in-engine would be insanely difficult and potentially heavy on performance. There were attempts to create a “combo” AA solution, Nvidia's TXAA being the most prominent, but those quickly died off.
In practice, DLAA without upsampling is… fine at 1440p and absolutely adequate at 4K and above. Certainly better than most base TAA implementations and any post-processing techniques. Sure, SSAA would be better quality-wise, but even 2X at 1440p is a death knell for performance. At 4K? Good luck.
The penalty is proportionate to the game itself.

E.g. if you make a game hit the same framerate at half the hardware cost, then you have created a buffer to absorb the overhead of MSAA or SSAA.

Why were games made with deferred rendering?
 
Joined
May 12, 2022
Messages
54 (0.06/day)
The penalty is proportionate to the game itself.

E.g. if you make a game hit the same framerate at half the hardware cost, then you have created a buffer to absorb the overhead of MSAA or SSAA.

Why were games made with deferred rendering?

That's a deeper question than you may realize. Google forward vs deferred rendering; a lot of it comes down to the rise of programmable shaders and newer graphical features.

Also, it's not that MSAA can't work with deferred rendering; it's just a lot of work to make it happen, so most developers are not going to take the time. It also doesn't look as great as people like to suggest it does. It had all sorts of failings that people bitched about, notably texture aliasing and how it generally handled transparency.
 
Joined
Nov 27, 2023
Messages
2,373 (6.41/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (24H2)
The penalty is proportionate to the game itself.

E.g. if you make a game hit the same framerate at half the hardware cost, then you have created a buffer to absorb the overhead of MSAA or SSAA.
Sure, if the general consumer is ready to accept worse graphics. We are getting a lot of poorly optimized games lately, that is true, but optimization is not magic. You can't make an already somewhat optimized game hit the same frame-time target while demanding half the hardware resources, not without downgrading the visual quality. And no amount of overhead will make 2X, or especially 4X, SSAA tenable at modern resolutions. There is a reason it fell out of favor, and laziness has nothing to do with it. In fact, supersampling is arguably the easiest AA to actually implement, since all you are doing is brute-forcing a higher-resolution frame and then downscaling it.
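For what it's worth, here's a minimal single-channel sketch of that resolve step; the expensive part, rendering at N times the resolution per axis, is assumed to have already happened, and at N = 2 it means 4x the shaded pixels.

```cpp
#include <cstddef>
#include <vector>

// Box-filter resolve for SSAA: average each N x N block of the
// supersampled image (hi, row-major, outW*N by outH*N, one channel)
// down to a single output pixel.
std::vector<float> ssaa_resolve(const std::vector<float>& hi,
                                int outW, int outH, int N) {
    std::vector<float> out(static_cast<std::size_t>(outW) * outH);
    const int hiW = outW * N;
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < N; ++sy)      // walk the N x N
                for (int sx = 0; sx < N; ++sx)  // sample block
                    sum += hi[static_cast<std::size_t>(y * N + sy) * hiW
                              + (x * N + sx)];
            out[static_cast<std::size_t>(y) * outW + x] = sum / (N * N);
        }
    return out;
}
```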

Also, it's not that MSAA can't work with deferred rendering; it's just a lot of work to make it happen, so most developers are not going to take the time. It also doesn't look as great as people like to suggest it does. It had all sorts of failings that people bitched about, notably texture aliasing and how it generally handled transparency.
It can work, sure, but since it's really only applied to geometry edges, and modern games are far more than just geometry, the results will be, as you said, subpar at high cost.
 
Joined
Feb 1, 2019
Messages
3,607 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
That's a deeper question than you may realize. Google forward vs deferred rendering; a lot of it comes down to the rise of programmable shaders and newer graphical features.

Also, it's not that MSAA can't work with deferred rendering; it's just a lot of work to make it happen, so most developers are not going to take the time. It also doesn't look as great as people like to suggest it does. It had all sorts of failings that people bitched about, notably texture aliasing and how it generally handled transparency.
I was asking you the question: do you think it's a dev choice to use deferred, or is it forced upon them, which in turn makes MSAA difficult (not impossible)? Last I read, it's a choice for them, but I like to hear opinions from people themselves when they're making an opposing argument in a debate.
 
Joined
Nov 27, 2023
Messages
2,373 (6.41/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent (Solid)
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original) on a X-Raypad Equate Plus V2
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (24H2)
I was asking you the question: do you think it's a dev choice to use deferred, or is it forced upon them, which in turn makes MSAA difficult (not impossible)? Last I read, it's a choice for them, but I like to hear opinions from people themselves when they're making an opposing argument in a debate.
Uh, you are putting it in a really strange way. No, of course nobody is forced to do anything at gunpoint. But the vast majority of modern graphics as we know them, since around the end of the 2000s and the start of the 2010s, rely on and are made possible by deferred rendering. Again, sure, you can make a modern forward-rendered game. It just won't look like something acceptable as a current-gen AAA title.
Like… graphics and rendering technology always evolves and advances. MSAA was a solution to a problem (aliasing) that was fit for the time and the techniques that were around at its inception. When the technology advanced and changed, it was no longer fit for purpose. AA is just a solution, a tool. You do not bend your whole rendering pipeline backwards to accommodate a certain type of AA. That's silly.
 
Joined
Oct 6, 2021
Messages
1,605 (1.39/day)
I was asking you the question: do you think it's a dev choice to use deferred, or is it forced upon them, which in turn makes MSAA difficult (not impossible)? Last I read, it's a choice for them, but I like to hear opinions from people themselves when they're making an opposing argument in a debate.
Devs don't choose; most are hired to follow orders and rush to deliver the project under the pressure of tight targets. The industry just tends to choose what brings results faster, even if it's not the most efficient in terms of performance. (RT/PT can also be taken as an example of this.)

Deferred rendering allows for more flexibility in shaders, as shaders can be written to handle lighting information more generically, without needing to consider the interaction with each light individually.

However, deferred rendering also has some disadvantages, such as higher memory consumption due to storing lighting data in textures, and may not be ideal for scenes with a very small number of lights.
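A conceptual sketch of that trade-off (the layout is illustrative, not any particular engine's actual G-buffer):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Geometry pass output: surface data stored per pixel. Several
// render targets' worth of memory, which is the consumption cost
// mentioned above (~28 bytes per pixel here, before packing).
struct GBufferTexel {
    float albedo[3];   // base color
    float normal[3];   // surface normal
    float depth;       // enough to reconstruct position
};

struct DirLight { float dir[3]; float intensity; };

// Lighting pass: pure screen-space math. Each light only reads the
// G-buffer, so the shader is written once, generically, and the cost
// scales with lights x pixels, independent of scene complexity.
std::vector<float> lighting_pass(const std::vector<GBufferTexel>& g,
                                 const std::vector<DirLight>& lights) {
    std::vector<float> lum(g.size(), 0.0f);
    for (std::size_t i = 0; i < g.size(); ++i)
        for (const DirLight& L : lights) {
            float ndotl = g[i].normal[0] * L.dir[0]
                        + g[i].normal[1] * L.dir[1]
                        + g[i].normal[2] * L.dir[2];
            lum[i] += std::max(ndotl, 0.0f) * L.intensity;  // Lambert term
        }
    return lum;
}
```

This is also part of why MSAA fits poorly here: by the time lighting runs, the geometry information MSAA would resolve against has already been flattened into per-pixel samples.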
 
Joined
Feb 1, 2019
Messages
3,607 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Devs don't choose; most are hired to follow orders and rush to deliver the project under the pressure of tight targets. The industry just tends to choose what brings results faster, even if it's not the most efficient in terms of performance. (RT/PT can also be taken as an example of this.)

Deferred rendering allows for more flexibility in shaders, as shaders can be written to handle lighting information more generically, without needing to consider the interaction with each light individually.

However, deferred rendering also has some disadvantages, such as higher memory consumption due to storing lighting data in textures, and may not be ideal for scenes with a very small number of lights.
In every project I've been involved in, the devs choose how it's carried out. Yes, they have targets, but bosses not directly involved in development aren't telling them which API to use and so forth. Why do you think so many projects evolve, getting sudden code rewrites, framework changes and so on? That's to accommodate new generations of developers.

So yes, I believe the rendering, like other things, is a consequence of developer choices.

I don't disagree with you on time pressures, and obviously that affects the decisions they make and the very poor optimisation we get (in some games). I also understand there are benefits to using deferred rendering; the downsides are accepted as consequences of those benefits.
 

Miss_Cherry_Bomb

New Member
Joined
Sep 2, 2022
Messages
17 (0.02/day)
The problem with this review is that DLSS 3 and XeSS let you choose between upscaling quality presets. FSR 3 doesn't; you have to tune it manually. So this review is FSR 3 Performance mode picture quality vs DLSS 3 and XeSS Quality mode picture quality.
 

Attachments

  • Screenshot (205).png

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
5,041 (1.99/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans replaced with Noctua A14x25 G2
Cooling Optimus Block, HWLabs Copper 240/40 + 240/30, D5/Res, 4x Noctua A12x25, 1x A14G2, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MT 26-36-36-48, 56.6ns AIDA, 2050 FCLK, 160 ns tRFC, active cooled
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear, MX900 dual gas VESA mount
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front, LINKUP Ultra PCIe 4.0 x16 white
Audio Device(s) Audeze Maxwell Ultraviolet w/upgrade pads & LCD headband, Galaxy Buds 3 Pro, Razer Nommo Pro
Power Supply SF750 Plat, full transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 8 KHz Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU-R CNC Alu/Brass, SS Prismcaps W+Jellykey, LekkerV2 mod, TLabs Leath/Suede
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores Legendary
The problem with this review is that DLSS 3 and XeSS let you choose between upscaling quality presets. FSR 3 doesn't; you have to tune it manually. So this review is FSR 3 Performance mode picture quality vs DLSS 3 and XeSS Quality mode picture quality.
Nope. Read the review.

FSR 3 is the default, so to use it you simply need to change the render scaling ratio in the game's settings, which has an available range between 50% and 100%. When FSR 3 is enabled at the 100% render scale, the game runs at native resolution without an upscaling component, but still with the benefit of FSR 3's improved antialiasing, a similar approach to NVIDIA's Deep Learning Anti-Aliasing (DLAA). In our standardized testing we used the following render scale values for FSR 3: Quality mode at 67%, Balanced mode at 58% and Performance mode at 50% render scaling.

Any time there's a game tested without presets, we use the quality slider to set equivalents.
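For concreteness, here's what those render-scale values work out to at a 2560x1440 output (my arithmetic, rounded to whole pixels; the game may round differently):

```latex
\[
\begin{aligned}
\text{Quality (67\%)}:\;& 2560 \times 0.67 \approx 1715, & 1440 \times 0.67 \approx 965\\
\text{Balanced (58\%)}:\;& 2560 \times 0.58 \approx 1485, & 1440 \times 0.58 \approx 835\\
\text{Performance (50\%)}:\;& 2560 \times 0.50 = 1280, & 1440 \times 0.50 = 720
\end{aligned}
\]
```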
 