DLSS gets access to the raw frame earlier in the render pipeline than FSR, so it can 'reconstruct' detail that has already been thrown out by the time FSR sees the image.
Like, uh... seeing the data for a fence *before* a motion blur effect is applied, instead of after: more to work with, so a slightly better image.
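To make the ordering point concrete, here's a rough sketch of where each kind of upscaler sits in a frame. The pass names and exact order are just my illustration based on the usual integration guidance, not any particular engine's code:

```cpp
#include <cstdio>

// Ordering sketch only: a temporal upscaler is fed the frame before post-processing,
// while a purely spatial upscaler like FSR 1 only sees it near the end.

const char* temporalFrame[] = {  // DLSS / XeSS / FSR 2 style integration
    "geometry + lighting (render resolution)",
    "temporal upscale   <- sees the clean fence, plus motion vectors and depth",
    "motion blur", "bloom", "film grain", "UI"
};

const char* spatialFrame[] = {   // FSR 1 style integration
    "geometry + lighting (render resolution)",
    "motion blur", "bloom",
    "spatial upscale    <- the fence is already smeared by this point",
    "film grain", "UI"
};

int main() {
    printf("temporal upscaler frame:\n");
    for (const char* pass : temporalFrame) printf("  %s\n", pass);
    printf("spatial upscaler frame:\n");
    for (const char* pass : spatialFrame)  printf("  %s\n", pass);
}
```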
Actually, DLSS uses something called camera jitter (a tiny sub-pixel offset of the camera every frame), which Intel copied for XeSS, to reconstruct pixel-wide objects that would flicker even at native 4K.
Here is an example in CP2077
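And for anyone wondering what 'camera jitter' actually looks like: a minimal sketch below, assuming the common Halton-sequence jitter pattern. The function, the pixel-to-clip-space conversion, and the matrix comment are my own illustration, not NVIDIA's or Intel's code:

```cpp
#include <cstdio>

// Radical inverse in a given base: the core of the Halton low-discrepancy sequence,
// commonly used to generate the per-frame jitter offsets.
float halton(int index, int base) {
    float f = 1.0f, result = 0.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main() {
    const int renderWidth = 1920, renderHeight = 1080; // internal (pre-upscale) resolution
    const int jitterPhases = 8;                        // frames before the pattern repeats

    for (int frame = 0; frame < jitterPhases; ++frame) {
        // Sub-pixel offset in roughly [-0.5, 0.5) pixels, different every frame.
        float jx = halton(frame + 1, 2) - 0.5f;
        float jy = halton(frame + 1, 3) - 0.5f;

        // Converted to clip space, this offset gets baked into the projection matrix
        // (sign conventions vary by API), so every frame the scene is sampled at a
        // slightly different sub-pixel position.
        float clipX =  2.0f * jx / renderWidth;
        float clipY = -2.0f * jy / renderHeight;

        printf("frame %d: jitter (%.3f, %.3f) px -> clip offset (%.6f, %.6f)\n",
               frame, jx, jy, clipX, clipY);
    }
    // The temporal upscaler knows which jitter it asked for, un-jitters the history,
    // and accumulates those offset samples into an image with effective detail finer
    // than one rendered pixel.
    return 0;
}
```

Because the samples land between pixel centers over successive frames, thin stuff like fences and power lines gets resolved instead of shimmering in and out.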