So wtf is the DL in DLSS these days? Apparently AMD is doing it without any neural network shenanigans. As per the white paper "A Survey of Temporal Antialiasing Techniques", section "8.3. Machine learning-based methods":
Salvi [Sal17] enhances TAA image quality by using stochastic gradient descent (SGD) to learn optimal convolutional weights for computing the color extents used with neighborhood clamping and clipping methods (see Section 4.2). Image quality can be further improved by abandoning engineered history rectification methods in favor of directly learning the rectification task. For instance, variance clipping can be replaced with a recurrent convolutional autoencoder which is jointly trained to hallucinate new samples and appropriately blend them with the history data [Sal17].
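For context, the "engineered history rectification" mentioned there looks roughly like this. A minimal NumPy sketch of my own (the 3x3 window and gamma value are typical choices, not taken from the paper), using a per-channel clamp for brevity where full variance clipping clips the history color toward the neighborhood mean:

```python
import numpy as np

def rectify_history(current, history, gamma=1.0):
    """Engineered history rectification: constrain the reprojected history
    color to extents derived from the current frame's 3x3 neighborhood
    (per-channel clamp variant of Salvi-style variance clipping)."""
    H, W, _ = current.shape
    out = history.copy()
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            nb = current[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            mu, sigma = nb.mean(axis=0), nb.std(axis=0)
            # Color extents from neighborhood statistics; a learned method
            # predicts how to rectify/blend instead of clamping.
            out[y, x] = np.clip(history[y, x], mu - gamma * sigma, mu + gamma * sigma)
    return out

def taa_resolve(current, history, alpha=0.1):
    """Typical TAA blend: mostly rectified history, a little new sample."""
    return alpha * current + (1.0 - alpha) * rectify_history(current, history)
```

The learned approach in the quote replaces exactly that clamp with a network that decides how to combine the history and the new samples.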
Thus, DLSS uses a convolutional autoencoder for better quality output, and Tensor cores help with the challenges (quoted below) by providing more processing power: the Tensor cores increase the computational budget, while the convolutional autoencoder tackles the second problem, the lowered sampling rate, by hallucinating new samples and blending them with the history.
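As a sketch of that idea (my own toy PyTorch model, not NVIDIA's actual DLSS network; the channel counts and layer sizes are made up), a small recurrent convolutional autoencoder takes the jittered current frame plus the reprojected history and directly outputs the blended result, which is fed back as the next frame's history:

```python
import torch
import torch.nn as nn

class TinyTAAutoencoder(nn.Module):
    """Toy recurrent convolutional autoencoder for temporal supersampling.
    Input : current jittered frame (3ch) + reprojected history (3ch) = 6ch.
    Output: rectified/blended color (3ch) that becomes next frame's history.
    Hyperparameters are illustrative, not taken from any shipping upscaler."""
    def __init__(self, feats=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, feats, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feats, feats * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feats * 2, feats, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feats, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, current, history):
        x = torch.cat([current, history], dim=1)   # stack along channels
        return self.decoder(self.encoder(x))       # feeds back as history

# One step of the temporal loop: the output is recycled as the next history.
net = TinyTAAutoencoder()
current = torch.rand(1, 3, 64, 64)     # this frame's jittered samples
history = torch.rand(1, 3, 64, 64)     # last frame's output, reprojected
new_history = net(current, history)
print(new_history.shape)               # torch.Size([1, 3, 64, 64])
```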
From section "6. Challenges" of the same survey:
Amortizing sampling and shading across multiple frames does sometimes lead to image quality defects. Many of these problems are either due to limited computation budget (e.g. imperfect resampling), or caused by the fundamental difficulty of lowering sampling rate on spatially complex, fast changing signals. In this section we review the common problems, their causes, and existing solutions.
You can process ML tasks any way you like, but real-time rendering puts a hard limit on how long you can spend processing each image, and quality suffers if there is not enough processing power within that budget. Tensor cores are much faster at this kind of matrix math than the normal shader cores, performing a small matrix multiply-accumulate per clock cycle. You can fall back to normal shader processing, but as Intel states for the DP4a version of XeSS, both quality and performance are reduced compared with the XMX (Intel's tensor-core equivalent) version.
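To put some rough numbers on that: DP4a is a single instruction doing a 4-element INT8 dot product with a 32-bit accumulate, while a tensor/XMX unit does a whole small matrix multiply-accumulate per operation (e.g. 4x4x4 on first-generation tensor cores). The sketch below uses a made-up convolution layer size just to show how the per-pixel instruction counts diverge:

```python
def dp4a(a4, b4, acc):
    """Emulate the DP4A instruction: 4-way INT8 dot product accumulated
    into an INT32 register. One instruction = 4 multiply-accumulates."""
    return acc + sum(int(a) * int(b) for a, b in zip(a4, b4))

print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0))   # 70

# Hypothetical conv layer: 3x3 kernel, 64 in-channels, 64 out-channels,
# per output pixel.
macs_per_pixel = 3 * 3 * 64 * 64              # 36864 multiply-accumulates

dp4a_macs_per_instr   = 4                     # DP4a: 4 MACs per instruction
tensor_macs_per_instr = 4 * 4 * 4             # e.g. a 4x4x4 matrix MAC per op

print(macs_per_pixel // dp4a_macs_per_instr)     # 9216 DP4a ops per pixel
print(macs_per_pixel // tensor_macs_per_instr)   # 576 matrix ops per pixel
```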