
NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years

Joined
Sep 15, 2011
Messages
6,828 (1.40/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
I don't know, man. DLAA still looks like a shitty anti-aliasing solution. Still a complete garbage blur mess, just a little better than the worst AA ever invented, TAA.
You want a good AA technique? Just have a look at how older games looked with 4x SSAA or even 8x SSAA.
Those were the best times.
 
Joined
Dec 25, 2020
Messages
7,332 (4.93/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) NVIDIA RTX A2000
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic IntelliMouse (2017)
Keyboard IBM Model M type 1391405
Software Windows 10 Pro 22H2
Benchmark Scores I pulled a Qiqi~
I don't know, man. DLAA still looks like a shitty anti-aliasing solution. Still a complete garbage blur mess, just a little better than the worst AA ever invented, TAA.
You want a good AA technique? Just have a look at how older games looked with 4x SSAA or even 8x SSAA.
Those were the best times.

How? It's DLSS applied to the native resolution, ergo only the scene reconstruction algorithm is running. It can't be blurry unless the frame rate is faltering; the perception of blurriness comes from poor resolution or frame rate. DLAA tends to enhance scene detail in some engines that only enable certain aspects of the image when the resolution is high enough.

SSAA simply increased the internal rendering resolution (NV DSR and AMD VSR were a way to apply this universally without support in the game). MSAA has always been the clearest in my opinion, but it's very resource-inefficient and has many compatibility issues with modern rendering techniques (IIRC it doesn't play well with deferred rendering).
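To make the SSAA point concrete, here's a minimal sketch of what supersampling boils down to (Python/NumPy; all names and shapes are mine, not from any driver): render at a higher internal resolution, then average each block of samples down to one output pixel.

```python
import numpy as np

def ssaa_downsample(hi_res: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter downsample: average each factor x factor block of samples."""
    h, w, c = hi_res.shape
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# "4xSSAA" = 4 samples per pixel = 2x the resolution per axis.
frame_2x = np.random.rand(1440, 2560, 3)  # stand-in for the supersampled render
aa_frame = ssaa_downsample(frame_2x, 2)   # -> (720, 1280, 3) anti-aliased frame
```

DSR/VSR do essentially this at the driver level, which is why they work in any game.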
 
Joined
Mar 11, 2024
Messages
91 (0.29/day)
The whole point of AI is to surpass limited human intelligence and to automate work. If all goes well, AI will be better than humans at pretty much anything, including creating stories or generating worlds (the model would use algorithms for more realistic and efficient compute).

With nuclear fusion, energy could become much cheaper in the near future and allow exponential growth of civilization (technology and number of people)
 
Joined
May 10, 2023
Messages
520 (0.84/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
PSSR, like everything Sony, is fully proprietary, poorly documented to the public, and apparently has been relatively poorly received so far. I don't believe it has any particular need for ML hardware, since the PS5 Pro's graphics are still based on RDNA 2, which does not have this capability. Unless there is a semicustom solution, but I don't believe this to be the case.
It's ML-based upscaling as well, but it doesn't make use of any extra specific hardware. RDNA 3.5 (which the PS5 Pro kinda uses) has some extra instructions meant to process operations relevant for matmul in lower precision; you can read more about it here:

With the extra hardware bump, it should be able to run an upscaling CNN without much issue and with no need for extra hardware (apart from what's in the GPU itself).
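As a rough illustration of why those instructions matter (Python/NumPy; the shapes and the im2col lowering are my own example, not anything from PSSR): a CNN's convolution layers can be lowered to big matrix multiplies, which is exactly the primitive that low-precision matrix hardware accelerates.

```python
import numpy as np

def conv_as_fp16_matmul(patches: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # patches: (num_pixels, 3*3*in_ch) "im2col" matrix of input windows
    # weights: (3*3*in_ch, out_ch) filter bank
    # Doing the multiply in float16 halves bandwidth; dedicated low-precision
    # matmul instructions make this the cheap path on recent GPUs.
    return (patches.astype(np.float16) @ weights.astype(np.float16)).astype(np.float32)

patches = np.random.rand(128 * 128, 3 * 3 * 8)    # one 128x128 tile, 8 input channels
weights = np.random.rand(3 * 3 * 8, 16)           # 16 output channels
features = conv_as_fp16_matmul(patches, weights)  # -> (16384, 16) feature-map tile
```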
Got you... But what I'm wondering is how AMD/Sony can (allegedly) do this in FSR4 without some "supercomputer" doing the work for them to upscale the image with minimal artifacts?
Sony's PSSR did something similar to what Nvidia has done with DLSS: training a model with tons of compute over a long period of time, which can then be used by the actual consoles. They just didn't announce it the way Nvidia has now. And if Nvidia had never given this detail away, you wouldn't be making this complaint.
FSR4, if truly based on ML, will also require lots of compute time beforehand in order to create a model that can perform this task on your local GPU.

Let me try to give you a better example: you know that feature in your phone's gallery that can recognize people or places?
That's a machine learning model running on your phone and tagging those images behind the scenes.

That "model" (think of a "model" as a binary or dll that contains the "runtime" of the AI stuff) has been trained by google/samsung/apple in their servers for long hours with tons of examples saying "this picture is a dog", "this is a car", "this is a beach", "this person X is different from person Y", etc etc. This part is the "training" part, which is really compute intensive and takes really long time. As an example, the GPT model behind ChatGPT took around 5~6 months to train.
The outcome of this model is then shipped into your phone, where it's able to use what it has learnt and apply it to your cat pictures, and say that it is a cat. This part is called the "inference" part, and is often really fast. Think how DLSS, even in its first version, was able to upscale a frame from a smaller res into a higher one with really fast FPS (so for each frame, it upscaled it in less than 10ms!). In a similar manner, think how your phone is able to tag a pic as a "dog" really quick, or how ChatGPT is able to give you answers reasonably fast, even though the training part for all of those tasks took weeks, months, or even years.
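Here's that training/inference split in miniature (Python with scikit-learn on a toy digit dataset; obviously nothing like the scale of DLSS or a photo tagger, but the same two phases):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Training": the slow, compute-heavy part, done once on the vendor's servers.
digits = load_digits()  # labelled examples, like "this picture is a dog"
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Inference": the fast part that ships to your phone or GPU.
# One cheap call per image, analogous to DLSS upscaling a single frame.
print(model.predict(X_test[:1]))  # e.g. [2] -- the predicted label
```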
 
Joined
Jul 24, 2024
Messages
351 (1.94/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
How? It's DLSS applied to the native resolution, ergo only the scene reconstruction algorithm is running. It can't be blurry unless the frame rate is faltering; the perception of blurriness comes from poor resolution or frame rate. DLAA tends to enhance scene detail in some engines that only enable certain aspects of the image when the resolution is high enough.

SSAA simply increased the internal rendering resolution (NV DSR and AMD VSR were a way to apply this universally without support in the game). MSAA has always been the clearest in my opinion, but it's very resource-inefficient and has many compatibility issues with modern rendering techniques (IIRC it doesn't play well with deferred rendering).
DLSS itself causes distortions (it blurs the image). You just don't notice it much because a series of filters and sharpening passes runs after the upscaling. You actually answered your own question.
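For what it's worth, the post-upscale sharpening being described is usually some variant of an unsharp mask. A rough sketch (Python/NumPy/SciPy; the parameters are made up for illustration, not DLSS's actual pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def upscale_and_sharpen(img: np.ndarray, scale: float, amount: float = 0.6) -> np.ndarray:
    up = zoom(img, (scale, scale), order=1)   # cheap upscale (soft/blurry)
    blurred = gaussian_filter(up, sigma=1.0)  # low-pass copy
    # Unsharp mask: add back the high frequencies to mask the blur.
    return np.clip(up + amount * (up - blurred), 0.0, 1.0)

frame = np.random.rand(540, 960)             # grayscale stand-in frame
sharpened = upscale_and_sharpen(frame, 2.0)  # 540p -> 1080p, then sharpen
```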

The whole point of AI is to surpass limited human intelligence and to automate work. If all goes well, AI will be better than humans at pretty much anything, including creating stories or generating worlds (the model would use algorithms for more realistic and efficient compute).

With nuclear fusion, energy could become much cheaper in the near future and allow exponential growth of civilization (technology and number of people)
It's fair to mention that this is not real AI. The term is being misused far too much nowadays to name any system involving a neural network (model) that gets accelerated by GPUs. Neural networks, machine learning, and deep learning are parts of AI, but not AI itself.
 