
NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years

I don't know, man. DLAA still looks like a shitty anti-aliasing solution. Still a complete garbage blur mess, just a little better than the worst AA ever invented, TAA.
You want a good AA technique? Just have a look at how older games looked with 4xSSAA or even 8xSSAA.
Those were the best times.
 

How? It's DLSS applied at native resolution, ergo only the scene reconstruction algorithm is running. It can't be blurry unless the frame rate is faltering; the perception of blurriness comes from poor resolution or frame rate. DLAA tends to enhance scene detail in some engines that only enable certain aspects of the image when the resolution is high enough.

SSAA simply increased the internal rendering resolution (NV DSR and AMD VSR were a way to apply this universally without support in the game). MSAA has always been the clearest in my opinion, but it's very resource-inefficient and has many compatibility issues with modern rendering techniques (IIRC it doesn't play well with deferred rendering).
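
To make the SSAA point concrete, here's a rough sketch of the "render bigger, then average down" idea (purely illustrative Python; render_fn and dummy_render are made-up stand-ins, not how DSR/VSR are actually implemented):

import numpy as np

def supersample(render_fn, out_w, out_h, factor=2):
    # Naive SSAA: render at factor x the target resolution in each axis
    # (so factor=2 is "4x" SSAA), then box-filter each block down to one pixel.
    hi = render_fn(out_w * factor, out_h * factor)            # (H*f, W*f, 3) float RGB
    hi = hi.reshape(out_h, factor, out_w, factor, 3)
    return hi.mean(axis=(1, 3))                               # average each factor x factor block

def dummy_render(w, h):
    # Stand-in "renderer": just a simple gradient image.
    x, y = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
    return np.dstack([x, y, np.zeros_like(x)])

frame = supersample(dummy_render, out_w=1920, out_h=1080, factor=2)   # roughly "4xSSAA"
print(frame.shape)                                                     # (1080, 1920, 3)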
 
The whole point of AI is to surpass limited human intelligence, as well as workforce automation. If all goes well, AI will be better than humans at pretty much anything, including creating stories or generating worlds (the model would use algorithms for more realistic and efficient compute).

With nuclear fusion, energy could become much cheaper in the near future and allow exponential growth of civilization (technology and number of people)
 
PSSR, like everything Sony, is fully proprietary, poorly documented to the public, and has apparently been relatively poorly received so far. I don't believe it has any particular need for ML hardware, since the PS5 Pro's graphics are still based on RDNA 2, which does not have this capability. Unless there is a semi-custom solution, but I don't believe this to be the case.
It's ML-based upscaling as well, but it doesn't make use of any extra specific hardware. RDNA 3.5 (which the PS5 Pro kinda uses) has some extra instructions meant to process data relevant for matmul in lower precision; you can read more about it here:

With the extra hardware bump, it should be able to run an upscaling CNN without much issue and with no need for extra hardware (apart from what's in the GPU itself).
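
Just to give a feel for the kind of workload, here's a toy 2x upscaling CNN in PyTorch (a made-up model for illustration only, nothing to do with PSSR's or FSR4's actual network):

import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    # Toy 2x upscaler using sub-pixel convolution; illustrative only.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3 * 4, 3, padding=1),   # 3 output channels * 2^2 for 2x upscaling
        )
        self.shuffle = nn.PixelShuffle(2)         # folds the extra channels into 2x spatial res

    def forward(self, low_res):                   # (N, 3, H, W)
        return self.shuffle(self.body(low_res))   # (N, 3, 2H, 2W)

model = ToyUpscaler().eval()
with torch.no_grad():
    lr_frame = torch.randn(1, 3, 540, 960)        # stand-in for a rendered 960x540 frame
    hr_frame = model(lr_frame)                    # -> (1, 3, 1080, 1920)
# On real hardware this kind of network would run in fp16/int8, which is exactly
# what those lower-precision matmul instructions are meant to speed up.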
Got you... But what I'm wondering is how AMD/Sony can (allegedly) do this in FSR4 without some "supercomputer" doing the work for them to upscale the image with minimal artifacts?
Sony's PSSR did do something similar to what Nvidia has done with DLSS: training a model with tons of compute over a long period of time, which can then be used by the actual consoles. They just did not announce it the way Nvidia has now. And if Nvidia had never given this detail away, you wouldn't be making this complaint.
FSR4, if truly based on ML, will also require lots of compute time beforehand in order to create a model that can perform this task on your local GPU.

Let me try to give you a better example: do you know that feature in your phone's gallery that's able to recognize people or places?
That's a machine learning model running on your phone and tagging those images behind the scenes.

That "model" (think of a "model" as a binary or dll that contains the "runtime" of the AI stuff) has been trained by google/samsung/apple in their servers for long hours with tons of examples saying "this picture is a dog", "this is a car", "this is a beach", "this person X is different from person Y", etc etc. This part is the "training" part, which is really compute intensive and takes really long time. As an example, the GPT model behind ChatGPT took around 5~6 months to train.
The outcome of this model is then shipped into your phone, where it's able to use what it has learnt and apply it to your cat pictures, and say that it is a cat. This part is called the "inference" part, and is often really fast. Think how DLSS, even in its first version, was able to upscale a frame from a smaller res into a higher one with really fast FPS (so for each frame, it upscaled it in less than 10ms!). In a similar manner, think how your phone is able to tag a pic as a "dog" really quick, or how ChatGPT is able to give you answers reasonably fast, even though the training part for all of those tasks took weeks, months, or even years.
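
If it helps, here's a tiny toy sketch of that training/inference split in PyTorch (a made-up two-class example; real photo taggers and DLSS train on vastly more data for much longer):

import torch
import torch.nn as nn

# ---- Training: done once, on the vendor's servers, very expensive ----
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1000, 2)                       # toy "examples"
y = (x[:, 0] + x[:, 1] > 0).long()             # toy "labels" ("dog" vs "cat", if you like)
for _ in range(200):                           # the slow part -- scale this up by months
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
torch.save(model.state_dict(), "model.pt")     # this file is what gets shipped to your device

# ---- Inference: done on your phone/GPU, fast, millions of times ----
deployed = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
deployed.load_state_dict(torch.load("model.pt"))
deployed.eval()
with torch.no_grad():
    print(deployed(torch.tensor([[0.3, 0.9]])).argmax(dim=1))   # near-instant answer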
 
How? It's DLSS applied at native resolution, ergo only the scene reconstruction algorithm is running. It can't be blurry unless the frame rate is faltering; the perception of blurriness comes from poor resolution or frame rate. DLAA tends to enhance scene detail in some engines that only enable certain aspects of the image when the resolution is high enough.

SSAA simply increased the internal rendering resolution (NV DSR and AMD VSR were a way to apply this universally without support in the game). MSAA has always been the clearest in my opinion, but it's very resource-inefficient and has many compatibility issues with modern rendering techniques (IIRC it doesn't play well with deferred rendering).
DLSS itself causes distortions (it blurs the image). You won't notice it so much because there's a series of filters and sharpening passes applied after upscaling. You actually answered your own question.
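
For what it's worth, that sharpening is typically some unsharp-mask style pass stacked after the upscaler; a crude NumPy sketch of the idea (not Nvidia's actual filter chain):

import numpy as np

def unsharp_mask(img, amount=0.8):
    # Blur the image with a 3x3 box filter, then add back the difference
    # between the original and the blur to exaggerate edges.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

upscaled = np.random.rand(1080, 1920, 3)       # pretend this just came out of the upscaler
sharpened = unsharp_mask(upscaled)             # what you actually see on screen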

The whole point of AI is to surpass limited human intelligence, as well as workforce automation. If all goes well, AI will be better than humans at pretty much anything, including creating stories or generating worlds (the model would use algorithms for more realistic and efficient compute).

With nuclear fusion, energy could become much cheaper in the near future and allow exponential growth of civilization (technology and number of people)
It's fair to mention that this is not real AI. The term is misused too much nowadays to name any system that involves a neural network (model) and gets accelerated by GPUs. Neural networks, machine learning, and deep learning are parts of AI, but not AI itself.
 
I mean, academically, AI is a more generic term whose meaning can range from a basic rule-based system (a bunch of if/else statements) up to ML/deep learning.
Aren't you thinking of AGI?
 
I remember the behavior of in-game NPC opponents being called "the AI of the game" since forever, which falls exactly under that if/else example, I think. It couldn't learn anything... lol
I think of AI as a general term. There are qualifiers after "AI" that can distinguish it into multiple levels, from sub-human, to on par, up to beyond.
All some form of AI, though.

To me, the idea of intelligence doesn't necessarily mean something high.
Isn't it correct to say "this has low/very low intelligence"? I think it is.
And when it is man-made, it is artificial.
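
Something like this, basically (hypothetical NPC logic, just to illustrate the rule-based end of the spectrum; nothing is learned from data):

def npc_decide(health, player_distance, has_ammo):
    # Hand-coded decision rules: this is the whole "intelligence".
    if health < 20:
        return "flee"
    if player_distance < 5:
        return "melee_attack"
    if has_ammo:
        return "shoot"
    return "patrol"

print(npc_decide(health=80, player_distance=3, has_ammo=True))   # -> melee_attack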
 
I am confident that if the title said AMD, this thread would be 20 pages. Here we go with the FSR comments, in an Nvidia thread, from people that have never used it.
 
It would just be comical. Imagine "AMD reveals secret weapon behind FSR". I mean, why would it even need to be secret? Nobody would try copying it, it's bad.
 
Funny thing is that they don't even know that Nvidia cards can do FSR as well as XeSS.

If AMD is not keeping its own secret in the form of a hidden supercomputer somewhere, continuously refining the FSR model like Nvidia does with DLSS, then comparing the effort invested in the technology by AMD and Nvidia, AMD's results are not so bad.
 
Sure, but does it matter? I'm not paying for the effort, I'm paying for the result.
 
Whether it matters is personal. It certainly does shed a different light on things.
 
I am confident that if the title said AMD, this thread would be 20 pages. Here we go with the FSR comments, in an Nvidia thread, from people that have never used it.

You do realize Nvidia cards can run all vendors' upscaling solutions, right? As for the "secret weapon behind FSR" argument... it's been there since day one? It is open source, after all...

It's ML-based upscaling as well, but it doesn't make use of any extra specific hardware. RDNA 3.5 (which the PS5 Pro kinda uses) has some extra instructions meant to process data relevant for matmul in lower precision; you can read more about it here:

With the extra hardware bump, it should be able to run an upscaling CNN without much issue and with no need for extra hardware (apart from what's in the GPU itself).

I was aware that this was an RDNA 2 solution with some improvements ported from RDNA 3/3.5; I suppose this is one of them. Weird custom chip.
 