Friday, January 17th 2025

NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years

At the RTX "Blackwell" Editor's Day during CES 2025, NVIDIA pulled back the curtain on one of its most powerful tools: a dedicated supercomputer that has been continuously improving DLSS (Deep Learning Super Sampling) for the past six years. Brian Catanzaro, NVIDIA's VP of applied deep learning research, disclosed that thousands of the company's latest GPUs have been working round-the-clock, analyzing and perfecting the technology that has revolutionized gaming graphics. "We have a big supercomputer at NVIDIA that is running 24/7, 365 days a year improving DLSS," Catanzaro explained during his presentation on DLSS 4. The supercomputer's primary task involves analyzing failures in DLSS performance, such as ghosting, flickering, or blurriness across hundreds of games. When issues are identified, the system augments its training data sets with new examples of optimal graphics and challenging scenarios that DLSS needs to address.

DLSS 4 marks the first move from convolutional neural networks to a transformer model that runs locally on client PCs. The continuous learning process has been crucial in refining the technology, with the dedicated supercomputer serving as the backbone of this evolution. The scale of resources allocated to DLSS development is massive: the entire pipeline for a self-improving DLSS model must span not just thousands but tens of thousands of GPUs. Of course, a company that builds 100,000-GPU data centers for customers (such as xAI's Colossus) keeps some of that hardware for itself and proactively uses it to improve its own software stack. NVIDIA's CEO Jensen Huang famously said that DLSS can predict the future. These claims will be put to the test when the Blackwell series launches. Still, the approach of using massive data centers to improve DLSS is interesting, and with each new GPU generation NVIDIA releases, the process speeds up significantly.
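To illustrate the architectural shift in the abstract, here is a toy 2x upscaler in each style. This is not the DLSS network (which is proprietary); it only shows the difference the article alludes to: convolutions mix a small local neighborhood of pixels, while a transformer's self-attention lets every image patch attend to every other patch in the frame (positional encodings omitted for brevity).

```python
import torch
import torch.nn as nn

class TinyCNNUpscaler(nn.Module):
    """Convolutional 2x upscaler: each output pixel only sees a small local neighborhood."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3 * 4, 3, padding=1),  # 4 = 2x2 upscaling factor
            nn.PixelShuffle(2),                        # rearrange channels into a 2x larger image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

class TinyTransformerUpscaler(nn.Module):
    """Transformer 2x upscaler: image patches attend to the whole frame via self-attention."""
    def __init__(self, patch: int = 8, dim: int = 128, heads: int = 4, layers: int = 2):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.to_pixels = nn.Linear(dim, 3 * (2 * patch) ** 2)            # each token -> one 2x patch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        hp, wp, p2 = h // self.patch, w // self.patch, 2 * self.patch
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim), N = hp * wp patches
        tokens = self.encoder(tokens)                        # global self-attention over all patches
        out = self.to_pixels(tokens).reshape(b, hp, wp, 3, p2, p2)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(b, 3, hp * p2, wp * p2)

# Both toys map a low-resolution tile to a 2x larger one; only the information flow differs.
lr = torch.randn(1, 3, 64, 64)
print(TinyCNNUpscaler()(lr).shape, TinyTransformerUpscaler()(lr).shape)  # both (1, 3, 128, 128)
```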
Source: via PC Gamer

27 Comments on NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years

#1
Space Lynx
Astronaut
I hate it, but it is most likely the future. My next card will probably be Nvidia; even if FSR4 ends up being great, it will probably only be in a tiny handful of games. I don't plan to upgrade for a couple of years, maybe even three or four. Don't forget your backlog games are fun too, lads.
Posted on Reply
#2
Jtuck9
Tell me how it ends
Posted on Reply
#3
Jeager
Jtuck9 said: Tell me how it ends
Posted on Reply
#4
LittleBro
I kind of expected this. I was wondering for a long time how the hell they can advance DLSS so quickly, i.e. how does the model keep improving itself?
A supercomputer purely dedicated to grinding away on images that just aren't quite good enough.

"What we're doing during this process," Catanzaro continues, "is we're analysing failures. When the DLSS model fails it looks like ghosting or flickering or blurriness. And, you know, we find failures in many of the games we're looking at and we try to figure out what's going on, why does the model make the wrong choice about how to draw the image there?

"We then find ways to augment our training data set. Our training data sets are always growing. We're compiling examples of what good graphics looks like and what difficult problems DLSS needs to solve.

"We put those in our training set, and then we retrain the model, and then we test across hundreds of games in order to figure out how to make DLSS better. So, that's the process."
Source: www.yahoo.com/tech/turns-theres-big-supercomputer-nvidia-140554206.html

Now it's pretty clear why a mid-range RTX card costs a kidney nowadays.
It's pretty clear why Nvidia so strongly encourages game devs to include DLSS.
Posted on Reply
#5
Jtuck9
LittleBro said: I kind of expected this. Was wondering for long time, how the hell can they advance in DLSS so quickly, meaning how does the model keep improving itself? Is there some dedicated algorithm working when DLSS is turned on in games to search for problematic sections/defects in rendered picture?


Source: www.yahoo.com/tech/turns-theres-big-supercomputer-nvidia-140554206.html

Now it's pretty clear why nowadays middle class RTX costs a kidney.
It's pretty clear why Nvidia so strongly encourages game devs to include DLSS.
"We can't do computer graphics anymore without artificial intelligence," he said. "We compute one pixel, we infer the other 32. I mean, it's incredible... And so we hallucinate, if you will, the other 32, and it looks temporally stable, it looks photorealistic, and the image quality is incredible, the performance is incredible."

If that saves energy then I'm all for it, but not sure how much the training aspect offsets that. Therein lies the rub...

www.whitehouse.gov/briefing-room/statements-releases/2025/01/14/statement-by-president-biden-on-the-executive-order-on-advancing-u-s-leadership-in-artificial-intelligence-infrastructure/
Posted on Reply
#6
nguyen
I'm quite interested in how the Transformer model will look vs the old CNN model.

The best thing is that all those visual improvements can be backported to every game that supports DLSS 2.x, meaning over 600 games.
Posted on Reply
#7
Easy Rhino
Linux Advocate
The next time some politician or Hollywood a-hole preaches to you about your carbon footprint and how you can't have a wood stove anymore to heat your home, remind them that a trillion dollar company used untold megawatts training AI so you could get a few extra FPS on a crappy upscaled 4K video game.
Posted on Reply
#8
JustBenching
nguyen said: I'm quite interested in how the Transformer model will look vs the old CNN model.

The best thing is that all that advance in visuals can be backported to all games that support DLSS2.x, meaning over 600 games
And for GPUs as far back as Turing. That is wild.
Posted on Reply
#9
AnarchoPrimitiv
I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one detailed in this article basically create a runaway effect with respect to competition. Nvidia is leveraging its unparalleled resources to exponentially widen the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and never able to catch up.

Now, what I'm about to say could be wrong, as I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete, because in the end every company is more or less working with, and limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has turned into an arms race over who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then turns into a positive feedback loop: Nvidia has the most resources, so they have access to more compute power; this compute power lets them create faster products; the products sell more and Nvidia gets more resources... repeat. With the use of AI/ML in the design process, I feel like Nvidia has gained an insurmountable advantage that will never be "corrected" by market forces.
Posted on Reply
#10
close
I've been wondering for a while if eventually every game will be an AI model running on the GPU that takes the player's input as a "prompt" and outputs the game's visuals in real time. Eventually everything could be procedurally generated, assuming AI will be able to write a captivating story as opposed to just outputting visuals. Just take the "one to rule them all" model and prompt it for what the game should be like.
Posted on Reply
#11
R0H1T
So I'm guessing they're using your "data" to train this supercomputer ~
www.nvidia.com/en-us/geforce-now/
close said: Just take the "one to rule them all" model and prompt it for what the game should be like.
That "one rule" probably makes the AI cheat :nutkick:
Posted on Reply
#12
Daven
Yawn. Wake me when the singularity happens. In the meantime, I’ll be playing Broforce in my dreams and having fun.

Nvidia is just way too serious when it comes to gaming.
Posted on Reply
#13
Jtuck9
close said: I've been wondering for a while if eventually every game will be an AI model running on the GPU that takes the player's input as a "prompt" and outputs the game's visuals realtime. Eventually everything could be procedurally generated, assuming AI will be able to write a captivating story as opposed to just outputting visuals. Just take the "one to rule them all" model and prompt it for what the game should be like.
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.

Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race with who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then basically turns into a positive feedback loop: Nvidia has the most resources so they have access to more compute power, this compute power let's them create faster products, the products sell more and Nvidia gets more resources....repeat. With the use of AI/ML in the design process, I feel like Nvidia has literally gained an insurmountable advantage and it will never be "corrected" by market forces.
They weren't happy with Biden for a reason. Not sure if "ethics washing" is a term but they certainly played the humanity card, although in fairness I did read the argument about Leather being better for the environment rather than pleather, so there's that at least.
Posted on Reply
#14
JustBenching
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.
Nothing stops the competition from releasing a $450 card that annihilates, e.g., a 5080 in raster. Especially considering how anemic Nvidia's last two gens have been, I'd say a $450 card that beats the 5080 in raster is not even something to write home about. It should be trivial.
Posted on Reply
#15
LittleBro
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.

Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race with who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then basically turns into a positive feedback loop: Nvidia has the most resources so they have access to more compute power, this compute power let's them create faster products, the products sell more and Nvidia gets more resources....repeat. With the use of AI/ML in the design process, I feel like Nvidia has literally gained an insurmountable advantage and it will never be "corrected" by market forces.
Yes but you can't blame Nvidia for re-applying their success from elsewhere. They used their own tech to their advantage.
Nvidia's success is heavily dependent on TSMC's success. Maybe it will get corrected through there if you know what I mean.
Posted on Reply
#16
kondamin
So do they render precision stuff like bridges and skyscrapers with 4 bit hallucinations from a big server too?
Posted on Reply
#17
GuiltySpark
AnarchoPrimitiv said: Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.
I strongly disagree with what you said. IMO things haven't changed: it was almost impossible to compete before and it is almost impossible to compete now. If what you say were correct, then where were the dozens of Intel competitors back in the 2000s? There were none, because it is not a matter of having "brilliant individuals" to design the thing; it has always been about the know-how and data that you have on the thing. And unfortunately Intel had (and has) 30 years of testing methodology for their processors, such that the code you write is guaranteed to do the same thing on their different CPUs. Try designing another CPU with the same ISA as Intel and proving that it does the right computation, good luck with that! And what is even funnier is that the same is true even if you redesign everything from zero. Try developing a CPU from scratch, even with your own libraries and code, and convincing people that the thing has been tested and that the code they write (even corner cases) is handled correctly. And there is no AI (so far) in there.
Posted on Reply
#18
Cybebe
With all the discussions and blah blah blah, the only thing that hurts is the final price of the product. Ngreedia, knowing their current position as the leader in AI and DLSS, is cashing in on this advantage, knowing that no one in the industry right now can challenge them. Well, AMD had better graphics cards in terms of rasterization with the 7000 generation, but what happened? Hypocrites turned on DLSS and FG in their games and let the public know how much FSR sucks.
Posted on Reply
#19
TumbleGeorge
R0H1T said: So I'm guessing they're using your "data" to train this supercomputer
Your e-peen dimensions are more important personal data for the training?
Posted on Reply
#20
Hereticbar
Easy Rhino said: The next time some politician or Hollywood a-hole preaches to you about your carbon footprint and how you can't have a wood stove anymore to heat your home, remind them that a trillion dollar company used untold megawatts training AI so you could get a few extra FPS on a crappy upscaled 4K video game.
What do you mean by "a wood stove"? My 380w graphics card is all the heating I ever need for my room!
Posted on Reply
#21
Heiro78
nguyen said: I'm quite interested in how the Transformer model will look vs the old CNN model.

The best thing is that all that advance in visuals can be backported to all games that support DLSS2.x, meaning over 600 games
"Can" is imperative here. Is it really going to be made available for those older models of DLSS? Or is NVIDIA going to lock them behind newer versions to taut the newest hardware?
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.

Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race with who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then basically turns into a positive feedback loop: Nvidia has the most resources so they have access to more compute power, this compute power let's them create faster products, the products sell more and Nvidia gets more resources....repeat. With the use of AI/ML in the design process, I feel like Nvidia has literally gained an insurmountable advantage and it will never be "corrected" by market forces.
I agree that it's a runaway cycle. Even if some genius were to create a new method of coding that made rasterization faster, easier, or more efficient, Nvidia would still gain by using their current method. I'm a proponent of regulation for fairness, but we can't impose a limit on how Nvidia manages their business to improve their products. In some respects, they should be praised for seeing the writing on the wall years ago about the impending physical limitations on hardware improvements.

Other than maybe a blanket tax on any company that uses exorbitant amounts of energy with no offsets, I don't see what could be done. There would also have to be exceptions such as for steel mills. Electric Arc furnaces are amazing for recycling steel but they are so energy intensive.
LittleBro said: Yes but you can't blame Nvidia for re-applying their success from elsewhere. They used their own tech to their advantage.
Nvidia's success is heavily dependent on TSMC's success. Maybe it will get corrected through there if you know what I mean.
No, I don't know what you mean. Please elaborate lol
Posted on Reply
#22
Chrisy
I am wondering if running the optimization locally means that the driver will upload these optimizations to Nvidia servers to aggregate the improvements, which would effectively move the calculations they had to manage within their own infrastructure onto the clients.
That would be a lot of off-loading (time and money saved) for Nvidia. Not sure if that's clever in an evil way or not :D
Posted on Reply
#23
Vayra86
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.

Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race with who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then basically turns into a positive feedback loop: Nvidia has the most resources so they have access to more compute power, this compute power let's them create faster products, the products sell more and Nvidia gets more resources....repeat. With the use of AI/ML in the design process, I feel like Nvidia has literally gained an insurmountable advantage and it will never be "corrected" by market forces.
But... are they that far behind? XeSS is pretty good and FSR has improved strongly, without all this harping.
Posted on Reply
#24
Jtuck9
Vayra86 said: But... are they that far behind? XeSS is pretty good and FSR strongly improved, without harping about all this.
I'm wondering how far away we are from the point of it being "good enough" for most people. Diminishing returns and all that. The "Reasoning Era" being the new paradigm at OpenAI.
Posted on Reply
#25
Vayra86
Chrisy said: I am wondering if running the optimization locally means that the driver will upload these optimizations to Nvidia servers for aggregating the improvements.
Which effectively would be kind of moving the calculations they had to manage within their infrastructure to the clients now.
That would be a lot of off-loading (time and money saving) for Nvidia. Not sure if clever in evil-way or not :D
Well yeah, then you are literally mining for Nvidia :D They're not going to pay you dividends, so that would be dirty AF.

But of course they'll try to sell that differently. You're helping games improve! :p
AnarchoPrimitiv said: I think one of the biggest issues here, and I never see it mentioned, is that implementations like the one Nvidia is using detailed in this article, basically create a runaway effect with respect to competition. Nvidia is basically leveraging it's unparalleled resources to exponentially accelerate the gulf between them and AMD/Intel, and because their competitors do not have that level of resources, they will perpetually be "behind" and they'll never be able to catch up.

Now, what I'm about to say could be wrong, I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete because in the end, every company is more or less working with and is limited by, the same tool: the human brain.

Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race with who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then basically turns into a positive feedback loop: Nvidia has the most resources so they have access to more compute power, this compute power let's them create faster products, the products sell more and Nvidia gets more resources....repeat. With the use of AI/ML in the design process, I feel like Nvidia has literally gained an insurmountable advantage and it will never be "corrected" by market forces.
Nah, conceptual things only get discovered once, and then we all copy them, and that's that. Look at FSR4. That's also why it's folly to be paying for proprietary bullshit. Just wait. It'll come. And if it doesn't, it simply won't survive.
Posted on Reply