Friday, January 17th 2025
NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years
At the RTX "Blackwell" Editor's Day during CES 2025, NVIDIA pulled back the curtain on one of its most powerful tools: a dedicated supercomputer that has been continuously improving DLSS (Deep Learning Super Sampling) for the past six years. Brian Catanzaro, NVIDIA's VP of applied deep learning research, disclosed that thousands of the company's latest GPUs have been working round-the-clock, analyzing and perfecting the technology that has revolutionized gaming graphics. "We have a big supercomputer at NVIDIA that is running 24/7, 365 days a year improving DLSS," Catanzaro explained during his presentation on DLSS 4. The supercomputer's primary task involves analyzing failures in DLSS performance, such as ghosting, flickering, or blurriness across hundreds of games. When issues are identified, the system augments its training data sets with new examples of optimal graphics and challenging scenarios that DLSS needs to address.
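Conceptually, the loop Catanzaro describes is a failure-driven data-augmentation cycle: flag artifacts, pair the offending frames with high-quality reference renders, retrain, and repeat. Below is a minimal Python sketch of that cycle; every name in it (FailureCase, find_artifacts, retrain, and so on) is a hypothetical stand-in for illustration, not NVIDIA's actual tooling.

```python
# Hypothetical sketch of the failure-driven training loop described above.
# All names here are illustrative inventions, not NVIDIA's internal pipeline.
from dataclasses import dataclass

@dataclass
class FailureCase:
    game: str
    frame_id: int
    artifact: str  # e.g. "ghosting", "flickering", "blurriness"

def find_artifacts(model, game):
    """Stand-in for the analysis pass that flags bad DLSS output."""
    return [FailureCase(game, 42, "ghosting")]  # placeholder result

def render_reference(case):
    """Stand-in for producing a high-quality ground-truth frame."""
    return f"reference-frame:{case.game}:{case.frame_id}"

def retrain(model, training_set):
    """Stand-in for a full training run on the augmented data set."""
    return model  # a real pipeline would return updated weights

def improvement_cycle(model, games, training_set):
    # One pass of the loop that, per the article, runs around the clock:
    # find failures, pair them with ground-truth renders, retrain.
    for game in games:
        for case in find_artifacts(model, game):
            training_set.append((case, render_reference(case)))
    return retrain(model, training_set)

model, data = "dlss-model-v0", []
model = improvement_cycle(model, ["Game A", "Game B"], data)
print(f"{len(data)} new training examples collected")
```

In production, the analysis and retraining stages are what would keep thousands of GPUs busy around the clock, as Catanzaro describes.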
DLSS 4 marks the first move from convolutional neural networks to a transformer model that runs locally on client PCs. The continuous learning process has been crucial in refining the technology, with the dedicated supercomputer serving as the backbone of this evolution. The scale of resources allocated to DLSS development is massive: the pipeline for a self-improving DLSS model likely spans not thousands but tens of thousands of GPUs. Naturally, a company that supplies 100,000-GPU data centers (such as xAI's Colossus) keeps some of that hardware for itself and puts it to work improving its own software stack. NVIDIA CEO Jensen Huang has famously said that DLSS can predict the future; such claims will be put to the test when the Blackwell series launches. Still, the approach of using massive data centers to improve DLSS is intriguing, and with each new GPU generation NVIDIA releases, the process speeds up significantly.
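To make the architectural shift concrete, here is a toy PyTorch sketch contrasting the two approaches. These modules only illustrate the CNN-versus-transformer distinction; they are not DLSS's real networks.

```python
# Toy PyTorch comparison of the two architectures mentioned above.
# Illustration only; not NVIDIA's actual DLSS models.
import torch
import torch.nn as nn

# CNN-style upscaler (DLSS 2/3 era): fixed local receptive fields.
cnn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),  # 2x2 sub-pixels per pixel
    nn.PixelShuffle(2),                              # rearrange into a 2x upscale
)

# Transformer-style model (DLSS 4): self-attention lets every image token
# weigh information from every other token, not just a local window.
encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)

frame = torch.randn(1, 3, 32, 32)           # low-resolution input frame
print(cnn(frame).shape)                     # -> torch.Size([1, 3, 64, 64])

tokens = frame.flatten(2).transpose(1, 2)   # naive "patchify": 1024 tokens
tokens = nn.Linear(3, 64)(tokens)           # project to model width
print(transformer(tokens).shape)            # -> torch.Size([1, 1024, 64])
```

The practical difference is the receptive field: the convolutional stack only mixes information within small local windows, while self-attention lets every part of the frame attend to every other part, which is presumably what a transformer-based DLSS exploits for detail reconstruction and temporal stability.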
Source: via PC Gamer
27 Comments on NVIDIA Reveals Secret Weapon Behind DLSS Evolution: Dedicated Supercomputer Running for Six Years
Now it's pretty clear why a mid-range RTX card costs a kidney these days.
It's pretty clear why Nvidia so strongly encourages game devs to include DLSS.
If that saves energy then I'm all for it, but not sure how much the training aspect offsets that. Therein lies the rub...
www.whitehouse.gov/briefing-room/statements-releases/2025/01/14/statement-by-president-biden-on-the-executive-order-on-advancing-u-s-leadership-in-artificial-intelligence-infrastructure/
The best thing is that all these visual advances can be backported to every game that supports DLSS 2.x, meaning over 600 games.
Now, what I'm about to say could be wrong, since I don't have expert knowledge of how GPUs are designed, but it seemed like 20 years ago, if you had a brilliant individual or a few of them, you could compete, because in the end every company was more or less working with, and limited by, the same tool: the human brain.
Now, with machine learning and AI, that limitation has been breached, and it has basically turned into an arms race over who can amass the most compute power. In a reality wholly shaped by the dictates of capitalism and the profit motive, the competition has basically been reduced to who can buy the most hardware. It then turns into a positive feedback loop: Nvidia has the most resources, so they have access to more compute power; this compute power lets them create faster products; the products sell more and Nvidia gets more resources... repeat. With the use of AI/ML in the design process, I feel like Nvidia has gained an insurmountable advantage, and it will never be "corrected" by market forces.
www.nvidia.com/en-us/geforce-now/ That "one rule" probably makes the AI cheat :nutkick:
Nvidia is just way too serious when it comes to gaming.
Nvidia's success is heavily dependent on TSMC's success. Maybe it will get corrected through there if you know what I mean.
Other than maybe a blanket tax on any company that uses exorbitant amounts of energy with no offsets, I don't see what could be done. There would also have to be exceptions such as for steel mills. Electric Arc furnaces are amazing for recycling steel but they are so energy intensive. No, I don't know what you mean. Please elaborate lol
Which would effectively move calculations they previously had to handle within their own infrastructure onto the clients.
That would be a lot of off-loading (time and money saved) for Nvidia. Not sure if that's clever in an evil way or not :D
But of course they'll try to sell that differently. You're helping games improve! :p Nah, conceptual things only get discovered once, and then we all copy them, and that's that. Look at FSR4. That's also why it's folly to be paying for proprietary bullshit. Just wait. It'll come. And if it won't, it simply will not survive.