Saturday, September 23rd 2023

NVIDIA Foresees Greater Reliance on AI Rendering Techniques in Games
Digital Foundry organized an AI Visuals Roundtable earlier this week, with Alex Battaglia (Video Producer) acting as host; the main topic was the recently released visual upgrade for Cyberpunk 2077. Guests included Bryan Catanzaro (NVIDIA's Vice President of Applied Deep Learning Research), Jakub Knapik (VP Art and Global Art Director at CD Projekt RED), Jacob Freeman (GeForce Marketing Evangelist), and Pedro Valadas (PCMR founder). Team Green, naturally, advocates its newly released DLSS 3.5 and Ray Reconstruction technologies; brute-force rendering techniques are considered passé and no longer an ideal solution for upcoming GPU generations. Valadas was curious enough to ask whether NVIDIA had any plans to target reasonable performance levels, minus DLSS, at native resolutions.
Catanzaro focused on the benefits introduced in Update 2.0 and Phantom Liberty: "I think that DLSS 3.5 actually makes Cyberpunk 2077 even more beautiful than native rendering...that's my belief. The reason being that the AI is able to make smarter decisions about how to render a scene...and I think that's going to continue to develop." He half-jokingly calls rasterization a "bag of fakeness"; a combination of DLSS and path tracing is preferred over old-hat methods, thus attaining the most realistic visual results. He summarizes with his own slogan: "Native rendering is fake frames." Catanzaro predicts that the real-time graphics industry will become increasingly reliant on AI-based image reconstruction and rendering technologies in the future. He hopes that AAA game environments will become cheaper to make in due time, with a shift to neural rendering; a hypothetical DLSS 10 could interface with game engine technology at a higher level, with a possible outcome being the creation of more "immersive and beautiful" experiences.

Digital Foundry's video description states: "Many thanks to all participants in this roundtable chat: Bryan Catanzaro, Vice President Applied Deep Learning Research at Nvidia, Jakub Knapik VP Art and Global Art Director at CD Projekt RED, GeForce evangelist Jacob Freeman and Pedro Valadas of the PCMR sub-Reddit."
Cyberpunk 2077 2.0 is here—the first showcase for DLSS 3.5 ray reconstruction, integrated into a full DLSS package including super resolution and frame generation, all combining to produce a state-of-the-art visual experience. In this roundtable discussion, we discuss how ray reconstruction works, how it was developed and its applications outside of path-traced games. We also talk about the evolution of the original DLSS, the success of DLSS in the modding community and the future of machine learning in PC graphics.
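To make the moving parts above easier to follow, here is a minimal structural sketch of how a frame might flow through the three stages the description names. This is purely illustrative: the real pipeline is proprietary and neural-network based, and every function below is a hypothetical placeholder, not NVIDIA's actual API.

```python
# Hypothetical placeholders sketching the per-frame data flow of the
# three DLSS 3.5 stages named above. Illustrative only -- the actual
# implementation is proprietary and not public.

def ray_reconstruction(noisy_frame):
    """Replace hand-tuned denoisers: recover stable lighting from
    sparse, noisy ray-traced samples (the new part in DLSS 3.5)."""
    ...

def super_resolution(frame, target_resolution):
    """Reconstruct a full-resolution image from a lower-resolution
    internal render, guided by motion vectors and frame history."""
    ...

def frame_generation(previous_frame, current_frame):
    """Synthesize an intermediate frame between two rendered frames."""
    ...

def frames_to_display(previous_frame, current_frame, target_resolution):
    current = ray_reconstruction(current_frame)
    current = super_resolution(current, target_resolution)
    generated = frame_generation(previous_frame, current)
    return [generated, current]  # two displayed frames per rendered frame
```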
Sources:
Eurogamer, Tom's Hardware
47 Comments on NVIDIA Foresees Greater Reliance on AI Rendering Techniques in Games
Ray tracing was bad enough performance-wise, but you could argue it makes a big difference to visuals. Now Nvidia is pushing full path tracing: you get a quarter of the performance, down to literal slideshow framerates, and for what exactly? Even the most hardcore fanboy would have trouble telling the difference if the images weren't labeled. Are we supposed to just pretend it's amazing so we pay Nvidia $2,000 to make the games playable with a million levels of upscaling/interpolation/denoising/etc.?
Also, most people don't even understand what RT does. It won't improve the models, it won't improve the design, it won't improve the texture quality. The main thing it does is simulate light, which is critical in AAA titles.
I don't think games should rely on AI techniques. Otherwise, let's just use AI to render the whole game; no need to try so hard to make models and optimise visuals, so why develop games in the first place?
Software is important, but that doesn't mean it's everything.
It's like missionary work, and it'll likely end the same way: a steady growth of atheism as people get their reality checks later down the line, while a subgroup descends into extremism. I remember a few years ago everyone was going to be all over RT. Today you're happy if you can discover whether a god ray is rendered in raster or not. It's completely irrelevant, simply because the tax is too high in every way: hardware cost, performance cost, game support cost...
Fools and their money: undeniable, and proven with every marketing blurb for hardware solutions looking for new problems to create. Because we all know there isn't a problem. Starfield shouldn't require FSR, Cyberpunk shouldn't require DLSS, and neither game should be artifacting its way to decent FPS. We had games running fine for many years without all of this shit.
I haven't seen a single game actually be a better game because of these technologies, and no, being on the verge of playable with them and unplayable without isn't an argument in their favour...
Cyberpunk, for all its bells and whistles, is now trying to relaunch itself with a 2.0 version that is in fact aspiring to the expectations of what 1.0 should've been. But at least you have a dozen ways to run it!

So much this. I can't begin to describe the sadness I feel for all those fools pixel-peeping their FSR/DLSS/XeSS comparisons and discussing what's what. Seriously, get a fucking life and realize how knee-deep you are in the corporate cesspool, in full apathy of a marketing machine used to sell inferior games and inferior GPUs and call them done. Just demand all vendors stop this bullshit and get with the program: yes, we know you can do this trick, now unify it, deploy it, and move on.
This interview underlines it nicely.
We already know DLSS doesn't need AI; interpolation is as old as 1999. Stop bullshitting. This whole affair isn't and was never about AMD and what it does or doesn't do; that's just the easy cop-out so we don't have to face reality and can keep our blinders on about what's happening in GPU land.
I hate FSR in similar measure, except AMD isn't using it to sell. They just don't sell ;) :roll:
Also, "AMD biased"... you couldn't be more wrong. TPU is, however, a community where people like to think more and click less on baity videos and nonsense. Or at least there is a substantial group in it that does. Less influenced by marketing? Definitely. AMD biased? That's just your own bias talking, and if anything it should prompt you to reflect a little. The vast majority here have been running Nvidia for decades, and that includes me. People here generally just want whatever is best at the time; it's why they came here, influenced by some of the highest-quality reviewing on the web ;)
Path tracing is the only way to simulate light correctly, but it will take decades before we can run it natively.
Also, there is another aspect that hasn't been touched yet (or that has been shot dead by Nvidia so they can make money out of it later): physics, destruction, and material behaviour.
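For context on why running path tracing "natively" is so far off: it estimates the rendering equation's integral at every pixel with Monte Carlo sampling, and the noise only falls off as 1/sqrt(samples), which is why denoisers exist at all. A toy single-point estimator, assuming a diffuse surface and a made-up sky light (both are stand-ins, not any real renderer's code), looks like this:

```python
# Toy Monte Carlo estimator of the rendering equation's integral at one
# surface point -- the core idea behind path tracing. Purely illustrative:
# a Lambertian surface lit by a hypothetical sky-light function.
import math
import random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around +Z."""
    u, v = random.random(), random.random()
    z = u                                   # cos(theta) uniform -> uniform solid angle
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def incoming_light(direction):
    """Stand-in for tracing a ray into the scene (made-up sky light)."""
    return max(0.0, direction[2])           # brighter toward the zenith

def estimate_outgoing_radiance(albedo=0.8, samples=4096):
    # L_o = integral over the hemisphere of f_r * L_i * cos(theta) dw.
    # With uniform sampling the pdf is 1/(2*pi), so each sample is
    # weighted by 2*pi; the average estimates the integral.
    brdf = albedo / math.pi                 # Lambertian BRDF is constant
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere()
        total += brdf * incoming_light(d) * d[2] * (2.0 * math.pi)
    return total / samples

print(estimate_outgoing_radiance())  # noisy estimate; error ~ 1/sqrt(N)
```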
RTX = slightly better reflections for a ~50% performance hit, and you need a better CPU for it.
AI: no idea what LLMs do for games. Probably just dropped as a buzzword in a desperate attempt at more stock-price pumping.
I was thinking earlier about how Crysis with DX10 really was a revolution in visuals that made the performance hit seem worthwhile.
That isn't true at all of Cyberpunk with RT and DLSS.
We're very much in the tech snake oil era these days imo.
The natural evolution was PBR, but Nvidia is pushing RT/PT so hard it won't stand a chance (a sketch of the standard PBR specular model follows the links below):
Minecraft with RTX PBR Texturing Guide | GeForce News | NVIDIA
Understanding Physically Based Rendering in Arnold | AREA by Autodesk
Physically-Based Shading at Disney
SIGGRAPH 2010 Course: Physically-Based Shading Models in Film and Game Production (renderwonk.com)
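For reference, the "PBR" those links describe centres on the Cook-Torrance specular model: a GGX normal distribution, a Smith geometry term, and the Schlick Fresnel approximation. Below is a minimal single-light evaluation as a sketch, not production shader code; the roughness remapping and the k term follow the common Disney/UE4 conventions, and all vectors are assumed normalized.

```python
# Minimal Cook-Torrance specular evaluation (GGX + Smith + Schlick).
# Illustrative sketch of the standard PBR model, not shader code.
import numpy as np

def ggx_distribution(n_dot_h, roughness):
    a2 = roughness ** 4                     # alpha = roughness^2 (Disney remap)
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def smith_geometry(n_dot_v, n_dot_l, roughness):
    k = (roughness + 1.0) ** 2 / 8.0        # UE4-style k for direct lighting
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def fresnel_schlick(h_dot_v, f0):
    return f0 + (1.0 - f0) * (1.0 - h_dot_v) ** 5

def cook_torrance_specular(n, v, l, roughness=0.3, f0=0.04):
    h = (v + l) / np.linalg.norm(v + l)     # half vector
    n_dot_v = max(np.dot(n, v), 1e-4)
    n_dot_l = max(np.dot(n, l), 0.0)
    n_dot_h = max(np.dot(n, h), 0.0)
    h_dot_v = max(np.dot(h, v), 0.0)
    d = ggx_distribution(n_dot_h, roughness)
    g = smith_geometry(n_dot_v, n_dot_l, roughness)
    f = fresnel_schlick(h_dot_v, f0)
    return (d * g * f) / (4.0 * n_dot_v * n_dot_l + 1e-4) * n_dot_l

n = np.array([0.0, 0.0, 1.0])   # surface normal
v = np.array([0.0, 0.6, 0.8])   # direction toward the camera
l = np.array([0.0, -0.6, 0.8])  # direction toward the light
print(cook_torrance_specular(n, v, l))
```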
(And FSR, albeit needing some refinement, works perfectly fine, imho)
You've made me realize I completely forgot the price argument as well. Nvidia jacked up pricing on the 2000 series, arguing that was the cost of ray tracing, and future generations have only become more expensive.
So really we are spending more than ever on graphics cards for a feature of questionable subjective benefit as of right now, a huge performance hit, and the addition of graphics artifacts and latency.
And now Nvidia is likely to increase prices next gen as well because, according to Jensen, Moore's law is dead for everyone but the well-off.
I would say this about A.I. denoising too. It's just switching between a few different noise filters depending on contrast, angle, etc.
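As an illustration of the hand-tuned approach that comment describes, here is a toy per-pixel filter switch driven by local contrast: flat regions get a strong blur, high-contrast regions a weak one. To be clear, this is the classical heuristic baseline; NVIDIA's denoisers are trained networks whose internals are not public, so nothing here should be read as their implementation.

```python
# Toy contrast-driven filter switching: the hand-tuned heuristic that
# learned denoisers are meant to replace. Illustration only.
import numpy as np
from scipy.ndimage import uniform_filter

def heuristic_denoise(image, contrast_threshold=0.05):
    local_mean = uniform_filter(image, size=5)
    local_contrast = np.abs(image - local_mean)
    strong = uniform_filter(image, size=7)  # flat regions: blur hard
    weak = uniform_filter(image, size=3)    # edges: barely touch them
    return np.where(local_contrast < contrast_threshold, strong, weak)

noisy = np.clip(0.5 + np.random.normal(0.0, 0.1, (64, 64)), 0.0, 1.0)
print(heuristic_denoise(noisy).shape)       # (64, 64)
```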
No you muppets, it's absolutely possible to be against NVIDIA's pricing while being a massive proponent of the technological progress in GPU rendering they've brought to the table. Before NVIDIA made real-time ray tracing a thing, there was zero innovation in the GPU space; it was just "make the next generation render more rasterised frames durrrrrrrrr". RTRT is the first step along the road to computer-generated graphics matching what our eyes can see, without all of the tricks and fakery and, ultimately, bullshit that half a century of rasterisation has taught game engineers and graphics artists to bake into their work so that the entire visual facade doesn't come crashing down.
Then there's the fact that all of the rasterisation fakery has allowed us to create a pretty good simulation of what the human eye sees, one that we've been trained to accept, whereas ray tracing is starting on floor zero and is naturally not going to be perfect. Yet time and time again we hear the same refrain: "ray tracing looks worse so we should stop wasting time on it WAAAAAAA". THAT'S NOT HOW TECHNOLOGY WORKS.
Frame interpolation is part and parcel of the long road to replacing rasterisation with RT. It's not perfect, and NVIDIA knows that full well, but it's a necessary step until graphics hardware becomes powerful enough to ray- or path-trace in real time without it. When that happens, it will be left by the wayside.

No, we don't. We know that standard interpolation doesn't need machine learning, but we literally don't know how DLSS works internally. It may be nothing more than interpolation with marketing; it might be a whole lot more. Only NVIDIA knows how it's implemented, so stop repeating this BS that you've made up to justify your arguments. :laugh:
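For readers following the "interpolation is old" argument: below is a toy motion-compensated interpolation step that warps one frame along per-pixel motion vectors and blends it with the next, the decades-old core idea. DLSS Frame Generation layers learned optical flow and neural networks on top of this; the naive version here only shows the concept (and why ghosting artifacts appear), with all names and the nearest-neighbour warp being illustrative choices.

```python
# Toy motion-compensated frame interpolation: warp frame A forward along
# motion vectors, then blend with frame B. Illustration of the concept
# only -- real frame generation is far more sophisticated.
import numpy as np

def interpolate_frame(frame_a, frame_b, motion_x, motion_y, t=0.5):
    """Synthesize a frame at time t between frame_a (t=0) and frame_b (t=1)."""
    h, w = frame_a.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip((xs - t * motion_x).astype(int), 0, w - 1)
    src_y = np.clip((ys - t * motion_y).astype(int), 0, h - 1)
    warped_a = frame_a[src_y, src_x]        # nearest-neighbour backward warp
    return (1.0 - t) * warped_a + t * frame_b

a = np.zeros((8, 8)); a[2:4, 2:4] = 1.0    # bright square...
b = np.zeros((8, 8)); b[2:4, 4:6] = 1.0    # ...moved 2 px to the right
mx = np.full((8, 8), 2.0)                  # known per-pixel motion (x)
my = np.zeros((8, 8))                      # no vertical motion
print(interpolate_frame(a, b, mx, my))     # square partway along its path, with ghosting
```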
TPU has ALWAYS had a massive AMD bias, given that the site was basically created around ATITool. That utility is the reason I, and I'll bet many of the other old-timers, joined these forums.
You cannot be against their absurd prices while simultaneously worshipping the strategies that allowed them to increase those prices in the first place; it's military-grade cognitive dissonance and ignorance.
Should we all have to pay $2-3K to game at 1080p?
The future's bright.
But it's melting icebergs for god rays. How quaint the future is; possibly short, though.
Do you need a recap of their efforts throughout the industry since pre-Turing? I'm not sure what you're smoking here, honestly. The cognitive dissonance is with you, unable to accept a highly inconvenient truth: corporations always chase money, and it doesn't have to amount to anything. Many technologies and innovations simply die off. Less respectfully, you could say lots of companies with money are just continuously throwing shit at the wall, paid for by us and not by them, through tax.
This is why I go back to what matters every time: have games become better since Turing? Did RT enable new worlds, new immersive qualities yet? I dare say it hasn't. So why would we want this? It's a major cost increase, and the cost/benefit analysis is not advantageous to any of us gamers. So I think we have every reason to say no.
And again, it's not about Nvidia because it's Nvidia. It's about their approach, their timing, their strategy, and everything else surrounding their GeForce GPUs since Turing. It's been a complete and utter shitshow; and on the other side of the fence, we have an AMD that's copying that shitshow because they haven't got any good GPU ideas left, apart from the hardware-related fine-tuning they do through chiplets. I don't applaud either company for this, tbh.

Exactly. One way or another, the reality check is pending. I'm not diving into this hole only to figure out what I already know: this is a bubble waiting to burst.