
Nvidia's future DLSS iterations might have AI textures / NPC add-ons.

Joined
Sep 4, 2022
Messages
343 (0.41/day)
At Computex 2024, Jensen Huang spoke about the future of DLSS. He said that DLSS will be able to generate AI textures, objects, and even NPCs.
Will VRAM be of greater importance with DLSS textures? Will these AI-generated textures require an internet connection to RTX Remix servers, or will they require a significant amount of drive space locally? While I believe most of the talking points were geared toward investors, Huang wants to project the image that Nvidia can chew gum and dribble the ball at the same time.
The only developer that comes to mind that has historically used the full RTX/DLSS feature set is CD Projekt Red, which is currently working on The Witcher 4 in Unreal Engine 5.
That said, Unreal Engine 5 might get a DLSS plug-in requiring minimal developer effort for other titles. With most game development still treating consoles as the lowest common denominator, Huang is not waiting for the weakest link to keep fortifying Nvidia's position of dominance, IMO.

 
Joined
Jul 13, 2016
Messages
3,317 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Will VRAM be of greater importance with DLSS textures?

Yes, they'd have to run a heavier model in order to generate them.

I think calling this an iteration of DLSS is incredibly dumb. It has nothing to do with DLSS; the only common thread is the use of AI. Nvidia should just put these under a "game dev toolbox" branding or something. It's not flashy, but it doesn't need to be, given it's targeted at devs.

Real-time generation of game assets like textures and NPCs isn't something I see as a good thing given the current state of AI. It's not yet good enough to generate simple images without artifacts and issues, let alone entire NPCs.

I have to question where Nvidia is getting the training data for its texture-generation model as well. There are essentially zero good free texture resources out there to scrape, which leads me to believe that if Nvidia wants quality output, it's scraping paid resources and games. I think many game devs would be pretty pissed if Nvidia were using their textures to train an AI model it stands to massively profit from. Even more so if you create and sell texture assets for a living: that would amount to stealing your work and then taking away all your business.

I think Nvidia is crossing a dangerous line by adding content-creation features to its suite of tools unless it explicitly licenses all the data the AI is trained on (which I doubt, given that would be massively expensive: the licensing terms would essentially have to give devs using the tool full rights to the generated images, including the right to sublicense them).
 
Joined
Oct 18, 2013
Messages
6,247 (1.53/day)
Location
Over here, right where you least expect me to be !
System Name The Little One
Processor i5-11320H @4.4GHZ
Motherboard AZW SEI
Cooling Fan w/heat pipes + side & rear vents
Memory 64GB Crucial DDR4-3200 (2x 32GB)
Video Card(s) Iris XE
Storage WD Black SN850X 4TB m.2, Seagate 2TB SSD + SN850 4TB x2 in an external enclosure
Display(s) 2x Samsung 43" & 2x 32"
Case Practically identical to a mac mini, just purrtier in slate blue, & with 3x usb ports on the front !
Audio Device(s) Yamaha ATS-1060 Bluetooth Soundbar & Subwoofer
Power Supply 65w brick
Mouse Logitech MX Master 2
Keyboard Logitech G613 mechanical wireless
Software Windows 10 pro 64 bit, with all the unnecessary background shitzu turned OFF !
Benchmark Scores PDQ
No, no, & hell no...

'Cause AI IS DLSS, textures, NPCs, and everything else, everywhere, all at once, all the time...

If the chair is against the wall, the wall is against the chair, and you can't sit on either one, even if you wanted to...
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
At Computex 2024, Jensen Huang spoke about the future of DLSS. He said that DLSS will be able to generate AI textures, objects, and even NPCs.
Will VRAM be of greater importance with DLSS textures? Will these AI-generated textures require an internet connection to RTX Remix servers, or will they require a significant amount of drive space locally? While I believe most of the talking points were geared toward investors, Huang wants to project the image that Nvidia can chew gum and dribble the ball at the same time.
The only developer that comes to mind that has historically used the full RTX/DLSS feature set is CD Projekt Red, which is currently working on The Witcher 4 in Unreal Engine 5.
That said, Unreal Engine 5 might get a DLSS plug-in requiring minimal developer effort for other titles. With most game development still treating consoles as the lowest common denominator, Huang is not waiting for the weakest link to keep fortifying Nvidia's position of dominance, IMO.


There is a lot I could say here, but you can absolutely create textures with AI. These are just simple ones from a basic model, not even a good one like DALL-E 3, which would produce better results; in fact, I could have it rework these and almost certainly get more detailed output. They aren't half bad all the same, and could easily be used on any cube, wall, floor, or ceiling to spice things up. You could use one for the floor, another for the ceiling, and another for the walls to create a pretty cool environment, and you can mix them in with other textures. These could easily be Minecraft blocks.
[Attached: texture - 1.jpg, texture - 2.jpg, texture - 3.jpg]


As far as VRAM being of greater importance: yes and no. It doesn't really change that equation. If you're using textures of the same resolution, nothing really changes; if you're using the same amount of texture variety within a scene, nothing changes. You can simply swap in same-resolution textures, and VRAM requirements shouldn't change much.
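Some back-of-envelope arithmetic to illustrate (a sketch assuming uncompressed RGBA8 with a full mip chain; real games mostly ship block-compressed textures, which shrink these numbers considerably):

```python
# Back-of-envelope texture memory: swapping a texture for another of the
# same resolution and format leaves the VRAM footprint unchanged.
def texture_vram_bytes(width: int, height: int,
                       bytes_per_texel: int = 4, mips: bool = True) -> int:
    base = width * height * bytes_per_texel  # uncompressed RGBA8
    # A full mip chain adds roughly one third on top of the base level.
    return base * 4 // 3 if mips else base

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: ~{texture_vram_bytes(size, size) / 2**20:.0f} MiB")
# -> ~5 MiB, ~21 MiB, ~85 MiB; block compression (BCn) divides that by 4-8x
```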

Some slight tinkering with the image on the right side in DALL-E 3, just to see what happens. I'm sure if I played with it some I could get very different results as well. Not too shabby at all. Guess what I created? That's right, a texture with AI, and it's not even half bad; it's entirely usable. I would 100% use this texture for a raccoon cube with 6 faces on it, and I could stack them and play Minecraft with them. Hell, I could even recolor them. If I wanted to get real fancy, I could take all 3, import them into GIMP, and blend and merge them together a bit, along with other things GIMP can apply, like linear lighting and shading. I mean, look at those eyes; they don't look friendly at all, but if not fren, why fren-shaped?
[Attached: texture - 6.jpg, texture - 5.jpg, texture - 4.jpg]


For more fun, I decided to see if I could make a manhole cover out of the central portion of the image. Anyway, AI is **** (especially AI art and/or sound) from what the internet says, so it must be true. It's not exactly a perfect manhole cover, and it could be a bit better, but for a quick two-minute job it's not half bad and approximate enough.

[Attached: texture - 7.jpg, texture - 8.jpg, texture - 9.jpg]


One last example trio... decided to turn it into a power button. Ah yes, trash-panda satire at its finest! The middle one would be my nostalgic preference. Who knew an 8-bit washbear could look so fly with that mask it bears on its face?
[Attached: texture - 10.jpg, texture - 11.jpg, texture - 12.jpg]
 
Joined
Jan 8, 2024
Messages
229 (0.67/day)
If you can generate textures locally on the device, you won't need to stream as much data from the host. The freed-up bandwidth can then be used for more assets, which can then be multiplied using AI. The more you buy...
 
Joined
Sep 1, 2009
Messages
1,237 (0.22/day)
Location
CO
System Name 4k
Processor AMD 5800x3D
Motherboard MSI MAG b550m Mortar Wifi
Cooling ARCTIC Liquid Freezer II 240
Memory 4x8Gb Crucial Ballistix 3600 CL16 bl8g36c16u4b.m8fe1
Video Card(s) Nvidia Reference 3080Ti
Storage ADATA XPG SX8200 Pro 1TB
Display(s) LG 48" C1
Case CORSAIR Carbide AIR 240 Micro-ATX
Audio Device(s) Asus Xonar STX
Power Supply EVGA SuperNOVA 650W
Software Microsoft Windows10 Pro x64
If you can generate textures locally on the device, you won't need to stream as much data from the host. The freed-up bandwidth can then be used for more assets, which can then be multiplied using AI. The more you buy...
How much latency would this create, or would a local LLM be held in GPU memory for textures? Really interesting to think about.
 

ir_cow

Staff member
Joined
Sep 4, 2008
Messages
4,558 (0.77/day)
Location
USA
I got to experience the AI NPC at Nvidia's booth at CES 2024. It was wild. You could say anything you wanted and the NPC in Cyberpunk 2077 would naturally talk back to you. Imagine if side quests were generated from real-time conversations.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I think you're missing the whole point. You generate the textures and swap them in place of the original textures in the game. It doesn't matter if there is some latency during creation; once the texture is created there won't be any, because at that point it's saved to storage or held in VRAM (probably storage if you want to keep it). It's not overcomplicated. Same with objects, NPCs, dialog, animations, effects, sound, code, and a host of other possibilities. It's only a matter of time before we have AI games we can build from the ground up to share and connect with others. People are grossly underestimating the potential of AI for gaming. The building blocks are already largely in place for much of it, though game engines will have to incorporate them into their design, if they haven't already begun to, to make the process seamless for the end user.

Ever played an open-world survival building game? Well, it's a lot like that, but on serious steroids, with inference thrown into the mix on all the development aspects of the game design. It won't happen immediately, but more and more of it is going to happen. There is plenty more I could touch on, but I've already elaborated on more than I really intended to. I think we'll see huge advancements with AI as a creative tool, and they're coming quick and fast. What is possible this year will feel like a bit of a joke compared to what's possible next year or in two years. It might be the same general premise, but the quality will have improved substantially thanks to more and better hardware combined with better training data and algorithms.
 
Joined
Jul 13, 2016
Messages
3,317 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
As far as VRAM being of greater importance: yes and no. It doesn't really change that equation. If you're using textures of the same resolution, nothing really changes; if you're using the same amount of texture variety within a scene, nothing changes. You can simply swap in same-resolution textures, and VRAM requirements shouldn't change much.

You are thinking of pre-baked textures. The fact that this is being attached to DLSS indicates it's something that will run in real time. Even a small model like a LoRA takes an additional 1 to 1.5 GB of VRAM, and that would certainly have to go with a smaller base model, as the memory consumption of something like SDXL is 16 GB. Mind you, SDXL tops out at 1024 x 1024 (or any variation with the same number of pixels), so I'd imagine any generated images would have to be upscaled as well.
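For a rough sense of scale, here's a naive weights-only estimate (parameters times bytes per parameter; this deliberately undercounts, since activations and overhead come on top):

```python
# Naive weights-only VRAM estimate: parameter count x bytes per parameter.
# Real usage is higher once activations, attention buffers, and framework
# overhead are added, which is how a 3.5B-parameter SDXL reaches ~16 GB.
def weights_vram_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

print(f"SDXL, 3.5B params at fp16: ~{weights_vram_gib(3.5):.1f} GiB weights alone")
print(f"A 7B model at fp16:        ~{weights_vram_gib(7.0):.1f} GiB weights alone")
```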

There is a lot I could say here, but you can absolutely create textures with AI. These are just simple ones from a basic model, not even a good one like DALL-E 3.

DALL-E 3 has models that range from 7 billion to 70 billion parameters; just for comparison, SDXL has 3.5 billion. How much processing power it would require to run locally is completely unknown unless one works at OpenAI.

If you can generate textures locally on the device, you won't need to stream as much data from the host. The freed-up bandwidth can then be used for more assets, which can then be multiplied using AI. The more you buy...

It takes vastly longer to generate an image than to stream it from your HDD, or even from any server anywhere in the world. Generating a 1024 x 1024 image with 40 steps of SDXL takes my 4090 about 7 seconds, and that's using 100% of the GPU.
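For anyone who wants to reproduce that kind of benchmark, a minimal sketch using the Hugging Face diffusers library (the repo id is the public SDXL base; the prompt is illustrative, and timings will vary with hardware, scheduler, and settings):

```python
# Time one 40-step 1024x1024 SDXL generation end to end.
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

start = time.perf_counter()
image = pipe("seamless mossy stone wall texture, top-down, tileable",
             height=1024, width=1024, num_inference_steps=40).images[0]
print(f"generated in {time.perf_counter() - start:.1f}s")
image.save("texture.png")  # offline workflow: generate once, reuse forever
```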

How much latency would this create, or would a local LLM be held in GPU memory for textures? Really interesting to think about.

Far, far too much. It's not remotely feasible to do this in real time now, and likely won't be for at least the next 4 GPU generations.

I got to experience the AI NPC at Nvidia's booth at CES 2024. It was wild. You could say anything you wanted and the NPC in Cyberpunk 2077 would naturally talk back to you. Imagine if side quests were generated from real-time conversations.

The problem with entirely AI-generated quests is that they will all be meaningless filler. The AI will generate a generic situation with an associated quest, reset the dungeon if you've already done it, and spawn NPCs with generic gear matching the theme. In other words, essentially Bethesda's Radiant quest system with extra flexibility.

That's using it the wrong way, IMO. Instead, they should use that level of interaction to trigger hand-made quests. That lets players interact more naturally with NPCs while keeping content quality high.

I think you're missing the whole point. You generate the textures and swap them in place of the original textures in the game. It doesn't matter if there is some latency during creation; once the texture is created there won't be any, because at that point it's saved to storage or held in VRAM (probably storage if you want to keep it). It's not overcomplicated. Same with objects, NPCs, dialog, animations, effects, sound, code, and a host of other possibilities. It's only a matter of time before we have AI games we can build from the ground up to share and connect with others. People are grossly underestimating the potential of AI for gaming. The building blocks are already largely in place for much of it, though game engines will have to incorporate them into their design, if they haven't already begun to, to make the process seamless for the end user.

Latency is always important in games. People complain about games that compile shaders during gameplay, and that's why most games don't do it anymore. What you are proposing ramps that up an insane amount and also creates a lot of write amplification on consumer SSDs that aren't designed for heavy writes. A 4090 takes 7 seconds to generate a 1024 x 1024 image using 100% of the GPU. You'd be waiting minutes after the initial load before your screen stopped freezing, and then it would freeze every time a new texture was generated. Suffice to say, even if that happens only once per texture, it's completely unacceptable. The alternative is to restrict how much of the GPU the AI can use, but that's not feasible for AI textures, because they are dynamically generated, so no placeholder texture would exist. You couldn't just use a generic stand-in for every texture currently being generated either; that would look horrendous.
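The frame-budget arithmetic makes the point starkly (a sketch assuming a 60 FPS target and the 7-second figure above):

```python
# Frame-budget arithmetic behind the stutter argument, assuming a 60 FPS
# target and ~7 s per 1024x1024 SDXL generation on a fully loaded 4090.
frame_budget_ms = 1000 / 60   # ~16.7 ms per frame
generation_ms = 7 * 1000      # one blocking texture generation
print(f"~{generation_ms / frame_budget_ms:.0f} frames dropped per texture")
# -> ~420 frames, i.e. a 7-second freeze, every time a texture is generated
```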

It makes far more sense for the dev to include all the needed textures from the get-go. That ensures the experience players receive is consistent with the dev's vision and that players aren't fighting through a stutterfest.

Forget about generating more than one thing with AI at a time. It's not feasible on a 4090, and it's definitely not feasible on the cards regular consumers have. You'd need a card with about 30x the performance of a 4090 before it is, and probably something like 80 GB of VRAM to boot, as each model requires its own memory.

Ever played an open-world survival building game? Well, it's a lot like that, but on serious steroids, with inference thrown into the mix on all the development aspects of the game design. It won't happen immediately, but more and more of it is going to happen. There is plenty more I could touch on, but I've already elaborated on more than I really intended to. I think we'll see huge advancements with AI as a creative tool, and they're coming quick and fast. What is possible this year will feel like a bit of a joke compared to what's possible next year or in two years. It might be the same general premise, but the quality will have improved substantially thanks to more and better hardware combined with better training data and algorithms.

Yes, AI is a great tool for devs. I'll ask it for character backgrounds for D&D; the output is almost always something cliché or something I've read before, but it serves as a great base to take and make my own. That's the attitude devs should bring to AI. Fully AI-generated content just feels like stealing others' IP.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
No, what I was thinking of was interchanging textures within a scene with inference-created ones, the way you would when modding textures. You could technically create them inside the game or outside of it, but outside is easier on system resources. Just the same, I've run AI inference on integrated graphics just fine with Copilot and otherwise. In most cases it doesn't take 7 seconds, either, and even when I use a GPU for it, I'm only using a GTX 980.

I don't think DLSS is an indicator of anything in particular, actually, given only Nvidia knows exactly what's going on with it behind the scenes, because it's proprietary. It's impossible to say definitively exactly what DLSS is doing and how. You can make some qualitative assumptions about it at best. Plus, as we've seen from FSR, a lot can be done differently from how Nvidia handles DLSS, irrespective of how you feel about FSR, not to mention XeSS as well. Beyond that, you've got post-process injection like ReShade, which itself can do quite a lot.

You could also just use another GPU to run inference, with its own allocated VRAM.
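In PyTorch-based stacks that part is at least trivial to express (a sketch, assuming a second CUDA device exists at index 1 and reusing the public SDXL base model):

```python
# Minimal sketch of pinning inference to a second GPU so the primary
# card stays free for rendering.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda:1")  # cuda:0 keeps rendering the game, cuda:1 handles inference
```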
 
Joined
Jul 13, 2016
Messages
3,317 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Just the same, I've run AI inference on integrated graphics just fine with Copilot and otherwise. In most cases it doesn't take 7 seconds, either, and even when I use a GPU for it, I'm only using a GTX 980.

Copilot doesn't run locally yet; Microsoft's servers are doing all the work in your case. Microsoft is working toward a local Copilot but hasn't released it yet.

You are also conflating text to text models with text to image models.

Text-to-text doesn't take anywhere near the resources that image generation does (whether that's text-to-image, audio-to-image, etc.). Llama 3 Instruct takes some 5 GB of VRAM and 5 GB of main system memory. That's still very significant for the average gamer, given most people still have a mere 8 GB of VRAM, but it's small compared to what image generation requires.

Image generation, on the other hand, takes vastly more. As I pointed out above, SDXL takes a minimum of 16 GB of VRAM, and you will exceed that once you start adding any LoRA, IP-Adapter, or T2I-Adapter. I've seen it exceed 40 GB when using regional IP-Adapters with a T2I-Adapter, which is precisely why I have an Intel Optane P5800X to store the swapfile that inevitably gets created (that, and for storing model-training epochs). All of the above also push generation time beyond the 7 seconds mentioned earlier.

You can verify my numbers by downloading GPT4All or Stable Diffusion and running these models yourself. DALL-E 3 is a cloud service, so you can't glean any insight from it into how much AI actually takes to run locally.
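For example, a minimal sketch with the gpt4all Python bindings (the model filename here is illustrative; use whatever the GPT4All catalog actually lists):

```python
# Quick way to sanity-check a local text model's footprint yourself.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloads on first use
with model.chat_session():
    print(model.generate("Describe a mossy stone wall texture.", max_tokens=100))
# Watch RAM/VRAM usage in your monitor of choice while it runs.
```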

I don't think DLSS is an indicator of anything in particular, actually, given only Nvidia knows exactly what's going on with it behind the scenes, because it's proprietary. It's impossible to say definitively exactly what DLSS is doing and how.

DLSS stands for Deep Learning Super Sampling. We know this because Nvidia itself calls it that and explains it as such. We can definitely say what DLSS is doing, because Nvidia has already explained it in words and charts.

Tacking on unrelated technology is just Nvidia co-opting the name of a well-liked technology to push unrelated BS.

You could also just use another GPU to run inference, with its own allocated VRAM.

You are talking about doubling cost and power consumption, neither of which sounds appealing.

Mind you, most motherboards don't support dual x16/x16 PCIe slots. They typically support x8/x8 at best, and mid-range to low-end boards go lower than that: x16/x4 or x8/x4.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,235 (1.28/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
I'm going to have to see it in action and judge for myself what it's doing, how it runs, how it looks etc. Speculation based on the AI buzzword alone is perhaps not going to achieve a lot. Might be great, might be terrible, might be locked to a certain gen or higher, might not.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
We only know portions of what DLSS does, based on what Nvidia has shown and told us about it. We don't know the full scope of how it operates. We basically know as much about DLSS as we do about an AI image without an open prompt, maybe a little more or a little less, and it's pointless to argue which we know more about; it's irrelevant. I don't care what it takes to run locally when you can generate images from various places around the web, and running locally will matter even less in the future as the tech matures and improves.

You can 100% create textures in line with, or better than, what you'd expect to see in a number of games. For modders and developers it's great. Is it for everyone? No. Does that matter? No. Is it bound to change and improve? Yes. I honestly don't care if a texture takes 10 minutes to run locally, as long as it's a good texture and the time is reasonable enough that I could tangibly transform a game for the better. As for latency, that doesn't matter one bit; you save the texture to local storage after creation. The only latency you'd notice is during creation, and that doesn't matter much, since you'd be in control of where, when, and what you create: alternate textures, dialog, animations, etc., and so forth.

I could definitely see Nvidia offering a service both to create AI-inference textures and other assets connected to games and to share the ones users wish to share openly.
 
Joined
Jun 3, 2008
Messages
770 (0.13/day)
Location
Pacific Coast
System Name Z77 Rev. 1
Processor Intel Core i7 3770K
Motherboard ASRock Z77 Extreme4
Cooling Water Cooling
Memory 2x G.Skill F3-2400C10D-16GTX
Video Card(s) EVGA GTX 1080
Storage Samsung 850 Pro
Display(s) Samsung 28" UE590 UHD
Case Silverstone TJ07
Audio Device(s) Onboard
Power Supply Seasonic PRIME 600W Titanium
Mouse EVGA TORQ X10
Keyboard Leopold Tenkeyless
Software Windows 10 Pro 64-bit
Benchmark Scores 3DMark Time Spy: 7695
Will this be allowed in multiplayer? I don't see how.
 
Joined
Sep 10, 2018
Messages
6,957 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I got to experience the AI NPC at Nvidia booth for CES 2024. It was wild. You could say anything you want and the NPC in Cyberpunk 2077 would naturally talk back to you. Imagine if side quests were generated from real-time conversations.

I watched videos of it, and while it's cool, it felt like something that's probably half a decade away or more. They will likely make it proprietary, so they'd have to subsidize most of the development cost, or developers would have to develop separate versions, which isn't likely, unless they make it capable of running on consoles and Radeon/Intel hardware, and we all know that shite ain't happening.

I'm going to have to see it in action and judge for myself what it's doing, how it runs, how it looks etc. Speculation based on the AI buzzword alone is perhaps not going to achieve a lot. Might be great, might be terrible, might be locked to a certain gen or higher, might not.

I've only read impressions, and while it sounded cool, the NPC would do stuff that didn't make any sense, lol. It's got a long way to go before we see it in a game. 6000 series, maybe.



I'm not a huge fan of AI doing more than frame generation and upscaling, and while those have come a long way since 2018, they still have a long way to go. Even DLSS can still have a ton of issues at lower resolutions; I can only imagine how bad full AI NPCs and textures would be...
 
Joined
Jul 15, 2006
Messages
1,316 (0.20/day)
Location
Noir York
Processor AMD Ryzen 7 5700G
Motherboard Gigabyte B450M S2H
Cooling Scythe Kotetsu Mark II
Memory 2 x 16GB SK Hynix CJR OEM DDR4-3200 @ 4000 20-22-20-48
Video Card(s) Colorful RTX 2060 SUPER 8GB GDDR6
Storage 250GB WD BLACK SN750 M.2 + 4TB WD Red Plus + 4TB WD Purple
Display(s) AOpen 27HC5R 27" 1080p 165Hz curved VA
Case AIGO Darkflash C285
Audio Device(s) Creative SoundBlaster Z + Kurtzweil KS-40A bookshelf / Sennheiser HD555
Power Supply Great Wall GW-EPS1000DA 1kW
Mouse Razer Deathadder Essential
Keyboard Cougar Attack2 Cherry MX Black
Software Windows 10 Pro x64 22H2
I'm not a huge fan of AI doing more than frame generation and upscaling, and while those have come a long way since 2018, they still have a long way to go. Even DLSS can still have a ton of issues at lower resolutions; I can only imagine how bad full AI NPCs and textures would be...
Same here. I'd prefer they fix DLSS and improve its performance and quality on the lower-end hardware most people use.
 
Joined
Sep 10, 2018
Messages
6,957 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
Same here. I'd prefer they fix DLSS and improve its performance and quality on the lower-end hardware most people use.

4K DLSS Quality is awesome in most games, but both 1440p and 1080p need some work. Fix that first before trying to AI-render textures, lol... You know they only want to do that so they can sell a $500 8 GB GPU and say, look, see, it can do 4K 'cuz AI is rendering half the textures, smh. Then you'll have some Nvidia fanboys telling us how awesome that is, RIP AMD, smh.

I can hear it now so clearly: the 6060 Ti is so awesome because they only gave it 8 GB of VRAM; it's so much more power efficient than the 9700 XT that has 16 GB, all for the same price, and half the textures are fake, so it's amazing... :kookoo:
 
Joined
Jul 13, 2016
Messages
3,317 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
We only know portions of what DLSS does, based on what Nvidia has shown and told us about it. We don't know the full scope of how it operates.

We know the scope of DLSS; it's in the name. In what world does Deep Learning Super Sampling mean asset generation to you? Total nonsense. It's akin to every brand stamping AI on everything despite most of it having nothing to do with AI.

We basically know as much about DLSS as we do about an AI image without an open prompt, maybe a little more or a little less, and it's pointless to argue which we know more about; it's irrelevant.

You do realize there's such a thing as an image interrogator that can spit out the prompt used to generate an image, right?

You can 100% create textures in line with, or better than, what you'd expect to see in a number of games.

Well, duh: it would likely be trained on texture data from games to begin with. I hesitate to call the output better, though; it relies entirely on the quality of the training data. If you trained the AI on low-quality textures, the output would be low quality. That's why it's important to recognize that if AI were to kill off the jobs of those making textures by hand, the pool of training data would stagnate. You cannot train AI on AI-generated images, as that only amplifies artifacts and introduces overtraining of certain elements.

For modders and developers it's great. Is it for everyone? No. Does that matter? No. Is it bound to change and improve? Yes. I honestly don't care if a texture takes 10 minutes to run locally, as long as it's a good texture and the time is reasonable enough that I could tangibly transform a game for the better. As for latency, that doesn't matter one bit; you save the texture to local storage after creation. The only latency you'd notice is during creation, and that doesn't matter much, since you'd be in control of where, when, and what you create: alternate textures, dialog, animations, etc., and so forth.

I could definitely see Nvidia offering a service both to create AI-inference textures and other assets connected to games and to share the ones users wish to share openly.

You are talking about devs generating assets and including them in their game, which I already touched on in my very first comment (I've omitted the sections relating to real-time generation):

Nvidia should just put these under a "game dev toolbox" branding or something. It's not flashy, but it doesn't need to be, given it's targeted at devs.

I have to question where Nvidia is getting the training data for its texture-generation model as well. There are essentially zero good free texture resources out there to scrape, which leads me to believe that if Nvidia wants quality output, it's scraping paid resources and games. I think many game devs would be pretty pissed if Nvidia were using their textures to train an AI model it stands to massively profit from. Even more so if you create and sell texture assets for a living: that would amount to stealing your work and then taking away all your business.

I think Nvidia is crossing a dangerous line by adding content-creation features to its suite of tools unless it explicitly licenses all the data the AI is trained on (which I doubt, given that would be massively expensive: the licensing terms would essentially have to give devs using the tool full rights to the generated images, including the right to sublicense them).

I enjoy using AI image generation, but far too many people seem gung-ho about commercializing it. Yeah, the results look good, but that comes on the back of the artists whose work the AI was trained on. I'm not against it, of course, but the consequences of failing to protect artists' IP could be catastrophic.

The question becomes: what in the world does cloud-based AI asset generation have to do with a real-time upscaling technology like DLSS? The two have nothing to do with each other. Not only do they do entirely different things, but one is real-time and the other isn't; one happens at development time while the other happens on the client machine; and they have two entirely different audiences to boot. If Nvidia calls this kind of asset generation DLSS, that would be absolutely farcical, because the two could not be further apart.
 
Joined
Nov 11, 2016
Messages
3,450 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
4K DLSS Quality is awesome in most games, but both 1440p and 1080p need some work. Fix that first before trying to AI-render textures, lol... You know they only want to do that so they can sell a $500 8 GB GPU and say, look, see, it can do 4K 'cuz AI is rendering half the textures, smh. Then you'll have some Nvidia fanboys telling us how awesome that is, RIP AMD, smh.

I can hear it now so clearly: the 6060 Ti is so awesome because they only gave it 8 GB of VRAM; it's so much more power efficient than the 9700 XT that has 16 GB, all for the same price, and half the textures are fake, so it's amazing... :kookoo:

HUB found that at 1440p, DLSS Quality is superior to native TAA more often than not, and they tested with stock DLLs.

I haven't tried 1440p, but at 4K, DLSS ver. 3.7 Performance looks better than ver. 2.3 Quality mode.

DLSS progression has probably hit a plateau, so other ideas for improving visuals using the tensor cores are preferable. Don't get too hung up on mature technologies anyway.
 
Joined
Sep 10, 2018
Messages
6,957 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
Might just have to do with it being UW 3440 x 1440, but DLSS looks pretty meh in a lot of games even with the latest DLL. I can only stand 4K in Quality mode, so maybe that's it... Even Balanced at 4K is barf to me, although my 4K screen is 48 inches, which could be why. I sit about 5 to 7 feet back; still doesn't matter.

If using DLSS, I prefer to use DLDSR in combination; then it looks fine.
 
Joined
Nov 11, 2016
Messages
3,450 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Might just have to do with it being UW 3440 x 1440, but DLSS looks pretty meh in a lot of games even with the latest DLL. I can only stand 4K in Quality mode, so maybe that's it... Even Balanced at 4K is barf to me, although my 4K screen is 48 inches, which could be why. I sit about 5 to 7 feet back; still doesn't matter.

If using DLSS, I prefer to use DLDSR in combination; then it looks fine.

Everything looks meh. I prefer 8K, but not at 30 FPS, so I play at 4K DLSS Balanced at 120 FPS on my 48-inch OLED :cool:
 
Joined
Sep 10, 2018
Messages
6,957 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
Everything looks meh, I prefer 8K, but not at 30FPS, so I play at 4K DLSS.Balanced at 120FPS on my 48in OLED :cool:

Nah, DLAA looks quite good; I prefer it over vanilla DLSS.
 
Joined
Nov 11, 2016
Messages
3,450 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Joined
Sep 10, 2018
Messages
6,957 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
Nah, I would take 50% more FPS and 50 W less power, and go with DLSS Balanced plus a touch of Sharpen+ instead of DLAA.
DLAA vs DLSS Balanced + Sharpen+

That's the awesome thing about PC: everyone can play how they want. Well, Nvidia locks you into/out of their stuff, but otherwise you get what I mean.

Screenshots are pretty useless, though; DLSS and all similar tech need to be seen natively and in motion. I would think FSR was awesome most of the time if I just looked at screenshots, lol.
 
Joined
Nov 11, 2016
Messages
3,450 (1.17/day)
System Name The de-ploughminator Mk-III
Processor 9800X3D
Motherboard Gigabyte X870E Aorus Master
Cooling DeepCool AK620
Memory 2x32GB G.SKill 6400MT Cas32
Video Card(s) Asus RTX4090 TUF
Storage 4TB Samsung 990 Pro
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
That's the awesome thing about PC: everyone can play how they want. Well, Nvidia locks you into/out of their stuff, but otherwise you get what I mean.

Screenshots are pretty useless, though; DLSS and all similar tech need to be seen natively and in motion. I would think FSR was awesome most of the time if I just looked at screenshots, lol.

Oh, you would definitely notice how 120 FPS with DLSS Balanced is much smoother than 80 FPS with DLAA before you notice anything else :laugh:.

But yeah, FSR's flaws cannot be missed. I have seen people with Radeon cards prefer XeSS 1.3 or TSR over FSR 2.
 