
NVIDIA GeForce RTX 50 Technical Deep Dive

But most people do prefer upscaling?
While the total amount is larger, we still don't know the "why".
E.g. how many use it because they lack the (raw) power to run native.
 
While the total amount is larger, we still don't know the "why".
E.g. how many use it because they lack the (raw) power to run native.
Does the why matter? In what sense?

I can tell you why I'm using it: at 4K DLSS Q it performs as well as 1440p native but looks way better. So I'm using it for the increased image quality at the same performance.
 
Yes it does, because if I could run native, I would.
Meaning I'm not using it because I "want" to, and many others might not be either.

It's like asking: would you buy a 300K+ Bentley? Most will probably say no.
But what if everyone asked had 1M in their account (to spend on a car)?
You think they would still (all) say no?

That's part of my problem with the poll: I do use it (so not native), but would prefer (to be able) to use native.
And I doubt I'm alone, which would change the numbers by quite a bit.
 
Yes it does, because if I could run native, I would.
Meaning I'm not using it because I "want" to, and many others might not be either.

It's like asking: would you buy a 300K+ Bentley? Most will probably say no.
But what if everyone asked had 1M in their account (to spend on a car)?
You think they would still (all) say no?

That's part of my problem with the poll: I do use it (so not native), but would prefer (to be able) to use native.
And I doubt I'm alone, which would change the numbers by quite a bit.

Some, if not most, TAA implementations are so bad that what does it even mean to be running native? It's still gathering data from multiple frames, and it can still have ghosting and artifacts.
And I can't remember a recent game without TAA.
 
For me, native means the native res of the monitor, nothing AA-related.

And at least for the games I have (nothing from the last 2 years), I can choose not to use TAA.
 
Yes it does, because if I could run native, I would.
Meaning I'm not using it because I "want" to, and many others might not be either.

It's like asking: would you buy a 300K+ Bentley? Most will probably say no.
But what if everyone asked had 1M in their account (to spend on a car)?
You think they would still (all) say no?

That's part of my problem with the poll: I do use it (so not native), but would prefer (to be able) to use native.
And I doubt I'm alone, which would change the numbers by quite a bit.
But that's the point: if everyone had 1M in their account, they would buy a higher resolution monitor and then use DLSS on it, because it looks better than native, which is my point. If you are building a PC and the card you are targeting is a 1440p card (let's say xx70 tier), it's better to get a 4K monitor and use DLSS Q with it. It will look much better than native 1440p. Which is what I did: my options were 1440p native (so a 1440p monitor) or 4K DLSS Q (so a 4K monitor), and I went with the latter.

Just a quick demonstration. The so-called superior 4K native. Zoom in on it and have a laugh.

[Image: 4k native.JPG]

And DLSS:

[Image: dlss.JPG]

These are not zoomed. Got some extra spicy details for people who think native looks better...
 
Upscaling a lower res to the native (monitor) res will never look better; any native UHD video (vs the same content at 720p/1080p upscaled to 4K) will easily show that.
You cannot make up stuff that wasn't there to begin with, which is why, even if it looks good and close, it won't match the same quality at native res.

And running a lower (than native) res on a higher resolution monitor (vs the same lower-res content on a lower-res monitor) will always look better, but that's because of the higher res of the screen, not because of the upscaling.

If upscaling looked better (than native), movie companies wouldn't waste money shooting in UHD (or higher).
 
You cannot make up stuff that wasn't there to begin with
This used to be true, but it isn't anymore. Making stuff up is the whole point of generative AI. Some models can generate pictures from text.

Also, some of the resolution increase can be real. Temporal sampling techniques can supersample a single pixel across multiple frames. Due to motion it's not always perfect and can produce artifacts, but when it works it combines multiple lower-resolution frames into a higher-resolution image.
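To make that concrete, here is a toy sketch (my own illustration in Python/NumPy, not actual DLSS/TAA code) of the static-scene case: each low-res frame samples the scene at a different sub-pixel jitter offset, and accumulating a handful of them rebuilds the full-res image. Motion is exactly what breaks this and forces reprojection heuristics, which is where the artifacts come from.

```python
# Toy sketch of jittered temporal accumulation on a static scene
# (illustration only, not DLSS/TAA code).
import numpy as np

rng = np.random.default_rng(0)
hi = rng.random((64, 64))            # stand-in for the "true" high-res signal
scale = 2                            # 2x upscale: 32x32 frames -> 64x64 target

accum = np.zeros_like(hi)            # accumulated samples
weight = np.zeros_like(hi)           # samples received per target pixel

for frame in range(8):
    # deterministic sub-pixel jitter sequence (real TAA uses e.g. Halton offsets)
    jy, jx = divmod(frame % (scale * scale), scale)
    low = hi[jy::scale, jx::scale]   # the 32x32 frame "rendered" this frame
    accum[jy::scale, jx::scale] += low
    weight[jy::scale, jx::scale] += 1

recon = accum / np.maximum(weight, 1)
print("max abs error vs true high-res:", np.abs(recon - hi).max())  # ~0 when static
```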
 
This used to be true, but it isn't anymore. Making stuff up is the whole point of generative AI. Some models can generate pictures from text.
No no, you can't make up detail from nothing. Image restoration isn't a thing :D

[Image: fixthephoto-photo-restoration-services-damaged-parts-repair.jpg]
 
@zigzag
While it might work with games, with "real" content like movies/videos it won't.

It will look almost as good, and usually good enough that people don't "throw up" watching it on their big-screen TV, but it will not surpass the quality of native res.

Let's say I have an image that consists of a single color with a single dot somewhere. If I now remove the area with the dot, AI will never be able to come up with the dot in a "restored" image.
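A throwaway sketch of that thought experiment (my own toy code, nothing to do with any real upscaler): once the pixels carrying the dot are deleted, any fill computed from the surviving pixels is flat, so a "restored" dot could only come from a model's learned prior, not from the image itself.

```python
# Toy version of the "single dot" argument (illustration only).
import numpy as np

img = np.zeros((100, 100))
img[40, 70] = 1.0                      # the single dot

damaged = img.copy()
damaged[35:45, 65:75] = np.nan         # remove the area containing the dot

# "restore" the hole using only the surviving pixels (plain mean fill)
fill = np.nanmean(damaged)             # every surviving pixel is 0.0
restored = np.where(np.isnan(damaged), fill, damaged)

print("dot in original:", img.max())        # 1.0
print("dot in restored:", restored.max())   # 0.0 - that information is simply gone
```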
 
That's a weird thing to say. You are actually giving too much credit to Nvidia, and ignoring that people outside the company are also asking themselves how emerging technologies could affect various aspects of the gaming industry:
(PDF) Artificial intelligence for video game visualization, advancements, benefits and challenges
Jensen wasn't involved in that research paper. I would argue that research at AMD, Nvidia, Intel, Microsoft etc. is probably influenced on some level by what's happening in academic labs; I don't believe they function as a bubble.
To further reinforce the point that neural rendering isn't something Nvidia invented, nor are they the only people researching its applications:
Intel's Latest Research for Graphics and Generative AI
Neural Prefiltering for Correlation-Aware Levels of Detail
UniTorch - Integrating Neural Rendering into Unity | Request PDF
 
@zigzag
While it might work with games, with "real" content like movies/videos it won't.

It will look almost as good, and usually good enough that people don't "throw up" watching it on their big-screen TV, but it will not surpass the quality of native res.

Let's say I have an image that consists of a single color with a single dot somewhere. If I now remove the area with the dot, AI will never be able to come up with the dot in a "restored" image.
Don't waste your time. Good point with your previous statement about how you cannot make up stuff out of nothing.

It doesn't matter that upscaled or so-called "AI"-generated content is something artificial or guessed. When it's shiny, it must be good, right?

Breasts upscaled by the LSBT (Large Silicon Breast Tuning) algorithm in a so-called "beauty clinic" are still nothing but a pair of fake tits.
 
Don't waste your time. Good point with your previous statement about how you cannot make up stuff out of nothing.

It doesn't matter that upscaled or so-called "AI"-generated content is something artificial or guessed. When it's shiny, it must be good, right?

Breasts upscaled by the LSBT (Large Silicon Breast Tuning) algorithm in a so-called "beauty clinic" are still nothing but a pair of fake tits.
So I assume the left looks better than the right to you?

[Image: fixthephoto-photo-restoration-services-damaged-parts-repair.jpg]
 
@zigzag
While it might work with games, with "real" content like movies/videos it won't.

It will look almost as good, and usually good enough that people don't "throw up" watching it on their big-screen TV, but it will not surpass the quality of native res.

Let's say I have an image that consists of a single color with a single dot somewhere. If I now remove the area with the dot, AI will never be able to come up with the dot in a "restored" image.
Even a human can't do that. But if there is more context, like a nose missing from a face, some AI models can generate a new nose.

But that's not what we are dealing with here! The game renders a lower-res image, which, together with previous frames and motion vectors, is provided as context for the model. If the final resolution is 4K, you render the lower-res image at ~1080p. That means your dot would need to be smaller than one pixel at 1080p to be missing from the input data. Do you realize how fine these details are?
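To give a feel for what "previous frames plus motion vectors" buys you, here is a rough sketch (my own simplification, not the actual DLSS pipeline) of temporal reprojection: the motion vectors tell you where each pixel of the current frame was in the previous frame, so the history can be warped and blended with the freshly rendered low-res samples.

```python
# Rough sketch of temporal reprojection/blending (simplification, not DLSS).
import numpy as np

def reproject(history, motion):
    # motion[y, x] = (dy, dx): how far this pixel moved since the last frame
    h, w = history.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    return history[src_y, src_x]          # where each pixel came from last frame

def temporal_blend(current, history, motion, alpha=0.1):
    # alpha = weight of the new frame; the rest comes from reprojected history.
    # Real implementations also clamp/reject stale history to avoid ghosting.
    return alpha * current + (1 - alpha) * reproject(history, motion)
```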

Note that native resolution alone is not enough for great image quality. At native res, you still see aliasing in motion and other artifacts. You need super-sampling, 8x the native resolution, for real high quality. GPUs won't be capable of brute-forcing all the needed calculations for decades.

It's not about the native resolution, but about the image quality you can get with the limited performance your GPU can provide in real time. I hope we agree that the highest image quality (that a GPU can render in real time) is the goal. And to do that, we need to sacrifice some quality where it's the least noticeable in order to gain quality where it matters. We used to sacrifice some resolution in order to have shaders. And then to have better shaders. Now to do basic path tracing to get real reflections, proper lighting and shadows. And now we can also fake some fine details in order to improve perceived image quality. It's not perfect, but most of the time it's better than brute-forcing lower-quality graphics at higher resolutions. It is also partly subjective what is important and what is not.
 
@zigzag
Sure, but that's because it learned that from previous training.
E.g. if I have a pixelated (say ultra-low-res) license plate from a car, no AI will be able to make that readable.

My problem is with people claiming that upscaling (from a lower res) to native res will look better than straight native, which isn't the case.

Upscaling to above native res, then downscaling to native (screen) res, is a completely different thing: it works, it looks better, I use it, and I never said anything to the contrary.
 
It depends on the game for sure. All TAA games have some amount of softness to them which is slightly worsened by DLSS-Q @ 4K, but it's not something that bothers me much. I run my games with very little post-process sharpening, sometimes even with an .ini tweak to go below the minimum sharpening offered by in-game settings, because I don't like specular aliasing or pixel crawl. What is it in particular that stands out to you with upscalers, and are there certain games where it's worse?
My problem is the blur that you mentioned. Upscaling makes the image look soft and fuzzy, as if I had something wrong with my eyes or like I was squinting. It's a bit better in games that offer a sharpening slider, because I can just push it to the max. It's also a bit more tolerable with FSR 3. Games that use older versions are terrible.

I imagine that you miss the time before games switched to post-process AA and everything became a bit blurry. I have nostalgia for DX9 games, which can now run with SGSSAA, and I'm slightly worried that future graphics cards will stop supporting all the old flavors of MSAA. I don't know if there is/was dedicated hardware support to accelerate multisampling: does it still exist in today's cards, serving a purpose with shaders? Would there be any academic interest in seeing how fast a 5090 can run DX:HR with 8xSGSSAA @ 4K compared to previous product generations?
I definitely miss those times, when we used to sharpen with AA instead of making things a blurry mess with upscaling! I don't think there was any dedicated hardware for MSAA; that's probably why it lost traction to DLAA/VSR.

@JustBenching
Just because something is cheaper/faster doesn't make it the better deal.
What's the CUDA perf on an AMD dGPU (vs NV)?
Right.
The same way, if I have 100K+ to spend (just) on a car, it still won't make me buy a 20K Toyota just because it's cheaper.
I will buy the 20k car if I don't have a valid use case for the 100k one. If I don't go to track days, if I only drive to work and back, I don't need a Ferrari, right?

I disagree with the CUDA argument in a similar fashion. A gamer doesn't need it, so it shouldn't influence the buying decision. Whatever you don't use is a waste of your money.

And monitor res has NOTHING to do with the res you set in games, nor are you forced to run games in 4K just because you're using a 4K screen.
I'd rather play at FHD/QHD on a large screen that's 4K than on a lower-res screen of the same size and get pissed off by the screen-door effect.
Playing at anything lower than your monitor's native res is a blurry mess, imo. It was fine in the CRT days, but it looks horrible now.

@AusWolf
I can't speak for recent games, but nothing I have from the past 10 years looks as good on low as it does on high/ultra.
Let's agree to disagree on that one.

The main reason I spend more money on a dGPU than any console would cost is to get improvements in image quality/fps, or (more) options I can use (vs a console).
I agree. But that doesn't mean that one brand suits those needs better than the other.
 
@zigzag
Sure, but that's because it learned that from previous training.
E.g. if I have a pixelated (say ultra-low-res) license plate from a car, no AI will be able to make that readable.
Of course they would. Have you tried it?

Everyone is using AI because it does exactly what you're claiming it can't do...
 
You need super-sampling, 8x the native resolution, for real high quality. GPUs won't be capable of brute-forcing all the needed calculations for decades.

It's not about the native resolution, but about the image quality you can get with the limited performance your GPU can provide in real time. I hope we agree that the highest image quality (that a GPU can render in real time) is the goal. And to do that, we need to sacrifice some quality where it's the least noticeable in order to gain quality where it matters. We used to sacrifice some resolution in order to have shaders. And then to have better shaders. Now to do basic path tracing to get real reflections, proper lighting and shadows. And now we can also fake some fine details in order to improve perceived image quality. It's not perfect, but most of the time it's better than brute-forcing lower-quality graphics at higher resolutions. It is also partly subjective what is important and what is not.

First you contradict yourself (we need to render the image at a higher resolution to get better quality, so we render at a lower resolution...), then you move the goalposts (from actual quality to perceived (aka fake, aka conceptual) quality).

The AI upscalers are for speed, first and foremost. I'll prove it to you:

You have a choice between a hypothetical graphics card that can:
1) play 4K supersampled (so the full screen rendered at 8K and downsampled to 4K) at 60 fps
2) play 4K DLSS4 (1080p upsampled to 4K) at 60 fps

What are you picking?
 
That's not true, we had one apples-to-apples benchmark and one semi-apples-to-apples...

Now we have 3 apples-to-apples and 1 semi, lol... #Progress...

The 5090 review isn't far out; if it's disappointing, the whole stack is in trouble.

The 5090 is irrelevant imo. The RTX 4090 was the worst product last gen. I actually still care about performance per dollar.

Nvidia is brainwashing people into thinking $2000 for a GPU makes sense.

It's the Titan from 10 years ago, but with 100x more sales.
 
The main thing that I think killed MSAA was engines moving to different rendering approaches. There is also the argument that it was done to satisfy weaker hardware like consoles, but I think it's mainly down to engines and modern development.
Not just MSAA/SGSSAA either; other techniques that were good stopped being used.
Star Ocean 4 on the Xbox 360 looked pretty good, but play the game on something like a PS4 or even a PC at a higher rendering resolution and suddenly things like hair look much worse. Some software technique they were using on the 360 to make it look good got obsoleted.
 
First you contradict yourself (we need to render the image at a higher resolution to get better quality, so we render at a lower resolution...), then you move the goalposts (from actual quality to perceived (aka fake, aka conceptual) quality).

The AI upscalers are for speed, first and foremost. I'll prove it to you:

You have a choice between a hypothetical graphics card that can:
1) play 4K supersampled (so the full screen rendered at 8K and downsampled to 4K) at 60 fps
2) play 4K DLSS4 (1080p upsampled to 4K) at 60 fps

What are you picking?
You have taken one paragraph out of context and mixed it with a point from another.

1) If you want the absolute best image quality, without any shortcuts (aka faking results), you need a huge amount of super-sampling. That you can only do offline, and we won't be doing it in real time for decades (if the scene and renderer are also high quality). The point here is that uncompromised, absolute best image quality is not the actual goal, because it cannot be done in real time.

2) As #1 is not possible, we need to make some compromises (assets, renderer, or resolution) to be able to do it in real time. The goal that matters is perceived quality. Take video compression, for example. A common measurement of video compression quality is the PSNR value. It's calculated from the mean squared error using a mathematical formula (rough sketch below), so you might call it objective or "actual" quality. But the PSNR metric does not always correlate with the quality perceived by humans, and video codecs that focus too much on PSNR alone are not the best codecs. Similarly, getting the best possible quality in games is all about smart compromises.
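Since PSNR came up, here is a quick sketch of how it's computed (the standard definition, my own code): it only measures per-pixel mean squared error, which is exactly why it can disagree with what a human perceives as better.

```python
# Peak signal-to-noise ratio: a purely per-pixel error metric.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A slightly soft image and an image with a few very visible artifacts can land at the same PSNR, which is the point about perceived vs "actual" quality.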

From the options you provided, I'm picking #1. But that's because that game has already made big compromises in asset quality and render quality to be able to run in real time at 8K. Also, if it runs at 60 fps at 8K, it will run faster than 60 fps at 1080p upsampled to 4K (unless you are limited by the display refresh rate).

A more interesting choice would be:
1) play at a lower render quality, 4K supersampled (so the full screen rendered at 8K and downsampled to 4K) at 60 fps
2) play at a higher render quality, 4K DLSS4 (1080p upsampled to 4K) at 60 fps
 
The poll shows that the majority of users are using upscaling btw.
Very true.
And the 45% in that poll is mostly AMD users, because AMD's upscaler is not always so great; that's why they use native.

Most RTX users of course use the upscaler, because DLSS is just great.

Yes. It also clearly shows that the vast majority of poll contributors value image quality above performance, mostly preferring no upscaling.
The majority use an upscaler.
Can't you see it?
39% + 9% + 4% + 3% = 55%
But most AMD users won't use it, so that's why native is at 45%. FSR is not so good; it's better to use native if gaming on an AMD GPU.

Also, there were only under 20k votes, while Nvidia sells millions of GPUs.
Global results show 80%+ of RTX users use the upscaler.
And Nvidia dominates in market share, so there is a huge number of DLSS users.

But you don't like it, so don't use it; it's not that good on AMD GPUs anyway, so it's okay to use native then.

But that's the point: if everyone had 1M in their account, they would buy a higher resolution monitor and then use DLSS on it, because it looks better than native, which is my point. If you are building a PC and the card you are targeting is a 1440p card (let's say xx70 tier), it's better to get a 4K monitor and use DLSS Q with it. It will look much better than native 1440p. Which is what I did: my options were 1440p native (so a 1440p monitor) or 4K DLSS Q (so a 4K monitor), and I went with the latter.

Just a quick demonstration. The so-called superior 4K native. Zoom in on it and have a laugh.

And DLSS:

These are not zoomed. Got some extra spicy details for people who think native looks better...
DLSS looks so much better, plus a huge FPS boost...

But people are brand loyal; even when they see something great, they don't like it because it's the wrong brand.

@zigzag
Sure, but that's because it learned that from previous training.
E.g. if I have a pixelated (say ultra-low-res) license plate from a car, no AI will be able to make that readable.

My problem is with people claiming that upscaling (from a lower res) to native res will look better than straight native, which isn't the case.

Upscaling to above native res, then downscaling to native (screen) res, is a completely different thing: it works, it looks better, I use it, and I never said anything to the contrary.
100% sure an AMD user.
Maybe try Nvidia and DLSS someday, to really see it for yourself.
Quality is better than native.

The 5090 is irrelevant imo. The RTX 4090 was the worst product last gen. I actually still care about performance per dollar.

Nvidia is brainwashing people into thinking $2000 for a GPU makes sense.

It's the Titan from 10 years ago, but with 100x more sales.
Okay, another AMD user again...

Have fun playing 4K on a 3060 Ti or 5700 XT.
[Image: performance-per-dollar-3840-2160.png]

A $2000 GPU costs only $2000; you buy it once, sell it after 2 years and buy a new one.
People QQ way too much about prices; don't buy it if you can't or don't want to.

There are cheap GPUs too; let people buy and use their hard-earned money if it makes them happy.
Why do you care what people buy?
 
First you contradict yourself (we need to render the image at a higher resolution to get better quality, so we render at a lower resolution...), then you move the goalposts (from actual quality to perceived (aka fake, aka conceptual) quality).

The AI upscalers are for speed, first and foremost. I'll prove it to you:

You have a choice between a hypothetical graphics card that can:
1) play 4K supersampled (so the full screen rendered at 8K and downsampled to 4K) at 60 fps
2) play 4K DLSS4 (1080p upsampled to 4K) at 60 fps

What are you picking?
That's a scenario that cannot happen. At similar framerates, using DLSS gives you higher image quality. That's a given.

DLSS looks so much better, plus a huge FPS boost...

But people are brand loyal; even when they see something great, they don't like it because it's the wrong brand.
That's DLSS Ultra Performance btw :roll:
 
Very true.
And the 45% in that poll is mostly AMD users, because AMD's upscaler is not always so great; that's why they use native.

Most RTX users of course use the upscaler, because DLSS is just great.
Because 45% of people have an AMD card, obviously... Wait, what? :roll:

The majority use an upscaler.
Can't you see it?
39% + 9% + 4% + 3% = 55%
But most AMD users won't use it, so that's why native is at 45%. FSR is not so good; it's better to use native if gaming on an AMD GPU.

Also, there were only under 20k votes, while Nvidia sells millions of GPUs.
Global results show 80%+ of RTX users use the upscaler.
And Nvidia dominates in market share, so there is a huge number of DLSS users.
If DLSS is so brilliant, then where does that 80% come from? Shouldn't 100% of RTX owners be using it? :rolleyes:

Also, if Nvidia owns 90% of the gaming GPU market, 80% of which use upscaling, then that's 72% of all people in total. Yet, the poll shows 55%. I don't know who's bad with maths here.

A $2000 GPU costs only $2000; you buy it once, sell it after 2 years and buy a new one.
People QQ way too much about prices; don't buy it if you can't or don't want to.
Sure, you buy a $2k GPU, then 2 years later sell it for $1.2k and buy another $2k GPU. That's an initial spend of $2k and then $800 every two years after that: $10k in gross GPU purchases over a decade, over $5k of which you never get back.

Have you heard the word "depreciation" before?
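For what it's worth, a quick back-of-the-envelope calculation of that upgrade cycle, using the numbers assumed above ($2k purchase, $1.2k resale after two years):

```python
# Cost of the "sell after 2 years, buy new" cycle over a decade
# (numbers taken from the post above; purely illustrative).
price, resale, years, cycle = 2000, 1200, 10, 2

upgrades = years // cycle - 1                 # upgrades after the initial buy
gross = price * (upgrades + 1)                # total spent on cards: $10000
net = price + upgrades * (price - resale)     # after resale income:  $5200

print(f"gross outlay over {years} years: ${gross}")
print(f"net cost after resale:          ${net}")
```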

There are cheap GPUs too; let people buy and use their hard-earned money if it makes them happy.
Why do you care what people buy?
Yeah, what do you care what people buy? Are you a paid Nvidia promoter or something? ;)
 