
NVIDIA GeForce RTX 50 Technical Deep Dive

Joined
Jun 14, 2020
Messages
3,891 (2.32/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Because side-by-side image comparison is way easier than noticing the quality difference while playing the game. I wanted an overall graphics quality assessment, like when comparing the image quality of a new game to a 5-year-old game. People often say a new game is unoptimized because the graphics quality looks similar but the FPS is much lower. They forget that quality improvements get harder and harder to notice as graphics get better.

Both screenshots are from TPU reviews.

Answers:

God of War: Low
Horizon: Medium
Exactly, side by side medium vs high is way easier to tell apart than dlss. Wasn't that the point?
 
Joined
Jan 8, 2017
Messages
9,597 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Using a GeForce 2 MX in 2006 would have been an awful experience, with many games just refusing to launch since the market changed so much. Crysis wouldn't even launch on it, whereas a 2080 Ti is still a decent performer in modern games.
It's not about that. RTRT and upscaling have been pushed onto people with no foresight. Back in those days, engines and features weren't standardized: if a developer felt they could add feature "X", they would, and if others thought it was necessary, they would each come up with their own solution. The reason the 2080 Ti is still a "decent performer" is precisely because advancement has been handicapped by the poor choices the industry made; that's not normal, nor is it a good thing.

Because side by side image comparison is way easier
Yet it seems people correctly recognized that it was on medium or low, how about that.
 
Joined
Dec 14, 2011
Messages
1,145 (0.24/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Noctua NH-D15 G2
Memory 32GB G.Skill DDR4 3600Mhz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage SAMSUNG 990 PRO 2TB
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Corsair K70 PRO - OPX Linear Switches
Software Microsoft Windows 11 - Enterprise (64-bit)
What quality setting is this (low, medium, high, ultra)?
View attachment 380174

If I had to say: low, maybe medium. The textures are too blended and undefined, the lighting/reflections are lacking, and the same goes for the shadows.

(Never played God of War)

Because side-by-side image comparison is way easier than noticing the quality difference while playing the game. I wanted an overall graphics quality assessment, like when comparing the image quality of a new game to a 5-year-old game. People often say a new game is unoptimized because the graphics quality looks similar but the FPS is much lower. They forget that quality improvements get harder and harder to notice as graphics get better.

Both screenshots are from TPU reviews.

Answers:

God of War: Low
Horizon: Medium

Ah, nvm, guess I should have read further, I was right. :D
 
Joined
Nov 22, 2020
Messages
93 (0.06/day)
Processor Ryzen 5 3600
Motherboard ASRock X470 Taichi
Cooling Scythe Kotetsu Mark II
Memory G.SKILL 32GB DDR4 3200 CL16
Video Card(s) EVGA GeForce RTX 3070 FTW3 Ultra (1980 MHz / 0.968 V)
Display(s) Dell P2715Q; BenQ EX3501R; Panasonic TC-P55S60
Case Fractal Design Define R5
Audio Device(s) Sennheiser HD580; 64 Audio 1964-Q
Power Supply Seasonic SSR-650TR
Mouse Logitech G700s; Logitech G903
Keyboard Cooler Master QuickFire TK; Kinesis Advantage
VR HMD Quest 2
Exactly, side by side medium vs high is way easier to tell apart than dlss. Wasn't that the point?
My takeaway is that most games look playable even on lower settings. Also, now I want a whole thread dedicated to "guess the graphics preset", if only to distract me from the wonderful https://guessthe.game/
The hardware also advanced significantly. You cannot compare the jump from the GeForce 2 Ti to the 8800 GTX with the jump from the 2080 Ti to the 5090. Using a GeForce 2 MX in 2006 would have been an awful experience, with many games just refusing to launch since the market changed so much. Crysis wouldn't even launch on it, whereas a 2080 Ti is still a decent performer in modern games.

Even the jump from the GeForce 2 Ti to the GeForce 3 Ti was substantial, and that was within one year. Now it's taking at least 4 years to get a similar jump.
I was going to challenge your claim about how long it's taking for each leap in hardware. I recently compiled a list of my last 25 years of graphics cards and I was surprised at how consistently my upgrades gained +200% performance for roughly +50% power consumption. But I looked up release dates of the cards and, sure enough, I'm waiting longer each time. 2.5 years between upgrades through the 2000's, but then 5 years and 6 years between upgrades in the 2010's.

(OT: it was hard to find benchmarks for old products. Other than TPU, TomsHardware, and Anandtech, every other review site is linkrotted or entirely gone, and many performance graphs were not backed up by the Internet Archive.)
 
Joined
Sep 21, 2023
Messages
47 (0.10/day)
Exactly, side by side medium vs high is way easier to tell apart than dlss. Wasn't that the point?
I just tried to show that low doesn't mean PS2 quality anymore, but more like console quality.
Yet it seems people correctly recognized that it was on medium or low, how about that.
Some weren't 100% sure. If it was side-by-side comparison, no one would have any doubt.

In games with a good DLSS implementation, the difference (between DLSS and native) is even harder to see (but not impossible) than the difference between low/medium and high graphics quality settings. For current DLSS, softer detail is a giveaway. The transformer-based DLSS 4 seems to fix that, at least to some degree. I'm looking forward to seeing how it compares to native.
 
Joined
May 11, 2018
Messages
1,309 (0.54/day)
My takeaway is that most games look playable even on lower settings. Also, now I want a whole thread dedicated to "guess the graphics preset", if only to distract me from the wonderful https://guessthe.game/

I was going to challenge your claim about how long it's taking for each leap in hardware. I recently compiled a list of my last 25 years of graphics cards and I was surprised at how consistently my upgrades gained +200% performance for roughly +50% power consumption. But I looked up release dates of the cards and, sure enough, I'm waiting longer each time. 2.5 years between upgrades through the 2000's, but then 5 years and 6 years between upgrades in the 2010's.

(OT: it was hard to find benchmarks for old products. Other than TPU, TomsHardware, and Anandtech, every other review site is linkrotted or entirely gone, and many performance graphs were not backed up by the Internet Archive.)

Commendable. I started buying 3D accelerators with the Riva TNT2, and I completely forgot what I had and for how long - luckily there are still posts from the early 2000s on forums where I described my upgrade paths. :-D
 
Joined
Nov 22, 2020
Messages
93 (0.06/day)
Processor Ryzen 5 3600
Motherboard ASRock X470 Taichi
Cooling Scythe Kotetsu Mark II
Memory G.SKILL 32GB DDR4 3200 CL16
Video Card(s) EVGA GeForce RTX 3070 FTW3 Ultra (1980 MHz / 0.968 V)
Display(s) Dell P2715Q; BenQ EX3501R; Panasonic TC-P55S60
Case Fractal Design Define R5
Audio Device(s) Sennheiser HD580; 64 Audio 1964-Q
Power Supply Seasonic SSR-650TR
Mouse Logitech G700s; Logitech G903
Keyboard Cooler Master QuickFire TK; Kinesis Advantage
VR HMD Quest 2
I see far less difference between low and ultra graphics in most games than I do between native and FSR at any setting.
It depends on the game for sure. All TAA games have some amount of softness to them which is slightly worsened by DLSS-Q @ 4K, but it's not something that bothers me much. I run my games with very little post-process sharpening, sometimes even with an .ini tweak to go below the minimum sharpening offered by in-game settings, because I don't like specular aliasing or pixel crawl. What is it in particular that stands out to you with upscalers, and are there certain games where it's worse?

I imagine that you miss the time before games switched to post-process AA and everything became a bit blurry. I have nostalgia for DX9 games, which can now run with SGSSAA, and I'm slightly worried that future graphics cards will stop supporting all the old flavors of MSAA. I don't know if there is/was dedicated hardware support to accelerate multisampling: does it still exist in today's cards, serving a purpose with shaders? Would there be any academic interest in seeing how fast a 5090 can run DX:HR with 8xSGSSAA @ 4K compared to previous product generations?

Also I'm waiting for the trend in retro-3D-aesthetics to catch up to the DX9 era... there are some games this year that are very PS1-stylized, so maybe another decade until the wheel finishes its turn.
 
Joined
Jun 14, 2020
Messages
3,891 (2.32/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
I just tried to show that low doesn't mean PS2 quality anymore, but more like console quality.

Some weren't 100% sure. If it was side-by-side comparison, no one would have any doubt.

In games with a good DLSS implementation, the difference (between DLSS and native) is even harder to see (but not impossible) than the difference between low/medium and high graphics quality settings. For current DLSS, softer detail is a giveaway. The transformer-based DLSS 4 seems to fix that, at least to some degree. I'm looking forward to seeing how it compares to native.
Well, the claim was that it's preferable to drop settings rather than use DLSS, which obviously isn't the case, especially when you get into medium territory and most games start to look like ass. I mean, I was on my phone and the Horizon screenshot you posted looked like crap.

But it seems we agree anyway. There are games where native looks a lot worse than DLSS. I would post pictures, but people wouldn't believe that the crappy one is the native shot. E.g. in CoD, DLSS Performance looks better than native in static scenes. Obviously in motion it doesn't, but Quality does.
 
Joined
Jan 8, 2017
Messages
9,597 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
In games with good DLSS implementation, the difference (between DLSS & native) is even harder to see
In static scenes they all resolve pretty close to native, but during motion you can absolutely tell that it isn't native; the image is full of stairstep artifacts.
 
Joined
Sep 21, 2023
Messages
47 (0.10/day)
Well, the claim was that it's preferable to drop settings rather than use DLSS, which obviously isn't the case, especially when you get into medium territory and most games start to look like ass. I mean, I was on my phone and the Horizon screenshot you posted looked like crap.

But it seems we agree anyway. There are games where native looks a lot worse than DLSS. I would post pictures, but people wouldn't believe that the crappy one is the native shot. E.g. in CoD, DLSS Performance looks better than native in static scenes. Obviously in motion it doesn't, but Quality does.
I agree that DLSS will usually give you better image quality than lower settings, but I don't think they always look that bad. Some look quite acceptable even on low (like GoW). Horizon lowers texture resolution way too much.

In static scenes they all resolve pretty close to native, but during motion you can absolutely tell that it isn't native; the image is full of stairstep artifacts.
You mean FSR or DLSS? DLSS is much better than FSR in that regard (but not perfect).
 
Joined
Nov 7, 2017
Messages
1,989 (0.76/day)
Location
Ibiza, Spain.
System Name Main
Processor R7 5950x
Motherboard MSI x570S Unify-X Max
Cooling converted Eisbär 280, two F14 + three F12S intake, two P14S + two P14 + two F14 as exhaust
Memory 16 GB Corsair LPX bdie @3600/16 1.35v
Video Card(s) GB 2080S WaterForce WB
Storage six M.2 pcie gen 4
Display(s) Sony 50X90J
Case Tt Level 20 HT
Audio Device(s) Asus Xonar AE, modded Sennheiser HD 558, Klipsch 2.1 THX
Power Supply Corsair RMx 750w
Mouse Logitech G903
Keyboard GSKILL Ripjaws
VR HMD NA
Software win 10 pro x64
Benchmark Scores TimeSpy score Fire Strike Ultra SuperPosition CB20
@JustBenching
Just because something is cheaper/faster doesn't make it the better deal.
What's the CUDA perf on an AMD dGPU (vs NV)?
Right.
The same way, if I have +100K to spend (just) on a car, that still won't make me buy a Toyota for 20K just because it's cheaper.

And monitor res has NOTHING to do with the res you set in games, nor are you forced to run games in 4K just because you're using a 4K screen.
I'd rather play at FHD/QHD on a large screen that's 4K than on a same-size screen with a lower native res and get pissed off by the screen-door effect.

@AusWolf
I can't talk about recent games, but nothing I have from the past 10 years will ever look as good on low as it does on high/ultra,
and my F1 from 2012 (as I don't care for anything less than a V8 @ 18,000 rpm), modded to ultra high/ultra max, looks so much better that most don't even know how old the game is when I show them vids.

The main reason I spend more money on a dGPU than any console would cost is to get improvements in image quality/fps or (more) options I can use (vs a console).
 
Joined
Oct 28, 2012
Messages
1,224 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
It's not about this, RTRT and upscaling have been pushed onto people with no foresight. Back in those days engines and features weren't standard, if a developer felt like they could add "X" feature they would do so and then if others felt it was necessary they would all have their own solution. The reason 2080Ti is a still a "decent performer" is precisely because advancements have been handicapped by poor choices the industry made, that's not normal nor is it a good thing.
It's a bit of "what if" IMHO,( even the dedicated raytracing hardware which is the focus right now isn't improving at comparable speed to the early 2000 golden era, hardware growth seems to have slown down generally.)
B
ut I know that there's been external force beyond Nvidia and Microsoft, who pushed for more research in RTRT 20 years ago. SIGRAPPH made a conference about that in 2005, and both AMD and Nvidia were cited as actors contributing for the stunt growth of RTRT vs raster. As hardware was getting faster, there was barely any research done on making RTRT a reality, people just gave up in the 90's.

And that's also where I think the interests of gamers and CG researchers don't always align. For researchers, raster is seen as a workaround with limitations in accuracy/functionality, not really the end-game. The average gamer would probably have been fine with raster graphics until their death, but for a researcher, settling down is a form of death in itself. Probably.

But if you read about neural rendering, you can see that it's also about finding workarounds to make RT/PT more efficient, and trying to find solutions for things that neither raster nor RT/PT can do adequately in real time right now.

Realtime ray tracing for current and future games | ACM SIGGRAPH 2005 Courses
Introduction to real-time ray tracing | Request PDF

[attached images]
 
Joined
Jan 8, 2017
Messages
9,597 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
But if you read about neural rendering, you can see that it's also finding workaround to make RT/PT more efficient
I feel like every time I am arguing about these things eventually I realize I am just talking to an Nvidia PR rep.
 
Joined
Jul 24, 2024
Messages
334 (1.89/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
The problem with that is a card with a 10+ year lifespan, which the 5090 easily is for most people with backlogs of games, combined with future games using frame gen. AIOs simply don't last that long. An innovative blower fan design, working with a company like Noctua to design a new fan for it, would have been the way I went personally. Two fans rotating in opposite directions to reduce vibrations = lower noise levels, combined with Noctua fan technology = a quiet blower fan that can handle a 575 W card. I might be wrong, but I bet those engineers could have figured it out 100%.
Even with Noctua it's a problem. It's still too much heat to be flowing through the case, heating other components up.

Stagnation is never a good thing, even if some of the new features look promising.

None of the possible reasons why are good either
1. Silicon is hitting a wall
I've been hearing this for almost 10 years now, and still they introduce smaller and smaller nodes. It's more like we're hitting heat dissipation problems due to ever-rising transistor density.

2. This is the best they can do while keeping cost similar although spending 50 more on each die would likely lead to decent performance improvements.
They need a more advanced node to make bigger progress in raster performance. Shrinking from 5 nm to 4 nm is not enough. Maybe with 2 nm they will finally be able to ramp up the compute unit count the way they did with the RTX 4090. Also, it's not the best they can do, since they dedicate part of the units to everything other than rasterizing. Now imagine if those units were doing native rasterization work instead of interpolating, upscaling, etc.

So, how are the reviewers gonna pin the accolades on the RTX 5080, considering it has about ZERO performance uplift in raster and ray tracing compared to the RTX 4080 Super, for the same price?

Change the benchmarking format, relegate the non-DLSS performance numbers to the past that isn't relevant, and embrace the fake frames?

Ignore the mid-generational uplift, compare just the $1,200 RTX 4080 to the $999 RTX 5080, and somehow sell a 15% performance increase as groundbreaking, hoping that people have the memory of a goldfish?

Focus on the usability of AI gaming features, something that is as much in a vague future as ray tracing was for the RTX 2080 - by the time game creators learned to use it, the card was too slow to actually enjoy it?

Fully embrace that Moore's Law is truly dead now: if you want more performance, you pay more rather than wait two years?
They will praise DLSS 4 and MFG while saying that the lack of DLSS and MFG support is a negative for any non-Nvidia GPU.

"Nvidia says more than 80 percent of RTX GPU owners activate its DLSS upscaling."
Interesting data from the article.

Usage statistics indicate that over 80% of RTX players enable DLSS during gameplay, with a cumulative total of 3 billion hours of DLSS-enabled gaming
Quite contrary to the currently ongoing TPU poll:
[attached: TPU poll results]


I personally do notice the difference with DLSS quality. It looks better than native for the most part in most games.
:eek: :fear:

Is this medium or ultra?
He can't really tell. You need to see it in motion/sequence to notice artifacts or ghosting, or you need an image for comparison. If I let you play a game with DLSS turned on without telling you, you probably would not notice. Then I'd switch DLSS off and you would notice immediately. This is basically the issue in games where upscaling is turned on by default: some users practically never see that the particular game may look even better at native. Users who are unaware they're playing with upscaling turned on (by default) count towards Nvidia's statistic of 80% of users using DLSS. And as was already said here, some games won't allow turning DLSS off. I have a very bad feeling about this forced-upscaling strategy. Hopefully it won't spread any further, so GPU makers aren't able to obfuscate poor generational performance uplifts.

Simple. It takes much longer to render a frame with DLSS off. The latency reduction from DLSS more than offsets the latency increase from frame gen. Frame gen only increases latency if you compare it to DLSS.
DLSS without (M)FG increases framerate (and thus reduces frametime) by rendering scenes at a lower resolution and upscaling them to a higher one using a model based on a continuously trained neural network (so-called "AI"). DLSS also introduces distortions in the rendered, upscaled scenes. Though, I must say, the new DLSS 4 transformer model looks very promising.
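
As a rough illustration of what the upscaler actually works with, here's a quick sketch using the commonly cited per-axis scale factors (an approximation on my part; exact values can vary per game and preset):

```python
# Internal render resolution per DLSS preset, using commonly cited
# per-axis scale factors (approximate; games can override these).

PRESET_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    s = PRESET_SCALE[preset]
    return round(out_w * s), round(out_h * s)

for preset in PRESET_SCALE:
    w, h = internal_resolution(3840, 2160, preset)
    print(f"{preset:>17}: renders {w}x{h}, upscaled to 3840x2160")
```

At 4K output, Quality mode renders internally at 2560x1440 and Performance at 1920x1080, which is where both the framerate gain and the softer detail come from.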

FG increases latency, as its calculations require compute time in between rendering native/upscaled frames.

[attached image]

(Source: https://www.techspot.com/news/106265-early-dlss-4-test-showcases-cleaner-images-multiplied.html)

I did some quick research, and Nvidia's key to reducing latency is its Reflex 2 Frame Warp technology.

Frame Generation interpolates between two rendered frames, whether on DLSS or native. Right now, it can ONLY interpolate, not extrapolate. Jensen was quite incorrect when he said that DLSS version 4 predicts the future. The algorithm simply cannot predict when and where you'll move your character with the keyboard, or which way you'll move your mouse. And even if it could, the error/miss rate would be enormous. DLSS 4 inserts up to three generated frames between every two native frames in the sequence. Unless there is a change in rendered scene composition (a change in performance requirements), the amount of time between two natively rendered frames does not change, whether there are 1, 2, 3, or X generated frames injected between them. When moving your character in a game, you only see the results of your keyboard presses and/or mouse movement in natively rendered frames (not in interpolated ones), because in order to show user interaction, keyboard and mouse data must come from the processor, which only happens with native frames. Responsiveness increases when the native framerate increases, hence interpolated frames cannot lower it. Good explanation (source, source):
I understand why Nvidia doesn’t want to comment a lot on DLSS 4’s use of frame interpolation. That’s because frame interpolation introduces latency. You need to render two frames and then perform the interpolation before the first frame in the sequence is displayed, so when using any frame interpolation tool, you’re essentially playing on a slight delay. The assumption I’ve seen is that these extra frames linearly increase latency, which isn’t the case.

The Verge showed concern saying it wanted to “see how the new frame generation tech affects latency,” while TechSpot declared that “users are concerned that multi-frame rendering could compound the [latency] problem.” It’s a natural counter to the multiplied “fake” frames that DLSS 4 can spit out. If generating one frame causes a latency problem, surely generating three of them would cause a bigger latency problem. But that’s not how it works.

This is why it’s so important to understand that DLSS 4 uses frame interpolation. The idea of playing on a delay isn’t any different between DLSS 3 generating one extra frame and DLSS 4 generating three extra ones — the process still involves rendering two frames and comparing the difference between them. Your latency doesn’t significantly increase between inserting one, two, or three extra frames in between the two that were rendered. Regardless of the number of frames that go in between, the latency added by the frame interpolation process is largely the same.

Let me illustrate this. Let’s say you’re playing a game at 60 frames per second (fps). That means there’s 16.6 milliseconds between each frame you see. With DLSS 3, your frame rate would double to 120 fps, but your latency isn’t halved to 8.3ms. The game looks smoother, but there’s still 16.6ms between each rendered frame. With DLSS 4, you’ll be able to go up to 240 fps, quadrupling your frame rate, but once again, the latency doesn’t drop to 4.2ms. It’s still that same 16.6ms.
There’s a problem with multi-frame generation, and that’s latency. As is already the case with DLSS 3, you’re essentially playing on a delay with frame generation. It’s a slight delay, but the graphics card has to render two frames in order to perform the frame interpolation to “generate” the frames in between. This isn’t a problem at high frame rates, but DLSS 3 runs into issues if you’re running at a low base frame rate.

That issue becomes exaggerated when generating multiple frames, which is something I saw when looking at Lossless Scaling, which supports 4x frame generation like DLSS 4. You may be able to go from 30 frames per second (fps) to 120 fps with DLSS 4, but you’ll still get the responsiveness of 30 fps.
(M)FG improves gameplay smoothness (increases overall framerate) but does not improve responsiveness (increases latency).
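
To put rough numbers on that, here's a toy model (my own simplification, not Nvidia's documented pipeline; it assumes the interpolator has to hold back one native frame before display, regardless of how many frames it inserts):

```python
# Toy model of interpolation-based frame generation: displayed framerate
# multiplies, but input is only sampled at native frames, and buffering
# the next native frame adds roughly one native frame time of delay
# whether 1 or 3 frames are inserted in between.

def frame_gen(base_fps: float, multiplier: int):
    native_ms = 1000.0 / base_fps          # time between rendered frames
    displayed_fps = base_fps * multiplier  # perceived smoothness
    added_delay_ms = native_ms if multiplier > 1 else 0.0
    return displayed_fps, native_ms, added_delay_ms

for mult in (1, 2, 4):  # native, DLSS 3-style 2x, DLSS 4-style 4x
    fps, native_ms, delay = frame_gen(60, mult)
    print(f"{mult}x: {fps:.0f} fps shown, {native_ms:.1f} ms between rendered "
          f"frames, ~{delay:.1f} ms extra buffering delay")
```

The displayed fps column multiplies, but the time between the frames that actually reflect your input stays at the native frame time, which matches the 60 -> 120 -> 240 fps example quoted above.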

With Reflex 2 Frame Warp technology, the GPU requests the most recent mouse input from the CPU and only re-renders parts of already rendered frames by shifting objects in the scene and painting in the blank areas created by the shift. This effectively eliminates the latency issue introduced by (M)FG. It virtually improves responsiveness as long as the mouse polling rate is higher than the overall fps (native, generated, or both combined) and as long as the CPU is fast enough. This new technology is applied on top of the existing Reflex technology, which handled CPU-GPU synchronization. Am I correct?
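
If my reading is right, the warp step is conceptually something like this heavily simplified 2D sketch (my own toy illustration, not Nvidia's implementation; it ignores depth and per-object motion and just shifts the whole frame by the latest camera yaw, leaving a strip of holes to in-paint):

```python
import numpy as np

# Toy "late warp": shift the last rendered frame by the camera rotation
# that happened since it was rendered, and mark the uncovered strip as
# holes that a real implementation would in-paint from prior frames.

def late_warp(frame: np.ndarray, yaw_delta_deg: float, fov_deg: float = 90.0):
    h, w, _ = frame.shape
    px_shift = int(round(yaw_delta_deg * w / fov_deg))  # rough pixels per degree
    warped = np.zeros_like(frame)
    hole_mask = np.ones((h, w), dtype=bool)
    if px_shift >= 0:
        warped[:, : w - px_shift] = frame[:, px_shift:]
        hole_mask[:, : w - px_shift] = False
    else:
        warped[:, -px_shift:] = frame[:, : w + px_shift]
        hole_mask[:, -px_shift:] = False
    return warped, hole_mask

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
warped, holes = late_warp(frame, yaw_delta_deg=1.5)
print(holes.sum(), "pixels would need in-painting")
```

The real thing obviously has to deal with depth, disocclusion, and HUD elements, but the basic idea of "re-aim the existing frame with the newest input, then fill the gaps" is what makes the latency win possible.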

[attached image]
 
Joined
Nov 7, 2017
Messages
1,989 (0.76/day)
Location
Ibiza, Spain.
System Name Main
Processor R7 5950x
Motherboard MSI x570S Unify-X Max
Cooling converted Eisbär 280, two F14 + three F12S intake, two P14S + two P14 + two F14 as exhaust
Memory 16 GB Corsair LPX bdie @3600/16 1.35v
Video Card(s) GB 2080S WaterForce WB
Storage six M.2 pcie gen 4
Display(s) Sony 50X90J
Case Tt Level 20 HT
Audio Device(s) Asus Xonar AE, modded Sennheiser HD 558, Klipsch 2.1 THX
Power Supply Corsair RMx 750w
Mouse Logitech G903
Keyboard GSKILL Ripjaws
VR HMD NA
Software win 10 pro x64
Benchmark Scores TimeSpy score Fire Strike Ultra SuperPosition CB20
@LittleBro
Except TPU isn't a global metric for "gamers", unlike Steam or other gaming-app numbers, which do reflect global figures.

More than 50% of all gamers are on 720p, maybe 1080p; now compare that to what the TPU numbers show.
 
Joined
Jun 14, 2020
Messages
3,891 (2.32/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Quite contrary to the currently ongoing TPU poll:
View attachment 380178
The poll shows that the majority of users are using upscaling btw.
 
Joined
Jul 24, 2024
Messages
334 (1.89/day)
System Name AM4_TimeKiller
Processor AMD Ryzen 5 5600X @ all-core 4.7 GHz
Motherboard ASUS ROG Strix B550-E Gaming
Cooling Arctic Freezer II 420 rev.7 (push-pull)
Memory G.Skill TridentZ RGB, 2x16 GB DDR4, B-Die, 3800 MHz @ CL14-15-14-29-43 1T, 53.2 ns
Video Card(s) ASRock Radeon RX 7800 XT Phantom Gaming
Storage Samsung 990 PRO 1 TB, Kingston KC3000 1 TB, Kingston KC3000 2 TB
Case Corsair 7000D Airflow
Audio Device(s) Creative Sound Blaster X-Fi Titanium
Power Supply Seasonic Prime TX-850
Mouse Logitech wireless mouse
Keyboard Logitech wireless keyboard
Yes. It also clearly shows that the vast majority of poll contributors value image quality above performance, mostly preferring no upscaling.
 
Joined
Jan 19, 2023
Messages
288 (0.40/day)
It's a bit of "what if" IMHO,( even the dedicated raytracing hardware which is the focus right now isn't improving at comparable speed to the early 2000 golden era, hardware growth seems to have slown down generally.)
B
ut I know that there's been external force beyond Nvidia and Microsoft, who pushed for more research in RTRT 20 years ago. SIGRAPPH made a conference about that in 2005, and both AMD and Nvidia were cited as actors contributing for the stunt growth of RTRT vs raster. As hardware was getting faster, there was barely any research done on making RTRT a reality, people just gave up in the 90's.

And it's also were I think that the interest of gamers and CG research don't always align. For them, raster is seen as a workaround with limitation in accuracy/functionality, not really the end-game. But the average gamer would have probably been fine with raster graphics until their death. But for a researcher settling down is a form of death in itself. Probably.

But if you read about neural rendering, you can see that it's also finding workaround to make RT/PT more efficient, and trying to find solution for things that neither raster or RT/PT can do adequately in rel-time right now.

Realtime ray tracing for current and future games | ACM SIGGRAPH 2005 Courses
Introduction to real-time ray tracing | Request PDF

View attachment 380193View attachment 380194View attachment 380195
I guess the leather jacket man wasn't lying back when they launched the 20 series and said it was 10 years in the making:


They must have started research shortly after that SIGGRAPH course, or it's a coincidence.

EDIT: Also, I recommend the short read in the link; Nvidia marketing hasn't changed since then :D GIGA RAYS PER SEC
 
Joined
Jun 14, 2020
Messages
3,891 (2.32/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Yes. It also clearly shows that the vast majority of poll contributors value image quality above performance, mostly preferring no upscaling.
But most people do prefer upscaling?
 

L0stS0ul

New Member
Joined
Dec 16, 2024
Messages
6 (0.19/day)
It's good that the slides are labeled, otherwise some people wouldn't know which one has the better image :rolleyes: If the results came from a blind survey, 20 or 50% of people would have to go to an ophthalmologist because they can't see the difference, or even worse, they'd blindly pick the images without these super-modern improvements :pimp:

RTX 5080 has 11% more processing units and up to 15% better raster performance than the RTX 4080. So much for the 5080 beating the 4090 with just 10k shaders and all that new architecture magic on top.
But the price of the GTX/RTX x80 series has increased by 40-70 percent (RTX 3080 -> 4080; now only 40%). And as it turns out, without DLSS, RT is not really usable in games. When you pay $1,200, you expect the card to provide fully smooth performance, not 20-30 fps with RT. Without DLSS, these cards are not able to handle RT in Cyberpunk 2077, Alan Wake, PT, and Indiana Jones. In addition, 16 GB at higher resolutions, with RT, etc., is becoming very, very insufficient.
 
Joined
Sep 21, 2023
Messages
47 (0.10/day)
With Reflex 2 Frame Warp technology, the GPU requests the most recent mouse input from the CPU and only re-renders parts of already rendered frames by shifting objects in the scene and painting in the blank areas created by the shift.
It's not clear to me how the in-painting is done. Is it AI-generated in-painting, an actual narrow viewport that is fully rendered, or something in between the two?

Based on this line "predictive rendering algorithm that uses camera, color and depth data from prior frames to in-paint these holes accurately" I'm guessing there is little to no 3D rendering happening?
 
Joined
Nov 7, 2017
Messages
1,989 (0.76/day)
Location
Ibiza, Spain.
System Name Main
Processor R7 5950x
Motherboard MSI x570S Unify-X Max
Cooling converted Eisbär 280, two F14 + three F12S intake, two P14S + two P14 + two F14 as exhaust
Memory 16 GB Corsair LPX bdie @3600/16 1.35v
Video Card(s) GB 2080S WaterForce WB
Storage six M.2 pcie gen 4
Display(s) Sony 50X90J
Case Tt Level 20 HT
Audio Device(s) Asus Xonar AE, modded Sennheiser HD 558, Klipsch 2.1 THX
Power Supply Corsair RMx 750w
Mouse Logitech G903
Keyboard GSKILL Ripjaws
VR HMD NA
Software win 10 pro x64
Benchmark Scores TimeSpy score Fire Strike Ultra SuperPosition CB20
I assume there can't be, or you couldn't really offload it to other (hardware) doing the work.
 
Joined
Oct 28, 2012
Messages
1,224 (0.27/day)
Processor AMD Ryzen 3700x
Motherboard asus ROG Strix B-350I Gaming
Cooling Deepcool LS520 SE
Memory crucial ballistix 32Gb DDR4
Video Card(s) RTX 3070 FE
Storage WD sn550 1To/WD ssd sata 1To /WD black sn750 1To/Seagate 2To/WD book 4 To back-up
Display(s) LG GL850
Case Dan A4 H2O
Audio Device(s) sennheiser HD58X
Power Supply Corsair SF600
Mouse MX master 3
Keyboard Master Key Mx
Software win 11 pro
I feel like every time I am arguing about these things eventually I realize I am just talking to an Nvidia PR rep.
That's a weird thing to say. You are actually giving too much credit to Nvidia, and ignoring that people outside the company are also asking themselves how emerging technologies could affect various aspects of the gaming industry:
(PDF) Artificial intelligence for video game visualization, advancements, benefits and challenges
Jensen wasn't involved in that research paper. I would argue that research at AMD, Nvidia, Intel, Microsoft, etc. is probably influenced on some level by what's happening in academic labs; I don't believe they function in a bubble.

Game visualization refers to the use of artificial intelligence techniques and technologies to enhance the visual aspects of games including graphics, animations and virtual environments [15,16]. In general, it encompasses the employment of computer graphics and AI techniques to render and exhibit the visual components of a video game including environments, characters, objects and effects such as lighting and shading
 