Monday, January 6th 2025

NVIDIA Introduces DLSS 4 with Multi-Frame Generation for up to 8X Framerate Uplifts

With the GeForce RTX 50-series "Blackwell" generation, NVIDIA is introducing its new DLSS 4 technology. The most groundbreaking feature arriving with DLSS 4 is multi-frame generation. The technology relies on generative AI to predict up to three frames ahead of each conventionally rendered frame, which may itself be the output of super resolution. Since DLSS SR can effectively upscale 1 pixel into 4 (i.e. turn a 1080p render into 4K output), and DLSS 4 generates the following three frames, DLSS 4 effectively has a pixel generation factor of 1:15 (15 in every 16 pixels are generated outside the rendering pipeline). When it launches alongside the GeForce RTX 50-series later this month, over 75 game titles will be ready for DLSS 4. Multi-frame generation is exclusive to "Blackwell."
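That 1:15 figure follows directly from the numbers above. A minimal arithmetic sketch (assuming a 1080p internal render, 4K output, and three generated frames per rendered frame, as described in the article):

# Back-of-the-envelope check of the "15 in every 16 pixels" claim.
# Assumes a 1080p internal render upscaled to 4K output, plus three
# AI-generated frames for every conventionally rendered frame.
render_w, render_h = 1920, 1080      # internal render resolution
output_w, output_h = 3840, 2160      # displayed resolution after DLSS SR
generated_per_rendered = 3           # multi-frame generation factor

rendered_pixels = render_w * render_h                                   # pixels actually shaded
displayed_pixels = output_w * output_h * (1 + generated_per_rendered)   # pixels shown per group of frames

print(f"1 rendered pixel per {displayed_pixels // rendered_pixels} displayed pixels")
# -> 1 rendered pixel per 16 displayed pixels, i.e. 15 of every 16 are generated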

133 Comments on NVIDIA Introduces DLSS 4 with Multi-Frame Generation for up to 8X Framerate Uplifts

#101
rv8000
zigzagBoth are upscaled (left: old model, right: new model). Note that these two images are a screenshot of a Youtube video, so there are too many video compression and resizing artefacts for in-depth quality analysis.

1) I agree that the new one looks over-sharpened. Comparisons from more games are needed to see if it's caused by the new model or if it's game specific or if it's due to resizing and video compression.

2) Aliasing might seem worse to you because of the added detail (sharper images always make aliasing more visible), but if you look at the video, you will see that there is more aliasing in the old one (even though it has softer image which should hide aliasing)

3) These details may exist in previous frames. Each frame a different sub-pixel shift is added, which makes a real resolution increase possible when data from multiple frames is merged together. But even if all these micro details are completely made up, if they are fitting and make sense, then they make CG graphics better. You are not getting 100% what artist has envisioned anyway (production time constraints and tools limitations, texture compression, real-time 3D rendering limitations, game size constraints etc.).
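For reference, a minimal sketch of the per-frame sub-pixel shift described in the quote above. A Halton-style jitter sequence is a common choice for this in temporal upscalers; the snippet is purely illustrative, not any particular vendor's implementation:

# Per-frame sub-pixel jitter offsets: each frame samples slightly different
# positions inside every output pixel, so accumulating several frames can
# recover detail finer than the internal render resolution.
def halton(index, base):
    """Low-discrepancy value in [0, 1) for the given index and base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

for frame in range(1, 9):                # first 8 frames of a (2, 3) pattern
    jitter_x = halton(frame, 2) - 0.5    # offset within the pixel, in pixels
    jitter_y = halton(frame, 3) - 0.5
    print(f"frame {frame}: offset ({jitter_x:+.3f}, {jitter_y:+.3f}) px")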
In regards to the detail, the new model looks like a knife was taken to the fabric/leather of the bag and scuffed repeatedly. You didn't include an original image at this point, and unless those markings are actually in the non-scaled native image, it's more BS approximation of textures that don't exist. DLSS, FSR, and XeSS are nothing more than ways to pump FPS at the cost of quality; the approximation may be better in newer iterations, but it is still wrong and worse than native.

Not only that, but over-sharpening and texture alteration from upscalers often results in these "fine" (incorrect/fake) details producing odd lighting variations, further resulting in a more "off"-looking image presentation. There was a DLSS/FSR game review a few months back where DLSS was darkening certain textures on the face of a building in direct sunlight, while also blurring textures, resulting in an overall worse lighting presentation.

It's obvious previous generational improvements are a thing of the past, but these software gimmicks are (in most cases) becoming the industry-wide excuse for poor optimization and reliance on lower-quality rendering techniques for the sake of cutting corners. A good example of this was the recent MH: Wilds beta. Without DLSS/DLAA enabled, the game looked like absolute trash.
Posted on Reply
#102
Dirt Chip
We might see this: as with DLSS 3, the big beneficial difference will show on already-high-FPS systems. On low-FPS setups the added lag will be much more noticeable, to the point of a worse overall experience.

Even more, all those generative AI stunts (as cool as they might seem) are just makeup for poorly optimized, badly coded games.
This adverse co-evolution is dragging the whole game industry downward, and we gamers are happily financing it.

Good luck to us all...
Posted on Reply
#103
zigzag
rv8000In regards to the detail, the new model looks like a knife was taken to the fabric/leather of the bag and scuffed repeatedly. You didn't include an original image at this point, and unless those markings are actually in the non-scaled native image, it's more BS approximation of textures that don't exist. DLSS, FSR, and XeSS are nothing more than ways to pump FPS at the cost of quality; the approximation may be better in newer iterations, but it is still wrong and worse than native.
As you said, we don't have the native image. For all we know, the native image might have the same look.
rv8000Not only that, but over-sharpening and texture alteration from upscalers often results in these "fine" (incorrect/fake) details producing odd lighting variations, further resulting in a more "off"-looking image presentation. There was a DLSS/FSR game review a few months back where DLSS was darkening certain textures on the face of a building in direct sunlight, while also blurring textures, resulting in an overall worse lighting presentation.
And rasterization produces shadow artifacts, wrong reflections, wrong lighting, etc. You are just used to one type of artefact and accept it as the norm while obsessing over the artefacts produced by upscalers.

EDIT: Just as game devs can create, for example, ugly faces or bad animations, they can also do a poor implementation of an upscaler. No one has an infinite budget and tradeoffs need to be made. Maybe a game is poorly optimized but has better gameplay, because resources were spent on gameplay.
Posted on Reply
#104
Blueberries
I'm excited for the future of AI and I'm glad NVIDIA is developing the software to drive it.

As a market consumer we're still at a stop-gap though. If I care about high resolution raster I'm not interested in upscalers, and if I'm interested in high framerates I'm not interested in pathtracing. What's important is native performance and that's what they need to convince me is worth $2000.

I don't think they're doing themselves any favors with deceptive marketing slides comparing DLSS 4 to DLSS 3 either.
Posted on Reply
#105
JustBenching
zigzagAnd rasterization produces shadow artifacts, wrong reflections, wrong lighting, etc. You are just used to one type of artefact and accept it as the norm while obsessing over the artefacts produced by upscalers.
There are countless videos and Reddit channels dedicated to eradicating the pestilence of TAA from games because it's a blurry mess, but on TPU somehow "native" is the way :D

Look at this mess for example, TAA looks horrible

Posted on Reply
#106
rv8000
zigzagAs you said, we don't have the native image. For all we know, the native image might have the same look.

And rasterization produces shadow artifacts, wrong reflections, wrong lighting, etc. You are just used to one type of artefact and accept it as the norm while obsessing over the artefacts produced by upscalers.

EDIT: Just as game devs can create, for example, ugly faces or bad animations, they can also do a poor implementation of an upscaler. No one has an infinite budget and tradeoffs need to be made. Maybe a game is poorly optimized but has better gameplay, because resources were spent on that.
Incorrect. Shadow mapping and baked-in lighting are designed with intent, to approximate lighting as well as possible given that path tracing isn't feasible.

Upscaling and frame generation take the approximations and make further guesses as to what is actually there, resulting in further error and/or gross misrepresentation of detail. For instance, the hair in that image being over-sharpened and altered in contrast screws up the overall lighting and depth of the object.
Posted on Reply
#107
zigzag
BlueberriesI'm excited for the future of AI and I'm glad NVIDIA is developing the software to drive it.

As a market consumer we're still at a stop-gap though. If I care about high resolution raster I'm not interested in upscalers, and if I'm interested in high framerates I'm not interested in pathtracing. What's important is native performance and that's what they need to convince me is worth $2000.

I don't think they're doing themselves any favors with deceptive marketing slides comparing DLSS 4 to DLSS 3 either.
If you're not interested in pathtracing you are not interested in graphics quality as pathtracing is objectively the more accurate rendering technique. Which means you can just run games at low quality settings :p

But seriously, GPU prices are way too high, and I agree with you that marketing slides comparing the performance of cards when one is running DLSS 4 and the other DLSS 3 are deceptive marketing.
rv8000Incorrect. Shadow mapping and baked-in lighting are designed with intent, to approximate lighting as well as possible given that path tracing isn't feasible.

Upscaling and frame generation take the approximations and make further guesses as to what is actually there, resulting in further error and/or gross misrepresentation of detail. For instance, the hair in that image being over-sharpened and altered in contrast screws up the overall lighting and depth of the object.
But path tracing is feasible in combination with generative AI. You can have better lighting and shadows than what the best rasterization based approximations can achieve. In both cases there are tradeoffs.
Posted on Reply
#108
rv8000
zigzagIf you're not interested in pathtracing you are not interested in graphics quality as pathtracing is objectively the more accurate rendering technique. Which means you can just run games at low quality settings :p

But seriously, GPU prices are way too high, and I agree with you that marketing slides comparing the performance of cards when one is running DLSS 4 and the other DLSS 3 are deceptive marketing.


But path tracing is feasible in combination with generative AI. You can have better lighting and shadows than what best rasterization based approximations can achieve. In both cases there are tradeoffs.
Incorrect. In both rendering cases, upscalers and frame generation are image-destructive approximations. Path tracing is not feasible at native resolutions; we're still many, many years out from that. CP2077 is a good example of that: natively, depending on resolution, you're at sub-playable frame rates, as low as 21 FPS at 4K, on a 4090 at that. This also requires ray reconstruction and denoisers; without them you'd need far, far more GPU horsepower to get a path-traced or fully ray-traced render that isn't horrendously noisy.

A truly ray/path-traced game is absolutely the end game, but current heavily RT games like CP2077, Indiana Jones, Alan Wake, and BMW show it's not feasible hardware- and cost-wise without upscaling, low ray counts and quality, and other approximation software tech such as frame generation.
Posted on Reply
#109
zigzag
rv8000Incorrect. In both rendering cases, upscalers and frame generation are image-destructive approximations. Path tracing is not feasible at native resolutions; we're still many, many years out from that. CP2077 is a good example of that: natively, depending on resolution, you're at sub-playable frame rates, as low as 21 FPS at 4K, on a 4090 at that. This also requires ray reconstruction and denoisers; without them you'd need far, far more GPU horsepower to get a path-traced or fully ray-traced render that isn't horrendously noisy.

A truly ray/path-traced game is absolutely the end game, but current heavily RT games like CP2077, Indiana Jones, Alan Wake, and BMW show it's not feasible hardware- and cost-wise without upscaling, low ray counts and quality, and other approximation software tech such as frame generation.
I think you misunderstood me. With both cases I meant 1) rasterization without any ML tech and 2) path tracing with ML tech (upscaler, denoising, RR, FG).

While path tracing is not feasible without ML tech, it is feasible when upscaler, denoising, RR and FG are used. That's what I meant as I said "with generative AI".

All approximations can be considered image-destructive, and rasterization has a lot of approximations. It's just a question of which approximations you personally prefer over others, which are a better fit for a certain graphics style, and which have fewer bugs in a specific game's implementation.
Posted on Reply
#110
Wasteland
rv8000Incorrect. In both rendering cases, upscalers and frame generation are image-destructive approximations. Path tracing is not feasible at native resolutions; we're still many, many years out from that. CP2077 is a good example of that: natively, depending on resolution, you're at sub-playable frame rates, as low as 21 FPS at 4K, on a 4090 at that. This also requires ray reconstruction and denoisers; without them you'd need far, far more GPU horsepower to get a path-traced or fully ray-traced render that isn't horrendously noisy.

A truly ray/path-traced game is absolutely the end game, but current heavily RT games like CP2077, Indiana Jones, Alan Wake, and BMW show it's not feasible hardware- and cost-wise without upscaling, low ray counts and quality, and other approximation software tech such as frame generation.
Yeah even path-traced Cyberpunk tops out at 2 rays and 2 bounces, IIRC. It's always fun to hear novelty fetishists wax pompous about how technologies that are decades away have already killed the established standard (rasterization, in this case). At this rate, I'll be lucky to see true full path tracing take over before I die of old age. But yeah, sure, rasterization is dead and gone and you LUDDITE GAMERS JUST NEED TO ACCEPT IT, LMAO.

Likewise, the argument that all rendering is fake, and therefore benchmarks using DLSS 3/4 are legit, grows tiresome. Here's a simple heuristic: if the frame cannot respond to user input, it is "fake" for the purpose of performance discussions. Interpolated frames smooth out the visual presentation. That's it. You may be watching the game at 240 FPS with DLSS 4, but you're still only playing it at 60.
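A minimal sketch of that arithmetic, using the 60-to-240 FPS example above (hypothetical numbers, purely to make the distinction explicit):

# Displayed vs. input-responsive frame rate with multi-frame generation.
# Only conventionally rendered frames can reflect new player input; the
# generated frames in between only smooth the presentation.
def effective_rates(rendered_fps, generated_per_rendered):
    displayed_fps = rendered_fps * (1 + generated_per_rendered)
    input_responsive_fps = rendered_fps
    return displayed_fps, input_responsive_fps

displayed, responsive = effective_rates(rendered_fps=60, generated_per_rendered=3)
print(f"Displayed: {displayed} FPS, input-responsive: {responsive} FPS")
# -> Displayed: 240 FPS, input-responsive: 60 FPS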
Posted on Reply
#111
rv8000
zigzagI think you misunderstood me. With both cases I meant 1) rasterization without any ML tech and 2) path tracing with ML tech (upscaler, denoising, RR, FG).

While path tracing is not feasible without ML tech, it is feasible when upscaler, denoising, RR and FG are used. That's what I meant as I said "with generative AI".

All approximations can be considered image-destructive, and rasterization has a lot of approximations. It's just a question of which approximations you personally prefer over others, which are a better fit for a certain graphics style, and which have fewer bugs in a specific game's implementation.
No, and you even make the distinction yourself. Traditional rasterization as we know it is designed with intent. Current ray- or path-traced renderings are hybrids, and lumping in AI software techniques results in approximations of approximations. In a traditional rasterized game, the look of a texture, lighting, or occlusion is designed to look one way only, and is rendered one way only. This is wholly false if you want to claim RT/PT plus upscalers and frame gen are the same thing.
Posted on Reply
#112
FeelinFroggy
NiceumemuI don't care about fake frames or fake resolution, show me pure raster performance
AI super sampling is the future. Get with it or move on. How long did it take a GPU to make 4k gaming above 60fps a possibility? How do you expect a GPU to push 16k without AI? It will take decades of process refinement. That's just not going to cut it when people already complain about the minor performance increase generation over generation.

If you don't like fake frames or fake resolution, then you'd better quit gaming now, because you will only see more and more of it in every generation from NVIDIA and AMD.
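For a sense of scale, a rough pixel-count comparison (standard 16:9 resolutions assumed; 16K is not a formally standardized resolution, so 15360x8640 is used as the usual shorthand):

# Raw pixel counts for 16:9 resolutions, relative to 4K.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
    "16K":   (15360, 8640),   # common shorthand, not a formal standard
}
pixels_4k = 3840 * 2160
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>5}: {pixels / 1e6:6.1f} MP ({pixels / pixels_4k:4.1f}x 4K)")
# 16K carries 16x the pixels of 4K -- four more doublings of raw shading work.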
Posted on Reply
#113
N3utro
FeelinFroggyAI super sampling is the future. Get with it or move on. How long did it take a GPU to make 4k gaming above 60fps a possibility? How do you expect a GPU to push 16k without AI? It will take decades of process refinement. That's just not going to cut it when people already complain about the minor performance increase generation over generation.

If you don't like fake frames or fake resolution, then you'd better quit gaming now, because you will only see more and more of it in every generation from NVIDIA and AMD.
I'd bet these people saying they don't like "fake frames" wouldn't be able to tell the AI-generated gameplay from the non-AI-generated gameplay at the same framerate in a blind test. They would probably even say the AI-generated gameplay looks better than the original at the same framerate.
Posted on Reply
#114
zigzag
rv8000No, and you even make the distinction yourself. Traditional rasterization as we know it is designed with intent. Current ray- or path-traced renderings are hybrids, and lumping in AI software techniques results in approximations of approximations. In a traditional rasterized game, the look of a texture, lighting, or occlusion is designed to look one way only, and is rendered one way only. This is wholly false if you want to claim RT/PT plus upscalers and frame gen are the same thing.
"Designed with intent" might be the key here. What does "intent" mean?

For example, if you hire an artist to create a game character for your project, what instructions do you give them? Do you tell them you want the character to look mighty and scary, or do you specify where each skin pore should be located? I would say skin pore locations can be arbitrary as long as they fit the overall intended style of the character.

AI upscalers affect only these micro details, where some deviation doesn't really impact the intent. Results might not be as deterministic as with pure traditional rasterization, but is that even important? A painter doesn't have full control over each bristle in a paintbrush, but still manages to create a work of art as intended. If AI-generated details look good, fit the intention and provide a performance uplift, then it's a win in my book.
Posted on Reply
#115
SOAREVERSOR
FeelinFroggyAI super sampling is the future. Get with it or move on. How long did it take a GPU to make 4k gaming above 60fps a possibility? How do you expect a GPU to push 16k without AI? It will take decades of process refinement. That's just not going to cut it when people already complain about the minor performance increase generation over generation.

If you don't like fake frames or fake resolution, then you'd better quit gaming now, because you will only see more and more of it in every generation from NVIDIA and AMD.
I don't know if you're really a fellow shipmate based on your avatar and name but if so well spoken!
BlueberriesI'm excited for the future of AI and I'm glad NVIDIA is developing the software to drive it.

As a market consumer we're still at a stop-gap though. If I care about high resolution raster I'm not interested in upscalers, and if I'm interested in high framerates I'm not interested in pathtracing. What's important is native performance and that's what they need to convince me is worth $2000.

I don't think they're doing themselves any favors with deceptive marketing slides comparing DLSS 4 to DLSS 3 either.
The $2,000 ask is not for gamers. It's for pros. The $1,000 ask is for the in-between. The ones below are for gamers. People need to get that through their skulls, and fast. There is a reason AMD is not competing with those: those are not gaming cards. The companies are telling us this. Yet PC gamers refuse to accept what the specs, the price, and the companies themselves keep saying. Because they are spoiled fucking shits.
N3utroI'd bet these people saying they don't like "fake frames" wouldn't be able to tell the AI-generated gameplay from the non-AI-generated gameplay at the same framerate in a blind test. They would probably even say the AI-generated gameplay looks better than the original at the same framerate.
Because it's just different ways of rendering the same image or painting the same picture. What matters is the output. Each method has flaws. But NVIDIA is an AI company, and just as rasterization was once new and had issues, this is now new and has issues. But the horse is out of the barn, the horse buggy is dead, and on we go. Everyone has complained at every big hop; that's nothing new.

I'm old enough to remember the huge fiasco over AA and vsync. That worked out in the end. Sucked at the start.
Posted on Reply
#116
Bagerklestyne
Od1sseasUpscaling is better than native vs previous AA technologies like older DLSS and especially TAA.


DLSS 4 looks better than native = Progress
DLSS FG double performance for the same latency (vs native with no DLSS SR and Reflex) = Progress
DLSS MFG Triples performance for the same latency = Progress.

This is “fake” progress only in the eyes of AMD fanboys or low IQ individuals
Nothing looks better than native; it can look as close to native as possible, but it can't look better. Can't. It's interpolated math. That's not progress (getting closer is). That's like saying AI-upscaling a picture makes it look better at the same resolution.

FG doubling performance for the same latency adds nothing. If you game at 60 FPS but it renders at 120 FPS, you're still only affecting the game via input at 60 FPS. Doubling the framerate makes it feel smoother, but it's not more responsive.

MFG = as above. If you game at 60 FPS but render at 240 FPS, you're still gaming (giving input to the game) at 60 FPS; everything else is an interpolated slideshow.

TL;DR - Upscaling helps increase framerates/resolution at the expense of real detail (calculated details).
Frame gen gives apparent smoothness and frames (at NO point during any 'generated' frame can your gaming input affect what the game renders).

If you press 'X' to doubt, you'll still have to wait for a real frame to be rendered (not generated) to doubt it.
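A rough timeline sketch of that point (all numbers hypothetical, assuming 60 rendered FPS with three generated frames inserted per rendered frame for a 240 FPS presentation; the extra buffering latency real frame generation adds is ignored for simplicity):

# When can a button press actually show up on screen?
rendered_fps = 60
generated_per_rendered = 3
rendered_interval_ms = 1000 / rendered_fps                                  # ~16.7 ms
display_interval_ms = rendered_interval_ms / (1 + generated_per_rendered)   # ~4.2 ms

press_ms = 5.0   # hypothetical: player presses 'X' 5 ms after a rendered frame

# The next displayed frame arrives quickly, but it is a generated one, built
# from frames produced before the press happened...
next_displayed_ms = (int(press_ms / display_interval_ms) + 1) * display_interval_ms
# ...only the next conventionally rendered frame can actually reflect the press.
next_rendered_ms = (int(press_ms / rendered_interval_ms) + 1) * rendered_interval_ms

print(f"Next displayed frame at {next_displayed_ms:.1f} ms, "
      f"next frame that can react at {next_rendered_ms:.1f} ms")
# -> ~8.3 ms vs ~16.7 ms: smoother presentation, same input-to-render cadence.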

Sledging people by calling them fanboys or low-IQ shows you're not sure of what you're saying, so you're throwing shade.

Go have a look at Hardware Unboxed and JayzTwoCents to get some clarification on this, especially regarding frame gen.

The reality is, it looks like the 5000 series is 20-30% faster (bar the 5090 being out of line compared to its predecessor); look at the gaming uplift slides (usually the first 1-2 columns showed games without frame gen).

This is the actual progress
Posted on Reply
#118
Dawora
Pepamamiok if go this road.
I think 5070 has unacceptable RAM size.


I wish it worked like that xd
Then u can buy 5070Ti
or 5080

But u are AMD user so why u even care?
LycanwolfenWOW more software AI gimics, Seem that everything now is 1080p scaled to 4k or 8k with AI. Guess they cannot make pure 4k or 8k rendering. Lame.

So cyber punk runs 27 FPS without software gimics um my 1080ti can run game better than that.
With RT maxed, your 1080 Ti runs it at 5 FPS.
Posted on Reply
#119
Capitan Harlock
FeelinFroggyAI super sampling is the future. Get with it or move on. How long did it take a GPU to make 4k gaming above 60fps a possibility? How do you expect a GPU to push 16k without AI? It will take decades of process refinement. That's just not going to cut it when people already complain about the minor performance increase generation over generation.

If you don't like fake frames or fake resolution, then you'd better quit gaming now, because you will only see more and more of it in every generation from NVIDIA and AMD.
Super sampling was meant to achieve better detail at lower resolutions on powerful enough cards, or when mixing detail options in games to come up with a good visual.
Frame gen is meant to be used if you can already get 60 FPS.
Using this tech to say "look, performance is great" is like building a sleeper car and saying "look, this Fiat 500 does 160 mph" without mentioning it's only the shell of a Fiat 500.
AI is a tool, but using it this way and calling it progress, when games are left unoptimized by lazy people who lack the experience and knowledge to optimize, is a giant cope.
Game engines get used with automatic tools because they give you an easy way to achieve an effect, giving the idea there is no optimization left to do, and we get blurriness and fake performance tricks to hide all those artifacts. So amazing :kookoo:
Another thing I'd like to point out is that people don't know what some settings and filter options actually do, or how they look at different resolutions, depending on the game's implementation.
At 4K you don't need fricking AA if the game uses good quality textures that fit the resolution and if the game is optimized properly to put the work on the GPU when necessary.
I have recently been playing Forza Horizon 4 at 4K with everything maxed out (motion blur off, no AA) on my 1080 Ti, and I get a locked 60 FPS with no drops. The game looks amazing, and I could even turn down some options, because the change between ultra and high at 4K is barely noticeable for some settings. Who said that at 4K you need to max out certain options?
Having the options doesn't mean you have to crank everything up and then say "look, it barely runs, so we need this tool to make it run properly", when it really depends on how the settings were made and how the game was optimized.
Their own showcase using Cyberpunk as an example, doing 30 FPS with no DLSS and 200-plus with multi-frame generation, is a mega gimmick, because at 4K Cyberpunk doesn't need AA and other options maxed out to look good.
It looks like they're being dumb on purpose, as if they don't know how some options work, and purposely push the idea of innovation while giving devs tools that let them do lazy jobs and then saying "look, we made progress" :kookoo:

RTX looks cool and all when it's properly and well implemented, but is it really necessary in every game title?

No, but they're pushing it like it's the fricking godsend, like they did back in the day with PhysX and 3D, when it was just a show-off feature.

If you clap your hands for all this AI use, only to see it used more by incompetent and lazy devs, you're the problem with how so many new games sell when they don't deserve the money they ask.

It's the same with the PS5 Pro trying so hard to justify its existence when it's the same BS.
Posted on Reply
#120
Frizz
What I do nowadays to configure games before I play them is turn settings on and off, then assess how many frames I gain or lose and how much difference it makes to me visually (I play at 4K). Ray tracing is left ON if there are enough frames to spare. DLSS is left on for more FPS if I don't see a difference in visual fidelity on my display, which is the case most of the time (on Quality/DLAA).

I would have preferred GPUs to continue pumping out frames through raw performance, but it seems AI is the way forward for technology as a whole right now.
Posted on Reply
#121
londiste
Capitan HarlockAt 4K you don't need fricking AA if the game uses good quality textures that fit the resolution and if the game is optimized properly to put the work on the GPU when necessary.
Yes you do. Textures and optimization have nothing to do with needing AA. Granted, if you are doing 2160p on a very small screen you might not see pixels but aliasing is still there.
Capitan HarlockRTX looks cool and all when it's properly and well implemented, but is it really necessary in every game title?
Yes, it is. Next question?

While both Nvidia and AMD keep showing "Performance"-level upscaling for big FPS numbers, there are also "Quality" options that carry a fairly small image quality penalty for a still roughly 25-30% performance boost.
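As a rough illustration of what those presets mean for the internal render resolution (the per-axis scale factors below are the commonly published DLSS/FSR defaults and should be treated as approximate):

# Internal render resolution at 4K output for typical upscaler quality presets.
presets = {
    "Quality":           2 / 3,    # per-axis render scale (approximate)
    "Balanced":          0.58,
    "Performance":       0.50,
    "Ultra Performance": 1 / 3,
}
target_w, target_h = 3840, 2160
for name, scale in presets.items():
    w, h = round(target_w * scale), round(target_h * scale)
    share = scale * scale
    print(f"{name:>17}: {w}x{h} rendered (~{share:.0%} of output pixels)")
# Quality mode shades roughly 44% of the output pixels; the real-world frame
# rate gain (the ~25-30% mentioned above) is smaller than that ratio suggests
# because not all GPU work scales with resolution.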
FrizzI would have preferred GPUs to continue pumping out frames through raw performance, but it seems AI is the way forward for technology as a whole right now.
Rapid progress on manufacturing processes has stalled. For a long time the performance boosts in new generations came from having double the transistors in the same area, usually at the same power consumption as well. This is no longer the case, and both Nvidia and AMD still have their new generation on a 5 nm-class process (4 nm is a variation of that). And it is not getting better either: 3 nm has been in mass production for about two years now, but it looks like it is not good (cheap) enough for big consumer CPUs yet.

edit:
I am very surprised at how big and vocal the mindset against anything new is these days. The entire progress of computer graphics has been driven by new stuff. Someone brings out a new feature, there are a bunch of uses or implementations, and if it is worth it, the feature becomes an industry standard. The enthusiast space used to be fairly excited about new, improved things throughout all of this, even if particular early implementations were often laughed at. Things that improved image quality often came with big performance hits. Things that improved performance often came with image quality hits. The question was always more about what degree of tradeoff was acceptable.

Yes, the image quality in general has (very) arguably become "good enough" but should that really be a reason to stop the progress?
Posted on Reply
#122
JustBenching
BagerklestyneNothing looks better than native; it can look as close to native as possible, but it can't look better. Can't.
That's just flat out wrong though. Have you seen image restoration for example?

londisteYes you do. Textures and optimization have nothing to do with needing AA. Granted, if you are doing 2160p on a very small screen you might not see pixels but aliasing is still there.
Yes, it is. Next question?
4k at 32", definitely needs AA.
Posted on Reply
#123
Capitan Harlock
londisteYes you do. Textures and optimization have nothing to do with needing AA. Granted, if you are doing 2160p on a very small screen you might not see pixels but aliasing is still there.
Yes, it is. Next question?

While both Nvidia and AMD keep showing "Performance"-level upscaling for big FPS numbers, there are also "Quality" options that carry a fairly small image quality penalty for a still roughly 25-30% performance boost.

Rapid progress on manufacturing processes has stalled. For a long time the performance boosts in new generations came from having double the transistors in the same area, usually at the same power consumption as well. This is no longer the case, and both Nvidia and AMD still have their new generation on a 5 nm-class process (4 nm is a variation of that). And it is not getting better either: 3 nm has been in mass production for about two years now, but it looks like it is not good (cheap) enough for big consumer CPUs yet.

edit:
I am very surprised at how big and vocal the mindset against anything new is these days. The entire progress of computer graphics has been driven by new stuff. Someone brings out a new feature, there are a bunch of uses or implementations, and if it is worth it, the feature becomes an industry standard. The enthusiast space used to be fairly excited about new, improved things throughout all of this, even if particular early implementations were often laughed at. Things that improved image quality often came with big performance hits. Things that improved performance often came with image quality hits. The question was always more about what degree of tradeoff was acceptable.

Yes, the image quality in general has (very) arguably become "good enough" but should that really be a reason to stop the progress?
Depending on the game, 4K does smooth edges on its own without any AA needed, so it's already not true that it is always necessary.

RTX is not always necessary, because it's just like PhysX used to be back in the day.
In some games the RTX implementation is just heavy and doesn't make much difference to lighting and shadows; it's only there to be annoying.
I'm not saying it shouldn't be used, but it shouldn't have to be a thing in every game.
Certain engines behave badly with certain features or uses.
Just look at the RE Engine used in open-world games, and you'll see that if they added RTX as an option it would be fricking bad, because the engine and the games have not been optimized properly for those scenarios.
In other games, with a different and properly optimized engine, RTX does what it's supposed to without killing frames, but making it always a thing is just NVIDIA marketing because they used NVIDIA tools to work on the game.
As I said, pushing all graphics options and then saying "look, performance is bad, use frame gen or multi-frame generation" is stupid.
What is wrong with wanting real performance for the price you pay, instead of calling gimmicks innovation because the industry is flooded with people who can't optimize engines and games?
Posted on Reply
#124
Pepamami
DaworaThen u can buy 5070Ti
or 5080

But u are AMD user so why u even care?
how can u tell? Because I don't want to praise Nvidia?
And because I think 12 GB is unacceptable for a $500++++ price tag?
You know, I am thinking of becoming one, actually.
Posted on Reply
#125
Bagerklestyne
JustBenchingThat's just flat out wrong though. Have you seen image restoration for example?



4k at 32", definitely needs AA.
Thing is, that's taking a suboptimal, time-degraded image and restoring it.

If you took a picture of the same subject today with a quality modern camera at the same detail level (pretend for argument's sake the original was 6000x4000 pixels, so a camera that uses the same resolution), then you couldn't improve upon the picture taken without interpolation, and then it's not the original image anymore.

Image restoration is the recreation of the best picture, or returning it to as close to the original as you can, which is exactly what frame generation/upscaling does: it tries to get as close to the original as it can.

The difference with image restoration is that the original image when first created (so at its best) might not be as good as what could be produced today.

Mathematically speaking though, if it was digital and the file wasn't corrupted, you couldn't restore it to be better without interpolating data.
Posted on Reply