
AMD Doesn't Believe in NVIDIA's DLSS, Stands for Open SMAA and TAA Solutions

I know this all too well lol.

Interesting topic: When does the AI program become so intelligent that it doesn't want to upscale your video games? :)

I would assume when it realizes that you could turn settings down to achieve the same goal without having to do all this extra processing?
 
I would assume when it realizes that you could turn settings down to achieve the same goal without having to do all this extra processing?
Ohhhhhh~

Edit: sorry, lol. Couldn't resist.
 
I disagree, honestly. AI technology as it is right now is dumb. It's little more than basic logic engines; there is no real intelligence, and there won't be for many years. It's in its infancy, but of course this is a good step and it's gotta start somewhere.
I wouldn't say totally.
Some time ago I attended a Warsaw Security Summit conference (hope Cucker will be thrilled, since it was held in Poland) on innovation with AI and deep learning for cameras, and since I was doing video surveillance systems, I went. Using AI and/or deep learning techniques, the camera was able to recognize human emotions. It could tell if somebody was sad or stressed, etc. Basically it would tell you if a dude is up to something, since his behavior and face would give it away, like somebody who's about to commit a crime. That's innovation, and it was shown and then analyzed with in-depth information and code. What do we have with DLSS here? People hear Deep Learning Super Sampling and assume it must be great since "deep learning" is mentioned. What bull crap on a barn floor those marketing shenanigans are.


Let's be honest, it's pretty hard to get better performance with an increase in image quality. I have not played a title with DLSS so I am not sure if I would notice the quality or not. Really depends on the pace of the game. Metro might be slow enough of a pace that it would be noticeable. Something like BFV multiplayer, maybe not so much.
You can take a look in the TPU review. You have a comparison there. Take a look. It's not only about the algorithm, but also its purpose, whether it's the right tool for this particular job (should deep learning be used for this?), and of course whether it's worth it.
I don't know your game preferences, but imagine the motion blur effect. Did you like it when you wanted to excel at a particular game? I didn't. I always wanted a nice, smooth, crisp image. With DLSS it seems like we are going backwards under the deep learning flag, with a promise and the motto "The way it's meant to be played" from now on :p. It's just sad.
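For reference, here is a minimal Python sketch of the idea being argued about: render at a lower internal resolution and reconstruct a native-resolution frame from it. A plain bilinear upscale stands in for the trained reconstruction network, so this is not NVIDIA's actual algorithm, and the test pattern and resolutions are made up; it only illustrates why fine detail can come out looking soft.
[CODE]
import numpy as np

def render_scene(width, height):
    # Hypothetical stand-in for the game's renderer: sample a fixed test pattern.
    v, u = np.mgrid[0:height, 0:width]
    u, v = u / width, v / height
    return 0.5 * (np.sin(u * 400.0) > 0) + 0.5 * v

def bilinear_upscale(img, new_h, new_w):
    # Naive bilinear upscale; DLSS instead uses a trained network to guess the lost detail.
    h, w = img.shape
    ys, xs = np.linspace(0, h - 1, new_h), np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

native = render_scene(3840, 2160)           # "ground truth" native-resolution render
low    = render_scene(1920, 1080)           # cheaper internal render
recon  = bilinear_upscale(low, 2160, 3840)  # reconstructed output at native size
print("mean abs error vs native render:", np.abs(native - recon).mean())
[/CODE]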
 
You can take a look in the TPU review. You have a comparison there. Take a look.

I have a comparison of a still image that can be scrutinized till the end of the world. What I don't have is a live-action comparison. I don't know if I would notice it. There are times when I am so caught up playing the game that I don't notice my giant, obvious radar telling me there is someone about to kill me right before I die.

TL;DR:

Still frames are much easier to compare and scrutinize than live action. Until I see it in person while I am playing, I will withhold final judgement. That said, I likely will never see it as I likely won't be buying a 2000 series card.
 
Two years from now it will be like G-Sync, another proprietary technology that went nowhere.
 
Two years from now it will be like G-Sync, another proprietary technology that went nowhere.
And what a waste of time, technology and money (well, maybe not money in particular, since a lot of people will still go for it :P ).
 
AMD is a spoiled little kid with some skills to show, but they are jealous of every achievement of other companies. They mostly suck at leading innovation, but heck, they can make some decent hardware at a low cost. But they contribute to global warming more than any other computer part company.
 
Well, regarding the thread title ... I have to ask: what can we expect any competitor in any market, for any product, to say in response to the announcement of a new feature, with advantages (real or imagined), that the competitor doesn't have access to?
 
This is kinda interesting for those who wanted to know something about Ray Tracing from AMD
 
I honestly don't know enough to form an opinion as to which of the competing technologies is superior, butttttt if those images are labeled correctly and they were produced and promoted by AMD as part of their argument in favor of TAA over DLSS... well, then that's just dumb (and I'm an AMD stockholder, lol, fml).
 
This is interesting. Wonder if you guys have seen it. It's about ray tracing and rasterization.
 
This is interesting. Wonder if you guys have seen it. It's about ray tracing and rasterization.

It's definitely an interesting subject, and it might be fine as an introduction, but it is glaringly obvious to me that the author's understanding of rendering is only skin deep, and he gets the deeper technical details wrong.

22:58
But hybrid rendering is a stopgap; Nvidia needs to take the hybrid approach due to a legacy of thousands of rasterized games. We see why that is; with Turing already being poorly received due to not being fast enough at rasterization, can you imagine what would have happened had they doubled or even quadrupled their RTX gigarays while actually lowering rasterization performance?
This was the only way Nvidia could do it.
AMD on the other hand, they're the type of company that would just throw it all out and start from scratch. And I believe we will see them go down a true raytracing or pathtracing route with the game consoles one day
Anyone who knows how GPUs and raytracing work understands that the new RT cores are only doing a part of the rendering. GPUs are in fact a collection of various specialized hardware blocks (geometry processing, tessellation, TMUs, ROPs, video encoding/decoding, etc.) and clusters of ALUs/FPUs for generic math. Even in a fully raytraced scene, 98% of this hardware will still be used. The RT cores are just another type of specialized accelerator for one specific task. And it's not like everything can or will be raytraced; UI elements in a game, a page in your web browser or your Windows desktop are rasterized because it's efficient, and that will not change.
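To make that concrete, here is a toy CPU-side sketch in Python/NumPy of what a hybrid frame does, with a made-up scene (one ground plane, one sphere occluder, one light). Only step 2, the ray-sphere visibility test, corresponds to the work the RT cores accelerate; everything around it is the same geometry, shading and blending work the rest of the GPU already handles. This illustrates the stages, not how a real GPU or DXR implements them.
[CODE]
import numpy as np

W, H = 320, 180
LIGHT = np.array([5.0, 8.0, -3.0])
SPHERE_C, SPHERE_R = np.array([0.0, 1.0, 0.0]), 1.0

# 1) "Rasterization": build a G-buffer of world positions and normals for a ground
#    plane. On real hardware this uses geometry units, shader ALUs, TMUs and ROPs.
u, v = np.meshgrid(np.linspace(-4, 4, W), np.linspace(0.5, 8, H))
positions = np.stack([u, np.zeros_like(u), v], axis=-1)    # points on the y = 0 plane
normals = np.tile(np.array([0.0, 1.0, 0.0]), (H, W, 1))

# 2) "RT cores": cast one shadow ray per pixel from the G-buffer position toward the
#    light and test it against the sphere. Only this intersection test is the new,
#    specialized work; the surrounding vector math is ordinary ALU work.
to_light = LIGHT - positions
dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
dir_l = to_light / dist
oc = positions - SPHERE_C
b = np.sum(oc * dir_l, axis=-1)
c = np.sum(oc * oc, axis=-1) - SPHERE_R ** 2
disc = b * b - c
t = -b - np.sqrt(np.maximum(disc, 0.0))
in_shadow = (disc > 0.0) & (t > 1e-4) & (t < dist[..., 0])

# 3) "Shading": plain Lambert lighting, again ordinary ALU work, just masked by the
#    ray-traced visibility term.
ndotl = np.clip(np.sum(normals * dir_l, axis=-1), 0.0, 1.0)
image = ndotl * np.where(in_shadow, 0.1, 1.0)
print("shadowed pixels:", int(in_shadow.sum()), "of", W * H, "| mean brightness:", image.mean())
[/CODE]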
 
DLSS in FF, Metro and BFV looks like muddy garbage at this point, which is further compounded by the fact that there isn't enough performance to push over 90 FPS with it on.

I'm waiting to see what the next few updates do, depending on game I might turn DLSS on, but at this point, RT is not worth the smearing that DLSS is.
 
It is okay for Nvidia to keep experimenting with gauges when true to form, but higher engagement and lower results aren't their forte. It has to be a winning solution to keep the milk flowing. For the longest time green spammers had a hunch for texture fidelity, and this is an optimisation towards that. They keep budgeting higher transistor counts, yet I suppose games are forever going to look like textureless, bland blobs.
 
Two years from now it will be like G-Sync, another proprietary technology that went nowhere.

Heh, and yet G-Sync spawned FreeSync™.
 
I'm sure Nvidia wouldn't leave it at that, unless it was the other way around...

Well, it wasn't the other way round. Maybe the question should be: would we have FreeSync if Nvidia hadn't...
 
Let me go 2 years into the future. One second please..... Wow, Navi surprised everybody with its amazing performance, power efficiency, cost, and AMD's version of this DLSS that actually makes the PQ look fantastic, all while increasing FPS. :p
 

AMD catching up to Nvidia in one jump? We'll just have to see about that. Navi will compete with the successor of Turing for most of its life, so it would have to offer over twice the efficiency of Vega, which will be no small feat.

BTW, many thought Vega was going to be a Pascal killer too…
 
It's definitely an interesting subject, and it might be fine as an introduction, but it is glaringly obvious to me that the author's understanding of rendering is only skin deep, and he gets the deeper technical details wrong.

22:58

Anyone who knows how GPUs and raytracing work understands that the new RT cores are only doing a part of the rendering. GPUs are in fact a collection of various specialized hardware blocks (geometry processing, tessellation, TMUs, ROPs, video encoding/decoding, etc.) and clusters of ALUs/FPUs for generic math. Even in a fully raytraced scene, 98% of this hardware will still be used. The RT cores are just another type of specialized accelerator for one specific task. And it's not like everything can or will be raytraced; UI elements in a game, a page in your web browser or your Windows desktop are rasterized because it's efficient, and that will not change.
So what do you think is wrong with what that dude said? Because I'm sure it is right; I can't see your point here. We are talking about games, not web browsers. The UI isn't ray traced, but that's stating the obvious. The ray tracing is for illumination and shadows (reflections as well, like fire in BFV on the gun or in the water) to get more realism in the graphics. It is tied to the light sources in the game, so depending on the scene it may not ray trace everything, but a huge chunk of the image will be ray traced.
It shows how ray tracing eats the resources, and we still have a limitation with the hardware currently available on the market. I think this video shows what is needed and gives an example of other ways to achieve a ray traced scene in games, now or in the future. We will have to see where and how it will be done and how it will work.
 
So what do you think is wrong with what that dude said? Because I'm sure it is right; I can't see your point here.
His mistake is thinking the hardware resources used in rasterized rendering are not used during raytracing, when everything except a tiny part is. Adding raytracing capabilities doesn't lower rasterization capabilities. Please read the parts I bolded again and you'll see.
 
His mistake is thinking the hardware resources used in rasterized rendering are not used during raytracing, when everything except a tiny part is. Adding raytracing capabilities doesn't lower rasterization capabilities. Please read the parts I bolded again and you'll see.
If the RT cores are doing the ray tracing, and that's how it has been shown, then there are fewer cores doing the rasterization, so in fact he got it right. That's what I understood from his video, and I think that was the main idea of it. Adding to my premise: after rasterization is complete, the ray tracing is processed. That also creates lag (more time needed to complete the frame), since these can't work at the same time; you need the objects already there before you can ray trace them. We can see that in BFV when you switch on ray tracing. It barely caps at 60 FPS.
 
If the RT cores are doing the ray tracing, and that's how it has been shown, then there are fewer cores doing the rasterization, so in fact he got it right. That's what I understood from his video, and I think that was the main idea of it. Adding to my premise: after rasterization is complete, the ray tracing is processed. That also creates lag (more time needed to complete the frame), since these can't work at the same time; you need the objects already there before you can ray trace them. We can see that in BFV when you switch on ray tracing. It barely caps at 60 FPS.
What do you mean there are fewer cores doing the rasterization? That without RT cores they could fit some more shaders onto the chip, or something else?
RT is processed concurrently with rasterization work. There are some prerequisites, generally the G-buffer, but largely it happens at the same time.
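A rough scheduling sketch of that point, with made-up stage timings and Python threads standing in for the GPU's graphics/compute queues (a real engine overlaps this work via async compute and the hardware scheduler, not threads):
[CODE]
import time
from concurrent.futures import ThreadPoolExecutor

def stage(name, seconds):
    time.sleep(seconds)  # pretend this is GPU work taking `seconds` to finish
    return name

start = time.perf_counter()

# Prerequisite: the G-buffer has to exist before rays can be traced from it.
stage("g-buffer pass", 0.004)

# After that, shading the rasterized output and tracing shadow/reflection rays can
# proceed concurrently instead of strictly one after the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    raster = pool.submit(stage, "raster shading", 0.006)
    rays   = pool.submit(stage, "ray-traced shadows/reflections", 0.005)
    raster.result(); rays.result()

stage("composite + post-processing", 0.002)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"frame time: {elapsed_ms:.1f} ms  (roughly 4 + max(6, 5) + 2, not 4 + 6 + 5 + 2)")
[/CODE]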
 