
AMD Radeon RX 6700 XT: All You Need to Know

Joined
Apr 16, 2019
Messages
632 (0.31/day)
The fact that AMD's claims pit this against the RTX 3070 is astonishing. That basically means that, in raw power, everything AMD has to offer beats Nvidia at better prices, all the way up until the RTX 3080.

So once the RX 6700 drops, it stands to reason it would be faster than the RTX 3060 Ti.

This is a very interesting generation for PC gamers.
They claim a lot of things; the reality, on the other hand, is usually (actually almost always) a "bit" different:

The RX 6800 is barely faster than the RTX 3070 (yes, that's at 1080p, but given how graphically demanding recent titles are, that will be the resolution best suited to these cards in the longer run), so it stands to reason that the 6700 XT will struggle to compete with the 3060 Ti. In normal times, this card (considering also its lack of features vs the 3000 series) would be worth $350 at most...
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.66/day)
Location
Ex-usa | slava the trolls
^^^ The graph that you present is misleading - it shows the 6800 severely bottlenecked by something.
In real conditions, the 6800 is around 10-15% faster than the RTX 3070.

[attached image: relative performance chart]
 
Joined
Apr 16, 2019
Messages
632 (0.31/day)
You say 10-15% and you show 9? :D And that's at 4K, which will certainly be out of reach for the 6700 XT (in newer titles at decent settings)
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.66/day)
Location
Ex-usa | slava the trolls
You say 10-15% and you show 9? :D

Yes, older drivers, lack of SAM support, Core i-something instead of a Ryzen platform, bugs in Nvidia's control panel - lower settings, etc.
 

Deleted member 205776

Guest
Spoken like a true team red fanboy indeed! :rolleyes:
Imagine being a fanboy of either company. Neither company cares about you, only about your wallet. Just stop this childish mindset. If AMD cards ever have the feature set I need, I'm definitely switching to try them out.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.66/day)
Location
Ex-usa | slava the trolls
Imagine being a fanboy of either company. Neither company cares about you, only about your wallet. Just stop this childish mindset. If AMD cards ever have the feature set I need, I'm definitely switching to try them out.

Sometimes they don't even care about your wallet, because they think money grows on trees.

What features do you want from AMD? Radeon has generally been the more feature-rich product line, historically.
 
Joined
Feb 20, 2019
Messages
8,260 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Can vouch for undervolting. My 3070 Gaming X Trio drew 230-250W at stock: 1075mv, 1965-1980 MHz, 63-65C.

Undervolted to 2010 MHz @ 900mv, stable. Draws 140-180W and temps remain under 60C, which is insane (case fans don't even need to ramp up past 1000 rpm, so it's a very quiet system while gaming, which is a first for me). Stable in all 3DMark tests, with a steady 2010 MHz frequency and even 2025 MHz sometimes.

I was very surprised to see how well these Ampere cards undervolt. Or maybe I just got lucky... or MSI did some black magic.




Stock: [attached screenshot]

UV: [attached screenshot]
Looks solid.

In my experience, Navi10 undervolts better than Turing, but that's to be expected really, as TSMC's 7FF is better than the older 12nm process Turing was built on.

Samsung 8nm looks comparable to Navi10 based on your single post, and I'm assuming that Navi22 will undervolt in a very similar fashion to Navi10, being the same process and all.

The idea of a 6700 XT or 3060 running at sub-100W is very appealing to me, and looking at eBay prices I can likely make a reasonable profit selling my 5700 XT on, if I can find a 6700 XT or 3060 to play with.
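As a rough sanity check on the numbers quoted above (a back-of-the-envelope sketch only: dynamic power scales roughly with frequency times voltage squared, and this ignores leakage, VRAM and board power):

```python
# Ballpark dynamic-power estimate, P ~ f * V^2. Ignores static/leakage power,
# memory and board overhead, so treat the result as a rough guide only.
stock_mhz, stock_mv = 1970, 1075   # ~1965-1980 MHz @ 1075 mV (stock, from the post above)
uv_mhz, uv_mv = 2010, 900          # the undervolted settings from the post above

ratio = (uv_mhz / stock_mhz) * (uv_mv / stock_mv) ** 2
print(f"predicted GPU power ratio: {ratio:.2f}")   # ~0.72, i.e. roughly 28% less
# Observed: ~230-250 W stock vs ~140-180 W undervolted -- an even bigger drop,
# plausibly because lower temperatures cut leakage and the card stops chasing
# the last few boost bins.
```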

"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and having lower temps.
See, I'd be running a battery of OCCT tests to work out the minimum stable voltage for each clock, and then trying to work out where diminishing returns kick in for voltage/clocks.

It's not an idea that appeals to a lot of people, but I suspect somewhere between 1500-1800MHz is the sweet spot with the highest performance/Watt. So yes, I'd happily slow down the card if it has large benefits in power draw. If I ever need more performance I'll just buy a more expensive card, I mean, contact my AMD/Nvidia account managers and try to bypass the retail chain in a desperate last-ditch effort to obtain a card with a wider pipeline and more CUs/cores.
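In sketch form, that test battery would look something like this (set_curve_point and run_stress_test are hypothetical placeholders -- in reality you drag points in MSI Afterburner and run OCCT by hand):

```python
# Sketch of the methodology: for each clock target, search down for the
# lowest voltage that survives a stress run, then compare efficiency.
# set_curve_point() and run_stress_test() are hypothetical placeholders.

def min_stable_voltage(clock_mhz, v_low=750, v_high=1100, step=25):
    """Binary-search the lowest voltage (mV) that still passes a stress run."""
    while v_high - v_low > step:
        v_mid = (v_low + v_high) // 2
        set_curve_point(clock_mhz, v_mid)   # hypothetical: apply the V/F point
        if run_stress_test(minutes=10):     # hypothetical: True if no crash/artifacts
            v_high = v_mid                  # stable -> try a lower voltage
        else:
            v_low = v_mid                   # unstable -> needs more voltage
    return v_high

for clock in range(1500, 2101, 100):
    v = min_stable_voltage(clock)
    # Under the P ~ f*V^2 approximation, perf/W scales as 1/V^2, so the sweet
    # spot sits just before the required voltage starts climbing steeply.
    print(f"{clock} MHz -> ~{v} mV, relative perf/W ~ {1e6 / v**2:.2f}")
```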
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Perhaps I've misunderstood the sequence of quotes that led to you saying "I wonder what you say about 3070", so again I'll ask: I'm not sure of your point; are you just genuinely curious what they think of a 3070?
It's a chip with more VRAM than the 3070, with performance roughly in the ballpark, and with claimed TDP roughly in the ballpark.
So if the 6700 is bad, I was wondering how you rate the 3070.

RDNA2 silicon is simply less powerful at RT operations
That's baseless speculation.
People take stuff like Quake II RTX, don't get that 90% of that perf comes from quirks nested in quirks nested in quirks, optimized for a single vendor's SHADERS, and draw funny conclusions.

One of the ray-intersection issues (which hasn't allowed its performance to improve drastically) is that you need to randomly access large memory structures. Guess who has an edge at that...
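To illustrate the access pattern in question, here's a toy BVH-traversal sketch (my own illustration, not any vendor's code). Every iteration chases an index into a large node array, which is exactly the cache-hostile random access being described:

```python
# Toy BVH traversal: each step pops an index and jumps to an arbitrary node
# in a big array -- random access into large memory structures, which is why
# cache behaviour (hello, Infinity Cache) matters so much here.
def traverse(nodes, ray, hit_test):
    stack, candidates = [0], []            # start from the root node
    while stack:
        node = nodes[stack.pop()]          # effectively random memory access
        if not hit_test(ray, node["bbox"]):
            continue                       # the ray misses this whole subtree
        if node["leaf"]:
            candidates.append(node["prim"])    # primitive for exact intersection
        else:
            stack.extend(node["children"])     # descend into the child nodes
    return candidates
```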

it's not even close.
Uh oh, doh.

Let me try again: there is NO such gap in ACTUAL hardware RT perf, definitely not in NV's favor; perf is all over the place.

[attached image: hardware RT benchmark results]



And if you wonder "but why is it faster in GREEN SPONSORED games then": because only a fraction of what happens in games for ray tracing is ray intersection.

Make sure to check the "Random Thoughts" section on GitHub; it's quite telling.

Random Thoughts

  • I suspect the RTX 2000 series RT cores implement ray-AABB collision detection using reduced float precision. Early in the development, when trying to get the sphere procedural rendering to work, reporting an intersection every time the rint shader is invoked allowed me to visualise the AABB of each procedural instance. The rendering of the bounding volume had many artifacts around the boxes' edges, typical of reduced precision (see the sketch after this list).
  • When I upgraded the drivers to 430.86, performance significantly improved (+50%). This was around the same time Quake II RTX was released by NVIDIA. Coincidence?
  • When looking at the benchmark results of an RTX 2070 and an RTX 2080 Ti, the performance differences are mostly in line with the number of CUDA cores and RT cores, rather than being influenced by other metrics. Although I do not know at this point whether the CUDA cores or the RT cores are the main bottleneck.
  • UPDATE 2021-01-07: the RTX 30xx results seem to imply that performance is mostly dictated by the number of RT cores. Compared to Turing, Ampere achieves 2x RT performance only when using ray-triangle intersection (as expected per the NVIDIA Ampere whitepaper); otherwise performance per RT core is the same. This leads to situations such as an RTX 2080 Ti being faster than an RTX 3080 when using procedural geometry.
  • UPDATE 2021-01-31: the 6900 XT results show the RDNA 2 architecture performing surprisingly well in procedural geometry scenes. Is it because the RDNA2 BVH-ray intersections are done using the generic computing units (and there are plenty of those), whereas Ampere is bottlenecked by its small number of RT cores in these simple scenes? Or is RDNA2 Infinity Cache really shining here? The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.
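For reference, the ray-AABB test the first bullet refers to is the classic slab method. Here is a minimal NumPy sketch of it (my own illustration, not the repo's code), with the float precision exposed as a knob:

```python
import numpy as np

def ray_aabb_hit(origin, inv_dir, box_min, box_max, dtype=np.float32):
    """Classic slab test: does the ray hit the axis-aligned box?"""
    o = np.asarray(origin, dtype)
    inv = np.asarray(inv_dir, dtype)
    t1 = (np.asarray(box_min, dtype) - o) * inv
    t2 = (np.asarray(box_max, dtype) - o) * inv
    t_near = np.max(np.minimum(t1, t2))   # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across the three slabs
    return bool(t_near <= t_far and t_far >= 0)

# A ray grazing a box edge is exactly the borderline case where full and
# reduced precision can start to disagree (reduced precision emulated here
# by passing float16), which would render as artifacts along box edges.
origin, direction = [0.0, 0.0, -2.0], [0.5, 0.5, 1.0]
inv_dir = [1.0 / d for d in direction]
print(ray_aabb_hit(origin, inv_dir, [-1, -1, -1], [1, 1, 1], np.float32))
print(ray_aabb_hit(origin, inv_dir, [-1, -1, -1], [1, 1, 1], np.float16))
```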

DLSS into the mix
Sorry, I cannot seriously discuss "but if I downscale and slap TAA anti-aliasing on it, can I pretend I did not downscale".
No, you can't. Or wait, you can. Whatever you fancy.
It's just, I won't.
 

Deleted member 205776

Guest
Sometimes they don't even care about your wallet, because they think money grows on trees.

What features do you want from AMD? Radeon has generally been the more feature-rich product line, historically.
Idk, actual OpenGL support so my Minecraft shaders don't run at 2 FPS, an encoder as good as NVENC, good drivers. Main things.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
13,006 (2.50/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | Asus 24" IPS (portrait mode)
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Corsair K65 Plus 75% Wireless - USB Mode
Software Windows 11 Pro 64-Bit
"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and having lower temps.
I'm going to try this once I get a 3080. It'll have a waterblock on it too.

Did you remove some of the points from the curve? My 1070 has a ton, and I'd hate to have to set each one to the same freq, hah.
 

Deleted member 205776

Guest
I'm going to try this once I get a 3080. It'll have a waterblock on it too.

Did you remove some of the points from the curve? My 1070 has a ton, and I'd hate to have to set each one to the same freq, hah.
Nope, just adjusted them. You can shift-click and move a ton of squares at once; that's how I did it.

Here's an update.



2040-2055 MHz stable @ 925mv (compared to stock 1965-1980 @ 1075mv). Max power draw 190W. Max temp 61C on air. The 66C max temp reported in the pic is from periodically going back to stock settings -- so yes, there is a 5-degree temp decrease and a solid MHz increase.

Fully stable.

Undervolt your Ampere cards, people.

Also, we are getting a "bit" off topic; we should end this convo here or make a new thread lol.
 
Joined
Oct 15, 2010
Messages
951 (0.18/day)
System Name Little Boy / New Guy
Processor AMD Ryzen 9 5900X / Intel Core I5 10400F
Motherboard Asrock X470 Taichi Ultimate / Asus H410M Prime
Cooling ARCTIC Liquid Freezer II 280 A-RGB / ARCTIC Freezer 34 eSports DUO
Memory TeamGroup Zeus 2x16GB 3200Mhz CL16 / Teamgroup 1x16GB 3000Mhz CL18
Video Card(s) Asrock Phantom RX 6800 XT 16GB / Asus RTX 3060 Ti 8GB DUAL Mini V2
Storage Patriot Viper VPN100 Nvme 1TB / OCZ Vertex 4 256GB Sata / Ultrastar 2TB / IronWolf 4TB / WD Red 8TB
Display(s) Compumax MF32C 144Hz QHD / ViewSonic OMNI 27 144Hz QHD
Case Phanteks Eclipse P400A / Montech X3 Mesh
Power Supply Aresgame 850W 80+ Gold / Aerocool 850W Plus bronze
Mouse Gigabyte Force M7 Thor
Keyboard Gigabyte Aivia K8100
Software Windows 10 Pro 64 Bits
They claim a lot of things; the reality, on the other hand, is usually (actually almost always) a "bit" different:

The RX 6800 is barely faster than the RTX 3070 (yes, that's at 1080p, but given how graphically demanding recent titles are, that will be the resolution best suited to these cards in the longer run), so it stands to reason that the 6700 XT will struggle to compete with the 3060 Ti. In normal times, this card (considering also its lack of features vs the 3000 series) would be worth $350 at most...

Go back and look at the 2560x1440 table and you'll see a better representation.
 
Joined
Jan 25, 2011
Messages
531 (0.11/day)
Location
Inside a mini ITX
System Name ITX Desktop
Processor Core i7 9700K
Motherboard Gigabyte Aorus Pro WiFi Z390
Cooling Arctic esports 34 duo.
Memory Corsair Vengeance LPX 16GB 3000MHz
Video Card(s) Gigabyte GeForce RTX 2070 Gaming OC White PRO
Storage Samsung 970 EVO Plus | Intel SSD 660p
Case NZXT H200
Power Supply Corsair CX Series 750 Watt
CU count is not really relevant here, as the Xbox Series X GPU is clocked so low (1.8 GHz). There's a reason the PS5 performs better in nearly every multiplatform game comparison despite having 36 CUs.

I've included the clock speeds in my calculation:
Doing the quick math: (52 CU / 40 CU) * (1.825 GHz / 2.424 GHz) ≈ 0.98. The performance is similar to an Xbox Series X, which (as an entire system) costs almost the same. What a time to be a PC gamer /s
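For anyone who wants to check that arithmetic, the same estimate spelled out:

```python
# Raw shader-throughput scaling: CU count x clock speed.
xsx_cu, xsx_ghz = 52, 1.825    # Xbox Series X GPU: 52 CUs at a locked 1.825 GHz
n22_cu, n22_ghz = 40, 2.424    # RX 6700 XT: 40 CUs at its rated boost clock

ratio = (xsx_cu / n22_cu) * (xsx_ghz / n22_ghz)
print(f"Series X / 6700 XT raw compute ratio: {ratio:.2f}")   # ~0.98
```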
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,168 (1.27/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Uh oh, doh.
Doh indeed!
The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.
I don't wonder why it's faster in green sponsored games, I wonder why it's more often faster in vendor-agnostic tests and even in AMD sponsored games, in the sense that enabling RT adds a higher millisecond rendering-time penalty to the output image on the AMD card.
Whatever you fancy.
I have no such reservations about how the magic pixels are rendered when the output image is virtually indistinguishable in motion and it comes with a healthy FPS boost. Quoting your own head-in-the-sand opinion in bold was a nice touch, though. It almost made me reconsider.

I'd say it was an interesting experience, but I've looked through the rose-coloured glasses before and I prefer to see the entire spectrum.

And with that, the ignore button strikes again!
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
I wonder why it's more often faster in vendor-agnostic tests
You were presented with results of vendor-agnostic tests, along with source code and curious comments on major performance bumps.

even in AMD sponsored games
1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment

when the output image is virtually indistinguishable in motion
Ah. In motion that is. And from sufficient distance, I bet.
That's ok then. As I recall DLSS took this:

[attached image: native rendering screenshot]


and turned it into this:

[attached image: DLSS rendering screenshot]


all while the reviewer kept repeating the "better than native" mantra.

But one had to see that in motion, I'll remember that. Thanks!
 
Joined
Feb 3, 2017
Messages
3,746 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,168 (1.27/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.
Indeed, and it clearly demonstrates the penalty, where the AMD GPU pays a higher price to enable the RT effect, in an AMD-sponsored title.

Fantastic channel, too. They do a great job on virtually all content: they do the lengthy investigation, present the findings in full, showing you the good, the bad, and the nuance, and then, on balance of it all, make informed conclusions and recommendations.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.

That's the sad bit.
It's the best analysis of the RT subject that I've seen on YouTube.
And it's still filled with pathetic shilling.

Yet, even without reading between the lines, you should have figured this:

Apples to apples, eh:

Typically, in any RT scenario, there are four steps.
1) To begin with, the scene is prepared on the GPU, filled with all of the objects that can potentially affect ray tracing.
2) In the second step, rays are shot out into that scene, traversing it and being tested to see if they hit objects.
3) Then there's the next step, where the results from step two are shaded - like the colour of a reflection or whether a pixel is in or out of shadow.
4) The final step is denoising. You see, the GPU can't send out unlimited amounts of rays to be traced - only a finite amount can be traced, so the end result looks quite noisy. Denoising smooths out the image, producing the final effect.


So, there are numerous factors at play in dealing with RT performance. Of the four steps, only the second one is hardware accelerated - and the actual implementation between AMD and Nvidia is different...

...Meanwhile, PlayStation 5's Spider-Man: Miles Morales demonstrates that Radeon ray tracing can produce some impressive results on more challenging effects - and that's using a GPU that's significantly less powerful than the 6800 XT....


So, uh, oh, doh, you were saying?
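To put the quoted breakdown in schematic form (the millisecond figures below are invented purely for illustration; the point is which step runs on dedicated hardware):

```python
# The four RT steps quoted above as a toy cost breakdown. Only step 2 runs
# on dedicated RT hardware; the rest is ordinary compute/shader work, which
# is where vendor-optimized code lives.
frame_ms = {
    "1. build/update the scene (BVH)": 0.8,  # general compute + driver work
    "2. ray traversal/intersection": 1.5,    # the ONLY hardware-accelerated step
    "3. shade the hits": 2.0,                # ordinary shader code
    "4. denoise": 1.2,                       # ordinary shader code, often vendor-supplied
}
total = sum(frame_ms.values())
hw = frame_ms["2. ray traversal/intersection"]
print(f"RT effect cost: {total:.1f} ms, of which only {hw / total:.0%} "
      "is the hardware-accelerated intersection step")
```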
 
Joined
Feb 3, 2017
Messages
3,746 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Why are you leaning that heavily on different actual implementation? DXR is a standard thing; if the implementation is different, I would expect the manufacturer to know what they are doing and what they are aiming for.

But yes, the second step is the hardware-accelerated one, and their measurements give a pretty good indication that Nvidia's RT hardware is more powerful at this point (probably simply by having more units). This is evidenced by where performance falls off as the amount of rays increases: both fall off, but at different points.
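As a toy model of that falloff (all numbers invented, only the shape matters): the effect cost stays roughly flat while the RT units keep up, then grows once they saturate, and more RT hardware pushes the saturation point out:

```python
# Toy saturation model: below the saturation point the RT units absorb the
# work; beyond it, every extra million rays costs linear time.
def effect_ms(rays_m, fixed_ms, sat_rays_m, per_ray_ms):
    over = max(0.0, rays_m - sat_rays_m)   # rays beyond what the hardware absorbs
    return fixed_ms + over * per_ray_ms

for rays in (1, 2, 4, 8, 16):              # millions of rays per frame
    a = effect_ms(rays, 1.0, sat_rays_m=4, per_ray_ms=0.5)   # "fewer RT units"
    b = effect_ms(rays, 1.0, sat_rays_m=8, per_ray_ms=0.5)   # "more RT units"
    print(f"{rays:>2}M rays: A = {a:.1f} ms, B = {b:.1f} ms")
```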

Miles Morales on PS5 is heavily optimized, using the same performance-improvement methods that RT effects use on PC, mostly to a higher degree. Also, clever design. The same Digital Foundry has a pretty good article/video on how that is achieved: https://www.eurogamer.net/articles/digitalfoundry-2020-console-ray-tracing-in-marvels-spider-man
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Why are you leaning that heavily on different actual implementation? DXR is a standard thing; if the implementation is different
You have missed the point: of all the things that need to happen for RT to end up as an image, only one bit is hardware accelerated.

Why are you leaning that heavily on different actual implementation?
There is another side of the implementation:
For example, Quake 2 RTX and Watch Dogs Legion use a denoiser built by Nvidia and while it won't have been designed to run poorly on AMD hardware (which Nvidia would not have had access to when they coded it), it's certainly designed to run as well as possible on RTX cards.

A comparison of actual hardware RT perf benchmarks was linked in #85 here. There is no need to run around and "guess" things; they are right there, on the surface.

The:

the RTX 3080 could render the effect in nearly half the time in Metro Exodus, or even a third of the time in Quake 2 RTX, yet increasing the amount of rays after this saw the RTX 3080 having less of an advantage.

could mean many things. This part is hilarious:

In general, from these tests it looks like the simpler the ray tracing is, the more similar the rendering times for the effect are between the competing architectures. The Nvidia card is undoubtedly more capable across the entire RT pipeline

Remember which part of ray tracing is hardware accelerated? Which "RT pipeline", cough? Vendor-optimized shader code?
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,168 (1.27/day)
System Name MightyX
Processor Ryzen 5800X3D
Motherboard Gigabyte X570 I Aorus Pro WiFi
Cooling Scythe Fuma 2
Memory 32GB DDR4 3600 CL16
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RBG Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Artic P12's
Benchmark Scores 4k120 OLED Gsync bliss
Against my better judgment, I've viewed the ignored content. Here we go again...

Ah. In motion that is. And from sufficient distance, I bet.
That's ok then.
I never said from a distance, your words. And no, it looks fantastic close-up, too.

Yeah in motion, of course in motion. I tend to play games at something in the order of 60-144fps, not sitting and nitpicking stills, but for argument's sake, I'll do that too. If we're going to cherry-pick some native vs DLSS shots, I can easily do the same and show the side of the coin that you conveniently haven't.

Native left, DLSS Quality right

[attached images: three native vs DLSS Quality comparison screenshots]


And the real kicker after viewing what is, at worst, comparable quality where each rendering has strengths and weaknesses, and at best, higher overall quality...

[attached image: framerate comparison]


But you appear to have made up your mind: you don't like it, you won't "settle" for it. Fine, suit yourself; nobody will make you buy an RTX card, play a supported game and turn it on. Cherry-picking examples to try and show how 'bad' it is doesn't make you come across as smart, and it certainly doesn't make you right; you could have at least chosen a game with a notoriously 'meh' implementation. Not to mention the attitude, yikes.

I can't convince you, and you can't convince me, so where to from here? Ignore each other?
 
Joined
Oct 22, 2014
Messages
14,082 (3.82/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
That person's hair actually looks better in native than in DLSS, as it appears softer and cleaner as opposed to coarse and oily.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Yeah in motion, of course in motion.
I'm not buying this lie, I'm sorry.

It's not about "in motion" at all. What you present is the "best case" for any anti-aliasing method that adds blur, TAA in particular.
There is barely any crisp texture (faces, eh?) on which to notice the added blur.
It is heavily loaded with stuff that benefits a lot from anti-aliasing (hair, long grass, eyebrows).

But if you dare bring in actual, real stuff from the very game in your list, Death Stranding, DLSS takes this:



and turns it into this:



No shockers here; all TAA derivatives exhibit it.

NV's TAA(U) derivative adds blur to... the entire screen if you move your mouse quickly. Among other things.

It's a shame Ars Technica was the only site that dared to point it out.


[attached images: Ars Technica motion-blur comparison screenshots]
 