
AMD Radeon RX 6700 XT: All You Need to Know

The fact that AMD's claims pit this against the RTX 3070 is astonishing. That basically means that, in raw power, everything AMD has to offer beats Nvidia at better prices, all the way up until the RTX 3080.

So once the RX 6700 drops, it stands to reason it would be faster than the RTX 3060 Ti.

This is a very interesting generation for PC gamers.
They claim a lot of things, the reality on the other hand is usually (actually almost always) a "bit" different:
[Chart: relative GPU performance at 1920x1080]

The RX 6800 is barely faster than the RTX 3070 (yes, that's at 1080p, but given the extremely demanding nature of recent new titles, that will be the resolution best suited to these cards in the longer run), so it stands to reason that the 6700XT will struggle to compete with the 3060Ti. In normal times, this card (considering its additional lack of features vs the 3000 series) would be worth $350 at most...
 
^^^ The graph that you present is misleading - it shows the 6800 severely bottlenecked by something.
In real conditions, the 6800 is around 10-15% faster than RTX 3070.

[Chart: relative performance at 4K]
 
You say 10-15% and you show 9%? :D And that's at 4K, which will certainly be out of reach for the 6700XT (in newer titles at decent settings).
 
Spoken like a true team red fanboy indeed! :rolleyes:
Imagine being a fanboy of either company. Neither company cares about you, only about your wallet. Just stop this childish mindset. If AMD cards ever have the feature set I need, I'm definitely switching to try them out.
 

Sometimes they don't even care about your wallet, because they think God grows money on trees.

What features do you need from AMD? Radeon is the more feature-rich product line, in general and historically.
 
Can vouch for undervolting. My 3070 Gaming X Trio drew 230-250W on stock at 1075mv, 1965-1980 MHz, 63-65C.

Undervolted to 2010 MHz @ 900mv stable. Draws 140-180W and temps remain under 60C which is insane (case fans don't even need to ramp up past 1000 rpm so very quiet system while gaming, which is a first for me). Stable in all 3DMark tests, steady 2010 MHz frequency and even 2025 MHz sometimes.

I was very surprised to see how well these Ampere cards undervolt. Or maybe I just got lucky... or MSI did some black magic.

[Screenshots]

Stock: [screenshot]

UV: [screenshot]
Looks solid.

In my experience, Navi10 undervolts better than Turing, but that's to be expected really, as TSMC's 7FF is better than their older 12nm process.

Samsung 8nm looks comparable to Navi10 based on your single post, and I'm assuming that Navi22 will undervolt in a very similar fashion to Navi10, being the same process and all.

The idea of a 6700XT or 3060 running at sub-100W is very appealing to me, and looking at the eBay prices of a 5700XT, I can likely make a reasonable profit by selling my 5700XT on if I can find a 6700XT or 3060 to play with.

"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and running at lower temps.
See, I'd be running a battery of OCCT tests to work out the minimum stable voltage for each clock and then trying to work out where the beginning of diminishing returns kicks in for voltage/clocks.

It's not an idea that appeals to a lot of people, but I suspect somewhere between 1500-1800 MHz is the sweet spot with the highest performance/Watt. So yes, I'd happily slow down the card if it has large benefits in power draw. If I ever need more performance I'll just buy a more expensive card (read: contact my AMD/Nvidia account managers and try to bypass the retail chain in a desperate last-ditch effort to obtain a card with a wider pipeline and more CUs/cores).
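For what it's worth, that sweet-spot hunt boils down to dividing a benchmark score by measured power at each point of the sweep. A minimal sketch with invented numbers (an OCCT/3DMark sweep would supply the real ones):

```python
# Hypothetical sweep results: (core clock MHz, minimum stable mV, measured W, benchmark score).
# All numbers below are made up purely to show the shape of the calculation.
sweep = [
    (1500, 750, 110, 10200),
    (1650, 800, 130, 11100),
    (1800, 850, 155, 11900),
    (1950, 900, 185, 12500),
    (2025, 925, 205, 12800),
]

for clock, mv, watts, score in sweep:
    print(f"{clock} MHz @ {mv} mV: {watts} W, {score} pts, {score / watts:.1f} pts/W")

# The highest points-per-watt entry marks the efficiency sweet spot;
# everything above it buys small framerate gains for steeply rising power.
best = max(sweep, key=lambda row: row[3] / row[2])
print(f"Best perf/W in this sweep: {best[0]} MHz @ {best[1]} mV")
```

Real data will shift the knee around, but the shape (big efficiency gains up to a point, then steeply rising power for small gains) is usually the same.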
 
Perhaps I've misunderstood the sequence of quotes that led to you saying "I wonder what you say about 3070", so again I'll ask: I'm not sure of your point; are you just genuinely curious what they think of a 3070?
It's a chip with more VRAM than the 3070, with perf roughly in the ballpark, and with a claimed TDP roughly in the ballpark.
So if the 6700 is bad, I was wondering how you rate the 3070.

RDNA2 silicon is simply less powerful at RT operations
That's baseless speculation.
People take stuff like Quake II RTX, don't get that 90% of that perf is quirks nested in quirks nested in quirks optimized for a single vendor's SHADERS, and draw funny conclusions.

One of the ray intersection issues (one that doesn't quite allow its performance to be drastically improved) is that you need to randomly access large memory structures. Guess who has an edge at that...
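To unpack the "randomly access large memory structures" point: traversing a BVH is basically pointer chasing through a tree of bounding boxes, and which nodes a given ray touches is unpredictable, which is why big caches help. A minimal sketch of the idea (simplified slab test included; this is illustrative Python, not any vendor's hardware path):

```python
from dataclasses import dataclass

@dataclass
class Node:
    lo: tuple        # AABB minimum corner (x, y, z)
    hi: tuple        # AABB maximum corner (x, y, z)
    children: list   # indices of child nodes; empty for a leaf
    triangles: list  # triangle indices held by a leaf

def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Classic slab test for a ray against an axis-aligned box."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(nodes, origin, direction):
    """Collect candidate triangles; note the data-dependent node fetches."""
    inv_dir = tuple(1.0 / d for d in direction)  # assumes no zero components, for brevity
    stack, candidates = [0], []
    while stack:
        node = nodes[stack.pop()]                # effectively a random memory access per ray
        if not ray_hits_aabb(origin, inv_dir, node.lo, node.hi):
            continue
        if node.children:
            stack.extend(node.children)
        else:
            candidates.extend(node.triangles)    # leaf: these go on to ray-triangle tests
    return candidates

# Tiny example: a root box containing one leaf
nodes = [
    Node(lo=(0, 0, 0), hi=(4, 4, 4), children=[1], triangles=[]),
    Node(lo=(1, 1, 1), hi=(2, 2, 2), children=[], triangles=[7]),
]
print(traverse(nodes, origin=(0.5, 1.5, 1.5), direction=(1.0, 0.001, 0.001)))  # -> [7]
```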

it's not even close.
Uh oh, doh.

Let me try again: there is NO such gap, definitely not in NV's favor, in ACTUAL hardware RT perf; perf is all over the place.

[Chart: hardware RT performance benchmark results]



And if you wonder "but why is it faster in GREEN SPONSORED games then", it's because only a fraction of what happens in games for ray tracing is ray intersection.

Make sure to check the "Random Thoughts" section on GitHub, it's quite telling.

Random Thoughts

  • I suspect the RTX 2000 series RT cores implement ray-AABB collision detection using reduced float precision. Early in development, when trying to get the sphere procedural rendering to work, reporting an intersection every time the rint shader is invoked allowed visualising the AABB of each procedural instance. The rendering of the bounding volume had many artifacts around the boxes' edges, typical of reduced precision.
  • When I upgraded the drivers to 430.86, performance significantly improved (+50%). This was around the same time Quake II RTX was released by NVIDIA. Coincidence?
  • When looking at the benchmark results of an RTX 2070 and an RTX 2080 Ti, the performance differences are mostly in line with the number of CUDA cores and RT cores rather than being influenced by other metrics, although I do not know at this point whether the CUDA cores or the RT cores are the main bottleneck.
  • UPDATE 2021-01-07: the RTX 30xx results seem to imply that performance is mostly dictated by the number of RT cores. Compared to Turing, Ampere achieves 2x RT performance only when using ray-triangle intersection (as expected per the NVIDIA Ampere whitepaper); otherwise performance per RT core is the same. This leads to situations such as an RTX 2080 Ti being faster than an RTX 3080 when using procedural geometry.
  • UPDATE 2021-01-31: the 6900 XT results show the RDNA 2 architecture performing surprisingly well in procedural geometry scenes. Is it because the RDNA2 BVH-ray intersections are done using the generic computing units (and there are plenty of those), whereas Ampere is bottlenecked by its small number of RT cores in these simple scenes? Or is RDNA2 Infinity Cache really shining here? The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.

DLSS into the mix
Sorry, I cannot seriously talk about "but if I downscale and slap TAA antialiasing, can I pretend I did not downscale".
No, you can't. Or wait, you can. Whatever you fancy.
It's just, I won't.
 
Sometimes they don't even care about your wallet, because they think God grows money on trees.

What features do you need from AMD? Radeon is the more feature-rich product line, in general and historically.
Idk, actual OpenGL support so my MC shaders don't run at 2 FPS, an encoder as good as NVENC, good drivers. Main things.
 
"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and running at lower temps.
I'm going to try this once I get a 3080. It'll have a waterblock on it too.

Did you remove some of the points from the curve? My 1070 has a ton and I'd hate to have to set each one to the same freq, hah.
 
Nope, just adjusted them. You can shift click and move a ton of squares at once, that's how I did it.
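For anyone wondering what that adjustment amounts to in numbers, here's a minimal sketch of the "flat line past the target point" idea from earlier in the thread, using hypothetical curve points rather than anything exported from Afterburner:

```python
# Hypothetical stock voltage (mV) -> boost clock (MHz) points; invented for illustration.
stock_curve = {750: 1710, 800: 1800, 850: 1875, 900: 1935, 950: 1995, 1000: 2040, 1050: 2085, 1075: 2100}

TARGET_MV = 900     # chosen undervolt voltage
TARGET_MHZ = 2025   # clock pinned at that voltage

# Raise the target point to the desired clock and flatten every higher-voltage point down to it,
# so the GPU never boosts past TARGET_MHZ or requests more than TARGET_MV.
undervolt_curve = {
    mv: TARGET_MHZ if mv >= TARGET_MV else mhz
    for mv, mhz in stock_curve.items()
}

for mv in sorted(undervolt_curve):
    print(f"{mv} mV -> {undervolt_curve[mv]} MHz")
```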

Here's an update.

[Screenshot: monitoring data]


2040-2055 MHz stable @ 925mv (compared to stock 1965-1980 @ 1075mv). Max power draw 190W. Max temp 61C on air. The 66C max temp reported in the pic is from periodically going back to stock settings -- so yes, there is a 5 degree temp decrease and a sizable MHz increase.

Fully stable.

Undervolt your Ampere cards people.

Also, we are getting a "bit" off topic, we should end this convo here or make a new thread lol.
 
They claim a lot of things, the reality on the other hand is usually (actually almost always) a "bit" different:
[Chart: relative GPU performance at 1920x1080]

The RX 6800 is barely faster than the RTX 3070 (yes, that's at 1080p, but given the extremely demanding nature of recent new titles, that will be the resolution best suited to these cards in the longer run), so it stands to reason that the 6700XT will struggle to compete with the 3060Ti. In normal times, this card (considering its additional lack of features vs the 3000 series) would be worth $350 at most...

Go back and look at the 2560x1440 table and you'll see a better representation.
 
CU count is not really relevant here, as the Xbox Series X GPU is clocked so low (1.8 GHz). There's a reason the PS5 performs better in nearly every multiplatform game comparison despite having only 36 CUs.

I've included the clock speeds in my calculation:
Doing some quick math, (52 CU / 40 CU) * (1.825 GHz / 2.424 GHz) = 0.98. The performance is similar to an Xbox Series X, which (as an entire system) costs almost the same. What a time to be a PC gamer /s
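That back-of-the-envelope figure is just the ratio of CU count times clock, which is easy to sanity-check (assuming the rumoured 2.424 GHz 6700 XT boost clock used above and comparable per-CU throughput on RDNA 2):

```python
# Rough throughput proxy: CU count x clock (GHz), same RDNA 2 architecture assumed for both.
xbox_series_x_gpu = 52 * 1.825   # ~94.9
rx_6700_xt        = 40 * 2.424   # ~97.0

print(f"Series X GPU / RX 6700 XT: {xbox_series_x_gpu / rx_6700_xt:.2f}")  # ~0.98
```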
 
Uh oh, doh.
Doh indeed!
The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.
I don't wonder why it's faster in green sponsored games, I wonder why it's more often faster in vendor-agnostic tests and even in AMD sponsored games, in the form of the RT effect adding a higher millisecond rendering-time penalty to the output image.
Whatever you fancy.
I have no such reservations about how the magic pixels are rendered when the output image is virtually indistinguishable in motion and it comes with a healthy FPS boost. Quoting your own head-in-the-sand opinion in bold was a nice touch, though. It almost made me reconsider.

I'd say it was an interesting experience, but I've looked through the rose-coloured glasses before and I prefer to see the entire spectrum.

And with that, the ignore button strikes again!
 
I wonder why it's more often faster in vendor-agnostic tests
You were presented with results of vendor-agnostic tests, along with source code and curious comments on major performance bumps.

even in AMD sponsored games
1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment

when the output image is virtually indistinguishable in motion
Ah. In motion that is. And from sufficient distance, I bet.
That's ok then. As I recall DLSS took this:

[Screenshot: native rendering]


and turned it into this:

[Screenshot: DLSS rendering]


all while the reviewer kept repeating that "better than native" mantra.

But one had to see that in motion, I'll remember that. Thanks!
 
1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.
 
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.
Indeed, and it clearly demonstrates the penalty, where the AMD GPU pays a higher price to enable the RT effect, in an AMD sponsored title.

Fantastic channel too; they do a great job on virtually all content. They do the lengthy investigation, present the findings in full, showing you the good, the bad, and the nuance, and then on the balance of it all make informed conclusions and recommendations.
 
That DigitalFoundry video is probably the best analysis out there of the performance hit of ray-tracing effects today, across both manufacturers.
I will just link to the video again.

That's the sad bit.
It's the best analysis on the RT subject that I've seen on YouTube.
And it's still filled with pathetic shilling.

Yet, even without reading between the lines, you should have figured this:

Apples to apples, eh:

Typically, in any RT scenario, there are four steps.
1) To begin with, the scene is prepared on the GPU, filled with all of the objects that can potentially affect ray tracing.
2) In the second step, rays are shot out into that scene, traversing it and being tested to see if they hit objects.
3) Then there's the next step, where the results from step two are shaded - like the colour of a reflection or whether a pixel is in or out of shadow.
4) The final step is denoising. You see, the GPU can't send out unlimited amounts of rays to be traced - only a finite amount can be traced, so the end result looks quite noisy. Denoising smooths out the image, producing the final effect.


So, there are numerous factors at play in dealing with RT performance. Of the four steps, only the second one is hardware accelerated - and the actual implementation between AMD and Nvidia is different...

...Meanwhile, PlayStation 5's Spider-Man: Miles Morales demonstrates that Radeon ray tracing can produce some impressive results on more challenging effects - and that's using a GPU that's significantly less powerful than the 6800 XT....


So, uh, oh, doh, you were saying?
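To make the quoted breakdown concrete, the four steps map onto a frame roughly like the toy sketch below. Every function here is a stand-in stub (hypothetical names, nothing from DXR or any real engine); the point is simply that only the trace step in the middle corresponds to the hardware-accelerated part:

```python
# Toy model of the four RT steps from the quote above; every stage is a stub.
def build_acceleration_structure(objects):   # step 1: scene prep (BVH build)
    return {"bvh_over": objects}

def generate_rays(pixel_count, rays_per_pixel):
    return range(pixel_count * rays_per_pixel)

def trace(bvh, ray):                         # step 2: traversal/intersection, the only hardware-accelerated step
    return {"ray": ray, "hit": ray % 2 == 0}

def shade(hit):                              # step 3: plain shader code (reflection colour, in/out of shadow, ...)
    return 1.0 if hit["hit"] else 0.0

def denoise(samples):                        # step 4: smooth the sparse, noisy result into the final effect
    return sum(samples) / len(samples)

def ray_traced_effect(objects, pixel_count, rays_per_pixel):
    bvh = build_acceleration_structure(objects)
    hits = [trace(bvh, ray) for ray in generate_rays(pixel_count, rays_per_pixel)]
    return denoise([shade(hit) for hit in hits])

print(ray_traced_effect(objects=["scene geometry"], pixel_count=4, rays_per_pixel=2))
```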
 
Why are you leaning that heavily on the actual implementation being different? DXR is a standard thing; if the implementation is different, I would expect the manufacturer to know what they are doing and what they are aiming for.

But yes, the second step is the hardware-accelerated one, and their measurements give a pretty good indication that Nvidia's RT hardware is more powerful at this point (probably simply by having more units). This is evidenced by where the performance falloff sits as the number of rays increases: both architectures fall off, but at different points.

Miles Morales on PS5 is heavily optimized using the same methods for performance improvements that RT effects use on PC, mostly to a higher degree. Also, clever design. The same Digital Foundry has a pretty good article/video on how that is achieved: https://www.eurogamer.net/articles/digitalfoundry-2020-console-ray-tracing-in-marvels-spider-man
 
Why are you leaning that heavily on the actual implementation being different? DXR is a standard thing; if the implementation is different
You have missed the point. Of the number of things that need to happen for RT to end up as an image, only one bit is hardware accelerated.

Why are you leaning that heavily on the actual implementation being different?
There is another side of the implementation:
For example, Quake 2 RTX and Watch Dogs Legion use a denoiser built by Nvidia and while it won't have been designed to run poorly on AMD hardware (which Nvidia would not have had access to when they coded it), it's certainly designed to run as well as possible on RTX cards.

A comparison of actual hardware RT perf benchmarks has been linked in #85 here. There is no need to run around and "guess" things; they are right there, on the surface.

The:

the RTX 3080 could render the effect in nearly half the time in Metro Exodus, or even a third of the time in Quake 2 RTX, yet increasing the amount of rays after this saw the RTX 3080 having less of an advantage.

could mean many things. This part is hilarious:

In general, from these tests it looks like the simpler the ray tracing is, the more similar the rendering times for the effect are between the competing architectures. The Nvidia card is undoubtedly more capable across the entire RT pipeline

Remember which part of ray tracing is hardware accelerated? Which "RT pipeline" cough? Vendor optimized shader code?
 
Against my better judgment, I've viewed the ignored content, here we go again...

Ah. In motion that is. And from sufficient distance, I bet.
That's ok then.
I never said from a distance; those are your words. And no, it looks fantastic close-up, too.

Yeah in motion, of course in motion. I tend to play games at something in the order of 60-144fps, not sitting and nitpicking stills, but for argument's sake, I'll do that too. If we're going to cherry-pick some native vs DLSS shots, I can easily do the same and show the side of the coin that you conveniently haven't.

Native left, DLSS Quality right

[Screenshots: native vs DLSS Quality comparison shots]


And the real kicker after viewing what is, at worst, comparable quality where each rendering has strengths and weaknesses, and at best, higher overall quality...

[Screenshot]


But you appear to have made up your mind: you don't like it, and you won't "settle" for it. Fine, suit yourself; nobody will make you buy an RTX card, play a supported game and turn it on. Cherry-picking examples to try and show how 'bad' it is doesn't make you come across as smart, and it certainly doesn't make you right; you could have at least chosen a game with a notoriously 'meh' implementation. Not to mention the attitude, yikes.

I can't convince you, and you can't convince me, so where do we go from here? Ignore each other?
 
That person's hair actually looks better in Native in comparison to DLSS, as it appears softer and cleaner as opposed to coarse and oily.
 
Yeah in motion, of course in motion.
I won't bite this lie, I'm sorry.

It's not about "in motion" at all. What you present is the "best case" for any anti-aliasing method that adds blur, TAA in particular.
There is barely any crisp texture (face eh?) to notice the added blur.
It is heavily loaded with stuff that benefits a lot from antialiasing (hair, long grass, eyebrows).

But if you dare bring in actual, real stuff from the very pics in your list (Death Stranding), DLSS takes this:

[Screenshot: native rendering]


and turns it into this:

[Screenshot: DLSS rendering]


no shockers here, all TAA derivatives exhibit it.

NV's TAA(u) derivative adds blur to... the entire screen if you move your mouse quickly. Among other things.

It's a shame Ars Technica was the only site to dare point it out.


[Screenshots]
 