Wednesday, March 3rd 2021

AMD Radeon RX 6700 XT: All You Need to Know

AMD today announced the Radeon RX 6700 XT, its fourth RX 6000 series graphics card based on the RDNA2 graphics architecture. The card debuts the new 7 nm "Navi 22" silicon, which is physically smaller than the "Navi 21" powering the RX 6800/RX 6900 series. The RX 6700 XT maxes out "Navi 22," featuring 40 RDNA2 compute units, amounting to 2,560 stream processors. These run at a maximum Game Clock frequency of 2424 MHz, a significant clock-speed uplift over the previous generation. The card comes with 12 GB of GDDR6 memory on a 192-bit wide memory interface. With 16 Gbps GDDR6 memory chips, memory bandwidth works out to 384 GB/s. The chip packs 96 MB of Infinity Cache on-die memory, which accelerates the memory sub-system. AMD rates typical board power at 230 W. The power input configuration for the reference-design RX 6700 XT board is 8-pin + 6-pin.
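
The 384 GB/s figure follows directly from the memory configuration; a quick back-of-the-envelope check (illustrative Python, not taken from AMD's materials):

```python
# Peak memory bandwidth = per-pin data rate x bus width / 8 bits per byte
data_rate_gbps = 16    # GDDR6 per-pin data rate, Gbps
bus_width_bits = 192   # memory interface width

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")   # 384 GB/s
```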

AMD is marketing the RX 6700 XT as a predominantly 1440p gaming card, positioned a notch below the RX 6800. The company makes some staggering performance claims. Compared to the previous generation, the RX 6700 XT is shown beating the GeForce RTX 2080 Super, the same performance outlook NVIDIA marketed for the current-gen RTX 3060 Ti. Things get more interesting where AMD shows that, in select games, the RX 6700 XT can even beat the RTX 3070, a card NVIDIA marketed as matching its previous-gen flagship, the RTX 2080 Ti. AMD is pricing the Radeon RX 6700 XT at $479 (MSRP), which is very likely to be bovine defecation, given the prevailing market situation. The company announced a simultaneous launch of reference-design and AIB custom-design boards, starting March 18, 2021.
AMD's performance claims follow.


104 Comments on AMD Radeon RX 6700 XT: All You Need to Know

#76
ARF
^^^ The graph that you present is misleading - it shows the 6800 severely bottlenecked by something.
In real conditions, the 6800 is around 10-15% faster than RTX 3070.

#77
HenrySomeone
You say 10-15% and you show 9? :D And that's at 4K, which will certainly be out of reach for the 6700 XT (in newer titles at decent settings).
#78
ARF
HenrySomeone: You say 10-15% and you show 9? :D
Yes, older drivers, lack of SAM support, Core i-something instead of a Ryzen platform, bugs in Nvidia's control panel - lower settings, etc.
#79
HenrySomeone
Spoken like a true team red fanboy indeed! :rolleyes:
#81
Unregistered
HenrySomeone: Spoken like a true team red fanboy indeed! :rolleyes:
Imagine being a fanboy of either company. Neither company cares about you, only about your wallet. Just stop this childish mindset. If AMD cards ever have the feature set I need, I'm definitely switching to try them out.
#82
ARF
Alexa: Imagine being a fanboy of either company. Neither company cares about you, only about your wallet. Just stop this childish mindset. If AMD cards ever have the feature set I need, I'm definitely switching to try them out.
Sometimes they don't even care about your wallet. Because they think God grows money on trees.

What features do you request from AMD? The Radeon is a more feature-rich product line, in general and historically.
#83
Chrispy_
Alexa: Can vouch for undervolting. My 3070 Gaming X Trio drew 230-250W on stock at 1075mv, 1965-1980 MHz, 63-65C.

Undervolted to 2010 MHz @ 900mv stable. Draws 140-180W and temps remain under 60C which is insane (case fans don't even need to ramp up past 1000 rpm so very quiet system while gaming, which is a first for me). Stable in all 3DMark tests, steady 2010 MHz frequency and even 2025 MHz sometimes.

I was very surprised to see how well these Ampere cards undervolt. Or maybe I just got lucky... or MSI did some black magic.

Stock: [screenshot]

UV: [screenshot]
Looks solid.

In my experience, Navi10 undervolts better than Turing, but that's to be expected really, as TSMC's 7FF is better than their older 12 nm process.

Samsung 8nm looks comparable to Navi10 based on your single post, and I'm assuming that Navi22 will undervolt in a very similar fashion to Navi10, being the same process and all.

The idea of a 6700XT or 3060 running at sub-100W is very appealing to me, and looking at the ebay prices of a 5700XT I can likely make a reasonable profit by selling my 5700XT on if I can find a 6700XT or 3060 to play with.
Alexa"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and having lower temps.
See, I'd be running a battery of OCCT tests to work out the minimum stable voltage for each clock and then trying to work out where the beginning of diminishing returns kicks in for voltage/clocks.

It's not an idea that appeals to a lot of people, but I suspect somewhere between 1500-1800 MHz is the sweet spot with the highest performance/Watt. So yes, I'd happily slow down the card if it has large benefits in power draw. If I ever need more performance I'll just buy a more expensive card - or rather, contact my AMD/Nvidia account managers and try to bypass the retail chain in a desperate last-ditch effort to obtain a card with a wider pipeline and more CUs/cores.
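
That sweet-spot hunt can be roughed out once you have measured clock/minimum-voltage pairs; a minimal sketch, assuming a first-order power model (every number below is invented for illustration, and the real table would come from the stability testing described above):

```python
# Rough perf-per-watt estimate from a clock -> minimum-stable-voltage table.
# First-order model: P ~ P_static + k * V^2 * f (dynamic power scales with V² and clock).
# Every figure here is made up; the real table comes from OCCT/3DMark testing
# on the actual card.

P_STATIC_W = 35.0   # assumed fixed overhead (VRAM, fans, leakage), watts
K = 0.056           # assumed dynamic-power constant, W per (V² * MHz)

vf_table = {        # MHz -> minimum stable voltage (V), hypothetical values
    1200: 0.700,
    1500: 0.750,
    1800: 0.825,
    2000: 0.900,
    2200: 1.000,
    2400: 1.150,
}

for mhz, volts in sorted(vf_table.items()):
    power = P_STATIC_W + K * volts ** 2 * mhz
    print(f"{mhz} MHz @ {volts:.3f} V -> ~{power:5.1f} W, {mhz / power:.1f} MHz/W")

# With these made-up numbers, perf/W peaks around 1500-1800 MHz and falls off
# quickly once the required voltage climbs past ~1.0 V.
```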
#84
medi01
wolf: Perhaps I've misunderstood the sequence of quotes that led to you saying "I wonder what you say about 3070", so again I'll ask, I'm not sure of your point, are you just genuinely curious what they think of a 3070?
It's a chip with more VRAM than the 3070, with perf roughly in the ballpark, and with claimed TDP roughly in the ballpark.
So if the 6700 was bad, I was wondering how you rated the 3070.
wolf: RDNA2 silicon is simply less powerful at RT operations
That's baseless speculation.
People take stuff like Quake II RTX, don't get that 90% of that perf is quirks nested in quirks nested in quirks optimized for a single vendor's SHADERS, and draw funny conclusions.

One of the ray intersection issues (one that didn't quite allow its performance to be drastically improved) is that you need to randomly access large memory structures. Guess who has an edge at that...
wolf: it's not even close.
Uh oh, doh.

Let me try again: there is NO such gap, definitely not in NV's favor, in ACTUAL hardware RT perf; perf is all over the place.

[benchmark results chart]
github.com/GPSnoopy/RayTracingInVulkan

And if you wonder "but why is it faster in GREEN SPONSORED games then", it's because only a fraction of what happens in games for ray tracing is ray intersection.

Make sure to check the "Random Thoughts" section on GitHub, it's quite telling.

Random Thoughts

  • I suspect the RTX 2000 series RT cores to implement ray-AABB collision detection using reduced float precision. Early in the development, when trying to get the sphere procedural rendering to work, reporting an intersection every time the rint shader is invoked allowed to visualise the AABB of each procedural instance. The rendering of the bounding volume had many artifacts around the boxes edges, typical of reduced precision.
  • When I upgraded the drivers to 430.86, performance significantly improved (+50%). This was around the same time Quake II RTX was released by NVIDIA. Coincidence?
  • When looking at the benchmark results of an RTX 2070 and an RTX 2080 Ti, the performance differences are mostly in line with the number of CUDA cores and RT cores rather than being influenced by other metrics. Although I do not know at this point whether the CUDA cores or the RT cores are the main bottleneck.
  • UPDATE 2021-01-07: the RTX 30xx results seem to imply that performance is mostly dictated by the number of RT cores. Compared to Turing, Ampere achieves 2x RT performance only when using ray-triangle intersection (as expected per the NVIDIA Ampere whitepaper), otherwise performance per RT core is the same. This leads to situations such as an RTX 2080 Ti being faster than an RTX 3080 when using procedural geometry.
  • UPDATE 2021-01-31: the 6900 XT results show the RDNA 2 architecture performing surprisingly well in procedural geometry scenes. Is it because the RDNA2 BVH-ray intersections are done using the generic computing units (and there are plenty of those), whereas Ampere is bottlenecked by its small number of RT cores in these simple scenes? Or is RDNA2 Infinity Cache really shining here? The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.
wolf: DLSS into the mix
Sorry, I cannot seriously talk about "but if I downscale and slap TAA antialiasing, can I pretend I did not downscale".
No, you can't. Or wait, you can. Whatever you fancy.
It's just, I won't.
#85
Unregistered
ARF: Sometimes they don't even care about your wallet. Because they think God grows money on trees.

What features do you request from AMD? The Radeon is a more feature-rich product line, in general and historically.
Idk, actual OpenGL support so my MC shaders don't run at 2 FPS, an encoder as good as NVENC, good drivers. Main things.
#86
MxPhenom 216
ASIC Engineer
Alexa"Everything to the right of the highest point that is a flat line just means the GPU won't try and boost beyond that speed/voltage point on the curve."
This. I set it to run at 2025 MHz max constantly, with a constant 900mv. Don't need more than that.

On stock, it would fluctuate between 1965-1980 at higher temps and more power draw.

This way, it remains at a stable 2010-2025 MHz at 900mv, while drawing less power and having lower temps.
I'm going to try this once I get a 3080. It'll have a waterblock on it too.

Did you remove some of the points from the curve? My 1070 has a ton and I'd hate to have to set each one to the same freq, hah
#87
Unregistered
MxPhenom 216: I'm going to try this once I get a 3080. It'll have a waterblock on it too.

Did you remove some of the points from the curve? My 1070 has a ton and I'd hate to have to set each one to the same freq, hah
Nope, just adjusted them. You can shift click and move a ton of squares at once, that's how I did it.

Here's an update.

[screenshot]
2040-2055 stable @ 925mv (compared to stock 1965-1980 @ 1075mv). Max power draw 190W. Max temp 61C on air. The 66C max temp reported in the pic is from periodically going back to stock settings -- so yes, there is a 5 degree temp decrease and a lot of MHz increase.

Fully stable.

Undervolt your Ampere cards people.
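
For what it's worth, those numbers are roughly what first-order dynamic-power scaling would predict; a quick sanity check using the figures quoted above (dynamic power only, so treat it as a ballpark, not a prediction):

```python
# P_new ~ P_old * (V_new / V_old)^2 * (f_new / f_old) -- dynamic power only.
v_stock, v_uv = 1.075, 0.925   # volts, from the stock and undervolted settings above
f_stock, f_uv = 1972, 2047     # MHz, midpoints of the quoted clock ranges

scale = (v_uv / v_stock) ** 2 * (f_uv / f_stock)
for p_stock in (230, 250):
    print(f"{p_stock} W stock -> ~{p_stock * scale:.0f} W undervolted")
# ~177 W and ~192 W, in line with the ~190 W maximum reported above
```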

Also, we are getting a "bit" off topic, we should end this convo here or make a new thread lol.
#88
N3M3515
HenrySomeone: They claim a lot of things; the reality on the other hand is usually (actually almost always) a "bit" different:

The RX 6800 is barely faster than the RTX 3070 (yes, that's at 1080p, but given the extremely graphically demanding nature of recent new titles, that will be the resolution best suited to these cards in the longer run), so it stands to reason that the 6700 XT will struggle to compete with the 3060 Ti. In normal times, this card (considering its additional lack of features vs the 3000 series) would be worth $350 at most...
Go back and check the table at 2560x1440 and you'll see a better representation.
#89
hardcore_gamer
Shatun_Bear: CU count is not really relevant here as the Xbox Series X GPU is clocked so low (1.8 GHz). There's a reason the PS5 performs better in nearly every multiplatform game comparison despite having 36 CUs.
I've included the clock speeds in my calculation:
hardcore_gamer: Doing some quick math, (52 CU / 40 CU) * (1.825 GHz / 2.424 GHz) = 0.98. The performance is similar to an Xbox Series X, which (as an entire system) costs almost the same. What a time to be a PC gamer /s
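
That ratio is easy to sanity-check; a minimal sketch of the same compute-throughput math (it deliberately ignores memory bandwidth, Infinity Cache, power limits and everything else):

```python
# Compute-throughput comparison from CU count x clock only.
xbox_cu, xbox_ghz = 52, 1.825   # Xbox Series X GPU
n22_cu, n22_ghz = 40, 2.424     # RX 6700 XT "Navi 22" at its rated Game Clock

ratio = (xbox_cu * xbox_ghz) / (n22_cu * n22_ghz)
print(f"Series X / RX 6700 XT compute ratio ~ {ratio:.2f}")   # ~0.98
```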
#90
wolf
Better Than Native
medi01: Uh oh, doh.
Doh indeed!
The triangle-based geometry scenes highlight how efficient Ampere RT cores are in handling triangle-ray intersections; unsurprisingly as these scenes are more representative of what video games would do in practice.
I don't wonder why it's faster in green sponsored games, I wonder why it's more often faster in vendor-agnostic tests and even in AMD sponsored games, with the AMD card adding a higher millisecond rendering-time penalty to the output image.
medi01: Whatever you fancy.
I have no such reservations about how the magic pixels are rendered when the output image is virtually indistinguishable in motion and it comes with a healthy FPS boost. Quoting your own head-in-the-sand opinion in bold was a nice touch, though. It almost made me reconsider.

I'd say it was an interesting experience, but I've looked through the rose-coloured glasses before and I prefer to see the entire spectrum.

And with that, the ignore button strikes again!
#91
medi01
wolf: I wonder why it's more often faster in vendor-agnostic tests
You were presented with results of vendor-agnostic tests, along with source code and curious comments on major performance bumps.
wolf: even in AMD sponsored games
1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment
wolf: when the output image is virtually indistinguishable in motion
Ah. In motion that is. And from sufficient distance, I bet.
That's ok then. As I recall DLSS took this:

[screenshot]
and turned it into this:

[screenshot]
all while the reviewer kept repeating that "better than native" mantra.

But one had to see that in motion, I'll remember that. Thanks!
#92
londiste
medi01: 1) Dirt 5 is so far the only RT game of that kind, and AMD is comfortably ahead in it
2) DF is an embarrassment
That DigitalFoundry video is probably the best analysis out there of the performance hit of raytracing effects today, across both manufacturers.
I will just link to the video again.
#93
wolf
Better Than Native
londiste: That DigitalFoundry video is probably the best analysis out there of the performance hit of raytracing effects today, across both manufacturers.
I will just link to the video again.
Indeed, and it clearly demonstrates the penalty, where the AMD GPU pays a higher price to enable the RT effect, in an AMD sponsored title.

Fantastic channel too, they do a great job on virtually all content: they do the lengthy investigation, present the findings in full, showing you the good, the bad, and the nuance, and then on balance of it all make informed conclusions and recommendations.
#94
medi01
londiste: That DigitalFoundry video is probably the best analysis out there of the performance hit of raytracing effects today, across both manufacturers.
I will just link to the video again.
That's the sad bit.
It's the best analysis on the RT subject that I've seen on YouTube.
And it's still filled with pathetic shilling.

Yet, even without reading between the lines, you should have figured this:

Apples to apples, eh:

Typically, in any RT scenario, there are four steps.
1) To begin with, the scene is prepared on the GPU, filled with all of the objects that can potentially affect ray tracing.
2) In the second step, rays are shot out into that scene, traversing it and tested to see if they hit objects.
3) Then there's the next step, where the results from step two are shaded - like the colour of a reflection or whether a pixel is in or out of shadow.
4) The final step is denoising. You see, the GPU can't send out unlimited amounts of rays to be traced - only a finite number can be traced, so the end result looks quite noisy. Denoising smooths out the image, producing the final effect.


So, there are numerous factors at play in dealing with RT performance. Of the four steps, only the second one is hardware accelerated - and the actual implementation between AMD and Nvidia is different...

...Meanwhile, PlayStation 5's Spider-Man: Miles Morales demonstrates that Radeon ray tracing can produce some impressive results on more challenging effects - and that's using a GPU that's significantly less powerful than the 6800 XT....

www.eurogamer.net/articles/digitalfoundry-2021-pc-ray-tracing-deep-dive-rx-6800xt-vs-rtx-3080
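
The point of that quote is that only one of the four stages runs on the dedicated RT hardware, so a big advantage there shrinks once the whole effect is measured; a toy illustration with invented millisecond costs:

```python
# Of the four steps above, only step 2 (traversal/intersection) runs on the
# dedicated RT hardware. All millisecond costs below are invented for the
# example; real numbers vary per game and per effect.
step_cost_ms = {
    "1) build/refit acceleration structure": 0.8,
    "2) ray traversal + intersection":       2.0,   # the only HW-accelerated step
    "3) shade the hit results":              1.6,
    "4) denoise":                            1.2,
}

baseline = sum(step_cost_ms.values())
# Suppose one GPU does step 2 twice as fast as another:
faster = baseline - step_cost_ms["2) ray traversal + intersection"] / 2
print(f"RT cost: {baseline:.1f} ms baseline, {faster:.1f} ms with 2x faster intersections")
# 5.6 ms -> 4.6 ms: a 2x edge in the accelerated step shrinks to ~1.2x overall
```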

So, uh, oh, doh, you were saying?
#95
londiste
Why are you leaning that heavily on different actual implementation? DXR is a standard thing, if the implementation is different, I would expect the manufacturer to know what they are doing and what they are aiming for.

But yes, the second step is the hardware-accelerated one, and their measurements give a pretty good indication that Nvidia's RT hardware is more powerful at this point (probably simply by having more units). This is evidenced by where performance falls off as the number of rays used increases. Both fall off, but the respective points are different.

Miles Morales on PS5 is heavily optimized using the same methods for performance improvements that RT effects use on PC, mostly to a higher degree. Also, clever design. The same Digital Foundry has a pretty good article/video on how that is achieved: www.eurogamer.net/articles/digitalfoundry-2020-console-ray-tracing-in-marvels-spider-man
#96
medi01
londiste: Why are you leaning that heavily on different actual implementation? DXR is a standard thing, if the implementation is different
You have missed the point. Of the number of things that need to happen for RT to end up as an image, only one bit is hardware accelerated.
londiste: Why are you leaning that heavily on different actual implementation?
There is another side of the implementation:
For example, Quake 2 RTX and Watch Dogs Legion use a denoiser built by Nvidia and while it won't have been designed to run poorly on AMD hardware (which Nvidia would not have had access to when they coded it), it's certainly designed to run as well as possible on RTX cards.

A comparison of actual hardware RT perf benchmarks has been linked in #84 above. There is no need to run around and "guess" things; they are right there, on the surface.

The:

the RTX 3080 could render the effect in nearly half the time in Metro Exodus, or even a third of the time in Quake 2 RTX, yet increasing the amount of rays after this saw the RTX 3080 having less of an advantage.

could mean many things. This part is hilarious:

In general, from these tests it looks like the simpler the ray tracing is, the more similar the rendering times for the effect are between the competing architectures. The Nvidia card is undoubtedly more capable across the entire RT pipeline

Remember which part of ray tracing is hardware accelerated? Which "RT pipeline" cough? Vendor optimized shader code?
#97
wolf
Better Than Native
Against my better judgment, I've viewed the ignored content, here we go again...
medi01: Ah. In motion that is. And from sufficient distance, I bet.
That's ok then.
I never said from a distance, your words. And no, it looks fantastic close-up, too.

Yeah in motion, of course in motion. I tend to play games at something in the order of 60-144fps, not sitting and nitpicking stills, but for argument's sake, I'll do that too. If we're going to cherry-pick some native vs DLSS shots, I can easily do the same and show the side of the coin that you conveniently haven't.

Native left, DLSS Quality right:

[comparison screenshots]
And the real kicker after viewing what is, at worst, comparable quality where each rendering has strengths and weaknesses, and at best, higher overall quality...

[screenshot]
But you appear to have made up your mind, you don't like it, you won't "settle" for it. Fine, suit yourself, nobody will make you buy an RTX card, play a supported game and turn it on. Cherry picking examples to try and show how 'bad' it is doesn't make you come across as smart, and it certainly doesn't just make you right, you could have at least chosen a game with a notoriously 'meh' implementation. Not to mention the attitude, yikes.

I can't convince you, and you can't convince me, so where from here? ignore each other?
#98
Caring1
That person's hair actually looks better in Native in comparison to DLSS, as it appears softer and cleaner as opposed to coarse and oily.
#99
medi01
wolf: Yeah in motion, of course in motion.
I won't bite this lie, I'm sorry.

It's not about "in motion" at all. What you present is the "best case" for any anti-aliasing method that adds blur, TAA in particular.
There is barely any crisp texture (face eh?) to notice the added blur.
It is heavily loaded with stuff that benefits a lot from antialiasing (hair, long grass, eyebrows).

But if you dare bring in actual, real stuff from the very pic in your list, Death Stranding, DLSS takes this:

[screenshot]
and turns it into this:

[screenshot]
no shockers here, all TAA derivatives exhibit it.

NV's TAA(u) derivative adds blur to... the entire screen if you move your mouse quickly. Among other things.

It's a shame Ars Technica was the only site to dare point it out:
arstechnica.com/gaming/2020/07/why-this-months-pc-port-of-death-stranding-is-the-definitive-version/
#100
Adam Krazispeed
SamuelL: Assuming I could get one of these close(ish) to MSRP, would this be any kind of upgrade over a 1080 Ti? If I could get one around MSRP, then I could likely sell the 1080 Ti for about the same price. Just don't know if it would be worth all the effort... I'm debating if I should just wait for the next round of GPUs in 6-12 months given the current pricing disaster.

Opinions?
Pricing get better? Lol, it's gonna get WORSE if mining and scalping ISN'T STOPPED.

My 6800 XT that I paid $1,100 USD for on eBay 4 months ago just died - it's ALREADY DEAD???? Can't even return it. Contacted Sapphire for an RMA, but it doesn't look like it's gonna be honored.

I have no choice, I NEED SOMETHING... I'm gonna kill someone over a 6700 XT. I can't get another 6800 XT for less than 2 grand USD, so I guess I'm done with PC gaming if I can't get one of these. Yes, it should be a great upgrade from a 1080/1080 Ti for sure.