Monday, November 2nd 2020

AMD Releases Even More RX 6900 XT and RX 6800 XT Benchmarks Tested on Ryzen 9 5900X

AMD sent ripples through the industry at its late-October event launching the Radeon RX 6000 series RDNA2 "Big Navi" graphics cards, when it claimed that the top RX 6000 series parts compete with the very fastest GeForce "Ampere" RTX 30-series graphics cards, marking the company's return to the high-end graphics market. In its announcement press deck, AMD showed the $579 RX 6800 beating the RTX 2080 Ti (essentially the RTX 3070), the $649 RX 6800 XT trading blows with the $699 RTX 3080, and the top $999 RX 6900 XT performing in the same league as the $1,499 RTX 3090. Over the weekend, the company released even more benchmarks, with the RX 6000 series GPUs and their NVIDIA competition tested by AMD on a platform powered by the Ryzen 9 5900X "Zen 3" 12-core processor.

AMD released its benchmark numbers as interactive bar graphs on its website. You can select from ten real-world games, two resolutions (1440p and 4K UHD), game settings presets, and, for certain tests, the 3D API. Among the games are Battlefield V, Call of Duty: Modern Warfare (2019), Tom Clancy's The Division 2, Borderlands 3, DOOM Eternal, Forza Horizon 4, Gears 5, Resident Evil 3, Shadow of the Tomb Raider, and Wolfenstein: Youngblood. In several of these tests, the RX 6800 XT and RX 6900 XT are shown taking the fight to NVIDIA's high-end RTX 3080 and RTX 3090, while the RX 6800 is shown as significantly faster than the RTX 2080 Ti (roughly RTX 3070 scores). The Ryzen 9 5900X itself is claimed to be a faster gaming processor than Intel's Core i9-10900K, and features a PCI-Express 4.0 interface for these next-gen GPUs. Find more results and the interactive graphs at the source link below.
Source: AMD Gaming Benchmarks

147 Comments on AMD Releases Even More RX 6900 XT and RX 6800 XT Benchmarks Tested on Ryzen 9 5900X

#101
lexluthermiester
Minus Infinity6800XT + 5900X will be a nice Xmas present assuming I can get a hold of them.
That would be a very nice combo.
Posted on Reply
#102
r.h.p
medi01Which doesn't seem to need brute-force path tracing anyway:

Besides, I wouldn't be surprised if Zen 3's "infinity cache" inside RDNA2 lets it spank Ampere even on that (rather useless, for now and for the foreseeable future) front.
:rockout:
Posted on Reply
#103
turbogear
@btarunr

Thanks a lot for sharing the results.
Looks very promising.

Waiting for the review from TPU, especially of the 6800 XT, to decide if that is my next GPU. :rolleyes:
I would be interested to see how the performance is with Zen 2 without SAM.

When Nvidia launched their new RTX generation, I thought it would be really tough for AMD to match, but looking at the benchmarks so far, it is really impressive to see AMD catching up to Nvidia, at least in non-DXR performance. :)

Let's hope these will be available in larger quantities on release day and not sold out within minutes, giving one the option to buy after reading reviews instead of watching them disappear from online stores while still reading. :roll:
Posted on Reply
#104
TumbleGeorge
Xex360games need more polygons and way better textures.
According to the TechPowerUp GPU database, the RX 6900 XT has more pixel performance and more texel performance than the RTX 3090:
Pixel fillrate: 280 vs 190 GPixel/s
Texture fillrate: 720 vs 556 GTexel/s
LOL. AMD is more future-proof for long-term use!
PS: The RX 6800 XT is also better than the RTX 3090 if we rely only on a comparison of these numbers.
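(For reference, here is where such fillrate numbers come from; this is a back-of-the-envelope sketch, and the ROP/TMU counts and boost clocks below are assumptions taken from public spec listings, not official figures.)

```python
# Back-of-the-envelope fillrate math: peak pixel fillrate = ROPs x clock,
# peak texture fillrate = TMUs x clock. The unit counts and boost clocks
# below are assumptions from public spec listings.
specs = {
    "RX 6900 XT": {"rops": 128, "tmus": 320, "boost_mhz": 2250},
    "RTX 3090":   {"rops": 112, "tmus": 328, "boost_mhz": 1695},
}

for name, s in specs.items():
    gpix = s["rops"] * s["boost_mhz"] / 1000  # GPixel/s
    gtex = s["tmus"] * s["boost_mhz"] / 1000  # GTexel/s
    print(f"{name}: {gpix:.0f} GPixel/s, {gtex:.0f} GTexel/s")
```

That lands near the figures quoted above; small differences come down to which clock the database assumes.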
Posted on Reply
#105
ratirt
TumbleGeorgeAccording to the TechPowerUp GPU database, the RX 6900 XT has more pixel performance and more texel performance than the RTX 3090:
Pixel fillrate: 280 vs 190 GPixel/s
Texture fillrate: 720 vs 556 GTexel/s
LOL. AMD is more future-proof for long-term use!
PS: The RX 6800 XT is also better than the RTX 3090 if we rely only on a comparison of these numbers.
I know what you are trying to say here, but these cards are different; they should not be compared 1:1 based on hardware specs alone.
Posted on Reply
#106
TumbleGeorge
ratirtI know what you are trying to say here but these cards are different. These should not be compared 1 to 1 considering the hardware.
Those numbers only show how the cards compare at the time of first review; they say nothing about how long the cards will remain relevant. I think that even if AMD's cards don't show a big advantage in the first reviews, in the future they will perform even better compared to the competing models from Nvidia's 30-series.
Posted on Reply
#107
EarthDog
TumbleGeorgeThose numbers only show how the cards compare at the time of first review; they say nothing about how long the cards will remain relevant. I think that even if AMD's cards don't show a big advantage in the first reviews, in the future they will perform even better compared to the competing models from Nvidia's 30-series.
???

Fine wine? A couple % uptick overall more in a title or two? I wouldn't hold my breath for that. And those numbers you quoted don't add up to your conclusion.
Posted on Reply
#108
TumbleGeorge
EarthDog???

Fine wine? A couple % uptick overall more in a title or two? I wouldn't hold my breath for that. And those numbers you quoted don't add up to your conclusion.
All will be clear in the future. At the moment we can only guess, based on the characteristics we know now, how things will develop. It is not possible to present facts that have not yet happened.
Posted on Reply
#109
EarthDog
TumbleGeorgeAll will be clear in the future. At the moment we can only guess, based on the characteristics we know now, how things will develop. It is not possible to present facts that have not yet happened.
I'm glad you understand that concept... apply it. :p
Posted on Reply
#110
Zach_01
TumbleGeorgeAccording to the TechPowerUp GPU database, the RX 6900 XT has more pixel performance and more texel performance than the RTX 3090:
Pixel fillrate: 280 vs 190 GPixel/s
Texture fillrate: 720 vs 556 GTexel/s
LOL. AMD is more future-proof for long-term use!
PS: The RX 6800 XT is also better than the RTX 3090 if we rely only on a comparison of these numbers.
As you might have figured out already, those numbers tell you absolutely nothing about the actual performance of a card. The same goes for TFLOPS; they are just for reference. Raw fillrates, compute throughput, and VRAM bandwidth cannot be directly compared between GPUs of different architectures, not even between GPUs made under the same brand.
And you can't predict the future performance gains or losses of a GPU against another product either, as there are far too many factors involved.
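To make that concrete, here is a minimal sketch of how the headline TFLOPS figure is usually derived (shader counts and boost clocks are assumptions from public listings); nothing in the formula captures scheduling, caches, or how much of the peak a real game can sustain, which is exactly why the comparison breaks down across architectures:

```python
# Theoretical FP32 throughput: shaders x 2 FLOPs/clock (one fused
# multiply-add) x boost clock. Shader counts and clocks below are
# assumptions from public spec listings.
def tflops(shaders: int, boost_mhz: float) -> float:
    return shaders * 2 * boost_mhz / 1e6

print(f"RX 6900 XT: {tflops(5120, 2250):.1f} TFLOPS")   # ~23.0
print(f"RTX 3090:   {tflops(10496, 1695):.1f} TFLOPS")  # ~35.6
```

By this metric the RTX 3090 "should" be over 50% faster, yet the benchmarks above show nothing of the sort, which is the point: peak arithmetic throughput is not delivered game performance.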
Posted on Reply
#111
TumbleGeorge
EarthDogI'm glad you understand that concept... apply it. :p
Hmm, next factor: Nvidia's shortcomings with VRAM size (this partially excludes only the RTX 3090 and applies to all other models: 3080 10 GB; 3070 8 GB; 3060 Ti (?)):


First:
AMD will support all ray tracing titles using industry-based standards, including the Microsoft DXR API and the upcoming Vulkan raytracing API. Games making use of proprietary raytracing APIs and extensions will not be supported.
— AMD Marketing
.....
AMD has made a commitment to stick to industry standards, such as the Microsoft DXR or Vulkan ray tracing APIs. Both should slowly become more popular as the focus moves away from NVIDIA's implementation. After all, Intel will support DirectX DXR as well, so developers will have even less reason to focus on NVIDIA's
Second:
Interestingly, Keith Lee revealed that in order to support 4X x 4X UltraHD textures, 12 GB of VRAM is required. This means that the Radeon RX 6000 series, which all feature 16 GB of GDDR6 memory along with 128 MB of Infinity Cache, should have no issues delivering such high-resolution textures. It may also mean that the NVIDIA GeForce RTX 3080 graphics card, which only has 10 GB of VRAM, will not be enough
Links are below "First & Second"!
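The 12 GB claim can't be verified from here, but a rough sketch of per-texture VRAM cost shows how ultra-high-resolution textures add up quickly; the assumption of BC7 block compression at 1 byte per texel and a ~33% mip-chain overhead is mine, not from the cited article:

```python
# Rough VRAM cost of a single texture: width x height x bytes per texel,
# plus roughly a third extra for the mip chain. BC7 compression
# (1 byte/texel) and the 4/3 mip overhead are assumptions.
def texture_mib(width: int, height: int, bytes_per_texel: float = 1.0) -> float:
    return width * height * bytes_per_texel * (4 / 3) / 2**20

print(f"4096 x 4096: {texture_mib(4096, 4096):.0f} MiB")  # ~21 MiB
print(f"8192 x 8192: {texture_mib(8192, 8192):.0f} MiB")  # ~85 MiB
```

A scene streaming a few hundred such textures alongside framebuffers and geometry makes a 10 GB vs 16 GB gap at least plausible as a constraint, though actual usage depends entirely on the engine's streaming budget.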
Posted on Reply
#112
EarthDog
TumbleGeorgeHmm,
NV uses DXR, same as AMD.....

10GB may fall short at 4K in a few years... but by then, you'll want another GPU anyway. Even DOOM on nightmare doesn't eclipse 10GB @ 4K.
Zach_01As you might have figured out already, those numbers tell you absolutely nothing about the actual performance of a card. The same goes for TFLOPS; they are just for reference. Raw fillrates, compute throughput, and VRAM bandwidth cannot be directly compared between GPUs of different architectures, not even between GPUs made under the same brand.
And you can't predict the future performance gains or losses of a GPU against another product either, as there are far too many factors involved.
I'm giving up. ;)
Posted on Reply
#113
BoboOOZ
CheeseballThe RX 5700 XT didn't really overclock well either (2,000+ MHz only yielded at most 10 FPS more with most models), but we'll see how the 6800 XT works out.
The 5700 XT OC'd pretty well (clocks went up), but gains were small because it was already memory-bandwidth starved. Here, the memory architecture has been completely overhauled, and the Infinity Cache at least should scale in speed with the core while overclocking, so it should be quite interesting to see...
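A rough sketch of the bandwidth point (bus widths and data rates are assumptions from public spec listings): the 6800 XT's raw GDDR6 bandwidth is only modestly higher than the 5700 XT's, which is why an on-die cache that scales with core clock matters so much here:

```python
# Raw VRAM bandwidth: bus width (bits) / 8 x data rate (Gbps) = GB/s.
# Bus widths and data rates below are assumptions from public listings.
def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(f"RX 5700 XT: {bandwidth_gbs(256, 14):.0f} GB/s")  # 448
print(f"RX 6800 XT: {bandwidth_gbs(256, 16):.0f} GB/s")  # 512
```

Only ~14% more raw bandwidth for a much faster GPU; the 128 MB Infinity Cache is what absorbs the difference, and since it runs in the core clock domain, it should indeed speed up with a core overclock.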
Posted on Reply
#114
medi01
lexluthermiesterThat's an assumption on your part and not a very logical one, especially considering that NVidia has already had 2 years to gain a lead in both deployment and development of RTRT.
It's a very logical assumption, given who commands the console market (and the situation with the upcoming GPUs, too).

The more likely scenario, though, is that in that form (brute-force path tracing) it will never take off.
turbogearis catching up
Smaller chips, lower power consumption, slower (and cheaper) VRAM but more of it, at a lower price and with better perf/$ than the competition.
Catching up, eh? :)
Posted on Reply
#115
lexluthermiester
medi01It's a very logical assumption, given who commands the console market (and the situation with the upcoming GPUs, too).
Oh, do help us all understand your point in more detail...
Posted on Reply
#116
Punkenjoy
One of the reasons for AMD's "fine wine" effect is simply that AMD took more time to polish its drivers, because it has far fewer resources than Nvidia to do so.

Another is that GCN's balance between fillrate/texture rate and compute performance leaned a bit more toward the compute side, while Nvidia focused a bit more on the fillrate side.

Each generation of games shifted the load from fillrate toward compute by using more and more shader power, putting AMD GPUs in a better position. But not really enough to make a card last much longer. Also, low-end cards were outclassed anyway, while high-end cards were bought by people with money who would probably replace them as soon as it made sense.

It looks like AMD with Navi moved to a more balanced setup, whereas Nvidia is going down the heavy-compute path. We will see in the future which is the better balance, but right now it's too early to tell.

So in the end, it does not really matter. A good strategy is to buy a PC at a price that lets you afford another one at the same price in 3-4 years, and you will always be in good shape. If paying $500 for a card every 3-4 years is too much, buy something cheaper and that's it.

There is a good chance that in 4 years that $500 card will be beaten by a $250 card anyway, even more so when you consider that GPUs are moving to chiplet designs, which should drive a good increase in performance.
Posted on Reply
#117
medi01
lexluthermiesterOh, do help us all understand your point in more detail...
Stranger who talks about himself in the plural, are you seriously asking why anyone would optimize games for the lion's share of the market?
Posted on Reply
#118
lexluthermiester
medi01Stranger who talks about himself in the plural, are you seriously asking why anyone would optimize games for the lion's share of the market?
Then why aren't you? Hmm? Perhaps because you know both that there is a counter-argument and that such an argument is perfectly valid. It's as valid now as it has been since the console vs. PC debate began.
Posted on Reply
#119
moproblems99
medi01It's a very logical assumption, given who commands console market (and situation in the upcoming GPUs too).
Considering they had consoles last generation as well, how did that whole optimizing for AMD architecture go?
Posted on Reply
#120
TumbleGeorge
moproblems99Considering they had consoles last generation as well, how did that whole optimizing for AMD architecture go?
The explanation is extremely easy. In the past, AMD was not ready to take advantage of the fact that the old consoles' hardware used components it developed. Now they can, and they do!
Posted on Reply
#121
mtcn77
TumbleGeorgeThe explanation is extremely easy. In the past, AMD was not ready to take advantage of the fact that the old consoles' hardware used components it developed. Now they can, and they do!
It is the opposite, imo. After they programmed the Radeon profiler, they found out about the intrinsic limits of the hardware.
Yes, the scheduler was flexible, as announced at launch, but instruction reordering does not necessarily extract the hardware's full performance. IPC was still 0.25, and now that it is 1, that is a lot in comparison. They have all these baked-in instructions doing the intrinsic tuning for them in hardware. The ISA has moved a great deal away from where GCN was. Plus, they have the mesh shader, which sidesteps the triangle-pixel-size vs. wavefront-thread-cost problem by handling it in hardware. Performance really suffered with triangles under 64 pixels in area. Not so any more.
Posted on Reply
#122
medi01
moproblems99Considering they had consoles last generation as well, how did that whole optimizing for AMD architecture go?
Oh, that is easy, my friend.
EPIC on UE4: "it was optimized for NVidia GPUs."
EPIC today demos UE5 on an RDNA2 chip running on the weaker of the two next-gen consoles, and spits on Huang's RT altogether, even though it is supported even in UE4.

There is more fun to come.

A recent demo of the XSeX vs the 3080 was commented on by a greenboi as "merely 2080 Ti levels".
That is where the next-gen consoles are: faster than 98-99% of the PC GPU market.
lexluthermiesterThen why aren't you?
It was a rhetorical question.
Posted on Reply
#124
mtcn77
moproblems99@medi01, no idea what you just said.
Yeah, me neither. An overview would be so nice. RDNA2 FTW, you were saying?
Posted on Reply
#125
TheoneandonlyMrK
moproblems99Considering they had consoles las generation as well, how did that whole optimizing for AMD architecture go?
So consider: is GPU PhysX big in games? What about CUDA, is that big in games? Because DirectCompute is, as is tessellation, and an RX 580 will meet at least the minimum specs of any game released since its birth.
Did Nvidia bring more performance at times? Yes, of course, but that doesn't preclude AMD having good support for its features.
And GCN looked pretty effing capable until a few years after the last-gen consoles came out, around the Maxwell era, no?
Posted on Reply