
AMD Radeon RX 6900 XT

So how does techspot show the 6900XT (sans SAM) being the fastest card of all at 1080p and 1440p, and a bit slower than the 3090 at 4K, while this review shows it trailing at all resolutions?

This is a huge win for the 6900XT at the price. I guess we now get a 3080 Ti 20GB on TSMC 7nm very shortly as a panic response.
 
So how does techspot show the 6900XT (sans SAM) being the fastest card of all at 1080p and 1440p, and a bit slower than the 3090 at 4K, while this review shows it trailing at all resolutions?

This is a huge win for the 6900XT at the price. I guess we now get a 3080 Ti 20GB on TSMC 7nm very shortly as a panic response.

Panic response for something that isn't really widely available yet? And it's probably because techspot has fewer DX11 games in their review suite and different setups.
 
There's no stock, but here in AU the 6900XT is listed at $1400 and the 3090s are going for $2800-$3000.
That sheer dollar difference down under makes this a clear winner if you don't want ray tracing (or a 3080, if you do).
Cheapest I can see is $1599, where have you seen $1400? And I can see 3090s for a hair over $2600, not that it really matters.

To my eyes the clear choices are the 6800XT or 3080. The price hike for a 6900XT over a 6800XT makes exceptionally little sense; it basically exists to sell people the 6800XT, which appears to be, and is, much better value.
There's no clear winner this gen, what we have is even better... COMPETITION
I'll cheers to that! :toast:
 
There's no clear winner this gen, what we have is even better... COMPETITION
Yes, this exactly! And this is most excellent!

Once again late to the party in this thread, but I have to say it's refreshing to see AMD standing on more or less equal ground with NVidia! The 6900XT trades blows with the 3090 and it's interesting to see the results. NVidia still has the advantage in RTRT performance and VRAM (which will only really matter in 8K gaming and professional applications), but Radeon is standing side by side with the best Geforce and not flinching.

AMD, Welcome back to the premium GPU space! Well done indeed!
 
The pricing is the problem here, I guess ... but what's the point ... no stock from either nVidia or AMD, and whatever is out there is sold at almost double the price ... thank you COVID :mad::banghead:
 
Way overpriced. I will be looking at the 6800 myself; 1440p 144 Hz is my monitor's max.
 
@W1zzard

How come there isn't a comparison in the raytracing charts when overclocked?
I'd like to know if it has any effect on the RT performance at all.
There is only stock, with raytracing enabled compared to disabled.

These cards are hard to compare because you can't really compare them to Nvidia's second generation of RTX cards. The rasterization performance is much higher than Nvidia's first-generation RTX cards, so it's a tough thing to compare with anything, really.
 
I don't know about you guys, but as someone who plays a lot of Indie games that are and will be predominantly DX11 for some time... That DX11 overhead is a dealbreaker for me.

It is good to see AMD have a competitive product though, for most situations.
 
These cards are hard to compare because you can't really compare them to Nvidia's second generation of RTX cards. The rasterization performance is much higher than Nvidia's first-generation RTX cards, so it's a tough thing to compare with anything, really.
If you prefer RT, Nvidia is a better choice, at least until RDNA3 cards debut. For traditional rasterization-based games, AMD is the way to go IMO.
 
From the article conclusion:

Zen 3 is sold out everywhere

That's really not the case. Here in Romania, there is no shortage of 5800X. I bought one yesterday at MSRP + 2%, and in 30 minutes it will be delivered, which is why I can't sleep right now :). I could have also bought a 5900X at MSRP + 10%, but I wanted a CPU with only one CCD. I had much more trouble finding a good motherboard for it, since I'm switching from Intel. I didn't find the motherboard I wanted from my usual retailer, I had to order one from a more obscure shop, so it will only be delivered towards the end of the week, or even next week :(.

But the 5600X is indeed out of stock, and so is the 5950X. Anyway, if that statement were changed to "Zen 3 is sold out almost everywhere", I wouldn't necessarily disagree. Maybe Romania is special. Although we seem to be affected by GPU shortages just as much as the rest of the world, so I have a suspicion that even worldwide the Zen 3 shortages are not as bad as the GPU shortages.

Now back to the topic, as others, I'm a bit underwhelmed by the 6900 XT performance. I expected more. I'm only interested in 4K performance, and this is how it looks in the TPU 4K benchmarks, even with SAM and DDR4-3800:

Code:
in  6 games - 6900 XT is slower than the 3080 at 4K
in 13 games - 6900 XT is somewhere between the 3080 and 3090 at 4K
in  4 games - 6900 XT is faster than the 3090 at 4K

So, even with SAM and faster memory, it's closer to a 3080 than a 3090 at 4K. Taking raytracing into account, it's even worse. And I'm interested in doing some machine learning on my GPU, and the AMDs are not ideal for that, to say the least, for various reasons.
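
As a side note, here is a minimal sketch of how such a per-game tally could be produced; the frame rates in the dictionary are hypothetical placeholders, not the actual TPU numbers:

Code:
# Tally where the 6900 XT lands relative to the 3080 and 3090 per game.
# The frame rates here are made-up placeholders, not TPU's measured data.
results_4k = {
    # game: (6900 XT fps, 3080 fps, 3090 fps)
    "Game A": (61.0, 63.5, 70.2),
    "Game B": (75.3, 71.0, 78.9),
    "Game C": (82.1, 74.4, 80.0),
}

slower, between, faster = 0, 0, 0
for game, (rx, rtx3080, rtx3090) in results_4k.items():
    if rx < rtx3080:
        slower += 1
    elif rx > rtx3090:
        faster += 1
    else:
        between += 1

print(f"slower than 3080: {slower}, between: {between}, faster than 3090: {faster}")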

The only saving grace is the increased efficiency of the 6900 XT. I received a Gigabyte 3080 Vision OC as a birthday present from my colleagues this weekend, and the damn thing made me open my window to cool my room. In the winter. While idle at the desktop. I shudder thinking what it will be like in the summer if I can't find a solution. I'm still troubleshooting why it's so power hungry even when idle.

So, maybe there are some use cases where the 6900 XT makes sense over the NVidia cards after all. But personally, I think the 6900 XT is at least $200 more expensive than it should be.
 
AMD will probably sell at least 10x as many Zen 3 chiplets (or 5x as many chips?) by the end of the year compared with Ampere & RDNA2 cards combined. The margins are much higher, yields much better & capacity isn't as constrained.
 
I received a Gigabyte 3080 Vision OC as a birthday present from my colleagues this weekend, and the damn thing made me open my window to cool my room. In the winter. While idle at the desktop.
What on earth are you on about, at idle you'll be lucky if it draws 50w. I have a 3080, summer has just started in Australia and mine doesn't appreciably warm my room when gaming.

Most of the rest of what you're saying makes some sense but I just can't get behind that quote, if the 3080 did that to you there's a high likelihood that in an apples to apples scenario so did the last card/rest of the system.
 
Now I wonder what happens going forward for RDNA3?

I'd be keen to see if this 4-to-1 relationship gets revised to something like a 6-to-2 relationship. That might allow for more brute force as well as alternate-frame, half-resolution temporal ray-traced acceleration effects.
[attachment: 1607487101551.png]


I'm not sure what they'll do with Infinity Cache. I could see a minor bump to the size, especially after a node shrink, or potentially 7nm EUV as well. The other aspect is that it's split between two 64MB slabs, similar to CCXs, so I wonder if a monolithic 128MB slab with shared access is bound to happen eventually.

As for this bit on the CU scalars and schedulers, where is this going?
[attachment: 1607487978593.png]


I think maybe they'll double down, or potentially increase its overall design layout granularity and scheduling relationship by another 50%. With that in mind, if they bump up the Infinity Cache by another 64MB and make it all shared, a 50% increase in this area makes a lot more sense.

I want to know more about Radeon Boost: how configurable is it? Can you pick a custom downscale resolution target to adhere to? It seems like it would work well in practice; I'm just curious how adjustable it is. There are definitely people who might prefer downscaling the resolution from 4K to 1440p rather than 1080p, or even more custom targets in between, like LOD mipmap scaling, just more granular options for how much image fidelity versus performance is traded while in motion. I really like the idea a lot; I've just only seen that one slide on it, which isn't very detailed unfortunately.

I really think W1zzard should consider a chart for 4K with Radeon Boost enabled and SAM on and off. The way Smart Access Memory works, that is an interesting combination to look at, because they play into each other well, with Radeon Boost making SAM more appealing for people playing at high resolutions. You get a 7% SAM advantage at 1080p and 2% at 4K, so with Radeon Boost plus SAM you should land somewhere in the 2% to 7% ballpark, roughly 5% on average, though it could lean more towards 7% or 2% depending on how much scene activity is going on; when it matters, it should be closer to the 7% mark. If for no other reason, it would be interesting to see how Radeon Boost and SAM interact with the mixed rendering.
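
A rough, hypothetical sketch of that back-of-the-envelope blend, assuming the effective uplift is simply the 1080p and 4K SAM figures weighted by how often the boosted (lower) resolution is active; Radeon Boost's real behaviour is more complicated than this:

Code:
# Blend the SAM uplift figures from the review (~7% at 1080p, ~2% at 4K)
# by the share of time Radeon Boost is rendering at the lower resolution.
SAM_GAIN_1080P = 0.07   # review figure at 1080p
SAM_GAIN_4K = 0.02      # review figure at 4K

def blended_sam_gain(boost_active_share: float) -> float:
    """boost_active_share: fraction of frames rendered at the boosted resolution."""
    return boost_active_share * SAM_GAIN_1080P + (1.0 - boost_active_share) * SAM_GAIN_4K

for share in (0.25, 0.50, 0.75):
    print(f"{share:.0%} of frames boosted -> ~{blended_sam_gain(share):.1%} uplift")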
 
What on earth are you on about, at idle you'll be lucky if it draws 50w. I have a 3080, summer has just started in Australia and mine doesn't appreciably warm my room when gaming.

Most of the rest of what you're saying makes some sense but I just can't get behind that quote, if the 3080 did that to you there's a high likelihood that in an apples to apples scenario so did the last card/rest of the system.

That's why I said I'm troubleshooting the issue; I don't think this is normal, and I'm trying to determine if it's a problem with my system or the card. Apparently, the RAM stays at full speed even when idle, and as a result the card uses 21% of its power target at idle. It's a 320 W card, so 21% would be something like 70 W, which it blows towards me constantly if I keep the side of my case open. The fan is almost never idle.
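
For reference, the arithmetic behind that estimate (320 W is the board power target mentioned above; the exact idle draw will vary):

Code:
# Idle draw implied by a 21% power-target reading on a 320 W card.
BOARD_POWER_LIMIT_W = 320
IDLE_POWER_TARGET_FRACTION = 0.21

idle_watts = BOARD_POWER_LIMIT_W * IDLE_POWER_TARGET_FRACTION
print(f"Implied idle draw: ~{idle_watts:.0f} W")  # ~67 W, close to the ~70 W quoted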

My previous card was a 2080, and it never did this, on the same system.
 
Has BAR size been benchmarked at different aperture size settings for power consumption yet? I wonder what kind of impact it has on that; surely TDP goes up a bit, but maybe not too badly, and likely mostly in line with the GPU uplift in any case. Still, it's something to look at and consider, and it makes me wonder whether that played any role in why it got set at 256MB and forgotten, or set aside, until now.
 
AMD doesn't use shunts, the power draw is estimated internally in the GPU afaik

Indeed. They have spoiled any fun of doing hard OC.

Hoping the AIB versions will have a BIOS with a decent power limit.
 
AMD should probably consider a special form of Radeon Boost that applies just to the RTRT elements, adjustable between 480p/720p/1080p on RDNA 2 for the time being, and revised and scaled upward later on newer hardware. It might not be a gigantic reduction in RTRT image quality relative to the performance gains, since most of the scene is still ultimately rasterized. If they could add that as a software option to RDNA 2, it would change the RTRT battlefield quite a bit, at least until Nvidia follows suit. Though is there even a way for the end user to check what resolution the RTRT adheres to? I know you can adjust the quality, but does it specify the resolution or simply the quality, which could be determined by several factors like the number of light rays and bounces?
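
To get a feel for the potential savings, a simple sketch under the simplifying assumption that primary ray count scales with pixel count; real RT cost also depends on bounce counts, denoising, and how much of the frame uses RT effects:

Code:
# Share of the native-resolution primary ray budget at lower RT resolutions,
# assuming ray count scales linearly with pixel count (a simplification).
NATIVE = (3840, 2160)
RT_TARGETS = {"1080p": (1920, 1080), "720p": (1280, 720), "480p": (854, 480)}

native_pixels = NATIVE[0] * NATIVE[1]
for name, (w, h) in RT_TARGETS.items():
    share = (w * h) / native_pixels
    print(f"RT at {name}: ~{share:.0%} of the 4K ray budget")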
 
What on earth are you on about, at idle you'll be lucky if it draws 50w.

I think I got to the bottom of it. Using multiple monitors triggers the problem with my Gigabyte RTX 3080 Vision OC. I have 2 or 3 displays connected at all times: a 4K @ 60 Hz monitor over DisplayPort, a 3440x1440 monitor at 100Hz, also over DisplayPort, and a 4K TV at 60Hz HDR, over HDMI, which I usually keep turned off.

After closing all applications, it still refused to reduce GPU memory speed. But I noticed that when Windows turns off my displays, the GPU memory frequency and power usage finally go down. So, I disconnected my 4K monitor. The power usage went down to 7%, and the memory frequency dropped from 1188MHz to 51MHz. I turned on the 4K TV instead, and the power usage and memory frequency remained low. I turned off the 4K TV again and reconnected the 4K monitor. The power usage and memory frequency went up again. I disconnected the 3440x1440 display, and the frequency and power usage dropped. I turned on the 4K TV, and the power usage and memory frequency remained low.

So, in short, if I connect both my monitors, over DisplayPort, the memory frequency never goes down. As a final experiment, I connected the 3440x1440 display over HDMI, at 50Hz. There were some oscillations, depending on which apps were open, but the GPU power usage and memory frequency remained low, for the most part.
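
For anyone who wants to reproduce this, here is a minimal monitoring sketch that polls nvidia-smi once per second while displays are connected and disconnected; it assumes nvidia-smi is on the PATH (stop it with Ctrl+C):

Code:
# Print GPU memory clock and board power once per second via nvidia-smi,
# so you can watch whether the memory clock drops after changing displays.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=clocks.mem,power.draw",
         "--format=csv,noheader,nounits"]

while True:
    out = subprocess.check_output(QUERY, text=True).strip()
    for line in out.splitlines():  # one line per GPU
        mem_clock, power = (s.strip() for s in line.split(","))
        print(f"memory clock: {mem_clock} MHz, board power: {power} W")
    time.sleep(1)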

So, I'm guessing it really doesn't like having multiple monitors at high refresh rates and resolutions connected, especially over DisplayPort. This is how the power and frequency usage looked while I was disconnecting/connecting various monitors:

[attachment: AORUS_2020-12-09_07-36-03.png]


The thing is, I looked at all the 3080 TPU reviews, and none of them mentioned the GPU memory frequency being higher when idle and using multiple monitors, unless I missed something.

@W1zzard, have you seen anything like this on any of the 3080s in your tests, with the GPU memory frequency never going down while using multiple monitors? You have a table with clock profiles in each GPU review, and for all your 3080 reviews you listed the multi-monitor GPU memory frequency as 51MHz. How exactly did you test that? How many monitors, at which resolutions/refresh rates, and how were they connected? DisplayPort, or HDMI? If there were just a couple of monitors at low resolutions, then that might explain the difference to my experience with the Gigabyte RTX 3080 Vision OC.
 
In the negatives, it lists:

"Overclocking requires power limit increase"

Is there a piece of computer hardware that when overclocked DOESN'T require a power limit increase?
All the custom design RX 6800 XT cards overclock just fine without power limit increase. "power limit increase" = you must increase the power limit slider in radeon settings or OC will not do anything.

Obviously overclocking always increases power consumption, that's not what I meant

How come there isn't a comparison in the raytracing charts when overclocked?
RT is simply not important enough at this time. I test SO many things, reviews need to be finished in a reasonable timeframe, so I have to make compromises.

That's really not the case. Here in Romania, there is no shortage of 5800X
Congrats on your new processor. The supply situation is definitely not normal, i.e. not one where anyone can get any CPU at reasonable prices

Has BAR size been benchmarked at different aperture size settings for power consumption yet
You can't adjust the BAR size; the size == VRAM size, that's the whole point of mapping all GPU memory into CPU address space. Obviously it does not "use" the whole VRAM. I also suspect some secret sauce here, i.e. per-game optimizations in how data is transferred; AMD hinted at that in the press briefings
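
As an aside, on Linux you can check which BARs a GPU actually exposes (and whether the large resizable BAR covering all of VRAM is present) by reading sysfs. A minimal sketch; the PCI address is a hypothetical example you would replace with your own from `lspci | grep VGA`:

Code:
# List the memory BAR sizes of a PCI device from Linux sysfs.
# The address below is a made-up example; substitute your GPU's address.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # hypothetical example

resource_file = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/resource")
for i, line in enumerate(resource_file.read_text().splitlines()):
    start, end, _flags = (int(x, 16) for x in line.split())
    size = end - start + 1 if end > start else 0
    if size >= 1 << 20:  # only show MiB-sized regions (the memory BARs)
        print(f"resource {i}: {size // (1 << 20)} MiB")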

@W1zzard, have you seen anything like this on any of the 3080s in your tests, with the GPU memory frequency never going down while using multiple monitors? You have a table with clock profiles in each GPU review, and for all your 3080 reviews you listed the multi-monitor GPU memory frequency as 51MHz. How exactly did you test that? How many monitors, at which resolutions/refresh rates, and how were they connected? DisplayPort, or HDMI? If there were just a couple of monitors at low resolutions, then that might explain the difference to my experience with the Gigabyte RTX 3080 Vision OC.
It's detailed on the power page in the expandable spoiler. Two monitors: 1920x1080 and 1280x1024, intentionally mismatched, one DVI, one HDMI, intentionally mismatched.

I think you are seeing increased clocks due to the refresh rate? Try going to 75 Hz or even 60 Hz.

Would love to hear more about this, could be good input so I can adjust my testing, in a separate thread please
 
Not a bad result for the 6900XT considering the price difference with the 3090. But then again, at the price of the 6900XT, that is definitely not my card.
 
It's detailed on the power page in the expandable spoiler. Two monitors: 1920x1080 and 1280x1024, intentionally mismatched, one DVI, one HDMI, intentionally mismatched.

I think you are seeing increased clocks due to the refresh rate? Try going to 75 Hz or even 60 Hz.

Would love to hear more about this, could be good input so I can adjust my testing, in a separate thread please

Power consumption tests may need higher-bandwidth monitors to be relevant. I've done some quick testing here, and there does seem to be a threshold at which GPUs ramp up their multi-monitor consumption... gah, it'd be a shitty expense to add a high refresh display (or two) to a benchmarking system.
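
As a rough illustration of where such a threshold might come from, a sketch comparing raw pixel rates of different display setups; it ignores blanking intervals, bit depth and DSC, so treat it as an order-of-magnitude comparison only (the 4K60 + 3440x1440@100 combo is the one described earlier in the thread):

Code:
# Raw pixel rate (width x height x refresh) per display setup, summed.
def pixel_rate(width: int, height: int, refresh_hz: int) -> int:
    return width * height * refresh_hz

setups = {
    "TPU bench: 1920x1080@60 + 1280x1024@60":
        pixel_rate(1920, 1080, 60) + pixel_rate(1280, 1024, 60),
    "4K@60 + 3440x1440@100":
        pixel_rate(3840, 2160, 60) + pixel_rate(3440, 1440, 100),
}

for name, rate in setups.items():
    print(f"{name}: {rate / 1e6:.0f} Mpixel/s")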
 
wait, you can get those fancy little dongles for fake monitors - they'd be perfect for simulating extra screens without actually needing them
(random amazon image for example)

 