
RX 5700 XT - 75 Hz/144 Hz/non-standard refresh rates causing VRAM clock to run at max

As another data point, my 6600xt has clocked down properly at all resolutions since I got it about 2 months ago, but that's only through 3 or so driver updates so far. And the idle power draw is ridiculously low at 4W. My gaming PC's total idle power usage is 24-25W from the wall after this change. Crazy low.
It makes sense since the 6600 XT is newer than the 6900 XT.
 
As another data point, my 6600xt has clocked down properly at all resolutions since I got it about 2 months ago, but that's only through 3 or so driver updates so far. And the idle power draw is ridiculously low at 4W. My gaming PC's total idle power usage is 24-25W from the wall after this change. Crazy low.
Yeah, RDNA2 is ridiculously efficient, and idle power for the smaller cards is fantastic. Really promising for the efficiency of future architectures as well - AMD catching up with and then surpassing Nvidia in GPU efficiency in just two generations is really impressive.
It makes sense since the 6600 XT is newer than the 6900 XT.
Shouldn't be a meaningful difference - those dice were taped out within a few months of each other, and it's highly unlikely that the newer die has any notable hardware changes. Most likely some driver tweak that applied to the 6900 XT didn't apply to the 6600 XT, which can be down to pretty much anything.
 
Most likely some driver tweak that applied to the 6900 XT didn't apply to the 6600 XT, which can be down to pretty much anything.
The other way around, otherwise I agree.

With RDNA 3, it is possible that AMD will take the performance crown again for the first time since the R9 290X.
 
1. Encounter monitor bugs
2. Change drivers to run higher clocks as a short-term fix
3. Patch in fixes for various monitors/connection types over time
← You are here
4. Release new products with hardware mitigation for the issue (seems like 6000 series cards have this, tbh)

Nvidia went through this too; high-refresh-rate, high-res monitors change things. I bet even Nvidia would have higher idle power at 4K120.
 
Nvidia went through this too; high-refresh-rate, high-res monitors change things. I bet even Nvidia would have higher idle power at 4K120.
My former 1080 Ti didn't even care when I had 1440p 144 Hz + 1080p 120 Hz connected. It only increased clocks when I streamed something, and even then not to full clocks.
Release new products with hardware mitigation for the issue (seems like 6000 series cards have this, tbh)
Very possible, and at least it had more mature drivers from the get-go compared to older RDNA 2 cards.
 
My former 1080 Ti didn't even care when I had 1440p 144 Hz + 1080p 120 Hz connected. It only increased clocks when I streamed something, and even then not to full clocks.

Very possible, and at least it had more mature drivers from the get-go compared to older RDNA 2 cards.
Nvidia had issues with high-refresh HDMI, and upped their DisplayPort versions to fix it.

My brother has a 165 Hz G-Sync monitor, and on his GTX 1080, 165 Hz caused it to clock up while 144 Hz did not.
 
165 Hz is the most unnecessary refresh rate increase ever anyway, comparable to 120 -> 144 Hz. Useless, no tangible benefits, just more worries about hitting FPS targets.
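Just to put rough numbers on that, the per-frame time saving shrinks with every step up (quick Python arithmetic, nothing more):

```python
# Frame time in milliseconds for common refresh rates.
for hz in (60, 120, 144, 165, 240):
    print(f"{hz:3d} Hz -> {1000 / hz:.2f} ms per frame")

# 120 -> 144 Hz saves ~1.39 ms per frame; 144 -> 165 Hz saves only ~0.89 ms.
```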
 
165 Hz is the most unnecessary refresh rate increase ever anyway, comparable to 120 -> 144 Hz. Useless, no tangible benefits, just more worries about hitting FPS targets.
You dare insult my glorious refresh rate?
(Nah, I totally agree.)
I only went 165 Hz because I figured that even running at 144 or 120, the higher-refresh models would likely have newer tech and not be a rehashed five-year-old design.
 
You dare insult my glorious refresh rate?
(Nah, I totally agree.)
I only went 165 Hz because I figured that even running at 144 or 120, the higher-refresh models would likely have newer tech and not be a rehashed five-year-old design.
Plus, it really is not a problem with a 3090. :laugh: I will probably skip the GPUs this year, my 2080 Ti is clocked so high, it's nearly as fast as a 3080, so it doesn't feel "dated" at all.
 
Apologies to bump this thread, but I would like to add this since I am using an RX 6900 XT (for games) and an RTX 3090 (for work/ML). I have taken the RTX 3090 out since I'm on vacation and solely play Apex Legends (which runs "better" on the RX 6900 XT since it's Source-based).

What I found is that one side of my monitor (Samsung Odyssey G9 Neo, running at 1440p 120 Hz per side in PDP mode) was running at 10 bpc Color Depth (DisplayPort) while the other was running at 8 bpc, which is its max (HDMI 2.0 in PDP mode / 2.1 solo). The AMD driver does not downclock the VRAM at idle if the Color Depths are different, so setting the DP side of the screen to 8 bpc to match the other side allows it to idle properly.
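For anyone curious about the numbers behind that mismatch, here is a rough per-head data-rate estimate (just a sketch; the ~20% blanking overhead is an assumption, not the exact timings the G9 Neo actually negotiates):

```python
# Rough uncompressed data rate for one half of the G9 Neo in PDP mode
# (2560x1440 at 120 Hz per side), RGB with no chroma subsampling assumed.
def head_rate_gbps(width, height, refresh_hz, bpc, blanking=0.20):
    bits_per_pixel = 3 * bpc                        # R, G and B components
    active = width * height * refresh_hz * bits_per_pixel
    return active * (1 + blanking) / 1e9            # bits/s -> Gbit/s

for bpc in (8, 10):
    print(f"1440p120 at {bpc} bpc: ~{head_rate_gbps(2560, 1440, 120, bpc):.1f} Gbit/s")

# Roughly 12.7 Gbit/s vs 15.9 Gbit/s - the two halves ask for noticeably
# different data rates when the Color Depths don't match.
```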

This issue does not occur on my RTX 3090, as apparently the driver supports having different Color Depths for each monitor. I would run both on DisplayPort, but the G9 Neo only has 1 DP port and 2 HDMI 2.1 ports, so I have to use a DP 1.4a to HDMI 2.1 converter (the sole HDMI connector on the video card is connected to an HDTV).
 
I am running my 5700 XT with a PG279Q at 144 Hz with no problem; the VRAM stays at 100 MHz at idle.
Didn't try with newer drivers... I am still on 19.9.1.
I thought the Asus PG279Q only supports G-Sync? That's all mine supports. Did they come out with a FreeSync version later, or maybe a variant that supports both FreeSync and G-Sync?
 
Apologies to bump this thread, but I would like to add this since I am using an RX 6900 XT (for games) and an RTX 3090 (for work/ML). I have taken the RTX 3090 out since I'm on vacation and solely play Apex Legends (which runs "better" on the RX 6900 XT since it's Source-based).

What I found is that one side of my monitor (Samsung Odyssey G9 Neo, running at 1440p 120 Hz per side in PDP mode) was running at 10 bpc Color Depth (DisplayPort) while the other was running at 8 bpc, which is its max (HDMI 2.0 in PDP mode / 2.1 solo). The AMD driver does not downclock the VRAM at idle if the Color Depths are different, so setting the DP side of the screen to 8 bpc to match the other side allows it to idle properly.


This issue does not occur on my RTX 3090, as apparently the driver supports having different Color Depths for each monitor. I would run both on DisplayPort, but the G9 Neo only has 1 DP port and 2 HDMI 2.1 ports, so I have to use a DP 1.4a to HDMI 2.1 converter (the sole HDMI connector on the video card is connected to an HDTV).
10-bit uses more bandwidth; it could be as simple as dropping the bandwidth putting you back under the threshold that raises the clocks.

What happens if you try 6/6 and 6/8? (Apart from looking terrible.)
 
10-bit uses more bandwidth; it could be as simple as dropping the bandwidth putting you back under the threshold that raises the clocks.

What happens if you try 6/6 and 6/8? (Apart from looking terrible.)

I figured it was a bandwidth limitation because if I set both monitors to 60 Hz, the driver exposes the 10-bit option for both of them, but if I set the HDMI-connected display to 120 Hz, I only get the option for 8 bpc. The DisplayPort-connected one offers 6, 8 and 10 bpc, since DP normally has the bandwidth for it.
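Comparing that against the usable link payloads lines up with what the driver offers (a sketch reusing the same rough ~20% blanking assumption; the payload figures are the nominal rates after 8b/10b encoding):

```python
# Why the HDMI 2.0 side tops out at 8 bpc at 120 Hz: compare the required
# uncompressed data rate against the usable payload of each link.
LINK_PAYLOAD_GBPS = {
    "HDMI 2.0 (TMDS)": 14.4,   # 18 Gbit/s raw, 8b/10b encoded
    "DP 1.4 (HBR3)":   25.9,   # 32.4 Gbit/s raw, 8b/10b encoded
}

def required_gbps(width, height, refresh_hz, bpc, blanking=0.20):
    return width * height * refresh_hz * 3 * bpc * (1 + blanking) / 1e9

for bpc in (8, 10):
    need = required_gbps(2560, 1440, 120, bpc)
    for link, cap in LINK_PAYLOAD_GBPS.items():
        verdict = "fits" if need <= cap else "does NOT fit"
        print(f"1440p120 {bpc} bpc needs ~{need:.1f} Gbit/s -> {verdict} in {link} ({cap} Gbit/s)")
```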

Interestingly, at 60 Hz the HDMI-connected display offers 8, 10 and 12 bpc, but no 6 bpc.


At 60 Hz, the DisplayPort-connected one offers 6, 8 and 10 bpc.


A mix of 6 bpc and 8 bpc also causes the VRAM clocks to shoot up.
 
Part of what I read and recall on this was that at the GPU level you have clock generators for the displays.
This is from fuzzy memory, so I'm sure I'll get at least the terminology wrong.

VGA, DVI and HDMI below 1.2 had a clock rate that varied with the data being sent; DP had a fixed rate, not a varying one.
Something about DP being packet-based, vs. the others being more like VGA, where everything revolves around the timings of the horizontal and vertical scan rates, etc.

A lot of video cards shared the clock gens, so you could use 3x native DP + HDMI + DVI, but if you used a passive DP adaptor to anything else, you'd find yourself limited to just three total displays.

There's something I've forgotten here, but the final part is that using all the bandwidth of DP (natively or via adaptors) bypassed those ready-made clock gens and used more of the card itself (specifically the VRAM), requiring the clocks to be higher at idle to avoid issues.

Comments like this from Toasty (the author of CRU) tie into the probably important bits I've forgotten: 'VRAM has to update faster than the monitor does, you'll get flickering or blackouts on the screen'


This guy found the limit of how many older displays a 5700 XT can support (two; he tried three):
DP to DVI adapter not working on Radeon RX 5700 XT - AMD Community



The things I'm not sure of are:
Is this fix based on assumptions for the older display types, or is it necessary on DP too?
Do DP's and modern HDMI's fixed data rates (instead of variable ones) just assume they need the max possible bandwidth and clock the VRAM up just in case?
I know Nvidia ditched VGA support ASAP; is it cards with support for older standards that have to do this?
Is this something that changed VRAM tech solves? Is it simply that GDDR6X cards can handle X bandwidth while GDDR6 or GDDR5 can't? (Testing a 1070 Ti vs a 1080 would cover this, as the key difference there is their VRAM.)

This could be a drawback of maintaining legacy support for older monitor tech, or it could be a software safeguard for certain combinations of displays and adaptors that gets triggered even when it's not needed.
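To put very rough numbers on how small the between-frame window actually is (back-of-the-envelope only; the ~41 blanking lines are an assumption, real monitors negotiate different timings, and CRU can pad extra lines in):

```python
# Approximate vertical blanking time - the window in which the card would
# have to retrain memory clocks without disturbing scan-out.
def vblank_us(active_lines, refresh_hz, blank_lines):
    frame_time_us = 1e6 / refresh_hz
    total_lines = active_lines + blank_lines
    return frame_time_us * blank_lines / total_lines

for hz in (60, 120, 144, 165):
    print(f"1440p @ {hz} Hz: ~{vblank_us(1440, hz, blank_lines=41):.0f} us of vertical blanking")

# ~460 us at 60 Hz shrinking to ~170 us at 165 Hz for the same blanking line
# count - which is why adding blanking lines with CRU can widen the window.
```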
 
I don't think the packet-based thing ends up making much difference in the end. If you think about it, in theory you could perhaps burst a whole frame into a buffer in the monitor and then have time to do whatever you like for a while. But in order to send it down in, say, half the time, you would need double the bandwidth. Manufacturers are going to provision enough bandwidth and no more (or perhaps even slightly less than enough, leading to this thread). There are a lot of other problems too, like getting everyone to play ball with your scheme.

So at the end of the day you have just enough time to send everything along, packet based or not.

Is this fix based on assumptions for the older display types, or is it necessary on DP too?
It's necessary any time there is insufficient time to change the DDR speed during the blanking between frames. It doesn't really matter what link you are using.

Is this something that changed VRAM tech solves? Is it simply that GDDR6X cards can handle X bandwidth while GDDR6 or GDDR5 can't? (Testing a 1070 Ti vs a 1080 would cover this, as the key difference there is their VRAM.)
It's not about the bandwidth; it's about having sufficient real time to change clock speeds. If a new RAM tech solves it, it will be because they found a way to change speeds ultra fast.

I'm not sure what AMD/Nvidia are doing in their drivers/products to deal with this; it could be any number of tricks, and they never talk about this issue. As it stands, the problem will continue to get more prevalent and harder to solve as monitor refresh rates increase. The higher the refresh rate, the less time there is between frames to change clock speeds. Once you go high enough, it doesn't matter how much blanking you add in with CRU; there will just never be enough time.

The permanent solution would be some sort of hardware buffer, like a pair of 80 MB RAM chips on the graphics card that are purely there to hold the output frame, so the display output can read from those while the main GDDR does whatever it likes.
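As a rough sizing check on that idea (plain arithmetic, ignoring padding/alignment, and not a claim about any actual product):

```python
# How big is one uncompressed RGB output frame?
def frame_mb(width, height, bpc):
    return width * height * 3 * bpc / 8 / 1e6   # bits -> decimal megabytes

for name, (w, h) in {"1440p": (2560, 1440),
                     "4K": (3840, 2160),
                     "G9 Neo (5120x1440)": (5120, 1440)}.items():
    for bpc in (8, 10):
        print(f"{name} @ {bpc} bpc: ~{frame_mb(w, h, bpc):.1f} MB per frame")

# Even a 10 bpc 4K frame is only ~31 MB, so a pair of ~80 MB chips could hold
# double-buffered output for a couple of heads with room to spare.
```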
 
I don't think the packet-based thing ends up making much difference in the end. If you think about it, in theory you could perhaps burst a whole frame into a buffer in the monitor and then have time to do whatever you like for a while. But in order to send it down in, say, half the time, you would need double the bandwidth. Manufacturers are going to provision enough bandwidth and no more (or perhaps even slightly less than enough, leading to this thread). There are a lot of other problems too, like getting everyone to play ball with your scheme.

So at the end of the day you have just enough time to send everything along, packet based or not.


It's necessary any time there is insufficient time to change the DDR speed during the blanking between frames. It doesn't really matter what link you are using.


It's not about the bandwidth; it's about having sufficient real time to change clock speeds. If a new RAM tech solves it, it will be because they found a way to change speeds ultra fast.

I'm not sure what AMD/Nvidia are doing in their drivers/products to deal with this; it could be any number of tricks, and they never talk about this issue. As it stands, the problem will continue to get more prevalent and harder to solve as monitor refresh rates increase. The higher the refresh rate, the less time there is between frames to change clock speeds. Once you go high enough, it doesn't matter how much blanking you add in with CRU; there will just never be enough time.

The permanent solution would be some sort of hardware buffer, like a pair of 80 MB RAM chips on the graphics card that are purely there to hold the output frame, so the display output can read from those while the main GDDR does whatever it likes.
Would dual-ported WRAM help with this issue?
 
I suppose it could help if you used it for the buffer chip. It might let you use half as much if it saves you from needing to double buffer.
 