Tuesday, August 1st 2023
AMD Radeon RX 6000/7000 GPUs Reduce Idle Power Consumption by 81% with VRR Enabled
AMD Radeon RX 6000 and RX 7000 series graphics cards, based on the RDNA 2 and RDNA 3 GPU architectures, have been benchmarked by the folks over at ComputerBase. However, these weren't regular performance benchmarks, but rather power consumption tests. According to their latest results, enabling Variable Refresh Rate (VRR) can lower the power consumption of AMD Radeon cards at idle. Using a 4K display with a 144 Hz refresh rate, ComputerBase benchmarked the Radeon RX 6800/6700 XT and RX 7900 XT, spanning both last-generation and current-generation graphics cards. The comparison matrix also includes the Intel Arc A770, NVIDIA GeForce RTX 3060 Ti, RTX 3080, and RTX 4080.
Regarding the figures, the tests compare desktop idle consumption, dual-monitor power consumption, window movement, YouTube with SDR at 60 FPS, and YouTube with HDR at 60 FPS, all done on a 4K 144 Hz monitor setup. You can see the comparison below, with the most significant reduction in power consumption coming from the Radeon RX 7900 XTX, which uses 81% less power in a single-monitor and 71% less in a dual-monitor setup.
Source:
ComputerBase.de
94 Comments on AMD Radeon RX 6000/7000 GPUs Reduce Idle Power Consumption by 81% with VRR Enabled
30 watts idle while on; most modern monitors have power-saving features, like the PC attached to them, so after 5-10 idle minutes it's off.
You're being hyperbolic. Nvidia and Intel are not that much better.
Some people buy nice things, and keep them for a few years, to make the most of it.
Would you buy a new car every year ?
In the grander scheme of things, reducing idle power for potentially millions of users benefits all of society.
If you're on a super low budget and looking for a super-high-end PC for the money you saved with years of hard work, by all means, knock yourself out, but let me have my opinion on it.
Edit: All I'm saying is that a high-end PC depreciates in value so fast that a little increase in your power bill is the least of your concerns. No, because it's pointless. Not to mention it's a bad example: a car is a car even 10 years after you've bought it, but your high-end PC won't play the latest games a few years later.
If the German is too difficult, use your browser’s inbuilt translator.
Go look at the original graphs. There is a lot of information there.
Go read the comments. They are better informed than many here, because they are using the source article as the basis of their comments.
And just to spell it out: VRR must be enabled in the driver, and it's required on the monitor too! Don't forget that. Otherwise the power consumption results don't change.
My idle (and most other) consumption is significantly lower than that too.
And as I said, the person sitting in front of it (or not) can configure it to boil the oceans, or not.
The constant shutting down and powering up does put stress on hard drives, and that's why they recommend you leave the system running. However, these days my current machine has no spinning drives; those live in a NAS. So my machine is either on or in sleep mode until I need it.
Computing is a relatively cheap hobby when you start looking at the rest of them.
With that said, the Radeons and the 4080 have large caches and should fit entire frame buffers in them (47.5 MiB is required for two 4K monitors at 8 bits per colour channel). The RAM, memory bus, and memory controller could then remain in a low-clock, low-voltage mode when doing 2D work. I'm not sure which cards can do that, but the high consumption indicates that they don't.
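As a quick sanity check of that 47.5 MiB figure, here is a minimal Python sketch; it assumes 3 bytes per pixel (8-bit R, G, B) with no alpha channel or row padding:

```python
# Framebuffer size for two 4K monitors at 8 bits per colour channel.
width, height = 3840, 2160
bytes_per_pixel = 3   # 8-bit R, G and B; alpha/padding ignored
monitors = 2

framebuffer_bytes = width * height * bytes_per_pixel * monitors
print(f"{framebuffer_bytes / 2**20:.1f} MiB")   # -> 47.5 MiB
```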
GRAPHICS CARD = Video Memory + GPU + Framebuffer Memory + Video Controller I/O + Connectors ---> Display
What's needed is a decoupling of the Video Controller I/O from the GPU, such that the GPU can idle while the Video Controller I/O stays synced to the Display.
Bug's statement is obvious from old-school graphics, where the CPU did the graphics heavy lifting into a shared framebuffer that the Video Controller I/O accessed independently via DMA, offering different display output formats at different sync rates.
In a modern graphics card these steps are not on discrete silicon (separate chips), so the "GPU" is doing much more than in yesteryear. That's why we require "features" like G-Sync and VRR as driver-driven functionality with display compatibility, whereas in the past the output was more standardised, maybe call it dumb, and the Video Controller I/O handled that standardised output decoupled from the performance of the CPU/GPU.
Well at least that's what I think bug meant!
Not to mention, even the Windows desktop and the web browser use your GPU these days. "Idle" isn't exactly the same "idle" it used to be during the Windows XP times. It's more like a low power state that is ideally just enough to provide 2D acceleration when it works right - and a bit more when it doesn't.
HotHardware: Maybe they haven't optimised everything yet. The single-monitor (1440p60) idle indeed turns off the GPU. The VRAM is running at 13 MHz, which yields sufficient bandwidth. But in multi-monitor idle mode the problem becomes obvious: the memory clock cannot adapt; it jumps from 13 MHz to 2487 MHz and stays there at all times.
From the TPU 7900 XTX review:
The 7900 XT has exactly the same memory clocks, but its consumption in multi-monitor idle and video playback is 1/6 lower than the 7900 XTX's. It also has 1/6 fewer memory chips and memory controllers. Funny, isn't it? This means that almost all of the 103-105 watts of idle power are funneled into the memory, memory bus, and memory controllers, which could run at 30 MHz or 100 MHz or something - if they were able to scale down. It may well be a hardware limitation, unfixable in software!
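A rough back-of-the-envelope sketch of that inference, using the ~103 W figure and the 1/6 ratio quoted above; the even split across memory partitions is an assumption, not a measurement:

```python
# If dropping 1 of 6 memory partitions (2 GDDR6 chips + 1 memory controller)
# cuts multi-monitor idle power by ~1/6, the memory subsystem accounts for
# essentially the whole figure.
xtx_idle_w = 103.0                     # 7900 XTX multi-monitor idle, ~103-105 W (quoted above)
xt_idle_w = xtx_idle_w * (1 - 1 / 6)   # "1/6 lower" -> ~86 W

per_partition_w = xtx_idle_w - xt_idle_w   # saving from one missing partition
memory_subsystem_w = per_partition_w * 6   # scaled back up to all six partitions

print(f"~{per_partition_w:.0f} W per memory partition")                # ~17 W
print(f"~{memory_subsystem_w:.0f} W implied for the memory subsystem")  # ~103 W, i.e. nearly the whole idle figure
```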
Then VRR reduces the required data rate, and memory clock can fall back to 13 MHz. Sure, there's GPU rendering of web pages, but it only happens once if the displayed content doesn't change or scroll.
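To illustrate why a lower refresh rate lowers the required data rate, here is a rough sketch of the scan-out bandwidth alone. It assumes uncompressed 24-bit pixels, ignores blanking and other overhead, and uses 48 Hz purely as an illustrative VRR floor for static desktop content:

```python
# Sustained read rate the VRAM must deliver just to refresh the display.
def scanout_gbps(width: int, height: int, refresh_hz: float, bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel * refresh_hz / 1e9

print(f"4K @ 144 Hz: {scanout_gbps(3840, 2160, 144):.2f} GB/s")  # ~3.58 GB/s
print(f"4K @  60 Hz: {scanout_gbps(3840, 2160, 60):.2f} GB/s")   # ~1.49 GB/s
print(f"4K @  48 Hz: {scanout_gbps(3840, 2160, 48):.2f} GB/s")   # ~1.19 GB/s
```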
It is nice to see this testing and these results.
And no, I'm not saying that lower idle power in your computer is all that saves you money; I'm just saying that it can be a not-so-insignificant part of it. If a GPU uses 40 W at idle and runs 24/7, that's about 28 kWh per month, which does put a dent in a 210 kWh monthly cap.
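A quick check of that example, using the 40 W idle figure and the 210 kWh cap from the comment (a 30-day month assumed):

```python
# Monthly energy of a GPU idling at 40 W around the clock.
idle_draw_w = 40              # watts, the figure used above
hours_per_month = 24 * 30

kwh_per_month = idle_draw_w * hours_per_month / 1000
print(f"{kwh_per_month:.1f} kWh per month")            # -> 28.8 kWh
print(f"{kwh_per_month / 210:.0%} of a 210 kWh cap")   # -> 14%
```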
And even if you do, if a single appliance will put you over it, what the hell kind of usage cap is that, and what else are you doing with it? Is it thát bad in Soviet Russia now, or is this some arcane construction with solar and a battery plus a super expensive backup? :D
If $5-10 a month extra on your power bill is too much, then you have better things to spend that $100 saving on. Or if you don't care about that $100, then why do you care about $5?
Also, what about your fully loaded power consumption, which is hundreds of watts higher on a flagship GPU than on a midrange one? How is that suddenly not a problem? Then you don't need a flagship GPU with a 4K monitor to begin with. Simple. ;)
Or the thing I mentioned that apparently hasn't been implemented: a variable memory clock, like in the 4080 and 4090. It's not sleeping, just walking instead of running.