Monday, March 6th 2023
Intel Quietly Fixes High Multi-monitor Power Draw of Arc GPUs
Without mentioning it in its driver changelog, Intel Graphics has quietly addressed the issue of unusually high power draw for its Arc A-series GPUs in multi-monitor setups. The older 101.4091 drivers had a typical single-monitor idle power draw of around 11 W, which would shoot up to 40 W at idle in multi-monitor setups. In our own launch-day review of the Arc A770, we logged a 44 W multi-monitor power draw. Intel now claims that multi-monitor idle power draw has been tamed, with the latest 101.4146 drivers the company released last week lowering it to "8 to 9 W" for multi-monitor setups, and "7 to 8 W" for single-monitor ones.
Sources:
Intel GPU Issue Tracker, VideoCardz
48 Comments on Intel Quietly Fixes High Multi-monitor Power Draw of Arc GPUs
Well, not for long. I ordered another monitor, so I'll see how much of an issue this really is. If I am to take at face value every comment on here about how bad the idle power consumption is (typically from non-AMD users), and how bad AMD is as a result, I am expecting all hell to break loose at the very least, or I am going to be very disappointed.
PC on for 16 hours a day at 30 watts, at 55p a unit. That's 27p a day; multiply by 365 days and it's £99. Add VAT and that's just shy of £120. But I was thinking of the £150 card, so my mistake there (not a full mistake if the £150 card is also affected).
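The arithmetic above can be sketched out in a few lines. This is a rough sketch of the commenter's own assumptions (30 W of extra idle draw, 16 h/day, 55p per kWh, and an assumed 20% UK VAT rate), not an official cost figure:

```python
# Back-of-envelope annual cost of extra GPU idle draw, per the comment above.
watts = 30            # extra idle draw in multi-monitor mode
hours_per_day = 16
price_per_kwh = 0.55  # GBP per "unit" (kWh)
vat = 0.20            # assumed UK VAT rate

kwh_per_day = watts / 1000 * hours_per_day      # 0.48 kWh/day
cost_per_day = kwh_per_day * price_per_kwh      # ~26p/day
cost_per_year = cost_per_day * 365              # ~£96/yr
cost_with_vat = cost_per_year * (1 + vat)       # ~£116/yr

print(f"{cost_per_day * 100:.1f}p/day, "
      f"£{cost_per_year:.0f}/yr, £{cost_with_vat:.0f} incl. VAT")
```

This lands slightly under the "just shy of £120" quoted above (26.4p/day rather than a rounded-up 27p), but the same ballpark either way.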
Curious, though: why does wasting power being brought up offend you so much?
There's mountains, there's molehills, and then there's the grain of sand you are trying to turn into a molehill. The competition has had similar issues for years; AMD only recently started fixing it (and it isn't totally fixed). Almost like drivers are hard. But I'm sure you could do a better job.

So, yeah, like $105. If that's a lot of money to you, the first question is why are you buying a $350 card, and second, why on earth is it on, at IDLE, for 16 hours a day? Not being used, just at IDLE. Seems like you have a bigger issue with waste from improper use of a PC at that point. If you have a card like this in a work PC, then ideally it should be DOING something for that timeframe. If it's just doing web browsing or other non-heavy work, then having said card is pointless and a far more efficient APU should be in your PC instead.

The problem is people bring up idle power like it is going to bankrupt the world because of a few watts. It's not, and claiming things like "the idle power from a year will cost as much as an A770" is totally incorrect. People are blowing this "idle power" issue WAY out of proportion, acting like it will take an entire paycheck and nobody but the rich can afford to run a PC, as if it were some gas-guzzling hypercar. It's not.
People will then take that incorrect information and use it to make ignorant decisions about hardware. "Oh, why would I buy an AMD card? Look how much higher its idle usage is; I'd better get an Nvidia card instead for $200 more" — and in the process waste nearly $200 out of ignorance of both math and money. They'll prattle on about how much more efficient their GPU is while ignoring the massive carbon ramifications of building a PC to play some bing bing wahoo. Gaming, at its core, is a massively wasteful hobby, and the moronic grandstanding about power use is supremely annoying.
Financial illiteracy is a major trigger. Like the imbeciles who bought Toyota Priuses on a 5-year loan at 8% interest because gas went to $5, instead of keeping their perfectly serviceable paid-off vehicle, and then prance around talking about how much money they are "saving". You have no idea what I think, and your uninformed rambling has no relation to what I think. I have on many occasions advocated for the use of nuclear power to eliminate fossil fuel usage and lamented the waste of resources in a variety of ways. If you think a couple dozen watts of electricity from cards for a niche hobby is a major issue, oh baby, just WAIT until I introduce you to the environmental catastrophe that is silicon chip production!
I'm with you that for a regular user the difference between 30 W and 10 W idle doesn't really move the needle enough to matter, but we should still criticize and applaud when companies fail at or fix these kinds of things. Like Assimilator said, it adds up; waste just for waste's sake is stupid. And we should fix that too! "Show me on this diagram here where power efficiency hurt you" :D

I think you're much more triggered than anyone else, but whatever. People make stupid decisions every day. Like how we're talking about idle power but ignoring the part where the cards are not that efficient in fps/W when running versus the competition, just like Intel CPUs, which are pretty terrible.
The cards had a problem that could be fixed, and it was fixed at least partially, and that's great! Wow, so much drama over something so simple. Effort versus impact: fixing the idle power of consumer products might not, generally speaking, have the greatest impact compared with industrial activities, but it's still something very much achievable with low effort.
By the way, do you even own an AMD card? Why do you even care?
Intel, Nvidia, and AMD all have issues with this.
After spending a lot of time with CRU on their forums (and Blur Busters) overclocking various displays, it seems to come down to how various monitors implement their timings — there are standards for LCDs, reduced blanking, and a second generation of reduced blanking.
As you reduce the blanking more and more, you can get more and more bandwidth to the displays for higher refresh rates, but you also begin to run into these issues where GPUs can't identify how much performance they need to run these non-standard setups (hence many displays saying things like 144 Hz with "160 Hz OC").
As an example, my 4K 60 Hz displays support DP 1.2 and 1.4 respectively, but internally they both have the same limit of around 570 MHz at stock, or 590 MHz overclocked. With reduced blanking that lets me get 65 Hz with 10-bit colour out of them, but they lose support for HDR and FreeSync by doing so, and some older GPUs here (like my 10-series GPUs) ramp up their idle clocks.
Despite using the same panel, the same HDMI standard, and the same bandwidth limits (despite different DP standards), they both implement different timings and blanking values. I understand the words, but this image apparently contains the visual explanation, even if I can't really grasp its meaning: shrinking the 'porch' gives more bandwidth to the display, but less time for the GPU to do its thing, so it clocks up to compensate.
Ironically, these factory overclocking methods for higher refresh rates can result in more blurring and smearing along with the higher GPU idle clocks, or compression artifacts like the Samsung panels are prone to on their high-refresh-rate displays (the 240 Hz displays have artifacting, but a custom refresh of 165 Hz is problem-free).
TL;DR: More idle time lets the pixels revert to black/neutral before changing to the next colour, even at the same refresh rate, and GPUs can also use that idle time to clock lower.
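The trade-off described above comes down to simple arithmetic: the pixel clock a mode needs is total pixels per frame (active plus blanking) times the refresh rate, so shrinking the blanking porches either lowers the required clock or frees headroom for a higher refresh at the same clock. A minimal sketch, with illustrative timing values (not taken from any specific EDID or timing standard) and a helper function name of my own choosing:

```python
# Pixel clock = (h_active + h_blank) * (v_active + v_blank) * refresh.
# Blanking values below are illustrative, not from a real EDID.

def pixel_clock_mhz(h_active, v_active, h_blank, v_blank, refresh_hz):
    h_total = h_active + h_blank
    v_total = v_active + v_blank
    return h_total * v_total * refresh_hz / 1e6

# 4K with generous blanking vs. a reduced-blanking mode
full  = pixel_clock_mhz(3840, 2160, 560, 90, 60)   # 594 MHz
rb    = pixel_clock_mhz(3840, 2160, 160, 62, 60)   # ~533 MHz
rb_65 = pixel_clock_mhz(3840, 2160, 160, 62, 65)   # ~578 MHz

print(f"full blanking: {full:.0f} MHz, reduced: {rb:.0f} MHz, "
      f"reduced @65Hz: {rb_65:.0f} MHz")
```

Note how the reduced-blanking mode at 65 Hz still needs less clock than the full-blanking mode at 60 Hz — which matches the "~570 MHz stock / 590 MHz OC" headroom described in the comment above, and also shows why the GPU gets less per-frame idle time in exchange.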
If AMD can fix it with subsequent driver updates, then logically it's not impossible.
If AMD can fix it with subsequent driver updates, then logically they can avoid it being a problem on release day.

Yes, I'm such a fanboy who doesn't like AMD that my current CPU and motherboard are from them. Seriously, grow up and learn how to make a proper argument.

If they made sure it wasn't a problem on launch day, they wouldn't get bug reports about it, and they wouldn't have to spend time and effort triaging those reports and fixing the issue. In other words, they'd save time by just taking the time before launch to make this work.

For the same reason that I care about climate change. For the same reason that your "argument" isn't an argument, just another attempt to deflect from the topic at hand. I will grant you that it's an even more pathetic attempt at deflection than your previous one, which is impressive in its own right considering how incredibly sad the previous attempt was.
"If X company did it, it must mean that Y company can do it" is a catastrophically unintelligent point you tried to make. Yeah, because the time it takes to fix something is always zero. You have no knowledge whatsoever of matters that have to do with software development, and it shows; you really should stop talking about this. You're clearly out of your element, and this was an incredibly dumb statement. And have you made one? I must have missed it. So you don't. I just find it interesting that the only people who are very vocal about this are the ones who don't own an AMD card.
Just got another monitor; this is what my idle power consumption looks like with a 1440p 165 Hz monitor and a 4K 60 Hz one:
22 W. Clearly unacceptable. Since this upsets you so much, can you get mad on my behalf and write them an angry email? Just copy-paste all your comments from this thread; I think that will suffice.
Blur Busters and the forums related to CRU cover it really well, since monitor overclocking (and underclocking) can trigger (or fix) the same problems. Because monitors use non-standard timings to push higher resolutions and refresh rates without using the latest HDMI or DP standards that support them, they just make custom settings to fit the available bandwidth, and the drivers can't identify WTF they need, so they run at full speed to make sure you get an image at all.
Will have a read of the Blur Busters forums, thanks.
I run my desktop only at 60 Hz, so that might be the actual reason I am not affected. When I tried 120 Hz it was also fine, but from my tests the timings only go out of spec above 120 Hz (at 144 Hz). I didn't check the GPU power state when I tested 144 Hz.
On older GPUs there was a 150 MHz (?) limit on some standards, then 300 MHz for single-link DVI (and HDMI), with dual-link being 600 MHz (the current common maximum without compression).
Almost all GPUs can't run all their ports at the same time (the 30 series was limited to 4 outputs at a time, despite some cards having 5 ports — and you can't run 4K 144 Hz on all 4 at once, either).
Then it comes down to the VRAM needing enough performance to refresh faster than the monitor's blanking interval, and that's the easiest thing to lower to get more bandwidth out of a display; it's how those 144 Hz displays have a 165 Hz "OC" mode.
These driver fixes would be adding in "common" situations they've tested and verified (4K 120 Hz with compression on two displays, for example) and enabling some default safeties, like forcing 8-bit colour depth or locking to 60 Hz with 3 (4?) displays. But all it takes is a user adding another display, enabling HDR, using some weird DP/HDMI converter, or changing from 8 to 10 or 12-bit colour depth, and suddenly the bandwidth requirements are all over the place.
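To see how quickly a bit-depth change swings the bandwidth requirements, you can compare the data rate a mode needs (pixel clock times bits per pixel, with three colour components per pixel) against a link's effective budget. A rough sketch — the link figures below are the commonly cited effective rates after 8b/10b coding (roughly 14.4 Gbps for HDMI 2.0 and 25.92 Gbps for DP 1.4 HBR3), so treat them as ballpark values, and the helper names are my own:

```python
# Effective link budgets after 8b/10b coding overhead (ballpark figures).
LINKS_GBPS = {"HDMI 2.0": 14.4, "DP 1.4 (HBR3)": 25.92}

def needed_gbps(pixel_clock_mhz, bits_per_component):
    # 3 colour components per pixel (RGB), uncompressed.
    return pixel_clock_mhz * 1e6 * 3 * bits_per_component / 1e9

# ~4K60 reduced-blanking pixel clock, at 8/10/12-bit colour depth
for depth in (8, 10, 12):
    rate = needed_gbps(533.25, depth)
    fits = [name for name, cap in LINKS_GBPS.items() if cap >= rate]
    print(f"{depth}-bit: {rate:.1f} Gbps -> fits on: {fits}")
```

The jump from 8-bit (~12.8 Gbps, fine on HDMI 2.0) to 10-bit (~16 Gbps, no longer fits) illustrates the point above: one settings toggle can push a setup from a tested, "common" configuration into one the driver has to handle conservatively.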
HDMI is the worse of the two with the crap they pulled calling anything "2.1"; not that it changed much, but allowing anything other than FRL6 was already completely stupid, and TMDS just added insult to injury.