
Intel Outs Workaround for High Arc A770 Idle Power: Force PCIe L1 ASPM in Motherboard BIOS

btarunr

Editor & Senior Moderator
The Intel Arc A770 "Alchemist" graphics card has an idle power-draw problem: it pulls 44 W (card-only) when idling. This would have been acceptable some 15 years ago, but GPU idle power-draw has come a long way since. The reigning Goliath GeForce RTX 4090 pulls just 21 W when idling, and the RTX 3070, the card the A770 was extensively compared against, only pulls 9 W; that's about seven LED downlights' worth of power difference between the A770 and RTX 3070. Intel has a workaround for this problem: set the PCI-Express Active State Power Management (ASPM) option to L1 mode in your motherboard's UEFI BIOS setup program.

The Intel Xe-HPG "Alchemist" graphics architecture reportedly relies on PCIe Gen 2-era L0 and L1 ASPM, which need to be forced via software settings. To do this, find the PCIe ASPM settings in your BIOS setup and enable them with the "L1" setting. Then make your way to Power Options in the Windows Control Panel, edit your active power scheme, and manually set PCI Express > "Link State Power Management" to "Maximum power savings." This affects the power-management behavior and performance of all PCIe devices in your system, including NVMe SSDs, not just the graphics card. Intel did not publish power-draw numbers for this workaround, but we intend to test it as soon as we can.
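For those who'd rather script the Windows half of the workaround, the same setting can be changed with the built-in powercfg tool. This is a minimal sketch from an elevated PowerShell prompt, assuming the standard SCHEME_CURRENT, SUB_PCIEXPRESS and ASPM aliases are present (index 2 corresponds to "Maximum power savings"):

Code:
# Set PCI Express "Link State Power Management" to "Maximum power savings"
# on the active power scheme, for both AC and battery operation
powercfg /setacvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 2
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PCIEXPRESS ASPM 2
# Re-apply the active scheme so the change takes effect immediately
powercfg /setactive SCHEME_CURRENT

Note that per Intel's guidance the BIOS-side ASPM option still has to be enabled for the link to actually enter L1; the Windows toggle alone is not enough.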



View at TechPowerUp Main Site | Source
 
This will affect whatever device uses PCI Express, not just the SSD. A BIOS release could accommodate a fix for this useless power draw.
 
Intel might find themselves going old-school with the solution, in the form of a software-based vBIOS update.
 
Somewhat like setting PBO in the BIOS for Zen 4 to get lower temps/power draw: an issue with an easy workaround, but one that should have been avoided in the first place.
 
Really weird they don't give any numbers with this... makes me not have confidence it affects it that much.
Also Gen 2? I know it has been in development hell for some time now, but jeez.
 
Why would anyone not enable this as standard?
 
The idle power of this card is so bad, I'm surprised it's allowed to be sold in the EU. Just criminally bad; the product should never have been released in that state.

Props to TPU for highlighting the fact (a lot of review sites don't bother).
 
Can we trust what HWiNFO shows?
For my 2080 Ti, it shows "GPU power" at ~3 W, but there is also a "GPU core (NVVDD) output power" which is ~5.5 W.

But in any case, the 2080 Ti seems to be much better at idle power than new-generation cards.
 
The idle power of this card is so bad, I'm surprised it's allowed to be sold in the EU. Just criminally bad; the product should never have been released in that state.

Props to TPU for highlighting the fact (a lot of review sites don't bother).

Frankly the difference between 21W and 43W is high, but in the end doesn't matter that much, only 250W more per day or so.
 
What a workaround: just because Intel's hardware is poorly designed, all PCI Express devices must now be forced into maximum savings mode. Simply amazing.
Please, someone with an Arc card, quickly test this workaround to see what's what. It is indeed odd that Intel does not provide the workaround's idle power numbers.
 
What a workaround: just because Intel's hardware is poorly designed, all PCI Express devices must now be forced into maximum savings mode. Simply amazing.
Why is that bad? Before this news, I never even considered that there is a single person on the planet who doesn't use PCI-express power saving as standard.
 
Really weird they don't give any numbers with this... makes me not have confidence it affects it that much.
Also Gen 2? I know it has been in development hell for some time now, but jeez.
Those features were introduced with Gen 2 and are still in use by Gen 3/4/5. The wording in the article is wrong.
Source: https://pcisig.com/making-most-pcie®-low-power-features

It's a hardware flaw. SSD might hamper performance if you do.
How exactly? Most SSDs need 6 to 10 ms to come out of L1. After that, performance is identical.
Even with your drive's power-saving timeout set to 1 min, your main OS/game SSD will never go to sleep during usage.
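If you want to check what a given device actually advertises and negotiates, on Linux lspci exposes both the link's rated exit latencies and the ASPM states currently enabled. A minimal sketch, assuming a hypothetical device address of 03:00.0:

Code:
# LnkCap lists the advertised L0s/L1 exit latencies,
# LnkCtl shows which ASPM states are currently enabled on the link
sudo lspci -vv -s 03:00.0 | grep -E "LnkCap:|LnkCtl:"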

Why is that bad? Before this news, I never even considered that there is a single person on the planet who doesn't use PCI-express power saving as standard.
Exactly!
 
Why is that bad? Before this news, I never even considered that there is a single person on the planet who doesn't use PCI-express power saving as standard.
I mean, power saving is not bad per se (it is in fact good), but having to explicitly set it to maximum, when newer Windows is already designed with optimum power in mind, is just proof of poorly designed Intel hardware, with the excuse/workaround of having to use obscure settings while other hardware can work with Windows' default "optimum" power policies.
 
Frankly the difference between 21W and 43W is high, but in the end doesn't matter that much, only 250W more per day or so.
lol wot? Firstly, the 21 W is from an extreme product for bleeding-edge users who don't care about anything other than maximum performance. The A770 is mainstream, and competing products such as the 6600 XT idle at 4 W.

If large numbers of people actually bought the thing, 250 W x many graphics cards is terrible.
 
Why is that bad? Before this news, I never even considered that there is a single person on the planet who doesn't use PCI-express power saving as standard.
Having to enable *maximum* power-savings mode can introduce latency and is not typically used unless the computer is asleep or some such. It's not how you want to run a desktop that isn't fretting over battery life.

Also, this royally sucks for anyone with, say, an OEM system with an Intel card where enabling this is not possible, much like ReBar. Intel needs to stop relying on BIOS changes to fix their cards and actually fix their GPUs instead.
 
lol wot? Firstly, the 21 W is from an extreme product for bleeding-edge users who don't care about anything other than maximum performance. The A770 is mainstream, and competing products such as the 6600 XT idle at 4 W.

If large numbers of people actually bought the thing, 250 W x many graphics cards is terrible.
That, and idle usage, especially of a GPU, is extremely common, even more so than CPU idle time.
 
Any sane BIOS will have power-saving settings for PCIe and NVMe slots separated. I've seen them; they exist. If in the BIOS your NVMe slot's power saving is disabled and the PCIe slot's is enabled, it should only apply to the PCIe slot and not the NVMe slot.
 
I'm loving all the bad press.
No, not because I hate Intel.
Because it means a higher likelihood of grabbing one of these *cheap* before early EoL.

Give it a few years after these are out of production and enthusiasts will probably have compiled better drivers and firmware for it than Intel could have ever hoped for.
 
I don't understand how this is a workaround? This is a native PCIe link-state feature!
 