Power Consumption
Power Consumption Testing Details
Improving the power efficiency of the GPU architecture has been key to the success of current-generation GPUs. It is also the foundation for low noise levels because any power consumed turns into heat that has to be moved away from the GPU by its thermal solution. Lower heat output also helps reduce cost because smaller, cheaper thermal solutions can be used.
For this test, we measure the power consumption of the graphics card only, via the PCI-Express power connector(s) and the PCI-Express bus slot. A Keithley Integra 2700 digital multimeter with 6.5-digit resolution is used for all measurements. Again, these values reflect only the card's power consumption as measured at its DC inputs, not that of the whole system.
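Since the card-only figure is simply the sum of the power delivered over each of its DC inputs, the calculation itself is straightforward. Below is a minimal sketch of that arithmetic; the rail names and readings are hypothetical examples, not actual measurements from our setup.

```python
# Minimal sketch: card-only power as the sum over its DC inputs.
# The rails and sample values below are hypothetical, for illustration only.

def card_power(readings):
    """readings: list of (volts, amps) pairs, one per DC input of the card."""
    return sum(volts * amps for volts, amps in readings)

# Example: PCIe slot 12 V and 3.3 V rails plus one 8-pin PCIe power connector.
readings = [
    (12.05, 3.9),   # slot 12 V rail
    (3.31, 0.6),    # slot 3.3 V rail
    (12.10, 11.2),  # 8-pin PCIe power connector
]
print(f"Card power draw: {card_power(readings):.1f} W")
```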
We use Metro: Last Light as a standard test for typical 3D gaming usage because it offers the following: very high power draw, high repeatability, support on all cards, drivers that are actively tested and optimized for it, support for all multi-GPU configurations, a relatively short test run, and a non-static scene with variable complexity.
Our results are based on the following tests:
- Idle: Windows 10 sitting at the desktop (1920x1080) with all windows closed and drivers installed. The card is left to warm up in idle mode until power draw is stable.
- Multi-monitor: Two monitors are connected to the tested card, and the two use different display timings. Windows 10 is sitting at the desktop (1920x1080 and 1280x1024) with all windows closed and drivers installed. The card is left to warm up in idle mode until power draw is stable. When using two identical monitors with the same timings and resolution, power consumption will be lower. Our test represents the usage model of many productivity users who have one large screen and a smaller monitor on the side.
- Media Playback: We use VLC Media Player to watch a 4K 30 FPS video encoded with H.264 AVC at a 64 Mbps bitrate, which makes it similar enough to many streaming services without adding a dependency on internet bandwidth. This codec should have GPU-accelerated decoding on every modern GPU, so it tests not only GPU power management, but also the efficiency of the video decoding hardware.
- Average (Gaming): Metro: Last Light at 1920x1080 because it is representative of typical gaming power draw. We report the average of all readings (12 per second) while the benchmark is rendering (no title/loading screen); see the sketch after this list. To heat up the card, the benchmark is first run once without measuring its power consumption. This is required to report proper steady-state results rather than short-term numbers that won't hold up in long-term usage. For the RTX 3060 Ti and faster cards, we render at 2560x1440 to ensure good GPU load and avoid potential CPU bottlenecks.
- Peak (Gaming): Same test as Average, but we report the highest single reading during the test.
- Sustained (Furmark): We use Furmark's Stability Test at 1600x900, 0xAA. This results in very high power consumption that can typically only be reached with stress-testing applications, not with actual games. We report the highest single reading after a short startup period. Initial bursts during startup are not included because they are too short to be relevant.
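For illustration, here is a minimal sketch of how the Average and Peak figures could be derived from the stream of wattage readings captured while the benchmark is rendering, after the warm-up pass; the sample values are hypothetical.

```python
# Hypothetical sketch: deriving Average and Peak gaming power from a
# stream of wattage samples taken while the benchmark is rendering
# (our setup captures 12 readings per second).

samples = [198.4, 201.7, 203.2, 199.9, 205.6, 202.1]  # example values in watts

average_power = sum(samples) / len(samples)  # "Average (Gaming)"
peak_power = max(samples)                    # "Peak (Gaming)"

print(f"Average: {average_power:.1f} W, Peak: {peak_power:.1f} W")
```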
Power consumption results of other cards on this page are measurements of the respective reference design.
Non-gaming power consumption is a few watts higher than on the NVIDIA Founders Edition, probably due to the VRM changes and RGB lighting. The increase is small enough not to matter in daily usage.
Gaming power draw is significantly higher because Palit increased their card's power limit, which translates into additional performance since NVIDIA's Boost algorithm has more headroom to work with. The increase is quite large for a 2% real-world performance gain, though.
Minimum recommended PSU: 600 W