Thursday, September 27th 2018

NVIDIA Fixes RTX 2080 Ti & RTX 2080 Power Consumption. Tested. Better, But Not Good Enough

While conducting our first reviews of NVIDIA's new GeForce RTX 2080 and RTX 2080 Ti, we noticed surprisingly high non-gaming power consumption from NVIDIA's latest flagship cards. Back then, we reached out to NVIDIA, who confirmed that this was a known issue that would be fixed in an upcoming driver.

Today the company released version 411.70 of their GeForce graphics driver, which, besides adding Game Ready support for new titles, includes the promised fix for the RTX 2080 & RTX 2080 Ti.

We gave this new version a quick spin, using our standard graphics card power consumption testing methodology, to check how much things have improved.

As you can see, single-monitor idle power consumption is much better now, bringing it down to acceptable levels. Even though the numbers are not as low as Pascal's, the improvement is substantial, reaching idle values similar to AMD's Vega. Blu-ray playback power is slightly improved. Multi-monitor power consumption, which was really terrible, hasn't improved at all. This could turn into a deal breaker for many semi-professional users looking to use Turing not just for gaming, but also for productivity with multiple monitors. An extra power draw of more than 40 W over Pascal will quickly add up to real dollars for PCs that run all day, even though they're not used for gaming most of the time.
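For some rough perspective on that claim, here is a back-of-the-envelope estimate; the 24/7 uptime and the $0.12/kWh electricity rate are illustrative assumptions, not part of our testing methodology:
```python
# Rough yearly cost of an extra 40 W of multi-monitor idle draw on an always-on PC.
# The uptime and the electricity rate are illustrative assumptions.
extra_watts = 40            # multi-monitor idle delta vs. Pascal (from the chart)
hours_per_year = 24 * 365   # machine left running around the clock
rate_usd_per_kwh = 0.12     # assumed average electricity price

extra_kwh = extra_watts * hours_per_year / 1000
print(f"{extra_kwh:.0f} kWh/year -> ${extra_kwh * rate_usd_per_kwh:.0f}/year")
# ~350 kWh/year, i.e. roughly $42 per year at the assumed rate
```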
We also tested gaming and FurMark power consumption, for completeness. Nothing to report here: Turing is still the most power-efficient architecture on the planet (for gaming power).
The table above shows monitored clocks and voltages for the non-gaming power states, and it looks like NVIDIA did several things. First, the memory frequency in single-monitor idle and Blu-ray playback has been reduced by 50%, which definitely helps with power. Second, the idle voltages of the RTX 2080 have also been lowered slightly, to bring them in line with the idle voltages of the RTX 2080 Ti. I'm sure there are additional under-the-hood improvements to power management that are not visible to any monitoring tool.
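The direction of these savings matches the usual first-order approximation for dynamic power, P ≈ α·C·V²·f, where power scales linearly with clock and quadratically with voltage. A quick sketch of what halving a clock and trimming voltage does to that term; the 3% voltage trim below is an illustrative assumption, not a value read from the table:
```python
# First-order dynamic power model: P is proportional to V^2 * f (per clock domain).
# The 3% voltage trim is an illustrative assumption, not a measured value.
def relative_dynamic_power(v, f, v0=1.0, f0=1.0):
    """Dynamic power of state (v, f) relative to a baseline (v0, f0)."""
    return (v / v0) ** 2 * (f / f0)

print(relative_dynamic_power(v=1.00, f=0.50))  # 0.50 -> halving the clock halves dynamic power
print(relative_dynamic_power(v=0.97, f=0.50))  # ~0.47 -> a small voltage trim compounds quadratically
```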

Let's just hope that multi-monitor idle power gets addressed soon, too.

53 Comments on NVIDIA Fixes RTX 2080 Ti & RTX 2080 Power Consumption. Tested. Better, But Not Good Enough

#26
IceShroom
medi01: How FUD works 101.

From the charts above, multi-monitor:
Vega - 17W
2080Ti - 58W

But mentioning Vega "in a certain context" did its job.

So, yea, nearly like Vega, if you meant 3 times Vega, or almost 4 times Vega.

PS
And I don't believe the spin to be accidental, sorry.
I was talking about average gaming power consumption. I didn't look carefully at idle, as I thought Nvidia is "efficient" at gaming, so they might sip no power with multi-monitor.
#27
qubit
Overclocked quantum bit
That fix came quickly, good show for NVIDIA.

@W1zzard Did you try testing the power consumption using different screen resolutions? I think the cards will show differences there and I suggest using 2K and 4K video modes since they're the most common.
#28
no_
if your past results showed 11 W for 1080 Ti idle then your methodology is misleading, intentionally or otherwise

the 10 series have a widely documented problem with idle power consumption (which you guys, being "journalists", should already know about), popularly believed to stem from mixed refresh rates (for example one or more 60 Hz monitors plus one or more higher-refresh ones such as 144 Hz) causing the card to not actually idle when it should, resulting in power draw right around ... guess what ... 40 watts

however, contrary to that popular belief, the problem isn't directly from the mixed refresh rates, but from the mix of which ports are used on the card, with DisplayPort in particular being the culprit. if you only use HDMI and/or DVI then the card will actually idle ... but the moment you use DisplayPort in a multi-monitor setup, because you're "supposed" to use DisplayPort for higher refresh rates, it won't. don't believe me? saying to yourself "yeah I could totally prove you wrong if I had the hardware tools readily available for measuring off-the-wall power but I'm too lazy to do that rn"? well, the good news is you don't need hardware tools. all you need is Afterburner or Precision, whichever tool you prefer that can show the current clocks (to see if the card's actually in an idle state or not). 400ish core clock? idle. 1000+ core clock? not idle. comprende?
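for the script-inclined, the same check can be done straight against NVML instead of eyeballing Afterburner. a minimal sketch, assuming the pynvml Python bindings are installed (the exact idle clock varies per card, so treat the numbers as rough guides):
```python
# Read current clocks and board power via NVML to see whether the card is actually idling.
# Requires the pynvml bindings and an NVIDIA driver; thresholds are rough guides only.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts

print(f"core {core_mhz} MHz, memory {mem_mhz} MHz, board power {power_w:.1f} W")
# a few hundred MHz on the core = idling; 1000+ MHz at the desktop = stuck in a higher power state
pynvml.nvmlShutdown()
```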

plug your multiple monitors into HDMI and/or DVI and NOT DisplayPort and watch what happens to the clock speed in Afterburner/Precision WHAT THE FUCK THE CARD IS ACTUALLY IDLING NOW ..... or if you only have one HDMI and no DVI because you have an FE, then plug the higher refresh rate monitor into HDMI and the lower refresh one(s) into DisplayPort WHAT THE FUCK THE CARD IS ACTUALLY IDLING NOW

you're welcome

oh and before you say "hurr durr my higher refresh monitor won't work on HDMI because it's not 2.0" ..... how sure are you that it's not 2.0? do it anyway, go into the NVIDIA Control Panel, and see what's there. wait ... do you mean ... that people on the internet (like NVIDIA and OEMs) would go on the internet and just tell lies ...?

you're welcome, again
#29
renz496
birdie: No.

An extra 30 W is almost nothing in the long term. People who shell out at the very least $500 for a GPU, plus more for an extra monitor, couldn't care less about this extra wattage.
Haha, I remember back in 2010 when people said using a GTX 480 would make your bill skyrocket by the end of the month compared to a 5870, for a minimal performance increase. Some people ended up making extensive calculations about it, and in the end they concluded that even over a year the difference in electricity cost between the 5870 and the 480 was very small (I don't remember the exact value, but it was something like $10-$15 more for a system with a GTX 480). Now people are complaining that Turing draws just a bit more than Pascal and saying they were happy choosing Pascal over Turing.
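Out of curiosity, the same back-of-the-envelope math still lands in that range today; the ~100 W load delta, two gaming hours per day and $0.12/kWh are assumptions for illustration, not measured figures:
```python
# Rough yearly electricity cost of a ~100 W higher gaming load (illustrative assumptions only).
delta_watts = 100           # assumed load power difference between the two cards
gaming_hours_per_day = 2
rate_usd_per_kwh = 0.12

kwh_per_year = delta_watts * gaming_hours_per_day * 365 / 1000
print(f"~{kwh_per_year:.0f} kWh/year -> ~${kwh_per_year * rate_usd_per_kwh:.0f}/year")
# ~73 kWh/year, i.e. roughly $9 per year -- in the same ballpark as the quoted $10-$15
```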
#30
hat
Enthusiast
I like efficiency just as much as the next guy, but the argument that running these cards is more expensive than a more efficient one is ridiculous, unless you are buying them by the pallet for a mining farm or something, in which case, that little bit more power would be multiplied over X (high) number of cards which experience load conditions constantly. While the (still) high multi monitor power consumption is a little unsettling, in the real world it hardly matters.

Pascal was a bit of an outlier. Due to lack of competition, nVidia simply tweaked Maxwell and, along with a die shrink, both increased performance and efficiency, and the power draw was superb. Now, with Turing, we have really big dies thanks to all the RT stuff (big die means inefficient) and more shaders/CUDA cores/whatever they're called now.

Next time competition heats up, especially in the high performance segment, nobody will give a flying whale turd about power draw and focus on getting the best performance, because that's what wins sales. While we like efficiency, almost nobody will buy a lower performing product because it's more efficient. We need FPS!
#31
cucker tarlson
Fouquin: The original statement was that the card runs significantly hotter, not louder. There was no mention of noise. You're deflecting the argument, at your convenience, to another attribute that was not mentioned anywhere in the above comments. I won't argue against your point because you are not wrong, the acoustics of the Vega 64 are higher, but that doesn't change the maximum temperature limit of the card, which was the point of discussion. Regardless of fan noise, the card is limited to 85°C out of the box.
Wrong.

The original point was: "drew WAY more power (and ran way hotter)". You deflected it into being about the temperature limit, while it was about gaming temps and power all along. Also, you can keep arguing that one can talk about temps and noise as separate issues, but not with me anymore. If one card runs at 83 degrees at 50% fan while the other sits at 85 degrees at 90%, then we have a clear temperature winner here. At 90% fan my GTX 1080 FE stayed at 70 degrees with a 2063/11000 OC, IIRC; I only had it for a couple of months.
#32
Fouquin
cucker tarlson: Wrong.

The original point was: "drew WAY more power (and ran way hotter)". You deflected it into being about the temperature limit, while it was about gaming temps and power all along.
I quite intentionally addressed exactly what I quoted from that post, as it was the only part I disagreed with. You introduced an entirely different variable, in this case fan noise, as if it were an argument against what I said about the cards having a thermal limit.
cucker tarlson: If one card runs at 83 degrees at 50% fan while the other sits at 85 degrees at 90%, then we have a clear temperature winner here. At 90% fan my GTX 1080 FE stayed at 70 degrees with a 2063/11000 OC, IIRC; I only had it for a couple of months.
You appear to be throwing out unsubstantiated numbers as if they were fact, attempting to support them with subjective anecdotes, and then declaring a "winner" based on that information. If you read TPU's very own review of the Vega 64 (which I linked in my first post), you'll notice on the 'Fan Noise' page that it says, "AMD has limited the fan's maximum speed to ensure it doesn't get even noisier, even when the card climbs up to 85°C, its maximum temperature level."

This rather pointedly dismisses your outlandish "90% fan speed" statement, and again reinforces my statement that the cards have a maximum thermal limit that is a mere 2 degrees above the GTX 1080. Not dependent on fan speed, high or low.
#33
hat
Enthusiast
Meh, in my eyes if you force a fan to be quiet, but also have a temperature limit that can't be surpassed... there's only one thing left to do: throttle the card. Not good.
#34
londiste
LDNL: What puzzles me is that NVIDIA could get a bunch of extra performance out of the GTX 1080 Ti for the same power consumption as the GTX 980 Ti. Now they're just pumping more watts into these RTX cards to get even the smallest performance gain over the last generation.
No, they could not. The 1080 Ti, as well as the rest of the Pascal line, hits a pretty solid wall somewhere between 2000 and 2100 MHz; increasing power consumption brings diminishing returns from there. Yes, the retail cards are voltage-limited, but extreme overclockers have clearly shown that scaling voltage up from there immediately destroys efficiency. It was pretty much the same for the 980 Ti and Maxwell, and so far Turing seems to be in the same boat as Pascal, with the possibility of getting maybe 50-100 MHz more out of it. All of this is defined by the process as well as the architecture.
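A rough illustration of why that wall hurts efficiency, using the usual V²·f approximation for dynamic power; the +8% voltage assumed to hold an extra ~5% clock is an illustrative figure, not measured data:
```python
# Performance scales roughly with frequency, but dynamic power scales with V^2 * f,
# so buying the last few percent of clock with extra voltage wrecks perf/W.
# The +5% clock / +8% voltage step is an illustrative assumption.
freq_gain = 1.05   # e.g. ~2000 -> ~2100 MHz
volt_gain = 1.08   # assumed voltage increase needed to hold that clock

power_gain = volt_gain ** 2 * freq_gain
print(f"power: +{(power_gain - 1) * 100:.0f} %, perf/W: {freq_gain / power_gain:.2f}x")
# ~+22 % power for ~5 % more performance -> perf/W drops to ~0.86x
```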
mouacyk: Could lithography be a major factor?
980 Ti: 28 nm
1080 Ti: 16 nm = 0.57x shrink
2080 Ti: 12 nm = 0.75x shrink
Lithography is definitely a factor. So is architecture. Nvidia said a lot of transistors in Pascal went to increasing clock speed. The same remains true for Turing.
While 28 nm to 16 nm was a real node shrink, 16 nm to 12 nm actually is not. 12 nm is more optimized (mainly for power), but it is basically the same process.
Andromos: The multi-monitor power draw is still pretty bad compared to the last generation. I have a large main monitor and a smaller one to the side, just like their testing setup. Drawing that much at idle is not great.
I have two 1440p monitors (and a 4K TV that is connected half the time) and a 2080, and so far it does not go below 50 W even with the latest 411.70 drivers. It is not the first time Nvidia has fucked up the multi-monitor idle modes and it will not be the last. Sounds like something they should be able to fix in the drivers, though.
no_: if your past results showed 11 W for 1080 Ti idle then your methodology is misleading, intentionally or otherwise [...] the moment you use DisplayPort in a multi-monitor setup, because you're "supposed" to use DisplayPort for higher refresh rates, it won't [idle]. [...] 400ish core clock? idle. 1000+ core clock? not idle. comprende?
Interesting. I was seeing that exact problem with the 1080 Ti for a while, but it was resolved at some point in the drivers. The 2080, on the other hand, still seems to run into the same issue.

@W1zzard, what monitors did you have in multi-monitor testing and which ports? :)
Also, have you re-tested the Vega 64, or are you using older results? About a month and a couple of drivers after release, my Vega 64 started to idle at some incredibly low frequency, like 32 MHz, with 1-2 monitors, and reported single-digit power consumption at that.
Fouquin: This rather pointedly dismisses your outlandish "90% fan speed" statement, and again reinforces my statement that the cards have a maximum thermal limit that is a mere 2 degrees above the GTX 1080. Not dependent on fan speed, high or low.
Cards do have a thermal limit, but that is usually at 100-105°C.
83, 85 or 90°C are the usual maximum temperatures you see because that is where the fan controller is configured to hold the card.
#35
Frick
Fishfaced Nincompoop
birdie: No.

An extra 30 W is almost nothing in the long term. People who shell out at the very least $500 for a GPU, plus more for an extra monitor, couldn't care less about this extra wattage.
If it had competition I'd definitely take a monitor pulling 30 W less at idle.
#36
Ubersonic
So they released a new driver to combat idle power, and it's even worse than the old driver at idle if you have multiple screens (as most people buying a $1k+ card do). Why even bother releasing it? lol
#37
Assimilator
no_: if your past results showed 11 W for 1080 Ti idle then your methodology is misleading, intentionally or otherwise [...] popularly believed to stem from mixed refresh rates [...] however, contrary to that popular belief, the problem isn't directly from the mixed refresh rates, but from the mix of which ports are used on the card, with DisplayPort in particular being the culprit. [...] 400ish core clock? idle. 1000+ core clock? not idle. comprende? [...] plug your multiple monitors into HDMI and/or DVI and NOT DisplayPort and watch what happens to the clock speed [...]
I have a GTX 1070 with two monitors connected via DP and HDMI (different resolutions, same refresh rates) and my idle core clock is 135 MHz. 400 MHz is absolutely not idle in any way, shape or form.

I'm still trying to decipher what this claimed bug is from your rant... are you saying there is an issue when using monitors with different refresh rates AND one is plugged into DP? Or is it only if more than one monitor is used and one of them is plugged into DP? The only reference to this I can find online is from 2016; can you maybe give me some recent links so I can do further research?
#38
londiste
Two connected monitors and three connected monitors behave differently, sometimes considerably so.
Not @no_, but I guess I can answer that: different refresh rates and one monitor plugged into DP (which often enough is a requirement for high refresh rates).

That power bug link you found is related, but as far as I remember it was not the exact same bug; after the fix for that, there were still problems.

I honestly haven't found a good source for this either. It only occasionally comes up in conversations on different topics, mainly mixed refresh rate monitors, high refresh rate monitors, sometimes driver updates, power consumption/tweaking threads, etc. For what it's worth, I have not seen this in a long time with Nvidia 10-series GPUs. It used to be a problem, but not any more.
#39
B-Real
IceShroom: Strange!!! No one talks about the power consumption of the new Nvidia card. Nearly Vega 64 level.
Of course it offers plenty more performance. However, it's strange that when an AMD card does the same, it gets the treatment from NV users, but when their card is doing the same, they keep quiet. :)
ZeppMan217: The 2080 Ti is 45% faster than Vega 64 while consuming 5-10% less power. What are you talking about?
You know Vega 64 is capable of a huge undervolt? You don't even have to do it yourself; there are preset BIOSes. The most efficient one lowers consumption by nearly 100 W.
#40
cucker tarlson
Fouquin: [...] If you read TPU's very own review of the Vega 64 (which I linked in my first post), you'll notice on the 'Fan Noise' page that it says, "AMD has limited the fan's maximum speed to ensure it doesn't get even noisier, even when the card climbs up to 85°C, its maximum temperature level." [...] the cards have a maximum thermal limit that is a mere 2 degrees above the GTX 1080. Not dependent on fan speed, high or low.
Limited to what? 45 dBA.
B-Real: You know Vega 64 is capable of a huge undervolt? You don't even have to do it yourself; there are preset BIOSes. The most efficient one lowers consumption by nearly 100 W.
And lowers performance to sub-1070 Ti level.
#41
jabbadap
Why should they, when perf/W is the best it has ever been?

These power numbers are not really that bad for desktop usage anyway. Where it really matters is the mobile space, and I kind of doubt we will see RTX cards in laptops anytime soon. Maybe some TU106 variant will make it, but then Nvidia will have the dilemma of how to market it. Pascal was all about equalizing naming between desktop and laptop parts, so will they name the RTX 2070 as their highest mobile card, or go back to the laptop variant being a tier lower than the desktop variant?
#42
londiste
B-Real: Of course it offers plenty more performance. However, it's strange that when an AMD card does the same, it gets the treatment from NV users, but when their card is doing the same, they keep quiet. :)
Now, imagine these two situations:
1. Card A consumes 20 units of power and has 20 units of performance. Card B consumes 20 units of power and has 35 units of performance.
2. Card A consumes 20 units of power and has 20 units of performance. Card B consumes 12 units of power and has 20 units of performance.
In which situation and for which card is power consumption a problem?
B-Real: You know Vega 64 is capable of a huge undervolt? You don't even have to do it yourself; there are preset BIOSes. The most efficient one lowers consumption by nearly 100 W.
No, it is not. You can do a minor undervolt from stock settings that won't help much with anything. All the "undervolt" articles are about overclocking and include raising the power limit to 150%.
#43
cucker tarlson
B-Real: it's strange that when an AMD card does the same, it gets the treatment from NV users, but when their card is doing the same, they keep quiet. :)
Yes, 'cause people are gonna complain about the most power-efficient card out there. Vega can't beat the 28 nm, 600 mm² GM200 with GDDR5 in efficiency.
#44
jabbadap
londiste: [...] No, it is not. You can do a minor undervolt from stock settings that won't help much with anything. All the "undervolt" articles are about overclocking and include raising the power limit to 150%.
Hmh. Why do I remember that undervolting was first done on Vega to overcome an OC bug in the early days of Vega? I think the best approach for Vega is not to OC the GPU itself at all, but to undervolt it and OC the HBM2 frequencies. But all this is off topic anyway, so sorry, carry on with the Turing power.
#45
londiste
jabbadap: Hmh. Why do I remember that undervolting was first done on Vega to overcome an OC bug in the early days of Vega? I think the best approach for Vega is not to OC the GPU itself at all, but to undervolt it and OC the HBM2 frequencies. But all this is off topic anyway, so sorry, carry on with the Turing power.
The OC bugs were different, and there were several. If I remember correctly, in the initial couple of versions of the tools the memory voltage setting actually set the GPU voltage, some P-states were tricky, etc.
Undervolting is a real thing, as the GPU is (arguably) overvolted, but it does not really do much at stock settings. You get about 20-30 MHz from undervolting while leaving everything else the same.
An undervolt is definitely a requirement for overclocking Vega, as the OC potential without undervolting is very limited. Even at a 150% power limit you will not see a considerable boost, and probably not even the spec boost clock, without undervolting first. This does not reduce power consumption either.
#46
kings
IceShroom: Strange!!! No one talks about the power consumption of the new Nvidia card. Nearly Vega 64 level.
Yeah, a card that consumes less power in gaming than a Vega 64 and offers 85% more performance... what a joke Nvidia is.

C'mon people, this trend of defending the "underdog" at all costs is getting pretty ridiculous at this point.
#47
cucker tarlson
As far as efficiency goes, the 2080 Ti is pretty friggin' sweet: 750 mm², runs 4K like my 1080 does 1440p (awesome), and packs RT and tensor cores. It's the usefulness of those extra features that is in doubt, not the card's efficiency. Then again, $700 and $1,000 cards are not for those who want something ordinary. I just wish they made a $600 variant of the 2080 with RT and tensor cores gone. I'd still buy the $699 full chip, but a lot of people would be happier.
#48
Assimilator
@W1zzard In future, plotting the observed GPU and memory clocks on the idle power usage graphs would be quite useful to determine if the cards are downclocking correctly and/or whether there is a power bug.
#49
JB_Gamer
Hi guys, I have a Vega 64 and I'm quite happy with it actually; bought it at the low introduction price ;) Just want to ask: it says "2nd BIOS" in the FAN NOISE LOAD graph - what is that, anyone?
#50
jabbadap
JB_Gamer: Hi guys, I have a Vega 64 and I'm quite happy with it actually; bought it at the low introduction price ;) Just want to ask: it says "2nd BIOS" in the FAN NOISE LOAD graph - what is that, anyone?
That chart is from TPU's RX Vega 64 review.
The Radeon RX Vega 64 comes with a dual-BIOS switch, which lets you toggle to a secondary BIOS that runs at a lower power limit of 200 W, as opposed to the 220 W of the default BIOS.