Monday, February 3rd 2025

NVIDIA GeForce RTX 5090 Runs on 3x8-Pin PCI Power Adapter, RTX 5080 Not Booting on 2x8-Pin Configuration

NVIDIA's flagship GeForce RTX 5090 demonstrated flexibility in power compatibility, while its sibling, the RTX 5080, struggled with stricter requirements. Recent tests by German tech outlet ComputerBase reveal that the RTX 5090 can operate with three 8-pin PCI power connectors instead of the recommended four, albeit with a performance trade-off, while the RTX 5080 fails to boot when using only two 8-pin connectors.

The RTX 5090, with a default TDP of 575 W, officially requires a 600 W 12V-2×6 connector or an adapter with four 8-pin PCI cables. Yet tests on the ASUS ROG RTX 5090 Astral and Zotac RTX 5090 Solid show the GPU boots even with three 8-pin cables, capping its TDP at 450 W, matching the three connectors' combined 150 W-per-cable spec. Performance losses are modest: benchmarks indicate a 5% drop in average FPS at 450 W compared to full power.

In contrast, the RTX 5080's 360 W TDP proves less forgiving. Attempts to run the Founders Edition and Zotac RTX 5080 AMP Extreme Infinity with two 8-pin connectors (300 W total) resulted in failure: the screen remained blank, and the card refused to initialize. NVIDIA's firmware appears to lack a lower power-limit threshold for the RTX 5080, unlike the 5090, which automatically adjusts when it detects insufficient power delivery. This forces RTX 5080 users to adhere strictly to the three 8-pin or 12V-2×6 power connectors. While the RTX 5090 offers flexibility for users upgrading from older systems, the RTX 5080's limitations may frustrate owners of less powerful PSUs. For the RTX 5090, the 5% performance penalty at 450 W may be a reasonable trade-off for avoiding a costly PSU upgrade, but RTX 5080 users have no such recourse. Verifying power-supply compatibility is a must, as underpowered setups risk instability or hardware damage; when you run a $2,000+ GPU, you should at least power it properly. This experiment is more of a "for science" type of run.
Sources: ComputerBase, via VideoCardz

44 Comments on NVIDIA GeForce RTX 5090 Runs on 3x8-Pin PCI Power Adapter, RTX 5080 Not Booting on 2x8-Pin Configuration

#26
hsew
phanbueyit will just heat up those 3 cables more. Ask me how I know.

Was running some maps in PoE 2 when the distinct smell of burning plastic filled the air. I was only running 3 cables because, when I first bought the card, I read that doing so would limit the card in my build to 450 W instead of 450 W+, and would still be in spec.

I would not recommend it.
Even if your card runs at a “limited” 450W, the TGP isn’t even the maximum the card will draw. The 12VHPWR spec allows for 900W+ to be drawn from the connector for fractions of a second at a time. Coupling that with the increased resistance from not running all the cables the adapter calls for could be the reason yours melted.

Could also be a bad adapter, bad PSU cable connector, cable, or PSU.

To be honest, I really hate the standard. 8 pin PCIe was far better behaved than this nonsense.
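For rough intuition on why fewer cables run hotter, here's a back-of-the-envelope sketch. The 0.01 Ω per-cable resistance and perfectly even current sharing are assumptions for illustration only; real adapters share current unevenly, which only makes things worse.

```python
# Back-of-the-envelope I^2*R heating comparison (assumed values: even
# current sharing and a nominal 0.01 ohm round-trip resistance per cable;
# neither assumption holds for a worn or poorly seated adapter).

VOLTAGE = 12.0          # V rail
R_CABLE = 0.01          # ohm, assumed round-trip resistance per cable bundle

def per_cable_heat(total_watts: float, cables: int) -> float:
    """I^2*R dissipation in each cable, assuming perfectly even sharing."""
    i_per_cable = (total_watts / VOLTAGE) / cables
    return i_per_cable ** 2 * R_CABLE

sustained_3 = per_cable_heat(450, 3)   # steady 450 W on three cables
sustained_4 = per_cable_heat(450, 4)   # same load spread over four
transient_3 = per_cable_heat(900, 3)   # a 900 W excursion on three cables

print(f"{sustained_3:.2f} W vs {sustained_4:.2f} W per cable")
print(f"heating ratio: {sustained_3 / sustained_4:.2f}x")   # (4/3)^2, ~1.78x
print(f"transient spike: {transient_3:.2f} W per cable")    # 4x the steady heat
```

Because heating scales with the square of current, dropping from four cables to three raises per-cable dissipation by roughly 78%, and a momentary doubling of draw quadruples it.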
Posted on Reply
#27
Zach_01
EndymioThat assumes the voltage/freq curves for both chips are the same (they aren't) and that the 5080 is pulling the maximum both from the bus and each plug (it isn't, or it wouldn't need the third plug).

Claiming this board is a fail because you can't run it drastically out of spec is an outrageously puerile argument. Had NVidia done what you asked, then the first cable or mobo that was even slightly out-of-spec would have caused anything from a melted cable to an outright fire, and you'd have gone off the rails over that instead.
Personally I wouldn't call a GPU a fail just because it can't run with one fewer 8-pin connector than the manufacturer specifies. I might for other reasons, but definitely not this.
I already run a GPU with 3x 8-pin and I wasn't ever considering running it any differently, even if it could.
Posted on Reply
#28
wolf
Better Than Native
AusWolfSo much for all those "but you can undervolt it" arguments.
I don't think that's really relevant here, especially as both cards behave differently. As the article covers, always expect to power the card with what it states it requires first, then you can undervolt it after.

I wouldn't expect a card with 2x8-pin to work with only one connected for example, even if a power target was set needing less than 225w.
Posted on Reply
#29
AusWolf
wolfI don't think that's really relevant here, especially as both cards behave differently. As the article covers, always expect to power the card with what it states it requires first, then you can undervolt it after.

I wouldn't expect a card with 2x8-pin to work with only one connected for example, even if a power target was set needing less than 225w.
Just like I've always said, undervolting isn't an argument if it only works through software, and not from boot.
Posted on Reply
#30
wolf
Better Than Native
AusWolfJust like I've always said, undervolting isn't an argument if it only works through software, and not from boot.
I don't know what you mean by "argument"? Does undervolting not work when using software? Well, I know the answer: it does.

You might not like it (yeah I know, Linux), but it doesn't mean you can arbitrarily decide it's invalid because it doesn't work from boot.

Undervolting also generally only affects the hardware when a 3D load (i.e. software) is being placed on it; even if it were baked into hardware, it wouldn't matter from boot.

Just connect the card the way the manufacturer says then do what you like with it.
Posted on Reply
#31
AusWolf
wolfI don't know what you mean by "argument"? Does undervolting not work when using software? Well, I know the answer: it does.

You might not like it (yeah I know, Linux), but it doesn't mean you can arbitrarily decide it's invalid because it doesn't work from boot.

Undervolting also generally only affects the hardware when a 3D load (i.e. software) is being placed on it; even if it were baked into hardware, it wouldn't matter from boot.

Just connect the card the way the manufacturer says then do what you like with it.
We have to connect the cards properly because they've been designed for X power use. Just because you can undervolt them through software, you still need to account for default power use when factoring in your PSU. So for example, if you don't have 3x 8-pins on your PSU, or the power to feed 3x 8-pins, you shouldn't be thinking about a 5080.

Edit: "But you can undervolt it" is an argument I usually get when I speak up against modern GPUs consuming enormous amounts of power. The example shows that it's not a good argument.
Posted on Reply
#32
Visible Noise
petrojWait, aren't these cards pulling 75w from the PCIe slot as the standard allows them to?
No. As a rule Nvidia only pulls about 2W from the PCIe slot.
Posted on Reply
#33
AusWolf
Visible NoiseNo. As a rule Nvidia only pulls about 2W from the PCIe slot.
Every single Nvidia card I have ever had or worked with disagrees with you.
Posted on Reply
#34
wolf
Better Than Native
AusWolfWe have to connect the cards properly because they've been designed for X power use. Just because you can undervolt them through software, you still need to account for default power use when factoring in your PSU. So for example, if you don't have 3x 8-pins on your PSU, or the power to feed 3x 8-pins, you shouldn't be thinking about a 5080.
100% agree, I feel like this was my point...
AusWolfEdit: "But you can undervolt it" is an argument I usually get when I speak up against modern GPUs consuming enormous amounts of power. The example shows that it's not a good argument.
Disagree. The example just shows you need to connect it as it demands to be connected; basic hardware compatibility.

Then you can do what you like, run stock, undervolt, overclock, etc. I don't see any connection between a card being properly connected to the PC and then how you choose to run it.

GPUs consuming more power than before is true, and "but you can just undervolt them" I think is not a good counter to that either, but I see zero correlation to connecting it properly. You need a PSU that can handle the card at its default TDP, typically in outright wattage with perhaps a bit of wiggle room depending on how you intend to run it, but 100% required in terms of the physical connectors present. To plan otherwise, even if fully intending to drastically undervolt, would be foolish at best.
AusWolfEvery single Nvidia card I have ever had or worked with disagrees with you.
Don't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls essentially no power from the PCIe slot (and not circa 70-75 W). Shouldn't be hard to find articles on it, I'll take a look. If that's the case, it'd stand to reason other 40 and perhaps now 50 series cards operate the same.
Posted on Reply
#35
AusWolf
wolfYou need a PSU that can handle the card at its default TDP
Exactly my point. This is why I'm against cards with enormous TDPs and don't accept "but you can undervolt it" as a counter-argument.
wolfDon't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls essentially no power from the PCIe slot (and not circa 70-75 W). Shouldn't be hard to find articles on it, I'll take a look. If that's the case, it'd stand to reason other 40 and perhaps now 50 series cards operate the same.
Naturally, cards with external power connectors don't use the PCI-e slot to its full 75 W specification, but to say they use 2 W is a bit daft.
Posted on Reply
#36
wolf
Better Than Native
AusWolfExactly my point. This is why I'm against cards with enormous TDPs.
I can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:
AusWolfNaturally, cards with external power connectors don't use the PCI-e slot to its full 75 W specification, but to say they use 2 W is a bit daft.
2 W I'd say seems too low; maybe 10-40 W seems more reasonable if it really wants all its power through the plug-in cables.
Posted on Reply
#37
AusWolf
wolfI can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:
Same here. 300-ish W is fine as long as it doesn't require a million-slot chunky boy cooler that I can't fit into my tiny m-ATX box. :laugh:

Also, I've currently got 2x 8-pin cables connected to my PSU (no pigtails in here), and I'd like to keep it that way because it's a bit hard to access without taking it out. :laugh:
wolf2 W I'd say seems too low; maybe 10-40 W seems more reasonable if it really wants all its power through the plug-in cables.
Agreed.
Posted on Reply
#38
wolf
Better Than Native
AusWolfSame here. 300-ish W is fine as long as it doesn't require a million-slot chunky boy cooler that I can't fit into my tiny m-ATX box. :laugh:

Also, I've currently got 2x 8-pin cables connected to my PSU (no pigtails in here), and I'd like to keep it that way because it's a bit hard to access without taking it out. :laugh:
I'm on mini ITX, and I see some people disassemble the same case I have, install a massive card, and then rebuild the bottom of the case around it... I'm not too fussed as long as it fits but damn that's dedication. Doing that you can technically fit cards slightly longer than the case says it takes.

As for the PSU, 750 W with 2x 8-pin should be plenty; Corsair also makes a native 2x 8-pin to 12V-2x6 cable for their PSUs, so I should be alright with either a 9070 XT or 5070 Ti/5080, as long as I can fit the bastard in!
Posted on Reply
#39
AusWolf
wolfI'm on mini ITX, and I see some people disassemble the same case I have, install a massive card, and then rebuild the bottom of the case around it... I'm not too fussed as long as it fits but damn that's dedication. Doing that you can technically fit cards slightly longer than the case says it takes.
Respect! :)

I used to be on mini-ITX myself, but had enough of the awkward cable management and lack of choice in motherboards. I wouldn't go any bigger than m-ATX, though.
wolfAs for the PSU, 750W with 2x8 Pin should be plenty, corsair also make a native 2x8pin to 12v2x6 for their PSU's, so I should be alright with either a 9070XT or 5070Ti/5080 as long as I can fit the bastard in!
I've got 750 W, too, and that's exactly my plan. Then, as much as it hurts, I'm gonna put my upgrade urges to rest for a good 3-ish generations. I just wish those cards had come out before Kingdom Come Deliverance 2. Oh well. :ohwell:
Posted on Reply
#40
Zach_01
Visible NoiseNo. As a rule Nvidia only pulls about 2W from the PCIe slot.
It's 1.5 W actually... /s
wolfI can see why, 450w+ is starting to get crazy. I'm getting comfier with the 300w range though :twitch:

2 W I'd say seems too low; maybe 10-40 W seems more reasonable if it really wants all its power through the plug-in cables.
10~40 W is 5~20x above 2 W. No one sane believes that all 75 W are utilized when you have 2~3 or even 4 external 8-pin connectors.

BTW I am pretty comfortable with 350+W GPU power consumption since my AIB R9 390X OC version...
Posted on Reply
#41
chrcoluk
The 4080 Super does the same thing as the 5080; it looks like there is no lower-limit fallback mode. I posted about it in the 4080 (Super?) FE thread: I initially tried to use the adapter with just two cables and the card wouldn't wake on POST, so I had to use a pigtail. I'm now using a 2x 8-pin to 12V-2x6 cable, but with the adapter I had to pigtail, as it needed to detect three cables connected. Nonsensical, as the 4080 Super has the same TDP as the 3080 FE, which works with two input cables; it's an artificial NVIDIA restriction. And although the leaflet that comes with the GPU tells people not to pigtail, you can bet they will, as it's preferable to buying a new PSU; pigtailing then puts a 66/33 load on the cables instead of 50/50.

It has nothing to do with card configuration, what the card actually draws, UV, OC, etc.
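The 66/33 split is easy to see with a quick sketch. The assumption that the adapter draws evenly from each of its three 8-pin inputs is an idealization; the 320 W figure is the 4080 Super's rated TGP.

```python
# Sketch of the 66/33 split (assumed even draw across the adapter's three
# 8-pin inputs): when one PSU cable pigtails into two of the inputs, that
# cable carries two thirds of the total load.

def psu_cable_loads(total_watts: float, inputs_per_cable: list[int]) -> list[float]:
    """Watts carried by each PSU cable, assuming the adapter draws evenly
    from each of its 8-pin inputs."""
    total_inputs = sum(inputs_per_cable)
    return [total_watts * n / total_inputs for n in inputs_per_cable]

# 4080 Super at 320 W: pigtail (one cable feeds two inputs) vs three cables
print(psu_cable_loads(320, [2, 1]))      # pigtailed cable carries ~213 W
print(psu_cable_loads(320, [1, 1, 1]))   # ~107 W each with three cables
```

The pigtailed cable ends up carrying roughly twice the load of the other, well above the 150 W an 8-pin connector is rated for, which is exactly why the leaflet warns against it.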
Posted on Reply
#42
Visible Noise
wolfDon't quote me, but I believe it's been tested, and perhaps even confirmed by NVidia, that the 4090 pulls essentially no power from the PCIe slot (and not circa 70-75 W). Shouldn't be hard to find articles on it, I'll take a look. If that's the case, it'd stand to reason other 40 and perhaps now 50 series cards operate the same.
Just did a Cyberpunk benchmark run.

Posted on Reply
#43
wolf
Better Than Native
Visible NoiseJust did a Cyberpunk benchmark run.
Damn, just under 9 W at its peak! My memory serves me well.
Posted on Reply
#44
Visible Noise
wolfDamn just under 9w at its peak! My memory serves me well.
My guess is the only power pulled from the slot is what the PCIe interface itself uses, with the rest of the GPU being powered through the cable.

I can see 10.5GB a second needing 8 watts to move data off the motherboard and into the card.
Posted on Reply