
HELP! Quad RTX 3090, x2 EVGA 1600 G2: Burnt out +12v on MoBo power connector

No, not necessarily. It depends on what on the GPU is being powered by the PCIe bus. On anything high end, the PCIe bus power is normally isolated and only powers auxiliary circuits (the basic stuff required to boot the card to the point where the external power kicks in and the core is turned on), sometimes the VRAM or part of an auxiliary phase, but those are separate circuits, because obviously there isn't enough juice there to run anything substantial. On the 3090 I don't even think any of the VRAM gets its power from the slot, because of the power requirements.

Remember, he is not gaming, he is crunching numbers, and that workload is far more fault tolerant than running a game. That workload is also famous for massive swings in power consumption, and that was likely a factor here.
He's also got riser cables, so by all rights the connection from the GPUs to the motherboard is a worst case; it should have failed there.
@buildzoid do you know what the power split is on the 3090, i.e. what's powered by slot power vs. the PCIe 8-pin? I know a 3090 draws anywhere from 60-70 W from PEG_12V.
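(For reference, here's the slot budget spelled out as a quick Python sketch; the 5.5 A / 3 A limits are the PCIe CEM spec's per-slot allowances, and the 70 W draw is just the top of the range quoted above, not a measurement.)

```python
# Rough PCIe x16 slot power budget vs. the slot draw quoted above.
# Spec figures: the PCIe CEM allows a x16 slot up to 5.5 A @ 12 V plus 3 A @ 3.3 V.
SLOT_12V_W = 5.5 * 12.0   # 66 W available on the slot's 12 V rail
SLOT_3V3_W = 3.0 * 3.3    # ~9.9 W on the 3.3 V rail

reported_w = 70.0         # top of the 60-70 W PEG_12V range quoted above (an assumption)
print(f"12 V slot budget {SLOT_12V_W:.0f} W, reported draw {reported_w:.0f} W, "
      f"headroom {SLOT_12V_W - reported_w:.0f} W")  # negative = over budget
```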

Very, very hard to read, my eyes & brain hurt now! :( Not the only post of yours that is that way, by the way -- just one of many.
You know, comma and full-stop called - they were in tears about OneMoar forgetting them totally, ignoring them and not using them when writing text... :D
They hoped you could use them sometimes in the future. :)
 
Ah, blower fans, love them, automatic case heat ejectors!! I would love to have a 3-slot version, vapor chambered of course.
You can always upgrade those coolers with a GPU Block.

Those are not 3-slot, but they do have a vapor chamber and heat pipes.

Gigabyte RTX3090 TURBO.jpeg


The picture is of the RTX 3090; I'm linking the RTX 3080 web page for cooler details, since both use the same cooler and the RTX 3090's web page is down.

https://www.gigabyte.com/Graphics-Card/GV-N3080TURBO-10GD-rev-20

And yeah, I can still find RTX 3090 GPUs (from time to time, not always) - privileges of being a workstation / GPU server SI.
 
Very, very hard to read, my eyes & brain hurt now!
You worry too much, bud :). I can read it all fine. You have to realize we are all different and from different parts of the world, and it's not the Queen's English, that's for H.M. ;).
 
You worry too much, bud :). I can read it all fine. You have to realize we are all different and from different parts of the world, and it's not the Queen's English, that's for H.M. ;).
Lmao, that's exactly what I was thinking. Of all the horrible posts to go Grammar Police on, this is the one you chose? There are at least 10 posts a day that would have you flopping around on the floor, Naki!
Sorry for the off topic. That cracked me up.
 
You can always upgrade those coolers with a GPU Block.

Those are not 3-slot, but they do have a vapor chamber and heat pipes.

View attachment 209268

The picture is of the RTX 3090; I'm linking the RTX 3080 web page for cooler details, since both use the same cooler and the RTX 3090's web page is down.

https://www.gigabyte.com/Graphics-Card/GV-N3080TURBO-10GD-rev-20

And yeah, I can still find RTX 3090 GPUs (from time to time, not always) - privileges of being a workstation / GPU server SI.
Running EK water blocks on all of them!
 
I don't think ASUS ever expected anybody to use this board in this manner (with SLI now being limited to 2 GPUs).
It's a consumer-grade board; he needs to talk to Supermicro and get an enterprise-grade board.
IMO, any board that is designed with multiple PCI-E x16 slots should have the proper power delivery to max those slots out, regardless of whether that is a typical use case or not. This is a major issue with every manufacturer: they put the slots there and just pray someone doesn't actually use them.

He didn't state that it burned at both ends in the OP.
He didn't state in the first post which side it burned. The pictures he posted in the first post showed it burned on both sides.

I don't know if I would trust a molex connector with 150 W in any scenario. On paper it's 156 W, but that's on paper, and likely no modern PSU is going to be cabled for the full brunt of that, because molex is generally relegated to low-power devices like optical drives or USB cards.
It wouldn't be all on the molex; the molex is just a supplement to the 24-pin. It spreads that 300 W load over three 12 V pins instead of just two. Each pin then has to carry 100 W instead of the 150 W it would if there were only the 24-pin. Ideally, though, there would be a 6-pin or 8-pin PCI-E connector on the board to provide supplemental power to the PCI-E slots. It's really stupid that an $850 motherboard still has a molex on it instead of a 6/8-pin PCI-E.
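(A minimal sketch of that pin arithmetic, assuming the quoted 300 W splits evenly across the 12 V pins; the contact-rating note at the end is a rough rule of thumb, not a measured figure.)

```python
# Per-pin 12 V load with and without the supplemental molex, assuming the
# quoted 300 W of slot draw splits evenly across the available 12 V pins.
TOTAL_W = 300.0
VOLTS = 12.0

for pins, label in [(2, "24-pin alone (two 12 V pins)"),
                    (3, "24-pin + molex (three 12 V pins)")]:
    watts = TOTAL_W / pins
    amps = watts / VOLTS
    print(f"{label}: {watts:.0f} W/pin, {amps:.1f} A/pin")
# 150 W is 12.5 A through one contact -- far beyond what a Mini-Fit Jr pin
# with a standard 18 AWG crimp is typically rated for, hence the melting.
```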
 
Running EK water blocks on all of them!
You have to pardon those who don't read the entire thread.

How does the PSU itself look, Frank? Does the 24-pin connector look welded together where the custom cable melted?
Edit: the internal part.
 
RESOLVED!

Not only resolved but getting more throughput from all 4 of the 3090s!!!

Although cabling was part of the fix, it was honestly just the weakest link that blew out as a result of a deeper yet very simple issue... this is going to make you laugh.

So I:
- changed out every single power cable in the rig for the stock EVGA ones
- replaced both PSUs
- linked up both PSUs' power-on (a little bit of custom cable work with some high-rated wire did the trick)

But while replacing the power cables, I decided to crack open the manual for the Extreme Alpha, specifically on power. Lo and behold, one tiny little two-line sentence that read, "Connect the 4 pin EZ_PLUG when you install 2 or more high-performance PCI-e cards". It's a 4-pin molex connector, but it connects at a right angle to the board, so it's very, very difficult to see.

So ASUS already thought of this issue and provided it to supplement the PCI-e power bus. No wonder the 2x +12V pins had burned up :laugh:

I did run into some hefty POST issues, including a CMOS error, a RAID read failure and VGA BIOS errors (thought it would never end, probably because of the power re-config), but ultimately a CMOS reset and enabling above-4G decoding took care of all of that.

She is up, super stable, and running cooler and faster than ever.

Thanks for your help guys especially @Ominence

Age-old lesson of the day: read the damn manual hahahaha

You have to pardon those who don't read the entire thread.

How does the PSU itself look, Frank? Does the 24-pin connector look welded together where the custom cable melted?
Edit: the internal part.
Both ends of the +12V pins on the 24-pin were melted. In fact, on the PSU end they had melted into place. Close call.

The issue is simple: the motherboard choice for 4x RTX 3090 usage was wrong.

When you want to connect 4x RTX 3090 for heavy tasks like deep learning, you have to choose a motherboard that has a PCI-ex supplementary power connector instead of a molex.

The max power a molex can support is far lower than what PCI-ex power connectors can support. If the GPUs' power draw from the PCI-ex slots exceeds what the molex power cable can supply, the rest of the needed power will be drawn through the motherboard 24-pin cable's 12V wires, and that load will burn the cable in the end.
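(A rough sketch of that overflow logic; every figure here is an illustrative assumption, not a measurement from the OP's rig.)

```python
# Where the slot power goes when the supplemental molex path can't keep up:
# whatever the molex can't carry lands on the 24-pin's two 12 V pins.
# All figures are illustrative assumptions, not measurements from this rig.
slot_demand_w = 4 * 75.0    # four GPUs pulling near the 75 W slot limit
molex_path_w = 132.0        # one 12 V contact at a nominal 11 A rating
overflow_w = max(0.0, slot_demand_w - molex_path_w)
per_pin_w = overflow_w / 2  # the 24-pin ATX connector has two 12 V pins
print(f"demand {slot_demand_w:.0f} W -> molex {molex_path_w:.0f} W, "
      f"24-pin {overflow_w:.0f} W ({per_pin_w:.0f} W per 12 V pin)")
```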

Similar issues have been around since the start of the crypto-mining frenzy. I saw several (hundreds if not thousands of) burnt 24-pin PSU cables / PSU sockets caused by insufficient 12V supply to the PCI-ex slots. The same still holds for PCI-ex risers for crypto mining: PCI-ex power is recommended instead of molex.

If you are going to continue using this motherboard + GPU configuration, my suggestion would be to reduce the number of GPUs used on the motherboard. If you are going to look for another motherboard, choose a model with a PCI-ex power socket.

I have built hundreds of deep learning systems for companies, research centers and universities. None of them has had such a problem, since I choose the components knowing what the real loads on each of them will be.
Exactly, well, that was the issue. Luckily, the Extreme Alpha comes with supplemental PCI-e power.
 

Attachments

  • Screen Shot 2021-07-22 at 3.33.32 PM.png (410 KB)
But while replacing the power cables, I decided to crack open the manual for the Extreme Alpha, specifically on power. Lo and behold, one tiny little two-line sentence that read, "Connect the 4 pin EZ_PLUG when you install 2 or more high-performance PCI-e cards". It's a 4-pin molex connector, but it connects at a right angle to the board, so it's very, very difficult to see.
This is the plug I specifically asked you about at the beginning of the thread, and you said it was already connected.
 
IMO, any board that is designed with multiple PCI-E x16 slots should have the proper power delivery to max those slots out, regardless of whether that is a typical use case or not. This is a major issue with every manufacturer: they put the slots there and just pray someone doesn't actually use them.


He didn't state in the first post which side it burned. The pictures he posted in the first post showed it burned on both sides.


It wouldn't be all on the molex; the molex is just a supplement to the 24-pin. It spreads that 300 W load over three 12 V pins instead of just two. Each pin then has to carry 100 W instead of the 150 W it would if there were only the 24-pin. Ideally, though, there would be a 6-pin or 8-pin PCI-E connector on the board to provide supplemental power to the PCI-E slots. It's really stupid that an $850 motherboard still has a molex on it instead of a 6/8-pin PCI-E.
From the PSU's perspective, the physical connector doesn't matter.
Either the board has current-limiting shunt resistors behind the molex somewhere, or only two of the three 12V lines are connected to that molex.

I was taking a wild swing saying that the molex might be providing an additional 38-something watts to each slot: 150/4 = 37.5, which should be the maximum you can get out of a molex connector.

Assuming you can, that means each slot should only be drawing 35 W or so from the 24-pin, unless the split is way off, which would render it pointless.
I can't tell from the picture if the 3.3V on the molex is connected to anything; if it is, it's probably being used to power whatever driver IC is feeding the slots.
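(The same wild swing, spelled out; the 150 W ceiling and the ~70 W per-slot draw are the assumed figures from the posts above.)

```python
# The back-of-envelope split from above: a ~150 W molex ceiling shared evenly
# across four x16 slots, against the ~70 W per-slot draw quoted earlier.
MOLEX_W, SLOTS = 150.0, 4
per_slot_draw_w = 70.0                     # assumed per-slot slot-power draw
from_molex = MOLEX_W / SLOTS               # 37.5 W per slot
from_24pin = per_slot_draw_w - from_molex  # ~32.5 W, i.e. the "35 W or so"
print(f"per slot: {from_molex:.1f} W via molex, {from_24pin:.1f} W via 24-pin")
```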
 
Doh. Pro tip, when you use quad, you connect all the accessory MB power plugs, all of them.
 
This is the plug I specifically asked you about at the beginning of the thread, and you said it was already connected.
OMG, you are totally right. I don't know how I misread that, thanks!!

Doh. Pro tip, when you use quad, you connect all the accessory MB power plugs, all of them.
Lmao. Ok, go ahead and spot the connection I failed to plug in without googling it.
 

Attachments

  • Screen Shot 2021-07-22 at 5.47.40 PM.png (1.5 MB)
The reasoning is simple. While they were designing the TRX40 motherboards, they didn't foresee that next-gen high-end GPUs (RTX 3090) were going to be this power hungry. Those boards were released before the launch of the RTX 3000 series. Also, those motherboards are designed for gaming / creators, which means they are not specifically designed for multi-GPU setups (more than 2 GPUs, since SLI/Crossfire has died) to be used as a workstation / GPU server.


WRX80, the workstation variant of TRX40, is designed specifically for this kind of usage and was released after the Ampere GPUs, so the board ASUS designed has 2x PCI-ex power sockets for supplementary power to its 7 PCI-ex slots. With this kind of power delivery, you can use 7x RTX 3090s with the motherboard, if you water cool them.

You have to consider what was designed for what, and what was launched before what, when you decide which components to use in your system and what purpose that system will serve.
That would be insane. I know someone using 3 RTX 3090s for a small render farm who uses liquid cooling for them. I just can't imagine having seven of those things all liquid cooled in the same case. :kookoo: Granted, 7 RTX 3090s would have you blazing through very large iray renders quickly, but in order to take full advantage of those cards...
7 cards x 24 GB VRAM (x4 overhead for iray) = 672 GB of system RAM needed just to be able to render with all of those cards at near full VRAM capacity, and there still might be a possibility of exceeding 672 GB depending on what all is in the scene.
8x 128 GB = 1.024 TB in RDIMMs, which would by itself probably cost north of $7,000 USD. I can't imagine how much the liquid cooling solution would cost in addition to that.
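(That RAM math, spelled out; the x4 overhead is just the iray rule of thumb used above.)

```python
# The iray RAM rule of thumb from above, spelled out.
cards, vram_gb, overhead = 7, 24, 4     # x4 overhead is the rule of thumb, not a spec
needed_gb = cards * vram_gb * overhead  # 672 GB of system RAM
dimms, dimm_gb = 8, 128
installed_gb = dimms * dimm_gb          # 1024 GB in RDIMMs
print(f"need ~{needed_gb} GB; 8x 128 GB gives {installed_gb} GB "
      f"({installed_gb - needed_gb} GB headroom)")
```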

If I were going to go with a similar setup, I would much rather go with several RTX A5000s. Not as fast, with GDDR6 memory and 16,128 fewer CUDA cores, but it's still going to be very fast with 57,344 CUDA cores altogether, and certainly not as power hungry or throwing off as much heat! No liquid cooling needed for the cards, unless you don't believe in air conditioning for the warmer months of the year.
 
OMG, you are totally right. I don't know how I misread that, thanks!!


Lmao. Ok, go ahead and spot the connection I failed to plug in without googling it.
That molex plug is on almost every high-end ASUS board. They are notorious for using any plug they can and then calling it EZ_PLUG. My RIVE had two EZ_PLUGs: a 6-pin PCIE and, get this, a 4-pin ancient-ass floppy drive plug. Do you know how hard it was to find a floppy plug even 10 years ago?
 
OMG, you are totally right. I don't know how I misread that, thanks!!


Lmao. Ok, go ahead and spot the connection I failed to plug in without googling it.
O_o
FOR FKS SAKE .... :banghead:
 
This is the plug I specifically asked you about at the beginning of the thread, and you said it was already connected.
Yeah, he said he did, but in the pictures the molex power was not connected (pictures in post #11). I thought he disconnected it when the system failed, so I didn't mention it.

Running EK water blocks on all of them!
I was referring to the blower-style RTX 3090s specifically designed for workstation / GPU render / deep learning workloads. I know you are using water cooling with your ASUS TUF RTX 3090s. Gaming cards are more aggressive on power draw than the workstation cards.


From the PSU's perspective, the physical connector doesn't matter.
Either the board has current-limiting shunt resistors behind the molex somewhere, or only two of the three 12V lines are connected to that molex.

I was taking a wild swing saying that the molex might be providing an additional 38-something watts to each slot: 150/4 = 37.5, which should be the maximum you can get out of a molex connector.

Assuming you can, that means each slot should only be drawing 35 W or so from the 24-pin, unless the split is way off, which would render it pointless.
I can't tell from the picture if the 3.3V on the molex is connected to anything; if it is, it's probably being used to power whatever driver IC is feeding the slots.

The physical connector matters. On a molex you have only ONE 12V wire and contact, so all the 12V current will run through that single wire / contact. On the other hand, when you use a 6-pin PCI-ex instead of a molex, you have TWO 12V wires and contacts, which means the load on each 12V wire is halved and the heat is divided across TWO different spots instead of ONE. Total power draw might not change, but stability will increase and the probability of melting will decrease. It is similar to using TWO 12V EPS power connections instead of ONE. By increasing the number of 12V delivery paths, you decrease the heating at the sockets, prevent them from melting, and get a more stable 12V flow to its destination.
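(A toy sketch of why that helps; the 120 W load and 5 milliohm contact resistance are made-up illustrative values, not measurements.)

```python
# Why extra 12 V contacts run cooler: contact heating scales with I^2 * R.
# Hypothetical values: 120 W supplemental load, ~5 milliohm per contact.
LOAD_W, VOLTS, R_CONTACT = 120.0, 12.0, 0.005

for contacts, name in [(1, "molex (one 12 V contact)"),
                       (2, "6-pin PCIe (two 12 V contacts)")]:
    amps = (LOAD_W / VOLTS) / contacts
    heat_w = amps ** 2 * R_CONTACT
    print(f"{name}: {amps:.1f} A/contact, ~{heat_w:.2f} W of heat per contact")
# Halving the current quarters the heat at each contact, and what remains is
# spread over two spots instead of one.
```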


That would be insane. I know someone using 3 RTX 3090s for a small render farm who uses liquid cooling for them. I just can't imagine having seven of those things all liquid cooled in the same case. :kookoo: Granted, 7 RTX 3090s would have you blazing through very large iray renders quickly, but in order to take full advantage of those cards...
7 cards x 24 GB VRAM (x4 overhead for iray) = 672 GB of system RAM needed just to be able to render with all of those cards at near full VRAM capacity, and there still might be a possibility of exceeding 672 GB depending on what all is in the scene.
8x 128 GB = 1.024 TB in RDIMMs, which would by itself probably cost north of $7,000 USD. I can't imagine how much the liquid cooling solution would cost in addition to that.

If I were going to go with a similar setup, I would much rather go with several RTX A5000s. Not as fast, with GDDR6 memory and 16,128 fewer CUDA cores, but it's still going to be very fast with 57,344 CUDA cores altogether, and certainly not as power hungry or throwing off as much heat! No liquid cooling needed for the cards, unless you don't believe in air conditioning for the warmer months of the year.
1627018269083.png


Right now I'm working on building an AMD Threadripper PRO version with this case and 7 GPUs (RTX 3090 or RTX A6000, haven't decided yet). I already have the sample case at hand (I'm also EK's SI partner on their Fluid Gaming and Fluid Works systems), waiting for the new PRO series GPU blocks to be ready. This build will be a demo and testing unit.
 
Yeah, he said he did, but in the pictures the molex power was not connected (pictures in post #11). I thought he disconnected it when the system failed, so I didn't mention it.


I was referring to the blower-style RTX 3090s specifically designed for workstation / GPU render / deep learning workloads. I know you are using water cooling with your ASUS TUF RTX 3090s. Gaming cards are more aggressive on power draw than the workstation cards.




The physical connector matters. On a molex you have only ONE 12V wire and contact, so all the 12V current will run through that single wire / contact. On the other hand, when you use a 6-pin PCI-ex instead of a molex, you have TWO 12V wires and contacts, which means the load on each 12V wire is halved and the heat is divided across TWO different spots instead of ONE. Total power draw might not change, but stability will increase and the probability of melting will decrease. It is similar to using TWO 12V EPS power connections instead of ONE. By increasing the number of 12V delivery paths, you decrease the heating at the sockets, prevent them from melting, and get a more stable 12V flow to its destination.



View attachment 209417

Right now I'm working on building an AMD Threadripper PRO version with this case and 7 GPUs (RTX 3090 or RTX A6000, haven't decided yet). I already have the sample case at hand (I'm also EK's SI partner on their Fluid Gaming and Fluid Works systems), waiting for the new PRO series GPU blocks to be ready. This build will be a demo and testing unit.
What I was saying is that the PSU does not KNOW or CARE what the connector is; it will provide as much amperage as the wire will carry until something melts.
 
Lmao. Ok, go ahead and spot the connection I failed to plug in without googling it.
The only reason I spotted it right away was because I too learned the hard way about running multiple high-draw GPUs from just the 24-pin. So don't feel too bad.

IMG_20130925_120636_975.jpg
 
Thanks for your help guys especially @Ominence
No worries mate, happy to have been able to assist you some. It was curious re the 4-pin molex at the bottom of the board. It looked really suss, given that you had the PCIe riser cables so close to that connector, and it did not appear that there was any cable pathing/routing allowance there to be able to connect the 4-pin as well (with what was visible in the photo, anyway). I'm sure this has been a great learning experience for you. With such an elaborate build, a little patience goes a long way to ensure reliable, trouble-free performance.

1627031838871.png


My RIVE had two EZ_PLUGs: a 6-pin PCIE and, get this, a 4-pin ancient-ass floppy drive plug.
Still have it, and it's still connected!
 
Yeah, he said he did, but in the pictures the molex power was not connected (pictures in post #11). I thought he disconnected it when the system failed, so I didn't mention it.


I was referring to the blower-style RTX 3090s specifically designed for workstation / GPU render / deep learning workloads. I know you are using water cooling with your ASUS TUF RTX 3090s. Gaming cards are more aggressive on power draw than the workstation cards.




The physical connector matters. On a molex you have only ONE 12V wire and contact, so all the 12V current will run through that single wire / contact. On the other hand, when you use a 6-pin PCI-ex instead of a molex, you have TWO 12V wires and contacts, which means the load on each 12V wire is halved and the heat is divided across TWO different spots instead of ONE. Total power draw might not change, but stability will increase and the probability of melting will decrease. It is similar to using TWO 12V EPS power connections instead of ONE. By increasing the number of 12V delivery paths, you decrease the heating at the sockets, prevent them from melting, and get a more stable 12V flow to its destination.



View attachment 209417

Right now I'm working on building an AMD Threadripper PRO version with this case and 7 GPUs (RTX 3090 or RTX A6000, haven't decided yet). I already have the sample case at hand (I'm also EK's SI partner on their Fluid Gaming and Fluid Works systems), waiting for the new PRO series GPU blocks to be ready. This build will be a demo and testing unit.
A testing unit for what? Not sure what you would test with all of those cards unless you're running multiple tests simultaneously, which would definitely save you time. How much system RAM are you going to need to utilize all seven cards together? Sheesh, the cards alone will run between $30k and $43k depending on which brand and whether you can get them at MSRP. Right now the A6000 PNY cards are around $5,500, but there's no way in hell I'm buying a PNY-branded A6000.
 
Hopefully you deep learned a lesson here.
 
A testing unit for what? Not sure what you would test with all of those cards unless you're running multiple tests simultaneously, which would definitely save you time. How much system RAM are you going to need to utilize all seven cards together? Sheesh, the cards alone will run between $30k and $43k depending on which brand and whether you can get them at MSRP. Right now the A6000 PNY cards are around $5,500, but there's no way in hell I'm buying a PNY-branded A6000.
As I said before, I'm a System Integrator partner for multiple gaming / workstation brands (EK is one of them, and EK is also our cooling solution partner for our local workstation brand), so I already have enough components to build a demo unit. I'm just waiting for EK to release their PRO series GPU blocks for the new Nvidia RTX A series GPUs, since the cooling solution in those workstations is different from EK's consumer products.

Our areas of expertise are:

- MANUFACTURING and PRODUCT DESIGN (MPD)
- ARCHITECTURE, ENGINEERING and CONSTRUCTION (AEC)
- MEDIA and ENTERTAINMENT (M&E)
- GOVERNMENT, EDUCATION, MEDICAL
- DATA SCIENCE
- STORAGE
- CPU / GPU RENDERING - RENDER FARM
- REMOTE ACCESS with PCoIP ( when VMware's performance on a server is not enough for the customer's use case )

This demo unit will be used for testing CPU / GPU rendering, deep learning and scientific calculations. I'm already building rack-mount AMD EPYC 4/8-GPU servers as solutions; for some customers, I need a solution that is silent, can be kept on an office desk and is compact enough for their use case. This system will be a showcase for those customers and a long-term test bed for EK.

Instead of offering a fixed system configuration (or very limited upgrade options) to a customer like HP, Dell or Lenovo do, we like to test the system with the customer to fine-tune the configuration to fit their workload perfectly. Customers thus refer to us as "problem solvers", since we also have enough knowledge about the software and its system resource needs / usage to give a proper solution to customers who couldn't get one from those brands and came to us as a "last resort". During those tests we'll use various memory capacities, from 256 GB to 2 TB, depending on the project, use case and workload.
 