
PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

The other half of that argument is the AMD Naples board they showed off. With at least 750W going in, the PCIe slots are pretty much guaranteed to be getting at least 500W all up. If they're "overloading" the connections (at full tilt, each pin in a PCIe, EPS/CPU and ATX power connector is rated up to something like 9A, so a 6-pin PCIe is safe for around 200W on its own and an 8-pin for 300W, with current server cards using only a single 8-pin for 300W cards), pulling in 1500W for two 150W CPUs and four 300W cards is entirely within the realm of possibility. It may not be PCIe 4.0 on Zen, but power support may well trickle down into a revision of 3.x.
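Rough back-of-the-envelope in Python, taking the ~9A-per-pin figure above at face value (actual ratings vary with terminal type and wire gauge) and the usual +12V pin counts:

```python
# Back-of-the-envelope connector capacity check, using the post's ~9A-per-pin
# figure (an assumption; real ratings depend on terminal and wire gauge).
PIN_CURRENT_A = 9.0   # assumed max current per +12V pin
RAIL_V = 12.0

# +12V pin counts per connector type (common pinouts)
twelve_volt_pins = {
    "PCIe 6-pin": 3,
    "PCIe 8-pin": 3,
    "EPS 8-pin": 4,
}

for name, pins in twelve_volt_pins.items():
    raw_w = pins * PIN_CURRENT_A * RAIL_V
    print(f"{name}: {pins} x {PIN_CURRENT_A:.0f}A x {RAIL_V:.0f}V = {raw_w:.0f}W raw")

# Versus the derated "safe" figures quoted above (200W / 300W), there's margin.
# Total board draw for the Naples-style example:
print("2x150W CPUs + 4x300W cards =", 2 * 150 + 4 * 300, "W")
```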

I'm personally not too worried about the safety of pushing 300W through the PCIe slot: 300W is only 25A at 12V. Compare that to the literal hundreds of amps (150+W at less than 1V) CPUs have to be fed over very similar surface areas.
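Just to put numbers on that comparison (simple I = P/V arithmetic, nothing more):

```python
# Rough current comparison behind the point above: same order of power,
# very different amps depending on the rail voltage.
def amps(power_w: float, volts: float) -> float:
    return power_w / volts

print(f"300W slot at 12V  : {amps(300, 12.0):6.1f} A")
print(f"150W CPU at ~1.0V : {amps(150, 1.0):6.1f} A")
print(f"150W CPU at ~0.9V : {amps(150, 0.9):6.1f} A")
```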

Yes, it's also a server board. I don't think any official info on those is available yet, but speculation has it that it is for the PCIe slots, yes.
I still wouldn't see that trickle down to enthusiasts immediately, even if it were the case.
I could see the server people paying for a high-layer-count, high-power mobo for specific cards, not necessarily GPUs.

I just don't see the need to change the norm for desktops.

Two 1080s on one card will take you over the PCIe spec.
I think this is just what the AIBs/constructors/AMD/NVIDIA and the others want.
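Quick sanity check on that, assuming the reference ~180W board power per GPU (before any factory overclock):

```python
# Dual-GPU check: two GTX 1080 class GPUs (~180W reference board power each,
# an assumption here) against the current 300W PCIe add-in-card ceiling.
GPU_TDP_W = 180          # assumed per-GPU board power
PCIE_CARD_LIMIT_W = 300  # current PCIe spec ceiling for one add-in card

dual_gpu_w = 2 * GPU_TDP_W
print(f"Dual-GPU card: {dual_gpu_w}W vs {PCIE_CARD_LIMIT_W}W limit "
      f"-> over by {dual_gpu_w - PCIE_CARD_LIMIT_W}W")
```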
 
Depends on the board; on the vast majority they are stacked right next to the CPU.
Dual socket has it next to CPU1, none by CPU2. Quad socket can either have it by CPU1, or next to the mainboard connector.

Either way, CPU2 is having its power come across the board a decent distance, and there are no ill effects.

Also, a single memory slot may not be much, but 64 of them is significant: 128-192 watts.

Plus 6x 75-watt PCIe slots. No matter how you slice it, server boards have significant current running across them (a quick tally is sketched below).

Having the same on enthusiast builds won't suddenly cause components to degrade. After all, some of these are $300-500 US.
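Something like this, if you want rough numbers (the per-DIMM and per-socket wattages here are my own illustrative assumptions):

```python
# Rough tally of power routed across a big 4-socket server board, using the
# per-DIMM figure implied above (2-3W each) plus slot-only PCIe power.
DIMM_SLOTS = 64
DIMM_W_LOW, DIMM_W_HIGH = 2, 3       # assumed watts per populated DIMM
PCIE_SLOTS, SLOT_W = 6, 75           # slot-delivered power only
CPUS, CPU_W = 4, 130                 # illustrative per-socket TDP (assumption)

ram_low, ram_high = DIMM_SLOTS * DIMM_W_LOW, DIMM_SLOTS * DIMM_W_HIGH
pcie = PCIE_SLOTS * SLOT_W
cpus = CPUS * CPU_W

print(f"RAM : {ram_low}-{ram_high} W")
print(f"PCIe: {pcie} W")
print(f"CPUs: {cpus} W")
print(f"Total board-routed power: {ram_low + pcie + cpus}-{ram_high + pcie + cpus} W")
```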
 
I'm using a 2P unit with the MOSFETs next to both CPUs; the last 8 workstations I have dealt with have all had the PWM sections next to the CPU, partially covered by the heatsinks and down-facing coolers.
 
Only if you have the generic form factor motherboards. Proper bespoke servers have the PSUs plugging directly into the motherboard and sending power out from the motherboard to the various other devices, including GPUs. On something like a Supermicro 1028GQ, that means that under full load of dual 145W CPUs, RAM, four 300W GPUs and two 25W HHHL PCIe cards, you're pushing over 1600W through the motherboard. For that particular board, the extra non-slotted GPU power pokes out of the board at four locations, one at each corner, as a single 8-pin plug per GPU (meaning 225W over the plug's mere three +12V pins, for 75W/6.25A per pin).
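The arithmetic, spelled out (the RAM figure is a rough guess on my part; the rest are the numbers above):

```python
# 1028GQ-style power budget, plus the per-pin load on a single 8-pin feeding
# a 300W GPU (75W of which still arrives via the x16 slot).
cpus    = 2 * 145          # W
gpus    = 4 * 300          # W
hhhl    = 2 * 25           # W
ram_est = 16 * 8           # W, assumed ~8W per loaded DIMM, 16 DIMMs (illustrative)

total = cpus + gpus + hhhl + ram_est
print(f"Board-routed load: ~{total} W")            # comfortably over 1600W

per_gpu_plug_w = 300 - 75                          # slot covers 75W, plug the rest
pins_12v       = 3                                 # +12V pins in a PCIe 8-pin
per_pin_w      = per_gpu_plug_w / pins_12v
print(f"Per 8-pin plug: {per_gpu_plug_w} W over {pins_12v} pins "
      f"= {per_pin_w:.0f} W ({per_pin_w / 12:.2f} A) per pin")
```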
 
Actually, that one has 4x 4-pin connectors surrounding the CPU sockets.

I'm in mainly a Dell shop, and those only have one 8-pin for dual CPU and two for quad CPU. Again, the dual CPU only has the 8-pin by CPU1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPU1 and 3 but not 2 and 4. Most of the time CPU2 is getting its power from CPU1, so the distance between them is exactly the kind of run that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.
 
[Image: Tyan Socket 940 Opteron board]


Having the connector next to the socket doesn't mean that's where the PWM section is. Most boards will split the 8-pin into 4+4 for each CPU (if there's only a single connector between them); boards like this are typically set up with one EPS per CPU, and the additional 4-pin supplies the memory.

This would be the Tyan Socket 940 Opteron board you are talking about, by the way.
 
That's only on the desktop parts. On the server side, specifically high-GPU boxes like the C4130, the Dells run PCIe power using four 8-pin cables from right in front of the PSU straight to the cards at the front.

Bit of a poor choice of board to illustrate your point: half the VRMs are at the rear, with power being dragged along from the front all the way over.

On a more modern note, we have Supermicro's X10DGQ (used in Supermicro's 1028GQ):

[Image: Supermicro X10DGQ board layout]


There are four black PCIe 8-pin connections: one near the PSU (top left), and the remaining three near the x16 riser slots, two of them all the way at the front together with three riser slots. The white 4-pins are used for fan and HDD connections. These boards are out there, in production, and apparently working quite well, if a bit warm for the 4th rear-mounted GPU (the other 3 are front-mounted). That particular board has over 1600W going through it when fully loaded, and it's about the size of a good E-ATX board. So relax, and hope we can get 300W slots consumer-side as well as server-side and make ourselves some very neat-looking, easy-to-upgrade builds.
 
The power draw from the EPS connector isn't where the heat is; the VRM section is. There is plenty of PCB to carry the minor amount of power that you would see pulled from the 8-pin to the VRM section. What I am saying is that there isn't a single board on the market that I know of that doesn't have the VRM section close to the CPU; there would be too much Vdroop otherwise. The drop from the 8-pin quite honestly doesn't matter and is fixed when it hits the VRMs.
 
Indeed, but chips are pulling at ~1V (usually less) these days, meaning >100A is going to the chip. At those levels of current, you do indeed get a lot of Vdroop.

At 12V and 25A per slot, though, the Vdroop is much less of an issue. On top of that, most cards have their own VRMs anyway to generate their own precisely regulated, barely-above-1V supply, so a bit of Vdroop won't be that much of an issue.
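A crude I·R illustration of why that is, with an assumed 1 mΩ of path resistance (purely illustrative; real boards differ):

```python
# Illustrative Vdroop (I*R) comparison for the same assumed path resistance,
# showing why droop hurts a ~1V/150A core rail far more than a 12V/25A feed.
R_PATH_OHM = 0.001     # assumed 1 milliohm of board/connector resistance

cases = [
    ("CPU core rail", 1.0, 150.0),   # ~1V, ~150A
    ("12V slot feed", 12.0, 25.0),   # 12V, 300W slot
]

for name, volts, amps in cases:
    droop = amps * R_PATH_OHM
    print(f"{name}: {droop*1000:.0f} mV drop = {droop / volts * 100:.2f}% of the rail")
```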
 
And what about AMD and the new Zen CPU? I mean, if this PCIe specification is finalized by 2016, will the new (AM4) motherboards use PCIe Gen 4.0?
 
If it's a standard, it means it will be properly implemented. They are not going to put tiny little traces on a motherboard and then allow up to 300W of power through them.

I think it will be a neat and proper design, leaving many builds with video cards that need no cables from the power supply. Similar to putting one 4- or 8-pin for your CPU and one 4- or 8-pin for your GPU.

Right now, similar motherboards share their 12V CPU input for memory and PCI-Express slots as well (the 75W configuration).
 
Because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine; 4-socket server boards push much more power than that to all four CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.

You might want to choose better words. I asked because I didn't know. It's silly to call someone silly just because they ask a question.
If that's how you answer honest questions, I'd prefer you not answer mine, ty.
Or maybe you'd like to ask me about building standards or Troxler nuclear gauge testing, and I can call you silly?
 
I don't see anything in yogurt's post that calls you silly, just that your line of thinking/assumption/question was silly. Asking such a question is a bit like asking if an ingot of 24K gold degrades over time (nuclear decay notwithstanding), or if fire is hot. It's just a question that makes no real sense. I mean, there's nothing special about PCB copper traces that would put them at any risk of degradation versus lengths of copper wire, very much unlike the doped silicon the chips use.
 
"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).
9m0mRsX.png

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).
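For reference, the commonly cited per-rail breakdown for a full x16 slot, which is where the ~75W figure comes from (5.5A on +12V plus 3A on +3.3V):

```python
# Commonly cited per-rail slot limits for a full-size x16 connector and how
# they add up to the familiar ~75W figure.
RAIL_LIMITS = {           # rail voltage (V) -> max slot current (A)
    12.0: 5.5,
    3.3: 3.0,
}

per_rail = {v: v * a for v, a in RAIL_LIMITS.items()}
for v, w in per_rail.items():
    print(f"+{v:g}V rail: {w:.1f} W")
print(f"Slot total: {sum(per_rail.values()):.1f} W (quoted as the 75W limit)")
```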
 
"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).
9m0mRsX.png

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).

That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power delivery changes are inbound.
 
I still can't understand why Naples' power configuration is used as a hint about PCIe 4.0 capability. Conventional engineering practice would suggest that the six connectors are too far away from the PCIe slots, even crossing the CPU sockets, and too spread apart from each other. If the six connectors were used for PCIe, it would be more logical to place them near the PCIe slots and/or group them together in one place.

The four inductors near each of the connectors also suggest that this power is most likely used locally (though 4 phases for 4 RAM slots is overkill; maybe Zen got multi-rail power, who knows).
 
I wouldn't go as far as theory; it's just speculation, built on a misunderstanding.

Asus hasn't been able to do a dual-GPU card for a while now, probably due to the official PCIe spec limiting cards to 300W absolute max.
If the PCIe spec is increased to 450+W, then dual-GPU cards get the blessing of PCI-SIG.
 
In what world exactly do you need over 750W going to only two CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.

They'd do it, PCIe be damned, if they figured the market would buy. Both AMD and NVIDIA have violated/exceeded the PCIe spec in the past; hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads: its simple mezzanine connector design carries both all the power and the I/O, and it's pushing into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.

[Image: 8-GPU NVLink daughterboard in a deep learning server]

As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
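For the curious, the 2400W is just eight ~300W SXM2 modules, plus whatever the PCIe slots draw on top (slot power only here, before any plugs):

```python
# Where the 2400W figure comes from: eight ~300W SXM2 GPU modules, plus the
# slot-delivered power for the PCIe cards (NICs/storage) riding along.
SXM_GPUS, SXM_W    = 8, 300
PCIE_SLOTS, SLOT_W = 4, 75

gpu_power  = SXM_GPUS * SXM_W
slot_power = PCIE_SLOTS * SLOT_W
print(f"GPU daughterboard: {gpu_power} W")
print(f"+ PCIe slots     : {slot_power} W")
print(f"Board-routed     : {gpu_power + slot_power} W")
```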
 
You have got very valid points concerning the power delivery in server boards, and I agree with you that this scenario is totally doable; I am just worried about the added cost this standard will bring to the table. Let's not forget that server boards are much more expensive than even the enthusiast ones.
 
Not that much from the BOM side of things. Much, much more of the price goes into the support network, branding, validation, and of course, profits.
 
The rest of the server industry wants in on that neatness, maybe.
Desktop, no.

There is no reason for it, and no need for it.

Your example is yet another server tech that won't be coming to the enthusiast desktop (NVLink).
The DGX-1 is fully built by NVIDIA; there are no "common" parts in it at all, so it can all be proprietary (mobo, NVLink board, etc.).
I can't even see the power connectors on the NVLink board, so they could be using any number of connectors on the left there, or on the other side of the board...
 
Well, yes, but even if the BOM isn't much higher, it elevates the cost regardless, since it is a more expensive implementation due to its complexity and added engineering.
 
Actually, PCI-SIG contacted Tom's and told them that the slot power is still 75W; the extra power would come from additional power connectors:

Update, 8/24/16, 2:06pm PT: PCI-SIG reached out to tell us that the power increase for PCI Express 4.0 will come from secondary connectors and not from the slot directly. They confirmed that we were initially told incorrect information. We have redacted a short passage from our original article that stated what we were originally told, which is that the slot would provide at least 300W, and added clarification:

  • PCIe 3.0 max power capabilities: 75W from CEM + 225W from supplemental power connectors = 300W total
  • PCIe 4.0 max power capabilities: TBD
New value “P” = 75W from CEM + (P-75)W from supplemental power connectors.
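Put another way, the quoted split is just this (a sketch, with “P” still TBD):

```python
# The quoted PCIe 4.0 split, expressed directly: whatever the final per-card
# value "P" turns out to be, 75W comes from the slot (CEM) and the rest from
# supplemental power connectors.
CEM_SLOT_W = 75

def power_split(p_total_w: float) -> tuple[float, float]:
    """Return (slot_watts, supplemental_watts) for a total card budget P."""
    return CEM_SLOT_W, max(p_total_w - CEM_SLOT_W, 0)

for p in (75, 150, 300, 450):        # candidate P values; the real one is TBD
    slot, supp = power_split(p)
    print(f"P = {p:>3} W -> {slot} W slot + {supp} W supplemental")
```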
 