Monday, August 22nd 2016

PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

The PCI-Express gen 4.0 specification promises to deliver a huge leap in host-bus bandwidth and power delivery for add-on cards. According to its latest draft, the specification prescribes a transfer rate of 16 GT/s per lane, double the 8 GT/s of the current PCI-Express gen 3.0 specification. The 16 GT/s per-lane rate translates into 1.97 GB/s for x1 devices, 7.87 GB/s for x4, 15.75 GB/s for x8, and 31.5 GB/s for x16 devices.
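Those per-device figures follow from the 16 GT/s signalling rate and PCIe's 128b/130b line encoding; here is a quick sketch of the arithmetic (per direction, ignoring protocol overhead beyond the line code):

```python
# Effective per-direction PCIe bandwidth from signalling rate and 128b/130b encoding.
def pcie_gbytes_per_s(gt_per_s, lanes):
    payload_ratio = 128 / 130              # 128b/130b line code used by PCIe 3.0/4.0
    bits_per_s = gt_per_s * 1e9 * payload_ratio * lanes
    return bits_per_s / 8 / 1e9            # GB/s

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {pcie_gbytes_per_s(16, lanes):.2f} GB/s")
# Roughly 1.97, 7.88, 15.75 and 31.51 GB/s, matching the figures above (rounding aside).
```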

More importantly, it prescribes a quadrupling of power delivery from the slot. A PCIe gen 4.0 slot should be able to deliver 300W of power (against 75W from PCIe gen 3.0 slots). This should eventually eliminate the need for additional power connectors on graphics cards with a power draw under 300W. The change could be gradual, however, as graphics card designers may want to retain backwards compatibility with older PCIe slots and keep the additional power connectors. The PCI-SIG, the special interest group behind PCIe, said that it would finalize the gen 4.0 specification by the end of 2016.
Source: Tom's Hardware

75 Comments on PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

#51
yogurt_21
cdawall: Depends on the board; the vast majority are stacked right next to the CPU.
Dual socket has it next to CPU 1, none by CPU 2. Quad socket can have it either by CPU 1 or next to the mainboard connector.

Either way, CPU 2 is having its power come across the board a decent distance, and there are no ill effects.

Also, a single memory slot may not be much, but 64 of them is significant: 128-192 watts.

Plus 6x 75-watt PCIe slots. No matter how you slice it, server boards have significant current running across them.

Having the same on enthusiast builds won't suddenly cause components to degrade. After all, some of these are $300-500 US.
#52
cdawall
where the hell are my stars
yogurt_21: Dual socket has it next to CPU 1, none by CPU 2. Quad socket can have it either by CPU 1 or next to the mainboard connector.
I'm using a 2P unit with the MOSFETs next to both CPUs. The last 8 workstations I have dealt with have all had the PWM sections next to the CPU, partially covered by the heatsinks and down-facing coolers.
#53
ZeDestructor
yogurt_21: Dual socket has it next to CPU 1, none by CPU 2. Quad socket can have it either by CPU 1 or next to the mainboard connector.

Either way, CPU 2 is having its power come across the board a decent distance, and there are no ill effects.

Also, a single memory slot may not be much, but 64 of them is significant: 128-192 watts.

Plus 6x 75-watt PCIe slots. No matter how you slice it, server boards have significant current running across them.

Having the same on enthusiast builds won't suddenly cause components to degrade. After all, some of these are $300-500 US.
cdawall: I'm using a 2P unit with the MOSFETs next to both CPUs. The last 8 workstations I have dealt with have all had the PWM sections next to the CPU, partially covered by the heatsinks and down-facing coolers.
Only if you have the generic form-factor motherboards. Proper bespoke servers have the PSUs plugging directly into the motherboard and sending power out from the motherboard to the various other devices, including GPUs. On something like a Supermicro 1028GQ, that means that under full load of dual 145W CPUs, RAM, 4x 300W GPUs and 2x 25W HHHL PCIe cards, you're pushing over 1600W through the motherboard. For that particular board, the extra non-slotted GPU power pokes out of the board at four locations, one at each corner, as a single 8-pin plug per GPU (meaning 225W across the plug's mere three +12V pins, or 75W/6.25A per pin).
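Quick back-of-the-envelope check of those numbers; the ~100W RAM allowance below is just a rough assumption of mine, not a quoted figure:

```python
# Rough sanity check of the 1028GQ figures quoted above.
cpus = 2 * 145        # W, dual 145W CPUs
gpus = 4 * 300        # W, four 300W GPUs
hhhl = 2 * 25         # W, two 25W HHHL PCIe cards
ram  = 100            # W, rough allowance for a loaded RAM config (assumption)
print("through the board:", cpus + gpus + hhhl + ram, "W")   # ~1640W, i.e. "over 1600W"

# One 8-pin plug per GPU carries whatever doesn't come through the slot:
per_plug  = 300 - 75          # 225W beyond the 75W slot budget
pins_12v  = 3                 # +12V pins in an 8-pin PCIe plug
per_pin_w = per_plug / pins_12v
print(per_pin_w, "W per pin,", per_pin_w / 12, "A per pin at 12V")   # 75.0 W, 6.25 A
```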
#54
yogurt_21
ZeDestructor: Only if you have the generic form-factor motherboards. Proper bespoke servers have the PSUs plugging directly into the motherboard and sending power out from the motherboard to the various other devices, including GPUs. On something like a Supermicro 1028GQ, that means that under full load of dual 145W CPUs, RAM, 4x 300W GPUs and 2x 25W HHHL PCIe cards, you're pushing over 1600W through the motherboard. For that particular board, the extra non-slotted GPU power pokes out of the board at four locations, one at each corner, as a single 8-pin plug per GPU (meaning 225W across the plug's mere three +12V pins, or 75W/6.25A per pin).
Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual CPU and two for quad CPU. Again, the dual-CPU board only has the 8-pin by CPU 1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU 2 is getting its power from CPU 1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.
#55
cdawall
where the hell are my stars
yogurt_21: Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual CPU and two for quad CPU. Again, the dual-CPU board only has the 8-pin by CPU 1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU 2 is getting its power from CPU 1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.

Having the connector next to the socket isn't the PWM section. Most boards will split the 8-pin into 4+4 for each CPU (if there's only a single one between them); boards like this are typically set up with one EPS per CPU, and the additional 4-pin supplies the memory.

This would be the Tyan Socket 940 Opteron board you are talking about, by the way.
#56
ZeDestructor
yogurt_21: Actually, that one has 4x 4-pins surrounding the CPU sockets.

I'm mainly in a Dell shop, and those only have one 8-pin for dual CPU and two for quad CPU. Again, the dual-CPU board only has the 8-pin by CPU 1, and the quad has them next to the mainboard connector. Granted, even with the independent CPU power on that mobo, a significant amount is passing through it.

Older Tyan boards from the Opteron days (or at least the days they were relevant) have them next to CPUs 1 and 3 but not 2 and 4. Most of the time CPU 2 is getting its power from CPU 1, so the distance between them is exactly the sort of thing that should be subject to degradation and isn't. The boards can handle the extra power, and the manufacturers are smart enough to route it properly.
Only on the desktop parts. On the server bits, specifically high-GPU units like the C4130, the Dells run PCIe power using four 8-pin cables from right in front of the PSU straight to the cards in front.
cdawall: Having the connector next to the socket isn't the PWM section. Most boards will split the 8-pin into 4+4 for each CPU (if there's only a single one between them); boards like this are typically set up with one EPS per CPU, and the additional 4-pin supplies the memory.

This would be the Tyan Socket 940 Opteron board you are talking about, by the way.
Bit of a poor choice of board to illustrate your point: half the VRMs are at the rear, with power being dragged along from the front all the way over.

On a more modern note, we have Supermicro's X10DGQ (used in Supermicro's 1028GQ):



There are 4 black PCIe 8-pin connections: one near the PSU (top left), and the remaining 3 all around near the x16 riser slots, 2 of them all the way at the front together with 3 riser slots. The white 4-pins are used for fan and HDD connections. These boards are out there, in production, and apparently working quite well, if a bit warm for the 4th, rear-mounted GPU (the other 3 are front-mounted). That particular board has over 1600W going through it when fully loaded, and it's about the size of a good E-ATX board, so relax, and hope we can get 300W slots consumer-side as well as server-side and make ourselves some very neat-looking, easy-to-upgrade builds.
#57
cdawall
where the hell are my stars
ZeDestructor: Bit of a poor choice of board to illustrate your point: half the VRMs are at the rear, with power being dragged along from the front all the way over.

On a more modern note, we have Supermicro's X10DGQ (used in Supermicro's 1028GQ):
The power pull from the EPS connector isn't where the heat is; the VRM section is. There is plenty of PCB to carry the minor amount of power that you would see pulled from the 8-pin to the VRM section. What I am saying is there isn't a single board on the market that I know of that doesn't have the VRM section close to the CPU. There would be too much Vdroop. The drop from the 8-pin quite honestly doesn't matter and is fixed when it hits the VRMs.
#58
ZeDestructor
cdawall: The power pull from the EPS connector isn't where the heat is; the VRM section is. There is plenty of PCB to carry the minor amount of power that you would see pulled from the 8-pin to the VRM section. What I am saying is there isn't a single board on the market that I know of that doesn't have the VRM section close to the CPU. There would be too much Vdroop. The drop from the 8-pin quite honestly doesn't matter and is fixed when it hits the VRMs.
Indeed, but chips are pulling at ~1V (usually less) these days, meaning >100A is going to the chip. At those levels of current, you do indeed get a lot of Vdroop.

At 12V and 25A per slot, though, the Vdroop is much less of an issue. On top of that, most cards have their own VRMs anyway to generate their own supply of precisely regulated, barely-above-1V power, so a bit of Vdroop won't be that much of a problem.
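Rough numbers to illustrate the scaling; the 1 milliohm of path resistance below is a made-up placeholder, not a measured value:

```python
# For the same power, a higher supply voltage means less current,
# and resistive drop scales linearly with current (V = I * R).
def droop(power_w, supply_v, path_resistance_ohm):
    current = power_w / supply_v              # I = P / V
    return current, current * path_resistance_ohm

R = 0.001   # 1 mOhm of board/connector resistance (made-up placeholder)
for volts in (1.0, 12.0):
    amps, drop = droop(300, volts, R)
    print(f"{volts:>4} V rail: {amps:6.1f} A, {drop * 1000:5.1f} mV dropped")
# ~300 A / 300 mV at 1 V vs ~25 A / 25 mV at 12 V, which is why the slot carries 12 V
# and the card's own VRMs handle the final step down to core voltage.
```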
#59
BackSlash
And... what about AMD and the new Zen CPUs? I mean, if this PCIe specification is finalized by the end of 2016, will the new motherboards (AM4) use PCIe gen 4.0?
#60
Jism
If it's a standard, it means it's properly implemented. They are not going to put tiny traces on the motherboard while allowing up to 300W of power through it.

I think it will be a neat and proper design, leaving many cases with video cards that need no extra power cables. Similar to putting 1x 4- or 8-pin for your CPU and 1x 4- or 8-pin for your GPU.

Right now, similar motherboards share their 12V CPU input with the memory and the PCI-Express slots (75W configuration) as well.
#61
D007
yogurt_21: Because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine; 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation, that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.
You might want to choose better words. I asked because I didn't know. It's silly to call someone silly just because they ask a question.
If that's how you answer honest questions, I'd prefer you not answer mine. Ty.
Or maybe you'd like to ask me about building standards or Troxler nuclear gauge testing, and I can call you silly?
#62
ZeDestructor
D007: You might want to choose better words. I asked because I didn't know. It's silly to call someone silly just because they ask a question.
If that's how you answer honest questions, I'd prefer you not answer mine. Ty.
Or maybe you'd like to ask me about building standards or Troxler nuclear gauge testing, and I can call you silly?
I don't see anything in yogurt's post that calls you silly, just that your line of thinking/assumption/theory of how things work/question was silly. Asking such a question is a bit like asking if an ingot of 24K gold degrades over time (nuclear decay notwithstanding), or if fire is hot. It's just a question that makes no real sense. I mean, there's nothing special about PCB copper traces that would put them at any risk of degradation vs. lengths of copper wire, very much unlike the doped silicon the chips use.
#63
ArdWar
"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).
#64
ZeDestructor
ArdWar"Slot Power Limit" is a bit ambiguous. I guess that someone might misinterpret the following specification that's already there since forever (well, at least since 0.3 draft).

The 75 Watt limit is physical limit of the number of pins available for power and each pin power capacity. Increasing it would mean a slot redesign or pin reassignment for power delivery (which means more difficult back/fwd compatibility).
That would be why the current theory is that the slot is going to be up-specced to 300W thorugh the edge-connector, since the 300W per card rating has existed for a while now. The Naples board AMD showed off with a massive amount of PCIe power to the board would also suggest that major power delivery changes are inbound.
#65
ArdWar
ZeDestructor: That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power-delivery changes are inbound.
I still can't understand why Naples' power configuration is used as a hint about PCIe 4.0 capability. Conventional engineering practice would suggest that the six connectors are too far away from the PCIe slots, even crossing over the CPU sockets, and too spread apart from each other. If the 6 connectors were used for PCIe, it would be more logical to place them near the PCIe slots and/or group them together in one place.

The four inductors near each of the connectors also suggest that this power is most likely used locally (though 4 phases for 4 RAM slots is overkill; maybe Zen got multi-rail power, who knows).
#66
Evildead666
ZeDestructor: That would be why the current theory is that the slot is going to be up-specced to 300W through the edge connector, since the 300W-per-card rating has existed for a while now. The Naples board AMD showed off, with a massive amount of PCIe power going to the board, would also suggest that major power-delivery changes are inbound.
I wouldn't go as far as theory; it's just speculation, based on a misunderstanding.

Asus hasn't been able to do a dual-GPU card for a while now, probably due to the official PCIe spec limiting cards to 300W absolute max.
If the PCIe spec is increased to 450+W, then dual-GPU cards get the blessing of the PCI-SIG.
#67
ZeDestructor
ArdWar: I still can't understand why Naples' power configuration is used as a hint about PCIe 4.0 capability. Conventional engineering practice would suggest that the six connectors are too far away from the PCIe slots, even crossing over the CPU sockets, and too spread apart from each other. If the 6 connectors were used for PCIe, it would be more logical to place them near the PCIe slots and/or group them together in one place.

The four inductors near each of the connectors also suggest that this power is most likely used locally (though 4 phases for 4 RAM slots is overkill; maybe Zen got multi-rail power, who knows).
In what world exactly do you need over 750W going to only 2 CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.
Evildead666: I wouldn't go as far as theory; it's just speculation, based on a misunderstanding.

Asus hasn't been able to do a dual-GPU card for a while now, probably due to the official PCIe spec limiting cards to 300W absolute max.
If the PCIe spec is increased to 450+W, then dual-GPU cards get the blessing of the PCI-SIG.
They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and nVidia have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.

As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
#68
djrabes
lanlagger"slot should be able to deliver 300W of power".. AMD right now:
"AMD Radeon RX 480 likes this"
#69
$ReaPeR$
ZeDestructor: In what world exactly do you need over 750W going to only 2 CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.

They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and nVidia have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.

As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
You have got very valid points concerning the power delivery in server boards, and I agree with you that this scenario is totally doable; I am just worried about the added cost that this standard will bring to the table. Let's not forget that server boards are much more expensive than even the enthusiast ones.
#70
ZeDestructor
$ReaPeR$: You have got very valid points concerning the power delivery in server boards, and I agree with you that this scenario is totally doable; I am just worried about the added cost that this standard will bring to the table. Let's not forget that server boards are much more expensive than even the enthusiast ones.
Not that much from the BOM side of things. Much, much more of the price goes into the support network, branding, validation and, of course, profit.
#71
Evildead666
ZeDestructor: In what world exactly do you need over 750W going to only 2 CPUs? One interesting possibility would be to send 12V over the PCIe data lines, similar to how Power over Ethernet works, although I think that's unnecessary.

They'd do it, PCIe spec be damned, if they figured the market would buy it. Both AMD and nVidia have violated/exceeded the PCIe spec in the past... hell, AMD is technically still doing so on the 480 by overloading the 6-pin connector rather than the slot.

In serverland, on the other hand, NVLink is making inroads with its simple mezzanine-connector design pushing both all the power and the I/O, and into the 8-GPU-per-board segment starting with the DGX-1, not to mention all the benefits of having a dedicated, high-speed mesh for inter-GPU communication, in turn relegating PCIe to CPU communication only.

As you can observe: not a single PCIe power connector on the NVLink "daughterboard" - all the power goes through the board. All 2400W of it (and then some for the 4 PCIe slots).

The rest of the industry wants in on this neatness with PCIe, and by the sounds of it, they're willing to do it.
The rest of the server industry, maybe.
Desktop, no.

There is no reason for it, and no need for it.

Your example (NVLink) is yet another server tech that won't be coming to the enthusiast desktop.
The DGX-1 is fully built by Nvidia; there are no "common" parts in it at all, so it can all be proprietary (mobo, NVLink board, etc.).
I can't even see the power connectors on the NVLink board, so they could be using any number of connectors on the left there, or on the other side of the board...
#72
$ReaPeR$
ZeDestructor: Not that much from the BOM side of things. Much, much more of the price goes into the support network, branding, validation and, of course, profit.
Well... yes, but it elevates the cost regardless, since it is a more expensive implementation due to its complexity and added engineering.
#73
KainXS
Actually, PCI-SIG contacted Tom's and told them that the slot power is still 75W; the extra power up to 300W would come from additional power connectors.
Update, 8/24/16, 2:06pm PT: PCI-SIG reached out to tell us that the power increase for PCI Express 4.0 will come from secondary connectors and not from the slot directly. They confirmed that we were initially told incorrect information. We have redacted a short passage from our original article that stated what we were originally told, which is that the slot would provide at least 300W, and added clarification:
  • PCIe 3.0 max power capabilities: 75W from CEM + 225W from supplemental power connectors = 300W total
  • PCIe 4.0 max power capabilities: TBD
New value “P” = 75W from CEM + (P-75)W from supplemental power connectors.
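To make that formula concrete, here is a trivial sketch of the split it describes; the 300W card below is only an example value for "P":

```python
# Power-budget split described above: the CEM slot stays at 75W,
# and anything beyond that comes from supplemental power connectors.
CEM_SLOT_W = 75

def supplemental_power(card_power_w):
    """Watts that must come from supplemental connectors for a card rated at P watts."""
    return max(0, card_power_w - CEM_SLOT_W)

print(supplemental_power(300))  # 300W card -> 225W from supplemental connectors
print(supplemental_power(75))   # slot-only card -> 0W from connectors
```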
#75
R3AP3RAFK
300 watts from the PCI-E slot... sounds nice, awesome in fact. Imagine no more cables needed for GPUs, every neat freak's dream for perfect cable management. I think there's been a misunderstanding though; from what I read, someone made the wrong assumption when the info was posted, and basically "the 300 watts is intended for the certification process, but the maximum slot power will remain 75 watts", so the power output on the slots will remain the same. Oh well, guess we'll have to wait another 10 years before we'll be able to build high-end gaming systems that don't need external power connectors to power our GPUs.

Edit: I see someone already mentioned this