Monday, August 22nd 2016

PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

The PCI-Express gen 4.0 specification promises a huge leap in host bus bandwidth and power delivery for add-on cards. According to its latest draft, the specification prescribes a transfer rate of 16 GT/s per lane, double the 8 GT/s of the current PCI-Express gen 3.0 specification. At 16 GT/s per lane, that translates into 1.97 GB/s of bandwidth for x1 devices, 7.88 GB/s for x4, 15.75 GB/s for x8, and 31.51 GB/s for x16 devices.
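Those per-lane figures follow from the 128b/130b line encoding PCIe has used since gen 3.0; a quick sanity check of the math (a throwaway helper, not from any spec or SDK):

```python
# Sanity-check the quoted figures: PCIe 3.0/4.0 use 128b/130b line
# encoding, so usable bytes/s = (transfers/s) * (128/130) / 8 bits.
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int) -> float:
    """Usable bandwidth in GB/s for a PCIe 3.0+ link."""
    return gt_per_s * (128 / 130) / 8 * lanes

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {pcie_bandwidth_gbs(16, lanes):.2f} GB/s")
# For comparison, gen 3.0 x16: pcie_bandwidth_gbs(8, 16) comes out near 15.75 GB/s
```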

More importantly, it prescribes a quadrupling of power delivery from the slot: a PCIe gen 4.0 slot should be able to deliver 300W of power, against 75W from PCIe gen 3.0 slots. This could eventually eliminate the need for additional power connectors on graphics cards with power draw under 300W. The change could be gradual, however, as graphics card designers may want to retain backwards compatibility with older PCIe slots and keep the additional power connectors. The PCI-SIG, the special interest group behind PCIe, said that it would finalize the gen 4.0 specification by the end of 2016.
Source: Tom's Hardware

75 Comments on PCI-Express 4.0 Pushes 16 GT/s per Lane, 300W Slot Power

#26
D007
Not sure if I like this.. Why push all the power through the mobo and add more potential component degradation, when you can just run it through a plug that has already accounted for the additional power flow and been tested to handle it for years on end?
Does this at all translate into better GPU performance?
Posted on Reply
#27
Assimilator
iOI really doubt we will see this in consumer boards. It looks more like an optional feature specifically aimed at server applications with new connectors and stuff..
I'm inclined to agree with you there, especially considering the recent pictures of the AMD Zen server board that supposedly has four PCIe 4.0 slots. Perhaps un-coincidentally, that board also has quad PCIe power connectors.
#28
TheLostSwede
News Editor
Musselsnot sure if i worded it badly, i meant new cards incompatible with older boards.
Sorry, my bad, I wasn't reading your post properly... :oops:
#29
TheinsanegamerN
FrickAye, which makes it kinda moot IMO. The only benefit this has is cable management. And you break backwards compatibility, and looking at how little improvement there is in CPU performance, it makes it downright bad.

Unless they make traces that really can do 300W, which is a lot.
OTOH, having the plugs go into the mobo makes cable management much easier for some cases than having the plugs come out of the GPU. Also good for small-size builds, where the GPU plugs coming out the top of the GPU instead of the back heavily limit your choice of GPU.
#30
SaltyFish
I think we're getting pretty far ahead. Has anything managed to saturate PCI-E 2.0 16x / PCI-E 3.0 8x yet?
#31
Nobody99
D007Not sure if I like this.. Why push all the power through the mobo and add more potential component degradation, when you can just run it through a plug that has already accounted for the additional power flow and been tested to handle it for years on end?
Does this at all translate into better GPU performance?
At least manufacturers will have to show something for those absurdly high motherboard prices.

They could set it a little lower, like 200W to encourage better power efficiency.
#32
cdawall
where the hell are my stars
TheLostSwedeWhy would it make older cards incompatible? The voltage won't change, so there's nothing here to make older stuff incompatible; it's an increase in amperage. A card will never draw more power than it was designed for, so this is a non-issue.

Time to learn some basics before making statements like that me thinks...
You read that completely backwards. New cards incompatible with older boards. Read a thing or two before posting.
#33
Evildead666
I think the guy at Tom's has just misunderstood what was said.

You will never get 300W through the slot.

I think he was saying that the official power allowed for a single card will go up, from a minimum of 300W.

ie. We will be seeing 300W+ cards, officially, and with PCI-SIG blessings.
They will still have 8-pin or 6-pin connectors though, just more of them.
#34
RejZoR
Can't wait for cards with 300W capable PCIe and 3x 8pin :D Global warming is just a myth :P
#35
KainXS
Well, the key should definitely have changed then, like AGP's did. But since they're allowing manufacturers to keep their current additional power connectors, that's nice..
#36
Parn
Jeez... 300W from the slot alone?

I bet new motherboards are going to implement at least 8 phases of VRM just for the 1st x16 slot.
#37
yogurt_21
D007Not sure if I like this.. Why push all the power through the mobo and add more potential component degradation, when you can just run it through a plug that has already accounted for the additional power flow and been tested to handle it for years on end?
Does this at all translate into better GPU performance?
because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine. 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.
#38
Evildead666
yogurt_21because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine. 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.
Modern sockets have the CPU power right next to the socket;
they don't have it wandering throughout the motherboard.
Any PCIe "add-on" connector is always right next to the slots.

They won't be running 300W to a single x16 slot, or every x16 slot, through the motherboard.
#39
yogurt_21
Evildead666Modern sockets have the CPU power right next to the socket.
Multi-socket server boards do not
#40
Prima.Vera
SaltyFishI think we're getting pretty far ahead. Has anything managed to saturate PCI-E 2.0 16x / PCI-E 3.0 8x yet?
Only Dual GPU cards like the latest AMD one...
#41
newtekie1
Semi-Retired Folder
MusselsI guess they'll need some kind of physical key system to prevent those cards going into older slots.
AGP Style!

Or they can just make the cards so they can detect a PCI-E 4.0 slot and pull all their power from it, or if the card detects a 3.0 or lower slot, it requires the 6 or 8-pin external power connector to be installed. So the card has an external power connector, but you don't have to use it if you plug the card into a PCI-E 4.0 slot.
FrickUnless they make traces that really can do 300W, which is a lot.
I'm more interested in how they are going to get the small pins in the slot itself to carry that much current.
#42
cdawall
where the hell are my stars
yogurt_21Multi-socket server boards do not
Depends on the board; the vast majority are stacked right next to the CPU.
#43
dj-electric
For those wondering where the space to power the PCI-E slots would come from.

Think about where the so-called chipsets would be in a year to three from now.
huh? got it? yeah.
#44
TheLostSwede
News Editor
cdawallYou read that completely backwards. New cards incompatible with older boards. Read a thing or two before posting.
Hence why I apologised above...
#45
MagnuTron
I foresee some pretty beefy PCIe extenders on the rise..
#46
alucasa
This might make mini-itx cable management easier.
TheLostSwedeHence why I apologised above...
Don't think your apology was accepted. :p
#47
ZeDestructor
So I'm late to the party.. thus, QUOTE EVERYONE, answer ALL the questions and ramble ALL the opinions!
Mussels300W slot power will be interesting, it'll make cards incompatible with old slots - and really heavy power draw from mobos.

I guess they'll need some kind of physical key system to prevent those cards going into older slots.
newtekie1AGP Style!

Or they can just make the cards so they can detect a PCI-E 4.0 slot and pull all their power from it, or if the card detects a 3.0 or lower slot, it requires the 6 or 8-pin external power connector to be installed. So the card has an external power connector, but you don't have to use it if you plug the card into a PCI-E 4.0 slot.
Covered by PCIe link negotiation already (currently all PCIe cards start with a 10W max, then request more). If it can't do PCIe 4.0 with the full supplemental power, the card should just clamp down to 75W or 25W mode.. or stay at minimal init power levels.
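That fallback could be sketched roughly like this (the function name and thresholds are illustrative guesses, not anything from the PCIe specification; only the 10W-ish init budget reflects actual PCIe behaviour):

```python
# Hypothetical sketch of the slot/card power fallback described above.
# The small init budget is real PCIe behaviour; the rest is illustrative.
def card_power_budget_w(slot_gen: int, slot_power_w: int) -> int:
    """Pick the card's power budget from what the slot advertises."""
    if slot_gen >= 4 and slot_power_w >= 300:
        return 300   # full supplemental power straight from the slot
    if slot_power_w >= 75:
        return 75    # classic slot-only budget; aux connectors needed beyond this
    return 10        # stay at minimal init power until more is granted

print(card_power_budget_w(4, 300), card_power_budget_w(3, 75))
```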
btarunr300W. The boards should have safety trips beyond that point, because you're delivering that power through traces, and not thick metal plugs. I imagine PCIe 4.0 motherboards will have PCIe power input connectors on the board.
FrickAye, which makes it kinda moot IMO. The only benefit this has is cable management. And you break backwards compatibility, and looking at how little improvement there is in CPU performance, it makes it downright bad.

Unless they make traces that really can do 300W, which is a lot.
bonehead1234x the power flowing through the slots, and you will also have 4x the heat produced too :)

So it looks like the mobo makers are really gonna have to step up their game in terms of trace and slot design to accommodate this change, because if they don't, there's gonna be hell to pay from multi-mega-billion $$ lawsuits over melted mobos and anything and everything attached to them, not to mention desks and houses burned down, etc...........
RejZoROk, if 75W PCIe power delivery is such a huge problem where people go batshit insane over RX480 issues, how are they planning to deliver 300W through PCB power traces to the PCIe slot without spontaneously combusting the board? Are they going to run external wires on the back of the boards? You can't make PCB traces that thick (well, you could, but it would be highly impractical), so, how?
newtekie1I'm more interested in how they are going to get the small pins in the slot itself to carry that much current.
D007Not sure if I like this.. Why push all the power through the mobo and add more potential component degradation, when you can just run it through a plug that has already accounted for the additional power flow and been tested to handle it for years on end?
Does this at all translate into better GPU performance?
Based on the past 10 or so years of servers pushing >1000W at 12V through a single edge connector for PSU purposes (and a good many of those servers have lasted the full 10 years of 24/7 use), we'll be fine..

These days, we're pushing well over 3000W through smaller surface areas, like the 3000W PSU in a Dell M1000e (60mm fan). (For those wondering, that PSU only carries power over the 4 thick connectors, 2 +12V, 2 GND. 1/10th of that is about the same amount of area that the PCIe power pins have combined)

Another example would be MXM modules, where the big 1080 and its 180W of power are supplied through less than half the area available in a PCIe slot, or even crazier, the GK110 chips used in the TITAN supercomputer's MXM modules pushing over 200W over the MXM 3.0 edge-connector.

In conclusion: the 300W will go over the existing pins, perfectly reliably and safely.. at most they'll replace a few of the currently unused/reserved/ground pins with +12V pins.
Musselsok external PCI-E cables that deliver 300W to an external box, THAT makes sense.

bring on the external GPUs that don't need their own PSU!
External cabling has been around since the Tesla generation of GPUs (GeForce 8800 series), particularly used in external Tesla GPU boxes that you'd attach to beefy 1U dual-CPU boxes or workstations with limited GPU space.

In external situations, you only do I/O to the main box, with the external box providing its own power for the cards, nice and easy, and a massive compatibility improvement.
TheLostSwedeThe same way some companies have been for years, 12V cables to somewhere close to the connectors on the motherboards. In other words, this won't help with the cable clutter...
Check out an NVLink Pascal server sometime. There are no cables in most of them, just big edge-connectors to carry huge currents over, or really short power cables to jump from the CPU end to the PCIe/NVLink daughterboard.
iOI really doubt we will see this in consumer boards. It looks more like an optional feature specifically aimed at server applications with new connectors and stuff..
yogurt_21because when I was installing my new PSU a few weeks back, it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.

The mainboards will handle it just fine. 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power; changing the how isn't going to suddenly degrade them faster.

What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.
Tell me about it... Hell, I'm annoyed that all the higher-end cases faff around with 3 billion fans and LEDs everywhere, and yet not a single one of them is willing to put out tool-less, hot-swappable drive cages as a standard feature... Even the massive 900D only does it for 3 of its 9 bays.. fucking WHY?!
SaltyFishI think we're getting pretty far ahead. Has anything managed to saturate PCI-E 2.0 16x / PCI-E 3.0 8x yet?
Clearly, yes, else NVLink simply wouldn't exist. In HPC land at least. For home use, 3.0 x8 is plenty for now.
Nobody99At least manufacturers will have to show something for those absurdly high motherboard prices.

They could set it a little lower, like 200W to encourage better power efficiency.
Cards are pulling 300W right now, so there's no point in setting it any lower. Unless you want massive backlash from AMD, nVidia and Intel.. owait, all 3 are major members of the PCI-SIG, and essentially the driving force of the high-power end.
Evildead666I think the guy at Tom's has just misunderstood what was said.

You will never get 300W through the slot.

I think he was saying that the official power allowed for a single card will go up, from a minimum of 300W.

ie. We will be seeing 300W+ cards, officially, and with PCI-SIG blessings.
They will still have 8-pin or 6-pin connectors though, just more of them.
Unlikely. There's been a lot of bitching already from the server (and some desktop) people that cabling is a mess for PCIe add-in cards: in 1U form factors, the connector's location blocks part of the chassis fan feeding a passive card, cutting out 1/6-1/3 of the airflow over the card and resulting in a hotter-running card that needs to be throttled down.

Let's not even get into top-mounted power like most consumer cards.. those instantly drop you from 4 GPUs to 3 GPUs in riser-equipped 1U, and out of 3U and into 4U for vertically mounted, non-riser setups, and that in turn means 25% more cost in terms of CPUs, mobos, chassis, racks, extra power for the extra systems, etc. Top-mounted power is shit, and it's a damn shame it's the common setup now.
Evildead666Modern sockets have the CPU power right next to the socket;
they don't have it wandering throughout the motherboard.
Any PCIe "add-on" connector is always right next to the slots.

They won't be running 300W to a single x16 slot, or every x16 slot, through the motherboard.
They already do in 4-GPU 1U servers like Dell's C4130 (1200W goes from the rear PSUs, through the mobo, and then out some 8-pin cables at the other end of the mobo into the 4 GPUs) or Supermicro's 1028GQ-TRT.

On the big blade systems, they move even more power through the backplanes to the PSU (think 9000W in the 10U range, right next to 100Gbit network interfaces).
#48
Evildead666
From the pics I saw, the 1080 mobile had a separate connector for extra power.

Server boards, and NVLink are not the things of us mere enthusiast mortals.
They can have stuff that we can't because they will pay through the nose for it.

I still believe that this is a misunderstanding, and it's more than 300W per slot (card), rather than through the slot.

edit: I think they are currently stuffed for dual-GPU cards, and would both like to do them, and Intel with KNL successors (maybe).

At least they would have the option to go mad :)

edit2: If they really wanted power through a slot for the card, it would be much easier to go the E-ISA way, and add a connector off the back end of the card for that, leaving the current slot untouched and fully compliant..
#49
ZeDestructor
Evildead666From the pics I saw, the 1080 mobile had a separate connector for extra power.

Server boards, and NVLink are not the things of us mere enthusiast mortals.
They can have stuff that we can't because they will pay through the nose for it.

I still believe that this is a misunderstanding, and it's more than 300W per slot (card), rather than through the slot.

edit: I think they are currently stuffed for dual-GPU cards, and would both like to do them, and Intel with KNL successors (maybe).

At least they would have the option to go mad :)

edit2: If they really wanted power through a slot for the card, it would be much easier to go the E-ISA way, and add a connector off the back end of the card for that, leaving the current slot untouched and fully compliant..
The other half of that argument is the AMD Naples board they showed off. With at least 750W going in, the PCIe slots are pretty much guaranteed to be getting at least 500W all up. If they're "overloading" the connections (at full tilt, each pin in a PCIe, EPS/CPU and ATX power connector is rated up to something like 9A, so a 6-pin PCIe is safe for 200W on it's own, at 8-pin safe for 300W, with current server cards using only a single 8-pin for 300W cards), pulling in the 1500W for 2 150W CPUs and 4 300W cards is entirely within the realm of possibility. It may not be PCIe 4.0 on Zen, but power support may well trickle down into a revision of 3.x.

I'm personally not too worried about the safety of pushing 300W through the PCIe slot: 300W is only 25A at 12V. Compare that to the literal hundreds of amps (150+W at less than 1V) CPUs have to be fed over very similar surface areas.
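The arithmetic in those two paragraphs checks out; spelled out for anyone following along (the ~9A-per-pin figure is the poster's estimate, not an official connector rating):

```python
# Back-of-envelope numbers from the post above. The ~9 A/pin figure is
# the estimate quoted in the thread, not an official Molex/PCI-SIG rating.
RAIL_V = 12.0

# 300 W delivered through the slot at 12 V:
slot_current_a = 300 / RAIL_V            # 25.0 A

# A 6-pin PCIe plug carries 3 +12V pins (an 8-pin adds grounds/sense,
# still 3 +12V pins). Theoretical ceiling before any derating:
pins_12v = 3
pin_ceiling_w = pins_12v * 9 * RAIL_V    # 324 W, derated to the quoted 200-300 W

print(slot_current_a, pin_ceiling_w)     # 25.0 324.0
```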
#50
Evildead666
ZeDestructorThe other half of that argument is the AMD Naples board they showed off. With at least 750W going in, the PCIe slots are pretty much guaranteed to be getting at least 500W all up. If they're "overloading" the connections (at full tilt, each pin in a PCIe, EPS/CPU and ATX power connector is rated up to something like 9A, so a 6-pin PCIe is safe for 200W on its own, an 8-pin safe for 300W, with current server cards using only a single 8-pin for 300W cards), pulling in the 1500W for 2 150W CPUs and 4 300W cards is entirely within the realm of possibility. It may not be PCIe 4.0 on Zen, but power support may well trickle down into a revision of 3.x.

I'm personally not too worried about the safety of pushing 300W through the PCIe slot: 300W is only 25A at 12V. Compare that to the literal hundreds of amps (150+W at less than 1V) CPUs have to be fed over very similar surface areas.
Yes, it's also a server board. I don't think any official info on those is available yet, but speculation has it that it is for the PCIe slots, yes.
I still wouldn't see that trickle down to enthusiasts immediately, even if it were the case.
I could see the server people paying for a high-layer, high-power mobo, for specific cards, not necessarily GPUs.

I just don't see the need to change the norm for desktops.

Two 1080s on one card will take you over the PCIe spec.
I think this is just what the AIB's/Constructors/AMD/Nvidia etc and the others want.