So I'm late to the party.. thus, QUOTE EVERYONE, answer ALL the questions and ramble ALL the opinions!
300W slot power will be interesting; it'll make cards incompatible with old slots - and mean a really heavy power draw from mobos.
I guess they'll need some kind of physical key system to prevent those cards going into older slots.
AGP Style!
Or they can just make the cards so they can detect a PCI-E 4.0 slot and pull all their power from it, or, if the card detects a 3.0 or lower slot, require the 6- or 8-pin external power connector to be installed. So the card has an external power connector, but you don't have to use it if you plug the card into a PCI-E 4.0 slot.
Covered by PCIe link negotiation already (currently all PCIe cards start with a 10W max, then request more). If the slot can't do PCIe 4.0 with the full supplemental power, the card should just clamp down to a 75W or 25W mode, or stay at minimal init power levels.
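To make that fallback idea concrete, here's a minimal sketch of the decision logic in Python (purely illustrative: the function name and the wattage figures are assumptions pulled from the discussion above, not anything from the spec or any card's actual firmware):

```python
# Illustrative sketch only: not real firmware and not the actual PCIe 4.0 spec,
# just the fallback logic described above. All wattage figures are assumptions.

def pick_power_budget_watts(slot_gen: int, ext_connector_present: bool) -> int:
    """Power cap (in watts) a card might settle on after link negotiation."""
    LEGACY_SLOT_W = 75          # classic PCIe slot limit
    SUPPLEMENTAL_W = 225        # e.g. 6-pin + 8-pin external connectors
    HYPOTHETICAL_GEN4_W = 300   # rumoured PCIe 4.0 slot delivery

    if slot_gen >= 4:
        return HYPOTHETICAL_GEN4_W             # slot feeds the card directly
    if ext_connector_present:
        return LEGACY_SLOT_W + SUPPLEMENTAL_W  # old slot + external power plugged in
    return LEGACY_SLOT_W                       # clamp down (or stay at init power)


print(pick_power_budget_watts(slot_gen=4, ext_connector_present=False))  # 300
print(pick_power_budget_watts(slot_gen=3, ext_connector_present=True))   # 300
print(pick_power_budget_watts(slot_gen=3, ext_connector_present=False))  # 75
```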
300W. The boards should have safety trips beyond that point, because you're delivering that power through traces, not thick metal plugs. I imagine PCIe 4.0 motherboards will have PCIe power input connectors on the board.
Aye, which makes it kinda moot IMO. The only benefit this has is cable management. And you break backwards compatibility, and looking at how little improvement there is in CPU performance, that makes it downright bad.
Unless they make traces that really can do 300W, which is a lot.
4x the power flowing through the slot also means 4x the current on the 12V pins, and resistive heating in the pins and traces scales with current squared, so the heat produced there goes up far more than 4x.
So it looks like the mobo makers are really gonna have to step up their game in terms of trace and slot design to accommodate this change, because if they don't, there's gonna be hell to pay from multi-mega-billion-$$ lawsuits over melted mobos and anything and everything attached to them, not to mention desks and houses burned down, etc...
OK, if 75W of PCIe power delivery is such a huge problem that people go batshit insane over RX 480 issues, how are they planning to deliver 300W over PCB traces to the PCIe slot without spontaneously combusting the board? Are they going to run external wires on the back of the boards? You can't make PCB traces that thick (well, you could, but it would be highly impractical), so, how?
I'm more interested in how they are going to get the small pins in the slot itself to carry that much current.
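Rough back-of-envelope numbers for that question (the +12V pin count and per-pin rating below are illustrative assumptions, not quotes from the spec):

```python
# 300W through the slot at 12V: how much current per pin would that be?
# Pin count and per-pin rating are illustrative assumptions, not spec values.

SLOT_POWER_W = 300
RAIL_V = 12
N_12V_PINS = 5        # assumed number of +12V contacts on today's x16 edge connector
PIN_RATING_A = 1.1    # assumed per-pin rating implied by today's ~66W 12V slot budget

total_current_a = SLOT_POWER_W / RAIL_V     # 25 A on the 12V rail
per_pin_a = total_current_a / N_12V_PINS    # ~5 A per existing +12V pin

print(f"total 12V current: {total_current_a:.0f} A")
print(f"per pin with {N_12V_PINS} pins: {per_pin_a:.1f} A (vs ~{PIN_RATING_A} A today)")
print(f"pins needed at {PIN_RATING_A} A each: {total_current_a / PIN_RATING_A:.0f}")
```

That gap is why the posts further down expect either uprated contacts or a bunch of reserved/ground pins reassigned to +12V.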
Not sure if I like this.. Why push all the power through the mobo and add more potential component degradation, when you can just run it through a plug that has already accounted for the additional power flow and been tested to handle it for years on end?
Does this at all transfer into better gpu performance?
Based on the past 10 or so years of servers pushing >1000W at 12V through a single edge connector for PSU purposes, with a good many of those servers having lasted the full 10 years of 24/7 use, we'll be fine..
These days, we're pushing 3000W and more through even smaller contact areas, like the 3000W PSU in a Dell M1000e (60mm fan). (For those wondering, that PSU carries all its power over 4 thick contacts: 2 +12V, 2 GND. 1/10th of that is about the same amount of area as the PCIe power pins have combined.)
Another example would be MXM modules, where the big 1080 and its 180W of power is supplied through less than half the area available in a PCIe slot, or, even crazier, the GK110 chips used in the TITAN supercomputer's MXM modules pushing over 200W over the MXM 3.0 edge connector.
In conclusion: the 300W will go over the existing pins, perfectly reliably and safely.. at most they'll replace a few of the currently unused/reserved/ground pins with +12V pins.
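Putting numbers on that comparison, using only the figures quoted in the post above (so treat them as rough assumptions):

```python
# Current comparison based on the figures quoted above (rough assumptions).

M1000E_PSU_W = 3000
RAIL_V = 12
N_12V_BLADES = 2        # per the post: 2 contacts carry +12V, 2 are ground return

psu_current_a = M1000E_PSU_W / RAIL_V       # 250 A total
per_blade_a = psu_current_a / N_12V_BLADES  # 125 A per +12V blade

# The post estimates the PCIe power pins add up to ~1/10th of that contact area,
# so the same current density over that area would carry about:
pcie_equiv_a = psu_current_a / 10           # 25 A, i.e. 300 W at 12 V

print(f"M1000e PSU: {psu_current_a:.0f} A total, ~{per_blade_a:.0f} A per +12V blade")
print(f"1/10th the area at the same density: ~{pcie_equiv_a:.0f} A "
      f"({pcie_equiv_a * RAIL_V:.0f} W at 12 V)")
```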
OK, external PCI-E cables that deliver 300W to an external box - THAT makes sense.
Bring on the external GPUs that don't need their own PSU!
External cabling has been around since the Tesla generation of GPUs (GeForce 8800 series), particularly used in external Tesla GPU boxes that you'd attach to beefy 1U dual-CPU boxes or workstations with limited GPU space.
In external situations, you only do I/O to the main box, with the external box providing its own power for the cards. Nice and easy, and a massive compatibility improvement.
The same way some companies have been doing it for years: 12V cables running to somewhere close to the connectors on the motherboard. In other words, this won't help with the cable clutter...
Check out an NVLink Pascal server sometime. There are no cables in most of them, just big edge connectors to carry huge currents over, or really short power cables to jump from the CPU end to the PCIe/NVLink daughterboard.
I really doubt we will see this in consumer boards. It looks more like an optional feature specifically aimed at server applications, with new connectors and stuff.. which is a shame, because when I was installing my new PSU a few weeks back it really felt like I was dealing with ancient tech compared to the server or NUC markets, where everything just slots in, no additional cabling needed.
The mainboards will handle it just fine: 4-socket server boards push much more power than that to all 4 CPUs, all 64 memory slots, and all 6 PCIe slots. There will be no degradation; that's just a silly line of thinking. Again, servers have been pushing this amount of power for decades, and the video cards are already receiving that amount of power - changing the how isn't going to suddenly degrade them faster.
What this will do is bring enthusiasts into the modern era and get away from cable redundancies. They really should have done this 10 years ago.
Tell me about it... Hell, I'm annoyed that all the higher-end cases faff around with 3 billion fans and LEDs everywhere, and yet not a single one of them is willing to put out tool-less, hot-swappable drive cages as a standard feature... Even the massive 900D only does it for 3 of its 9 bays.. fucking WHY?!
I think we're getting pretty far ahead of ourselves. Has anything managed to saturate PCI-E 2.0 16x / PCI-E 3.0 8x yet?
Clearly, yes, else NVLink simply wouldn't exist. In HPC land at least. For home use, 3.0 x8 is plenty for now.
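For reference, the arithmetic behind lumping those two together; the line rates and encoding overheads below are the published per-generation figures:

```python
# Why PCIe 2.0 x16 and PCIe 3.0 x8 get treated as roughly equivalent:
# they land at almost the same usable bandwidth per direction.

def lane_gbs(line_rate_gt: float, payload_bits: int, total_bits: int) -> float:
    """Usable GB/s per lane, per direction, after encoding overhead."""
    return line_rate_gt * payload_bits / total_bits / 8

gen2_lane = lane_gbs(5.0, 8, 10)       # 8b/10b encoding    -> 0.5 GB/s per lane
gen3_lane = lane_gbs(8.0, 128, 130)    # 128b/130b encoding -> ~0.985 GB/s per lane

print(f"PCIe 2.0 x16: {gen2_lane * 16:.2f} GB/s")   # 8.00 GB/s
print(f"PCIe 3.0 x8 : {gen3_lane * 8:.2f} GB/s")    # ~7.88 GB/s
```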
At least manufacturers will have to show something for those absurdly high motherboard prices.
They could set it a little lower, like 200W to encourage better power efficiency.
Cards are pulling 300W right now, so there's no point in setting it any lower. Unless you want massive backlash from AMD, nVidia and Intel.. oh wait, all 3 are major members of the PCI-SIG, and essentially the driving force at the high-power end.
I think the guy at Tom's has just misunderstood what the guy said.
You will never get 300W through the slot.
I think he was saying that the official power allowed for a single card will go up, starting from a minimum of 300W.
ie. We will be seeing 300W+ cards, officially, and with PCI-SIG blessings.
They will still have 8-pin or 6-pin connectors though, just more of them.
Unlikely. There's been a lot of bitching already from the server (and some desktop) people that cabling is a mess for PCIe add-in cards - in 1U form factors, the location of the connector and its cabling blocks part of the chassis airflow a passive card gets, cutting out 1/6-1/3 of the airflow over the card and resulting in a hotter-running card that needs to be throttled down.
Let's not even get into top-mounted power like most consumer cards.. those instantly drop you from 4 GPUs to 3 GPUs in riser-equipped 1U, and out of 3U and into 4U for vertically mounted, non-riser setups, and that in turn means roughly a third more cost in terms of CPUs, mobos, chassis, racks, and extra power for the extra systems. Top-mounted power is shit, and it's a damn shame it's the common setup now.
Modern sockets have the CPU power right next to the socket.
They don't have it wandering throughout the motherboard.
Any PCie "add-on" connector, is always right next to the slots.
They won't be running 300W to a single x16 slot, or every x16 slot thorugh the motherboard.
They already do in 4-GPU 1U servers like Dell's C4130 (1200W goes from the rear PSUs, through the mobo, and then out some 8-pin cables at the other end of the mobo into the 4 GPUs) or Supermicro's 1028GQ-TRT.
On the big blade systems, they move even more power through the backplanes from the PSUs (think in the 9000W-over-10U range, right next to 100Gbit network interfaces).