Thursday, November 10th 2022

Intel Data Center Max GPU "Ponte Vecchio" Implements 16-pin 12VHPWR Connector

The swanky new Intel Data Center Max GPU "Ponte Vecchio" is the company's first product to implement the 12+4 pin ATX 12VHPWR power connector, which the company helped design as part of the ATX 3.0 spec. The PCI-Express add-in card (AIC) form-factor variant of the GPU comes with a single 12VHPWR connector that can deliver up to 600 W of power, with brief excursions of up to 100% above that, as prescribed in the ATX 3.0 spec. The card positions the connector at the tail end of the PCB; while this may slightly obstruct the air intake, it ensures the connector isn't bent at odd angles. More importantly, this placement allows several of these cards to be installed in 4U server enclosures without adding roughly 3.5 cm to the Z-height.
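
To put those numbers in perspective, here is a quick back-of-the-envelope Python sketch of the per-pin current implied by the figures above; it assumes the connector's six 12 V supply pins share the load evenly and is illustrative only, not something lifted from the ATX 3.0 spec itself:

# Rough per-pin current for a 600 W 12VHPWR connector.
# Assumption: six 12 V supply pins share the load evenly (the other six
# pins are ground, plus four sideband/sense pins); illustrative figures only.
RAIL_VOLTAGE_V = 12.0
TWELVE_VOLT_PINS = 6

def per_pin_current(total_watts: float) -> float:
    """Current carried by each 12 V pin at a given total draw."""
    return total_watts / RAIL_VOLTAGE_V / TWELVE_VOLT_PINS

print(f"600 W sustained:  {per_pin_current(600):.2f} A per pin")    # ~8.33 A
print(f"1200 W excursion: {per_pin_current(1200):.2f} A per pin")   # ~16.67 A, momentary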

The first GPU maker to implement the 12VHPWR connector was NVIDIA, with its "Ampere" GeForce RTX 3090 Ti, doubling down on it with the RTX 4090 and the soon-to-launch RTX 4080. The connector has drawn bad press over the past few weeks, particularly over the adapter that converts four 8-pin PCIe power connectors into a single 12VHPWR connector, which is allegedly flimsy in the face of aggressive bending for cable management. RTX 4090 users on social media have reported burnt adapters and power connectors on their cards due to improper mechanical contact caused by cable bending and strain. Cable-management standards for servers are different from those of DIY gaming PCs, with many server PSUs still using unsleeved "mustard-and-ketchup" cables.
Source: Tom's Hardware

23 Comments on Intel Data Center Max GPU "Ponte Vecchio" Implements 16-pin 12VHPWR Connector

#1
Jimmy_
Intel too! Wow... it would be interesting to see how this performs and whether we will witness some meltdowns or not :)
#2
P4-630
Burn baby burn....
#3
bonehead123
"can you smell what da Rock is cookin ?"
#4
dj-electric
In this thread - people who think Intel will use NVIDIA's problematic 8PIN adapters for 12VHPWR or are unaware of it being the cause of trouble with the connector so far.
Comes to show how important it is for people to spam the internet with useless kilobytes of text posts before even bothering to read the articles themselves properly.
#5
Crackong
dj-electricIn this thread - people who think Intel will use NVIDIA's problematic 8PIN adapters for 12VHPWR or are unaware of it being the cause of trouble with the connector so far.
Comes to show how important it is for people to spam the internet with useless kilobytes of text posts before even bothering to read the articles themselves properly.
Except server spaces usually use EPS12V, which gives 300 W per plug, so they basically don't need the 12VHPWR connector?
#6
maxfly
dj-electricIn this thread - people who think Intel will use NVIDIA's problematic 8PIN adapters for 12VHPWR or are unaware of it being the cause of trouble with the connector so far.
Comes to show how important it is for people to spam the internet with useless kilobytes of text posts before even bothering to read the articles themselves properly.
There's nothing wrong with pokin fun at the creator lol!

I REALLY want to see the internals of one of those cards.
#7
catulitechup
But thinking about who might be guilty of suggesting this new connector for this product... bring him

:)
#8
Solaris17
Super Dainty Moderator
dj-electricIn this thread - people who think Intel will use NVIDIA's problematic 8PIN adapters for 12VHPWR or are unaware of it being the cause of trouble with the connector so far.
Comes to show how important it is for people to spam the internet with useless kilobytes of text posts before even bothering to read the articles themselves properly.
Or understand the tech they are speaking to in general.
#9
john_
So, thousands of 16-pin 12VHPWR connectors in the Aurora supercomputer? :p:p:p:p


(I added extra smiles for extra KBytes)
#10
Valantar
As expected, I guess? This connector is very clearly aimed at the enterprise/server/HPC world after all - they're the ones wanting more power density and who can't have the airflow obstruction of dual 8-pin EPS connectors in their flow-through passive accelerators, they're the ones implementing PCIe 5.0 (which is what this connector has been called for most of its lifetime), etc.

I've got a suggestion for the PCI SIG, or Intel, or whoever has the most power over this: make a "high power" standard for 8-pin PCIe connectors, where they can deliver 300W (8.333A/pin). This is within the capabilities of the Mini Fit Jr. connector (crimp pins rated up to 10A are readily available), and could be done relatively easily by making a new configuration for the sense pins - say, one of them needs a specific resistance to ground - while keeping the rest of the pinout the same, maintaining compatibility. This would also be backwards compatible with a simple 2x8-pin-to-1x8-pin adapter with a resistor in the right place. Of course the new standard would also need to mandate 16AWG wiring at the very least. And every GPU configured with one of these would be able to operate without an adapter in a 150W-per-connector mode. This would fix the "these new GPUs need four PCIe power connectors, WTF" issue in consumer applications with far fewer problems than the 12VHPWR connector.
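
For anyone who wants to double-check the arithmetic behind that proposal, here is a minimal Python sketch; it assumes three 12 V pins per 8-pin PCIe connector (which is where the 8.333 A/pin figure comes from), and the ~10 A ceiling is simply the crimp-pin rating cited in the post above, not a published limit:

# Per-pin current of the standard 150 W 8-pin PCIe connector vs. the
# proposed 300 W "high power" mode. Assumes three 12 V pins carry the load;
# the 10 A ceiling is the Mini-Fit Jr. crimp rating mentioned above.
RAIL_VOLTAGE_V = 12.0
TWELVE_VOLT_PINS = 3
CRIMP_PIN_RATING_A = 10.0

def amps_per_pin(watts: float) -> float:
    return watts / RAIL_VOLTAGE_V / TWELVE_VOLT_PINS

for watts in (150, 300):
    amps = amps_per_pin(watts)
    margin = "within" if amps <= CRIMP_PIN_RATING_A else "exceeds"
    print(f"{watts} W: {amps:.2f} A per pin ({margin} the ~10 A crimp rating)")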
#11
kapone32
The placement of the 12 pin is probably not going to lead to those types of failures that we are seeing. From the current information floating around online this could have been one of the straws that broke the relationship as the EVGA card belies Nvidia's demand that the card needs the power connector to be on top of the PCB.
#12
Valantar
kapone32The placement of the 12 pin is probably not going to lead to those types of failures that we are seeing. From the current information floating around online this could have been one of the straws that broke the relationship as the EVGA card belies Nvidia's demand that the card needs the power connector to be on top of the PCB.
Did Nvidia actually demand that? That's a.... strange thing to demand, especially on GPUs that are taller than ever. Thankfully end-mounted power connectors are the standard for anything server oriented, as they can't have protrusions from the top of the card for clearance reasons in tight rack mount chassis.
#13
kapone32
ValantarDid Nvidia actually demand that? That's a.... strange thing to demand, especially on GPUs that are taller than ever. Thankfully end-mounted power connectors are the standard for anything server oriented, as they can't have protrusions from the top of the card for clearance reasons in tight rack mount chassis.
I heard it on Jays2 Cents and somewhere else, maybe PC World. I also gleaned it from the fact that the EVGA Engineering Sample (more like a tuning sample) is the only card in the entire stack that is like that.
#14
Valantar
kapone32I heard it on Jays2 Cents and somewhere else like maybe PC World.
Hm, interesting.
kapone32I also gleaned it from the fact that the EVGA Engineering Sample (more like a tuning sample) is the only card in the entire stack that is like that.
That sounds like a stretch to me - correlation is not causation. Just because a feature is unique to the canned EVGA prototypes doesn't make it causally implicated in the break between the companies. That would be an incredibly petty and silly reason. It might be an example of the reported control Nvidia wanted to exert on AIB partners, but if that was the case it's extremely unlikely that EVGA would go so far as to make a near-final GPU design that they knew Nvidia wouldn't condone. A development process like that is very, very expensive, and you don't intentionally make it so that you have to change tooling at the last minute even if you're pissed at your partners.
#15
kapone32
ValantarHm, interesting.

That sounds like a stretch to me - correlation is not causation. Just because a feature is unique to the canned EVGA prototypes doesn't make it causally implicated in the break between the companies. That would be an incredibly petty and silly reason. It might be an example of the reported control Nvidia wanted to exert on AIB partners, but if that was the case it's extremely unlikely that EVGA would go so far as to make a near-final GPU design that they knew Nvidia wouldn't condone. A development process like that is very, very expensive, and you don't intentionally make it so that you have to change tooling at the last minute even if you're pissed at your partners.
I am not saying it was the mitigating factor but a contribution. Especially given the anecdotal evidence that Nvidia has yet to respond to the issue. I have heard that (PC World) the 2 media outfits that got them were begging for months for what they got. The telling part for me is what they were able to do with them as breakdown and showing performance examples for a card that was never intended for retail channels is a little funny. Especially after a very public and seismic (North America) shift in the relationship between those 2 specific Companies. This would be a non issue if we didn't have adapters burning.
#16
fevgatos
After getting my 4090 and the NVIDIA adapter, I think the problem with the connector is that it's not plugged in all the way. It's easy to get fooled: it plugs in easily and you think it's properly connected, but it actually isn't. When you think it's connected, you need to push a little more until it clicks into place. If I hadn't read about it on the internet, I would have made the mistake myself.
#17
Valantar
kapone32I am not saying it was the mitigating factor but a contribution. Especially given the anecdotal evidence that Nvidia has yet to respond to the issue. I have heard that (PC World) the 2 media outfits that got them were begging for months for what they got.
To me that just sounds like Nvidia's typical reticence in discussing anything that isn't explicitly PR-oriented with press. Nothing new there.
kapone32The telling part for me is what they were able to do with them as breakdown and showing performance examples for a card that was never intended for retail channels is a little funny.
But it was obviously intended for retail channels - you don't spend millions of dollars producing an engineering sample GPU unless you're planning to mass produce it. The existence of the card just shows how close to launch EVGA decided that enough was enough.
kapone32This would be a non issue if we didn't have adapters burning.
That is very much true. And it's entirely possible that Nvidia is in fact demanding top-mounted power connectors for some reason - it's just that this is too small of a detail for me to believe that it had any real effect on the Nvidia-EVGA relationship. Or to put it another way: if that was the proverbial straw that broke the camel's back, then that camel was already massively overloaded with all of the other reported micromanagement and excessive control exerted by Nvidia. And I find the overall relationship and its breakdown far more interesting than specific details like this, as there's far more to learn from looking at the bigger picture.
#18
kapone32
ValantarTo me that just sounds like Nvidia's typical reticence in discussing anything that isn't explicitly PR-oriented with press. Nothing new there.

But it was obviously intended for retail channels - you don't spend millions of dollars producing an engineering sample GPU unless you're planning to mass produce it. The existence of the card just shows how close to launch EVGA decided that enough was enough.

That is very much true. And it's entirely possible that Nvidia is in fact demanding top-mounted power connectors for some reason - it's just that this is too small of a detail for me to believe that it had any real effect on the Nvidia-EVGA relationship. Or to put it another way: if that was the proverbial straw that broke the camel's back, then that camel was already massively overloaded with all of the other reported micromanagement and excessive control exerted by Nvidia. And I find the overall relationship and its breakdown far more interesting than specific details like this, as there's far more to learn from looking at the bigger picture.
I agree with you on all your points but for me if EVGA showed in their testing that there was a potential for what is happening by user error for the power connector placement, then making their own card for Nvidia to verify that they could not sell it like that could be enough to put even more strain on an already fractured relationship especially the expansion of the FE program.
#19
Valantar
kapone32I agree with you on all your points but for me if EVGA showed in their testing that there was a potential for what is happening by user error for the power connector placement, then making their own card for Nvidia to verify that they could not sell it like that could be enough to put even more strain on an already fractured relationship especially the expansion of the FE program.
That's definitely true, but I don't think an issue like that would have stayed hidden until the Engineering Sample stages of development - Nvidia has engineers involved with these designs throughout the entire process, so if that was the case it most likely would have been caught much earlier IMO.
#20
Chrispy_
The connector is in the direct airflow of the cooler though. This will have something like 8x 80mm delta 5000rpm fans pushing fresh air over the connector.

Even if the design is flawed enough to heat up under 600W, it's going to be actively cooled and unlikely to be pulling more than 300W in these examples.
#21
Valantar
Chrispy_The connector is in the direct airflow of the cooler though. This will have something like 8x 80mm delta 5000rpm fans pushing fresh air over the connector.

Even if the design is flawed enough to heat up under 600W, it's going to be actively cooled and unlikely to be pulling more than 300W in these examples.
That's a good point. An overheating pin due to poor contact would likely still take damage over time, but it'd be a lot less dramatic than what we've seen on consumer GPUs. But then again, in hundred-thousand-dollar servers I kind of expect the wiring to be well made, tbh.
#22
Chrispy_
ValantarThat's a good point. An overheating pin due to poor contact would likely still take damage over time, but it'd be a lot less dramatic than what we've seen on consumer GPUs. But then again, in hundred-thousand-dollar servers I kind of expect the wiring to be well made, tbh.
All of the failures so far have been in the adapters. Presumably these would be server rPSUs with dedicated 12+4pin HPWR connectors.
#23
Valantar
Chrispy_All of the failures so far have been in the adapters. Presumably these would be server rPSUs with dedicated 12+4pin HPWR connectors.
I know, I've been making that exact argument repeatedly in another thread here. Was just pointing out that in the (unlikely!) case of an overheating connector here the extra airflow would do a lot to mitigate damage.