I don't remember if that was the exact number he came up with, but it's the Molex rating for the PCIe Mini-Fit version: 8A per line × 3 lines = 288W. That's for the PCIe version specifically; there are Mini-Fit versions (the HCS terminals) going up to 13A per line.
In contrast, Micro-Fit in general only goes up to 8.5A per line (I can't find a specific PCIe spec from a good source), so what exactly did we gain besides a much more expensive and harder-to-implement connector? Some board space?
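To put numbers on the capacity math above (a quick sketch; the per-pin current ratings are the Molex figures quoted in this thread, and the 12V rail with 3 current-carrying pairs per 8-pin PCIe connector is standard):

```python
# Connector power capacity: current-carrying 12V pins x rated current per pin.
RAIL_V = 12.0

def connector_capacity(pins: int, amps_per_pin: float) -> float:
    """Max continuous power (W) with every current-carrying pin at its rating."""
    return pins * amps_per_pin * RAIL_V

# 8-pin PCIe with standard 8A Mini-Fit terminals: the 288W figure above
print(connector_capacity(3, 8.0))    # 288.0
# Same connector with 13A HCS terminals
print(connector_capacity(3, 13.0))   # 468.0
```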
That doesn't make it better. We traded something cheap and reliable, with a huge amount of headroom, for something more expensive, harder to manufacture, and running full tilt with no margin whatsoever. All so we could save the space of a single connector, which amounts to less than 1% of the total size of a GPU. Who in their right mind made this stupid-ass decision!?
I think that was it, yes.
Like I said, 288W was the worst case using the lowest possible values: the lower 8A-per-line Molex rating, the thinner AWG18 cabling rather than the AWG16 that's recommended but not always used, plus the additional concern of daisy-chained PCIe cables coming from the PSU itself.
288W per 6+2-pin connector already includes a massive safety margin; a 13A Molex terminal on AWG16 wire would rarely struggle to deliver double that while keeping the same margin.
I'm not seeing that safety margin on 12VHPWR, and when it melts people blame the adapters, the bend radius, the manufacturing quality... Nope: the design itself has no safety margin at all. You can look up the temperature delta on stranded cable of any given size, and Molex publishes equally exact specs for how hot their connectors get at any given current - this is all in the public domain, no secret.
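A rough margin comparison makes the point (a sketch, not a spec sheet: the 8-pin side uses the 150W PCIe connector budget against the 8A Mini-Fit rating discussed above; the 12VHPWR side uses the 600W spec, 6 current-carrying 12V pins, and the commonly cited ~9.5A per-pin rating - treat those as assumed figures):

```python
# Safety margin = per-pin rated current / current each pin carries at spec load.
RAIL_V = 12.0

def safety_margin(spec_watts: float, pins: int, amps_per_pin: float) -> float:
    drawn_per_pin = spec_watts / RAIL_V / pins   # amps per pin at the spec'd load
    return amps_per_pin / drawn_per_pin          # headroom factor (1.0 = zero margin)

# 8-pin PCIe: 150W budget over 3 x 12V pins, 8A Mini-Fit terminals
print(round(safety_margin(150, 3, 8.0), 2))   # 1.92 - nearly 2x headroom
# 12VHPWR: 600W over 6 x 12V pins, ~9.5A per-pin rating (assumed, commonly cited)
print(round(safety_margin(600, 6, 9.5), 2))   # 1.14 - almost none
```

Same arithmetic both times; only the design targets differ, which is the whole complaint.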