
PlayStation 5 Power Supply (ADP-400DR)

Joined
Oct 10, 2018
Messages
943 (0.42/day)
What is it with all the PS5 lovin'? No one likes Xbox? All the reviews and hardware announcements are about PS5 this or that. But I liked this review; it's something different and new.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
What is it with all the PS5 lovin'? No one likes Xbox? All the reviews and hardware announcements are about PS5 this or that. But I liked this review; it's something different and new.
Playstation has had a massive mindshare advantage since forever - the first thing people think of when they think "console gaming" is either Playstation or Nintendo, depending on who you ask. Xbox is the perennial underdog in this scenario, despite fluctuating technical superiority (the 360 was clearly superior to the PS3 - and arrived earlier! - the PS4 was slightly faster than the XBO, the XOX was noticeably faster than the PS4P, and the XSX is a bit faster than the PS5 - though few games manage to make any real use of it). Very much a parallel to AMD vs. Nvidia in the GPU space - mindshare (and building it over time) matters far more than moment-to-moment technical superiority, and an incumbent market leader has an inherent advantage.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
Are the benefits massive? Obviously not. Are they worthwhile? Yes. Standardization is fantastic, but standards need to be updated or replaced as they age.
Just moving to a 12v system wouldn't be huge, but when I said ATX is garbage I meant the whole thing, and if it's time to refresh the power and cable standards it's a good opportunity to refresh the form factor as well.

Nobody ever envisioned systems like we have now when ATX was drawn up. Cooling is problematic and inefficient. Graphics cards that pull 300+ watts and the accompanying 1 kg heatsinks needed to cool them would have been crazy talk when ATX was spec'd. Now systems literally can't support their own weight, GPUs sag like a limp noodle, and you can't ship a system without extreme packaging or it's pretty much guaranteed to rip itself to pieces in transit. The features and packaging (use of space) in ATX are also just awful. It would be amazing if we could incorporate some of the modern features we have in server chassis, like backplanes for drives and hot-swap fan cages, and build them into a new enthusiast form factor that makes sense, with dedicated chambers for cooling the CPU and GPU separately (a la Mac Pro) and proper structural support for today's massive heatsinks and GPUs.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
It would be amazing if we could incorporate some of the modern features we have in server chassis, like backplanes for drives and hot-swap fan cages, and build them into a new enthusiast form factor that makes sense, with dedicated chambers for cooling the CPU and GPU separately (a la Mac Pro) and proper structural support for today's massive heatsinks and GPUs.
Ah yes, I would just love it if my machine sounded like a rackmount server. :laugh:

Seriously though, what about ATX makes it inadequate for current machines? If you say power draw, then your solution is either to use more conductors (which ATX does) or larger ones. The second option is to operate at a higher voltage to allow for smaller conductors, but that also puts a burden on the power delivery of every component, because now you need MOSFETs rated for something like twice what they'd need with a 12v supply to handle stepping down the voltage, plus the associated high-voltage caps and whatnot on the input side. So while that would result in fewer wires, it would also result in bulkier and more expensive devices. Simply put, it's easier to produce more conductors than a more complicated power delivery system to drive a system voltage of something like 24v instead of 12v, which would theoretically cut the amount of conductor roughly in half if you ignore the differences in current density at the center of the conductor versus its outside edge (which become far more obvious when the current is not constant).
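To put rough numbers on the conductor side of that trade-off, here's a quick back-of-the-envelope sketch; the load power and allowable current density are illustrative assumptions, not values from any spec:

```python
# Back-of-envelope: conductor needed for a 300 W load at 12 V vs 24 V.
# All numbers are illustrative assumptions, not from any spec.

P = 300.0   # assumed load power in watts
J = 6.0     # assumed allowable current density in A/mm^2

for V in (12.0, 24.0):
    I = P / V          # current the cable must carry
    area = I / J       # copper cross-section needed at that current density
    print(f"{V:4.0f} V rail: {I:5.1f} A total, ~{area:.1f} mm^2 of copper")

# 12 V rail:  25.0 A total, ~4.2 mm^2 of copper
# 24 V rail:  12.5 A total, ~2.1 mm^2 of copper  (roughly half the conductor)
```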

Also a lot of servers still use ATX spec for power delivery. It's not like they don't use the 20+4 pin connector, even in redundant setups.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
Ah yes, I would just love it if my machine sounded like a rackmount server. :laugh:
Backplanes and hot-swappable fan cages are passive features that make fans and drives easier to install and maintain; they don't have moving parts, so they don't make any noise. Installing fans and hard drives and routing cables (the little there is) in a modern server is a dream. Installing fans and hard drives in your typical "enthusiast" ancient ATX case is a shit experience that requires running power through the case to every component, which is a complete waste and tedious at best to keep from looking like crap.

Seriously though, what about ATX makes it inadequate for current machines?
ATX is the form factor; power is just part of it. ATX dictates the entire layout, where and how components are placed, for structural and cooling reasons, and it's woefully inadequate for the reasons I just stated but will repeat to call out specifically.
Cooling is problematic and inefficient. Graphics cards that pull 300+ watts and the accompanying 1 kg heatsinks needed to cool them would have been crazy talk when ATX was spec'd. Now systems literally can't support their own weight, GPUs sag like a limp noodle, and you can't ship a system without extreme packaging or it's pretty much guaranteed to rip itself to pieces in transit. The features and packaging (use of space) in ATX are also just awful. It would be amazing if we could incorporate some of the modern features we have in server chassis, like backplanes for drives and hot-swap fan cages, and build them into a new enthusiast form factor that makes sense, with dedicated chambers for cooling the CPU and GPU separately (a la Mac Pro) and proper structural support for today's massive heatsinks and GPUs.
If you asked someone to draw up a new spec today in 2021, nobody would ever come up with anything that looks like ATX does today or even 10 years ago. It's been amended and bodged through the decades to work, but it makes zero sense for modern hardware demands and usage.

If you say power draw, then your solution is either to use more conductors (which ATX does) or larger ones. The second option is to operate at a higher voltage to allow for smaller conductors, but that also puts a burden on the power delivery of every component, because now you need MOSFETs rated for something like twice what they'd need with a 12v supply to handle stepping down the voltage, plus the associated high-voltage caps and whatnot on the input side. So while that would result in fewer wires, it would also result in bulkier and more expensive devices. Simply put, it's easier to produce more conductors than a more complicated power delivery system to drive a system voltage of something like 24v instead of 12v, which would theoretically cut the amount of conductor roughly in half if you ignore the differences in current density at the center of the conductor versus its outside edge (which become far more obvious when the current is not constant).
I'm not going to dissect everything wrong about this because I wouldn't know where to start, other than to say we are fine where we are with modern hardware running 12v for its main power rail; it's not a limitation and is unlikely to become one any time soon.
Also a lot of servers still use ATX spec for power delivery. It's not like they don't use the 20+4 pin connector, even in redundant setups.
Low-end stuff maybe, but you'll never see a high-end Dell EMC, HPE, or IBM server use ATX; it's all 12v only. Supermicro, Asus, etc. server boards still have ATX connectors on them, but it really is legacy. It's not providing any needed functionality and is only there because something better hasn't come along to kill it off yet. This ASRock Epyc board, for example, is mATX and ditches the 24-pin because it takes up too much space, getting by with a 24-pin to 4-pin adapter, so it's clearly not doing anything useful. You'll also notice that the SATA drives are meant to get their power from the board, not the PSU, on a board with very limited space, where component reliability counts.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Just moving to a 12v system wouldn't be huge, but when I said ATX is garbage I meant the whole thing, and if it's time to refresh the power and cable standards it's a good opportunity to refresh the form factor as well.

Nobody ever envisioned systems like we have now when ATX was drawn up. Cooling is problematic and inefficient. Graphics cards that pull 300+ watts and the accompanying 1 kg heatsinks needed to cool them would have been crazy talk when ATX was spec'd. Now systems literally can't support their own weight, GPUs sag like a limp noodle, and you can't ship a system without extreme packaging or it's pretty much guaranteed to rip itself to pieces in transit. The features and packaging (use of space) in ATX are also just awful. It would be amazing if we could incorporate some of the modern features we have in server chassis, like backplanes for drives and hot-swap fan cages, and build them into a new enthusiast form factor that makes sense, with dedicated chambers for cooling the CPU and GPU separately (a la Mac Pro) and proper structural support for today's massive heatsinks and GPUs.
Backplanes and hot-swappable fan cages are passive features that make fans and drives easier to install and maintain; they don't have moving parts, so they don't make any noise. Installing fans and hard drives and routing cables (the little there is) in a modern server is a dream. Installing fans and hard drives in your typical "enthusiast" ancient ATX case is a shit experience that requires running power through the case to every component, which is a complete waste and tedious at best to keep from looking like crap.


ATX is the form factor; power is just part of it. ATX dictates the entire layout, where and how components are placed, for structural and cooling reasons, and it's woefully inadequate for the reasons I just stated but will repeat to call out specifically.

If you asked someone to draw up a new spec today in 2021, nobody would ever come up with anything that looks like ATX does today or even 10 years ago. It's been amended and bodged through the decades to work, but it makes zero sense for modern hardware demands and usage.


I'm not going to dissect everything wrong about this because I wouldn't know where to start, other than to say we are fine where we are with modern hardware running 12v for its main power rail; it's not a limitation and is unlikely to become one any time soon.

Low-end stuff maybe, but you'll never see a high-end Dell EMC, HPE, or IBM server use ATX; it's all 12v only. Supermicro, Asus, etc. server boards still have ATX connectors on them, but it really is legacy. It's not providing any needed functionality and is only there because something better hasn't come along to kill it off yet. This ASRock Epyc board, for example, is mATX and ditches the 24-pin because it takes up too much space, getting by with a 24-pin to 4-pin adapter, so it's clearly not doing anything useful. You'll also notice that the SATA drives are meant to get their power from the board, not the PSU, on a board with very limited space, where component reliability counts.
I kind of agree with you in principle, but there is one major issue with this: the current flexibility of ATX and standards derived from it (mATX, ITX). Despite this in no way being intended when the ATX standard was first made, it made a flexible basis for smaller/alternative implementations, allowing for great flexibility in case design and the like, with the major added bonus of inter-compatibility (all motherboards fitting in any case made for its standard or larger), plus full cooler and PSU compatibility (barring cable length issues or cooler clearance issues, which are unrelated to the ATX standard).

The problem arises when you try to design something to "replace ATX". Do you aim to only replace ATX? If so, you're killing the standard before it's made, by excluding mATX and (most importantly) ITX. Do you aim to make a flexible standard that can work across these? Cool - but good luck. That's a gargantuan project, and any solution will have major drawbacks. Losing intercompatibility will artificially segment case and motherboard markets. Aiming for a modular/daughterboard-based solution will drive up costs. And so on, and so on.

There are major issues in trying to solve the cooler weight problems you mention, too: standardizing support structures will very significantly limit case design flexibility. Mandating a standard for GPU support will force GPU makers into adhering to more-or-less arbitrary GPU design standards (will the supports work for everything from ITX-sized cards to massive 4-slot monster GPUs? If not, where's the cut-off?). Better support for CPU coolers is essentially impossible without creating further issues: flexible socket placement is a necessity for ease of motherboard design, yet it makes ducting and heatsink support impossible to standardize. So do heatsink sizes and designs. There is simply too much variability to standardize this in an efficient way - this requires designing heatsinks and cases (and to some degree motherboards) together, which again places serious restrictions on design possibilities. OEMs overcome this through having control over the whole system, making ducting and purpose-made coolers, and often also proprietary (or semi-proprietary) motherboards. Doing the same in the DIY space would be impossible - nobody would accept a case that only fit one of 2-3 coolers from the same manufacturer.

Servers bypass all of this by being almost entirely proprietary. Plus they aren't cost-constrained at all, so expensive backplanes and similar features aren't an issue for them. I would love to see more NAS cases with storage backplanes, for example, but have you seen the prices of the ones that exist? They're crazy, $300+ for an often pretty basic case - and hot-swap multi-drive bays are easily $100+ as well. These things are of course low volume, driving up prices, but that won't change - not many people need multiple drives these days.

Now, there are ways in which we could make motherboards flexible - like stripping the motherboard itself down to just the CPU, RAM, VRM and I/O with a mezzanine connector for mounting PCIe daughterboards of various sizes, for example, allowing the same "motherboard" to work in sizes from smaller-than-ITX (no PCIe) to gargantuan dozen-slot PCIe monsters, or allowing for creative solutions with these PCIe daughterboards (PLX switches? TB controllers? Riser cables for alternative mounting orientations?). But this would be so damn expensive. Ridiculously so. And I don't think there's any way to design something like this without significantly increasing the base cost of a PC.

(Of course this is entirely discounting the massive cost to case manufacturers for retooling literally every single case design they produce - a cost that can quite easily get into the millions of dollars with enough SKUs.)

IMO, ATX (and its derivatives) as a form factor is here to stay - it will no doubt change somewhat over time, but radical change is unlikely. Changing the PSU and power delivery system is a good way of alleviating some of the major pain points of an old standard without meaningfully breaking compatibility. And that's a very good way to move forward IMO.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
a cost that can quite easily get into the millions of dollars with enough SKUs.
I think you're being far too conservative with that number. I honestly was ignoring the logistics of creating a new standard. However, I think XKCD has a comic for this particular case. The reality is that ATX is doing just fine, well, unless you're one of those poor bastards with a GPU that eats >300w stock. :laugh:

Either way, my main point is that even if we were to create a new standard, I don't see it being a major improvement over what we have now which makes it a really hard sell to redesign literally everything that uses ATX or some derivative.
 
Joined
Jun 18, 2021
Messages
2,569 (2.00/day)
It would be amazing if we could incorporate some of the modern features we have in server chassis, like backplanes for drives and hot-swap fan cages, and build them into a new enthusiast form factor that makes sense, with dedicated chambers for cooling the CPU and GPU separately (a la Mac Pro) and proper structural support for today's massive heatsinks and GPUs.

The Mac Pro works because Apple controls every single part of it; other similar-ish concepts exist, like the Corsair One for example. The point is, we can't compare against a pre-built solution where the manufacturer is able to design and optimize things however they think is best (Apple trying to make a socketable CPU not user-replaceable, cough cough). DIY solutions have to be compatible with a lot of different designs and will always be unoptimized because of it. It's part of the fun, in a way, to try to find ways to do it ourselves.

Hot-swap fans and backplanes for drives? Talk to case manufacturers; some exist, but the market is small and as such the price is usually high. An alternative is to get a case with 5.25'' bays; there are a lot of compatible modules you can slot in (hot-swap HDD cages, water reservoirs, etc.)

The second option is to operate at a higher voltage to allow for smaller conductors, but that also puts a burden on the power delivery of every component because now you need MOSFETs that are rated for something like twice what they can do with a 12v supply to handle stepping down the voltage, plus associated high voltage caps and whatnot on the input side.

That's not a problem; we have tiny MOSFETs that can handle anywhere from 12v to 100v without breaking a sweat. Current capability is the design burden there, and since going for a higher voltage reduces the current, it would actually be an advantage, but it would completely break the current standard, hence ATX12VO. Old PSUs still work with a couple of adapters (PSUs that aren't able to deliver their full power on the 12v rail will get wrecked :D)
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The Mac Pro works because Apple controls every single part of it; other similar-ish concepts exist, like the Corsair One for example. The point is, we can't compare against a pre-built solution where the manufacturer is able to design and optimize things however they think is best (Apple trying to make a socketable CPU not user-replaceable, cough cough). DIY solutions have to be compatible with a lot of different designs and will always be unoptimized because of it. It's part of the fun, in a way, to try to find ways to do it ourselves.

Hot-swap fans and backplanes for drives? Talk to case manufacturers; some exist, but the market is small and as such the price is usually high. An alternative is to get a case with 5.25'' bays; there are a lot of compatible modules you can slot in (hot-swap HDD cages, water reservoirs, etc.)



That's not a problem; we have tiny MOSFETs that can handle anywhere from 12v to 100v without breaking a sweat. Current capability is the design burden there, and since going for a higher voltage reduces the current, it would actually be an advantage, but it would completely break the current standard, hence ATX12VO. Old PSUs still work with a couple of adapters (PSUs that aren't able to deliver their full power on the 12v rail will get wrecked :D)
Completely agree, with one little point of correction/disagreement: due to 12VO moving to a 12V standby line rather than 5V, making a NON-12VO PSU compatible with adapters is sadly quite unlikely. You can do it in theory with a boost converter on the 5VSB line, but those typically handle a couple of amps at most, so margins for further conversion losses are small, and if there is any increase in standby current requirements for 12VO many PSUs wouldn't be able to handle that. Still, I don't think an industry-wide move to 12VO over 3-5 years (including DIY) would be all that difficult. And most OEMs are already using proprietary 12VO-like "standards" for their desktops, so 12VO should be a shoo-in there.
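To put a rough number on how tight that 5VSB margin is, here's a quick sketch; the current limit and converter efficiency are assumed, illustrative values, not figures from the 12VO spec or any particular PSU:

```python
# Rough budget for feeding a 12VO board's 12 V standby input from a legacy
# 5VSB rail through a boost converter. All numbers are illustrative assumptions.

V_SB = 5.0       # legacy standby rail voltage
I_SB_MAX = 2.5   # assumed 5VSB current limit (typically a couple of amps)
EFF = 0.85       # assumed boost converter efficiency

p_in = V_SB * I_SB_MAX    # 12.5 W available from the old PSU
p_out = p_in * EFF        # ~10.6 W left after conversion losses
i_out = p_out / 12.0      # ~0.9 A of 12 V standby current for the board

print(f"Usable 12 V standby: ~{p_out:.1f} W (~{i_out:.2f} A)")
```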
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
That's not a problem; we have tiny MOSFETs that can handle anywhere from 12v to 100v without breaking a sweat. Current capability is the design burden there, and since going for a higher voltage reduces the current, it would actually be an advantage, but it would completely break the current standard, hence ATX12VO. Old PSUs still work with a couple of adapters (PSUs that aren't able to deliver their full power on the 12v rail will get wrecked :D)
It's easier for the PSU, yes. Not for the components utilizing it. The VRMs on the motherboard or GPU aren't going to be rated for 100v. In fact, a buck converter isn't going to be rated for much more than 13v if the expected input is 12v (because, you know, the ATX specs.) Higher voltages tend to result in larger components because of the insulation that's required. Same deal with capacitors: you need bigger caps for the same capacitance. That's not a good thing for power delivery close to the actual components using it. Either way, I think we're in agreement that 12v is the way to go. I'm just not convinced that it makes ATX in its current form bad. It just means that the handful of pins that handle 5 and 3.3v are obsolete.
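For a feel of why the input voltage rating matters, here's a minimal sketch of the ideal buck converter relationships; the output voltage, current, and derating factor are assumed, illustrative values, not from any real VRM:

```python
# Ideal (lossless) buck converter: duty cycle D = Vout / Vin, and each switch
# has to block roughly Vin plus some margin. Illustrative values only.

V_OUT = 1.2     # assumed CPU core voltage
I_OUT = 100.0   # assumed core current in amps
MARGIN = 1.5    # assumed derating factor for the FET voltage rating

for v_in in (12.0, 24.0):
    duty = V_OUT / v_in           # fraction of the switching period spent "on"
    fet_v = v_in * MARGIN         # minimum FET voltage rating with margin
    i_in = V_OUT * I_OUT / v_in   # ideal input current for the same output power
    print(f"Vin = {v_in:.0f} V: D = {duty:.1%}, FETs rated >= {fet_v:.0f} V, "
          f"input current ~ {i_in:.0f} A")
```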
 
Joined
Jun 18, 2021
Messages
2,569 (2.00/day)
Completely agree, with one little point of correction/disagreement: due to 12VO moving to a 12V standby line rather than 5V, making a NON-12VO PSU compatible with adapters is sadly quite unlikely. You can do it in theory with a boost converter on the 5VSB line, but those typically handle a couple of amps at most, so margins for further conversion losses are small, and if there is any increase in standby current requirements for 12VO many PSUs wouldn't be able to handle that. Still, I don't think an industry-wide move to 12VO over 3-5 years (including DIY) would be all that difficult. And most OEMs are already using proprietary 12VO-like "standards" for their desktops, so 12VO should be a shoo-in there.

Didn't really think about that. A simple solution is no 12VSB for you, so no standby (which would probably mean no suspend/hibernate, only hard shutdown, a pretty bad tradeoff I guess), but I also see converter modules becoming popular with few drawbacks (you do lose most of the efficiency benefits, but at least it's one less PSU on the e-waste pile).

Both of these are already slowly coming to light, like this cable adapter (they don't specifically mention the VSB conversion, but there's some extra bulk in the middle of the sleeve, so it may do both, who knows). If the PicoPSU is a thing, I'm sure 5-to-12 VSB converters will definitely also be a thing.

It's easier for the PSU, yes. Not for the components utilizing it. The VRMs on the motherboard or GPU aren't going to be rated for 100v. In fact, a buck converter isn't going to be rated for much more than 13v if the expected input is 12v (because, you know, the ATX specs.) Higher voltages tend to result in larger components because of the insulation that's required. Same deal with capacitors: you need bigger caps for the same capacitance. That's not a good thing for power delivery close to the actual components using it. Either way, I think we're in agreement that 12v is the way to go. I'm just not convinced that it makes ATX in its current form bad. It just means that the handful of pins that handle 5 and 3.3v are obsolete.

Of course the motherboard would need to be designed to handle 24, 50, 100 or whatever volts, but focusing on the 24v point: insulation would barely change, and the capacitors are on the low side (after the 24v gets stepped down to the ~1v the CPU uses), so that doesn't change. Of course it's always a balancing act that we could go on and on about; IMO 24v would probably be a good move if not for completely breaking compatibility with literally everything (hard drives, GPUs, fans, etc. etc.)
 
Last edited:

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,171 (2.79/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Of course the motherboard would need to be designed to handle 24, 50, 100 or whatever volts, but focusing on the 24v point: insulation would barely change, and the capacitors are on the low side (after the 24v gets stepped down to the ~1v the CPU uses), so that doesn't change. Of course it's always a balancing act that we could go on and on about; IMO 24v would probably be a good move if not for completely breaking compatibility with literally everything (hard drives, GPUs, fans, etc. etc.)
I'm debating for the sake of debating; we're on the same page. If we forget about backwards compatibility, I think there are a lot of things that could possibly be done to make a machine small or able to eat a ton of power. Honestly, I think 24v would be a reasonable middle ground, and it's not the only electronics space that uses 24v (think furnaces/boilers and their controls.) The nice bit about doing something like that is that you're cutting the current in half for the same power. That lets you deliver a lot more power with the same number of cables. There does come a point, though, where the amount of insulation is the limiting factor for how small you can make the components for higher-voltage applications. There are a lot of laptops that have battery power delivered in this kind of voltage range, so it's not unrealistic, that's for sure.
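As a quick sanity check on the "more power through the same cables" point, here's a sketch; the per-pin current and pair count are assumed, illustrative values, not taken from any connector spec:

```python
# Power through one connector with three current-carrying pairs at 12 V vs 24 V.
# The per-pin current limit is an assumed illustrative value, not a real rating.

PAIRS = 3        # assumed number of +V/ground pairs
I_PER_PIN = 8.0  # assumed safe continuous current per pin, in amps

for v in (12.0, 24.0):
    p = v * I_PER_PIN * PAIRS
    print(f"{v:.0f} V x {I_PER_PIN:.0f} A x {PAIRS} pairs = {p:.0f} W")

# Same connector and same current, but double the voltage means double the power.
```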

The problem is adoption and logistics. Nobody wants to remake the wheel if the one we have works just fine.
 
Last edited:
Joined
Mar 21, 2021
Messages
5,150 (3.75/day)
Location
Colorado, U.S.A.
System Name CyberPowerPC ET8070
Processor Intel Core i5-10400F
Motherboard Gigabyte B460M DS3H AC-Y1
Memory 2 x Crucial Ballistix 8GB DDR4-3000
Video Card(s) MSI Nvidia GeForce GTX 1660 Super
Storage Boot: Intel OPTANE SSD P1600X Series 118GB M.2 PCIE
Display(s) Dell P2416D (2560 x 1440)
Power Supply EVGA 500W1 (modified to have two bridge rectifiers)
Software Windows 11 Home
A resistive load may not have coil whine, while a real load may not be totally resistive in nature.
 
Joined
Mar 26, 2010
Messages
9,910 (1.84/day)
Location
Jakarta, Indonesia
System Name micropage7
Processor Intel Xeon X3470
Motherboard Gigabyte Technology Co. Ltd. P55A-UD3R (Socket 1156)
Cooling Enermax ETS-T40F
Memory Samsung 8.00GB Dual-Channel DDR3
Video Card(s) NVIDIA Quadro FX 1800
Storage V-GEN03AS18EU120GB, Seagate 2 x 1TB and Seagate 4TB
Display(s) Samsung 21 inch LCD Wide Screen
Case Icute Super 18
Audio Device(s) Auzentech X-Fi Forte
Power Supply Silverstone 600 Watt
Mouse Logitech G502
Keyboard Sades Excalibur + Taihao keycaps
Software Win 7 64-bit
Benchmark Scores Classified
  • Sweat efficiency spot with 230 V input
That's something new for PSUs :D, did you try some Old Spice?

But yea, the soldering quality is like from another league, not the usual crap we are seeing.
Yep, I agree, the soldering quality is good and the PCB copper looks pretty good too.
 
Joined
Dec 28, 2012
Messages
3,954 (0.90/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Motherboards are expensive now because of the CPU VRM and chipset. Building another VRM to supply the tiny amount of power the 3.3 and 5v rails need would be single-digit dollar amounts.
Yes, because a $5 increase in supply cost works out to a $100 increase on the consumer end. Have you not paid attention at ALL to current-day motherboard prices? Copper goes up by a penny and mobos go up by $10.

You're also forgetting the cost of engineering that solution into every board. RIP mini-ITX boards that are crammed full as is, or even full ATX boards that already have tons of stuff on them. There's no room unless you start adding daughterboards (see again $$$$).
You wouldn't need to, just need new cables or adapters
So more dongles to buy, more $$$, and more plastic/metal to replace something that works fine. This is the Nvidia 12-pin argument all over again.
But less expensive power supplies, and simpler, more reliable designs perhaps. And it's not like we couldn't eliminate a big part of the 3.3v and 5v from motherboards anyway: HDDs don't need it, and DDR5 already moved the power management to the DIMM to use 12V. SSDs and USB are the only ones left that I can think of, and they would be cheap enough to include on the board.
So again we have this mentality of "if a product is less expensive we'll see the benefits". Get that out of your head. Manufacturers are taking consumers for as hard of a ride as possible, and if you honestly think PSUs will get cheaper over this, I have a bridge to sell you.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Yes, because a $5 increase in supply cost works out to a $100 increase on the consumer end. Have you not paid attention at ALL to current-day motherboard prices? Copper goes up by a penny and mobos go up by $10.
Raw material prices affect literally everything, and you're ignoring the units in use. Copper goes up by a penny... per what? Gram? Kg? Ton? Also, copper prices have increased massively over the past couple of years. You're way underselling the effect of this here. Also, motherboard prices have increased due to PCIe 4.0 demanding more layers and thicker traces for integrity, as well as redrivers. This is as much of a reason for the current cost increases as raw material prices. There's no reason to think a $5 BOM increase will translate to a $100 price increase.
You're also forgetting the cost of engineering that solution into every board. RIP mini-ITX boards that are crammed full as is, or even full ATX boards that already have tons of stuff on them. There's no room unless you start adding daughterboards (see again $$$$).
ITX boards will need smaller VRMs for the minor rails than larger boards do, as there are fewer things to power (less PCIe and M.2 for 3.3V, less USB for 5V). And DC-DC converters are tiny. Fitting them on the most packed boards will be a bit of a challenge, but perfectly doable, especially when factoring in the space left open by reducing the 24-pin to a 10-pin (and ITX boards will have no need for the supplementary 6-pin cable that larger boards can add for more 12V to PCIe devices). DDR5 moving the DRAM VRM to the DIMM opens up even more room.
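For a rough sense of scale of what those minor-rail converters would actually have to supply, here's a sketch; every load figure below is an assumption for illustration, not a measurement from any board:

```python
# Illustrative budget for the 3.3 V / 5 V loads a 12VO board generates locally.
# All load figures are assumptions, not measured values.

loads_w = {
    "4x USB ports @ 5 V, 0.9 A each": 4 * 5.0 * 0.9,
    "2x SATA drives @ 5 V, 0.7 A each": 2 * 5.0 * 0.7,
    "2x M.2 slots @ 3.3 V, 2.5 A each": 2 * 3.3 * 2.5,
    "misc logic @ 3.3 V, 1 A": 3.3 * 1.0,
}

total = sum(loads_w.values())
print(f"Estimated minor-rail load: ~{total:.0f} W")  # a few tens of watts

# A DC-DC buck stage in this class is a handful of small parts, nothing like
# the CPU VRM sitting next to it.
```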
So more dongles to buy, more $$$, and more plastic/metal to replace something that works fine. This is the Nvidia 12-pin argument all over again.
The only adapters you need to buy are if you want to keep old hardware in service. Buying an adapter is far better than a new PSU or motherboard, no?
So again we have this mentality of "if a product is less expensive we'll see the benefits". Get that out of your head. Manufacturers are taking consumers for as hard of a ride as possible, and if you honestly think PSUs will get cheaper over this, I have a bridge to sell you.
It's likely we'll see some savings, or at least some slowing of price creep over time, but it won't be major - this will require new platform designs to some degree, which obviously comes with an engineering cost. But over time, it will result in higher efficiency PSUs at relatively lower prices, and cheaper units are likely to be better overall. Top end units will stay expensive, as they're sold as premium products in the first place, and the relation between BOM and price is less direct.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
I kind of agree with you in principle, but there is one major issue with this: the current flexibility of ATX and standards derived from it (mATX, ITX). Despite this in no way being intended when the ATX standard was first made, it made a flexible basis for smaller/alternative implementations, allowing for great flexibility in case design and the like, with the major added bonus of inter-compatibility (all motherboards fitting in any case made for its standard or larger), plus full cooler and PSU compatibility (barring cable length issues or cooler clearance issues, which are unrelated to the ATX standard).

The problem arises when you try to design something to "replace ATX". Do you aim to only replace ATX? If so, you're killing the standard before it's made, by excluding mATX and (most importantly) ITX. Do you aim to make a flexible standard that can work across these? Cool - but good luck. That's a gargantuan project, and any solution will have major drawbacks. Losing intercompatibility will artificially segment case and motherboard markets. Aiming for a modular/daughterboard-based solution will drive up costs. And so on, and so on.

There are major issues in trying to solve the cooler weight problems you mention, too: standardizing support structures will very significantly limit case design flexibility. Mandating a standard for GPU support will force GPU makers into adhering to more-or-less arbitrary GPU design standards (will the supports work for everything from ITX-sized cards to massive 4-slot monster GPUs? If not, where's the cut-off?). Better support for CPU coolers is essentially impossible without creating further issues: flexible socket placement is a necessity for ease of motherboard design, yet it makes ducting and heatsink support impossible to standardize. So do heatsink sizes and designs. There is simply too much variability to standardize this in an efficient way - this requires designing heatsinks and cases (and to some degree motherboards) together, which again places serious restrictions on design possibilities. OEMs overcome this through having control over the whole system, making ducting and purpose-made coolers, and often also proprietary (or semi-proprietary) motherboards. Doing the same in the DIY space would be impossible - nobody would accept a case that only fit one of 2-3 coolers from the same manufacturer.

Servers bypass all of this by being almost entirely proprietary. Plus they aren't cost-constrained at all, so expensive backplanes and similar features aren't an issue for them. I would love to see more NAS cases with storage backplanes, for example, but have you seen the prices of the ones that exist? They're crazy, $300+ for an often pretty basic case - and hot-swap multi-drive bays are easily $100+ as well. These things are of course low volume, driving up prices, but that won't change - not many people need multiple drives these days.

Now, there are ways in which we could make motherboards flexible - like stripping the motherboard itself down to just the CPU, RAM, VRM and I/O with a mezzanine connector for mounting PCIe daughterboards of various sizes, for example, allowing the same "motherboard" to work in sizes from smaller-than-ITX (no PCIe) to gargantuan dozen-slot PCIe monsters, or allowing for creative solutions with these PCIe daughterboards (PLX switches? TB controllers? Riser cables for alternative mounting orientations?). But this would be so damn expensive. Ridiculously so. And I don't think there's any way to design something like this without significantly increasing the base cost of a PC.

(Of course this is entirely discounting the massive cost to case manufacturers for retooling literally every single case design they produce - a cost that can quite easily get into the millions of dollars with enough SKUs.)

IMO, ATX (and its derivatives) as a form factor is here to stay - it will no doubt change somewhat over time, but radical change is unlikely. Changing the PSU and power delivery system is a good way of alleviating some of the major pain points of an old standard without meaningfully breaking compatibility. And that's a very good way to move forward IMO.
Everything you mentioned is a valid roadblock to defining a new standard, and none of it would be easy to implement, but most things worth doing are not easy. It also wouldn't have to be nearly as hard or restrictive if done in phases and done properly. I spend more time and money on my other hobby, bikes, and that industry has gone through a ton of change and new standards in the last 10 years. Different wheel sizes (29ers), literally dozens of different bottom bracket standards (press-fit bearings of various sizes, different axle diameters), Boost hub and frame spacing (wider, stiffer hubs and frames, and stiffer suspension forks), metric shock mounts (more efficient shock packaging). That's not even all of them, and people bitched about them in that industry too, but all of those changes are responsible for changing the bike for the better. None of that happened all at once, and while some standards didn't pan out, things have a way of sorting themselves out and you end up with something better in the end. I know it's not exactly the same, but I think it illustrates the point: you have to break with the past to make progress. IDK about you, but I don't see tempered glass or RGB on some previously impossible square mm as progress.

I look at 12VO as the start: get rid of the legacy 24-pin connector, where 80% of the wires are useless, and get rid of the voltages that don't need to be in the PSU anymore. After that, the next logical step is probably to make the leap to defining a new case and motherboard layout with dedicated cooling chambers for the CPU and GPU. The CPU cooling solution wouldn't have to be as restrictive as you make it sound; in an ATX-sized solution there would be plenty of room to carve out space for large coolers and several mm of flexibility in socket placement, so there would still be plenty of design variety. Maybe not every old cooler would be compatible with the new standard, but most would. Smaller mATX and ITX replacements would have tighter tolerances and less flexibility with what fits together, but that's no different than now. For GPUs a separate chamber (a la Mac Pro) would be ideal; an ATX-sized case could support up to 4 slots' worth of card, smaller derivative form factors less. Existing cards could still fit, as PCIe works just fine electrically, but you could have a support system that secures the card along its axis to different mounting points built into the motherboard in different length increments, similar to how NVMe SSDs work now; only new cards would support this and benefit, but old cards would still fit. All of this is just different packaging and moving things around, and it addresses pretty much all of ATX's shortfalls. It wouldn't cost any more in materials, but it would of course be more expensive at first because of the novelty of it and the design cost. That's a one-time cost though, and doing something like this is the only way you make progress that isn't just slapping superficial bullshit on the same thing.

The other niceties I mentioned, like hot-swap fans and backplanes, are not really that expensive. Before shit went insane (pricing wise) you could get a Supermicro 2U chassis that had all those features for around $400 (which includes the rails), which is as good as it gets for a DIY server. Lian Li has daisy-chain fans that link together, so that's halfway there; they go for about $30 each, so not super expensive, so you'd just need to have the starting point be the case and have some sort of standard. Hot-swap drive cages add cost, but backplanes are not expensive; it's just a PCB with connectors and traces. The mATX Lian Li PC-M25 (and I'm sure others have also) had a backplane for 5 3.5" drives: cable up the backplane once and you're done, adding/removing drives is a breeze after that. I think it was around $150, so not really any more expensive than any other Lian Li. So you can see some of this starting to creep in, but without a standard and throwing out some of the old it's a hard slog to make things catch on. I'd gladly pay 15-25% more to gain quality, useful features like that in a case though; cases last forever.

Tooling and design costs are static costs anytime you design a new case, if it's a new chassis design, whether it be ATX or some completely new standard, so that's not really a thing. Pointing out all the pain points is easy, but just saying no to change shows the lack of imagination in the industry and leads to stagnation, in my opinion. And like I said, you don't make all these changes at once; you roll them out at the high end first, where you can absorb some of the extra cost of change, and eventually it trickles down to midrange and low-end.

Some of them could even be backported to make ATX better: you could have hot-swap fans in ATX for example, or add the GPU support points on an ATX board without having a dedicated cooling chamber.

Yes, because a $5 increase in supply cost works out to a $100 increase on the consumer end. Have you not paid attention at ALL to current-day motherboard prices? Copper goes up by a penny and mobos go up by $10.

You're also forgetting the cost of engineering that solution into every board. RIP mini-ITX boards that are crammed full as is, or even full ATX boards that already have tons of stuff on them. There's no room unless you start adding daughterboards (see again $$$$).
This is not accurate. You can literally add up the component cost of a high-end CPU VRM and you come to a very substantial % of the cost of the board; those components are expensive. Adding in minor rails for 3.3 or 5v does not need those high-end, expensive components; it would be very, very cheap to add.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Everything you mentioned is a valid roadblock to defining a new standard, and none of it would be easy to implement, but most things worth doing are not easy. It also wouldn't have to be nearly as hard or restrictive if done in phases and done properly. I spend more time and money on my other hobby, bikes, and that industry has gone through a ton of change and new standards in the last 10 years. Different wheel sizes (29ers), literally dozens of different bottom bracket standards (press-fit bearings of various sizes, different axle diameters), Boost hub and frame spacing (wider, stiffer hubs and frames, and stiffer suspension forks), metric shock mounts (more efficient shock packaging). That's not even all of them, and people bitched about them in that industry too, but all of those changes are responsible for changing the bike for the better. None of that happened all at once, and while some standards didn't pan out, things have a way of sorting themselves out and you end up with something better in the end. I know it's not exactly the same, but I think it illustrates the point: you have to break with the past to make progress. IDK about you, but I don't see tempered glass or RGB on some previously impossible square mm as progress.
Well, the thing is, bikes don't have an install base of several billion devices that people view as expensive but crucial pieces of equipment. And parts (rims, tires, tubes, gear systems, whatever) are far less expensive to make. (Sure, you can make them expensive, but the tooling for making a new rim size isn't going to be massively expensive.) While I'm sure interoperability of parts is important there too, I'd imagine that market to be far more flexible and adaptable over time than PCs - especially due to the much higher variability and the lower costs involved (a bike shop can easily stock several different wheel/rim sizes without going bust - doing the same for motherboards or cases is much more of a challenge).
I look at 12VO as the start: get rid of the legacy 24-pin connector, where 80% of the wires are useless, and get rid of the voltages that don't need to be in the PSU anymore. After that, the next logical step is probably to make the leap to defining a new case and motherboard layout with dedicated cooling chambers for the CPU and GPU. The CPU cooling solution wouldn't have to be as restrictive as you make it sound; in an ATX-sized solution there would be plenty of room to carve out space for large coolers and several mm of flexibility in socket placement, so there would still be plenty of design variety. Maybe not every old cooler would be compatible with the new standard, but most would. Smaller mATX and ITX replacements would have tighter tolerances and less flexibility with what fits together, but that's no different than now. For GPUs a separate chamber (a la Mac Pro) would be ideal; an ATX-sized case could support up to 4 slots' worth of card, smaller derivative form factors less.
The thing is, you don't seem to be thinking this through properly. You say separate chambers for CPU and GPU. Sure. That either means ducting, moving components to different sides of the board, or daughterboards. The first can be done even with ATX - you just need to make sure your CPU cooler fits inside of its duct. No change needed - but you will restrict cooler compatibility. That's just how it is - some coolers overhang the first PCIe slot on some boards. You could update the ATX spec by removing the first PCIe slot in favor of ducting - but that would make ATX and ITX incompatible. Either of the two other solutions will necessitate radical changes to motherboard, cooler and case design, and will dramatically increase costs across the board.

The point is: standardizing separate thermal chambers will always be more restrictive or massively more expensive. Avoiding this is not physically possible. There are of course radical design changes that could make ATX dramatically more space efficient - moving AIC slots to the rear of the motherboard and having the connectors towards the front edge of the board, for example, would make the long-and-flat form factor of ATX into a much smaller cuboid design not much taller than ITX. But it would mean breaking compatibility with every component on the market, including every PCIe device in existence. You could instead put the CPU towards the front of the board, with the PCIe slots towards the rear, but that would cause routing conflicts with high speed rear I/O, as well as make for some rather unfortunate case shapes (but it would be great for CPU air cooling).
Existing cards could still fit, as PCIe works just fine electrically, but you could have a support system that secures the card along its axis to different mounting points built into the motherboard in different length increments, similar to how NVMe SSDs work now; only new cards would support this and benefit, but old cards would still fit.
That won't really help, as you'll only be adding mounting points along an already existing axis (along the PCIe slot) with that solution. That will, at best, slightly improve GPU security - unless you make that system something that wraps under and around the GPU, but then you're looking at a pretty complicated mechanical design that won't be compatible with all GPUs. GPUs are secured in two axes - along the PCIe slot and the I/O shield (which is technically two axes on its own, but small enough not to matter). Effectively an L shape. Extending the long part of the L won't improve much. For any actually meaningful improvement you need to either change that L into a square, with some means of attaching the top edge (where the power connectors typically sit, opposite the PCIe slot) or you need to brace the GPU diagonally towards the motherboard. Neither of those are really feasible. Diagonal braces will get in the way of other components and AICs; top edge supports will restrict GPU height and enforce specific shapes, as well as some sort of cross brace or bracket in the case along the side panel.
All of this is just different packaging and moving things around, and it addresses pretty much all of ATX's shortfalls.
Sadly it doesn't.
It wouldn't cost any more in materials, but it would of course be more expensive at first because of the novelty of it and the design cost. That's a one-time cost though, and doing something like this is the only way you make progress that isn't just slapping superficial bullshit on the same thing.
A one-time cost, but one that most case and AIC manufacturers wouldn't be able to afford.
The other niceties I mentioned, like hot-swap fans and backplanes, are not really that expensive. Before shit went insane (pricing wise) you could get a Supermicro 2U chassis that had all those features for around $400 (which includes the rails), which is as good as it gets for a DIY server. Lian Li has daisy-chain fans that link together, so that's halfway there; they go for about $30 each, so not super expensive, so you'd just need to have the starting point be the case and have some sort of standard. Hot-swap drive cages add cost, but backplanes are not expensive; it's just a PCB with connectors and traces. The mATX Lian Li PC-M25 (and I'm sure others have also) had a backplane for 5 3.5" drives: cable up the backplane once and you're done, adding/removing drives is a breeze after that. I think it was around $150, so not really any more expensive than any other Lian Li. So you can see some of this starting to creep in, but without a standard and throwing out some of the old it's a hard slog to make things catch on. I'd gladly pay 15-25% more to gain quality, useful features like that in a case though; cases last forever.
That's the cheapest SATA backplane case I've seen - likely because it's not a hot swap system. Those (like the Silverstone DS380 or CS381) are typically more than $300. You're entirely right that this isn't expensive, but it's also not something that can be meaningfully standardized - at best, you can get off-the-shelf backplane and drive bay designs, but those already exist, and they're expensive due to their low volumes and niche applications. Wider applications would perhaps drive down prices (those PCBs are likely dirt cheap after all, and it's not like a trayless housing system is that hard to manufacture), but given how few people use 3.5" HDDs today that ship has sailed. Hot swap fans would be great, but they would complicate radiator mounting (would the hot swap trays need to be strong enough to hold a radiator? Copper or aluminium, and how thick?). What I would like is for a standard for interconnected fans with extension cords, as that would already be a dramatic improvement for user friendliness, and would allow for cases to integrate semi-hot-swap systems with integrated cabling for a more premium experience. But again, this is getting pretty niche.
Tooling and design costs are static costs anytime you design a new case, if it's a new chassis design, whether it be ATX or some completely new standard, so that's not really a thing.
Sorry, but no. Case manufacturers especially re-use tooling across a wide range of cases - from motherboard trays to near-complete case assemblies with just minor tweaks - without that, we wouldn't have a single sub-$100 case on the market. Starting from scratch would mean every single design needs to be re-done, likely from the ground up. That's going to get expensive.
Pointing out all the pain points is easy, but just saying no to change shows the lack of imagination in the industry and leads to stagnation, in my opinion. And like I said, you don't make all these changes at once; you roll them out at the high end first, where you can absorb some of the extra cost of change, and eventually it trickles down to midrange and low-end.
The thing is, the high end won't be able to absorb those costs. $100,000+ tooling for a case that sells in the low thousands globally? Once you include all other costs, that's a $1,000 case. You will at best sell as many motherboards as those cases - and you need to get at least a few motherboard makers to put out a couple of models each - which would then sell in the low hundreds each, at best. Again, that's $1,000 motherboards, easily. And if you break compatibility with current PCIe, you need GPUs and everything else to match too. You will lose all economies of scale doing this piecemeal - it needs to be a concerted, industry-wide effort to be even remotely feasible. And that isn't happening.
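To make the scale problem concrete, here's a rough back-of-envelope sketch (the $100k figure echoes the post above; the volume tiers are assumptions for illustration, not industry data) of how a one-off tooling outlay amortizes across units actually sold:

```python
# Back-of-envelope amortization of a one-off tooling cost across the
# number of cases sold. Volumes are assumed purely for illustration.

def tooling_cost_per_unit(tooling_cost: float, units_sold: int) -> float:
    """Share of the fixed tooling/design cost carried by each unit sold."""
    return tooling_cost / units_sold

TOOLING = 100_000  # USD, assumed one-off tooling + design outlay

for units in (2_000, 20_000, 200_000):
    per_unit = tooling_cost_per_unit(TOOLING, units)
    print(f"{units:>7} units -> ${per_unit:>6.2f} of tooling baked into each case")
```

At low-thousands volume, tooling alone adds tens of dollars per case before any markup or margin stacking, which is how a niche run ends up at four-figure retail prices.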
Some of them could even be backported to make ATX better - you could have hot swap fans in ATX, for example, or add the GPU support points to an ATX board without needing a dedicated cooling chamber.
Yes, as I said above, that's possible. But none of that is what you're proposing - it would just be adding on new quasi-standard (at best; realistically: optional) features to the current standard. And that can already be done.
 
Joined
Jun 18, 2021
Messages
2,569 (2.00/day)
Yes, because a $5 increase in supply cost works out to a $100 increase on the consumer end. Have you not paid attention at ALL to current-day motherboard prices? Copper goes up by a penny and mobos go up by $10.

You're also forgetting the cost of engineering that solution into every board - RIP Mini-ITX boards that are crammed full as is, or even full ATX boards that already have tons of stuff on them. There's no room unless you start adding daughterboards (again: $$$$).

So more dongles to buy, more $$$, and more plastic/metal to replace something that works fine. This is the Nvidia 12-pin argument all over again.

So again we have this mentality of "if a product is less expensive we'll see the benefits". Get that out of your head. Manufacturers are taking consumers for as hard of a ride as possible, and if you honestly think PSUs will get cheaper over this, I have a bridge to sell you.

Horses pulling carriages work perfectly fine, no need to change anything - why pay for gasoline when I have plenty of grass to feed them horses?

Things change. ATX12VO has been shown to be better than the current standard and was generally accepted (as in, there wasn't a lot of pushback), so get over it. Your current stuff will still work fine, and when you're ready to upgrade, you'll have new, better stuff waiting.
 
Joined
Jan 28, 2021
Messages
854 (0.60/day)
Well, the thing is, bikes don't have an install base of several billion devices that people view as expensive but crucial pieces of equipment. And parts (rims, tires, tubes, gear systems, whatever) are far less expensive to make. (Sure, you can make them expensive, but the tooling for making a new rim size isn't going to be massively expensive.) While I'm sure interoperability of parts is important there too, I'd imagine that market to be far more flexible and adaptable over time than PCs - especially due to the much higher variability and the lower costs involved (a bike shop can easily stock several different wheel/rim sizes without going bust - doing the same for motherboards or cases is much more of a challenge).
To make things equal, I'm going to assume we're talking about enthusiast-level stuff here, because frankly, when it comes to commodity PCs from the likes of Dell, HP, or any of the big OEMs, is there a standard anything anymore? That concept seemed to die out in the mid-00s, when I started building PCs and helping people fix their puters. Most people don't build their bike, but some do, and lots of riders try to do at least some maintenance themselves, so they know parts compatibility is part of the game. Your average trail bike has way more money in it than your average gaming PC; parts are expensive for bigger-ticket items ($400 for a set of wheels, $500 for a fork), so a bike shop can have a lot tied up in parts that could become obsolete depending on which way the wind blows, and that industry is constantly iterating and trying new things.

The thing is, you don't seem to be thinking this through properly. You say separate chambers for CPU and GPU. Sure. That either means ducting, moving components to different sides of the board, or daughterboards. The first can be done even with ATX - you just need to make sure your CPU cooler fits inside of its duct. No change needed - but you will restrict cooler compatibility. That's just how it is - some coolers overhang the first PCIe slot on some boards. You could update the ATX spec by removing the first PCIe slot in favor of ducting - but that would make ATX and ITX incompatible. Either of the two other solutions will necessitate radical changes to motherboard, cooler and case design, and will dramatically increase costs across the board.

The point is: standardizing separate thermal chambers will always be more restrictive or massively more expensive. Avoiding this is not physically possible. There are of course radical design changes that could make ATX dramatically more space efficient - moving AIC slots to the rear of the motherboard and having the connectors towards the front edge of the board, for example, would make the long-and-flat form factor of ATX into a much smaller cuboid design not much taller than ITX. But it would mean breaking compatibility with every component on the market, including every PCIe device in existence. You could instead put the CPU towards the front of the board, with the PCIe slots towards the rear, but that would cause routing conflicts with high speed rear I/O, as well as make for some rather unfortunate case shapes (but it would be great for CPU air cooling).
Yeah, you could do most or all of it with ATX, it just wouldn't be ideal. If you draft a new standard you could ensure some level of compatibility: if you had a case designed around XTX (a name for a fictional new form factor) with said chambers for CPU and GPU, an XTX motherboard, and a CPU cooler designed with XTX in mind, you would know it would all fit. And yeah, it probably would be more restrictive, but so what? Your average ATX case is huge and full of wasted space, with lots of potential to use that volume better if you designated areas for specific uses. Seven expansion slots is really quite pointless - nobody uses all of them; most people have a GPU, maybe a Wi-Fi card, and maybe something else for whatever (insert niche use case here).

That won't really help, as you'll only be adding mounting points along an already existing axis (along the PCIe slot) with that solution. That will, at best, slightly improve GPU security - unless you make that system something that wraps under and around the GPU, but then you're looking at a pretty complicated mechanical design that won't be compatible with all GPUs. GPUs are secured in two axes - along the PCIe slot and the I/O shield (which is technically two axes on its own, but small enough not to matter). Effectively an L shape. Extending the long part of the L won't improve much. For any actually meaningful improvement you need to either change that L into a square, with some means of attaching the top edge (where the power connectors typically sit, opposite the PCIe slot) or you need to brace the GPU diagonally towards the motherboard. Neither of those are really feasible. Diagonal braces will get in the way of other components and AICs; top edge supports will restrict GPU height and enforce specific shapes, as well as some sort of cross brace or bracket in the case along the side panel.
Secure the entire backplate of the GPU to the base of the motherboard. The backplate is there to give the GPU structural support; the problem is that the backplate isn't secured to anything. In such a configuration, if the heatsink is mounted through the entire card - the PCB and the backplate - and the backplate is then secured to the motherboard, that's an order of magnitude more rigid than how GPUs are mounted now, since the card is held securely along its entire axis. Three different mounting positions could be provided along the board (similar to NVMe drives); you'd just need to design your card to use at least one, depending on its size and weight. You don't have to change the PCIe slot itself, so existing cards would be fine, and new cards would still work in ATX.
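For what it's worth, a crude beam model roughly backs up the "order of magnitude" claim. This is purely an illustrative sketch - the load, length, and stiffness values are invented, and a GPU is obviously not a uniform beam - but it shows how much deflection drops once both ends are anchored instead of just the slot end:

```python
# Idealized comparison: card + heatsink treated as a uniform beam with
# flexural rigidity EI. Cantilever = held only at the slot/bracket end;
# two-point = slot end plus a backplate anchor near the far edge.
# All numbers are assumptions for the sketch, not measurements.

def cantilever_sag(load_n: float, length_m: float, ei: float) -> float:
    """Tip deflection of a cantilever with a point load at the free end."""
    return load_n * length_m**3 / (3 * ei)

def two_point_sag(load_n: float, length_m: float, ei: float) -> float:
    """Mid-span deflection of a simply supported beam with a central point load."""
    return load_n * length_m**3 / (48 * ei)

LOAD, LENGTH, EI = 15.0, 0.30, 50.0  # ~1.5 kg card, 300 mm long, assumed stiffness (N, m, N*m^2)

sag_now = cantilever_sag(LOAD, LENGTH, EI)
sag_anchored = two_point_sag(LOAD, LENGTH, EI)
print(f"slot-only mounting : {sag_now * 1000:.2f} mm of sag")
print(f"backplate anchored : {sag_anchored * 1000:.2f} mm of sag")
print(f"improvement        : {sag_now / sag_anchored:.0f}x")  # 16x in this idealized model
```

Even this simplistic model gives roughly a 16x reduction in sag for the same card, and a fully clamped far end (192 in the denominator instead of 48) would do better still.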

That's the cheapest SATA backplane case I've seen - likely because it's not a hot swap system. Those (like the Silverstone DS380 or CS381) are typically more than $300. You're entirely right that this isn't expensive, but it's also not something that can be meaningfully standardized - at best, you can get off-the-shelf backplane and drive bay designs, but those already exist, and they're expensive due to their low volumes and niche applications. Wider applications would perhaps drive down prices (those PCBs are likely dirt cheap after all, and it's not like a trayless housing system is that hard to manufacture), but given how few people use 3.5" HDDs today that ship has sailed. Hot swap fans would be great, but they would complicate radiator mounting (would the hot swap trays need to be strong enough to hold a radiator? Copper or aluminium, and how thick?). What I would like is for a standard for interconnected fans with extension cords, as that would already be a dramatic improvement for user friendliness, and would allow for cases to integrate semi-hot-swap systems with integrated cabling for a more premium experience. But again, this is getting pretty niche.
Cases all still ship with support for 3.5" drives, and motherboards still come with plenty of SATA ports, so people do still use them. The ability to hot swap drives is determined by the system and the controller; the backplane doesn't have any logic on it. Simple is all it needs to be: running cables to every drive, managing all of them, and re-managing them when you take a drive out or swap one - all of that is eliminated, and that's what the backplane achieves. The only part that would benefit from standardizing is getting power to the backplane, since a ton of SATA cables running to the same place defeats part of the point. Aside from power, the backplane can be whatever, and the drives can go in however the case manufacturer wants - sleds, toolless, hot swap trays, whatever - same as today.

The same goes for fans. Run power to where it needs to be for your intake fans and your exhaust fans, not to every single fan individually. I'm sure it's within the realm of possibility to design the system to accommodate radiator mounting as well.

Sorry, but no. Case manufacturers especially re-use tooling across a wide range of cases - from motherboard trays to near-complete case assemblies with just minor tweaks - without that, we wouldn't have a single sub-$100 case on the market. Starting from scratch would mean every single design needs to be re-done, likely from the ground up. That's going to get expensive.
They do reuse tooling, but not in perpetuity. Eventually it wears out and you either stop making that case or you make new tooling so you can keep making it. Look at something like the Lian Li O11: what tooling did that have in common with anything that preceded it? Some parts and maybe a panel or two, but very little, I'm sure - so a case like that is a big risk. But because it was a success, Lian Li got to scale it, make several iterations, and reuse some of that tooling.

The question is how many they had to sell before they started to make a profit - and that's the case (no pun intended) with any new design you put into large-scale manufacture, regardless of whether it's a new standard or just something really different. Of course building new cases and motherboards on a new standard is a guaranteed risk, but such is the cost of progress with anything new and different. It would certainly be a losing proposition at first, and there would have to be enough buy-in from manufacturers who see the advantages, but as those advantages become evident, that's what people begin to buy, and those early companies have a lead.
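The break-even question itself is simple arithmetic. Here's a minimal sketch - every figure is an assumption for illustration, since I have no idea what Lian Li's actual tooling costs or margins look like:

```python
# Minimal break-even sketch for a freshly tooled case design: units needed
# before the fixed tooling/design outlay is recovered. All figures are
# assumptions, not a real manufacturer's numbers.

def break_even_units(fixed_cost: float, unit_price: float, unit_cost: float) -> float:
    """Units at which contribution margin (price - cost) covers the fixed outlay."""
    margin = unit_price - unit_cost
    if margin <= 0:
        raise ValueError("unit price must exceed unit cost to ever break even")
    return fixed_cost / margin

# Assumed: $100k tooling/design, $130 revenue to the manufacturer per case,
# $100 per-unit build cost -> $30 contribution margin per case.
print(f"break even at ~{break_even_units(100_000, 130, 100):,.0f} cases")
```

With those made-up numbers you'd need a few thousand units just to claw back the tooling, which is exactly why a genuinely new design is a gamble until it proves it can scale.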
Yes, as I said above, that's possible. But none of that is what you're proposing - it would just be adding on new quasi-standard (at best; realistically: optional) features to the current standard. And that can already be done.
Yeah, you can, but you end up breaking the spec if you stray too far into anything big or significant, which nobody wants to do, so nothing changes. It's the same old boring shapes and layouts over and over again.
 
Joined
Sep 4, 2021
Messages
36 (0.03/day)
Processor Intel i7-12700KF
Motherboard Asus Prime Z690-P D4 CSM
Cooling Gelid Glacier Black
Memory 2x16GB Crucial Ballistix 3200MHz RGB (dual rank e-die)
Video Card(s) EVGA 3060 XC Gaming
Storage Patriot VPN100 256GB, Samsung 980 PRO 1TB, Intel 710 300GB, Micron M500 960GB, Seagate Skyhawk 4TB
Display(s) ASUS VG27AQZ
Case Be Quiet Silent Base 800
Audio Device(s) X-fi HD
Power Supply Corsair RM550x (2018)
Mouse Cooler Master MM720
Keyboard Kingston HyperX Alloy FPS
Horses pulling carriages work perfectly fine, no need to change anything - why pay for gasoline when I have plenty of grass to feed them horses?

Things change. ATX12VO has been shown to be better than the current standard and was generally accepted (as in, there wasn't a lot of pushback), so get over it. Your current stuff will still work fine, and when you're ready to upgrade, you'll have new, better stuff waiting.

Except that even if ATX12VO is better, it's a minor improvement that isn't worth it (moving DC-DC converters from the PSU to the motherboard wouldn't suddenly increase efficiency by a ton).
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Except that even if ATX12VO is better, it's a minor improvement that isn't worth it (moving DC-DC converters from the PSU to the motherboard wouldn't suddenly increase efficiency by a ton).
Not a ton, but enough to matter, especially at scale. Of course, the main target is OEMs, most of which have already moved to proprietary 12VO-like layouts anyhow. So the main change there will be the return to interoperability and easy access to replacement parts and upgrades, which is likely an equal environmental benefit to the (mostly idle/low load) advantages of 12VO. "The improvement isn't sufficiently large" is also a pretty poor argument when there is literally no outlook for a more meaningful improvement.
 
Joined
Jun 18, 2021
Messages
2,569 (2.00/day)
Except that even if ATX12VO is better, it's a minor improvement that isn't worth it (moving DC-DC converters from the PSU to the motherboard wouldn't suddenly increase efficiency by a ton).

Do computers spend more time under load or idling? Do you have all the USB ports (5V rail) loaded all the time charging stuff, or, beyond keyboard and mouse, do you only plug in a flash drive every once in a while?

It doesn't look like a big improvement, but once you start doing the math it makes a lot of sense, and it's the only improvement we have on the horizon - something is better than nothing.
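To put a hedged number on "doing the math" - the wattage delta, duty cycle, and fleet size below are pure assumptions picked for illustration, not measured ATX12VO figures - even a few watts saved at idle adds up quickly once you multiply it across an OEM fleet:

```python
# Rough sketch of why a small idle-power saving matters at scale.
# All inputs are assumptions for illustration only.

def annual_kwh_saved(watts_saved: float, hours_per_day: float, days: int = 365) -> float:
    """kWh saved per machine per year from a constant power reduction."""
    return watts_saved * hours_per_day * days / 1000

per_pc = annual_kwh_saved(watts_saved=5, hours_per_day=8)  # ~14.6 kWh/yr per machine
fleet_gwh = per_pc * 10_000_000 / 1_000_000                # hypothetical 10M-unit OEM fleet
print(f"per PC : {per_pc:.1f} kWh/yr")
print(f"fleet  : {fleet_gwh:.0f} GWh/yr")
```

A ~15 kWh/yr saving is invisible on one power bill, but spread across millions of office desktops that mostly sit idle it becomes grid-scale energy, which is why the spec is aimed at OEM machines first.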
 
Joined
Sep 4, 2021
Messages
36 (0.03/day)
Processor Intel i7-12700KF
Motherboard Asus Prime Z690-P D4 CSM
Cooling Gelid Glacier Black
Memory 2x16GB Crucial Ballistix 3200MHz RGB (dual rank e-die)
Video Card(s) EVGA 3060 XC Gaming
Storage Patriot VPN100 256GB, Samsung 980 PRO 1TB, Intel 710 300GB, Micron M500 960GB, Seagate Skyhawk 4TB
Display(s) ASUS VG27AQZ
Case Be Quiet Silent Base 800
Audio Device(s) X-fi HD
Power Supply Corsair RM550x (2018)
Mouse Cooler Master MM720
Keyboard Kingston HyperX Alloy FPS
Not a ton, but enough to matter, especially at scale. Of course, the main target is OEMs, most of which have already moved to proprietary 12VO-like layouts anyhow. So the main change there will be the return to interoperability and easy access to replacement parts and upgrades, which is likely an equal environmental benefit to the (mostly idle/low load) advantages of 12VO. "The improvement isn't sufficiently large" is also a pretty poor argument when there is literally no outlook for a more meaningful improvement.
Nothing guarantees that OEMs would throw away their beloved (vendor lock-in) proprietary solutions for a general standard.
 