
No PCIe Gen5 for "Raphael," Says Gigabyte's Leaked Socket AM5 Documentation

Joined
Dec 26, 2006
Messages
3,652 (0.57/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
I don't think PCIe 5 is a big deal at this point; we just got 4. And going by TPU's own PCIe scaling benchmarks, video cards don't need it either. Might help for NVMe drives once they're available a few years down the road.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.07/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Intel's gunna do an X470-style split, with the x16 slot and first NVMe wired to the CPU on PCI-E 5.0 and the rest on 4.0... because literally nothing justifies the cost and complexity yet, nor will for a few years. It still lets them honestly market it as the first 5.0 platform.
 
Joined
Dec 29, 2010
Messages
3,611 (0.73/day)
Processor AMD 5900x
Motherboard Asus x570 Strix-E
Cooling Hardware Labs
Memory G.Skill 4000c17 2x16gb
Video Card(s) RTX 3090
Storage Sabrent
Display(s) Samsung G9
Case Phanteks 719
Audio Device(s) Fiio K5 Pro
Power Supply EVGA 1000 P2
Mouse Logitech G600
Keyboard Corsair K95
Geeze, why's this thread still going. Gen5 doesn't matter yet and this is again just more marketing crap from Intel. Yes, we know they are sooo great!
 
Joined
Sep 14, 2020
Messages
509 (0.36/day)
Location
Greece
System Name Office / HP Prodesk 490 G3 MT (ex-office)
Processor Intel 13700 (90° limit) / Intel i7-6700
Motherboard Asus TUF Gaming H770 Pro / HP 805F H170
Cooling Noctua NH-U14S / Stock
Memory G. Skill Trident XMP 2x16gb DDR5 6400MHz cl32 / Samsung 2x8gb 2133MHz DDR4
Video Card(s) Asus RTX 3060 Ti Dual OC GDDR6X / Zotac GTX 1650 GDDR6 OC
Storage Samsung 2tb 980 PRO MZ / Samsung SSD 1TB 860 EVO + WD blue HDD 1TB (WD10EZEX)
Display(s) Eizo FlexScan EV2455 - 1920x1200 / Panasonic TX-32LS490E 32'' LED 1920x1080
Case Nanoxia Deep Silence 8 Pro / HP microtower
Audio Device(s) On board
Power Supply Seasonic Prime PX750 / OEM 300W bronze
Mouse MS cheap wired / Logitech cheap wired m90
Keyboard MS cheap wired / HP cheap wired
Software W11 / W7 Pro ->10 Pro
Well, in this regard cache allocation is very much part of the core design - at least to the extent that you can't speak of any kind of performance or IPC metric without including the cache. Technically it's separate, and mobile/APU Zen 3 does have a different cache layout - and thus slightly different IPC - but that's also fundamentally different in other significant ways (monolithic vs. chiplets, etc.). So, yeah, technically correct, but also not in practice. One could of course also speak of the effects of the cache layout, associativity, latencies, etc., etc. There are always more details to point out ;)


Did you mean 5 and 4? And part of what @TheLostSwede said was that 5.0 boards might need redrivers in addition to retimers, and not just instead of. There's also a question of whether more mass-market adoption will drive down prices for "for now very expensive PCB materials". Material costs don't generally drop just because demand increases, and unless production techniques here scale very well, there won't be any real volume scaling either. There's little reason to expect lower-interference motherboard materials to drop noticeably in price. Which means that we instead get more expensive products.

Heck, it's the same with cables - just because USB-C cables are now ubiquitous doesn't mean that they're the same price as old USB 2.0 micro-B cables. They never will be, as there are more materials, more complex production techniques, and both are at commodity volumes already. (You might find USB 2.0-only, low-amperage USB-C cables at close to the same cost as USB 2.0 micro-B cables, but it's not going to hit parity, ever.)
I didn’t know that redrivers are appropriate for higher than PCIe 3 speeds. If so I accept it.
About materials it’s obviously a discussion we’ll have in the future, although for enterprise and/or longer distances “exotic” materials are probably being tried or even used already.
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I didn’t know that redrivers are appropriate for higher than PCIe 3 speeds. If so I accept it.
About materials it’s obviously a discussion we’ll have in the future, although for enterprise and/or longer distances “exotic” materials are probably being tried or even used already.
Yes, obviously they are being used already - that's why the categories of mid/low/ultra low loss exist. This isn't new tech. But that's the thing - if it was new, there might be potential for significant cost reductions. This is established, well known stuff - and it uses more difficult/expensive to source and produce materials, more precise production techniques, or a combination of the two. That means higher cost, in a way that doesn't necessarily shrink with scale.

Server motherboards are ridiculously expensive compared to consumer motherboards. Part of this is of course because of the market segment, support policies and willingness/ability to pay for quality, but a big part is because they have tons of layers (necessary for lots of fast I/O) and because they use lower loss/lower interference materials (again, necessary for lots of fast I/O). Of course there's also more testing, higher grade components in various areas, and generally higher overall standards for reliability and quality - but those more expensive materials are part of those standards.

As was mentioned before, low loss board materials are already in use for high end motherboards - you know, the $500-1000 boards? Sure, those also pack in tons of "premium" features and have very high margins, but they also have 10+ layer PCBs and are much more difficult (and time consuming) to produce. All of which drives up BOM costs, on top of which any margins need to be added.

So, if a full PCIe 5.0 board requires $50 of retimers, another $10-20 in redrivers (which a 4.0 board would of course also need, and might need more of), and $50+ of more expensive materials and manufacturing costs, plus a gross margin of, let's say something low like 30% (which for most industries would barely be enough to cover the costs of running the business), that's a baseline $156 cost increase, before any additional controllers (USB 4.0, nGbE, etc.). That means your 'good enough' baseline $100 PCIe 3.0 motherboard is now a $256 PCIe 5.0 motherboard. Your bargain-basement $80 PCIe 3.0 motherboard would now be a $236 motherboard, unless they choose to restrict PCIe slots further from the socket to slower speeds, but they would still be more expensive as they'd need to ensure 5.0 speeds to the first slot as a minimum. We've already seen this with PCIe 4.0 boards, where baseline costs have jumped $20-50 depending on featuresets, while premium pricing has skyrocketed.
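The arithmetic above is easy to sanity-check. Here's a minimal sketch of the same BOM-plus-margin math (the dollar figures are the post's illustrative guesses, not real BOM data, and the function name is mine):

```python
def retail_increase(retimers: float, redrivers: float, materials: float,
                    margin: float = 0.30) -> float:
    """Retail price increase: added BOM cost with a gross margin applied on top."""
    bom_increase = retimers + redrivers + materials
    return bom_increase * (1 + margin)

# Upper-end figures from the post: $50 retimers, $10-20 redrivers,
# $50+ materials/manufacturing, 30% gross margin.
increase = retail_increase(retimers=50, redrivers=20, materials=50)
print(f"${increase:.0f}")  # -> $156
```

So a $100 PCIe 3.0 board becomes roughly a $256 PCIe 5.0 board under these assumptions, matching the post's estimate.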
 
Joined
Apr 8, 2008
Messages
334 (0.06/day)
Eh? So the fact that we have a bunch of PCIe 4.0 NVMe SSDs, both AMD and Nvidia supporting PCIe 4.0 for their current graphics cards, and new PCIe 4.0 multi-gigabit Ethernet controllers turning up equals no support to you? :roll:
You're aware it takes some time to make these things, right? It's not just an interface you can easily swap out when you feel like it.
And as to your other comment, we are actually starting to see devices that use fewer lanes. Look at Marvell's new AQC113 10Gbps Ethernet controller: it can use a single PCIe 4.0 lane for 10Gbps speeds, instead of four PCIe 3.0 lanes. On top of that, it's in a BGA package instead of FCBGA, which makes it cheaper to produce. So in other words, we are seeing cheaper devices that use fewer lanes, but it doesn't happen overnight.
I totally agree with him; I think you didn't get the main idea either. My English is not good enough, but I hope I can give a better explanation.

We know that PCIe 4.0 devices can take time, and will take time, especially since a lot of companies want to ensure backward compatibility with older platforms. I mean, you can do a 10GbE NIC using a single PCIe 4.0 x1 lane (with bandwidth to spare), but they know most users don't have PCIe 4.0, so they make it x2 for PCIe 3.0.

Also, if you look now, low-cost SSDs use SATA; go one step above and you get NVMe x4 regardless of PCIe version, as the M.2 slot won't support multiple drives. So if you have an SSD where you don't need fast speeds, you're limited to SATA speeds, since moving to NVMe would waste three extra lanes when a single PCIe 4.0 lane can already get you to 1.9GB/s.

The main idea is bifurcation: splitting the PCIe 4.0 lanes across multiple devices, sorta like a PCIe 4.0 switch (like a network switch). But PCIe switches are expensive, thanks mainly to the server market, while they don't need to be that expensive.
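That 1.9GB/s figure checks out. Per-lane PCIe bandwidth follows from the raw transfer rate and the line-coding overhead, and can be sketched as follows (rates and encodings are from the PCIe specs; the helper name is mine):

```python
# Approximate usable bandwidth per PCIe lane, per generation.
# GT/s is the raw transfer rate; line-coding overhead differs:
# 8b/10b for Gen 1/2 (20% overhead), 128b/130b for Gen 3+ (~1.5%).
GENS = {
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def lane_bandwidth_gbs(gen: int, lanes: int = 1) -> float:
    """Usable bandwidth in GB/s for `lanes` lanes of a given PCIe generation."""
    gts, coding = GENS[gen]
    return gts * coding / 8 * lanes  # divide by 8: bits -> bytes

for gen in GENS:
    print(f"PCIe {gen}.0 x1: {lane_bandwidth_gbs(gen):.2f} GB/s")
```

A PCIe 4.0 x1 lane works out to ~1.97GB/s, so a single Gen 4 lane really does match a four-lane Gen 3 link for a 10GbE NIC (10Gbps ≈ 1.25GB/s of payload).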
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
I totally agree with him; I think you didn't get the main idea either. My English is not good enough, but I hope I can give a better explanation.

We know that PCIe 4.0 devices can take time, and will take time, especially since a lot of companies want to ensure backward compatibility with older platforms. I mean, you can do a 10GbE NIC using a single PCIe 4.0 x1 lane (with bandwidth to spare), but they know most users don't have PCIe 4.0, so they make it x2 for PCIe 3.0.

Also, if you look now, low-cost SSDs use SATA; go one step above and you get NVMe x4 regardless of PCIe version, as the M.2 slot won't support multiple drives. So if you have an SSD where you don't need fast speeds, you're limited to SATA speeds, since moving to NVMe would waste three extra lanes when a single PCIe 4.0 lane can already get you to 1.9GB/s.

The main idea is bifurcation: splitting the PCIe 4.0 lanes across multiple devices, sorta like a PCIe 4.0 switch (like a network switch). But PCIe switches are expensive, thanks mainly to the server market, while they don't need to be that expensive.
You're aware there's no physical PCIe x2 interface, right? So you end up with a x4 interface, which to date all PCIe 3.0 10Gbps cards use. Sure, if it's an onboard 10Gbps you could use two lanes, but then it's not a device in the same sense any more.

I didn't design the specifications and as I pointed out elsewhere in this thread, it's a shame PCIe isn't "forward" compatible as well as backwards compatible.

Bifurcation has a lot of limitations though, the biggest one being that people don't understand how it works. This means people buy the "wrong" hardware for their needs and only later find out it won't work as they expected. A slot is a slot is a slot to most people; they don't understand that it might be bifurcated, muxed or shared with an entirely different interface. This is already causing problems on cheaper motherboards, so it's clearly not a solution that makes a lot of sense, and it's best avoided.

PCIe switches were much more affordable, until PLX was bought out by Broadcom who increased the prices by a significant amount. It seems like ASMedia is picking up some of the slack here, but again, this is not a real solution. Yes, it's workable for some things, but you wouldn't want to hang M.2 or 10Gbps Ethernet controllers off of a switch/bridge. So far ASMedia only offers PCIe 3.0 switches, but I'm sure we'll see 4.0 solutions from them in the future.

I don't see any proof of PCIe 4.0 having a slower rollout than PCIe 3.0 had; the issue here is that people have short memories. PCIe 3.0 took just as long, if not longer, for anything outside of graphics cards, and we had two if not three generations of motherboards with a mix of PCIe 3.0 and PCIe 2.0 slots. I mean, Intel didn't even manage to offer more than two SATA 6Gbps ports initially, and that was only on the high-end chipset. Anyone remember SATA Express? It was heavily pushed by Intel when they launched the 9-series chipset as a must-have feature, yet I can't say I ever saw a single SATA Express drive in retail. This seems to be the same play: a tick-box feature that they can brag about, but that won't deliver anything tangible for the consumer. The difference being that PCIe 5.0 might offer some benefits 3-4 years from launch, but by then, Intel's 12th gen will be mostly forgotten.
 
Joined
Sep 1, 2020
Messages
2,117 (1.49/day)
Location
Bulgaria
Server motherboards are ridiculously expensive compared to consumer motherboards
There are many reasons for them to be expensive. Much larger area than micro-ATX and ATX consumer MBs. More sockets, more DIMMs, more full-length PCIe slots, more RJ-45 ports and faster NICs (usually 2×10G + 2×1G, though there are rumors that boards with 25/100G onboard will arrive very soon; today you must buy a discrete card for 100G). And the cost of PCB per square inch is not the only argument for their price... The PCB is maybe around 5-10-20%(?) of the price of the motherboard, depending on the other components. Where is the problem if the price of the PCB (only) increases by 20-30% because of 1-2 more layers or more advanced materials?
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
There are many reasons for them to be expensive. Much larger area than micro-ATX and ATX consumer MBs. More sockets, more DIMMs, more full-length PCIe slots, more RJ-45 ports and faster NICs (usually 2×10G + 2×1G, though there are rumors that boards with 25/100G onboard will arrive very soon; today you must buy a discrete card for 100G). And the cost of PCB per square inch is not the only argument for their price... The PCB is maybe around 5-10-20%(?) of the price of the motherboard, depending on the other components. Where is the problem if the price of the PCB (only) increases by 20-30% because of 1-2 more layers or more advanced materials?
The thing is, most of what you describe here is very cheap. PCIe slot hardware costs a few cents each in the quantities motherboard makers buy them. Same for DIMM sockets. CPU Sockets are a few dollars, true, but nothing huge. Mechanical components are dirt cheap as they're relatively easily mass produced plastic/metal constructions. The only thing you mention with any real cost is high bandwidth NICs, and not all server boards have those integrated.

What, then, makes having that stuff expensive? Making it work properly. Trace quality, signal integrity, routing complexity. All of which has to do with the PCB, its materials, its thickness, its layout and design. And no, consumer boards don't need as many lanes or DIMMs as server boards - that's why consumer boards have always been cheaper, typically at 1/5th to 1/10th the price. The problem is that the limitation for servers used to be quantity: how good a board do you need to stuff it full of I/O? But now, with PCIe 4.0, and even more 5.0, it instead becomes core functionality: what quality of board do you need to make the essential features work at all? Where you previously needed fancy materials to accommodate tons of PCIe lanes and slots, you now need it to make one single slot work. See how that is a problem? See how that raises costs?

And I have no idea where you're getting your "5-10-20%" number from, but the only way that is even remotely accurate is if you only think of the base material costs and exclude design and production entirely from what counts as PCB costs. Which would be rather absurd. The entire point here is that these fast I/O standards drive up the baseline cost of making stuff work at all. Plus, you're forgetting scale: A $100-200-300 baseline price increase on server boards would be relatively easy for most buyers to absorb, and would be a reasonably small percentage of the base price. A $100-150 increase in baseline price for consumer motherboards would come close to killing the diy PC market. And margins in the consumer space are much smaller than in the server world, meaning all costs will get directly passed on to buyers.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
Dug up a few older news posts here...
I suggest reading the comments. If some of these people had gotten their ideas through, we'd still be using AGP...

Also, it took nearly a year for PCIe 3.0 graphics cards to arrive after there was chipset support from Intel. Another six months or so later, AMD had its first chipset. It took three years from Intel's announcement for the first PCIe 3.0 NVMe SSD controller to appear...
So yeah, PCIe 4.0 isn't slow in terms of rollout.
 
Joined
Sep 1, 2020
Messages
2,117 (1.49/day)
Location
Bulgaria
The thing is, most of what you describe here is very cheap. PCIe slot hardware costs a few cents each in the quantities motherboard makers buy them. Same for DIMM sockets. CPU Sockets are a few dollars, true, but nothing huge. Mechanical components are dirt cheap as they're relatively easily mass produced plastic/metal constructions. The only thing you mention with any real cost is high bandwidth NICs, and not all server boards have those integrated.

What, then, makes having that stuff expensive? Making it work properly. Trace quality, signal integrity, routing complexity. All of which has to do with the PCB, its materials, its thickness, its layout and design. And no, consumer boards don't need as many lanes or DIMMs as server boards - that's why consumer boards have always been cheaper, typically at 1/5th to 1/10th the price. The problem is that the limitation for servers used to be quantity: how good a board do you need to stuff it full of I/O? But now, with PCIe 4.0, and even more 5.0, it instead becomes core functionality: what quality of board do you need to make the essential features work at all? Where you previously needed fancy materials to accommodate tons of PCIe lanes and slots, you now need it to make one single slot work. See how that is a problem? See how that raises costs?

And I have no idea where you're getting your "5-10-20%" number from, but the only way that is even remotely accurate is if you only think of the base material costs and exclude design and production entirely from what counts as PCB costs. Which would be rather absurd. The entire point here is that these fast I/O standards drive up the baseline cost of making stuff work at all. Plus, you're forgetting scale: A $100-200-300 baseline price increase on server boards would be relatively easy for most buyers to absorb, and would be a reasonably small percentage of the base price. A $100-150 increase in baseline price for consumer motherboards would come close to killing the diy PC market. And margins in the consumer space are much smaller than in the server world, meaning all costs will get directly passed on to buyers.
Costs of design, software (firmware, drivers, BIOSes and their updates) and warranty support are included in the prices of all motherboards. If the design costs of motherboards with PCIe 5.0 are higher only because of PCIe 5.0 and add to the sale price, I agree with your argument.
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
Costs of design, software (firmware, drivers, BIOSes and their updates) and warranty support are included in the prices of all motherboards. If the design costs of motherboards with PCIe 5.0 are higher only because of PCIe 5.0 and add to the sale price, I agree with your argument.
Thanks :) That's exactly the point: as signal buses increase in bandwidth, they become more sensitive to interference and signal degradation, requiring both much tighter trace routing as well as better shielding, less signal path resistance/impedance, and ultimately ancillary components such as redrivers and retimers are necessary for any kind of reasonable signal length.

PCIe 3.0 turned out to be remarkably resilient in many ways, allowing for large motherboards with long trace lengths without anything especially fancy or exotic involved. Heck, like LTT demonstrated, with decent quality riser cables you can daisy-chain several meters of PCIe 3.0 (with many connectors in the signal path, which is even more remarkable) without adverse effects in some scenarios. PCIe 4.0 changed that dramatically, with not a single commercially available 3.0 riser cable working reliably at 4.0 speeds, and even ATX motherboards requiring thicker boards (more layers) and redrivers (which are essentially in-line amplifiers) to ensure a good signal for that (relatively short) data path. That change really can't be overstated. And now for 5.0 even that isn't sufficient, requiring possibly even more PCB layers, possibly higher quality PCB materials, and more expensive retimers (possibly in addition to redrivers for the furthest slots).

I hope that PCIe becomes a differentiated featureset like USB is - not that consumers need 5.0 at all for the next 5+ years, but when we get it, I hope it's limited in scope to useful applications (likely SSDs first, though the real-world differences are likely to be debatable there as well). But just like we still use USB 2.0 to connect our keyboards, mice, printers, DACs, and all the other stuff that doesn't need bandwidth, I hope the industry has the wherewithal to not push 5.0 and 4.0 where it isn't providing a benefit. And tbh, I don't want PCIe 4.0-packed chipsets to trickle down either. Some connectivity, sure, for integrated components like NICs and potentially fast storage, but the more slots and components are left at 3.0 (at least until we have reduced lane count 4.0 devices) the better in terms of keeping motherboard costs reasonable.

Of course, this might all result in high-end chipsets becoming increasingly niche, as the benefits delivered by them matter less and less to most people. Which in turn will likely lead to feature gatekeeping from manufacturers to allow for easier upselling (i.e. restricting fast SSDs to only the most expensive chipsets). The good thing about that is that the real-world consequences of choosing lower end platforms in the future will be very, very small. We're already seeing this today in how B550 is generally identical to X570 in any relevant metric, but is cheaper and can run with passive cooling. Sure, there are on-paper deficits like only a single 4.0 m.2, but ... so? 3.0 really isn't holding anyone back, and won't realistically be for the useful lifetime of the platform, except in very niche use cases (in which case B550 really wouldn't make sense anyhow).
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
Thanks :) That's exactly the point: as signal buses increase in bandwidth, they become more sensitive to interference and signal degradation, requiring both much tighter trace routing as well as better shielding, less signal path resistance/impedance, and ultimately ancillary components such as redrivers and retimers are necessary for any kind of reasonable signal length.

PCIe 3.0 turned out to be remarkably resilient in many ways, allowing for large motherboards with long trace lengths without anything especially fancy or exotic involved. Heck, like LTT demonstrated, with decent quality riser cables you can daisy-chain several meters of PCIe 3.0 (with many connectors in the signal path, which is even more remarkable) without adverse effects in some scenarios. PCIe 4.0 changed that dramatically, with not a single commercially available 3.0 riser cable working reliably at 4.0 speeds, and even ATX motherboards requiring thicker boards (more layers) and redrivers (which are essentially in-line amplifiers) to ensure a good signal for that (relatively short) data path. That change really can't be overstated. And now for 5.0 even that isn't sufficient, requiring possibly even more PCB layers, possibly higher quality PCB materials, and more expensive retimers (possibly in addition to redrivers for the furthest slots).

I hope that PCIe becomes a differentiated featureset like USB is - not that consumers need 5.0 at all for the next 5+ years, but when we get it, I hope it's limited in scope to useful applications (likely SSDs first, though the real-world differences are likely to be debatable there as well). But just like we still use USB 2.0 to connect our keyboards, mice, printers, DACs, and all the other stuff that doesn't need bandwidth, I hope the industry has the wherewithal to not push 5.0 and 4.0 where it isn't providing a benefit. And tbh, I don't want PCIe 4.0-packed chipsets to trickle down either. Some connectivity, sure, for integrated components like NICs and potentially fast storage, but the more slots and components are left at 3.0 (at least until we have reduced lane count 4.0 devices) the better in terms of keeping motherboard costs reasonable.

Of course, this might all result in high-end chipsets becoming increasingly niche, as the benefits delivered by them matter less and less to most people. Which in turn will likely lead to feature gatekeeping from manufacturers to allow for easier upselling (i.e. restricting fast SSDs to only the most expensive chipsets). The good thing about that is that the real-world consequences of choosing lower end platforms in the future will be very, very small. We're already seeing this today in how B550 is generally identical to X570 in any relevant metric, but is cheaper and can run with passive cooling. Sure, there are on-paper deficits like only a single 4.0 m.2, but ... so? 3.0 really isn't holding anyone back, and won't realistically be for the useful lifetime of the platform, except in very niche use cases (in which case B550 really wouldn't make sense anyhow).
The only issue I have with the B550 chipset, and something that I hope won't remain for the next generation, is the total PCIe lane count, which is too low for a modern platform. It made for too many compromises on the kinds of boards the chipset was used on.
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
The only issue I have with the B550 chipset, and something that I hope won't remain for the next generation, is the total PCIe lane count, which is too low for a modern platform. It made for too many compromises on the kinds of boards the chipset was used on.
I guess a logical extension of high-end chipsets moving out of the needs of mainstream users might be the diversification of the mid-range. I.e. would there (at least in a future with 3 coexisting PCIe standards) be room for an x60 chipset tier? Intel kind of has that already (though the level of differentiation is for now rather minuscule). I don't have an issue with B550 as it is, simply because it fits the needs of the vast majority of users by allowing for at least two m.2 slots, 2.5GbE, plus whatever the CPU puts out (which is rather a lot for AMD these days). But I can clearly see how it starts being limiting if you want 10GbE or USB4 (though I would expect USB4 to be integrated into either CPU or chipset in time, that will likely take a couple of generations), and the option for a third m.2 slot would be nice too (without doing what the Aorus Master B550 does and splitting two m.2s off the PEG slot). But the death of multi-GPU and the increasing level of integration of features into motherboards over the past decades has resulted in the vast majority of users needing ever less expansion (and allowing for smaller form factors to reach feature parity).

I think I like the idea of a 4-tier chipset system, instead of the current 3-tier one. Going off of AMD's current naming, something like
x20 - barest minimum
x50 - does what most people need while keeping costs reasonable
x70 - fully featured, for more demanding users
x90 - all the bells and whistles, tons of I/O

In a system like that, you'd get whatever PCIe is provided by the CPU (likely to stay at 20 lanes, though 5.0, 4.0 or a mix?), plus varying levels of I/O from chipsets (though x20 tiers might for example not get 5.0 support. I would probably even advocate for that in the x50 tier to keep prices down, tbh.) x20 chipset is low lane count 3.0, x50 is high(er) lane count 3.0, x70 has plenty of lanes in a mix of 4.0 and 3.0, and x90 goes all 4.0 and might even throw in a few 5.0 lanes - for the >$500 motherboard crowd. I guess the downside of a system like this would be shifting large groups of customers into "less premium" segments, which they might not like the idea of. But it would sure make for more consumer choice and more opportunities for smartly configured builds.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
I guess a logical extension of high-end chipsets moving out of the needs of mainstream users might be the diversification of the mid-range. I.e. would there (at least in a future with 3 coexisting PCIe standards) be room for an x60 chipset tier? Intel kind of has that already (though the level of differentiation is for now rather minuscule). I don't have an issue with B550 as it is, simply because it fits the needs of the vast majority of users by allowing for at least two m.2 slots, 2.5GbE, plus whatever the CPU puts out (which is rather a lot for AMD these days). But I can clearly see how it starts being limiting if you want 10GbE or USB4 (though I would expect USB4 to be integrated into either CPU or chipset in time, that will likely take a couple of generations), and the option for a third m.2 slot would be nice too (without doing what the Aorus Master B550 does and splitting two m.2s off the PEG slot). But the death of multi-GPU and the increasing level of integration of features into motherboards over the past decades has resulted in the vast majority of users needing ever less expansion (and allowing for smaller form factors to reach feature parity).

I think I like the idea of a 4-tier chipset system, instead of the current 3-tier one. Going off of AMD's current naming, something like
x20 - barest minimum
x50 - does what most people need while keeping costs reasonable
x70 - fully featured, for more demanding users
x90 - all the bells and whistles, tons of I/O

In a system like that, you'd get whatever PCIe is provided by the CPU (likely to stay at 20 lanes, though 5.0, 4.0 or a mix?), plus varying levels of I/O from chipsets (though x20 tiers might for example not get 5.0 support. I would probably even advocate for that in the x50 tier to keep prices down, tbh.) x20 chipset is low lane count 3.0, x50 is high(er) lane count 3.0, x70 has plenty of lanes in a mix of 4.0 and 3.0, and x90 goes all 4.0 and might even throw in a few 5.0 lanes - for the >$500 motherboard crowd. I guess the downside of a system like this would be shifting large groups of customers into "less premium" segments, which they might not like the idea of. But it would sure make for more consumer choice and more opportunities for smartly configured builds.
My issue is more with boards like this
And this

Once the manufacturers are trying to make high-end tiers with mid-range chipsets, it just doesn't add up.

Both of those boards end up "limiting" the GPU slot to x8 (as you already pointed out), as they need to borrow the other eight lanes or you can't use half of the features on the boards.
Consumers who aren't aware of this and end up pairing such a board with an APU are going to be in for a rude awakening, as the CPU won't have enough PCIe lanes to enable some of the features the board has. It's even worse in these cases, as Gigabyte didn't provide a block diagram, so it's not really clear which interfaces are shared.

This might not be the chipset vendor's fault as such, but 10 PCIe lanes is not enough in these examples. It would be fine on most mATX or mini-ITX boards, though.

We're already starting to see high-end boards with four M.2 slots (did Asus have one with five even?) and it seems to be the storage interface of the foreseeable future when it comes to desktop PCs as U.2 never happened in the desktop space.

I think we kind of already have the x90 for AMD, but that changes the socket and moves you to HEDT...
My issue is more that the gap between the current high-end and current mid-range is a little bit too wide, but maybe we'll see that fixed next generation. At least the B550 is an improvement on B450.

Judging by this news post, we're looking at 24 usable PCIe lanes from the CPU, so the USB4 ones might be allocated to NVMe duty on cheaper boards, depending on the cost of USB4 host controllers. Obviously this is still PCIe 4.0, but I guess some of those lanes are likely to be changed to PCIe 5.0 at some point.
Although we've only seen the Z690 chipset from Intel, it seems like they're kind of going down the route you're suggesting, since they have PCIe 5.0 in the CPU, as well as PCIe 4.0, both for an SSD and the DMI interface, but then have PCIe 4.0 and PCIe 3.0 in the chipset.

PCIe 3.0 is still more than fast enough for the kind of Wi-Fi solutions we get, since it appears no-one is really doing 3x3 cards any more. It's obviously still good enough for almost anything you can slot in, apart from 10Gbps Ethernet (if we assume x1 interface here) and high-end SSDs, but there's little else in a consumer PC that can even begin to take advantage of a faster interface right now. As we've seen from the regular PCIe graphics card bandwidth tests done here, PCIe 4.0 is only just about making a difference on the very highest-end of cards and barely that. Maybe this will change when we get to MCM type GPUs, but who knows.
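To put rough numbers on the x1 assumption above, here's a quick back-of-the-envelope sketch (per-lane rates from the PCIe specs; real links lose a bit more to protocol overhead, so treat these as upper bounds):

```python
# Usable per-lane PCIe throughput: transfer rate (GT/s) x encoding efficiency.
# Gen 3/4/5 all use 128b/130b encoding.
GT_PER_S = {3: 8, 4: 16, 5: 32}
EFFICIENCY = 128 / 130

def link_gbps(gen: int, lanes: int = 1) -> float:
    """Usable throughput in Gbit/s for a link `lanes` wide."""
    return GT_PER_S[gen] * EFFICIENCY * lanes

print(f"PCIe 3.0 x1: {link_gbps(3):.2f} Gbit/s")  # ~7.88, short of 10GbE line rate
print(f"PCIe 4.0 x1: {link_gbps(4):.2f} Gbit/s")  # ~15.75, plenty for 10GbE
```

Which is exactly why a 10GbE controller on a 3.0 x1 link can't run at full line rate, while a 4.0 x1 link handles it with room to spare.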

Anyhow, I think we more or less agree on this, and hopefully this is something AMD and Intel also figure out, instead of making differentiators that lock out entire feature sets just to upsell to a much higher tier.

Edit: Just spotted this, which suggests AMD will have AM5 CPU SKUs with 20 or 28 PCIe lanes in total, and that those three SKUs will have an integrated GPU as well.
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
My issue is more with boards like this
And this

Once the manufacturers are trying to make high-end tiers with mid-range chipsets, it just doesn't add up.

Both of those boards end up "limiting" the GPU slot to x8 (as you already pointed out), as they need to borrow the other eight lanes or you can't use half of the features on the boards.
Consumers who aren't aware of this and end up pairing such a board with an APU are going to be in for a rude awakening, as the CPU won't have enough PCIe lanes to enable some of the features the board has. It's even worse in these cases, as Gigabyte didn't provide a block diagram, so it's not really clear which interfaces are shared.

This might not be the chipset vendor's fault as such, but 10 PCIe lanes is not enough in these examples. It would be fine on most mATX or mini-ITX boards, though.

We're already starting to see high-end boards with four M.2 slots (did Asus have one with five even?) and it seems to be the storage interface of the foreseeable future when it comes to desktop PCs as U.2 never happened in the desktop space.

I think we kind of already have the x90 for AMD, but that changes the socket and moves you to HEDT...
My issue is more that the gap between the current high-end and current mid-range is a little bit too wide, but maybe we'll see that fixed next generation. At least the B550 is an improvement on B450.

Judging by this news post, we're looking at 24 usable PCIe lanes from the CPU, so the USB4 ones might be allocated to NVMe duty on cheaper boards, depending on the cost of USB4 host controllers. Obviously this is still PCIe 4.0, but I guess some of those lanes are likely to be changed to PCIe 5.0 at some point.
Although we've only seen the Z690 chipset from Intel, it seems like they're kind of going down the route you're suggesting, since they have PCIe 5.0 in the CPU, as well as PCIe 4.0, both for an SSD and the DMI interface, but then have PCIe 4.0 and PCIe 3.0 in the chipset.

PCIe 3.0 is still more than fast enough for the kind of Wi-Fi solutions we get, since it appears no-one is really doing 3x3 cards any more. It's obviously still good enough for almost anything you can slot in, apart from 10Gbps Ethernet (if we assume x1 interface here) and high-end SSDs, but there's little else in a consumer PC that can even begin to take advantage of a faster interface right now. As we've seen from the regular PCIe graphics card bandwidth tests done here, PCIe 4.0 is only just about making a difference on the very highest-end of cards and barely that. Maybe this will change when we get to MCM type GPUs, but who knows.

Anyhow, I think we more or less agree on this, and hopefully this is something AMD and Intel also figure out, instead of making differentiators that lock out entire feature sets just to upsell to a much higher tier.

Edit: Just spotted this, which suggests AMD will have AM5 CPU SKUs with 20 or 28 PCIe lanes in total, and that those three SKUs will have an integrated GPU as well.
Yeah, I think we're pretty much on the same page - I hadn't considered the implications of using PCIe 3.0 APUs in these bifurcated multi-m.2 boards (I guess due to APUs in previous generations being significantly slower than CPUs, as well as my lack of awareness of these boards doing this - I literally spotted that while writing the previous post!), and that is indeed a potential problem that users won't know to avoid. For most of those it probably won't represent an actual bottleneck, at least not today, but it might with their next GPU upgrade.

My thinking with the chipset lineup was more along the lines of moving the current 70 tier to a 90 tier (including tacking on those new features as they arrive - all the USB4 you'd want, etc.), with the 70 tier taking on the role of that intermediate, "better than 50 but doesn't have all the bells and whistles" type of thing. Just enough PCIe to provide a "people's high end" - which of course could also benefit platform holders in marketing their "real" high-end for people with infinite budgets. Of course HEDT is another can of worms entirely, but then the advent of 16-core MSDT CPUs has essentially killed HEDT for anything but actual workstation use - and thankfully AMD has moved their TR chipset naming to another track too :)

Though IMO 3 m.2 slots is a reasonable offering, with more being along the lines of older PCs having 8+ SATA ports - sure, some people used all of them, but the vast majority even back then used 2 or 3. Two SSDs is pretty common, but three is rather rare, and four is reserved for the people doing consecutive builds and upgrades over long periods and carrying over parts - that's a pretty small minority of PC users (or even builders). Two is too low given the capacity restrictions this brings with it, but still acceptable on lower end builds (and realistically most users will only ever use one or two). Five? I don't see anyone but the most dedicated upgrade fiends actually keeping five m.2 drives in service in a single system (unless it's an all-flash storage server/NAS, in which case you likely have an x16 AIC for them anyhow).
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
Yeah, I think we're pretty much on the same page - I hadn't considered the implications of using PCIe 3.0 APUs in these bifurcated multi-m.2 boards (I guess due to APUs in previous generations being significantly slower than CPUs, as well as my lack of awareness of these boards doing this - I literally spotted that while writing the previous post!), and that is indeed a potential problem that users won't know to avoid. For most of those it probably won't represent an actual bottleneck, at least not today, but it might with their next GPU upgrade.

My thinking with the chipset lineup was more along the lines of moving the current 70 tier to a 90 tier (including tacking on those new features as they arrive - all the USB4 you'd want, etc.), with the 70 tier taking on the role of that intermediate, "better than 50 but doesn't have all the bells and whistles" type of thing. Just enough PCIe to provide a "people's high end" - which of course could also benefit platform holders in marketing their "real" high-end for people with infinite budgets. Of course HEDT is another can of worms entirely, but then the advent of 16-core MSDT CPUs has essentially killed HEDT for anything but actual workstation use - and thankfully AMD has moved their TR chipset naming to another track too :)

Though IMO 3 m.2 slots is a reasonable offering, with more being along the lines of older PCs having 8+ SATA ports - sure, some people used all of them, but the vast majority even back then used 2 or 3. Two SSDs is pretty common, but three is rather rare, and four is reserved for the people doing consecutive builds and upgrades over long periods and carrying over parts - that's a pretty small minority of PC users (or even builders). Two is too low given the capacity restrictions this brings with it, but still acceptable on lower end builds (and realistically most users will only ever use one or two). Five? I don't see anyone but the most dedicated upgrade fiends actually keeping five m.2 drives in service in a single system (unless it's an all-flash storage server/NAS, in which case you likely have an x16 AIC for them anyhow).
The issue is that the APUs only have eight lanes in total, so there's nothing to bifurcate...
It means that in the case of the Vision board, the Thunderbolt ports won't work. Not a big deal...

Personally I have generally gone for the upper mid-range of boards, as they used to have a solid feature set. However, this seems to have changed over the past 2-3 generations of boards, and you now have to go to the lower high-end tiers to get some features. I needed an x4 slot for my 10Gbps card, and at the point I built this system, that wasn't a common feature on cheaper models for some reason. Had we been at the point we are now, with PCIe x1 10Gbps cards, that wouldn't have been a huge deal, except for the fact that I would've had to buy a new card...
I just find the current lineups from many of the board makers to be odd, although the X570S SKUs fixed some of the weird feature advantages of B550 boards, like 2.5Gbps Ethernet.
Looking at Intel, they seemingly skipped making an H570 version, but it looks like there will be an H670. So this also makes it harder for the board makers to continue their regular differentiation between various chipsets and SKUs. Anyhow, we'll have to wait and see what comes next year, since we have zero control over any of this.

I think NVMe will be just like SATA in a few years, where people upgrade their system and carry over drives from the old build. I mean, I used to have 4-5 hard drives in my system back in the day, not because I needed to, but because I just kept the old drives around as I had space in the case and was too lazy to copy the files over...
Anyhow, three M.2 slots is plenty right now, but in a couple of years' time it might not be. That said, I would prefer them as board edge connectors that go over the side of the motherboard, with mounting screws on the case instead. This would easily allow for four or five drives along the edge of the board, close to the chipset and with better cooling for the drives than the current implementation. However, this would require cooperation with case makers and a slight redesign of most cases. Maybe it would be possible to do some simple retrofit mounting bar for them as well, assuming there's enough space in front of the motherboard in the case. Some notebooks are already doing something very similar.
I guess we'll just have to wait and see how things develop. On the plus side, I guess with PCIe 5.0, we'd only need a x8 card for four PCIe 4.0 NVMe drives :p
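The :p aside, the arithmetic on that last point actually works out exactly - a minimal sketch:

```python
# Sanity check: a PCIe 5.0 x8 uplink vs four PCIe 4.0 x4 drives behind it.
EFFICIENCY = 128 / 130  # 128b/130b encoding, shared by gen 3/4/5

def link_gbytes(gt_per_s: int, lanes: int) -> float:
    """Usable GB/s: transfer rate x encoding x width, divided by 8 bits/byte."""
    return gt_per_s * EFFICIENCY * lanes / 8

uplink = link_gbytes(32, 8)      # PCIe 5.0 x8 card slot
drives = 4 * link_gbytes(16, 4)  # four PCIe 4.0 x4 SSDs on the card
print(f"{uplink:.1f} vs {drives:.1f} GB/s")  # both ~31.5 -> no oversubscription
```

Since each PCIe generation doubles the per-lane rate, an x8 gen 5 link carries exactly the same bandwidth as four x4 gen 4 links.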
 
Joined
Sep 1, 2020
Messages
2,117 (1.49/day)
Location
Bulgaria
The only issue I have with the B550 chipset, and something that I hope won't remain for the next generation, is the total PCIe lane count, which is too low for a modern platform. It made for too many compromises on the kinds of boards the chipset was used on.

Image: videocardz.

Read the row with the number of PCIe lanes. I suppose the platform with 28 PCIe lanes is the X670. The other two will have 20 lanes. I don't know, but maybe this is the number of lanes from the chipsets, or from the CPU?
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w

Image: videocardz.

Read the row with the number of PCIe lanes. I suppose the platform with 28 PCIe lanes is the X670. The other two will have 20 lanes. I don't know, but maybe this is the number of lanes from the chipsets, or from the CPU?
That's CPU lanes, not chipset lanes; it says as much at the top of that picture. The current CPUs already have 24 lanes: 16 for PEG, four for NVMe and four for the chipset interconnect, so 20 usable.
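A hypothetical sketch of how the quoted 24 usable AM5 lanes could break down - the split is a guess based on this thread, not anything AMD has confirmed:

```python
# Hypothetical AM5 lane budget matching the "24 usable" figure discussed above;
# the exact allocation is speculation, not a confirmed AMD spec.
am5_usable = {
    "PEG (x16 slot)": 16,
    "NVMe": 4,
    "extra (USB4 or second NVMe)": 4,
}
chipset_link = 4  # interconnect lanes, not counted as "usable"

total_usable = sum(am5_usable.values())
print(total_usable, total_usable + chipset_link)  # 24 usable, 28 total
```

That would also line up neatly with the 20- and 28-lane SKUs mentioned earlier in the thread, with the 20-lane parts presumably dropping the extra x4.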
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The issue is that the APUs only have eight lanes in total, so there's nothing to bifurcate...
It means that in the case of the Vision board, the Thunderbolt ports won't work. Not a big deal...

Personally I have generally gone for the upper mid-range of boards, as they used to have a solid feature set. However, this seems to have changed over the past 2-3 generations of boards, and you now have to go to the lower high-end tiers to get some features. I needed an x4 slot for my 10Gbps card, and at the point I built this system, that wasn't a common feature on cheaper models for some reason. Had we been at the point we are now, with PCIe x1 10Gbps cards, that wouldn't have been a huge deal, except for the fact that I would've had to buy a new card...
I just find the current lineups from many of the board makers to be odd, although the X570S SKUs fixed some of the weird feature advantages of B550 boards, like 2.5Gbps Ethernet.
Looking at Intel, they seemingly skipped making an H570 version, but it looks like there will be an H670. So this also makes it harder for the board makers to continue their regular differentiation between various chipsets and SKUs. Anyhow, we'll have to wait and see what comes next year, since we have zero control over any of this.

I think NVMe will be just like SATA in a few years, where people upgrade their system and carry over drives from the old build. I mean, I used to have 4-5 hard drives in my system back in the day, not because I needed to, but because I just kept the old drives around as I had space in the case and was too lazy to copy the files over...
Anyhow, three M.2 slots is plenty right now, but in a couple of years' time it might not be. That said, I would prefer them as board edge connectors that go over the side of the motherboard, with mounting screws on the case instead. This would easily allow for four or five drives along the edge of the board, close to the chipset and with better cooling for the drives than the current implementation. However, this would require cooperation with case makers and a slight redesign of most cases. Maybe it would be possible to do some simple retrofit mounting bar for them as well, assuming there's enough space in front of the motherboard in the case. Some notebooks are already doing something very similar.
I guess we'll just have to wait and see how things develop. On the plus side, I guess with PCIe 5.0, we'd only need a x8 card for four PCIe 4.0 NVMe drives :p
APUs have had 16 lanes since ... I want to say the 3000-series, but it might have been the 4000-series where that changed. But they have 16+4 now just like the CPUs. So the only issue will be the reduction in speed from 4.0 to 3.0.

I agree that we're in a weird place with motherboard lineups, I guess that comes from being in a transition period between various standards. And of course X570 boards mostly arrived too early to really implement 2.5GbE, while B550 was late enough that nearly every board has it, which is indeed pretty weird. But I think that will shake out over time - there are always weird things due to timing and component availability.

I think Intel skipped H570 simply because it would have been the same as H470 and would then either have required rebranding a heap of motherboards (looks bad) or making new ones (expensive) for no good reason. Makes sense in that context. But Intel having as many chipset types as they do has always been a bit redundant - they usually have five, right? That's quite a lot, and that's across only PCIe 2.0 and 3.0. Even if one is essentially 'Zx70 but for business with no OC' that still leaves a pretty packed field of four. Hopefully that starts making a bit more sense in coming generations as well.

But as I said above, I don't think the "I carry over all my drives" crowd is particularly notable. On these forums? Sure. But most people don't even upgrade; they just sell (or god forbid, throw out) their old PC and buy/build a new one. A few will keep the drives when doing so, but not most. And the biggest reason for multiple SSDs is capacity, which gets annoying pretty soon - there's a limit to how many 256-512GB SSDs you can have in a system and not go slightly insane. People tend to consolidate over time, and either sell/give away older drives or stick them in enclosures for external use. So, as I said, there'll always be a niche who want 5+ m.2s, but I don't think they're worth designing even upper mid-range motherboards around - they're too small a group.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,766 (2.33/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
APUs have had 16 lanes since ... I want to say the 3000-series, but it might have been the 4000-series where that changed. But they have 16+4 now just like the CPUs. So the only issue will be the reduction in speed from 4.0 to 3.0.

I agree that we're in a weird place with motherboard lineups, I guess that comes from being in a transition period between various standards. And of course X570 boards mostly arrived too early to really implement 2.5GbE, while B550 was late enough that nearly every board has it, which is indeed pretty weird. But I think that will shake out over time - there are always weird things due to timing and component availability.

I think Intel skipped H570 simply because it would have been the same as H470 and would then either have required rebranding a heap of motherboards (looks bad) or making new ones (expensive) for no good reason. Makes sense in that context. But Intel having as many chipset types as they do has always been a bit redundant - they usually have five, right? That's quite a lot, and that's across only PCIe 2.0 and 3.0. Even if one is essentially 'Zx70 but for business with no OC' that still leaves a pretty packed field of four. Hopefully that starts making a bit more sense in coming generations as well.

But as I said above, I don't think the "I carry over all my drives" crowd is particularly notable. On these forums? Sure. But most people don't even upgrade; they just sell (or god forbid, throw out) their old PC and buy/build a new one. A few will keep the drives when doing so, but not most. And the biggest reason for multiple SSDs is capacity, which gets annoying pretty soon - there's a limit to how many 256-512GB SSDs you can have in a system and not go slightly insane. People tend to consolidate over time, and either sell/give away older drives or stick them in enclosures for external use. So, as I said, there'll always be a niche who want 5+ m.2s, but I don't think they're worth designing even upper mid-range motherboards around - they're too small a group.
Ah, my bad, I guess I haven't stayed on top of the changes :oops:
I just read some other thread here in the forums the other day with people having this exact issue, hence why I didn't realise that it had changed.

My board has a Realtek 2.5Gbit Ethernet chip, but I guess a lot of board makers didn't want to get bad reviews for not having Intel Ethernet on higher-end boards, as it's apparently the gold standard for some reason. Yes, I obviously know the background to this, but by now there's really no difference between the two companies in terms of performance.
But yes, you're right, timing is always tricky with these things, as there's always something new around the corner and due to company secrecy, it's rare that the stars align and everything launches around the same time.

It would seem I was wrong about the H570 too; it appears to be a thing, but it seems to have very limited appeal with the board makers, which is why I missed it. Gigabyte and MSI didn't even bother making a single board with the chipset. It does look identical to the H470, with the only difference being that H570 boards have a PCIe 4.0 x4 M.2 interface connected to the CPU. The B-series chipsets seem to have largely replaced the H series, mostly due to cost I would guess, and the fact that neither supports CPU overclocking.

Well, these days it appears to be six, as they made the W line for workstations as well; I guess it replaced some C-version chipset for the Xeons. Intel has really complicated things too much.

I guess I've been spending too much time around ARM chips and haven't really followed the mid-range and low-end PC stuff enough... The downside of work and being more interested in the higher end of PC stuff...

You might very well be right about the M.2 drives, although I still hope we can get better board placement and stop having drives squeezed in under the graphics card.
I don't have any SSD under 1TB... I skipped all the smaller M.2 drives, as I had some smaller SATA drives and it was just frustrating running out of space. I also ended up getting to a point with my laptop where the Samsung SATA drive got full enough to slow down the entire computer. Duplicated the drive, extended the partition, and it was like having a new laptop. I obviously knew it was an issue, but I didn't expect an 80%+ full drive to be quite as detrimental to performance as it was.
Can't wait for the day that 4TB NVMe drives or whatever comes next are the norm :p
 
Last edited:

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.07/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
APUs have had 16 lanes since ... I want to say the 3000-series, but it might have been the 4000-series where that changed. But they have 16+4 now just like the CPUs. So the only issue will be the reduction in speed from 4.0 to 3.0.
It was the 4000 series, which wasn't available at retail - so realistically, it's only just now with the 5000 series that it went above x8 for most people
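The practical effect of the 4.0-to-3.0 drop is just a halving of per-lane bandwidth, which some back-of-the-envelope math makes concrete (this accounts for line encoding only; real-world throughput is a bit lower still):

```python
# Rough per-direction PCIe throughput. Gen3 runs 8 GT/s per lane with
# 128b/130b encoding; Gen4 and Gen5 double the transfer rate each step.
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane
ENCODING = 128 / 130                       # 128b/130b line encoding (Gen3+)

def gbps(gen, lanes):
    """Approximate usable bandwidth in GB/s for `lanes` lanes of PCIe `gen`."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8  # bits -> bytes

print(f"Gen3 x16: {gbps(3, 16):.1f} GB/s")  # ~15.8 GB/s
print(f"Gen4 x8 : {gbps(4, 8):.1f} GB/s")   # ~15.8 GB/s - same as Gen3 x16
print(f"Gen4 x4 : {gbps(4, 4):.1f} GB/s")   # ~7.9 GB/s (a CPU M.2 slot)
```

So a Gen4 card dropped to Gen3 x16 still gets the same bandwidth as Gen4 x8, which is why the APU limitation rarely matters in practice.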
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,807 (6.54/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Only GPUs, LAN adapters and storage devices can utilize PCIe 4.0 bandwidth currently. GPUs only just started using it as of 2019, and even now with the 6900 XT and 3090 Ti, they still don't use enough PCIe 4.0 bandwidth to need it...
 

Mussels

Freshwater Moderator
Staff member
Only GPUs, LAN adapters and storage devices can utilize PCIe 4.0 bandwidth currently. GPUs only just started using it as of 2019, and even now with the 6900 XT and 3090 Ti, they still don't use enough PCIe 4.0 bandwidth to need it...
After seeing the review of the 20 Gbps USB adapter TPU just put up, I can see it being really handy for motherboards to slap in more 10 and 20 Gbps USB-C ports - we're gonna need it
 
Joined
May 2, 2017
Messages
7,762 (2.94/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
After seeing the review of the 20 Gbps USB adapter TPU just put up, I can see it being really handy for motherboards to slap in more 10 and 20 Gbps USB-C ports - we're gonna need it
Are we, though? A B550 platform has integrated controllers for six 10 Gbps ports (four from the CPU, two from the chipset). It should be possible to pair those off into 20 Gbps ports. Do you really need more than three of those at any given time? I would be surprised to hear of any significant user base utilizing even two 5 Gbps ports at the same time, let alone high-bandwidth ports like that. Just be smart about where you connect your stuff, don't fill your high-bandwidth ports with low-bandwidth peripherals, and even a low number will be more than 99.99% of users ever need.

Of course, adding USB4 support requires some more lanes (or more integrated controllers). But it really isn't a lot.
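A quick sanity check on why port placement matters: three fully loaded 20 Gbps ports would oversubscribe a B550-style chipset uplink (PCIe 3.0 x4) on their own. The port counts below are the figures from the discussion above; the uplink math assumes 128b/130b encoding:

```python
# Can a B550-style chipset uplink feed a full set of fast USB ports?
USB_PORTS_GBPS = [20, 20, 20]      # six 10 Gbps ports paired into three 20 Gbps
UPLINK_GBPS = 4 * 8 * 128 / 130    # PCIe 3.0 x4 uplink, 128b/130b encoding

total_usb = sum(USB_PORTS_GBPS)    # 60 Gbps aggregate
print(f"Aggregate USB: {total_usb} Gbps, chipset uplink: {UPLINK_GBPS:.1f} Gbps")

# If all of these hung off the chipset, they would oversubscribe the
# ~31.5 Gbps uplink when loaded simultaneously. The four CPU-attached
# ports bypass that bottleneck entirely - hence "be smart about where
# you connect your stuff".
```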
 