
MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use

Joined
Mar 7, 2011
Messages
4,636 (0.92/day)
AMD can do x4x4x4x4, but I'm not familiar enough with their platforms to know if that includes x8x4x4, though it would make sense if it could.
Only the Z and X series consumer chipsets support it, much like how Intel has crippled overclocking support (and, until recently, even XMP) on its non-X and non-Z chipsets.

 
Joined
Jan 2, 2024
Messages
635 (1.76/day)
Location
Seattle
System Name DevKit
Processor AMD Ryzen 5 3600 ↗4.0GHz
Motherboard Asus TUF Gaming X570-Plus WiFi
Cooling Koolance CPU-300-H06, Koolance GPU-180-L06, SC800 Pump
Memory 4x16GB Ballistix 3200MT/s ↗3800
Video Card(s) PowerColor RX 580 Red Devil 8GB ↗1380MHz ↘1105mV, PowerColor RX 7900 XT Hellhound 20GB
Storage 240GB Corsair MP510, 120GB KingDian S280
Display(s) Nixeus VUE-24 (1080p144)
Case Koolance PC2-601BLW + Koolance EHX1020CUV Radiator Kit
Audio Device(s) Oculus CV-1
Power Supply Antec Earthwatts EA-750 Semi-Modular
Mouse Easterntimes Tech X-08, Zelotes C-12
Keyboard Logitech 106-key, Romoral 15-Key Macro, Royal Kludge RK84
VR HMD Oculus CV-1
Software Windows 10 Pro Workstation, VMware Workstation 16 Pro, MS SQL Server 2016, Fan Control v120, Blender
Benchmark Scores Cinebench R15: 1590cb Cinebench R20: 3530cb (7.83x451cb) CPU-Z 17.01.64: 481.2/3896.8 VRMark: 8009
AMD can do x4x4x4x4, but I'm not familiar enough with their platforms to know if that includes x8x4x4, though it would make sense if it could.
An AMD board doing x8x4x4 would be fantastic, though I have limited experience with GPUs in x8 mode. I barely noticed any difference at x4 with my own hardware, so either everything I own is bottom-grade or I'm just not building anything weird enough to cause issues. It would be a good cost-cutting measure that could obsolete these weird dual-chipset boards, which are already way more expensive than they deserve to be.
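For anyone not fluent in the notation, here's a toy sketch of what these bifurcation strings mean — purely illustrative, not any vendor's actual BIOS option list:

```python
# Illustrative only: model how a CPU's 16 PCIe lanes can be split
# ("bifurcated") into the configurations discussed above.
# Which splits are allowed is fixed by the CPU/BIOS, not the motherboard.

ALLOWED_SPLITS = {
    "x16":      [16],
    "x8x8":     [8, 8],
    "x8x4x4":   [8, 4, 4],
    "x4x4x4x4": [4, 4, 4, 4],
}

def validate(split_name: str, total_lanes: int = 16) -> list[int]:
    """Return the lane groups for a named split, checking the lane budget."""
    groups = ALLOWED_SPLITS[split_name]
    assert sum(groups) == total_lanes, "split must use exactly the lane budget"
    return groups

# An x8x4x4 split could drive one GPU at x8 plus two x4 NVMe SSDs --
# exactly the use case of a dual-M.2 graphics card.
print(validate("x8x4x4"))   # [8, 4, 4]
```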
 
Joined
Sep 6, 2013
Messages
3,420 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Noctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
In my limited experience, it's pretty common. AFAIK even some A620 boards support it. ASRock seems to be good about bifurcation support on mid-level boards and higher (on AMD).

It's definitely something to research before you buy this kind of card, but the concept is great.
Saves someone from having to buy and figure out how to mount one of these:

[Attachment 374686: PCIe-to-M.2 adapter card]
I was about to point out that boards like this exist. And they don't cost much, about $15 on AliExpress. They probably work the same way the ASUS and MAXSUN GPUs do, while offering the flexibility to be used alone (if someone is using an iGPU) or with any graphics card out there.
As for how to mount them, if the PC case offers a vertical GPU mount, I suppose someone just needs a PCIe x16 extension cable.

In any case, we didn't have such problems in the past, when motherboards had 6-7 PCIe x16 slots, on boards priced at less than $100. Today those greedy motherboard makers use M.2 slots as an excuse to sell us empty PCBs for $300.
 
Joined
Nov 12, 2020
Messages
168 (0.11/day)
Processor 265K (running stock until more Intel updates land)
Motherboard MPG Z890 Carbon WIFI
Cooling Peerless Assassin 140
Memory 48GB DDR5-7200 CL34
Video Card(s) RTX 3080 12GB FTW3 Ultra Hybrid
Storage 1.5TB 905P and 2x 2TB P44 Pro
Display(s) CU34G2X and Ea244wmi
Case Dark Base 901
Audio Device(s) Sound Blaster X4
Power Supply Toughpower PF3 850
Mouse G502 HERO/G700s
Keyboard Ducky One 3 Pro Nazca
In any case, we didn't have such problems in the past, when motherboards had 6-7 PCIe x16 slots, on boards priced at less than $100. Today those greedy motherboard makers use M.2 slots as an excuse to sell us empty PCBs for $300.
Those boards used PLX chips, which at the time didn't cost a whole lot, to add PCIe expansion. The problem is actually Broadcom buying PLX Technology and raising prices through the roof almost immediately. PCIe 2.0 and 3.0 switches are still relatively reasonably priced, but anything above that is easily in the $100+ range. So while I wouldn't dispute motherboard-manufacturer greed in a general sense, this one isn't on them.

Right now the only real hope for more client PCIe is for Intel/AMD to choose to add more either through a wider DMI link or more CPU lanes.
 
Joined
Sep 6, 2013
Messages
3,420 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Noctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Those boards used PLX chips which didn't cost a whole lot to add PCIe expansion at the time. The problem is actually Broadcom buying PLX Technology and raising the prices through the roof almost immediately. PCIe 2.0 and 3.0 switches are still relatively reasonably priced, but anything above that is easily in the $100+ range. So while I wouldn't dispute motherboard manufacturer greed in a general sense this one isn't on them.

Right now the only real hope for more client PCIe is for Intel/AMD to choose to add more either through a wider DMI link or more CPU lanes.
There are plenty of PCIe lanes from both the CPU and the chipset. Putting several PCIe x16 slots on the motherboard and enabling/disabling ports depending on what is connected is a common feature that probably doesn't need a PLX chip. Does it? Configuring a slot to run at x16, x8 or x4 is probably also something that can be done in the BIOS. In any case, motherboard makers could have kept doing what was common practice up to the X470 chipset on AM4: provide two PCIe x16 slots that share the 16 CPU lanes. Compare that to routing all 16 lanes to a single slot, then having to dig through the motherboard manual to see whether that slot can be split before using it with graphics cards like those from ASUS and MAXSUN, or with custom PCIe cards from places like AliExpress — an option that also carries risk, because those come from unknown manufacturers.
I think the motherboard makers simply removed valuable features that were common in the past to improve their profit margins. The elimination of SLI and CrossFire, together with the integration of the northbridge into the CPU, gave them the opportunity to simplify PCB design and replace features that need extra BIOS support with "better" looks that cost them nothing.
 
Joined
Jan 3, 2021
Messages
3,625 (2.49/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
It would require your SSDs to be the same PCIe revision as the video card which could be an issue should something like this be implemented in a higher performance card (thinking a card that would require PCIe 5.0 x8 bandwidth which would require PCIe 5.0 SSDs)
Is that really a requirement of Core and Ryzen CPUs? It would be very strange if true, because each link should be established independently (determining what's at the other end, negotiating lane count and speed, finding the equalisation settings for the lowest error rate). Speed negotiation starts at PCIe 1.0 and progresses step by step. Speed and lane count can also change dynamically for power saving.

There are plenty of PCIe lanes from both the CPU and the chipset.
Yes. What's lacking is any sort of bifurcation on the CPU's Gen 5 x4 ports.
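The independent per-link training described above can actually be observed: on Linux, `lspci -vv` prints a LnkCap (supported) and LnkSta (negotiated) line for each device. Here's a toy parser over simplified, made-up sample text (real `lspci -vv` output nests these lines under each device rather than prefixing the address):

```python
import re

# Simplified sample in the style of `lspci -vv` output. LnkCap is what a
# device supports; LnkSta is what its link actually trained to.
# Addresses and values below are invented for illustration.
SAMPLE = """\
01:00.0 LnkCap: Speed 16GT/s, Width x16
01:00.0 LnkSta: Speed 16GT/s, Width x16
02:00.0 LnkCap: Speed 16GT/s, Width x4
02:00.0 LnkSta: Speed 8GT/s, Width x4
"""

def parse_links(text: str) -> dict[str, dict[str, tuple[str, int]]]:
    """Map PCI address -> {'LnkCap'/'LnkSta': (speed, width)}."""
    links: dict[str, dict[str, tuple[str, int]]] = {}
    for addr, kind, speed, width in re.findall(
            r"(\S+) (LnkCap|LnkSta): Speed (\S+), Width x(\d+)", text):
        links.setdefault(addr, {})[kind] = (speed, int(width))
    return links

links = parse_links(SAMPLE)
# The device at 02:00.0 trained at a lower speed than it supports --
# each link negotiates on its own, independent of the GPU's link.
print(links["02:00.0"]["LnkSta"])   # ('8GT/s', 4)
```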
 
Joined
Sep 15, 2015
Messages
1,092 (0.32/day)
Location
Latvija
System Name Fujitsu Siemens, HP Workstation
Processor Athlon x2 5000+ 3.1GHz, i5 2400
Motherboard Asus
Memory 4GB Samsung
Video Card(s) rx 460 4gb
Storage 750 Evo 250 +2tb
Display(s) Asus 1680x1050 4K HDR
Audio Device(s) Pioneer
Power Supply 430W
Mouse Acme
Keyboard Trust
Only good for old computers with an Intel northbridge, so the drivers stay compatible with the iGPU. Like SATA and USB 3.0 on a PCIe x1 card.
 
Joined
Nov 12, 2020
Messages
168 (0.11/day)
Processor 265K (running stock until more Intel updates land)
Motherboard MPG Z890 Carbon WIFI
Cooling Peerless Assassin 140
Memory 48GB DDR5-7200 CL34
Video Card(s) RTX 3080 12GB FTW3 Ultra Hybrid
Storage 1.5TB 905P and 2x 2TB P44 Pro
Display(s) CU34G2X and Ea244wmi
Case Dark Base 901
Audio Device(s) Sound Blaster X4
Power Supply Toughpower PF3 850
Mouse G502 HERO/G700s
Keyboard Ducky One 3 Pro Nazca
Is that really a requirement of Core and Ryzen CPUs? It would be very strange if true because each link should be established independently (determining what's at the other end, negotiating lane count and speed, finding equalisation settings for lowest error rate). Negotiating the speed starts at PCIe 1.0 speed then progresses one by one. Also speed and lane count can change dynamically for power saving.
You're totally right; I forgot the only limit for bifurcation is the maximum slot speed.
 
Joined
Oct 22, 2014
Messages
14,170 (3.81/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
Instead of storage, why not simply use the drive as a cache, making the GPU more efficient and speeding up load times, etc.?
No bifurcation needed.
 
Joined
Sep 6, 2013
Messages
3,420 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Noctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
You mean this



I wonder why we haven't seen other models copy the idea of an SSD as memory expansion, and whether Intel's Optane could have been used in cards targeting modern AI applications. Then again, Intel killed Optane.
 
Joined
Nov 12, 2020
Messages
168 (0.11/day)
Processor 265K (running stock until more Intel updates land)
Motherboard MPG Z890 Carbon WIFI
Cooling Peerless Assassin 140
Memory 48GB DDR5-7200 CL34
Video Card(s) RTX 3080 12GB FTW3 Ultra Hybrid
Storage 1.5TB 905P and 2x 2TB P44 Pro
Display(s) CU34G2X and Ea244wmi
Case Dark Base 901
Audio Device(s) Sound Blaster X4
Power Supply Toughpower PF3 850
Mouse G502 HERO/G700s
Keyboard Ducky One 3 Pro Nazca
There are plenty of PCIe lanes from both the CPU and the chipset. Putting several PCIe x16 slots on the motherboard and enabling/disabling ports depending on what is connected is a common feature that probably doesn't need a PLX chip. Does it? Configuring a slot to run at x16, x8 or x4 is probably also something that can be done in the BIOS.
CPU lanes and the bifurcation they're capable of are limited by the CPU, not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything outside the scope of what the CPU supports, you need a PCIe switch (in the old days a PLX chip), and the same goes for using multiple devices on a single chipset connection (though I don't know for certain whether that's a chipset or motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess, based on only ever seeing those lanes used for M.2, and on MSI's MAX line, which didn't have a PCIe 4.0 M.2 from the CPU yet still didn't use those lanes for one of the board's PCIe slots). The 16 lanes of PCIe 5.0 can only run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within a single slot or as two separate slots (or both, but if you had a two-slot board and split the primary slot, you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
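To put rough numbers on that uplink bottleneck, here's a back-of-the-envelope sketch (theoretical one-direction rates after 128b/130b encoding; real-world throughput is lower due to protocol overhead):

```python
# Rough bandwidth arithmetic for the chipset-as-PCIe-switch picture above.
# Per-lane throughput in GB/s: (GT/s) * 128/130 encoding / 8 bits per byte.
PER_LANE_GBPS = {3: 8 * 128 / 130 / 8,
                 4: 16 * 128 / 130 / 8,
                 5: 32 * 128 / 130 / 8}

def link_bw(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

dmi = link_bw(4, 8)                          # Z790/H770 uplink: PCIe 4.0 x8
downstream = link_bw(4, 20) + link_bw(3, 8)  # 20 Gen4 + 8 Gen3 chipset lanes

print(f"DMI uplink:       {dmi:.1f} GB/s")         # ~15.8 GB/s
print(f"Downstream max:   {downstream:.1f} GB/s")  # ~47.3 GB/s
print(f"Oversubscription: {downstream / dmi:.1f}x")
```

The point of the arithmetic: the chipset can wire far more downstream bandwidth than the uplink carries, which is fine as long as not everything is busy at once.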
 
Joined
Apr 2, 2008
Messages
455 (0.07/day)
System Name -
Processor Ryzen 9 5900X
Motherboard MSI MEG X570
Cooling Arctic Liquid Freezer II 280 (4x140 push-pull)
Memory 32GB Patriot Steel DDR4 3733 (8GBx4)
Video Card(s) MSI RTX 4080 X-trio.
Storage Sabrent Rocket-Plus-G 2TB, Crucial P1 1TB, WD 1TB sata.
Display(s) LG Ultragear 34G750 nano-IPS 34" utrawide
Case Define R6
Audio Device(s) Xfi PCIe
Power Supply Fractal Design ION Gold 750W
Mouse Razer DeathAdder V2 Mini.
Keyboard Logitech K120
VR HMD Er no, pointless.
Software Windows 10 22H2
Benchmark Scores Timespy - 24522 | Crystalmark - 7100/6900 Seq. & 84/266 QD1 |
Not the first time this has been done, though; I think there are some 3050s out in the wild with this.
 
Joined
Sep 6, 2013
Messages
3,420 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Noctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
CPU lanes and the bifurcation they're capable of is limited by the CPU not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything that is outside the scope of the CPU support you need a PCIe switch (in the old days a PLX chip) and the same goes for using multiple devices on a single chipset connection (this I do not know for certain is a chipset vs motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess based on only seeing those lanes used for M.2 and in the case of MSI's MAX line which didn't have PCIe 4.0 M.2 from CPU not using those lanes for one of the PCIe slots on the board). The 16 lanes of PCIe 5.0 can only be run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within the same slot or two separate slots (or both, but if you had a two slot and split the primary slot you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
Well, I don't have the technical background to know whether it's a restriction somewhere in modern hardware, or to completely follow what you're explaining, but I do have an example in mind. The same AM4 CPU, for example my R5 5500, can be used with a motherboard based on the X470 chipset or one based on the X570 chipset. X470 boards were cheaper and had two PCIe 3.0 x16 slots connected to the CPU. With a graphics card in the first x16 slot alone, it worked as a full x16 slot. Inserting a second graphics card, or an SSD on an adapter, into the second slot meant the first slot now ran at x8 and the second slot also got 8 lanes (only 4 were needed in the SSD's case; 8 were used when a second GPU was inserted). Now, most X570 boards that sold for less than $250-$300 — still a much higher price than the X470 boards — had just one x16 slot connected to the CPU; everything else hung off the chipset. There were even motherboards with 3-4 x16 slots where only the first was connected to the CPU and the others were in fact x1 slots wired to the chipset.
So either something changed going from PCIe 3.0 to 4.0 — some kind of limitation, such that every Intel or AMD CPU supporting PCIe 4.0 or 5.0 is restricted in some way — or I'm probably right about greedy motherboard manufacturers improving their profit margins by designing simpler motherboards and still selling them at much higher prices.
 
Joined
Jan 3, 2021
Messages
3,625 (2.49/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
CPU lanes and the bifurcation they're capable of is limited by the CPU not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything that is outside the scope of the CPU support you need a PCIe switch (in the old days a PLX chip) and the same goes for using multiple devices on a single chipset connection (this I do not know for certain is a chipset vs motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess based on only seeing those lanes used for M.2 and in the case of MSI's MAX line which didn't have PCIe 4.0 M.2 from CPU not using those lanes for one of the PCIe slots on the board). The 16 lanes of PCIe 5.0 can only be run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within the same slot or two separate slots (or both, but if you had a two slot and split the primary slot you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
AMD's Promontory chip would be a very interesting product if AMD were willing to sell it separately. It's basically a quite flexible PCIe switch, although we don't know if the link to the CPU is fully standard PCIe.
 
Joined
Nov 12, 2020
Messages
168 (0.11/day)
Processor 265K (running stock until more Intel updates land)
Motherboard MPG Z890 Carbon WIFI
Cooling Peerless Assassin 140
Memory 48GB DDR5-7200 CL34
Video Card(s) RTX 3080 12GB FTW3 Ultra Hybrid
Storage 1.5TB 905P and 2x 2TB P44 Pro
Display(s) CU34G2X and Ea244wmi
Case Dark Base 901
Audio Device(s) Sound Blaster X4
Power Supply Toughpower PF3 850
Mouse G502 HERO/G700s
Keyboard Ducky One 3 Pro Nazca
Well, I don't have the technical background to know whether it's a restriction somewhere in modern hardware, or to completely follow what you're explaining, but I do have an example in mind. The same AM4 CPU, for example my R5 5500, can be used with a motherboard based on the X470 chipset or one based on the X570 chipset. X470 boards were cheaper and had two PCIe 3.0 x16 slots connected to the CPU. With a graphics card in the first x16 slot alone, it worked as a full x16 slot. Inserting a second graphics card, or an SSD on an adapter, into the second slot meant the first slot now ran at x8 and the second slot also got 8 lanes (only 4 were needed in the SSD's case; 8 were used when a second GPU was inserted). Now, most X570 boards that sold for less than $250-$300 — still a much higher price than the X470 boards — had just one x16 slot connected to the CPU; everything else hung off the chipset. There were even motherboards with 3-4 x16 slots where only the first was connected to the CPU and the others were in fact x1 slots wired to the chipset.
So either something changed going from PCIe 3.0 to 4.0 — some kind of limitation, such that every Intel or AMD CPU supporting PCIe 4.0 or 5.0 is restricted in some way — or I'm probably right about greedy motherboard manufacturers improving their profit margins by designing simpler motherboards and still selling them at much higher prices.
What you're talking about here is due to the cost of running PCIe 4.0 (and now PCIe 5.0) traces on the motherboard. This was likely a way to maintain their existing margins without changing board costs. The more expensive boards that have multiple CPU PCIe 4.0 (now 5.0) generally have a bunch of other features also bloating the price.

I believe the cheapest board which had two CPU PCIe slots on LGA 1700 was Asus' W680 workstation board which cost ~$330, but otherwise they were pretty much only found on the ~$500 or higher boards.
 
Joined
Sep 6, 2013
Messages
3,420 (0.83/day)
Location
Athens, Greece
System Name 3 desktop systems: Gaming / Internet / HTPC
Processor Ryzen 5 7600 / Ryzen 5 4600G / Ryzen 5 5500
Motherboard X670E Gaming Plus WiFi / MSI X470 Gaming Plus Max (1) / MSI X470 Gaming Plus Max (2)
Cooling Aigo ICE 400SE / Segotep T4 / Noctua U12S
Memory Kingston FURY Beast 32GB DDR5 6000 / 16GB JUHOR / 32GB G.Skill RIPJAWS 3600 + Aegis 3200
Video Card(s) ASRock RX 6600 + GT 710 (PhysX) / Vega 7 integrated / Radeon RX 580
Storage NVMes, ONLY NVMes / NVMes, SATA Storage / NVMe, SATA, external storage
Display(s) Philips 43PUS8857/12 UHD TV (120Hz, HDR, FreeSync Premium) / 19'' HP monitor + BlitzWolf BW-V5
Case Sharkoon Rebel 12 / CoolerMaster Elite 361 / Xigmatek Midguard
Audio Device(s) onboard
Power Supply Chieftec 850W / Silver Power 400W / Sharkoon 650W
Mouse CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Keyboard CoolerMaster Devastator III Plus / CoolerMaster Devastator / Logitech
Software Windows 10 / Windows 10&Windows 11 / Windows 10
What you're talking about here is due to the cost of running PCIe 4.0 (and now PCIe 5.0) traces on the motherboard. This was likely a way to maintain their existing margins without changing board costs. The more expensive boards that have multiple CPU PCIe 4.0 (now 5.0) generally have a bunch of other features also bloating the price.

I believe the cheapest board which had two CPU PCIe slots on LGA 1700 was Asus' W680 workstation board which cost ~$330, but otherwise they were pretty much only found on the ~$500 or higher boards.
I don't know if going from PCIe 3.0 to PCIe 4.0 skyrockets the cost of the board. What I know is that modern motherboards look like microATX motherboards with ATX dimensions.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,721 (6.69/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Joined
Apr 18, 2019
Messages
2,401 (1.15/day)
Location
Olympia, WA
System Name Sleepy Painter
Processor AMD Ryzen 5 3600
Motherboard Asus TuF Gaming X570-PLUS/WIFI
Cooling FSP Windale 6 - Passive
Memory 2x16GB F4-3600C16-16GVKC @ 16-19-21-36-58-1T
Video Card(s) MSI RX580 8GB
Storage 2x Samsung PM963 960GB nVME RAID0, Crucial BX500 1TB SATA, WD Blue 3D 2TB SATA
Display(s) Microboard 32" Curved 1080P 144hz VA w/ Freesync
Case NZXT Gamma Classic Black
Audio Device(s) Asus Xonar D1
Power Supply Rosewill 1KW on 240V@60hz
Mouse Logitech MX518 Legend
Keyboard Red Dragon K552
Software Windows 10 Enterprise 2019 LTSC 1809 17763.1757
I don't know if going from PCIe 3.0 to PCIe 4.0 skyrockets the cost of the board. What I know is that modern motherboards look like microATX motherboards with ATX dimensions.
More to it than just that, but: Gen4 and newer ReDriver ICs are still quite a bit more expensive than Gen3 ReDrivers.
Also, trace complexity increases and max trace length decreases with Gen4 and newer.

AMD's Promontory chip would be a very interesting product if AMD were willing to sell it separately. It's basically a quite flexible PCIe switch, although we don't know if the link to the CPU is fully standard PCIe.
Wouldn't it be funny if we could have a return to the (short-lived) era of ATI (AMD) chipsets on Intel boards?
It could be used as a secondary FCH/southbridge alongside Intel's, AFAIK.

I'd love to see 'feature expansion' AICs, though.

Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
That is a very interesting idea indeed! Even a 64GB NVMe drive would likely work exceptionally well as a cache.
You mean this



I wonder why we haven't seen other models copy the idea of an SSD as memory expansion, and whether Intel's Optane could have been used in cards targeting modern AI applications. Then again, Intel killed Optane.
DirectStorage kinda promised this possibility without the on-card ASIC/FPGA support the Radeon Pro SSGs had.
Not all games let you, but with some ingenuity one could put or point live-loaded files and shader caches on those drives (PrimoCache, an NTFS volume mounted as a folder, symbolic links, etc.).

With this MAXSUN card, and others like it, pSLC-modded cheap QLC NVMe drives (Gen4, and eventually Gen5), or whatever Optane someone could get their hands on, would work well for any kind of cache needs
(CPU-connected lanes, no switch-added latency, good cooling, etc.).
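As a toy illustration of the symbolic-link trick mentioned above — all paths here are placeholders, and on Windows a `mklink /J` junction is the usual equivalent:

```python
import os
import tempfile

# Hypothetical sketch: relocate a game's shader-cache folder onto a faster
# drive, then leave a symlink behind so the game still finds it at the
# original path. Directory names are invented for the example.
root = tempfile.mkdtemp()
fast_drive = os.path.join(root, "fast_ssd", "shader_cache")
original = os.path.join(root, "game", "shader_cache")

os.makedirs(fast_drive)
os.makedirs(os.path.dirname(original))
os.symlink(fast_drive, original, target_is_directory=True)

# The game writes through the old path; the bytes land on the fast drive.
with open(os.path.join(original, "pipeline.bin"), "wb") as f:
    f.write(b"\x00" * 16)

print(os.listdir(fast_drive))   # ['pipeline.bin']
```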
 
Joined
Jul 5, 2013
Messages
28,318 (6.75/day)
This was done already
We know. I'm just saying, this is a solid idea.

With this MAXSUN card, and others like it, pSLC-modded cheap QLC NVMe drives (Gen4, and eventually Gen5), or whatever Optane someone could get their hands on, would work well for any kind of cache needs
(CPU-connected lanes, no switch-added latency, good cooling, etc.).
No. QLC isn't up to the task (performance is lacking, even in pSLC mode). I was referring to MLC (still being made and still purchasable), which is faster and more durable. TLC would be OK if the performance could be stabilized.
 