Friday, December 6th 2024

MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use

Thanks to a discovery by VideoCardz, we get a glimpse of MAXSUN's latest Arc B580 graphics card, which packs not only a GPU but also room for two additional M.2 SSDs. The Intel Arc B580's PCIe connector has x16 physical pins, but Intel has confirmed that the GPU itself runs at PCIe 4.0 x8, leaving eight lanes unused. MAXSUN came up with a clever way to put those leftover lanes to good use, adding two PCIe x4 M.2 slots to its latest triple-fan iCraft B580 SKU. Power delivery for the M.2 drives comes directly from the graphics card, which is made possible by the GPU's partial PCIe lane utilization. This configuration could prove particularly valuable for compact builds or systems with limited motherboard storage options.
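To put rough numbers on that lane split, here is a minimal back-of-the-envelope sketch, assuming the commonly quoted ~2 GB/s of usable throughput per PCIe 4.0 lane; real-world figures depend on protocol overhead and the drives installed.

```python
# Rough PCIe 4.0 bandwidth split for the iCraft B580 layout described above.
# Assumes ~2 GB/s of usable throughput per PCIe 4.0 lane (16 GT/s, 128b/130b
# encoding); actual numbers will be somewhat lower.

GBPS_PER_PCIE4_LANE = 2.0  # approximate usable GB/s per lane, per direction

devices = {
    "Arc B580 GPU (x8)": 8,
    "M.2 SSD slot 1 (x4)": 4,
    "M.2 SSD slot 2 (x4)": 4,
}

print(f"Lanes used: {sum(devices.values())} of the 16 in the physical connector")
for name, lanes in devices.items():
    print(f"  {name}: ~{lanes * GBPS_PER_PCIE4_LANE:.0f} GB/s")
```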

Interestingly, the SSD pair appears to have its own thermal enclosure, which acts as a heatsink. With constant airflow from the GPU's fans, the M.2 drives should be able to sustain their full bandwidth without thermal throttling cutting into read/write speeds. The design follows in the footsteps of AMD's Radeon Pro SSG, which introduced integrated storage on workstation cards with PCIe 3.0 M.2 slots. Back then, the feature was aimed mainly at workstation users; MAXSUN, however, envisions gamers using it as an unusual way to expand their storage. Pricing and availability remain a mystery.
Source: VideoCardz

45 Comments on MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use

#26
DaemonForce
thestryker6AMD can do x4x4x4x4, but I'm not familiar enough with their platforms to know if that includes x8x4x4 though it would make sense if it could.
An AMD board doing x8/x4/x4 would be fantastic, though I have limited experience with GPUs in x8 mode. I barely noticed differences at x4 for any of my stuff, so either all my options are bottom-grade performance or I'm just not putting together anything weird enough to cause issues. It would be a good cost-cutting measure to obsolete these weird dual-chipset boards. They're already waaaaaay more expensive than they deserve.
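For readers following along, the whole question comes down to whether the host platform offers the split a card like this needs. A minimal sketch of that check, with purely hypothetical boards and support lists (always verify against the actual motherboard manual):

```python
# Sketch of the bifurcation question: can a board split its x16 CPU slot into
# the widths a multi-device card needs? Board names and supported splits are
# hypothetical examples, not vendor specifications.

SUPPORTED_SPLITS = {
    "hypothetical board A": [(16,), (8, 8), (8, 4, 4), (4, 4, 4, 4)],
    "hypothetical board B": [(16,), (8, 8)],
}

def slot_supports(board: str, needed: tuple) -> bool:
    """True if the board's x16 slot can be bifurcated into the requested widths."""
    return needed in SUPPORTED_SPLITS.get(board, [])

# A card pairing an x8 GPU with two x4 M.2 slots needs an x8/x4/x4 split:
for board in SUPPORTED_SPLITS:
    print(f"{board}: x8/x4/x4 supported = {slot_supports(board, (8, 4, 4))}")
```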
#27
john_
LabRat 891In my limited experience, it's pretty common. AFAIK even some A620 boards support it. ASrock seems to be good about bifurcation support on mid-level boards and higher (on AMD).

It's definitely something to research before you buy this kind of card, but the concept is great.
Saves someone from having to buy and figure out how to mount one of these:

I was about to point out that boards like this exist. And they don't cost much, about $15 on AliExpress. They probably work the same way ASUS's and MAXSUN's GPUs do, while offering the flexibility to be used on their own (if someone is using an iGPU) or with any graphics card out there.
As for how to mount them, if the PC case offers the option to mount the GPU vertically, I suppose someone just needs a PCIe x16 extension cable.

In any case, we didn't have such problems in the past, when motherboards had 6-7 PCIe x16 slots and cost less than $100. Today the greedy motherboard makers use the excuse of M.2 slots to sell us empty PCBs for $300.
#28
thestryker6
john_In any case, we didn't have such problems in the past, when motherboards had 6-7 PCIe x16 slots and cost less than $100. Today the greedy motherboard makers use the excuse of M.2 slots to sell us empty PCBs for $300.
Those boards used PLX chips which didn't cost a whole lot to add PCIe expansion at the time. The problem is actually Broadcom buying PLX Technology and raising the prices through the roof almost immediately. PCIe 2.0 and 3.0 switches are still relatively reasonably priced, but anything above that is easily in the $100+ range. So while I wouldn't dispute motherboard manufacturer greed in a general sense this one isn't on them.

Right now the only real hope for more client PCIe is for Intel/AMD to choose to add more either through a wider DMI link or more CPU lanes.
#29
john_
thestryker6Those boards used PLX chips which didn't cost a whole lot to add PCIe expansion at the time. The problem is actually Broadcom buying PLX Technology and raising the prices through the roof almost immediately. PCIe 2.0 and 3.0 switches are still relatively reasonably priced, but anything above that is easily in the $100+ range. So while I wouldn't dispute motherboard manufacturer greed in a general sense this one isn't on them.

Right now the only real hope for more client PCIe is for Intel/AMD to choose to add more either through a wider DMI link or more CPU lanes.
There are plenty of PCIe lanes from both the CPU and the chipset. Installing a bunch of PCIe x16 slots on the motherboard and enabling/disabling some ports based on what is connected is a common feature that probably doesn't need a PLX chip, does it? Also, configuring a slot to work as x16, x8 or x4 is probably something that can be done in the BIOS. In any case, I believe motherboard makers could keep doing what was common practice up to the X470 chipset on the AM4 platform: have two PCIe x16 slots that share the 16 PCIe lanes from the CPU. That beats routing all 16 lanes to a single PCIe slot and having to comb through the motherboard manual to see whether the slot can be split and then used with graphics cards like those from ASUS and MAXSUN, or with custom PCIe cards like those sold on AliExpress, an option that also carries some risk because it comes from an unknown manufacturer.
I think the motherboard makers just removed valuable features that were common in the past to improve their profit margins. The elimination of SLI and CrossFire, together with the integration of the northbridge into the CPU, gave them the opportunity to simplify the PCB design and at the same time replace features that need extra BIOS support with "better" looks that cost them nothing.
#30
Wirko
thestryker6It would require your SSDs to be the same PCIe revision as the video card which could be an issue should something like this be implemented in a higher performance card (thinking a card that would require PCIe 5.0 x8 bandwidth which would require PCIe 5.0 SSDs)
Is that really a requirement of Core and Ryzen CPUs? It would be very strange if true because each link should be established independently (determining what's at the other end, negotiating lane count and speed, finding equalisation settings for lowest error rate). Negotiating the speed starts at PCIe 1.0 speed then progresses one by one. Also speed and lane count can change dynamically for power saving.
john_There are plenty of PCIe lanes from both the CPU and the chipset.
Yes. What's lacking is the ability of the Gen 5 x4 port from the CPU to do any sort of bifurcation.
#31
Readlight
Only good for old computers with an Intel northbridge, so the drivers stay compatible with the iGPU. Like SATA and USB 3.0 on PCIe x1.
#32
thestryker6
WirkoIs that really a requirement of Core and Ryzen CPUs? It would be very strange if true because each link should be established independently (determining what's at the other end, negotiating lane count and speed, finding equalisation settings for lowest error rate). Negotiating the speed starts at PCIe 1.0 speed then progresses one by one. Also speed and lane count can change dynamically for power saving.
You're totally right, I forgot the only limit is the maximum slot speed for bifurcation.
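In other words, each link behind the bifurcated slot trains on its own and simply settles at the highest rate both ends of that particular link support. A simplified model of that, with a hypothetical device mix (real link training also negotiates width, runs equalisation, and can renegotiate dynamically):

```python
# Simplified model of independent PCIe link training: every link behind the
# bifurcated slot settles at the highest generation both ends support.
# The device list is hypothetical; width negotiation, equalisation and
# dynamic speed changes are ignored here.

SLOT_MAX_GEN = 4  # e.g. a PCIe 4.0 slot bifurcated to x8/x4/x4

endpoints = {
    "GPU (PCIe 4.0 device)": 4,
    "M.2 SSD #1 (PCIe 3.0 device)": 3,
    "M.2 SSD #2 (PCIe 5.0 device)": 5,
}

for name, device_max_gen in endpoints.items():
    print(f"{name}: link trains at Gen {min(SLOT_MAX_GEN, device_max_gen)}")
```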
#33
Caring1
Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
#34
lexluthermiester
Caring1Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
That is a very interesting idea indeed! Even a 64GB NVMe drive would likely work exceptionally well as a cache.
#35
john_
Caring1Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
You mean this?
I wonder why we haven't seen other models copying the idea of an SSD as memory expansion. I wonder if Intel's Optane could be used in cards that would target modern AI applications. Then again, Intel killed Optane.
#36
thestryker6
john_There are plenty of PCIe lanes from both the CPU and the chipset. Installing a bunch of PCIe x16 slots on the motherboard and enabling/disabling some ports based on what is connected is a common feature that probably doesn't need a PLX chip, does it? Also, configuring a slot to work as x16, x8 or x4 is probably something that can be done in the BIOS.
CPU lanes and the bifurcation they're capable of is limited by the CPU not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything that is outside the scope of the CPU support you need a PCIe switch (in the old days a PLX chip) and the same goes for using multiple devices on a single chipset connection (this I do not know for certain is a chipset vs motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess based on only seeing those lanes used for M.2 and in the case of MSI's MAX line which didn't have PCIe 4.0 M.2 from CPU not using those lanes for one of the PCIe slots on the board). The 16 lanes of PCIe 5.0 can only be run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within the same slot or two separate slots (or both, but if you had a two slot and split the primary slot you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
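Tallying those numbers up makes the trade-off clearer. A rough sketch using the lane counts above; the set of devices hung off the chipset is a hypothetical example, and the per-lane throughput figures are the usual approximations:

```python
# Rough tally of the LGA1700 budget described above: 16x PCIe 5.0 + 4x PCIe 4.0
# from the CPU, a PCIe 4.0 x8 DMI uplink, and up to 20x PCIe 4.0 / 8x PCIe 3.0
# lanes behind the chipset. Everything behind the chipset shares the DMI link.

APPROX_GBPS_PER_LANE = {3: 1.0, 4: 2.0, 5: 4.0}  # usable GB/s per lane (approx.)

dmi_uplink = 8 * APPROX_GBPS_PER_LANE[4]  # PCIe 4.0 x8 ~= 16 GB/s

# Hypothetical devices wired to the chipset on a well-stocked board:
chipset_devices = [
    ("M.2 SSD #1", 4, 4),                 # (name, lanes, PCIe generation)
    ("M.2 SSD #2", 4, 4),
    ("M.2 SSD #3", 4, 4),
    ("x4 expansion slot", 4, 4),
    ("LAN / Wi-Fi / SATA (lumped)", 4, 3),
]

peak_demand = sum(lanes * APPROX_GBPS_PER_LANE[gen] for _, lanes, gen in chipset_devices)
print(f"DMI uplink:             ~{dmi_uplink:.0f} GB/s")
print(f"Peak downstream demand: ~{peak_demand:.0f} GB/s "
      f"({peak_demand / dmi_uplink:.2f}x the uplink if everything is busy at once)")
```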
#37
b1k3rdude
Not the first time this has been done, though; I think there are some RTX 3050s out in the wild with this.
#38
john_
thestryker6CPU lanes and the bifurcation they're capable of is limited by the CPU not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything that is outside the scope of the CPU support you need a PCIe switch (in the old days a PLX chip) and the same goes for using multiple devices on a single chipset connection (this I do not know for certain is a chipset vs motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess based on only seeing those lanes used for M.2 and in the case of MSI's MAX line which didn't have PCIe 4.0 M.2 from CPU not using those lanes for one of the PCIe slots on the board). The 16 lanes of PCIe 5.0 can only be run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within the same slot or two separate slots (or both, but if you had a two slot and split the primary slot you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
Well, I don't have the technical background to know if it is a restriction somewhere in modern hardware, or to completely get what you're trying to explain to me, but I do have an example in mind. The same AM4 CPU, for example my R5 5500, can be used with a motherboard based on the X470 chipset or one based on the X570 chipset. X470 motherboards were cheaper and had two PCIe 3.0 x16 slots connected to the CPU. If you inserted a graphics card into the first x16 slot, it worked as a full x16 slot. Inserting a second graphics card, or an SSD on an adapter, into the second slot meant the first slot now ran at x8 and the second slot also got eight lanes (only four were needed for the SSD, eight when a second GPU was installed). Now, on most X570 boards sold for less than $250-$300, still a much higher price than the X470 motherboards, there was just one x16 slot connected to the CPU. Everything else was connected to the chipset. There were cases of motherboards with 3-4 x16 slots where only the first was connected to the CPU; the others were in fact x1 slots connected to the chipset.
Now, either some change happened with PCIe 4.0 over PCIe 3.0, some kind of limitation maybe, so that every CPU from Intel or AMD that supports PCIe 4.0 or 5.0 is limited in some way, or I am probably right when I talk about greedy motherboard manufacturers who decided to improve their profit margins by designing simpler motherboards and still selling them at much higher prices.
#39
Wirko
thestryker6CPU lanes and the bifurcation they're capable of is limited by the CPU not the motherboard, but the motherboard controls whether or not you can utilize it. If you want to do anything that is outside the scope of the CPU support you need a PCIe switch (in the old days a PLX chip) and the same goes for using multiple devices on a single chipset connection (this I do not know for certain is a chipset vs motherboard limitation).

Since I know Intel products better than AMD I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated for M.2 storage by Intel (this is a guess based on only seeing those lanes used for M.2 and in the case of MSI's MAX line which didn't have PCIe 4.0 M.2 from CPU not using those lanes for one of the PCIe slots on the board). The 16 lanes of PCIe 5.0 can only be run x16 or x8/x8 and there's no way for motherboard manufacturers to change that, but it can be implemented within the same slot or two separate slots (or both, but if you had a two slot and split the primary slot you couldn't use the second slot).

The DMI lanes on the Z and H series chipsets are PCIe 4.0 x8 bandwidth so that's the maximum bandwidth between the CPU and chipset. The way modern chipsets handle PCIe is basically like a PCIe switch so that's where some flexibility comes in. Z790/H770 both can split up to 28 lanes (20 PCIe 4.0/8 PCIe 3.0) and can be configured in x1/x2/x4. I'm not aware of whether or not these lanes have bifurcation capability or if they're configured as wired, but I'd assume the latter as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like 4x PCIe 4.0 M.2 slots and 1x PCIe 4.0 slot from the chipset without any further flexibility.
AMD's Promontory chip would be a very interesting product if AMD were willing to sell it separately. It's basically a quite flexible PCIe switch, although we don't know if the link to the CPU is fully standard PCIe.
#40
thestryker6
john_Well, I don't have the technical background to know if it is a restriction somewhere in modern hardware, or to completely get what you're trying to explain to me, but I do have an example in mind. The same AM4 CPU, for example my R5 5500, can be used with a motherboard based on the X470 chipset or one based on the X570 chipset. X470 motherboards were cheaper and had two PCIe 3.0 x16 slots connected to the CPU. If you inserted a graphics card into the first x16 slot, it worked as a full x16 slot. Inserting a second graphics card, or an SSD on an adapter, into the second slot meant the first slot now ran at x8 and the second slot also got eight lanes (only four were needed for the SSD, eight when a second GPU was installed). Now, on most X570 boards sold for less than $250-$300, still a much higher price than the X470 motherboards, there was just one x16 slot connected to the CPU. Everything else was connected to the chipset. There were cases of motherboards with 3-4 x16 slots where only the first was connected to the CPU; the others were in fact x1 slots connected to the chipset.
Now, either some change happened with PCIe 4.0 over PCIe 3.0, some kind of limitation maybe, so that every CPU from Intel or AMD that supports PCIe 4.0 or 5.0 is limited in some way, or I am probably right when I talk about greedy motherboard manufacturers who decided to improve their profit margins by designing simpler motherboards and still selling them at much higher prices.
What you're talking about here is due to the cost of running PCIe 4.0 (and now PCIe 5.0) traces on the motherboard. This was likely a way to maintain their existing margins without changing board costs. The more expensive boards that have multiple CPU PCIe 4.0 (now 5.0) generally have a bunch of other features also bloating the price.

I believe the cheapest board which had two CPU PCIe slots on LGA 1700 was Asus' W680 workstation board which cost ~$330, but otherwise they were pretty much only found on the ~$500 or higher boards.
#41
john_
thestryker6What you're talking about here is due to the cost of running PCIe 4.0 (and now PCIe 5.0) traces on the motherboard. This was likely a way to maintain their existing margins without changing board costs. The more expensive boards that have multiple CPU PCIe 4.0 (now 5.0) generally have a bunch of other features also bloating the price.

I believe the cheapest board which had two CPU PCIe slots on LGA 1700 was Asus' W680 workstation board which cost ~$330, but otherwise they were pretty much only found on the ~$500 or higher boards.
I don't know if going from PCIe 3.0 to PCIe 4.0 skyrockets the cost of the board. What I know is that modern motherboards look like microATX motherboards with ATX dimensions.
#42
eidairaman1
The Exiled Airman
lexluthermiesterThis is a solid idea.
This was done already
#43
LabRat 891
john_I don't know if going from PCIe 3.0 to PCIe 4.0 skyrockets the cost of the board. What I know is that modern motherboards look like microATX motherboards with ATX dimensions.
More to it than just that, but: Gen4 and newer ReDriver ICs are still quite a bit more expensive than Gen3 ReDrivers.
Also, trace complexity increases and max trace length decreases with Gen4 and newer.
WirkoAMD's Promontory chip would be a very interesting product if AMD were willing to sell it separately. It's basically a quite flexible PCIe switch, although we don't know if the link to the CPU is fully standard PCIe.
Wouldn't it be funny if we could have a return to the (short-lived) era of ATI (AMD) chipsets on Intel boards again?
It could be used as a secondary FCH/southbridge to Intel's, AFAIK.

I'd love to see 'feature expansion' AICs, though.
lexluthermiester
Caring1Instead of storage, why not simply use the drive as cache, making the GPU more efficient and speed up load times etc.
No Bifurcation needed.
That is a very interesting idea indeed! Even a 64GB NVMe drive would likely work exceptionally well as a cache.
john_You mean this?
I wonder why we haven't seen other models copying the idea of an SSD as memory expansion. I wonder if Intel's Optane could be used in cards that would target modern AI applications. Then again, Intel killed Optane.
DirectStorage kinda promised this possibility without on-card ASIC/FPGA support like the Radeon Pro SSGs had.
Not all games let you, but with some ingenuity one could put/point live-loaded files and shader caches on those drives. (PrimoCache, NTFS Volume as Folder, Symbolic Links, etc.)

With this MAXSUN card, and others like it, pSLC-modded cheap QLC (Gen 4, and eventually Gen 5) NVMe drives or whatever Optane someone could get their hands on would work well for any kind of cache needs.
(CPU-connected lanes, no switch-added latency, good cooling, etc.)
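As an illustration of the symbolic-link approach mentioned above, here is a minimal sketch that moves a shader-cache folder onto a card-attached drive and links the old path to it. The paths are hypothetical, the game should be closed first, and on Windows creating symlinks requires administrator rights or Developer Mode:

```python
# Minimal sketch: relocate a shader-cache folder onto the card-attached SSD and
# leave a symbolic link behind so the game keeps using its usual path.
# Paths are hypothetical; requires admin rights or Developer Mode on Windows.

import shutil
from pathlib import Path

original = Path(r"C:\Users\me\AppData\Local\SomeGame\ShaderCache")  # hypothetical
fast_ssd = Path(r"E:\gpu-ssd\SomeGame-ShaderCache")                 # drive on the card

if original.exists() and not original.is_symlink():
    fast_ssd.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(original), str(fast_ssd))                # move the cache over
    original.symlink_to(fast_ssd, target_is_directory=True)  # old path -> new home
    print(f"{original} now points at {fast_ssd}")
```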
#44
lexluthermiester
eidairaman1This was done already
We know. I'm just saying, this is a solid idea.
LabRat 891With this MAXSUN card, and others like it, pSLC-modded cheap QLC (Gen 4, and eventually Gen 5) NVMe drives or whatever Optane someone could get their hands on would work well for any kind of cache needs.
(CPU-connected lanes, no switch-added latency, good cooling, etc.)
No. QLC isn't up to the task (performance is lacking, even in pSLC mode). I was referring to MLC (still being made and can still be purchased), which is faster and more durable. TLC would be OK if the performance could be stabilized.
#45
dragontamer5788
lexluthermiesterThis is a solid idea.
Some might say you've stated the case quite well.