Friday, December 6th 2024
MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use
Thanks to a discovery by VideoCardz, we get a glimpse of MAXSUN's latest Arc B580 graphics card, which packs not just a GPU but also room for two additional M.2 SSDs. The PCIe connector on the Intel Arc B580 has x16 physical pins but runs at PCIe 4.0 x8 speeds. Intel confirmed that the card uses only eight of the slot's sixteen lanes, leaving the other eight unused. However, MAXSUN found a clever way to put those leftover x8 lanes to good use by adding two PCIe x4 M.2 slots to the latest triple-fan iCraft B580 SKU. Power delivery for the M.2 drives comes directly from the graphics card, which is made possible by the GPU's partial PCIe lane utilization. This configuration could prove particularly valuable for compact builds or systems with limited motherboard storage options.
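For a quick sense of the lane math, here is a minimal back-of-the-envelope sketch in Python. The lane counts come from the article; how MAXSUN actually wires the slot is an assumption, since no board schematics have been published.

```python
# Lane budget for the iCraft B580 layout described above (lane counts from
# the article; the exact wiring is an assumption, not a published schematic).
SLOT_PHYSICAL_LANES = 16   # x16 connector
GPU_LANES = 8              # the B580 runs at PCIe 4.0 x8
M2_SLOTS = 2
LANES_PER_M2 = 4           # one PCIe 4.0 x4 link per drive

leftover = SLOT_PHYSICAL_LANES - GPU_LANES
assert leftover == M2_SLOTS * LANES_PER_M2, "lane budget doesn't balance"

# PCIe 4.0 carries roughly 1.97 GB/s per lane after 128b/130b encoding.
GBPS_PER_LANE = 1.97
print(f"GPU link: x{GPU_LANES} = ~{GPU_LANES * GBPS_PER_LANE:.1f} GB/s")
print(f"Each M.2: x{LANES_PER_M2} = ~{LANES_PER_M2 * GBPS_PER_LANE:.1f} GB/s")
```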
Interestingly, the SSD pair sits under its own thermal enclosure, which doubles as a heatsink. With constant airflow from the GPU's fans, the M.2 drives should be able to sustain their full bandwidth without thermal throttling cutting into read/write speeds. The design follows in the footsteps of AMD's Radeon Pro SSG, which introduced integrated storage on workstation cards with PCIe 3.0 M.2 slots. Back then, the feature targeted workstation users; now, MAXSUN envisions gamers using it as an unusual way to expand their storage. The card's pricing and availability date remain a mystery.
Source: VideoCardz
45 Comments on MAXSUN Designs Arc B580 GPU with Two M.2 SSDs, Putting Leftover PCIe Lanes to Good Use
As for how to mount them: if the PC case offers the option to mount the GPU vertically, I suppose someone just needs a PCIe x16 extension cable.
In any case, we didn't have such problems in the past, when motherboards offered 6-7 PCIe x16 slots and cost less than $100. Today, greedy motherboard makers use M.2 slots as an excuse to sell us empty PCBs for $300.
Right now, the only real hope for more client PCIe is for Intel/AMD to choose to add more, either through a wider DMI link or more CPU lanes.
I think motherboard makers simply removed valuable features that were common in the past to improve their profit margins. The elimination of SLI and CrossFire, together with the integration of the north bridge into the CPU, gave them the opportunity to simplify PCB design and, at the same time, replace features that need extra BIOS support with "better" looks that cost them nothing.
No bifurcation needed.
I wonder why we haven't seen other models copy the idea of an SSD as memory expansion, and whether Intel's Optane could have been used in cards targeting modern AI applications. Then again, Intel killed Optane.
Since I know Intel products better than AMD's, I'll use them as an example (limiting this to LGA1700 with Z/H series chipsets):
ADL/RPL both have 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0 from the CPU. The 4 lanes of PCIe 4.0 seem to be mandated by Intel for M.2 storage (a guess, based on only ever seeing those lanes used for M.2, and on MSI's MAX line, which lacked a CPU-attached PCIe 4.0 M.2 slot, not reusing those lanes for any of its PCIe slots). The 16 lanes of PCIe 5.0 can only run as x16 or x8/x8, and motherboard manufacturers have no way to change that, but the split can be implemented within the same slot or across two separate slots (or both, though if the primary slot is split on such a two-slot board, the second slot can't be used).
The DMI link on the Z and H series chipsets has PCIe 4.0 x8 bandwidth, so that's the ceiling between the CPU and chipset. Modern chipsets handle PCIe basically like a PCIe switch, which is where some flexibility comes in. Z790/H770 can both split up to 28 lanes (20 PCIe 4.0 and 8 PCIe 3.0), configurable as x1/x2/x4. I'm not aware of whether these lanes have bifurcation capability or are configured as wired, but I'd assume the latter, as you'll never see more than 20 PCIe 4.0 chipset lanes wired on any motherboard. So you end up with configurations like four PCIe 4.0 M.2 slots and one PCIe 4.0 slot from the chipset, without any further flexibility.
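Putting those numbers together, here is a quick back-of-the-envelope sketch of the uplink-versus-downstream math (lane counts from the post above; per-lane throughput uses the usual post-encoding approximations):

```python
# Rough Z790/H770 oversubscription math, using the lane counts above.
GEN4_PER_LANE = 1.97   # GB/s per PCIe 4.0 lane after 128b/130b encoding
GEN3_PER_LANE = 0.985  # GB/s per PCIe 3.0 lane

dmi_uplink = 8 * GEN4_PER_LANE                       # DMI has PCIe 4.0 x8 bandwidth
downstream = 20 * GEN4_PER_LANE + 8 * GEN3_PER_LANE  # all 28 chipset lanes saturated

print(f"DMI uplink ceiling:   {dmi_uplink:.1f} GB/s")
print(f"Downstream aggregate: {downstream:.1f} GB/s")
print(f"Oversubscription:     {downstream / dmi_uplink:.1f}:1")
```

In other words, a board can wire roughly three times more chipset bandwidth than the DMI link can carry at once, which works in practice only because storage and peripherals rarely all peak at the same time.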
Now, either something changed from PCIe 3.0 to PCIe 4.0 (some kind of limitation, maybe) that restricts every Intel or AMD CPU supporting PCIe 4.0 or 5.0 in some way, or I am probably right about greedy motherboard manufacturers who decided to improve their profit margins by designing simpler motherboards and still selling them at much higher prices.
I believe the cheapest LGA1700 board with two CPU-attached PCIe slots was Asus' W680 workstation board at ~$330; otherwise, they were pretty much only found on boards costing ~$500 or more.
Also, trace complexity increases and maximum trace length decreases with Gen4 and newer. Wouldn't it be funny if we had a return to the (short-lived) era of ATI (AMD) chipsets on Intel boards again? One could be used as a secondary FCH/southbridge alongside Intel's, AFAIK.
I'd love to see 'feature expansion' AICs, though. DirectStorage kinda promised this possibility without on-card ASIC/FPGA support like the Radeon Pro SSGs had.
Not all games let you, but with some ingenuity one could put (or point) live-loaded files and shader caches on those drives via PrimoCache, mounting an NTFS volume as a folder, symbolic links, etc.; see the sketch at the end of this post for the symlink route.
With this MAXSUN card, and others like it, pSLC-modded cheap QLC NVMe drives (Gen4 now, Gen5 eventually) or whatever Optane someone could get their hands on would work well for any kind of cache needs.
(CPU-connected lanes, no switch-added latency, good cooling, etc.)
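As a minimal sketch of the symbolic-link route mentioned above, here's one way to relocate a game's shader cache onto a drive sitting in the card's M.2 slot. All paths are hypothetical, and on Windows, creating symlinks requires administrator rights or Developer Mode.

```python
# Move a shader-cache folder onto a drive in the GPU's M.2 slot, then leave
# a symbolic link at the old location so the game still finds its cache.
# Paths below are hypothetical examples.
import os
import shutil

old_cache = r"C:\Users\me\AppData\Local\SomeGame\ShaderCache"  # original location
new_cache = r"E:\gpu-ssd\ShaderCache"                          # drive on the card

if not os.path.islink(old_cache):
    shutil.move(old_cache, new_cache)  # relocate the existing cache
    # Link the old path to the new one (target first, link name second).
    os.symlink(new_cache, old_cache, target_is_directory=True)
```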