Friday, August 4th 2017
AMD X399 Platform Lacks NVMe RAID Booting Support
AMD's connectivity-rich Ryzen Threadripper HEDT platform may have an Achilles' heel after all, with reports emerging that it lacks support for booting from NVMe RAID. You can still have bootable NVMe RAID volumes using NVMe RAID HBAs installed as PCI-Express add-on cards. Threadripper processors feature 64-lane PCI-Express gen 3.0 root complexes, which allow you to run at least two graphics cards at full x16 bandwidth and drop in other bandwidth-hungry devices such as multiple PCI-Express NVMe SSDs. Unfortunately for those planning on striping multiple NVMe SSDs in RAID, the platform lacks NVMe RAID booting support. You should still be able to build soft-RAID arrays striping multiple NVMe SSDs, just not boot from them, so prosumers will still be able to dump their heavy data sets onto such soft-arrays. This limitation is probably due to PCI-Express lanes emerging from different dies on the Threadripper MCM, which could make it problematic for the system BIOS to boot from an array spanning them.
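For those planning such a data-only soft-array under Linux, here is a minimal sketch of the idea, using Python to drive mdadm. The device names, array node, and mount point are placeholders for illustration, not a tested Threadripper configuration:

```python
# Minimal sketch: stripe two NVMe SSDs into a non-bootable RAID 0 data
# volume with mdadm. Assumes Linux, mdadm installed, and root privileges;
# device names below are hypothetical -- adjust for your system.
import os
import subprocess

DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]  # hypothetical NVMe SSDs
ARRAY = "/dev/md0"
MOUNTPOINT = "/mnt/scratch"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Stripe the drives into a RAID 0 array (no redundancy, pure bandwidth).
run(["mdadm", "--create", ARRAY, "--level=0",
     "--raid-devices=" + str(len(DEVICES)), *DEVICES])

# Format and mount it as a plain data volume; the OS itself still boots
# from a separate, non-RAID drive.
run(["mkfs.ext4", ARRAY])
os.makedirs(MOUNTPOINT, exist_ok=True)
run(["mount", ARRAY, MOUNTPOINT])
```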
Ryzen Threadripper is a multi-chip module (MCM) of two 8-core "Summit Ridge" dies. Each 14 nm "Summit Ridge" die features 32 PCI-Express lanes. On a socket AM4 machine, 4 of those 32 lanes are used as the chipset-bus, leaving 28 for the rest of the machine: 16 of those head to up to two PEG (PCI-Express Graphics) ports (either one x16 or two x8 slots), and the remaining 12 lanes are spread among M.2 slots and other onboard devices. On a Threadripper MCM, one of the two "Summit Ridge" dies has chipset-bus access; 16 lanes from each die head to PEG (a total of four PEG ports, either as two x16 or four x8 slots); and the remaining lanes are general-purpose, driving high-bandwidth devices such as USB 3.1 controllers, 10 GbE interfaces, and several M.2 and U.2 ports.

There is always the likelihood of two M.2/U.2 ports being wired to different "Summit Ridge" dies, which could pose issues in getting RAID to work reliably, and which is probably the reason NVMe RAID booting won't work. The X399 chipset, however, does support RAID on the SATA ports it puts out. Up to four SATA 6 Gb/s ports on a socket TR4 motherboard can be wired directly to the processor, as each "Summit Ridge" die puts out two ports. This presents its own set of RAID issues. The general rule of thumb here is that you'll be able to create bootable RAID arrays only between disks connected to the same exact SATA controller. By default, you have three controllers - one from each of the two "Summit Ridge" dies, and one integrated into the X399 chipset - and the platform supports up to 10 ports in total. You will hence be able to boot from SATA RAID arrays, provided they're built from ports on the same controller; however, booting from NVMe RAID arrays will not be possible.
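To make the lane budget concrete, here is the arithmetic the above description implies, worked through in a few lines of Python. The split follows the text's own figures, not an official AMD block diagram:

```python
# Back-of-the-envelope PCIe lane budget for the Threadripper MCM,
# using the numbers given in the article.
LANES_PER_DIE = 32
DIES = 2

total = LANES_PER_DIE * DIES      # 64 lanes on the MCM
chipset = 4                       # only one die feeds the X399 chipset
peg = 16 * DIES                   # 16 lanes per die go to PEG slots
general = total - chipset - peg   # left for M.2/U.2, USB 3.1, 10 GbE

print(f"total={total}, chipset={chipset}, PEG={peg}, general={general}")
# -> total=64, chipset=4, PEG=32, general=28
```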
Source:
Tom's Hardware
75 Comments on AMD X399 Platform Lacks NVMe RAID Booting Support
Omg, that's unforgivable. Look at all those people with a bunch of NVMe SSDs crying out loud: "You let us down AMD, instead of booting to shitty Win 10 in 10 sec now I need 13 sec, oh the horror..." You get the point, do you?
HEDT = High-End Desktop
If it's OK, buy it; if not, leave it...
So in other words, it's a software/UEFI related issue, as I mentioned in my first post in this thread.
So now it's switching from Windows to Linux, using software RAID and putting /boot on one of the drives. I think I preferred the previous variant. :-D
Is it even possible to do that on a mainstream distribution (Ubuntu, Debian, Mint, etc.) without editing dozens of config files?
What about the stuff that I want to run before mdadm?
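For what it's worth, on a Debian-family distribution the pieces that run "before mdadm" live in the initramfs, which assembles the array before root is mounted, while /boot stays on a plain partition the bootloader can read directly. A rough sketch of registering an existing array, assuming mdadm is already managing it and paths follow Debian conventions:

```python
# Sketch of the "separate /boot" arrangement discussed above, assuming a
# Debian/Ubuntu-style Linux install with an existing mdadm array.
# Paths are distro-specific assumptions.
import subprocess

# Capture the running array's definition and record it in mdadm.conf so
# the initramfs knows to assemble it at boot.
scan = subprocess.run(["mdadm", "--detail", "--scan"],
                      capture_output=True, text=True, check=True)
with open("/etc/mdadm/mdadm.conf", "a") as conf:
    conf.write(scan.stdout)

# Rebuild the initramfs; this is what runs "before mdadm" from the user's
# point of view -- it assembles the array before root is mounted.
subprocess.run(["update-initramfs", "-u"], check=True)

# /boot itself is a normal partition (e.g. /dev/nvme0n1p1) outside the
# array, which is what lets the system boot even though root is RAID 0.
```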
For $350, around 33% more, you can get one with ~70% faster reads (Intel 600p), and it smokes it in IOPS. Also, your RAID card wasn't free; that cost should be included. ;)
Or spend $200 more, around 80%, for a 300% performance increase... a 1 TB 960 EVO.
I'm not saying it's the right move, but there are certainly benefits to having a single, much faster M.2 drive versus two SATA drives in RAID 0 on a RAID card. Cost/GB isn't there, but the performance, shorter boot times from not having to POST the RAID ROM, and a lower chance of an array crapping out are real.
www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-960-evo-m-2-1tb-mz-v6e1t0bw/?cid=pla-ecom-mul-27,000,002
OK, so a drive on sale and a used RAID card were $260; many may not have that opportunity. I'm just saying there are use cases for a single NVMe drive at a higher cost. It's up to the buyer to determine if those costs are worth it. Not a huge amount of real-world performance increase, but it's there.
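Taking the thread's own numbers at face value, the premiums work out roughly as quoted. A quick check, with all figures taken from the posts above rather than independent benchmarks:

```python
# Sanity-checking the price math in the comments above; all dollar figures
# are the commenters' own, not verified prices or benchmarks.
sata_raid0 = 260                # two SATA drives + used RAID card ($)
intel_600p = 350                # single NVMe drive option
evo_960_1tb = sata_raid0 + 200  # "spend $200 more" for the 1 TB 960 EVO

print(f"600p premium: {(intel_600p - sata_raid0) / sata_raid0:.0%}")
# -> 35%, close to the quoted "around 33%"
print(f"960 EVO premium: {(evo_960_1tb - sata_raid0) / sata_raid0:.0%}")
# -> 77%, close to the quoted "around 80%"
```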
It makes no sense why this couldn't be done.
Granted, considering how much is being spent on the platform already, it would be nice to have the option.
Board-level "hardware" RAID is barely hardware RAID; it's CPU-bound, which shares the negative impact of Windows "software" RAID.
In some cases, OS-level RAID can actually be better than basic onboard RAID, due to better recovery options when hardware fails.
Lastly, on boot times... RAID 0 often boots slower than non-RAID due to RAID detection time; everything is faster after that.
With that said, on servers (including VMs), the boot disk is rarely the largest or fastest drive in the machine, so booting from NVMe RAID seems silly to me. You can always mount something after the kernel has loaded, even on Windows.