Friday, August 4th 2017

AMD X399 Platform Lacks NVMe RAID Booting Support

AMD's connectivity-rich Ryzen Threadripper HEDT platform may have an Achilles' heel after all, with reports emerging that it lacks support for booting from NVMe RAID. You can still have bootable NVMe RAID volumes by using NVMe RAID HBAs installed as PCI-Express add-on cards. Threadripper processors feature 64-lane PCI-Express gen 3.0 root complexes, which let you run at least two graphics cards at full x16 bandwidth and drop in other bandwidth-hungry devices, such as multiple PCI-Express NVMe SSDs. Unfortunately for those planning on striping multiple NVMe SSDs in RAID, the platform lacks NVMe RAID booting support. You should still be able to build soft-RAID arrays striping multiple NVMe SSDs, just not boot from them. Prosumers will still be able to dump their heavy data-sets onto such soft-arrays. This limitation is probably due to PCI-Express lanes emerging from different dies on the Threadripper MCM, which could make it difficult for the system BIOS to boot from such arrays.
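On Linux, such a non-bootable striped array can be built with mdadm. A minimal sketch (the device names /dev/nvme0n1 and /dev/nvme1n1 and the mount point are illustrative and will vary per system; all commands require root):

```shell
# Stripe two NVMe SSDs into a RAID-0 soft-array for bulk data sets.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

mkfs.ext4 /dev/md0          # put a filesystem on the striped array
mkdir -p /mnt/scratch
mount /dev/md0 /mnt/scratch # usable for data, but not as a boot volume
```

Because the stripe is assembled by the kernel after boot, it sidesteps the BIOS limitation entirely; only booting *from* the array is off the table.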

Ryzen Threadripper is a multi-chip module (MCM) of two 8-core "Summit Ridge" dies. Each 14 nm "Summit Ridge" die features 32 PCI-Express lanes. On a socket AM4 machine, 4 of those 32 lanes serve as the chipset-bus, leaving 28 for the rest of the machine. 16 of those head to up to two PEG (PCI-Express Graphics) ports (either one x16 or two x8 slots), and the remaining 12 lanes are spread among M.2 slots and other onboard devices. On a Threadripper MCM, one of the two "Summit Ridge" dies has chipset-bus access; 16 lanes from each die head to PEG (a total of four PEG ports, either as two x16 or four x8 slots); the remaining lanes are general-purpose, driving high-bandwidth devices such as USB 3.1 controllers, 10 GbE interfaces, and several M.2 and U.2 ports.
There is always the likelihood of two M.2/U.2 ports being wired to different "Summit Ridge" dies, which could pose issues in getting RAID to work reliably, and which is probably why NVMe RAID booting won't work. The X399 chipset, however, does support RAID on the SATA ports it puts out. Up to four SATA 6 Gb/s ports on a socket TR4 motherboard can be wired directly to the processor, as each "Summit Ridge" die puts out two ports. This presents its own set of RAID issues. The general rule of thumb here is that you'll be able to create bootable RAID arrays only between disks connected to the same SATA controller. By default, you have three controllers - one from each of the two "Summit Ridge" dies, and one integrated into the X399 chipset. The platform supports up to 10 ports. You will hence be able to boot from SATA RAID arrays, provided they're built from ports on the same controller; booting from NVMe RAID arrays, however, will not be possible.
Source: Tom's Hardware

75 Comments on AMD X399 Platform Lacks NVMe RAID Booting Support

#51
notb
AquinusWe are talking about boot device here, aren't we? It's not like you can't do this after the machine has started. My point is for how edge-case this is, there are options to get around it that aren't unreasonable.
"Aren't unreasonable" like switching from Windows to Linux and booting kernel from a flash drive? :-)
Posted on Reply
#52
Hossein Almet
There is no way I am going to put all my eggs into one basket. When I changed platforms and migrated the OS to the new NVMe drive, the system was not able to boot afterwards, and I had to do a clean install of the OS. Luckily all my important files were stored on 2 other drives. I lost no files, just time :)
Posted on Reply
#53
kastriot

Omg, that's unforgivable. Look at all those people with a bunch of NVMe SSDs crying out loud: "you let us down, AMD; instead of booting to shitty Win 10 in 10 sec, now I need 13 sec, oh the horror..." You get the point, do you?
Posted on Reply
#54
Rahmat Sofyan
No big deal here... it's a Threadripper, not a RAID-ripper or boot-ripper :) ..

HEDT = High-End Desktop

If it's OK, buy it; if not, leave it . .
Posted on Reply
#56
lexluthermiester
Farmer BoeOh the horror! But seriously, does anyone here actually boot off multiple NVMe SSD's? Seems a bit ridiculous with the speed those already provide.
Actually, yes. Sort of. I boot from dual M.2's in RAID 0. They're not NVMe, but whatever. It's fast as hell, so why not?
Posted on Reply
#57
Aquinus
Resident Wat-man
lexluthermiesterActually, yes. Sort of. I boot from dual M.2's in RAID 0. They're not NVMe, but whatever. It's fast as hell, so why not?
Because there are NVMe devices (particularly those made by Samsung) that are capable of doing what your SATA-based M.2 RAID-0 setup can do with a single device. Write speeds are going north of 2 GB/s and read speeds north of 3 GB/s. Compare that to the 1 GB/s I get with SATA3 RAID-0 across two devices, and you quickly see why people like me think that a single NVMe device is enough for a boot drive.
Posted on Reply
#58
lexluthermiester
AquinusBecause there are NVMe devices (particularly those made by Samsung) that are capable of doing what your SATA-based M.2 RAID-0 setup can do with a single device. Write speeds are going north of 2 GB/s and read speeds north of 3 GB/s. Compare that to the 1 GB/s I get with SATA3 RAID-0 across two devices, and you quickly see why people like me think that a single NVMe device is enough for a boot drive.
And now imagine two of them in tandem. So instead of 20-second bootups, you get 12 to 13 seconds. If someone has the money and wants to, why not? I say have at it. Oh, and just FYI, my setup gets a nominal 960 MB/s average. There are very few NVMe drives that can even approach that number consistently.
Posted on Reply
#59
hellrazor
notb"Aren't unreasonable" like switching from Windows to Linux and booting kernel from a flash drive? :-)
You realize you can make a (maybe) 100MB partition on the SSD for /boot so that GRUB can load the kernel, while loading everything else off of the software RAID, right?
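In rough terms, the layout looks like this (a sketch only; the device names /dev/nvme0n1 and /dev/nvme1n1 are assumed, and the commands need root):

```shell
# Small plain /boot partition GRUB can read directly, root on a stripe.
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart boot ext4 1MiB 512MiB   # /boot, no RAID
parted -s /dev/nvme0n1 mkpart raid 512MiB 100%
parted -s /dev/nvme1n1 mklabel gpt
parted -s /dev/nvme1n1 mkpart raid 1MiB 100%

# Stripe the two large partitions into the root array.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/nvme0n1p2 /dev/nvme1n1p2

mkfs.ext4 /dev/nvme0n1p1   # /boot (kernel + GRUB files)
mkfs.ext4 /dev/md0         # / (everything else, on software RAID)
```

GRUB only needs to read the kernel and initramfs from the plain partition; the initramfs then assembles /dev/md0 and mounts it as root.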
Posted on Reply
#60
notb
hellrazorYou realize you can make a (maybe) 100MB partition on the SSD for /boot so that GRUB can load the kernel, while loading everything else off of the software RAID, right?
Ouch.
So now it's switching from Windows to Linux, using software RAID and putting /boot on one of the drives. I think I preferred the previous variant. :-D

Is it even possible to do that on a mainstream distribution (Ubuntu, Debian, Mint etc) without editing dozens of config files?
What about the stuff that I want to run before mdadm?
Posted on Reply
#61
hellrazor
notbOuch.
So now it's switching from Windows to Linux, using software RAID and putting /boot on one of the drives. I think I preferred the previous variant. :-D

Is it even possible to do that on a mainstream distribution (Ubuntu, Debian, Mint etc) without editing dozens of config files?
What about the stuff that I want to run before mdadm?
You can do all of that before you install with a live CD.
Posted on Reply
#62
R-T-B
hellrazorSilly Windows users.
Actually, Windows can. At least Windows Server can (or could). I did it for years.
Posted on Reply
#63
Frick
Fishfaced Nincompoop
hellrazorYou can do all of that before you install with a live CD.
So there's like a box to check for it in the install GUI, or is it like two or three buttons tops to click?
Posted on Reply
#64
Aquinus
Resident Wat-man
lexluthermiesterAnd now imagine two of them in tandem. So instead of 20-second bootups, you get 12 to 13 seconds. If someone has the money and wants to, why not? I say have at it. Oh, and just FYI, my setup gets a nominal 960 MB/s average. There are very few NVMe drives that can even approach that number consistently.
You're right, 3 GB/s usually isn't sustained, but 2 GB/s is for reads. Same thing for writes: 2 GB/s might not be sustained, but >1 GB/s is. 2 GB/s is double the speed of your RAID-0 with a single device (Samsung 960 Pro), and you're forgetting that boot devices like random read performance, which almost never scales in RAID-0. My mid-2015 MacBook Pro, which is two years old, can do practically 800 MB/s with the NVMe card that came with the laptop.
Posted on Reply
#65
thesmokingman
This thread is about as relevant as that thread where people whined and complained about the torque of the Torx tool by AMD.
Posted on Reply
#66
Slizzo
erockerBut I can still raid non-boot drives. I'm fine with that... If I were to buy a Threadripper... Which I'm not.
Yeah, I mean in an age where a 960 EVO gets 3,200 MB/s of bandwidth, do we really need NVMe bootable RAID?
Posted on Reply
#67
lexluthermiester
AquinusYou're right, 3 GB/s usually isn't sustained, but 2 GB/s is for reads. Same thing for writes: 2 GB/s might not be sustained, but >1 GB/s is. 2 GB/s is double the speed of your RAID-0 with a single device (Samsung 960 Pro), and you're forgetting that boot devices like random read performance, which almost never scales in RAID-0. My mid-2015 MacBook Pro, which is two years old, can do practically 800 MB/s with the NVMe card that came with the laptop.
And that's a good point. My burst speeds range up to 1.6 GB/s. My point is that I'm using two 480 GB MLC drives on a bootable RAID card for $260. You show me a single 960 GB NVMe drive that can match the performance you state for the same price or less, and I'll go buy it.
Posted on Reply
#68
EarthDog
Burst uses cache for that value. Otherwise, you aren't breaking 1.1 GB/s or so, as that is how fast the drives are.

For $350, around 33% more, you can get one with ~70% faster reads (Intel 600p).. and it smokes yours in IOPS. Also, your RAID card wasn't free; that cost should be included. ;)

Or spend $200 more, around 80%, for a 300% performance increase... 1TB 960 EVO.

I'm not saying it's the right move, but there are certainly benefits to having a single, much faster M.2 drive versus two SATA drives in R0 on a RAID card. Cost/GB isn't there, but performance, shorter boot times from not having to POST the RAID ROM, and less chance of an array crapping out are real.
Posted on Reply
#69
lexluthermiester
EarthDogBurst uses cache for that value. Otherwise, you aren't breaking 1.1 GB/s or so, as that is how fast the drives are.

For $350, around 33% more, you can get one with ~70% faster reads (Intel 600p).. and it smokes yours in IOPS. Also, your RAID card wasn't free; that cost should be included. ;)

Or spend $200 more, around 80%, for a 300% performance increase... 1TB 960 EVO.

I'm not saying it's the right move, but there are certainly benefits to having a single, much faster M.2 drive versus two SATA drives in R0 on a RAID card. Cost/GB isn't there, but performance, shorter boot times from not having to POST the RAID ROM, and less chance of an array crapping out are real.
That cost included the RAID card, which was used [but in perfect condition]. I got the drives on sale, new. You seem to have missed the part where I said they were M.2 drives. But they're not NVMe, which is what Aquinus and I were talking about.
Posted on Reply
#72
EarthDog
lexluthermiesterThat cost included the RAID card, which was used [but in perfect condition]. I got the drives on sale, new. You seem to have missed the part where I said they were M.2 drives. But they're not NVMe, which is what Aquinus and I were talking about.
I know they aren't NVMe... SATA-based M.2, 550 MB/s-read drives.. the 1.1 GB/s value I'm talking about is in R0. I didn't specify NVMe when I said M.2, and then referred to SATA as a protocol. I can see why you thought that. :)

Ok, so a drive on sale and a used RAID card were $260... many may not have that opportunity. I'm just saying there are use cases for a single NVMe drive at a higher cost. It's up to the buyer to determine if those costs are worth it. Not a huge amount of real-world performance increase, but it is there.
Posted on Reply
#73
Gasaraki
Farmer BoeOh the horror! But seriously, does anyone here actually boot off multiple NVMe SSD's? Seems a bit ridiculous with the speed those already provide.
I'm sorry, this is a HEDT platform. Who NEEDS 16 cores 32 threads? Who NEEDS 64 GB of RAM? Who NEEDS a Ferrari? HEDT is for people who WANT the BIGGEST e-peen. I WANT to run my 2x NVMe drives in RAID 0.

It makes no sense why this couldn't be done.
Posted on Reply
#74
niko084
For someone needing that kind of drive throughput (not sure what for), how expensive is a RAID controller? Considering how much is being spent already... granted, it would be nice to have the option.

Board-level "hardware" RAID is barely hardware RAID; it's CPU-bound, which shares the negative impact of Windows "software" RAID.
In some cases, OS-level RAID can actually be better than basic onboard RAID due to recovery options when hardware fails.

Lastly, on boot times... RAID 0 often boots slower than non-RAID due to array detection time; everything is faster after the fact.
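To illustrate those recovery options: a Linux md array can be re-assembled from the metadata on its member disks even after a motherboard or controller swap, something onboard fake-RAID often can't do. A minimal sketch (assumes the array was built as /dev/md0; requires root):

```shell
# Scan all disks for md superblocks and re-assemble known arrays.
mdadm --assemble --scan

cat /proc/mdstat          # check that the array came up and is healthy
mdadm --detail /dev/md0   # inspect member devices and array state
```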
Posted on Reply
#75
Aquinus
Resident Wat-man
GasarakiI'm sorry, this is a HEDT platform. Who NEEDS 16 cores 32 threads? Who NEEDS 64 GB of RAM? Who NEEDS a Ferrari? HEDT is for people who WANT the BIGGEST e-peen. I WANT to run my 2x NVMe drives in RAID 0.
...or for software engineers who write software for servers, or for people who do genomics, or for people who do machine learning and/or big data processing. The people who need a machine like this aren't using them for video games alone; they're using it for real practical purposes. If I'm writing some software, having a machine with more cores tells me whether what I've built will scale. With the 3820, I just assume that if I can saturate all of the physical cores, it's good enough, but I don't really know if it will scale to something more; so when I deploy stuff to, say, Google Cloud Platform, I usually opt for smaller VM instances. Having memory lets me do things like extra caching for a database (which can improve PostgreSQL performance by leaps and bounds), or it could let me use something like Redis for caching. It can also let me spin up VMs that reflect an entire system, because I have the physical hardware to actually support it, as opposed to needing a cloud solution that costs money. Simply put, the people who buy a machine like this for gaming alone are indeed doing it just because they can and would like to gloat about it; however, not everyone who buys something like this is doing it for that reason.

With that said, on servers (including VMs), the boot disk is rarely the largest or fastest drive in the machine, so booting from NVMe RAID seems silly to me. You can always mount something after the kernel has loaded, even on Windows.
Posted on Reply