Friday, August 4th 2017

AMD X399 Platform Lacks NVMe RAID Booting Support

AMD's connectivity-rich Ryzen Threadripper HEDT platform may have an Achilles' heel after all, with reports emerging that it lacks support for booting from NVMe RAID. You can still have bootable NVMe RAID volumes using NVMe RAID HBAs installed as PCI-Express add-on cards. Threadripper processors feature 64-lane PCI-Express gen 3.0 root complexes, which allow you to run at least two graphics cards at full x16 bandwidth and drop in other bandwidth-hungry devices such as multiple PCI-Express NVMe SSDs. Unfortunately for those planning on striping multiple NVMe SSDs in RAID, the platform lacks NVMe RAID booting support. You should still be able to build soft-RAID arrays striping multiple NVMe SSDs, just not boot from them. Pro-sumers will still be able to dump their heavy data-sets onto such soft-arrays. This limitation is probably due to PCI-Express lanes emerging from different dies on the Threadripper MCM, which could present problems for the system BIOS when booting.

Ryzen Threadripper is a multi-chip module (MCM) of two 8-core "Summit Ridge" dies. Each 14 nm "Summit Ridge" die features 32 PCI-Express lanes. On a socket AM4 machine, 4 of those 32 lanes are used as chipset-bus, leaving 28 for the rest of the machine. 16 of those head to up to two PEG (PCI-Express Graphics) ports (either one x16 or two x8 slots); and the remaining 12 lanes are spread among M.2 slots, and other onboard devices. On a Threadripper MCM, one of the two "Summit Ridge" dies has chipset-bus access; 16 lanes from each die head to PEG (a total of four PEG ports, either as two x16 or four x8 slots); while the remaining are general purpose; driving high-bandwidth devices such as USB 3.1 controllers, 10 GbE interfaces, and several M.2 and U.2 ports.
There is always the likelihood of two M.2/U.2 ports being wired to different "Summit Ridge" dies, which could pose issues in getting RAID to work reliably, and which is probably why NVMe RAID booting won't work. The X399 chipset, however, does support RAID on the SATA ports it puts out. Up to four SATA 6 Gb/s ports on a socket TR4 motherboard can be wired directly to the processor, as each "Summit Ridge" die puts out two ports. This presents its own set of RAID issues. The general rule of thumb here is that you'll be able to create bootable RAID arrays only between disks connected to the same SATA controller. By default, you have three controllers - one on each of the two "Summit Ridge" dies, and one integrated into the X399 chipset. The platform supports up to 10 ports. You will hence be able to boot from SATA RAID arrays, provided they're built from disks on the same controller; booting from NVMe RAID arrays, however, will not be possible.
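To make the lane arithmetic above concrete, here is a back-of-the-envelope sketch using only the figures quoted in the article; the exact per-board allocation will vary by motherboard.

```python
# Back-of-the-envelope PCIe lane budget for Threadripper, per the article.
LANES_PER_DIE = 32   # each 14 nm "Summit Ridge" die
DIES = 2
CHIPSET_LINK = 4     # only one of the two dies carries the chipset bus

total = LANES_PER_DIE * DIES        # 64 lanes on the package
usable = total - CHIPSET_LINK       # 60 general-purpose lanes

peg = 16 * DIES                     # 16 lanes from each die head to PEG slots
general = usable - peg              # left for M.2/U.2, USB 3.1, 10 GbE, etc.

print(total, usable, peg, general)  # 64 60 32 28
```

This is where the often-quoted "60 usable lanes" figure for the platform comes from.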
Source: Tom's Hardware

75 Comments on AMD X399 Platform Lacks NVMe RAID Booting Support

#26
mcraygsx
I am kind of astonished at the language being used here. Did TECHPOWERUP not get their customized review kit from AMD?
"Ryzen Threadripper HEDT platform may have an Achilles' heel after all" is too far-fetched even for Techpowerup.

Reminds me of how HARDOCP used to trash AMD every chance they got, especially right after Kyle did not get invited to one of AMD's Polaris events. But the attitude changed right after they sent him a couple of samples.

For real, there might be less than 1% of TR consumers who would RAID 0 two of their PCIe x4 NVMe drives.
Posted on Reply
#27
cdawall
where the hell are my stars
mcraygsxI am kind of astonished at the language being used here. Did TECHPOWERUP not get their customized review kit from AMD?
"Ryzen Threadripper HEDT platform may have an Achilles' heel after all" is too far-fetched even for Techpowerup.

Reminds me of how HARDOCP used to trash AMD every chance they got, especially right after Kyle did not get invited to one of AMD's Polaris events. But the attitude changed right after they sent him a couple of samples.

For real, there might be less than 1% of TR consumers who would RAID 0 two of their PCIe x4 NVMe drives.
So? That doesn't mean it shouldn't support it for the 1% who do
Posted on Reply
#28
mcraygsx
cdawallSo? That doesn't mean it shouldn't support it for the 1% who do
Once more, if you can't read: of course these small issues should be reported for the sake of journalism, but using biased/strong words like "Achilles' heel" is taking it too far. This is just one disadvantage that was found, and reviews are not even out yet. What happens when the next flaw is found in TR? Are we going to call it a total disappointment and not recommended?

It takes just one person to start a fire, and headlines like this do a pretty good job.
Posted on Reply
#29
cdawall
where the hell are my stars
mcraygsxOnce more, if you can't read: of course these small issues should be reported for the sake of journalism, but using strong words like "Achilles' heel" is taking it too far.
I don't disagree with that. I imagine the Achilles' heel will more likely be the Infinity Fabric connecting the modules.
Posted on Reply
#30
notb
mcraygsxFor real, there might be less than 1% of TR consumers who would RAID 0 two of their PCIe x4 NVMe drives.
This is quite an interesting estimation...

I'm fairly sure quite an opposite observation is true.
I'd say that for a significant part of people that actually buy into HEDT, disks only come in RAID setups.

It's a bit like if TR was - for whatever reason - not usable for solving PDEs. Who cares, right?

But here comes the actual problem.
Threadripper is known to be just a cut-down EPYC server CPU. Does that mean EPYC can't boot from a RAID as well? :-)
Posted on Reply
#31
john_
I don't see a huge problem here. I mean, if someone wants to boot in 4.3 seconds instead of 5.6 seconds, yes, this is a problem. In every other case it is just the annoyance of having probably one more drive and one more partition in your system, and nothing more.
Posted on Reply
#32
thesmokingman
mcraygsxI am kind of astonished at the language being used here. Did TECHPOWERUP not get their customized review kit from AMD?
"Ryzen Threadripper HEDT platform may have an Achilles' heel after all" is too far-fetched even for Techpowerup.

Reminds me of how HARDOCP used to trash AMD every chance they got, especially right after Kyle did not get invited to one of AMD's Polaris events. But the attitude changed right after they sent him a couple of samples.

For real, there might be less than 1% of TR consumers who would RAID 0 two of their PCIe x4 NVMe drives.
Yeah, I've noticed the huge upswing in the anti-AMD stance.
Posted on Reply
#33
newtekie1
Semi-Retired Folder
GasarakiNot true. It works with all NVMe drives.
DimiYou don't need any keys/dongles to have a bootable raid 1 or 0 nvme raid configuration on x299 with ANY nvme drive.
Vroc is a different feature.
Yes, after doing some research which I admit I should have done before commenting, that is sort of correct.

If the slot is wired to the chipset, then yes you can use any brand drive in RAID and more than just RAID-0. Of course, this has the disadvantage of routing the data through the slower chipset path instead of directly to the CPU. It will limit the access speed from the rest of the system to the NVMe array to about that of a PCI-E 3.0 x4 link. If the slot is wired to the CPU, then you have to use VROC for RAID.
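For context on why that chipset uplink becomes the ceiling, here is a rough PCI-E 3.0 bandwidth estimate (8 GT/s signaling per lane with 128b/130b encoding). The result lands slightly above the ~3.84 GB/s figure quoted elsewhere in this thread; real-world throughput is lower still once protocol overhead is accounted for.

```python
# Rough raw bandwidth of a PCI-E 3.0 x4 link (the chipset uplink).
GT_PER_S = 8.0           # PCI-E 3.0 signaling rate per lane, in GT/s
ENCODING = 128 / 130     # 128b/130b line encoding overhead
LANES = 4

gb_per_s = GT_PER_S * ENCODING * LANES / 8   # divide by 8: bits -> bytes
print(f"{gb_per_s:.2f} GB/s")                # ~3.94 GB/s before protocol overhead
```

A single fast NVMe drive can approach this figure on its own, which is why striping two of them behind a shared x4 chipset link yields little benefit.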
Posted on Reply
#34
Aquinus
Resident Wat-man
This isn't SATA3. If 4x lanes of PCI-E 3.0 (~3.84 GB/s) isn't enough for your boot drive, then you might as well just put everything you need into system memory.
hellrazorSilly Windows users.
:toast:
Posted on Reply
#35
The Von Matrices
With AMD announcing that it is deprecating CrossFire and now not supporting bootable NVMe RAID, those 60 PCIe lanes that everyone was ogling look more and more superfluous every day.
Posted on Reply
#36
cdawall
where the hell are my stars
AquinusThis isn't SATA3. If 4x lanes of PCI-E 3.0 (~3.84 GB/s) isn't enough for your boot drive, then you might as well just put everything you need into system memory.

:toast:
The Von MatricesWith AMD announcing that it is deprecating CrossFire and now not supporting bootable NVMe RAID, those 60 PCIe lanes that everyone was ogling look more and more superfluous every day.
Depends how efficient the CPU is. If it can pull in $3-4 a day and support 8-12 video cards, then the platform will be phenomenal for miners. "Threadripper and 12 Vega cards per rig" has a nice ring to it.
Posted on Reply
#37
notb
thesmokingmanYea, I've noticed the huge upswing in the anti-amd stance.
Well... I've noticed a huge shift of discussion topics from gaming to server/workstation matters, because of how much better Zen is at compressing files than at running games.
The sad part is that, while the discussion themes changed, the people (their knowledge) didn't.

When a thread is about RAID and the majority of comments are about NVMe speed and OS boot time, it kind of says it all...
Posted on Reply
#38
LogitechFan
but but but you still have all those PCIe lanes for all your...... 5400rpm HDDs to connect muahahaha
Posted on Reply
#39
Aquinus
Resident Wat-man
cdawallDepends how efficient the CPU is. If it can pull in $3-4 a day and support 8-12 video cards, then the platform will be phenomenal for miners. "Threadripper and 12 Vega cards per rig" has a nice ring to it.
What does mining have to do with running NVMe devices in RAID other than using up PCI-E lanes that could be used for GPUs instead of NVMe? Very few situations require highly available persistent storage I/O of over 2GB/s and in those situations, I would argue that using an NVMe device suited to using more PCI-E lanes would be a better option than doing RAID, particularly for small file or random read/write operations.

My bigger point was that disk I/O has gotten fast enough that RAID-0 doesn't make a whole lot of sense. I did it with SATA3, not even for speed, but because, at the time, a RAID-0 of two 120 GB drives was cheaper than a single 240 GB drive and just happened to be a little faster in certain situations. However, with how cheap SSD storage has become and how fast NVMe devices are getting, I see very little reason to do RAID-0 with NVMe devices. If you need something that fast but require a replica, I would argue that something more eventually consistent would allow you to retain more performance while sacrificing a minute or so worth of written data in the case of catastrophe.

All in all, I think that this article is a non-issue and isn't even worthy of the attention it is receiving. It's not a realistic need that solves any real tangible problem.
Posted on Reply
#40
cdawall
where the hell are my stars
AquinusWhat does mining have to do with running NVMe devices in RAID other than using up PCI-E lanes that could be used for GPUs instead of NVMe?
The question was "what is the point of 60 lanes?"

That is one scenario that would use them. Others would be deep learning, CAD/CAM, video capture, etc. Plenty of uses for PCIe lanes outside of NVMe devices.
Posted on Reply
#42
Dave65
GasarakiYou can't boot off of software raid.
THIS!
Posted on Reply
#43
SKD007
I just placed an order for an M.2 NVMe drive for my C6H... hope it will be really fast for Windows and the page file.
Posted on Reply
#44
Nkd
OMG! What a deal breaker! Buy a server CPU if you need all the server features. lol
Posted on Reply
#45
notb
NkdOMG! What a deal breaker! Buy a server CPU if you need all the server features. lol
RAID is not a server feature. It's used in consumer PCs, but more importantly in workstations. That is, if Threadripper is meant to be a workstation CPU (which is unlikely).

As you've said, it is servers where RAID becomes crucial. And the worst thing is that TR is an EPYC underneath. So will EPYC be able to boot from RAID?
Posted on Reply
#46
ssdpro
thesmokingmanYea, I've noticed the huge upswing in the anti-amd stance.
Way too emotional. Just read the first page... lots of pro AMD there.
Posted on Reply
#47
Aquinus
Resident Wat-man
Dave65THIS!
@hellrazor's quote applies. In Linux, there is absolutely nothing stopping me from using mdadm to do software RAID for root. Basically, only the kernel needs to live outside the software RAID to get the system started, which means you could literally use something as simple as a flash drive to boot into a software RAID array.
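As a sketch of what that mdadm setup looks like (hypothetical device names; the commands need root and real NVMe block devices, so treat this as illustrative rather than copy-paste):

```shell
# Stripe two NVMe drives into a software RAID-0 array with mdadm.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

# Put a filesystem on the array and mount it as a data volume.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/fastdata

# Persist the array definition so it assembles automatically on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

For an actual root-on-md setup, the kernel and initramfs still have to live somewhere the firmware can read directly, such as a plain EFI system partition, which is exactly the point being made above.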
hellrazorSilly Windows users.
Posted on Reply
#48
newtekie1
Semi-Retired Folder
AquinusThis isn't SATA3. If 4x lanes of PCI-E 3.0 (which is ~3.84GB/s,) isn't enough for your boot drive, then you might as well just put everything you need into system memory.

:toast:
Thing is, a single NVMe drive could theoretically use all the bandwidth of a PCI-E 3.0 x4 link. So if you are RAIDing the boot drive for the purpose of increased speed, the x4 link to the chipset becomes the bottleneck. That is the whole reason why we are moving towards connecting the drives directly to the CPU.
Posted on Reply
#49
Aquinus
Resident Wat-man
newtekie1Thing is, a single NVMe drive could theoretically use all the bandwidth of a PCI-E 3.0 x4 link. So if you are RAIDing the boot drive for the purpose of increased speed, the x4 link to the chipset becomes the bottleneck. That is the whole reason why we are moving towards connecting the drives directly to the CPU.
We are talking about a boot device here, aren't we? It's not like you can't do this after the machine has started. My point is that, for how edge-case this is, there are options to get around it that aren't unreasonable.
Posted on Reply
#50
ypsylon
Oh dear no NVME boot RAID support. End of the world as we know it! </sarcasm>

Speaking from a workstation point of view, RAID0 on NVMe is a waste of time. I would be interested in RAID10, as it adds crucial redundancy (that's the R in RAID, for the "RAID0 generation"), but just R0 on devices which can deal with something like 300,000 IOPS R/W? Nuts! RAID at this moment in time is ancient technology anyway - it was never designed to work with NVMe devices. RAID adds a lot of overhead on top of the much superior NVMe protocol.

What's the point?

Only for benchmarks nerds and for bigger e-pen. Nothing else.

In advanced servers, yes, you can utilize this (to a point), but in the SOHO segment... even video editing with multiple 8K live streams won't benefit much, if at all, from RAID0 on NVMe.

What I would love to see are PCIe cards with M.2 slots for up to 4 drives (don't need RAID, just NVMe connectivity). Have you tried drive pooling with NVMe? No? That's quite something to behold, without RAID quirks and moods.
Posted on Reply