# ASUS Hyper M.2 x16 V2 - cannot see multiple drives - even on a blatantly supported ASUS mobo?



## SonicMojo (Mar 23, 2022)

All

First some setup: I have an ASUS Hyper M.2 x16 (V2) installed in my ASUS ProART z490-Creator mobo - occupying PCIEX16_1 - the only slot capable of 16 PCIe lanes. I have just a single Samsung EVO 970 Plus (500GB) installed on the card. It is my OS drive and it is working flawlessly - and oddly - it is noticeably snappier installed in this card vs being installed in the actual M.2_1 socket on the board itself. (That's a story for another day)

Recently I added another Samsung EVO 970 Plus - a 2TB drive that I also wanted to house in the ASUS Hyper M.2 x16 (V2) to get maximum speed on the drive (It's used for large audio sample libraries on my digital audio workstation).

I did some very deep study on the card - knowing that the bifurcation allowances for this card are unique depending on the mobo it is used in. For the ASUS ProART z490-Creator - it appears via this link:

[Motherboard] Compatibility of PCIE bifurcation between Hyper M.2 series Cards and Add-On Graphic Cards | Official Support | ASUS Global






that I can use (up to) 3 M.2 drives with the card in PCIEX16_1, and those drives can only be installed in M.2 slots 1, 3 or 4 on the ASUS Hyper M.2 x16 (V2).

Then - according to the manual - depending on the number of drives used - one must consider these settings





1 - Clearly states 3 M.2 drives "can" be used
2 - Interesting note on what will happen to PCIEX16_2 - it will be disabled if "up to 3" drives are connected to the card when it is in PCIEX16_1
3 - Head into the BIOS and set the Hyper M.2 x16 setting to "Enabled". When using just one drive like I am now - this setting was set to Disabled and everything worked fine.

So I mount the new 2TB drive in the Hyper M.2 x16 slot 3 (leaving the boot drive in slot 1; slot 4 stays empty), remount the card, swing into the BIOS and set the Hyper M.2 x16 setting to Enabled. And I restart.

When the machine resets - I go back into the BIOS to see if the new drive is recognized - and it is not. No amount of fooling around can make it show up.

Hence the questions for the crew here:

1. Is there some major setting that I am missing - maybe something completely off the wall, like this card needing to be in RAID mode to actually see more than one drive? For example - this bizarre note appears in that ASUS link when talking about restrictions, requirements and so on:

*"For Z590, Z490, Z390 and Z370 series motherboard, install IRST version 16 or above to use RAID on CPU function."*

I do not want or have any interest in RAID-ing anything with respect to the drives on this card - I just want them to show up as two normal drives - and I do not have the Intel RST option even turned on in the BIOS.

2. Do I need to make sure that PCIEX16_2 is actually empty (right now it is not) to be able to see two drives via the Hyper card?

3. Is there something major I am missing here in this disjointed series of documentation that is maybe making this look like it should be easy/obvious - but it really is not?

Appreciate any info from anyone who is using or has used this card (preferably in an ASUS board) and what you needed to do to use/see more than one drive.

Cheers

Sonic.


----------



## Ware (Mar 23, 2022)

No experience with this card but I have noticed this in my Z490-H BIOS:



This option must be enabled to support 2 drives, and the note mentions "The number of SSD's that  can be detected varies with configurations of the PCIe X16 slots".


SonicMojo said:


> PCIEX16_1 - the only slot capable of 16 PCI lane...make sure that PCIEX16_2 is actually empty (right now it is not)


Are you using a discrete GPU?  It looks like that board runs X8 if 2 slots are populated.


----------



## SonicMojo (Mar 23, 2022)

Ware said:


> No experience with this card but I have noticed this in my Z490-H BIOS:
> This option must be enabled to support 2 drives, and the note mentions "The number of SSD's that  can be detected varies with configurations of the PCIe X16 slots".
> 
> Are you using a discrete GPU?  It looks like that board runs X8 if 2 slots are populated.



Thanks for joining the thread! Yes - I have that option set to Enabled within my z490 ProArt BIOS - but strangely - my little text blurb (at the bottom) differs from yours - it is missing this most critical item:

*"The number of SSD's that  can be detected varies with configurations of the PCIe X16 slots"*

That is nowhere to be seen in my BIOS or the manual. 

Just after I posted this - I hit YouTube to see what else I could find, and it looks like you are on to something regarding the PCIEX slots. I found one video where the guy was setting up a RAID array using this card - and he specifically mentioned that if there was ANYTHING in PCIEX16_2 - that would ruin the detection of drives in the card.

I do have a card in PCIEX16_2 but it can be removed so that will be my next test. I do need to tread carefully here as I do not want to mess up any detection of my boot drive which would open up a whole new box of problems.

And no - I am not using a discrete GPU - I am using the onboard GPU in this build.

Cheers

Sonic.


----------



## ThrashZone (Mar 23, 2022)

Hi,
You ask on asus forum ?


----------



## SonicMojo (Mar 23, 2022)

ThrashZone said:


> Hi,
> You ask on asus forum ?


ASUS has messed up my forum login so badly that I just gave up. 

Sonic


----------



## ThrashZone (Mar 23, 2022)

SonicMojo said:


> ASUS has messed up my forum login so badly that I just gave up.
> 
> Sonic


Hi,
This forum has sign in issues ?
ROG

> We'll be back.


----------



## SonicMojo (Mar 23, 2022)

ThrashZone said:


> Hi,
> This forum has sign in issues ?
> ROG
> 
> ...


Looks like it's good now. I reset my password and was finally able to log in there...whew!

But that ROG forum is really geared towards the gaming crowd, and I am sensing that crowd probably would not have much interest in this Hyper card - but I suppose I can post that up and see if anyone responds.

Cheers

Sonic.


----------



## ThrashZone (Mar 23, 2022)

SonicMojo said:


> Looks like it's good now. I reset my password and was finally able to log in there...whew!
> 
> But that ROG forum is really geared towards the gaming crowd and I am sensing that this crowd probably would not have much interest in this Hyper Card - but I suppose I can post up that and see if anyone responds.
> 
> ...


Hi,
Yeah, doesn't hurt any to ask there - it's not all that popular a device from what I've noticed here

Just make sure you're in UEFI-only boot mode - seems to be the only trick for the BIOS to see the M.2 card....

Same login for all asus forums though.


----------



## SonicMojo (Mar 23, 2022)

ThrashZone said:


> Just make sure you're in uefi only boot mode seems the only trick for bios to see the m.2 card....



Yep - that is all good. I am convinced now that this is due to having another card in the PCIEX16_2 position. If I decode what this is telling me:





What I believe is really happening here is obtuse and hidden in the table above. I read the table like this:

1 - If you have nothing (-) in PCIEX16_2 - you get your full x16 PCIe lanes (4x4) of action for the Hyper card when it is in PCIEX16_1 (which mine is).

Each group of 4 lanes will correspond to the M.2_1, M.2_2, M.2_3 and M.2_4 slots labelled on the Hyper card. Right now - I have just one M.2 drive on the card sitting in slot M.2_1 - and I believe this is the only slot that actually works (without configuring anything else) when another card is present in PCIEX16_2.

2 - If you have a card in both PCIEX16_1 AND PCIEX16_2 - the 16 PCIe lanes get divided into 2 groups of 8 - thus giving the Hyper card (in PCIEX16_1) just 2 groups of 4 PCIe lanes. AND because the BIOS says you *must* place your M.2s onto the Hyper card using *ONLY M.2_1, M.2_3 or M.2_4* - I am betting that when the two PCIEX16 slots (1 and 2) are filled - the Hyper card can only expose M.2s within its *M.2_1 or M.2_2* slots. *M.2_3 or M.2_4* are not supported to be "seen" (bifurcation) when something is in PCIEX16_2.

Will test this and report back.

Cheers!

Sonic.


----------



## ThrashZone (Mar 23, 2022)

Hi,
Yep, not a magic platform - it has limited PCIe lanes, 20 I believe
Single m.2 at x8, the rest of the m.2s down to x4 eventually


----------



## TheoneandonlyMrK (Mar 23, 2022)

Different platform but when I used mine on a crosshair x470 I needed to manually configure the pciex to 4x8 or essentially change a setting in bios to have the slot actually bifurcated.

I would try and look it up but you're on a different platform so I can only suggest looking deeply into your bios.


----------



## SonicMojo (Mar 23, 2022)

TheoneandonlyMrK said:


> Different platform but when I used mine on a crosshair x470 I needed to manually configure the pciex to 4x8 or essentially change a setting in bios to have the slot actually bifurcated.
> 
> I would try and look it up but you're on a different platform so I can only suggest looking deeply into your bios.



Thanks! Yes - it's strange on this ProART - given its premium price and feature set - you would think it would actually use the word "bifurcation" somewhere in the BIOS - but alas no.

There is only a single BIOS reference to this card - a named setting for the Hyper M.2 x16 card. Leave it Disabled to use a single M.2 drive (like I have now) OR set it to Enabled (to use up to 3 M.2 drives). That is the only setting that actually matters for this card.

The rest of the config seems to be 100% related to what's in the PCIEX16 slots. And if there is anything in PCIEX16_2 - looks like trouble is sure to follow.

Sonic.


----------



## ThrashZone (Mar 23, 2022)

Hi,
So no sata ports being used ?


----------



## SonicMojo (Mar 23, 2022)

ThrashZone said:


> Hi,
> So no sata ports being used ?


Yes - 2 are in use - but none that share bandwidth per the documentation.

Sonic.


----------



## TheoneandonlyMrK (Mar 23, 2022)

SonicMojo said:


> Yes - 2 are in use - but none that share bandwidth per the documentation.
> 
> SOnic.


I would have a go with them unplugged, just to see. Sounds like it should be working, but I did have issues with SATA using lanes at one point.
Not sure what that was - it was a while ago.


----------



## Calenhad (Mar 25, 2022)

If you have one or two drives in the Hyper M.2 card you should use PCIEX16_2, and keep your GPU in PCIEX16_1. That will give you an x8/x8 split for starters. With the Hyper M.2 option enabled in the BIOS it should split that into x8/x4/x4, with the x8 going to PCIEX16_1. Then it is a case of looking at the Hyper M.2 and identifying which two slots get the first 8 lanes. The logical configuration is M.2_1 and M.2_2, since that would mean M.2_1 uses lanes 1-4, M.2_2 uses 5-8, M.2_3 uses 9-12, and M.2_4 uses 13-16.

The reason why you have to use 1, 3, and 4 (and PCIEX16_1) if you run 3 drives is that the first 8 lanes cannot be bifurcated - meaning the motherboard will not recognise individual drives in M.2_1 and M.2_2 in this configuration.


----------



## SonicMojo (Mar 25, 2022)

Calenhad said:


> If you have one or two drives in the hyper m.2 card you should use PCIEX16_2. And keep your gpu in PCIEX16_1. That will give you a x8/x8 split for starters. With the Hyper M.2 option enabled in bios it should split that into x8/x4/x4. With the x8 going to PCIEX16_1. Then it is a case of looking at the Hyper M.2 and identify which two slots get the first 8 lanes. The logical configuration is M.2_1 and M.2_2. Since that would mean that M.2_1 use lanes 1-4, M.2_2 use 5-8, M.2-3 use 9-12, and M.2_4 use 13-16.
> 
> The reason why you have to use 1, 3, and 4 (and PCIEX16_1) if you run 3 drives, is that the first x8 lanes can not be bifurcated. Meaning the motherboard will not recognise individual drives in M.2_1 and M.2_2 in this configuration.



Calenhad

Thanks for the update! For the record - there is no GPU in this build. I am using the CPU's onboard graphics, so I have the PCIEX16_1 slot wide open for the Hyper card - 100% dedicated to whatever M.2 drives I want to plug in there.

I will be taking the machine to the bench this morning - removing the basic PCI card that is currently in PCIEX16_2 (so it remains empty) and then putting my 2TB Samsung drive back into M.2_3 on the Hyper card.

I believe this whole issue:

*"Meaning the motherboard will not recognise individual drives in M.2_1 and M.2_2 in this configuration" *

Is actually worse than you state - right now, with this simple PCI card in PCIEX16_2, the motherboard will not recognize ANY extra drives on the Hyper card - even with the card all by its lonesome in PCIEX16_1 AND with the Hyper M.2 option enabled in the BIOS.

I just tested my Samsung 2TB in M.2_3 (leaving the original 500GB drive in M.2_1 and leaving M.2_2 empty) and still nothing is detected. I did not try M.2_4 on the Hyper card as it seemed like a moot point if I could not get the drive to show up in M.2_3.

Then there is this, right in the manual:




You are correct about the x8+x4+x4 in PCI slot 1 but....

Notice how the highlight above states "up to" 3 drives - but does not specifically state exactly 3 drives. I believe with this board, if there is ANYTHING in the PCIEX16_2 slot at all - that instantly causes the bifurcation to be oddly "disabled" - causing PCIEX16_1 to only expose a single drive via the Hyper card (as it is right now).

Per the table above - if I ever want to get more than 1 M.2 drive going in this card - the Hyper card *needs* to be in PCIEX16_1 and PCIEX16_2 needs to stay forever empty.

Cheers

Sonic


----------



## Calenhad (Mar 25, 2022)

If you are trying to use the Hyper M.2 adapter in PCIEX16_1 with another card in PCIEX16_2 it will only ever see the drive in M.2_1. Nothing else will ever work in that configuration. You should only have the Hyper M.2 adapter in PCIEX16_1 if you are using 3 drives and that will only work if PCIEX16_2 is empty. You can technically use PCIEX16_1 with 1 drive and another card in PCIEX16_2, but 2 drives in this configuration is impossible.

Bottom line is: you should always have the Hyper M.2 adapter in PCIEX16_2 unless you plan to use 3 drives. Any other cards (GPU or otherwise) should go in PCIEX16_1. You need to look at the "Up to 2 Intel SSD on CPU support" column in your image.

Forgot to add: use M.2_1 and M.2_2 for your 2 drive setup in PCIEX16_2


----------



## Valantar (Mar 25, 2022)

I think @Calenhad is correct here and that the OP is misinterpreting the spec table. From what I can tell, the motherboard does not support x4+x4+x4+x4 bifurcation at all, but only either x8+x8 or x8+x4+x4. In other words, the maximum number of supported drives on that m.2 card would be three, and these would need to be connected to PCIe lanes 0-3, 8-11 and 12-15, as lanes 4-7 cannot be bifurcated off from the first four by the PCIe controller.

Coupled with an automatic mechanism to prioritize any device in PCIex16_2 - a necessity to make that slot usable if a GPU or other AIC is installed in 16_1 - that means that if there's any AIC in 16_2 and the m.2 card is in 16_1, only one drive will ever be recognized. This is corroborated by the documentation instructing you not to use m.2_2 on the m.2 card, which indicates that the second group of four PCIe lanes is simply inaccessible on its own.

The solution for two drives would then be to put the m.2 card in 16_2 and populate the first two slots, which leaves PCIe lanes 0-7 for 16_1 and allocates lanes 8-11 and 12-15 to the two SSDs.


----------



## Calenhad (Mar 25, 2022)

Let me try and write a more technical explanation for this:

If you use only PCIEX16_1 (or have nothing in either slot), it has 16 lanes physically connected to the CPU. Let me illustrate it like this:
PCIEX16_1 IIIIIIIIIIIIIIII CPU

If you put anything in PCIEX16_2, those lanes will be physically split between the two slots (doesn't matter if PCIEX16_1 is empty). PCIEX16_2 will always "steal" x8 from PCIEX16_1 if it is occupied by even a PCIe x1 card. So like this:
PCIEX16_1: IIIIIIIIxxxxxxxx CPU (PCIEX16_2 stole the x's)
PCIEX16_2: IIIIIIII CPU

The first 8 lanes (which are always connected to PCIEX16_1) cannot be split any further. The last 8 can be virtually split into two x4 connections, but they are still physically connected to either PCIEX16_1 or 2. Those last 8 lanes are where you want to connect your two drives in your situation. This is done by using PCIEX16_2 and M.2_1 + M.2_2 (with something else in PCIEX16_1).
Physically those x8 lanes from PCIEX16_1 are lanes 9-16, but they become lanes 1-8 in PCIEX16_2. This is important for identifying the correct slots you have to use on the Hyper M.2 adapter.

The x8/x4/x4 setup is only for a three-drive setup, and you can only use PCIEX16_1 for this. PCIEX16_2 must be empty for this to work!
M.2_1 uses the first 8 lanes (physically connected to 1-4). M.2_2 does not work, since the first 8 lanes cannot be virtually split into two x4 connections (it is physically connected to 5-8 but not usable). M.2_3 and M.2_4 use lanes 9-16, virtually split into two x4 connections. All of this via the single x16 physical connection.

(And to use all four M.2 slots you'd need a CPU with support for full 4x4 bifurcation using the PCIEX16_1 slot)
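The lane model described above can be condensed into a small sketch. This is purely illustrative - the function and argument names are mine, not from any ASUS tool - and it encodes only the rules as explained in this thread for the ProArt Z490 + Hyper M.2 x16 (V2):

```python
# Illustrative sketch of the lane model described above (hypothetical
# function name; rules as explained in this thread, not an ASUS spec).

def visible_m2_slots(hyper_slot: str, pciex16_2_occupied: bool) -> list:
    """Which Hyper-card M.2 slots the BIOS can detect, per the model above."""
    if hyper_slot == "PCIEX16_1":
        if pciex16_2_occupied:
            # Slot 1 drops to x8, and the first 8 lanes cannot be
            # bifurcated - only the first x4 group reaches a drive.
            return ["M.2_1"]
        # x8 + x4 + x4: lanes 5-8 are not individually addressable,
        # so M.2_2 is skipped.
        return ["M.2_1", "M.2_3", "M.2_4"]
    if hyper_slot == "PCIEX16_2":
        # This slot's x8 is split into two x4 links.
        return ["M.2_1", "M.2_2"]
    raise ValueError(f"unknown slot: {hyper_slot}")
```

So `visible_m2_slots("PCIEX16_1", True)` gives only `["M.2_1"]` - exactly the one-drive behaviour the OP is seeing with a card in the second slot.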


----------



## ThrashZone (Mar 25, 2022)

Hi,
Just fishing here 
If you use top mobo m.2 slot 
Use hyper card in pci_2 
Wonder if you can put another single m.2 card in pci_1 and see all of them :/


----------



## SonicMojo (Mar 25, 2022)

Calenhad said:


> The x8/x4/x4 setup is only for a three drive setup, and you can only use PCIEX16_1 for this. PCIEX16_2 must be empty for this to work!


Thanks guys for chiming in - I really do appreciate the discussion 

Now at the end of the day - all I ever wanted here was my M2s to run as fast as physically possible and the only reason I invested in the card in the first place came after I studied the actual z490 chipset diagram which clearly showed that the actual M2 connectors were always going to be slower than a "direct" connection to the CPU via the PCI slots. 

And that theory was proven easily. On this board - if I connect the OS M.2 to say M.2_1 (the actual M.2 connector on the board itself) - it is markedly slower (way less snappy is the best I can come up with) than connecting the very same drive to the M.2_1 slot on the Hyper card in PCIEX16_1. This PC flies using the Hyper card, and even the Samsung Magician software benchmarks are faster measuring the M.2 via the Hyper card vs the board's actual M.2 connectors.

Now - I was also led to believe (and perhaps you guys have shown it's a complete fallacy) that PCIEX16_1 WAS the fastest of the three PCI slots simply for its ability to harness 16 lanes - vs 8 for the other two slots. So let's settle these questions:

1. Raw speed wise - will I attain the same speed on any M2s - regardless of which PCI slot the Hyper card is in?
2. Is the PCIEX16_1 (and its 16 lanes) primarily geared towards getting the most out of a graphics card - rather than optimizing/streamlining any other card that might be in there (like my Hyper M.2)?

I suppose I have been running here on some assumptions and it would be nice to know exactly what the deal is - since the idiots at ASUS are by far - the worst sources of useful documentation possible. 

Just the fact that they never mention that having ANY card in the other PCI slot will totally mess up the bifurcation is a blatant example of how to waste as much time as possible. Instead of screwing around here endlessly - they could just have stated in BOLD print: "Hey User - if you ever need a max of three M.2 drives on your Hyper card - stick it in PCIEX16_1 *and leave PCIEX16_2 empty*" - that is all they needed to say.

But this is nowhere in any documentation that I have.

Cheers

Sonic.


----------



## Calenhad (Mar 25, 2022)

SonicMojo said:


> 1. Raw speed wise - will I attain the same speed on any M2s - regardless of which PCI slot the Hyper card is in?
> 2. Is the PCIEX16_1 (and it's 16 lanes) primarily geared towards getting the most out of a graphics card - rather than optimizing/streamlining any other card that might be in there (Like my Hyper M.2)


Always enjoy a good conversation. 

1. Your M.2 NVMe SSDs will run at up to x4 lanes, either PCIe gen 3.0 or 4.0 depending on the drive and CPU/motherboard support. In your case a gen 3.0 or 4.0 SSD will perform similarly, since your motherboard/CPU only support 3.0. There are a ton of other variables that impact the performance of different M.2 SSDs, but let us ignore those here.
2. PCIEX16_1 is the only option to get an x16 connection to a graphics card or other similar hardware (could be an x16 RAID card, among others). PCIEX16_2 will only ever get a max of x8 lanes. But since one M.2 SSD only uses x4, and both PCIEX16_1 and 2 are connected to the CPU, the performance between those slots should be identical. But, as explained above, you need to use #2 to get both drives working.
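To put a number on the "x4 per drive is the ceiling" point, here is a back-of-the-envelope PCIe 3.0 calculation (the helper name is mine, purely for illustration):

```python
# Rough theoretical PCIe 3.0 throughput, to show why slot choice does
# not change per-drive speed (illustrative helper, not from any tool).

def pcie3_throughput_gb_per_s(lanes: int) -> float:
    """Theoretical PCIe 3.0 link bandwidth in GB/s for `lanes` lanes."""
    raw_gt_per_s = 8e9        # 8 GT/s per lane
    encoding = 128 / 130      # 128b/130b line-code efficiency
    return lanes * raw_gt_per_s * encoding / 8 / 1e9  # bits -> bytes

print(round(pcie3_throughput_gb_per_s(4), 2))   # x4 link: ~3.94 GB/s
print(round(pcie3_throughput_gb_per_s(16), 2))  # x16 link: ~15.75 GB/s
```

An x4 gen 3 link tops out around 3.9 GB/s - comfortably above a 970 EVO Plus's rated ~3.5 GB/s sequential read - so the drive, not the slot, is the limit whether the card sits in PCIEX16_1 or _2.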



SonicMojo said:


> Just the fact that they never ever mention the fact that having ANY cards in any other PCI slots will totally mess up the bifurcation is a blatant example of how to waste as much time as possible. Instead of screwing around here endlessly - they could just have stated this in BOLD print "Hey User - if you ever need a max of three M.2 drives on your Hyper card - stick it in PCIEX16_1 *and leave PCIEX16_2 empty* - that is all they needed to say.


The information is technically there. But the manual does a really crappy job at explaining it. We can probably say it was literally lost in translation.

I hope the others and I have provided some enlightening information. And that you get both drives up and running without more hiccups.


----------



## Ware (Mar 25, 2022)

My board has 3 'X16' slots - it can run 1 X16, or X8+X4+X4.  Your CPU lanes can run 1 X16, or X8+X8.


SonicMojo said:


> they could just have stated this in BOLD print


My manual makes it fairly clear that the card should go in 16_1 on my board.
Your manual should have included this information.


----------



## ThrashZone (Mar 25, 2022)

Hi,
Have you looked in the BIOS for M.2 mode options - PCIe versus PCH modes - and whether that changes anything?


----------



## skizzo (Mar 25, 2022)

SonicMojo said:


> (It's used for large audio sample libraries on my digital audio workstation).



I've found zero real-world benefit from putting samples on a separate disk, as far as performance goes. Maybe nice for bringing them to other locations if you freelance, but especially in today's world of shifting away from HDDs to SSDs and especially NVMe SSDs, there is no bottleneck having OS, DAW, and samples on the same disk. I've also found zero real-world benefit putting samples on SSDs, since a regular HDD works fine. What are you doing that requires this setup? Or is it just a "because you can" sorta thing?


----------



## Valantar (Mar 25, 2022)

SonicMojo said:


> Thanks guys for chiming in - I really do appreciate the discussion
> 
> Now at the end of the day - all I ever wanted here was my M2s to run as fast as physically possible and the only reason I invested in the card in the first place came after I studied the actual z490 chipset diagram which clearly showed that the actual M2 connectors were always going to be slower than a "direct" connection to the CPU via the PCI slots.
> 
> ...


No problem! Stuff like this can be tricky to figure out, and manuals are sadly often very low quality. As to your questions:
1: Yes. Both x16_1 and x16_2 are wired directly to your CPU, so performance across both slots should be identical.
2: Not really - you're overcomplicating things a bit. It's better described as follows: The first slot is an x16 slot, with the second slot expected to be left empty on the _vast_ majority of boards. The board vendor doesn't care what you put in either slot, but the expectation is that the only high bandwidth AIC a user will have is a GPU. GPUs nearly always have x16 interfaces, and benefit from being connected directly to the CPU, thus the first slot is optimized for that. The second slot is there for flexibility, with its historical roots in multi-GPU (SLI/CF) setups, but ultimately general purpose. It's essentially a "nice to have if you want it" thing, and as I said, the expectation is for it to be left unused. For the purposes of SLI/CF this slot also needs to be wired to the CPU, and as there is no such thing as an x12 PCIe interface, the split between them is thus either x16/x0 or x8/x8 (which in your case can be further split to x8/x4+x4).

There are a few more default assumptions built into this:
- if there's a card in the second slot, it is assumed that you want it to work - otherwise it wouldn't be there, after all. Thus, and because this shares the same physical lanes as the latter 8 lanes in the first slot, those are multiplexed together (i.e. have a chip connecting either one or the other, never both), with the mechanism being that if anything is detected in the second slot, those pins in the first slot will be disconnected. If this wasn't the case, that would lead to a bunch of problems for people installing cards there and not having them detected.
- Most BIOSes also allow for overriding this, but as the lanes are physically connected to specific pins, you can never have more than 8 lanes in slot 1 if slot 2 is active (as the 9th lane from slot 1 is the first lane in slot 2, and the first lane is required for any PCIe device to initialize. This means an x4 device in slot 2 will still reduce slot 1 to x8).
- Your use case is complicated because it's kind of a niche within a niche within a niche. You're using an x16 4xm.2 card, that depends on bifurcation (i.e. doesn't have an onboard PLX switch or similar lane sharing mechanism), in a motherboard that doesn't actually support x4+x4+x4+x4 bifurcation (only x8+x4+x4), on a CPU that doesn't have enough PCIe lanes to fully populate two x16 slots, and you have devices in both slots. Quite frankly, I would be rather shocked if this was mentioned in a manual, seeing how niche it is. That m.2 card is mainly oriented towards HEDT platforms with tons of PCIe lanes and much more liberal bifurcation allowing for it to be fully utilized. While it's supported on your motherboard, that's likely more of a "why not" thing than anything else. Which thus makes using it rather complicated.

Still, the possible layouts are as follows:
- Slot 1 m.2 AIC, slot 2 other AIC. Don't use this. The m.2 AIC will only have access to a single block of PCIe, and can thus run a single SSD.
- Slot 1 other AIC, slot 2 m.2 AIC. Use this. It will let you run two x4 SSDs in the m.2 AIC _and_ run the second AIC.
- Slot 1 m.2 AIC, slot 2 nothing. Also a good option if you don't need the second AIC, allowing for three m.2 SSDs. Doesn't seem relevant to your use case though.
- Slot 1 nothing, slot 2 m.2 AIC. Also an option I guess, allowing for two SSDs, but why not then stuff your other AIC into the first slot? Ultimately doesn't make sense on its own, but will work for you.


----------



## SonicMojo (Mar 25, 2022)

OK - the plot thickens. And regarding your layouts above - I am expecting and shooting for *layout 3*:

_Slot 1 m.2 AIC, slot 2 nothing. Also a good option if you don't need the second AIC, *allowing for three m.2 SSDs*. Doesn't seem relevant to your use case though._

It is actually totally relevant - because if I can get these two NVMes going with the Hyper card in PCIEX16_1 (using slots M.2_1 and M.2_3), I set myself up nicely for a third drive in M.2_4 on the Hyper card some other day.

So - I benched the box - took out the PCI card that was in PCIEX16_2 (now empty), moved my Samsung 2TB NVMe over to the Hyper M.2 card (still in PCIEX16_1) and mounted the Sammy in *M.2_3 on the Hyper card* - reseated everything, fired it up and promptly went into the BIOS to turn this setting to Enabled:










And made note of this blurb that appears below the settings area - when the Hyper M.2x16 setting is highlighted:





Per this guidance - I should have your third scenario - but after restarting the machine, the Samsung 2TB is still not detected.

PCIEX16_2 is empty - so the logic that it needed to be empty seems suspect now. Unless M.2_3 on my Hyper card is faulty?

I give up - what am I missing here guys?

EDIT: Is there another hidden message in that blurb above - implying (very, very obscurely) that if you are using JUST two drives and the card is in *PCIEX16_1* - only one drive will be detected?
And conversely - if you intend to only ever use *just two drives (and no more)* - the card *MUST* go in *PCIEX16_2*?

Off to test this now...

Sonic.



skizzo said:


> I've found zero real world benefit from putting samples on a separate disk, as far as performance goes. maybe nice for bringing them to other locations if you freelance, but especially in todays world of shifting away from HDDs, SSD and especially NVMe SSDs there is no bottleneck having OS, DAW, and samples on the same disk. I've also found zero real world benefit putting samples on SSDs since a regular HDD works fine. what are you doing that requires this setup? or it just "because you can" sorta thing?



Sorry - should have been more specific. Sample libraries as in Kontakt, Superior Drummer 3, Spectrasonics Omnisphere - large libraries that can sometimes take 30-60 seconds to load from an HDD take 2 seconds via NVMe...

Sonic.


----------



## Valantar (Mar 25, 2022)

SonicMojo said:


> OK - the plot thickens. And regarding your layouts above - I am expecting and shooting for *layout 3*:
> 
> _Slot 1 m.2 AIC, slot 2 nothing. Also a good option if you don't need the second AIC, *allowing for three m.2 SSDs*. Doesn't seem relevant to your use case though._
> 
> ...


That is _very_ weird. Are there any general bifurcation settings in your BIOS? If so, try changing those. If not, then I too fear that you might be looking at a defective card. I would test it in another PC to be sure, but ... yeah, there's no reason why that setup shouldn't work.

I guess it's possible that since you previously had a card in the second slot, it set the bifurcation to use both slots and then doesn't override that setting even though the card is removed.


----------



## skizzo (Mar 25, 2022)

SonicMojo said:


> Sorry - should have been more specific. Sample libraries as in Kontakt, Superior Drummer 3, Spectrasonics Omnisphere - large libraries that can take sometimes 30-60 seconds to load from an HDD take 2 seconds via MVMe...
> 
> Sonic.




No apology required! Gotcha - that's a super similar use case to mine. And yes, that is one point I forgot to consider here - the load times would certainly improve. I do notice _some things_ load a little slower on HDD, and Superior Drummer is definitely one of them (I've run them all and currently use SD3 also). It sorta has a delay when a session is first loaded, which I'm sure is the time it takes to load the samples, since otherwise I only see the loading indicator if I switch up entire kits. I'd say it's about a 10-25 sec delay for me though.


----------



## SonicMojo (Mar 25, 2022)

Success! So I have now confirmed this:

With the ASUS z490 ProART - if you are using exactly one OR exactly two drives (placed in M.2_1, M.2_3 or M.2_4 on the Hyper M.2 x16) and the card is in *PCIEX16_1* - only one drive will EVER be detected - regardless of whether there is anything in PCIEX16_2 or not. Takeaway: you must use PCIEX16_1 if you intend to *use exactly 3 drives* at once.

Conversely - if you intend to use exactly *two drives* (and no more) - the Hyper M.2 x16 card *MUST* go in *PCIEX16_2*. PCIEX16_1 remains empty here, and I am tired of messing around so I am going to leave it that way for a while.

After switching the 2TB Sammy over to the Hyper M.2 x16 slot M.2_2 and repositioning the card in PCIEX16_2 (with PCIEX16_1 remaining empty) - the BIOS finally sees the two drives.

Damn - this was a deeply instructive (while slightly frustrating) adventure that I will document for future use.
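For the record, the outcomes actually verified in this thread can be captured as a compact lookup. This is my own illustrative encoding of the test results above - not an ASUS specification - and note the three-drives-in-PCIEX16_1 case was never retested here:

```python
# What was actually verified on the ProArt Z490 + Hyper M.2 x16 (V2)
# in this thread (illustrative summary, not an ASUS specification).

CONFIRMED = {
    # (Hyper card slot, drives installed): drives detected
    ("PCIEX16_1", 1): 1,  # boot drive in M.2_1 - works
    ("PCIEX16_1", 2): 1,  # second drive in M.2_3 never detected,
                          # even with PCIEX16_2 empty and the
                          # Hyper M.2 x16 BIOS setting Enabled
    ("PCIEX16_2", 2): 2,  # drives in M.2_1 + M.2_2, PCIEX16_1 empty -
                          # both finally detected
}
```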

Thanks everyone for the input.

Cheers

Sonic


----------



## Tbrnk (Nov 18, 2022)

Following up in this great and informative thread (Go D.R.I.!), I wanted to confirm I'm reading the Asus charts correctly before purchasing.

My board is *ROG STRIX Z490-A GAMING.*
Does this mean: placing the Hyper M.2 in PCIEX16_2 or PCIEX16_3 will show ONLY 1 drive?
Also, I have an RTX 3090 GPU so this card must live in PCIEX16_1, correct?

So basically... the Hyper M.2 will only be good for ONE M.2 drive?

What about the older Gen 3 card? Same story? Couldn't find the corresponding chart.

thanks.


----------

