# Anyone use their PCI slots for m.2 drives?



## Spektre (Jun 19, 2020)

I have a Z390-E board from Asus and both M.2 slots are filled, and I'd like a little more space (thanks a lot, Modern Warfare). Both are in PCIe mode.

Is it worth it to use the empty PCIe slots with M.2 adapters? Some adapters have two M.2 drive slots as well. Will Windows read that as two separate drives? I have no idea what the speeds would be. I also tend to use more budget-friendly drives, such as Silicon Power or Inland Professional.


----------



## Regeneration (Jun 19, 2020)

You can use PCIe x4 (and above) slots for storage expansion cards. Each x4 slot is capable of roughly 4 GB/s, so it's better to have one device per slot.
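For reference, the ~4 GB/s figure falls straight out of the link math (a rough sketch only: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, and this ignores packet/protocol overhead, so real throughput is a bit lower):

```python
# Theoretical one-direction PCIe 3.0 bandwidth (illustrative; ignores
# TLP/DLLP protocol overhead, so real-world numbers come in lower).
GT_PER_SEC = 8e9        # PCIe 3.0: 8 gigatransfers/s per lane
ENCODING = 128 / 130    # usable fraction after 128b/130b line encoding

def pcie3_gbps(lanes: int) -> float:
    """Raw one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return GT_PER_SEC * ENCODING * lanes / 8 / 1e9  # bits -> GB/s

print(round(pcie3_gbps(4), 2))   # x4 slot: ~3.94 GB/s
print(round(pcie3_gbps(16), 2))  # x16 slot: ~15.75 GB/s
```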


----------



## kayjay010101 (Jun 19, 2020)

If you have an available x16 slot, ASUS makes a 4x M.2 to PCIe x16 card called the Hyper V2. I use it in my NAS and it works great, but you'll need a board that supports PCIe bifurcation. If not, you can only use one device per PCIe slot.


----------



## Hyderz (Jun 19, 2020)

or you can just grab this:

Rocket Q NVMe SSD
M.2 PCIe 3.0 x4 interface. Up to 3,300/2,900 MB/s. NVMe 1.3 compliant with APST/ASPM/L1.2 power management support. Supports SMART, TRIM, and firmware updates with wide flash compatibility. Resiliency ensured through wear-leveling, bad block management, over-provisioning, ECC, RAID, etc.
www.sabrent.com


----------



## Spektre (Jun 19, 2020)

$1500 is pocket change lol



kayjay010101 said:


> If you have an available X16 slot, ASUS makes a 4x M.2 to PCIe X16 card called the Hyper V2. I use it in my NAS and it works great, you'll need a board that supports PCIe bifurcation though. If not you can only use one device per PCIe slot.


Thanks! I've never heard of PCIe bifurcation before, I'll have to look into that. There seems to be a lot of "generic" adapters as well. Any thoughts on those?


----------



## Hyderz (Jun 19, 2020)

Spektre said:


> $1500 is pocket change lol
> 
> 
> Thanks! I've never heard of PCIe bifurcation before, I'll have to look into that. There seems to be a lot of "generic" adapters as well. Any thoughts on those?



Kidneys are overrated anyway lol. Anyway, it's just an idea; having 8TB is nice.


----------



## cucker tarlson (Jun 19, 2020)

I had an MP9Y 512 in a PCIe slot, mostly because it was a good buy - it was the same price as a regular NVMe M.2 drive but came with an adapter and a heatsink.

I sold it though. I barely used it for anything other than testing because I have 6 other SSDs, and due to a price spike and weak currency I was able to get more for it than I bought it for.
A guy tried to put it in an H110 board on Windows 7 but couldn't boot. But that's his problem really; he should've asked me.


----------



## Flaky (Jun 19, 2020)

Spektre said:


> Some adapters have 2 m.2 drive slots as well.


Most cheap dual M.2 adapters provide one slot for M.2 PCIe and one for M.2 SATA.
True dual M.2 PCIe cards have an x8 connector, for example the AOC-SLG3-2M2.
To fully use it, you need a BIOS option to split the x8 link into x4+x4. If such an option exists, the card would have to be placed in the second full-width PCIe slot on your board.

There are also active cards (example: AOC-SHG3-4M2P) with a controller on them, but those are extra pricey.

If you only need an extra one, grab a simple x4->M.2 adapter.


----------



## bonehead123 (Jun 19, 2020)

This will work, been there, done that... just be sure to check the same stuff as listed above (M.2 connector type, PCIe speed, etc.). And W10 will see the drive on the card just like any other that is connected to the board.

If you want basic adapter card(s) I have some FS that I was using with my previous Z170 rig...


----------



## EarthDog (Jun 19, 2020)

it will work....but you may not be able to boot from it(?)...


----------



## newtekie1 (Jun 19, 2020)

Note that if you put a card in the 2nd PCI-E x16 slot, then your graphics card will drop down to x8.  Probably not a big deal, since most graphics cards don't need anything more than x8 anyway.

Though you might just want to grab a cheap PCI-E x4 to M.2 adapter and stick it in the 3rd PCI-E x16 slot, which is electrically only x4 anyway.  Not only are these the cheapest kind of adapters, but this also won't limit your graphics performance.


----------



## kayjay010101 (Jun 19, 2020)

Spektre said:


> Thanks! I've never heard of PCIe bifurcation before, I'll have to look into that. There seems to be a lot of "generic" adapters as well. Any thoughts on those?


Your motherboard doesn't seem to support bifurcation, unfortunately, if you're going to be using the one in your system specs list. So you're left with these options:

* Get a SATA drive; for games there is no difference anyway
* Replace an existing drive with a larger capacity one
* Use an active card like @Flaky mentioned, but expect a price premium
* Get a PCIe card, like a Fusion ioDrive (3.2TB of MLC that has a practically infinite lifespan for ~$450)

The problem with those cheap adapters is, like mentioned, they aren't 2x NVMe; they're 1 NVMe and 1 SATA. If you wanted to use one of these, you could only put in one additional NVMe M.2 drive and one SATA M.2 drive, at which point you could just get a PCIe x4 to single M.2 adapter and use a normal 2.5" SATA drive to save a few bucks.

I personally use a FusionIO ioDrive2 that I got off eBay for ultra-cheap, and it worked great for storing games when I was faced with the same conundrum as you. Later on I did put the M.2 drives on a Hyper V2 card in my server and stored my games there, but that's another story entirely.

As @newtekie1 mentions, if you use the second slot, the top slot will drop down to x8. If you are using anything below a 2080 Ti, however, this is not a problem, and you can always use the x4 slot as they mention.


----------



## Assimilator (Jun 19, 2020)

One of these will do you just fine:

* https://www.amazon.com/UGREEN-Express-Adapter-Support-Converter/dp/B07YFW5HBN
* https://www.amazon.com/MICRO-CONNECTORS-Low-Profile-Adapter-PCIE-M21U80HS/dp/B07J6NHX8H

Or, if you don't want to pay 500% markup and are willing to wait a little longer:

* https://www.ebay.com/itm/NVMe-AHCI-...3-0-x4-converter-adapter-caX-LXI/184331966372
* https://www.ebay.com/itm/PCIe-NVMe-...0-x4-x8-x16-adapter-card-co-F-EW/203019641007

I have a couple of these cards (not the same ones as above; they're all the same thing, just sold by different vendors) and they work fine. As others have said, just make sure you use it in the bottom-most PCIe slot to avoid stealing bandwidth from your GPU - but unless you have an RTX 2080 Ti there won't be any performance degradation.


----------



## Flaky (Jun 19, 2020)

According to the Asus website, this board should support bifurcating the PCIe link, with up to two NVMe drives in the PCIEX16_2 slot.
This option may also be called "Asus Hyper" or something like that.



EarthDog said:


> it will work....but you may not be able to boot from it(?)...


There are no problems with booting from drives connected via passive adapters, no matter the slot. x1->M.2 adapters are also fine in that regard.


----------



## kayjay010101 (Jun 19, 2020)

Flaky said:


> According to asus website, this board should support bifurcating the PCIe link and support up to two NVMe drives in PCIEX16_2  slot.
> This option may also be called "asus hyper" or something like that.


I checked the motherboard manual and there is no mention of any bifurcation/PCIe mode. There is a "hyper kit" option, but that's for a different product (ASUS m.2 to miniSASHD adapter). The list on the website you linked also does not mention any Z270 board like OP has. So I don't think it supports bifurcation.


----------



## EarthDog (Jun 19, 2020)

Flaky said:


> There are no problems with booting from drives connected via passive adapters, no matter the slot. x1->M.2 are also fine in that regard


I've had issues in the past...but from generations previous to Z390.


----------



## Flaky (Jun 19, 2020)

kayjay010101 said:


> The list on the website you linked also does not mention any Z270 board like OP has.





Spektre said:


> I have a *z390 E* board from Asus


----------



## Deleted member 197986 (Jun 19, 2020)

This is what I'm using, and it does not rely on bifurcation as it handles that itself, for PCIe x16 and 2x M.2, all keys:





Delock Products 89961 - Delock PCI Express x16 Card to 2 x internal NVMe M.2 Key M
www.delock.com


----------



## Assimilator (Jun 19, 2020)

StarExplorer said:


> This is I'm using and does not rely on bifurcation as it handles itself, for PCIE x16 and 2 M.2 all keys:
> 
> 
> 
> ...



... did you just recommend a European company with no US distribution to a guy who lives in the US?


----------



## kayjay010101 (Jun 19, 2020)


Ah. Either OP hasn't updated their system specs list, or they mistyped. Their system specs mention a Z270 board.


----------



## Flaky (Jun 19, 2020)

OP's CPU-Z validation bar in their signature mentions a 9700K + Z390, so it seems it's the TPU specs that are outdated.


----------



## Deleted member 197986 (Jun 19, 2020)

Assimilator said:


> ... did you just recommend a European company with no US distribution to a guy who lives in the US?


Well okay, then try https://www.sybausa.com/index.php?route=product/product&product_id=992


----------



## kayjay010101 (Jun 19, 2020)

Flaky said:


> OP's cpu-z validation bar in signature mentions 9700k+z390, so it seems it's the tpu specs that are outdated.


Quite right. In that case it supports bifurcation, which means any passive adapter should work no problem.


----------



## Spektre (Jun 19, 2020)

Whoops, my system specs are a bit outdated lol. I'm on a Z390.


kayjay010101 said:


> Quite right. In that case it supports bifurcation, which means any passive adapter should work no problem.


So a passive adapter seems to be the easiest way. All the ones with dual slots seem to have two different keys, an M and a B. Are there any with two M-keyed slots? It's not a big deal, a single is fine anyway, but now I'm curious.


----------



## TheLostSwede (Jun 19, 2020)

Spektre said:


> Whoops, my system specs are a bit outdated lol. I'm on a Z390.
> 
> So a passive adapter seems to be the easiest way. All the ones with dual slots seem to have two different keys, an M and a B. Are there any with two M-keyed slots? It's not a big deal, a single is fine anyway, but now I'm curious.


A B key means the second drive is SATA-only.
Something like this would do two PCIe drives, but it comes at a price, as it has a PCIe switch for boards that don't support bifurcation.





StarTech.com Dual M.2 PCIe SSD Adapter Card (PEX8M2E2) - x8/x16, dual NVMe or AHCI M.2 to PCI Express 3.0, supports 2242/2260/2280, RAID & JBOD
www.amazon.com
				



A much cheaper option if your board supports bifurcation:





Supermicro AOC-SLG3-2M2 PCIe Add-On Card for up to Two NVMe SSDs
www.amazon.com
				




I used a simple one-slot adapter in my previous system when I upgraded my SSD, as it allowed me to clone my drive easily.
They're simply mechanical converters, so there's nothing really to them.


----------



## thesmokingman (Jun 19, 2020)

Spektre said:


> $1500 is pocket change lol
> 
> 
> Thanks! I've never heard of PCIe bifurcation before, I'll have to look into that. There seems to be a lot of "generic" adapters as well. Any thoughts on those?



Bifurcation cards are generally all the same; they split 16 lanes into 4/4/4/4, for example. It depends on your motherboard whether it supports bifurcation or not, however.


----------



## Deleted member 197986 (Jun 19, 2020)

thesmokingman said:


> Bifurcation cards are generally all the same; they split 16 lanes into 4/4/4/4, for example. It depends on your motherboard whether it supports bifurcation or not, however.


It does not depend on the motherboard... https://www.sybausa.com/index.php?route=product/product&product_id=992 ... as this card will do that itself.


----------



## TheLostSwede (Jun 19, 2020)

StarExplorer said:


> Does not depend on... https://www.sybausa.com/index.php?route=product/product&product_id=992


But it's also $200...


----------



## thesmokingman (Jun 19, 2020)

StarExplorer said:


> Does not depend on... https://www.sybausa.com/index.php?route=product/product&product_id=992 ... as this card will do that.



That's not a straight bifurcation card. Read the specs: it uses PCIe switches, so it doesn't rely on the motherboard for bifurcation. Overhead... and overpriced; might as well get a real RAID card.


----------



## Spektre (Jun 19, 2020)

TheLostSwede said:


> B key means the second drive is SATA only.
> Something like this would do two PCIe drives, but comes at a price, as it has a PCIe switch for boards that don't support bifurcation.
> 
> 
> ...


I did find some dual PCIe adapters and they're like $200 lol. Forget that. I'll probably be snagging another 1TB M.2 and a $10 adapter soon. Thanks for the info.

Man, if there was one thing I appreciated about the ASRock Taichi board I had a while back, it was the three M.2 sockets lol. And it looked really cool as well.


----------



## congo (Jul 6, 2020)

Flaky said:


> To fully use it, you need an option in bios to split x8 link into x4+x4 link.





kayjay010101 said:


> you'll need a board that supports PCIe bifurcation though. If not you can only use one device per PCIe slot.




Hi all,

Sorry to drag this thread up, but I'm a little confused over how the bifurcation works. Until I saw this thread,
I thought I would be able to run two M.2 NVMe drives on a simple double adapter placed in my 2nd PCIe x16 slot.

My GA Z390 Gaming SLI board has a bifurcation x8+x8 switch, so I'm taking what Flaky said literally, and I would like
a little clarification here before I spend money. Could you please help me understand why I need to split the x8 link into an x4+x4 link?

kayjay's comment has me wondering as well. Does this mean that without a bifurcation option, I can still use a single device
in the second PCIe x16 slot?

Won't a simple double adapter (mechanical, with no controller) just share the PCIe x8 bandwidth on the second x16 slot
if I enable x8 bifurcation?

And who first penned the x16/x8/x4 description for PCIe lanes? I'm calling them out for dyslexia (not that there is anything
wrong with that) - shouldn't it be 16x, 8x, 4x, etc.?


----------



## thesmokingman (Jul 6, 2020)

congo said:


> Hi all,
> 
> Sorry to drag this thread up but I'm a little confused over how the bifurcation works. Until I saw this thread,
> I thought I would be able to support two M.2 NVME drives on a simple double adapter placed in my 2nd PCIe x16 slot.
> ...



First, the x is in the right spot; it's x16, not 16x. 16x denotes a multiplier, whereas x16 denotes a width.

As for running two NVMe drives on a single PCIe slot, you need PCIe switches. However, that creates lag - unavoidable, since we are adding switches in the signal path. This also adds cost, as the switches are not cheap.

Within the last few generations, bifurcation has become an alternative to PCIe switches. Essentially, the motherboard will partition, say, an x16-width slot into four x4 partitions, thus allowing us to run four x4 NVMe drives off one x16 slot. If we were to do this with PCIe switches, you end up with a board like the one posted above, around 200 bucks or more.

I did a quick Google on your board and some reviews are using the wrong terminology, like the Tom's review lol. They use the term bifurcation incorrectly, which is no surprise smh.



> CPU PCIe Bifurcation (ie, sharing the CPU’s sixteen PCIe 3.0 lanes across two or three PCIe slots).



See the use of bifurcation there? CPU lanes are split using PCIe switches, not bifurcation. Bifurcation uses no PCIe switches.


----------



## kapone32 (Jul 6, 2020)

Flaky said:


> Most of cheap dual m.2 adapters provide one slot for M.2 PCIe, and one for M.2 SATA.
> True dual M.2 PCIe cards have x8 connector, for example AOC-SLG3-2M2.
> To fully use it, you need an option in bios to split x8 link into x4+x4 link. If such option exists, then such card would have to be placed in second full-width PCIe slot on your board.
> 
> ...



The Asus M.2 adapter is usually less than $100 and you can populate it with 4 NVMe drives. Not only that, but you can have 2 RAID 0 arrays on 1 card.


----------



## congo (Jul 6, 2020)

thesmokingman said:


> First, the x is in the right spot, it's x16 not 16x. 16x denotes a multiplier where x16 denotes a width.



Only a dyslexic person could know that 

But seriously, thanks for the explanation on bifurcation without switches, and the terminology errors.
But does this mean I can or cannot use a PCIe NVMe double adapter in my second x16 PCIe slot when
my BIOS has that x8/x8 bifurcation option? Or are you saying that it isn't really bifurcation at all that my BIOS
is affecting, and that my board actually uses switches instead?


----------



## thesmokingman (Jul 6, 2020)

congo said:


> Only a Dyslexic person could know that
> 
> But seriously, thanks for the explanation on bifurcation without switches, and the terminology errors,
> but does this mean I can or cannot use a PCIe NVME double adapter in my second x16 PCIe slot when
> ...



You keep mentioning x8/x8; again, that is not bifurcation. CPU lanes are split in hardware via PCIe switches, which your BIOS controls. Bifurcation uses NO SWITCHES. Assuming your lanes go from x16 to x8/x8, that portion is done by the BIOS, again via the PCIe switches. You can literally see the switches between the PCIe slots. Any NVMe adapter, whether it's dual or quad, will either need switches or use bifurcation.


----------



## congo (Jul 6, 2020)

thesmokingman said:


> You keep mentioning x8/x8; again, that is not bifurcation. CPU lanes are split in hardware via PCIe switches, which your BIOS controls. Bifurcation uses NO SWITCHES. Assuming your lanes go from x16 to x8/x8, that portion is done by the BIOS, again via the PCIe switches. You can literally see the switches between the PCIe slots. Any NVMe adapter, whether it's dual or quad, will either need switches or use bifurcation.



Yes, I keep mentioning the x8/x8 because that's my only BIOS option other than AUTO under a setting that says PCIe Bifurcation Support.
So, you are saying that my motherboard, which supposedly supports bifurcation, actually doesn't.
This makes me wonder then how SLI graphics support is viable on the two PCIe slots if there is added latency via the switches.

So, presumably, from your explanation, even if I buy a PCIe double M.2 NVMe adapter for that second x16 PCIe slot on my motherboard, I'm
still not going to be able to use those SSDs for a RAID 0 array with both SSDs on that card, and I am literally limited to using just one SSD
on any adapter in that PCIe slot?


----------



## Flaky (Jul 6, 2020)

Bifurcation and switching are two different things.
Bifurcation refers to the PCIe controller's capability of being reconfigured - so that instead of a single x16 link, it can be x8+x8 or x8+x4+x4 - which is what mainstream Intel CPUs have supported since Ivy Bridge.
Switching refers to the motherboard's capability of routing the PCIe signal into different slots depending on configuration - software, hardware, or both.



congo said:


> kayjay's comment has me wondering as well. Does this mean that without a bifurcation option, I can still use a single device
> on the second PCIe x16 slot?


Yes. Simple adapters with one M.2 PCIe slot generally work everywhere.
The worst-case scenario for dual/quad adapters is providing only one functioning M.2.
Under the hood, a typical SLI motherboard like yours detects a card in the second x16 slot and enables both switching and bifurcation.



congo said:


> Won't a simple double adapter (mechanical with no controller) just share the PCIe x8 bandwidth on the second x16 slot
> if I enable x8 bifurcation?


To use an x8->dual M.2 card in the second x16 slot, you'd need to have a 4+4 bifurcation option (that would enable the 8+4+4 capability mentioned above).
Without that, such a card will have only one M.2 slot working, as the PCIe controller expects only one device on the up-to-8-lane link.


----------



## congo (Jul 6, 2020)

Thanks Flaky,

I think that clears up a few things.

I tried using the two onboard M.2 slots for a RAID 0, only to discover that it doesn't actually work properly, as those slots are bottlenecked
through the DMI. So I was trying to get a workaround for the new PC; looks like it's not gonna happen.

I don't think my BIOS has any options to support RAID through any double adapter on a PCIe card anyway, probably because that doesn't actually work.

I bought two Silicon Power NVMe 1TB drives and they turned out to have different controllers on each one of them. This new PC is an absolute failure;
I got sucked in by the hype and misleading advertising, very unlike me...


----------



## Flaky (Jul 6, 2020)

If you put one drive in the native M.2 slot and the second in the x8 slot via any M.2 PCIe adapter, then if you RAID these drives, you won't be bottlenecked by the DMI (as only one drive will be communicated with via it).

Afaik Intel's RST does not support RAIDing CPU-attached NVMes on the mainstream platform, so you'd have to rely on software means.
If you're using Windows, there are two options:
1) Dynamic disks. Not recommended, as TRIM isn't supported there.
2) Storage Spaces. Has some space overhead compared to true/fake RAID or dynamic disks. I'm not sure if this supports TRIM, but as it's much newer than dynamic disks, I expect it to.

What was the original point of RAIDing NVMes at all? Very few use cases really benefit from such speeds...


----------



## congo (Jul 6, 2020)

Flaky said:


> If you put one drive in the native M.2 slot and the second in the x8 slot via any M.2 PCIe adapter, then if you RAID these drives, you won't be bottlenecked by the DMI (as only one drive will be communicated with via it).



I did try this, but so far I've not been able to RAID them in the BIOS, or in the Ctrl+I RAID BIOS.
The options disappear, or in certain configurations the drive in the PCIe slot doesn't show up in the BIOS at all.

SATA options are only AHCI and "Intel Optane blah blah blah", which is the RAID option according to the manual.
There is no simple RAID config under the SATA options. I always thought AHCI was a subset of RAID, so I was surprised
by this BIOS, in several ways to be honest.

My Gigabyte Z87 board has an impressive BIOS, so I was left a bit stunned by the BIOS on this new board: a very confusing
layout, and though it has a ton of overclocking options, this new BIOS is less than ideal to say the least. The user manual
describes a completely different BIOS, so I reverted back three BIOS versions so I could learn it by the manual and get my head
around it. Then I loaded the latest BIOS and things seemed to fall into place and work properly, whereas I seemed to be missing
options on the BIOS it shipped with, and Q-Fan wasn't working either, but it all seems good now.

The GA Z390 Gaming SLI is my first Z390 board. I bought it because it had SLI slots and a 12-phase power design.
I didn't think about anything other than using the other x16 slot for another GTX 1060 one day,
when or if I need to upgrade my graphics. I'm a flight simmer / casual gamer / PC tech with no particular speciality;
I've mainly just done support for other gamers / clan members for the last 20 years or so.

Anyway, I was doing a build for a clan member, and he found some 9600KF CPUs cheap, so he bought me one for doing his
build. Not even wanting a new PC, I started buying socket 1151 / Z390-compatible stuff as it was going EOL, all this
at the height of COVID price fever on PC parts. Not my best bargain PC purchase 

Why RAID 0? The long answer...

So, the only real reason I wanted a new PC was to get some NVMe SSDs happening in a RAID 0 array, and NVMe drives have been
so expensive here in Australia. I just paid $500 for 2 x 1TB el cheapo Silicon Power 3400/3000-speed drives, so when you get back
off the floor, read on......

The reason I wanted RAID 0 is that I am obviously completely insane, because every "expert" out there reckons it's of no benefit.
However, that is not my experience. I've been using RAID 0 OSes for about 20 years and they have been a joy to work with. I move
a lot of data and do a lot of backups, keep client backups, and do some game mod development that requires a fair few copies and
backups, etc.
So, yeah, that's why I want RAID 0. It made such a difference to my PC experience over the years on legacy drives and SATA SSDs
that I just figured it would scale to NVMe after seeing some benchmark speeds posted by the deceptors.

However, I just spent some time in a reputable RAID forum and learned that I was a fool for not looking into this deeper, and for scoffing
when other power users were suddenly buying AMD chipsets, not knowing the implications for NVMe RAID on the latest crop of
Intel consumer-grade boards. ID10T error.

It's now pretty obvious to me that I just need to get myself a decent 2TB NVMe SSD and be done with it.

I am returning my Silicon Power drives to the vendor. We have powerful consumer protection laws here in Australia: if a product does
not suit its intended purpose, it can be returned for a full refund, no questions asked. I bought these SSDs to build a RAID set. I bought them
as they were advertised as exactly the same product, same model, a confirmed matching pair by spec sheet and forum research, etc.; all
the pre-purchase research points to the fact that they all use Phison controllers and are identical. They are not. One of them uses
a Phison controller, and the other has a Silicon Motion controller. They have various architectural differences, but the gist of it is that the
Phison-controlled one has more raw grunt moving large data, and the Silicon Motion controller uses more cache to process smaller amounts
of data quickly. I wanted to have two identical SSDs so I could use the same firmware on them both, and this is impossible. Instead, the lowest
common denominators will be used for each stick in RAID, so they cannot and do not meet the spec for this build.

After MUCH discussion, the vendor has agreed to accept the returned SSDs for analysis, and they admitted they were not aware of the
differences in these SSDs (after first blaming me for not buying compatible products!).

So, if I do get my money back, I'll just get a 2TB SSD and be done with it. But if the vendor finds any valid reason to refuse the RMA, then
I only have one option left: that friend of mine bought the same stick as mine, the one with the Silicon Motion controller, and he will swap it for
my Phison-controlled SSD if I get into a bind and need it from him. He'll get a better SSD, and I'll have, hopefully, a RAID solution, even if it is one
bottlenecked on the DMI. This is looking more probable, as I still cannot see a way to get a full-speed RAID 0 working on this board without spending
hundreds on a RAID card... if it's $200 USD, it'll be around $400 AUD here, and money doesn't come easy here either.

You see, I need 2TB of fast access; my main flight simulator currently has 1.3TB of HD scenery files and it's slowly growing, so that's a BIG chunk
of a 2TB drive.

It's all a big deal to me because I'm an invalid with a very limited budget and PC parts are so expensive here in Oz, and I've never been caught with
a red face like this before when buying PC hardware. I've obviously lost touch with current tech, but the traps are there, pictured right on the cover
of the boxed hardware and on the product pages themselves, with disclaimers in the fine print. Buyer beware.

I am so over this PC build, I'd sell it tomorrow and take a 20% loss. Every other PC I ever bought I was excited about, but this one has been a nightmare.
I ordered and paid for 3 different motherboards and the vendors kept cancelling as "out of stock". This board was my last choice as stocks dried up.



Flaky said:


> Bifurcation refers to PCIe controller's capability of being reconfigured - so that instead of single x16 link, it can be x8+x8, or x8+x4+x4 - that's what mainstream Intel cpus support since ivy bridge.



So, why in the world doesn't my BIOS support bifurcation properly?
Presumably the PCIe controller is on the CPU, and it should be a simple thing to switch it in the BIOS?
You would think a $300 motherboard would not be skimping on this, but if it's as I suspect,
and the manufacturers simply limit the feature in order to sell more premium chipsets, then I
think I'm gonna get a bit vommity......



Wouldn't it actually cost more to put the switches on this board, which thesmokingman says he can see?
(I can't see them, but I don't know what I'm looking for exactly.)


----------



## Flaky (Jul 7, 2020)

congo said:


> So, why in the world doesn't my bios support bifurcation properly?
> Presumably the PCIe controller is on the CPU, and it should be a simple thing to switch it in the BIOS?


On Intel's socket 1xxx CPUs, PCIe bifurcation is configured by physically setting high/low states on the CPU's CFG pins. For this to be controllable by software (which includes the BIOS), the motherboard has to be designed to support that.
You may ask Gigabyte support, but don't expect much - from my experience, their support is the worst of the big 4.



congo said:


> (I can't see them, but I don't know what I'm looking for exactly) ?


4 rectangular chips in a row under the first x16 slot. They are responsible for routing 8 lanes between the first and second x16 slots.


----------



## congo (Jul 8, 2020)

Ok, I see the switches now, hiding under the graphics card.
Thanks so much for being so specific and accurate in your answers, Flaky; it really helped.

I really don't understand how the manufacturers have the hide to blatantly advertise bifurcation support when it doesn't
even exist on these boards. It's just openly lying to and disappointing their customers, leaving us feeling ripped off. What kind of
business model would do that and expect to survive?

The marketing boys really need to rein in their crack habits.
Oh, but this board does have support for pretty blinking lights........ wtf?


----------



## Flaky (Jul 8, 2020)

congo said:


> I really don't understand how the manufacturers have the hide to blatantly advertise bifurcation support when it doesn't
> even exist on these boards.


The only thing related to this board that has been advertised to you without mentioning important details is the chipset's NVMe RAID capability and the possibility of it being bottlenecked by the DMI.

I don't see where the motherboard manufacturer advertised bifurcation to you. Was it the product page? The manual? Ctrl+F on both finds nothing.
Things like the chipset's HSIO or the CPU's bifurcation are building blocks for motherboard manufacturers, and it's up to them whether to even use these, or to expose them to end users and market them as "features".


----------



## theonek (Jul 8, 2020)

I have tested this bifurcation function on the already-old AMD X370 chipset and an ASRock mobo, via an ASRock 4x M.2 expansion card, which in this scenario supported only two M.2 drives in a single slot. I RAIDed them in RAID 0 just for a test, and guess what: reads were exceptional, but writes weren't so fast; it was even slower than a single drive, which is why RAID 0 with NVMe drives isn't so good. RAID with standard SATA SSDs is another thing; there you're limited only by chipset bandwidth and, of course, the number of SSDs in it. Just like RAID 0 with HDDs: I've reached nearly 1,500 MB/s sequential read/write with hard drives, but that took eight of them. So SATA RAID is more scalable than NVMe RAID, but there's the catch: a single M.2 drive always beats any SATA RAID where speed matters!
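The scaling argument above can be sanity-checked with back-of-the-envelope arithmetic; the per-drive figures below are typical sequential ratings I'm assuming for illustration, not measurements:

```python
import math

# Rough sequential throughput figures in MB/s (typical ratings, not measurements).
SATA_SSD = 550        # SATA III SSD, interface-limited
HDD = 190             # a fast 7200 rpm hard drive
NVME_GEN3_X4 = 3500   # a good PCIe 3.0 x4 NVMe drive

def raid0_throughput(per_drive: int, n_drives: int) -> int:
    """Ideal RAID 0 scaling: sequential throughput adds roughly linearly."""
    return per_drive * n_drives

# Eight striped HDDs land near the ~1,500 MB/s figure mentioned above:
print(raid0_throughput(HDD, 8))  # → 1520

# How many SATA SSDs would it take to match one Gen3 NVMe drive?
print(math.ceil(NVME_GEN3_X4 / SATA_SSD))  # → 7
```

So the SATA array scales, but it takes roughly seven SATA SSDs (and their ports) to match one M.2 drive's sequential numbers.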


----------



## thesmokingman (Jul 8, 2020)

theonek said:


> I have tested this bifurcation function on the already-old AMD X370 chipset and an ASRock mobo, via an ASRock 4x M.2 expansion card, which in this scenario supported only two M.2 drives in a single slot. I RAIDed them in RAID 0 just for a test, and guess what: reads were exceptional, but writes weren't so fast; it was even slower than a single drive, which is why RAID 0 with NVMe drives isn't so good. RAID with standard SATA SSDs is another thing; there you're limited only by chipset bandwidth and, of course, the number of SSDs in it. Just like RAID 0 with HDDs: I've reached nearly 1,500 MB/s sequential read/write with hard drives, but that took eight of them. So SATA RAID is more scalable than NVMe RAID, but there's the catch: a single M.2 drive always beats any SATA RAID where speed matters!



That's not true. Bifurcation setups are really for HEDT platforms, and they are ridiculously fast when done right, preferably on Threadrippers. My 3970X production machine achieves 15 GB/s reads and writes using an 8TB array, i.e. four x4 NVMe PCIe 4.0 drives.
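As a rough sanity check on those numbers (the per-lane rates are the standard PCIe effective figures; the ~5 GB/s per-drive sustained rate is an assumption for illustration):

```python
# Effective PCIe per-lane throughput in GB/s (after 128b/130b encoding).
GEN3_PER_LANE = 0.985
GEN4_PER_LANE = 1.969

# A bifurcated x16 slot gives each of four drives its own x4 link.
lanes_per_drive = 4
drives = 4

per_drive_ceiling = GEN4_PER_LANE * lanes_per_drive   # ~7.9 GB/s per drive
slot_ceiling = per_drive_ceiling * drives             # ~31.5 GB/s for the slot

# With real Gen4 drives sustaining roughly 5 GB/s each, a four-drive
# stripe tops out around 20 GB/s, so a measured 15 GB/s is plausible.
stripe_estimate = 5.0 * drives
print(round(slot_ceiling, 1), stripe_estimate)  # → 31.5 20.0
```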


----------



## theonek (Jul 8, 2020)

15 GB/s transfer? Where? In CrystalDiskMark? Or can you achieve those speeds copying onto the drive itself? Because there's no single drive with that transfer speed to copy from...


----------



## congo (Jul 8, 2020)

Flaky said:


> I don't see where did the motherboard manufacturer advertise bifurcation to you. Is this product page?



No sir, nothing at all on the product page; it was just modified, though.
Nothing in the manual on the BIOS bifurcation support option either, though it's there in the BIOS.

Perhaps I made some assumptions based on web reviews and forum articles, then; my apologies. (red-faced)


----------



## thesmokingman (Jul 8, 2020)

theonek said:


> 15 GB/s transfer? Where? In CrystalDiskMark? Or can you achieve those speeds copying onto the drive itself? *Because there's no single drive with that transfer speed to copy from...*



?? 

You are missing the whole point of bifurcation with that statement.









Gigabyte showcases their Aorus 15,000 MB/s PCIe 4.0 SSD

This is what PCIe 4.0 is capable of!

www.overclock3d.net


----------



## TheoneandonlyMrK (Jul 8, 2020)

I'm using three 1TB PCIe 3.0 drives in RAID at the moment, via a relatively cheap Asus Hyper M.2 x16. On my X470 board I can only use two bifurcated x4 links on the second slot, plus one native M.2 slot, but it works well (9 GB/s max transfer, or 11 GB/s with a RAM cache).
It's doable; the adapter cost about £40.


----------



## Pugheaven (Jul 13, 2020)

If you just wanna use one card in one PCIe slot, then... I used these and they're hitting 3,500 MB/s read/write on PCIe 3.0:

EZDIY-FAB PCI Express M.2 SSD NGFF PCIe Card to PCIe 3.0 x4 Adapter (supports M.2 PCIe 22110, 2280, 2260, 2242)

I use these with a Sabrent 2TB Rocket NVMe PCIe 4.0 M.2 2280.


----------



## congo (Jul 13, 2020)

Pugheaven said:


> If you just wanna use one card in one PCIe slot, then... I used these and they're hitting 3,500 MB/s read/write on PCIe 3.0



So, would it be better to use an adapter with the NVMe M.2 in a PCIe slot, rather than putting it in the motherboard's M.2 slot, to keep the DMI free for other work?


----------



## Hardcore Games (Aug 15, 2020)

My motherboard (MSI X570-A PRO) has a pair of M.2 slots, so I can still use my old SSD while using the new one for Windows. When I get a new SSD, of course, it bumps out the old one, and I can use a PCIe card or simply put the old SSD in another rig.

Hard disks go through the same cycle, too.


----------



## Sammyfed1 (Aug 15, 2020)

I use this to add extra M.2 Drives.

Dual M.2 PCIe Adapter, M2 SSD NVME (m Key) or SATA (b Key) 22110 2280 2260 2242 2230 to PCI-e 3.0 x 4 Host Controller Expansion Card with Low Profile Bracket for Desktop PCI Express Slot








www.amazon.com


----------



## sapsinoy (Aug 18, 2020)

I use a cheap adapter like the one above to boot Win10 from a Z87 motherboard's PCIe x4 slot.


----------



## dirtyferret (Aug 18, 2020)

I recently updated my mobo's NVMe M.2 slot, so I purchased an adapter for my P1 NVMe. A lot of people need to read their mobo manual, as it may drop PCIe x4 throughput if more than one slot is occupied (wifi card, audio card, another adapter, etc.).


----------



## Deleted member 193596 (Aug 19, 2020)

To be honest:

My system was a 4TB NVMe-SSD-only rig (Gen 4 Corsair MP600).

After one SSD died, I bought an external WD D10 "Game Drive", shucked it, and used the HDD (I hadn't had an HDD in years).

In 95% of all games the loading times are more or less the same (except World of Warcraft; it is unplayable on anything but an SSD).

Now I have an 8TB enterprise-grade storage solution for 150 bucks.


And to answer your question:
I run a 970 Evo on a Z97 Anniversary and it works perfectly fine.


----------



## raouiayacine (Oct 18, 2020)

Hyderz said:


> or you can just grab this
> 
> 
> 
> ...


Hello! I have a Z390-P. Will it support this much storage on one drive, and would it be supported in only one or both slots?


----------



## EarthDog (Oct 18, 2020)

raouiayacine said:


> Hello! I have a Z390-P. Will it support this much storage on one drive, and would it be supported in only one or both slots?


Your manual will tell you.


----------



## sapsinoy (Oct 18, 2020)

Size isn't the problem; just read carefully which option you choose: one NVMe SSD at x4 PCIe speed, or two NVMe SSDs at x2 PCIe speed each.
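The point above can be put in numbers: a dual-M.2 adapter without a PCIe switch splits the slot's lanes between the drives. A quick sketch, assuming the theoretical PCIe 3.0 effective rate per lane:

```python
GEN3_PER_LANE_GBPS = 0.985  # theoretical PCIe 3.0 effective throughput per lane

def per_drive_bandwidth(slot_lanes: int, drives: int) -> float:
    """Ceiling per drive when a switchless adapter splits a slot's lanes evenly."""
    return GEN3_PER_LANE_GBPS * slot_lanes / drives

print(round(per_drive_bandwidth(4, 1), 2))  # → 3.94: a single drive gets the full x4
print(round(per_drive_bandwidth(4, 2), 2))  # → 1.97: two drives drop to x2 each
```

That x2 ceiling is still around three times a SATA SSD, but well short of what a Gen3 NVMe drive can do.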


----------



## kapone32 (Oct 19, 2020)

This is the only card that allows for two PCIe 3.0 x4 NVMe drives on AM4 boards, other than the x16 add-in cards meant for Threadripper.






www.canadacomputers.com


----------



## jallenlabs (Oct 24, 2020)

I've got one of these. It's designed for NVMe drives, not SATA. It was like $14 at Newegg. It's plugged into my second x16 (x8-wired) slot, which is fine, as it wouldn't fit in the x4 slot at the bottom of my board. I doubt my 2060 needs more than 8 lanes anyway.


----------



## cdesgagne1212 (Nov 11, 2020)

Hello everyone,
I found this thread, and after reading it I didn't find the answer I'm looking for.
My mobo is an MSI Z390-A PRO, which has only one NVMe connector. I'm planning to replace my 1TB HDD because it's slower than ever and SSD prices are dropping. So, my question is: am I better off buying a normal 2.5" SATA SSD, or an NVMe drive on an expansion card plugged into the PCIe connector?


----------



## EarthDog (Nov 11, 2020)

cdesgagne1212 said:


> Hello everyone,
> I found this thread, and after reading it I didn't find the answer I'm looking for.
> My mobo is an MSI Z390-A PRO, which has only one NVMe connector. I'm planning to replace my 1TB HDD because it's slower than ever and SSD prices are dropping. So, my question is: am I better off buying a normal 2.5" SATA SSD, or an NVMe drive on an expansion card plugged into the PCIe connector?


Six of one, half a dozen of the other, really. NVMe drives are faster on paper, but in reality you likely won't notice a difference in most cases. Get a 2.5" SATA drive until you have a native M.2 port.


----------



## cdesgagne1212 (Nov 11, 2020)

EarthDog said:


> Six of one, half dozen of the other, really. NVMe's are faster on paper, but in reality, you likely won't notice a difference in most cases. Get a 2.5" SATA drive until you have a native M.2 port.



This is the idea I had initially, but since I'm not planning to keep the same setup forever, I was thinking of buying an NVMe drive instead of a SATA SSD, so my next mobo will have more than one NVMe connector.

Also, my motherboard has a PCIe x16 at the top running on CPU lanes, but my PCIe x4 (PCI_E4) is not on the CPU lanes. If I plug an expansion card into it, will it impact the PCI_E1 bandwidth?




Also, right now in Canada, the WD Blue NVMe is cheaper than the 2.5" WD Blue SSD.


----------



## EarthDog (Nov 11, 2020)

I still say get a 2.5" SSD now, and when you have a native slot, get an M.2 NVMe module.

As far as your lane breakdown goes, your manual tells you what does what. That said, looking at your image, the top slot's (PCI_E1) lanes are sourced from the CPU, while the rest of the PCIe slots are sourced from the chipset... so no, using PCI_E4 will NOT touch the CPU-controlled top slot.


----------



## cdesgagne1212 (Nov 11, 2020)

You're right, I didn't pay attention to the diagram of my motherboard... 



Thank you for the advice; I'll probably go for an SSD.


----------



## EarthDog (Nov 11, 2020)

Yes. Your first image even shows the top slot is CPU-fed while the rest are PCH.


----------



## jallenlabs (Nov 18, 2020)

I've used a cheap PCIe x4 adapter from Newegg to run some of my M.2 NVMe SSDs. They work fine. Just make sure it's at least an x4 slot and you're good to go.


----------

