A couple of remarks:
There are adapter cards with 2 NVMe ports and a RAID controller, but that's not the issue here; the reference is 15 pages back (the red card, from Amazon, around 150 EUR/USD).
Using the integrated BIOS RAID for SATA is, in my opinion, a mistake today; better to do a soft RAID under Windows. Even though RAID mode includes AHCI, so everything is "fine" compared to plain AHCI mode, it activates the Intel option ROM (usually) and adds more stuff to manage, which is complicated. Reading Fernando's expert discussions about it brings you to choosing carefully among more or less outdated Intel drivers depending on firmware versions, and it can be quite an adventure to set things right. My soft RAID here works just fine.
That's in case it's the Intel RAID. It could also be one of the custom RAID controllers from the motherboard's manufacturer, and there, well, wow: not a good idea, not 15 years later, with no drivers and/or bad support in Windows 10.
Finally, never push the PCIe base clock over 100 MHz. That's real advice; it has never been a good idea, so don't suggest it to people.
@catup: don't mix two things at the same time; do things one by one. Test the grey slot in AHCI mode first, and check whether the NVMe drive is detected and what its speed is. Then activate RAID; if you lose the drive at that moment, you know what triggered it. Then try the original slot with and without RAID. After that you'll know precisely what makes the drive disappear.
From experience here, I would say things work best with all the "useless" controllers disabled: SATA 3 and the custom RAID, off. Just leave SATA 2 on and then try things one by one, knowing that RAID usually renames its attached disks to "SCSI" instead of "Disk", that the NVMe mod uses this "SCSI" status, and that they don't play well together.
Also refer to the schematic in your motherboard's user manual explaining which PCIe slot goes through where and shares what with whom. You'll surely understand why you lose half the speed on some slots and not others.
Finally, about PCIe slot sharing: when you use certain slots together, the speed can be cut in half, and things can get tricky, especially with a double-width video card that you can't put where you want.
I had to move my NVIDIA card down to x16 slot #2 so it covers an old PCI slot that I don't use. NVMe 2 is in x16 slot #3, NVMe 1 is in x16 slot #1, and the 10 Gbit LAN card is in an x8 slot (the short ones).
Everything is populated, and I would like to put a real sound card in place of NVMe 1, so I'll probably buy one of those 2-slot adapters sooner or later.
In that configuration, with all PCIe slots populated, my NVIDIA card drops to x8 instead of x16. I didn't see any difference in performance.
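A quick back-of-envelope calculation shows why x8 instead of x16 rarely matters here. This is a sketch under my own assumptions (PCIe 2.0, roughly 500 MB/s usable per lane after 8b/10b encoding overhead), not figures from the thread:

```python
# Rough PCIe link bandwidth estimate.
# Assumption: PCIe 2.0, ~500 MB/s usable per lane after encoding overhead.
PER_LANE_MB_S = 500

def link_bandwidth(lanes: int) -> int:
    """Approximate one-direction link bandwidth in MB/s for a given width."""
    return lanes * PER_LANE_MB_S

print(link_bandwidth(16))  # x16 -> 8000 MB/s
print(link_bandwidth(8))   # x8  -> 4000 MB/s
print(link_bandwidth(4))   # x4  -> 2000 MB/s
```

An NVMe drive topping out around 1600 MB/s fits comfortably even in an x4 link, so halving a GPU slot's width from x16 to x8 often doesn't show up in real-world use either.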
Edit:
I have 2x 2 TB SATA drives, 2x 3 TB SATA drives as a soft RAID, 1x DVD writer, 1x SATA SSD, the E-SATA controller activated with no problem, plus I can add 2 more SATA SSDs powered through cables outside the machine, and the 2 NVMe drives. That's a total of 10 drives working at the same time: all the non-NVMe drives on SATA 2 at 250 MB/s max, and 1600 MB/s for the 2 NVMe drives. I didn't try to push both NVMe drives at the same time, though, as I have no scenario where that would happen. I could try launching CrystalDiskMark on both at the same time, though.
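Adding up the per-link ceilings for that drive list gives a theoretical aggregate. This is only a sketch with my assumed per-device limits (SATA 2 at ~250 MB/s, each NVMe at ~1600 MB/s); real throughput is lower since the DVD writer and spinning disks never reach the link ceiling:

```python
# Theoretical aggregate throughput for the 10-drive setup above.
# Assumptions: SATA 2 link ceiling ~250 MB/s per device, NVMe ~1600 MB/s.
SATA2_MB_S = 250
NVME_MB_S = 1600

sata_devices = 8  # 2x 2TB + 2x 3TB + DVD writer + 1 SSD + 2 external SSDs
nvme_devices = 2

total = sata_devices * SATA2_MB_S + nvme_devices * NVME_MB_S
print(total)  # 5200 MB/s theoretical ceiling, not real-world throughput
```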
Edit 2: well, it works: 1800 MB/s x 2.
View attachment 344709