I had problems with this mod. I gave up on it after many fresh Windows installations; for me, it was only good for one boot.
You had a different motherboard and a different BIOS. Please do not spread false information here. Thank you.
Hello,
I have a GA-X58A-UD5 rev 2.0 board with an X5690 that I would like to use for some small-time server tasks, and I would like to apply this mod so I can use several NVMe drives I have lying around. However, I have a few questions before proceeding with flashing the BIOS and setting everything up:
1) Will the modded BIOS essentially disable one of the SATA controllers? (I have quite a few large 3.5" spinning disks that I use for long-term storage and would like to keep using them.)
2) Does anyone have experience with how this mod works with PCIe switch cards, such as those with the PLX8747 chip from HighPoint?
https://www.highpoint-tech.com/nvme-aic/r1104
I have been reading through the thread and it appears the mod has issues when more than one NVMe drive is plugged into the PCIe adapter. Can you please tell me if any of you have run into this?
3) Finally, I have read that even when it works fine, there are problems with GRUB finding the NVMe device to boot from. Can you please clarify this point and share your experience installing Linux / GRUB with more than one drive in the PCIe adapter?
The modded BIOS will _not_ disable any SATA controllers. On some Gigabyte boards you do need to manually disable the SATA 3 controller; I forget whether the X58A-UD5 rev 2.0 has one or not.
If it does, chances are you will have to disable SATA 3 and be limited to roughly 250 MB/s on your server for the mechanical drives and SSDs.
Concerning the HighPoint: wow, nice hardware. Beware, though, you're limited to PCIe 2.0, so you'd be spending a lot of money on performance you won't get. But if it's just about using multiple NVMe drives... yeah, why not.
There is a screenshot somewhere in the first ten or so pages of this topic where a dual-NVMe active switch board was tested successfully at 3600 MB/s. There is a screenshot showing 6500 MB/s on another page, but with absolutely no details, so I can't say whether you can achieve that.
Theoretically you might, but you'll take up a whole x16 slot for that, and if you have four drives, it drops to about 2 GB/s each under simultaneous access.
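The back-of-the-envelope math behind that figure, assuming PCIe 2.0's usual ~500 MB/s of usable bandwidth per lane:

    16 lanes x 500 MB/s ≈ 8 GB/s for the whole x16 slot
    8 GB/s / 4 drives ≈ 2 GB/s per drive when all of them are hit at once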
Still, interesting hardware, very interesting hardware. I'm kind of curious how it's handled in the BIOS in the case of RAID 0+1 or RAID 5.
And finally, Linux.
I used two drives in two different Sabrent adapters.
With the MBR on one drive, I never managed to boot a system on the other drive, because the "id chain" was incorrect and GRUB couldn't find the drive with that id.
I'm not knowledgeable enough to explain exactly why, but roughly: the trick we apply to the BIOS to make an NVMe drive bootable makes the BIOS treat the controller + drive as a single device. The second drive doesn't "belong" to that controller, and then... things get messy.
So, if you use these big four-drive adapters to host and boot multiple systems on multiple drives, then yes, I bet you will run into trouble.
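For what it's worth, the workaround I would try (a sketch only, untested beyond my two-drive setup; the UUIDs and paths below are placeholders): keep /boot and GRUB on the drive the BIOS can actually see, and have the menu entry point the kernel at the second drive's root filesystem by UUID. GRUB only needs to read the kernel and initrd from the first drive; once the kernel's own NVMe driver is up, it finds the second drive by itself. Roughly, in grub.cfg:

    menuentry "Linux on the second NVMe (sketch)" {
        insmod part_msdos
        insmod ext2
        # UUID of the filesystem holding the kernel, on the BIOS-visible first drive (placeholder)
        search --no-floppy --fs-uuid --set=root UUID-OF-BOOT-FILESYSTEM
        # root= names the second drive's filesystem by UUID; the kernel locates it on its own
        linux /vmlinuz root=UUID=UUID-OF-ROOT-FILESYSTEM ro
        initrd /initrd.img
    }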
If the adapter can define and manage its own RAID array, I don't know how the BIOS will identify and handle it.
I simply didn't have enough money to run the test and potentially lose what I'd invested.
And since I just installed an X-Fi next to my 10 Gbit LAN card, I'm out of space; if I installed one of these big NVMe adapters, it would block the fans on my video card, so I'll never be able to give you that answer.
I hope someone reading this can say whether it is possible or not.
Currently, my drive has five primary partitions,
the fifth one being, of course, impossible to access (it's the Windows 10 recovery partition).
I can boot either Linux or Windows. GRUB works. Well, THAT GRUB works (it's the GRUB from Ubuntu 18; I upgraded the system from 18 to 22, but I don't think I let it upgrade GRUB, and things are fine. I'm pretty sure that if I touch anything it will stop working, though).
Fingers crossed that someone posts more solid information.
I would say the PLX8747 is overkill, and you won't be able to get all the speed it has to offer anyway.
If you plan on using only one system, you can put multiple cheap Sabrent adapters in the available PCIe slots, one per slot, all at x4 speed, and still get 1800 MB/s out of each drive. No multi-boot; the other NVMe drives are used as storage. But again, defining a software RAID there... I don't know. I'm not sure. I don't think it will work, again because of the wrong IDs of the 'non-bootable' drives.
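If someone wants to try the software RAID purely as storage (not booting from the array), this is roughly what it would look like with mdadm on Linux; the device names below are placeholders and I have not tested this with the modded BIOS:

    # stripe two NVMe drives into one array (placeholder device names; this wipes them)
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/nvme-raid
    sudo mount /dev/md0 /mnt/nvme-raid
    # record the array so it is assembled again on reboot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf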
Keep us informed? It's an interesting project.
I promised a picture once; here it is.
The Sabrent is all the way at the bottom; you can see the three blue lights. It's the only way to fit all this stuff on the UD7.
And now there is an X-Fi between the GPU and the LAN card.
The max PCIe link is x8 for the 10 Gbit LAN card and the GTX 1060; the Sabrent is x4 anyway, and the X-Fi is x1.
I don't know how much "speed" I'd get if I replaced the X-Fi or the 10 Gbit card with a switched NVMe adapter. Probably 3600 MB/s. Not worth losing a real Intel 10 Gbit/s chip, or a real EMU sound chip.
Sometimes, you compromise.