I've been stuck on this for 5 days now, so any help would be appreciated.

This is a bare-metal server on OVH running Ubuntu 22.04. The OS was installed on a RAID1 array spanning both "system disks" (regular SSDs). The issue appears when creating a RAID10 array from the 6 "data disks" we have (NVMe disks): after creating the filesystem on the array (ext4), mounting it, and amending fstab and mdadm.conf, the OS will not boot.

Here's the lsblk printout:
loop0         7:0    0  63.9M  1 loop   /snap/core20/2105
loop1         7:1    0 111.9M  1 loop   /snap/lxd/24322
loop2         7:2    0  40.9M  1 loop   /snap/snapd/20290
loop3         7:3    0  40.4M  1 loop   /snap/snapd/20671
sda           8:0    0 447.1G  0 disk
├─sda1        8:1    0   511M  0 part
├─sda2        8:2    0     1G  0 part
│ └─md2       9:2    0  1022M  0 raid1  /boot
├─sda3        8:3    0 445.1G  0 part
│ └─md3       9:3    0   445G  0 raid1  /
├─sda4        8:4    0   512M  0 part   [SWAP]
└─sda5        8:5    0     2M  0 part
sdb           8:16   0 447.1G  0 disk
├─sdb1        8:17   0   511M  0 part   /boot/efi
├─sdb2        8:18   0     1G  0 part
│ └─md2       9:2    0  1022M  0 raid1  /boot
├─sdb3        8:19   0 445.1G  0 part
│ └─md3       9:3    0   445G  0 raid1  /
└─sdb4        8:20   0   512M  0 part   [SWAP]
nvme0n1     259:0    0   3.5T  0 disk
└─nvme0n1p1 259:6    0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
nvme1n1     259:1    0   3.5T  0 disk
└─nvme1n1p1 259:8    0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
nvme2n1     259:2    0   3.5T  0 disk
└─nvme2n1p1 259:9    0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
nvme3n1     259:3    0   3.5T  0 disk
└─nvme3n1p1 259:10   0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
nvme4n1     259:4    0   3.5T  0 disk
└─nvme4n1p1 259:11   0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
nvme5n1     259:5    0   3.5T  0 disk
└─nvme5n1p1 259:12   0   3.5T  0 part
  └─md127     9:127  0  10.5T  0 raid10 /db
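For reference, the array and filesystem were created roughly like this (a reconstruction, not a transcript; the /dev/md0 name and the exact mdadm flags are assumptions on my part, the device names are from the lsblk output above):

```shell
# Create the RAID10 array from the six NVMe partitions:
mdadm --create /dev/md0 --level=10 --raid-devices=6 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 \
    /dev/nvme3n1p1 /dev/nvme4n1p1 /dev/nvme5n1p1

# Put an ext4 filesystem on the array and mount it:
mkfs.ext4 /dev/md0
mkdir -p /db
mount /dev/md0 /db
```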
fstab:
UUID=9079d740-3f55-4f57-880b-56f6a628494f /         ext4 defaults 0 1
UUID=e780ca2c-b474-4ae9-9c1d-6224a23fca6b /boot     ext4 defaults 0 0
LABEL=EFI_SYSPART                         /boot/efi vfat defaults 0 1
UUID=d3ebca74-ffff-4b55-a9e4-70c002e3edf5 swap      swap defaults 0 0
UUID=b8298672-7b7c-48bf-9022-9d9408adafeb swap      swap defaults 0 0
UUID=257b2715-613a-44c7-8744-7c159764c09a /db       ext4 defaults 0 0
mdadm.conf:
# definitions of existing MD arrays
ARRAY /dev/md/md2 metadata=1.2 UUID=109fefd8:aa46fdd6:bddb613e:892bce00 name=md2
ARRAY /dev/md/md3 metadata=1.2 UUID=950bfa3c:f52d5cfd:9f9c1435:0d85a63f name=md3

# This configuration was auto-generated on Sun, 14 Jan 2024 18:51:06 +0000 by mkconf
ARRAY /dev/md/md2 metadata=1.2 name=md2 UUID=109fefd8:aa46fdd6:bddb613e:892bce00
ARRAY /dev/md/md3 metadata=1.2 name=md3 UUID=950bfa3c:f52d5cfd:9f9c1435:0d85a63f
ARRAY /dev/md/md0 metadata=1.2 name=ns3229089:md0 UUID=a38b6d22:2e7220cb:79f5ce7d:e0c3d63a
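For completeness, this is the usual way to record a new array in mdadm.conf and make it available at boot (a sketch; I'd welcome a sanity check that nothing is missing here):

```shell
# mdadm --detail --scan prints ARRAY lines like the ones above;
# append them, then remove any duplicates by hand:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array can be assembled at boot:
update-initramfs -u
```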
The OS is fully updated. Things I've tried:
- Adding "resume=none" in /etc/initramfs-tools/conf.d/resume
- Blacklisting btrfs in /etc/modprobe.d/blacklist
- Commenting out GRUB_CMDLINE_LINUX="nomodeset iommu=pt console=tty0 console=ttyS0,115200n8" in /etc/default/grub (not sure why this exists; it was added by the default OVH installation)
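In shell form, the first two attempts were roughly the following (file contents assumed from the descriptions above; the .conf note is something I'm not sure I accounted for):

```shell
# Disable resume-from-hibernation probing in the initramfs
# (the key initramfs-tools expects is upper-case RESUME):
echo "RESUME=none" > /etc/initramfs-tools/conf.d/resume

# Blacklist btrfs; note that modprobe.d only reads files ending
# in .conf, so a file named just "blacklist" may be ignored:
echo "blacklist btrfs" >> /etc/modprobe.d/blacklist.conf

# Propagate both changes into the initramfs:
update-initramfs -u
```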
No matter what I try, the OS always gets stuck on the same line during boot. I can enter recovery mode and edit files; I just cannot boot the system.