I'm running Fedora Server Edition 26 and have two external USB drives, each with a single partition, which I've combined into a RAID1 array. My /etc/fstab has this line to automount the array:
UUID=B0C4-A677 /mnt/backup-raid exfat uid=strwrsdbz,gid=strwrsdbz,umask=022,windows_names,locale=en.utf8,nobootwait,nofail 0 2
However, when boot completes, the array is not mounted at /mnt/backup-raid. If I check the journal logs, I see
Oct 28 21:32:07 hostname systemd[1]: Started File System Check on /dev/disk/by-uuid/B0C4-A677.
Oct 28 21:32:07 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-B0C4\x2dA677 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:07 hostname kernel: audit: type=1130 audit(1509240727.851:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-B0C4\x2dA677 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname systemd[1]: Mounting /mnt/backup-raid...
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/c.
Oct 28 21:32:08 hostname ntfs-3g[702]: Version 2017.3.23 integrated FUSE 28
Oct 28 21:32:08 hostname ntfs-3g[702]: Mounted /dev/sda1 (Read-Write, label "", NTFS 3.1)
Oct 28 21:32:08 hostname ntfs-3g[702]: Cmdline options: rw,uid=1000,gid=1000,umask=022,windows_names,locale=en.utf8
Oct 28 21:32:08 hostname ntfs-3g[702]: Mount options: rw,allow_other,nonempty,relatime,default_permissions,fsname=/dev/sda1,blkdev,blksize=4096
Oct 28 21:32:08 hostname ntfs-3g[702]: Global ownership and permissions enforced, configuration type 7
Oct 28 21:32:08 hostname lvm[599]: 3 logical volume(s) in volume group "fedora" now active
Oct 28 21:32:08 hostname systemd[1]: Started LVM2 PV scan on device 8:5.
Oct 28 21:32:08 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname kernel: audit: type=1130 audit(1509240728.594:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:5 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:08 hostname systemd[1]: Found device /dev/mapper/fedora-home.
Oct 28 21:32:08 hostname systemd[1]: Mounting /home...
Oct 28 21:32:08 hostname kernel: XFS (dm-2): Mounting V4 Filesystem
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/igel1.
Oct 28 21:32:08 hostname systemd-fsck[666]: /dev/sda3: clean, 376/128016 files, 291819/512000 blocks
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/igel2.
Oct 28 21:32:08 hostname systemd[1]: Mounted /mnt/backup-raid.
* snip *
Oct 28 21:32:33 hostname systemd[1]: Created slice system-mdadm\x2dlast\x2dresort.slice.
Oct 28 21:32:33 hostname systemd[1]: Starting Activate md array even though degraded...
Oct 28 21:32:33 hostname systemd[1]: Unmounting /mnt/backup-raid...
Oct 28 21:32:34 hostname systemd[1]: Started Activate md array even though degraded.
Oct 28 21:32:34 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-last-resort@md0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:34 hostname audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-last-resort@md0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 28 21:32:34 hostname kernel: md0:
Oct 28 21:32:34 hostname systemd[1]: Unmounted /mnt/backup-raid.
So it looks like the array does get mounted in the first block of log output, but is then unmounted because it is showing up as degraded. Yet as soon as boot has finished, I can run sudo mount -a
and the array mounts without any problems. The contents appear correctly under /mnt/backup-raid, and checking /proc/mdstat shows
Personalities : [raid1]
md0 : active raid1 sdc2[0] sdb2[2]
485345344 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
unused devices: <none>
so everything looks healthy. In case it helps, my /etc/mdadm.conf contains
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=hostname:backup-raid UUID=6c8bf3df:c4147eb1:4c3f88d8:e94d1dbc devices=/dev/sdb2,/dev/sdc2
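If more diagnostics would be useful, these are the standard commands I can run to gather them (output omitted here; /dev/md0, /dev/sdb2, and /dev/sdc2 are the device names from my setup above):

```shell
# Detailed state of the assembled array (members, sync status)
mdadm --detail /dev/md0

# Per-member superblocks, to confirm both partitions agree on the array metadata
mdadm --examine /dev/sdb2 /dev/sdc2

# What this boot's "last resort" timer/service logged before the array was unmounted
journalctl -b -u mdadm-last-resort@md0.timer -u mdadm-last-resort@md0.service
```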
I found this email thread that seemed to be dealing with a similar situation, but as far as I can tell it just went quiet. Apologies if the answer is in that thread and I missed it, but it gets a bit too dense for me to follow.