I'm running an LVM RAID 1 across two disks. Here is what lvs tells me about my VG:
root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Health SyncAction Mismatches
lv-data vg-data rwi-aor-r- 2.70t 100.00 lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed idle 0
[lv-data_rimage_0] vg-data iwi-aor-r- 2.70t /dev/sda(0) refresh needed
[lv-data_rimage_1] vg-data iwi-aor--- 2.70t /dev/sdb(1)
[lv-data_rmeta_0] vg-data ewi-aor-r- 4.00m /dev/sda(708235) refresh needed
[lv-data_rmeta_1] vg-data ewi-aor--- 4.00m /dev/sdb(0)
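The trailing r in the Attr column is what lvs flags as the volume-health bit. Based on my reading of lvs(8), the 9th character encodes health ((p)artial, (r)efresh needed, (m)ismatches exist, (w)ritemostly); here is a small helper to decode it. This is a hypothetical convenience function I wrote, not part of LVM:

```shell
# Decode the 9th character of an lvs Attr string (the volume-health bit),
# per my reading of lvs(8). Hypothetical helper, not an LVM tool.
decode_health() {
  case "$(printf '%s' "$1" | cut -c9)" in
    p) echo "partial" ;;
    r) echo "refresh needed" ;;
    m) echo "mismatches exist" ;;
    w) echo "writemostly" ;;
    -) echo "healthy" ;;
    *) echo "unknown" ;;
  esac
}

decode_health "rwi-aor-r-"   # the lv-data Attr above -> refresh needed
decode_health "iwi-aor---"   # the sdb rimage Attr    -> healthy
```

This matches the output above: both sub-LVs on /dev/sda carry the r flag, while those on /dev/sdb read as healthy.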
It looks like something went wrong on /dev/sda. The SMART log of that disk looks fine, so I hope it was just something temporary, and I would like to refresh/resync my RAID. Here is what I do:
root@picard:~# lvchange --refresh vg-data/lv-data
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
(…wait for a couple of minutes…)
root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Health SyncAction Mismatches
lv-data vg-data rwi-aor-r- 2.70t 100.00 lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed idle 0
[lv-data_rimage_0] vg-data iwi-aor-r- 2.70t /dev/sda(0) refresh needed
[lv-data_rimage_1] vg-data iwi-aor--- 2.70t /dev/sdb(1)
[lv-data_rmeta_0] vg-data ewi-aor-r- 4.00m /dev/sda(708235) refresh needed
[lv-data_rmeta_1] vg-data ewi-aor--- 4.00m /dev/sdb(0)
So, that did nothing? My dmesg indicates that it tried to recover the RAID:
[150522.459416] device-mapper: raid: Faulty raid1 device #0 has readable super block. Attempting to revive it.
Well, OK, maybe scrubbing will help? Let's try that:
root@picard:~# lvchange --syncaction repair vg-data/lv-data
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
root@picard:~# lvs -a -o +devices,lv_health_status,raid_sync_action,raid_mismatch_count
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Health SyncAction Mismatches
lv-data vg-data rwi-aor-r- 2.70t 100.00 lv-data_rimage_0(0),lv-data_rimage_1(0) refresh needed idle 0
[lv-data_rimage_0] vg-data iwi-aor-r- 2.70t /dev/sda(0) refresh needed
[lv-data_rimage_1] vg-data iwi-aor--- 2.70t /dev/sdb(1)
[lv-data_rmeta_0] vg-data ewi-aor-r- 4.00m /dev/sda(708235) refresh needed
[lv-data_rmeta_1] vg-data ewi-aor--- 4.00m /dev/sdb(0)
Several things are strange here: SyncAction is idle, i.e. it seems the scrub finished instantly? dmesg says:
[150695.091180] md: requested-resync of RAID array mdX
[150695.092285] md: mdX: requested-resync done.
This, too, looks like the scrubbing didn't actually do anything.
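One way to cross-check what the kernel-level raid target thinks, independently of lvs, is the dmsetup status line for the LV. A sketch of extracting the per-device health field from it, assuming the status format described in the kernel's dm-raid documentation (A = in-sync, a = alive but not in-sync, D = dead); the sample line below is made up for illustration, not taken from my machine:

```shell
# Extract the per-device health characters from a `dmsetup status` raid line,
# assuming the field layout from the kernel's dm-raid documentation:
#   <start> <len> raid <raid_type> <#devices> <health> <sync_ratio> <sync_action> ...
# Hypothetical helper; the sample line is invented for illustration.
raid_health() {
  set -- $1          # split the status line into fields on whitespace
  echo "$6"          # 6th field: one health character per raid device
}

# A healthy two-leg raid1 would be expected to report "AA":
raid_health "0 5860442112 raid raid1 2 AA 5860442112/5860442112 idle 0 0 -"
```

In my situation I would expect something other than two capital A's for the leg on /dev/sda if the kernel still considers it out of sync.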
I'm running Armbian based on Ubuntu 16.04.4 LTS. LVM version:
root@picard:~# lvm version
LVM version: 2.02.133(2) (2015-10-30)
Library version: 1.02.110 (2015-10-30)
Driver version: 4.37.0
Tags: lvm, raid, linux, software-raid, lvm2