I have a 4 x 3TB NAS set up as RAID5, which had been working very well for almost a year.
After a recent abrupt shutdown (I had to hit the power button), the RAID no longer assembles at boot.
I ran:
mdadm --examine /dev/sd[bcdefghijklmn]1 >> raid.status
The output is below:
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : 2c1e0041:21d926d6:1c69aa87:f1340a12
Update Time : Sat Dec 27 20:54:55 2014
Checksum : d94ccaf5 - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : a0261c8f:8a2fbb93:4093753a:74e7c5f5
Update Time : Sat Dec 27 20:54:55 2014
Checksum : 7b84067b - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : active
Device UUID : 9dc56e9e:d6b00f7a:71da67c7:38b7436c
Update Time : Sat Dec 27 20:54:55 2014
Checksum : 749b3dba - correct
Events : 17012
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7d2a94ca:d9a42ca9:a4e6f976:8b5ca26b
Name : BruceLee:0 (local to host BruceLee)
Creation Time : Mon Feb 4 23:07:01 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
Array Size : 8790405888 (8383.18 GiB 9001.38 GB)
Used Dev Size : 5860270592 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 81e5776f:2a466bee:399251a0:ab60e9a4
Update Time : Sun Nov 2 09:07:02 2014
Checksum : cb4aebaf - correct
Events : 159
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
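As a sanity check on the sizes above: for a four-drive RAID5, usable capacity should be (n-1) times the per-device data size. mdadm reports "Used Dev Size" in 512-byte sectors while "Array Size" is in 1 KiB blocks, so the figures from the output can be cross-checked with plain shell arithmetic (values copied from the --examine dump above):

```shell
# Used Dev Size is in 512-byte sectors; Array Size is in 1 KiB blocks,
# so divide the sector count by 2 before multiplying.
used_dev_kib=$((5860270592 / 2))   # per-device data size in KiB
array_kib=$((3 * used_dev_kib))    # RAID5 with 4 drives: 3 carry data
echo "$array_kib"                  # prints 8790405888, matching Array Size
```

This confirms the superblocks agree internally; the sizes are not the problem here.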
Checking the disks in Ubuntu's Disks utility:
sda/sdb/sdc show as OK
and sdd shows as OK with 64 bad sectors
If I run fsck /dev/md0
I see:
fsck.ext2: Invalid argument while trying to open /dev/md0
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Next, if I run
mdadm --examine /dev/sd[a-d] | egrep 'Event|/dev/sd'
I get:
/dev/sda:
Events : 17012
/dev/sdb:
Events : 17012
/dev/sdc:
Events : 17012
/dev/sdd:
Events : 159
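The reasoning behind those counters can be made mechanical: the member whose event count lags the others is the stale one that mdadm will refuse to include in the array. A small sketch over the values above (device names and counts are simply the ones from this output):

```shell
max=17012   # highest event count seen across the members
for pair in sda:17012 sdb:17012 sdc:17012 sdd:159; do
  dev=${pair%%:*}   # device name before the colon
  ev=${pair##*:}    # event count after the colon
  if [ "$ev" -lt "$max" ]; then
    echo "$dev is stale (events $ev < $max)"
  fi
done
# prints: sdd is stale (events 159 < 17012)
```

So sdd stopped updating its superblock on Nov 2 while the other three kept going until Dec 27, which matches the AAA. state the other members report.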
If I run cat /proc/mdstat, I get:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[1](S) sdc[2](S) sdd[3](S) sda[0](S)
      11720542048 blocks super 1.2

unused devices: <none>
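The 11720542048 blocks figure in that mdstat output is consistent with all four members being present (as spares, hence the (S) flags): each device contributes its "Avail Dev Size" converted from 512-byte sectors to the 1 KiB blocks mdstat counts in. A quick check with the value from the --examine output:

```shell
# Avail Dev Size is in 512-byte sectors; mdstat counts 1 KiB blocks.
per_dev_kib=$((5860271024 / 2))    # 2930135512 KiB per member
total_kib=$((4 * per_dev_kib))
echo "$total_kib"                  # prints 11720542048, matching mdstat
```

So all four drives are visible to md; the array is merely inactive, not missing members.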
Finally, running file -s /dev/md0
I get:
/dev/md0: empty
Basically, I think I need to run --assemble on the RAID, but I'm afraid of losing my data, and that fourth drive also worries me a bit.
Could anyone advise on the best logical steps to take to get this running again?