mdadm did not notice a failed disk in raid0

3

I have a server with an mdadm raid0:

# mdadm --version
mdadm - v3.1.4 - 31st August 2010
# uname -a
Linux orkan 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

One of the disks has failed:

# grep sdf /var/log/kern.log | head
Jan 30 19:08:06 orkan kernel: [163492.873861] sd 2:0:9:0: [sdf] Unhandled error code
Jan 30 19:08:06 orkan kernel: [163492.873869] sd 2:0:9:0: [sdf] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jan 30 19:08:06 orkan kernel: [163492.873874] sd 2:0:9:0: [sdf] Sense Key : Hardware Error [deferred] 

Right now in dmesg I can see:

Jan 31 15:59:49 orkan kernel: [238587.307760] sd 2:0:9:0: rejecting I/O to offline device
Jan 31 15:59:49 orkan kernel: [238587.307859] sd 2:0:9:0: rejecting I/O to offline device
Jan 31 16:03:58 orkan kernel: [238836.627865] __ratelimit: 10 callbacks suppressed
Jan 31 16:03:58 orkan kernel: [238836.627872] mdadm: sending ioctl 1261 to a partition!
Jan 31 16:03:58 orkan kernel: [238836.627878] mdadm: sending ioctl 1261 to a partition!
Jan 31 16:04:09 orkan kernel: [238847.215187] mdadm: sending ioctl 1261 to a partition!
Jan 31 16:04:09 orkan kernel: [238847.215195] mdadm: sending ioctl 1261 to a partition!

But mdadm has not noticed that the drive failed:

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Jan 13 15:19:05 2011
     Raid Level : raid0
     Array Size : 71682176 (68.36 GiB 73.40 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Sep 22 14:37:24 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 7e018643:d6173e01:17ab5d05:f75b494e
         Events : 0.9

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1

Additionally, forcing a read from /dev/md0 supports the theory that /dev/sdf has failed, and yet mdadm still does not mark the drive as failed:

# dd if=/dev/md0 of=/root/md.data bs=512 skip=255 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00367142 s, 139 kB/s

# dd if=/dev/md0 of=/root/md.data bs=512 skip=256 count=1
dd: reading '/dev/md0': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000359543 s, 0.0 kB/s

# dd if=/dev/md0 of=/root/md.data bs=512 skip=383 count=1
dd: reading '/dev/md0': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000422959 s, 0.0 kB/s

# dd if=/dev/md0 of=/root/md.data bs=512 skip=384 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000314845 s, 1.6 MB/s
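The pattern fits the geometry mdadm -D reports: the chunk size is 64 KiB, i.e. 128 sectors of 512 bytes, so with three members the chunk containing sector S should sit on raid device (S / 128) mod 3 (0 = sdb1, 1 = sde1, 2 = sdf1), assuming the usual raid0 striping. A quick sanity check of that arithmetic:

# Map each probed sector to its raid0 member, assuming 64 KiB chunks
# (128 sectors) striped across the 3 members in RaidDevice order.
for s in 255 256 383 384; do
    echo "sector $s -> raid device $(( (s / 128) % 3 ))"
done
# sector 255 -> raid device 1 (sde1, read OK)
# sector 256 -> raid device 2 (sdf1, I/O error)
# sector 383 -> raid device 2 (sdf1, I/O error)
# sector 384 -> raid device 0 (sdb1, read OK)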

However, trying to access the disk /dev/sdf fails with:

# dd if=/dev/sdf of=/root/sdf.data bs=512 count=1
dd: opening '/dev/sdf': No such device or address
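The dmesg lines above already show the kernel rejecting I/O to an offline device; the SCSI layer's view of the disk can also be read straight from sysfs (a quick check, assuming the disk is still enumerated as sdf):

# State of the underlying SCSI device ("running" vs "offline");
# this is tracked by the SCSI layer, independently of md.
cat /sys/block/sdf/device/state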

The data is not that important to me; I just want to understand why mdadm insists that the array is "State : clean".

by mateusz.kijowski 31.01.2013 / 13:49

2 answers

2

The md(4) manual page clarifies how the word "clean" is used (the crucial part in italics):

Unclean Shutdown

When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array there is a possibility of inconsistency for short periods of time as each update requires at least two block to be written to different devices, and these writes probably won't happen at exactly the same time. Thus if a system with one of these arrays is shutdown in the middle of a write operation (e.g. due to power failure), the array may not be consistent.

To handle this situation, the md driver marks an array as "dirty" before writing any data to it, and marks it as "clean" when the array is being disabled, e.g. at shutdown. If the md driver finds an array to be dirty at startup, it proceeds to correct any possibly inconsistency. For RAID1, this involves copying the contents of the first drive onto all other drives. For RAID4, RAID5 and RAID6 this involves recalculating the parity for each stripe and making sure that the parity block has the correct data. For RAID10 it involves copying one of the replicas of each block onto all the others. This process, known as "resynchronising" or "resync" is performed in the background. The array can still be used, though possibly with reduced performance.

If a RAID4, RAID5 or RAID6 array is degraded (missing at least one drive, two for RAID6) when it is restarted after an unclean shutdown, it cannot recalculate parity, and so it is possible that data might be undetectably corrupted. The 2.4 md driver does not alert the operator to this condition. The 2.6 md driver will fail to start an array in this condition without manual intervention, though this behaviour can be overridden by a kernel parameter.

It is plausible that one of the disks in the RAID failed after the RAID had been safely and normally deactivated by the system (for example, during a shutdown). In other words, the disk failure happened while the RAID was in a consistent, synchronized state. The RAID would then have been flagged as "clean", and when it was brought up again and one of its disks failed, that flag would simply remain.
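To see what the md driver itself currently thinks of the array and of each member, the sysfs attributes can be read directly; a minimal sketch, assuming the array is assembled as /dev/md0 with the members shown in the question:

# Array-wide state kept by the md driver; values such as "clean" or
# "active" reflect the dirty/clean shutdown flag discussed above,
# not the health of the individual members.
cat /sys/block/md0/md/array_state

# Per-member state ("in_sync", "faulty", ...) for each component device.
cat /sys/block/md0/md/dev-sdb1/state
cat /sys/block/md0/md/dev-sde1/state
cat /sys/block/md0/md/dev-sdf1/state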

by 06.07.2018 / 03:40
-2

Apart from the obvious point that only people who do not value their data run RAID-0, mdadm will not alert you about anything unless you run the monitor daemon: mdadm --monitor /dev/md0. See the sketch below.
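A minimal sketch of starting that monitor in the background with mail alerts (the address and polling interval are placeholders; keep in mind that the events monitor mode reports, such as Fail or DegradedArray, are all about redundancy, which a raid0 does not have):

# Poll the arrays from mdadm.conf every 60 seconds as a background
# daemon and mail any events to the given address (placeholder).
mdadm --monitor --scan --daemonise --delay=60 --mail=root@localhost

# One-shot test run that sends a TestMessage alert per array, to
# verify that mail delivery actually works.
mdadm --monitor --scan --oneshot --test --mail=root@localhost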

You can examine the problematic device explicitly with: mdadm -E /dev/sdf.
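Since the array members are partitions rather than whole disks (the -D output lists /dev/sdf1), the examine is normally pointed at the partition; a sketch, keeping in mind that on the dead disk the open itself will likely fail just as it did for dd:

# Print the md superblock of each member partition listed by -D.
mdadm -E /dev/sdb1
mdadm -E /dev/sde1
mdadm -E /dev/sdf1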

Of course, detecting that a RAID-0 array has failed is rather beside the point: the array is lost, so restore from your backups.

by 31.01.2013 / 16:07
