mdadm RAID won't mount


I have RAID arrays defined in /etc/mdadm.conf as follows:

ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6
ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7
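
(For reference, I understand ARRAY lines can also identify an array by UUID rather than by member device names, which is less fragile when device names change; e.g., using the UUID that mdadm --examine reports for md0 further down:)

ARRAY /dev/md0 UUID=91e560f1:4e51d8eb:cd707cc0:bc3f8165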

but when I try to mount them, I get this:

# mount /dev/md0 /mnt/media/
mount: special device /dev/md0 does not exist
# mount /dev/md1 /mnt/data
mount: special device /dev/md1 does not exist

/proc/mdstat, however, says:

# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md125 : inactive dm-6[0](S)
      238340224 blocks

md126 : inactive dm-5[0](S)
      244139648 blocks

md127 : inactive dm-3[0](S)
      390628416 blocks

unused devices: <none>

So I tried this:

# mount /dev/md126 /mnt/data
mount: /dev/md126: can't read superblock
# mount /dev/md125 /mnt/media
mount: /dev/md125: can't read superblock

The fs on the partitions is ext3, and when I specify the fs with -t, I get

mount: wrong fs type, bad option, bad superblock on /dev/md126,
       missing codepage or helper program, or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

How can I mount my RAID arrays? This used to work before.

EDIT 1

# mdadm --detail --scan
mdadm: cannot open /dev/md/127_0: No such file or directory
mdadm: cannot open /dev/md/0_0: No such file or directory
mdadm: cannot open /dev/md/1_0: No such file or directory

EDIT 2

# dmsetup ls
isw_cabciecjfi_Raid7    (252:6)
isw_cabciecjfi_Raid6    (252:5)
isw_cabciecjfi_Raid5    (252:4)
isw_cabciecjfi_Raid3    (252:3)
isw_cabciecjfi_Raid2    (252:2)
isw_cabciecjfi_Raid1    (252:1)
isw_cabciecjfi_Raid     (252:0)
# dmsetup table
isw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958
isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411
isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638
isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270
isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285
isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63
isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
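
(From what I can tell, the isw_* names mean dmraid has activated these mappings from Intel Software RAID (isw) metadata on the disks; I gather such mappings can be deactivated with something like:)

# dmraid -an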

EDIT 3

# file -s -L /dev/mapper/*
/dev/mapper/control:              ERROR: cannot read '/dev/mapper/control' (Invalid argument)
/dev/mapper/isw_cabciecjfi_Raid:  x86 boot sector
/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)
/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795
/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)
/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)
# 

Also, since I have the additional device /dev/mapper/isw_cabciecjfi_Raid on my system, I tried to mount one of its partitions, but got:

# mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
mount: unknown filesystem type 'linux_raid_member'

I rebooted and confirmed that RAID is disabled in my BIOS.
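
(As I understand the 'linux_raid_member' error, the device holds an md RAID superblock, so mount refuses it and it would first have to be assembled into an md device, along the lines of:)

# mdadm --assemble /dev/md126 /dev/mapper/isw_cabciecjfi_Raid6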

I tried to force the mount, which seems to succeed, but the content of the partition is inaccessible, so it still doesn't work as expected (as far as I can tell, -f only fakes the mount, which would explain the empty listings):

# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media
# ls -l /mnt/media/
total 0
# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data
# ls -l /mnt/data
total 0

EDIT 4

After running the suggested commands, all I get is:

$ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory

EDIT 5

I have /dev/md127 mounted now, but /dev/md0 and /dev/md1 are still not accessible:

# mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7
mdadm: cannot open /dev/sd[bc]6: No such file or directory
mdadm: cannot open /dev/sd[bc]7: No such file or directory



root@regDesktopHome:~# mdadm --stop /dev/md12[567]
mdadm: stopped /dev/md127
root@regDesktopHome:~# mdadm --assemble --scan
mdadm: /dev/md127 has been started with 1 drive (out of 2).
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# ls -l /dev/mapper
total 0
crw------- 1 root root  10, 236 Aug 13 22:43 control
brw-rw---- 1 root disk 252,   0 Aug 13 22:43 isw_cabciecjfi_Raid
brw------- 1 root root 252,   1 Aug 13 22:43 isw_cabciecjfi_Raid1
brw------- 1 root root 252,   2 Aug 13 22:43 isw_cabciecjfi_Raid2
brw------- 1 root root 252,   3 Aug 13 22:43 isw_cabciecjfi_Raid3
brw------- 1 root root 252,   4 Aug 13 22:43 isw_cabciecjfi_Raid5
brw------- 1 root root 252,   5 Aug 13 22:43 isw_cabciecjfi_Raid6
brw------- 1 root root 252,   6 Aug 13 22:43 isw_cabciecjfi_Raid7
root@regDesktopHome:~# mdadm --examine
mdadm: No devices to examine
root@regDesktopHome:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 dm-3[0]
      390628416 blocks [2/1] [U_]

md1 : inactive dm-6[0](S)
      238340224 blocks

md0 : inactive dm-5[0](S)
      244139648 blocks

unused devices: <none>
root@regDesktopHome:~# mdadm --examine /dev/dm-[356]
/dev/dm-3:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 18:50:36 2009
     Raid Level : raid1
  Used Dev Size : 390628416 (372.53 GiB 400.00 GB)
     Array Size : 390628416 (372.53 GiB 400.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127

    Update Time : Sat May 31 18:52:12 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 23fe942e - correct
         Events : 167


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync

   0     0       8       35        0      active sync
   1     1       8       19        1      active sync
/dev/dm-5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
  Creation Time : Tue Sep  1 19:15:33 2009
     Raid Level : raid1
  Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
     Array Size : 244139648 (232.83 GiB 250.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : bfad9d61 - correct
         Events : 75007


      Number   Major   Minor   RaidDevice State
this     0       8       38        0      active sync

   0     0       8       38        0      active sync
   1     1       8       22        1      active sync
/dev/dm-6:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165
  Creation Time : Tue Sep  8 21:19:15 2009
     Raid Level : raid1
  Used Dev Size : 238340224 (227.30 GiB 244.06 GB)
     Array Size : 238340224 (227.30 GiB 244.06 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Fri May  9 21:48:44 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2a7a125f - correct
         Events : 3973383


      Number   Major   Minor   RaidDevice State
this     0       8       39        0      active sync

   0     0       8       39        0      active sync
   1     1       8       23        1      active sync
root@regDesktopHome:~# 

EDIT 6

I stopped them with mdadm --stop /dev/md[01] and confirmed that /proc/mdstat no longer showed them, then ran mdadm --assemble --scan and got

# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 1 drives.
mdadm: /dev/md1 has been started with 2 drives.

but if I try to mount either of the arrays, I still get:

root@regDesktopHome:~# mount /dev/md1 /mnt/data
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

In the meantime, I found out that my superblocks seem to be damaged (PS: I confirmed with tune2fs and fdisk that I'm dealing with an ext3 partition):

root@regDesktopHome:~# e2fsck /dev/md1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 59585077 blocks
The physical size of the device is 59585056 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
root@regDesktopHome:~# e2fsck /dev/md0
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 61034935 blocks
The physical size of the device is 61034912 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
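
(For reference, the mismatch can be double-checked by comparing the device size against what the superblock expects, e.g.:

# blockdev --getsize64 /dev/md1
# dumpe2fs -h /dev/md1 | grep 'Block count'

where the blockdev figure divided by the 4096-byte block size should match dumpe2fs' block count.)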

But both partitions have some backup superblocks:

root@regDesktopHome:~# mke2fs -n /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15261696 inodes, 61034912 blocks
3051745 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1863 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

root@regDesktopHome:~# mke2fs -n /dev/md1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
14901248 inodes, 59585056 blocks
2979252 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1819 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

What do you think, should I try restoring the backup superblock at 23887872 on both arrays? I figure I could do that with e2fsck -b 23887872 /dev/md[01]; would you recommend giving that a shot?
I don't necessarily want to try something I don't fully understand that could destroy the data on my disks... man e2fsck doesn't exactly say it's dangerous, but maybe there is a safer way to fix the superblock problem?
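
(If I read man e2fsck right, a non-destructive first step would be a read-only check against a backup superblock, since -n answers "no" to every question and opens the filesystem read-only:

# e2fsck -n -b 32768 -B 4096 /dev/md0

which should show what would be fixed without changing anything on disk.)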

AS A LAST UPDATE FOR THE COMMUNITY,

I used resize2fs to put my superblocks back in order and my drives mount again! (resize2fs /dev/md0 and resize2fs /dev/md1 recovered my backup!) Long story, but it finally worked! And I learned a lot about mdadm along the way! Thanks @IanMacintosh
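
(For anyone finding this later: as far as I can tell, resize2fs with no explicit size argument resizes the filesystem to match the current size of the device, which is exactly the 21- and 23-block discrepancy e2fsck complained about above:

# resize2fs /dev/md0
# resize2fs /dev/md1

After that, both arrays mounted again.)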

asked by cerr, 02.08.2014 / 20:05

2 answers


Your arrays have not been started correctly. Remove them from your running configuration with this:

mdadm --stop /dev/md12[567]

Now try the auto-scan and assemble feature:

mdadm --assemble --scan

Assuming that works, save your configuration (assuming a Debian derivative) with the following (this will overwrite your config, so we make a backup first):

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

You should be ready for a reboot now, and the arrays will be assembled and started automatically every time.

If not, provide the output of:

mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7

It will be a bit long, but it shows everything you need to know about the arrays and their member disks, their state, and so on.

Just as an aside, it usually works better if you don't create multiple RAID arrays on one disk (i.e. /dev/sd[bc]6 and /dev/sd[bc]7) separately. Instead, create just one array, and then partition the array if you need to. LVM is a much better way to partition your array most of the time; see the sketch below.
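
A rough sketch of that approach (with hypothetical device names and sizes, not commands to run against this system's disks):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n media vg0
mkfs.ext3 /dev/vg0/media

That gives you one array, with LVM carving it into as many volumes as you need, each of which can be resized later.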

answered 08.08.2014 / 10:32

This will fix it permanently (write the detected arrays to mdadm.conf and rebuild the initramfs):

# mdadm -Es > /etc/mdadm.conf
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
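
Note that dracut is the Fedora/RHEL-style initramfs tool; on a Debian derivative the equivalent rebuild step would presumably be:

# update-initramfs -u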
answered 22.01.2015 / 20:47