Curious inconsistent soft RAID 5 configuration

Recently I took the plunge and upgraded my operating system from Fedora 11 to Fedora 15, and I have been struggling to understand why Fedora 15 cannot see the RAID setup I created under Fedora 11. I figure I must have missed something, so I turn to the wisdom of the group here.

When I upgraded, I used a new boot drive for Fedora 15, so I can physically swap boot drives and boot into either Fedora 11 or 15. Fedora 11 can still see the RAID and everything works. Fedora 15 shows something very strange.

[edited to add the output requested by @psusi]

On Fedora 11

I had a regular boot disk (/dev/sda) and an LVM built on RAID 5 (/dev/sdb, /dev/sdc, /dev/sdd).

Specifically, the RAID device /dev/md/127_0 is built from /dev/sdb1, /dev/sdc1 and /dev/sdd1, where each partition takes up the whole disk.

The boot drive's volume group (/dev/vg_localhost/) is irrelevant. The volume group I created on the RAID device is called /dev/lvm-tb-storage/.

Below is the configuration information I pulled from the system (mdadm, pvscan, lvscan, etc.).

[root@localhost ~]# cat /etc/mdadm.conf 

[root@localhost ~]# pvscan
  PV /dev/md127   VG lvm-tb-storage   lvm2 [1.82 TB / 0    free]
  PV /dev/sda5    VG vg_localhost        lvm2 [61.44 GB / 0    free]
  Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0   ]

[root@localhost ~]# lvscan
  ACTIVE            '/dev/lvm-tb-storage/tb' [1.82 TB] inherit
  ACTIVE            '/dev/vg_localhost/lv_root' [54.68 GB] inherit
  ACTIVE            '/dev/vg_localhost/lv_swap' [6.77 GB] inherit

[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               lvm-tb-storage
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               4.00 MB
  Total PE              476839
  Alloc PE / Size       476839 / 1.82 TB
  Free  PE / Size       0 / 0   
  VG UUID               wqIXsb-KRZQ-eRnH-JvuP-VdHk-XJTG-DSWimc

  --- Volume group ---
  VG Name               vg_localhost
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               61.44 GB
  PE Size               4.00 MB
  Total PE              15729
  Alloc PE / Size       15729 / 61.44 GB
  Free  PE / Size       0 / 0   
  VG UUID               IVIpCV-C4qg-Lii7-zwkz-P3si-MXAZ-WYUSe6

[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "lvm-tb-storage" using metadata type lvm2
  Found volume group "vg_localhost" using metadata type lvm2

[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba

[root@localhost ~]# ls -al /dev/md
total 0
drwxr-xr-x.  2 root root   60 2011-09-13 03:14 .
drwxr-xr-x. 19 root root 5180 2011-09-13 03:15 ..
lrwxrwxrwx.  1 root root    8 2011-09-13 03:14 127_0 -> ../md127

[root@localhost ~]# mdadm --detail /dev/md/127_0 
/dev/md/127_0:
        Version : 0.90
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127
    Persistence : Superblock is persistent

    Update Time : Tue Sep 13 03:28:51 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
         Events : 0.671154

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       33        2      active sync   /dev/sdc1

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md127 : active raid5 sdb1[0] sdc1[2] sdd1[1]
      1953134208 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127

    Update Time : Tue Sep 13 03:29:50 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1ddf826 - correct
         Events : 671154

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       33        2      active sync   /dev/sdc1

[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000080

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63      610469      305203+  83  Linux
/dev/sda2          610470   359004554   179197042+  83  Linux
/dev/sda3   *   359004555   359414154      204800   83  Linux
/dev/sda4       359422245   488279609    64428682+   5  Extended
/dev/sda5       359422308   488278371    64428032   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xb03e1980

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7db522d5

  Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x20af5840

  Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1953134504   976567221   da  Non-FS data

Disk /dev/dm-0: 58.7 GB, 58707673088 bytes
255 heads, 63 sectors/track, 7137 cylinders, total 114663424 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-1: 7264 MB, 7264534528 bytes
255 heads, 63 sectors/track, 883 cylinders, total 14188544 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-2: 2000.0 GB, 2000007725056 bytes
255 heads, 63 sectors/track, 243153 cylinders, total 3906265088 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

The kernel boot parameter I have:

kernel /vmlinuz-2.6.30.10-105.2.23.fc11.x86_64 ro root=/dev/mapper/vg_localhost-lv_root rhgb quiet

On Fedora 15

I installed Fedora 15 on a new boot drive, on which the installer also created an LVM (/dev/vg_20110912a/) for me, but again that is irrelevant.

On Fedora 15, lvm, pvscan and vgscan see nothing but the irrelevant boot drive. mdadm, however, shows something very strange: the original RAID has been split into multiple arrays, and the combination is very puzzling.

[root@localhost ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all

[root@localhost ~]# pvscan
  PV /dev/sda2   VG vg_20110912a   lvm2 [59.12 GiB / 0  free]
  Total: 1 [59.12 GiB] / in use: 1 [59.12 GiB] / in no VG: 0 [0   ]

[root@localhost ~]# lvscan
  ACTIVE            '/dev/vg_20110912a/lv_home' [24.06 GiB] inherit
  ACTIVE            '/dev/vg_20110912a/lv_swap' [6.84 GiB] inherit
  ACTIVE            '/dev/vg_20110912a/lv_root' [28.22 GiB] inherit

[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_20110912a
  System ID          
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               59.12 GiB
  PE Size               32.00 MiB
  Total PE              1892
  Alloc PE / Size       1892 / 59.12 GiB
  Free  PE / Size       0 / 0   
  VG UUID               8VRJyx-XSQp-13mK-NbO6-iV24-rE87-IKuhHH

[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_20110912a" using metadata type lvm2

[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/0_0 metadata=0.90 UUID=153e151b:8c717565:fd59f149:d2ea02c9
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba

[root@localhost ~]# ls -l /dev/md
total 4
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 0_0 -> ../md127
lrwxrwxrwx. 1 root root  10 Sep 13 02:39 0_0p1 -> ../md127p1
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 127_0 -> ../md126
-rw-------. 1 root root 120 Sep 13 02:39 md-device-map

[root@localhost ~]# cat /dev/md/md-device-map
md126 0.90 bebfd467:cb6700d9:29bdc0db:c30228ba /dev/md/127_0
md127 0.90 153e151b:8c717565:fd59f149:d2ea02c9 /dev/md/0_0

[root@localhost ~]# mdadm --detail /dev/md/0_0
/dev/md/0_0:
        Version : 0.90
  Creation Time : Tue Nov  4 21:45:19 2008
    Raid Level : raid5
    Array Size : 976762496 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127
    Persistence : Superblock is persistent

    Update Time : Wed Nov  5 09:04:28 2008
        State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 64K

        UUID : 153e151b:8c717565:fd59f149:d2ea02c9
        Events : 0.2202

    Number   Major   Minor   RaidDevice State
    0       8       48      0   active sync   /dev/sdd
    1       8       16      1   active sync   /dev/sdb

[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
        Version : 0.90
  Creation Time : Wed Nov  5 18:26:25 2008
    Raid Level : raid5
    Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 126
    Persistence : Superblock is persistent

    Update Time : Tue Sep 13 00:39:51 2011
        State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 64K

        UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
        Events : 0.671154

    Number   Major   Minor   RaidDevice State
    0   259     0       0   active sync   /dev/md/0_0p1
    1       0       0       1   removed
    2       8       33      2   active sync   /dev/sdc1

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid5 md127p1[0] sdc1[2]
    1953134208 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md127 : active (auto-read-only) raid5 sdb[1] sdd[0]
    976762496 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]

unused devices: <none>

[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
        Magic : a92b4efc
        Version : 0.90.00
        UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
  Creation Time : Wed Nov  5 18:26:25 2008
    Raid Level : raid5
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
    Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127

    Update Time : Tue Sep 13 00:39:51 2011
        State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
    Checksum : f1ddd04f - correct
        Events : 671154

        Layout : left-symmetric
    Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
this    0       8       17      0   active sync   /dev/sdb1

   0    0       8       17      0   active sync   /dev/sdb1
   1    1       8       49      1   active sync   /dev/sdd1
   2    2       8       33      2   active sync   /dev/sdc1

[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/mapper/vg_20110912a-lv_swap doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_root doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_home doesn't contain a valid partition table

Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001aa2f

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   125044735    62009344   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb03e1980

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7db522d5

  Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x20af5840

  Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1953134504   976567221   da  Non-FS data

Disk /dev/mapper/vg_20110912a-lv_swap: 7348 MB, 7348420608 bytes
255 heads, 63 sectors/track, 893 cylinders, total 14352384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_20110912a-lv_root: 30.3 GB, 30299652096 bytes
255 heads, 63 sectors/track, 3683 cylinders, total 59179008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000


Disk /dev/md126: 1000.2 GB, 1000204795904 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953524992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk identifier: 0x20af5840

     Device Boot      Start         End      Blocks   Id  System
/dev/md126p1              63  1953134504   976567221   da  Non-FS data
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/vg_20110912a-lv_home: 25.8 GB, 25836912640 bytes
255 heads, 63 sectors/track, 3141 cylinders, total 50462720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

My kernel boot parameter:

kernel /vmlinuz-2.6.40.4-5.fc15.x86_64 ro root=/dev/mapper/vg_20110912a-lv_root rd_LVM_LV=vg_20110912a/lv_root rd_LVM_LV=vg_20110912a/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet rdblacklist=nouveau nouveau.modeset=0 nodmraid

The last mdadm --examine /dev/sdb1 shows exactly the same result as on Fedora 11, but I don't understand why mdadm --detail /dev/md/0_0 shows only /dev/sdb and /dev/sdd, while mdadm --detail /dev/md/127_0 shows /dev/sdc1 and /dev/md/0_0p1.

Since mdadm --examine /dev/sdb1 shows the correct result, Fedora 15 is able to access the RAID somehow, but I am not sure what to do. Should I create/assemble a new RAID /dev/md2 and hope that the LVM I created magically shows up?
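What I have in mind is roughly something like the following (untested; I assume the arrays Fedora 15 assembled on its own would have to be stopped first so the member partitions are free; device names taken from the output above):

# stop the auto-assembled arrays
mdadm --stop /dev/md126 /dev/md127
# try to assemble a fresh array from the original member partitions
mdadm --assemble /dev/md2 /dev/sdb1 /dev/sdc1 /dev/sdd1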

Thanks in advance.

    
by Tsan-Kuang Lee, 13.09.2011 / 10:13

1 answer

It looks like you have some old, crufty RAID superblocks hanging around. The array you were actually using has 3 disks, the UUID bebfd467:cb6700d9:29bdc0db:c30228ba, and was created on November 5, 2008. Fedora 15 has recognized another RAID array that has only two disks and was created the day before, using the whole disks instead of the first partition. Fedora 15 appears to have activated that old array and then tried to use it as one of the components of the correct array, which is making a mess.
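If you want to confirm this before touching anything, compare the superblock on each whole disk with the one on its first partition; since these are 0.90 superblocks they sit at the end of the device, so the two can coexist (device names assumed unchanged from your output):

# the whole-disk superblocks should show the old 2-disk array from Nov 4
mdadm --examine /dev/sdb
mdadm --examine /dev/sdd
# the partition superblocks should show the 3-disk array you actually use
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdd1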

I think you need to get rid of the old, bogus superblocks:

mdadm --zero-superblock /dev/sdb /dev/sdd

You do have a current backup, right? ;)
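A rough sequence for the whole cleanup, assuming the device names have not changed and you have a good backup, might look like this (a sketch, not a tested recipe):

# stop both auto-assembled arrays so the member devices are released
mdadm --stop /dev/md126
mdadm --stop /dev/md127
# wipe the stale whole-disk superblocks left over from the old 2-disk array
mdadm --zero-superblock /dev/sdb /dev/sdd
# re-assemble the real 3-disk array from its partitions
mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1
# let LVM pick the PV back up and activate the volume group
pvscan
vgchange -ay lvm-tb-storage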

    
answered 13.09.2011 / 17:23
