mdadm RAID6 array reports incorrect size in df -h after growing

I recently grew a 5x 3TB mdadm RAID6 array (8TB) with a sixth disk on Fedora 18, and after reshaping and verifying, "mdadm --detail /dev/md127" returns the following:

        Version : 1.2
  Creation Time : Sun Feb 10 22:01:32 2013
     Raid Level : raid6
     Array Size : 11720534016 (11177.57 GiB 12001.83 GB)
  Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Sun Jul 21 17:31:32 2013
          State : active 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : ubuntu:tercore
           UUID : f52477e1:ded036fa:95632986:dcb84e51
         Events : 326236
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       4       8       49        3      active sync   /dev/sdd1
       5       8       80        4      active sync   /dev/sdf
       6       8       64        5      active sync   /dev/sde

All good so far.
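For reference, the grow itself would have been done with something along these lines (reconstructed for illustration; the exact commands are not shown in the question, and the sixth disk's device name is assumed):

# mdadm /dev/md127 --add /dev/sde
# mdadm --grow /dev/md127 --raid-devices=6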

I then ran "cat /proc/mdstat", which returned the following:

Personalities : [raid6] [raid5] [raid4] 
md127 : active raid6 sde[6] sdf[5] sda1[0] sdb1[1] sdd1[4] sdc1[2]
      11720534016 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>

That also looks fine.

However, when I run "df -h" I get the output below, which incorrectly reports the array's old capacity:

Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                             922M     0  922M   0% /dev
tmpfs                                939M  140K  939M   1% /dev/shm
tmpfs                                939M  2.6M  936M   1% /run
tmpfs                                939M     0  939M   0% /sys/fs/cgroup
/dev/mapper/fedora_faufnir--hp-root   26G  7.2G   17G  30% /
tmpfs                                939M   20K  939M   1% /tmp
/dev/sdg1                            485M  108M  352M  24% /boot
/dev/md127                           8.2T  7.6T  135G  99% /home/teracore

Can anyone help me fix this mismatch? Naturally it is also causing Samba to report the wrong array capacity to my Windows laptop.

Many thanks in advance! Will.

by Faufnir 21.07.2013 / 18:52

1 Answer

I think you forgot to run resize2fs. Growing an array with mdadm only enlarges the underlying block device; the filesystem sitting on top of it has to be expanded separately, which is why df still reports the old size.
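You can confirm the mismatch by comparing the size of the block device with the size the filesystem believes it has (this assumes an ext filesystem, which the use of resize2fs implies; dumpe2fs is part of e2fsprogs):

# blockdev --getsize64 /dev/md127
# dumpe2fs -h /dev/md127 | grep -E 'Block count|Block size'

The first number should reflect the grown ~12 TB device, while the block count times the block size will still come to the old 8.2T figure that df shows. Here is a demonstration of the fix on a small test array: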

# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sun Jul 21 23:50:49 2013
     Raid Level : raid6
     Array Size : 62914368 (60.00 GiB 64.42 GB)
  Used Dev Size : 20971456 (20.00 GiB 21.47 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jul 22 00:04:43 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : c0a5733d:46d5dd5e:b24ac321:6c547228
         Events : 0.13992

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              39G  1.6G   35G   5% /
/dev/sda1             494M   23M  446M   5% /boot
tmpfs                 500M     0  500M   0% /dev/shm
/dev/md0               60G  188M   59G   1% /raid6
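Note that this transcript skips a step: between the df above and the resize2fs below, the array itself must already have been grown from five to six devices, presumably with something like

# mdadm --grow /dev/md0 --raid-devices=6

since resize2fs can only expand the filesystem into space the block device already has.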

# resize2fs /dev/md0
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/md0 is mounted on /raid6; on-line resizing required
Performing an on-line resize of /dev/md0 to 20971456 (4k) blocks.
The filesystem on /dev/md0 is now 20971456 blocks long.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              39G  1.6G   35G   5% /
/dev/sda1             494M   23M  446M   5% /boot
tmpfs                 500M     0  500M   0% /dev/shm
/dev/md0               79G  192M   79G   1% /raid6

# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Sun Jul 21 23:50:49 2013
     Raid Level : raid6
     Array Size : 83885824 (80.00 GiB 85.90 GB)
  Used Dev Size : 20971456 (20.00 GiB 21.47 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jul 22 00:04:43 2013
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : c0a5733d:46d5dd5e:b24ac321:6c547228
         Events : 0.13992

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg

P.S. I would suggest unmounting /dev/md127 before the resize operation.
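Applied to the array in the question, that offline variant would look roughly like this (the mount point is taken from the df output above; an ext3/ext4 filesystem is assumed, since resize2fs is being used):

# umount /home/teracore
# e2fsck -f /dev/md127
# resize2fs /dev/md127
# mount /dev/md127 /home/teracore

The e2fsck -f pass is needed because resize2fs refuses to resize an unmounted filesystem that has not been checked first.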

    
by 21.07.2013 / 20:09