Newly created LVs do not survive a reboot - thin pool check failed


To use gluster snapshots, I am trying to create LVs via LVM, since thinly provisioned logical volumes are required for gluster snapshots to work.
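
For context, the snapshot command I eventually want to run would, as far as I understand it, look roughly like this (the snapshot and volume names here are only placeholders):

user@node1:~$ sudo gluster snapshot create snap1 myvolume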

The creation works, but the setup does not survive a reboot. Somewhere in the process there must be a mistake. Here is what I am doing to create the LVs:

user@node1:~$ sudo lvs
[sudo] password for user: 
  LV     VG        Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  root   rabbit-vg -wi-ao--- 8.86g                                           
  swap_1 rabbit-vg -wi-ao--- 5.86g                                           

show physical volumes:

user@node1:~$ sudo pvs
  PV         VG        Fmt  Attr PSize  PFree 
  /dev/sda5  rabbit-vg lvm2 a--  14.76g 48.00m
  /dev/sde1            lvm2 a--  20.00g 20.00g

create a volume group

user@node1:~$ sudo vgcreate gluster /dev/sde1
  Volume group "gluster" successfully created

create a thin pool

user@node1:~$ sudo lvcreate -L 19.9G -T gluster/mythinpool
  Rounding up size to full physical extent 19.90 GiB
  Rounding up size to full physical extent 20.00 MiB
  Logical volume "mythinpool" created

create a thin logical volume in the pool

user@node1:~$ sudo lvcreate -V 19.9G -T gluster/mythinpool -n thinv1
  Rounding up size to full physical extent 19.90 GiB
  Logical volume "thinv1" created

create a filesystem

user@node1:~$ sudo mkfs.ext4 /dev/gluster/thinv1 
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=16 blocks
1305600 inodes, 5217280 blocks
260864 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8160 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

show the configuration

user@node1:~$ sudo lvscan
  ACTIVE            '/dev/gluster/mythinpool' [19.90 GiB] inherit
  ACTIVE            '/dev/gluster/thinv1' [19.90 GiB] inherit
  ACTIVE            '/dev/rabbit-vg/root' [8.86 GiB] inherit
  ACTIVE            '/dev/rabbit-vg/swap_1' [5.86 GiB] inherit

mount it

user@node1:~$ sudo mount /dev/gluster/thinv1 /bricks/brick1/

show mounted devices

user@node1:~$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/rabbit--vg-root  8.6G  7.4G  839M  90% /
none                         4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         1.5G  4.0K  1.5G   1% /dev
tmpfs                        301M  592K  301M   1% /run
none                         5.0M     0  5.0M   0% /run/lock
none                         1.5G     0  1.5G   0% /run/shm
none                         100M     0  100M   0% /run/user
/dev/sda1                    236M   38M  186M  17% /boot
/dev/sdb1                     15G  4.8G  9.2G  35% /data/mysql
/dev/sdc1                     20G  7.2G   12G  39% /data/gluster
/dev/sdd1                     20G   17G  2.3G  88% /data/files
gs1:/volume1                  20G  7.2G   12G  39% /data/nfs
/dev/mapper/gluster-thinv1    20G   44M   19G   1% /bricks/brick1
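
As a side note, I am aware that the mount itself will also need an /etc/fstab entry to come back after a reboot; assuming the device and mount point above, the entry would presumably be something like:

/dev/gluster/thinv1 /bricks/brick1 ext4 defaults 0 2

The problem below, however, already shows up one step earlier, at LV activation.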

now reboot and check again:

user@node1:~$ sudo lvscan
[sudo] password for user: 
  inactive          '/dev/gluster/mythinpool' [19.90 GiB] inherit
  inactive          '/dev/gluster/thinv1' [19.90 GiB] inherit
ACTIVE            '/dev/rabbit-vg/root' [8.86 GiB] inherit
ACTIVE            '/dev/rabbit-vg/swap_1' [5.86 GiB] inherit

the volumes are inactive, try to activate them

user@node1:~$ sudo vgchange -ay gluster
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of thin pool gluster/mythinpool failed (status:2). Manual repair required (thin_dump --repair /dev/mapper/gluster-mythinpool_tmeta)!
/usr/sbin/thin_check: execvp failed: No such file or directory
0 logical volume(s) in volume group "gluster" now active
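
For reference, the error complains that /usr/sbin/thin_check cannot be executed; if I understand correctly, on Ubuntu that binary comes from the thin-provisioning-tools package, so its presence could be checked with something like:

user@node1:~$ ls -l /usr/sbin/thin_check
user@node1:~$ dpkg -s thin-provisioning-tools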

No matter what I do, the volumes stay inactive and I cannot mount them.

What am I doing wrong? Thanks in advance for any help.

    
by merlin 01.09.2015 / 18:22

0 answers
