OpenNebula LVM datastore usage scenario problem


I have been trying to set up OpenNebula in a test environment consisting of two hosts:

  • nebula (front-end machine) with Sunstone
  • kvm-node-1 host with a configured VG
The nebula machine contains:

root@nebula:/var/lib/one/datastores# onedatastore list
      ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
       0 system                 - -     0                 0 sys  -       ssh     on  
       1 default            39.1G 70%   0                 4 img  fs      ssh     on  
       2 files              39.1G 70%   0                 0 fil  fs      ssh     on  
     100 images_shared      39.1G 70%   0                 2 img  fs      shared  on  
     104 lvm_system         39.1G 76%   0                 0 sys  -       fs_lvm  on  
     105 lvm_images         39.1G 70%   0                 1 img  fs      fs_lvm  on  
     106 lvm_system2        39.1G 76%   0                 0 sys  -       fs_lvm  on
root@nebula:/var/lib/one/datastores# ls /var/lib/one/datastores/
0  1  100  101  105  2
root@nebula:/var/lib/one/datastores# showmount -e
Export list for nebula:
/var/lib/one/datastores/105 192.168.122.0/24
/var/lib/one/datastores/100 192.168.122.0/24
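For reference, the export list above corresponds to an `/etc/exports` on the front-end along these lines (a sketch: only the paths and the subnet come from the `showmount` output, the export options are typical assumptions):

```shell
# /etc/exports on nebula -- paths and subnet taken from the showmount output,
# the options are assumed defaults, not taken from the original setup
/var/lib/one/datastores/100 192.168.122.0/24(rw,sync,no_subtree_check,no_root_squash)
/var/lib/one/datastores/105 192.168.122.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing the file, `exportfs -ra` reloads the exports.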
The kvm-node-1 machine contains the following:

root@kvm-node-1:/var/lib/one/datastores# ls /var/lib/one/datastores/
0  100  104  105  106
root@kvm-node-1:/var/lib/one/datastores# mount|grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.122.240:/var/lib/one/datastores/100 on /var/lib/one/datastores/100 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
192.168.122.240:/var/lib/one/datastores/105 on /var/lib/one/datastores/105 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
root@kvm-node-1:/var/lib/one/datastores# vgs
  VG       #PV #LV #SN Attr   VSize   VFree 
  vg-one-0   1   1   0 wz--n- <10,00g <9,98g
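Worth noting: the fs_lvm transfer driver expects each KVM node to have a volume group named `vg-one-<SYSTEM_DS_ID>` for every LVM system datastore. Given the datastore listing above, that would mean `vg-one-104` and `vg-one-106`, while the node only has `vg-one-0`. A minimal setup sketch (the device `/dev/sdb` is hypothetical, not from the original environment):

```shell
# On kvm-node-1: create the VG that fs_lvm expects for system datastore 104.
# /dev/sdb is a placeholder device -- substitute the node's real disk.
pvcreate /dev/sdb
vgcreate vg-one-104 /dev/sdb
```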

I can deploy a VM with an image to the hypervisor via Sunstone, and the image starts successfully. But I cannot terminate the VM because of these errors:

Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Command execution failed (exit code: 5): /var/lib/one/remotes/tm/fs_lvm/delete nebula:/var/lib/one//datastores/0/29/disk.0 29 105
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 delete: Command "    set -x
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 DEV=$(readlink /var/lib/one/datastores/0/29/disk.0)
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -d "/var/lib/one/datastores/0/29/disk.0" ]; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -rf "/var/lib/one/datastores/0/29/disk.0"
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 else
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -z "$DEV" ]; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 exit 0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if echo "$DEV" | grep "^/dev/" &>/dev/null; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 sudo lvremove -f $DEV
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi" failed: ++ readlink /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + DEV=/dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -d /var/lib/one/datastores/0/29/disk.0 ']'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -z /dev/vg-one-0/lv-one-29-0 ']'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + echo /dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + grep '^/dev/'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + sudo lvremove -f /dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Volume group "vg-one-0" not found
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Cannot process volume group vg-one-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 Error deleting /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: TRANSFER FAILURE 29 Error deleting /var/lib/one/datastores/0/29/disk.0
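The failure mode is visible in the trace: the delete script resolves the disk symlink, sees a `/dev/...` target, and therefore runs `lvremove`, but it is apparently executed against `nebula`, where `vg-one-0` does not exist (the VG lives only on kvm-node-1). The detection step can be reproduced in isolation (the directory below is illustrative; the LV path is taken from the log and used only as a symlink target):

```shell
# Minimal reproduction of the device-detection step from the trace above.
# /tmp/ds-demo stands in for the datastore directory.
mkdir -p /tmp/ds-demo
ln -sf /dev/vg-one-0/lv-one-29-0 /tmp/ds-demo/disk.0
DEV=$(readlink /tmp/ds-demo/disk.0)
if echo "$DEV" | grep -q '^/dev/'; then
    # the real script would now run: sudo lvremove -f "$DEV"
    echo "LV target detected: $DEV"
fi
```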

How should I organize the interaction between the front-end machine and the hypervisors with the LVM datastore to solve this problem?

by Yurij Goncharuk 09.11.2018 / 16:26

1 answer


I solved the problem myself on the OpenNebula forum.

In short:

I solved my problem by removing the default System Datastore with id 0. VM instances are now created in the right VG (vg-one-104 instead of vg-one-0). I don't know whether removing the default System Datastore is the right approach, but it works for me for now, and VM instances also terminate correctly. I am marking this topic as solved.
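In CLI terms, the fix described above amounts to something like the following, run on the front-end (a sketch; the datastore ID comes from the listing in the question):

```shell
# Remove the default ssh-based system datastore so new VMs are scheduled
# onto the fs_lvm system datastore (ID 104 in the listing above).
onedatastore delete 0
onedatastore list
```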

The full thread is located at this link.

by 13.11.2018 / 10:02