Proxmox / KVM: unable to clone the VM

I would like to create a clone of a KVM machine - yesterday I was able to clone the same machine within a few minutes. Now it has been running for hours and is still stuck at "0%":

create full clone of drive ide0 (local:100/vm-100-disk-1.raw)
Formatting '/var/lib/vz/images/111/vm-111-disk-1.raw', fmt=raw size=536870912000
transferred: 0 bytes remaining: 536870912000 bytes total: 536870912000 bytes progression: 0.00 %
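
For reference, size=536870912000 bytes is exactly 500 GiB (500 × 1024³). As a quick sanity check, the allocated block count of the target file can be watched to see whether any data is arriving at all; the path below is taken verbatim from the task output and the 5-second interval is arbitrary:

    # apparent size vs. blocks actually allocated on disk, refreshed every 5 s
    watch -n 5 'ls -lsh /var/lib/vz/images/111/vm-111-disk-1.raw'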

I wonder why that is. The task is running (I restarted it a few minutes ago) - link

Any idea what is going wrong this time?

Edit: Disk I/O is low, since the task is not writing any data to the hard drive ( link ).
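
To confirm the low-I/O observation from a shell as well, per-device and per-process disk statistics can be sampled; both tools come from the sysstat package on Debian, and the 1-second interval with 5 samples is just an example:

    # per-device utilisation and throughput
    iostat -x 1 5
    # per-process read/write rates
    pidstat -d 1 5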

If I strace the process ID given in the task overview, I get this output:

select(24, [16 19], NULL, NULL, {0, 553078}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
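
The select() loop above suggests that this PID is only the task worker waiting on its child, not the process doing the actual copy. A sketch for locating that child (replace 26539 with whatever PID the overview shows - the value here is just a placeholder taken from the syslog below):

    # list the worker's children and their command lines
    pstree -p 26539
    ps --ppid 26539 -o pid,etime,cmd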

If I strace process 233113 (which is actually doing work), I get this output:

lseek(7, 49650073600, SEEK_DATA)        = 49650073600
lseek(7, 49650073600, SEEK_HOLE)        = 49650077696
lseek(7, 49650077696, SEEK_DATA)        = 51797557248
lseek(7, 51797557248, SEEK_DATA)        = 51797557248
lseek(7, 51797557248, SEEK_HOLE)        = 51797565440
lseek(7, 51797565440, SEEK_DATA)        = 51797622784
lseek(7, 51797622784, SEEK_DATA)        = 51797622784
lseek(7, 51797622784, SEEK_HOLE)        = 51797630976
lseek(7, 51797630976, SEEK_DATA)        = 51797688320
lseek(7, 51797688320, SEEK_DATA)        = 51797688320
lseek(7, 51797688320, SEEK_HOLE)        = 51801108480
lseek(7, 51801108480, SEEK_DATA)        = 51831242752
lseek(7, 51831242752, SEEK_DATA)        = 51831242752
lseek(7, 51831242752, SEEK_HOLE)        = 51833991168
lseek(7, 51833991168, SEEK_DATA)        = 51931774976
lseek(7, 51931774976, SEEK_DATA)        = 51931774976
lseek(7, 51931774976, SEEK_HOLE)        = 51962368000
lseek(7, 51962368000, SEEK_DATA)        = 51962380288
lseek(7, 51962380288, SEEK_DATA)        = 51962380288
lseek(7, 51962380288, SEEK_HOLE)        = 51963707392
lseek(7, 51963707392, SEEK_DATA)        = 51963756544
lseek(7, 51963756544, SEEK_DATA)        = 51963756544
lseek(7, 51963756544, SEEK_HOLE)        = 51963973632
lseek(7, 51963973632, SEEK_DATA)        = 53945040896
lseek(7, 53945040896, SEEK_DATA)        = 53945040896
lseek(7, 53945040896, SEEK_HOLE)        = 53945049088
lseek(7, 53945049088, SEEK_DATA)        = 53945106432
lseek(7, 53945106432, SEEK_DATA)        = 53945106432
lseek(7, 53945106432, SEEK_HOLE)        = 53945110528
lseek(7, 53945110528, SEEK_DATA)        = 53945171968
lseek(7, 53945171968, SEEK_DATA)        = 53945171968
lseek(7, 53945171968, SEEK_HOLE)        = 53947260928
lseek(7, 53947260928, SEEK_DATA)        = 53978726400
lseek(7, 53978726400, SEEK_DATA)        = 53978726400
lseek(7, 53978726400, SEEK_HOLE)        = 53981294592
lseek(7, 53981294592, SEEK_DATA)        = 54079258624
lseek(7, 54079258624, SEEK_DATA)        = 54079258624
lseek(7, 54079258624, SEEK_HOLE)        = 54096543744
lseek(7, 54096543744, SEEK_DATA)        = 54096560128
lseek(7, 54096560128, SEEK_DATA)        = 54096560128
lseek(7, 54096560128, SEEK_HOLE)        = 54096625664
lseek(7, 54096625664, SEEK_DATA)        = 56092524544
lseek(7, 56092524544, SEEK_DATA)        = 56092524544
lseek(7, 56092524544, SEEK_HOLE)        = 56092532736
lseek(7, 56092532736, SEEK_DATA)        = 56092590080
lseek(7, 56092590080, SEEK_DATA)        = 56092590080
lseek(7, 56092590080, SEEK_HOLE)        = 56092594176
lseek(7, 56092594176, SEEK_DATA)        = 56092655616
lseek(7, 56092655616, SEEK_DATA)        = 56092655616
lseek(7, 56092655616, SEEK_HOLE)        = 56094593024
lseek(7, 56094593024, SEEK_DATA)        = 56126210048
lseek(7, 56126210048, SEEK_DATA)        = 56126210048
lseek(7, 56126210048, SEEK_HOLE)        = 56128737280
lseek(7, 56128737280, SEEK_DATA)        = 56226742272
lseek(7, 56226742272, SEEK_DATA)        = 56226742272
lseek(7, 56226742272, SEEK_HOLE)        = 56250884096
lseek(7, 56250884096, SEEK_DATA)        = 58240008192
lseek(7, 58240008192, SEEK_DATA)        = 58240008192
lseek(7, 58240008192, SEEK_HOLE)        = 58240016384
lseek(7, 58240016384, SEEK_DATA)        = 58240073728
lseek(7, 58240073728, SEEK_DATA)        = 58240073728
lseek(7, 58240073728, SEEK_HOLE)        = 58240077824
lseek(7, 58240077824, SEEK_DATA)        = 58240139264
lseek(7, 58240139264, SEEK_DATA)        = 58240139264
lseek(7, 58240139264, SEEK_HOLE)        = 58241294336
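
The SEEK_DATA/SEEK_HOLE pattern above is the copy process skipping over holes in a sparse source image, which produces lots of seeking but very little visible write traffic while it crosses large unallocated regions. To see how sparse the source really is, its apparent size can be compared with the space it actually occupies; the source path below is an assumption based on the 'local' storage layout implied by the target path in the task output:

    # apparent file size vs. space actually allocated
    du -h --apparent-size /var/lib/vz/images/100/vm-100-disk-1.raw
    du -h /var/lib/vz/images/100/vm-100-disk-1.raw
    # qemu-img reports virtual size vs. disk size as well
    qemu-img info /var/lib/vz/images/100/vm-100-disk-1.raw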

The VM size is 500 GB and there is enough free space on the machine:

udev             10M       0   10M    0% /dev
tmpfs            26G    319M   25G    2% /run
/dev/sda4       3,6T    878G  2,6T   26% /
tmpfs            63G     43M   63G    1% /dev/shm
tmpfs           5,0M       0  5,0M    0% /run/lock
tmpfs            63G       0   63G    0% /sys/fs/cgroup
/dev/sda3       946M    188M  694M   22% /boot
cgmfs           100K       0  100K    0% /run/cgmanager/fs
/dev/fuse        30M     28K   30M    1% /etc/pve
tmpfs           100K       0  100K    0% /run/lxcfs/controllers
tmpfs            13G       0   13G    0% /run/user/1000

It is a single Proxmox installation without a cluster.

I am running the following Proxmox system:

proxmox-ve: 4.1-28 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-2 (running version: 4.1-2/78c5f4a2)
pve-kernel-4.2.6-1-pve: 4.2.6-28
pve-kernel-2.6.32-43-pve: 2.6.32-166
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-29
qemu-server: 4.0-42
pve-firmware: 1.1-7
libpve-common-perl: 4.0-42
libpve-access-control: 4.0-10
libpve-storage-perl: 4.0-38
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-18
pve-container: 1.0-35
pve-firewall: 2.0-14
pve-ha-manager: 1.0-16
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-5
lxcfs: 0.13-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1

The syslog shows:

Dec 29 19:03:49 hostname pvedaemon[26539]: <root@pam> starting task UPID:hostname:0000681B:0329CBE4:5682CB05:qmclone:100:root@pam:
Dec 29 19:04:59 hostname pveproxy[19936]: worker exit
Dec 29 19:04:59 hostname pveproxy[24482]: worker 19936 finished
Dec 29 19:04:59 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:04:59 hostname pveproxy[24482]: worker 27331 started
Dec 29 19:05:02 hostname lxcfs[18709]: select: No such file or directory
Dec 29 19:05:12 hostname pvedaemon[26539]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:06:51 hostname lxcfs[18709]: select: Success
Dec 29 19:07:55 hostname kernel: [530914.403182] audit: type=1400 audit(1451412475.136:1387): apparmor="DENIED" operation="ptrace" profile="lxc-container-default" pid=1994 comm="sw-collectd" requested_mask="trace" denied_mask="trace" peer="unconfined"
Dec 29 19:07:55 hostname kernel: [530914.403204] audit: type=1400 audit(1451412475.136:1388): apparmor="DENIED" operation="ptrace" profile="lxc-container-default" pid=1994 comm="sw-collectd" requested_mask="read" denied_mask="read" peer="unconfined"
Dec 29 19:09:25 hostname pvedaemon[26539]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:09:42 hostname pveproxy[24482]: worker 27331 finished
Dec 29 19:09:42 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:09:42 hostname pveproxy[24482]: worker 29677 started
Dec 29 19:09:44 hostname pveproxy[29676]: got inotify poll request in wrong process - disabling inotify
Dec 29 19:09:46 hostname pveproxy[29676]: worker exit
Dec 29 19:10:02 hostname lxcfs[18709]: select: Success
Dec 29 19:10:32 hostname pvedaemon[25727]: worker exit
Dec 29 19:10:32 hostname pvedaemon[1160]: worker 25727 finished
Dec 29 19:10:32 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:10:32 hostname pvedaemon[1160]: worker 30242 started
Dec 29 19:11:45 hostname lxcfs[18709]: select: Success
Dec 29 19:11:48 hostname pvedaemon[30242]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:13:48 hostname pvedaemon[26539]: worker exit
Dec 29 19:13:48 hostname pvedaemon[1160]: worker 26539 finished
Dec 29 19:13:48 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:13:48 hostname pvedaemon[1160]: worker 31618 started
Dec 29 19:13:55 hostname pveproxy[29677]: worker exit
Dec 29 19:13:55 hostname pveproxy[24482]: worker 29677 finished
Dec 29 19:13:55 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:13:55 hostname pveproxy[24482]: worker 31677 started
Dec 29 19:14:28 hostname pveproxy[24482]: worker 26350 finished
Dec 29 19:14:28 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:14:28 hostname pveproxy[24482]: worker 31867 started
Dec 29 19:14:31 hostname pveproxy[31866]: worker exit
Dec 29 19:15:02 hostname lxcfs[18709]: select: Success
Dec 29 19:15:40 hostname pvedaemon[23612]: worker exit
Dec 29 19:15:40 hostname pvedaemon[1160]: worker 23612 finished
Dec 29 19:15:40 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:15:40 hostname pvedaemon[1160]: worker 32588 started
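
The UPID in the first syslog line identifies the clone task, so its progress can also be followed in the task log on the node; this is a sketch assuming the standard task-log location under /var/log/pve/tasks:

    # locate and follow the log of the clone task by its UPID
    find /var/log/pve/tasks -name 'UPID:hostname:0000681B*'
    tail -f "$(find /var/log/pve/tasks -name 'UPID:hostname:0000681B*' | head -n 1)"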
    
asked by MyFault on 29.12.2015 at 18:24

0 answers