Removing LV cache after lvreduce fails


I'm seeing some strange behavior with LVM. If I resize a cached logical volume and then try to remove the cache, the removal fails with exit code 5. Does anyone know why this would happen? Steps to reproduce:

lvcreate -L2g -n lv0 lvs /dev/slow
lvcreate --type cache-pool -L2g -n lv0_cache lvs /dev/fast
lvconvert --type cache --cachepool lvs/lv0_cache lvs/lv0
lvreduce -L500m /dev/lvs/lv0
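
For reference, the stack at this point can be inspected with standard lvs report fields; after the lvconvert, the hidden internal volumes (lv0_corig, lv0_cache_cdata, lv0_cache_cmeta) should all show up:

# Show the cached LV plus its hidden internal volumes
lvs -a -o name,size,pool_lv,origin lvs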

All four commands complete successfully; then try to remove/delete the cache:

lvremove /dev/lvs/lv0_cache
Command failed with status code 5.

With -vvv:

      Setting activation/monitoring to 1
      Setting global/locking_type to 1
      Setting global/wait_for_locks to 1
      File-based locking selected.
      Setting global/prioritise_write_locks to 1
      Setting global/locking_dir to /run/lock/lvm
      Setting global/use_lvmlockd to 0
      Setting response to OK
      Setting token to filter:3239235440
      Setting daemon_pid to 464
      Setting response to OK
      Setting global_disable to 0
      report/output_format not found in config: defaulting to basic
      log/report_command_log not found in config: defaulting to 0
      Setting response to OK
      Setting response to OK
      Setting response to OK
      Setting name to lvs
      Processing VG lvs 0MHvCE-YbWR-tlGX-VJ1v-0i75-yc1C-TG0eJf
      Locking /run/lock/lvm/V_lvs WB
      Reading VG lvs 0MHvCEYbWRtlGXVJ1v0i75yc1CTG0eJf
      Setting response to OK
      Setting response to OK
      Setting response to OK
      Setting name to lvs
      Setting metadata/format to lvm2
      Setting id to VpmqY3-Kwrh-6h1K-gyGw-5FdN-izZY-JLWJzN
      Setting format to lvm2
      Setting device to 2064
      Setting dev_size to 524288000
      Setting label_sector to 1
      Setting ext_flags to 1
      Setting ext_version to 2
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting id to JUS32k-IcN3-qLzH-kA2D-WgV6-rjFP-Cb5i2m
      Setting format to lvm2
      Setting device to 256
      Setting dev_size to 8000000
      Setting label_sector to 1
      Setting ext_flags to 1
      Setting ext_version to 2
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting cache_pool to lv0_cache
      Setting origin to lv0_corig
      Stack lvs/lv0:0[0] on LV lvs/lv0_corig:0.
      Adding lvs/lv0:0 as an user of lvs/lv0_corig.
      Adding lvs/lv0:0 as an user of lvs/lv0_cache.
      Setting data to lv0_cache_cdata
      Setting metadata to lv0_cache_cmeta
      Setting cache_mode to writethrough
      Setting policy to smq
      Stack lvs/lv0_cache:0[0] on LV lvs/lv0_cache_cdata:0.
      Adding lvs/lv0_cache:0 as an user of lvs/lv0_cache_cdata.
      Adding lvs/lv0_cache:0 as an user of lvs/lv0_cache_cmeta.
      Setting response to OK
      Setting response to OK
      Setting response to OK
      metadata/lvs_history_retention_time not found in config: defaulting to 0
      /dev/sdb: size is 524288000 sectors
      /dev/ram0: size is 8000000 sectors
      Setting cache_pool to lv0_cache
      Setting origin to lv0_corig
      Stack lvs/lv0:0[0] on LV lvs/lv0_corig:0.
      Adding lvs/lv0:0 as an user of lvs/lv0_corig.
      Adding lvs/lv0:0 as an user of lvs/lv0_cache.
      Setting data to lv0_cache_cdata
      Setting metadata to lv0_cache_cmeta
      Setting cache_mode to writethrough
      Setting policy to smq
      Stack lvs/lv0_cache:0[0] on LV lvs/lv0_cache_cdata:0.
      Adding lvs/lv0_cache:0 as an user of lvs/lv0_cache_cdata.
      Adding lvs/lv0_cache:0 as an user of lvs/lv0_cache_cmeta.
      Adding lvs/lv0_cache to the list of LVs to be processed.
      Processing LV lv0_cache in VG lvs.
      Setting devices/issue_discards to 1
    Archiving volume group "lvs" metadata (seqno 8).
      lvs/lv0 is active locally
      Locking /run/lock/lvm/A_MHvCEYbWRtlGXVJ1v0i75yc1CTG0eJfNP2h2TcKupYBZ99x2jhP1hLl2BvaeN5l WB
      Locking LV 0MHvCEYbWRtlGXVJ1v0i75yc1CTG0eJfNP2h2TcKupYBZ99x2jhP1hLl2BvaeN5l (W)
      /dev/ram0: read_ahead is 256 sectors
      /dev/sdb: read_ahead is 256 sectors
      Setting activation/verify_udev_operations to 0
      Getting driver version
      Getting target version for cache
      Found cache target v2.0.0.
      Getting target version for linear
      Found linear target v1.4.0.
      Getting target version for striped
      Found striped target v1.6.0.
    Loading lvs-lv0_corig table (253:3)
    Suppressed lvs-lv0_corig (253:3) identical table reload.
    Loading lvs-lv0_cache_cdata table (253:1)
    Suppressed lvs-lv0_cache_cdata (253:1) identical table reload.
    Loading lvs-lv0_cache_cmeta table (253:2)
    Suppressed lvs-lv0_cache_cmeta (253:2) identical table reload.
    Loading lvs-lv0 table (253:0)
    Suppressed lvs-lv0 (253:0) identical table reload.
      Locking memory
      Setting activation/use_mlockall to 0
    Suspending lvs-lv0 (253:0) with device flush
    Suspending lvs-lv0_corig (253:3) with device flush
    Suspending lvs-lv0_cache_cdata (253:1) with device flush
    Suspending lvs-lv0_cache_cmeta (253:2) with device flush
      Unlocking LV 0MHvCEYbWRtlGXVJ1v0i75yc1CTG0eJfNP2h2TcKupYBZ99x2jhP1hLl2BvaeN5l
    Loading lvs-lv0_corig table (253:3)
    Suppressed lvs-lv0_corig (253:3) identical table reload.
    Loading lvs-lv0_cache_cdata table (253:1)
    Suppressed lvs-lv0_cache_cdata (253:1) identical table reload.
    Loading lvs-lv0_cache_cmeta table (253:2)
    Suppressed lvs-lv0_cache_cmeta (253:2) identical table reload.
    Loading lvs-lv0 table (253:0)
    Suppressed lvs-lv0 (253:0) identical table reload.
    Resuming lvs-lv0_corig (253:3)
    Resuming lvs-lv0_cache_cdata (253:1)
    Resuming lvs-lv0_cache_cmeta (253:2)
    Resuming lvs-lv0 (253:0)
      Unlocking /run/lock/lvm/A_MHvCEYbWRtlGXVJ1v0i75yc1CTG0eJfNP2h2TcKupYBZ99x2jhP1hLl2BvaeN5l
      lvs/lv0:0 is no longer a user of lvs/lv0_cache.
      Removing layer lv0_corig for lv0
      Unlocking memory
      Unlocking /run/lock/lvm/V_lvs
      Setting global/notify_dbus to 1
  Command failed with status code 5.
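
Status code 5 is LVM's generic ECMD_FAILED, so the trace above doesn't name a specific cause. For what it's worth, lvmcache(7) suggests detaching a cache via lvconvert rather than removing the pool LV directly; I'd expect one of these to be the supported route (sketched here, not yet tested against this exact reproduction):

# Flush the cache and delete the cache pool in one step
lvconvert --uncache lvs/lv0

# Or detach the cache but keep lv0_cache around as an unused pool
lvconvert --splitcache lvs/lv0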
    
by sreya 25.04.2018 / 18:04

0 answers