device mapper on RHEL6 unable to create devices for an LVM logical volume


I have a Xen guest running RHEL6, and it has a LUN presented from Dom0. It contains an LVM volume group called vg_ALHINT (INT for Integration, ALH being an abbreviation of its Oracle database name). The data is Oracle 11g. The VG was imported and activated, and udev created the maps for each logical volume.

However, device mapper did not create the mapping for one of the logical volumes (LV), and for the LV in question it created /dev/dm-2 with different minor numbers compared to the rest of the LVs.

#  dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata:     **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088

** You can see above that vg_ALHINT-oradata is empty.

# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root  10, 58 Apr  3 13:43 control
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root       7 Apr  3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253,  7 Apr  3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset2 -> ../dm-11

vg_ALHINT-oradata (the plain block node above, rather than a symlink) was not created until I ran dmsetup mknodes.
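
(For reference, dmsetup mknodes only recreates the /dev/mapper nodes from what the kernel already has registered; it does not load or activate a table. A minimal check cycle, standard commands:)

# dmsetup mknodes
# ls -l /dev/mapper/vg_ALHINT-oradata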

# cat /proc/partitions
major minor  #blocks  name

 202        0   26214400 xvda
 202        1     262144 xvda1
 202        2   25951232 xvda2
 253        0    8880128 dm-0
 253        1    1048576 dm-1
 253        2    2097152 dm-2
 253        3    2097152 dm-3
 253        4     262144 dm-4
 253        5    1048576 dm-5
 253        6   10485760 dm-6
 202       16   29360128 xvdb
 253        8    2097152 dm-8
 253        9    2150400 dm-9
 253       10    2093056 dm-10
 253       11    2097152 dm-11

dm-7 would have been vg_ALHINT-oradata, and it is missing. I ran dmsetup mknodes and dm-7 was created, but it is still missing from /proc/partitions.

# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr  3 13:59 /dev/dm-7

Its major and minor numbers are 253:7, but the devices for it and the other LVs in its VG are 202:nn.

lvs tells me that this LV is suspended:

# lvs
    Logging initialised at Thu Apr  3 14:44:19 2014
    Set umask from 0022 to 0077
    Finding all logical volumes
  LV       VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0      vg0       -wi-ao----   8.47g
  lv1      vg0       -wi-ao----   1.00g
  lv2      vg0       -wi-ao----   2.00g
  lv3      vg0       -wi-ao----   2.00g
  lv4      vg0       -wi-ao---- 256.00m
  lv5      vg0       -wi-ao----   1.00g
  lv6      vg0       -wi-ao----  10.00g
  admin    vg_ALHINT -wi-a-----   2.00g
  arch     vg_ALHINT -wi-a-----   2.05g
  oradata  vg_ALHINT -wi-s-----  39.95g
  safeset1 vg_ALHINT -wi-a-----   2.00g
  safeset2 vg_ALHINT -wi-a-----   2.00g
    Wiping internal VG cache
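
(The 's' in the Attr column is the LV state flag: suspended. The same thing can be confirmed directly with dmsetup; these are standard commands, output omitted here:)

# dmsetup info vg_ALHINT-oradata
# dmsetup status vg_ALHINT-oradata

dmsetup info prints the device State (ACTIVE vs SUSPENDED) and whether any tables are present; dmsetup status prints the status of the live table, which is empty here, matching the empty dmsetup table line above.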

The disk was created from a snapshot of our production databases. Oracle was shut down and the VG was exported before the snapshot. I should note that I do this same capture for hundreds of databases weekly via a script. Since this was a snapshot, I have the device-mapper table from the original and used it to try to recreate the missing table:

0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112

After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 $table.txt

I then tried to resume the device:

# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
#
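
(For clarity, the sequence was roughly the following; the table file name is illustrative here — it is simply wherever the production table above was saved:)

# dmsetup suspend /dev/dm-7
# dmsetup load /dev/dm-7 oradata-table.txt
# dmsetup resume /dev/dm-7

The resume step is what fails with the "Invalid argument" error shown above.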

Any ideas? I am really stuck. (Yes, I have rebooted and re-snapshotted this lots of times and always hit the same problem. I even reinstalled this server and ran yum update.)

// EDIT

I forgot to add that this is the original dmsetup table from our production environment; I tried to load the oradata layout onto our integration server, as noted above.

#  dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088

// EDIT

I ran vgscan --mknodes and got:

The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.



# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata

I still cannot activate it, and get this error message:

device-mapper: resume ioctl on failed: Invalid argument Unable to resume vg_ALHINT-oradata (253:7) 

// EDIT

I see stack traces in /var/log/messages:

Apr  3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr  3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr  3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr  3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:33:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:33:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vgdisplay     D 0000000000000000     0  1437   1423 0x00000080
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr  3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
    
asked by si_l 03.04.2014 / 14:50

2 answers


See devices.txt in the kernel documentation: major 202 is "Xen Virtual Block Device", major 253 is LVM / device mapper.

All of your dm-x devices are 253:n; they merely point to 202:n.

The error message is clear:

device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256

It looks like the Xen device changed. Your vg_ALHPRD-oradata table cannot be loaded because it tries to access storage on 202:16 (xvdb) that simply is not there.
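
One way to double-check this from the guest (standard commands, not taken from the original post; the expected figure matches the dev_size in the kernel log above):

# blockdev --getsz /dev/xvdb
# echo $((58714112 + 41943040))

blockdev --getsz reports the device size in 512-byte sectors — 58720256 here (28 GiB) — while the last oradata segment ends at sector 58714112 + 41943040 = 100657152 (about 48 GiB), so the table cannot be loaded onto this disk.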

    
answered 03.04.2014 / 16:39

It turns out that multipath on the hypervisor refuses to update its maps to new LUN sizes.

This LUN was originally 28 GB and was later grown to 48 GB on the storage array.

The VG metadata says it is 48G, and the disk really is 48G, but multipath does not update and still thinks it is 28G.
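
(A quick way to see the mismatch from the LVM side, assuming your lvm2 version supports the dev_size report field, is to compare the size recorded in the metadata with the size the kernel currently reports for the device:)

# pvs -o pv_name,vg_name,pv_size,dev_size

pv_size comes from the VG metadata (48G here), while dev_size is what the kernel sees through the multipath map (still 28G until the map is refreshed).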

Multipath clings to 28G:

# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
'-+- policy='round-robin 0' prio=0 status=active
  |- 8:0:0:22   sdt   65:48    active undef running
  |- 10:0:0:22  sdbh  67:176   active undef running
  |- 7:0:0:22   sddq  71:128   active undef running
  |- 9:0:0:22   sdfb  129:208  active undef running
  |- 8:0:1:22   sdmz  70:432   active undef running
  |- 7:0:1:22   sdoj  128:496  active undef running
  |- 10:0:1:22  sdop  129:336  active undef running
  |- 9:0:1:22   sdqm  132:352  active undef running
  |- 7:0:2:22   sdxh  71:624   active undef running
  |- 8:0:2:22   sdzy  131:704  active undef running
  |- 10:0:2:22  sdaab 131:752  active undef running
  |- 9:0:2:22   sdaed 66:912   active undef running
  |- 7:0:3:22   sdakm 132:992  active undef running
  |- 10:0:3:22  sdall 134:880  active undef running
  |- 8:0:3:22   sdamx 8:1232   active undef running
  '- 9:0:3:22   sdaqa 69:1248  active undef running

Actual disk size on the storage array:

# showvv ALHIDB_SNP_001
                                                                          -Rsvd(MB)-- -(MB)-
  Id Name           Prov Type  CopyOf            BsId Rd -Detailed_State- Adm Snp Usr  VSize
4098 ALHIDB_SNP_001 snp  vcopy ALHIDB_SNP_001.ro 5650 RW normal            --  --  --  49152

Just to be sure I have the right disk:

# showvlun -showcols VVName,VV_WWN| grep -i  0002acf962421ba
ALHIDB_SNP_001          50002ACF962421BA 

And the VG thinks it is 48G:

  --- Volume group ---
  VG Name               vg_ALHINT
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  30
  VG Access             read/write
  VG Status             exported/resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               48.00 GiB
  PE Size               4.00 MiB
  Total PE              12287
  Alloc PE / Size       12287 / 48.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem

When I rescan the HBAs for new disks and reconfigure multipathing, the disk still shows 28G, so I tried this, with no change:

# multipathd -k'resize map 350002acf962421ba'
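
(For completeness, the usual online procedure is to rescan every SCSI path device behind the map before asking multipathd to resize it — a sketch, with path names taken from the multipath output above; as noted, it did not help in this case:)

# echo 1 > /sys/block/sdt/device/rescan
# echo 1 > /sys/block/sdbh/device/rescan
# ... one rescan per remaining path (sddq, sdfb, ...)
# multipathd -k'resize map 350002acf962421ba'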

Versions:

lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5

Workaround: because I could not think of another solution, I did this. I did not mention earlier that I run OVM 3.2 on top, so part of my workaround involves OVM:

i) Shut down the guests in Xen via OVM.
ii) Remove the disks.
iii) Delete the LUNs from OVM.
iv) Unpresent the LUNs from the hypervisors.
v) Rescan storage in OVM.
vi) Wait 30 minutes ;)
vii) Present my disks to the hypervisors with different LUN IDs.
viii) Rescan storage in OVM.

And now, fantastically, I see 48G disks:

# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
'-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:127  sdt   65:48    active undef running
  |- 9:0:1:127  sdbh  67:176   active undef running
  |- 9:0:2:127  sddo  71:96    active undef running
  |- 9:0:3:127  sdfb  129:208  active undef running
  |- 10:0:3:127 sdmz  70:432   active undef running
  |- 10:0:0:127 sdoh  128:464  active undef running
  |- 10:0:1:127 sdop  129:336  active undef running
  |- 10:0:2:127 sdqm  132:352  active undef running
  |- 7:0:1:127  sdzu  131:640  active undef running
  |- 7:0:0:127  sdxh  71:624   active undef running
  |- 7:0:3:127  sdaed 66:912   active undef running
  |- 7:0:2:127  sdaab 131:752  active undef running
  |- 8:0:0:127  sdakm 132:992  active undef running
  |- 8:0:1:127  sdall 134:880  active undef running
  |- 8:0:2:127  sdamx 8:1232   active undef running
  '- 8:0:3:127  sdaqa 69:1248  active undef running
    
answered 04.04.2014 / 10:48