zpool: pool I/O is currently suspended


I'm running ZFS on OSX and the zpool is active and online:

NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
WD_1TB    931G   280G   651G    30%  1.00x  ONLINE  -

but I can't mount it:

$ sudo zfs mount WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended
cannot open 'WD_1TB': pool I/O is currently suspended

or unmount it:

$ sudo zfs unmount WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended
cannot open 'WD_1TB': pool I/O is currently suspended

or even destroy it:

$ sudo zpool destroy -f WD_1TB
cannot open 'WD_1TB': pool I/O is currently suspended

Running zpool export WD_1TB just freezes.

Clearing device errors in the pool also fails:

$ sudo zpool clear WD_1TB
cannot clear errors for WD_1TB: I/O error

The above happens whether the disk is connected via USB or not.

What's interesting is that zpool status points the pool at /dev/disk1, but diskutil list shows the disk at /dev/disk3.
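To compare the two views, something along these lines can be used (plain invocations; the comments reflect the mismatch described above):

$ zpool status WD_1TB    # the pool reports its device as /dev/disk1
$ diskutil list          # the OS lists the physical disk as /dev/disk3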

I enabled debug messages via sysctl -w zfs.vnops_osx_debug=1 and ran sudo dmesg | tail, which shows something like:

0 [Level 3] [Facility com.apple.system.fs] [ErrType IO] [ErrNo 6] [IOType Read] [PBlkNum 0] [LBlkNum 0] 
0 [Level 3] [Facility com.apple.system.fs] [DevNode devfs] [MountPt /dev] [Path /dev/disk1s2] 
disk1s2: media is not present.
0 [Level 3] [Facility com.apple.system.fs] [ErrType IO] [ErrNo 6] [IOType Read] [PBlkNum 512] [LBlkNum 512] 
0 [Level 3] [Facility com.apple.system.fs] [DevNode devfs] [MountPt /dev] [Path /dev/disk1s2] 

zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b00000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b10000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b20000 size 0x10000
zfs_vnop_write(vp 0xffffff804f6303c0, offset 0x12b30000 size 0x10000
zfs_vnop_write(vp 0xffffff8051b031e0, offset 0x1f0000 size 0x10000

Connecting or disconnecting the HDD doesn't help.

Is there any way to simply mount the HDD on OSX under the circumstances above?


by kenorb 09.12.2013 / 12:53

1 answer


If running sudo zpool clear WD_1TB doesn't work, try:

$ sudo zpool clear -nFX WD_1TB

where these undocumented parameters mean the following:

  • -F: (undocumented for clear, the same as for import) Rewind. Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported.
  • -n: (undocumented for clear, the same as for import) Used with the -F recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the -F option above.
  • -X (undocumented): Extreme rewind. The effect of -X seems to be that an extremely lengthy operation is attempted which may never finish. In some cases, a reboot was necessary to terminate the process.
  • -V (undocumented): Option found by UTSLing; when used for import, it makes the pool get imported again, but still without an attempt at resilvering.

Source: ZFS pool faulted issue and man zpool.

Then try to re-import the pool again:

$ zpool import WD_1TB
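A cautious way to apply the rewind flags, sketched under the assumption that the pool can first be exported, is to dry-run the recovery with -n before committing to it:

$ sudo zpool export -f WD_1TB     # may hang, as noted in the question
$ sudo zpool import -nF WD_1TB    # dry run: only checks whether the rewind would succeed
$ sudo zpool import -F WD_1TB     # real run: discards the last few transactions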

If that doesn't help, try the following commands to remove the invalid zpool:

$ zpool list -v
$ sudo zfs unmount WD_1TB
$ sudo zpool destroy -f WD_1TB
$ zpool detach WD_1TB disk1s2
$ zpool remove WD_1TB disk1s2
$ zpool remove WD_1TB /dev/disk1s2
$ zpool set cachefile=/etc/zfs/zpool.cache WD_1TB
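Since the question notes that zpool status and diskutil list disagree about the device node (/dev/disk1 vs /dev/disk3), one speculative extra step is to export the pool and re-import it with an explicit device search path, so ZFS re-scans the current device nodes; -d is a standard zpool import option that takes a directory to search:

$ sudo zpool export -f WD_1TB
$ sudo zpool import -d /dev WD_1TB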

Finally, if nothing helps, remove the /etc/zfs/zpool.cache file (optional) and restart the computer.
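In commands, that last resort looks roughly like this:

$ sudo rm /etc/zfs/zpool.cache    # optional: drop the stale pool cache
$ sudo reboot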


by 11.12.2013 / 10:49