SSH into the GlusterFS machine you want to keep and run:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster peer status
Number of Peers: 1
Hostname: 10.240.0.123
Port: 24007
Uuid: 03747753-a2cc-47dc-8989-62203a7d31cd
State: Peer in Cluster (Connected)
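If you want to script this check rather than eyeball it, the peer list can be parsed from the output above. A minimal sketch in Python (the sample text is hard-coded here; in practice you would capture it yourself, e.g. with subprocess running gluster peer status):

```python
import re

# Sample output, mirroring `gluster peer status` as shown above.
peer_status = """\
Number of Peers: 1
Hostname: 10.240.0.123
Port: 24007
Uuid: 03747753-a2cc-47dc-8989-62203a7d31cd
State: Peer in Cluster (Connected)
"""

# One (hostname, uuid, state) tuple per peer block.
peers = re.findall(
    r"Hostname: (\S+).*?Uuid: (\S+).*?State: ([^\n]+)",
    peer_status,
    flags=re.DOTALL,
)
print(peers)
# → [('10.240.0.123', '03747753-a2cc-47dc-8989-62203a7d31cd', 'Peer in Cluster (Connected)')]
```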
This shows the other peer, which is the one we want to get rid of.
To detach it, try:
sudo gluster peer detach 10.240.0.123
This may fail with:
peer detach: failed: Brick(s) with the peer 10.240.0.123 exist in cluster
We first need to get rid of that peer's brick:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume info
Volume Name: glusterfs
Type: Replicate
Volume ID: 563f8593-4592-430f-9f0b-c9472c12570b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.240.0.122:/mnt/storage/glusterfs
Brick2: 10.240.0.123:/mnt/storage/glusterfs
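When automating this cleanup, the brick list is easy to extract from the volume info output. A minimal sketch in Python (again with the sample output hard-coded; in a real script you would capture it from gluster volume info):

```python
import re

# Sample output, mirroring `gluster volume info` as shown above.
volume_info = """\
Volume Name: glusterfs
Type: Replicate
Volume ID: 563f8593-4592-430f-9f0b-c9472c12570b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.240.0.122:/mnt/storage/glusterfs
Brick2: 10.240.0.123:/mnt/storage/glusterfs
"""

# Match numbered "BrickN:" lines, skipping the bare "Bricks:" header.
bricks = re.findall(r"^Brick\d+: (.+)$", volume_info, flags=re.MULTILINE)
print(bricks)
# → ['10.240.0.122:/mnt/storage/glusterfs', '10.240.0.123:/mnt/storage/glusterfs']
```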
To remove Brick2, run:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume remove-brick glusterfs 10.240.0.123:/mnt/storage/glusterfs
This can fail with:
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.
Our replica count is set to 2 and must be explicitly reduced to 1, so add replica 1 to the previous command:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume remove-brick glusterfs replica 1 10.240.0.123:/mnt/storage/glusterfs
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
That worked. Verify:
[siddhartha@glusterfs-01-perf ~]$ sudo gluster volume info glusterfs
Volume Name: glusterfs
Type: Distribute
Volume ID: 563f8593-4592-430f-9f0b-c9472c12570b
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.240.0.122:/mnt/storage/glusterfs
With the brick gone, the peer detach from earlier should now succeed:
sudo gluster peer detach 10.240.0.123
You can then probably terminate the other machine.