Putting the standby node back online results in DRBD split-brain


I have a 2-node Pacemaker/Corosync cluster running on Scientific Linux 6. DRBD is running on both nodes, with master=node1 and slave=node2. When I put the slave node2 into standby and then bring it back online, everything works fine.
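For reference, I am cycling the nodes with the standard crmsh commands, along these lines:

crm node standby node2
crm node online node2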

The problem is when I put the DRBD master node1 into standby. The resources migrate to the other node, node2, which is fine. But when I bring node1 back online, I get a DRBD split-brain:

[root@node1 ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6,     2016-12-13 18:38:15
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:184 dr:2289 al:4 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:172

[root@node2 ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6,    2016-12-13 18:38:15
0: cs:StandAlone ro:Secondary/Unknown ds:UpToDate/DUnknown   r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:984
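(For completeness: each time this happens I can clear the split-brain by hand with the standard DRBD 8.4 procedure, using the resource name cluster from my config below. On the node whose changes should be discarded:

drbdadm secondary cluster
drbdadm connect --discard-my-data cluster

and on the surviving node, since it is also StandAlone:

drbdadm connect cluster

But obviously I would like the cluster not to split-brain in the first place.)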

This is my crm configuration:

[root@node1 ~]# crm config show
node node1 \
attributes standby=off
node node2 \
attributes standby=off
primitive drbd0 ocf:linbit:drbd \
params drbd_resource=cluster \
op monitor role=Master interval=59s timeout=45s \
op monitor role=Slave interval=60s timeout=45s
primitive fs0 Filesystem \
params fstype=ext4 directory="/pgdata" device="/dev/drbd0"
primitive pgsqlDB pgsql \
op start interval=0 timeout=120 \
op stop interval=0 timeout=120 \
params pgdata="/var/lib/pgsql/9.6/data" \
meta target-role=Started
primitive virtual_ip IPaddr2 \
params ip=172.16.144.32 cidr_netmask=32 \
op monitor interval=10s \
meta migration-threshold=10 target-role=Started
group pg-group fs0 virtual_ip pgsqlDB \
meta target-role=Started is-managed=true
ms ms-drbd0 drbd0 \
meta clone-max=2 notify=true globally-unique=false target-role=Started is-managed=true \
meta master-max=1 master-node-max=1 clone-node-max=1 is-managed=true
location cli-prefer-ms-drbd0 ms-drbd0 role=Started inf: node1
order ip_before_db Mandatory: virtual_ip:start pgsqlDB:start
order ms-drbd0-before-pg-group Mandatory: ms-drbd0:promote pg-group:start
location ms-drbd0-master-on-node1 ms-drbd0 \
rule $role=master 100: #uname eq node1
colocation pg-group-on-ms-drbd0 inf: pg-group ms-drbd0:Master
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.18-3.el6-bfe4e80420 \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=2 \
stonith-enabled=false \
maintenance-mode=false \
last-lrm-refresh=1537427286

This is what happens on node1 after I bring node1 back online:

[root@node1 ~]# grep '20 12:30' /var/log/messages|less
Sep 20 12:30:11 node1 kernel: drbd cluster: Starting worker thread (from drbdsetup-84 [6553])
Sep 20 12:30:11 node1 kernel: block drbd0: disk( Diskless -> Attaching ) 
Sep 20 12:30:11 node1 kernel: drbd cluster: Method to ensure write ordering: flush
Sep 20 12:30:11 node1 kernel: block drbd0: max BIO size = 1048576
Sep 20 12:30:11 node1 kernel: block drbd0: drbd_bm_resize called with capacity == 4194024
Sep 20 12:30:11 node1 kernel: block drbd0: resync bitmap: bits=524253 words=8192 pages=16
Sep 20 12:30:11 node1 kernel: block drbd0: size = 2048 MB (2097012 KB)
Sep 20 12:30:11 node1 kernel: block drbd0: recounting of set bits took additional 0 jiffies
Sep 20 12:30:11 node1 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
Sep 20 12:30:11 node1 kernel: block drbd0: disk( Attaching -> UpToDate ) 
Sep 20 12:30:11 node1 kernel: block drbd0: attached to UUIDs 9F96EE60D649CC0E:0000000000000000:085616B8CCC8E648:085516B8CCC8E648
Sep 20 12:30:11 node1 kernel: drbd cluster: conn( StandAlone -> Unconnected ) 
Sep 20 12:30:11 node1 kernel: drbd cluster: Starting receiver thread (from drbd_w_cluster [6554])
Sep 20 12:30:11 node1 kernel: drbd cluster: receiver (re)started
Sep 20 12:30:11 node1 kernel: drbd cluster: conn( Unconnected -> WFConnection ) 
Sep 20 12:30:11 node1 attrd[5748]:   notice: Sending flush op to all hosts for: master-drbd0 (1000)
Sep 20 12:30:11 node1 attrd[5748]:   notice: Sent update 38: master-drbd0=1000
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of start operation for drbd0 on node1: 0 (ok) | call=59 key=drbd0_start_0 confirmed=true cib-update=37
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of notify operation for drbd0 on node1: 0 (ok) | call=60 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of notify operation for drbd0 on node1: 0 (ok) | call=61 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:11 node1 kernel: block drbd0: role( Secondary -> Primary ) 
Sep 20 12:30:11 node1 kernel: block drbd0: new current UUID F3E81D4D5AC683BB:9F96EE60D649CC0E:085616B8CCC8E648:085516B8CCC8E648
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of promote operation for drbd0 on node1: 0 (ok) | call=62 key=drbd0_promote_0 confirmed=true cib-update=38
Sep 20 12:30:11 node1 attrd[5748]:   notice: Sending flush op to all hosts for: master-drbd0 (10000)
Sep 20 12:30:11 node1 attrd[5748]:   notice: Sent update 40: master-drbd0=10000
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of notify operation for drbd0 on node1: 0 (ok) | call=63 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:11 node1 kernel: drbd cluster: Handshake successful: Agreed network protocol version 101
Sep 20 12:30:11 node1 kernel: drbd cluster: Feature flags enabled on protocol level: 0x7 TRIM THIN_RESYNC WRITE_SAME.
Sep 20 12:30:11 node1 kernel: drbd cluster: Peer authenticated using 20 bytes HMAC
Sep 20 12:30:11 node1 kernel: drbd cluster: conn( WFConnection -> WFReportParams ) 
Sep 20 12:30:11 node1 kernel: drbd cluster: Starting ack_recv thread (from drbd_r_cluster [6580])
Sep 20 12:30:11 node1 kernel: block drbd0: drbd_sync_handshake:
Sep 20 12:30:11 node1 kernel: block drbd0: self F3E81D4D5AC683BB:9F96EE60D649CC0E:085616B8CCC8E648:085516B8CCC8E648 bits:0 flags:0
Sep 20 12:30:11 node1 kernel: block drbd0: peer 7DACE626E5A23724:9F96EE60D649CC0E:085616B8CCC8E648:085516B8CCC8E648 bits:339 flags:0
Sep 20 12:30:11 node1 kernel: block drbd0: uuid_compare()=100 by rule 90
Sep 20 12:30:11 node1 kernel: block drbd0: helper command: /sbin/drbdadm initial-split-brain minor-0
Sep 20 12:30:11 node1 kernel: block drbd0: helper command: /sbin/drbdadm initial-split-brain minor-0 exit code 0 (0x0)
Sep 20 12:30:11 node1 kernel: block drbd0: Split-Brain detected but unresolved, dropping connection!
Sep 20 12:30:11 node1 kernel: block drbd0: helper command: /sbin/drbdadm split-brain minor-0
Sep 20 12:30:11 node1 kernel: block drbd0: helper command: /sbin/drbdadm split-brain minor-0 exit code 0 (0x0)
Sep 20 12:30:11 node1 kernel: drbd cluster: conn( WFReportParams -> Disconnecting ) 
Sep 20 12:30:11 node1 kernel: drbd cluster: error receiving ReportState, e: -5 l: 0!
Sep 20 12:30:11 node1 kernel: drbd cluster: ack_receiver terminated
Sep 20 12:30:11 node1 kernel: drbd cluster: Terminating drbd_a_cluster
Sep 20 12:30:11 node1 kernel: drbd cluster: Connection closed
Sep 20 12:30:11 node1 kernel: drbd cluster: conn( Disconnecting -> StandAlone ) 
Sep 20 12:30:11 node1 kernel: drbd cluster: receiver terminated
Sep 20 12:30:11 node1 kernel: drbd cluster: Terminating drbd_r_cluster
Sep 20 12:30:11 node1 Filesystem(fs0)[6748]: INFO: Running start for /dev/drbd0 on /pgdata
Sep 20 12:30:11 node1 kernel: EXT4-fs (drbd0): warning: maximal mount count reached, running e2fsck is recommended
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of start operation for fs0 on node1: 0 (ok) | call=65 key=fs0_start_0 confirmed=true cib-update=40
Sep 20 12:30:11 node1 kernel: EXT4-fs (drbd0): mounted filesystem with ordered data mode. Opts: 
Sep 20 12:30:11 node1 IPaddr2(virtual_ip)[6855]: INFO: Adding inet address 172.16.144.32/32 with broadcast address 172.16.144.255 to device eth1
Sep 20 12:30:11 node1 avahi-daemon[1526]: Registering new address record for 172.16.144.32 on eth1.IPv4.
Sep 20 12:30:11 node1 IPaddr2(virtual_ip)[6855]: INFO: Bringing device eth1 up
Sep 20 12:30:11 node1 IPaddr2(virtual_ip)[6855]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-172.16.144.32 eth1 172.16.144.32 auto not_used not_used
Sep 20 12:30:11 node1 crmd[5750]:   notice: Result of start operation for virtual_ip on node1: 0 (ok) | call=66 key=virtual_ip_start_0 confirmed=true cib-update=41
Sep 20 12:30:12 node1 pgsql(pgsqlDB)[6922]: INFO: server starting
Sep 20 12:30:12 node1 pgsql(pgsqlDB)[6922]: INFO: PostgreSQL start command sent.
Sep 20 12:30:12 node1 pgsql(pgsqlDB)[6922]: WARNING: psql: FATAL: the database system is starting up
Sep 20 12:30:12 node1 pgsql(pgsqlDB)[6922]: WARNING: PostgreSQL template1 isn't running
Sep 20 12:30:12 node1 pgsql(pgsqlDB)[6922]: WARNING: Connection error (connection to the server went bad and the session was not interactive) occurred while executing the psql command.

This is what happens on node2 after I bring node1 back online:

[root@node2 ~]# grep '20 12:30' /var/log/messages|less
Sep 20 12:30:09 node2 crmd[8320]:   notice: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Sep 20 12:30:09 node2 pengine[8319]:   notice: Pre-allocation failed: got node1 instead of node2
Sep 20 12:30:09 node2 pengine[8319]:   notice:  * Move       drbd0:0        (       node2 -> node1 Master )  
Sep 20 12:30:09 node2 pengine[8319]:   notice:  * Start      drbd0:1        (                                               node2 )  
Sep 20 12:30:09 node2 pengine[8319]:   notice:  * Move       fs0            (              node2 -> node1 )  
Sep 20 12:30:09 node2 pengine[8319]:   notice:  * Move       virtual_ip     (              node2 -> node1 )  
Sep 20 12:30:09 node2 pengine[8319]:   notice:  * Move       pgsqlDB        (              node2 -> node1 )  
Sep 20 12:30:09 node2 pengine[8319]:   notice: Calculated transition 28, saving inputs in /var/lib/pacemaker/pengine/pe-input-3800.bz2
Sep 20 12:30:09 node2 crmd[8320]:   notice: Initiating stop operation pgsqlDB_stop_0 locally on node2 | action 40
Sep 20 12:30:09 node2 crmd[8320]:   notice: Initiating notify operation drbd0_pre_notify_demote_0 locally on node2 | action 56
Sep 20 12:30:09 node2 crmd[8320]:   notice: Result of notify operation for drbd0 on node2: 0 (ok) | call=108 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:10 node2 pgsql(pgsqlDB)[16999]: INFO: waiting for server to shut down.... done server stopped
Sep 20 12:30:10 node2 pgsql(pgsqlDB)[16999]: INFO: PostgreSQL is down
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of stop operation for pgsqlDB on node2: 0 (ok) | call=107 key=pgsqlDB_stop_0 confirmed=true cib-update=156
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating stop operation virtual_ip_stop_0 locally on node2 | action 37
Sep 20 12:30:10 node2 IPaddr2(virtual_ip)[17129]: INFO: IP status = ok, IP_CIP=
Sep 20 12:30:10 node2 avahi-daemon[1592]: Withdrawing address record for 172.16.144.32 on eth1.
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of stop operation for virtual_ip on node2: 0 (ok) | call=110 key=virtual_ip_stop_0 confirmed=true cib-update=157
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating stop operation fs0_stop_0 locally on node2 | action 35
Sep 20 12:30:10 node2 Filesystem(fs0)[17182]: INFO: Running stop for /dev/drbd0 on /pgdata
Sep 20 12:30:10 node2 Filesystem(fs0)[17182]: INFO: Trying to unmount /pgdata
Sep 20 12:30:10 node2 Filesystem(fs0)[17182]: INFO: unmounted /pgdata successfully
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of stop operation for fs0 on node2: 0 (ok) | call=111 key=fs0_stop_0 confirmed=true cib-update=158
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating demote operation drbd0_demote_0 locally on node2 | action 4
Sep 20 12:30:10 node2 kernel: block drbd0: role( Primary -> Secondary ) 
Sep 20 12:30:10 node2 kernel: block drbd0: 1356 KB (339 bits) marked out-of-sync by on disk bit-map.
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of demote operation for drbd0 on node2: 0 (ok) | call=113 key=drbd0_demote_0 confirmed=true cib-update=159
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating notify operation drbd0_post_notify_demote_0 locally on node2 | action 57
Sep 20 12:30:10 node2 attrd[8318]:   notice: Sending flush op to all hosts for: master-drbd0 (1000)
Sep 20 12:30:10 node2 attrd[8318]:   notice: Sent update 77: master-drbd0=1000
Sep 20 12:30:10 node2 crmd[8320]:   notice: Transition aborted by status-node2-master-drbd0 doing modify master-drbd0=1000: Transient attribute change | cib=0.150.5 source=abort_unless_down:343 path=/cib/status/node_state[@id='node2']/transient_attributes[@id='node2']/instance_attributes[@id='status-node2']/nvpair[@id='status-node2-master-drbd0'] complete=false
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of notify operation for drbd0 on node2: 0 (ok) | call=114 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:10 node2 crmd[8320]:   notice: Transition 28 (Complete=15, Pending=0, Fired=0, Skipped=1, Incomplete=37, Source=/var/lib/pacemaker/pengine/pe-input-3800.bz2): Stopped
Sep 20 12:30:10 node2 pengine[8319]:   notice: Pre-allocation failed: got node1 instead of node2
Sep 20 12:30:10 node2 pengine[8319]:   notice:  * Move       drbd0:0        ( Slave node2 -> Master node1 )  
Sep 20 12:30:10 node2 pengine[8319]:   notice:  * Start      drbd0:1        (                                               node2 )  
Sep 20 12:30:10 node2 pengine[8319]:   notice:  * Start      fs0            (                                               node1 )  
Sep 20 12:30:10 node2 pengine[8319]:   notice:  * Start      virtual_ip     (                                               node1 )  
Sep 20 12:30:10 node2 pengine[8319]:   notice:  * Start      pgsqlDB        (                                               node1 )  
Sep 20 12:30:10 node2 pengine[8319]:   notice: Calculated transition 29, saving inputs in /var/lib/pacemaker/pengine/pe-input-3801.bz2
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating notify operation drbd0_pre_notify_stop_0 locally on node2 | action 45
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of notify operation for drbd0 on node2: 0 (ok) | call=115 key=drbd0_notify_0 confirmed=true cib-update=0
Sep 20 12:30:10 node2 crmd[8320]:   notice: Initiating stop operation drbd0_stop_0 locally on node2 | action 2
Sep 20 12:30:10 node2 kernel: drbd cluster: conn( WFConnection -> Disconnecting ) 
Sep 20 12:30:10 node2 kernel: drbd cluster: Discarding network configuration.
Sep 20 12:30:10 node2 kernel: drbd cluster: Connection closed
Sep 20 12:30:10 node2 kernel: drbd cluster: conn( Disconnecting -> StandAlone ) 
Sep 20 12:30:10 node2 kernel: drbd cluster: receiver terminated
Sep 20 12:30:10 node2 kernel: drbd cluster: Terminating drbd_r_cluster
Sep 20 12:30:10 node2 kernel: block drbd0: disk( UpToDate -> Failed ) 
Sep 20 12:30:10 node2 kernel: block drbd0: 1356 KB (339 bits) marked out-of-sync by on disk bit-map.
Sep 20 12:30:10 node2 kernel: block drbd0: disk( Failed -> Diskless ) 
Sep 20 12:30:10 node2 kernel: drbd cluster: Terminating drbd_w_cluster
Sep 20 12:30:10 node2 attrd[8318]:   notice: Sending flush op to all hosts for: master-drbd0 (<null>)
Sep 20 12:30:10 node2 attrd[8318]:   notice: Sent delete 79: node=node2, attr=master-drbd0, id=<n/a>, set=(null), section=status
Sep 20 12:30:10 node2 crmd[8320]:   notice: Transition aborted by deletion of nvpair[@id='status-node2-master-drbd0']: Transient attribute change | cib=0.150.6 source=abort_unless_down:357 path=/cib/status/node_state[@id='node2']/transient_attributes[@id='node2']/instance_attributes[@id='status-node2']/nvpair[@id='status-node2-master-drbd0'] complete=false
Sep 20 12:30:10 node2 crmd[8320]:   notice: Result of stop operation for drbd0 on node2: 0 (ok) | call=116 key=drbd0_stop_0 confirmed=true cib-update=161
Sep 20 12:30:10 node2 crmd[8320]:   notice: Transition 29 (Complete=12, Pending=0, Fired=0, Skipped=2, Incomplete=26, Source=/var/lib/pacemaker/pengine/pe-input-3801.bz2): Stopped

Can anyone suggest why this is happening, and a fix?

by Gaoa Bueao 21.09.2018 / 09:45

1 answer


Removing the location preferences for the DRBD master helped solve the problem. (With those constraints in place, Pacemaker promoted DRBD on node1 as soon as it came back online, before it had reconnected to node2, so each node generated its own current UUID, which is exactly the split-brain seen in the logs above.) Now when node1 recovers it stays secondary and connects to the primary on node2 without a split-brain. If I want to, I can move the resources to node1 manually.

Lines removed from the crm config:

location cli-prefer-ms-drbd0 ms-drbd0 role=Started inf: node1
location ms-drbd0-master-on-node1 ms-drbd0 \
rule $role=master 100: #uname eq node1
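If I then want the resources back on node1, I move them by hand and drop the temporary constraint afterwards (crmsh syntax; note that move creates a new cli-prefer location constraint, so it has to be cleared again to avoid re-introducing the original problem):

crm resource move pg-group node1
# once everything is running on node1, remove the temporary constraint
crm resource unmove pg-group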
by 21.09.2018 / 14:24