corosync is not in sync


Could you help me by showing how to form a cluster?

Corosync & Pacemaker are in use. There are two nodes, rac0 (IP 192.168.0.140) & rac1 (IP 192.168.0.142). They do not show up in each other's membership; they are not connected to each other.
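For reference, this is roughly how a two-node Corosync/Pacemaker cluster is formed with the pcs tooling, assuming a CentOS/RHEL 7-style setup, which the paths below suggest; the cluster name "raccluster" is just a placeholder (run on one node, after setting the hacluster password on both):

    sudo pcs cluster auth rac0 rac1 -u hacluster
    sudo pcs cluster setup --name raccluster rac0 rac1
    sudo pcs cluster start --all
    sudo pcs status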

[ryan@rac1 cluster]$ sudo pcs status corosync
[sudo] password for ryan:

Membership information
----------------------
    Nodeid      Votes Name
         2          1 rac1 (local)

==========================================================

[ryan@rac0 ~]$ sudo pcs status corosync
[sudo] password for ryan:

Membership information
----------------------
    Nodeid      Votes Name
         1          1 rac0 (local)

==========================================================

nodelist {
    node {
        ring0_addr: rac0
        nodeid: 1
    }
    node {
        ring0_addr: rac1
        nodeid: 2
    }
}
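If this is corosync 2.x with votequorum (an assumption; the rest of corosync.conf is not shown), a two-node cluster also needs a quorum section along these lines, since without two_node a single node can never reach quorum on its own:

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }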

==========================================================

[ryan@rac1 corosync]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.140 rac0
192.168.0.142 rac1

==========================================================

[ryan@rac0 ~]$ sudo cat /etc/hosts
[sudo] password for ryan:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.140 rac0
192.168.0.142 rac1

==========================================================
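Since the hosts files match on both nodes, name resolution looks fine. A reasonable next check (my suggestion, not part of the original post) is to ask corosync directly which members it currently sees on each node:

    sudo corosync-cmapctl | grep members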

[ryan@rac0 ~]$ sudo tail -20 /var/log/cluster/corosync.log
Mar 17 17:52:58 [1588] rac0 crmd: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped (900000ms)
Mar 17 17:52:58 [1588] rac0 crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Mar 17 17:52:58 [1588] rac0 crmd: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Mar 17 17:52:58 [1587] rac0 pengine: info: process_pe_message: Input has not changed since last time, not saving to disk
Mar 17 17:52:58 [1587] rac0 pengine: notice: cluster_status: We do not have quorum - fencing and resource management disabled
Mar 17 17:52:58 [1587] rac0 pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Mar 17 17:52:58 [1587] rac0 pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Mar 17 17:52:58 [1587] rac0 pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Mar 17 17:52:58 [1587] rac0 pengine: info: determine_online_status_fencing: Node rac0 is active
Mar 17 17:52:58 [1587] rac0 pengine: info: determine_online_status: Node rac0 is online
Mar 17 17:52:58 [1587] rac0 pengine: notice: stage6: Delaying fencing operations until there are resources to manage
Mar 17 17:52:58 [1587] rac0 pengine: warning: stage6: Node rac1 is unclean!
Mar 17 17:52:58 [1587] rac0 pengine: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Mar 17 17:52:58 [1587] rac0 pengine: warning: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-warn-8.bz2
Mar 17 17:52:58 [1587] rac0 pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Mar 17 17:52:58 [1588] rac0 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 17 17:52:58 [1588] rac0 crmd: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1426585978-13) derived from /var/lib/pacemaker/pengine/pe-warn-8.bz2
Mar 17 17:52:58 [1588] rac0 crmd: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-8.bz2): Complete
Mar 17 17:52:58 [1588] rac0 crmd: info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Mar 17 17:52:58 [1588] rac0 crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]

==========================================================
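The pengine errors above are a separate issue from the missing node: no STONITH device is configured. For a test cluster without fencing hardware they can be silenced as sketched below, though, as the log itself warns, disabling fencing is unsafe with shared data:

    sudo pcs property set stonith-enabled=false
    sudo pcs property set no-quorum-policy=ignore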

[ryan@rac1 cluster]$ sudo tail -20 corosync.log
[sudo] password for ryan:
Mar 17 18:08:37 [1345] rac1 pacemakerd: info: pcmk_quorum_notification: State of node rac0[1] is still unknown
Mar 17 18:08:37 [1320] rac1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 17 18:08:37 [1347] rac1 cib: info: crm_cs_flush: Sent 2 CPG messages (0 remaining, last=2901): OK (1)
Mar 17 18:08:37 [1347] rac1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=rac1/crmd/5800, version=0.4.8)
Mar 17 18:08:37 [1347] rac1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=rac1/crmd/5801, version=0.4.8)
Mar 17 18:08:38 [1352] rac1 crmd: info: pcmk_quorum_notification: Membership 5812: quorum still lost (1)
Mar 17 18:08:38 [1352] rac1 crmd: info: pcmk_quorum_notification: State of node rac0[1] is still unknown
Mar 17 18:08:38 [1320] rac1 corosync notice [TOTEM ] A new membership (192.168.0.142:5812) was formed. Members
Mar 17 18:08:38 [1352] rac1 crmd: info: do_update_node_cib: Node update for rac0 cancelled: no state, not seen yet
Mar 17 18:08:38 [1352] rac1 crmd: info: crm_cs_flush: Sent 0 CPG messages (1 remaining, last=1461): Try again (6)
Mar 17 18:08:38 [1347] rac1 cib: info: cib_process_request: Forwarding cib_modify operation for section nodes to master (origin=local/crmd/5804)
Mar 17 18:08:38 [1320] rac1 corosync notice [QUORUM] Members[1]: 2
Mar 17 18:08:38 [1347] rac1 cib: info: crm_cs_flush: Sent 0 CPG messages (1 remaining, last=2901): Try again (6)
Mar 17 18:08:38 [1345] rac1 pacemakerd: info: pcmk_quorum_notification: Membership 5812: quorum still lost (1)
Mar 17 18:08:38 [1347] rac1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/5805)
Mar 17 18:08:38 [1345] rac1 pacemakerd: info: pcmk_quorum_notification: State of node rac0[1] is still unknown
Mar 17 18:08:38 [1320] rac1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 17 18:08:38 [1347] rac1 cib: info: crm_cs_flush: Sent 2 CPG messages (0 remaining, last=2903): OK (1)
Mar 17 18:08:38 [1347] rac1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=rac1/crmd/5804, version=0.4.8)
Mar 17 18:08:38 [1347] rac1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=rac1/crmd/5805, version=0.4.8)
[ryan@rac1 cluster]$
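Each node forming a membership that contains only itself, plus the repeated CPG "Try again (6)" retries, often points to corosync traffic (UDP ports 5404/5405 by default) being blocked between the hosts. On CentOS/RHEL 7 with firewalld, a common fix is to open the high-availability service on both nodes (an assumption about the firewall setup here):

    sudo firewall-cmd --permanent --add-service=high-availability
    sudo firewall-cmd --reload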

by Ryan 17.03.2015 / 11:21

0 answers