I'm using Ansible + Vagrant to build my infrastructure, or at least a small simulation of what I want. It installs PostgreSQL and creates an .ssh directory to hold a different key pair for each host.
This is my project structure:
.
├── ansible.cfg
├── cluster_hosts
├── group_vars
│   ├── host_master
│   ├── host_pgpool
│   ├── host_slave1
│   └── postgresql
├── roles
│   ├── postgresql
│   │   ├── files
│   │   ├── handlers
│   │   └── tasks
│   │       └── main.yml
│   └── ssh_agent
│       └── tasks
│           └── main.yml
└── site.yml
This is the cluster_hosts inventory:
host_master ansible_ssh_host=192.168.1.10 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_slave1 ansible_ssh_host=192.168.1.11 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_slave2 ansible_ssh_host=192.168.1.12 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_pgpool ansible_ssh_host=192.168.1.13 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[ssh]
host_master
host_pgpool
host_slave1
[pg_pool]
host_pgpool
[database]
host_master
host_pgpool
host_slave1
host_slave2
These are my group_vars files:
group_vars/host_master:

known_hosts:
  - 192.168.1.11
  - 192.168.1.12

group_vars/host_pgpool:

known_hosts:
  - 192.168.1.11
  - 192.168.1.12

group_vars/host_slave1:

known_hosts:
  - 192.168.1.12
And here is my site.yml:
---
# The main playbook to deploy the cluster

# setup database
- hosts: database
  sudo: True
  tags:
    - setup_db
  roles:
    - postgresql

# setup ssh
- hosts: all
  sudo: True
  tags:
    - setup_ssh
  roles:
    - ssh_agent
And here is the ssh_agent role:
---
- name: Install sshpass
  apt: name={{ item }} state=present
  with_items:
    - sshpass
    - rsync

- name: Create ssh directory
  sudo_user: postgres
  command: mkdir -p /var/lib/postgresql/.ssh/ creates=/var/lib/postgresql/.ssh/

- name: Generate known hosts
  sudo_user: postgres
  shell: ssh-keyscan -t rsa {{ item }} >> /var/lib/postgresql/.ssh/known_hosts
  with_items:
    - "{{ known_hosts }}"

- name: Generate id_rsa key
  sudo_user: postgres
  command: ssh-keygen -t rsa -N "" -C "" -f /var/lib/postgresql/.ssh/id_rsa

- name: Add authorized_keys
  command: sshpass -p postgres ssh-copy-id -i /var/lib/postgresql/.ssh/id_rsa.pub postgres@{{ item }}
  sudo_user: postgres
  with_items:
    - "{{ known_hosts }}"

- name: Owner postgresql
  command: chown postgres:postgres /var/lib/postgresql/.ssh/ -R
OK, now when I run:
ansible-playbook -i cluster_hosts site.yml --tags setup_ssh
I get an error in the "Generate known hosts" task:
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [host_pgpool]
ok: [host_slave2]
ok: [host_slave1]
ok: [host_master]
TASK: [ssh_agent | Install sshpass] *******************************************
ok: [host_slave1] => (item=sshpass,rsync)
ok: [host_master] => (item=sshpass,rsync)
ok: [host_pgpool] => (item=sshpass,rsync)
ok: [host_slave2] => (item=sshpass,rsync)
TASK: [ssh_agent | Create ssh directory] **************************************
skipping: [host_master]
skipping: [host_slave2]
skipping: [host_slave1]
skipping: [host_pgpool]
TASK: [ssh_agent | Generate known hosts] **************************************
fatal: [host_slave1] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_master] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_slave2] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_pgpool] => One or more undefined variables: 'known_hosts' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/robe/site.retry
host_master : ok=2 changed=0 unreachable=1 failed=0
host_pgpool : ok=2 changed=0 unreachable=1 failed=0
host_slave1 : ok=2 changed=0 unreachable=1 failed=0
host_slave2 : ok=2 changed=0 unreachable=1 failed=0
I don't understand why I get this error, since the variable is declared in the group_vars files (host_master, host_pgpool, host_slave1).
Is the YAML syntax wrong? I thought that might be the problem, but it looks correct to me.
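To narrow this down, here is a minimal debug play I could run against the same inventory to see what known_hosts resolves to on each host (the var_check.yml file name is made up; this is only a sketch, not part of the project):

---
# var_check.yml (hypothetical helper, only for inspecting variable resolution)
# Prints the value of known_hosts as each host sees it.
- hosts: all
  gather_facts: False
  tasks:
    - name: Show known_hosts as resolved for this host
      debug: var=known_hosts

It would be run the same way as the main playbook, e.g. ansible-playbook -i cluster_hosts var_check.yml.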