WordPress deployed with Juju into LXC fails


When I deploy the wordpress and mysql charms with Juju on my MAAS controller as follows:

juju deploy --to lxc:0 wordpress
juju deploy --to lxc:0 mysql
juju add-relation wordpress:db mysql:db --debug

the output shows something like

DEBUG juju.api api.go 492 API hostnames unchanged - not resolving

With a bit more patience, wordpress fails and `juju status` shows the following:

wordpress/0
 workload-status:
  current: error
  message: 'hook failed: "db-relation-changed" for mysql:db'

`juju status --format tabular` returned this for the units section:

[Units]     
ID          WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS    PUBLIC-ADDRESS  MESSAGE                                         
juju-gui/0  unknown        idle        1.24.0  0       8080/tcp HP22.rigonet.nl                                                 
mysql/0     unknown        idle        1.24.0  0/lxc/2          192.168.50.86                                                   
wordpress/0 error          idle        1.24.0  0/lxc/7 80/tcp   192.168.50.83   hook failed: "db-relation-changed" for mysql:db 

I ran `juju ssh wordpress/0` and looked at /var/log/juju/unit-wordpress-0.log, where I found:

2015-07-15 15:42:16 INFO worker.uniter.jujuc server.go:138 running hook tool "relation-get" ["password"]
2015-07-15 15:42:16 DEBUG worker.uniter.jujuc server.go:139 hook context id "wordpress/0-db-relation-changed-1875247148271195053"; dir "/var/lib/juju/agents/unit-wordpress-0/charm"
2015-07-15 15:42:16 INFO worker.uniter.jujuc server.go:138 running hook tool "relation-get" ["private-address"]
2015-07-15 15:42:16 DEBUG worker.uniter.jujuc server.go:139 hook context id "wordpress/0-db-relation-changed-1875247148271195053"; dir "/var/lib/juju/agents/unit-wordpress-0/charm"
2015-07-15 15:42:17 DEBUG juju.worker.leadership tracker.go:138 wordpress/0 renewing lease for wordpress leadership
2015-07-15 15:42:17 DEBUG juju.worker.leadership tracker.go:165 checking wordpress/0 for wordpress leadership
2015-07-15 15:42:17 DEBUG juju.worker.leadership tracker.go:180 wordpress/0 confirmed for wordpress leadership until 2015-07-15 15:43:17.202318193 +0000 UTC
2015-07-15 15:42:17 INFO juju.worker.leadership tracker.go:182 wordpress/0 will renew wordpress leadership at 2015-07-15 15:42:47.202318193 +0000 UTC
2015-07-15 15:42:26 INFO juju.worker.uniter.context context.go:534 handling reboot
2015-07-15 15:42:26 ERROR juju.worker.uniter.operation runhook.go:107 hook "db-relation-changed" failed: exit status 2
2015-07-15 15:42:26 INFO juju.worker.uniter modes.go:546 ModeAbide exiting
2015-07-15 15:42:26 INFO juju.worker.uniter modes.go:544 ModeHookError starting
2015-07-15 15:42:26 DEBUG juju.worker.uniter.filter filter.go:597 want resolved event
2015-07-15 15:42:26 DEBUG juju.worker.uniter.filter filter.go:591 want forced upgrade true
2015-07-15 15:42:26 DEBUG juju.worker.uniter.filter filter.go:727 no new charm event
2015-07-15 15:42:26 DEBUG juju.worker.uniter modes.go:31 [AGENT-STATUS] error: hook failed: "db-relation-changed"
2015-07-15 15:42:26 DEBUG juju.worker.leadership tracker.go:154 wordpress/0 got wait request for wordpress leadership loss
2015-07-15 15:42:26 DEBUG juju.worker.leadership tracker.go:248 wordpress/0 still has wordpress leadership

In machine-0-lxc-7.log I found:

2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "rsyslog"
2015-07-15 15:40:15 DEBUG juju.worker.rsyslog worker.go:105 starting rsyslog worker mode 1 for "machine-0-lxc-7" ""
2015-07-15 15:40:15 DEBUG juju.worker.logger logger.go:60 logger setup
2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "stateconverter"
2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "diskmanager"
2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "storageprovisioner-machine"
2015-07-15 15:40:15 DEBUG juju.network network.go:220 no lxc bridge addresses to filter for machine
2015-07-15 15:40:15 INFO juju.worker.machiner machiner.go:94 setting addresses for machine-0-lxc-7 to ["local-machine:127.0.0.1" "local-cloud:192.168.50.83" "local-machine:::1"]
2015-07-15 15:40:15 DEBUG juju.worker.proxyupdater proxyupdater.go:151 new proxy settings proxy.Settings{Http:"", Https:"", Ftp:"", NoProxy:""}
2015-07-15 15:40:15 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=DEBUG;unit=DEBUG"
2015-07-15 15:40:15 INFO juju.worker.diskmanager diskmanager.go:62 block devices changed: []
2015-07-15 15:40:15 DEBUG juju.network network.go:220 no lxc bridge addresses to filter for machine
2015-07-15 15:40:15 INFO juju.worker.apiaddressupdater apiaddressupdater.go:78 API addresses updated to [["HP22.rigonet.nl:17070" "192.168.50.16:17070" "127.0.0.1:17070" "[::1]:17070"]]
2015-07-15 15:40:15 DEBUG juju.worker.reboot reboot.go:82 Reboot worker got action: noop
2015-07-15 15:40:15 DEBUG juju.worker.rsyslog worker.go:213 making syslog connection for "juju-machine-0-lxc-7" to 192.168.50.16:6514
2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "networker"
2015-07-15 15:40:15 INFO juju.worker runner.go:261 start "authenticationworker"
2015-07-15 15:40:15 INFO juju.networker networker.go:163 networker is disabled - not starting on machine "machine-0-lxc-7"
2015-07-15 15:40:15 DEBUG juju.worker.proxyupdater proxyupdater.go:170 new apt proxy settings proxy.Settings{Http:"", Https:"", Ftp:"", NoProxy:""}

The only interesting thing I found in /var/log/syslog is:

Jul 15 15:41:22 HP22 kernel: [1280629.664746] type=1400 audit(1436974882.268:99): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/rpc_pipefs/" pid=7655 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="rw"
Jul 15 15:41:22 HP22 kernel: [1280629.664831] type=1400 audit(1436974882.268:100): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/rpc_pipefs/" pid=7655 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul 15 15:41:22 HP22 kernel: [1280629.687213] type=1400 audit(1436974882.288:101): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/rpc_pipefs/" pid=7668 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="rw"
Jul 15 15:41:22 HP22 kernel: [1280629.687302] type=1400 audit(1436974882.288:102): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/rpc_pipefs/" pid=7668 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul 15 15:41:22 HP22 kernel: [1280629.729821] type=1400 audit(1436974882.332:105): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/run/rpc_pipefs/" pid=7692 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="rw"
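The AppArmor denials above suggest the default LXC profile (`lxc-container-default`) is blocking the rpc_pipefs mount inside the container. As a hedged sketch of a possible workaround (paths and the container name `juju-machine-0-lxc-7` are my assumptions based on the stock Ubuntu lxc package layout, not verified), one could either switch the container to the mounting-enabled stock profile or add a mount rule to the default one:

```shell
# Option 1 (assumption: the per-container config lives under
# /var/lib/lxc/<name>/config): use the stock profile that allows mounting.
echo "lxc.aa_profile = lxc-container-default-with-mounting" | \
    sudo tee -a /var/lib/lxc/juju-machine-0-lxc-7/config

# Option 2: edit /etc/apparmor.d/lxc/lxc-default and add, inside the profile:
#   mount fstype=rpc_pipefs,
# then reload the LXC AppArmor profiles:
sudo apparmor_parser -r /etc/apparmor.d/lxc-containers
```

The container would need to be restarted for either change to take effect.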

On the MAAS controller I also have Webmin installed so I can quickly view my BIND DNS zones. I noticed that the host gets a static IP address from the DHCP server. The LXC containers on that machine get DHCP addresses within the range I defined for the cluster in MAAS, but the machine name (machine-0-lxc-7) is not registered in the DNS server. It strikes me as odd that the LXC containers do not show up in MAAS at all.
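To confirm whether it is only the DNS registration that is missing, the container list on the host can be compared with what the MAAS-managed BIND server answers. This is just a diagnostic sketch; the DNS server address 192.168.50.16 and the rigonet.nl zone are taken from my logs above and may not be the right ones to query:

```shell
# List the containers and their leased addresses as seen on the host.
sudo lxc-ls --fancy

# Query the BIND server directly: forward lookup of the container name,
# then a reverse lookup of the address juju reports for wordpress/0.
dig @192.168.50.16 machine-0-lxc-7.rigonet.nl
dig @192.168.50.16 -x 192.168.50.83
```

If the reverse lookup answers but the forward one does not, that would match what I see in Webmin.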

What can I do to debug this further and get it running?
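So far the only next steps I can think of are the standard Juju 1.x hook-debugging commands, not anything specific to this failure:

```shell
# Open an interactive (tmux) session on the unit; the next queued hook
# runs inside this session instead of automatically, so it can be
# stepped through by hand.
juju debug-hooks wordpress/0 db-relation-changed

# From another terminal: clear the error state and re-queue the failed
# hook, which then lands in the debug session above.
juju resolved --retry wordpress/0
```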

Regards, Joham

by Joham 16.07.2015 / 02:59

0 answers