kubectl cluster-info returns 502 Bad Gateway


I used `juju deploy canonical-kubernetes` to deploy a Kubernetes cluster. But when I run `./kubectl cluster-info` as described in the Canonical Distribution of Kubernetes charm documentation, I get the error below:

Error from server: an error on the server ("<html>\r\n<head><title>502
Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center>
<h1>502           Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.0
 (Ubuntu)</center>\r\n</body>\r\n</html>") has prevented the request from succeeding

Output of `juju status`:

MODEL    CONTROLLER  CLOUD/REGION         VERSION
default  lxd-test    localhost/localhost  2.0-rc3

APP                    VERSION  STATUS       SCALE  CHARM                  STORE       REV  OS      NOTES
easyrsa                3.0.1    active           1  easyrsa                jujucharms    2  ubuntu  
elasticsearch                   active           2  elasticsearch          jujucharms   19  ubuntu  
etcd                   2.2.5    active           3  etcd                   jujucharms   13  ubuntu  
filebeat                        active           4  filebeat               jujucharms    5  ubuntu  
flannel                0.6.1    waiting          4  flannel                jujucharms    3  ubuntu  
kibana                          active           1  kibana                 jujucharms   15  ubuntu  
kubeapi-load-balancer  1.10.0   active           1  kubeapi-load-balancer  jujucharms    2  ubuntu  exposed
kubernetes-master      1.4.0    maintenance      1  kubernetes-master      jujucharms    3  ubuntu  
kubernetes-worker      1.4.0    waiting          3  kubernetes-worker      jujucharms    3  ubuntu  exposed
topbeat                         active           3  topbeat                jujucharms    5  ubuntu  

UNIT                      WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS            MESSAGE
easyrsa/0*                active       idle       0        10.181.160.79                    Certificate Authority connected.
elasticsearch/0*          active       idle       1        10.181.160.62   9200/tcp         Ready
elasticsearch/1           active       idle       2        10.181.160.72   9200/tcp         Ready
etcd/0*                   active       idle       3        10.181.160.41   2379/tcp         Healthy with 3 known peers. (leader)
etcd/1                    active       idle       4        10.181.160.135  2379/tcp         Healthy with 3 known peers.
etcd/2                    active       idle       5        10.181.160.204  2379/tcp         Healthy with 3 known peers.
kibana/0*                 active       idle       6        10.181.160.54   80/tcp,9200/tcp  ready
kubeapi-load-balancer/0*  active       idle       7        10.181.160.42   443/tcp          Loadbalancer ready.
kubernetes-master/0*      maintenance  idle       8        10.181.160.208                   Rendering authentication templates.
  filebeat/0              active       idle                10.181.160.208                   Filebeat ready.
  flannel/0*              waiting      idle                10.181.160.208                   Flannel is starting up.
kubernetes-worker/0*      waiting      idle       9        10.181.160.94                    Waiting for cluster-manager to initiate start.
  filebeat/1*             active       idle                10.181.160.94                    Filebeat ready.
  flannel/1               waiting      idle                10.181.160.94                    Flannel is starting up.
  topbeat/0               active       idle                10.181.160.94                    Topbeat ready.
kubernetes-worker/1       waiting      idle       10       10.181.160.95                    Waiting for cluster-manager to initiate start.
  filebeat/2              active       idle                10.181.160.95                    Filebeat ready.
  flannel/2               waiting      idle                10.181.160.95                    Flannel is starting up.
  topbeat/1*              active       executing           10.181.160.95                    (update-status) Topbeat ready.
kubernetes-worker/2       waiting      idle       11       10.181.160.148                   Waiting for cluster-manager to initiate start.
  filebeat/3              active       idle                10.181.160.148                   Filebeat ready.
  flannel/3               waiting      idle                10.181.160.148                   Flannel is starting up.
  topbeat/2               active       idle                10.181.160.148                   Topbeat ready.

MACHINE  STATE    DNS             INS-ID          SERIES  AZ
0        started  10.181.160.79   juju-23ce86-0   xenial  
1        started  10.181.160.62   juju-23ce86-1   trusty  
2        started  10.181.160.72   juju-23ce86-2   trusty  
3        started  10.181.160.41   juju-23ce86-3   xenial  
4        started  10.181.160.135  juju-23ce86-4   xenial  
5        started  10.181.160.204  juju-23ce86-5   xenial  
6        started  10.181.160.54   juju-23ce86-6   trusty  
7        started  10.181.160.42   juju-23ce86-7   xenial  
8        started  10.181.160.208  juju-23ce86-8   xenial  
9        started  10.181.160.94   juju-23ce86-9   xenial  
10       started  10.181.160.95   juju-23ce86-10  xenial  
11       started  10.181.160.148  juju-23ce86-11  xenial  

RELATION           PROVIDES               CONSUMES               TYPE
certificates       easyrsa                kubeapi-load-balancer  regular
certificates       easyrsa                kubernetes-master      regular
certificates       easyrsa                kubernetes-worker      regular
peer               elasticsearch          elasticsearch          peer
elasticsearch      elasticsearch          filebeat               regular
rest               elasticsearch          kibana                 regular
elasticsearch      elasticsearch          topbeat                regular
cluster            etcd                   etcd                   peer
etcd               etcd                   flannel                regular
etcd               etcd                   kubernetes-master      regular
juju-info          filebeat               kubernetes-master      regular
juju-info          filebeat               kubernetes-worker      regular
sdn-plugin         flannel                kubernetes-master      regular
sdn-plugin         flannel                kubernetes-worker      regular
loadbalancer       kubeapi-load-balancer  kubernetes-master      regular
kube-api-endpoint  kubeapi-load-balancer  kubernetes-worker      regular
beats-host         kubernetes-master      filebeat               subordinate
host               kubernetes-master      flannel                subordinate
kube-dns           kubernetes-master      kubernetes-worker      regular
beats-host         kubernetes-worker      filebeat               subordinate
host               kubernetes-worker      flannel                subordinate
beats-host         kubernetes-worker      topbeat                subordinate
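The 502 page is generated by nginx on the kubeapi-load-balancer unit, which suggests the load balancer itself is up but cannot reach the kube-apiserver behind it (consistent with kubernetes-master still being in `maintenance`). A diagnostic sketch; `10.181.160.42` is kubeapi-load-balancer/0's public address from the status output above, and the `kube-apiserver` service name is an assumption for this charm version:

```shell
# Hit the load balancer directly; a 502 here confirms nginx is running
# but its apiserver backend is not responding.
curl -k https://10.181.160.42/

# Check whether the apiserver is actually running on the master unit
# (service name assumed; list units with `systemctl list-units` if unsure):
juju ssh kubernetes-master/0 -- sudo systemctl status kube-apiserver
```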
    
asked by fkpwolf 11.10.2016 / 06:38

1 answer


This appears to be because you are deploying Kubernetes on LXD. According to the README for the Canonical Distribution of Kubernetes:

  

kubernetes-master, kubernetes-worker, kubeapi-load-balancer, and etcd are not supported on LXD at this time.

This is a limitation between Docker and LXD, one we hope to see resolved soon. In the meantime, those components need to run in at least one VM.

You can do this manually alongside LXD: deploy the rest of the components on LXD, then manually launch a few KVM instances on the machine for those components.

I will try to put together a clean set of instructions for this and update this answer with them.
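Pending those instructions, a rough sketch of the manual approach described above: deploy the unsupported charms onto VM machines instead of LXD containers, using Juju 2.0 placement. The machine numbers are illustrative, and this assumes your provider can supply KVM or other VM machines:

```shell
# Add VM machines for the components that cannot run in LXD
# (the machine numbers Juju returns will vary):
juju add-machine --series xenial    # e.g. machine 12, for etcd
juju add-machine --series xenial    # e.g. machine 13, for the master
juju add-machine --series xenial    # e.g. machine 14, for a worker
juju add-machine --series xenial    # e.g. machine 15, for the load balancer

# Place the affected charms on those machines:
juju deploy etcd --to 12
juju deploy kubernetes-master --to 13
juju deploy kubernetes-worker --to 14
juju deploy kubeapi-load-balancer --to 15
```

The remaining charms (easyrsa, flannel, the beats, etc.) can stay in LXD containers as in the original bundle; you then add the relations between the pieces as the bundle does.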

    
answered by Marco Ceppi 13.10.2016 / 14:03
