Docker: nginx-proxy through an openvpn tunnel


I am trying to use a DigitalOcean VPS as an openVPN server to access services (for example, nextcloud) hosted on my home network through subdomains (for example, nextcloud.example.com).

I have set up the following:

  • [working] kylemanna/docker-openvpn on the DigitalOcean VPS
  • [working] Connected my home pfSense router as a VPN client to the DigitalOcean VPS
  • [working] Set up the nextcloud service on my home network
  • [working] When connected to the VPN, I can ping between devices and also reach the nextcloud service via its internal IP
  • [not working] jwilder/nginx-proxy to route nextcloud.example.com through the Docker VPN tunnel to the internal nextcloud IP

I tried adding a virtual_host file for nextcloud.example.com so that nginx-proxy routes requests to port 3000 of the openvpn container, and then, inside the openvpn container, using iptables to forward all requests on port 3000 to the internal nextcloud IP.

I would really appreciate any help, as, to be honest, I am stuck here.

kylemanna/openvpn - iptables forwarding configuration

user@Debianwebhost:~$ docker exec -it vpn bash
bash-4.4# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:3000 to:192.168.0.99:80
DNAT       udp  --  anywhere             anywhere             udp dpt:3000 to:192.168.0.99:80
DNAT       udp  --  anywhere             anywhere             udp dpt:3000 to:192.168.0.99:80

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       tcp  --  anywhere             192.168.0.99         tcp dpt:http to:172.17.0.2:3000
SNAT       udp  --  anywhere             192.168.0.99         udp dpt:http to:172.17.0.2:3000
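For reference, the NAT rules listed above could have been created with commands along these lines. This is a hedged reconstruction inferred from the `iptables -t nat -L` output, since the question does not show the original commands:

```shell
# Inside the openvpn container: DNAT traffic arriving on port 3000
# to the internal nextcloud host (reconstructed from the output above)
iptables -t nat -A PREROUTING -p tcp --dport 3000 -j DNAT --to-destination 192.168.0.99:80
iptables -t nat -A PREROUTING -p udp --dport 3000 -j DNAT --to-destination 192.168.0.99:80

# SNAT the forwarded traffic so replies return through this container
# (172.17.0.2 is the container's IP on the default docker bridge)
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000
iptables -t nat -A POSTROUTING -p udp -d 192.168.0.99 --dport 80 -j SNAT --to-source 172.17.0.2:3000
```

Note that the UDP PREROUTING rule appears twice in the listing, which suggests it was added twice; iptables does not deduplicate appended rules.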

nginx-proxy virtual host configuration

user@Debianwebhost:/etc/nginx/vhost.d$ cat nextcloud.example.com
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connect with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}

nginx-proxy nginx.conf

user@Debianwebhost:/etc/nginx$ cat nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

nginx-proxy default.conf

user@Debianwebhost:/etc/nginx/conf.d$ cat default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver [hidden ips, but there are 2 of them];
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
# nextcloud.example.com
upstream nextcloud.example.com {
                                ## Can be connect with "bridge" network
                        # vpn
                        server 172.17.0.2:3000;
}
server {
        server_name nextcloud.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        location / {
                proxy_pass http://nextcloud.example.com;
        }
}
    
by Svarto 25.02.2018 / 13:41

1 answer


I found a solution for this; basically, I had to add

ip route add 192.168.0.0/24 via 172.19.0.50 

to tell the VPS that any request to 192.168.0.99 (my internal network) needs to be routed through the docker container running the VPN server (172.19.0.50).
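A quick way to verify that the new route is in effect is to ask the kernel which path it would pick for the internal host (using the addresses from the answer above):

```shell
# Install the route: send the home subnet via the VPN container
ip route add 192.168.0.0/24 via 172.19.0.50

# Verify: the kernel should report the chosen gateway for the internal host;
# once the route is installed, the output should include "via 172.19.0.50"
ip route get 192.168.0.99
```

Note that a route added with `ip route add` is not persistent across reboots; how to make it permanent depends on the distribution's network configuration.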

Once the request enters the VPN server's docker container, it knows what to do with it, since I had already specified the following in the pfSense client config (/etc/openvpn/ccd/client) to make the VPN aware that these IPs should be routed through that client:

iroute 192.168.0.0 255.255.255.0

In addition, I also had to specify the following in the openVPN configuration (/etc/openvpn/openvpn.conf):

### Route Configurations Below
route 192.168.254.0 255.255.255.0
route 192.168.0.0 255.255.255.0

### Push Configurations Below
push "route 192.168.0.0 255.255.255.0"

Then, of course, open the necessary firewall ports.
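As a sketch of what "opening the firewalls" might involve on the VPS side (example rules only; interface names, chain policies, and the exact ports depend on your setup):

```shell
# Make sure the VPS kernel forwards packets between interfaces at all
sysctl -w net.ipv4.ip_forward=1

# Allow forwarding to and from the home subnet reached through the VPN
# container (adjust or narrow these to match your actual firewall policy)
iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
iptables -A FORWARD -s 192.168.0.0/24 -j ACCEPT
```

On the pfSense side, corresponding pass rules on the OpenVPN interface are configured in the web UI rather than with iptables.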

    
by 13.03.2018 / 06:46