Slow NGINX SSL termination


I'm using NGINX as a reverse web proxy in front of an upstream IIS web server. I'm proxy_pass'ing to an https binding on the IIS server, so I understand SSL is being encrypted/decrypted twice, but it shouldn't be as slow as it is.

NGINX version 1.4.4

OpenSSL version 1.0.1f

Things I've tried:

  • Tweaking the SSL cipher list
  • Recompiling NGINX with debugging and upgrading to the latest OpenSSL
  • Inspecting the access/error/debug log output
  • Playing around with various NGINX directives

Using ApacheBench for testing:

Requests per second:

  • HTTP (through nginx) = 3312 RPS
  • HTTPS (through nginx) = 273 RPS <--- ???
  • HTTP (direct to backend) = 4237 RPS
  • HTTPS (direct to backend) = 1349 RPS

Sample ApacheBench output:

abs -c 100 -n 1000 https://nginxtest1.mydomain.com/
Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            443
SSL/TLS Protocol:       TLSv1,ECDHE-RSA-AES256-SHA,2048,256
Document Path:          /
Document Length:        659 bytes
Concurrency Level:      100
Time taken for tests:   3.493 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      917000 bytes
HTML transferred:       659000 bytes
Requests per second:    286.27 [#/sec] (mean) <-----?????
Time per request:       349.320 [ms] (mean)
Time per request:       3.493 [ms] (mean, across all concurrent requests)
Transfer rate:          256.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       39  149  48.4    148     340
Processing:    47  180 179.2    163    3156
Waiting:       25  133 182.8    107    3130
Total:        145  330 181.7    320    3213

Percentage of the requests served within a certain time (ms)
  50%    320
  66%    331
  75%    339
  80%    355
  90%    431
  95%    502
  98%    529
  99%    533
 100%   3213 (longest request)

nginx.conf:

worker_processes  auto;

events {
    worker_connections  1024;
    debug_connection 192.168.2.98;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;
    lingering_time 240;
    client_max_body_size 100m;

    ssl_session_cache       shared:SSL:10m;

    include /usr/local/nginx/conf/srvnj04.conf;

    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}

srvnj04.conf:

server {

   listen       80;
   server_name nginxtest1.mydomain.com;

     log_format custom4 '$remote_addr '
                        'Conn: $connection '                 #connection serial number
                        'Conn reqs: $connection_requests'    #the current number of requests made through a connection
                        '$status '
                        '$http_referer '
                        #'$body_bytes_sent '
                        '$request '
                        #'"$http_user_agent" '
                        'Processing: $request_time '
                        'Response: $upstream_response_time ';
                        #'$bytes_sent '
                        #'$request_length';


   access_log /var/log/nginx/srvnj04_access.log custom4;
   error_log /var/log/nginx/srvnj04_error.log debug;

    ssl                  off;

    location / {

            #empty_gif;
            proxy_pass http://nginxtest2.appsrv008.mydomain.com;

    }
}

upstream https_backend {

    server 192.168.2.4:444;

    #keepalive 32;
    keepalive 128;
}


server {

    listen       443 ssl so_keepalive=0h:5m:0;
    server_name  nginxtest1.mydomain.com;
    keepalive_timeout 54;
    keepalive_requests 128;

    error_log /var/log/nginx/nginx_debug_srvnj04.log debug;

    ssl                  on;

    ssl_certificate ssl/star_mydomain_net_sol_CA_srvnj04b.cer;
    ssl_certificate_key ssl/star_mydomain_net_sol.key;
    ssl_dhparam ssl/dhparam.pem;

    ssl_protocols  SSLv3 TLSv1.2 TLSv1 TLSv1.1;

    ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;

    ssl_prefer_server_ciphers   on;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        #proxy_pass https://nginxtest1.mydomain.com;
        proxy_pass https://https_backend;
    }
}

I've been digging into this for about a week; any insight into what might be wrong would be greatly appreciated.

As requested, full AB output:

HTTPS

# abs -c 100 -n 1000 https://nginxtest1.mydomain.com/
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            443
SSL/TLS Protocol:       TLSv1,ECDHE-RSA-AES256-SHA,2048,256

Document Path:          /
Document Length:        334 bytes

Concurrency Level:      100
Time taken for tests:   3.458 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
Total transferred:      503000 bytes
HTML transferred:       334000 bytes
Requests per second:    289.17 [#/sec] (mean)
Time per request:       345.820 [ms] (mean)
Time per request:       3.458 [ms] (mean, across all concurrent requests)
Transfer rate:          142.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       54  186  73.7    174     509
Processing:    25  143  35.8    143     227
Waiting:       22   79  25.7     80     164
Total:         91  329  80.5    316     663

Percentage of the requests served within a certain time (ms)
  50%    316
  66%    318
  75%    323
  80%    334
  90%    404
  95%    542
  98%    568
  99%    595
 100%    663 (longest request)

HTTP

# abs -c 100 -n 1000 http://nginxtest1.mydomain.com/
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            80

Document Path:          /
Document Length:        659 bytes

Concurrency Level:      100
Time taken for tests:   0.772 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      917000 bytes
HTML transferred:       659000 bytes
Requests per second:    1295.26 [#/sec] (mean)
Time per request:       77.204 [ms] (mean)
Time per request:       0.772 [ms] (mean, across all concurrent requests)
Transfer rate:          1159.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       2
Processing:    33   68  27.7     53     296
Waiting:       12   65  28.4     53     145
Total:         34   69  27.7     54     297

Percentage of the requests served within a certain time (ms)
  50%     54
  66%     86
  75%     94
  80%     97
  90%    103
  95%    108
  98%    113
  99%    122
 100%    297 (longest request)
    
by arthur 11.02.2014 / 01:15

1 answer


I understand SSL is being encrypted/decrypted twice, but it shouldn't be as slow as it is.

Why not? Note that it is not just double encryption/decryption; it also adds one more TCP + TLS handshake per request. To minimize the handshake overhead you should use keepalive connections: link
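
Roughly, with the https_backend upstream you already have defined, that would look something like this (an untested sketch, not a drop-in fix):

    location / {
        proxy_pass https://https_backend;

        # Upstream keepalive only takes effect over HTTP/1.1 with the
        # Connection header cleared; without these two lines nginx opens
        # (and TLS-handshakes) a fresh backend connection for every request.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

On the benchmark side, ab reuses client connections only when run with -k, so without it every request also pays a full client-side TCP + TLS handshake.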

But in general, what you're doing is bad practice; you should use an IPsec tunnel instead.

by 11.02.2014 / 21:09
