What is the difference between running ab locally and remotely?

I was benchmarking my site with Apache ab and noticed large differences in response time between running ab on the server itself and running ab from a remote client box.

What is the main difference between running the ab command on the server and running it remotely? Is the extra time being spent in network transport?
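
For reference, the comparison was roughly the following (example.com is just a placeholder for the real site; ab is installed on both machines):

# On the web server itself (loopback, essentially no network transit)
[~]$ ab -n 1000 -c 10 http://localhost/

# From the remote client box (every request also pays the Internet round trip)
[~]$ ab -n 1000 -c 10 http://example.com/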

    
by Mickey Shine 14.09.2012 / 05:22

3 answers

Latency and network capacity

We wrote a good article about concurrency/load testing with Siege (which is very similar to ab), specifically covering local versus remote testing.

You can read the full version here:

link

Testing remote servers is almost pointless for a concurrency test (i.e. how many requests can be satisfied repeatedly): the immediate bottleneck is the network connection between the two machines. Latency and TCP/IP overheads are what make testing a remote site completely pointless; the slightest network congestion at any point between the two servers will immediately show up as reduced performance. So what really starts to come into play is how fast the TCP 3-way handshake can be completed – the server being tested could be serving a dynamic page or a static 0-byte file and you could see exactly the same rate of performance, because connectivity is the bottleneck.

We can show this using a simple ping. Our data centres are located in Manchester, United Kingdom, so we'll try pinging a server in the UK, then a server in the USA, and show the difference. Both servers are connected to the internet via 100 Mbit connections.

Ping from UK to UK

[~]$ ping www.bytemark.co.uk -c4
PING www.bytemark.co.uk (212.110.161.177) 56(84) bytes of data.
64 bytes from extapp-front.bytemark.co.uk (212.110.161.177): icmp_seq=1 ttl=57 time=2.86 ms
--- www.bytemark.co.uk ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 2.515/2.641/2.869/0.142 ms

Ping from UK to USA

[~]$ ping www.mediatemple.net -c 4
PING www.mediatemple.net (64.207.129.182) 56(84) bytes of data.
64 bytes from mediatemple.net (64.207.129.182): icmp_seq=1 ttl=49 time=158 ms
--- www.mediatemple.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 154.155/155.282/158.321/1.802 ms

You can immediately see the difference in performance. That single TCP/IP connection from the UK to the USA took 156 ms – 62 times longer than to a server in the UK. Which means that before you even try anything, the maximum throughput you can achieve with Siege is going to be around 6 transactions per second, due to latency alone.
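
As a quick back-of-the-envelope check of that figure, taking one ~156 ms round trip per transaction on a single connection:

[~]$ echo "scale=1; 1000/156" | bc
6.4

Roughly 6 transactions per second is the ceiling before the server has done any work at all.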

Let's put this to the test then…

[~]$ siege http://www.wiredtree.com/images/arrow.gif -c 1 -t 10S -b
** SIEGE 2.66
** Preparing 1 concurrent users for battle.
The server is now under siege...
Lifting the server siege...done.                                                                                                                                                                         
Transactions:                      50 hits
Availability:                 100.00 %
Elapsed time:                   9.89 secs
Data transferred:               0.00 MB
Response time:                  0.20 secs
Transaction rate:               5.06 trans/sec
Throughput:                     0.00 MB/sec
Concurrency:                    1.00
Successful transactions:          50
Failed transactions:               0
Longest transaction:            0.20
Shortest transaction:           0.19

Just under the predicted figure of 6 TPS. But unfortunately, this is always going to be the case: latency will always ruin a concurrency test, even if the remote server is capable of much more. Let's repeat the exact same test from a server in the USA to see how much latency really affected the result. First up, a quick ping:

[~]$ ping www.mediatemple.net -c 4
PING www.mediatemple.net (64.207.129.182) 56(84) bytes of data.
64 bytes from mediatemple.net (64.207.129.182): icmp_seq=1 ttl=52 time=62.8 ms
--- www.mediatemple.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3067ms
rtt min/avg/max/mdev = 62.872/62.922/62.946/0.029 ms

[~]$ siege http://mediatemple.net/_images/searchicon.png -c 1 -t 10S -b
** SIEGE 2.72
** Preparing 1 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                     73 hits
Availability:                 100.00 %
Elapsed time:                   9.62 secs
Data transferred:               0.22 MB
Response time:                  0.13 secs
Transaction rate:               7.59 trans/sec
Throughput:                     0.02 MB/sec
Concurrency:                    0.99
Successful transactions:          73
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.12

So there you have it: we've gone from about 5 to about 7.6 transactions per second, without any server-side changes, simply by benchmarking from a server closer to the site under test – showing how sensitive Siege is to network latency.

Siege is also going to be limited by the bandwidth available on your test server and on the remote server, because as you reach higher levels of throughput, the amount of content being downloaded goes up. In the example above the throughput was 0.02 MB/sec – a tiny 0.16 Mbps (megabits per second). But when you start to increase the number of concurrent users things change radically, and it is very easy to saturate the network connection – long before the server itself has reached its capacity.

So if the server you were testing from only had 20 Mbit of usable bandwidth, you would probably see a maximum of about 500 req/s on the 4 KB resource mentioned earlier in the full article.
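
A rough check of that estimate, assuming 20 Mbit/s of usable bandwidth and a 4 KB response body, ignoring HTTP and TCP overhead:

[~]$ echo "20 * 1000 / 8 / 4" | bc
625

Headers, ACKs and protocol overhead eat into that, which is why the practical figure lands nearer 500 req/s.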

Content extracted from

    
by 15.09.2012 / 23:57

Yes, the different network conditions are the cause. An HTTP request tends to require 2 round trips (for a very small request and response):

Client -> Server, SYN
Server -> Client, SYN/ACK
Client -> Server, ACK and HTTP request
Server -> Client, HTTP response

So ping your server and double that: that is roughly the time being added to each request, on average.

You can enable HTTP keep-alive with -k and take one of those round trips out of the equation, but it will still be slower than local requests due to latency.
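
A minimal sketch of checking this yourself, with example.com standing in for your own server:

# Measure the round-trip time the network adds
[~]$ ping -c 4 example.com

# Without keep-alive: roughly two round trips of overhead per request
[~]$ ab -n 100 -c 1 http://example.com/

# With keep-alive (-k): the connection is reused, so one of those round trips goes away
[~]$ ab -n 100 -c 1 -k http://example.com/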

    
by 14.09.2012 / 05:29

As you suggested, the difference comes from the Internet transit between the remote client and the web server.

That's why it is always good practice to benchmark in a way that simulates the user experience. What I do is run separate benchmarks based on where my visitors are located geographically, to find out how they actually experience the site. For example, if most of my visitors are from the USA, I spin up an EC2 instance there and run the benchmark from it.

Based on that, you can decide whether to deploy some kind of CDN, if necessary.

    
by 14.09.2012 / 05:32