LXC container network speed problem


I am running OpenStack in LXC containers, and I found that the network inside my LXC container is very slow, while from the host it is very fast.

HOST

[root@ostack-infra-01 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:09--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   23.1MB/s   in 0.2s

2018-08-04 00:24:09 (23.1 MB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    0m0.209s
user    0m0.008s
sys     0m0.014s

LXC container on the same host

[root@ostack-infra-01 ~]# lxc-attach -n ostack-infra-01_neutron_server_container-fbf14420
[root@ostack-infra-01-neutron-server-container-fbf14420 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:32--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   43.4KB/s   in 1m 58s

2018-08-04 00:26:31 (37.3 KB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    1m59.121s
user    0m0.002s
sys     0m0.361s

I don't have any fancy configuration or any rate limit set on the network, and I have another host that works fine at full speed. What do you think is wrong here?

Kernel version: Linux ostack-infra-01 3.10.0-862.3.3.el7.x86_64 #1 SMP

CentOS 7.5

asked by Satish 04.08.2018 / 06:30

1 answer


Solution

The host machine had the following setting, which was flooding my dmesg with a large number of kernel error stack traces (7 = debug log level):

[root@lxc ~]# cat /proc/sys/kernel/printk
7   4   1   3

I changed it to:

[root@lxc ~]# cat /proc/sys/kernel/printk
3   4   1   3
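A sketch of how that change can be applied, assuming the standard `kernel.printk` sysctl (the drop-in filename below is an arbitrary choice, not from the original answer):

```shell
# Lower the console log level at runtime. The first field is
# console_loglevel: 3 = only errors and worse go to the console.
# The remaining fields (default message level, minimum, boot-time
# default) are left at their usual values.
sysctl -w kernel.printk="3 4 1 3"

# Persist the setting across reboots via a sysctl drop-in file
# (filename is an example).
echo 'kernel.printk = 3 4 1 3' > /etc/sysctl.d/99-printk.conf

# Verify the active value.
cat /proc/sys/kernel/printk
```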

Later I found that I had iptables rules with `--checksum-fill`, which generated many checksum errors and caused the flood of kernel error stack traces in dmesg.
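One way to locate and remove such rules (the `CHECKSUM` target lives in the mangle table; the example rule shown is the DHCP-checksum rule commonly added by virtualization tooling, not necessarily the exact rule on this host):

```shell
# List any rules using the CHECKSUM target.
iptables -t mangle -S | grep -i CHECKSUM

# Delete a matching rule by repeating its specification with -D,
# e.g. for the typical DHCP rule (adjust to whatever -S printed):
# iptables -t mangle -D POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill
```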

answered 09.08.2018 / 19:37