I have a server with 4 interfaces (eno1..4) bonded together. So far, so good.
With the VLANs in the configuration, when I start the networking service it returns an error:
$ service networking status
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2017-09-04 11:04:11 -03; 7min ago
Docs: man:interfaces(5)
Process: 1989 ExecStop=/sbin/ifdown -a --read-environment --exclude=lo (code=exited, status=0/SUCCESS)
Process: 2180 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 2175 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 2180 (code=exited, status=1/FAILURE)
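To get more detail than the status summary, I pull the full unit log for the current boot (plain systemd/journalctl usage, nothing specific to this setup):
$ journalctl -u networking.service -b --no-pager    # full log of the failed unit for this boot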
If I comment out auto bond0.20 / bond0.30 and restart the network, I get no error. But if I then run ifup bond0.20, I get an error:
$ ifup bond0.20
RTNETLINK answers: File exists
ifup: failed to bring up bond0.20
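"RTNETLINK answers: File exists" suggests that something ifup tries to create is already there, so I check the existing link and route state first (standard iproute2 commands, just a diagnostic sketch):
$ ip -d link show bond0.20      # does the VLAN interface already exist?
$ ip route show                 # is there already a default route that the stanza would add again?
$ sudo ip link delete bond0.20  # remove a stale VLAN interface before retrying ifup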
Here is my configuration:
# The loopback network interface
auto lo
iface lo inet loopback

# Bonding interfaces
allow-hotplug eno1
iface eno1 inet manual

allow-hotplug eno2
iface eno2 inet manual

allow-hotplug eno3
iface eno3 inet manual

allow-hotplug eno4
iface eno4 inet manual

# Main bonding interface
auto bond0
iface bond0 inet static
    address 10.10.0.1
    gateway 10.10.0.254
    netmask 255.255.255.0
    dns-nameservers 10.10.0.254
    dns-search mydomain.local
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate 1
    bond_xmit_hash_policy layer2+3
    bond-slaves eno1 eno2 eno3 eno4

auto bond0.20
iface bond0.20 inet static
    address 10.20.0.1
    gateway 10.20.0.254
    netmask 255.255.255.0

auto bond0.30
iface bond0.30 inet static
    address 10.30.0.1
    gateway 10.30.0.254
    netmask 255.255.255.0
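For completeness, this is roughly how I verify that VLAN and bonding support are present on Debian (assuming the stock vlan and ifenslave packages):
$ lsmod | grep 8021q       # 802.1Q VLAN kernel module loaded?
$ sudo modprobe 8021q      # load it if it is missing
$ dpkg -l vlan ifenslave   # packages providing the VLAN and bonding ifupdown hooks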
EDIT
My new configuration with bridges:
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate 1
    bond_xmit_hash_policy layer2+3

auto bond0.20
iface bond0.20 inet manual

auto br20
iface br20 inet static
    address 192.168.100.1
    netmask 255.255.255.0
    network 192.168.100.0
    bridge_ports bond0.20
    bridge_maxwait 5
    bridge_stp off
    bridge_fd 0

auto bond0.30
iface bond0.30 inet manual

auto br30
iface br30 inet static
    address 192.168.200.1
    netmask 255.255.255.0
    network 192.168.200.0
    bridge_ports bond0.30
    bridge_maxwait 5
    bridge_stp off
    bridge_fd 0
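Before restarting networking I sanity-check the new stanzas with ifupdown's own tooling (dry run only; br20 used as the example):
$ ifquery --state          # which interfaces ifupdown currently considers configured
$ sudo ifup --no-act br20  # print what ifup would do for br20 without applying it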
It comes up, but it still returns an error and the networking service status is not clean.
$ service networking status
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2017-09-04 20:08:38 -03; 29s ago
Docs: man:interfaces(5)
Process: 923 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 902 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 923 (code=exited, status=1/FAILURE)
Sep 04 20:08:38 alpha ifup[923]: + [ meta = meta ]
Sep 04 20:08:38 alpha ifup[923]: + exit 0
Sep 04 20:08:38 alpha ifup[923]: run-parts: executing /etc/network/if-up.d/ip
Sep 04 20:08:38 alpha ifup[923]: run-parts: executing /etc/network/if-up.d/openssh-server
Sep 04 20:08:38 alpha ifup[923]: run-parts: executing /etc/network/if-up.d/postfix
Sep 04 20:08:38 alpha ifup[923]: run-parts: executing /etc/network/if-up.d/upstart
Sep 04 20:08:38 alpha systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Sep 04 20:08:38 alpha systemd[1]: Failed to start Raise network interfaces.
Sep 04 20:08:38 alpha systemd[1]: networking.service: Unit entered failed state.
Sep 04 20:08:38 alpha systemd[1]: networking.service: Failed with result 'exit-code'.
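Despite the failed unit, the interfaces themselves do come up; this is how I look at the bond and bridge state (standard tools, assuming bridge-utils is installed):
$ cat /proc/net/bonding/bond0   # 802.3ad negotiation and slave status
$ brctl show                    # bridge membership (bridge-utils)
$ ip -d link show br20          # bridge and VLAN details via iproute2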