I bonded two NICs (Intel I-350) on CentOS 6.4. The configuration looks fine, but I am unable to ping any host or the switch on its subnet.
=== Status Bond0 ===
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 80
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:b9
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:ba
Slave queue ID: 0
=== Interfaces ===
bond0     Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:B9
          inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::225:90ff:fe95:cab9/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6162 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:369234 (360.5 KiB)

eth1      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:B9
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3106 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:185754 (181.4 KiB)
          Memory:dfb40000-dfb60000

eth2      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:BA
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3056 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:183480 (179.1 KiB)
          Memory:dfb20000-dfb40000
=== Message log when running ifup bond0 ===
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Setting MII monitoring interval to 80.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: setting mode to balance-tlb (5).
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Setting MII monitoring interval to 80.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: setting mode to balance-tlb (5).
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Adding slave eth1.
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device eth1
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: enslaving eth1 as an active interface with a down link.
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: Adding slave eth2.
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device eth2
Apr 3 11:01:52 HOSTNAME kernel: bonding: bond0: enslaving eth2 as an active interface with a down link.
Apr 3 11:01:52 HOSTNAME kernel: ADDRCONF(NETDEV_UP): bond0: link is not ready
Apr 3 11:01:52 HOSTNAME kernel: 8021q: adding VLAN 0 to HW filter on device bond0
Apr 3 11:01:55 HOSTNAME kernel: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Apr 3 11:01:55 HOSTNAME kernel: bond0: link status definitely up for interface eth1, 1000 Mbps full duplex.
Apr 3 11:01:55 HOSTNAME kernel: bonding: bond0: making interface eth1 the new active one.
Apr 3 11:01:55 HOSTNAME kernel: bonding: bond0: first active interface up!
Apr 3 11:01:55 HOSTNAME kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Apr 3 11:01:56 HOSTNAME kernel: igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Apr 3 11:01:56 HOSTNAME kernel: bond0: link status definitely up for interface eth2, 1000 Mbps full duplex.
Apr 3 11:01:58 HOSTNAME ntpd[2338]: Listening on interface #8 bond0, fe80::225:90ff:fe95:cab9#123 Enabled
Apr 3 11:01:58 HOSTNAME ntpd[2338]: Listening on interface #9 bond0, 192.168.100.2#123 Enabled
I found the problem. After I changed the bonding mode from 5 (balance-tlb) to 4 (802.3ad), it works now.
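For reference, on CentOS 6 the bonding mode is usually set through `BONDING_OPTS` in the bond's ifcfg file. A minimal sketch of the change described above (interface addresses and the `miimon=80` polling interval taken from the output earlier; the rest of the file is an assumed typical layout, not the poster's actual config):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
DEVICE=bond0
IPADDR=192.168.100.2
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
# mode=4 (802.3ad/LACP) instead of the previous mode=5 (balance-tlb);
# miimon=80 matches the MII polling interval shown above
BONDING_OPTS="mode=4 miimon=80"
```

Note that 802.3ad only works if the switch ports the NICs connect to are configured as an LACP link aggregation group; balance-tlb, by contrast, needs no switch support, so the original failure may also point to a switch-side configuration interaction.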
Tags: bonding