
The system runs Ubuntu 16.04.3 LTS with three 1 Gbit NICs: one on-board and two Intel PCIe NICs. The two Intel NICs are bonded (bond0) in mode 4 (802.3ad/LACP). The switch is configured for LACP on these two ports. Here is the network configuration:

cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto enp0s31f6
#iface enp0s31f6 inet dhcp
iface enp0s31f6 inet static
        mtu 9000
        address 192.168.x.x
        netmask 255.255.x.0
        network 192.168.x.0
        gateway 192.168.x.1
        dns-nameservers 192.168.x.x
auto enp3s0
iface enp3s0 inet manual
bond-master bond0
auto enp4s0
iface enp4s0 inet manual
bond-master bond0
auto bond0
iface bond0 inet static
        mtu 9000
        address 192.168.x.x
        netmask 255.255.x.0
        network 192.168.x.0
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves none
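One variation I am considering (not yet tested; the 200 ms values are illustrative, chosen as two miimon intervals) is adding up/down delays so that a sub-second flap does not immediately remove a slave from the aggregate:

```text
# Hypothetical addition to the bond0 stanza above: delay reacting to
# link state changes by 200 ms (should be a multiple of bond-miimon = 100)
        bond-downdelay 200
        bond-updelay 200
```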

This configuration generally runs without errors. But when the network load is high (for instance, while copying 100-200 GB), the following errors appear in /var/log/syslog:

Feb 14 17:20:02 ubuntu1 kernel: [29601.287684] e1000e: enp3s0 NIC Link is Down
Feb 14 17:20:02 ubuntu1 kernel: [29601.287993] e1000e 0000:03:00.0 enp3s0: speed changed to 0 for port enp3s0
Feb 14 17:20:02 ubuntu1 kernel: [29601.379193] bond0: link status definitely down for interface enp3s0, disabling it
Feb 14 17:20:02 ubuntu1 kernel: [29601.379199] bond0: first active interface up!
Feb 14 17:20:04 ubuntu1 kernel: [29603.064712] e1000e: enp3s0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Feb 14 17:20:04 ubuntu1 kernel: [29603.079162] bond0: link status definitely up for interface enp3s0, 1000 Mbps full duplex

Is this a known problem? The failed interface comes back up after a few seconds, and the problem doesn't occur very often.
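As a side note, a quick way to see how often this happens is to count the link-down events in syslog. A minimal sketch (the sample lines mirror the output above; on the real box, point LOG at /var/log/syslog instead of the temp file):

```shell
# Count link-flap events from a syslog excerpt.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Feb 14 17:20:02 ubuntu1 kernel: [29601.287684] e1000e: enp3s0 NIC Link is Down
Feb 14 17:20:04 ubuntu1 kernel: [29603.064712] e1000e: enp3s0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
EOF
FLAPS=$(grep -c 'NIC Link is Down' "$LOG")
echo "link-down events: $FLAPS"   # prints: link-down events: 1
rm -f "$LOG"
```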

In /proc/net/bonding/bond0 I can see that mode 4 was correctly recognized:

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
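To confirm that both slaves actually joined the same LACP aggregator, one can compare the per-slave Aggregator ID lines. A sketch against a sample of that /proc output (on the real box, read /proc/net/bonding/bond0 directly; the sample values are assumptions):

```shell
# Check how many distinct aggregator IDs the slaves report.
# With a healthy 802.3ad bond this should be exactly one.
PROC=$(mktemp)
cat > "$PROC" <<'EOF'
Slave Interface: enp3s0
Aggregator ID: 1
Slave Interface: enp4s0
Aggregator ID: 1
EOF
UNIQ=$(grep '^Aggregator ID' "$PROC" | sort -u | wc -l)
echo "distinct aggregator IDs: $UNIQ"
rm -f "$PROC"
```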

I have also tried listing the interface names in bond-slaves instead of none. But in that case ifenslave hung while the network services were restarting. I then found a recommendation that with "none" bond0 comes up without hanging.
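For reference, the variant that hung looked roughly like this (my reconstruction; only the bond-slaves line differs from the working configuration above):

```text
auto bond0
iface bond0 inet static
        mtu 9000
        address 192.168.x.x
        netmask 255.255.x.0
        network 192.168.x.0
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves enp3s0 enp4s0
```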

Any ideas?
