ggRock Server NIC Teaming / Bonding / Link Aggregation

This article describes recommendations for bonding configuration between a ggRock server and network switches.
About
Bonding (also called NIC teaming or Link Aggregation) is a technique for combining multiple NICs into a single logical link to a network switch.
Combining multiple network interfaces in this way can have several benefits:
- Fault Tolerance
- Increased Performance
- Load Balancing
Supported Types of Bonding with ggRock
There are seven modes for bonding in Linux.
It is recommended to use only one of the two following bonding modes with ggRock:
IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP)
- Creates aggregation groups from NICs (network interfaces) that share the same speed and duplex settings
- Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification
- This mode provides both fault tolerance and load balancing.
Active-backup
- Only one NIC slave in the bond is active.
- A different slave becomes active if, and only if, the active slave fails.
- The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid confusing the network switch.
- This mode provides fault tolerance.
In general, if your Network Switch supports the LACP (IEEE 802.3ad) protocol, it is recommended to use the corresponding bonding mode (802.3ad).
Otherwise, you should generally use the active-backup mode.
Compatibility Concerns
We don’t recommend any other bonding modes: they have compatibility issues with various hardware (network switches and network cards) and can cause serious performance problems.
A basic comparison of all bonding modes can be found in Appendix 2.
IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP)
Requirements
- LACP (IEEE 802.3ad) protocol support in the network switch
- All network interfaces in the aggregation group have the same speed (see the check below)
- All network interfaces in the aggregation group work in full-duplex mode
- Single switch topology (all ggRock server network interfaces in the aggregation group are connected to a single network switch)
- Local network configuration (there are no router devices between the ggRock server and client PCs; they are on the same local network)
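You can confirm the speed and duplex requirements with ethtool. A minimal sketch, assuming the bond members are named eth0 and eth1 as in the sample configurations below:

    # Both interfaces should report the same speed (e.g. 1000Mb/s) and "Duplex: Full"
    ethtool eth0 | grep -E 'Speed|Duplex'
    ethtool eth1 | grep -E 'Speed|Duplex'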
Specific Settings
Transmit Hash Policy
(layer 2 | layer 2+3 | layer 3+4)
- This setting should be set on both the ggRock server and the network switch.
- The recommended option is layer 2+3 (you can verify the active policy with the snippet after this list).
- This algorithm places all traffic to a particular network peer on the same slave (network interface).
- This means that outgoing traffic from the ggRock server to multiple client PCs can be sent in parallel over multiple NICs, but all incoming traffic to the ggRock server will use only one network interface and will not get aggregated throughput.
- The layer 3+4 algorithm might allow traffic to a particular network peer to span multiple slaves in some cases, but this mode is not fully compliant with the 802.3ad standard. During testing, issues were found with this mode, typically resulting in the error tcp_sendpage() failure: -512.
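Once the bond is up, you can confirm which hash policy is actually in effect. A minimal sketch, assuming the bond interface is named bond0:

    # The bonding driver reports the active policy in procfs
    grep 'Transmit Hash Policy' /proc/net/bonding/bond0

    # Expected output for the recommended setting:
    # Transmit Hash Policy: layer2+3 (2)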
Link Monitoring
(mii | arp | none)
- This bonding driver option defines the method used for monitoring the link.
- The recommended option is mii (arp is not supported).
Miimon
(value in milliseconds)
- This is how often to monitor the link for failures.
- The recommended value is 100.
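The value in effect can be read back at runtime. A minimal sketch, assuming the bond interface is named bond0:

    # Prints the miimon interval in milliseconds (expected: 100)
    cat /sys/class/net/bond0/bonding/miimon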
Advantages of IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP) bonding
- Aggregated throughput for outgoing traffic from the ggRock server to multiple client PCs, which is the most important metric
- Fault Tolerance
Disadvantages of IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP) bonding
- No aggregated throughput for incoming traffic to the ggRock server
- No aggregated throughput for a single client PC. For example, if the ggRock server has 4x 1G NICs with Link Aggregation configured and a ggRock client PC has a 2.5G NIC, the maximum throughput on that client PC would still be only 1G (both incoming and outgoing).
Sample IEEE 802.3ad Dynamic link aggregation (802.3ad) (LACP) Configuration
For the purposes of this exercise, we will assume that the ggRock server has two network interfaces named eth0 and eth1, and that it has a network bridge configured for virtual machines.
Default configuration:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth0
    bridge_fd 0
    bridge_stp off

Configuration with 802.3ad bonding:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports bond0
    bridge_fd 0
    bridge_stp off
Network Switch Configuration
You have to configure 802.3ad link aggregation on your switch with the proper Transmit Hash Policy and MII monitoring settings (described above).
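After applying the configuration on both the server and the switch, you can check the bond state. A minimal sketch, assuming classic ifupdown networking (as in the sample configuration above) and a bond named bond0:

    # Apply the new /etc/network/interfaces configuration
    systemctl restart networking

    # Inspect the bond status reported by the kernel
    cat /proc/net/bonding/bond0

    # A healthy LACP bond reports lines similar to:
    # Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    # MII Status: up   (for the bond and for each slave)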
Active-Backup Bonding
Requirements
- None
Specific Settings
Miimon
(value in milliseconds)
- How often to monitor the link for failures
- The recommended value is 100
Advantages of Active-Backup Bonding
- Fault Tolerance
Disadvantages of Active-Backup Bonding
- No aggregated throughput
Sample Active-Backup Bonding Configuration
For the purposes of this exercise, we will assume that the ggRock server has two network interfaces named eth0 and eth1, and that it has a network bridge configured for virtual machines.
Default configuration:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth0
    bridge_fd 0
    bridge_stp off

Configuration with Active-Backup bonding:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-primary eth0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports bond0
    bridge_fd 0
    bridge_stp off
Network Switch Configuration
- None
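Even without any switch-side configuration, you can verify that failover works. A minimal sketch, assuming the bond is named bond0 with slaves eth0 and eth1 as configured above:

    # Check which slave is currently carrying traffic (expected: eth0, the primary)
    grep 'Currently Active Slave' /proc/net/bonding/bond0

    # Simulate a failure of the primary slave
    ip link set eth0 down

    # The bond should now report eth1 as the active slave
    grep 'Currently Active Slave' /proc/net/bonding/bond0

    # Restore the link when done; with bond-primary set, traffic returns to eth0
    ip link set eth0 up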
Conclusion
In general, bonding doesn’t provide aggregated throughput for a single client PC connection (a single iSCSI connection).
Bonding can, however, provide aggregated total throughput when multiple client PCs are connected at once.
It is also worth noting that bonding does not add any additional cost.
Appendix 1 - Single Client PC Performance Testing
Single client PC performance was tested in three configurations:
- Single 1G NIC
- 802.3ad (Two 1G NICs)
- Active-Backup (Two 1G NICs)
Appendix 2 - Bonding Mode Comparison (Advanced Users)
Notes on the bonding mode comparison:
- This applies to outgoing traffic from the ggRock server; most are not capable of doing this for incoming traffic.
- This is the case for most switches; please refer to your switch's documentation.
- This only pertains to certain situations.
- This requires certain ethtool support in the network device driver of the slave interfaces.
- The network device driver must support changing the hardware address while the device is open.