You can increase bandwidth between the network and an ESXi host by setting up an aggregated link between the two.
Such aggregated links are commonly referred to as an EtherChannel, trunk, port channel or teamed NICs.
- A recent Cisco switch that supports load balancing over an EtherChannel based on source and destination IP addresses. (For example, the popular Catalyst 2950 series does not support this load-balancing method.) You can check whether your switch supports it by checking if the global configuration command
port-channel load-balance src-dst-ip
is available. If it isn't, you can't use this switch for the purpose described in this post.
- ESXi does not support dynamically negotiated aggregated links using protocols such as LACP on a standard vSwitch. You must configure a static link manually on both ends.
- Available bandwidth to a single host will not increase; this is inherent to the link-aggregation technologies being used. With two 100 Mbit links, the maximum attainable speed between a virtual machine and a single host on the network is still 100 Mbit. However, if two hosts connect to the virtual machine, chances are good that one host's traffic will go via one physical link and the other host's via the second physical link.
- This example aggregates two physical links into one, but you can use more. You can mix different port speeds, though the recommended configuration is for all links to have the same speed.
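To see why a single flow never exceeds one link's speed, here is a minimal Python sketch of src-dst-ip hashing. This is an illustrative model, not necessarily the exact hash a given switch or ESXi build uses, but the principle is the same: one source/destination IP pair always maps to the same physical link.

```python
import ipaddress

def pick_uplink(src: str, dst: str, n_uplinks: int) -> int:
    """Pick a physical link for a flow by hashing the IP pair.

    Illustrative hash: XOR the 32-bit source and destination addresses
    and take the result modulo the number of uplinks. Because the input
    is only the IP pair, a given flow always lands on the same link.
    """
    s = int(ipaddress.IPv4Address(src))
    d = int(ipaddress.IPv4Address(dst))
    return (s ^ d) % n_uplinks

# Two different clients talking to the same VM can land on
# different physical links:
print(pick_uplink("10.0.0.10", "10.0.0.50", 2))  # 0
print(pick_uplink("10.0.0.11", "10.0.0.50", 2))  # 1

# But a single client/VM pair always hashes to the same link,
# which is why one flow never exceeds one link's bandwidth.
assert pick_uplink("10.0.0.10", "10.0.0.50", 2) == \
       pick_uplink("10.0.0.10", "10.0.0.50", 2)
```

This also shows why aggregation helps only with multiple peers: the hash spreads different IP pairs across the links, but never splits one pair over both.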
Edit the network settings by going to Configuration -> Networking. Edit the properties of the virtual switch you want to create an aggregated link for; in this example this is vSwitch0.
Next, add the second network interface to the vSwitch in the Network Adapters tab:
Now we need to configure ESXi to bond the links on these two adapters together. Go back to the Ports tab and edit the vSwitch properties:
On the vSwitch properties window, go to the last tab NIC Teaming and set Load Balancing to “Route based on IP hash”:
That's it for the VMware side. Now we need to configure the switch to create the aggregated link.
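If you prefer the command line over the vSphere client, recent ESXi versions can apply the same settings with esxcli. A sketch, assuming ESXi 5.x+ esxcli syntax; vmnic0, vmnic1 and vSwitch0 are placeholders for your own adapter and switch names:

```shell
# Add the second physical NIC as an uplink to the vSwitch:
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Set the load-balancing policy to IP hash ("Route based on IP hash")
# and make both uplinks active:
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash \
    --active-uplinks=vmnic0,vmnic1
```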
Configure the switch
In this example FastEthernet 0/23 and FastEthernet 0/24 are connected to my VMware ESXi server, so I'm going to use the interface range command to apply the necessary configuration to both switchports.
It's important to match the load-balancing method the switch uses to the one ESXi uses. This is done using the port-channel load-balance command.
s2(config)#interface range FastEthernet 0/23 - 24
s2(config-if-range)#channel-group 1 mode on
Creating a port-channel interface Port-channel 1

00:25:49: %LINK-3-UPDOWN: Interface Port-channel1, changed state to up
00:25:50: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-channel1, changed state to up
s2(config-if-range)#exit
s2(config)#port-channel load-balance src-dst-ip
Verify port-channel operation:
s2#show interface port-channel 1
Port-channel1 is up, line protocol is up (connected)
  Hardware is EtherChannel, address is 64d9.89ee.1234 (bia 64d9.89ee.1234)
  Description: To vwmare for VMs
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, link type is auto, media type is unknown
  input flow-control is off, output flow-control is unsupported
  Members in this channel: Fa0/23 Fa0/24
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:01, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 6000 bits/sec, 2 packets/sec
  5 minute output rate 3000 bits/sec, 3 packets/sec
     32312407 packets input, 33220875322 bytes, 0 no buffer
     Received 135526 broadcasts (71135 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 71135 multicast, 0 pause input
     0 input packets with dribble condition detected
     55382934 packets output, 67979911754 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
s2#
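Besides show interface, two other IOS show commands are handy for a quick check of the channel state and the configured hashing method (output omitted here; the exact formatting varies by platform and IOS version):

```
s2#show etherchannel summary
s2#show etherchannel load-balance
```

The first should list Port-channel1 with Fa0/23 and Fa0/24 bundled; the second should report the src-dst-ip method configured earlier.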