I decided to write this post to address some of the common misconceptions about ether-channel that keep coming up on mailing lists, forums and so on…
1. More bandwidth
This is the first common misconception I often read or hear: if I “trunk” (actually a Solaris term for link aggregation) many ports together, the bandwidth of the resulting ether-channel equals the number of ports multiplied by the bandwidth of each port.
In other words… 8 * 100Mbps ports ether-channeled together would create a single logical port with 800Mbps… Now while that is what most people expect, it isn’t really true. You see, only one physical link is used for a single connection. If I connect to a workstation over the ether-channel, the packets for that single connection travel over one link, not multiple links, so the bandwidth for this “conversation” is still 100Mbps.
2. All links are load balanced
It is also common to think that all links are equally shared, that is to say, no single link is overloaded while the others sit idle.
Now while this is somewhat achievable, it isn’t the case by default. You see, by default Cisco’s ether-channel algorithm hashes the destination MAC address of each frame down to a number ranging from 0 to 7… Each of these values is then assigned to one port of the logical link. Because the maximum number of active links that can be channeled together is 8¹, each link carries at least one hash value. If you happen, for example, to have 5 links, 3 of them are assigned 2 hash values each while the remaining 2 get 1 each. If you only have 2 links, each is assigned 4 values.
Because Cisco’s default algorithm uses the destination MAC address, every packet destined to a given server uses the same link.
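To make the bucket math concrete, here is a small Python sketch. The 3-bit hash shown is an invented stand-in for Cisco’s actual hardware hash, but the way 8 hash values get spread across the physical links matches the counts described above:

```python
# Simplified illustration of how 8 hash buckets (values 0-7) are
# distributed over the physical links of an ether-channel. The hash
# function below is a stand-in, not Cisco's actual hardware algorithm.

def hash_bucket(dst_mac: str) -> int:
    """Reduce a destination MAC address to a 3-bit value (0-7)."""
    last_octet = int(dst_mac.split(":")[-1], 16)
    return last_octet & 0b111  # keep the low 3 bits

def buckets_per_link(num_links: int) -> list[int]:
    """Spread the 8 possible hash values across the physical links."""
    counts = [0] * num_links
    for bucket in range(8):
        counts[bucket % num_links] += 1
    return counts

print(buckets_per_link(5))  # -> [2, 2, 2, 1, 1]: 3 links carry 2 values, 2 carry 1
print(buckets_per_link(2))  # -> [4, 4]: each link carries 4 values
```

Since 8 isn’t evenly divisible by 5 (or by 3, 6, 7), only channels of 2, 4 or 8 links get a perfectly even bucket distribution.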
Imagine this simple scenario
[many workstations] —> switchA <==========> switchB —-> Server1
When PC1, PC2 and PC3, which connect to switchA, try to access a file on Server1, all their connections go through the same ether-channel link, while the other links (assuming there is no other traffic) sit completely unused. The return packets from Server1 to PC1, PC2 and PC3 would nevertheless use different links, because there are 3 different destination MAC addresses. So in one direction a link is overloaded; in the other direction it isn’t.
Because ether-channel hashing is decided independently in each direction, it is possible to change the load-balancing algorithm on switchA from destination MAC address to source MAC address, while leaving switchB on destination MAC address, which then results in “somewhat” load-balanced ether-channel links.
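The scenario above can be sketched in a few lines of Python. The MAC addresses and the 3-bit hash are invented for illustration; the point is only that a destination-MAC hash pins all three PC-to-Server1 flows onto one link of switchA, while a source-MAC hash spreads them:

```python
# Illustrative only: the hash is a stand-in for Cisco's real algorithm,
# and the MAC addresses are made up.

def pick_link(mac: str, num_links: int) -> int:
    """Map a MAC address to one physical link of the channel."""
    bucket = int(mac.split(":")[-1], 16) & 0b111  # 3-bit hash value
    return bucket % num_links

SERVER1 = "aa:bb:cc:00:00:10"
PCS = ["00:00:00:00:00:01", "00:00:00:00:00:02", "00:00:00:00:00:03"]
NUM_LINKS = 4

# Default (destination MAC): every PC->Server1 frame hashes on Server1's
# MAC, so all three flows land on the same link.
print([pick_link(SERVER1, NUM_LINKS) for _ in PCS])

# Source-MAC hashing on switchA: frames hash on each PC's own MAC
# instead, so the three flows spread across different links.
print([pick_link(pc, NUM_LINKS) for pc in PCS])
```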
Having said that, Cisco’s algorithm offers about 9 methods to determine which link to use: the source or destination MAC address, the source or destination IP address, the source or destination port, the source AND destination IP addresses, the source AND destination ports, and finally the source AND destination MAC addresses.
(wow that was a long sentence :) )
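As a rough sketch of how a combined method differs from a single-field one, a common trick is to XOR the two fields before taking the low bits (again, this is an assumption for illustration, not Cisco’s documented hardware hash). Combining source and destination lets two hosts talking to the same server land on different links, which a destination-only hash never can:

```python
# Stand-in sketch of a combined source+destination hash: XOR the two
# addresses, then keep the low 3 bits. Not Cisco's actual algorithm.

def combined_bucket(src_ip: str, dst_ip: str) -> int:
    """Hash a flow on both endpoints instead of just one."""
    src = int(src_ip.split(".")[-1])
    dst = int(dst_ip.split(".")[-1])
    return (src ^ dst) & 0b111

# Two clients talking to the same server get different buckets,
# so their flows can use different links of the channel.
print(combined_bucket("10.0.0.1", "10.0.0.100"))
print(combined_bucket("10.0.0.2", "10.0.0.100"))
```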
Depending on which method you decide to use, there will be drawbacks of one kind or another depending on your network topology. That is to say, ether-channel isn’t the primary means of resolving bandwidth/throughput issues; it is nevertheless useful and, when necessary, an important feature in network topology design.
¹ [ LACP allows a maximum of 16 links with only 8 active ]