Load distribution
{{subpages}}
{{TOC|right}}
'''Load distribution''' is a term in [[computers]] and [[telecommunications network]]s which refers to the partitioning of a workload M over N resources, each with capacity less than M. It is often closely related to [[multihoming]] and [[fault tolerance]], and those goals are often integrated. Load distribution, however, may be done purely for performance and capacity reasons.
It is perfectly normal to find multiple load distribution mechanisms used in the same network service, for different protocols and for different applications.
==Packet and frame level==
At the most basic level, frame-level bridges and packet-level routers can only make decisions about sending traffic over N directly connected transmission link resources. In the first data networks, a simplifying assumption could be made that all links had the same speed.

Equating cost to the reciprocal of speed, the first approach was called "round robin". Assume N=3. The first packet would go to link 0, the second to link 1, the third to link 2, and the fourth to link (last + 1) modulo N, or back to zero. This worked acceptably with slow links, where computing time and memory were much more available than bandwidth. As bandwidth increased, round robin became less attractive, because, for each destination in the forwarding information base, state had to be maintained: the last link used, which had to be updated for every unit of data sent.
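As a rough illustrative sketch only, not any particular router's implementation, the following shows per-destination round-robin selection over N links, including the per-destination "last link used" state described above; the link count and destination address are arbitrary assumptions.

<syntaxhighlight lang="python">
# A minimal sketch of round-robin link selection with per-destination
# "last link used" state, as described in the text.  The link count and
# the destination address are illustrative assumptions.

N_LINKS = 3                 # N parallel links, assumed equal speed
last_link_used = {}         # per-destination state in the forwarding table

def select_link_round_robin(destination: str) -> int:
    """Return the next link for this destination: (last + 1) modulo N."""
    last = last_link_used.get(destination, -1)   # -1 so the first packet uses link 0
    nxt = (last + 1) % N_LINKS
    last_link_used[destination] = nxt            # state updated for every packet sent
    return nxt

# Four packets to one destination cycle over links 0, 1, 2 and wrap back to 0.
print([select_link_round_robin("198.51.100.7") for _ in range(4)])   # [0, 1, 2, 0]
</syntaxhighlight>

The per-destination counter in this sketch is exactly the per-packet state update that the text identifies as increasingly costly as bandwidth grows.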
A fundamental Internet design concept is that routers, and by extension bridges, are stateless with respect to individual forwarding decisions. In reality, they retain a certain amount of state in their forwarding tables, but these are in the [[control plane]], not the [[forwarding plane]], and are updated at a much slower rate than that of data forwarding.
The next approach was to recognize each new destination address and, when it was first seen, associate it with the next available link, always sending traffic to that destination over that link. With a large number of destinations, such that the workload per link averages out to roughly the same amount, this works well. With a small number of destinations, as, for example, on a set of links purely internal to an enterprise, "pinhole congestion" may occur, in which one link is assigned to a destination receiving far more traffic than any other.
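A minimal sketch of this per-destination assignment follows; the destination names and traffic volumes are invented numbers, used only to show how one heavy destination can create pinhole congestion.

<syntaxhighlight lang="python">
# A minimal sketch of per-destination link assignment: each newly seen
# destination is pinned to the next available link and stays there, so no
# per-packet state update is needed.  Traffic volumes below are invented,
# purely to illustrate pinhole congestion.

N_LINKS = 3
destination_to_link = {}     # written once per destination; no per-packet update

def assign_link(destination: str) -> int:
    if destination not in destination_to_link:
        # Next available link, in simple rotation over newly seen destinations.
        destination_to_link[destination] = len(destination_to_link) % N_LINKS
    return destination_to_link[destination]

# Hypothetical traffic mix: one destination carries far more traffic than the others.
traffic = {"server-a": 900, "server-b": 40, "server-c": 60}   # units per second

load = [0] * N_LINKS
for dst, volume in traffic.items():
    load[assign_link(dst)] += volume

print(load)   # [900, 40, 60] -- the link holding the heavy destination is congested
</syntaxhighlight>

Because the mapping is written only once per destination, the per-packet state update disappears, but the printed loads show the pinhole effect: the link that happened to receive the heavy destination carries almost all of the traffic while the others sit nearly idle.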
==Additional routing concepts==
==Transaction==
==Application level==