Forwarding plane
{{PropDel}}<br><br>{{subpages}}
In routing, the '''forwarding plane''' defines the part of the router (or bridge) architecture that decides what to do with frames arriving on an inbound interface. Most commonly, it refers to a table in which the device looks up the destination address in the incoming packet header (the packet being contained in a frame) and retrieves information telling it the outgoing interface(s) to which the receiving element should send the packet through the internal '''forwarding fabric''' of the forwarding device. The IP Multimedia Subsystem architecture uses the term '''transport plane''' to describe a function roughly equivalent to the routing control plane. Bridges do not look at the packet they encapsulate, but at the destination headers in the frames.
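As a concrete illustration of this lookup, the following minimal Python sketch maps destination addresses to outgoing interfaces; the table contents, interface names, and function name are invented for illustration and do not describe any particular product.

<syntaxhighlight lang="python">
# Minimal sketch of the core forwarding-plane operation: look up the packet's
# destination in a table and obtain the outgoing interface.
# Table contents and interface names are purely illustrative.
forwarding_table = {
    "192.0.2.10": "eth1",
    "198.51.100.7": "eth2",
}

def forward(packet):
    # Unknown destinations fall back to a hypothetical default interface.
    return forwarding_table.get(packet["dst"], "eth0")

print(forward({"dst": "192.0.2.10", "payload": b"..."}))   # eth1
print(forward({"dst": "203.0.113.4", "payload": b"..."}))  # eth0 (default)
</syntaxhighlight>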
For low to medium throughput requirements, there is one forwarding fabric, orders of magnitude faster than any interface connected to it. The fabric is fast enough to be nonblocking: with all input interfaces running at maximum speed, the fabric will have adequate capacity to transfer all packets at line rate. In extremely high performance applications, such as major Internet service provider routers, a crossbar fabric is used, which provides multiple concurrent paths.
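The nonblocking condition amounts to a simple capacity comparison, as in the following sketch; the interface and fabric figures are invented for illustration.

<syntaxhighlight lang="python">
# Illustrative check of the nonblocking condition: the fabric must carry the
# aggregate line rate of all input interfaces simultaneously.
interface_rates_gbps = [10, 10, 10, 10, 1, 1, 1, 1]  # hypothetical line cards
fabric_capacity_gbps = 100                           # hypothetical fabric

aggregate_input = sum(interface_rates_gbps)          # 44 Gbit/s
is_nonblocking = fabric_capacity_gbps >= aggregate_input
print(aggregate_input, is_nonblocking)               # 44 True
</syntaxhighlight>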
==Forwarding plane in bridging==
Bridges have two levels of bridging table. First, each interface learns the source addresses that are local to the medium connected to it, and should ''not'' be forwarded beyond the local network. Second, a bridge may copy the contents of those local tables, so if interface A knows that destination 1 is local to interface 2, it will send the frame directly to interface 2 rather than the default behavior of flooding it out all other interfaces.
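A minimal sketch of this learn-and-forward behaviour, with invented interface names and MAC addresses, might look as follows.

<syntaxhighlight lang="python">
# Minimal learning-bridge sketch: learn source addresses per interface,
# forward to the interface where the destination was learned, else flood.
class LearningBridge:
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.table = {}                    # MAC address -> interface

    def receive(self, in_iface, src_mac, dst_mac):
        self.table[src_mac] = in_iface     # learn: src is local to in_iface
        out = self.table.get(dst_mac)
        if out == in_iface:
            return []                      # destination is local; do not forward
        if out is not None:
            return [out]                   # known destination: forward directly
        return [i for i in self.interfaces if i != in_iface]  # flood

bridge = LearningBridge(["if1", "if2", "if3"])
print(bridge.receive("if1", "aa:aa", "bb:bb"))  # unknown: flood to if2, if3
print(bridge.receive("if2", "bb:bb", "aa:aa"))  # learned: forward to if1 only
</syntaxhighlight>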
==Forwarding plane in routing==
In routing, the table may also specify that the packet is discarded, for security or for capacity management. In some cases, the router will return an Internet Control Message Protocol (ICMP) "destination unreachable" or other appropriate code. Some security policies, however, dictate that the router should be programmed to drop the packet silently. By dropping filtered packets silently, a potential attacker does not become aware of a target that is being protected.
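The choice between signalling the sender and dropping silently can be sketched as a policy check; the policy names below are hypothetical, not drawn from any particular router configuration language.

<syntaxhighlight lang="python">
# Illustrative handling of a filtered packet: either signal the sender with an
# ICMP "destination unreachable" or drop silently so the target is not revealed.
def handle_filtered_packet(packet, policy):
    if policy == "reject":
        return ("icmp_destination_unreachable", packet["src"])  # reply to sender
    if policy == "silent_drop":
        return None                                             # no reply at all
    raise ValueError("unknown policy")

pkt = {"src": "203.0.113.9", "dst": "192.0.2.1"}
print(handle_filtered_packet(pkt, "reject"))       # ('icmp_destination_unreachable', '203.0.113.9')
print(handle_filtered_packet(pkt, "silent_drop"))  # None
</syntaxhighlight>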
The incoming forwarding element will also decrement the time-to-live (TTL) field of the packet and, if the new value is zero, discard the packet. While the IP specification indicates that an ICMP time exceeded message should be sent to the originator of the packet (i.e., the node with the source address in the packet), routers may be programmed to drop the packet silently.
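The TTL handling described above amounts to the following check; whether a time exceeded message is generated is, as noted, a matter of configuration. Field and function names are illustrative.

<syntaxhighlight lang="python">
# Illustrative TTL processing on the incoming forwarding element.
def process_ttl(packet, send_time_exceeded=True):
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        if send_time_exceeded:
            return ("icmp_time_exceeded", packet["src"])  # notify the originator
        return None                                       # silent discard
    return ("forward", packet)

print(process_ttl({"src": "192.0.2.7", "ttl": 1}))                            # ICMP to originator
print(process_ttl({"src": "192.0.2.7", "ttl": 1}, send_time_exceeded=False))  # None (silent)
print(process_ttl({"src": "192.0.2.7", "ttl": 64})[0])                        # forward
</syntaxhighlight>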
Depending on the specific router implementation, the table in which the destination address is looked up could be the routing table (also known as the routing information base), or a separate forwarding information base that is populated (i.e., loaded) by the control plane but used by the forwarding plane to look up packets, at very high speed, and decide how to handle them. Before or after examining the destination, other tables may be consulted to make decisions to drop the packet based on other characteristics, such as the source address, the IP protocol identifier field, or the TCP or UDP port number.
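The split between a control-plane routing table and a forwarding-plane FIB can be sketched as follows; the class and method names are invented, and route selection is reduced to a simple metric comparison.

<syntaxhighlight lang="python">
import ipaddress

# Illustrative control-plane/forwarding-plane split: the control plane installs
# selected routes into a compact FIB; the forwarding plane consults only the FIB.
class Router:
    def __init__(self):
        self.rib = {}   # routing information base: prefix -> (next hop, interface, metric)
        self.fib = {}   # forwarding information base: prefix -> interface

    def install_route(self, prefix, next_hop, interface, metric):
        """Control plane: record the route and push the preferred path into the FIB."""
        net = ipaddress.ip_network(prefix)
        current = self.rib.get(net)
        if current is None or metric < current[2]:
            self.rib[net] = (next_hop, interface, metric)
            self.fib[net] = interface

    def forward(self, destination):
        """Forwarding plane: longest-prefix match against the FIB only."""
        addr = ipaddress.ip_address(destination)
        matches = [net for net in self.fib if addr in net]
        if not matches:
            return None                                  # no route: drop or signal
        return self.fib[max(matches, key=lambda n: n.prefixlen)]

r = Router()
r.install_route("0.0.0.0/0", "192.0.2.1", "eth0", metric=10)
r.install_route("198.51.100.0/24", "192.0.2.2", "eth1", metric=5)
print(r.forward("198.51.100.20"))  # eth1
print(r.forward("203.0.113.8"))    # eth0 via the default route
</syntaxhighlight>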
Forwarding plane functions run in the forwarding element.<ref>[ftp://ftp.rfc-editor.org/in-notes/rfc3746.txt Forwarding and Control Element Separation (ForCES) Framework], RFC 3746, April 2004</ref> High-performance routers often have multiple distributed forwarding elements, so that the router increases performance with parallel processing.
The outgoing interface will encapsulate the packet in the appropriate data link protocol. Depending on the router software and its configuration, functions usually implemented at the outgoing interface may set various packet fields, such as the DSCP field used by differentiated services.
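Setting the DSCP field amounts to rewriting the upper six bits of the Differentiated Services (former Type of Service) octet while leaving the two ECN bits untouched, as the sketch below shows; the codepoint used here is Expedited Forwarding (46).

<syntaxhighlight lang="python">
# Illustrative egress marking: write a DSCP codepoint into the upper 6 bits of
# the Differentiated Services byte, preserving the 2 ECN bits.
def set_dscp(ds_byte, dscp):
    assert 0 <= dscp <= 63
    return (dscp << 2) | (ds_byte & 0x03)

EF = 46                          # Expedited Forwarding codepoint
print(hex(set_dscp(0x00, EF)))   # 0xb8
</syntaxhighlight>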
In general, the passage from the input interface directly to an output interface, through the fabric with minimum modification at the output interface, is called the '''fast path''' of the router. If the packet needs significant processing, such as segmentation or encryption, it may go onto a slower path, which is sometimes called the '''services plane''' of the router. Service planes can make forwarding or processing decisions based on higher-layer information, such as a Web URL contained in the packet payload.
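The fast-path/services-plane decision can be viewed as a dispatch step; the criteria in the following sketch (encryption, fragmentation, payload inspection) are illustrative rather than an exhaustive or authoritative list.

<syntaxhighlight lang="python">
# Illustrative dispatch between the fast path and a slower services path.
def dispatch(packet, mtu=1500):
    needs_services = (
        packet.get("encrypt", False)             # e.g. encryption required
        or len(packet["payload"]) > mtu          # would require segmentation
        or packet.get("inspect_payload", False)  # e.g. URL-based decisions
    )
    return "services_plane" if needs_services else "fast_path"

print(dispatch({"payload": b"x" * 200}))                   # fast_path
print(dispatch({"payload": b"x" * 3000}))                  # services_plane
print(dispatch({"payload": b"x" * 200, "encrypt": True}))  # services_plane
</syntaxhighlight>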
== Issues in router forwarding performance ==
Vendors design router products for specific markets. Design of routers intended for home use, perhaps supporting several PCs and VoIP telephony, is driven by keeping the cost as low as possible. In such a router, there is no separate forwarding fabric, and there is only one active forwarding path: into the main processor and out of the main processor. Routers for more demanding applications accept greater cost and complexity to get higher throughput in their forwarding planes.
Several design factors affect router forwarding performance:
* Forwarding information base design
* Data link layer processing and extracting the packet
* Decoding the packet header
* Processing and data link encapsulation at the egress interface
Routers may have one or more processors. In a uniprocessor design, these performance parameters are affected not just by the processor speed, but by competition for the processor. Higher-performance routers invariably have multiple processing elements, which may be general-purpose processor chips or specialized application-specific integrated circuits (ASICs).
Very high performance products have multiple processing elements on each interface card. In such designs, the main processor does not participate in forwarding, but only in control plane and management processing.
=== Benchmarking performance ===
In the Internet Engineering Task Force, two working groups in the Operations and Management Area deal with aspects of performance. The IP Performance Metrics (IPPM) working group focuses, as its name would suggest, on operational measurement of services. Performance measurements on single routers, or narrowly defined systems of routers, are the province of the Benchmarking Working Group (BMWG).
RFC 2544 is the key BMWG document.<ref>[http://www.ietf.org/rfc/rfc2544.txt Benchmarking Methodology for Network Interconnect Devices], RFC 2544, S. Bradner & J. McQuaid, March 1999</ref> A classic RFC 2544 benchmark uses half the ports of the router (the device under test, or DUT) for input of a defined load, and measures the time at which the corresponding packets appear at the output ports.
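The following toy model illustrates only the bookkeeping of offering a defined load and counting what the device under test delivers; it is not the RFC 2544 methodology itself, and all rates and durations are invented.

<syntaxhighlight lang="python">
# Toy model of a benchmark trial: offer a defined frame rate on the input ports
# and count what arrives at the output ports within the test duration.
# This illustrates only the bookkeeping, not the RFC 2544 procedure.
def run_trial(offered_fps, duration_s, dut_forward_capacity_fps):
    offered = offered_fps * duration_s
    received = min(offered, dut_forward_capacity_fps * duration_s)
    loss_ratio = (offered - received) / offered
    return received, loss_ratio

received, loss = run_trial(offered_fps=148_810, duration_s=60,
                           dut_forward_capacity_fps=120_000)
print(received, f"{loss:.1%}")   # frames received and the resulting loss ratio
</syntaxhighlight>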
== Distributed forwarding ==
A next step in speeding up routers was to have a specialized forwarding processor separate from the main processor. There was still a single path, but forwarding no longer had to compete with control in a single processor. The forwarding processor typically had a small FIB held in hardware memory (e.g., static random access memory (SRAM)) that was faster and more expensive than the main memory holding the full FIB. Main memory was generally dynamic random access memory (DRAM).
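The relationship between a small, fast FIB and the complete table in main memory behaves much like a cache, as in the following generic sketch; it does not describe any specific vendor's design, and the table sizes and eviction policy are invented.

<syntaxhighlight lang="python">
# Illustrative split between a small, fast FIB ("SRAM-like") and the complete
# forwarding table in slower main memory ("DRAM-like").
class TwoLevelFib:
    def __init__(self, full_table, fast_slots=4):
        self.full_table = full_table   # complete table in main memory
        self.fast_table = {}           # small table in fast memory
        self.fast_slots = fast_slots

    def lookup(self, destination):
        if destination in self.fast_table:
            return self.fast_table[destination]          # fast-path hit
        out_iface = self.full_table.get(destination)     # slower lookup
        if out_iface is not None:
            if len(self.fast_table) >= self.fast_slots:
                self.fast_table.pop(next(iter(self.fast_table)))  # crude eviction
            self.fast_table[destination] = out_iface
        return out_iface

fib = TwoLevelFib({"192.0.2.10": "eth1", "198.51.100.7": "eth2"})
print(fib.lookup("192.0.2.10"))  # miss: filled from the full table
print(fib.lookup("192.0.2.10"))  # hit in the small fast table
</syntaxhighlight>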
In bridging, a subset of distributed forwarding came very early: the interface-level table of source MAC addresses that are local to that interface and should ''not'' be forwarded.
=== Early distributed forwarding ===
Next, routers began to have multiple forwarding elements that communicated through a high-speed '''shared bus'''<ref>[http://www.isi.edu/touch/pubs/isi-lanman98abs.pdf High Performance IP Forwarding Using Host Interface Peering], J. Touch ''et al.'', Proc. 9th IEEE Workshop on Local and Metropolitan Area Networks (LANMAN), May 1998</ref> or through a '''shared memory'''.<ref>[http://www.cs.ucr.edu/~bhuyan/papers/tpds.ps Shared Memory Multiprocessor Architectures for Software IP Routers], Y. Luo ''et al.'', IEEE Transactions on Parallel and Distributed Systems, 2003</ref> Cisco used shared busses until they saturated, while Juniper preferred shared memory.<ref>[http://www.informit.com/articles/article.aspx?p=30631 Juniper Networks Router Architecture], ''Juniper Networks Reference Guide: JUNOS Routing, Configuration, and Architecture'', T. Thomas, Addison-Wesley Professional, 2003</ref>
Each forwarding element had its own FIB. See, for example, the Versatile Interface Processor on the Cisco 7500.<ref>[http://safari.ciscopress.com/1578701813/ch06 Hardware Architecture of the Cisco 7500 Router], ''Inside Cisco IOS Software Architecture (CCIE Professional Development)'', V. Bollapragada ''et al.'', Cisco Press, 2000</ref>
Eventually, the shared resource became a bottleneck, with the limit of shared bus speed being roughly 2 million packets per second (Mpps). Crossbar fabrics broke through this bottleneck.
As forwarding bandwidth increased, even with the elimination of cache miss overhead, the shared paths limited throughput. While a router might have 16 forwarding engines, if there was a single bus, only one packet transfer at a time was possible. There were some special cases where a forwarding engine might find that the output interface was one of the logical or physical interfaces present on the forwarder card, such that the packet flow was totally inside the forwarder. It was often easier, however, even in this special case, to send the packet out the bus and receive it from the bus.
While some designs experimented with multiple shared busses, the eventual approach was to adapt the crossbar switch model from telephone switches, in which every forwarding engine had a hardware path to every other forwarding engine. With a small number of forwarding engines, crossbar forwarding fabrics are practical and efficient for high-performance routing. There are multistage designs for crossbar systems, such as Clos networks.
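The advantage over a single shared bus can be seen in a toy scheduling model: a bus carries one transfer per cycle, while a crossbar can carry any set of transfers whose inputs and outputs are all distinct. The requests and the greedy granting scheme below are purely illustrative.

<syntaxhighlight lang="python">
# Toy comparison of transfers per cycle on a shared bus versus a crossbar.
# Each request is (input engine, output engine).
requests = [(0, 1), (2, 3), (4, 5), (1, 3)]

def bus_cycles(reqs):
    return len(reqs)                     # a single bus: one transfer at a time

def crossbar_cycles(reqs):
    cycles, pending = 0, list(reqs)
    while pending:
        used_in, used_out, rest = set(), set(), []
        for i, o in pending:             # grant non-conflicting requests greedily
            if i in used_in or o in used_out:
                rest.append((i, o))
            else:
                used_in.add(i)
                used_out.add(o)
        pending = rest
        cycles += 1
    return cycles

print(bus_cycles(requests), crossbar_cycles(requests))  # 4 2
</syntaxhighlight>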
== References ==
{{reflist|2}}