
Per-Flow Queuing

Allot’s Approach to Bandwidth Management

February 2002


Table of Contents

Introduction
An Overview of TCP/IP
What is Bandwidth Management?
Allot’s Per-Flow Queuing
    How It Works
    Per-Flow Queuing: Two Examples
    The Benefits of PFQ
Comparing PFQ with other QoS approaches
    Class-Based Queuing (CBQ)
    Weighted Fair Queuing (WFQ)
    TCP Rate Control
Summary
About NetEnforcer and NetPolicy
About Allot Communications

Allot Communications www.allot.com Page 2 of 13


Introduction

The Allot Communications NetEnforcer™ family of policy enforcement devices offers a unique, intelligent, policy-powered approach to bandwidth management. The NetEnforcer controls the traffic running over the bottlenecked links of your network, such as the Internet access link, where it optimizes the utilization of your infrastructure, guaranteeing and prioritizing bandwidth for business-critical applications and limiting bandwidth for less important ones. Allot’s patented Per-Flow Queuing is a direct traffic-control approach that lets you maximize the potential of your expensive WAN links by providing granular per-flow bandwidth management and shaping for both incoming and outgoing traffic flows.

An Overview of TCP/IP

The TCP/IP protocol suite includes two main transport protocols, TCP and UDP. The majority of the traffic in today’s networks uses TCP at the transport layer, since it provides a reliable flow of data between two end-points. TCP provides a “connection-oriented” byte-stream service in which the two end-points must establish a connection with each other before they can exchange data. UDP, on the other hand, provides a simpler but unreliable transport layer and is used for streaming applications such as voice over IP and videoconferencing. Applications that use UDP as the transport layer usually implement some TCP capabilities, such as rate control, in the application layer to compensate for the lack of these features in the transport layer. TCP provides the following facilities:

• Reliability - TCP assigns a sequence number to each byte transmitted, and expects a positive acknowledgment (ACK) from the receiving end. If the ACK is not received within a certain interval (called the “timeout interval”), the data is retransmitted. The receiving TCP end uses the sequence numbers to rearrange the segments when they arrive out of order, and to eliminate duplicate segments.

• Rate Control - The receiving TCP end-point, when sending an ACK back to the sender, also indicates to the sender the number of bytes it can receive beyond the last received TCP segment, without causing an overrun or overflow in its internal buffers.

• Slow start and congestion avoidance – These two methods are used by TCP to adapt the sending rate of the transmitting end-point to the available bandwidth in the link between the two end-points. This is especially important when there are bottlenecks in the traffic flow.

• Logical Connections - The reliability and rate control mechanisms described above require that TCP initializes and maintains certain status information for each datastream. The combination of this status, including sockets, sequence numbers and window sizes, is called a “logical connection”.

• Full Duplex - TCP provides for concurrent datastreams in both directions. TCP is a connection-oriented transport protocol and uses sequence numbers and acknowledgment messages to provide a sending node with delivery information about packets transmitted to the destination node. If the sending computer is transmitting too fast for the receiving computer, TCP employs rate control mechanisms to slow data transfer.
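The slow-start and congestion-avoidance behavior described above can be illustrated with a toy model. This is a simplified Reno-style sketch (the window is counted in whole segments and the threshold value is arbitrary), not a faithful TCP implementation:

```python
def simulate_cwnd(rounds, ssthresh=16, loss_rounds=()):
    """Toy model of TCP slow start / congestion avoidance.

    The congestion window (cwnd) doubles each RTT until it reaches
    ssthresh (slow start), then grows by one segment per RTT
    (congestion avoidance). On loss, ssthresh is halved and cwnd
    restarts from 1 segment.
    """
    cwnd, history = 1, []
    for r in range(rounds):
        history.append(cwnd)
        if r in loss_rounds:                 # packet loss detected
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                         # back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                        # slow start: exponential growth
        else:
            cwnd += 1                        # congestion avoidance: linear growth
    return history

print(simulate_cwnd(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The visible knee at the threshold, and the collapse back to one segment after a loss, are exactly the behavior PFQ later exploits to steer a sender toward a dictated rate.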


What is Bandwidth Management?

Bandwidth management, or quality of service (QoS), is the general term for a broad range of techniques designed to shape the traffic on your WAN connection. Bandwidth management ensures that the maximum amount of traffic flows over your Internet connection in the most efficient manner possible, so that packets are not dropped or re-transmitted. It also provides a way for your important traffic to move more quickly through the network, enabling your business applications to respond faster.

WAN links that connect enterprises to Internet Service Providers (ISPs), or branch offices to their headquarters, have finite bandwidth resources and are usually bottlenecked because of the wide variety of uses they serve in an enterprise network. WAN links carry both applications that are critical for the business (such as interactive Web applications, VoIP, or ERP) and applications that are less important and can degrade the performance of the critical ones (such as FTP, Napster, KaZaA, or AudioGalaxy).

Most of us have experienced, at some time or another, the effects of network latency (slow network response). Anyone who has used interactive Web applications over a low-speed connection has seen the effect a file transfer has on the interactive traffic sharing that connection. The file transfer easily consumes most of the link’s bandwidth and delays the interactive data. The result is that you, or even your customers, receive poor performance from the important application.

The source of the problem is that the datagrams carrying the file transfer data are given the same priority on the link as those of the interactive applications. No consideration is given to the type of data inside a datagram when deciding which datagram will be transmitted next: all datagrams are scheduled for transmission on a “first come, first served” basis.
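The difference between first-come-first-served transmission and transmission prioritized by traffic type can be sketched with a toy scheduler. The packet names and priority values below are invented purely for illustration:

```python
import heapq

# (name, priority) in arrival order; a lower priority number is sent first
arrivals = [("ftp", 9), ("ftp", 9), ("web", 1), ("ftp", 9), ("voip", 0)]

# FCFS: transmit strictly in arrival order
fcfs = [name for name, _ in arrivals]

# Priority queuing: interactive traffic jumps ahead of the bulk transfer;
# the arrival index breaks ties, so equal-priority packets remain FCFS
heap = [(prio, i, name) for i, (name, prio) in enumerate(arrivals)]
heapq.heapify(heap)
prioritized = [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(fcfs)         # ['ftp', 'ftp', 'web', 'ftp', 'voip']
print(prioritized)  # ['voip', 'web', 'ftp', 'ftp', 'ftp']
```

Under FCFS, the interactive packets wait behind the file transfer; with the priority order they go out first, while the bulk packets still leave in their original relative order.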
When a new datagram arrives, it is added to the end of the transmit queue; when link bandwidth becomes available, the datagram at the head of the queue is transmitted. Traffic shaping, through policies, allows you to implement a series of network actions that alter the way data is queued for transmission. Although it will ultimately take the same amount of time to transmit the entire set of datagrams across a network link regardless of the order in which they are sent, sacrificing the response time of the file transfer by prioritizing the interactive traffic can significantly speed up your interactive sessions. Policies define how bandwidth management is to be achieved and translate business needs into traffic management (“my interactive Web application is critical for me”). Each policy defines both the conditions for matching traffic to the policy and the network actions to apply to that traffic (“highest priority”). In addition to prioritizing traffic, today’s advanced traffic shapers should also provide the following capabilities:

• Setting a minimum amount of bandwidth for an application or user (“guaranteeing”)

• Setting a maximum amount of bandwidth for an application or user (“limiting”)

• Enforcing a specific CBR (Constant Bit Rate) level for specific connections

• Allowing bursts of traffic on certain connections to exceed the defined maximum limits

• Enabling hierarchical policies that ease policy creation and maintenance
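A common way to implement the maximum-bandwidth (“limiting”) and burst capabilities listed above is a token bucket. The following is a generic sketch with arbitrary example rates, not a description of the NetEnforcer’s internals:

```python
class TokenBucket:
    """Limit a flow to `rate` bytes/sec while allowing bursts up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def allow(self, nbytes, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True        # transmit immediately
        return False           # queue (or drop) the packet

bucket = TokenBucket(rate=125_000, burst=8_000)   # ~1 Mbps, 8 KB burst
print(bucket.allow(8_000, now=0.0))    # True  -- the burst is absorbed
print(bucket.allow(1_500, now=0.001))  # False -- bucket nearly empty
print(bucket.allow(1_500, now=0.1))    # True  -- tokens have refilled
```

A minimum (“guaranteed”) rate can be built the same way: packets that fit within a flow’s guaranteed bucket bypass the shared queue entirely.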

The Allot Communications NetEnforcer™ policy enforcement device offers all of these important traffic shaping features, as well as a customizable Policy Editor, a real-time Traffic Monitor, and IP accounting. The NetEnforcer supports three hierarchical levels for shaping traffic: the connection level; the policy or “Virtual Channel” (VC) level, which aggregates connections that match a user-defined rule; and the Pipe level, which aggregates several VCs associated with a specific user or IP address.


Allot’s Per-Flow Queuing

The NetEnforcer uses a unique approach to queuing called Per-Flow Queuing (PFQ). With PFQ, each flow gets its own queue and is treated individually by the NetEnforcer, which enables very accurate traffic shaping. PFQ is a direct approach to QoS enforcement. Unlike indirect approaches that try to manage the available bandwidth by changing parameters in the packets or flows (such as the TCP window size), Per-Flow Queuing uses TCP’s inherent flow control to achieve the maximum and most efficient bandwidth usage.

Per-Flow Queuing exploits two important internal mechanisms of TCP, “slow start” and “congestion avoidance”. These mechanisms gradually increase the rate of a data flow until they detect that the link between the two end-points is saturated. PFQ takes advantage of these mechanisms by artificially (and dynamically) enforcing the proper transmission rate (bandwidth) per flow, in a way that meets the policy requirements and avoids collisions. The transmitting TCP then synchronizes to the rate dictated by the NetEnforcer. The NetEnforcer thus “forces” each flow to transmit packets at the rate that meets the user-defined policy, including the minimum, maximum, and priority definitions.

How It Works

Allot’s Per-Flow Queuing is implemented by the QoS Enforcement Module in the NetEnforcer. Each packet that arrives at the NetEnforcer’s QoS Enforcement Module (see Figure 1) is matched to the proper flow by the Flow Identifier and inserted into that flow’s queue. If the packet does not match any existing flow, the New Flow Generator examines the characteristics of the flow and matches it to the proper policy (Virtual Channel); the new flow’s queue is then added to the system.

When a packet arrives at the QoS Enforcement Module, the module checks whether the flow’s guaranteed bandwidth has been exhausted and whether its maximum limit has been reached. If the guaranteed bandwidth has not been exhausted, the packet is transmitted immediately, without any delay. If the maximum limit for the flow has been reached, the packet is placed in a buffer. Otherwise, the packet is placed in its flow queue and transmitted based on the priority of the flow and the available bandwidth.

The queues are created and grown dynamically: a queue is created per flow and closed once the flow ends, so system resources are used optimally. The NetEnforcer does not assign a predefined buffer size per queue; it manages a large buffer bank and dynamically assigns each queue only the buffer space its flow requires at any given time. Even large temporary queues, for peaks or bursts within a flow, can thus be accommodated.

The QoS Enforcement Module uses a very accurate scheduler that decides which flow may send a packet at any given moment. After a packet is sent, the system decides which flow sends the next packet, based on the defined policy and the number of packets each flow has already sent.
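The per-flow bookkeeping described above can be sketched roughly as follows. Identifying flows by their 5-tuple and serving backlogged queues in proportion to a policy weight are standard techniques; the field names and the scheduling rule here are illustrative assumptions, not Allot’s actual algorithm:

```python
from collections import defaultdict, deque

queues = defaultdict(deque)   # one queue per flow, created on demand
priority = {}                 # flow key -> weight taken from its policy (VC)
sent = defaultdict(int)       # packets already sent per flow

def enqueue(packet):
    # identify the flow by the classic 5-tuple
    key = (packet["src"], packet["dst"], packet["sport"],
           packet["dport"], packet["proto"])
    if key not in queues:     # new flow: match it to a policy weight
        priority[key] = packet.get("policy_weight", 1)
    queues[key].append(packet)

def transmit_one():
    # serve the backlogged flow with the least service relative to its
    # weight, so equal-weight flows receive equal shares of the link
    backlogged = [k for k, q in queues.items() if q]
    if not backlogged:
        return None
    key = min(backlogged, key=lambda k: sent[k] / priority[k])
    pkt = queues[key].popleft()
    sent[key] += 1
    if not queues[key]:       # flow drained: release its queue
        del queues[key]
    return pkt

# demo: two equal-weight flows are served alternately (fairness)
mk = lambda s: {"src": s, "dst": "b", "sport": 1, "dport": 80, "proto": 6}
for _ in range(2):
    enqueue(mk("A")); enqueue(mk("B"))
print([transmit_one()["src"] for _ in range(4)])   # ['A', 'B', 'A', 'B']
```

Note how a queue exists only while its flow is backlogged, mirroring the dynamic queue creation and teardown described above.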


[Figure: inbound/outbound traffic enters the Flow Recognizer, which matches each packet to the appropriate flow queue (flow1–flow6). A new flow goes to the Session Builder, which matches it to a Virtual Channel and updates the QoS mechanism with the flow’s policy. Each packet is then checked against the policy definitions: if the minimum (guaranteed) bandwidth is not exhausted it proceeds directly; if the maximum bandwidth is exhausted it is placed in the buffer; otherwise it is passed to the priority scheduler.]

Figure 1: Schematic Drawing of the NetEnforcer’s QoS Enforcement Module

Per-Flow Queuing: Two Examples

Example #1: Synchronizing Flow Rates with the NetEnforcer

A flow is established between Client A and Server B. The NetEnforcer enforces a specific rate on the flow, based on the flow’s minimum/maximum bandwidth and priority definitions. At the start of the connection a larger buffer may be allocated, but as the sender synchronizes to the enforced rate, almost no buffering is required. This enables the network to operate at maximum efficiency.



Figure 2: TCP connection synchronizes to the rate dictated by the NetEnforcer.

If additional connections are added to the system, the original flow may have to transmit at a lower rate, in accordance with the defined policies. This may occur because the new connections have a higher priority, or because they belong to the same traffic type/policy (e.g., the same priority level) and are treated with “fairness” in accessing the link: flows with the same priority occupy the same percentage of the link’s bandwidth. Either way, the NetEnforcer reduces the rate and the TCP adapts to the new rate:



Figure 3: The TCP synchronizes to the new rate dictated by the NetEnforcer

Example #2: Ensuring Fairness Between Connections with PFQ

Two flows, one red and one blue, pass through a bottlenecked WAN link. The blue flow tries to transmit at a higher rate than the red one. If the policy defines that the red and blue flows should receive the same priority, PFQ provides fairness while shaping the traffic. Without the NetEnforcer there would be no fairness between the flows, and the blue flow would consume most of the available bandwidth. When the blue connection increases its rate (Figure 4), both the blue and the red connections start randomly dropping packets at the congestion point where the wide link meets the narrow one. This usually occurs at the enterprise access router, which has a relatively narrow WAN link (e.g., 256 Kbps) on one side and a relatively wide LAN link (10 Mbps) on the other.

Figure 4: Increasing the sending rate without the NetEnforcer


Eventually, each sender’s TCP will synchronize to the bandwidth available at the bottlenecked link (see Figure 5). Notice that fairness between the connections was not achieved: both the red and the blue connections had to reduce their sending rates. Packets are no longer dropped, because the bottlenecked link can now pass all the arriving traffic, but the packets that were dropped earlier had to be retransmitted, “wasting” additional bandwidth.

Figure 5: Multiple connections reduce the sending rate without NetEnforcer

When using the NetEnforcer, both connections cross the bottleneck link at the same rate (see Figure 6). The red connection now sends more traffic, at the expense of the blue connection: the NetEnforcer delays the blue connection’s packets in the proper queue, and the internal scheduler determines when each packet is transmitted.

Figure 6: Reducing the sending rate (without dropping packets) with NetEnforcer

After a while, the TCP flow control of the blue flow synchronizes to the rate dictated by the NetEnforcer. Once the sending rate has synchronized to the dictated rate, there are no packets left in the queue (see Figure 7). This is the normal state of the NetEnforcer: most of the queues are empty (or nearly empty, holding a single packet) most of the time. Packets remain in a queue only until the sending TCP adapts to the rate dictated by the NetEnforcer.


Figure 7: No packets in the queue.

The Benefits of PFQ

Per-Flow Queuing offers a direct implementation of QoS and uses TCP’s inherent flow control to achieve the maximum and most efficient bandwidth usage. Additional capabilities and characteristics of PFQ include:

• Maximal use of the available bandwidth – The scheduling mechanism transmits packets as long as there is available bandwidth. This ensures maximal link utilization, which results in maximum application performance.

• Very accurate policy enforcement – The scheduling mechanism enforces the policy definitions at extremely high resolution: bandwidth is managed at the resolution of a single packet.

• Traffic smoothing – As part of the accurate scheduling, the NetEnforcer smoothes the bandwidth, providing a more stable/constant rate of consumption that helps avoid collisions and packet drops.

• Fairness between connections – One of the important benefits of the PFQ method is fairness among all connections. Two connections with the same priority will get the same bandwidth even if one of them tries to transmit at a higher rate. This is one of the basic requirements of a traffic shaper.

• Independence from endpoint flow control (the TCP/IP stack at the end-points) – Unlike other traffic shaping implementations, the NetEnforcer is independent of the flow control mechanisms at the end-points. This enables the NetEnforcer to use the same algorithms for TCP traffic and for UDP traffic, where rate control is implemented independently in the application layer.

• Per-connection CBR enforcement – By accurately controlling the rate at which packets are transmitted per flow, the NetEnforcer reduces the jitter and enhances the end-user experience. Reducing jitter is critical for achieving acceptable performance levels for streaming applications such as VoIP and videoconferencing.


Comparing PFQ with other QoS approaches

Many products today use queuing approaches such as WFQ (Weighted Fair Queuing) or CBQ (Class-Based Queuing). These queuing algorithms provide fairness between different classes or priorities of traffic; however, flows within the same priority class have no consistent fairness policy. When a connection arrives with a given priority or guaranteed bandwidth, it is placed on a certain queue. As traffic on the router begins to queue up and more connections arrive in that priority class, the new connections always go to the back of the queue and wait until all previously queued packets are sent. The end result is inconsistent and unpredictable delivery of traffic.

Class-Based Queuing (CBQ)

The main differences between PFQ (Per-Flow Queuing) and CBQ (Class-Based Queuing) are:

• CBQ does not provide fairness between connections. All connections that match a certain class share the class’s bandwidth without fairness among the connections.

• CBQ cannot provide CBR (Constant Bit Rate) per connection, because it does not treat individual connections.

• CBQ is usually used with only a single hierarchical level and a limited or fixed number of classes.

Weighted Fair Queuing (WFQ)

Weighted Fair Queuing (WFQ) has limitations similar to CBQ’s. WFQ and CBQ are prioritization methods only; they do not enforce minimum and maximum bandwidth levels per connection or per class. Per-flow WFQ is the prioritization method most similar to Allot’s PFQ mechanism among the known scheduling mechanisms, and it is considered one of the most accurate scheduling methods. The main difference between per-flow WFQ and Allot’s PFQ mechanism is that Allot’s method offers better performance with nearly the same accuracy.

TCP Rate Control

TCP Rate Control uses two main mechanisms to achieve bandwidth management: (a) changing the window size field in the TCP header, and (b) introducing an intentional delay. This indirect approach to bandwidth management has several important drawbacks:

• Inaccurate QoS enforcement – TCP Rate Control tries to enforce the rate per connection by changing window size instead of enforcing the rate directly on the traffic passing through the bandwidth management device. This causes relatively inaccurate enforcement.

• Real-world window sizes are not static – Real networks are highly dynamic. The correct window size for a connection depends on the user-defined policy and on the actual rates of all other connections (not their window sizes). No one can accurately predict the actual transmission rates of all other connections, so the chosen window size is no more than a good guess. Studies have shown that this method provides inaccurate traffic shaping.

• Inaccurate CBR enforcement – TCP Rate Control cannot accurately enforce CBR. The PFQ approach easily provides CBR (Constant Bit Rate) enforcement by using the flow’s queue to control the rate at which packets are transmitted from the NetEnforcer. Since TCP Rate Control does not queue packets, it cannot offer this capability.

• Slow recovery for slowed connections – TCP Rate Control causes slow recovery for connections that were previously slowed down. When extra available bandwidth is detected, the rate control updates the connection’s window size, but only at the next transmission does the connection increase its data rate. In the meantime, your expensive bandwidth, and time, is wasted.

• Dual TCP stack dependence – TCP Rate Control depends strongly on the TCP stacks of both end-points. TCP stacks are not uniform and can change over the years. PFQ does not rely on a specific implementation of flow control (rate control) and only requires that rate control exist at one of the end-points.

• Poor performance for short connections – TCP Rate Control performs poorly on short connections (connections that transfer only a few packets in one session). In current networks, a majority of the traffic consists of short HTTP (web) connections. These connections never get the chance to enlarge or shrink their window size before they finish transferring data, so changing the window size is irrelevant and a direct approach is needed to manage them.

• No broadcast and multicast support - The Rate control method does not support broadcast and multicast traffic.

• No support for UDP traffic – TCP Rate Control cannot enforce QoS on UDP traffic; it works only with TCP. PFQ is protocol-independent, requiring only that some form of flow and rate control exist at the end-points. Applications that use UDP implement rate control in the application layer (for example, RTP uses RTCP for this purpose).

One of the main claims in favor of TCP Rate Control is that packets are never dropped, but unfortunately this is not possible while fully utilizing your link. The reason is simple: if you set the window size conservatively, that is, smaller than you think is necessary, then each window will have unused space and you will not be fully utilizing the link. Otherwise, you will eventually have packet drops and retransmissions.
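The imprecision of window-based rate control follows from the basic relation rate ≈ window / RTT: to hit a target rate, the shaper must guess the connection’s round-trip time, and any error in that guess appears directly in the achieved rate. A back-of-the-envelope check (the numbers are purely illustrative):

```python
def window_for_rate(target_bps, rtt_s):
    """Window size in bytes that yields target_bps if the RTT guess is right."""
    return target_bps / 8 * rtt_s

def achieved_rate(window_bytes, actual_rtt_s):
    """Rate in bits/sec actually delivered by a fixed window over a real RTT."""
    return window_bytes * 8 / actual_rtt_s

w = window_for_rate(1_000_000, rtt_s=0.050)  # aim for 1 Mbps assuming a 50 ms RTT
print(w)                                     # ≈ 6250 bytes
print(achieved_rate(w, 0.050))               # ≈ 1 Mbps  -- RTT guess was right
print(achieved_rate(w, 0.100))               # ≈ 0.5 Mbps -- RTT doubled, rate halved
```

A queue-based shaper like PFQ sidesteps this entirely: it releases packets at the target rate itself, so the achieved rate does not depend on an RTT estimate.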

Summary

The Per-Flow Queuing method treats each flow individually and enforces the defined policy on your network in the most efficient way. PFQ directly controls traffic speed, using TCP’s built-in mechanisms to synchronize the transmission speed between the two end-points. Other methods incorporate rate control (the speed of the data flow) only indirectly, by changing parameters (e.g., window size) in the TCP protocol.


About NetEnforcer and NetPolicy

The NetEnforcer™ family of LAN appliances gives users the power to intelligently shape network bandwidth and deliver system-wide service level guarantees based on the networking needs and business priorities of the IP service provider or enterprise. The NetEnforcer is available in models for enterprise and service provider networks (AC-101, AC-201, AC-301 and AC-401), as well as carrier-grade models (AC-601 and AC-701) with redundant components for fail-safe operation. The different models are optimized to support bandwidths from 128 Kbps to 155 Mbps and from 128 to 10,000 clients/users.

The NetPolicy™ policy-based management suite maximizes the effectiveness of your NetEnforcer policy enforcement devices by providing the real-time usage information and advanced services that your internal and external customers demand. Comprising the Virtual Bandwidth Monitor, the NetAccountant Reporter, and the System Configurator, the NetPolicy suite enables you to let your customers view their own bandwidth usage, to generate advanced traffic usage reports, and to consolidate your policy-based network management tasks.

About Allot Communications

Allot Communications was founded in December 1996 to empower networks for business. The company’s Policy-Powered Networking initiative offers solutions for enterprises and IP service providers that improve network performance and enable the deployment of business-critical, time-sensitive applications. Allot’s flagship product, the NetEnforcer, includes best-of-class bandwidth management/traffic shaping technology for QoS/SLA enforcement, real-time IP monitoring, IP accounting, and load balancing. In enterprise networks, Allot’s solutions allow network managers to enable Quality of Service (QoS) by linking business policies to specific network actions that improve and control users’ productivity and satisfaction. In IP service provider networks, Allot’s QoS/SLA enforcement solutions enable service providers to use over-subscription effectively, maximizing ROI and delivering tiered services or classes of service.

Americas 250 Prairie Center Drive #355 Eden Prairie, MN 55344 Tel (952) 944-3100 Fax (952) 944-3355 Web www.allot.com Email [email protected]

EMEA World Trade Center 1300, Route Des Cretes BP 255 Sophia Antipolis Cedex France 06905 Tel 33-(0)4-92-38-80-27 Fax 33-(0)4-92-38-80-33

Japan Nishi Ginza Bldg 2F 5-5-9 Ginza Chuo-ku Tokyo 104-0061 Japan Tel 81-(0)3-5537-7114 Fax 81-(0)3-5537-5281

Asia Pacific 9 Raffles Place Republic Plaza #27-01 Singapore 048619 Tel: 65-832-5663 Fax: 65-832-5662

International HQ 5 Hanagar Street Industrial Zone Hod-Hasharon, 45800 Israel Tel 972-9-761-9200 Fax 972-9-744-3626

Copyright © 2002 Allot Communications. NetEnforcer, NetPolicy, CacheEnforcer, NetAccountant, NetBalancer, and the Allot logo are trademarks of Allot Communications Ltd. All other brand or product names are trademarks of their respective holders. All information in this document is subject to change without notice. Allot Communications Ltd. assumes no responsibility for any errors that appear in this document.
