This article describes the different types of traffic policies supported by EdgeOS CLI. Quality of Service (QoS) allows you to control and assign different levels of service to different types of traffic.
Table of Contents
- Traffic Policies
- Related Articles
Quality of Service (QoS) is a feature that allows a router to provide different levels of service to different types of traffic. Basically, it means some traffic will be treated “better” (for example, higher priority, more bandwidth, etc.) than others.
At a high level, you apply QoS by defining different traffic policies and applying them to the traffic going through the router. Abstractly, a traffic policy can be defined as consisting of one or more rules. Each rule has the following form:
|Type of traffic > How the router should treat it|
For example, you can define a simple rule R1 as follows:
|R1: FTP traffic > Limit the bandwidth to 1 Mbps|
You can define a traffic policy using one or more rules. For example, we can define a traffic policy P1 that has two rules, R1 and R2:
|P1: R1: FTP traffic > Limit the bandwidth to 1 Mbps R2: SSH traffic > Make latency lower|
For a traffic policy to take effect, you apply it to a specific direction of traffic on a specific network interface. Here is an example:
|Traffic going out on interface "eth1" > Use traffic policy P1|
Next we describe the details of various traffic policy types.
There are many different types of traffic policies, each with its own uses and limitations. Several of the policies supported by the EdgeOS CLI are discussed below.
- Drop Tail (FIFO)
- Random Early Detection
- Rate Control
- Fair Queue
- Shaper
- Limiter
Drop Tail (FIFO)
The drop-tail policy simply keeps a queue of packets. When a packet needs to be transmitted, it is added to the queue, and the router sends the queued packets out on a First In, First Out (FIFO) basis. The queue has a pre-defined length limit, and when the number of queued packets reaches this limit, any further packets cannot be added and are dropped; this is called "tail drop" (hence the name of the policy). Tail drop occurs when packets arrive faster than the router can send them out, so the queue fills up to its limit.
A drop-tail policy has the following limitations:
- Does not distinguish between different types of traffic.
- Has only a single parameter, the queue length limit.
- Can only be applied to the out direction of traffic on a network interface.
For example, the following CLI commands create a drop-tail policy named policy1 and apply it to the out direction of traffic on interface eth0:
set traffic-policy drop-tail policy1 queue-limit 100
set traffic-policy drop-tail policy1 description "limit queue 100"
set interfaces ethernet eth0 traffic-policy out policy1
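To make the tail-drop behavior concrete, here is a minimal Python sketch of a FIFO queue with a length limit. The `QUEUE_LIMIT` of 100 mirrors the queue-limit value in the example above; the packet representation is purely illustrative.

```python
from collections import deque

QUEUE_LIMIT = 100  # mirrors the queue-limit value in the CLI example

queue = deque()

def enqueue(packet):
    """Queue a packet for transmission; drop it if the queue is full (tail drop)."""
    if len(queue) >= QUEUE_LIMIT:
        return False   # tail drop: the packet is discarded
    queue.append(packet)
    return True

def dequeue():
    """Transmit the oldest queued packet (FIFO order)."""
    return queue.popleft() if queue else None
```

Once 100 packets are waiting, every further `enqueue` fails until the router drains the queue by transmitting packets.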
Random Early Detection
The Random Early Detection (RED) policy differs from the simple drop-tail policy in that it starts dropping packets earlier, before the queue length grows to the pre-defined limit. One benefit of such a policy is that it provides better behavior for TCP traffic by gradually dropping packets to allow TCP endpoints time to detect network congestion and reduce their traffic. This contrasts with drop-tail behavior, which drops all packets once the queue limit has been reached, leading to a larger negative impact on TCP performance.
Like drop-tail, the RED policy can only be applied to the out direction of traffic on a network interface. There are three main RED parameters:
- Minimum queue length: the queue length at which the RED policy will start dropping packets probabilistically.
- Maximum queue length: as the queue length grows from the minimum to the maximum queue length, the RED policy will increase the drop probability from 0 to the maximum drop probability (the next parameter).
- Maximum drop probability: the probability at which the RED policy will drop packets when the queue length has reached the maximum queue length.
Moreover, you can specify a different set of RED parameters for each IP precedence value. Below is an example of CLI commands defining a RED policy:
set traffic-policy random-detect random1 precedence 0 mark-probability 50
set traffic-policy random-detect random1 precedence 0 maximum-threshold 50
set traffic-policy random-detect random1 precedence 0 minimum-threshold 20
set interfaces ethernet eth0 traffic-policy out random1
commit
In this example, the maximum drop probability for IP precedence 0 is 2% (in other words, 1/50).
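The relationship between queue length and drop probability can be sketched in Python. The thresholds and the 1/50 probability follow the example above; note that real RED operates on an exponentially weighted average of the queue length, which this sketch omits for simplicity.

```python
def red_drop_probability(queue_len, min_th, max_th, max_prob):
    """Drop probability grows linearly from 0 at min_th to max_prob at max_th."""
    if queue_len < min_th:
        return 0.0       # below the minimum threshold: never drop
    if queue_len >= max_th:
        return max_prob  # at the maximum threshold: drop at the maximum probability
    return max_prob * (queue_len - min_th) / (max_th - min_th)

# Parameters from the example: minimum 20, maximum 50, mark-probability 50 (1/50 = 2%)
```

Halfway between the thresholds (queue length 35), the drop probability is half the maximum, or 1%.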
Rate Control
The rate-control policy ensures that traffic is transmitted at no more than a pre-defined rate. It can be applied to the out direction, and its main parameter is the maximum rate for the outgoing traffic. For example, the following CLI commands create a rate-control policy that attempts to ensure that outgoing traffic on interface eth0 is transmitted at no more than 1 Mbps.
set traffic-policy rate-control rate1 bandwidth 1mbit
set interfaces ethernet eth0 traffic-policy out rate1
commit
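Rate control of this kind is typically implemented with a token-bucket algorithm, which the following Python sketch illustrates. The 1 Mbps rate matches the example above, while the 1500-byte burst size is an illustrative assumption.

```python
class TokenBucket:
    """Toy token bucket: a packet may be sent only if enough tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token refill rate in bytes per second
        self.burst = burst_bytes     # bucket capacity in bytes
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0              # timestamp of the previous check

    def allow(self, packet_bytes, now):
        """Return True if a packet of this size may be sent at time `now`."""
        elapsed = now - self.last
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False
```

At 1 Mbps (125,000 bytes/second), sending one full-size 1500-byte packet empties the bucket, and roughly 12 ms must pass before the next one is allowed.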
Fair Queue
The fair-queue policy uses the Stochastic Fairness Queueing approach to separate traffic flows (for example, TCP connections) into different buckets and have the router service the buckets one by one. The separation is done using a hash of the source/destination IP addresses and the source port. Probabilistically, this allows the router to service different traffic flows fairly.
The fair-queue policy can only be applied to the out direction. Since the fairness is probabilistic, in some cases multiple flows may be put into the same bucket, which can cause unfairness. To minimize these effects, you can adjust the hash-interval parameter to change the hashing algorithm at fixed time intervals. This is an example of a fair-queue policy:
set traffic-policy fair-queue fair1 hash-interval 10
set interfaces ethernet eth0 traffic-policy out fair1
commit
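The flow-to-bucket mapping can be illustrated with a small Python sketch. The bucket count and the use of SHA-256 are illustrative assumptions (the actual SFQ implementation uses its own, faster hash), and the `perturbation` argument models the periodic hash change that the hash-interval parameter controls.

```python
import hashlib

NUM_BUCKETS = 1024  # illustrative; the real implementation chooses its own size

def flow_bucket(src_ip, dst_ip, src_port, perturbation=0):
    """Map a flow to a bucket by hashing its addresses and source port.

    Changing `perturbation` re-shuffles flows into different buckets,
    modeling what happens at every hash-interval boundary.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{perturbation}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS
```

All packets of one flow land in the same bucket, so servicing buckets round-robin approximates per-flow fairness; two distinct flows colliding in one bucket is the unfairness case the hash-interval re-shuffle mitigates.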
Shaper
A more complicated policy is the shaper policy, which uses the Hierarchical Token Bucket technique to provide different bandwidth guarantees to different classes of traffic on a network link. A simple example is shown below.
set traffic-policy shaper shaper1 bandwidth 100mbit
set traffic-policy shaper shaper1 default bandwidth 60mbit
set traffic-policy shaper shaper1 class 2 bandwidth 20mbit
set traffic-policy shaper shaper1 class 2 match client2 ip source address 10.0.1.2/32
set traffic-policy shaper shaper1 class 3 bandwidth 20mbit
set traffic-policy shaper shaper1 class 3 match client3 ip source address 10.0.1.3/32
set interfaces ethernet eth0 traffic-policy out shaper1
commit
In this example, a shaper policy shaper1 is defined and applied to the out direction on interface eth0, which has 100 Mbps of bandwidth. Two classes of traffic are defined, one for traffic originating from IP address 10.0.1.2 and the other for traffic originating from IP address 10.0.1.3. Each of the two classes is guaranteed 20 Mbps of bandwidth: under load the class always receives that rate, and it can exceed it when spare capacity is available. All other traffic falls into the default class, which has 60 Mbps of reserved bandwidth. So, for example, if the current outgoing traffic on eth0 includes 20 Mbps from 10.0.1.2, 20 Mbps from 10.0.1.3, and 80 Mbps from other sources, the traffic from 10.0.1.2 and 10.0.1.3 will be sent out at its full rate, since each class is guaranteed 20 Mbps, while the other traffic will only be sent out at 60 Mbps.
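The bandwidth arithmetic in this scenario can be sketched in Python. This toy allocation models only the guarantee-then-borrow idea; real HTB distributes spare bandwidth among classes proportionally, which this sketch does not attempt to reproduce.

```python
def shaper_allocation(link_mbps, demand, guarantee):
    """Each class first receives min(demand, guarantee); any spare link
    capacity is then handed out until demands are met or the link is full."""
    alloc = {c: min(demand[c], guarantee[c]) for c in demand}
    spare = link_mbps - sum(alloc.values())
    for c in alloc:
        extra = min(demand[c] - alloc[c], spare)
        alloc[c] += extra
        spare -= extra
    return alloc

# The scenario from the text: a 100 Mbps link with 20 + 20 + 80 Mbps of demand
# against guarantees of 20, 20, and 60 Mbps.
```

With the demands from the example, both matched classes get their full 20 Mbps and the default class is held to its 60 Mbps reservation, since no spare capacity remains.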
Limiter
The limiter policy performs ingress policing and therefore can only be applied to the in direction of traffic on a network interface. You can define multiple classes of traffic and apply a separate bandwidth limit to each class. For example, the following policy sets a limit of 1 Mbps for incoming ICMP traffic on eth0 and a limit of 10 Mbps for all other traffic.
set traffic-policy limiter limit1 class 1 bandwidth 1mbit
set traffic-policy limiter limit1 class 1 match match1 ip protocol icmp
set traffic-policy limiter limit1 default bandwidth 10mbit
set interfaces ethernet eth0 traffic-policy in limit1
commit
The limiter policy is designed for traffic destined for the router itself, and its policing is less accurate when applied to traffic passing through the router. One possible workaround is to create an input interface, to which outbound policies can then be applied.