
    Congestion management: Queuing

    Juan


      Queuing, or congestion management, is a mechanism that controls and manages congestion on a given resource. In the case of network devices, queuing helps to manage a temporary excess of load offered to a given interface.


      Congestion should be a temporary event, not a permanent one. A permanent or semi-permanent state of congestion indicates a problem that should be solved by assigning more resources or redesigning the network, not simply treated with QoS congestion management mechanisms. If a particular network interface experiences temporary congestion, queuing deals with it by temporarily buffering packets until bandwidth becomes available again and the buffer drains.

       

      Congestion management (queuing algorithms) and congestion avoidance mechanisms are complementary QoS tools. While the queuing algorithms manage the front of the queue, the congestion avoidance mechanisms manage the tail of the queue.

       

      This discussion covers relevant design aspects of the queuing mechanisms most widely used nowadays: Class-Based Weighted Fair Queuing (CBWFQ) and Low Latency Queuing (LLQ, also known as PQ-CBWFQ).

       

      Let’s start with a brief review of the queuing system components, to get the big picture and easily understand when congestion management mechanisms make sense.


      Queuing system components


      Although congestion can occur at any point in the network, it is most likely to happen at points of speed mismatch and traffic aggregation. A common point of congestion due to a speed mismatch is the LAN-to-WAN demarcation point, while aggregation points in the LAN itself are an example of congestion due to traffic aggregation.

       

       

       


      Each physical interface has a hardware and a software queuing system associated with it. Hardware queuing always uses a FIFO queue (the Tx-ring), while the software queuing system should be configured to optimally match business requirements; its configuration depends on the platform (CBWFQ, LLQ).

       

      Software interfaces have no queues of their own; they congest only when the associated hardware interface congests.


      The hardware queue


      Regardless of the queuing policy applied to an interface, there is always an underlying hardware queuing mechanism (the Tx-ring), which is a FIFO output buffer.


      The hardware queue serves two main purposes:

       

      • Allow link utilization to be driven up to its maximum capacity of 100% by always having packets ready to be placed onto the wire.
      • Notify the software system that the interface is experiencing congestion and needs to activate any QoS congestion management mechanisms in place.

       

      When the hardware queue (Tx-ring) fills to its maximum capacity, the interface is considered congested and a signal is sent to engage any LLQ/CBWFQ policies that have been previously configured on the interface.

      Usually, the length of the hardware queue depends on the configured interface bandwidth. There is a balance between long and short Tx queues: a short Tx queue reduces the maximum time that packets wait in the FIFO queue before being transmitted, reducing the associated delay, but at the same time it increases the number of interrupts, causing higher CPU utilization and a higher probability of lower link utilization.
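
      Tuning the hardware queue is rarely needed, but for illustration only, some IOS platforms and interface types expose its depth through the tx-ring-limit command (the ATM PVC shown and the value 3 are arbitrary assumptions, not recommendations):

      interface ATM0/0.1 point-to-point
       pvc 0/100
        ! shorten the FIFO Tx-ring so the software queuing policy engages sooner
        ! (platform dependent; units may be packets or particles)
        tx-ring-limit 3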


      The software queue


      The software queue is a configurable QoS queuing mechanism that is activated once the physical interface is congested (the hardware queue is full) and deactivated shortly after the interface leaves that state.


      The software queue needs to be designed properly to manage the competing needs of different types of traffic and business applications. During a congestion event, more critical traffic should be treated preferentially, while low-priority traffic can be delayed or even dropped.


      If the interface is in an uncongested state, the software queue does not even exist, and packets are placed directly on the hardware queue to be transmitted onto the wire right away.
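
      As a minimal sketch, the software queuing system is simply the output policy attached to the interface (the interface and the WAN-EDGE policy name are hypothetical; the policy itself is built up in the CBWFQ and LLQ examples further down). The policy only takes effect while the hardware queue is full:

      interface GigabitEthernet0/1
       service-policy output WAN-EDGE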


      The next sections discuss the main characteristics of some software queuing mechanisms: Fair Queuing (FQ), CBWFQ and LLQ.


      FQ


      Fair Queuing (FQ) is one of the simplest queuing tools. FQ is a flow-based queuing system that assigns a fair share of the available bandwidth to every flow.


      Each flow can be identified by:


      • Source and destination IP addresses.
      • Source and destination TCP/UDP ports.
      • IP protocol number.
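
      For reference, a bare-bones sketch of flow-based fair queuing enabled directly on an interface (the interface name is an assumption; on many IOS versions WFQ is already the default on low-speed serial interfaces):

      interface Serial0/0
       fair-queue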

       

      FQ pre-sorters may be used in conjunction with CBWFQ to provide fairness among flows with different characteristics that are assigned to the same traffic class. The default class, where all traffic that is not otherwise classified ends up, should use an FQ pre-sorter to avoid unfairness.
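
      A minimal MQC sketch of that recommendation, with an FQ pre-sorter applied to the default class (the policy-map name is hypothetical):

      policy-map WAN-EDGE
       class class-default
        fair-queue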


      CBWFQ


      CBWFQ is a QoS congestion management (queuing) mechanism used to provide bandwidth guarantees to traffic classes. Traffic classes are based on business needs, and packets satisfying the matching criteria for a given class constitute the traffic for that class. Each class has a reserved queue to which traffic belonging to that class is directed.


      The CBWFQ queuing algorithm combines the ability to guarantee bandwidth with the ability to dynamically ensure fairness among flows within a class of traffic. Each individual queue can be configured with an FQ pre-sorter to ensure fairness among individual flows. FQ does not need to be applied to every queue, and it does not even make sense to do so (FQ pre-sorters add delay to the queuing system).


       




      Each queue is serviced in a Weighted Round Robin (WRR) fashion based on the bandwidth assigned to each class. CBWFQ guarantees bandwidth according to the weights assigned to traffic classes; the weights are internally calculated from the configured bandwidth or bandwidth percentage.
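
      A hedged CBWFQ sketch of the above (class names, match criteria and percentages are illustrative assumptions, not recommendations; combining fair-queue with bandwidth in a user-defined class requires an HQF-capable IOS):

      class-map match-all TRANSACTIONAL
       match dscp af21
      class-map match-all BULK
       match dscp af11
      !
      policy-map WAN-EDGE
       class TRANSACTIONAL
        ! minimum BW guarantee plus an FQ pre-sorter for intra-class fairness
        bandwidth percent 30
        fair-queue
       class BULK
        bandwidth percent 10
       class class-default
        fair-queue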


      LLQ


      LLQ is essentially CBWFQ combined with a strict Priority Queue (PQ). For this reason, LLQ is also known as PQ-CBWFQ. LLQ addresses the drawbacks of CBWFQ:


      • A PQ is added to CBWFQ for Real-Time (RT) traffic needs, such as VoIP or interactive video.
      • Lower-priority classes will continue using CBWFQ.

      High-priority classes (PQ assigned traffic classes) are guaranteed:


      • Low-latency propagation of packets (low delay and consistent jitter).
      • Bandwidth.

       

      To avoid starvation of the non-priority classes, high-priority classes are policed when congestion occurs so that they do not exceed their assigned bandwidth. The LLQ mechanism has a built-in implicit policer that discards the excess traffic offered to the interface. Traffic admitted by the policer gains access to the strict PQ and is handed off to the Tx-ring ahead of any CBWFQ traffic.


      The built-in implicit policer serves to prevent starvation of lower-priority traffic, to provide an LLQ abstraction that allows multiple LLQs to be configured and served (while only one is actually implemented), and to protect the different classes of RT traffic from each other (e.g. VoIP vs. interactive video).
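
      Building on the CBWFQ sketch above, a hedged LLQ sketch with two priority classes protected from each other by the implicit policer (class names, DSCP values and percentages are assumptions):

      class-map match-all VOICE
       match dscp ef
      class-map match-all INTERACTIVE-VIDEO
       match dscp af41
      !
      policy-map WAN-EDGE
       class VOICE
        ! strict PQ; implicitly policed to 10% of the interface BW under congestion
        priority percent 10
       class INTERACTIVE-VIDEO
        ! second LLQ class: same single PQ, separate implicit policer
        priority percent 23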


       

       


      As with any other element of the software queuing system, the LLQ policer is engaged only when the LLQ-enabled interface is experiencing congestion.


      Summary


      A brief summary of the main points about CBWFQ and LLQ:


      CBWFQ:


      • Queuing mechanism that supports custom traffic class (TC) definitions.
      • Provides a minimum bandwidth (BW) allocation (guarantee) per TC.
      • FQ pre-sorters provide fairness between flows of the same TC.

       

      CBWFQ does not provide support for RT traffic needs, so VoIP traffic may still suffer unacceptable delays that can ruin your design. LLQ was created to address this specific problem, providing guaranteed BW and low latency to RT traffic.

      LLQ:


      • PQ add-on to CBWFQ.
      • The PQ is serviced with a strict-priority scheduler for RT traffic.
      • The LLQ scheduler guarantees low latency (delay) and BW for traffic in the PQ.
      • In a congested scenario, if the PQ traffic exceeds its BW guarantee, an implicit congestion-aware policer drops the excess traffic.

       

      I hope you find it useful.