How Does CBWFQ Work?

Flow-based WFQ automatically detects flows based on characteristics of the third
and fourth layers of the OSI model. Conversations are singled out into flows by
source and destination IP address, port number, and IP precedence.
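The conversation matching can be pictured as keying each packet on those fields. A minimal Python sketch follows; the field names are invented here, and real IOS hashes these values into a fixed set of dynamic queues rather than tracking exact keys:

```python
def flow_id(pkt: dict) -> tuple:
    """Conversation key for flow-based WFQ (simplified illustration).

    Packets sharing source/destination address, port, and IP precedence
    belong to the same conversation and land in the same queue.
    """
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["precedence"])
```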
If a packet going out an interface needs to be queued because of congestion,
the conversation it belongs to is determined, and a weight is assigned based on
the characteristics of the flow. These weights are assigned to ensure that each
flow gets its fair share of the bandwidth. The weight also determines which
queue the packet will enter and how that queue will be serviced.
The limitation of flow-based WFQ is that the flows are determined automatically,
and each flow simply gets its fair share of the bandwidth. This fair share
is determined by the size of the flow and moderated by IP precedence.
www.syngress.com
Advanced QoS for AVVID Environments • Chapter 8 237
Packets with IP precedence set to values other than the default (zero) are placed
into queues that are serviced more frequently, based on the level of IP precedence,
and thus receive more bandwidth overall. A data stream's weight is the
result of some complex calculations, but the important things to remember are
that weight is a relative number and that the lower the weight of a packet, the
higher that packet's priority. Thus, a data stream with a precedence of 1 is serviced
twice as fast as best-effort traffic. However, even with the effect of IP
precedence on WFQ, sometimes a specific bandwidth needs to be guaranteed to
a certain type of traffic. CBWFQ fulfills this requirement.
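The weight arithmetic itself is easy to sketch. Assuming the commonly cited IOS formula weight = 32384 / (IP precedence + 1) (later releases; older ones used a constant of 4096), a flow's share of the link is proportional to the inverse of its weight:

```python
def wfq_weight(precedence: int) -> int:
    """Approximate IOS flow weight; lower weight means higher priority.
    The 32384 constant is the commonly cited value for later IOS
    releases (an assumption here); older releases used 4096."""
    return 32384 // (precedence + 1)

def relative_shares(precedences):
    """Each flow's bandwidth share is proportional to 1/weight,
    i.e. to (precedence + 1)."""
    inv = [1 / wfq_weight(p) for p in precedences]
    total = sum(inv)
    return [w / total for w in inv]

# A precedence-1 flow competing with a best-effort (precedence-0) flow
# receives twice the bandwidth, matching the 2:1 behavior described above.
shares = relative_shares([0, 1])
```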
CBWFQ extends WFQ to include user-defined classes. These classes can be
determined by protocol, Access Control Lists (ACLs), IP precedence, or input
interfaces. Each class has a separate queue, and all packets found to match the
criteria for a particular class are assigned to that queue.
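In code terms, CBWFQ classification amounts to testing each packet against the match criteria and enqueuing it to the first class that matches, with a catch-all default. The class names and match fields below are invented for illustration:

```python
from collections import deque

# Hypothetical match criteria, loosely analogous to IOS class-maps.
CLASSES = {
    "voice": lambda pkt: pkt["precedence"] == 5,   # match by IP precedence
    "sql":   lambda pkt: pkt["dst_port"] == 1433,  # match by port (as an ACL might)
}

# One separate queue per class, plus the default class.
queues = {name: deque() for name in CLASSES}
queues["class-default"] = deque()

def enqueue(pkt: dict) -> str:
    """Place a packet in the queue of the first class it matches."""
    for name, matches in CLASSES.items():
        if matches(pkt):
            queues[name].append(pkt)
            return name
    queues["class-default"].append(pkt)
    return "class-default"
```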
Once the matching criteria are set for the classes, you can determine how
packets belonging to that class will be handled. It may be tempting to think of
classes as having priority over each other, but it is more accurate to think of each
class having a certain guaranteed share of the bandwidth. Note that this bandwidth
guarantee is not a reservation as with RSVP, which reserves bandwidth
during the entire period of the reservation. It is, instead, a guarantee of bandwidth
that is active only during periods of congestion. If a class is not using the
bandwidth guaranteed to it, other traffic may use it. Similarly, if the class needs
more bandwidth than the allocated amount, it may use or borrow some of the
free bandwidth available on the circuit.
You can specifically configure the bandwidth and maximum packet limit (or
queue depth) of each class. The weight assigned to the class's queue is calculated
from the configured bandwidth of that class. As with WFQ, the actual weight of the
packet is of little importance for any purpose other than the router's internal operations.
What is important is the general concept that classes with a higher assigned
bandwidth get a larger share of the link than classes with a lower assigned bandwidth.
CBWFQ allows the creation of up to 64 individual classes, plus a default
class. The number and size of the classes are, of course, based on the available
bandwidth. By default, the maximum bandwidth that can be allocated to user-defined
classes is 75 percent of the link speed. This maximum is set so there is still some
bandwidth for Layer 2 overhead, routing traffic (BGP, EIGRP, OSPF, and others), and
best-effort traffic. Although not recommended, it is possible to change this maximum
for very controlled situations in which you want to give more bandwidth
to user-defined classes. In this case, caution must be exercised to ensure you allow
enough remaining bandwidth to support Layer 2 overhead, routing traffic, and
best-effort traffic.
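One way to reason about that ceiling is a validation check like the following sketch. The 75 percent default mirrors the behavior described above; the function itself is hypothetical, not an IOS API:

```python
def validate_policy(link_kbps: float, class_kbps: dict,
                    max_reserved_pct: float = 75) -> bool:
    """Return True if the user-defined classes' guaranteed bandwidths fit
    within the allowed fraction of the link (default 75 percent), leaving
    the remainder for Layer 2 overhead, routing, and best-effort traffic."""
    allowed = link_kbps * max_reserved_pct / 100
    return sum(class_kbps.values()) <= allowed
```

For example, on a 1544 Kbps T1, classes guaranteeing 1000 and 10 Kbps fit comfortably under the 1158 Kbps ceiling, while a single 1200 Kbps class would be rejected.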
Each user-defined class is guaranteed a certain bandwidth, but classes that
exceed that bandwidth are not necessarily dropped. Traffic in excess of the class's
guaranteed bandwidth may use the free bandwidth on the link. Free is defined as
the circuit bandwidth minus the portion of the guaranteed bandwidth currently
being used by all user-defined classes. Within this free bandwidth, the packets are
considered by fair queuing along with other packets, their weight being based
on the proportion of the total bandwidth guaranteed to the class. For example,
on a T1 circuit, if Class A and Class B were configured with 1000 Kbps and 10
Kbps, respectively, and if both were transmitting over their guaranteed bandwidths,
the remaining 534 Kbps (1544 − 1010) would be shared between the two
at a 100:1 ratio.
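The arithmetic of that example can be checked directly; the free bandwidth is divided in proportion to each class's guaranteed rate:

```python
link_kbps = 1544                      # T1 circuit
guaranteed = {"A": 1000, "B": 10}     # configured class guarantees, in Kbps

# Bandwidth left over once both guarantees are in use.
free = link_kbps - sum(guaranteed.values())          # 1544 - 1010 = 534

# Excess traffic shares the free bandwidth in proportion to the guarantees,
# so Class A's portion is 100 times Class B's (1000:10).
total = sum(guaranteed.values())
excess = {c: free * g / total for c, g in guaranteed.items()}
```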
All packets not falling into one of the defined classes are considered part of
the default class (or class-default, as it appears in the router configuration). The
default class can be configured to have a set bandwidth like other user-defined
classes, or configured to use flow-based WFQ in the remaining bandwidth and
treated as best effort. The default configuration of the default class is dependent
on the router platform and the IOS revision.
Even though packets that exceed bandwidth guarantees are given WFQ treatment,
bandwidth is, of course, not unlimited. When the fair queuing buffers overflow,
packets are dropped with tail drop unless WRED has been configured for
the class's policy. In the latter case, packets are dropped randomly before buffers
totally run out in order to signal the sender to throttle back the transmission
speed. This random dropping of packets obviously makes WRED a poor choice
for classes containing critical traffic. We will see in a later section how WRED
interoperates with CBWFQ.
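The two drop behaviors can be contrasted in a small sketch. The thresholds and drop probability below are invented illustration values, not IOS defaults:

```python
import random

def accept(queue_len: int, limit: int, wred: bool = False,
           min_th: int = 20, max_th: int = 40, max_p: float = 0.1) -> bool:
    """Decide whether to enqueue a packet.

    Tail drop: refuse only when the queue is completely full.
    WRED-style early drop: above min_th, drop randomly with a probability
    that ramps up toward max_p, signaling senders to slow down before the
    buffers overflow.  (Thresholds here are hypothetical.)
    """
    if queue_len >= limit:
        return False                  # queue full: tail drop either way
    if wred and queue_len > min_th:
        p = min(max_p * (queue_len - min_th) / (max_th - min_th), max_p)
        if random.random() < p:
            return False              # early random drop
    return True
```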