Weighted Fair Queuing

Weighted fair queuing (WFQ) provides equal amounts of bandwidth to each conversation that
traverses the interface. WFQ schedules packets by using the timestamp of the last bit of
a packet as it enters the queue.
Assigning Priorities
WFQ assigns a high priority to all low-volume traffic. Figure 30.3 demonstrates how the timing
mechanism for priority assignment occurs. The algorithm determines which frames belong to either
a high-volume or low-volume conversation and forwards the low-volume packets from the queue
first. Through this timing convention, the remaining packets are assigned an exit priority.
In Figure 30.3, packets are labeled A through F. As depicted in this figure, Packet A will
be forwarded first because it is part of a low-volume conversation, even though the last bit
of Packet B arrives in the queue before the last bit of Packet A does. The
remaining packets are divided between the two high-traffic conversations, with their timestamps
determining the order in which they will exit the queue.
FIGURE 30.3
Priority assignment using WFQ
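The finish-time ordering that Figure 30.3 illustrates can be sketched as a toy scheduler. This is only an illustration of the idea, not Cisco's actual WFQ implementation; the packet sizes and flow weights below are invented for the example.

```python
import heapq

def wfq_order(packets):
    """Return packet names in the order a simplified WFQ scheduler sends them.

    Each packet is (name, arrival, size, weight). The finish time of the
    packet's last bit is approximated as arrival + size / weight; packets
    leave the queue in ascending finish-time order, so a small packet from
    a low-volume (higher-weight) conversation exits first.
    """
    heap = [(arrival + size / weight, name)
            for name, arrival, size, weight in packets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Packet A belongs to a low-volume flow (weight 4); B and C to a bulk flow.
packets = [
    ("A", 0.0, 100, 4),   # finish time 25.0 -> forwarded first
    ("B", 0.0, 1500, 1),  # finish time 1500.0
    ("C", 0.5, 1500, 1),  # finish time 1500.5
]
print(wfq_order(packets))  # ['A', 'B', 'C']
```

Even though Packet B's last bit arrives before Packet A's, A's earlier finish time sends it out first, matching the figure's description.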
Assigning Conversations
We’ve discussed how priority is assigned to a packet or conversation, but it’s also important to
understand the type of information that the processor needs to associate a group of packets with
an established conversation.
The most common elements used to establish a conversation are as follows:

Source and destination IP addresses

MAC addresses

Port numbers

Type of service

DLCI number assigned to an interface
Say a router has two active conversations, one a large FTP transfer and the other an HTTP
session. The router, using some or all of the factors just listed to determine which conversation
a packet belongs to, allocates equal amounts of bandwidth to each conversation. Each of the
two conversations receives half of the available bandwidth.
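The classification step just described can be sketched as a key built from the header fields listed above. This is a simplified illustration; the dictionary field names are invented for the example, and real WFQ computes its own internal hash over these fields.

```python
def conversation_key(pkt):
    """Reduce a packet's headers to a conversation (flow) identifier.

    WFQ groups packets into conversations using fields such as source and
    destination address, port numbers, and type of service; packets that
    share a key share one queue and one slice of the bandwidth.
    """
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["tos"])

# An FTP data transfer and an HTTP session between the same two hosts
ftp  = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
        "src_port": 20, "dst_port": 51000, "tos": 0}
http = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
        "src_port": 80, "dst_port": 51001, "tos": 0}

# Different ports -> different keys -> two conversations, each getting
# half of the available bandwidth.
print(conversation_key(ftp) == conversation_key(http))  # False
```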

IOS Queuing Options

As we’ve said, if your serial links are not congested, you do not need to implement queuing.
However, if the load exceeds the transmission rate for small periods of time, you can use a Cisco
IOS queuing option to help the congestion on a serial link.
To effectively configure queuing on a serial link, you must understand the types of queuing
available. If you choose the wrong type of queuing, you can do more harm than good on the link.
Also, this is not a one-time analysis of traffic patterns. You must constantly repeat your analysis
of your serial link congestion to make sure you have implemented the queuing strategy correctly.
Figure 30.2 shows the queuing options available from Cisco.
FIGURE 30.2
Queuing options
The following steps and Figure 30.2 describe the analysis you should make when deciding on
a queuing policy:
1. Determine whether the WAN is congested.
2. Decide whether strict control over traffic prioritization is necessary and whether automatic configuration is acceptable.
3. Establish a queuing policy.
4. Determine whether any of the traffic types you identified in your traffic pattern analysis can tolerate a delay.
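One way to read the steps above is as a small chooser function. The mapping below is our interpretation of the decision flow, not a table from the chapter: an uncongested link needs no queuing, a need for strict control points to priority or custom queuing, and otherwise the automatic WFQ default is acceptable.

```python
def choose_queuing(congested, strict_control_needed):
    """Map the decision steps to a queuing recommendation.

    An uncongested link needs no queuing; if strict control over traffic
    prioritization is required, priority or custom queuing applies;
    otherwise the automatic WFQ default is acceptable.
    """
    if not congested:
        return "no queuing needed"
    if strict_control_needed:
        return "priority or custom queuing"
    return "weighted fair queuing"

print(choose_queuing(congested=True, strict_control_needed=False))
# weighted fair queuing
```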

Queuing Policy

Queuing policies help network managers provide users with a high level of service across a
WAN link, as well as control WAN costs. Typically, the corporate goal is to deploy and maintain
a single enterprise network, even though the network supports disparate applications,
organizations, technologies, and user expectations. Consequently, network managers are
concerned about providing all users with an appropriate level of service while continuing to
support mission-critical applications and having the ability to integrate new technologies at
the same time.
Figure 30.1 shows a serial interface that is congested and needs queuing implemented.
It’s important to remember that you need to implement queuing only on interfaces that
experience congestion.
The network administrator should understand the delicate balance between meeting the
business requirements of users and controlling WAN costs. Queuing enables network administrators
to effectively manage network resources.
FIGURE 30.1
Queuing policy
(The figure shows IPX, IP, and AppleTalk traffic feeding a traffic queue at a bottleneck on serial interface S0.)

Traffic Prioritization

Packet prioritization has become more important because many types of data traffic need to
share a data path through the network, often congesting WAN links. If the WAN link is not
congested, you don’t need to implement traffic prioritization, although it might be appropriate
to add more bandwidth in certain situations.
Prioritization of traffic will be required on your network if you have, for example, a mixture
of file transfer, transaction-based, and desktop video conferencing. Prioritization is most
effective on WAN links where the combination of bursty traffic and relatively lower data rates
can cause temporary congestion. This is typically necessary only on WAN links slower than
T-1/E-1. However, prioritization can also be used across OC-12 and OC-48 (optical carrier)
links, because tests run at times can saturate even these links, and you might still want voice
and video to have priority.

Queuing

When a packet arrives on a router’s interface, a protocol-independent switching process handles
it. The router then switches the traffic to the outgoing interface buffer. An example of a
protocol-independent switching process is first-in, first-out (FIFO), which is the original algorithm for
packet transmission. FIFO was the default for all routers until weighted fair queuing (WFQ) was
developed. The problem with FIFO is that transmission occurs in the same order as messages are
received. If an application such as Voice over IP (VoIP) required traffic to be reordered, the network
engineer needed to establish a queuing policy other than FIFO queuing.
Cisco IOS software offers three queuing options as an alternative to FIFO queuing:

Weighted fair queuing (WFQ) prioritizes interactive traffic over file transfers to ensure
satisfactory response time for common user applications.

Priority queuing ensures timely delivery of a specific protocol or type of traffic that is transmitted
before all others.

Custom queuing establishes bandwidth allocations for each type of traffic.
We will discuss these three queuing options in detail in the “IOS Queuing Options” section later
in this chapter.

Queuing and Compression

THE CCNP EXAM TOPICS COVERED IN THIS
CHAPTER INCLUDE THE FOLLOWING:

Determine why queuing is enabled, identify alternative
queuing protocols that Cisco products support, and
determine the best queuing method to implement.

Specify the commands to configure queuing.

Specify the commands and procedures used to verify proper
queuing configuration.

Specify the commands and procedures used to effectively
select and implement compression.

Describe traffic control methods used to manage traffic flow
on WAN links.

Plan traffic shaping to meet required quality of service on
access links.

Troubleshoot traffic control problems on a WAN link.

This chapter teaches you how to use both queuing and compression
to help maintain a healthy network, which is important because
user data consists of many types of data packets roaming the internetwork,
hungering for and consuming bandwidth.
Queuing is the act of sequencing packets for servicing—similar to a line at an amusement park
with a FastPass or “go to the front” ability. Compression is the ability to communicate a piece
of information with fewer bits, typically by removing repetitions within the data.
As a network administrator, you can help save precious bandwidth on WAN links, the largest
bottlenecks in today’s networks. With Gigabit Ethernet running the core backbones and 10-gigabit
Ethernet networks just now being deployed, a 1.544Mbps T-1 link is painfully slow. By implementing
both queuing and compression techniques, you can help save bandwidth and get the most for
your money.
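To see how removing repetitions saves bits, the sketch below runs a repetitive payload through a general-purpose compressor. Python's zlib is used purely as an illustration; it is not the compression algorithm Cisco routers run on WAN links.

```python
import zlib

# A highly repetitive payload, as many text-based protocols produce
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100

compressed = zlib.compress(payload)

# The compressor encodes the repetitions once, so far fewer bits would
# need to cross the WAN link.
print(len(payload), len(compressed))
assert zlib.decompress(compressed) == payload  # lossless round trip
```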
In addition, this chapter teaches you the three queuing techniques available on the Cisco
router: weighted fair queuing (WFQ), priority queuing, and custom queuing. You will learn
when to use each type, as well as how to configure each type on your router. We also present
an overview of newer queuing and policing technologies, including low latency queuing (LLQ),
class-based weighted fair queuing (CBWFQ), and committed access rate (CAR).
Finally, this chapter provides the information you need to both understand and configure the
types of compression on Cisco routers. The types of compression techniques covered in this
chapter include header, payload, and link compression.

Frame Relay Exam Essentials

Understand Frame Relay and its history. Frame Relay is a streamlined version of X.25 without
the windowing and retransmission capabilities. Frame Relay is a layer 2 protocol that was defined
as a network service for ISDN by the CCITT (now ITU-T). The Group of Four extended Frame
Relay in 1990 to allow for a Local Management Interface (LMI) to assist in PVC management.
Understand the two types of virtual circuits (VCs). Know what a switched virtual circuit
(SVC) is used for and what a permanent virtual circuit (PVC) is used for. Understand why you
would use one type over another.
Know what a DLCI is and how it is mapped to Network layer protocols. The data link
connection identifier (DLCI) is used to identify a PVC in a Frame Relay network. The
DLCI-to-Network-layer mapping can be statically configured by an administrator using the
frame-relay map command or can be dynamically set by using Inverse ARP (IARP).
Understand the Local Management Interface (LMI). LMI was an extension to Frame Relay
to manage the virtual circuits on a connection. LMI virtual circuit status messages provide communication
and DLCI synchronization between Frame Relay DTE and DCE devices.
Know how to configure Frame Relay and what it uses for congestion control. The
encapsulation frame-relay command is used to configure an interface for Frame Relay
operation. The frame-relay intf-type dce command is used to configure an interface
for DCE operation, but by default, the Frame Relay interface type is DTE. Frame Relay uses
backward explicit congestion notification (BECN) and forward explicit congestion notification
(FECN) messages to control congestion on a Frame Relay switch.
Understand the options for traffic shaping on a Frame Relay interface. There are many
options for traffic shaping to enable a Frame Relay network to operate more efficiently. You
can have the router slow traffic on a VC in response to BECNs received. You can set up queuing
on a per-VC basis and limit traffic going out of a VC. You need to know what the committed
information rate (CIR) and excess information rate (EIR) are.
Know what Cisco IOS commands are used to verify and troubleshoot a Frame Relay connection.
The commands show interface, show frame-relay pvc, show frame-relay map, show
frame-relay lmi, and debug frame-relay lmi are all used to see and verify the operation of
Frame Relay. The command clear frame-relay-inarp is used to delete the dynamic PVC-to-
Network-Layer addressing entries.

The Lowdown on Committed Information Rate (CIR)

We’ve talked formally about CIR—we even presented a calculation—but what does it really mean?
As we have said, the acronym itself stands for committed information rate, which really doesn’t
seem that difficult to understand. But there seems to be widespread misinterpretation of this concept,
especially by some service providers, so let’s attempt to figure the whole thing out.
We’ve discussed terms such as burst rate and phrases such as bursting above your CIR, but
these terms can be misleading. They were devised by network engineers who assumed—you
know what that leads to—that we wouldn’t understand hardcore network-engineering concepts,
so they tried to put them in layman’s terms and botched the whole thing up. In reality,
you’re always “bursting” to your line speed because Frame Relay is an HDLC protocol and
there’s no other way to make it work.
HDLC is a synchronous protocol (which means that the data is synchronized to a clock) that
sends data with a standardized framing and checksum technique. When a frame is transmitted,
the data must be contiguous; that is, there cannot be any holes or spaces between bytes of
data. So if you’re transmitting 500 bytes of data, you can’t send 250 and then wait for a while
and then send the rest. It has to go out as one big chunk. The Frame Relay expression bursting
over your CIR comes into play because there is no way to slow down the data or to change the
length of the chunk after you start transmitting; you just send until you are finished. If you happen
to send too much data because your data chunk is larger than the allotment, you’ve
bursted over your CIR.
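Numerically, whether a chunk "bursts over your CIR" can be checked against the interval's allotment: over each measurement interval Tc, the committed burst is Bc = CIR * Tc bits. A quick sketch, with illustrative numbers:

```python
def bursts_over_cir(chunk_bits, cir_bps, tc_seconds):
    """True if sending chunk_bits in one interval exceeds the allotment.

    The committed burst for an interval is Bc = CIR * Tc bits; a contiguous
    chunk larger than Bc is what the text calls bursting over your CIR.
    """
    bc = cir_bps * tc_seconds
    return chunk_bits > bc

# 64 kbps CIR measured over 125 ms -> Bc = 8,000 bits per interval
print(bursts_over_cir(12_000, 64_000, 0.125))  # True: 12,000 > 8,000
print(bursts_over_cir(4_000, 64_000, 0.125))   # False
```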
So what is the big deal about CIR, then? And why, when you buy Frame Relay from a company
like Qwest, do they quote you a CIR? CIR is the “worst-case” throughput that the Frame Relay
network provider attempts to guarantee. It’s like a restaurant guaranteeing that you’ll always
be able to eat a certain amount of food from its buffet. Like the restaurant, the Frame Relay network
provider can’t guarantee that you’ll always be able to transmit at the CIR (take the case
when everyone on the network happens to be transmitting at once), but they can guarantee it
over a reasonable time span (usually over a span of seconds). Basically, the network backbone
is engineered to handle reasonable loads—just like the number of lanes in a highway. Given a
certain amount of traffic, the data should flow through the backbone without delay. At times,
when unusually heavy traffic exists, you have what is called congestion.

Frame Relay Summary

Frame Relay is one of the most popular WAN protocols in the world. This technology
will become even more critical as corporations stretch their networks globally and the Internet
becomes more pervasive.
To become a successful CCNP, you need to understand the Frame Relay protocol. This technology
makes up the majority of the world’s non-dedicated circuits, and its importance cannot
be overstated.
Frame Relay is the distant cousin to X.25, without some of X.25’s overhead. It does provide
congestion notification, which can be used with traffic shaping to help traffic response. Like
X.25, Frame Relay provides for permanent and switched virtual circuits.
LMI is an extension to the Frame Relay protocol developed by the Frame Relay Forum and
is used to provide management for virtual circuits. This makes the management of DLCI information
easier for the network administrator.
Cisco provides a mechanism, called subinterfaces, that enables a multipoint interface such as Frame
Relay to look like multiple virtual point-to-point or multipoint interfaces. Point-to-point
subinterfaces can be used to solve problems caused by distance-vector routing protocols running
over multipoint interfaces.
Setting up a Cisco router as a Frame Relay switch is not something that you would do often,
but it is a useful feature when you are working in a lab environment. There are many troubleshooting
commands that can be used to verify the configuration of Frame Relay on a Cisco
router. They can be used to see the DLCI-to-Network-layer-address mapping and the current
state of LMI on the router. Frame Relay is a technology used in many networks, and mastering
its configuration and operation will take you far in your networking career.

Configuring Traffic Shaping

To configure Frame Relay traffic shaping, you must first enter the map class configuration mode
so you can define a map class. You enter the map class with the global configuration command
map-class frame-relay name. The name parameter is the name you use to apply the map class
to the VC where you want traffic shaping performed. The command looks like the following:

RouterA#config t
RouterA(config)#map-class frame-relay scott
RouterA(config-map-class)#

Notice that the map-class frame-relay scott command changes the prompt to
config-map-class. This enables you to configure the parameters for your map class.
The map class is used to define the average and peak rates allowed in each VC associated
with the map class. The map class mechanism enables you to specify that the router can dynamically
fluctuate the rate at which it sends traffic, depending on the BECNs received. It also
enables you to configure queuing on a per-VC basis.
To define the average and peak rate for links that are faster than the receiving link can handle,
use the following command:
RouterA(config-map-class)#frame-relay traffic-rate average [peak]
The average parameter sets the average rate in bits per second, which is your CIR. Now, how
do you calculate the peak value? First, start with the EIR. The EIR is the average rate over which
bits will be marked with DE and is given by the formula EIR = Be/Tc, where Be is the excess
burst size and Tc is the committed rate measurement interval. The peak value is then calculated
by taking the CIR plus the EIR, or peak = CIR + EIR.

The peak parameter is optional. An example of a line is as follows:
RouterA(config-map-class)#frame-relay traffic-rate 9600 18000
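The 18000 peak in the example can be checked with the formulas above: with a CIR of 9600 bps and an excess burst of 8400 bits measured over a 1-second Tc, EIR = 8400 bps and peak = 9600 + 8400 = 18,000 bps. A quick sketch (the Be and Tc values are illustrative assumptions, not taken from the example):

```python
def frame_relay_peak(cir_bps, be_bits, tc_seconds):
    """peak = CIR + EIR, where EIR = Be / Tc (excess burst over the
    committed-rate measurement interval)."""
    eir = be_bits / tc_seconds
    return cir_bps + eir

# CIR 9600 bps with an 8400-bit excess burst over a 1-second Tc
print(frame_relay_peak(9600, 8400, 1.0))  # 18000.0
```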
To specify that the router should dynamically fluctuate the rate at which it is sending traffic
depending on the number of BECNs received, use the following command:
RouterA(config-map-class)#frame-relay adaptive-shaping becn
To set bandwidth usage for protocols, you can configure traffic shaping to use queuing on
a per-VC basis. To perform this function, use the following commands:
RouterA(config-map-class)#frame-relay custom-queue-list number
RouterA(config-map-class)#frame-relay priority-group number
You can use either command, depending on the type of queuing you are using. The number
parameter at the end of the command is the queue list number. A detailed discussion of queuing
is presented in the next chapter.
After the map class parameters are completed, you then need to configure the traffic shaping
on the interface you want. The following commands are used to perform traffic shaping on an
interface and to apply the map class and its parameters to a subinterface and, by association, its
corresponding VC:

RouterA#config t
RouterA(config)#interface serial0
RouterA(config-if)#frame-relay traffic-shaping
RouterA(config-if)#interface serial0.16 point-to-point
RouterA(config-subif)#frame-relay class scott
RouterA(config-subif)#frame-relay interface-dlci 16

You first must enable traffic shaping and per-VC queuing on the interface with the
frame-relay traffic-shaping command. You can then go to the interface or subinterface
and assign the map class by using the frame-relay class name command. The example
just shown uses the name scott because that is the name of the map class defined in the
earlier example.
After you have completed the configuration, use the show running-config and the show
frame-relay pvc commands to verify the configuration.

Using Traffic-Shaping Techniques

The following list outlines the traffic-shaping techniques used with Frame Relay:
To control the access rate transmitted on a Cisco router, you can configure a peak rate to
limit outbound traffic to either the CIR or excess information rate (EIR).

You can configure BECN support on a per-VC basis, which will enable the router to then
monitor BECNs and throttle traffic based on BECN-designated packets.

Queuing can be used for support at the VC level. Priority, custom, and weighted fair queuing
(WFQ) can be used. This gives you more control over traffic flow on individual VCs.
It’s also important to understand when you would use traffic shaping with Frame Relay. The
following list explains this:
Use traffic shaping when one site, such as the corporate office, has a higher speed line (for
example, a T1), and the remote branches have slower lines (for example, 56Kbps). This
connection would cause bottlenecks on each VC and would result in poor response times
for time-sensitive traffic such as SNA and Telnet. This can cause packets to be dropped. By
using traffic shaping at the corporate office, you can improve response on each VC.

Traffic shaping is also helpful on a router with many subinterfaces. Because these subinterfaces
will send traffic as fast as the physical link allows, you can use rate enforcement on the subinterface
to match the CIR of the VC. This means you can preallocate bandwidth to each VC.

Traffic shaping can be used to throttle back transmission on a Frame Relay network that
is constantly congested. This can help prevent packet loss and is done on a per-VC basis.

Traffic shaping is used effectively if you have multiple Network layer protocols and want
to queue each protocol to allocate bandwidth effectively. Since IOS version 11.2, queuing
can be performed at the VC level.

Frame Relay Traffic Shaping

Traffic shaping on Frame Relay provides different capabilities, and because this information
might be covered on the exam, it is important that you can describe each one. On production
networks, this information can help you understand whether switch problems are occurring.
In this section, you will learn about traffic-shaping techniques and when to use them. You’ll
then learn how to configure traffic shaping.

Frame Relay Switching Commands

The command used to enable Frame Relay switching on a Cisco router is as follows:
Router_A#config t
Router_A(config)#frame-relay switching
This command must come before any of the other Frame Relay switching–related commands
can be executed, or these commands won’t be allowed. When Frame Relay encapsulation is
enabled on an interface, it defaults to DTE, so you will need to change it to DCE for Frame
Relay switching. For a Frame Relay serial connection to function, you must have a DTE at one
end and a DCE at the other. You first configure the router with the following command:
Router_A(config)#interface serial0
Router_A(config-if)#frame-relay intf-type dce
The clocking on a serial link is provided by the DCE device, which is normally determined
by the type of cable connected to the serial interface. For a Frame Relay connection, however,
DCE status is configured rather than cabled. For Frame Relay, the DCE device is the one that
provides LMI.
Because this interface is now functioning as the Frame Relay DCE device, you can change the
LMI type from the default of Cisco with the following command:
Router_A(config-if)#frame-relay lmi-type ?
cisco
ansi
q933a
Router_A(config-if)#frame-relay lmi-type ansi
The next step in the configuration process is to create the proper DLCI forwarding rules.
These rules dictate that when a frame enters a particular interface on a certain DLCI, it will be
forwarded to another interface and DLCI. Let’s look at such an example on interface serial 1:
Router_A#config t
Router_A(config)#interface serial 1
Router_A(config-if)#frame-relay route 100 interface Serial2 101
This command states that any frame received on interface serial 1, with DLCI 100, shall be
forwarded to interface serial 2, with DLCI 101. You can view all the frame routing information
with the show frame-relay route command. The following router output shows the settings
of Router A:
Router_A#show frame-relay route
Input Intf    Input Dlci    Output Intf    Output Dlci    Status
Serial0       300           Serial1        200            active
Serial1       100           Serial2        101            active
Serial1       200           Serial0        300            active
Serial2       101           Serial1        100            active
Router_A#
The configuration of a router as a Frame Relay switch can be useful for a lab environment
or even as part of a production network.
Now, let’s look at the configuration of the Frame Relay switch:
Router_A#show running-config
Building configuration...
Current configuration:
!
version 11.2
!
hostname Router_A
!
frame-relay switching
!
interface Serial0
no ip address
encapsulation frame-relay
clockrate 56000
frame-relay intf-type dce
frame-relay route 300 interface Serial1 200
!
interface Serial1

no ip address
encapsulation frame-relay
clockrate 56000
frame-relay intf-type dce
frame-relay route 100 interface Serial2 101
frame-relay route 200 interface Serial0 300
!
interface Serial2
no ip address
encapsulation frame-relay
clockrate 56000
frame-relay intf-type dce
frame-relay route 101 interface Serial1 100
!
end
Router_A#
Notice the global command frame-relay switching is at the top of the configuration.
Also notice that all three serial interfaces are configured with the frame-relay intf-type dce command.
On the serial interfaces, you’ll also see that the clockrate command is used to provide clocking
for the line because the router serving as the Frame Relay switch is a DCE device on the
physical interface.