Queuing policies help network managers provide users with a high level of service across a
WAN link, as well as control WAN costs. Typically, the corporate goal is to deploy and maintain
a single enterprise network, even though the network supports disparate applications,
organizations, technologies, and user expectations. Consequently, network managers are
concerned with providing all users an appropriate level of service while continuing to
support mission-critical applications and retaining the ability to integrate new technologies.
Figure 30.1 shows a serial interface that is congested and needs queuing implemented.
It’s important to remember that you need to implement queuing only on interfaces that
experience congestion.
The network administrator should understand the delicate balance between meeting the
business requirements of users and controlling WAN costs. Queuing enables network administrators
to effectively manage network resources.
FIGURE 30.1 Queuing policy: IPX, IP, and AppleTalk traffic held in a traffic queue at a congested serial interface (S0) bottleneck
Traffic Prioritization
Packet prioritization has become more important because many types of data traffic need to
share a data path through the network, often congesting WAN links. If the WAN link is not
congested, you don’t need to implement traffic prioritization, although it might be appropriate
to add more bandwidth in certain situations.
Prioritization of traffic will be required on your network if you have, for example, a mixture
of file transfer, transaction-based, and desktop video conferencing traffic. Prioritization is most
effective on WAN links where the combination of bursty traffic and relatively low data rates
can cause temporary congestion. It is typically necessary only on WAN links slower than
T-1/E-1. However, prioritization can also be used across OC (optical carrier)-12 and OC-48
links, because even links of that capacity can be saturated at times (for example, during testing),
and you might still want voice and video to take priority.
Queuing
When a packet arrives on a router’s interface, a protocol-independent switching process handles
it. The router then switches the traffic to the outgoing interface buffer. An example of a protocol-independent
switching process is first-in, first-out (FIFO), which is the original algorithm for
packet transmission. FIFO was the default for all routers until weighted fair queuing (WFQ) was
developed. The problem with FIFO is that transmission occurs in the same order as messages are
received. If an application such as Voice over IP (VoIP) required traffic to be reordered, the network
engineer needed to establish a queuing policy other than FIFO queuing.
Cisco IOS software offers three queuing options as an alternative to FIFO queuing:
Weighted fair queuing (WFQ) prioritizes interactive traffic over file transfers to ensure
satisfactory response time for common user applications.
Priority queuing ensures timely delivery of a specific protocol or type of traffic that is transmitted
before all others.
Custom queuing establishes bandwidth allocations for each type of traffic.
We will discuss these three queuing options in detail in the “IOS Queuing Options” section later
in this chapter.
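As a quick preview of that section, the following minimal sketch shows how each method might be enabled on a serial interface. The interface number, list numbers, protocol matches, and byte count are illustrative assumptions rather than recommendations, and only one queuing method can be active on a given interface at a time.

! Weighted fair queuing (the default on many low-speed serial interfaces)
interface Serial0
 fair-queue
!
! Priority queuing: IP traffic is always transmitted ahead of all other traffic
priority-list 1 protocol ip high
interface Serial0
 priority-group 1
!
! Custom queuing: IP gets its own queue with a larger byte count per service pass
queue-list 1 protocol ip 1
queue-list 1 queue 1 byte-count 3000
interface Serial0
 custom-queue-list 1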
Queuing and Compression
THE CCNP EXAM TOPICS COVERED IN THIS
CHAPTER INCLUDE THE FOLLOWING:
Determine why queuing is enabled, identify alternative
queuing protocols that Cisco products support, and
determine the best queuing method to implement.
Specify the commands to configure queuing.
Specify the commands and procedures used to verify proper
queuing configuration.
Specify the commands and procedures used to effectively
select and implement compression.
Describe traffic control methods used to manage traffic flow
on WAN links.
Plan traffic shaping to meet required quality of service on
access links.
Troubleshoot traffic control problems on a WAN link.
This chapter teaches you how to use both queuing and compression
to help maintain a healthy network, which is important because
user data consists of many types of data packets roaming the internetwork,
hungering for and consuming bandwidth.
Queuing is the act of sequencing packets for servicing—similar to a line at an amusement park with a FastPass or “go to the front” ability.
Compression is the ability to communicate a piece of information with fewer bits, typically by
removing repetitions within the data.
As a network administrator, you can help save precious bandwidth on WAN links, the largest
bottlenecks in today’s networks. With Gigabit Ethernet running the core backbones and 10-gigabit
Ethernet networks just now being deployed, a 1.544Mbps T-1 link is painfully slow. By implementing
both queuing and compression techniques, you can help save bandwidth and get the most for
your money.
In addition, this chapter teaches you the three queuing techniques available on the Cisco
router: weighted fair queuing (WFQ), priority queuing, and custom queuing. You will learn
when to use each type, as well as how to configure each type on your router. We also present
an overview of newer queuing and policing technologies, including low latency queuing (LLQ),
class-based weighted fair queuing (CBWFQ), and committed access rate (CAR).
Finally, this chapter provides the information you need to both understand and configure the
types of compression on Cisco routers. The types of compression techniques covered in this
chapter include header, payload, and link compression.
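As a preview of those sections, a hedged sketch of where each compression type is applied might look like the following; the interface numbers and encapsulations are assumptions used only for illustration.

! Link compression: compress the entire frame on a point-to-point PPP link
interface Serial0
 encapsulation ppp
 compress stac
!
! Header compression: compress TCP/IP headers on a slow serial link
interface Serial1
 ip tcp header-compression
!
! Payload compression: compress only the data portion on a Frame Relay link
interface Serial2
 encapsulation frame-relay
 frame-relay payload-compress packet-by-packet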
Frame Relay Exam Essentials
Understand Frame Relay and its history. Frame Relay is a streamlined version of X.25 without
the windowing and retransmission capabilities. Frame Relay is a layer 2 protocol that was defined
as a network service for ISDN by the CCITT (now ITU-T). The Group of Four extended Frame
Relay in 1990 to allow for a Local Management Interface (LMI) to assist in PVC management.
Understand the two types of virtual circuits (VCs). Know what a switched virtual circuit
(SVC) is used for and what a permanent virtual circuit (PVC) is used for. Understand why you
would use one type over another.
Know what a DLCI is and how it is mapped to Network layer protocols. The data link
connection identifier (DLCI) is used to identify a PVC in a Frame Relay network. The DLCI-to-Network-layer
mapping can be statically configured by an administrator using the frame-relay
map command or can be dynamically set by using Inverse ARP (IARP).
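For example, a static mapping from a next-hop address to a DLCI might look like the following sketch; the IP addresses and DLCI number are assumed values for illustration.

interface Serial0
 encapsulation frame-relay
 ip address 10.1.1.1 255.255.255.0
 ! Reach next hop 10.1.1.2 over local DLCI 102; broadcast permits routing updates
 frame-relay map ip 10.1.1.2 102 broadcast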
Understand the Local Management Interface (LMI). LMI was an extension to Frame Relay
to manage the virtual circuits on a connection. LMI virtual circuit status messages provide communication
and DLCI synchronization between Frame Relay DTE and DCE devices.
Know how to configure Frame Relay and what it uses for congestion control. The
encapsulation frame-relay command is used to configure an interface for Frame Relay
operation. The frame-relay intf-type dce command is used to configure an interface
for DCE operation, but by default, the Frame Relay interface type is DTE. Frame Relay uses
backward explicit congestion notification (BECN) and forward explicit congestion notification
(FECN) messages to control congestion on a Frame Relay switch.
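A minimal sketch of the two interface roles might look like this; the interface number is an assumption, and frame-relay switching is shown only because a router usually acts as the DCE in a back-to-back lab setup.

! On the user router (DTE side), the interface type defaults to DTE
interface Serial0
 encapsulation frame-relay
!
! On the router acting as the Frame Relay switch (DCE side)
frame-relay switching
interface Serial0
 encapsulation frame-relay
 frame-relay intf-type dce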
Understand the options for traffic shaping on a Frame Relay interface. There are many
options for traffic shaping to enable a Frame Relay network to operate more efficiently. You
can have the router slow traffic on a VC in response to BECNs received. You can set up queuing
on a per-VC basis and limit traffic going out of a VC. You need to know what the committed
information rate (CIR) and excess information rate (EIR) are.
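A hedged sketch of Frame Relay traffic shaping, assuming an illustrative 64Kbps CIR and a map class named SHAPE-64K, might look like the following.

! Define the shaping parameters (values are illustrative)
map-class frame-relay SHAPE-64K
 frame-relay cir 64000
 frame-relay adaptive-shaping becn
!
! Enable shaping on the interface and apply the class to all VCs on it
interface Serial0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay class SHAPE-64K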
Know what Cisco IOS commands are used to verify and troubleshoot a Frame Relay connection.
The commands show interface, show frame-relay pvc, show frame-relay map, show
frame-relay lmi, and debug frame-relay lmi are all used to see and verify the operation of
Frame Relay. The command clear frame-relay-inarp is used to delete the dynamic PVC-to-
Network-Layer addressing entries.