Translating Inside Local Addresses

NAT operates on a router and usually connects two networks. NAT translates the local nonunique
IP addresses into legal, registered Internet addresses before forwarding packets from the
local network to the Internet or another outside network. To do this, NAT uses the six-step
process shown in Figure 31.2:
1. User 10.1.2.25 sends a packet and attempts to open a connection to 206.100.29.1.
2. When the first packet arrives at the NAT border router, the router checks to see whether
there is an entry for the local address that matches a global address in the NAT table.
3. If a match is found in the NAT table, the process continues to step 4. If a match is not found,
the NAT router creates what is called a simple entry, using an address from its pool of global
addresses. A simple entry is created when the NAT router maps a local IP address (such as
the one currently being used) to a global IP address. In this example, the NAT router will
match the address of 10.1.2.25 to 200.1.1.25.
4. The NAT border router then replaces the local address of 10.1.2.25 (listed as the packet’s
source address) with 200.1.1.25. This makes the destination host believe that the sending
device’s IP address is 200.1.1.25.
FIGURE 31.2 The process of translating inside local addresses (inside hosts 10.1.2.25 through
10.1.2.27 connect through a switch to the NAT border router; the router's NAT table maps
inside IP 10.1.2.25 to inside global IP 200.1.1.25 for traffic to and from the Internet host
206.100.29.1, with callouts 1 through 6 marking the steps in this list)
5. When the host on the Internet using the IP address 206.100.29.1 replies, it uses the NAT
router–assigned IP address of 200.1.1.25 as the destination address.
6. When the NAT border router receives the reply from 206.100.29.1 with the packet destined
for 200.1.1.25, the NAT border router checks its NAT table again. The NAT table
shows that the local address 10.1.2.25 should receive the packet destined for 200.1.1.25,
so the router replaces the destination address with the inside local address, 10.1.2.25.
Steps 2 through 6 are repeated for each individual packet.
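As a point of reference, the following is a minimal IOS sketch of the dynamic translation just described, assuming the inside network hangs off Ethernet0, the outside link is Serial0, and the pool name and access list number are our own:
!
interface Ethernet0
 ip nat inside
!
interface Serial0
 ip nat outside
!
! Inside hosts matching access list 1 draw translations from the pool
ip nat pool GLOBAL-POOL 200.1.1.1 200.1.1.254 netmask 255.255.255.0
access-list 1 permit 10.1.2.0 0.0.0.255
ip nat inside source list 1 pool GLOBAL-POOL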

Performing NAT Operations

Understanding how NAT functions when it is configured a certain way will aid you in your configuration
decisions. This section covers NAT’s operations when NAT is configured to provide
the following functions:
 Translating inside local addresses
 Overloading inside global addresses (sketched briefly after this list)
 Using TCP load distribution
 Overlapping networks
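Of these, overloading inside global addresses (PAT) differs from the simple pool case only in the overload keyword. A hedged sketch, with our own access list number, that translates many inside hosts to the single Serial0 address:
! Many inside hosts share the Serial0 address, distinguished by port number
access-list 1 permit 10.1.2.0 0.0.0.255
ip nat inside source list 1 interface serial0 overload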

NAT Traffic Types

NAT supports many traffic types. The Remote Access exam includes questions on both the supported
and unsupported types. Let’s take a look at these types now.
Supported Traffic Types
NAT supports the following traffic types:

TCP traffic that does not carry source and destination addresses in an application stream

UDP traffic that does not carry source and destination addresses in an application stream

Hypertext Transfer Protocol (HTTP)

Trivial File Transfer Protocol (TFTP)

File Transfer Protocol (FTP PORT and PASV commands)

Archie, which provides lists of anonymous FTP archives

Finger, a software tool for determining whether a person has an account at a particular
Internet site

Network Time Protocol (NTP)

Network File System (NFS)

rlogin, rsh, rcp (TCP, Telnet, and Unix entities to ensure the reliable delivery of data)
NAT-supported protocols that carry the IP address in the application stream include:

Internet Control Message Protocol (ICMP)

NetBIOS over TCP (datagram, name, and session services)

Progressive Networks’ RealAudio

CUseeMe Networks CUseeMe

Xing Technology’s StreamWorks

DNS “A” and “PTR” queries

H.323 (IOS versions 12.0(1)/12.0(1)T or later)

Microsoft’s NetMeeting (IOS versions 12.0(1)/12.0(1)T or later)

VDOnet’s VDOLive (IOS versions 11.3(4)/11.3(4)T or later)

Microsoft’s VXtreme (IOS versions 11.3(4)/11.3(4)T or later)

IP Multicast (IOS version 12.0(1)T or later; source address translation only)

Point-to-Point Tunneling Protocol (PPTP) support with Port Address Translation (IOS
version 12.1(2)T or later)

Skinny Client Control Protocol, IP Phone to Cisco CallManager (IOS version 12.1(5)T or later)
Unsupported Traffic Types
NAT does not support some traffic types, including the following:
 Routing table updates
 DNS zone transfers
 BOOTP and DHCP
 Talk
 Ntalk
 Simple Network Management Protocol (SNMP)
 NetShow

Disadvantages of NAT

Now that you know about the advantages of using NAT, you should learn about the disadvantages
as well. The following is a list of some of the disadvantages of using NAT compared to
using individually configured, registered IP addresses on each network host:

NAT increases latency (delay). Delays are introduced into the switching path by the
processor overhead needed to translate each IP address contained in the packet headers. The
router’s CPU must process every packet to decide whether the router needs to translate
and change the IP header. Some supported Application layer protocols, such as DNS, also
carry IP addresses in their payload that must be translated, adding further delay.

NAT hides end-to-end IP addresses, which renders some applications unusable. Applications
that use the host IP address inside the payload of the packet will break when NAT
translates the IP addresses across the NAT border router.

Because NAT changes the IP address, there is a loss of IP end-to-end traceability. The multiple
packet-address changes confuse IP tracing utilities. This provides one advantage from
a security standpoint: It eliminates some of a hacker’s ability to identify a packet’s source.

NAT also makes troubleshooting or tracking down where malicious traffic is coming from
more troublesome. Because the traffic could be coming from a single user who is using different
IP addresses depending on when the traffic passes through the NAT router, accountability
becomes much more difficult.

Advantages of NAT

There are many advantages of using NAT. Some of the more important benefits include the
following:

NAT enables you to incrementally increase or decrease registered IP addresses without
changes to hosts, switches, or routers within the network. (The exception to this is the NAT
border routers that connect the inside and outside networks.)

NAT can be used either statically or dynamically (configuration sketches follow this list):

Static translation occurs when you manually configure an address translation table with
IP addresses. A specific address on the inside of the network uses a specific outside IP
address—manually configured by the network administrator—to access the outside network.
The network administrator can also translate an inside IP address and port pair
to an outside IP address and port pair.

Dynamic mappings enable the administrator to configure one or more pools of outside
IP addresses on the NAT border router. The addresses in the pools can be used by nodes
on the inside network to access nodes on the outside network. This enables multiple
internal hosts to utilize a single pool of IP addresses.

NAT can share packet processing among multiple servers by using the Transmission
Control Protocol (TCP) load distribution feature. Load distribution maps one individual
global address to multiple local server addresses; the router uses this mapping in
round-robin fashion to distribute incoming connections across the servers (see the
sketch following this list).
There is no fixed limit to the number of NAT sessions that can be used on a router or
route processor; the practical limit is the amount of DRAM the router contains.
The DRAM must store the configurable NAT pools and handle each translation.
Each NAT translation uses approximately 160 bytes, which translates
into about 1.53MB for 10,000 translations. This is far more translations than the
average router needs to provide.
If your internal addresses must change because you have changed your ISP or have merged
with another company that is using the same address space, you can use NAT to translate
the addresses from one network to the other.
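A hedged sketch of static translation, including the port-pair form mentioned in the list above (all addresses are our own):
! One-to-one static mapping: 10.1.2.25 always appears outside as 200.1.1.25
ip nat inside source static 10.1.2.25 200.1.1.25
! Static port pair: the inside web server's port 80 appears as 200.1.1.26 port 8080
ip nat inside source static tcp 10.1.2.26 80 200.1.1.26 8080
And a sketch of TCP load distribution, assuming 200.1.1.100 is the advertised virtual address and the three real servers are ours; the rotary keyword tells NAT to hand out the pool addresses in round-robin order:
! Incoming connections to 200.1.1.100 rotate across 10.1.2.10-10.1.2.12
ip nat pool WEB-SERVERS 10.1.2.10 10.1.2.12 prefix-length 24 type rotary
access-list 20 permit 200.1.1.100
ip nat inside destination list 20 pool WEB-SERVERS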

How NAT Works

NAT is configured on the router or route processor closest to the border of a stub domain
(a LAN that uses IP addresses—either registered or unregistered for internal use) between
the inside network (local network) and the outside network (public network such as an ISP
or the Internet). The outside network can also be another company, such as when two networks
merge after an acquisition.
An illustration of NAT is shown in Figure 31.1. You should note that the router separates
the inside and outside networks. NAT translates the inside local addresses into the globally
unique inside global IP address, enabling data to flow into the outside network.
FIGURE 31.1 The NAT border router on the border of an inside network and an outside
network such as the Internet
NAT takes advantage of there being relatively few network users using the outside network
at any given time. NAT does this by using process switching to change the source address on the
outbound packets, directing them to the appropriate router. This enables fewer IP addresses to
be used than the number of hosts in the inside network. Before the implementation of NAT on
all Cisco enterprise routers, the only way to implement these features was to use pass-through
firewall gateways.
NAT was first implemented in Cisco’s IOS release 11.2 and spelled out in
RFC 1631.

NAT Terminology

Before continuing with this chapter, you should be familiar with the following Cisco terms:
Inside network The inside network is the set of network addresses that is subject to translation.
The IP addresses used within the network are invalid on an outside network such as the
Internet or the network’s ISP. Often, the IP addresses used in the inside network are obsolete,
or an IP address is allocated in a range specified by RFC 1918 or RFC 3330 (which reserves certain
IP addresses for internal use only) and is not Internet routable.
Outside network The outside network is not affiliated with or owned by the inside network
organization. (Keep in mind we are referring to a network—not network addresses.) This can
be the network of another company when two companies merge, but typically is the network
of an ISP. The addresses used on this network are legally registered and Internet-routable
IP addresses.
Inside local IP address The inside local IP address is the IP address assigned to an interface in
the inside network. This address can be illegal to use on the Internet, or it can be an address
defined by RFC 1918 as unusable on the Internet. In both cases, this address is not globally
routable; if the address happens to be globally routable, it is registered to another organization
and still cannot be used by the inside network on the Internet.
Inside global IP address The inside global IP address is the IP address of an inside host as it
appears to the outside network. This is the “translated IP address.” Addresses can be allocated
from a globally unique address space, typically provided by the ISP (if the enterprise is connected
to the global Internet).
Outside local IP address The outside local IP address is the IP address of an outside host as it
appears to the inside network. These addresses can be allocated from the RFC 1918 space if desired.
Outside global IP address The outside global IP address is the configured IP address assigned
to a host in the outside network.
Simple translation entry A simple translation entry is an entry in the NAT table that results
when the NAT router matches an illegal inside IP address to a globally routable IP address that
is legally registered for Internet use.
Extended translation entry An extended translation entry is a translation entry that maps one
IP address and port pair to another.
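These terms map directly onto the columns of the show ip nat translations display. A hypothetical example, reusing the addresses from this chapter, with one simple entry and one extended entry:
Router#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
--- 200.1.1.25         10.1.2.25          ---                ---
tcp 200.1.1.25:1024    10.1.2.25:1024     206.100.29.1:23    206.100.29.1:23
The first line is a simple entry (addresses only); the second is an extended entry, mapping address and port pairs for a Telnet session.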

Understanding Network Address Translation (NAT)

Before exploring the details of
Network Address Translation (NAT)
operations, configuration,
and troubleshooting, it’s important to thoroughly understand what it is, the terminology associated
with it, its advantages and disadvantages, and the traffic types it supports. NAT is a protocol
that maps an inside IP address used in the local, or inside, network environment to the
outside network environment and vice versa. There are many reasons for using NAT in your
network environment. Some of the benefits you will receive from NAT include the following:

Enabling a private IP network to use unregistered IP addresses to access an outside network
such as the Internet

Providing the ability to reuse assigned IP addresses that are already in use on the Internet

Providing Internet connectivity in networks where there are not enough individual Internet-registered
IP addresses

Appropriately translating the addresses in two merged intranets such as two merged companies

Translating internal IP addresses assigned by old Internet service providers (ISPs) to a new
ISP’s newly assigned addresses without manually configuring the local network interfaces

Network Address Translation and Port Address Translation

THE CCNP EXAM TOPICS COVERED IN THIS
CHAPTER INCLUDE THE FOLLOWING:

Describe the process of Network Address Translation (NAT).

Configure Network Address Translation (NAT).

Troubleshoot nonfunctional remote access systems.

As the Internet grows and individuals increasingly need more than
one IP address to use for Internet access from their home and
office PCs, their phones (Voice over IP, VoIP), their office’s network
printers, and many other network devices, the number of available IP addresses is diminishing.
To add insult to injury, the early designers of TCP/IP—back when the Internet project
was being created by the Advanced Research Projects Agency (ARPA)—never anticipated the
explosion of users from private industry that has occurred.
ARPA’s goal was to design a protocol that could connect all the United States Defense
Department’s major data systems and enable them to talk to one another. The ARPA designers
created not only a protocol that would enable all the Defense Department’s data systems to
communicate with one another, but one that the entire world now relies on to communicate
over the Internet.
Unfortunately, because of the unexpected popularity of this protocol, the distribution of IP
addresses was inadequately planned. As a result, many IP addresses are unusable, and many are
placed in networks that will never use all the addresses assigned to them. For example, every
organization with a Class A network, which provides 16,777,214 addresses per Class A assignment,
would find it difficult to use more than half of the addresses available, and those that are
not used are wasted.
All the Class A and Class B addresses are already assigned to organizations. There are 65,534
addresses available in each Class B address range. A new organization can now obtain only
Class C address ranges, each of which provides just 254 addresses, so an organization that
needs more addresses must acquire additional Class C ranges.
IP version 6 will eventually alleviate IP addressing problems because it increases the address
space from 32 bits to 128 bits, but its adoption has been slow because of the problems associated
with infrastructure and application support. Outside the United States, IPv6 is being paid
more attention because less IPv4 address space is available. Specifically, Japan has implemented
a large-scale IPv6 network because of the number of addresses needed and the availability of
IPv6 address space.
This chapter introduces you to Network Address Translation (NAT) and Port Address
Translation (PAT). Cisco routers and internal route processors use these two protocols to allow
the use of a limited number of registered IP addresses by a large number of users and devices.
As you progress through the chapter, you will learn the differences between NAT and PAT, as
well as their operational boundaries, how to configure them, and how to troubleshoot problems
associated with these two protocols.

Queuing and Compression Exam Essentials

Understand the queuing technologies available in Cisco IOS. You should know that there is
weighted fair queuing (WFQ), priority queuing, and custom queuing. Each has its strengths and
weaknesses, but WFQ is the default technique used for 2.048Mbps or slower interfaces. WFQ can
be tuned with the fair-queue command to enable more conversations to be tracked per interface.
Know how to configure priority queuing and when it is best used. Picking the proper
queuing mechanism is very important, and priority queuing is ideal if you want to ensure that
certain traffic gets priority over other traffic. A priority queue is set up with the priority-list
command and is applied to the interface with the priority-group command.
Know how to configure custom queuing and when it is best used. Custom queuing is a
technique that enables the WAN designer to allocate a certain amount of bandwidth to different
traffic types. There are 16 queues that can be set up to contain certain types of traffic,
and each of these queues can be allocated a specific amount of bandwidth. A custom queue
is configured with the queue-list command and is applied to the interface with the
custom-queue-list command.
Understand compression techniques and algorithms and when to use them. You should know
that the compression techniques are TCP header compression, payload compression, link compression,
and MPPC compression. TCP header compression is used on point-to-point links and
when smaller TCP packets, such as Telnet traffic, are being sent over the link. Payload compression
can be used on links other than point-to-point and is used to compress the data portion of the
packet; it will not alter the layer 2 and layer 3 headers. Link compression is used only on
point-to-point links but is protocol independent. It will compress the whole packet and encapsulate that
packet in another protocol to ensure reliability and sequencing. MPPC is used when the Cisco
device needs to talk to Windows-based clients.
Know the troubleshooting commands used for queuing and compression. The commands to
show queuing are show queue and show queueing [fair | priority | custom]. These can
be used to display queuing on the device and the counters involved. The compression troubleshooting
command is show compress, which can be used to view the compression efficiency and status.

Viewing Compression Information

To view information about the status of compression on the router, use the show compress
command. The following is a sample of the output from this command:
Router2#show compress
Serial1
uncompressed bytes xmt/rcv 82951/85400
1 min avg ratio xmt/rcv 0.798/0.827
5 min avg ratio xmt/rcv 0.789/0.834
10 min avg ratio xmt/rcv 0.779/0.847
no bufs xmt 0 no bufs rcv 0
restarts 0
Additional Stacker Stats:
Transmit bytes: Uncompressed = 27044 Compressed = 66749
Received bytes: Compressed = 76758 Uncompressed = 0
This command shows the uncompressed byte count of compressed data transmitted and
received as well as the ratio of data throughput gained or lost in the compression routine in the
last 1, 5, and 10 minutes. If the restarts are more than 0, the compression routine detected that
the dictionaries were out of sync and restarted building the compression dictionary. Using this
command, you will be able to see if compression is making a difference for the type of traffic
being compressed.
Summary
Queuing is an important technology when using WAN links. As the speed of LAN interfaces
increases more and more, data will be expected to traverse WAN links. Congestion is inevitable,
so to ensure that important data gets through, a queuing mechanism is necessary. There are
many queuing options available when using Cisco IOS.
Weighted fair queuing (WFQ) is the default technique when using interfaces of 2.048Mbps
or slower. WFQ will track conversations and enable lower bandwidth conversations to take
priority over higher bandwidth conversations. This feature can be tuned to allow tracking of
more conversations.
Priority queuing is used to classify traffic into four queues of high, medium, normal, and low.
Each queue is serviced sequentially, and the traffic is forwarded from the higher level queues
before the router services the lower level queues. The lower level queues might not be serviced
for quite some time if there is a large amount of higher priority traffic.
Custom queuing can allocate a certain percentage of the total bandwidth available on the
interface. There are 16 queues available, which can hold a certain type of traffic, and each
queue can be allocated a specific amount of bandwidth. Custom queuing does not suffer
from queue starvation as priority queuing can.
To alleviate congestion on WAN links, compression can be configured on the interface. The
types of compression algorithms are Stac, Predictor, and MPPC. MPPC is used primarily for Windows
clients, whereas Stac and Predictor can be used on many types of WAN technologies. TCP
header compression is the simplest compression technique. Payload compression compresses the
payload portion of the packet and does not alter the layer 2 or layer 3 header information. The
link compression algorithm uses Stac or Predictor to compress the traffic and then encapsulates
the compressed traffic in another link layer such as PPP or LAPB to ensure error correction and
packet sequencing.
Various techniques can ensure that the queuing and compression technologies are working
correctly. The show queue command is used to see queuing on the interface, and the show
queueing [priority | custom | fair] command is used to display the queuing technique
configured on the router. For compression, the show compress command is used to see how
well the compression process is compressing traffic and whether problems might occur with the
compression process.

Compression Considerations

You need to keep a few considerations in mind when selecting and implementing a compression
method:
Modem compression Modems can compress data to as little as one-quarter of its original size.
There are different types of modem compression techniques, so make sure you understand that
modem compression and router software compression are not compatible. However, the modems
at both ends of the connection will try to negotiate the best compression method to use. If compression
is being done at the modem, do not configure the router to also run compression.
Encrypted data Compression happens at the Data Link layer (layer 2), whereas encryption
functions at the Network layer (layer 3) and above, including application (layer 7) encryption
of the payload. After the application encrypts the data, the data is sent to the router,
which provides compression. The problem is that encrypted data typically does not have
repetitive patterns, so the data will not compress. The router will spend a lot of processor
time determining that the traffic is not compressible. So, if data is encrypted, do not attempt to
compress it by using a layer 2 compression algorithm.
CPU cycles versus memory The amount of memory that a router must have varies according
to the protocol being compressed, the compression algorithm, and the number of configured
interfaces on the router. Memory requirements will be higher for Predictor than for Stac, but
Stac is typically more processor intensive.

Payload Compression

Payload compression, also known as per-virtual-circuit compression, compresses only the payload,
or data portion, of the packet. The header of the packet is not touched.
Link Compression
Link compression, also known as per-interface compression, compresses both the header
and payload section of a data stream. Unlike header compression, link compression is protocol
independent.
The link compression algorithm uses Stac or Predictor to compress the traffic and then
encapsulates it in another link-layer protocol such as PPP or LAPB, ensuring error correction
and packet sequencing. Cisco’s proprietary HDLC protocol is capable of using Stac compression only.
Predictor Use this approach to solve bottleneck problems caused by a heavy load on the
router. The Predictor algorithm learns data patterns and “predicts” the next character by using
an index to look up a sequence in a compression dictionary. This is sometimes referred to as
lossless because no data will be lost during the compression and decompression process.
Stac This method is best used when bottlenecks are related to bandwidth issues. The Stac
method searches the input data stream for redundant strings and replaces them with a token
that is shorter than the original redundant data string.
If the data flow traverses a point-to-point connection, use link compression. In a link
compression environment, the complete packet is compressed and the switching information
in the header is not available for WAN switching networks. Typical examples are
leased lines or ISDN.
If you use payload compression, you should not use header compression. This
is redundant, and you should configure payload compression only.
In the following example, we turned on LAPB encapsulation with Predictor compression and
set the maximum transmission unit (MTU) and the LAPB N1 parameters:
Router#config t
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface serial0
Router(config-if)#encapsulation lapb
Router(config-if)#compress ?
predictor predictor compression type
stac stac compression algorithm
Router(config-if)#compress predictor
Router(config-if)#mtu 1510
Router(config-if)#lapb n1 12096
The LAPB N1 parameter represents the number of bits in an LAPB frame, which holds an X.25 packet.
It is set to eight times the MTU size, plus any protocol overhead, when using LAPB over leased lines.
For instance, here N1 is 12,096: 12,080 bits (that is, 1,510 × 8) plus 16 bits for protocol overhead. The
LAPB N1 parameter can cause major problems if it’s not configured correctly, and most often it
should be left at its default value. Even so, setting it can be really valuable if you need to change the MTU size.

TCP Header Compression

TCP header compression as defined in RFC 1144 compresses only the protocol headers, not the
packet data. TCP header compression lowers the overhead generated by the disproportionately
large TCP/IP headers as they are transmitted across the WAN.
It is important to realize that the layer 2 header is not touched, and only the headers at
layers 3 and 4 are compressed. This enables the layer 2 header to direct that packet across a
WAN link.
You would use the header compression on a network with small packets and a few bytes
of data such as Telnet. Cisco’s header compression supports X.25, Frame Relay, and dial-on-demand
WAN link protocols. Because of processing overhead, header compression is generally
used at lower speeds such as 64Kbps links.
TCP header compression is achieved by using the ip tcp header-compression command:
Router(config)#interface serial0
Router(config-if)#ip tcp ?
compression-connections Maximum number of compressed connections
header-compression Enable TCP header compression
Router(config-if)#ip tcp header-compression ?
passive Compress only for destinations which send compressed headers
The passive parameter is optional and is used to instruct the router to compress the headers
of outbound TCP traffic if the other side is also sending compressed TCP headers. If you don’t
include the passive argument, all TCP traffic will use compressed TCP headers.
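To finish the command from the prompt shown above, you would enter one of the two forms; a sketch of the passive variant:
Router(config-if)#ip tcp header-compression passive
Omit passive to compress the headers of all outbound TCP traffic unconditionally.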

Compression

The Cisco IOS provides congestion control on WAN links by adding compression on serial
interfaces. This can ease the WAN bandwidth bottleneck problems by using less bandwidth on
the link. Along with using the different queuing methods discussed earlier in this chapter, one
of the more effective methods of WAN optimization is compression of the data traveling across
the WAN link.
Software compression can significantly affect router CPU performance, and the Cisco rule of
thumb is that the router’s CPU load must not exceed 65 percent when running software compression.
If it does exceed this limit, it would be better to disable any compression running.
Cisco equipment supports the following types of compression:
 TCP header compression
 Payload compression
 Link compression
 Microsoft Point-to-Point Compression (MPPC)
By default, Cisco routers transmit data across serial links in an uncompressed format, but by
using Cisco serial compression techniques, you can make more efficient use of your available
bandwidth. It’s true that any compression method will cause overhead on the router’s CPU, but
the benefits of compression on slower links can outweigh that disadvantage.
Figure 30.7 shows the three types of compression used in a Cisco internetworking environment.
FIGURE 30.7 Cisco serial compression methods (link compression covers the entire frame,
including the PPP, HDLC, X.25, Frame Relay, or ATM header; header compression covers
only the IP/TCP header; payload compression covers only the payload)
The compression methods are as follows:
TCP header compression Cisco uses the Van Jacobson algorithm to compress the headers of
IP packets before sending them out onto WAN links.
Payload compression This approach compresses the data but leaves the header intact. Because
the packet’s header isn’t changed, it can be switched through a network. This method is the one
generally used for switching services such as X.25, Switched Multimegabit Data Service (SMDS),
Frame Relay, and ATM.
Link compression This method is a combination of both header and payload compression,
and the data will be encapsulated in either PPP or LAPB. Because of this encapsulation, link
compression allows for transport protocol independence.
Microsoft Point-to-Point Compression (MPPC) protocol This is defined in RFC 2118 and
enables Cisco routers to exchange compressed data with Microsoft clients. You would configure
MPPC when exchanging data with a host using MPPC across a WAN link. The MPPC is not
discussed further in this section.
The Cisco compression methods are discussed in more detail next.

Committed Access Rate

Committed access rate (CAR) is an older bandwidth and policing system; however, it is commonly
used in concert with bandwidth management. As noted before, like CBWFQ, committed access
rate can specify a bandwidth guarantee to an application. However, CAR also specifies a hard
upper limit for that application. This can be very useful when you want to reserve bandwidth
for bursty applications. One example of this would be file transfer with Common Internet File System
(CIFS) and other protocols on a circuit with web traffic. An administrator might wish to use
CAR to allocate 128Kbps for HTTP/web traffic, which would have the same impact as saying all
traffic on a T-1 except HTTP/web has over 1,400Kbps available. The advantage is that an administrator
need not define each of the other applications to implement this solution.
CAR has some benefits. However, in many enterprises with a QoS strategy, CBWFQ is
leading the way, and administrators are opting to protect important applications with the
newer technique. You should evaluate CAR and CBWFQ for your specific environment.
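A minimal sketch of that 128Kbps web-traffic example with CAR, assuming access list 101 (our number) matches the web traffic; rate-limit takes an average rate in bits per second followed by normal and excess burst sizes in bytes:
access-list 101 permit tcp any any eq www
!
interface Serial0
! Web traffic conforms up to 128Kbps and is dropped beyond that
rate-limit output access-group 101 128000 8000 8000 conform-action transmit exceed-action drop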

Class-Based Weighted Fair Queuing

Class-based weighted fair queuing (CBWFQ) builds upon WFQ by adding the concept of traffic
classes. Classes can be defined by a tag within the frame such as type of service (ToS) or differentiated
services code point (DSCP). These tags are added by the end station or the access router
and are used to forward packets through the network core without each router re-examining the
packet to determine that datagram’s priority. We are not defining the methodology used but
simply explaining the fact that you can use this information for CBWFQ.
Common implementations of CBWFQ establish three or four classes of application services,
typically described as gold, silver, bronze, and other. This categorization does not include network
traffic such as routing updates, which should always have priority over user application traffic.
Although some users will take exception to their traffic being described as a low priority, the network
administrator needs to constrain the total number of classes to keep administration manageable
and negate a situation in which bandwidth is being managed to the bit via QoS policy.
One of the strongest benefits to CBWFQ is the ability to define a specific amount of bandwidth
to an application. For example, Financial Information Exchange (FIX) is a common financial systems
protocol that might warrant special attention. Perhaps this application requires a guaranteed
256Kbps to prevent application failure on a T-1 link. CBWFQ can provide this guarantee, and,
perhaps more importantly, will allow the application to use more than the 256Kbps if bandwidth
is available. This is different from CAR, discussed in the next section, which establishes a hard
limit on the bandwidth available to a specific protocol. Please note that by default you cannot allocate
more than 75 percent of the link’s total bandwidth for management by CBWFQ.
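A hedged sketch of that FIX guarantee using the modular QoS CLI, assuming access list 110 identifies the FIX traffic and the class and policy names are our own:
! Classify FIX traffic into its own class
class-map match-all FIX
 match access-group 110
!
! Guarantee the class 256Kbps during congestion; it may use more when bandwidth is free
policy-map WAN-EDGE
 class FIX
  bandwidth 256
!
interface Serial0
 service-policy output WAN-EDGE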
With regard to traffic classes, the model is fairly straightforward. When congestion occurs,
the queue will process packets in the gold class before those in the silver class within the constraints
of WFQ. As such, the administrator is defining that the queue should be fair to all applications,
but that gold traffic is the most important. This will lead to the managed unfairness that
is the basis for all QoS policies; under congestion, the network will have to discard something
to stay within the available resources.
ToS and DSCP are not commonly accepted from end nodes because many
applications and some operating systems will automatically tag all packets for
the highest priority. It is recommended that you configure your edge routers to
ignore the end station and tag based on address or port information.

Low Latency Queuing

Low latency queuing (LLQ) is actually a strict priority queue within class-based weighted fair
queuing (CBWFQ), discussed in the next section. LLQ is Cisco’s solution for voice and other
very small packets that require real-time processing. LLQ operates by prioritizing key packets
to the front of the queue. Because these packets are small by nature, there is little risk of queue
starvation or other problems. However, administrators should evaluate the demands of other
traffic within the network.
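A minimal sketch of LLQ, assuming a VOICE class matched by access list 115 (ours) and an illustrative 128Kbps allowance; the priority command is what creates the strict-priority queue inside CBWFQ:
class-map match-all VOICE
 match access-group 115
!
! Voice gets strict priority, capped at 128Kbps during congestion
policy-map VOICE-POLICY
 class VOICE
  priority 128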

Cisco’s Newer Queuing Technologies

Because of their notable absence in the topics covered by the Remote Access examination, we only
briefly cover some of the newer queuing management technologies in this section. Queue control
has become a more important issue in remote access networking with the proliferation of voice
services and other real-time protocols. As these protocols suffer from congestion and low bandwidth,
they are strong candidates for quality of service (QoS), of which queuing is a part.
Remember that queuing is intended to manage the transmission of packets held in the
router’s buffer. Unlike voice, data networks buffer packets during periods of congestion.
Although we could discuss a wide number of queuing options, three key methods are gaining
prominence in the market: low latency queuing, class-based weighted fair queuing, and committed
access rate.

The Real Use of Queuing

As with most things in networking, queuing is a trade-off technology that can provide significant
benefit or detriment to the administrator. As a result, when coupled with the implementation and
management overhead involved, most networks forgo queuing and quality of service (QoS) in
favor of other techniques. The most common of these is bandwidth.
The reality is that bandwidth can be used as a QoS mechanism; however, it will not prioritize a
filled queue, which is the point where queuing takes over. This can greatly degrade voice services
(VoIP), but it can also be a factor when the link is presented with a significant amount of additional
data. This can occur under parallel link failure, wherein two paths are reduced to one, presumably
with a resulting 50 percent loss of total bandwidth.
QoS and queuing can provide a mechanism to protect traffic under this model, and might be a
good augmentation to bandwidth services in your network. The challenge is how to categorize
and prioritize traffic—identification of traffic flows, the amount of bandwidth required, the
amount available, the benefit to the firm, and the ability to categorize are all considerations for
the designer to evaluate. NetFlow, a Cisco IOS feature that can audit network traffic, and Network-
Based Application Recognition (NBAR) can help in this process, but NetFlow requires a good
amount of storage and manual evaluation, and NBAR is not recommended for high-capacity links
because of its processor demands.
In addition, you will likely find infighting as a result of your decisions; a group with its traffic
prioritized as bronze will commonly buck and question why an application was rated above
it at gold. Obtaining early sign-off can greatly reduce this contention.
Another queuing option available to the administrator is in-band prioritization. This does not
help user traffic, but can insulate the network from large-scale denial of service attacks. In this
model, queue priority is given to Telnet, Secure Shell (SSH), and TFTP (Trivial FTP) so that
these ports are available to the network administrator when the network is under heavy load.
This load might be due to user traffic or an attack such as Code Red or Nimda. The caution is
that processor load and other factors might be saturated to negate this protection, and, of
course, users will still lose their applications under attack.

Configuring Byte Count

Configure the byte-count queues carefully, because if the setting is too high, the algorithm will
take longer than necessary to move from one queue to the next. This is not a problem while the
processor empties the queue, but if it takes the processor too long to get back to other queues,
they could fill up and start to drop packets.
This is why it’s important to understand how to configure the bandwidth percentage relationship
by using the byte-count command. Because frame sizes vary from protocol to protocol,
you’ll need to know the average frame sizes of the protocols using the custom queued interface to
define the byte count efficiently. You can do this by using simple math.
Suppose you have a router that uses IP, IPX, and SNA as its protocols. Let’s arbitrarily assign
frame sizes, realizing that the values aren’t the real ones. Assign a frame size of 800 bytes to IP,
1,000 bytes to IPX, and 1,500 bytes to SNA. You calculate a simple ratio by taking the highest
frame value and dividing it by the frame size of each protocol:
IP = 1,500 ÷ 800 = 1.875
IPX = 1,500 ÷ 1,000 = 1.5
SNA = 1,500 ÷ 1,500 = 1.0
These values equal your frame size ratios. To assign correct bandwidth percentages, multiply
each ratio by the bandwidth percentage you want to assign to that protocol. For example, assign
40 percent to IP, 30 percent to IPX, and 30 percent to SNA:
IP = 1.875 × (0.4) = 0.75
IPX = 1.5 × (0.3) = 0.45
SNA = 1 × (0.3) = 0.30
These values now need to be normalized by dividing the results by the smallest value:
IP = 0.75 ÷ 0.3 = 2.5
IPX = 0.45 ÷ 0.3 = 1.5
SNA = 0.3 ÷ 0.3 = 1
Custom queuing will send only complete frames. Because the ratios are fractions, you must
round them up to the nearest integer values that maintain the same ratio. To arrive at the nearest
integer value, multiply the original ratios by a common number that will cause the ratios to
become integers. In this case, you can multiply everything by 2 and get the resulting ratio of
5:3:2. What does this mean? Well, five frames of IP, three frames of IPX, and two frames of SNA
will be sent. Because of the protocols’ varying frame size, the bandwidth percentage works out
just the way you calculated:
IP = 5 frames × 800 bytes = 4,000 bytes
IPX = 3 frames × 1,000 bytes = 3,000 bytes
SNA = 2 frames × 1,500 bytes = 3,000 bytes
Total bandwidth is 10,000 bytes. Percentages are verified by dividing the protocol rate
by the total. After doing the math, you verify that IP = 40 percent, IPX = 30 percent, and
SNA = 30 percent.
Now that the byte count has been calculated (4,000, 3,000, and 3,000), you can apply the
results in the queue-list command. The custom queuing algorithm will forward 4,000 bytes
worth of IP packets, move to the IPX queue and forward 3,000 bytes, and then go to the SNA
queue and forward 3,000 bytes.
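A hedged sketch of putting those numbers into a queue list, using list number 2 (ours), matching IP and IPX by protocol and letting the SNA traffic fall through to the default queue (how you would match SNA directly depends on its encapsulation):
queue-list 2 protocol ip 1
queue-list 2 protocol ipx 2
queue-list 2 default 3
queue-list 2 queue 1 byte-count 4000
queue-list 2 queue 2 byte-count 3000
queue-list 2 queue 3 byte-count 3000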
The following queue list does not follow the IP, IPX, and SNA example we’ve
been discussing.

The following example shows how to configure and apply custom queuing lists:
Router_B#config t
Enter configuration commands, one per line. End with CNTL/Z.
Router_B(config)#queue-list 1 interface Ethernet0 1
Router_B(config)#queue-list 1 protocol ip 2 tcp 23
Router_B(config)#queue-list 1 protocol ip 3 tcp 80
Router_B(config)#queue-list 1 protocol ip 4 udp snmp
Router_B(config)#queue-list 1 protocol ip 5
Router_B(config)#queue-list 1 default 6
Router_B(config)#queue-list 1 queue 1 limit 40
Router_B(config)#queue-list 1 queue 5 byte-count 4000
Router_B(config)#queue-list 1 queue 4 byte-count 500
Router_B(config)#queue-list 1 queue 3 byte-count 4000
Router_B(config)#queue-list 1 queue 2 byte-count 1000
Router_B(config)#interface serial0
Router_B(config-if)#custom-queue-list 1
Router_B(config-if)#^Z
Router_B#
After analyzing the list, you can see that six queues were configured. The first one was configured
to handle incoming traffic from interface Ethernet 0, and the second is reserved for
Telnet traffic. Queue number 3 is configured for WWW traffic, and the fourth is configured
to handle SNMP traffic. The fifth queue will handle all other IP traffic, while queue number 6
is set up as the default queue where all unspecified traffic will go. A limit of 40 packets was
placed on queue 1 (from the default of 20), and the byte count was changed from the default
value of 1,500 for queues 2, 3, 4, and 5. Finally, after the queue list was created, it was applied
to interface serial 0.
Here is what the configuration looks like:
!
interface Serial0
ip address 10.1.1.1 255.255.255.0
custom-queue-list 1
!
queue-list 1 protocol ip 2 tcp telnet
queue-list 1 protocol ip 3 tcp www
queue-list 1 protocol ip 4 udp snmp
queue-list 1 protocol ip 5
queue-list 1 default 6
queue-list 1 interface Ethernet0 1
queue-list 1 queue 1 limit 40

queue-list 1 queue 2 byte-count 1000
queue-list 1 queue 3 byte-count 4000
queue-list 1 queue 4 byte-count 500
queue-list 1 queue 5 byte-count 4000
As with the other queuing algorithms, you need to verify both the configuration and the
status of custom queuing. Issuing the same command as before, except this time substituting
custom for priority, produces the following output:
Router_B#show queueing custom
Current custom queue configuration:
List Queue Args
1 6 default
1 1 interface Ethernet0
1 2 protocol ip tcp port telnet
1 3 protocol ip tcp port www
1 4 protocol ip udp port snmp
1 5 protocol ip
1 1 limit 40
1 2 byte-count 1000
1 3 byte-count 4000
1 4 byte-count 500
1 5 byte-count 4000
Router_B#
This output information gives you a breakdown of the custom queue lists configured
on the device, detailing queue assignments and any limits or byte counts assigned to each
custom queue.

Configuring Custom Queuing

Configuring custom queuing is similar to configuring priority queuing, but instead of completing
three tasks, you must complete five. As with priority queuing, you have to configure a list
to separate types of incoming traffic into their desired queues. After that, you must configure a
default queue for the traffic that will be unassigned to any of the other queues. After the specific
and default queues are defined, you can adjust the capacity or size of each queue or just stick
with the default settings.
When that’s complete, specify the transfer rate, or byte count, for each queue. This is important—
the byte count determines the percentage of bandwidth reserved for a specified queue, with a default
of 1,500 bytes as the denominator. After these parameters are set, apply them to an interface.
The commands used to configure the queuing list, default queue, queue size, and transmit
rate follow:
queue-list list-number default queue-number
queue-list list-number interface interface-type interface-number queue-number
queue-list list-number lowest-custom queue-number
queue-list list-number protocol protocol-name queue-number queue-keyword
keyword-value
queue-list list-number queue [queue-number byte-count byte-count-number | limit
limit-number]
queue-list list-number stun [queue-number | address STUN-group-number]
The syntax can be presented in many ways to configure the desired command. The list-number
is a value from 1 to 16 and associates the list with the given number. The following are
available options:
default The default option designates a custom queue for packets that do not match another
queue-list.
interface The interface option assigns incoming packets on the specified interface to a
custom queue. When the interface option is specified, you must supply the interface-type
and interface-number as well. The interface-type is the type of physical interface,
and interface-number is the interface’s physical port.
lowest-custom The lowest-custom option specifies the lowest queue number considered a
custom queue.
protocol The protocol option indicates that the packets are to be sent to the custom queue
if they are of the protocol specified. The protocol option also requires additional information.
Obviously, the protocol-name must be specified. In Table 30.1, a sample of available protocol
names is listed, but available protocols are dependent upon the feature set and version of IOS.
After the protocol-name, you might supply the keyword-value to refine the protocols and
port numbers used for filtering.
queue The queue option allows for specific queue parameters to be configured. The parameters
for the queue are discussed later in this section.
stun The stun option establishes queuing priority for STUN packets.
TABLE 30.1 Sample of Available Protocol Names
Protocol Name Description
aarp AppleTalk ARP
apollo Apollo
appletalk AppleTalk
arp IP ARP
bridge Bridging
bstun Block Serial Tunnel
cdp Cisco Discovery Protocol
compressedtcp Compressed TCP
decnet DECnet
decnet_node DECnet Node
decnet_router-l1 DECnet Router L1
decnet_router-l2 DECnet Router L2

To define the operational parameters for the custom queues, you use the queue option. After
specifying the queue-number, you’re given two parameters to configure:
limit The limit parameter enables you to change the number of packets allowed in the queue.
The range is from 0 to 32,767, with the default being 20.
byte-count The byte-count parameter specifies the average number of bytes forwarded from
each queue during a queue cycle.

Custom Queuing

Custom queuing functions on the concept of sharing bandwidth among traffic types. Instead of
assigning a priority classification to a specific traffic or packet type, custom queuing forwards
traffic in the different queues by using the FIFO method within each queue. Custom queuing
offers the ability to customize the amount of actual bandwidth that a specified traffic type uses.
Custom queuing configures virtual pipes that remain within the limits of the physical line’s
capacity. Varying amounts of the total bandwidth are reserved for various
specific traffic types, and if the bandwidth isn’t being fully utilized by its assigned traffic type,
other types can borrow its bandwidth. The configured limits go into effect during high levels of
utilization or when congestion on the line causes different traffic types to compete for bandwidth.
Figure 30.5 shows each queue being processed, one after the other. The
algorithm checks the first queue, forwards the data within it, and then moves to the next; if it
comes across an empty queue, it simply moves on without hesitating. Each
queue’s byte count specifies the amount of data to forward from that queue; once that count
is reached, the algorithm moves to the next queue. Custom queuing permits
a maximum of 16 configurable queues. The system queue is for network-specific traffic,
including system datagrams such as SNMP and routing updates.
FIGURE 30.5 Custom queuing algorithm (the custom queue list for S0 contains the system
queue 0 for high-priority traffic such as keepalives, configurable queues 1 through 16 of
20 entries each including a default queue, and the algorithm delivers x number of bytes
from each queue per cycle)
Figure 30.6 shows how the bandwidth allocation via custom queuing looks relative to the
physical connection. Using the frame size of the protocols and configuring the byte count for
each queue will configure appropriate bandwidth allocations for each traffic type. In other
words, when a particular queue is being processed, packets are sent until the number of bytes
sent exceeds the byte count defined for that queue.

Verifying Priority Queuing

To make sure the queuing configuration is working and configured properly, you can use the
same command used to verify WFQ with the added option for priority queuing.
The following command output summarizes the preceding configured priority list:
Router_C#show queueing priority
Current priority queue configuration:
List Queue Args
1 high protocol ip gt 1000
1 low protocol ip lt 256
1 normal protocol ip
1 normal interface Serial1
1 high interface Ethernet0
1 high limit 40
1 medium limit 80
1 normal limit 120
1 low limit 160
Router_C#

Configuring Priority Queuing

Implementing priority queuing on an interface requires three steps:
1. Create a priority list that the processor will use to determine packet priority.
2. Adjust the size of the queues if desired.
3. Apply the priority list to the desired interfaces.
Let’s go over how to build a priority list by using the following commands:
priority-list list-number protocol protocol-name {high | medium | normal | low}
queue-keyword keyword-value
priority-list list-number interface interface-type interface-number {high | medium | normal | low}
The list-number parameter identifies the specific priority list, and the valid values are
1 through 16. The protocol parameter directs the router to assign packets to the appropriate
queue based on the protocol, and protocol-name defines which protocol to match.
The queue-keyword and keyword-value parameters enable packets to be classified by
their byte count, access list, protocol port number, or name and fragmentation. With the
interface parameter, any traffic coming from the interface is assigned to the specified
queue. Next, after specifying the protocol or interface, the type of queue needs to be
defined—high, medium, normal, or low.
priority-list list-number default queue-number
The same priority-list command can be used to configure a default queue for traffic that
doesn’t match the protocols or interfaces defined in the priority list.
priority-list list-number queue-limit [high-limit [medium-limit [normal-limit
[low-limit]]]]
The queue-limit parameter is used to specify the maximum number of packets allowed
in each of the priority queues. The configuration of the queue size must be handled carefully,
because if a packet is forwarded to the appropriate queue but the queue is full, the packet will
be discarded—even if bandwidth is available. This means that enabling priority queuing on an
interface can be useless (even destructive) if queues aren’t accurately configured to respond to
actual network needs. It’s important to make the queues large enough to accommodate congestion
so that the influx of packets can be accepted and stored until they can be forwarded.
After creating the priority list, you can apply that list to an interface in interface configuration
mode with the following command:
priority-group list
The list parameter is the priority list number, from 1 to 16, to use on this interface. After
the list is applied to the interface, it is implicitly applied to outbound traffic. All packets will be
checked against the priority list before entering their corresponding queue. The ones that don’t
match will be placed in the default queue. Here’s an example:
Router_C#config t
Enter configuration commands, one per line. End with CNTL/Z.
Router_C(config)#priority-list 1 protocol ip high gt 1000
Router_C(config)#priority-list 1 protocol ip low lt 256
Router_C(config)#priority-list 1 protocol ip normal
Router_C(config)#priority-list 1 interface serial 1 normal
Router_C(config)#priority-list 1 interface ethernet 0 high
Router_C(config)#priority-list 1 default normal
Router_C(config)#priority-list 1 queue-limit 40 80 120 160
Router_C(config)#interface serial 0
Router_C(config-if)#priority-group 1
Router_C(config-if)#^Z
Router_C#
The first line of the priority list assigns high priority to all IP traffic with a packet size greater
than (gt) 1,000 bytes. The second line assigns low priority to IP traffic with a packet size less
than (lt) 256 bytes. The third line assigns all remaining IP traffic to the normal queue. The
fourth line assigns all incoming traffic on serial 1 to the normal queue also. All incoming traffic
on Ethernet 0 is assigned a high priority, and any remaining traffic will be assigned normal priority.
The size of each queue is defined by the queue-limit parameter, and the numbers follow
the order of high, medium, normal, and low queue sizes.
Following is an example of what the interface configuration looks like. The priority list has
been assigned to the interface with the priority-group command. You can see the final form
of the applied priority list in the following configuration snippet:
!
interface Serial0
ip address 172.16.40.6 255.255.255.252
priority-group 1
!
priority-list 1 protocol ip high gt 1000
priority-list 1 protocol ip low lt 256
priority-list 1 protocol ip normal
priority-list 1 interface Serial1 normal
priority-list 1 interface Ethernet0 high
priority-list 1 queue-limit 40 80 120 160
As with access control lists, the order of matching is important. A 1,500-byte IP packet
arriving on Serial 1 would match both the first and fourth lines, but only the first matching
instruction queues it, placing it in the high-priority queue.

Assigning Priorities

Priority queuing uses the packet header information consisting of either the TCP port or the
protocol as a classification mechanism. When a packet enters the router, it’s compared against
a list that will assign a priority to it and forward it to the corresponding queue.
Priority queuing can assign a packet to one of four priorities—high, medium, normal, and
low—with a separate dispatching algorithm to manage the traffic in all four. Figure 30.4 shows
how these queues are serviced. You can see that the algorithm starts with the high-priority
queue processing all the data there. When that queue is empty, the dispatching algorithm moves
down to the medium-priority queue, and so on down the priority chain, performing a cascade
check of each queue before moving on. So if the algorithm finds packets in a higher priority
queue, it will process them first before moving on. This is where problems can develop; packets
in the lower priority queues could be totally neglected in favor of the higher priority ones if
they’re continually busy with the arrival of new packets.

Priority Queuing

Unlike weighted fair queuing, which occurs on a session basis, priority queuing occurs on a
packet-by-packet basis and is ideal in network environments that carry time-sensitive traffic. When congestion
occurs on low-speed interfaces, priority queuing guarantees that traffic assigned a high priority
will be sent first. On the negative side, if the queue for high-priority traffic is always full and monopolizing
bandwidth, packets in the other queues will be severely delayed or dropped.

Verifying Weighted Fair Queuing

Now that WFQ is configured on the router’s serial 0 interface, let’s see what it’s doing. To verify the
configuration and operation of the queuing system, you can issue the following two commands:
show queueing [fair | priority | custom]
show queue [interface-type interface-number]
Results from these commands on Router C can be seen next. Since WFQ is the only type
of queuing that’s been enabled on this router, it isn’t necessary to issue the optional keywords
fair, custom, or priority.
Router_C#show queueing
Current fair queue configuration:
Interface Discard Dynamic Reserved
threshold queue count queue count
Serial0 96 256 0
Serial1 64 256 0
Current priority queue configuration:
Current custom queue configuration:
Current RED queue configuration:
Router_C#
This command shows that WFQ is enabled on both serial interfaces and that the discard
threshold for serial 0 was changed from 64 to 96. There’s a maximum of 256 dynamic queues
for both interfaces—the default value. The lines following the interface information are empty
because their corresponding queuing algorithms haven’t been configured yet.
The next command displays more detailed information pertaining to the specified interface:
Router_C#show queue serial0
Input queue: 0/75/0 (size/max/drops); Total output drops: 0
Queueing strategy: weighted fair
Output queue: 0/1000/96/0 (size/max total/threshold/drops)
Conversations 0/1/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
This command displays the input queue information, which is the current size of the queue,
the maximum size of the queue, and the number of conversations that have been dropped. The
queuing strategy is defined as weighted fair, or WFQ. The output queue (usually the one with
the most activity) defines the current size, the maximum total number of output queue entries, the
number of conversations per queue, and the number of conversations dropped. The conversations
section represents the number of conversations in the queue. The active number describes the
number of current active conversations. The max active keeps a record of the maximum number
of active conversations at any one time, and max total gives the total number of all conversations
possible within the queue. Reserved queues are also displayed with the current number allocated
and maximum number of allocated queues.

Configuring Weighted Fair Queuing

You’re now ready to learn how to configure WFQ. For all interfaces having a line speed equal to
or lower than 2.048Mbps (E-1 speed), WFQ is on by default. Here’s an example of how WFQ is
configured on an interface. You can use the fair-queue command to alter the default settings:
Router_C#config t
Enter configuration commands, one per line. End with CNTL/Z.
Router_C(config)#interface serial0
Router_C(config-if)#fair-queue 96
Router_C(config-if)#^Z
Router_C#
To understand what was configured, look at the syntax of the command:
fair-queue [congestive-discard-threshold [dynamic-queues [reservable-queues]]]
congestive-discard-threshold This value specifies the number of messages allowed in each
queue. The default is 64 messages, with a range of 16–4,096. When a conversation reaches this
threshold, new message packets will be dropped.
dynamic-queues Dynamic queues are exactly that—queues established dynamically to handle
conversations that don’t have special requirements. The valid values for this parameter are 16,
32, 64, 128, 256, 512, 1024, 2048, and 4096, with the default value being 256; ISDN BRI has
a default of 16.
reservable-queues This parameter defines the number of queues established to handle special
conversations. The available range is from 0 to 1,000. The default is 0. These queues are for
interfaces that use Resource Reservation Protocol (RSVP).
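As a sketch, all three parameters can be supplied at once (the values are our own): a 96-message discard threshold, 512 dynamic queues, and 18 reservable queues for RSVP flows:
Router_C(config-if)#fair-queue 96 512 18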