Translating Inside Local Addresses

NAT operates on a router and usually connects two networks. NAT translates the local, nonunique
IP addresses into legal, registered Internet addresses before forwarding packets from the
local network to the Internet or another outside network. To do this, NAT uses the six-step
process illustrated in Figure 31.2:
1. User 10.1.2.25 sends a packet and attempts to open a connection to 206.100.29.1.
2. When the first packet arrives at the NAT border router, the router checks to see whether
there is an entry for the local address that matches a global address in the NAT table.
3. If a match is found in the NAT table, the process continues to step 4. If a match is not found,
the NAT router creates what is called a simple entry from its pool of global addresses. A simple
entry maps one local IP address (such as the one currently being used) to one global IP
address. In this example, the NAT router matches the address 10.1.2.25 to 200.1.1.25.
4. The NAT border router then replaces the local address of 10.1.2.25 (listed as the packet’s
source address) with 200.1.1.25. This makes the destination host believe that the sending
device’s IP address is 200.1.1.25.
FIGURE 31.2 The process of translating inside local addresses
[Figure: hosts 10.1.2.25, 10.1.2.26, and 10.1.2.27 on the inside network connect through a switch to the NAT border router, which connects to host 206.100.29.1 on the Internet. The router's NAT table maps inside IP 10.1.2.25 to inside global IP 200.1.1.25, and callouts 1 through 6 mark where each step of the process occurs.]
5. When the host on the Internet using the IP address 206.100.29.1 replies, it uses the NAT
router–assigned IP address of 200.1.1.25 as the destination address.
6. When the NAT border router receives the reply from 206.100.29.1 destined for 200.1.1.25,
it checks its NAT table again. The table shows that the local address 10.1.2.25 should
receive packets destined for 200.1.1.25, so the router replaces the destination address with
10.1.2.25 and forwards the packet to the inside host.
Steps 2 through 6 are repeated for each individual packet.
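As a preview of the commands involved, the following minimal sketch shows how this process maps to IOS configuration, using the addresses from Figure 31.2; the interface names, the pool name GLOBALS, and access list 1 are illustrative assumptions:
Router(config)#ip nat pool GLOBALS 200.1.1.1 200.1.1.254 netmask 255.255.255.0
Router(config)#access-list 1 permit 10.1.2.0 0.0.0.255
Router(config)#ip nat inside source list 1 pool GLOBALS
Router(config)#interface ethernet0
Router(config-if)#ip nat inside
Router(config-if)#interface serial0
Router(config-if)#ip nat outside
With this in place, the first packet from 10.1.2.25 triggers the simple entry described in step 3, mapping the host to an available address from the pool.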

Performing NAT Operations

Understanding how NAT functions under each type of configuration will aid you in your configuration
decisions. This section covers NAT's operation when it is configured to provide
the following functions:
 Translating inside local addresses
 Overloading inside global addresses
 Using TCP load distribution
 Overlapping networks

NAT Traffic Types

NAT supports many traffic types. The Remote Access exam includes questions on both the supported
and unsupported types. Let’s take a look at these types now.
Supported Traffic Types
NAT supports the following traffic types:
 TCP traffic that does not carry source and destination addresses in an application stream
 UDP traffic that does not carry source and destination addresses in an application stream
 Hypertext Transfer Protocol (HTTP)
 Trivial File Transfer Protocol (TFTP)
 File Transfer Protocol (FTP PORT and PASV commands)
 Archie, which provides lists of anonymous FTP archives
 Finger, a software tool for determining whether a person has an account at a particular Internet site
 Network Time Protocol (NTP)
 Network File System (NFS)
 rlogin, rsh, and rcp (TCP-based Unix utilities for remote login, remote shell, and remote copy)
NAT-supported protocols that carry the IP address in the application stream include:
 Internet Control Message Protocol (ICMP)
 NetBIOS over TCP (datagram, name, and session services)
 Progressive Networks' RealAudio
 CUseeMe Networks' CUseeMe
 Xing Technology's StreamWorks
 DNS "A" and "PTR" queries
 H.323 (IOS versions 12.0(1)/12.0(1)T or later)
 Microsoft's NetMeeting (IOS versions 12.0(1)/12.0(1)T or later)
 VDOnet's VDOLive (IOS versions 11.3(4)/11.3(4)T or later)
 Microsoft's VXtreme (IOS versions 11.3(4)/11.3(4)T or later)
 IP Multicast (IOS version 12.0(1)T or later; source address translation only)
 Point-to-Point Tunneling Protocol (PPTP) with Port Address Translation (IOS version 12.1(2)T or later)
 Skinny Client Control Protocol, IP Phone to Cisco CallManager (IOS version 12.1(5)T or later)
Unsupported Traffic Types
NAT does not support some traffic types, including the following:
 Routing table updates
 DNS zone transfers
 BOOTP and DHCP
 Talk
 Ntalk
 Simple Network Management Protocol (SNMP)
 NetShow

Disadvantages of NAT

Now that you know about the advantages of using NAT, you should learn about the disadvantages
as well. The following is a list of some of the disadvantages of using NAT compared to
using individually configured, registered IP addresses on each network host:

NAT increases latency (delay). Delays are introduced into the switching path because of the
processor overhead needed to translate each IP address contained in the packet headers. The
router's CPU must process every packet to decide whether the router needs to translate
and change the IP header. Some supported Application layer protocols, such as DNS, also carry
IP addresses in their payloads that must be translated, which adds further delay.

NAT hides end-to-end IP addresses, which renders some applications unusable. Applications
that use the host IP address inside the payload of the packet will break when NAT
translates the IP addresses across the NAT border router.

Because NAT changes the IP address, there is a loss of IP end-to-end traceability. The multiple
packet-address changes confuse IP tracing utilities. This provides one advantage from
a security standpoint: It eliminates some of a hacker’s ability to identify a packet’s source.

NAT also makes troubleshooting or tracking down where malicious traffic is coming from
more troublesome. Because the traffic could be coming from a single user who is using different
IP addresses depending on when the traffic passes through the NAT router, accountability
becomes much more difficult.

Advantages of NAT

There are many advantages of using NAT. Some of the more important benefits include the
following:

NAT enables you to incrementally increase or decrease registered IP addresses without
changes to hosts, switches, or routers within the network. (The exception to this is the NAT
border routers that connect the inside and outside networks.)

NAT can be used either statically or dynamically:

Static translation occurs when you manually configure an address translation table with
IP addresses. A specific address on the inside of the network uses a specific outside IP
address—manually configured by the network administrator—to access the outside network.
The network administrator can also translate an inside IP address and port pair
to an outside IP address and port pair.

Dynamic mappings enable the administrator to configure one or more pools of outside
IP addresses on the NAT border router. The addresses in the pools can be used by nodes
on the inside network to access nodes on the outside network. This enables multiple
internal hosts to utilize a single pool of IP addresses.

NAT can allow the sharing of packet processing among multiple servers by using the Transmission
Control Protocol (TCP) load distribution feature. NAT load distribution maps one
individual global address to multiple local server addresses; the router then distributes
incoming connections across the servers in round-robin fashion. (A configuration sketch
follows this list.)
There is no fixed limit to the number of NAT sessions that can be used on a router
or route processor; the practical limit is the amount of DRAM the router contains.
The DRAM must store the configurable NAT pools and handle each translation.
Each NAT translation uses approximately 160 bytes, which translates
into about 1.53MB for 10,000 translations. This is far more translations than the
average router needs to provide.
If your internal addresses must change because you have changed your ISP or have merged
with another company that is using the same address space, you can use NAT to translate
the addresses from one network to the other.
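To make these options concrete, here is a minimal sketch of a static mapping, an extended static (address-and-port) mapping, and the TCP load distribution feature; the addresses, pool name, and access-list number are illustrative assumptions:
Router(config)#ip nat inside source static 10.1.2.25 200.1.1.25
Router(config)#ip nat inside source static tcp 10.1.2.25 80 200.1.1.25 80
Router(config)#ip nat pool SERVERS 10.1.2.25 10.1.2.27 netmask 255.255.255.0 type rotary
Router(config)#access-list 2 permit 200.1.1.100
Router(config)#ip nat inside destination list 2 pool SERVERS
The first command creates a simple static entry, and the second creates an extended entry that maps an address and port pair. The last three commands implement TCP load distribution: connections arriving for the virtual address 200.1.1.100 are handed to the pooled servers in round-robin order.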

How NAT Works

NAT is configured on the router or route processor closest to the border of a stub domain
(a LAN that uses IP addresses, either registered or unregistered, for internal use), between
the inside network (the local network) and the outside network (a public network such as an ISP
or the Internet). The outside network can also be another company's network, such as when two
networks merge after an acquisition.
An illustration of NAT is shown in Figure 31.1. Note that the router separates
the inside and outside networks. NAT translates the inside local addresses into globally
unique inside global IP addresses, enabling data to flow into the outside network.
FIGURE 31.1 The NAT router on the border of an inside network and an outside network such as the Internet
[Figure: a NAT border router sits between the inside network and the outside network.]
NAT takes advantage of the fact that relatively few inside users access the outside network
at any given time. Using process switching, NAT changes the source address on outbound
packets and directs them to the appropriate router, which enables fewer IP addresses to
be used than there are hosts in the inside network. Before the implementation of NAT on
all Cisco enterprise routers, the only way to implement these features was to use pass-through
firewall gateways.
NAT was first implemented in Cisco’s IOS release 11.2 and spelled out in
RFC 1631.
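Once translations are active, you can inspect the NAT table with the show ip nat translations command. For the simple entry used in this chapter's example, the output looks something like the following sketch (the exact columns can vary by IOS version):
Router#show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
--- 200.1.1.25         10.1.2.25          ---                ---
A simple entry lists only the inside addresses; an extended entry also shows the protocol and the outside address and port pairs.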

NAT Terminology

Before continuing with this chapter, you should be familiar with the following Cisco terms:
Inside network The inside network is the set of network addresses that is subject to translation.
The IP addresses used within the network are invalid on an outside network such as the
Internet or the network's ISP. Often, the IP addresses used in the inside network are obsolete,
or they are allocated from a range specified by RFC 1918 or RFC 3330 (which reserve certain
IP addresses for internal use only) and are not Internet routable.

Outside network The outside network is not affiliated with or owned by the inside network's
organization. (Keep in mind that we are referring to a network, not network addresses.) This can
be the network of another company when two companies merge, but it is typically the network
of an ISP. The addresses used on this network are legally registered, Internet-routable
IP addresses.

Inside local IP address The inside local IP address is the IP address assigned to an interface in
the inside network. This address may be illegal to use on the Internet, or it may be an address
defined by RFC 1918 as unusable on the Internet. In either case, the address cannot be routed
globally: even if it falls within globally routable space, it is likely registered to another
organization and so cannot be used on the Internet.

Inside global IP address The inside global IP address is the IP address of an inside host as it
appears to the outside network; this is the "translated" IP address. These addresses are allocated
from a globally unique address space, typically provided by the ISP (if the enterprise is connected
to the global Internet).

Outside local IP address The outside local IP address is the IP address of an outside host as it
appears to the inside network. These addresses can be allocated from the RFC 1918 space if desired.

Outside global IP address The outside global IP address is the configured IP address assigned
to a host in the outside network.

Simple translation entry A simple translation entry is an entry in the NAT table that results
when the NAT router matches an illegal inside IP address to a globally routable IP address that
is legally registered for Internet use.

Extended translation entry An extended translation entry is a translation entry that maps one
IP address and port pair to another.

Understanding Network Address Translation (NAT)

Before exploring the details of
Network Address Translation (NAT)
operations, configuration,
and troubleshooting, it’s important to thoroughly understand what it is, the terminology associated
with it, its advantages and disadvantages, and the traffic types it supports. NAT is a protocol
that maps an inside IP address used in the local, or inside, network environment to the
outside network environment and vice versa. There are many reasons for using NAT in your
network environment. Some of the benefits you will receive from NAT include the following:

 Enabling a private IP network to use unregistered IP addresses to access an outside network such as the Internet
 Providing the ability to reuse assigned IP addresses that are already in use on the Internet
 Providing Internet connectivity in networks where there are not enough individual Internet-registered IP addresses
 Appropriately translating the addresses of two merged intranets, such as when two companies merge
 Translating internal IP addresses assigned by an old Internet service provider (ISP) to a new ISP's newly assigned addresses without manually reconfiguring the local network interfaces

Network Address Translation and Port Address Translation

THE CCNP EXAM TOPICS COVERED IN THIS
CHAPTER INCLUDE THE FOLLOWING:

 Describe the process of Network Address Translation (NAT).
 Configure Network Address Translation (NAT).
 Troubleshoot nonfunctional remote access systems.

As the Internet grows and individuals increasingly need more than
one IP address for Internet access from their home and
office PCs, their Voice over IP (VoIP) phones, their office's network
printers, and many other network devices, the number of available IP addresses is diminishing.
To add insult to injury, the early designers of TCP/IP—back when the Internet project
was being created by the Advanced Research Projects Agency (ARPA)—never anticipated the
explosion of users from private industry that has occurred.
ARPA’s goal was to design a protocol that could connect all the United States Defense
Department’s major data systems and enable them to talk to one another. The ARPA designers
created not only a protocol that would enable all the Defense Department’s data systems to
communicate with one another, but one that the entire world now relies on to communicate
over the Internet.
Unfortunately, because of the unexpected popularity of this protocol, the distribution of IP
addresses was inadequately planned. As a result, many IP addresses are unusable, and many are
placed in networks that will never use all the addresses assigned to them. For example, every
organization with a Class A network, which provides 16,777,214 addresses per Class A assignment,
would find it difficult to use more than half of the addresses available, and those that are
not used are wasted.
All the Class A and Class B addresses are already assigned to organizations. Each Class B
range provides 65,534 host addresses. A new organization must therefore rely on Class C
address ranges, each of which provides only 254 addresses; if it needs more than that, it must
get another Class C address range.
IP version 6 will eventually alleviate IP addressing problems because it increases the address
space from 32 bits to 128 bits, but its adoption has been slow because of the problems associated
with infrastructure and application support. Outside the United States, IPv6 is being paid
more attention because less IPv4 address space is available. Specifically, Japan has implemented
a large-scale IPv6 network because of the number of addresses needed and the availability of
IPv6 address space.
This chapter introduces you to Network Address Translation (NAT) and Port Address
Translation (PAT). Cisco routers and internal route processors use these two protocols to allow
the use of a limited number of registered IP addresses by a large number of users and devices.
As you progress through the chapter, you will learn the differences between NAT and PAT, as
well as their operational boundaries, how to configure them, and how to troubleshoot problems
associated with these two protocols.

Queuing and Compression Exam Essentials

Understand the queuing technologies available in Cisco IOS. You should know that Cisco IOS
offers weighted fair queuing (WFQ), priority queuing, and custom queuing. Each has its strengths and
weaknesses, but WFQ is the default technique used for 2.048Mbps or slower interfaces. WFQ can
be tuned with the fair-queue command to enable more conversations to be tracked per interface.
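For example, a minimal tuning sketch, assuming interface serial0; the first argument is the congestive discard threshold and the second raises the number of dynamic conversation queues:
Router(config)#interface serial0
Router(config-if)#fair-queue 64 512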
Know how to configure priority queuing and when it is best used. Picking the proper
queuing mechanism is very important, and priority queuing is ideal if you want to ensure that
certain traffic gets priority over other traffic. A priority queue is set up with the priority-list
command and is applied to the interface with the priority-group command.
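For example, the following minimal sketch (the traffic selection and interface name are illustrative) services Telnet ahead of all other IP traffic:
Router(config)#priority-list 1 protocol ip high tcp telnet
Router(config)#priority-list 1 default normal
Router(config)#interface serial0
Router(config-if)#priority-group 1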
Know how to configure custom queuing and when it is best used. Custom queuing is a
technique that enables the WAN designer to allocate a certain amount of bandwidth to different
traffic types. There are 16 queues that can be set up to contain certain types of traffic,
and each of these queues can be allocated a specific amount of bandwidth. A custom queue
is configured with the queue-list command and is applied to the interface with the
custom-queue-list command.
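A comparable sketch (the list number, port, and byte counts are illustrative) gives web traffic in queue 1 roughly twice the bandwidth of the default traffic in queue 2:
Router(config)#queue-list 1 protocol ip 1 tcp www
Router(config)#queue-list 1 default 2
Router(config)#queue-list 1 queue 1 byte-count 3000
Router(config)#queue-list 1 queue 2 byte-count 1500
Router(config)#interface serial0
Router(config-if)#custom-queue-list 1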
Understand compression techniques and algorithms and when to use them. You should know
that the compression techniques are TCP header compression, payload compression, link compression,
and MPPC compression. TCP header compression is used on point-to-point links and
when smaller TCP packets, such as Telnet traffic, are being sent over the link. Payload compression
can be used on links other than point-to-point and is used to compress the data portion of the
packet; it will not alter the layer 2 and layer 3 headers. Link compression is used only on
point-to-point links but is protocol independent. It will compress the whole packet and encapsulate that
packet in another protocol to ensure reliability and sequencing. MPPC is used when the Cisco
device needs to talk to Windows-based clients.
Know the troubleshooting commands used for queuing and compression. The commands to
show queuing are show queue and show queueing [fair | priority | custom]. These can
be used to display queuing on the device and the counters involved. The compression troubleshooting
command is show compress, which can be used to view the compression efficiency and status.

Viewing Compression Information

To view information about the status of compression on the router, use the show compress
command. The following is a sample of the output from this command:
Router2#show compress
Serial1
uncompressed bytes xmt/rcv 82951/85400
1 min avg ratio xmt/rcv 0.798/0.827
5 min avg ratio xmt/rcv 0.789/0.834
10 min avg ratio xmt/rcv 0.779/0.847
no bufs xmt 0 no bufs rcv 0
restarts 0
Additional Stacker Stats:
Transmit bytes: Uncompressed = 27044 Compressed = 66749
Received bytes: Compressed = 76758 Uncompressed = 0
This command shows the uncompressed byte count of compressed data transmitted and
received as well as the ratio of data throughput gained or lost in the compression routine in the
last 1, 5, and 10 minutes. If the restarts are more than 0, the compression routine detected that
the dictionaries were out of sync and restarted building the compression dictionary. Using this
command, you will be able to see if compression is making a difference for the type of traffic
being compressed.
Summary
Queuing is an important technology when using WAN links. As LAN interface speeds continue
to increase, more and more data will be expected to traverse WAN links. Congestion is inevitable,
so to ensure that important data gets through, a queuing mechanism is necessary. There are
many queuing options available when using Cisco IOS.
Weighted fair queuing (WFQ) is the default technique when using interfaces of 2.048Mbps
or slower. WFQ will track conversations and enable lower bandwidth conversations to take
priority over higher bandwidth conversations. This feature can be tuned to allow tracking of
more conversations.
Priority queuing is used to classify traffic into four queues of high, medium, normal, and low.
Each queue is serviced sequentially, and the traffic is forwarded from the higher level queues
before the router services the lower level queues. The lower level queues might not be serviced
for quite some time if there is a large amount of higher priority traffic.
Custom queuing can allocate a certain percentage of the total bandwidth available on the
interface. There are 16 queues available, which can hold a certain type of traffic, and each
queue can be allocated a specific amount of bandwidth. Custom queuing does not suffer
from queue starvation as priority queuing can.
To alleviate congestion on WAN links, compression can be configured on the interface. The
types of compression algorithms are Stac, Predictor, and MPPC. MPPC is used primarily for Windows
clients, whereas Stac and Predictor can be used on many types of WAN technologies. TCP
header compression is the simplest compression technique. Payload compression compresses the
payload portion of the packet and does not alter the layer 2 or layer 3 header information. The
link compression algorithm uses Stac or Predictor to compress the traffic and then encapsulates
the compressed traffic in another link layer such as PPP or LAPB to ensure error correction and
packet sequencing.
Various techniques can ensure that the queuing and compression technologies are working
correctly. The show queue command is used to see queuing on the interface, and the show
queueing [priority | custom | fair] command is used to display the queuing technique
configured on the router. For compression, the show compress command is used to see how
well the compression process is compressing traffic and whether problems might occur with the
compression process.

Compression Considerations

You need to keep a few considerations in mind when selecting and implementing a compression
method:
Modem compression Modems can compress data to as little as one-quarter of its original size.
There are different types of modem compression techniques, so make sure you understand that
modem compression and router software compression are not compatible. However, the modems
at both ends of the connection will try to negotiate the best compression method to use. If compression
is being done at the modem, do not configure the router to also run compression.
Encrypted data Compression happens at the Data Link layer (layer 2), whereas encryption
functions at the Network layer (layer 3) and above; the payload, which belongs to layer 7, is
also encrypted. After the application encrypts the data, the data is sent to the router,
which provides compression. The problem is that encrypted data typically does not have
repetitive patterns, so the data will not compress. The router will spend a lot of processor
time determining that the traffic is not compressible. So, if data is encrypted, do not attempt to
compress it by using a layer 2 compression algorithm.
CPU cycles versus memory The amount of memory that a router must have varies according
to the protocol being compressed, the compression algorithm, and the number of configured
interfaces on the router. Memory requirements will be higher for Predictor than for Stac, but
Stac is typically more processor intensive.

Payload Compression

Payload compression, also known as per-virtual-circuit compression, compresses only the payload,
or data portion, of the packet. The header of the packet is not touched.
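As an illustration, payload compression on a Frame Relay point-to-point subinterface might be configured as in the following sketch (the interface numbering is an assumption):
Router(config)#interface serial0
Router(config-if)#encapsulation frame-relay
Router(config-if)#interface serial0.1 point-to-point
Router(config-subif)#frame-relay payload-compression packet-by-packet
Because only the payload is compressed, the Frame Relay header remains readable by the carrier's switches.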
Link Compression
Link compression, also known as per-interface compression, compresses both the header
and payload section of a data stream. Unlike header compression, link compression is protocol
independent.
The link compression algorithm uses Stac or Predictor to compress the traffic and then encapsulates
it in another link layer such as PPP or LAPB, ensuring error correction and packet sequencing.
Cisco's proprietary HDLC protocol is capable of using Stac compression only.
Predictor Use this approach to solve bottleneck problems caused by a heavy load on the
router. The Predictor algorithm learns data patterns and “predicts” the next character by using
an index to look up a sequence in a compression dictionary. This is sometimes referred to as
lossless because no data will be lost during the compression and decompression process.
Stac This method is best used when bottlenecks are related to bandwidth issues. The Stac
method searches the input data stream for redundant strings and replaces them with a token
that is shorter than the original redundant data string.
If the data flow traverses a point-to-point connection, use link compression. In a link
compression environment, the complete packet is compressed and the switching information
in the header is not available for WAN switching networks. Typical examples are
leased lines or ISDN.
If you use payload compression, you should not use header compression. This
is redundant, and you should configure payload compression only.
In the following example, we turned on LAPB encapsulation with Predictor compression and
set the maximum transmission unit (MTU) and the LAPB N1 parameters:
Router#config t
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface serial0
Router(config-if)#encapsulation lapb
Router(config-if)#compress ?
predictor predictor compression type
stac stac compression algorithm
Router(config-if)#compress predictor
Router(config-if)#mtu 1510
Router(config-if)#lapb n1 12096
The LAPB N1 parameter represents the number of bits in an LAPB frame, which holds an X.25 packet.
It is set to eight times the MTU size, plus any overhead, when using LAPB over leased lines. For
instance, the N1 here is specified as 12,096: 12,080 bits (that is, 1,510 × 8) plus 16 bits of
protocol overhead. The LAPB N1 parameter can cause major problems if it's not configured
correctly, and most often it should be left at its default value. Even so, adjusting it can be
valuable if you need to change the MTU size.

TCP Header Compression

TCP header compression as defined in RFC 1144 compresses only the protocol headers, not the
packet data. TCP header compression lowers the overhead generated by the disproportionately
large TCP/IP headers as they are transmitted across the WAN.
It is important to realize that the layer 2 header is not touched, and only the headers at
layers 3 and 4 are compressed. This enables the layer 2 header to direct that packet across a
WAN link.
You would use header compression on a network with small packets and a few bytes
of data, such as Telnet. Cisco's header compression supports X.25, Frame Relay, and dial-on-demand
WAN link protocols. Because of processing overhead, header compression is generally
used at lower speeds such as 64Kbps links.
TCP header compression is achieved by using the ip tcp header-compression command:
Router(config)#interface serial0
Router(config-if)#ip tcp ?
compression-connections Maximum number of compressed connections
header-compression Enable TCP header compression
Router(config-if)#ip tcp header-compression ?
passive Compress only for destinations which send compressed headers
The passive parameter is optional and is used to instruct the router to compress the headers
of outbound TCP traffic if the other side is also sending compressed TCP headers. If you don’t
include the passive argument, all TCP traffic will use compressed TCP headers.

Compression

The Cisco IOS provides congestion control on WAN links by adding compression on serial
interfaces. This can ease the WAN bandwidth bottleneck problems by using less bandwidth on
the link. Along with using the different queuing methods discussed earlier in this chapter, one
of the more effective methods of WAN optimization is compression of the data traveling across
the WAN link.
Software compression can significantly affect router CPU performance, and the Cisco rule of
thumb is that the router’s CPU load must not exceed 65 percent when running software compression.
If it does exceed this limit, it would be better to disable any compression running.
Cisco equipment supports the following types of compression:
 TCP header compression
 Payload compression
 Link compression
 Microsoft Point-to-Point Compression (MPPC)
By default, Cisco routers transmit data across serial links in an uncompressed format, but by
using Cisco serial compression techniques, you can make more efficient use of your available
bandwidth. It’s true that any compression method will cause overhead on the router’s CPU, but
the benefits of compression on slower links can outweigh that disadvantage.
Figure 30.7 shows the three types of compression used in a Cisco internetworking environment.
FIGURE 30.7 Cisco serial compression methods
[Figure: a frame consisting of a PPP, HDLC, X.25, Frame Relay, or ATM header; an IP/TCP header; and a payload. Link compression covers the entire frame, header compression covers only the IP/TCP header, and payload compression covers only the payload.]
The compression methods are as follows:
TCP header compression Cisco uses the Van Jacobson algorithm to compress the headers of
IP packets before sending them out onto WAN links.
Payload compression This approach compresses the data but leaves the header intact. Because
the packet’s header isn’t changed, it can be switched through a network. This method is the one
generally used for switching services such as X.25, Switched Multimegabit Data Service (SMDS),
Frame Relay, and ATM.
Link compression This method is a combination of both header and payload compression,
and the data will be encapsulated in either PPP or LAPB. Because of this encapsulation, link
compression allows for transport protocol independence.
Microsoft Point-to-Point Compression (MPPC) protocol This is defined in RFC 2118 and
enables Cisco routers to exchange compressed data with Microsoft clients. You would configure
MPPC when exchanging data with a host using MPPC across a WAN link. MPPC is not
discussed further in this section, but a brief configuration sketch follows.
The Cisco compression methods are discussed in more detail next.
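Because MPPC is not discussed further, here is a minimal configuration sketch for completeness; it assumes a PPP-encapsulated link (MPPC requires PPP) and an illustrative interface name:
Router(config)#interface serial1
Router(config-if)#encapsulation ppp
Router(config-if)#compress mppc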

Committed Access Rate

Committed access rate (CAR) is an older bandwidth and policing system; however, it is commonly
used in concert with bandwidth management. As noted before, like CBWFQ, committed access
rate can specify a bandwidth guarantee for an application. However, CAR also specifies a hard
upper limit for that application. This can be very useful when you want to reserve bandwidth
for bursty applications. One example of this would be file transfer with Common Internet File System
(CIFS) and other protocols on a circuit with web traffic. An administrator might wish to use
CAR to allocate 128Kbps for HTTP/web traffic, which would have the same impact as saying that all
traffic on a T-1 except HTTP/web has over 1,400Kbps available. The advantage is that an administrator
need not define each of the other applications to implement this solution.
CAR has some benefits. However, in many enterprises with a QoS strategy, CBWFQ is
leading the way, and administrators are opting to protect important applications with the
newer technique. You should evaluate CAR and CBWFQ for your specific environment.
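A sketch of the HTTP example above, assuming access list 101 and interface serial0; the rate-limit arguments are the average rate in bits per second followed by the normal and maximum burst sizes in bytes:
Router(config)#access-list 101 permit tcp any any eq www
Router(config)#interface serial0
Router(config-if)#rate-limit output access-group 101 128000 8000 8000 conform-action transmit exceed-action drop
Traffic matching the access list is transmitted up to 128Kbps and dropped beyond that, which is the hard upper limit that distinguishes CAR from CBWFQ.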

Class-Based Weighted Fair Queuing

Class-based weighted fair queuing (CBWFQ) builds upon WFQ by adding the concept of traffic
classes. Classes can be defined by a tag within the frame such as type of service (ToS) or differentiated
services code point (DSCP). These tags are added by the end station or the access router
and are used to forward packets through the network core without each router re-examining the
packet to determine that datagram’s priority. We are not defining the methodology used but
simply explaining the fact that you can use this information for CBWFQ.
Common implementations of CBWFQ establish three or four classes of application services,
typically described as gold, silver, bronze, and other. This categorization does not include network
traffic such as routing updates, which should always have priority over user application traffic.
Although some users will take exception to their traffic being described as a low priority, the network
administrator needs to constrain the total number of classes to keep administration manageable
and negate a situation in which bandwidth is being managed to the bit via QoS policy.
One of the strongest benefits to CBWFQ is the ability to define a specific amount of bandwidth
to an application. For example, Financial Information Exchange (FIX) is a common financial systems
protocol that might warrant special attention. Perhaps this application requires a guaranteed
256Kbps to prevent application failure on a T-1 link. CBWFQ can provide this guarantee, and,
perhaps more importantly, will allow the application to use more than the 256Kbps if bandwidth
is available. This is different from CAR, discussed in the next section, which establishes a hard
limit on the bandwidth available to a specific protocol. Please note that by default you cannot allocate
more than 75 percent of the link’s total bandwidth for management by CBWFQ.
With regard to traffic classes, the model is fairly straightforward. When congestion occurs,
the queue will process packets in the gold class before those in the silver class within the constraints
of WFQ. As such, the administrator is defining that the queue should be fair to all applications,
but that gold traffic is the most important. This will lead to the managed unfairness that
is the basis for all QoS policies; under congestion, the network will have to discard something
to stay within the available resources.
ToS and DSCP are not commonly accepted from end nodes because many
applications and some operating systems will automatically tag all packets for
the highest priority. It is recommended that you configure your edge routers to
ignore the end station and tag based on address or port information.
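A sketch of the FIX example above, using the modular QoS syntax; the class and policy names are illustrative, and access list 110 (not shown) is assumed to identify the FIX traffic:
Router(config)#class-map match-all FIX
Router(config-cmap)#match access-group 110
Router(config-cmap)#exit
Router(config)#policy-map WAN-EDGE
Router(config-pmap)#class FIX
Router(config-pmap-c)#bandwidth 256
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface serial0
Router(config-if)#service-policy output WAN-EDGE
The bandwidth 256 statement guarantees 256Kbps to the class during congestion while still allowing the class to use more when bandwidth is free.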

Low Latency Queuing

Low latency queuing (LLQ) is actually a strict priority queue within class-based weighted fair
queuing (CBWFQ), discussed in the next section. LLQ is Cisco’s solution for voice and other
very small packets that require real-time processing. LLQ operates by prioritizing key packets
to the front of the queue. Because these packets are small by nature, there is little risk of queue
starvation or other problems. However, administrators should evaluate the demands of other
traffic within the network.
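A minimal LLQ sketch, extending the illustrative WAN-EDGE policy from the previous section and assuming a class map named VOICE that matches the voice traffic (for example, on DSCP EF):
Router(config)#policy-map WAN-EDGE
Router(config-pmap)#class VOICE
Router(config-pmap-c)#priority 64
The priority command gives the class a strict-priority queue capped here at 64Kbps, so voice packets move to the front of the line without starving the other classes.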

Cisco’s Newer Queuing Technologies

Because of their notable absence in the topics covered by the Remote Access examination, we only
briefly cover some of the newer queuing management technologies in this section. Queue control
has become a more important issue in remote access networking with the proliferation of voice
services and other real-time protocols. As these protocols suffer from congestion and low bandwidth,
they are strong candidates for quality of service (QoS), of which queuing is a part.
Remember that queuing is intended to manage the transmission of packets held in the
router’s buffer. Unlike voice, data networks buffer packets during periods of congestion.
Although we could discuss a wide number of queuing options, three key methods are gaining
prominence in the market: low latency queuing, class-based weighted fair queuing, and committed
access rate.

The Real Use of Queuing

As with most things in networking, queuing is a trade-off technology that can provide significant
benefit or detriment to the administrator. As a result, when coupled with the implementation and
management overhead involved, most networks forgo queuing and quality of service (QoS) in
favor of other techniques. The most common of these is bandwidth.
The reality is that bandwidth can be used as a QoS mechanism; however, it will not prioritize a
filled queue, which is the point where queuing takes over. This can greatly degrade voice services
(VoIP), but it can also be a factor when the link is presented with a significant amount of additional
data. This can occur under parallel link failure, wherein two paths are reduced to one, presumably
with a resulting 50 percent loss of total bandwidth.
QoS and queuing can provide a mechanism to protect traffic under this model, and might be a
good augmentation to bandwidth services in your network. The challenge is how to categorize
and prioritize traffic—identification of traffic flows, the amount of bandwidth required, the
amount available, the benefit to the firm, and the ability to categorize are all considerations for
the designer to evaluate. NetFlow, a Cisco IOS feature that can audit network traffic, and
Network-Based Application Recognition (NBAR) can help in this process, but NetFlow requires a good
amount of storage and manual evaluation, and NBAR is not recommended for high-capacity links
because of its processor demands.
In addition, you will likely find infighting as a result of your decisions; a group with its traffic
prioritized as bronze will commonly buck and question why an application was rated above
it at gold. Obtaining early sign-off can greatly reduce this contention.
Another queuing option available to the administrator is in-band prioritization. This does not
help user traffic, but it can insulate the network from large-scale denial-of-service attacks. In this
model, queue priority is given to Telnet, Secure Shell (SSH), and Trivial File Transfer Protocol
(TFTP) so that these ports are available to the network administrator when the network is under
heavy load (see the sketch at the end of this section). This load might be due to user traffic or an
attack such as Code Red or Nimda. The caution is that the processor and other resources might be
saturated enough to negate this protection, and, of course, users will still lose their applications
under attack.
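A sketch of such in-band prioritization, reusing priority queuing; the list number and interface name are illustrative, and SSH is matched by its port number:
Router(config)#priority-list 2 protocol ip high tcp telnet
Router(config)#priority-list 2 protocol ip high tcp 22
Router(config)#priority-list 2 protocol ip high udp tftp
Router(config)#priority-list 2 default normal
Router(config)#interface serial0
Router(config-if)#priority-group 2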