Defining and Limiting LLQ Bandwidth
The LLQ priority command provides two syntax options for defining the bandwidth of an LLQ:
a simple explicit amount, or bandwidth as a percentage of the interface bandwidth. (There is no
remaining bandwidth equivalent for the priority command.) However, unlike the bandwidth
command, both the explicit and percentage versions of the priority command can be used inside
the same policy map.
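For example, a single policy map (a sketch, with hypothetical policy-map and class names) can define one
LLQ with an explicit rate in kbps and another with a percentage of the interface bandwidth:

policy-map llq-syntax-example
 class class1
  ! explicit amount, in kbps
  priority 32
 class class2
  ! percentage of the interface bandwidth
  priority percent 25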
IOS still limits the amount of bandwidth in an LLQ policy map, with the total bandwidth from
both LLQ classes (with priority commands) and non-LLQ classes (with bandwidth commands)
not being allowed to exceed max-res × int-bw. Although the math is easy, the details can get
confusing, especially because a single policy map could have one queue configured with priority
bw, another with priority percent bw, and others with one of the three versions of the bandwidth
command. Figure 13-4 shows an example with three versions of the commands.
The figure shows both versions of the priority command. Class1 has an explicit priority 32
command, which reserves 32 kbps. Class2 has a priority percent 25 command, which, when
applied to the interface bandwidth (256 kbps), gives class2 64 kbps.
Figure 13-4 Priority, Priority Percent, and Bandwidth Remaining Percent

[Figure 13-4 shows an interface bandwidth of 256 kbps, with unreservable bandwidth of (256 - (.75 * 256)) = 64 kbps
and reservable bandwidth (max-res × int-bw) of 192 kbps. Class class1 (priority 32) reserves 32 kbps, and class class2
(priority percent 25) reserves 256 * .25 = 64 kbps, leaving 192 - 32 - 64 = 96 kbps of remaining bandwidth. Class class3
(bandwidth remaining percent 75) then receives 75 percent of that remaining bandwidth, with the rest unallocated.]
The most interesting part of Figure 13-4 is how IOS views the remaining-bandwidth concept when
priority queues are configured. IOS subtracts the bandwidth reserved by the priority commands
as well. As a result, a policy map can allocate bandwidth to non-priority classes based on percentages
of the leftover (remaining) bandwidth, with those values totaling 100 (100 percent).
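For example, a policy map along these lines (a sketch, with hypothetical policy-map and class names) could
reserve one priority queue and then divide all of the remaining bandwidth between two data classes, with the
remaining-bandwidth percentages totaling 100:

policy-map remaining-example
 class voice
  priority percent 25
 class data1
  bandwidth remaining percent 60
 class data2
  bandwidth remaining percent 40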
LLQ with More Than One Priority Queue
LLQ allows multiple queues/classes to be configured as priority queues. This begs the question,
“Which queue gets scheduled first?” As it turns out, LLQ actually places the packets from multiple
LLQs into a single internal LLQ. So, packets in the different configured priority queues still get
scheduled ahead of non-priority queues, but they are serviced based on their arrival time, considering all
packets in any of the priority queues.
So why use multiple priority queues? The answer is policing. By policing traffic in one class at
one speed, and traffic in another class at another speed, you get more granularity for the policing
function of LLQ. For instance, if you are planning for video and voice, you can place each into a
separate LLQ and get low-latency performance for both types of traffic, but at the same time
prevent video traffic from consuming the bandwidth engineered for voice, and vice versa.
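A sketch of such a design (policy-map name, class names, and rates are hypothetical) places voice and video
into separate LLQs so that each is policed at its own rate:

policy-map voice-video-edge
 class voice
  ! priority queue for voice, policed at 256 kbps during congestion
  priority 256
 class video
  ! separate priority queue for video, policed at 384 kbps during congestion
  priority 384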
Miscellaneous CBWFQ/LLQ Topics
CBWFQ and LLQ allow a policy map either to allocate bandwidth to the class-default class, or
not. When a bandwidth command is configured under class class-default, the class is indeed
reserved that minimum bandwidth. (IOS will not allow the priority command in class-default.)
When class class-default does not have a bandwidth command, IOS internally allocates any
unassigned bandwidth among all classes. As a result, class class-default might not get much
bandwidth unless the class is configured with a minimum amount of bandwidth using the bandwidth
command.
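For instance (a sketch, with hypothetical names and rates), adding a bandwidth command under class
class-default guarantees it a minimum, rather than leaving it to whatever bandwidth happens to be unassigned:

policy-map guarantee-default
 class critical
  bandwidth 128
 class class-default
  ! reserves a minimum for class-default; the priority command is not allowed here
  bandwidth 64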
This chapter's coverage of guaranteed bandwidth allocation is based on the configuration
commands. In practice, a policy map might not have packets in all queues at the same time. In that
case, the queues get more than their reserved bandwidth. IOS allocates the extra bandwidth
proportionally to each active class's bandwidth reservation.
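For example (a hypothetical case), if only two classes with reservations of 64 kbps and 32 kbps are actually
sending packets, the bandwidth left unused by the idle classes is divided between them in the same 2:1 ratio
as their reservations.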
Finally, IOS uses queuing only when congestion occurs. IOS considers congestion to be occurring
when the hardware queue is full; that rarely happens when the offered rate of traffic is far less
than the clock rate of the link. So, a router could have a service-policy out command on an
interface, with LLQ configured, but the LLQ logic would be used only when the hardware queue
is full.
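For example (the interface and policy-map names are hypothetical), the policy map is attached with the
service-policy command, but its queuing logic engages only while the hardware queue is full:

interface Serial0/0
 bandwidth 256
 service-policy output voice-video-edge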
Queuing Summary
Table 13-6 summarizes some of the key points regarding the IOS queuing tools covered in this
chapter.

Table 13-6   Queuing Protocol Comparison

Feature                                          CBWFQ   LLQ
Includes a strict-priority queue                 No      Yes
Polices priority queues to prevent starvation    No      Yes
Reserves bandwidth per queue                     Yes     Yes
Includes robust set of classification fields     Yes     Yes
Classifies based on flows                        Yes1    Yes1
Supports RSVP                                    Yes     Yes
Maximum number of queues                         64      64

1 WFQ can be used in the class-default queue or in all CBWFQ queues in 7500 series routers.
Weighted Random Early Detection
When a queue is full, IOS has no place to put newly arriving packets, so it discards them. This
phenomenon is called tail drop. Often, when a queue fills, several packets are tail dropped at a
time, given the bursty nature of data traffic.
Tail drop can have an overall negative effect on network traffic, particularly TCP traffic. When
packets are lost, for whatever reason, TCP senders slow down their rate of sending data. When tail
drops occur and multiple packets are lost, the TCP connections slow down even more. Also, most
networks send a much higher percentage of TCP traffic than UDP traffic, meaning that the overall
network traffic rate tends to drop after multiple packets are tail dropped.
Interestingly, overall throughput can be improved by discarding a few packets as a queue begins
to fill, rather than waiting for the larger impact of tail drops. Cisco created Weighted Random Early
Detection (WRED) specifically for the purpose of monitoring queue length and discarding a
percentage of the packets in the queue to improve overall network performance. As a queue gets
longer and longer, WRED begins to discard more packets, hoping that the small reduction in offered
load that follows may be just enough to prevent the queue from filling.
WRED uses several numeric settings when making its decisions. First, WRED uses the measured
average queue depth when deciding whether a queue has filled enough to begin discarding packets.
WRED then compares the average depth to a minimum and a maximum queue threshold,
performing different discard actions depending on the outcome. Table 13-7 lists the actions.
When the average queue depth is very low or very high, the actions are somewhat obvious,
although the term full drop in Table 13-7 may be a bit of a surprise. When the average depth rises
above the maximum threshold, WRED discards all new packets. Although this action might seem
like tail drop, technically it is not, because the actual queue might not be full. So, to make this fine
distinction, WRED calls this action category full drop.
When the average queue depth is between the two thresholds, WRED discards a percentage of
packets. The percentage grows linearly as the average queue depth grows from the minimum
threshold to the maximum, as depicted in Figure 13-5 (which shows WRED's default settings for
IPP 0 traffic).
Table 13-7   WRED Discard Categories

Average Queue Depth               Action                                        WRED Name
Versus Thresholds                                                               for Action
Average depth < minimum           No packets dropped.                           No drop
threshold
Minimum threshold < average       A percentage of packets dropped. Drop         Random drop
depth < maximum threshold         percentage increases from 0 to a maximum
                                  percent as the average depth moves from
                                  the minimum threshold to the maximum.
Average depth > maximum           All new packets discarded; similar to         Full drop
threshold                         tail drop.

Figure 13-5   WRED Discard Logic with Defaults for IPP 0
[Figure 13-5 plots the WRED discard percentage against the average queue depth. With the IPP 0 defaults, no
packets are discarded below the minimum threshold of 20; between the minimum threshold (20) and the maximum
threshold (40), the discard percentage grows linearly up to the maximum discard percentage of 10 percent (1/MPD);
above the maximum threshold, the discard percentage jumps to 100 percent.]
The last of the WRED numeric settings that affect its logic is the mark probability denominator
(MPD), from which the maximum discard percentage of 10 percent is derived in Figure 13-5. IOS
calculates the discard percentage used at the maximum threshold based on the simple formula
1/MPD. In the figure, an MPD of 10 yields a calculated value of 1/10, meaning that the discard rate
grows from 0 percent to 10 percent as the average queue depth grows from the minimum threshold
to the maximum. Also, when WRED discards packets, it randomly chooses the packets to discard.
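As a worked example using the defaults in Figure 13-5 (minimum threshold 20, maximum threshold 40, MPD 10),
an average depth of 30 sits halfway between the thresholds, so the discard percentage at that point is roughly
(30 - 20) / (40 - 20) * (1/10) = 5 percent. For reference, a sketch of how those three values map onto the IOS
per-precedence WRED command (the interface name is hypothetical):

interface Serial0/0
 random-detect
 ! minimum threshold 20, maximum threshold 40, mark probability denominator 10 for IPP 0
 random-detect precedence 0 20 40 10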