The User Datagram Protocol (UDP) is a connectionless protocol on the Transport layer of the
OSI model. The overall structure of UDP is simpler than TCP, because UDP is connectionless
and therefore does not have overhead to maintain connection information. UDP is commonly used for real-time applications such as video and voice. In these time-sensitive applications,
when a packet is lost or corrupted there is not enough time for the applications to recognize that
a packet is missing and request that it be resent, and for this retransmitted packet to arrive.
Therefore, the overhead that comes with TCP is not warranted for this type of data transfer.
The following frame snippet was taken using EtherPeek and is of a DNS request:
UDP - User Datagram Protocol
Source Port: 1213
Destination Port: 53 domain
Length: 38
Checksum: 0xBFBA
As you can see, all of the overhead that is associated with the connection-oriented nature of
the TCP frame, such as sequence and acknowledgment number, has been removed in UDP. As
a result, the UDP packet is condensed down to four fields.
The first two of these fields, Source Port and Destination Port, are both 16 bits long. The
Destination Port field must be filled in with the destination port of the service that is being
requested; however, the Source Port field only needs a value when the sending station needs
a reply from the receiver. When the conversation is unidirectional and the source port is not
used, this field should be set to 0. When a reply is needed, the receiving station will reply to the
sender on the port indicated in the original packet’s source field.
The last two fields in a UDP header are Length and Checksum. Like the source and destination
port information, the length and checksum are both 16 bits long. The Length field shows the total
number of bytes in the UDP packet, including the UDP header and user data. Checksum, though
optional, allows the receiving station to verify the integrity of the UDP header as well as the data
that is contained in the packet. If Checksum is not used, it should be set to a value of 0.
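Because the header is nothing more than these four fixed-size 16-bit fields, it can be packed and unpacked in a single call. Here is a minimal Python sketch (standard `struct` module only) that mirrors the DNS capture above:

```python
import struct

def parse_udp_header(data: bytes) -> dict:
    """Unpack the four 16-bit fields of an 8-byte UDP header."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", data[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# Header bytes matching the capture above: source port 1213,
# destination port 53 (domain), length 38, checksum 0xBFBA.
header = struct.pack("!HHHH", 1213, 53, 38, 0xBFBA)
fields = parse_udp_header(header)
print(fields)
```

The `!` prefix selects network (big-endian) byte order, which is how all of these header fields appear on the wire.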
A great deal of information is covered in this chapter, with the focus on Network and Data Link
layer protocols. It is important to understand this information in order to facilitate your troubleshooting
efforts. If you do not sufficiently understand the protocols present in layers 2 and 3 of
the OSI model, you should study them in depth. The majority of networking problems occur in
these two layers.
Many encapsulation types are available at the second layer of the OSI model. The ones
discussed in this chapter were Ethernet, PPP, SDLC, Frame Relay, and ISDN. Each has its
own strengths and weaknesses that make it better suited for a particular installation.
There are two major protocol classifications: connection-oriented and connectionless.
Connection-oriented protocols allow for sequenced data transfer, flow control, and error
control. Examples of connection-oriented protocols include ATM and TCP. Connectionless
protocols require less overhead; however, they do so at the expense of the sequenced data
transfer and the error and flow control offered by connection-oriented protocols. The connectionless
protocol discussed in this chapter is UDP.
Know the differences between connectionless and connection-oriented protocols. Connection-oriented
protocols have flow-control and error-checking methodologies that are not present in
connectionless protocols. Connectionless protocols offer better performance characteristics for
real-time voice and video applications.
Know the Data Link protocols and technologies. The major technologies covered in this
section include Ethernet, PPP, SDLC (HDLC), Frame Relay, and ISDN.
Know how to calculate subnet masks. Understand how VLSM functions, and know how to
determine an appropriate address and subnet mask combination.
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP), a connection-oriented protocol on the Transport
layer that provides reliable delivery of data, is an integral part of the IP suite. Look at the structure
of the TCP packet. The following EtherPeek frame was taken during a POP3 transaction:
TCP - Transmission Control Protocol
Source Port: 110 POP3
Destination Port: 1097
Sequence Number: 997270908
Ack Number: 7149472
Offset: 5
Reserved: 0000
Code: %010000
Ack is valid
Window: 8760
Checksum: 0x8064
Urgent Pointer: 0
No TCP Options
No More POP Command or Reply Data
Extra bytes (Padding):
UUUUUU 55 55 55 55 55 55
Frame Check Sequence: 0x04020000
This structure is similar to the IP packet structure. The TCP header is 32 bits wide and has a
minimum depth of five 32-bit words (20 bytes), but can be six words deep when options are specified. The first layer
starts with the Source Port and Destination Port fields. Each of these fields is 16 bits long.
A Sequence Number field occupies the entire second layer, meaning that it is 32 bits long.
TCP is a connection-oriented protocol, and this field is used to keep track of the various requests
that have been sent.
The third layer is a 32-bit field containing the acknowledgment number, which is used to track
responses.
The fourth layer begins with the Offset field, which is four bits long and specifies the number of
32-bit words present in the header. It is followed by the Reserved field, six bits that are set
aside for future use.
The next field, called the Flag or Code field, is also a six-bit field, and it contains control
information. Look at Table 36.6 for an explanation of the six bits within the Flag field.
The Window field specifies the buffer size for incoming data. Once the buffer is filled, the
sending system must wait for a response from the receiving system. This field is 16 bits long.
Layer 5 of the TCP header begins with the Checksum parameter, which also occupies 16 bits.
It is used to verify the integrity of the transmitted data.
The Urgent Pointer field references the last byte of urgent data, so the receiver knows how much
urgent data it will receive. This is also a 16-bit field.
Finally, there is the Options field. If the options do not end on a 32-bit boundary, padding is
added so that the header remains a whole number of 32-bit words.
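The fixed 20-byte layout walked through above can be sketched the same way; the sample values below are taken from the POP3 capture (Offset 5, Code %010000 meaning only the ACK bit is set, Window 8760):

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte portion of a TCP header."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", data[:20])
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
            "offset": offset_flags >> 12,   # header length in 32-bit words
            "flags": offset_flags & 0x3F,   # low six bits: URG ACK PSH RST SYN FIN
            "window": window, "checksum": checksum, "urgent": urgent}

# Values from the POP3 capture above; 0b010000 is the ACK bit alone.
hdr = struct.pack("!HHIIHHHH", 110, 1097, 997270908, 7149472,
                  (5 << 12) | 0b010000, 8760, 0x8064, 0)
fields = parse_tcp_header(hdr)
```

Note how the Offset field and the six flag bits share one 16-bit word, which is why the sketch shifts and masks them apart.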
Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) is used throughout IP networks. ICMP was
designed to provide routing-failure information to the source system. This protocol provides
four types of feedback that are used to make the IP routing environment more efficient:
Reachability This is determined by using ICMP echo and reply messages.
Redirects These messages tell hosts to redirect traffic or choose alternative routes.
Timeouts These messages indicate that a packet’s Time-to-Live (TTL) has expired.
Router Discovery These messages discover directly connected routers’ IP addresses. Router
discovery actually uses the ICMP Router Discovery Protocol to do this. This passive method
gathers directly connected IP addresses without having to understand specific routing protocols.
Here is a look at a couple of ICMP packets (echo request and reply):
ICMP - Internet Control Messages Protocol
ICMP Type: 8 Echo Request
Code: 0
Checksum: 0x495c
Identifier: 0x0200
Sequence Number: 512
ICMP Data Area:
abcdefghijklmnop 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70
qrstuvwabcdefghi 71 72 73 74 75 76 77 61 62 63 64 65 66 67 68 69
Frame Check Sequence: 0x342e3235
ICMP - Internet Control Messages Protocol
ICMP Type: 0 Echo Reply
Code: 0
Checksum: 0x515c
Identifier: 0x0200
Sequence Number: 512
ICMP Data Area:
abcdefghijklmnop 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70
qrstuvwabcdefghi 71 72 73 74 75 76 77 61 62 63 64 65 66 67 68 69
Frame Check Sequence: 0x342e3235
The ICMP structure is similar to the IP structure in that it has a type, checksum, identifier,
and sequence number. The field names differ a little but have the same functionality.
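The checksums in these captures can be reproduced with the standard Internet ones'-complement checksum. The sketch below builds the echo request shown above and arrives at the same 0x495C value:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used by ICMP (and IP/TCP/UDP)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, seq: int, payload: bytes) -> bytes:
    """ICMP type 8 (Echo Request), code 0, with the checksum filled in."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, identifier, seq) + payload

# Same identifier (0x0200), sequence (512), and 32-byte alphabet payload
# as the capture above; yields checksum 0x495C.
pkt = build_echo_request(0x0200, 512, b"abcdefghijklmnopqrstuvwabcdefghi")
```

A handy property of this checksum: running it over a packet whose checksum field is already filled in returns 0, which is exactly how a receiving station verifies integrity.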
Tips for Successfully Using VLSM in a Network
As is the case with many elements of networking, planning is the key to successfully using
VLSM in a network. This is especially true of VLSM implementations being put in place on existing
networks. Without proper planning, a VLSM implementation can provoke serious support
problems. There are numerous ways to implement VLSM; here we will only focus on two.
Divide up a single /24 network. This implementation strategy is best designed for smaller
remote sites connecting to one or two central locations. A single /24 network can be divided
up and used for the remote sites. In this manner, summarization and problem tracking are
made easier. For example, assume that the standard remote location has 60 IP-enabled
devices on a single segment, two routers, one switch, and two point-to-point Frame Relay
links, and is assigned the 10.1.1.0 /24 subnet. Using the small-site VLSM strategy, you can
take this /24 and divide it up into the following:
10.1.1.0 /25 for the user segment
10.1.1.244 /30 for Frame Relay link 2
10.1.1.248 /30 for Frame Relay link 1
10.1.1.253 /32 for router 2 loopback
10.1.1.254 /32 for router 1 loopback
As you can see, /32 subnets are being used for the router loopback addresses. This does not
conform to the rules of IP addressing, but it is supported by Cisco routers. Also, though it is true
that with only 60 IP-enabled devices a /26 mask could have been used, that would leave no
room for future growth. The suggested arrangement, on the other hand, allows for effective
use of the address range and permits some future expansion. Notice also that /30 masks were
used for the Frame Relay links. In the event that these links might become point-to-multipoint
links, however, a different mask should be used.
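The division above can be sanity-checked with Python's standard `ipaddress` module; this sketch confirms that the five allocations all fit inside 10.1.1.0/24 without overlapping:

```python
import ipaddress

parent = ipaddress.ip_network("10.1.1.0/24")
allocations = [
    ipaddress.ip_network("10.1.1.0/25"),    # user segment (126 usable hosts)
    ipaddress.ip_network("10.1.1.244/30"),  # Frame Relay link 2
    ipaddress.ip_network("10.1.1.248/30"),  # Frame Relay link 1
    ipaddress.ip_network("10.1.1.253/32"),  # router 2 loopback
    ipaddress.ip_network("10.1.1.254/32"),  # router 1 loopback
]

# Every allocation must fall inside the parent /24, with no overlaps.
assert all(net.subnet_of(parent) for net in allocations)
assert not any(a.overlaps(b) for i, a in enumerate(allocations)
               for b in allocations[i + 1:])

for net in allocations:
    print(net, "addresses:", net.num_addresses)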
Use one mask size per service. The second tip for implementing VLSM is to try to use the
same mask size for the same service type. For example, use a /32 mask for all loopback interfaces,
a /30 mask for all point-to-point links, a /26 mask for all server segments, and a /24 mask
for all user segments. In this manner you can easily identify the general purpose of a subnet
just by looking at the mask.
As stated, there are various ways to implement VLSM successfully; it just takes some planning
up front. This planning must take into account the current IP addressing scheme. In addition,
make sure that the final implementation is consistently applied and will be scalable and adaptable
as the network requirements change.
IP Addressing Review
No review of TCP/IP networking would be complete without a review of IP addressing. In
this section we will not explain the basics of IP addressing; rather, we will focus more on
the application of variable-length subnet masking (VLSM) and the calculation of networks
as it pertains to troubleshooting in an IP environment. If you need a more detailed discussion,
see CCNA: Cisco Certified Network Associate Study Guide, 4th ed., by Todd Lammle
(Sybex, 2004).
As internetworks grew and address space became more scarce, several methodologies were
devised to extend the address space availability. One of these methodologies was VLSM. In older
routing protocols, if you wanted to subnet a major network, you had to make all the subnets the
same size. This was because the routing protocols passed only network information and did not
include subnet mask information. Newer routing protocols pass subnet information along with
the individual routes, allowing for the use of VLSM. This enables better use of address space
because network administrators can size the subnets based on the need. For example, a point-to-point
connection has only two nodes on it, and as such only needs two host addresses. Without
VLSM, if your standard subnet mask was 255.255.255.0, a /24 subnet, then 256 “addresses”
would be used on this point-to-point connection (though 256 addresses are used, only 254 are
usable by hosts). With VLSM, this same connection could use a 255.255.255.252 mask, /30, using
only four addresses—two for the hosts, one for the subnet, and one for the broadcast address. For
reference, Table 36.5 shows various subnet mask information.
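The address arithmetic can be verified with the `ipaddress` module. The 192.168.1.0 prefix below is purely an illustrative example, not taken from the text:

```python
import ipaddress

link_24 = ipaddress.ip_network("192.168.1.0/24")  # 255.255.255.0
link_30 = ipaddress.ip_network("192.168.1.0/30")  # 255.255.255.252

# A /24 consumes 256 addresses, of which 254 are usable by hosts;
# a /30 consumes only 4: network, two hosts, and broadcast.
print(link_24.num_addresses, len(list(link_24.hosts())))
print(link_30.num_addresses, len(list(link_30.hosts())))
```

The `.hosts()` iterator excludes the subnet and broadcast addresses automatically, matching the "256 used, 254 usable" accounting in the paragraph above.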
One drawback to VLSM is the complexity that it adds to the network. When there was only
one mask used in an environment, the network administrators could easily memorize the subnet
information. With VLSM, however, subnet information needs to be calculated based on the
individual situation. Miscalculation of the subnets can lead to communication problems if
machines are assigned outside a subnet boundary or on a subnet or broadcast address.
IP Packet Structure
Now that you know what IP is, let’s look at the actual packet structure in more detail. The
following is an IP packet that was broken down by EtherPeek, a network analyzer. The header is
organized into layers of 32 bits each: five layers at minimum, or six when options are present.
Look at each section of the header and get an explanation of each:
IP Header - Internet Protocol Datagram
Version: 4
Header Length: 5
Precedence: 0
Type of Service: 0
Unused:
Total Length: 60
Identifier: 0
Fragmentation Flags: 0
Fragment Offset: 0
Time To Live: 2
IP Type: 0x58 EIGRP
Header Checksum: 0x10dc
Source IP Address: 205.124.250.7
Dest. IP Address: 224.0.0.10
No Internet Datagram Options
At this point, we will define the key fields that appear in this listing. As you can see, the
packet IP header starts out with the Version field. Right now, the standard is IPv4. The version
parameter uses four of the 32 bits available.
The next field is the IP Header Length, or IHL. This field also uses another four bits, and it
specifies the datagram header length in 32-bit words.
The Type of Service (TOS field) follows the IHL. This field uses eight bits and indicates
datagram priority and how other OSI layers are to handle the datagram once they receive it.
Following the TOS field is the Total Length parameter. This field indicates how long the
packet is, including header and payload or data. The length is in units of bytes. The field itself
uses 16 bits, which brings the total for these fields to 32 bits or four bytes.
The second field begins with the Identifier or Identification field. The Identifier is
a 16-bit field that contains an integer value that identifies the packet. It is like a sequencing number
that is used when reassembling datagram fragments.
The Fragmentation Flags field follows, using only three bits. This field is used to control
fragmentation of a datagram. The first bit is reserved and always set to 0. The second bit is the
Don’t Fragment (DF) bit: 0 means the datagram may be fragmented, and 1 means it must not be.
The third bit is the More Fragments (MF) bit, which is set on every fragment except the last.
Fragment Offset follows the Flags field. This value uses 13 bits and specifies the fragment’s
position in the original datagram. The position is measured from the beginning of the datagram
and marked off in 64-bit increments. This again brings you to 32 bits, so you must move down
to the next layer in the IP packet.
The third field begins with the Time-to-Live (TTL) field, a counter measured in hops. A starting
value is given, and it is decremented by 1 as the packet passes through each hop or router. Once
the value of this field reaches 0, the packet is discarded. This field uses eight bits.
The protocol field (IP Type) follows the TTL parameter. This field tells layer 3 which upper
layer protocol is supposed to receive the packet. It uses a decimal value to specify the protocol.
This field uses eight bits.
The Header Checksum field finishes the third layer. The checksum is used to help verify the
integrity of the IP header. This field uses 16 bits.
The next two fields are the Source IP Address and Dest. IP Address respectively. Both
of these fields are 32 bits long.
An Options field occupies the final portion of the header. The field must end on a 32-bit
boundary, so any remaining bits are padded.
Figure 36.13 gives a good visual representation of the IP packet structure.
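The field layout just described can also be sketched with the `struct` module; the values below rebuild the EtherPeek header shown earlier (TTL 2, protocol 0x58 EIGRP):

```python
import socket
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,                 # header length in 32-bit words
        "tos": tos,
        "total_length": total_len,
        "identifier": ident,
        "flags": flags_frag >> 13,             # three fragmentation flag bits
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                     # 0x58 (88 decimal) = EIGRP
        "checksum": checksum,
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Rebuild the header from the EtherPeek values shown above.
hdr = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 60, 0, 0, 2, 0x58, 0x10DC,
                  socket.inet_aton("205.124.250.7"),
                  socket.inet_aton("224.0.0.10"))
fields = parse_ipv4_header(hdr)
```

As with TCP, the fields that share a byte or word (Version/IHL, Flags/Fragment Offset) have to be separated with shifts and masks.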
Internet Protocol (IP)
It is important to distinguish between the Internet Protocol suite and the actual Internet Protocol
that is used in the Network layer of the OSI model.
The IP suite consists of several discrete protocols that are implemented at different levels of
the OSI model.
The Internet Protocol (IP) is a Network layer protocol of the IP suite. It is used to allow
routing among internetworks and heterogeneous systems. IP is a connectionless protocol,
even though it can provide error reporting, and it performs the segmentation and reassembly
of PDUs.
Layers 3 and 4: IP Routed Protocols
The Network layer is used by the Transport layer to provide the best end-to-end services and
path for PDU delivery. This means that the Network layer also uses protocols to accomplish this task. This section discusses protocols that are used within layer 3 of the OSI model. Some of
these protocols use other protocols within them for finer granularity of certain functions.
There is a significant difference between routing protocols and routed protocols. Routing
protocols are used to exchange route information and to create a network topology, thus
enabling routing decisions to be made. The routed protocols, on the other hand, contain information
regarding the end systems, how communication is established, and other information
relevant to the transfer of data. The routing protocols will be covered in Chapter 38, “TCP/IP
Routing Protocol Troubleshooting.”
Frame Structure
Look at Figure 36.12 to get a picture of the ISDN frame. As you can see, this frame is similar
to the HDLC frame that you studied earlier (Figure 36.11). ISDN uses LAPD (Link Access
Procedure on the D channel) for layer 2 functions. Like the HDLC frame, the ISDN frame is
bounded by Flag fields.
After the Flag field, again going from right to left, we see the Address field. The Address
field contains several bits of key information:
SAPI This field is the service access point identifier. It defines which services are provided
to layer 3.
C/R This field designates the frame as a command or a response.
EA This is the last bit of the first byte of the Address field. This bit defines the Address field
as one or two bytes. If it is set to one byte, this is the last field within the Address field. If it is
set to two bytes, then one more field follows, ending with another EA bit.
TEI This is the terminal end point identifier, the layer 2 address used to identify individual
devices connecting to an ISDN network.
Integrated Services Digital Network (ISDN)
Integrated Services Digital Network (ISDN) is a service that allows telephone networks to carry
data, voice, and other digital traffic. There are two types of ISDN interfaces: Basic Rate Interface
(BRI) and Primary Rate Interface (PRI). BRI uses two B channels and one D channel. Each
of the two B channels operates at 64Kbps bidirectionally; the D channel operates at 16Kbps.
The B channels are used for transmitting and receiving data. The D channel is used for protocol
communications and signaling.
In contrast, PRI uses 23 B channels and 1 D channel. All 23 B channels are added to a rotary
group, as well. The D channel runs at the same line speed as the B channels, 64Kbps. With all
24 channels running at 64Kbps (1536Kbps total) plus 8Kbps of T-1 framing, PRI has the equivalent
line speed of a T-1 circuit (1.544Mbps). In Europe, PRI offers 30 B channels and 1 D channel,
making it the equivalent of an E-1 circuit.
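The channel arithmetic above can be checked directly. This is a quick sketch; the 8Kbps figure is the T-1 framing overhead that accounts for the gap between 24 x 64Kbps and 1.544Mbps.

```python
B_RATE, BRI_D, PRI_D = 64, 16, 64      # channel speeds in Kbps

bri_channels = 2 * B_RATE + BRI_D      # 2B+D  -> 144 Kbps of channel capacity
pri_channels = 23 * B_RATE + PRI_D     # 23B+D -> 1536 Kbps
t1_rate = pri_channels + 8             # add 8 Kbps of T-1 framing -> 1544 Kbps (1.544 Mbps)
euro_pri = 30 * B_RATE + PRI_D         # 30B+D -> 1984 Kbps, carried on an E-1
```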
Just as there are two types of ISDN interfaces, there are two terminal equipment types. Type 1
(TE1) is equipment that was built specifically for use on ISDN. Type 2 (TE2) is equipment that
was made before the ISDN specifications, and it requires a terminal adapter to actually interface
with ISDN. Terminal equipment, which is comparable to DTE as described in the “Frame Relay”
section earlier in this chapter, includes computers or routers.
In order for terminal equipment to work, it must be able to connect to a network termination.
There are three types of ISDN network terminations, known as NT devices. Type 1 (NT1) devices
are treated as customer premises equipment. Type 2 (NT2) devices are more intelligent devices
than NT1 and can perform concentration and switching functions. The last type is a combination
of Types 1 and 2. It is known as a Type 1/2 or NT1/2.
More information about troubleshooting ISDN is covered in Chapter 40, “Troubleshooting
ISDN.”
Frame Structure
Frame Relay does not provide any information on flow and error control. As a result, no space is
reserved within the frame for this information. These functions are left to the upper layer protocols.
Frame Relay does provide congestion detection and can notify the upper layers of possible problems;
however, Frame Relay is primarily concerned only with the transmission and reception of data.
As a mechanism for data circuit identification, Frame Relay uses a DLCI number. Ten bits
of the two-byte Address field are used to define the DLCI. To a Frame Relay frame, the DLCI
is the most significant address in the header. Figure 36.11 depicts a Frame Relay frame.
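The ten DLCI bits are split across the two address bytes: the upper six sit in the first byte above the C/R and EA bits, and the lower four sit in the second byte above the congestion and EA bits. A small sketch, assuming the standard Q.922 bit layout (the function name and sample bytes are illustrative):

```python
# Illustrative: recover the 10-bit DLCI from the two-byte Address field.
def frame_relay_dlci(byte1: int, byte2: int) -> int:
    high = byte1 >> 2   # upper 6 DLCI bits (above the C/R and EA bits)
    low = byte2 >> 4    # lower 4 DLCI bits (above FECN, BECN, DE, and EA)
    return (high << 4) | low

# DLCI 17, encoded with all control bits clear and the final EA bit set:
dlci = frame_relay_dlci(0x04, 0x11)
```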
Frame Relay
Frame Relay was developed as a digital packet-switching technology, whereas older technologies
such as X.25 were analog-based technologies. The technology used in Frame Relay allows it to
multiplex several different data flows over the same physical media. More information on Frame
Relay is presented in Chapter 39, “Troubleshooting Serial Line and Frame Relay Connectivity.”
Frame Relay also uses permanent and switched virtual circuits between the data terminal
equipment (DTE) (customer connection) and the data communication equipment (DCE) (service
provider’s frame relay switch). These virtual circuits have unique identifiers that allow the
Frame Relay to keep track of each logical data flow. The identifier is known as a DLCI (data
link connection identifier). The DLCI number is used to create a logical circuit within a physical
circuit. Multiple logical circuits can be created within one physical circuit.
Look at the following router configuration excerpt:
interface Serial1/5
description Physical Circuit
no ip address
no ip directed-broadcast
encapsulation frame-relay
!
interface Serial1/5.1 point-to-point
description To Building A
ip address 172.16.1.17 255.255.255.252
no ip directed-broadcast
frame-relay interface-dlci 17 IETF
!
interface Serial1/5.2 point-to-point
description To Building B
ip address 172.16.1.25 255.255.255.252
no ip directed-broadcast
frame-relay interface-dlci 22 IETF
From this configuration, you can see that two logical circuits have been defined to communicate
over one physical circuit. Notice that each subinterface or logical circuit has a unique DLCI. Each
DLCI maps to another DLCI within the Frame Relay cloud. This mapping continues throughout the
Frame Relay cloud until it maps to another DTE on the destination side of the virtual circuit.
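The hop-by-hop mapping can be pictured as a lookup table inside the cloud. The switch names and DLCI values below are invented for illustration; the point is that each DLCI is only locally significant and is swapped at every hop until the frame reaches the far-end DTE.

```python
# Toy model of DLCI swapping across a Frame Relay cloud (all values invented).
cloud = {
    ("router-a", 17): ("switch-1", 35),
    ("switch-1", 35): ("switch-2", 41),
    ("switch-2", 41): ("router-b", 90),
}

def trace_circuit(node, dlci):
    """Follow the per-hop DLCI mappings until reaching the far-end DTE."""
    path = [(node, dlci)]
    while (node, dlci) in cloud:
        node, dlci = cloud[(node, dlci)]
        path.append((node, dlci))
    return path

path = trace_circuit("router-a", 17)   # ends at the destination DTE
```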
Synchronous Data Link Control (SDLC)
Synchronous Data Link Control (SDLC) is a synchronous, bit-oriented protocol that is more
efficient, faster, and more flexible than earlier character-oriented formats. SDLC has several
derivatives that perform similar functions with some enhancements: HDLC, LAPB (Link Access
Procedure, Balanced), and IEEE 802.2, just to name a few. HDLC is the default encapsulation
type on most Cisco router serial interfaces.
SDLC is used for many link types. Two node types exist within SDLC: primary nodes and
secondary nodes. Primary nodes are responsible for the control of secondary stations and for
link management operations such as link setup and teardown. Secondary nodes talk only to the
primary node when fulfilling two requirements. First, they have permission from the primary
node; second, they have data to transmit. Even if a secondary node has data to send, it cannot
send the data if it does not have permission from the primary node.
Primary and secondary stations can be configured together in four different topologies:
Point-to-point This topology requires only two nodes—a primary and a secondary.
Multipoint This configuration uses one primary station and multiple secondary stations.
Loop This configuration uses one primary and multiple secondary stations. The difference
between the loop and multipoint setups is that in a loop, the primary station is connected between
two secondary stations, which makes two directly connected secondary stations. When more
secondary stations are added, they must connect to the other secondary stations that are currently
in the loop. When one of these stations wants to send information to the primary node, it must
transit to the other secondary stations before it reaches the primary.
Hub go-ahead This configuration also uses one primary and multiple secondary stations, but
it uses a different communication topology. The primary station has an outbound channel. This
channel is used to communicate with each of the secondary stations. An inbound channel is
shared among the secondary stations and has a single connection into the primary station.
Frame Structure
SDLC uses three different frame structures: information, supervisory, and unnumbered. Overall,
the structure of the frames is similar among all three, except for the Control field, which
varies to distinguish the type of SDLC frame being used. Figure 36.10 gives the structure for
the different SDLC frames. Pay close attention to the bit values next to the send sequence
number within the Control field.
First, let’s talk about the frame fields that are common among all three frame types. As
you can see, all three frames depicted in Figure 36.10 start with a Flag field that is followed
by an Address field. The Address field of SDLC frames is different from other frame structures
because only the address of the secondary node is used, rather than a destination and
source address. The secondary address is used because all communication is either originated
or received by the primary node; thus, it is not necessary to specify its address within
the frame.
The Control field follows the Address field. Information contained within the Control field
defines the SDLC frame type. The Control field begins with a receive sequence number, which
tells the protocol the number of the next frame to be received.
The P/F, or Poll/Final, bit following the receive sequence number is used differently by
primary and secondary nodes. A primary node uses it to tell the secondary node that an
immediate response is required. A secondary node uses it to tell the primary node that the
frame is the last one in the current dialog.
The Data field follows the Control field. As with other frame types, the FCS field comes
next and is used for CRC calculation. SDLC frames end as they begin, with a Flag field like
the one at the start of the frame.
Now that we have discussed the frame structure, let's examine the three frame types.
Information frames carry exactly what the name implies: information destined for the upper
layer protocols. Supervisory frames control SDLC communications; they are responsible for
flow control and error control for information frames (I-frames). Unnumbered frames provide
the initialization of secondary nodes, as well as other managerial functions.
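The low-order bits of the Control field are what distinguish the three frame types. A minimal sketch, assuming the standard SDLC/HDLC bit patterns (a low-order bit of 0 marks an I-frame, the pattern 01 an S-frame, and 11 a U-frame); the helper names are illustrative:

```python
# Illustrative classifier for an SDLC Control byte.
def sdlc_frame_type(control: int) -> str:
    if control & 0x01 == 0:
        return "information"   # I-frame: carries data plus send/receive counts
    if control & 0x03 == 0x01:
        return "supervisory"   # S-frame: flow and error control for I-frames
    return "unnumbered"        # U-frame: initialization and link management

def poll_final(control: int) -> int:
    return (control >> 4) & 0x01   # the P/F bit shared by all three frame types
```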
Point-to-Point Protocol (PPP)
Point-to-Point Protocol (PPP) is used to transfer data over serial point-to-point links. It accomplishes
this by using a layer 2 serial encapsulation called High-Level Data Link Control (HDLC).
HDLC is used for frame encapsulation on synchronous serial lines. It uses a link control protocol
(LCP) to manage the serial connection. Network control protocols (NCPs) are used to allow PPP
to use other protocols from layer 3, thus enabling PPP to assign IP addresses dynamically.
PPP uses the same frame structure as HDLC. Figure 36.9 gives you a picture of what the
frame looks like. As always, we move from right to left.
FIGURE 36.9 PPP packet structure
First, we have the Flag field, which uses one byte to specify the beginning or ending of a
frame. Then there is another byte that is used in the Address field to hold a broadcast address
of 11111111.
The Address field is followed by the one-byte Control field, which requests a transmission
of user data. The two-byte Protocol field follows the Control field. This field indicates the
encapsulated data’s protocol.
The Data field contains the information that will be handed to the upper layer protocols. It is a
variable-length field. After that is the FCS. Like the other protocols, it is used for CRC calculation.
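Putting the fields together, a PPP frame can be sketched as follows. The flag (0x7E), all-stations address (0xFF), and control (0x03) values are the standard PPP constants; the FCS is left as a two-byte placeholder here rather than a real CRC computation, and the builder function itself is illustrative.

```python
FLAG, ALL_STATIONS, UI_CONTROL = 0x7E, 0xFF, 0x03

def build_ppp_frame(protocol: int, payload: bytes) -> bytes:
    """Assemble Flag | Address | Control | Protocol | Data | FCS | Flag."""
    body = bytes([ALL_STATIONS, UI_CONTROL]) + protocol.to_bytes(2, "big") + payload
    fcs = b"\x00\x00"   # placeholder; a real FCS is a CRC over Address..Data
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_ppp_frame(0x0021, b"\x45\x00")   # 0x0021 is PPP's protocol number for IPv4
```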
Frame Structure
Frame formats are similar between Ethernet and IEEE 802.3. Figure 36.8 depicts the similarities
and differences between the two. The frame structures are read from right to left. Starting at the
right, you see that both frames begin with a preamble. The Preamble is a seven-byte field.
(Notice that we have moved from bits to bytes to specify field lengths.) The preamble consists
of alternating 1s and 0s.
The next field is the SOF, the start-of-frame delimiter. It is used to synchronize the
frame-reception portions of all the machines on the segment. This field is only one byte long.
The two fields following the SOF are six bytes each; they are the Destination and Source
MAC addresses of the receiving and sending stations. Each MAC address is unique.
Up to this point, the frames are exactly the same. Starting with the next field, they are different.
The next field is a two-byte field in both frame structures. Ethernet defines the field as a Type field;
IEEE 802.3 defines it as a Length field. Ethernet uses this field to specify which upper layer protocol
will receive the packet. IEEE 802.3 uses the field to define the number of bytes in the payload
(802.2 header and data) field. One easy method of telling an Ethernet frame from an 802.3
frame is to look at the Type/Length field. If this value is 1500 (0x05DC) or less, it is an
IEEE 802.3 frame; if it is greater than 1500, it is an Ethernet frame.
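That heuristic is a one-line check (the field is read as a big-endian 16-bit value; 0x05DC is 1500):

```python
def frame_flavor(type_or_length: int) -> str:
    """Classify a frame by its Type/Length field, per the rule above."""
    return "IEEE 802.3" if type_or_length <= 1500 else "Ethernet"

frame_flavor(0x0040)   # 64 bytes of payload: an 802.3 length
frame_flavor(0x0800)   # the IPv4 EtherType: an Ethernet frame
```

In practice, assigned EtherType values begin at 0x0600 (1536), so values between 1501 and 1535 should not appear on the wire.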
Next is the Data field, in both Ethernet and 802.3 formats. The only difference between the
two versions of this field is that Ethernet uses a variable byte size, between 46 and 1500 bytes,
for data. This data is what will be handed to the upper layer protocols. IEEE 802.3 uses a
46–1500 variable byte size, as well, but the information here contains the 802.2 header and the
encapsulated data that will eventually be passed to an upper layer protocol that is defined
within the Data field.
Finally, the last field is the Frame Check Sequence (FCS) field. It is four bytes long and holds
the CRC value that is computed by the sender and recalculated by the receiver to verify the
integrity of the frame.
Ethernet/IEEE 802.3
These two terms actually refer to different things: Ethernet is a communication technology, and
IEEE 802.3 is a variety of Ethernet. Ethernet, in the more specific sense, is a carrier sense,
multiple access/collision detection (CSMA/CD) local area network. An Ethernet network uses these
attributes (carrier sense, multiple access, and collision detection) to enhance communication.
This definitely does not mean that Ethernet is the only technology that uses these attributes.
In today's technical jargon, however, the term Ethernet is getting closer to meaning all
CSMA/CD technologies.
Both Ethernet and IEEE 802.3 are broadcast networks. All frames that cross a given segment
can be heard by all machines populating that segment. Because all machines on the segment have
equal access to the physical media, each station tries to wait for a quiet spot before it transmits its
data. If two machines talk at the same time, a collision occurs.
Ethernet services both the Physical and Data Link layers, whereas IEEE 802.3 is more concerned
with the Physical layer and how it talks to the Data Link layer. Several IEEE 802.3 protocols
exist; each one has a distinct name that describes how it is different from other IEEE 802.3
protocols. Table 36.3 summarizes these differences.
TABLE 36.3 IEEE 802.3 Characteristics

802.3 Values   Data Rate (Mbps)   Signaling Method   Maximum Segment Length (m)   Media                     Topology
10Base5        10                 Baseband           500                          50 Ohm coax               Bus
10Base2        10                 Baseband           185                          50 Ohm coax               Bus
1Base5         1                  Baseband           185                          Unshielded twisted pair   Star
10BaseT        10                 Baseband           100                          Unshielded twisted pair   Star
100BaseT       100                Baseband           100                          Unshielded twisted pair   Star
10Broad36      10                 Broadband          1800                         75 Ohm coax               Bus
1000BaseT      1000               Baseband           100                          Unshielded twisted pair   Star

Table 36.3 is an excerpt from Cisco documentation; for the full document, please see
www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/ethernet.htm.
In Table 36.3, you will notice that the terms baseband and broadband are used to describe
the signaling type. In a baseband transmission, only a single frequency is used for sending data,
and therefore only a single signal can be sent over the same media. A broadband signal multiplexes
multiple signals of different frequencies together on the same physical media.
Though not specifically called out in the table, there are four different IP encapsulation types
supported by Cisco for Ethernet: ARPA, SNAP, Novell-Ether, and SAP. Of these, ARPA is the
default encapsulation type used.
Layer 2: Data Link Layer
Protocols and Applications
This section is dedicated to layer 2 protocols and applications. It is a very important section
because it provides specific information on how the layer 2 protocols work. What better
way to be able to troubleshoot a problem than by understanding the intricacies of the protocol
in question?
This section covers the following layer 2 protocols:
Ethernet/IEEE 802.3
PPP
SDLC
Frame Relay
ISDN
Connectionless Protocols
Now that connection-oriented protocols have been discussed, we'll move on to connectionless
protocols. Connectionless protocols differ from connection-oriented protocols because they do
not provide for flow control.
Figure 36.7 shows you how connectionless protocols work. This figure looks somewhat
like Figure 36.3, except that there are no steps that involve a connection setup or
termination. It is also missing the flow control and error control information sent by the
receiving system.
Connectionless protocols do not send data relative to any other data units. The data
included in the PDU must contain enough information for the PDU to get to its destination
and for the receiving system to properly process it. Because there is no established connection,
flow and error control cannot be implemented. Without flow and error control,
the originating system has no way of knowing whether all of the transmitted data was
received by the destination system without errors. Table 36.2 shows examples of connectionless
protocols.
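UDP is the classic example of such a protocol, and the contrast can be made concrete with sockets. The sketch below sends a single datagram over the loopback interface with no connection setup, teardown, or acknowledgment; the port is chosen by the operating system and the payload is arbitrary.

```python
import socket

# Receiver: bind to an ephemeral port on loopback; no accept(), no handshake.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: one datagram, fire and forget; the stack gives no delivery confirmation.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, source = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Every datagram must carry the full destination address, because nothing ties it to any other datagram; as the text notes, it is up to the application to notice a loss.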