Virtual LANs (VLAN)


VLANs provide the means to logically group several
end stations with common sets of requirements.
VLANs are independent of physical locations,
meaning that two end stations connected to different
switches on different floors can belong to the
same VLAN. Typically the logical grouping follows
workgroup functions such as engineering or
finance, but this can be customized.
With VLANs it is much easier to assign access
rules and provision services to groups of users
regardless of their physical location. For example,
using VLANs you can give all members of a project
team access to project files by virtue of their
VLAN membership. This ability also makes it easier
to add or delete users without rerunning cables
or changing network addresses.
VLANs also create their own broadcast domains
without the addition of Layer 3 devices.
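As a rough sketch of the idea, port-to-VLAN assignments and VLAN-scoped broadcasts might be modeled like this (the port numbers and group names are invented for illustration):

```python
# Hypothetical port-to-VLAN assignments; each VLAN is its own broadcast domain.
vlan_membership = {
    1: "engineering", 2: "engineering",   # ports 1-2
    3: "finance", 4: "finance",           # ports 3-4
}

def broadcast(src_port, membership):
    """Deliver a broadcast only to the other ports in the sender's VLAN."""
    vlan = membership[src_port]
    return [port for port, v in membership.items()
            if v == vlan and port != src_port]

# A broadcast from port 1 reaches only the other engineering port.
print(broadcast(1, vlan_membership))  # [2]
```

Adding a user to the project team is then just a table change, with no recabling.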

Frame Transmission Modes


Switches typically are Layer 2 devices (some
switches now perform Layer 3 and higher functions).
According to the OSI model, the data unit
processed by a switch is called a frame. Switches
must balance speed and accuracy (no errors) when
processing frames, because typically they are measured
on both attributes.
The three primary frame switching modes are as
follows:
• Cut-through: Also known as fast-forward. The
switch checks only the destination address and
immediately begins forwarding the frame. This
can decrease latency but also can transmit
frames containing errors.
• Store-and-forward: The switch waits to receive
the entire frame before forwarding. The entire
frame is read, and a cyclic redundancy check
(CRC) is performed. If the CRC is bad, the
frame is discarded. Although this method
increases latency (processing time), it also tends
to minimize errors.
• Fragment-free (modified cut-through): The
switch reads the first 64 bytes before forwarding
the frame. 64 bytes is the minimum number of
bytes necessary to detect and filter out collision
frames.
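The store-and-forward tradeoff can be sketched in a few lines. This is a conceptual model only; real switches verify the Ethernet frame check sequence in hardware, which is approximated here with `zlib.crc32`:

```python
import zlib

def store_and_forward(frame: bytes, expected_crc: int) -> bool:
    """Buffer the entire frame, then verify its checksum before forwarding."""
    return zlib.crc32(frame) == expected_crc

frame = b"payload destined for port 0"
good_crc = zlib.crc32(frame)          # stands in for the Ethernet FCS

print(store_and_forward(frame, good_crc))              # True  -> forward
print(store_and_forward(b"X" + frame[1:], good_crc))   # False -> discard
```

A cut-through switch would skip the check entirely and start forwarding after reading the destination address, trading error filtering for lower latency.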

Address Learning


A switch must learn the addresses of the devices
attached to it. First it inspects the source address of
all the traffic sent through it. Then it associates the
port the traffic was received on with the MAC
address listed. The following example illustrates
this concept. The MAC addresses are not in the
correct format and are shown for clarity only:
• Time 0: The switch shown has an empty MAC
address table.
• Time 1: The device attached to port 2 sends a
message intended for the device on port 0. This
kicks off two actions within the switch. First, the
switch now knows the address associated with
the device on port 2, so it enters the information
into its table. Second, because it does not have
an association for the device the traffic is intended
for (the computer on port 0), the switch
floods the message out all ports except the one
on which it was received.

• Time 2: The device on port 0 replies to the message.
The switch associates the source address of
the message with port 0.
Any future communications involving either of
these end stations will not require these steps,
because the switch now knows which ports they
are associated with.
This process happens all the time in every switch.
For most switches, when a table entry has reached
a certain “age” and has not been referenced in a
while, it can be removed. This process is called
aging out.
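The learning-and-aging behavior described above can be sketched as a toy table; the class and method names are invented for illustration, not taken from any real switch software:

```python
import time

class MacTable:
    """Toy source-address learning table with aging."""
    def __init__(self, max_age=300):
        self.entries = {}          # MAC -> (port, last_seen)
        self.max_age = max_age

    def learn(self, mac, port):
        """Record which port a source MAC was seen on."""
        self.entries[mac] = (port, time.monotonic())

    def lookup(self, mac):
        """Return the port for a MAC, or None (caller floods)."""
        entry = self.entries.get(mac)
        if entry is None:
            return None
        port, last_seen = entry
        if time.monotonic() - last_seen > self.max_age:
            del self.entries[mac]  # entry aged out
            return None
        return port

table = MacTable()
table.learn("AA", 2)   # Time 1: "AA" on port 2 sends; "BB" unknown -> flood
table.learn("BB", 0)   # Time 2: "BB" replies from port 0 -> learned
print(table.lookup("BB"))  # 0 -- future frames to BB go only to port 0
```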


Forwarding and Filtering

From a network efficiency standpoint, it is easy to
see that it is much better for the network when the
switch knows all the addresses on every port.
However, it is not always practical to enter this
information manually. As the network grows and
changes are made, it becomes almost impossible to
keep up.

A switch always does something when it receives
traffic. The preference is to send the traffic out only
the port that leads to the destination (forwarding),
filtering it from all other ports. This works only
when the location of the intended destination is
known. When the destination address is unknown,
the switch forwards the traffic out every port,
except the one on which the traffic was received.
This process is called flooding. Think of this as a
guy calling every number in the phone book because
he lost a woman’s number from the night before.

Broadcast and Collision Domains

From time to time, a device on the network will
want to communicate with all other “local” devices
at the same time. Typically, this occurs when a
device wants to query the network for an IP address,
when a device is newly added to a network, or when
a change occurs in the network.

A group of devices that receive all broadcast messages
from members of that group is called a
broadcast domain. Network broadcast domains
typically are segmented with Layer 3 devices (routers).
Think of a broadcast domain as standing in
your yard and yelling as loudly as you can. The
neighbors who hear you are within your broadcast
domain.


What Problems Need to Be Solved?

• MAC address learning: Switches must learn
about the network to make intelligent decisions.
Because of the size and changing nature of networks,
switches have learned how to discover
network addresses and keep track of network
changes. Switches do this by finding the address
information contained in the frames flowing
through the network, and they maintain private
tables with that information.
• Forwarding and filtering: Switches must decide
what to do with traffic. These decisions are
based on the switch’s knowledge of the network.
• Segmenting end stations: Switches must also have
mechanisms for segregating users into logical
groupings (virtual LANs [VLAN] or broadcast
domains) to allow efficient provisioning of service.

Why Should I Care About Switching?

Advances in switching technology combined with
a decrease in switch prices have made computer
networks a common and increasingly important
aspect of business today.

Switches Take Over the World

As switches established themselves in networks, vendors added increasing
functionality. Modern switches can perform forwarding decisions based on
Layer 3 routing, as well as on Layer 4 and above. Even though switches can
perform the functions of other higher-layer devices such as routers and content
switches, you still must separate these functionalities to avoid single points of
failure. Switches are the workhorse of networks, providing functionality across
almost all layers of the OSI model reliably and quickly. Switches can also provide
power to devices such as IP-based phones using the same Ethernet connection.
Again, this applies to very large switches serving corporate networks
rather than the switches in a small office or home.

Switching Ethernets

As switch Ethernet ports became less expensive, switches replaced hubs in the
wiring closet. Initially, when switches were introduced, network administrators
plugged hubs (containing multiple hosts) into switch ports. But eventually, it
became cost-effective to plug the hosts directly into a switch port. This
arrangement gives each host its own dedicated Ethernet and removes the possibility
of collisions. Because a dedicated switch connection has only two devices
(the switch and the host), you can configure an Ethernet switch port as full
duplex. This means that a device can receive incoming traffic and transmit
traffic simultaneously. End stations have considerably more bandwidth when
they use switches. Ethernet can run at multiple speeds: 10 Mbps, 100 Mbps,
1 Gbps, and 10 Gbps. Therefore, switches can provide connectivity at these
speeds. However, network applications and the web create considerably more
network traffic, reintroducing congestion problems. Switches can use
quality of service (QoS) and other mechanisms to help solve the congestion
issue.

Switching Basics: It’s a Bridge

Network devices have one primary purpose: to pass network traffic from one
segment to another. (There are exceptions, of course, such as network analyzers,
which inspect traffic as it goes by.) With devices that independently make
forwarding decisions, traffic can travel from its source to the destination. The
higher up the OSI model a device operates, the deeper it looks into a packet to
make a forwarding decision. Railroad-switching stations provide a similar
example. The switches enable a train to enter the appropriate tracks (path)
that take it to its final destination. If the switches are set wrong, a train can
end up traveling to the wrong destination or traveling in a circle.

Switching technology emerged as the replacement for bridging. Switches provide
all the features of traditional bridging and more. Compared to bridges,
switches provide superior throughput performance, higher port density, and
lower per-port cost.
The different types of bridging include the following:
• Transparent bridging primarily occurs in Ethernet networks.
• Source-route bridging occurs in Token Ring networks.
• Translational bridging occurs between different media. For example, a translational
bridge might connect a Token Ring network to an Ethernet network.
Bridging and switching occur at the data link layer (Layer 2 in the OSI model),
which means that bridges control data flow, provide transmission error handling,
and enable access to physical media. Basic bridging is not complicated:
A bridge or switch analyzes an incoming frame, determines where to forward
the frame based on the packet’s header information (which contains information
on the source and destination addresses), and forwards the frame toward
its destination. With transparent bridging, forwarding decisions happen one
hop (or network segment) at a time. With source-route bridging, the frame
contains a predetermined path to the destination.
Bridges and switches divide networks into smaller, self-contained units.
Because only a portion of the traffic is forwarded, bridging reduces the overall
traffic that devices see on each connected network. The bridge acts as a kind
of firewall in that it prevents frame-level errors from propagating from one
segment to another. Bridges also accommodate communication among more
devices than are supported on a single segment or ring.
Bridges and switches essentially extend the effective length of a LAN, permitting
more workstations to communicate with each other within a single broadcast
domain. The primary difference between switches and bridges is that
bridges segment a LAN into a few smaller segments. Switches, through their
increased port density and speed, permit segmentation on a much larger scale.
Modern-day switches used in corporate networks have hundreds of ports per
chassis (unlike the four-port box connected to your cable modem).

Additionally, modern-day switches interconnect LAN segments operating at
different speeds.
Switching describes technologies that are an extension of traditional bridges.
Switches connect two or more LAN segments and make forwarding decisions
about whether to transmit packets from one segment to another. When a
frame arrives, the switch inspects the destination and source Media Access
Control (MAC) addresses in the packet. (This is an example of store-and-forward
switching.) The switch places an entry in a table indicating that the
source MAC address is located off the switch interface on which the packet
arrived. The bridge then consults the same table for an entry for the destination
MAC address. If it has an entry for the destination MAC address, and the
entry indicates that the MAC address is located on a different port from which
the packet was received, the switch forwards the frame to the specified port.
If the switch table indicates that the destination MAC address is located on the
same interface on which the frame was just received, the bridge does not forward
the frame. Why send it back onto the network segment from which it
came? This decision is where a switch reduces network congestion. Finally, if
the destination MAC address is not in the switch’s table, this indicates that the
switch has not yet seen a frame destined for this MAC address. In this case,
the switch then forwards the frame out all other ports (called flooding) except
the one on which the packet was received.
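The forward/filter/flood decision just described can be condensed into a single function; the table contents and names below are illustrative:

```python
def decide(frame_dst, in_port, mac_table):
    """Transparent-bridging decision: filter, forward, or flood.
    mac_table maps MAC address -> port."""
    out_port = mac_table.get(frame_dst)
    if out_port is None:
        return ("flood", None)      # destination not yet learned
    if out_port == in_port:
        return ("filter", None)     # destination is on the arrival segment
    return ("forward", out_port)    # send out one specific port only

table = {"AA": 2, "BB": 0}  # learned earlier: MAC -> port
print(decide("BB", 2, table))  # ('forward', 0)
print(decide("AA", 2, table))  # ('filter', None)
print(decide("CC", 2, table))  # ('flood', None)
```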
At their core, switches are multiport bridges. However, switches have radically
matured into intelligent devices, replacing both bridges and hubs. Switches not
only reduce traffic through the use of bridge tables, but also offer new functionality
that supports high-speed connections, virtual LANs, and even traditional
routing.

Fast Computers Need Faster Networks

The PC emerged as the most common desktop computer in the 1980s. LANs
emerged as a way to network PCs in a common location. Networking technologies
such as Token Ring and Ethernet allowed users to share resources
such as printers and exchange files with each other. As originally defined,
Ethernet and Token Ring provided network access to multiple devices on the
same network segment or ring. These LAN technologies have inherent limitations
as to how many devices can connect to a single segment, as well as the
physical distance between computers. Desktop computers got faster, the number
of computers grew, operating systems began multitasking (allowing multiple
tasks to operate at the same time), and applications became more network-centric.
All these advancements resulted in congestion on LANs.
To address these issues, two device types emerged: repeaters and bridges.
Repeaters are simple Open Systems Interconnection (OSI) Layer 1 devices that
allow networks to extend beyond their defined physical distances (which were
limited to about 150 feet without the use of a repeater).
Bridges are OSI Layer 2 devices that physically split a segment into two and
reduce the amount of traffic on either side of the bridge. This setup allows
more devices to connect to the LAN and reduces congestion. LAN switches
emerged as a natural extension of bridging, revolutionizing the concept of
local-area networking.

What They Gave Away

In the 1970s Xerox Corporation assembled a
group of talented researchers to investigate new
technologies. The new group was located in the
newly opened Palo Alto Research Center (PARC),
well away from the corporate headquarters in
Connecticut.
In addition to developing Ethernet, the brilliant
folks at the PARC invented the technology for
what eventually became the personal computer
(PC), the graphical user interface (GUI), laser printing,
and very-large-scale integration (VLSI).
Inexplicably, Xerox Corporation failed to recognize
the brilliance (and commercial viability) of many of
these homegrown innovations and let them walk
out the door.
To give you an idea of what this cost Xerox in
terms of opportunity, the worldwide market for
Ethernet equipment was more than $7 billion in
2006 and was expected to grow to more than $10
billion by 2009. Just imagine if a single company
owned the assets of Apple, Intel, Cisco, HP, and
Microsoft. There almost was such a company. Its
name is Xerox.
At-a-Glance: Ethernet

LAN Routers

LAN-based routers greatly extend the speed, distance,
and intelligence of Ethernet LANs. Routers
also allow traffic to be sent along multiple paths.
Routers, however, require a common protocol
between the router and end stations.

Switched Ethernet

A LAN switch can be thought of as a high-speed,
multiport bridge with a brain. Switches don’t just
allow each end station to have a dedicated port
(meaning that no collisions occur). They also allow
end stations to transmit and receive at the same
time (using full duplex), greatly increasing the
LAN’s efficiency.

Bridges

Bridges are simple Layer 2 devices that create new
segments, resulting in fewer collisions. Bridges
must learn the addresses of the computers on each
segment to avoid forwarding traffic to the wrong
port. Unlike hubs, which are usually used for networks
with a small number of end stations (4 to
8), bridges can handle much larger networks with
dozens of end stations.

Repeaters

Repeaters simply extend the transmission distance
of an Ethernet segment.

Hubs

Hubs enable you to add and remove computers
without disabling the network, but they do not
create additional collision domains.

Ethernet Segments

A segment is the simplest form of network, in
which all devices are directly connected. In this type
of arrangement, if any of the computers gets disconnected,
or if one is added, the segment is disabled.

Increasing Bandwidth

In addition to creating additional segments to
increase available bandwidth, you can use a faster
medium such as optical fiber or Gigabit Ethernet.
Although these technologies are faster, they are still
shared media, so collision domains will still exist
and will eventually experience the same problems
as slower media.


Smaller Segments

Segments can be divided to reduce the number of
users and increase the bandwidth available to each
user in the segment. Each new segment created
results in a new collision domain. Traffic from one
segment or collision domain does not interfere with
other segments, thereby increasing the available
bandwidth of each segment. In the following figure,
each segment has greater bandwidth, but all segments
are still on a common backbone and must
share the available bandwidth. This approach works
best when care is taken to make sure that the largest
users of bandwidth are placed in separate segments.
There are a few basic methods for segmenting an
Ethernet LAN into more collision domains:
• Use bridges to split collision domains.
• Use switches to provide dedicated domains to
each host.
• Use routers to route traffic between domains
(and to not route traffic that does not matter to
the other domain).
This sheet discusses segmenting using bridges and
routers (switching is covered in the next chapter).

Ethernet Collisions

In a traditional LAN, several users would all share
the same port on a network device and would
compete for resources (bandwidth). The main limitation
of such a setup is that only one device can
transmit at a time. Segments that share resources
in this manner are called collision domains,
because if two or more devices transmit at the
same time, the information “collides,” and both
end points must resend their information (at different
times). Typically the devices both wait a random
amount of time before attempting to retransmit.
This method works well for a small number of
users on a segment, each having relatively low
bandwidth requirements. As the number of users
increases, the efficiency of collision domains
decreases sharply, to the point where overhead traffic
(management and control) clogs the network.

What Problems Need to Be Solved?

Ethernet is a shared resource in which end stations
(computers, servers, and so on) all have access to the
transmission medium at the same time. The result is
that only one device can send information at a time.
Given this limitation, two viable solutions exist:
• Use a sharing mechanism: If all end stations are
forced to share a common wire, rules must exist
to ensure that each end station waits its turn
before transmitting. In the event of simultaneous
transmissions, rules must exist for retransmitting.

• Divide the shared segments, and insulate them:
Another solution to the limitations of shared
resources is to use devices that reduce the number
of end stations sharing a resource at any given time.

Why Should I Care About Ethernet?

Ethernet was developed in 1972 as a way to connect
newly invented computers to newly invented
laser printers. It was recognized even at that time
as a remarkable technology breakthrough.
However, very few people would have wagered
that the ability to connect computers and devices
would change human communication on the same
scale as the invention of the telephone and change
business on the scale of the Industrial Revolution.
Several competing protocols have emerged since
1972, but Ethernet remains the dominant standard
for connecting computers into local-area networks
(LAN). For many years Ethernet was dominant in
home networks as well. Ethernet has been mostly
replaced by wireless technologies in the home networking
market. Wireless or Wi-Fi is covered in
Part VIII, “Mobility.”

Evolution of Ethernet

When Metcalfe originally developed Ethernet, computers were connected to a
single copper cable. The physical limitations of a piece of copper cable carrying
electrical signals restricted how far computers could be from each other on
an Ethernet. Repeaters helped alleviate the distance limitations. Repeaters are
small devices that regenerate an electrical signal at the original signal strength.
This process allows an Ethernet to extend across an office floor that might
exceed the Ethernet distance limitations.
The addition or removal of a device on the Ethernet cable disrupts the network
for all other connected devices. A device called an Ethernet hub solves
this problem. First, each port on a hub is actually a repeater. Second, hubs let
computers insert or remove themselves nondisruptively from the network.
Finally, hubs simplify Ethernet troubleshooting and administration. As networks
grow larger, companies need to fit more and more computers onto an
Ethernet. As the number of computers increases, the number of collisions on
the network increases. As collisions increase, useful network traffic decreases
(administrative traffic actually increases because of all the error messages getting
passed around). Networks come to a grinding halt when too many collisions
occur.
Ethernet bridges resolve this problem by physically breaking an Ethernet into
two or more segments. This arrangement means that devices communicating
on one side of the bridge do not collide with devices communicating on the
other side of the bridge. Bridges also learn which devices are on each side and
only transfer traffic to the network containing the destination device. A two-port
bridge also doubles the bandwidth previously available, because each port
is a separate Ethernet.
Ethernet bridges evolved to solve the problem of connecting Ethernet networks
to Token Ring networks. This process of translating a packet from one LAN
technology to another is called translational bridging.
As Ethernet networks continue to grow in a corporation, they become more
complex, connecting hundreds and thousands of devices. Ethernet switches
allow network administrators to dynamically break their networks into multiple
Ethernet segments.
Initially, switches operated as multiport Ethernet bridges. But eventually, as the
cost per port decreased significantly, Ethernet switches replaced hubs, in which
each connected device receives its own dedicated Ethernet bandwidth. With
switches, collisions are no longer an issue, because connections between computer
and switch can be point-to-point, and the Ethernet can both send and
receive traffic at the same time. This ability to send and receive simultaneously
is called full duplex, as opposed to traditional Ethernet, which operated at half
duplex. Half duplex means that a device can receive or transmit traffic on the
network, but not at the same time. If both happen at the same time, a collision
occurs.
This is different from subnetting in a couple of distinct ways. First, Ethernet is
a Layer 2 protocol, and subnetting has to do with IP addressing (which is a
Layer 3 function). Second, IP addressing is a logical segmentation scheme, and
switching is a physical separation, because each end station has a dedicated
physical port on the switch.

What Is Ethernet?

Ethernet describes a system that links the computers in a building or within a
local area. It consists of hardware (a network interface card), software, and
cabling used to connect the computers. All computers on an Ethernet are
attached to a shared data link, as opposed to traditional point-to-point networks,
in which a single device connects to another single device.
Because all computers share the same data link on an Ethernet network, the
network needs a protocol to handle contention if multiple computers want to
transmit data at the same time, because only one can talk at a time without
causing interference. Metcalfe’s invention introduced the carrier sense multiple
access with collision detection (CSMA/CD) protocol. CSMA/CD defines how a computer
should listen to the network before transmitting. If the network is quiet,
the computer can transmit its data. However, a problem arises if more than
one computer listens, hears silence, and transmits at the same time: The data
collides. The collision-detect part of CSMA/CD defines a method in which
transmitting computers back off when collisions occur and randomly attempt
to restart transmission. Ethernet originally operated at 3 Mbps, but today it
operates at speeds ranging from 10 Mbps (that’s 10 million bits per second) to
10 Gbps (that’s 10 billion bits per second).
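The "back off and randomly attempt to restart" step is standardized as truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times in the range 0 to 2^min(n, 10) − 1. A minimal sketch (the 51.2-microsecond slot time is the classic 10 Mbps value):

```python
import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbps

def backoff_delay(attempt, rng=random):
    """Truncated binary exponential backoff: after the nth collision,
    wait a random count of slot times in [0, 2**min(n, 10) - 1]."""
    slots = rng.randrange(2 ** min(attempt, 10))
    return slots * SLOT_TIME_US

# After the first collision, a station waits either 0 or 1 slot time.
print(backoff_delay(1))
```

Because each station picks its delay independently, the odds of a repeat collision shrink with every attempt.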

History of Ethernet

Robert Metcalfe developed Ethernet at the famous Xerox Palo Alto Research
Center (PARC) in 1972. The folks at Xerox PARC had developed a personal
workstation with a graphical user interface. They needed a technology to network
these workstations with their newly developed laser printers. (Remember,
the first PC, the MITS Altair, was not introduced to the public until 1975.)
Metcalfe originally called this network the Alto Aloha Network. He changed
the name to Ethernet in 1973 to make it clear that any type of device could
connect to his network. He chose the name “ether” because the network carried
bits to every workstation in the same manner that scientists once thought
waves were propagated through space by the “luminiferous ether.”
Metcalfe’s first external publication concerning Ethernet was available to the
public in 1976. Metcalfe left Xerox, and in 1979 he got Digital Equipment
Corporation (DEC), Intel, and Xerox to agree on a common Ethernet standard
called DIX. In 1982, the Institute of Electrical and Electronics Engineers (IEEE)
adopted a standard based on Metcalfe’s Ethernet.
Ethernet took off in academic networks and some corporate networks. It was
cheap, and public domain protocols such as Internet Protocol (IP) ran natively
on it. However, another company (IBM) wanted the world to adopt its protocol
instead, called Token Ring. Before switching was introduced, Ethernet was
more difficult to troubleshoot than Token Ring. Although Ethernet was less
expensive to implement, larger corporations chose Token Ring because of their
relationship with IBM and the ability to more easily troubleshoot problems.
Early Ethernet used media such as coaxial cable, and a network could literally
be a single long, continuous segment of coax cable tied into many computers.
(This cable was known as Thinnet or Thicknet, depending on the thickness of
the coax used.) When someone accidentally kicked the cable under his or her
desk, this often produced a slight break in the network. A break meant that
no one on the network could communicate, not just the poor schmuck who
kicked the cable. Debugging usually entailed crawling under desks and
following the cable until the break was found.

In contrast, Token Ring had more sophisticated tools (than crawling on your
knees) for finding the breaks. It was usually pretty obvious where the token
stopped being passed and, voilà, you had your culprit.
The battle for the LAN continued for more than ten years, until eventually
Ethernet became the predominant technology. Arguably, it was the widespread
adoption of Ethernet switching that drove the final nail in Token Ring’s coffin.
Other LAN technologies, such as AppleTalk and Novell IPX, have been and
continue to be introduced, but Ethernet prevails as the predominant technology
for local high-speed connectivity.
Thankfully, we have left behind early media such as coax for more sophisticated
technologies.

Networking Infrastructure

With the fundamentals of networking under our belt, we can now take a closer look at the infrastructure
that makes up the networks we all use. This section focuses on the switches and routers that make up networks,
along with the protocols that drive them.
We start this section with a discussion of the Ethernet protocol, which defines the rules and processes by
which computers in a local area communicate. Long before the Internet was in use, computers communicated
locally using the Ethernet protocol, and it is still widely used.
We then move on to local-area network (LAN) switching, an extension of the Ethernet protocol required
when there are more computers in a local segment than can communicate efficiently. Switching is one of
the core technologies in networking.
One of the necessities in networking is link redundancy, something that makes it more likely that data
reaches its intended target. Sometimes, however, link redundancy can create loops in the network, which
causes an explosion of administrative traffic that can take down a network in a matter of minutes.
Spanning Tree is one of the mechanisms that keeps these “broadcast storms” from wiping out your local
network, so we look at how this important protocol works.
We end this section with routing, which provides the basis for network communication over long distances.
The advent of routing allowed the growth of the Internet and corporate networking as we know it
today. This section explores how routing works and how routers communicate.

What Are HTTP and HTML, and What Do They Do?

You might have noticed that many Internet sites include the letters HTTP in
the site address that appears in the address line of your web browser. HTTP
(another OSI Layer 7 protocol) defines the rules for transferring information,
files, and multimedia on web pages. Hypertext Markup Language (HTML) is
the language used within HTTP. HTML is actually a fairly simple, easy-to-learn
computer language that embeds symbols into a text file to specify visual
or functional characteristics such as font size, page boundaries, and application
usages (such as launching an e-mail tool when a user clicks certain links).
When the developer of an HTTP file (or web page) wants to allow for a jump
to a different place on the page, or even a jump to a new page, he or she simply
places the appropriate symbols into the file. People viewing the page just
see the link, which is most commonly specified with blue, underlined text. The
ease of jumping from site to site (called web surfing) is one of the reasons for
the proliferation of websites on, and growth of, the Internet.
Several free and commercial tools allow you to create a web page using
HTML without having to know all the rules.
One of the issues with HTML is that it is fairly limited as far as what it can do
given that it works only on text and still pictures. To achieve some of the really
cool moving graphics and web page features, other tools such as Flash,
XML, JavaScript, or other scripting languages are needed.
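Because HTTP messages are plain text, the request/response rules described above can be sketched without any real network traffic. This is a minimal illustration, not a full HTTP implementation; the host name and canned response are only examples.

```python
# A sketch of the text-based rules HTTP defines. No network I/O is
# performed; the response below is canned for illustration.
def build_get_request(host, path="/"):
    # An HTTP/1.1 request is plain text: a request line, header lines,
    # then a blank line that ends the headers.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")

def parse_status_line(response_text):
    # The first line of a response carries the protocol version,
    # a numeric status code, and a reason phrase.
    version, code, reason = response_text.split("\r\n")[0].split(" ", 2)
    return version, int(code), reason

request = build_get_request("www.cisco.com")
canned_response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
version, code, reason = parse_status_line(canned_response)
print(code, reason)  # 200 OK
```

A real browser builds and parses messages in exactly this shape, adding many more headers.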

Web Browsing

Browsing web pages on the Internet is another common network application.
Browsers run on a computer and allow a viewer to see website content.
Website content resides on a server, a powerful computer with a lot of disk
space and lots of computing cycles. The protocol that allows browsers and
servers to communicate is HTTP.

Receiving E-Mails

E-mail is often received via a different server than the one that sends e-mail.
The type of server depends on which type of e-mail tool you use. For those
using an e-mail client, your e-mail is probably delivered to you via the most
common method, Post Office Protocol 3 (POP3) server. (We have no idea what
happened to the first two.) The POP3 server receives all its e-mails from SMTP
servers and sorts them into file spaces dedicated to each user (much the same
way mail is put into post office boxes at a local post office—thus the name).
When you open your e-mail client, it contacts the POP3 server to request all
the new e-mails. The e-mails are then transferred to your PC, and in most
cases the e-mails are erased from the POP3 server.
Another common method (or protocol) for mail retrieval is an Internet Mail
Access Protocol (IMAP) server. This is the protocol normally used by web-based
e-mail clients, and corporate e-mail systems such as Microsoft
Exchange. The IMAP server receives and sorts e-mail in much the same way as
a POP3 server. Unlike POP3, however, IMAP does not transfer the e-mails to
the machine of the account holder; instead, it keeps e-mail on the server. This
allows users to connect to use their e-mail account from multiple machines.
IMAP also allows for server-side filtering, a method of presorting e-mail based
on rules before it even gets to your PC. It’s kind of like having a friendly
postal worker who sorts all your bills to the top and magazines to the bottom.
Two main issues with IMAP servers are storage space and working offline.
Most Internet e-mail services put a limit on the amount of storage each subscriber
gets (some charge extra for additional storage space). In addition, these
services often limit the file size of attachments (such as photos). The other
issue is the ability to work offline or when not connected to the Internet. One
solution is called caching, which temporarily places the subscriber’s e-mail
information on whatever PC he or she wants to work offline with. When the
user reconnects, any e-mails created while offline are sent, and any new incoming
e-mails can be viewed.
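The key behavioral difference described above can be modeled in a few lines. This is a toy model, not a mail protocol implementation (real clients would use Python's poplib or imaplib against an actual mail server); the message texts are made up.

```python
# A toy model of the difference described above: POP3-style retrieval
# moves mail off the server, while IMAP-style retrieval leaves it there.
class MailServer:
    def __init__(self, messages):
        self.mailbox = list(messages)

    def pop3_fetch(self):
        # POP3: transfer everything to the client, then erase it here.
        delivered, self.mailbox = self.mailbox, []
        return delivered

    def imap_fetch(self):
        # IMAP: hand the client a copy; the messages stay on the server,
        # so another machine can still see them.
        return list(self.mailbox)

server = MailServer(["bill from the utility co.", "magazine newsletter"])
print(server.imap_fetch())  # both messages, still on the server
print(server.pop3_fetch())  # both messages, now removed
print(server.mailbox)       # []
```

The IMAP "keep it on the server" behavior is exactly what lets one account be read from several machines.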

Sending E-Mails

E-mails are distributed using a (OSI Layer 7) protocol called Simple Mail
Transfer Protocol (SMTP). SMTP normally operates on powerful computers
dedicated to e-mail distribution, called SMTP servers. When you create and
send an e-mail, your e-mail client sends the file to the SMTP server. The server
pulls out the addresses from the message. (You can send e-mails to multiple
recipients.) For each domain name, the SMTP server must send a message to a
DNS to get the IP address of each recipient’s e-mail server. If the recipient is on
the same server as you (that is, if you send an e-mail to someone with the
same domain name), this step is unnecessary.
After your SMTP server knows the IP address of the recipient’s server, your
SMTP server transfers the e-mail message to the recipient’s SMTP server. If
there are multiple recipients in different e-mail domains, a separate copy of the
e-mail is transferred to each recipient’s SMTP server. According to the name of
the protocol, this is all pretty simple.
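The per-domain fan-out described above, where one copy of the message goes to each recipient domain's mail server, can be sketched as follows. The DNS lookup is stubbed with an assumed table (the addresses use documentation ranges); a real client would use smtplib and real DNS records.

```python
# Stub "DNS": domain name -> mail server IP. These mappings are
# illustrative assumptions, not real lookups.
DNS_STUB = {"example.com": "192.0.2.25", "example.org": "198.51.100.25"}

def route_message(recipients):
    # Group recipients by the mail server that handles their domain,
    # so one copy of the e-mail is sent per destination server.
    copies = {}
    for addr in recipients:
        domain = addr.split("@")[1]
        server_ip = DNS_STUB[domain]  # "ask DNS" for the server address
        copies.setdefault(server_ip, []).append(addr)
    return copies

routes = route_message(["ann@example.com", "bob@example.com", "cho@example.org"])
print(routes)  # two copies: one per destination mail server
```

Note that the two example.com recipients share one copy, matching the text's point that the DNS step is skipped for recipients on the same server.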

What’s Up with the @ Sign?

All e-mail addresses are made up of two parts: a recipient part and a domain
name. An @ symbol separates the two parts to denote that a recipient is
unique within a domain name. The domain name is usually the name of your
ISP (or your company if you have e-mail there), and, like a website, an e-mail
domain has an associated IP address. This allows (actually, requires) the use of
a DNS server to translate the domain name portion of an e-mail address to the
IP address of the server where the e-mail account resides.
The recipient part is the chosen identifier that you are known by within the
e-mail domain. There are a lot of possibilities for choosing the recipient. Here
are a few popular styles:
• Firstname.Lastname: John.Brown
• FirstinitialLastname: JBrown
• Nickname: DowntownJohnnyBrown
• Personalized license plate: L8RG8R
• Other obscure reference: GrassyKnoll63
When picking an e-mail address, remember that sometimes you’ll have to verbally
tell someone your e-mail address, so “X3UT67B” is inadvisable.
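Splitting an address at the @ sign is all it takes to separate the two parts. The address below is a made-up example using the nickname style from the list.

```python
# The two parts of an e-mail address, split at the @ sign.
recipient, domain = "DowntownJohnnyBrown@example.com".split("@")
print(recipient)  # DowntownJohnnyBrown  (unique within the domain)
print(domain)     # example.com          (resolved via DNS to a mail server)
```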

E-Mail Tools

There are two basic ways to create, send, and receive e-mails: with an e-mail
client and with a web-based e-mail tool:
• E-mail clients that are installed on individual machines are in wide use today.
The most popular are Microsoft Outlook/Outlook Express. E-mail clients
allow for the creation, distribution, retrieval, and storage of e-mails (as well as
some other useful features). These types of clients were originally designed so
that e-mails to and from an account could be accessed from a single machine.
E-mail clients physically move the e-mail from the e-mail server to your PC’s
hard drive. After the e-mail is downloaded, it no longer exists in the e-mail
provider’s network. The e-mail exists in your e-mail client program (on the
PC’s hard drive) until you delete it.
• Web-based e-mail tools, such as Google Mail, allow users to access their
e-mail from any machine connected to the Internet. Users log in to the website
with their registered name and password. Then they are given access to
a web-based e-mail client that has all the basic abilities of e-mail clients,
such as the ability to create, send, and receive e-mails. Many have more
advanced features, such as the ability to send and receive file attachments
and create and use address books.
Web-based e-mail tools differ from e-mail clients in that the e-mail is not
downloaded to your PC’s hard drive. It exists only on the e-mail provider’s
network until you delete it. Some people use a combination of web-based
e-mail and e-mail clients. For example, you may use the web-based e-mail tool
to access your e-mail when you are away from home and not using your
home PC. When you are at home, you could then use your e-mail client.

E-Mail

E-mail is one of the most common network applications in use today.
Although it might seem relatively new, e-mail was invented in the early 1970s.
Back then, of course, there was no Internet as we know it today, so having
e-mail was a bit like owning a car before there was a highway system.
Today, e-mail is so widespread that ISPs just assume that you want an e-mail
address and automatically assign you one (or even several) when you begin
your service agreement.

The Internet and Its Applications

What makes the Internet useful and interesting to the average person is not the
network, but rather the applications that operate on the network. The two
most common Internet applications in use today are e-mail and web browsers.

IPv6 Transition

There have been many predictions over the years
about IPv6 migration, but the fact is that the IPv4
workarounds that have been developed in the
meantime have been pretty good. It could be that
despite being a superior solution to the address
scarcity issue, IPv6 may never displace IPv4 and its
work-arounds. To underscore this point, look back
at the chart at the beginning of this section. Here
we are in 2007, with only limited deployments of
IPv6, and with many more devices on the Internet
than anticipated back in the late 1990s, but IPv4
keeps chugging along.
Several factors may finally cause the transition—
first as IPv6 “islands” connected with IPv4 networks,
and then finally into end-to-end IPv6 networks.
These factors include the U.S. federal government
mandating that its networks must be IPv6-capable
by a certain date, Microsoft adopting IPv6 into
Windows starting with Vista, and Japan adopting
IPv6 as its country network addressing standard.
At a minimum, it is important for network administrators
and companies to understand IPv6 and its
potential impacts so that they are prepared if and
when the transition occurs.

IPv6 Mobility

IPv6 supports a greater array of features for the
mobile user, whether the mobile device is a cell
phone, PDA, laptop computer, or moving vehicle.
Mobile IPv6 (MIPv6) supports a more streamlined
approach to routing packets to and from the
mobile device. It also supports IPsec between the
mobile device and other network devices and hosts.

IPv6 Security

IPv6 has embedded support for IPsec (a common
protocol for encryption). Currently the host operating
system (OS) can configure an IPsec tunnel
between the host and any other host that has IPv6
support. With IPv4 the vast majority of IPsec
deployments are network-based and unknown to
host devices. With IPv6 IPsec, the host could create
an encrypted data connection between itself and
another device on the network. This means that
network administrators do not need to set up the
encryption, because hosts can do it themselves on
demand.

IPv6 Autoconfiguration

IPv4 deployments use one of two methods to
assign IP addresses to a host: static assignment
(which is management-intensive) or DHCP/
BOOTP, which automatically assigns IP addresses
to hosts upon booting onto the network.
IPv6 provides a feature called stateless autoconfiguration,
which is similar to DHCP. Unlike DHCP,
however, stateless autoconfiguration does not
require the use of a special DHCP application or
server when providing addresses to simple network
devices that do not support DHCP (such as robotic
arms used in manufacturing).
With stateless autoconfiguration, any router
interface that has an IPv6 address assigned to it
becomes the "provider" of IP addresses on the
network to which it is attached.
Safeguards are built into IPv6 that prevent duplicate
addresses. This feature is called Duplicate
Address Detection. With the IPv4 protocol, nothing
prevents two hosts from joining the network
with identical IP addresses. The operating system
or application may be able to detect the problem,
but often unpredictable results occur.

IPv6 Notation

The first figure demonstrates the notation and
shortcuts for IPv6 addresses.
An IPv6 address uses the first 64 bits in the
address for the network ID and the second 64 bits
for the host ID. The network ID is separated into
prefix chunks. The next figure shows the address
hierarchy.
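The notation shortcuts the figure illustrates (dropping leading zeros in each 16-bit group and collapsing one run of all-zero groups to "::") can be seen with Python's standard ipaddress module; the address here is from the 2001:db8:: documentation range.

```python
import ipaddress

# The stdlib applies the standard IPv6 shortcuts: leading zeros in each
# 16-bit group are dropped, and one run of all-zero groups becomes "::".
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```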

IPv6 Addresses

The 128-bit address used in IPv6 allows for a
greater number of addresses and subnets (enough
space for roughly 3.4 × 10^38 endpoints—340,282,366,920,938,
463,463,374,607,431,768,211,456 total!).
IPv6 was designed to give every user on Earth multiple
global addresses that can be used for a wide
variety of devices, including cell phones, PDAs, IP-enabled
vehicles, consumer electronics, and many
more. In addition to providing more address space,
IPv6 has the following advantages over IPv4:
• Easier address management and delegation
• Easy address autoconfiguration
• Embedded IPsec (short for IP Security—
encrypted IP)
• Optimized routing
• Duplicate Address Detection (DAD)
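The total quoted above is simply 2 raised to the 128th power, since each of the 128 address bits doubles the space:

```python
# 128 address bits give 2**128 possible addresses.
total = 2 ** 128
print(total)  # 340282366920938463463374607431768211456
```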

What Problems Need to Be Solved?

Network Address Translation (NAT) and Port
Address Translation (PAT) were developed as
solutions to the diminishing availability of IP
addresses. NAT and PAT, as implemented today in
many network routers, allow a company or user to
share a single or a few assigned public IP addresses
among many private addresses (which are not
bound by an address authority).
Although these schemes preserve address space
and provide anonymity, the benefits come at the
cost of individuality. This eliminates the very reason
for networking (and the Internet): allowing peer-to-peer
collaboration through shared applications.
IP version 6 (IPv6) provides an answer to the
problem of running out of address space. It also
allows for the restoration of a true end-to-end
model in which hosts can connect to each other
unobstructed and with greater flexibility. Some of
the key elements of IPv6 include allowing each
host to have a unique global IP address, the ability
to maintain connectivity even when in motion and
roaming, and the ability to natively secure host
communications.

Why Should I Care About IPv6?

The addressing scheme used for the TCP/IP protocols
is IP version 4 (IPv4). This scheme uses a 32-bit
binary number to identify networks and end
stations. The 32-bit scheme yields about 4 billion
addresses, but because of the dotted-decimal system
(which breaks the number into four sections of
8 bits each) and other considerations, there are
really only about 250 million usable addresses.
When the scheme was originally developed in the
1980s, no one ever thought that running out of
addresses would be a possibility. However, the
explosion of the Internet, along with the increased
number of Internet-capable devices, such as cell
phones and PDAs (which need an IP address), has
made running out of IPv4 addresses a serious concern.
The chart shows the trend of address space,
starting in 1980. It shows the address space running
out sometime before 2010.

Identifying Subnet Addresses

Given an IP address and subnet mask, you can
identify the subnet address, broadcast address, and
first and last usable addresses within a subnet as
follows:
1. Write down the 32-bit address and the subnet
mask below that (174.24.4.176/26 is shown
in the following figure).
2. Draw a vertical line just after the last 1 bit in
the subnet mask.
3. Copy the portion of the IP address to the left
of the line. Place all 1s for the remaining free
spaces to the right. This is the broadcast
address for the subnet.
4. The first and last address can also be found
by placing ...0001 and ...1110, respectively, in
the remaining free spaces.
5. Copy the portion of the IP address to the left
of the line. Place all 0s for the remaining free
spaces to the right. This is the subnet number.
174.24.4.176     10101110.00011000.00000100.10|110000   Host
255.255.255.192  11111111.11111111.11111111.11|000000   Mask
174.24.4.128     10101110.00011000.00000100.10|000000   Subnet
174.24.4.191     10101110.00011000.00000100.10|111111   Broadcast
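The same arithmetic as the five steps above can be checked with Python's standard ipaddress module, using the example prefix from the figure:

```python
import ipaddress

# 174.24.4.176/26 from the worked example above.
iface = ipaddress.ip_interface("174.24.4.176/26")
net = iface.network
print(net.network_address)        # 174.24.4.128  (subnet number)
print(net.broadcast_address)      # 174.24.4.191  (broadcast address)
print(net.network_address + 1)    # 174.24.4.129  (first usable address)
print(net.broadcast_address - 1)  # 174.24.4.190  (last usable address)
```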

Subnet Masks

Routers use a subnet mask to determine which
parts of the IP address correspond to the network,
the subnet, and the host. The mask is a 32-bit
number in the same format as the IP address. The
mask is a string of consecutive 1s starting from the
most-significant bits, representing the network ID,
followed by a string of consecutive 0s, representing
the host ID portion of the address bits.

Each address class has a default subnet mask (A =
/8, B = /16, C = /24). The default subnet masks
only the network portion of the address, the effect
of which is no subnetting. With each bit of subnetting
beyond the default, you can create 2^n – 2 subnets,
where n is the number of borrowed bits. The preceding
example has 254 subnets, each with 254 hosts; within
each subnet, the address ending in .0 (the subnet
number) and the address ending in .255 (the broadcast)
cannot be assigned to hosts.
Continuing with the preceding analogy, the subnet
mask tells the network devices how many apartments
are in the building.
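The 2^n – 2 formula above is small enough to verify directly; 8 borrowed bits, as in the Class B example, yield 254 subnets.

```python
# Classful subnet count: n borrowed bits give 2**n - 2 subnets
# (the all-0s and all-1s subnets are excluded by convention).
def subnet_count(borrowed_bits):
    return 2 ** borrowed_bits - 2

print(subnet_count(8))  # 254, as in the Class B example above
print(subnet_count(2))  # 2
```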

[At-a-Glance figure: the IP address 128.10.173.46 written in binary, with the
subnet mask covering the first three octets (the network portion) and leaving
the last octet (46) as the host. This subnet mask can also be written as "/24",
where 24 represents the number of 1s in the subnet mask.]

Subnetting

Subnetting is a method of segmenting hosts within
a network and providing additional structure.
Without subnets, an organization operates as a flat
network. These flat topologies result in short routing
tables, but as the network grows, the use of
bandwidth becomes inefficient.

In the figure, a Class B network is flat, with a single
broadcast and collision domain. Collision
domains are explained in more detail in the
Ethernet chapter. For now, just think of them as a
small network segment with a handful of devices.
Adding Layer 2 switches to the network creates
more collision domains but does not control
broadcasts.
In the next figure, the same network has been subdivided
into several segments or subnets. This is
accomplished by using the third octet (part of the
host address space for a Class B network) to segment
the network. Note that the outside world
sees this network the same as in the previous figure.

Subnetting is a bit complex at first pass. Think of
it like a street address. For a house, the street
address may provide the needed addressability to
reach all the house’s occupants. Now consider an
apartment building. The street address only gets
you to the right building. You need to know in
which apartment the occupant you are seeking
resides. In this crude example, the apartment number
acts a bit like a subnet.
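Borrowing the third octet of a Class B network, as the figures describe, can be reproduced with the ipaddress module. The 172.16.0.0/16 network is an assumed example (a private Class B range), not one from the figures.

```python
import ipaddress

# Subnet a Class B network on its third octet: /16 -> many /24s.
class_b = ipaddress.ip_network("172.16.0.0/16")
subnets = list(class_b.subnets(new_prefix=24))
print(len(subnets))  # 256 subnets carved from one flat network
print(subnets[0])    # 172.16.0.0/24
print(subnets[-1])   # 172.16.255.0/24
```

As the text notes, the outside world still sees only the single /16; the internal structure is invisible from outside.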

Address Classes

When the IP address scheme was developed, only
the first octet was used to identify the network
portion of the address. At the time it was assumed
that 254 networks would be more than enough to
cover the research groups and universities using
this protocol. As usage grew, however, it became
clear that more network designations would be
needed (each with fewer hosts). This issue led to
the development of address classes.
Addresses are segmented into five classes (A
through E). Classes A, B, and C are the most common.
Class A has 8 network bits and 24 host bits.
Class B has 16 network bits and 16 host bits, and
Class C has 24 network bits and 8 host bits. This
scheme was based on the assumption that there
would be many more small networks (each with
fewer endpoints) than large networks in the world.
Class D is used for multicast, and Class E is
reserved for research. The following table breaks
down the three main classes. Note that the Class A
address starting with 127 is reserved.
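The class boundaries described above are determined entirely by the first octet, so classifying an address is a simple range check; this sketch encodes the standard classful ranges.

```python
# Classful addressing: the first octet determines the class.
def address_class(first_octet):
    if first_octet == 127:
        return "reserved (loopback)"
    if 1 <= first_octet <= 126:
        return "A"  # 8 network bits, 24 host bits
    if 128 <= first_octet <= 191:
        return "B"  # 16 network bits, 16 host bits
    if 192 <= first_octet <= 223:
        return "C"  # 24 network bits, 8 host bits
    if 224 <= first_octet <= 239:
        return "D"  # multicast
    return "E"      # reserved for research

print(address_class(10), address_class(174), address_class(200))  # A B C
```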

Logical Versus Physical

MAC addresses are considered physical addresses
because they are assigned to pieces of hardware by
the manufacturer and cannot be reassigned.
IP addresses are assigned by a network administrator
and have meaning only in a TCP/IP network.
These addresses are used solely for routing purposes
and can be reassigned.
Host and network: Rather than assigning numbers
at random to various endpoints (which would be
extremely difficult to manage), every company and
organization listed on the Internet is given a block
of public address numbers to use. This is accomplished
by using a two-part addressing scheme that
identifies a network and host. This two-part
scheme allows the following:
• All the endpoints within a network share the
same network number.
• The remaining bits identify each host within
that network.
In the figure, the first two octets (128.10) identify
a company with an Internet presence (it’s the
address of the router that accesses the Internet).
All computers and servers within the company’s
network share the same network address. The next
two octets identify a specific endpoint (computer,
server, printer, and so on). In this example the
company has 65,536 addresses it can assign (16
bits, or 2^16). Therefore, all devices in this network
would have an address between 128.10.0.1 and
128.10.255.255.
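The 128.10 example above can be checked with the ipaddress module: a two-octet network part leaves 16 host bits, i.e. 2^16 addresses. (Note that the stdlib's hosts() iterator excludes the network and broadcast addresses, so its last usable host is 128.10.255.254.)

```python
import ipaddress

# The 128.10 network from the text, written as a /16 prefix.
net = ipaddress.ip_network("128.10.0.0/16")
print(net.num_addresses)  # 65536, i.e. 2**16

hosts = list(net.hosts())  # excludes network and broadcast addresses
print(hosts[0], hosts[-1])  # 128.10.0.1 128.10.255.254
```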

What Problems Need to Be Solved?

Each IP address is a 32-bit number, which means
that there are about 4.3 billion address combinations.
These addresses must be allocated in a way
that balances the need for administrative and routing
efficiency with the need to retain as many
usable addresses as possible.
Dotted decimal: The most common notation for
describing an IP address is dotted decimal. Dotted
decimal breaks a 32-bit binary number into four
8-bit numbers (represented in decimal form), each of which
is called an octet. Each octet is separated by a period,
which aids in the organizational scheme to be
discussed. For example, the binary address
00001010100000001011001000101110 can be
represented in dotted decimal as 10.128.178.46.
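The conversion in the example above is mechanical: split the 32-bit string into four 8-bit octets and read each as a base-2 number.

```python
# Convert the 32-bit binary string from the text to dotted decimal.
bits = "00001010100000001011001000101110"
octets = [str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8)]
dotted = ".".join(octets)
print(dotted)  # 10.128.178.46
```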

Why Should I Care About IP Addressing?

Behind every website, Universal Resource Locator
(URL), and computer or other device connected to
the Internet is a number that uniquely identifies
that device. This unique identifier is called an IP
address. These addresses are the key components
of the routing schemes used over the Internet. For
example, if you are downloading a data sheet from
www.cisco.com to your computer, the header of
the packets comprising the document includes both
the host address (in this case, the IP address of
Cisco’s public server) and the destination address
(your PC).

Port Numbers

TCP and UDP can carry data from several upper-layer
applications over the same data stream. Port
numbers (also called socket numbers) are used to
keep track of different conversations crossing the
network at any given time. Some of the more wellknown
port numbers are controlled by the Internet
Assigned Numbers Authority (IANA). For example,
Telnet is always defined by port 23.
Applications that do not use well-known port
numbers have numbers randomly assigned from a
specific range.
The use of port numbers is what allows you to
watch streaming video on your computer while
checking e-mails and downloading documents
from a web page all at the same time. All three
may use TCP/IP, but use of a port number allows
the applications to distinguish which are video and
which are e-mail packets.
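The demultiplexing described above amounts to looking up the destination port and handing the payload to the right application. Telnet (23) and HTTP (80) are IANA well-known ports; the "video" port 5004 here is an illustrative assumption.

```python
# A sketch of port-number demultiplexing on one host: all packets share
# the same IP address, but the destination port selects the application.
handlers = {23: "telnet", 80: "web", 5004: "video"}  # 5004 is assumed

def deliver(packets):
    # Each packet is (destination_port, payload).
    return [handlers.get(port, "unknown") for port, _payload in packets]

print(deliver([(80, b"GET /"), (5004, b"frame"), (23, b"login")]))
# ['web', 'video', 'telnet']
```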

UDP

UDP is a connectionless, unreliable Layer 4 protocol.
Unreliable in this sense means that the protocol
does not ensure that every packet will reach its
destination. UDP is used for applications that provide
their own error recovery process or when
retransmission does not make sense. UDP is simple
and efficient, trading reliability for speed.
Why not resend? It may not be obvious why you
would not resend dropped packets if you had the
option to do so. However, real-time applications
such as voice and video could be disrupted by
receiving old packets out of order. For example,
suppose a packet containing a portion of speech is
received 2 seconds later than the rest of the conversation.
Playing the sound out into the earpiece
probably will sound like poor audio quality to the
user, because the user is listening further into the
conversation. In these cases, the application usually
can conceal the dropped packets from the end
user so long as they account for a small percentage
of the total.
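A minimal UDP exchange shows the "fire and forget" behavior described above: there is no connection setup and no acknowledgment, so if the datagram were lost, nothing in UDP itself would resend it. This sketch stays on the local machine's loopback interface, where delivery is reliable in practice.

```python
import socket

# Receiver: a UDP socket bound to a free port on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
addr = receiver.getsockname()

# Sender: no connect, no handshake; just send the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"voice sample", addr)

data, _source = receiver.recvfrom(1024)
print(data)  # b'voice sample'
sender.close()
receiver.close()
```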

TCP Windowing

One way to structure a communications protocol is
to have the receiver acknowledge every packet
received from a sender. Although this is the most
reliable method, it can add unnecessary overhead,
especially on fairly reliable connection media.
Windowing is a compromise that reduces overhead
by acknowledging packets only after a specified
number have been received.
The window size from one end station informs the
other side of the connection how much it can accept
at one time. With a window size of 1, each segment
must be acknowledged before another segment is
sent. This is the least efficient use of bandwidth. A
window size of 7 means that an acknowledgment
needs to be sent after the receipt of seven segments;
this allows better utilization of bandwidth. A windowing
example is shown in the figure.
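Counting acknowledgments for the two window sizes discussed above makes the bandwidth saving concrete. This is a simplification of real TCP sliding windows, which are measured in bytes and slide continuously, but it captures the trade-off.

```python
import math

# With window size w, one ACK covers up to w segments.
def acks_needed(segments, window):
    return math.ceil(segments / window)

print(acks_needed(21, 1))  # 21 ACKs: every segment acknowledged
print(acks_needed(21, 7))  # 3 ACKs: far less overhead
```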

How TCP Connections Are Established

End stations exchange control bits called SYN (for
synchronize) and Initial Sequence Numbers (ISN)
to synchronize during connection establishment.
TCP/IP uses what is known as a three-way handshake
to establish connections.
To synchronize the connection, each side sends its
own initial sequence number and expects to
receive a confirmation in an acknowledgment
(ACK) from the other side. The following figure
shows an example.
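The three-way handshake can be replayed as a toy exchange: each side picks an Initial Sequence Number and the other side acknowledges ISN + 1. The ISN values below are arbitrary for the example (real stacks choose them unpredictably).

```python
# A toy replay of the TCP three-way handshake.
client_isn, server_isn = 100, 300  # arbitrary example ISNs

# 1. SYN: client sends its ISN.
syn = {"flags": "SYN", "seq": client_isn}
# 2. SYN-ACK: server sends its own ISN and acknowledges the client's.
syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}
# 3. ACK: client acknowledges the server's ISN; the connection is open.
ack = {"flags": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"], ack["ack"])  # 101 301
```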

TCP/IP Datagrams

TCP/IP information is sent via datagrams. A single
message may be broken into a series of datagrams
that must be reassembled at their destination. Three
layers are associated with the TCP/IP protocol stack:
• Application layer: This layer specifies protocols
for e-mail, file transfer, remote login, and other
applications. Network management is also supported.
• Transport layer: This layer allows multiple
upper-layer applications to use the same data
stream. TCP and UDP protocols provide flow
control and reliability.
• Network layer: Several protocols operate at the
network layer, including IP, ICMP, ARP, and
RARP.
IP provides connectionless, best-effort routing of
datagrams.
TCP/IP hosts use Internet Control Message
Protocol (ICMP) to carry error and control messages
with IP datagrams. For example, a process
called ping allows one station to discover a host
on another network.
Address Resolution Protocol (ARP) allows communication
on a multiaccess medium such as
Ethernet by mapping known IP addresses to MAC
addresses.

What Problems Need to Be Solved?

TCP is a connection-oriented, reliable protocol that
breaks messages into segments and reassembles
them at the destination station (it also resends
packets not received at the destination). TCP also
provides virtual circuits between applications.
A connection-oriented protocol establishes and
maintains a connection during a transmission. The
protocol must establish the connection before sending
data. As soon as the data transfer is complete,
the session is torn down.
User Datagram Protocol (UDP) is an alternative
protocol to TCP that also operates at Layer 4. UDP
is considered an “unreliable,” connectionless protocol.
Although “unreliable” may have a negative
connotation, in cases where real-time information is
being exchanged (such as a voice conversation),
taking the time to set up a connection and resend
dropped packets can do more harm than good.

Why Should I Care About TCP/IP?

TCP/IP is the best-known and most popular protocol
suite used today. Its ease of use and widespread
adoption are some of the best reasons for the
Internet explosion that is taking place.
Encompassed within the TCP/IP protocol is the
capability to offer reliable, connection-based packet
transfer (sometimes called synchronous) as well
as less reliable, connectionless transfers (also called
asynchronous).

Domain Names and Relationship to IP Addresses

Because IP addresses are difficult to remember in their dotted-decimal notation,
a naming convention called domain names was established that’s more
natural for people to use. Domain names such as www.cisco.com are registered
and associated with a particular public IP address. The Domain Name
System (DNS) maps a readable name to an IP address. For example, when you
enter http://www.cisco.com into a browser, the PC uses the DNS protocol to
contact a DNS name server. The name server translates the name
http://www.cisco.com into the actual IP address for that host.
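The same name-to-address step a browser performs is exposed by the OS resolver. Resolving a public name such as www.cisco.com requires Internet access, so this sketch resolves "localhost", which maps to the loopback address on virtually every system.

```python
import socket

# Ask the resolver to translate a name into an IPv4 address.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1
```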

Dynamically Allocated IP Addresses

A network administrator is responsible for assigning which devices receive
which IP addresses in a corporate network. The admin assigns an IP address to
a device in one of two ways: by configuring the device with a specific address
or by letting the device automatically learn its address from the network.
Dynamic Host Configuration Protocol (DHCP) is the protocol used for automatic
IP address assignment. Dynamic addressing saves considerable administrative
effort and conserves IP addressing space. It can be difficult to manually
administer IP addresses for every computer and device on a network. Most
networks use DHCP to automatically assign an available IP address to a device
when it connects to the network. Generally, devices that don’t move around
receive fixed addresses, known as static addressing. For example, servers,
routers, and switches usually receive static IP addresses. The rest use dynamic
addressing. For home networks you do not need a network administrator to
set up your address; instead, a home broadband router allocates IP addresses
via DHCP.
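A toy allocator captures the split described above: servers and routers keep static addresses, while everything else leases the next free address from a pool. The 192.168.1.0/24 pool and the reserved router address are illustrative, and this models only address assignment, not the real DHCP message exchange.

```python
import ipaddress

class TinyDhcp:
    def __init__(self, network, static):
        # Static reservations (routers, servers) never enter the pool.
        self.static = {ipaddress.ip_address(a) for a in static}
        self.pool = [a for a in ipaddress.ip_network(network).hosts()
                     if a not in self.static]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac):
        # A returning client keeps its lease; a new one gets the next
        # free address from the pool.
        if mac not in self.leases:
            self.leases[mac] = self.pool.pop(0)
        return str(self.leases[mac])

dhcp = TinyDhcp("192.168.1.0/24", static=["192.168.1.1"])  # router is static
print(dhcp.request("01-23-45-67-89-AB"))  # 192.168.1.2
print(dhcp.request("01-23-45-67-89-AB"))  # same client, same address
print(dhcp.request("01-23-45-67-89-AC"))  # next client, next address
```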

What Is an Address?

For computers to send and receive information to each other, they must have
some form of addressing so that each end device on the network knows what
information to read and what information to ignore. This capability is important
both for the computers that ultimately use the information and for the
devices that deliver information to end stations, such as switches and routers.
Every computer on a network has two addresses:
• MAC address: A manufacturer-allocated ID number (such as a global serial
number) that is permanent and unique to every network device on Earth.
MAC addresses are analogous to a social security number or other national
identification number. You have only one, it stays the same wherever you go,
and no two people (devices) have the same number. MAC addresses are formatted
using six pairs of hexadecimal numbers, such as 01-23-45-67-89-AB.
Hexadecimal or “hex” is a base 16 numbering scheme that uses the numbers
0 through 9 and the letters A through F to count from 0 to 15. This
might seem odd, but it provides an easy translation from binary (which uses
only 1s and 0s), which is the language of all computers.
• IP address: This address is what matters most to basic networking. Unlike a
MAC address, the IP address of any device is temporary and can be
changed. It is often assigned by the network itself and is analogous to your
street address. It only needs to be unique within a network. Someone else’s
network might use the same IP address, much like another town might have
the same street (for example, 101 Main Street). Every device on an IP network
is given an IP address, which looks like this: 192.168.1.100.
The format of this address is called dotted-decimal notation. The period separators
are pronounced "dot," as in "one ninety-two dot one sixty-eight dot...."
Because of some rules with binary, the largest number in each section is 255.
In addition to breaking up the number, the dots that appear in IP addresses
allow us to break the address into parts that represent networks and hosts. In
this case, the “network” portion refers to a company, university, government
agency, or your private network. The hosts would be the addresses of all the
computers on the individual network. If you think of the network portion of
the address as a street, the hosts would be all the houses on that street. If you
could see the IP addresses of everyone who is on the same network segment as
you, you would notice that the network portion of the address is the same for
all computers, and the host portion changes from computer to computer. An
example will probably help. Think of an IP address as being like your home
address for the post office: state.city.street.house-number.
Each number in the IP address provides a more and more specific location so
that the Internet can find your computer among millions of other computers.
The Internet is not organized geographically like the postal system, though.
The components of the address (intentionally oversimplified) are major-network.
minor-network.local-network.device.
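The hex-pair MAC format described above is just base-16 notation for six bytes, as a quick conversion shows; the MAC value is the illustrative one from the text.

```python
# Read each hex pair of a MAC address as a number from 0 to 255.
mac = "01-23-45-67-89-AB"
octets = [int(pair, 16) for pair in mac.split("-")]
print(octets)  # [1, 35, 69, 103, 137, 171]
```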

Computers Speaking the Same Language

The Internet protocols comprise the most popular, nonproprietary data-networking
protocol suite in the world. The Internet protocols are communication protocols
used by electronic devices to talk to each other. Initially, computers were the
primary clients of IP protocols, but other types of electronic devices can connect
to IP networks, including printers, cellular phones, and MP3 players.
Today, even common devices such as vending machines, dishwashers, and cars
are being connected to IP networks.
The two best-known Internet protocols are Transmission Control Protocol
(TCP) and Internet Protocol (IP). The Defense Advanced Research Projects
Agency (DARPA) developed the Internet protocols in the mid-1970s. DARPA
funded Stanford University and Bolt, Beranek, and Newman (BBN) to develop
a set of protocols that would allow different types of computers at various
research locations to communicate over a common packet-switched network.
This research produced the Internet protocol suite, which was
later distributed for free with the Berkeley Software Distribution (BSD) UNIX
operating system.
From there, IP became the primary networking protocol, serving as the basis
for the World Wide Web (WWW) and the Internet in general. Internet protocols
are discussed and adopted in the public domain. Technical bulletins called
Requests for Comments (RFCs) propose protocols and practices.
These documents are reviewed, edited, published, and analyzed, and then are
accepted by the Internet community (this process takes years).
The Internet protocol suite also comprises application-based protocols, including
definitions for the following:
• Electronic mail (Simple Mail Transfer Protocol [SMTP])
• Terminal emulation (Telnet)
• File transfer (File Transfer Protocol [FTP])
• HTTP
IP is considered a Layer 3 protocol according to the OSI model, and TCP is a
Layer 4 protocol.

Back Haul Providers

A few back haul providers comprise the
high-speed backbone of the Internet.
Only a handful of these providers are
capable of handling the massive
amounts of Internet traffic that continues
to grow. Many parts of the back haul
providers overlap with each other, which
improves both the speed and reliability
of the network.

Web Servers

All web pages are stored on computers
called web servers. Thousands of these
servers can be dedicated servers for
companies, hosting servers that house
many personal pages, or even single
computers housing individual pages.

Domain Name Server (DNS)

This server maps domain names to
their IP addresses. One of the reasons
that the Internet has taken off in use and
popularity is that www.cisco.com
is much easier to remember than
25.156.10.4.
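The mapping a DNS server performs can be sketched as a simple lookup table. The entries below are illustrative only, not real DNS records:

```python
# A DNS server's core job, reduced to a lookup table.
# These name-to-address entries are illustrative, not real records.
dns_table = {
    "www.cisco.com": "25.156.10.4",
    "www.example.net": "192.168.1.100",
}

def resolve(name):
    """Return the IP address registered for a domain name."""
    return dns_table[name]

print(resolve("www.cisco.com"))  # 25.156.10.4
```

A real resolver adds caching and a hierarchy of servers to consult when a name is not in its local table, but the essential job is this lookup.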

Access Providers

The web is really made
of many networks connected
in a hierarchy. Local Internet
service providers (ISPs) typically give
residential and small business access
to the Internet. Regional providers
typically connect several local ISPs to
each other and to back haul providers
that connect with other regional
providers.

Extra Layers?

Discussions among technical purists can often lead
to philosophical or budgetary debates that can
quickly derail otherwise-productive meetings.
These discussions are often referred to as Layer 8
(political) and Layer 9 (financial) debates.
Although these layers are not really part of the
OSI model, they are usually the underlying cause
of heated technology arguments.
Another common joke among networking professionals
is the type of networking problem referred
to as a “Layer 8 issue.” Because the network, computers,
and applications stop at Layer 7, Layer 8
sometimes represents the end user actually using
the system. So if you hear your IT person snicker
to his colleagues that your IT trouble ticket is
closed and it was a “Layer 8 issue,” the IT person
is referring to you.

De-encapsulation

De-encapsulation, the opposite of encapsulation, is
the process of passing information up the stack.
When a layer receives a PDU from the layer below,
it does the following:
1. It reads the control information provided by
the peer source device.
2. It strips the control information (header)
from the PDU.
3. It processes the data (usually passing it up the
stack).
Each subsequent layer performs this same de-encapsulation
process. To continue the mail example, when the plane arrives, the box of mail is
removed from the plane. The mailbags are taken
out of the boxes and are sent to the correct post
office. The letters are removed from the mailbags
and are delivered to the correct address. The
intended recipient opens the envelope and reads
the letter.

Encapsulation

The process of passing data down the stack using
PDUs is called data encapsulation. Encapsulation
works as follows: When a layer receives a PDU
from the layer above it, it encapsulates the PDU
with a header and trailer and then passes the PDU
down to the next layer. The control information
that is added to the PDU is read by the peer layer
on the remote device. Think of this as like putting
a letter in an envelope, which has the destination
address on it. The envelope is then put in a mailbag
with a zip code on it. The bag is then placed in a
large box with a city name on it. The box is then
put on a plane for transport to the city.
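The envelope analogy can be sketched in a few lines of Python. The bracketed strings stand in for real headers, trailers are omitted for brevity, and the layer names are just labels for illustration:

```python
def encapsulate(data, headers):
    """Pass data down the stack: each layer wraps the PDU it
    receives from above with its own header."""
    pdu = data
    for header in headers:  # listed from the top of the stack down
        pdu = f"[{header}|{pdu}]"
    return pdu

def de_encapsulate(pdu):
    """Pass the PDU up the stack: each layer reads and strips its
    peer's header, then hands the payload to the layer above."""
    while pdu.startswith("["):
        header, _, pdu = pdu[1:-1].partition("|")
    return pdu

frame = encapsulate("Hello", ["TCP", "IP", "Ethernet"])
print(frame)                  # [Ethernet|[IP|[TCP|Hello]]]
print(de_encapsulate(frame))  # Hello
```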

At-a-Glance: OSI Model

Communicating Between Layers
Each layer of the OSI model uses its own protocol
to communicate with its peer layer in the destination
device. The OSI model specifies how each
layer communicates with the layers above and
below it, allowing vendors to focus on specific layers
that will work with any other vendor’s adjacent
layers.
Information is exchanged between layers using
protocol data units (PDU). PDUs include control
information (in the form of headers and trailers)
and user data. PDUs include different types of
information as they go up or down the layers
(called “the stack”). To clarify where the PDU is
on the stack, it is given a distinct name at each of
the lower levels.
In other words, a PDU that is a segment (Layer 4)
includes all the application layer’s information. A
packet (Layer 3) includes network layer control
information in addition to the data and control
information contained at the transport layer.
Similarly, a frame (Layer 2) is a PDU that includes
data link layer control information in addition to
the upper layer control information and data.
Finally, PDUs at the physical layer (Layer 1) are
called bits.
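The naming convention can be captured in a small table. This is just a reading aid, not part of any protocol or API:

```python
# The same unit of data gets a distinct name at each lower layer.
pdu_names = {4: "segment", 3: "packet", 2: "frame", 1: "bits"}

for layer in (4, 3, 2, 1):
    print(f"Layer {layer} PDU: {pdu_names[layer]}")
```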

What Problems Need to Be Solved?

An OSI layer can communicate only with the layers
immediately above and below it on the stack,
and with its peer layer on another device. A
process must be used so that information (including
data and stack instructions) can be passed
down the stack, across the network, and back up
the stack on the peer device.

OSI Layers and Definitions

The OSI layers are defined as follows:
Layer 1: Physical
Layer 2: Data link
Layer 3: Network
Layer 4: Transport
Layer 5: Session
Layer 6: Presentation
Layer 7: Application
The four lower layers (called the data flow layers)
define connection protocols and methods for
exchanging data.
The three upper layers (called the application layers)
define how the applications within the end stations
communicate with each other and with users.
Several mnemonics have been developed to help you
memorize the layers and their order. Here’s one:
Please Do Not Throw Sausage Pizza Away

Why Should I Care About the OSI Model?

The Open Systems Interconnection (OSI) model is
a conceptual framework that defines network functions
and schemes. The framework simplifies complex
network interactions by breaking them into
simple modular elements. This open-standards
approach allows many independent developers to
work on separate network functions, which can
then be combined in a “plug-and-play” manner.
The OSI model serves as a guideline for creating
and implementing network standards, devices, and
internetworking schemes. Advantages of using the
OSI model include the following:
• It breaks interrelated aspects of network operation
into less-complex elements.
• It enables companies and individual engineers to
specialize design and development efforts on
modular functions.
• It provides standard interfaces for plug-and-play
compatibility and multivendor integration.
• It abstracts different layers of the network from
each other to provide easier adoption of new
technologies within a layer.

Layer 7, application:

Layer 7, application: The application layer provides networking services to a
user or application. For example, when an e-mail is sent, the application
layer begins the process of taking the data from the e-mail program and
preparing it to be put onto a network, progressing through Layers 6
through 1.
The combination of the seven layers is often called a stack. A transmitting
workstation traverses the stack from Layer 7 through Layer 1, converting the
application data into network signals. The receiving workstation traverses the
stack in the opposite direction: from Layer 1 to Layer 7. It converts the
received transmission back into a chunk of data for the running application.
When the OSI model was created, there was an industry initiative that tried to
implement a universal set of OSI network protocols, but it was not adopted.
Most popular protocols today generally use design principles that are similar
to and compatible with the OSI model, but they deviate from it in some areas
for various technical reasons. That said, the OSI model is still considered the
basis of all network communication.

Layer 6, presentation:

Layer 6, presentation: The presentation layer provides formatting services
for the application layer. For example, file encryption happens at this layer,
as does format conversion.

Layer 5, session:

Layer 5, session: The session layer manages connections between hosts. If
the application on one host needs to talk to the application on another, the
session layer sets up the connection and ensures that resources are available
to facilitate the connection. Networking folks tend to refer to Layers 5 to 7
collectively as the application layers.

Layer 4, transport:

Layer 4, transport: The transport layer is responsible for taking the chunk
of data from the application and preparing it for shipment onto the network.
Prepping data for transport involves chopping the chunk into smaller
pieces and adding a header that identifies the sending and receiving application
(otherwise known as port numbers). For example, Hypertext Transfer
Protocol (HTTP) web traffic uses port 80, and FTP traffic uses port 21.
Each piece of data and its associated headers is called a segment.
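The chop-and-label work of the transport layer can be sketched as follows. The 4-byte piece size and the two-field header are deliberate oversimplifications (real TCP headers carry much more, and real segments are far larger); the destination port is the well-known HTTP value, and the source port is an arbitrary choice for illustration:

```python
import struct

def segment(data, src_port, dst_port, mss=4):
    """Chop application data into pieces and prepend a minimal header
    carrying the sending and receiving port numbers. This is a
    simplified sketch, not the real TCP header layout."""
    pieces = []
    for i in range(0, len(data), mss):
        header = struct.pack("!HH", src_port, dst_port)  # two 16-bit ports
        pieces.append(header + data[i:i + mss])
    return pieces

# An HTTP request headed for port 80 on a web server.
segs = segment(b"GET /", src_port=49152, dst_port=80)
print(len(segs))  # 5 bytes of data at 4 bytes per piece -> 2 pieces
```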

Layer 3, network:

Layer 3, network: The network layer is where the majority of communications
protocols do their work, relying on Layers 2 and 1 to send and receive
messages to other computers or network devices. The network layer adds
another header to the front of the packet, which identifies the unique source
and destination IP addresses of the sender and receiver. The process of routing
IP packets occurs at this level.
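The addressing step can be sketched the same way. The 8-byte header below carries only the two addresses, unlike the real 20-byte IPv4 header; the addresses reused here are the examples from the text:

```python
import socket

def add_ip_header(segment, src, dst):
    """Prepend a minimal network-layer header holding the unique
    source and destination IP addresses. This is a sketch, not
    the real IPv4 header format."""
    return socket.inet_aton(src) + socket.inet_aton(dst) + segment

packet = add_ip_header(b"segment-data", "192.168.1.100", "25.156.10.4")
print(socket.inet_ntoa(packet[:4]))   # source:      192.168.1.100
print(socket.inet_ntoa(packet[4:8]))  # destination: 25.156.10.4
```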

Layer 1, physical:

Layer 1, physical: The physical layer is responsible for converting a frame
(the output from Layer 2) into electrical signals to be transmitted over the
network. The actual physical network can be copper wiring, optical fiber,
wireless radio signals, or any other medium that can carry signals. (We often
joke about running networks over barbed wire. It’s just a joke, but it actually
can be done.) This layer also provides a method for the receiving device
to validate that the data was not corrupted during transmission.

Open Versus Proprietary Systems

Although the open-source model is well-known today, when the OSI model was
being developed, there was an ongoing struggle to balance technical openness
with competitive advantage. At that time, each individual network equipment
vendor saw it as an advantage to develop technologies that other companies
could not copy or interact with. Proprietary systems let a vendor claim competitive
advantage as well as collect fees from other vendors it might choose to
share the technology with.
However, proprietary systems can complicate the network administrator’s job
by locking him or her into one vendor, reducing competitiveness and allowing
the vendor to charge higher prices. If the vendor goes out of business or abandons
the technology, no one is left to support or enhance the technology.
The alternative is an open-systems approach in which standards bodies, such
as the Institute of Electrical and Electronic Engineers (IEEE) or ISO, define
technologies. Ethernet, Transmission Control Protocol/Internet Protocol
(TCP/IP), and Spanning Tree Protocol (STP) are examples of technologies that
became standards. Today it is almost impossible to gain market traction with a
product that does not at least allow an open interface for other vendors to
work with. Any network-equipment vendor can implement an open standard.

The OSI Model

At some point, everyone involved with networking comes across a reference to
the Open Systems Interconnection (OSI) seven-layer model. Because this model
provides the architectural framework for all of network and computing communication,
it’s a good place to start. Even if you don’t ever plan on setting up
your own network, being familiar with this model is essential to understanding
how it all works.
The OSI seven-layer model describes the functions for computers to communicate
with each other. The International Organization for Standardization (ISO)
published this model in 1984 to describe a layered approach for providing network
services using a reference set of protocols called OSI. The basis of the
definition is that each of the seven layers has a particular function it must perform,
and each layer needs to know how to communicate with only the layers
immediately above and below it.
The advantages of the OSI approach may not be readily apparent. But this
simple concept of having layers understand only those adjacent to themselves
allows communications systems to be easily adapted and modified as technologies
evolve. For example, as new technologies are introduced in a lower layer,
such as Layer 1, upper layers do not necessarily need to be changed. Instead,
the adaptations at Layer 2 allow the layers above to use the new technologies
transparently. Imagine if all web browsers and e-mail programs had to be
replaced every time a new wireless network standard were introduced.
When the OSI networking model was defined, there was little standardization
among network equipment manufacturers. Customers generally had to standardize
on a particular vendor’s often proprietary hardware and software to
have devices communicate with each other. As a result of the ISO’s and other
standardization efforts, networking customers can mix and match hardware
when running open-standards protocols, such as Internet Protocol (IP).

Networking Fundamentals

Before we begin talking about specific networking technologies and applications, it’s worth taking a few
pages to go over some networking fundamentals. Networks exist for the sole purpose of sharing information
between people or machines. However, to share information, rules must be followed to ensure that
the myriad combinations of devices, transports, hardware, and software can communicate smoothly.
In “How Computers Communicate,” we cover the most basic aspects of computer networking, starting
with the OSI model. This communication model is the basis for all other topics discussed in this book, so
it’s a great place to start.
In “TCP/IP and IP Addressing,” we explore how two of the most popular protocols in use today work.
TCP/IP is the communication protocol that drives the Internet as well as most corporate traffic. We then
go a bit deeper into the Internet Protocol with a discussion of IP addressing, the concept that allows
shared information to reach its intended destination. We end the chapter with an overview of IPv6. The
addressing scheme discussed here (known as IPv4) has been in service for years. However, there has been
some concern in recent years that the Internet has grown beyond the current IP addressing scheme’s ability to
serve an ever-growing demand. Changing addressing schemes this far into networking’s history provides
some interesting challenges, which we will also explore.
“Internet Applications” provides a look at two of the most common applications—e-mail and web browsing.
This chapter provides some background on how these applications came about and provides a summary
of how they work. This should be helpful, because you probably use these applications every day.

BRANDWIDTH OVER BANDWIDTH

Branding will eventually replace the how. Just as consumers
talk about fueling up down at the local Texaco station but don’t
bother to explain if they filled up with diesel or gasoline, they
will talk about accessing the network via a particular brand.
Wireless Internet access brands will be not unlike the cellular
carriers of today; I’m an AT&T customer but often roam or use
another carrier’s network all while telling others I’m an AT&T
customer.
Even the most insightful futurists can’t guarantee exactly
what the interaction between culture and Wireless Internet
technology will produce. But even though the experts can’t
predict how the Wireless Internet will evolve, please keep one
thing in mind—the answer may someday be in your hand.

WILL THE WIRELESS INTERNET SURVIVE?

In an age where rapid technology development produces concepts
and innovations that disappear as quickly as they come, it’s
only natural to ask the question: Will the Wireless
Internet survive? We believe the Wireless Internet will eventually
disappear.
It will be out of sight, but it will still exist. Not as the
wired or wireless Internet, but simply as “the Internet” or “the
network.” Access method and device will eventually become
irrelevant.
As the Wireless Internet evolves and embeds itself in the
society and culture of our modern world, the phrase “Wireless
Internet” will quietly go away. When is the last time you heard
someone refer to the “electric” light? Or the “gasoline powered”
automobile? Or even “indoor” plumbing? The descriptors
of how eventually fall away as society gets used to
assuming the obvious or irrelevant. What will matter in the
future is that a user is connecting to a network; whether that
user arrives via cable broadband, GPRS, or a public WLAN
won’t really matter.

THE FUTURE OF WIRELESS INTERNET IS CERTAIN—TO CHANGE!

This book has covered the technologies and applications of the
Wireless Internet in an attempt to give you a high-level glimpse
of the many challenges and issues surrounding its evolution.
On many levels it’s still anyone’s guess as to which protocols
and specific technologies will emerge as part of the standard of
the future.
The wireless industry has many players all working to provide
their contribution to this amazing future of Wireless
Internet access. Not everyone agrees on the best way to ensure
success, but the momentum has generated a self-fulfilling
prophecy of sorts, led by industry: If they think it will happen,
it will (eventually anyway). But how much will it cost consumers
and industry? Who will profit?

REAL TIME ADDS VALUE

Remember the last time you went to a concert or show? Let’s
assume you have one friend who would have enjoyed the show
but wasn’t able to attend: The longer you wait to tell him about
it, the less you will remember and the less emotion you will feel
about the event. As time passes, you’ll have a reduced ability to
recall the event details.
Now imagine that you could share images, sounds, and
your thoughts in text in almost real time. Ever watched a live
TV show? The value of sharing events while the event is occurring
is apparent on TV. Wireless multimedia messages won’t be
TV, but they will be more like a short commercial—images,
sounds, and text combined to communicate with detail,
efficiency, and emotion and to allow the person on the other end
to better understand you.

SPEED INFLUENCES THE VOLUME OF COMMUNICATION

The speed of our communication process influences the
amount of things we want to communicate. Real-time communications
allow us to share things while they still have relevance.
Human communication is often about human
experiences—things that somehow impact our five senses.
Even intense experiences eventually fade from our memory.
Communication of these experiences is best right after the
event, or ideally, during the event.
When was the last time you took pictures? Birthday party?
Vacation? Wedding? Pictures are usually taken at high-emotion
events so that we can capture the moment and remember it
later. How fun is it to share these kinds of photos with friends
soon after you take them? Many of us can’t wait to share our
pictures as soon as we get them. Trouble is, the longer you
wait, the less fun it usually is to share. As Wireless Internet
devices enable users to capture and transmit images, sound,
and other data, the frequency of communication will increase.

ERODING EMOTION

One big problem exists: Emotions erode with time. You’ll put
up with co-workers who pull out pictures of some recent event,
but most of us tend to run when someone suggests sitting down
to view that old home movie or pictures from that vacation
back in 1978. Newlyweds always seem to have a wedding
album handy, but grandma and grandpa have theirs packed
away someplace (if you’re lucky).
So the goal becomes trying to share emotions in a timely
manner—in near-real-time whenever possible. Whether it’s an
IM session giving you a blow-by-blow account of the heated
debate coming from the corner office, a newly snapped pic of the
goings on down at the local pub, or a recently recorded audio clip
from your friends at the concert that you couldn’t get away for,
communication offers more emotional value when it is timely
and fresh. Wireless Internet applications will help users make the
most of personal communications while the content to be shared
still has value, before it erodes and becomes lifeless and dull.

NO, I DON’T WANT TO SEE WHAT YOU DID LAST SUMMER

Photos are a great example of sharing emotion. Think about
the pictures the average family takes: The subjects are people
and places that they care about—family and friends, places
they visited, etc. The activities pictured add more detail to a
child’s birthday party, a friend’s graduation, or scuba diving on
that Caribbean vacation.
Now think about what you do after you get the pictures
developed (assuming you don’t leave the film in a drawer for a
year). The natural inclination is to show others. Why do we do
it? To share the emotions that we felt when the pictures were
taken. Whether you were there when the picture was taken or
not, you’re still fair game when those pictures come back from
the photo lab.

IT’S ALL ABOUT EMOTION

We all remember the AT&T long distance ads on television that
encouraged us to “reach out and touch someone.” Despite
what AT&T might have charged back in the good old days of
the long distance monopoly, we must admit that they had figured
out the most important driver of communication. They
realized that personal communication is largely an emotional
activity, and people will pay to share emotions. Now, we aren’t
saying that communication should make you cry, but communications
can allow the kind of sharing that people will value.

WIRELESS EFFICIENCY

As we mentioned earlier, humans have always sought to communicate
efficiently. Who wants to endlessly repeat something
or have to deal with not being understood? The most successful
persons throughout history have been those who communicated
well on some level. Perhaps it wasn’t through speech—an
engineer might choose a technical drawing to communicate
an idea entirely and avoid talking at all.

MULTIMEDIA MESSAGING

One of the Wireless Internet technologies on the near horizon
is Multimedia Messaging (MMS). MMS is an application that
uses a data call to a wireless device that delivers a message
capable of incorporating any of the following in an organized
and choreographed presentation:

• Pictures
• Data
• Text
• Audio
• Video
• Voice
Whereas SMS messaging typically uses a digital control channel,
MMS will be one of the first applications that make use of the
carrier’s higher-speed data capabilities available in 2.5G and 3G systems.
MMS will take advantage of combinations of media to allow
users to communicate with more detail, emotion, and efficiency.
It’s important to understand how wireless mobility adds value to
multimedia by allowing the timely exchange of information.
MMS will be used to communicate in ways that even a digital
voice call can’t achieve. Though many of us have tried to
explain the sights and sounds around us while on a simple
voice call, we can agree that the effect is poor at best. Just as
SMS will be the first nonvoice communications most of us
encounter, MMS will be one of the first 3G communications
we use in a wireless fashion.


TECHNOLOGY IMPROVES SOCIAL INTERACTION

It is easy to think that all this new technology will dehumanize
us all and shift the emphasis from communicating with people
to interacting with technology. But the reality is just the opposite,
because innovations such as the Wireless Internet allow
for more frequent and detailed social contacts.
The Wireless Internet will be used more as a social medium,
making complex interactions less dependent on face-to-face
encounters. Technologies such as wireless email and
messaging help maintain contact while away from friends and
family and are very useful for arranging impromptu face-to-face
interactions. There will be an evolution from using a
voice-only phone to using a 2.5G or 3G computing or handheld
device to send pictures, coordinate diaries, organize social
events, and play games.
People will also benefit from the multimedia presentation
of information. The inclusion of graphics, sound, and animation
as part of the information that users consume conveys
much more than text. In an age of Macromedia Flash and MTV,
today’s users may reject information that is not presented in an
interesting way.

LIFE TURNS DIGITAL

We possess increasingly more personal digital content—digital
photos and video clips, digital music clips, and even cherished
emails. (Admit it—you’ve saved more than one personal email
for no other reason than to read it over and over, you softie!)
The Wireless Internet will encourage the collection of a growing
amount of personal digital content. Some of the newer
wireless devices have already announced plans for MP3 players,
audio-recording capabilities, and built-in digital cameras.
We will soon have the tools to digitally capture and share
like never before, just as the world was forever changed by
the adoption of the personal video camcorder. (As chronicled
by shows like America’s Funniest Home Videos—just imagine
what the future will bring. Anyone care to tune into America’s
Funniest PDA Audio Captures? Just think: You could actually
win a prize for recording those Dilbertesque comments your
boss makes in the weekly staff meetings.)

WIRELESS INTERNET— THIS TIME IT’S PERSONAL!

Our personal information is increasingly found in digital format:
pictures, letters, bills, receipts, videos. Digital means it’s
easier to share not only content but also the impact of content,
whether the content is informational, educational, entertaining,
or emotional.
Internet users today can create messages that incorporate
many media types: emails can include attachments of sound,
picture, audio, and pure data files. But let’s think about what
we would send and when we send it once we have the ability to
compose and send while mobile. In short, real-time distribution
will result in an increase in the quality and frequency of
communication.

WIRELESS BRIDGES THE DIVIDE

A Wireless Internet can play an important role in transforming
the digital divide into the digital dividend. The flexibility of
wireless infrastructure allows carriers to provide coverage in
difficult terrain as well as access in established buildings with
minimal labor and installation time. Equipment costs are
much less than for the PCs typically used to access the fixed
Internet; therefore the Wireless Internet is more accessible for
those who wish to own the access equipment for personal use
as well as for pay-per-use businesses. Remote users in developing
countries will benefit from the mobility and freedom of
smaller more portable devices that can be easily transported
from village to village. Although Wireless Internet access is
more limited than fixed, PC-based access, many countries will
benefit from the use of wireless access services as an important
part of the digital dividend solution.

DIGITAL DIVIDE—HOW WIRELESS CAN CHANGE THE WORLD

The growing consensus is that in the New Economy access to
knowledge is critical for economic success. Unfortunately the
economic power of the Internet is not equally distributed.
Recent Internet usage statistics show that there are currently
429 million Internet users worldwide. This number is
actually small when considered in context. Of that 429 million,
41 percent are in North America; in fact, the United States has
more computers than the rest of the world combined!
These 429 million users actually represent only 6 percent of
the world’s entire population. The following breakdown shows
just how uneven Internet usage is across the world’s regions.
Of the online population:
• 41 percent are in the United States and Canada
• 27 percent live in Europe, the Middle East, and Africa
• 20 percent are located in Asia
• Only 4 percent are located in South America
The importance of Internet access will further divide the
world’s population into two main groups—those having access
and those who do not.

The poorest members of society suffer based on three primary
assumptions:
• The poor cannot afford to buy the necessary equipment needed
to be connected to the Internet.
• The infrastructure of developing countries may be so poor
that a significant portion of the population is not able to connect
even if equipment is available.
• The poor may not be literate enough to make use of equipment
and connectivity even when available.
The issue of the digital divide is beginning to evolve into a
drive towards realizing the digital dividend. The digital dividend
focuses on how to use technology to improve the economic
possibilities of global society.
Some of the key principles that will enable a digital dividend
include:
• Access vs. ownership. The assumption that users must purchase
equipment to access the Internet should be challenged.
In the New Economy, the true economic benefit
comes from access to sources of knowledge and competence,
not from ownership of the access device.
A pattern is emerging in several developing countries:
individuals with equipment and access are building businesses
around providing access to others. Local entrepreneurs
in India (mostly women) are operating pay-per-use
telephone services that bring access to remote and otherwise
underserved areas. With little more than a mobile phone,
these entrepreneurs have made telephone access possible
for a large number of urban poor and people in remote villages.
Many are now adding fax and PC services to their portfolio
of services.
• Rational trade-offs. While many of us would opt for direct ownership
of a PC or cellular phone, trading currency for convenience,
the poor make an equally logical trade-off by exchanging
personal convenience for low-cost, no-investment access.

This approach may also make sense for those who are able
to purchase, because technology seems to advance at a rate
that quickly makes equipment obsolete!
In an age of ever-changing PC features, individual ownership
may not be the best choice after all.
• The connectivity leapfrog. Many developing countries have
never had far-reaching telephony systems due in part to the
cost of infrastructure needed to cover sparse or difficult terrain.
Without a legacy wireline system in place, users are
unable to access even simple communications. With infrastructure
costs less than half that of a wireline system, wireless
is becoming the telephony system of choice for many
regions that lack existing copper connections to homes and
businesses. The Wireless Internet will help overcome connectivity
issues in countries that lack adequate physical
wiring.
• Multimedia literacy. It’s well known that the Internet started
as a largely English-language medium, to the exclusion of
many languages, especially those written in a non-Roman
alphabet. The tide is slowly turning, and more Web sites are
publishing content in local languages.
The move towards multimedia will also help alleviate this
issue for those who are not able to read text but can communicate
verbally and visually. Many cultures have unique
dialects that are difficult and costly to translate into text but
that can be published at lower cost in a voice format.
Multimedia will enable communication that accommodates the
needs of the user by integrating text, audio, and video in
whatever combination the individual can use.