Ethernet describes a system that links computers within a building or other
local area. It consists of hardware (a network interface card), software, and
cabling used to connect the computers. All computers on an Ethernet are
attached to a shared data link, as opposed to traditional point-to-point networks,
in which a single device connects directly to another single device.
Because all computers share the same data link on an Ethernet network, the
network needs a protocol to handle contention if multiple computers want to
transmit data at the same time, because only one can talk at a time without
causing interference. Metcalfe’s invention introduced the carrier sense multiple
access with collision detection (CSMA/CD) protocol. CSMA/CD defines how a computer
should listen to the network before transmitting. If the network is quiet,
the computer can transmit its data. However, a problem arises if more than
one computer listens, hears silence, and transmits at the same time: the data
collides. The collision-detection part of CSMA/CD defines a method by which
transmitting computers back off when collisions occur and retry transmission
after a random delay. Ethernet originally operated at 3 Mbps, but today it
operates at speeds ranging from 10 Mbps (that’s 10 million bits per second) to
10 Gbps (that’s 10 billion bits per second).
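The listen-then-back-off behavior can be sketched in a few lines of Python. This is a simplified illustration of the decision logic, not a working NIC; the function names are ours, but the backoff ranges and the 16-attempt limit follow classic Ethernet's truncated binary exponential backoff:

```python
import random

def backoff_slots(attempt, rng=random):
    """Binary exponential backoff: after the nth collision, wait a random
    number of slot times drawn from 0 .. 2**min(n, 10) - 1."""
    return rng.randint(0, 2 ** min(attempt, 10) - 1)

def try_transmit(carrier_sensed, collision_detected, attempt):
    """One CSMA/CD decision: defer, send, back off, or give up."""
    if carrier_sensed:
        return ("defer", 0)     # someone else is talking; keep listening
    if not collision_detected:
        return ("sent", 0)      # quiet channel, the frame goes out cleanly
    if attempt >= 16:
        return ("giveup", 0)    # classic Ethernet aborts after 16 attempts
    return ("backoff", backoff_slots(attempt))
```

Note how the random, growing backoff window is what breaks the tie: two colliding stations are unlikely to pick the same delay twice in a row.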
History of Ethernet
Robert Metcalfe developed Ethernet at the famous Xerox Palo Alto Research
Center (PARC) in 1972. The folks at Xerox PARC had developed a personal
workstation with a graphical user interface. They needed a technology to network
these workstations with their newly developed laser printers. (Remember,
the first PC, the MITS Altair, was not introduced to the public until 1975.)
Metcalfe originally called this network the Alto Aloha Network. He changed
the name to Ethernet in 1973 to make it clear that any type of device could
connect to his network. He chose the name “ether” because the network carried
bits to every workstation in the same manner that scientists once thought
waves were propagated through space by the “luminiferous ether.”
Metcalfe’s first external publication concerning Ethernet was available to the
public in 1976. Metcalfe left Xerox, and in 1979 he got Digital Equipment
Corporation (DEC), Intel, and Xerox to agree on a common Ethernet standard
called DIX. In 1982, the Institute of Electrical and Electronics Engineers (IEEE)
adopted a standard based on Metcalfe’s Ethernet.
Ethernet took off in academic networks and some corporate networks. It was
cheap, and public domain protocols such as Internet Protocol (IP) ran natively
on it. However, another company (IBM) wanted the world to adopt its protocol
instead, called Token Ring. Before switching was introduced, Ethernet was
more difficult to troubleshoot than Token Ring. Although Ethernet was less
expensive to implement, larger corporations chose Token Ring because of their
relationship with IBM and the ability to more easily troubleshoot problems.
Early Ethernet used media such as coaxial cable, and a network could literally
be a single long, continuous segment of coax cable tied into many computers.
(This cable was known as Thinnet or Thicknet, depending on the thickness of
the coax used.) When someone accidentally kicked the cable under his or her
desk, this often produced a slight break in the network. A break meant that
no one on the network could communicate, not just the poor schmuck who
kicked the cable. Debugging usually entailed crawling under desks and
following the cable until the break was found.
In contrast, Token Ring had more sophisticated tools (than crawling on your
knees) for finding the breaks. It was usually pretty obvious where the token
stopped being passed and, voilà, you had your culprit.
The battle for the LAN continued for more than ten years, until eventually
Ethernet became the predominant technology. Arguably, it was the widespread
adoption of Ethernet switching that drove the final nail in Token Ring’s coffin.
Other networking technologies, such as AppleTalk and Novell IPX, have come
and gone, but Ethernet prevails as the predominant technology
for local high-speed connectivity.
Thankfully, we have left behind early media such as coax for more sophisticated
technologies.
Networking Infrastructure
With the fundamentals of networking under our belt, we can now take a closer look at the infrastructure
that makes up the networks we all use. This section focuses on the switches and routers that make up networks,
along with the protocols that drive them.
We start this section with a discussion of the Ethernet protocol, which defines the rules and processes by
which computers in a local area communicate. Long before the Internet was in use, computers communicated
locally using the Ethernet protocol, and it is still widely used.
We then move on to local-area network (LAN) switching, an extension of the Ethernet protocol required
when there are more computers in a local segment than can communicate efficiently. Switching is one of
the core technologies in networking.
One of the necessities in networking is link redundancy, something that makes it more likely that data
reaches its intended target. Sometimes, however, link redundancy can create loops in the network, which
causes an explosion of administrative traffic that can take down a network in a matter of minutes.
Spanning Tree is one of the mechanisms that keeps these “broadcast storms” from wiping out your local
network, so we look at how this important protocol works.
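To see why a loop is so dangerous, consider a toy model (purely illustrative; real switches also learn MAC addresses and forward unicast frames selectively, which this sketch ignores): four switches wired in a full mesh, each flooding a broadcast frame out every port except the one it arrived on.

```python
from collections import deque

# Four hypothetical switches wired in a full mesh: every link is redundant.
links = {
    "A": ["B", "C", "D"], "B": ["A", "C", "D"],
    "C": ["A", "B", "D"], "D": ["A", "B", "C"],
}

def flood(start, rounds):
    """Count broadcast frames in flight each round when every switch
    floods a received frame out all ports except the one it came in on."""
    frames = deque([(None, start)])      # (arrived_from, at_switch)
    counts = []
    for _ in range(rounds):
        nxt = deque()
        for came_from, here in frames:
            for peer in links[here]:
                if peer != came_from:    # flood everywhere but the ingress port
                    nxt.append((here, peer))
        frames = nxt
        counts.append(len(frames))
    return counts

print(flood("A", 4))   # frame count doubles every round: [3, 6, 12, 24]
```

With Spanning Tree blocking the redundant ports, the active topology becomes a loop-free tree, so each broadcast reaches every switch once instead of multiplying forever.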
We end this section with routing, which provides the basis for network communication over long distances.
The advent of routing allowed the growth of the Internet and corporate networking as we know it
today. This section explores how routing works and how routers communicate.