
Unit 2

Multiple Access Protocols


The Medium Access Control Sublayer
• Network Links can be divided into:
1. Point-to-point connections
2. Broadcast channels
• In any broadcast network, the key issue is how to determine who gets
to use the channel when there is competition.
• Example: teleconferencing.
• Also known as: multi-access channels or random access channels.
• The protocols belong to sublayer of the data link layer called Medium
Access Control (MAC) sublayer.
• Especially important in LANs and in wireless communication.
• WANs use point-to-point links, with the exception of satellite links.
Channel Allocation Problem
• Types of channel allocation
1) Static channel allocation
2) Assumptions for dynamic allocation.
 Static Channel Allocation:
• Traditionally capacity of the channel is split among multiple competing users (e.g., TDM or
FDM).
• Example: FM radio stations.
• However, when the number of senders is large and varying, or the traffic is bursty, FDM
presents some problems.
• If the spectrum is cut up into N regions and
– Fewer than N users are currently interested in communicating, a large piece of valuable
spectrum is wasted.
– More than N users want to communicate, some of them will be denied permission for
lack of bandwidth.
• Dividing the channel into a fixed number of static subchannels is inherently inefficient.
• A static allocation is a poor fit for most computer systems, in which data traffic is extremely
bursty:
– Peak-to-mean traffic ratios of 1000:1 are common.
– Consequently, most of the channels will be idle most of the time.
For the following values, determine the mean time delay T using the queueing formula T = 1/(μC − λ):
C = 100 Mbps, 1/μ = 10,000 bits, λ = 5000 frames/sec.
Using this formula, the mean time delay is T = 1/(10,000 − 5000) sec = 200 μsec. This result holds only when
there is no contention in the channel.
• Divide the single channel into N independent channels:
– C/N = 100/N Mbps
– 1/μ = 10,000 bits
– λ/N = 5000/N frames/sec per channel
– TN = N × 200 μsec
– For N = 10 => TN = 2 msec.
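The arithmetic above can be checked with a short sketch. The `mean_delay` helper is illustrative, and the queueing formula T = 1/(μC − λ) is assumed from the slide's reference to "the above formula":

```python
# Mean delay for a single shared channel vs. N static subchannels,
# assuming the queueing formula T = 1 / (u*C - lambda).
def mean_delay(capacity_bps, bits_per_frame, arrival_rate):
    """Mean time delay in seconds for a channel with the given load."""
    service_rate = capacity_bps / bits_per_frame   # frames/sec the channel can carry
    return 1.0 / (service_rate - arrival_rate)

C = 100e6           # 100 Mbps
frame_bits = 10_000 # 1/u = 10,000 bits per frame
lam = 5000.0        # 5000 frames/sec

T = mean_delay(C, frame_bits, lam)           # single channel
# Splitting into N channels: each gets C/N capacity and lam/N traffic.
N = 10
T_N = mean_delay(C / N, frame_bits, lam / N)

print(T)    # 0.0002 s (200 microseconds)
print(T_N)  # 0.002 s  (2 milliseconds) -> N times worse
```

This makes the inefficiency of static splitting concrete: the divided channel is N times slower for the same total load.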
 Assumptions for Dynamic Channel Allocation:
1. Independent traffic
2. Single channel
3. Observable Collisions
4. Continuous or slotted time
5. Carrier sense or no carrier sense
• Independent Traffic.
− The model consists of N independent stations (e.g., computers, telephones), each with a program or
user that generates frames for transmission.
− The expected number of frames generated in an interval of length Δt is λΔt, where λ is a constant
(the arrival rate of new frames).
− Once a frame has been generated, the station is blocked and does nothing until the frame has been
successfully transmitted.
• Single Channel:
– The single channel is available for all communication.
– All stations can transmit on it and all can receive from it.
– The stations are assumed to be equally capable, though protocols may assign them different roles
(e.g., priorities).
• Observable Collisions:
– If two frames are transmitted simultaneously, they overlap in time and the resulting signal is
garbled. This event is known as a collision.
– All stations can detect that a collision has occurred. A collided frame must be retransmitted.
– No errors other than those generated by collision occur.
• Continuous or Slotted Time:
– Time may be assumed continuous. In which case frame transmission can begin at
any instant.
– Alternatively, time may be slotted or divided into discrete intervals (called slots).
– Frame transmission must then begin at the start of a slot.
– A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful
transmission, or a collision, respectively.
• Carrier Sense or No Carrier Sense:
– With the carrier sense assumption, stations can tell if the channel is in use before
trying to use it.
– No station will attempt to use the channel while it is sensed as busy.
– If there is no carrier sense, stations cannot sense the channel before trying to use it.
– They just go ahead and transmit; only later can they determine whether the transmission was
successful.
• Poisson models are used for the independence assumption because of their tractability, even though
they are known not to describe real traffic faithfully.

• The single channel assumption is the heart of the model, though real systems do not always match it well.
What is Multiple Access?
 A broadcast link, as used in a LAN, consists of multiple sending and
receiving nodes connected to (or sharing) a single link.

Data link layer divided into two functionality-oriented sublayers


Introduction to Multiple Access protocol
 Problem: When two or more nodes transmit at the same time, their frames will collide and
the link bandwidth is wasted during the collision.
 How can the access of multiple sending/receiving nodes to the shared link be coordinated?
• Solution: We need a protocol to coordinate the transmission of the active nodes.
• These protocols are called Medium or Multiple Access Control (MAC) protocols and belong to
a sublayer of the data link layer called the MAC (Medium Access Control) sublayer.
• What is expected from Multiple Access Protocols:
– Main task is to minimize collisions in order to utilize the bandwidth by:
• Determining when a station can use the link (medium)
• what a station should do when the link is busy
• what the station should do when it is involved in collision

Types of Multiple-access protocols:


Random Access Protocols
Random Access (or contention) Protocols:
No station is superior to another station and none is assigned the control over
another.
A station with a frame to be transmitted can use the link directly based on a
procedure defined by the protocol to make a decision on whether or not to send.
Types of Random Access Protocols:
(1) ALOHA (2) CSMA
ALOHA:
ALOHA is a Hawaiian word: “ALO” means “share” and “HA” means “essence of
life”.
It was designed for wireless LAN and can be used for any shared medium
History
Development of the ALOHA network was begun in 1968 at the University of Hawaii under
the leadership of Norman Abramson.
 The goal was to use low-cost commercial radio equipment to connect users on Oahu(an
island of Hawaii) and the other Hawaiian islands with a central time-sharing computer on
the main Oahu campus.
 ALOHAnet became operational in June, 1971, providing the first public demonstration
of a wireless packet data network.
 Although ALOHAnet was designed for wireless communication, there were two other
media available for the application of an ALOHA channel – cables and satellites.
 In the 1970s, ALOHA random access was employed in the widely used cable-based
Ethernet network and later in satellite networks.
 A family of Contention Protocols
Contention is a media access method that is used to share a broadcast medium. In
contention, any computer in the network can transmit data at any time (first come-first served).
The ALOHAnet solution was to allow each client to send its data without controlling when it
was sent, with an acknowledgment/retransmission scheme used to deal with collisions.
Basic Idea Behind ALOHA
Use of two distinct frequencies in a hub/star configuration.
The hub machine broadcasts packets to everyone on the “outbound” channel.
Client machines send data packets to the hub on the “inbound” channel.
 If data was received correctly at the hub, a short acknowledgment packet was sent to the client.
- If an acknowledgment was not received by a client machine after a short wait time, it would
automatically retransmit the data packet after waiting a randomly selected time interval.
- This acknowledgment mechanism was used to detect and correct for "collisions" created when
two client machines both attempted to send a packet at the same time.
 Types of ALOHA
1. Pure ALOHA
2. Slotted ALOHA
 Pure ALOHA:
– All frames from any station are of fixed length (L bits)
– Stations transmit at equal transmission time (all stations produce frames with
equal frame lengths).
– A station that has data can transmit at any time
– After transmitting a frame, the sender waits for an acknowledgment for an
amount of time (time out) equal to the maximum round-trip propagation
delay = 2* tprop(see next slide)
– If no ACK was received, sender assumes that the frame or ACK has been
destroyed and resends that frame after it waits for a random amount of time
– If station fails to receive an ACK after repeated transmissions, it gives up
– Channel utilization or efficiency or Throughput is the percentage of the
transmitted frames that arrive successfully (without collisions) or the
percentage of the channel bandwidth that will be used for transmitting frames
without collisions
– PURE ALOHA maximum channel utilization is 18% (i.e., if the system
produces F frames/s, then 0.18 × F frames will arrive successfully on average
without the need of retransmission).
• Maximum propagation delay(tprop):
The time it takes for a bit of a frame
to travel between the two most
widely separated stations.

Vulnerable or Critical time for pure ALOHA protocol:
If the frame transmission time is Tfr sec,
then the vulnerable time is 2 Tfr sec.
This means no station should send during
the Tfr-sec before this station starts
transmission and no station should start
sending during the Tfr-sec period that the
current station is sending.
Throughput for pure ALOHA protocol
It is the average number of successfully transmitted frames.
The throughput (S) for pure ALOHA is
S = G × e^(−2G).
The maximum throughput Smax = 0.184 occurs when G = 0.5 (i.e., channel utilization is 18%).

G = average number of frames generated by the system (all stations) during one frame
transmission time.
Flow Diagram for Pure ALOHA protocol
 Slotted ALOHA:
• Time is divided into slots equal to a frame transmission time (Tfr)
• A station can transmit at the beginning of a slot only
• If a station misses the beginning of a slot, it has to wait until the beginning of the next time
slot.
• A central clock or station informs all stations about the start of each slot
• Maximum channel utilization is 37%
Vulnerable or Critical time for Slotted ALOHA protocol

If the frame transmission time is Tfr sec,
then the vulnerable time is Tfr sec.
This means no station should send during
the Tfr-sec before this station starts
transmission and no station should start
sending during the Tfr-sec period that the
current station is sending.

Throughput for Slotted ALOHA protocol

The throughput for slotted ALOHA is S = G × e^(−G).


The maximum throughput Smax = 0.368 when G = 1.
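The two throughput formulas can be evaluated directly. A small sketch (function names are illustrative):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): successful frames per frame time, pure ALOHA."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): successful frames per frame time, slotted ALOHA."""
    return G * math.exp(-G)

# The maxima quoted in the slides:
print(round(pure_aloha_throughput(0.5), 3))    # 0.184 at G = 0.5
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368 at G = 1
```

Halving the vulnerable period (2 Tfr down to Tfr) is exactly what doubles the peak throughput.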
 Advantage of ALOHA protocols
A node that has frames to transmit can transmit continuously at the full
rate of the channel (R bps) if it is the only node with frames
Simple to implement
No master station is needed to control the medium
 Disadvantage
If (M) nodes want to transmit, many collisions can occur and the average rate allocated to
each node will fall below R/M bps
 This causes low channel utilization

Efficiency of Aloha:
CARRIER SENSE MULTIPLE ACCESS (CSMA)
• To improve performance, avoid transmissions that are certain to cause collisions
• Based on the fact that in LAN propagation time is very small
•  If a frame is sent by a station, all stations know about it almost immediately, so they can wait before
starting to send
–  A station with frames to send should sense the medium for the presence of another
transmission (carrier) before it starts its own transmission
• This can reduce the possibility of collision but it cannot eliminate it.
– A collision can only happen when more than one station begins transmitting within a short
time (the propagation time period).
 Vulnerable time for CSMA is the maximum propagation time
 The longer the propagation delay, the worse the performance of the protocol
because of the above case.

Types of CSMA Protocols


 Different CSMA protocols that determine:
 What a station should do when the medium is idle?
 What a station should do when the medium is busy?
 Different Methods of CSMA Protocols are:
1. Non-Persistent CSMA
2. 1-Persistent CSMA
3. p-Persistent CSMA
Nonpersistent CSMA
 1. A station with frames to send senses the medium
 2. If the medium is idle, transmit; otherwise, go to step 3
 3. If the medium is busy, back off (wait a random amount of time) and
repeat step 1
 Non-persistent Stations are deferential (respect others)
 Performance:
 Random delays reduce the probability of collisions because two stations
with data to transmit will wait for different amounts of time.
 Bandwidth is wasted if waiting time (backoff) is large because
medium will remain idle following end of transmission even if one or
more stations have frames to send .
1-persistent CSMA
 The protocol is called 1-persistent because the station transmits with a probability
of 1 when it finds the channel idle.
 To avoid idle channel time, 1-persistent protocol used
 Station wishing to transmit listens to the medium:
 If medium idle, transmit immediately;
 If medium busy, continuously listen until medium becomes idle; then transmit
immediately with probability 1
 Performance
 1-persistent stations are selfish
 If two or more stations become ready at the same time, a collision is guaranteed
P-persistent CSMA
 Time is divided into slots, where each time unit (slot) typically equals the maximum
propagation delay
 A station wishing to transmit listens to the medium:
 1. If the medium is idle,
  transmit with probability (p), OR
  wait one time unit (slot) with probability (1 − p), then repeat step 1.
 2. If the medium is busy, continuously listen until it becomes idle, then repeat step 1.
 Performance
 Reduces the possibility of collisions like nonpersistent
 Reduces channel idle time like 1-persistent
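The three persistence strategies differ only in the per-slot decision. A minimal sketch of the p-persistent decision (the `p_persistent_send` helper is hypothetical; `channel_busy` is a caller-supplied probe of the medium):

```python
import random

def p_persistent_send(channel_busy, p=0.1, rng=random.random):
    """One slot's decision for p-persistent CSMA.
    Returns 'transmit', 'defer' (wait one slot), or 'listen' (medium busy)."""
    if channel_busy():
        return 'listen'      # keep listening until the medium becomes idle
    if rng() < p:
        return 'transmit'    # medium idle: send with probability p
    return 'defer'           # otherwise wait one slot with probability 1 - p
```

With p = 1 this degenerates to 1-persistent CSMA; a nonpersistent station would instead back off a random time whenever `channel_busy()` is true.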
Comparison of the Channel Utilization versus Load for Various Random
Access Protocols
CSMA/CD (Collision Detection)
If a station hears a signal different from the one it is transmitting (i.e., not purely its
own signal), it means a collision has occurred.
 CSMA (all previous methods) has an inefficiency:
 If a collision has occurred, the channel is unstable until colliding packets have

been fully transmitted


 CSMA/CD (Carrier Sense Multiple Access with Collision Detection) overcomes this
as follows:
 While transmitting, the sender is listening to medium for collisions.

 Sender stops transmission if collision has occurred reducing channel wastage.

 It is widely used for bus topology LANs (IEEE 802.3, Ethernet).


 It uses one of the CSMA persistence algorithms (non-persistent, 1-persistent, p-
persistent) for transmission.
 If a collision is detected by a station during its transmission then it should do the
following:
Abort transmission and
Transmit a jam signal (48 bits) to notify the other stations of the collision so that they
will discard the transmitted frame, and also to make sure that the collision signal
stays on the medium until detected by the furthest station
After sending the jam signal, backoff (wait) for a random amount of time, then
Transmit the frame again
• Question: How long does it take to detect a collision?
• Answer: In the worst case, twice the maximum propagation delay of the medium.

 Restrictions:
 Packet transmission time should be at least as long as the time needed to
detect a collision (2 * maximum propagation delay + jam sequence
transmission time).
 Otherwise, CSMA/CD does not have an advantage over CSMA
Flow diagram for the CSMA/CD
CSMA/CA
Flow chart for CSMA/CA
Performance of Random Access Protocols
• Simple and easy to implement

• Decentralized (no central device that can fail and bring down the entire system)

• In low-traffic, packet transfer has low-delay

• However, limited throughput and in heavier traffic, packet delay has no limit.

• In some cases, a station may never have a chance to transfer its packet. (unfair
protocol)

• A node that has frames to be transmitted can transmit continuously at the full
rate of channel (R) if it is the only node with frames

• If (M) nodes want to transmit, many collisions can occur and the rate for each
node will not be on average R/M
Collision-Free Protocols
 • Can we avoid collisions altogether, even during the contention period?
Yes. But how?
 One assumption: there are N stations, each with a unique address from 0 to N − 1 “wired” into
it.
Which station gets the channel after a successful transmission ?
 A BASIC BIT-MAP PROTOCOL:
• Each contention period consists of exactly N slots, with one slot time being at least 2τ.
• If station i (0 ≤ i ≤ N − 1) has a frame to send, it transmits a 1 bit during the ith slot; otherwise, it
transmits a 0 bit during that slot.
• After all slots have passed by, stations begin transmitting in numerical order.
• After the last ready station has transmitted its frame, another N-bit contention period is begun.
• Protocols like this in which the desire to transmit is broadcast before the actual transmission
are called reservation protocols.
• Low load situation: the bit map repeats over and over.
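The reservation round described above can be sketched as follows (the `bitmap_round` helper is illustrative):

```python
def bitmap_round(ready):
    """One contention period of the basic bit-map protocol.
    `ready[i]` is True if station i has a frame to send; station i announces
    this with a 1 bit in slot i. After the N reservation slots, the ready
    stations transmit in numerical order."""
    N = len(ready)
    reservation = [1 if ready[i] else 0 for i in range(N)]  # bit sent in slot i
    return [i for i in range(N) if reservation[i] == 1]     # transmission order

print(bitmap_round([False, True, False, True, True]))  # [1, 3, 4]
```

Note the fixed overhead: every round costs N reservation slots, whether one station is ready or all of them are.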
 TOKEN PASSING
• Form a circular list. Pass a token around. Whoever has
the token can transmit i.e., the token represents
permission to send.
• Only a station that wants to transmit seizes the token,
and it releases the token after a successful transmission.
• Possession of the token allows a station to use the bus to
send one frame, as before. This protocol is called token
bus.
• The performance of token passing is similar to that of
the bit-map protocol, though the contention slots and
frames of one cycle are now intermingled. After sending
a frame, each station must wait for all N stations
(including itself) to send the token to their neighbors and
the other N − 1 stations to send a frame, if they have
one.
 THE BINARY COUNTDOWN PROTOCOL
A problem with the basic bit-map protocol is that
overhead is 1 contention bit slot per station. We can
do better than that by using binary station addresses.
1. Each station has a binary address. All addresses
are the same length.
2. To transmit, a station broadcasts its address as a
binary bit string, starting with high order bit.
3. The bits in each address position from different
stations are BOOLEAN ORed together (so called
Binary countdown).
4. As soon as a station sees that a high-order bit
position that is 0 in its address has been
overwritten with a 1, it gives up.
5. After the winning station has transmitted its frame,
there is no information available telling how
many other stations want to send, so the algorithm
begins all over with the next frame.
ETHERNET
• Types of Ethernets
1. Classical Ethernet
2. Switched Ethernet
• Physical layer
• MAC sublayer protocol

Figure IEEE standard for LANs


Types of ETHERNETs
• Two kinds of Ethernet exist: classic Ethernet, which solves the multiple access problem
using the random access and collision-free techniques described above; and switched Ethernet,
in which devices called switches are used to connect different computers.
• It is important to note that, while they are both referred to as Ethernet, they are quite different.
Classic Ethernet is the original form and ran at rates from 3 to 10 Mbps. Switched Ethernet
runs at 100, 1000, and 10,000 Mbps, in forms called fast Ethernet, gigabit Ethernet, and 10
gigabit Ethernet.
• In practice, only Switched Ethernet is used nowadays.

Classic Ethernet Physical Layer


• Bob Metcalfe, with David Boggs, designed and implemented the first local area network in
1976 at the Xerox Palo Alto lab.
• It used a single long thick coaxial cable and ran at a speed of 3 Mbps.
• ETHERNET – named after the luminiferous ether, once thought to transfer electromagnetic radiation.
• The successful design was drafted as a standard in 1978 by Xerox, DEC, and Intel at 10
Mbps. In 1983 it became the IEEE 802.3 standard.
• Thick Ethernet – a thick cable. Segment could be as long as 500 m. Could be used to connect
up to 100 computers.
• Thin Ethernet – BNC connectors. Segment could be no longer than 185 m. Could be used to
connect up to 30 computers.
fig: Architecture of classic Ethernet
• To cover larger distances, the cables could be connected by repeaters.
• Repeater is a physical layer device that receives, amplifies, and retransmits signals in both
directions.
• Over each of those cables the signal was coded using Manchester encoding.
• Other restriction was that no two transceivers could be more than 2.5 km apart and no path
between any two transceivers could traverse more than four repeaters.
• This limitation was imposed by the MAC protocol used.
MAC Sublayer Protocol
• The format used to send frames is shown in the figure
• Frame formats. (a) Ethernet (DIX). (b) IEEE 802.3

1. Preamble – 8 bytes
• The Preamble is 8 bytes, each containing the bit pattern 10101010 (with the exception of the
last byte, in which the last 2 bits are set to 11). This last byte is called the Start of
Frame delimiter for 802.3.
• The Manchester encoding of this pattern produces a 10-MHz square wave for 6.4 μsec to allow the
receiver’s clock to synchronize with the sender’s.
• The last two 1 bits tell the receiver that the rest of the frame is about to start.
2. Two addresses each 6 bytes – destination + source
 The first bit of the destination address is 0 for ordinary addresses and 1 for group addresses.
 Group addresses allow multiple destinations to listen to a single address – multicasting.
 The special address consisting of all 1s is reserved for broadcasting.
 Uniqueness of the addresses:
 The first 3 bytes are the OUI (Organizationally Unique Identifier).
 Manufacturers are assigned blocks of 2^24 addresses. The manufacturer assigns the last 3 bytes
of the address and programs the complete address into the NIC.
3. Type or Length field.
– Depending on whether the frame is Ethernet or IEEE 802.3
– Ethernet uses a Type field to tell the receiver what to do with the frame.
– Multiple network-layer protocols may be in use at the same time on the same machine.
So when Ethernet frame arrives, the operating system has to know which one to hand
the frame to. The Type field specifies which process to give the frame to. E.g. 0x0800
indicates the frame contains IPv4 packet.
– The field can alternatively carry the length of the frame.
– With a Length field, the frame type had to be determined by looking inside the data – a layer violation.
– For this, another header for the Logical Link Control (LLC) protocol was added within the data. It
uses 8 bytes to convey the 2 bytes of protocol type information.
– Rule: any value of 1500 (0x5DC) or less is interpreted as a Length, and any value of
0x600 (1536) or greater is interpreted as a Type.
4. Data Field and Pad Field
– Up to 1500 bytes.
– Minimum frame length – valid frames must be at least 64 bytes long – from destination
address to checksum.
– If data portion is less than 46 bytes the Pad field is used to fill out the frame to the
minimum size.
– The minimum frame length also serves one very important role – it prevents a sender from
completing transmission before the first bit arrives at the destination.
–In addition to there being a maximum frame length, there is also a minimum frame
length.
–While a data field of 0 bytes is sometimes useful, it causes a problem.
–When a transceiver detects a collision, it truncates the current frame, which means that
stray bits and pieces of frames appear on the cable all the time.
–To make it easier to distinguish valid frames from garbage, Ethernet requires that valid
frames must be at least 64 bytes long, from destination address to checksum, including
both.
–If the data portion of a frame is less than 46 bytes, the Pad field is used to fill out the
frame to the minimum size.
–For a 10-Mbps LAN with a maximum length of 2500 m and four repeaters, the round-trip
time has been determined to be nearly 50 μsec in the worst case.
–The shortest allowed frame must take at least this long to transmit.
At 10 Mbps, a bit takes 100 nsec,
so 50 μsec / 100 nsec = 500 bits, rounded up to 512 bits = 64 bytes.
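The 64-byte calculation can be reproduced with integer arithmetic (the `min_frame_bits` helper is hypothetical):

```python
def min_frame_bits(bit_rate_mbps, round_trip_us):
    """Smallest frame (in bits) whose transmission outlasts the round trip,
    computed exactly in nanoseconds to avoid float rounding."""
    bit_time_ns = 1000 // bit_rate_mbps      # 100 ns per bit at 10 Mbps
    round_trip_ns = round_trip_us * 1000     # 50 us -> 50,000 ns
    return -(-round_trip_ns // bit_time_ns)  # ceiling division

bits = min_frame_bits(10, 50)
print(bits)  # 500 bits; the standard rounds this up to 512 bits = 64 bytes
```

The round-up from 500 to 512 bits (a power of two) gives the familiar 51.2-μsec slot time used by the backoff algorithm.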
5. Checksum
– It is a 32-bit CRC, defined by a standard generator polynomial.
– CRC is an error detecting code that is used to determine if the bits of the frame
have been received correctly.
– It just does error detection, with the frame dropped if an error is detected.
Reason for having a minimum length frame?
• Another (and more important) reason for having a minimum
length frame is to prevent a station from completing the
transmission of a short frame before the first bit has even reached
the far end of the cable, where it may collide with another frame.
It is illustrated in Fig.
• At time 0, station A, at one end of the network, sends off a
frame. Let us call the propagation time for this frame to reach the
other end τ. Just before the frame gets to the other end (i.e., at
time τ − ε), the most distant station, B, starts transmitting.
• When B detects that it is receiving more power than it is putting
out, it knows that a collision has occurred, so it aborts its
transmission and generates a 48-bit noise burst to warn all other
stations.
• In other words, it jams the ether to make sure the sender does not
miss the collision.
• At about time 2τ, the sender sees the noise burst and aborts its
transmission, too. It then waits a random time before trying
again.
• If a station tries to transmit a very short frame, it is conceivable
that a collision will occur, but the transmission will have
completed before the noise burst gets back to the station at 2τ.
• The sender will then incorrectly conclude that the frame was
successfully sent. To prevent this situation from occurring, all
frames must take more than 2τ to send so that the transmission
is still taking place when the noise burst gets back to the sender.
CSMA/CD with Binary Exponential Backoff
• Classic Ethernet uses the 1-persistent CSMA/CD algorithm.
• It means that stations sense the medium when they have a frame to send and send the
frame as soon as the medium becomes idle.
• They monitor the channel for collisions as they send. If there is a collision, they abort
the transmission with a short jam signal and retransmit after a random interval.
• In this method a random interval is determined when a collision occurs. The model is
shown in below figure.
• After a collision, time is divided into discrete slots whose length is equal to the worst-
case roundtrip propagation time on the ether (2τ). To accommodate the longest path
allowed by Ethernet, the slot time has been set to 512 bit times, or 51.2 μsec.
• After the first collision, each station waits either 0 or 1 slot times at random before trying
again. If two stations collide and each one picks the same random number, they will
collide again.
• After the second collision, each one picks either 0, 1, 2, or 3 at random and waits that number
of slot times.
• If a third collision occurs (the probability of this happening is 0.25), the next time the number
of slots to wait is chosen at random from the interval 0 to 2^3 − 1.
• In general, after i collisions, a random number between 0 and 2^i − 1 is chosen, and that
number of slots is skipped. However, after 10 collisions have been reached, the randomization
interval is frozen at a maximum of 1023 slots. After 16 collisions, the controller throws in the
towel and reports failure back to the computer. Further recovery is up to higher layers.
• This algorithm, called binary exponential backoff, was chosen to dynamically adapt to the
number of stations trying to send. If the randomization interval for all collisions were 1023,
the chance of two stations colliding for a second time would be negligible, but the average
wait after a collision would be hundreds of slot times, introducing significant delay.
• On the other hand, if each station always delayed for either 0 or 1 slots, then if 100 stations
ever tried to send at once they would collide over and over until 99 of them picked 1 and the
remaining station picked 0. This might take years.
• By having the randomization interval grow exponentially as more and more consecutive
collisions occur, the algorithm ensures a low delay when only a few stations collide but also
ensures that the collisions are resolved in a reasonable interval when many stations collide.
Truncating the backoff at 1023 keeps the bound from growing too large.
• If there is no collision, the sender assumes that the frame was probably successfully
delivered.
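The backoff rule above can be sketched as follows (the `backoff_slots` helper is illustrative; the random source is injectable so the behavior can be checked deterministically):

```python
import random

def backoff_slots(collisions, rng=random.randrange):
    """Slot times to wait after the given number of consecutive collisions,
    per binary exponential backoff: interval doubles each collision, is
    frozen at 1023 slots after 10 collisions, and gives up after 16."""
    if collisions >= 16:
        raise RuntimeError("excessive collisions: report failure to the host")
    k = min(collisions, 10)      # randomization interval frozen after 10
    return rng(2 ** k)           # uniform draw from 0 .. 2^k - 1

# After the 3rd collision, the wait is drawn from 0..7 slots of 51.2 us each.
```

The doubling interval is what lets the algorithm adapt: light contention resolves in a slot or two, while heavy contention spreads retries over up to 1023 slots.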
Figure 802.3 MAC frame

Figure Minimum and maximum lengths


Figure Example of an Ethernet address in hexadecimal notation

Figure : Unicast and multicast addresses

The least significant bit of the first byte defines the type of address.
If the bit is 0, the address is unicast;
otherwise, it is multicast.
The broadcast destination address is a special case of the multicast address in
which all bits are 1s.
Example

Define the type of the following destination addresses:


a. 4A:30:10:21:10:1A b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF
Solution
To find the type of the address, we need to look at the
second hexadecimal digit from the left. If it is even, the
address is unicast. If it is odd, the address is multicast. If
all digits are F’s, the address is broadcast. Therefore, we
have the following:
a. This is a unicast address because A in binary is 1010.
b. This is a multicast address because 7 in binary is 0111.
c. This is a broadcast address because all digits are F’s.
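The same classification can be sketched in code (the `address_type` helper is hypothetical; the test on the least significant bit of the first byte follows the rule stated above):

```python
def address_type(mac):
    """Classify an Ethernet destination address written as hex pairs."""
    octets = [int(b, 16) for b in mac.split(':')]
    if all(b == 0xFF for b in octets):
        return 'broadcast'                 # all 1s: special multicast case
    # Least significant bit of the first byte: 0 = unicast, 1 = multicast.
    return 'multicast' if octets[0] & 1 else 'unicast'

print(address_type('4A:30:10:21:10:1A'))  # unicast   (0x4A ends in ...1010)
print(address_type('47:20:1B:2E:08:EE'))  # multicast (0x47 ends in ...0111)
print(address_type('FF:FF:FF:FF:FF:FF'))  # broadcast
```

Checking the LSB of the first byte is equivalent to the "even/odd second hex digit" rule in the worked solution.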
Example

Show how the address 47:20:1B:2E:08:EE is sent out on the line.

Solution
The address is sent left-to-right, byte by byte; within each byte,
the bits are sent right-to-left (least significant bit first):
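Since the original figure is missing, the byte-by-byte, LSB-first ordering can be sketched instead (the `wire_bit_order` helper is hypothetical):

```python
def wire_bit_order(mac):
    """Bits as they appear on the line: bytes left to right, and within
    each byte the least significant bit is transmitted first."""
    out = []
    for byte_str in mac.split(':'):
        b = int(byte_str, 16)
        # Emit bit 0 first, then bit 1, ... up to bit 7 (LSB-first).
        out.append(''.join(str((b >> i) & 1) for i in range(8)))
    return ' '.join(out)

print(wire_bit_order('47:20:1B:2E:08:EE'))
# 11100010 00000100 11011000 01110100 00010000 01110111
```

Note how the first bit on the wire is the LSB of 0x47, which is 1 — exactly the bit the receiver inspects to recognize a multicast destination.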

Data Link Layer Switching

• Uses of bridges
• Learning bridges
• Spanning tree bridges
• Repeaters, hubs, bridges, switches,
routers, and gateways
• Virtual LANs
DATA LINK LAYER SWITCHING
• Many organizations have multiple LANs, and it would be convenient to join the
LANs together to make a larger LAN.

• These connections are made with devices called bridges. Ethernet switch is
a modern name for a bridge; switches provide functionality that goes beyond classic
Ethernet and Ethernet hubs, making it easy to join multiple LANs into a larger and faster
network.

• The terms “bridge” and “switch” are used interchangeably.

• Bridges operate in the data link layer; they examine the data link layer addresses to
forward frames. Since they do not examine the payload field of the frames they
forward, they can handle IP packets just as well as any other kind of packet.

• In contrast, routers examine the addresses in packets and route based on them. They
only work with the protocols that they are designed to handle.
Uses of Bridges
• Some common situations in which bridges are used.
• The three basic reasons:
– First, many university and corporate departments have their own LANs to connect their
own personal computers, servers, and devices such as printers. Since the goals of the
various departments differ, different departments may set up different LANs, without
regard to what other departments are doing. There is a need for interaction, so bridges are
needed. In this example, multiple LANs come into existence due to the autonomy of their
owners.
– Second, the organization may be geographically spread over several buildings separated
by considerable distances. It may be cheaper to have separate LANs in each building and
connect them with bridges and a few long-distance fiber optic links than to run all the
cables to a single central switch. Even if laying the cables is easy to do, there are limits
on their lengths (e.g., 200 m for twisted-pair gigabit Ethernet). The network would not
work for longer cables due to the excessive signal attenuation or round-trip delay. The
only solution is to partition the LAN and install bridges to join the pieces to increase the
total physical distance that can be covered.
– Third, it may be necessary to split what is logically a single LAN into separate LANs
(connected by bridges) to accommodate the load. At many large universities, for
example, thousands of workstations are available for student and faculty computing.
Companies may also have thousands of employees. The scale of this system precludes
putting all the workstations on a single LAN—there are more computers than ports on
any Ethernet hub and more stations than allowed on a single classic Ethernet.
• All of the stations share the same, fixed amount of bandwidth. The more stations there are,
the less average bandwidth per station.
• Two separate LANs have twice the capacity of a single LAN. Bridges let the LANs be joined
together while keeping this capacity: each LAN can run at full speed. This behavior also
increases reliability.
• By deciding what to forward and what not to forward, bridges act like fire doors in a
building, preventing a single node that has gone berserk from bringing down the entire
system.
• Ideally bridges are completely transparent. It should be possible to go out and buy bridges,
plug the LAN cables into the bridges, and have everything work perfectly, instantly.
• There should be no hardware changes required, no software changes required, no setting of
address switches, no downloading of routing tables or parameters, nothing at all. Just plug in
the cables and walk away.
• Further, the operation of the existing LANs should not be affected by the bridges at all. As far
as the stations are concerned, there should be no observable difference whether or not they
are part of a bridged LAN.
• It should be as easy to move stations around the bridged LAN as it is to move them around a
single LAN.
• It is in fact possible to create bridges that are transparent by using two algorithms:
– a backward learning algorithm to stop traffic being sent where it is not needed; and
– a spanning tree algorithm to break loops that may be formed when switches are cabled
together.
LEARNING BRIDGES

(a) A bridge connecting two multidrop LANs. (b) Bridges (and a hub) connecting seven
point-to-point stations. (c) Protocol processing at a bridge.
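The backward-learning forwarding decision can be sketched as follows (the `bridge_forward` helper is hypothetical; a real bridge also ages out table entries):

```python
def bridge_forward(table, src, dst, in_port, all_ports):
    """Backward learning: record which port the source arrived on, then
    forward to the destination's known port, flood if unknown, or
    discard if source and destination share a port."""
    table[src] = in_port                 # learn/refresh the source's port
    out = table.get(dst)
    if out == in_port:
        return []                        # same segment: filter (discard)
    if out is None:
        return [p for p in all_ports if p != in_port]  # unknown: flood
    return [out]                         # known: forward to one port

table = {}
print(bridge_forward(table, 'A', 'B', 1, [1, 2, 3]))  # [2, 3] (B unknown: flood)
print(bridge_forward(table, 'B', 'A', 2, [1, 2, 3]))  # [1]    (A learned on port 1)
```

Flooding when the destination is unknown is what makes the bridge transparent: no configuration is needed, and the table fills in as stations talk.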
SPANNING TREE BRIDGES

(a) Bridges with two parallel links. (b) A spanning tree connecting five bridges. The dotted
lines are links that are not part of the spanning tree.
Repeaters, Hubs, Bridges, Switches, Routers, and
Gateways

Which device is in which layer. Frames, packets, and headers.


Repeaters:
These are analog devices that work with signals on the cables to which they are connected.
A signal appearing on one cable is cleaned up, amplified, and put out on another cable.
Repeaters do not understand frames, packets, or headers. They understand the symbols that
encode bits as volts.
 Classic Ethernet, for example, was designed to allow four repeaters that would boost the signal
to extend the maximum cable length from 500 meters to 2500 meters.
Hubs:
A hub has a number of input lines that it joins electrically.
Frames arriving on any of the lines are sent out on all the others. If two frames arrive at the
same time, they will collide, just as on a coaxial cable.
All the lines coming into a hub must operate at the same speed.
Hubs differ from repeaters in that they do not (usually) amplify the incoming signals and are
designed for multiple input lines, but the differences are slight.
Like repeaters, hubs are physical layer devices that do not examine the link layer addresses or
use them in any way.
Types of Hub
Active Hub: a hub that has its own power supply and can clean, boost, and relay the signal
along the network. It serves both as a repeater and as a wiring center, and can be
used to extend the maximum distance between nodes.
Passive Hub: a hub that collects wiring from nodes and draws power from an active hub.
Passive hubs relay signals onto the network without cleaning or boosting them and
cannot be used to extend the distance between nodes.
