Dcn data link_layer
Page 1: Dcn data link_layer

DATA COMMUNICATION NETWORK

Mangal Das

Assistant Professor

Electronics And Communication

Department

1

Page 2: Dcn data link_layer

2

Page 3: Dcn data link_layer

TYPES OF TRANSMISSION MEDIA

3

Page 4: Dcn data link_layer

PHYSICAL LAYER: TRANSMISSION MEDIA

The physical layer is the layer that actually interacts with

the transmission media, the physical part of the network

that connects network components together. This layer

is involved in physically carrying information from one

node in the network to the next.

The physical layer has complex tasks to perform. One

major task is to provide services for the data link layer.

The data in the data link layer consists of 0s and 1s

organized into frames that are ready to be sent across

the transmission medium. This stream of 0s and 1s must first be converted into another entity: signals. One of the services provided by the physical layer is to create a signal that represents this stream of bits.

4

Page 5: Dcn data link_layer

PHYSICAL LAYER: TRANSMISSION MEDIA

The physical layer must also take care of the

physical network, the transmission medium. The

transmission medium is a passive entity; it has no

internal program or logic for control like other

layers. The transmission medium must be

controlled by the physical layer. The physical layer

decides on the directions of data flow. The physical

layer decides on the number of logical channels for

transporting data coming from different sources.

5

Page 6: Dcn data link_layer

GUIDED MEDIA

Guided media, which are those that provide a

conduit from one device to another, include twisted-

pair cable, coaxial cable, and fiber-optic cable. A

signal traveling along any of these media is directed

and contained by the physical limits of the medium.

Twisted-pair and coaxial cable use metallic (copper)

conductors that accept and transport signals in the

form of electric current. Optical fiber is a cable that

accepts and transports signals in the form of light.

6

Page 7: Dcn data link_layer

TWISTED-PAIR CABLE

A twisted pair consists of two conductors (normally copper), each with its own plastic insulation, twisted together, as shown in the figure.

One of the wires is used to carry signals to the receiver, and the other is used only as a ground reference. The receiver uses the difference between the two.

In addition to the signal sent by the sender on one of the wires, interference (noise) and crosstalk may affect both wires and create unwanted signals.

7

Page 8: Dcn data link_layer

TWISTED-PAIR CABLE

If the two wires are parallel, the effect of these unwanted

signals is not the same in both wires because they are

at different locations relative to the noise or crosstalk

sources (e.g., one is closer and the other is farther). This

results in a difference at the receiver. By twisting the

pairs, a balance is maintained. For example, suppose in

one twist, one wire is closer to the noise source and the

other is farther; in the next twist, the reverse is true.

Twisting makes it probable that both wires are equally

affected by external influences (noise or crosstalk). This

means that the receiver, which calculates the difference

between the two, receives no unwanted signals.

8

Page 9: Dcn data link_layer

COAXIAL CABLE

Coaxial cable (or coax) carries signals of higher frequency ranges than those in twisted-pair cable, in part because the two media are constructed quite differently. Instead of having two wires, coax has a central core conductor of solid or stranded wire (usually copper) enclosed in an insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid, or a combination of the two. The outer metallic wrapping serves both as a shield against noise and as the second conductor, which completes the circuit. This outer conductor is also enclosed in an insulating sheath, and the whole cable is protected by a plastic cover.

9

Page 10: Dcn data link_layer

COAXIAL CABLE

Coaxial cables are categorized by their radio

government (RG) ratings. Each RG number

denotes a unique set of physical specifications,

including the wire gauge of the inner conductor, the

thickness and type of the inner insulator, the

construction of the shield, and the size and type of

the outer casing.

10

Page 11: Dcn data link_layer

COAXIAL CABLE

11

Page 12: Dcn data link_layer

FIBER-OPTIC CABLE

A fiber-optic cable is made of glass or plastic and

transmits signals in the form of light.

Optical fibers use reflection to guide light through a

channel. A glass or plastic core is surrounded by a

cladding of less dense glass or plastic. The

difference in density of the two materials must be

such that a beam of light moving through the core is

reflected off the cladding instead of being refracted

into it.

12

Page 13: Dcn data link_layer

FIBER-OPTIC CABLE

13

Page 14: Dcn data link_layer

FIBER-OPTIC CABLE

Current technology supports two modes (multimode and single mode) for propagating light along optical channels, each requiring fiber with different physical characteristics. Multimode can be implemented in two forms: step-index or graded-index.

Multimode: Multimode is so named because multiple beams from a light source move through the core in different paths. How these beams move within the cable depends on the structure of the core.

Single mode: Single-mode uses step-index fiber and a highly focused source of light that limits beams to a small range of angles, all close to the horizontal. The single-mode fiber itself is manufactured with a much smaller diameter than that of multimode fiber, and with substantially lower density (index of refraction).

14

Page 15: Dcn data link_layer

ADVANTAGES OF OPTICAL FIBER

Higher bandwidth. Fiber-optic cable can support dramatically higher bandwidths (and hence data rates) than either twisted-pair or coaxial cable. Currently, data rates and bandwidth utilization over fiber-optic cable are limited not by the medium but by the signal generation and reception technology available.

Less signal attenuation. Fiber-optic transmission distance is significantly greater than that of other guided media. A signal can run for 50 km without requiring regeneration. We need repeaters every 5 km for coaxial or twisted-pair cable.

15

Page 16: Dcn data link_layer

ADVANTAGES OF OPTICAL FIBER

Immunity to electromagnetic interference.

Electromagnetic noise cannot affect fiber-optic

cables.

Resistance to corrosive materials. Glass is more

resistant to corrosive materials than copper.

Light weight. Fiber-optic cables are much lighter

than copper cables.

Greater immunity to tapping. Fiber-optic cables are

more immune to tapping than copper cables.

Copper cables create antenna effects that can easily be tapped.

16

Page 17: Dcn data link_layer

DISADVANTAGES OF OPTICAL FIBER

Installation and maintenance. Fiber-optic cable is a

relatively new technology. Its installation and

maintenance require expertise that is not yet

available everywhere.

Unidirectional light propagation. Propagation of light

is unidirectional. If we need bidirectional

communication, two fibers are needed.

Cost. The cable and the interfaces are relatively

more expensive than those of other guided media.

If the demand for bandwidth is not high, often the use of optical fiber cannot be justified.

17

Page 18: Dcn data link_layer

UNGUIDED MEDIA: WIRELESS

Unguided media transport electromagnetic waves without using a physical conductor. This type of communication is often referred to as wireless communication. Signals are normally broadcast through free space and thus are available to anyone who has a device capable of receiving them.

The part of the electromagnetic spectrum ranging from 3 kHz to 900 THz is used for wireless communication.

Unguided signals can travel from the source to destination in several ways: ground propagation, sky propagation, and line-of-sight propagation.

18

Page 19: Dcn data link_layer

UNGUIDED MEDIA: WIRELESS

19

Page 20: Dcn data link_layer

SWITCHING

A network is a set of connected devices. Whenever we have multiple devices, we have the problem of how to connect them to make one-to-one communication possible.

One solution is to make a point-to-point connection between each pair of devices (a mesh topology) or between a central device and every other device (a star topology). These methods, however, are impractical and wasteful when applied to very large networks.

The number and length of the links require too much infrastructure to be cost-efficient, and the majority of those links would be idle most of the time. Other topologies employing multipoint connections, such as a bus, are ruled out because the distances between devices and the total number of devices increase beyond the capacities of the media and equipment.

20

Page 21: Dcn data link_layer

SWITCHING

A better solution is switching. A switched network

consists of a series of interlinked nodes, called

switches. Switches are devices capable of creating

temporary connections between two or more

devices linked to the switch. In a switched network,

some of these nodes are connected to the end

systems (computers or telephones, for example).

Others are used only for routing.

21

Page 22: Dcn data link_layer

METHODS OF SWITCHING

Traditionally, three methods of switching have been

important: circuit switching, packet switching, and

message switching.

The first two are commonly used today. The third

has been phased out in general communications

but still has networking applications.

We can divide today's networks into three broad categories: circuit-switched networks, packet-switched networks, and message-switched networks. Packet-switched networks can further be divided into two subcategories: virtual-circuit networks and datagram networks.

22

Page 23: Dcn data link_layer

CIRCUIT-SWITCHED NETWORKS

A circuit-switched network consists of a set of

switches connected by physical links.

A connection between two stations is a dedicated

path made of one or more links. However, each

connection uses only one dedicated channel on

each link. Each link is normally divided into n

channels by using FDM or TDM.

Figure shows a trivial circuit-switched network with four switches and four links. Each link is divided into n (n is 3 in the figure) channels by using FDM or TDM.

23

Page 24: Dcn data link_layer

CIRCUIT-SWITCHED NETWORKS

24

Page 25: Dcn data link_layer

CIRCUIT-SWITCHED NETWORKS

We have explicitly shown the multiplexing symbols to emphasize the division of the link into channels even though multiplexing can be implicitly included in the switch fabric.

The end systems, such as computers or telephones, are directly connected to a switch. We have shown only two end systems for simplicity. When end system A needs to communicate with end system M, system A needs to request a connection to M that must be accepted by all switches as well as by M itself. This is called the setup phase.

A circuit (channel) is reserved on each link, and the combination of circuits or channels defines the dedicated path. After the dedicated path made of connected circuits (channels) is established, data transfer can take place. After all data have been transferred, the circuits are torn down.

25

Page 26: Dcn data link_layer

CIRCUIT-SWITCHED NETWORKS

Circuit switching takes place at the physical layer.

Before starting communication, the stations must make a reservation for the resources to be used during the communication. These resources, such as channels (bandwidth in FDM and time slots in TDM), switch buffers, switch processing time, and switch input/output ports, must remain dedicated during the entire duration of data transfer until the teardown phase.

Data transferred between the two stations are not packetized (physical layer transfer of the signal). The data are a continuous flow sent by the source station and received by the destination station, although there may be periods of silence.

There is no addressing involved during data transfer. The switches route the data based on their occupied band (FDM) or time slot (TDM). Of course, there is end-to-end addressing used during the setup phase.

26

Page 27: Dcn data link_layer

DATAGRAM NETWORKS

In data communications, we need to send messages from one end system to another. If the message is going to pass through a packet-switched network, it needs to be divided into packets of fixed or variable size. The size of the packet is determined by the network and the governing protocol.

In packet switching, there is no resource allocation for a packet. This means that there is no reserved bandwidth on the links, and there is no scheduled processing time for each packet.

Resources are allocated on demand. The allocation is done on a first-come, first-served basis. When a switch receives a packet, no matter what is the source or destination, the packet must wait if there are other packets being processed. As with other systems in our daily life, this lack of reservation may create delay. For example, if we do not have a reservation at a restaurant, we might have to wait.

27

Page 28: Dcn data link_layer

DATAGRAM NETWORKS

In a datagram network, each packet is treated independently of all others. Even if a packet is part of a multipacket transmission, the network treats it as though it existed alone. Packets in this approach are referred to as datagrams.

Datagram switching is normally done at the network layer.

Figure shows how the datagram approach is used to deliver four packets from station A to station X. The switches in a datagram network are traditionally referred to as routers. That is why we use a different symbol for the switches in the figure.

28

Page 29: Dcn data link_layer

DATAGRAM NETWORKS

29

Page 30: Dcn data link_layer

DATAGRAM NETWORKS

In this example, all four packets (or datagrams) belong to the same message, but may travel different paths to reach their destination. This is so because the links may be involved in carrying packets from other sources and do not have the necessary bandwidth available to carry all the packets from A to X.

This approach can cause the datagrams of a transmission to arrive at their destination out of order with different delays between the packets. Packets may also be lost or dropped because of a lack of resources.

In most protocols, it is the responsibility of an upper-layer protocol to reorder the datagrams or ask for lost datagrams before passing them on to the application.

30

Page 31: Dcn data link_layer

DATAGRAM NETWORKS

The datagram networks are sometimes referred to as connectionless networks. The term connectionless here means that the switch (packet switch) does not keep information about the connection state. There are no setup or teardown phases. Each packet is treated the same by a switch regardless of its source or destination.

If there are no setup or teardown phases, how are the packets routed to their destinations in a datagram network? In this type of network, each switch (or packet switch) has a routing table which is based on the destination address. The routing tables are dynamic and are updated periodically. The destination addresses and the corresponding forwarding output ports are recorded in the tables.

31

Page 32: Dcn data link_layer

DATAGRAM NETWORKS

Every packet in a datagram network carries a header that contains, among other information, the destination address of the packet. When the switch receives the packet, this destination address is examined; the routing table is consulted to find the corresponding port through which the packet should be forwarded. This destination address, unlike the address in a virtual-circuit-switched network, remains the same during the entire journey of the packet.

Switching in the Internet is done by using the datagram approach to packet switching at the network layer.

32

Destination Address    Output Port
1200                   1
......                 ...
1250                   5
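The forwarding step described above amounts to a simple table lookup. The following Python sketch is only an illustration of that idea; the addresses, ports, and field names are invented, not part of the lecture:

```python
# Hypothetical routing table of one packet switch: destination address -> output port.
routing_table = {
    1200: 1,
    1250: 5,
}

def forward(packet: dict) -> int:
    """Return the output port for a datagram, based only on its destination address."""
    dest = packet["destination"]          # address carried in every datagram header
    if dest not in routing_table:
        raise KeyError(f"no route for destination {dest}")
    return routing_table[dest]            # port through which the packet is forwarded

print(forward({"destination": 1200, "payload": b"hello"}))   # -> 1
```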

Page 33: Dcn data link_layer

VIRTUAL-CIRCUIT NETWORKS

A virtual-circuit network is a cross between a

circuit-switched network and a datagram

network.

As in a circuit-switched network, there are setup

and teardown phases in addition to the data

transfer phase.

Resources can be allocated during the setup

phase, as in a circuit-switched network, or on

demand, as in a datagram network.

33

Page 34: Dcn data link_layer

VIRTUAL-CIRCUIT NETWORKS

As in a datagram network, data are packetized and

each packet carries an address in the header.

However, the address in the header has local

jurisdiction (it defines what should be the next

switch and the channel on which the packet is

being carried), not end-to-end jurisdiction. The

reader may ask how the intermediate switches

know where to send the packet if there is no final

destination address carried by a packet.

As in a circuit-switched network, all packets follow

the same path established during the connection.

34

Page 35: Dcn data link_layer

VIRTUAL-CIRCUIT NETWORKS

A virtual-circuit network is normally implemented in the data link layer, while a circuit-switched network is implemented in the physical layer and a datagram network in the network layer. But this may change in the future.

The network has switches that allow traffic from sources to destinations. A source or destination can be a computer, packet switch, bridge, or any other device that connects other networks.

35

Page 36: Dcn data link_layer

36

Page 37: Dcn data link_layer

VIRTUAL-CIRCUIT NETWORKS :

ADDRESSING

In a virtual-circuit network, two types of addressing are involved: global and local (virtual-circuit identifier).

Global Addressing: A source or a destination needs to have a global address, an address that can be unique in the scope of the network or internationally if the network is part of an international network.

Virtual-Circuit Identifier: The identifier that is actually used for data transfer is called the virtual-circuit identifier (VCI). A VCI, unlike a global address, is a small number that has only switch scope; it is used by a frame between two switches. When a frame arrives at a switch, it has a VCI; when it leaves, it has a different VCI. Figure shows how the VCI in a data frame changes from one switch to another. Note that a VCI does not need to be a large number since each switch can use its own unique set of VCIs.

37
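The per-switch VCI swap can be pictured as a small table lookup. The sketch below is purely illustrative (the port numbers and VCI values are invented); it shows a switch replacing the incoming VCI with the outgoing one before forwarding the frame:

```python
# Hypothetical VCI table of one switch:
# (incoming port, incoming VCI) -> (outgoing port, outgoing VCI).
vci_table = {
    (1, 14): (3, 66),
    (2, 71): (4, 22),
}

def switch_frame(in_port: int, frame: dict) -> tuple[int, dict]:
    """Swap the frame's VCI and return the output port plus the updated frame."""
    out_port, out_vci = vci_table[(in_port, frame["vci"])]
    frame = dict(frame, vci=out_vci)      # the frame leaves with a different VCI
    return out_port, frame

port, out = switch_frame(1, {"vci": 14, "payload": b"data"})
print(port, out["vci"])                   # -> 3 66
```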

Page 38: Dcn data link_layer

VIRTUAL-CIRCUIT IDENTIFIER

38

Page 39: Dcn data link_layer

DATA LINK LAYER

The data link layer transforms the physical layer, a

raw transmission facility, to a link responsible for

node-to-node (hop-to-hop) communication.

Specific responsibilities of the data link layer

include framing, addressing, flow control, error

control, and media access control.

39

Page 40: Dcn data link_layer

WORK OF DATA LINK LAYER

The data link layer divides the stream of bits

received from the network layer into manageable

data units called frames.

The data link layer adds a header to the frame to

define the addresses of the sender and receiver of

the frame.

If the rate at which the data are absorbed by the

receiver is less than the rate at which data are

produced in the sender, the data link layer imposes

a flow control mechanism to avoid overwhelming the receiver.

40

Page 41: Dcn data link_layer

WORK OF DATA LINK LAYER

The data link layer also adds reliability to the

physical layer by adding mechanisms to detect and

retransmit damaged, duplicate, or lost frames.

When two or more devices are connected to the

same link, data link layer protocols are necessary to

determine which device has control over the link at

any given time.

41

Page 42: Dcn data link_layer

ERROR DETECTION AND CORRECTION

Types of Errors:

Single-Bit Error: In a single-bit error, only 1 bit in the

data unit has changed.

Burst Error: The term burst error means that 2 or

more bits in the data unit have changed from 1 to 0

or from 0 to 1.

42

Page 43: Dcn data link_layer

REDUNDANCY

The central concept in detecting or correcting

errors is redundancy. To be able to detect or

correct errors, we need to send some extra bits

with our data. These redundant bits are added by

the sender and removed by the receiver. Their

presence allows the receiver to detect or correct

corrupted bits.

43

Page 44: Dcn data link_layer

DETECTION VERSUS CORRECTION

The correction of errors is more difficult than

the detection. In error detection, we are looking

only to see if any error has occurred. The

answer is a simple yes or no. We are not even

interested in the number of errors.

In error correction, we need to know the exact

number of bits that are corrupted and more

importantly, their location in the message. The

number of the errors and the size of the message

are important factors.

44

Page 45: Dcn data link_layer

FORWARD ERROR CORRECTION VERSUS

RETRANSMISSION

There are two main methods of error correction.

Forward error correction is the process in which the

receiver tries to guess the message by using

redundant bits. This is possible, as we see later, if

the number of errors is small.

Correction by retransmission is a technique in

which the receiver detects the occurrence of an

error and asks the sender to resend the message.

Resending is repeated until a message arrives that

the receiver believes is error-free.

45

Page 46: Dcn data link_layer

CODING

Redundancy is achieved through various coding schemes. The sender adds redundant bits through a process that creates a relationship between the redundant bits and the actual data bits.

The receiver checks the relationships between the two sets of bits to detect or correct the errors.

The ratio of redundant bits to the data bits and the robustness of the process are important factors in any coding scheme.

Coding schemes can be divided into two broad categories: block coding and convolution coding.

Coding Techniques: Block Coding, Hamming Code, Cyclic Code, Checksum

46
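As one concrete illustration of redundant bits, the sketch below computes a 16-bit one's-complement checksum (the style used by several Internet protocols). The lecture lists checksums only as one of several coding techniques, so treat this purely as an example, not as the scheme assumed elsewhere in the slides:

```python
def checksum16(data: bytes) -> int:
    """16-bit one's-complement checksum over the data words."""
    if len(data) % 2:
        data += b"\x00"                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry around
    return ~total & 0xFFFF                           # the redundant bits to transmit

message = b"HELLO"
redundant = checksum16(message)                      # sender appends these bits
# Receiver recomputes over the data (same padding) followed by the received
# checksum; a result of 0 means no detectable error.
packet = message + b"\x00" + redundant.to_bytes(2, "big")
assert checksum16(packet) == 0
```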

Page 47: Dcn data link_layer

FRAMING

Framing in the data link layer separates a message from

one source to a destination, or from other messages to

other destinations, by adding a sender address and a

destination address. The destination address defines

where the packet is to go; the sender address helps the

recipient acknowledge the receipt.

Although the whole message could be packed in one

frame, that is not normally done. One reason is that a

frame can be very large, making flow and error control

very inefficient. When a message is carried in one very

large frame, even a single-bit error would require the

retransmission of the whole message. When a message

is divided into smaller frames, a single-bit error affects only that small frame.

47

Page 48: Dcn data link_layer

CHARACTER-ORIENTED PROTOCOLS

In a character-oriented protocol, data to be carried

are 8-bit characters from a coding system such as

ASCII.

The header, which normally carries the source and

destination addresses and other control

information, and the trailer, which carries error

detection or error correction redundant bits, are

also multiples of 8 bits.

To separate one frame from the next, an 8-bit (1-

byte) flag is added at the beginning and the end of

a frame. The flag, composed of protocol dependent

special characters, signals the start or end of a frame.

48

Page 49: Dcn data link_layer

CHARACTER-ORIENTED PROTOCOLS

Character-oriented framing was popular when only

text was exchanged by the data link layers.

The flag could be selected to be any character not

used for text communication.

Character-oriented protocols present a problem in data communications. The universal coding systems in use today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters.

Byte stuffing (or character stuffing) is used for video, audio, and picture transmission.

49

Page 50: Dcn data link_layer

CHARACTER-ORIENTED PROTOCOLS

Byte stuffing is the process of adding 1 extra byte

whenever there is a flag or escape character in the text.

50
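A short Python sketch of byte stuffing as described above: one extra escape byte is inserted before any byte in the data that looks like a flag or an escape character. The flag and escape values used here are just example values; the slides leave them protocol dependent:

```python
FLAG, ESC = 0x7E, 0x7D          # example values; the actual bytes are protocol dependent

def byte_stuff(payload: bytes) -> bytes:
    """Add one extra ESC byte before any flag or escape byte inside the data."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)                              # the stuffed byte
        out.append(b)
    return bytes([FLAG]) + bytes(out) + bytes([FLAG])    # frame delimited by flags

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: drop the delimiting flags and every stuffed ESC byte."""
    out, skip = bytearray(), False
    for b in frame[1:-1]:
        if not skip and b == ESC:
            skip = True                                  # next byte is data, even if it looks like a flag
            continue
        out.append(b)
        skip = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data
```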

Page 51: Dcn data link_layer

BIT-ORIENTED PROTOCOLS

In a bit-oriented protocol, the data section of a

frame is a sequence of bits to be interpreted by the

upper layer as text, graphic, audio, video, and so

on.

However, in addition to headers (and possible

trailers), we still need a delimiter to separate one

frame from the other.

Most protocols use a special 8-bit pattern flag

01111110 as the delimiter to define the beginning

and the end of the frame, as shown in Figure.

51
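Bit-oriented framing relies on bit stuffing so that the flag pattern 01111110 can never appear inside the data: the sender inserts a 0 after every run of five consecutive 1s, and the receiver removes it. A minimal sketch, operating on strings of '0'/'1' for readability:

```python
FLAG = "01111110"               # the special 8-bit delimiter pattern mentioned above

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")     # stuffed bit, removed again by the receiver
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the bit that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0
            continue            # drop the stuffed 0
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip, run = True, 0
    return "".join(out)

data = "0111111111100"                       # runs of 1s that would mimic the flag
framed = FLAG + bit_stuff(data) + FLAG
assert bit_unstuff(framed[8:-8]) == data
```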

Page 52: Dcn data link_layer

BIT-ORIENTED PROTOCOLS

52

Page 53: Dcn data link_layer

FLOW AND ERROR CONTROL

Flow control refers to a set of procedures used to

restrict the amount of data that the sender can send

before waiting for acknowledgment.

Error control in the data link layer is based on

automatic repeat request, which is the

retransmission of data.

The data link layer can combine framing, flow control, and error control to achieve the delivery of data from one node to another; these combined mechanisms are referred to as the protocols.

53

Page 54: Dcn data link_layer

PROTOCOLS

The protocols are normally implemented in software

by using one of the common programming

languages.

Protocols can be divided into those that can be used for noiseless (error-free) channels and those that can be used for noisy (error-creating) channels.

The protocols in the first category cannot be used in

real life, but they serve as a basis for understanding

the protocols of noisy channels.

54

Page 55: Dcn data link_layer

PROTOCOLS

55

Page 56: Dcn data link_layer

SIMPLEST PROTOCOL

No flow or error control.

It is very simple. The sender sends a sequence of

frames without even thinking about the receiver. To

send three frames, three events occur at the sender

site and three events at the receiver site.

Note that the data frames are shown by tilted

boxes; the height of the box defines the

transmission time difference between the first bit

and the last bit in the frame.

56

Page 57: Dcn data link_layer

SIMPLEST PROTOCOL

57

Page 58: Dcn data link_layer

STOP-AND-WAIT PROTOCOL

If data frames arrive at the receiver site faster than they can be processed, the frames must be stored until their use. Normally, the receiver does not have enough storage space, especially if it is receiving data from many sources. This may result in either the discarding of frames or denial of service. To prevent the receiver from becoming overwhelmed with frames, we somehow need to tell the sender to slow down. There must be feedback from the receiver to the sender.

The protocol we discuss now is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame.

We add flow control to our previous protocol.

58

Page 59: Dcn data link_layer

STOP-AND-WAIT PROTOCOL

59

Page 60: Dcn data link_layer

NOISY CHANNELS : STOP-AND-WAIT AUTOMATIC

REPEAT REQUEST

Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) adds a simple error control mechanism to the Stop-and-Wait Protocol.

To detect and correct corrupted frames, we need to add redundancy bits to our data frame.

When the frame arrives at the receiver site, it is checked and, if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the receiver.

Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way to identify a frame. The received frame could be the correct one, or a duplicate, or a frame out of order. The solution is to number the frames. When the receiver receives a data frame that is out of order, this means that frames were either lost or duplicated.

60

Page 61: Dcn data link_layer

STOP-AND-WAIT AUTOMATIC REPEAT

REQUEST

The corrupted and lost frames need to be resent in

this protocol. If the receiver does not respond when

there is an error, how can the sender know which

frame to resend? To remedy this problem, the

sender keeps a copy of the sent frame. At the same

time, it starts a timer. If the timer expires and there

is no ACK for the sent frame, the frame is resent,

the copy is held, and the timer is restarted. Since

the protocol uses the stop-and-wait mechanism,

there is only one specific frame that needs an ACK

even though several copies of the same frame can be in the network.

61

Page 62: Dcn data link_layer

STOP-AND-WAIT AUTOMATIC REPEAT

REQUEST

Since an ACK frame can also be corrupted and lost, it too needs redundancy bits and a sequence number. The ACK frame for this protocol has a sequence number field. In this protocol, the sender simply discards a corrupted ACK frame or ignores an out-of-order one.

Sequence Numbers: In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic.

Acknowledgment Numbers: In Stop-and-Wait ARQ, the acknowledgment number always announces in modulo-2 arithmetic the sequence number of the next frame expected.

62
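A minimal Python sketch of the sender-side bookkeeping described on the last few slides. The timer and the lossy channel are abstracted into callbacks, so this only illustrates the modulo-2 sequence and acknowledgment numbering, not a complete implementation:

```python
class StopAndWaitSender:
    """Stop-and-Wait ARQ sender bookkeeping (channel and timer abstracted away)."""

    def __init__(self, send_fn):
        self.seq = 0               # sequence number of the next frame, modulo 2
        self.send_fn = send_fn     # callable that puts a frame on the (lossy) link
        self.outstanding = None    # copy of the unacknowledged frame, if any

    def send(self, payload):
        frame = {"seq": self.seq, "data": payload}
        self.outstanding = frame   # keep a copy in case it must be resent
        self.send_fn(frame)        # a real sender also (re)starts its timer here

    def on_ack(self, ack_no):
        # The ACK carries the sequence number of the *next* frame expected (modulo 2).
        if self.outstanding is not None and ack_no == (self.seq + 1) % 2:
            self.outstanding = None
            self.seq = (self.seq + 1) % 2      # advance to the next sequence number
        # corrupted or out-of-order ACKs are simply ignored

    def on_timeout(self):
        if self.outstanding is not None:
            self.send_fn(self.outstanding)     # resend the saved copy, restart the timer

sent = []
s = StopAndWaitSender(sent.append)
s.send(b"frame 0"); s.on_ack(1); s.send(b"frame 1")
print([f["seq"] for f in sent])                # -> [0, 1]
```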

Page 63: Dcn data link_layer

STOP-AND-WAIT AUTOMATIC REPEAT REQUEST

63

Page 64: Dcn data link_layer

PIPELINING

In networking and in other areas, a task is often

begun before the previous task has ended.

This is known as pipelining.

There is no pipelining in Stop-and-Wait ARQ

because we need to wait for a frame to reach the

destination and be acknowledged before the next

frame can be sent.

64

Page 65: Dcn data link_layer

ERROR CONTROL TECHNIQUES

65

Page 66: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

To improve the efficiency of transmission (filling the

pipe), multiple frames must be in transition while

waiting for acknowledgment.

In other words, we need to let more than one frame

be outstanding to keep the channel busy while the

sender is waiting for acknowledgment.

In Go-Back-N Automatic Repeat Request, we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.

Sequence Numbers: In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.

66

Page 67: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

Frames from a sending station are numbered

sequentially. However, because we need to include

the sequence number of each frame in the header,

we need to set a limit. If the header of the frame

allows m bits for the sequence number, the

sequence numbers range from 0 to 2m - 1. For

example, if m is 4, the only sequence numbers are

0 through 15 inclusive. However, we can repeat the

sequence. So the sequence numbers are :

0,1,2,3,4,5,6, 7,8,9, 10, 11, 12, 13, 14, 15,0,

1,2,3,4,5,6,7,8,9,10, 11, ...

67

Page 68: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

Sender maintains a list of sequence numbers that it is allowed to send (sender window). The size of the sender's window is at most 2^k - 1. The sender is provided with a buffer equal to the window size.

The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the next frame expected. This also explicitly announces that it is prepared to receive the next N frames, beginning with the number specified. This scheme can be used to acknowledge multiple frames. It could receive frames 2, 3, 4 but withhold the ACK until frame 4 has arrived. By returning an ACK with sequence number 5, it acknowledges frames 2, 3, 4 in one go.

68

Page 69: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

Sliding window algorithm is a method of flow

control for network data transfers. TCP, the

Internet's stream transfer protocol, uses a

sliding window algorithm.

A sliding window algorithm places a buffer

between the application program and the

network data flow.

Sender sliding Window:

69

Page 70: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

The receiver looks for a specific frame (frame 4 as shown in

the figure) to arrive in a specific order. If it receives

any other frame (out of order), it is discarded and it

needs to be resent. However, the receiver window

also slides by one as the specific frame is received

and accepted as shown in the figure. The receiver

acknowledges a frame by sending an ACK frame

that includes the sequence number of the next

frame expected.

70

Page 71: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

71

Page 72: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

72

Page 73: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

73

Page 74: Dcn data link_layer

GO-BACK-N AUTOMATIC REPEAT REQUEST

o The receiver always maintains a window of size 1

as shown in Fig.

Hence, Sliding Window Flow Control :

o Allows transmission of multiple frames

o Assigns each frame a k-bit sequence number

o Range of sequence numbers is [0 … 2^k - 1], i.e., frames are counted modulo 2^k.

74
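A rough Python sketch of the Go-Back-N sender window described over the last few slides: k-bit sequence numbers counted modulo 2^k, at most 2^k - 1 outstanding frames, cumulative ACKs, and a timeout that resends everything from the window base. Timers and the physical link are left out, so this only illustrates the window arithmetic:

```python
class GoBackNSender:
    """Go-Back-N sender window: sequence numbers modulo 2**k, window size 2**k - 1."""

    def __init__(self, k: int, send_fn):
        self.modulus = 2 ** k
        self.window = self.modulus - 1   # maximum number of outstanding frames
        self.base = 0                    # oldest unacknowledged sequence number
        self.next_seq = 0                # sequence number for the next new frame
        self.buffer = {}                 # copies of outstanding frames, keyed by seq
        self.send_fn = send_fn

    def outstanding(self) -> int:
        return (self.next_seq - self.base) % self.modulus

    def send(self, payload) -> bool:
        if self.outstanding() >= self.window:
            return False                 # window full: the sender must wait
        frame = {"seq": self.next_seq, "data": payload}
        self.buffer[self.next_seq] = frame
        self.send_fn(frame)
        self.next_seq = (self.next_seq + 1) % self.modulus
        return True

    def on_ack(self, ack_no: int):
        """Cumulative ACK: ack_no is the sequence number of the next frame expected."""
        acked = (ack_no - self.base) % self.modulus
        if 0 < acked <= self.outstanding():          # ignore ACKs outside the window
            for _ in range(acked):
                self.buffer.pop(self.base)
                self.base = (self.base + 1) % self.modulus   # slide the window forward

    def on_timeout(self):
        """Go back N: resend every outstanding frame, starting from the window base."""
        seq = self.base
        while seq != self.next_seq:
            self.send_fn(self.buffer[seq])
            seq = (seq + 1) % self.modulus
```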

Page 75: Dcn data link_layer

SELECTIVE-REPEAT ARQ

The Selective-Repeat ARQ scheme retransmits only those frames for which NAKs are received or for which the timer has expired, as shown in the figure. This is

the most efficient among the ARQ schemes, but the

sender must be more complex so that it can send

out-of-order frames. The receiver also must have

storage space to store the post-NAK frames and

processing power to reinsert frames in proper

sequence.

75

Page 76: Dcn data link_layer

SELECTIVE-REPEAT ARQ

76

Page 77: Dcn data link_layer

PIGGYBACKING

The three protocols we discussed in this section are

all unidirectional: data frames flow in only one

direction although control information such as ACK

and NAK frames can travel in the other direction. In

real life, data frames are normally flowing in both

directions: from node A to node B and from node B

to node A.

So, instead of sending separate acknowledgement

packets, a portion (a few bits) of the data frames can

be used for acknowledgement. This phenomenon is

known as piggybacking.

The piggybacking helps in better channel utilization.

Further, multi-frame acknowledgement can be

done. 77

Page 78: Dcn data link_layer

HIGH-LEVEL DATA LINK CONTROL

HDLC is a bit-oriented protocol.

It specifies a packetization standard for serial links.

HDLC supports several modes of operation,

including a simple sliding-window mode for reliable

delivery. Since Internet provides retransmission at

higher levels (i.e., TCP), most Internet applications

use HDLC's unreliable delivery mode, Unnumbered

Information.

78

Page 79: Dcn data link_layer

HDLC STATIONS AND CONFIGURATIONS

HDLC specifies the following three types of stations for data link control:

Primary Station

Secondary Station

Combined Station

Primary Station: Within a network using HDLC as its data link protocol, if a configuration is used in which there is a primary station, it is used as the controlling station on the link. It has the responsibility of controlling all other stations on the link (usually secondary stations). A primary issues commands and a secondary issues responses. Besides controlling the link, the primary station is also responsible for the organization of data flow on the link. It also takes care of error recovery at the data link level (layer 2 of the OSI model).

79

Page 80: Dcn data link_layer

HDLC STATIONS AND CONFIGURATIONS

Secondary Station: If the data link protocol being

used is HDLC, and a primary station is present, a

secondary station must also be present on the data

link. The secondary station is under the control of

the primary station. It has no ability, or direct

responsibility for controlling the link. It is only

activated when requested by the primary station. It

only responds to the primary station. The

secondary station's frames are called responses. It

can only send response frames when requested by

the primary station. A primary station maintains a

separate logical link with each secondary station.

80

Page 81: Dcn data link_layer

HDLC STATIONS AND CONFIGURATIONS

Combined Station: A combined station is a

combination of a primary and secondary station. On

the link, all combined stations are able to send and

receive commands and responses without any

permission from any other stations on the link. Each

combined station is in full control of itself, and does

not rely on any other stations on the link. No other

stations can control any combined station. It may issue both commands and responses.

http://nptel.ac.in/courses/106105080/1381

Page 82: Dcn data link_layer

HDLC OPERATIONAL MODES

A mode in HDLC is the relationship between two

devices involved in an exchange; the mode

describes who controls the link. Exchanges over

unbalanced configurations are always conducted in

normal response mode. Exchanges over symmetric

or balanced configurations can be set to a specific mode using a frame designed to deliver the

command. HDLC offers three different modes of

operation. These three modes of operations are:

Normal Response Mode (NRM)

Asynchronous Response Mode (ARM)

Asynchronous Balanced Mode (ABM)

82

Page 83: Dcn data link_layer

HDLC OPERATIONAL MODES

Normal Response Mode : This is the mode in which the primary station initiates transfers to the secondary station. The secondary station can only transmit a response when, and only when, it is instructed to do so by the primary station.

Asynchronous Response Mode: In this mode, the primary station doesn't initiate transfers to the secondary station. In fact, the secondary station does not have to wait to receive explicit permission from the primary station to transfer any frames. Because this mode is asynchronous, the secondary station must wait until it detects an idle channel before it can transfer any frames. This is when the ARM link is operating at half-duplex.

83

Page 84: Dcn data link_layer

HDLC OPERATIONAL MODES

Asynchronous Balanced Mode: This mode is used in

case of combined stations. There is no need for

permission on the part of any station in this mode. This

is because combined stations do not require any sort of

instructions to perform any task on the link.

Normal Response Mode is used most frequently in

multi-point lines, where the primary station controls

the link. Asynchronous Balanced Mode is better for point-to-point links, as it reduces overhead. Asynchronous Response Mode is not used widely today.

The "asynchronous" in both ARM and ABM does not

refer to the format of the data on the link. It refers to the

fact that any given station can transfer frames

without explicit permission or instruction from any

other station. 84

Page 85: Dcn data link_layer

HDLC FRAME STRUCTURE

85

Page 86: Dcn data link_layer

HDLC FRAME STRUCTURE

The Flag field: Every frame on the link must begin and end with a flag sequence field (F). Stations attached to the data link must continually listen for a flag sequence. The flag sequence is an octet looking like 01111110. Flags are continuously transmitted on the link between frames to keep the link active. Two other bit sequences are used in HDLC as signals for the stations on the link.

HDLC is a code-transparent protocol. It does not rely on a specific code for interpretation of line control. This means that a bit at position N in an octet has a specific meaning, regardless of the other bits in the same octet. If an octet has a bit sequence of 01111110 but is not a flag field, HDLC uses a technique called bit stuffing to differentiate this bit sequence from a flag field.

86

Page 87: Dcn data link_layer

HDLC FRAME STRUCTURE

The Address field: The address field (A) identifies

the primary or secondary station's involvement in

the frame transmission or reception. Each station

on the link has a unique address.

The Control field: HDLC uses the control field (C)

to determine how to control the communications

process. This field contains the commands,

responses, and sequence numbers used to

maintain the data flow accountability of the link,

defines the functions of the frame and initiates the

logic to control the movement of traffic between

sending and receiving stations.

87

Page 88: Dcn data link_layer

HDLC FRAME STRUCTURE

The Poll/Final Bit (P/F):The 5th bit position in the

control field is called the poll/final bit, or P/F bit. It

can only be recognized when it is set to 1. If it is set

to 0, it is ignored. The poll/final bit is used to

provide dialogue between the primary station and

secondary station.

The Information field or Data field: This field is

not always present in an HDLC frame. It is only present when the Information Transfer Format is being used in the control field. The information field contains the actual data the sender is transmitting to the receiver in an I-Frame and network management information in a U-Frame.

88

Page 89: Dcn data link_layer

HDLC FRAME STRUCTURE

The Frame Check Sequence field: This field contains 16-bit or 32-bit cyclic redundancy check (CRC) bits.

89

Page 90: Dcn data link_layer

HDLC COMMANDS AND RESPONSES

Information transfer format command and

response (I-Frame) : The function of the

information command and response is to transfer

sequentially numbered frames, each containing an

information field, across the data link.

Supervisory format command and responses

(S-Frame) :Supervisory (S) commands and

responses are used to perform numbered

supervisory functions such as acknowledgment,

polling, temporary suspension of information

transfer, or error recovery. Frames with the S format

control field cannot contain an information field.

90

Page 91: Dcn data link_layer

HDLC COMMANDS AND RESPONSES

Unnumbered Format Commands and responses

(U-Frame) :The unnumbered format commands

and responses are used to extend the number of

data link control functions. The unnumbered format

frames have 5 modifier bits, which allow for up to

32 additional commands and 32 additional

response functions.

91

Page 92: Dcn data link_layer

MULTIPLE ACCESS

In the protocols we described, we assumed that

there is an available dedicated link (or channel)

between the sender and the receiver. This

assumption may or may not be true.

92

Page 93: Dcn data link_layer

MULTIPLE ACCESS

Data link layer can be considered as two

sublayers. The upper sublayer is responsible for

data link control, and the lower sublayer is

responsible for resolving access to the shared

media. If the channel is dedicated, we do not need

the lower sublayer.

The upper sublayer that is responsible for flow and

error control is called the logical link control (LLC)

layer.

The lower sublayer that is mostly responsible for

multiple access resolution is called the media

access control (MAC) layer.

93

Page 94: Dcn data link_layer

MULTIPLE ACCESS

94

Page 95: Dcn data link_layer

RANDOM ACCESS

In random access or contention methods, no station

is superior to another station and none is assigned

the control over another. No station permits, or

does not permit, another station to send.

At each instance, a station that has data to send

uses a procedure defined by the protocol to make a

decision on whether or not to send. This decision

depends on the state of the medium (idle or busy).

95

Page 96: Dcn data link_layer

RANDOM ACCESS

In a random access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict (collision) and the frames will be either destroyed or modified. To avoid access conflict or to resolve it when it happens, each station follows a procedure.

Two features give this method its name.

First, there is no scheduled time for a station to transmit. Transmission is random among the stations. That is why these methods are called random access.

Second, no rules specify which station should send next. Stations compete with one another to access the medium. That is why these methods are also called contention methods.

96

Page 97: Dcn data link_layer

RANDOM ACCESS

The random access methods have evolved from a very interesting protocol known as ALOHA, which used a very simple procedure called multiple access (MA).

The method was improved with the addition of a procedure that forces the station to sense the medium before transmitting. This was called carrier sense multiple access.

Carrier sense multiple access later evolved into two parallel methods:

carrier sense multiple access with collision detection (CSMA/CD) . CSMA/CD tells the station what to do when a collision is detected.

carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CA tries to avoid the collision.

97

Page 98: Dcn data link_layer

ALOHA

ALOHA, the earliest random access method, was

developed at the University of Hawaii in early 1970.

The original ALOHA protocol is called pure ALOHA.

The idea is that each station sends a frame whenever it

has a frame to send. However, since there is only one

channel to share, there is the possibility of collision

between frames from different stations.

Figure shows four stations (unrealistic assumption) that

contend with one another for access to the shared

channel. The figure shows that each station sends two

frames; there are a total of eight frames on the shared

medium. Some of these frames collide because multiple

frames are in contention for the shared channel.

98

Page 99: Dcn data link_layer

ALOHA

99

Page 100: Dcn data link_layer

ALOHA

There are four stations that contend with one

another for access to the shared channel. The

figure shows that each station sends two frames;

there are a total of eight frames on the shared

medium. Some of these frames collide because

multiple frames are in contention for the shared

channel.

Figure shows that only two frames survive: frame

1.1 from station 1 and frame 3.2 from station 3. We

need to mention that even if one bit of a frame

coexists on the channel with one bit from another

frame, there is a collision and both will be destroyed.

100

Page 101: Dcn data link_layer

ALOHA

It is obvious that we need to resend the frames that

have been destroyed during transmission. The pure

ALOHA protocol relies on acknowledgments from

the receiver.

When a station sends a frame, it expects the

receiver to send an acknowledgment. If the

acknowledgment does not arrive after a time-out

period, the station assumes that the frame (or the

acknowledgment) has been destroyed and resends

the frame.

The vulnerable time is the length of time in which there is a possibility of collision.

101

Page 102: Dcn data link_layer

ALOHA

A collision involves two or more stations. If all these

stations try to resend their frames after the time-out,

the frames will collide again.

Pure ALOHA dictates that when the time-out period

passes, each station waits a random amount of

time before resending its frame. The randomness

will help avoid more collisions. We call this time the

back-off time TB.

Pure ALOHA has a second method to prevent

congesting the channel with retransmitted frames.

After a maximum number of retransmission attempts Kmax, a station must give up and try later.

102
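The slides do not fix a formula for the back-off time TB; a common textbook choice is binary exponential back-off, sketched below. Here K is the number of collisions suffered so far and Tfr is a time unit such as the average frame transmission time; these parameter names are assumptions made only for this illustration:

```python
import random

def backoff_time(k: int, t_fr: float, k_max: int = 15):
    """Binary exponential back-off: wait R * Tfr with R drawn from 0 .. 2**k - 1.

    Returns None once the maximum number of attempts (Kmax) is exceeded,
    meaning the station gives up and tries again later.
    """
    if k > k_max:
        return None
    r = random.randint(0, 2 ** k - 1)   # the random range doubles after every collision
    return r * t_fr

print(backoff_time(1, t_fr=0.01))       # 0.0 or 0.01
print(backoff_time(3, t_fr=0.01))       # one of 0.00, 0.01, ..., 0.07
```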

Page 103: Dcn data link_layer

SLOTTED ALOHA

Slotted ALOHA was invented to improve the efficiency of pure ALOHA.

In slotted ALOHA we divide the time into slots of Tfr s and force the station to send only at the beginning of the time slot.

Because a station is allowed to send only at the beginning of the synchronized time slot, if a station misses this moment, it must wait until the beginning of the next time slot.

This means that the station which started at the beginning of this slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at the beginning of the same time slot.

However, the vulnerable time is now reduced to one-half, that is, Tfr.

103
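Since stations may transmit only at slot boundaries, a station that becomes ready in the middle of a slot has to hold its frame until the next boundary. A one-function sketch of that wait (slot length Tfr assumed, slots starting at multiples of Tfr):

```python
import math

def wait_until_next_slot(now: float, t_fr: float) -> float:
    """Time a ready station must wait before the next slot boundary."""
    next_boundary = math.ceil(now / t_fr) * t_fr
    return next_boundary - now

print(wait_until_next_slot(now=2.25, t_fr=0.5))   # -> 0.25 (transmit at t = 2.5)
```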

Page 104: Dcn data link_layer

SLOTTED ALOHA

104

Page 105: Dcn data link_layer

CARRIER SENSE MULTIPLE ACCESS (CSMA)

To minimize the chance of collision and, therefore,

increase the performance, the CSMA method was

developed. The chance of collision can be reduced

if a station senses the medium before trying to use

it. Carrier sense multiple access (CSMA) requires

that each station first listen to the medium (or check

the state of the medium) before sending.

In other words, CSMA is based on the principle

"sense before transmit" or "listen before talk."

105

Page 106: Dcn data link_layer

CARRIER SENSE MULTIPLE ACCESS (CSMA)

The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time (although very short) for the first bit to reach every station and for every station to sense it.

In other words, a station may sense the medium and find it idle, only because the first bit sent by another station has not yet been received. At time t0 station B senses the medium and finds it idle, so it sends a frame.

At time t1 (t1 > t0) station C senses the medium and finds it idle because, at this time, the first bits from station B have not reached station C.

Station C also sends a frame. The two signals collide and both frames are destroyed.

106

Page 107: Dcn data link_layer

CARRIER SENSE MULTIPLE ACCESS (CSMA)

107

Page 108: Dcn data link_layer

CARRIER SENSE MULTIPLE ACCESS (CSMA)

The vulnerable time for CSMA is the propagation time Tp.

This is the time needed for a signal to propagate

from one end of the medium to the other. When a

station sends a frame, and any other station tries to

send a frame during this time, a collision will result.

But if the first bit of the frame reaches the end of

the medium, every station will already have heard

the bit and will refrain from sending.

108
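The propagation time Tp can be estimated from the length of the medium and the propagation speed of the signal in it; the figures below are only an example, not values given in the lecture:

```python
def propagation_time(distance_m: float, speed_m_per_s: float = 2e8) -> float:
    """Tp = distance / propagation speed (roughly 2e8 m/s in copper or fiber)."""
    return distance_m / speed_m_per_s

# Example: a 2 km medium -> Tp = 10 microseconds, the CSMA vulnerable time for it.
print(propagation_time(2_000))   # -> 1e-05 seconds
```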

Page 109: Dcn data link_layer

CARRIER SENSE MULTIPLE ACCESS (CSMA)

109

Page 110: Dcn data link_layer

IEEE STDS.

In 1985, the Computer Society of the IEEE started

a project, called Project 802, to set standards to

enable intercommunication among equipment from

a variety of manufacturers.

The relationship of the 802 Standard to the

traditional OSI model is shown in Figure. The IEEE

has subdivided the data link layer into two

sublayers:

logical link control (LLC) and media access control

(MAC). IEEE has also created several physical

layer standards for different LAN protocols.

110

Page 111: Dcn data link_layer

IEEE STDS FOR LAN:IEEE 802.3

111

Page 112: Dcn data link_layer

IEEE STDS.

The original Ethernet was created in 1976 at

Xerox's Palo Alto Research Center (PARC).

Since then, it has gone through four generations:

Standard Ethernet (10 Mbps), Fast Ethernet (100

Mbps), Gigabit Ethernet (l Gbps), and Ten-Gigabit

Ethernet (l0 Gbps).

112

Page 113: Dcn data link_layer

STANDARD ETHERNET :MAC SUBLAYER

In Standard Ethernet, the MAC sublayer governs

the operation of the access method. It also frames

data received from the upper layer and passes

them to the physical layer.

Frame Format: Ethernet does not provide any

mechanism for acknowledging received frames,

making it what is known as an unreliable medium.

Acknowledgments must be implemented at the

higher layers.

113

Page 114: Dcn data link_layer

FRAME FORMAT

Preamble: It alerts the receiving system to the coming

frame and enables it to synchronize its input timing. The

pattern provides only an alert and a timing pulse.

Start frame delimiter (SFD): The SFD warns the

station or stations that this is the last chance for

synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.

Destination address (DA)

Source address (SA)

Length or type: The original Ethernet used this field as

the type field to define the upper-layer protocol using the

MAC frame. The IEEE standard used it as the length

field to define the number of bytes in the data field. Both uses are common today.

114

Page 115: Dcn data link_layer

STANDARD ETHERNET

Data. This field carries data encapsulated from the

upper-layer protocols

CRC: The last field contains error detection

information, in this case a CRC-32.

Frame Length: Minimum: 64 bytes (512 bits)

Maximum: 1518 bytes (12,144 bits)

Addressing: Each station on an Ethernet network

(such as a PC, workstation, or printer) has its own

network interface card (NIC). The NIC fits inside the

station and provides the station with a 6-byte

physical address.

Example of an Ethernet address in hexadecimal notation: 06:01:02:01:2C:4B

115
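A small sketch that checks the frame-length limits quoted above and renders a 6-byte NIC address in the usual colon-separated hexadecimal notation:

```python
MIN_FRAME, MAX_FRAME = 64, 1518       # bytes, as given above (excluding preamble and SFD)

def frame_length_ok(frame: bytes) -> bool:
    """True if the frame respects the Standard Ethernet minimum/maximum length."""
    return MIN_FRAME <= len(frame) <= MAX_FRAME

def format_mac(address: bytes) -> str:
    """Render a 6-byte physical (NIC) address as colon-separated hexadecimal."""
    return ":".join(f"{b:02X}" for b in address)

print(frame_length_ok(bytes(60)))                                # -> False (too short)
print(format_mac(bytes([0x06, 0x01, 0x02, 0x01, 0x2C, 0x4B])))   # -> 06:01:02:01:2C:4B
```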

Page 116: Dcn data link_layer

STANDARD ETHERNET :PHYSICAL LAYER

All standard implementations use digital signaling

(baseband) at 10 Mbps. At the sender, data are

converted to a digital signal using the Manchester

scheme.

10Base5, thick Ethernet, or Thicknet: The

nickname derives from the size of the cable, which

is roughly the size of a garden hose and too stiff to

bend with your hands. 10Base5 was the first

Ethernet specification to use a bus topology with an

external transceiver connected via a tap to a thick

coaxial cable.

116

Page 117: Dcn data link_layer

10BASE5, 10BASE2 ETHERNET

117

Page 118: Dcn data link_layer

STANDARD ETHERNET :PHYSICAL LAYER

10Base2, Thin Ethernet: It uses a bus topology, but the cable is much thinner and more flexible. The cable can be bent to pass very close to the stations. In this case, the transceiver is normally part of the network interface card (NIC), which is installed inside the station.

10Base-T: Twisted-Pair Ethernet: The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a physical star topology. The stations are connected to a hub via two pairs of twisted cable.

Two pairs of twisted cable create two paths (one for sending and one for receiving) between the station and the hub. Any collision here happens in the hub.

118

Page 119: Dcn data link_layer

STANDARD ETHERNET :PHYSICAL LAYER

10Base-F: Fiber Ethernet: Although there are

several types of optical fiber 10-Mbps Ethernet, the

most common is called 10Base-F. 10Base-F uses a

star topology to connect stations to a hub. The

stations are connected to the hub using two fiber-

optic cables.

119

Page 120: Dcn data link_layer

WIRELESS LANS: IEEE 802.11

The standard defines two kinds of services: the basic service set (BSS) and the extended service set (ESS).

Basic Service Set: A basic service set is made of stationary or mobile wireless stations and an optional central base station, known as the access point (AP).

The BSS without an AP is a stand-alone network and cannot send data to other BSSs. It is called an ad hoc architecture. In this architecture, stations can form a network without the need of an AP; they can locate one another and agree to be part of a BSS. A BSS with an AP is sometimes referred to as an infrastructure network.

120

Page 121: Dcn data link_layer

WIRELESS LANS: IEEE 802.11

121

Page 122: Dcn data link_layer

WIRELESS LANS: IEEE 802.11

Extended Service Set: An extended service set

(ESS) is made up of two or more BSSs with APs. In

this case, the BSSs are connected through a

distribution system, which is usually a wired LAN.

The distribution system connects the APs in the

BSSs.

IEEE 802.11 does not restrict the distribution

system; it can be any IEEE LAN such as an

Ethernet.

The extended service set uses two types of

stations: mobile and stationary. The mobile stations

are normal stations inside a BSS. The stationary

stations are AP stations that are part of a wired LAN.

122

Page 123: Dcn data link_layer

WIRELESS LANS: IEEE 802.11

123

Page 124: Dcn data link_layer

MAC SUBLAYER: IEEE 802.11

IEEE 802.11 defines two MAC sublayers: the distributed coordination function (DCF) and the point coordination function (PCF).

DCF uses CSMA/CA as the access method.

The point coordination function (PCF) is an optional access method that can be implemented in an infrastructure network (not in an ad hoc network). It is implemented on top of the DCF and is used mostly for time-sensitive transmission.

PCF has a centralized, contention-free polling access method. The AP performs polling for stations that are capable of being polled. The stations are polled one after another, sending any data they have to the AP.

124

