
DCN: Data Link Layer


Data Communication Network

Mangal Das
Assistant Professor
Electronics and Communication Department

Types of Transmission Media

Physical Layer: Transmission Media
The physical layer is the layer that actually interacts with the transmission media, the physical part of the network that connects network components together. This layer is involved in physically carrying information from one node in the network to the next. The physical layer has complex tasks to perform. One major task is to provide services for the data link layer. The data in the data link layer consist of 0s and 1s organized into frames that are ready to be sent across the transmission medium. This stream of 0s and 1s must first be converted into another entity: signals. One of the services provided by the physical layer is to create a signal that represents this stream of bits.

The physical layer must also take care of the physical network, the transmission medium. The transmission medium is a passive entity; it has no internal program or logic for control like the other layers. The transmission medium must be controlled by the physical layer, which decides on the direction of data flow and on the number of logical channels for transporting data coming from different sources.

GUIDED MEDIA
Guided media, which are those that provide a conduit from one device to another, include twisted-pair cable, coaxial cable, and fiber-optic cable. A signal traveling along any of these media is directed and contained by the physical limits of the medium. Twisted-pair and coaxial cable use metallic (copper) conductors that accept and transport signals in the form of electric current. Optical fiber is a cable that accepts and transports signals in the form of light.

Twisted-Pair Cable
A twisted pair consists of two conductors (normally copper), each with its own plastic insulation, twisted together, as shown in the figure.

One of the wires is used to carry signals to the receiver, and the other is used only as a ground reference. The receiver uses the difference between the two. In addition to the signal sent by the sender on one of the wires, interference (noise) and crosstalk may affect both wires and create unwanted signals.


If the two wires are parallel, the effect of these unwanted signals is not the same in both wires, because they are at different locations relative to the noise or crosstalk sources (e.g., one is closer and the other is farther). This results in a difference at the receiver. By twisting the pairs, a balance is maintained. For example, suppose in one twist one wire is closer to the noise source and the other is farther; in the next twist, the reverse is true. Twisting makes it probable that both wires are equally affected by external influences (noise or crosstalk). This means that the receiver, which calculates the difference between the two, receives no unwanted signals.

Coaxial Cable
Coaxial cable (or coax) carries signals of higher frequency ranges than twisted-pair cable, in part because the two media are constructed quite differently. Instead of having two wires, coax has a central core conductor of solid or stranded wire (usually copper) enclosed in an insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid, or a combination of the two. The outer metallic wrapping serves both as a shield against noise and as the second conductor, which completes the circuit. This outer conductor is also enclosed in an insulating sheath, and the whole cable is protected by a plastic cover.

Coaxial cables are categorized by their radio government (RG) ratings. Each RG number denotes a unique set of physical specifications, including the wire gauge of the inner conductor, the thickness and type of the inner insulator, the construction of the shield, and the size and type of the outer casing.

Fiber-Optic Cable
A fiber-optic cable is made of glass or plastic and transmits signals in the form of light. Optical fibers use reflection to guide light through a channel. A glass or plastic core is surrounded by a cladding of less dense glass or plastic. The difference in density of the two materials must be such that a beam of light moving through the core is reflected off the cladding instead of being refracted into it.

Fiber-Optic Cable
Current technology supports two modes (multimode and single mode) for propagating light along optical channels, each requiring fiber with different physical characteristics. Multimode can be implemented in two forms: step-index or graded-index.
Multimode: Multimode is so named because multiple beams from a light source move through the core in different paths. How these beams move within the cable depends on the structure of the core.
Single mode: Single mode uses step-index fiber and a highly focused source of light that limits beams to a small range of angles, all close to the horizontal. The single-mode fiber itself is manufactured with a much smaller diameter than that of multimode fiber, and with substantially lower density (index of refraction).

Advantages of Optical Fiber
Higher bandwidth. Fiber-optic cable can support dramatically higher bandwidths (and hence data rates) than either twisted-pair or coaxial cable. Currently, data rates and bandwidth utilization over fiber-optic cable are limited not by the medium but by the signal generation and reception technology available.
Less signal attenuation. Fiber-optic transmission distance is significantly greater than that of other guided media. A signal can run for 50 km without requiring regeneration, whereas coaxial or twisted-pair cable needs repeaters every 5 km.
Immunity to electromagnetic interference. Electromagnetic noise cannot affect fiber-optic cables.
Resistance to corrosive materials. Glass is more resistant to corrosive materials than copper.
Light weight. Fiber-optic cables are much lighter than copper cables.
Greater immunity to tapping. Fiber-optic cables are more immune to tapping than copper cables. Copper cables create antenna effects that can easily be tapped.

Disadvantages of Optical Fiber
Installation and maintenance. Fiber-optic cable is a relatively new technology. Its installation and maintenance require expertise that is not yet available everywhere.
Unidirectional light propagation. Propagation of light is unidirectional. If we need bidirectional communication, two fibers are needed.
Cost. The cable and the interfaces are relatively more expensive than those of other guided media. If the demand for bandwidth is not high, the use of optical fiber often cannot be justified.

UNGUIDED MEDIA: WIRELESS
Unguided media transport electromagnetic waves without using a physical conductor. This type of communication is often referred to as wireless communication. Signals are normally broadcast through free space and thus are available to anyone who has a device capable of receiving them. The part of the electromagnetic spectrum ranging from 3 kHz to 900 THz is used for wireless communication. Unguided signals can travel from the source to the destination in several ways: ground propagation, sky propagation, and line-of-sight propagation.

Switching
A network is a set of connected devices. Whenever we have multiple devices, we have the problem of how to connect them to make one-to-one communication possible. One solution is to make a point-to-point connection between each pair of devices (a mesh topology) or between a central device and every other device (a star topology). These methods, however, are impractical and wasteful when applied to very large networks. The number and length of the links require too much infrastructure to be cost-efficient, and the majority of those links would be idle most of the time. Other topologies employing multipoint connections, such as a bus, are ruled out because the distances between devices and the total number of devices increase beyond the capacities of the media and equipment.

A better solution is switching. A switched network consists of a series of interlinked nodes, called switches. Switches are devices capable of creating temporary connections between two or more devices linked to the switch. In a switched network, some of these nodes are connected to the end systems (computers or telephones, for example). Others are used only for routing.

Methods of Switching
Traditionally, three methods of switching have been important: circuit switching, packet switching, and message switching. The first two are commonly used today. The third has been phased out in general communications but still has networking applications. We can divide today's networks into three broad categories: circuit-switched networks, packet-switched networks, and message-switched networks. Packet-switched networks can further be divided into two subcategories: virtual-circuit networks and datagram networks.

CIRCUIT-SWITCHED NETWORKS
A circuit-switched network consists of a set of switches connected by physical links. A connection between two stations is a dedicated path made of one or more links. However, each connection uses only one dedicated channel on each link. Each link is normally divided into n channels by using FDM or TDM. The figure shows a trivial circuit-switched network with four switches and four links. Each link is divided into n channels (n is 3 in the figure) by using FDM or TDM.

CIRCUIT-SWITCHED NETWORKS
We have explicitly shown the multiplexing symbols to emphasize the division of the link into channels, even though multiplexing can be implicitly included in the switch fabric. The end systems, such as computers or telephones, are directly connected to a switch. We have shown only two end systems for simplicity. When end system A needs to communicate with end system M, system A needs to request a connection to M that must be accepted by all switches as well as by M itself. This is called the setup phase. A circuit (channel) is reserved on each link, and the combination of circuits or channels defines the dedicated path. After the dedicated path made of connected circuits (channels) is established, data transfer can take place. After all data have been transferred, the circuits are torn down.

Circuit switching takes place at the physical layer. Before starting communication, the stations must make a reservation for the resources to be used during the communication. These resources, such as channels (bandwidth in FDM and time slots in TDM), switch buffers, switch processing time, and switch input/output ports, must remain dedicated during the entire duration of data transfer until the teardown phase. Data transferred between the two stations are not packetized (the physical layer transfers the signal). The data are a continuous flow sent by the source station and received by the destination station, although there may be periods of silence. There is no addressing involved during data transfer. The switches route the data based on their occupied band (FDM) or time slot (TDM). Of course, there is end-to-end addressing used during the setup phase.

DATAGRAM NETWORKS
In data communications, we need to send messages from one end system to another. If the message is going to pass through a packet-switched network, it needs to be divided into packets of fixed or variable size. The size of the packet is determined by the network and the governing protocol. In packet switching, there is no resource allocation for a packet. This means that there is no reserved bandwidth on the links, and there is no scheduled processing time for each packet. Resources are allocated on demand. The allocation is done on a first-come, first-served basis. When a switch receives a packet, regardless of its source or destination, the packet must wait if there are other packets being processed. As with other systems in our daily life, this lack of reservation may create delay. For example, if we do not have a reservation at a restaurant, we might have to wait.

In a datagram network, each packet is treated independently of all others. Even if a packet is part of a multipacket transmission, the network treats it as though it existed alone. Packets in this approach are referred to as datagrams. Datagram switching is normally done at the network layer. The figure shows how the datagram approach is used to deliver four packets from station A to station X. The switches in a datagram network are traditionally referred to as routers. That is why we use a different symbol for the switches in the figure.

DATAGRAM NETWORKS
In this example, all four packets (or datagrams) belong to the same message but may travel different paths to reach their destination. This is because the links may be involved in carrying packets from other sources and do not have the necessary bandwidth available to carry all the packets from A to X. This approach can cause the datagrams of a transmission to arrive at their destination out of order, with different delays between the packets. Packets may also be lost or dropped because of a lack of resources. In most protocols, it is the responsibility of an upper-layer protocol to reorder the datagrams or ask for lost datagrams before passing them on to the application.

Datagram networks are sometimes referred to as connectionless networks. The term connectionless here means that the switch (packet switch) does not keep information about the connection state. There are no setup or teardown phases. Each packet is treated the same by a switch regardless of its source or destination. If there are no setup or teardown phases, how are the packets routed to their destinations in a datagram network? In this type of network, each switch (or packet switch) has a routing table that is based on the destination address. The routing tables are dynamic and are updated periodically. The destination addresses and the corresponding forwarding output ports are recorded in the tables.

Every packet in a datagram network carries a header that contains, among other information, the destination address of the packet. When the switch receives the packet, this destination address is examined and the routing table is consulted to find the corresponding port through which the packet should be forwarded. This destination address, unlike the address in a virtual-circuit network, remains the same during the entire journey of the packet. Switching in the Internet is done by using the datagram approach to packet switching at the network layer.

Example routing table:
Destination Address    Output Port
1200                   1
1250                   5
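As a minimal sketch (not from the original slides), datagram forwarding can be pictured as a dictionary lookup keyed by the destination address; the addresses and ports below are only the illustrative values from the table above.

```python
# Minimal sketch of datagram forwarding: the switch keeps a table keyed by
# destination address and looks up the output port for every packet
# independently. Addresses and ports are illustrative values.

routing_table = {
    1200: 1,
    1250: 5,
}

def forward(packet):
    """Return the output port for a packet's destination, or None if unknown."""
    return routing_table.get(packet["destination"])

print(forward({"destination": 1250, "payload": b"..."}))  # -> 5
```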

VIRTUAL-CIRCUIT NETWORKS
A virtual-circuit network is a cross between a circuit-switched network and a datagram network. As in a circuit-switched network, there are setup and teardown phases in addition to the data transfer phase. Resources can be allocated during the setup phase, as in a circuit-switched network, or on demand, as in a datagram network.

As in a datagram network, data are packetized and each packet carries an address in the header. However, the address in the header has local jurisdiction (it defines what the next switch should be and the channel on which the packet is being carried), not end-to-end jurisdiction. The reader may ask how the intermediate switches know where to send the packet if there is no final destination address carried by the packet. As in a circuit-switched network, all packets follow the same path established during the connection.

A virtual-circuit network is normally implemented in the data link layer, while a circuit-switched network is implemented in the physical layer and a datagram network in the network layer, but this may change in the future. The network has switches that allow traffic from sources to destinations. A source or destination can be a computer, packet switch, bridge, or any other device that connects other networks.

VIRTUAL-CIRCUIT NETWORKS: Addressing
In a virtual-circuit network, two types of addressing are involved: global and local (the virtual-circuit identifier).
Global addressing: A source or a destination needs to have a global address, an address that can be unique in the scope of the network, or internationally if the network is part of an international network.
Virtual-circuit identifier: The identifier that is actually used for data transfer is called the virtual-circuit identifier (VCI). A VCI, unlike a global address, is a small number that has only switch scope; it is used by a frame between two switches. When a frame arrives at a switch, it has a VCI; when it leaves, it has a different VCI. The figure shows how the VCI in a data frame changes from one switch to another. Note that a VCI does not need to be a large number, since each switch can use its own unique set of VCIs.
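The hop-by-hop label rewriting can be sketched as a small table kept in each switch. The example below is illustrative only; the ports and VCI values are made up.

```python
# Minimal sketch of virtual-circuit switching: each switch maps an incoming
# (port, VCI) pair to an outgoing (port, VCI) pair, so the VCI in the frame
# changes hop by hop. All table values below are made up for illustration.

switch_table = {
    (1, 14): (3, 22),   # frames arriving on port 1 with VCI 14 leave port 3 with VCI 22
    (2, 71): (4, 41),
}

def switch_frame(in_port, frame):
    out_port, out_vci = switch_table[(in_port, frame["vci"])]
    frame["vci"] = out_vci          # rewrite the label for the next hop
    return out_port, frame

port, frame = switch_frame(1, {"vci": 14, "payload": b"..."})
print(port, frame["vci"])           # -> 3 22
```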

Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, into a link responsible for node-to-node (hop-to-hop) communication. Specific responsibilities of the data link layer include framing, addressing, flow control, error control, and media access control.

Work of the Data Link Layer
The data link layer divides the stream of bits received from the network layer into manageable data units called frames. It adds a header to the frame to define the addresses of the sender and receiver of the frame. If the rate at which the data are absorbed by the receiver is less than the rate at which data are produced by the sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver.

The data link layer also adds reliability to the physical layer by adding mechanisms to detect and retransmit damaged, duplicate, or lost frames. When two or more devices are connected to the same link, data link layer protocols are necessary to determine which device has control over the link at any given time.

Error Detection and Correction
Types of errors:
Single-bit error: In a single-bit error, only 1 bit in the data unit has changed.
Burst error: The term burst error means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
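A quick way to see the difference between the two error types is to compare the sent and received words position by position. The sketch below uses made-up example words.

```python
# Minimal illustration of single-bit vs. burst errors: compare the sent and
# received words and list the positions that changed. The data are made up.

def changed_positions(sent: str, received: str):
    return [i for i, (a, b) in enumerate(zip(sent, received)) if a != b]

sent = "00000010"
print(changed_positions(sent, "00001010"))  # [4]          -> single-bit error
print(changed_positions(sent, "01011000"))  # [1, 3, 4, 6] -> burst error (2 or more bits changed)
```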


Redundancy
The central concept in detecting or correcting errors is redundancy. To be able to detect or correct errors, we need to send some extra bits with our data. These redundant bits are added by the sender and removed by the receiver. Their presence allows the receiver to detect or correct corrupted bits.

Detection Versus Correction
The correction of errors is more difficult than detection. In error detection, we are looking only to see whether any error has occurred; the answer is a simple yes or no. We are not even interested in the number of errors. In error correction, we need to know the exact number of bits that are corrupted and, more importantly, their location in the message. The number of errors and the size of the message are important factors.

Forward Error Correction Versus Retransmission
There are two main methods of error correction. Forward error correction is the process in which the receiver tries to guess the message by using redundant bits. This is possible, as we see later, if the number of errors is small. Correction by retransmission is a technique in which the receiver detects the occurrence of an error and asks the sender to resend the message. Resending is repeated until a message arrives that the receiver believes is error-free.

Coding
Redundancy is achieved through various coding schemes. The sender adds redundant bits through a process that creates a relationship between the redundant bits and the actual data bits. The receiver checks the relationships between the two sets of bits to detect or correct the errors. The ratio of redundant bits to data bits and the robustness of the process are important factors in any coding scheme. Coding schemes can be divided into two broad categories: block coding and convolutional coding.
Coding techniques: block coding, Hamming code, cyclic code, checksum.
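A single even-parity bit is one of the simplest block codes and shows the idea of redundancy used for detection. The sketch below is a minimal illustration, not one of the specific codes listed above.

```python
# A minimal block-coding sketch: even parity (detects any single-bit error).
# The sender appends one redundant bit so the total number of 1s is even;
# the receiver recomputes the parity to decide whether the block is corrupted.

def add_parity(bits):
    """Sender side: append an even-parity bit to a list of 0/1 data bits."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """Receiver side: True if the codeword still has even parity."""
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1]
codeword = add_parity(data)          # [1, 0, 1, 1, 1]
print(check_parity(codeword))        # True  (no error)

codeword[2] ^= 1                     # flip one bit to simulate a single-bit error
print(check_parity(codeword))        # False (error detected, but not located)
```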

Framing
Framing in the data link layer separates a message from one source to a destination, or from other messages to other destinations, by adding a sender address and a destination address. The destination address defines where the packet is to go; the sender address helps the recipient acknowledge the receipt. Although the whole message could be packed in one frame, that is not normally done. One reason is that a frame can be very large, making flow and error control very inefficient. When a message is carried in one very large frame, even a single-bit error would require the retransmission of the whole message. When a message is divided into smaller frames, a single-bit error affects only that small frame.

Character-Oriented Protocols
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system such as ASCII. The header, which normally carries the source and destination addresses and other control information, and the trailer, which carries error detection or error correction redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame. The flag, composed of protocol-dependent special characters, signals the start or end of a frame.

Character-oriented framing was popular when only text was exchanged by the data link layers. The flag could be selected to be any character not used for text communication. Character-oriented protocols present a problem in data communications: the universal coding systems in use today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters, and data such as video, audio, or pictures may contain patterns that look like the flag. Byte stuffing (or character stuffing) is used in these cases. Byte stuffing is the process of adding one extra byte whenever there is a flag or escape character in the text.
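A minimal byte-stuffing sketch follows; the flag and escape values are assumed for illustration (the slides do not specify them).

```python
# A minimal byte-stuffing sketch (FLAG and ESC values assumed for illustration).
FLAG = 0x7E   # frame delimiter
ESC  = 0x7D   # escape character

def byte_stuff(payload: bytes) -> bytes:
    """Sender side: precede every FLAG or ESC inside the data with an ESC."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes([FLAG]) + bytes(out) + bytes([FLAG])

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: drop the delimiters and remove the stuffed ESC bytes."""
    body = frame[1:-1]
    out, escaped = bytearray(), False
    for b in body:
        if not escaped and b == ESC:
            escaped = True            # the next byte is data, not a delimiter
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
frame = byte_stuff(data)
assert byte_unstuff(frame) == data
```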


Bit-Oriented Protocols
In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as text, graphics, audio, video, and so on. However, in addition to headers (and possibly trailers), we still need a delimiter to separate one frame from the other. Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the beginning and the end of the frame, as shown in the figure.
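To keep the payload from imitating this flag, bit-oriented protocols stuff a 0 after five consecutive 1s (the bit-stuffing technique mentioned again in the HDLC section). Below is a minimal sketch of that idea.

```python
# A minimal bit-stuffing sketch for a bit-oriented protocol.
# After five consecutive 1s in the data, the sender inserts a 0 so the
# payload can never imitate the 01111110 flag; the receiver removes it.

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0 that must follow five 1s
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0, 1]
assert bit_unstuff(bit_stuff(data)) == data
```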

FLOW AND ERROR CONTROL
Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgment. Error control in the data link layer is based on automatic repeat request (ARQ), which is the retransmission of data. Data link layer protocols combine framing, flow control, and error control to achieve the delivery of data from one node to another.

Protocols
The protocols are normally implemented in software by using one of the common programming languages. Protocols can be divided into those that can be used for noiseless (error-free) channels and those that can be used for noisy (error-creating) channels. The protocols in the first category cannot be used in real life, but they serve as a basis for understanding the protocols for noisy channels.

Simplest Protocol
The Simplest Protocol has no flow or error control. It is very simple: the sender sends a sequence of frames without even thinking about the receiver. To send three frames, three events occur at the sender site and three events at the receiver site. Note that the data frames are shown by tilted boxes; the height of the box represents the transmission time difference between the first bit and the last bit in the frame.

Stop-and-Wait Protocol
If data frames arrive at the receiver site faster than they can be processed, the frames must be stored until they are used. Normally, the receiver does not have enough storage space, especially if it is receiving data from many sources. This may result in either the discarding of frames or denial of service. To prevent the receiver from becoming overwhelmed with frames, we somehow need to tell the sender to slow down. There must be feedback from the receiver to the sender. The protocol we discuss now is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We add flow control to our previous protocol.

Noisy Channels: Stop-and-Wait Automatic Repeat Request
Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) adds a simple error control mechanism to the Stop-and-Wait Protocol. To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the frame arrives at the receiver site, it is checked, and if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the receiver. Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way to identify a frame. The received frame could be the correct one, a duplicate, or a frame out of order. The solution is to number the frames. When the receiver receives a data frame that is out of order, this means that frames were either lost or duplicated.

The corrupted and lost frames need to be resent in this protocol. If the receiver does not respond when there is an error, how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the sent frame and, at the same time, starts a timer. If the timer expires and there is no ACK for the sent frame, the frame is resent, the copy is kept, and the timer is restarted. Since the protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK, even though several copies of the same frame can be in the network.

Since an ACK frame can also be corrupted and lost, it too needs redundancy bits and a sequence number. The ACK frame for this protocol has a sequence number field. In this protocol, the sender simply discards a corrupted ACK frame or ignores an out-of-order one.
Sequence numbers: In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic.
Acknowledgment numbers: In Stop-and-Wait ARQ, the acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.
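A minimal sketch of the sender-side behaviour described above; send_frame and wait_for_ack are hypothetical stand-ins for the actual transmission and timer machinery.

```python
# Minimal Stop-and-Wait ARQ sender sketch. `send_frame` and `wait_for_ack`
# are hypothetical stand-ins for physical transmission and a timer-bounded
# receive; sequence and acknowledgment numbers use modulo-2 arithmetic.

def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=1.0):
    seq = 0
    for payload in frames:
        while True:
            send_frame(seq, payload)         # a copy is kept by staying in the loop
            ack = wait_for_ack(timeout)      # returns the ack number, or None on timeout
            if ack == (seq + 1) % 2:         # ACK announces the next expected frame
                break                        # delivered; move on to the next frame
            # timeout, corrupted ACK, or out-of-order ACK: resend the same frame
        seq = (seq + 1) % 2
```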

Pipelining
In networking and in other areas, a task is often begun before the previous task has ended. This is known as pipelining. There is no pipelining in Stop-and-Wait ARQ because we need to wait for a frame to reach the destination and be acknowledged before the next frame can be sent.

Error Control Techniques

Go-Back-N Automatic Repeat Request
To improve the efficiency of transmission (filling the pipe), multiple frames must be in transition while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment. In Go-Back-N Automatic Repeat Request, we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
Sequence numbers: In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.

Frames from a sending station are numbered sequentially. However, because we need to include the sequence number of each frame in the header, we need to set a limit. If the header of the frame allows m bits for the sequence number, the sequence numbers range from 0 to 2^m - 1. For example, if m is 4, the only sequence numbers are 0 through 15 inclusive. However, we can repeat the sequence, so the sequence numbers are: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...

The sender maintains a list of sequence numbers that it is allowed to send (the sender window). The size of the sender's window is at most 2^m - 1. The sender is provided with a buffer equal to the window size. The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the next frame expected. This also explicitly announces that it is prepared to receive the next N frames, beginning with the number specified. This scheme can be used to acknowledge multiple frames: the receiver could receive frames 2, 3, and 4 but withhold the ACK until frame 4 has arrived. By returning an ACK with sequence number 5, it acknowledges frames 2, 3, and 4 in one go.

The sliding window algorithm is a method of flow control for network data transfers. TCP, the Internet's stream transfer protocol, uses a sliding window algorithm. A sliding window algorithm places a buffer between the application program and the network data flow. Sender sliding window (see the figure and the sketch below):
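A minimal sender-side sketch of the Go-Back-N window, assuming m sequence-number bits; send_frame and the ACK/timeout hooks are hypothetical stand-ins.

```python
# Minimal Go-Back-N sender sketch (m assumed to be the number of
# sequence-number bits; send_frame is a hypothetical stand-in).

from collections import deque

class GoBackNSender:
    def __init__(self, m, send_frame):
        self.window = 2**m - 1          # at most 2^m - 1 outstanding frames
        self.modulo = 2**m
        self.next_seq = 0               # next sequence number to use
        self.outstanding = deque()      # copies kept until acknowledged
        self.send_frame = send_frame

    def can_send(self):
        return len(self.outstanding) < self.window

    def send(self, payload):
        assert self.can_send()
        self.send_frame(self.next_seq, payload)
        self.outstanding.append((self.next_seq, payload))
        self.next_seq = (self.next_seq + 1) % self.modulo

    def on_ack(self, ack):
        # Cumulative ACK: ack is the sequence number of the next frame expected,
        # so every earlier copy can be discarded and the window slides forward.
        while self.outstanding and self.outstanding[0][0] != ack:
            self.outstanding.popleft()

    def on_timeout(self):
        # Go back N: resend every outstanding (unacknowledged) frame.
        for seq, payload in self.outstanding:
            self.send_frame(seq, payload)
```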


Go-Back-N Automatic Repeat Request
The receiver looks for a specific frame (frame 4 in the figure) to arrive in a specific order. If it receives any other frame (out of order), the frame is discarded and needs to be resent. However, the receiver window also slides by one as the specific frame is received and accepted, as shown in the figure. The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the next frame expected.


Go-Back-N Automatic Repeat Request


Go-Back-N Automatic Repeat Request
The receiver always maintains a window of size 1, as shown in the figure. Hence, sliding window flow control allows transmission of multiple frames and assigns each frame a k-bit sequence number; the range of sequence numbers is [0, 2^k - 1], i.e., frames are counted modulo 2^k.

Selective-Repeat ARQ
The Selective-Repeat ARQ scheme retransmits only those frames for which a NAK is received or whose timer has expired, as shown in the figure. This is the most efficient among the ARQ schemes, but the sender must be more complex so that it can send out-of-order frames. The receiver also must have storage space to store the post-NAK frames and processing power to reinsert frames in the proper sequence.
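A minimal receiver-side sketch of Selective Repeat showing the buffering of out-of-order frames; the deliver callback is a hypothetical stand-in for handing data up to the network layer.

```python
# Minimal Selective-Repeat receiver sketch: out-of-order frames are buffered
# and delivered in sequence once the missing ones arrive.

class SelectiveRepeatReceiver:
    def __init__(self, m):
        self.modulo = 2**m
        self.expected = 0          # next in-order sequence number
        self.buffer = {}           # out-of-order frames kept until usable

    def on_frame(self, seq, payload, deliver):
        self.buffer[seq] = payload
        delivered = []
        # Deliver as long as the next expected frame is available.
        while self.expected in self.buffer:
            deliver(self.buffer.pop(self.expected))
            delivered.append(self.expected)
            self.expected = (self.expected + 1) % self.modulo
        return delivered
```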

Piggybacking
The three protocols we discussed in this section are all unidirectional: data frames flow in only one direction, although control information such as ACK and NAK frames can travel in the other direction. In real life, data frames normally flow in both directions: from node A to node B and from node B to node A. So, instead of sending separate acknowledgment packets, a portion (a few bits) of the data frame can be used for the acknowledgment. This technique is known as piggybacking. Piggybacking helps in better channel utilization. Further, multi-frame acknowledgment can be done.

High-Level Data Link Control
HDLC is a bit-oriented protocol. It specifies a packetization standard for serial links. HDLC supports several modes of operation, including a simple sliding-window mode for reliable delivery. Since the Internet provides retransmission at higher levels (i.e., TCP), most Internet applications use HDLC's unreliable delivery mode, Unnumbered Information.

HDLC Stations and Configurations
HDLC specifies the following three types of stations for data link control: primary station, secondary station, and combined station.
Primary station: Within a network using HDLC as its data link protocol, if a configuration is used in which there is a primary station, it is used as the controlling station on the link. It has the responsibility of controlling all other stations on the link (usually secondary stations). A primary station issues commands and a secondary station issues responses. Besides controlling the link, the primary station is also responsible for the organization of data flow on the link. It also takes care of error recovery at the data link level (layer 2 of the OSI model).

Secondary station: If the data link protocol being used is HDLC and a primary station is present, a secondary station must also be present on the data link. The secondary station is under the control of the primary station. It has no ability, or direct responsibility, for controlling the link. It is only activated when requested by the primary station, and it only responds to the primary station. The secondary station's frames are called responses; it can only send response frames when requested by the primary station. A primary station maintains a separate logical link with each secondary station.

Combined station: A combined station is a combination of a primary and a secondary station. On the link, all combined stations are able to send and receive commands and responses without any permission from any other station on the link. Each combined station is in full control of itself and does not rely on any other stations on the link. No other station can control any combined station. It may issue both commands and responses.

http://nptel.ac.in/courses/106105080/13

HDLC Operational Modes
A mode in HDLC is the relationship between two devices involved in an exchange; the mode describes who controls the link. Exchanges over unbalanced configurations are always conducted in normal response mode. Exchanges over symmetric or balanced configurations can be set to a specific mode by using a frame designed to deliver the command. HDLC offers three different modes of operation: Normal Response Mode (NRM), Asynchronous Response Mode (ARM), and Asynchronous Balanced Mode (ABM).

Normal Response Mode: This is the mode in which the primary station initiates transfers to the secondary station. The secondary station can only transmit a response when, and only when, it is instructed to do so by the primary station.
Asynchronous Response Mode: In this mode, the primary station does not initiate transfers to the secondary station. In fact, the secondary station does not have to wait to receive explicit permission from the primary station to transfer frames. Because this mode is asynchronous, the secondary station must wait until it detects an idle channel before it can transfer any frames. This is when the ARM link is operating at half-duplex.

Asynchronous Balanced Mode: This mode is used in the case of combined stations. There is no need for permission on the part of any station in this mode, because combined stations do not require any sort of instruction to perform any task on the link.
Normal Response Mode is used most frequently on multipoint lines, where the primary station controls the link. Asynchronous Response Mode is better for point-to-point links, as it reduces overhead. Asynchronous Balanced Mode is not used widely today. The "asynchronous" in both ARM and ABM does not refer to the format of the data on the link; it refers to the fact that any given station can transfer frames without explicit permission or instruction from any other station.

HDLC Frame Structure
The Flag field: Every frame on the link must begin and end with a flag sequence field (F). Stations attached to the data link must continually listen for a flag sequence. The flag sequence is the octet 01111110. Flags are continuously transmitted on the link between frames to keep the link active. Two other bit sequences are used in HDLC as signals for the stations on the link. HDLC is a code-transparent protocol: it does not rely on a specific code for interpretation of line control. This means that a bit at position N in an octet has a specific meaning, regardless of the other bits in the same octet. If an octet has the bit sequence 01111110 but is not a flag field, HDLC uses a technique called bit stuffing to differentiate this bit sequence from a flag field.

The Address field: The address field (A) identifies the primary or secondary station's involvement in the frame transmission or reception. Each station on the link has a unique address.
The Control field: HDLC uses the control field (C) to determine how to control the communications process. This field contains the commands, responses, and sequence numbers used to maintain the data flow accountability of the link; it defines the function of the frame and initiates the logic to control the movement of traffic between sending and receiving stations.

The Poll/Final bit (P/F): The 5th bit position in the control field is called the poll/final bit, or P/F bit. It is only recognized when it is set to 1; if it is set to 0, it is ignored. The poll/final bit is used to provide dialogue between the primary station and the secondary station.
The Information field (Data field): This field is not always present in an HDLC frame. It is only present when the Information Transfer Format is being used in the control field. The information field contains the actual data the sender is transmitting to the receiver in an I-frame, and network management information in a U-frame.
The Frame Check Sequence field: This field contains 16-bit or 32-bit cyclic redundancy check bits.

HDLC Commands and Responses
Information transfer format command and response (I-frame): The function of the information command and response is to transfer sequentially numbered frames, each containing an information field, across the data link.
Supervisory format commands and responses (S-frame): Supervisory (S) commands and responses are used to perform numbered supervisory functions such as acknowledgment, polling, temporary suspension of information transfer, or error recovery. Frames with the S-format control field cannot contain an information field.
Unnumbered format commands and responses (U-frame): The unnumbered format commands and responses are used to extend the number of data link control functions. The unnumbered format frames have 5 modifier bits, which allow for up to 32 additional commands and 32 additional response functions.

Multiple Access
In the protocols we described, we assumed that there is an available dedicated link (or channel) between the sender and the receiver. This assumption may or may not be true.

The data link layer can be considered as two sublayers. The upper sublayer is responsible for data link control, and the lower sublayer is responsible for resolving access to the shared media. If the channel is dedicated, we do not need the lower sublayer. The upper sublayer that is responsible for flow and error control is called the logical link control (LLC) layer. The lower sublayer that is mostly responsible for multiple-access resolution is called the media access control (MAC) layer.

Multiple Access

RANDOM ACCESS
In random access or contention methods, no station is superior to another station and none is assigned control over another. No station permits, or does not permit, another station to send. At each instance, a station that has data to send uses a procedure defined by the protocol to make a decision on whether or not to send. This decision depends on the state of the medium (idle or busy).

In a random access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision) and the frames will be either destroyed or modified. To avoid access conflict, or to resolve it when it happens, each station follows a procedure. Two features give this method its name. First, there is no scheduled time for a station to transmit; transmission is random among the stations. That is why these methods are called random access. Second, no rules specify which station should send next; stations compete with one another to access the medium. That is why these methods are also called contention methods.

The random access methods have evolved from a very interesting protocol known as ALOHA, which used a very simple procedure called multiple access (MA). The method was improved with the addition of a procedure that forces the station to sense the medium before transmitting. This was called carrier sense multiple access (CSMA). CSMA later evolved into two parallel methods: carrier sense multiple access with collision detection (CSMA/CD), which tells the station what to do when a collision is detected, and carrier sense multiple access with collision avoidance (CSMA/CA), which tries to avoid the collision.

ALOHA
ALOHA, the earliest random access method, was developed at the University of Hawaii in early 1970. The original ALOHA protocol is called pure ALOHA. The idea is that each station sends a frame whenever it has a frame to send. However, since there is only one channel to share, there is the possibility of collision between frames from different stations. The figure shows four stations (an unrealistic assumption) that contend with one another for access to the shared channel. Each station sends two frames; there are a total of eight frames on the shared medium. Some of these frames collide because multiple frames are in contention for the shared channel.

The figure shows that only two frames survive: frame 1.1 from station 1 and frame 3.2 from station 3. We need to mention that even if one bit of a frame coexists on the channel with one bit from another frame, there is a collision and both will be destroyed.

It is obvious that we need to resend the frames that have been destroyed during transmission. The pure ALOHA protocol relies on acknowledgments from the receiver. When a station sends a frame, it expects the receiver to send an acknowledgment. If the acknowledgment does not arrive after a time-out period, the station assumes that the frame (or the acknowledgment) has been destroyed and resends the frame. The vulnerable time is the length of time in which there is a possibility of collision.

A collision involves two or more stations. If all these stations try to resend their frames after the time-out, the frames will collide again. Pure ALOHA dictates that when the time-out period passes, each station waits a random amount of time before resending its frame. The randomness will help avoid more collisions. We call this time the back-off time TB. Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames: after a maximum number of retransmission attempts Kmax, a station must give up and try later.

Slotted ALOHA
Slotted ALOHA was invented to improve the efficiency of pure ALOHA. In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send only at the beginning of a time slot. Because a station is allowed to send only at the beginning of the synchronized time slot, if a station misses this moment, it must wait until the beginning of the next time slot. This means that the station that started at the beginning of this slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at the beginning of the same time slot. However, the vulnerable time is now reduced to one-half that of pure ALOHA (Tfr instead of 2Tfr).
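A minimal sketch of the pure-ALOHA retransmission policy described above; send and ack_received are hypothetical stand-ins, and the binary-exponential choice of back-off is an assumption made for illustration (the slides only require a random wait before resending).

```python
# Minimal pure-ALOHA retransmission sketch: after a collision (no ACK before
# the time-out), wait a random back-off and try again, giving up after Kmax
# attempts. `send` and `ack_received` are hypothetical stand-ins.

import random
import time

def aloha_transmit(frame, send, ack_received, t_fr, k_max=15):
    for k in range(1, k_max + 1):
        send(frame)
        if ack_received():
            return True                          # frame got through
        # Back-off: wait a random multiple of the frame time before retrying.
        # Drawing the multiple from 0 .. 2^k - 1 (binary exponential back-off)
        # is an assumed policy for this illustration.
        wait_slots = random.randint(0, 2**min(k, 10) - 1)
        time.sleep(wait_slots * t_fr)
    return False                                 # give up after Kmax attempts
```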

Carrier Sense Multiple Access (CSMA)
To minimize the chance of collision and, therefore, increase the performance, the CSMA method was developed. The chance of collision can be reduced if a station senses the medium before trying to use it. Carrier sense multiple access (CSMA) requires that each station first listen to the medium (or check the state of the medium) before sending. In other words, CSMA is based on the principle "sense before transmit" or "listen before talk."

The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time (although very short) for the first bit to reach every station and for every station to sense it. In other words, a station may sense the medium and find it idle only because the first bit sent by another station has not yet been received. At time t0, station B senses the medium and finds it idle, so it sends a frame. At time t1 (t1 > t0), station C senses the medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also sends a frame. The two signals collide and both frames are destroyed.

The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to propagate from one end of the medium to the other. When a station sends a frame and any other station tries to send a frame during this time, a collision will result. But if the first bit of the frame reaches the end of the medium, every station will already have heard the bit and will refrain from sending.
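A small worked example of Tp; the cable length and propagation speed below are assumed values chosen only to show the arithmetic.

```python
# Worked example of the CSMA vulnerable time Tp (propagation time).
# The cable length and propagation speed are assumed, illustrative values.

length = 2000.0              # metres of cable (assumed)
speed = 2e8                  # propagation speed in the medium, m/s (assumed)

tp = length / speed          # vulnerable time for CSMA
print(f"Tp = {tp * 1e6:.0f} microseconds")   # -> Tp = 10 microseconds

# Any station that starts sending within this window may not yet have heard
# the first bit, so a collision is still possible.
```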

IEEE Standards
In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety of manufacturers. The relationship of the 802 standard to the traditional OSI model is shown in the figure. The IEEE has subdivided the data link layer into two sublayers: logical link control (LLC) and media access control (MAC). IEEE has also created several physical layer standards for different LAN protocols.

IEEE Standard for LANs: IEEE 802.3

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC). Since then, it has gone through four generations: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps).

Standard Ethernet: MAC Sublayer
In Standard Ethernet, the MAC sublayer governs the operation of the access method. It also frames data received from the upper layer and passes them to the physical layer.
Frame format: Ethernet does not provide any mechanism for acknowledging received frames, making it what is known as an unreliable medium. Acknowledgments must be implemented at the higher layers.


Frame Format
Preamble: Alerts the receiving system to the coming frame and enables it to synchronize its input timing. The pattern provides only an alert and a timing pulse.
Start frame delimiter (SFD): The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.
Destination address (DA).
Source address (SA).
Length or type: The original Ethernet used this field as the type field to define the upper-layer protocol using the MAC frame. The IEEE standard used it as the length field to define the number of bytes in the data field. Both uses are common today.
Data: This field carries data encapsulated from the upper-layer protocols.
CRC: The last field contains error detection information, in this case a CRC-32.
Frame length: minimum 64 bytes (512 bits), maximum 1518 bytes (12,144 bits).
Addressing: Each station on an Ethernet network (such as a PC, workstation, or printer) has its own network interface card (NIC). The NIC fits inside the station and provides the station with a 6-byte physical address. Example of an Ethernet address in hexadecimal notation: 06:01:02:01:2C:4B.

Standard Ethernet: Physical Layer
All standard implementations use digital signaling (baseband) at 10 Mbps. At the sender, data are converted to a digital signal using the Manchester scheme.
10Base5, thick Ethernet, or Thicknet: The nickname derives from the size of the cable, which is roughly the size of a garden hose and too stiff to bend with your hands. 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver connected via a tap to a thick coaxial cable.
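A minimal sketch tying together the frame-length limits and the 6-byte physical address described above; the helper names are illustrative, not part of any standard API.

```python
# Minimal sketch: check the Ethernet frame-length bounds and format a 6-byte
# physical address in the colon-separated hexadecimal notation shown above.

MIN_FRAME = 64     # bytes (512 bits)
MAX_FRAME = 1518   # bytes (12,144 bits)

def frame_length_ok(frame: bytes) -> bool:
    return MIN_FRAME <= len(frame) <= MAX_FRAME

def format_mac(addr: bytes) -> str:
    assert len(addr) == 6, "an Ethernet physical address is 6 bytes"
    return ":".join(f"{b:02X}" for b in addr)

print(format_mac(bytes([0x06, 0x01, 0x02, 0x01, 0x2C, 0x4B])))  # 06:01:02:01:2C:4B
print(frame_length_ok(bytes(100)))                               # True
```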

10Base5, 10Base2 Ethernet

Standard Ethernet: Physical Layer
10Base2, Thin Ethernet: It uses a bus topology, but the cable is much thinner and more flexible. The cable can be bent to pass very close to the stations. In this case, the transceiver is normally part of the network interface card (NIC), which is installed inside the station.
10Base-T, Twisted-Pair Ethernet: The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a physical star topology. The stations are connected to a hub via two pairs of twisted cable. The two pairs create two paths (one for sending and one for receiving) between the station and the hub. Any collision here happens in the hub.
10Base-F, Fiber Ethernet: Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The stations are connected to the hub using two fiber-optic cables.

Wireless LANs: IEEE 802.11
The standard defines two kinds of services: the basic service set (BSS) and the extended service set (ESS).
Basic Service Set: A basic service set is made of stationary or mobile wireless stations and an optional central base station, known as the access point (AP). The BSS without an AP is a stand-alone network and cannot send data to other BSSs; it is called an ad hoc architecture. In this architecture, stations can form a network without the need of an AP; they can locate one another and agree to be part of a BSS. A BSS with an AP is sometimes referred to as an infrastructure network.

Extended Service Set: An extended service set (ESS) is made up of two or more BSSs with APs. In this case, the BSSs are connected through a distribution system, which is usually a wired LAN. The distribution system connects the APs in the BSSs. IEEE 802.11 does not restrict the distribution system; it can be any IEEE LAN such as an Ethernet. The extended service set uses two types of stations: mobile and stationary. The mobile stations are normal stations inside a BSS. The stationary stations are AP stations that are part of a wired LAN.

MAC Sublayer: IEEE 802.11
IEEE 802.11 defines two MAC sublayers: the distributed coordination function (DCF) and the point coordination function (PCF). DCF uses CSMA/CA as the access method. The point coordination function (PCF) is an optional access method that can be implemented in an infrastructure network (not in an ad hoc network). It is implemented on top of the DCF and is used mostly for time-sensitive transmission. PCF has a centralized, contention-free polling access method. The AP performs polling for stations that are capable of being polled. The stations are polled one after another, sending any data they have to the AP.
