
COMPUTER NETWORK

UNIT – 5

Submitted By: (GROUP 5)

Sanjana Reddy(1741146)Shambhavi Kirtiraj Jilkar(1741147)

Shane Alex Pereira(1741148)Sharan Patil(1741149)

Shaurya Chopra(1741150)Shivani D Talreja(1741151)

Sneha Agarwal(1741152)Sourish Ghosh(1741153)

Sreehari S(1741155)Sweta Panigrahi (1741156)


1)Describe distance vector routing protocol with one valid example. Explain working of RIP.

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance. In this protocol, each node maintains a vector (table) of minimum distances to every node. The table at each node also guides the packets to the desired node by showing the next stop in the route (next-hop routing).

Fig. DISTANCE VECTOR ROUTING TABLES


Initialization

At the beginning, each node can know only the distance between itself and its immediate neighbors, those directly connected to it. So, for the moment, we assume that each node can send a message to the immediate neighbors and find the distance between itself and these neighbors.

Fig. INITIALIZATION OF TABLES

Sharing

The whole idea of distance vector routing is the sharing of information between neighbors. Although node A does not know about node E, node C does. So, if node C shares its routing table with A, node A can also know how to reach node E. On the other hand, node C does not know how to reach node D, but node A does. If node A shares its routing table with node C, node C also knows how to reach node D. In other words, nodes A and C, as immediate neighbors, can improve their routing tables if they help each other. Note that a node shares only the first two columns of its table (destination and cost) with any neighbor; the next-hop column is meaningful only to the node itself.

In simple words, in distance vector routing each node shares its routing table with its immediate neighbors periodically and whenever there is a change.


Updating

When a node receives a two-column table from a neighbor, it needs to update its routing table. Updating takes three steps:

1. The receiving node needs to add the cost between itself and the sending node to each value in the second column.

2. The receiving node needs to add the name of the sending node to each row as the third column if the receiving node uses information from any row. The sending node is the next node in the route.

3. The receiving node needs to compare each row of its old table with the corresponding row of the modified version of the received table.

a. If the next-node entry is different, the receiving node chooses the row with the smaller cost. If there is a tie, the old one is kept.

b. If the next-node entry is the same, the receiving node chooses the new row.

Fig. UPDATING IN DISTANCE VECTOR ROUTING
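The three-step update above can be sketched in Python. This is a hypothetical illustration: the node names, costs, and tables are made up for the sketch, not taken from the figure.

```python
INFINITY = float("inf")

def update_table(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's two-column table (dest -> cost) into my_table.

    my_table maps dest -> (cost, next_hop).
    """
    for dest, cost in neighbor_table.items():
        new_cost = cost + link_cost  # step 1: add the cost of the link to the sender
        old_cost, old_next = my_table.get(dest, (INFINITY, None))
        if old_next == neighbor:
            # rule b: same next-node entry -> always take the fresh value
            my_table[dest] = (new_cost, neighbor)
        elif new_cost < old_cost:
            # rule a: different next node -> keep the row with the smaller cost
            my_table[dest] = (new_cost, neighbor)  # step 2: sender becomes the next hop
    return my_table

# A's table before the exchange, and C's advertised two-column table:
a = {"A": (0, None), "C": (2, "C")}
c_adv = {"A": 2, "C": 0, "E": 4}
update_table(a, "C", c_adv, link_cost=2)
```

After the update, A has learned a route to E through C with cost 2 + 4 = 6, while its cheaper existing entries are left alone.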


When to Share?

The table is sent both periodically and when there is a change in the table.

Periodic update: A node sends its routing table, normally every 30 s, in a periodic update. The period depends on the protocol that is using distance vector routing.

Triggered update: A node sends its two-column routing table to its neighbors whenever there is a change in its routing table.

Two-Node Loop Instability

A problem with distance vector routing is instability, which means that a network using this protocol can become unstable.

Fig. TWO-NODE INSTABILITY

At the beginning, both nodes A and B know how to reach node X. But suddenly, the link between A and X fails. Node A changes its table. If A can send its table to B immediately, everything is fine. However, the system becomes unstable if B sends its routing table to A before receiving A's routing table. Node A receives the update and, assuming that B has found a way to reach X, immediately updates its routing table. Based on the triggered update strategy, A sends its new update to B. Now B thinks that something has been changed around A and updates its routing table. The cost of reaching X increases gradually until it reaches infinity. At this moment, both A and B know that X cannot be reached. However, during this time the system is not stable. Node A thinks that the route to X is via B; node B thinks that the route to X is via A. If A receives a packet destined for X, it goes to B and then comes back to A. Similarly, if B receives a packet destined for X, it goes to A and comes back to B. Packets bounce between A and B, creating a two-node loop problem. A few solutions have been proposed for instability of this kind.
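The count-to-infinity behavior described above can be shown with a tiny simulation. The costs here are hypothetical, and "infinity" is capped at 16, as RIP does:

```python
INF = 16  # RIP-style infinity

# Costs to reach X. A's link to X has just failed; B still holds the
# stale route through A that it learned before the failure.
a_cost, b_cost = INF, 2

history = []
for _ in range(20):
    # B advertises to A, then A advertises to B;
    # each adds the cost of the A-B link (1) to the advertised value.
    a_cost = min(INF, b_cost + 1)
    b_cost = min(INF, a_cost + 1)
    history.append((a_cost, b_cost))
    if a_cost == INF and b_cost == INF:
        break  # both nodes finally agree that X is unreachable
```

The recorded costs climb 3, 4, 5, 6, ... until both sides hit 16; during all of those exchanges packets for X would bounce between A and B.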


Three-Node Instability

Suppose, after finding that X is not reachable, node A sends a packet to B and C to inform them of the situation. Node B immediately updates its table, but the packet to C is lost in the network and never reaches C. Node C remains in the dark and still thinks that there is a route to X via A with a distance of 5. After a while, node C sends B its routing table, which includes the route to X. Node B is totally fooled here. It receives information on the route to X from C, and according to the algorithm, it updates its table, showing the route to X via C with a cost of 8. This information has come from C, not from A, so after a while node B may advertise this route to A. Now A is fooled and updates its table to show that A can reach X via B with a cost of 12. Of course, the loop continues; now A advertises the route to X to C, with increased cost, but not to B. Node C then advertises the route to B with an increased cost. Node B does the same to A. And so on. The loop stops when the cost in each node reaches infinity.

Fig. THREE-NODE INSTABILITY


RIP (Routing Information Protocol)

The Routing Information Protocol (RIP) is an intradomain routing protocol used inside an autonomous system. It is a very simple protocol based on distance vector routing. RIP implements distance vector routing directly, with some considerations:

1. In an autonomous system, we are dealing with routers and networks (links). The routers have routing tables; networks do not.

2. The destination in a routing table is a network, which means the first column defines a network address.

3. The metric used by RIP is very simple; the distance is defined as the number of links (networks) to reach the destination. For this reason, the metric in RIP is called a hop count.

4. Infinity is defined as 16, which means that any route in an autonomous system using RIP cannot have more than 15 hops.

5. The next-node column defines the address of the router to which the packet is to be sent to reach its destination.
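The considerations above amount to a small change to the distance vector update: the metric is a hop count, and infinity is 16. A minimal sketch, with hypothetical network addresses:

```python
RIP_INFINITY = 16  # 16 hops means "unreachable" in RIP

def rip_update(table, neighbor_ip, advertised):
    """table: net -> (hops, next_hop). advertised: net -> hops as seen by the neighbor."""
    for net, hops in advertised.items():
        new_hops = min(hops + 1, RIP_INFINITY)  # crossing the link adds one hop
        old_hops, old_next = table.get(net, (RIP_INFINITY, None))
        if old_next == neighbor_ip or new_hops < old_hops:
            table[net] = (new_hops, neighbor_ip)
    return table

# R1 is directly connected to two networks (no next hop), then hears
# an advertisement from the neighbor interface 130.10.0.1:
r1 = {"130.10.0.0": (1, None), "130.11.0.0": (1, None)}
rip_update(r1, "130.10.0.1", {"130.12.0.0": 1})
```

R1 learns that 130.12.0.0 is 2 hops away via 130.10.0.1; its directly connected entries are untouched.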

Working of RIP

Consider an autonomous system with seven networks and four routers; the table of each router is also shown. Let us look at the routing table for R1. The table has seven entries to show how to reach each network in the autonomous system. Router R1 is directly connected to networks 130.10.0.0 and 130.11.0.0, which means that there are no next-hop entries for these two networks. To send a packet to one of the three networks at the far left, router R1 needs to deliver the packet to R2. The next-node entry for these three networks is the interface of router R2 with IP address 130.10.0.1. To send a packet to the two networks at the far right, router R1 needs to send the packet to the interface of router R4 with IP address 130.11.0.1. The other tables can be explained similarly.


Fig. EXAMPLE OF A DOMAIN USING RIP

2. What is a shortest path tree, a source-based tree, and a group-shared tree? Explain with valid examples.

Shortest Path tree

The process of optimal interdomain routing eventually results in the finding of the shortest path tree. The root of the tree is the source, and the leaves are the potential destinations. The path from the root to each destination is the shortest path. However, the number of trees and the formation of the trees in unicast and multicast routing are different.

Unicast routing:

When a router receives a packet to forward, it needs to find the shortest path to the destination of the packet. The router consults its routing table for that particular destination. The next-hop entry corresponding to the destination is the start of the shortest path. The router knows the shortest path for each destination, which means that the router has a shortest path tree to optimally reach all destinations. In other words, each line of the routing table is a shortest path; the whole routing table is a shortest path tree. In unicast routing, each router needs only one shortest path tree to forward a packet; however, each router has its own shortest path tree. In unicast routing, each router in the domain has a table that defines a shortest path tree to possible destinations.


The figure shows the details of the routing table and the shortest path tree for router R1. Each line in the routing table corresponds to one path from the root to the corresponding network. The whole table represents the shortest path tree.

Multicast Routing:

When a router receives a multicast packet, the situation is different from when it receives a unicast packet. A multicast packet may have destinations in more than one network. Forwarding a single packet to the members of a group requires a shortest path tree. If we have n groups, we may need n shortest path trees. We can imagine the complexity of multicast routing. Two approaches have been used to solve the problem: source-based trees and group-shared trees.

Source Based Tree:

In this approach, each router needs to have one shortest path tree for each group. The shortest path tree for a group defines the next hop for each network that has loyal member(s) for that group.


In the figure we assume that we have only five groups in the domain: G1, G2, G3, G4, and G5. At the moment G1 has loyal members in four networks, G2 in three, G3 in two, G4 in two, and G5 in two. The names of the groups with loyal members are shown for each network. The figure also shows the multicast routing table for router R1. There is one shortest path tree for each group; therefore there are five shortest path trees for five groups. If router R1 receives a packet with destination address G1, it needs to send a copy of the packet to the attached network, a copy to router R2, and a copy to router R4 so that all members of G1 can receive a copy. In this approach, if the number of groups is m, each router needs to have m shortest path trees, one for each group. We can imagine the complexity of the routing table if we have hundreds or thousands of groups. However, we will show how different protocols manage to alleviate the situation. In the source-based tree approach, each router needs to have one shortest path tree for each group.

Group Shared Tree:

In this approach, instead of each router having m shortest path trees, only one designated router, called the center core or rendezvous router, takes the responsibility of distributing multicast traffic. The core has m shortest path trees in its routing table; the rest of the routers in the domain have none. If a router receives a multicast packet, it encapsulates the packet in a unicast packet and sends it to the core router. The core router removes the multicast packet from its capsule and consults its routing table to route the packet. In the group-shared tree approach, only the core router, which has a shortest path tree for each group, is involved in multicasting.


3. Provide the classification of Congestion Control. Explain Open Loop techniques in detail.

Congestion Control

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Open-Loop Congestion Control


In open-loop congestion control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion control is handled by either the source or the destination. We give a brief list of policies that can prevent congestion.

Retransmission Policy Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network. However, a good retransmission policy can prevent congestion. The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.

Window Policy The type of window at the sender may also affect congestion. The Selective Repeat window is better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the timer for a packet times out, several packets may be resent, although some may have arrived safe and sound at the receiver. This duplication may make the congestion worse. The Selective Repeat window, on the other hand, tries to send the specific packets that have been lost or corrupted.

Acknowledgment Policy The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Several approaches are used in this case. A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets at a time.

We need to know that acknowledgments are also part of the load in a network. Sending fewer acknowledgments means imposing less load on the network.

Discarding Policy A good discarding policy by the routers may prevent congestion and at the same time may not harm the integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.

Admission Policy An admission policy, which is a quality-of-service mechanism (discussed in Chapter 30), can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.

4. Explain flow characteristics. Also explain Reliability, Delay, Jitter and Bandwidth with example.

Quality of service (QoS) refers to a network's ability to achieve maximum bandwidth and deal with other network performance elements like latency, error rate, and uptime. Quality of service also involves controlling and managing network resources by setting priorities for specific types of data (video, audio, files) on the network. QoS is typically applied to network traffic generated for video on demand, IPTV, VoIP, streaming media, videoconferencing, and online gaming. The characteristics that determine the QoS are called flow characteristics. Some of the flow characteristics are:

Bandwidth refers to the speed and capacity of a link, measured in bits per second (bps), that is, how many bits can be sent over the link each second. The networking device's (router or switch) QoS tools determine and control which packet is sent over the link at a given point: which messages get access to the bandwidth next, and how much of that bandwidth (capacity) each type of traffic gets over time.

In a large network, a typical WAN edge router has hundreds of packets waiting to pass through the link. The WAN edge router link might be configured with a QoS queuing tool to reserve 50 percent of the bandwidth for very important or emergency data traffic, 10 percent for voice, and leave the rest of the bandwidth for all other types of traffic.

Delay. There are two types of delay: one-way delay and round-trip delay.

One-way delay is the time it takes for a packet to travel from the sender to the destination host.

Round-trip delay is the time between sending a packet and receiving the response back at the sender.

Many different individual actions contribute to the delay of packets on a link, just as many factors cause delay when driving from point A to point B.



Reliability. If a packet gets lost or an acknowledgement is not received at the sender, the data must be retransmitted. This decreases the reliability. The importance of reliability differs by application. For example, e-mail and file transfer need reliable transmission, whereas audio conferencing can tolerate some loss.

Jitter is the variation in delay for packets belonging to the same flow. Real-time audio and video are sensitive to jitter: if packets arrive with widely varying delays, playback quality suffers even when the average delay is acceptable.

5) What is the function of a routing algorithm? How are routing algorithms classified? Explain the flooding algorithm. (10 marks)

Functions of routing algorithm:

Routing protocols have been created in response to the demand for dynamic routing tables.

A routing protocol is a combination of rules and procedures that lets routers in the internet inform each other of changes.

It allows routers to share whatever they know about the internet or their neighbourhood.

The routing protocols also include procedures for combining information received from other routers.

Classification of routing algorithms: -

1) Adaptive Routing Algorithm: These algorithms are dynamic and change their routing decisions to reflect changes in the topology and in traffic as well. These get their routing information from adjacent routers or from all routers. The optimization parameters are the distance, number of hops and estimated transit time.

i) Centralized ii) Isolated iii) Distributed

2) Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the route from one node to another is computed in advance, off-line, and downloaded to the routers when the network is booted. This is also known as static routing.

i) Flooding

ii) Random walk

Classification of Routing Protocols:-


Flooding Algorithm:

A router receives a packet and, without even looking at the destination group address, sends it out from every interface except the one on which it was received. This accomplishes multicasting: every network, with active and non-active members alike, receives the packet. Flooding broadcasts packets but creates loops in the system: a packet that has left the router may come back again from another interface or the same interface and be forwarded again. Some flooding protocols keep a copy of the packet for a while and discard any duplicates to avoid loops.
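A minimal sketch of flooding with duplicate suppression follows. The topology is hypothetical, and real protocols track duplicates with sequence numbers carried in the packet header rather than a per-node set:

```python
from collections import defaultdict

# Hypothetical topology: node -> list of neighbors
links = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}

seen = defaultdict(set)   # node -> packet ids already forwarded
deliveries = []           # (node, packet_id) in arrival order

def flood(node, packet_id, in_from=None):
    if packet_id in seen[node]:
        return            # duplicate: discard instead of looping forever
    seen[node].add(packet_id)
    deliveries.append((node, packet_id))
    for nb in links[node]:
        if nb != in_from:  # send out every interface except the incoming one
            flood(nb, packet_id, in_from=node)

flood("A", "pkt-1")
```

Even though the A-B-C triangle would loop forever without the `seen` check, every node ends up receiving the packet exactly once.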

6. Write a short note on Scheduling. How many types of queuing are used in networks? Explain each with clear figures.

In scheduling, packets from different flows arrive at a switch or router for processing. A good scheduling technique treats the different flows in a fair manner. Several scheduling techniques are designed to improve the quality of service. The three types of queuing discussed here are:

1. FIFO Queuing:


In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the router or switch is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded.

2. Priority Queuing:

In Priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed last. (Note: The system does not stop serving a queue until it is empty.)

3. Weighted Fair Queuing:

In this technique the packets are still assigned to different classes and admitted to different queues. The queues, however, are weighted based on the priority of the queue; higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
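The weighted round robin described above can be sketched as follows. The queue contents and weights are hypothetical; keying the queues by their weight is just a simplification for the sketch:

```python
from collections import deque

# Hypothetical: three classes with weights 3, 2, 1 (higher weight = more service)
queues = {
    3: deque(["a1", "a2", "a3", "a4"]),
    2: deque(["b1", "b2"]),
    1: deque(["c1", "c2"]),
}

sent = []
while any(queues.values()):
    # One round-robin pass: take up to `weight` packets from each queue.
    for weight in sorted(queues, reverse=True):
        for _ in range(weight):
            if queues[weight]:
                sent.append(queues[weight].popleft())
```

In the first round the weight-3 class sends three packets, the weight-2 class two, and the weight-1 class one, so over time each class receives bandwidth in proportion to its weight.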

7) Explain the working of the Link State routing algorithm with respect to OSPF. Provide valid figures and examples describing its working.

Link State Routing – Link state routing is the second family of routing protocols. While distance vector routers use a distributed algorithm to compute their routing tables, link-state routing uses link-state routers to exchange messages that allow each router to learn the entire network topology. Based on this learned topology, each router is then able to compute its routing table using a shortest path computation.

Features of link state routing protocols –

Link state packet – A small packet that contains routing information.
Link state database – A collection of information gathered from link state packets.
Shortest path first algorithm (Dijkstra's algorithm) – A calculation performed on the database that results in the shortest path.
Routing table – A list of known paths and interfaces.
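The shortest-path-first computation over the link state database can be sketched with Dijkstra's algorithm. The topology and link costs here are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> {neighbor: cost}. Returns node -> least cost from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical link state database for four routers
g = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 7},
    "R3": {"R1": 4, "R2": 2, "R4": 3},
    "R4": {"R2": 7, "R3": 3},
}
costs = dijkstra(g, "R1")
```

From R1 the algorithm prefers the R1-R2-R3 path (cost 3) over the direct R1-R3 link (cost 4), and reaches R4 through R3 with cost 6.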

Open Shortest Path First (OSPF) routing protocol –

Open Shortest Path First (OSPF) is a unicast routing protocol developed by a working group of the Internet Engineering Task Force (IETF).

It is an intradomain routing protocol. It is an open protocol, and it is similar to the Routing Information Protocol (RIP).

OSPF is a classless routing protocol, which means that in its updates it includes the subnet of each route it knows about, thus enabling variable-length subnet masks. With variable-length subnet masks, an IP network can be broken into many subnets of various sizes. This provides network administrators with extra network-configuration flexibility. These updates are multicast to specific addresses (224.0.0.5 and 224.0.0.6).

OSPF is implemented as a program in the network layer, using the services provided by the Internet Protocol; the IP datagram that carries OSPF messages sets the value of the protocol field to 89. OSPF is based on the SPF algorithm, which is sometimes referred to as Dijkstra's algorithm. OSPF has two versions – version 1 and version 2. Version 2 is mostly used.

OSPF Messages – OSPF is a very complex protocol. It uses five different types of messages. These are as follows:

1. Hello message (Type 1) – Used by a router to introduce itself to the other routers.

2. Database description message (Type 2) – Normally sent in response to the Hello message.

3. Link-state request message (Type 3) – Used by routers that need information about a specific link-state packet.

4. Link-state update message (Type 4) – The main OSPF message, used for building the link-state database.

5. Link-state acknowledgement message (Type 5) – Used to create reliability in the OSPF protocol.


This example sets up an OSPF network at a small office. There are 3 routers, all running OSPF v2. The border router connects to a BGP network.

All three routers in this example are FortiGate units. Router1 will be the designated router (DR) and Router2 will be the backup DR (BDR) due to their priorities. Router3 will not be considered for either the DR or BDR elections. Instead, Router3 is the autonomous system boundary router (ASBR), routing all traffic to the ISP's BGP router on its way to the Internet.

Router2 has a modem connected that provides dialup access to the Internet as well, at a reduced bandwidth. This is a PPPoE connection to a DSL modem. This provides an alternate route to the Internet if the other route goes down. The DSL connection is slow, and is charged by the amount of traffic. For these reasons OSPF will highly favor Router3’s Internet access.

The DSL connection connects to an OSPF network with the ISP, so no redistribution of routes is required. The ISP network does have to be added to that router’s configuration however.

8. Discuss the traffic shaping mechanism with a neat diagram. (10 marks)


Traffic shaping is a mechanism to control the amount and rate of traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.

Leaky Bucket: If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is input to the bucket unless the bucket is empty. The input rate can vary, but the output remains constant. Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate.

In the figure we assume that the network has committed a bandwidth of 3 Mbps for a host. The host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps for the same 10 s. Without the leaky bucket, the burst of data could have hurt the network by consuming more bandwidth than was set aside for this host.
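The numbers in this example can be checked with a small leaky bucket sketch. One tick represents one second, rates are in Mbps, and the 3 Mbps output rate is the committed bandwidth from the example:

```python
def leaky_bucket(arrivals, out_rate):
    """arrivals[i] = Mbits arriving in second i; drain at most out_rate per second."""
    stored, sent = 0.0, []
    for a in arrivals:
        stored += a                 # burst is buffered in the bucket
        out = min(stored, out_rate) # leak at the constant output rate
        stored -= out
        sent.append(out)
    return sent

# 12 Mbps for 2 s, silent for 5 s, 2 Mbps for 3 s  ->  30 Mbits total
arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]
output = leaky_bucket(arrivals, out_rate=3)
```

The bursty 30 Mbits go in, and a steady 3 Mbps comes out for all 10 seconds, exactly as the example describes.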

Token bucket: The leaky bucket is very restrictive; it does not credit an idle host. For example, if a host does not send for a while, its bucket becomes empty. If the host then has bursty data, the leaky bucket allows only the average rate; the time when the host was idle is not taken into account. The token bucket algorithm, on the other hand, allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system adds n tokens to the bucket. For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. The host can now consume all these tokens in one tick by sending 10,000 cells, or take 1000 ticks, sending 10 cells per tick. In other words, the host can send bursty data as long as the bucket is not empty.
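A token bucket sketch using the numbers from the text (n = 100 tokens per tick, one token consumed per cell); the demand pattern after the idle period is a hypothetical illustration:

```python
def token_bucket(n, idle_ticks, demand):
    """demand[i] = cells the host wants to send in tick i after the idle period."""
    tokens = n * idle_ticks   # credit accumulated while the host was idle
    sent = []
    for want in demand:
        tokens += n           # n new tokens arrive each tick
        out = min(want, tokens)  # one token is consumed per cell sent
        tokens -= out
        sent.append(out)
    return sent

# Idle for 100 ticks with n = 100 -> 10,000 tokens of credit, then a burst:
burst = token_bucket(n=100, idle_ticks=100, demand=[10_100, 500])
```

In the first tick the host can burst its full credit plus the tick's 100 fresh tokens; in the next tick it is limited to the 100 new tokens, so the long-term rate stays at n cells per tick.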


9) Discuss in detail the different categories of congestion control techniques with an example.

Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques can be broadly classified into two categories:

1. Open Loop Congestion Control

2. Closed Loop Congestion Control

Open Loop Congestion Control

Open loop congestion control policies are applied to prevent congestion before it happens. Policies adopted by open loop Congestion control are:

1. Retransmission Policy

2. Window Policy

3. Discarding Policy

4. Acknowledgement Policy

5. Admission Policy

1. Retransmission Policy: This is the policy that governs retransmission of packets. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted; retransmission may increase congestion in the network. To prevent congestion, retransmission timers must be designed both to prevent congestion and to optimize efficiency.

2. Window Policy: The type of window at the sender side may also affect congestion. In the Go-Back-N window, several packets are resent when a timer expires, although some of them may have been received successfully at the receiver side. This duplication may make congestion worse. Therefore, the Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.


3. Discarding Policy: A good discarding policy allows routers to prevent congestion by discarding corrupted or less sensitive packets while still maintaining the quality of the message. In the case of audio transmission, for example, routers can discard less sensitive packets to prevent congestion and still maintain the quality of the audio.

4. Acknowledgment Policy: Since acknowledgements are also part of the load in a network, the acknowledgment policy imposed by the receiver may affect congestion. Several approaches can be used to prevent congestion related to acknowledgment: the receiver may acknowledge N packets at a time rather than each packet individually, or send an acknowledgment only when it has a packet to send or a timer expires.

5. Admission Policy: In an admission policy, a mechanism is used to prevent congestion before it happens. Switches in a flow first check the resource requirement of a flow before admitting it to the network. If there is congestion in the network, or a chance of future congestion, a router should deny establishing a virtual-circuit connection to prevent further congestion.

Closed Loop Congestion Control

Closed loop congestion control technique is used to treat congestion after it happens. Several techniques are used by different protocols; some of them are:

1. Backpressure

2. Choke Packet Technique

3. Implicit Signaling

4. Explicit Signaling

Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and, in turn, reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data flow. It can be applied only to virtual circuits, where each node knows its upstream node.


In the diagram above, the third node is congested and stops receiving packets; as a result, the second node may become congested because its output data flow slows down. Similarly, the first node may become congested and inform the source to slow down.

2. Choke Packet Technique: The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A choke packet is a packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The intermediate nodes through which the packet has traveled are not warned about the congestion.

3. Implicit Signaling: In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion in the network. For example, when a sender sends several packets and there is no acknowledgment for a while, one assumption is that there is congestion.

4. Explicit Signaling: In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet technique and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique. Explicit signaling can occur in either the forward or the backward direction.

Forward Signaling: In forward signaling, the signal is sent in the direction of the congestion. The destination is warned about the congestion, and the receiver in this case adopts policies to prevent further congestion.

Backward Signaling: In backward signaling, the signal is sent in the direction opposite to the congestion. The source is warned about the congestion and needs to slow down.

Q10) Explain different techniques to improve QoS.

ANS) Techniques to Improve QoS:

Techniques that can be used to improve the quality of service include scheduling, traffic shaping, admission control, and resource reservation.

Scheduling :

Packets from different flows arrive at a switch or router for processing. A good scheduling technique treats the different flows in a fair and appropriate manner. Several scheduling techniques are designed to improve the quality of service. Three of them are discussed here: FIFO queuing, priority queuing, and weighted fair queuing.

1) FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded. Figure 9 shows a conceptual view of a FIFO queue.

Fig9: FIFO queue

2) Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed last. Note that the system does not stop serving a queue until it is empty.

Figure10 shows priority queuing with two priority levels (for simplicity).

Fig10: Priority queuing

A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such as multimedia, can reach the destination with less delay.

3) Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this technique, the packets are still assigned to different classes and admitted to different queues. The queues, however, are weighted based on the priority of the queues; higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion with the number of packets selected from each queue based on the corresponding weight.
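The weighted round-robin selection described above can be sketched in Python (a simplified model; the queue contents and weights are made up for the example):

```python
from collections import deque

def weighted_fair_schedule(queues, weights):
    """Serve packets from each queue in round-robin order, taking a number
    of packets per turn proportional to the queue's weight."""
    order = []
    while any(queues):
        for q, w in zip(queues, weights):
            # Take up to `w` packets from this queue on each round
            for _ in range(w):
                if q:
                    order.append(q.popleft())
    return order

# Three classes with weights 3, 2, 1 (higher weight means higher priority)
queues = [deque(["A1", "A2", "A3", "A4"]),
          deque(["B1", "B2"]),
          deque(["C1", "C2"])]
print(weighted_fair_schedule(queues, [3, 2, 1]))
# ['A1', 'A2', 'A3', 'B1', 'B2', 'C1', 'A4', 'C2']
```

Note how the class with weight 3 gets three packets per round before the lower-weight classes are served, yet no class is starved.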

Traffic Shaping :

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.

1) Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate.

A simple leaky bucket implementation is shown in Figure 11. A FIFO queue holds the packets. If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.

Fig11: Leaky bucket implementation

The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
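The three steps above can be sketched as follows (a minimal Python model; the packet sizes and the per-tick budget n are illustrative):

```python
def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-length leaky bucket.
    `queue` holds packet sizes in bytes; up to `n` bytes leave per tick."""
    sent = []
    counter = n                      # step 1: initialize the counter to n
    while queue and counter > queue[0]:
        size = queue.pop(0)          # step 2: send while counter > packet size
        counter -= size
        sent.append(size)
    return sent                      # step 3: the counter resets on the next tick

# Bucket holding packets of 200, 400 and 700 bytes; output rate 1000 bytes/tick
bucket = [200, 400, 700]
print(leaky_bucket_tick(bucket, 1000))  # [200, 400] - the 700-byte packet must wait
print(leaky_bucket_tick(bucket, 1000))  # [700]
```

The bursty input (1300 bytes queued at once) leaves the bucket smoothed to at most 1000 bytes per tick.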

2) Token Bucket:

The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick by sending 10,000 cells, or take 1,000 ticks with 10 cells per tick. In other words, the host can send bursty data as long as the bucket is not empty. Figure 12 shows the idea.

Fig12: Token bucket
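The numeric example above (n = 100, host idle for 100 ticks) can be checked with a small sketch (an illustrative model, not a full implementation):

```python
class TokenBucket:
    """Minimal token bucket: gains `rate` tokens per tick, spends one token per cell."""
    def __init__(self, rate):
        self.rate = rate
        self.tokens = 0

    def tick(self, cells_to_send):
        self.tokens += self.rate                 # credit earned this tick
        sent = min(cells_to_send, self.tokens)   # cells go out only while tokens remain
        self.tokens -= sent
        return sent

tb = TokenBucket(rate=100)
for _ in range(100):          # host idle for 100 ticks...
    tb.tick(0)
print(tb.tokens)              # 10000 - the bucket collected 100 * 100 tokens
print(tb.tick(10_000))        # 10000 - the whole burst is sent in a single tick
```

Unlike the leaky bucket, the accumulated credit lets an idle host send a large burst at once.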

Resource Reservation :

A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality of service is improved if these resources are reserved beforehand.

Admission Control :


Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow based on predefined parameters called flow specifications. Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new flow.

11)Define routing protocol.

A router normally connects LANs and WANs in the Internet and has a routing table that is used for making decisions about the route. Routing tables are normally dynamic and are updated using routing protocols. Routing protocols continuously update the routing tables that are consulted for forwarding and routing.

12) Draw the figure providing protocols used at each layer of TCP/IP suites.



13. Define Data Traffic.

Network traffic or data traffic is the amount of data moving across a network at a given point of time. Network data in computer networks is mostly encapsulated in network packets, which provide the load in the network. Network traffic is the main component for network traffic measurement, network traffic control and simulation. The proper organization of network traffic helps in ensuring the quality of service in a given network.

14)List the types of Static and Dynamic Routing algorithm.

Ans14) Types of dynamic routing

Distance-vector Routing Protocols

Link-state Routing Protocols

Hybrid Routing Protocols

Types of Static routing

i. Shortest path routing

ii. Flooding

iii. Flow Based Routing

15) What is SYN and ACK? (2 marks)


SYN:

A SYN segment is a control segment that carries no data, but it consumes one sequence number and does not contain an acknowledgment number. A SYN segment is used for synchronization of sequence numbers. Example: a client chooses a random number as its first sequence number and sends this number to the server. This sequence number is called the initial sequence number (ISN).

ACK:

An ACK segment acknowledges the receipt of a segment using the ACK flag and the acknowledgment number field. An ACK segment, if carrying no data, consumes no sequence number.

16.Write any two functions of a router.(2 marks)

Routers are used to connect networks. Routers process packets, which are units of data at the Network layer. A Router receives a packet and examines the destination IP address information to determine what network the packet needs to reach, and then sends the packet out of the corresponding interface.

Routers carry out the following basic functions:

● They select a path between networks.
● They securely transmit information packets across that path towards an intended destination.
● They draw on routing protocols and algorithms. These algorithms are designed to plot routes using such criteria as throughput, delay, simplicity, low overhead, reliability/stability, and flexibility.


17.Write about Congestion control.

Congestion in a network occurs when the load on the network (i.e., the number of packets sent on the network) is greater than the network capacity (i.e., the number of packets that can be handled).

Congestion control refers to the mechanisms and techniques used to keep the load below the network capacity. Congestion occurs because routers and switches have buffers and queues that are meant to hold packets before and after processing.

Factors responsible for causing congestion include:

o Slower processor

o Bursty traffic

o Memory insufficiency

A packet arrival rate greater than the outgoing link capacity also causes congestion. To avoid congestion, the subnet must prevent additional packets from entering the congested region until those already present can be processed; congested routers can also discard queued packets to make room for those that are arriving.

Congestion control algorithms can be classified broadly into two categories, namely open loop and closed loop.

In closed-loop congestion control, mechanisms are used to remove the congestion after it has occurred.

In open-loop congestion control, policies are used to prevent the congestion before it happens; it is handled either by the source or by the destination.

The different open-loop policies are described below:

o Retransmission policy:

Under this policy, the sender retransmits a packet if it sees that the packet it sent was lost or corrupted. However, retransmission in general increases congestion in the network.

A well-designed retransmission policy can nevertheless help prevent congestion. The retransmission policy and the retransmission timers need to be designed to optimize efficiency while at the same time controlling congestion.

o Window policy: The selective reject window method is used to control congestion under the window policy. This method is preferred over the Go-Back-N method because Go-Back-N resends frames that may have arrived safely, creating duplicates; selective reject avoids this at the cost of increased complexity at the receiver end.

o Acknowledgment policy: The acknowledgment policy at the receiver end affects congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.

Sending fewer acknowledgements can reduce the load on the network; this can be implemented as follows:

• Receiver may send an acknowledgment only if it has a packet to be sent.

• Receiver may send an acknowledgment when a timer expires.

• Receiver may decide to acknowledge every N packets at a time.

o Discarding policy: Less sensitive packets are discarded when congestion seems likely to happen. This helps relieve congestion while not harming the integrity of the transmission.

o Admission policy: This acts as a quality-of-service mechanism and prevents congestion in virtual-circuit networks. Resource requirements are checked before admitting a flow to the network, and new virtual-circuit connections can be denied if there is congestion in the network.

Leaky bucket and token bucket are two algorithms that control network congestion by a traffic-shaping mechanism, controlling the amount and rate of traffic that is sent into the network.

18.What is choke packet?(6 marks)

A choke packet is used in network maintenance and quality management to inform a specific node or transmitter that its transmitted traffic is creating congestion over the network. This forces the node or transmitter to reduce its output rate. Choke packets are used for congestion and flow control over a network. The source node is addressed directly by the router, forcing it to decrease its sending rate. The source node acknowledges this by reducing the sending rate by some percentage.

The router sends a choke packet back to the source host, giving it the destination found in the packet. The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path, and it is then forwarded in the usual way.

When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent.

See the next figure: the flow starts reducing from step 5, with reductions of 25%, then 50%, then 75%, and so on.

The router maintains thresholds and, based on them, gives:

● Mild Warning
● Stern Warning
● Ultimatum

Variation: use queue length or buffer occupancy instead of line utilization as the trigger signal. This will reduce traffic; note that choke packets themselves also add traffic.
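The threshold-based warning levels can be sketched as follows (the utilization thresholds here are illustrative assumptions, not standard values):

```python
def choke_warning(utilization, mild=0.50, stern=0.75, ultimatum=0.90):
    """Map an output line's utilization (0.0 to 1.0) to the router's
    choke-packet warning level; below the mild threshold, no warning."""
    if utilization >= ultimatum:
        return "Ultimatum"
    if utilization >= stern:
        return "Stern Warning"
    if utilization >= mild:
        return "Mild Warning"
    return None

print(choke_warning(0.60))  # Mild Warning
print(choke_warning(0.80))  # Stern Warning
print(choke_warning(0.95))  # Ultimatum
```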


19) Write the formula for average data rate.

Average data rate = Amount of data / Time

(i.e., the amount of data divided by the time taken to send it)
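A small worked example of the formula (the numbers are illustrative):

```python
def average_data_rate(amount_bits, time_seconds):
    """Average data rate = amount of data / time."""
    return amount_bits / time_seconds

# 10,000,000 bits sent in 5 seconds -> 2,000,000 bps (2 Mbps)
print(average_data_rate(10_000_000, 5))
```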

20) Congestion feeds itself. Justify.

Congestion tends to feed upon itself and get even worse. Routers respond to overloading by dropping packets. When these packets contain TCP segments, the segments do not reach their destination and are therefore left unacknowledged, which eventually leads to timeout and retransmission. These retransmissions add yet more load to an already overloaded network, making the congestion worse. A major trigger of congestion is the bursty nature of traffic.

21. Differentiate between TCP and UDP. (2 marks)

TCP is connection-oriented and reliable: it establishes a connection before data transfer and provides ordered delivery, acknowledgments, flow control, and congestion control. UDP is connectionless and unreliable: it sends independent datagrams with no ordering, acknowledgment, flow control, or congestion control, which makes it faster and lower-overhead.


22) What is congestion control?

Network congestion, in data networking and queueing theory, is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss, or the blocking of new connections.

23) What is the role of a repeater?

In digital communication systems, a repeater is a device that receives a digital signal on an electromagnetic or optical transmission medium and regenerates the signal along the next leg of the medium. In electromagnetic media, repeaters overcome the attenuation caused by free-space electromagnetic-field divergence or cable loss. A series of repeaters makes possible the extension of a signal over a distance.

24)What is the importance of Autonomous systems? Provide the classification of Routing Protocols. Explain each in two statements.

Ans24) An autonomous system (AS) is a network or a collection of networks that are all managed and supervised by a single entity or organization.

An AS is a heterogeneous network typically governed by a large enterprise. An AS has many different subnetworks with combined routing logic and common routing policies. Each subnetwork is assigned a globally unique 16-bit identification number (known as the AS number or ASN) by the Internet Assigned Numbers Authority (IANA).


Static Routing

Network administrators can create routing tables manually, but it is a tedious task. The only advantage is that the administrator knows the exact path data is taking to get to a destination, making static routing tables predictable and manageable. Static routing works best in small networks.

 

Dynamic Routing

A less tedious way to create a routing table is dynamically. Dynamic routing requires each device in a network to broadcast information about its location, which other devices use to update their routing tables. Frequent broadcasting keeps the tables up to date. Dynamic routing protocols use different algorithms to help routers refine path selection: interior, exterior, link state and distance vector, according to where they are in a network and what type of information they provide.

 

Interior and Exterior Protocols

Interior gateway protocols, as the Internet community calls them, are typically used in small, cooperative set of networks such as might be found on a university campus. One of the oldest interior protocols is Routing Information Protocol, or RIP. Newer interior protocols include Interior Gateway Routing Protocol, or IGRP, and Open Shortest Path First, or OSPF. Cisco network devices can also use Cisco’s proprietary Enhanced Interior Gateway Routing Protocol, or EIGRP. Interior protocols are fairly easy to set up, but do not scale well to large networks. For large networks, network administrators use an exterior protocol such as Border Gateway Protocol, or BGP, to connect large entities like corporate and university networks to the Internet.

Link-State Protocol

As its name implies, a link-state protocol collects data about the links or segments between one device and another. Mostly this data is about distance and connectivity, but in some networks link-state data includes information about bandwidth, traffic loads, and type of traffic accepted on the links. OSPF is a link-state routing protocol.

 

Distance-Vector Protocol

Two basic pieces of data are exchanged in distance-vector protocols, distance to the destination and which vector — direction — to take to get there. RIP is a simple distance-vector protocol that keeps a table of paths it learns and the distances to them.

 

Hybrid Protocols

Not all routing protocols fall into the categories defined above. For example, Cisco’s proprietary EIGRP is sometimes described as a hybrid of the link-state and distance-vector protocols. Cisco describes EIGRP as “an enhanced distance-vector protocol … [that] calculates the shortest path to a network.” The BGP exterior gateway protocol uses an algorithm called path vector, which means it keeps track of paths used and compares them to determine the best one.

25) Explain distance vector routing algorithm.

Distance vector routing:

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance. In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node. The table at each node also guides the packets to the desired node by showing the next stop in the route (next-hop routing).


Initialization:

a) The tables in the figure are stable.

b) Each node knows how to reach any node and its cost.

c) At the beginning, each node knows only the cost to itself and its immediate neighbours (those nodes directly connected to it).

d) Assume that each node sends a message to its immediate neighbours and finds the distance between itself and these neighbours.

e) The distance of any entry that is not a neighbour is marked as infinite (unreachable).

Sharing:

a) The idea is to share information between neighbours.

b) Node A does not know the distance to E, but node C does.

c) If node C shares its routing table with A, node A can also learn how to reach node E.

d) On the other hand, node C does not know how to reach node D, but node A does.

e) If node A shares its routing table with C, then node C can also learn how to reach node D.

f) Nodes A and C, as immediate neighbours, can improve their routing tables if they help each other.

26.Explain Link State Routing Algorithm.(2 marks)

Link state routing algorithm is a routing method used by dynamic routers in which every router maintains a database of its individual autonomous system (AS) topology. The Open Shortest Path First (OSPF) routing protocol uses the link state routing algorithm to allow OSPF routers to exchange routing information with each other.

Link state routing is the second family of routing protocols. While distance vector routers use a distributed algorithm to compute their routing tables, link-state routers exchange messages to allow each router to learn the entire network topology. Based on this learned topology, each router is then able to compute its routing table using a shortest-path computation such as Dijkstra's algorithm.


For link-state routing, a network is modelled as a directed weighted graph. Each router is a node, and the links between routers are the edges in the graph. A positive weight is associated to each directed edge and routers use the shortest path to reach each destination.
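A minimal sketch of the shortest-path computation each router performs on the learned topology (the example graph and its costs are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Compute least-cost distances from `source` over a weighted graph given
    as {node: {neighbor: cost}}, as a link-state router would after learning
    the full topology."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a cheaper path to v via u
                heapq.heappush(heap, (nd, v))
    return dist

topology = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 3},
    "D": {"B": 4, "C": 3},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 6}
```

Note that A reaches C at cost 3 via B, not at cost 5 over the direct link; the router's forwarding table would record the corresponding next hop.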

27)How Congestion Control is done in TCP? What are the techniques used, explain in short? Draw TCP congestion policy summary figure.

TCP uses a congestion window and a congestion policy that avoid congestion. Previously, we assumed that only receiver can dictate the sender’s window size. We ignored another entity here, the network. If the network cannot deliver the data as fast as it is created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender’s window.

Congestion policy in TCP –

1. Slow Start Phase: starts slowly; the increment is exponential up to a threshold.
2. Congestion Avoidance Phase: after reaching the threshold, the increment is by 1.
3. Congestion Detection Phase: the sender goes back to the Slow Start phase or the Congestion Avoidance phase.

Slow Start Phase: exponential increment – In this phase, after every RTT the congestion window size increments exponentially.

Initially cwnd = 1

After 1 RTT, cwnd = 2^(1) = 2

2 RTT, cwnd = 2^(2) = 4

3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase: additive increment – This phase starts after the threshold value, also denoted ssthresh, is reached. The size of cwnd (congestion window) increases additively: after each RTT, cwnd = cwnd + 1.

Initially cwnd = i

After 1 RTT, cwnd = i+1

2 RTT, cwnd = i+2

3 RTT, cwnd = i+3

Congestion Detection Phase: multiplicative decrement – If congestion occurs, the congestion window size is decreased. The only way a sender can guess that congestion has occurred is the need to retransmit a segment. Retransmission is needed to recover a missing packet which is assumed to have been dropped by a router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs are received.


Case 1: Retransmission due to Timeout – In this case the congestion possibility is high.
(a) ssthresh is reduced to half of the current window size.
(b) Set cwnd = 1.
(c) Start with the slow start phase again.

Case 2: Retransmission due to 3 Duplicate Acknowledgements – In this case the congestion possibility is less.
(a) ssthresh is reduced to half of the current window size.
(b) Set cwnd = ssthresh.
(c) Start with the congestion avoidance phase.

Example – Assume a TCP protocol experiencing the behavior of slow start. At the 5th transmission round, with a threshold (ssthresh) value of 32, it goes into the congestion avoidance phase and continues till the 10th transmission round. At the 10th transmission round, 3 duplicate ACKs are received by the sender and it enters additive increase mode. Timeout occurs at the 16th transmission round. Plot the transmission round (time) vs congestion window size of TCP segments.
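The three phases can be simulated with a small sketch (a simplified model of the policy described above; real TCP implementations differ in details):

```python
def next_cwnd(cwnd, ssthresh, event):
    """One transmission round of TCP's congestion policy (simplified).
    Returns the new (cwnd, ssthresh)."""
    if event == "timeout":                 # heavy congestion
        return 1, max(cwnd // 2, 1)        # ssthresh = cwnd/2, restart slow start
    if event == "3-dup-acks":              # light congestion
        half = max(cwnd // 2, 1)
        return half, half                  # cwnd = ssthresh = cwnd/2
    if cwnd < ssthresh:                    # slow start: exponential growth
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh              # congestion avoidance: additive growth

cwnd, ssthresh = 1, 8
history = []
for _ in range(6):                         # six rounds with no loss events
    history.append(cwnd)
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, "ack")
print(history)   # [1, 2, 4, 8, 9, 10] - exponential up to ssthresh, then +1
```

Feeding the loss events into `next_cwnd` reproduces the two detection cases: a timeout drops cwnd back to 1, while three duplicate ACKs halve it.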

28.Explain distance vector routing in detail.


Distance vector routing: In distance vector routing, the least-cost route between any two nodes is the route with minimum distance. In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node. The table at each node also guides the packets to the desired node by showing the next stop in the route (next-hop routing).

Initialization:

a) The tables in the figure are stable.

b) Each node knows how to reach any node and its cost.

c) At the beginning, each node knows only the cost to itself and its immediate neighbours (those nodes directly connected to it).

d) Assume that each node sends a message to its immediate neighbours and finds the distance between itself and these neighbours.

e) The distance of any entry that is not a neighbour is marked as infinite (unreachable).

Sharing:

a) The idea is to share information between neighbours.

b) Node A does not know the distance to E, but node C does.

c) If node C shares its routing table with A, node A can also learn how to reach node E.

d) On the other hand, node C does not know how to reach node D, but node A does.

e) If node A shares its routing table with C, then node C can also learn how to reach node D.

f) Nodes A and C, as immediate neighbours, can improve their routing tables if they help each other.
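The sharing step can be sketched as a Bellman-Ford-style table update (the node names and costs below are illustrative, not taken from the figure):

```python
INF = float("inf")

def merge_neighbor_table(my_table, neighbor, cost_to_neighbor, neighbor_table):
    """Update `my_table` ({dest: (cost, next_hop)}) using a neighbor's
    advertised distance vector, keeping a route only if going via the
    neighbor is cheaper than what we already know."""
    updated = False
    for dest, neighbor_cost in neighbor_table.items():
        new_cost = cost_to_neighbor + neighbor_cost
        if new_cost < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (new_cost, neighbor)   # cheaper route via the neighbor
            updated = True
    return updated

# Node A knows its direct neighbours; C advertises its own vector, which includes E
a_table = {"A": (0, None), "B": (5, "B"), "C": (2, "C"), "D": (3, "D")}
c_vector = {"A": 2, "B": 1, "C": 0, "E": 4}
merge_neighbor_table(a_table, "C", 2, c_vector)
print(a_table)   # A now reaches E via C, and reaches B more cheaply via C
```

After the merge, A's route to E costs 2 + 4 = 6 with next hop C, and its route to B improves from 5 (direct) to 3 (via C).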

29) Explain the general principles of congestion control. (6 marks)

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity. When too many packets are present in (a part of) the subnet, performance degrades; this situation is called congestion. As traffic increases too far, the routers are no longer able to cope, and they begin losing packets. At very high traffic, performance collapses completely and almost no packets are delivered.

General Principles of Congestion Control:

Three Step approach to apply congestion control:

1. Monitor the system - detect when and where congestion occurs.

2. Pass information to where action can be taken.

3. Adjust system operation to correct the problem.

The subnet should be monitored for congestion by measuring:

1) Percentage of all packets discarded for lack of buffer space.

2) Average queue lengths


3) Number of packets that time out and are retransmitted

4) Average packet delay

5) Standard deviation of packet delay (jitter Control).

Knowledge of congestion will cause the hosts to take appropriate action to reduce the congestion.

For a scheme to work correctly, the time scale must be adjusted carefully.

If every time two packets arrive in a row, a router yells STOP and every time a router is idle for 20 µsec, it yells GO, the system will oscillate wildly and never converge.

All congestion control algorithms can be divided into open loop or closed loop. The open loop algorithms are further divided into ones that act at the source versus ones that act at the destination.

The closed loop algorithms are also divided into two subcategories: explicit feedback and implicit feedback.

In explicit feedback algorithms, packets are sent back from the point of congestion to warn the source.

In implicit algorithms, the source deduces the existence of congestion by making local observations, such as the time needed for acknowledgements to come back. The presence of congestion means that the load is (temporarily) greater than the resources can handle. Hence the solution is to increase the resources or decrease the load. (That is not always possible. So, we apply some congestion prevention policy.)


30) Explain congestion in a network with an example.

Ans 30) Congestion, in the context of networks, refers to a network state where a node or link carries so much data that it may deteriorate network service quality, resulting in queuing delay, frame or data packet loss and the blocking of new connections. In a congested network, response time slows with reduced network throughput. Congestion occurs when bandwidth is insufficient and network data traffic exceeds capacity.

Data packet loss from congestion is partially countered by aggressive network protocol retransmission, which maintains a network congestion state after reducing the initial data load. This can create two stable states under the same data traffic load - one dealing with the initial load and the other maintaining reduced network throughput. For example:

Bandwidth refers to the “size of the pipe” through which Internet data can travel. If the pipe is not large enough for all the traffic to move through at once, congestion occurs.

This occurs during peak TV streaming hours when Netflix is consuming 40% of the Internet. The result is congestion, as many people are trying to consume large file size streaming.

31. Provide the classification of Congestion Control. Explain Closed Loop techniques with details.

Congestion Control

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).


Closed-Loop Congestion Control

Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several mechanisms have been used by different protocols; we describe a few of them here.

Backpressure: The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes, and so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow, to the source. The backpressure technique can be applied only to virtual circuit networks, in which each node knows the upstream node from which a flow of data is coming. Figure 18.14 shows the idea of backpressure.

Choke Packet: A choke packet is a packet sent by a node to the source to inform it of congestion. Note the difference between the backpressure and choke-packet methods. In backpressure, the warning is from one node to its upstream node, although the warning may eventually reach the source station. In the choke-packet method, the warning is from the router, which has encountered congestion, directly to the source station. The intermediate nodes through which the packet has travelled are not warned. We will see an example of this type of control in ICMP.


Implicit Signalling

In implicit signalling, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network from other symptoms. For example, when a source sends several packets and there is no acknowledgment for a while, one assumption is that the network is congested. The delay in receiving an acknowledgment is interpreted as congestion in the network; the source should slow down.
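The "missing ACK means congestion" inference can be sketched in a few lines. This is a simplified illustration in the spirit of TCP's timeout response; the function name and the halve/increment policy are assumptions, not a specification.

```python
def adjust_rate(ack_received, rate):
    """Implicit signalling: no explicit message from the network.
    A missing acknowledgment is itself taken as a sign of congestion."""
    if not ack_received:
        return max(1, rate // 2)   # assume congestion somewhere: slow down
    return min(rate + 1, 64)       # all fine: probe for more bandwidth

print(adjust_rate(False, 16))  # 8  -- no ACK, source halves its rate
print(adjust_rate(True, 16))   # 17 -- ACK received, rate creeps up
```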

Explicit Signalling

The node that experiences congestion can explicitly send a signal to the source or destination. The explicit-signalling method, however, is different from the choke-packet method. In the choke-packet method, a separate packet is used for this purpose; in explicit signalling, the signal is included in the packets that carry data. Explicit signalling can occur in either the forward or the backward direction. This type of congestion control can be seen in an ATM network.
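The key point, that the signal rides inside a data packet rather than in a separate packet, can be shown with a small sketch. The `congestion_bit` field and the feedback strings are hypothetical; the idea mirrors forward explicit signalling such as ATM's EFCI bit or IP's ECN marking.

```python
def forward(packet, router_congested):
    """Explicit (forward) signalling: the congested router marks a bit in
    the data packet itself instead of sending a separate choke packet."""
    if router_congested:
        packet["congestion_bit"] = 1
    return packet

def receiver_feedback(packet):
    # The receiver sees the mark and tells the source to slow down.
    return "slow down" if packet.get("congestion_bit") else "ok"

p = forward({"data": "x"}, router_congested=True)
print(receiver_feedback(p))  # slow down
```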

32) Explain priority and FIFO Queueing in QoS

FIFO:

First In First Out (FIFO) is a fair queuing method: the first packet to reach the router is the first packet to be sent out. With FIFO there is only one queue in each direction, one for traffic arriving at the router and one for traffic leaving it. This is essentially a best-effort queuing strategy that gives no priority to any traffic type, so it is not recommended for voice and video deployments. FIFO is the default form of queuing on nearly all interfaces.
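A minimal sketch of such a single best-effort queue, assuming a simple drop-tail policy when the queue is full (the class name and capacity are illustrative, not from any router OS):

```python
from collections import deque

class FIFOQueue:
    """Single best-effort queue with a drop-tail policy (toy sketch)."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False              # tail drop: queue full, packet lost
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = FIFOQueue(2)
q.enqueue("voice"); q.enqueue("web")
print(q.enqueue("video"))  # False -- dropped; no traffic type is favoured
print(q.dequeue())         # voice -- first in, first out
```

Note that the voice packet receives no special treatment: if a bulk transfer arrives first, delay-sensitive traffic simply waits behind it, which is why FIFO alone is unsuitable for voice and video.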


Priority Queuing:

In priority queuing there are four queues of traffic, and you define which type of traffic goes into each queue. The four queues are based on priority:

High priority queue
Medium priority queue
Normal priority queue
Low priority queue

Priority queuing works as follows: as long as there is traffic in the high queue, the other queues are neglected. The next to be processed is traffic in the medium queue, and as long as there is traffic in the medium queue, the traffic in the normal and low queues is neglected. Furthermore, while serving traffic in the medium queue, if the router receives traffic in the high queue, the high queue is served again, and the router will not return to the medium queue until all traffic has cleared the high queue. This can result in resource starvation for traffic sitting in the lower-priority queues, such as the normal and low queues.


Priority queuing is a strict priority method that will always prefer the traffic in the high-priority queue over the other queues; the order of processing is High Priority Queue > Medium Priority Queue > Normal Priority Queue > Low Priority Queue.
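The strict-priority behaviour described above can be sketched as a scheduler that always scans the queues in priority order on every dequeue. The class and queue names are hypothetical; the point is that a lower queue is served only when every higher queue is empty, which is exactly what causes starvation.

```python
from collections import deque

class PriorityQueuing:
    """Strict-priority scheduler: always serve the highest non-empty queue."""
    ORDER = ["high", "medium", "normal", "low"]

    def __init__(self):
        self.queues = {p: deque() for p in self.ORDER}

    def enqueue(self, pkt, priority):
        self.queues[priority].append(pkt)

    def dequeue(self):
        for p in self.ORDER:          # lower queues are reached only when
            if self.queues[p]:        # all higher queues are empty
                return self.queues[p].popleft()
        return None                   # lower queues risk starvation

pq = PriorityQueuing()
pq.enqueue("bulk", "low")
pq.enqueue("voice", "high")
pq.enqueue("video", "medium")
print(pq.dequeue())            # voice
pq.enqueue("voice2", "high")   # a new high-priority arrival jumps ahead
print(pq.dequeue())            # voice2 -- medium queue must keep waiting
print(pq.dequeue())            # video
```

The "bulk" packet in the low queue is served only after every higher queue has drained, illustrating why continuous high-priority traffic can starve the lower queues.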
