Page 1: Learning Outcome

EEC4113 Data Communication & Multimedia System
Chapter 8: Transport Layer

by Muhazam Mustapha, November 2011

Page 2: Learning Outcome

Learning Outcome

• By the end of this chapter, students are expected to be able to explain issues related to internetworking protocols and a few routing algorithms

Page 3: Learning Outcome

Chapter Content

• Internetworking Protocols
  – X.25, Frame Relay, ATM
  – IP Address

• Routing Algorithms

Page 4: Learning Outcome

Internetworking Protocols

CO1

Page 5: Learning Outcome

Internetworking

• Internetworking, or an internet, is a set of standards involved in connecting LANs to form a large WAN-scale system

• Can be implemented as hardware or software

• Involves some algorithms on routing

• Involves IP address assignment

CO1

Page 6: Learning Outcome

Internetworking

• Connection standards:
  – X.25
  – Frame Relay
  – Asynchronous Transfer Mode (ATM)

CO1

Page 7: Learning Outcome

X.25

• Old ITU (International Telecommunication Union) standard
  – Predates the OSI and TCP/IP models and is not part of either

• Interface between host and packet switched network

• Almost universal on packet switched networks and packet switching in ISDN (Integrated Services Digital Network)

CO1

Page 8: Learning Outcome

X.25

• Defines three layers
  – Physical
  – Link
  – Packet

CO1

Page 9: Learning Outcome

X.25

CO1

Page 10: Learning Outcome

X.25 - Physical

• Interface between an attached station and the link to its node

• The two ends are distinct
  – Data Terminal Equipment, DTE (user equipment)
  – Data Circuit-terminating Equipment, DCE (node)

• Physical layer specification is X.21

• Can be implemented as EIA-232 (formerly RS-232)

CO1

Page 11: Learning Outcome

X.25 - Physical

[Figure: Frame Relay / X.25]

CO1

Page 12: Learning Outcome

X.25 - Link

• Implemented as Link Access Procedure, Balanced (LAPB)
  – A subset of HDLC

• Provides reliable transfer of data over link

• Data is sent as a sequence of frames

CO1

Page 13: Learning Outcome

X.25 - Packet

• Provides logical connections (virtual circuits) between subscribers

• All data in this connection form a single stream between the end stations

• Established on demand

• Termed external virtual circuits

CO1

Page 14: Learning Outcome

Issues with X.25

• Key features include:
  – Call control packets carried in-band, on the same channel as data
  – Multiplexing of virtual circuits at layer 3 (network layer)
  – Layers 2 (data link) and 3 both include flow and error control

• Hence X.25 carries considerable overhead

• Not appropriate for modern digital systems with high reliability

CO1

Page 15: Learning Outcome

Frame Relay

• Designed before ATM to eliminate most X.25 overhead

• Has larger installation base than ATM

• Frame Relay is aimed at internet/WAN links, while ATM is aimed at the LAN

• Provides LAN-LAN connection

• Implemented as virtual circuit just like X.25

CO1

Page 16: Learning Outcome

Frame Relay

• Key differences from X.25:
  – Call control carried on a separate logical connection
  – Multiplexing and switching at layer 2
  – No hop-by-hop error or flow control
  – End-to-end flow and error control (if used) are done by a higher layer

• A single user data frame is sent from source to destination, and a higher-layer ACK is sent back

CO1

Page 17: Learning Outcome

X.25 vs Frame Relay

Feature                             X.25           Frame Relay
Layer 1 specification               Yes            None
Layer 2 protocol family             HDLC           HDLC
Layer 3 support                     PLP            None
Error correction                    Node to node   None
Propagation delay                   High           Low
Ease of implementation              Difficult      Easy
Good for interactive applications   Too slow       Yes
Good for voice                      No             Yes
Good for LAN file transfer          Slow           Yes

CO1

Page 18: Learning Outcome

X.25 vs Frame Relay

• Many X.25 networks have been replaced by Frame Relay or X.25 over Frame Relay Networks

• X.25 still in use for low bandwidth applications such as credit card verification

• It is likely that ATM Networks will ultimately replace Frame Relay and X.25 Networks

CO1

Page 19: Learning Outcome

ATM

• Also called cell relay because it transfers data in FIXED-size cells

• More favorable than frame relay for LAN

• Provides much higher data rate

• Still implemented as virtual circuit like frame relay and X.25

CO1

Page 20: Learning Outcome

ATM vs Frame Relay

Feature                 ATM                                    Frame Relay
Chronology              Later                                  Earlier
Data unit               Fixed 53-byte cells                    Variable-size frames
Installation            Hardware oriented                      Software oriented
Flexibility             Lower                                  Higher
Bit rate                155.520 or 622.080 Mbps (up to 10 Gbps)   Lower (typically up to ~45 Mbps)
Transmission overhead   Lower                                  Higher
Connection type         Virtual circuit                        Virtual circuit
Error handling          At nodes                               At stations

CO1

Page 21: Learning Outcome

IP Addressing

CO1, CO2

Page 22: Learning Outcome

IPv4

• In general, the IP address (also called the Internet address) is the identifier used at the network layer of the TCP/IP model to identify each device connected to the Internet

• The current version of IP address widely used is IPv4 with a 32-bit binary address

• IP addresses are universal & unique

CO1, CO2

Page 23: Learning Outcome

IPv4

• Universal because the addressing system must be accepted by any host that wants to be connected to the Internet

• Unique because two devices on the Internet can never have the same IP address at the same time

• 32-bit binary gives a total of 2^32 = 4,294,967,296 unique IP addresses

CO1, CO2

Page 24: Learning Outcome

IPv4

• There are 2 common notations to show an IP address (converted in the sketch below)
  – Binary notation
  – Dotted-decimal notation
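For concreteness, a minimal Python sketch of converting between the two notations; the helper names to_dotted_decimal and to_binary are invented for illustration:

def to_dotted_decimal(binary):
    # "11000000111001000001000100111001" -> "192.228.17.57"
    octets = [binary[i:i+8] for i in range(0, 32, 8)]
    return ".".join(str(int(o, 2)) for o in octets)

def to_binary(dotted):
    # "192.228.17.57" -> "11000000.11100100.00010001.00111001"
    return ".".join(format(int(o), "08b") for o in dotted.split("."))

print(to_dotted_decimal("11000000111001000001000100111001"))  # 192.228.17.57
print(to_binary("192.228.17.57"))   # 11000000.11100100.00010001.00111001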

CO1, CO2

Page 25: Learning Outcome

Network Classes

• The three principal network classes are best suited to the following conditions:
  – Class A: few networks, each with many hosts
  – Class B: a medium number of networks, each with a medium number of hosts
  – Class C: many networks, each with a few hosts

• Two other classes:
  – Class D: used for multicast
  – Class E: reserved for future use

(a classifier sketch follows)
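A small Python sketch (the function name ip_class is invented here) that maps an address's first octet to its class, matching the ranges detailed on the following slides:

def ip_class(addr):
    first = int(addr.split(".")[0])
    if first <= 127: return "A"   # leading bit  0
    if first <= 191: return "B"   # leading bits 10
    if first <= 223: return "C"   # leading bits 110
    if first <= 239: return "D"   # multicast
    return "E"                    # reserved for future use

print(ip_class("10.0.0.1"))       # A
print(ip_class("192.228.17.57"))  # C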

CO1, CO2

Page 26: Learning Outcome

Network Classes

• The address is coded to allow a variable allocation of bits to specify network & host (netid & hostid)

CO1, CO2

Page 27: Learning Outcome

Class A

• Starts with binary 0
• First decimal number in the range 0 (00000000) to 127 (01111111)
• Only 126 usable network addresses although there are 128 possible combinations
  – because the network numbers 0 and 127 are reserved
• Number of addresses per network = 2^24 = 16,777,216
• Each Class A network address can accommodate 16,777,216 hosts

[Address format: netid | hostid]

CO1, CO2

Page 28: Learning Outcome

Class A

CO1, CO2

Page 29: Learning Outcome

Class B

• Starts with binary 10
• First decimal number in the range 128 (10000000) to 191 (10111111)
• 16,384 possible network addresses (2^14)
• Number of addresses per network = 2^16 = 65,536
• Each Class B network address can accommodate 65,536 hosts

[Address format: netid | hostid]

CO1, CO2

Page 30: Learning Outcome

Class B

CO1, CO2

Page 31: Learning Outcome

Class C

• Starts with binary 110
• First decimal number in the range 192 (11000000) to 223 (11011111)
• 2,097,152 possible network addresses (2^21)
• Number of addresses per network = 2^8 = 256

[Address format: netid | hostid]

CO1, CO2

Page 32: Learning Outcome

Class C

CO1, CO2

Page 33: Learning Outcome

Subnet and Subnet Masks

• Allows arbitrary complexity of internetworked LANs within an organization

• Insulates the overall internet from growth in network numbers and routing complexity

• The site looks to the rest of the internet like a single network

CO1, CO2

Page 34: Learning Outcome

Subnet and Subnet Masks

• Each LAN assigned subnet number

• Local routers route within subnetted network

• IP addresses are partitioned into subnet number and host number

• Subnet mask indicates which bits are subnet number and which are host number

CO1, CO2

Page 35: Learning Outcome

Subnet and Subnet Masks

                      Binary representation                 Dotted decimal
IP address            11000000.11100100.00010001.00111001   192.228.17.57
Subnet mask           11111111.11111111.11111111.11100000   255.255.255.224
Bitwise AND of
address and mask
(resultant network/
subnet number)        11000000.11100100.00010001.00100000   192.228.17.32
Subnet number         11000000.11100100.00010001.001        1
Host number           00000000.00000000.00000000.00011001   25
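A sketch using Python's standard ipaddress module to reproduce the computation in the table: AND the address with the mask for the network/subnet number, and keep the masked-out bits as the host number:

import ipaddress

addr = int(ipaddress.IPv4Address("192.228.17.57"))
mask = int(ipaddress.IPv4Address("255.255.255.224"))

network = addr & mask                   # keep the network/subnet bits
host    = addr & (~mask & 0xFFFFFFFF)   # keep the host bits

print(ipaddress.IPv4Address(network))   # 192.228.17.32
print(host)                             # 25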

CO1, CO2

Page 36: Learning Outcome

Subnet and Subnet Masks

CO1, CO2

Page 37: Learning Outcome

IPv6

• IPv1–3: defined and replaced

• IPv4: the current version

• IPv5: a streams protocol, never implemented

• IPv6: the replacement for IPv4
  – during development it was called IPng (IP Next Generation)

CO1

Page 38: Learning Outcome

IPv6 – Why?

• Address space exhaustion
  – two-level addressing (network and host) wastes space
  – network addresses are used even if not connected
  – growth of networks and the Internet
  – a single address per host

• Requirements for new types of service

CO1

Page 39: Learning Outcome

IPv6 – Why?

• Security
  – IPv6 can include MAC address information, hence an individual network card can be resolved

• Faster
  – Better geographical location assignment
  – IPv4 assigned unfairly few addresses to the recently growing China and India

CO1

Page 40: Learning Outcome

IPv6 – Examples

• Full (128 bits)
  – 3ffe:1900:4545:0003:0200:f8ff:fe21:67cf

• Leading zeros in each group can be omitted
  – 3ffe:1900:4545:3:200:f8ff:fe21:67cf

• One run of all-zero groups can be omitted entirely (::)
  – fe80:0:0:0:200:f8ff:fe21:67cf or fe80::200:f8ff:fe21:67cf
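Python's standard ipaddress module applies both shortening rules; a quick check against the examples above:

import ipaddress

full = "fe80:0000:0000:0000:0200:f8ff:fe21:67cf"
print(ipaddress.IPv6Address(full).compressed)  # fe80::200:f8ff:fe21:67cf
print(ipaddress.IPv6Address(full).exploded)    # full form, zeros restored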

CO1

Page 41: Learning Outcome

Congestion Detection & Avoidance

CO1

Page 42: Learning Outcome

Congestion

• Definition of CONGESTION
  – Different from collision
  – The situation that occurs when the network is over-utilized
  – Stations cannot serve requests on time
  – Results in:
    • Packet loss
    • Delay
    • Blocked connections
    • Queue (buffer) overflow

CO1

Page 43: Learning Outcome

Congestion Detection

• Two schemes:
  – Drop-tail queue management
  – Random Early Detection (RED)

CO1

Page 44: Learning Outcome

Drop-Tail Queue Management

• Default queue management mechanism

• Packets accepted if there is room in queue, regardless of who sent it

• Packets dropped upon queue overflow, regardless of who sent it

• If the queue is consistently full for some period of time, congestion is assumed and notification is sent (a minimal sketch follows)
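A minimal Python sketch of the behaviour above; the class name and fixed capacity are illustrative, not a real router queue:

from collections import deque

class DropTailQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        # Accept if there is room, drop on overflow -- regardless of sender.
        if len(self.queue) >= self.capacity:
            return False    # dropped
        self.queue.append(packet)
        return True         # accepted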

CO1

Page 45: Learning Outcome

Drop-Tail Queue Management

• Excess packet loss due to late congestion notification

• Congestion notification comes too late and results in:
  – Global synchronization: because drop-tail does not discriminate among senders during congestion, all senders slow down transmission at once
  – Poor link utilization
  – Potentially large queuing delay

CO1

Page 46: Learning Outcome

Random Early Detection (RED)

• Randomized congestion detection

• Early notification of congestion

• Steps:
  – The average queue size is monitored
  – Packets can be dropped even if the queue is not full
    • including packets from senders that don't heavily utilize the link (whereas drop-tail does not discriminate by sender)
    • done by a statistical calculation
    • the more you send, the more likely your packets are to be dropped

CO1

Page 47: Learning Outcome

Random Early Detection (RED)

• Steps (cont.):
  – If the queue is almost empty, everyone is accepted
  – If the average queue size exceeds some max threshold value (but the queue is NOT full), everyone is dropped and an early congestion notification is sent, hence a real congestion is avoided (see the sketch below)
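A sketch of RED's drop decision under common assumptions: an exponentially weighted average queue size, with the drop probability rising linearly between a min and a max threshold. The threshold and weight values here are illustrative, not from the slides:

import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0          # monitored AVERAGE queue size
        self.queue = []

    def enqueue(self, packet):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False        # queue almost empty: everyone accepted
        elif self.avg >= self.max_th:
            drop = True         # above max threshold: everyone dropped
        else:
            # Statistical drop: probability grows with the average queue size,
            # so heavier senders are hit more often.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(packet)
        return not drop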

CO1

Page 48: Learning Outcome

Random Early Detection (RED)

• RED works by:
  – Not discriminating packet drops when the queue is wide open NOR when the queue is almost full
    • hence any global synchronization happens at a much later time
  – Notifying congestion before it actually takes place

CO1

Page 49: Learning Outcome

Weighted RED (WRED)

• A variant of RED

• Includes sender priority in the random statistical calculation for packet dropping

• Discriminates against low-priority senders

CO1

Page 50: Learning Outcome

Routing Algorithms

CO1, CO2

Page 51: Learning Outcome

Need for Routing

• The many possible paths in a network mesh may look like mere extra options, but they force the network to seek the best path in terms of efficiency, cost, resilience, distance, etc.

• This draws on the mathematical field of graph algorithms

CO1

Page 52: Learning Outcome

Need for Routing

• Public telephone switches (circuit switching) are tree structures
  – Static routing uses the same approach all the time, hence no routing is required

• Dynamic routing is both possible AND required in packet switching, which allows routing to change depending on traffic

CO1

Page 53: Learning Outcome

Routing Strategies

• Fixed

• Flooding

• Random

• Adaptive

CO1

Page 54: Learning Outcome

Fixed Routing

• Single permanent route for each source to destination pair

• Determine routes using a least cost algorithm

• Route fixed, at least until a change in network topology

CO1

Page 55: Learning Outcome

Fixed Routing

CO1

Page 56: Learning Outcome

Flooding

• No network info required

• Packet sent by node to every neighbor

• Incoming packets retransmitted on every link except incoming link

• Eventually a number of copies will arrive at destination

• Each packet is uniquely numbered so duplicates can be discarded

CO1

Page 57: Learning Outcome

Flooding

• Nodes can remember packets already forwarded to keep network load in bounds

• Can include a hop count in packets (both safeguards are sketched below)
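A toy Python model of flooding with both safeguards; the Node class and packet fields are invented for illustration:

class Node:
    def __init__(self, name):
        self.name, self.neighbors, self.seen = name, [], set()

    def receive(self, pkt_id, hops, sender=None):
        if pkt_id in self.seen or hops <= 0:
            return                  # duplicate or hop count exhausted: discard
        self.seen.add(pkt_id)
        for n in self.neighbors:
            if n is not sender:     # retransmit on every link except the incoming one
                n.receive(pkt_id, hops - 1, sender=self)

a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.receive("pkt-1", hops=3)          # copies reach B and C; duplicates are dropped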

CO1

Page 58: Learning Outcome

Random Routing

• Similar to flooding, but node selects ONE outgoing path for retransmission of incoming packet, instead of all of them

• Selection can be random or round robin

• Can select outgoing path based on probability calculation

• No network info needed

• Route is typically not least cost nor minimum hop

CO1

Page 59: Learning Outcome

Adaptive Routing

• Routing decisions change as conditions on the network change, due to:
  – Failure
  – Congestion

• Requires info about network

• Decisions more complex

• Tradeoff between quality of network info and overhead

CO1, CO2

Page 60: Learning Outcome

Adaptive Routing

• Two main algorithm families are found in the internet:
  – Distance Vector
  – Link State Protocols

(we will cover only distance vector)

CO1, CO2

Page 61: Learning Outcome

Distance Vector

• Each node knows the distance to its directly connected neighbors

• A node periodically sends a list of routing TABLE updates to its neighbors

• All nodes update their distance table based on BELLMAN-FORD algorithm – the routing tables eventually converge

• New nodes advertise themselves to their neighbors

CO1, CO2

Page 62: Learning Outcome

Bellman-Ford Algorithm

• Define distances at each node x
  – Dx(y) = cost of the least-cost path from x to y

• Update distances based on neighbors
  – Dx(y) = min over all neighbors v of { c(x,v) + Dv(y) }

[Figure: example network with nodes u, v, w, x, y, z, s, t and link costs]

Du(z) = min{ c(u,v) + Dv(z), c(u,w) + Dw(z) }

CO1, CO2

Page 63: Learning Outcome

Distance Vector Algorithm

• c(x,v) = cost of the direct link from x to v
  – Node x maintains the costs of its direct links c(x,v)

• Dx(y) = estimate of the least cost from x to y
  – Node x maintains its distance vector Dx = [Dx(y): y ∈ N]

• Node x maintains its neighbors' distance vectors
  – For each neighbor v, x maintains Dv = [Dv(y): y ∈ N]

• Each node v periodically sends Dv to its neighbors
  – and neighbors update their own distance vectors
  – Dx(y) ← min over v of { c(x,v) + Dv(y) } for each node y ∈ N

• Over time, the distance vector Dx converges

CO1, CO2

Page 64: Learning Outcome

Distance Vector Algorithm

• It is iterative and asynchronous; each local iteration is caused by:
  – a local link cost change
  – a distance vector update message from a neighbor

• It is distributed:
  – each node notifies its neighbors only when its DV changes
  – neighbors then notify their neighbors if necessary

CO1, CO2

Page 65: Learning Outcome

Distance Vector Algorithm

Each node:
  wait for (change in local link cost or message from neighbor)
  recompute estimates
  if DV to any destination has changed, notify neighbors

(a runnable sketch follows)
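A sketch of this loop in Python, simplified to a synchronous sweep over the whole graph rather than truly asynchronous per-node execution; the link costs are taken from the 6-node example on the next slides:

INF = float("inf")

links = {("A","B"): 4, ("A","E"): 2, ("A","F"): 6, ("B","D"): 3,
         ("B","F"): 1, ("C","D"): 1, ("C","F"): 1, ("E","F"): 3}
cost = {}
for (u, v), c in links.items():
    cost.setdefault(u, {})[v] = c
    cost.setdefault(v, {})[u] = c

nodes = sorted(cost)
# D[x][y]: current estimate of the least cost from x to y.
D = {x: {y: (0 if x == y else cost[x].get(y, INF)) for y in nodes} for x in nodes}

changed = True
while changed:                  # repeat until the vectors converge
    changed = False
    for x in nodes:
        for y in nodes:
            if y == x:
                continue
            # Bellman-Ford update over all neighbors v of x.
            best = min(cost[x][v] + D[v][y] for v in cost[x])
            if best < D[x][y]:
                D[x][y], changed = best, True

print(D["A"])   # e.g. D["A"]["C"] == 6, matching step 3 of the example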

CO1, CO2

Page 66: Learning Outcome

Distance Vector Algorithm (Example – Step 1)

[Figure: network with nodes A–F; link costs A–B=4, A–E=2, A–F=6, B–D=3, B–F=1, C–D=1, C–F=1, E–F=3]

Table for A (Dst Cst Hop):  A 0 A | B 4 B | C – | D – | E 2 E | F 6 F
Table for B (Dst Cst Hop):  A 4 A | B 0 B | C – | D 3 D | E – | F 1 F
Table for C (Dst Cst Hop):  A – | B – | C 0 C | D 1 D | E – | F 1 F
Table for D (Dst Cst Hop):  A – | B 3 B | C 1 C | D 0 D | E – | F –
Table for E (Dst Cst Hop):  A 2 A | B – | C – | D – | E 0 E | F 3 F
Table for F (Dst Cst Hop):  A 6 A | B 1 B | C 1 C | D – | E 3 E | F 0 F

Optimum 1-hop paths

CO1, CO2

Page 67: Learning Outcome

Distance Vector Algorithm (Example – Step 2)

Table for A (Dst Cst Hop):  A 0 A | B 4 B | C 7 F | D 7 B | E 2 E | F 5 E
Table for B (Dst Cst Hop):  A 4 A | B 0 B | C 2 F | D 3 D | E 4 F | F 1 F
Table for C (Dst Cst Hop):  A 7 F | B 2 F | C 0 C | D 1 D | E 4 F | F 1 F
Table for D (Dst Cst Hop):  A 7 B | B 3 B | C 1 C | D 0 D | E – | F 2 C
Table for E (Dst Cst Hop):  A 2 A | B 4 F | C 4 F | D – | E 0 E | F 3 F
Table for F (Dst Cst Hop):  A 5 B | B 1 B | C 1 C | D 2 C | E 3 E | F 0 F

Optimum 2-hop paths

[Figure: same network as in step 1]

CO1, CO2

Page 68: Learning Outcome

Distance Vector Algorithm (Example – Step 3)

Table for A (Dst Cst Hop):  A 0 A | B 4 B | C 6 E | D 7 B | E 2 E | F 5 E
Table for B (Dst Cst Hop):  A 4 A | B 0 B | C 2 F | D 3 D | E 4 F | F 1 F
Table for C (Dst Cst Hop):  A 6 F | B 2 F | C 0 C | D 1 D | E 4 F | F 1 F
Table for D (Dst Cst Hop):  A 7 B | B 3 B | C 1 C | D 0 D | E 5 C | F 2 C
Table for E (Dst Cst Hop):  A 2 A | B 4 F | C 4 F | D 5 F | E 0 E | F 3 F
Table for F (Dst Cst Hop):  A 5 B | B 1 B | C 1 C | D 2 C | E 3 E | F 0 F

Optimum 3-hop paths

[Figure: same network as in step 1]

CO1, CO2

