COMPUTER NETWORK
PROGRAMMING
LAB (B.Tech. ECE, 8th SEM)
Submitted To:_ ________________________________
_________________________________
_________________________________
Submitted By: _________________________________
_________________________________
_________________________________
INDEX
1. PRELIMINARIES: Study and use of common TCP/IP protocols and terms viz. telnet,
rlogin, ftp, ping, finger, socket, port, etc.
2. DATA STRUCTURES USED IN NETWORK PROGRAMMING: Representation of
undirected and directed, weighted and unweighted graphs.
3. ALGORITHMS IN NETWORKS: Computation of the shortest path for one source-one
destination and one source-all destinations.
4. SIMULATION OF NETWORK PROTOCOLS: M/M/1 and M/M/1/N queues.
5. Case study: on LAN Training kit
(i) Observe the behavior & measure the throughput of reliable data transfer protocols under
various Bit error rates for following DLL layer protocols
a. Stop & Wait
b. Sliding Window: Go-Back-N and Selective Repeat
(ii) Observe the behavior & measure the throughput under various network load conditions
for following MAC layer Protocols
a. Aloha
b. CSMA, CSMA/CD & CSMA/CA
c. Token Bus & Token Ring
6. DEVELOPMENT OF CLIENT SERVER APPLICATION:
(i) Develop ‘telnet’ client and server which uses port other than 23.
(ii) Write a finger application which prints all available information for the five users
currently logged on who have been using the network for the longest duration. Print the
information in ascending order of time.
1. TCP/IP PROTOCOLS
AIM: Study and use of common TCP/IP protocols and terms viz. telnet, rlogin, ftp, ping,
finger, socket, port.
TCP/IP is actually a suite, or stack, of protocols that interconnect and work together to
provide for reliable and efficient data communications across an internetwork.
TCP/IP Protocol Stack Maps to the OSI Model
OSI Layers                          TCP/IP Protocols
Application, Presentation, Session  Telnet, FTP, SMTP, SNMP, DNS, HTTP
Transport                           TCP, UDP
Network                             IP, ICMP, ARP, RARP
Data Link, Physical                 Ethernet, Token Ring
Application Layer Protocols
FTP
File Transfer Protocol (FTP) enables a file on one system to be copied to another system.
The user doesn't actually log in as a full user to the machine he or she wants to access, as
with Telnet, but instead uses the FTP program to enable access. Again, the correct
permissions are necessary to provide access to the files.
Once the connection to a remote machine has been established, FTP enables you to copy
one or more files to your machine. (The term transfer implies that the file is moved from
one system to another but the original is not affected. Files are copied.) FTP is a widely
used service on the Internet, as well as on many large LANs and WANs.
Telnet
The Telnet program provides a remote login capability. This lets a user on one machine
log onto another machine and act as though he or she were directly in front of the second
machine. The connection can be anywhere on the local network or on another network
anywhere in the world, as long as the user has permission to log onto the remote system.
You can use Telnet when you need to perform actions on a machine across the country.
This isn't often done except in a LAN or WAN context, but a few systems accessible
through the Internet allow Telnet sessions while users play around with a new application or
operating system.
Transport Layer Protocols
Network protocols are either connection-oriented or connectionless.
Connection-oriented protocols - require that a direct connection be established between
two devices before data can begin to transfer between the devices. Packets are transferred
using a prescribed sequence of actions that include an acknowledgment to signal when a
packet arrives, and possibly resending the packet if there are errors. This method is reliable
and, as a result of its reliability and the overhead involved, much slower than connectionless
protocols.
Connectionless protocols - are largely based on your faith in the technology. Packets are
sent over the network without regard to whether they actually arrive at their destinations.
There are no acknowledgments or guarantees, but you can send a datagram to many
different destinations at the same time. Connectionless protocols are fast because no time is
used in establishing and tearing down connections. Connectionless protocols are also
referred to as best-effort protocols.
A port is a "logical connection place"; specifically, in the Internet's protocol suite,
TCP/IP, it is the way a client program specifies a particular server program on a computer
in a network. Higher-level applications that use TCP/IP, such as the Web protocol HTTP
(Hypertext Transfer Protocol), have ports with pre-assigned numbers. These are known as
"well-known ports" and have been assigned by the Internet Assigned Numbers Authority
(IANA). Other application processes are given port numbers dynamically for each
connection. When a service (server program) initially starts, it is said to bind to its
designated port number.
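As a minimal sketch of binding, the following Python snippet creates a TCP server socket and binds it to a port. The addresses here are illustrative; port 0 asks the OS for any free ephemeral port, so the example never clashes with a well-known port such as 23 (telnet).

```python
import socket

# A server "binds" to its designated port; port 0 asks the OS to
# assign any free ephemeral port for this illustration.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # bind to loopback, OS-chosen port
host, port = server.getsockname()      # the port the OS actually assigned
server.listen(1)                       # now the server is "bound" and listening
print(f"listening on {host}:{port}")
server.close()
```

A real telnet or HTTP server would pass its well-known port number instead of 0.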
Transmission Control Protocol (TCP)
TCP is a connection-oriented, reliable delivery protocol that ensures that packets arrive at
their destination error-free. Using TCP is similar to sending a registered letter. When you
send the letter, you know for sure that it will get to its destination and that you'll be notified
that it got there in good condition. On the Transport layer, packets are referred to as
segments.
User Datagram Protocol (UDP)
User Datagram Protocol (UDP) is a connectionless protocol, meaning that it does not
provide for the retransmission of datagrams (unlike TCP, which is connection-oriented).
UDP is not very reliable, but it does have specialized purposes. If the applications that use
UDP have reliability checking built into them, the shortcomings of UDP are overcome.
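A minimal sketch of UDP's fire-and-forget behavior over the loopback interface (addresses and message text are illustrative). Note there is no connection setup and no acknowledgment; on a real network the datagram could simply be lost.

```python
import socket

# Minimal UDP exchange over loopback: connectionless, no handshake.
# Any reliability (acks, retries) must be built into the application.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # OS-assigned port for this demo
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)       # fire-and-forget: no connection setup

data, peer = recv_sock.recvfrom(1024)  # blocks until the datagram arrives
print(data)                            # b'hello'
send_sock.close()
recv_sock.close()
```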
Network Layer Protocols
A number of TCP/IP protocols operate on the Network layer of the OSI Model, including
IP, ARP, RARP, BOOTP, and ICMP. Remember, the OSI Network layer is concerned with
routing messages across the internetwork.
Internet Protocol (IP)
Where TCP is connection-oriented, IP is connectionless. IP provides for the best-effort
delivery of the packets (or datagrams) that it creates from the segments it receives from the
Transport layer protocols. The IP protocol provides for logical addressing on the Network
layer.
COMPUTER NETWORK’S TERMS
rlogin
rlogin is a software utility for Unix-like computer operating systems that allows users to log
in on another host via a network, communicating via TCP port 513. rlogin is also the name of
the application layer protocol used by the software, part of the TCP/IP protocol suite.
Authenticated users can act as if they were physically present at the computer. rlogin
communicates with a daemon, rlogind, on the remote host. rlogin is similar to the Telnet
command, but has the disadvantages of being less customizable and of being able to connect
only to Unix hosts. rlogin is most commonly deployed on corporate or academic networks,
where user account information is shared between all the Unix machines on the network.
Ping
Ping is a computer network administration utility used to test the reachability of a host on
an Internet Protocol (IP) network and to measure the round-trip time for messages sent from
the originating host to a destination computer. The name comes from active sonar
terminology. Ping operates by sending Internet Control Message Protocol (ICMP) echo
request packets to the target host and waiting for an ICMP response. In the process it
measures the time from transmission to reception (round-trip time) and records any packet
loss. The results of the test are printed in the form of a statistical summary of the response
packets received, including the minimum, maximum, and the mean round-trip times, and
sometimes the standard deviation of the mean.
Finger
Finger was one of the first computer network applications. It enabled people to see who else
was using the computer system as well as find basic information on that user. To find
information about a specific user, it was necessary to know that person's email address.
Typical information provided by Finger would be a person's real name, their office location
and phone number, and the last time they logged in. Users could also modify the .plan
field to add whatever text they wished.
Socket: A socket represents a single connection between two network applications. These
two applications nominally run on different computers, but sockets can also be used for
interprocess communication on a single computer. Applications can create multiple sockets
for communicating with each other.
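The idea of a single connection between two applications can be sketched with Python's standard socket module. Here both endpoints run in one process over loopback, which is an illustrative stand-in for two machines:

```python
import socket
import threading

# One TCP connection = one socket at each end. The server side echoes
# whatever it receives once, then closes.
def echo_once(server):
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS-assigned port for this demo
server.listen(1)
t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # TCP three-way handshake happens here
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)                           # b'ping'
```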
2. DATA STRUCTURES USED IN NETWORK PROGRAMMING
AIM: Representation of undirected and directed, weighted and unweighted graphs.
Definition: A graph is a collection (nonempty set) of vertices and edges.
A path from vertex x to vertex y : a list of vertices in which successive vertices are
connected by edges.
Connected graph: There is a path between each two vertices.
Simple path: No vertex is repeated.
Cycle: Simple path except that the first vertex is equal to the last.
Loop: An edge that connects the vertex with itself.
Tree: A graph with no cycles.
Spanning tree of a graph: a subgraph that contains all the vertices, and no cycles.
Complete graphs: Graphs with all edges present – each vertex is connected to all other
vertices.
Weighted graphs – weights are assigned to each edge (e.g. road map with distances).
Directed graphs: The edges are oriented; they have a beginning and an end.
Directed acyclic graphs (DAGs): directed graphs with no cycles.
Outdegree of a node U: the number of edges (U,V) - outgoing edges
Indegree of a node U: the number of edges (V,U) - incoming edges
Algorithm (topological sorting)
1. Initialize sorted list to be empty, and a counter to 0
2. Compute the indegrees of all nodes
3. Store all nodes with indegree 0 in a queue
4. While the queue is not empty
a. get a node U and put it in the sorted list. Increment the counter.
b. For all edges (U,V) decrement the indegree of V, and put V in the queue if the
updated indegree is 0.
5. If counter is not equal to the number of nodes, there is a cycle.
Complexity
The number of operations is O(|E| + |V|), |V| - number of vertices, |E| - number of edges.
How many operations are needed to compute the indegrees?
Depends on the representation:
Adjacency lists: O(|E|)
Matrix: O(|V|²)
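The five steps above (this is Kahn's topological-sorting algorithm) can be sketched in Python using adjacency lists; the sample graph is illustrative:

```python
from collections import deque

# Kahn's algorithm, following the steps above: compute indegrees,
# queue the indegree-0 nodes, repeatedly remove them, and detect
# cycles by comparing the counter with the number of nodes.
def topological_sort(adj):
    indegree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    queue = deque(u for u in adj if indegree[u] == 0)
    order = []                       # the sorted list; len(order) is the counter
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(adj):       # step 5: counter < number of nodes
        raise ValueError("graph contains a cycle")
    return order

# 'a' must come before 'b' and 'c'; all three before 'd'
print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
```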
Representation of Graphs
There are two common ways of representing graphs: a two-dimensional array (adjacency
matrix) for dense graphs, and a linked-list structure (adjacency lists) for sparse graphs. These
will now be discussed in detail, along with the structure of a graph class that could be
implemented.
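A small sketch of both representations for the same illustrative three-vertex directed graph (vertices 0..2 with edges (0,1), (0,2), (2,1)):

```python
# Adjacency matrix: O(V^2) space, O(1) edge lookup -- good for dense graphs.
matrix = [
    [0, 1, 1],   # edges out of vertex 0: (0,1), (0,2)
    [0, 0, 0],   # vertex 1 has no outgoing edges
    [0, 1, 0],   # edge (2,1)
]

# Adjacency lists: O(V + E) space -- good for sparse graphs.
lists = {0: [1, 2], 1: [], 2: [1]}

# Both answer "is there an edge (2,1)?", at different costs.
print(matrix[2][1] == 1)     # O(1) lookup
print(1 in lists[2])         # O(degree) scan
```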
3. ALGORITHMS IN NETWORKS
AIM: Single source shortest path algorithm for directed weighted graphs: Dijkstra's
algorithm
Similar to the single source shortest path algorithm for unweighted graphs.
Algorithm:
s – starting node
DT – Distance Table,
PQ – priority queue, the priority of a node is equal to the distance from s to that node
Initialize DT(s,0) = 0, DT(s,1) = 0, all remaining DT(j,k) = -1
1. Store s in PQ with distance = 0
2. While there are vertices in the queue:
1. DeleteMin a vertex v from the queue
2. For all adjacent vertices w:
Compute new_distance = (distance to v) + (distance(v,w))
i.e. new_distance = DT(v,0) + distance(v,w)
If distance to w not computed (DT(w,0) = -1)
store new distance in table : DT(w,0)= new_distance
append w in PQ with priority new_distance
make path to w equal to v, i.e. DT(w,1) = v
else
if old distance > new distance, i.e. DT(w,0) > new_distance
Update old_distance = new_distance, i.e. DT(w,0) =
new_distance
Update the priority of w in PQ
(this is done by updating the priority of an element in the
queue - decreaseKey operation. Complexity O(logV))
Update path to w to be v, i.e. DT(w,1) = v
Complexity: O(ElogV + VlogV) = O((E + V)logV)
Each vertex is stored only once in the queue - max elements = V
The deleteMin operations take O(VlogV) in total.
The decreaseKey operation is O(logV) (a search in the binary heap). It might
be performed for each examined edge - O(ElogV).
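A Python sketch of the algorithm above. Since the standard heapq module has no decreaseKey, stale queue entries are simply skipped when popped ("lazy deletion"), a common substitute with the same asymptotic bound. The sample graph is illustrative:

```python
import heapq

# Dijkstra's algorithm following the table-based description above.
def dijkstra(adj, s):
    dist = {s: 0}       # column 0 of the distance table DT
    parent = {s: None}  # column 1: predecessor on the shortest path
    pq = [(0, s)]
    while pq:
        d, v = heapq.heappop(pq)    # deleteMin
        if d > dist[v]:
            continue                # stale entry, already improved
        for w, weight in adj[v]:
            nd = d + weight         # new_distance = DT(v,0) + distance(v,w)
            if w not in dist or nd < dist[w]:
                dist[w] = nd
                parent[w] = v
                heapq.heappush(pq, (nd, w))
    return dist, parent

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
dist, parent = dijkstra(graph, "s")
print(dist)    # {'s': 0, 'a': 2, 'b': 3}
```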
Single source shortest path algorithm for directed unweighted graphs
Algorithm for computing the distance for a vertex s to all other vertices:
1. Initialize
D_table(s,0) = 0 (distance from s to itself = 0)
D_table(s,1) = 0 (s is the starting vertex)
D_table(i,j) = -1 for all i ≠ s
2. Store s in a queue
3. While there are vertices in the queue:
1. Read a vertex v from the queue
2. For all adjacent vertices w:
If D_table(w,0) = -1 (distance not computed)
D_table(w,0) ← D_table(v,0) + 1 (i.e. distance to w = (distance to v) + 1)
D_table(w,1) ← v (i.e. path to w goes through v)
Append w to the queue
Complexity:
Matrix representation: O(V²)
Adjacency lists: O(E + V)
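The unweighted algorithm can be sketched with a plain FIFO queue; -1 marks a distance not yet computed, as in the table above (the sample graph is illustrative):

```python
from collections import deque

# BFS shortest path for unweighted graphs, mirroring the distance table.
def bfs_distances(adj, s):
    dist = {u: -1 for u in adj}      # column 0: -1 = not yet computed
    parent = {u: None for u in adj}  # column 1: predecessor
    dist[s] = 0
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if dist[w] == -1:        # distance not computed
                dist[w] = dist[v] + 1
                parent[w] = v
                queue.append(w)
    return dist, parent

adj = {"s": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(bfs_distances(adj, "s")[0])   # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```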
Spanning trees (for unweighted graphs)
Similar to finding the shortest path in unweighted graphs
Data structures needed: A table (an array) T with size = number of vertices, where T[i] =
parent of vertex vi
Adjacency lists
A queue of vertices to be processed
Algorithm
1. Choose a vertex u and store it in the queue. Set a counter = 0, and T[u] = 0 (u will be
the root of the tree)
2. While the queue is not empty and counter < |V| - 1 do the following:
Read a vertex v from the queue.
For each uk in the adjacency list of v do:
If T[k] is empty,
T[k] = v
counter = counter + 1
store uk in the queue
Complexity: O(E + V) - we process all edges and all nodes
Minimal spanning trees (weighted graphs) Prim’s algorithm
The algorithm is similar to finding the shortest paths in weighted graphs.
The difference is that we record in the table the length of the current edge, not the length
of the path .
Data structures needed:
A table T with number of rows = number of vertices, and three columns:
T[i,1] = True if the vertex has been fixed in the tree, False otherwise. This is
necessary because the graph is not directed, and without this information we
may enter a cycle.
T[i,2] = the length of the edge from the chosen parent (stored in the third column
of the table) to the vertex vi
T[i,3] = parent of vertex vi
Adjacency lists
A priority queue of vertices to be processed.
Algorithm:
1. Initialize the first column to False, select a vertex s and store it in the priority queue
with priority = 0, set T[s,2] = 0, T[s,3] = root
(It does not matter which vertex is chosen, because all vertices have to be in the tree.)
2. While there are vertices in the queue:
a. DeleteMin a vertex v from the queue and set T[v,1] = True
b. For all adjacent vertices w:
If T[w,1] = True, do nothing
If T[w,2] is empty:
o T[w,2] = weight of edge (v,w) (stored in the adjacency list)
o T[w,3] = v (this is the parent)
o append w in the queue with priority = weight of (v,w)
If T[w,2] > weight of (v,w):
o Update T[w,2] = weight of edge (v,w)
o Update the priority of w
(this is done by updating the priority of an element in the
queue - the decreaseKey operation, complexity O(logV))
o Update T[w,3] = v
At the end of the algorithm, the tree is represented in the table by its edges
{(T[i,3], vi) | i = 1, 2, ..., V}
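A Python sketch of Prim's algorithm as described, with the three table columns kept as separate structures (fixed set, best edge weight, parent). As with Dijkstra, stale heap entries stand in for decreaseKey. The graph below is illustrative; since it is undirected, each edge appears in both adjacency lists:

```python
import heapq

# Prim's algorithm: the priority of a vertex is the weight of the
# lightest edge connecting it to the tree so far, not the path length.
def prim(adj, s):
    fixed = set()       # column 1: vertex fixed in the tree
    best = {s: 0}       # column 2: cheapest edge into the vertex so far
    parent = {s: None}  # column 3: chosen parent
    pq = [(0, s)]
    while pq:
        _, v = heapq.heappop(pq)     # deleteMin
        if v in fixed:
            continue                 # stale entry in place of decreaseKey
        fixed.add(v)
        for w, weight in adj[v]:
            if w not in fixed and (w not in best or weight < best[w]):
                best[w] = weight
                parent[w] = v
                heapq.heappush(pq, (weight, w))
    return parent

adj = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2)],
    "c": [("a", 4), ("b", 2)],
}
print(prim(adj, "a"))   # {'a': None, 'b': 'a', 'c': 'b'}
```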
4. SIMULATION OF NETWORK PROTOCOLS: M/M/1 and M/M/1/N queues.
The M/M/1 is a single-server queue model that can be used to approximate simple systems.
Following Kendall's notation it indicates a system where
arrivals are a Poisson process;
service time is exponentially distributed;
there is one server;
the length of queue in which arriving users wait before being served is infinite;
the population of users (i.e. the pool of users) available to join the system is infinite.
Analysis
Such a system can be modelled by a birth-death process, where each state represents the
number of users in the system. As the system has an infinite queue and the population is
unlimited, the number of states the system can occupy is infinite: state 0 (no users in the
system), state 1 (1 user), state 2 (two users), etc. As the queue will never be full and the
population size is infinite, the birth rate (arrival rate), λ, is constant for every state. The
death rate (service rate), μ, is also constant for all states (apart from state 0). In fact,
regardless of the state, we can have only two events:
A new user arrives. So if the system is in state k, it goes to state k + 1 with rate λ
A user leaves the system. So if the system is in state k, it goes to state k − 1 (or k if k
is 0) with rate μ
It's easy now to see that the system is stable only if λ < μ. In fact if the death rate is less than
the birth rate, the average number of users in the queue will become infinite. I.e. the system
will not have an equilibrium.
The model can reveal interesting performance measures of the system being modelled, for
example:
The mean time a user spends in the system
The mean time a user spends waiting in the queue
The expected number of users in the system
The expected number of users in the queue
The throughput (Number of users served per unit time).
Stationary solution
We can define ρ = λ/μ, the utilization of the server (for stability, ρ < 1).
The probability that the system is in state i can be easily calculated:
Pi = (1 - ρ) ρ^i, for i = 0, 1, 2, ...
With this information, the performance measures of interest can be found; for example:
The expected number of users in the system is N = ρ/(1 - ρ), and its variance is
ρ/(1 - ρ)².
The expected number of requests in the server is ρ.
The expected number of requests in the queue is Nq = ρ²/(1 - ρ).
The total expected waiting time (queue + service) is T = 1/(μ - λ).
The expected waiting time in the queue is Wq = ρ/(μ - λ).
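The measures above can be computed numerically; the arrival and service rates in this sketch are illustrative, not taken from the text:

```python
# M/M/1 performance measures from the stationary solution,
# for illustrative rates lam (arrivals/min) and mu (services/min).
def mm1_metrics(lam, mu):
    assert lam < mu, "system is stable only if lambda < mu"
    rho = lam / mu                 # utilization
    N = rho / (1 - rho)            # mean number in the system
    Nq = rho**2 / (1 - rho)        # mean number in the queue
    T = 1 / (mu - lam)             # mean time in the system
    Wq = rho / (mu - lam)          # mean waiting time in the queue
    return {"rho": rho, "N": N, "Nq": Nq, "T": T, "Wq": Wq}

# Post-office style example: 3 customers/minute arrive, the clerk serves 4/minute.
m = mm1_metrics(3.0, 4.0)
print(m["N"])    # 3.0 customers in the system on average
```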
Example
There are many situations in which an M/M/1 model could be applied. One example is a
post office with only one employee, and therefore one queue. The customers arrive, enter
the queue, do business with the postal worker, and leave the system. If the arrival process is
Poisson and the service time is exponential, an M/M/1 model can be used. Hence, the
expected number of people in the queue can be easily calculated, along with the
probabilities they will have to wait for a particular length of time, and so forth.
• For a P-priority system, class P of highest priority
• Independent Poisson arrival processes for each class, with λi as the average arrival rate for
class i
• Service times for each class are independent of each other and of the arrival processes, and
are exponentially distributed with mean 1/μi for class i
• Both non-preemptive and preemptive priority service disciplines are considered
Solution Approach
• Define System State appropriately
• Draw the corresponding State Transition Diagram with the appropriate flows between the
states
• Write and solve the balance equations to obtain the system state probabilities
M/M/-/- Queue with Preemptive Priority
For a P-priority queue of this type, define the system state as the following P-tuple
(n1, n2,……,nP)
where
ni = number of jobs of priority class i in the queue, i = 1, ..., P
Note that the server will always be engaged by a job of the highest priority class present in
the system, i.e. by a job of class j with service rate μj if nj ≥ 1 and nj+1 = ... = nP = 0.
We illustrate the approach first for a 2-priority M/M/1 queue.
5. Case study: On LAN Training kit
(i) Observe the behavior & measure the throughput of reliable data transfer protocols
under various Bit error rates for following DLL layer protocols
a. Stop & Wait b. Sliding Window: Go-Back-N and Selective Repeat
(ii) Observe the behavior & measure the throughput under various network load
conditions for following MAC layer Protocols
a. Aloha b. CSMA, CSMA/CD & CSMA/CA c. Token Bus & Token Ring
Sliding Window Protocol
A sliding window protocol is a feature of packet-based data transmission protocols. Sliding
window protocols are used where reliable in-order delivery of packets is required, such as in
the Data Link Layer (OSI model) as well as in the Transmission Control Protocol (TCP).
Conceptually, each portion of the transmission (packets in most data link layers, but bytes
in TCP) is assigned a unique consecutive sequence number, and the receiver uses the
numbers to place received packets in the correct order, discarding duplicate packets and
identifying missing ones. The problem with this is that there is no limit on the size of the
sequence numbers that can be required.
By placing limits on the number of packets that can be transmitted or received at any given
time, a sliding window protocol allows an unlimited number of packets to be communicated
using fixed-size sequence numbers.
A transmitter that does not hear an acknowledgment cannot know if the receiver actually
received the packet; it may be that the packet was lost in transmission (or damaged; if error
detection finds an error, the packet is ignored), or it may be that an acknowledgment was
sent, but it was lost. In the latter case, the receiver must acknowledge the retransmission,
but must otherwise ignore it.
Likewise, the receiver is usually uncertain about whether its acknowledgments are being
received.
Stop-and-wait
Stop-and-wait ARQ is a method used in telecommunications to send information between
two connected devices. It ensures that information is not lost due to dropped packets and
that packets are received in the correct order. It is the simplest kind of automatic repeat-
request (ARQ) method. A stop-and-wait ARQ sender sends one frame at a time; it is a
special case of the general sliding window protocol with both transmit and receive window
sizes equal to 1. After sending each frame, the sender doesn't send any further frames until
it receives an acknowledgement (ACK) signal. After receiving a good frame, the receiver
sends an ACK. If the ACK does not reach the sender before a certain time, known as the
timeout, the sender sends the same frame again.
The above behavior is the simplest Stop-and-Wait implementation. However, in a real life
implementation there are problems to be addressed.
Typically the transmitter adds a redundancy check number to the end of each frame. The
receiver uses the redundancy check number to check for possible damage. If the receiver
sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged,
the receiver discards it and does not send an ACK -- pretending that the frame was
completely lost, not merely damaged.
One problem is where the ACK sent by the receiver is damaged or lost. In this case, the
sender doesn't receive the ACK, times out, and sends the frame again. Now the receiver has
two copies of the same frame, and doesn't know if the second one is a duplicate frame or the
next frame of the sequence carrying identical data.
Another problem is when the transmission medium has such a long latency that the sender's
timeout runs out before the frame reaches the receiver. In this case the sender resends the
same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK
for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause
problems if it assumes that the second ACK is for the next frame in the sequence.
To avoid these problems, the most common solution is to define a 1-bit sequence number in
the header of the frame. This sequence number alternates (from 0 to 1) in subsequent
frames. When the receiver sends an ACK, it includes the sequence number of the next
packet it expects. This way, the receiver can detect duplicated frames by checking if the
frame sequence numbers alternate. If two subsequent frames have the same sequence
number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent
ACKs reference the same sequence number, they are acknowledging the same frame.
Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between
packets, if the ACK and the data are received successfully, is twice the transit time
(assuming the turnaround time can be zero). The throughput on the channel is a fraction of
what it could be. To solve this problem, one can send more than one packet at a time with a
larger sequence number and use one ACK for a set. This is what is done in Go-Back-N
ARQ and the Selective Repeat ARQ.
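The inefficiency can be quantified: under the stated assumption of zero turnaround time, the sender occupies the channel for one frame time and then idles for a full round trip waiting for the ACK. A sketch with illustrative numbers:

```python
# Stop-and-wait link utilization: the channel carries data for t_frame
# seconds, then sits idle for a round trip (2 * t_prop) per frame.
def stop_and_wait_utilization(t_frame, t_prop):
    return t_frame / (t_frame + 2 * t_prop)

# 1000-bit frame on a 1 Mb/s link (t_frame = 1 ms), 2 ms one-way propagation.
u = stop_and_wait_utilization(1e-3, 2e-3)
print(u)   # 0.2 -> the channel is idle 80% of the time
```

Sliding-window protocols recover this lost time by keeping up to N frames in flight during the round trip.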
Go-Back-N
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in
which the sending process continues to send a number of frames specified by a window size
even without receiving an acknowledgement (ACK) packet from the receiver. It is a special
case of the general sliding window protocol with the transmit window size of N and receive
window size of 1.
The receiver process keeps track of the sequence number of the next frame it expects to
receive, and sends that number with every ACK it sends. The receiver will ignore any frame
that does not have the exact sequence number it expects – whether that frame is a "past"
duplicate of a frame it has already ACK'ed, or a "future" frame past the last packet it is
waiting for. Once the sender has sent all of the frames in its window, it will detect that all
of the frames since the first lost frame are outstanding, go back to the sequence number of
the last ACK it received from the receiver process, fill its window starting with that frame,
and continue the process over again.
Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since
unlike waiting for an acknowledgement for each packet, the connection is still being
utilized as packets are being sent.
CSMA
Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC)
protocol in which a node verifies the absence of other traffic before transmitting on a shared
transmission medium, such as an electrical bus, or a band of the electromagnetic spectrum.
"Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that
detects a carrier wave before trying to send. That is, it tries to detect the presence of an
encoded signal from another station before attempting to transmit. If a carrier is sensed, the
station waits for the transmission in progress to finish before initiating its own transmission.
"Multiple Access" describes the fact that multiple stations send and receive on the medium.
Transmissions by one node are generally received by all other stations using the medium.
1-persistent
When the sender (station) is ready to transmit data, it checks if the physical medium
is busy. If so, it senses the medium continually until it becomes idle, and then it
transmits a piece of data (a frame). In case of a collision, the sender waits for a
random period of time and attempts to transmit again. 1-persistent CSMA is used in
CSMA/CD systems including Ethernet.
P-persistent
When the sender is ready to send data, it checks continually if the medium is busy. If
the medium becomes idle, the sender transmits a frame with a probability p. If the
station chooses not to transmit (the probability of this event is 1-p), the sender waits
until the next available time slot and transmits again with the same probability p. This
process repeats until the frame is sent or some other sender starts transmitting. In the
latter case the sender monitors the channel, and when idle, transmits with a
probability p, and so on. p-persistent CSMA is used in CSMA/CA systems including
WiFi and other packet radio systems.
O-persistent
Each station is assigned a transmission order by a supervisor station. When medium
goes idle, stations wait for their time slot in accordance with their assigned
transmission order. The station assigned to transmit first transmits immediately. The
station assigned to transmit second waits one time slot (but by that time the first
station has already started transmitting). Stations monitor the medium for
transmissions from other stations and update their assigned order with each detected
transmission (i.e. they move one position closer to the front of the queue).
O-persistent CSMA is used by CobraNet, LonWorks and the controller area network.
CSMA/CD
Carrier sense multiple access with collision detection (CSMA/CD) is a computer
networking access method in which:
a carrier sensing scheme is used.
a transmitting data station that detects another signal while transmitting a frame,
stops transmitting that frame, transmits a jam signal, and then waits for a random
time interval before trying to send that frame again.
CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD is
used to improve CSMA performance by terminating transmission as soon as a collision is
detected, thus reducing the probability of a second collision on retry.
CSMA/CD is a layer 2 access method, not a protocol of the OSI model. When a station
wants to send some information, it uses the following algorithm:
Main procedure
1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes idle.
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.
5. Reset retransmission counters and end frame transmission.
Collision detected procedure
1. Continue transmission until minimum packet time is reached to ensure that all
receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort
transmission.
4. Calculate and wait random backoff period based on number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to each other
through a common medium (the air). Before speaking, each guest politely waits for the
current speaker to finish. If two guests start speaking at the same time, both stop and wait
for short, random periods of time (in Ethernet, this time is measured in microseconds). The
hope is that by each choosing a random period of time, both guests will not choose the same
time to try to speak again, thus avoiding another collision.
Methods for collision detection are media dependent, but on an electrical bus such as
10BASE-5 or 10BASE-2, collisions can be detected by comparing transmitted data with
received data or by recognizing a higher than normal signal amplitude on the bus.
Applications
CSMA/CD was used in bus topology Ethernet variants and in early versions of twisted-pair
Ethernet. Modern Ethernet networks built with switches and/or full-duplex connections no
longer utilize CSMA/CD. IEEE Std 802.3, which defines all Ethernet variants, for historical
reasons still bears the title "Carrier sense multiple access with collision detection
(CSMA/CD) access method and physical layer specifications".
Variations of the concept are used in radio frequency systems that rely on frequency
sharing, including Automatic Packet Reporting System.
The ALOHA protocol
Pure ALOHA
The first version of the protocol (now called "Pure ALOHA",
and the one implemented in ALOHAnet) was quite simple:
If you have data to send, send the data
If the message collides with another transmission, try resending "later"
Note that the first step implies that Pure ALOHA does not check whether the channel is
busy before transmitting. The critical aspect is the "later" concept: the quality of the backoff
scheme chosen significantly influences the efficiency of the protocol, the ultimate channel
capacity, and the predictability of its behavior.
To assess Pure ALOHA, we need to predict its throughput, the rate of (successful)
transmission of frames. First, let's make a few simplifying assumptions:
All frames have the same length.
Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a
station keeps trying to send a frame, it cannot be allowed to generate more frames to
send.)
The population of stations attempts to transmit (both new frames and old frames that
collided) according to a Poisson distribution.
Let "T" refer to the time needed to transmit one frame on the channel, and let's define
"frame-time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson
distribution over transmission-attempt amounts: that is, on average, there are G
transmission-attempts per frame-time.
[Figure: Overlapping frames in the pure ALOHA protocol; frame-time is equal to 1 for all
frames.]
Consider what needs to happen for a frame to be
transmitted successfully. Let "t" refer to the time at
which we want to send a frame. We want to use the
channel for one frame-time beginning at t, and so we need all other stations to refrain from
transmitting during this time. Moreover, we need the other stations to refrain from
transmitting between t-T and t as well, because a frame sent during this interval would
overlap with our frame.
For any frame-time, the probability of there being k transmission-attempts during that
frame-time follows the Poisson distribution:
Prob(k) = (G^k * e^(-G)) / k!
(Figure: comparison of Pure ALOHA and Slotted ALOHA on a throughput vs. traffic
load plot.)
The average number of transmission-attempts over 2 consecutive frame-times is 2G. Hence, for any
pair of consecutive frame-times, the probability of there being k transmission-attempts
during those two frame-times is:
Prob(k) = ((2G)^k * e^(-2G)) / k!
Therefore, the probability (Prob_pure) of there being zero transmission-attempts between t-T
and t+T (and thus of a successful transmission for us) is:
Prob_pure = e^(-2G)
The throughput can be calculated as the rate of transmission-attempts multiplied by the
probability of success, and so we can conclude that the throughput (S_pure) is:
S_pure = G * e^(-2G)
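This result can be checked numerically. The sketch below (illustrative only; the function name and the grid over G are our own choices) evaluates S_pure = G * e^(-2G) and locates its peak, which falls at G = 0.5 with S_pure = 1/(2e), roughly 0.184 frames per frame-time:

```python
import math

def throughput_pure(G):
    """Pure ALOHA throughput: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

# Scan G from 0.01 to 2.00 on a coarse grid; the peak lands at G = 0.5,
# giving S = 1/(2e), about 0.184 (18.4% of the channel).
best_G = max((G / 100 for G in range(1, 201)), key=throughput_pure)
print(best_G, throughput_pure(best_G))  # 0.5 and roughly 0.184
```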
Slotted ALOHA
(Figure: Slotted ALOHA protocol; boxes indicate frames, shaded boxes indicate frames
that fall in the same slot.)
An improvement to the original ALOHA protocol
was "Slotted ALOHA", which introduced discrete timeslots and increased the maximum
throughput. A station can send only at the beginning of a timeslot, and thus collisions are
reduced. In this case, we only need to worry about the transmission-attempts within 1
frame-time and not 2 consecutive frame-times, since collisions can only occur during each
timeslot. Thus, the probability of there being zero other transmission-attempts in a single
timeslot is:
Prob_slotted = e^(-G)
The probability that a frame succeeds on exactly its k-th transmission-attempt is
geometric:
Prob_slotted(k) = e^(-G) * (1 - e^(-G))^(k-1)
The throughput is:
S_slotted = G * e^(-G)
The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is
approximately 0.368 frames per frame-time, or 36.8%.
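As a quick sanity check of the comparison (a sketch; the function names are our own), the slotted-ALOHA peak at G = 1 is exactly twice the pure-ALOHA peak at G = 0.5:

```python
import math

def s_pure(G):    return G * math.exp(-2 * G)   # pure ALOHA throughput
def s_slotted(G): return G * math.exp(-G)       # slotted ALOHA throughput

# Slotted ALOHA peaks at G = 1 with S = 1/e ~ 0.368 frames per frame-time,
# exactly double pure ALOHA's peak of 1/(2e) ~ 0.184.
print(round(s_slotted(1.0), 3), round(s_pure(0.5), 3))  # 0.368 0.184
```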
Slotted ALOHA is used in low-data-rate tactical satellite communications networks by
military forces, in subscriber-based satellite communications networks, mobile telephony
call setup, and in the contactless RFID technologies.
Token Bus and Token Ring
Token Bus
Token Bus was a 4 Mbps Local Area Networking technology created by IBM to connect
their terminals to IBM mainframes. Token bus utilized a copper coaxial cable to connect
multiple end stations (terminals, workstations, shared printers, etc.) to the mainframe. The
coaxial cable served as a common communication bus and a token was created by the
Token Bus protocol to manage or 'arbitrate' access to the bus. Any station that holds the
token packet has permission to transmit data. The station releases the token when it is done
communicating or when a higher priority device needs to transmit (such as the mainframe).
This keeps two or more devices from transmitting information on the bus at the same time
and accidentally destroying the transmitted data.
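The token-passing arbitration described above can be sketched as a toy simulation (station names and frame contents are made up; a real token bus also handles lost tokens, priorities, and ring maintenance):

```python
from collections import deque

def token_bus_round(stations, frames):
    """Toy token-passing arbitration: the token visits each station in
    logical-ring order, and only the current holder may transmit, so no
    two stations ever send at the same time. `frames` maps a station
    name to its list of pending frames (hypothetical data)."""
    order = deque(stations)
    log = []
    for _ in range(len(stations)):
        holder = order[0]
        for frame in frames.get(holder, []):   # holder empties its queue
            log.append((holder, frame))
        order.rotate(-1)                       # pass the token to the next station
    return log

sent = token_bus_round(["A", "B", "C"], {"A": ["a1"], "C": ["c1", "c2"]})
print(sent)  # [('A', 'a1'), ('C', 'c1'), ('C', 'c2')]
```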
Token Bus suffered from two limitations. First, any failure in the bus left all the devices
beyond the failure unable to communicate with the rest of the network. Second,
adding more stations to the bus was somewhat difficult. Any new station that was
improperly attached was unlikely to be able to communicate and all devices beyond it were
also affected. Thus, token bus networks were seen as somewhat unreliable and difficult to
expand and upgrade.
Token Ring
Token Ring was created by IBM to compete with what became known as the DIX Standard
of Ethernet (DEC/Intel/Xerox) and to improve upon their previous Token Bus technology.
Up until that time, IBM had produced solutions that started from the mainframe and ran all
the way to the desktop (or dumb terminal), allowing them to extend their SNA protocol
from the AS/400s all the way down to the end user. Mainframes were so expensive that
many large corporations that purchased a mainframe as far back as 30-40 years ago are still
using these mainframe devices, so Token Ring is still out there and you will encounter it.
Token Ring is also still in use where high reliability and redundancy are important--such as
in large military craft.
Token Ring comes in standard 4 and 16 Mbps versions and high-speed Token Ring at
100 Mbps (IEEE 802.5t) and 1 Gbps (IEEE 802.5v). Many mainframes (and until recently,
ALL IBM mainframes) used a Front End Processor (FEP) with either a Line Interface
Coupler (LIC) at 56kbps, or a Token-ring Interface Coupler (TIC) at 16 Mbps. Cisco still
produces FEP cards for their routers (as of 2004).
Token Ring uses a ring based topology and passes a token around the network to control
access to the network wiring. This token passing scheme makes conflicts in accessing the
wire unlikely and therefore total throughput is as high as typical Ethernet and Fast Ethernet
networks. The Token Ring protocol also provides features that allow delay-sensitive
traffic to share the network with other data, which is key to a mainframe's operation. This
feature is not available in any other LAN protocol, except Asynchronous Transfer Mode
(ATM).
Token Ring does come with a higher price tag because token ring hardware is more
complex and more expensive to manufacture. As a network technology, token ring is
passing out of use because it has a maximum speed of 16 Mbps which is slow by today's
gigabit Ethernet standards.
Key terms: Token Ring, token passing, Media Access Unit (MAU), Line Interface
Coupler (LIC), Token Ring Interface Coupler (TIC).
DEVELOPMENT OF CLIENT SERVER APPLICATION:
(i) Develop ‘telnet’ client and server which uses port other than 23.
(ii) Write a finger application which prints all available information for the five users
currently logged on who have been using the network for the longest duration. Print the
information in ascending order of time.
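The selection step of task (ii) can be sketched on made-up session records: pick the five users logged in the longest, then print them in ascending order of duration. (The records below are illustrative; a real application would gather them from the finger service, not from a hard-coded list.)

```python
# Made-up (user, minutes-logged-in) records standing in for live finger data.
sessions = [("u1", 340), ("u2", 15), ("u3", 870), ("u4", 55),
            ("u5", 990), ("u6", 410), ("u7", 120)]

# Longest five sessions first, then re-sorted ascending for display.
top5 = sorted(sessions, key=lambda s: s[1], reverse=True)[:5]
for user, minutes in sorted(top5, key=lambda s: s[1]):
    print(f"{user}: {minutes} min")
```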
Telnet
Telnet is one of the earliest protocols developed
Telnet provides reliable communication via TCP
Telnet is an Application (operates at the OSI Model's Application Layer)
Telnet provides access to the command prompt remotely
Telnet utilizes TCP/IP to support communication
Information is communicated as ASCII Text
Telnet is carried inside the payload of TCP (encapsulated in TCP)
Commands: open, close, quit
Telnet was one of the first protocols developed for use over TCP/IP. Telnet is an application
designed for reliable communication via a virtual terminal. It was intended to be a
bidirectional, byte-oriented communications protocol utilizing 7-bit ASCII for use in creating
communication between terminals (Internet end points) or processes across the Internet.
Telnet is one of the oldest IP protocols and from it several other protocols were developed.
A telnet server listens for connections on TCP port 23. When a connection is opened from a
telnet client to a server, the client attempts to connect to the server machine using TCP on
port 23. The client uses a local port above 1023.
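The connection setup described above, moved to a port other than 23 as lab task (i) requires, can be sketched with plain TCP sockets. This toy server only sends a banner and echoes one line back; a real telnet server would also negotiate options (IAC sequences). All names and messages here are our own:

```python
import socket, threading

def serve_once(srv):
    """Accept one client, send a banner, echo one line back, and close."""
    conn, _ = srv.accept()
    conn.sendall(b"login: ")
    data = conn.recv(1024)                 # one line of ASCII from the client
    conn.sendall(b"hello " + data.strip() + b"\r\n")
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                 # port 0 = any free port, never 23
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
banner = cli.recv(1024)                    # the b"login: " banner
cli.sendall(b"guest\r\n")                  # client data, sent as ASCII text
reply = cli.recv(1024)                     # b"hello guest\r\n"
cli.close()
print(banner, reply)
```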
The client and server will negotiate supported Telnet options and the connection will be
established. The remote server will then provide services over that TCP connection. The
client sends in ASCII text data and the server responds according to its design. Telnet is the
most basic of all TCP based protocols. When the client receives input from the user, it
forwards that information to the telnet server.
The client normally will send in the user data one ASCII character at a time unless the
Nagle algorithm for TCP is in use. The Nagle algorithm changes the way TCP handles
segments and can alter how data gets buffered before transmission to the other end.
Commands:
Microsoft Telnet (Windows)
Commands may be abbreviated. Supported commands are:
c - close close current connection
d - display display operating parameters
o - open hostname [port] connect to hostname (default port 23).
q - quit exit telnet
set - set set options (type 'set ?' for a list)
sen - send send strings to server
st - status print status information
u - unset unset options (type 'unset ?' for a list)
?/h - help print help information
Options for the set command
Microsoft Telnet> set ?
bsasdel Backspace will be sent as delete
crlf New line mode - Causes return key to send CR & LF
delasbs Delete will be sent as backspace
escape x x is an escape character to enter telnet client prompt
localecho Turn on localecho.
logfile x x is current client log file
logging Turn on logging
mode x x is console or stream
ntlm Turn on NTLM authentication.
term x x is ansi, vt100, vt52, or vtnt
Default Operating Parameters
Escape Character is 'CTRL+]'
Will auth(NTLM Authentication)
Local echo off
New line mode - Causes return key to send CR & LF
Current mode: Console
Will term type
Preferred term type is ANSI
NAGLE ALGORITHM
The Nagle algorithm makes telnet more efficient. Rather than wrap every single
keystroke in its own complete IP datagram, TCP holds back small writes while an earlier
segment is still unacknowledged and then sends the accumulated characters together as
one segment. The effect resembles buffering a line of input and sending it as a group
once the return key is pressed (an end of line is detected on standard input by the telnet
client).
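An interactive client that needs per-keystroke delivery can turn Nagle off with the TCP_NODELAY socket option. A minimal sketch (the initial value of 0 is the typical default, but it is platform-dependent):

```python
import socket

# Nagle is normally enabled on a fresh TCP socket; TCP_NODELAY disables it
# so that each small write goes out immediately instead of being coalesced.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # usually 0 (Nagle on)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)      # disable Nagle
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero (Nagle off)
sock.close()
```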
Finger Application
In computer networking, the Name/Finger protocol and the Finger user information
protocol are simple network protocols for the exchange of human-oriented status and user
information.
Name/Finger protocol
The Name/Finger protocol is based on Request for Comments document RFC 742
(December 1977) as an interface to the name and finger programs that provide status
reports on a particular computer system or a particular person at network sites. The
finger program was written in 1971 by Les Earnest, who created the
program to solve the need of users who wanted information on other users of the network.
Information on who is logged-in was useful to check the availability of a person to meet.
This was probably the earliest form of Presence information technology that worked for
remote users over a network.
Prior to the finger program, the only way to get this information was with a who program
that showed IDs and terminal line numbers for logged-in users. Earnest named his program
after the idea that people would run their fingers down the who list to find what they were
looking for.
Finger user information protocol: Finger is based on the Transmission Control Protocol,
using TCP port 79 decimal. The local host opens a TCP connection to a remote host on the
Finger port. An RUIP (Remote User Information Program) becomes available on the remote
end of the connection to process the request. The local host sends the RUIP a one line query
based upon the Finger query specification, and waits for the RUIP to respond. The RUIP
receives and processes the query, returns an answer, then initiates the close of the
connection. The local host receives the answer and the close signal, then proceeds closing
its end of the connection.
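The query/response sequence above can be sketched end to end against a toy local RUIP (the user record and all names here are made up; a real client would open the connection to TCP port 79 on the remote host):

```python
import socket, threading

# Made-up user database standing in for real system information.
USERS = {"alice": "Login: alice  Name: Alice Example  Idle: 2m"}

def toy_ruip(srv):
    """Stand-in RUIP: read the one-line query, answer, and initiate the close."""
    conn, _ = srv.accept()
    query = conn.recv(1024).decode().strip()
    conn.sendall((USERS.get(query, "no such user") + "\r\n").encode())
    conn.close()                               # server closes first, per the text

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                     # ephemeral port instead of 79
srv.listen(1)
threading.Thread(target=toy_ruip, args=(srv,), daemon=True).start()

def finger(user, host, port):
    """Send one CRLF-terminated query line and read until the server closes."""
    with socket.create_connection((host, port)) as s:
        s.sendall(user.encode() + b"\r\n")
        chunks = []
        while data := s.recv(1024):
            chunks.append(data)
    return b"".join(chunks).decode()

answer = finger("alice", "127.0.0.1", srv.getsockname()[1])
print(answer)
```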
The Finger user information protocol is based on RFC 1288 (The Finger User Information
Protocol, December 1991, written by David Zimmerman). Typically the server side of the
protocol is implemented by a
program fingerd (for finger daemon), while the client side is implemented by the name and
finger programs which are supposed to return a friendly, human-oriented status report on
either the system at the moment or a particular person in depth. There is no required format,
and the protocol consists mostly of specifying a single command line.
The program would supply information such as whether a user is currently logged-on, e-
mail address, full name etc. As well as standard user information, finger displays the
contents of the .project and .plan files in the user's home directory. Often this file
(maintained by the user) contains either useful information about the user's current
activities, similar to micro-blogging, or alternatively all manner of humor.
Security concerns
Supplying such detailed information as e-mail addresses and full names was considered
acceptable and convenient in the early days of Internetworking, but later was considered
questionable for privacy and security reasons. Finger information has been frequently used
by hackers as a way to initiate a social engineering attack on a company's computer security
system. By using a finger client to get a list of a company's employee names, email
addresses, phone numbers, and so on, a cracker can telephone or email someone at a
company requesting information while posing as another employee. The finger daemon has
also had several exploitable security holes which crackers have used to break into systems.
The Morris worm exploited an overflow vulnerability in fingerd (among others) to spread.
The finger protocol also does not work for hosts behind Network Address Translation
(NAT): a workstation on a private address range (e.g. 192.168.0.0/16) behind a router or
firewall cannot accept the inbound TCP connection a finger query requires, and the
majority of home and office workstations connect to the Internet this way.
For these reasons, while finger was widely used during the early days of the Internet, by
the late 1990s the vast majority of sites on the Internet no longer offered the service.