CHAPTER 1
INTRODUCTION
Wireless LAN (WLAN) provides flexible data communication
systems with the features and benefits of traditional LAN technologies, such
as Ethernet and Token Ring, without the limitations of wires or cables. Here,
connectivity no longer implies physical attachment, and wireless devices are
not restricted by physical connections or fixed locations. The infrastructure of
WLANs is dynamic and mobile, offering a freedom and flexibility that can be
applied to mobile devices as well as to devices within or between buildings. It
also combines data connectivity with user mobility. In general, WLANs offer
productivity, convenience and cost advantages over traditional wired
networks. Practically, WLANs provide the final few meters of connectivity
between the wired network and the mobile user.
1.1 FUNDAMENTALS OF WLAN
Wireless LAN is a transmission system that uses radio waves as a
carrier for the propagation of data. It is also referred to as a wireless network
in its primary form. It consists of three fundamental components namely, (i)
Wireless Hosts (ii) Base Station / Access Point (AP) and (iii) Wireless link.
The workstations with wireless Network Interface Cards (NICs) are
connected to the base stations or to other workstations by using either Infrared
light (IR) or Radio frequencies (RF). RF provides longer range, higher
bandwidth and wider coverage. Most WLANs use the 2.4 GHz
frequency band, which is reserved for unlicensed devices. Wireless devices
are often referred to as wireless clients. The base station is also called an
Access Point. The IEEE 802.11 standard ensures successful communication
among these three components, and Table 1.1 describes their characteristics.
1.1.1 WLAN Architecture
The IEEE 802.11 standard permits devices to establish either peer-to-peer
networks or networks based on fixed Access Points with which the mobile
nodes can communicate. Hence, the standard defines two basic network
architectures, namely the Infrastructure network and the Ad-hoc network.
Table 1.1 Description of IEEE 802.11 Standards

Characteristic              Description
Frequency bands             2.4 GHz or 5 GHz
Maximum transmission rates  11 Mbps (802.11b), 54 Mbps (802.11a/g)
Range                       10 to 100 meters
Physical layer              Direct Sequence Spread Spectrum (DSSS), Frequency Hopping Spread Spectrum (FHSS), Orthogonal Frequency Division Multiplexing (OFDM)
Advantages                  No cables, ease of installation, flexibility, mobility, competitively priced
Disadvantages               Unlicensed spectrum, hence more interference, higher error rates and weaker security
Figure 1.1 Sketch of Ad-hoc and Infrastructure networks
In an Infrastructure network, shown in Figure 1.1, wireless hosts
communicate with a base station, which handles the broadcasting, forwarding,
coordination, synchronization and bridging of packets. The base station, or
Access Point, is the wireless hub. It acts as the gateway from the wired
network to the wireless network. It is the policy manager and is situated as
part of the wired network. All communication between stations (STAs), or
between a STA and a wired network client, goes through the AP. The area
covered by an AP is technically referred to as a Basic Service Set (BSS). A
Service Set Identifier (SSID) identifies every BSS; it is the identification
given to devices within a specific cell to enable wireless communication. An
SSID need not be unique. Hence, a Basic Service Set Identifier (BSSID) is
needed to identify an AP uniquely and is usually the Media Access Control
(MAC) address of the AP.
An Ad-hoc network, also called an Independent Basic Service Set
(IBSS), allows a group of IEEE 802.11 wireless stations to communicate with
each other in peer-to-peer mode without an AP. It is created spontaneously
and does not support access to wired networks. This type of architecture is
better suited for conference-room setups. In this thesis, the focus is on the
Infrastructure network, due to its predominant use.
Table 1.2 Certified Standards of Wi-Fi

Standard  Data Rate      Frequency  Modulation Scheme  Range    Security           Certified Year
802.11    1 or 2 Mbps    2.4 GHz    FHSS / DSSS        < 25 m   WEP / WPA          1997
802.11a   Up to 54 Mbps  5 GHz      OFDM               < 20 m   WEP / WPA          1999
802.11b   Up to 11 Mbps  2.4 GHz    DSSS               < 100 m  WEP / WPA          1999
802.11g   Up to 54 Mbps  2.4 GHz    DSSS / OFDM        < 100 m  WEP/WPA, WPA-PSK   2003
1.1.2 WLAN Standards
Wireless LANs coexist with fixed infrastructure networks to
provide mobility and flexibility to users by freeing them from the constraints
of physical wires. A number of wireless data communication systems have
been developed to utilize the 2.4 GHz Industrial, Scientific and Medical (ISM)
band and the 5 GHz Unlicensed National Information Infrastructure (U-NII)
band. The
first IEEE 802.11 standard was created as a method of extending the IEEE
802.3 (Wired Ethernet) to venture into the wireless domain. The IEEE 802.11
standard is also referred to as Wi-Fi and the Wi-Fi alliance (an independent
organization) provides Wi-Fi certification to products that conform to the
IEEE 802.11 standard.
IEEE 802.11 has been expanded considerably to include a family of
WLAN standards. The 802.11a standard operates in the 5 GHz band and uses
52-subcarrier OFDM with a maximum raw data rate of 54 Mbps. 802.11b has
a maximum raw data rate of 11 Mbps and uses DSSS. IEEE 802.11g works in
2.4 GHz band, but operates at a maximum raw data rate of 54 Mbps using
OFDM. It offers backward compatibility with 802.11b.
Table 1.3 Unapproved or Under Development IEEE 802.11x Standards
Standard Description
IEEE 802.11d International roaming extensions
IEEE 802.11e Enhancements: QoS including packet bursting
IEEE 802.11f Inter-Access Point Protocol (IAPP)
IEEE 802.11h 5 GHz Spectrum, Dynamic channel / frequency selection (DCS / DFS) and Transmit Power Control for European compatibility
IEEE 802.11i Enhanced Security (ratified on 24 June 2004)
IEEE 802.11j Extensions for Japan
IEEE 802.11k Radio resource measurements
IEEE 802.11l Reserved
IEEE 802.11m Maintenance of the standard: odds and ends
IEEE 802.11n Higher throughput improvements
IEEE 802.11o Reserved
IEEE 802.11p WAVE – Wireless Access for Vehicular Environments
IEEE 802.11q Reserved
IEEE 802.11r Fast roaming
IEEE 802.11s Wireless mesh networking
IEEE 802.11T Wireless Performance Prediction (WPP) – test methods and metrics
IEEE 802.11u Interworking with non-802 networks
IEEE 802.11v Wireless network management
IEEE 802.11w Protected management frames
IEEE 802.11e is a proposed enhancement to 802.11a and 802.11b
that offers enhanced MAC-layer Quality of Service (QoS) features, including
prioritization of data, voice and video transmission. It enhances the
Distributed Coordination Function (DCF) and the Point Coordination
Function (PCF) through a new coordination function named the Hybrid
Coordination Function (HCF). 802.11n is a standard proposed for throughput
enhancement. Under 802.11n, the raw data rate is estimated to reach a
maximum of 540 Mbps through the use of Multiple Input Multiple Output
(MIMO), signal processing and smart antenna techniques for transmitting
multiple data streams through multiple antennas. The certified 802.11
standards are shown in Table 1.2 and the unapproved standards are shown in
Table 1.3.
1.1.3 Challenges and Constraints
The following are the challenges and constraints involved in the establishment of a WLAN.
Frequency allocation
Wireless networks require all users to operate on a common frequency band, and the frequency bands for particular uses are allocated and licensed in each country.
Reliability of communication channel
It is measured as an average Bit Error Rate (BER). Packet loss
rates for packetized voice cannot exceed the order of 10^-2,
and a BER of 10^-5 is acceptable for uncoded data. Automatic
Repeat Request (ARQ) and Forward Error Correction (FEC)
are used to increase reliability.
Security
Since the transmission medium is open to anyone within the geographical range of the transmitter, it is more difficult to secure the wireless network. Hence, data privacy is accomplished by using encryption and authentication at increased cost and decreased performance.
Interference
It is mainly caused by simultaneous transmissions and multipath fading. Collisions typically result from multiple stations waiting for the channel and then starting their transmissions simultaneously. Interference is also caused by the hidden terminal problem.
Throughput
WLANs currently target data rates between 1 and 40 Mbps. Physical and bandwidth limitations do not allow the capacity of WLANs to approach that of wired LANs.
Power consumption
Wireless devices are meant to be portable and mobile, and are typically battery powered. Hence, the incorporation of energy-efficient design features, such as sleep, idle and power-down modes of operation and low-power displays, becomes mandatory. Timing beacons also play an important role in power management.
Mobility
Though freedom of mobility is the primary advantage of WLANs, system designs must accommodate handoff
between transmission boundaries and route traffic to mobile users.
Human safety
Ongoing research has to confirm whether RF transmissions
from radio and cellular devices are linked to human illness,
and whether the optical transmitters of infrared-based WLAN
systems cause vision impairment.
1.2 NEED FOR QUALITY OF SERVICE IN WLAN
QoS is the ability to treat packets differently as they transit a
network device, based on the packet contents. It is also the ability to assign
priorities to different applications and users to guarantee a certain level of
performance to a data flow. The quality parameters include data rate, delay,
jitter, BER and packet dropping probability. Without QoS, all packets on the
network compete for the same pool of resources, resulting in network
congestion. Since network capacity is insufficient, QoS needs to be
guaranteed, especially for real-time multimedia applications. However, the
best-effort network service does not support QoS. The following points
indicate the necessity of Quality Assurance (QA) in the network.
Traffic load and application requirements always grow faster
than any good network designer's estimate. Hence, QA
schemes are required to meet this growing need.
A clear priority policy to guarantee better utilization of
network resources becomes mandatory, mainly because
random distribution of bandwidth yields risky results.
Service prioritization is fundamental. After classifying
flows, it is necessary to control, prioritize and shape the
network traffic so that critical servers and applications
receive a guaranteed share of the available bandwidth.
Protocols like UDP are not designed with self-control
and congestion-avoidance procedures. Since real-time
multimedia traffic runs on UDP, it needs to be covered by
QA schemes that exercise strict control and observation.
The propagation and growth of viruses steal useful
network bandwidth. To quarantine those flows, QoS
schemes have to be applied alongside other technologies.
Hence, the incorporation of QoS mechanisms in the network, to
reserve the required resources, provide service differentiation and avoid
congestion, becomes mandatory.
1.3 LITERATURE SURVEY
Providing Quality of Service guarantees for real-time traffic in
wireless LANs is the primary objective of this dissertation. The quality
provisioning issues of intra-networking and internetworking, and the quality
assurance issues related to real-time data transfer, have been addressed only
to a limited extent. With respect to these issues, a survey of the literature on
wireless channel models, channel access schemes, congestion control, routing
mechanisms, Voice over IP (VoIP) and video streaming is carried out and
presented here.
1.3.1 Wireless Channel Models
The communication channel dictates the performance of any
communication system. The design of wireless networks requires an accurate
characterization of the radio channel. Wireless channels, due to their
unreliable behavior, differ greatly from wired channels. The received signal
strength exhibits random fluctuations in a wireless environment due to its
time-varying nature (Aguiar et al 2003). The wireless channel is an inherently
shared medium, leading to multi-user interference. This random and shared
nature makes communication over wireless channels a difficult task (Diggavi 2006).
The received Signal-to-Noise Ratio (SNR) is used to measure channel
quality in time-varying wireless channels. Channels are distinguished by their
propagation environment, such as urban, suburban, indoor, underwater and
orbital environments. WLANs primarily operate in indoor environments,
which exhibit a tremendous amount of impairment and variability. Indoor
channels are heavily dependent on the placement of walls and partitions,
which dictate the signal path within the building (Anderson et al 1995). The
characteristics of an indoor radio channel vary between different
environments and must be considered when modeling the radio channel.
Figure 1.2 Basic propagation mechanisms – Reflection (R), Diffraction
(D) and Scattering (S)
When a WLAN RF signal radiates through its environment, it
bounces off obstructions like walls, floors and other reflective surfaces.
Figure 1.2 shows basic radio wave propagation mechanisms of reflection,
diffraction and scattering. These give rise to additional radio propagation
paths beyond the direct optical Line of Sight (LoS) between the radio
transmitter and receiver. As a result, multiple signal paths arrive at the
receiver. The characteristics of these multiple paths are variable and fairly
complex. To have a standard way to simulate them, RF channel models are
used. The channel models attempt to generalize the complexities and establish
an average behavior for the channel in an indoor environment. The efficiency
of a model is measured by the computational complexity and its accuracy is
measured by the estimation error (Hassan Ali et al 2002).
Figure 1.3 An overview of wireless channel models
Complex propagation environments present the biggest obstacle to
computational efficiency. Accuracy of a model depends on the accuracy of
the locations, size of buildings and other objects present in the environment.
There are three approaches to modeling indoor radio channels, namely
deterministic, statistical and site-specific modeling, as shown in Figure 1.3.
(Figure 1.3 classifies the models as: Deterministic – Uniform Theory of
Diffraction, ParFlow; Statistical – Saleh-Valenzuela, Log-distance path loss,
WSSUS, GSCM; Site-specific – Ray tracing, FDTD.)
1.3.1.1 Deterministic Channel Models
The deterministic or theoretical approach is based on the principles of
physics and provides the accurate knowledge of channel behavior necessary
for multimedia transmission (Combeau et al 2004). These models require
exact data about the terrain, leading to a huge database of environmental
characteristics. The consideration of this large amount of terrain data makes
them highly accurate, even when applied to different environments. Though
these models do not require any measurements to be made, measurements are
used to check their accuracy. Algorithms for deterministic modeling are
highly complex and lack computational efficiency, which has restricted them
to modeling smaller areas of microcell or indoor environments. The Uniform
Theory of Diffraction (UTD) and Frequency Domain ParFlow (FDPF)
constitute deterministic propagation models.
A. Uniform Theory of Diffraction Model
UTD is a high frequency method for solving electromagnetic
scattering problems. According to UTD (Kouyoumjian et al 1974), a high
frequency electromagnetic wave incident on an edge in a curved surface gives
rise to a reflected wave, an edge diffracted wave and an edge excited wave
which propagates along a surface ray. The pertinent rays and boundaries are
projected onto a plane perpendicular to the edge at the point of diffraction.
Diffraction coefficients are determined for a perfectly conducting wedge
illuminated by plane, cylindrical, conical and spherical waves. The results are
extended to the curved wedge.
The ray tube method (Hae Won Son et al 1999) is based on UTD and
overcomes some of the limitations of ray tracing methods. It is a point-to-
point technique that guarantees high accuracy and is applicable to any
complex environment. Three types of ray tubes, namely the transmitter,
reflection and diffraction ray tubes, are defined on the plane view of a quasi-
3D environment. The ray tubes are shown in Figure 1.4. The transmitter ray
tube represents a bundle of rays from a transmitter and is described by the
position of the transmitter and a tube angle of 2π radians. The reflection ray
tube represents a bundle of rays reflected by a wall; it is described by the
position of the image on the wall, the wall number and a tube angle of less
than π radians. The diffraction ray tube consists of a family of rays diffracted
by a corner and is described by the position of the corner, the corner number
and the tube angle.
Parametric formulation of UTD (Huihui Wang et al 2005) enables
faster and more accurate evaluation of diffracted field in propagation
prediction models for indoor environments. It finds potential use in real-time
propagation computations. Earlier models neglected diffracted rays for the
purpose of simplicity, leading to poor estimation of diffracted field in the
shadow region, which is particularly prominent in indoor environments. By
using inverse problem theory, a better approximation of the diffraction
coefficient for a dielectric wedge is determined, leading to accurate
estimation.
B. ParFlow Model
The ParFlow model is based on the Lattice-Boltzmann method
(LBM), which is developed for gas-kinetic representation of fluid flow and
can be used for modeling electromagnetic wave propagation. The ParFlow
algorithm describes a system going from the excited state to the equilibrium
state, using a regular structure of data to allow parallel computation of the
diffusion process. It is based on the concept of partial flows used for the
discretization of Maxwell's equations and applied either to the electric field,
the magnetic field or both (Gorce et al 2001). The indoor environment is
described by a 2D grid. It offers better accuracy when spatial resolution is
high. Higher resolution increases the grid size and the computation time. It
can be used in time domain and frequency domain.
Figure 1.4 Types of ray tubes
Time Domain ParFlow (TDPF) is used as a discrete simulation
method for propagation analysis in indoor and hybrid indoor-outdoor
environments. The time average of the amplitude of the electric field is
computed at each point, based on a direct discretization of the Huygens
principle. It exploits a regular grid to propagate the free-space field along
wires. The field at each pixel is divided into four components, shown in
Figure 1.5, representing directive flows along the wires. Space and time are
discretized in terms of finite elementary units related by the speed of light and
the space dimension (De Sousa et al 2005). The time-domain technique is
accurate when real-time and near-field measurements are not required.
Frequency Domain ParFlow (FDPF) is a method used to solve the
discrete ParFlow equations. In a narrowband system, a linear inverse problem
in the frequency domain is used to compute the steady state. A Multi-
Resolution formulation in the frequency domain (MR-FDPF) is used to
simulate indoor radio wave propagation. It is based on the fact that all
reflections and diffractions are taken into account with no impact on the
computational load (Dela Roche et al 2006). It involves two stages: a
preprocessing stage, where a binary tree is built and a scattering matrix is
computed for each node, followed by a propagation stage, where a radio
coverage map is computed for each source. This method is efficient in
improving the computation time when multiple coverage maps corresponding
to different sources are considered.
Figure 1.5 Parflow node and its outward flow
1.3.1.2 Statistical Channel Models
In statistical or empirical models, the statistics of channel
parameters are collected from actual measurements at various locations of the
transmitter and receiver. As these models are independent of the layout and
structural details of the coverage area, the requirement to survey layouts for
individual applications is eliminated. They include selective parameters
measured from representative categories of coverage areas (Pahlavan et al
1995). Even when all the environmental factors can be separately recognized,
they are implicitly taken into account during modeling. The accuracy of these
models is dependent on the similarity between the environment considered for
analysis and the environment from which the measurements are taken. Since
these models average all the objects within the environment, they do not
report variations in signal strength around any particular object. In addition,
the relationship between site layout and channel response at a specific
location cannot be provided using statistical models. On the positive side,
they offer better computational efficiency and are easy to generalize. The
Saleh-Valenzuela model and Log-distance path loss model with Log-normal
shadowing are the empirical models used in modeling indoor radio
environments.
A. Saleh-Valenzuela (SV) Model
The Saleh-Valenzuela model is based on the physical observation that the
received signal rays arrive in clusters, with each cluster comprising several
rays (Saleh et al 1987). The clustering phenomenon is based on the
observation of measured pulse responses. The arrival times of the first rays of
the clusters, and of subsequent rays within each cluster, are modeled as
Poisson processes with different fixed rates. It was found that the arrivals
come in one or two large groups within a 200 ns observation window, and
that the expected power of the rays in a cluster decayed faster than the
expected power of the first ray of the next cluster. The main drawback of this
model was that it did not carry any information about the angles of arrival
(Spencer et al 1997), but assumed that they are independent random variables
uniformly distributed over the interval (0, 2π).
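As a sketch, the arrival-time structure described above can be generated with two nested Poisson processes. The function below is a simplified illustration: it uses fixed cluster and ray counts and hypothetical rate parameters (Lambda for clusters, lam for rays), rather than the power-decay truncation of the original model.

```python
import random

def sv_arrival_times(Lambda, lam, n_clusters, rays_per_cluster, seed=0):
    """Generate cluster and ray arrival times for a simplified
    Saleh-Valenzuela sketch. Cluster arrivals form a Poisson process
    with rate Lambda; ray arrivals within each cluster form an
    independent Poisson process with rate lam."""
    rng = random.Random(seed)
    clusters = []
    t_cluster = 0.0
    for _ in range(n_clusters):
        # inter-cluster gaps are exponential with mean 1/Lambda
        t_cluster += rng.expovariate(Lambda)
        cluster = [t_cluster]          # first ray of the cluster
        t_ray = 0.0
        for _ in range(rays_per_cluster - 1):
            # inter-ray gaps are exponential with mean 1/lam
            t_ray += rng.expovariate(lam)
            cluster.append(t_cluster + t_ray)
        clusters.append(cluster)
    return clusters
```

Here expovariate draws the exponential inter-arrival gaps that define a Poisson process; ray amplitudes and the double-exponential power decay are omitted for brevity.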
B. Log-distance path loss model with log-normal shadowing
This model is used to compute path loss exponent, a factor
dependent on propagation environment (Rappaport 2002). Average large
scale path loss for an arbitrary Transmitter-Receiver (T-R) separation is
expressed as a function of distance using a path loss exponent. The presence
of obstructions increases the value of path loss exponent which further
increases the signal loss. For a given fixed distance, frequency and
transmission power, the received signal power varies due to the objects in and
around the signal path. These stochastic, location dependent variations are
called shadowing. Random shadowing effects occurring over a large number
of measurement locations having same T-R separation, but different levels of
clutter on the propagation path results in a phenomenon referred to as log-
normal shadowing. This model includes close-in reference distance, path loss
exponent and standard deviation of the zero mean Gaussian distributed
random variable, to statistically describe the path loss model for an arbitrary
location with fixed T-R separation (Akl et al 2006). The average path loss
PL(d) for a T-R separation d becomes,

    PL(d) = PL(d0) + 10 n log(d / d0) + Xσ                (1.1)

where,

    d0 - close-in reference distance
    n  - path loss exponent
    σ  - standard deviation
    Xσ - zero-mean Gaussian distributed random variable (in dB)
By accounting for variations in environmental clutter, this model
predicts a signal level close to the measured average value.
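As a sketch, Equation (1.1) can be evaluated directly. The default values below (reference path loss of 40 dB at 1 m, exponent 3, shadowing deviation 4 dB) are illustrative indoor figures, not measured data.

```python
import math
import random

def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0, sigma=4.0, rng=None):
    """Log-distance path loss with log-normal shadowing:
    PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, all in dB.
    X_sigma is a zero-mean Gaussian random variable with standard
    deviation sigma, modeling shadowing around the distance trend."""
    x_sigma = (rng or random).gauss(0.0, sigma)
    return pl_d0 + 10.0 * n * math.log10(d / d0) + x_sigma
```

With sigma set to 0 the shadowing term vanishes and the model reduces to the plain log-distance trend, e.g. 70 dB at 10 m for the defaults above.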
C. Wide-Sense Stationary Uncorrelated Scattering (WSSUS)
model
Time- and frequency-selective fading occurring due to multipath
propagation is characterized using the WSSUS propagation model. This
model requires two sets of parameters, the Power Delay Profile (PDP) and the
Doppler Power Spectrum (DPS), to describe the propagation effects. A single
function, called the scattering function, characterizes the WSSUS model. It is
based on the assumption that the channel is wide-sense stationary, so that the
autocorrelation function in time depends only on the time difference.
Uncorrelated scattering implies that the autocorrelation function in frequency
depends only on the frequency difference. Under these assumptions, the
autocorrelation function in time and frequency yields the spaced-frequency,
spaced-time autocorrelation function (Bug et al 2002). The performance of
broadband mobile communication systems can be analyzed using this model.
Figure 1.6 GSCM model
D. Geometry-Based Stochastic Channel Model (GSCM)
GSCM is based on the directional channel. It provides a
geometrical description of the base station and mobile station in polar
coordinates (Cosovic et al 2002). In a real propagation environment, the
scatterers are distributed in groups known as clusters. Figure 1.6 shows the
GSCM model based on this cluster representation of the scatterers. The model
is based on clusters of scatterers, each scatterer representing a single
multipath component. One cluster moves along with the Mobile Station (MS)
and is known as the near cluster. The rest are called far clusters; they are
distributed throughout the cell and each one has certain visibility regions.
Each visibility region is the area from which the corresponding far cluster is
visible to the MS on its way through the cell. Circular regions are defined as
visibility regions over the route of the MS. When
the MS enters a visibility region, the far cluster becomes visible and its
scatterers start to create additional paths at the receiver. When the MS leaves
this region, the cluster is made inactive. A visibility region covers a specific
part of the MS route, depending on the cell type. GSCM distinguishes
macrocells (outdoor urban), microcells (outdoor city) and picocells (indoor).
Each of them uses different parameters for the placement of clusters and
scatterers. While macrocells require a small number of clusters (1-2), picocells
require a mean number of 16 clusters for accurate channel modeling
(Kaltenberger et al 2000).
1.3.1.3 Site-specific Channel Models
Site-specific models are based on numerical methods and can take
detailed and accurate input parameters (Iskander et al 2002). The advantage of
these models is that they can accurately simulate simple indoor environments,
but their large computational overhead may prohibit their use in complex
environments. Ray tracing and Finite-Difference Time-Domain (FDTD)
models fall under this category.
A. Ray Tracing Models
The ray-tracing algorithm has been used for accurately predicting
the site-specific radio propagation characteristics. It is gaining importance for
propagation simulation of microcells and picocells. Ray tracing is a
technique of modeling the light path by following light rays as they interact
with optical surfaces. Radio waves are similar to light waves in that the
phenomena of reflection, refraction and scattering apply to both (Nidd et al
1997). This has facilitated ray tracing approach in predicting the signal
strength of radio waves propagating in an indoor environment. Ray-tracing
approaches lead to accurate path loss models. In this approach, the region
around the transmitting antenna is divided into a cluster of rays and each ray
is traced from source to receiver. The attenuation suffered in each path is
computed. Reflections and diffractions are taken into account while tracing
each distinct ray path. Finally, all the signal components that arrive at the
receiver are added together. The ray tracing method may produce only
approximate results for realistic propagation environments when there are
inaccuracies in the environmental database.
There are several types of ray tracing methods. The brute force method
(Seidal et al 1992) involves the transmission of a large number of rays with
fixed angular separation. Intersection tests are performed on each ray to
determine the scattering points, followed by a reception test. The advantage of
this method is its applicability to complex environments, but it is
computationally complex and its accuracy is heavily dependent on the
separation angle.
The image ray tracing method (Tan et al 1996; Naruniranat et al 1999)
is a point-to-point tracing technique that produces accurate results without the
need for reception tests. The accuracy of the results depends on the accuracy
of the input data. The computational time depends on the number of input
obstacles and is improved because rays that do not reach the receiver are not
considered. Due to the difficulty in selecting scattered rays, it is not applied to
complex environments.
The two-ray ground reflection model (Silva Jr et al 2004) considers
both the direct path and the ground-reflected propagation path between
transmitter and receiver. The two-ray model, shown in Figure 1.7, is used
when a LoS path exists between the transmitting and receiving antennas.
Though this model is best suited for predicting large-scale signal strength
over long-distance radio systems with tall towers, it also provides reasonably
accurate results for line-of-sight microcell channels in urban environments.
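When the T-R separation is much larger than the antenna heights, the two-ray model is commonly approximated by a fourth-power distance law. The sketch below implements that large-distance approximation; the antenna heights and gains used in the test are illustrative values.

```python
def two_ray_received_power(pt, gt, gr, ht, hr, d):
    """Large-distance approximation of the two-ray ground reflection
    model: Pr = Pt * Gt * Gr * (ht * hr)^2 / d^4 (linear units).
    Valid only when d is much greater than ht * hr; note that received
    power falls off as d^-4 rather than the free-space d^-2."""
    return pt * gt * gr * (ht * hr) ** 2 / d ** 4
```

Doubling the distance therefore reduces the received power by a factor of 16 (12 dB), twice the free-space roll-off.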
The visibility tree approach is used to carry out an exhaustive search of
propagation sequences between the transmitter and the receiver (Sanchez et al
1996). A visibility tree has a layered structure comprising nodes and
branches. Each node represents an object and each branch represents a LoS
connection between two objects. The root node represents the transmitting
antenna, and the tree is constructed recursively, starting from the root node.
After building the visibility tree, the path of each ray is back-tracked from
leaf to root, and the rules of geometrical optics are applied at each traversed
node. The approach can be used for any propagation environment, as the path
selection process does not depend on geometry. However, the creation of the
visibility tree becomes increasingly complex as one moves from 2D to 3D
environments.
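The recursive construction and back-tracking described above can be sketched as follows. The los predicate, the depth limit and the toy objects are assumptions for illustration; a real implementation would test line of sight geometrically and apply reflection and diffraction rules while back-tracking each path.

```python
class Node:
    """One node of the visibility tree: an object plus its LoS children."""
    def __init__(self, obj):
        self.obj = obj
        self.children = []

def build_visibility_tree(tx, objects, rx, los, max_depth):
    # the root node represents the transmitting antenna
    root = Node(tx)
    def expand(node, depth):
        for obj in objects + [rx]:
            if obj != node.obj and los(node.obj, obj):
                child = Node(obj)
                node.children.append(child)
                # the receiver terminates a branch; other objects expand
                if obj != rx and depth > 1:
                    expand(child, depth - 1)
    expand(root, max_depth)
    return root

def ray_paths(root, rx):
    # back-track every branch that terminates at the receiver
    paths = []
    def walk(node, trail):
        trail = trail + [node.obj]
        if node.obj == rx:
            paths.append(trail)
        for child in node.children:
            walk(child, trail)
    walk(root, [])
    return paths
```

Each path returned by ray_paths lists the objects a ray interacts with, from the transmitter down to the receiver.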
Figure 1.7 Geometrical two ray model
B. Finite-Difference Time-Domain (FDTD) Model
FDTD method provides a simple and effective technique for
modeling the field distribution in an indoor environment. WLAN planning
requires numerous simulations for different access point locations, and thus
needs fast computation of AP’s coverage area (Gorce et al 2005). In practical
situations, FDTD is employed to reduce the complexity and to ease the
simulation of reflection and diffraction.
Radio propagation characteristics can be derived by solving
Maxwell’s equations of electromagnetic wave propagation. FDTD results in a
numerical solution of Maxwell's equations. Maxwell's time-dependent
equations are approximated by a set of finite-difference equations with
respect to specific field positions on an elementary lattice (Talbi et al 1996).
The scheme was proposed by Kane Yee, and the lattice structure is known as
the Yee lattice. In the Yee lattice shown in Figure 1.8, the electric field
components correspond to the edges of the cube and the magnetic field
components to the faces. A grid is defined over the area of interest and initial
conditions are specified. By employing central differences to approximate the
spatial and temporal derivatives, Maxwell's equations are solved directly,
with solutions determined iteratively at the nodes of the grid.
Figure 1.8 Yee Lattice
FDTD uses a leapfrog scheme for marching in time, wherein the E-
field and H-field updates are staggered. Spatial staggering places each E-field
vector component midway between a pair of H-field vector components;
conversely, each H-field component lies between a pair of E-field components.
The advantage of this explicit time-stepping scheme is that it avoids solving
simultaneous equations and yields dissipation-free numerical wave propagation.
On the negative side, an upper bound on the time step is necessary to ensure
numerical stability, and the number of nodes and the simulation runtime increase
proportionately with the size of the analyzed environment.
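The leapfrog update described above can be sketched in one dimension, which already shows the staggered E/H placement, the central differences and the stability bound. The grid size, time steps and source are invented for illustration; normalized units with the Courant number S = c·Δt/Δx are used.

```python
import numpy as np

nx, nt, S = 200, 300, 0.5   # S <= 1 respects the stability bound
ez = np.zeros(nx)           # E-field at integer grid points
hy = np.zeros(nx - 1)       # H-field staggered half a cell away

for n in range(nt):
    # H update uses the spatial central difference of E
    hy += S * (ez[1:] - ez[:-1])
    # E update, half a time step later, uses the difference of H
    ez[1:-1] += S * (hy[1:] - hy[:-1])
    # Hard additive source: inject a Gaussian pulse near the left end
    ez[1] += np.exp(-((n - 30) / 10.0) ** 2)
```

With S ≤ 1 the fields stay bounded and the pulse propagates without numerical dissipation; choosing S > 1 makes the same loop blow up, which is the time-step bound noted above.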
Hence, from this survey, it is observed that deterministic
models have proven to be computationally complex, statistical models lack
accuracy, and site-specific models offer limited applicability, making
them unsuitable for complex environments. Models have been developed in the
recent past to overcome these limitations. Deterministic-plus-statistical models
combine the two approaches to solve some of the inherent problems
(Domazetovic et al 2005). Hybrid methods that use two or more approaches
divide the original problem into a number of sub-problems and treat each one
with the most suitable approach (Skarlatos et al 2005). The theory of neural
networks has also yielded models such as the Artificial Neural Network (ANN) model,
which uses a multilayer perceptron to overcome the limitations of
deterministic and statistical models (Nescovic et al 2000). Further
advancements in environmental databases and computational resources may
pave the way towards the development of new models with improved accuracy.
1.3.2 Channel access schemes of wireless Medium Access Control
Layer – A Survey
The Quality of Service can be improved in the Data Link Layer (DLL)
by properly selecting the channel access mechanism for real-time multimedia
traffic. This part of the survey provides an overview of the existing medium
access schemes.
The MAC specification of IEEE 802.11 has similarities to the 802.3
Ethernet wired-line standard. Its MAC defines two medium access
coordination functions – the basic Distributed Coordination Function (DCF)
and the optional Point Coordination Function (PCF) (Q.Ni et al 2004). It can
operate in both the contention-based DCF mode and the contention-free PCF mode,
and it supports two types of transmissions – synchronous and asynchronous.
This standard was originally designed for best-effort services. It is to be noted
that the error rate at the physical layer of a WLAN is more than three orders of
magnitude larger than that of a wired LAN. Moreover, high collision rates and frequent
retransmissions cause unpredictable delays and jitter, which degrade the
quality of real-time data transmission. Hence, QoS-aware coordination is
necessary to reduce overhead, prioritize frames and prevent collisions, so as to meet
the delay and jitter requirements in a mobile environment.
To incorporate QoS support, two general architectural approaches,
Integrated Services (IntServ) and Differentiated Services (DiffServ), have been
devised. IntServ provides fine-grained service guarantees to individual flows,
but its maintenance of state in all routers along the path is not scalable.
DiffServ provides coarse-grained control over aggregates of flows, but it is
difficult to map between different service domains or sub-networks such as
802.11 WLAN. To overcome the problems of these two architectures, QoS
enhancement schemes for infrastructure and ad-hoc networks have been proposed.
Figure 1.9 shows the classification of service differentiation based schemes.
To introduce priorities into the IEEE 802.11 standard, three techniques
have been proposed in the Access Control (AC) scheme (Aad et al 2001):
(i) different back-off increase functions, (ii) different DCF
Inter Frame Spaces (DIFS) and (iii) different maximum frame lengths. To
introduce both priority and fairness, an access scheme called Distributed Fair
Scheduling (DFS) (Vaidya et al 2000) has been proposed; it applies the idea of
self-clocked fair queuing to the wireless domain. To support service
differentiation, a VMAC scheme (Veres et al 2001) based on DCF has been
proposed, with completely distributed service-quality estimation, radio
monitoring and admission control. Here, a virtual MAC algorithm
monitors the radio channel and estimates locally achievable service levels.
The main goal of the black burst scheme (Sobrinho et al 1996) is to minimize
the delay of real-time traffic. It strictly imposes certain requirements on high-
priority stations. The main drawback of this scheme is that it requires
constant access intervals for high-priority traffic; otherwise, performance
degrades considerably. The DC scheme (Deng et al 1999) requires only
minimal modifications to the basic 802.11 DCF.
Figure 1.9 Classification of service differentiation based schemes
It uses two MAC parameters, the back-off interval and the Inter
Frame Space (IFS) between data transmissions, to provide the
differentiation. Priority-based PCF and Distributed TDMA are the schemes
proposed under station-based service differentiation using PCF
enhancement. In the per-flow differentiation scheme (Aad et al 2002), proposed
under queue-based service differentiation, all packets enter the same
queue independent of their priorities; this, however, introduces mutual interference
between priorities, and the possible solution is to assign different queues to
different flows in the AP. To improve on the differentiation of the per-flow
scheme, IEEE 802.11e EDCF (IEEE 802.11 WG Draft, 2003) extends the
basic DCF to support up to four queues per station, each of which
contends for a Transmission Opportunity (TxOP) to send its
packets. However, this scheme does not consider the dynamics of wireless channel
conditions.
Hence, in Adaptive EDCF (AEDCF) (Romdhani et al 2003),
relative priorities are provisioned by adjusting the size of the Contention
Window (CW) of each traffic class, taking into account both application
requirements and network conditions. Another queue-based service
differentiation scheme (Fischer 2001), proposed by the IEEE 802.11e working
group using both DCF and PCF enhancements, is known as the Hybrid
Coordination Function (HCF). It combines the advantages of the distributed
contention access of EDCF and the centralized polling access of PCF.
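The contention-window adaptation idea behind these schemes can be sketched as follows. The priority factors, decrease ratio and window bounds below are illustrative values, not the exact AEDCF constants.

```python
import random

CW_MIN, CW_MAX = 15, 1023
# Lower classes back off more aggressively on collision (illustrative)
PRIORITY_FACTOR = {0: 2.0, 1: 1.5, 2: 1.2, 3: 1.1}   # class 3 = highest

def update_cw(cw, traffic_class, collided):
    """Grow CW by a per-class factor after a collision; shrink it
    slowly (rather than resetting) after a success."""
    if collided:
        return min(int(cw * PRIORITY_FACTOR[traffic_class]), CW_MAX)
    return max(int(cw * 0.8), CW_MIN)

def backoff_slots(cw):
    """Uniform random back-off drawn from the current window."""
    return random.randint(0, cw)

cw = CW_MIN
for collided in [True, True, False]:   # two collisions, then a success
    cw = update_cw(cw, traffic_class=0, collided=collided)
```

Because high-priority classes keep smaller windows under load, they win the channel more often, which is the service differentiation these DCF extensions provide.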
1.3.3 Packet Routing – Fundamentals and a Survey
Improving the quality of packet routing enhances the quality of
service demanded by time-constrained data traffic at the network layer. In
general, routing protocols use the following metrics to evaluate and
compare the best path for packets to travel.
Path length
Hop count
Delay
Bandwidth
Load
Reliability and
Communication cost
Path length is the most commonly used routing metric. For routing
protocols that assign an arbitrary cost to each link, it is the sum of the costs
associated with each link traversed. Hop count is the metric that specifies the
number of routers that a packet must pass through en route from a
source to a destination. Routing delay refers to the length of time required to
move a packet from source to destination through the internetwork. It
depends on the bandwidth of the intermediate network links, the port queues at
each router along the way, the network congestion on all intermediate
links and the physical distance to be traveled. Bandwidth refers to the
available traffic capacity of a link; it is the rating of the maximum attainable
throughput on the link. Load refers to the degree to which a network resource,
such as a router, is busy; its calculation includes CPU utilization and the
number of packets processed per second. Reliability, in the context of routing
algorithms, refers to the dependability of each network link and is usually
described in terms of the Bit Error Rate. Since performance and operating
expenditure are both important, the communication cost also needs to be calculated.
Generally, the routing algorithms are classified by type. Key
differentiators include,
Static versus Dynamic
Single path versus Multi path
Flat versus Hierarchical
Host intelligent versus Route intelligent
Intra domain versus Inter domain
Link state versus Distance vector
So far, many QoS-based routing algorithms have been proposed.
Most of them start by extending the capabilities of current best-effort routing
algorithms. The current Internet routing protocols are based on
either the Distance Vector algorithm or the Link-State algorithm. In the Distance Vector
algorithm, neighboring routers exchange routing information periodically,
so every router can learn the routing information of the others. Based on
that information, the shortest path to every destination can be computed. This
is also called the Bellman-Ford algorithm. In the Link-State algorithm,
every router advertises its link-state information to the whole network, so
every router receives the link-state information of all others. This information is
maintained in a local database at every router, from which the routing table is
calculated using Dijkstra's shortest-path algorithm. The advertising is
triggered by events, and it also happens periodically.
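The link-state side of this can be illustrated with a small sketch of Dijkstra's computation over a link-state database; the topology and link costs below are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source`, as a link-state router
    computes its routing table from the local database (sketch)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy link-state database: link costs between routers A..D
topology = {"A": {"B": 1, "C": 4},
            "B": {"A": 1, "C": 2, "D": 5},
            "C": {"A": 4, "B": 2, "D": 1},
            "D": {"B": 5, "C": 1}}
dist = dijkstra(topology, "A")
```

The distance-vector (Bellman-Ford) approach reaches the same answers, but by iterating relaxation from neighbors' advertised vectors rather than from a full topology map.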
In wireless routing protocols, routes are selected on the basis of the
shortest path or minimum hop count to the destination. However, bad
signal quality and long-hop links are not considered in the route
selection process. Hence, to eliminate bad links, several studies,
proposals and even new routing architectures that address the instability of the
wireless channel and the link quality have been suggested. The Roofnet team
provides an extensive experimental study of the wireless characteristics of a
multi-hop 802.11b network in an urban environment. Based on the link
measurements, it is concluded that routing through the shortest path is not
sufficient in multi-hop wireless networks (Couto et al 2003). It is also
reported that the correlation between SNR and loss rate on links of
intermediate quality is rather weak. Quantifying the wireless link quality
using different kinds of information and technologies has also been suggested. In
Associativity Based Routing (ABR) (Toh 1997) and Signal Stability based Adaptive
Routing (SSA) (Dube et al 1997), temporal link stability is used as the route
selection criterion. In (Punnoose et al 1999), Global Positioning System (GPS)
information is used with the propagation model to predict the received signal
power, which is then used to calculate the link quality metric. To find high-
throughput paths in multi-hop wireless networks, Dynamic Source Routing
(DSR) and Destination-Sequenced Distance Vector (DSDV) routing are
modified to use the Expected Transmission count metric (ETX) (Couto et al
2003a) as the route selection criterion. In (Hsin Mu Tsai et al 2006), a link-
quality-aware routing protocol is discussed, which selects the route based on link
quality and hop count under the AODV protocol. In this algorithm,
routes with bad links are eliminated while maintaining the connectivity of the
network.
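The ETX metric itself is simple to state: a link's ETX is the expected number of transmissions needed for a data frame and its ACK to both succeed, and a route's metric is the sum over its links. The delivery ratios below are invented for illustration.

```python
def etx(delivery_forward, delivery_reverse):
    """ETX of one link from the measured forward (d_f) and reverse
    (d_r) delivery ratios: a transmission counts as successful only
    if both the data frame and its ACK get through."""
    return 1.0 / (delivery_forward * delivery_reverse)

def path_etx(links):
    """Route metric: sum of per-link ETX values along the path."""
    return sum(etx(df, dr) for df, dr in links)

# Two decent hops can beat one lossy hop, which plain hop count misses:
two_hops = path_etx([(0.9, 0.9), (0.9, 0.9)])   # about 2.47 transmissions
one_hop = path_etx([(0.5, 0.6)])                # about 3.33 transmissions
```

This is why ETX-based DSR/DSDV can prefer a longer route over the minimum-hop one when the extra hops use higher-quality links.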
1.3.4 Congestion Control – An overview and a survey
Most networks fail to tell applications how much bandwidth is
available at any given instant. As a result, applications have no basis on which
to control their transmissions. When applications send more data than the
network can handle, buffers fill up and may overflow; the
application must then retransmit the data, which adds more traffic and further
congests the network. More generally, congestion occurs whenever the total input
rate is greater than the output link capacity; it also occurs
due to shortage of buffer space, slow links and slow processors. Hence,
congestion control is necessary to ensure the negotiated Quality of Service for
the users. To exercise congestion control at the protocol level in the network,
TCP-based congestion control algorithms have been developed. However, protocol
design in wireless networks requires the consideration of several aspects of
communication beyond those in wired networks. These complexities
arise from the wireless communication link as well as from the mobility of
the hosts involved. Two major issues need to be considered for TCP to work in
WLAN: non-congestion loss and packet reordering.
Non-congestion loss
Two types of non-congestion packet losses occur. The first is
random packet loss due to the corruption of bits; such packets
are discarded by the routers or the end hosts. These random losses occur in
environments where various factors unexpectedly disturb the
communication. The second is disconnection packet
loss, which occurs when the mobile host completely disconnects or moves out
of range of the wireless network. This type of packet loss is characteristic
of infrastructure networks (WLANs) and occurs either when a mobile host
becomes physically too distant from the base station or when it moves
between two adjacent wireless networks (handoff). The loss occurs on
every packet transmitted until the mobile host reconnects to the original base
station or a neighboring one.
Packet reordering
The problem of packet reordering occurs when a packet reaches
the destination sooner than packet(s) sent previously. Since the destination
host expects in-order packet delivery, it responds with duplicate ACKs.
When the delayed packets reach the destination, the correct cumulative ACK
is created and communication resumes as normal. When a packet
reaches the host out of order, the host reacts exactly as it would
in the case of a packet loss.
The major classes of TCP congestion control mechanisms are slow
start, congestion avoidance, fast retransmit and fast recovery.
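The first two phases can be illustrated with a toy window-evolution sketch, stated in segments with one step per round-trip time. The constants are illustrative and fast retransmit/recovery are deliberately omitted; a loss here simply models a timeout.

```python
def next_cwnd(cwnd, ssthresh, event):
    """One RTT of simplified TCP window evolution: slow start doubles
    cwnd below ssthresh, congestion avoidance adds one segment per
    RTT, and a timeout halves ssthresh and restarts slow start."""
    if event == "ack":
        if cwnd < ssthresh:
            return cwnd * 2, ssthresh      # slow start
        return cwnd + 1, ssthresh          # congestion avoidance
    if event == "loss":
        return 1, max(cwnd // 2, 2)        # timeout
    raise ValueError(event)

cwnd, ssthresh = 1, 16
trace = []
for ev in ["ack"] * 5 + ["loss"] + ["ack"] * 3:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, ev)
    trace.append(cwnd)
```

The trace grows exponentially to ssthresh, switches to linear growth, collapses at the loss and then ramps up again: the familiar sawtooth.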
1.3.4.1 Modifications of TCP on wireless
Conventional TCP schemes may suffer from severe performance
degradation in mixed wired and wireless environments. The performance
enhancement approaches fall into three types:
End-to-end
Split connection
Link layer
In split-connection schemes (Bakre et al 1995; Brown et al 1997;
Wang et al 1998), a single TCP connection is split into two TCP connections.
This requires a large amount of memory and also increases the delay; an example
is I-TCP. In link-layer schemes (Balakrishnan et al 1995; Keshav et al 1997;
Ratnam et al 1998; Balakrishnan et al 1998), Forward Error Correction and
retransmission at the link layer are employed, but accurate
timeout mechanisms are lacking and retransmission delays arise; examples include
SNOOP and ELN. End-to-end schemes require no modifications at the intermediate
routers and are simple to implement; examples include TCP
New Reno, Vegas and Westwood.
Hence, the literature on TCP variants based on the end-to-end
approach is discussed further. Until the mid-1990s, all TCPs set timeouts and
measured round-trip delays based only upon the last transmitted
packet in the transmit buffer. University of Arizona researchers Larry
Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts
are set and round-trip delays are measured for every packet in the transmit
buffer. In addition, TCP Vegas used the concept of additive increase and
additive decrease of the congestion window. However, this variant was not
widely deployed. Within the end-to-end approach, TCP-Peach (Akyildiz et al 2001) is
designed for satellite communication environments, where a large bandwidth-
delay product is the norm. Its important assumption is that the routers must
support priority queuing. In TCP-Peach plus (Akyildiz et al 2002), actual
data packets with lower priority replace the low-priority dummy packets as
the probing packets, to further improve the throughput. TCP-Westwood
(Casetti et al 2001) is a rate-based end-to-end approach, in which the sender
estimates the available network bandwidth dynamically by measuring and
averaging the rate of returning ACKs.
In TCP New Reno (Floyd et al 1999), an extension of TCP Reno
with an improved fast recovery algorithm, multiple losses cannot invoke the
slow-down and fast recovery phases; it terminates the recovery when it
receives a full ACK. It cannot distinguish between congestion losses and wireless
packet losses (Balakrishnan et al 1996). Moreover, the reduction in transmission
rate based on the Additive-Increase-Multiplicative-Decrease (AIMD) algorithm
reduces the throughput in wireless networks. In TCP New Jersey (Xu et al
2005), a bandwidth estimator and a congestion warning mechanism
differentiate the cause of packet loss at the intermediate router, but its
throughput may be reduced by the background traffic pattern. Moreover,
when it detects a packet loss or a retransmission timeout expires, it may not
recover the dropped sending rate effectively based on the cause of the loss.
TCP-NJ+ (Kim et al 2007), a modified version of TCP New Jersey, increases
the throughput by raising the dropped sending rates with its improved
RTO and error recovery mechanisms. All these variants use
normal ACKs to indicate the successful reception of packets at the receiver.
However, an ACK can give the sender information about only a single
packet loss; so, when multiple losses occur, the sender learns
only about the first lost packet and has to wait for more ACKs from the
receiver before it can retransmit all the lost packets.
To provide optimal service for unicast multimedia flows
operating in the wired Internet environment, the TCP Friendly Rate Control
(TFRC) protocol is used; it is based on TCP Reno’s throughput equation. To
eliminate the throughput degradation of TFRC in wireless networks,
network-supported and end-to-end approaches have been devised. Network-
supported enhancements of TFRC need the support of intermediate nodes in
the network, such as routers, proxies, access points or other devices.
Typical examples are ECN-based TFRC (Choudhary et al 2003), Proxy-
based TFRC (Huang et al 2002), WM-TFRC (Pyun et al 2003) and AED-
based TFRC (Arya et al 2003). These schemes suffer from deployment
difficulties. In contrast, the end-to-end enhancements avoid these difficulties
since their modifications to TFRC involve only the end nodes.
MULTFRC (Chen et al 2004) belongs to this category and modifies the
TFRC protocol only on the sender side. Another end-to-end enhancement, TFRC
Veno (Binzhou et al 2007), also modifies TFRC only at
the sender side; it replaces the throughput equation of TCP Reno with the
more advanced equation derived from the wireless TCP Veno protocol.
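The TCP Reno throughput equation underlying TFRC can be sketched directly; this is the form given in the TFRC specification, with t_RTO commonly approximated as 4·RTT, and the numeric inputs below are invented.

```python
import math

def tfrc_throughput(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly sending rate in bytes/sec: s = segment size
    (bytes), rtt = round-trip time (s), p = loss event rate,
    b = packets acknowledged per ACK, t_RTO ~ 4*RTT by default."""
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * min(1.0, 3 * math.sqrt(3 * b * p / 8))
             * p * (1 + 32 * p ** 2))
    return s / denom

rate = tfrc_throughput(s=1460, rtt=0.1, p=0.01)   # bytes per second
```

Because the rate falls roughly as 1/√p, mistaking wireless losses for congestion losses inflates p and collapses the allowed rate, which is exactly the degradation the wireless TFRC enhancements above target.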
1.3.4.2 ECN and SNACK schemes
The Explicit Congestion Notification (ECN) mechanism (Floyd 1994)
conveys network congestion from the routers to the end stations through the
Congestion Experienced (CE) bit of the IP header. The CE bit is set
when the average queue occupancy of the router exceeds a threshold. The
TCP receiver echoes this information back to the sender via the ECN-Echo
(ECE) bit until the sender acts on the congestion
notification. The ECN router thus explicitly marks packets to alert
the sender to incipient congestion.
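The router-side marking behaviour can be sketched as follows. The EWMA weight and threshold are illustrative values, and real RED/ECN gateways mark probabilistically over a region rather than at a hard cutoff.

```python
MARK_THRESHOLD = 20      # packets (illustrative)
ALPHA = 0.2              # EWMA weight for the average queue length

class EcnQueue:
    """Marks the CE bit once the averaged queue occupancy, not the
    instantaneous one, crosses the threshold (simplified sketch)."""
    def __init__(self):
        self.avg = 0.0

    def enqueue(self, packet, instantaneous_len):
        # Exponentially weighted moving average of occupancy
        self.avg = (1 - ALPHA) * self.avg + ALPHA * instantaneous_len
        if self.avg > MARK_THRESHOLD:
            packet["CE"] = 1   # congestion experienced
        return packet

q = EcnQueue()
marks = []
for qlen in [5, 10, 40, 40, 40, 40]:
    marks.append(q.enqueue({"CE": 0}, qlen)["CE"])
```

Averaging makes the signal react to sustained buildup rather than transient bursts, so the sender is warned of incipient congestion before packets are actually dropped.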
The Selective Negative Acknowledgement (SNACK) scheme (Consultative
Committee Book, 1999) combines the strengths of Selective
Acknowledgement (SACK) (Mathis et al 1996) and Negative
Acknowledgement (NAK). In this scheme, the sender receives the entire
status of the receiver buffer (received and missing segments) within a single
ACK. Moreover, SNACK requires only a small number of bytes in the TCP
header to specify the bit vector that indicates all lost packets. This is
useful when the out-of-sequence queue is very large, and also when
large windows are operated over long-delay paths. SNACK improves the
performance of TCP in wireless networks (Cheng et al 2006).
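The bit-vector idea can be sketched as follows; the encoding layout is illustrative only and does not follow the exact SNACK option format.

```python
def snack_bitvector(received, window_start, window_size):
    """Receiver side: bit i set means segment window_start+i is
    missing from the buffer (sketch of the bit-vector idea)."""
    bits = 0
    for i in range(window_size):
        if window_start + i not in received:
            bits |= 1 << i
    return bits

def missing_segments(bits, window_start, window_size):
    """Sender side: recover every lost segment from a single ACK."""
    return [window_start + i for i in range(window_size)
            if bits & (1 << i)]

received = {100, 101, 103, 106, 107}      # segments in receiver buffer
bv = snack_bitvector(received, window_start=100, window_size=8)
lost = missing_segments(bv, 100, 8)
```

One byte of bit vector covers eight segments, which is why SNACK reports many holes far more compactly than issuing one duplicate ACK per loss.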
1.3.5 Video Streaming fundamentals and a survey
Multimedia transmission over wireless networks has added a new
dimension to the way users communicate. QoS support has a profound role in
ensuring a better audio-visual experience for the end users. The stringent QoS
requirements of encoded video make video streaming over wireless links a
challenging problem. One approach to providing QoS guarantees in
WLAN is through resource reservation mechanisms, and a key component of any
reservation scheme is the characterization of traffic. Traffic characterization
eases resource allocation by accurately specifying the
traffic arrivals on a video connection and verifying the resource availability to
support the traffic at the desired QoS. Moreover, real-time applications like video
transmission require the delivery of video content within a defined period; hence,
the bounds on initial delay and buffer size dictate the performance of the
system. Frequent buffer underflows and overflows clearly affect
QoS parameters like delay and throughput. Increasing the initial delay
reduces the occurrence of an outage at the expense of an increase in buffer size.
Therefore, determining the minimum initial delay and the minimum required
buffer size provides a guaranteed QoS in a wireless environment. The related
literature survey is presented here.
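The delay/buffer trade-off can be made concrete with a toy computation: if frame i is consumed at delay + i·period, the startup delay must cover the worst lag between a frame's arrival and its nominal playout instant. The arrival trace and frame period below are invented for illustration.

```python
def min_initial_delay(arrival_ms, period_ms):
    """Smallest startup delay (ms) such that frame i, played at
    delay + i*period, has always arrived by its playout instant."""
    return max(t - i * period_ms for i, t in enumerate(arrival_ms))

arrivals = [0, 30, 150, 180, 360, 420]   # per-frame arrival times (ms)
period = 40                              # 25 frames per second
d = min_initial_delay(arrivals, period)  # 220 ms for this trace

# With this delay, no frame misses its playout deadline (no underflow):
assert all(d + i * period >= t for i, t in enumerate(arrivals))
```

A larger delay gives more slack against late frames but forces the client to buffer more frames before and during playback, which is exactly the trade-off the deterministic bounds in Chapter V formalize.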
The Concord algorithm (Sreenan et al 2000) for delay-sensitive
applications uses historical information on delay probabilities to predict the
total end-to-end delay experienced by audio and video traces traversing the
Internet. The minimal playback delay and client buffer size for the optimal
smoothing of MPEG-2 video over ATM networks are discussed in (Le et
al 2000). Upper bounds for the estimation of minimum delay and buffer size for
MPEG-4 video transmission over the UMTS Terrestrial Radio Access Network
(UTRAN) are derived through simulation in (Stockhammer et al 2004).
Deterministic traffic characterization by a family of rate-interval pairs is
discussed in Deterministic Bounding INterval Dependent (D-BIND) (Wrege
et al 1996). The adaptive media playout strategies discussed in (Steinbach et al
2001) aim at reducing the client buffer size while preserving the same
resilience against buffer underflow; they have been employed to reduce the
latency in video streaming applications. The performance of algorithms that
adaptively adjust the playout delay of audio packets in an interactive
packet-audio terminal application under varying network delays is
investigated in (Ramjee et al 1994). It is observed that an adaptive algorithm
which explicitly adjusts to sharp, spike-like increases in packet delay can
achieve a lower rate of lost packets for a given average playout delay
and a given maximum buffer size.
1.3.6 Voice over IP – An overview
Voice over IP (VoIP) is defined as the routing of voice signals over
any IP-based network. It offers the ability to make telephone calls, as in
the Public Switched Telephone Network (PSTN), and to send facsimiles
over IP-based data networks with a suitable Quality of Service. It is one of
the emerging trends feasible for carrying voice and call signaling messages
over the Internet by adopting standards like H.323, SIP, etc. The
following are the issues related to VoIP.
A significant problem associated with VoIP systems
is the delay in packet transmission, which results in echo
and talk overlap. An echo is usually perceived if the round-
trip delay of the signal from the destination back to the source
exceeds 50 ms. Talk overlap occurs if the packet delay in
either direction exceeds 250 ms. These problems can be solved
by implementing echo control and cancellation methods in
VoIP systems.
The variation in inter-packet delay, called jitter, is another
problem associated with VoIP transfer; it causes distortion in
speech. It can be minimized by holding the packets until the
slowest packets arrive at the destination. However, this
increases the delay in playing out the voice.
Another severe problem affecting packet networks is
packet loss, which is normally solved by packet retransmission.
Voice packet transmission, however, needs different
approaches. Some of the solutions include replaying the last
packets until new packets arrive, sending redundant
information to compensate for the loss, and a hybrid
approach that combines these two methods.
Current online security measures cannot adequately handle
VoIP processing requirements, and changes in protocols and
mechanisms will take time before a hassle-free, secure VoIP
service is possible. The threats include Internet vulnerabilities
such as denial-of-service attacks, phishing, snooping and spoofing.
Backward compatibility is another major factor: VoIP
protocols do not work effectively with older firewalls and
Network Address Translation (NAT), which is part of
some LANs and WANs.
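The jitter-handling idea noted above (holding packets until the slowest ones arrive) is usually realized as a fixed de-jitter buffer. The sketch below is illustrative: the 60 ms budget and the packet trace are invented values.

```python
BUFFER_DELAY = 60   # de-jitter budget in milliseconds (illustrative)

def playout_schedule(packets):
    """packets: list of (send_ms, arrival_ms) pairs. Each packet is
    played at send_ms + BUFFER_DELAY; a packet arriving after its
    playout instant is too late and is dropped (returned as None)."""
    schedule = []
    for send, arrival in packets:
        playout = send + BUFFER_DELAY
        schedule.append(playout if arrival <= playout else None)
    return schedule

# 20 ms packetization; the third packet suffers a 70 ms network delay
pkts = [(0, 30), (20, 45), (40, 110), (60, 100)]
sched = playout_schedule(pkts)
```

Enlarging BUFFER_DELAY saves more late packets but adds directly to the mouth-to-ear delay, which is why VoIP systems keep the budget as small as the observed jitter allows.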
From the related literature, it is found that previous
work on VoIP over WLAN (Chen et al 2002; Veeraragavan
et al 2001) assumed the use of PCF at the MAC layer to support VoIP.
Since PCF is not supported in most IEEE 802.11 products,
(Bladwin et al 2001; Garg et al 2002; Hiraguri et al 2002; Banchs et al
2001) studied the use of DCF to support VoIP. Various schemes for
improving voice capacity have also been investigated, and they all require
modifications to the MAC protocol used by the VoIP stations. Moreover, many
schemes have been proposed (Kuri et al 1999; Sun et al 2002; Tang
et al 2000) for reliable multicast of voice over IP networks.
1.4 JUSTIFICATION OF RESEARCH
Next-generation wireless networks are targeted at supporting
various real-time multimedia applications over packet-switched networks. In
these networks, person-to-person communication can be enhanced with high-
quality voice and video transmission. Hence, providing QoS guarantees to
various applications is an important objective in designing the forthcoming
networks. Of the two approaches practiced to support QoS guarantees (end-to-end
and network-centric), the network-centric approach is the better devised one:
routers, switches and base stations in the network are required to provide QoS
support to satisfy the demands requested by the applications. Hence, in this
dissertation,
An analysis of the appropriate channel access mechanisms of the
MAC layer that suit real-time data transfer is carried out, and the
following are proposed:
a cross-layer routing algorithm (TLAODV) for secured
routing at the network layer
congestion control algorithms (SNACK-TCP-ECN and
ETFRCV) at the transport layer
a deterministic model for the quality transfer of MPEG-4
video streams
an integrated model for VoIP transmission under a wireless
distribution system.
Some of the proposed schemes are also implemented in network
processors as application modules, within constraints, to validate certain results.
1.5 AN OVERVIEW OF PROPOSED SCHEMES AND REPORT
ORGANIZATION
The success of the deployment of next-generation networks
depends critically upon how efficiently the wireless networks can support
traffic flows with QoS guarantees. To achieve this goal, QoS provisioning
mechanisms need to be efficient and practical. For this reason, the focus is
primarily on designing efficient and practical algorithms for channel access,
routing, congestion control, rate control and admission control. The
dissertation also contains an integrated model for VoIP and a deterministic
model for video streaming. Apart from this, the implementations of the
proposed schemes in network processors are presented as application
modules. An overview of the proposed models is shown in Figure 1.10.
In Chapter I, the literature on wireless channel models,
channel access mechanisms, routing and congestion control algorithms, and the
fundamentals of video streaming and VoIP is surveyed to the required depth.
Chapter II analyzes the support and limitations of basic and
adaptive channel access schemes for real-time traffic. The channel access
schemes play an important role in meeting the demands of real-time flows and
guaranteeing the QoS requirements. From the analysis, it is shown that the
adaptive channel access mechanisms perform better than the basic schemes.
The observations are validated with an application of efficient video
transmission using adaptive channel access techniques in an existing
QoS-enhanced cross-layer architecture.
Figure 1.10 An overview of proposed models
In Chapter III, a wireless routing protocol named TLAODV is
proposed to eliminate routes with bad links. The route selection process of
this algorithm is based on a cross-layer route metric, which considers the
frame transmission efficiency of the MAC layer and the Signal-to-Noise Ratio
(SNR) of the PHY layer. Moreover, the security enhancements to the
proposed routing protocol involve the establishment of trust relationships
among nodes depending on successful and failed states of communication.
Since voice traffic is very sensitive to network delay, and voice packets are
more vulnerable to threats and attacks, the proposed algorithm is tested with
VoIP sessions. The application of a firewall-type screening / filtering
technique to secure the edge routers, and the implementation of the RSA and AES
algorithms in core routers using network processors, are also presented in
this chapter.
Though the performance of TCP is appreciable in wired
environments, it requires adequate improvement in wireless scenarios. The
performance degradation is mainly due to its inability to distinguish
congestion losses from other types of link losses. Hence, to address the issue of
loss differentiation, a SNACK-TCP with ECN algorithm, based on the TCP
variant TCP-NJ+, is proposed in Chapter IV. SNACK (Selective
Negative Acknowledgement) is incorporated to indicate multiple packet
losses at one time, and Explicit Congestion Notification is incorporated to
forecast the congestion status. Moreover, another issue of using TCP as the
transport layer protocol in video streaming applications is addressed with
an Extended TFRC Veno (ETFRCV) algorithm. It is an enhancement of the
TFRC Veno protocol, proposed to meet the special needs of video streaming;
it decouples wireless link loss from congestion loss based on the
queuing delay incurred in the router buffer.
In Chapter V, a deterministic approach to QoS provisioning
for MPEG-4 video streaming is proposed. Since the bounds on initial delay
and playout buffer size dictate the performance of video streaming in a
system, the computation of the minimum initial delay and the optimal playout buffer
size becomes mandatory to provide a guaranteed QoS for a given video
sequence transmitted over WLAN. Frame loss, Peak Signal-to-Noise Ratio
(PSNR) and Mean Opinion Score (MOS) are computed for different playout
buffer sizes to prove that the minimum playout buffer size and latency are
sufficient to provide the QoS guarantee. This chapter also incorporates an
application model for an adaptive buffer management technique that reduces
the packet loss at the video player; here, the maximum buffer size that can be
offered at the maximum transmission rate of the video packets is obtained.
This model is implemented using the IXP 2400 Network Processor hardware.
In Chapter VI, an integrated model for VoIP transmission over
WLAN under a Wireless Distribution System (WDS) is proposed.
Conventional VoIP transmission in WLAN suffers from the header overhead
being larger than the VoIP payload, and the quality of VoIP in the presence of
background traffic is not assured. The quality deterioration of
voice in a WDS environment is mainly due to the larger network delay. The
proposed model addresses these issues through a novel compression /
decompression algorithm and an improved codec system. This
chapter also includes an application model, implemented using a network
processor, to regulate the traffic at the access points with an efficient Call
Admission Control (CAC) algorithm.
The results from each chapter are consolidated and discussed in
Chapter VII. The scope for future research and the directions in which future
work can be carried out are also discussed in that chapter.