The Rapid Deployment of Wireless Networks in an
Industrial Environment
Submitted
by
Max Downey
For the Degree
of
Doctor of Philosophy
at the
Industrial Research Institute Swinburne (IRIS)
Swinburne University of Technology
Hawthorn, Victoria, Australia
August 2007
Abstract
This dissertation documents a doctoral research project undertaken at the Industrial
Research Institute Swinburne (IRIS) between 2004 and 2007.
The objective of the research project was to investigate methods, tools and algorithms
that could support the deployment of wireless networks in an industrial environment.
Specifically, emphasis was on improving the methods available for determining network
coverage and performance. To achieve this goal, a number of propagation prediction
models (some taken from the literature and others developed during the course of this
research project) were investigated.
The primary propagation model of interest was a three-dimensional (3D) ray tracing
algorithm that demonstrated an ability to produce accurate predictions from only
a small number of empirical measurements taken at a given site. Stochastic models able
to predict a network's capacity to handle many simultaneous Voice over Internet Protocol
(VoIP) calls under various conditions were also investigated and extended
during the course of the research.
Finally, a simulation tool incorporating the models investigated was developed.
The simulation program was designed to be suitable for academic investigation
while remaining robust enough to be realistically used in practical network
deployments, thus satisfying the needs of an applied research program by producing
both results of academic interest and a tangible outcome of use to industry.
Acknowledgements
Many people have lent their assistance over the course of this research
program and are deserving of my most grateful thanks.
The advice and criticisms of my supervisors, Dr Dario Toncich and Mr Choon Ng of
IRIS, Swinburne and Dr Anthony Overmars of The University of Melbourne, proved
invaluable. Without them, this dissertation would be lacking on many fronts.
The industry partner, Mr Jim MacDougall of Thin ICE, financially supported the
project; without him there would have been far fewer open doors and exciting
opportunities to put the simulation described in this project to use in real-world
situations.
Mr Maruf Rahman, who worked alongside me in this project's initial stages.
Sephrenia and Mara for all the support they provided, in their own special little ways,
and everybody else who played a part in providing me with an opportunity to work on
such an enjoyable project.
Declaration
This thesis contains no material which has been accepted for the award of any other
degree or diploma in any university or college of advanced education and, to the best
of my knowledge and belief, contains no material previously published or written by
another person except where due reference has been made.
Max Downey
2007
Contents
1 Introduction 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 A Brief History of Wireless Communications . . . . . . . . . . . . . . . . 5
1.4 Open Systems Interconnection (OSI) Layered Network Model . . . . . . . 6
1.5 Introduction to 802.11b (WiFi) . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Voice over Internet Protocol (VoIP) . . . . . . . . . . . . . . . . . . . . . 13
1.7 VoIP over a Wireless Channel . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.8 Research Issues/Questions to be Addressed . . . . . . . . . . . . . . . . . 16
1.9 Perceived Contributions of the Research . . . . . . . . . . . . . . . . . . . 21
2 Literature Review 23
2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Overview of Propagation Prediction . . . . . . . . . . . . . . . . . . . . . 25
2.3 Empirical and Statistical Propagation Models . . . . . . . . . . . . . . . . 26
2.4 Pseudo-Deterministic Methods . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5 Deterministic Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.6 Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.7 Image Based Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.8 Shoot and Bounce Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . 42
2.9 Reflections and Transmissions . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.10 Antenna Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.11 Adapting Relative Permittivity . . . . . . . . . . . . . . . . . . . . . . . . 57
2.12 Propagation Prediction Packages . . . . . . . . . . . . . . . . . . . . . . . 59
2.13 802.11b Standards and Simulation . . . . . . . . . . . . . . . . . . . . . . 62
2.14 802.11b Physical layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.15 Differential Binary Phase Shift Keying (DBPSK) . . . . . . . . . . . . . . 65
2.16 Differential Quadrature Phase Shift Keying (DQPSK) . . . . . . . . . . . 66
2.17 Complementary Code Keying (CCK) . . . . . . . . . . . . . . . . . . . . . 68
2.18 802.11b MAC layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.19 VoIP Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.20 Simulating the 802.11b MAC layer . . . . . . . . . . . . . . . . . . . . . . 78
2.21 RTP/UDP/IP and Robust Header Compression . . . . . . . . . . . . . . . 80
2.22 Summary of Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3 Methodology and Implementation 87
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.2 Method / Research and Development Process . . . . . . . . . . . . . . . . 89
3.3 Propagation Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4 Empirical and Statistical Propagation Models . . . . . . . . . . . . . . . . 92
3.4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.2 Simple Power Law . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.3 Aisle Based Path-loss . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.5 Pseudo-Deterministic Methods . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.5.2 Partition Based Path-loss . . . . . . . . . . . . . . . . . . . . . . 98
3.5.3 Partition Based Path-loss with Path-loss Exponent . . . . . . . . . 99
3.6 Ray Tracing with Material Optimization . . . . . . . . . . . . . . . . . . . 100
3.6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.6.2 Material Property Estimation Algorithm . . . . . . . . . . . . . . . 101
3.7 MAC Layer Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.8 Markov Chain Analysis of the 802.11b MAC layer . . . . . . . . . . . . . 105
3.9 Garg’s VoIP Capacity Model for DCF-Basic Access . . . . . . . . . . . . . 108
3.10 A New VoIP Capacity Model for DCF-Basic Access . . . . . . . . . . . . 110
3.11 A New VoIP Capacity Model for DCF-RTS/CTS . . . . . . . . . . . . . . 112
3.12 A New VoIP Capacity Model for ARF . . . . . . . . . . . . . . . . . . . . 114
3.13 A New VoIP Capacity Model for RTS/CTS-ARF . . . . . . . . . . . . . . 122
3.14 A New VoIP Capacity Model for CARA . . . . . . . . . . . . . . . . . . . 124
3.15 A New VoIP Capacity Model for RRAA-BASIC . . . . . . . . . . . . . . . 131
3.16 Summary of Methodology and Implementation . . . . . . . . . . . . . . . 138
4 Experimental Design 140
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.2 Measurement Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.3 Transmitter Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.4 Receiver Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.5 Measurement Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.6 Measurement Equipment Validation . . . . . . . . . . . . . . . . . . . . . 150
4.7 Overview of Java Simulation Program . . . . . . . . . . . . . . . . . . . . 151
4.7.1 Heat Map Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.7.2 Simulation Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.7.3 VoIP Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.7.4 Objects Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.7.5 Tx/Rx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.7.6 Materials Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.7.7 Messages Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.7.8 3D-Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.7.9 Using the Simulation to Evaluate Propagation Models . . . . . . . 161
4.8 Matlab MAC Layer Simulation . . . . . . . . . . . . . . . . . . . . . . . . 163
4.9 Propagation simulation comparison . . . . . . . . . . . . . . . . . . . . . . 165
4.10 Network Performance Nomenclature . . . . . . . . . . . . . . . . . . . . . 167
4.11 Signal Strength Heat Maps . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.12 VoIP Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.13 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5 Results 173
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2 Path-loss Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.3 Comparison of Propagation Models . . . . . . . . . . . . . . . . . . . . . . 180
5.3.1 Beverage Bottling Factory . . . . . . . . . . . . . . . . . . . . . . . 180
5.3.2 Automotive Production Facility . . . . . . . . . . . . . . . . . . . . 184
5.3.3 Glass Manufacturing Facility . . . . . . . . . . . . . . . . . . . . . 186
5.3.4 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.4 MAC layer simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.4.2 DCF-Basic Access Mechanism . . . . . . . . . . . . . . . . . . . . 193
5.4.3 DCF-RTS/CTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.4.4 Short Preamble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.4.5 Robust Header Compression (ROHC) . . . . . . . . . . . . . . . . 201
5.4.6 G.711a versus G.729 Audio . . . . . . . . . . . . . . . . . . . . . 202
5.4.7 ARC Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.4.8 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.4.9 MAC Layer Simulation Validation . . . . . . . . . . . . . . . . . . 207
6 Analysis 218
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.2 Impact of Propagation Models on the Rapid Deployment of Wireless Net-
works in Industrial Locations . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.3 Impact of MAC Models on the Rapid Deployment of Wireless Networks
in Industrial Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.4 Impact of Simulation on the Rapid Deployment of Wireless Networks in
Industrial Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
7 Conclusion 228
7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
7.2 Literature Review Findings . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.3 Path-loss Measurement Equipment . . . . . . . . . . . . . . . . . . . . . . 232
7.4 Propagation Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7.5 Stochastic DCF Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.6 Simulation Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7.7 Limitations and Future Research . . . . . . . . . . . . . . . . . . . . . . . 239
7.8 Closing Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Appendix 1 – Definition of Technical Terms 244
A.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
A.2 Bicubic Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
A.3 Coherence bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
A.4 Gaussian Distribution Q Function . . . . . . . . . . . . . . . . . . . . . . 246
A.5 Intersymbol Interference . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
A.6 Lognormal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
A.7 Marcum Q Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
A.8 Maximum Excess Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
A.9 Mean Excess Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
A.10 Modified Bessel Function of the First Kind . . . . . . . . . . . . . . . . 248
A.11 Nakagami Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
A.12 Power Delay Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
A.13 Rayleigh Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
A.14 Rician Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
A.15 Suzuki Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
A.16 Two Ray Model (Free Space Model with Ground Reflection) . . . . . . . . 251
A.17 Weibull Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Appendix 2 – List of Acronyms 253
Appendix 3 – Publications Resulting from Research 257
References 258
List of Figures
1.1 OSI Layered Network Model. . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Wireless Hub Arrangement for WiFi. . . . . . . . . . . . . . . . . . . . . . 10
1.3 Wireless Ad-Hoc Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 IRT with no reflections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2 IRT with one Reflection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3 IRT with two Reflections. . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4 Ray launching with angular increments. . . . . . . . . . . . . . . . . . . . 43
2.5 Angular Increment method showing bunching at poles. . . . . . . . . . . . 43
2.6 Varied azimuth method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.7 Icosahedron showing rays launched through vertices (each neighbouring
ray had an angular separation of 63 degrees). . . . . . . . . . . . . . . . . 45
2.8 Geodesic sphere with a tessellation frequency of (N=14). . . . . . . . . . . 46
2.9 Rakhmanov Spiral Method (with N = 700). . . . . . . . . . . . . . . . . . 47
2.10 Circle Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.11 Aliasing in a Shoot and Bounce Ray Tracer. . . . . . . . . . . . . . . . . . 48
2.12 Reception Spheres. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.13 Reception Sphere Double Counting [adapted from Durgin 1998]. . . . . . 50
2.14 Reflection and Transmission of Ray. . . . . . . . . . . . . . . . . . . . . . 53
2.15 H-Plane and E-Plane Antenna Patterns. . . . . . . . . . . . . . . . . . . . 55
2.16 Physical Layer Long Preamble. . . . . . . . . . . . . . . . . . . . . . . . . 63
2.17 Physical Layer Short Preamble. . . . . . . . . . . . . . . . . . . . . . . . . 64
2.18 Bit Error Probabilities for DBPSK and DQPSK. . . . . . . . . . . . . . . 67
2.19 Comparison of BER Versus Chipping Energy for 802.11b Modulation
Schemes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.20 802.11b Frame. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.21 802.11b DCF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.1 Generalized factory layout. . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.2 L-M Algorithm Logic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3 Markov Chain of 802.11 DCF. . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.4 Markov Chain of ARF (Increase or Decrease Data Rate). . . . . . . . . . 117
3.5 Markov Chain of ARF (Only Decrease Data Rate). . . . . . . . . . . . . . 118
3.6 Markov Chain of ARF (Only Increase Data Rate). . . . . . . . . . . . . . 119
3.7 Markov Chain of ARF (All Data Rates). . . . . . . . . . . . . . . . . . . . 120
3.8 Markov Chain of CARA (Increase or Decrease Data Rate). . . . . . . . . 126
3.9 Markov Chain of CARA (Only Decrease Data Rate). . . . . . . . . . . . . 128
3.10 Markov Chain of CARA (Only Increase Data Rate). . . . . . . . . . . . . 129
3.11 Markov Chain of RRAA-Basic (with four available data rates). . . . . . . 135
4.1 Block Diagram of Transmitter Rig. . . . . . . . . . . . . . . . . . . . . . . 143
4.2 Transmitter Rig. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.3 Block Diagram of Receiving Rig. . . . . . . . . . . . . . . . . . . . . . . . 145
4.4 Linear Axis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.5 Diagram of Linear Axis Showing at What Orientation Measurements were
Taken. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.6 Makeshift Directional Antenna Constructed using the Packaging from a
Wine Bottle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.7 Measured Path-loss versus Theoretical Freespace Path-loss. . . . . . . . . 150
4.8 Heat Map Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.9 Simulation Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.10 VoIP Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.11 Objects Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.12 Tx/Rx Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.13 A Selection of Antenna Patterns as Rendered by the Simulation. . . . . . 158
4.14 Materials Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4.15 Messages Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.16 Java Ray tracing in Bundaberg Factory. . . . . . . . . . . . . . . . . . . . 160
4.17 Flowchart of Matlab 802.11b MAC Layer Simulation. . . . . . . . . . . . . 164
4.18 Sample legend for propagation heat map. . . . . . . . . . . . . . . . . . . 168
4.19 Heat Maps for Different Propagation Models. . . . . . . . . . . . . . . . . 170
4.20 A contour map defining the region that can support a specified number
of simultaneous VoIP calls of a given type. . . . . . . . . . . . . . . . . . . 171
5.1 Map Showing Measurement Locations at Car Manufacturer. . . . . . . . . 177
5.2 Path-loss Replication Spread at Ford Geelong. . . . . . . . . . . . . . . . 178
5.3 Box-Plot of Path-loss Data Collected at Ford Geelong. . . . . . . . . . . . 179
5.4 3D Model of Beverage Bottling Factory. . . . . . . . . . . . . . . . . . . . 181
5.5 Results from Beverage Production Facility. . . . . . . . . . . . . . . . . . 183
5.6 Empirical Versus Predicted Power at Beverage Production Facility. . . . . 184
5.7 Results from the Automotive Production Plant. . . . . . . . . . . . . . . . 186
5.8 Results from Glass Manufacturer. . . . . . . . . . . . . . . . . . . . . . . . 187
5.9 Capacity of VoIP with SNR = 15dB per chip. . . . . . . . . . . . . . . . . 193
5.10 Capacity of VoIP as SNR decreases. . . . . . . . . . . . . . . . . . . . . . 194
5.11 Analytical capacity of different modulation schemes with varied audio
speeds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.12 Basic Access Vs RTS/CTS with SNR = 6dB per Chip. . . . . . . . . . . . 197
5.13 Basic Access Vs RTS/CTS with SNR = 4dB per Chip. . . . . . . . . . . . 197
5.14 Goodput versus BER for different DCF schemes. . . . . . . . . . . . . . . 199
5.15 Long Preamble Vs Short Preamble with SNR = 6dB per Chip. . . . . . . 201
5.16 ROHC BA (Short and Long Preamble) Vs Long Preamble BA with SNR
= 6dB per Chip. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.17 G.711a versus G.729 with SNR = 6dB per Chip. . . . . . . . . . . . . . 203
5.18 ARC Algorithm Comparison. . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.19 Validation of Analytic Model with Simulation SNR = 7dB. . . . . . . . . 208
5.20 Validation of Analytic Model with Simulation SNR = 6dB. . . . . . . . . 209
5.21 Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per
frame, 7dB SNR per chip). . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.22 Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame, 7dB SNR per chip). . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.23 Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame using ARF). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.24 Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame using RTS/CTS ARF). . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.25 Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame using CARA). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.26 Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame using RRAA-Basic). . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
A.1 Mapping of pixels from destination image to source image. . . . . . . . . . 246
A.2 Two Ray Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
List of Tables
1.1 Mean Opinion Scores [ITU-T P.800]. . . . . . . . . . . . . . . . . . . . . . 15
1.2 Popular Codec maximum MOS and Bit Rate [Cisco 2005]. . . . . . . . . . 15
5.1 Received Power Measurements from Car Manufacturer. . . . . . . . . . . 178
5.2 Values for VoIP Capacity Simulations. . . . . . . . . . . . . . . . . . . . . 191
Chapter 1
Introduction
1.1 Overview
This dissertation details the applied research conducted during the course of a Doctoral
program undertaken at the Industrial Research Institute Swinburne (IRIS) in Melbourne,
Australia, between 2004 and 2007. The research described herein was conducted
in collaboration with Thin-ICE Pty. Ltd., the Commonwealth Scientific and Industrial
Research Organisation (CSIRO), and the Cooperative Research Centre for Intelligent
Manufacturing Systems & Technologies (CRC-IMST). The research was part of a larger
project aimed at researching the operation of wireless networks in industrial environ-
ments.
The specific aims of this Doctoral research were to:
i) Investigate the operation of wireless networks in large, indoor, industrial
environments (a goal achieved through extensive site surveying); and,
ii) Use the gathered empirical data to develop predictive models that aided in
the rapid deployment of wireless networks, and ensure a satisfactory Quality
of Service (QoS) during operation.
The simulation models developed during the course of this research were implemented
in software, resulting in a propagation prediction tool that was useable in practical
applications. The performance of the propagation models was evaluated against other
models documented in research literature.
1.2 Thesis Structure
This dissertation is divided into seven chapters, an overview of which is provided here
in order to show the overall investigative research path that was followed:
• Chapter 1 (Introduction) serves to provide the context and background for this
Doctoral research, in both a historical and technical sense. It provides a description
of the problems inherent in the deployment of both wireless data and voice networks
and thus the motivation for the resulting research. It also outlines the perceived
(relative) contributions that this research made to the large body of knowledge
that already existed in the field.
• Chapter 2 (Literature Review) documents the review that was undertaken of rel-
evant texts, including conference proceedings, refereed journal publications and
Internet-sourced material, in order to determine the current state of knowledge in
the fields of propagation prediction and voice over IP (VoIP). A justification for the
pursued course of research is presented as a corollary of the review. The review
highlights that the research program documented herein was built upon a strong
foundation of previously published research and development.
• Chapter 3 (Methodology) provides a detailed, technical description of the problems
inherent in the propagation prediction of wireless networks, and the specific prop-
agation models that this research has pursued. Following on from the propagation
models, new VoIP capacity models, which extend those identified in the literature
review, are presented.
• Chapter 4 (Experimental Design) focuses on the experimental rig developed to
conduct physical site surveys, the simulation program developed to implement
the propagation prediction algorithms (and process the resultant data) and the
heuristics used to assess the performance of each investigated model.
• Chapter 5 (Results) contains the tabulated and graphed results of the experiments
conducted with the equipment described in Chapter 4. Graphs comparing the
performance of the different propagation prediction techniques are presented, as
are heat maps (graphs showing the extrapolated received power coverage across the
entire area). Graphs demonstrating the capacity of VoIP under various conditions
are also presented.
• Chapter 6 (Analysis) contains a review of the results presented in Chapter 5 and
a discussion about the usefulness of each model in the context of the rapid de-
ployment of wireless networks in an industrial environment. It also identifies the
impact that this research may have upon the broader field of wireless propagation
prediction research, VoIP capacity and its potential uses in industry.
• Chapter 7 (Conclusions) provides a summary of the key findings of the research
program, its limitations, and how these could be overcome through further
investigation.
1.3 A Brief History of Wireless Communications
In 1887, almost twenty years after James Clerk Maxwell penned four differential equa-
tions describing the behaviour of electric and magnetic fields, Heinrich Hertz managed to
empirically validate their implicit claim of electromagnetic waves moving at the speed of
light. This was achieved by using little more than a large induction coil, which caused a
spark to appear in a receiver several metres away; thus, modern wireless communications
was born.
In 1900 Guglielmo Marconi took out his famous patent No. 7777 for "tuned or syntonic
telegraphy", signalling the birth of the radio1. On Christmas Eve, 1906, Reginald Fes-
senden, who had previously invented amplitude modulation (AM) and the heterodyne
principle (which allowed for full-duplex transmission and tuning of radio frequencies
using the same aerial), made the first voice radio broadcast in history2.
1948 saw the birth of digital communications, when Claude Shannon published “A Math-
ematical Theory of Communication” [Shannon 1948], an article that highlighted the
binary digit as the fundamental element in communication.
Over the course of the 20th Century, wireless communications matured and evolved
considerably from its humble beginnings and, by the end of that century, it was taken
for granted that individuals could routinely transmit voice and data with commonplace
devices. In fact, by the beginning of the 21st Century, approximately one billion people
were reported to own some form of wireless communications device3. Wireless
communication in 21st Century factories was a logical progression of the technology de-
veloped for basic consumer communication. However, in the context of industrial com-
munication, with high-powered machinery and large amounts of clutter, factories proved
to be a complex and unforgiving environment for the propagation of RF waves. It is the
industrial environment that forms the point of interest for this Doctoral research.
1 However, in 1943 the US Supreme Court overturned Marconi's patent in favour of Nikola Tesla, who
is now credited with inventing the modern radio.
2 This fact is debatable, with a number of other people claiming the honour of making the first voice
radio broadcast.
3 According to the market research firm Datamonitor (http://www.researchandmarkets.com).
1.4 Open Systems Interconnection (OSI) Layered Network
Model
In the 1980s, the International Organization for Standardization (ISO) and the International
Telecommunication Union Telecommunication Standardization Sector (ITU-T) began to
develop the Open Systems Interconnection (OSI) networking suite. The OSI model is shown
in Figure 1.1. Over time, the OSI approach fell by the wayside, eclipsed by simpler protocol
suites such as the Internet's TCP/IP, yet the abstract model, documented in [ISO/IEC
7498-1], remained an accepted and useful framework for the development and discussion of
network protocols and standards.
The abstract OSI layered network model consists of seven layers. Each layer
uses services provided by the layer directly below it and provides abstracted services to
the layer above it. Protocols allow an entity in one host to communicate with a peer
entity at the same layer in a remote host.
One of the benefits of the hierarchical layer design is modularity. The implementation
of a single layer can be modified as long as it continues to provide the services required
by the layer above it.
The Physical layer is responsible for the transmission of bits between any pair of nodes
over a physical communications channel (e.g., copper wires, optical fibres or, for
wireless communications, air). Elements covered by specifications for this layer in-
clude:
• Voltage/power levels;
• Maximum transmission distance; and,
• Modulation/demodulation schema, etc.
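As a brief worked example of the kind of quantity such Physical layer specifications trade in, the idealized free-space path loss follows from the standard Friis relation. This is an illustrative sketch only; the function name is hypothetical, and the propagation models discussed in later chapters go well beyond free-space behaviour:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss: FSPL(dB) = 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# At 10 m in the 2.4 GHz band used by 802.11b the idealized loss is ~60 dB;
# in free space it then grows by 20 dB per decade of distance.
print(round(free_space_path_loss_db(10.0, 2.4e9), 1))
```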
The Data Link layer (DLL) is responsible for node-to-node packet delivery. In IEEE
802 networks (Section 1.5), it is composed of two sub-layers: the Media Access Control
(MAC) and the Logical Link Control (LLC) sub-layers. The MAC sub-layer is responsi-
ble for determining when a given client is allowed to utilize the physical medium. There
are a number of different methods for achieving this, the most relevant being:
• Carrier Sense Multiple Access with Collision Detection (CSMA/CD), used by wired
Ethernet; and,
• Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), used in wireless
networks.

Figure 1.1: OSI Layered Network Model.
The MAC sub-layer is also responsible for uniquely identifying each device that is con-
nected to the network (typically using a 48-bit MAC address). The LLC sub-layer’s
functionality includes flow control, error detection/correction, packet acknowledgements
and packet retransmission.
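As a brief illustrative sketch (not the full 802.11 DCF state machine), the binary exponential backoff underlying CSMA/CA can be expressed as follows; the CW_MIN/CW_MAX values are those of the 802.11b DSSS physical layer, while the helper names are hypothetical and details such as slot timing and carrier sensing are deliberately omitted:

```python
import random

# Simplified binary exponential backoff as used by CSMA/CA in 802.11 DCF.
CW_MIN, CW_MAX = 31, 1023

def next_backoff(cw: int) -> int:
    """Draw a uniform backoff counter in [0, cw] slots."""
    return random.randint(0, cw)

def on_failure(cw: int) -> int:
    """Double the contention window (2*cw + 1 keeps it of the form 2^k - 1)."""
    return min(2 * cw + 1, CW_MAX)

def on_success() -> int:
    """Reset the contention window after a successful transmission."""
    return CW_MIN

# Walk the window through a run of failed attempts: 31 -> 63 -> 127 -> 255.
cw = CW_MIN
for attempt in range(4):
    print(f"attempt {attempt}: CW={cw}, backoff={next_backoff(cw)} slots")
    cw = on_failure(cw)
```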
The Network layer is responsible for bringing together individual nodes to form a single
unified network. The Network layer provides addressing (the most prevalent form being
the 32-bit IP address) and facilitates the routing of packets through the network. It can
also be responsible for flow control and packet prioritization.
The Transport layer is responsible for the end-to-end delivery of messages. Its objective
is to provide a higher-level quality of service (QoS) than is provided by the Network layer.
This is achieved by providing flow control, ensuring packets arrive in the correct order,
error correction/detection, packet acknowledgements and packet retransmissions.
The Session layer is responsible for effective communication between two software
applications. Well-known examples of protocols that resided within the Session layer
7
include:
• Hypertext Transport Protocol (HTTP), used to retrieve web pages;
• File Transfer Protocol (FTP), used to transfer files between two computers; and,
• Real-time Transport Protocol (RTP), used for robust, real-time media delivery such
as Voice over IP (VoIP).
The Presentation layer is responsible for interpreting how received data should be pre-
sented, for example, the way HTML describes the presentation of a webpage.
Finally, the Application layer, top of the OSI layered network model, is home to the ap-
plications that utilize functionality provided by the preceding layers. These applications
include Web Browsers, Email Clients, FTP Clients, etc.
1.5 Introduction to 802.11b (WiFi)
802.11b [IEEE 802.11b] (commonly referred to as WiFi), ratified in 1999, is an amend-
ment to the 1997 802.11 standard [IEEE 802.11], which is, in turn, a member of the IEEE
802 family of standards. This family of standards serves to specify Physical layer and
Data Link layer services and protocols for variable packet sized, local and metropolitan
area networks (LANs and MANs respectively).
One of the best known and most widely used members of the 802 family of standards
was 802.3, otherwise known as Ethernet. Ethernet is a wired LAN technology that
supports communication between nodes and/or infrastructure devices over media including:
• Coaxial cable [IEEE 802.3];
• Twisted pair [IEEE 802.3i]; and,
• Fibre-Optic [IEEE 802.3j].
The first work identifiable with Ethernet (Experimental Ethernet) was made public in
1972 and, at the time of writing this dissertation, the 802.3 standards were still under
active development.
The 802.11 (and thus 802.11b) standards, like Ethernet, are a set of networking tech-
nologies, and while they share a number of similarities, they differ in their choice of
transmission medium. Whereas Ethernet specifies message transmission over fixed cop-
per or fibre lines, 802.11 is a wireless networking standard with transmission of messages
via an antenna.
The 802.11 and 802.11b standards were designed to operate in the 2.4GHz, Industrial
Scientific Medical (ISM) band. The 802.11 standard specifies operation with raw data
rates of 1 or 2 Megabits per second (Mbit/Sec or Mbps). The 802.11b standard (a
modification to the original 802.11 Physical layer specifications) raises the bar to include
5.5 and 11 Mbit/Sec. In practice, overheads introduced through methods used by wireless
stations and/or infrastructure devices to determine how media contention should be
resolved (e.g., the use of CSMA/CA in the MAC layer) mean that the maximum data
rate an application operating over such a link could realise is significantly lower.
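To illustrate the scale of this overhead, the airtime of a single data frame and its acknowledgement can be estimated. The Python sketch below uses the standard 802.11b long-preamble timing values; the payload size, MAC overhead figure and the assumption of an uncontended channel are simplifications introduced here for illustration, not figures taken from this dissertation.

```python
# Back-of-envelope estimate of application-level throughput on an
# 802.11b link, showing why it falls well short of the 11 Mbit/s raw
# rate. Timing values are the standard 802.11b long-preamble figures;
# frame sizes and the uncontended-channel assumption are illustrative.

DATA_RATE   = 11e6      # raw PHY rate (bit/s)
ACK_RATE    = 2e6       # control frames sent at a basic rate (bit/s)
PREAMBLE_US = 192       # long PLCP preamble + header (microseconds)
DIFS_US     = 50
SIFS_US     = 10
SLOT_US     = 20
CW_MIN      = 31        # minimum contention window (slots)

MAC_OVERHEAD_BYTES = 34   # approximate MAC header + FCS
ACK_BYTES          = 14
PAYLOAD_BYTES      = 1500 # assumed payload size

def tx_us(bits, rate):
    """Transmission time in microseconds for `bits` at `rate` bit/s."""
    return bits / rate * 1e6

# Mean backoff on an idle channel: half the minimum contention window.
backoff_us = CW_MIN / 2 * SLOT_US

frame_us = (DIFS_US + backoff_us + PREAMBLE_US
            + tx_us((MAC_OVERHEAD_BYTES + PAYLOAD_BYTES) * 8, DATA_RATE)
            + SIFS_US + PREAMBLE_US + tx_us(ACK_BYTES * 8, ACK_RATE))

# Bits delivered per microsecond is numerically equal to Mbit/s.
throughput_mbps = PAYLOAD_BYTES * 8 / frame_us
print(f"approximate goodput: {throughput_mbps:.1f} Mbit/s")
```

Even before any collisions or retransmissions, the fixed per-frame costs reduce the achievable rate to roughly 6 Mbit/s for large frames, and to far less for the small frames typical of VoIP.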
Though there were a few commercial products released that embraced 802.11, the sheer
number of implementation choices offered by the standard led to problems with inter-
operability between products from different vendors, and resulted in only limited accep-
tance by the marketplace. By the time 802.11b was ratified, however, interoperability
problems had largely been remedied, and the cost of equipment had fallen significantly.
These improvements, coupled with the increased throughput offered by 802.11b, led to
its rapid adoption as the de facto wireless networking standard.
802.11b typically operates in a point-to-multipoint configuration with all traffic between
wireless stations (STAs) on the wireless network travelling through a device known as
an access point (AP), a form of wireless hub. This is shown in Figure 1.2.
Figure 1.2: Wireless Hub Arrangement for WiFi.
Other configurations are also possible, such as point-to-point ad-hoc networks that allow
STAs to communicate without the need of an AP. This is shown in Figure 1.3.
The widespread adoption of wireless networking technology highlighted advantages that
wireless networks had over wired alternatives, specifically:
Figure 1.3: Wireless Ad-Hoc Network.
• Greater mobility for the user (devices could be connected to a wireless network at
any location where the received signal strength was strong enough);
• Significantly lower infrastructure costs when deploying a wireless network (as the
laying of wires and resultant work was avoided); and,
• Faster deployment (as once again, substantially less physical wiring was required).
Wireless networks also exhibited a number of disadvantages over those with physical
wiring. These included the fact that:
• Wireless network performance was generally slower than the wired alternative;
• Wireless networks were more susceptible to interference and were often interference
limited;
• Security issues could arise because wireless networks transmitted their signals
through the air, thus anyone with decoding capability could gain access to in-
formation; and,
• In a wired network, signals were clearly directed along the transmission medium
and the only significant issue was attenuation. In a wireless network, on the other
hand, signals could be reflected off objects, pass through other objects, and multiple
copies of the same signal could arrive at the receiving antenna at slightly different
times (out of phase). This made prediction of the utility of a received signal (at a
given location) quite difficult, and was one of the foci of the Doctoral research described
within this dissertation.
1.6 Voice over Internet Protocol (VoIP)
By 2006, the telephone had changed dramatically from its inception in 1876, when
Alexander Graham Bell4 ushered it into the world with the simple, unassuming words:
“Mr Watson, come here, I want to see you”.
The days of telephone calls comprised of analogue electrical signals, manually routed by
human operators, began to disappear in the latter half of the 20th Century, replaced by
digital signals routed by (and through) a diverse array of digital technologies.
One such technology seeing widespread adoption at the time this research program
commenced was referred to as ‘Voice over Internet Protocol’ (VoIP); the routing of voice
data over the Internet, or an Internet Protocol (IP) compatible network.
There were many reasons that VoIP was an attractive technology:
• VoIP operates over a packet switched network, while the Plain Old Telephone
Service (POTS) operates over a circuit switched network. This makes VoIP more
bandwidth efficient, as packet switched networks only transmit (and thus use
bandwidth) when there is data available to be sent (e.g., somebody is talking),
whereas circuit switched networks maintain a dedicated end-to-end connection for
the duration of a call (and thus use bandwidth even when there is no activity);
• VoIP enables organisations to merge their voice and data infrastructure; and,
• At the time this research commenced, VoIP was less costly than the conventional
POTS.
4Elisha Gray and Alexander Bell submitted caveats (announcement of intent to patent) for the device
now known as ‘The Telephone’ on the same day, only a couple of hours apart. Even though Gray’s patent
was submitted after Bell’s, he still decided to challenge it. After two years of litigation, the dispute was
ruled in favour of Bell.
1.7 VoIP over a Wireless Channel
When transmitting non-real-time data, if a packet contained an error (which could not
be corrected by the receiver) then, typically, the receiver would request a retransmission
of that packet (repeatedly if required) until it was received, ostensibly error free. With
most conventional protocols, undetected errors can still theoretically occur but, when
utilising a good error detection scheme, do so with such a low probability that, for
practical purposes, they can be ignored. The result of a transfer such as this would be
a bit-perfect replication of the transmitted data at the receiver (if the transmission was
allowed to complete).
In the quest for more efficient bandwidth utilisation, voice signals are often compressed
before transmission and decoded on reception, using algorithms known as codecs (coder-
decoder). The most popular codecs for use with VoIP were lossy (meaning that infor-
mation was lost during compression and a bit-perfect replication of the signal upon
reception was impossible), which, in effect, limited the maximum Mean Opinion Score
(MOS) for a voice signal, even in the presence of a perfect channel.
MOS is a heuristic for assessing the quality of a received voice signal. It was developed
by the International Telecommunication Union (ITU) and is described in [ITU-T P.800]
– a recommendation detailing ‘Methods for Subjective Determination of Transmission
Quality’.
MOS is a single number between one and five (one being the poorest quality and five
the best, as shown in Table 1.1). The standard method for arriving at an MOS was
through repeated, subjective evaluation by a group of listeners, each of whom heard a
number of simple, non-technical, meaningful test sentences transmitted over a commu-
nications channel and then rated the quality.
MOS Quality
5 Excellent
4 Good
3 Fair
2 Poor
1 Bad
Table 1.1: Mean Opinion Scores [ITU-T P.800].
Table 1.2 shows a number of popular codecs used in VoIP applications, their respective
maximum MOS and their bit rate. The bit rate indicates the number of bits that must
be transmitted per second to deliver a call.
Codec BitRate MOS
G.711 64 Kbps 4.1
G.729 8 Kbps 3.92
G.723.1 6.3 Kbps 3.9
G.726 32 Kbps 3.85
G.728 16 Kbps 3.61
Table 1.2: Popular Codec maximum MOS and Bit Rate [Cisco 2005].
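The bit rates in Table 1.2 count only the codec payload. Each VoIP packet also carries RTP, UDP and IP headers, so the bandwidth seen on the network is considerably higher, especially for low-bit-rate codecs. The Python sketch below assumes a 20 ms packetization interval and uncompressed IPv4 headers; these are common defaults, not parameters taken from the table.

```python
# On-the-wire bandwidth of the Table 1.2 codecs once RTP (12 B),
# UDP (8 B) and IPv4 (20 B) headers are added. The 20 ms packetization
# interval is an assumed common default, not part of the table.

HEADER_BYTES = 12 + 8 + 20     # RTP + UDP + IPv4
FRAME_MS     = 20              # assumed packetization interval

codec_bit_rates = {            # codec: payload bit rate (bit/s)
    "G.711":   64000,
    "G.729":    8000,
    "G.723.1":  6300,
    "G.726":   32000,
    "G.728":   16000,
}

ip_bandwidth = {}
for name, rate in codec_bit_rates.items():
    payload_bytes = rate / 8 * FRAME_MS / 1000          # bytes per packet
    packets_per_second = 1000 / FRAME_MS
    ip_bandwidth[name] = (payload_bytes + HEADER_BYTES) * 8 * packets_per_second
    print(f"{name:8s} {payload_bytes:5.1f} B payload/packet -> "
          f"{ip_bandwidth[name] / 1000:5.1f} kbit/s on the wire")
```

Under these assumptions, header overhead triples the bandwidth of G.729 (8 kbit/s payload, 24 kbit/s on the wire), which bears directly on the capacity/quality trade-off discussed in this dissertation.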
The impact that using higher and lower bit rate codecs has on the ability of a wireless
channel to carry a given number of simultaneous VoIP calls is one of the major issues
considered in this dissertation. This section served to highlight the impact that codec
selection has on the quality of the VoIP call. This obvious tradeoff between capacity
and quality is explored in depth in the remainder of this dissertation.
1.8 Research Issues/Questions to be Addressed
This Doctoral dissertation encompasses two complementary areas of research. The first
pertains to the coverage provided by wireless networks. As alluded to in Section 1.4, this
is one of the primary difficulties in the industrial deployment of wireless networks. It is
essential, when deploying a point-to-multipoint wireless network, to know, at the very
least, the coverage characteristics of the access point (AP) for two main reasons:
• If a signal was too weak at a given location, then STAs would not be able to
transmit/receive data over the network; and,
• If a signal leaked outside the facility in which the wireless network was deployed,
then outsiders could potentially eavesdrop on network activity, posing a security
risk.
A number of methods had been developed to give those responsible for the deployment
and management of wireless networks an idea of their coverage. These included site
surveying and propagation prediction.
The site survey approach is effectively a brute-force method, based upon people scout-
ing a particular location (where a wireless AP had previously been set up) and physically
measuring and recording the received signal strength. After measurements are taken,
they can be extrapolated, and a map showing received power across the environment
generated. This method has a number of serious shortcomings:
• It is an expensive and time consuming endeavour, especially in locations that cover
a wide area;
• A site survey only provides data for one particular AP setup. If, after the site
survey, the location of the AP is deemed inappropriate and the AP is subsequently
moved, a new site survey must be undertaken;
• Site survey resolution is only as good as the number of measurements taken. This
can be problematic because experimentation, such as that conducted by [Honcharenko
1992], has shown that signal strength can vary significantly even within
a small footprint. This can lead to poor network performance in areas where
site surveys predict more than adequate coverage; and,
• If the environment changes significantly (which in factories, can occur frequently
because of equipment and parts movement), then data collected by the site survey
can be rendered obsolete in a short time period.
A substantially more versatile class of methods involves the use of propagation prediction
techniques to try to predict the coverage provided by a given AP. These range from simple
statistical models that are computationally straightforward (i.e., have a fast execution
time) but prone to large errors, through to explicit deterministic solutions of Maxwell’s
equations, which are very precise but computationally intractable. Propagation prediction
techniques resolve many of the problems inherent in a manual site survey, but introduce
some of their own – such as, which model is the best to apply in a given situation?
As is often the case in engineering, the appropriate model choice depends upon a carefully
considered trade-off, in this situation, between acceptable error, model complexity and
knowledge of the environment under investigation.
The environmental knowledge requirements vary from model to model – some need no
information; some require a basic knowledge of the layout of the environment (such as
that provided by a typical floor plan), whilst others require both a floor plan and a
detailed knowledge of the composition (e.g., dielectric properties) of objects residing in
the environment.
This Doctoral research program investigated a number of models (some commonly im-
plemented in commercial site survey simulation applications, others developed over
the course of the investigation) and sought to evaluate their usefulness in the context
of:
“The Rapid Deployment of Wireless Networks in an Industrial Environment”.
For the purposes of this research, what is meant by “the rapid deployment of wireless
networks in an industrial environment” needs to be explicitly defined. Simply put,
this means “substantially reducing the time taken to plan the deployment of a wireless
network”. To achieve this, two factors were identified that affected the time it took to
plan the deployment of a wireless network in an industrial environment:
(i) The use of propagation prediction software versus reliance on site surveys;
and,
(ii) The reliance of propagation prediction software on a predefined database of
material types.
It is commonly accepted that the use of propagation prediction software (complementary
to a site survey, as opposed to sole reliance on a site survey) can reduce the time and
cost of planning the deployment of a wireless network. According to [Motorola 2006-2],
the savings can be as large as 50% and the time saved can be on the order of months.
This represents a substantial saving in both time and money, and for the purposes of
this research is what is considered a “rapid deployment”.
Most commercially available propagation prediction tools (an overview of such tools is
given in Section 2.12) rely on a pre-existing database of materials, used to calculate how
RF signals are attenuated as they propagate around the modelled site5. This approach
is very effective for locations such as schools, office blocks and residential buildings, etc.,
however is not so suited to industrial environments which can contain large amounts of
site specific materials (pasteurizers, welding robots, etc.). These materials significantly
affect the attenuation of the RF signal, yet, given their obscure, site specific nature, often
are not represented in a pre-existing database. This prevents the end user from accu-
rately specifying material properties, thus limiting the usefulness of many commercially
available propagation prediction tools in an industrial environment.
The specific goal of this applied research project was to develop a propagation prediction
tool that did not rely on a preexisting database of materials and was suitable for use
in an industrial environment. Therefore, for the purpose of this research, “the rapid
deployment of wireless networks in an industrial environment” should be considered
synonymous with, “the use of suitable propagation prediction tools to aid in planning
the deployment of wireless networks in industrial locations such as factories and ware-
houses”.
An important corollary to this definition is that models which take a substantial amount
of time to compute results (multiple hours) can still facilitate the rapid deployment of
wireless networks, provided that the time taken is substantially less than that which
would have been taken to perform a comprehensive manual site survey.

5 Some propagation prediction tools also offer modes that are not reliant on a pre-existing database
of material types, however this is generally not their preferred operating mode. The performance of
models proposed in this research is compared against, amongst other things, the performance of models
representative of propagation prediction tools operating without reliance on a database of material types.
Also considered was the capacity of VoIP over WiFi. The modular approach to net-
working meant that there were many different ways that such a VoIP system could be
implemented. The research in this dissertation focused primarily on 802.11b’s Physical
and MAC layers.
With respect to the 802.11b Physical layer Specifications, it was observed that 802.11b
specifies four different data rates (1, 2, 5.5 and 11Mbit/s) and each data rate uses a
different modulation scheme:
• 1 Mbit/s uses differential binary phase shift keying (DBPSK), with an 11-bit Barker
sequence to spread the signal;
• 2 Mbit/s uses differential quadrature phase shift keying (DQPSK), with an 11-bit
Barker sequence to spread the signal;
• 5.5 Mbit/s uses DQPSK but, instead of using a Barker sequence to spread the
signal, it uses a technique known as Complementary Code Keying (CCK) set to
encode 4 bits per symbol; and,
• 11 Mbit/s is the same as 5.5 Mbit/s, except it uses CCK set to encode 8 bits per
symbol.
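The four rates follow directly from the symbol rate and the number of data bits carried per symbol. The short Python sketch below reproduces this arithmetic; the symbol rates are the standard 802.11b values (1 Msym/s for the Barker-spread modes, 1.375 Msym/s for CCK).

```python
# Each 802.11b data rate is the product of its symbol rate and the
# number of data bits per symbol (standard 802.11b parameters).

modes = [
    # (label, symbol rate in Msym/s, data bits per symbol)
    ("DBPSK + Barker", 1.0,   1),
    ("DQPSK + Barker", 1.0,   2),
    ("CCK, 4 bits   ", 1.375, 4),
    ("CCK, 8 bits   ", 1.375, 8),
]

data_rates = [msym * bits for _, msym, bits in modes]
for (label, msym, bits), rate in zip(modes, data_rates):
    print(f"{label}: {msym} Msym/s x {bits} bit/sym = {rate:g} Mbit/s")
```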
These modulation schemes are discussed in greater technical detail in Sections 2.14 –
2.17. It is however, worthwhile to point out that each of the different data rates offered
by 802.11b utilized a different modulation scheme and each modulation scheme had
a different bit error rate (BER) for a given signal-to-noise ratio (SNR). Therefore, a
wireless network’s data rate was a parameter that had to be taken into consideration
when evaluating how channel conditions affected the performance of VoIP on a wireless
network.
With respect to the 802.11b Data Link layer specifications, 802.11b utilises CSMA/CA
as its contention resolution strategy (how different clients avoid talking at the same time,
and what procedures they follow if they do). With CSMA/CA, the probability of a
collision occurring is not negligible; therefore the number of users on the network at a
given time affects the probability that a packet may be delayed for a period long enough
to be rendered obsolete and discarded.
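The effect of network population on collision probability can be illustrated with a deliberately simplified slotted model: if each of n stations independently transmits in a given slot with probability τ, the chance that a tagged transmission collides is 1 − (1 − τ)^(n−1). This is only a sketch (in real 802.11 the transmission probability itself depends on the backoff state), but it shows the qualitative growth with n.

```python
# Simplified slotted-contention model of CSMA/CA: probability that a
# tagged transmission collides with at least one of the other n-1
# stations, each transmitting independently with probability tau.
# (In real 802.11 the per-slot probability depends on backoff state.)

def collision_probability(n_stations: int, tau: float) -> float:
    return 1.0 - (1.0 - tau) ** (n_stations - 1)

for n in (2, 5, 10, 20):
    print(f"{n:2d} stations: p_collision = {collision_probability(n, 0.03):.3f}")
```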
This research was complementary to the propagation prediction research because, if
the setup/parameters of a VoIP system were known, then a propagation simulation
could be used to estimate the signal-to-noise ratio of the communications channel at a
given location, and an idea of the capacity that a VoIP deployment would provide could
be obtained.
1.9 Perceived Contributions of the Research
It is believed that, as a result of the research documented herein, a number of contribu-
tions have been made to the field of data communications. These stem from the applied
nature of the experimental research and from refereed journal publications that arose as
a consequence of the implemented methodology and analysis of data.
During the course of this research program, a number of industrial sites were surveyed,
and experimentation carried out to acquire data related to path-loss in these environ-
ments. Data from three of these sites is presented in this dissertation. The three sites
represented a large scale automotive manufacturer (discrete component manufacture); a
medium scale automotive components manufacturer (discrete component manufacture),
and a medium scale continuous process food manufacturing company. Specifically, these
were:
• An automotive manufacturer, responsible for, amongst other things, steel and alu-
minium casting and stamping and chassis construction for use in the construction
of vehicles;
• A manufacturer and marketer of flat glass for the building and automotive sectors.
The factory surveyed was primarily responsible for automotive glazing; and,
• A brewery and bottling plant that produced non-alcoholic soft drinks.
Received power data were collected at each of these sites using an automated measure-
ment apparatus (see Chapter 4 for a detailed description) that enabled a large number
of measurements to be efficiently collected and collated. The locations of all measure-
ments were recorded by hand on a printed floor plan. These data were then used to
investigate the performance of five different propagation prediction algorithms, some
commonly found in commercial propagation prediction software, others proposed in this
research:
• A Path-loss Exponent Model;
• Aisle Based Path-loss (ABP);
• Partition based path-loss (PBP);
• PBP with path-loss exponent; and,
• A novel ray tracing implementation.
Following on from the propagation prediction research, extensive VoIP simulations were
conducted to allow a better understanding of how VoIP performed under different con-
ditions. This new knowledge was then put to use in facilitating the more reliable de-
ployment of such networks.
A significant portion of the time expended on this research was devoted to the develop-
ment, validation and assessment of a robust simulation program.
As a consequence of the research documented herein, a number of journal papers were
prepared for publication in international refereed journals. These are listed in Appendix
3 (Publications Arising from this research).
Chapter 2
Literature Review
2.1 Overview
“The wireless telegraph is not difficult to understand. The ordinary telegraph
is like a very long cat. You pull the tail in New York, and it meows in Los
Angeles. The wireless is the same, only without the cat.”
– Albert Einstein
The purpose of this literature review is to document the current state of research in the
field of propagation prediction for wireless communications and its impact on the capac-
ity of VoIP systems. A brief history of wireless communications is presented, followed
by an overview of the general theory. Texts, conference proceedings, refereed journal
publications and Internet-sourced material regarding the deterministic and statistical
modelling of multi-path signal propagation, with particular interest in research conducted
in indoor industrial environments, are then discussed.
The scope of the literature review is then narrowed, and a brief overview is presented
of the 802.11b specifications (the network standard of primary interest in this dissertation)
and of work that has been conducted in assessing 802.11b’s network performance
under certain conditions (namely, with respect to how many simultaneous VoIP calls
can be supported by a given Access Point).
Due to the large amount of literature in the field of wireless signal propagation, this
review has been selective. The literature included was either influential, in that it had
attracted a large number of citations, or particularly relevant to signal propagation or
network performance in an indoor industrial setting.
2.2 Overview of Propagation Prediction
When deploying wireless networks, to provide an acceptable level of Quality of Service
(QoS) it was important to ensure that adequate coverage was provided across all ar-
eas that required it. This provided motivation for research into models that accurately
predicted radio propagation in indoor channels. The first, pioneering model was pro-
posed in 1959 [Rice 1959]; however, since the mid-1980s (roughly coinciding with the
mainstream adoption of cellular communications) research in this field expanded dra-
matically [Hashemi 1993] and, at the time of preparing this dissertation, showed no signs
of abating.
With the exception of those models discussed to provide a historical perspective, this
literature review focuses primarily on propagation models suitable for use in the indoor
deployment of 802.11b networks and, where possible, evidence is provided to demon-
strate their suitability (or lack thereof) for use in industrial environments. The models
presented here were divided into three broad categories:
• Empirical/Statistical models – those that did not utilise detailed knowledge of the
exact nature of the wireless channel but relied upon extensive empirical measure-
ments taken at a site to produce a model. The predictive power of such models
was generally limited to the site where measurements were taken and sites that
shared similar characteristics. This class of propagation model was typically the
most computationally simple and consequently the least precise.
• Pseudo-Deterministic models – those that utilised some detailed knowledge of the
exact nature of the wireless channel, but did not attempt to solve or approximate
any form of wave propagation equation.
• Deterministic models – those that utilised detailed knowledge of the exact nature
of the wireless channel in an attempt to provide or approximate a solution to some
form of wave propagation equation. This class of models was typically the most
computationally demanding but also the most precise.
2.3 Empirical and Statistical Propagation Models
Propagation models that did not require an in-depth knowledge of the geometric nature
of the environment, but instead relied upon extensive empirical measurements taken
at a given site were known as empirical models [Durgin 1998]. Empirical models were
useful at the surveyed site and other similar sites but had limited predictive power at
sites markedly different from the original [Iskander 2002]. However, due in part to their
simplicity, empirical models remained popular and were used on a number of occasions
to characterise industrial environments [Rappaport 1989], [Yegani 1989], [Rappaport
1989-2], [Devasirvatham 1990], [Kjesbu 2000].
In 1968 Okumura performed the first notable comprehensive survey of large tracts of
land in Japan [Okumura 1968] and presented his results as a series of curves which were
later transformed into a series of parametric formulae by Hata [Hata 1980]. It was noted
that empirical data could be represented as a simple power law (in a log-log scale, this
was a linear relationship). This model was experimentally verified numerous times in
the literature [Cox 1984], [Hata 1980], etc.
The Okumura-Hata model had several limitations; the most striking, with respect to
the characterisation of a channel for use by 802.11b equipment, was the fact that beyond
a frequency of 1.5 GHz the model broke down [Hata 1980], [Okumura 1968]. This was
a problem if such models were to be used in conjunction with 802.11b (which operates
around 2.4 GHz) and was addressed by the European COST Action 231 group, who
analysed Okumura’s equations in the upper frequency bands and published a new empirical
model known as the COST-Hata model, or more simply COST 231 [Damosso 1999].
Unfortunately, these models were based on data collected from large (up to 20km) out-
door cells, which rendered them unsuitable for use in an indoor industrial environment.
However, the methodology used to construct such models remains relevant.
One of the simplest and most commonly used empirical models (suitable for use in indoor
environments) was the path-loss exponent model (or Simple Power Law), which could
be mathematically expressed as:
PL(d) = 10 n log10(d/d0) + P0    (2.1)

Where:
d is the distance in metres between transmitter and receiver;
n is the path-loss exponent; and,
P0 is the measured path-loss (dB) at a reference distance d0.
In this model the path-loss increased relative to the log of the distance, d, from source
to point of reception and the path-loss exponent, n, controlled the speed at which the
path-loss increased [Rappaport 1992-2]. The path-loss exponent could be arrived at by
using linear regression techniques on a large number of empirical measurements taken
at the site of interest [Durgin 1998].
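The regression can be sketched in a few lines of Python. Since P0 is measured at the reference distance d0, the fit reduces to a regression through the origin of (PL − P0) against 10 log10(d/d0); the measurement data below are synthetic, generated purely to illustrate the procedure.

```python
import math

# Least-squares estimate of the path-loss exponent n in Equation 2.1.
# Because P0 is known at d0, this is a regression through the origin
# of (PL - P0) against 10*log10(d/d0).

def fit_path_loss_exponent(measurements, d0=1.0, p0=40.0):
    """measurements: iterable of (distance_m, path_loss_db) pairs."""
    xs = [10 * math.log10(d / d0) for d, _ in measurements]
    ys = [pl - p0 for _, pl in measurements]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic, noise-free measurements generated with n = 2.2.
data = [(d, 10 * 2.2 * math.log10(d) + 40.0) for d in (2, 5, 10, 20, 50)]
n_hat = fit_path_loss_exponent(data)
print(f"fitted path-loss exponent: n = {n_hat:.2f}")
```

With real site measurements, the residual scatter about the fitted line corresponds to the shadowing term discussed later in this chapter.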
Friis’ freespace propagation model [Friis 1946], [Appendix 1] described the attenuation
of radio signals transmitted between two antennas in an idealised environment devoid
of any interfering material. It was straightforward to show that the path-loss exponent
model approximated Friis’ freespace propagation model when the path-loss exponent
was equal to two (n = 2).
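This equivalence is easily checked numerically. Friis' free-space loss is 20 log10(4πd/λ); setting n = 2 and P0 to the Friis loss at d0 makes Equation 2.1 reproduce it exactly. The value λ = 0.125 m below is an approximation of the 802.11b wavelength.

```python
import math

# Check that Equation 2.1 with n = 2 and P0 set to the Friis loss at
# the reference distance d0 reproduces Friis free-space path loss.

WAVELENGTH_M = 0.125   # approximate 802.11b wavelength (~2.4 GHz)
D0 = 1.0               # reference distance (metres)

def friis_loss_db(d):
    return 20 * math.log10(4 * math.pi * d / WAVELENGTH_M)

def exponent_model_db(d, n=2.0, d0=D0, p0=friis_loss_db(D0)):
    return 10 * n * math.log10(d / d0) + p0

for d in (1.0, 10.0, 100.0):
    assert abs(friis_loss_db(d) - exponent_model_db(d)) < 1e-9
print("n = 2 exponent model matches Friis free-space loss")
```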
Experimentation conducted at five different factories, and subsequent analysis in [Rap-
paport 1989], determined that the overall average path-loss exponent in factories was
n = 2.18 (varying from 1.8 for line of sight (LOS) paths, through 2.4 for lightly obstructed
paths, to 2.8 for heavily obstructed paths), surprisingly close to free space.
Empirical models provided predictions of the large-scale path-loss. In practice, however,
real world measurements showed fluctuations both above and below this large-scale pre-
dicted path-loss. These fluctuations were typically caused by multiple copies of the same
signal that had bounced off and passed through different objects in the environment and
thereby arrived at the receiver with slightly different delays and thus, different phases.
This resulted in both constructive and destructive interference, an effect referred to as
multipath interference. These fluctuations in received power, which varied over time and
as the receiver or transmitter was moved around the environment, could be divided into
two different classes [Bertoni 1988]:
• Fast Fading, characterised by rapid fluctuations in received signal strength, oc-
curred when the receiver or transmitter was moved in the local area (approximately
±10λ1, though due to the rapid time-varying nature of a typical channel
could also often be seen when the receiver or transmitter was stationary). Fast
fading was generally characterized by Rayleigh or Rician Fading [Bertoni 1988],
[Appendix 1] (more specifically Rayleigh if the transmitter and receiver were not
in line of sight (LOS) of each other and Rician if they were [Devasirvatham 1987]).
These fluctuations were usually the result of multipath interference. Experimental
studies described in [Rappaport 1989] and [Yegani 1989] concluded that the char-
acterization of fast fading as Rayleigh, Rician and, in some cases, lognormally
distributed remained valid in an indoor factory channel.
• Slow Fading, characterised by slow fluctuations in received signal strength, oc-
curred when the receiver or transmitter was moved in the global area (distances
much greater than 10λ) and was generally characterized by lognormal fading
[Devasirvatham 1987], [Rappaport 1989], [Appendix 1]. Slow fading could be caused
by shadowing (attenuation caused by objects lying between the transmitter and the receiver).
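Both fading classes named above have simple generative descriptions: the Rayleigh envelope is the magnitude of a zero-mean complex Gaussian, and adding a deterministic line-of-sight component to one axis yields a Rician envelope. The Python sketch below draws samples from each; the parameter values are illustrative only.

```python
import math
import random

# Sampling the fading envelopes discussed above. With los = 0 the
# envelope is Rayleigh distributed; a non-zero los (line of sight)
# component makes it Rician.

def fading_envelope(los=0.0, sigma=1.0):
    i = random.gauss(los, sigma)   # in-phase component (LOS shifts the mean)
    q = random.gauss(0.0, sigma)   # quadrature component
    return math.hypot(i, q)        # received envelope

random.seed(1)
rayleigh = [fading_envelope() for _ in range(10000)]
rician = [fading_envelope(los=3.0) for _ in range(10000)]
mean_rayleigh = sum(rayleigh) / len(rayleigh)
mean_rician = sum(rician) / len(rician)
print(f"mean Rayleigh envelope: {mean_rayleigh:.2f}")
print(f"mean Rician envelope:   {mean_rician:.2f}")
```

As the line-of-sight component grows relative to the scattered power, the envelope concentrates around the direct-path amplitude, which is why Rician fading is less severe than Rayleigh fading.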
Some research involving fast fading made use of the Nakagami distribution (also called
the m-distribution) [Nakagami 1960], [Appendix 1] which allowed both amplitude and
phase to be randomly distributed (as opposed to the Rayleigh distribution which as-
sumed that all amplitudes were equal and the phase uniformly distributed). Simulations
of fading based on analytical ray tracing techniques conducted by [Laureson 1992] showed
that the Nakagami distribution was useful for describing fast fading under certain con-
ditions.
[Suzuki 1977] postulated that the reason Rician and Rayleigh distributions explained fast
fading, but not slow fading, was that a key assumption in the theoretical justification of
these distributions was spatial homogeneity; an assumption that did not necessarily hold
over a large area. This could explain the apparent transition from Rayleigh or Rician
distributions over a local area to the lognormal distribution over a global one. In an
attempt to provide a distribution that accurately described the mobile channel in both
local and global areas, [Suzuki 1977] proposed the eponymous Suzuki distribution – a
combination of both Rayleigh and lognormal distributions [Appendix 1]. However, since
the original paper was published, this distribution has largely been ignored in research,
ostensibly because of the complexity of the data reductions (as its probability density
function (pdf) was given in an integral form) [Hashemi 1993]. [Hashemi 1992] also noted
that there have been a number of reports containing analysis of indoor data that showed
a lognormal fit to both local and global data sets.
When conducting site surveys it was sometimes desirable to remove the effects of Fast
Fading. This could be achieved by calculating the average received power of an antenna
moved over a spatial area that had linear dimensions (typically in a straight line or
in a circular path), thus removing the rapid variations characteristic of Fast Fading,
leading to a measurement referred to as ‘the sector average’ [Honcharenko 1992]. A
more accurate sector average was obtained when both the transmitter and the receiver
were moved [Honcharenko 1995], [Valenzuela 1997].
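The sector average described above can be sketched as a short routine: the fast-fading variations are removed by averaging in the linear (mW) domain rather than in dB. The function name and sample values below are hypothetical:

```python
import math

def sector_average_dbm(samples_dbm):
    """Average received power over a local spatial window.

    Fast fading is removed by averaging in the linear (mW) domain
    rather than in dB, then converting back to dBm.
    """
    linear = [10.0 ** (p / 10.0) for p in samples_dbm]      # dBm -> mW
    return 10.0 * math.log10(sum(linear) / len(linear))     # mW -> dBm

# Samples taken along a short straight track (hypothetical values):
track = [-62.1, -58.4, -71.3, -60.0, -65.7]
print(round(sector_average_dbm(track), 1))
```

Averaging in the linear domain means the stronger samples dominate, as they do physically; a plain mean of the dB values would underestimate the sector average.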
[Rappaport 1989] claimed to have produced the first report detailing radio propagation
in a factory environment where, as previously noted, it was determined that the average
path-loss exponent in a factory was n = 2.18 (based on a sample set of five factories).
It was also shown that deviations about the median were well described by a zero mean
lognormal distribution (with σ = 7.1 dB). [Rappaport 1989-2], using the same dataset as
[Rappaport 1989], concluded that multipath propagation (dependent on factory inventory,
building structure and the surrounding topography), rather than environmental
noise, limited the capacity of radio links operating above 1GHz. This, it was concluded,
came about because while the noise signatures of industrial equipment such as RF-
stabilized arc welders, induction heaters and plastic bonders were strong in the HF and
VHF ranges they dropped off rapidly above 1GHz. It was also found that unlike the
results reported in [Devasirvatham 1990], the delay spread [Appendix 1] showed little
variation regardless of whether there was a LOS path to the receiver or not. This,
[Devasirvatham 1990] claimed, came about due to “the open expanses and vast amount
of reflecting material that readily support[ed] multipath propagation”.
[Yegani 1989] was published in the same year as [Rappaport 1989], also claiming to
propose the first analytical model for an indoor factory channel. It reached
many of the same conclusions as [Rappaport 1989] had with respect to the received
power following a Rician or Rayleigh distribution. However, unlike [Rappaport 1989],
it also considered the inter-arrival times of multipath signals and, using curve fitting
algorithms, determined that they followed a Weibull distribution [Appendix 1]. [Hashemi
1993] noted that they provided no phenomenological explanation for encountering this
type of distribution and postulated that the good fit was likely because the Weibull
distribution, in its most general form, has three parameters, increasing its flexibility to
match the empirical data.
In [Kjesbu 2000], site surveys using 802.11 compliant equipment were conducted at three
different industrial environments – a chemical pulp factory, a cable production hall,
and a nuclear power plant. These measurements confirmed the findings of [Rappaport
1989-2] by concluding that the heavy multipath fading observed was an asset rather
than a liability and concluded that 802.11 equipment was suitable for use in industrial
environments.
From the review of statistical models in this section, it was decided that, whilst the
methodologies used to derive the Okumura-Hata and COST 231 models were of interest,
they were based on data collected from large outdoor cells, and thus would not be suitable
models to evaluate in the context of deploying wireless networks in indoor industrial
locations. On the other hand, the simple power law and Friis’ freespace propagation
model (which was shown to be equivalent to the simple power law with a pathloss
exponent fixed at 2) suffered from no such limitations and indeed, were representative
of simple models used in indoor wireless network deployment. It was therefore decided
that comparing the simple power law and Friis’ free space propagation model against
more complex models would highlight the differences in precision (and the number of
empirical measurements required to obtain accuracy) between models that required very
little information about the environment that they operated in, and those that required
substantially more.
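The stated equivalence between Friis' free space model and the simple power law with a path-loss exponent fixed at 2 can be checked numerically. A sketch, assuming isotropic antennas and a 1 m reference distance (neither of which is specified here):

```python
import math

def friis_path_loss_db(d, wavelength):
    """Friis free-space path loss in dB, assuming isotropic antennas."""
    return 20.0 * math.log10(4.0 * math.pi * d / wavelength)

def power_law_path_loss_db(d, n, pl_d0, d0=1.0):
    """Simple power-law model: PL(d) = PL(d0) + 10 n log10(d / d0)."""
    return pl_d0 + 10.0 * n * math.log10(d / d0)

wavelength = 0.125                      # ~12.5 cm for 802.11b, as noted above
pl_1m = friis_path_loss_db(1.0, wavelength)   # calibrate power law at d0 = 1 m
for d in (5.0, 20.0, 100.0):
    # With n = 2 the two models agree at every distance:
    assert abs(friis_path_loss_db(d, wavelength)
               - power_law_path_loss_db(d, 2.0, pl_1m)) < 1e-9
```

The equivalence follows directly: 20 log10(4πd/λ) = 20 log10(4π/λ) + 20 log10(d), which is the power law with n = 2 and PL(d0) fixed by the wavelength.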
Also of interest in this section was the discussion on slow and fast fading. As the
measurements discussed in this dissertation were gathered at inherently noisy locations
(every effort was taken to ensure the accuracy of the measurements, yet it was infeasible
to prevent some people from wandering around the locations and some machinery from
being operated), it was decided that attempting to measure and predict the sector average
would produce more robust results. This was not a surprising decision, as the general
consensus in the literature was that predicting the fading envelope was not realistically
feasible, or indeed worthwhile, in the practical application of propagation prediction.
This had implications for both the development of the ray tracing algorithm and the construction
of the measurement equipment described later in this dissertation.
2.4 Pseudo-Deterministic Methods
For more precise modelling of propagation, the geometrical nature of the environment,
such as objects lying between the transmitter and receiver, must be considered [Rap-
paport 1992]. Pseudo-deterministic propagation models utilise site-specific information,
but do not attempt to solve any wave-propagation equations [Durgin 1998].
The first pseudo-deterministic model proposed for the indoor radio channel appeared in
[Rice 1959] and provided a method to model radio attenuation into a building using a
parameter known as penetration loss. The penetration loss for a given floor of a building
could be determined by taking the difference between mean signal strength measured on
a given floor of the building and mean signal strength measured in the streets directly
outside. This penetration loss thus represented signal attenuation as the signal penetrated the
building and could be used in conjunction with a model that predicted outdoor propa-
gation to determine coverage within the building. Many pseudo-deterministic methods
are a variation on this simple theme.
[Devasirvatham 1990] modelled signal attenuation in an indoor environment by using a
free space propagation model in conjunction with linear attenuation terms. Measure-
ments were taken in two different office buildings at three different frequencies (850MHz,
1.7GHz and 4.0GHz) and a comparison between a piecewise linear approximation of
measured path-loss and a two-path model [Appendix 1] with a linear attenuation term
showed the latter model to be the more accurate.
[Rappaport 1992] gave a comparison between the path-loss exponent model and a free-
space path-loss model with linear attenuation terms (a fixed amount of attenuation
occurred for every partition of a given type lying between the transmitter and receiver).
It was shown that over a large number of measurements, the distribution of measured
path-loss tended to a zero mean, lognormal distribution, with a standard deviation of σ
dB. Thus, the path-loss for a given distance between the transmitter and receiver could
be written in terms of the predicted path-loss as:

PL(d) = P̄L(d) + Xσ   (2.2)

Where:

P̄L(d) is the predicted (mean) path-loss at distance d; and,
Xσ is a zero mean log-normally distributed random variable.
Using regression techniques to compute the values of n, the partition attenuation terms
and σ (where appropriate, given the model being used) such that the mean square error
(MSE) between predicted and measured results was minimized, it was found that using
the free-space path-loss model with linear attenuation terms resulted in a significant
reduction in standard deviation.
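The regression just described can be sketched for the simplest case, fitting n and σ alone from synthetic measurements drawn with the factory-wide values reported above (n = 2.18, σ = 7.1 dB). The function and data are illustrative, not [Rappaport 1992]'s actual procedure:

```python
import math
import random

def fit_path_loss_exponent(measurements, pl_d0, d0=1.0):
    """Least-squares fit of n in PL(d) = PL(d0) + 10 n log10(d / d0),
    returning (n, sigma_dB), where sigma is the std. dev. of residuals."""
    xs = [10.0 * math.log10(d / d0) for d, _ in measurements]
    ys = [pl - pl_d0 for _, pl in measurements]
    n = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    residuals = [y - n * x for x, y in zip(xs, ys)]
    sigma = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return n, sigma

# Synthetic (distance, path-loss) pairs generated with n = 2.18, sigma = 7.1 dB:
random.seed(1)
pl_d0 = 40.0
data = [(d, pl_d0 + 10.0 * 2.18 * math.log10(d) + random.gauss(0.0, 7.1))
        for d in range(2, 80)]
n, sigma = fit_path_loss_exponent(data, pl_d0)
print(round(n, 2), round(sigma, 2))   # should land near n ~ 2.18, sigma ~ 7.1
```

Minimising the MSE of the dB residuals is exactly the least-squares criterion described in the text; the recovered σ is the standard deviation of the lognormal term in equation 2.2.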
In [Turkmani 1993] a regression analysis method was used to determine an accurate
model (one that minimised the MSE between measured and predicted path-loss) for
predicting path loss experienced by radio transmissions into and within buildings at
900, 1800 and 2300 MHz. It was noted that one of the most difficult tasks in applying
regression analysis to a problem, such as this, was the selection of possible variables to
include in the final regression model. In [Turkmani 1993]’s paper, appropriate variables
were selected by beginning with a large number of different variables and carefully as-
sessing each variable’s quality using a number of different statistical tests, paying special
attention to any correlation observed between any two variables.
[Durgin 1998] also discussed the use of ‘partition-dependent attenuation factors’ and
presented normal equations that, given a set of path-loss measurements across a site,
and a knowledge of what objects lay in a direct line between the transmitter and the
location at which each path-loss measurement was taken, determined the partition-
dependent attenuation factors such that they minimised the variance of the empirical
versus modelled results.
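As the normal equations themselves are not reproduced here, the following is an illustrative least-squares formulation under the assumption that each measurement's excess loss (measured path-loss minus free-space loss) is the sum of the attenuation factors of the partitions crossed; all names and data values are hypothetical:

```python
def solve_partition_factors(counts, excess_loss_db):
    """Solve the normal equations (A^T A) a = A^T b for partition attenuation
    factors a, where counts[i][j] is how many type-j partitions intersect
    measurement i's direct path and excess_loss_db[i] is the measured path
    loss minus free-space loss. Plain Gaussian elimination, no libraries."""
    m = len(counts[0])
    ata = [[sum(row[i] * row[j] for row in counts) for j in range(m)]
           for i in range(m)]
    atb = [sum(row[i] * b for row, b in zip(counts, excess_loss_db))
           for i in range(m)]
    # Forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back substitution
    a = [0.0] * m
    for r in range(m - 1, -1, -1):
        a[r] = (atb[r] - sum(ata[r][c] * a[c]
                             for c in range(r + 1, m))) / ata[r][r]
    return a

# Hypothetical site with two partition types (e.g. plasterboard, concrete):
counts = [[1, 0], [0, 1], [2, 1], [1, 2]]
excess = [3.1, 11.9, 18.0, 27.2]       # dB above free space
print([round(x, 2) for x in solve_partition_factors(counts, excess)])
# -> [3.03, 12.03]
```

The solution minimises the variance of the empirical-versus-modelled residuals, which is the criterion the text attributes to [Durgin 1998].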
[Martijn 2003] used a regression analysis method to determine path loss into a building
at 1800MHz. The building was modelled as having an associated ‘building loss’ and
floors as having an associated ‘floor loss’. These values represented the attenuation that
a signal experienced when it penetrated the walls or floors of a building.
From the review of literature in this section, it was revealed that most pseudo-deterministic
methods followed a similar theme of modelling objects lying between the transmitter
and receiver as contributing a fixed amount of attenuation depending on the type of
object. It was also revealed that this form of model was suitable for use in deploying
indoor wireless networks. For the purposes of comparison against the statistical models
and against the more complex ray tracing model, it was decided that [Durgin 1998]’s
partition-dependent attenuation factors were representative of this class of models.
2.5 Deterministic Methods
Maxwell’s differential equations can be used to describe the propagation of electromagnetic
waves; however, exact solutions can only be derived for trivial cases. Numerical
methods can be applied to obtain accurate approximations to Maxwell’s equations.
Such methods include:
• Finite Element Method, based on work by Clough [Clough 1999], where continuous
quantities are approximated by a set of evenly spaced discrete points [Weisstien
1999];
• Finite Difference Time Domain (FDTD) Method, proposed by [Yee 1966], where
Maxwell’s equations are rewritten as a series of difference equations, which are in
turn solved iteratively [Neskovic 2000]; and,
• Method of Moments (MOM) proposed by [Harrington 1967], a numerical method
for solving integral equations.
Unfortunately, the accuracy and precision of such methods are dependent on how finely
objects are segmented/sampled, rendering them useful over small areas, but computa-
tionally intractable on the scale required when considering the deployment of wireless
networks [Yang 1996], [Durgin 1998], [Neskovic 2000].
This section was included only to provide a brief historical overview of some of the alter-
natives to ray tracing that could be found in the literature. None of the aforementioned
methods were considered further as even a brief review of the literature concerning them
revealed that they were unsuitable for the deployment of indoor wireless networks.
2.6 Ray Tracing
A technique known as ray tracing is a viable alternative to the numerical methods
described in Section 2.5. Ray tracing provides the high frequency limits of the exact
solutions for electromagnetic fields and can provide quick, approximate solutions where
it is not feasible to obtain exact ones [Rappaport 1994]. A good explanation/derivation
of exactly how ray tracing achieved this can be found in [Durgin 1998].
Ray tracing is used extensively in the field of computer graphics (where it is used to
model visible light - electromagnetic radiation in the frequency range 4.3–7.5 × 10¹⁴
Hz) and, as such, computational implementations of ray tracing algorithms have been
extensively researched [Speer 1992] and implemented. This mature knowledge provided
a sturdy foundation on which propagation prediction ray tracers could be researched and
indeed many earlier implementations of ray tracing algorithms for propagation prediction
were extensively modified graphical ray tracers [Rappaport 1994].
A basic implementation of a graphical ray tracer operates by shooting rays from a
specified viewer location through an imaginary grid (typically one cell in the grid for
each pixel in the resultant image) with one ray per cell. This ray is then traced until
it intersects an object. Upon intersecting an object, a new shadow ray is created and
traced from the point of intersection to the scene’s light source. If the shadow ray
intersects an object before it reaches the light source, then the first point of intersection
is in the shadow of the object between it and the light source, otherwise it is illuminated
according to the lighting model being used. Then, a second reflected ray is generated
(such that the angle of reflection is equal to the angle of incidence with both angles
being measured from the normal at the point of intersection). This reflected ray is then
traced until it intersects an object where another shadow ray and another reflected ray
are generated. This process continues up to a specified depth (number of reflections).
Once this depth is reached, the resultant illuminations and reflections are combined to
determine the colour of the cell that the original ray was traced through [Owen 1999,
Whitted 1980].
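The recursion just described can be sketched in miniature. This is an illustrative reduction, not a full renderer: the scene is a list of spheres, brightness is a single scalar rather than a colour, and the 0.5 reflection coefficient is an arbitrary assumption:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction, centre, radius):
    """Smallest ray parameter t > 0 where the ray meets the sphere, else None.
    Solves |origin + t*direction - centre|^2 = radius^2 (direction is unit)."""
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-6:                   # small epsilon avoids self-intersection
            return t
    return None

def trace(origin, direction, spheres, light, depth):
    """Whitted-style recursion: shadow ray plus reflected ray at each hit."""
    best_t, best = None, None
    for centre, radius in spheres:     # nearest intersection wins
        t = hit_sphere(origin, direction, centre, radius)
        if t is not None and (best_t is None or t < best_t):
            best_t, best = t, (centre, radius)
    if best_t is None:
        return 0.0                     # ray escaped the scene
    point = tuple(o + best_t * d for o, d in zip(origin, direction))
    normal = norm(sub(point, best[0]))
    to_light = norm(sub(light, point))  # shadow ray toward the light source
    shadowed = any(hit_sphere(point, to_light, c, r) for c, r in spheres)
    local = 0.0 if shadowed else max(0.0, dot(normal, to_light))
    if depth == 0:
        return local
    # Angle of reflection equals angle of incidence, measured from the normal:
    refl = tuple(d - 2.0 * dot(direction, normal) * n
                 for d, n in zip(direction, normal))
    return local + 0.5 * trace(point, refl, spheres, light, depth - 1)

scene = [((0.0, 0.0, 5.0), 1.0)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene, (0.0, 5.0, 0.0), 2))
```

A full renderer would repeat this per grid cell, as described above; the sketch shows only the shadow-ray and reflected-ray recursion for one ray.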
According to [Nidd 1997] the two major differences between using ray tracing for com-
puter graphics and using ray tracing for the purposes of wireless propagation prediction
were that:
• With propagation prediction, all rays passing through a specified plane, regardless
of direction, were of interest; and,
• Unlike graphical ray tracing, the phase of the rays was also of interest, as it deter-
mined the level of constructive and destructive interference.
Prior to preparing this dissertation, there had been many different implementations of
ray tracing for propagation prediction, including:
• Two-dimensional (2D) – which sacrificed accuracy for computational tractability
[Honcharenko 1992];
• Three-dimensional (3D) – [Rappaport 1992];
• Hybrid 2D-3D – whereby three-dimensional ray tracing was approximated by mul-
tiple 2D iterations [Ji 1999]; and,
• Statistical – for use when detailed data about the environment was unavailable
(the environment was approximated based on its Mean Free Distance, the distance
a ray was likely to travel before it intersected an object) [Hassan-Ali 2002].
With a graphical ray tracer, rays were launched through an imaginary grid. However,
this method did not suffice when implementing a ray tracer for propagation prediction
(because, as was noted previously, rays were of interest irrespective of the direction they
were travelling).
There were essentially two main classes of ray launching methods for propagation prediction
[Cheu 1995], [Durgin 1998], [Ji 2001]. These were:
• An approach based upon Image Theory, which allowed paths with a specified num-
ber of reflections to be generated recursively [Rappaport 1993], [Ji 2001], [Durgin
1998]; and,
• A brute force method where all rays, even those that were not of particular interest
(though this was not known until after the rays had been launched), were traced
out [Rappaport 1992-3], [Ji 2001], [Durgin 1998].
When a ray intersected an object, one of three things could occur:
• A portion of the ray could pass through the object (with appropriate attenuation
and refraction if deemed necessary), this was known as transmission;
• If the ray intersected an object with dimensions large compared to the ray’s
wavelength it was reflected; and,
• If the ray intersected an object with dimensions small compared to the ray’s wave-
length it was scattered (bounced off the object in all directions).
If enough geometrical data about the propagation environment was available, then ray
tracing provided an accurate way of estimating the delay spread [Rappaport 1992-
3].
Ray tracing techniques (based on geometric optics (GO)) could be augmented using
high frequency asymptotic techniques to model diffraction (the bending of rays around
objects). These techniques are referred to as ‘high frequency approximations’ because
their accuracy improves as the size of the medium causing the diffraction increases
relative to the wavelength of the modeled field. In 1962 [Keller 1962] presented what
he called the geometrical theory of diffraction (GTD) and then, in 1974, [Kouyoumjian
1974] presented a new model that remained valid in situations where GTD broke down
(notably, transition regions adjacent to shadow and reflection boundaries). Kouyoumjian
called his new model ‘The Uniform Theory of Diffraction’ (UTD). It is worthwhile to
point out that these methods for modeling diffraction are asymptotic and therefore
approximations; UTD, for example, breaks down when diffracting edges are located in
the transition regions of other diffracting edges.
It was not the primary goal of this dissertation to present an overly complicated
propagation model; this has been done many times before. Rather, the aim of this dissertation
was to show that a relatively simple model can produce useful results. In the spirit of this,
such high frequency asymptotic approximations are not implemented in the described
model.
The literature reviewed in this section demonstrated that ray tracing was a feasible
method for use in the indoor deployment of wireless networks, and indeed, a large portion
of the research conducted was aimed at evaluating how a specific ray tracing implementa-
tion performed when benchmarked against selected statistical and pseudo-deterministic
propagation models. This section also served to highlight the many different forms and
ray launching methods that a ray tracing implementation could take.
The two most common ray launching methods for a standard 3D ray tracing algorithm,
image based ray tracing and brute force ray tracing, are discussed in greater detail in
the following sections (Sections 2.7 – 2.8), with focus on what problems exist in various
implementations and what steps can be taken to alleviate these problems. It was
through careful evaluation of the pros and cons of each approach that the methodology
for implementing the ray tracing algorithm described in this dissertation was decided on.
In this manner, by using well researched, robust algorithms, the proposed ray tracing
algorithm is shown to have been built on a strong foundation of prior research.
2.7 Image Based Ray Tracing
Ray tracing is a very computationally intensive task; therefore, much work appeared
in the literature researching methods that increased its computational efficiency. One
such method involved utilizing an approach similar to Electro-Magnetic Image Theory
[Balanis 1989] and led to a class of algorithms known as Image Ray Tracers (IRT) such
as those implemented by [McKown 1991], [Lawton 1994] and [Costa 1997].
Care must be taken, as [Durgin 1998] emphatically points out, when Electro-Magnetic
Image Theory is talked about in the context of IRTs, for though they are similar in
concept, they are each used in distinctly different situations.
Figures 2.1 – 2.3 provide a basic overview of how IRT operates (this series of diagrams
was adapted from Figure 1 in [McKown 1991]):
Figure 2.1: IRT with no reflections.
Figure 2.1 shows an environment that consists of two partitions, labeled X and Y , a
transmitter (Tx) and a receiver (Rx). No reflections were considered in this figure, so
all the IRT had to do, was check to see if a Line of Sight (LoS) path existed between
the receiver and the transmitter.
Figure 2.2: IRT with one Reflection.
To calculate the possible paths with one reflection, mirror images of the transmitter are
created around each partition (labeled IX and IY in Figure 2.2). Then a line is drawn
between IX and Rx. This line is the same length as the ray originating from Tx and
being reflected off X at the appropriate location so that it is received by Rx. The same
process is repeated for IY – Rx.
Figure 2.3: IRT with two Reflections.
The same process is then repeated, making reflections of IX around Y to form IXY and
IY around X, to form IY X (Figure 2.3). Where the line IXY – Rx is the same length
as the ray transmitted from Tx, reflected off X, reflected off Y and finally received by
Rx.
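The mirroring step behind Figures 2.1 – 2.3 can be sketched in two dimensions. The wall is idealised here as an infinite line (a real IRT must also confirm that the reflection point lies within the finite partition); all coordinates are hypothetical:

```python
import math

def mirror_across_line(p, a, b):
    """Mirror image of point p across the infinite line through a and b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy    # foot of the perpendicular from p
    return (2.0 * fx - px, 2.0 * fy - py)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Wall X lies along the x-axis; transmitter and receiver sit above it:
tx, rx = (0.0, 3.0), (6.0, 1.0)
image = mirror_across_line(tx, (0.0, 0.0), (1.0, 0.0))   # -> (0, -3)

# The reflection point is where the line image-Rx crosses the wall (y = 0):
t = -image[1] / (rx[1] - image[1])
bounce = (image[0] + t * (rx[0] - image[0]), 0.0)

# |image - Rx| equals the length of the reflected path Tx -> bounce -> Rx:
assert abs(dist(image, rx) - (dist(tx, bounce) + dist(bounce, rx))) < 1e-9
```

This is exactly the property the figures rely on: the straight line from the image to the receiver has the same length as the physically reflected path, so only candidate paths that can actually reach the receiver are ever constructed.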
The above description of IRT was greatly simplified, but served to illustrate what was
meant when [Lawton 1994] stated that IRT was ‘not approximate’. With that state-
ment, [Lawton 1994] was highlighting a problem (known as aliasing) inherent in Shoot
and Bounce Ray Tracers (SBR), as opposed to implying that ray tracing itself was not
an approximation. It can be seen from Figures 2.1, 2.2 and 2.3 that the only rays con-
sidered were those that had the possibility of directly intersecting the receiver whereas
in SBR ray tracing many rays are considered without knowing whether they will in-
tersect the receiver or not. This, and its implications, are further explained in
Section 2.8.
2.8 Shoot and Bounce Ray Tracing
While Image Based Ray Tracers could be efficient under certain circumstances and did
not suffer from aliasing, they:
• were limited to relatively simple environments where the location of all possible
images could be identified [Cheu 1995], [Tan 1995];
• could become cumbersome when randomly oriented objects or multiple reflections
were considered [Rappaport 1992-3], [Ji 2001]; and,
• degraded exponentially in performance with respect to the number of surfaces
in the environment [Falsafi 1996].
As noted previously, the other popular option was a technique known as “Shoot and
Bounce Ray Tracing” (SBR) [Cheu 1995] which in various literature had been referred
to as “Brute Force Ray Tracing” or “Ray Launching” [Durgin 1998].
Even though SBR suffered from aliasing, it was decided that this method was preferable
to IRT because of IRT’s limitations in the face of complex environments. The remainder
of this section describes different methods to launch rays and methods to combat aliasing.
These methods constituted further design decisions that needed to be considered when
implementing a SBR algorithm.
SBR treats the transmitter as a point source and would ideally fire rays evenly in all
directions – unfortunately, this would require an infinite number of rays and is therefore
not practical. Thus, a method for launching a finite number of rays evenly over all
directions is needed [Tan 1995]. Fortunately, the question:
“How does one place points on a sphere in a uniform manner?”
was one that had been asked by many people in many different fields. It was also the
subject of concerted research by mathematicians, biologists, chemists, physicists and, of
course, engineers alike [Saff 1997]. One also had to be careful to note that the definition
of “uniformity” could vary from situation to situation.
The most straightforward way to launch rays from the transmitter (evenly and in all
directions), was to launch them at regular angular increments for both the azimuth, and
polar, (elevation) angles [Rappaport 1994], [Durgin 1998] and [Agelet 2000] as shown in
Figure 2.4.
Figure 2.4: Ray launching with angular increments.
This approach was unfortunately flawed, as rays shot using the angular increment
method were sparse around the equator and bunched together at the poles [Durgin
1998], (i.e., the angular separation of rays decreased as the position they were launched
from neared the poles). This can be easily seen by looking at Figure 2.5 where the circles
traced around the azimuth shrink the closer they get to the poles.
Figure 2.5: Angular Increment method showing bunching at poles.
[Cichon 1995] improved upon the constant angular increment method and proposed that
the elevation angular increment, ∆θ, should remain constant whilst the azimuth angular
increment, ∆φ, should change as the value of θ changed according to the relation:

∆φ = ∆θ / sin θ   (2.3)
This is shown in Figure 2.6.
Figure 2.6: Varied azimuth method.
Visual inspection of Figure 2.6 shows less bunching around the poles – however, as
[Durgin 1998] noted, the localised pattern is irregular.
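[Cichon 1995]'s rule (equation 2.3) can be sketched as follows. Rounding the number of azimuth steps so that each ring closes exactly is an implementation choice, not something specified in the text:

```python
import math

def varied_azimuth_directions(delta_theta):
    """Launch directions using equation 2.3: a constant elevation step,
    with the azimuth step widened to delta_theta / sin(theta) per ring."""
    dirs = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]   # the two poles
    theta = delta_theta
    while theta < math.pi - 1e-9:
        delta_phi = delta_theta / math.sin(theta)
        n = max(1, round(2.0 * math.pi / delta_phi))   # rays on this ring
        for k in range(n):
            phi = 2.0 * math.pi * k / n
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
        theta += delta_theta
    return dirs

rays = varied_azimuth_directions(math.radians(10))
# Every launch direction is a unit vector:
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in rays)
```

Rings near the poles receive far fewer rays than rings near the equator, which is precisely the bunching that equation 2.3 is intended to remove.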
A solution to this problem, based upon the use of geodesic spheres, was proposed by
[Rappaport 1994] and subsequently used in other SBR implementations [Lu 1998], [Dur-
gin 1998]. Geodesic spheres, structures popularized by the American architect Richard
Buckminster Fuller, could be created by generating a triangular mesh that approximated
a true sphere through recursive subdivision [Schaubach 1994]. In [Rappaport 1994] a
geodesic sphere was created by tessellating the faces of a regular icosahedron (a
polyhedron with 20 faces, 12 vertices and each vertex joining 5 faces, with each face being an
equilateral triangle) into equal segments (the number of segments was known as the
tessellation frequency, N) and then projecting them on to the surface of a unit sphere,
shown in Figure 2.8.
Each ray launched from the centre of a regular icosahedron through one of its vertices
has an angular separation of 63 degrees from each of its five neighbours. This is shown
in Figure 2.7.
Figure 2.7: Icosahedron showing rays launched through vertices (each neighbouring ray
had an angular separation of 63 degrees).
However [Tan 1995-2] observed that once the faces were tessellated, regardless of tessel-
lation frequency, the global minimum and maximum angular separation of neighbouring
rays varied by as much as 20%. [Durgin 1998] expanded on this by noting that the angu-
lar separation changed slowly around the surface of the sphere and stated that because
of this, the localised angular separation behaved quite well.
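The icosahedral launch directions and the quoted separation of roughly 63 degrees (exactly arccos(1/√5) ≈ 63.43°) can be verified directly:

```python
import math

def icosahedron_directions():
    """Unit vectors through the 12 vertices of a regular icosahedron,
    built from the cyclic permutations of (0, ±1, ±phi)."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0    # the golden ratio
    raw = []
    for a in (-1.0, 1.0):
        for b in (-phi, phi):
            raw += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
    length = math.sqrt(1.0 + phi * phi)
    return [(x / length, y / length, z / length) for x, y, z in raw]

dirs = icosahedron_directions()
v = dirs[0]
# Angles from one vertex direction to all the others, smallest first:
angles = sorted(
    math.degrees(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(v, w))))))
    for w in dirs if w != v
)
print(round(angles[0], 2))   # -> 63.43 (each vertex has five such neighbours)
```

Tessellating these 20 faces and projecting the new vertices onto the sphere then yields the geodesic launch directions described in the text.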
Figure 2.8: Geodesic sphere with a tessellation frequency of (N=14).
[Saff 1997] claimed that the tessellation of an icosahedron and subsequent projection
of its vertices onto the surface of a sphere had a number of flaws. Firstly, the number
of points (vertices) was constrained to a limited set of numbers. This can be seen by
looking at the last entry of Table 5.5 in [Durgin 1998] where the number of vertices in a
geodesic sphere was related to the tessellation frequency:
Number of Geodesic Vertices = 10N² + 2   (2.4)

Given that the tessellation frequency, N, must be an integer, it follows that the achievable
vertex counts form only a sparse subset of the natural numbers (12, 42, 92, 162, …).
A significantly greater problem was, as [Saff 1997] described, the lack of asymptotic
uniformity. This was observed by noting that while the vertices of an icosahedron were
uniformly distributed, when tessellated and projected onto the surface of a sphere, this
uniformity was lost (vertices around the middle of the sphere were pushed further apart
than the rest). Because this tessellation was repeated identically (N times), increasing
the number of points did not cause a geodesic sphere to approach uniformity.
A method proposed by [Rakhmanov 1994] appeared, for large N , not to suffer from these
problems to the extent that the geodesic sphere did. This method worked by dividing a
unit sphere into N equally spaced horizontal planes, forming N circles (with the circles
at the poles actually being points). Each horizontal plane contained exactly one point.
The kth point could be found by travelling upwards along a great circle from the (k−1)th
point to the next horizontal plane and then travelling a certain distance anticlockwise
around the azimuth. Locating points in this manner caused them to spiral upwards
around the surface of the sphere. Numerical analysis conducted by [Saff 1997] concluded
that the distance travelled around the azimuth should be close to 3.6/√N. Given that a
point was placed on each horizontal plane and there were N horizontal planes (with
N ∈ ℕ) it was evident that, for this method, the number of points could be arbitrarily
chosen. This method is shown in Figure 2.9.
Figure 2.9: Rakhmanov Spiral Method (with N = 700).
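The generalized spiral can be sketched as follows; the azimuth step uses [Saff 1997]'s 3.6/√N constant, scaled by 1/√(1 − h²) so the step stays roughly constant in arc length on each circle of latitude (an assumption drawn from that analysis, not spelled out above):

```python
import math

def spiral_points(n):
    """Generalized spiral of [Rakhmanov 1994] / [Saff 1997]: one point per
    horizontal plane, stepping ~3.6/sqrt(n) around the azimuth each time."""
    points, phi = [], 0.0
    for k in range(n):
        h = -1.0 + 2.0 * k / (n - 1)        # n evenly spaced planes
        theta = math.acos(h)
        if 0 < k < n - 1:
            phi += 3.6 / (math.sqrt(n) * math.sqrt(1.0 - h * h))
        else:
            phi = 0.0                       # at the poles azimuth is arbitrary
        points.append((math.sin(theta) * math.cos(phi),
                       math.sin(theta) * math.sin(phi),
                       h))
    return points

pts = spiral_points(700)
assert len(pts) == 700
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in pts)
```

Unlike the geodesic construction, n here can be any integer, which is the flexibility the text attributes to this method.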
Another method, detailed in [Rusin 1995] involved dividing a unit sphere into N + 1
horizontal planes and placing points, at equally spaced distances, on the circles that
these planes traced out – see Figure 2.10. Placing points in this manner resulted in
fewer points around the circles created by planes close to the poles than those created
towards the middle of the sphere (and thus did not cause point bunching as did the
constant angular separation method). This method, like the geodesic method did not
allow for an arbitrary number of points. [Rusin 1995] stated that the number of points
was related to N by the equation:

Number of Points = 2 + 2N ∑_{i=1}^{N−1} sin(πi/N)   (2.5)

Figure 2.10: Circle Method.
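Equation 2.5 follows from placing approximately 2N sin(πi/N) points on the ith circle. A sketch comparing a constructed point set against the closed form (rounding each circle's count to an integer is an implementation detail):

```python
import math

def circle_method_points(n):
    """Points on the circles cut by n+1 evenly spaced horizontal planes,
    with roughly equal spacing along each circle (cf. equation 2.5)."""
    points = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]   # polar circles are points
    for i in range(1, n):
        theta = math.pi * i / n
        m = round(2 * n * math.sin(theta))          # points on this circle
        for k in range(m):
            phi = 2.0 * math.pi * k / m
            points.append((math.sin(theta) * math.cos(phi),
                           math.sin(theta) * math.sin(phi),
                           math.cos(theta)))
    return points

# The count tracks 2 + 2N * sum(sin(pi i / N)) from equation 2.5:
n = 20
predicted = 2 + 2 * n * sum(math.sin(math.pi * i / n) for i in range(1, n))
assert abs(len(circle_method_points(n)) - predicted) < n   # rounding slack
```

Circles near the poles naturally receive fewer points than circles near the equator, which is why this method avoids the polar bunching of the constant angular increment approach.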
In an SBR, rays travel out from the transmitter in a predefined pattern (using one of
the ray launching methods previously described). Regardless of the method employed,
as the rays travel away from the transmitter, they spread out (i.e., the distance between
two neighboring rays increases) – see Figure 2.11.
Figure 2.11: Aliasing in a Shoot and Bounce Ray Tracer.
The further apart the rays become, the more likely it is that they would miss the receiver
(if the receiver was modelled as an object with a static size) or other objects in the world
entirely. Errors introduced in this manner were commonly referred to in the literature
as aliasing or kinematic errors [Durgin 1998], [Cheu 1995].
A straightforward method for mitigating the effects of aliasing (with respect to rays
missing the receiver), was through the use of a reception sphere, shown in Figure 2.12.
The two dimensional analogue of the reception sphere was proposed in [Honcharenko
1991] and extended to three dimensions in [Rappaport 1992-3].
Figure 2.12: Reception Spheres.
The reception sphere method reduced the occurrence of kinematic errors by setting the
radius of the receiver (reception sphere) to be proportional to the angular separation
between the ray and its neighbours, α, and the total distance that the ray had travelled,
d. The minimum radius that a reception sphere had to possess, to ensure that it did
not miss rays, was 1/√3 of the distance between the ray and its neighbours at the
receiver (more accurately it was 2d sin(α/2)/√3; however, sin x ≈ x for small x)
[Rappaport 1992-3], or:

Reception Sphere Radius ≈ αd/√3   (2.6)
This was shown to be true through simple geometry (by taking an equilateral triangle
and calculating the distance from the central point to any one of the vertices).
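Equation 2.6 and its exact form can be compared directly; α is in radians and the function names are illustrative:

```python
import math

def reception_sphere_radius(alpha, d):
    """Minimum reception-sphere radius for a ray with angular separation
    alpha (radians) from its neighbours after travelling distance d
    (the small-angle form of equation 2.6)."""
    return alpha * d / math.sqrt(3.0)

def reception_sphere_radius_exact(alpha, d):
    """Exact form: 1/sqrt(3) of the true spacing 2 d sin(alpha / 2)."""
    return 2.0 * d * math.sin(alpha / 2.0) / math.sqrt(3.0)

# For the small separations used in practice the two agree closely:
alpha = math.radians(1.0)
for d in (10.0, 50.0, 200.0):
    approx = reception_sphere_radius(alpha, d)
    exact = reception_sphere_radius_exact(alpha, d)
    assert abs(approx - exact) / exact < 1e-4
```

The radius grows linearly with the distance travelled, mirroring the way neighbouring rays spread apart in Figure 2.11.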
Unfortunately, while this method stopped the receiver from missing rays, the receiver
could, under certain conditions, count both a ray and one of the ray’s neighbours (with
both rays representing the same electromagnetic wave front). This can be seen by looking
at the ray distribution on a geodesic wave front, Figure 2.13.
Figure 2.13: Reception Sphere Double Counting [adapted from Durgin 1998].
[Durgin 1998] calculated the probability of double counting to be (2π/(3√3) − 1), or 0.209.
In [Lu 1998] a variation on the typical reception sphere method was presented that
utilised multiple, smaller reception spheres to eliminate the possibility of double count-
ing rays. However, this method introduced the possibility of missing a ray entirely
(albeit with a much smaller probability than a static receiver). According to [Lu 1998]’s
calculations, the probability of this occurring was 0.0497 (a marked improvement on the
vanilla reception sphere method).
[Durgin 1998] presented what he termed “a method of distributed wavefronts” in an
attempt to resolve the kinematic errors inherent in using reception spheres. In this
method, all rays in line of sight to the receiver contributed power proportional to their
relative proximity from the receiver. This was achieved using a weighting function chosen
such that:
• If a ray directly intersected the receiver then it contributed 100% of the received
power and other nearby rays contributed nothing; and,
• If a ray was further from the receiver than its neighbouring rays, it contributed
nothing.
Otherwise, the sum of rays for a given wavefront was weighted using a function cho-
sen such that the contribution of rays representing a uniform wavefront summed to a
constant.
According to [Durgin 1998] there was no analytical way to produce such a function.
Therefore, a computer was required to numerically generate a weighting function that
approximated these conditions as closely as possible.
[Yun 2001] stated that [Durgin 1998]’s method of distributed wavefronts was an improve-
ment over the standard reception sphere method, however the author also noted that
it was ‘relatively complex to realise’ and ‘inherently inaccurate’. [Yun 2001] followed
this up by presenting a simple alternative based on the use of reception spheres that
eliminated double counting rays. Yun’s method discarded any duplicate rays arriving
at the receiver, with duplicate rays being defined as those with the same characteristic
sequence, that is, rays that experienced the same combination of reflections and trans-
missions off/through the same objects in the same order. Implementation and simulation
of [Yun 2001]’s method showed that it caused, on average, approximately 20% of rays
arriving at receivers to be rejected. This value matched with the theoretical number of
rays that were double counted using the vanilla reception sphere method.
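Yun's rejection rule can be sketched as follows (the dictionary layout and field names here are hypothetical illustrations, not taken from [Yun 2001]):

```python
def filter_duplicate_rays(arrivals):
    """Keep only the first ray arriving at a receiver for each characteristic
    sequence, i.e. the ordered list of (interaction, object) pairs the ray
    experienced; later rays with the same sequence are discarded as duplicates."""
    seen = set()
    unique = []
    for ray in arrivals:
        key = tuple(ray["sequence"])  # characteristic sequence as a hashable key
        if key not in seen:
            seen.add(key)
            unique.append(ray)
    return unique
```

In the simulation described later, roughly one in five arriving rays was rejected by this filter, matching the theoretical double-counting probability of the plain reception sphere.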
It should be noted that, regardless of the method used to mitigate the effects of kinematic
errors at the receiver, the angular separation of rays shot from the transmitter, and
the area over which the rays travelled placed a limitation on the size of objects that
could be included in the simulation. That is, if rays were shot from the transmitter with
α = 0.2° and were expected to travel (including reflections and transmissions) up to 150m
before they stopped making an appreciable contribution to the received power, then the
minimum size an object in the environment could be (and expect to not introduce
kinematic errors) was approximately 52cm in any dimension.
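This limit follows from the spacing between neighbouring rays, αd, after the rays have travelled a distance d. A small check (an illustrative Python snippet, assuming α is given in degrees):

```python
import math

def min_object_size(alpha_deg: float, max_path_m: float) -> float:
    """Smallest object dimension that can be reliably resolved without
    aliasing: the spacing between neighbouring rays, alpha * d, once the
    rays have travelled the maximum appreciable distance d."""
    return math.radians(alpha_deg) * max_path_m
```

With α = 0.2° and a maximum path length of 150m, this gives approximately 0.52m, the 52cm figure quoted above.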
From the review of literature in this section, it could be seen that there exist a number
of different methods for launching rays and combating aliasing, some obviously more
effective than others. The two most effective methods for launching rays appeared to
be:
• Through the vertices of a geodesic sphere; or,
• From points generated using Rakhmanov’s Spiral method.
The two most effective methods for combating aliasing appeared to be:
• Durgin’s method of distributed wavefronts (which had the prerequisite of using a
geodesic sphere); or,
• Yun’s modified reception sphere method.
The results presented in Chapter 5 of this dissertation were ultimately obtained with
a SBR algorithm that used a geodesic sphere (as Rakhmanov’s spiral method did not
appear in literature that discussed ray tracing for propagation prediction) and Yun’s
modified reception sphere method (due to its reduced complexity and slightly increased
accuracy). It should be noted however, that the simulation program had functionality
that also incorporated Durgin’s modified wavefronts and Rakhmanov’s spiral method.
Finally, through cursory experimentation it was found that real-world differences in
accuracy when using any of the implemented methods (for a SBR that estimated material
parameters) appeared minor, though this would make for interesting further
research.
2.9 Reflections and Transmissions
Methods for launching rays and minimizing the effects of kinematic errors were discussed
in Section 2.8. The following two sections, Section 2.9 - 2.10, briefly explain what
happens to a ray when it intersects an object and how antennas affect a given ray’s
power.
When a ray, called the incident ray, intersects an object, two new rays are generated, a
transmitted ray and a reflected ray, Figure 2.14.
Figure 2.14: Reflection and Transmission of Ray.
The angle of reflection, θr, is equal to the angle of incidence, θi. If the incident ray has
a unit direction vector, Ri, and the unit normal vector of the surface is N , then a vector
giving the direction of the reflected ray, Rr, can be written as:
Rr = Ri − (2Ri ·N)N (2.7)
Assuming no refraction, the direction of the transmitted ray, Rt, is the same as that of
the incident ray.
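Equation 2.7 can be expressed directly in code (an illustrative Python sketch, not the original implementation):

```python
def reflect(ri, n):
    """Reflected direction (Equation 2.7): Rr = Ri - 2(Ri . N)N.
    ri is the unit direction vector of the incident ray and n is the unit
    normal of the surface, both given as 3-tuples."""
    dot = sum(a * b for a, b in zip(ri, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(ri, n))
```

For example, a ray travelling straight down onto a floor whose normal points up is reflected straight back up, and a 45° incident ray leaves at 45° on the other side of the normal.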
If the E-field of the incident ray is Ei, then given its polarization and the object's
relative permittivity, εr (a parameter that describes a material's reflective surface
properties), the E-field of the reflected and transmitted rays can be calculated [Rappaport
2002]:
Γ‖ = (−εr sin θi + √(εr − cos²θi)) / (εr sin θi + √(εr − cos²θi)) (2.8)

Γ⊥ = (sin θi − √(εr − cos²θi)) / (sin θi + √(εr − cos²θi)) (2.9)
Therefore, the E-field of the reflected ray would be:
Er = ΓEi (2.10)
and the E-field of the transmitted ray:
Et = (1− Γ)Ei (2.11)
Where, in the ray tracing simulation described later in this dissertation, Γ is Γ‖ when
a ray intersects a horizontally oriented surface, such as a floor, or the roof, or Γ⊥ when
a ray intersects a vertically oriented surface, such as a wall, or the side of an industrial
machine. This approach to managing polarization is common in the literature and has
been used in numerous ray tracing implementations such as [Marinier 1996].
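Equations 2.8 to 2.11 can be sketched as follows (an illustrative Python snippet, assuming θi is measured between the ray and the surface, as in the equations above; complex arithmetic is used so that lossy or grazing cases do not fail):

```python
import cmath

def reflection_coefficients(theta_i_rad, eps_r):
    """Fresnel coefficients (Equations 2.8 and 2.9) for a ray striking a
    dielectric of relative permittivity eps_r at angle theta_i (measured
    from the surface). Returns (gamma_parallel, gamma_perpendicular)."""
    s = cmath.sin(theta_i_rad)
    root = cmath.sqrt(eps_r - cmath.cos(theta_i_rad) ** 2)
    gamma_par = (-eps_r * s + root) / (eps_r * s + root)
    gamma_perp = (s - root) / (s + root)
    return gamma_par, gamma_perp

def split_fields(e_incident, gamma):
    """Reflected and transmitted E-fields (Equations 2.10 and 2.11)."""
    return gamma * e_incident, (1 - gamma) * e_incident
```

At normal incidence (θi = π/2) with εr = 4, both coefficients reduce to −1/3, so a third of the incident field amplitude is reflected with a phase reversal.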
A more complete model would account for:
• Multiple reflections within a material;
• Scattering; and,
• Diffraction.
These were, however, ignored for the purposes of this dissertation: each of these phe-
nomena greatly increases the number of rays that must be traced out and thus increases
the complexity of an already computationally taxing algorithm. This simplification is
justified by noting that the simulation still produced accurate and usable results in their
absence.
2.10 Antenna Patterns
Practical antennas do not uniformly radiate an equal amount of power in all directions.
Instead, power is radiated in a complex fashion, the representation of which is known as
an antenna pattern. It was therefore important, when creating an accurate simulation,
to model the manner in which an antenna radiated power.
Formally, the antenna pattern could be written as a function of θ and φ, Pν(θ, φ),
defined by the antenna's response to a point source at (θ, φ) and normalized such that
Pν(0, 0) ≡ 1 [Weisstein 2006].
A problem arose in how to convey this information in an accurate and concise manner.
A three dimensional diagram of the antenna pattern, while informative, is hard to read
accurately (i.e., to determine what the value of Pν was for particular values of θ and
φ).
On the other hand, a tabulated list of Pν for particular values of θ and φ, while accurate,
is large and unwieldy.
One method in common usage (often found in an antenna’s data sheets), which avoids
such pitfalls, is plotting the antenna’s relative gain, shown in Figure 2.15. The gain
is typically plotted in dBi, decibel gain relative to an ideal isotropic source (i.e., a
hypothetical antenna that radiates equally well in all directions).
Figure 2.15: H-Plane and E-Plane Antenna Patterns ((a) H-Plane or Elevation Plane;
(b) E-Plane or Azimuth Plane; relative gain plotted from 0 to 1 on polar axes).
If the Elevation plane, Figure 2.15(a) is defined to be f(θ) and the Azimuth Plane,
Figure 2.15(b), to be g(φ) then the antenna pattern can be written as:
Pν(θ, φ) = f(θ)g(φ) (2.12)
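The separable form of Equation 2.12 can be sketched as follows (an illustrative Python snippet; the plane functions shown are assumed shapes, not taken from a real data sheet):

```python
import math

def pattern(f_elev, g_azim):
    """Build a separable antenna pattern P(theta, phi) = f(theta) * g(phi),
    per Equation 2.12; P(0, 0) == 1 provided f(0) == g(0) == 1."""
    return lambda theta, phi: f_elev(theta) * g_azim(phi)

# Hypothetical planes: a broad elevation lobe and an omnidirectional azimuth.
f = lambda theta: math.cos(theta / 2.0) ** 2
g = lambda phi: 1.0
p = pattern(f, g)
```

In a ray tracer, p(θ, φ) then scales the power launched along (or received from) each ray's departure or arrival direction.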
The previous two sections (Section 2.9 and 2.10) described how to model ray-material
interactions and the effects of antennas respectively. There was no discussion about
the relative merits of the described methods, as it was apparent from reviewing the
literature that these were commonly accepted practices. They were described in the
literature review for the sake of completeness.
2.11 Adapting Relative Permittivity
The dielectric properties of materials, such as glass and concrete, were straightforward to
determine from literature [Musil 1986]. However, when confronted with the large array
of inhomogeneous materials typically present in an industrial environment, the task
of characterising materials became a substantially more daunting and time consuming
process – for example, what was the relative permittivity of a pasteurizer full of ginger
beer? To this end, research was conducted, with the objective of determining material
properties for use in ray tracing simulations based on empirical measurements.
[Jemai 2005] presented a method for calculating a material’s electromagnetic relative
permittivity based on received measurements. A transmitter and a receiver were set up
in front of the material under investigation. The expected arrival time of rays, known
as the power delay profile (PDP) [Appendix 1], for this given receiver-transmitter set
up were then predicted using a ray tracing algorithm and empirical measurements taken
using a pulse extraction algorithm at the receiver. Measured and predicted amplitudes
for the reflected (off the material of interest) ray were then compared. Given that
the reflection coefficient was proportional to the amplitude of the reflected ray, this
coefficient could then be inferred, and in turn so could the relative permittivity of the
material under investigation.
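The final inference step of such a method can be illustrated at normal incidence, where Equation 2.9 reduces to Γ = (1 − √εr)/(1 + √εr) and can be inverted directly (a simplified Python sketch of the inversion only, not of [Jemai 2005]'s pulse-extraction procedure):

```python
def eps_r_from_reflection(gamma: float) -> float:
    """Invert the normal-incidence reflection coefficient
    Gamma = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r)) to recover eps_r.
    gamma is the (signed) ratio of reflected to incident amplitude."""
    root = (1.0 - gamma) / (1.0 + gamma)  # equals sqrt(eps_r)
    return root * root
```

For example, a measured amplitude ratio of −1/3 implies a relative permittivity of 4, consistent with the worked Fresnel example in Section 2.9.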
[Rappaport 1994] presented a ray tracing algorithm for propagation prediction in micro-
cellular environments. It used AutoCAD to model large objects in the environment,
arguing that the goal of the work was to study large-scale average path-loss as opposed to
the small-scale fluctuations of a narrowband signal. The presented algorithm introduced
the concept of effective material properties, wherein the electromagnetic permittivity
of a given material was determined such that predicted results most closely matched
measured results. The heuristic [Rappaport 1994] used to judge how closely predicted
and measured solutions matched, was the summed square of the difference between
measured and predicted Power Delay Profile (PDP) [Appendix 1] sampled on a one
nanosecond sample by sample basis, and the effective material properties thusly selected
from within a given range such that this heuristic was minimized.
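The selection heuristic of [Rappaport 1994] can be sketched as follows (an illustrative Python snippet; the predict function and candidate list are hypothetical placeholders for a ray tracing run):

```python
def pdp_ssd(measured, predicted):
    """Summed square of the difference between measured and predicted power
    delay profiles, sampled on a common (e.g. one nanosecond) grid."""
    return sum((m - p) ** 2 for m, p in zip(measured, predicted))

def best_eps_r(candidates, measured, predict):
    """Effective material property selection: choose, from a given range of
    candidate permittivities, the one whose predicted PDP minimizes the
    heuristic. `predict(eps)` returns the predicted PDP for that candidate."""
    return min(candidates, key=lambda eps: pdp_ssd(measured, predict(eps)))
```

The SBR described later in this dissertation used the same search structure but with summed squared path-loss differences, which were far easier to measure than PDPs.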
The manner in which the material properties were inferred by the SBR described in
this dissertation was similar in approach to the method described by [Rappaport 1994].
There were however, a number of notable differences. Rappaport’s method relied on
the summed square of the difference between measured and predicted PDP. This was a
preferable heuristic over that eventually decided upon for use (summed square of differ-
ence between measured and predicted pathloss), but was relatively difficult to measure
and thus not suitable for the rapid deployment of wireless networks. Secondly, [Rappa-
port 1994] described results where only one material in the environment was considered.
This was not sufficient in industrial environments, which contained many distinctly dif-
ferent materials. It was therefore one of the major goals of this research, to provide
modelling and experimentation that demonstrates whether an SBR using an inferior
(though much more readily available) heuristic to estimate the properties of multiple
material types could produce useful and precise results as compared to other models
that required similar information.
2.12 Propagation Prediction Packages
Prior to this Doctoral research a number of software packages had been developed,
by various commercial and academic organisations, that implemented various propagation
prediction models suitable for use in the deployment of wireless networks both indoors
and outdoors.
These software packages ranged from open-source simulations (primarily designed for
use in research) through to commercial packages costing many thousands of dollars and
actively used by commercial practitioners to deploy large-scale wireless networks.
One of the primary aims of this applied research program was the development of a tool
that could be realistically used to aid in the deployment of wireless networks. To this
end, a range of pre-existing tools were investigated, some of which are described here,
to provide an insight into the state of software propagation prediction at the time that
this dissertation was written:
• Surveyor [AirMagnet 2006]; a wireless survey software tool from AirMagnet. Sur-
veyor was a tool that helped to simplify the task of performing a site survey. It ran
on a laptop, equipped with appropriate wireless capability, and allowed the user
to enter in a floor plan of the environment where the wireless network was to be
deployed, and then set a path around which they would roam. While roaming, Sur-
veyor took measurements of the Signal to Noise Ratio (SNR), channel throughput
and signal strength. This product helped to streamline the process of conducting
a site survey, but did not implement any predictive functionality.
• PlaceBase [Ascom 2006] from Ascom Systec’s Research and Applicable Technology
Department, Switzerland, was a predictive radio frequency (RF) channel modeller.
It was based upon a patented Multi-Channel-Coupling (MCC) algorithm developed
by Ascom. According to [Ali-Rantala 2002], “MCC [did] not deal with individual
rays but consider[ed] a propagation environment to be an assembly of attenuators
and reflectors whose geometry defin[ed] a huge number of possible modes of inter-
action”. The benefit of using such an algorithm was that no recalculations were
necessary when base stations were added or moved, allowing different installation
scenarios to be quickly explored, however it was not as accurate as a pure ray
tracer was, nor did it take into account the effects of fast fading.
• The OPNET [OPNET 2006] wireless module, from OPNET allowed for simulation
of wireless networks using a 3D ray tracing algorithm that accounted for trans-
mission, diffraction, scattering and reflection. It was suitable for both indoor and
outdoor use (where it also modelled atmospheric and foliage attenuation).
• SIRCIM from Wireless Valley [Wireless Valley 2005] was a Matlab program that
provided full multipath delay, angle of arrival, path loss and fading data up to 60
GHz using statistical models derived from empirical data. To make predictions, it
took into account the type of environment it was operating in (e.g., buildings with
soft partitions, buildings with hard partitions, open spaced environments, etc.).
However, it did not consider a detailed floor plan of the environment. Consequently,
the results it produced were not as accurate as the more sophisticated deterministic
models and as this dissertation was being written its source code was primarily
being sold as an educational tool.
• Site-Specific System Simulator for Wireless Communications (S4W) [Virginia Tech.
2006] was a comprehensive 3D ray tracing simulation developed at the Virginia
Polytechnic Institute with a focus on parallel processing.
• LANPlanner [Motorola 2006] from Motorola (formerly Site Planner from Wireless
Valley) took both information about the geometrical nature of the environment to
be modelled and information about the materials residing within the environment
and used a True Point-to-Point, Multiple Breakpoint Path Loss Exponent Model
[Hankins 2004]. This model was based on the simple power law for path loss.
Essentially, a single ray was traced between the transmitter and the receiver and
an attenuation term included for each object this line passed through. The path
loss exponent was a piecewise function of distance. LANPlanner also provided
simpler models and allowed for the inclusion of empirical measurements to ‘fine
tune’ the predicted results.
The above list was intended to be indicative rather than comprehensive. There were
many other software packages that also served similar purposes, including:
• Wireless InSite [Remcom 2006] which provided ray tracing, FDTD, Hata, COST-
Hata and free space propagation prediction algorithms;
• CINDOOR [Uni Cantabria 2006] a ray tracing simulation created at the University
of Cantabria; and,
• SuperNEC [SuperNEC 2006] a MoM simulation package.
The overview of available propagation packages in this section provided an insight into
what features were required to produce a simulation package that was not only of aca-
demic interest, but also of real world use and potentially commercially viable.
2.13 802.11b Standards and Simulation
The following sections (Section 2.13 – 2.18) present an overview of the 802.11b Physical
and MAC layers and describe models found in the literature that have been developed in
order to simulate or predict their respective behaviours. The standards are described in
order to allow the reader to develop a working knowledge of the relevant portions of the
802.11b standard without having to resort to external literature. The models presented
are subsequently built upon in later sections to facilitate the prediction of the capacity
of Voice over IP (VoIP) in a wireless network. There are three main points of interest
in the following sections relevant to the task of determining the capacity of VoIP over
802.11b; these are:
• How each of 802.11b’s different modulation schemes affects the bit error rate for a
given signal-to-noise ratio (Sections 2.15 – 2.17);
• The amount of overhead introduced by each layer/protocol. This determined the
amount of voice data that could be contained in a packet of a given size (Sec-
tions 2.14, 2.18 and 2.21); and,
• How the number of stations vying for access to a channel affected the amount of
time each station had to wait before it was able to transmit (Sections 2.18 – 2.20).
2.14 802.11b Physical layer
The IEEE 802.11b standard specifies the use of Direct Sequence Spread Spectrum (DSSS)
modulation. Spread Spectrum techniques cause the transmitted signal to use more
bandwidth than the data stream that is being modulated, and provide a number of
benefits, the most important of which is a higher resilience to interference. This is
achieved by spreading the signal to be transmitted, according to a predefined pattern,
at a rate, Tc (chipping rate), which is higher than the rate at which it is fed symbols,
TS (symbol rate).
The 802.11b Physical layer header is 24 octets long and consists of two distinct parts,
the Physical Layer Convergence Protocol (PLCP) Preamble (18 octets) and the PLCP
Header (6 octets). The PLCP Preamble is used to synchronize the transmitter and
receiver and the PLCP Header contains information about the length of the ensuing
payload and the data rate at which it is transmitted. Both the PLCP Preamble and PLCP Header
are transmitted at 1Mbps regardless of the data rate being used. It is also worthwhile to
point out that [IEEE 802.11b] specifies the option of using a short preamble (9 octets).
When this is used, the PLCP Preamble is transmitted at 1Mbps and the PLCP Header
transmitted at 2Mbps. 802.11b Physical layer with long preamble is shown in Figure 2.16
and with short preamble in Figure 2.17.
Figure 2.16: Physical Layer Long Preamble.
Figure 2.17: Physical Layer Short Preamble.
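The octet counts and rates above fix the air time of the PLCP fields regardless of the payload data rate; a small Python check (assuming, per the text, that the short-preamble PLCP Header remains 6 octets, sent at 2Mbps):

```python
def plcp_overhead_us(short_preamble: bool) -> float:
    """Air time (microseconds) of the 802.11b PLCP Preamble plus Header.
    Long: (18 + 6) octets, all at 1 Mbps.
    Short: 9-octet preamble at 1 Mbps plus 6-octet header at 2 Mbps."""
    if short_preamble:
        return 9 * 8 / 1.0 + 6 * 8 / 2.0   # 72 us + 24 us
    return (18 + 6) * 8 / 1.0              # 192 us
```

This fixed 192µs (or 96µs) cost per frame is one reason small VoIP packets consume a disproportionate share of channel time, a point returned to in the capacity analysis.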
For transmission speeds of 1 and 2 Mbps, the pattern used to shift the phase of the
signal to be transmitted is an 11-chip Barker code and for transmission speeds of 5.5 and
11 Mbps Complementary Code Keying (CCK), Section 2.17, is employed.
Both Barker Codes and CCK provide high values of correlation when they are aligned
and low values when they are only partially aligned. This is desirable behaviour because,
as noted earlier, multipath can result in multiple copies of the same signal arriving at
the receiver with slightly different delays. This can prove troublesome if the transmitted
signals provide high levels of correlation when they are only partially aligned [Andren
2000].
(Note: Barker codes exist for lengths 2, 3, 4, 5, 7, 11 and 13, and it has been conjectured
that no longer such codes exist. The Barker code of length 11 specified for use in 802.11b
is [1, −1, 1, 1, −1, 1, 1, 1, −1, −1, −1].)
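The correlation property described above is easy to verify numerically (an illustrative Python snippet): the 802.11b Barker word correlates to 11 at zero shift, while every partial alignment has magnitude at most 1.

```python
BARKER_11 = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]

def autocorrelation(seq, shift):
    """Aperiodic autocorrelation of seq at the given positive shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

peak = autocorrelation(BARKER_11, 0)                        # full alignment
sidelobes = [autocorrelation(BARKER_11, s) for s in range(1, 11)]
```

This sharp peak is what lets a receiver distinguish the direct signal from slightly delayed multipath copies of itself.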
2.15 Differential Binary Phase Shift Keying (DBPSK)
At 1 Mbps, 802.11b employs Differential Binary Phase Shift Keying (DBPSK), a varia-
tion on Binary Phase Shift Keying (BPSK). BPSK performs poorly in the presence of
an arbitrary phase shift introduced by the channel (resulting in a 180 degree phase am-
biguity), while DBPSK does not suffer from this problem and only has a slightly higher
probability of error [Proakis 2001].
In PSK, the phase of the received signal determines what symbol was transmitted (i.e.,
in the case of BPSK, transmitting a “1” might result in the phase of the transmitted
signal being π, and transmitting a “0” might result in the phase being 0, or vice versa).
However, in the case of DPSK it is the change in phase between the received signal and
the signal received immediately prior to that signal that determines what symbol was
transmitted (i.e., in the case of DBPSK transmitting a “1” could cause the phase of
the transmitted signal to change by π, and transmitting a “0” could cause the phase to
remain the same, or vice versa).
The probability of a bit error occurring in an Additive White Gaussian Noise (AWGN)
channel when using an optimal receiver is well known for DBPSK [Proakis 2001]. A
convenient approximation of this probability is written as:

Pe = (1/2) e^(−εb/N0) (2.13)
Where:
εb/N0 is the signal-to-noise ratio per bit.
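Equation 2.13 can be evaluated directly (an illustrative Python snippet; snr_per_bit is εb/N0 in linear units, not dB):

```python
import math

def dbpsk_ber(snr_per_bit: float) -> float:
    """Bit error probability for DBPSK in AWGN (Equation 2.13):
    Pe = 0.5 * exp(-Eb/N0)."""
    return 0.5 * math.exp(-snr_per_bit)
```

At zero SNR the error rate is 1/2 (a coin flip), and it decays exponentially as the SNR per bit grows.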
2.16 Differential Quadrature Phase Shift Keying (DQPSK)
At 2 Mbps, 802.11b employs Differential Quadrature (or Quaternary) Phase Shift Keying
(DQPSK). This doubles the rate of DBPSK by transmitting two bits per symbol (and
thus the transmitted signal has one of four different phases) at the expense of having a
slightly higher probability of error.
According to [Proakis 2001], while it was not difficult to obtain the probability of error
for DBPSK, it was much more so for M-ary cases where M > 2 such as DQPSK (where M
= 4). [Proakis 2001] provided an approximate solution, showing that the performance of
M-ary DPSK was approximately 3dB worse than that of M-ary PSK (an approximation
that was claimed to be valid for typically encountered SNRs). An exact solution for
DQPSK using Gray coding (and thus suitable for predicting the BER of 802.11b at
2Mbps) in an AWGN channel was also provided, Figure 2.18, which [Miller 1998] claimed
only provided the bit error rate (BER) for the special case where noise samples were
uncorrelated.
The equation presented by [Proakis 2001] was:

PB = Q1(a, b) − (1/2) I0(ab) e^(−(a² + b²)/2) (2.14)

Where:
Q1(a, b) was the Marcum Q function [Appendix 1];
I0(ab) was the modified Bessel function of order zero [Appendix 1]; and,
a and b were defined as:

a = √((2εb/N0)(1 − √(1/2))), b = √((2εb/N0)(1 + √(1/2)))

A simpler approximation to this equation was presented in [Miller 1998]:

Pe = Q(√(1.1716 εb/N0)) (2.15)

Where:
Q(x) was the Gaussian distribution Q-function [Appendix 1].
A graph showing the relationship between bit error and SNR for DBPSK and DQPSK
(both exact and approximate solutions) is given in Figure 2.18.
Figure 2.18: Bit Error Probabilities for DBPSK and DQPSK.
From Figure 2.18 it is apparent that the approximation presented in [Miller 1998] closely
followed the exact solution presented by [Proakis 2001] and it is clear that DBPSK had
a lower probability of bit error than DQPSK.
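[Miller 1998]'s approximation (Equation 2.15) needs only the complementary error function, so it can be evaluated without special functions (an illustrative Python snippet, with SNR per bit in linear units):

```python
import math

def q_function(x: float) -> float:
    """Gaussian Q-function, expressed via the complementary error function:
    Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def dqpsk_ber_approx(snr_per_bit: float) -> float:
    """Approximate DQPSK bit error probability (Equation 2.15)."""
    return q_function(math.sqrt(1.1716 * snr_per_bit))

def dbpsk_ber(snr_per_bit: float) -> float:
    """DBPSK bit error probability (Equation 2.13), for comparison."""
    return 0.5 * math.exp(-snr_per_bit)
```

Comparing the two at any moderate SNR reproduces the ordering seen in Figure 2.18: DQPSK has the higher bit error probability.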
2.17 Complementary Code Keying (CCK)
CCK is used in conjunction with DQPSK at both the 5.5Mbps and 11Mbps data rates
and replaces the use of Barker codes to provide spectrum spreading. It was developed by
Lucent Technologies and Harris Semiconductor and was adopted by the 802.11 Working
Group in 1998.
CCK was selected over competing modulation techniques as it required approximately
the same bandwidth and could utilise the same preamble and header as pre-existing 1
and 2 Mbps wireless networks and thus facilitated interoperability [Andren 1999].
CCK is a variation and improvement on M-ary Orthogonal Keying and utilises ‘polyphase
complementary codes’ [Pearson 2000]. Complementary codes, first discovered by [Go-
lay 1961], are sets of equal, finite length sequences, that have a number of interesting
properties – the most important of which is the fact that the sum of their respective
autocorrelation sequences is zero at all points except for the zero shift.
The complementary codes introduced by [Golay 1961] were known as binary comple-
mentary codes where:
cki ∈ {−1, 1}
802.11b makes use of polyphase complementary codes, first proposed by [Sivaswamy
1978], where each element is a complex number of unit magnitude and arbitrary phase,
or more specifically for 802.11b at 5.5 or 11 Mbps:
cki ∈ {1, −1, j, −j}
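The complementary property described above, that the autocorrelations of the two sequences sum to zero everywhere except the zero shift, can be checked with the simplest binary Golay pair (an illustrative Python snippet):

```python
def autocorr(seq, shift):
    """Aperiodic autocorrelation of seq at the given positive shift."""
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

# The shortest binary complementary (Golay) pair.
a, b = [1, 1], [1, -1]
sums = [autocorr(a, s) + autocorr(b, s) for s in range(len(a))]
# sums[0] is 2 * len(a); every other entry is zero.
```

The zero sidelobe sum is what makes complementary codes attractive in multipath channels: delayed copies of the pair cancel rather than masquerade as signal.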
For 11Mbps CCK, as implemented in 802.11, codewords are eight chips in length (defined
as a complex vector, a CCK codeword is c = {e^j(φ1+φ2+φ3+φ4), e^j(φ1+φ3+φ4),
e^j(φ1+φ2+φ4), −e^j(φ1+φ4), e^j(φ1+φ2+φ3), e^j(φ1+φ3), −e^j(φ1+φ2), e^jφ1}). Six
bits of the unmodulated data are used to select one of 64 complementary code sequences
and then two bits are used to modulate this using DQPSK, resulting in eight bits per
symbol. 5.5Mbps transmission uses two bits to select a code word and two bits to
modulate the codeword using DQPSK, resulting in four bits per symbol. For both
5.5Mbps and 11Mbps transmission, the chipping rate is 11Mcps and the symbol rate is
1.375Msps.
The probability of a symbol error for 11Mbps CCK was given in [Proakis 2001] as:

PS 11Mbps = 1 − (1/√(2π)) ∫_{−√(2εS/N0)}^{∞} [ (1/√(2π)) ∫_{−(ν+√(2εS/N0))}^{ν+√(2εS/N0)} e^(−x²/2) dx ]³ e^(−ν²/2) dν

Therefore the probability of a bit error is:

PB 11Mbps = 1 − (1 − PS 11Mbps)^(1/8) (2.16)

Similarly, the probability of a symbol error for 5.5Mbps is [Proakis 2001]:

PS 5.5Mbps = 1 − (1/√(2π)) ∫_{−√(2εS/N0)}^{∞} [ (1/√(2π)) ∫_{−(ν+√(2εS/N0))}^{ν+√(2εS/N0)} e^(−x²/2) dx ] e^(−ν²/2) dν

And the probability of a bit error is:

PB 5.5Mbps = 1 − (1 − PS 5.5Mbps)^(1/4) (2.17)
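Equations 2.16 and 2.17 are the standard symbol-to-bit error conversion, sketched below (an illustrative Python snippet); for small PS it reduces to PB ≈ PS/k:

```python
def bit_error_from_symbol_error(ps: float, bits_per_symbol: int) -> float:
    """Equations 2.16 / 2.17: PB = 1 - (1 - PS)^(1/k), where k is the
    number of bits per symbol (k = 8 for 11 Mbps CCK, k = 4 for 5.5 Mbps)."""
    return 1.0 - (1.0 - ps) ** (1.0 / bits_per_symbol)
```

For example, a symbol error probability of 10⁻⁶ at 11Mbps corresponds to a bit error probability of roughly 1.25 × 10⁻⁷.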
A graph showing the relationship between BER and SNR for all of 802.11b’s different
modulation schemes is given in Figure 2.19.
(Note: the actual equation given in [Proakis 2001], and subsequently reprinted in many
papers, was incorrect as it was missing a factor of 1/√(2π).)
Figure 2.19: Comparison of BER Versus Chipping Energy for 802.11b Modulation
Schemes (bit error rate versus SNR (Ec/N0) in dB for the 1, 2, 5.5 and 11 Mbps schemes).
2.18 802.11b MAC layer
The MAC sub-layer is responsible for determining when a given client is allowed to utilize
the physical medium at a specific point in time and it introduces 34 octets of overhead.
This is shown in Figure 2.20.
Figure 2.20: 802.11b Frame.
The 802.11 standards provide for two different MAC techniques:
• The Distributed Coordination Function (DCF), a MAC technique that does not
rely on a central coordinator (DCF belongs to the CSMA/CA family of protocols);
and,
• The Point Coordination Function (PCF) in which the AP signals nodes when they
are permitted to transmit.
[Hole 2004] and [Garg 2003] argued that while PCF was designed to support real time
data transfer, its implementation in 802.11 compliant hardware was optional, thus its
presence could not be relied on and, as such, it was important to investigate the real
time behaviour of systems that utilized DCF.
The DCF is a MAC technique for avoiding packet collisions (i.e., it is a technique that
aims to prevent two wireless stations from transmitting at the same time). In the 802.11
standards, it is available in two types, specifically:
• Basic Access Mechanism (the default), where a positive acknowledgement (ACK)
is transmitted upon the successful reception of a packet; and,
• Request to send (RTS)/Clear to send (CTS), where a station reserves the channel
by transmitting a special RTS frame, which, if successful, is acknowledged by a CTS
frame, before the data packet is transmitted. The RTS/CTS frames introduce more
overhead, but have the advantage of being able to resolve collisions more quickly
(the only packets that ever collide are RTS packets, which are small).
Figure 2.21 shows the operation of both Basic Access and RTS/CTS DCF.
(a) 802.11b Basic Access Mechanism.
(b) 802.11b Basic Access Mechanism with Error.
(c) 802.11b RTS/CTS Mechanism.
(d) 802.11b RTS/CTS Mechanism with Error.
Figure 2.21: 802.11b DCF.
The Short Interframe Space (SIFS) is the amount of time taken between receiving a
message and issuing a response. It results from delays inherent in decoding the received
message, formulating and transmitting a response.
The DCF Interframe Space (DIFS) is the minimum amount of time that all stations have
to wait after a channel becomes clear before being permitted to attempt transmission.
The DIFS is longer in duration than the SIFS, forcing stations to wait a sufficient amount
of time after a packet has been transmitted for an ACK to be sent.
To prevent a station from constantly transmitting and preventing others from having
their turn, a station that has just sent a packet must wait a random back-off time
before it is allowed to transmit again. The method used by 802.11b is known as Binary
Exponential Back-off (BEB). To determine the back off period, a random integer is
chosen from a uniform distribution:
Back-off Timer = U[0, CW]
Where:
CW is the size of the contention window, initially set to a value of CWmin.
This random number, known as the back-off timer, represents the number of slots, σ,
(periods of a fixed duration) that the station must defer transmission for. The dura-
tion of each slot provides enough time for a station to detect whether another station
has chosen that particular slot to transmit. For every slot that a station senses the
communications channel to be idle (the communications channel is considered idle if no
transmissions have been attempted for a period longer than DIFS), the back-off timer
is decremented by one. Once the back-off timer reaches zero, the station is allowed to
attempt transmission.
If a station that has just transmitted a packet does not receive an ACK (perhaps because
two stations tried to transmit at the same time or the packet was garbled in some
other fashion), a collision is assumed to have occurred. In this situation, the size of the
contention window, CW, is increased and a new back-off timer chosen from the uniform
distribution U[0, CW]. More formally, the manner in which CW increases in size is:

CW = 2^n (CWmin + 1) − 1 (2.18)

Where:
n is the number of consecutive collisions a station has experienced.

(Note: in some places in this dissertation, the term collision refers to a packet corrupted
by noise, or by another station that attempted to transmit at the same time. In other
places, it refers solely to a packet corrupted by another station that attempted to
transmit. Where the distinction is important, it is explicitly pointed out in the text.)
For each failed transmission attempt, CW is increased in size until an arbitrary limit,
CWmax, is reached, at which point CW remains at a constant size until a successful
transmission occurs or the maximum retry limit is reached.
When a successful transmission occurs (i.e., an ACK is received) the contention window
is reset to its initial size (i.e., CW = CWmin).
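The back-off rules above (Equation 2.18, capped at CWmax and reset on success) can be sketched as follows (an illustrative Python snippet, using the 802.11b values CWmin = 31 and CWmax = 1023):

```python
import random

def contention_window(n_collisions: int, cw_min: int = 31, cw_max: int = 1023) -> int:
    """Binary Exponential Back-off (Equation 2.18), capped at CWmax:
    CW = 2^n * (CWmin + 1) - 1. A successful transmission resets n to 0."""
    return min((2 ** n_collisions) * (cw_min + 1) - 1, cw_max)

def backoff_slots(n_collisions: int) -> int:
    """Number of idle slots to defer: uniform over [0, CW]."""
    return random.randint(0, contention_window(n_collisions))
```

After zero collisions the window is 31 slots; it doubles (plus one) with each consecutive collision until it saturates at 1023 slots.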
[IEEE 802.11] specified that a packet should be discarded if, after a certain number of
attempts, transmission of the packet was still unsuccessful. If the packet was shorter than
(the value referred to in the standard as) dot11RTSThreshold then dot11ShortRetryLimit
attempts were allowed, otherwise dot11LongRetryLimit attempts were allowed. It was
interesting to note that in [IEEE 802.11b] dot11ShortRetryLimit was equal to 7 and
dot11LongRetryLimit was equal to 4, yet the contention window's size remained fixed
after 5 transmission attempts (for 802.11b, where CWmin = 31, this corresponded to
CWmax = 1023).
The default value of dot11RTSThreshold in [IEEE 802.11] is 3000. Therefore, in this
dissertation, it was assumed that all frames smaller than 3000 bits were small and
thus limited by dot11ShortRetryLimit attempts and all other frames were limited by
dot11LongRetryLimit.
The probability that a collision occurs determines the amount of time (number of slots)
that a station has to wait before it can send a packet, and is thus an important factor
when considering the capacity of a wireless channel for VoIP.
[Heusse 2003] noted that, due to the fair nature of CSMA/CA, throughput at the MAC
layer for 802.11b clients was limited by the transmitting speed of the slowest client.
That is, a client transmitting at 1Mbps captured the channel for 11 times longer than
a client transmitting at 11Mbps did. This was an important consideration as many
clients implemented Auto Rate Control (ARC) algorithms that varied the speed at which
the client transmitted in response to channel conditions. The IEEE 802.11b WLAN
standards did not mandate a particular ARC algorithm, and as such, a
number have been proposed, including:
• Auto Rate Fallback (ARF), the most popular ARC algorithm at the time this
dissertation was written [Kim 2005], originally designed for Lucent’s WaveLAN
II, where transmission speed is increased after successive successful transmissions,
and decreased after successive failed transmissions [Qin 2004]. One of the major
drawbacks of this algorithm is that it fails to distinguish between collisions (a
situation which reducing the data rate does not improve), and packets lost due to
channel noise (a situation which reducing the data rate does improve) [Kim 2005].
• Receiver-Based AutoRate (RBAR), which utilises RTS/CTS DCF. Using RBAR,
when a client wants to transmit, it sends an RTS frame, which the AP uses, based
on the strength at which it is received, to estimate channel conditions and de-
termine the rate at which the client should transmit. This information is then
attached to the CTS frame and sent back to the client. This scheme was orig-
inally proposed in [Holland 2001] who claimed simulations demonstrated that it
performed consistently well. [Kim 2005] however, argued that whilst RBAR was
an improvement over ARF (because it did not decrease the speed of a network
when a collision occurred), it was unsuitable for use in 802.11 networks because it
required substantial changes to the 802.11 specifications to facilitate the modified
RTS/CTS packets.
• Collision Aware Rate-Adaptation (CARA), proposed in [Kim 2006], a variation of
ARF. CARA uses RTS/CTS frames to differentiate between channel errors and
collisions, but only after a data frame has been lost (in this manner, it provides a
method for distinguishing between collisions and channel errors without needing
to transmit RTS/CTS frames before every data frame).
• Robust Rate Adaptation Algorithm (RRAA) proposed in [Wong 2006]. RRAA
takes a slightly different approach from CARA and ARF: rather than relying on
successive successful or failed transmissions, it uses the number of failed transmis-
sions that occur within a given period of time to estimate the ratio of lost frames
to frames sent. This ‘loss-ratio’ is used to determine whether the data-rate should
be increased, decreased or remain the same. In the same paper, [Wong 2006] also
proposed an ‘adaptive RTS/CTS filter’, which helps to distinguish between chan-
nel errors and collisions. The algorithm is referred to as RRAA when used in
conjunction with the adaptive RTS/CTS filter, and RRAA-Basic when it is not.
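As an illustration of the ARF scheme described in the first bullet point, the following sketch steps the transmission rate up after a run of successful transmissions and down after consecutive failures. The class name and the thresholds (10 successes, 2 failures) are illustrative assumptions only; no threshold values are mandated by the 802.11b standard:

```python
class ARFRateControl:
    """Minimal sketch of an ARF-style rate controller. The 802.11b rate
    set comes from the standard; the thresholds are assumptions."""
    RATES_MBPS = [1, 2, 5.5, 11]

    def __init__(self, up_threshold=10, down_threshold=2):
        self.idx = 0                  # start at the lowest rate
        self.successes = 0
        self.failures = 0
        self.up_threshold = up_threshold
        self.down_threshold = down_threshold

    def report(self, ack_received):
        """Feed back the outcome of one transmission attempt."""
        if ack_received:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_threshold:
                self.idx = min(self.idx + 1, len(self.RATES_MBPS) - 1)
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.down_threshold:
                self.idx = max(self.idx - 1, 0)
                self.failures = 0

    @property
    def rate(self):
        return self.RATES_MBPS[self.idx]

arf = ARFRateControl()
for _ in range(10):        # ten consecutive ACKs step the rate up
    arf.report(True)
print(arf.rate)            # -> 2
```

Note that, exactly as [Kim 2005] observed, this logic steps the rate down on any run of failures, whether they were caused by noise or by collisions.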
2.19 VoIP Capacity
[Bianchi 2000] developed a very accurate model for calculating the probability of a
collision occurring through the use of Markov chains for both the Basic Access method
and RTS/CTS versions of DCF. [Garg 2003] argued, however, that such a model was
needlessly complex when analysing VoIP traffic and stated that such a channel could be
modelled as having just two saturated (always had a packet ready to send), transmitters
– a client and the AP.
As shown by [Heusse 2003], if the probability of multiple collisions was deemed negligible,
then the probability that a collision occurred was simply the probability that two clients
chose their CW such that their back-off timers reached zero at the same time, or:
Pc = 1 − (1 − 1/CWmin)^(N−1) (2.19)
If CWmin was 31 (the value specified in [IEEE 802.11b]), then the probability a collision
occurred evaluated to 0.03, matching the value given in [Garg 2003].
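Equation 2.19 can be verified numerically for the two-transmitter case (a client and the AP) considered by [Garg 2003]:

```python
def collision_probability(n_stations, cw_min=31):
    """Probability that at least one of the other N-1 stations chose the
    same back-off slot (Equation 2.19)."""
    return 1.0 - (1.0 - 1.0 / cw_min) ** (n_stations - 1)

# Two saturated transmitters (a client and the AP), CWmin = 31:
print(round(collision_probability(2), 3))   # -> 0.032, i.e. roughly 0.03
```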
The approximate model put forth in [Garg 2003] was justified by noting that in the
case of VoIP, if there were n simultaneous calls going on, then the AP would transmit
n times as much data as the wireless clients (conversations were bidirectional and the
AP had to transmit packets containing voice data to each of the n clients conducting a
call). It was also noted that the time between successive packets being sent by a specific
client was orders of magnitude greater than the amount of time the channel was made
busy actually sending packets. This resulted in the probability of a collision between
two unsynchronized stations being much smaller than the probability of a collision be-
tween a station and the AP. [Garg 2003] claimed that this model was validated through
simulation.
[Garg 2003] noted, through experimentation, that the quality of all active 802.11b wire-
less VoIP sessions associated with a single AP degraded rapidly when a certain threshold
in the number of simultaneous VoIP calls was reached, and degraded only slightly below
that number (the critical number being seven simultaneous VoIP calls using G.711 on an
11Mbps link with 10ms of voice data per frame, a result confirmed by [Hole 2004]).
In [Hole 2004], the capacity of an 802.11b wireless network operating at 11Mbps was
examined. A mathematical upper bound on capacity was derived and its predictions
confirmed through simulation. Capacity was found to be highly dependent on the delay
budget allocated to packetisation. Following on from this, a channel with a bit error
rate (BER) of > 10−5 was shown to have a considerably lower capacity – to the extent
that a channel with a BER of > 10−3 was not even able to handle a single call. This
experimentation was performed for two different coding schemes, G.711 and G.729, and
it was shown that, unsurprisingly, G.729 provided greater capacity than G.711.
2.20 Simulating the 802.11b MAC layer
The simplifications used by [Garg 2003] were justifiable in a channel where packet re-
transmissions occurred solely because of collisions. However, in a more realistic, noisy
channel, where packets could be corrupted in other ways, a more in-depth analysis was
required.
In [Bianchi 1998, 2000] a two-dimensional Markov Chain analysis was employed to in-
vestigate the performance of the 802.11 DCF. This analysis was expanded upon by
[Wu 2002] to more closely model 802.11’s DCF (Wu’s model took into account the
dot11ShortRetryLimit and dot11LongRetryLimit). A brief recap of Wu’s two dimensional
discrete-time Markov Chain analysis of the 802.11 DCF is provided in Section 3.8.
It should be noted that while [Wu 2002]’s model performed admirably, it did not model
802.11b’s DCF entirely accurately. For example, it did not take into consideration an
effect referred to by [Vu 2006] as ‘Channel Capture’. Channel capture occurred because
wireless stations were allowed to choose zero as their back-off timers. In this manner,
any station that chose zero as its back-off timer would transmit immediately following
the DIFS period, after the channel ceased to be busy, and before any other station had a
chance to decrement their back-off timer. The only reason a station would have chosen
a new back-off timer while the channel was busy was if it had attempted to transmit
(was successful and already had another frame ready to transmit, or was unsuccessful –
i.e., was involved in a collision, or the frame had been corrupted by noise).
The result of this was that in a saturated network, wireless stations that had just at-
tempted to transmit had a higher probability of attempting to transmit again than those
that were just decrementing their back-off timers. In [Foh 2005], an extension to the
Markov Chain analysis of [Bianchi 2000] that modelled the channel capture effect was
proposed. [Vu 2006] who also proposed a significantly simpler alternative, noted that
[Foh 2005]’s obtained numerical results for collision probability seemed too low for such
a saturated wireless network. The analytic models discussed in this dissertation did
not take into consideration channel capture effects (beyond, of course, their unavoidable
inclusion when simulating the DCF), as the added complexity detracted from the argu-
ments and points being made. It was also conceded by [Foh 2005] that, while a model
that did not model channel capture was fundamentally in error, this was, for many
common cases, not reflected significantly in the saturation throughput results.
2.21 RTP/UDP/IP and Robust Header Compression
The Real-time Transport Protocol (RTP) was published as an Internet Engineering
Task Force (IETF) proposed standard [RFC 1889] in January 1996 and was subsequently
adopted by the ITU as part of its H.323 series of recommendations (concerning real-time
voice, video, and data communication over packet-switched networks). RTP operates
in the Session layer of the OSI Model, providing services to the Presentation layer and
utilizing services provided by the Transport layer. It attempts to provide a robust
framework for the end-to-end delivery of data with real-time characteristics (real-time
in this context meaning that a receiver presents a media-stream to the user upon its
arrival, as opposed to storing the complete stream for presentation at a later date).
VoIP is a prime example of a real-time media stream.
The RTP header is generally 12 octets long (excluding additional contributing sources)
and contains information including:
• The type of media being transported;
• A sequence number that allows the receiver to identify whether packets have been lost;
• A time-stamp which indicates the order in which media packets should be played
out; and,
• Other data, such as a version number, marker bit, etc.
The RTP packet is usually contained within a lower-layer payload, such as UDP/IP.
The IP (or Internet Protocol) header is 20 octets long (for IPv4, ignoring any optional
‘Options’) and contains information that facilitates the best effort delivery of packets to
a named (via the familiar IP address) destination.
The UDP (or User Datagram Protocol) header is 8 octets long and provides minimal
extensions to IP. It introduces the concept of ports (which are different destinations
residing within a given host, allowing multiple services to be provided to a single host
with a given IP address) and an optional checksum that can be used to detect corruption
of the payload. If the checksum is not being used (common when transmitting real time
data, as using corrupt data is often preferable to waiting for the data to be resent), then
it is set to zero (thus the header size does not change).
Therefore, it could be seen that a single RTP/UDP/IP packet contains 40 octets of
protocol specific header information. If the ultimate payload was, for example, 10ms of
G.711a audio, then for every 80 octets of data, 40 octets of overhead would be introduced
by RTP/UDP/IP.
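The overhead arithmetic above can be checked directly; G.711 encodes audio at 64 kbps, i.e., 8 octets per millisecond:

```python
# Header sizes in octets, as given in the text:
RTP_HEADER = 12   # no additional contributing sources
UDP_HEADER = 8
IP_HEADER = 20    # IPv4, no optional 'Options'

def g711_payload_octets(frame_ms):
    """G.711 encodes at 64 kbps, i.e. 8 octets of audio per millisecond."""
    return 8 * frame_ms

header = RTP_HEADER + UDP_HEADER + IP_HEADER
payload = g711_payload_octets(10)
print(header, payload)    # -> 40 80: a third of each packet is overhead
```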
It was noted that much of the header information is either static (e.g., the media type,
port, IP address and UDP checksum), or changes in a predictable way (e.g., the RTP
sequence numbers and time stamps). These facts allow header compression to be im-
plemented at the Data Link layer and can result in the entire 40 octet RTP/UDP/IP
header being compressed down to as little as 2 octets [Perkins 2003].
The two main standards for RTP header compression are:
• Compressed RTP (CRTP) specified in [RFC 2508]; and,
• Robust Header Compression (ROHC) [Bormann 2001].
ROHC is more complex to implement than CRTP, but also much more efficient in the
presence of a noisy channel (such as a wireless one) [Perkins 2003] and as such was the
form of header compression considered in this dissertation.
ROHC operates on the principle that once the initial information about the RTP/UDP/IP
header has been transmitted (even though some information is static and some pre-
dictable, it still needs to be transmitted at least once so that the receiver can recon-
struct it), then the decompressor knows which fields are static and which fields change
in a predictable manner. The decompressor is then able to correctly predict what the
following headers should look like with only minimal help from the transmitter. This
initial information is known as the ‘context’.
ROHC has three compressor states of operation (ordered from lowest to highest levels
of compression):
• Initialise and Refresh (IR);
• First Order (FO); and,
• Second Order (SO).
IR was designed to either initialise the context, or to re-initialise the context after an
error. When ROHC is in this state of operation, the sender transmits the complete,
uncompressed header information. ROHC remains in this state until it is reasonably
confident that the receiver has successfully received all requisite static information, at
which point it transitions to FO.
In FO, ROHC attempts to communicate irregularities in the packet stream. This allows
the decompressor to work out how the dynamic fields are changing. FO differs from IR in
that only a few static fields can be updated and most of the information transmitted is at
least partially compressed. From this state, ROHC can either transition back to IR, if the
context changes dramatically, or to SO if ROHC is reasonably sure that the decompressor
understands the patterns with which the dynamic fields are changing.
In SO, correct decompression of packets only requires the correct decompression of the
RTP sequence number (provided that ROHC was set up correctly in the FO state).
ROHC transitions from SO if the regular patterns, defining how the dynamic fields are
updated, change.
It should be noted that completely irregular fields, such as the UDP checksum (if it is
enabled), cannot be compressed and thus have to be communicated unchanged.
ROHC can operate in three modes:
(i) Unidirectional mode (U-mode). This mode is used when communication
from the decompressor back to the compressor is impossible or undesirable.
When in this mode, the decompressor cannot tell the compressor when it
has received enough context information to transition to a higher state of
compression, nor can it notify the compressor that its current context infor-
mation is not up to date and it requires the compressor to transition into
a lower state of compression. To get around this problem, the compressor
transitions to a higher level of compression after a predetermined number of
packets, and once in SO, periodically transitions to lower levels of compres-
sion to ensure that the context is maintained.
(ii) Bidirectional Optimistic mode (O-mode). In this mode the compressor be-
haves in the same manner as U-mode (transitions to higher states of com-
pression after a predetermined number of packets), but also allows the de-
compressor to send acknowledgements to the compressor, which cause a state
transition.
(iii) Bidirectional Reliable mode (R-mode). In this mode, the only way to tran-
sition to different states is at the request of the decompressor.
Using a Bidirectional mode results in higher header compression than the Unidirectional
mode. Typically, the choice between O-mode and R-mode depends on the capacity and
loss characteristics of the back link [Perkins 2003].
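The U-mode behaviour described in point (i) can be sketched as a simple state machine. The packet counts used for the upward transitions and the periodic downward refresh are illustrative assumptions; the ROHC specification leaves these parameters to the implementation:

```python
class UModeCompressor:
    """Minimal sketch of the ROHC U-mode compressor state logic. The
    counts (upward transition after 3 packets, downward refresh every 50)
    are illustrative assumptions, not values from the specification."""
    STATES = ["IR", "FO", "SO"]

    def __init__(self, up_after=3, refresh_every=50):
        self.state = "IR"
        self.sent_in_state = 0
        self.up_after = up_after
        self.refresh_every = refresh_every

    def next_packet(self):
        """Return the state used to compress the next packet."""
        self.sent_in_state += 1
        if self.state in ("IR", "FO") and self.sent_in_state >= self.up_after:
            # Optimistically assume the decompressor now has the context.
            self.state = self.STATES[self.STATES.index(self.state) + 1]
            self.sent_in_state = 0
        elif self.state == "SO" and self.sent_in_state >= self.refresh_every:
            # Periodic downward transition to keep the context maintained.
            self.state = "FO"
            self.sent_in_state = 0
        return self.state

c = UModeCompressor()
print([c.next_packet() for _ in range(8)])
# climbs IR -> FO -> SO without any feedback channel
```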
[Gibson 2006] noted that when used with 802.11b, ROHC, on average, compressed the
RTP/UDP/IP header down to 4 octets (the mode of operation was not specified;
however, as bidirectional communication would pose no problem, O-mode
or R-mode can be assumed). [Rein 2005] concluded that Bidirectional ROHC with
UDP checksum disabled compressed the RTP/UDP/IP header down to approximately
6 octets.
A difference of 2 octets had very little impact on the VoIP capacity simulations presented
later in this dissertation, and thus it was assumed, for the purposes of this dissertation,
that ROHC compressed the RTP/UDP/IP header to a length of 4 octets.
2.22 Summary of Findings
This section presents a brief summary of the material uncovered during the literature
review relevant to the remainder of the dissertation.
The first half of the Literature Review was dedicated to propagation models suitable
for predicting the coverage of 802.11b wireless networks in indoor environments. These
propagation models were grouped into three different categories based roughly on their
complexity, the information they required to operate and, mathematically, how they
arrived at their predictions. A number of models from the reviewed literature (Sections 2.2
– 2.4) were chosen to act as benchmarks against which the relative performance of models
proposed in this dissertation (Chapter 3) could be compared. These included:
• Friis’ Free Space Model (Section 2.3);
• Simple Power Law (Section 2.3); and,
• Partition Based Pathloss (Section 2.4).
In Chapter 3, a number of different propagation models are proposed. The most sig-
nificant of which is a 3D Ray Tracer. It was revealed, through the literature review,
(Section 2.6) that there were many ways that a 3D Ray Tracer, for propagation predic-
tion, could be implemented. The two most popular methods identified were:
• Image Based Ray Tracing (Section 2.7); and,
• Shoot and Bounce Ray Tracing (Section 2.8).
Due to its ability to operate efficiently in complex environments with many objects, it
was decided that a Shoot and Bounce Ray Tracer would be more suitable to aid in the
rapid deployment of wireless networks in an industrial environment than an Image Based
Ray Tracer. Because of this decision, the literature review then narrowed its scope and
investigated issues specific to the implementation of Shoot and Bounce Ray Tracers.
Two major issues were identified, namely:
• How should rays be launched?
• How can kinematic errors be reduced?
Different methods for resolving these issues were discussed in Section 2.8 and by judging
their relative merits and previous successes in being used in Shoot and Bounce Ray
Tracers, it was decided that:
• Rays would be launched through the vertices of a geodesic sphere; and,
• Yun’s modified reception sphere method would be used to reduce kinematic errors.
Following this was a description of commonly accepted methods to calculate ray atten-
uation due to reflections and transmissions (Section 2.9) and methods to model specific
antennas (Section 2.10).
The second half of the literature review was dedicated to reviewing the 802.11b standard
and methods for modelling it. The material uncovered in this half of the literature review
was used in Chapter 3 to develop a model that allows the number of VoIP users supported
by a given 802.11b AP to be calculated.
The 802.11b standard specifies Physical layer and Data Link layer services and protocols.
A review of the Physical layer specifications (Sections 2.14 – 2.17) revealed the size of
the header (i.e., overhead introduced) and the effect that using a short preamble had
on transmission speed and header size. It was also revealed that 802.11b could operate
at one of four different transmission speeds with each different speed utilising a differ-
ent modulation scheme. Modelling the 802.11b Physical layer amounted to providing
equations sourced from the literature that provided the bit error probability for a given
signal-to-noise ratio for each modulation scheme (Sections 2.15 – 2.17).
Following on from this was a review of the 802.11b MAC layer, specifically how much
additional overhead the MAC layer added and how 802.11b implemented CSMA/CA
(Section 2.18).
The ultimate goal of the second half of the literature review was to provide sufficient
background material such that models able to predict the maximum number of concurrent
VoIP calls an AP can support under given channel conditions could be derived.
As a starting point, a model that did this in a very basic manner was discussed (Sec-
tion 2.19). This model was subsequently extended (Chapter 3), using a popular method
for modelling 802.11b’s MAC layer, Markov Chain analysis (Section 3.8), to take into
account channel conditions. This research is complementary to the propagation research,
which can be used to calculate the channel conditions.
In this manner, channel conditions can be predicted using the propagation models and
the effects of contention on the channel calculated through the use of Markov Chain
analysis.
Finally, an overview of overheads introduced by RTP/UDP/IP and RoHC was presented
(Section 2.21). The amount of overhead introduced is important to the VoIP capacity
calculations as it directly affects the amount of actual data that can be transmitted in
a packet of a given size. The effects that overhead, and other factors such as channel
conditions and codec selection have on the capacity of VoIP over 802.11b are investigated
in greater detail in Chapter 5.
Chapter 3
Methodology and
Implementation
3.1 Overview
The purpose of this chapter is to detail the processes and procedures that were used to
achieve the objectives of this Doctoral research – specifically, to examine the operation
of wireless networks in industrial environments and develop models that facilitate the
rapid deployment of wireless networks. This chapter focuses on:
• Providing a technical treatment of what existed prior to this research (i.e., describ-
ing the models introduced in Chapter 2 in greater depth);
• Discussing the shortcomings of these models when used for the goals set out in
this dissertation; and,
• Describing how these models were extended to render them more suitable for the
specific goals of the research program.
3.2 Method / Research and Development Process
The literature review, documented in Chapter 2, uncovered a number of different models
commonly used when assessing the performance of wireless networks in an industrial
environment. For the purpose of propagation prediction, these were:
• Simple Power Law (Section 2.3);
• Partition Based Pathloss (Section 2.4); and,
• Ray Tracing (Sections 2.6 – 2.10).
Models specific to predicting the performance of 802.11b were also described. These
included:
• Bit Error Rates for 802.11b specific modulation schemes (Sections 2.14 – 2.17);
• Garg’s VoIP capacity model (Section 2.19); and,
• Wu’s Markov Chain analysis of the 802.11b MAC layer (Section 2.20).
The identification of models popular in the literature was the first step in a process
to assess the efficacy of various existing modelling techniques and to subsequently de-
velop and implement techniques that could lead to rapid deployment of networks in an
industrial environment. This process was as follows:
(i) Research popular, pre-existing, models from the literature (described in Chap-
ter 2);
(ii) Based on these pre-existing models, develop new models with a focus of
facilitating the rapid deployment of wireless networks (described in Chapter
3);
(iii) Develop an apparatus for gathering real world network measurements and
then gather such measurements at locations of interest (described in Chapter
4);
(iv) Develop a simulation application, both for the evaluation of models and for
use as a tool to facilitate the rapid deployment of wireless networks (described
in Chapter 4); and finally,
(v) Assess the performance of the models, using empirically gathered data, based on
their efficacy in facilitating the rapid deployment of wireless networks, with
the models identified in the literature acting as benchmarks against which the
performance of newly proposed models could be judged (described in Chapters 5 and 6).
The newly proposed propagation models put forth in this chapter were:
• Aisle based pathloss – an extension of the simple power law;
• Partition based path-loss with path-loss exponent – a simple modification to par-
tition based pathloss; and,
• A ray tracing model (specifically, the method used to estimate material properties
was described).
The common theme linking each of these propagation models (both those proposed
and those found in the literature) was the ability to use empirical measurements to
compute the required model parameters and thus produce more accurate results.
For the purposes of predicting the number of VoIP users an 802.11b network could
support, a number of different models were presented:
• New VoIP capacity models (extensions to Garg’s VoIP capacity model) that used
Wu’s Markov Chain Analysis to produce predictions in a noisy channel; and,
• Novel Markov chain models, able to predict the performance of different ARC
algorithms in a noisy channel.
The remainder of this chapter is dedicated to presenting a mathematical overview (and
derivation/justification if relevant) of each of the models mentioned above.
3.3 Propagation Models
When wireless networks were being deployed it was important that adequate coverage
was provided across all areas that required it to ensure an acceptable level of quality
of service (QoS). This provided motivation for research into models that accurately
predicted radio propagation in an indoor channel.
The models discussed in Sections 3.4 – 3.10 were focused primarily on propagation pre-
diction suitable for use in the indoor deployment of 802.11b networks and, as previously
noted, could be divided into three broad categories:
• Empirical/Statistical models – those that did not utilise detailed knowledge of the
exact nature of the wireless channel but instead relied upon extensive empirical
measurements taken at a site to produce a model. The predictive powers of such
models were generally limited to the site where the measurements were taken and
sites sharing similar characteristics. This class of propagation model was typically
the most computationally simple but also the least precise.
• Pseudo-Deterministic models – those that utilised some detailed knowledge of the
exact nature of the wireless channel, but did not attempt to solve or approximate
any form of wave propagation equation.
• Deterministic models – those that utilised detailed knowledge of the exact nature
of the wireless channel in an attempt to provide or approximate a solution to some
form of wave propagation equation. This class of models was typically the most
computationally demanding but also the most precise.
3.4 Empirical and Statistical Propagation Models
3.4.1 Overview
Propagation models that did not require an in-depth knowledge of the geometrical nature
of the environment, but instead relied upon extensive empirical measurements taken at
a given site were known as empirical models.
In Section 3.4.2, the Simple Power Law and the manner in which its pathloss exponent
can be computed using empirical measurements is presented. The Simple Power Law was
a popular statistical method for predicting pathloss that required minimal environmental
knowledge. Its relationship to Friis’ freespace propagation model is also discussed. These
comprised the simplest models that were compared against the other propagation models
discussed in this dissertation.
In Section 3.4.3, Aisle Based Pathloss (ABP) is presented. Aisle Based Pathloss was
based on the Simple Power Law and attempted to reconcile the fact that the layout
of many factories consisted of rows upon rows of parallel aisles, and therefore pathloss
through these aisles should be greater than pathloss down the aisles.
3.4.2 Simple Power Law
One of the most straightforward and commonly used empirical models (suitable for
use in indoor environments) was the path-loss exponent model or simple power law;
mathematically expressed as:
PL(d) = 10 n log10(d/d0) + P0 (3.1)
Where:
d is the distance between transmitter and receiver;
n is the path-loss exponent; and,
P0 is the measured path-loss at a reference distance d0.
In this model the path-loss increased relative to the log of the distance, d, from source
to point of reception and the path-loss exponent, n, controlled the speed at which the
path-loss increased [Rappaport 1992-2].
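Equation 3.1 can be expressed as a short function (a minimal sketch; the function name is illustrative):

```python
import math

def path_loss_db(d, n, p0, d0=1.0):
    """Simple power law (Equation 3.1): path-loss in dB at distance d,
    given path-loss exponent n and reference loss p0 at distance d0."""
    return 10.0 * n * math.log10(d / d0) + p0

# With n = 2 (free-space-like), loss grows by 20 dB per decade of distance:
print(path_loss_db(10, 2, 40) - path_loss_db(1, 2, 40))   # -> 20.0
```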
The path-loss exponent could be arrived at by using linear regression techniques on
a large number of empirical measurements taken at the site of interest [Durgin 1998].
Given a set of measurements in a given environment, the value of n that minimised
the mean squared error (MSE) of predicted versus measured values could readily be
computed as:
n = [ Σ_{i=1}^{N} (Pi − P0) log10(Di/d0) ] / [ 10 Σ_{i=1}^{N} (log10(Di/d0))^2 ] (3.2)
Where:
N is the number of measurement locations; and,
Pi denotes the ith path loss measurement at a Tx-Rx separation of Di meters.
Using this value of n, path-loss values at locations that were not included in the mea-
suring campaign could be inferred.
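Equation 3.2 can be implemented and checked against synthetic measurements generated with a known exponent (a sketch; the function name is illustrative):

```python
import math

def fit_pathloss_exponent(measurements, p0, d0=1.0):
    """MSE-minimising path-loss exponent (Equation 3.2).
    measurements: list of (distance_m, measured_pathloss_dB) pairs."""
    num = sum((p - p0) * math.log10(d / d0) for d, p in measurements)
    den = 10.0 * sum(math.log10(d / d0) ** 2 for d, _ in measurements)
    return num / den

# Synthetic data generated with n = 3 and P0 = 40 dB recovers n exactly:
data = [(d, 10 * 3 * math.log10(d) + 40) for d in (2, 5, 10, 20)]
print(round(fit_pathloss_exponent(data, p0=40), 6))   # -> 3.0
```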
Friis’ freespace propagation model [Friis 1946], described the attenuation of radio signals
transmitted between two antennas in an idealised environment devoid of any interfering
material; given by the equation:
PR = PT GT GR (λ / 4πd)^2 (3.3)
Where:
PR is the received power;
PT is the transmitted power;
GT is the transmitting antenna’s gain;
GR is the receiving antenna’s gain;
λ is the signal wavelength; and,
d is the distance between transmitter and receiver.
It was straightforward to show that the path-loss exponent model approximated Friis’
freespace propagation model when the path-loss exponent was equal to two.
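The following sketch evaluates Equation 3.3 and illustrates the equivalence: received power under Friis' model falls with 1/d^2, i.e., 20 dB per decade of distance, exactly the behaviour of the path-loss exponent model with n = 2:

```python
import math

def friis_received_power(pt, gt, gr, wavelength, d):
    """Friis free-space model (Equation 3.3)."""
    return pt * gt * gr * (wavelength / (4 * math.pi * d)) ** 2

# 802.11b operates around 2.4 GHz, so the wavelength is c/f, about 0.125 m:
lam = 3e8 / 2.4e9
p1 = friis_received_power(0.1, 1, 1, lam, 10)
p2 = friis_received_power(0.1, 1, 1, lam, 100)
# Loss over one decade of distance, in dB:
print(round(10 * math.log10(p1 / p2), 6))   # -> 20.0
```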
3.4.3 Aisle Based Path-loss
The simple power law, Equation 3.1, gave results that only varied depending on distance.
Throughout the course of this research project, it was observed that many factories (both
visited and researched in literature) consisted of aisles parallel to each other, separated by
clutter. An idealised diagram of parallel aisles in a factory is depicted in Figure 3.1.
Figure 3.1: Generalized factory layout.
If Tx was a wireless AP and Rx1 and Rx2 were receivers then, intuitively, the path-loss
at Rx2 would be greater than that at Rx1, even if the distance from the transmitter to
the receiver (D1 and D2) was equal, because, to reach Rx2, the transmitted signal would
have to penetrate several layers of clutter. However, the simple power law would, when
D1 is equal to D2, predict the same path-loss for both Rx1 and Rx2.
This provided motivation for the development of a new path-loss model that utilised two
path-loss exponents, one describing attenuation along the X-axis (in Figure 3.1, down
the uncluttered aisle) and one describing attenuation along the Y-axis (across the aisles
and thus resulting in a greater path-loss).
The new path-loss law, Aisle Based Path-loss (ABP), is of the form:
PLaisle = 10 nX log10(DX) + 10 nY log10(DY) + P0 (3.4)
Where:
nX is the path-loss exponent in the X direction;
nY is the path-loss exponent in the Y direction;
DX is f(x, y) and proportional to the relative X-axis displacement between the
transmitter and the receiver; and,
DY is f(x, y) and proportional to the relative Y-axis displacement between the
transmitter and the receiver.
ABP should degenerate back into the standard power path-loss law, Equation 3.1, when
nX = nY, i.e.:

10n log10(DX) + 10n log10(DY) + P0 = 10n log10(D/d0) + P0 (3.5)
DX and DY were then chosen such that they met the above requirement:

DX = √(D(X² + 1)) / √(d0(Y² + 1)), DY = √(D(Y² + 1)) / √(d0(X² + 1)) (3.6)
Putting Equation 3.6 back into Equation 3.4 allowed PLaisle to be found.
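Combining Equations 3.4 and 3.6 gives the sketch below; the interpretation of X and Y as the relative Tx-Rx displacements along each axis, and of D as the Tx-Rx distance, is an assumption made for illustration. With nX = nY the prediction reduces to the simple power law, as required by Equation 3.5:

```python
import math

def aisle_based_pathloss(x, y, n_x, n_y, p0, d0=1.0):
    """Aisle Based Path-loss (Equations 3.4 and 3.6). x and y are taken
    here to be the relative Tx-Rx displacements along each axis (an
    illustrative assumption)."""
    d = math.hypot(x, y)                                     # Tx-Rx distance D
    dx = math.sqrt(d * (x ** 2 + 1)) / math.sqrt(d0 * (y ** 2 + 1))
    dy = math.sqrt(d * (y ** 2 + 1)) / math.sqrt(d0 * (x ** 2 + 1))
    return 10 * n_x * math.log10(dx) + 10 * n_y * math.log10(dy) + p0

# With equal exponents, ABP matches the simple power law (Equation 3.5):
d = math.hypot(3.0, 4.0)
abp = aisle_based_pathloss(3.0, 4.0, 2, 2, 40)
spl = 10 * 2 * math.log10(d) + 40
print(abs(abp - spl) < 1e-9)   # -> True
```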
Given a set of m path-loss measurements, P1 . . . Pm, at locations (DX1, DY1) . . . (DXm, DYm),
the matrices P and D could be constructed to determine appropriate values for the
path-loss exponents, N, where:

P = [P1 . . . Pm]^T, D = 10 log10 [DX1 DY1; . . . ; DXm DYm] (an m × 2 matrix whose ith row is [DXi DYi]) and N = [nX nY]^T
Expressed in matrix notation as a system of normal equations:

D^T D N = D^T P \quad (3.7)

It therefore followed that:

N = \begin{bmatrix} n_X \\ n_Y \end{bmatrix} = (D^T D)^{-1} D^T P \quad (3.8)

were the values of n_X and n_Y that minimised the mean squared error between the measured
and predicted path-loss.
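As an illustrative sketch (not the simulation program described in Chapter 4), the fit of Equation 3.8 can be reproduced in a few lines of NumPy. The exponents, distances and offset P0 below are synthetic values invented for the example, with P0 assumed known and moved to the left-hand side:

```python
import numpy as np

# Generate synthetic ABP measurements from known exponents, then recover
# n_X and n_Y via the normal equations of Equation 3.8.
rng = np.random.default_rng(0)
n_x_true, n_y_true, p0 = 1.8, 3.5, 40.0       # illustrative values only
DX = rng.uniform(1.0, 50.0, size=20)          # effective X-axis distances (Equation 3.6)
DY = rng.uniform(1.0, 50.0, size=20)          # effective Y-axis distances (Equation 3.6)
P = 10 * n_x_true * np.log10(DX) + 10 * n_y_true * np.log10(DY) + p0

D = 10 * np.log10(np.column_stack([DX, DY]))  # the D matrix of Equation 3.7
N = np.linalg.solve(D.T @ D, D.T @ (P - p0))  # N = (D^T D)^-1 D^T P
print(N)  # recovered [n_X, n_Y]
```

With noise-free synthetic data the true exponents are recovered exactly; with real measurements the same expression gives the least-squares estimate.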
The relative performance of ABP and the simple power law are given in Chapter 5.
3.5 Pseudo-Deterministic Methods
3.5.1 Overview
For more precise modelling of propagation, the geometric nature of the environment, such
as objects lying between the transmitter and receiver, must be considered [Rappaport
1992]. Models that utilised site-specific information but did not attempt to solve any
wave-propagation equations were known as pseudo-deterministic propagation models
[Durgin 1998].
In Section 3.5.2, Partition Based Pathloss (PBP) and a set of normal equations that
allow partition-dependent attenuation factors to be determined are presented. This
particular form of PBP comes from [Durgin 1998] and is a specific implementation of a
set of models based on a common theme (i.e., introducing a fixed amount of attenuation
for a given object lying between the transmitter and the receiver) that originated with
the model proposed in [Rice 1959]. PBP methods were more complex than methods such
as the Simple Power Law, because they required information about the environment in
which they operated.
In Section 3.5.3, a minor modification to PBP that includes a path-loss exponent is
presented. The phenomenological justification for including a path-loss exponent was that
PBP, described in Section 3.5.2, behaved as though it was operating in free space, and
introduced attenuation when large objects lay between the transmitter and the receiver.
In an indoor industrial location, there may be smaller environmental features (common
to a large portion of the environment) that are not practical to model as partitions (such
as wires hanging from the roof); the pathloss exponent served to model these.
These pseudo-deterministic models are compared against the statistical and ray tracing
models in Chapter 5, using equipment and computer applications described in Chapter 4
in order to assess their relative performance in the context of rapidly deploying wireless
networks in an industrial environment.
3.5.2 Partition Based Pathloss
In [Durgin 1998], the use of partition-dependent attenuation factors was discussed and
normal equations were presented that, given a set of path-loss measurements across a
site and knowledge of what objects lay directly between the transmitter and the location
at which measurements were taken, chose the partition-dependent attenuation factors
such that they minimised the variance of the empirical versus modelled results.
The object attenuation factors, a column vector X, were calculated by minimising the
mean squared error of the measured versus the predicted data. If n measurement locations
and m partitions were considered, X could be concisely written in matrix notation
[Durgin 1998] as:
X = (A^T A)^{-1} A^T \left[P - 20\log_{10}(d)\right] \quad (3.9)
Where:
P is a column vector of measured path-loss;
d is a column vector representing the distance from the transmitter to the location
measurements were taken; and,
A is the distance a ray travelled through each object lying between the transmitter
and receiver (Durgin’s original formulation called for just counting the number of
objects between the transmitter and receiver)1.
A is of the form:

A = \begin{bmatrix} a_{11} & \cdots & a_{m1} \\ \vdots & \ddots & \vdots \\ a_{1n} & \cdots & a_{mn} \end{bmatrix}
After solving for X, estimated path-loss for new A and d values where power measurements,
P, were not taken, could be found using the equation:

PL = 20\log_{10}(d) + AX \quad (3.10)

1. A was constructed using the distance a ray travelled through each object lying between the transmitter and receiver, as opposed to the number of times a ray travelled through a given object, because the simulation program (Section 4.7) that PBP was implemented in made it trivial to do so, and experimentation showed that this approach produced more accurate results.
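A minimal sketch of the fit in Equation 3.9, using synthetic numbers in place of site measurements (the attenuation factors and geometry below are invented for illustration):

```python
import numpy as np

# Recover partition attenuation factors (dB per metre traversed, per the
# footnote above) from synthetic path-loss measurements via Equation 3.9.
rng = np.random.default_rng(1)
x_true = np.array([1.5, 0.3])                 # two hypothetical partition types
A = rng.uniform(0.0, 4.0, size=(30, 2))       # metres of each material on each path
d = rng.uniform(5.0, 60.0, size=30)           # Tx-Rx distances
P = 20 * np.log10(d) + A @ x_true             # noise-free synthetic measurements

X = np.linalg.solve(A.T @ A, A.T @ (P - 20 * np.log10(d)))  # Equation 3.9
PL = 20 * np.log10(d) + A @ X                               # Equation 3.10
print(X)
```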
3.5.3 Partition Based Path-loss with path-loss exponent
A simple variation to Durgin’s method, where the path-loss exponent, n, was also free
to be optimised, was then easily derived.
By combining the logarithmic terms with the distances travelled through partitions
lying between the transmitter and the receiver, into a single matrix, Z, and the pathloss
exponent and attenuation terms into a single matrix, X, both the pathloss exponent and
partition attenuation terms could be found at the same time. Therefore, using similar
notation to that used in Section 3.5.2, if:
Z = \begin{bmatrix} 10\log_{10}(d_1) & a_{11} & \cdots & a_{m1} \\ \vdots & \vdots & \ddots & \vdots \\ 10\log_{10}(d_n) & a_{1n} & \cdots & a_{mn} \end{bmatrix} \quad \text{and} \quad X = \begin{bmatrix} n \\ x_1 \\ \vdots \\ x_m \end{bmatrix}
Then, written as a system of normal equations:
ZTZX = ZTP (3.11)
Which can be solved to find X:
X = (ZTZ)−1ZTP (3.12)
Therefore:

PL = ZX \quad (3.13)
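The modification amounts to one extra column in the design matrix; a sketch with the same kind of synthetic setup (the numeric values are again invented for illustration):

```python
import numpy as np

# Fit the path-loss exponent and the partition attenuation factors together
# by augmenting the design matrix, as in Equations 3.11 - 3.13.
rng = np.random.default_rng(2)
n_true, x_true = 2.4, np.array([1.5, 0.3])    # illustrative values only
A = rng.uniform(0.0, 4.0, size=(30, 2))       # metres of each material on each path
d = rng.uniform(5.0, 60.0, size=30)           # Tx-Rx distances
P = 10 * n_true * np.log10(d) + A @ x_true    # noise-free synthetic measurements

Z = np.column_stack([10 * np.log10(d), A])    # Z as defined above
X = np.linalg.solve(Z.T @ Z, Z.T @ P)         # Equation 3.12
print(X)  # first entry is n, the rest are the partition factors
```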
3.6 Ray Tracing with Material Optimization
3.6.1 Overview
As discussed in Chapter 2, there were many design decisions that needed to be
considered when implementing a ray tracing algorithm. The ray tracer implemented
during the course of this research project was a shoot and bounce ray tracer that had
the ability to use either2:
• Durgin’s method of distributed wavefronts [Durgin 1998]; or,
• Yun’s modified reception sphere method to combat aliasing [Yun 2001].
It could launch rays through either:
• The vertices of a geodesic sphere; or,
• Points generated using [Rakhmanov 1994]’s spiral method.
Other important details such as:
• What happened to rays when they were reflected or transmitted; and,
• How antenna patterns affected a ray’s power.
were discussed in Chapter 2.
Further implementation-specific features that the ray tracer simulation incorporated,
such as loading data from a CAD file, loading electronic antenna patterns, modelling the
environment and actually running simulations, are described in Chapter 4.
Section 3.6.2 documents the method used by the proposed ray tracing algorithm to
estimate material parameters. The algorithm used to do this (Levenberg-Marquardt)
was taken from the literature; its use for this specific application was, however, to the
best of the author's knowledge, novel.

2. Though, as mentioned and justified in Chapter 2, only results using Yun's modified reception sphere with rays shot through the vertices of a geodesic sphere are presented in Chapter 5.
3.6.2 Material Property Estimation Algorithm
Ray tracing was the most complex propagation prediction method considered in this
research. The performance of the ray tracing implementation is compared against the
other propagation models and assessed relative to its suitability in performing the task
of rapidly deploying wireless networks in industrial environments in Chapter 5.
The aspect of the ray tracer that was the focus of most of the investigation was using
measurements taken at a site of interest to infer useful material properties. The algorithm
used to do this is described in the remainder of this section.
Even in the presence of detailed geometric data describing an environment (e.g., a floor
plan of a factory), information about the materials from which the environment was
constructed could be difficult to obtain. This lack of knowledge could prevent the received
power from being accurately calculated, as material properties were required to calculate
a ray's attenuation when it interacted with a given material (Section 2.9).
This problem could be alleviated through the theoretical calculation of coefficients using
Fresnel’s equations, the use of a pre-existing database of materials, or, as is the approach
investigated in this dissertation, the materials in an environment could be categorised
based on a small number of measurements taken in that environment and extrapolated
using optimization algorithms.
The optimization algorithm utilised by the ray tracer in this dissertation is known as the
Levenberg-Marquardt (L-M) optimization method [Levenberg 1944], [Marquardt 1963].
This method is useful in many different non-linear applications because of its ability to
converge to an optimal solution from a wide range of initial guesses.
L-M was essentially a combination of the Gauss-Newton and Steepest Descent methods, with
an additional factor, λ. The factor λ was introduced to prevent the problem in Gauss-Newton
methods where, if the initial guess was not close enough to the minimum, the iteration could
overshoot the answer and then overcompensate, leading to divergent oscillations.
If λ was large, then L-M degenerated to Steepest Descent, and if λ was small, then L-M
was essentially the Gauss-Newton method. There were many different ways of choosing
λ. A common approach began with a small λ, which was decreased at each iteration (thus
becoming more Newton-like) unless the step was found to overshoot the answer, in which case λ
was increased by a certain positive factor (> 1). λ then started to slowly decrease again
until the step was next found to overshoot. As the iteration approached the minimum, the steps should
stop overshooting, letting the Gauss-Newton method prevail until an optimal solution,
with a small value of λ, was arrived at.
Formally, let:

y = (y_1 \ldots y_n)^T \quad (3.14)

and

p = (p_1 \ldots p_m)^T \quad (3.15)

Where:

y is the empirically measured power at each of the n training locations; and,

p is the relative permittivity of each of the m object types in the world.

If y_i was the measured power at the ith training location and f(t_i|p) was the predicted
power at the ith training location, t_i, then p should be chosen such that it minimized
the cost function:

S(p) = \sum_{i=1}^{n} \left(y_i - f(t_i|p)\right)^2 \quad (3.16)
L-M is an iterative algorithm. To begin, a guess must be made as to the starting value
of p. If no information was available, then an uninformed guess would have to suffice;
luckily, however, an educated guess about what material an object was most closely
related to could usually be made.
For example, consider that a pasteurizer might be made of metal, the walls brick and
the floor concrete. Using the relative permittivity data from sources such as [Safaai-Jazi
2002] for basic material types, informed guesses could be made as to what were good
initial values for p.
For each iteration, the current best guess, p, was replaced with a new estimate, p + q.
If the linear approximation was considered:

f(p + q) \approx f(p) + Jq \quad (3.17)
Where:
J is the Jacobian (matrix of first order partial derivatives) of f w.r.t. p at p.
Then, q could be determined using the following equation:
(J^T J + \lambda I)\,q = -J^T f \quad (3.18)
Where:
λ is a non-negative damping factor.
Once q had been determined, the steps depicted in Figure 3.2 were taken.
Figure 3.2: L-M Algorithm Logic.
The efficacy of this algorithm is demonstrated in Chapter 5 by comparing its predictions
against those of the other propagation prediction methods discussed in this chapter.
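A minimal L-M loop is easy to sketch; the version below fits a toy model rather than the ray tracer's f(t|p), and the model, data and damping schedule are placeholders chosen for illustration only:

```python
import numpy as np

def levenberg_marquardt(f, y, t, p, lam=1e-3, iters=100):
    """Minimise sum((y - f(t, p))^2) over p with a basic L-M loop."""
    p = np.asarray(p, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                          # residuals at the current guess
        J = np.empty((len(y), len(p)))           # finite-difference Jacobian of f w.r.t. p
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (f(t, p + dp) - f(t, p)) / 1e-6
        # (J^T J + lambda I) q = J^T r, i.e. Equation 3.18 with f as the residual
        q = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        if np.sum((y - f(t, p + q)) ** 2) < np.sum(r ** 2):
            p, lam = p + q, lam * 0.5            # accepted: act more like Gauss-Newton
        else:
            lam *= 2.0                           # rejected: act more like steepest descent
    return p

# Toy problem: recover the parameters of a decaying exponential.
t = np.linspace(0.0, 5.0, 40)
model = lambda t, p: p[0] * np.exp(-p[1] * t)
y = model(t, [3.0, 0.7])
p_hat = levenberg_marquardt(model, y, t, [1.0, 0.1])
print(p_hat)
```

The accept/reject rule on λ mirrors the schedule described above: successful steps shrink λ towards Gauss-Newton behaviour, overshoots grow it towards steepest descent.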
3.7 MAC Layer Models
Whilst it is very useful to have propagation information about a network, this is not
the only factor that plays a role in determining what the end user perceives as network
performance. In situations where a network theoretically has adequate coverage, other
factors, such as the application it is being used for and the number of people concurrently
using the network can also affect performance. As this dissertation was being written, the
deployment of wireless networks, primarily to carry VoIP traffic, was rapidly becoming
a common occurrence.
In order to successfully evaluate how VoIP performed under different conditions, the
MAC layer (which controlled contention between multiple wireless stations (STAs) for
a single channel and thus dictated, to a large extent, the network's behaviour in the
presence of multiple STAs) had to be investigated.
The culmination of this investigation was a new VoIP capacity model that, using [Wu
2002]'s Markov chain analysis, extended [Garg 2003]'s VoIP capacity model to allow
predictions to be made in a noisy channel. Due to a lack of available equipment (empirically
validating many of the predictions these new models make would have required more
than 30 wireless devices), the accuracy of the models is validated through simulation,
Section 5.4.9.
3.8 Markov Chain Analysis of the 802.11b MAC layer
As discussed in Chapter 2, Markov Chain analysis could be employed to provide
predictions of how the 802.11b MAC layer performed under contention. This analysis is
relevant to this dissertation as it was used in conjunction with Garg's VoIP capacity
model, Section 3.9, to derive new VoIP capacity models in Sections 3.10 and 3.11. The
remainder of this section is devoted to a brief overview of [Wu 2002]'s Markov Chain
analysis.
The main assumption made by Wu/Bianchi's model was that the probability a transmitted
packet collided, p, was independent of the current state of the station. If there
was a constant number of contending stations, n, and b(t) and s(t) were stochastic
processes representing the backoff time counter and the backoff stage (which determined
the size of the contention window, CW) at time t (where t and t + 1 represented the
beginning of two consecutive slots), then the bidimensional process {s(t), b(t)} could be
modelled by the Markov Chain depicted in Figure 3.3.
Figure 3.3: Markov Chain of 802.11 DCF.
In the Markov chain shown in Figure 3.3 there are four different non-null one-step
transition probabilities:
P\{i, k \mid i, k+1\} = 1, \quad k \in [0, W_i - 2],\; i \in [0, m] \quad \ldots (i)

P\{0, k \mid i, 0\} = \dfrac{1-p}{W_0}, \quad k \in [0, W_0 - 1],\; i \in [0, m-1] \quad \ldots (ii)

P\{i, k \mid i-1, 0\} = \dfrac{p}{W_i}, \quad k \in [0, W_i - 1],\; i \in [1, m] \quad \ldots (iii)

P\{0, k \mid m, 0\} = \dfrac{1}{W_0}, \quad k \in [0, W_0 - 1] \quad \ldots (iv)

(3.19)
(i) The first transition probability represents the backoff timer decreasing by a
value of 1;
(ii) The second represents the situation where a successful transmission has oc-
curred and the contention window size is reset to W0;
(iii) The third represents an unsuccessful transmission and subsequent increasing
of the backoff stage; and,
(iv) The fourth transition probability represents the fact that once a station has
reached the maximum backoff stage, once it transmits, the contention window
is reset regardless of whether the transmission was successful or not.
With some analysis (see [Wu 2002] for a complete derivation), the probability, τ, that a
station transmitted in a randomly chosen slot time could be found to be:

\tau = \begin{cases} \dfrac{2(1-p^{m+1})(1-2p)}{W(1-(2p)^{m+1})(1-p)+(1-2p)(1-p^{m+1})}, & m \le m' \\[2ex] \dfrac{2(1-p^{m+1})(1-2p)}{W(1-(2p)^{m'+1})(1-p)+(1-2p)(1-p^{m+1})+W 2^{m'} p^{m'+1}(1-2p)(1-p^{m-m'})}, & m > m' \end{cases} \quad (3.20)
Where:
W = CWmin + 1, which is 32 in [IEEE 802.11b];
m is the maximum backoff stage, the number of transmission attempts allowed
before the contention window is reset (m, for large packets is determined by the
value of dot11LongRetryLimit or dot11ShortRetryLimit);
m' determines the maximum CW size (such that W = CW_{min} + 1 and 2^{m'} W =
CW_{max} + 1), which in [IEEE 802.11b] is 5; and,

p is the probability that a transmitted packet encounters a collision or is received
in error.
If p is written in terms of τ , the number of stations, n, and the packet error rate, PER,
such that:
p = 1 - (1 - \tau)^{n-1}(1 - PER) \quad (3.21)
Then Equations 3.20 and 3.21 form a non-linear system and numeric methods can be
used to find τ and p such that both τ and p ∈ [0, 1].
Once τ is known, the probability that a packet is transmitted and is successful in a given
time slot can be found to be:

P_S = \frac{n\tau(1-\tau)^{n-1}}{1-(1-\tau)^n}\,(1 - PER) \quad (3.22)
Where PER can be found using Equation 2.13, 2.14, 2.16 or 2.17 depending on the
modulation scheme.
The probability that an occurring transmission collides because two or more stations
simultaneously transmit can also be found to be:

P_C = 1 - \frac{n\tau(1-\tau)^{n-1}}{1-(1-\tau)^n} \quad (3.23)
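The fixed point of this system is straightforward to find numerically. The sketch below uses damped fixed-point iteration on the m ≤ m' branch of Equation 3.20; the retry limit m = 4 is an assumed value:

```python
# Solve Equations 3.20 and 3.21 simultaneously by damped fixed-point
# iteration (m <= m' branch of Equation 3.20 only; m = 4 assumed).
def solve_tau_p(n, per, W=32, m=4, iters=10000):
    p = 0.1                                   # initial guess, kept away from p = 0.5
    tau = 0.0
    for _ in range(iters):
        tau = (2 * (1 - p ** (m + 1)) * (1 - 2 * p)) / (
            W * (1 - (2 * p) ** (m + 1)) * (1 - p)
            + (1 - 2 * p) * (1 - p ** (m + 1)))          # Equation 3.20
        p_new = 1 - (1 - tau) ** (n - 1) * (1 - per)     # Equation 3.21
        if abs(p_new - p) < 1e-12:
            break
        p = 0.5 * p + 0.5 * p_new             # damping aids convergence
    return tau, p

tau, p = solve_tau_p(n=2, per=0.0)
p_s = 2 * tau * (1 - tau) / (1 - (1 - tau) ** 2)         # Equation 3.22, n = 2, PER = 0
p_c = 1 - 2 * tau * (1 - tau) / (1 - (1 - tau) ** 2)     # Equation 3.23
print(tau, p, p_s, p_c)
```

Any root finder would do equally well; the damped update is simply a robust, dependency-free choice.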
3.9 Garg’s VoIP Capacity Model for DCF-Basic Access
In this section [Garg 2003]’s VoIP capacity model is described, and the elements that
it does not take into account (packet retransmission due to corruption as opposed to
collisions etc.) are identified. Following on from this, in Sections 3.10 – 3.11, new VoIP
capacity models (for Basic Access and RTS/CTS respectively) that do not suffer from
the limitations apparent in Garg's model are presented. These models are complemented
by the ARC models described in Sections 3.12 – 3.15 which, when combined with the VoIP
capacity models, allow the impact that different ARC algorithms have on the capacity
of VoIP to be investigated.
In [Garg 2003] the maximum number of VoIP connections supported by a wireless
channel was given (with notation slightly changed to ensure consistency throughout the
dissertation) by:
n_{max} = \left\lfloor \frac{T_P R_{AVG}}{VoIP_{TRAFFIC}\,(T_P + T_{LAYERS} + T_{SIFS+DIFS+ACK} + T_{DCF})} \right\rfloor \quad (3.24)
Where:
TP is the time required to transmit a VoIP payload;
RAV G is the average data rate of the Access Point (AP);
VoIP_{TRAFFIC} is the amount of data the given VoIP codec required in bits per
second (assuming the use of a G.711 a-law codec, which required both upstream
and downstream traffic, this was equal to 128 000);
TLAY ERS is the amount of time required to transmit the additional data introduced
by the various networking layers;
TSIFS+DIFS+ACK is the time required after a packet has been sent before the
back-off timer started to count down again (i.e., the duration of a SIFS, a DIFS
and the time taken to transmit an ACK); and,
T_{DCF} is the delay introduced by the Distributed Coordination Function (i.e., T_{DCF}
takes into account the average number of slots waited and the average number of times
a packet was transmitted before it was received successfully).
Under the assumption that multiple VoIP connections could be approximated by two
stations that always had a packet ready to transmit (if there were n stations with active
VoIP sessions, then the AP would transmit n times more frequently than other stations,
thus the probability of a packet colliding with a packet transmitted by another station
would be small compared to the probability of a packet colliding with one transmitted
from the AP), T_{DCF} could be computed to be:

T_{DCF} = E[\sigma]\,T_{SLOT} + (1 - P_S)\,T_W \quad (3.25)
Where:
E[σ] is the expected number of slots waited between successful packet transmissions
as seen by the channel (with contention window values specified by the 802.11b
standard, two stations (n = 2) and assuming no channel noise, E[σ] = 8.5);
TSLOT is the expected duration of a slot (which for 802.11b is a constant 20µs);
P_S is the probability that a transmitted packet is received successfully (with
contention window values specified by the 802.11b standard, two stations (n = 2) and
assuming no channel noise, P_S = 0.9677); and,

T_W is the time wasted if a transmitted packet is corrupted (i.e., T_P + T_{LAYERS} +
T_{SIFS+DIFS+ACK}).
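Plugging in the constants quoted above gives a feel for the model. The frame-timing and rate values below are purely hypothetical placeholders (real values depend on the codec packetisation interval and the PHY rate in use), so only the structure of Equations 3.24 and 3.25 should be read from this sketch:

```python
# Equations 3.24 and 3.25 with the constants quoted above (E[sigma] = 8.5,
# T_SLOT = 20 us, P_S = 0.9677) and purely illustrative frame timings.
E_SIGMA, T_SLOT, P_S = 8.5, 20e-6, 0.9677
T_P = 640e-6                 # hypothetical payload transmission time
T_LAYERS = 250e-6            # hypothetical protocol-overhead transmission time
T_SIFS_DIFS_ACK = 174e-6     # hypothetical SIFS + DIFS + ACK duration
R_AVG = 1e6                  # hypothetical average AP data rate, bits/s
VOIP_TRAFFIC = 128000        # G.711 a-law, upstream plus downstream, bits/s

T_W = T_P + T_LAYERS + T_SIFS_DIFS_ACK
T_DCF = E_SIGMA * T_SLOT + (1 - P_S) * T_W                      # Equation 3.25
n_max = int(T_P * R_AVG // (VOIP_TRAFFIC * (T_P + T_LAYERS + T_SIFS_DIFS_ACK + T_DCF)))
print(T_DCF, n_max)
```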
The model produced by [Garg 2003] showed two notable things:

(i) As the length of time between (and thus size of) successive packets from a
given station (assuming all stations on the network were transmitting with
the same packet sizes) increased, the value of n_max likewise increased; and,

(ii) Networks operating at a higher data rate always had a larger value of n_max
than those transmitting at a lower data rate.
By developing new models that take into account the effects of a noisy channel, it is
shown in Chapter 5 that these observations do not necessarily hold.
3.10 A New VoIP Capacity Model for DCF-Basic Access
Using Wu’s Markov chain analysis to extend Garg’s VoIP capacity model, a new VoIP
capacity model, that does not produce erroneous results in the presence of a noisy channel
is presented. This section considers Basic Access DCF. RTS/CTS DCF is considered
in Section 3.11. The major assumption in these models is the same as that in Garg’s
model – that is, that a non-saturated channel can be approximated by a channel with
two saturated STAs.
As noted previously, 802.11b encompassed four different modulation schemes and each
one of these schemes had a different bit error rate (BER) for a constant signal-to-noise
ratio, Figure 2.19. 802.11b did not specify any specific Forward Error Correction (FEC)
scheme; therefore, a packet was considered corrupt (for the purpose of this dissertation)
if any bit was incorrectly decoded. The probability of this occurring could be written in
terms of the packet error rate:
PER = 1 - (1 - BER)^{PACKET\,SIZE} \quad (3.26)
Where:
PER is the packet error rate;
BER is the bit error rate; and,
PACKET SIZE is the length of the packet in bits.
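Equation 3.26 reduces to a one-line helper; the frame size and BER in the example call are illustrative numbers only:

```python
# Equation 3.26: probability that at least one bit of a packet is corrupted.
def packet_error_rate(ber, packet_size_bits):
    return 1 - (1 - ber) ** packet_size_bits

# e.g. a 1500-byte frame at a bit error rate of 1e-5
print(packet_error_rate(1e-5, 1500 * 8))
```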
As PER increases, likewise does the value of E[σ]. Using the value of P_S obtained from
[Wu 2002]'s Markov chain analysis, the probability of a packet both being transmitted
and being corrupted could be written as:

P_E = (1 - P_S) \quad (3.27)

Therefore the expected number of slots before successful transmission, as seen by the
channel, could be written as:

E[\sigma] = \frac{1}{n}\sum_{i=0}^{\infty}\left[\sum_{j=0}^{i}\left(\frac{CW_{min}+1}{2}\right)2^{k}\right]P_E^{i}(1-P_E), \qquad k = \begin{cases} h, & h \le m \\ m, & h > m \end{cases} \quad (3.28)
Where:
h = mod (j,m′);
n is the number of wireless stations;
CWmin is the minimum contention window size; and,
PE is the probability of a packet being transmitted and being corrupt.
In [Garg 2003] the amount of time wasted transmitting data that was ultimately
corrupted was computed as being:

(1 - P_S)\,T_W \quad (3.29)
However, this approximation failed when a packet was, on average, transmitted more
than once before it was successfully received; therefore, the expected number of
unsuccessful transmissions before a successful transmission, as seen by the channel, should be
used instead, written as:
E[T] = \sum_{n=0}^{\infty} n P_E^n (1 - P_E) = \frac{P_E(1 - P_E)}{(1 - P_E)^2} \quad (3.30)
Therefore, a better approximation of time wasted transmitting data that was ultimately
corrupted was:
E[T ]TW (3.31)
Putting Equations 3.28 and 3.31 into Equation 3.25 obtains T_{DCF}, which can then be
used in Equation 3.24 to find n_max, the capacity of VoIP in a noisy channel.
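The closed form in Equation 3.30 is easy to sanity-check numerically against partial sums of the series (P_E = 0.2 is an arbitrary example value):

```python
# Numeric check of Equation 3.30: the closed form agrees with the partial
# sums of the series for the expected number of unsuccessful transmissions.
P_E = 0.2
closed_form = P_E * (1 - P_E) / (1 - P_E) ** 2     # simplifies to P_E / (1 - P_E)
series = sum(k * P_E ** k * (1 - P_E) for k in range(200))
print(closed_form, series)
```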
3.11 A New VoIP Capacity for DCF-RTS/CTS
With only a small modification, the new VoIP capacity model described in Section 3.10
could also be extended to consider the RTS/CTS-DCF case.
Using Equations 3.20 and 3.21 (where PER, in Equation 3.21, was calculated using the
data length, associated overheads and RTS/CTS frame sizes) to calculate P_C,
Equation 3.23, the number of RTS/CTS attempts before a successful RTS/CTS exchange
occurred could be found to be:

Num_{RTS/CTS} = \frac{(1 - RTS_{FAIL})\,RTS_{FAIL}}{(1 - RTS_{FAIL})^2} \quad (3.32)
Where the probability of an RTS/CTS exchange failing was:
RTS_{FAIL} = 1 - (1 - P_C)(1 - BER)^{RTS/CTS\,SIZE} \quad (3.33)
Once an RTS/CTS exchange had been completed successfully, the data packet was
transmitted. As the successful RTS/CTS exchange had secured the channel, the expected
number of times that this data packet would need to be transmitted before it was
successful, E[T], could be calculated without using P_C:

E[T] = \frac{PER(1 - PER)}{(1 - PER)^2} \quad (3.34)
Where:
PER is calculated using only the data length and associated overheads.
Therefore, the number of RTS/CTS exchanges before a successful data packet
transmission, E[RTS], could be found to be:

E[RTS] = Num_{RTS/CTS}\,(E[T] + 1) \quad (3.35)
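Equations 3.32 – 3.35 reduce to a few plain functions. In the example call, the collision probability, the BER and the 34-byte combined RTS + CTS size are assumed illustrative values:

```python
# Equations 3.32 - 3.35 as plain functions.
def rts_fail(p_c, ber, rts_cts_size_bits):
    return 1 - (1 - p_c) * (1 - ber) ** rts_cts_size_bits   # Equation 3.33

def num_rts_cts(fail):
    return (1 - fail) * fail / (1 - fail) ** 2              # Equation 3.32

def expected_rts(fail, per):
    e_t = per * (1 - per) / (1 - per) ** 2                  # Equation 3.34
    return num_rts_cts(fail) * (e_t + 1)                    # Equation 3.35

fail = rts_fail(p_c=0.05, ber=1e-5, rts_cts_size_bits=34 * 8)
print(fail, expected_rts(fail, per=0.1))
```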
TDCF RTS could then be written as:
T_{DCF\,RTS} = E[\sigma]\,T_{SLOT} + E[RTS]\,T_{RTS} + E[T]\,T_W \quad (3.36)
Where, in a similar manner to Equation 3.25:
E[σ] is the expected number of slots before successful transmission as seen by the
channel;
TSLOT is the expected duration of a slot (which for 802.11b is a constant 20µs);
T_{RTS} is the amount of time wasted when an RTS/CTS frame collides or is
corrupted, and is equal to the amount of time required to transmit an RTS frame,
followed by a DIFS; and,

T_W is the amount of time it takes to send an RTS frame followed by a SIFS, followed
by a CTS frame and another SIFS, followed by T_P + T_{LAYERS} + T_{SIFS+DIFS+ACK}.
T_{DCF\,RTS} could then be used in Equation 3.24 to compute the maximum number of
VoIP connections supported by a wireless channel that was using RTS/CTS, i.e.:

n_{max} = \left\lfloor \frac{T_P R_{AVG}}{VoIP_{TRAFFIC}\,(T_W + T_{DCF\,RTS})} \right\rfloor \quad (3.37)
For simplicity, the RTS/CTS model above does not accurately model the situation where
the RTS frame was successful but the CTS frame was corrupted by noise. The inclusion
of this would, however, be a simple addition.
The new VoIP capacity models presented in Sections 3.10 and 3.11 are validated through
a simulation described in Section 4.8 and results presented in Chapter 5.
3.12 A New VoIP Capacity Model for ARF
Sections 3.12, 3.13, 3.14 and 3.15 present models that allow the performance of four
different Auto-Rate Control (ARC) algorithms to be investigated:
(i) ARF, Section 3.12;
(ii) RTS/CTS ARF, Section 3.13;
(iii) CARA, Section 3.14; and,
(iv) RRAA-Basic, Section 3.15.
The process followed when modelling ARF, RTS/CTS ARF and CARA was similar, and
based on the assumption that there was a static number of saturated STAs vying for
access to a given channel, all using the same ARC scheme:
• Statistics about the STAs (such as the probability of a collision occurring or the
probability of a channel error) were estimated using the Markov chain analysis of
802.11b's DCF described in Section 3.8;
• Absorbing Markov chains were used to model performance at each individual data
rate (to predict, given a specific probability of error, how long the ARC algorithm
would remain at that data rate and what the probability of increasing or decreasing
the data rate would be). There were three distinct cases that each needed to be
modelled with a different Markov chain, the:
(i) Fastest data rate (where the algorithm could only decrease the transmission
speed);

(ii) Middle data rates (where the algorithm could increase or decrease the
transmission speed); and,

(iii) Slowest data rate (where the algorithm could only increase the transmission
speed).
• Once performance at each data rate had been predicted, a fourth Markov chain,
which modelled the behaviour of the ARC algorithm as it moved between the
available data rates, was used to calculate the proportion of time that the ARC
algorithm was expected to remain at a particular data rate; and finally,
• These values were then used to estimate the number of simultaneous VoIP calls a
noisy channel could support if all STAs were using a particular ARC algorithm.
The approach adopted for modelling RRAA-Basic was slightly different: instead of using
absorbing Markov chains to calculate the probability of increasing or decreasing the data
rate, these probabilities were calculated using the binomial distribution.
The modular approach taken to modelling these ARC algorithms means that it would
be trivial to extend the analysis to other wireless networking standards such as 802.11a
or 802.11g (which have a greater number of data rates).
Auto Rate Fallback (ARF), originally designed for Lucent’s WaveLAN II, was the most
popular ARC algorithm at the time this dissertation was written (in that it was most
often implemented in commercial devices).
In ARF, transmission speeds are increased after a given number of successive successful
transmissions, and decreased after successive failed transmissions.
For the purposes of this dissertation it was assumed that ARF increased the transmis-
sion speed after 5 successful transmissions and decreased the transmission speed after
2 failed transmissions. The operation of ARF is described by the pseudocode given in
Algorithm 1.
As described above, three different absorbing Markov chains were used to model ARF's
behaviour at a given data rate. The Markov chain used to model the middle data rates
(in 802.11b, 2 Mbps and 5.5 Mbps) is depicted in Figure 3.4.
Algorithm 1 ARF Pseudocode.
1: mThreshold = 5                  ▷ Success Threshold
2: nThreshold = 2                  ▷ Error Threshold
3: m = 0                           ▷ Number of consecutive successful frames
4: n = 0                           ▷ Number of consecutive unsuccessful frames
5: while StationActive do
6:     SendFrame()
7:     if FrameSuccess then        ▷ ACK Received
8:         n = 0
9:         m = m + 1
10:        if m ≥ mThreshold then
11:            IncreaseDataRate()
12:            m = 0
13:        end if
14:    else                        ▷ No ACK Received
15:        m = 0
16:        n = n + 1
17:        if n ≥ nThreshold then
18:            DecreaseDataRate()
19:            n = 0
20:        end if
21:    end if
22: end while
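A direct Monte Carlo rendering of Algorithm 1 is a useful cross-check on the Markov chain analysis that follows. It assumes, as a simplification, a single frame success probability that does not vary with the data rate (in reality the probability depends on the rate via the BER curves):

```python
import random

def simulate_arf(p_success, frames=200_000, seed=0):
    """Estimate the fraction of frames ARF sends at each 802.11b rate."""
    rng = random.Random(seed)
    rates = [1, 2, 5.5, 11]                  # Mbps
    idx, m, n = 0, 0, 0                      # rate index, success run, failure run
    time_at = {r: 0 for r in rates}
    for _ in range(frames):
        time_at[rates[idx]] += 1
        if rng.random() < p_success:         # ACK received
            n, m = 0, m + 1
            if m >= 5:                       # mThreshold
                idx, m = min(idx + 1, len(rates) - 1), 0
        else:                                # no ACK received
            m, n = 0, n + 1
            if n >= 2:                       # nThreshold
                idx, n = max(idx - 1, 0), 0
    return {r: t / frames for r, t in time_at.items()}

print(simulate_arf(0.95))                    # mostly 11 Mbps on a good channel
```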
Figure 3.4: Markov Chain of ARF (Increase or Decrease Data Rate).
Where:
State [n,m] represents the state where ARF has recorded m consecutive successful
frame transmissions and n consecutive failed frame transmissions;
State [UP ] represents the point where mthreshold has been reached and ARF in-
creases the data rate;
State [DOWN ] represents the point where nthreshold has been reached and ARF
decreases the data rate; and,
p is the probability of a transmitted frame being successful, which could be found
using Equation 3.21.
Given Figure 3.4, the probability of being absorbed by the UP state (i.e., increasing the
data rate) was found to be:

P_{dr} = \frac{-p^5}{(p-2)(p^5 - p + 1)}, \quad dr \in [2\,\text{Mbps}, 5.5\,\text{Mbps}] \quad (3.38)

And the expected amount of time (number of frame transmissions) before entering an
absorbing state was found to be:

T_{dr} = \frac{(2-p)(p^4 + p^3 + p^2 + p + 1)}{p^5 - p + 1}, \quad dr \in [2\,\text{Mbps}, 5.5\,\text{Mbps}] \quad (3.39)
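These closed forms, together with the single-absorbing-state expressions for the fastest and slowest rates, can be checked at the boundaries: with a perfect channel (p = 1) the rate always increases, and does so after exactly five frames. The functions below simply transcribe Equations 3.38 – 3.41:

```python
# Equations 3.38 - 3.41 as functions of the per-frame success probability p.
def p_up(p):
    return -p ** 5 / ((p - 2) * (p ** 5 - p + 1))                     # Equation 3.38

def t_mid(p):
    return (2 - p) * (p**4 + p**3 + p**2 + p + 1) / (p ** 5 - p + 1)  # Equation 3.39

def t_top(p):
    return (2 - p) / (p - 1) ** 2                                     # Equation 3.40

def t_bottom(p):
    return (p**4 + p**3 + p**2 + p + 1) / p ** 5                      # Equation 3.41

# Boundary behaviour: with p = 1 the rate always increases, after 5 frames.
print(p_up(1.0), t_bottom(1.0), p_up(0.9))
```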
The Markov chains used to model the fastest data rate (in 802.11b, 11Mbps) and slowest
data rate (in 802.11b, 1 Mbps) were slightly different as they only had one absorbing
state. These Markov chains are shown in Figures 3.5 and 3.6 respectively.
Figure 3.5: Markov Chain of ARF (Only Decrease Data Rate).
Given Figure 3.5, it could be seen that the probability of eventually being absorbed by
the DOWN state (i.e., decreasing the data rate), 1 − P_{11Mbps}, was 1, and the expected
amount of time (number of transmissions) before decreasing the data rate was found to
be:

T_{11Mbps} = \frac{2 - p}{(p - 1)^2} \quad (3.40)
The situation was similar for the slowest data rate, as shown in Figure 3.6.
Figure 3.6: Markov Chain of ARF (Only Increase Data Rate).
From Figure 3.6, it could be seen that the probability of eventually being absorbed by
the UP state, P_{1Mbps}, was 1, and the expected amount of time (number of transmissions)
before increasing the data rate was found to be:

T_{1Mbps} = \frac{p^4 + p^3 + p^2 + p + 1}{p^5} \quad (3.41)
Once the probabilities that ARF would increase or decrease the data rate had been
found, a Markov chain that modelled ARF’s performance over all data rates could be
constructed. This is shown in Figure 3.7, where Pdr(dr ∈ [2Mbps, 5.5Mbps]) is the
probability of increasing the data rate (being absorbed by the UP state, as found using
the previous three Markov chains).
Figure 3.7: Markov Chain of ARF (All Data Rates).
Finding the equilibrium distribution for the Markov chain in Figure 3.7 gives:

b_{11Mbps} = \frac{-P_{UP5.5}\,P_{UP2}}{2P_{UP5.5} - 2P_{UP2}P_{UP5.5} - 2} \quad (3.42)

b_{5.5Mbps} = \frac{-P_{UP2}}{2P_{UP5.5} - 2P_{UP2}P_{UP5.5} - 2} \quad (3.43)

b_{2Mbps} = \frac{P_{UP5.5} - 1}{2P_{UP5.5} - 2P_{UP2}P_{UP5.5} - 2} \quad (3.44)

b_{1Mbps} = \frac{(1 - P_{UP5.5})(P_{UP2} - 1)}{2P_{UP5.5} - 2P_{UP2}P_{UP5.5} - 2} \quad (3.45)
Combining this with the analysis of ARF at each individual data rate allowed the
relative proportion of time spent at each data rate to be found:

Prop_{dr} = \frac{T_{dr}\,b_{dr}}{\sum_{drate \in DataRates} T_{drate}\,b_{drate}} \quad (3.46)
Where:
DataRates = [1 Mbps, 2 Mbps, 5.5 Mbps, 11 Mbps].
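Equations 3.42 – 3.45 can be verified to be a genuine stationary distribution of the rate-transition chain (assuming, per the description of Figure 3.7, that 11 Mbps always steps down, 1 Mbps always steps up, and the middle rates step up with their P_UP probability and otherwise down; the example probabilities are arbitrary):

```python
# Equations 3.42 - 3.45, followed by a stationarity check against the balance
# equations of the four-rate chain.
def equilibrium(p_up2, p_up55):
    d = 2 * p_up55 - 2 * p_up2 * p_up55 - 2
    return {
        11:  -p_up55 * p_up2 / d,             # Equation 3.42
        5.5: -p_up2 / d,                      # Equation 3.43
        2:   (p_up55 - 1) / d,                # Equation 3.44
        1:   (1 - p_up55) * (p_up2 - 1) / d,  # Equation 3.45
    }

b = equilibrium(0.7, 0.3)                     # arbitrary example probabilities
assert abs(sum(b.values()) - 1.0) < 1e-12     # a proper distribution
assert abs(b[11] - b[5.5] * 0.3) < 1e-12      # only inflow to 11 Mbps: 5.5 Mbps going up
assert abs(b[1] - b[2] * (1 - 0.7)) < 1e-12   # only inflow to 1 Mbps: 2 Mbps going down
print(b)
```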
These data can then be used to calculate the capacity of a wireless channel carrying
VoIP in a similar manner to the models described in Sections 3.10 and 3.11:

n_{max} = \left\lfloor \frac{\sum_{dr \in DataRates} T_P \times dr \times Prop_{dr}}{VoIP_{TRAFFIC}\,(T_P + T_{LAYERS} + T_{SIFS+DIFS+ACK} + T_{DCF})} \right\rfloor \quad (3.47)
Where:
T_{DCF} comes from the analysis in Section 3.10.
3.13 A New VoIP Capacity Model for RTS/CTS-ARF
One of the major drawbacks of ARF is that it makes no attempt to distinguish between
collisions and corruption that occurs due to channel noise. Reducing the transmission
rate will only improve performance if the errors are being caused by channel noise.
A simple way around this is to use the RTS/CTS mechanism to reserve the channel
before data is sent. In this manner, if corruption of RTS/CTS frames due to channel
noise is deemed negligible then collisions and corruptions due to channel noise can be
identified separately.
RTS/CTS ARF (as described in [Kim 2006]) always transmits using the RTS/CTS
mechanism. If an RTS is sent and a CTS not received, then a collision is assumed to
have occurred; however, if, following an RTS/CTS exchange, a data frame is sent and an
ACK not received, corruption due to channel noise is assumed to have occurred.
The operation of RTS/CTS ARF is described by the pseudocode given in Algorithm 2.
RTS/CTS ARF distinguishes between collisions and corrupted frames very effectively; however, its overall performance is less than optimal due to the overhead incurred by sending RTS/CTS frames before every data frame transmission.
The performance of RTS/CTS ARF can be modelled in a similar way to ARF (Section 3.12) by using T_DCF as given in Equation 3.36, and changing the value of p used in the Markov chains to the probability that a frame is not lost due to a channel error.
Algorithm 2 RTS/CTS ARF Pseudocode.

mThreshold = 5                      ▷ Success Threshold
nThreshold = 2                      ▷ Error Threshold
m = 0                               ▷ Number of consecutive successful frames
n = 0                               ▷ Number of consecutive unsuccessful frames
while StationActive do
    SendRTS()
    if RTSSuccess then              ▷ CTS Received
        SendFrame()
        if FrameSuccess then        ▷ ACK Received
            n = 0
            m = m + 1
            if m > mThreshold then
                IncreaseDataRate()
                m = 0
            end if
        else                        ▷ No ACK Received
            m = 0
            n = n + 1
            if n > nThreshold then
                DecreaseDataRate()
                n = 0
            end if
        end if
    end if
end while
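The counter logic of Algorithm 2 can be sketched in Java as follows. The rate set and state handling are illustrative; the key point is that a missing CTS (a collision) leaves both counters untouched, while a missing ACK after a successful exchange (channel corruption) counts towards a rate decrease.

```java
// Counter logic of RTS/CTS ARF (Algorithm 2). A failed RTS (no CTS) is
// treated as a collision and triggers no rate change; a failed data frame
// after a successful RTS/CTS exchange is treated as channel corruption.
public class RtsCtsArf {
    static final double[] RATES = {1.0, 2.0, 5.5, 11.0}; // Mbps (802.11b)
    static final int M_THRESHOLD = 5;  // consecutive successes before rate up
    static final int N_THRESHOLD = 2;  // consecutive failures before rate down
    int idx = RATES.length - 1;        // start at the highest rate
    int m = 0, n = 0;

    void onTransmission(boolean ctsReceived, boolean ackReceived) {
        if (!ctsReceived) return;      // collision: counters unchanged
        if (ackReceived) {
            n = 0;
            if (++m > M_THRESHOLD) {   // 6th consecutive success
                if (idx < RATES.length - 1) idx++;
                m = 0;
            }
        } else {                       // corruption due to channel noise
            m = 0;
            if (++n > N_THRESHOLD) {   // 3rd consecutive failure
                if (idx > 0) idx--;
                n = 0;
            }
        }
    }

    double rate() { return RATES[idx]; }
}
```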
3.14 A New VoIP Capacity Model for CARA
Collision-Aware Rate Adaptation (CARA), proposed in [Kim 2006], is a compromise between ARF and RTS/CTS ARF that attempts to minimise the drawbacks of both. CARA uses RTS/CTS frames to differentiate between channel errors and collisions, but only after a data frame has been lost (in this manner it reduces overhead by not transmitting RTS/CTS frames before every data frame).
CARA maintains two counters:
m – the number of consecutive successful transmissions; and,
n – the number of consecutive unsuccessful transmissions.
And three thresholds:
mThreshold – the number of consecutive successful transmissions required before
the data rate is increased;
nThreshold – the number of consecutive failed transmissions required before the data
rate is decreased; and,
pThreshold – the number of consecutive failed transmissions required before RTS/CTS
frames are used to probe the network.
For the purpose of this dissertation the values of mThreshold, nThreshold and pThreshold
are given as 5, 2 and 1 respectively.
The operation of CARA is demonstrated in the pseudocode given in Algorithm 3.
CARA performs measurably better than both ARF and RTS/CTS ARF; however, as noted by [Wong 2006], CARA can suffer from RTS oscillation (where an RTS frame is sent every other transmission), especially in situations where there are hidden STAs, and may “still experience heavy collision losses and eventually drop the data rate”.
Like ARF and RTS ARF, the performance of CARA at a particular data rate can be
described by an absorbing Markov chain, where PCE is the probability of a collision or
an error occurring, PC is the probability of a collision occurring and PE is the probability
of a channel error occurring. (PC , PE and PCE can be found using the Markov chain
analysis described in Section 3.8).
Algorithm 3 CARA Pseudocode.

mThreshold = 5                      ▷ Success Threshold
nThreshold = 2                      ▷ Error Threshold
pThreshold = 1                      ▷ RTS/CTS Threshold
m = 0                               ▷ Number of consecutive successful frames
n = 0                               ▷ Number of consecutive unsuccessful frames
while StationActive do
    if n > pThreshold then
        SendRTS()
        if !RTSSuccess then         ▷ CTS not received
            Continue                ▷ Skip current iteration of while loop
        end if
    end if
    SendFrame()
    if FrameSuccess then            ▷ ACK received
        m = m + 1
        n = 0
        if m == mThreshold then
            IncreaseDataRate()
            m = 0
        end if
    else                            ▷ ACK not received
        n = n + 1
        m = 0
        if n > nThreshold then
            DecreaseDataRate()
            n = 0
        end if
    end if
end while
Figure 3.8: Markov Chain of CARA (Increase or Decrease Data Rate).
The Markov chain given in Figure 3.8 is described by matrices S, which gives the tran-
sition probabilities for movement amongst non-absorbing states and A which gives the
transition probabilities for movement into the absorbing states.
S =
[ 0      1−PCE   0       0       0       PCE ]
[ 0      0       1−PCE   0       0       PCE ]
[ 0      0       0       1−PCE   0       PCE ]
[ 0      0       0       0       1−PCE   PCE ]
[ 0      0       0       0       0       PCE ]
[ 0      1−PCE   0       0       0       PC  ]   (3.48)

A =
[ 0        0          ]
[ 0        0          ]
[ 0        0          ]
[ 0        0          ]
[ 1−PCE    0          ]
[ 0        (1−PC)PE   ]   (3.49)
The (i, j)-th element of the fundamental matrix, Q, gives the total number of times state j is expected to be visited if i is the starting state. Given S, Q could be easily found:

Q = (I − S)⁻¹   (3.50)

Where:

I is a 6 × 6 identity matrix
By adding together the appropriate elements of Q, the expected number of states visited
before entering an absorbing state could be found. As it was known that CARA would
use RTS/CTS if there were 1 or more successive transmission errors, this could be used to
calculate the number of states where RTS/CTS was used, Tdr RTS , expected to be visited
before absorption and the number of states where RTS/CTS was not used, Tdr Basic,
expected to be visited before absorption.
Tdr Basic = Σ_{j=1}^{5} Q_{1,j}   (3.51)

Tdr RTS = Q_{1,6}   (3.52)
Therefore, the total number of states expected to be visited before absorption was:
Tdr = Tdr Basic + Tdr RTS (3.53)
Finally, the product QA gave the probability of ending up in either of the absorbing states. For the matrices given above, the probability of ending up in the UP absorbing state was:

P_UP dr = (QA)_{1,1}   (3.54)

The probability of ending up in the DOWN absorbing state was therefore:

P_DOWN dr = 1 − P_UP dr = (QA)_{1,2}   (3.55)
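The absorbing-chain computations of Equations 3.50 to 3.55 can be reproduced numerically. The sketch below builds S and A from Equations 3.48 and 3.49, inverts (I − S) by Gauss-Jordan elimination, and reads off Tdr Basic, Tdr RTS and the absorption probabilities. The values of PC and PE are hypothetical, and PCE = PC + PE − PC·PE assumes collisions and channel errors are independent (so that every transient row of [S | A] sums to one).

```java
// Absorbing Markov chain analysis for CARA at an intermediate data rate
// (Equations 3.48-3.55). pC = collision probability, pE = channel-error
// probability (hypothetical values); pCE = pC + pE - pC*pE assuming
// independence, so that every transient row of [S | A] sums to one.
public class CaraChain {
    public static double[][] invert(double[][] src) {
        int n = src.length;
        double[][] a = new double[n][2 * n];
        for (int i = 0; i < n; i++) {
            System.arraycopy(src[i], 0, a[i], 0, n);
            a[i][n + i] = 1.0;                      // augment with identity
        }
        for (int c = 0; c < n; c++) {               // Gauss-Jordan elimination
            int p = c;
            for (int r = c + 1; r < n; r++)
                if (Math.abs(a[r][c]) > Math.abs(a[p][c])) p = r;
            double[] tmp = a[c]; a[c] = a[p]; a[p] = tmp;
            double piv = a[c][c];
            for (int j = 0; j < 2 * n; j++) a[c][j] /= piv;
            for (int r = 0; r < n; r++) {
                if (r == c) continue;
                double f = a[r][c];
                for (int j = 0; j < 2 * n; j++) a[r][j] -= f * a[c][j];
            }
        }
        double[][] inv = new double[n][n];
        for (int i = 0; i < n; i++) System.arraycopy(a[i], n, inv[i], 0, n);
        return inv;
    }

    /** Returns {Tdr Basic, Tdr RTS, P_UP dr, P_DOWN dr}. */
    public static double[] analyse(double pC, double pE) {
        double pCE = pC + pE - pC * pE;
        double q = 1 - pCE;
        double[][] S = {                             // Equation 3.48
            {0, q, 0, 0, 0, pCE},
            {0, 0, q, 0, 0, pCE},
            {0, 0, 0, q, 0, pCE},
            {0, 0, 0, 0, q, pCE},
            {0, 0, 0, 0, 0, pCE},
            {0, q, 0, 0, 0, pC}};
        double[][] A = {                             // Equation 3.49
            {0, 0}, {0, 0}, {0, 0}, {0, 0}, {q, 0}, {0, (1 - pC) * pE}};
        double[][] iMinusS = new double[6][6];
        for (int i = 0; i < 6; i++)
            for (int j = 0; j < 6; j++)
                iMinusS[i][j] = (i == j ? 1.0 : 0.0) - S[i][j];
        double[][] Q = invert(iMinusS);              // Equation 3.50
        double tBasic = 0;
        for (int j = 0; j < 5; j++) tBasic += Q[0][j];   // Equation 3.51
        double tRts = Q[0][5];                           // Equation 3.52
        double pUp = 0, pDown = 0;                       // first row of QA
        for (int j = 0; j < 6; j++) {
            pUp += Q[0][j] * A[j][0];
            pDown += Q[0][j] * A[j][1];
        }
        return new double[] { tBasic, tRts, pUp, pDown };
    }
}
```

Because absorption is certain from every transient state, the two absorption probabilities must sum to one, which provides a useful check on the matrix construction.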
In a similar fashion to the above analysis, the expected number of states visited before
absorption could be calculated for the fastest and slowest data rates (Figures 3.9 and 3.10,
respectively).
For the fastest data rate (11Mbps, for 802.11b), the behaviour of CARA was modelled
by the Markov chain given in Figure 3.9.
Figure 3.9: Markov Chain of CARA (Only Decrease Data Rate).
Where:
S =
[ 0      1−PCE   0       0       0       0       PCE ]
[ 0      0       1−PCE   0       0       0       PCE ]
[ 0      0       0       1−PCE   0       0       PCE ]
[ 0      0       0       0       1−PCE   0       PCE ]
[ 0      0       0       0       0       1−PCE   PCE ]
[ 0      0       0       0       0       1−PCE   PCE ]
[ 0      1−PCE   0       0       0       0       PC  ]   (3.56)
Given S, the fundamental matrix Q was found using Equation 3.50. Given Q, T11 Basic and T11 RTS were found to be:

T11 Basic = Σ_{j=1}^{6} Q_{1,j}   (3.57)

T11 RTS = Q_{1,7}   (3.58)
For the slowest data rate (1Mbps, for 802.11b), the behaviour of CARA was modelled
by the Markov chain given in Figure 3.10.
Figure 3.10: Markov Chain of CARA (Only Increase Data Rate).
Where:
S =
[ 0      1−PCE   0       0       0       PCE   0         ]
[ 0      0       1−PCE   0       0       PCE   0         ]
[ 0      0       0       1−PCE   0       PCE   0         ]
[ 0      0       0       0       1−PCE   PCE   0         ]
[ 0      0       0       0       0       PCE   0         ]
[ 0      1−PCE   0       0       0       PC    (1−PC)PE  ]
[ 0      1−PCE   0       0       0       0     PCE       ]   (3.59)
Given S, the fundamental matrix, Q was found using Equation 3.50. Given Q, T1 Basic
and T1 RTS were found to be:
T1 Basic = Σ_{j=1}^{5} Q_{1,j}   (3.60)

T1 RTS = Q_{1,6} + Q_{1,7}   (3.61)
Then, using the Markov chain given in Figure 3.7 and Equations 3.42, 3.43, 3.44 and 3.45 to find the equilibrium probabilities (b_dr), the proportion of time spent at each data rate for CARA was found using Equations 3.62 and 3.63 (substituting Tdr RTS or Tdr Basic for Tdr to find the proportion of time at each data rate with and without RTS/CTS respectively):

Prop_dr RTS = Tdr RTS · b_dr / Σ_{drate ∈ DataRates} (Tdrate RTS + Tdrate Basic) · b_drate   (3.62)

Prop_dr Basic = Tdr Basic · b_dr / Σ_{drate ∈ DataRates} (Tdrate RTS + Tdrate Basic) · b_drate   (3.63)
Where:
Propdr RTS is the proportion of time spent at data rate dr where RTS/CTS was
being used;
Propdr Basic is the proportion of time spent at data rate dr where RTS/CTS was
not being used; and,
DataRates = {1Mbps, 2Mbps, 5.5Mbps, 11Mbps}
These values were then used to compute the capacity of VoIP in a noisy channel when
CARA was being used. If,
VoIP_RTS = VoIP_TRAFFIC · (T_P + T_LAYERS + T_SIFS+DIFS+ACK + T_DCF RTS)   (3.64)
with TDCF RTS given by Equation 3.36 and other relevant parameters taken from the
analysis given in Section 3.11, and
VoIP_Basic = VoIP_TRAFFIC · (T_P + T_LAYERS + T_SIFS+DIFS+ACK + T_DCF)   (3.65)
with relevant parameters taken from the analysis given in Section 3.10, then the capacity
of VoIP in a noisy channel, when CARA is being used could be written:
nmax = ⌊ Σ_{dr ∈ DataRates} ( T_P × dr × Prop_dr RTS / VoIP_RTS + T_P × dr × Prop_dr Basic / VoIP_Basic ) ⌋   (3.66)
3.15 A New VoIP Capacity Model for RRAA-BASIC
RRAA was proposed in [Wong 2006] and uses a different approach to ARC than ARF,
RTS/CTS ARF and CARA. It focuses on using “short term loss ratios to opportunis-
tically guide rate change decisions”. To do this, it calculates the loss-ratio over a given
number of frames (the number of frames used is determined by ewnd(dr), a value which
varies depending on the data rate being used). The loss-ratio is calculated as being:
loss ratio = (# lost frames) / (# transmitted frames)   (3.67)
Along with the window size, ewnd(dr), for each data rate, RRAA has two associated
threshold values:
MTL – Maximum Tolerable Loss threshold before data rate is decreased; and
ORI – Opportunistic Rate Increase threshold at which the data rate is increased.
To determine the values of MTL and ORI for each data rate, RRAA-Basic introduces the
concept of a critical loss ratio, P ∗(dr) – where dr is the data rate the STA is currently
transmitting at. The idea behind P∗(dr) is that when the loss-ratio is equal to P∗(dr), the STA’s current goodput will be equal to that at the next lower data rate, dr−, if dr− is assumed to suffer from no losses. Therefore, if the STA begins to suffer a loss-ratio greater than P∗(dr), it should decrease its data rate. In reality the loss-ratio at dr− will not be zero, so a factor α is introduced, telling the STA to expect some loss at the new data rate.
If P ∗(dr) is defined as:
P∗(dr) = 1 − tx time(dr) / tx time(dr−)   (3.68)
Where
tx time(dr) is the time it takes to transmit a frame at data rate, dr.
Then:
MTL(dr) = αP ∗(dr) (3.69)
The problem of deciding when to increase the data rate is harder, as it is difficult to
predict what the loss ratio will be upon increasing the data rate. RRAA-Basic adopts a
heuristic approach where:
ORI(dr) = MTL(dr+) / β   (3.70)
In [Wong 2006] α and β are given as 1.25 and 2 respectively.
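Equations 3.68 to 3.70 can be made concrete. The sketch below uses idealised per-frame transmission times proportional to 1/rate; real tx time values would include MAC and PHY overheads, so the numbers are illustrative only.

```java
// Critical loss ratio and RRAA thresholds (Equations 3.68-3.70) with
// alpha = 1.25 and beta = 2 as given in [Wong 2006]. Transmission times
// here are idealised as payload/rate, ignoring MAC/PHY overheads.
public class RraaThresholds {
    static final double ALPHA = 1.25, BETA = 2.0;

    public static double criticalLoss(double txTime, double txTimeLower) {
        return 1.0 - txTime / txTimeLower;                 // Equation 3.68
    }

    public static double mtl(double txTime, double txTimeLower) {
        return ALPHA * criticalLoss(txTime, txTimeLower);  // Equation 3.69
    }

    public static double ori(double mtlOfHigherRate) {
        return mtlOfHigherRate / BETA;                     // Equation 3.70
    }
}
```

For example, with payload-only frame times, tx time at 11Mbps is half that at 5.5Mbps, so P∗(11Mbps) = 0.5, MTL(11Mbps) = 0.625 and ORI(5.5Mbps) = 0.3125.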
In operation, RRAA-Basic begins at the highest data rate and transmits ewnd(dr) frames. After ewnd(dr) frames have been transmitted and the loss-ratio computed, the STA makes a decision to increase the data rate if the loss-ratio < ORI(dr), to decrease the data rate if the loss-ratio > MTL(dr), or otherwise to remain at the same rate and transmit another ewnd(dr) frames. Finally, after each frame is transmitted the best
case and worst case loss-ratios are computed (i.e. what would the loss ratio be if the
remaining frames in ewnd(dr) were all successful, or all failures). If the hypothetical
worst case loss-ratio is < ORI(dr) then the data rate is increased or if the hypothetical
best case loss-ratio is > MTL(dr) then the data rate is decreased.
The operation of RRAA-Basic is demonstrated below in Algorithm 4.
It is easy to see that RRAA-Basic suffers from the same problem as ARF, in that it
makes no attempt to distinguish between collisions and channel errors.
Given the loss ratio defined in Equation 3.67, it can be seen that at the end of each window (when the decision to increase or decrease the data rate is made):

#transmitted frames = ewnd(dr)   (3.71)
Therefore, the data rate will decrease if:
#lost frames > MTL(dr) · ewnd(dr)   (3.72)

Which, expressed in terms of integers, is:

#lost frames ≥ ⌈MTL(dr) · ewnd(dr)⌉ = Z_MTL(dr)   (3.73)
Algorithm 4 RRAA-Basic Pseudocode.

dr = highest data rate              ▷ For 802.11b this is 11Mbps
Counter = ewnd(dr)
while StationActive do
    SendFrame()
    UpdateLossRatio(FrameSuccess)
    if Counter == 0 then
        if getLossRatio() > MTL(dr) then
            dr = DecreaseDataRate() ▷ Sets dr to the value of the new data rate
        else if getLossRatio() < ORI(dr) then
            dr = IncreaseDataRate() ▷ Sets dr to the value of the new data rate
        end if
        Counter = ewnd(dr)
    else
        if getBestCaseLossRatio() > MTL(dr) then       ▷ Assume remaining frames are all successful
            dr = DecreaseDataRate()
            Counter = ewnd(dr)
        else if getWorstCaseLossRatio() < ORI(dr) then ▷ Assume remaining frames are all failures
            dr = IncreaseDataRate()
            Counter = ewnd(dr)
        end if
    end if
    Counter = Counter − 1
end while
Likewise, the data rate will increase if:
#lost frames ≤ ⌊ORI(dr) · ewnd(dr)⌋ = Z_ORI(dr)   (3.74)
The probability of increasing or decreasing the transmission rate can be found using the binomial cumulative distribution function, which by definition gives the probability that there are k or fewer successes in n trials (expressed using the regularized incomplete beta function).
I.e., if the probability of a transmission error is given as p, then the probability that there are k or fewer transmission errors in a trial of ewnd(dr) attempts is:
Pr(X ≤ k) = I_{1−p}(ewnd(dr) − k, k + 1)   (3.75)

Where:

I_x(a, b) = Σ_{j=a}^{a+b−1} [ (a+b−1)! / (j! (a+b−1−j)!) ] x^j (1−x)^{a+b−1−j}
Therefore, the probability of increasing the data rate is:

P_U dr = I_{1−p}(ewnd(dr) − Z_ORI(dr), Z_ORI(dr) + 1)   (3.76)

and the probability of decreasing the data rate is:

P_D dr = 1 − I_{1−p}(ewnd(dr) − Z_MTL(dr) + 1, Z_MTL(dr))   (3.77)
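Since I_{1−p}(n − k, k + 1) equals the binomial probability of at most k errors in n trials, Equations 3.75 to 3.77 can be evaluated with a direct binomial sum. The window size, thresholds and error probability below are hypothetical.

```java
// Probabilities of increasing/decreasing the data rate in RRAA-Basic
// (Equations 3.75-3.77), evaluated as binomial tail sums: the regularized
// incomplete beta I_{1-p}(n-k, k+1) equals Pr(at most k errors in n trials).
public class RraaTransitions {
    /** Pr(X <= k) for X ~ Binomial(n, p). */
    public static double binomialCdf(int k, int n, double p) {
        double cdf = 0.0;
        for (int i = 0; i <= k; i++) {
            double c = 1.0;                        // C(n, i), built iteratively
            for (int j = 1; j <= i; j++) c = c * (n - j + 1) / j;
            cdf += c * Math.pow(p, i) * Math.pow(1 - p, n - i);
        }
        return cdf;
    }

    /** Equation 3.76: at most Z_ORI errors in ewnd frames. */
    public static double pUp(int ewnd, int zOri, double p) {
        return binomialCdf(zOri, ewnd, p);
    }

    /** Equation 3.77: at least Z_MTL errors in ewnd frames. */
    public static double pDown(int ewnd, int zMtl, double p) {
        return 1.0 - binomialCdf(zMtl - 1, ewnd, p);
    }
}
```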
These values can then be used to construct a Markov Chain, Figure 3.11, which models
the behaviour of RRAA-Basic when used in conjunction with 802.11b.
Figure 3.11: Markov Chain of RRAA-Basic (with four available data rates).
Finding the stationary distribution of the Markov chain given in Figure 3.11, gives the
following steady state probabilities:
b_1Mbps = P_D11 P_D5.5 P_D2 / (P_U1 (P_U2 P_U5.5 + P_D11 P_D5.5 + P_D11 P_U2) + P_D11 P_D5.5 P_D2)   (3.78)

b_2Mbps = P_D11 P_D5.5 P_U1 / (P_U1 (P_U2 P_U5.5 + P_D11 P_D5.5 + P_D11 P_U2) + P_D11 P_D5.5 P_D2)   (3.79)

b_5.5Mbps = P_U1 P_U2 P_D11 / (P_U1 (P_U2 P_U5.5 + P_D11 P_D5.5 + P_D11 P_U2) + P_D11 P_D5.5 P_D2)   (3.80)

b_11Mbps = P_U1 P_U2 P_U5.5 / (P_U1 (P_U2 P_U5.5 + P_D11 P_D5.5 + P_D11 P_U2) + P_D11 P_D5.5 P_D2)   (3.81)
Finally, to accurately calculate the proportion of time spent at each data rate, the ex-
pected number of frame transmissions before the loss ratio is reset needs to be considered.
This will typically be smaller than ewnd(dr) as, if certain conditions are met, the data rate can change before ewnd(dr) frames have been sent (i.e., if the loss-ratio is smaller than ORI(dr) even if all remaining frames are unsuccessful, or if the loss-ratio is greater than MTL(dr) even if all remaining frames are successful).
The precise number of frames can be calculated by considering the probability that the data rate will change (move up or down) after a specific number of frames have been transmitted, given that it did not change prior to that specific number of frames being transmitted³.
This is shown below, where:

Z̄_ORI(dr) = ewnd(dr) − Z_ORI(dr)

is used as a notational convenience.
E_UP(dr) = Σ_{n=Z̄_ORI(dr)}^{ewnd(dr)} n · C(n−1, Z̄_ORI(dr)−1) · p^{n−Z̄_ORI(dr)} · (1−p)^{Z̄_ORI(dr)}   (3.82)

E_DOWN(dr) = Σ_{n=Z_MTL(dr)}^{ewnd(dr)} n · C(n−1, Z_MTL(dr)−1) · p^{Z_MTL(dr)} · (1−p)^{n−Z_MTL(dr)}   (3.83)
Therefore, the expected number of frames sent per window can be written:

frames(dr) = E_UP(dr) + E_DOWN(dr) + ewnd(dr) · (1 − P_U dr − P_D dr)   (3.84)

and the proportion of time spent at a particular data rate was found to be:

Prop_dr = b_dr · frames(dr) / Σ_{drate ∈ DataRates} b_drate · frames(drate)   (3.85)

³The expected values were calculated using a slightly modified version of the binomial distribution. The binomial distribution gives the probability that exactly k successes will have occurred after n trials; the situation where k successes occur in fewer than n trials, with the remaining trials being failures, is an event that meets this definition. This is not suitable for the purposes of calculating the above expected values, which require the probability that exactly k successes occur in exactly n trials, equivalent to the binomial probability that exactly k − 1 successes occurred after n − 1 trials, followed by the probability of a success occurring on the n-th trial.
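The negative-binomial sums of Equations 3.82 to 3.84 can be evaluated directly. In the sketch below, p, ewnd and the Z thresholds are hypothetical, and P_U dr and P_D dr are passed in as inputs rather than recomputed.

```java
// Expected number of frames per RRAA window (Equations 3.82-3.84). The
// summand n * C(n-1, z-1) p^a (1-p)^b is the probability that the z-th
// success (or failure) lands exactly on frame n, weighted by n.
public class RraaWindow {
    static double choose(int n, int k) {
        double c = 1.0;
        for (int j = 1; j <= k; j++) c = c * (n - j + 1) / j;
        return c;
    }

    /** Equation 3.82: zBarOri = ewnd - Z_ORI successes end the window early. */
    public static double eUp(int ewnd, int zBarOri, double p) {
        double e = 0.0;
        for (int n = zBarOri; n <= ewnd; n++)
            e += n * choose(n - 1, zBarOri - 1)
                   * Math.pow(p, n - zBarOri) * Math.pow(1 - p, zBarOri);
        return e;
    }

    /** Equation 3.83: Z_MTL failures end the window early. */
    public static double eDown(int ewnd, int zMtl, double p) {
        double e = 0.0;
        for (int n = zMtl; n <= ewnd; n++)
            e += n * choose(n - 1, zMtl - 1)
                   * Math.pow(p, zMtl) * Math.pow(1 - p, n - zMtl);
        return e;
    }

    /** Equation 3.84. */
    public static double frames(int ewnd, int zBarOri, int zMtl,
                                double p, double pUp, double pDown) {
        return eUp(ewnd, zBarOri, p) + eDown(ewnd, zMtl, p)
             + ewnd * (1.0 - pUp - pDown);
    }
}
```

With p = 0 every frame succeeds, so the Z̄_ORI-th success arrives on exactly that frame and the window always ends early, which gives an easily checked limiting case.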
These data can then be used to calculate the capacity of a wireless channel carrying
VoIP in a similar manner to those models described in Sections 3.10 and 3.11:
nmax = ⌊ Σ_{dr ∈ DataRates} (T_P × dr × Prop_dr) / (VoIP_TRAFFIC · (T_P + T_LAYERS + T_SIFS+DIFS+ACK + T_DCF)) ⌋   (3.86)

Where:

T_DCF comes from the analysis in Section 3.10
3.16 Summary of Methodology and Implementation
This chapter has provided:
• An overview of what existed prior to this research (the literature review introduced
much of this material, and this chapter gave it a more technical treatment);
• The motivation and justifications for modifications made to existing models; and,
• A technical overview of what existed after the research, namely the new propaga-
tion, VoIP capacity and ARC models.
More specifically, this chapter provided mathematical formulae for models uncovered during the literature review that were deemed suitable for propagation prediction in an indoor industrial environment. These were:
• Friis’ freespace propagation model (Section 3.4.2);
• The Simple Power Law (Section 3.4.2); and,
• Partition Based Path-loss (Section 3.5.2).
Formulae were also provided for models useful in predicting the performance of 802.11b
networks under certain conditions:
• Wu’s Markov Chain analysis of the 802.11b MAC layer (Section 3.8); and,
• Garg’s VoIP capacity model (Section 3.9).
The propagation models were then used as a basis for other propagation models proposed
in this dissertation:
• Aisle based pathloss (Section 3.4.3); and,
• Partition based path-loss with path-loss exponent (Section 3.5.3).
In addition, it was shown how the Levenberg-Marquardt optimisation method could be
applied to the problem of predicting material properties for a ray tracing implementation
(Section 3.6).
Six new models to calculate the capacity of VoIP were provided. These models described
the performance of VoIP over 802.11b for:
• Basic Access DCF (Section 3.10);
• RTS/CTS DCF (Section 3.11);
• ARF (Section 3.12);
• RTS/CTS ARF (Section 3.13);
• CARA (Section 3.14); and,
• RRAA (Section 3.15).
One contribution of this Doctoral research that needs to be noted at this point is the Java simulation program developed during the course of the research project and used to assess the performance of the described models. The simulation program is introduced fully in Chapter 4, which details the tools and methods used to assess the performance of the models discussed in this chapter. Though absent from this chapter, the simulation program, like the models presented here, helps to facilitate the rapid deployment of wireless networks in an industrial environment. It achieves this by providing functionality beyond that of a mere test bench for assessing a given model’s relative performance: it provides methods by which predictions made by the implemented models can be extended across the entire site under investigation and the results easily visualised.
Chapter 4
Experimental Design
4.1 Overview
This chapter describes the tools and methodology used to conduct experiments aimed at
assessing the performance of the models described in Chapter 3 in the context of rapidly
deploying wireless networks. These tools included:
• Automated measurement equipment – used to gather empirical pathloss measure-
ments at industrial locations;
• A simulation program, written in Java, that implemented all of the models dis-
cussed in this dissertation and provided functionality that allowed the user to
model the environment under investigation; and,
• A basic 802.11b simulation program used to validate the statistical accuracy of the
new VoIP capacity models.
A methodology for using the measurement equipment is also presented. This method-
ology was followed at all sites where measurements were taken to ensure a high level of
accuracy and consistency.
Finally, metrics used to evaluate the different models are presented. For the propagation
models the average mean square error (MSE) of predicted versus measured results is
used and for actual network performance, the concepts of throughput and “goodput”
are introduced.
4.2 Measurement Equipment
It was noted previously that the propagation models described in this dissertation re-
quired empirical path-loss measurements to calculate their various parameters (path-loss
exponent, linear attenuation terms, etc.). It was also noted in Section 2.3, when fad-
ing was discussed, that measurements were gathered at inherently noisy locations. It
was therefore decided that attempting to measure and predict the sector average would
produce more robust results. With the need to perform sector averaging, the process
of manually gathering empirical pathloss measurements became a very time consuming
process. To this end, and to ensure the highest practical level of accuracy, the path-
loss measurements acquired during this research program were obtained using trans-
mitter and receiver equipment custom developed for the task, as shown in Figures 4.2
and 4.4.
As is standard in radio communications, the unit used for all power measurements was
dBm, which is the power ratio in decibels of measured power referenced to 1mW.
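This conversion is a one-line formula:

```java
// Conversion between milliwatts and dBm: dBm = 10 * log10(P / 1mW).
public class Dbm {
    public static double fromMilliwatts(double mW) {
        return 10.0 * Math.log10(mW);
    }

    public static double toMilliwatts(double dBm) {
        return Math.pow(10.0, dBm / 10.0);
    }
}
```

For example, the transmitter power of 26.98dBm quoted below corresponds to roughly 500mW.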
4.3 Transmitter Equipment
The transmitter equipment, shown in Figures 4.1 and 4.2, was designed to simulate, in a
very basic manner, a wireless access point (AP). Its operation was simple – an oscillator,
powered by a battery, generated a 2.437GHz (corresponding to 802.11b Channel 6)
signal, which was then amplified and fed through to an omnidirectional antenna which
transmitted the narrow band signal at a power of 26.98dBm. The antenna was affixed
to a stand that could be lengthened or shortened, allowing for measurements to be
taken at a range of different heights. Finally, on the wooden board, to which all the
components bar the antenna were affixed, was a cooling fan that kept the amplifier from
overheating.
Figure 4.1: Block Diagram of Transmitter Rig.
Figure 4.2: Transmitter Rig.
4.4 Receiver Equipment
The receiver equipment, shown in Figure 4.3, consisted of both the components required
to detect and measure the power of a received signal and a linear axis with controller
(Figure 4.4), which facilitated automation of sector averaging. A transmitted signal was
received through a 2.4-2.5GHz 3dB omnidirectional antenna, passed through a bandpass
filter (centred on 802.11b Channel 6) which removed extraneous noise, amplified and
then sent to a power meter (with a dynamic range of between -70dBm and 44dBm).
The power meter was connected to a personal computer (PC) via a general purpose
interface bus (GPIB) to universal serial bus (USB) interface which allowed for automated
measurements. Both this PC and the one used to drive the linear axis were powered by
their own internal batteries, and both the power meter and linear axis were powered by
a DC-AC power inverter connected to a series of batteries.
Figure 4.3: Block Diagram of Receiving Rig.
Due to Fast Fading (the rapid fluctuation of the received signal caused by multi-path propagation and the time varying nature of the channel), instantaneous measurements of received power may not be indicative of what would be experienced, on average, at that location by a receiver.
The impact of this was lessened both by taking an arbitrary (but large) number of measurements at a fixed location and by taking the average of the signal in the local area. The latter was achieved by moving the antenna over a small spatial area (using the linear axis device), which removed the rapid variations characteristic of Fast Fading. This was referred to as taking ‘the sector average’ in publications such as [Honcharenko 1992] (discussed in greater detail in Section 2.3).
Figure 4.4: Linear Axis.
The automated sector averaging operated by mounting an omnidirectional antenna on the linear axis. A PC (running in sync with the laptop that recorded measurements from the power meter) drove a motor, which allowed for 14 different measurement locations each run, each separated by one quarter wavelength of a 2.4GHz signal (approximately 0.031m). There were two positions, separated by approximately 0.43m (14
quarter wavelengths) along the linear axis where the antenna could be placed (this
allowed for 28 measurement locations along a given axis). In practice, four sets of
measurements were taken at each location, one set for each antenna position in one
direction and one set for each antenna position in a direction orthogonal to the first
direction, as shown in Figure 4.5.
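The quarter-wavelength spacing quoted above follows directly from λ = c/f:

```java
// Quarter-wavelength antenna spacing: lambda = c / f, spacing = lambda / 4.
public class SectorSpacing {
    static final double C = 3.0e8;   // speed of light, m/s (approximate)

    public static double quarterWavelength(double freqHz) {
        return C / freqHz / 4.0;
    }
}
```

At 2.4GHz this gives 0.03125m (the approximately 0.031m quoted above), and 14 quarter-wavelength steps span approximately 0.44m, consistent with the approximately 0.43m travel of the linear axis.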
Figure 4.5: Diagram of Linear Axis Showing at What Orientation Measurements were
Taken.
4.5 Measurement Process
To ensure consistency, a strict measurement methodology was followed at each of the industrial locations where measurements were taken.
Before path-loss measurements were carried out, it was important to check the spectrum
for any noise that could contaminate the results. To this end, before any set of mea-
surements were taken, the spectrum was visually inspected using a spectrum analyser
(Anritsu MS2721A Spectrum Master). The MS2721A was purpose built for the task of
deploying, maintaining and troubleshooting WiFi (802.11a, b and g) systems and as such
allowed any signals that may cause interference in the relevant portion of the spectrum
(i.e., around 2.4GHz) to be easily detected.
If interference was detected, its source could usually be determined by attaching the directional antenna shown in Figure 4.6 to the spectrum analyser, setting the analyser to display instantaneous measurements, and then simply rotating the directional antenna through a full circle and heading in the direction of the largest received signal. As this was a ‘home made’ antenna, its exact characteristics were not precisely known (and were not tested in an anechoic chamber). Numeric measurements were not obtained using this antenna. That signals were attenuated when not in line with the antenna’s main lobe was easily determined through basic experimentation, demonstrating that the antenna was functionally sufficient for the described task.
After the measurement equipment had been transferred to a measurement site, assem-
bled, and the power meter calibrated, the first experimental step was to undertake pre-
liminary measurements with the spectrum analyser, ensuring that there was no extrane-
ous noise in the relevant frequency band that would interfere with measurements.
Figure 4.6: Makeshift Directional Antenna Constructed using the Packaging from a Wine
Bottle.
Once all extraneous signals had been identified and where possible removed, the trans-
mitter was placed in its starting position and calibration measurements (typically 100
samples at 56 locations centred around the measurement point as shown above, Fig-
ure 4.5) were taken with the receiving and transmitting antennas at equal heights (1.75
metres) and separated by 1.5 metres. After this, the height of the transmitter was ad-
justed and the receiver wheeled around the site, taking measurements at a number of
different locations (where measurements were taken was subject to time and environ-
mental constraints). The transmitter and receiver locations (i.e., relative distance from
a landmark object) were noted on a map of the site. The heights and composition of
notable objects at the site were also recorded.
After a sufficient number of measurements had been taken for a single transmitter place-
ment (typically between 10 and 20 measurements), the transmitter was moved to a new
location and measurements repeated.
4.6 Measurement Equipment Validation
To ensure that the measurement equipment was operating correctly, a series of mea-
surements were taken in an open park location. This was chosen because it was easily
accessible and the large open area approximated free-space conditions.
The measured path-loss results were then plotted alongside theoretical freespace results,
Figure 4.7, (using Equation 3.3) to see how closely they matched.
Figure 4.7: Measured Path-loss versus Theoretical Freespace Path-loss.
It can be seen from Figure 4.7 that the measured values closely matched the theoretical freespace values, with deviations being easily explained by the fact that the oval was only an approximation of ideal freespace conditions (there was an unavoidable grassy ground plane, for a start).
As stated in Section 3.4, the path-loss exponent for free space is n = 2; for the above measured results, the path-loss exponent was calculated to be n = 2.13. These results, whilst saying little about the overall accuracy of the measurement equipment (absolute accuracy of the measurements being more closely related to the precision to which the power meter was calibrated and variations in the amplifier and antenna gains), suggested that the measurement equipment was producing reasonable results.
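The exponent n can be recovered from measurements by a least-squares fit of the log-distance model PL(d) = PL(d0) + 10 n log10(d/d0). The sketch below uses synthetic free-space samples (not the oval measurements) and recovers n = 2 from ideal data.

```java
// Least-squares estimate of the path-loss exponent n from measured
// path-loss samples, using PL(d) = PL(d0) + 10 * n * log10(d / d0).
public class PathLossExponent {
    public static double fit(double[] dist, double[] lossDb,
                             double d0, double pl0) {
        double sxy = 0.0, sxx = 0.0;
        for (int i = 0; i < dist.length; i++) {
            double x = 10.0 * Math.log10(dist[i] / d0);
            double y = lossDb[i] - pl0;
            sxy += x * y;                 // regression through the origin
            sxx += x * x;
        }
        return sxy / sxx;                 // slope = n
    }
}
```

Running the same fit over the oval data would yield the n = 2.13 reported above; the synthetic values here merely verify the fitting procedure.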
4.7 Overview of Java Simulation Program
As this was an applied Doctoral research program undertaken in collaboration with industry, the major resulting outcome was a simulation program that implemented the models discussed in Chapter 3. It utilised Java Development Kit (JDK) 6.0 and the 3D graphics Application Programming Interface (API) Java 3D 1.3.1, which acted as a wrapper on top of Microsoft’s DirectX API. A Java implementation was specifically requested by the industry partner, and it is conceded that a C implementation would have resulted in faster run times (though not by orders of magnitude).
For the sake of brevity, the actual implementation of the program is not discussed in great detail, as much of the programming was only tangentially related to the academic focus of this dissertation (this included functionality such as saving/loading, importing CAD diagrams, selecting objects and moving/resizing them, etc.)¹. However, an overview of
about the design choices made during its implementation. The level of detail provided
is such that a competent programmer could reproduce the program, to verify the claims
made in this dissertation.
The features described in the following sections (Section 4.7.1 – 4.7.8) also demonstrate
the simulation program’s applicability to real world network deployments, demonstrating
that it is feature rich enough to be of use outside of an academic environment and is
therefore a worthy outcome for an applied research project.
The Java Simulation Program was composed of two sections, a window at the top of the
screen that displayed the current state of the world as an interactive three dimensional
visualisation and a control panel at the bottom of the screen that facilitated the user’s
interaction with the world and provided numerical feedback about it.
4.7.1 Heat Map Tab
The Heat Map tab, Figure 4.8, was presented to the user when the program was started. From this tab, the user could define and manipulate the output of the simulation. The output of the simulation was a 3-dimensional mesh overlaid on the world, with the colour and height representative of received power. This style of graphical representation of data was known as a heat map and was referred to as such throughout the remainder of this document.

¹For examination purposes, a copy of the source code has been provided electronically.
Figure 4.8: Heat Map Tab.
The Heat Map tab’s functionality could be broken down into four main sections:
(i) Predictive Algorithm
This section allowed the selection of the algorithm used to make propagation
predictions. The user could choose between:
• Ray Tracing;
• Partition Based Path-loss; and,
• Simple Power Law/Aisle Based Pathloss.
(ii) Visualizations
The simulation could be run multiple times (using different predictive algo-
rithms) and the world could have multiple transmitters in it (each transmitter
being assigned a specific 802.11b channel). From this section the user could
choose which set of results (i.e., for what channels and algorithms) were
displayed in the form of a heat map.
(iii) Heat Map Display Options
The options in this section allowed the displayed heat map to be manipu-
lated (in a manner that altered its appearance, but did not change what it
represented). From this section, the user could:
• Adjust the height of the heat map;
• Specify how transparent the heat map was;
• Determine how far up the Z-axis stronger received powers should be
drawn as compared to weaker received powers; and,
• Determine the desired minimum received power (all powers below this
value were represented as black by the heat map). This functionality
was useful when specifications required a certain minimum coverage
across an entire site.
(iv) Heat Map Properties
These were the controls where the parameters that defined how the heat map
was created could be specified, such as:
• The resolution and specific colours of the heat map;
• The height of the receivers;
• Whether material parameter training occurred; and,
• Whether rays were drawn to the screen whilst the simulation was run-
ning.
In addition to these four sections the following controls were also present:
• A button that started the simulation;
• A key showing what colours correspond to what received power on the heat map;
and,
• The Mean Square Error (MSE) of measured versus predicted power (after a sim-
ulation had been run and material parameters computed).
4.7.2 Simulation Tab
The Simulation tab, Figure 4.9, allowed the user to specify the parameters of the simu-
lation. It was broken up into three sections:
Figure 4.9: Simulation Tab.
(i) Power Law / PBP Options
This section allowed the user to specify the:
• Computed path-loss exponent (or exponents if ABP was toggled on);
and,
• Reference path-loss, P0, at the reference distance, d0.
Alternatively, these parameters could be calculated (with P0 and d0 calcu-
lated using the closest training point to the transmitter) if ‘Calculate Vari-
ables’ was toggled on.
(ii) Ray Tracing Options
This section allowed the user to specify the:
• Anti-aliasing scheme to be used and how the rays should be launched;
• Number of recursions used to generate the geodesic sphere from which
rays were shot (or points, if the spiral ray launching method was chosen).
See Section 2.8 for more details;
• Maximum number of reflections, transmissions or combination thereof;
and,
• System loss, the amount of power lost (or gained) in the measurement
apparatus due to lossy wires and other factors.
(iii) Ray Tracing Training Properties
This section allowed the user to adjust the parameters of the Levenberg-Marquardt
training algorithm.
It was found, through trial and error, that by running the L-M algorithm
a number of times with different parameters, a higher degree of precision
could be achieved. If aggressive training was toggled on, then the training
algorithm was run a number of times with predefined settings. The other
available parameters were (see Section 3.6.2 for a more detailed description
of the L-M algorithm):
• Initial Lambda, the starting value of λ;
• Epsilon Step Down, the factor by which λ was decreased when the
residual decreased;
• Epsilon Step Up, the factor by which λ was increased when the residual
increased;
• Error Threshold, which caused the simulation to terminate if λ repeat-
edly only changed by this amount;
• Delta H, the finite difference used in approximating the Jacobian; and,
• Max Iterations, how many iterations the algorithm should execute be-
fore it was terminated.
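The interplay of the damping controls listed above can be illustrated with a minimal, single-parameter Levenberg-Marquardt loop fitting a path-loss exponent. This is a hypothetical sketch (the parameter names mirror the controls described here, and the default step factors are arbitrary), not the dissertation’s Java implementation:

```python
import math

def lm_fit_exponent(dists, losses, pl0, d0=1.0,
                    initial_lambda=1e-3, step_down=9.0, step_up=11.0,
                    delta_h=1e-6, max_iterations=50, error_threshold=1e-12):
    """Fit the exponent n in PL(d) = PL0 + 10*n*log10(d/d0) with a
    one-parameter Levenberg-Marquardt loop (illustrative sketch)."""
    def model(n, d):
        return pl0 + 10.0 * n * math.log10(d / d0)

    def sse(n):
        return sum((y - model(n, d)) ** 2 for d, y in zip(dists, losses))

    n, lam = 2.0, initial_lambda          # start near the free-space exponent
    err = sse(n)
    for _ in range(max_iterations):
        # Finite-difference Jacobian (the Delta H control).
        jtj = jtr = 0.0
        for d, y in zip(dists, losses):
            f = model(n, d)
            j = (model(n + delta_h, d) - f) / delta_h
            jtj += j * j
            jtr += j * (y - f)
        step = jtr / (jtj + lam)          # damped Gauss-Newton step
        new_err = sse(n + step)
        if new_err < err:                 # accept: relax damping (Epsilon Step Down)
            n, err, lam = n + step, new_err, lam / step_down
        else:                             # reject: increase damping (Epsilon Step Up)
            lam *= step_up
        if err < error_threshold:         # simplified stand-in for Error Threshold
            break
    return n
```

Because the model is linear in n here, the loop converges in a handful of iterations; on real ray-tracing parameters the same accept/reject logic applies to a vector of material properties.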
4.7.3 VoIP Tab
The VoIP tab, Figure 4.10, allowed the user to control the conditions and particular
network setup that VoIP cells would be plotted for.
Figure 4.10: VoIP Tab.
The VoIP tab allowed the user to specify:
• How many simultaneous calls a VoIP cell was required to support;
• Which Codec was utilised;
• The size of audio frames;
• If a long or short preamble was used;
• If Robust Header Compression (ROHC) was used; and,
• The noise floor of the environment under inspection.
Using these parameters, a cell could be plotted which showed the region within which
satisfactory performance was attainable even in worst case situations.
The user could then explore how changing network parameters, such as the data rate or
the amount of audio transmitted per packet, affected the size of the VoIP cell. In this way
coverage could be easily predicted and network parameters optimised. The section also
provided feedback, informing the user what the:
• Maximum capacity of the network was with the specified parameters, assuming no
noise; and,
• Minimum received power (in dBm) for the desired capacity was.
4.7.4 Objects Tab
The creation of objects was facilitated through the Objects tab, Figure 4.11.
Figure 4.11: Objects Tab.
The Objects tab allowed:
• Object primitives to be created, modified and deleted from the world;
• Materials, defined in the Materials tab, to be applied to objects;
• The distance between an object and the first placed transmitter to be viewed; and,
• The movement granularity (object displacement relative to mouse displacement)
to be adjusted.
4.7.5 Tx/Rx
The Tx/Rx tab, Figure 4.12, allowed the placement of transmitters, receivers and train-
ing points into the 3D world.
Figure 4.12: Tx/Rx Tab.
The Tx/Rx tab allowed the user to:
• Add and remove Training Points (TP), Transmitters (Tx) and Receivers (Rx) to
and from the world; and,
• Specify parameters (such as power and frequency) for these objects where appro-
priate.
When adding a transmitter to the world, the user could either select a perfect omnidirectional
(ideal isotropic source) antenna pattern or load one from a file. The simulation accepted
antenna patterns in the EDX antenna pattern file format [EDX 2006], chosen due to the
availability of a program able to convert most of the other major antenna pattern file
formats into an EDX compatible file. When a specific antenna pattern was loaded from a
file, the simulation displayed the transmitter as a picture of the antenna pattern (which
allowed, in the case of directional antennas, the antenna to be orientated in the appropriate
direction, Figure 4.13). If a pattern was not loaded, the antenna was displayed as a purple
sphere.
Figure 4.13: A Selection of Antenna Patterns as Rendered by the Simulation.
4.7.6 Materials Tab
Each object placed into the world was associated with a particular material type. The
properties of each material could be modified through the Materials tab, Figure 4.14.
Figure 4.14: Materials Tab.
The Materials Tab allowed the:
• Creation of new, or modification of existing, materials;
• Relative permittivity and partition loss parameters to be associated with a given
material type (i.e., all objects that had a given material type would have the same
relative permittivity and partition loss); and,
• User to specify the dimensions that an object of the selected material type would
have when it was first placed in the world. This functionality sped up the modelling
of the 3D world – for example, if the outer walls of a factory were known to always
be 8m tall and 0.2m thick, then these could be specified as defaults for objects that
were associated with the wall material type.
4.7.7 Messages Tab
The Messages Tab, Figure 4.15, reported errors to the user (such as trying to run the
simulation with no transmitter in the world), logged the time it took to perform cer-
tain actions (how long training took etc) and provided the user with basic interface
commands.
Figure 4.15: Messages Tab.
4.7.8 3D-Window
The last major element of the interface to this program was the window into the 3D
world, Figure 4.16. Through this interface the user could freely move the camera around
with the keyboard; select, move, scale and rotate objects by clicking on them; and change
their properties.
Figure 4.16: Java Ray tracing in Bundaberg Factory.
Figure 4.16 showed the factory after the ray tracing simulation had finished running
and all rays had been drawn to the screen. The colour of each ray indicated what had
happened during the course of the ray’s life. Thus:
• Purple rays were the segments that came directly from Tx;
• Green rays were those that had just passed through an object;
• Blue rays were those that had just bounced off an object;
• Red rays were those that had intersected Rx. The redness of the received rays
indicated their relative weight (for the distributed wavefront method); and,
• The white ends of a ray indicated the direction it was travelling (at a ray’s starting
point it was coloured and as it travelled along its path towards its next intersection
it faded to white).
4.7.9 Using the Simulation to Evaluate Propagation Models
In the simulation program, there were two different types of receiving objects:
• Receivers – points where predicted power would be calculated; and,
• Training Points – points where empirical measurements had been taken and where
predicted power would also be calculated.
Receivers were typically used when predicting the power in locations where measurements
had not been taken, such as when generating a heat map. Training points served two
purposes: firstly, they could be used to calculate model parameters; and secondly, they
allowed the measured power at a given point to be compared against the predicted power.
The process followed to evaluate a given propagation model using the Java simulation
was simple:
Step 1) Using the measurement equipment, raw power measurements were gath-
ered at different industrial locations using the described measurement method-
ology (Section 4.5).
Step 2) A virtual 3D representation of the location was then constructed using the
modelling tools provided by the simulation. The accurate locations of walls
and other objects were obtained using CAD floor plans of the environment.
Step 3) Training points were then placed into the virtual factory at locations where
empirical measurements had been taken. A transmitter was also placed in
the appropriate location and configured to match that used in Step 1.
Step 4) A simulation was then run for the propagation model using a subset of
the total training points to calculate model parameters.
Step 5) Predictions made by the propagation model were then compared across
all training point locations to calculate the MSE between measured and pre-
dicted pathloss, as described in Section 4.9.
Step 6) Steps 4 and 5 were then repeated multiple times, each time using a new
randomly selected subset of training points (ensuring that the new set of
training points contained the same number of elements as the previous one).
This repetition allowed the average MSE to be calculated and thus the per-
formance of the propagation model, relative to the number of data points
used to calculate its parameters was obtained.
Step 7) Steps 4 – 6 could then be repeated using a different number of training
points. This showed how the predictions of a propagation model evolved as
the number of training points used was increased.
Step 8) Finally, Steps 4 – 6 could be repeated using a different propagation model.
This allowed comparisons between the performance of different models to be
obtained.
Results obtained using this methodology are presented in Chapter 5.
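Steps 4 – 6 above can be sketched as a short loop. The `fit_model` and `predict` callables below are hypothetical stand-ins for whichever propagation model is under evaluation:

```python
import random

def average_mse(points, fit_model, predict, subset_size, trials, seed=0):
    """Steps 4-6 as a loop: repeatedly fit model parameters on a random
    subset of training points, then score the MSE over all points.
    `points` is a list of (location, measured_value) pairs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        subset = rng.sample(points, subset_size)     # random subset, same size each trial
        params = fit_model(subset)                   # Step 4: calculate model parameters
        total += sum((measured - predict(params, loc)) ** 2
                     for loc, measured in points) / len(points)   # Step 5: MSE
    return total / trials                            # Step 6: average over repetitions
```

With a trivial constant model the average MSE is zero, which makes the plumbing easy to check.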
4.8 Matlab MAC Layer Simulation
As described in sections 3.13 and 3.14, new VoIP capacity models were developed as part
of this research program. To ensure that the statistical modelling of the 802.11b DCF
was accurate, a simple simulation was written in Matlab. A flow chart, Figure 4.17,
which detailed its operation, is given here for completeness.
This simulation allowed the throughput of two saturated nodes transmitting over a noisy
channel to be measured. The Bit Error Rates (BERs), and consequently packet error rates,
for the different transmission speeds were calculated using Equations 2.13, 2.14, 2.16
or 2.17. The flowchart was simplified (it does not show the measurement logic) to make
it easier to read; however, by monitoring different quantities (e.g., counting how many
slots were waited between successful transmissions, or how many transmission attempts
were made before one succeeded), various system attributes could be computed.
As the BERs for 802.11b’s various modulation schemes were well known, this level of
simulation served to demonstrate the accuracy of the stochastic modelling of the 802.11b
DCF under [Garg 2003]’s experimentally validated assumption that many VoIP connec-
tions could be approximated by two saturated wireless STAs.
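The link between bit error rate and packet error rate that such a comparison relies on can be sketched as follows, under the usual independent-bit-error assumption (a generic expression, not necessarily the exact form used in the Matlab code):

```python
def packet_error_rate(ber, frame_bits):
    """Probability that a frame of `frame_bits` bits contains at least one
    bit error, assuming independent bit errors on a memoryless channel."""
    return 1.0 - (1.0 - ber) ** frame_bits
```

For illustration, at a BER of 1e-5 an 8000-bit frame is corrupted roughly 8% of the time, which is why even modest bit error rates visibly reduce goodput.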
Figure 4.17: Flowchart of Matlab 802.11b MAC Layer Simulation.
4.9 Propagation Simulation Comparison
When analysing the output of an estimator (such as the propagation models described
in Chapter 3) in comparison to observed values, the Mean Squared Error (MSE), is a
useful metric. For n estimates \hat{PL}_i of the measured path losses PL_i, it is defined
as:

MSE = \frac{1}{n} \sum_{i=1}^{n} ( \hat{PL}_i - PL_i )^2    (4.1)
The MSE ensured that the error metric was always positive and, unlike the mean error,
lent more weight to larger errors.
Calculating the MSE versus the number of empirical training locations required an
arbitrary number of such locations to be extracted from the dataset and used to calculate
the path-loss exponent n, the object attenuation values, or the material relative
permittivities (as appropriate).
It was, not surprisingly, found that different choices of training locations resulted in
significant changes to the MSE. Training locations that were spread evenly around the
factory, and thus took into consideration the largest number of object types, typically
performed the best (i.e., resulted in the lowest MSE), while training locations that were
tightly clustered together, and thus provided less information, performed the worst. To
account for this variation, the average MSE was considered. To obtain this value, n
training points were chosen from the dataset at random, without replacement, and the
MSE was calculated. This process was then repeated an arbitrary number of times (each
repetition drawing afresh from the full dataset) and the average MSE taken as the mean
of these individual MSEs.
This MSE averaging technique was known as bootstrapping and was preferable to
averaging over all possible combinations because of their sheer number: if n power
measurements were taken and r of those n measurements were used to compute the
coefficients, then there would be n!/(r!(n−r)!) possible combinations.
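The scale of that combinatorial explosion is easy to check. With illustrative figures of 50 measurements and 10 used for training (not the actual survey sizes):

```python
import math

# Number of ways to choose r training points from n measurements:
# n! / (r! (n - r)!). The figures below are illustrative only.
n_meas, r_train = 50, 10
print(math.comb(n_meas, r_train))   # 10272278170 possible subsets
```

Over ten billion subsets makes exhaustive averaging impractical, hence the bootstrap.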
As described in the Literature Review, it is generally accepted that measurements of
large-scale path loss (those that experience slow fading) will be lognormally distributed.
As described in publications such as [Rappaport 1992] and [Durgin 1998], the predicted
pathloss and the MSE (or variance, σ²) provide an estimate of the lognormal statistics
(mean, µ, given by the predicted pathloss, and standard deviation, σ) of the measured
pathloss.
The value of σ is important as it is directly related to the precision of the path loss
predictions (and thus the confidence one can have in them). The value of σ represents an
approximately two-thirds confidence interval [Durgin 1998]; i.e., if a given propagation
model predicts a path loss of PL, then the path loss at this point will lie in the range
[PL − σ, PL + σ] approximately two-thirds of the time. Given this, it is easy to
understand that models that have a smaller MSE (and thus standard deviation) are more
precise than those with a larger MSE and that more confidence can therefore be placed
in their predictions.
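The two-thirds rule of thumb can be verified numerically: shadowing that is lognormal in linear units is normal in dB, so a seeded Gaussian Monte Carlo suffices (the σ value below is illustrative):

```python
import random

rng = random.Random(42)                       # fixed seed for repeatability
sigma = 5.0                                   # dB; an illustrative shadowing spread
samples = [rng.gauss(0.0, sigma) for _ in range(100_000)]
within = sum(1 for s in samples if abs(s) <= sigma) / len(samples)
print(round(within, 3))                       # close to 0.68, i.e. about two-thirds
```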
This heuristic was chosen as a method for evaluating propagation models in the context
of the rapid deployment of wireless networks because it allowed graphs to be produced
that clearly showed how precise models were after a certain number of measurements had
been used to compute their parameters. The number of measurements taken at a given
site directly relates to the rapid deployment of wireless networks because it determines
the length of time for which one must be physically present on site. For large sites, this
length of time can be non-trivial.
4.10 Network Performance Nomenclature
As a prelude to the VoIP capacity results presented in Chapter 5, it is informative to
present a number of the metrics commonly used to assess network performance.
There are a number of different ways to measure how much data a network is transmit-
ting. The simplest is:
Throughput = Bits Transmitted / Time    (4.2)
Throughput is the total number of bits that can be transferred in a given unit of time.
However, as has been alluded to previously, due to network overheads such as network
layer headers and delays introduced by the DCF, this does not describe how much useful
data is actually being transmitted. To this end, the concept of Goodput is instead
used:
Goodput = Useful Bits Transmitted / Time    (4.3)
Useful bits are the bits of payload data actually delivered; they exclude network layer
headers, other overheads, and transmitted bits lost to noise corruption. Goodput is lower
than Throughput, but provides a much more realistic measure of the realisable speed of
the network.
802.11b can operate at data rates of 1, 2, 5.5 and 11Mbps; however, because of overheads,
the maximum data rate is seldom realised for raw data (for example, even a single station
transmitting on an 802.11b network must wait a random back-off period between frames).
A measure of how much of the available bandwidth is being utilised is known as Channel
Utilisation:
Channel Utilisation = Throughput / Data Rate    (4.4)
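Equations 4.2 – 4.4 translate directly into code; the traffic figures below are hypothetical:

```python
def throughput_mbps(bits_transmitted, seconds):
    return bits_transmitted / seconds / 1e6          # Equation 4.2

def goodput_mbps(useful_bits, seconds):
    return useful_bits / seconds / 1e6               # Equation 4.3

def channel_utilisation(throughput, data_rate):
    return throughput / data_rate                    # Equation 4.4

# Hypothetical one-second observation on an 11 Mbps 802.11b channel:
tput = throughput_mbps(6_000_000, 1.0)     # 6.0 Mbps on air
gput = goodput_mbps(4_500_000, 1.0)        # 4.5 Mbps of payload survives the overheads
util = channel_utilisation(tput, 11.0)     # fraction of the nominal data rate in use
```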
4.11 Signal Strength Heat Maps
When assessing the precision of various propagation models, it was only necessary to
calculate the received power at locations where measurements were made (as nothing
could be said about predictions at locations where measurements were not taken). This
is, however, not particularly helpful to the end user who would like to know about
coverage across the entire site. To help facilitate this, the idea of a heat map was
introduced.
Once the empirical measurements (taken with equipment described in Sections 4.2 –
4.4) had been used to compute the propagation model’s parameters (Sections 3.3 –
3.10) and the predicted received powers compared to the measured received powers, the
simulation program (Section 4.7) could compute the predicted path-loss across the entire
environment. This was done by placing receivers at regular, evenly spaced intervals (for
the high resolution heat map, 100×100 receivers were modelled in the environment) and
determining the power at each receiver. A polygon, divided into many square faces,
was then overlaid on the environment, with each face centred over one of the receivers.
The power at each receiver determined the colour of the polygon’s face, ranging from a
dark red for the lowest power, through red, orange and yellow, to a pale, almost white,
yellow for the highest power. A sample legend is shown in Figure 4.18.
legend is shown in Figure 4.18.
Figure 4.18: Sample legend for propagation heat map.
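The dark-red-to-pale-yellow mapping can be sketched as linear interpolation between a handful of RGB stops; the stop colours and power bounds here are illustrative, not the program’s exact palette:

```python
def heat_colour(power_dbm, lo=-90.0, hi=-30.0):
    """Map a received power onto a dark-red -> red -> orange -> yellow ->
    near-white gradient (illustrative RGB stops)."""
    stops = [(64, 0, 0), (255, 0, 0), (255, 128, 0),
             (255, 255, 0), (255, 255, 224)]
    t = max(0.0, min(1.0, (power_dbm - lo) / (hi - lo)))   # clamp to [0, 1]
    pos = t * (len(stops) - 1)
    i = min(int(pos), len(stops) - 2)
    f = pos - i
    a, b = stops[i], stops[i + 1]
    return tuple(round(a[c] + f * (b[c] - a[c])) for c in range(3))
```

Powers below the configured minimum would simply be painted black instead of consulting the gradient, matching the minimum-coverage display option described earlier.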
When the ray tracing propagation model was used, increasing the number of receivers in
the environment caused the simulation’s performance to degrade rapidly (which could
limit the end user’s ability to quickly assess the coverage provided by multiple different
transmitter placements). To enable the rapid testing of potential transmitter locations
a low-resolution heatmap could be generated which only used a 10×10 grid of receivers.
The same multi-faceted polygon used when creating a high resolution heatmap was
overlaid on the environment, but the colour of the faces was calculated through the use
of bi-cubic interpolation [Appendix 1].
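One common realisation of bi-cubic interpolation is separable Catmull-Rom sampling; the sketch below is a generic version of the idea, not necessarily the scheme detailed in Appendix 1:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """1D Catmull-Rom cubic through p1 (at t=0) and p2 (at t=1)."""
    return 0.5 * (2 * p1
                  + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (3 * p1 - p0 - 3 * p2 + p3) * t * t * t)

def bicubic_sample(grid, x, y):
    """Sample a 2D grid at fractional (x, y) with separable Catmull-Rom
    interpolation; edge neighbours are handled by clamping indices."""
    h, w = len(grid), len(grid[0])
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    def g(r, c):
        return grid[max(0, min(h - 1, r))][max(0, min(w - 1, c))]
    cols = []
    for dy in (-1, 0, 1, 2):
        row = [g(iy + dy, ix + dx) for dx in (-1, 0, 1, 2)]
        cols.append(catmull_rom(*row, fx))
    return catmull_rom(*cols, fy)
```

Sampling a 10×10 grid of predicted powers at fractional coordinates in this way yields the smoothly shaded low-resolution heatmap without re-running the ray tracer.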
Low-resolution heatmaps facilitated the rapid deployment of wireless networks as they
allowed many different transmitter locations to be quickly tested and assessed. When a
potentially suitable transmitter location was found, a high-resolution heatmap could then
be generated to confirm that the location was indeed suitable. Examples of heatmaps
for different propagation models are shown in Figure 4.19.
(a) Heat Map using Simple Power Law.
(b) Heat Map using Aisle Based Path-loss.
(c) Heat Map using Partition Based Path-loss.
(d) Heat Map using Ray Tracing.
Figure 4.19: Heat Maps for Different Propagation Models.
4.12 VoIP Cells
In a similar manner to the heat maps described in Section 4.11, it is also beneficial
to the end user to provide visual feedback about how effectively a network will sup-
port a given number of VoIP connections. To this end, the concept of VoIP cells was
introduced.
By using the propagation simulation to predict received power, the signal-to-noise ratio
(SNR) could be calculated ([Rappaport 2005] noted that a typical value for the noise
floor in an indoor environment was -90dBm). Using the SNR and the 802.11b VoIP
capacity models (Sections 3.10 – 3.15), cells that could support a given number of
simultaneous VoIP calls (using a specified codec at a specified data rate) within their
perimeter could be determined, as shown in Figure 4.20.
Figure 4.20: A contour map defining the region that can support a specified number of
simultaneous VoIP calls of a given type.
The Java simulation program displayed the VoIP cells as green and white contour maps.
This functionality greatly simplified the process of determining how much coverage was
enough coverage to support desired network performance.
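Plotting a VoIP cell then reduces to thresholding the predicted-power map against the minimum received power reported by the capacity model; a minimal sketch with hypothetical values:

```python
def voip_cell(power_map_dbm, min_rx_dbm):
    """Mark grid points whose predicted received power meets the minimum
    required for the desired call capacity (min_rx_dbm would come from the
    capacity model, e.g. the noise floor plus the required SNR)."""
    return [[p >= min_rx_dbm for p in row] for row in power_map_dbm]

# Hypothetical 2x3 patch of predicted powers and a -75 dBm requirement:
mask = voip_cell([[-60.0, -72.0, -80.0],
                  [-65.0, -74.0, -90.0]], min_rx_dbm=-75.0)
```

The True region of the mask corresponds to the interior of the green-and-white contour shown to the user.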
The visualisation options described in Sections 4.11 and 4.12 demonstrate that care was
taken, when implementing the Java simulation program, to provide functionality that
would render it a useful tool in real world network deployments.
4.13 Summary
This chapter (and the preceding three chapters) have provided background information,
described what is going to be investigated and detailed the tools and methodology with
which the investigation will take place. More specifically:
Chapter 1 provided a general overview of some of the technologies involved in the
desired course of research and outlined the basic goals of the research project,
i.e., to develop models that aid in the rapid deployment of wireless networks in
industrial environments.
Chapter 2 provided a more detailed look at the technologies involved and described
a number of models for propagation prediction and simulation of 802.11b specific
operation.
Chapter 3 mathematically described the models taken from the literature review
and used them as a starting point to develop new models.
Chapter 4 described the tools and heuristics developed during the course of the
research project, used to compare the models developed in Chapter 3 against those
described in Chapter 2 with respect to the goals described in Chapter 1. This
chapter also described in detail the major applied outcome of this research project,
the Java Simulation Program.
Both the propagation model comparisons and VoIP capacity results are presented in
Chapter 5.
Chapter 5
Results
5.1 Overview
This chapter presents the results of experimental campaigns conducted during the course
of this research project. The results presented in this chapter fall into two categories:
• Propagation models were implemented in the simulation program (Section 4.7)
and their performance evaluated using a heuristic that described their ability to
facilitate the rapid deployment of wireless networks (Section 4.7.9); and,
• VoIP capacity models (Sections 3.10 – 3.15) were implemented and used to obtain
numeric results. These results showed how the VoIP capacity cells, described in
Section 4.12, performed and allowed the relative performance of different network
configurations to be evaluated.
The primary goals of this chapter are to:
• Instill faith in the reader that the models (and the experimental results used to verify
them) incorporated in the simulation were based on sound engineering principles;
• Provide an appreciation of the data presented to the end user by the Java Simu-
lation Program;
• Demonstrate the efficacy of proposed models; and,
• Investigate when the proposed models should be used.
This chapter achieves these goals by first providing a brief overview of raw, measured
power data gathered at a car manufacturer, using the transmitter and receiver equipment
described in Chapter 4. Measurement data from other sites were excluded from the main
body of the dissertation for brevity (for examination purposes, this data has been
provided electronically). While the data collected during measurement
campaigns were interesting, they were not the main focus of the research. The single
data set presented was included to provide the reader with an appreciation for the
empirical measurements used to calculate the parameters of the propagation models,
to demonstrate that data was collected in a systematic manner and to provide the
motivation and basic justification for measuring and predicting the sector average.
With respect to the propagation models, this chapter, using data from three different
industrial locations and the heuristics described in Section 4.9, presents a comparison of
the propagation models described in Chapter 3. These results allowed conclusions to be
drawn about the relative suitability of each propagation model for the task of facilitating
the rapid deployment of wireless networks in an industrial environment and provided an
insight into what situations were most appropriate for specific models.
Graphs that show how the ray tracing propagation model (Section 3.6) improves in
precision as an increasing number of empirical measurements is used to calculate
material properties are also presented. These graphs serve to demonstrate how the
predictions of a propagation model evolve as more data points are used. They are
indicative of the results an end user of the model might see (and are included in this
chapter to illustrate this point) but, in contrast to graphs where the average error is
plotted, they are only minimally useful in assessing how suitable a model is for the
rapid deployment of wireless networks.
Following on from the propagation results, the 802.11b DCF (both basic access and
RTS/CTS) and the performance of three different ARC schemes are investigated. Graphs
are presented
that show network throughput, network goodput and how many simultaneous voice calls
can be supported by a noisy channel under varied network conditions. These graphs al-
low conclusions to be drawn about the suitability of different network configurations
for performing specific tasks and provide the end user with an appreciation of the data
presented by the Java Simulation Program.
Finally, graphs are presented that compare the proposed VoIP capacity models with
the Matlab simulation, to demonstrate the statistical accuracy of the VoIP analysis
(Section 3.10 – 3.11). Results that justify the underlying assumptions of the VoIP
capacity model are also presented.
5.2 Path-loss Measurements
This section details empirical pathloss measurements taken at a single industrial location.
The intent of this section is to provide an appreciation of the results obtained at each
industrial site surveyed and to demonstrate the motivation and basic justification for
measuring and predicting the sector average.
During August and September 2005, a series of measurements were conducted at a car
manufacturing plant to assess the path-loss characteristics of that plant. Measurements
were taken (using the methodology described in Section 4.5) at 13 different locations
around the plant, with 50 discrete power measurements taken at each position along the
linear axis (amounting to 67200 individual power measurements). Graphical represen-
tations of the raw data collected during this measurement campaign are presented in
this section to demonstrate that while measurements taken at a single location varied
wildly, the sample mean of a large number of measurements was a valid approximation.
A floor plan showing where measurements were taken is presented in Figure 5.1 and a
summary of measured pathloss given in Table 5.1 (CAL, in this table, is a calibration
measurement, taken 1.5m from the transmitter).
Figure 5.1: Map Showing Measurement Locations at Car Manufacturer.
Figure 5.2 is a plot that shows each individual path-loss measurement versus its distance
from the transmitter. This plot demonstrated that the received signal, and consequently
the path-loss at a given location, varied significantly, even with relatively small changes
in displacement of the receiver (i.e., the antenna moving along the linear axis). It
was reassuring, however, to note that many of these changes in signal strength were
transitory and that the sample mean provided a reasonable approximation of the signal
levels likely to be experienced by a receiver at the given location and its immediate
surroundings, as demonstrated in Figure 5.3.
The ‘×’ at the top and bottom of each vertical segment in Figure 5.3 marks the maximum
and minimum values respectively, the box in the middle represents the inter-quartile
range (the range within which the middle 50% of the recorded data reside), and the ‘×’
in the middle of each box represents the sample mean. From this it can be
seen that while the range of values encountered could be quite large, their inter-quartile
range and thus standard deviation was much smaller. Thus, taking a single measurement
CAL Rx1 Rx2 Rx3 Rx4 Rx5
Mean(dBm) 6.97 -19.27 -25.00 -33.90 -33.45 -29.03
Std Dev 0.40 5.19 5.93 5.13 4.41 5.43
Rx6 Rx7 Rx8 Rx9 Rx10 Rx11 Rx12
Mean(dBm) -24.47 -26.52 -33.29 -26.75 -27.36 -35.16 -48.59
Std Dev 4.93 4.91 4.51 5.96 5.01 4.91 3.92
Table 5.1: Received Power Measurements from Car Manufacturer.
Figure 5.2: Path-loss Replication Spread at Ford Geelong.
Figure 5.3: Box-Plot of Path-loss Data Collected at Ford Geelong.
and assuming that it was representative of the location in which it was taken could be
misleading, but taking a large number of measurements and using the sample mean was
a good measure.
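The box-plot quantities in Figure 5.3 (sample mean, inter-quartile range, full spread) are straightforward to compute per location; the replicate values below are illustrative, not the measured data:

```python
import statistics

def location_summary(dbm_samples):
    """Sample mean, inter-quartile range and full spread of the replicated
    measurements taken at one location (cf. Figure 5.3)."""
    q1, _, q3 = statistics.quantiles(dbm_samples, n=4)
    return {"mean": statistics.fmean(dbm_samples),
            "iqr": q3 - q1,
            "spread": max(dbm_samples) - min(dbm_samples)}

# Illustrative replicates for one location (dBm):
summary = location_summary([-24.0, -26.5, -25.0, -31.0, -23.5, -27.0])
```

For any location measured this way, the inter-quartile range is markedly smaller than the full spread, which is the numerical counterpart of the argument above for using the sample mean.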
A number of factors made it difficult to obtain repeatable power measurements at a
precise location:
• Inconsistencies between the physical location and the 3D model (when constructed
inside the simulation, Section 4.7); and,
• People moving around the location where measurements were being taken (thus
subtly changing the overall geometry of the environment).
Because of this, it was difficult to make predictions in exactly the same place, under ex-
actly the same conditions as measurements were taken. For these reasons, it was decided
that the propagation prediction models (more specifically, the ray tracing simulation)
should only attempt to predict the sector average.
5.3 Comparison of Propagation Models
As noted previously, a number of different propagation models (of varying complexity)
have been discussed in this dissertation (Sections 3.3 – 3.6). It was to be expected that the
more comprehensive (and, hence, generally more complex) the model, the more precise
it would be overall. Sometimes, however, only a limited number of measurements could
be taken at a given site (often due to time, budget or access constraints), which in some
cases resulted in the more complex models not being able to realise their full potential
(and sometimes performing more poorly than the simpler models). The ideal model
would be one that required a small number of measurements to achieve a high level
of accuracy and precision. Thus, a model suitable for the rapid deployment of wireless
networks would be one that did not require an extensive site survey and one that did not
require an exorbitant amount of time spent trying to determine the material properties of
objects in the environment. It was through these practical criteria, using data gathered
at industrial environments, that the propagation models were judged.
To this end, a comparison of five different propagation models across three different
industrial sites is presented (surveys at other sites were conducted during the course of
the project and showed the same general trends).
The locations described in this dissertation were chosen because they constituted a broad
slice of typical industrial environments. The three sites documented here were:
• A medium scale continuous process food manufacturing company (Section 5.3.1);
• A large scale automotive manufacturer, discrete component manufacture (Sec-
tion 5.3.2); and,
• A medium scale automotive components manufacturer, discrete component man-
ufacture (Section 5.3.3).
5.3.1 Beverage Bottling Factory
The following measurements were based on data collected on the production floor of a
bottling factory. A 3D model of the production facility, Figure 5.4, was constructed in
the Java Simulation Program (Section 4.7), based on available floor plans.
Figure 5.4: 3D Model of Beverage Bottling Factory.
The factory dimensions were approximately:
• 60m (east to west);
• 40m (north to south); and,
• 9m high.
It consisted of narrow walkways separated by items such as:
• Conveyor belts;
• A large pasteurizer (which could contain varying levels of the manufactured bev-
erage) in the middle;
• An office accessible by stairs sitting approximately 5m above the factory floor; and,
• Stock storage rooms on the north wall.
The heights of the various structures were recorded during the measurement campaign.
When calculating the attenuation caused by objects between the receiver and transmitter
for the PBP methods, seven discrete object types were taken into consideration. These
were:
• Inner walls;
• A pasteurizer;
• Conveyor belts;
• Bottling machines;
• Metallic housings above the conveyor belts;
• Crates; and,
• Pillars.
When calculating the path-loss using the ray tracing method, the:
• Floor;
• Roof;
• Outer walls;
• Office walls; and,
• Filters.
were also included. These material types were not used for PBP because no measurements
were taken in locations that rays would have needed to pass through these materials to
reach (i.e., no measurements were taken on the roof or beneath the floor, etc.). However,
they were included when using ray tracing because rays propagating around the site could
be reflected off them.
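The PBP attenuation calculation described above can be sketched as follows; the per-object attenuation values and reference loss used here are illustrative assumptions, not the values fitted at this site:

```python
import math

# Illustrative per-object attenuation values in dB (assumed for the sketch,
# not the fitted values from the measurement campaign).
ATTENUATION_DB = {"inner_wall": 3.0, "pasteurizer": 8.0, "conveyor": 2.0}

def pbp_path_loss(d, crossed, pl0=40.0, n=2.0, d0=1.0):
    """Partition Based Path-loss: a log-distance term plus a fixed
    attenuation for every object type the direct ray crosses."""
    loss = pl0 + 10.0 * n * math.log10(d / d0)
    return loss + sum(ATTENUATION_DB[obj] for obj in crossed)

# A receiver 20m away whose direct path crosses one inner wall and the
# pasteurizer:
print(round(pbp_path_loss(20.0, ["inner_wall", "pasteurizer"]), 1))
```

In the full models the per-object attenuations (and, for the Equation 3.13 variant, the exponent n) are fitted from the empirical training points rather than assumed.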
In the measurements presented in Figure 5.5, 1000 combinations were used to obtain the
average MSE which was then plotted for n empirical training points (except in the case of
ray tracing where, due to the time it took to run the simulations, only 20 combinations
were considered). The small number of combinations used for ray tracing makes the
plotted results noisier and can result in an unintuitive increase in MSE even as more
training points are used.
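The averaging procedure described above can be sketched as follows; `fit_model` and `predict` are placeholders for any of the propagation models under test, and the toy data set below is synthetic:

```python
import random

def average_mse(points, fit_model, predict, n_train, n_combos=1000, seed=0):
    """Average MSE over random combinations of n_train training points.

    `points` is a list of (input, measured_path_loss) pairs. For each
    combination the model is fitted on the training subset and its MSE is
    evaluated over the entire data set, then averaged across combinations.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_combos):
        train = rng.sample(points, n_train)
        params = fit_model(train)
        total += sum((predict(params, x) - y) ** 2 for x, y in points) / len(points)
    return total / n_combos

# Toy check: a constant-offset "model" on synthetic data is recovered exactly
# from any training subset, so the averaged MSE is zero.
data = [(d, 2.0 * d + 1.0) for d in range(1, 11)]
fit = lambda train: sum(y - 2.0 * x for x, y in train) / len(train)
pred = lambda c, x: 2.0 * x + c
print(average_mse(data, fit, pred, n_train=3))
```

Averaging over many random subsets is what smooths the curves for the statistical models; with only 20 combinations for ray tracing, the same procedure leaves visible noise.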
Figure 5.5: Results from Beverage Production Facility.
Figure 5.5 showed that ray tracing, using estimated material parameters, only required
a small number of measurements before it began to produce precise predictions. For
this particular location, it was informative to also investigate how the power predictions
made by the ray tracing algorithm evolved and improved in precision (decreased in vari-
ance) as more empirical measurements were used to calculate material properties. This
is informative because it shows that the output of the ray tracing algorithm accurately
predicts many of the peaks and troughs in the power measurements caused by fading
and shows how the predictions made are related to the number of empirical measure-
ments. Figure 5.6 shows measured versus predicted results at the beverage production
facility.
(a) 1 Training Point (MSE 26.5). (b) 3 Training Points (MSE 10.9).
(c) 5 Training Points (MSE 10.7). (d) 37 Training Points (MSE 7.8).
Figure 5.6: Empirical Versus Predicted Power at Beverage Production Facility.
5.3.2 Automotive Production Facility
The second location investigated in this dissertation was a large scale automotive man-
ufacturer. Measurements were taken around a 145m² area, beyond which the signal was
too attenuated to make reliable measurements. As was the case with the beverage bot-
tling facility, a 3D model of this factory was constructed. The area of the factory where
measurements were taken consisted of parallel walkways, separated by large stacks of car
parts, and was interspersed with a number of welding robots (which were operational
while the measurements were being taken – inspection with a spectrum analyser showed
that these provided no detectable interference in the 2.4GHz band, in line with the results
presented in [Rappaport 1989-2]).
The PBP algorithms were tested using the following material types:
• Welding robots;
• Office walls;
• Crates;
• Oasis (staff dining area);
• Large stacks of car parts; and,
• Machine tools.
When calculating the path-loss using the ray tracing method, the:
• Floor;
• Roof; and,
• Outer walls.
were also considered.
In the measurements presented in Figure 5.7, 1000 combinations were used to obtain the
average MSE which was then plotted for n empirical training points (except in the case
of ray tracing where, due to the time it took to run the simulations, only 20 combinations
were considered).
Figure 5.7: Results from the Automotive Production Plant.
5.3.3 Glass Manufacturing Facility
The final location investigated in this dissertation was a medium scale automotive com-
ponents manufacturer. A 3D model of this factory was constructed as per the other two
case studies. The factory was approximately 105m × 40m in area, with:
• An office block above the factory floor on the east wall;
• Glass cutting equipment hugging both north and south walls;
• A laminating room jutting out from the centre of the north wall; and,
• A large furnace in the north-west corner.
Sheets of completed glass product, ready for shipping, ran through the centre of the
factory, essentially partitioning off the north and south areas. All but one of the mea-
surements in this dataset were conducted in the north partition of the factory (violating
the assumption upon which aisle based path-loss was based). The PBP algorithms were
tested using the following material types:
Figure 5.8: Results from Glass Manufacturer.
• Laminate room walls;
• Sheets of glass;
• Furnace; and,
• Glass cutting machine.
When calculating the path-loss using the ray tracing method, the:
• Floor;
• Roof;
• Outer walls; and,
• Office walls.
were also included. The results are shown in Figure 5.8.
5.3.4 Observations
Figures 5.5, 5.7 and 5.8 showed the performance of propagation models at three different
sites. The comparison of the newly proposed models with well tested, pre-existing models
(identified in the Literature Review, Chapter 2) allowed conclusions to be drawn about
the accuracy and usefulness of such models, at least in the context of the test-case
sites.
• The black line represented the free-space situation (i.e., where the path-loss expo-
nent, n, equalled two). The MSE here was always constant because with a static
value of n, and no object attenuation considered, there was no way for additional
measurements to improve the accuracy of the model. Models that had errors above
those defined by this line were performing more poorly than if no assumptions had
been drawn about the environment they operated in.
• The green line represented the path-loss exponent model where n was chosen to
minimize the error between predicted and observed path-loss, Equation 3.1. Not
surprisingly, as more measurements out of the total data set were chosen, this
model’s MSE decreased, albeit slowly.
• The magenta line represented Aisle Based Path-loss, where two path-loss exponents,
nX and nY, were chosen to minimize the error between predicted and observed
path-loss, Equation 3.4. Aisle based path-loss typically converged to a solution with
a lower MSE (provided the assumptions about the layout of the environment were
valid) than the path-loss exponent model, but at a slower rate.
• The red line represented Partition Based Path-loss with the path-loss exponent
fixed at two, Equation 3.10.
• The blue line represented Partition Based Path-loss with the Path-loss Exponent
chosen to minimize the error between measured and predicted path-loss, Equa-
tion 3.13.
• The brown line represented the ray tracing algorithm (Section 3.6) that utilised
empirical measurements to determine material properties.
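As a concrete illustration of the simplest of these models, the path-loss exponent fit (the minimisation behind Equation 3.1) can be sketched as a least-squares estimate over the usual log-distance form; the reference loss PL(d0) and the synthetic data are assumptions made for the example:

```python
import math

def fit_exponent(measurements, pl0, d0=1.0):
    """Least-squares fit of the path-loss exponent n in the log-distance
    model PL(d) = PL(d0) + 10 n log10(d / d0)."""
    num = den = 0.0
    for d, pl in measurements:
        x = 10.0 * math.log10(d / d0)
        num += (pl - pl0) * x
        den += x * x
    return num / den

def mse(measurements, pl0, n, d0=1.0):
    """Mean squared error of the model over a set of measurements."""
    errs = [(pl0 + 10.0 * n * math.log10(d / d0) - pl) ** 2
            for d, pl in measurements]
    return sum(errs) / len(errs)

# Synthetic data generated with n = 2.8: the fit recovers the exponent, and
# its MSE beats the free-space assumption (n = 2) on the same data.
data = [(d, 40.0 + 10.0 * 2.8 * math.log10(d)) for d in (2, 5, 10, 20, 40)]
n_fit = fit_exponent(data, pl0=40.0)
print(round(n_fit, 2))
print(mse(data, 40.0, n_fit) < mse(data, 40.0, 2.0))
```

The free-space line in the figures corresponds to holding n = 2 fixed, which is why its MSE cannot improve with additional training points.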
It could be seen that, when only a small number of measurements were used to calculate
model parameters, the statistical methods had a smaller MSE than the pseudo-deterministic
methods (even in the case of the glass manufacturing plant, where aisle based path-loss
performed poorly, possibly due to the lack of aisles in the area where measurements were
taken2). However, as a greater number of measurements were used, the pseudo-deterministic
methods surpassed the statistical methods in precision and approached approximately the
same MSE obtained using the ray tracing method when the majority of the data points
were used. Ray tracing was precise and converged to a small MSE with comparatively few
measurements. It is interesting to note that, under certain conditions, models that minimize
MSE to calculate their parameters can perform worse than those that do not (e.g., the
simple power law with a fitted path-loss exponent versus the simple power law with n = 2).
This occurs because, while the minimized-MSE model provides a closer match to the
training points used to calculate the parameters, it does not necessarily provide a closer
match over the entire data set.
In situations where the rapid deployment of wireless networks was the primary focus,
the number of measurements needed was an important consideration. The fact that the
ray tracing implementation only required a small number of measurements to achieve
a comparatively low MSE provided evidence against arguments that its high level of
precision resulted merely from having many degrees of freedom.
The results presented in Figures 5.5 – 5.8 clearly showed the performance of the propagation
models (Sections 3.3 – 3.6) in practical (and fully operational) industrial environments.
The similar trends shown by the models at each location (selected because
they were representative of different industry segments) allowed conclusions to be drawn
about their suitability for use in industrial locations as a whole. The implications that
these results have on the rapid deployment of wireless networks in industrial environ-
ments are discussed in greater detail in Chapter 6.
2 Though the high level of error for both models renders this a victory of dubious value.
5.4 MAC layer simulations
5.4.1 Overview
The attenuation of a transmitted signal (as predicted by the propagation models in
Section 5.3) was not the only way in which the performance of a wireless network could
become degraded. As noted in Section 2.18, the number of stations contending for access
to a wireless channel also played a pivotal role. The degree to which channel contention
affected the wireless network depended, in part, on the applications that were run over
the network. For applications where the bandwidth requirements were static and easily
predictable (such as voice over IP (VoIP)), the analysis given in Sections 3.10 – 3.15
allowed the maximum number of active wireless stations (STAs) on a given channel,
before substantial quality degradation, to be estimated.
In the case of VoIP, to improve bandwidth utilization, data was often compressed at
the transmitter and decompressed at the receiver (using a Codec – Coder/Decoder,
Section 1.6). Codecs were generally lossy (which meant information was lost during
compression, thus a bit perfect replication of the signal upon reception was impossible),
this resulted in a trade off between how efficiently the signal could be compressed and
the quality of the signal on decompression.
This section primarily provides a performance analysis of the commonly used ITU G711
A-law audio codec [ITU-T G.711]. However, the same analysis could have just as easily
been applied to many other codecs.
[Perkins 2003, ITU-T G.711, IEEE 802.11 and IEEE 802.11b] provided specific values
for packet lengths, delays and contention window sizes when using RTP (Real Time
Protocol) over UDP/IP (User Datagram Protocol/Internet Protocol) to transmit G.711
A-law audio on an 802.11b channel, shown in Table 5.2.
When transmitting voice, a decision must be made as to how much audio should be
collected before a packet is transmitted (i.e., for G.711a, a packet transmitted every
20ms would result in 160 octets of audio per packet – thus 64kb/s). This results in a
tradeoff between channel efficiency (more data per packet means that less time will be
spent transmitting network layer headers) and perceived audio quality (spend too long
collecting audio before it is transmitted and the conversation begins to sound disjointed);
RTP Header = 12 octets
UDP Header = 8 octets
IP Header = 20 octets
RTP/UDP/IP (with ROHC) = 5 octets
MAC Header = 34 octets
Physical Header = 24 octets (with long preamble)
ACK = 14 octets (also needs Physical layer header)
RTS = 20 octets (also needs Physical layer header)
CTS = 14 octets (also needs Physical layer header)
DIFS = 50µs
SIFS = 10µs
CWmin = 31
CWmax = 1023 (i.e., m = 5)
dot11ShortRetryLimit = 7 (i.e., m′ = 7 for short packets)
dot11LongRetryLimit = 4 (i.e., m′ = 4 for long packets)
dot11RTSThreshold = 3000
G.711 A-law encoding = 64kb/s
G.729 encoding = 8kb/s
Table 5.2: Values for VoIP Capacity Simulations.
common values are 20 or 30ms of audio per packet.
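This efficiency trade-off can be made concrete with the header sizes from Table 5.2; the sketch below counts octets only, ignoring MAC/Physical timing overheads (DIFS, SIFS, back-off):

```python
RTP, UDP, IP, MAC = 12, 8, 20, 34   # header octets (Table 5.2)
ROHC = 5                            # compressed RTP/UDP/IP octets (Table 5.2)

def g711a_payload(ms):
    """G.711 A-law at 64 kb/s produces 8 octets of audio per millisecond."""
    return 8 * ms

def efficiency(ms, rohc=False):
    """Fraction of each frame's octets that carry audio rather than headers."""
    payload = g711a_payload(ms)
    headers = (ROHC if rohc else RTP + UDP + IP) + MAC
    return payload / (payload + headers)

for ms in (10, 20, 30):
    print(ms, round(efficiency(ms), 3), round(efficiency(ms, rohc=True), 3))
```

The payload fraction rises with the packetization interval, which is the channel-efficiency side of the trade-off; the audio-quality side (end-to-end delay) is not captured by this sketch.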
Unlike the propagation graphs presented in Section 5.3, the purpose of the following
results was not comparative. The goal was not to find the best model overall (though they
do demonstrate how the models derived in Sections 3.10 – 3.15 are more general than that
proposed by [Garg 2003], Section 3.9); rather, there were two goals in Section 5.4:
• Through evaluation of different network set ups (using basic access versus RTS/CTS
or a long preamble versus a short preamble etc.) those suitable for VoIP and those
unsuitable for VoIP can be identified; and,
• To demonstrate the functionality provided by the Java Simulation Program, Sec-
tion 4.7, when drawing capacity cells, Section 4.12.
To achieve these goals, the performance of an 802.11b network and its response to
channels with different SNRs was tested (using the analytic models from Sections 3.10
– 3.15 and simulations) under the following configurations:
• Basic Access DCF (Section 5.4.2);
• RTS/CTS DCF (Section 5.4.3);
• SmartBEB using RTS/CTS, a modification to 802.11b’s DCF that does not per-
form badly in a contested channel (Section 5.4.3);
• Short Physical layer Preamble and Long Physical layer Preamble (Section 5.4.4);
• Robust Header Compression and Robust Header Compression using a Short Phys-
ical layer Preamble (Section 5.4.5);
• G729 Audio, a more efficient audio codec (Section 5.4.6);
• Auto-Rate Fallback (Section 5.4.7);
• RTS/CTS Auto-Rate Fallback (Section 5.4.7);
• CARA (Section 5.4.7); and,
• RRAA-Basic (Section 5.4.7).
Where appropriate, results for between 10 and 70ms of G.711a audio at each of 802.11b's
four data rates were given.
5.4.2 DCF-Basic Access Mechanism
The following series of graphs (Figures 5.9 and 5.10) show the maximum number of
VoIP connections possible for each of 802.11b’s four data rates, depending on how much
audio was transmitted per packet and what the SNR was.
Figure 5.9: Capacity of VoIP with SNR = 15dB per chip.
With a SNR of 15dB per chip, the BER for all 802.11b modulation schemes was very
small; therefore, Figure 5.9 is similar to that found in [Garg 2003]. As the SNR per chip
was decreased (Figure 5.10) it became apparent that the maximum number of VoIP calls
that could be supported by the network also decreased, with the higher data rates and
larger packets suffering the greatest performance degradation.
(a) Capacity of VoIP with SNR = 8dB
per chip.
(b) Capacity of VoIP with SNR = 7dB
per chip.
(c) Capacity of VoIP with SNR = 6dB per
chip.
(d) Capacity of VoIP with SNR = 5dB
per chip.
Figure 5.10: Capacity of VoIP as SNR decreases.
Figure 5.10 showed that as the SNR decreased, unsurprisingly, it was the stations using
higher data rates and larger packets that took the biggest performance hit. Ostensibly,
this was because higher data rates have a larger BER for a constant SNR and larger
packets have a greater probability of becoming corrupt in the presence of noise.
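Both effects follow from the standard independent-bit-error approximation, under which a packet of L bits is received intact with probability (1 − BER)^L; the BER figures below are placeholders rather than the modulation-specific values used in the analysis:

```python
def packet_success(ber, bits):
    """Probability a packet arrives intact, assuming independent bit errors."""
    return (1.0 - ber) ** bits

# Larger packets fail more often at the same BER...
assert packet_success(1e-4, 8 * 400) < packet_success(1e-4, 8 * 100)
# ...and a higher-rate modulation (which has a higher BER at the same SNR)
# fails more often at the same packet size.
assert packet_success(1e-3, 8 * 200) < packet_success(1e-4, 8 * 200)
print(round(packet_success(1e-4, 8 * 200), 3))
```
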
It was interesting to note that with a SNR of below 6dB, utilizing a data rate of 5.5Mbps,
as opposed to 11Mbps, allowed for significantly more active VoIP connections. This can
be seen in the relative capacities versus SNR of the different modulation schemes with
a fixed audio speed as shown in Figure 5.11.
(a) 10ms audio speed. (b) 30ms audio speed.
(c) 60ms audio speed.
Figure 5.11: Analytical capacity of different modulation schemes with varied audio
speeds.
Figure 5.11 showed that in channels with a low SNR, transmitting at a slower transmission
rate increased the number of VoIP calls that a channel could support. This
was obviously not unexpected; instead, it highlighted one of the reasons VoIP capac-
ity models that utilised channel conditions as part of the model, were desirable. The
implications of this were significant, as it meant that the range of a network, required
to support up to a fixed maximum number of simultaneous calls, could be extended by
forcing all STAs to transmit at a lower data rate (implying that an appropriately selected
ARC algorithm would be useful). Situations such as this acutely demonstrate the benefit
of being able to both simulate the propagation characteristics of an environment (and thus
obtain a good estimate of SNR) and then fine tune the network's operating parameters to
achieve optimal performance. This demonstrates how the data presented in this section
contributes to the process of facilitating the rapid deployment of wireless networks.
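The rate-selection effect behind Figure 5.11 can be illustrated with a toy expected-throughput calculation; the per-rate BER values are placeholders chosen only to reflect that, at a fixed low SNR, the faster modulation is far more error-prone:

```python
def expected_throughput(rate_mbps, ber, packet_bits):
    """Transmission rate discounted by the probability the packet survives,
    assuming independent bit errors and ignoring retransmissions."""
    return rate_mbps * (1.0 - ber) ** packet_bits

# Placeholder BERs at one fixed (low) SNR: the 11Mbps modulation is far
# more error-prone than the 5.5Mbps one, so the slower rate delivers more.
bits = 8 * 234   # 20ms of G.711a audio plus RTP/UDP/IP/MAC headers (Table 5.2)
t11 = expected_throughput(11.0, 2e-3, bits)
t55 = expected_throughput(5.5, 1e-4, bits)
print(t55 > t11)
```

This is the behaviour an SNR-aware capacity model (and an appropriately selected ARC algorithm) is intended to exploit.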
5.4.3 DCF-RTS/CTS
Request to Send/Clear to Send (RTS/CTS), Section 2.18, attempted to minimize the
delay caused by collisions by allowing STAs to secure channel access before data was
transmitted. STAs would first transmit an RTS frame, to which the AP would respond
with a CTS frame. Once the channel had been secured, the station could freely transmit a
data packet without fear of collision. This meant that the only time stations could collide
was when an RTS frame was transmitted. If a collision did occur, as the RTS frames
were small, only a short amount of time was wasted. The downside to this approach
was the extra overhead introduced by the need to send RTS and CTS frames. As [IEEE
802.11] specified that RTS/CTS frames should only be sent if the frame containing the
data was larger than dot11RTSThreshold, which in the standard had the default value
of 3000, this analysis was realistically only valid for the larger frame sizes (e.g., 50 and
70ms of audio). The results of this analysis are shown in Figures 5.12 – 5.13.
It can be seen from Figures 5.12 and 5.13 that the overhead introduced by RTS/CTS
frames outweighed any reduction in the time spent resolving collisions. However, as
[Nadeem 2004] demonstrated, the RTS/CTS mechanism dictated by the 802.11 standard
could be improved upon to offer greater performance. One of the limitations of the
802.11 standard was that it behaved as though all packet losses were due to collisions
with other transmitting STAs. A modification to 802.11b’s Binary Exponential Back-off
(BEB) mechanism, SmartBEB, proposed by [Nadeem 2004] used RTS/CTS frames to
Figure 5.12: Basic Access Vs RTS/CTS with SNR = 6dB per Chip.
Figure 5.13: Basic Access Vs RTS/CTS with SNR = 4dB per Chip.
rectify this identified deficiency.
SmartBEB addressed the fact that standard BEB could not distinguish between a collision
and a packet error (and so increased the contention window size in both cases).
SmartBEB, when implemented in conjunction with RTS/CTS:
• Reset the contention window when a CTS frame was received;
• Increased the contention window if a RTS frame was sent and a CTS not received;
and,
• Did not increase the contention window size if an ACK was not received.
In this manner, the contention window was only increased in size if an RTS frame collided
with another station’s RTS frame.
Unfortunately, as shown in Figure 5.14 (where goodput is plotted for the Basic Access
Mechanism, RTS/CTS, and RTS/CTS with SmartBEB, using a slightly modified version
of the Matlab simulation, Section 2.20), SmartBEB only provided a slight improvement
over the standard BEB with RTS/CTS. It was noted that even at high BERs, the improvement
that came with using SmartBEB was very small. This demonstrated that the amount
of time spent waiting to transmit because of the DCF was dwarfed (even when the
contention window was large, as would be the case after multiple packet transmissions)
by the time wasted actually retransmitting corrupted data (even when comparatively
small packets were used).
It should be noted that no claims are being made about the inherent usefulness of SmartBEB;
it was only included in the dissertation to further demonstrate that RTS/CTS alone
was unsuitable for use in VoIP under even the most forgiving circumstances (selective use
of RTS/CTS is shown to be useful when combined with an ARC algorithm, as demonstrated
in Section 5.4.7).
The previous figures (Figures 5.12 – 5.14) show that RTS/CTS is not suitable for VoIP
traffic – this result is also easy to show mathematically.
Using Markov Chain analysis (Section 3.8) to estimate the probability of collision, PC,
the expected number of collisions before a successful transmission could be found to be:
Figure 5.14: Goodput versus BER for different DCF schemes.
NUMCOL = Σ_{n=0}^∞ n PC^n (1 − PC) = PC(1 − PC) / (1 − PC)^2 (5.1)
If the amount of time it took to send a data frame was Tx Time (a function of data rate
and packet size), then the amount of time wasted because of collisions would be:
Time Wasted = NUMCOL × Tx Time (5.2)
If the amount of time it took to complete an RTS/CTS exchange was RTS Time, then
the overhead introduced by using RTS/CTS would be the expected number of collisions
multiplied by the time taken up by collided RTS/CTS transmissions, plus the time taken
to send an RTS/CTS exchange before every data frame transmission:
RTS Overhead = (NUMCOL + 1) RTS Time (5.3)
Thus, if:
Time Wasted > RTS Overhead (5.4)
Then using RTS/CTS would provide an increase in performance. However, because of
the relatively small size of typical VoIP packets, a very large collision probability is
required for this condition to be true. By the time this is reached, the delay caused by
colliding packets has already limited the number of VoIP connections the channel could
support. Therefore, while switching over to RTS/CTS at this point would provide higher
overall channel goodput, it would not improve the number of simultaneous calls that a
channel could support.
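Equations 5.1 – 5.4 can be checked numerically; the frame times used below are illustrative placeholders rather than exact 802.11b air times:

```python
def expected_collisions(p_c):
    """Equation 5.1 simplified: PC(1 - PC) / (1 - PC)^2 = PC / (1 - PC)."""
    return p_c / (1.0 - p_c)

def rts_pays_off(p_c, tx_time, rts_time):
    """Equations 5.2 - 5.4: RTS/CTS helps only when the time wasted on
    collided data frames exceeds the RTS/CTS overhead."""
    num_col = expected_collisions(p_c)
    time_wasted = num_col * tx_time            # Equation 5.2
    rts_overhead = (num_col + 1) * rts_time    # Equation 5.3
    return time_wasted > rts_overhead          # Equation 5.4

# With small VoIP frames, tx_time is close to rts_time, so even a high
# collision probability does not justify the RTS/CTS exchange:
print(rts_pays_off(p_c=0.1, tx_time=500e-6, rts_time=300e-6))  # False
print(rts_pays_off(p_c=0.5, tx_time=500e-6, rts_time=300e-6))  # False
print(rts_pays_off(p_c=0.9, tx_time=500e-6, rts_time=300e-6))  # True
```

The collision probability needed before RTS/CTS pays off is far beyond the point at which call quality has already collapsed, which is the argument made above.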
5.4.4 Short Preamble
As discussed in Section 2.14, the 802.11b Physical layer provided an option to utilize
what was known as a short preamble. Using the short preamble both reduced the number
of bits in the Physical layer header and increased the speed at which it was transmitted.
The realisable gains for VoIP that utilising the short preamble provides are shown in
Figure 5.15.
Figure 5.15: Long Preamble Vs Short Preamble with SNR = 6dB per Chip.
Figure 5.15 shows that when the short preamble was utilised, only a small increase in
capacity was realised.
5.4.5 Robust Header Compression (ROHC)
As discussed in Section 2.21, much of the RTP/UDP/IP header information does not
need to be attached in full to each packet transmitted. ROHC is a method by which
redundant overhead can be compressed. For the purpose of this analysis (and for the
reasons given in Section 2.21), it was assumed that ROHC compressed the RTP/UDP/IP
header to 4 octets.
Figure 5.16: ROHC BA (Short and Long Preamble) Vs Long Preamble BA with SNR
= 6dB per Chip.
Figure 5.16 shows that a small but consistent improvement in capacity was realised when
ROHC was used in conjunction with the short preamble.
5.4.6 G.711a versus G.729 Audio
Different Codecs can compress voice signals to different degrees (typically a trade-off
between the level of compression and the quality of the decoded signal). Figure 5.17
shows the difference in capacity that can be realised by using a codec that provides
a higher level of compression, namely G.729 [ITU-T G729]. Where G.711a operates at
64kbps, G.729 operates at 8kbps.
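The bit-rate difference translates directly into payload size per packet, which is where the capacity gain comes from; a minimal sketch:

```python
def payload_octets(bitrate_kbps, interval_ms):
    """Audio octets collected per packet at a given codec bit rate."""
    return bitrate_kbps * 1000 * interval_ms // (1000 * 8)

print(payload_octets(64, 20))  # G.711a at 20ms: 160 octets per packet
print(payload_octets(8, 20))   # G.729 at 20ms:   20 octets per packet
```

An eight-fold reduction in payload shortens every data frame's air time far more than any of the header or preamble optimisations considered above, consistent with the results in Figure 5.17.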
It can be seen by looking at Figure 5.17 that a substantial increase in capacity was
Figure 5.17: G711a versus G729 with SNR = 6dB per Chip.
realised (especially with large frames) when a codec with a higher level of compression
was used. This increase in capacity was more substantial than anything achieved through
tweaking parameters in the 802.11b specifications (long preamble versus short preamble,
basic access versus RTS/CTS and ROHC). This implies that if a substantial increase in
capacity is required, the choice of a different codec should be the first thing considered
(provided that the Mean Opinion Score, Table 1.2, of the new codec is acceptable).
5.4.7 ARC Algorithms
Auto-Rate Control (ARC) algorithms attempt to choose the data rate that optimises
throughput. Analytic models that predicted the performance of four different ARC
schemes were given in Chapter 3. These were:
• Auto-Rate Fallback (ARF), Section 3.12;
• RTS/CTS ARF, Section 3.13;
• CARA, Section 3.14; and,
• RRAA-Basic, Section 3.15.
Figure 5.18 compares the performance of these ARC algorithms when used to decide
upon transmission speeds for STAs transmitting packets containing 20ms of G711a au-
dio against that of STAs transmitting using fixed transmission speeds, and a hypothet-
ical optimal ARC algorithm (that always chooses the optimal transmission speed and
introduces no additional overhead).
Figure 5.18: ARC Algorithm Comparison.
It can be seen, by looking at Figure 5.18, that:
• The four ARC algorithms (solid coloured lines) did not perform as effectively as
the optimal ARC algorithm (dashed grey line) however, the differences between
CARA, ARF, RRAA-Basic and the optimal ARC algorithm were small;
• RTS/CTS ARF performed poorly when compared with ARF, RRAA-Basic and
CARA, ostensibly because with only a relatively small number of non-saturated
STAs vying for access to the channel, the impact of collisions on the performance
of the ARC algorithms (a situation that both RTS/CTS ARF and CARA were
designed to address) was minimal; and
• CARA only performed slightly worse than ARF and RRAA-Basic4, which demon-
strated that it was a suitable ARC algorithm for use on a network expected to be
carrying VoIP traffic (and would offer substantially better performance than ARF
or RRAA-Basic in a highly contested channel).
5.4.8 Observations
This section has investigated the ability of an 802.11b network to carry VoIP traffic
through analysis of MAC layer and Physical layer performance under contention. Physi-
cal layer performance was simple to model using the equations provided in [Proakis 2001]
and the MAC layer was modelled using equations provided in Sections 3.10 – 3.15. The
primary assumption in the MAC layer models used was that a non-saturated channel
carrying VoIP traffic could be approximated by two saturated nodes; this assumption
was experimentally validated in [Garg 2003]. The MAC layer analysis was important for
two main reasons:
• It provided analytic models that were incorporated into the applied outcome of
this research project, the Java Simulation Program (allowing VoIP capacity cells
to be plotted); and,
• It highlighted how various network parameters affected the ability of a wireless
network to carry VoIP traffic.
4 The ARF and RRAA-Basic results for a high SNR presented in this section are slightly
misleading. The reasons for, and implications of, this are discussed further in Section 5.4.9.
Namely, the results presented in Section 5.4 showed that:
• Given the small size of VoIP frames, the overhead introduced by RTS/CTS trans-
missions was greater than the time they saved, even in a highly contested channel;
• Both the short preamble and ROHC provided small, but consistent improvements
in capacity and when used in conjunction, could provide a significant improvement
in performance;
• The most substantial improvement in performance came from utilising a more
efficient Codec; and,
• An ARC algorithm (such as ARF, RRAA or CARA) performed well at selecting
the appropriate transmission speed to use in a noisy channel.
5.4.9 MAC Layer Simulation Validation
To validate the statistical accuracy of the analytic model of 802.11b’s DCF presented in
this dissertation, a simple simulation was written in Matlab where two saturated 802.11b
stations (always having a packet to transmit) vied for access to the channel. A flowchart
showing the simulation’s logic was given in Figure 4.17, Section 2.20.
In order to demonstrate that the analytic model matched the Matlab simulation, channel
goodput versus data rate for a fixed SNR is plotted in Figures 5.19 and 5.20.
Figure 5.19: Validation of Analytic Model with Simulation SNR = 7dB.
Figure 5.20: Validation of Analytic Model with Simulation SNR = 6dB.
Figures 5.19 and 5.20 show that the analytic results very closely match the simulated
results, demonstrating that the equations presented were statistically accurate, provided
that the assumptions held.
Finally, it was demonstrated that the assumptions made by [Garg 2003] mostly held, even
in a noisy channel. This was done using a popular network simulator, the Global Mobile
Information Systems Simulation Library (GloMoSim) [Bagrodia 2001], modified to accu-
rately implement 802.11b. GloMoSim allowed multiple wireless STAs transmitting at a
constant bit rate through a noisy channel to be modelled. This set of simulations differed
from those described above, in that multiple non-saturated STAs were being simulated
as opposed to two saturated STAs. Simulations were run to investigate how average
channel goodput varied with respect to the number of active STAs. These results were
then juxtaposed with predictions made by the proposed analytic models (Sections 3.10
– 3.15), as shown in Figures 5.21 and 5.22.
Figure 5.21: Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per
frame, 7dB SNR per chip).
Figure 5.22: Predicted Goodput versus Analytic Goodput (20ms G.711a Audio per
frame, 7dB SNR per chip).
As given in Table 5.2, 128kbps of channel goodput was required for each STA transmit-
ting VoIP using the G.711a codec. The goodput required for a given number of stations
transmitting G.711a audio is shown in Figures 5.21 – 5.26 by a black line. If actual
channel goodput (plain coloured lines) was less than the offered/required goodput, then
the quality of each VoIP call carried by the channel was degraded.
Figures 5.21 – 5.26 show that, for each data rate, up until a certain number of STAs were
active, channel goodput was equal to offered/required goodput. Beyond a certain number
of active STAs, however, channel goodput diverged from offered/required goodput (this
is shown in Figures 5.21 – 5.26, by the coloured lines breaking away from the black line)
and while in most cases channel goodput still increased, it did not do so by enough
to completely support another VoIP call, thus call quality across the channel would be
degraded.
The analytic models of Sections 3.10 – 3.15 predict the maximum number of active STAs
where channel goodput is still equal to offered/required goodput. These predictions are
shown in Figures 5.21 – 5.26 as patterned horizontal lines (of the same colour as their
corresponding simulated results). If the analytic model predicted that nmax STAs could
be supported by the channel, then 128·nmax kbps was plotted as the analytic goodput (128kbps
was required per station). Figures 5.21 and 5.22 give results where no ARC algorithm
was being used and show that in all cases, the analytic goodput was equal to the point at
which the simulated goodput (coloured lines) diverged from the required goodput (black
line). This indicated that the analytic model correctly identified the point at which the
quality of VoIP calls began to degrade and thus confirmed that the assumptions made
in deriving the analytic models were reasonable.
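The bookkeeping behind the analytic capacity line can be sketched as follows. This is a minimal illustration only; the class and method names are invented for this sketch and are not taken from the Java Simulation Program:

```java
// Minimal sketch of the analytic-goodput bookkeeping described above.
// Names are illustrative only; they do not come from the dissertation's code.
public class VoipCapacitySketch {
    // Each G.711a STA offered 128 kbps of goodput to the channel (Table 5.2).
    static final double REQUIRED_KBPS_PER_STA = 128.0;

    // Analytic goodput plotted for a channel predicted to support nMax STAs.
    static double analyticGoodputKbps(int nMax) {
        return REQUIRED_KBPS_PER_STA * nMax;
    }

    // Calls degrade once the active STAs' offered load exceeds the
    // analytically predicted capacity of the channel.
    static boolean callsDegraded(int activeStas, int nMax) {
        return activeStas * REQUIRED_KBPS_PER_STA > analyticGoodputKbps(nMax);
    }

    public static void main(String[] args) {
        int nMax = 6; // hypothetical analytic prediction for one data rate
        System.out.println(analyticGoodputKbps(nMax)); // 768.0
        System.out.println(callsDegraded(6, nMax));    // false
        System.out.println(callsDegraded(7, nMax));    // true
    }
}
```

The divergence point in the figures corresponds to the first value of `activeStas` for which `callsDegraded` returns true.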
Figures 5.23 – 5.26 give results where ARC algorithms were being used. Most of the an-
alytic predictions accurately represent the simulated results, indicating that the analytic
models provide useful approximations to real-world processes. The only notable discrepancies appeared in Figures 5.23 and 5.26, where the analytic predictions made by ARF and RRAA-Basic for a channel with an SNR of 8 dB (the magenta line) indicated a higher capacity than the simulations revealed. This discrepancy occurred due to both:
• A breakdown of Garg’s approximation that a channel carrying VoIP can be modelled as a channel containing two saturated STAs; and,
• ARF’s and RRAA-Basic’s failure to distinguish between channel errors and collisions.
It does not indicate that the models proposed in Sections 3.12 and 3.15 are fundamentally
inaccurate. The ARF and RRAA-Basic models produce accurate results, provided that
the correct probability of a successful transmission (obtainable for a saturated channel
using the Markov chain given in Section 3.8) is used.
As ARF and RRAA-Basic do not distinguish between channel errors and collisions, when there are a large number of STAs vying for access to the channel (and therefore the probability of a collision occurring is not negligible), a significant number of attempted transmissions fail because two or more STAs attempt to transmit at the same time. ARF and RRAA-Basic take this as being indicative of poor channel conditions and attempt to improve throughput by reducing the transmission speed. As reducing the transmission speed does not significantly alter the probability of a collision occurring, ARF and RRAA-Basic will, at the new transmission speed, once again see poor channel conditions and attempt to correct them by reducing the speed further. In the worst case, this continues until ARF or RRAA-Basic reach the slowest possible transmission speed (and still see poor channel conditions), resulting in ARF and RRAA-Basic providing less than optimal throughput in a highly contested channel. Garg’s VoIP approximation underestimates the probability of a collision occurring in a channel with many STAs, which leads to the overestimation of throughput by the analytic ARF and RRAA-Basic models.
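The downward spiral described above can be illustrated with a toy model of a failure-counting rate controller. This is a sketch only: the thresholds and rate set are simplified assumptions (not the ARF specification), and collisions are modelled as a failure probability that is independent of the chosen rate, which is the crux of the argument:

```java
import java.util.Random;

// Toy illustration of why collision-induced failures drag a failure-counting
// rate controller down: the failure probability here does NOT depend on the
// rate, so downshifting never helps and the rate sinks toward the minimum.
// Thresholds and the rate set are simplified assumptions, not the ARF spec.
public class ArfSpiralSketch {
    static final double[] RATES_MBPS = {1.0, 2.0, 5.5, 11.0};

    public static int finalRateIndex(double failureProb, int frames, long seed) {
        Random rng = new Random(seed);
        int rate = RATES_MBPS.length - 1; // start at the fastest rate (11 Mbps)
        int failures = 0, successes = 0;
        for (int i = 0; i < frames; i++) {
            if (rng.nextDouble() < failureProb) { // rate-independent "collision"
                successes = 0;
                if (++failures >= 2 && rate > 0) { rate--; failures = 0; }
            } else {
                failures = 0;
                if (++successes >= 10 && rate < RATES_MBPS.length - 1) {
                    rate++; successes = 0; // upshift after a run of successes
                }
            }
        }
        return rate;
    }

    public static void main(String[] args) {
        // With no failures the station stays at 11 Mbps; with constant
        // collision-like failures it sinks to the slowest rate and stays there.
        System.out.println(RATES_MBPS[finalRateIndex(0.0, 100, 1)]); // 11.0
        System.out.println(RATES_MBPS[finalRateIndex(1.0, 100, 1)]); // 1.0
    }
}
```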
Neither the RTS/CTS ARF nor the CARA model suffered from the problems affecting the ARF and RRAA-Basic models, as these ARC algorithms have processes that distinguish (to some degree) between collisions and channel errors and are consequently less sensitive to an underestimated probability of collision.
The failure of ARF and RRAA-Basic in a contested channel indicates that an ARC algorithm that can distinguish between collisions and channel errors, such as CARA5 (with RTS/CTS ARF being less than ideal because of its high overhead), is preferable in a channel where many simultaneous VoIP calls are required to be supported.

5 Along with RRAA-Basic, [Wong 2006] also proposed an adaptive RTS/CTS filter that could be used to distinguish between channel errors and collisions. The performance of RRAA with the adaptive RTS/CTS filter is similar to that of CARA, however an analytic model for the adaptive RTS/CTS filter is not proposed in this dissertation and therefore its performance is not further discussed.

Figure 5.23: Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per frame using ARF).
Figure 5.24: Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per frame using RTS/CTS ARF).
Figure 5.25: Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per frame using CARA).
Figure 5.26: Predicted Goodput versus Analytic Goodput (20 ms G.711a Audio per frame using RRAA-Basic).
Chapter 6
Analysis
6.1 Overview
This chapter describes how the models investigated in the previous chapters can be
utilised to aid in the rapid deployment of wireless networks. It also discusses where
the research contained within this dissertation fits into the greater body of research
discussed primarily in the literature review, Chapter 2. More specifically, this chapter describes:
• The impact that propagation prediction models have on the “rapid deployment
of wireless networks”. This is achieved by taking the results presented in Chap-
ter 5, which demonstrated the relative performance of the propagation models in
an industrial environment, and describing their performance in terms of facilitat-
ing a rapid network deployment. How the newly proposed propagation models
complement those identified in literature and therefore, where these models (and
the experimental results gathered to examine them) fit into the greater body of
research, is also described.
• The impact that 802.11b MAC layer (VoIP capacity) models have on the “rapid
deployment of wireless networks”. Results showing the situations where these
models are useful were given in Chapter 5. How these situations relate to network
deployments in industrial environments and how these models can be used to aid
in the rapid deployment of wireless networks dedicated to carrying VoIP traffic are
discussed in this chapter. Where the VoIP capacity models fit into the greater body of research was described in detail during their derivation (Sections 3.8 – 3.15); however, an explicit overview and summary is also provided in this chapter.
• The impact that the Java Simulation Program has on the “rapid deployment of
wireless networks”. As an applied research project, the Java Simulation Program
was the primary outcome. It is a practical implementation of both the propagation
models and the VoIP capacity models and as such aids in the rapid deployment of
wireless networks in the same manner as both of these sets of models do. Features
implemented by the Java Simulation Program that complement these sets of models
are identified in this Chapter.
6.2 Impact of Propagation Models on the Rapid Deploy-
ment of Wireless Networks in Industrial Locations
The propagation models presented in this dissertation consisted of those explicitly found
in the literature and, additionally, extensions to those found in the literature. Care was
taken when conducting measurements and when proposing extensions, to always provide
a benchmark against which the results could be compared and validated.
The measurement campaign detailed in this dissertation covered three different sites.
It was informative to compare experimental results gathered during the measurement
campaign against similar results that appeared in the literature. When benchmarked
(using the simple power law) against experimentation conducted by [Rappaport 1989] at five different factories, it could be seen that Rappaport’s experiments indicated the overall average path-loss exponent in factories to be n = 2.18 (varying from 1.8 for line-of-sight (LOS) paths, through 2.4 for lightly obstructed paths, to 2.8 for heavily obstructed paths), whilst the measurement campaign detailed in this dissertation determined the average path-loss exponent in factories to be 2.07 (varying from 1.8 at the bottling plant, through 2.0 at the glass manufacturer, to 2.4 at the automotive plant). The similarity in the results reconfirmed the findings made by [Rappaport 1989]
and instilled a measure of confidence that the data used to test the propagation mod-
els were reasonable. The results obtained were also similar to those found in [Kjesbu
2000], which investigated the propagation characteristics of three different industrial
locations.
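For reference, the simple power-law model underlying these exponent comparisons can be sketched in its standard log-distance form. The reference loss and distances below are illustrative values, not measurements from the campaign described here:

```java
// Log-distance (simple power law) path-loss sketch:
//   PL(d) = PL(d0) + 10 * n * log10(d / d0)
// Standard textbook form; the reference loss pl0Db and the distances used
// in main() are illustrative values, not measurements from this work.
public class PowerLawSketch {
    public static double pathLossDb(double dMetres, double d0Metres,
                                    double pl0Db, double n) {
        return pl0Db + 10.0 * n * Math.log10(dMetres / d0Metres);
    }

    public static void main(String[] args) {
        // Doubling the distance with n = 2.07 adds 10 * 2.07 * log10(2) ≈ 6.2 dB.
        double pl10 = pathLossDb(10.0, 1.0, 40.0, 2.07);
        double pl20 = pathLossDb(20.0, 1.0, 40.0, 2.07);
        System.out.printf("%.1f dB -> %.1f dB%n", pl10, pl20); // 60.7 dB -> 66.9 dB
    }
}
```

Fitting such a model to a site amounts to choosing the exponent n (and reference loss) that minimise the error against the measured powers.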
The propagation models discussed in this dissertation were compared against each other,
with focus on how many measurements were required for a given level of precision. This
analysis was performed in order to assist in answering one of the key questions raised in
this doctoral research:
“How could the rapid deployment of wireless networks in industrial environments best be
facilitated?”
Based on an extensive review of the literature, both academic and commercial, it is believed that this was a novel analysis of otherwise well-known propagation models. The propagation model extensions proposed in this research were analysed in the same manner as the pre-existing propagation models, which allowed informed commentary to be made about
their relative performance and, consequently, their suitability for use in rapidly deploying
wireless networks.
Therefore, the propagation modelling sections of this dissertation fit into the greater body of research by both proposing extensions to well-known, pre-existing models and providing a novel analysis of both the well-known and the proposed models.
The primary contribution to propagation models in this dissertation was the development and subsequent investigation of a ray tracing algorithm that used empirical measurements to optimise material properties. It is therefore informative to see how
this approach fits into the greater body of research. The material optimisation method
used by the ray tracing algorithm was similar in spirit to that described in [Rappa-
port 1994]; however, there were a couple of notable differences. Specifically, [Rappaport
1994]’s method:
• Chose material properties that minimised the squared difference of measured versus
predicted power delay profiles (PDP) – see [Appendix 1]. This was a better choice,
with respect to accurately arriving at desired material parameters, than the raw
power measurements used by the ray tracer described in this dissertation, because,
as [Rappaport 1994] noted, the PDP incorporated more information about the am-
plitude and time delay of individual multipath components than simple path-loss
measurements. However, in the context of the rapid deployment of wireless net-
works in an industrial location use of the PDP was untenable, as its measurement
required the use of expensive, specialised equipment1 and thus overcomplicated a
process that was sought to be simplified.
• Assumed only one material type in the environment under investigation, whereas,
it was shown in this dissertation that material optimisation was successful even
when many different material types were present.
1 While the measurements documented in this dissertation were collected using custom-built transmitter and receiver equipment, similar results can be obtained by walking around the site with a WiFi-enabled laptop.

Finally, the results presented in Chapter 5 provided information about “rule of thumb” conditions under which specific models were useful. From the results, it could be seen that:
• If a small number of measurements were available and ray tracing was not fea-
sible, or there was no recorded information about the location of objects in the
environment, then statistical methods were preferable;
• If information about the environment and a large number of measurements were
available and ray tracing was not feasible, then the pseudo-deterministic methods
were preferable; and,
• If information about the environment was available and ray tracing feasible, then
ray tracing was preferable.
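The three rules of thumb above can be condensed into a small selection routine. This is purely illustrative; the class and enum names are invented for this sketch and are not taken from the Java Simulation Program:

```java
// Illustrative encoding of the three rule-of-thumb conditions above.
// The names here are invented for this sketch, not taken from the
// Java Simulation Program described in the dissertation.
public class ModelSelectorSketch {
    enum Model { STATISTICAL, PSEUDO_DETERMINISTIC, RAY_TRACING }

    static Model choose(boolean environmentKnown, boolean manyMeasurements,
                        boolean rayTracingFeasible) {
        if (environmentKnown && rayTracingFeasible) return Model.RAY_TRACING;
        if (environmentKnown && manyMeasurements) return Model.PSEUDO_DETERMINISTIC;
        return Model.STATISTICAL; // few measurements or no environment data
    }

    public static void main(String[] args) {
        System.out.println(choose(true, false, true));   // RAY_TRACING
        System.out.println(choose(true, true, false));   // PSEUDO_DETERMINISTIC
        System.out.println(choose(false, false, false)); // STATISTICAL
    }
}
```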
The feasibility of ray tracing was primarily related to its computational complexity. On a 3.00 GHz Pentium 4 with 512 MB of RAM, estimating material properties and generating a heat map using ray tracing could, depending on the complexity of the environment under investigation, take multiple hours. If results were needed faster, then ray tracing could be considered “not feasible”.
Requiring a few hours to run a simulation is rarely a major issue, as the physical installation of a wireless network can take many days or weeks (depending on scale); a small increase in lead time is therefore often inconsequential. For the person planning the network deployment, however, it can be useful to consider many different transmitter placements: obtaining a rough idea of their performance using the faster models (allowing those that are obviously unsuited to be rapidly discarded) and then, once a suitable placement appears to have been found, verifying it more accurately using the ray tracing algorithm.
6.3 Impact of MAC Models on the Rapid Deployment of
Wireless Networks in Industrial Locations
The VoIP capacity models presented in this dissertation were investigated under a num-
ber of different conditions, and while far from exhaustive, the methodology presented
should allow the motivated reader to easily evaluate VoIP under many other conditions
(different modulation schemes, VoIP codecs etc).
The VoIP capacity models differed from the propagation models in that they were much less general, being useful only in predicting performance when the network was to be used for a specific task. They were included as part of this dissertation because they were informative in (and of) themselves and because they demonstrated how the propagation results could be leveraged to provide more information about a network’s performance.
The inclusion of the VoIP capacity models in the Java Simulation Program demonstrated
how such models could be readily integrated into such a tool and showed how the sta-
tistical models could be related to real world network deployments and thus aid in the
rapid deployment of wireless networks in an industrial environment.
The statistical accuracy of the VoIP models proposed in this dissertation was validated
by comparing their predictions with those of a Matlab simulation. It should be noted
that the Matlab simulation only tested the case where two saturated stations were trans-
mitting (an approximation to n unsaturated nodes vying for access to the channel). The
validity of this approximation was verified with results obtained from the network sim-
ulator, GloMoSim.
The VoIP capacity models presented in this dissertation fitted into the greater body of
research in this field by taking the research performed by [Garg 2003] and extending it
using the Markov chain model presented by [Wu 2002]. Work similar to that presented in
this dissertation was presented by [Chatzimisios 2004], which focused only on goodput in noisy wireless channels and did not relate the analysis to the different 802.11 modulation schemes, present results specific to the network’s performance when used for VoIP, or consider a non-saturated channel. The analytic results presented in Chapter 5 appeared to match the experimental results presented in [Hole 2004], which showed VoIP capacity to drop off when the BER was greater than 10^-5.
Finally, as was demonstrated at the end of Section 5.4.9, it was straightforward to obtain similar results to those presented in Sections 5.4.2 – 5.4.7 through simulation. Given that results obtained through simulation were more accurate than those obtained using analytic models (as the analytic models were derived from simplifying assumptions), the curious reader might inquire as to what purpose the models presented in Sections 3.10 – 3.15 served in facilitating the rapid deployment of wireless networks in an industrial environment. The answer centres on how the VoIP cell (Section 4.12) functionality was
incorporated into the Java Simulation Program (Section 4.7). As shown by Figures 5.21
and 5.22, the analytic results and the simulated results were roughly the same (espe-
cially when used to calculate the number of active VoIP calls that could be supported
by a given channel). However, network simulations were a computationally expensive
and time consuming endeavour, whereas the analytic expressions were quick to evalu-
ate. Therefore, when implementing the VoIP cell functionality into the Java Simulation
program, there were a number of different approaches to consider:
• Numerical data could be collated from network simulations conducted prior to im-
plementation in the Java Simulation Program and used to plot the VoIP cells. This
would provide near-real-time feedback to the end user, but have the disadvantage
of limiting network configurations to those simulated prior to implementation.
• For each different network configuration a simulation could be run (using a net-
work simulator incorporated into the Java Simulation Program) to predict VoIP
capacity. This would allow network configurations that had not been considered
prior to implementation to be investigated by the end user; however, computational overhead would prevent results from being available in real time, limiting the ability to rapidly evaluate many different network configurations.
• Analytic approximations could be used (as was the approach taken in this disser-
tation). This allowed both network configurations that were not considered prior
to implementation to be evaluated and allowed results to be presented to the end
user in near real-time, allowing many different network configurations to be quickly
evaluated and thus facilitating the rapid selection of an optimal configuration for
the required specifications.
Therefore, for the reasons given above, it was decided that the analytic approximations proposed in this dissertation were the approach best suited to facilitating the rapid deployment of wireless networks.
6.4 Impact of Simulation on the Rapid Deployment of Wire-
less Networks in Industrial Locations
A large portion of this research program was dedicated to the development of a simulation program written in Java. One of the primary goals of this dissertation was to shed light on how the models incorporated in the simulation operated: to instil faith in the reader that they were based on sound engineering principles, to demonstrate when the various models should be used, and to provide an appreciation of what the resulting output actually meant.
The Java Simulation Program was designed both to facilitate academic investigation into the operation of different propagation models and to be robust and functional enough to be useful in practical, real-world situations. Indeed, many of the site surveys conducted during the course of this project were in response to practical industry requests for help in deploying or troubleshooting wireless networks, in which the Java Simulation Program was used to great effect.
While there were many propagation simulation programs on the market when this re-
search was conducted, none, to the best of the author’s knowledge, provided the same
level of functionality with respect to the deployment and provisioning of VoIP over wire-
less networks.
As described in Section 2.12, computer-aided deployment of wireless networks has a number of advantages over the typical hit-and-miss approach of placing an Access Point and then physically measuring its coverage to ensure that it meets the desired specifications.
It is common for an industrial location to change substantially over time (as new machinery is brought in and old machinery moved or discarded). These changes in layout alter the propagation characteristics of the channel and can result in diminished coverage where coverage was once more than adequate. Using the Java Simulation Program, these changes in network coverage can be estimated ahead of time, easily visualised in the form of a heat map, and steps taken to minimise their impact.
The VoIP capacity functionality of the simulation combined both the propagation prediction models described in Sections 3.3 – 3.6 and the analytic MAC layer VoIP capacity models described in Sections 3.10 – 3.15. This functionality allows regions (cells) able to support a given number of simultaneous users under worst-case scenarios (e.g., when the maximum number of supported users are actively engaged in VoIP calls and all are at the perimeter of the cell, thus experiencing the highest permissible BER) to be calculated. As with the heat maps, VoIP cells allowed the impact that changes in the environment would have on perceived VoIP performance to be easily predicted.
Chapter 7
Conclusion
7.1 Overview
This, the final chapter of the dissertation, provides an overview of the arguments, ideas and results presented in the previous six chapters.
The research contributions of this doctoral project, as documented in this dissertation, comprised five distinct, but complementary, components. These were:
• A review of the current state of wireless propagation prediction and 802.11b DCF
modelling;
• The development of measurement equipment to facilitate path-loss measurements;
• Investigation into the performance of new and existing path-loss models;
• Investigation into the performance of 802.11b’s DCF and the impact that this has
on VoIP over such a network; and,
• The development of a robust tool that implements the models investigated and
developed during the course of this research project.
This chapter seeks to demonstrate how the tools developed and the research undertaken helped to facilitate the major theme of this research project – “The Rapid Deployment of Wireless Networks in an Industrial Environment”.
Finally, it explicitly outlines the new knowledge generated during the course of this
research project, discusses the limitations of the research and proposes areas for future
study.
7.2 Literature Review Findings
Chapter 2 contained a Literature Review, in which the current state of knowledge in
the field of wireless propagation prediction and the simulation of 802.11b’s DCF was
presented. Information was sourced from texts, conference proceedings, refereed journal
publications and the Internet. The literature review began with coverage of propagation
prediction techniques relevant to wireless networks in industrial locations. Specifically,
propagation prediction models were divided into three distinct categories:
• Statistical propagation models;
• Pseudo-Deterministic propagation models; and,
• Deterministic propagation models.
When discussing Deterministic propagation models, the primary focus was on Ray Tracing, which was discussed in detail. Firstly, differences between Image Based Ray
Tracing and Brute Force Ray Tracing were identified. As it had been decided that a
Brute Force Ray Tracer would be implemented in the simulation, several issues specific
to the implementation of a Brute Force Ray Tracer were then identified. These included
techniques to combat aliasing, such as:
• Reception sphere;
• Method of distributed wavefronts; and,
• Modified reception sphere.
Methods to facilitate Ray Launching, such as:
• Regular angular increments;
• Geodesic spheres; and,
• Rakhmanov’s spiral method.
were also discussed. The pros and cons of each technique as found in the literature were
presented in the discussion. This was important as these issues constituted some of the
major design decisions that needed to be considered when implementing a ray tracer for
the purposes of propagation prediction.
Following on from the propagation prediction techniques, an overview of the relevant
portions of the 802.11b standard (as it had been decided, due primarily to its pop-
ularity, that this was the wireless standard the research project would focus on) was
presented. Namely, this section discussed 802.11b’s Physical layer, with respect to how
much overhead was introduced and how the different modulation techniques affected
the bit error rate (BER) for a given signal-to-noise ratio (SNR). 802.11b’s MAC layer
was also discussed, with respect to how much overhead it introduced and techniques for
modelling this overhead in the presence of a number of different factors (such as number
of stations vying for access to the channel, channel BER, etc.).
The literature review added to the current field of knowledge by providing an up-to-date summary of what was considered fundamental knowledge and of what was of interest to those researching and working in the field of wireless network deployment and provisioning at the time this dissertation was written. Finally, it sought to provide a solid foundation of research upon which the applied research conducted, and the propagation simulation developed, could be based.
7.3 Path-loss Measurement Equipment
As described in Sections 4.2, 4.3 and 4.4, custom designed transmitting and receiving
equipment was developed during the course of this project. The primary aim of this
equipment was to automate the laborious task of sector averaging, which in the early
stages of the project had been done manually by repeatedly placing an antenna into
different lengths of regularly spaced PVC piping glued together to form a hexagon.
While not a revolutionary, or even particularly novel, idea (many other researchers, such as [Honcharenko 1995], used similar set-ups; the automation of sector averaging ostensibly becomes an appealing idea in the face of performing it manually), the measurement equipment was essential in allowing a large number of measurements to be gathered and collated electronically.
7.4 Propagation Models
A large portion of this Doctoral research and dissertation was dedicated to the investi-
gation and development of a number of different path-loss models.
Three such models, taken from the literature because they were representative of stan-
dard propagation prediction practices, were:
• Friis Freespace Equation (Equation 3.3);
• Simple Power Law (Equation 3.1); and,
• Partition Based Path-loss (Equation 3.10).
Two variations on these models were proposed in this dissertation:
• Aisle Based Path-loss (Equation 3.4) – a modification of the simple power law, to
include two path-loss exponents; and,
• Partition Based Path-loss with a Path-loss Exponent (Equation 3.13), a simple
modification to the Partition Based Path-loss model presented in [Durgin 1998].
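The partition-based variant can be sketched in a generic form consistent with the models listed above: a log-distance term plus a fixed loss for each partition crossed along the direct path. The attenuation values used below are illustrative placeholders, not the fitted parameters from this dissertation:

```java
// Partition-based path loss with a path-loss exponent (generic sketch):
//   PL(d) = PL(d0) + 10 * n * log10(d / d0) + sum of per-partition losses
// along the direct transmitter-receiver path. The attenuation values used
// below are illustrative placeholders, not fitted values from this work.
public class PartitionPathLossSketch {
    public static double pathLossDb(double dMetres, double d0Metres,
                                    double pl0Db, double n,
                                    double[] partitionLossesDb) {
        double loss = pl0Db + 10.0 * n * Math.log10(dMetres / d0Metres);
        for (double partition : partitionLossesDb) {
            loss += partition; // each wall/machine crossed adds a fixed loss
        }
        return loss;
    }

    public static void main(String[] args) {
        // 15 m path crossing two partitions of 3 dB and 5 dB, with n = 2.0.
        double pl = pathLossDb(15.0, 1.0, 40.0, 2.0, new double[] {3.0, 5.0});
        System.out.printf("%.1f dB%n", pl); // 71.5 dB
    }
}
```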
Finally, a ray tracing implementation was also developed. The ray tracing model was
novel in that it allowed material parameters to be determined based on empirical power
measurements.
Accuracy went a long way towards determining how satisfactorily a given model performed; however, it was not the only measure that could be used. In the context of “The Rapid Deployment of Wireless Networks in an Industrial Location”, the speed at which a model could be generated (i.e., appropriate parameters selected, especially in the absence of a pre-existing database describing how certain materials behaved – a reasonable assumption given the eclectic assortment of machinery, equipment and stock that could be found in a typical industrial environment) was also very important. It was to this end, as opposed to absolute precision, that the propagation models were evaluated. It was found, unsurprisingly, that the simpler (statistical) models converged faster, but to a comparatively higher MSE than the more complex (pseudo-deterministic) models.
These models were developed and assessed primarily to provide a benchmark against which the ray tracing simulation, the major contribution to propagation prediction presented in this dissertation, could be compared. This comparison was performed in a similar manner to that used for the aforementioned propagation prediction models.
The claims made in this dissertation were not attempting to assert that the ray tracing
implementation was the most precise way of doing things; it most certainly was not.
The primary claim put forth, was simply this:
If the dielectric properties of materials were free to be optimized, much in the
same manner that the path-loss exponent was chosen for the simple power
law, or the linear attenuation terms chosen for partition based path-loss,
then ray tracing quickly converged to a low MSE (as compared to the other
models used to benchmark the ray tracing model), even if the original material
parameters were chosen at random.
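The optimisation described in the claim above can be sketched as a simple iterative search. This is illustrative only: a bracketing search over a single scalar parameter stands in for the dielectric properties, and a toy quadratic error surface stands in for running the real ray tracer at each candidate value; only the search procedure itself is the point:

```java
import java.util.function.DoubleUnaryOperator;

// Sketch of optimising a material parameter against empirical measurements:
// repeatedly shrink a bracket around the parameter value that minimises the
// mean squared error between predicted and measured power. The quadratic
// "error surface" below is a stand-in for invoking the real ray tracer at
// each candidate value; only the search procedure itself is the point here.
public class MaterialFitSketch {
    // Ternary search for the minimiser of a unimodal error function.
    public static double fit(DoubleUnaryOperator mse, double lo, double hi) {
        for (int i = 0; i < 200; i++) {
            double m1 = lo + (hi - lo) / 3.0;
            double m2 = hi - (hi - lo) / 3.0;
            if (mse.applyAsDouble(m1) < mse.applyAsDouble(m2)) hi = m2;
            else lo = m1;
        }
        return (lo + hi) / 2.0;
    }

    public static void main(String[] args) {
        // Toy surface with its minimum at a "true" parameter value of 4.5.
        DoubleUnaryOperator toyMse = eps -> (eps - 4.5) * (eps - 4.5);
        double fitted = fit(toyMse, 1.0, 10.0); // starts from an arbitrary range
        System.out.printf("%.3f%n", fitted);    // 4.500
    }
}
```

Starting from an arbitrary bracket mirrors the claim that the material parameters could initially be chosen at random.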
Therefore, the ray tracing model described in this dissertation, while being the most computationally intensive algorithm, and thus the slowest to execute, facilitated the Rapid Deployment of Wireless Networks in an Industrial Environment (as defined in Section 1.8) by providing a propagation prediction model that did not require a pre-existing database of material properties and that was shown to require only a small number of on-site measurements, relative to the other models, to determine what the material properties should be.
7.5 Stochastic DCF Models
In Sections 3.11 – 3.15, extensions were proposed to [Garg 2003]’s VoIP capacity model
allowing it to model a noisy channel. These models were then used to analyse the
capacity of VoIP networks on 802.11b links for a variety of different situations such
as:
• VoIP capacity for varying SNR values (Section 5.4.2);
• VoIP capacity for varying packet sizes (Section 5.4.2);
• VoIP capacity for different data rates (noting that different data rates used dif-
ferent modulation techniques and thus had different BERs for a constant SNR)
(Section 5.4.2);
• VoIP capacity for Basic Access and RTS/CTS DCF (Section 5.4.3);
• VoIP capacity with long preamble versus short preamble (Section 5.4.4);
• VoIP capacity when Robust Header Compression (ROHC) was used (Section 5.4.5);
• VoIP capacity when different audio codecs were used (Section 5.4.6);
• VoIP capacity when ARF was used (Section 5.4.7);
• VoIP capacity when RTS/CTS ARF was used (Section 5.4.7);
• VoIP capacity when CARA was used (Section 5.4.7); and,
• VoIP capacity when RRAA-Basic was used (Section 5.4.7).
It was, perhaps unsurprisingly, revealed through this round of modelling that:
• The short preamble and ROHC provided a small but consistent boost to the num-
ber of VoIP calls a channel could simultaneously support;
• RTS/CTS reduced the damage incurred during a collision, however the overhead
introduced because of the need to send RTS and CTS frames before any transmis-
sion outweighed any benefit;
• Codecs that compressed the voice signal to a greater degree allowed a larger number
of simultaneous VoIP calls to be realised; and,
• An ARC scheme that had minimal overhead, but still provided some ability to
distinguish between collisions and channel errors, such as CARA (or RRAA using
the adaptive RTS/CTS filter as described in [Wong 2006]), was best suited for a
channel expected to carry many simultaneous VoIP calls.
The accuracy of the analytic models was demonstrated through simulation, as described in Section 5.4.9.
7.6 Simulation Program
The simulation program, described in Section 4.7, was the primary outcome of the applied research described in this dissertation. It served to bring all of the models discussed and evaluated during the course of this project under one umbrella, and demonstrated how each of them participated in facilitating what has been the primary focus of this dissertation: the rapid deployment of wireless networks in an industrial location. The core argument presented in this dissertation was a simple one:
Computer-aided deployments are preferable to manually performed site surveys, as they are faster, cheaper and require fewer resources to perform (a fairly intuitive statement, and one supported by the large number of tools available that perform just such a task – see Section 2.12).
As discussed in Section 2.12, LANPlanner [Motorola 2006] from Motorola (formerly Site Planner from Wireless Valley), one of the most popular propagation prediction tools, used a True Point-to-Point, Multiple Breakpoint Path Loss Exponent Model to compute path-loss. This model required each material in the environment to be explicitly specified, a process also typical of many other propagation prediction programs.
Anecdotal evidence suggested that specifying individual material types was a laborious,
time-consuming process, further complicated by large machinery, with hard to define
material types, common in industrial locations.
Therefore, while computer-aided propagation prediction could aid in the deployment of wireless networks, in some situations the need to specify material types could cause inefficiencies in this process.
It was with these issues in mind, as opposed to absolute accuracy and precision, that the simulation program was developed. The simulation program provided all the benefits of computer-aided propagation simulation without the need to rely on a large database of pre-existing material types. That being said, there is no reason the simulation program could not also utilise an extensive database of material types (providing the best of both worlds), should the end user so desire.
The ‘VoIP capacity’ functionality of the simulation program served both as a useful tool
to aid in deploying wireless networks that supported VoIP and as a demonstration of the
benefit of calculating the propagation characteristics of an environment in detail, as
opposed to merely identifying the limits of acceptable coverage.
7.7 Limitations and Future Research
Numerous outcomes were achieved through the course of this research program: the
development of a number of new propagation models, more detailed VoIP capacity models,
and a simulation program suitable for use in the deployment of real-life wireless
networks. However, there is always more work that could be done. Suggestions for this
include:
• The ray tracing model implemented was relatively basic: it did not model diffraction
or scattering. Although this was a conscious decision (these phenomena cause
one ray to turn into many and thus make the simulation much more processor
intensive), more research could be conducted to see how the implementation
of these phenomena affects the ray tracing algorithm’s ability to converge
rapidly to a low MSE.
• The VoIP functionality of the simulation program only estimated capacity as op-
posed to other measures such as quality (which could be measured using the Mean
Opinion Score). It would be interesting to investigate how noisy channels and
contention affected the quality of carried voice calls, especially in the presence of
different Forward Error Correction schemes.
• Finally, a large portion of the literature review was devoted to discussing methods
for countering aliasing and methods for launching rays. The ray tracing algorithm
described in this dissertation incorporated Durgin’s method of distributed wavefronts
and Yun’s modified reception sphere to counter aliasing, and could launch
rays through the centre of a geodesic sphere or from the spiral points generated
using Rakhmanov’s method. This discussion took place to provide an insight into
the design decisions that needed to be made when implementing the ray tracer.
Little was said about the comparative accuracy of such methods, as it was felt that
this would detract from the main theme of the thesis; however, this remains an
interesting avenue of research.
7.8 Closing Comments
As noted in [Toncich 1999], industry sponsored research (as opposed to purely academic
research) is a difficult yet rewarding challenge. It is one that requires collaboration,
negotiation and most of all compromise.
There are many reasons why industry sponsored research is challenging, not least of
which is the dichotomy between the specific goals of the industry partner (who is, above
all things, trying to make money) and the specific goals of the research student (who
is, above all things, trying to rigorously test the limits of a novel idea). The need
to satisfy the desires of both parties plays a large role in determining the ultimate
direction that the research follows, invariably meaning that it will focus on a practical
(yet academically interesting) problem, grounded by what the industry partner considers
to be a worthwhile pursuit.
There are also many reasons why industry sponsored research is rewarding; one area
where it truly shines is in the collaboration between industry and university. This
can expose the research student to a wide range of different situations, people and
places, instilling an appreciation for how the research will be used in a commercial
environment.
In the specific case of this research project, the industry partner was a small company,
Thin-ICE, that worked closely with the Australian manufacturing industry, providing
services including consultation, engineering design, integration, training and support.
The impetus for the project began when Thin-ICE was asked to assist in the deploy-
ment of a wireless network at a major automotive factory in Melbourne, Australia. To
most effectively meet the demands of their automotive partner, Thin-ICE began a col-
laboration with Swinburne University and the CRC-IMST, resulting in the formation of
a research project dedicated to investigating the behaviour of wireless networks at this
particular automotive factory (and later, industrial locations at large).
Through a time-consuming process of manual site-surveying, the wireless network was
successfully deployed. With this success under their belt and other companies requiring
similar services, Thin-ICE saw the deployment of wireless networks in factories to be an
increasingly popular trend. To help them expand into this area, Thin-ICE decided that
focusing the direction of the research project on developing tools that would help them
efficiently plan and deploy wireless networks in industrial locations was very desirable.
This is how the direction of research was selected: a practical problem, grounded by
what Thin-ICE considered to be a worthwhile pursuit.
One of the major difficulties that existed with this chosen direction (in the eyes of a
doctoral research student) was the sheer amount of prior research that had already
been conducted on this specific topic (obvious from the literature review, and the large
number of propagation prediction tools that had previously been developed) coupled
with the need to develop a novel idea that still adequately met the needs of the industry
partner.
Through Thin-ICE, opportunities arose to visit many different industrial sites and
participate in the deployment of a range of different wireless networks. This practical
exposure helped to identify what specific problems presented themselves when deploying
a wireless network and what tools/functionality would be of most use to the industry
partner in combating these problems. This experience more than anything helped to shape
what specific features were incorporated into the simulation program and justifies the
claims made during the course of this dissertation that the simulation program was
designed for both academic investigation and real-world usage.
Towards the end of the research project, the simulation tool developed was being
successfully used by a number of different people (some whose only relation to the
research project was an association with Thin-ICE) as an aid in the deployment of
wireless networks. In the eyes of Thin-ICE, the research project was a success: they
were provided with a useful tool that helped them conduct their business. Such anecdotal
evidence, however, holds little sway in the world of the research student, who requires
rigorous evidence to substantiate his or her claims.
To try and provide such rigorous evidence is the purpose of this dissertation. It has
attempted to provide detailed evidence about how and why things closely related to the
core ideas were done (such as the specifics of implementing the ray tracing algorithm)
and avoided talking in depth about problems that were further removed from the core
ideas (such as the specifics of importing site data using CAD floor plans). It has at-
tempted to demonstrate that despite the sheer amount of pre-existing research, novel
ideas were proposed and their limits explored. Finally, it has attempted to support the
claims made with empirical as opposed to anecdotal evidence. It is therefore hoped that
this dissertation has provided enough evidence and detail to both convey the intended
narrative (i.e., to show how and why the simulation program was developed), and to
meet the required academic rigor (i.e., to demonstrate that design decisions made along
the way were based on sound engineering principles backed by prior research and that
the proposed ideas functioned as described).
Appendices
Appendix 1 – Definition of Technical Terms
A.1 Overview
Appendix 1 contains brief definitions and equations for terms that were not fully ex-
plained in the main body of this thesis as they were only tangentially related to the
focus of the investigation.
A.2 Bicubic Interpolation
Bicubic interpolation, used when low-resolution heat maps were generated, allowed an
image to be enlarged while reducing the effects of aliasing on the scaled image.
The algorithm described here is based on that described by [Bourke 2001].
If there were two images, the original source image and the scaled destination image,
where:
• (i, j) – represented the location of a pixel in the source image; and,
• (i′, j′) – represented the location of a pixel in the destination image.
If the source image had dimensions h and w (height and width) and the destination
image had dimensions h′ and w′, then a point in the destination image could be given
by:

\[ i' = \frac{i w'}{w}, \qquad j' = \frac{j h'}{h} \]

Where the division was integer division (that is, the remainder was ignored).
There would also exist a linear scaling relationship between the two images; that is,
pixel (i′, j′) in the destination image would correspond to a non-integer position in
the source image:

Figure A.1: Mapping of pixels from destination image to source image.

\[ x = \frac{i' w}{w'}, \qquad y = \frac{j' h}{h'} \]
The nearest pixel coordinates in the source image would be the integer components of x
and y, and the differences could be written as:

\[ dx = x - i, \qquad dy = y - j \]
Typically the following interpolation formula would be applied to each of the red, green
and blue components of the image; however, the heat map described in the main body
of the dissertation only required one parameter, therefore the formula only needed to
be applied once.
The interpolated value of the pixel (i′, j′) would be given by:

\[ F(i', j') = \sum_{m=-1}^{2} \sum_{n=-1}^{2} F(i+m,\, j+n)\, R(m - dx)\, R(dy - n) \]
Where:

\[ R(x) = \frac{1}{6}\left[ P(x+2)^3 - 4P(x+1)^3 + 6P(x)^3 - 4P(x-1)^3 \right] \]

\[ P(x) = \begin{cases} x, & x > 0 \\ 0, & x \leq 0 \end{cases} \]
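The procedure above can be sketched in Python for a single-channel image such as a heat map (function and variable names are illustrative only, and are not taken from the simulation program):

```python
def cubic_weight(x):
    """R(x) from above: a cubic B-spline kernel built from P(x) = max(x, 0)."""
    P = lambda v: max(v, 0.0) ** 3
    return (P(x + 2) - 4 * P(x + 1) + 6 * P(x) - 4 * P(x - 1)) / 6.0

def bicubic_scale(src, new_h, new_w):
    """Scale a 2D list of floats (one channel, e.g. a heat map) to new_h x new_w."""
    h, w = len(src), len(src[0])
    dst = [[0.0] * new_w for _ in range(new_h)]
    for jp in range(new_h):        # j' indexes rows (height)
        for ip in range(new_w):    # i' indexes columns (width)
            x = ip * w / new_w     # non-integer column position in the source
            y = jp * h / new_h     # non-integer row position in the source
            i, j = int(x), int(y)  # nearest source pixel (integer components)
            dx, dy = x - i, y - j
            val = 0.0
            for m in range(-1, 3):
                for n in range(-1, 3):
                    # clamp neighbour coordinates at the image border
                    ii = min(max(i + m, 0), w - 1)
                    jj = min(max(j + n, 0), h - 1)
                    val += src[jj][ii] * cubic_weight(m - dx) * cubic_weight(dy - n)
            dst[jp][ip] = val
    return dst
```

Because the kernel weights sum to one for any fractional offset, a constant image scales to a constant image, which is a convenient sanity check.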
A.3 Coherence bandwidth
Coherence Bandwidth (Bc) was a dual representation of the delay spread in the frequency
domain. It specified a frequency range over which the channel would produce similar
attenuation and a linear change in phase.
For 90% correlation of the amplitudes of the frequency components, the coherence
bandwidth could be approximated as [Lee 1993]:

\[ B_c \approx \frac{1}{50\sigma}, \quad \text{or, for 50\% correlation,} \quad B_c \approx \frac{1}{5\sigma} \]
A.4 Gaussian Distribution Q Function
The Gaussian Distribution Q-Function gave the probability that a zero mean Gaussian
random variable with variance equal to one would have a value greater than x. It could
be written as:
\[ Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt \]
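Numerically, Q(x) can be evaluated through the complementary error function, a standard identity not stated in the text; the sketch below is purely illustrative:

```python
import math

def gaussian_q(x):
    """Tail probability of a zero-mean, unit-variance Gaussian: Q(x) = P(X > x)."""
    # Standard identity: Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

For example, Q(0) = 0.5 by symmetry, and Q(−x) + Q(x) = 1.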
A.5 Intersymbol Interference
Intersymbol Interference, ISI, was caused when time-delayed multi-path signals arrived
during the following symbol. A good rule of thumb for a low bit error rate (BER) was
that:
\[ R < \frac{1}{2\tau} \]
Where:
R is the data rate; and,
τ is the time (after reception of the first signal) that it took the delay spread to
drop below a certain threshold [Hashemi 1993].
A.6 Lognormal Distribution
This was one of the most popular distributions for explaining slow fading. A simple
justification for using the lognormal distribution was that given multiple reflections in
a multipath environment one could characterize the process as being multiplicative.
In the same manner as a large number of random additive processes gave rise to a
normal distribution (via the central limit theorem), a large number of random
multiplicative processes gave rise to a lognormal distribution.
The lognormal pdf could be expressed as:
\[ P_r(r) = \frac{1}{\sqrt{2\pi}\,\sigma r}\, e^{-\frac{(\ln r - \mu)^2}{2\sigma^2}}, \quad r > 0 \]
It was known as the lognormal distribution because log(r) is normally (Gaussian) dis-
tributed.
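The multiplicative argument can be illustrated numerically: the log of a product of many independent random gains is a sum of independent terms, so it tends toward a normal distribution. This is a toy sketch with an arbitrary gain range, not data from any measurement:

```python
import math
import random

random.seed(42)

def multiplicative_sample(n_factors=50):
    """Product of many independent random 'gains', one per reflection."""
    prod = 1.0
    for _ in range(n_factors):
        prod *= random.uniform(0.5, 1.5)  # arbitrary illustrative gain range
    return prod

# Taking logs turns each product into a sum of 50 i.i.d. terms, so by the
# central limit theorem the log-values look approximately normal, i.e. the
# products themselves are approximately lognormal.
logs = [math.log(multiplicative_sample()) for _ in range(5000)]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
```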
A.7 Marcum Q Function
The Marcum Q Function was a function that arose in the performance analysis of some
communication systems. It could be written as:
\[ Q(a, b) = \int_b^{\infty} x\, e^{-\frac{x^2 + a^2}{2}}\, I_0(ax)\, dx \]
A.8 Maximum Excess Delay
Maximum Excess Delay, X, was the length of time that multipath signals were being
received with power above a specified threshold.
A.9 Mean Excess Delay
Mean Excess Delay was the first moment of the power delay profile, and was defined
as:
\[ m_\tau = \frac{\sum_\nu P(\tau_\nu)\, \tau_\nu}{\sum_\nu P(\tau_\nu)} \]
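As a sketch, the mean excess delay, together with the RMS delay spread σ used by the coherence bandwidth approximation in Section A.3, can be computed from a sampled PDP (the two-path profile below is invented purely for illustration):

```python
import math

def mean_excess_delay(taus, powers):
    """First moment of the power delay profile: sum(P * tau) / sum(P)."""
    return sum(p * t for p, t in zip(powers, taus)) / sum(powers)

def rms_delay_spread(taus, powers):
    """Square root of the second central moment of the PDP."""
    m1 = mean_excess_delay(taus, powers)
    m2 = sum(p * t * t for p, t in zip(powers, taus)) / sum(powers)
    return math.sqrt(m2 - m1 * m1)

# Illustrative two-path profile: equal power at 0 ns and 100 ns
taus = [0.0, 100e-9]
powers = [1.0, 1.0]
sigma = rms_delay_spread(taus, powers)   # 50 ns
bc_90 = 1.0 / (50.0 * sigma)             # ~400 kHz, the 90% figure from A.3
```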
A.10 Modified Bessel Function of the First Kind
Solutions to the differential equation:

\[ x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2) y = 0 \]

were known as Bessel functions. Solutions that were finite at x = 0 were known as
Bessel Functions of the First Kind. Solutions where the argument x was purely imaginary
gave rise to the Modified Bessel Functions. The Modified Bessel Function of the First
Kind could be written as:

\[ I_n(z) = \frac{1}{\pi} \int_0^{\pi} e^{z \cos\theta} \cos(n\theta)\, d\theta \]
A.11 Nakagami Distribution
The Nakagami [Nakagami 1960] (also called the m-distribution) pdf in terms of r could
be expressed as:
\[ P_r(r) = \frac{2 m^m r^{2m-1}}{\Gamma(m)\, \Omega^m}\, e^{-\frac{m r^2}{\Omega}}, \quad r > 0,\ m \geq \frac{1}{2} \]
Where:
Γ(m) was the Gamma function;
Ω = E[r2]; and,
\[ m = \frac{E[r^2]^2}{\operatorname{Var}[r^2]} \]
A.12 Power Delay Profile
When a signal propagated from a transmitter to a receiver, the signal could suffer one or
more reflections, which forced the signal to follow paths of different lengths. Given that
each ray was travelling at a constant speed (the speed of light) and the lengths of the
paths were different, one could infer that the different copies of the signal would arrive
at the receiver with different delays. A graph of received power versus delay was known
as the power delay profile (PDP).
A.13 Rayleigh Distribution
The Rayleigh Distribution was a popular model for describing small-scale rapid ampli-
tude fluctuations in the absence of a strong received component (i.e., no line of sight).
It assumed that all arriving signals suffered roughly the same attenuation and that their
phases were uniformly distributed over [0, 2π). This distribution was first investigated
by Lord Rayleigh [Rayleigh 1880].
The Rayleigh pdf could be expressed as:
\[ P_r(r) = \frac{r}{\sigma^2}\, e^{-\frac{r^2}{2\sigma^2}}, \quad r > 0 \]

Where:
σ is the most probable value, known as the Rayleigh parameter;
σ√(π/2) is the mean; and,
(2 − π/2)σ² is the variance.
The popularity of this distribution in describing multipath fading could be attributed
to ‘its occasional empirical validation and its elegant theoretical explanation’ [Hashemi
1993].
A.14 Rician Distribution
This occurred when a strong received component was present in addition to the attenu-
ated Rayleigh distributed scattered paths. This strong component may be a LOS path,
or just a multipath that was attenuated less than the others. The Rician pdf was first
proposed by Rice [Rice 1944] and could be expressed as:
\[ P_r(r) = \frac{r}{\sigma^2}\, e^{-\frac{r^2 + \nu^2}{2\sigma^2}}\, I_0\!\left(\frac{r\nu}{\sigma^2}\right), \quad r > 0 \]
Where:
I0 is the zeroth-order modified Bessel function of the first kind;
ν is the magnitude of the strong received component; and,
σ² is proportional to the power of the more attenuated multipaths.
It could be clearly seen that as ν, the magnitude of the strong received component,
approached 0, the Rician pdf approached the Rayleigh pdf.
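This limit can be checked numerically; the sketch below evaluates I0 through the integral form given in Section A.10 and confirms that the Rician pdf with ν = 0 reduces to the Rayleigh pdf of Section A.13 (function names are illustrative):

```python
import math

def bessel_i0(z, steps=2000):
    """I0(z) via the integral form: (1/pi) * integral of e^{z cos t} over [0, pi]."""
    h = math.pi / steps
    # midpoint rule; the integrand is smooth so this converges quickly
    return sum(math.exp(z * math.cos((k + 0.5) * h)) for k in range(steps)) * h / math.pi

def rayleigh_pdf(r, sigma):
    return (r / sigma**2) * math.exp(-r**2 / (2 * sigma**2))

def rician_pdf(r, nu, sigma):
    return (r / sigma**2) * math.exp(-(r**2 + nu**2) / (2 * sigma**2)) \
        * bessel_i0(r * nu / sigma**2)
```

With ν = 0 the Bessel factor is I0(0) = 1 and the two densities coincide exactly.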
A.15 Suzuki Distribution
This distribution was originally proposed by Suzuki [Suzuki 1977] in an attempt to
provide one distribution that accurately described mobile channels in both local and
global areas. It was a combination of Rayleigh and Lognormal distributions. It has been
largely ignored in studies ostensibly because of the complexity of the data reductions
(because its pdf is given in an integral form).
The Suzuki pdf could be expressed as:
\[ P(r) = \int_0^{\infty} \frac{r}{\sigma^2}\, e^{-\frac{r^2}{2\sigma^2}} \cdot \frac{1}{\sqrt{2\pi}\,\sigma\lambda}\, e^{-\frac{(\ln\sigma - \mu)^2}{2\lambda^2}}\, d\sigma \]
Figure A.2: Two Ray Model.
A.16 Two Ray Model (Free Space Model with Ground Re-
flection)
If, in addition to the direct LOS wave to the receiver, there was also a wave reflected
off the ground, the resulting power at the receiver could be found by adding the two
phasors together.
If the LOS phasor was written as A1 e^{jθ} and the reflected phasor as A2 e^{jφ}, then
the total received field strength would be:

\[ A_1 e^{j\theta} + A_2 e^{j\phi} = A_1 e^{j\theta} \left( 1 + \frac{A_2}{A_1}\, e^{j(\phi - \theta)} \right) \]

One can then note that:

\[ \phi - \theta = \frac{2\pi}{\lambda}\, \Delta l \]
Where:
Δl was the path length difference between the direct and reflected path.
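The phasor sum can be sketched numerically (any ground reflection coefficient is assumed to be folded into A2; the function name is illustrative):

```python
import cmath
import math

def two_ray_field(a1, a2, delta_l, wavelength):
    """|A1 + A2 e^{j(phi - theta)}| with phi - theta = 2*pi*delta_l / wavelength."""
    phase = 2 * math.pi * delta_l / wavelength
    return abs(a1 + a2 * cmath.exp(1j * phase))
```

For equal amplitudes, a half-wavelength path difference gives near-total cancellation, while a full-wavelength difference gives constructive addition.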
A.17 Weibull Distribution
The Weibull distribution was originally proposed by Waloddi Weibull in 1939 as an
analytical tool for modelling the breaking strengths of materials. It had a pdf that could
be expressed as:
\[ P_r(r) = \frac{\alpha b}{r_0} \left(\frac{b r}{r_0}\right)^{\alpha-1} e^{-\left(\frac{b r}{r_0}\right)^{\alpha}}, \quad r > 0 \]
Where:
α was a shape parameter;
r0 was the RMS value of r; and,
b was a normalising factor, where:

\[ b = \sqrt{\left(\frac{2}{\alpha}\right) \Gamma\!\left(\frac{2}{\alpha}\right)} \]

The Weibull Distribution contained the Rayleigh and exponential distributions as special
cases.
Appendix 2 – List of Acronyms
ABP Aisle Based Path-loss.
ACK Acknowledgement.
A-D Anderson-Darling (goodness-of-fit technique).
AM Amplitude Modulation.
AP Access Point.
API Application Programming Interface.
ARC Auto Rate Control.
ARF Auto Rate Fallback.
AWGN Additive White Gaussian Noise.
BA Basic Access.
BER Bit Error Rate.
BPSK Binary Phase Shift Keying.
CAD Computer Aided Design.
CARA Collision-aware Rate Adaptation.
CCK Complementary Code Keying.
CRC-IMST Co-operative Research Centre for Intelligent Manufacturing Systems and Technologies
(now defunct).
CRTP Compressed RTP.
CSIRO Commonwealth Scientific and Industrial Research Organisation.
CSMA/CA Carrier Sense Multiple Access with Collision Avoidance.
CSMA/CD Carrier Sense Multiple Access with Collision Detection.
CTS Clear To Send.
DBPSK Differential Binary Phase Shift Keying.
DCF Distributed Coordination Function.
DIFS DCF Interframe Space.
DLL Data Link Layer.
DPSK Differential Phase Shift Keying.
DQPSK Differential Quadrature Phase Shift Keying.
DSSS Direct Sequence Spread Spectrum.
EM Electro Magnetic.
FEC Forward Error Correction.
FO First Order (From ROHC).
FM Frequency Modulation.
FDTD Finite Difference Time Domain.
FTP File Transfer Protocol.
GPIB General Purpose Interface Bus.
GTD Geometric Theory of Diffraction.
HF High Frequency.
HTML Hypertext Markup Language.
HTTP Hypertext Transfer Protocol.
IEEE Institute of Electrical and Electronic Engineers.
IETF Internet Engineering Task Force.
IP Internet Protocol.
IR Initialise and Refresh (from ROHC).
IRIS Industrial Research Institute Swinburne.
IRT Image Ray Tracer.
ISI Intersymbol Interference.
ISM Industrial, Scientific and Medical (frequency band).
ISO International Organization for Standardization.
ITU International Telecommunication Union.
JDK Java Development Kit.
LAN Local Area Network.
LLC Logical Link Control.
L-M Levenberg-Marquardt.
LOS Line of Sight.
MAC Media Access Control.
MAN Metropolitan Area Network.
MCC Multi-Channel Coupling.
MOM Method of Moments.
MOS Mean Opinion Score.
MSE Mean Square Error.
OSI Open Systems Interconnection.
PBP Partition Based Path-loss.
PCF Point Coordination Function.
pdf Probability Density Function.
PDP Power Delay Profile.
PER Packet Error Rate.
PLCP Physical Layer Convergence Protocol.
POTS Plain Old Telephone Service.
PSK Phase Shift Keying.
QoS Quality of Service.
RBAR Receiver Based Auto Rate.
RF Radio Frequency.
ROHC Robust Header Compression.
RMS Root Mean Squared.
RRAA Robust Rate Adaptation Algorithm.
RTP Real-time Transport Protocol.
RTS Request to Send.
Rx Receiver.
S4W Site-Specific System Simulator for Wireless Communications.
SBR Shoot and Bounce Ray Tracing.
SIFS Short Interframe Space.
SO Second Order (from ROHC).
SNR Signal to Noise Ratio.
Tx Transmitter.
UDP User Datagram Protocol.
VHF Very High Frequency.
VoIP Voice over IP.
WiFi Wireless Fidelity (802.11b).
WLAN Wireless Local Area Network.
Appendix 3 – Publications Resulting from Research
Downey, M., Toncich, D.J., MacDougall, J. and Overmars, A.H., “The Rapid Deploy-
ment of Wireless Networks in an Industrial Environment”, Under Review 2007,
Journal of Wireless Networks, Springer Verlag.
Downey, M., Toncich, D.J. and Overmars, A.H., “Capacity of Voice over IP in
Noisy Environments”, Under Review 2007, Journal of Wireless Networks, Springer
Verlag.
Downey, M., Toncich, D.J., MacDougall, J. and Overmars, A.H., “A Simple Markov
Chain Analysis of ARF, RTS/CTS ARF, CARA and RRAA-Basic”, Under
Review 2007, Wireless Communications and Mobile Computing, Wiley.
References
[Agelet 2000] F. A. Agelet; F. Aguago; A. Formella; J. M. Hernando Rabanos; F. Isasi de
Vicente; F. Perez Fontan, “Efficient ray-tracing acceleration techniques for radio
propagation modelling”, IEEE Transactions on Vehicular Technology, Vol. 49, No.
6 pp. 2089-2104, November 2000.
[Airmagnet 2006] Airmagnet, “AirMagnet - Enterprise Wireless Network Security
and Troubleshooting”, http://www.airmagnet.com/, Accessed on February 2006.
[Ali-Rantala 2002] P. Ali-Rantala; L. Sydnheimo; M. Keskilammi; M. Kivikoski; “Indoor
Propagation comparison between 2.45GHz and 433MHz Transmissions”, 2002
IEEE Antennas and Propagation Society International Symposium, San Antonio, Texas,
June 16-21 2002.
[Andren 1999] C. Andren; M. Webber, “CCK Modulation Delivers 11Mbps for
High Rate IEEE 802.11b”, Wireless Symposium/Portable by Design Conference,
Spring 1999.
[Andren 2000] C. Andren; K. Halford; M. Webster; “CCK, the new IEEE 802.11
standard for 2.4GHz wireless LANs”, International IC – Taipei, Conference Pro-
ceedings, May 2000.
[Ascom 2006] “Home Wireless Solutions”, http://www.ascom.com/ws, Accessed Novem-
ber 2006.
[Bagrodia 2001] R Bagrodia et al. “Glomosim: A scalable network simulation
environment, v2.03”. Parallel Computing Lab, UC Los Angeles, 2001.
[Balanis 1989] C. A. Balanis, “Advanced Engineering Electromagnetics”, John
Wiley & Sons, New York, 1989.
[Bertoni 1988] H. Bertoni, “Coverage prediction for mobile radio systems op-
erating in the 800/900 MHz frequency range”, IEEE Transactions on Vehicular
Technology (Special Issue), Vol. 37, pp. 3-72, 1988.
[Bianchi 1998] G. Bianchi, “IEEE 802.11 Saturation Throughput Analysis”, IEEE
Communications Letters, Vol. 2, No. 12, pp. 318 – 320, December 1998.
[Bianchi 2000] G. Bianchi, “Performance analysis of IEEE 802.11 distributed
coordination function”, IEEE Journal on Selected Areas in Communications, Vol.
18, No. 3, pp. 535-547, March 2000.
[Bormann 2001] C. Bormann; et al., “RObust Header Compression (ROHC):
Framework and Four Profiles: RTP, UDP, ESP and Uncompressed”, Internet
Engineering Task Force, RFC 3095, July 2001.
[Bourke 2001] P. Bourke, “Bicubic Interpolation for Image Scaling”
http://local.wasp.uwa.edu.au/ pbourke/texture colour/imageprocess/index.html, Accessed
on February 2007.
[Chatzimisios 2004] P. Chatzimisios; A. C. Boucouvalas; V. Vitsas, “Performance
Analysis of IEEE 802.11 DCF in Presence of Transmission Errors”, IEEE
Communications Society, pp. 3854-3858, 2004.
[Cheu 1995] S. H. Cheu; S. Jeng, “All SBR/image approach to indoor radio propa-
gation modelling”, Antennas and Propagation Society International Symposium, 1995.
AP-S. Digest, Vol.4, pp.1952-1955, 18th – 23rd June 1995.
[Cichon 1995] D.J. Cichon; T. Zwick; J. Lahteenmaki, “Ray Optical Indoor Modeling
in Multi-floored Buildings: Simulations and Measurements”, Antennas and
Propagation Society International Symposium, 1995. AP-S. Digest, Vol. 1, pp. 522-
525, 18th – 23rd June 1995.
[Cisco 2005] Cisco Systems, Inc. “Voice Over IP - Per Call Bandwidth Consump-
tion”, Document ID 7934, Available at: http://www.cisco.com/warp/public/788/pkt-
voice-general/bwidth consume.html, Accessed February 2007.
[Clough 1999] R. W. Clough; E. L. Wilson, “Early Finite Element Research at
Berkeley”, Fifth U.S. National Conference on Computational Mechanics, 1999.
[Costa 1997] E. Costa, “Ray Tracing Based on the Method of Images for Prop-
agation Simulation in Cellular Environments”, 10th International Conference on
Antennas and Propagation, Conference Publication No. 436, 1997.
[Cox 1984] D. Cox; R. Murray; A. Norris, “800MHz Attenuation Measured in and
Around Suburban Houses”, AT&T Bell Laboratories Technical Journal, Vol. 673,
No. 6, pp. 921-954, July-August, 1984.
[Damosso 1999] E. Damosso et al., “Digital Mobile Radio Towards Future Generation
Systems”, European COST Action 231, Available at http://www.lx.it.pt/cost231/,
Accessed on February 2007.
[Devasirvatham 1987] D. M. J. Devasirvatham, “Multipath Time Delay Spread in
the Digital Portable Radio Environment”, IEEE Communications Magazine, Vol.
25, No. 6, pp. 13-21, 1987.
[Devasirvatham 1990] D. M. J. Devasirvatham; C. Banerjee; M. J. Krain; D. A. Rappa-
port, “Multi-Frequency Radiowave propagation measurements in the portable
radio environment”, In Proceedings ICC 90, 1990.
[Durgin 1998] G. Durgin, “Advanced Site-Specific Propagation Prediction Tech-
niques”, Masters Thesis, Virginia Polytechnic Institute and State University, May
1998.
[EDX 2006] EDX Wireless, “Antenna Pattern File Format”,
http://www.edx.com/tech support/kb/kb ant pat format.php, Accessed, February 2006.
[Falsafi 1996] A. Falsafi; K. Pahlavan; and G. Yang, “Transmission Techniques for
Radio LANs - A Comparative Performance Evaluation Using Ray Tracing”,
IEEE Journal on Selected Areas in Communications, Vol. 14, No. 3, pp. 477-491,
1996.
[Foh 2005] C. H. Foh; J. W. Tantra, “Comments on IEEE 802.11 saturation
throughput analysis with freezing of back-off counters”, IEEE Communications
Letters, Vol. 9, No. 2, pp 130-132, 2005.
[Friis 1946] H. T. Friis, “A Note on a Simple Transmission Formula”, Proceedings
of the Institute of Radio Engineers, Vol. 34, No. 5, pp.254–256, May 1946.
[Ganesh 1989] R. Ganesh; K. Pahlavan, “On the Modelling of Fading Multipath
Indoor Radio Channels”, IEEE Globecom ’89 Conference, Dallas, pp. 1346-1350,
November 1989.
[Garg 2003] S. Garg; M. Kappes, “Can I add a VoIP call?”, IEEE International
Conference on Communications, 2003. ICC ’03, Vol. 2, pp. 779-783, 11th-15th May
2003.
[Gibson 2006] J. D. Gibson; B. Wei; S. Choudhury, “Voice Communications over
Tandem Wireline IP and WLAN Connections”, Proceedings of the Fortieth An-
nual Asilomar Conference on Signals, Systems, and Computers, October 29 - November
1, 2006.
[Golay 1961] M. Golay, “Complementary Series”, IEEE Transactions on Information
Theory, Vol.7, No.2, pp. 82- 87, Apr 1961.
[Hankins 2004] G. Hankins; L. Vahala; J.H. Beggs, “Electromagnetic Propagation
Prediction Inside Aircraft Cabins”, NASA Langley Research Center, 2004 IEEE In-
ternational Symposium on Antennas and Propagation and USNC/URSI National Radio
Science Meeting, Monterey, California, June 20th-26th, 2004.
[Harrington 1967] R. F. Harrington, “Matrix methods for fields problems”, Pro-
ceedings of the IEEE, Vol. 55, pp. 136–149, February, 1967.
[Hashemi 1992] H. Hashemi; D. Lee; D. Ehman, “Statistical Modelling of the in-
door radio propagation channel – part II”, Proceedings of the IEEE. Vehicular
Technology Conference, VTC ’92, Denver, Colorado, pp. 839 – 843, May 1992.
[Hashemi 1993] H. Hashemi, “The Indoor Radio Propagation Channel”, Proceed-
ings of the IEEE, Vol. 81, pp. 943-968, 1993.
[Hassan-Ali 2002] M. Hassan-Ali; K. Pahlavan, “A New Statistical Model for Site-
Specific Indoor Radio Propagation Prediction Based on Geometric Optics
and Geometric Probability”, IEEE Transactions on Wireless Communications, Vol.
1, pp. 112-124, January 2002.
[Hata 1980] M. Hata, “Empirical formulae for propagation loss in land mobile
radio services”, IEEE Transactions on Vehicular Technology, VT-29, pp. 317-325,
1980.
[Heusse 2003] M. Heusse; F. Rousseau; G. Berger-Sabbatel; A. Duda, “Performance
anomaly of 802.11b”, INFOCOM 2003. Twenty-Second Annual Joint Conference of the
IEEE Computer and Communications Societies. IEEE, 2003.
[Hole 2004] D. P. Hole; F. A. Tobagi, “Capacity of an IEEE 802.11b wireless LAN
supporting VoIP”, 2004 IEEE International Conference on Communications, Vol.1,
pp. 196-201, 20th-24th June 2004.
[Holland 2001] G. Holland; N. Vaidya; P. Bahl, “A rate-adaptive MAC protocol for
multi-hop wireless networks”, In Proceedings of ACM MOBICOM’01, Rome, Italy,
2001.
[Honcharenko 1991] W. Honcharenko; H. L. Bertoni, “UHF Propagation in Modern
Office Buildings”, poster presented at First Virginia Tech. Symposium on Wireless
Personal Communications, Blacksburg, VA, June 5, 1991.
[Honcharenko 1992] W. Honcharenko; H. L. Bertoni; J. L. Dailing; J. Qian; H. D. Yee,
“Mechanisms Governing UHF Propagation on Single Floors in Modern Office
Buildings”, IEEE Transactions on Vehicular Technology, Vol. 41, No. 4, pp. 496-504,
November 1992.
[Honcharenko 1995] W. Honcharenko; H.L. Bertoni; J.L. Dailing, “Bilateral Averaging
Over Receiving and Transmitting Areas for Accurate Measurements of Sector
Average Signal Strength Inside Buildings”, IEEE Transactions on Antennas and
Propagation, Vol. 43, No. 5, May 1995.
[Iskander 2002] M. F. Iskander; Z. Yun, “Propagation Prediction Models for Wire-
less Communication Systems”, IEEE Transactions on Microwave Theory and Tech-
niques, Vol. 50, No. 3, March 2002.
[IEEE 802.11] IEEE, “802.11: Wireless LAN Medium Access Control (MAC)
and Physical Layer (PHY) Specifications”, IEEE 802.11, November 2000.
[IEEE 802.11b] IEEE, “IEEE Standard for Wireless LAN Medium Access Con-
trol and Physical Layer Specifications”, IEEE 802.11b, November 1999.
[IEEE 802.3] IEEE, “Carrier Sense Multiple Access with Collision Detection
(CSMA/CD) Access Method and Physical Layer Specification”, IEEE 802.3,
December 2005.
[IEEE 802.3i] IEEE, “10Base-T Ethernet”, IEEE 802.3i, 1990.
[IEEE 802.3j] IEEE, “Local and Metropolitan Area Networks Fibre Optic and
Passive Star-based elements, Type 10Base-F”, IEEE 802.3j, 1993.
[ISO/IEC 7498-1] ISO/IEC, “Information Technology – Open Systems Intercon-
nection – Basic Reference Model: The Basic Model”, International Organization
for Standardization, 1996.
[ITU-T G.711] ITU-T, “Pulse Code Modulation (PCM) of Voice Frequencies”,
ITU-T Recommendation G.711, November 1988.
[ITU-T G.729] ITU-T, “Coding of speech at 8 kbit/s using conjugate-structure
algebraic-code-excited linear prediction (CS-ACELP)”, ITU-T Recommendation
G.729, March 2003.
[ITU-T P.800] ITU-T, “Methods for subjective determination of transmission
quality”, ITU-T Recommendation P.800, 1996.
[Jemai 2005] J. Jemai; R. Piesiewicz; T. Kürner, “Calibration of an Indoor Radio
Propagation Prediction Model at 2.4 GHz by Measurements of the IEEE
802.11b Preamble”, Vehicular Technology Conference, 2005, Volume 1, pp. 111-115,
2005.
[Ji 1999] Z. Ji; B. Li; H. Wang, H. Chen; Y. Zhau, “An Improved Ray-Tracing
Propagation Model for Predicting Path Loss on Single Floors”, Microwave and
Optical Technology Letters, Vol. 22, No. 1, pp.39-41, 1999.
[Ji 2001] Z. Ji; B. Li; H. Wang; H. Chen; T. K. Sarkar, “Efficient Ray-Tracing
Methods for Propagation Prediction for Indoor Wireless Communications”,
Antennas and Propagation Magazine, IEEE , Vol. 43, No.2, pp.41-49, April 2001.
[Keller 1962] J. Keller, “Geometrical Theory of Diffraction”, Journal of the Optical
Society of America, Vol. 52, No. 2, 1962.
[Kim 2005] N. Kim, “IEEE 802.11 MAC Performance with Variable Trans-
mission Rates”, IEICE Transactions on Communications, Vol. E88-B, No. 9, pp.
3524-3531, September 2005.
[Kim 2006] J. Kim, S. Kim, S. Choi, D. Qiao, “CARA: Collision-aware Rate Adap-
tation for IEEE 802.11 WLANs”, IEEE INFOCOM, 2006.
[Kjesbu 2000] S. Kjesbu; T. Brunsvik, “Radiowave propagation in Industrial En-
vironments”, Industrial Electronics Society, IECON 2000. 26th Annual Conference of
the IEEE, Vol. 4, pp. 2425-2430, 2000.
[Kouyoujiman 1974] R. Kouyoumjian, “A Uniform Geometrical Theory of Diffrac-
tion for an Edge in a Perfectly Conductive Surface”, Proceedings of the IEEE,
Vol 62. Issue 11, 1974.
[Lacage 2004] M. Lacage; M. H. Manshaei; T. Turletti, “IEEE 802.11 Rate Adapta-
tion: A Practical Approach”, Proceedings of the 7th ACM international Symposium
on Modeling, Analysis and Simulation of Wireless and Mobile Systems, MSWiM ’04,
ACM Press, New York, NY, pp. 126-134, 2004.
[Lacroici 1997] D. Lacroix; C. L. Despins; G. Y. Delisle; P. Marinier; P. Luneau,
“Experimental Characterization of Outdoor Microcellular Quasi-Static Channels
in the UHF and SHF Bands”, 1997 IEEE International Conference on Communications,
ICC ’97, Towards the Knowledge Millennium, Vol. 1, 8th-12th June 1997.
[Laureson 1992] D. I. Laurenson; A. U. H. Sheikh; S. McLaughlin, “Characterization
of the Indoor Mobile Channel using a Ray-Tracing Technique”, Proceedings of
1992 IEEE International Conference on Selected Topics in Wireless Communications,
Vancouver, B.C., pp. 65-68, June 25th-26th, 1992.
[Lawton 1994] M. C. Lawton; J. P. McGeehan, “The Application of a Deterministic
Ray Launching Algorithm for the Prediction of Radio Channel Characteris-
tics in Small Cell Environments”, IEEE Transactions on Vehicular Technology, Vol.
43, No. 4, November 1994.
[Lee 1993] W. C. Y. Lee, “Mobile Communication Design Fundamentals”, New
York: Wiley, 1993.
[Lu 1998] W. Lu; K. T. Chan, “An Improved 3-D Ray Tracing Method for Indoor
Propagation Prediction”, Radio and Wireless Conference, 1998, RAWCON 98, pp.
9-101, 9th-12th August 1998.
[Luebbers 1984] R. J. Luebbers, “Finite Conductivity Uniform GTD Versus Knife
Edge Diffraction in Prediction of Propagation Path Loss”, IEEE Transactions on
Antennas and Propagation, Vol. AP-32, pp. 70-76, 1984.
[Luebbers 1984-2] R.J. Luebbers, “Propagation Prediction for Hilly Terrain using
GTD Wedge Diffraction”, IEEE Transactions on Antennas and Propagation, Vol. AP-32,
pp. 951-955, 1984.
[Marinier 1996] P. Marinier; G. Y. Delisle; L. Talbi, “A Coverage Prediction
Technique for Indoor Wireless Millimeter Waves System”, Wireless Personal Com-
munications, Springer Netherlands, Vol. 3, No. 3, 1996.
[Martijn 2003] E. F. T. Martijn; M. H. A. J. Herben, “Characterization of Radio Wave
Propagation into Buildings at 1800 MHz”, IEEE Antennas and Wireless
Propagation Letters, Vol. 2, 2003.
[McKown 1991] J. McKown; R. Hamilton Jr, “Ray Tracing as a Design Tool for
Radio Networks”, IEEE Network Magazine, 1991.
[Miller 1950] G. A. Miller; J. C. R. Licklider, “The Intelligibility of Interrupted
Speech”, Journal of the Acoustical Society of America, Vol. 22, No. 2, 1950.
[Miller 1998] L. E. Miller; J. S. Lee, “BER Expressions for Differentially Detected
π/4-DQPSK Modulation”, IEEE Transactions on Communications, Vol. 46, pp.
71-81, January 1998.
[Motorola 2006] “Motorola Enterprise Mobility–Enterprise Software and Ser-
vices – RF Design and Management”, http://www.motorola.com/Enterprise/us
/en us/solution.aspx?Navigationpath=id 801i/id 2720i/id 2732i&tabbed=Components,
Accessed February 2007.
[Motorola 2006-2] “Save Time and Money and Reduce Uncertainty in Your
Wireless Network Planning Process”, http://www.motorola.com/Enterprise
/contentdir/en US/Enterprise/Files/White Papers/Motorola RFDesign Performance.pdf,
Accessed July 2007.
[Musil 1986] J. Musil; F. Zacek, “Microwave Measurements of Complex Permit-
tivity by Free Space Methods and Their Applications”, Elsevier, New York,
1986.
[Nadeem 2004] T. Nadeem; A. Agrawala, “IEEE 802.11 DCF Enhancements for
Noisy Environments”, 15th IEEE International Symposium on Personal, Indoor and
Mobile Radio Communications, 2004. PIMRC 2004, Vol. 1, pp. 93-97, 5th-8th Septem-
ber 2004.
[Nakagami 1960] M. Nakagami, “The m-distribution – A General Formula of
Intensity Distribution of Rapid Fading”, Statistical Methods of Radio Wave
Propagation, Pergamon Press, 1960.
[Neskovic 2000] A. Neskovic; N. Neskovic; G. Paunovic, “Modern Approaches in
Modeling of Mobile Radio Systems Propagation Environment”, IEEE Com-
munications Surveys, 2000.
[Nidd 1997] M. Nidd; S. Mann; J. Black, “Using Ray Tracing for Site-Specific
Indoor Radio Signal Strength Analysis”, Vehicular Technology Conference, Vol.2,
pp. 795-799, 1997.
[Okumura 1968] Y. Okumura et al., “Field Strength and its Variability in VHF
and UHF Land Mobile Radio Service”, Review of the Electrical Communication
Laboratory (ECL), Vol. 16, pp. 825-873,
1968.
[OPNET 2006] “Modeler Wireless Suite”, http://www.opnet.com/products/modeler
/home-1.html, Accessed February 2007.
[Owen 1999] “Ray Tracing”, http://www.siggraph.org/education/materials/HyperGraph
/raytrace/rtrace0.htm, Accessed, February 2007.
[Pearson 2000] B. Pearson, “Complementary Code Keying Made Simple”, Intersil
AN9850.1, May 2000.
[Perez-Vegay 1997] C. Perez-Vega; J. L. Garcia G.; J. M. L. Higuera, “A Simple
and Efficient Model for Indoor Path-Loss Prediction”, Measurement Science
and Technology, Vol. 8, Issue 10, pp. 1166-1173, 1997.
[Perkins 2003] C. Perkins, “RTP: Audio and Video for the Internet”,
Addison-Wesley, 2003.
[Proakis 2001] J. G. Proakis, “Digital Communications”, Irwin/McGraw-Hill, 2001.
[Qin 2004] L. Qin; T. Kunz, “Survey on Mobile Ad Hoc Network Routing Pro-
tocols and Cross-Layer Design”, Carleton University, Systems and Computer Engi-
neering SCE-04-14, Aug. 2004.
[Rakhmanov 1994] E. A. Rakhmanov; E. B. Saff; Y. M. Zhou, “Minimal Discrete
Energy on the Sphere”, Mathematical Research Letters 1, pp. 647-662, 1994.
[Rappaport 1989] T. S. Rappaport; C. D. McGillem, “UHF Fading in Factories”,
IEEE Journal on Selected Areas in Communications, Vol. 7, No. 1, pp. 40-48, January
1989.
[Rappaport 1989-2] T. S. Rappaport, “Indoor Radio Communications for Factories
of the Future”, IEEE Communications Magazine, pp. 15-24, May 1989.
[Rappaport 1989-3] T. S. Rappaport, “Characterization of UHF Multipath Radio
Channels in Factory Buildings”, IEEE Transactions on Antennas and Propagation,
Vol. 37, No. 8, August 1989.
[Rappaport 1991] T. S. Rappaport; S. Y. Seidel, and K. Takamizawa, “Statistical
Channel Impulse Response Models for Factory and Open Plan Building Ra-
dio Communication System Design”, IEEE Transactions on Communications, Vol.
39, No. 5, pp. 794-807, May 1991.
[Rappaport 1992] S. Y. Seidel; T. S. Rappaport, “A Ray Tracing Technique to
Predict Path Loss and Delay Spread Inside Buildings”, Proceedings of 1992
IEEE GlobeCom, pp. 649-653, 1992.
[Rappaport 1992-2] T. S. Rappaport; S. Y. Seidel, “914 MHz Path Loss Predic-
tion Models for Indoor Wireless Communications in Multifloored Buildings”,
IEEE Transactions on Antennas and Propagation, Vol. 40, No. 2, pp. 207-217,
February 1992.
[Rappaport 1992-3] T. S. Rappaport; K. R. Schaubach; N. J. Davis, “A Ray Tracing
Method for Predicting Path Loss and Delay Spread in Microcellular Envi-
ronments”, Vehicular Technology Conference, 1992 IEEE 42nd, Vol. 2, pp. 932-935,
10th-13th May 1992.
[Rappaport 1993] T. S. Rappaport; C. M. P. Ho, “Wireless Channel Prediction in a
Modern Office Building Using an Image-Based Ray Tracing Method”, IEEE
Globecom’93, November, 1993.
[Rappaport 1994] T. S. Rappaport; S. Y. Seidel, “Site-Specific Propagation Pre-
diction for Wireless In-Building Personal Communication System Design”,
IEEE Transactions on Vehicular Technology, Vol. 43, No. 4, November, 1994.
[Rappaport 1996] T. S. Rappaport, “Wireless Communications: Principles and
Practice”, Prentice Hall, New Jersey, 1996.
[Rappaport 2005] T. S. Rappaport, “Wireless Communications: Principles and
Practice”, Prentice Hall, New Jersey, 2nd edn, 2002.
[Rayleigh 1880] Lord Rayleigh, “On the Resultant of a Large Number of Vibra-
tions of the Same Pitch and of Arbitrary Phases”, Phil. Mag., Vol. 10, pp. 73-78,
1880 and Vol. 27, pp. 460-469, June 1889.
[Rein 2005] S. Rein; F. H. P. Fitzek; M. Reisslein, “Voice Quality Evaluation in Wire-
less Packet Communication Systems: A Tutorial and Performance Results
for ROHC”, IEEE Wireless Communications, Vol. 12, No. 1, pp. 60-67, February 2005.
[Remcom 2006] “Remcom - Wireless InSite - Ray Tracing / EM Propagation
Software”, http://www.remcom.com/WirelessInSite/index.html, Accessed February
2007.
[RFC 1889] Audio-Video Transport Working Group, “RTP: A Transport Protocol
for Real-Time Applications”, Request for Comments, Network Working Group, The
Internet Society, January 1996.
[RFC 2508] S. Casner; V. Jacobson, “Compressing IP/UDP/RTP Headers for
Low-Speed Serial Links”, Request for Comments, Network Working Group, The In-
ternet Society, February 1999.
[Rice 1944] S. O. Rice, “Mathematical Analysis of Random Noise”, Bell System
Technical Journal, Vol. 23, pp. 282-332, 1944 and Vol. 24, pp. 46-156, 1945.
[Rice 1959] L. P. Rice, “Radio Transmission into Buildings at 35 and 150 MHz”,
Bell System Technical Journal, Vol. 38, No. 1, pp. 197-210, 1959.
[Rusin 1995] D. Rusin, “Equally Spaced Points on a Sphere”, email correspondence,
http://www.math.niu.edu/~rusin/known-math/95/equispace.elect, Accessed February
2007.
[Safaai-Jazi 2002] A. Safaai-Jazi; S. M. Riad; A. Muqaibel; A. Bayram, “Through-
the-Wall Propagation and Material Characterization”, Time Domain and RF
Measurement Laboratory, Virginia Polytechnic, November 18, 2002.
[Saff 1997] E. B. Saff; A. B. J. Kuijlaars, “Distributing many points on a sphere”,
Mathematical Intelligencer, 1997.
[Schaubach 1994] K. R. Schaubach; N. J. Davis, “Microcellular Radio-Channel
Propagation Prediction”, Antennas and Propagation Magazine, IEEE, Vol. 36, No. 4, pp.
25-34, August 1994.
[Shannon 1948] C. E. Shannon, “A Mathematical Theory of Communication”,
Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, October 1948.
[Sivaswamy 1978] R. Sivaswamy, “Multiphase Complementary Codes”, IEEE Trans-
actions on Information Theory, Vol. 24, No. 5, pp. 546-552, September 1978.
[Speer 1992] L. R. Speer, “An Updated Cross-Indexed Guide to the Ray-Tracing
Literature”, SIGGRAPH Computer Graphics, Vol. 26, pp. 41-72, January 1992.
[SuperNEC 2006] “SuperNEC - EM Antenna Simulation Software & Design”,
http://www.supernec.com/index.html, Accessed February 2007.
[Suzuki 1977] H. Suzuki, “A Statistical Model for Urban Radio Propagation”,
IEEE Transactions on Communications, Vol. COM-25, pp. 673-680, July 1977.
[Tan 1995] S. Y. Tan; H. S. Tan, “Improved Three-Dimensional Ray Tracing
Technique for Microcellular Propagation Models”, Electronics Letters, Vol. 31,
No. 17, pp. 1503–1505, Aug. 1995.
[Tan 1995-2] S. Y. Tan; H. S. Tan, “Modelling and Measurements of Channel Im-
pulse Response for an Indoor Wireless Communication System”, IEE Proceedings
– Microwaves, Antennas and Propagation, Vol. 142, Iss. 5, pp. 405-410, October
1995.
[Tan 1996] S.Y. Tan; H.S. Tan, “A Microcellular Communications Propagation
Model Based on the Uniform Theory of Diffraction and Multiple Image The-
ory”, IEEE Transactions on Antennas and Propagation, Vol.44, Iss.10, pp. 1317-1326,
October 1996.
[Tickoo 2003] O. Tickoo; B. Sikdar, “On the Impact of IEEE 802.11 MAC on
Traffic Characteristics”, IEEE Journal on Selected Areas in Communications, Vol. 21,
No. 2, pp. 189-203, February 2003.
[Toncich 1999] D. Toncich, “Key Factors in Postgraduate Research – A Guide for
Students”, Chrystobel Engineering, 1999.
[Turkmani 1993] A. M. D. Turkmani; A. F. de Toledo, “Modelling of Radio
Transmissions into and within Multistorey Buildings at 900, 1800 and 2300 MHz”, IEE
Proceedings-I, Vol. 140, No. 6, December 1993.
[Uni. Cantabria 2006] “CINDOOR: Computer Tool for Planning and Design
of Wireless Systems in Enclosed Spaces”, http://www.gsr.unican.es/cindoor/,
Accessed February 2007.
[Valenzuela 1997] R.A. Valenzuela; O. Landron; D.L. Jacobs, “Estimating Local
Mean Signal Strength of Indoor Multipath Propagation”, IEEE Transactions
on Vehicular Technology, Vol. VT-46, pp. 203-212, 1997.
[Virginia Tech. 2006] “The S4WRT Parallel Ray Tracer Project”,
http://filebox.vt.edu/~cbergstr/s4w/, Accessed February 2007.
[Vu 2006] H. L. Vu; T. Sakurai, “Collision Probability in Saturated IEEE 802.11
Networks”, Australian Telecommunication Networks & Applications Conference (ATNAC),
December 2006.
[Weisstein 1999] E. W. Weisstein, “Finite Element Method”, MathWorld–A Wolfram
Web Resource.
http://mathworld.wolfram.com/FiniteElementMethod.html, Accessed February 2007.
[Weisstein 2006] E. W. Weisstein, “Antenna Power Pattern”, Eric Weisstein’s World
of Physics at Scienceworld.wolfram.com,
http://scienceworld.wolfram.com/physics/AntennaPowerPattern.html, Accessed
February 2007.
[Whitted 1980] T. Whitted, “An Improved Illumination Model for Shaded Display”,
Communications of the ACM, Vol. 23, Iss. 6, pp. 343-349, 1980.
[Wireless Valley 2005] “SIRCIM simulation package homepage”,
http://www.wirelessvalley.com/Products/SIRCIM/SIRCIM.asp, Accessed November 2005.
[Wong 2006] S. H. Wong; S. Lu; H. Yang; V. Bharghavan, “Robust Rate Adaptation
for 802.11 Wireless Networks”, Proceedings of the 12th Annual International
Conference on Mobile Computing and Networking, pp. 146-157, Los Angeles, CA, USA, 2006.
[Wu 2002] H. Wu; Y. Peng; K. Long; S. Cheng; J. Ma, “Performance of Reliable
Transport Protocol over IEEE 802.11 Wireless LAN: Analysis and Enhance-
ment”, INFOCOM 2002, Twenty-First Annual Joint Conference of the IEEE Computer
and Communications Societies, Proceedings, IEEE, Vol. 2, pp. 599-607, 2002.
[Yang 1996] C. Yang; C. Ko; B. Wu, “Field Measurements and Ray Tracing
Simulations for Indoor Wireless Communications”, Seventh IEEE International
Symposium on Personal, Indoor and Mobile Radio Communications, 1996, PIMRC’96,
Vol. 3, pp. 776-780, 1996.
[Yee 1966] K. S. Yee, “Numerical Solution of Initial Boundary Value Problems
Involving Maxwell’s Equations in Isotropic Media”, IEEE Transactions on An-
tennas and Propagation, Vol. AP-14, No. 4, pp. 302-307, 1966.
[Yegani 1989] P. Yegani; C. D. McGillem, “A Statistical Model for Line-of-Sight
(LOS) Factory Radio Channels”, Proceedings of Vehicular Technology Conference,
VTC ’89, pp. 496-503, May 1989.
[Yun 2001] Z. Yun; M. F. Iskander; Z. Zhang, “Development of a New Shooting-and-
Bouncing Ray (SBR) Tracing Method That Avoids Ray Double Counting”,
Digest of IEEE Antennas and Propagation Society International Symposium, Boston,
Massachusetts, Vol. 1, pp. 464-467, July 8-13, 2001.