A Framework for Radio Resource Management
in Heterogeneous Wireless Networks
by
Abd-Elhamid Mohamed Abd-Elhamid Taha
A thesis submitted to the
Department of Electrical and Computer Engineering
in conformity with the requirements for
the degree of Doctor of Philosophy
Queen’s University
Kingston, Ontario, Canada
September 2007
Copyright © Abd-Elhamid M. Taha, 2007
Abstract
Heterogeneous Wireless Networks (HWNs) are composite networks made of different
wireless access technologies, possibly with overlapping coverage. Users with multi-
mode terminals in HWNs will be able to initiate connectivity in the access technology
that best suits their attributes and the requirements of their applications. The true
potential of HWNs, however, is only realized through allowing users to maintain their
sessions when toggling from one access technology to another. Such inter-technology
handoffs, called vertical handoffs, will enable users to persistently select the most
appropriate network, and not just at session initiation. For operators, HWNs pave
the way to higher profitability through more capable networks in which the
complementary advantages of individual access technologies are combined. However,
the characteristics of HWNs challenge traditional arguments for designing Radio Resource
Management (RRM) frameworks. Managing the resources of an access technology in
an HWN independently of the other networks with which it is overlaid risks
underutilization and resource mismanagement. The dynamic nature of user demands in HWNs
also calls for RRM modules with controlled operational cost. More importantly, the
unique characteristics of HWNs call for non-traditional solutions that exploit the
“complementarity” of the individual networks.
In this thesis, we address these issues by proposing a framework for RRM
in HWNs. Our framework comprises three key components. The first component
is aimed at improving allocation in HWNs through joint policies involving
provisioning and admission control. In addition, we outline the basis for
achieving robust provisioning that accommodates variability not only in user demands, but
also in network capabilities. The second component is concerned with controlling the
operational cost of RRM modules. As a case study, we choose bandwidth adaptation
algorithms and optimize their performance. We also introduce the notion of stochastic
triggers, which enables operators to direct the operation of an RRM module based on the
operator’s objectives and network conditions. In the third component, we introduce
a new module that exploits vertical handoffs to the benefit of network operators.
Such operator motivated vertical handoffs can be utilized for congestion
control. They can also be used proactively to achieve long-term objectives such as
load balancing or service delivery cost reduction.
Dedication
To Sohir, Mohamed, Walid, Nodar and Hanan... in all love, humility and gratitude.
Acknowledgments
I praise God for his blessings — those granted, and those denied.
I would like to thank Prof. Hossam S. Hassanein and Prof. Hussein Mouftah for
their supervision during the course of my studies.
Prof. Hassanein, I thank you for a wonderful experience. Your encouragement,
the opportunities that you’ve given me, the long hours, and your patience are all
appreciated beyond what words can convey.
Prof. Mouftah, I thank you for your wisdom, advice and support, and for sharing
your valuable experience in our long discussions.
I would also like to thank members of the examination committee for their valuable
remarks and recommendations.
I am grateful to Ms. Bernice Ison and Ms. Debra Fraser for bearing with me in
my endless requests.
I acknowledge with great appreciation the funding provided by Queen’s University,
Bell University Labs and the Natural Sciences and Engineering Research Council.
Over the course of my studies, I have met many wonderful people who made my
stay in Kingston a joyful and worthwhile experience.
Nidal Nasser, a friend, a colleague and an endlessly patient roommate, helped me
in many ways from the day I stepped into Canada. I still wonder how I would have
made it without his support.
Ali Roumani, Baha Jabarin and Hany Hammad, friends and roommates from 172
Barrie Street where I’ve had exceptionally great memories.
Ayman Radwan, Maged Bekheit, Karim Hamed and Marija Linjacki, fellow John
Orrians and the greatest companions one could ever ask for. Ayman: If I could
bring back the time, I would do it all over again — but with a better camera!
Hisham, Maha, Malak and Ledya Batarny; Mahmoud, Faten, Ahmed and Randa
Hasswa, Lamis Roumani; Mohamed, Sohaila and Layla Agamy; Walid, Fatheia,
Maa’ther and Razi Jabarin; and Asmaa and Youssef Nasser — thank you all for
making me part of very special times.
And all my colleagues at the Telecommunications Research Lab at Queen’s Uni-
versity, especially Zeenat, Riyami, Safwat, Ayse, Jian, Alex, Hung, Bader, Kenan,
Jiawei, Afzal, Tiantong, Nomaan, Kashif, Doha, Khaled, Abduladhim and Hassan.
You’ve made TRLab a good place to belong to.
ECE comrades, especially Abdelhamid Elshoul, for whom I hold the highest regard,
and for whom I pray the best of life.
And far from Kingston but close to heart, I must thank lifelong friends who have
kept in touch regardless of the distances: Anwaar and Ahmed, Hidab and Khaled,
Ashraf Hammad, Fatma Farhoud, May Zanki, Rabih Dib, Ramez Bouari, Fatma
Hassar, Shihab Matar and Khaled Harras.
But back to Kingston, where I met Najah Abu Ali. I have exhausted all possible
expressions of gratitude and appreciation, and they all seem rudimentary and they
all fall short. But I would say that you’ve made me a homeland of wherever you were,
are or will be. And your family, Mohamed, Sam, Leen, Wesam and Rose Aleyadeh
— Thank you for having me in your lives.
And last but never least, my dear family. My parents: Sohir and Mohamed; and
my siblings: Walid, Nodar and Hanan. You were never far and you will never be. I
have had you with me throughout my stay in Kingston, and I will have you with me
wherever I go. You have given me so much, and I can’t even begin to grasp all the
things that you’ve done for me. So I stand ever in awe, ever in debt and ever in love.
It is to all of you that I dedicate this work. May you accept it. And may God always
bless your hearts.
Abd-Elhamid M. Taha
Kingston, Ontario
September 28, 2007
Table of Contents
Abstract i
Dedication iii
Acknowledgments iv
Table of Contents vii
List of Tables xi
List of Figures xii
Acronyms xv
Chapter 1: Introduction . . . 2
1.1 Towards 4G: A Multi-Faceted Evolution . . . 2
1.2 Heterogeneous Wireless Networks . . . 4
1.3 Motivations and Objectives . . . 8
1.4 Thesis Contributions . . . 10
1.5 Thesis Outline . . . 12
Chapter 2: Background and Framework Overview . . . 15
2.1 Introduction . . . 15
2.2 Vertical Handoffs . . . 19
2.3 Network Selection . . . 22
2.4 RRM issues in HWNs . . . 24
2.5 An RRM Framework for HWNs . . . 27
  2.5.1 Framework Objectives . . . 28
  2.5.2 Overview of Framework Components . . . 29
  2.5.3 Architectural Considerations and Framework Interactions . . . 31
Chapter 3: Provisioning in HWNs . . . 37
3.1 Introduction . . . 38
3.2 Motivations and Related Work . . . 40
3.3 Considerations of Joint Provisioning . . . 45
3.4 A Model for Joint Allocation Policies . . . 46
  3.4.1 Architectural and Operational Considerations . . . 47
  3.4.2 Behavioral Model and Mobility Considerations . . . 48
  3.4.3 A Model for Disjoint Resource Allocation . . . 49
  3.4.4 Call Admission Control for Disjoint Allocation . . . 51
  3.4.5 A Model for Joint Resource Allocation . . . 51
  3.4.6 Call Admission Control for Joint Allocation . . . 53
3.5 Performance Evaluation . . . 53
  3.5.1 Simulation Setup . . . 53
  3.5.2 Results . . . 55
3.6 Towards Robust Provisioning in HWNs . . . 59
  3.6.1 Single Common Service - Deterministic Demand (SCS-DD) . . . 60
  3.6.2 Single Common Service - Probabilistic Demand (SCS-PD) . . . 63
  3.6.3 Illustrative Example . . . 66
3.7 Summary . . . 70
Chapter 4: Controlling the Cost of RRM Modules . . . 71
4.1 Introduction . . . 72
4.2 Elements and Considerations of BAAs . . . 74
  4.2.1 Definitions . . . 75
  4.2.2 The Role of a BAA . . . 76
  4.2.3 Trigger Frequency . . . 76
  4.2.4 Required Measurements . . . 78
  4.2.5 Conclusiveness of a BAA . . . 79
4.3 Overview of Previous Proposals . . . 80
4.4 Motivations and Objectives . . . 85
4.5 Stochastically Triggered Bandwidth Adaptation Algorithm . . . 87
  4.5.1 Operational Overview . . . 87
  4.5.2 Architectural Overview . . . 88
  4.5.3 Notations and Definition . . . 89
  4.5.4 The Measurement Module . . . 90
  4.5.5 Valuation of the Probability Threshold . . . 91
  4.5.6 The Gradation Modules . . . 94
  4.5.7 The Algorithm . . . 98
4.6 Performance Evaluation . . . 99
  4.6.1 Simulation Setup . . . 102
  4.6.2 Preliminary Evaluation . . . 103
  4.6.3 The Effect of Probability Thresholds . . . 106
  4.6.4 Tradeoffs between Blocking and Downgradation . . . 106
  4.6.5 Utilizing pt in Joint Admission Control . . . 112
4.7 Summary . . . 115
Chapter 5: Operator Motivated Vertical Handoffs . . . 116
5.1 Introduction . . . 117
5.2 Related Work . . . 119
5.3 Considerations of an OMVH Module . . . 121
  5.3.1 RRM Framework Interactions of OMVHM . . . 121
  5.3.2 Triggering OMVH Module (OMVHM) . . . 122
  5.3.3 Elements of OMVHM . . . 124
5.4 Designing OMVH Modules . . . 126
  5.4.1 Evaluating a Migration’s Worth . . . 128
  5.4.2 The Identification Stage . . . 133
  5.4.3 The Selection Stage . . . 134
  5.4.4 Illustrative Example . . . 139
5.5 Performance Evaluation . . . 140
  5.5.1 Simulation Setup . . . 141
  5.5.2 Blocking Probability . . . 142
  5.5.3 Effect of Overlay Percentage . . . 143
  5.5.4 Effect of Load Distribution . . . 145
  5.5.5 Limiting Average Number of Handoffs . . . 146
  5.5.6 The Multiclass Setting . . . 147
5.6 Summary . . . 150
Chapter 6: Service Delivery Cost Reduction . . . 152
6.1 Introduction . . . 153
6.2 Motivation and Related Work . . . 156
6.3 Elements of SDCR . . . 157
  6.3.1 Components and Interactions . . . 158
  6.3.2 Operational Overview . . . 159
  6.3.3 Definitions and Notations . . . 159
  6.3.4 Evaluating the Cost of Service Delivery . . . 161
  6.3.5 Identifying Candidate Users . . . 163
  6.3.6 The Core of SDCR . . . 164
6.4 Performance Evaluation . . . 166
  6.4.1 Simulation Setup . . . 167
  6.4.2 Observing the Transient Response . . . 168
  6.4.3 SDCR Effectiveness and Network Load . . . 172
6.5 Summary . . . 176
Chapter 7: Conclusions and Future Directions . . . 177
7.1 Conclusions . . . 179
7.2 Future Directions . . . 182
References . . . 185
List of Tables
3.1 Values used in allocation comparison . . . 66
3.2 Load distributions for each of the five scenarios . . . 68
3.3 Outcomes and deviations of Programs SCS-DD and PD . . . 69
List of Figures
1.1 An overlay of three access technologies. . . . . . . . . . . . . . . . . . 5
2.1 A hierarchical cellular network. . . . 16
2.2 A heterogeneous wireless network. . . . 17
2.3 General arguments on where mobility fits best in the IP stack. . . . 21
2.4 Elements of network selection. . . . 22
2.5 Schematic of the framework’s main components. . . . 29
2.6 Decision hierarchy in an operator’s HWN. . . . 33
2.7 Framework interactions. . . . 34
3.1 The user’s VH may not be accepted while residing in the overlay (a), but when moving outside the coverage of the local network, the VH becomes a necessity (b). . . . 43
3.2 A WLAN overlaid within a cellular network canvas. . . . 47
3.3 A sample of demand measured during a single simulation run. . . . 54
3.4 The blocking probability for users within the overlay with 70% of the users requesting network 1 and 50% of users residing within the overlay. . . . 56
3.5 The total blocking probability for users outside the overlay with 70% of the users requesting network 1 and 50% of users residing within the overlay. . . . 56
3.6 The total blocking probability for users inside the overlay with 50% of the users requesting network 1 and 70% of users residing within the overlay. . . . 57
3.7 The total blocking probability for users outside the overlay with 50% of the users requesting network 1 and 70% of users residing within the overlay. . . . 57
3.8 The total blocking probability for users inside the overlay with 50% of the users requesting network 1 and 30% of users residing within the overlay. . . . 58
3.9 The total blocking probability for users outside the overlay with 50% of the users requesting network 1 and 30% of users residing within the overlay. . . . 58
4.1 Different scenarios for the operation of a BAA. . . . 77
4.2 Plots for pt = r(x−s) with s = 5 and different values of r. . . . 93
4.3 Algorithm for bandwidth downgradation. . . . 100
4.4 Algorithm for bandwidth upgradation. . . . 101
4.5 Blocking probability vs. load, with and without engaging adaptation. . . . 104
4.6 Downgradation degree vs. load, with and without engaging adaptation. . . . 104
4.7 Average downgradation per user vs. load, with and without engaging adaptation. . . . 105
4.8 Blocking probability vs. load, with pt varied between 0 and 1 in steps of 0.2. . . . 107
4.9 Downgradation degree vs. load, with pt varied between 0 and 1 in steps of 0.2. . . . 107
4.10 Average downgradation per user vs. load, with pt varied between 0 and 1 in steps of 0.2. . . . 108
4.11 Tradeoff between blocking probability and average downgradation, with exponential (180 seconds) holding times. . . . 109
4.12 Tradeoff between blocking probability and average downgradation, with fixed (180 seconds) holding times. . . . 110
4.13 Tradeoff between blocking probability and average downgradation, with uniform (0, 180 seconds) holding times. . . . 110
4.14 Tradeoff between blocking probability and average downgradation, with evenly mixed holding times. . . . 111
4.15 Blocking probability vs. load, with and without joint admission. . . . 113
4.16 Downgradation degree vs. load, with and without joint admission. . . . 114
4.17 Average downgradation vs. load, with and without joint admission. . . . 114
5.1 An instance of employing OMVH: (a) A user makes a call to a congested network; (b) The network toggles the association of two users to another GW and accepts the call. . . . 118
5.2 The four stages in the OMVH operation. . . . 127
5.3 Coverage of a wireless LAN “zoned” based on the gateway’s signal strength. . . . 133
5.4 User G is attempting to enter a loaded cellular network. Users A to F are residing within an overlay of cellular network and a WLAN. . . . 140
5.5 An overlay of cellular and WLAN coverage. . . . 141
5.6 Total blocking probability (i.e. for both networks) with aggregate load varied. . . . 143
5.7 Blocking probability with overlay percentage varied (relative to larger network). . . . 144
5.8 Blocking probability with percentage of requests to larger networks varied, with and without employing OMVH. . . . 145
5.9 Effects of varied hold-off time on (a) blocking probability; (b) number of migrations per user; (c) total number of migrations; and (d) number of individual users migrated. . . . 146
5.10 Measured ratio with class 1 assuming 20% of the arrival rate. . . . 148
5.11 Measured ratio with class 1 assuming 50% of the arrival rate. . . . 149
5.12 Measured ratio with class 1 assuming 80% of the arrival rate. . . . 149
6.1 Components and Interactions of an SDCR module. . . . 159
6.2 A timeline illustrating the periodic manner in which Service Delivery Cost Reduction (SDCR) is engaged. . . . 160
6.3 An overlay of cellular and WLAN coverages. . . . 166
6.4 Instantaneous service delivery cost per user, with and without SDCR being engaged. . . . 169
6.5 Instantaneous service delivery cost per user, with and without SDCR being engaged, with SDCR employed every 200 seconds. . . . 170
6.6 Instantaneous service delivery cost per user, with and without SDCR being engaged, with a demand surge between 1000 and 1500 seconds. . . . 170
6.7 Instantaneous service delivery cost per user, with and without SDCR being engaged, cost structure changed randomly. . . . 171
6.8 Operational aspects with (solid) and without (dashed/dotted) SDCR, 20% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2. . . . 173
6.9 Operational aspects with (solid) and without (dashed/dotted) SDCR, 50% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2. . . . 173
6.10 Operational aspects with (solid) and without (dashed/dotted) SDCR, 80% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2. . . . 174
6.11 Operational aspects with (solid) and without (dashed/dotted) SDCR, 80% of requests to network 1: (a) median instantaneous cost; and (b) median instantaneous cost per user. . . . 175
Acronyms
3G Third Generation Wireless
3GPP Third Generation Partnership Project
3GPP2 Third Generation Partnership Project 2
4G Fourth Generation Wireless
BAA Bandwidth Adaptation Algorithm
ebu effective bandwidth unit
EDGE Enhanced Data Rates for Global Evolution
GPRS General Packet Radio Service
GSM Global System for Mobile communication
HCN Hierarchical Cellular Networks
HSPA High Speed Packet Access
HWN Heterogeneous Wireless Network
IETF Internet Engineering Task Force
IMS IP Multimedia Subsystem
IP Internet Protocol
MIP Mobile IP
MMS Multimedia Message Service
OMVH Operator Motivated Vertical Handoff
OMVHM OMVH Module
QoS Quality of Service
RNC Radio Network Controller
RRM Radio Resource Management
RSVP Resource Reservation Protocol
SCTP Stream Control Transmission Protocol
SDCR Service Delivery Cost Reduction
SIP Session Initiation Protocol
SLA Service Level Agreement
STBAA Stochastically Triggered Bandwidth Adaptation Algorithm
SMS Short Message Service
TCP Transmission Control Protocol
UMTS Universal Mobile Telecommunications System
UMVH User Motivated Vertical Handoff
VH Vertical Handoff
VPLS Virtual Private LAN Services
VOIP Voice over IP
WAP Wireless Application Protocol
WCDMA Wideband Code Division Multiple Access
WiMax Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network
WON Wireless Overlay Network
Chapter 1
Introduction
Fourth Generation Wireless (4G) will no doubt capitalize on a unifying commu-
nication core and the complementary attributes of the different technologies and
paradigms of wireless networks. Remarkably, the Internet Protocol (IP) has evolved
to become capable of connecting the various technologies, in addition to relaying
the different traffic types required by current and future applications. Beyond inter-
connection, however, the envisioned characteristics of 4G challenge Radio Resource
Management (RRM) frameworks both in design and in operation. In this chapter, we
set the premises for this thesis by describing the background and motivations for the
work, defining the objectives, outlining the approach, and stating the contributions.
1.1 Towards 4G: A Multi-Faceted Evolution
The proliferation of wireless and mobile devices in the commercial context is driven
by several motivators. Immediate connectivity, for one, has long been sought at
different levels (personal, business, emergency, etc.) and in different environments
(home, office, buildings, street, etc.). Wireless connectivity has become an
inexpensive and attractive alternative where wiring is financially or physically
infeasible. For businesses, immediate connectivity has also meant increased
productivity and further financial gains.
The popularity of the Internet, growing strong since the mid-nineties, is another
factor that cannot be overlooked. The Internet, in its evolution, became an important
component of lifestyle. Beyond settings in which two or more users can now be
involved in a real-time, multimedia interaction, there is also access to many forms
of resources (data, software, services, storage, computing power, etc.). The demand
that such access be available for wireless and mobile users is only natural.
The “4” in 4G can be understood by examining the evolution of cellular networks,
which are by far the most widespread [DSVK07]. The first and second generations of
cellular networks were aimed at providing voice communications — essentially, wireless
and mobile telephones. Second generation cellular networks, the most popular of
which is the Global System for Mobile communication (GSM), have achieved and
continue to enjoy great popularity. The growing interest in non-vocal communications,
e.g. brief messages, gradually led to services such as Short Message Service (SMS)
and Multimedia Message Service (MMS). The evolutions of GSM, namely General Packet
Radio Service (GPRS) and Enhanced Data Rates for Global Evolution (EDGE),
enabled limited forms of Internet access (e.g. the Wireless Application Protocol (WAP)
on cellular devices). Incidentally, GPRS and EDGE were given the label “2.5G
Wireless”. However, GPRS and EDGE rely on GSM’s infrastructure, which was
initially designed for voice communication. Accordingly, the general performance of
data over GSM suffered.
Enabling multimedia communications over cellular devices (e.g. watching television
shows or video conferencing) required another technology, namely Wideband
Code Division Multiple Access (WCDMA), which is the cornerstone of Third
Generation Wireless (3G) realizations. Notably, the most outstanding feature of 3G
networks is that they are, by design, packet-switched networks. In other words, both
voice and data are intrinsically supported. Advances such as High Speed Packet
Access (HSPA) have been introduced to further enhance the performance of 3G, and
their outcome has been given the label “3.5G Wireless”.
Part of the confusion surrounding the label 4G is that, so far, xG labels have
been given to evolutions of cellular networks. Accordingly, 4G is expected to be a
reference to a cellular network with advanced attributes. However, it is commonly
understood that there are several wireless paradigms involved in the design of 4G
networks. More importantly, a single unifying core that is based on IP will enable
bringing these technologies together to function as one network.
1.2 Heterogeneous Wireless Networks
General visions of 4G Wireless networks are essentially visions of Heterogeneous Wire-
less Networks (HWNs) [DSVK07]. An HWN is a composite made of two or more wire-
less access technologies, each with its own characteristics in terms of coverage, Quality
of Service (QoS) assurance, implementation and operation costs, etc. HWNs are also
called Wireless Overlay Networks (WONs), since their coverage comprises wireless
overlays, i.e. coverage overlaps, of different technologies. An example of a wireless overlay
is shown in Figure 1.1. In a non-restrictive view, HWNs also make use of different
connectivity paradigms such as those found in ad-hoc, mobile and mesh networks
[CAC+05] where network elements rely completely on the wireless medium to relay
information and are allowed to move, or even change the network topology, without
affecting the network functionality. Although in this thesis we restrict our arguments
to the context of a single network operator deploying an HWN, the general definition
extends to an HWN realized by networks belonging to multiple operators.
For a network operator, the objective of creating such a “composition” is to
realize a network that is more capable of delivering service between the service
provider and the wireless end user. The operator’s HWN would exploit the better
characteristics of the different access technologies in terms of coverage, efficiency, or
profitability. For users, HWNs present a further degree-of-freedom in selecting the
access technology that is most appropriate for the user’s terminal capabilities and
application requirements — or even to connect simultaneously to more than one
access technology.
Figure 1.1: An overlay of three access technologies.
Elementary to the operation of HWNs is the existence of multi-mode terminals,
i.e. terminals with more than one radio interface, each enabling access to a
different access technology. With such a capability, terminals can initiate connectivity
through the technology that most closely matches the user’s or the application’s
requirements. However, without the ability to maintain a session through mobility
or varying network conditions, HWNs would be quite limited. Without this capability,
a session must be terminated and re-initiated if the user wishes to continue the session
through a different interface. This capability is called inter-technology handoff or
Vertical Handoff (VH). The term “vertical” contrasts with the term “horizontal handoff”,
which characterizes a handoff taking place in networks with homogeneous access. For
example, a terminal changing association between two 802.16 base stations (BS) or
two 802.11 access points (AP) is said to undergo a horizontal handoff, while one
changing association between a BS and an AP, or vice versa, is said to undergo a
vertical handoff.
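The distinction just drawn can be captured by a one-line classification rule. The sketch below is illustrative only; the technology labels are hypothetical stand-ins for whatever identifiers a terminal exposes.

```python
# Classify a handoff by comparing the access technologies of the old and
# new points of attachment. The labels ("802.11", "802.16") are
# illustrative; any consistent technology identifiers would do.

def handoff_type(old_tech: str, new_tech: str) -> str:
    """Return "horizontal" for a same-technology handoff (e.g. AP to AP,
    or BS to BS) and "vertical" for a cross-technology one."""
    return "horizontal" if old_tech == new_tech else "vertical"

# A terminal changing association between two 802.11 access points:
assert handoff_type("802.11", "802.11") == "horizontal"
# The same terminal moving from an 802.11 AP to an 802.16 BS:
assert handoff_type("802.11", "802.16") == "vertical"
```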
VHs are yet to be offered on a commercial scale. However, considerable efforts
are being made to accelerate their introduction. Multi-mode wireless terminals are
beginning to be commercially available. For example, the Nokia E60 [nok] provides
interfaces for cellular networks (both GSM and Universal Mobile Telecommunications
System (UMTS)) and 802.11 Wireless Local Area Network (WLAN). In the future,
multi-mode terminals will evolve into using a single interface based on the emerg-
ing notion of cognitive radio [Hay05]. The IEEE 802.21 WG, initiated in March
2004, is working on enabling continuous seamless connectivity through different ac-
cess technologies [802]. There are also the efforts by the Internet Engineering Task
Force (IETF), Third Generation Partnership Project (3GPP) and Third Generation
Partnership Project 2 (3GPP2) on the IP Multimedia Subsystem (IMS) framework
[3GP] — a culmination of efforts towards the realization of an All-IP wireless network,
and one that is further being considered as the unifying element for next-generation
networks [CMVE06]. Through IMS, it becomes possible to evolve the abstraction of
access gateways — AP, BS, etc. — to Internet Gateways (IGW), where each can
be described and managed through standardized attributes. An effort of particular
interest, exemplified by the work in [MON], is one that exploits the features made
possible by the introduction of IPv6, especially the enlarged address pool and the
ability of terminals to hold multiple active interfaces and multiple addresses per interface.
Through these features, which in essence realize multi-homing at the terminal level,
it would be possible to have smooth VHs.
Once VHs are made possible, the full advantages of HWNs can be realized.
For users, network selection becomes viable not only at session initiation, but also
throughout the user’s session. Hence, notions such as “always connect to the cheapest
network” or “connect only through secure networks” can be implemented in a manner
that is seamless to the user. HWNs can thus be considered truly user-centric
or user-oriented since, by design, they persistently associate the user with the most
appropriate connectivity [FFF+06].
For operators, admission management need not be restricted to a single network.
Coverage permitting, an operator receiving a call request can direct the request to
the network that best suits the user requirements and the status of the different
networks. Furthermore, the operator can balance the admission load between the
different networks with less fear that a mobile user will be disconnected upon moving
beyond the vicinity of the accommodating network.
Here, a more critical difference between horizontal and vertical handoffs needs to
be pointed out. Horizontal handoffs are most commonly triggered by variations in
the signal level received at the terminal, and mostly result from user mobility. While
VHs can certainly be triggered by user mobility, they can also be requested or applied
for users with little or no mobility. For example, the applications run by a certain
user at a single place, e.g. the office or home, can vary from demanding Voice over
IP (VoIP) and video streaming applications, to loosely demanding web browsing, to
basic file transfer. Security requirements may also vary, e.g. when a user accesses
Virtual Private LAN Services (VPLS) or bank accounts. Based on the requirements
of the applications used, different networks may be engaged in service delivery at
different times — even simultaneously if a feature such as multi-homing is employed.
1.3 Motivations and Objectives
The notion of RRM is concerned with overseeing the distribution of radio resources
to different users, or different classes of users, and attempts to strike a balance
between catering to user requirements and achieving profitability for the network
operator. Given the scarcity of radio resources, RRM frameworks are designed to
maximize the number of services delivered while minding limitations that can be
either aggregate, such as overall interference or capacity bounds, or individual, such
as a user’s delay or jitter requirements. Different functionalities take part in RRM
frameworks. For instance, admission control judges whether or not a call can be
admitted into a network. Scheduling minds the packet-to-packet access between the
different users. Provisioning attempts to recognize demand patterns in the network
and establish long-term resource distributions satisfying the operator’s objectives.
Traditionally, RRM frameworks have been designed to operate within the scope
of homogeneous networks. Applying independent RRM frameworks to the different
technologies in an HWN means the resources of the different technologies are managed
in an incoherent manner, possibly leading to underutilization and, hence, user dissat-
isfaction and operator losses. In contemplating the possible permutations of device
capabilities, users’ applications and their requirements, and users’ behaviors (mobil-
ity, choice of network, etc.), one can realize the disparity that HWNs must sustain.
Different applications have different requirements in terms of QoS attributes (de-
lay, jitter, bandwidth), security, etc. At any given time, a user may request one or
more different applications. With terminals equipped with multiple interfaces, users
can choose from the available technologies, and even simultaneously utilize more than
one access technology.
In delivering such services, HWNs also need to provide for QoS, including main-
tainable delivery and prioritization. This means that, for a certain service and/or for
a certain set or class of users, a harmonious set of provisions/reservations needs to be
established across the various technologies.
This latter point is also relevant when considerations are made to provide seamless
handoffs between the different access technologies. Beyond prompt and quick signal-
ing, an essential component of a seamless handoff, sessions or users handed off from
one network to another must persistently receive the QoS mandated by the relevant
service agreements.
Finally, the design of the RRM framework should mind economic attractiveness
in terms of deployment and operation. As aforementioned, HWNs are composites
of different networks. Solutions proposed for RRM frameworks should be minimally,
if at all, invasive to the integrity of the different technologies. They should also, as
much as possible, preserve the medium and minimize overhead and unnecessary sig-
naling. At the same time, they should be efficient in terms of time and processing
requirements.
1.4 Thesis Contributions
The high-level objective of this thesis is to examine the design basis of RRM frame-
works for 4G wireless networks. The unique characteristics of such networks carry
new challenges that traditional frameworks are not designed to handle. The demand
diversity in time, space and magnitude requires robust, distributed and efficient
RRM functionalities. The nature of the demand lends itself to a hybrid approach,
with long-term, proactive decisions handling salient demand characteristics and short-
term, reactive decisions managing agile demand dynamics. Meanwhile, the framework
should mind the utilization of the different technologies in the network and should
always aim at maximizing utilization given network capabilities and user demands.
The contributions of this thesis are as follows:
1. Joint Functionalities: The classic approach to RRM is to maintain the integrity
of each access technology. In such a setting, each technology manages its re-
sources independently of the other technologies in an overlay. Our proposal is
based on the potential gains that can be achieved if the resources of networks
with overlaid coverage are jointly managed. This is mandated as users hand off
from one network to another, based on mobility or other factors; accordingly,
the demand on overlaid networks is not independent. Jointly managing the re-
sources of the overlaid networks leads to higher utilization and improved service
delivery than in the disjoint case. We exercised this outlook in two distinct
directions, namely provisioning and admission control. In provisioning, where
long-term capacity assignments are made to different user categories, we adopted
the use of stochastic programming as a means to realize robustness in long-term
allocations that sustain variability not only in the demand patterns, but also in
network conditions. As for admission control, which oversees resource assign-
ments at the instantaneous scale, we described the design considerations for
joint allocation policies and investigated their effects on network performance.
2. Controlled Operational Costs: Our interest here has been in the triggering of
various RRM modules in a classical framework. As a case study, we investi-
gated a module vital to any RRM framework, responsible for bandwidth
adaptation. We analyzed the module, its role, its requirements and its
triggers, in addition to the interactions between these different aspects. By sim-
plifying and structuring the manner in which bandwidth adaptation decisions
are made, the operational cost is directly reduced. Furthermore, we
proposed the notion of probabilistic triggers, where the trigger probability
is evaluated based on certain measurements. The use of probabilistic triggers
also showed substantial gains in operational cost.
3. Forced Vertical Handoffs: Previous works on VHs have focused on triggers mo-
tivated by user policies regarding charged cost, QoS requirements, security,
personal preference, etc. Whether the trigger decision is processed by the user
or the operator, such VHs are essentially motivated by the user. Such works
overlook that VHs may also be motivated by operator requirements. For example,
an operator must be able to migrate users from one access technology to an-
other to maximize overall utilization or relieve a congested network. To overcome
this shortcoming, we introduced the novel notion of operator-motivated VHs.
Further, in addition to detailing the considerations involved in identifying and
selecting users for migration, we showed how such handoffs can be implemented
at different levels in an HWN’s hierarchy and with different objectives, including
admission and congestion relief.
4. Service Delivery Cost Reduction: In future HWNs, an operator’s costs for de-
livering services to the mobile end user will be highly variable. For example, the
emerging paradigm of multihop cellular networks involves reliance on users
to relay services from the operator’s gateways. There is also the evolving notion
of “spectrum sharing” between operators, motivated by the introduction of
cognitive radios. Accordingly, modules are required to reduce the costs in-
curred by the operator. Towards this end, we introduced a proactive module,
relying on forced vertical handoffs, that acts upon an operator’s cost structure
and continuously reduces the per-user service delivery cost.
1.5 Thesis Outline
The organization of this thesis is as follows.
In the following chapter, we elaborate on the evolution of wireless networks to-
wards HWNs and describe the enabling advances in technology and communication
protocols. In doing so, we review the relevant proposals made in both the literature
and standardization. In addition, we detail the challenges in order to identify the
motivations for the proposed framework. Finally, an outline of the proposed
framework is offered.
In Chapter 3, we address the notion of joint functionalities in HWNs and how
joint allocation policies can enhance the performance of RRM. By identifying the
shortcomings of traditional RRM frameworks and recent proposals for HWNs, we
establish the need for new demand categorizations and robust provisioning cores. We
then offer a model for joint provisioning based on a representative demand catego-
rization and show the effect of adopting such a categorization. We also answer the
need for robust provisioning cores using stochastic programming where variability in
both user demands and network conditions can be accommodated. Using stochastic
programming is shown to yield robust solutions with realistic expectations.
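The robustness argument can be illustrated with a toy scenario-based provisioning problem; the demand scenarios, unit costs and the brute-force search below are illustrative assumptions, not the model developed in Chapter 3:

```python
# Toy two-stage view of provisioning under demand uncertainty: choose a
# capacity reservation x minimizing reservation cost plus the expected
# penalty for unmet demand. All numbers are illustrative assumptions.

scenarios = [(0.5, 10.0), (0.3, 20.0), (0.2, 35.0)]  # (probability, demand)
c_reserve, c_short = 1.0, 3.0  # unit cost of reserving vs. falling short

def expected_cost(x: float) -> float:
    """First-stage reservation cost plus expected shortage penalty."""
    shortage = sum(p * max(d - x, 0.0) for p, d in scenarios)
    return c_reserve * x + c_short * shortage

# Brute-force search over integer reservations stands in for a proper
# stochastic-programming solver.
best_x = min(range(0, 41), key=expected_cost)
```

Note that the robust reservation here (best_x = 20) differs from provisioning for the mean demand (18): the solution hedges against the high-demand scenarios rather than optimizing for the average alone.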
Our focus in Chapter 4 is on controlling the operational cost of RRM modules.
As a case study, we examine an important RRM module dedicated to bandwidth
adaptation depending on demand intensity and medium conditions. We outline the
general design considerations for such modules. We also overview previous proposals
and note their limitations. We then introduce the notion of probabilistic triggers and
show how they can be used in controlling the operational cost. Using our proposal,
the tradeoffs between admission ratios and user satisfaction can be enhanced and
controlled. Moreover, we show how the use of probabilistic thresholds as a network
selection metric can improve admission ratios in a heterogeneous setting.
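As a minimal sketch of the probabilistic-trigger idea (the load thresholds and the linear mapping from load to trigger probability are assumptions for illustration, not the design developed in Chapter 4):

```python
import random

def trigger_probability(load: float, low: float = 0.6, high: float = 0.95) -> float:
    """Map a measured load in [0, 1] to a trigger probability.

    Below `low` the adaptation module never fires; above `high` it always
    fires; in between the probability grows linearly. The thresholds and
    the linear mapping are illustrative assumptions.
    """
    if load <= low:
        return 0.0
    if load >= high:
        return 1.0
    return (load - low) / (high - low)

def should_trigger(load: float, rng=random.random) -> bool:
    """Fire the adaptation module with the computed probability."""
    return rng() < trigger_probability(load)
```

Compared with a deterministic threshold, such a trigger spreads invocations of the module over a load range, avoiding bursts of adaptation activity around a single cut-off point.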
In Chapter 5, we introduce a novel module that exploits VHs in RRM operations.
We demonstrate the feasibility and advantages of utilizing such operator-motivated VHs
and enumerate the considerations for the design and implementation of a module
dedicated to their management. We also discuss the elements involved in identifying
and selecting users to be migrated from one network to another, including factors for
user differentiation. Our results re-emphasize the need for joint RRM in HWNs
as they yield substantial improvements in blocking probabilities, regardless of load
distribution. We also implement a controller for user differentiation and show its
robustness.
We continue our discussion on operator-motivated VHs in Chapter 6, with greater
attention to proactive applications of Operator Motivated Vertical Handoffs (OMVHs).
As a case study, we take the objective of reducing the operator’s costs of service de-
livery. After establishing the motivation for our choice of objective, we highlight the
elements of cost reduction and detail our proposal for robust cost management. We
also evaluate the proactive module under various conditions and show the resulting
reductions in per-user service delivery cost, in addition to the module’s effects on
other performance aspects.
Finally, in Chapter 7, we conclude this thesis and hint at possible future directions.
Chapter 2
Background and Framework Overview
In this chapter, we take a closer look at the evolution of wireless networks towards
HWNs, expanding on matters relevant to the design and implementation of RRM
frameworks. We also outline the status quo in the literature, standardization and
commercial venues. Having established sufficient impetus for our work, we provide
an overview of our proposed framework.
2.1 Introduction
A wireless overlay is realized whenever two or more wireless networks, differing in
at least one architectural aspect, have overlapping coverage. Hierarchical Cellular
Networks (HCNs) [OGA99], such as the one shown in Figure 2.1, and more gener-
ally hierarchical wireless structures, are direct examples of Wireless Overlay Networks
(WONs) [KB96] where the
different tiers employ the same access technology, e.g. GSM, but comprise cells of dif-
ferent coverage ranges. The motivation behind deploying such networks is to enhance
service delivery through a certain categorization of users [KH04]. A possible catego-
rization can depend on users’ mobility and the type of services they request: tiers
with wider coverage can accommodate users with high mobility and low data transfer
rates, while tiers with narrower coverage can accommodate users with low mobility
and higher data transfer rates. Assigning fast-moving users to tiers with larger cells
reduces the number of handoffs they undergo, while slow-moving users take
advantage of slowly-varying medium conditions and receive high data transfer rates.
Figure 2.1: A hierarchical cellular network.
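This mobility-and-rate categorization can be sketched as a simple assignment rule; the speed and rate thresholds below are illustrative assumptions, not values from the thesis:

```python
# Hypothetical tier-assignment rule for an HCN. The thresholds are
# illustrative assumptions chosen for demonstration only.

SPEED_THRESHOLD_KMH = 30.0   # assumed boundary between fast and slow users
LOW_RATE_KBPS = 64.0         # assumed boundary for low-rate services

def assign_tier(speed_kmh: float, rate_kbps: float) -> str:
    """Place a user in the wide- or narrow-coverage tier."""
    if speed_kmh >= SPEED_THRESHOLD_KMH or rate_kbps <= LOW_RATE_KBPS:
        return "macro"  # wide coverage: fewer handoffs for fast or low-rate users
    return "micro"      # narrow coverage: high rates for slow users
```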
In addition to having a single access technology, HCNs essentially form a single
network in terms of management and autonomy. This means that an HCN is deployed
by a single wireless network operator. The different tiers are also backboned together,
including connectivity and signaling inside the network and between the operator’s
network and other networks, e.g. the public telephone network. Based on these
characteristics, HCNs can be considered as forming homogeneous wireless overlays.
The term “homogeneous” contrasts with the classification of heterogeneous wireless
overlays, which are formed when an overlap is created between networks employing
different access technologies. Such networks can be managed by more than one operator and
can require different backbones. Networks created in this manner, such as the one
shown in Figure 2.2, are called Heterogeneous Wireless Networks (HWNs).
Figure 2.2: A heterogeneous wireless network, overlaying a macro-cell, a micro-cell,
a WLAN hotspot, a WLAN mesh, a WiMax mesh and satellite coverage.
For an operator, a definite motivation in deploying an HWN is higher revenue
through exploiting the properties of each access technology [AFLP03, OKV+06, PKC+06].
For instance, technologies differ in their coverage capabilities. They also differ in how
well they manage service delivery and uphold QoS guarantees in terms of bandwidth
allocation, delay, jitter, etc. As well, different technologies vary in cost and speed of
deployment, in addition to operational cost. More subtle but important properties
include security capabilities and the band within which each technology operates.
The latter property becomes of great value in interference control between differ-
ent technologies or when faced with legal or economic considerations for spectrum
licensing.
Heterogeneous wireless overlays already exist. Electronics has also advanced
to enable more than one radio interface to be implemented on the same circuit
[KLD06]. Terminals have even been introduced to the market, e.g. Apple’s iPhone
and Nokia’s E60, that enable users to connect through at least three radio interfaces,
including GSM, WCDMA and IEEE 802.11. Users are hence capable of initiating
connectivity through any of these access technologies, assuming contracts or fees have
been arranged in advance.
However, to unlock the true potential of HWNs, users must be able to maintain
their active sessions when moving from one access technology to another, without the
need to disconnect and re-connect. The objective of this chapter is to discuss the
notion of such inter-technology handoffs, in addition to their requirements, triggers
and applications. We also outline the challenges such handoffs impose on traditional
resource management systems.
The remainder of this chapter is organized as follows. In the following section,
we present the notion of inter-technology handoff and discuss how advances in IP
are enabling inter-technology handoffs. In Section 2.3, we go deeper into the triggers
of vertical handoffs in addition to considerations made when selecting the most ap-
propriate network for a user. We then outline in Section 2.4 the challenges imposed
by inter-technology handoffs on resource management frameworks and overview the
status quo in the literature, standardization and commercial venues. Finally, we
introduce our framework and overview its components in Section 2.5.
2.2 Vertical Handoffs
The term Vertical Handoff (VH) first appeared in [KB96, SK99] — possibly the
earliest work on wireless overlay networks employing different technologies. The word
“vertical” opposes “horizontal” in “horizontal handoffs” which describes instances
when a terminal is switched between two gateways of the same access technology, i.e.
an intra-technology handoff.
To realize a meaningful handoff, common ground needed to be established be-
tween the different access technologies, even under the same operator. This was
noted in [KB96, SK99], where IP was chosen for backhauling with Mobile IP (MIP)
as the mobility management protocol. The choice of IP as a unifying fabric for het-
erogeneous wireless networks can be justified as follows. Being a packet-switched
paradigm, IP results in a more efficient network compared to circuit-switched net-
works [KR07]. It also adds the advantage of making HWNs readily compatible with
the Internet core, which further means solid support for current Internet applications
and services. The fact that IP is the basis of the Internet and has sustained its growth
attests to its capability of being extended to further applications.
More generally, there are different ways to integrate wireless IP networks [LPMK05,
Sal04, SM05]. Depending on how two networks are integrated, functionalities such as
handoff or resource management vary in their implementation. However, a minimum
level of cooperation is assumed in all settings. Interworking between different tech-
nologies, called coupling, can be generally classified as loose or tight [LPMK05].
Loose coupling offers simple implementation (only minor enhancements are required)
but results in high handoff delays; in such settings, the two networks need not belong
to the same operator. On the other hand, tight coupling arises when, say, a WLAN
is considered and treated as an access gateway within a UMTS network, even in
terms of signaling and management. This setting is desired when the networks are
managed by a single network operator.
Beyond integration, mobility management is required for VHs to be performed.
As aforementioned, MIP was used in the settings of [KB96, SK99]. MIP has evolved
much since its initial introduction. Because of the large handoff delays caused by
slow “movement detection” and home registration, however, it has mostly been
regarded as a portability rather than a mobility solution. Nevertheless, many exten-
sions have been made to accommodate both intra-domain mobility and QoS main-
tenance [THM05c]. For example, the work in [DNE02] enables MIP with fast hand-
offs through pre-reservations using the Resource Reservation Protocol (RSVP). Still,
MIP is a network-layer solution and is accordingly regarded as unfriendly towards
upper layers, especially the transport layer, which is highly sensitive to service dis-
ruptions. Accordingly, proposals for IP mobility at other layers were sought.
A thorough discussion on candidate layers where Internet mobility would fit can
be found in [Edd04]. Figure 2.3 summarizes the general arguments on IP mobility.
Proposals for transport-layer-centric solutions, e.g. [TXA05], have the advantage
of interoperating with legacy transport-layer protocols such as the Transmission
Control Protocol (TCP). There are also proposals that employ the Stream Control
Transmission Protocol (SCTP); a solution that capitalizes on terminal multi-homing
facilitated by IPv6 can be found in [SZ06b]. However, recent interest has focused on
solutions based on the thriving Session Initiation Protocol (SIP), which is increasingly
being used for various applications such as presence and instant messaging, in addition
to video and audio streaming and communication, e.g. [BAD06]. SIP is also taking
on a more special role as it stands at the core of IMS, which, among other things,
is quickly becoming an attractive and comprehensive solution for deploying IP-based
networking cores. Nevertheless, the search continues for a cross-layer IP mobility
solution that is transport-layer friendly.
Figure 2.3: General arguments on where mobility fits best in the IP stack. A solution
above the transport layer would require heavy inter-layer processing; a solution below
the network layer would devastate upper-layer performance and limit the QoS offered;
a transport-layer based solution would maintain smooth end-to-end performance while
requiring minimal to no changes in other layers.
The term “multi-homing” originates within the context of backhauling Autonomous
Systems (ASys) in the Internet. When an ASy is multi-homed, it can connect to the
Internet core through multiple points. A multi-homed ASy can indicate to other
ASys which points of entry (ingress) are preferable. The ASy itself can also select
which egress points to choose for different traffic. In this manner, multi-homing
relieves ASys of single-point failures and makes load balancing possible for both
ingress and egress traffic.
Such advantages are sought when implementing multi-homing at the terminal
level. IPv6, through its larger address space and its support for multiple IP addresses
on one interface, offers the basis for terminal multi-homing. IPv6 also enables termi-
nals to maintain multiple simultaneous sessions or connections on different interfaces.
These features help realize smooth handoffs that are also transport-layer-friendly. Put
simply, a terminal can initiate a new and full IP connectivity with a new gateway
before disconnecting with its current one. It can also arrange for redirecting services
and sessions towards newly established connections, hence the feasibility of seamless
handoffs.
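Put as a sketch, this make-before-break sequence could look as follows; the interface and session abstractions are hypothetical stand-ins for IPv6 address configuration and transport-layer redirection:

```python
# Hypothetical make-before-break vertical handoff using terminal
# multi-homing. Interface and Session are illustrative abstractions.

class Interface:
    def __init__(self, address: str):
        self.address, self.up = address, False
    def connect(self):       # acquire connectivity (e.g. IPv6 configuration)
        self.up = True
    def disconnect(self):
        self.up = False

class Session:
    def __init__(self, address: str):
        self.address = address
    def redirect(self, address: str):  # e.g. transport-layer rebinding
        self.address = address

def vertical_handoff(sessions, old_iface, new_iface) -> bool:
    """Establish the new association before releasing the old one."""
    new_iface.connect()                # 1. full IP connectivity on new gateway
    if not new_iface.up:
        return False                   # keep the current association on failure
    for s in sessions:
        s.redirect(new_iface.address)  # 2. move sessions to the new address
    old_iface.disconnect()             # 3. only now release the old gateway
    return True
```

Because the old gateway is released only after the sessions have been redirected, no session ever lacks an active interface, which is the essence of a seamless handoff.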
2.3 Network Selection
HWNs will give users several options when they need to initiate a session or connec-
tion. With VHs feasible, selecting the most appropriate network for the user becomes
persistently possible and not just at session initiation. The core of network selection,
as shown in Figure 2.4, is essentially a compromise between user policies and profile
on one hand and operator policies on the other.
Figure 2.4: Elements of network selection. User preferences, application requirements,
terminal capabilities and user attributes on the user’s side are weighed against network
capabilities and network conditions on the operator’s side.
User policies dictate requirements based on the user’s contract, applications and ter-
minal capabilities, in addition to the user’s behavior profile (mobility, application
frequency, etc.). User preferences may specify a verbal description of the expected
QoS, monetary readiness, service availability, seamlessness, etc. Terminal capabilities
include, for example, radio interfaces (type, quality, power), presentation capabilities (speakers,
display, codec) and loaded protocol set. Application requirements dictate metrics
such as bit error rate, jitter, bandwidth, encryption level, etc. The final consider-
ation from the user perspective is the user’s profile, which includes attributes such
as current position relative to available networks in the overlay, mobility, type of
applications usually requested, etc.
On the operator’s side, network selection depends on the operator’s objectives. The
operator, for example, could be interested in load balancing and may direct users
accordingly. If the user is within the coverage of more than one network, the operator
would also consider the capabilities of the different networks. For various reasons,
the capabilities of the different networks under an operator may change over time in
terms of coverage, available services, QoS, reliability or even cost. In terms of policies,
in addition to load balancing, the operator may be interested in dictating the type of
services delivered through its network or the type of terminals that can
be serviced by a certain access technology.
The complexity of network selection hence depends on the number of consider-
ations taken into account. At the basic level, if a certain network is best given a
certain attribute X, and the same attribute is the most significant for the user, then
the user will be associated with that network whenever possible. If we take this
attribute to be, say, signal strength, then triggers for VHs will be the same as
for horizontal handoffs. However, reasons to trigger a VH can be more complex and
varied.
Consider, for example, a user residing in an overlay of two access technologies. If
the user attempts to initiate a connection in one access technology, the operator
will first consider, based on the user’s profile and position, whether this is the best
network to accommodate the user. If it is possible to accept the user in the requested
network, the operator will grant the request. Otherwise, the operator may redirect
the user’s request to the second network. Once admitted, the user could still seek his
initial choice, and the operator’s decision may change depending on the conditions of
the overlay networks at the time. However, if the user starts to move beyond the
overlay area, or the terminal’s reception becomes obstructed, the user’s priority
to hand off may rise.
If the operator employs a more advanced form of cognition, e.g. location profiling
[AK06, SZ07], the operator may judge whether it is best to admit the request of a user
traversing an overlay area into a technology with limited local coverage. Alternatively,
for the same setting, the operator may direct the user to the local network to enhance
the user’s chances of a successful handoff. Another application can be borrowed
from the context of HCNs where, based on the user’s mobility, the user would be
associated with the network that minimizes the number of handoffs (whether vertical
or horizontal), hence decreasing the user’s dropping probability.
2.4 RRM issues in HWNs
Network selection is an involved issue in HWNs. Rightfully, it is receiving substantial
attention in research. The issue has been addressed for both new requests and ac-
tive calls persistently seeking the best connection [XV05, LLZ06, KJ07, BL06, SNW06,
WKG99, HLTT06, YPMM01, GGZZ04, SZ06a]. In general, most of the work in the
literature utilizes a form of “network score function” where the different networks
are rated through normalizing and weighting different criteria. For example, if
connection cost is more important to the user than the QoS received, then cost would
carry a larger weight. Some works, however, have taken the received signal
strength as the sole element of network choice in HWNs. And while signal strength
is mandatory in determining a network’s availability, it remains only a single factor
in determining the most appropriate network. Notwithstanding, the general complexity
of network selection has led researchers to propose heuristics [XV05], the use of
logical rule-based [SS05] and policy-based [SZC07] frameworks, and neural networks
[Zha04].
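A minimal sketch of such a score function, using a simple additive weighting of min-max-normalized criteria (the criteria, value ranges and weights are illustrative assumptions, not drawn from any of the cited works):

```python
# Illustrative "network score function": rate candidate networks by a
# weighted sum of normalized criteria. Ranges and weights are assumptions.

def normalize(value, lo, hi, benefit=True):
    """Min-max normalize a criterion to [0, 1]; invert for cost criteria."""
    x = (value - lo) / (hi - lo)
    return x if benefit else 1.0 - x

def network_score(net, weights):
    """Weighted sum of normalized criteria for one candidate network."""
    score = 0.0
    score += weights["bandwidth"] * normalize(net["bandwidth_mbps"], 0, 54)
    score += weights["cost"] * normalize(net["cost_per_mb"], 0, 1, benefit=False)
    score += weights["rss"] * normalize(net["rss_dbm"], -100, -40)
    return score

def select_network(networks, weights):
    """Pick the candidate with the highest score."""
    return max(networks, key=lambda n: network_score(n, weights))
```

A cost-sensitive user would simply place a larger weight on the cost criterion, shifting which network the function selects; signal strength enters as just one criterion among several, in line with the discussion above.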
The resulting demand patterns in HWNs are hard to predict. However, attempts have
been made to deduce demand patterns based on projected behavioral models for
users in HWNs [ZLS06, SGB06, SHM04, VSP03]. Using these models, the design
of efficient RRM components may become more accessible. Nevertheless, the offered
models remain projections of future use, and the difficulty of managing HWNs
persists.
Traditionally, provisioning has taken a key role in RRM frameworks. In cellular
networks, for example, making provisions for handoff calls is important since the
consumer reaction to a dropped call is more aggravated than the reaction to a blocked
new call [LK02, DZ05, NA95, GB06]. In HCNs, too, similar attention has been given
to appropriately designating resources to the different tiers of the network [OGA99,
KH04]. In HWNs, however, provisioning becomes a more challenging objective. On
one side, an appropriate categorization of demand is needed. On the other, realizing
robust provisioning becomes more challenging in light of highly dynamic demand
patterns. Beyond this, it becomes important to recognize that independently managing
the resources of the various networks risks resource underutilization.
Several proposals have been made seeking comprehensive RRM solutions for HWNs.
Notions such as Common RRM (CRRM) and Joint Call Admission Control (JCAC)
are emerging, e.g. [SBK+07, SCD06, THH02, LBG06]. Other approaches making
joint considerations have also been proposed [YK07, SJZS05]. We note, however,
that some works have merely taken considerations made for homogeneous networks
and applied them to heterogeneous settings without modification. The work in
[YK07], for example, directly applies considerations for homogeneous networks,
without any modifications, to a heterogeneous setting. More critically, the
authors take all VHs to be at the same level of significance and require provisioning
similar to that of horizontal handoffs in homogeneous networks.
Therefore, there is still a need for solutions that more closely accommodate and
exploit the characteristics of HWNs. For example, joint allocation policies become
necessary to enhance network resource management. However, in order to imple-
ment profitable provisioning and allocations, meaningful demand categorizations that
fit the nature of HWNs are required. Traditional demand categorizations (e.g. new vs.
handoff, real-time vs. non-real-time, etc.) are limited in light of the user-centricity
of HWNs. The highly variable user demand in HWNs also calls for allocation poli-
cies that are robust and can accommodate the unique characteristics of individual
networks.
RRM modules in HWNs will also be involved in many decisions at any given time.
Some decisions involve processing, signaling, or both. It is hence essential
to re-examine traditional RRM modules with the objective of controlling their costs.
CHAPTER 2. BACKGROUND AND FRAMEWORK OVERVIEW 27
In addition, the bulk of work addressing VH decision management focuses on sat-
isfying user requirements. Since the possibilities of user preferences, terminals, appli-
cation requirements, etc., are endless, VHs challenge RRM frameworks. However, VHs
can also be exploited by RRM functionalities in order to satisfy operator-motivated
objectives, an aspect that remains unaddressed in the literature.
2.5 An RRM Framework for HWNs
Prior to introducing the framework proposed in this thesis, we will discuss the basis
for our approach and the high-level objectives of the framework.
Despite the challenges presented by heterogeneity, HWNs equally offer opportuni-
ties for enhanced operation. In identifying the actual capabilities of each individual
access technology, an RRM designer is given more liberty in realizing a more capable
network. More specifically, it is only through exploiting the complementary nature of
the individual technologies that an RRM framework can realize the full potential of
an HWN.
In addition, traditional arguments for the design and operation of RRM modules
need to be examined in light of the projected nature of demand in HWNs. For
example, the user-centricity of HWNs implies highly dynamic demand patterns that,
in effect, impose substantial processing and signaling requirements on RRM frameworks.
Left uncontrolled, these requirements waste network resources.
In examining traditional frameworks, it is also important to realize that inde-
pendently managing the resources of various technologies under an operator risks
underutilization and resource mismanagement. However, if RRM functionalities were
designed so that resources in different technologies could be jointly managed,
whether through distributed or centralized management, then more operational points
become accessible to the operator.
We summarize the general basis of our approach through the following:
1. Utilizing joint functionalities.
2. Controlling the operational requirements of RRM modules.
3. Exploiting the complementarity of different access technologies.
2.5.1 Framework Objectives
HWNs must maintain the capabilities attained by today’s 3G networks in accommo-
dating the various forms of traffic for different applications. However, HWNs must
also accommodate future applications and their traffic characteristics. In other words,
RRM frameworks for HWNs must not be tied to specific modes of traffic.
Also important is to provide, as much as possible, for the user’s Service Level
Agreement (SLA) or application requirements across the different networks. This
includes a harmonious set of reservations for mobile users. Since the various tech-
nologies are disparate in their capabilities, the notion of absolute QoS preservation
across heterogeneous wireless networks should be dismissed. Instead, the more real-
istic notion of “relative QoS” preservation should be adopted.
Another objective for the framework is to be economically attractive both in de-
ployment and in operation. To avert deployment difficulties, the individual design
integrity and operation of the different access technologies should be maintained as
much as possible. Put differently, the implementation of the framework elements or
ideas should be minimally invasive on existing implementations. In terms of opera-
tion, the objective is to maintain a reduced implementation complexity that would
result in minimal processing and signaling. This requirement stems from the projected
nature of demand patterns in HWNs.
2.5.2 Overview of Framework Components
The framework, schematized in Figure 2.5, comprises three major components:
• Joint allocation policies
• Modules with controlled operational cost
• Operator-motivated vertical handoffs (OMVH)
[Figure: joint provisioning and the proactive OMVH module receive network measurements and user profiles and attributes, and relay admission quotas and OMVH decisions to the joint admission control and reactive OMVH modules, which in turn handle admission requests, network conditions, and admission decisions.]
Figure 2.5: Schematic of the framework’s main components.
The following elaborates on the details of each component.
Joint Allocation Policies
Resource provisioning and admission control are at the heart of any RRM framework.
In the proposed framework, admission control considers the user’s profile and position
in its admission decision so that a satisfying level of user experience is maintained.
CHAPTER 2. BACKGROUND AND FRAMEWORK OVERVIEW 30
To do this, users within overlays are assigned the best network that fits both their
requirements and the capabilities of the networks in the overlay. To regulate the op-
eration of the admission control module, a provisioning module is also implemented
that, based on the demand patterns and user categorization, controls the operation of
the admission control. However, unlike traditional RRM frameworks, provisioning is
performed over the resources of the different networks to utilize their individual resources
in the most effective manner.
Periodically, the provisioning module decides how to distribute the resources of
the different networks based on a certain categorization of users. Within the decision
epoch, the demand, based on both active and rejected connections, is sampled to
predict the demand during the next decision epoch. The admission control module
then operates within the bounds set by provisioning.
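The epoch-based cycle described above can be sketched as follows. This is a minimal illustration, not the mechanism specified in this thesis: the class name, the user categories, and the exponential-smoothing predictor are all assumptions made for the example.

```python
# Sketch of the provisioning/admission interplay described above.
# The smoothing predictor and category names are illustrative
# assumptions, not the thesis's specified mechanism.

class JointProvisioner:
    def __init__(self, categories, alpha=0.5):
        self.alpha = alpha                      # smoothing weight
        self.predicted = {c: 0.0 for c in categories}

    def end_of_epoch(self, admitted, rejected, total_capacity):
        """Sample demand (active + rejected) and set next epoch's quotas."""
        quotas = {}
        demand_sum = 0.0
        for c in self.predicted:
            # Demand is sampled from both admitted and rejected requests.
            observed = admitted.get(c, 0) + rejected.get(c, 0)
            # Exponentially weighted estimate of next-epoch demand.
            self.predicted[c] = (self.alpha * observed
                                 + (1 - self.alpha) * self.predicted[c])
            demand_sum += self.predicted[c]
        for c, d in self.predicted.items():
            share = d / demand_sum if demand_sum > 0 else 1 / len(self.predicted)
            quotas[c] = int(total_capacity * share)   # admission quota
        return quotas

prov = JointProvisioner(["dual_mode", "single_mode"])
q = prov.end_of_epoch({"dual_mode": 30, "single_mode": 10},
                      {"dual_mode": 10, "single_mode": 0},
                      total_capacity=100)
```

The admission control module would then admit requests only within the quotas returned for the current epoch.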
Controlled Operational Cost
General mechanisms for controlling the operational cost of RRM modules are exam-
ined in this component. As a case study, we chose a significant module utilized in
multimedia wireless networks, one responsible for adapting user allocations based on
both demand and network conditions. Through the proposed mechanisms, the opera-
tional profile and tradeoffs of the examined module are investigated. Two advantages
are gained in identifying these tradeoffs: first, the module can be directed towards
the best operational mode depending on the temporal objectives and requirements
of the operator; second, examining the tradeoffs of different RRM modules paves the
road to an effective and meaningful plug-and-play framework design for future wireless
networks.
CHAPTER 2. BACKGROUND AND FRAMEWORK OVERVIEW 31
Operator Motivated Vertical Handoffs
While VHs impose challenges on RRM frameworks, they can be exploited to the
benefit of operators. In this component, a module is designed to oversee the identi-
fication and selection of user terminals to be migrated from one network to another
based on demand requirements and network conditions. For example, when the op-
erator recognizes an imminent case of congestion or overload, or observes a certain
network operating below its utilization objective, the operator can migrate users
either to relieve the congested network or to increase the utilization of the less
loaded network.
Two OMVH modules are designed within this component: one reactive, intended
for applications requiring immediacy; the other proactive, used to pursue long-term
objectives. For example, the reactive OMVH module can aid the operation of the
admission control module in relieving allocations to serve a certain customer or set
of customers. Another application of the reactive OMVH is to immediately correct
the allocations after a provisioning decision is made, so that the provisioning objective
is reached sooner. Proactive applications of OMVH, on the other hand, can be utilized
in load balancing or in reducing the operator’s cost of service delivery.
2.5.3 Architectural Considerations and Framework Interac-
tions
Prior to describing the interactions between the framework components, a description
of the operational architecture and considerations is due. For an HWN overseen
by a single network operator, a hierarchy of management decisions at the different
network levels exists. This hierarchy, as shown in Figure 2.6, conforms to a multi-
tiered connectivity structure whereby the considerations for a single access gateway
are made at the lowest level while considerations for the entire network managed by
the operator are made at the highest level. In between, two further levels can be
recognized. Directly above the gateway level is the overlay level, where the scope of
decision for one network spans the conditions and capabilities of other networks
overlaid in its coverage. Note that decisions at the gateway and overlay levels are
both performed at the same network entity, i.e. the access gateway. The difference
between these two levels lies in the scope of a decision’s effect. At the gateway level,
the decisions made do not require information from other networks in the overlay,
while such information is required at the overlay level. Given that access gateways
are responsible for call-by-call and packet-by-packet management, decisions at the
gateway and overlay levels are made at the finest granularity of decision epochs in
the network.
Higher in the network hierarchy is the cluster level, where a single management
entity, e.g. a Radio Network Controller (RNC), assumes the responsibility of man-
aging the resources of an entire cluster, i.e. a group of gateways. The decisions made at
this level are mainly concerned with satisfying certain objectives over intervals longer
than those considered at lower levels, e.g. joint provisioning. Note that it is pos-
sible in certain cases to impose a virtual clustering, i.e. one not based on physical
connectivity. This is the case when, for example, a geographical clustering is used.
At the operator level, a single management entity oversees the set of clusters that
comprise the entire network. The decision frequency at this level is the lowest, and
decisions are made at the coarsest granularity of decision epochs.
[Figure: four-level decision hierarchy. Operator level: spans the entire operator’s network, where a single entity oversees multiple clusters. Cluster level: clustering can be either logical, e.g. an RNC overseeing a set of technologies creating an overlay, or virtual. Overlay level: establishes the scope of consideration when a gateway makes a decision based on the conditions of other networks overlaid within its coverage. Gateway level: establishes the scope of consideration when an access gateway considers decisions affecting only its own coverage.]
Figure 2.6: Decision hierarchy in an operator’s HWN.
The above description distinguishes the location of different types of decision along
the network decision hierarchy. It also identifies the scope of ramifications for decisions
made at each level. However, the decision hierarchy does not mandate a specific mode
of operation, viz. centralized vs. distributed. Factors affecting the choice of mode
are mainly those affecting a decision’s effectiveness, such as the resources consumed
by signaling and processing, signaling delay, etc.
We now describe the interactions between the framework components. For illus-
tration, the interactions are schematized in Figure 2.7.
The joint admission control component attends to considerations at both the
gateway and the overlay levels. At the gateway level, the joint admission control
responds to requests that require a decision based only on information about the
[Figure: framework interactions across the decision hierarchy. At the operator level, the proactive OMVH module operates within the provisioning bounds and based on operator objectives. At the cluster level, joint provisioning sets admission quotas for different user categories based on demand measurements. At the overlay level, reactive OMVH modules respond to admission requests, congestion instances and provisioning decisions. At the gateway level, joint admission control checks admission possibility within the provisioning quotas, and involves BAA, preemption or reactive OMVH when needed. Mechanisms for cost control are potentially applicable to all reactive components.]
Figure 2.7: Framework interactions.
gateway’s own coverage. In other words, the scope of decision is limited to what
affects only the gateway’s operational point and not other networks overlaid within
the gateway’s coverage. In this sense, functionalities accessible to the gateway include
bandwidth adaptation and pre-emption. Such functionalities respond to the joint
admission control component when the need arises, e.g. instances of congestion.
For users making requests within coverage overlays, the joint admission control
tries to accommodate the user in the most appropriate network. The choice of network
depends on several factors from both the user and the operator sides. Elements
of network choice were previously discussed at length in Section 2.3. Knowing the
conditions and capabilities of other networks in the overlay, the joint admission control
is capable of affecting the operational point of more than one network at a time. For
example, if the joint admission control employs a reactive OMVH to, say, free some
resources from a particular network, then both the migrating and receiving networks
are affected.
Reactive OMVH modules also respond to indicators and measurements at the
cluster level. Examples include instances of congestion and load imbalance, i.e. when
the load in the different networks deviates from the operator’s utilization objectives.
Such engagements are less frequent than the engagement of OMVH at the overlay
level.
The framework component concerned with controlling the operational cost of
RRM modules is applicable to almost any reactive RRM element. While in this
thesis we have focused on controlling the operational cost of bandwidth adaptation
algorithms, the arguments presented herein can be exercised over and extended to
other components.
Both the joint provisioning and proactive OMVH modules operate between the op-
erator level and the cluster level. Together, they aim at materializing the long-term
objectives of the network operator.
The joint provisioning component receives inputs from two directions. From the
different networks in the structure, provisioning receives measurements relative to the
physical capabilities and condition of each network. From the joint admission control
module, provisioning receives demand distributions. The provisioning module then
processes these inputs and relays admission quotas for different user categories to the
joint admission control component. In all its decisions, the joint admission control is
bound by the admission quotas. Joint provisioning may also rely on reactive OMVH
modules; for example, once new admission quotas are computed, the operator may
desire to reach the target operational points in the respective networks as soon as
possible.
Proactive OMVH modules are the least frequently engaged in the proposed RRM
framework. Their objective is to adjust the network associations of users across
the entire operator’s network in a manner that conforms to the joint provisioning
component, in addition to other objectives.
Chapter 3
Provisioning in HWNs
The functionalities of admission control and provisioning play an important role in
any resource management framework. In an HWN, it is possible that these function-
alities operate in each network independently of the conditions and capabilities of
other networks, whether or not an overlay exists. However, such disjoint operation
potentially leads to underutilization. More generally, if the RRM functionalities of
the different networks are performed jointly, the wireless network operator would be
able to utilize each technology to its full potential according to its characteristics.
In this chapter, we examine how admission control and provisioning can be per-
formed jointly. In doing so, we discuss the relevant considerations and implementation
issues. We also describe how robust provisioning can be achieved for HWNs whereby
the variability of network capabilities and network conditions can be taken into ac-
count, in addition to accommodating demand variability.1
1A partial exposition of the work presented in this chapter has previously been made in [THM04, THM05a, THM06a].
CHAPTER 3. PROVISIONING IN HWNS 38
3.1 Introduction
Consider an instance where users with multi-interface terminals reside in a certain
coverage overlay of networks. If most of the users prefer one network technology over
the other, this will ultimately result in overloading the selected network and underutilizing
the other. The selected network will yield low admission ratios (high blocking
probabilities), and the QoS of admitted users may actually degrade.
Such scenarios can be avoided if a network can be aware of the resources available
in the different networks with which it shares coverage, and if a mechanism exists
whereby user requirements and characteristics can be recognized by the network. In
such settings, where the resources of different networks are jointly managed, users can
be associated with networks that best fit their profile and, at the same time, that
enhance the performance, i.e. operational stability and revenue, for the operator.
More generally, we can enumerate some key motivators of implementing joint
functionalities as follows:
• Exercise transparency: A user seeks services from the operator, and is sat-
isfied as long as the service is delivered well. The user is not concerned with
being connected to a specific technology.
• Avoid underutilization: Network capabilities are resources that need to be care-
fully managed. If the resources of one network are depleted, the operator can
start redirecting calls to other networks sharing the depleted network’s coverage. If
resources are managed independently, certain networks may become congested
while others are extremely underutilized.
• Exploit complementarity: Not all wireless technologies are the same. Some
target low-mobility users and provide high data rates, while others are aimed at
users with high mobility but offer lower data rates. Technologies also vary in
their sensitivity to weather, congestion, security attacks, etc. Managed as one
network, each technology can be used to its full potential.
• Accommodate user characteristics: Users may opt to change their association for
QoS, cost, security, etc. This may occur either at session initiation or during an
active session. A session can be moved as a whole or can be partially delivered
through more than one network. Having joint functionalities will cater to such
connections.
• Simultaneous connectivity: The last point extends to achieving QoS through
using multiple connections at a time. More than any other motivator, this
highlights the need to coordinate allocations across the different networks.
One objective of this chapter is to highlight the gains achieved when admission
control and provisioning are performed jointly. Specifically, we describe the new
considerations involved when provisioning resources in HWNs, including behavior
and demand identification and measurement.
Another objective of this work is to set a general framework for designing provi-
sioning cores in HWNs whereby not only demand variability can be accommodated,
but also variability in network capabilities. Towards this end, we discuss how
stochastic programming can be utilized in achieving this objective and detail
the relevant considerations.
The remainder of this chapter is organized as follows. We review relevant literature
in Section 3.2. We then discuss considerations of network selection and admission
control in Section 3.3. In Section 3.4, we offer a model for joint admission control and
CHAPTER 3. PROVISIONING IN HWNS 40
provisioning, followed by an evaluation setup made to examine the characteristics of
joint functionalities in Section 3.5. In Section 3.6, we present a framework for robust
provisioning in HWNs. Finally, in Section 3.7 we summarize.
3.2 Motivations and Related Work
Admission control and resource provisioning serve two complementary objectives.
Admission control handles users’ requests and grants them when sufficient resources
can be verified to be available. Resource provisioning, on the other hand, serves
the operator’s long-term objectives of maximizing resource utilization or service
differentiation. More importantly, provisioning regulates the operation of admission
control by computing allocation policies. Together, they aim to strike a balance
between operator profitability and user satisfaction.
Provisioning, however, requires a meaningful demand categorization. In cellular
networks, for example, much work has been done on prioritizing (horizontal) handoffs
over new connection requests, e.g. [LK02, DZ05, NA95, GCB03]. This is because
users are more dissatisfied with dropped active connections than with blocked new
ones. With the introduction of multimedia applications, another categorization emerged
based on traffic mode, e.g. real-time and non-real-time applications. Examples of
work in this direction can be found in [KKCN00, ZZR06, YWL04]. In HCNs, demand
categorization includes both the mobility and the application requirements of users,
e.g. [OGA99, KH04].
CHAPTER 3. PROVISIONING IN HWNS 41
Once a demand categorization has been established, an RRM framework provi-
sions resources in two stages: demand quantification and provisioning allocation. De-
mand quantification can be performed either through modeling or through measure-
ment. Theoretical provisioning frameworks, e.g. [TBA98, XCW01], assume certain
demand characteristics, e.g. [ZD97], upon which a provisioning core can be devised.
Methods relying on measurement and estimation, e.g. [ZvdBC+01, GCB03], hence
lean more towards being practically implementable.
Depending on the nature of the resources “put aside” for provisioning, provisioning
cores can be broadly classified into two classes, although hybrids are certainly plausible. In
the first class, provisioning is done on actual resources that cannot be allocated except
for the service they are provisioned for. A more expanded look at approaches in this
class, including the commonly known “guard band”, can be found in [Eps99]. The
second class provisions what can be called “virtual capacity”, meaning that
provisioning is made on resources that, even if allocated to active connections, can
be re-assigned to certain incoming calls. Examples include threshold-based
provisioning, e.g. [KKCN00], and adaptation, e.g. [ZZR06].
Once the provisioning core finalizes its computations, admission or allocation quo-
tas are relayed to the admission control module which, in turn, begins to operate
within the newly set bounds. A survey of various admission control proposals can be
found in [GB06].
The proposals discussed above were made for wireless networks with homogeneous access.
The characteristics of such networks differ from those of HWNs in several ways. In HWNs,
users are able to initiate connectivity in more than one access technology. More
importantly, users are able to switch between different access technologies based on
an intricate set of triggers, including user profile and preferences. Only in a subset
of triggers does a VH need to be prioritized over new calls. In fact, some triggers
may assume a priority lower than a new call. For example, if an immobile user is
requesting a VH from a lightly loaded technology to a moderately loaded one, the
operator may refuse the request if there is no apparent necessity. Accordingly, new
demand categorizations are required for provisioning resources in HWNs.
To further illustrate, consider the setting shown in Figure 3.1. The mobile terminal
is residing in an overlay area of two networks. If the mobile initiates the connection
through the network with the larger coverage, the operator may judge that it is better
to accommodate the call in the network with the narrower coverage. If during the
active session the user chooses to handoff while still immobile, the operator may deem
the handoff unnecessary and reject the request. It is important to note that in such
instances rejecting the handoff request does not mean dropping the call; the
connection would simply continue in the smaller network. If the user starts moving,
as in Figure 3.1(b), the VH would be deemed necessary, as otherwise the user’s
connection would be lost.
Moreover, maintaining independent resource allocation policies in individual ac-
cess technologies may lead to a load imbalance or congestion. However, if the operator
exploits the wireless overlays then better allocation policies can be achieved. For pro-
visioning, this means that the resources of networks with overlapping coverage would
be jointly managed. As for admission control, users can be directed to the most
capable network if they cannot be accommodated by their requested network.
As RRM in HWNs has only recently started to receive attention, not many con-
tributions have been made. We note, however, the following proposals. Both Xing and
Figure 3.1: The user’s VH may not be accepted while residing in the overlay (a), but when moving towards outside the coverage of the local network, the VH becomes a necessity (b).
Venkatasubramanian [XV05] and Huang et al. [HLTT06] considered the problem of
network (technology) selection in an HWN based on multiple metrics. In both works
the problem was shown to be NP-hard and, accordingly, heuristics were proposed.
This indicates the general complexity of network selection, and explains the motivation
behind solutions based on logical, rule-based [SS05] and policy-based
[SZC07] frameworks, as well as neural networks [Zha04].
In [SCD06], Suleiman et al. combine threshold-based provisioning with a two-level
categorization of handoffs. The first level distinguishes horizontal handoffs
from VHs, the second voice from data calls. A hierarchy is established for admission
control schemes, the Vertical and Horizontal Call Admission Control (VCAC and
HCAC), each handling its respective traffic. Together, the two schemes make up Joint
CAC (JCAC). The authors discuss possibilities of load balancing but do not provide
details. They also do not detail how the thresholds are decided upon or
whether they accommodate varying demands.
Allusions to joint functionalities can be found in works advocating Common RRM
(CRRM). Work by Tolli et al. [THH02] examines an abstract but theoretically de-
tailed setup where the performance of several traffic modes is evaluated in a
hierarchical network; it shows the potential gain that can be achieved
through joint functionalities. The work by Lopez-Benitez and Gozalvez [LBG06] at-
tempts network selection with the objective of maximizing overall throughput
while minding fairness among users. Skehill et al. [SBK+07] provide an architectural
overview of how CRRM can be implemented in a UMTS/WLAN setup.
Parallel to these directions is the interest in VH management where several pro-
posals have been made to associate users, as much as possible, with networks that
better fit their profile and preferences.
Accordingly, we note that while the notion of joint RRM functionalities is gaining
appeal, only traditional demand categorizations have been used thus far. Moreover,
the provisioning cores utilized do not react to demand variability. This is critical,
since the user-centric nature of HWNs is projected to produce demand patterns that
require more carefully designed solutions.
3.3 Considerations of Joint Provisioning
There are two possible modes of joint provisioning, namely cooperative and joint man-
agement. In the cooperative setting, resource management decisions are made locally
in the individual networks while sufficient information about surrounding or overlaid
networks is promptly available. This differs from joint management, where spe-
cific resource management entities are realized that are continuously informed of the
status and demands in the different networks and are set to respond with resource
management decisions. In the case of a single operator overseeing multiple network
technologies, our focus in this thesis, the joint management setting becomes more
accessible. In either case, however, prompt and meaningful means of messaging must
be employed between the networks and the management entities.
It is theoretically hard to devise a model that captures user behavior in HWNs.
While substantial advances have been made in modeling user mobility [Bet01, MPS+07],
capturing the different factors involved in triggering VHs (preferences, selected applica-
tions and services, etc.) makes modeling more difficult. This difficulty lends itself to
solutions based on measurement and estimation.
However, a more subtle consideration is the identification of user dynamics in
HWNs. It is understood that the independent demands for each network, whether
in general or per service, can be identified. This also necessitates the iden-
tification of rejections or drops in each network. However, further categorizations of
demand are required for HWNs where, for example, position, available interfaces or
mobility can be used as bases of categorization.
3.4 A Model for Joint Allocation Policies
In this section, we detail a general model for examining joint allocation policies in
HWNs. Consider the scenario illustrated in Figure 3.2, where a WLAN hotspot is
overlaid within the coverage of a cellular network. For the purpose of this study, it
is assumed that within the coverage overlap there are no possibilities for horizontal
handoffs. Hence, in the general population, three types of wireless users can be
identified: single-mode cellular users, single-mode WLAN users, and dual-mode
users, i.e. users who are capable of associating with both types of networks, whether
simultaneously or with one network at any given time. Single-mode users, on the
other hand, are only capable of connecting to one type of network.
We consider users with single- and dual-mode terminals generating demand in an
HWN composed of two access technologies with a partial overlay. The objective
is to consider the allocations provided for users with dual-mode terminals, as they
can be served by both networks. As aforementioned, there are several ways
to partition network resources between different users [Eps99]. As an example, we
choose complete partitioning, i.e. resources are completely divided between the two
types of users, and allocations made for a certain class of users cannot be utilized by
the other.
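Complete partitioning can be sketched as follows. This is a minimal illustration under stated assumptions: the class labels, capacities, and unit granularity are invented for the example, not taken from the model above.

```python
# Minimal sketch of admission control under complete partitioning:
# each user class draws only from its own partition of resources.
# Class labels and capacities here are illustrative assumptions.

class CompletePartitioning:
    def __init__(self, partitions):
        # partitions: class -> resource units reserved for that class
        self.capacity = dict(partitions)
        self.in_use = {c: 0 for c in partitions}

    def admit(self, user_class, units=1):
        """Admit only if the user's own partition has room."""
        if self.in_use[user_class] + units <= self.capacity[user_class]:
            self.in_use[user_class] += units
            return True
        return False   # spare capacity of other classes is NOT borrowed

    def release(self, user_class, units=1):
        self.in_use[user_class] = max(0, self.in_use[user_class] - units)

ac = CompletePartitioning({"single_mode": 2, "dual_mode": 1})
assert ac.admit("single_mode")
assert ac.admit("dual_mode")
assert not ac.admit("dual_mode")   # its partition is full despite spare units
```

The last line shows the defining property of complete partitioning: a request is blocked once its own partition is exhausted, even while the other partition still has spare capacity.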
Figure 3.2: A WLAN overlaid within a cellular network canvas.
3.4.1 Architectural and Operational Considerations
In a heterogeneous wireless environment, allocations can be made in more than one
way. It is possible, for example, for a cellular operator to overlay a WLAN where
the resources in the hotspot would be controlled by a certain RNC within the ad-
ministrative structure of the cellular network. In another scenario, an entrepreneur
may set up a hotspot that is administratively independent from the cellular network,
yet provides a continuous connectivity service to cellular users. The interconnection
could be direct, similar to the one just described, or indirect, made through the In-
ternet [TL02]. In either case a service level agreement would be set up between the
cellular operator and the entrepreneur regarding how resources will be allocated and
what types of services are to be supported.
In this work no specific type of interconnection, or coupling, is specified. There
are also no restrictions assumed on the types of entities that perform the resource
allocation. For the disjoint allocation discussed below, it could be assumed that
the resources of each network are managed independently. However, for the joint
allocation, a certain degree of cooperation is naturally assumed. In cases of direct
interconnection, resources could be managed by an RNC-like entity. In cases of
indirect interconnection, a form of heterogeneity broker [CMS+02] could be assumed
to manage the resources at an appropriate level in the architectural hierarchy.
The allocation schemes presented below are meant to operate in a proactive
manner. During fixed intervals of duration T, each network collects statistics of
different demand constituents, i.e. the demands of single and dual mode terminals in
each network. Statistics could be collected over a single interval or multiple similar
intervals, e.g. the same half-hour period in the five business weekdays. Such statistics
are then reported to the resource management entity which, in turn, computes and
informs the different networks of the best admission quotas for the next interval.
3.4.2 Behavioral Model and Mobility Considerations
Within the overlay area, users generate call requests according to a certain proba-
bilistic distribution. For instance, consider user u residing within an overlay created
by the set of networks No. Requests for network i are made with probability pi, where
\[ \sum_{i \in N_o(u)} p_i = 1 \tag{3.1} \]
Once a request has been granted, i.e., the call admitted, the call remains active for a
period following another predefined probabilistic distribution. During a call, a user
may switch his association from one access network to another. The probability that
a user will maintain the initial association is p_ij with i = j; the probability that a user
will instead switch the initial association is p_ij with i ≠ j. Therefore, for a certain network
i, the following applies:
\[ \sum_{j} p_{ij} = 1 \qquad \forall i \tag{3.2} \]
Users within the area of interest are assumed to be of very limited mobility, i.e.
no users are assumed to enter or depart the overlay area during an active connection.
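The behavioral model above can be sketched numerically. The following is a minimal simulation, assuming a hypothetical two-network overlay with illustrative values for p_i and p_ij (none of these numbers come from the thesis):

```python
import random

# Hypothetical parameters for a two-network overlay (illustrative only).
p_initial = [0.6, 0.4]           # p_i: probability of requesting network i; sums to 1 (eq. 3.1)
p_switch = [[0.8, 0.2],          # p_ij: p_ii maintains the association,
            [0.3, 0.7]]          # p_ij with i != j switches it; each row sums to 1 (eq. 3.2)

def draw(probs):
    """Sample an index from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for idx, p in enumerate(probs):
        acc += p
        if r < acc:
            return idx
    return len(probs) - 1

def simulate_call():
    """Pick an initial network, then apply one in-call switching decision."""
    initial = draw(p_initial)
    final = draw(p_switch[initial])
    return initial, final

# Sanity checks mirroring equations (3.1) and (3.2).
assert abs(sum(p_initial) - 1.0) < 1e-9
assert all(abs(sum(row) - 1.0) < 1e-9 for row in p_switch)

random.seed(0)
counts = {}
for _ in range(10000):
    key = simulate_call()
    counts[key] = counts.get(key, 0) + 1
```

Counting the (initial, final) pairs over many calls recovers the empirical switching frequencies, which is the behavior the simulation experiments later in this chapter emulate with preference-triggered vertical handoffs.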
3.4.3 A Model for Disjoint Resource Allocation
Allocating the resources in a disjoint manner means that each network makes its
allocation decision independently of other networks in the overlay area. For example,
suppose that there exists a single basic service that is commonly provided in both
networks. Denote the maximum capacity of a network within the overlay area, say
network N, as B.
Let D, R and A respectively refer to the demand, rejection (unsatisfied demand)
and allocation for users such that
R = D − A (3.3)
The demand for the network resources is created by users within the overlay area,
and the requests made are either new requests or handoffs between the networks. Denote the
demand for users with single interface and dual interfaces by Dmono and Dduo, respec-
tively. Similarly, denote the allocations made for single-interface and dual-interface
requests by Amono and Aduo, respectively. At any point in time, the summation of the
allocations should be equal to the maximum capacity, i.e.
Amono + Aduo = B (3.4)
A nominal allocation policy is employed with the objective of allocating the avail-
able resources proportional to the demand of each category. Let ρ represent the
differential rejection ratio and be defined as follows.
ρ = (Rmono/Dmono)− (Rduo/Dduo) (3.5)
Then, the objective function becomes:
\[ \min\left( |\rho| + \left|\frac{R_{mono}}{D_{mono}}\right| + \left|\frac{R_{duo}}{D_{duo}}\right| \right) \tag{3.6} \]
Prior to describing the provisioning core in full, a note should be made regarding the
implied business model and the means by which fairness can be controlled. Thus far,
it is clear that rejections in single-mode and dual-mode requests are treated equally.
This can be easily changed by varying the coefficients of the rejection ratios. As for
fairness control, this can be either induced by varying the coefficients of the rejection
ratios in equation (3.5) or varying the coefficients of the differential rejection ratio in
equation (3.6).
The provisioning core for disjoint resource allocation is described in full as follows:
\[
\begin{aligned}
\text{Min} \quad & |\rho| + \left|\frac{R_{mono}}{D_{mono}}\right| + \left|\frac{R_{duo}}{D_{duo}}\right| \\
\text{Subject to} \quad & A_{mono} + A_{duo} = B \\
& R_{mono} = D_{mono} - A_{mono} \\
& R_{duo} = D_{duo} - A_{duo} \\
& \rho = (R_{mono}/D_{mono}) - (R_{duo}/D_{duo})
\end{aligned}
\]
This provisioning core is run independently in each network. Once the computations
of the allocations are done, they are passed to the admission control to regulate
its operation.
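As a sketch of how such a provisioning core can be implemented, the absolute value |ρ| in the objective can be linearized with an auxiliary variable t ≥ |ρ|, yielding a plain LP. The demand and capacity values below are hypothetical, and scipy's linprog stands in for the GLPK solver used later in this chapter:

```python
from scipy.optimize import linprog

# Hypothetical inputs (not from the thesis): one network's capacity and the
# per-class demands measured over the last provisioning interval.
B, D_mono, D_duo = 80.0, 60.0, 50.0

# Decision variables: x = [A_mono, A_duo, t], with t >= |rho|.
# Rejections are implied: R = D - A, so R/D = 1 - A/D, and the objective
# t + R_mono/D_mono + R_duo/D_duo reduces (up to a constant) to:
c = [-1.0 / D_mono, -1.0 / D_duo, 1.0]

# t >= +rho and t >= -rho, with rho = (1 - A_mono/D_mono) - (1 - A_duo/D_duo).
A_ub = [[-1.0 / D_mono, 1.0 / D_duo, -1.0],
        [1.0 / D_mono, -1.0 / D_duo, -1.0]]
b_ub = [0.0, 0.0]

# Complete partitioning: the allocations exhaust the capacity (eq. 3.4).
A_eq = [[1.0, 1.0, 0.0]]
b_eq = [B]

bounds = [(0, D_mono), (0, D_duo), (0, None)]  # 0 <= A <= D, t >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
A_mono, A_duo, t = res.x
# With these numbers the optimum equalizes the rejection ratios (rho = 0):
# A_mono ~= 43.64, A_duo ~= 36.36.
```

Skewing the coefficients of the rejection-ratio terms, as discussed above, would bias the split toward one class.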
3.4.4 Call Admission Control for Disjoint Allocation
Arguments for admission control and resource assignment in a disjoint setting merely
extend the ones made for disjoint allocation. That is, requests are processed in each
network independently from the status of other networks providing service within the
overlay area. Upon the arrival of a connection request for service in a certain network,
the request is processed against the available allocations within the network, and is
granted or denied depending on resource availability. Single-mode and dual-mode
requests are processed only against their respective allocations.
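A minimal sketch of this per-network admission step follows; the quota values are hypothetical stand-ins for the output of the provisioning core:

```python
class DisjointAdmissionController:
    """Per-network CAC: each user class is checked only against its own quota."""

    def __init__(self, quota_mono, quota_duo):
        # Quotas (in bandwidth units) computed by the provisioning core.
        self.available = {"mono": quota_mono, "duo": quota_duo}

    def admit(self, user_class, bandwidth):
        """Grant the request iff the class's own partition can cover it."""
        if self.available[user_class] >= bandwidth:
            self.available[user_class] -= bandwidth
            return True
        return False

    def release(self, user_class, bandwidth):
        """Return bandwidth to the class's partition on call termination."""
        self.available[user_class] += bandwidth

# Example: hypothetical quotas of 44 and 36 ebus; the service needs 4 ebus per call.
cac = DisjointAdmissionController(quota_mono=44, quota_duo=36)
admitted = [cac.admit("duo", 4) for _ in range(10)]   # only 36/4 = 9 calls fit
```

Under complete partitioning, the tenth dual-mode request is blocked even though the single-mode partition still has room, which is exactly the behavior the joint policy below relaxes.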
3.4.5 A Model for Joint Resource Allocation
A significant difference distinguishes the joint allocation policy from the disjoint one.
The joint allocation policy considers the resources of all the networks within the
overlay area as complementary resources that can satisfy several objectives. The
example employed in this work considers the total demand of dual-mode requests
made in both networks, and makes allocations accordingly.
Using the extended notation just described, instead of two constraints of the form
above, one for dual-mode requests in each network, a single constraint is made as
follows
Rtot,duo = (DN1,duo + DN2,duo)− (AN1,duo + AN2,duo) (3.7)
Naturally, this requires redefining the differential rejection ratio. For brevity, denote
the total demand of dual-mode users by Dtot,duo, i.e.
Dtot,duo = DN1,duo + DN2,duo (3.8)
Then, the differential rejection ratios can be expressed as follows
ρN1 = (Rtot,duo/Dtot,duo)− (RN1,mono/DN1,mono) (3.9)
ρN2 = (Rtot,duo/Dtot,duo)− (RN2,mono/DN2,mono) (3.10)
Appropriate changes to the objective function and other adjustments can be observed
in the following. This provisioning core is run at an entity that oversees the resources
of both networks. Once the allocations for the two networks are computed, the re-
spective allocation quotas are passed to the admission controller in each network.
\[
\begin{aligned}
\text{Min} \quad & |\rho_{N1}| + |\rho_{N2}| + |R_{tot,duo}/D_{tot,duo}| + |R_{N1,mono}/D_{N1,mono}| + |R_{N2,mono}/D_{N2,mono}| \\
\text{Subject to} \quad & A_{N1,mono} + A_{N1,duo} = B_{N1} \\
& A_{N2,mono} + A_{N2,duo} = B_{N2} \\
& R_{tot,duo} = (D_{N1,duo} + D_{N2,duo}) - (A_{N1,duo} + A_{N2,duo}) \\
& R_{N1,mono} = D_{N1,mono} - A_{N1,mono} \\
& R_{N2,mono} = D_{N2,mono} - A_{N2,mono} \\
& \rho_{N1} = (R_{tot,duo}/D_{tot,duo}) - (R_{N1,mono}/D_{N1,mono}) \\
& \rho_{N2} = (R_{tot,duo}/D_{tot,duo}) - (R_{N2,mono}/D_{N2,mono})
\end{aligned}
\]
3.4.6 Call Admission Control for Joint Allocation
Once the allocations have been made within a framework for joint allocation, requests
made for new calls are compared to the resources available in all networks within the
overlay area. Accordingly, and upon the arrival of a connection request, the request
is first processed against the availabilities of the network to which it was first made.
If this network does not have sufficient resources at the time of request arrival, the
network checks to see if another network can provide appropriate accommodations.
If enough resources are available, the request is handed off.
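The joint admission step can be sketched as follows, with hypothetical per-network availabilities; a dual-mode request blocked at its first-choice network is handed off to any other overlay network with sufficient resources:

```python
def joint_admit(available, first_choice, bandwidth, dual_mode=True):
    """Try the requested network first, then any other overlay network.

    `available` maps network id -> free bandwidth and is mutated on admission.
    Returns the id of the admitting network, or None if the call is blocked.
    """
    order = [first_choice]
    if dual_mode:  # single-mode users cannot be handed off at admission
        order += [n for n in available if n != first_choice]
    for net in order:
        if available[net] >= bandwidth:
            available[net] -= bandwidth
            return net
    return None

# Hypothetical free capacities within the overlay (in ebus).
free = {"N1": 4, "N2": 12}
a1 = joint_admit(free, "N1", 4)                     # admitted in N1
a2 = joint_admit(free, "N1", 4)                     # N1 full: handed off to N2
a3 = joint_admit(free, "N1", 4, dual_mode=False)    # single-mode request: blocked
```

The second request illustrates the complementarity argument: a call that a disjoint policy would block survives by being admitted in the other overlay network.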
3.5 Performance Evaluation
In what follows, we examine the operational aspects of our model for joint allocation
policies and compare them with those of disjoint allocations. Simulation experiments
were carried out in an event-driven simulator built using C++ and MATLAB. The
provisioning core was implemented through a Linear Programming (LP) formulation
solved using the GLPK package of the GNU project [GLP].
3.5.1 Simulation Setup
An overlay involving two networks is employed. The two networks are of equal cov-
erage range and share a coverage overlap of 4/7 of their areas. A fixed number of users
was uniformly distributed over the total geographic area covered by both net-
works. However, the ratio between single-mode and dual-mode users is controllable.
Users make connection requests with an aggregate inter-arrival time that is exponen-
tial with controllable mean. The capacities of networks 1 and 2 are respectively 80
and 40 effective bandwidth units (ebus). In both networks, a single service
is defined that is allocated 4 ebus regardless of the network choice. The connection
holding time is exponentially distributed with a mean of 180 seconds. The percentage
of users selecting network 1 is also varied.
In the first 15 minutes, the bandwidth in each network is equally divided among
single mode and dual mode users. At the end of the first 15 minutes, demand samples
collected every 30 seconds are used to provision the bandwidth in the next 15 minutes.
The procedure is repeated every 15 minutes. The demand was computed based on
the active and blocked calls, and the median demand was taken as representative.
An example of demand measurements is shown in Figure 3.3.
Figure 3.3: A sample of demand measured during a single simulation run (demand in ebus versus simulated time in seconds).
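The measurement-and-provisioning cycle described above can be sketched as follows. The proportional split below is a simplification standing in for the LP-based provisioning core, and the demand trace is synthetic:

```python
import random
from statistics import median

SAMPLE_PERIOD = 30        # seconds between demand samples
INTERVAL = 15 * 60        # provisioning interval duration (seconds)

def provision_next(samples_mono, samples_duo, capacity):
    """Take the median demand per class and split capacity proportionally."""
    d_mono, d_duo = median(samples_mono), median(samples_duo)
    total = d_mono + d_duo
    if total == 0:
        return capacity / 2, capacity / 2  # fall back to an even split
    return capacity * d_mono / total, capacity * d_duo / total

# One interval's worth of synthetic demand samples (active + blocked, in ebus).
random.seed(1)
n_samples = INTERVAL // SAMPLE_PERIOD  # 30 samples per 15-minute interval
mono = [random.randint(40, 60) for _ in range(n_samples)]
duo = [random.randint(20, 40) for _ in range(n_samples)]

# Quotas for the next 15-minute interval of a network with 80 ebus.
q_mono, q_duo = provision_next(mono, duo, capacity=80)
```

Taking the median of the sampled demand, as in the text, makes the quota robust to short demand spikes within the interval.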
To emulate user behavior, preference-triggered VHs were set to occur with an
average rate of twice per minute. Each time, a user is randomly selected from users
within the overlay area. A user is allowed to request a VH more than once. When a
request is made, the target network checks whether sufficient bandwidth is available.
If so, the call is admitted. If not, the call continues in its original network.
Experiments ran for a simulated time of 3600 seconds. Each shown result repre-
sents the outcome of ten experiments. Note that the values used here are arbitrary,
and that the other values used in our intensive investigation displayed similar trends
to the ones presented below.
3.5.2 Results
Figures 3.4 and 3.5 show the blocking probability against the aggregate arrival rate,
with 70% of the requests directed towards network 1 and 50% of the users having
dual-mode terminals. The results are shown with and without employing the
joint allocation policy. The results show an increase in the blocking probability of
single-mode users in exchange for a decrease in the same metric for dual-mode users
within the overlay. On closer examination of Figure 3.5, however, we note
that the blocking probability at an average arrival rate of 10 calls per minute is
almost null, while the maximum increase, i.e. at 25 calls per minute, is around
10%. Considering that the aggregate capacity of the system is equivalent to 30 users
and that the average holding time is 180s, 25 calls per minute is an extremely high
load. Furthermore, the provisioning employed in our work depends on complete
partitioning. Accordingly, lowering the blocking probability for one class directly
raises the blocking probability for the other. The decrease in the blocking probability for dual
mode users within the overlay can be accounted for by the fact that in the differential
rejection ratios for the joint provisioning core, the demand for users outside the overlay
in each network is respectively compared with the total demand of users within the
overlay. This counters the comparison between users outside and inside the overlay
for each network independently.
Figure 3.4: The blocking probability for users within the overlay with 70% of the users requesting network 1 and 50% of users residing within the overlay.
Figure 3.5: The total blocking probability for users outside the overlay with 70% of the users requesting network 1 and 50% of users residing within the overlay.
Figures 3.6 and 3.7 show the blocking probability against
the aggregate arrival rate, with 50% of the users selecting network 1 and 70% residing
within the overlay. Note that the maximum increase in blocking probability for users
outside the overlay is now 5%, and occurs at 25 calls per minute. Nevertheless,
substantial gains are achieved for users within the overlay (from 7% to 20%). This
can be justified by the fact that more users are now residing in the overlay area and,
accordingly, the aggregate demand when compared with the total demand of single-
mode users in both networks becomes sizeable. At the same time, the reduction in
the number of users outside the overlay also plays a role in reducing the blocking
probability in Figure 3.7.
Figure 3.6: The total blocking probability for users inside the overlay with 50% of the users requesting network 1 and 70% of users residing within the overlay.
Figure 3.7: The total blocking probability for users outside the overlay with 50% of the users requesting network 1 and 70% of users residing within the overlay.
For Figures 3.8 and 3.9 shown below, the percentage of users requesting network
1 was maintained at 50% while the percentage of users residing within the overlay
was set to 30%. Note that a reasonable reduction in blocking probability for users
within the overlay is still maintained. However, as would be expected, the blocking
probability for users outside the overlay increases significantly, especially as the arrival
rate increases.
Figure 3.8: The total blocking probability for users inside the overlay with 50% of the users requesting network 1 and 30% of users residing within the overlay.
Figure 3.9: The total blocking probability for users outside the overlay with 50% of the users requesting network 1 and 30% of users residing within the overlay.
3.6 Towards Robust Provisioning in HWNs
As aforementioned, vertical handoffs can be initiated by user requirements, reacting
to the awareness of different access networks with different capabilities or even with
different economic options. These added, and possibly more frequent, VH triggers
result in irregularities in demand, posing challenges to how resource management in
such environments is to be provided.
In Section 3.2 we classified provisioning cores into those based on modeling and
those based on measurements and predictions. Since modeling user behavior becomes
ever more complex in HWNs, provisioning based on measurements and predictions be-
comes more practically attractive. The measurement component relays a distribution
of the sampled demand to the prediction module. In turn, the prediction component
processes the distribution using statistical methods to yield representative values for
the provisioning core. Examples of statistical methods used are the mean, the median,
the Wiener filter, and minimum mean square error estimation.
In this section, we advocate the use of stochastic programming as it allows by-
passing the prediction procedure and computing allocations directly from the demand
distributions. Stochastic programming is a mathematical programming technique
that, unlike deterministic programming models, allows for probabilistic variability
in its entries. This means that, as a provisioning core, stochastic programming can
compute allocations while processing variability in demand in addition to variability
in network conditions.
For illustration, a representative scenario is described where a single data ser-
vice of fixed bandwidth requirements is provided in both networks. The goal here
is to accommodate the maximum number of users in both networks residing within
the hotspot area while minimizing the costs of resource underutilization and de-
mand rejection. The accommodation preference of each user type is signified through
some return value per unit allocation. In this manner, maximizing the allocation
concurrently maximizes the network operator’s profits. In what follows, an optimiza-
tion model for demand known with certainty is outlined in order to understand the
main operational considerations. Next, the formulation for resource optimization with
probabilistic demands is presented. Following that, a numerical example is detailed
to observe the main characteristics of a solution obtained under uncertainty. Without
loss of generality, all users are assumed to be dual-mode users. Extending the model
to a finer user categorization is straightforward.
3.6.1 Single Common Service - Deterministic Demand (SCS-DD)
Suppose that there exists a single basic service that is commonly provided in two
overlaid networks. Also, let the bandwidth requirements of this single common service
(SCS), denoted Q, be fixed. Refer to the larger network, i.e. the cellular network,
as N1 and the network providing the hotspot as N2. The maximum capacities for
N1 and N2 are B1 and B2, respectively. The term maximum capacity of one network
refers to the capacity that can be provided for respective users such that at least Q
can be provided for each user. For N1, maximum capacity further refers to the
capacity allocated by N1 for users residing within the area covered by the hotspot but
requesting services from N1. These capacities represent the capacities available at the
time of request arrival. Hence, the maximum number of users that can be supported
by N1 and N2 is B1/Q and B2/Q, respectively.
In the following, index ij is used to distinguish between entities related to different
types of users. Entities indexed with i = j relate to users to be admitted in
one network, while entities indexed with i ≠ j relate to users changing
networks. Let Dij, Rij and Aij respectively refer to the demand, rejection (unsatisfied
demand) and allocation for users ij, such that
Rij = Dij − Aij (3.11)
When the total allocation in a certain network is less than the available resources,
i.e.,
\[ \sum_{i} A_{ij} = A_j < \frac{B_j}{Q} \]
then network j is underutilized by Uj, defined as
\[ U_j = \frac{B_j}{Q} - A_j \tag{3.12} \]
The constituents comprising the return function of the management operation can
now be detailed. Let the profit per allocated ij user be xij. Also, let the costs of unit
underutilization for network j and the interconnection be yj and yv, respectively.
Finally, denote the cost per user rejection by zij. With this in mind, the total return
function can be stated as
\[ \Pi_{SCS} = \sum_{\forall i,j} x_{ij} \cdot A_{ij} - \sum_{\forall i,j} z_{ij} \cdot R_{ij} - \sum_{\forall j} y_j \cdot U_j - y_v \cdot U_v \, . \tag{3.13} \]
The manner in which various profits and costs are valuated is beyond the scope of
this work. However, it should be noted that while in such valuations there is an
inherent emphasis on stability, the network economics may very well depend even
on the hour-to-hour network dynamics. There are also other aspects that are to be
considered. For example, the cost of utilizing (or underutilizing) the interconnection
is bound to vary with the bandwidth abundance.
The linear program can hence be stated as follows:

Program SCS-DD
\[
\begin{aligned}
\text{Max} \quad & \Pi_{SCS} \\
\text{Subject to} \quad & A_j + U_j = \frac{B_j}{Q} \quad \forall j \\
& A_{ij} + R_{ij} = D_{ij} \quad \forall i, j \\
& \text{all variables positive}
\end{aligned}
\]
The formulation for the Program SCS-DD signifies the inherent characteristic of
any allocation policy mechanism. Simply stated, the allocation for different types
of demands is made by weighing the demand against the resources. As such, this
formulation could be extended to implement different types of policies. For example,
while the allocations for Aij with i ≠ j could serve as guard bandwidth, fixed
proportions of the resources can be further allocated to accommodate operational
discrepancies, be they on the part of the network or the demand.
The more important facet of the formulation is that it could be used to study the
behavioral aspects of such allocation policies. For example, several interconnection
modes have been proposed for VHs [TL02]. The formulation could be used in eval-
uating which interconnection method, occasionally called coupling, would be more
desirable from either the network's or the users' perspective.
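Program SCS-DD can be prototyped with any off-the-shelf LP solver. The sketch below uses scipy's linprog in place of GLPK, with parameter values echoing Table 3.1 and an illustrative demand split; the interconnection term (yv · Uv) is omitted for brevity:

```python
from scipy.optimize import linprog

# Parameters echoing Table 3.1 (Q = 1); the demand split is illustrative.
B = {"1": 50.0, "2": 200.0}
x = {("1", "1"): 3, ("2", "1"): 5, ("1", "2"): 5, ("2", "2"): 3}  # profit per allocated ij user
z = {("1", "1"): 2, ("2", "1"): 3, ("1", "2"): 5, ("2", "2"): 4}  # cost per rejected ij user
y = {"1": 3.0, "2": 3.0}                                          # cost per unit underutilization
D = {("1", "1"): 35, ("2", "1"): 15, ("1", "2"): 42, ("2", "2"): 98}

keys = [("1", "1"), ("2", "1"), ("1", "2"), ("2", "2")]
# Variables: [A_11, A_21, A_12, A_22, U_1, U_2]. Since R_ij = D_ij - A_ij,
# maximizing Pi_SCS amounts to minimizing -(x_ij + z_ij) A_ij + y_j U_j.
c = [-(x[k] + z[k]) for k in keys] + [y["1"], y["2"]]
A_eq = [[1, 1, 0, 0, 1, 0],   # A_11 + A_21 + U_1 = B_1 / Q
        [0, 0, 1, 1, 0, 1]]   # A_12 + A_22 + U_2 = B_2 / Q
b_eq = [B["1"], B["2"]]
bounds = [(0, D[k]) for k in keys] + [(0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
A = dict(zip(keys, res.x[:4]))
U1, U2 = res.x[4], res.x[5]
profit = (sum(x[k] * A[k] for k in keys)
          - sum(z[k] * (D[k] - A[k]) for k in keys)
          - y["1"] * U1 - y["2"] * U2)
# Demand fits within both networks here, so all demand is allocated;
# N2 is underutilized by 200 - 140 = 60 units.
```

Because every allocated unit both earns profit and avoids a rejection penalty, the solver allocates up to demand in both networks and charges the leftover capacity as underutilization.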
3.6.2 Single Common Service - Probabilistic Demand (SCS-PD)
Making proactive allocations with uncertain demand is a challenging problem that
deterministic LP cannot address. Stochastic Linear Programming (SLP), a subset of Stochastic
Programming (SP) methods, presents techniques that make it viable to consider the
probabilistic nature of demands in HWNs. Prior to discussing the utilization of SLP
in making proactive allocation policies, a brief digression on the differences between
LP and SLP is due. For further information on SLP and other forms of SP, please
refer to [KW94].
A problem formulated in linear programming can take on the form
\[ \max \left\{ c^T x \;\middle|\; Ax = b, \; x \in X \right\} \tag{3.14} \]
where X ⊂ R^n.
A formulation of this form usually implies that the problem comprises only
deterministic parameters, specified in the coefficients of the objective function and the con-
straints, respectively cT and A, and the constraints' right-hand-side values, b. While many
practical decision problems can be solved using linear programming, where param-
eters involved are constant over sufficiently long periods, there are problems, such
as the one approached herein, where solutions are required with certain parameters
possessing a considerable degree of uncertainty. Limiting the discussions to situations
where uncertainty only exists in the right hand side, i.e.,
\[ \max \left\{ c^T x \;\middle|\; Ax = b, \; Tx = \xi, \; x \in X \right\} \tag{3.15} \]
where ξ is random, solving such a problem reduces to changing the formulation into
a deterministic LP equivalent by enumerating the possible outcome scenarios while
associating each scenario with its probability. One way of achieving this is by setting
a penalty for not satisfying the constraint posed by each scenario. For example,
suppose that there are K possible scenarios and that the probability associated with
the kth scenario is pk. Let qk be the penalty per unit of not satisfying the
kth constraint, i.e., per unit difference between Tx and ξk. With this in mind, the
formulation of the deterministic equivalent problem becomes
\[ \max \left\{ c^T x + \sum_{k=1}^{K} p_k (q_k)^T y_k \;\middle|\; Ax = b, \; T_k x + y_k = \xi_k, \; k = 1:K, \; x \in X \right\} \tag{3.16} \]
Here, yk represents the slackness due to randomness, i.e. not satisfying the condition
caused by the probabilistic nature of ξk. For more tolerance, SLP also provides a
venue for types of problems where stochastic constraints need not be absolutely held,
and it is acceptable that these constraints hold with prescribed probabilities. Such
constraints, called chance or probabilistic constraints, are added to the formulation
in the form
\[ P(T_k x + y_k \leq \xi_k) \geq p, \quad p \in (0, 1) \, . \tag{3.17} \]
It is now possible to present the formulation for the SCS with probabilistic demands.
Let S be the set of all possible scenarios. In every scenario s ∈ S the demand Dij(s)
takes on specific values with a predetermined probability. The probability that the
current Dij(s) equals a specific value D is pij(s), i.e., P(Dij(s) = D) = pij(s).
The demand uncertainty can be imposed on Program SCS-DD through the allocation-
rejection-demand constraints, where the penalty can be applied to the rejection. In
this manner, the penalty (cost) of unit rejection is zij(s). As such, the return function
to be maximized becomes
\[ \Pi_{SCS\text{-}PD} = \sum_{\forall i,j} x_{ij} \cdot A_{ij} - \sum_{\forall i,j} \sum_{s \in S} p_{ij}(s) \cdot z_{ij}(s) \cdot R_{ij}(s) - \sum_{\forall j} y_j \cdot U_j - y_v \cdot U_v \tag{3.18} \]
With this, the SLP can be stated as follows:

Program SCS-PD
\[
\begin{aligned}
\text{Max} \quad & \Pi_{SCS\text{-}PD} \\
\text{Subject to} \quad & A_j + U_j = \frac{B_j}{Q} \quad \forall j \\
& \sum_{i \neq j} A_{ij} + U_v = B_v \\
& A_{ij} + R_{ij}(s) = D_{ij}(s) \quad \forall i, j, \; s \in S \\
& \text{all variables positive}
\end{aligned}
\]
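A minimal sketch of the deterministic-equivalent idea behind Program SCS-PD, with hypothetical numbers: a single network, one class with scenario-dependent demand and one with fixed demand, and per-scenario rejections penalized with probability weights. scipy's linprog again stands in for a dedicated solver:

```python
from scipy.optimize import linprog

# Hypothetical inputs (illustrative only, not from the thesis).
B = 10.0                 # capacity of the single network
x1, x2 = 3.0, 2.0        # profit per allocated unit, classes 1 and 2
z1, z2 = 4.0, 1.0        # rejection penalties per unit
y = 0.5                  # cost per unit of underutilization
D2 = 8.0                 # class-2 demand, known with certainty
xi = [2.0, 4.0, 6.0]     # class-1 demand scenarios
p = [0.25, 0.5, 0.25]    # scenario probabilities

# Variables: [A1, A2, U, R1_s1, R1_s2, R1_s3, R2].
# Maximize x1*A1 + x2*A2 - y*U - sum_s p_s*z1*R1_s - z2*R2 (minimize negation).
c = [-x1, -x2, y] + [p_s * z1 for p_s in p] + [z2]
A_eq = [[1, 1, 1, 0, 0, 0, 0]]       # A1 + A2 + U = B
b_eq = [B]
# Per-scenario slack: R1_s >= xi_s - A1 and R2 >= D2 - A2, written as
# -A1 - R1_s <= -xi_s; the penalty keeps each R at exactly the shortfall.
A_ub = [[-1, 0, 0, -1, 0, 0, 0],
        [-1, 0, 0, 0, -1, 0, 0],
        [-1, 0, 0, 0, 0, -1, 0],
        [0, -1, 0, 0, 0, 0, -1]]
b_ub = [-xi[0], -xi[1], -xi[2], -D2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 7)
expected_return = -res.fun
# A single first-stage allocation is computed up front; it hedges across all
# scenarios rather than being optimal for any single one.
```

The one allocation produced is feasible under every scenario, which is the robustness property examined numerically in the illustrative example below.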
Rightly, in order to reduce the problem dimensionality, one can limit the con-
sideration to scenarios associated with significant probabilities. Given a probabil-
ity significance threshold, pth, an added condition can be that only scenarios with
pij(s) ≥ pth, ∀i, j, be considered. The exact value of pth remains a design factor
that can be subjected to optimization.
While this formulation is aimed at probabilistic demands in a given time period,
the formulation is also extendable to other types of uncertainties. In certain situa-
tions, the returns and costs can be uncertain. For example, when a cell undergoes
overload, i.e. becomes a hotspot, it may borrow bandwidth from neighboring cells
that belong to different network operators. In this case, the cost of service is coupled
with the anticipated demand. This is one example where operational dynamics affect
economic considerations.
3.6.3 Illustrative Example
In order to compare Programs SCS-DD and SCS-PD, the load on N1, notated L1, is
given sample distributions. In other words, the total demand on N1 is given values
ranging from slight under-loading to slight overloading, with each value associated
with a predefined probability. Denote the optimal value of the objective function in a
program by Φ. The percentage deviations of ΦPD and of the average ΦDD relative
to the optimal outcome for each demand instance are computed. The values used in
comparison are given in Table 3.1.
Table 3.1: Values used in allocation comparison
Symbol  Value   Symbol  Value
B1      50      y1      3
B2      200     y2      3
Bv      50      yv      1.5
Q       1       z11     2
x11     3       z21     3
x21     5       z12     5
x12     5       z22     4
x22     3       L2      0.7
The load on N2, notated L2, will be fixed at 70% of the resources, i.e. 0.7 · 200
= 140. Different demands will take fixed proportions of the load: in N1, D11 and
D21 will comprise 70% and 30% of the load, respectively; in N2, D22 and D12 will
likewise comprise 70% and 30% of L2, respectively.
For completeness of comparison, the values of zij(s) in Program SCS-PD will take
values equal to those used in ΦDD, i.e., equal penalties are applied for each ij pair.
Table 3.2 shows five sample distributions. In each distribution, there are five
scenarios, i.e. S = [1,5], each with a predetermined probability. For clarification and
simplification of notation, the probability of each load instance will be as follows:
\[ P(L_1 = L) = P(D_{11}(s) = 0.7 \cdot L) = P(D_{21}(s) = 0.3 \cdot L) \tag{3.19} \]
In other words, in each distribution
p11(s) = p21(s) = p1(s) (3.20)
for all scenarios.
The average optimal solution for each distribution, ΦAvg, is calculated in the
following manner: let Φs be the optimal return for load L1 in scenario s; then the
average optimal solution for each distribution is
\[ \Phi_{Avg} = \sum_{s \in [1,5]} p_1(s) \cdot \Phi_s \tag{3.21} \]
Let ΦPD be the optimal solution for Program SCS-PD in a given distribution. The
deviation, ∆s, of ΦPD from Φs is hence
\[ \Delta_s = \left| \frac{\Phi_s - \Phi_{PD}}{\Phi_s} \right| \tag{3.22} \]
The average deviation of ΦPD, notated ∆PD, follows:
\[ \Delta_{PD} = E[\Delta_s] = \sum_{s} p_1(s) \cdot \Delta_s \tag{3.23} \]
In a similar fashion, ∆Avg, the average deviation of ΦAvg, can be calculated:
\[ \Delta_{Avg} = E[\Delta_s] = \sum_{s} p_1(s) \cdot \Delta_s \tag{3.24} \]
However, for ∆Avg the per-scenario deviation ∆s is calculated differently than for ∆PD:
\[ \Delta_s = \left| \frac{\Phi_s - \Phi_{Avg}}{\Phi_s} \right| \tag{3.25} \]
The outcomes and deviations of the above settings are presented in Table 3.3. For
all the distributions, it is apparent that ∆PD is consistently less than ∆Avg. This
indicates that the solution of Program SCS-PD is closer to the different optimal
outcomes of Program SCS-DD.
Table 3.3 also shows that returns of the Program SCS-DD are always higher than
those of Program SCS-PD. While this might tempt one to criticize the stochastic
formulation, there is an important fact that should not be overlooked. In any given
distribution, a solution of any of the scenarios bears a significant probability of being
infeasible for other scenarios. This is contrary to the solution of the stochastic pro-
gram, or rather the specific formulation used in this work, where any solution
is feasible in all scenarios; hence the robustness.
Table 3.2: Load distributions for each of the five scenarios
Distribution      1      2      3      4      5
P(L1 = 0.70)    3/15   1/15   5/15   4/15   2/15
P(L1 = 0.85)    3/15   2/15   4/15   3/15   3/15
P(L1 = 1.00)    3/15   3/15   3/15   1/15   4/15
P(L1 = 1.05)    3/15   4/15   2/15   3/15   3/15
P(L1 = 1.10)    3/15   5/15   1/15   4/15   2/15
Table 3.3: Outcomes and deviations of Programs SCS-DD and PD
Distribution    ΦAvg       ΦPD        ∆Avg (%)   ∆PD (%)
1               417.1000   349.9000   26.6230    15.8192
2               428.1000   342.2333   25.2030    19.8926
3               406.1000   357.5667   28.5581    13.0213
4               414.9000   351.4333   27.0100    14.9350
5               419.3000   348.3667   26.2359    16.6989
Similar results were observed when other inputs were used.
3.7 Summary
In this chapter, we made the case for joint functionalities in HWNs. We outlined the
main issues with traditional RRM frameworks, including ineffective demand catego-
rization and resource underutilization due to disjoint management. We then investi-
gated a model for joint allocation policies based on complete partitioning and using a
representative demand categorization. The objective of the model was to appropriate
resources of two overlaid networks with preference to users with dual-mode terminals.
The model showed consistent results for dual-mode users, as the allocations for the
total demand were made from both networks instead of from each network independently.
Given a reasonable distribution of users outside and inside the overlay, the reduc-
tions in blocking probability for users inside the overlay varied between 7% and 20%,
while the blocking probability for single-mode users was little affected, even at high
loads (a maximum increase of 5%). When 70% of the users were distributed beyond
the overlay area, users outside the overlay experienced an expected increase (up to 15%).
We also presented an outline for implementing robust provisioning cores that
bypass prediction procedures and compute allocations directly from sampled demand
distributions. Based on stochastic programming, the proposed provisioning core can
accommodate variability in both demand and network conditions. In an illustrative
example, the proposed provisioning yielded distribution-wide feasible allocations. It
also yielded a return estimate closer to the outcomes of the individual demand instances
than the average solution was.
Chapter 4
Controlling the Cost of RRM
Modules
Integral to the design of RRM frameworks for future wireless and mobile networks
is controlling the operational cost of a framework’s components. The objective of
this chapter is to shed some light on how this can be achieved in a certain module,
namely bandwidth adaptation. Bandwidth Adaptation Algorithms (BAAs) assume an
important role in wireless networks. They exploit the nature of adaptive multimedia
applications to the benefit of both the user and the network. Their basic operation
entails varying the user allocations depending on the demand intensity and the net-
work conditions. Several proposals have been made for BAAs, each with a different
objective and approach. However, a common drawback in previous algorithms is
the persistent engagement of bandwidth adaptation whenever it is requested by an
admission control module. This leads to high operational cost, and results in user
dissatisfaction. Such persistence can be even more costly in future networks where,
in addition to the already demanding intra-access technology handoffs, inter-access
technology handoffs will be viable.
In this chapter, we introduce the Stochastically Triggered Bandwidth Adaptation
Algorithm (STBAA). Through a probabilistic trigger, we are able to control the
operational cost and to enhance the tradeoff between admission and operational
guarantees. The core of our algorithm is a simplified, structured and optimized
BAA with no assumptions on the underlying traffic model. We also provide measures
that ensure stable operation and maintainable user satisfaction.1
4.1 Introduction
The initial introduction of multimedia applications in wireless networks motivated
the conception of a certain RRM functionality, namely bandwidth adaptation, which
exploits the applications’ adaptive nature. The term “adaptive nature” refers to the
fact that the quality of delivery can be varied without affecting the delivery of the
content. This characteristic enables Bandwidth Adaptation Algorithms (BAAs) to
respond to both network conditions and demand patterns. The extent to which a BAA
can vary the quality of delivery, however, is controlled by the guarantees the RRM
framework attempts to uphold. Such guarantees, detailed in the Service Level
Agreement (SLA) set between the network operator and the users, include admission
guarantees, e.g. blocking and dropping probabilities, and operational guarantees
describing the QoS a connection receives while it is active.
For example, BAAs respond to severe network conditions, e.g. rising interference,
1A partial exposition of the work presented in this chapter has previously been made in [THM05b, THM05d, THM06d].
or added demand by reducing the allocations of the active users. In the case of added
demand, a BAA is engaged in seeking to maintain certain admission guarantees.
When substantial bandwidth becomes available, either due to users departing the
network or to improving medium conditions, a BAA attempts to distribute the newly
available bandwidth among connections with reduced allocations. Here, the BAA is
engaged to maintain the operational guarantees. In either operation, however, the
BAA handles a tradeoff between satisfying the admission guarantees and fulfilling the
operational requirements.
Despite the advantages of BAAs, however, their operation can be associated with
different costs. To begin with, there is the basic operational cost of computing
the adjustable bandwidth and selecting users to have their allocations adjusted. The
persistent operation of a BAA, e.g. triggering the module for every arriving request
when the network is overloaded, might result in excessive and/or frequent adjust-
ments, both of which can be undesirable from the user perspective, and can have
a devastating effect on the network operator’s economics. Furthermore, adjusting
a user's allocation requires an exchange between the user and the network. Thus,
there is an overhead cost associated with bandwidth adaptation.
As will be shown in Section 4.3, previous proposals for BAAs do address some of
the above concerns. However, a common drawback in previous proposals is that the
BAA is engaged whenever requested, without considering the operational cost or
evaluating the economic worth of this engagement. In other words, there is a general
lack of cost control. Moreover, some proposals are designed around specific assumptions
regarding demand characteristics. Against the demand heterogeneity to be found in
future networks, such models stand at a disadvantage.
We believe that such control is achievable through understanding the tradeoff
between admission and operational guarantees. In this chapter, we propose means
to control the costs of the bandwidth adaptation procedure. This control is
achieved by replacing the common deterministic trigger of BAAs with a stochastic
one. Moreover, through designing a structured and optimized BAA core, we enhance
the aforementioned tradeoff. The BAA core is also designed based on generalized
arguments, and is not based on specific assumptions with respect to demand or traffic
patterns.
This chapter is organized as follows. In Section 4.2 the general elements and con-
siderations of BAAs are discussed in order to make the overview of previous proposals
more accessible. The literature detailed in Section 4.3 is discussed with emphasis on
the objectives, metrics and the approaches of each proposal when selecting connec-
tions to be adapted. The motivation and rationale for our work are presented in
Section 4.4. Our proposal, STBAA, is described at length in Section 4.5. The results
of evaluating the general goodness of our algorithm are shown in Section 4.6. Finally,
in Section 4.7, we summarize the chapter.
4.2 Elements and Considerations of BAAs
In this section, we detail the general characteristics of BAAs prior to exploring pre-
vious proposals and introducing the facets of our work. In what follows, we refer to
the general elements and considerations that have been taken into account in other
BAA proposals.
4.2.1 Definitions
For the considerations of a BAA, a service or a class of services is commonly characterized
by a set of allocations, Ω = {ω1, ω2, . . . , ωi, . . . , ωN}, with N being the
number of possible allocations for the service. The allocations are strictly increasing
in the order of the subscript i, i.e. ωi < ωi+1 for all i. Of the possible allocations,
a certain allocation, denoted ωref , is commonly referred to as the target or reference
allocation. It is this allocation that a BAA, possibly with other modules in an RRM
framework, attempts to provide for the user as much as possible. In Ω, ωref can
hold the value of either the minimum or maximum allocation, or a value in between.
During its lifetime, if a connection is given an allocation larger than ωref , the con-
nection is said to be upgraded2. Similarly, if a connection is given an allocation less
than ωref , the connection is said to be downgraded. Furthermore, in this work we
refer to the operation of reducing user allocations as downgradation and to the
operation of increasing user allocations as upgradation. Hence, we will sometimes
use adaptation and gradation interchangeably, in addition to using the latter in
reference to the status of a certain class.
It is important here to note that the definition of Ω for a certain class is the
first element in defining the extent to which a class can be up- or down-graded. For
example, if Ω for a certain class is defined with only one possible allocation, then
this single allocation becomes the only possible allocation that a user may receive.
If Ω is defined with two or more allocations, then the extent of upgradation, or
upgradeability, is defined by the difference between the target allocation and the
2The reader should note that part of this work is an attempt to present generalized arguments for Bandwidth Adaptation Algorithms (BAAs). In doing this, we also devise and encourage the use of certain terms, such as gradation and gradeability, the objective of which is to establish a concise and non-ambiguous terminology for the different aspects of BAAs.
maximum allocation. Similarly, for such a class, the extent of downgradation, or
downgradeability, is defined by the difference between the target allocation and the
minimum allocation in Ω. In the general reference to either downgradeability or
upgradeability, we shall use the term gradeability.
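The definitions above can be made concrete in a short sketch. The following is an illustrative Python representation, not part of any BAA proposal in this chapter; the class and method names are assumptions.

```python
# Minimal sketch of a service class characterized by its allocation set
# Omega and a reference allocation w_ref (Section 4.2.1 definitions).

class ServiceClass:
    def __init__(self, allocations, ref_index):
        # Allocations must be strictly increasing: w_1 < w_2 < ... < w_N.
        assert all(a < b for a, b in zip(allocations, allocations[1:]))
        self.omega = list(allocations)
        self.ref = self.omega[ref_index]  # target (reference) allocation

    def upgradeability(self):
        # Extent of upgradation: distance from the reference to the maximum.
        return self.omega[-1] - self.ref

    def downgradeability(self):
        # Extent of downgradation: distance from the reference to the minimum.
        return self.ref - self.omega[0]


# A class with allocations {16, 32, 64} and reference 32 can be upgraded
# by 32 units and downgraded by 16 units.
c = ServiceClass([16, 32, 64], ref_index=1)
print(c.upgradeability(), c.downgradeability())  # 32 16
```

Note that a class defined with a single allocation (|Ω| = 1) yields zero gradeability in both directions, matching the discussion above.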
4.2.2 The Role of a BAA
Figure 4.1 illustrates representative actions that can be performed by a BAA. When
a network is unable to admit further users with all its active users at their target allo-
cation, the system can initiate the selection of certain users to be downgraded so that
sufficient bandwidth can be released to accommodate the incoming request. When
responding to deteriorating network conditions, the objective of the BAA becomes not
so much ‘making room’ as downgrading the nominal allocations to the practical
capabilities of the network. The objectives of upgradation are equally varied. Whenever
a call departs the network, the BAA checks whether there are connections with
downgraded allocations. If so, the objective of the BAA is to bring as many connections
as possible to their target allocations. When network conditions improve, the
objective of the upgradation module is to upgrade the nominal allocations of the
active connections to the actual capabilities of the network.
4.2.3 Trigger Frequency
It is certainly possible to consider different types of triggers for the operation of BAA.
A BAA can be set to react only to changes in the state of the system. In such a reactive
setting, the cost of engaging a BAA becomes highly dependent on the dynamicity of
both the users and the medium. However, a BAA can also be set to operate in a
[Figure 4.1 depicts three scenarios: upon the arrival of a new call request, the allocations
of active calls are reduced to accommodate the incoming request; upon the completion
of an active call, the released bandwidth is re-allocated; and upon the availability of
bandwidth, e.g. a reduction in interference, the newly available bandwidth is re-allocated.]
Figure 4.1: Different scenarios for the operation of a BAA.
proactive setting. For example, adaptation can be set to be periodically engaged at
the end of fixed time intervals. During these intervals, the system conditions and
user demand would be measured in a manner that suffices for making a sound adaptation
decision at the end of each interval. While the costs relative to BAAs in such a setting
become more or less fixed, this approach bears the following complexities: First, the
time interval would need to be either appropriately fixed or continually adjusted
relative to, say, the average of the call holding and/or residence time. Second, the
adaptation decision at the end of each interval to either downgrade or upgrade would
respectively need a prediction of the bandwidth to be required or available during the
following interval. Such a prediction mechanism calls for elaborate considerations in
both measurement (sampling) and computation. Notwithstanding, such a solution
may indeed be possible, provided the existence of a prediction module within the
collective RRM framework.
4.2.4 Required Measurements
At the most basic level, a BAA needs to know the extent to which it can downgrade
or upgrade a user or a certain class of users. As aforementioned, this is primarily
defined by Ω. Hence, whenever bandwidth adaptation is to be engaged, the system
needs to compute how much of the allocated bandwidth is either downgradeable or
upgradeable, depending on the type of the trigger.
However, the gradeability of a certain class is not a sufficient measure to judge
whether a certain class is viable for gradation. For example, if a class has been
persistently downgraded over a certain period of time, the system, where possible,
should seek to downgrade other classes more often. This temporal monitoring of
gradation can be materialized using different measures.
For example, a system can calculate the number of downgraded or upgraded calls
in a network, and favor between the different classes based only on this number,
independent of the allocations associated with the graded users. Another possible
measure would be the percentage of the users graded in each class. A measure rela-
tive to the actual allocations made to the different connections is the average offset
between the allocation and the reference allocation. Note that, in addition to utilizing
such measures in favoring between the different classes, these measures can be used
in deciding whether or not a certain class should be downgraded. For example, in the
case of using the percentage of downgraded users or their average offset from their
target allocation, threshold values can be used such that, if a threshold is violated
for a certain class, the class is not considered for downgradation.
In using any of the above measures, the consideration of a temporal average, or
median, serves two purposes: The first is to protect users of a certain class from
persistent gradation, while the second is to avoid the effect of high fluctuations in
such measurements.
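One way to realize this temporal monitoring is an exponentially weighted average of the per-class fraction of downgraded users, with a threshold that protects an over-downgraded class from further downgradation. The following sketch assumes this approach; the smoothing factor, the threshold value and all names are illustrative, not values from the literature.

```python
# EWMA-based temporal monitoring of a class's downgraded fraction (sketch).

def update_average(avg, sample, alpha=0.2):
    # Exponential weighting smooths out high fluctuations in the measurement.
    return (1 - alpha) * avg + alpha * sample

def downgraded_fraction(allocations, ref):
    # Fraction of the class's users currently below the reference allocation.
    if not allocations:
        return 0.0
    return sum(1 for b in allocations if b < ref) / len(allocations)

def eligible_for_downgrade(avg_fraction, threshold=0.5):
    # A class whose temporal average violates the threshold is protected.
    return avg_fraction < threshold

# Three successive snapshots of a class with reference allocation 32.
avg = 0.0
for snapshot in ([32, 16, 16, 32], [16, 16, 16, 32], [32, 32, 16, 32]):
    avg = update_average(avg, downgraded_fraction(snapshot, ref=32))
print(round(avg, 3), eligible_for_downgrade(avg))  # 0.234 True
```

The same structure applies to the other measures mentioned above, e.g. the average offset from the target allocation.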
4.2.5 Conclusiveness of a BAA
A BAA can be either conclusive or inconclusive. A conclusive BAA would not be
engaged unless it is known that it will provide the bandwidth required to, say, ac-
commodate an incoming call, or that there are downgraded calls that need a newly
available bandwidth. An inconclusive BAA is akin to a search program, and is en-
gaged for the dual purpose of checking whether there is sufficient bandwidth to be
released (distributed) and releasing (distributing) this bandwidth if it is available
(required).
There is usually a relationship between the conclusiveness of a BAA and the type
of measurements employed. There is also an inherent tradeoff between the cost of
the measurement and the cost of the algorithm. However, it is possible to argue
that utilizing a conclusive algorithm would, in general, reduce the operational cost
of a BAA. This is because the measurements required by a BAA, i.e. the gradeable
bandwidth and the gradeability of a certain class, may be readily and persistently
made by the collective RRM framework within which the BAA is employed.
However, it is possible that the nature of the wireless network hinders the utilization
of a conclusive algorithm. This is especially true for networks whose capacity is affected
by the number of contending users, such as contention-based networks, e.g. WLANs.
In such systems, it becomes difficult to ascertain different measures required by the
BAA at the instant it is engaged. Nevertheless, the nature of these systems can still
be circumvented through controlling the access into the network, and maintaining
the access level in the range where such measures can be easily quantified. This, of
course, does come at a cost of network underutilization.
4.3 Overview of Previous Proposals
Proposals for bandwidth adaptation can be broadly classified based on their assump-
tions regarding the nature of the underlying traffic model. Under the assumption
of continuous or fluid allocations, bandwidth adaptation becomes easier to handle
and implement. This is especially true when attempting to realize a form of fairness
either at the level of one class of service, or between different classes. A model of
digitized or discrete allocations, however, stands closer to the nature of practical al-
locations, which are dominantly made at specific rates. Nevertheless, the design and
implementation of bandwidth adaptation for discrete allocations is more challenging.
In what follows, we restrict ourselves to the discussion of BAA proposals
made for traffic models with discrete allocations. Examples of bandwidth adaptation
proposals based on fluid models are the works of El-Kadi, Olariu and Abdel-Wahab
in [EKOAW02] and of Seth and Fapojuwo in [SF03].
For discrete allocations, several proposals have been made [TBA98, KCBN98,
KCBN99, KCBN03, KPCD99, KCCD99, KCD02, XCW00, XCW01, XCW02, XC01b,
XC01a].
Talukdar, Badrinath and Acharya (TBA) in [TBA98] propose three algorithms:
Minimum Adaptation, Fair Adaptation and Average-Fair Adaptation. The objec-
tive of the first algorithm is to strictly minimize the number of users downgraded to
satisfy a certain request. The second algorithm attempts to consistently achieve a
max-min fairness in allocating and re-allocating the resources to the active calls. The
Average-Fair algorithm attempts to strike a balance between the objectives of mini-
mizing overhead and achieving fairness through considering the temporal averages of
bandwidth allocations. In TBA’s setting, the general operational aim for a class is to
maximize its allocation as much as possible. While the authors detail algorithms for
the single class scenario, they do show that their work is extendible to the multi-class
setting.
In [KCBN98, KCBN99, KCBN03], Kwon, Choi, Bisdikian and Naghshineh (KCBN)
detail a bandwidth adaptation, or re-allocation, algorithm that provides a guaranteed
state of gradation. In [KCBN98], the authors detail a re-allocation algorithm to op-
erate with a specific admission control module proposed by Naghshineh and Schwarz
in [NS96]. The objective of the proposed algorithm is to reduce the Degraded Period
Ratio (DPR) per call, which represents the portion of time a call spends in the system
while being allocated less than its target allocation. In the adaptation algorithm, the
DPR is only used in the upgradation process. In the downgradation process, the
algorithm bounds the degree of downgradation by necessity. Specifically, the system
first downgrades as few calls as necessary to make available the target bandwidth
for an incoming call. If the released bandwidth is insufficient, the system then tries to
release sufficient bandwidth by downgrading the calls to their minimum bandwidth.
In [KCBN99], KCBN adjust the gradation algorithms to work with a measurement-based
admission control. Instead of maintaining the measurement state of DPR
for each user, the authors propose a per-class indicator, named the degradation prob-
ability, PD, and defined as the temporal average of the ratio between the number
of downgraded calls to the total number of ongoing calls in the system. The two
algorithms differ in the measurement and triggers. Also, since PD is used instead of
the per-call DPR, no sorting takes place in the upgradation procedure and calls are
processed in a random order.
A different approach is made by Kwon, Park, Choi and Das (KPCD) in [KPCD99],
Kwon, Choi, Choi and Das (KCCD) in [KCCD99], and Kwon, Choi and Das (KCD)
in [KCD02]. The common emphasis in these three works is overcoming the possible
complexities of optimal adaptation algorithms through near-optimal bandwidth
adaptation. In [KPCD99], KPCD detail the operational requirements from a BAA, in
addition to its possible objectives. They note that the requirement of maximizing rev-
enue and the objective of maximizing a call’s quality are operationally synonymous.
The authors also note that a BAA should have a low operational complexity. Similar
to TBA, KPCD also state that there is a tradeoff between minimizing the number
of adaptations and maximizing the fairness among users. However, the motivation of
KPCD is different from that of TBA — while TBA are motivated by reducing the
overhead associated with adaptation, KPCD are concerned with reducing the number
of variations or disruptions experienced by a call during its lifetime.
The authors describe three algorithms: BAA for Revenue Only (BAA-R), which
aims strictly at maximizing the revenue of active calls; BAA for Revenue and Anti-
Adaptation (BAA-RA), which attempts to maximize the revenue of active calls while
minimizing the number of adaptations; and BAA for Revenue and Fairness (BAA-
RF), which attempts to maximize the revenue while establishing a specific sense of
fairness within each class and between the different classes. While the three algorithms
are described in [KPCD99], the BAA-R is analyzed in further detail in [KCCD99],
and the three algorithms are further elaborated on in [KCD02].
Xiao, Chen and Wang (XCW) in [XCW00, XCW01, XCW02] and Xiao and Chen
(XC) in [XC01b, XC01a] address bandwidth adaptation from a different perspective
and utilize different metrics. In [XCW00], XCW introduce the notions of Degradation
Ratio (DR), Degradation Degree (DD) and Degraded Area Size (DAS). The DR is
the temporal average of the number of users downgraded in the system. The DD is
the temporal average of the sum of the offsets between the assigned and target
bandwidths of the downgraded users. The DAS is the temporal average of the number
of downgraded users, each weighted by the offset of the user's allocation from the
target allocation. The objective of the algorithm in [XCW00] is to maintain a DAS
less than or equal to a target DAS, i.e. DASqos. This stated, the authors provide
an admission control that
always accepts handoffs, while admitting new calls only when the DAS is below
DASqos. Note that the DAS considered in the latter condition may be that of the
cell to which the call request was made, or the average DAS for both the cell and
its neighbors. For upgradation, XCW's BAA orders calls according to their DD, and
upgrades as many calls as possible to the target allocation. For downgradation, the
extent to which active calls are downgraded depends on the type of the arriving call.
If the arriving call is a handoff call, active calls can be reduced to the minimum
allocation possible. If the arriving call is a new one, active calls can only be
downgraded to the target allocation.
In [XCW01], XCW propose another BAA with the objective of achieving fairness
at both the inter- and intra-class levels. This work is similar in spirit to KPCD's
BAA-RF, but attempts to achieve another sense of fairness. Different from their work
in [XCW00], XCW utilize a staged admission control where calls are momentarily
admitted until sufficient bandwidth is verified to be available. The DD and the
DR for each class are verified to be below respective thresholds. A BAA, called
Fair Bandwidth Allocation (FBA), is then initiated. Inter-class fairness is achieved
by dividing the allocations based on the arrival rates of the different classes. The
manner in which the arrival rates are estimated is not discussed by the authors,
but it is assumed that arrivals for all classes are Poisson, i.e. with exponentially
distributed inter-arrival times. For intra-class fairness, FBA accommodates the
discrete nature of allocation by the manner in
which it performs adaptation, i.e. when upgrading, calls that are most downgraded
are processed first, and when downgrading, calls that are most upgraded are processed
first.
XC further introduce two different notions in [XC01b]. The first is that of utilizing
a weighted average in the measurement of DD and DR, controlling the sensitivity of
the temporal average to the value of the most recent measurement. It is these adjusted
averages that are used in the admission decision. Similar to the admission control in
[XCW01], the admission control here is also staged. The second notion introduced
in this work is that of a leveled adaptation. In the Two-Level BAA (TL-BAA) for
example, downgradation is performed by first downgrading all calls to their target
allocations. If this does not provide sufficient bandwidth, then calls are downgraded
to the minimum allocation possible. The upgradation of the TL-BAA operates in a
similar manner. In the generalized K Level BAA (KL-BAA), where K is the number
of possible allocations in a class, adaptation is performed by reducing or increasing
the allocations of all calls by only one allocation level at a time. This work is further
elaborated upon by XCW in [XCW02].
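The two-level downgradation of TL-BAA can be sketched as follows. The call records and function name are illustrative; per the description above, each level downgrades all calls, and the second level is attempted only if the first releases insufficient bandwidth.

```python
# Sketch of TL-BAA downgradation: level 1 reduces all calls to their target
# allocations; level 2, if still needed, reduces them to their minimum.

def tl_baa_downgrade(calls, needed):
    """calls: list of dicts with 'alloc', 'target', 'minimum'.
    Mutates allocations in place and returns the bandwidth released."""
    released = 0
    # Level 1: bring every call down to its target allocation.
    for c in calls:
        if c["alloc"] > c["target"]:
            released += c["alloc"] - c["target"]
            c["alloc"] = c["target"]
    if released >= needed:
        return released
    # Level 2: bring every call down to its minimum allocation.
    for c in calls:
        if c["alloc"] > c["minimum"]:
            released += c["alloc"] - c["minimum"]
            c["alloc"] = c["minimum"]
    return released

calls = [{"alloc": 64, "target": 32, "minimum": 16},
         {"alloc": 32, "target": 32, "minimum": 16}]
print(tl_baa_downgrade(calls, needed=40))  # 64
```

The generalized KL-BAA would instead step through the K allocation ranks one level at a time.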
The notion of multi-class fairness is re-visited by XC in [XC01a]. Unlike FBA,
the authors propose maintaining predefined ratios between the DD and DR of the
different classes. They also propose a KL-BAA that is somewhat different from the one
proposed in [XC01b] as it operates with the objective of maintaining the predefined
ratios.
4.4 Motivations and Objectives
Reviewing the work surveyed, it can be noticed that different proposals were made
for BAAs with different objectives. It can also be understood that fulfilling certain
objectives comes at the cost of dissatisfying others. For example, attempting to achieve
any sense of fairness, which represents a form of satisfying operational guarantees,
comes at the cost of satisfying admission guarantees. Meanwhile, attempting to max-
imize the number of admitted users comes at the cost of dissatisfying already active
users. It is possible, however, to strike a balance between the different objectives, as
is the case with TBA’s Average-Fair BAA.
There are other points that can be equally observed. Some algorithms require
holding a measurement state, e.g. DPR, for each connection. While this results in
fine-grained control of user status, it incurs costly processing and memory overhead. Given
the expected dynamicity of future wireless networks, a solution depending on such
per-user granularity in terms of measurements would not scale.
Certain BAAs also make strong assumptions regarding the underlying traffic
model, or are restrained to a certain type of admission control. This is a drawback
that needs to be addressed. A BAA in future wireless networks should be able to cope
with unpredictable traffic, and should not rely heavily on any traffic assumptions.
The notion of staged admission is plausible as long as a low dynamicity is as-
sumed. In other words, to momentarily accept calls while verifying that there is
sufficient downgradeable bandwidth can be acceptable as long as the minimum inter-
arrival time allows for processing the calls independently. The verification procedure
can naturally be multi-threaded, but this leads to more complexities. A preferred
setting would be one where such delays and complexities can be avoided without the
dependence on non-scalable measurement for user or class status. Utilizing a carefully
designed conclusive BAA would achieve such an objective.
The fact that adaptation is persistently triggered whenever possible does not serve
the economic and operational objectives of a network operator. Persistent adaptation
results in intensive processing, high overhead, and user dissatisfaction, each of which
bear short- and long-term costs. From another perspective, the above algorithms do
not allow means to judge the worth of performing the adaptation at any given instant.
The work presented in this chapter attempts to address these issues. Specifically,
we are aiming at designing a BAA based on generalized arguments. The BAA should
also assume sufficient flexibility to be tailored to different operational objectives. In
doing this, we design a BAA with loose assumptions on the underlying traffic model,
and simplify, structure and optimize the procedure for user selection in both the
downgradation and the upgradation modules. We distinguish between the network
level and the class level, and control the selection of users across different classes to
reduce the operational cost. To that end, the nature of our proposed algorithm is
conclusive, avoiding measurements with per-user granularity and relying mostly on
measurements readily accessible in any wireless system.
A more important feature of our BAA, however, is that it provides means for
enhancing and controlling the tradeoff between the admission and operational guar-
antees. The same means also provide for controlling the operational cost, and selecting
the proper time to initiate the adaptation procedure.
4.5 Stochastically Triggered Bandwidth Adaptation Algorithm
The essence of the STBAA is replacing the deterministic manner in which adapta-
tion is initiated with a probabilistic trigger. That is, when sufficient bandwidth is
verified to be downgradeable or upgradeable, the algorithm first consults a certain
probabilistic threshold. In case of downgradation, the probabilistic trigger may serve
as a protection against users undergoing further reduction in their allocations. The
role of the trigger in the upgradation process, especially in a network with a highly
dynamic traffic, is to control the number of adaptations that users may undergo in
a very short period. Hence, the role of the trigger and other design features in the
algorithm detailed below is to provide a stable, adjustable service delivery to the
wireless end users, while at the same time controlling the operational cost of a BAA.
In what follows we detail the different aspects of STBAA. The general architectural
assumption is that the BAA operates either at the level of a cell or at
the level of a higher entity, e.g. an RNC, that oversees a cluster of cells. As the
operation of a BAA is tied with the operation of an admission control module, it is
assumed that both algorithms are implemented at the same level.
4.5.1 Operational Overview
Upon receiving a call request, an admission control would first verify that there is
sufficient unallocated bandwidth that can satisfy the incoming request. If so, the call
is accepted.
If the call request is received when the network is at its full capacity, or when
the unallocated bandwidth is insufficient to grant the request, the admission control
considers adaptation. Adaptation is invoked in stages.
At the first stage, the system would verify from the different measurements that the
allocations of the active users can be downgraded to satisfy the incoming request.
Once this is verified, the system generates a random number from a (0,1) uniform
distribution, and compares this number against a persistently valuated probabilistic
threshold. Only if the randomly generated number is less than the probabilistic
threshold would downgradation be engaged. If either the downgradeable bandwidth
is insufficient or the probabilistic threshold constraint is not satisfied, the call is rejected.
When a call departs the network, the system contemplates upgradation. The
system first verifies whether there are any downgraded calls. If there are, the sys-
tem resorts to a probabilistic trigger. If there are no downgraded calls, or if the
probabilistic trigger fails, upgradation is not performed.
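The staged decision just described can be sketched as follows. The valuation of the probabilistic threshold is abstracted into a parameter, and all names are illustrative assumptions.

```python
# Sketch of the STBAA admission path with a stochastic downgradation trigger.
import random

def should_adapt(threshold, rng=random.random):
    # Engage the gradation module only with probability `threshold`,
    # by comparing a uniform (0,1) draw against the valuated threshold.
    return rng() < threshold

def admit(free_bw, request, downgradeable_bw, threshold, rng=random.random):
    if free_bw >= request:
        return "accept"                  # enough unallocated bandwidth
    if downgradeable_bw >= request - free_bw and should_adapt(threshold, rng):
        return "accept_after_downgrade"  # conclusive downgradation engaged
    return "reject"

# With threshold 1.0 the trigger always fires; with 0.0 it never does.
print(admit(0, 10, 20, threshold=1.0))  # accept_after_downgrade
print(admit(0, 10, 20, threshold=0.0))  # reject
```

The upgradation path upon a call departure follows the same pattern: verify that downgraded calls exist, then consult the trigger before engaging the upgradation submodule.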
4.5.2 Architectural Overview
The coordination of three separate modules is required to achieve the objectives of
STBAA. The first module is concerned with collecting and computing the various
measures required for the operation of the gradation or adaptation mechanisms. The
second module handles the valuation of the probability threshold, and may interact
with the measurement module to consider the status of the system in this valuation.
The third module, called the gradation module, comprises two submodules — one for
downgradation, the other for upgradation. The gradation modules interact with the
measurement module for their operation, and require the valuation of the probability
threshold to be engaged.
We detail the three modules below. Prior to doing so, however, some basic nota-
tions and definitions are introduced.
4.5.3 Notations and Definitions
We define a number of classes, N, where class N is the highest class. A set of allocatable
bandwidths, Ωi = {ωi,1, ωi,2, . . .}, is associated with each class i, where the
subscripts i and j of a bandwidth ωi,j refer to its class and rank, respectively. The
higher the rank j, the higher the allocation, i.e. ωi,j < ωi,j+1. For each class, define
ωi,ref as the value that the system guarantees the user, such that ω1,ref ≤ · · · ≤ ωN,ref .
During its lifetime, if a call is allocated a bandwidth that is greater than ωi,ref ,
the call is said to be upgraded. Similarly, if a call is allocated a bandwidth less
than ωi,ref , the call is said to be downgraded. If |Ωi| = 1, then the class is defined
to have one allocatable bandwidth that can neither be upgraded nor downgraded.
If |Ωi| > 1, then the system allows for the allocations in this class to be upgraded
and/or downgraded, depending on the class definition. Hence, the purpose of Ωi
is to define the set of bandwidths upon which the operator and the user agree for
class i calls to undergo during different system conditions. Note that the value of
ωi,ref need not be a single value but can be a range of values, in which case a call is said to be upgraded if allocated more than max ωi,ref, and downgraded if allocated less than min ωi,ref. In this work, however, we only
consider classes with a single reference allocation.
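For concreteness, this notation can be captured in a small data structure. The sketch below is illustrative (field names are my own); it encodes Ωi as an ascending list and classifies a call's current allocation against ωi,ref:

```python
from dataclasses import dataclass

@dataclass
class ServiceClass:
    """Class i: its ordered allocatable bandwidths Omega_i and reference allocation."""
    omegas: list   # sorted ascending: omega_{i,1} < omega_{i,2} < ...
    ref: float     # omega_{i,ref}: the allocation the system guarantees the user

    def state_of(self, b):
        """Classify a call's current allocation b relative to omega_{i,ref}."""
        if b > self.ref:
            return "upgraded"
        if b < self.ref:
            return "downgraded"
        return "at reference"

# |Omega_i| = 1: the class can be neither upgraded nor downgraded.
rigid = ServiceClass(omegas=[2.0], ref=2.0)
adaptive = ServiceClass(omegas=[1.0, 2.0, 3.0, 4.0], ref=4.0)
```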
4.5.4 The Measurement Module
The objective of the measurement module is to compute the satisfaction level of each
class, in addition to providing the gradation modules with the gradeable bandwidth
for the system.
For each class, the measurement module computes the average gradation. Denote
bi,u as the bandwidth allocated to call u of class i. Define di,u as the difference between
bi,u and ωi,ref, i.e. di,u = bi,u − ωi,ref. Note that, depending on the position of ωi,ref within the class definition, di,u can be positive, zero or negative. The total gradation of class i can be defined as Di = ∑u∈Ui di,u, where Ui is the set of users belonging to class i and currently in the system. The average gradation for class i, which is the measure of its satisfaction, is di = Di / |Ui|. The total gradeable bandwidth for each class, i.e. each class's downgradeable and upgradeable bandwidth, is computed
as follows. Denote by Di,max the total gradation of class i when all its active users
are allocated the maximum allowable bandwidth, i.e. when bi,u = maxΩi, ∀u ∈ Ui.
Similarly, denote by Di,min the total gradation of class i when all its active users are
allocated the minimum allowable bandwidth, i.e. when bi,u = minΩi, ∀u ∈ Ui.
The total upgradeable bandwidth of class i, i.e. how far can class i be upgraded,
is Di,up = Di,max − Di. Similarly, the total downgradeable bandwidth of class i is
Di,down = Di −Di,min.
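Under these definitions, the per-class measures reduce to a few sums over the active allocations. A sketch (variable names assumed for illustration):

```python
def class_measures(allocs, omega_ref, omega_min, omega_max):
    """Per-class measures: D_i, average gradation, D_{i,up} and D_{i,down}.

    allocs:    current allocations b_{i,u} of the active users in U_i.
    omega_ref: the class's reference allocation omega_{i,ref}.
    omega_min, omega_max: min(Omega_i) and max(Omega_i).
    """
    n = len(allocs)
    d_total = sum(b - omega_ref for b in allocs)   # D_i
    d_avg = d_total / n if n else 0.0              # average gradation (satisfaction)
    d_max = n * (omega_max - omega_ref)            # D_{i,max}: all users at max
    d_min = n * (omega_min - omega_ref)            # D_{i,min}: all users at min
    return d_total, d_avg, d_max - d_total, d_total - d_min
```

For example, with Ωi = {1, 2, 3, 4}, ωi,ref = 4, and three users at allocations 4, 3 and 2, the function returns Di = −3, an average of −1, Di,up = 3 and Di,down = 6.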
For the stable operation of STBAA, the gradation metric should not be used at its instantaneous value. Also, to protect the different classes from being over-downgraded, the classes should not report their total downgradeable bandwidth in full. Similarly, to accommodate high dynamicity, a class's total upgradeable bandwidth should not be relayed in full.
The measurement module computes a temporal average of the gradation, denoted
by di(τ), where τ indicates the instant at which the temporal average is computed.
There are two possible ways to compute di(τ): one, through periodic sampling; the
other, through sampling whenever adaptation is engaged.
The gradeable bandwidth relayed to the gradation modules shall be referred to as
the allowed gradeable bandwidth. Denote the allowed downgradeable bandwidth of class i by BDi. Given an allowance ratio αi, which holds a value between 0 and 1, BDi = ⌊αi · Di,down⌋. The allowed downgradeable bandwidth for the system as a whole can then be calculated as BD = ∑i BDi. The allowed upgradeable bandwidth for class i, denoted by BUi, and for the system, BU, can be computed in a similar manner.
Note that the allowance ratio for either the downgradeable or the upgradeable bandwidth need not be fixed throughout the system operation. For example, αi could be made a function of the current average gradation of the class, i.e. αi = fαi(di(τ)). The function fαi would allow more bandwidth to be downgradeable when di(τ) indicates high satisfaction, and less bandwidth to be downgraded otherwise.
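One possible status-dependent allowance ratio, sketched below, maps the temporal-average gradation linearly onto [0, 1]. The linear mapping is an illustrative assumption, not a prescription of the thesis:

```python
import math

def allowance_ratio(d_avg_tau, d_min_per_user, d_max_per_user):
    """alpha_i = f(d_i(tau)): allow more downgrading when class satisfaction is high.

    Linearly maps the temporal-average gradation from the worst per-user
    value (alpha = 0) to the best (alpha = 1), clamped to [0, 1].
    """
    span = d_max_per_user - d_min_per_user
    if span <= 0:
        return 0.0
    return min(1.0, max(0.0, (d_avg_tau - d_min_per_user) / span))

def allowed_downgradeable(alpha, d_down):
    """BD_i = floor(alpha_i * D_{i,down})."""
    return math.floor(alpha * d_down)
```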
4.5.5 Valuation of the Probability Threshold
The valuation of the probability threshold, denoted pt, can be done in different manners, depending on the objective of the valuation, and also on whether the valuation is made for upgradation or downgradation.
In its simplest form, pt can take on a constant value, e.g. pt = p where p ∈
[0, 1]. The ramifications of utilizing this simple form, however, are non-trivial, as they directly control the tradeoff between the admission guarantees and
the operational guarantees. The higher the probability, the more the BAA core will
be engaged, and the more the users will be downgraded. Reducing the threshold
probability will result in a lower number of admissions, and hence a lower number of
adaptations, but will result in higher operational satisfaction.
Maintaining the independence from user status, the valuation can still be pro-
grammable. For example, the threshold probability can be made to increase when-
ever a downgradation is requested and denied, and reset to an initial value whenever
the downgradation request is granted. The increase can be made linear, exponential, etc. An exponential increase can take the form pt = r^(x−s), where x ∈ {0, 1, . . . , s} is the number of requests made up to the calculation instant. Here, the
values of r and s dictate the responsiveness of the threshold. Figure 4.2 shows several
examples for s = 5. The probability of adaptation gradually increases with every
request until adaptation is performed. By the fifth request, the probability threshold
takes on the value of 1. The upper ten lines in the figure show pt with r varied between 1.05 and 2 in steps of 0.1. The lowest line shows pt for r = 100, a value that makes pt behave as if adaptation were engaged deterministically every fifth request.
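The exponential valuation can be reproduced in a couple of lines; the cap at 1 reflects the behavior described above (function name assumed):

```python
def threshold_exponential(x, r, s):
    """p_t = r**(x - s), capped at 1; x counts requests since the last grant."""
    return min(1.0, r ** (x - s))

# With s = 5, the threshold reaches 1 by the fifth request for any r > 1.
trace = [threshold_exponential(x, r=1.5, s=5) for x in range(6)]
```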
The valuation of the probability threshold, however, need not to be the same
for all call requests. For example, a probability threshold can be different for each
class. More importantly, the valuation can be made to depend on the status of each
class. For instance, the valuation of the probability threshold for class i, denoted pt,i, can be directly related to di(τ), i.e. pt,i = fpt,i(di(τ)). In tying fpt,i to the class status, powerful protection is offered to the users of the respective class. It is here
that the versatility of stochastic triggers can be observed. Through fpt,i, different
considerations can be instilled into the valuation of the probability threshold. For
Figure 4.2: Plots for pt = r^(x−s) with s = 5 and different values of r.
example, fpt,i can be used to make appropriate considerations regarding the worth
of performing adaptation at the instant a request is made, in light of the status and
other attributes of the system. An example of other attributes can be the demand
intensity.
A point should be made here regarding the roles and objectives of the allowance
ratio and a status-based valuation for the probability threshold. While these two
elements appear to have contradictory roles, it should be observed that each serves a complementary purpose. The objective of the probability threshold is to control when a gradation should be made, while the objective of the allowance ratio is to control the extent of any gradation action once a decision is made to engage either downgradation or upgradation.
4.5.6 The Gradation Modules
There are two gradation modules. The downgradation module is concerned with selecting the users whose allocations are to be reduced. The upgradation module is
concerned with selecting users whose allocations are to be increased. As will be shown
below, the general considerations of both modules are somewhat similar.
For gradation, we employ a hierarchical design that enables the control of fairness
at both the system level and the class level. Furthermore, the utilization of a hierar-
chical design reduces the operational complexity. By first selecting the classes to be
downgraded and then selecting the users within each chosen class, the modules avoid the
needless consideration of all the classes every time a request for adaptation is made.
Considerations of the Downgradation Module
Generally speaking, there is a cost incurred by the system when downgrading a user
from a certain allocation to a lower one. In more specific terms, this cost can represent
either a short-term cost, e.g. the difference in the revenue rate, or a long-term cost, e.g.
customer loss to a competitor as a result of frequent downgradations or adaptations.
An added advantage of a hierarchical design is that different cost elements can be introduced into the design at different levels. For example, favoring among the different classes can be based on long-term cost elements, while user selection at the
class level can be based on short-term cost elements.
As aforementioned, the downgradation is performed in two steps, each at a dif-
ferent level. The first step is performed at the system level and returns the required
downgradation for each class i, denoted ADi. The second step is performed at the
class level and returns the number of calls to be downgraded from rank k to rank l of class i, for all k > l.
The downgradation module can be set to seek any given objective that is a function
of the downgradation for each class. Denote such an objective function by Φa. As an example, we assume that each unit of downgradation at the system level is associated with a cost ci, and make the objective of the downgradation at the system level to minimize the total downgradation cost, i.e.

Minimize Φa = ∑i ci · ADi    (4.1)
The extent of downgradation for each class, however, is limited by the class's downgradeability, i.e. BDi:

ADi ≤ BDi,  i ∈ {1, . . . , N}    (4.2)
Also, the summation of the downgraded bandwidth from all classes should satisfy the
bandwidth requested from the downgradation module, denoted by Breq:

∑i ADi ≥ Breq    (4.3)
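Because the system-level problem of equations (4.1) to (4.3) is a linear program with a single covering constraint and per-class caps, a greedy that draws from the cheapest classes first solves it exactly. The sketch below uses this greedy in place of a general LP solver; names are assumed for illustration:

```python
def system_level_downgrade(costs, bd_caps, b_req):
    """Minimize sum(c_i * AD_i) s.t. AD_i <= BD_i and sum(AD_i) >= B_req.

    costs[i]:   cost c_i per unit of downgradation in class i.
    bd_caps[i]: allowed downgradeable bandwidth BD_i of class i.
    Returns the per-class AD_i, or None if the classes cannot cover B_req.
    """
    if sum(bd_caps) < b_req:
        return None
    ad = [0] * len(costs)
    remaining = b_req
    # Cheapest classes first: optimal for this single-constraint LP.
    for i in sorted(range(len(costs)), key=lambda i: costs[i]):
        take = min(bd_caps[i], remaining)
        ad[i] = take
        remaining -= take
        if remaining <= 0:
            break
    return ad
```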
Once the downgradations for each class have been computed at the system level, the
downgradation module now performs the selection of users. Naturally, only classes
with positive ADi are considered for downgradation at this level. We maintain the
sample objective expressed at the system level. However, it should be noted that, as
in the case of the system level, the downgradation module at the class level can be
made to seek any objective.
Denote by mkl the number of calls to be downgraded from rank k to rank l, for
all k > l. Assume that there is a cost associated with downgrading a call from rank k
to rank l and denote this cost by ckl. The objective at the class level hence becomes
as follows.
Minimize Φi = ∑k,l ckl · mkl,  ∀k > l    (4.4)
The first constraint to apply in engaging the class-level downgradation procedure is that the bandwidth released by user selection should satisfy the value of ADi set by the downgradation procedure at the system level:

∑k,l mkl · (βk − βl) = ADi,  ∀k > l    (4.5)
The second constraint, or set of constraints, is a balancing one. Specifically, it must be ensured that the total number of calls to be downgraded from each rank k cannot exceed the number of active users in the same rank:

∑l mkl ≤ |Ui,k|    (4.6)
These constraints are the most basic constraints for any downgradation module. Nat-
urally, more constraints can be added. For example, at the class-level, it is possible
to add constraints that prohibit downgradation by more than one allocation.
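At the class level, equations (4.4) to (4.6) form a small integer program over the counts mkl; since a class has only a handful of ranks, even an exhaustive search is workable. A brute-force sketch (an assumed helper, not the thesis implementation; βk denotes the bandwidth of rank k):

```python
from itertools import product

def class_level_downgrade(ranks_pop, betas, c, ad_i):
    """Choose m_kl minimizing sum(c[k][l] * m_kl) subject to (4.5)-(4.6).

    ranks_pop[k]: number of active users in rank k (|U_{i,k}|).
    betas[k]:     bandwidth of rank k (ascending).
    c[k][l]:      cost of downgrading one call from rank k to rank l.
    Returns {(k, l): m_kl} or None if AD_i cannot be met exactly.
    """
    n = len(betas)
    pairs = [(k, l) for k in range(n) for l in range(n) if k > l]
    best, best_cost = None, float("inf")
    for counts in product(*(range(ranks_pop[k] + 1) for k, _ in pairs)):
        out = [0] * n                      # calls downgraded out of each rank
        for (k, _), m in zip(pairs, counts):
            out[k] += m
        if any(out[k] > ranks_pop[k] for k in range(n)):
            continue                       # violates the per-rank cap (4.6)
        released = sum(m * (betas[k] - betas[l]) for (k, l), m in zip(pairs, counts))
        if released != ad_i:
            continue                       # must match AD_i exactly (4.5)
        cost = sum(m * c[k][l] for (k, l), m in zip(pairs, counts))
        if cost < best_cost:
            best, best_cost = dict(zip(pairs, counts)), cost
    return best
```

A production implementation would hand the same formulation to an MILP solver, as the thesis does with GLPK; the exhaustive search above only serves to make the constraints concrete.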
Considerations of the Upgradation Module
Similar to the downgradation module, the upgradation module operates at both the
system and class level.
The objective of the upgradation module is to distribute a certain available band-
width, denoted by Bavail. At the system level, the model computes the upgradation
for each class i, denoted by AUi. In computing the various AUi's, the system considers both revenues and costs. The possible costs incurred by the system during the upgradation process are those resulting from users experiencing variations in the service quality. The considerations for the short-term and long-term revenues for the upgradation module are akin to the costs considered for the downgradation module.
Denote the revenue per unit upgrade for class i by ei. An objective of the upgradation module can hence be to select classes so as to maximize the overall revenue:

Maximize ∑i ei · AUi    (4.7)
At the system level, the upgradation considers constraints similar in nature to the system-level constraints imposed on downgradation, namely one regarding the allowed upgradeable bandwidth and another regarding accommodating Bavail:

AUi ≤ BUi,  i ∈ {1, . . . , N}    (4.8)

∑i AUi ≤ Bavail    (4.9)
At the class level, the upgradation module continues to seek maximizing the revenue
of the upgradation process. Denote by ekl the revenue associated with upgrading a call from rank k to rank l. Also denote by nkl the number of calls to be upgraded from rank k to rank l, for all k < l. The upgradation objective at the class level hence becomes as follows:

Maximize ∑k,l ekl · nkl,  ∀k < l    (4.10)
The first constraint to apply is that the class absorbs AUi:

∑k,l nkl · (βl − βk) = AUi,  ∀k < l    (4.11)
The second set of constraints ensures that the calls upgraded from any rank k do not exceed the number of active users in the same rank:

∑l nkl ≤ |Ui,k|    (4.12)
4.5.7 The Algorithm
We now detail how the different elements described above come together in the op-
eration of STBAA.
Figure 4.3 shows the algorithm for bandwidth downgradation which, for reference,
is given the name tryDowngrade. When an admission control module calls tryDown-
grade, it passes on the bandwidth required to be released, i.e. Breq. The algorithm
first computes the values required for verifying that there is sufficient bandwidth in
the system to satisfy Breq. These values include the temporal average of gradation
for each class i, di(τ), the allowed downgradeable bandwidth for class i, BDi, and
the allowed downgradeable bandwidth in the system, BD. The algorithm then verifies that BD is sufficient to satisfy Breq. If sufficient downgradeable bandwidth
exists, tryDowngrade proceeds to evaluate the probability threshold, pt, in the man-
ner outlined in Section 4.5.5. A random number, prandgen is then generated from a
uniform distribution between 0 and 1. If prandgen holds a value greater than pt, the
downgradation is denied. If not, tryDowngrade proceeds with selecting classes and
users for downgradation according to the constraints described in Section 4.5.6. Once
users are selected, downgradation is performed and the admission control module is
notified that the downgradation has been successful.
The upgradation algorithm, shown in Figure 4.4, operates in a similar manner.
Upon noticing the availability of a substantial bandwidth, Bavail, the admission control module passes this value to the tryUpgrade algorithm. In turn, tryUpgrade
verifies whether there are any downgraded calls in the system, and valuates the prob-
ability threshold. If there are downgraded calls, and the probability threshold is
satisfied, tryUpgrade proceeds with the upgradation. If not, the admission control is
notified that upgradation has not been performed.
It should be noted here that pt need not be the same for both downgradation and
upgradation. For example, the network operator may opt for a low pt for upgradation
to protect active users from undergoing frequent adaptations, especially in the case of
high demand intensity where an upgradation may prove to be pointless. However, a
different policy can be set through utilizing a high valued pt, which would represent a
greedy approach to upgradation, and an attempt to increase user allocations whenever
possible.
4.6 Performance Evaluation
In what follows, we detail the setup and sample results for simulation experiments that were carried out to evaluate the general goodness of STBAA. Prior to describing
our implementation, however, certain remarks are due.
The algorithm was implemented within a generic RRM framework with an admission control that only admits users at their target allocation. We took no measures for admission prioritization such as those made for minimizing the handoff dropping
tryDowngrade(Breq)
    Compute di(τ) and BDi for each class, and BD, as in Section 4.5.4;
    if BD ≥ Breq
        Valuate pt, as in Section 4.5.5;
        prandgen = uniform(0, 1);
        if prandgen > pt
            return reject;
        else
            Valuate ADi to satisfy Breq using equations 4.1 to 4.3;
            for all ADi > 0
                Valuate mk,l to satisfy ADi using equations 4.4 to 4.6;
            rof
            Perform downgradation;
            return accept;
        fi
    else
        return reject;
    fi
Figure 4.3: Algorithm for bandwidth downgradation.
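A runnable rendering of tryDowngrade for a single class is sketched below. It folds the measurement, threshold, and selection steps into one function; a greedy, one-rank-at-a-time user selection stands in for the optimization of Section 4.5.6, and all names are assumed for illustration:

```python
import random

def try_downgrade(allocs, omegas, alpha, p_t, b_req, rng=random.random):
    """Downgrade active calls in place to release b_req units, or reject.

    allocs: current allocations, each a member of omegas (the set Omega_i).
    alpha:  allowance ratio; p_t: probability threshold; rng: uniform(0,1) source.
    """
    order = sorted(omegas)                 # ranks in ascending bandwidth
    omega_min = order[0]
    # Allowed downgradeable bandwidth: BD = floor(alpha * D_down)
    bd = int(alpha * sum(b - omega_min for b in allocs))
    if bd < b_req:
        return "reject"                    # insufficient downgradeable bandwidth
    if rng() > p_t:
        return "reject"                    # stochastic trigger not satisfied
    released = 0
    for u in range(len(allocs)):
        # Greedy stand-in for the optimization: step calls down one rank at a time.
        while released < b_req and allocs[u] > omega_min:
            j = order.index(allocs[u])
            released += allocs[u] - order[j - 1]
            allocs[u] = order[j - 1]
        if released >= b_req:
            break
    return "accept"
```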
tryUpgrade(Bavail)
    Compute di(τ) and BUi for each class, and BU, as in Section 4.5.4;
    if BU > 0
        Valuate pt, as in Section 4.5.5;
        prandgen = uniform(0, 1);
        if prandgen > pt
            return reject;
        else
            Valuate AUi to satisfy Bavail using equations 4.7 to 4.9;
            for all AUi > 0
                Valuate nk,l to satisfy AUi using equations 4.10 to 4.12;
            rof
            Perform upgradation;
            return accept;
        fi
    else
        return reject;
    fi
Figure 4.4: Algorithm for bandwidth upgradation.
probability, nor did we employ resource pre-emption. We believe that such specific objectives stand beyond the precise requirements of a BAA, and are more appropriately achieved by other, carefully designed RRM modules for admission and congestion control. Nevertheless, we maintain that the proposed BAA algorithm can be incorporated into any RRM framework.
A further note should also be taken regarding comparisons with other proposals.
As was displayed in Section 4.3, different proposals were previously made, each with a
different objective. It should hence be noted that providing a comparison between the different proposals presented thus far in the literature is a cumbersome task.
More often than not, the adaptation algorithm overlaps with the general admission
control of the proposed framework, making it hard to evaluate the proposed adapta-
tion procedure and to distinguish the benefits reaped specifically from the BAA.
4.6.1 Simulation Setup
In what follows, we examine the operational aspects of STBAA. Simulation ex-
periments were carried out in an event-driven simulation built utilizing C++ and
MATLAB. The STBAA core was implemented through a Mixed Integer Linear Programming (MILP) formulation that was solved using the GLPK package of the GNU project [GLP].
The simulation environment consists of a single cell with a maximum capacity of 50
units. A single class was defined with Ω = {4, 3, 2, 1}, all in ebu, and an exponentially
distributed holding time with a mean of 180 seconds. The reference allocation is the
highest possible. Arrivals were made to be of exponentially distributed inter-arrival
time, with an average rate that was varied between 10 and 20 calls per minute. The
simulation experiments ran for 3600 time units. Each result shown represents the
average outcome of ten experiments.
Note that the values used here are arbitrary; other values used in the intensive investigation we performed displayed similar trends to the ones presented below.
4.6.2 Preliminary Evaluation
A preliminary setting is used to verify the basic operation of the BAA, i.e. without
employing a probabilistic trigger. In these evaluations, α was set to 0.0. In addition,
these results are used to observe the basic effects of employing adaptation in an RRM
framework.
The costs and profits for the adaptation modules were set as follows. In downgradation, costs were set to the absolute difference in allocation multiplied
by the duration of the call up to this point. This way, recently admitted calls were
made available for adaptation before calls that have been in the system longer. On
the other hand, the contrary was applied in upgradation where the profits were set to
the positive difference in allocation multiplied by the reciprocal of a call’s duration.
In this manner, calls that have been longer in the system are upgraded last. These
settings extend to the rest of the experiments below.
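These cost and profit settings can be written down directly; `elapsed` denotes the time a call has spent in the system, and the function names are assumed:

```python
def downgrade_cost(delta_alloc, elapsed):
    """Downgradation cost: |allocation change| x time the call has been active.

    Minimizing this cost selects recently admitted calls for adaptation first.
    """
    return abs(delta_alloc) * elapsed

def upgrade_profit(delta_alloc, elapsed):
    """Upgradation profit: |allocation change| / time the call has been active.

    Maximizing this profit upgrades the longest-lived calls last.
    """
    return abs(delta_alloc) / elapsed
```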
The results for this setting are shown in Figures 4.5 to 4.7. In Figure 4.5, the
blocking probability for the class defined above is shown with and without adaptation.
Given the class’s definition, engaging adaptation results in a reduction of about 60%
at low load while, as expected, the reduction decreases somewhat (to 55%) at the
highest simulated load, i.e. 20 calls per minute. This is because as the network load
Figure 4.5: Blocking probability vs. load, with and without engaging adaptation.
Figure 4.6: Downgradation degree vs. load, with and without engaging adaptation.
Figure 4.7: Average downgradation per user vs. load, with and without engaging adaptation.
increases, there is less “room” for the BAA to maneuver.
In Figure 4.6, the downgradation degree is plotted against the load. We define the downgradation degree as the amount of current downgradation relative to the maximum possible downgradation. Since the extent of adaptation was not limited as downgradation increased (α being null), the downgradation is extreme, ranging from 75% to 95% as the load is increased. This highlights the role of α in the design of STBAA. This also indicates the flip side of reducing the blocking probability.
To take a closer look at how the downgradation degree reflects on per-user perception, Figure 4.7 shows the average downgradation per user while the arrival rate is varied. The average downgradation decreases from around −2.25 to around −3.5 as the load is increased. Naturally, the average downgradation, as is the case with the downgradation degree, is 0 when adaptation is not engaged.
4.6.3 The Effect of Probability Thresholds
Using the same settings as above, the decision to trigger downgradation was consid-
ered against a fixed probability threshold pt that was varied from 0 (no adaptation)
to 1 (full adaptation) in steps of 0.2. For upgradation, the probability threshold was
consistently set to 1. The results are shown in Figures 4.8 to 4.10, where Figure
4.8 shows the blocking probability, Figure 4.9 shows the downgradation degree and
Figure 4.10 shows the average downgradation per user, all plotted against load and
with pt varied as described.
It can be observed from the figures that considerable gains can be achieved by reducing the number of adaptations by 20%, or even 40%, since the resulting difference in blocking probability and in downgradation degree (or average downgradation) is subtle. Similar minor reductions in the number of adaptations therefore translate directly into reduced operational cost. The essential tradeoff sought in this work, between pt and blocking probability on one side and pt and downgradation degree (or average downgradation) on the other, can be deduced from each graph independently.
The following section holds a more direct examination of the tradeoff between
blocking probability and average downgradation.
4.6.4 Tradeoffs between Blocking and Downgradation
We now take a direct look at the tradeoff between the blocking probability and average downgradation per user. The displayed tradeoffs could alternatively have been shown between blocking probability and downgradation degree. As either measure (average downgradation or downgradation degree) is reflective of the other, we choose the per
Figure 4.8: Blocking probability vs. load, with pt varied between 0 and 1 in steps of 0.2.
Figure 4.9: Downgradation degree vs. load, with pt varied between 0 and 1 in steps of 0.2.
Figure 4.10: Average downgradation per user vs. load, with pt varied between 0 and 1 in steps of 0.2.
user average downgradation since it is more indicative of user perspective.
The tradeoff for the simulation settings described above is shown in Figure 4.11.
Recall that the holding time for the class was exponentially distributed with a mean
of 180 seconds.
We also examined the tradeoffs for two other representative holding times, namely
fixed (180 seconds) and uniformly distributed between zero and 180 seconds. The
results for these two settings are respectively provided in Figure 4.12 and Figure
4.13.
Finally, we examined the tradeoffs for an even mix of traffic, i.e. one third exponential (mean 180 seconds), one third fixed (at 180 seconds), and one third uniform(0, 180 seconds). The results for this setting are provided in Figure 4.14.
What is important to note here is that, based on the observed tradeoffs, an admis-
sion control system can be designed that is more effective in controlling the resources
of the underlying system. For example, once the operator recognizes the load on the
system, which is easily measured from online data, in addition to recognizing the type
of services being requested by the user (i.e. the active traffic mix), the operator can
move to the best operational point that satisfies the standing objectives.
Figure 4.11: Tradeoff between blocking probability and average downgradation, with exponential(180 seconds) holding times.
Figure 4.12: Tradeoff between blocking probability and average downgradation, with fixed (180 seconds) holding times.
Figure 4.13: Tradeoff between blocking probability and average downgradation, with uniform(0, 180 seconds) holding times.
Figure 4.14: Tradeoff between blocking probability and average downgradation, with evenly mixed holding times.
4.6.5 Utilizing pt in Joint Admission Control
The nominal advantages of STBAA, in addition to its generally applicable notions, are especially pronounced in heterogeneous contexts. Here, we attempt to shed some light on a further advantage of STBAA.
In previous chapters, we discussed the possible advantages of joint functionalities,
especially the role of admission control in heterogeneous wireless networks. When
applying STBAA to the various networks operating in an overlay, it is possible to use pt, the probability threshold of engagement, as a network selection criterion. More
specifically, when comparing between networks already engaging adaptations with
probability thresholds, it is possible to direct incoming traffic — whenever possible
— to the network with maximum threshold, since this directly means a higher chance
of being admitted into the operator’s network.
To evaluate such a setup, we utilized the single class defined above (with exponen-
tial holding time) in three networks with overlapping coverages. The network with
largest coverage assumed the whole simulation area, i.e. 100%; the second network
assumed 70% of the coverage, and the third network 40%. Users were uniformly
distributed over the whole simulation area. The capacities of the networks were, respectively, 50, 35 and 20, all in ebus. To simplify the scenarios, we did not involve mobility. However, mobility cognition can easily be incorporated into the joint admission control.
The difference between the joint and the disjoint admission is as follows. Given
a unique threshold for each network, a request to each network is handled indepen-
dently if the admission control is disjoint, i.e. the call’s odds are compared against
the threshold of the network it requested. On the other hand, when admission is
performed jointly, the caller’s request is compared against the maximum threshold
covering the caller's area. Once a certain network begins to engage adaptation, it starts utilizing a probability threshold. The probability thresholds are randomly generated at each admission instant, i.e. they are not fixed for the three networks.
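The disjoint versus joint admission decision then differs only in which threshold the caller's draw is compared against; a sketch, with an assumed dictionary of per-network thresholds:

```python
def admit_disjoint(p_draw, requested_net, thresholds):
    """Disjoint: compare only against the threshold of the requested network."""
    return p_draw < thresholds[requested_net]

def admit_joint(p_draw, covering_nets, thresholds):
    """Joint: compare against the maximum threshold among the covering networks."""
    return p_draw < max(thresholds[n] for n in covering_nets)
```

Since max(thresholds) ≥ any single threshold, joint admission can only increase a caller's chance of being admitted somewhere in the overlay.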
The results are shown in Figures 4.15 to 4.17. Note the reduction in the blocking probability shown in Figure 4.15, where a reduction of 5% can be observed at a load of 10 calls per minute, increasing to 15% at 20 calls per minute. This is a considerable gain, and results in a better utilization of network resources since, continuing the theme of the last chapter, a user is associated with the most appropriate network. In Figures 4.16 and 4.17, the downgradation degree and the average downgradation are shown. A consequence of increased admission into the operator's network is an increase in the number of adaptations performed and, as a result, an increase in the downgradation degree (or average downgradation).
Figure 4.15: Blocking probability vs. load, with and without joint admission.
Figure 4.16: Downgradation degree vs. load, with and without joint admission.
Figure 4.17: Average downgradation vs. load, with and without joint admission.
4.7 Summary
The objective of this chapter was to ascertain the feasibility of controlling the opera-
tional requirements of RRM modules in general. As a case study, we examined a key RRM module that is dedicated to bandwidth adaptation based on medium conditions
and demand magnitudes. We discussed the general attributes of efficient adaptation
algorithms and how such attributes can be achieved through design. In order to
control the algorithm’s cost, we introduced the notion of probabilistic triggers which
enable operators to employ adaptation according to their objectives. This is possi-
ble through extracting the operational tradeoffs between admission ratios and user
satisfaction.
In evaluating our proposal, the general merit of STBAA was demonstrated through regular operation, achieving a reduction in blocking probability of up to 60% under the evaluated settings. This, however, reflected on the downgradation degree, which reached around 95% at high load. Next, we showed possible tradeoffs between admission ratios and average downgradation in order to show the potential of utilizing probabilistic triggers. The resulting tradeoffs, extracted using different connection characteristics, can be utilized by network operators to direct the operational state of STBAA based on their objectives. Finally, we tested STBAA in a heterogeneous setting of three networks and utilized the threshold probability as an admission metric. Under the evaluated settings, reductions of up to 15% were achieved in the overall blocking probability. This reduction, however, came at the cost of an increase in the total number of adaptations in the three networks.
Chapter 5
Operator Motivated Vertical Handoffs
Much of the literature discusses seamless handoffs that also preserve QoS. However,
network operators can exploit vertical handoffs as an RRM means to relieve congestion, balance load and uphold QoS requirements. Nevertheless, this exploitation requires
rigorous study in order to realize its full potential.
In this chapter, we advocate the use and study of OMVH as a powerful RRM
tool. We also discuss the different factors involved in the design and operation of an
OMVH. Furthermore, we provide an RRM setup for future wireless networks where
an OMVHM is employed and is set to interact with other RRM modules.1
1 A partial exposition of the work presented in this chapter has previously been made in [THM06b, THM06c, THM07b].
CHAPTER 5. OPERATOR MOTIVATED VERTICAL HANDOFFS 117
5.1 Introduction
As we have shown in previous chapters, VHs pose certain challenges to the designer of an HWN, especially from an RRM perspective. User requirements and preferences will depend on the user's position and mobility, in addition to the capabilities and attributes of the networks traversed by the user during an active session. Designing RRM frameworks for HWNs will ultimately call for non-traditional modeling methods that include previously unaccounted-for characteristics. The disparity of the different networks overseen by a single operator poses several challenges. The pressing question of defining a network policy and accommodating a user policy, viz. preferences, and of how to reach points of compromise, also comes to light.
However, while VHs motivated by user preferences bear their own challenges, the
general notion of VH can be exploited by the operator. Consider, for example, the
scenario shown in Figure 5.1 where a user with a single mode terminal attempts to
access an operator’s network that is currently loaded. Under normal circumstances,
the operator would not be able to accommodate the user, and the request would be
denied. However, if the operator can recognize users that can be migrated to other
networks with the objective of “making room” for the user’s request, the call would
be accepted. In fact, if the operator can recognize an overload instance in a certain
network, users can be migrated from the overloaded network to other networks.
It is hence possible to categorize VHs based on the motivator. We shall refer
to a VH motivated by user preferences, which can be initiated based on a user’s
request or upon the operator processing the user’s preferences, as a User Motivated
Vertical Handoff (UMVH). An OMVH is one that is driven by the network needs
and requirements. We acknowledge that a clear distinction between the two types of
Figure 5.1: An instance of employing OMVH: (a) A user makes a call to a congested network; (b) The network toggles the association of two users to another GW and accepts the call.
handoffs may be difficult to draw, but it should be understood that, while
a UMVH works in a user’s favor and is usually user initiated, an OMVH, even if
beneficial to the user, is ultimately performed to the specific benefit of the operator.
In this chapter, we advocate the use of OMVH as a critical RRM tool in next
generation wireless networks. In particular, we are concerned with the elements in-
volved in the design of a reactive module that is dedicated to the selection of users
to undergo OMVH. This includes factors affecting user selection such as the number
and type of their radio interfaces, the applications running on a user’s terminal and
the effect of these applications being served in a different network. Against the type
of interfaces, the module checks the vacancies in other networks, and whether other
networks can deliver the service at an acceptable level. In a setting where multiple
grades of services are offered, additional considerations need to be made, e.g. regard-
ing the number of active users in a class or their allocations relative to other classes.
In a representative implementation, we employ OMVHs in a setup also employing
admission control.
The remainder of this chapter is organized as follows. In the next section we
overview the relevant literature. Next, in Section 5.3 we elaborate on what triggers
an engagement of an OMVHM, and discuss considerations relevant to its operations.
In Section 5.4, we introduce our approach in designing an OMVHM. We then offer a
performance evaluation in Section 5.5. Finally, we summarize the chapter in Section
5.6.
5.2 Related Work
In this section, we attempt to shed some light on the advantages of VH to the op-
erator. This does not deviate from maintaining the user-centricity of HWNs; rather,
it complements it. Nevertheless, we maintain that, from the user’s perspective, the
choice of underlying technology is irrelevant as long as the user’s service level agree-
ment is upheld.
Much work has been done on continuously associating the user with the best
network from the user’s perspective. Chen et al [CSC+04] utilize a score function that
take into factors such a network properties, available interfaces and user preferences.
Persistently, the network with the highest score is the one chosen to accommodate
the user. Chen et al in [CLH04] present a similar proposal, but with emphasis on
system discovery. A variant approach is one where certain policies are practiced by both users, through their preferences, and systems, in order to strike a balance between maximizing user satisfaction, network utilization and system revenue. Works such as Murray et al.'s [MMP03] and Zhu and McNair's [ZM06] present proposals under this category.
However, Calvagna and Di Modica in [CM05] note the ramifications of undergoing a vertical handoff, e.g. service disruption due to handoff latency, and suggest a conservative attitude towards fulfilling users' preferences. The authors recognize that strictly meeting user preferences, e.g. always going for the network with the best throughput, may not necessarily be of benefit to the user. Given the possible detrimental effects on certain applications, there are times when a vertical handoff, even to a more capable network, may not be desirable. Hence the authors' discussion of the cost of VHs. However, it is worth noting that much effort is
being made in making VHs (and IP handoffs in general) more seamless. This can be
seen, for example, in [SZ06b, WBBD05] where multi-homing at the terminal level is
exploited. It can also be observed in the standardization efforts described in Chapter
2. Accordingly, the ramifications of a VH will eventually be limited to the feasibility
of meeting the user’s SLA across the different networks and not the VH itself.
Most relevant to the work presented herein is the work by Lincke-Salecker in
[LS05]. Through a Markovian model, the author investigates policies for session
selection, transfer and return, with the objective of assessing the effect of different policies on blocking ratios. The author, however, does not take into account the
possible user and operational costs of VHs. Also, the objective of the author’s work
was to realize RRM policies that oversee the networks under an operator, and not specific modules that can be deployed in or added to traditional RRM frameworks. Note that in our work we did not consider return policies, as we believe that they stand beyond the specific objective of an OMVHM and that other RRM modules exist to ensure the avoidance of needless VHs and ping-pong effects.
5.3 Considerations of an OMVH Module
In future wireless networks, an operator will assume the provision of networks with
various access technologies. It is possible to realize a centralized entity that simultaneously manages the resources of all the networks. However, centralization incurs delay, overhead, and general inefficiency, making a distributed solution, whereby complementary RRM modules reside in the different networks, more desirable. In the
following discussions, we make considerations for the latter setting. We further as-
sume the existence of means for exchanging the capabilities, status and demands of
each network between such modules.
5.3.1 RRM Framework Interactions of OMVHM
The RRM framework into which the OMVHM is introduced, or into which the OMVHM notion is employed, should weigh whether engaging the OMVHM is the current best action for both the system and the user. If the framework employs an adaptation and/or a pre-emption module, the procedure for admission control can sequentially check for
available bandwidth, adaptable bandwidth, moveable bandwidth, then bandwidth
that can be preempted. This “sequence” represents a design emphasis that down-
grading a user’s allocation is preferred over making the user undergo an OMVH, and
so on. On the other hand, the system can simultaneously weigh all the options, i.e.
adaptation, OMVH, etc., for each user, or for class of users, and accordingly select
the best option in an individual or class-based manner. In either setting, it is the role
of an admission control module to provision the general selection of action for each
user, or each class of users, according to the status of the system and other networks
in the overlay area.
It should be noted that much of a OMVHM’s operation depends on the existence
of a form of cognition. For example, modules need to exist that recognize the type of
interfaces that a user has, and that negotiate with other networks to understand their
capabilities and services at different times, including their positional availabilities.
Moreover, user cognition should also include identifying profile-based expectations.
For instance, if the user is known or predicted to traverse a certain path, and if
the selection of the network to which this user might be forced to handoff will affect
service delivery, then such cognition needs to be employed.
5.3.2 Triggering OMVHM
There are different triggers that indicate to the operator that a migration of certain
users is required. Triggers vary in their urgency: from the need to relieve congestion
or respond to a special admission request, to proactive load balancing and other objectives that exploit the operator's capability to rearrange user associations.
In responding to admission requests, certain users may have special requests that
cannot be satisfied in other networks. This could be due to the priority and/or the
requirements of the user or the service. There are also instances that depend on the
capabilities of the user’s terminals. A possible example can be found in transitional
scenarios when the user’s terminal cannot access all technologies in the overlay.
When a certain network nears or reaches congestion, the operator, if feasible, may
opt to migrate certain users who are within the coverage of other networks. It is
worth noting that congestion may not necessarily arise from traffic dynamics. For
example, due to medium variations, increased levels of interference may hinder the
network from maintaining the required QoS levels.
Load balancing is an involved objective. It is possible to engage load balancing
with the objective of maintaining certain utilization levels in different networks over-
seen by the operator. It is also possible that a network takes advantage of sudden
availabilities in other networks with which it shares an overlay. Reasons behind such
sudden availabilities include improvement in medium conditions and group move-
ments; e.g. end of working hours.
The trigger examples discussed above are exercised from the point of view of
a single network in an overlay. Other applications of OMVHs exist higher in the
network management hierarchy. For example, when an entity assumes control over
a cluster of networks with overlapping coverages, it is possible to consider the user
constituencies of different networks at the same time, and seek — as much as possible
— to rearrange user association to achieve a certain objective. Here, and between any
two arbitrary networks, migration can take place in both directions. More generally,
the association of any user is made such that the overall objective is achieved. The
nature and expense of this mechanism make such applications more natural in the
context of proactive, as opposed to reactive, applications that are aimed at long-term
goals of the operator. This includes load balancing and reduction of service delivery
cost. We will discuss such applications in depth in the next chapter. Here, our focus
will be on reactive applications of OMVH.
To put matters into perspective, it is essential to note that the flexibilities offered
by OMVH are intrinsically limited. An operator’s capability to migrate a certain user
from one network to another is bound by the user's service agreement, the user's terminal capabilities, and the capabilities of the networks to which the user can be migrated.
However, contemplating the possible enumeration of user applications, preferences,
terminal capabilities, network configurations, etc, and the fact that wireless overlays
are to comprise a substantial part of any operator’s coverage, the potential of OMVHs
becomes hard to overlook.
5.3.3 Elements of OMVHM
When an OMVHM is triggered within a network, the operator begins by identifying users that can be migrated to other networks. The following are some of the
considerations that can be made in identifying such users.
• Interface capability: What types of interfaces does the user have? Which interfaces are currently on?
• Status of other networks in the overlay: Would the networks available
in the overlay area be able to provide an acceptable service delivery at one (or
more) of the user’s interfaces?
• Position capability: Which networks are available at the user’s location?
Which network would be able to accommodate the user and their behavioral profile, e.g. the user's expected mobility pattern?
• Application sensitivity: How sensitive are the applications running at the
user terminal to vertical handoffs [CM05]? In which networks would the vertical handoff disruptions, e.g. packet losses, be minimized?
• Overhead-bound vs. released capacity: Should the network hand off a few users with high allocations, or many users with low allocations? Given that there is a signaling overhead associated with vertical handoffs, is there an upper
bound on the number of users that can be handed off to other networks, in total and/or individually?
• User satisfaction: How willing is the user to undergo a vertical handoff?
Has this user been recently forcibly handed off? What are the short and long
term costs of forcing a user into an OMVH?
These considerations are general, and apply when deciding whether an arbitrary user can be migrated to a certain network. Of course, other factors can be involved in the identification. For example, it may matter how long the service has been delivered to a user.
When the operator becomes interested in differentiating users or their services,
further considerations can be made. As an example, a certain class of users can be
favored or disfavored based on one of the following.
• Occupied bandwidth: The target allocation of each class can differ from one
class to another. On a finer granularity, users of a certain class may be few but
with large allocation, but users of another class may occupy the same allocation
with a larger number of users.
• Releasable bandwidth: More subtle than the aspect of occupied bandwidth
is that of releasable bandwidth. For example, a class may occupy a large band-
width, but other networks in the overlay can only accept users of another class
that may occupy bandwidth of less significance.
• Active and releasable connections: Beyond bandwidth, there are other
connection attributes, such as cost, that can be taken into account. The system
may be interested in the number of users (connections) currently being serviced,
or the number of users (connections) that other networks in the overlay can
accommodate.
• Class satisfaction: If users of a certain class are generally more amenable to migration than users of other classes, the network can deplete their “capability” at a rate lower than that at which other classes are depleted, so that their worth is preserved for more dire situations. The inverse is also possible, where the more “capable” users are transferred first.
Other aspects can be added to these considerations. What is important to note, however, is that a multi-class OMVHM may operate under many different objectives, including combinations thereof. For example, an OMVHM may be set to operate in a manner that depletes classes with the highest occupied bandwidth, yet with the lowest or average worth to be vertically handed off.
5.4 Designing OMVH Modules
In what follows, we detail the considerations involved in designing a module iden-
tifying and selecting users to be migrated from one network to another based on a
certain trigger.
Readily, we can recognize that there are four distinct stages in the operation, as
schematized in Figure 5.2, namely Trigger, Identification, Selection and Migration.
In the Trigger stage, the system recognizes that there are sufficient reasons to
initiate migration. Consider, for example, a scenario when an admission request is
made and there are insufficient resources in the requested network. If the network
management options are exhausted, e.g. downgradation can no longer be made, then
the option of migration becomes attractive.
In Chapter 4, we discussed the difference between conclusive and inconclusive
engagement of RRM. A similar point can be made here regarding the “trigger” of an
OMVHM. There are instances where it will be possible to know whether migration is
feasible, i.e. sufficient resources can be freed, prior to engaging the OMVHM module.
However, there are also instances when an inconclusive engagement is unavoidable.
The choice is highly dependent on the type of trigger and the number of users and/or
networks involved, i.e. at which layer in the network management hierarchy the OMVHM is engaged. Our focus in this chapter will be on conclusive engagements.
[Figure content:
Trigger: recognize conditions to trigger migration (inputs: network conditions, statistics of measurements, admission requests, etc.).
Identification: identify candidate users that can be involved in migration (inputs: user profile and attributes, application requirements, terminal capabilities).
Selection: select users for migration based on trigger type and operational objectives (inputs: network capabilities, operational history, operational objective).
Migration: initiate the signaling, authentication, etc. required for migrating the selected users (output: handoff decisions initiating signaling and authentication).]
Figure 5.2: The four stages in the OMVH operation.
The purpose of the Identification stage is to make the network aware of which
users can be migrated to which networks overlaying its coverage. This identification
depends on the considerations discussed in Subsection 5.3.3.
The Selection stage is the core of the OMVHM, and is where the main thrust in responding to the Trigger lies. In other words, based on the requirements set by the trigger, e.g. a certain bandwidth to be freed, the sets populated by the Identification stage, and the capabilities of the relevant networks in the overlay, the Selection stage selects users based on certain criteria while satisfying a specific system objective.
The Migration stage oversees all the signaling required between the operator and
the selected users. This may involve more than one management layer (physical,
network, etc.) and is performed by the means accessible to the network operator.
The details of this stage are beyond the scope of this work.
In what follows we will elaborate on the Identification and Selection stages. First, however, we digress to introduce a metric that aids the Identification stage.
5.4.1 Evaluating a Migration’s Worth
In selecting users to be migrated from one network to another, there is an apparent need for a metric that distinguishes whether each user can be migrated or not. Factors involved in evaluating such a metric include a judgment of whether the migration is physically feasible, in terms of the capability of the receiving network to deliver the service and the position of the user, as it determines the signal strength or quality received at the user's terminal. At another level, there is the judgment of how worthwhile the migration is. For example, if a user is recognized to be traversing, and not staying in, the overlay area, this needs to be indicated. Another example is the type of
service the user is receiving and how long it has been active; if a user is receiving a file and the transfer is about to end, then it is generally preferable that the user maintain the transfer in the current network. A third level refers to certain user perceptions. For example, given an understanding of VH seamlessness in an operator's HWN, in addition to its effects on different types of services, it may be necessary to avoid migrating a certain user several times within a certain time duration. Also, the difference between the QoS received in the initiating and the receiving networks, be it positive or negative, should be noted by the metric.
Therefore, the required metric generally indicates the worth of a user’s migration
from one network to another to the OMVH decision framework. The inherent advan-
tage of using such a metric is the potential simplification of the migration decision
process, where the alternative would be simultaneously processing all the different
factors, directly making the decision process hard to design and operationally expensive. Another aspect to note is that in presenting such a metric as a substitute for processing the many different factors, normalization of the different values becomes unavoidable. However, while it may generally be argued that normalization is infeasible, we argue that each factor can essentially be valued in common terms;
specifically, the worth to the decision process.
For completeness, a clarification is due regarding the interpretation of the metric. The valuation of such a metric may share certain characteristics with the valuation of typical utility or cost functions. However, the desired metric is not meant to reflect utility or cost, be it for the user or for the operator. Rather, the metric is specifically aimed at evaluating the benefit to the decision process. As a specific example, the migration of a user currently occupying a certain bandwidth in
the migrating network is worth more than that of a user occupying less bandwidth, as the objective is to free as much room as possible. This worth, however, has no bearing on the perceived QoS at the user nor on the operator's revenue.
Naturally, there are several manners in which this metric can be valuated. Here, we attempt to offer a generalized form. Consider a function, denoted W, that evaluates the worth of migrating user u_id from the current network to network n. The valuation is performed as follows.

W(u_{id}, net_n) = \Phi_\Pi(u_{id}, net_n) \cdot \Phi_\Sigma(u_{id}, net_n) \qquad (5.1)
In essence, the function reflects our recognition that there are two types of factors
that affect the valuation of a migration’s worth: multiplicative and additive. The
function ΦΠ valuates the multiplicative components, and can be expanded as follows,
\Phi_\Pi(u_{id}, net_n) = \prod_{f=1}^{N_\Pi} h_f(u_{id}, net_n) \qquad (5.2)
where N_\Pi is the number of multiplicative factors. The functions h_f may represent imperative factors, i.e. do-or-die factors with binary (0/1) valuations. Examples of these factors include whether the user's terminal has an interface for the receiving network, whether the user is within the coverage of the receiving network, or whether resources are available in the receiving network. If the answer to any of these is negative, the worth of migration must be nullified.
The functions hf may also represent factors that weigh on a migration’s worth
and, accordingly, hold values between 0 and 1. Here, the number of migrations a user has undergone within a certain time window can be indicated.
The function ΦΣ valuates the additive components of W , and can be expanded as
follows,
\Phi_\Sigma(u_{id}, net_n) = \sum_{f=1}^{N_\Sigma} w_f \cdot g_f(u_{id}, net_n) \qquad (5.3)
where NΣ is the number of additive factors. The functions gf , valuated between
0 and 1, represent the worth of the value or the change in the value of a certain
service attribute; e.g. change in allocations, user location with respect to access
gateway in the receiving network, mobility, etc. The weights, wf , determine the
relative significance of each additive element in computing a migration’s worth, with
the condition that \sum_{f=1}^{N_\Sigma} w_f = 1.
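Equations (5.1) to (5.3) transcribe directly into code. A minimal sketch follows; the particular factor functions and weights in the usage example are illustrative assumptions, not values from the thesis.

```python
def worth(uid, net, h_factors, g_factors, weights):
    """W(uid, net) = Phi_Pi * Phi_Sigma (Eq. 5.1): the product of the
    multiplicative factors h_f (Eq. 5.2) times the weighted sum of the
    additive factors g_f (Eq. 5.3). Weights must sum to 1; any imperative
    h_f returning 0 nullifies the worth."""
    assert abs(sum(weights) - 1.0) < 1e-9
    phi_pi = 1.0
    for h in h_factors:
        phi_pi *= h(uid, net)
    phi_sigma = sum(w * g(uid, net) for w, g in zip(weights, g_factors))
    return phi_pi * phi_sigma

# Illustrative factors: an imperative interface check and two constant g's.
user = {"interfaces": {"wlan"}}
h_iface  = lambda u, n: 1.0 if n in u["interfaces"] else 0.0  # do-or-die factor
g_alloc  = lambda u, n: 0.8  # assumed worth of the change in allocation
g_signal = lambda u, n: 0.6  # assumed worth of signal strength at the user
print(worth(user, "wlan", [h_iface], [g_alloc, g_signal], [0.5, 0.5]))  # ~0.7
print(worth(user, "umts", [h_iface], [g_alloc, g_signal], [0.5, 0.5]))  # 0.0
```

The second call illustrates nullification: the user has no interface for the hypothetical "umts" network, so the imperative factor zeroes the worth regardless of the additive terms.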
The reader should note that the worth function can be applied in different man-
ners. For example, at any given time a user is likely to be receiving more than one
service. In such a case, the worth of the user’s migration can involve the different
services in a single expression, or be the summation of the worth of migrating indi-
vidual services. The latter application lends itself to scenarios where service delivery
for a user, or even a single service, can be simultaneously made over more than one
network. Notwithstanding, the exact manner in which the worth function is applied
is inconsequential to the arguments presented herein.
To give further insight into how different factors are valuated, we offer the fol-
lowing examples. We have already mentioned that the operator should avoid making
users undergo several vertical handoffs in a certain time frame. For example, over a preceding time window \tau_v, the operator can observe the number of vertical handoffs a particular user has undergone, denoted N_v(t_d - \tau_v), where t_d is the decision instant. A function h_v, representing a multiplicative factor, can then be made to
the aforementioned effect as follows.
h_v = \begin{cases} 1 & N_v(t_d - \tau_v) = 0 \\ 0.4 & N_v(t_d - \tau_v) = 1 \\ 0 & N_v(t_d - \tau_v) > 1 \end{cases} \qquad (5.4)
A migration’s worth can also be decided by a user’s current allocation. The manner
in which allocations in the receiving network, i.e. allocations a user receives after
migration, affect a migration’s worth in an intricate manner. On the one hand, the
size of allocation relative to the receiving network’s capacity affects the number of
users that can be migrated. On the other, a user’s perceived QoS, and hence short
and long term appreciation to the operator, are affected by the ratio of allocations
between the target and migrating network. For illustration, we focus on the latter
aspect. Let Qt and Qs be the allocations in the target and migrating networks,
and gq be the additive factor concerned with a migration’s worth relative change in
allocation. A crude valuation of gq could take the following form.
g_q = \begin{cases} 1 & Q_t > Q_s \\ 0.8 & Q_t = Q_s \\ 0.2 & Q_t < Q_s \end{cases} \qquad (5.5)
Again, we note that such valuations can be considered for the aggregate allocation or
the allocations for the different services.
Another example of an additive factor is one that is based on the average signal strength received at the user's terminal. Figure 5.3 depicts an access point with its coverage divided into three zones based on certain ranges of signal strength. Given
Figure 5.3: Coverage of a wireless LAN “zoned” based on the gateway's signal strength.
fluctuations in signal strength measurements, readings can be stabilized through nom-
inal regressive methods (median, FFT, etc.) [ZGGZ03]. It is hence possible to eval-
uate a migration’s worth based on signal strength as follows.
g_{snr} = \begin{cases} 1 & \text{user in zone 1} \\ 0.7 & \text{user in zone 2} \\ 0.2 & \text{user in zone 3} \\ 0 & \text{otherwise} \end{cases} \qquad (5.6)
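The three example factors above translate directly into lookup functions. This is a plain transcription of Equations (5.4) to (5.6); mapping unlisted arguments to zero is an added assumption for robustness.

```python
def h_v(num_recent_vh):
    """Multiplicative factor of Eq. (5.4): penalize users who have recently
    undergone vertical handoffs within the window tau_v."""
    return {0: 1.0, 1: 0.4}.get(num_recent_vh, 0.0)

def g_q(q_target, q_source):
    """Additive factor of Eq. (5.5): worth of the change in allocation
    between the target (Q_t) and migrating (Q_s) networks."""
    if q_target > q_source:
        return 1.0
    return 0.8 if q_target == q_source else 0.2

def g_snr(zone):
    """Additive factor of Eq. (5.6): worth by signal-strength zone;
    zone 0 (outside coverage) and unknown zones map to 0."""
    return {1: 1.0, 2: 0.7, 3: 0.2}.get(zone, 0.0)
```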
5.4.2 The Identification Stage
In the Identification stage, the network essentially attempts to recognize sets of can-
didate users that can be migrated to each network n. Specifically, if we denote such
a set by A(net n), and denote the set of users residing in the current network by Um,
then A(net n) can be defined as follows.
A(net_n) = \{u_{id} : u_{id} \in U_m,\ W(u_{id}, net_n) \ge W_{th}\} \qquad (5.7)
where Wth is a minimum threshold of migration worth below which the network or
operator judges that a migration should not be considered. For simplicity of presentation, we assume that there is a single threshold for all users, services or networks under a single operator. Granted, the threshold's value should more appropriately be set to react to the network conditions and the types of users comprising the networks, in addition to their active applications.
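The Identification stage of Eq. (5.7) then reduces to a filter over the users of the migrating network. In this sketch, `worth_fn` stands in for the worth function W, and the example worth values are invented:

```python
def identify(users, net, worth_fn, w_th):
    """Build the candidate set A(net): users whose migration worth to `net`
    meets the threshold W_th (Eq. 5.7)."""
    return {uid for uid in users if worth_fn(uid, net) >= w_th}

# Illustrative worths for three users towards a single receiving network.
worths = {"u1": 0.9, "u2": 0.2, "u3": 0.6}
cands = identify(worths, "net_1", lambda u, n: worths[u], w_th=0.5)
print(sorted(cands))  # ['u1', 'u3']
```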
5.4.3 The Selection Stage
The result of the Selection stage is sets of users to be migrated to each individual
network in the overlay. These sets are essentially subsets of candidate sets, i.e. A,
and their elements are selected according to a certain objective. Denote the set of
all users selected for migration from the migrating network by V . We then have the
following definitions.
V = \bigcup_{n \in O_m} V(net_n) \qquad (5.8)
where Om is the set of networks overlaying the migrating network and V (net n), i.e.
the set of users selected to migrate to network n, is defined by
V(net_n) \subseteq A(net_n) \qquad (5.9)
User selection can be made based on different criteria. One criterion that we perceive as rational and feasible is to maximize the net worth of the OMVHM engagement.
Define W (V ) to be the total worth of migrating the selected users, i.e.
W(V) = \sum_{n \in O_m} \sum_{u_{id} \in V^n} W(u_{id}, \text{net } n) \quad (5.10)
where we have utilized V^n as a shorthand for V(net n). The objective then becomes

\arg\max\, W(V) \quad (5.11)
We note that other constraints must be observed in user selection. For example, it
is natural that a user can only be associated with a single network at a given time, i.e.

V^i \cap V^j = \emptyset \quad \forall i \neq j \quad (5.12)
It is also necessary that the OMVHM releases, at least, the resources required by
the trigger. Define Q(uid) as the resources required by user uid in the user’s current
network. Also, define Q(V ) as the total resources required by the users in V . Denoting
the resources required by the trigger by Qreq, then Qreq should be the lower bound
released by the migration process, i.e.
Q(V) \geq Q_{req} \quad (5.13)
Due to the discrete nature of the allocations, and in order to bound the amount of
allocations released per trigger, we utilize a fragmentation ratio, f, as an upper
bound on the amount of resources released, i.e.

Q(V) \leq f \cdot Q_{req} \quad (5.14)
Naturally, the number of users that can be migrated to a certain network is limited
by the amount of available resources in that receiving network. Denote the amount
of available resources in network j by Q^j_{avail}. Also, denote the resources occupied
by the users in V after being migrated, i.e. after having received their allocations
in their respective receiving networks, by Q(V^j)^+. Then, for each receiving
network j, the following condition applies.

Q(V^j)^+ \leq Q^j_{avail} \quad (5.15)
These are the basic considerations required for the operation of a single-class
OMVHM. The model, however, can easily be extended. For example, in migrating
users to the various receiving networks, users could be migrated equally between
them, or in proportion to the capabilities of the respective receiving networks.
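In the evaluations of Section 5.5, this selection is solved as a MILP. Purely as an illustrative stand-in, the sketch below (Python; names our own) performs an exhaustive search for a single receiving network, reusing the released allocation Q(V) as a proxy for the post-migration allocation Q(V^j)^+ (an assumption; in general the two differ):

```python
from itertools import combinations

def select_migrants(candidates, q, w, q_req, f, q_avail):
    """Brute-force stand-in for the single-class OMVHM selection
    (Eqs. 5.10-5.15), for one receiving network.

    candidates: user ids; q[uid]: resources released in the migrating
    network; w[uid]: migration worth. Constraints: the released amount
    must lie in [q_req, f * q_req] and fit in the receiving network's
    available resources (proxy for Eq. 5.15)."""
    best, best_worth = None, -1.0
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            released = sum(q[u] for u in subset)
            if not (q_req <= released <= f * q_req):
                continue
            if released > q_avail:  # proxy for Q(V^j)+ <= Q^j_avail
                continue
            worth = sum(w[u] for u in subset)
            if worth > best_worth:
                best, best_worth = set(subset), worth
    return best, best_worth
```

An actual deployment would use a MILP solver, as the exhaustive search grows exponentially with the candidate set.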
The Multiclass Setting
As aforementioned, in a multiclass setting further considerations need to be applied
in favoring different classes of users over one another. Prior to describing the
operation of the multiclass OMVHM, we introduce a notion of class ratios that
facilitates differentiation.

We define a class's VH give, denoted G, as the extent to which the class can be
utilized in migrating users to the receiving network. The units and computation of
G depend on the measure utilized in favoring classes over each other. This measure,
for example, can be the number of migrated users, the bandwidth associated with
the migrated users, or the worth depleted by the migrated users. Regardless of the
criterion chosen for selection, a class's give represents its support of the OMVHM
operation.
There are different means to calculate G. For example, if the measure is released
bandwidth, the network may seek to maintain a specific, fixed ratio between the
bandwidth released from the different classes. In a more dynamic setting, the
computation would be based on active measures such as the bandwidth occupied by
each class. In the following, we detail a possible implementation.
Assume that the system maintains a certain ratio, ρ, between the worth released
from two classes i and j. Denote by W_{x,Tot} the total worth depleted by class x thus
far. Hence, the OMVHM would seek the following equality.

\frac{W_{i,Tot}}{W_{j,Tot}} = \rho \quad (5.16)

Consider now the case where the measured ratio W_{i,Tot}/W_{j,Tot} exceeds ρ. For the
system to restore the ratio, the OMVHM needs to fixate W_{i,Tot} and allow more worth
to be released from class j. The give of class j then becomes the worth to be released
from class j in order for the OMVHM to maintain ρ. Denoting the give of class x by
G_x, we then have

G_j = \frac{W_{i,Tot}}{\rho} - W_{j,Tot} \quad (5.17)
This computation can be easily extended to more than two classes. At times, the
gives computed may not be sufficient to accommodate an incoming call. In this case,
the value can be adjusted to the least sufficient give.
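The give of the lagging class can be evaluated directly; the following one-line sketch (Python; names our own) computes it as in Eq. (5.17):

```python
def class_give(w_i_tot, w_j_tot, rho):
    """Give of class j per Eq. (5.17): the extra worth class j may
    release so that the measured ratio W_i,Tot / W_j,Tot returns to
    the target rho."""
    return w_i_tot / rho - w_j_tot
```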
We now turn our focus to the description of a multiclass OMVHM. Denote the set
of classes defined under the operator by S. The set of users in network n is denoted
U^n, and the set of class s users in network n is denoted U^n_s. The set of candidate
class s users for a receiving network n is denoted A^n_s and is defined as follows.
A^n_s = \{ u_{id} : u_{id} \in U^m_s, \; W(u_{id}, \text{net } n) \geq W_{th} \} \quad (5.18)
where the superscript m indicates the migrating network. Consequently, the set of
class s users selected according to the operating objective to be migrated to n, denoted
V ns , is defined by
V^n_s \subseteq A^n_s \quad (5.19)
Here, too, we note that, in addition to the fact that a user can only be associated with
a single class at a given time, a user can only be migrated to one receiving network,
i.e.
V^i_s \cap V^j_s = \emptyset \quad \forall i \neq j \quad (5.20)
We redefine the set V here as follows.
V = \bigcup_{n \in O_m} \bigcup_{s \in S} V^n_s \quad (5.21)
We similarly redefine W (V ) as follows.
W(V) = \sum_{n \in O_m} \sum_{s \in S} \sum_{u_{id} \in V^n_s} W(u_{id}, \text{net } n) \quad (5.22)
The objective then becomes

\arg\max\, W(V) \quad (5.23)

given that the two constraints regarding the total released resources, i.e. Q(V), and
the resources required by the trigger, i.e. Q_req, are maintained.
Constraints on the available resources can be defined as for the single-class setting,
or with individual constraints for each user class. Denoting the amount of available
resources for class s in network j by Q^j_{s,avail}, the condition becomes

Q(V^j_s)^+ \leq Q^j_{s,avail} \quad (5.24)
Define V_s as follows.

V_s = \bigcup_{n \in O_m} V^n_s \quad (5.25)

Also, define the function G(V_s) as the total give consumed by V_s, and denote the
bound on class s's give by G_s. Then the OMVHM needs to maintain the following.

G(V_s) \leq G_s \quad (5.26)
5.4.4 Illustrative Example
Consider the scenario in Figure 5.4. Users A, B and C are class 1 users, and are all
allocated 6 ebus. Their worth to be migrated to the WLAN is 0.4. Users D, E and
F are class 2 users and are respectively allocated 4 ebus, 2 ebus, and 2 ebus. Their
respective worths are 0.2, 0.6 and 0.6. User G's request mandates an allocation of
8 ebus. The network's fragmentation ratio is set at 1.3, making the upper bound of
releasable allocations 10.4 ebus.
The WLAN can only accept one class 1 user and three class 2 users. Employing
the OMVHM without the give budget would result in selecting either user A, B or
C, together with users E and F.

Figure 5.4: User G is attempting to enter a loaded cellular network. Users A to F
are residing within an overlay of a cellular network and a WLAN.
Suppose now that the give of class 1 was zero, and that of class 2 was a worth of 0.7.
Since 0.7 for class 2 would not result in releasing sufficient bandwidth to accommodate
user G, the give is adjusted to the least sufficient worth, which is 0.2 + 0.6 + 0.6 =
1.4. Hence, users D, E and F would be migrated.
If the give of class 2 was zero, the OMVHM would not be engaged at any value of
give for class 1 since the support of the WLAN cannot result in sufficient bandwidth
to accommodate G.
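The numbers in this example can be rechecked directly; the snippet below (Python; variable names are our own) recomputes the released bandwidth and the least sufficient give for class 2:

```python
# Class-2 users D, E and F from the example: (worth, allocation in ebus).
class2 = {"D": (0.2, 4), "E": (0.6, 2), "F": (0.6, 2)}

q_req = 8  # ebus required by user G's request

# Migrating all three class-2 users releases exactly the 8 ebus needed.
released = sum(ebus for _, ebus in class2.values())

# The give budget of 0.7 is insufficient (E and F alone release only
# 4 ebus), so it is adjusted to the least sufficient value: the total
# worth of D, E and F.
least_sufficient_give = sum(worth for worth, _ in class2.values())
```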
5.5 Performance Evaluation
In what follows, we examine the operational aspects of OMVH. Simulation experiments
were carried out in an event-driven simulator built utilizing C++ and MATLAB.
The OMVHM core was implemented through a Mixed Integer Linear Programming
(MILP) formulation that was solved using the GLPK package of the GNU project
[GLP].
Figure 5.5: An overlay of cellular and WLAN coverage.
5.5.1 Simulation Setup
An overlay involving two networks, as shown in Figure 5.5, is employed. We will refer
to the network with the persistently larger coverage as network 1 and to the second
network as network 2. For purposes of evaluation, the coverages of the two networks
are concentric. A fixed number of users is uniformly distributed over the area of
the larger coverage. Users make connection requests with an aggregate inter-arrival
time that is exponential with a controllable mean. The capacities of networks 1 and 2
are respectively 80 and 40 ebus. Initially, a single service is defined in both networks,
allocated 6 ebus in network 1 and 4 ebus in network 2. The connection holding
time is exponentially distributed with a mean of 150 seconds, regardless of the network
choice. All users are assumed to be dual-mode users, i.e. they can request and receive
services in both networks. Experiments ran for a simulated time of 3600 seconds.
Each shown result represents the outcome of ten experiments.
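The arrival process of this setup can be sketched as follows (Python; the function name and fixed seed are our own, the latter used only to make the sketch reproducible):

```python
import random

def generate_arrivals(rate_per_min, horizon_s, seed=1):
    """Generate connection-request times with exponential inter-arrival
    times (Poisson arrivals), as in the simulation setup."""
    rng = random.Random(seed)
    mean_gap_s = 60.0 / rate_per_min
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_gap_s)
        if t > horizon_s:
            return times
        times.append(t)
```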
Note that the values used here are arbitrary; other values used in the intensive
investigation performed displayed similar trends to the ones presented below.
5.5.2 Blocking Probability
When OMVH is not employed, a connection is considered blocked if the available
(unallocated) bandwidth cannot satisfy the request. When OMVH is employed, a call
is considered blocked if the available bandwidth is insufficient and migration is not
possible. The blocking probability is the ratio between the number of blocked
connections and the number of requests made throughout the simulation time.
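The metric itself is a straightforward ratio; a short sketch for completeness (names our own):

```python
def blocking_probability(blocked, requests):
    """Blocking probability as defined above: blocked connections
    over all requests made during the simulation time."""
    return blocked / requests if requests else 0.0
```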
Figure 5.6 shows the blocking probability for both networks with the arrival rate
varied between 10 and 25 calls per minute in steps of 2.5. In generating the
requests, 70% of the calls were aimed at network 1. Also, the percentage coverage
of network 2 relative to network 1's coverage was set to 60%, i.e. a partial
overlay. Furthermore, the worth of migration for all users in the overlay in both
directions was set to 1. The general goodness of OMVH can be observed, with a
reduction of around 10% at 10 calls per minute. The reduction, however,
diminishes as the load increases to 25 calls per minute. This is readily understandable
since, when possible, each migrating network considers the receiving network as a
virtual capacity extension. As the load increases, however, there is less and
less “room to maneuver” for the OMVHM, and the effect of OMVH becomes
unnoticeable.

Note that this and other evaluations to follow are extreme in that every request
made in a case of overload is considered to merit a migration. In reality, the decision
to migrate will be determined on a case-by-case basis.
Figure 5.6: Total blocking probability (i.e. for both networks) with the aggregate
load varied.
Also note that Figure 5.6 shows the total blocking probability. Naturally,
reductions would be more apparent in network 1 than in network 2 due to the former
receiving 70% of the call requests. It is natural that the blocking probability for
network 2 may actually worsen.
5.5.3 Effect of Overlay Percentage
We turn our attention now to studying the effect of the overlay percentage on the
performance of OMVH. In this setup, requests are only made to network 1 with a fixed
arrival rate of 10 calls per minute. With users still uniformly distributed over the
coverage of network 1, the percentage overlay of the second network was changed
from 0 to 100% in steps of 10%. Again, all users within the overlay were transferable.
Figure 5.7 shows the results.
Figure 5.7: Blocking probability with the overlay percentage varied (relative to the
larger network).

In the figure, the plot with circle bullets represents the case when OMVH is
not employed. Naturally, the blocking probability remains the same since migration
is not permitted. The plot with the triangle bullets represents the case where the
area was changed but the capacity was fixed at 40 ebus. This is different from the
case of the plot with square bullets, where the capacity was incremented in steps of 4
ebus. The difference shows the independent effects of capacity limitation and of the
user distribution within the overlay. Note that the plot for fixed capacity saturates
between 50% and 60% coverage overlap, indicating that, from this point on, an increase
in capacity is more useful than an increase in coverage area. Such outcomes can be
of use when, for example, deploying networks to relieve persistent hotspots.
5.5.4 Effect of Load Distribution
The objective of this experiment is to reemphasize the value of joint resource
management, here within the context of OMVH. The percentage of requests made to
network 1 was varied between 0 and 100%. Migration was allowed in both directions. All
users were considered transferable. The results are shown in Figure 5.8, where the
total blocking probability is plotted against the percentage of requests to network 1.

At the extremes, i.e. towards 0 and 100%, the blocking probability leans towards
the blocking probability of the two networks in the absence of migration. However, when
OMVH is employed, the increase in the blocking probability is steady and remains below
the blocking probability when OMVH is not employed, even at the latter's lowest
instance, i.e. at 50%. This affirms that OMVH is another means not only of maintaining
admission guarantees, but also of increasing resource utilization.
Figure 5.8: Blocking probability with the percentage of requests to the larger network
varied, with and without employing OMVH.
5.5.5 Limiting Average Number of Handoffs
As aforementioned, the valuation of worth is important as an indicator of a migration’s
value to the OMVH operation. As an example, we investigated the effect of having
a hold-off time once a user is migrated and until the connection terminates. The
hold-off time was varied from 0, i.e. no hold-off, to 3600 seconds, i.e. the length of
the simulation time. The latter extreme implies that any connection can be migrated
no more than once in its lifetime. In generating the requests, 60% of the calls were
directed to network 1. The results are shown in Figure 5.9.
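One lightweight way to realize such a hold-off is to suppress a connection's migration worth until the hold-off period elapses; a sketch under that assumption (names our own):

```python
def effective_worth(base_worth, last_migration_s, now_s, holdoff_s):
    """Return the migration worth of a connection, suppressed to zero
    while its hold-off period is still running (the hold-off timer is
    assumed to be reset when the connection terminates)."""
    if now_s - last_migration_s < holdoff_s:
        return 0.0
    return base_worth
```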
Figure 5.9: Effects of varied hold-off time on (a) blocking probability; (b) number
of migrations per user; (c) total number of migrations; and (d) number of individual
users migrated.
The four subplots in the figure display different aspects of the effect of the hold-off
time. Figure 5.9(a) displays the effect on blocking probability: as the duration
of the hold-off time is increased, the blocking probability increases. On the other
hand, Figure 5.9(b) shows the average number of migrations per user. There are
two things to note here. First, increasing the hold-off time reduces the average
number of migrations per user. Second, instead of converging to 1, the average
converges to around 1.45 migrations per user. This is because the number of users
is limited (200) and, within a single simulation run, the same user may make more
than one request. Also, once a connection is terminated, the hold-off time is reset.
Figures 5.9(c) and 5.9(d) show the constituents of the mean in Figure 5.9(b).
Naturally, the total number of migrations increases. However, it is informative to
note in Figure 5.9(d) the increase in the number of users involved in migration
between hold-off times of 0 and around 150 seconds. This indicates a fairness
byproduct among users that can be achieved when considering hold-off times in
evaluating a migration's worth.
5.5.6 The Multiclass Setting
To evaluate the operation of OMVH in the multiclass setting, we implemented a
controller on top of the OMVHM to set the bounds on each class's give based on
measurements of the expended worth per class. The implemented controller can work
with other criteria; the choice of worth is only for illustration.

In addition to the service class defined above (exponential duration with mean
150 seconds, 6 ebus and 4 ebus in networks 1 and 2, respectively), we defined another
class with a fixed holding time equal to 300 seconds which receives allocations
of 4 ebus and 2 ebus in networks 1 and 2, respectively. In generating the requests, the
percentage of requests made for each class was varied. Figures 5.10, 5.11 and 5.12
show the results of the measured ratios. The percentage of requests made for class 1
is 20% in Figure 5.10, 50% in Figure 5.11 and 80% in Figure 5.12. In each setting,
the ratio between the expended worth of class 1 (exponential holding time) and
that of class 2 (fixed holding time) was set to either 0.5, 1.0, 3.0 or 5.0.
We note that, despite the difficulty of maintaining ratios greater than 1 when the
connections are mostly class 2 connections, the performance of the controller remains
satisfactory. The three figures also suggest that the controller can perform well
despite varying class occupation.
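The controller's give computation can be sketched as follows (Python; names our own), applying Eq. (5.17) to whichever class lags the target ratio and clamping gives at zero:

```python
def update_gives(w1_tot, w2_tot, rho):
    """Controller sketch: set per-class give bounds so that the measured
    worth ratio W1,Tot / W2,Tot is driven back toward the target rho
    (cf. Eq. 5.17). Only the lagging class receives a positive give."""
    if w2_tot > 0 and w1_tot / w2_tot > rho:
        return {"class1": 0.0, "class2": max(0.0, w1_tot / rho - w2_tot)}
    return {"class1": max(0.0, rho * w2_tot - w1_tot), "class2": 0.0}
```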
Figure 5.10: Measured ratio with class 1 assuming 20% of the arrival rate.
Figure 5.11: Measured ratio with class 1 assuming 50% of the arrival rate.
Figure 5.12: Measured ratio with class 1 assuming 80% of the arrival rate.
5.6 Summary
In this chapter, we introduced the novel notion of OMVH, through which the operator
can exploit VHs in HWNs. After displaying the feasibility and advantages of
utilizing OMVHs, we enumerated the considerations that must be accounted for when
designing a module dedicated to OMVH management. We also showed the stages of
operation for such modules, including the identification and selection of users to be
migrated. To aid the identification stage, we proposed the notion of a migration's
worth and showed examples of how it can be computed. In addition, we detailed how
differentiation based on different user categorizations can be made.
OMVH showed good performance in evaluation. In the tested setting, a reduction
in blocking probability of 10% was observed at light loads. The reduction naturally
diminishes as the load increases. We also displayed the effect of the percentage
overlay in order to differentiate the effects of capacity limitation, coverage limitation
and user distribution. Such relationships can be useful in practical network deployments.
In another setting, we varied the load between the two overlaid networks and
allowed migrations in both directions. Reductions of up to 50% in blocking probability
were achieved. More importantly, it was observed that even the highest blocking with
OMVH was below the lowest blocking when OMVH was not employed. This confirms
the need for joint functionalities in HWNs, not only for upholding admission
guarantees, but also for increasing resource utilization. In addition, we evaluated the
effect of the hold-off time on the performance of OMVH. For hold-off times comparable
to the average connection duration, a fairness effect resulted, as more users became
involved in migration. Naturally, however, as the hold-off time increases, the
blocking probability increases up to a saturation point and the
number of migrations per user decreases. Finally, we tested a multiclass controller
for different pre-set ratios of expended worth and under different load distributions
between the two networks. The controller performed well, maintaining the pre-set
ratios to reasonable degrees.
This chapter dealt mainly with reactive applications of OMVHs. In the next
chapter we show how OMVH can be used in proactive applications and show a detailed
example aimed at reducing the operator’s cost in service delivery.
Chapter 6
Service Delivery Cost Reduction
In Chapter 5, we introduced the notion of OMVH as a means for operators to exploit
VHs to their benefit, while maintaining the user's SLA. In doing so, we alluded to
the design of a reactive module aimed at aiding admission and congestion control
mechanisms in HWNs. The objective of this chapter is to showcase an example of
proactive applications of OMVHs, namely SDCR.
Despite the possible employment of cost reduction mechanisms based on long-term
observations, we argue that the user-centric nature of HWNs and the possible
permutations of triggers for VHs may effectively render user assignment beyond the
operator's control. In this chapter, we investigate the potential of a module, operated
by the operator, dedicated to reducing the cost of service delivery. In doing so, we
discuss the considerations of such a module, including the factors affecting the cost
of service delivery and the elements involved in identifying and selecting users to
undergo a forced VH.¹

¹A partial exposition of the work presented in this chapter has previously been made in [THM07a].
CHAPTER 6. SERVICE DELIVERY COST REDUCTION 153
6.1 Introduction
As discussed in Section 2.5, OMVHs can be employed at different levels in the hi-
erarchy of the operator’s network. Employing OMVHs at the overlay level — the
focus of the previous chapter — means that the gateway is mostly concerned with
the selection of users within its coverage to be migrated to other gateways sharing its
coverage. The functionality at this level will be mostly reactive, responding to and
assisting other modules, primarily those concerned with admission and congestion
control. This also means that, at the gateway and overlay levels, the employments of
OMVHs will be most frequent but with the least impact — consequently calling for
light and effective modules that can be engaged in real time.
At a higher level, e.g. an RNC overseeing a cluster of gateways, OMVHs would be
made in a more proactive manner and would probably be employed less frequently.
At such a level, OMVHs can be utilized in modules overseeing load balancing and
other modules operating towards similar precautionary measures. At the operator
level, or perhaps at a level that oversees a cluster of RNCs, the least frequent use of
OMVHs would be made. This use would be towards establishing and conforming to
network-wide policies. For instance, one of the key advantages of HWNs is the viability
of managing the cost of service delivery — that is, where applicable and perhaps given
a geographic representation of user demand, the operator would contemplate policies
where the delivery of various services would be made through the most cost-effective
networks, while minding the constraints and operations of the different networks.
At the operator level, each engagement of OMVHs would be the most extensive and
would probably involve the migration of the largest number of users. The engagement
may also vary with time and position — for example, during the day in office-populated
areas and in the evenings in residential areas. The low frequency of employment
allows for more extensive schemes. The operation can, however, be designed to be
distributed over several RNCs.
The focus of this chapter is the design of proactive OMVH modules that can be
applied at either the RNC/cluster level, or at the operator level. Two notable exam-
ples are proactive load balancing and the reduction of the monetary cost of service
delivery. Since the latter encompasses the former, we will use it in our exposition.
Between the user and the service provider, the operator strives to maximize rev-
enues while upholding specific SLAs on either side. This integrally involves reducing
the cost of service delivery through the selection of technology to use. The traditional
setting implies a stable cost structure (i.e. arrangement of costs) based on equipment
and band licensing. However, the dynamic and user-centric nature of HWNs intro-
duces further considerations that call for more actions at shorter decision time frames.
For example, the effect of network load on the cost of service or the possibility of so-
liciting user participation in service delivery through multihop relaying makes the
cost structure highly dependent on user and demand density. More critical, however,
are the effects of employing cognitive radios in future wireless networks.
radios exploit gaps or holes of non-utilized times in different bands across the radio
spectrum. This opens the door for spectrum trading and brokerage, both between
operators and between operators and users.
The objective of this chapter is hence to investigate the design and operation of
an SDCR module. In doing so, we will detail the considerations required in cost
evaluation and user selection. We also discuss implementation issues and, through
an illustrative implementation, show the potential gains for operators. To the best
of our knowledge, this work is the first of its kind targeted at the context of
HWNs.
The remainder of this chapter is organized as follows. The next section elaborates on
the motivation for designing a dedicated module for SDCR and overviews related work.
Section 6.3 discusses the elements of designing such a module, including considerations
for user selection and estimating the cost of service delivery. Section 6.4 then presents
results that attest to the potential of SDCR modules. Finally, Section 6.5 summarizes
the chapter.
6.2 Motivation and Related Work
The aim of this work is to explore the utilization of OMVHs with the objective of
reducing the overall cost of service delivery in an HWN. We acknowledge that the
essential profitability measures may be realized through network management based
on long term observations of user and network characteristics. However, the unpre-
dictability of user and network dynamics in HWNs, especially due to VH triggers,
strongly dictates the need for profitability measures that react to short term obser-
vations. Specifically, HWNs will be largely designed to be “user-centric”. Contem-
plating the possible permutations of terminal capabilities, application requirements
and user preferences, and adding possible variations in time and space, it becomes
clear that the user distribution across the various technologies can be beyond the
control of, say, cost-based admission control policies. Therefore, elements are needed
to persistently adjust the user distribution across the different networks.
The above relates to the difficulty in keeping users associated with networks based
on the operator’s basic network costs, e.g. delivery through a WLAN is cheaper
than through cellular based on equipment and licensing costs. Towards this end,
a cost-based technology assignment has been investigated by Bria in [Bri05]. The
author contemplated the issue of technology selection for broadcast messages in a
cellular network overlaid with a broadcasting system such as Digital Audio or Video
Broadcast (DAB,DVB). However, the author addressed technology selection only at
session initiation and not during an active session. The author’s discussion was also
limited to broadcast messages of known size.
Further factors are involved in evaluating the cost [FSC06, Gin00]. For example,
the interest in pricing-based RRM techniques is essentially motivated by the
fact that the cost of service delivery actually increases as the load on the utilized
network increases [DaS00, CKW00, PT00, WS06]. Another factor is multihop relay via
dormant, partially or fully active user terminals. While the resulting mixed paradigm
has long been investigated for its effects on capacity increase and load balancing
[CAC+05, YT04], especially in interference-limited systems, equal attention has been
given to its economic consequences, e.g. [Mai05, LL05]. Utilizing such relays means
possibly delivering service to more users for a relatively diminished marginal cost.
Consequently, the cost of service delivery for an operator in a certain wireless
overlay may vary based on the number of users willing to participate. As to feasibility,
Khadivi et al. showed in [KSST06] that multihopping in an HWN context actually
enhances the performance of VHs.
A more critical factor, however, is the effect of introducing cognitive radios in fu-
ture wireless networks. Based on observations that licensed and regulated spectrum is
largely underutilized [Wal05], interest is now high in overcoming this underutilization
through exploiting spectrum gaps or holes in licensed spectrum. This allows more
entities to share the spectrum. The capability to sense and switch between different
bands in the spectrum is technically provided by cognitive radios [Hay05]. However,
it is expected that, in regulating access to spectrum bands, forms of negotiation
and brokerage will exist [FH06, OGN07]. Among several ramifications, this notably
leads to variability in an operator's cost to access a certain spectrum band.
6.3 Elements of SDCR
As aforementioned, wireless overlays will comprise a substantial part of any network’s
coverage. SDCR hence needs to be implemented for the entire network. However, the
expected scale of HWNs in the future dictates that a modular approach be employed.
In this work, we will focus on an operational level that oversees a single wireless
overlay. The arguments presented below can naturally be extended to a fully fledged
HWN implementation, with careful examination of operational synchrony and network
structure.
6.3.1 Components and Interactions
The general operation of the SDCR module inherits and extends the arguments pre-
sented in Section 5.4. The module will seek its objective within the limits allowed
by other network management modules. Essentially, however, the SDCR operation
is bounded by the capabilities of the different technologies in the overlay. Based
on a dynamic cost structure, the SDCR then performs its computations.
Figure 6.1 shows these and other interactions of the SDCR operation. Information
required for the operation of SDCR is persistently gathered from the different
technologies. This information includes the status and capabilities (e.g. currently
achievable capacities) of the different networks involved in the overlay. It also
includes identification information about users and their profiles (i.e., mobility,
preferences, position, etc.). The technologies also relay measurements of either
individual or aggregate traffic in the network, as well as predictions of usage over
a coming period.
Once SDCR performs its computation, it informs the networks and the relevant
VH management modules of its selection of users to be migrated.
Figure 6.1: Components and interactions of an SDCR module.
6.3.2 Operational Overview
In our implementation, SDCR is operated periodically with an inter-operation
time denoted by τI . At the end of each τI , the module is engaged and, based on the
information acquired during the previous τI , attempts to find a more cost-efficient
distribution of users. An operational timeline is schematized in Figure 6.2.
Selecting an appropriate τI depends largely on the characteristics of the services
delivered. A short τI may render SDCR ineffective in instances of high user activity,
while a long τI may overlook potential reductions. Over the long run, the duration
of τI can be varied. It is also possible for SDCR to operate in a non-periodic
manner. However, in such a setting, further overhead may be required to select an
appropriate operational time.
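The periodic mode of operation can be sketched as follows. This is a hedged outline in Python, where the `engage` callback stands in for the SDCR computation and all names and parameter values are illustrative, not taken from the thesis:

```python
def run_sdcr_timeline(total_time, tau_i, engage):
    """Engage the SDCR module at the end of every inter-operation
    interval tau_i, handing it the interval that just elapsed."""
    engagements = []
    t = tau_i
    while t <= total_time:
        # 'engage' acts on information gathered during (t - tau_i, t]
        engagements.append(engage(t - tau_i, t))
        t += tau_i
    return engagements

# Placeholder engagement that merely records each observation window.
windows = run_sdcr_timeline(total_time=3600, tau_i=20,
                            engage=lambda start, end: (start, end))
```

A non-periodic variant would replace the fixed increment with a trigger-driven schedule, at the cost of the extra overhead noted above.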
6.3.3 Definitions and Notations
We consider an operator overseeing a wireless overlay whose technologies are managed
by either a single management entity or a group of cooperating management entities.
[Figure: a timeline marked at instants ti, ti+1, ti+2, ..., consecutive instants separated by the inter-operation time τI.]
Figure 6.2: A timeline illustrating the periodic manner in which SDCR is engaged.
We denote the set of networks in the wireless overlay by N and the set of services
provided by the operator in the wireless overlay by S. It is assumed that the services
are commonly defined across the different networks, although a service may be offered
in different networks with different QoS. Moreover, services need not be available in
all the networks at all times.
Let U be the set of all users associated with all the networks and all the ser-
vices in the wireless overlay. Similarly, let U^n and U^n_s respectively denote the set of
users associated with all services in network n ∈ N , and the set of users associated
with service s ∈ S in network n. At times, we will need to indicate the temporal
dependence of the different sets, e.g. U^n_s(t).
Without loss of generality, we will assume herein that a user in the overlay can
only be associated with a single network and use a single service at any given time.
These relationships can be expressed as follows:

U = \bigcup_{i \in N} U^i \qquad (6.1)

U^n = \bigcup_{s \in S} U^n_s \qquad n \in N \qquad (6.2)

U^i \cap U^j = \emptyset \qquad i \neq j;\ i, j \in N \qquad (6.3)

U^n_k \cap U^n_l = \emptyset \qquad k \neq l;\ k, l \in S,\ n \in N \qquad (6.4)
Note that recognition of the specific access gateway within a network can be
added when identifying the different user sets. However, we will maintain the simpler
notation to emphasize the underlying arguments.
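As a minimal illustration of the set relations (6.1)-(6.4), the user sets can be held as plain Python sets keyed by network and service. All identifiers below are invented for illustration:

```python
# U^n_s: users associated with service s in network n.
N = {"net1", "net2"}          # networks in the overlay
S = {"svc1", "svc2"}          # services offered

Uns = {
    ("net1", "svc1"): {"u1", "u2"},
    ("net1", "svc2"): {"u3"},
    ("net2", "svc1"): {"u4"},
    ("net2", "svc2"): set(),
}

# (6.2): U^n is the union over services of U^n_s
Un = {n: set().union(*(Uns[(n, s)] for s in S)) for n in N}
# (6.1): U is the union over networks of U^n
U = set().union(*Un.values())

# (6.3)-(6.4): a user is in a single network, using a single service
assert all(Un[i].isdisjoint(Un[j]) for i in N for j in N if i != j)
assert all(Uns[(n, k)].isdisjoint(Uns[(n, l)])
           for n in N for k in S for l in S if k != l)
```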
6.3.4 Evaluating the Cost of Service Delivery
In this section, we elaborate on some of the factors involved in evaluating the cost of
service delivery.
At the basic level, equipment and licensing costs affect the connectivity costs over
the long term, and are relatively stable. In performing SDCR, it might be expected
that the resulting cost structure will dictate a certain flow of user migrations (e.g.
from cellular to WLAN). However, when accounting for the expected duration of calls,
and depending on the type of service, users can be migrated in the opposite direction
to minimize the total service delivery cost.
Cost variability is induced when a network's load nears or reaches congestion. While
some studies have treated this cost as a marginal cost on the user side, the cost
incurred by the operator is realized in dissatisfied users seeking other operators and
in the opportunity cost of serving a certain user in a specific network [DaS00]. To
evaluate this cost², a network needs to predict the load in the next τI . Several
operationally-efficient and online measurement techniques have been proposed for
aggregate user traffic; e.g. the work in [ZvdBC+01, GCB03, DFS03, GWZ05]. At
the individual level, a connection's duration can easily be extracted from per-service
statistics. As for usage, the per-user effective requirements can also be estimated
[Kel96]. Note that evaluating the cost over a τI does not mean that the service is
delivered over the whole duration.

²The choice of methods for evaluating service cost has no bearing on the generality of the proposed SDCR.
For the operator to realistically solicit user participation in connectivity [IB03],
it must employ a rewards system for compensating, say, a terminal's battery deple-
tion. This is the approach employed in [LL05]. The value of the reward will be
directly proportional to the added cost in service delivery. Depending on the service
requirements (e.g. delay), the operator will make decisions such as the number of users
to involve and reward in delivering the service to a certain user. As an example of how
the cost would be evaluated, consider a service delivered to a user reached in two hops.
The total cost would be the basic connectivity cost for the first hop, plus the reward,
in place of a basic connectivity cost, for each relaying user. To evaluate the feasibility
and the cost of utilizing multihop relay, the network needs to predict the density of in-
active/partially active users in the area of interest, in addition to the extent of their
willingness to participate. Prediction based on Wiener processes, like that utilized for
demand estimation in [ZvdBC+01], is highly useful in this respect.
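The two-hop example above can be worked through in a few lines. The cost and reward values below are made up for illustration, assuming the reward replaces the connectivity cost of every relaying user:

```python
def multihop_delivery_cost(basic_cost, reward, n_hops):
    """Cost of delivering a service to a user reached in n_hops:
    the basic connectivity cost for the first hop, plus the reward
    (paid in place of a connectivity cost) per relaying user."""
    if n_hops < 1:
        raise ValueError("at least one hop is required")
    relays = n_hops - 1
    return basic_cost + relays * reward

# Two-hop example from the text: exactly one relaying user is rewarded.
cost = multihop_delivery_cost(basic_cost=6.0, reward=2.5, n_hops=2)
```

Multihop relay is thus only worthwhile when the reward stays below the connectivity cost it displaces.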
As mentioned earlier, the introduction of cognitive radio introduces further vari-
ability in spectrum access costs for network operators [Hay05, FH06, OGN07]. We
note two of the possible ways in which access costs can be evaluated. In the first,
an added entity, the spectrum broker, collects information about spectrum demand
and supply. Access cost can hence be evaluated through a regulated price or through
auctioning. Note that the brokerage entity can also emulate a cooperative setting
where the different operators negotiate access costs. In the second scenario, where
a non-cooperative mode is established, game theoretic approaches can be utilized to
evaluate estimated access costs.
It must be emphasized that the information and measurements required for the
above are commonly collected and/or processed for various nominal objectives in
an operator’s management information base, e.g. accounting, resource management,
etc. In fact, an implementation of SDCR would be minimally invasive in terms of
architecture.
6.3.5 Identifying Candidate Users
In Subsection 5.3.1, we discussed at length the notion of a migration's worth and
what is involved in its valuation. Here, we only extend the notation. We compute
the worth of user u_x to be moved from network i to network j, denoted by W^{i,j}(u_x),
as follows:

W^{i,j}(u_x) = \Phi^{i,j}_\Pi(u_x) \cdot \Phi^{i,j}_\Sigma(u_x) \qquad (6.5)

where \Phi^{i,j}_\Pi(u_x) is a product of multiplicative factors and \Phi^{i,j}_\Sigma(u_x) is a weighted
summation of additive factors. The former strongly dictates the possibility of migra-
tion, e.g. whether the service is available in j or whether u_x's terminal lacks the
required interface. The multiplicative factors can also indicate other attributes that
would prevent the network from persistently migrating users within a short duration of
time. The additive factors mostly describe the quality of service delivery in network j.
For example, they can characterize the variation of allocation between the two networks,
or signal quality. The set of candidate users using class s between network i and network
j, denoted by A^{i,j}_s, can be populated as follows:

A^{i,j}_s = \{ u_x : u_x \in U^i_s,\ W^{i,j}(u_x) \geq W_{th} \} \qquad (6.6)
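A minimal sketch of the worth computation (6.5) and candidate selection (6.6), assuming the multiplicative and additive factors have already been normalized; the factor values and weights below are illustrative, not from the thesis:

```python
import math

def worth(mult_factors, add_factors, weights):
    """Migration worth (6.5): product of multiplicative factors times
    a weighted sum of additive factors."""
    phi_prod = math.prod(mult_factors)
    phi_sum = sum(w * f for w, f in zip(weights, add_factors))
    return phi_prod * phi_sum

def candidates(users, w_th):
    """Candidate set (6.6): users whose worth meets the threshold W_th.
    'users' maps a user id to (mult_factors, add_factors, weights)."""
    return {u for u, (m, a, w) in users.items() if worth(m, a, w) >= w_th}

# A zero multiplicative factor (e.g. the service is unavailable in the
# target network) vetoes the migration outright.
users = {
    "u1": ([1, 1], [0.6, 0.8], [0.5, 0.5]),   # worth 0.7
    "u2": ([1, 0], [0.9, 0.9], [0.5, 0.5]),   # worth 0.0: vetoed
}
A = candidates(users, w_th=0.3)
```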
6.3.6 The Core of SDCR
The outcome of engaging the SDCR module is a group of sets of users, each indicating
the users of service s to be migrated from a network i to a network j. Each such set,
denoted by V^{i,j}_s, will be selected from the respective set of candidate users, i.e.
V^{i,j}_s \subseteq A^{i,j}_s. Denote by V^{n,O}_s and V^{O,n}_s, respectively, the
service s users to be migrated from network n to all other networks, and from all other
networks to network n, i.e.

V^{n,O}_s = \bigcup_{j \neq n,\ j \in N} V^{n,j}_s \qquad (6.7)

V^{O,n}_s = \bigcup_{i \neq n,\ i \in N} V^{i,n}_s \qquad (6.8)

If the SDCR module is engaged at a certain time t, then the constituents of U^i_s
just after the engagement, i.e. at time t^+, can be computed as follows:

U^i_s(t^+) = U^i_s(t) + V^{O,i}_s(t) - V^{i,O}_s(t) \qquad (6.9)

where t^+ refers to the instant just after SDCR completes its operation.
Denote the cost of user u_x, where u_x \in U, in the interval between t and t + \tau_I by
C(u_x, t). Accordingly, the cost of service delivery to U over the same duration can be
expressed as follows:

C(U(t)) = \sum_{u_x \in U} C(u_x, t) \qquad (6.10)

The objective of the SDCR module can be stated as follows:

\arg\min_{V^{i,j}_s \subseteq A^{i,j}_s} C(U(t^+)) \qquad (6.11)
In seeking this objective, SDCR needs to consider certain constraints. For
example, since a user can be a candidate for migration to more than one technology, it is
essential to limit the migration to a single technology:

V^{i,j}_s \cap V^{i,k}_s = \emptyset \qquad \forall\ j \neq k;\ i, j, k \in N \qquad (6.12)
Additionally, the operation of SDCR is bounded by the capabilities of the tech-
nologies within the overlay. Denote by Q(U^n) the QoS allocations, e.g. bandwidth,
provided for U^n, and by Q^n the maximum QoS allocations available in network n, ei-
ther absolutely or for the purpose of cost reduction. The constraint can be described
as follows:

Q(U^n(t^+)) \leq Q^n \qquad (6.13)

Equally important, the SDCR operation must guarantee that each user's QoS is
upheld, i.e.

Q(u_x, t) \geq Q_{SLA}(u_x) \qquad \forall\ u_x \in U \qquad (6.14)
These are the basic constraints for the operation of any SDCR module. Other
constraints can certainly be added.
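For a tiny instance, the objective (6.11) under the single-assignment constraint (6.12) and the capacity constraint (6.13) can be solved by exhaustive search rather than MILP. This sketch assumes the per-user QoS guarantee (6.14) is captured by a fixed per-user demand; all names and numbers are illustrative:

```python
from itertools import product

def sdcr_core(movable, cost, demand, capacity, current):
    """Exhaustively reassign each candidate user to the network that
    minimizes the total delivery cost, subject to per-network capacity.
    Each user ends up in exactly one network; non-candidates stay put."""
    nets = sorted(capacity)
    order = sorted(movable)
    best, best_cost = None, float("inf")
    for choice in product(nets, repeat=len(order)):
        assign = dict(current)
        assign.update(zip(order, choice))
        load = {n: 0 for n in nets}
        for u, n in assign.items():
            load[n] += demand[u]
        if any(load[n] > capacity[n] for n in nets):
            continue                      # violates (6.13)
        total = sum(cost[u][n] for u, n in assign.items())
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost

# Illustrative instance: every user is cheaper in net2, but net2 fits one.
cost = {"u1": {"net1": 6, "net2": 2}, "u2": {"net1": 6, "net2": 2},
        "u3": {"net1": 4, "net2": 2}}
demand = {"u1": 1, "u2": 1, "u3": 1}
capacity = {"net1": 3, "net2": 1}
current = {"u1": "net1", "u2": "net1", "u3": "net1"}
assign, total = sdcr_core({"u1", "u2", "u3"}, cost, demand,
                          capacity, current)
```

Exhaustive search is exponential in the number of candidates; the MILP formulation mentioned in Section 6.4 is the scalable counterpart of this sketch.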
Once the users to be migrated are selected, the selection is passed on to the
relevant network management entities to perform the OMVHs.
6.4 Performance Evaluation
In what follows, we examine the operational aspects of SDCR. Simulation experi-
ments were carried out in an event-driven simulator built utilizing C++ and MAT-
LAB. The SDCR core was implemented through a Mixed Integer Linear Programming
(MILP) formulation that was solved using the GLPK package of the GNU project [GLP].
[Figure: concentric coverages of a cellular BS and a WLAN AP, with terminals distributed over the larger coverage.]
Figure 6.3: An overlay of cellular and WLAN coverages.
6.4.1 Simulation Setup
We extended the simulation environment utilized in Chapter 5. For clarity, how-
ever, we reiterate the simulation setup herein and redraw Figure 5.5 in Figure 6.3.
An overlay involving two networks is used. We will refer to the network with the
persistently larger coverage as network 1, and to the other network as network 2. For
purposes of evaluation, the coverages of the two networks are concentric. A fixed
number of users was uniformly distributed over the area of the larger coverage. Users
make connection requests with an aggregate inter-arrival time that is exponential
with controllable mean. The capacities of networks 1 and 2 are respectively 80 and
40 ebus. In both networks, two classes of services are defined. The first, service 1, is
allocated 6 ebus in network 1 and 4 ebus in network 2. The second service requires
4 and 2 ebus, respectively, in networks 1 and 2. The connection holding time of the
first service is exponentially distributed with a mean of 150 seconds, regardless of the
network choice. The connection holding time of the second service is fixed at 300
seconds, regardless of the network choice. All users are assumed to be dual-mode
users, i.e. they can request and receive services in both networks. Experiments ran
for a simulated time of 3600 seconds. Each shown result represents the outcome of
ten experiments.
Note that the values used here are arbitrary, and that other values used in the
intensive investigation performed displayed similar trends to the ones presented
below.
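The traffic model above can be sketched as follows. The holding-time parameters (exponential with a 150 s mean for service 1, fixed at 300 s for service 2) are from the setup; the 70% service-1 share and 6 s mean inter-arrival time (10 calls per minute) follow the values used later in the transient experiments, and the seed is arbitrary:

```python
import random

def generate_requests(total_time, mean_interarrival, rng):
    """Generate (arrival_time, service, holding_time) tuples: arrivals
    have exponential inter-arrival times with a controllable mean;
    service 1 holds exponentially (mean 150 s), service 2 holds a
    fixed 300 s."""
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)
        if t > total_time:
            break
        service = 1 if rng.random() < 0.7 else 2   # 70% request service 1
        holding = rng.expovariate(1.0 / 150.0) if service == 1 else 300.0
        requests.append((t, service, holding))
    return requests

reqs = generate_requests(3600, mean_interarrival=6.0,
                         rng=random.Random(7))
```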
6.4.2 Observing the Transient Response
Figure 6.4 shows the temporal response for the two networks, with and without
SDCR being employed. In the figure, the instantaneous service delivery cost per user
is shown. The cost structure was fixed for services 1 and 2 (6 and 4 in network 1, and
2 and 2 in network 2, all per ebu per second). The aggregate arrival rate was set
to 10 calls per minute, with 60% of the calls seeking network 1 and 70% of the calls
requesting service 1. The coverage ratio was set to 1:0.6, with users uniformly distributed
over the coverage area of the larger coverage, i.e. network 1. This roughly means that
60% of the users will reside in the overlap area. At each decision instant, the migration
of each user within the overlap area is randomly assigned. The worth threshold, Wth,
was set to 0.3. SDCR was employed every 20 seconds. The instantaneous cost was
also sampled every 20 seconds.
The figure shows the general effectiveness of the SDCR module. Note that the
graph contains two horizontal lines indicating the median cost per user with and
without SDCR; the median cost per user is lower by about 16% when SDCR is
employed.
A subtle aspect of the SDCR module can be observed in Figure 6.5, namely the
effect of the value of τI . In the figure, instead of employing SDCR every 20 seconds,
it is employed every 200 seconds, i.e. a 10x factor. We note that the value of the median
cost when SDCR is employed is higher than in the case of τI = 20 seconds, resulting in a
reduction of 5% compared to the 16% observed in Figure 6.4. More importantly, there
are times when the instantaneous cost under SDCR is higher than when SDCR is
not employed. This can be explained as follows. First, we previously mentioned that
the choice of τI depends, in part, on the characteristics of the services employed. For
Figure 6.4: Instantaneous service delivery cost per user, with and without SDCR being engaged.
the exponentially distributed service 1 and the fixed-duration service 2, the call holding
times are 150 seconds (mean) and 300 seconds, respectively. Hence, at τI = 200, and
minding the arrival rate, the SDCR module is bound to miss reduction opportunities.
Second, the bound on the worst-case cost occurs not when SDCR is not engaged, but
when each user is persistently assigned to the most expensive network. Equivalently,
the optimal cost occurs if a cost reduction module is persistently applied. Minding the
overhead in terms of processing and signaling resulting from such a setting, the
conscious design choice of making SDCR operate in a periodic manner should be noted.
In Figure 6.6, the same settings are applied, with the exception of a demand surge
between 1000 and 1500 seconds. During the surge, all new users request network 1,
while the choice of service is evenly divided between the two services. It should be
observed that the performance of the SDCR module is maintained during the surge,
resulting in a reduction of 16%.
Figure 6.5: Instantaneous service delivery cost per user, with and without SDCR being engaged, with SDCR employed every 200 seconds.
Figure 6.6: Instantaneous service delivery cost per user, with and without SDCR being engaged, with a demand surge between 1000 and 1500 seconds.
The next figure, Figure 6.7, shows an extreme evaluation of the SDCR module.
Service costs in network 1 were varied between 50% and 150% of their original values,
while service costs in network 2 were varied between 100% and 300% of their original
values. The changes were applied at every change in the simulation state, i.e. arrival,
departure, engaging SDCR, or measurement. In order to make the figure more legible,
we slightly reduced the sampling to a measurement every 40 seconds. Again, despite
instantaneous cost variations whether or not SDCR is employed, employing SDCR
results in a lower median cost. In this case, the achieved reduction is 18%.
Figure 6.7: Instantaneous service delivery cost per user, with and without SDCR being engaged, with the cost structure changed randomly.
Before proceeding, it is worth noting that the actual reduction in cost is highly
dependent on the outcome of the cost valuation scheme. If there is not a significant
variation in the cost of service delivery between technologies in the wireless overlay,
the actual reduction caused by SDCR will accordingly decrease.
6.4.3 SDCR Effectiveness and Network Load
Figures 6.8 to 6.11 show some operational aspects with and without employing SDCR
under different aggregate loads. The aggregate arrival rate was varied from 10
to 25 requests per minute, in steps of 2.5 requests per minute. To accommodate the
increasing load, the SDCR frequency was increased to once every 5 simulated seconds. In
all experiments, requests were evenly distributed between the two services. However, the
portion of requests choosing a certain network was varied. Otherwise, all of the above
settings were maintained, except for the surge and the variable cost settings.
In observing subplots (a) in Figures 6.8 and 6.9, we note that employing SDCR
results in lowering the total cost (by an average of 4 × 10^5 in Figure 6.8(a), and by up
to 4 × 10^5 in Figure 6.9(a)). Note that in the figures the percentages of requests made
to network 1 were respectively 20% and 50%. Understanding subplots (b) to (d) in the
same two figures can be eased by minding the cost structure, which basically makes
services less expensive to deliver in network 2 than in network 1. This naturally results
in the blocking probability generally improving in network 1 and worsening in network 2.
This very facet results in an important effect observed in Figure 6.10, where
the percentage of requests made to network 1 is raised to 80%. Specifically, in Figure
6.10(a), employing SDCR actually results in an increase in the total cost of
service delivery. However, observing subplots (b) and (c) in Figure 6.10 indicates a
substantial drop in the blocking probability when employing SDCR (unlike the effect
in Figures 6.8 and 6.9). This indicates that employing SDCR, given the cost
structure, results in more users being accommodated in the system. As more users
are accommodated in the system, the total delivery cost is bound to increase. More-
over, observing subplot (b) in Figure 6.11, where the per-user median instantaneous
Figure 6.8: Operational aspects with (solid) and without (dashed/dotted) SDCR, 20% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2.
Figure 6.9: Operational aspects with (solid) and without (dashed/dotted) SDCR, 50% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2.
cost is plotted against the load, it becomes readily apparent that, despite the increase
in the total cost of service delivery, the actual per-user cost is still reduced.
Figure 6.10: Operational aspects with (solid) and without (dashed/dotted) SDCR, 80% of requests to network 1: (a) total cost; (b) aggregate blocking probability; (c) blocking probability in network 1; and (d) blocking probability in network 2.
Figure 6.11: Operational aspects with (solid) and without (dashed/dotted) SDCR, 80% of requests to network 1: (a) median instantaneous cost; and (b) median instantaneous cost per user.
6.5 Summary
In this chapter, we continued presenting applications for OMVHs. Specifically,
we studied how proactive and periodic implementations can be used to attain certain
operational objectives. Projecting variability in the cost of service delivery in future
HWNs, we chose the objective of reducing the operator's cost of delivering service to
the mobile end user, and showed the elements involved in the design and operation of
SDCR, an OMVHM dedicated to this objective.
SDCR holds potential gains, and remains robust and independent of the manner
in which an operator's costs are evaluated and of the variability these cost structures
yield. In a setup with a fixed cost structure, a reduction of 16% was achieved in per-user
cost. In examining SDCR's dependence on the frequency of operation, we observed the
reduction decrease to 5% as the frequency was decreased by an order of magnitude.
We also evaluated the module under demand surges and extreme variations in the
cost structure, respectively observing reductions of 16% and 18%. In addition, we
evaluated the effectiveness of SDCR when varying the load between the two overlaid
networks. Reductions in total cost of up to 4 × 10^5 were achieved. And while it is possible,
based on load distribution and cost structure, that SDCR results in a higher total cost
due to increased admission, SDCR was found to persistently reduce the per-user cost.
Notwithstanding, in implementing SDCR the designer should mind the variation
of costs in the operator's cost structure, as the magnitude of SDCR's gains depends
on this variation.
Chapter 7
Conclusions and Future Directions
Heterogeneous Wireless Networks (HWNs) are composite networks made up of dif-
ferent access technologies (GSM, WLAN, WCDMA, Worldwide Interoperability for
Microwave Access (WiMax), etc.). For a network operator, the objective of creating
such a composite is to achieve profitable gains through utilizing the resources of each
technology to its full potential by exploiting its individual capabilities. Given that
each access technology is different in terms of coverage range, band, QoS mainte-
nance, security, etc., an HWN, as a composite, becomes a more capable and
powerful network. It is hence in the complementarity of the different wireless
access technologies that the capabilities of an HWN are realized.
On the users' side, HWNs offer more choices. Given a multi-mode terminal and
appropriate service contracts, a user can initiate connectivity in any of the wireless
access technologies. The selection is based on the network that best serves the user's
attributes in terms of application requirements, terminal capabilities and behavioral
profile (mobility, types of applications, etc.). The true potential of HWNs, however, is
only unlocked when users can maintain their active sessions when toggling association
between one access technology and another. This inter-technology handoff, called
Vertical Handoff (VH), is the cornerstone of HWNs, and directly implies that users can
persistently select the "most appropriate network", and not only at session initiation.
However, considering the possible enumerations of user behaviors and applica-
tion choices, in addition to their terminal capabilities, it is expected that traditional
design arguments for RRM frameworks will not be able to accommodate service re-
quirements. For instance, optimizing the resources of each access technology individually
and independently from other networks may lead to underutilization and resource mis-
management. Furthermore, since RRM modules in HWNs are to be continuously
involved in many decisions at any given time, it becomes a requirement to control
and reduce the operational cost of the various RRM modules. More importantly, it may
be essential to look beyond traditional design arguments and examine more closely
the unique characteristics of HWNs.
In this thesis, we proposed a framework for RRM in HWNs. Our approach
was based on three directions. First, we explored the notion of joint
functionalities and how they may lead to enhanced performance. Our example was re-
alized through joint allocation policies involving provisioning and admission control.
We have also introduced a structure for robust provisioning to accommodate vari-
ability not only in demand but also in network capabilities. Second, we examined a
traditional RRM module concerned with exploiting the adaptive nature of multimedia
applications, namely bandwidth adaptation. After reviewing the literature, we offered
qualitative design guidelines for bandwidth adaptation. More importantly, however,
we examined the module’s tradeoff between admission ratios and user satisfaction and
proposed an algorithm with reduced complexity and controlled operational cost. The
final element of our framework involved an exploitation of VHs to the benefit
of the network. This exploitation complements the many works that have sought to
trigger VHs based on user preferences. We introduced the notion of Operator Mo-
tivated Vertical Handoffs (OMVHs), and enumerated the general considerations for
user selection and migration, in addition to user differentiation. We also discussed
the different requirements for reactive and proactive applications of such handoffs.
7.1 Conclusions
Key to implementing joint functionalities, which were discussed in Chapter 3, is how
the resources of individual networks can be jointly managed. However, in order to set
joint allocation policies, meaningful demand categorizations are required. For the rep-
resentative model that we examined for joint provisioning and admission control, we
categorized users based on their terminal capabilities (whether single-mode or dual-
mode) and their positioning with respect to the overlay created by two networks. We
made considerations for sharing the resources of two networks for dual-mode users
residing within the overlay, instead of managing the resources in each network in-
dependently. In evaluation, the joint model yielded reduced blocking probabilities
for dual-mode users at the cost of slight increases for single-mode users given evenly
distributed loads. However, the blocking probabilities for single-mode users suffered
when their demand was increased. Also in Chapter 3 we detailed how the use of
stochastic programming can be used in implementing a robust provisioning module
that accommodates variability in the demand and variability in network capabili-
ties, e.g. change in capacity due to interference. In addition to providing feasible
allocations, stochastic programming offers more realistic evaluation of losses in un-
derutilization.
Chapter 4 was dedicated to controlling the operational cost (processing, signal-
ing) of RRM modules. As a case study, we chose Bandwidth Adaptation Algorithms
(BAAs) which are concerned with increasing or decreasing user allocations based on
demand intensity and network conditions. Traditionally, BAAs are operationally ex-
pensive as they are always triggered regardless of the number and allocation state
of active users. This motivated our design of an optimized adaptation core with re-
duced complexity. We also introduced the notion of stochastic triggers to control the
tradeoff between admission ratios and user satisfaction. Through utilizing tradeoff
graphs, a network operator can direct the state of BAA based on the operational
objectives and the current state of user allocations. In the evaluated settings, the
adaptation core results in substantial reductions (up to 3/5) in blocking probabil-
ity. This reduction, however, comes at the cost of an increased number of adaptations
(almost full downgradation). We then displayed several tradeoff graphs for different
traffic modes. Based on these graphs, an operator can direct the operation of adaptation
according to operational objectives. Moreover, we utilized the probabilistic trigger
threshold as a network selection metric in an HWN setting to re-stress the gains of
joint functionalities. The setup resulted in reducing the overall blocking probability
at the cost of an increase in the downgradation degree.
In Chapters 5 and 6 we introduced the notion of Operator Motivated Vertical
Handoffs (OMVHs) which exploits VHs to the benefit of the network operator while
maintaining the user’s service requirements. In Chapter 5, we detailed the considera-
tions for designing an OMVH in terms of elements of user identification and selection.
To aid the identification metric, we utilized a notion of a migration's worth that evalu-
ated how useful a migration is to the operator's decision framework. In differentiating
between user classes, we evaluated the give of each class based on different possible
categorizations. In the evaluated settings, a reduction of 10% in blocking probabil-
ity was achieved. Such reductions, however, diminish as the network load increases
as the OMVH modules become less capable of user migration. We also studied the
effect of overlay percentage in order to identify the effect of capacity limitations, cov-
erage limitations and user distributions. The outcomes of such studies are useful for
operators in practical deployments. In varying the load between the two overlaid
networks, employing OMVH persistently resulted in better performance than when
it is not employed. This confirmed the potential of joint functionalities in HWNs.
When a hold-off time was introduced in the computation of a migration’s worth, the
tradeoffs between hold-off time duration and blocking probabilities or average number
of migrations were observed. However, when the hold-off time is comparable to the
average connection duration, a form of fairness is realized as more users are involved
in OMVH. Finally, we evaluated the robustness of a controller for upholding pre-set
ratios for the expended worth of two user classes. The controller was able to maintain
the pre-set ratios under different class load distributions.
Chapter 6 extended the premises laid in Chapter 5 to proactive applications. After
elaborating on different applications of OMVH in an operator’s network hierarchy, we
took a closer look at the design of proactive applications. As an example, we chose the
objective of reducing the cost of service delivery in HWNs. Two reasons motivated this choice. First, overlays are expected to constitute substantial portions of an operator’s network in the future. Second, it is projected that costs incurred by operators for service delivery
are going to experience short-term variability due to emerging wireless paradigms.
Accordingly, significant gains can be achieved in directing users to less costly networks
in terms of service delivery. In the chapter, we described the elements involved in
the design and operation of an SDCR module, which accommodates different cost
valuation mechanisms in addition to different variances in cost structures. Given a fixed cost structure, SDCR displayed a reduction in the median per-user cost under various conditions, including demand surges. SDCR also reduced the per-user cost under the extreme conditions of a highly variable cost structure. SDCR does not, however, reduce total costs under all conditions: employing SDCR under certain cost structures may actually result in an increased total cost due to increased admission. Even then, SDCR consistently maintained a reduction in per-user cost. Nevertheless,
an implementer should note that the actual gains of SDCR are dependent on the
magnitude of difference between the various costs in an operator’s cost structure.
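As a rough illustration of the SDCR idea of directing users to less costly networks, the following sketch picks the admissible network with the lowest delivery cost. The network names, capacities, and per-unit costs are invented for the example and are not the thesis’s cost structures:

```python
# Hypothetical sketch: among the networks with enough free capacity to admit
# a demand, direct the user to the one with the lowest service-delivery cost.

def cheapest_admissible(networks, demand_kbps):
    """Pick the lowest-cost network with enough free capacity, or None."""
    feasible = [n for n in networks if n["free_kbps"] >= demand_kbps]
    if not feasible:
        return None
    return min(feasible, key=lambda n: n["cost_per_kbps"])

networks = [
    {"name": "cellular", "free_kbps": 5000, "cost_per_kbps": 0.8},
    {"name": "wlan", "free_kbps": 2000, "cost_per_kbps": 0.2},
]
# a 384 kbps demand fits both networks and is directed to the cheaper WLAN
```

Note that, as observed above, such per-user cost reduction can still raise total cost when it increases the number of admissions.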
7.2 Future Directions
An important aspect of resource provisioning is demand categorization. An example
of traditional categorization is dividing users into new and handoff calls, making considerations for the latter to lower the chances of dropping. The introduction of data-switched multimedia led to another categorization based on traffic requirements. However, in HWNs, a further categorization is required to encompass the
various motivations for connectivity on the user side and the capabilities of various
access technologies on the operator’s side. One possible categorization of demand, one that accounts for the existence of overlays, is geographical. However, care should be taken in order to accommodate mobile demand. The emergence
of ad hoc and mobile networks means that bulky demands may move together from
one part of the network to another. The target demand categorization will ultimately
lead to simplified network management decisions. For example, if the categorization enabled the representation of user demand and other attributes, as well as network capabilities, as vectors, then it would be possible to reduce the admission procedure to a cross-product operation. Such a representation would also enable us to consider mobility, whether for single or multiple users. Provisioning would also be more accessible
since it would be directly apparent how to direct the resources of individual networks
to different demand categories.
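The vector representation above can be sketched as an element-wise feasibility check between a demand vector and a capability vector. The attribute names, values, and comparison directions below are illustrative assumptions, not a categorization proposed in the thesis:

```python
# Hypothetical sketch of vector-based admission: user demand and network
# capability are each a vector of attributes, and admission reduces to an
# element-wise comparison between the two.

# (bandwidth in kbps, tolerable delay in ms, supported speed in km/h)
demand = (384, 150, 60)        # what the user requests
capability = (2000, 100, 120)  # what the network offers

def admissible(demand, capability):
    """Admit if the network offers at least the requested bandwidth and
    speed support, and no more than the tolerable delay."""
    d_bw, d_delay, d_speed = demand
    c_bw, c_delay, c_speed = capability
    return c_bw >= d_bw and c_delay <= d_delay and c_speed >= d_speed
```

Extending the same comparison across the networks of an overlay is one way such a representation could make provisioning decisions directly apparent.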
Our work in Chapter 4 implicitly motivates a form of “plug n’ play” approach
to designing and operating RRM frameworks. In extending the work to other RRM
modules, both traditional and to come, it would be possible to identify the effect of
utilizing combinations of the various modules on network performance, in addition
to predicting their combined tradeoffs. Also, in the setting of an HWN, such a feature will enable more effective RRM decisions when the behavior of engaging different modules in the different networks making up an overlay can be inferred a priori.
This latter point is also applicable in further investigating OMVHs in general.
The introduction of OMVH adds another option to an operator’s arsenal in addition
to adaptation and preemption. In a sense, OMVH can be viewed as an adaptation
across multiple networks. In order to design a capable decision framework, there
needs to be means to evaluate the most effective decision at any given time. This
also requires understanding the inter-relations between various RRM modules in an
operating framework. As for the effort on service delivery cost reduction, there remains work in establishing guidelines for performing SDCR in a scaling network with
several wireless overlays.
Finally, we note that in this thesis we have made considerations for an HWN deployed by a single network operator. While in future networks cooperation between different operators will be inevitable, there remain certain challenges that need to be overcome. Measures need to be taken, for instance, in order to make inter-operator
resource management profitable for all operators involved. Furthermore, the push for
using IMS as the core will have significant effects even on the economic setup. For example, it would be possible for an entrepreneur to deploy a WLAN or multiple WLANs
and have agreements with more than one cellular operator. On the other hand, the
role of the operator will be somewhat redefined since an IMS-based core will allow a
differentiation between an operator and a service provider. Accordingly, cooperation
is also inevitable between operators and service providers. These changes press for more carefully thought-out RRM frameworks for individual players (operators, coverage entrepreneurs, and service providers) to maximize their individual revenues while, at the same time, maintaining acceptable user satisfaction levels.