Downlink TCP Proxy Solutions over
HSDPA with Multiple Data Flow
DANIELE GIRELLA
Master’s Degree Project
Stockholm, Sweden 2007
February 2007
Automatic Control Group
School of Electrical Engineering
Abstract
In recent years, several proxy solutions have been proposed to improve the per-
formance of TCP over wireless. The wide popularity of this protocol has pushed
for its adoption also in communication contexts, namely wireless systems, for
which the protocol was not originally designed. This is the case of High Speed
Downlink Packet Access (HSDPA), an enhancement of third generation wireless
systems that provides control mechanisms to increase system performance. Despite
shorter end-to-end delays and more reliable packet transmission, solutions that
improve TCP over HSDPA are still necessary. The goal of this Master's thesis
project is to explore the design of TCP proxy solutions that enhance user data
rates over HSDPA. As a relevant part of our activity, we have implemented a TCP
proxy solution over HSDPA in the ns2 simulation environment, extending the
EURANE simulator. EURANE has been developed within the SEACORN European
project, and it introduces three additional nodes to the existing UMTS modules
for ns2: the Radio Network Controller, the Base Station and the User Equipment.
The functionality of these additional nodes allows for the support of the new
features introduced by HSDPA. Our extension of the EURANE simulator includes
all of these new HSDPA features, a proxy solution, and some TCP enhancing
protocols (such as Eifel). The simulator allows for a performance comparison
between existing TCP solutions over wireless and the proxy we have studied in
this thesis. An analysis of the effects of multi-user data flows on TCP
performance has also been carried out.
Introduction
HSDPA (High Speed Downlink Packet Access) is a new high-speed data transfer
feature whose aim is to boost UMTS downlink data rates. The need for higher
downlink data rates is driven by the spread of new 3G mobile services - such
as web browsing, live video streaming and network gaming - that require high
downlink throughput and short latency.
The impressive increase in data rate is achieved by implementing a fast and
complex channel control mechanism based upon short physical layer frames,
Adaptive Modulation and Coding (AMC), fast Hybrid Automatic Repeat reQuest
(H-ARQ) and fast scheduling.
The HSDPA functionality defines three new channel types: the High-Speed
Downlink Shared Channel (HS-DSCH), the High-Speed Shared Control Channel
(HS-SCCH) and the High-Speed Dedicated Physical Control Channel (HS-DPCCH).
The HS-DSCH is multiplexed both in time and in code. In HSDPA each TTI lasts
2 ms, compared to 10 ms (or more) in UMTS. This reduction of TTI size permits
a shorter round-trip delay between the User Equipment and the Node B, and
improves the link adaptation rate and the efficiency of the AMC.
The distinctive characteristic of 3rd Generation wireless networks is packet
data services. The information provided by these services is, in the majority
of cases, accessible on the Internet, which almost entirely runs on TCP
traffic. Thus, there is wide interest in extending TCP to mobile and wireless
networks. The main problem with extending TCP over wireless networks is that
it was designed for wired networks, where packet losses are almost negligible
and delays are mainly caused by congestion. In wireless networks, instead, the
main source of packet losses is link-level errors on the radio channel, which
may seriously degrade the achievable
throughput of the TCP protocol.
It is well known that the main problem with TCP over networks having both
wired and wireless links is that packet losses are mistaken by the TCP sender
for network congestion. The consequences are that TCP shrinks its transmission
window and often experiences timeouts, resulting in degraded throughput.
The proposals to optimize TCP for wireless links can be divided into three
categories: link layer, end-to-end and split connection.
Link layer solutions (such as the Snoop protocol) try to reduce the error rate
of the link through some kind of retransmission mechanism. As the data rate of
the wireless link increases, there is more time for multiple link-level
retransmissions before a timeout occurs at the TCP layer, making link layer
solutions more viable.
End-to-end solutions (such as the Eifel protocol) try to modify the TCP
implementation at the sender and/or receiver and/or intermediate routers, or
to optimize the parameters used by the TCP connection to achieve good
performance.
Split connection solutions (such as proxy solutions) separate the TCP
connection used on the wireless link from the one used on the wired link. The
optimization can then be carried out separately on the wired and wireless
parts.
Chapter 1 introduces the High-Speed Downlink Packet Access concept and its
main new features, such as the new channel types, Adaptive Modulation and
Coding, Hybrid Automatic Repeat reQuest and fast scheduling. Its last section
presents the proposed evolution of HSDPA.
Chapter 2 gives a TCP overview covering the architecture of the protocol, its
problems over 3G networks and a short description of some TCP versions.
Chapter 3 introduces some TCP enhancing solutions, such as the Eifel and Snoop
protocols and proxy and flow aggregation solutions.
Chapter 4 provides, using the network simulator ns-2 and an HSDPA
implementation called EURANE, a comparative study of all the above solutions
in an HSDPA scenario.
Acknowledgements
A thesis is the result of several years of study and hard work. Each of these
years is marked by good and bad days, and each of these days by good and bad
moments. Along the way, one meets a lot of people who, in one way or another,
influence one's life at that time. Many people have been a part of my
graduate education, as teachers, friends, and workmates. To all of them I want
to say thank you.
First of all, I want to express my gratitude to my supervisor, Carlo Fischione,
for the guidance, the support, and the many enlightening meetings he has
provided during this work. Thanks also for proposing this master thesis
project to me.
I am very grateful to my Swedish examiner, Karl H. Johansson, and to my
Italian examiner, Fortunato Santucci, for putting their faith in me and for
giving me the opportunity to do my thesis in a world-class research group such
as the Automatic Control Group at KTH.
Thanks to Pablo Soldati for his willingness and for giving me countless pieces
of priceless advice during my stay in Sweden. Thanks also to Alberto
Speranzon for helping me when I was in trouble with some control systems, and
to Niels Moller for helping me with ns2.
Now is the moment to thank all those who have left a mark on my life during
my five years of study at the University of L'Aquila. My first thought goes
to Marco Fiorenzi, the best workmate I could have wished for. He has always
spurred me on to do my best, and to do it right away. Marco has been a
perfect workmate and a fantastic fellow traveller during the months we stayed
in Sweden but, first of all, he has been a real friend. It is thanks to him
that I am here now and that I have already finished my studies. Thanks to
Gianluca Colantoni
for his priceless friendship, for all the amusing and unique moments we have
spent together, and for the big heart he has always shown. A special thought
goes to Maria Ranieri, whose role in my life during all these years is hard
to put into words. The simplest thing I can say is that she was there,
always, and she has always given me much more than I deserved. Thanks also to
Davide and Matteo Pacifico for their support, their willingness and their
unique capacity to solve every kind of problem I had. I would also like to
thank Massimo Paglia. Massimo has been a competent workmate, a wise
interlocutor and excellent company.
Finally, a special thanks to those closest to me. Arianna, who shared my
happiness, and made me happy. Thanks for the love, patience and
understanding, and for putting your unreserved confidence in me. Thanks for
being such a special person.
My last (but not least!) thought goes to my family. I want to thank my father
Gabriele, my mother Uliana, and my sister Silvia for their understanding,
endless patience and encouragement when it was most needed. It is only thanks
to them that I have achieved this goal, and only thanks to them that I am
what I am today.
Thank you to all.
Contents
1 High Speed Downlink Packet Access (HSDPA) 1
1.1 HSDPA Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Channel Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 New Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Adaptive Modulation and Coding . . . . . . . . . . . . . 6
1.3.2 Fast Hybrid Automatic Repeat reQuest . . . . . . . . . . 9
1.3.3 Fast Scheduling . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Comparative Study of HSDPA and WCDMA . . . . . . . . . . . 12
1.5 Evolution of HSDPA . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 TCP Overview 21
2.1 TCP Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 TCP Problems over 3G Networks . . . . . . . . . . . . . . . . . . 26
2.3 TCP Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4 Round Trip Time and mean number of retransmissions for TCP
over 3G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3 TCP Enhancing Solutions 37
3.1 Proxy Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 Flow Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3 Eifel Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4 Snoop Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.5 Further Enhancing Protocols . . . . . . . . . . . . . . . . . . . . 47
4 Simulation 53
4.1 ns-2 Simulator and EURANE extension . . . . . . . . . . . . . . 53
4.2 Simulation Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5 Conclusions 73
References 75
List of Figures
1.1 HS-PDSCH channel time and code multiplexing . . . . . . . . . 3
1.2 HS-SCCH frame structure . . . . . . . . . . . . . . . . . . . . . . 4
1.3 HS-DPCCH frame structure [1] . . . . . . . . . . . . . . . . . . . 5
1.4 HSDPA channel functionality . . . . . . . . . . . . . . . . . . . . 5
1.5 HSDPA physical layer . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 HSDPA UE categories . . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 IR and CC state diagrams . . . . . . . . . . . . . . . . . . . . . . 10
1.8 An example of Chase Combining retransmission . . . . . . . . . 11
1.9 An example of Incremental Redundancy retransmission . . . . . 11
1.10 HSUPA peak throughput rates . . . . . . . . . . . . . . . . . . . 15
2.1 TCP slow start and congestion avoidance phase . . . . . . . . . 24
2.2 TCP fast retransmit and fast recovery phase . . . . . . . . . . . . 25
2.3 Mean value Ns as a function of BLER [26] . . . . . . . . . . . . . 35
2.4 Variance σ2 as a function of BLER [26] . . . . . . . . . . . . . . . 35
3.1 Proxy solution architecture . . . . . . . . . . . . . . . . . . . . . . 38
3.2 RNF signalling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3 TCP flow aggregation scheme [30] . . . . . . . . . . . . . . . . . 40
3.4 Sample logical aggregate for a given Mobile Host [30] . . . . . . 41
3.5 Eifel procedure [35]. . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.6 Snoop procedure [39] . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1 UE side MAC architecture [50] . . . . . . . . . . . . . . . . . . . 55
4.2 UTRAN side overall MAC architecture [50] . . . . . . . . . . . . 55
4.3 Main characteristics of EURANE’s schedulers [52] . . . . . . . . 57
4.4 Overview of physical layer model used in EURANE [52] . . . . 58
4.5 Simulation scenario . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.6 Available link bandwidth . . . . . . . . . . . . . . . . . . . . . . 61
4.7 Network architecture . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.8 UE’s throughput in the simple scenario . . . . . . . . . . . . . . 63
4.9 Server’s congestion window in simple and RNFProxy scenarios 64
4.10 Trends obtained in simple scenario setting server’s cwnd to 19 . 65
4.11 Throughput improvements by adding Eifel and Snoop protocols 66
4.12 UE’s throughput in RNFProxy scenario . . . . . . . . . . . . . . 68
4.13 Throughput improvements by adding Eifel and Snoop protocols
to RNFProxy scenario . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.14 UE’s throughput in RNFProxy scenario with both Eifel and Snoop
protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.15 Comparison between throughput’s trend in simple scenario and
in RNFProxy scenario . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.16 Comparison between throughput’s trend in RNFProxy scenario
(with and without enhancing protocols) . . . . . . . . . . . . . . 71
4.17 Comparison between the throughput experienced interposing a
RNFProxy and that experienced with a SimpleProxy . . . . . . . 71
List of Tables
1.1 2G to 3G throughput comparison . . . . . . . . . . . . . . . . . . . . 2
1.2 Comparison between DSCH and HS-DSCH basic properties . . 13
2.1 TCP versions comparison . . . . . . . . . . . . . . . . . . . . . . 33
4.1 Scenario’s characteristics . . . . . . . . . . . . . . . . . . . . . . . 61
4.2 Simulation parameters . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3 Summary of simulation results . . . . . . . . . . . . . . . . . . . 72
List of Abbreviations
3G Third Generation
3GPP 3rd Generation Partnership Project
ACB Aggregate Control Block
ACK Acknowledgment
AMC Adaptive Modulation and Coding
ARQ Automatic Repeat Request
ATCP Aggregate TCP
BCH Broadcast Channel
BDP Bandwidth Delay Product
BER Bit Error Rate
BLER Block Error Rate
C/I Carrier to Interference Ratio
CC Chase Combining
CDMA Code Division Multiple Access
CQI Channel Quality Indicator
CRC Cyclic Redundancy Check
CWND Congestion window
DCH Dedicated Channel
DPCH Dedicated Physical Channel
DSCH Downlink Shared Channel
DUPACK Duplicate Acknowledgment
E-DCH Enhanced Dedicated Channel
EDGE Enhanced Data rates for Global Evolution
EURANE Enhanced UMTS Radio Access Network Extension
FACH Forward Access Channel
FACK Forward Acknowledgment
FDD Frequency Division Duplex
FH Fixed Host
GGSN Gateway GPRS Support Node
GPRS General Packet Radio Service
GSM Global System for Mobile communication
H-ARQ Hybrid Automatic Repeat Request
HSDPA High-Speed Downlink Packet Access
HS-DPCCH High-Speed Dedicated Physical Control Channel
HS-DSCH High-speed Downlink Shared Channel
HS-PDSCH High-Speed Physical Downlink Shared Channel
HS-SCCH High-Speed Shared Control Channel
HSPA High-Speed Packet Access
HSUPA High Speed Uplink Packet Access
IR Incremental Redundancy
LTE Long Term Evolution
MAC Medium Access Control
MAC-b Medium Access Control for BCH
MAC-c Medium Access Control for PCH
MAC-d Medium Access Control for DCH
MAC-hs Medium Access Control high-speed
MAC-sh Medium Access Control for DSCH
MCS Modulation and Coding Scheme
MH Mobile Host
MIMO Multiple Input Multiple Output
MSR Mobile Support Router
MSS Maximum Segment Size
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiplexing Access
PCH Paging Channel
PF Proportional Fair
QAM Quadrature Amplitude Modulation
QPSK Quadrature Phase Shift Keying
RACH Random Access Channel
RLC Radio Link Control
RNC Radio Network Controller
RNF Radio Network Feedback
RR Round Robin
RTO Retransmission Timeout
RTT Round Trip Time
RWND Receiver Window
SACK Selective Acknowledgment
SAW Stop And Wait
SDMA Space Division Multiple Access
SF Spreading Factor
SGSN Serving GPRS Support Node
SH Supervisory Host
SIR Signal to Interference Ratio
SSTHRESH Slow Start Threshold
SYN Synchronize
TCP Transmission Control Protocol
TDD Time Division Duplex
TLE Transmission Layer Efficiency
TTI Transmission Time Interval
UDP User Datagram Protocol
UE User Equipment
UMTS Universal Mobile Telecommunication System
UTRAN UMTS Terrestrial Radio Access Network
VOIP Voice over IP
WCDMA Wideband Code Division Multiple Access
WLAN Wireless Local Area Network
Chapter 1
High Speed Downlink Packet
Access (HSDPA)
1.1 HSDPA Concept
HSDPA (High Speed Downlink Packet Access) is a new high-speed data transfer
feature released by the 3rd Generation Partnership Project (3GPP) with the
aim of boosting UMTS downlink data rates. The need for higher downlink data
rates is driven by the spread of new 3G mobile services - such as web
browsing, live video streaming and network gaming - which require high
downlink throughput, whereas the uplink is used mainly for control
signalling. HSDPA offers a way to increase downlink capacity within the
existing spectrum by a factor of two to three compared to 3G Release 99.
Table 1.1 shows a comparison of 2G (basic GSM), 2.5G (GPRS and EDGE) and 3G
(UMTS Rel. 99 and HSDPA Rel. 5) downlink data rates.
Another important enhancement introduced by HSDPA is a three-to-five-fold
sector throughput increase, which means more data users on a single frequency
(or carrier). The impressive increase in data rate is achieved by
implementing a fast and complex channel control mechanism based upon short
physical layer frames (cf. sec. 1.2), Adaptive Modulation and Coding (AMC)
(cf. sec. 1.3.1), fast Hybrid Automatic Repeat reQuest (H-ARQ) (cf. sec.
1.3.2) and fast scheduling (cf. sec. 1.3.3).
              GSM        GPRS      EDGE      UMTS      HSDPA
Typical max.
data rate     9.6 kbps   40 kbps   120 kbps  384 kbps  0.9-10 Mbps
Theoret. peak
data rate     14.4 kbps  171 kbps  473 kbps  2 Mbps    14.4 Mbps

Table 1.1: 2G to 3G throughput comparison
It is important to note that HSDPA is a pure access evolution without any
core network impact, except for minor changes due to the higher-bandwidth
access. For instance, in 3GPP Rel. 5 the maximum throughput set in the
signalling protocol has been increased from 2 Mbps to 16 Mbps in order to
support the theoretical maximum HSDPA data rate (14.4 Mbps). It follows that
the deployment of HSDPA is very cost effective, since the incremental cost is
mainly due to Node B and Radio Network Controller (RNC) hardware/software
upgrades, while the operator cost to provide data services is significantly
reduced. In a typical dense urban environment, the operator cost to deliver a
megabyte of data traffic is about three cents with HSDPA, while it rises to
about seven cents with UMTS. This is due to the large improvement in spectral
efficiency introduced by HSDPA.
1.2 Channel Structure
The HSDPA functionality defines three new channel types (see Fig. 1.4):
- High-Speed Downlink Shared Channel (HS-DSCH)
- High-Speed Shared Control Channel (HS-SCCH)
- High-Speed Dedicated Physical Control Channel (HS-DPCCH)
The HS-DSCH is very similar to the DSCH transport channel defined in Rel. 99.
It has been introduced in Rel. 5 as the primary radio bearer, and its
resources can be shared among all active HSDPA users in the cell. To obtain
higher data rates and greater spectral efficiency, the fast power control and
variable spreading factor of the DSCH are replaced in Rel. 5 by a short
packet size, multicode operation, and techniques such as AMC and HARQ on the
HS-DSCH. Another difference from the DSCH is that scheduling for the HS-DSCH
is done at the Node B rather than at the RNC. The HS-DSCH is mapped onto a
pool of physical channels (i.e. channelization codes) denominated HS-PDSCHs
(High-Speed Physical Downlink Shared Channels), shared among all the HSDPA
users in a time-multiplexed manner. HS-PDSCHs are multiplexed both in time
and in code. In Rel. 5, timeslots have the same length as in Rel. 99
(0.67 ms) but, differently from the latter, where each Transmission Time
Interval (TTI) consists of 15 slots (i.e. each TTI lasts 10 ms), in HSDPA
each TTI consists of three slots (i.e. 2 ms). This reduction of TTI size
permits a shorter round-trip delay between the User Equipment (UE) and the
Node B, and improves the link adaptation rate and the efficiency of the AMC.
Within each 2 ms TTI, a constant Spreading Factor (SF) of 16 is used, with a
maximum of 15 parallel HS-PDSCH channels. These channels may all be assigned
to one user during the TTI, or may be split among several users (see Figure
1.1).
Figure 1.1: HS-PDSCH channel time and code multiplexing (spreading codes
shared among users; HSDPA frame = 2 ms vs. standard Rel. 99 frame = 10 ms)
In order to support the HS-DSCH operation, an HSDPA UE needs new con-
trol channels: the HS-SCCH in the downlink direction and the HS-DPCCH in
the uplink direction.
The HS-SCCH is a fixed-rate (60 kbps, SF=128) channel used to carry downlink
signaling between the Node B and the UE before the beginning of each
scheduled TTI. This channel indicates to the UE when there is data on the
HS-DSCH addressed to that specific UE, and gives the UE the fast-changing
parameters needed for HS-DSCH reception. These include HARQ-related
information and the parameters of the HS-DSCH transport format selected by
the link adaptation mechanism (see Figure 1.2).
Figure 1.2: HS-SCCH frame structure (code set, UE id, modulation scheme and
other information)
The HS-DPCCH (SF=256) is a low-bandwidth uplink channel used to carry both
the ACK/NACK signaling, which indicates whether the corresponding downlink
transmission was successfully decoded, and the Channel Quality Indicator
(CQI), which is used to achieve link adaptation. To aid the power control
operation of the HS-DPCCH, an associated Dedicated Physical Channel (DPCH) is
run for every user (see Figure 1.3).
Figure 1.5 describes the downlink and uplink channel structure of HSDPA.
Figure 1.3: HS-DPCCH frame structure [1]
Figure 1.4: HSDPA channel functionality (downlink: transfer information on
HS-SCCH, data transfer on HS-DSCH; uplink: CQI and ACK/NACK on HS-DPCCH)
Figure 1.5: HSDPA physical layer (per-user associated DPCHs in downlink and
uplink, HS-SCCH, up to 15 HS-PDSCHs, and the HS-DPCCH carrying CQI and ACK;
TTI = 2 ms, slot = 0.67 ms)
1.3 New Features
As mentioned in the previous section, HSDPA introduces three new features:
• Adaptive Modulation and Coding (AMC).
• Hybrid Automatic Repeat reQuest (HARQ).
• Fast Scheduling.
1.3.1 Adaptive Modulation and Coding
Adaptive Modulation and Coding (AMC) represents a fundamental feature of
HSDPA. It consists of continuously optimizing the modulation scheme, the code
rate, the number of codes employed and the transmit power per code. This
optimization is based on various sources [2]:
• Channel Quality Indicator (CQI): the UE sends in the uplink a report de-
nominated CQI that provides implicit information about the instanta-
neous signal quality received by the user. The CQI specifies the modula-
tion, the number of codes and the transport block size the UE can support
with a detection error no higher than 10% [3]. This error refers to the
first transmission and to a reference HS-PDSCH power. The RNC com-
mands the UE to report the CQI every 2, 4, 8, 10, 20, 40, 80 or 160 ms [4]
or to disable the report. In [3], the complete set of reference CQI reports
is defined.
• Power Measurements on the Associated DPCH: every user mapped onto the
HS-PDSCH runs a parallel DPCH for signalling purposes, whose
transmission power can be used to gain knowledge about the instan-
taneous status of the user’s channel quality. This information may be
employed for link adaptation [5] as well as for packet scheduling. The
advantages of using this information are that no additional signalling is
required, and that it is available on a slot basis. However, it is limited to
the case when the HS-DSCH and the DPCH apply the same type of detec-
tor (e.g. a conventional Rake), and cannot be used when the associated
DPCH enters soft handover.
• Hybrid ARQ Acknowledgements: the acknowledgement corresponding to
the HARQ protocol may provide an estimation of the user’s channel
quality too, although this information is expected to be less frequent than
the previous ones, because it is only received when the user is served. Hence,
it does not provide instantaneous channel quality information. Note that
it also lacks the channel quality resolution provided by the two previous
metrics since a single information bit is reported.
• Buffer Size: the amount of data in the Medium Access Control (MAC)
buffer could also be applied in combination with previous information
to select the transmission parameters.
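To illustrate how these sources feed link adaptation, the sketch below shows how a Node B might turn a reported CQI into transmission parameters. The mapping table and the function are illustrative assumptions, not the reference CQI tables defined in [3].

```python
# Hypothetical sketch of CQI-driven format selection; the table entries
# below are made-up examples, not the 3GPP reference CQI tables of [3].

CQI_TABLE = {
    # cqi: (modulation, number of codes, transport block size in bits)
    5:  ("QPSK",    1,  377),
    10: ("QPSK",    3, 1483),
    16: ("16-QAM",  5, 3565),
    22: ("16-QAM", 10, 7168),
}

def select_format(reported_cqi):
    """Pick the most aggressive tabulated format not exceeding the CQI the
    UE reported (i.e. what it claims to decode with <= 10% error on the
    first transmission)."""
    usable = [c for c in CQI_TABLE if c <= reported_cqi]
    if not usable:
        return None          # channel too poor: skip this UE for now
    return CQI_TABLE[max(usable)]

print(select_format(18))     # ('16-QAM', 5, 3565)
```

In the real system this decision is refined every 2 ms TTI, combining the CQI with the DPCH power measurements and HARQ feedback described above.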
HSDPA uses higher-order modulation schemes such as 16-Quadrature Amplitude
Modulation (16-QAM) besides the QPSK already used for Rel. 99 channels. The
modulation is adapted according to the radio channel conditions. The HS-DSCH
encoding scheme is based on the Rel. 99 rate-1/3 turbo encoder, but adds rate
matching with puncturing and repetition to improve the granularity of the
effective code rate (1/4, 1/2, 5/8, 3/4). Different combinations of
modulation and channel coding rate can be used to provide different
peak data rates. In HSDPA, users close to the Node B are generally assigned
higher-order modulation with higher code rates (e.g. 16-QAM and a 3/4 code
rate), and both decrease as the distance between the UE and the Node B
increases. An HSDPA-capable UE can support the use of 5, 10 or 15
multi-codes. When a UE receives 15 multi-codes with 16-QAM modulation and no
coding (an effective code rate of one), the maximum peak data rate it can
experience is 14.4 Mbps.
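The 14.4 Mbps figure follows directly from the numbers above; as a quick check (assuming the standard WCDMA chip rate of 3.84 Mcps):

```python
# Worked sketch of the HSDPA peak-rate arithmetic described in the text.
# Assumes the standard WCDMA chip rate of 3.84 Mcps.

CHIP_RATE = 3.84e6          # chips per second (WCDMA)
SPREADING_FACTOR = 16       # fixed SF for HS-PDSCH
BITS_PER_SYMBOL = 4         # 16-QAM carries 4 bits per symbol
CODE_RATE = 1.0             # no coding (effective code rate of one)
NUM_CODES = 15              # maximum parallel HS-PDSCHs

symbol_rate = CHIP_RATE / SPREADING_FACTOR          # 240 ksymbols/s per code
rate_per_code = symbol_rate * BITS_PER_SYMBOL * CODE_RATE
peak_rate = rate_per_code * NUM_CODES

print(f"{peak_rate / 1e6:.1f} Mbps")  # 14.4 Mbps
```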
Rel. 5 defines twelve new categories for HSDPA UEs according to the fol-
lowing parameters (see Figure 1.6):
- Maximum number of HS-DSCH multi-codes that the UE can simultane-
ously receive (5, 10 or 15).
- Minimum inter-TTI time, which defines the minimum time between the
beginning of two consecutive transmissions to that UE. An inter-TTI of
one means that the UE can receive HS-DSCH packets during consecutive
TTIs (i.e. every 2 ms); an inter-TTI of two means that the scheduler would
need to skip one TTI between consecutive transmissions to that UE.
- Maximum number of HS-DSCH transport block bits received within an
HS-DSCH TTI. The combination of this parameter and the inter-TTI in-
terval determines the UE peak data rate.
- The maximum number of soft channel bits over all the HARQ processes.
A UE with a low number of soft channel bits will not be able to support
Incremental Redundancy (cf. sec. 1.3.2) for the highest peak data rates
and its performance will thus be slightly lower than for a UE supporting
a larger number of soft channels.
- Supported modulations (QPSK only or both QPSK and 16-QAM).
AMC provides a link adaptation functionality: the Node B is in charge of
adapting the modulation, the coding format, and the number of multi-codes to
the instantaneous radio conditions.
UE        Max. number  Min. inter-TTI  Transport ch.  Total number  Modulation     Max. peak
category  of codes     interval        bits per TTI   of soft bits                 data rate
1         5            3               7300           19200         QPSK & 16-QAM  1.2 Mbps
2         5            3               7300           28800         QPSK & 16-QAM  1.2 Mbps
3         5            2               7300           28800         QPSK & 16-QAM  1.8 Mbps
4         5            2               7300           38400         QPSK & 16-QAM  1.8 Mbps
5         5            1               7300           57600         QPSK & 16-QAM  3.6 Mbps
6         5            1               7300           67200         QPSK & 16-QAM  3.6 Mbps
7         10           1               14600          115200        QPSK & 16-QAM  7.2 Mbps
8         10           1               14600          134400        QPSK & 16-QAM  7.2 Mbps
9         15           1               20432          172800        QPSK & 16-QAM  10.2 Mbps
10        15           1               28776          172800        QPSK & 16-QAM  14.4 Mbps
11        5            2               3650           14400         QPSK only      0.9 Mbps
12        5            1               3650           —             QPSK only      1.8 Mbps

Figure 1.6: HSDPA UE categories
1.3.2 Fast Hybrid Automatic Repeat reQuest
HSDPA uses the HARQ (Hybrid Automatic Repeat reQuest) retransmission
mechanism with a Stop-and-Wait (SAW) protocol. The HARQ mechanism allows the
UE to rapidly request retransmission of erroneous transport blocks until they
are successfully received. HARQ functionality is implemented at the MAC-hs
(Medium Access Control - high speed) layer, a new sub-layer introduced for
HSDPA. MAC-hs is terminated at the Node B, unlike the RLC (Radio Link
Control), which is terminated at the RNC (Radio Network Controller). This
yields a shorter retransmission delay (< 10 ms) for HSDPA than for Rel. 99
(up to 100 ms). In order to make better use of the waiting time between
acknowledgments, multiple HARQ processes can run for the same UE using
separate TTIs. This is referred to as N-channel SAW (N up to six in advanced
Node B implementations). In this way, while one channel is waiting for an
acknowledgment, the remaining N − 1 channels continue to transmit. HSDPA
supports both Chase Combining (CC) [6] and Incremental Redundancy (IR).
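A minimal sketch of the N-channel SAW idea follows, assuming illustrative values for N and for the ACK round-trip delay expressed in TTIs:

```python
# Minimal sketch of N-channel Stop-and-Wait: while one HARQ process waits
# for its ACK/NACK, the other N-1 processes keep the channel busy in the
# intervening TTIs. N and the delay are illustrative assumptions.

from collections import deque

N_PROCESSES = 6          # up to six processes per the text
ACK_DELAY_TTIS = 5       # assumed ACK round-trip delay, in 2 ms TTIs

def simulate(num_ttis):
    """Return which HARQ process transmits in each TTI."""
    free = deque(range(N_PROCESSES))        # processes ready to send
    pending = {}                            # tti when ACK arrives -> process
    schedule = []
    for tti in range(num_ttis):
        if tti in pending:                  # ACK received: process is free
            free.append(pending.pop(tti))
        if free:
            p = free.popleft()
            schedule.append(p)
            pending[tti + ACK_DELAY_TTIS] = p   # block p until its ACK
        else:
            schedule.append(None)           # all processes awaiting ACKs
    return schedule

print(simulate(8))   # [0, 1, 2, 3, 4, 5, 0, 1]
```

With these values the six processes keep every TTI occupied; with N smaller than the ACK delay, idle TTIs (`None`) would appear.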
CC consists of the retransmission from the Node B of the same set of coded
symbols as the original packet. The decoder at the receiver combines these
multiple copies of the transmitted packet, weighted by the received SNR,
prior to decoding (see Figure 1.8). This type of combining provides time
diversity and soft combining gain at a low complexity cost, and imposes the
least demanding UE memory requirements of all Hybrid ARQ strategies. The
combination process incurs a minor combining loss of around 0.2 − 0.3 dB per
retransmission [7].
The state diagram of Figure 1.7(a) summarizes how the Chase Combining al-
gorithm works.
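The SNR-weighted combining that Chase Combining relies on can be sketched as follows; this is a toy maximum-ratio combining of repeated symbol vectors with made-up values, not a real channel model:

```python
# Sketch of the soft-combining idea behind Chase Combining: each
# retransmission repeats the same coded symbols, and the receiver forms an
# SNR-weighted sum of the copies before decoding. Values are illustrative.

def chase_combine(received_copies):
    """received_copies: list of (symbols, snr) pairs for the same block.
    Returns the SNR-weighted average of the symbol vectors (MRC-style)."""
    total_weight = sum(snr for _, snr in received_copies)
    n = len(received_copies[0][0])
    combined = [0.0] * n
    for symbols, snr in received_copies:
        for i, s in enumerate(symbols):
            combined[i] += snr * s / total_weight
    return combined

# Two noisy receptions of the same transmitted symbols [1, -1, 1]:
copies = [([0.6, -1.3, 0.9], 2.0),    # first transmission, SNR 2.0
          ([1.2, -0.8, 1.1], 1.0)]    # retransmission, SNR 1.0
print(chase_combine(copies))           # weighted toward the stronger copy
```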
Figure 1.7: IR and CC state diagrams: (a) Chase Combining, (b) Incremental
Redundancy
IR, on the other hand, sends different redundancy information during the
retransmissions (see Figure 1.9). This leads to an incremental increase of
the coding gain, which can result in fewer retransmissions than with CC. IR
is thus particularly useful when the initial transmission uses high coding
rates (e.g. 3/4), but it implies higher memory requirements for the mobile
receivers and a larger amount of control signaling compared to Chase
Combining. Incremental Redundancy can be further classified into Partial IR
and Full IR. Partial IR includes the systematic bits in every coded word,
which implies that every retransmission is self-decodable, whereas Full IR
only includes parity bits, and therefore its retransmissions are not
self-decodable. According to [7], Full IR only provides a significant coding
gain for effective coding rates higher than 0.4 − 0.5, because for lower
coding rates the additional coding gain is negligible, since the coding
scheme is based on a rate-1/3 structure. On the other hand, for higher
effective coding rates the coding gain can be significant; for example, a
coding rate of 0.8 provides around 2 dB gain in the Vehicular A (3 km/h)
scenario with QPSK modulation.
The state diagram of Figure 1.7(b) summarizes how the Incremental Redundancy
algorithm works.
For a performance comparison of HARQ with Chase Combining and Incremental
Redundancy for HSDPA systems see [8].
[Figure content omitted: original transmission at rate R = 1; effective rate after
soft combining is R = 1/2 after the 1st retransmission and R = 1/3 after the
2nd.]
Figure 1.8: An example of Chase Combining retransmission
[Figure content omitted: original data coded at rate 1/3; effective rate after soft
combining at the decoder stage is R = 1 for the original transmission, R = 1/2
after the 1st retransmission and R = 1/3 after the 2nd.]
Figure 1.9: An example of Incremental Redundancy retransmission
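The effective code rate progression shown in Figures 1.8 and 1.9 can be reproduced with a short helper (an illustrative sketch of our own, not part of the HSDPA specification):

```python
def effective_rate_after_combining(n_transmissions, initial_rate=1.0):
    """Effective code rate at the decoder after soft-combining n equally
    sized transmissions: R = 1 -> 1/2 -> 1/3, as in Figures 1.8 and 1.9.
    With CC the added bits are repetitions of the first transmission
    (energy gain only); with IR they are new parity bits (coding gain)."""
    assert n_transmissions >= 1
    return initial_rate / n_transmissions
```

After the second retransmission (three transmissions in total) the effective rate at the decoder is 1/3, the rate of the underlying mother code.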
1.3.3 Fast Scheduling
The scheduler is a fundamental element of HSDPA, as it strongly affects its be-
havior and performance. At each TTI, the scheduler determines toward which terminal
(or terminals) the HS-DSCH should transmit and, together with AMC, at which
data rate. The HSDPA scheduler is located at the Node B. The algorithms used
to schedule are Round Robin (RR), Maximum Carrier to Interference (Max C/I)
and Proportional Fair (PF).
• RR schedules users with a first-in first-out approach. This approach en-
sures high fairness among all users, but at the same time it reduces the
overall system throughput, since users can be served even when they are
experiencing a weak signal.
• Maximum C/I schedules only users that are experiencing the maximum
C/I during that TTI. This scheme provides the maximum throughput for
the system but it produces unfairness of treatment among users penaliz-
ing those located at cell edge.
• PF offers a good trade-off between RR (high fairness and low throughput)
and Maximum C/I (low fairness and high throughput). PF schedules
users according to the ratio between their instantaneous achievable data
rate and their average served data rate.
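The three policies differ only in the metric maximized at each TTI. The PF rule above can be sketched as follows (a minimal model of our own; the user records and the smoothing factor beta are illustrative choices, not taken from a specific Node B implementation):

```python
def proportional_fair(users, beta=0.1):
    """Serve the user maximizing inst_rate / avg_rate, then update the
    exponentially averaged served rate of every user."""
    # users: list of dicts with 'inst_rate' (achievable rate this TTI)
    # and 'avg_rate' (average served rate so far, > 0)
    best = max(users, key=lambda u: u['inst_rate'] / u['avg_rate'])
    for u in users:
        served = u['inst_rate'] if u is best else 0.0
        u['avg_rate'] = (1 - beta) * u['avg_rate'] + beta * served
    return best
```

A user whose instantaneous channel is good relative to what it has been served recently is preferred, which yields the fairness/throughput trade-off described above.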
1.4 Comparative Study of HSDPA and WCDMA
We have described how HSDPA Rel. 5 represents an evolution of WCDMA
Rel. 99 consisting in the introduction of a high speed transport channel (HS-
DSCH) and three new features: fast scheduling, fast link adaptation and fast
Hybrid ARQ. The aim of these three new tools is that of providing a rapid
adaptation to changing radio conditions. To achieve this aim, their functional-
ities are placed at the node B instead of the RNC as for the WCDMA.
As depicted in Table 1.2 [9], some CDMA features have been changed in the
HSDPA. In particular, Table 1.2 shows how the CDMA fast power control has
been replaced by fast Adaptive Modulation and Coding (AMC) causing an HS-
DPA power efficiency gain due to the elimination of the power control over-
head. In addition, AMC provides fast link adaptation, following the policy
that the better the link conditions experienced by the terminal, the higher the
data rate with which it is served. Another change concerns
the Spreading Factor (SF): in the CDMA it varies between 4 and 256 while in
the HSDPA it assumes a fixed value of 16. To support different data rates, the
HSDPA supports a wide combination of channel coding rates and modulation
format while WCDMA implements only the combination TC=1/3 and QPSK.
In order to increase the AMC efficiency and the link adaptation rate, the packet
duration has been reduced from 10 or 20 ms (Rel. 99) to a fixed value of 2 ms
(Rel. 5). To decrease the round trip time (RTT), i.e. the round trip delay, the
MAC functionality of the HS-DSCH has been placed at the Node-B instead of
at the RNC.
Another difference concerns the retransmission functionality. The WCDMA
Rel. 99 implements a simple ARQ scheme (the retransmitted packets are identi-
cal to those of the first transmission) while HSDPA Rel. 5 implements an HARQ
which supports both Chase Combining (CC) [6] and Incremental Redundancy
(IR). The last difference is about the CRC policy: while in the WCDMA the
CRC is implemented for each transport block, in the HSDPA it is implemented
for each TTI (i.e. it uses a single CRC for all transport blocks in the TTI) with a
consequent decrease of the overhead.
Feature Rel.99 DSCH Rel.5 HS-DSCH
Variable spreading factor Yes (4 - 256) No (16)
Fast power control Yes (1500 Hz) No
Fast rate control No (QPSK, TC=1/3) Yes (AMC, 500 Hz)
Fast L1 HARQ No (∼ 100 ms) Yes (∼ 10 ms)
HARQ with soft combining No CC or IR
TTI 10 or 20 ms 2 ms
Location of MAC RNC Node-B
CRC attachment per transport block per TTI
Peak data rate ∼ 2 Mbps ∼ 10 Mbps
Table 1.2: Comparison between DSCH and HS-DSCH basic properties
1.5 Evolution of HSDPA
HSDPA is making impressive inroads in the commercial service arena. The
Global mobile Suppliers Association (GSA) survey “HSDPA Operator Commitments”
published on January 2, 2007 reports 140 HSDPA networks in various stages of
deployment in 64 countries, of which 93 have commercially launched in 51
countries [10]. This means that HSDPA is today delivering commercial mobile
broadband services in North and South America, throughout Europe (includ-
ing 24 of the 27 EU nations), Asia, Africa, the Middle East and Australia.
HSUPA
Whereas HSDPA optimizes downlink performance, High Speed Uplink Packet
Access (HSUPA), which uses the Enhanced Dedicated Channel (E-DCH), consti-
tutes a set of improvements that optimize uplink performance. These improve-
ments include higher throughputs, reduced latency, and increased spectral ef-
ficiency. HSUPA is standardized in Release 6. HSUPA will result in an approx-
imately 85 percent increase in overall cell throughput on the uplink and an ap-
proximately 50 percent gain in user throughput. HSUPA also reduces packet
delays. Such an improved uplink will benefit users in a number of ways. For
instance, some user applications transmit large amounts of data from the mo-
bile station, such as sending video clips or large presentation files. For future
applications such as VoIP, improvements will balance the capacity of the up-
link with the capacity of the downlink. HSUPA achieves its performance gains
through the following approaches:
- An enhanced dedicated physical channel.
- A short TTI, as low as 2 ms, which allows faster responses to changing
radio conditions and error conditions.
- Fast Node-B-based scheduling, which allows the base station to efficiently
allocate radio resources.
- Fast Hybrid ARQ, which improves the efficiency of error processing.
The combination of TTI, fast scheduling, and Fast Hybrid ARQ also serves
to reduce latency, which can benefit many applications as much as improved
throughput. HSUPA can operate with or without HSDPA in the downlink,
though it is likely that most networks will use the two approaches together.
The improved uplink mechanisms also translate to better coverage, and for ru-
ral deployments, larger cell sizes. Apart from improving uplink performance,
E-UL improves HSDPA performance by making more room for acknowledg-
ment traffic and by reducing overall latency. HSUPA can achieve different
throughput rates based on various parameters, including the number of codes
used, the spreading factor of the codes, the TTI value, and the transport block
size in bits, as illustrated in Figure 1.10.
HSUPA category  Codes  Spreading Factor  Transport block size  TTI (ms)  Data rate
1               1      4                 7296                  10        0.73 Mbps
2               2      4                 14592                 10        1.46 Mbps
2               2      4                 2919                  2         1.46 Mbps
3               2      4                 14592                 10        1.46 Mbps
4               2      2                 20000                 10        2.00 Mbps
4               2      2                 5837                  2         2.90 Mbps
5               2      2                 20000                 10        2.00 Mbps
6               2 + 2  2 + 4             20000                 10        2.00 Mbps
6               2 + 2  2 + 4             11520                 2         5.76 Mbps
Figure 1.10: HSUPA peak throughput rates
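The peak rates in Figure 1.10 follow from delivering one transport block per TTI; the arithmetic only works out if the block sizes are read in bits (e.g. 7296 bits every 10 ms gives 0.73 Mbps). A hypothetical helper:

```python
def hsupa_peak_rate_mbps(transport_block_bits, tti_ms):
    """Peak rate when one transport block is delivered per TTI.
    bits / ms equals kbit/s, so divide by 1000 for Mbit/s."""
    return transport_block_bits / (tti_ms * 1000.0)
```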
The combination of HSDPA and HSUPA is called High-Speed Packet Ac-
cess (HSPA).
Evolution of HSPA (HSPA+)
Wireless and networking technologists are developing a continual series of en-
hancements for HSPA, some of which are being specified in Release 6 and Re-
lease 7, and some of which are being studied for Release 8. 3GPP has specified
a number of advanced receiver designs, including Type 1 which uses mobile
receive diversity, Type 2 which uses channel equalization and Type 3, which
includes a combination of receive diversity and channel equalization.
The first approach, specified in Rel. 6, is mobile-receive diversity. This
technique relies on the optimal combining of received signals from separate
receiving antennas. The antenna spacing yields signals that have somewhat
independent fading characteristics. Hence, the combined signal can be more
effectively decoded, which results in a downlink capacity gain of up to 50 per-
cent when employed in conjunction with techniques such as channel equal-
ization. Receive diversity is effective even for small devices such as PC Card
modems and smartphones. Current receiver architectures based on rake re-
ceivers are effective for speeds up to a few megabits per second. But at higher
speeds, the combination of reduced symbol period and multipath interference
results in inter-symbol interference and diminishes rake receiver performance.
This problem can be solved by advanced receiver architectures such as chan-
nel equalizers that yield an additional 20 percent gain over HSDPA with re-
ceive diversity. Alternative advanced receiver approaches include interference
cancellation and generalized rake receivers (G-Rake). Different vendors are
emphasizing different approaches. However, the performance requirements
for advanced receiver architectures are specified in 3GPP Release 6. The com-
bination of mobile receive diversity and channel equalization (Type 3) is espe-
cially attractive as it results in a large gain independently of the radio channel.
What makes such enhancements attractive is that no changes are required to
the networks except increased capacity within the infrastructure to support
the higher bandwidth. Moreover, the network can support a combination of
devices, including both earlier devices that do not include these enhancements
and those that do. Device vendors can selectively apply these enhancements
to their higher performing devices.
Another capability being standardized is Multiple Input Multiple Output.
MIMO refers to a technique that employs multiple transmit antennas and mul-
tiple receive antennas, often in combination with multiple radios and multiple
parallel data streams. The most common use of the term “MIMO” applies to
spatial multiplexing. The transmitter sends different data streams over each
antenna. Whereas multipath is an impediment for other radio systems, MIMO
actually exploits multipath, relying on signals to travel across different com-
munications paths. This results in multiple data paths effectively operating
somewhat in parallel and, through appropriate decoding, in a multiplicative
gain in throughput. Tests of MIMO have proven very promising in WLANs
operating in relative isolation, where interference is not a dominant factor. Spa-
tial multiplexing MIMO should also benefit HSPA “hotspots” serving local ar-
eas such as airports, campuses, and malls, where the technology will increase
capacity and peak data rates. However, in a fully loaded network with inter-
ference from adjacent cells, overall capacity gains will be more modest, in the
range of 20 to 33 percent over mobile-receive diversity. Although MIMO can
significantly improve peak rates, other techniques such as Space Division Mul-
tiple Access (SDMA) (also a form of MIMO) may be even more effective than
MIMO for improving capacity in high spectral efficiency systems using a reuse
factor of 1. 3GPP has enhanced the system to support SDMA operation as part
of Rel. 6. In Rel. 7, Continuous Packet Connectivity enhancements reduce the
uplink interference created by dedicated physical control channels of packet
data users when they have no user data to transmit. This helps increase the
limit for the number of HSUPA users that can stay connected at the same time.
3GPP currently has a study item referred to as “HSPA Evolution” or “HSPA+”
that is not yet in a formal specification development stage. The intent is to cre-
ate a highly optimized version of HSPA that employs both Rel. 7 features and
other incremental features such as interference cancellation and optimizations
to reduce latency.
The goals of HSPA+ are to:
- Exploit the full potential of a CDMA approach before moving to an OFDM
platform in 3GPP LTE.
- Achieve performance comparable to Long Term Evolution (LTE) in 5 MHz
of spectrum.
- Provide smooth interworking between HSPA+ and LTE that facilitates
operation of both technologies. As such, operators may choose to lever-
age the SAE planned for LTE.
- Allow operation in a packet-only mode for both voice and data.
- Be backward compatible with previous systems while incurring no per-
formance degradation with either earlier or newer devices.
- Facilitate migration from current HSPA infrastructure to HSPA+ infras-
tructure.
3GPP Long Term Evolution (LTE)
Although HSPA and HSPA+ offer a highly efficient broadband wireless ser-
vice that will likely enjoy success for the remainder of the decade, 3GPP is also
working on a project called Long Term Evolution. LTE will allow operators
to achieve even higher peak throughputs in higher spectrum bandwidth. Ini-
tial possible deployment is targeted for 2009. LTE uses Orthogonal Frequency
Division Multiple Access (OFDMA) on the downlink, which is well suited to
achieve high peak data rates in high spectrum bandwidth. WCDMA radio
technology is about as efficient as OFDM for delivering peak data rates of
about 10 Mbps in 5 MHz of bandwidth. However, achieving peak rates in
the 100 Mbps range with wider radio channels would result in highly complex
terminals and is not practical with current technology. It is here that OFDM
provides a practical implementation advantage. Scheduling approaches in the
frequency domain can also minimize interference, and hence boost spectral ef-
ficiency. On the uplink, however, a pure OFDMA approach results in high Peak
to Average Ratio (PAR) of the signal, which compromises power efficiency and
ultimately battery life. Hence, LTE uses an approach called SC-FDMA, which
has some similarities with OFDMA but will have a 2 to 6 dB PAR advantage
over the OFDMA method used by other technologies such as IEEE 802.16e.
LTE goals include:
- Downlink peak data rates up to 100 Mbps with 20 MHz bandwidth.
- Uplink peak data rates up to 50 Mbps with 20 MHz bandwidth.
- Operation in both TDD and FDD modes.
- Scalable bandwidth up to 20 MHz, covering 1.25 MHz, 2.5 MHz, 5 MHz,
10 MHz, 15 MHz, and 20 MHz in the study phase. 1.6 MHz wide chan-
nels are under consideration for the unpaired frequency band, where a
TDD approach will be used.
- Increase spectral efficiency over Rel. 6 HSPA by a factor of two to four.
- Reduce latency to 10 ms round-trip time between user equipment and
the base station and to less than 100 ms transition time from inactive to
active.
The overall intent is to provide for an extremely high-performance radio-
access technology that offers full vehicular speed mobility and that can readily
coexist with HSPA and earlier networks. Because of scalable bandwidth, oper-
ators will be able to easily migrate their networks and users from HSPA to LTE
over time.
The impressive improvements in the achievable peak data rates due to LTE
will lead, in the next years, to the spreading of rich multimedia services and
applications over wireless networks. Since these services require the use of
TCP (Transmission Control Protocol), TCP issues, performance, and enhancing
solutions over HSDPA networks will be extensively discussed in Chapter 2 and
in Chapter 3.
Chapter 2
TCP Overview
2.1 TCP Architecture
The distinctive characteristic of 3rd Generation wireless networks is packet
data services. The information provided by these services is, in the majority of
cases, accessible on the Internet. Since Internet communications are almost en-
tirely constituted by TCP traffic, the research community is showing a wide
interest in extending TCP application to mobile and wireless networks.
TCP is a connection oriented transport protocol which provides a reliable
byte stream to the application layer [11]. Reliability is achieved using an ARQ
mechanism based on positive acknowledgments. TCP provides transparent
segmentation and reassembly of user data and handles flow and congestion
control. TCP packets are cumulatively acknowledged when they arrive in se-
quence, while out-of-sequence packets cause the generation of duplicate ac-
knowledgments. TCP manages a retransmission timer which is started when a segment
is transmitted. Retransmission timers are continuously updated on a weighted
average of previous round trip time (RTT) measurements, i.e. the time it takes
from the transmission of a segment until the acknowledgment is received. The
TCP sender detects a loss either when multiple duplicate acknowledgments (the de-
fault value is 3) arrive, implying that the next packet was lost, or when a re-
transmission timeout (RTO) expires. The RTO value is calculated dynamically
based on RTT measurements. This explains why accuracy in RTT measure-
ments is critical: delayed timeouts slow down recovery, while early ones may
lead to redundant retransmissions.
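The weighted-average RTO computation mentioned above is commonly realized with Jacobson-style smoothed estimators. A simplified sketch with the classic gains (1/8 and 1/4) and a 1 s lower bound, in the spirit of RFC 6298 rather than any particular implementation:

```python
class RtoEstimator:
    """RTO from a smoothed RTT (srtt) and an RTT variation estimate."""
    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, rtt):
        if self.srtt is None:                  # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2.0
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return max(1.0, self.srtt + 4.0 * self.rttvar)  # RTO in seconds
```

The variance term inflates the RTO when RTT samples fluctuate, which is exactly what protects against the spurious timeouts discussed later for wireless links.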
A prime concern for TCP is congestion. Today all TCP implementations are
required to use algorithms for congestion control, namely slow start, conges-
tion avoidance, fast retransmit and fast recovery [12].
Slow Start and Congestion Avoidance
Since TCP was initially designed to be used in wired networks where transmis-
sion losses are extremely low (BERs in the order of 10^-10, and down to 10^-12
for optical links), TCP assumes that all losses are due to congestion. Therefore,
when TCP detects packet losses, it reacts both retransmitting the lost packet
and reducing the transmission rate. In this way it allows router queues to
drain. Afterwards, it gradually increases the transmission rate to probe the
network’s capacity.
The purpose of slow start and congestion avoidance is to prevent conges-
tion from occurring by varying the transmission rate. TCP maintains a congestion
window (cwnd), which represents an estimate of the number of segments that
can be injected into the network without causing congestion (a segment is any
TCP data or acknowledgment packet (or both)). The initial value of the con-
gestion window is between one and four segments [13]. The receiver maintains
an advertised window (rwnd) which indicates the maximum number of bytes it
can accept. The value of the rwnd is sent back to the sender together with each
segment going back. At any moment, the amount of outstanding data (wnd)
is limited by the minimum of cwnd and rwnd, i.e. new packets are only sent
if allowed by both congestion window and receiver’s advertised window, as
summarized by
wnd = min(rwnd, cwnd) (2.1)
In the slow start phase, the congestion window is increased by one segment
for each acknowledgment received (cwnd = cwnd+1). This phase is used both
when new connections are established and after retransmissions due to time-
outs occurring. The slow start phase causes an exponential increase of the
congestion window and it lasts until a timeout occurs or a threshold value
(ssthresh) is reached. When the cwnd reaches the ssthresh value, the slow start
phase ends and the congestion avoidance phase starts. While the slow start al-
gorithm opens the congestion window quickly to reach the capacity limit of the
link as rapidly as possible, the congestion avoidance algorithm is conceived to
transmit at a safe operating point and to increase the congestion window slowly
to probe the network for more bandwidth becoming available.
In the congestion avoidance phase, the congestion window is increased by one
packet per round trip time, which gives a linear increase of the window. More
precisely, for each non duplicate ACK received the cwnd is increased according
to the following equation:
cwnd = cwnd + MSS ∗MSS/cwnd (2.2)
Equation (2.2) provides an acceptable approximation to the underlying prin-
ciple of increasing cwnd by 1 full-sized segment per RTT [12].
When a timeout occurs, the ssthresh is reduced to one-half the current win-
dow size (equation (2.3)), the congestion window is reduced to one MSS (Max-
imum Segment Size), and the slow start phase is entered again.
ssthresh = min(rwnd, cwnd)/2 (2.3)
Figure 2.1 shows an example of how the congestion window changes dur-
ing the slow start and the congestion avoidance phase. In this example the
initial ssthresh is set to 16 and a timeout occurs after 8 round trip times. At that
time, the cwnd assumes a value of 20, hence the new threshold after timeout
(new ssthresh) is set to 10.
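This example can be reproduced with a toy per-RTT model of the window (receiver window and all losses other than the single timeout are ignored; an illustration, not a full TCP model):

```python
def evolve_cwnd(rtts, initial_ssthresh, timeout_at):
    """Congestion window per round trip time with one timeout event."""
    cwnd, ssthresh = 1, initial_ssthresh
    history = []
    for t in range(rtts):
        history.append(cwnd)
        if t == timeout_at:
            ssthresh = cwnd // 2            # equation (2.3), rwnd assumed large
            cwnd = 1                        # re-enter slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: doubling per RTT
        else:
            cwnd += 1                       # congestion avoidance: +1 per RTT
    return history, ssthresh
```

With initial ssthresh 16 and a timeout at the 8th round trip, the window reaches 20 at the timeout and the new ssthresh becomes 10, as in the example above.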
[Figure content omitted: congestion window (segments) versus round trip
times, showing the exponential slow start up to ssthresh = 16, linear growth to
cwnd = 20, and the new ssthresh = 10 after the timeout.]
Figure 2.1: TCP slow start and congestion avoidance phase
Fast Retransmit and Fast Recovery
The fast retransmit and fast recovery algorithms allow TCP to detect data loss
before the retransmission timer expires. These algorithms improve TCP
performance in two ways: they allow earlier loss detection and retransmission,
and they do not reduce the transmission rate as much as after a timeout.
When an out-of-order segment arrives, the receiver transmits an acknowl-
edgment referring to the segment it expected to receive. The purpose of
this duplicate acknowledgment (dupack) is to inform the sender that a segment
was received out of order, and to tell it what sequence number is expected. The
fast retransmit and fast recovery algorithms are usually implemented together
as follows.
After receiving three dupacks in a row, the sender concludes that the miss-
ing segment was lost. Therefore, TCP performs a direct retransmission of the
missing segment after the reception of the third dupack (the fourth acknowl-
edgment) even if the retransmission timer has not expired. The ssthresh is set
to the same value as in the case of timeout (equation (2.3)). After the retrans-
mission, fast recovery is performed until all lost data is recovered. The con-
gestion window is set to three segments more than ssthresh. These additional
three segments take account of the number of segments (three) that have left
the network and which the receiver has buffered. For each additional dupli-
cate acknowledgment received, the cwnd is incremented by one (cwnd=cwnd+1)
as in the slow start phase, since each dupack indicates that one segment
has left the network. The fast recovery phase ends when a non-duplicate ac-
knowledgment arrives. The congestion window is then set to the same value as
ssthresh and is incremented by one segment per RTT as in the congestion
avoidance phase (equation (2.2)).
With fast retransmit and fast recovery, TCP is able to avoid unnecessary
slow starts due to minor congestion incidents (dupacks indicate some kind of
network congestion, but not as severe as that signaled by a timeout).
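The combined fast retransmit / fast recovery behavior described above can be condensed into a small event handler (a simplified Reno sketch; the state dictionary and its field names are our own):

```python
def on_dupack(state):
    """Process one duplicate acknowledgment."""
    state['dupacks'] += 1
    if not state['in_recovery'] and state['dupacks'] == 3:
        # fast retransmit: resend the missing segment before the RTO fires
        state['ssthresh'] = max(state['cwnd'] // 2, 2)
        state['cwnd'] = state['ssthresh'] + 3  # 3 segments have left the network
        state['in_recovery'] = True
    elif state['in_recovery']:
        state['cwnd'] += 1                     # each dupack: one more segment left

def on_new_ack(state):
    """A non-duplicate ACK ends fast recovery."""
    if state['in_recovery']:
        state['cwnd'] = state['ssthresh']      # deflate back to ssthresh
        state['in_recovery'] = False
    state['dupacks'] = 0
```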
[Figure content omitted: congestion window (segments) versus round trip
times, showing the 3rd duplicate ACK triggering fast retransmit, the fast
recovery phase, and the window after a new ACK arrives.]
Figure 2.2: TCP fast retransmit and fast recovery phase
2.2 TCP Problems over 3G Networks
TCP has been designed for wired networks, where packet losses are almost
negligible and where losses and delays are mainly caused by conges-
tion. In wireless networks, instead, the main source of packet losses is the link
level error of the radio channel, which may seriously degrade the achievable
throughput of the TCP protocol. Thus, TCP performance over wireless net-
works can differ from TCP performance over wired networks.
The main problem with TCP performance in networks that have both wired
and wireless links is that packet losses that occur because of bad channel con-
ditions are mistaken by the TCP sender as being due to network congestion,
causing it to drop its transmission window, resulting in degraded throughput.
From a wireless performance point of view, the flow control represents one of
the most important aspects of TCP. The flow control is in charge of determin-
ing the load offered by the sender to achieve maximum connection throughput
while preventing network congestion or receiver’s buffer overflow.
The main characteristics of wireless networks that can affect TCP’s perfor-
mance are the following:
• Block Error Rate
As mentioned above, in wired networks losses are mainly due to conges-
tion caused by buffer overflows. Wireless networks are instead charac-
terized by a high bit error rate (BER). If these errors are not corrected, they
lead to a high block error rate (BLER). Since TCP flow and congestion control
mechanisms assume that losses are only due to congestion, when packet
losses due to corruption in the wireless link occur, the TCP congestion control
mechanism will react by reducing the cwnd and resetting the retransmission
timer. This erroneous interpretation of errors leads TCP to poor performance
due to under-utilization of the bandwidth and to very high delay jitter.
• Latency
Latency in 3G wireless networks is mainly due to transmission delays in
the radio access network and to the extensive processing required at the
physical layer. Larger latency can be mistaken for congestion.
• Delay spikes
A delay spike is a sudden increase in the latency of the link [14] . The
main causes of delay spikes are:
- Link layer recovery from an outage due to a temporary loss of radio
coverage (e.g. driving into a tunnel).
- Inter-frequency handovers or inter-system handovers. Inter-frequen-
cy handovers occur when the UE is handed over to another opera-
tor's Node B that uses a different frequency; inter-system handovers
occur when passing from one technology to another (e.g. from 2G
to 3G).
- High priority traffic (e.g. voice) can block low priority applications
(e.g. data connections) when terminals cannot handle both voice
and data connections at the same time. In this case, low priority ap-
plications can be suspended so that high priority ones can be com-
pleted.
Delay spikes can cause spurious TCP timeouts (cf. sec. 3.3), unnecessary
retransmissions and a multiplicative decrease in the cwnd size.
• Serial Timeouts
When the connection is paused for a certain time (for example, due to
hard-handover), several retransmissions of the same segment can be lost
during this pause. Since TCP uses an exponential backoff mechanism,
when a timeout occurs TCP increases the retransmission timeout by some
factor (usually, a doubling) before retransmitting the unacknowledged
data. This increase continues until the RTO reaches a limit value (usually
about one minute). This means that when the mobile resumes its connection,
there is the possibility that no data will be transmitted for up to a minute,
degrading the performance drastically.
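The resulting retransmission schedule can be tabulated directly (the 3 s initial RTO, the doubling factor and the one-minute cap are illustrative defaults, not values mandated by the TCP specification):

```python
def backoff_schedule(initial_rto=3.0, factor=2.0, max_rto=60.0, attempts=8):
    """RTO (in seconds) used for each successive retransmission attempt."""
    rto, schedule = initial_rto, []
    for _ in range(attempts):
        schedule.append(rto)
        rto = min(rto * factor, max_rto)  # exponential backoff with a cap
    return schedule
```

During a long coverage outage the sender can end up waiting at the 60 s cap, which is the "up to a minute" stall described above.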
• Data Rates
Data rates in wireless networks are very dynamic due to mobility, vary-
ing channel conditions, effects from other users and even varying de-
mands from the connection. Moreover, when a user moves into another
cell, he can experience a sudden change in the available data rate. An in-
crease in the available bandwidth can lead to its under-utilization because
of the TCP slow start phase. On the other hand, when the data rate de-
creases, the TCP congestion control mechanism takes care of it, but a sud-
den RTT increase can cause a spurious TCP timeout [14].
2.3 TCP Versions
In this section some different congestion control and avoidance mechanisms
that have been proposed for TCP/IP protocols will be studied, namely:
Tahoe, Reno, NewReno, Westwood, Vegas, SACK and FACK.
Each of the above implementations suggests a different mechanism to deter-
mine when a segment should be retransmitted and how the sender should be-
have when it encounters congestion. In addition, they suggest what pattern of
transmission to follow in order to avoid congestion.
TCP Tahoe
TCP Tahoe refers to the TCP congestion control algorithm proposed in [15].
This implementation adds new algorithms and refinements to earlier imple-
mentations. The new algorithms include slow-start, congestion avoidance and fast
retransmit (cf. sec. 2.1). The refinements include a modification to the round
trip time estimator used to set retransmission timeout values.
The problem of Tahoe is that it takes a complete timeout interval to detect a
packet loss. In addition, it performs slow start if a packet loss is detected even
if some packets can still flow through the network. This leads to an abrupt re-
duction of the flow.
TCP Reno
TCP Reno retains the enhancements incorporated into Tahoe, adding the fast
recovery algorithm to the fast retransmit phase [16].
TCP Reno provides an important enhancement compared to TCP Tahoe, pre-
venting the communication path (usually called “pipe”) from going empty after
fast retransmit, thereby avoiding the need to slow start to re-fill it after a single
packet loss.
Reno’s fast recovery mechanism is optimized for the case when a single packet
is dropped from a window of data but it can suffer from performance problems
when multiple packets are dropped from a window of data. In the case of mul-
tiple packets dropped, Reno’s performance are almost the same as Tahoe. This
is due to the fact that the fast recovery algorithm mechanism implemented by
TCP Reno can lead to a stall. Indeed, TCP Reno goes out of fast recovery when
it receives a new partial ACK (i.e. a new ACK which does not represent an
ACK for all outstanding data). That means that if a lot of segments from the
same window are lost, TCP Reno is pulled out of fast recovery too soon, and it
may stall since no new packets can be sent.
TCP NewReno
NewReno [17] represents a slight modification over TCP Reno. It is able to de-
tect multiple packet losses and thus it appears much more efficient than TCP
Reno when they occur. NewReno, as well as Reno, enters fast retransmit
when it receives multiple duplicate acknowledgments, but differently from the latter it
does not exit from fast recovery phase until all outstanding data at the time it
entered fast recovery are acknowledged. This means that in NewReno partial
ACKs do not take TCP out of fast recovery but are treated as an indica-
tor that the packet immediately following the acknowledged packet in the se-
quence space has been lost, and should be retransmitted. Thus, when multiple
packets are lost from a single window of data, NewReno can recover without a
retransmission timeout, retransmitting one lost packet per round trip time un-
til all of the lost packets from that window have been retransmitted. NewReno
exits the fast recovery phase when all data that was outstanding at the mo-
ment this phase was initiated (i.e., all data injected into the network and still
waiting for an acknowledgment at that moment) has been acknowledged.
The main NewReno’s issue is that it takes one round trip time to detect each
packet loss.
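NewReno's partial-ACK rule can be sketched as follows (the state fields are our own naming; `recover` records the highest sequence number outstanding when fast recovery was entered):

```python
def on_ack_newreno(state, ack):
    """Handle a (possibly partial) new ACK during NewReno fast recovery."""
    if not state['in_recovery']:
        return None
    if ack >= state['recover']:          # all data from entry time is acked
        state['in_recovery'] = False
        state['cwnd'] = state['ssthresh']
        return 'exit_recovery'
    # partial ACK: the segment right after `ack` is assumed lost;
    # retransmit it and stay in fast recovery (Reno would exit here)
    return ('retransmit', ack)
```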
TCP Westwood
TCP Westwood represents a modified version of TCP Reno since it enhances
the window control and backoff process [18].
The Westwood sender monitors the acknowledgment stream it receives and from
it estimates the data rate currently achieved by the connection. Whenever the
sender perceives a packet loss (i.e. a timeout occurs or 3 DUPACKs are re-
ceived), the sender uses the bandwidth estimate to properly set the congestion
window and the slow start threshold. By backing off to cwnd and ssthresh val-
ues that are based on the estimated available bandwidth (rather than simply
halving the current values as Reno does), TCP Westwood avoids reductions of
cwnd and ssthresh that can be excessive or insufficient. In this way TCP West-
wood ensures both faster recovery and more effective congestion avoidance.
Experimental studies reveal the benefits of the intelligent backoff strategy in
TCP Westwood: better throughput, goodput and delay performance.
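A minimal sketch of the Westwood backoff, assuming the common formulation in
which ssthresh is set to the estimated bandwidth-delay product in segments
(function and parameter names are ours):

```python
def westwood_backoff(bwe_bps, rtt_min_s, mss_bytes, cwnd, timeout):
    """Set ssthresh from the estimated bandwidth-delay product
    (TCP Westwood), instead of blindly halving the window as Reno does."""
    ssthresh = max(2, int(bwe_bps * rtt_min_s / (8 * mss_bytes)))
    if timeout:
        cwnd = 1                  # after an RTO, restart from one segment
    else:                         # after 3 DUPACKs
        cwnd = min(cwnd, ssthresh)
    return cwnd, ssthresh
```

With an accurate bandwidth estimate, the window backs off only as far as the
link can actually sustain, which is the source of the faster recovery noted
above.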
TCP SACK
TCP with Selective Acknowledgment represents an extension of TCP Reno and
NewReno. It provides a solution both to the problem of the detection of mul-
tiple lost packets and to the retransmission of more than one lost packet per
round trip time.
TCP SACK requires that segments are acknowledged selectively rather than
cumulatively. It uses the option field in the TCP header to store a set of prop-
erly received sequence numbers [19].
During fast recovery, SACK maintains a variable called pipe, that represents
the estimated number of packets outstanding on the link. The sender only
sends new or retransmitted data when the value of pipe is less than the cwnd.
The variable pipe is incremented each time the sender sends a packet, and is
decremented when the sender receives duplicate ACK with a SACK option re-
porting that new data has been correctly received. When the sender is allowed
to send a packet, it sends the next packet known as missing at the receiver if
such a packet exists, otherwise it sends a new packet. When a retransmitted
packet is lost, SACK detects it through a classic RTO and then goes into slow
start. The sender only goes out of fast recovery when an ACK is received ac-
knowledging all data that was outstanding when fast recovery was entered.
Because of this, SACK appears closer to NewReno than to Reno, since partial
ACKs do not pull the sender out of fast recovery.
TCP FACK
TCP with Forward Acknowledgment is an extension of TCP SACK. It has the
same functionalities of TCP SACK but it introduces some improvements com-
pared to it:
- A more precise estimation of outstanding data. It uses the SACK option to
better estimate the amount of data in transit [20].
- A data smoothing. It introduces a better way to halve the window when
congestion is detected. When the cwnd is immediately halved, the sender
stops transmitting for a while and then resumes when enough data has
left the network. This unequal distribution of segments over one RTT can
be avoided when the window is gradually decreased [20].
- A new slow start and congestion control. When congestion occurs, the
window should be halved according to the multiplicative decrease of the
correct cwnd. Since the sender identifies congestion at least one RTT after
it happened, if during that RTT it was in Slow Start mode, then the cur-
rent cwnd will be almost double the cwnd at the time congestion occurred.
Therefore, in this case, the cwnd is first halved to estimate the correct cwnd,
which is then further decreased.
TCP Vegas
In contrast to the TCP Reno algorithm which induces congestion to learn the
available network capacity, Vegas algorithm anticipates the onset of congestion
by monitoring the difference between the rate it is expecting to see and the rate
it is actually realizing [21]. Vegas’ strategy is to adjust the source’s sending rate
(i.e. the cwnd) in an attempt to keep a small number of packets buffered in the
routers along the transmission path. The TCP Vegas sender stores the current
value of the system clock for each segment it sends. By doing so, it is able to
know the exact RTT for each sent packet.
The main innovations introduced by TCP Vegas are the following:
- New retransmission mechanism. When a duplicate acknowledgment is re-
ceived, the sender checks if (current time - segment transmission time) > RTT.
If it is true, the sender provides a retransmission without waiting for the
classic retransmission timeout nor for three duplicate ACKs. To catch
any other segments that may have been lost prior to the retransmission,
when a duplicate acknowledgment is received, if it is the first or second
one after a fresh acknowledgment then it again checks the timeout val-
ues and if the segment time exceeds the timeout value then it retransmits
the segment without waiting for a duplicate ACK. In this way Vegas can
detect multiple packet losses.
Moreover, it only reduces its window if the retransmitted segment was
sent after the last decrease. Thus it also overcomes Reno's shortcoming
of reducing the congestion window multiple times when multiple packets
are lost.
- New congestion control mechanism. TCP Vegas does not use segment losses
to signal that there is congestion. It determines congestion by calculating
the difference between the calculated throughput and the value it would
achieve if the network was not congested. If that difference is smaller
than a boundary, the window is increased linearly to make use of the
available bandwidth, otherwise it is decreased linearly to prevent over
saturating the bandwidth. The throughput of an uncongested network is
defined as the window size in bytes divided by the BaseRTT, which is the
value of the RTT in an uncongested network.
- New slow start mechanism. The cwnd is doubled only every other RTT
instead of every RTT. The reason for this modification is that when a
connection starts for the first time the sender has no idea of the available
bandwidth. Thus it may happen that during the exponential increase it
overshoots the available bandwidth by a big amount, inducing congestion.
The slow start phase is terminated when a boundary value is reached in
the difference between the current RTT and the last RTT. This represents
a modification compared to other TCP versions, where the boundary is
set on the cwnd size.
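The Vegas congestion-avoidance rule above can be sketched as one window
update step (windows in segments, times in seconds; the alpha/beta thresholds
and names are the usual ones but the function itself is illustrative):

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One congestion-avoidance step of TCP Vegas."""
    expected = cwnd / base_rtt            # rate with no queuing (seg/s)
    actual = cwnd / rtt                   # rate currently achieved (seg/s)
    diff = (expected - actual) * base_rtt # est. packets queued in routers
    if diff < alpha:
        return cwnd + 1                   # room left: increase linearly
    elif diff > beta:
        return cwnd - 1                   # queue building up: decrease
    return cwnd                           # between thresholds: hold
```

Because the decision uses measured RTT rather than losses, Vegas backs off
before the router queues overflow.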
                        Tahoe   Reno    NewReno   West.     SACK      FACK      Vegas
Slow Start              Yes     Yes     Yes       Yes       Yes       Enhanced  Enhanced
Congestion Avoidance    Yes     Yes     Yes       Yes       Yes       Yes       Enhanced
Fast Retransmit         Yes     Yes     Yes       Yes       Yes       Yes       Yes
Fast Recovery           No      Yes     Enhanced  Enhanced  Enhanced  Enhanced  Yes
Retransmission mech.    Normal  Normal  Normal    Normal    Normal    Normal    New
Congestion Ctrl mech.   Normal  Normal  Normal    Normal    Normal    New       New
Selective ACK mech.     No      No      No        No        Yes       Yes       No

Table 2.1: TCP versions comparison
2.4 Round Trip Time and mean number of retrans-
missions for TCP over 3G
A correct estimate of the round trip time is fundamental. The round trip time
is a figure of merit of any connection, since it indicates how fast the
transmitter can react to any event that occurs in the connection. It can be
defined as the period elapsed from when the transmitter sends a packet until it
receives the corresponding acknowledgement. In order to accelerate the
transmitter's response time, the round trip time should be minimized as
much as possible.
In HSDPA, the size of a TCP segment is 1500 bytes and each TTI lasts 2 ms.
Depending on the modulation and coding scheme used on the radio interface,
transmitting a TCP segment requires from 12 up to 60 TTIs. As is well known,
the wireless channel presents variable characteristics both in terms of link
conditions (expressed as the block error rate, BLER) and in terms of
transmission time delay.
Let [22] N_TTI(i) be the number of transmissions of TTI i due to HARQ, T_j the
transmission time of a segment on the radio interface (it depends on the bit
rate chosen by the scheduler), RTT_wired the average RTT of the wired part of
the network, and n_s the number of TTIs needed to transmit a TCP segment when
no errors occur on the radio interface. Then the round trip time (RTT) of the
whole link (wired part plus wireless part) is given by:

RTT = \sum_{i=1}^{n_s} \frac{N_{TTI}(i)}{n_s} T_j + RTT_{wired}    (2.4)

The term:

N_i = \sum_{i=1}^{n_s} \frac{N_{TTI}(i)}{n_s}    (2.5)

represents the number of transmissions of a TCP segment (N_i). Since errors
on each TTI are independently and identically distributed (i.i.d.) [23], N_i
can be modelled by a Gaussian variable. Then the RTT expressed by equation 2.4
can also be modelled by a Gaussian variable. It is now possible to define the
mean N_s [23] [22] [24] [25] and the variance σ² [23] [24] of N_i:

N_s = \frac{1 + P_e - P_e P_s}{1 - P_e P_s}    (2.6)

σ² = \frac{P_e (1 - P_e + P_e P_s)}{(1 - P_e P_s)^2}    (2.7)
where P_s is the probability of errors after soft combining two successive
transmissions of the same information block and P_e is the probability of
errors after decoding the information block, i.e. it represents the BLER. In
this way, we have defined N_i ∼ N(N_s, σ²).
From Figure 2.3 and Figure 2.4 we can extract Ns and σ2 values corresponding
to different values of BLER.
Figure 2.3: Mean value Ns as a function of BLER [26]
Figure 2.4: Variance σ2 as a function of BLER [26]
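Equations 2.6 and 2.7 can be evaluated directly; the following sketch (the
function name is ours) computes the mean and variance of the number of HARQ
transmissions for given error probabilities:

```python
def harq_transmission_stats(pe, ps):
    """Mean and variance of the number of HARQ transmissions
    (equations 2.6 and 2.7): pe is the BLER after the first decoding,
    ps the error probability after soft-combining two transmissions."""
    denom = 1 - pe * ps
    ns_mean = (1 + pe - pe * ps) / denom          # equation 2.6
    var = pe * (1 - pe + pe * ps) / denom ** 2    # equation 2.7
    return ns_mean, var
```

With an error-free channel (pe = 0) the mean is exactly one transmission with
zero variance, as expected.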
Chapter 3
TCP Enhancing Solutions
The proposals to optimize TCP for wireless links can be divided into three cat-
egories: link layer, end-to-end and split connection solutions.
Link layer solutions try to reduce the error rate of the link through some kind
of retransmission mechanism. As the data rate of the wireless link increases,
there will be more time for multiple link level retransmissions before a
timeout occurs at the TCP layer, making link layer solutions more viable. In
sec. 3.4 a link layer solution named Snoop will be analysed.
End-to-end solutions try to modify the TCP implementation at the sender and/or
receiver and/or intermediate routers, or to optimize the parameters used by the
TCP connection to achieve good performance. The end-to-end solution named
Eifel will be analysed in sec. 3.3.
Split connection solutions separate the TCP used on the wireless link from the
one used on the wired one. The optimization procedure can then be done sepa-
rately on the wired and wireless parts. A proxy solution will be analysed in
sec. 3.1.
The solutions proposed in sections 3.1, 3.3 and 3.4 will be then utilized dur-
ing the simulations of Chapter 4.
3.1 Proxy Solution
Proxy solutions consist of splitting the connection between the sender (i.e.
the server) and the terminal (i.e. the UE) by means of an interposed proxy.
This solution permits splitting the server←→terminal connection into one
connection between the server and the proxy, and another between the proxy and
the terminal (see Figure 3.1). In this way, the server will continue to see an
ordinary wired network, while changes to the system will be made only at the
proxy and possibly at the terminal. This solution was introduced by [27] and it
is also known as split TCP.
Figure 3.1: Proxy solution architecture
An accurate study of the proxy solution over WCDMA networks is re-
ported in [28], where it is shown how local knowledge (in the proxy) about the
state of a TCP connection can be used to enhance performance by shortcutting
the transmission of ACKs or the retransmission of packets. Moreover, it
demonstrates that the split TCP solution is particularly useful for radio links
with high data rates, since they are characterized by a large bandwidth-delay
product. The proxy solution used in this thesis is the one proposed by [29],
which improves both the user experience of the wireless internet and the
utilization of the existing infrastructure.
The proxy-based scheme introduced in [29] uses a new custom protocol be-
tween the RNC and the proxy. This protocol provides information from the
data-link layer within the RNC to the transport layer within the proxy. This
communication is called Radio Network Feedback (RNF) and it is sent via UDP
(User Datagram Protocol). The RNF message is sent from the RNC to the proxy
every time the available link bandwidth over the wireless channel is computed.
The link bandwidth represents the instantaneous channel capacity of the wire-
less link, computed with a given frequency. When the proxy receives the RNF
message, it takes appropriate action by adjusting the TCP window size. The
computation of the cwnd in the proxy also takes into consideration the queue
in the RNC. It is important to note that bandwidth variations act as a distur-
bance which is possible to measure but not to affect, while the queue length is
a parameter that is possible to affect. This is the reason why the part of RNF
message concerning the available bandwidth is a feed-forward while the part
concerning the queue length is a feedback.
Figure 3.2 shows how the RNF signalling works.
Figure 3.2: RNF signalling
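The window computation at the proxy can be sketched as below. The exact
control law of [29] is not reproduced here: the gain k and the reference queue
length are our illustrative assumptions, chosen only to show the feed-forward
(bandwidth) and feedback (queue) structure described above.

```python
def rnf_update_cwnd(bw_bps, rtt_s, queue_len, queue_ref, mss_bytes, k=0.5):
    """Hypothetical cwnd computation at the proxy on receipt of an RNF
    message (window returned in segments)."""
    # Feed-forward term: track the instantaneous bandwidth-delay product.
    bdp_segments = bw_bps * rtt_s / (8 * mss_bytes)
    # Feedback term: steer the RNC queue toward a reference length.
    correction = k * (queue_ref - queue_len)
    return max(1, int(bdp_segments + correction))
```

When the RNC queue grows past the reference, the correction term shrinks the
window, draining the queue; when bandwidth rises, the feed-forward term grows
the window immediately without waiting for queue feedback.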
3.2 Flow Aggregation
In conventional TCP implementations every connection is independent and
thus separate state information (such as cwnd, ssthresh and so on) is kept for
each. However, since all TCP connections to a mobile host share the same
wireless link, they are statistically dependent; thus flows to the same mobile
host might share certain TCP state information. The solution proposed in [30]
treats all the flows to the same mobile host as a single aggregate. The scheme
is depicted in Figure 3.3. By treating all TCP flows to a particular mobile
host as an aggregate, it is possible to perform better scheduling and flow
control in order to maximise link utilization, reduce latency, and improve
fairness between flows.
The introduced proxy can share state including a single congestion window
and RTT estimates across all TCP connection within the aggregate. Sharing
state information enables all the connections in an aggregate to have better,
reliable, and more recent knowledge of the wireless link. All the state infor-
mation are then grouped together into one structure called Aggregate Control
Block (ACB). Details of this structure are given in Figure 3.4.
Figure 3.3: TCP flow aggregation scheme [30]
The wired proxy interface is called the AggregateTCP (ATCP) client, while the
wireless one is called the ATCP sender. Packets are received by the ATCP client
into small per-connection queues. The ATCP sender feeds these packets into
a scheduler operating on behalf of the whole aggregate. A single congestion
window is maintained for the whole aggregate. Every time the level of unac-
knowledged data on the wireless link drops one MSS below the current conges-
tion window, the scheduler selects a connection with queued data from which
Figure 3.4: Sample logical aggregate for a given Mobile Host [30]
a further segment will be sent. During this selection, the scheduler must re-
spect the mobile host's receive window for each of the individual flows. After
being transmitted, packets are kept in a queue of unacknowledged data until
they are acknowledged by the mobile host. In this way, in case of losses
signalled by the mobile host or deduced from the expiry of the aggregate's
retransmission timer, the ATCP sender can retransmit lost packets by
withdrawing them from this queue. Another characteristic of this solution is
the early ACK'ing employed by the ATCP sender. The ATCP sender acknowledges
packets received from hosts as soon as they arrive, before they are received by
the destination end system. However, early acknowledgments are never used for
FINs (i.e. for packets used to terminate the connection), and this mitigates
the effect of this policy on TCP's end-to-end semantics. The connection
scheduling strategies employed by this proxy can differ depending on the nature
of the incoming traffic. In this solution, a combination of priority-based and
ticket-based stride scheduling [31] is used to select the connection from which
to transmit. Stride scheduling is a deterministic allocation mechanism for
time-shared resources. Resources are allocated in discrete time slices.
Resource rights are represented by tickets: abstract, first-class objects that
can be issued in different amounts and passed between clients. Throughput rates
for active clients are directly proportional to their ticket allocations.
Client response times are inversely proportional to ticket allocations. Three
state variables are associated with each
client: tickets, stride and pass. The tickets field specifies the client's resource
allocation, relative to other clients. The stride field is inversely proportional to
tickets, and represents the interval between selection, measured in passes. The
pass field represents the virtual time index for the client’s next selection. Per-
forming a resource allocation is very simple: the client with the minimum pass
is selected, and its pass is advanced by its stride. If more than one client has
the same minimum pass value, then any of them may be selected. The previ-
ous scheduling strategies permit giving strict priority to interactive flows
(like telnet) while sharing out the remaining bandwidth among the other
applications (such as WWW, FTP and so on). To optimize link performance, the
proxy uses the following three key mechanisms.
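The tickets/stride/pass bookkeeping described above can be sketched as a
minimal scheduler (the large integer constant is the usual trick for keeping
strides integral; class and variable names are ours):

```python
class StrideScheduler:
    """Minimal stride scheduler: clients receive service in proportion
    to their ticket allocations (illustrative sketch)."""
    STRIDE1 = 1 << 16                 # large constant for integer strides

    def __init__(self, tickets):
        # tickets: {client_name: ticket_count}; stride is inversely
        # proportional to tickets.
        self.stride = {c: self.STRIDE1 // t for c, t in tickets.items()}
        # pass: virtual time index of each client's next selection.
        self.passv = {c: self.stride[c] for c in tickets}

    def select(self):
        # The client with the minimum pass is selected, and its pass
        # is advanced by its stride.
        client = min(self.passv, key=self.passv.get)
        self.passv[client] += self.stride[client]
        return client
```

Over any window of selections, a client holding three times the tickets of
another is chosen roughly three times as often.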
ATCP Sender congestion window strategy. The poor performance of TCP
over wireless networks is mainly due to under-utilization of the available
bandwidth during the first few seconds of a connection, owing to the
pessimistic nature of the slow start algorithm. The ATCP sender uses a fixed
congestion window shared across all connections in the aggregate. The size of
the window is fixed at a relatively static estimate of the link BDP
(Bandwidth-Delay Product). The congestion window can't grow beyond this value
(this is called TCP cwnd clamping) and slow start is eliminated. Furthermore, a
fair sharing of bandwidth among users is ensured by the underlying network.
Once the mobile proxy has succeeded in sending an amount Cclamp of data, it
enters a self-clocking state. When the mobile proxy is in this state, it clocks
out one segment (from whatever connection the scheduler has selected) each time
it receives an ACK for an equivalent amount of data from the receiver. With
Cclamp fixed at an ideal value, if there is data to transmit the link will
never be under-utilised and the queuing at the CGSN gateway will be minimal.
The value of Cclamp is usually about 30% higher than the calculated BDP. This
excess is required due to link jitter, the use of delayed ACKs by the TCP
receiver in the mobile host, and ACK compression occurring due to the link
layer.
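The Cclamp sizing rule above amounts to a one-line computation (the 30% margin
is the figure quoted in the text; the function itself is an illustrative
sketch):

```python
def compute_cclamp(bw_bps, rtt_s, mss_bytes, margin=0.30):
    """Fixed shared congestion window for the aggregate (in segments):
    the link BDP plus ~30% headroom for jitter, delayed ACKs and
    ACK compression."""
    bdp = bw_bps * rtt_s / (8 * mss_bytes)   # bandwidth-delay product
    return int(bdp * (1 + margin))
```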
ATCP Client flow control scheme. As introduced in previous sections, when
the proxy "early ACKs" packets, it stores them in a buffer until they are suc-
cessfully delivered to the mobile host. Since it commits buffer space in this
way, the proxy must pay attention to how much data it accepts on each
connection to avoid being swamped. The control of accepted data can be
performed through the receive window it advertises to hosts. The proxy must
then ensure that sufficient data from connections is buffered to avoid the link
going idle unnecessarily (e.g. it may need to buffer more data from senders
with long RTTs compared to other mobile hosts) and must limit the total amount
of buffer space committed to each mobile host.
ATCP Sender error detection and recovery. Over a wireless link, a packet
loss can be due both to bursty radio losses and to cell reselection during the
cell update procedure. TCP detects these losses through duplicate ACKs or
timeouts. Since TCP doesn't know the nature of the losses, it reacts by
invoking congestion control measures such as fast retransmit or slow start.
Since link conditions return to a good state after the loss, the invocation of
backoff often leads to under-utilization of the available bandwidth. The aim of
this solution is to perform an aggressive recovery from transient losses, which
keeps the link in a state of full utilization. To achieve this objective TCP
uses SACK signals, which allow the receiver to inform the sender of packets
that it has received out of order. In this way the sender can retransmit
missing packets selectively.
3.3 Eifel Protocol
An important aim in wireless communications is the sender's ability to cor-
rectly estimate the round trip time and the retransmission timeout. This esti-
mation can be inaccurate due to wireless link delay spikes, which can lead TCP
to badly estimate the RTT and consequently the RTO. Delay spikes (cf. sec. 2.2)
are defined as situations where the round trip time suddenly increases for a
short duration of time, and then drops to the previous value. This can lead
to two undesired events: spurious timeouts and spurious fast retransmits. As
explained in sec. 2.1, the TCP sender uses two different error recovery
strategies: timeout-based retransmission and dupack-based retransmission. The
problem of spurious timeouts affects the first strategy, while spurious fast
retransmits affect the latter. In dupack-based retransmission, a retransmission
(known as fast retransmission) is triggered when three (this is the default
threshold but it can
be changed) successive dupacks for the same sequence number have been re-
ceived. According to [32], we can define spurious timeouts as timeouts that
would not have occurred had the sender waited longer. Since TCP receiver
generates a duplicate ACK for each segment that arrives out-of-order, this can
result in a spurious fast retransmit if three or more data segments arrive out-
of-order at a TCP receiver, and at least three of the resulting duplicate ACKs
arrive at the TCP sender. Spurious timeouts and spurious retransmissions cause
the so-called retransmission ambiguity.
The Eifel algorithm was designed with the specific aim of improving TCP
performance in the presence of delay spikes [33]. The algorithm uses extra
information in the ACKs to eliminate the retransmission ambiguity. In
particular, it assigns to each segment and to the corresponding ACK a timestamp
that allows the sender to distinguish the ACK for the original transmission
from the ACK for its retransmission. The timestamp is a 12-byte field added to
the header of the segment. When the sender detects a timeout or a triple
dupack for a packet, it reacts by retransmitting the packet concerned. During
this operation, the sender stores both the timestamp of the first
retransmission (irrespective of whether the retransmission is triggered by a
timeout or a fast retransmission) and the size of the congestion window at that
moment. Before performing the retransmission, the sender sets the congestion
window to one segment. When the ACK for the retransmitted segment comes back,
the sender compares the ACK's timestamp with the one it had stored previously.
If the timestamp is smaller than the stored one, the sender concludes that the
retransmission was spurious and therefore unnecessary. In this case, the sender
restores the congestion window to its pre-retransmission value. Otherwise,
if it detects that the retransmission was not spurious, it sets the congestion
window to half of the pre-retransmission value. In the case that two re-
transmissions have occurred, it is also halved. In the case that three or more
retransmissions have occurred, the congestion window is set to one segment
[34].
Figure 3.5 shows how the Eifel protocol works when timeouts occur
(Figure 3.5(a)) and when ACKs arrive (Figure 3.5(b)).

Figure 3.5: Eifel procedure [35].
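The core Eifel check on an incoming ACK, mirroring the procedure of
Figure 3.5(b), can be sketched as follows (the dictionary keys are illustrative
names, not from any particular implementation):

```python
def eifel_on_ack(ack_ts, state):
    """Eifel check when the ACK for a retransmitted segment arrives:
    ack_ts is the timestamp echoed in the ACK; state holds the values
    saved at the first retransmission."""
    if ack_ts < state['ts_first_retx']:
        # The ACK carries a timestamp older than the retransmission:
        # it acknowledges the *original* transmission, so the
        # retransmission was spurious -> restore the saved window.
        state['cwnd'] = state['saved_cwnd']
        state['ssthresh'] = state['saved_ssthresh']
        return 'spurious'
    # Genuine loss: keep the reduced window (normal TCP behaviour).
    return 'genuine'
```

The timestamp comparison is what resolves the retransmission ambiguity: the
same cumulative ACK number can now be attributed unambiguously to either the
original transmission or its retransmission.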
Although the Eifel algorithm represents a powerful solution against retrans-
mission ambiguity, it presents two drawbacks. The first is the header overhead
incurred by the additional 12 bytes required for the TCP timestamp option field
in the TCP header. This overhead reduces the transport layer efficiency (TLE),
defined as the ratio of the bandwidth used by the transport layer segment
payload to the total size of a segment:

TLE = \frac{Payload size of a segment}{Total size of a segment}    (3.1)

As shown in (3.1), the smaller the total size of a segment, the smaller its
TLE [36]. The second drawback is that the Eifel algorithm introduces perfor-
mance improvements on TCP transmissions only when the network presents
delay spikes without packet losses. Conversely, in the case of delay spikes
with packet losses, Eifel suffers from long transmission stalls, worsening
transmission performance with respect to other solutions (e.g. TCP Reno). A
transmission performance improvement in the presence of delay spikes and
several packet losses is achievable by combining Eifel with TCP NewReno [37].
3.4 Snoop Protocol
The Snoop protocol is a TCP-aware link layer protocol designed to improve
TCP performance over networks made up of wired and single-hop wireless
links [38] [39]. The Snoop protocol works by deploying a Snoop agent at the
base station and performing retransmissions of lost segments based on du-
plicate acknowledgments (which are a strong indicator of lost packets) and
locally estimated last-hop round trip times. The agent also suppresses dupli-
cate acknowledgments corresponding to wireless losses from the TCP sender,
thereby preventing unnecessary congestion control invocations at the sender.
A retransmission over the wireless link is triggered at the base station (which
is assumed to be the receiver) when a duplicate acknowledgment arrives from
the mobile station or after a link layer timeout period. Figures 3.6(a) and 3.6(b)
show how the Snoop protocol works when ACKs or packets arrive, respec-
tively.
Figure 3.6: Snoop procedure [39]
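The agent's decision on each ACK arriving from the mobile host, following the
logic of Figure 3.6(a), can be sketched as below (segment-granularity sequence
numbers and dictionary layout are our simplifying assumptions):

```python
def snoop_on_ack(ack, agent):
    """Snoop agent decision on an ACK from the mobile host.
    agent = {'last_ack': int, 'dup_count': int, 'cache': {seq: pkt}}."""
    if ack > agent['last_ack']:
        # New ACK: common case. Free cached buffers up to `ack` and
        # propagate the ACK to the fixed sender.
        agent['last_ack'], agent['dup_count'] = ack, 0
        agent['cache'] = {s: p for s, p in agent['cache'].items() if s > ack}
        return 'forward'
    agent['dup_count'] += 1
    if agent['dup_count'] == 1 and ack + 1 in agent['cache']:
        # First duplicate ACK for a cached packet: retransmit it locally
        # with high priority and hide the loss from the sender.
        return 'retransmit'
    # Later duplicate ACKs for the same loss are simply suppressed,
    # preventing a spurious fast retransmit at the TCP sender.
    return 'suppress'
```

Because duplicate ACKs for wireless losses never reach the fixed sender, its
congestion control is not invoked for errors that congestion did not cause.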
This combination of local retransmissions and suppression of duplicate
acknowledgments is the reason for classifying Snoop as a transport-aware
reliable link protocol. The state maintained at the base station is soft, which
does not complicate handoffs or overly increase their latency.
Simulation studies [38] have shown that for a BER greater than 5×10−7 the
throughput of the connection is improved by up to 2000%. This protocol does not
work if packets are encrypted (it is not able to "see through" them) and it
does not perform well if the wireless RTT is very high, as this leads to
redundant retransmissions.
3.5 Further Enhancing Protocols
Further enhancing protocols have been defined to improve TCP performance.
Some of these are listed in this section.
Split connections
• Indirect TCP (I-TCP)
I-TCP [40] is a transport layer protocol based on the Indirect Protocol
model for mobile hosts, which suggests that any interaction between a
mobile host (MH or UE) and a machine on the fixed network (FH) should
be split into two separate interactions: one between the MH and its Mo-
bile Support Router (MSR) over the wireless medium and another be-
tween the MSR and the FH over the fixed network. Data sent to the wire-
less host is first received by the MSR. The MSR sends acknowledgement
to the FH on behalf of the MH, and forwards the data to the MH on a
separate connection. The MSR and the MH need not use TCP for com-
munication. They can use a variation of TCP that is tuned for wireless
links and is also aware of mobility. The FH only sees an image of its peer
MH that in fact resides on the MSR. If the MH switches cells during the
lifetime of an I-TCP connection the state of the connection is passed on to
the new MSR.
The main drawback of I-TCP is that the end-to-end semantics of TCP acknowl-
edgements are violated, since acknowledgements can reach the FH even
before the packets reach the MH. Another inconvenience of I-TCP is
that every packet incurs the overhead of going through TCP protocol
processing twice at the base station, as compared to just once in a non-
split-connection approach.
• Multiple TCP (MTCP)
The MTCP protocol [41] is almost the same as the I-TCP. It is also based
on the Indirect Protocol model for mobile hosts, which suggests that any
interaction between a mobile host and a machine on the fixed network
should be split into two separate interactions. No change is required in
the TCP software on the FH, while a session layer protocol is introduced
on top of the transport protocol in the MH and the MSR. The session
layer protocol is designed to exploit available knowledge of the wireless
link characteristics and host migration and to compensate for the highly
unpredictable and unreliable link between the MSR and the MH.
The MTCP protocol suffers from the same disadvantages as the I-TCP.
• M-TCP
The M-TCP protocol [42] is very similar to the I-TCP, but tries to over-
come the disadvantages of I-TCP. Similar to I-TCP, M-TCP also splits a
TCP connection into two - one from the MH to an intermediate intelli-
gent station (called Supervisory Host, SH) and another between this in-
termediate station to the FH. The TCP sender on the fixed network uses
unmodified TCP to send data to the SH while the SH uses M-TCP for de-
livering data to the MH. When the host sends segments, the SH receives
them and passes them to the MH. Unlike I-TCP or MTCP, ACKs are not
sent back to the sender until the data has been received by the UE.
• Mobile End Transport Protocol (METP)
The Mobile End Transport Protocol [43] replaces TCP/IP over wireless
link with a simpler protocol that uses smaller headers. Thus, the func-
tionalities needed for communication with an Internet host are shifted
from the wireless host to the base station. The protocol tries to take ad-
vantage of the base station support since functions of mobile devices are
limited. Shifting the majority of the network protocols to the base station
has the advantage of delegating some work of the wireless host to the
more powerful base station and hiding the communication between the
base station and mobile host from the external network. The protocol also
exploits link-layer acknowledgements and retransmissions for quick re-
covery from losses over the wireless link. This distinguishes METP from
other split-connection approaches like I-TCP and Mobile-TCP.
Eliminating TCP and IP layers from the mobile hosts, METP also elim-
inates TCP/IP headers from the packets transmitted over the wireless
link. In this way the mobile host only needs simple multiplexing and
demultiplexing mechanisms. METP does away with the IP layer in the
wireless link by taking advantage of the fact that the hop between the
base station and mobile host is the first or last in the connection. Thus,
the METP at the base station accepts an IP packet destined for the mo-
bile hosts as if it were meant for itself, strips its IP header, and delivers it
to the higher layer. The METP at the base station handles any TCP con-
nection involving the mobile host. The base station temporarily stores
packets sent by the mobile host, before they are forwarded by METP to
the wired network, in a buffer. Similarly, data packets meant for the mo-
bile host are received at the base station, stored in the receive buffer, and
then forwarded to the mobile host.
• Multiple Acknowledgments
The Multiple Acknowledgments method [44] distinguishes losses due to congestion or other errors on the wired link from those on the wireless link. It is similar to the Snoop protocol, described above. Instead of splitting the connection into two parts as in I-TCP, the Multiple Acknowledgments method generates two types of ACKs:
- ACKp: this partial acknowledgment with sequence number Na informs the sender that the packet(s) with sequence numbers up to Na - 1 have been received by the base station.
- ACKc: this complete acknowledgment has the same semantics as the normal TCP acknowledgment, i.e. it indicates that the MH has received the packet.
An ACKp, in particular, indicates that the base station is having problems delivering data over the wireless link.
Following these two ACK definitions, two RTT and RTO values are also
defined, one end-to-end and one from base station to MH. These RTT
and RTO values are estimated accordingly when an ACKp or an ACKc is
received.
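The dual estimation can be sketched by keeping two independent RTT/RTO estimators, one updated by ACKp samples (sender-to-base-station path) and one by ACKc samples (end-to-end path); the wireless-segment RTT follows as their difference. This is an illustrative Python sketch; the Jacobson/RFC 6298-style constants and function names are our assumptions, not code from [44].

```python
class RttEstimator:
    """Jacobson-style smoothed RTT/RTO estimator (RFC 6298 constants)."""
    def __init__(self):
        self.srtt = None    # smoothed RTT (s)
        self.rttvar = None  # RTT variation (s)
        self.rto = 3.0      # conservative initial RTO (s)

    def update(self, sample):
        if self.srtt is None:
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - sample)
            self.srtt = 0.875 * self.srtt + 0.125 * sample
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

# One estimator per ACK type: ACKp samples measure the wired
# sender-to-base-station path, ACKc samples the full end-to-end path.
wired = RttEstimator()       # fed by ACKp arrivals
end_to_end = RttEstimator()  # fed by ACKc arrivals

def on_ack(kind, rtt_sample):
    (wired if kind == "ACKp" else end_to_end).update(rtt_sample)
```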
End-to-end Solutions
• Internet Control Message Protocol (ICMP)
The ICMP-based solution [45] tries to avoid spurious timeouts by using an explicit feedback mechanism. Instead of trying to hide problems due to
the wireless link, this solution proposes to transmit an explicit notifica-
tion to the TCP sender. In this way the sender can distinguish whether
the losses are due to congestion or wireless errors, thus cutting down its
congestion window only when necessary. This protocol uses a message,
called ICMP-DEFER, for explicit notification. Another message, called
ICMP-RETRANSMIT, is generated by the base station when all the local
retransmission attempts have been exhausted.
When data is lost over the wireless link, the base station generates an ICMP-DEFER message and sends it to the TCP sender. This policy ensures that within one round trip time TCP receives either an acknowledgment or an ICMP message. Moreover, it ensures that end-to-end retransmissions do not start while link-layer retransmissions may still be going on. The absence of both indicates a congestion loss, so TCP can distinguish between the two kinds of losses.
When the TCP sender receives an ICMP-DEFER message, it resets its retransmission timer without changing its cwnd and ssthresh. By postponing the timer by one RTO, the base station gains sufficient time to exhaust the local retransmission attempts for the lost packet. If these transmission attempts keep failing, an ICMP-RETRANSMIT message is sent. When the TCP sender receives the ICMP-RETRANSMIT message, it reacts by retransmitting the indicated segment. As soon as the destination receives subsequent packets, it generates duplicate ACKs. When the source TCP receives the first of these duplicate ACKs, it switches to the fast recovery algorithm. When it finally receives a new ACK, it leaves fast recovery and resets cwnd to the value it had before entering the fast recovery phase.
• Fast Retransmit Enhancement
This Fast-Retransmit approach [46] neither splits the TCP connection nor requires the TCP at the FH to be modified. However, a modified version of TCP is used at the MH. The approach addresses the behaviour of TCP when communication resumes after a handoff. The unmodified TCP at the sender assumes the delay caused by a handoff to be due to congestion (since TCP assumes that all delays are caused by congestion), and when a timeout occurs it reduces its window size and retransmits the outstanding packets. Handoffs often complete relatively quickly, yet the mobile must wait a long time before the timeout occurs at the sender and packets start being retransmitted, because of the coarse timeout granularity of most TCP implementations. The fast retransmit approach
alleviates this problem by having the mobile host send a certain threshold
number of duplicate acknowledgments to the sender, a step that causes
TCP at the sender to immediately reduce its window size and retransmit
packets starting from the first missing one (for which the duplicate ac-
knowledgment was sent). This method is shown to reduce the maximum
possible latency of one minute due to serial timeouts to about 50 ms [46].
Another way to enhance the fast retransmit phase when multiple packets are lost within a window is described in [17].
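The mobile-side trigger can be sketched as follows: after a handoff completes, the modified MH emits enough duplicate ACKs to cause fast retransmit at the sender instead of waiting for a coarse-grained timeout. This is an illustrative Python sketch; the names are hypothetical, not taken from [46].

```python
DUPACK_THRESHOLD = 3  # standard fast-retransmit trigger count

def complete_handoff(mobile_host, send_ack):
    """After a handoff, immediately emit DUPACK_THRESHOLD duplicate
    ACKs for the last in-order data, so the sender enters fast
    retransmit right away rather than waiting for its coarse timer."""
    last_ack = mobile_host["last_ack_sent"]
    for _ in range(DUPACK_THRESHOLD):
        send_ack(last_ack)  # each call emits one duplicate ACK
```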
• Selective Acknowledgment (SACK)
With Selective Acknowledgment [19], the data receiver can inform the sender about all segments that have arrived successfully, so the sender needs to retransmit only the segments that have actually been lost. This changes TCP from a Go-Back-N protocol to a selective repeat protocol. SACK uses a TCP option comprised of a set of ordered pairs of left and right block edges that specify the sequence numbers of the data that were properly received (cf. sec. 2.3).
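Constructing the SACK pairs from the out-of-order data held by the receiver can be sketched as follows. This is an illustrative Python sketch; sequence numbers are treated as segment indices for simplicity, whereas RFC 2018 uses byte sequence numbers.

```python
def sack_blocks(received, next_expected):
    """Build (left_edge, right_edge) SACK pairs from the set of
    received segment numbers above the cumulative ACK point.
    Following RFC 2018 edge semantics, the right edge is the number
    just after the last received segment of the block."""
    blocks = []
    for s in sorted(n for n in received if n >= next_expected):
        if blocks and s == blocks[-1][1]:
            blocks[-1] = (blocks[-1][0], s + 1)  # extend current block
        else:
            blocks.append((s, s + 1))            # start a new block
    return blocks
```

For example, if segments 5, 6 and 8 are held while segment 4 is the cumulative ACK point, the option would carry the pairs (5, 7) and (8, 9).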
Link Layer Solutions
• Delayed Duplicate ACKs
Delayed Duplicate ACKs [47] is a TCP-unaware approach that improves the performance of TCP over wireless without taking TCP-specific actions at the intermediate nodes. The base station performs link-level retransmissions for packets lost on the wireless link due to transmission errors. However, link-level retransmissions do not guarantee in-order delivery of packets. They are triggered by link-level acknowledgements, which keeps the base station TCP-unaware.
To reduce the interference between TCP retransmissions and link-level retransmissions, the receiver delays the third and subsequent duplicate ACKs by some time duration, making sure that a TCP retransmission is not triggered at the sender. The advantage of this scheme over Snoop is that the base station link layer is not aware of TCP, which means the scheme works even if TCP headers are encrypted.
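The receiver-side behaviour might be sketched as follows: the first two duplicate ACKs go out immediately, later ones are held back, and if the link-level recovery succeeds before the delay expires the held dupacks are discarded. This is an illustrative Python sketch; the class name and delay handling are our assumptions, not code from [47].

```python
import heapq

class DelayedDupAckReceiver:
    """Delays the third and subsequent duplicate ACKs by `delay`
    seconds, giving link-level retransmission a chance to recover
    the loss before TCP fast retransmit is triggered."""
    def __init__(self, delay):
        self.delay = delay
        self.dupacks = 0
        self.pending = []  # (release_time, ack) min-heap

    def on_out_of_order(self, now, ack):
        self.dupacks += 1
        if self.dupacks < 3:
            return [(now, ack)]           # first two dupacks sent at once
        heapq.heappush(self.pending, (now + self.delay, ack))
        return []                         # third and later: held back

    def on_in_order(self, now, ack):
        self.dupacks = 0
        self.pending.clear()              # loss recovered: drop held dupacks
        return [(now, ack)]
```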
Chapter 4
Simulation
4.1 ns-2 Simulator and EURANE extension
ns-2 Simulator
The network simulator 2 (ns-2) is an object-oriented simulator developed as part of the VINT project at the University of California, Berkeley [48]. It is an open-source simulator widely used in the academic community. The simulator is event-driven and runs in non-real time.
ns is based on two languages: C++ for the object-oriented simulator core and an OTcl interpreter that executes users' command scripts.
The OTcl language is an object-oriented extension of Tcl. It allows users to define arbitrary network topologies composed of nodes, routers, links and shared media. It also lets users choose which protocols to use and attach them to nodes, usually as agents (agents are the objects that actually produce and consume packets). In addition, it allows users to define the form of the output they want to obtain from the simulator.
The simulator suite also includes a graphical visualizer called network animator (nam), which helps users gain insight into their simulations by visualizing packet trace data. The nam animation shows the network topology, packet flows, and queued and dropped packets at buffers.
ns is a discrete event simulator, where the advance of time depends on the timing of events, which are maintained by a scheduler. An event is an object in the C++ hierarchy with a unique ID, a scheduled time and a pointer to an object that handles the event.
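The event-scheduler idea can be illustrated with a minimal sketch (Python, purely illustrative; ns-2's actual scheduler is written in C++): events carry a unique ID, a firing time and a handler, and simulated time jumps from one event to the next rather than advancing in real time.

```python
import heapq
import itertools

class Scheduler:
    """Minimal discrete-event core in the spirit of ns-2."""
    def __init__(self):
        self.now = 0.0
        self._queue = []                 # (time, uid, handler) min-heap
        self._ids = itertools.count()    # unique event IDs

    def at(self, time, handler):
        """Schedule `handler` to fire at simulated `time`; return its ID."""
        uid = next(self._ids)
        heapq.heappush(self._queue, (time, uid, handler))
        return uid

    def run(self):
        """Pop events in time order, jumping `now` to each event's time."""
        while self._queue:
            self.now, _, handler = heapq.heappop(self._queue)
            handler()
```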
EURANE
The Enhanced UMTS Radio Access Network (EURANE) [49] is an ns-2 extension developed within the SEACORN project for Ericsson Telecommunicatie B.V.
EURANE introduces three additional nodes to existing UMTS modules for
ns-2:
- Radio Network Controller (RNC)
- Base Station (BS)
- User Equipment (UE)
whose functionalities allow for the support of the following transport channels:
• Forward Access Channel (FACH)
• Random Access Channel (RACH)
• Dedicated Channel (DCH)
• High-Speed Downlink Shared Channel (HS-DSCH)
The common channels (FACH and RACH) and the dedicated channel (DCH) use a standard error model provided by ns-2, while the high-speed channel (HS-DSCH) uses pre-computed input files (usually generated with Matlab) as error model and BLER curves.
The main functionality additions to ns-2 come in the form of the RLC Ac-
knowledged Mode (AM), Unacknowledged Mode (UM), Mac-d/-c/sh support
for RACH/FACH and DCH, and MAC-hs support for HS-DSCH, i.e. HSDPA.
Figures 4.1 and 4.2 show, respectively, the overall MAC architecture, sup-
porting HSDPA, at the UE side and at the UTRAN side (Node B and RNC).
Figure 4.1: UE side MAC architecture [50]
Figure 4.2: UTRAN side overall MAC architecture [50]
In the Unacknowledged Mode, no retransmission protocol is in use and data
delivery is not guaranteed. Received erroneous data is either marked erro-
neous or discarded depending on the configuration. A Radio Link Control
(RLC) entity in the unacknowledged mode is defined as unidirectional be-
cause no association between the uplink and downlink is needed. The unac-
knowledged mode is used, for example, by Voice-over-IP (VoIP) applications,
in which RLC level retransmissions are not required.
In the Acknowledged Mode, an ARQ retransmission mechanism is used for error recovery. Segmentation, concatenation, padding and duplicate detection are provided by means of header fields added to the data. The AM entity is bi-directional and capable of “piggybacking” an indication of the status of the link in the opposite direction into user data. AM is the normal RLC mode for packet-type services, such as web browsing and file downloading.
The transmission of the MAC-hs protocol data units to their respective
UEs is achieved through the use of parallel Stop-and-Wait HARQ processes.
The HARQ scheme implemented in EURANE uses Chase Combining, in which retransmissions of the same block are combined at the receiver to obtain a higher likelihood of successful decoding.
EURANE implements three scheduling methods:
• Round Robin (RR)
• Fair Channel-Dependent Scheduling (FCDS)
• Max C/I
As introduced in section 1.3.3, the RR method is based on a fair-share principle, while the Max C/I method is based on the current channel conditions. The FCDS method [51] instead represents a trade-off between RR and Max C/I: it achieves higher fairness than Max C/I and more efficient power use than RR. The decrease in fairness, due to the more efficient power use, still satisfies a constraint on the number of packets that may at most be delayed (or lost, depending on the type of application). The latter is given in statistical terms as the probability that the time each particular UE waits for the next packet does not exceed a critical number of milliseconds. In practice,
the signal fluctuates around a mean value that itself exhibits slow trends. This underlying slow fluctuation accounts for the distance from the base station. The time scale of the so-called fading variations in the signal itself, due to multi-path reception and/or shadow fading, is much smaller than that of the variations of this so-called local mean. Scheduling is based on the relative power, i.e. the instantaneous power relative to its own recent history: the transmission level of each mobile terminal is first translated with respect to its local mean and subsequently normalised by its local standard deviation. A transmission is scheduled to the UE that has the lowest value of this so-called relative power.
In EURANE, when the FCDS method is selected, a parameter called alpha
must be set. This parameter defines the amount of weighting used in the algo-
rithm. A value of 0.0 would equate to the Round Robin case, while a value of
1.0 would equate to the Max C/I case.
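The scheduling rule described above might be sketched as follows. This is an illustrative Python sketch: the exponentially weighted update of the local mean and variance with weight alpha is an assumption on our part and does not reproduce EURANE's exact filter, nor the exact way alpha interpolates between RR and Max C/I.

```python
import math

class FcdsScheduler:
    """Sketch of Fair Channel-Dependent Scheduling: each UE's current
    transmission power level is translated by its local mean and
    normalised by its local standard deviation; the UE with the lowest
    relative power is served, as described in the text above."""
    def __init__(self, n_ues, alpha):
        self.alpha = alpha
        self.mean = [0.0] * n_ues
        self.var = [1.0] * n_ues

    def pick(self, power):
        # power[i]: current required transmission level for UE i
        rel = []
        for i, p in enumerate(power):
            # update local statistics (assumed EWMA form, weight alpha)
            self.mean[i] = (1 - self.alpha) * self.mean[i] + self.alpha * p
            d = p - self.mean[i]
            self.var[i] = (1 - self.alpha) * self.var[i] + self.alpha * d * d
            rel.append(d / math.sqrt(self.var[i] + 1e-12))
        return min(range(len(power)), key=rel.__getitem__)
```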
Figure 4.3: Main characteristics of EURANE's schedulers [52] (Max C/I: unfair, efficient power use; Round Robin: fair, inefficient power use; FCDS: rather fair, moderate power use)
The Channel Quality Indicator is a 5-bit feedback from the receiving UE to
the transmitting Node B. Each CQI value represents a specific combination of
number of codes, modulation and code rate, resulting in a specific transport
block size. The period over which this is determined is three TTIs long and
ends one TTI before the current block.
In HSDPA the interference is the sum of intra-cell and inter-cell interference. Both have a noise-like character. EURANE considers both inter-cell and intra-cell interference constant: the intra-cell interference is added at the input of the channel model, the inter-cell interference at the input of the receiver (see Figure 4.4). This simplification does not impose severe limitations on the accuracy of the overall model, as the variance of the interference power is mainly due to the number of transmitting sources, which is nearly constant during the holding time of a connection [52].
Figure 4.4: Overview of physical layer model used in EURANE [52]
The channel model consists of three parts: distance loss (A), shadowing (S) and multi-path (R). The three parts are considered mutually independent. The attenuation is then defined as follows [52]:
L = A + S + R (4.1)
The attenuation term (A) is described by the Okumura-Hata propagation reference model for suburban areas:
A(x) = 129.4 + 10·β·log10(x) (4.2)
where β = 3.52 represents the path loss exponent and x is the distance from
Node B to UE (expressed in kilometers).
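As a quick numerical check of (4.2), the attenuation can be evaluated directly (illustrative Python sketch; the function name is ours):

```python
import math

def path_loss_db(x_km, beta=3.52):
    """Okumura-Hata suburban reference attenuation used by EURANE:
    A(x) = 129.4 + 10 * beta * log10(x), with x in kilometres."""
    return 129.4 + 10.0 * beta * math.log10(x_km)
```

For the reference UE at 450 m from the Node B this gives A(0.45) ≈ 117.2 dB.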
The slow fading (S) is caused by obstacles in the propagation path between the UE and the Node B. The common assumption that shadowing is independent from one location to another is not valid in a dynamic model with mobile users, where location-dependent correlation must be accounted for in order to provide continuity. The correlated slow-fading contribution to the total loss is constructed from the following algorithm:
S(x + ∆x) = a·S(x) + b·σ·N (4.3)
where ∆x is the distance between two subsequent time samples and N is a random variable following the standard normal distribution. The parameter b is usually taken such that the standard deviation of the vector containing all realisations equals σ, which prescribes that b² = 1 − a². The remaining parameter, a, is determined by the following demand, which concerns the autocorrelation function of S:
E[S(x)·S(x + ∆x)] = E[a·S(x)² + b·σ·N·S(x)] = a·σ² (4.4)
This expression should be equal to exp(−∆x/D)·σ², which results in the demand that a = exp(−∆x/D), with D the correlation distance. In our simulations, D
is taken equal to 40 m. In a Pedestrian A scenario (users moving at 3 km/h), a correlation distance of 40 m corresponds to a correlation time of about fifty seconds. A typical value for the standard deviation in suburban areas is σ = 8 dB [53]. The typical length scale of shadow fading is related only to the size of the objects that block or absorb the propagated signal.
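The correlated shadowing process of (4.3), with a = exp(−∆x/D) and b² = 1 − a², can be generated as follows (illustrative Python sketch; the default parameters follow the values above, D = 40 m and σ = 8 dB):

```python
import math
import random

def shadowing(n, dx, D=40.0, sigma=8.0, seed=1):
    """Generate n samples of correlated shadowing S (in dB) with
    correlation distance D and standard deviation sigma, via the
    AR(1) recursion S(x + dx) = a*S(x) + b*sigma*N."""
    rng = random.Random(seed)
    a = math.exp(-dx / D)
    b = math.sqrt(1.0 - a * a)
    s = sigma * rng.gauss(0.0, 1.0)   # stationary initial sample
    out = [s]
    for _ in range(n - 1):
        s = a * s + b * sigma * rng.gauss(0.0, 1.0)
        out.append(s)
    return out
```

Over a long realisation the sample standard deviation stays close to σ, confirming the choice b² = 1 − a².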
For the last term, the fast fading contribution (R), a Rician distribution is assumed. Fast fading is caused by multipath propagation. For Rician fading, the distance between the fading dips is related to the carrier frequency and the speed of light. As a result, the fades for HSDPA are shorter (in distance and time) compared to the GSM situation.
4.2 Simulation Scenario
The simulation scenario is depicted in Figure 4.5.
Figure 4.5: Simulation scenario (Core Network, RNC, BS and 10 UEs)
It consists of 10 UEs, one of which is the reference UE while the remaining ones are considered competitors. The reference UE is located at a distance of 450 meters from the Node B. Competing UEs are located at distances varying from 50 up to 750 meters from the Node B. All UEs are pedestrian and move at a speed of 3 km/h. Unless otherwise specified, the term “UE” refers to the reference UE. Table 4.1 shows the scenario's characteristics. Figure 4.7 shows the network architecture and Table 4.2 its parameters.
In these conditions, the available link bandwidth for the UE (i.e. the wireless channel capacity experienced by the UE) follows the trend depicted in Figure 4.6, with an average value of 1480 Kbits/s.
The proxy solution used in this chapter is based on [29] (cf. sec. 3.1).
The sampling time of the link bandwidth is set to 70 ms, since this is the time it takes for the bandwidth to be computed by the UE (about 40 ms, [54]) and then to reach the RNFProxy within the RNF message. The RNC's and Node B's buffer sizes are set to the EURANE default values (500 and 250 packets, respectively). Modifying these values, simulation results can vary significantly. However, it is not the purpose of this
Parameter Value
Number of active UEs 10
Number of competing UEs 9
Distance of reference UE from Node B 450 m
Distance of competing UEs from Node B 50-750 m
Speed of UEs 3 km/h
Path loss exponent 3.52
Correlation in shadow fading 40 m
Standard deviation in shadow fading 8 dB
Table 4.1: Scenario’s characteristics
Figure 4.6: Available link bandwidth
Figure 4.7: Network architecture (UE, Node B, RNC, Proxy, Server)
Parameter Value
UE - Node B distance 450 m
Node B - RNC link delay 15 ms
Node B - RNC link capacity 622 Mbit/s
RNC - Proxy link delay 0.1 ms
RNC - Proxy link capacity 622 Mbit/s
Proxy - Server link delay 60 ms
Proxy - Server link capacity 10 Mbit/s
RNC buffer size 500
Node B buffer size 250
Scheduling scheme FCDS (alpha=0.5)
UE’s elaboration delay 40 ms
Requested file size 4 MByte
Sampling time 70 ms
Total simulation time 15 s
Table 4.2: Simulation parameters
thesis to investigate these changes.
The TCP version used in our simulations is TCP Reno. Though other versions of TCP (such as SACK) could lead to better performance [55], TCP Reno plays a significant role in future mobile applications since it is widely deployed in the Internet.
The communication between the UE and the server is started by the UE sending a download request (SYN) message to the server. When the server receives the SYN, it acknowledges it by means of a SYN-ACK message. Once the UE receives the SYN-ACK, it responds by sending an ACK to the server. The connection between UE and server is then open and the server can start sending the requested file.
4.3 Simulation Results
The first simulation carried out concerns the effective transmission rate of the UE in the simple scenario (with neither a proxy between server and RNC nor enhancing protocols on the Node B).
Figure 4.8: UE's throughput in the simple scenario
In this scenario, the average value of the UE's transmission rate is 624 Kbits/s. The server's congestion window is shown in Figure 4.9(a). The initial value of the ssthresh was set to 62. This value was chosen because it ensures that the TCP sender does not enter the congestion avoidance phase prematurely, allowing better performance. Using a smaller initial ssthresh, such as the one used in the RNFProxy scenario (ssthresh=19), the server soon enters the congestion avoidance phase and the transmission rate experienced by the UE is lower, about 574 Kbits/s (see Figure 4.10(a)). Figure 4.10(b) shows the server's congestion window when, in the simple scenario, the ssthresh is set to 19.
From Figure 4.9(a) it is possible to gather some important details about the simulation. At t=1.7 s, the server's cwnd reaches the ssthresh value, the slow start ends and the congestion avoidance phase starts. At t=2.1 s, the server receives
three duplicate acknowledgments. The ssthresh is then set to 31 and the cwnd is reduced according to the fast recovery algorithm (cf. sec. 2.1). Since the server does not receive an ACK acknowledging new data before the RTO expires, at t=2.7 s the ssthresh is halved again and the cwnd is set to one. The slow start phase then starts again.
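The window dynamics just described follow standard Reno rules, which can be sketched as follows (simplified Python sketch; real TCP Reno additionally tracks flight size and inflates cwnd per dupack during fast recovery):

```python
def on_triple_dupack(state):
    # Fast retransmit / fast recovery: halve ssthresh, then set cwnd
    # to ssthresh inflated by the three duplicate ACKs already seen.
    state["ssthresh"] = max(state["cwnd"] // 2, 2)
    state["cwnd"] = state["ssthresh"] + 3
    state["phase"] = "fast-recovery"

def on_rto(state):
    # Retransmission timeout: halve ssthresh again, collapse cwnd
    # to one segment and restart slow start.
    state["ssthresh"] = max(state["cwnd"] // 2, 2)
    state["cwnd"] = 1
    state["phase"] = "slow-start"
```

Starting from cwnd = 62, the triple dupack yields ssthresh = 31, and the subsequent timeout halves ssthresh again and resets cwnd to one, matching the trace in Figure 4.9(a).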
Figure 4.9: Server's congestion window in simple (a) and RNFProxy (b) scenarios
Figure 4.10: Trends obtained in the simple scenario setting the server's initial ssthresh to 19: (a) UE's data rate, (b) server's cwnd
Figure 4.11: Throughput improvements by adding Eifel (a) and Snoop (b) protocols
Figure 4.11 shows the UE's data rate when the Eifel (Figure 4.11(a)) and Snoop (Figure 4.11(b)) protocols are implemented on the Node B. Adding the Eifel protocol, the average throughput is 629 Kbits/s; adding Snoop, it is 666 Kbits/s. These values show that the improvement achieved by adding Snoop is larger than the one achieved with Eifel. This is due to the fact that in this scenario the number of dupacks is much higher than the number of spurious timeouts. This leads to a substantial performance improvement when the Snoop protocol is implemented, since it hides a great number of dupacks from the server, saving it from reducing its congestion window. Eifel's benefits are instead less evident, since the number of spurious timeouts during a 15 s simulation in a not-so-critical scenario is very low.
Figure 4.12 shows how the UE's throughput rises when the RNFProxy is introduced between the RNC and the server. In this case, the server's initial ssthresh is set to a smaller value (ssthresh=19) than in the simple scenario (ssthresh=62). The trend of the server's congestion window when the RNFProxy is introduced is depicted in Figure 4.9(b). Figure 4.13 shows the throughput trend in the RNFProxy scenario when adding the Eifel protocol (Fig. 4.13(a)) and the Snoop protocol (Fig. 4.13(b)). In the RNFProxy scenario the average throughput is 1110 Kbits/s, in the RNFProxy plus Eifel scenario it is 1111 Kbits/s and in the RNFProxy plus Snoop scenario it is 1130 Kbits/s.
When the RNFProxy is added, the enhancements introduced by Snoop and Eifel are less evident than in the simple scenario. This is due to the fact that the RNFProxy itself reduces the number of dupacks and spurious timeouts reaching the server. Its introduction thus decreases the “work” left for the Snoop and Eifel protocols, making their benefits less plain. Figure 4.16 shows a comparison between the RNFProxy behavior with and without enhancing protocols on the Node B. Figure 4.14 shows the trend when adding both the Eifel and Snoop protocols to the RNFProxy scenario.
Despite the lower initial value of the server's ssthresh, the improvements achieved by adding the RNFProxy are evident and can be estimated from Figure 4.15. Thanks to the RNFProxy, there is a significant improvement in startup performance. This
Figure 4.12: UE's throughput in RNFProxy scenario
is because in the simple scenario the server has no information about the available bandwidth of the wireless channel, and therefore has to begin the transmission at the lowest possible rate. On the contrary, since the RNFProxy knows the available link bandwidth, it can set its congestion window to fully utilize it.
Knowledge of the available link bandwidth also leads to better performance of the RNFProxy compared to that of the SimpleProxy. In the SimpleProxy scenario the proxy acts only as a splitter, that is, it splits the connection between server and UE into two parts: one between server and proxy, and the other between proxy and UE. As introduced in sec. 3.1, by splitting the connection between server and UE, the transmission rate experienced by the UE is larger, because the proxy shortens the path over which ACKs are received and packets are retransmitted. Figure 4.17 shows a comparison between the throughput experienced by the UE using a SimpleProxy and an RNFProxy. With the SimpleProxy the average UE data rate is 975 Kbits/s, with the RNFProxy it is 1110 Kbits/s.
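The advantage of knowing the link bandwidth can be made concrete: a proxy fed by RNF messages can size its congestion window to the bandwidth-delay product of its section of the path, instead of probing from one segment. This is an illustrative Python sketch; the 100 ms RTT and 1500-byte MSS in the example are assumed values, not simulation parameters.

```python
import math

def proxy_cwnd(bw_bits_s, rtt_s, mss_bytes=1500):
    """Congestion window (in segments) matching the bandwidth-delay
    product: enough data in flight to keep the link full without
    building a queue."""
    return max(1, math.ceil(bw_bits_s * rtt_s / (8 * mss_bytes)))
```

With the measured average bandwidth of 1480 Kbits/s and an assumed 100 ms round-trip time, this gives a window of 13 segments.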
Figure 4.13: Throughput improvements by adding Eifel (a) and Snoop (b) protocols to RNFProxy scenario
Figure 4.14: UE's throughput in RNFProxy scenario with both Eifel and Snoop protocols
Figure 4.15: Comparison between throughput's trend in simple scenario and in RNFProxy scenario
Figure 4.16: Comparison between throughput's trend in RNFProxy scenario (with and without enhancing protocols)
Figure 4.17: Comparison between the throughput experienced interposing a RNFProxy and that experienced with a SimpleProxy
Implemented Solution Average data rate
Simple scenario (no proxy) 624 Kbits/s
Simple scenario + Eifel 629 Kbits/s
Simple scenario + Snoop 666 Kbits/s
SimpleProxy scenario 975 Kbits/s
RNFProxy scenario 1110 Kbits/s
RNFProxy scenario + Eifel 1111 Kbits/s
RNFProxy scenario + Snoop 1130 Kbits/s
RNFProxy scenario + Eifel + Snoop 1131 Kbits/s
Table 4.3: Summary of simulation results
Chapter 5
Conclusions
In this thesis, a proxy solution to improve users' data rates over HSDPA networks has been investigated. The studied solution is based on the signalling scheme introduced in [29], which uses a new custom protocol between the RNC and the proxy. This protocol provides information from the data-link layer within the RNC to the transport layer within the proxy, including the instantaneous available link bandwidth on the wireless channel and the queue length in the RNC. This communication is called Radio Network Feedback (RNF). Furthermore, the impact of the Eifel and Snoop protocols on users' transmission performance has been investigated.
Simulation results show that the RNFProxy solution significantly enhances the transmission's startup performance. In the considered scenario, we have obtained an increase in the average data rate of about 80% by introducing the RNFProxy. This is because in the simple scenario, where no proxy is implemented, the server has no information about the available bandwidth of the wireless channel and therefore has to begin the transmission at the lowest possible rate. On the contrary, since the RNFProxy knows the available link bandwidth, it can set its congestion window to fully utilize it. Moreover, simulations show that implementing the Snoop and Eifel protocols on the Node B further improves the data rate experienced by the UE. By adding the Eifel and Snoop protocols to the RNFProxy solution, the average data rate has been increased by a further 2%. This increase is almost entirely due
to the Snoop protocol. This is because the Eifel protocol acts on spurious timeouts and spurious fast retransmits, events that are not very frequent in a 15-second simulation and in a not-so-critical scenario such as the one used in this thesis (there are only 10 competing users and the reference UE is located just 450 meters from the Node B). Eifel's and Snoop's benefits may become more evident in a more critical scenario and in longer-lived simulations.
The achieved performance improvements can be measured in terms of the end user's experience, as well as from the mobile operator's point of view. The spread of interactive and real-time services over the new high-speed mobile networks steadily increases the interest in reducing end-to-end delay and delay variation. Furthermore, operators are interested in fully utilizing the scarce and expensive radio resources, as well as in supporting the maximum number of users per cell at the maximum allowable data rate. Finally, since mobile operators are interested in making their systems scalable, a proxy solution allows them to adapt the network without changing either the remote servers or the mobile terminals.
References
[1] 3GPP, “TS 25.211 Technical Specification Group Radio Access Network;
Physical channels and mapping of transport channels onto physical chan-
nels (FDD)(Release 7),” March 2006, v7.0.0.
[2] P. J. A. Gutierrez, “Packet Scheduling and Quality of Service in HSDPA,”
Ph.D. dissertation, Department of Communication Technology Institute
of Electronic Systems, Aalborg University, 2003.
[3] 3GPP, “TS 25.214 Physical Layer Procedures (FDD),” v5.11.0.
[4] ——, “TS 25.331 Radio Resource Control (RRC),” v5.5.0.
[5] K. Miyoshi, T. Uehara, and M. Kasapidis, “Link Adaptation Method
for High Speed Downlink Packet Access for W-CDMA,” Wireless Per-
sonal Multimedia Communications (WPMC) Proceedings, vol. 2, pp. 455–460,
September 2001.
[6] D. Chase, “Code Combining - A maximum likelihood decoding approach
for combining an arbitrary number of noisy packets,” IEEE Transactions on
communications, vol. COM-33, no. 5, pp. 385–393, May 1985.
[7] F. Frederiksen and T. E. Kolding, “Performance and Modeling of
WCDMA/HSDPA Transmission/H-ARQ Schemes,” Proceedings of the
IEEE 56th Vehicular Technology Conference (VTC), vol. 1, pp. 472–476, 2002.
[8] P. Frenger, S. Parkvall, and E. Dahlman, “Performance Comparison of
HARQ with Chase Combining and Incremental Redundancy for HS-
DPA,” IEEE, pp. 1829–1833, 2001.
[9] C.-S. Chiu and C.-C. Lin, “Comparative Downlink Shared Channel Per-
formance Evaluation of WCDMA Release 99 and HSDPA,” in Proceedings
of the 2004 IEEE International Conference on Networking, Sensing & Control,
Taipei, Taiwan, March 21-23 2004, pp. 1165–1170.
[10] Global mobile Suppliers Association (GSA), http://www.gsacom.com.
[11] J. Postel, “RFC 793: Transmission Control Protocol,” IETF, Tech. Rep.,
September 1981.
[12] M. Allman, V. Paxson, and W. R. Stevens, “RFC 2581: TCP congestion
control,” IETF, Tech. Rep., April 1999.
[13] M. Allman, S. Floyd, and C. Partridge, “RFC 3390: Increasing TCP’s initial
window,” IETF, Tech. Rep., October 2002.
[14] H. Inamura, R. Ludwig, A. Gurtov, and F. Khafizov, “RFC 3481: TCP over
Second (2.5G) and Third (3G) Generation Wireless Networks,” IETF, Tech.
Rep., February 2003.
[15] V. Jacobson, “Congestion Avoidance and Control,” SIGCOMM Sympo-
sium on Communications Architectures and Protocols, 1988, pp. 314–329.
[16] ——, “Modified TCP Congestion Avoidance Algorithm,” end2end inter-
est mailing list, Tech. Rep., April 1990.
[17] S. Floyd, T. Henderson, and A. Gurtov, “RFC 3782: The NewReno Modi-
fication to TCP’s Fast Recovery Algorithm,” IETF, Tech. Rep., April 2004.
[18] C. Casetti, M. Gerla, S. Mascolo, M. Sanadidi, and R. Wang, “TCP West-
wood: End-to-End Congestion Control for Wired/Wireless Networks,”
Wireless Networks, vol. 8, pp. 467–479, 2002.
[19] M. Mathis, S. Floyd, and A. Romanow, “RFC 2018: TCP Selective Ac-
knowledgment Options,” IETF, Tech. Rep., October 1996.
[20] M. Mathis and J. Mahdavi, “Forward Acknowledgment: Refining TCP
Congestion Control,” ACM SIGCOMM, 1996.
[21] L. Brakmo, S. O’Malley, and L. Peterson, “TCP Vegas: New Techniques
for Congestion Detection and Avoidance,” ACM SIGCOMM, 1994.
[22] M. Assaad and D. Zeghlache, “On the capacity of HSDPA,” IEEE
GLOBECOM, 2003, pp. 60–64.
[23] M. Assaad, B. Jouaber, and D. Zeghlache, “Effect of TCP on UMTS-
HSDPA system performance and capacity,” Globecom 2004.
[24] M. Assaad and D. Zeghlache, “Scheduling study in HSDPA system,” in
IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Com-
munications, 2005, pp. 1890–1894.
[25] ——, “Cross-layer design in HSDPA system to reduce the TCP effect,”
IEEE Journal on Selected Areas in Communications, vol. 24, no. 3, March 2006.
[26] M. Assaad, B. Jouaber, and D. Zeghlache, “TCP Performance over UMTS-
HSDPA System,” Telecommunication Systems, vol. 27, no. 2-4, October 2004.
[27] J. Border, M. Kojo, J. Griner, G. Montenegro, and Z. Shelby, “RFC 3135:
Performance enhancing proxies intended to mitigate link-related degra-
dations,” IETF, Tech. Rep., June 2001.
[28] M. Holze, M. Meyer, and J. Sachs, “Performance Evaluation of a TCP
Proxy in WCDMA Networks,” IEEE Wireless Communications, Octo-
ber 2003.
[29] N. Moller, I. C. Molero, K. H. Johansson, J. Petersson, R. Skog, and
A. Arvidsson, “Using Radio Network Feedback to Improve TCP Perfor-
mance over Cellular Networks,” Proc. of the 44th IEEE Conference on Deci-
sion and Control, December 2005.
[30] R. Chakravorty, S. Katti, J. Crowcroft, and I. Pratt, “Flow Aggregation for
Enhanced TCP over Wide-Area Wireless,” Proceedings of the IEEE Infocom,
2003.
[31] C. A. Waldspurger and W. E. Weihl, “Stride Scheduling: Deterministic
Proportional-Share Resource Management,” MIT Laboratory for Com-
puter Science, Cambridge, Tech. Rep. TM-528, 1995.
[32] A. Gurtov and R. Ludwig, “Responding to spurious timeouts in TCP,”
Proceedings of IEEE Infocom, 2003.
[33] R. Ludwig and R. H. Katz, “The Eifel algorithm: making TCP robust
against spurious retransmissions,” ACM Computer Communications Re-
view, vol. 30, no. 1, pp. 30–36, January 2000.
[34] R. Ludwig and A. Gurtov, “The Eifel response algorithm for TCP,” IETF,
Tech. Rep. RFC 4015, February 2005.
[35] O. Teyeb and J. Wigard, Deliverable 2.11: Emulation of TCP Performance Over
WCDMA, FACE: Future Adaptive Communication Environment, June
2003.
[36] S. Fu and M. Atiquzzaman, “DualRTT: detecting spurious timeouts in
wireless mobile environments,” 24th IEEE International Performance, Com-
puting, and Communications Conference, pp. 129–133, April 2005.
[37] Y. Guan, B. den Broeck, J. Potemans, J. Theunis, D. Li, E. V. Lil, and A. V.
de Capelle, “Simulation study of TCP Eifel algorithms,” Opnetwork, 2005.
[38] H. Balakrishnan, S. Seshan, E. Amir, and R. H. Katz, “Improving TCP/IP
Performance over Wireless Networks,” ACM Wireless Networks, Novem-
ber 1995.
[39] H. Balakrishnan, S. Seshan, and R. H. Katz, “Improving Reliable Transport
and Handoff Performance in Cellular Wireless Networks,” ACM Wireless
Networks, vol. 1, no. 4, 1995.
[40] A. V. Bakre and B. R. Badrinath, “Implementation and performance eval-
uation of Indirect TCP,” IEEE Transactions on Computers, vol. 46, no. 3, pp.
260–278, 1997.
[41] R. Yavatkar and N. Bhagawat, “Improving End to End Performance of
TCP over Mobile Internetworks,” IEEE Workshop on Mobile Computing Sys-
tems and Applications, 1994.
[42] K. Brown and S. Singh, “M-TCP: TCP for Mobile Cellular Networks,”
ACM SIGCOMM Computer Communication Review, vol. 27, no. 5, pp. 19–42,
October 1997.
[43] K.-Y. Wang and S. K. Tripathi, “Mobile-End Transport Protocol: An Alter-
native to TCP/IP over Wireless Links,” Proceedings IEEE INFOCOM, 1998.
[44] S. Biaz, M. Mehta, S. West, and N. H. Vaidya, “TCP over Wireless Net-
works Using Multiple Acknowledgements,” Texas A&M University, Tech.
Rep. 97-001, 1997.
[45] S. Goel and D. Sanghi, “Improving TCP Performance Over Wireless
Links,” in Proceedings of IEEE TENCON, 1998.
[46] R. Caceres and L. Iftode, “Improving the Performance of Reliable Trans-
port Protocols in Mobile Computing Environments,” IEEE Journal of Se-
lected Areas in Communications, vol. 13, no. 5, June 1995.
[47] N. Vaidya, M. Mehta, C. Perkins, and G. Montenegro, “Delayed
Duplicate-Acknowledgements: A Proposal to Improve Performance of
TCP on Wireless Links,” Texas A&M University, Tech. Rep. 99-003, 1999.
[48] “The network simulator ns-2,” http://www.isi.edu/nsnam/ns/.
[49] “EURANE,” http://www.ti-wmc.nl/eurane/.
[50] 3GPP, “TS 25.308 Technical Specification Group Radio Access Network;
High Speed Downlink Packet Access (HSDPA); Overall description,” De-
cember 2004, v5.7.
[51] I. de Bruin, G. Heijenk, M. E. Zarki, and J. L. Zan, “Fair channel-dependent
scheduling in CDMA systems,” Proceedings IST Mobile & Wireless Commu-
nications Summit, pp. 737–741, June 2003.
[52] N. Whillans, SEACORN. End-to-end network model for Enhanced UMTS, Oc-
tober 2003, http://www.ti-wmc.nl/eurane/D32v2Seacorn.pdf.gz.
[53] W. C. Jakes, Microwave mobile communications. Wiley, 1974.
[54] Y.-S. Kim, “VoIP Service on HSDPA in Mixed Traffic Scenario,” Proceed-
ings of the sixth IEEE International Conference on Computer and Information
Technology, 2006.
[55] F. Xin and A. Jamalipour, “TCP throughput and fairness performance
in presence of delay spikes in wireless networks,” International Journal of
Communication Systems, 15 March 2005.