RD Network Services:

Differentiation through Performance Incentives∗

Maxim Podlesny

[email protected]

Sergey Gorinsky

[email protected]

Applied Research Laboratory, Department of Computer Science and Engineering

Washington University in St. Louis, One Brookings Drive, Campus Box 1045

St. Louis, Missouri 63130-4899, USA

ABSTRACT

With the Internet offering a single best-effort service, there have been numerous proposals of diversified network services that align better with the divergent needs of different distributed applications. The failure of these innovative architectures to gain wide deployment is primarily due to economic and legacy issues, rather than technical shortcomings. We propose a new paradigm for network service differentiation where design principles account explicitly for the multiplicity of Internet service providers and users as well as their economic interests in environments with partly deployed new services. Our key idea is to base the service differentiation on performance itself, rather than price. The proposed RD (Rate-Delay) services enable a user to choose between a higher transmission rate or low queuing delay at a congested network link. An RD router supports the two services by maintaining two queues per output link and achieves the intended rate-delay differentiation through simple link scheduling and dynamic buffer sizing. After analytically deriving specific rules for RD router operation, we conduct extensive simulations that confirm the effectiveness of the RD services geared for incremental deployment in the Internet.

Categories and Subject Descriptors

C.2.6 [Computer-Communication Networks]: Internetworking; C.2.1 [Computer-Communication Networks]: Network Architecture and Design

General Terms

Algorithms, Design, Economics, Performance

∗This work was sponsored in part by U.S. National Science Foundation (NSF) grants CNS-0626661 and REC-0632580.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SIGCOMM'08, August 17–22, 2008, Seattle, Washington, USA.
Copyright 2008 ACM 978-1-60558-175-0/08/08 ...$5.00.

Keywords

Service Differentiation, Transmission Rate, Queuing Delay, Incremental Deployment, Legacy Traffic, Legacy Infrastructure, Performance Incentive

1. INTRODUCTION

The mismatch between the single best-effort service of the current Internet and the diverse communication needs of different distributed applications has led to numerous proposals of alternative architectures with diversified network services. A prominent representative of the architectural innovations, IntServ (Integrated Services) [4] offers users a rich choice of services, including end-to-end rate and delay guarantees provided to packet flows by means of admission control and link scheduling mechanisms such as WFQ (Weighted Fair Queuing) [9] or EDF (Earliest Deadline First) [10]. While IntServ failed to gain ubiquitous adoption, early IntServ retrospectives attributed the failure to the complexity of supporting the per-flow performance guarantees, especially in busy backbone routers. The proposal of DiffServ (Differentiated Services) [3] addresses the scalability concerns by restricting complex operations to the Internet edges and offering just a few services at the granularity of traffic classes, rather than individual flows. Despite the technically simpler design, DiffServ also failed to deploy widely.

The IntServ and DiffServ experiences reveal that technical merits of an innovative architecture are neither the only nor the most important factor in determining its success. Economic and legacy issues become a crucial consideration because the Internet of today is a loose confederation of infrastructures owned by numerous commercial entities, governments, and private individuals [7]. The multiplicity of the independent stakeholders and their economic interests implies that partial deployment of a new service is an unavoidable and potentially long-term condition. Hence, a successful architecture should provide ISPs (Internet Service Providers) and users with incentives to adopt the new service despite the partial deployment.

In this paper, we investigate a novel paradigm for network service differentiation that makes deployability the primary design concern. We explicitly postulate that partial deployment is unavoidable and that the new design should be attractive for early adopters even if other ISPs or users refuse to espouse the innovation. In addition, we demand that the benefits of the service diversification should not come at the expense of legacy traffic. The imposed constraints are potent. In particular, they imply that the new architecture may not assume even for the Internet edges that most ISPs will support admission control, traffic shaping, metering, billing, or any other mechanism added by the architecture.

The above design principles lead us to the key idea of making performance itself an incentive for network service differentiation. While prior studies have established a fundamental trade-off between link utilization and queuing delay [22, 33], the Internet practice favors full utilization of bottleneck links at the price of high queuing delay. Unfortunately, delay-sensitive applications suffer dearly from the long queues created by throughput-greedy applications at shared bottleneck links. Our proposal of RD (Rate-Delay) services resolves this tension by offering two classes of service: an R (Rate) service puts an emphasis on a high transmission rate, and a D (Delay) service supports low queuing delay. Each of the services is neither better nor worse per se but is simply different, and its relative utility for a user is determined by whether the user's application favors a high rate or low delay. Hence, the RD architecture provides the user with an incentive and complete freedom to select the service class that is most appropriate for the application. The user chooses the R or D service by marking a bit in the headers of transmitted packets.

We view the interest of users in the D service as an indirect but powerful incentive for ISPs to adopt the RD services. By switching to the RD architecture, an ISP attracts additional customers and thereby increases revenue. We also envision an RD certification program championed by early adopters. The RD certification will serve as a catalyst for viral deployment of the RD architecture because being RD-certified will give an ISP a differentiation advantage over legacy ISPs when competing with them for users and provider peering agreements.

To support the RD services on an output link, the router maintains two FIFO (First-In First-Out) queues and achieves the intended rate-delay differentiation through simple link scheduling and dynamic buffer sizing. The simplicity makes the RD design amenable to easy implementation even at high-capacity links. RD routers treat legacy traffic as belonging to the R class. After analytically deriving algorithms for RD router operation, we report extensive simulation results that confirm the effectiveness of the RD services and their fitness for incremental deployment in the Internet.

Both services of the proposed RD architecture are still best-effort and do not promise any rate or loss guarantees. The proposal modifies forwarding but not routing. Although the RD services provide users and ISPs with incentives to adopt the services, the architecture does not eliminate most security problems of the Internet, and a malicious ISP can disrupt the rate and delay characteristics of transit RD traffic. While security is not the main focus of this study, we believe that the RD services do not introduce any fundamentally new vulnerabilities. For example, although a user can mark some packets as R-class and other packets as D-class to increase throughput, such behavior is essentially the same as the well-known Internet technique of running multiple flows in parallel. Moreover, the two-queue RD design alleviates some existing threats, e.g., if a D flow transmits excessively to create heavy losses for other flows at the shared bottleneck link, the RD router limits the damage from the denial-of-service attack to the D class and preserves the high transmission rates of concurrent legacy and R flows. Nevertheless, new behavioral patterns induced by the RD architecture and their security aspects clearly deserve a thorough separate investigation. It is possible that design for incremental deployment is intrinsically less robust, and some security concerns in such architectures have to be addressed legally rather than through purely technical means.

The rest of the paper has the following structure. Section 2 clarifies our design principles. Section 3 outlines the conceptual framework of the RD services. Section 4 lays analytical foundations for RD router operation. Section 5 presents details of our design. Section 6 reports the extensive performance evaluation. Section 7 discusses related work. Section 8 suggests directions for future work. Finally, Section 9 concludes the paper with a summary of its contributions.

2. MODEL AND PRINCIPLES

We model the Internet as an interconnection of network domains owned and operated by numerous independent ISPs. ISPs generate revenue by selling network services to their direct customers. Users are the customers whose applications run at end hosts and send flows of packets over the Internet. In general, a network path that connects the end hosts of a distributed application traverses an infrastructure that belongs to multiple ISPs.

While different applications have different communication needs, the single best-effort service of the current Internet matches the interests of the users imperfectly. In response to this tension, various architectures with diversified network services have been proposed. Although technically brilliant, even the best of the proposals failed to gain wide deployment. We attribute the failures to ignoring the serious economic challenges of deploying a new service in a confederated infrastructure governed by numerous independent stakeholders. Instead of treating the deployment as an afterthought, we base our design on principles that explicitly acknowledge the multiplicity of Internet parties and their economic rationale in deciding whether to adopt new services.

First, we explicitly recognize that partial deployment is an unavoidable and potentially long-term condition for any newly adopted service. Hence, the new design should be attractive for early adopters even if other ISPs or users refuse to embrace the innovation:

Principle 1. A new service should incorporate incentives for both ISPs and end users to adopt the service despite the continued presence of legacy traffic or other ISPs that do not espouse the new service.

The above principle has a more specific but nevertheless important implication that the new design should not worsen the service provided to legacy Internet users. Doing otherwise is against the economic interests of ISPs due to the danger of losing a large number of current customers who keep communicating via legacy technologies. This consideration leads us to the following principle:

Principle 2. Adoption of a new service should not penalize legacy traffic.

3. CONCEPTUAL DESIGN

Below, we apply the principles from Section 2 to derive a conceptual design for Rate-Delay (RD) services, our solution to the problem of network service differentiation. As the name reflects, the RD services enable a user to choose between a higher transmission rate or low queuing delay at a congested network link.

Our Principle 1 prescribes providing both end users and ISPs with incentives for early adoption of the RD services. The constraint of the partial deployment excludes the common approach of pricing and billing, e.g., because a user should be able to opt for the RD services despite accessing the Internet through a legacy ISP that provides no billing or any other support for service differentiation. With direct financial incentives not being an option, our key idea is to make the performance itself a cornerstone of the service differentiation. While the performance is subject to a fundamental trade-off between link utilization and queuing delay [22, 33], different applications desire different resolutions to the tension between the two components of the performance. Hence, the RD services consist of two classes:

• an R (Rate) service puts an emphasis on a high transmission rate;

• a D (Delay) service supports low queuing delay.

Each of the two services is neither better nor worse per se but is merely different, and its relative utility for a user is determined by whether the user's application favors a high rate or low delay. Since the network services are aligned with the application needs, each user receives an incentive to select the service of the most appropriate type, and the RD service architecture empowers the user to make this selection by marking the headers of transmitted packets.

An ISP finds the RD services attractive due to the potential to boost revenue by adding customers who are interested in the D service. We envisage an RD certification program championed by a nucleus of early adopters. The RD certification will serve as a catalyst for viral deployment of the RD architecture because being RD-certified will give an ISP a differentiation advantage over legacy ISPs when competing with them for users and provider peering agreements.

To support the RD services on an output link, the router maintains two queues for packets destined to the link. We refer to the queues as an R queue and a D queue. Depending on whether an incoming packet is marked for the R or D service, the router appends the packet to the R or D queue respectively. The packets within each queue are served in FIFO (First-In First-Out) order. Whenever there is data queued for transmission, the router keeps the link busy, i.e., the RD services are work-conserving.

By deciding whether the next packet is transmitted from the R or D queue, the router realizes the intended rate differentiation between the R and D services. In particular, the link capacity is allocated to maintain a rate ratio of

k = r_R / r_D > 1    (1)

where r_R and r_D refer to per-flow forwarding rates for packet flows from the R and D class respectively.

The router supports the desired delay differentiation between the R and D services through buffer sizing for the R and D queues. As is common in current Internet routers, the size of the R buffer is chosen large enough that the oscillating transmission of TCP (Transmission Control Protocol) [25] and other legacy end-to-end congestion control protocols utilizes the available link rate fully. The D buffer is configured to a much smaller dynamic size to ensure that the queuing delay for each forwarded packet of the D class is low and at most d. The assurance of a low maximum queuing delay is attractive for delay-sensitive applications and easily verifiable by outside parties. An interesting direction for future studies is an alternative design for the D service where queuing delay stays low on average but is allowed to spike occasionally in order to support a smaller loss rate.

In agreement with our overall design philosophy, parameters k and d are independently determined by the ISP that owns the router. The ISP uses the parameters as explicit levers over the provided RD services. Our subsequent experimental study suggests values for parameters k and d.

As per our Principle 2, adoption of the RD services by an ISP should not penalize traffic from legacy end hosts. While the R service and the legacy Internet service are similar in putting the emphasis on a high transmission rate rather than low queuing delay, legacy traffic and any other packets that do not explicitly identify themselves as belonging to the D class are treated by an RD router as belonging to the R class, i.e., the router diverts such traffic into the R queue. Since the flows that opt for the D service acquire the low queuing delay by releasing some fraction of the link capacity, the adopters of the D service also benefit legacy flows by enabling them to communicate at higher rates.

Due to the potentially partial deployment of the RD services, R and D flows might be bottlenecked at a link belonging to a legacy ISP. Furthermore, the R and D flows might share the bottleneck link with legacy traffic. This has an important design implication that end-to-end transmission control protocols for the R and D services have to be compatible with TCP. Our paper reports experiments with TCP NewReno [15], Paced TCP [1], and TFRC (TCP-Friendly Rate Control) [14] as end-to-end transport protocols for D flows. While losses at the smaller D buffer are expectedly higher, a separate investigation is needed to clarify how much the D service can benefit from new TCP-compatible transport protocols that address the higher losses by employing alternative mechanisms for congestion control or reliability [5, 18, 21, 29].

4. ANALYTICAL FOUNDATION

While Section 3 outlined the conceptual design of the RD services, we now present an analytical foundation for our specific implementation of RD routers.

4.1 Notation and assumptions

Consider an output link of an RD router. Let C denote the link capacity and n be the number of flows traversing the link. We use n_R and n_D to represent the number of flows from the R and D class respectively. Since the router treats legacy traffic as belonging to the R class, we have

n_R + n_D = n.    (2)

For analytical purposes, we assume that both R and D queues are continuously backlogged and hence

R_R + R_D = C    (3)

where R_R and R_D refer to the service rates for the R and D queues respectively. Also, our analysis assumes that every flow within each class transmits at its respective fair rate, r_R or r_D:

R_R = n_R·r_R  and  R_D = n_D·r_D.    (4)


Variable   Semantics
x          class of the service, R or D
n_x        number of flows from the x class
B_x        buffer allocation for the x queue
q_x        size of the x queue
L_x        amount of data transmitted from the x queue since the last reset of L_x
p          packet
t_p        arrival time of p
S          packet size

Figure 1: Internal variables of the RD router algorithms in Figures 3, 4, and 5.

Parameter   Semantics
k           ratio of per-flow rates for R and D flows
d           upper limit on queuing delay of D packets
b           timestamp vector size
T           update period
E           flow expiration period

Figure 2: Parameters of the RD router algorithms.

Our experiments with dynamic realistic traffic, including many short-lived flows, confirm that the above assumptions do not undermine the intended effectiveness of the RD services in practice.

We denote the sizes of the R and D queues as q_R and q_D respectively, and the buffer allocations for the queues as B_R and B_D respectively. If the corresponding buffer does not have enough free space for an arriving packet, the router discards the packet.

4.2 Sizing and serving the R and D queues

Combining Equations 1, 3, and 4, we determine that the service rates for the R and D queues should be respectively equal to

R_R = k·n_R·C / (n_D + k·n_R)  and  R_D = n_D·C / (n_D + k·n_R).    (5)

To ensure that queuing delay for any packet forwarded from the D queue does not exceed d, the buffer allocation for the queue should be bounded from above as follows:

B_D ≤ R_D·d.    (6)

Taking the second of Equations 5 into account, we establish the following buffer allocation for the D queue:

B_D = n_D·C·d / (n_D + k·n_R).    (7)

In practice, we expect B_D to be much smaller than the overall buffer B that the router has for the link. Manufacturers equip current Internet routers with substantial memory so that router operators can configure the link buffer to a high value B_max, chosen to support throughput-greedy TCP traffic effectively [37]. Thus, we recommend allocating the buffer for the R queue as the smaller of B − B_D and B_max (and expect B_max to be the common setting in practice):

B_R = min{B_max; B − n_D·C·d / (n_D + k·n_R)}.    (8)
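For concreteness, Equations 5, 7, and 8 can be evaluated together. The Python sketch below (the function name and argument units are ours, not from the paper's router code) computes the service rates and buffer allocations from the link parameters:

```python
def rd_link_shares(C, n_R, n_D, k, d, B, B_max):
    """Compute RD service rates and buffer allocations.

    Hypothetical helper illustrating Equations 5, 7, and 8:
    C     -- link capacity (bits/s)
    n_R   -- number of R (and legacy) flows
    n_D   -- number of D flows
    k     -- target per-flow rate ratio r_R / r_D (> 1)
    d     -- queuing-delay bound for D packets (s)
    B     -- total buffer available for the link (bits)
    B_max -- operator-configured cap on the R buffer (bits)
    """
    denom = n_D + k * n_R
    R_R = k * n_R * C / denom      # Equation 5, R-queue service rate
    R_D = n_D * C / denom          # Equation 5, D-queue service rate
    B_D = R_D * d                  # Equation 7 (equals n_D*C*d / denom)
    B_R = min(B_max, B - B_D)      # Equation 8
    return R_R, R_D, B_D, B_R
```

For example, with C = 100 Mb/s, k = 2, n_R = 3, and n_D = 4, the R queue receives 60 Mb/s and the D queue 40 Mb/s, so each R flow obtains 20 Mb/s, twice the 10 Mb/s of each D flow, as Equation 1 requires.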

p ← received packet
x ← class of p
S ← size of p
if q_x + S ≤ B_x
    append p to the tail of the x queue
    q_x ← q_x + S
    if x = D
        t_p ← current time
else
    discard p

Figure 3: Router operation upon receiving a packet destined to the RD link.
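The enqueue step of Figure 3 translates almost line for line into code. Below is a minimal Python sketch; the class and method names are illustrative, not part of the paper's router implementation:

```python
import collections
import time

class RDQueue:
    """Minimal sketch of the Figure 3 enqueue logic (illustrative names)."""

    def __init__(self, B_R, B_D):
        self.q = {'R': collections.deque(), 'D': collections.deque()}
        self.size = {'R': 0, 'D': 0}      # q_x: bytes queued per class
        self.B = {'R': B_R, 'D': B_D}     # B_x: buffer allocation per class
        self.arrival = {}                 # t_p, kept for D packets only

    def enqueue(self, packet, cls, length):
        # Tail drop: discard if the class buffer lacks space for the packet.
        if self.size[cls] + length > self.B[cls]:
            return False                  # packet discarded
        self.q[cls].append((packet, length))
        self.size[cls] += length
        if cls == 'D':
            # Remember the arrival time to enforce the delay bound d later.
            self.arrival[packet] = time.monotonic()
        return True
```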

5. DESIGN DETAILS

5.1 End hosts

As per our discussion at the end of Section 3, the RD services restrict end-to-end transmission control protocols to being compatible with TCP. The only extra support required from end hosts is the ability to mark a transmitted packet as belonging to the D class. We implement this requirement by employing the currently unused bit 7 in the TOS (Type of Service) field of the IP (Internet Protocol) datagram header [32]. To choose the D service, the bit is set to 1. The default value of 0 corresponds to the R service. Thus, the RD services preserve the IP datagram format.
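As an illustration, an application on a typical end host could request the D service on a UDP socket by setting the TOS byte so that bit 7 (the least significant bit in RFC 791 numbering) equals 1. The paper does not prescribe a particular API; this sketch assumes the standard `IP_TOS` socket option:

```python
import socket

# TOS byte with bit 7 set to 1 selects the D service; the default value 0
# corresponds to the R service (sketch, assuming the IP_TOS socket option).
D_CLASS_TOS = 0x01

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, D_CLASS_TOS)
# Subsequent s.sendto(...) calls now carry TOS = 0x01, i.e., D-class packets.
s.close()
```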

5.2 Routers

The main challenge in transforming the analytical insights of Section 4 into specific algorithms for RD router operation lies in the dynamic nature of Internet traffic. In particular, while Equations 5, 7, and 8 depend on n_R and n_D, the numbers of R and D flows change over time. Hence, the RD router periodically updates its values of n_R and n_D. Sections 5.2.1, 5.2.2, and 5.2.3 describe our algorithms for processing a packet arrival, serving the queues, and updating the algorithmic variables at the RD router respectively. Figure 1 summarizes the internal variables of the algorithms. In addition to the internal variables, a number of parameters characterize the RD router operation. Figure 2 sums up these parameters.

5.2.1 Handling a packet arrival

Figure 3 presents our simple algorithm for dealing with packet arrivals. When the router receives a packet destined to the link, the router examines the seventh TOS bit in the packet header to determine whether the packet belongs to the R or D class. If the corresponding buffer is already full, the router discards the packet. Otherwise, the router appends the packet to the tail of the corresponding queue. In addition, if the enqueued packet belongs to the D class, the router remembers the arrival time of the packet until the packet reaches the head of the queue. Since the D buffer is typically small, storing the arrival times does not require significant memory.

5.2.2 Serving the R and D queues

The arrival times of enqueued D packets are used by the algorithm that serves the queues. The algorithm uses the times to ensure that queuing delay of forwarded D packets does not exceed the upper bound d. More specifically, if the packet at the head of the D queue has been queued for longer than d, the router discards the packet. The situation might arise due to the dynamic nature of Internet traffic: since the population of flows changes, the service rate for the D queue might decrease after the packet arrives. Our initial version of this algorithm did not include the last-moment enforcement of the queuing-delay constraint. Experimental results for the initial version were similar to those reported in Section 6: while the loss rates did not differ much, the maximum observed queuing delay exceeded d by about 0.5 ms. It remains to be seen whether the strict enforcement of the queuing-delay constraint is worth the price of tracking the arrival times of the enqueued D packets.

/* select the queue to transmit from */
if q_R > 0 and q_D > 0
    if k·n_R·L_D > n_D·L_R
        x ← R
    else
        x ← D
else  /* exactly one of the R and D buffers is empty */
    x ← class of the non-empty buffer
p ← first packet in the x queue
S ← size of p
if x = D
    /* enforce the delay constraint of the D service */
    while current time − t_p > d and q_D > 0
        discard p
        q_D ← q_D − S
        p ← first packet in the D queue
        S ← size of p
if p ≠ null
    /* update the L variables */
    if q_R > 0 and q_D > 0
        L_x ← L_x + S
    else  /* one of the R and D buffers is empty */
        L_R ← 0; L_D ← 0
    transmit p into the link
    q_x ← q_x − S

Figure 4: Router operation when the RD link is idle and the link buffer is non-empty.

Figure 4 reports further details of the algorithm for serving the R and D queues. Since the RD services are work-conserving, the router transmits into the link whenever the link buffer is non-empty. Because the router can transmit at most one packet at a time, the intended split of link capacity C into service rates R_R and R_D can only be approximated. The router does so by:

• monitoring L_R and L_D, the amounts of data transmitted from the R and D queues respectively since the last reset of these variables;

• transmitting from the queue such that L_R/L_D approximates R_R/R_D = k·n_R/n_D most closely.

More specifically, when k·n_R·L_D > n_D·L_R, the router transmits from the R queue; otherwise, the router selects the D queue.
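The cross-multiplied comparison avoids division and fits in a few lines. The following Python sketch (a hypothetical helper, not the paper's implementation) returns which queue to serve next:

```python
def select_queue(q_R, q_D, L_R, L_D, k, n_R, n_D):
    """Pick the queue to transmit from (sketch of the Figure 4 rule).

    q_R, q_D -- current queue sizes; L_R, L_D -- bytes sent since the
    last reset of these counters. Returns 'R' or 'D'.
    """
    if q_R > 0 and q_D > 0:
        # Serve R when L_R/L_D lags the target ratio k*n_R/n_D, i.e.,
        # when k*n_R*L_D > n_D*L_R (cross-multiplied to avoid division).
        return 'R' if k * n_R * L_D > n_D * L_R else 'D'
    # Exactly one queue is backlogged: serve it (work conservation).
    return 'R' if q_R > 0 else 'D'
```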

We derived the above algorithm from the assumption that all flows within a class transmit at the same fair rate, r_R or r_D. While the assumption is clearly unrealistic, one specific problematic scenario occurs when the total transmission rate of the D flows is much less than n_D·r_D, the maximum service rate for the D queue. Then, a throughput-greedy flow has an incentive to mark its packets as D packets and thereby achieve a much higher forwarding rate than the one offered by the intended R service. Although this scenario has not surfaced in our extensive simulations, and the unintended selection of the D service by the throughput-greedy flow does not disrupt the D service, this issue deserves close consideration. Our future study will explore in detail the implications of the diversity in flow rates and user behaviors (including deliberate denial-of-service attacks) for the RD services.

update n_R and n_D as per Section 5.2.3
update B_R and B_D as per Section 5.2.3
L_R ← 0; L_D ← 0
if q_D > B_D
    discard all packets from the D queue
    q_D ← 0
else
    while q_R > B_R
        p ← last packet in the R queue
        S ← size of p
        discard p
        q_R ← q_R − S

Figure 5: Update of the RD algorithmic variables upon timeout.
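The flush-or-trim step of the Figure 5 timeout handler can be sketched as follows; queues hold (packet, length) pairs, and all names are illustrative:

```python
from collections import deque

def apply_buffer_update(R_queue, D_queue, q_R, q_D, B_R, B_D):
    """Enforce freshly computed buffer allocations (sketch of Figure 5).

    R_queue and D_queue hold (packet, length) pairs; q_R and q_D are the
    queued byte counts; B_R and B_D are the new allocations. Mirroring the
    pseudocode, one invocation either flushes D or trims R, not both.
    Returns the updated (q_R, q_D).
    """
    if q_D > B_D:
        # D exceeds its new allocation: flush it entirely, since the queued
        # packets can no longer be guaranteed to meet the delay bound d.
        D_queue.clear()
        q_D = 0
    else:
        # Trim the R queue from the tail until it fits the new allocation.
        while q_R > B_R:
            _pkt, length = R_queue.pop()
            q_R -= length
    return q_R, q_D
```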

5.2.3 Updating the algorithmic variables

Because n_R and n_D play important roles in the presented RD router algorithms, we compare two approaches to computing the numbers of flows: explicit notification from end hosts and independent inference by the router. Since our design principles allow the possibility that many users do not embrace the RD services, it is likely that the router serves many legacy flows and needs to do at least some implicit inference. Furthermore, since we favor solutions with minimal modification of the current infrastructure, the router in our RD implementation estimates n_R and n_D without any help from end hosts.

To estimate the numbers of flows, we apply the timestamp-vector algorithm [27] separately to the R and D classes. Our experiments confirm the excellent performance of the algorithm. Using a hash function, the algorithm maps each received packet into an element of an array called a timestamp vector. The timestamp vector accommodates b elements. The algorithm inspects the timestamp vector with period T and considers a flow inactive if the timestamp vector does not register any packets of the flow during the last period E. Following the guidelines in [12] and assuming E = 1 s, 10^5 active flows, and standard deviation ε = 0.05, we recommend b = 18,000 as the default setting for the timestamp vector size.
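The estimator can be sketched as follows. The hash function (CRC32) and the linear-counting style bias correction are illustrative assumptions rather than the exact construction of [27]:

```python
import math
from zlib import crc32

class TimestampVector:
    """Sketch of a timestamp-vector flow counter (after [27]).

    Each arriving packet hashes its flow identifier into a b-element
    vector and stamps the slot with the arrival time.  A slot counts
    as active if it was stamped within the last E seconds.  The hash
    and the collision correction here are illustrative assumptions.
    """

    def __init__(self, b=18000, E=1.0):
        self.b = b                        # vector size (paper default: 18,000)
        self.E = E                        # activity window E (paper default: 1 s)
        self.slots = [float("-inf")] * b

    def on_packet(self, flow_id, now):
        # Map the flow identifier to a slot and record the arrival time.
        idx = crc32(repr(flow_id).encode()) % self.b
        self.slots[idx] = now

    def estimate_flows(self, now):
        # Slots untouched during the last E seconds are stale; a
        # linear-counting correction compensates for hash collisions.
        stale = sum(1 for t in self.slots if now - t > self.E)
        if stale == 0:
            return self.b                 # vector saturated; estimate caps at b
        return round(self.b * math.log(self.b / stale))
```

Feeding packets of 100 distinct flows into the vector and inspecting it immediately yields an estimate close to 100, since collisions are rare at this occupancy.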

The RD router updates nR and nD with period T. At the same time, the router updates the buffer allocations for the R and D queues. Even if nR or nD is zero, the router allocates a non-zero buffer for each of the queues. Our experimental results suggest that the specific allocation split is not too important; in the reported experiments, we initialize the buffer


[Figure: fifteen time-series plots in three columns, one per protocol for the D flows (TCP NewReno, Paced TCP, TFRC); the rows plot utilization, queuing delay (ms), and loss (%) against time (s), with curves for both services, the R service, and the D service.]

Figure 6: Using TCP NewReno, Paced TCP, or TFRC for D flows: (a) bottleneck link utilization; (b) queuing delay for R packets; (c) queuing delay for D packets; (d) loss rate for R flows; (e) loss rate for D flows.

allocations to BD = 4Cd/(4 + k) and BR = min{Bmax; B − BD}, which correspond to the 1:4 ratio between the numbers of flows from the R and D classes. If both nR and nD are positive, the router updates the buffer allocations according to Equations 7 and 8.

The update of BR and BD can make one of them smaller than the corresponding queue size. Figure 5 describes how the router deals with this issue. If the updated BR is less than qR, the router discards packets from the tail of the R queue until qR becomes at most BR. The discards ensure that the D service receives the intended buffer allocation. If BD is decreased below qD, the router flushes all packets from the D queue. Emptying the D buffer assures that none of the packets will be queued for longer than d and thus need to be discarded after reaching the head of the queue. The longer queuing might occur otherwise because the decrease of BD also proportionally reduces the service rate for the D queue. Although the D buffer is typically small, discarding the burst of packets might affect the loss rate negatively and even be unnecessary, because it might still be possible to forward at least some of the discarded D packets in time despite the reduced service rate. While our experiments show acceptably low loss rates with this implementation of the algorithm, we will explore more subtle discard policies in our future work.
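The trimming step of Figure 5 can be sketched as follows; modeling each queue as a deque of packet sizes in bytes is an illustrative representation, not the router's actual data structure:

```python
from collections import deque

def trim_queues(r_queue, d_queue, BR, BD):
    """Sketch of the Figure 5 trimming step, run after the buffer
    allocations BR and BD (in bytes) are updated."""
    qD = sum(d_queue)
    if BD < qD:
        # Flushing the D queue guarantees that no surviving D packet
        # can wait longer than d at the reduced D service rate.
        d_queue.clear()
    else:
        # Trim the R queue from the tail until it fits its new buffer,
        # so that the D service receives its intended allocation.
        qR = sum(r_queue)
        while qR > BR:
            S = r_queue.pop()   # discard the most recently queued packet
            qR -= S
```

For example, shrinking BR to 2000 bytes against a 5000-byte R queue of 1000-byte packets drops three packets from the tail, while shrinking BD below the D queue size empties the D queue entirely.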

To select the update period T, we observe that reducing T increases the computational overhead. Also, the operation might become unstable unless T is much larger than d. However, with larger T, the design responds more slowly to changes in the network conditions. Our experiments show that T = 400 ms offers a reasonable trade-off between these factors.

6. PERFORMANCE EVALUATION

In this section, we evaluate the performance of the RD services through simulations using version 2.29 of ns-2 [30].


[Figure: two plots against flow arrival rate (Hz): utilization with R-service and D-service curves, and loss (%).]

Figure 7: Impact of the web-like traffic: (a) link utilization; (b) loss rate for D flows.

Unless explicitly stated otherwise, all flows employ TCP NewReno [15] and data packets of size 1 KB. Each link buffer is configured to B = Bmax = C · 250 ms, where C is the capacity of the link. Every experiment lasts 60 s and is repeated five times for each of the considered parameter settings. The default settings include k = 2, d = 10 ms, b = 18,000, T = 400 ms, E = 1 s, Tavg = 200 ms, and Tq = 10 ms, where Tavg refers to the averaging interval for the bottleneck link utilization and loss rate, and Tq denotes the averaging interval for queuing delay. We also average the utilization and loss rate over the whole experiment, excluding its first five seconds. While queuing delay for the D service is at most d by design, all our experiments confirm that the maximum delay of D packets satisfies and closely approximates this upper limit.

Section 6.1 evaluates the RD services in a wide variety of scenarios that include different transport protocols for D flows, both long-lived and short-lived traffic, diverse bottleneck link capacities, various settings for the delay constraint of the D service, Exponential and Pareto-distributed flow interarrival times, and sudden changes in the numbers of R and D flows. Section 6.2 continues the assessment in multi-ISP topologies and, in particular, examines whether the RD services are deployable despite the continued presence of legacy ISPs and without penalizing legacy traffic.

6.1 Basic properties

To understand the basic properties of the RD services, this section experiments in a traditional dumbbell topology where the core bottleneck and access links have capacities of 100 Mbps and 200 Mbps, respectively. The bottleneck link carries 100 R flows and 100 D flows in both directions and has a propagation delay of 50 ms. We choose propagation delays for the access links so that the propagation RTT (Round-Trip Time) for the flows is uniformly distributed between 104 ms and 300 ms.

6.1.1 Various transport protocols for D flows

While the RD services restrict end-to-end transmission control to being compatible with TCP, we illustrate how the RD design performs when the D flows employ TCP NewReno [15], Paced TCP [1, 8], or TFRC [14]. All flows stay throughout the experiment. With k = 2 and equal numbers of R and D flows, we expect the R and D services to utilize the bottleneck link capacity fully with the 2:1 ratio. Figure 6 mostly confirms this expectation and also plots queuing delay and loss rates for both services. For the R service, maximum queuing delay is about 375 ms, as expected for the link that allocates two thirds of its capacity C to the R flows and has the buffer sized to the product of C and 250 ms. Queuing delay for the D service fluctuates between 0 and d = 10 ms. Due to slower detection of congestion and

[Figure: two plots against bottleneck link capacity (Mbps): utilization with R-service and D-service curves, and loss (%).]

Figure 8: Scalability of the RD services with respect to the bottleneck link capacity: (a) link utilization; (b) loss rate for D flows.

[Figure: two plots against the delay constraint (ms): utilization with R-service and D-service curves, and loss (%).]

Figure 9: Sensitivity to the delay constraint of the D service: (a) link utilization; (b) loss rate for D flows.

higher loss synchronization, Paced TCP yields larger losses than TCP NewReno. Among the three evaluated protocols, TFRC supports the smallest loss rate and most balanced rate differentiation. These superior properties make TFRC an attractive option for transmission control of D flows.
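Both expected figures in this experiment follow from the weighted capacity split; reading the two-thirds allocation as a proportional split k·nR : nD (our inference from the text, not a quoted formula), a minimal check is:

```python
def service_split(C, k, nR, nD):
    """Split capacity C so that each R flow receives k times the rate
    of a D flow, i.e., aggregates in proportion k*nR : nD (assumed
    from the two-thirds figure cited in the text)."""
    rate_R = C * k * nR / (k * nR + nD)
    return rate_R, C - rate_R

C = 100e6                                       # 100-Mbps bottleneck
rate_R, rate_D = service_split(C, k=2, nR=100, nD=100)

# The R buffer is sized to C * 250 ms but drains at only 2C/3, so the
# maximum R queuing delay is 0.250 * C / rate_R = 375 ms.
max_delay_R = 0.250 * C / rate_R
```

With k = 2 and nR = nD = 100, this reproduces the 2:1 aggregate split and the 375-ms maximum R queuing delay reported above.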

6.1.2 Short-lived flows and their intensity

To see how short-lived flows affect the RD services, we enhance the traffic mix on the bottleneck link in this and the subsequent three experimental series with web-like flows from two sources: one source generates R flows, and the other transmits D flows. The sizes of the web-like flows are Pareto-distributed with an average of 30 packets and a shape index of 1.3. The flows arrive according to a Poisson process. In the experiments of this section, the average arrival rate varies from 1 Hz to 400 Hz. When the flows arrive more frequently, the traffic mix becomes burstier and imposes a higher load on the bottleneck link. As expected, these factors drive up the loss rate for the D service. Figure 7 reveals that despite the increasing losses, the RD services closely maintain the intended 2:1 per-flow rate ratio for the R and D flows.
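This workload can be reproduced as follows; the Pareto scale parameter is derived from the 30-packet mean and shape 1.3 (mean = shape·scale/(shape − 1)), and the seed and horizon are illustrative:

```python
import random

def weblike_flows(rate_hz, shape=1.3, mean_pkts=30.0, horizon_s=60.0, seed=1):
    """Generate (start_time, size_in_packets) pairs for one web-like
    source: Poisson arrivals at rate_hz, Pareto-distributed sizes."""
    rng = random.Random(seed)
    scale = mean_pkts * (shape - 1.0) / shape     # x_m ~ 6.92 packets
    t, flows = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)             # exponential interarrival
        if t > horizon_s:
            return flows
        # Inverse-CDF sampling of Pareto(shape, scale); 1 - U avoids
        # a zero argument since random() lies in [0, 1).
        size = scale / (1.0 - rng.random()) ** (1.0 / shape)
        flows.append((t, size))
```

At an average arrival rate of 50 Hz and a 60-s horizon, the source emits roughly 3,000 flows, every flow at least the scale (about 6.92 packets) long.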

6.1.3 Link capacity scalability

In this series of experiments, we vary the bottleneck link capacity from 1 Mbps to 1 Gbps while keeping the access link capacities twice as large. The average arrival rate for the web-like flows in this and the next section stays at 50 Hz. Figure 8 shows that the rates of the R and D flows deviate from the intended 2:1 ratio significantly only for the lowest examined capacities close to 1 Mbps. The deviation occurs due to the extremely small buffering available for D packets in those settings. In particular, satisfying the 10-ms delay constraint at the 1-Mbps bottleneck link reduces the D buffer to about one packet, and the minimal buffering causes heavy losses and effectively shuts down the D service. As the bottleneck link capacity grows, the loss rate for the D flows decreases exponentially.
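The one-packet figure follows directly from the delay constraint. Assuming the initial allocation BD = 4Cd/(4 + k) given in Section 5.2.3, the implied D buffer at various capacities is:

```python
def d_buffer_bytes(C_bps, d_s=0.010, k=2):
    """D buffer implied by the delay constraint, using the initial
    allocation BD = 4*C*d/(4 + k) from Section 5.2.3, in bytes."""
    return 4.0 * C_bps * d_s / (4 + k) / 8.0      # bits -> bytes

# At 1 Mbps the D buffer comes to roughly 833 bytes, i.e., less than
# one 1-KB packet, which explains the heavy D losses at the lowest
# capacities; at 100 Mbps it already holds dozens of packets.
assert d_buffer_bytes(1e6) < 1024
assert d_buffer_bytes(100e6) / 1024 > 50
```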


[Figure: nine time-series plots in three columns, one per average arrival rate (50 Hz, 100 Hz, 200 Hz); the rows plot utilization, delay (ms), and loss (%) against time (s), with curves for both services, the R service, and the D service.]

Figure 10: Performance of the RD services with the Pareto distribution for the interarrival times of the web-like flows: (a) link utilization, the legend of the rightmost graph applies to all three utilization graphs; (b) queuing delay for D packets; (c) loss rate for D flows.

[Figure: three time-series plots over 0-180 s: utilization (with curves for both services, the R service, and the D service), delay (ms), and loss (%).]

Figure 11: Reaction to sudden changes in the numbers of R and D flows: (a) link utilization; (b) queuing delay for D packets; (c) loss rate for D flows.

6.1.4 Sensitivity to the delay constraint

To examine the sensitivity of the RD services to d, we vary the delay constraint of the D service from 3 ms to 15 ms. Figure 9 demonstrates that the per-flow rate ratio for the R and D flows stays close to the intended 2:1. As d increases, the loss rate for the D service decreases from about 8% to about 5% due to the increasing size of the D buffer.

6.1.5 Heavy-tailed flow interarrival times

While Section 6.1.2 experiments with the web-like traffic where the flow interarrival times adhere to the Exponential distribution, we now modify that arrangement to the Pareto distribution with a shape index of 1.1. The only other traffic besides the web-like flows comes from 50 R flows and 50 D flows that traverse the reverse direction of the bottleneck link throughout the experiment. The access links for the web-like flows have capacity 1 Gbps. The Pareto interarrival times make the traffic bursty and highly dynamic. Figure 10 reflects the high dynamism of the R and D flow counts by showing the widely fluctuating utilization of the bottleneck link by either the R or D service. When the flows arrive at an average rate of 50 Hz, their average cumulative load is low, and they rarely congest the bottleneck link. An arrival rate of 100 Hz makes the congestion instances more frequent and intense. Increasing the average arrival rate to 200 Hz creates persistent overload of the bottleneck link. Together with the burstiness of the arrival process, the persistent overload causes heavy losses for the D service.

6.1.6 Sudden changes in the numbers of flows

To investigate how the RD services react to sudden changes in the numbers of R and D flows, we experiment with the following traffic. 100 R flows start at time 0. 50 D flows join them 20 s later. 50 additional D flows arrive at time 40 s and thereby equalize the flow counts for the two services at 100. At time 60 s, 80 D flows finish. 80 other D flows arrive at time 80 s. All R flows leave at time 100 s, but 20 new R flows start 40 s later. Finally, 80 extra R flows arrive at time 160 s and reestablish the parity in the numbers of R and D flows. Figure 11 shows that the RD design responds to the changes promptly and appropriately: reflecting the current ratio of the flow counts, the per-flow rate ratio for
