
Utility Accrual Real-Time Scheduling under Variable Cost Functions

Umut Balli, Haisang Wu, Member, IEEE, Binoy Ravindran, Senior Member, IEEE,

Jonathan Stephen Anderson, and E. Douglas Jensen, Member, IEEE

Abstract—We present a utility accrual real-time scheduling algorithm called CIC-VCUA for tasks whose execution times are functions

of their starting times (and, potentially, other factors). We model such variable execution times using variable cost functions (or VCFs).

The algorithm considers application activities that are subject to time/utility function time constraints, execution times described using

VCFs, and mutual exclusion constraints on concurrent sharing of non-CPU resources. We consider the twofold scheduling objective of

1) assuring that the maximum interval between any two consecutive, successful completions of job instances in an activity must not

exceed the activity period (an application-specific objective) and 2) maximizing the system’s total accrued utility while satisfying mutual

exclusion resource constraints. Since the scheduling problem is intractable, CIC-VCUA is a polynomial-time heuristic algorithm. The

algorithm statically computes worst-case task sojourn times, dynamically selects tasks for execution based on their potential utility

density, and completes tasks at specific times. We establish that CIC-VCUA achieves optimal timeliness during underloads, and tightly

upper bounds inter and intratask completion times. Our simulation experiments confirm the algorithm’s effectiveness and superiority.

Index Terms—Variable-cost functions, time/utility functions, utility accrual scheduling, real-time scheduling, overload scheduling,

dynamic scheduling, resource management, mutual exclusion.


1 INTRODUCTION

EMBEDDED real-time systems that are emerging in many domains, such as robotic systems in the space domain (e.g., NASA/JPL's Mars Rover [1]) and control systems in the defense domain (e.g., airborne trackers [2]), are fundamentally distinguished by the fact that they operate in environments with dynamically uncertain properties. These uncertainties include transient and sustained resource overloads due to context-dependent activity execution times and arbitrary activity arrival patterns. Nevertheless, such systems call for the strongest possible assurances on activity timeliness behavior. Another important distinguishing feature of these systems is their relatively long execution time magnitudes compared to traditional real-time systems—e.g., on the order of milliseconds to seconds or seconds to minutes.

When resource overloads occur, meeting time constraints

(for example, deadlines) of all application activities is

impossible as the demand exceeds the supply. The urgency

of an activity is typically orthogonal to the relative

importance of the activity—e.g., the most urgent activity

can be the least important, the most urgent can be the most

important, and vice versa. Hence, when overloads occur,

completing the most important activities irrespective of

activity urgency is often desirable. Thus, a clear distinction

has to be made between the urgency and the importance of

activities, during overloads. During underloads, such a

distinction need not be made because deadline-based

scheduling algorithms such as EDF are optimal for those

situations [3]—i.e., they can satisfy all deadlines.

Deadlines by themselves cannot express both urgency

and importance. Thus, we employ the abstraction of time/

utility functions (or TUFs) [4] that express the utility of

completing an application activity as a function of that

activity’s completion time. We specify a deadline as a

binary-valued, downward “step” shaped TUF; Fig. 1a

shows examples; a classical deadline has unit utility values

[1, 0]. Note that a TUF decouples importance and urgency

—i.e., urgency is measured as a deadline on the X-axis and

importance is denoted by utility on the Y-axis.

Many real-time systems also have activities that are

subject to nondeadline time constraints, such as those where

the utility attained for activity completion varies (e.g.,

decreases, increases) with completion time. This is in

contrast to general deadlines, where a positive utility is

attained for completing the activity anytime before the

deadline, after which zero or negative utility is attained.

Figs. 1b and 1c show examples of such time constraints

from two actual experimental applications in the defense

domain: 1) an Airborne Warning And Control System

(AWACS) built by The MITRE Corporation and The Open

Group (TOG) [2] and 2) a battle management (BM)/

command and control (C2) application built by General

Dynamics (GD) and Carnegie Mellon University (CMU) [5].

Details of these applications can be found in [2] and [5],

respectively; for brevity, they are omitted here.
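To make the step-TUF abstraction concrete, here is a minimal sketch (the function name and utility values are illustrative assumptions, not from the paper) of a binary-valued, downward "step" TUF with the classical unit utility values [1, 0]:

```python
def step_tuf(deadline, utility=1.0):
    """Return a downward "step" time/utility function: completing the
    activity at or before the deadline accrues full utility; completing
    it afterward accrues zero utility."""
    def u(completion_time):
        return utility if completion_time <= deadline else 0.0
    return u

# Urgency lives on the X-axis (the deadline); importance on the Y-axis (utility).
u = step_tuf(deadline=10.0)
assert u(8.0) == 1.0   # completed before the deadline: full utility
assert u(12.0) == 0.0  # completed after the deadline: no utility
```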

IEEE TRANSACTIONS ON COMPUTERS, VOL. 56, NO. 3, MARCH 2007 1

. U. Balli is with Openwave Systems, Inc., Redwood City, CA 94063. E-mail: [email protected].

. H. Wu is with Juniper Networks, Inc., Sunnyvale, CA 94089. E-mail: [email protected].

. B. Ravindran and J.S. Anderson are with the Real-Time Systems Laboratory, Electrical and Computer Engineering Department, Virginia Tech, Blacksburg, VA 24061. E-mail: {binoy, andersoj}@vt.edu.

. E.D. Jensen is with The MITRE Corporation, Bedford, MA 01730. E-mail: [email protected].

Manuscript received 25 Nov. 2005; revised 20 June 2006; accepted 6 Sept. 2006; published online 22 Jan. 2007. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TC-0418-1105.

0018-9340/07/$25.00 © 2007 IEEE Published by the IEEE Computer Society

When activity time constraints are specified using TUFs, which subsume deadlines, the scheduling criterion is based on accrued utility, such as maximizing the sum of the activities' attained utilities. We call such criteria utility accrual (or UA) criteria and call scheduling algorithms that optimize them UA scheduling algorithms.

UA algorithms that maximize summed utility under downward step TUFs (or deadlines) [6], [7], [8] default to EDF during underloads since EDF can satisfy all deadlines during those situations. Consequently, they obtain the maximum possible accrued utility during underloads. When overloads occur, UA algorithms favor activities that are more important (since more utility can be attained from them), irrespective of their urgency. Thus, UA algorithms' timeliness behavior subsumes the optimal timeliness behavior of deadline scheduling.

In this paper, we focus on variable cost scheduling. In the context of this paper, "cost" means the duration of an activity, a term that comes from one of the interesting and important applications for such scheduling. The model presented requires scheduling activities consisting of sequences of jobs whose durations (e.g., execution times) vary depending on when they begin, on how long the parent activity has been running, or on other factors. For the algorithms presented, the varying cost is specified by a cost function that gives the job's duration as a function of its start time. Thus, even if there were no new activity arrivals, the load to be scheduled changes while the activities are being performed.

Previous efforts on deadline-based and UA scheduling do not consider variable cost scheduling. For example, previous UA scheduling algorithms [6], [7], [8] do not allow task execution times to vary while tasks are being performed. The imprecise computation [9] and IRIS (Increasing Reward with Increasing Service) [10] models include optional parts in addition to the mandatory parts of task execution times. However, these models differ from UA and variable cost scheduling because, in these models, the longer the optional part executes, the higher the reward becomes. In UA scheduling, on the other hand, utility (reward) is accrued by an activity only when it completes, and the utility value is decided by the completion time. Further, there are no optional parts in variable cost scheduling—task execution times contain only the mandatory parts, and they may vary while the tasks are being performed.

The task model of the TAFT scheduler [11] allows variable task execution times. In TAFT, a task is allowed to have a main part and an exception part (which is executed when the main part misses its time constraint).

The execution time of the main part is described as an

“expected-case execution time” (the execution time of the

exception part is described as a worst-case execution

time). The authors describe the expected-case execution

time of a task as a “measure for the time that a certain

percentage of instances of the task needs for a successful

completion.” This model is fundamentally different from

our cost function model, where the task execution time

depends upon when the task starts its execution (or other

factors). The execution time represented on our cost

function is a deterministic estimate, which is a function of

time. In contrast, TAFT’s expected-case execution time is

time-independent.

Thus, no previous efforts have studied the problem space

that intersects UA scheduling and variable cost scheduling. In

this paper, we precisely focus on this unexplored problem

space. We consider repeatedly occurring application activ-

ities whose time constraints are specified using TUFs. The

execution times of activities are described by cost functions,

which may vary as the activities are being performed.

Activities may concurrently, but mutually exclusively, share

non-CPU resources. For such a model, we consider a twofold

scheduling criterion: 1) assure that the maximum interval

between any two consecutive, successful completions of job

instances in an activity must not exceed the activity period (an

application-specific criterion) and 2) maximize the system’s

summed utility.

This problem is NP-hard. We present a polynomial-time

heuristic algorithm for the problem, called Completion

Interval Constrained Variable Cost Utility Accrual Algorithm

(or CIC-VCUA). We prove several timeliness properties of

the algorithm, including optimal timeliness during under-

loads and tight upper bounds on completion times between

tasks (i.e., activities) and between jobs of one task. Further,

we establish that the algorithm is deadlock-free and safe.

Finally, our experimental simulation studies confirm CIC-

VCUA's effectiveness and superiority.

Thus, the paper's contribution is the CIC-VCUA algorithm. To the best of our knowledge, no other efforts solve the problem solved by CIC-VCUA.

The rest of the paper is organized as follows: In Section 2,

we describe a motivating application for variable cost

scheduling; in Section 3, we outline our activity and

timeliness models and state the scheduling criterion. We

present CIC-VCUA in Section 4 and establish the algor-

ithm’s properties in Section 5. The experimental measure-

ments are reported in Section 6. Finally, we conclude the

paper and identify future work in Section 7.


Fig. 1. Example TUF time constraints. (a) Some step TUFs. (b) MITRE/TOG AWACS TUF [2]. (c) GD/CMU BM/C2 TUFs [5].

2 MOTIVATING APPLICATION

One application context of interest to us for variable cost scheduling is an air-to-air radar tracking problem for which no scheduling algorithms and performance assurances have been publicly available. To motivate the work in this paper, we simplify and omit some characteristics of the tracking problem to expedite the creation of an initial plausible scheduling approach that can be generalized in subsequent work. This problem is representative of a large class of related variable-cost scheduling problems which arise in sensor systems with both sensor collection and data processing times that are strongly dependent on the physical geometry of the sensor and target observation area.

This type of tracking problem employs an Active, Electronically Steerable Array (AESA) radar to provide an end-to-end tracking service supporting the evolution of a track through the phases described below. Examples of such systems include the radar systems installed in certain United States and European tactical aircraft and naval surface craft [12]. An AESA radar uses an array of antennas to form a single virtual "beam" by varying the power, transmission frequencies, and sampling rates across the individual antenna elements. Consequently, the time required to deliver or measure a given amount of power in a particular direction (the pulse width) is a function of the relative geometry of the antenna, the desired beam shape and signal-to-noise ratio, and the relative geometry of the antenna and target.

Generally, a single task executed by such a radar consists of a coherent ensemble of "dwells," which are pulses of energy. Each such task is strongly a function of geometry and the desired quality of the return signal to be collected. For instance, the number of individual dwells transmitted in an air tracking task dictates the amount of ambiguity in the resulting range and range-rate measurements. This ambiguity is also a function of the actual range and range-rate of the target and, thus, the time required to achieve a consistently accurate collection varies as the relative geometry of the radar and the target varies.

Some common civilian applications of AESA radars, such as the collection of synthetic aperture radar (SAR) imagery [13] for agricultural and forestry use and scientific research, exploit a train of these tasks, scheduled coherently, over macroscopic timescales (10^1 to 10^2 seconds). The total end-to-end time required for such an extended task is again a function of the system geometry and, thus, varies in sojourn and execution time. A task's sojourn time is the time between its arrival and completion.

The problem notionally consists of three component tasks¹: 1) searching a segment of the airspace to find any airborne moving objects (track initiation), 2) maintaining a track for each of those objects until some deadline time, and 3) identifying the object using characteristics of the return pulses. An example of identification is the Identification Friend or Foe (IFF) system, but many more complicated mechanisms exist.

Those three tasks for a given object nominally occur in that order, but identification can occur almost any time while tracking. For each of the three tasks, the radar must make one or more measurements by illuminating the object with dwells, then await any return echoes. For convenience, we denote the entire sequence of transmit-wait-receive as a dwell. Tasks for any object may be interleaved with any other task by interleaving dwells.

For the tracking tasks, the dwells occur at a revisit rate that is defined by the interval between two successive dwells—regular, but not necessarily periodic. The revisit rate for any particular object must be maintained for a long enough time to obtain acceptable values for certain application-level quality of service (AQoS) metrics.

One such critical AQoS metric is track quality [14], which is a measure of the error in our estimate of the given object's location and motion. Achieving any particular track quality value imposes a lower bound on the revisit rate of the object being tracked since the estimate error increases quickly in time after each measurement. Optimal and minimum revisit rates are defined by the probability of detecting the object with the next dwell—failure to meet a minimum revisit rate implies an increased chance of missing the object on the next dwell [15].

In this application, we associate with each task a cost function which specifies the required duration for a dwell as a function of time. This activity cost varies with many factors, including the type of dwell and the relative geometry of the sensor and target object. For instance, the number and duration of dwells required to search a segment of airspace depend on the relative positions of the radar platform, the scanned airspace, and the objects in that space. Depending on the relative motion of the radar platform and the object, it may be better either to procrastinate dwells (intentionally insert idle time in the radar schedule) or to perform dwells early.

The cost function for each task varies with each object's range and look angle (azimuth off the sensor's nominal boresight)—i.e., having the form f(r, θ). The cost function is derived from the particular tracking problem and an equation known as the radar equation [16]. The radar equation relates the measured energy received to the geometry of the object, the sensor, and the emitted energy.

Two examples of cost functions are shown in Fig. 2. Fig. 2a shows the cost function—the amount of time required on the radar front-end to collect sufficient quantity and quality data—for a target object flying at a higher altitude and faster velocity than the radar platform; Fig. 2b shows the cost function for a target object circling the radar platform at a constant range. In these cases, the cost achieves a minimum when the target object is along the sensor's boresight. Additionally, the cost increases as a polynomial function of the range (absolute distance) to the object. The specific cost functions can be derived in part from the physical properties of the transceiver and antennae.
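Purely as an illustration of the qualitative shape just described (the functional form, constants, and function name below are our assumptions, not derived from the radar equation or the paper), a cost f(r, θ) that is minimal along the boresight (θ = 0) and grows polynomially with range r might be sketched as:

```python
import math

def dwell_cost(r, theta, c0=1.0, k=1e-3, p=2):
    """Hypothetical dwell-cost function f(r, theta): minimal along the
    sensor's boresight (theta = 0), growing as a polynomial of degree p
    in the range r. Constants c0, k, and p are illustrative only."""
    return (c0 + k * r**p) / max(math.cos(theta), 1e-6)

# The cost rises with look angle off boresight and with range.
assert dwell_cost(100.0, 0.0) < dwell_cost(100.0, 0.7854)  # larger look angle
assert dwell_cost(100.0, 0.0) < dwell_cost(200.0, 0.0)     # larger range
```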

Note that radar dwell scheduling has been extensively studied by real-time researchers in the past—e.g., [17], [18], [19], [20], [21]. However, our work fundamentally differs from these works in the variable execution time model of the dwell jobs. These referenced works on dwell scheduling assume that the execution time of a radar dwell job is constant and time-independent, enforcing this by forcing all dwells to take a fixed time. In contrast, our work (based on the applications of interest to us) assumes that the execution


1. Hereafter, we use the terms task and activity interchangeably.

time of the dwell job depends on the time at which the job starts its execution and the distance between the radar and the target (besides other factors) and, consequently, varies with time (see Section 3.4).

3 MODELS AND OBJECTIVE

3.1 System and Task Model

We consider a preemptive system which consists of a set of periodic (dwell) tasks, denoted T = {T1, T2, ..., Tn}. Each task Ti contains a collection of instances or jobs. The period of a task Ti is denoted Pi. Dwells are not necessarily periodic, but, here, we assume this for simplicity. Each task has a begin time and an end time between which execution of all jobs of the task must be completed.

An instance of a task is called a job and we refer to the jth job of task Ti, which is also the jth invocation of Ti, as Ji,j. The basic scheduling entity that we consider is the job abstraction. Thus, we use J to denote a job without being task specific, as seen by the scheduler at any scheduling event; Jk can be used to represent a job in the scheduling queue.

3.2 Resource Model

Jobs can access non-CPU resources, which, in general, are serially reusable. Examples include physical resources, such as disks, and logical resources, such as locks. Similarly to the resource access models for fixed-priority scheduling [22] and those for UA scheduling [7], [23], we consider a single-unit resource model. Thus, only a single instance is present for each resource in the system and a job must explicitly specify the resource that it needs.

Resources can be shared and can be subject to mutual exclusion constraints. A job may request multiple shared resources during its lifetime. The requested time intervals for holding resources may be nested, overlapped, or disjoint. We assume that a job explicitly releases all granted resources before the end of its execution.

Jobs of different tasks can have precedence constraints. For example, a job Jk can become eligible for execution only after a job Jl has completed because Jk may require Jl's results. As in [7], [23], we program such precedences as resource dependencies.

3.3 Timeliness Model

A job's time constraint is specified using a TUF. Jobs of a task have the same TUF. We use Ui(·) to denote task Ti's TUF and use Ui,j(·) to denote the TUF of Ti's jth job. Without being task specific, Jk.U means the TUF of a job Jk; thus, completion of Jk at a time t will yield a utility Jk.U(t).

TUFs can be classified into unimodal and multimodal functions. Unimodal TUFs are those for which any decrease in utility cannot be followed by an increase. Examples are shown in Fig. 1. TUFs which are not unimodal are multimodal. In this paper, we focus on nonincreasing unimodal TUFs as they encompass the majority of the time constraints in our motivating applications. Figs. 1a and 1b and two TUFs in Fig. 1c show examples.

Each TUF Ui,j, i ∈ {1, ..., n}, has an initial time Ii,j and a termination time Xi,j. Initial and termination times are the earliest and the latest times for which the TUF is defined, respectively. We assume that Ii,j is equal to the arrival time of Ji,j. Further, the period Pi and the relative termination time Xi of the task Ti are both equal to Xi,j − Ii,j.

If a job's termination time is reached and its execution has not been completed, an exception is raised and the job is immediately aborted. Our abortion model follows that of [7], [23] and is based on the observation that, if time constraints are not satisfied, it is desirable to place the affected portions of the system and the physical process being controlled into acceptable operating states. Aborting activities provides an opportunity to perform the necessary transformations. Although our algorithm does not require that all activities be aborted (when their time constraints are not met), it is advantageous to exploit the fact that some can be.

3.4 Task Execution Time Model

As motivated in Section 2, after a job is released, its execution time may vary with time. Thus, we define a variable cost function (or VCF) for each job, which describes the job execution time as a function of its starting time. Jobs of a task have the same VCF, so a VCF is also defined for a task. We use Ci(·) to denote task Ti's VCF and Ci,j(·) to denote Ti's jth job's VCF.

For a job Ji,j, the x-axis of its VCF is the absolute time relative to the job's arrival time, the y-axis represents its


Fig. 2. Example cost functions. (a) Cost function (radar collection time) for a target flying at a higher altitude and faster velocity than the radar platform. (b) Cost function (radar collection time) for a target circling the radar platform at a constant range.

execution time Ci,j(t), and the origin shows the execution time of Ji,j when it is just released.

The cost functions of AESA applications can be increasing, decreasing, or strictly convex shaped. In this paper, we only consider unimodal VCFs. Fig. 3 shows examples of VCFs. For jobs whose execution times do not vary with time after their arrivals, we define a constant VCF. Fig. 3a shows a constant VCF. Fig. 3b and Fig. 3c show an increasing VCF and a decreasing VCF, respectively. From Figs. 3b and 3c, we can observe that job Ji,j's VCF starts from a nonzero value C0. We also assume that Ci,j(t) is bounded by another nonzero value, C1, which implies that Ci,j(t) = C1 for t ≥ tbnd.

Without being task specific, Jk.VCF or Jk.C means the VCF of a job Jk; the execution time of Jk at a time tcur will be Jk.C(tcur) = Jk.VCF(tcur). Hereafter, in the discussion of TUFs and VCFs, we interchangeably use the terms task and job if no confusion is raised.
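As a minimal sketch of the VCF model (the linear ramp shape and the parameter values are illustrative assumptions; the paper only requires a unimodal VCF starting at a nonzero C0 and bounded by C1 after tbnd), an increasing VCF could be modeled as:

```python
def increasing_vcf(c0, c1, t_bnd):
    """Increasing variable cost function: the job's execution time starts
    at c0 at release (t = 0, relative to the job's arrival), grows with
    the start time t, and is bounded by c1 for t >= t_bnd. The linear
    ramp between c0 and c1 is an illustrative shape only."""
    def c(t):
        if t >= t_bnd:
            return c1
        return c0 + (c1 - c0) * t / t_bnd
    return c

c = increasing_vcf(c0=2.0, c1=8.0, t_bnd=6.0)
assert c(0.0) == 2.0   # execution time if started at release
assert c(3.0) == 5.0   # halfway up the ramp
assert c(10.0) == 8.0  # bounded by C1 after t_bnd
```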

3.5 Scheduling Objective

A successful completion of a job means that the job has met its termination time. With this definition, we consider a twofold scheduling criterion: 1) assure that the maximum interval between any two consecutive, successful completions of jobs of a task must not exceed the task period and 2) maximize the system's summed utility. Furthermore, mutual exclusion constraints on all shared resources must be respected.

Note that, with VCFs, it is difficult to statically calculate the system load since it dynamically varies with time. For example, a load consisting of a constant number of tasks at a task arrival—one that is an underload—can gradually increase and eventually become an overload even without new task arrivals, due to increasing VCFs. Thus, if the dynamic system load is so high that scheduling objective 1) cannot be satisfied for each task, some tasks may be dropped and consequently aborted. In such cases, tasks that are not dropped are still subject to the two scheduling objectives and mutual exclusion constraints on all shared resources.
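Scheduling objective 1) can be checked mechanically against a trace of a task's completion times; a minimal sketch (function and parameter names are ours, not the paper's):

```python
def satisfies_completion_interval(completion_times, period):
    """Check scheduling objective 1): the interval between any two
    consecutive, successful completions of a task's jobs must not
    exceed the task's period. `completion_times` is assumed sorted
    in increasing order."""
    return all(later - earlier <= period
               for earlier, later in zip(completion_times, completion_times[1:]))

assert satisfies_completion_interval([0.0, 4.0, 9.0], period=5.0)
assert not satisfies_completion_interval([0.0, 4.0, 11.0], period=5.0)  # 7 > 5
```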

4 THE CIC-VCUA ALGORITHM

This section describes the CIC-VCUA algorithm. In Section 4.1, we first discuss the scheduling metric used by CIC-VCUA, the Potential Utility Density (or PUD). CIC-VCUA consists of two steps: static calculation (Sections 4.2 to 4.3) and the dynamic step (Sections 4.4 to 4.7).

In the static steps of CIC-VCUA, we first find the maximum possible execution time for each task based on its VCF. Then, we label each task as either selected or skipped, based on its PUD and contribution to the system load. For the selected tasks, the algorithm determines the worst-case sojourn time of each task and attempts to complete all jobs of the task at the same time relative to their arrivals.

After the static step, at each scheduling event, CIC-VCUA builds the dependency chain for each job in the ready queue and calculates its PUD. The algorithm then sorts them based on their PUDs, in a nonincreasing order. Next, the algorithm inserts the jobs into a tentative schedule in the order of their critical times (earliest critical time first), while respecting their resource dependencies and timeliness feasibilities. Finally, CIC-VCUA determines the job to execute, as well as the amount of time for which it will be executed, so as to make sure all jobs of a task have identical sojourn times.
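A heavily simplified sketch of this dynamic step follows; resource dependencies and the identical-sojourn-time mechanism are omitted, and the dict field names (`pud`, `critical_time`, `exec_time`) are our assumptions, not the paper's data structures:

```python
def dynamic_step(ready_jobs, t_cur):
    """Simplified sketch of CIC-VCUA's dynamic step: examine ready jobs in
    nonincreasing PUD order, and admit each into a tentative schedule kept
    in critical-time order (earliest first), but only if every job in the
    resulting schedule still completes by its critical time."""
    schedule = []
    for job in sorted(ready_jobs, key=lambda j: j['pud'], reverse=True):
        tentative = sorted(schedule + [job], key=lambda j: j['critical_time'])
        # Feasibility check: run the tentative schedule forward from t_cur.
        t, feasible = t_cur, True
        for j in tentative:
            t += j['exec_time']
            if t > j['critical_time']:
                feasible = False
                break
        if feasible:
            schedule = tentative
    return schedule
```

For example, with three jobs of PUDs 5, 3, and 1, the highest-PUD jobs are admitted first, and a low-PUD job that would make the schedule infeasible is rejected.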

Finally, in Section 4.8, we analyze the asymptotic time complexity of the algorithm.

4.1 Algorithm Rationale

The potential utility that can be accrued by executing a job defines a measure of its "return on investment." Because of the unpredictability of future events (e.g., during overloads), scheduling events that may happen later, such as job completions and new job arrivals, cannot be considered at the time when the scheduler is invoked. Thus, a reasonable heuristic is to favor "high return" jobs over "low return" jobs in the schedule. This increases the likelihood of maximizing the summed utility.

The metric used by CIC-VCUA to determine the return on investment for a job is called the PUD, which was originally developed in [7]. The PUD of a job measures the amount of utility that can be accrued per unit time by executing the job itself and the other job(s) that it depends upon (due to mutual exclusion constraints on resources held by the other jobs).

To compute job J_k's PUD at current time t_cur, CIC-VCUA considers J_k's expected completion time, denoted J_k.FinT, and the utility expected from executing J_k and its dependent jobs. For each job J_l that is in J_k's dependency chain and needs to be completed before executing J_k, J_l's expected completion time is denoted J_l.FinT. The PUD of J_k is then computed as:

$$\frac{J_k.U(J_k.FinT) + \sum_{J_l \in J_k\text{'s dependency chain}} J_l.U(J_l.FinT)}{J_k.FinT - t_{cur}}.$$
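As a concrete illustration, the PUD formula above can be sketched in Python. The `Job` record and its field names are our own for this sketch; the paper does not prescribe data structures, and a real TUF would vary with time rather than being a precomputed constant.

```python
class Job:
    """Minimal job record (hypothetical names, for illustration only)."""
    def __init__(self, fin_t, utility_at_fin):
        self.fin_t = fin_t         # expected completion time, J.FinT
        self._u = utility_at_fin   # TUF value at fin_t (precomputed here)

    def utility(self, t):
        # A real TUF depends on t; a constant stands in for this sketch.
        return self._u

def pud(job, dependency_chain, t_cur):
    """PUD = (U(J_k) + sum of U over J_k's dependents) / (J_k.FinT - t_cur)."""
    total = job.utility(job.fin_t)
    for dep in dependency_chain:   # jobs that must complete before J_k
        total += dep.utility(dep.fin_t)
    return total / (job.fin_t - t_cur)
```

A job completing at time 10 with utility 5, plus one dependent worth utility 3, accrues (5 + 3)/10 = 0.8 utility per unit time from t_cur = 0.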

4.2 Static Job Selection

We assume that if a job cannot complete before its termination time even if it is scheduled immediately, it is infeasible and can be safely aborted. The process of testing the feasibility of a job is described in Section 4.4.

BALLI ET AL.: UTILITY ACCRUAL REAL-TIME SCHEDULING UNDER VARIABLE COST FUNCTIONS 5

Fig. 3. Example variable cost functions for a job Ji;j. (a) Constant VCF. (b) Increasing VCF. (c) Decreasing VCF.

To test for feasibility, we have to find the maximum possible task execution times. Depending on the VCF shapes, the maximum C_{i,j} for each task can be calculated. For jobs with increasing VCFs, by solving the inequality C_{i,j}(t) + t <= X_{i,j}, we can derive the latest possible starting time t^b_{i,j} of job J_{i,j}, such that C_{i,j}(t^b_{i,j}) + t^b_{i,j} = X_{i,j}. C_{i,j}(t^b_{i,j}) corresponds to the maximum possible execution time of J_{i,j}, and C_i(t^b_i) describes this parameter at the task level. For jobs with nonincreasing VCFs, a job J_{i,j}'s maximum execution time is C_{i,j}(t^0_{i,j}).

Therefore, although a job's execution time changes with its starting time, it is possible for us to derive a system load bound, load_b, which will never be exceeded by the system's dynamic load. For increasing VCFs, we derive

$$load_b = \sum_{i=1}^{n} \frac{C_i(t^b_i)}{P_i};$$

for nonincreasing VCFs, we define

$$load_b = \sum_{i=1}^{n} \frac{C_i(t^0_i)}{P_i}.$$

If a constant VCF is defined for each task, then a task's execution time is constant and load_b here is the same as the system utilization definition in [24].

Since the system dynamic load may gradually increase even without new task arrivals, the task instances to be executed must be carefully selected in order to accrue more utility. Such a selection process is guided by the PUD metric. Toward this, we use a job selection flag, which labels each job as skipped or selected. The selection process considers the parameters of the task set, such as VCFs and TUFs. We associate with each job J_{i,j} a label SEL_{i,j}, where SEL_{i,j} = skipped indicates that the job is skipped and SEL_{i,j} = selected indicates that it is selected for execution. At runtime, only jobs whose labels are set to selected are dispatched. Thus, the problem becomes choosing the job labels toward optimizing our scheduling objective.
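The load-bound derivation above can be sketched as follows. This is a hedged illustration under our own assumptions: the VCF is passed as a plain callable, the latest feasible start time for an increasing VCF is found by bisection (valid because C(t) + t is monotone when C is increasing), and task records are simple dicts.

```python
def max_exec_time(vcf, release, termination, increasing):
    """Maximum possible execution time of a job under its VCF.

    For a nonincreasing VCF, the maximum is at the release time.
    For an increasing VCF, find the latest start t_b satisfying
    C(t_b) + t_b = X by bisection (C(t) + t is monotone increasing).
    """
    if not increasing:
        return vcf(release)
    lo, hi = release, termination
    for _ in range(60):                    # bisect to high precision
        mid = (lo + hi) / 2.0
        if vcf(mid) + mid <= termination:
            lo = mid
        else:
            hi = mid
    return vcf(lo)

def load_bound(tasks):
    """load_b = sum_i C_i(max) / P_i, never exceeded by the dynamic load."""
    return sum(t["c_max"] / t["period"] for t in tasks)
```

For the linearly increasing VCF C(t) = 0.1 t + 2 with termination time 10, bisection recovers t_b = 8/1.1 and a maximum execution time of about 2.73.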

We label jobs in a static and a dynamic fashion, based on the workload information used by the scheduler. In the offline (static) part of CIC-VCUA, we select task instances before the application starts. Initially, all tasks in T are labeled as skipped, i.e., SEL_{i,j} = skipped, for all i in {1, ..., n} and all j. At t_cur = t^0_i, assuming that tasks are independent of each other, we calculate the PUD of each task, which in value is also the PUD of each task's first job, i.e.,

$$PUD_i = \frac{U_{i,1}(C_{i,1}(t^0_i))}{C_{i,1}(t^0_i)}.$$

We also calculate the maximum possible execution time of each task (C_i(t^b_i) for increasing VCFs and C_i(t^0_i) for nonincreasing VCFs) and then choose the subtask set T'. T' consists of the n' tasks with the largest PUDs such that, for increasing VCFs,

$$load'_b = \sum_{i=1'}^{n'} \frac{C_i(t^b_i)}{P_i} \le 1,$$

and, for nonincreasing VCFs, $load'_b = \sum_{i=1'}^{n'} C_i(t^0_i)/P_i \le 1$, where n' is the maximum possible number of tasks to be selected. Note that if load_b <= 1, then n' = n. Thus, we favor tasks with larger PUDs and label the n' tasks in T' as selected, i.e., SEL_{i,j} = selected, for all i in {1', ..., n'} and all j.
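The static selection step can be sketched as a greedy admission of tasks in nonincreasing PUD order while the selected load stays at or below 1. The dict-based task records are our own; the paper leaves the exact tie-breaking among equal utilizations unspecified, so this is one reasonable reading of "the n' tasks with the largest PUDs."

```python
def select_tasks(tasks):
    """tasks: list of dicts with 'pud', 'c_max', and 'period'.
    Returns the set of indices labeled selected; the rest stay skipped."""
    order = sorted(range(len(tasks)),
                   key=lambda i: tasks[i]["pud"], reverse=True)
    selected, load = set(), 0.0
    for i in order:
        u = tasks[i]["c_max"] / tasks[i]["period"]
        if load + u > 1.0:
            break                  # n' is the largest admissible prefix
        selected.add(i)
        load += u
    return selected
```

With three tasks of utilization 0.5 each and PUDs 5, 3, and 1, only the two highest-PUD tasks fit under load'_b <= 1.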

4.3 Worst-Case Task Sojourn Times

CIC-VCUA's first objective is to assure that the maximum interval between any two consecutive, successful completions of jobs of a task T_i does not exceed its period P_i. In order to satisfy this scheduling objective, the algorithm determines the worst-case sojourn time of each task and attempts to complete all jobs of a task at the same time relative to their arrivals. Doing so ensures that all jobs J_{i,j} of a task T_i have identical sojourn times, satisfying the algorithm's first objective.

Since X_i = P_i, for tasks with step TUFs, the notion of termination time is the same as that of deadline. Thus, the Earliest Deadline First (or EDF) algorithm is also denoted Earliest Termination First (abbreviated EXF) in this paper. In the process of sojourn time calculation, we only consider the selected tasks, i.e., T'.

For the selected task set T' with load'_b <= 1, the online scheduling process of CIC-VCUA is essentially EXF (we describe this in Section 4.4). Thus, we define T.wcST to denote the worst-case sojourn time of each task T when the task set T' is scheduled by EXF. For a job J of task T, we denote its worst-case sojourn time as J.FinT. This is also the latest possible time for jobs of T to complete without causing any load increase or abortions of other jobs.

Fig. 4 shows the timeline of a job J_{i,j}, where t^0_i and J_{i,j}.X indicate the release and termination times of J_{i,j}, respectively, and J_{i,j}.FinT is less than J_{i,j}.X.

For task set T' with load'_b <= 1.0, our algorithm defaults to EXF. Hence, in order to find sojourn times under CIC-VCUA, we use the paradigm for finding sojourn times under EXF. Sojourn times under EXF can be determined using the notion of deadline busy period. A deadline-d busy period is a busy period during which only jobs with absolute deadlines smaller than or equal to d execute [25], [26]. This is needed because it allows us to determine how long it takes for each task to complete in the presence of other tasks.

First, it is necessary to calculate the synchronous busy period before calculating the individual deadline busy periods of tasks. The synchronous busy period, denoted L, is the interval of time during which the processor is not idle. Further, if all the first job instances of tasks were released synchronously (the worst-case scenario), it would take L time units for all jobs to complete. Thus, the busy period bounds the individual completion times, or deadline busy periods, of tasks. The busy period is given by [27]:

$$L^0 = 0; \qquad L^{m+1} = W(L^m), \quad \text{where } W(t) = \sum_{i=1}^{n'} \left\lceil \frac{t}{P_i} \right\rceil C_i.$$
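The fixed-point iteration for the synchronous busy period can be sketched as below. One practical caveat, which is our own observation rather than the paper's: with the literal seed L^0 = 0, W(0) = 0 is a trivial fixed point, so the sketch seeds the iteration with the sum of the execution times instead.

```python
import math

def busy_period(tasks, max_iter=100000):
    """Fixed point of L = W(L), with W(t) = sum_i ceil(t / P_i) * C_i.

    tasks: list of dicts with execution time 'C' and period 'P'.
    Seeded with sum of C_i, since L = 0 is a trivial fixed point of W.
    """
    L = sum(t["C"] for t in tasks)
    for _ in range(max_iter):
        nxt = sum(math.ceil(L / t["P"]) * t["C"] for t in tasks)
        if nxt == L:
            return L
        L = nxt
    raise RuntimeError("iteration did not converge (overload?)")
```

For two tasks with (C, P) = (1, 4) and (2, 6), the synchronous busy period settles at L = 3: all first instances complete before any second instance arrives.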

The busy period is found when the iteration converges at L^m = L^{m+1}. After L is calculated, the individual deadline busy periods, L_i, can be calculated. We need to determine which tasks will be executed before our target task. For a task T_i which arrives at time a, it is intuitive that, before T_i's absolute termination time a + X_i, only tasks with termination

6 IEEE TRANSACTIONS ON COMPUTERS, VOL. 56, NO. 3, MARCH 2007

Fig. 4. A job example.

times shorter than or equal to a + X_i can be executed. The deadline busy period of a task T_i with an arrival time a is given by [26]:

$$L^0_i(a) = 0; \qquad L^{m+1}_i(a) = W_i\big(a, L^m_i(a)\big) + \left(1 + \left\lfloor \frac{a}{X_i} \right\rfloor\right) C_i, \qquad (1)$$

where

$$W_i(a, t) = \sum_{j \ne i,\; T_j.X \le a + X_i} \min\left(\left\lceil \frac{t}{X_j} \right\rceil,\; 1 + \left\lfloor \frac{a + X_i - X_j}{X_j} \right\rfloor\right) C_j.$$

In calculating the deadline busy period L_i in (1), the first term, W_i(a, t), calculates the higher priority workload arriving in the time window [a, t] that has to be satisfied before executing task T_i, and the second term accounts for T_i's instances that have to be executed. The iterative computation stops when L^{m+1}_i(a) = L^m_i(a). Algorithm 1 shows the calculation of the maximum deadline busy period.
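The iteration in (1) can be sketched as follows. This mirrors the recurrence only, not the paper's Algorithm 1 (which additionally scans candidate arrival times); the dict-based task records are our own, and we exploit the paper's model in which X_i doubles as the period.

```python
import math

def deadline_busy_period(i, a, tasks):
    """Iterate L_i(a) = W_i(a, L_i(a)) + (1 + floor(a / X_i)) * C_i.

    tasks: list of dicts with execution time 'C' and termination time
    'X' (which also serves as the period in this model). W_i sums over
    tasks j != i whose X_j is no later than a + X_i.
    """
    Xi, Ci = tasks[i]["X"], tasks[i]["C"]

    def W(t):
        total = 0
        for j, tj in enumerate(tasks):
            if j == i or tj["X"] > a + Xi:
                continue
            total += min(math.ceil(t / tj["X"]),
                         1 + math.floor((a + Xi - tj["X"]) / tj["X"])) * tj["C"]
        return total

    L = 0
    while True:
        nxt = W(L) + (1 + math.floor(a / Xi)) * Ci
        if nxt == L:
            return L
        L = nxt
```

For two tasks with (C, X) = (1, 4) and (2, 6) and a = 0, the shorter-termination task is undisturbed (L_0(0) = 1), while the longer one must absorb the other's instance (L_1(0) = 3).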

Algorithm 1 uses a task list ordered by nondecreasing termination times. The algorithm examines the list, starting from the task with the maximum X_i, which has the length L. Tasks that have absolute termination times shorter than that of T_i are inserted into the proper termination time positions. Such positions are defined by $E = \bigcup_{i=1}^{n} \{mX_i + X_i : m \ge 0\} = (e_1, e_2, \ldots)$. After L_i is calculated, the bound for this task becomes L_i; it also becomes a bound for the next task in the list. The algorithm then moves down the list and calculates the maximum L_i for the next task, until all tasks are considered.

After determining L_i, the worst-case sojourn time of a task T_i becomes T_i.wcST = max_{a >= 0} T_i.wcST(a), where T_i.wcST(a) = max(C_i, L_i(a) - a). CIC-VCUA ensures that each job completes at its worst-case sojourn time after it is released, so that jobs can meet the bound constraint on consecutive completions.

For a task T_i, the finish time of the first job of the task is J_{i,1}.FinT = T_i.wcST. We can determine the finish times of the subsequent jobs of the task as J_{i,j}.FinT = J_{i,j-1}.FinT + P_i.
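The finish-time chain above is simple enough to state directly in code; the function below is our own sketch, assuming the worst-case sojourn time has already been computed by the busy-period analysis.

```python
def finish_times(wc_st, period, n_jobs, release=0.0):
    """Pinned finish times for jobs 1..n of one task.

    J_1.FinT = release + wcST, and each subsequent job finishes exactly
    one period later, so all jobs share the same sojourn time.
    """
    fin = [release + wc_st]
    for _ in range(n_jobs - 1):
        fin.append(fin[-1] + period)
    return fin
```

For wcST = 3 and period 10, the first three jobs are pinned to complete at times 3, 13, and 23, keeping every inter-completion gap exactly one period.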

During schedule construction, CIC-VCUA "pushes back" the completion times of jobs further toward their termination times as the system load changes with variable execution times. In order to satisfy the bound constraint across the range of load_b <= 1.0, the algorithm uses the worst-case sojourn times as predicted finish times. For load_b > 1.0, CIC-VCUA pushes job finish times such that they occur slightly before the job termination times. Pushing finish times closer to termination times at higher loads is necessary for two reasons: First, the worst-case sojourn time calculation becomes more unpredictable for different VCF shapes (since the calculation uses C_i(t^0_i)). Second, task sojourn times are already close to their termination times because of the load.

4.4 Dynamic UA Scheduling

After the initial static steps, CIC-VCUA selects the largest subtask set consisting of the highest-PUD tasks whose dynamic load will not cause a system overload. The algorithm then adopts the preemptive earliest termination time first (or EXF) scheduling policy, which is optimal from the feasibility point of view [24].

At each arrival of a job J_{i,j}, its finish time J_{i,j}.FinT is calculated from the task sojourn time T_i.wcST, the period P_i, and its predecessor's finish time. After we have J_{i,j}.FinT = J_{i,j-1}.FinT + P_i, the job is executed until only a very small amount of execution time, denoted delta, is left. At this time, if the absolute time is far from J_{i,j}.FinT, then job J_{i,j} is preempted. Later, it will be selected again at J.FinT to be completed.

delta is a small quantity of time, selected with J.C >> delta, so that the interference caused by the execution of delta time units (to finish a job) on other jobs is negligible. delta is used to delay the completion of jobs so that, at their finish times J.FinT, they only need to run delta units of time to finish. If two or more jobs have identical finish times, then delta is also used to break the tie. When a job J's remaining execution time is only delta and it is preempted and will be resumed at J.FinT, we say that the job J is ready to complete.

Since tasks are preemptive, CIC-VCUA's scheduling events include:

1. a job's arrival,
2. the expiration of a time constraint, such as the arrival of a TUF's termination time when the CPU is idle,
3. a job's completion,
4. a resource request, and
5. a resource release.

To describe the algorithm, we define the following variables and auxiliary functions:

- J_r = {J_1, J_2, ..., J_m} is the current unscheduled job set; sigma is the ordered output schedule. J_k in J_r is a job. J_k.X is its termination time; J_k.FinT is its finish time; and J_k.SEL is the job selection flag.
- selectJob(sigma) returns a job to execute, along with the amount of time it will execute.
- headOf(sigma) returns the first job in sigma.
- sortByPUD(sigma) returns sigma ordered by nonincreasing PUD. If two or more jobs have the same PUD, then the job(s) with the largest execution time appear before any others with the same PUD.
- owner(R) denotes the set of jobs that are currently holding resource R; reqRes(J) returns the resource requested by job J.
- insert(J, sigma, I) inserts job J into the ordered list sigma at the position indicated by index I; if there are already entries in sigma at index I, J is inserted before them. After insertion, the index of J in sigma is I.
- remove(J, sigma, I) removes job J from the ordered list sigma at the position indicated by index I; if J is not present at position I in sigma, the function takes no action.


- lookup(J, sigma) returns the index value associated with the first occurrence of job J in the ordered list sigma.
- feasible(sigma) returns a Boolean value indicating schedule sigma's feasibility. For sigma to be feasible, the predicted completion time of each job in sigma must never exceed its termination time.

A high-level description of CIC-VCUA is shown in Algorithm 2. At each scheduling event, when CIC-VCUA is invoked at time t_cur, the algorithm first checks the feasibility of all the jobs in the current ready queue. If a job is infeasible, it can be safely aborted (line 6). Otherwise, the algorithm constructs the job's dependency list (line 8) and then calculates its PUD (line 9). At line 10, jobs are sorted by their PUDs in nonincreasing order. In each step of the for loop from line 11 to line 14, the job with the largest PUD and its dependencies are inserted into sigma by insertByEXF(). Thus, sigma becomes a feasible schedule ordered by job termination times in nondecreasing order. Then, the selectJob() function finds a job in sigma and returns it for execution.

For each job J, CIC-VCUA compares delta with J.FinT - t_cur when the job has only delta remaining execution time units. If delta < J.FinT - t_cur, then job J is preempted and another job may be selected for execution. Later, when J.FinT - t_cur = delta, job J preempts the currently running task so that it can finish at J.FinT.

Such monitoring, preemption, and resumption are realized by the procedure selectJob(). This procedure selects the job with the earliest finish time from sigma. If this job is not ready to complete, it ensures that the job executes J.C - delta time units. Otherwise, selectJob() runs that job to completion. After finishing such jobs, the algorithm seeks another job to execute.

4.5 Resource and Deadlock Handling

Before CIC-VCUA can compute job partial schedules, the dependency chain of each job must be determined. This is described in Algorithm 3.

Algorithm 3 follows the chain of resource requests and ownership. For convenience, the input job J_k is also included in its own dependency list. Each job J_l other than J_k in the dependency list has a successor job that needs a resource currently held by J_l. Algorithm 3 stops either because a predecessor job does not need any resource or because the requested resource is free. The list is built by successive appends; thus, the dependency list starts with J_k's farthest predecessor and ends with J_k.
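The chain-following idea of Algorithm 3 can be sketched as below. The mapping-based representation (requested resource per job, owning job per resource) is our own; with single-unit resources, each resource has at most one owner, which is what makes the walk a simple chain.

```python
def build_dep(job, req_res, owner):
    """Dependency chain for `job`, ending with the job itself.

    req_res: maps a job to the resource it requests (or None).
    owner:   maps a resource to the single job holding it (or None).
    Stops when a predecessor requests nothing or its resource is free.
    """
    chain = [job]
    cur = job
    while True:
        res = req_res.get(cur)
        if res is None or owner.get(res) is None:
            break
        cur = owner[res]
        chain.insert(0, cur)       # predecessors precede their successors
    return chain
```

If J3 waits on R1 held by J2, and J2 waits on R2 held by J1 (which requests nothing), the chain comes out as [J1, J2, J3].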

To handle deadlocks, we consider a deadlock detection and resolution strategy, instead of a deadlock prevention or avoidance strategy. Our rationale is that deadlock prevention or avoidance strategies normally pose extra requirements; for example, resources must always be requested in ascending order of their identifiers.

Further, restricted resource access operations that can prevent or avoid deadlocks, as done in many priority-based resource access protocols, are not appropriate for the class of application systems on which we focus here. For example, the Priority Ceiling protocol [22] assumes that the highest priority of jobs accessing a resource is known. Likewise, the Stack Resource policy [28] assumes preemptive "levels" of threads a priori. Such assumptions are too restrictive for our application systems: the resources that will be needed, the length of time for which they will be needed, and the order of accessing them are all statically unknown.

Recall that we are assuming a single-unit resource request model. For such a model, the presence of a cycle in the resource graph is the necessary and sufficient condition for a deadlock to occur. Thus, the complexity of detecting a deadlock can be mitigated by a straightforward cycle-detection algorithm.

The deadlock detection and resolution algorithm (Algorithm 4) is invoked by the scheduler whenever a job requests a resource. Initially, there is no deadlock in the system. By induction, it can be shown that a deadlock can occur if and only if the edge that arises in the resource graph due to the new resource request lies on a cycle. Thus, it is sufficient to check whether the new edge resulting from the job's resource request produces a cycle in the resource graph.
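Because resources are single-unit, the new-edge cycle check reduces to walking the chain of owners from the requested resource and seeing whether it leads back to the requester. The sketch below uses the same hypothetical mappings as our dependency-chain sketch; it is an illustration of the idea, not the paper's Algorithm 4.

```python
def request_causes_deadlock(requester, resource, req_res, owner):
    """True iff granting `requester -> resource` would close a cycle.

    req_res: maps a job to the resource it is waiting for (or None).
    owner:   maps a resource to the single job holding it (or None).
    """
    cur = owner.get(resource)
    while cur is not None:
        if cur == requester:
            return True            # the chain led back to the requester
        res = req_res.get(cur)
        cur = owner.get(res) if res is not None else None
    return False
```

With R1 held by J1 and R2 held by J2, where J1 waits for R2, a request by J2 for R1 closes a cycle, while a request by an uninvolved J3 does not.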


To resolve the deadlock, some job needs to be aborted. If a job J_l were aborted, its timeliness utility would be lost. To minimize such loss, we compute the Local PUD (or LoPUD) of each job at t_cur. A job's LoPUD is defined as the utility that the job can potentially accrue by itself at the current time, if it were to continue its execution. The algorithm aborts the job with the minimal LoPUD in the cycle to resolve the deadlock. Before the job is aborted, the resources held by it are released and returned to a consistent state.

4.6 Manipulating Partial Schedules

The calculatePUD() algorithm (Algorithm 5) accepts a job J_k and its dependency list and determines J_k's PUD. It assumes that the jobs in J_k.Dep finish at their predicted finish times J.FinT from the current position in the schedule, while following the dependencies.

To compute J_k's PUD, CIC-VCUA considers each job J_l in J_k's dependency chain J_k.Dep, which needs to be completed before executing J_k, since these jobs hold resources that J_k needs. (Note that buildDep() includes J_k's dependents and J_k itself in J_k.Dep.) First, the algorithm calculates the total utility U that can be accrued by executing J_k and its dependents and completing them at their respective finish times J.FinT. The total execution times of J_k and its dependents are aggregated in the variable tc. calculatePUD() determines J_k's PUD as U/tc (line 7). The details of insertByEXF() in line 13 of Algorithm 2

are shown in Algorithm 6. insertByEXF() updates the tentative schedule sigma by attempting to insert each job, along with all of its dependents, into sigma. The updated sigma is an ordered list of jobs, where each job is placed according to the termination time that it should meet. Note that the time constraint that a job should meet is not necessarily the job's termination time. In fact, the index value of each job in sigma is the actual time constraint that the job must meet.

A job may need to meet an earlier termination time in order to enable another job to meet its time constraint. Whenever a job J is considered for insertion into sigma, it is scheduled to meet its own termination time. However, J's dependents must execute before J can execute and, therefore, must precede it in the schedule. The index values of the dependents can be changed with insert() in line 13 of Algorithm 6.

The variable CuXT is used to keep track of this information. Initially, it is set to the termination time of job J_k, which is tentatively added to the schedule (line 6, Algorithm 6). Thereafter, any job in J_k.Dep with a later time constraint than CuXT is required to meet CuXT. If, however, a job has a tighter termination time than CuXT, then it is scheduled to meet the tighter termination time, and CuXT is advanced to that time, since all jobs left in J_k.Dep must complete by then (lines 12-13, Algorithm 6). Finally, if this insertion produces a feasible schedule, the jobs are included in the schedule; otherwise, the schedule is not changed (lines 14-15).

It is worth noting that the real-time constraint that a job has to meet is its finish time, J.FinT. The procedure insertByEXF() resolves resource dependencies and, accordingly, may change the order of task execution.
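The CuXT tightening rule can be isolated in a small sketch: walk the dependency chain backward from the inserted job and force every dependent to meet the tightest termination time seen so far. This is our own distillation of lines 6-13 of Algorithm 6; feasibility checking and the actual list insertion are omitted.

```python
def effective_constraints(chain_x):
    """Time constraint each job must actually meet in the schedule.

    chain_x: termination times ordered [farthest dependent, ..., job].
    A dependent with a later X than the running CuXT is pulled down to
    CuXT; a tighter X advances CuXT for everything before it.
    """
    cu_xt = chain_x[-1]                 # start from the inserted job's X
    out = [0] * len(chain_x)
    out[-1] = cu_xt
    for k in range(len(chain_x) - 2, -1, -1):   # walk dependents backward
        if chain_x[k] < cu_xt:
            cu_xt = chain_x[k]          # tighter own termination time wins
        out[k] = cu_xt
    return out
```

For a job with X = 7 whose dependents have X = 10 and X = 4, the effective constraints come out as [4, 4, 7]: the loose dependent is pulled down to 4, so both dependents precede the job.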

4.7 Selecting a Job for Execution

The procedure selectJob() (Algorithm 7) determines the job that will be executed, as well as the amount of time for which it needs to be executed.

At the beginning of the algorithm, the job with the earliest finish time in sigma, denoted Earliest, is found. selectJob() starts by checking whether the currently running job, CurRunning, holds resources (line 4). If so, the algorithm ensures that this job is executed so that the held resources are freed. Then, Earliest is selected to be the running task if its finish time has arrived (line 7), and the algorithm returns with Jexe = Earliest.

If line 7 cannot determine the job Jexe that needs to complete, the algorithm checks whether a previous job, Prev, exists (line 11). If Prev exists, it means that Prev is currently holding resources and has to be executed to release those resources. However, at the same time, jobs that are ready to complete must be finished without delay. Therefore, jobs holding resources can only be preempted by ready jobs, and Prev's execution has to follow that of the ready jobs. So, lines 11-14 ensure that Prev is favored for execution after a job completes.

If line 7 cannot return Jexe and no Prev exists, the algorithm seeks to select a job that can be executed in sigma (lines 15-18). The first job in sigma with J.C(t_cur) > delta is selected to execute until its remaining execution time is only delta (line 25). With task arrivals and completions, the contents and the order of sigma change in terms of both resource dependencies and finish times. However, the algorithm ensures that each job J is selected to complete at its finish time J.FinT. If no task can be found to execute at line 20, the algorithm idles the processor until either the earliest finish time or the arrival of a new job.
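The decision order of selectJob() can be sketched as below. This is a simplified, hedged reading of Algorithm 7 under our own representation (dict-based jobs, a predicate for resource ownership); the Prev bookkeeping and idling logic of the full algorithm are collapsed into the fall-through cases.

```python
def select_job(sigma, cur_running, holds_res, t_cur, delta):
    """Return (job, execution budget) per selectJob()'s priority order.

    sigma: jobs as dicts with finish time 'fin_t' and remaining
    execution 'rem'. Priority: a resource-holding running job first,
    then a job whose finish time has (nearly) arrived, then the first
    job with more than delta left, which runs until only delta remains.
    """
    if not sigma:
        return None, 0.0
    earliest = min(sigma, key=lambda j: j["fin_t"])
    if cur_running is not None and holds_res(cur_running):
        return cur_running, cur_running["rem"]   # free held resources
    if earliest["fin_t"] - t_cur <= delta:
        return earliest, earliest["rem"]         # ready: run to completion
    for j in sigma:
        if j["rem"] > delta:
            return j, j["rem"] - delta           # stop delta short of done
    return None, 0.0                             # idle until the next event
```

With delta = 0.5 and an earliest finish time of 10, a job selected at t_cur = 0 receives a budget of rem - delta, whereas at t_cur = 9.6 it is ready to complete and runs out its full remaining time.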

An example of how CIC-VCUA executes jobs is shown in Fig. 5. Upward arrows indicate both job arrivals and termination times, and black boxes denote delta. In this example, jobs J_{1,1} and J_{3,1} arrive at the same time. However, since J_{1,1}'s finish time is earlier, CIC-VCUA selects J_{1,1} for execution and creates a preemption point at time J_{1,1}.C(t_cur) - delta. As J_{1,1} is preempted, T_2 arrives with an earlier finish time than that of J_{3,1} and runs until J_{2,1}.C(t_cur) - delta. After J_{2,1}'s preemption, J_{3,1} is executed, but it is preempted when J_{1,1}'s finish time, J_{1,1}.FinT, arrives and J_{1,1} executes to completion. Then, J_{3,1} resumes; however, it is again preempted to let J_{2,1} finish. After this, J_{3,1} resumes and completes at its finish time.

4.8 Asymptotic Time Complexity

To analyze the complexity of CIC-VCUA (Algorithm 2), we consider a ready queue of n jobs and a maximum of r resources. In the worst case, buildDep() builds a dependency list of length n, so the for loop in lines 4-9 is repeated O(n^2) times in the worst case. sortByPUD()'s complexity is O(n log n).

The complexity of the for loop body starting at line 11 is dominated by insertByEXF() (Algorithm 6), whose complexity is in turn dominated by the for loop in lines 7-13 of Algorithm 6. That loop requires O(n log n) time, since it executes no more than n times and each execution requires O(log n) time for the insert(), remove(), and lookup() operations on the tentative schedule. Therefore, CIC-VCUA's worst-case complexity is 2 x O(n^2) + O(n log n) + n x O(n log n) = O(n^2 log n).

CIC-VCUA's asymptotic cost is similar to that of many past UA scheduling algorithms, such as [6], [7], [8]. Our prior implementation experiences with UA scheduling at the middleware level have shown that the overheads are of submillisecond magnitude [29] (submicrosecond overheads may be possible at the kernel level). We anticipate a similar overhead magnitude for CIC-VCUA (on a similar platform).

As mentioned before, systems such as AESA that we consider in this paper are distinguished by their relatively long execution time magnitudes, e.g., milliseconds to seconds or seconds to minutes. Thus, although CIC-VCUA has a higher overhead than traditional real-time scheduling algorithms, this cost is justified for applications with longer execution time magnitudes, such as those on which we focus here (of course, it cannot be justified for every application).2

5 ALGORITHM PROPERTIES

5.1 Nontimeliness Properties

We now discuss CIC-VCUA's nontimeliness properties, i.e., deadlock-freedom, correctness, and mutual exclusion. CIC-VCUA respects resource dependencies by ensuring that the job selected for execution can execute immediately. Thus,


2. When UA scheduling is desired with low overhead, solutions and trade-offs exist. These include linear-time stochastic UA scheduling [30], UA scheduling with nonblocking synchronization for concurrent, mutually exclusive resource sharing [31], and using special-purpose hardware accelerators for UA scheduling (analogous to floating-point coprocessors) [32].

Fig. 5. Example of a task set.

no job is ever selected for normal execution if it is resource-dependent on some other job.

Theorem 1. CIC-VCUA ensures deadlock-freedom.

Proof. A cycle in the resource graph is the sufficient and necessary condition for a deadlock in the single-unit resource request model. CIC-VCUA does not allow such a cycle, by deadlock detection and resolution, so it is deadlock-free. □

Lemma 2. In insertByEXF()'s output, all the dependents of a job must execute before the job can execute and, therefore, must precede it in the schedule.

Proof. insertByEXF() maintains an output queue that is ordered by job termination times while respecting resource dependencies. Consider job J_k and its dependent J_l. If J_l.X is earlier than J_k.X, then J_l is inserted before J_k in the schedule. If J_l.X is later than J_k.X, then J_l.X is advanced to J_k.X by the operation with CuXT. According to the definition of insert(), after advancing the termination time, J_l is inserted before J_k. □

Theorem 3. When a job J_k that requests a resource R is selected for execution by CIC-VCUA, J_k's requested resource R will be free. We call this CIC-VCUA's correctness property.

Proof. From Lemma 2, the output schedule sigma is correct. Thus, CIC-VCUA is correct. □

Thus, if a resource is not available for a job J_k's request, the jobs holding the resource become J_k's predecessors. We present CIC-VCUA's mutual exclusion property as a corollary.

Corollary 4. CIC-VCUA satisfies mutual exclusion constraintsin resource operations.

5.2 Timeliness Properties

We now consider CIC-VCUA's timeliness properties and compare the algorithm with other algorithms. Specifically, we consider the following two conditions: 1) a set of independent periodic tasks subject to step TUFs and 2) sufficient processor cycles exist for meeting all task termination times, i.e., there is no overload and load_b <= 1.

Theorem 5. Under conditions 1) and 2), a schedule produced by EDF [3] is also produced by CIC-VCUA, yielding equal total utilities. Not coincidentally, this is simply a termination time-ordered schedule.

Proof. We prove this by examining Algorithm 2. For periodic tasks, during nonoverload situations, sigma from Algorithm 2 is termination time-ordered due to the properties of the procedure insertByEXF(). The termination time that we consider is analogous to a deadline in [3]. As proved in [3], [24], a deadline-ordered schedule is optimal (with respect to meeting all deadlines) for preemptive task sets when there are no overloads. Thus, sigma yields the same total utility as preemptive EDF. □

Some important corollaries about CIC-VCUA's timeliness behavior during nonoverload situations can be deduced from EDF's optimality [33].

Corollary 6. Under conditions 1) and 2), CIC-VCUA always meets all task termination times.

With the previous theorems and corollaries, we derive algorithm properties in terms of CIC-VCUA's scheduling objective.

Theorem 7. CIC-VCUA assures that the maximum time interval between any two consecutive, successful completions of jobs of a task does not exceed the task period.

Proof. Let J_{i,j}.ST and J_{i,j+1}.ST be the sojourn times of two consecutive, successfully completed jobs J_{i,j} and J_{i,j+1} of task T_i, respectively. Also, let T_i.wcST be the worst-case sojourn time of task T_i (and of all its jobs). Under CIC-VCUA, the maximum time interval between the completions of J_{i,j} and J_{i,j+1} equals P_i + J_{i,j+1}.ST - J_{i,j}.ST, i.e., J_{i,j+1}.FinT - J_{i,j}.FinT. So, in order to have a maximum interval bound of P_i, we must have J_{i,j+1}.ST = J_{i,j}.ST.

We know that the first job of T_i has J_{i,1}.FinT = J_{i,1}.ST = T_i.wcST. Under CIC-VCUA, the consecutive completions of the following jobs keep J_{i,j+1}.ST = J_{i,j}.ST = T_i.wcST, i.e., the sojourn times of all jobs are equal to the task's worst-case sojourn time. Therefore, J_{i,j+1}.FinT - J_{i,j}.FinT = P_i. So, for any task, the time interval between two consecutive, successful completions of its jobs does not exceed the length of the task period. □

Following Theorem 7, during underloads, every job of a task T_i completes within its completion time bound T_i.wcST after its arrival. Jobs of other tasks abide by the same rule. During system overloads, when load_b > 1, CIC-VCUA dynamically selects the tasks with the highest PUDs among the task set until the total load'_b <= 1. Therefore, the bound on consecutive job completions still holds for the selected subtask set.

6 EXPERIMENTAL RESULTS

We experimentally evaluated CIC-VCUA through a detailed simulation study. We first describe our experimental settings and then report our results.

6.1 Experimental Settings

We selected task sets with 16 tasks in three applications, denoted A1, A2, and A3. Task parameters are summarized in Table 1. Within each range, the period P is uniformly distributed. The synthesized task sets simulate a varied mix of short and long periods.


TABLE 1: Task Settings

The U_max values of the TUFs in A1, A2, and A3 are uniformly generated within each range. We define a linearly increasing VCF = k x t + C_o, a linearly decreasing VCF = -k x t + C_o, and a constant VCF = C_o for each task. The parameter k is uniformly generated within the range [0, 0.1]. We change the mean value of C_o and generate normally distributed values to adjust the system load, load_b. In all of our experiments, the delta value is set to 2 x 10^-4. The finish times J.FinT of tasks are pushed to their termination times when load_b > 0.9, in order to avoid the unpredictability of sojourn time calculations.

6.2 Performance on Completion Interval

We assign each task a step TUF and first consider CIC-VCUA's performance on scheduling objective 1. For the 16 tasks, we vary load_b from less than 0.1 to larger than 1.8 and evaluate the maximum interval between any two consecutive, successful completions of jobs of each task and of the whole task set (containing all tasks). We refer to the former as the maximum intratask completion interval and to the latter as the maximum intertask completion interval.

We consider two classes of VCFs for the task sets: homogeneous and heterogeneous. A task set consisting of tasks with only one type of VCF shape is referred to as a homogeneous set. In a heterogeneous set, on the other hand, tasks can have any of the VCF shapes specified in Table 1. In the following experiments, we consider step TUFs.

6.2.1 Homogeneous VCFs

In the experiments in this section, we use constant VCFs for all tasks. Fig. 6 shows the maximum intra- and intertask completion intervals as loadb varies. In Fig. 6a, we show six tasks selected from the task set as examples to study their maximum intratask completion interval.

To validate Theorem 7, we show the periods of the selected tasks from Fig. 6a in Table 2. From Fig. 6a and Table 2, we

observe that, in all loadb regions, the maximum intratask

completion interval of each task is less than or equal to the

length of its period. During overloads, these tasks are labeled as selected since they have high PUDs, so they can always satisfy their bound constraints. Therefore, plots in

Fig. 6a validate Theorem 7.

As a comparison, we also study the maximum intertask

completion interval of the whole task set in Fig. 6b. The

minimum period of the task set is 3. From the figure, we

observe that, during underloads, the maximum intertask

completion interval is less than 20 and, during overloads,

the maximum intertask completion interval may exceed 20

in order to satisfy intratask completion bounds.

Experiments for homogeneous sets with monotonically

increasing and decreasing VCFs under various TUF shapes

yield results similar to those shown in Fig. 6. These are

omitted here for brevity; however, they can be found in [34].

6.2.2 Heterogeneous VCFs

For the experiments in this section, we generate random

VCFs for each task. The shapes we use for VCFs are

described in Section 3.4. Fig. 7 shows the maximum intra

and intertask completion intervals as loadb varies. In Fig. 7a,

we selected five tasks to study their maximum intratask

completion interval. The periods of these tasks are shown in

Table 3.

From Fig. 7a, we again observe that the maximum

intratask completion interval of each task is less than or

equal to the length of its period, in all loadb regions. During

overloads, similarly to the homogeneous set scenario,

skipped tasks with low PUDs never get a chance to execute.

Hence, plots in Fig. 7a also validate Theorem 7.

Fig. 7b shows the maximum intertask completion

interval of the whole task set. The minimum period of the

task set is 3. We observe results similar to that of the

homogeneous set scenario. Experiments for heterogeneous VCFs under various TUF shapes yield consistent results. These

are again omitted here for brevity, but can be found in [34].

6.3 Performance on Utility Accrual

We now evaluate CIC-VCUA’s performance on scheduling

objective 2). In these experiments, we consider constant

VCFs. For such VCFs, CIC-VCUA can be compared with

12 IEEE TRANSACTIONS ON COMPUTERS, VOL. 56, NO. 3, MARCH 2007

Fig. 6. Maximum intra and intertask completion interval for a homogeneous set, constant VCFs, step TUFs. (a) Intratask. (b) Intertask.

TABLE 2: Tasks and Their Periods for a Homogeneous Set

other UA algorithms (which cannot deal with nonconstant VCFs and varying execution times).

We consider step and decreasing TUFs. Our first experiments compare CIC-VCUA with RUA [8], DASA [7], LBESA [6], VCUA [35], and EDF without abortion (or EDF-NABT) [3] to evaluate performance under step TUFs (all of these algorithms allow step TUFs). We then compare CIC-VCUA with RUA, DASA, VCUA, and LBESA under decreasing TUFs (these algorithms allow decreasing TUFs).

Figs. 8 and 9 show the accrued utility ratio (or AUR) and termination time meet rate (or XMR) of the algorithms as loadb increases. AUR is the ratio of the total accrued utility to the total maximum utility; XMR is the ratio of the number of jobs meeting their termination times to the total number of job releases.
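The two metrics just defined are straightforward ratios; a minimal sketch, with hypothetical job records of our own invention:

```python
def aur(accrued_utilities, max_utilities):
    """Accrued utility ratio: total accrued utility / total maximum utility."""
    return sum(accrued_utilities) / sum(max_utilities)

def xmr(jobs_met, jobs_released):
    """Termination time meet rate: jobs meeting termination times / jobs released."""
    return jobs_met / jobs_released

# Example: three jobs released with Umax = 10 each; two finish before their
# termination times, one accrues partial utility under a decreasing TUF.
print(aur([10.0, 7.5, 0.0], [10.0, 10.0, 10.0]))
print(xmr(2, 3))
```

Note that AUR and XMR can diverge: a run can miss many termination times (low XMR) yet keep AUR high if the jobs that do complete carry most of the utility, which is exactly the behavior reported for CIC-VCUA during overloads.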

Fig. 8 shows the AUR and XMR of the algorithms under step TUFs. From Fig. 8a, we observe that CIC-VCUA has almost the same AUR as that of DASA, RUA, and LBESA. However, from Fig. 8b, we observe that CIC-VCUA suffers higher termination time misses than the other algorithms during high loads. This is because CIC-VCUA statically labels some tasks as skipped so that it can satisfy the completion interval bounds.

The AUR and XMR of the algorithms under decreasing TUFs are shown in Figs. 9a and 9b, respectively. We observe that CIC-VCUA yields lower AUR and XMR than the other algorithms for decreasing TUFs, and by a much larger margin than under step TUFs. This is clearly due to the algorithm's procrastination of jobs to satisfy the completion time interval bound, which is CIC-VCUA's primary scheduling objective. None of the other algorithms is designed to satisfy the completion time interval bound (see Section 6.4 for results that illustrate this). Maximizing AUR is only CIC-VCUA's secondary objective.

Thus, job procrastination results in reduced AUR and XMR for CIC-VCUA with respect to the other algorithms. Further, this reduction is more significant under decreasing TUFs than under step TUFs, clearly because earlier completion results in greater AUR under decreasing TUFs but not under step TUFs.

Figs. 8 and 9 show that, when loadb > 0.9, CIC-VCUA starts to miss termination times and its XMR drops, but its AUR drops much more slowly since tasks with higher PUDs are statically selected. Further, the AURs and XMRs in Fig. 8 validate Theorem 5 and Corollary 6.

Our experiments with monotonically increasing and decreasing VCFs yield similar results to those shown in Figs. 8 and 9. These are again omitted here, but can be found in [34].

6.4 Results under Resource Dependency

To construct dependent task sets, we consider task sets where jobs may randomly request and release resources from an available set of resources during their life spans. The resource request and release times are uniformly distributed within a job's life cycle before the job is ready to complete. That is, resource request and release are serviced before the job's remaining execution time is only δ. We conducted experiments on such task sets with five shared resources. Table 4 displays six tasks that we selected to study their maximum inter- and intratask completion intervals.
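The dependent-workload construction above can be sketched as follows. This is our own reading of the description, with hypothetical function and parameter names: both times are drawn uniformly within the job's life span, and the release is serviced before the job's remaining execution time drops to δ:

```python
import random

def request_release_times(job_start, job_finish, delta, rng):
    """Draw a resource request/release pair uniformly within [job_start, job_finish - delta],
    with request <= release."""
    latest = job_finish - delta          # resource must be released before only delta remains
    a = rng.uniform(job_start, latest)
    b = rng.uniform(job_start, latest)
    return min(a, b), max(a, b)

rng = random.Random(7)
req, rel = request_release_times(0.0, 100.0, 2e-4, rng)
print(req, rel)                          # both fall inside the job's life span
```

With δ = 2 × 10⁻⁴ (the value used in the settings above), the release is effectively constrained to precede the job's final sliver of execution.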

For these experiments, we compare CIC-VCUA with DASA since DASA is a UA scheduling algorithm that allows resource dependencies and exhibits good performance. Fig. 10 shows the results. With our experimental settings, we observe only limited performance loss in our simulation, but we expect a larger performance drop with larger task sets and more resources.

Fig. 10a shows both the AURs (on the left Y-axis) and the intratask completion intervals (on the right Y-axis) of a randomly selected task for CIC-VCUA and DASA as loadb varies. In terms of AUR, CIC-VCUA performs as well as DASA. Also, we observe from Fig. 10a that the intratask completion interval of DASA increases as the load increases and exceeds the bound of one period. However, CIC-VCUA maintains the intratask completion interval as a constant, equal to the task's period, under different system loads. Additionally, Fig. 10b shows the XMR comparison of both algorithms. During overloads, in terms of XMR, DASA


Fig. 7. Maximum intra and intertask completion interval for a heterogeneous set. (a) Intratask. (b) Intertask.

TABLE 3: Tasks and Their Periods for a Heterogeneous Set

and CIC-VCUA exhibit different behaviors because of their different schedule construction processes.

Fig. 11 shows the performance of CIC-VCUA under decreasing VCFs and step TUFs. As Fig. 11a shows, CIC-VCUA exhibits good AUR even for task sets with resource dependencies. The XMR decrease is due to the static selection, which favors high-PUD tasks. Fig. 11b shows the maximum intratask and intertask completion intervals of the selected tasks. Clearly, CIC-VCUA bounds the intratask completion interval to one period.

Fig. 12 shows CIC-VCUA’s performance under increas-ing VCFs and step TUFs. As Fig. 12a shows, the algorithmAUR and XMR are similar to the case of nonincreasingVCFs. Fig. 12b shows the inter and intratask completionintervals of selected tasks. Again, we observe that CIC-VCUA bounds the intratask completion interval to oneperiod.

From Fig. 12, we observe that the algorithm's performance under resource dependencies is similar to that under no resource dependencies. However, there is a small performance loss due to mutual exclusion requirements. The higher the number of shared resources, the greater this performance loss is. This is because CIC-VCUA respects resource dependencies in scheduling, which, in the worst case, may cause jobs to be executed in the reverse order of PUDs or termination times. With such dependent task sets, the algorithm suffers performance losses, especially during high loads.

Our experiments with monotonically increasing and decreasing VCFs under various other TUF shapes yield similar results to those shown in Figs. 11 and 12 [34].

7 CONCLUSIONS, FUTURE WORK

In this paper, we present a real-time scheduling algorithm called CIC-VCUA that focuses on the problem space intersecting UA scheduling and variable cost scheduling. The algorithm considers tasks which are subject to TUF time constraints and mutual exclusion constraints on shared non-CPU resources, and whose execution times are functions of their starting times. CIC-VCUA considers a twofold objective: 1) bound the maximum interval between any two consecutive, successful completions of jobs of a task to the task's period and 2) maximize the system's total utility while satisfying all resource dependencies. This problem can be shown to be NP-hard. CIC-VCUA heuristically solves the problem in polynomial time. We establish that CIC-VCUA achieves optimal total utility


Fig. 8. AUR and XMR of CIC-VCUA and other UA algorithms, constant VCFs, step TUFs. (a) AUR. (b) XMR.

Fig. 9. AUR and XMR of CIC-VCUA and other UA algorithms, constant VCFs, decreasing TUFs. (a) AUR. (b) XMR.

TABLE 4: Tasks and Their Periods for Resource Dependency Experiments

during underloads, and tight upper bounds on inter and

intratask completion times. Our experimental studies

confirm the algorithm’s effectiveness and superiority.This paper only scratched the surface of the VCF

scheduling problem; so many problems are open for further

research. Immediate research directions include relaxing

some of our task model assumptions. For example, our

work assumed that dwell jobs are arbitrarily preemptible.

This is not generally true of AESA systems as preempting a

dwell is sometimes expensive. (Our preliminary work [35]

which led to this work considered a fully nonpreemptive

task model, which is also restrictive.) Thus, CIC-VCUA can

be extended for a task model which includes nonpreemption and preemption with nonnegligible cost.

Further, timescales associated with VCFs and TUFs can

vary widely in AESA systems. For example, the TUFs

associated with each dwell may have termination times in

the range of tens to hundreds of milliseconds; the VCFs may


Fig. 10. CIC-VCUA versus DASA under resource dependency, constant VCFs, step TUFs. (a) AUR. (b) XMR.

Fig. 11. CIC-VCUA performance under resource dependency, decreasing VCFs, step TUFs. (a) AUR and XMR. (b) Completion intervals.

Fig. 12. CIC-VCUA performance under resource dependency, increasing VCFs, step TUFs. (a) AUR and XMR. (b) Completion intervals.

only change significantly over the course of tens or

hundreds of seconds. This facet of the model can be

exploited in future work.

Our periodic task arrival model can also be relaxed, e.g.,

to the unimodal arbitrary arrival model (or UAM) [31].

UAM embodies a “stronger” adversary than most arrival

models.

ACKNOWLEDGMENTS

This work was supported by the US Office of Naval

Research under Grant N00014-00-1-0549 and The MITRE

Corporation under Grant 52917. Coauthor E. Douglas

Jensen’s contributions to this work were sponsored by the

MITRE Corporation Technology Program. The authors

thank Dr. Raymond Clark of The MITRE Corporation for

his inputs on the VCF problem. Preliminary results of this

work appeared in [35].

REFERENCES

[1] R.K. Clark, E.D. Jensen, and N.F. Rouquette, "Software Organization to Facilitate Dynamic Processor Scheduling," Proc. IEEE Parallel and Distributed Processing Symp., Apr. 2004.

[2] R. Clark, E.D. Jensen, A. Kanevsky, J. Maurer, P. Wallace, T. Wheeler, Y. Zhang, D. Wells, T. Lawrence, and P. Hurley, "An Adaptive, Distributed Airborne Tracking System," Proc. IEEE Workshop Parallel and Distributed Systems, pp. 353-362, Apr. 1999.

[3] W. Horn, "Some Simple Scheduling Algorithms," Naval Research Logistics Quarterly, vol. 21, pp. 177-185, 1974.

[4] E.D. Jensen, C.D. Locke, and H. Tokuda, "A Time-Driven Scheduling Model for Real-Time Systems," Proc. IEEE Real-Time Systems Symp., pp. 112-122, Dec. 1985.

[5] D.P. Maynard, S.E. Shipman, R.K. Clark, J.D. Northcutt, R.B. Kegley, B.A. Zimmerman, and P.J. Keleher, "An Example Real-Time Command, Control, and Battle Management Application for Alpha," Archons Project Technical Report 88121, Dept. of Computer Science, Carnegie Mellon Univ., Dec. 1988.

[6] C.D. Locke, "Best-Effort Decision Making for Real-Time Scheduling," PhD dissertation, CMU-CS-86-134, Carnegie Mellon Univ., 1986, http://www.real-time.org, June 2005.

[7] R.K. Clark, "Scheduling Dependent Real-Time Activities," PhD dissertation, CMU-CS-90-155, Carnegie Mellon Univ., 1990, http://www.real-time.org, June 2005.

[8] H. Wu, B. Ravindran, E.D. Jensen, and U. Balli, "Utility Accrual Scheduling under Arbitrary Time/Utility Functions and Multi-Unit Resource Constraints," Proc. 10th Int'l Conf. Real-Time and Embedded Computing Systems and Applications (RTCSA), pp. 80-98, Aug. 2004.

[9] J.W.S. Liu, W.-K. Shih, K.-J. Lin, R. Bettati, and J.-Y. Chung, "Imprecise Computations," Proc. IEEE, vol. 82, no. 1, pp. 83-94, Jan. 1994.

[10] J.K. Dey, J.F. Kurose, and D. Towsley, "On-Line Scheduling Policies for a Class of IRIS (Increasing Reward with Increasing Service) Real-Time Tasks," IEEE Trans. Computers, vol. 45, no. 7, pp. 802-813, July 1996.

[11] L.B. Becker, E. Nett, S. Schemmer, and M. Gergeleit, "Robust Scheduling in Team-Robotics," J. Systems and Software, vol. 77, no. 1, pp. 3-16, 2005.

[12] J.A. Malas, "F-22 Radar Development," Proc. IEEE Nat'l Aerospace and Electronics Conf., vol. 2, pp. 831-839, July 1997.

[13] J.C. Curlander and R.N. McDonough, Synthetic Aperture Radar: Systems and Signal Processing. Wiley-Interscience, 1991.

[14] T.W. Jeffrey, "Track Quality Estimation for Multiple-Target Tracking Radars," Proc. 1989 IEEE Nat'l Radar Conf., pp. 76-79, Mar. 1989.

[15] G. van Keuk and S.S. Blackman, "On Phased-Array Radar Tracking and Parameter Control," IEEE Trans. Aerospace and Electronic Systems, vol. 29, no. 1, pp. 186-194, Jan. 1993.

[16] G.W. Stimson, Introduction to Airborne Radar, second ed. SciTech Publishing, Jan. 1998.

[17] S. Gopalakrishnan, M. Caccamo, C.-S. Shih, C.-G. Lee, and L. Sha, "Finite-Horizon Scheduling of Radar Dwells with Online Template Construction," Real-Time Systems, vol. 33, no. 1-3, pp. 47-75, 2006.

[18] S. Gopalakrishnan, P.G. Chi-Sheng Shih, M. Caccamo, L. Sha, and C.-G. Lee, "Radar Dwell Scheduling with Temporal Distance and Energy Constraints," Proc. Int'l Radar Conf., Oct. 2004.

[19] C.-S. Shih, P. Ganti, S. Gopalakrishnan, M. Caccamo, and L. Sha, "Synthesizing Task Periods for Dwells in Multi-Function Phased Array Radars," Proc. IEEE Radar Conf., pp. 145-150, Apr. 2004.

[20] C.-S. Shih, S. Gopalakrishnan, P. Ganti, M. Caccamo, and L. Sha, "Template-Based Real-Time Dwell Scheduling with Energy Constraint," Proc. IEEE Real-Time and Embedded Technology and Applications Symp., May 2003.

[21] C.-S. Shih, S. Gopalakrishnan, P. Ganti, M. Caccamo, and L. Sha, "Scheduling Real-Time Dwells Using Tasks with Synthetic Periods," Proc. IEEE Real-Time Systems Symp., 2003.

[22] L. Sha, R. Rajkumar, and J.P. Lehoczky, "Priority Inheritance Protocols: An Approach to Real-Time Synchronization," IEEE Trans. Computers, vol. 39, no. 9, pp. 1175-1185, Sept. 1990.

[23] P. Li, H. Wu, B. Ravindran, and E.D. Jensen, "A Utility Accrual Scheduling Algorithm for Real-Time Activities with Mutual Exclusion Resource Constraints," IEEE Trans. Computers, vol. 55, no. 4, pp. 454-469, Apr. 2006.

[24] C.L. Liu and J.W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment," J. ACM, vol. 20, no. 1, pp. 46-61, 1973.

[25] L. George, N. Roivierre, and M. Spuri, "Preemptive and Non-Preemptive Real-Time Uni-Processor Scheduling," Rapport de Recherche RR-2966, INRIA, Le Chesnay Cedex, France, 1996.

[26] J.A. Stankovic, M. Spuri, K. Ramamritham, and G.C. Buttazzo, "Response Times under EDF Scheduling," Deadline Scheduling for Real-Time Systems, chapter 4, pp. 67-87. Kluwer Academic, 1998.

[27] J.A. Stankovic, M. Spuri, K. Ramamritham, and G.C. Buttazzo, "Fundamentals of EDF Scheduling," Deadline Scheduling for Real-Time Systems, chapter 3, pp. 27-67. Kluwer Academic, 1998.

[28] T.P. Baker, "Stack-Based Scheduling of Real-Time Processes," J. Real-Time Systems, vol. 3, no. 1, pp. 67-99, Mar. 1991.

[29] P. Li, B. Ravindran, S. Suhaib, and S. Feizabadi, "A Formally Verified Application-Level Framework for Real-Time Scheduling on POSIX Real-Time Operating Systems," IEEE Trans. Software Eng., vol. 30, no. 9, pp. 613-629, Sept. 2004.

[30] P. Li and B. Ravindran, "Fast, Best-Effort Real-Time Scheduling Algorithms," IEEE Trans. Computers, vol. 53, no. 9, pp. 1159-1175, Sept. 2004.

[31] H. Cho, "Utility Accrual Scheduling with Non-Blocking Synchronization on Uniprocessors and Multiprocessors," PhD dissertation proposal, Virginia Tech, 2005, http://www.ee.vt.edu/~real-time/cho_proposal05.pdf.

[32] J.D. Northcutt, Mechanisms for Reliable Distributed Real-Time Operating Systems: The Alpha Kernel. Academic Press, 1987.

[33] M. Dertouzos, "Control Robotics: The Procedural Control of Physical Processes," Information Processing, vol. 74, 1974.

[34] U. Balli, "Utility Accrual Real-Time Scheduling under Variable Cost Functions," master's thesis, Virginia Tech, 2005, http://scholar.lib.vt.edu/theses/available/etd-08052005-155355/.

[35] H. Wu, U. Balli, B. Ravindran, and E.D. Jensen, "Utility Accrual Real-Time Scheduling Under Variable Cost Functions," Proc. 11th Int'l Conf. Embedded and Real-Time Computing Systems and Applications (RTCSA), pp. 213-219, Aug. 2005.

Umut Balli received the BS (2003) and MS (2005) degrees in computer engineering, both from Virginia Polytechnic Institute and State University, Blacksburg. He is currently a consultant at Openwave Systems, Inc., Redwood City, California. His master's thesis research, funded by The MITRE Corporation, focused on variable cost functions in utility accrual real-time scheduling. His other research interests include real-time systems, resource management, and human-computer interaction.


Haisang Wu received the PhD degree in computer engineering from Virginia Tech in 2005 and the BE (cum laude) and MSE (summa cum laude) degrees, both in electronic engineering, from Tsinghua University, Beijing, China, in 1999 and 2002, respectively. He is currently a senior kernel engineer at Juniper Networks, Inc., Sunnyvale, California. His PhD dissertation research, funded by the US Office of Naval Research and The MITRE Corporation, focused on energy-efficient scheduling for real-time embedded systems. Some results from this research have been transitioned to programs MITRE is engaged in for the US Department of Defense. His other research interests include operating systems, distributed systems, real-time and network security, and network traffic engineering and routing. He has authored or coauthored more than 20 peer-reviewed publications in energy-efficient scheduling, real-time scheduling, distributed systems, and networking. He is a member of the IEEE.

Binoy Ravindran is an associate professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. He is fascinated and challenged by building adaptive real-time software, i.e., real-time software that can dynamically adapt to uncertainties in its operating environment and satisfy time constraints with application-specific predictability. Toward that end, he focuses on time/utility function (TUF)/utility accrual (UA) real-time scheduling and resource management, an adaptive time-critical resource management paradigm invented by E. Douglas Jensen (almost 35 years ago) and a central concept behind the Alpha distributed real-time kernel. He and his students have recently developed several new results on TUF/UA scheduling and resource management, including those on stochastic scheduling, distributed scheduling, energy consumption, memory management and garbage collection, and nonblocking synchronization. Many of these new results have been transitioned for use in US Department of Defense programs. He is a senior member of the IEEE and the IEEE Computer Society.

Jonathan Stephen Anderson received the BS degree in mathematics and physics from Eastern Nazarene College, Quincy, Massachusetts. He is a doctoral student at Virginia Tech and a lead systems engineer at the MITRE Corporation. He joined MITRE in 1998, working in support of several developmental and fielded US Air Force command and control systems, as well as internal research and development projects. His research interests are in the area of dynamic, distributed, time-constrained systems.

E. Douglas Jensen is the senior consulting scientist of the Computing and Software Division of the Technology and Innovation Directorate at the MITRE Corporation. His principal focus is on time-critical resource management in dynamic distributed object systems, particularly for combat platform and battle management applications. He directs and conducts research, performs technology transition, and consults on US Department of Defense programs. He joined MITRE from previous positions at HP, Digital Equipment, and other computer companies. From 1979 to 1987, he was on the faculty of the Computer Science Department at Carnegie Mellon University, where he created and directed the largest academic real-time research group of its time. Prior to that, he was employed in the real-time computer industry, where he engaged in research and advanced technology development of distributed real-time computer systems, hardware, and software. He is considered one of the original pioneers, and leading visionaries, of distributed real-time computer systems and is widely sought throughout the world as a speaker and consultant. He is a member of the IEEE.



