Efficiency and Optimality of Static Slowdown for Periodic Tasks in Real-Time Embedded Systems

Ravindra Jejurikar Rajesh K. Gupta

Center for Embedded Computer Systems, Department of Information and Computer Science,

University of California at Irvine, Irvine, CA 92697

E-mail: {jezz, rgupta}@ics.uci.edu

CECS Technical Report #02-03

March 18, 2002

Abstract

Slowdown factors determine the extent of slowdown a computing system can experience based on functional and performance requirements. Frequency scaling of a processor based on slowdown factors can lead to considerable power savings. The slowdown factors are computed from the task set executing on the processor. Slowdown factors which are independent of the workload are easy to implement, and can also be used in configuring parts of the hardware which cannot be altered at run time. However, computing the optimal constant slowdown factor is non-trivial for periodic tasks with deadlines not equal to their periods. We give a density slowdown method which efficiently computes a constant slowdown factor. We present an algorithm to compute the optimal constant slowdown factor for a task set; it is computed by looking at all the job intervals up to the hyper-period of the task set. Based on this, we compute the optimal static schedule, which builds upon the EDF schedule. We obtain up to 30% energy gains over the other techniques by deploying the optimal static slowdown factors. Dynamic slowdown techniques further enhance the power savings: we show that dynamic slowdown factors can be easily incorporated on top of the static slowdown factors, yielding up to 50% more energy savings over static slowdown alone.


Contents

1 Introduction
2 Preliminaries
  2.1 System Model
  2.2 Variable Speed Processors
  2.3 Motivating example
3 Computing the Optimal Static Slowdown Factors
  3.1 Density slowdown
    3.1.1 Deadline equals period
    3.1.2 Deadline less than period
  3.2 Optimal constant slowdown factors
    3.2.1 Periodic tasks with zero phase
    3.2.2 Bounds for periodic tasks with arbitrary phase
  3.3 Algorithm to compute the optimal constant slowdown
    3.3.1 Algorithm
    3.3.2 Time Complexity
  3.4 Optimal slowdown schedule
    3.4.1 YDS Optimal Schedule algorithm
    3.4.2 Optimal slowdown when D ≤ p
    3.4.3 Implementing within an RTOS
4 Addition of Dynamic Slowdown
5 Implementation and Experimental Results
  5.1 Static slowdown
  5.2 Computation time
  5.3 Dynamic slowdown
6 Conclusions and Future Work
A Appendix
  A.1 Periodic task set examples


List of Figures

1 Motivation for optimal static slowdown techniques: (a) task arrival times and deadlines; (b) slowdown s = 0.70, task T1,2 misses its deadline; (c) optimal constant slowdown s = 0.75; (d) optimal speed schedule; (e) optimal slowdown function.
2 Algorithm to compute the optimal constant slowdown for periodic tasks
3 Algorithm for computing the optimal power schedule under EDF scheduling
4 Generic simulator
5 Power function f(s) vs. s^2
6 Comparison of the energy consumption using the static slowdown factors computed by various techniques. All jobs have a workload equal ...
7 Comparison of the optimal constant slowdown to the optimal slowdown schedule
8 Percentage gains of the optimal constant slowdown over the density slowdown
9 Percentage gain of the optimal slowdown over the RMA slowdown
10 Dynamic slowdown for the INS task set with deadlines reduced to 90% of the original deadline: (a) dynamic slowdown and energy consumption in each technique; (b) comparison of deadline misses (%); (c) energy saving (%) gained by implementing dynamic slowdown over the static slowdown.
11 Dynamic slowdown for the avionics task set with deadlines reduced to 80% of the original deadline: (a) dynamic slowdown and energy consumption in each technique; (b) comparison of deadline misses (%); (c) energy saving (%) gained by implementing dynamic slowdown over the static slowdown.

List of Tables

1 Computation time for the optimal constant slowdown factor
2 Computation time for the optimal slowdown schedule


1 Introduction

Power is one of the important metrics for optimization in the design and operation of embedded systems. There are two primary ways to reduce power consumption in embedded computing systems: processor shutdown and processor slowdown. Slowdown using frequency or voltage scaling is the more effective of the two in reducing power consumption. Scaling the frequency and voltage of a processor leads to an increase in the execution time of a job. In real-time systems, we want to minimize energy while adhering to the deadlines of the tasks. Power and deadlines are often contradictory goals, and we have to judiciously manage time and power to achieve our goal of minimizing energy. DVS (Dynamic Voltage Scaling) techniques exploit the idle time of the processor to reduce the energy consumption of a system. We deal with computing the optimal voltage schedule for a periodic task set.

In this paper, we focus on system-level power management via the computation of static slowdown factors. We assume a real-time system where the tasks run periodically and have deadlines. These jobs are to be scheduled on a single-processor system based on a preemptive scheduler such as EDF [11] or the Rate Monotonic Scheduler (RMS) [10]. Shin and Choi [17] have presented a power-conscious fixed-priority scheduler for hard real-time systems. Due to the periodic nature of the jobs, the arrival time of the next job is known. If there is only one job in the ready queue, the processor is slowed down so that the job finishes just before the next job arrival. Further enhancements [18] exploit the idea of uniform slowdown: rate monotonic analysis is performed on the task set to compute a constant static slowdown factor for the processor. Gruian [5] observed that performing more iterations gives better slowdown factors for the individual task types.

Yao, Demers and Shenker [19] presented an optimal off-line speed schedule for a set of N jobs. The running time of their algorithm is O(N^2) and can be reduced to O(N log^2 N) by the use of segment trees [14]. The analysis and correctness of the algorithm are based on an underlying EDF scheduler, which is an optimal scheduler [11]. An optimal schedule for tasks with different power consumption characteristics is considered by Aydin, Melhem and Mosse [1]. The authors [2] have proven that the utilization factor is the optimal slowdown when the deadline is equal to the period. Quan and Hu [15] discuss off-line algorithms for the case of fixed-priority scheduling.

Since the worst-case execution time (WCET) of a task is not usually reached, there is dynamic slack in the system. Pillai and Shin [13] recalculate the slowdown when a task finishes before its worst-case execution time, using the dynamic slack while still meeting the deadlines.

All the above techniques guarantee meeting the deadlines. Systems where it is critical to meet all the deadlines are termed hard real-time systems. On the other hand, there are systems, such as communication devices and multimedia, where we can afford to miss a few deadlines. Such systems are termed soft real-time systems. One can implement aggressive power saving techniques for soft real-time systems, where the run-time characteristics of the tasks can be exploited for more energy savings. Kumar and Srivastava [8] extended the work by Shin [17] by predicting the execution time of the job. A comprehensive work by Raghunathan and Srivastava [16] explains a generalized framework to incorporate static and dynamic slowdown factors.

In this paper, we consider the problem of scheduling periodic tasks whose deadlines can be less than their periods. We give a constant slowdown technique called the density slowdown. We show that, while not optimal, this slowdown factor can be efficiently computed and leads to good power saving results in practice. We also give an off-line algorithm to compute the optimal constant slowdown factor for periodic tasks. Based on this, we give an algorithm to compute the optimal slowdown schedule.


We gain as much as 40% energy savings over the RMA slowdown technique. The optimal constant slowdown has a gain of 30% over the density slowdown. We show that these techniques can be easily augmented by dynamic slowdown techniques to further enhance the energy savings. On average, the dynamic slowdown factors result in a further saving of 25% over the static slowdown. They can be easily implemented in an RTOS, leading to energy efficient systems.

The rest of the paper is organized as follows: Section 2 formulates the problem with a motivating example. In Section 3, we give the algorithms and prove their correctness. Section 4 deals with using dynamic slowdown factors over the static slowdown factors. The implementation and experimental results are given in Section 5. Section 6 concludes the paper with future directions.

2 Preliminaries

2.1 System Model

A periodic task set of n periodic real-time tasks is represented as T = {T1, ..., Tn}. Each task Ti is represented by a 3-tuple (pi, Di, ei), where pi is the period of the task, Di is the relative deadline, and ei is the WCET of the task. Each invocation of a task is called a job, and the jth invocation of task Ti is denoted Ti,j. All tasks are assumed to be independent and preemptive. An aperiodic task set is represented as a set of N jobs. A job Jk is represented by a 3-tuple (ak, bk, ek), where ak is the arrival time of the job, bk is the deadline of the job with bk > ak, and ek is the maximum number of cycles (WCET) required by the job. The time interval [ak, bk] is referred to as the job interval, and ek is the weight of the interval. The hyper-period of a periodic task set is defined as the least common multiple (lcm) of the periods of all the tasks.

The tasks are scheduled on a single processor which supports multiple frequencies. Every frequency level has a power consumption value and is also referred to as a power state of the processor. Our aim is to schedule the given task set and the processor speed such that all tasks meet their deadlines and the energy consumption is minimized. The processor speed can be varied to minimize energy usage. The slowdown factor at a given instance is the ratio of the scheduled speed to the maximum speed. If the processor speed is a constant value over the entire time interval, it is called a constant slowdown. We refer to the set of slowdown factors over a given time interval as the speed schedule for that interval. The execution time of a job is inversely proportional to the processor speed. The goal is to have a speed schedule for the processor which minimizes the energy consumption while meeting all deadlines.

We list the definitions and terms used in the rest of the paper. The utilization factor of a task Ti is defined as ui = ei/pi. The processor utilization U of a task set is the sum of the utilization factors of its tasks, U = Σ(i=1..n) ui. U ≤ 1 is a necessary condition for the feasibility of any schedule [11]. The density of a task is defined as the ratio of its execution time to the minimum of its period and deadline, density(Ti) = ei / min(pi, Di). The density of the system is defined as the sum of the densities of the tasks in the system; it is denoted by Δ = Σ(i=1..n) density(Ti). The phase φi of a periodic task Ti is the release time of the first instance (job Ti,1) of the task. A set of tasks is said to be in phase if all tasks have the same release time.
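As a concrete illustration, the utilization and density definitions above translate directly into code. The sketch below is not from the paper; it assumes the (pi, Di, ei) task representation of the system model and uses the two-task set that appears later in the motivating example.

```python
# Sketch of the utilization and density definitions; a task is assumed
# to be a (p, D, e) tuple as in the system model.
def utilization(tasks):
    # U = sum of u_i = e_i / p_i over all tasks
    return sum(e / p for (p, D, e) in tasks)

def density(tasks):
    # density(T_i) = e_i / min(p_i, D_i), summed over all tasks
    return sum(e / min(p, D) for (p, D, e) in tasks)

tasks = [(2, 2, 1), (5, 3, 1)]   # task set of the motivating example
print(utilization(tasks))        # 0.7: U <= 1, so a feasible schedule exists
print(density(tasks))            # 0.8333...: density of the system
```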

Assuming all jobs are represented by their job intervals along the time line, we define the following.


Intensity of an interval: The intensity of an interval I = [z, z'], denoted by g(I), is defined [19] as

    g(I) = ( Σk ek ) / ( z' − z )        (1)

where the sum is taken over all job intervals Jk with [ak, bk] ⊆ [z, z'], i.e. all jobs with their intervals lying completely within [z, z']. g(I) is a lower bound on the average processing speed in the interval I under any feasible schedule.

Critical interval: The interval I* that maximizes g(I) is called the critical interval for the job set J. The set of jobs JI* = { k : [ak, bk] ⊆ I* } is called the critical job set.

Any time interval is represented by an ordered pair [a, b], and it is always true that b > a.
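The intensity definition can be sketched in a few lines; the code below is illustrative (jobs are assumed to be (ak, bk, ek) tuples) and evaluates g(I) by summing the work of the jobs lying completely inside the interval.

```python
def intensity(jobs, z, z2):
    # g(I) for I = [z, z2]: total work of job intervals [a, b] contained
    # in [z, z2], divided by the interval length (equation (1))
    work = sum(e for (a, b, e) in jobs if z <= a and b <= z2)
    return work / (z2 - z)

# A few job intervals of the motivating example that follows:
jobs = [(0, 2, 1), (2, 4, 1), (0, 3, 1), (4, 6, 1)]
print(intensity(jobs, 0, 4))   # 0.75: three unit jobs lie inside [0, 4]
```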

2.2 Variable Speed Processors

A wide range of processors support variable voltage and frequency levels. Voltage and frequency levels are coupled together: when we change the speed of a processor we change its operating frequency, and we proportionately change the voltage to a value which is supported at that operating frequency. The important point to note is that when we perform a slowdown we change both the frequency and the voltage of the processor. We use the terms slowdown state and power state interchangeably. We assume that the speed can be varied continuously from Smin to the maximum supported speed Smax. We normalize the speed to the maximum speed to obtain a continuous operating range of [smin, 1], where smin = Smin/Smax.

We define a slowdown function f(t) over a time interval. A slowdown function over a time interval [ts, tf] is a function t → s, from time to slowdown factor, where t ∈ [ts, tf], t takes integer values and s ∈ [smin, 1]. This function gives the slowdown value at a time instance t. The slowdown function has values at discrete time intervals, i.e. the speed cannot change within a time interval [ti, ti+1]. A slowdown function is used to indicate changes in processor speed. The function f(t) can be stored as a set of (ti, si) pairs, where time ti indicates a change in the slowdown factor to value si. We define the size of the slowdown function as the minimum number of (t, s) pairs needed to represent the slowdown function. The size of the slowdown function is also referred to as the size of the solution.
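A slowdown function stored as (ti, si) change points can be queried with a simple binary search. This is a sketch, not part of the paper; the pair values are borrowed from the optimal slowdown function derived in the motivating example that follows.

```python
import bisect

def speed_at(f, t):
    # f is a list of (t_i, s_i) pairs sorted by t_i; the speed at time t
    # is the s_i of the latest change point with t_i <= t
    i = bisect.bisect_right([ti for (ti, si) in f], t) - 1
    return f[i][1]

f = [(0, 0.75), (8, 0.5)]     # a slowdown function of size 2
print(speed_at(f, 3.0))       # 0.75
print(speed_at(f, 9.0))       # 0.5
```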

2.3 Motivating example

Consider a simple real-time system with 2 periodic tasks having the following parameters:

    T1 = (2, 2, 1),  T2 = (5, 3, 1)        (2)

This task set is shown in figure 1(a). The jobs of each task are shown at their arrival times with their WCET at full speed. We have explicitly shown the deadlines where they differ from the period. The jobs are assumed to be scheduled on a single processor by an EDF scheduler. There is a feasible EDF schedule for the task set at full speed. The processor utilization for this task set is U = 1/2 + 1/5 = 0.7. If the processor utilization U is used as the slowdown factor, job J1,2 misses its deadline, as shown in figure 1(b). Three units of work have to be done in the first 4 time units. At a slowdown of 0.7, this requires 3/0.7 = 4.285 time units and one task misses its deadline. It is clear that the utilization cannot be used as a slowdown factor. Note that there are three jobs of unit workload to be finished within the interval



Figure 1. Motivation for optimal static slowdown techniques: (a) task arrival times and deadlines; (b) slowdown s = 0.70, task T1,2 misses its deadline; (c) optimal constant slowdown s = 0.75; (d) optimal speed schedule; (e) optimal slowdown function.

[0, 4]. So a lower bound on the constant slowdown factor is 3/4 = 0.75. A schedule with a constant slowdown of s = 0.75 is shown in part (c). This is the optimal constant slowdown, in that any constant slowdown less than 0.75 results in a deadline miss.

We guarantee meeting all deadlines at a constant slowdown factor of s = 0.75. However, there is inherent slack in the system: the system remains idle in the time interval [9.33, 10], as shown in figure 1(c). We can utilize this slack to get improved slowdown factors. This will result in the slowdown values being a function f(t) over time. The optimal slowdown schedule is shown in part (d) of figure 1. The slowdown factor is 0.75 in the interval [0, 8] and 0.5 in the interval [8, 10]. If we lower the speed at any point, it will result in a deadline miss. The slowdown function for the optimal solution is shown in part (e). The slowdown function for the time interval [0, 10] is represented as {(0, 0.75), (8, 0.5)} and has a size of 2.
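The arithmetic of the example can be checked directly; this small sketch (values as in figure 1) reproduces the three key numbers of the argument above.

```python
tasks = [(2, 2, 1), (5, 3, 1)]            # (p, D, e) tuples, equation (2)
U = sum(e / p for (p, D, e) in tasks)     # processor utilization 0.7
print(3 / U)       # 4.2857...: three unit jobs cannot fit in [0, 4] at s = 0.7
print(3 / 4)       # 0.75: lower bound, and the optimal constant slowdown
print(7 / 0.75)    # 9.333...: at s = 0.75 the 7 jobs up to the hyper-period
                   # finish at t = 9.33, leaving the idle interval [9.33, 10]
```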

3 Computing the Optimal Static Slowdown Factors

We compute the optimal slowdown factor for a system with an underlying EDF scheduler. In this section, we give an algorithm to compute the optimal constant slowdown factor.


3.1 Density slowdown

A constant slowdown of the processor is a desirable feature. There is an overhead associated with changing power states, and a constant slowdown eliminates this overhead. A constant slowdown is also necessary if the resource does not support run-time changes in the operating speed.

3.1.1 Deadline equals period

It is known that a system of independent, preemptive tasks with relative deadlines equal to their respective periods can be feasibly scheduled on a resource (processor) if and only if their total utilization U is less than or equal to 1 [11]. Given a task set with D = p, we can slow down the processor by a factor of U [2]. This slowdown increases the processor utilization to 1. We can schedule the task set at this slowdown state and guarantee feasibility. This is the optimal slowdown factor for the task set.

For the case of deadlines greater than the period, setting the slowdown factor to U ensures the completion of the jobs by the end of their periods. As D ≥ p, none of the deadlines are missed. Furthermore, the processor utilization is a lower bound on the slowdown factor (by the definition of the utilization factor). This implies that s = U is also optimal for the case D ≥ p.

Thus for the case D ≥ p, s = U is optimal and can be efficiently computed.

3.1.2 Deadline less than period

For the case D < p, a necessary and sufficient condition for the task set to be schedulable is not known. The task set is schedulable if the density of the system satisfies Δ ≤ 1 [11]. If Δ ≤ 1, we set the slowdown factor to s = Δ; else (Δ > 1) we let the processor run at full speed (s = 1). In other words, the slowdown of the system is s = min(1, Δ). We call this constant slowdown the density slowdown.

Lemma 1 Under EDF scheduling, a constant slowdown of s = min(1, Δ) will not introduce a deadline miss.

Proof 1 We lower the speed of the processor only in the case Δ ≤ 1. After the slowdown, the new density of the system increases to 1. As the system is schedulable when its density is at most 1, the task set remains schedulable. Thus slowing down the processor does not introduce any deadline miss.

The time complexity of computing the density slowdown is O(n), where n is the number of tasks in the system. The density of each task is computed in constant time, and we add the densities to compute the density of the system.
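The density slowdown rule is a one-liner over the task set. The sketch below (assuming the (p, D, e) task tuples of the system model) also shows on the motivating example that the resulting factor, about 0.83, is safe but not optimal there: the optimal constant slowdown of that example is 0.75.

```python
def density_slowdown(tasks):
    # s = min(1, density of the system); O(n) for n tasks
    delta = sum(e / min(p, D) for (p, D, e) in tasks)
    return min(1.0, delta)

print(density_slowdown([(2, 2, 1), (5, 3, 1)]))   # 0.8333... (optimal is 0.75)
```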

3.2 Optimal constant slowdown factors

We now consider the optimal constant slowdown factor for the case D < p. The density slowdown is not optimal, and we give an algorithm to compute the optimal constant slowdown factor for a periodic task set.


Theorem 2 ([19]) Given that I* is a critical interval for a job set J and S is the optimal schedule for J, the maximum speed of S is g(I*), and S runs at that speed for the entire interval I*. This theorem is given by Yao, Demers and Shenker [19].

This theorem gives an optimal constant slowdown factor for a given set of jobs J.

3.2.1 Periodic tasks with zero phase

Given a periodic task set T = {T1, T2, ..., Tn}, we give an algorithm to compute the optimal constant slowdown factor. The optimal constant slowdown factor is computed by looking at all the job intervals up to the hyper-period of the task set. Given a periodic task Ti = (pi, Di, ei), we generate the job intervals of this task up to the hyper-period. The job intervals are Ti,k = (ai,k, ai,k + Di, ei), where ai,k = (k − 1) pi is the arrival time of the kth job. The critical intensity of this job set up to the hyper-period is the optimal constant slowdown factor. We first present the lemmas which are used in proving the result.

Lemma 3 Let g be the intensity function for a periodic task set with an arbitrary phase for each task, and let g0 be the intensity function for the same task set with the phase of each task set to zero. For any interval [ax, by],

    g([ax, by]) ≤ g0([0, by − ax])        (3)

Proof 2 Consider an interval [ax, by] for the given task set and let l = by − ax be the length of the interval. To compute the intensity of a given interval we consider all the job intervals lying completely within it. Let ci(l) be the number of jobs of task Ti in the interval [ax, by]. An upper bound on the number of jobs in this interval is given by

    ci(l) ≤ ⌈l/pi⌉    if pi (⌈l/pi⌉ − 1) + Di ≤ l
    ci(l) ≤ ⌊l/pi⌋    otherwise

This bound is achieved when a job arrives at the beginning of the time interval, viz. at time instance ax. When all tasks have zero phase, a job of each task arrives at time zero and this bound is achieved for the time interval [0, l]. Thus the number of jobs of task Ti with φi = 0 within the interval [0, l] is greater than or equal to the number of jobs in the interval [ax, by]. Hence the workload, and thereby the intensity g0, of the interval [0, l] is greater than or equal to the intensity g of the interval [ax, by].

Lemma 4 Given a periodic task set with zero phase, let H be the hyper-period and let g* be the maximum intensity over all intervals [0, h] with h ≤ H. Then for any time instance l, g* ≥ g([0, l]).

Proof 3 Consider an interval [0, l] with l > H. We prove that the intensity of the interval [0, l] is less than or equal to g*. We break this interval into the two intervals [0, H] and [H, l]. Since the job sequence, and hence the job intervals, repeat in every hyper-period, all earlier jobs finish before the hyper-period, and no job interval crosses the hyper-period time instance. We can therefore partition the jobs at time t = H into the job intervals up to H and those from H to l, and the intensity over the interval [0, l] is the weighted average of the intensities of the two intervals. Since H is the hyper-period and all tasks have zero phase, the intensity of the interval [H, l] is the same as the intensity of the interval [0, l − H]. So both intervals have an intensity no greater than g*, and the intensity of the complete interval, being their weighted average, is also no greater than g*. This proves our claim.

Lemma 5 Given a periodic task set and a time instance tc, let gmax be the critical intensity of the interval [0, tc]. Then gmax is the maximum over the intensities of the intervals [0, b], where b is the deadline of some job interval and b ≤ tc.

Proof 4 By lemma 3, the critical interval begins at time instance zero, so the critical intensity up to time instance tc is the maximum intensity over all intervals [0, t] with 0 < t ≤ tc. We show that we need consider only those time intervals [0, b] where b is the deadline of some job. Consider the jobs sorted by their deadlines, and let di and di+1 be the deadlines of two consecutive jobs, with di < di+1. For all time instances t such that di ≤ t < di+1, the workload over the interval [0, t] is the same as the workload over the interval [0, di]. This is true because the execution time of a job is added to the workload of an interval only if the job interval lies completely within that interval. Since t ≥ di and both intervals have the same workload, we have g([0, t]) ≤ g([0, di]). Hence it suffices to look at the time instances b where b is the deadline of some job in the interval [0, tc].

Theorem 6 Let g* be the maximum intensity over all intervals [0, b], where b is the deadline of some job and b ≤ H. A constant slowdown of s = g* is the optimal constant slowdown.

Proof 5 By lemma 3, the critical interval begins at time instance zero. We are given that g* is the maximum intensity over all intervals [0, b] where b is the deadline of some job. By lemma 5, this is the critical intensity up to the hyper-period. Lemma 4 states that the critical intensity g* up to the hyper-period is the maximum intensity ever reached. Thus this constant slowdown guarantees meeting all deadlines. The intensity over an interval is a lower bound on the processor speed in that interval. Hence g* is a lower bound on the constant slowdown. This proves the optimality of g*.

Optimal-constant-slowdown(T1, ..., Tn):
(1)  H = lcm(p1, ..., pn);
(2)  a job Jk is a 3-tuple (ak, bk, ek);
(3)  job Ti,j = ((j − 1) pi, (j − 1) pi + Di, ei);
(5)  J = { Ti,j : 1 ≤ i ≤ n, 1 ≤ j ≤ H/pi };
(6)  sort the job intervals Jk ∈ J by deadline bk;
(7)  let J = { Jk : 1 ≤ k ≤ N }
(8)      with bi ≤ bj whenever i < j;
(9)  gk = (Σ i=1..k ei) / bk;
(10) s* = max k gk;
(11) return s*;

Figure 2. Algorithm to compute the optimal constant slowdown for periodic tasks

3.2.2 Bounds for periodic tasks with arbitrary phase

Let T = {T1, ..., Tn} be a periodic task set and let φk be the phase associated with each task Tk. An immediate result is that the optimal constant slowdown obtained by setting the phase values to zero is an upper bound on the constant slowdown factor for the task set T.

Lemma 7 Transform the task set by setting the phase of each task to zero. The optimal constant slowdown factor for the transformed task set is an upper bound on the constant slowdown factor.

Proof 6 Let g0* be the optimal constant slowdown factor for the transformed task set. Let g represent the intensity function for the given task set with the specified phase values, and let g0 be the intensity function for the transformed task set obtained by setting the phase of each task to zero. By lemma 3, the intensity of any interval [ax, by] in the given task set is less than or equal to the intensity of the interval [0, by − ax] in the transformed task set, i.e. g([ax, by]) ≤ g0([0, by − ax]). By theorem 6, the transformed task set always satisfies g0([0, by − ax]) ≤ g0*. From these two inequalities it follows that for any interval [ax, by], g([ax, by]) ≤ g0*. Thus g0* is an upper bound on the constant slowdown factor.

3.3 Algorithm to compute the optimal constant slowdown

Theorem 6 gives an algorithm to compute the optimal constant slowdown for n periodic tasks. We describe the algorithm and give its time complexity.

3.3.1 Algorithm

The algorithm to compute the optimal constant slowdown is given in figure 2. The hyper-period of the task set is the lcm of the periods of the tasks and is calculated in line (1). For each task Ti, the job intervals Ti,j up to the hyper-period H are generated in line (5). A job interval is a 3-tuple Jk = (ak, bk, ek), where ak is the arrival time, bk is the deadline and ek is the WCET of the job. In line (6), the job intervals are sorted by their deadline bk. Let N be the total number of job intervals up to H. Since the job intervals are sorted by their deadline, all the earlier job intervals Ji, 0 < i <= k, lie in the interval [0, bk]. Hence the workload in this interval is the sum of the workloads of all the earlier job intervals, and the intensity up to a job interval is this workload divided by its deadline; this is expressed in line (9). We compute the maximum over the intensity values gk; the maximum intensity is the optimal constant slowdown.
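The sorted pass of lines (6)-(10) can be sketched in a few lines; the function name and job-tuple layout below are illustrative, not from the report:

```python
def optimal_constant_slowdown(jobs_sorted):
    """jobs_sorted: job intervals (arrival a_k, deadline b_k, wcet e_k),
    sorted by deadline b_k. Returns max_k (sum_{i<=k} e_i) / b_k."""
    workload = 0.0
    best = 0.0
    for a, b, e in jobs_sorted:
        workload += e                    # work that must complete by b_k
        best = max(best, workload / b)   # intensity g_k of interval [0, b_k]
    return best
```

For example, three jobs with deadlines 4, 6 and 8 and WCETs 1, 2 and 1 give intensities 0.25, 0.5 and 0.5, so the optimal constant slowdown is 0.5.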

3.3.2 Time Complexity

We discuss the time complexity of the above algorithm. The algorithm consists of two main parts: sorting the N job intervals and computing the intensity for each interval.

Computing the maximum intensity: Given a sorted sequence of jobs, we can compute the intensity of each interval in time O(N). We iterate through the jobs Jk and keep a running sum of the workload up to the current job: for each job interval Jk, we add its workload to the sum for the previous job interval. The total workload divided by the deadline gives the intensity up to that job interval. As we compute the intensity of each interval, we also keep track of the maximum intensity seen so far. Each job interval is visited once and the operations at each job interval take constant time. Thus the subroutine takes O(N) time to compute the critical intensity for a job set sorted by deadline.

Sorting the job intervals: Given a job set J of N job intervals, a trivial way to sort is to generate all job intervals and perform a sort operation on J. This requires time O(N log N) [4]. However, we can sort faster by using the fact that the job intervals are generated from n periodic tasks. We maintain a heap containing a single pending job of each task type, keyed by the deadline of the job interval, and initialized with the first job of each task. The find-min operation [4] on the heap gives the job interval with the minimum deadline. This job interval is deleted from the heap and the next job of the same task is added. Correctness is ensured because the deadlines of the jobs of a task keep increasing. The heap has at most n elements, so the find-min, delete-min and insert operations can each be done in time O(log n) [4]. Since these operations are performed once per job interval, the time required for sorting is O(N log n). As n <= N, this is faster than the trivial sorting subroutine.
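The heap-based merge above can be sketched with a standard binary heap; the heap never holds more than one pending job per task. Names and tuple layout are illustrative:

```python
import heapq

def jobs_sorted_by_deadline(tasks, H):
    """tasks: list of (period p_i, deadline D_i, wcet e_i).
    Yields job intervals (arrival, deadline, wcet) in deadline order,
    generating jobs up to the hyper-period H."""
    # initialize with the first job of each task, keyed by deadline
    heap = [(d, 0, i) for i, (p, d, e) in enumerate(tasks)]
    heapq.heapify(heap)
    while heap:
        b, a, i = heapq.heappop(heap)     # find-min + delete-min: O(log n)
        p, d, e = tasks[i]
        yield (a, b, e)
        if a + p < H:                     # add the next job of the same task
            heapq.heappush(heap, (a + p + d, a + p, i))
```

With tasks (p=4, D=4) and (p=6, D=6) and H = 12, the five jobs come out with deadlines 4, 6, 8, 12, 12.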

It is seen that sorting is the dominant factor and the complexity of the given algorithm is O(N log n).


Optimal-EDF-Schedule(task set T):
(1) H = lcm(p1, ..., pn);
(2) A job interval Jk is a 3-tuple (ak, bk, ek);
(3) Ti,j = ((j-1)*pi, (j-1)*pi + Di, ei);
(4) J = U_{i=1..n} U_{j=1..H/pi} Ti,j;
(5) f(t) = YDS Optimal-Schedule(J);
(6) return the optimal slowdown function f(t);

Figure 3. Algorithm for computing optimal power schedule under EDF scheduling

3.4 Optimal slowdown schedule

Given a task set, we are interested in computing the optimal slowdown function f(t). In the previous section, we saw that constant static slowdown factors are optimal for the case D = p. In this section, we compute the optimal slowdown schedule when D <= p. It is based on Yao, Demers and Shenker's optimal off-line scheduling algorithm for aperiodic tasks [19], referred to hereafter as the YDS Optimal Schedule algorithm.

3.4.1 YDS Optimal Schedule algorithm

Given a set of N jobs with their arrival times and relative deadlines, the optimal slowdown for this job set can be computed by the YDS Optimal Schedule algorithm [19]. In the algorithm, the jobs are represented as intervals called job intervals. A job interval is represented by a 3-tuple (ai, ei, di), ai being the arrival time of the job, di the deadline, and ei the number of cycles (workload) of the job. The solution is a slowdown function based on an EDF scheduler. The algorithm runs in time polynomial in N; the details and the correctness of the algorithm are given in [19]. The algorithm takes as input a set of jobs over a time period and returns a slowdown function f(t) over this time period.
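The YDS algorithm repeatedly extracts the "critical interval" of maximum intensity, runs its jobs at that intensity, removes them, and compresses the remaining timeline. A compact, unoptimized sketch is below; the function name and the (speed, interval length) output representation are ours, not from [19]:

```python
def yds_schedule(jobs):
    """jobs: list of (arrival, deadline, cycles).
    Returns [(speed, length), ...] for successive critical intervals."""
    segments = []
    jobs = list(jobs)
    while jobs:
        best = None
        for t1 in {a for a, d, e in jobs}:
            for t2 in {d for a, d, e in jobs}:
                if t2 <= t1:
                    continue
                # work of jobs that must run entirely inside [t1, t2]
                work = sum(e for a, d, e in jobs if t1 <= a and d <= t2)
                g = work / (t2 - t1)
                if best is None or g > best[0]:
                    best = (g, t1, t2)
        g, t1, t2 = best
        segments.append((g, t2 - t1))
        # drop the scheduled jobs and compress time by the interval length
        shrink = t2 - t1
        nxt = []
        for a, d, e in jobs:
            if t1 <= a and d <= t2:
                continue
            a2 = a if a <= t1 else max(t1, a - shrink)
            d2 = d if d <= t1 else max(t1, d - shrink)
            nxt.append((a2, d2, e))
        jobs = nxt
    return segments
```

For jobs (0, 5, 4) and (0, 10, 2), the critical interval [0, 5] gets speed 0.8 and the remaining time gets speed 0.4, matching the intuition that the tighter job dominates its window.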

3.4.2 Optimal Slowdown when D <= p

The procedure to compute the optimal slowdown factors when D <= p is shown in figure 3. The hyper-period H is the least common multiple (lcm) of the periods of all the tasks; it is computed in line 1 of figure 3. For each task Ti, we consider all job intervals up to the hyper-period; this job set is constructed in line 4 of figure 3. We compute the optimal slowdown factors for this job set by the YDS optimal schedule algorithm in line 5. It computes a slowdown function f(t), giving optimal slowdown values in the interval [0, H - 1].
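The job-set construction of lines (1)-(4) can be sketched as follows; the tuple layout is our assumption:

```python
from functools import reduce
from math import gcd

def jobs_up_to_hyperperiod(tasks):
    """tasks: list of (period p_i, deadline D_i, wcet e_i).
    Returns the hyper-period H and all job intervals released in [0, H)."""
    H = reduce(lambda x, y: x * y // gcd(x, y), [p for p, d, e in tasks])
    jobs = [(j * p, j * p + d, e)        # (arrival, deadline, cycles)
            for p, d, e in tasks
            for j in range(H // p)]
    return H, jobs
```

This job set is then handed to the YDS algorithm to obtain the slowdown function over one hyper-period.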


The YDS optimal schedule algorithm is optimal for the given set of jobs; in our case, this comprises all jobs up to the hyper-period. The same job sequence is repeated in every future hyper-period interval, and the same slowdown function is used in every hyper-period interval. As the same job sequence is repeated, this gives an optimal slowdown function. The algorithm looks at all jobs up to the hyper-period to compute the optimal schedule. This can potentially involve an exponential number of jobs and result in a large computation time. However, this computation is done off-line.

3.4.3 Implementing within an RTOS

The computation to find the optimal schedule can be large, depending on the example. However, implementing the optimal slowdown policy in an RTOS requires only the slowdown function. The important metric for this technique is the size of the solution, which indicates the number of voltage changes along the time line. We have seen in our experiments that the size of the solution is small; on average it is only on the order of tens of entries. We store the solution in the operating system, and the memory overhead is only a few hundred bytes.
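The stored solution can be a small table of (time, speed) breakpoints, replayed every hyper-period. A sketch of such a regulator; the interface names are illustrative, not the report's API:

```python
import bisect

class StaticSpeedRegulator:
    """Piecewise-constant slowdown function over one hyper-period.
    times[i] is when speeds[i] takes effect; times[0] must be 0."""
    def __init__(self, times, speeds, hyperperiod):
        self.times, self.speeds, self.H = times, speeds, hyperperiod

    def speed_at(self, t):
        t %= self.H               # the schedule repeats every hyper-period
        return self.speeds[bisect.bisect_right(self.times, t) - 1]
```

With breakpoints [0, 50], speeds [0.6, 0.9] and a hyper-period of 100, the regulator issues slowdown 0.6 in [0, 50) and 0.9 in [50, 100), in every hyper-period.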

4 Addition of Dynamic Slowdown

The worst case execution time of a job is rarely reached, which results in dynamic slack in the system. We can use this dynamic slack to slow down the processor further for more energy savings. Dynamic slowdown techniques [16] [13] are of two types: those that guarantee meeting all deadlines [13] and those that are allowed to miss some deadlines [8]. In the first type, the WCET is always assumed and only the slack already present is utilized. Some techniques [8] anticipate future slack to perform aggressive power management; these methods can end up missing a few deadlines.

We use the technique given by Kumar and Srivastava [8] to compute dynamic slowdown factors. This technique is meant for soft real-time systems. A history of the execution times of the jobs is maintained and this information is used to predict the next execution time. The WCET is divided into slots, each representing a window of the execution time. At the completion of each job, the slot corresponding to its execution time is incremented. The predicted time is the weighted average of these slots. The dynamic slowdown factor sd as a function of the predicted execution time epred is given as sd = epred / ei, where ei is the WCET.
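The slot-histogram predictor can be sketched as below. The structure follows the description above, but the slot count and the midpoint weighting are our assumptions about the details:

```python
class ExecTimePredictor:
    """Histogram of past execution times over `nslots` equal windows of
    the WCET; the prediction is the count-weighted average of the slot
    midpoints."""
    def __init__(self, wcet, nslots=10):
        self.wcet, self.nslots = wcet, nslots
        self.counts = [0] * nslots

    def record(self, exec_time):
        slot = min(int(exec_time * self.nslots / self.wcet), self.nslots - 1)
        self.counts[slot] += 1

    def predict(self):
        total = sum(self.counts)
        if total == 0:
            return self.wcet             # no history yet: assume WCET
        width = self.wcet / self.nslots
        return sum(c * (k + 0.5) * width
                   for k, c in enumerate(self.counts)) / total

    def dynamic_slowdown(self):
        return self.predict() / self.wcet   # s_d = e_pred / e_i
```

A task with WCET 100 that has repeatedly run for 50 cycles lands in the slot with midpoint 55, giving a dynamic slowdown factor of 0.55.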

The static factors ss are computed taking the worst case execution time into account. The dynamic factor sd represents the predicted execution time as a fraction of the WCET. If the prediction is correct, a further slowdown by a factor of sd still guarantees meeting the deadline. The new slowdown, taking sd into consideration, is s = ss * sd. Since sd <= 1, the slowdown computed at runtime is lower than the static slowdown factor, which achieves better power savings. However, we can incur some deadline misses due to wrong predictions.

5 Implementation and Experimental Results

We have written a simulator in PARSEC [9], a C-based discrete event simulation language, and implemented the scheduling techniques and algorithms in it. We have implemented a generic


simulator so that the same simulator can be used for all the scheduling algorithms. The simulator is shown in figure 4. It consists of two main entities: the task manager and the real-time operating system (RTOS). The task manager has the information of the entire task set; it generates jobs for each task type depending on its period and sends them to the RTOS entity.

[Figure: the task manager, with per-task execution-time knobs (worst case, random time, random distribution), generates jobs for tasks T1..Tn and sends them to the RTOS. The RTOS combines EDF/fixed-priority scheduling, static slowdown methods (no slowdown, density, optimal EDF, rate monotonic analysis), dynamic slowdown (none or prediction, with history tables), a static speed regulator, a profile manager and the processor resource, all running on the PARSEC simulation platform.]

Figure 4. Generic simulator

The RTOS is the heart of the simulator. It schedules the jobs on the resource (processor) and checks for deadline misses. It switches the processor from one slowdown state to another to minimize the energy consumed. The operating system supports APIs to change the scheduling policy and to select a particular static and dynamic slowdown algorithm. The implementation of our optimal EDF slowdown technique also needs a static speed regulator: the slowdown function is stored here, and this entity issues commands to change the speed of the processor according to the solution. The profile manager profiles the energy consumed by each task and calculates the total energy consumption of the system. It keeps track of all the relevant parameters, viz. energy consumed, missed deadlines, voltage changes and context switches. The OS also manages the history tables needed for predicting the execution times of tasks.

We use the power model given in [16] [8] to compute the energy usage of the system. The power P as a function of the slowdown s is given by

P = f(s) = 0.248 s^3 + 0.225 s^2 + 0.0256 s + (0.0064 s + 0.014112 s^2) * sqrt(311.16 s^2 + 282.24 s)   (4)

The above equation is obtained by substituting Vdd = 5V and Vth = 0.8V and equating the power and speed equations given below. The speed s is the inverse of the delay.

Pswitching = Ceff * Vdd^2 * f   (5)

Delay = k * Vdd / (Vdd - Vth)^2 = 1/f   (6)
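A numerical sketch of the power model of equation (4); the coefficients follow from Vdd = 5V and Vth = 0.8V, and at full speed the model normalizes to f(1) close to 1:

```python
from math import sqrt

def power(s):
    """Normalized power P = f(s) for slowdown factor 0 < s <= 1."""
    return (0.248 * s**3 + 0.225 * s**2 + 0.0256 * s
            + (0.0064 * s + 0.014112 * s**2) * sqrt(311.16 * s**2 + 282.24 * s))
```

Evaluating at s = 0.5 gives a power well below 0.5^2, which is why the function is said to track s^2 closely while still rewarding slowdown.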

The plot of the power function is shown in figure 5. It is seen that it tracks s^2 closely. The switching capacitance and the relation between gate delay and operating speed are used to accurately derive the power function.


[Plot: power P on the y-axis versus slowdown factor s on the x-axis, both from 0 to 1, comparing P = f(s) with P = s^2]

Figure 5. Power function f(s) vs. s^2

5.1 Static slowdown

We compare the processor energy usage for the following techniques:

RMA slowdown: The algorithm to compute the slowdown factors for each task is given by Gruian [5]. It computes the static factors by performing Rate Monotonic Analysis (RMA), looking at a window size equal to the maximum of the task periods. The case of D < p is also considered. This algorithm gives good results in practice.

Density slowdown: This algorithm sets a constant slowdown factor s = min(1, Δ), where Δ is the density of the task set. This is a constant slowdown technique and guarantees meeting the deadlines.

Optimal constant slowdown: The constant slowdown factor is computed by looking at a windowsize equal to the hyper-period of the task set. The algorithm is given in the previous section.

Optimal slowdown schedule: Here the optimal slowdown factors are computed by looking at allthe jobs up to the hyper-period of the task set. The algorithm gives the optimal energy schedule[6] for the periodic task set.
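The density slowdown above reduces to one pass over the task set. A sketch, assuming the usual density Δ = Σ e_i / min(D_i, p_i) (this formula is our assumption; the report defines density in its system-model section):

```python
def density_slowdown(tasks):
    """tasks: list of (period p_i, deadline D_i, wcet e_i).
    Constant slowdown s = min(1, density)."""
    density = sum(e / min(d, p) for p, d, e in tasks)
    return min(1.0, density)
```

When the density exceeds 1, as in the avionics example discussed below, the method performs no slowdown at all.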

The above algorithms were used for three application task sets: the Avionics task set [12], the INS (Inertial Navigation Control) task set [3] and the CNC (Computer Numerical Control) task set [7], which are the task sets used in [8] [17].

Since we are considering the case D < p, we have generated new task sets by decreasing the deadlines of the original task sets. Almost all the tasks in the examples have deadlines equal to the period. We obtained new examples by decreasing the deadline to a percentage of the original deadline: the new deadlines are set to 100%, 95%, 90%, 85%, 80% and 75% of the original deadline. We simulated the system for a time period equal to the hyper-period of the task set and computed the energy consumed in this period. Figure 6 shows the comparison of the energy consumption of the various techniques using static slowdown factors. The workload of every job is set to its WCET, which ensures the same workload for all cases. In all the techniques the processor is turned off whenever it is idle. The energy consumption for each example is shown in its respective graph in figure 6. The density slowdown consumes the maximum power in all the examples. The optimal constant slowdown


Figure 7. Comparison of the optimal constant slowdown to the optimal slowdown schedule

method performs considerably better and results in large savings. In the avionics task set the percentage gains seem to have decreased, contrary to our intuition. In this task set all the tasks have stringent deadlines compared to the task periods and the density of the system is high. Even in the case where the deadlines are not altered, the density of the system is greater than 1, so the density slowdown method does not perform any slowdown. As the deadline is made stricter the density of the system only increases, and hence the slowdown for the system remains 1; the density slowdown therefore consumes the same energy as the deadlines vary. In the optimal constant slowdown method, the constant slowdown factor increases as the deadlines decrease, which results in more energy consumption as deadlines are made stricter. This is why the percentage gains seem to be decreasing; in reality, the optimal constant slowdown continues to perform better than the density slowdown.

Figure 8. Percentage gains of the optimal constant slowdown over the density slowdown

The energy gains of the optimal constant slowdown over the density slowdown are as much as 30%; on average we see 20% energy savings, and the gains grow as the deadline decreases.

Among the known static slowdown techniques, the RMA slowdown is known to perform well. In figure 9, we show the percentage gains of the optimal slowdown method over the RMA slowdown. We see as much as 40% gains over the RMA slowdown, with considerable power savings in all the examples. The gains steadily increase with the decrease in deadline. This is a large gain and will result in energy-efficient devices, thereby increasing battery life.


Figure 9. Percentage gain of the optimal slowdown over the RMA slowdown

Table 1. Computation time for the optimal constant slowdown factor

Deadline D        Avionics      CNC           INS
D = 1.00 Dorig    0.0296 sec    0.0213 sec    0.0988 sec
D = 0.95 Dorig    0.0308 sec    0.0215 sec    0.1091 sec
D = 0.90 Dorig    0.0320 sec    0.0221 sec    0.1082 sec
D = 0.85 Dorig    0.0309 sec    0.0223 sec    0.1156 sec
D = 0.80 Dorig    0.0309 sec    0.0221 sec    0.1103 sec
D = 0.75 Dorig    0.0303 sec    0.0265 sec    0.1084 sec

5.2 Computation time

The computation time for the density slowdown method is almost negligible, as we only perform n division and n addition operations. The RMA slowdown also looks at a smaller window and has a running time on the order of tens of milliseconds. Since these times are very small compared to those of the optimal techniques, we have not explicitly shown them in the tables. The computation time for the optimal constant factor is orders of magnitude higher than that of the density slowdown, but many orders of magnitude lower than the computation of the optimal slowdown schedule. The computation time depends on the number of job intervals up to the hyper-period (N), which can be exponentially large. In practice, however, it is efficient and on average takes a fraction of a second. We conducted the experiments on a Sun Blade 100 (SPARC) running SunOS. The computation times for the examples are given in table 1. The off-line computation cost is independent of the deadline variation: even as the deadlines decrease, the number of job intervals remains the same, resulting in the same computation cost. However, we gain higher energy savings as the deadline becomes stricter.

The computation time for the optimal schedule is shown in table 2. It can be seen that the time to compute the optimal schedule is quite high, and the computation needs to be done off-line. The computation time depends on the number of jobs up to the hyper-period (N) and the number of iterations needed to compute the optimal schedule. The time for the INS example is large, while it is only a few seconds for the other


Table 2. Computation time for the optimal slowdown schedule

Deadline D        Avionics      CNC           INS
D = 1.00 Dorig    1.3846 sec    1.472 sec     1610.34 sec
D = 0.95 Dorig    4.6856 sec    2.9504 sec    1628.44 sec
D = 0.90 Dorig    4.6817 sec    2.9515 sec    1633.32 sec
D = 0.85 Dorig    4.6653 sec    2.9541 sec    1642.31 sec
D = 0.80 Dorig    4.6654 sec    3.383 sec     2123.24 sec
D = 0.75 Dorig    8.7276 sec    3.4105 sec    3542.03 sec

examples. For the INS example the hyper-period is comparable to that of the other examples; however, there is a task with a small period, which results in a large number of job intervals. Since the running time is a function of the number of job intervals, it is quite large. From the table it is seen that the off-line computation cost increases as the deadline is reduced. However, this extra computation cost is compensated by higher energy savings.

5.3 Dynamic slowdown

Dynamic slowdown can be used to utilize the dynamic slack; it assumes that the operating speed of the processor can be changed at runtime. We performed experiments with dynamic slowdown techniques to utilize this slack, using the execution time prediction technique [8] to calculate the dynamic factors. Similar to the work in [8] and [17], we vary the best case execution time (BCET) of a task as a percentage of its WCET. Execution times were generated by a Gaussian distribution with mean (WCET + BCET)/2 and standard deviation (WCET - BCET)/6. It is seen that the deadline misses are comparable to those of the other techniques while we have more energy savings: we continue to save about 20% more energy.
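The execution-time generation can be sketched as below; clamping the samples into [BCET, WCET] is our assumption to keep them valid:

```python
import random

def gen_exec_time(bcet, wcet, rng):
    """Gaussian execution time: mean (WCET+BCET)/2, sigma (WCET-BCET)/6."""
    mu = (wcet + bcet) / 2
    sigma = (wcet - bcet) / 6
    t = rng.gauss(mu, sigma)
    return min(max(t, bcet), wcet)   # clamp into [BCET, WCET]

rng = random.Random(0)               # seeded for reproducibility
samples = [gen_exec_time(20, 100, rng) for _ in range(1000)]
```

With this choice of sigma, almost all samples fall inside [BCET, WCET] on their own (three standard deviations on each side of the mean), and a smaller BCET means a wider spread and harder prediction, which matches the deadline-miss behavior discussed below.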

Figures 10 and 11 show the energy consumption, deadline misses and energy gains of dynamic slowdown. Figure 10 is a comparison for the INS task set where the deadline is reduced to 90% of the original deadline. As the BCET increases, the workload and hence the energy consumption increase. This is because the execution times are generated by a Gaussian distribution whose mean and deviation are derived from the BCET.

It is seen that the optimal constant slowdown algorithm continues to be more energy efficient than the density slowdown. Since we apply a deeper slowdown, mispredictions are more likely to result in a deadline miss, so we miss more deadlines than the density slowdown technique does. However, the percentage of deadline misses is not far from that of the density slowdown. The percentage of deadline misses is large for small values of BCET. This is due to the manner in which the mean and deviation are derived for the Gaussian distribution: for small values of BCET, the deviation (WCET - BCET)/6 is large and the execution times of jobs deviate more from the mean value. This wide variation in the execution time causes more mispredictions, resulting in more deadline misses. For higher values of BCET, the deviation is smaller and the prediction is more accurate. As the BCET increases, the percentage of deadline misses keeps decreasing and all the algorithms have almost no deadline misses. Thus for higher BCET values we get the energy gains for free.


Only a fraction of a second is needed to compute the optimal constant factor, and in 2 of the 3 examples, the extra energy consumed (compared to the optimal) is less than 3%. This makes the technique practically fast and very energy efficient. These techniques can be easily implemented in an RTOS, which will have a great impact on the energy utilization of portable and battery operated devices.

In the future, we plan to compute bounds on the constant slowdown factor by looking at a smaller window size. This will help develop faster algorithms to compute the optimal constant slowdown and the optimal slowdown schedule. We have computed the optimal constant slowdown factor for an EDF scheduler; as future work, we plan to compute the optimal schedules for other scheduling policies such as RMS and fixed priority scheduling. We will also implement the techniques in an RTOS such as eCos and measure the power consumed on a real processor.

References

[1] H. Aydin, R. Melhem, D. Mosse, and P. M. Alvarez. Determining optimal processor speeds forperiodic real-time tasks with different power characteristics. In Euromicro Conference on Real-Time Systems, Delft, Holland, June 2001.

[2] H. Aydin, R. Melhem, D. Mosse, and P. M. Alvarez. Dynamic and aggressive scheduling tech-niques for power-aware real-time systems. In Real-Time Systems Symposium, London, England,December 2001.

[3] A. Burns, K. Tindell, and A. Wellings. Effective analysis for engineering real-time fixed priority schedulers. IEEE Transactions on Software Engineering, 21(5), May 1995.

[4] T. Cormen, C. Leiserson, and R. Rivest. Introduction to Algorithms. Prentice-Hall, 1990.

[5] F. Gruian. Hard real-time scheduling for low-energy using stochastic data and DVS processors. In International Symposium on Low Power Electronics and Design, pages 46-51, 2001.

[6] R. Jejurikar and R. Gupta. Computing optimal static slowdown factors under EDF scheduling. InTechnical Report #02-02, University of California Irvine, Jan. 2002.

[7] N. Kim, N. Ryu, S. Hong, M. Saksena, C. Choi, and H. Shin. Visual assessment of a real-time system design: a case study on a CNC controller. In IEEE Real-Time Systems Symposium, Dec. 1996.

[8] P. Kumar and M. Srivastava. Predictive strategies for low-power rtos scheduling. In Proceedingsof IEEE International Conference on Computer Design: VLSI in Computers and Processors, pages343–348, 2000.

[9] Parallel Computing Laboratory, University of California Los Angeles. PARSEC: a C-based simulation language. http://pcl.cs.ucla.edu/projects/parsec.

[10] C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46-61, 1973.

[11] J. W. S. Liu. Real-Time Systems. Prentice-Hall, 2000.


[12] C. Locke, D. Vogel, and T. Mesler. Building a predictable avionics platform in ada: a case study.In Proceedings IEEE Real-Time Systems Symposium, 1991.

[13] P. Pillai and K. G. Shin. Real-time dynamic voltage scaling for low-power embedded operatingsystems. In Proceedings of 18th Symposium on Operating Systems Principles, 2001.

[14] F. P. Preparata and M. I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, 1985.

[15] G. Quan and X. Hu. Energy efficient fixed-priority scheduling for real-time systems on variablevoltage processors. In Proceedings of the Design Automation Conference, pages 828–833, June2001.

[16] V. Raghunathan, P. Spanos, and M. Srivastava. Adaptive power-fidelity in energy aware wirelessembedded systems. In IEEE Real-Time Systems Symposium, 2001.

[17] Y. Shin and K. Choi. Power conscious fixed priority scheduling for hard real-time systems. InProceedings of the Design Automation Conference, 1999.

[18] Y. Shin, K. Choi, and T. Sakurai. Power optimization of real-time embedded systems on variablespeed processors. In Proceeding of the International Conference on Computer-Aided Design, pages365–368, 2000.

[19] F. Yao, A. J. Demers, and S. Shenker. A scheduling model for reduced CPU energy. In IEEESymposium on Foundations of Computer Science, pages 374–382, 1995.


A Appendix

A.1 Periodic task set examples

The examples used in the experiments are given below. These are periodic task sets, and the periods, deadlines and WCETs are specified.

INS (Inertial Navigation Control) task set

------------------------------------------

Period Deadline WCET

2500 2500 1180

40000 40000 4280

625000 625000 10280

1000000 1000000 20280

1000000 1000000 100280

1250000 1250000 25000

CNC(Computer Numerical Control) task set

----------------------------------------

Period Deadline WCET

2400 2400 35

2400 2400 40

2400 2400 165

2400 2400 165

9600 4000 570

7800 4000 570

4800 4800 180

4800 4800 720


Avionics task set

-----------------

Period Deadline WCET

200000 5000 3000

25000 25000 2000

25000 25000 5000

40000 40000 1000

50000 50000 3000

50000 50000 5000

59000* 59000* 8000

80000 80000 9000

80000 80000 2000

100000 100000 5000

200000 200000 1000

200000 200000 3000

200000 200000 1000

200000 200000 1000

200000 200000 3000

1000000 1000000 1000

1000000 1000000 1000

Slight modification in the Avionics example. Note: In the avionics task set there is a task with period 59000. This causes the hyper-period of the task set to be very large, and the number of jobs to be considered increases drastically. We tried to compute the optimal slowdown factors with no modification; however, the process did not complete even after a few days. We therefore decreased the period of that task, which preserves the correctness of the slowdown factors, and computed the optimal slowdown factors for the modified task set. We set the new period and deadline of that particular task to 50000; the WCET is left unchanged. We ran the experiments with this modification.
