
CS 6501 Cyber Physical Systems Real-Time Scheduling Module John A. Stankovic Spring 2015.


CS 6501

Cyber Physical Systems

Real-Time Scheduling Module

John A. StankovicSpring 2015

Real-Time Scheduling

• Taxonomy/Tasks/Definitions
• Introduction to RT-Scheduling
  – RMA
  – EDF
  – Heuristics
  – Classical Notions (Operations Research)
  – Later Topic (Using Feedback Control)

Goals

• Infinite Variety for RT Scheduling

• Key Principles and Anomalies
• Key Algorithms
• Key Analysis

• Need to apply to CPS

Taxonomy

Real-Time Scheduling

Soft vs. Hard
  → Static (RMA) vs. Dynamic (EDF)
    → Centralized vs. Decentralized (multi-processor)

Real-Time Systems

• A real-time system is one in which the correctness of the system depends not only on the
  – logical result of computations, but also
  – on the time at which the results are produced.

Fast Computing

• Real-Time is not equal to fast computing

• There was the man who drowned in the river that had an average depth of six inches.

Real-Time

• Not real-time
  – Go as fast as possible
  – Minimize average response time

• Real-time
  – Does "a task" meet its time requirement (for each such task)?

What are Tasks?

[Figure: several processes/tasks running on top of the Operating System; each task has a PCB and is in one of the states Running, Ready, or Waiting]

Impact of OS

• System Calls – extra computation
• OS tasks – extra delays
• Interrupts – unpredictable (control)
• Semaphores

• Important: RT scheduling analysis focuses on logical analysis

Real-Time Constraints on Tasks

• Hard and Soft RT Tasks (per task)

• Safety critical

[Figure: value V as a function of time T for each deadline type – Firm (value drops to zero at the deadline) and Hard (value goes to minus infinity when a hard deadline is missed)]

Where do deadlines come from?

• Read a sensor and process the reading periodically (every reading is usually not critical, but treated as if it were)

• Process (analyze) current state of system and control actuator periodically

• Interrupts – push button/enter state – activates tasks – must complete by D– Close emergency door in nuclear plant– Incoming missile detected

• Etc.

Real-Time Scheduling Problem

• Set of tasks T1, T2, T3, …
• Set of constraints: D, WCET (C), …

• Schedulable – yes or no
• Create an order for task execution so that all the constraints are met
• Min or max an objective

• Example: For N independent tasks => create an order so that each task obtains its WCET and meets its deadline

More Examples

• Example: For a set of N tasks => create an order so that each task obtains its WCET, meets its deadline, and meets precedence constraints

• Example: For N independent tasks => create an order so that each task obtains its WCET, meets its deadline, and minimizes total completion time

Definitions

T1

c1

a1 r1 d1 d1*

_|_______|_____________|____|___ time

T1 = task 1

c1 = worst case execution time (WCET)

a1 = arrival time <what does this mean?>

r1 = future release time (start time)

d1 = deadline

Definitions (continued)

d1* = deadline tolerance (a CPS reality)

L1 = d1 - c1 (laxity)

Aperiodic

Independent

Preemptive

Most often a1 = r1

[Figure: a task arrives, then must complete by deadline D; laxity is the slack before D]

Periodic Tasks

[Timeline for T2 and T3: T2 arrives at the start of each period P2, T3 at the start of each period P3]

P2, P3 = period (usually D2 = P2; D3 = P3)

A periodic task arrives at the start of its period and may execute anywhere in the period.

Precedence Constraints

[Figure: a chain Task 1 → Task 2 → Task 3 → Task 4, and a more general precedence graph over Tasks 1–4]

Consider Another Form of Precedence

Communicating Processes

[Figure: T3 and T4 exchange messages, creating precedence between their segments]

Shared Resources (Semaphores)

[Figure: tasks T1, T2, T3 enter critical sections CS1, CS2 guarding resources R1, R2 – a resource may be a data structure, file, device, etc.]

Rate Monotonic (RM) Algorithm [Liu/Layland]

• Assign a fixed priority based on the period of the task
• The shorter the period, the higher the priority

Ex:  Task  Period  Priority
     T1    P = 5   1
     T2    P = 10  2
     T3    P = 20  3

Note: period = deadline; independent tasks
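As a quick sketch, RM priority assignment is just a sort by period; a minimal Python illustration using the task names and periods from the example above:

```python
# RM priority assignment is just a sort by period; rank 1 = highest priority.
# Task names and periods from the example above.
tasks = [("T1", 5), ("T2", 10), ("T3", 20)]  # (name, period)

by_period = sorted(tasks, key=lambda t: t[1])
priorities = {name: rank for rank, (name, _) in enumerate(by_period, start=1)}
print(priorities)  # {'T1': 1, 'T2': 2, 'T3': 3}
```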

Schedulable Utilization

The degree of resource utilization at or below which all hard deadlines can be guaranteed.

Known as the Utilization bound for a given algorithm (e.g., RM or EDF)

RM Analysis

Result: any set of independent periodic tasks can be scheduled (by RM) if processor utilization is at most ln 2 ≈ 69%

- a priori analysis (100% guarantee)

An Example

T1 Util = 1/5 = 20%

T2 Util = 3/10 = 30%

T3 Util = 2/20 = 10%

60% Guaranteed

0 5 10 15

T1 T2 T3 T1 T3 T1 T2 T1

Task Set: T1 (1,5) T2(3,10) T3(2,20) where (C,D=P)
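The 60% figure above can be checked against the 3-task RM bound in a few lines of Python:

```python
# Utilization test for the task set above: (C, P) with D = P.
tasks = [(1, 5), (3, 10), (2, 20)]

n = len(tasks)
util = sum(c / p for c, p in tasks)          # 0.2 + 0.3 + 0.1 = 0.6
bound = n * (2 ** (1 / n) - 1)               # 3-task RM bound, about 0.78
print(f"U = {util:.0%}, bound = {bound:.1%}, guaranteed: {util <= bound}")
```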

A Priori Analysis

Q: Does it actually execute in the above fashion? Could it?

Q: What is policy and what is mechanism?
 - Fixed priority assignment*
 - Table driven

* But how do you activate the next instance of a task?

Rate Monotonic Analysis

Guarantees that n periodic tasks always meet their deadlines if the resource utilization of the tasks is less than n(2^(1/n) - 1)

# of Tasks   Utilization Bound
 1            1.000
 2            0.828
 3            0.780
 .
 .
 10           0.718
 .
 .
 N large      0.693 (ln 2)
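The bound column can be regenerated directly from n(2^(1/n) - 1); a small Python check:

```python
import math

# Regenerate the utilization-bound column from n * (2**(1/n) - 1).
for n in (1, 2, 3, 10, 100):
    print(n, round(n * (2 ** (1 / n) - 1), 3))
print("limit:", round(math.log(2), 3))  # ln 2, about 0.693
```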

What if Bound Not Met?

• Does this mean you will NOT make deadlines?

• NO!

• You must continue to analyze!

Questions

• What does a guarantee mean?

• What does predictability mean?

Exact Analysis Approach

Using the n(2^(1/n) - 1) formula is pessimistic! The "exact" analysis approach is based on scheduling points.

T1  P = D = 100  C = 40
T2  P = D = 150  C = 40
T3  P = D = 300  C = 90

[Timeline from 0 to 300 marking the scheduling points for Task 3: 100, 150, 200, 300]

Util = 40% + 26.7% + 30% ≈ 96.7%
Bound (for 3 tasks) → 78%

Check to see if each task can complete before its first deadline (one by one).

Use the scheduling points for that task – task T's first deadline and the ends of the periods of higher priority tasks prior to T's first deadline.
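One common way to mechanize this exact test is the equivalent response-time iteration (a sketch of that formulation, not the scheduling-point enumeration itself), applied to the task set above:

```python
import math

def response_time(tasks):
    """Worst-case response time per task via fixed-point iteration:
    R = C_i + sum over higher-priority j of ceil(R / P_j) * C_j.
    tasks: (C, P) pairs with D = P, highest priority (shortest P) first.
    Returns None for a task whose response time exceeds its period."""
    results = []
    for i, (c, p) in enumerate(tasks):
        r = c
        while True:
            nxt = c + sum(math.ceil(r / pj) * cj for cj, pj in tasks[:i])
            if nxt == r:
                break
            r = nxt
            if r > p:
                return results + [None]
        results.append(r)
    return results

# The slide's task set: T1(P=100, C=40), T2(P=150, C=40), T3(P=300, C=90)
print(response_time([(40, 100), (40, 150), (90, 300)]))  # [40, 80, 290]
```

Note that T3's worst-case response time of 290 matches the 290/300 schedule shown on the next slide.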

Exact Analysis Approach

Using the n(2^(1/n) - 1) formula is pessimistic!

The "exact" analysis approach based on scheduling points:

T1  Period = 100  C = 40
T2  Period = 150  C = 40
T3  Period = 300  C = 90

Notice preemptions!

[Schedule figure: T3 runs in the gaps left by T1 and T2 and completes at 290; Util of the timeline = 290/300]

Assumptions - <limiting> <But, many extensions>
 Periodic processes
 All tasks independent (no precedence)
 No sharing resources (other than CPU)
 Preemptive (at all points)
 Uni-processor
 Known worst case times
 All tasks of equal importance

Consider Overload! Which task misses?

Low priority = long period – regardless of importance

Importance: Period Transformation

Example: T0 has period = 75, but is not very important; T1 has period = 100, but is very important

Transform: T1 (period = 100) becomes T1/2 (period = 50) with C/2

• Artificial
• When shared resources are involved this is much more difficult

Hybrid Task Sets

Periodic Tasks with Hard Deadlines (D = P)

For Aperiodics:
 Non-real-time (give good average response time)
 With deadlines (and guarantees)

One example of adding complexity

Solutions

Background
Polling Server
Deferrable Server
Priority Exchange
Sporadic Server
Slack Stealing
Etc.

Issues:
 Implementation complexity
 Impact on the utilization bound
 Analyzing the aperiodics: guarantees, average response time

Trivial Extension: Add Aperiodics by Background Processing

0 5 10 15

T1 T2 T3 T1 T3 T1 T2 T1

Periodic Task Set: T1 (1,5) T2(3,10) T3(2,20) where (C,P=D)

• Aperiodics have no deadline (or a very soft deadline)
• Run as background tasks when the processor is idle

• Idle times occur
 - in open slots in the schedule, and
 - when tasks run less than their WCET and no periodic task is ready

Question

Can you analyze the performance of the aperiodics?

Yes (let’s look at an example on previous slide)

Note: even though a task is aperiodic, IT EXISTS AND IS READY TO BE ACTIVATED

Trivial Extension: Add Aperiodics by Polling Server

0 5 10 15

T1 T2 T3 T1 T3 T1 T2 T1

Periodic Task Set: T1 (1,5) T2(3,10) T3(2,20) where (C,P=D)

Polling Server – runs as a periodic task in RM (example -> T1 might be the capacity for aperiodics)

If no aperiodics – wait until the next period

Can have more than 1 polling server, at different priorities

Polling Server

What priority/period What capacity

Just use RM bounds for schedulability (of periodics)

Can also analyze guarantees for aperiodics (see text)

Deferrable Server

See Figure 5.7

Server can delay its start during the period
Impact: utilization bound drops to 65%
Improves average response time for aperiodics over the polling server

Priority Exchange (Fig 5.14)
 Periodic server for aperiodics executes at the highest priority
 Execute the server if aperiodics are waiting
 Preserve capacity: if no aperiodics, exchange with the highest priority periodic task

Higher utilization bound than the Deferrable Server

Priority Exchange

[Figure: Task 5 is the aperiodic server with period = 5; Tasks 1 and 2 have period = 10. There is no need to wait until time 5 – exchanged capacity can wait for an aperiodic arrival]

Summary for Hybrid Task Sets

Priority Exchange (P.E.)
• Difficult to implement
• Good scheduling bound

Deferrable Server (D.S.)
• Easier to implement than P.E.
• Lower schedule bound
• Very good for aperiodics

Sporadic Server
• Implementation difficulty is between the other 2
• Scheduling bound similar to P.E.
• Aperiodic performance similar to D.S.

How to deal with shared resources?

• Priority Ceiling Protocol

Priority Ceiling

• Based on Rate Monotonic

• Define a priority ceiling for every critical section (CS)

• The ceiling value = pri of highest pri task which may enter the CS

Example

T1 T2 T3 T4 T5

CS1 CS2 CS3

Ceil = 3 Ceil = 5 Ceil = 4

Priority 5 is highest; 1 is lowest

Priority Ceiling

Protocol (2 parts) (Note: a high task number means a high priority)

1. Priority inheritance (PI)

 Example: T3 is in its CS; T5 preempts, then waits if it needs the CS; T3's priority is raised to 5 (only while in the CS)

2. Priority ceiling

Allow Ti into a CS only if its priority > the highest priority ceiling of all CSs currently in use by other tasks (a system-wide value).

In other words: if the task's priority is not higher than the ceiling of a CS being used by another task, it is blocked – even if the CS it wants is free.

Similar to a deadlock avoidance scheme found in OSs

Example

T1 T2 T3 T4 T5

CS1 CS2 CS3

Ceil = 3 Ceil = 5 Ceil = 4

Example 1: T3 activated wants to enter CS2

Enter? Yes No other CS in use

Example 2: While T3 is in CS2, T4 is activated and preempts T3. T4 wants to enter CS3.

Enter? No – the ceiling of CS2 (= 5) > T4's priority (4). T4 blocks; T3 inherits T4's priority* and continues.

(CS1: Ceil = 3, CS2: Ceil = 5, CS3: Ceil = 4)

* For the length of the CS only

Chained Blocking – PI only

T1 enters CS1; T2 preempts and enters CS2; T3 preempts and enters CS3; … ; T10 preempts.

Now within T10:
 asks for CS1 → blocked; T1 inherits priority 10 and runs … (back to T10)
 asks for CS2 → blocked; T2 inherits priority 10 and runs … (back to T10)
 …
 asks for CS9 → blocked; T9 inherits priority 10 and runs
 end T10

T10 can be blocked once per critical section it needs – chained blocking.

Consider Chained Blocking (with Priority Ceiling)

T1 enters CS1; priority ceiling = 10 (why?)
T2 preempts, but must wait – it does not enter CS2; T1 runs again
T3 preempts, but must wait – it does not enter CS3
…
T10 preempts.

Now within T10:
 asks for CS1 → blocked; T1 inherits priority 10 and runs (for a short time, only for CS1); back into T10
 asks for CS2 → not blocked; still in T10
 …
 asks for CS9 → not blocked
 end T10

Only BLOCKED ONCE by a lower priority task!!

Key Observation

T10 max wait for at most 1 lower priority task to finish in a CS!

Schedulability Analysis

Theorem: A set of n periodic tasks using the P.C. protocol can be scheduled by the R.M. algorithm if the following conditions are satisfied:

 ∀i, 1 ≤ i ≤ n:  C1/T1 + C2/T2 + … + Ci/Ti + Bi/Ti ≤ i(2^(1/i) - 1)

Blocking: the duration that a job has to wait for the execution of lower priority tasks

[If I wait for a higher priority task, this is not blocking]
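A minimal sketch of this per-task test in Python; the blocking terms B are inputs to the analysis (computed separately from the critical sections), not derived here:

```python
def pcp_rm_test(tasks):
    """tasks: (C, T, B) triples, highest priority first; B = worst-case
    blocking from lower-priority critical sections. Returns, for each i,
    whether C1/T1 + ... + Ci/Ti + Bi/Ti <= i * (2**(1/i) - 1)."""
    verdicts = []
    for i in range(1, len(tasks) + 1):
        c_i, t_i, b_i = tasks[i - 1]
        u = sum(c / t for c, t, _ in tasks[:i]) + b_i / t_i
        verdicts.append(u <= i * (2 ** (1 / i) - 1))
    return verdicts

# Slide example: T1(5,10), T2(4.5,15), T3(2,30); B2 = 1 from T3's CS.
print(pcp_rm_test([(5, 10, 0), (4.5, 15, 1), (2, 30, 0)]))  # [True, False, False]
```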

Example:
 T1  C = 5    P = 10
 T2  C = 4.5  P = 15
 T3  C = 2    P = 30   CSmax = 1

i = 1:  C1/P1 + B1/P1 ≤ 1
        5/10 + 0/10 = .5 ≤ 1   yes!

i = 2:  C1/P1 + C2/P2 + B2/P2 ≤ 2(2^(1/2) - 1)
        5/10 + 4.5/15 + 1/15 = .87 > .828   No!

but without T3 having a C.S.:
        5/10 + 4.5/15 = .8 ≤ .828 – the answer for T2 is Yes! (80%)

We say that this analysis is pessimistic! But where is the pessimism?

The blocking factor always assumes the blocking occurs!

BUT with exact scheduling you may see that the blocking is avoided!

How to Compute Blocking Term

• Blocking term in previous example is trivial to see

• But, in general, more complex

• See complex example on blackboard

Priority Ceiling

• No deadlock

• Wait for “at most” one lower priority task

• Waits only for the duration of a critical section, NOT the entire task!

Problems

1. Implications of the delay when a task can't enter its C.S. (a task is kept out of its critical section because a higher priority task might show up)

• Ex: T2 may come in & complete!
• Conservative policy (& under all possible conditions)
• "deadlock avoidance scheme"
• Pessimistic (esp. with a large # of resources)
• But utilization is high – someone is always running!!

2. Long critical sections
3. Precedence constraints
4. Distributed scheduling
5. Multiprocessing
6. Scaling

Load: 500 tasks, 1000 resources – only a small percentage is ever active together!

"a priori avoidance" vs. "dynamic avoidance" (a la the Spring heuristic – later)

If D < P RM does not work

Use Deadline Monotonic

Different utilization formulas!

Example

C = 5, Deadline = 10, Period = 15

Utilization needed is 5/10 = 50%
Not 5/15 = 33%

But this simple calculation of Util is too pessimistic. See text pages 103-105.

Earliest Deadline First (EDF)

• Order tasks by deadline (n log n)

• Execute earliest deadline task first

Dynamic Scheduling - EDF

• Jackson: n independent tasks; all tasks exist at time zero (called the sync constraint); executing in EDD order is optimal with respect to minimizing maximum lateness

• Horn: n independent tasks; arbitrary arrival times; EDF is optimal w.r.t. minimizing maximum lateness

• Dertouzos: Optimality → if there exists a feasible schedule for a task set, then EDF can find it

Assumes all tasks can be preempted

When Known?

Dynamic Scheduling/Aperiodics

• (T1,C1,D1) (T2,C2,D2) (T3,C3,D3) …
• Order tasks by earliest deadline first

T1 D=4
T2 D=3
T3 D=7

Schedule in order T2, T1, T3

Note: EDF (algorithm) only considers deadlines (not C)!!!!

Analysis considers C and D

Schedulability Bound for EDF = 100%

(T1,C1,D1) (T2,C2,D2) (T3,C3,D3)

T1 D=4 C=1 Util = 25%
T2 D=2 C=1 Util = 50%
T3 D=8 C=2 Util = 25%

Schedule in order T2, T1, T3. Util = 100% (all will make deadlines)
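Since all three tasks are released together, the EDF schedule for this example reduces to a sort by deadline; a small Python sketch:

```python
# All three tasks are released together, so no preemption occurs and
# EDF reduces to a sort by deadline. (C, D) values from the slide.
tasks = {"T1": (1, 4), "T2": (1, 2), "T3": (2, 8)}

order = sorted(tasks, key=lambda name: tasks[name][1])  # earliest D first
t, ok = 0, True
for name in order:
    c, d = tasks[name]
    t += c                      # task runs to completion
    ok = ok and (t <= d)        # check its deadline
    print(f"{name} finishes at {t} (D = {d})")
print("all deadlines met:", ok)
```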

[Timeline: T2 runs 0-1, T1 runs 1-2, T3 runs 2-4; deadlines D(2) = 2, D(1) = 4, D(3) = 8 are all met]

• Rate Monotonic algorithm is optimal among all fixed priority scheduling algorithms

• Earliest deadline first (EDF) is optimal among all dynamic priority scheduling algorithms

• Optimal: they can always produce a feasible schedule if any other scheduling algorithm is able to do so (within the same class → fixed or dynamic)

[No resources; no precedence; must be preemptive] Else not optimal

EDF Scheduling and Overload Deadlines

Dynamic Scheduling

Planning Paradigm – and EDF

T2 arrives – plan how to execute T1 and T2, given that T1 has partly executed (only T1's remaining time is scheduled)

(same now for T3, T2, T1)

RM vs EDF for Periodics

Note: EDF tends to have fewer preemptions

EDF vs Least Laxity (LL) First

T1

T2

Laxity = Deadline minus Computation time

Using LL T2 would run before T1
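A tiny sketch of the difference, with hypothetical C and D values chosen so the two policies disagree:

```python
def laxity(d, c, now=0):
    """Laxity = deadline minus remaining computation time (from `now`)."""
    return d - now - c

# Hypothetical tasks chosen so the two policies disagree:
t1 = {"name": "T1", "D": 5, "C": 1}   # earlier deadline, lots of slack
t2 = {"name": "T2", "D": 6, "C": 5}   # later deadline, almost no slack

edf_pick = min([t1, t2], key=lambda t: t["D"])["name"]
ll_pick = min([t1, t2], key=lambda t: laxity(t["D"], t["C"]))["name"]
print(edf_pick, ll_pick)  # T1 T2
```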

Imprecise Computation(Anytime Algorithm)

Typical Computation (task)
 Normal completion → precise result (zero error)
 Else no value

Milestone Method – record intermediate results

Sieve Method – drop an entire function to save time
 (Ex. – estimate the noise level vs. use the last estimate)
 (Ex. – eliminate a competing hypothesis or corroborating support)

Multiple Versions Method

• All Guarantee a Minimum Amount of Time
 – 1st intermediate result
 – Minimum path through the sieve
 – Minimum alternate version

PLUS: Monotonically Increasing Value

Imprecise Scheduling

Guarantee the Mandatory parts – "Max Optional"

• Minimize total error
• Minimize maximum/average error
• Minimize # of discarded optional parts
• Minimize # of late tasks

Predictability: to foretell on the basis of scientific reasoning

Microscopic view: application/OS – "bounded time" (bounded resources)

Macroscopic view (system-wide): simple, inflexible, wasted resources – BUT PREDICTABLE

Dynamic System
• Large
• Complex
• Don't know the order of invocation

Standard RT Schedule Algorithm (run on-line)

EDF, LL, RM, Best-Effort; (SPT)

Macroscopic View? Predictability?

The Plan Dimension

Ex. EDF with preemption (no planning)

No overall concept of which tasks can make their deadlines; pushes ahead BLINDLY

No understanding of the capabilities of the system at this time!

"UNPREDICTABLE"

Example: EDF with Planning

[Figure: T1 arrives, then T2, then T3; the RT scheduling algorithm builds a plan at each arrival – contrast with no plan]

Doesn’t Solve Resource Problem

Spring Paradigm*

Complex, Yet Predictable RTS

Critical Processes

Hard Real Time

Soft Real Time

Non Real Time

*Not Just a Kernel/Scheduler

Macroscopic View? Predictability?

Predictability in a Dynamic System:
• 100% of critical tasks
• At any time t the system knows exactly what essential tasks have been guaranteed and what tasks have not
• A priori – statistical analyses of performance with respect to essential tasks

Needed for CPS?

Spring Algorithm

• What principles exhibited?– Heuristic functions– Earliest Available Time– Eligibility– Strong feasibility– Bounded Backtracking– Parallel Co-Processor (HW acceleration)

Spring

• Overview: "Tasks already in memory"

For this example, assume aperiodic tasks (sharing resources). BUT Spring also handles:
• Periodic & aperiodic
• Hard & soft RT
• Precedence constraints
• Criticalness
• Preemptive
• Exclusive/shared resources
• Optimizations
• Overload
• On-line time planner vs. off-line design tool

Local Scheduler

Set of Tasks

Scheduling is NP-complete
Simple heuristics do not work well
Combinations of simple heuristics do work well

Heuristic Fn H must account for:
• Computation time
• Deadline
• Resource contention & requirements

Scheduling as Search (exponential)

Tasks 1, 2, 3; root = null schedule

[Search tree: the first level chooses 1, 2, or 3; each branch then permutes the remaining tasks]

N! orders; 3! = 6

Overview of Approach

Task Set → Heuristic Fn

Step 1:
 T1: H(T1) = X1
 T2: H(T2) = X2
 T3: H(T3) = X3   ← best; schedule T3 + new initial condition
 T4: H(T4) = X4
 T5: H(T5) = X5
 T6: H(T6) = X6

Step 2 (re-evaluate the remaining tasks):
 T1: H(T1) = X1'
 T2: H(T2) = X2'  ← best; schedule so far: T3, T2
 T4: H(T4) = X4'
 T5: H(T5) = X5'
 T6: H(T6) = X6'

For 6 tasks: complete enumeration = 720 schedules; this approach evaluates 6+5+4+3+2 = 20 candidates
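The stepwise H-driven construction above can be sketched as a greedy loop; the heuristic here is minimum laxity (one of the simple heuristics listed below) and the task parameters are hypothetical:

```python
def schedule(tasks, h):
    """Greedy H-driven construction: evaluate H for every remaining task,
    append the best, advance time, repeat. Evaluates n + (n-1) + ... + 2
    candidates instead of enumerating all n! orders."""
    order, t = [], 0
    while tasks:
        best = min(tasks, key=lambda task: h(task, t))
        order.append(best[0])
        t += best[1]                        # advance by the chosen task's C
        tasks = [x for x in tasks if x is not best]
    return order

# Hypothetical (name, C, D) triples; H = minimum laxity first.
tasks = [("T1", 2, 10), ("T2", 3, 5), ("T3", 1, 9)]
print(schedule(tasks, lambda task, t: task[2] - t - task[1]))  # ['T2', 'T1', 'T3']
```

The real Spring scheduler also checks strong feasibility and backtracks a bounded number of times; this sketch omits both.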


Simple (Single) Heuristics

• Minimum processing time first (Min_P): H(T) = Tp
• Maximum processing time first (Max_P): H(T) = -Tp
• Minimum deadline first (Min_D): H(T) = TD
• Minimum laxity first (Min_L): H(T) = TD - (Test + Tp)
• Minimum earliest start time first (Min_S): H(T) = Test
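These five heuristics translate directly into key functions; a minimal Python rendering, with a hypothetical task record:

```python
# The five single heuristics as key functions over a task record with
# p = processing time, d = deadline, est = earliest start time.
def min_p(t): return t["p"]                        # Min_P
def max_p(t): return -t["p"]                       # Max_P
def min_d(t): return t["d"]                        # Min_D
def min_l(t): return t["d"] - (t["est"] + t["p"])  # Min_L (laxity)
def min_s(t): return t["est"]                      # Min_S

task = {"p": 2, "d": 10, "est": 3}  # hypothetical values
print(min_l(task))  # 10 - (3 + 2) = 5
```

The integrated heuristics below are then just weighted sums of these key functions.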

Integrating Two Heuristics

Min_D + Min_P:

H (T) = TD + W * Tp;

Min_D + Max_P:

H (T) = TD – W * Tp;

Min_D + Min_S:

H (T) = TD + W * Test;

Integrating Three Heuristics – no benefit to added complexity

Guarantee {T1, T2, T3, T4}

[Figure: R – related to laxity; P = probability of using a resource; in the non-RT setting this is a very good algorithm]

Precedence (consider a set of tasks a Job)

• Eligible tasks
 – current time in the plan = start time
 – precedence constraints met
 Ex. {T1, T2, T7, T10} initially

What are the EAT (on resources), VD, and D for each Ti?

 EAT: earliest available time of resources, on a task basis (not per Job)
 VD: value of the group / computation of the group
 D: a pseudo-deadline, to reflect the "depth" of the graph [but only refuse a guarantee if the last (TRUE) deadline is missed in the plan!]

Bounded Backtracking / Strong Feasibility

[Search tree as before: Tasks 1, 2, 3 expanded from a null-schedule root; N! orders, 3! = 6]

Co-Processor

Scheduling co-processor versus a general-purpose (GP) processor

See design of HW (shown on overhead)

Classical Scheduling Results

• Complexity Theory• Operations Research

Direct solutions? No
Important insight? Yes
 – avoid poor or even erroneous choices

What should every RTS/CPS designer know about scheduling?

NP-Complete Problem – "Very General"

[Diagram: some specific instances of the general problem are polynomial, other specific instances are not]

Complexity Theory / OR notation:  α | β | γ

α = machine environment
 1, 2, …, n, O (open shop), F (flow shop), J (job shop)

β = job characteristics
 indep., prec. constr., preemption, deadline, …

γ = optimality criterion
 MIN{ max lateness, total tardiness, completion time, % deadlines missed, etc. }

Open Shop
 n jobs (j), m machines (i)
 each machine i works on job j for processing time Pij
 each machine works on 1 job at a time
 a job may be on only one machine at a time
 no restriction on order – "open"

O||Cmax is NP-hard; O2||Cmax is polynomial (O(n))

No advantage to preemption: O2|nopmtn|Cmax and O2|pmtn|Cmax are solved by the same algorithm

Note: Cmax means minimizing the maximum completion time

Flow Shop: an open shop but with an order constraint – all jobs have the same order
 Ji → M1 → M2 → … → Mn

Job Shop: an open shop but with an order constraint per job
 J1 → M1 → M2 → … → Mn
 J2 → M3 → M5 → M7
 Each job may have its own order.

Metrics "Classical"
 min Σ Cj     where Cj = completion time
 min Σ Wj Cj
 min L        schedule length
 min P        # of processors
 min Latemax  maximum lateness

Applicability to Real-Time ?

Example: Sum of Completion Times

Jobs a and b with processing times 1 and 10:
 order a, b: completions 1 and 11 → Σ Cj = 12
 order b, a: completions 10 and 11 → Σ Cj = 21

Other metrics

Min N No. of tasks that miss deadline

Max V Maximize Value

MDP Missed Deadline Percentage

SR Success ratio → percentage that make (meet) deadline

Key Idea

Difference between:
• an algorithm which is optimal
• a schedule which is optimal with respect to some metric

Don’t Miss Any Deadlines

Optimal scheduling algorithm is one which may fail to meet a deadline only if no other scheduling algorithm can.

EDF is optimal for independent aperiodic tasks on a uni-processor: if some algorithm can find a valid

schedule for a task set then so can EDF

P | constraints | Metric
Ex: 1 | nopmtn | Lmax

Soln: Jackson's Rule is optimal: put the jobs in order of non-decreasing due dates (EDD)!

polynomial time
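Jackson's rule in a few lines; the job parameters are hypothetical, and all jobs are assumed released at time 0 (the 1|nopmtn|Lmax setting):

```python
def edd_max_lateness(jobs):
    """jobs: (C, d) pairs, all released at t = 0 (1 | nopmtn | Lmax).
    Sequence by non-decreasing due date (EDD) and return max lateness."""
    t, lmax = 0, float("-inf")
    for c, d in sorted(jobs, key=lambda j: j[1]):
        t += c
        lmax = max(lmax, t - d)   # lateness of this job = finish - due date
    return lmax

print(edd_max_lateness([(2, 4), (3, 5), (1, 9)]))  # 0 -> all jobs on time
```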

Ex. 1 | nopmtn, rj | Lmax

NP-hard

Ex. 1 | pmtn, rj | Lmax

polynomial – EDD among the "eligible" (released) jobs

Precedence Constraints Modify D

Let Ti → Tj

In any valid schedule the following MUST BE TRUE:

 fi ≤ di
 fi ≤ dj - Cj

So replace di with min(di, dj - Cj) – the problem stays the same

Example: C(Tj) = 10, D(Tj) = 20, D(Ti) = 15 → new D of Ti = min(15, 20 - 10) = 10
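The deadline-tightening step can be sketched as a fixpoint over the precedence edges; C(Ti) = 5 is a hypothetical value added only to complete the slide's example:

```python
def tighten_deadlines(d, c, preds):
    """d[i], c[i]: deadline and WCET of task i; preds: (i, j) pairs meaning
    Ti -> Tj. Replace d[i] with min(d[i], d[j] - c[j]) until stable."""
    changed = True
    while changed:
        changed = False
        for i, j in preds:
            new_d = min(d[i], d[j] - c[j])
            if new_d < d[i]:
                d[i], changed = new_d, True
    return d

# Slide example: C(Tj) = 10, D(Tj) = 20, D(Ti) = 15; C(Ti) = 5 is hypothetical.
print(tighten_deadlines({"Ti": 15, "Tj": 20}, {"Ti": 5, "Tj": 10},
                        [("Ti", "Tj")]))  # {'Ti': 10, 'Tj': 20}
```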

Smith’s Rule

• Uniprocessor
• Pj = processing time
• Wj = weight (value)
• min Σ Wj Cj

Optimal order: non-decreasing ρ = Pj / Wj

(if all weights = 1, this is SJF)
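A minimal sketch of Smith's rule; the (p, w) values are hypothetical, with unit weights so the result coincides with SJF (and with the sum-of-completion-times example earlier):

```python
def smiths_rule(jobs):
    """jobs: (p, w) pairs. Order by rho = p / w to minimize sum of w_j * C_j."""
    order = sorted(jobs, key=lambda j: j[0] / j[1])
    t, total = 0, 0
    for p, w in order:
        t += p            # completion time of this job
        total += w * t
    return order, total

# Unit weights reduce Smith's rule to SJF; values are hypothetical.
order, cost = smiths_rule([(10, 1), (1, 1)])
print(order, cost)  # [(1, 1), (10, 1)] 12
```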

Value Density: how much value per processing unit

Multi-Processor Scheduling

• Almost ALL NP-hard
• Even most special instances are NP-hard
• EDF no longer optimal
• Scheduling anomalies exist → Richard's anomalies

Theorem: EDF on a multiprocessor is not optimal. (C, D):
 T1 (1, 1)
 T2 (1, 2)
 T3 (3, 3.5)

On 2 processors EDF runs T1 and T2 first (earliest deadlines); T3 then runs from time 1 to 4 and misses D = 3.5. Starting T3 at time 0 on one processor instead meets all three deadlines.

Q: If EDF is not optimal, is there any optimal (dynamic algorithm) for multiprocessor?

Theorem: For 2 or more processors, no deadline schedule can be optimal without a priori knowledge of:

1. Deadlines
2. Computation times
3. Start times

Example: RT ≠ Fast

Multiprocessing Anomalies (Richard's)

Assume that a set of tasks is scheduled optimally on a multiprocessor with some priority order, a fixed number of processors, fixed execution times, and precedence constraints.

CHANGING THE PRIORITY LIST, INCREASING THE NO. OF PROCESSORS, REDUCING EXECUTION TIMES, or WEAKENING PRECEDENCE

CAN INCREASE THE SCHEDULE LENGTH

Now miss "D"

Another Example of an Anomaly

Tasks 1 & 3 do not share resources; tasks 3 & 4 do share resources in exclusive mode.

[Figure: two schedules of tasks 1-5; when T1's execution time is reduced, the order in which tasks 3 and 4 acquire the shared resource changes and the schedule gets longer]

Common – a task finishing before its WCET is exactly a reduction in execution time.

Static RT Scheduling

- algorithm has complete knowledge of task set and constraints

Ex. { T1: C1, D1, {R}, R1
      T2: C2, D2, {R}, R2
      ...
      TN: Cn, Dn, {R}, Rn }

Even though release times are in the future – not dynamic

{R} = set of resources
R = release times

Summary - Analysis Possibilities

1) Yes or no: feasible
2) A single schedule (may be optimal, may not)
3) A fixed PRIORITY assignment to be used at run time

Dynamic Scheduling(in degrees)

- lack of complete knowledge (usually about future arrivals)

- schedule changes over time

____________________________________time

current set {T1: C1, D1, {R}, R1;  T2: C2, D2, {R}, R2}; over time tasks complete (T1) and new tasks arrive (T3, T4, T5)

Completely Static              Completely Dynamic
T = {T1, T2, …, TN}            T = {T1, T2, …
N = small                      N = large
Ti = {Ri, Di, Vi}              Ti = {?, ?, ?}
Metric: min SL,                Metric: response time,
 guarantee all Di               max # deadlines
Compute entire schedule        Pick next task
Verified                       Performance unknown
Low runtime cost               Low runtime cost

The Infinity of Scheduling

uni-processor - multi-processor

centralized - distributed
static - dynamic
soft - hard
preempt - non-preempt
periodic - aperiodic
1 resource - n resources
off-line - on-line
underload - overload

deterministic - stochastic

Other issues: precedence constraints, value, redundancy, fixed exec times vs variable, imprecise computation, integration with I/O, communication, and metrics

What Causes Scheduling Difficulties (in general)!

• Future release times
• Non-preemption
• Multiprocessors
• Shared resources
• Overloads
• Deadlines

Not as Large a Problem – but still not easy
• Value
• Precedence constraints
• Periodic tasks
• Preemption

Summary

• Complexity Results
• Operations Research
• Cyclic Scheduling
• RMA
• EDF
• Guarantee Based (QoS) – Heuristics (Search)
• Best Effort
• Imprecise Computation (Anytime Algorithms)
• Control Theory Based

