Page 1: Processes and Threads

1

Processes and Threads

Chapter 2

2.1 Processes
2.2 Threads
2.3 Interprocess communication
2.4 Classical IPC problems
2.5 Scheduling

Page 2: Processes and Threads

2

Processes
The Process Model

• Multiprogramming of four programs
• Conceptual model of 4 independent, sequential processes
• Only one program active at any instant

• In the process model, all runnable software is organized as a collection of processes.

Page 3: Processes and Threads

3

Process Creation

• Principal events that cause process creation:
1. System initialization
2. Execution of a process-creation system call by a running process
3. User request to create a new process
4. Initiation of a batch job

• Foreground processes are those that interact with users and perform work for them.

• Background processes that handle some incoming request are called daemons.

Page 4: Processes and Threads

4

Process Creation

• How to list the running processes?
– In UNIX, use the ps command.
– In Windows 95/98/Me, use Ctrl-Alt-Del.
– In Windows NT/2000/XP, use the task manager.

• In UNIX, a fork() system call is used to create a new process.
– Initially, the parent and the child have the same memory image, the same environment strings, and the same open files.
– The execve() system call can then be used to load a new program.
– But the parent and child have their own distinct address spaces.

• In Windows, CreateProcess handles both creating the new process and loading the correct program into it.
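The fork()/wait behavior described above can be sketched in a few lines of C (the helper name spawn_and_wait is ours, not from the course examples; a real shell's child would go on to call execve()):

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that immediately exits with `code`; the parent
 * blocks in waitpid() and returns the child's exit status.
 * Illustrates that fork() returns 0 in the child and the child's
 * pid in the parent, and that both start from the same image. */
int spawn_and_wait(int code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;            /* fork failed */
    if (pid == 0)
        _exit(code);          /* child: terminate immediately */
    int status;
    waitpid(pid, &status, 0); /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Calling spawn_and_wait(42) should return 42 in the parent, which is exactly the status-passing a shell relies on.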

Page 5: Processes and Threads

5

Process Termination

• Conditions which terminate processes:
1. Normal exit (voluntary)
• Exit in UNIX and ExitProcess in Windows.
2. Error exit (voluntary)
• Example: compilation errors.
3. Fatal error (involuntary)
• Example: core dump.
4. Killed by another process (involuntary)
• Kill in UNIX and TerminateProcess in Windows.

Page 6: Processes and Threads

6

Process Hierarchies

• Parent creates a child process; child processes can create their own children.

• Forms a hierarchy
– UNIX calls this a "process group"
– Example chain in UNIX: init → sshd → sh → ps

• Windows has no concept of process hierarchy
– all processes are created equal

Page 7: Processes and Threads

7

Process Model - Example

• In UNIX, for example, a fork() system call is used to create child processes in such a hierarchy. A good example is a shell.

• Consider the menu-driven shell given in programs/c/Ex3.c.

Page 8: Processes and Threads

8

Process Model - Example

• The algorithm (Ex3.c) is:
1. Display the menu and obtain the user's request (1=ls, 2=ps, 3=exit).
2. If the user wants to exit, then terminate the shell process.
3. Otherwise:
1. Fork off a child process.
2. The child process executes the option selected, while the parent waits for the child to complete.
3. The child exits.
4. The parent goes back to step 1.

Page 9: Processes and Threads

9

Process States (1)

• Possible process states
– Running - using the CPU.
– Ready - runnable (in the ready queue).
– Blocked - unable to run until an external event occurs; e.g., waiting for a key to be pressed.

• Transitions between states shown

Page 10: Processes and Threads

10

Process States (2)

• Lowest layer of a process-structured OS
– handles interrupts, scheduling

• Above that layer are sequential processes
– user processes, disk processes, terminal processes

Page 11: Processes and Threads

11

Implementation of Processes

• The operating system maintains a process table with one entry (called a process control block (PCB)) for each process.

• When a context switch occurs between processes P1 and P2, the current state of the RUNNING process, say P1, is saved in the PCB for P1, and the state of a READY process, say P2, is restored from the PCB for P2 to the CPU registers, etc. Then, process P2 begins RUNNING.

• Note: This rapid switching between processes gives the illusion of true parallelism and is called pseudo-parallelism.

Page 12: Processes and Threads

12

Implementation of Processes (1)

Fields of a process table entry

Page 13: Processes and Threads

13

Implementation of Processes (2)

Skeleton of what lowest level of OS does when an interrupt occurs

Page 14: Processes and Threads

14

Thread vs. Process

• A thread, or lightweight process (LWP), is a basic unit of CPU utilization.
• It comprises a thread ID, a program counter, a register set, and a stack.
• A traditional (heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can do more than one task at a time. This situation is called multithreading.

Page 15: Processes and Threads

15

Single and Multithreaded Processes

Page 16: Processes and Threads

16

Threads
The Thread Model (1)

(a) Three processes each with one thread
(b) One process with three threads

Page 17: Processes and Threads

17

The Thread Model (2)

• Items shared by all threads in a process
• Items private to each thread

Page 18: Processes and Threads

18

The Thread Model (3)

Each thread has its own stack

Page 19: Processes and Threads

19

Thread Usage

• Why use threads?
– Responsiveness: Multiple activities can be done at the same time; they can speed up the application.
– Resource Sharing: Threads share the memory and the resources of the process to which they belong.
– Economy: They are easy to create and destroy.
– Utilization of MP (multiprocessor) Architectures: They are useful on multiple-CPU systems.

• Example - Word Processor, Spreadsheet:
– One thread interacts with the user.
– One formats the document (spreadsheet).
– One writes the file to disk periodically.

Page 20: Processes and Threads

20

Thread Usage (1)

A word processor with three threads

Page 21: Processes and Threads

21

Thread Usage

• Example – Web server:
– One thread, the dispatcher, distributes incoming requests to worker threads.
– A worker thread handles each request.

• Example – data processing:
– An input thread
– A processing thread
– An output thread

Page 22: Processes and Threads

22

Thread Usage (2)

A multithreaded Web server

Page 23: Processes and Threads

23

Thread Usage (3)

• Rough outline of code for the previous slide
(a) Dispatcher thread
(b) Worker thread

Page 24: Processes and Threads

24

Thread Usage (4)

Three ways to construct a server

Page 25: Processes and Threads

25

User Threads

• Thread management is done by a user-level threads library.

• User-level threads are fast to create and manage.

• Problem: If the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block.

• Examples:
- POSIX Pthreads
- Mach C-threads
- Solaris UI-threads

Page 26: Processes and Threads

26

Implementing Threads in User Space

A user-level threads package

Page 27: Processes and Threads

27

Kernel Threads

• Supported by the kernel: the kernel performs thread creation, scheduling, and management in kernel space.
• Disadvantage: high cost
• Examples:
- Windows 95/98/NT/2000/XP
- Solaris
- Tru64 UNIX
- BeOS
- OpenBSD
- FreeBSD
- Linux

Page 28: Processes and Threads

28

Implementing Threads in the Kernel

A threads package managed by the kernel

Page 29: Processes and Threads

29

Hybrid Implementations

Multiplexing user-level threads onto kernel-level threads

Page 30: Processes and Threads

30

Scheduler Activations

• Goal – mimic the functionality of kernel threads while gaining the performance of user-space threads

• Avoids unnecessary user/kernel transitions

• Kernel assigns virtual processors to each process
– lets the runtime system allocate threads to processors
– makes an upcall to the run-time system to switch threads

• Problem: fundamental reliance on the kernel (lower layer) calling procedures in user space (higher layer)

Page 31: Processes and Threads

31

Pop-Up Threads

• Creation of a new thread when a message arrives
(a) before the message arrives
(b) after the message arrives

Page 32: Processes and Threads

32

Making Single-Threaded Code Multithreaded

Conflicts between threads over the use of a global variable

Page 33: Processes and Threads

33

Making Single-Threaded Code Multithreaded

Threads can have private global variables

Page 34: Processes and Threads

34

Thread Programming

• Pthread - a POSIX standard (IEEE 1003.1c) API for thread creation and synchronization. <pthread.h>.

• Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, SMP, and real-time scheduling.

• Solaris 2 implements the Pthread API and UI threads

Page 35: Processes and Threads

35

Thread Programming

• Pthread - a POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
– Example: thread-sum.c, pthread-ex.c, helloworld.cc

• Solaris 2 threads:
– Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, SMP, and real-time scheduling.
– Solaris 2 implements the Pthread API and UI threads.
– Example: thread-ex.c, lwp.c
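The course's thread-sum.c is not reproduced here; the following is a minimal Pthreads sketch in the same spirit (names such as threaded_sum are ours, not from the course files):

```c
#include <assert.h>
#include <pthread.h>

/* Argument/result record passed to the worker thread. */
struct sum_args { int lo, hi, result; };

/* Worker: sum the integers in [lo, hi]. */
static void *sum_range(void *p) {
    struct sum_args *a = p;
    a->result = 0;
    for (int i = a->lo; i <= a->hi; i++)
        a->result += i;
    return NULL;
}

/* Create one thread, let it compute, join it, return its result. */
int threaded_sum(int lo, int hi) {
    struct sum_args a = { lo, hi, 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, sum_range, &a); /* spawn the worker */
    pthread_join(tid, NULL);                   /* wait for completion */
    return a.result;
}
```

Compile with -lpthread; threaded_sum(1, 100) should yield 5050.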

Page 36: Processes and Threads

36

Thread Programming

• Java threads may be created by:
– Extending the Thread class
– Implementing the Runnable interface

• Calling the start method for the new object does two things:
1. It allocates memory and initializes a new thread in the JVM.
2. It calls the run method, making the thread eligible to be run by the JVM.

• Java threads are managed by the JVM.
• Example: ThreadEx.java, ThreadSum.java

Page 37: Processes and Threads

37

Interprocess Communication

• Three issues are involved in interprocess communication (IPC):
– How one process can pass information to another.
– How to make sure two or more processes do not get into each other's way when engaging in critical activities.
– Proper sequencing when dependencies are present.

• Race conditions are situations in which several processes access shared data and the final result depends on the order of operations.

Page 38: Processes and Threads

38

Interprocess Communication
Race Conditions

Two processes want to access shared memory at same time

Page 39: Processes and Threads

39

Race Condition

• Assume there are two variables: out, which points to the next file to be printed, and in, which points to the next free slot in the spooler directory.

• Assume in is currently 7. The following situation could happen:
– Process A reads in and stores the value 7 in a local variable. A switch to process B happens.
– Process B reads in, stores its file name in slot 7, and updates in to 8.
– Process A resumes, stores its file name in slot 7, and updates in to 8.

• The file name in slot 7 was determined by whoever finished last; B's file is overwritten and never printed. A race condition has occurred.
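The interleaving above can be replayed deterministically in ordinary memory. This sketch (names like replay_race are ours; the spooler is simulated with an array) shows B's entry being silently overwritten:

```c
#include <assert.h>
#include <string.h>

#define SLOTS 16

/* Simulated spooler state from the slide: `in_var` is the next free slot. */
static char slots[SLOTS][16];
static int in_var = 7;

/* Replay the exact interleaving on the slide: A reads `in` (7) and
 * is switched out; B stores its name in slot 7 and sets in = 8;
 * A resumes with its stale copy, stores its name in slot 7 too,
 * and sets in = 8. B's entry is lost. Returns the final `in`. */
int replay_race(const char *file_a, const char *file_b) {
    int local_a = in_var;            /* A reads in == 7, then is preempted */
    strcpy(slots[in_var], file_b);   /* B runs: fills slot 7 ...           */
    in_var = in_var + 1;             /* ... and advances in to 8           */
    strcpy(slots[local_a], file_a);  /* A resumes with stale slot number 7 */
    in_var = local_a + 1;            /* A also sets in = 8                 */
    return in_var;
}
```

After one replay, slot 7 holds A's file name, in == 8, and B's file has vanished without any error being reported — exactly the silent failure that makes races hard to debug.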

Page 40: Processes and Threads

40

Critical Regions

• The key to avoiding race conditions is to prohibit more than one process from reading and writing the shared data at the same time.

• Four conditions to provide mutual exclusion:
1. No two processes may be simultaneously in their critical regions.
2. No assumptions may be made about speeds or numbers of CPUs.
3. No process running outside its critical region may block another process.
4. No process must wait forever to enter its critical region.

Page 41: Processes and Threads

41

Critical Regions (2)

Mutual exclusion using critical regions

Page 42: Processes and Threads

42

Mutual Exclusion Solution - Disabling Interrupts

• By disabling all interrupts, no context switching can occur.

• Thus, it is unwise to allow user processes to disable interrupts.

• However, it is convenient (and even necessary) for the kernel to disable interrupts while a context switch is being performed.

Page 43: Processes and Threads

43

Mutual Exclusion Solution - Lock Variable

shared int lock = 0;

/* entry_code: execute before entering critical section */
while (lock != 0)
    ;   /* do nothing: busy-wait */
lock = 1;

- critical section -

/* exit_code: execute after leaving critical section */
lock = 0;

• This solution may violate property 1. If a context switch occurs after one process executes the while statement, but before it sets lock = 1, then two (or more) processes may be able to enter their critical sections at the same time.

Page 44: Processes and Threads

44

Mutual Exclusion with Busy Waiting

Proposed solution to the critical region problem
(a) Process 0. (b) Process 1.

Page 45: Processes and Threads

45

Mutual Exclusion Solution – Strict Alternation

• This solution may violate the progress requirement. Since the processes must strictly alternate entering their critical sections, a process wanting to enter its critical section twice in a row will be blocked until the other process decides to enter (and leave) its critical section, as shown in the table below.

• The solution of strict alternation is shown in Ex5.c. Be sure to note the way shared memory is allocated using shmget and shmat.

turn   P0     P1
 0     CS     while
 1     RS     CS
 0     RS     RS
 0     RS     while

Page 46: Processes and Threads

46

Mutual Exclusion with Busy Waiting

Peterson's solution for achieving mutual exclusion

Page 47: Processes and Threads

47

Mutual Exclusion Solution – Peterson’s

• This solution satisfies all 4 properties of a good solution. Unfortunately, it involves busy waiting in the while loop. Busy waiting can lead to problems we will discuss below.

• Challenge: Write the code for Peterson's solution using Ex5.c (the strict alternation code) as a starting point.

Page 48: Processes and Threads

48

Hardware solution: Test-and-Set Locks (TSL)

• The hardware must support a special instruction, tsl, which does 2 things in a single atomic action:

tsl register, flag:
(a) copy the value in memory (flag) into a CPU register, and
(b) set flag to 1.
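C11's atomic_flag_test_and_set performs the same atomic read-and-set as the tsl instruction. A hedged sketch of enter_region/leave_region built on it (our own function names; try_protocol is a single-threaded self-check, not a full multi-process demo):

```c
#include <assert.h>
#include <stdatomic.h>

/* atomic_flag_test_and_set atomically returns the old value of the
 * flag and sets it to 1 — just as tsl copies `flag` into a register
 * and sets flag to 1 in one indivisible step. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void enter_region(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait: old value was 1, someone holds the lock */
}

void leave_region(void) {
    atomic_flag_clear(&lock_flag);  /* set flag back to 0 */
}

/* Single-threaded check of the protocol: after enter_region, a
 * test-and-set must observe the region as locked (old value 1). */
int try_protocol(void) {
    enter_region();                 /* flag was 0: we get in */
    int was_locked = atomic_flag_test_and_set(&lock_flag);
    leave_region();
    return was_locked;              /* 1 means the region was held */
}
```

The busy-wait loop is precisely the tight tsl-cmp-jnz loop discussed on the following slides.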

Page 49: Processes and Threads

49

Mutual Exclusion with Busy Waiting

Entering and leaving a critical region using the TSL instruction

Page 50: Processes and Threads

50

Mutual Exclusion with Busy Waiting

• The last two solutions, 4 and 5, require BUSY-WAITING; that is, a process executing the entry code will sit in a tight loop using up CPU cycles, testing some condition over and over, until it becomes true. For example, in 5, in the enter_region code, a process keeps checking over and over to see if the flag has been set to 0.

• Busy-waiting may lead to the PRIORITY-INVERSION PROBLEM if simple priority scheduling is used to schedule the processes.

Page 51: Processes and Threads

51

Mutual Exclusion with Busy Waiting

• Example: test-and-set locks. P0 (low priority) is preempted inside its critical section; P1 (high priority) then spins on the lock:

P0 (low):  --- in CS ---x            (preempted while holding the lock)
                        | context switch
P1 (high):               tsl-cmp-jnz-tsl-cmp-jnz-... forever

• Note: since priority scheduling is used, P1 will keep getting scheduled and waste time busy-waiting, while P0 never runs to release the lock.

• Thus, we have a situation in which a low-priority process is blocking a high-priority process, and this is called PRIORITY INVERSION.

Page 52: Processes and Threads

52

Semaphores [E.W. Dijkstra, 1965]

• A SEMAPHORE, S, is a structure consisting of two parts:
(a) an integer counter, COUNT
(b) a queue of pids of blocked processes, Q

• That is,

typedef struct sem_struct {
    int count;   /* the integer counter */
    queue Q;     /* queue of pids of blocked processes */
} semaphore;

semaphore S;

Page 53: Processes and Threads

53

Semaphores [E.W. Dijkstra, 1965]

• There are 2 operations on semaphores, UP and DOWN. These operations must be executed atomically (that is, in mutual exclusion). Suppose that P is the process making the system call. The operations are defined as follows:

DOWN(S):
    if (S.count > 0)
        S.count = S.count - 1;
    else
        block(P); that is,
        (a) enqueue the pid of P in S.Q,
        (b) block process P (remove the pid from the ready queue), and
        (c) pass control to the scheduler.

Page 54: Processes and Threads

54

Semaphores [E.W. Dijkstra, 1965].

UP(S):
    if (S.Q is nonempty)
        wakeup(P) for some process P in S.Q; that is,
        (a) remove a pid from S.Q (the pid of P),
        (b) put the pid in the ready queue, and
        (c) pass control to the scheduler.
    else
        S.count = S.count + 1;
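The DOWN/UP pseudocode above can be sketched in user space with a mutex and condition variable, the condvar's internal wait queue standing in for S.Q (a standard reformulation, not the slide's kernel implementation: here UP always increments and a woken waiter decrements, which is observably equivalent):

```c
#include <assert.h>
#include <pthread.h>

/* A counting semaphore matching the slide's structure. */
typedef struct {
    int count;          /* S.count */
    pthread_mutex_t m;  /* makes DOWN and UP atomic */
    pthread_cond_t  q;  /* its wait queue plays the role of S.Q */
} sem;

void sem_init_n(sem *s, int n) {
    s->count = n;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->q, NULL);
}

void DOWN(sem *s) {
    pthread_mutex_lock(&s->m);
    while (s->count == 0)              /* block(P): join the wait queue */
        pthread_cond_wait(&s->q, &s->m);
    s->count--;                        /* S.count = S.count - 1 */
    pthread_mutex_unlock(&s->m);
}

void UP(sem *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->q);        /* wakeup(P) for some waiter */
    pthread_mutex_unlock(&s->m);
}

/* Single-threaded trace: count 2 -> DOWN -> DOWN -> UP leaves 1. */
int sem_demo(void) {
    sem s;
    sem_init_n(&s, 2);
    DOWN(&s); DOWN(&s); UP(&s);
    return s.count;
}
```

The while (not if) around pthread_cond_wait guards against spurious wakeups, a detail the textbook pseudocode can ignore because its block/wakeup are assumed exact.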

Page 55: Processes and Threads

55

Mutual Exclusion Problem

semaphore mutex = 1;   /* set mutex.count = 1 */

DOWN(mutex);
- critical section -
UP(mutex);

• To see how semaphores are used to eliminate the race conditions in Ex4.c, see Ex6.c and sem.h. The library sem.h contains a version of UP(semid) and DOWN(semid) that correspond with UP and DOWN given above.

• Semaphores do not require busy-waiting; instead they involve BLOCKING.

Page 56: Processes and Threads

56

Producer-Consumer Problem = Bounded Buffer Problem

• Consider a circular buffer that can hold N items.

• Producers add items to the buffer and Consumers remove items from the buffer.

• The Producer-Consumer Problem is to restrict access to the buffer so correct executions result.

Page 57: Processes and Threads

57

Sleep and Wakeup

Producer-consumer problem with fatal race condition

Page 58: Processes and Threads

58

Semaphores

The producer-consumer problem using semaphores

Page 59: Processes and Threads

59

Mutexes

Implementation of mutex_lock and mutex_unlock

• A mutex is a semaphore that can be in one of two states: unlocked or locked.

Page 60: Processes and Threads

60

Using Semaphores

• Process Synchronization (ordering process execution): Suppose we have 4 processes: A, B, C, and D. A must finish executing before B and C start. B and C must finish executing before D starts.

      S1       S2
  A ----> B ----> D
  |               ^
  |  S1       S3  |
  +-----> C ------+

Then the processes may be synchronized using semaphores:

semaphore S1 = 0, S2 = 0, S3 = 0;

Page 61: Processes and Threads

61

Using Semaphores

• Process Synchronization (ordering process execution):

Process A:
----------
- do work of A
UP(S1); UP(S1);   /* let both B and C start (one token each) */

Process B:
----------
DOWN(S1);         /* block until A is finished */
- do work of B
UP(S2);

Process C:
----------
DOWN(S1);
- do work of C
UP(S3);

Page 62: Processes and Threads

62

Using Semaphores

Process D:
----------
DOWN(S2);
DOWN(S3);         /* wait for both B and C to finish */
- do work of D

• In conclusion, we use semaphores in two different ways: mutual exclusion (mutex) and process synchronization (full, empty).

• Is it easy to use semaphores?
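The A → {B, C} → D ordering can be sketched with POSIX semaphores and threads standing in for processes (names such as run_pipeline are ours; note that A must do UP(S1) twice, once for B and once for C, since each DOWN consumes one token):

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

/* Threads stand in for processes; POSIX sem_t for S1, S2, S3.
 * Each "process" appends its letter to `order` when it runs. */
static sem_t s1, s2, s3;
static pthread_mutex_t log_m = PTHREAD_MUTEX_INITIALIZER;
static char order[5];
static int pos;

static void record(char c) {
    pthread_mutex_lock(&log_m);
    order[pos++] = c;
    pthread_mutex_unlock(&log_m);
}

/* A ups S1 twice: one token for B, one for C. */
static void *A(void *arg) { (void)arg; record('A'); sem_post(&s1); sem_post(&s1); return NULL; }
static void *B(void *arg) { (void)arg; sem_wait(&s1); record('B'); sem_post(&s2); return NULL; }
static void *C(void *arg) { (void)arg; sem_wait(&s1); record('C'); sem_post(&s3); return NULL; }
static void *D(void *arg) { (void)arg; sem_wait(&s2); sem_wait(&s3); record('D'); return NULL; }

/* Run all four; 'A' is always first and 'D' always last. */
const char *run_pipeline(void) {
    pos = 0;
    sem_init(&s1, 0, 0); sem_init(&s2, 0, 0); sem_init(&s3, 0, 0);
    pthread_t t[4];
    void *(*body[4])(void *) = { D, C, B, A };  /* start order is irrelevant */
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, body[i], NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    sem_destroy(&s1); sem_destroy(&s2); sem_destroy(&s3);
    order[pos] = '\0';
    return order;
}
```

B and C may finish in either order, but the semaphores force A to the front and D to the back of the log every run.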

Page 63: Processes and Threads

63

Monitors

• A monitor is a collection of procedures, variables, and data structures that can only be accessed by one process at a time (for the purpose of mutual exclusion).

• To allow a process to wait within the monitor, a condition variable must be declared, as

condition x, y;

• Condition variables can only be used with the operations wait and signal (for the purpose of synchronization).
– The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
– The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect.

Page 64: Processes and Threads

64

Monitors

Example of a monitor

Page 65: Processes and Threads

65

Monitors

• Outline of producer-consumer problem with monitors
– only one monitor procedure active at one time
– buffer has N slots

Page 66: Processes and Threads

66

Monitors

• Monitors in Java
– Java supports user-level threads and allows methods (procedures) to be grouped together into classes.
– By adding the keyword synchronized to a method, Java guarantees that once any thread has started executing that method, no other thread can execute that method.

• Advantages: ease of programming. (?)
• Disadvantages:
– Monitors are a programming-language concept, so they are difficult to add to an existing language; e.g., how can a compiler determine which procedures are inside a monitor if they can be nested?
– Monitors are too expensive to implement, and they are overly restrictive (shared memory is required).

Page 67: Processes and Threads

67

Monitors

Solution to producer-consumer problem in Java (part 1)

Page 68: Processes and Threads

68

Monitors

Solution to producer-consumer problem in Java (part 2)

static class our_monitor {  // this is a monitor
    private int buffer[] = new int[N];
    private int count = 0, lo = 0, hi = 0;  // counters and indices

    public synchronized void insert(int val) {
        if (count == N) go_to_sleep();  // if the buffer is full, go to sleep
        buffer[hi] = val;               // insert an item into the buffer
        hi = (hi + 1) % N;              // slot to place next item in
        count = count + 1;              // one more item in the buffer now
        if (count == 1) notify();       // if consumer was sleeping, wake it up
    }

    public synchronized int remove() {
        int val;
        if (count == 0) go_to_sleep();  // if the buffer is empty, go to sleep
        val = buffer[lo];               // fetch an item from the buffer
        lo = (lo + 1) % N;              // slot to fetch next item from
        count = count - 1;              // one fewer item in the buffer
        if (count == N - 1) notify();   // if producer was sleeping, wake it up
        return val;
    }

    private void go_to_sleep() {
        try { wait(); } catch (InterruptedException exc) {}
    }
}

Page 69: Processes and Threads

69

Message Passing

• Possible approaches:
– Assign each process a unique address, such as addr. Then send messages directly to the process (blocking receive):
    send(addr, msg); recv(addr, msg);
  Example: signals in UNIX.
– Use mailboxes (blocking receive):
    send(mailbox, msg); recv(mailbox, msg);
  Example: pipes in UNIX.
– Rendezvous (blocking send and receive). Example: Ada tasks.

• Message passing is commonly used in parallel programming systems, for example MPI (Message-Passing Interface).

Page 70: Processes and Threads

70

Pipe Implementation

• Pipe description:
– A pipe is a unidirectional data structure.
– One end is for reading and one end is for writing.
– Use the pipe function to create a pipe:
    int mbox[2];
    pipe(mbox);
– In our implementation, mbox[0] is for reading and mbox[1] is for writing:
    First End                              Second End
    mbox[0] <- oooooooooooooooooooo <- mbox[1]
  where o stands for a token.
– Each pipe is used like a semaphore. If the initial value of the semaphore is 0, then no token is stored in the pipe initially.

Page 71: Processes and Threads

71

Pipe Implementation

• Pipe description (continued):
– If the initial value of the semaphore is more than 0, for example 3, then it can be initialized in this way:
    int msg = 0;
    for (i = 1; i <= 3; i++)
        write(mbox[1], &msg, sizeof(msg));
– DOWN(S) is equivalent to
    read(mbox[0], &msg, sizeof(msg));
– UP(S) is equivalent to
    write(mbox[1], &msg, sizeof(msg));
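The pipe-as-semaphore idea above can be sketched as follows (function names are ours; each int written into the pipe is one token, and a blocking read() on an empty pipe is the blocking in DOWN):

```c
#include <assert.h>
#include <unistd.h>

static int mbox[2];   /* mbox[0] read end, mbox[1] write end */

/* Create the pipe and preload one token per unit of the count. */
void pipe_sem_init(int initial) {
    int msg = 0;
    pipe(mbox);
    for (int i = 0; i < initial; i++)
        write(mbox[1], &msg, sizeof(msg));
}

void pipe_UP(void) {
    int msg = 0;
    write(mbox[1], &msg, sizeof(msg));  /* add a token */
}

void pipe_DOWN(void) {
    int msg;
    read(mbox[0], &msg, sizeof(msg));   /* consume a token; blocks if none */
}

/* Trace count 3 -> DOWN -> UP -> DOWN, then count the 2 leftover
 * tokens (bounded loop so we never block while draining). */
int pipe_sem_demo(void) {
    pipe_sem_init(3);
    pipe_DOWN();   /* count = 2 */
    pipe_UP();     /* count = 3 */
    pipe_DOWN();   /* count = 2 */
    int tokens = 0, msg;
    while (tokens < 2 && read(mbox[0], &msg, sizeof(msg)) > 0)
        tokens++;
    close(mbox[0]); close(mbox[1]);
    return tokens;
}
```

Because pipes work across fork(), this gives related processes a semaphore without shmget/shmat or a semaphore API.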

Page 72: Processes and Threads

72

Message Passing

The producer-consumer problem with N messages

Page 73: Processes and Threads

73

Barriers

• Use of a barrier
– processes approaching a barrier
– all processes but one blocked at the barrier
– last process arrives, all are let through

• Example: Parallel matrix multiplication

Page 74: Processes and Threads

74

Classical IPC Problems

• These problems are used for testing every newly proposed synchronization scheme:
– Bounded-Buffer (Producer-Consumer) Problem
– Dining-Philosophers Problem
– Readers and Writers Problem
– Sleeping Barber Problem

Page 75: Processes and Threads

75

Dining Philosophers

• Dining Philosophers Problem [Dijkstra, 1965]: Five philosophers are seated around a table. There is one fork between each pair of philosophers. Each philosopher needs to grab the two adjacent forks in order to eat. Philosophers alternate between eating and thinking. They only eat for finite periods of time.

Page 76: Processes and Threads

76

Dining Philosophers

• Philosophers eat/think
• Eating needs 2 forks
• Pick up one fork at a time
• How to prevent deadlock?

Page 77: Processes and Threads

77

Dining Philosophers

A nonsolution to the dining philosophers problem

Page 78: Processes and Threads

78

Dining Philosophers

• Problem: Suppose all philosophers execute the first DOWN operation before any have a chance to execute the second DOWN operation; that is, they all grab one fork. Then deadlock will occur and no philosopher will be able to proceed. This is called a CIRCULAR WAIT.

• Other solutions:
– Only allow up to four philosophers to try grabbing their forks at once.
– Asymmetric solution: odd-numbered philosophers grab their left fork first, whereas even-numbered philosophers grab their right fork first.
– Pick up the forks only if both are available. See Fig. 2-33 (page 127). Note: this solution may lead to starvation.

Page 79: Processes and Threads

79

Dining Philosophers

Solution to dining philosophers problem (part 1)

Page 80: Processes and Threads

80

Dining Philosophers

Solution to dining philosophers problem (part 2)

Page 81: Processes and Threads

81

Readers and Writers Problem

• The readers and writers problem models access to a shared database. Only one writer may write at a time. Any number of readers may read at the same time, but not while a writer is writing.

• One variation of the problem, sometimes called weak reader preference, is to suspend incoming readers as long as a writer is waiting.

Page 82: Processes and Threads

82

The Readers and Writers Problem

A solution to the readers and writers problem

Page 83: Processes and Threads

83

The Sleeping Barber Problem

• Problem: The barber shop has one barber, one barber chair, and n chairs for waiting customers.
– If there are no customers present, the barber sits down in the barber chair and falls asleep.
– When a customer arrives, he has to wake up the sleeping barber.
– If additional customers arrive while the barber is cutting a customer's hair, they either sit down or leave the shop.

• Program the barber and the customers without getting into race conditions.

Page 84: Processes and Threads

84

The Sleeping Barber Problem

Page 85: Processes and Threads

85

The Sleeping Barber Problem

Solution to sleeping barber problem.

Page 86: Processes and Threads

86

Scheduling

• The SCHEDULER is the part of the operating system that decides (among the runnable processes) which process is to be run next.

• A SCHEDULING ALGORITHM is the policy used by the scheduler to make that decision.

• To make sure that no process runs too long, a clock causes a periodic interrupt, usually around 50-60 Hz, i.e., about every 20 msec. PREEMPTIVE SCHEDULING allows processes that are runnable to be temporarily suspended so that other processes can have a chance to use the CPU.

Page 87: Processes and Threads

87

Properties of a GOOD Scheduling Algorithm:

1. Fairness - each process gets its fair share of time with the CPU.

2. Efficiency - keep the CPU busy doing productive work.

3. Response Time - minimize the response time for interactive users.

4. Turnaround Time - minimize the turnaround time on batch jobs.

5. Throughput - maximize the number of jobs processed per hour.

Page 88: Processes and Threads

88

Scheduling (Process Behavior)

• Bursts of CPU usage alternate with periods of I/O wait
– a CPU-bound process
– an I/O-bound process

Page 89: Processes and Threads

89

Introduction to Scheduling

Scheduling Algorithm Goals

Page 90: Processes and Threads

90

First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

Page 91: Processes and Threads

91

FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order P2, P3, P1.

• The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case.
• Convoy effect: short processes wait behind a long process.
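Both FCFS calculations above can be checked with a small helper (our own sketch, assuming all jobs arrive at time 0 and are served in array order):

```c
#include <assert.h>

/* Average waiting time under FCFS: each job waits for the sum of
 * the bursts of the jobs ahead of it in the queue. */
int fcfs_avg_wait(const int *burst, int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;  /* job i has waited this long */
        elapsed += burst[i];    /* then runs to completion */
    }
    return total_wait / n;      /* integer average, as on the slides */
}

/* The two arrival orders from the slides: */
static const int order1[] = {24, 3, 3};  /* P1, P2, P3 -> avg 17 */
static const int order2[] = {3, 3, 24};  /* P2, P3, P1 -> avg 3  */
```

Running the long job first quintuples the average wait, which is the convoy effect in miniature.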

Page 92: Processes and Threads

92

Shortest-Job-First (SJF) Scheduling

• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

• The real difficulty with the SJF algorithm is knowing the length of the next CPU request.

• SJF scheduling is used frequently in long-term scheduling.

• The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.

Page 93: Processes and Threads

93

Scheduling in Batch Systems

An example of shortest job first scheduling

Page 94: Processes and Threads

94

Shortest-Job-First (SJF) Scheduling

• Two schemes:
– nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
– preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

• SJF is optimal – it gives the minimum average waiting time for a given set of processes.

Page 95: Processes and Threads

95

Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

• SJF (non-preemptive):

| P1 | P3 | P2 | P4 |
0    7    8    12   16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
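The non-preemptive schedule above can be reproduced by a small simulation (our own sketch; ties are broken by lowest process index, which matches the slide's P2-before-P4 order):

```c
#include <assert.h>

#define NP 4

/* Non-preemptive SJF: whenever the CPU is free, pick the arrived,
 * unfinished job with the shortest burst and run it to completion.
 * Returns total waiting time = sum over jobs of (start - arrival). */
int sjf_total_wait(const int arrival[NP], const int burst[NP]) {
    int done[NP] = {0}, t = 0, total = 0;
    for (int scheduled = 0; scheduled < NP; scheduled++) {
        int pick = -1;
        for (int i = 0; i < NP; i++)       /* shortest job already arrived */
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t++; scheduled--; continue; }  /* CPU idle */
        total += t - arrival[pick];        /* time job pick spent waiting */
        t += burst[pick];                  /* run it to completion */
        done[pick] = 1;
    }
    return total;
}

/* The slide's workload, P1..P4: */
static const int arr[NP] = {0, 2, 4, 5};
static const int bur[NP] = {7, 4, 1, 4};
```

The total wait is 0 + 6 + 3 + 7 = 16, giving the slide's average of 4.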

Page 96: Processes and Threads

96

Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

• SJF (preemptive):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3

Page 97: Processes and Threads

97

Three-Level Scheduling

• The admission scheduler decides which jobs to admit to the system.

• The memory scheduler decides which processes should be kept in memory and which ones kept on disk.
– It can also decide how many processes it wants in memory, called the degree of multiprogramming.

• The CPU scheduler actually picks one of the ready processes in main memory to run next.

Page 98: Processes and Threads

98

Scheduling in Batch Systems

Three-level scheduling

Page 99: Processes and Threads

99

Scheduling in Interactive Systems

• Round-Robin Scheduling
– list of runnable processes
– list of runnable processes after B uses up its quantum

Page 100: Processes and Threads

100

Round Robin (RR)

• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

• Performance:
– q large ⇒ RR behaves like FIFO
– q small ⇒ more responsive, but q must remain large with respect to the context-switch time, otherwise overhead is too high.

Page 101: Processes and Threads

101

Example of RR with Time Quantum = 20

Process   Burst Time
P1        53
P2        17
P3        68
P4        24

• The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

• Typically, RR gives higher average turnaround than SJF, but better response.
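The Gantt chart above can be reproduced by a small round-robin simulation (a sketch with our own names; because all four jobs are ready at time 0, cycling through them in fixed order is equivalent to a true FIFO ready queue here):

```c
#include <assert.h>

#define NPROC 4

/* Round-robin with all jobs ready at time 0: give each unfinished
 * job up to `q` units per turn. Fills finish[] with completion
 * times; returns the time the last job completes. Assumes n <= NPROC. */
int rr_schedule(const int *burst, int n, int q, int *finish) {
    int remaining[NPROC], t = 0, left = n;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (left > 0)
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            t += slice;                /* job i runs one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {   /* job i just finished */
                finish[i] = t;
                left--;
            }
        }
    return t;
}

/* The slide's workload with quantum 20: P1=53, P2=17, P3=68, P4=24. */
static const int rr_burst[NPROC] = {53, 17, 68, 24};

int rr_finish_of(int i) {
    int fin[NPROC];
    rr_schedule(rr_burst, NPROC, 20, fin);
    return fin[i];
}
```

P2 (the shortest job) completes at 37, P4 at 121, P1 at 134, and P3 at 162, matching the chart's final boundary.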

Page 102: Processes and Threads

102

Time Quantum and Context Switch Time

Page 103: Processes and Threads

103

Priority Scheduling

• A priority number (integer) is associated with each process.
• The CPU is allocated to the process with the highest priority (smallest integer ⇒ highest priority).
– Preemptive
– Nonpreemptive

• SJF is priority scheduling where the priority is the predicted next CPU burst time.

• Problem: starvation – low-priority processes may never execute.

• Solution: aging – as time progresses, increase the priority of the process.

Page 104: Processes and Threads

104

Example of Priority Scheduling

Process   Burst Time   Priority
P1        5.0          6
P2        2.0          1
P3        1.0          3
P4        4.0          5
P5        2.0          2
P6        2.0          4

• Priority schedule:

| P2 | P5 | P3 | P6 | P4 | P1 |
0    2    4    5    7    11   16

• Average waiting time = (0 + 2 + 4 + 5 + 7 + 11)/6 = 4.83

Page 105: Processes and Threads

105

Multilevel Queue Scheduling

• Each ready queue is assigned a different priority class [CTSS - Corbato, 1962].

• The ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)

• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS

• Scheduling must also be done between the queues:
– Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

Page 106: Processes and Threads

106

Scheduling in Interactive Systems

A scheduling algorithm with four priority classes

Page 107: Processes and Threads

107

More Scheduling

• Shortest Process Next
– SJF can be used in an interactive environment by estimating the runtime based on past behavior. Aging is a method used to estimate the runtime by taking a weighted average of the current runtime and the previous estimate.
– Example: Let a be the estimate weight; then the current estimate is
    a × T0 + (1 − a) × T1
  where T0 is the previous estimate and T1 is the current measured runtime.

• Guaranteed Scheduling
– With n processes, promise each process 1/n of the CPU cycles.
– Compute ratio = (actual CPU time consumed) / (CPU time entitled).
– Run the process with the lowest ratio.
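The aging formula above is a one-liner (our own function name; with a = 1/2 the previous estimate and the newest measurement get equal weight, so old runs fade out by halves):

```c
#include <assert.h>

/* Exponential averaging ("aging"): blend the previous estimate T0
 * with the latest measured runtime T1, weighted by a in [0, 1]. */
double aging_estimate(double a, double t0, double t1) {
    return a * t0 + (1.0 - a) * t1;
}
```

For instance, a previous estimate of 10 msec and a new measurement of 20 msec with a = 0.5 yields a new estimate of 15 msec; a = 1 ignores the new measurement entirely.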

Page 108: Processes and Threads

108

More Scheduling

• Lottery Scheduling
– Give processes lottery tickets for various system resources.
– When a scheduling decision is made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource.

• Fair-Share Scheduling
– Takes into account how many processes each user owns.
– Example: User 1 owns A, B, C, D; User 2 owns E.
– Round-robin: AEBECEDE...
– Fair-share, if User 1 is entitled to twice as much CPU time as User 2: ABECDEABECDE...

Page 109: Processes and Threads

109

Scheduling in Real-Time Systems

• The scheduler makes real promises to the user in terms of deadlines or CPU utilization.

• Schedulable real-time system
– Given
• m periodic events
• event i occurs within period Pi and requires Ci seconds
– Then the load can only be handled if

    Σ (i = 1 to m)  Ci / Pi  ≤  1
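The schedulability condition — the per-event CPU utilizations Ci/Pi must sum to at most 1 — can be sketched directly (the example workloads below are our own, not from the slides):

```c
#include <assert.h>

/* Schedulability test for m periodic events: event i needs C[i]
 * seconds of CPU every P[i] seconds. The load can be handled only
 * if the utilization fractions sum to at most 1. */
int schedulable(const double *C, const double *P, int m) {
    double u = 0.0;
    for (int i = 0; i < m; i++)
        u += C[i] / P[i];   /* fraction of the CPU event i consumes */
    return u <= 1.0;
}

/* Hypothetical workloads (times in msec):
 * three streams using 0.50 + 0.15 + 0.20 = 0.85 of the CPU,
 * versus two streams demanding 0.60 + 0.80 = 1.40 of it. */
static const double C_ok[]   = {50, 30, 100};
static const double P_ok[]   = {100, 200, 500};
static const double C_over[] = {60, 80};
static const double P_over[] = {100, 100};
```

The first workload fits (85% utilization); the second demands 140% of one CPU and cannot be scheduled by any algorithm.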

Page 110: Processes and Threads

110

Policy versus Mechanism

• Separate what is allowed to be done from how it is done
– a process knows which of its children threads are important and need priority

• Scheduling algorithm is parameterized
– mechanism in the kernel

• Parameters are filled in by user processes
– policy set by user process

Page 111: Processes and Threads

111

Thread Scheduling

• Process scheduling algorithms can also be used for thread scheduling. In practice, round-robin and priority scheduling are used.

• The only constraint is the absence of a clock interrupt to preempt a user-level thread that has run too long.

• User-level and kernel-level threads:
– A major difference between user-level threads and kernel-level threads is performance.
– User-level threads can employ an application-specific thread scheduler.

Page 112: Processes and Threads

112

Thread Scheduling

Possible scheduling of user-level threads
• 50-msec process quantum
• threads run 5 msec per CPU burst

Page 113: Processes and Threads

113

Thread Scheduling

Possible scheduling of kernel-level threads
• 50-msec process quantum
• threads run 5 msec per CPU burst

