CSCI 3431: OPERATING SYSTEMS Chapter 6 – Process Synchronisation (Pgs 225 – 267)
Page 1: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

CSCI 3431: OPERATING SYSTEMS

Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Page 2: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Overview

Consider a block of shared memory. Process P wants to write data to the shared memory while Process R wants to read the data.

Up until now, we have needed to use a synchronous system call to cause the reader to wait until the write is complete

The approach works because the synchronous system calls provide mutual exclusion

Page 3: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Mutual Exclusion

When a resource (e.g., memory) can be accessed by only one process at a time, we say that the resource is mutually exclusive

The code in each process to access the mutually exclusive resource is known as a critical section

Only one process's critical section may run at a time (and hence only one process can access the resource at a time)

Page 4: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Race Conditions

Pre-emption means that, essentially, the statements of any two processes can be interleaved to create any ordering of their parts

If the ordering of the parts affects the outcome of computation, we say that a race condition exists

To prevent race conditions, we use mutual exclusion and critical sections

Page 5: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Example

x++ increments x:

    R1 ← mem[x];  R1 ← R1 + 1;  mem[x] ← R1;

What if P1 does x++ and P2 does x--? The end result is that x should never change, regardless of the ordering.

Assume x starts at 4:

    P1: R1 ← mem[x];  R1 ← R1 + 1;   *Pre-empt*
    P2: R2 ← mem[x];  R2 ← R2 - 1;   *Pre-empt*
    P1: mem[x] ← R1;                 *Pre-empt*
    P2: mem[x] ← R2;

.... x is now 3, not 4 ....

Idea: x++ must fully execute before x-- (or the reverse)
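As a concrete illustration (not from the slides), a minimal pthreads program in which two threads race on x exactly as above; compiled with gcc -pthread, the final value is usually not 0:

#include <pthread.h>
#include <stdio.h>

long x = 0;                          /* shared, deliberately unprotected */

void *incr(void *arg) {              /* plays the role of P1 doing x++ */
    for (int i = 0; i < 1000000; i++)
        x++;                         /* read-modify-write, not atomic */
    return NULL;
}

void *decr(void *arg) {              /* plays the role of P2 doing x-- */
    for (int i = 0; i < 1000000; i++)
        x--;
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, incr, NULL);
    pthread_create(&p2, NULL, decr, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("x = %ld (should be 0)\n", x);   /* usually non-zero: a race */
    return 0;
}

The function names incr/decr and the iteration counts are just for this sketch.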

Page 6: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Critical Section

A solution must provide the following:

1. Mutual Exclusion: Only one process at a time can be in its critical section

2. Progress: Processes must be allowed to make progress and be allowed (eventually) to enter their critical section

3. Bounded Waiting: There is a limit on how many times other processes can perform their critical sections and thus block any other process from entering theirs

Page 7: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Peterson's Solution

Basic software solution for processes i and j:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                       /* busy wait */
    /* Critical Section */
    flag[i] = FALSE;
    /* Remainder */
} while (TRUE);

Highly concurrent (multi-threaded) CPUs may reorder memory operations, which can invalidate this solution
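A minimal runnable sketch of Peterson's algorithm (my own illustration, not from the slides). It uses C11 sequentially consistent atomics for flag and turn precisely so that the reordering problem mentioned above does not apply:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];                 /* flag[i]: process i wants to enter */
atomic_int  turn;                    /* which process must defer */
long counter = 0;                    /* shared data guarded by the protocol */

void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);    /* announce intent to enter */
        atomic_store(&turn, j);          /* let the other process go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy wait */
        counter++;                       /* critical section */
        atomic_store(&flag[i], false);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld (expect 200000)\n", counter);
    return 0;
}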

Page 8: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Locks

Use a lock to protect the critical section. A process must acquire the lock before entering and release it after leaving.

Both hardware and software solutions are possible.

Simple with hardware (disable interrupts while the shared variable – the lock – is being modified), but inefficient and potentially dangerous.

Page 9: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Lock Example

bool TestAndSet(bool *t) {
    // If false: return false, set to true
    // If true: return true, set to true
    bool rv = *t;
    *t = TRUE;
    return rv;
}

Then ...

do {
    // Loop while the lock is set
    while (TestAndSet(&lock))
        ;
    /* Critical section */
    lock = FALSE;
    /* remainder */
} while (TRUE);

This works only if TestAndSet is atomic and cannot be interrupted.
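On real hardware the atomic test-and-set is supplied by the instruction set; portable C reaches it through C11's atomic_flag. A small sketch (mine, not from the slides) of the same spinlock pattern:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear = unlocked */

void acquire(void) {
    /* atomic_flag_test_and_set sets the flag and returns its old value,
       i.e. exactly the TestAndSet above, but guaranteed atomic */
    while (atomic_flag_test_and_set(&lock))
        ;                               /* spin until the old value was clear */
}

void release(void) {
    atomic_flag_clear(&lock);           /* lock = FALSE */
}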

Page 10: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Problems

Hardware solutions are difficult to use since they only provide mutual exclusion

Bounded waiting requires considerable and complex "extra" code (see Fig. 6.8)

More complex for n processes than for 2

Difficult for hardware engineers to implement, as the atomic instructions must read and write memory in one indivisible step

Page 11: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Semaphores

Integer variable accessed using only the two operations wait and signal

wait(s) {
    while (s <= 0)
        ;           /* busy wait until s is positive */
    s--;
}

signal(s) {
    s++;
}

wait and signal must be atomic in their manipulation of s!

Page 12: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Using a Semaphore

do {
    wait(mutex);
    /* critical section */
    signal(mutex);
} while (TRUE);

Main problem is "busy waiting" – constant checking of mutex in wait requires CPU instructions (being busy) to do nothing except wait

Page 13: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Semaphores++

Semaphores with busy waiting are often called "spinlocks"

Can be generalised for bounded resources usable by more than one process

Simple to program with and provide bounded waiting

Can remove busy waiting if we tell the kernel to use a queue for the semaphore and block all processes that are waiting

Signal would wake up a waiting process
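A sketch of such a queue-based, non-busy-waiting semaphore (my own illustration built from a POSIX mutex and condition variable; a real kernel implementation differs):

#include <pthread.h>

typedef struct {
    int value;                          /* the semaphore count */
    pthread_mutex_t m;                  /* protects value */
    pthread_cond_t  q;                  /* where blocked waiters sleep */
} ksem;

void ksem_init(ksem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->q, NULL);
}

void ksem_wait(ksem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->q, &s->m);   /* block instead of spinning */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void ksem_signal(ksem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->q);            /* wake up one waiting process */
    pthread_mutex_unlock(&s->m);
}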

Page 14: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Deadlock + Starvation

"Deadlock" can occur when two processes are each waiting for a resource held by the other process.

E.g., 4 cars approach an intersection, each blocking one of the 4 roads out; each waits for the others to move and clear a road forward.

Starvation occurs from indefinite waiting. E.g., the process that is supposed to do the wakeup fails and the wakeup is never sent.

Page 15: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Priority Inversion

Requires more than 2 priorities.

A higher priority process is blocked from its critical section by lower priority processes.

Solution: Priority Inheritance. The process holding the resource inherits the priority of the highest-priority process that is blocked on it.

Page 16: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Classic Problems

Bounded Buffer (Producer/Consumer)

Readers-Writers: simultaneous reads are allowed, writes block everything

Dining Philosophers

Page 17: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

Monitors

Semaphores only work when used properly, by all processes

Solution is to pre-code the critical sections.

A Monitor is an ADT. Only one thread at a time can be in a monitor.

Usually provides a way for a thread to yield the monitor (or block) so another thread can run.

Many different models of monitors (See Buhr et al. for details)
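C has no built-in monitor construct, but the usual approximation (a sketch with hypothetical names, not from the slides) packages the shared data with a mutex and condition variables, so only one thread is inside at a time and a thread can yield until another signals it:

#include <pthread.h>
#include <stdbool.h>

/* A one-slot mailbox "monitor": put blocks while full, get blocks while empty */
typedef struct {
    pthread_mutex_t lock;               /* only one thread inside at a time */
    pthread_cond_t  not_full;           /* yield here until the slot empties */
    pthread_cond_t  not_empty;          /* yield here until the slot fills */
    int  item;
    bool full;
} mailbox;

void mb_put(mailbox *mb, int v) {
    pthread_mutex_lock(&mb->lock);
    while (mb->full)
        pthread_cond_wait(&mb->not_full, &mb->lock);
    mb->item = v;
    mb->full = true;
    pthread_cond_signal(&mb->not_empty);
    pthread_mutex_unlock(&mb->lock);
}

int mb_get(mailbox *mb) {
    pthread_mutex_lock(&mb->lock);
    while (!mb->full)
        pthread_cond_wait(&mb->not_empty, &mb->lock);
    int v = mb->item;
    mb->full = false;
    pthread_cond_signal(&mb->not_full);
    pthread_mutex_unlock(&mb->lock);
    return v;
}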

Page 18: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

POSIX Semaphores

Provided by semaphore.h:

sem_init() – creates and initialises an unnamed semaphore
sem_open() – opens (or creates) a named semaphore
sem_close() – closes it after we are done
sem_post() – releases the semaphore (signal)
sem_wait() – obtains the semaphore (wait)
sem_trywait() – non-blocking wait (fails instead of blocking)
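A minimal usage sketch (mine, assuming a Linux-style unnamed semaphore shared between threads) of the wait/signal pattern from the earlier slide:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                         /* binary semaphore guarding counter */
long counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);            /* wait(mutex) */
        counter++;                   /* critical section */
        sem_post(&mutex);            /* signal(mutex) */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);          /* 0 = thread-shared, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expect 200000)\n", counter);
    sem_destroy(&mutex);
    return 0;
}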

Page 19: Chapter 6 – Process Synchronisation (Pgs 225 – 267)

To Do:

Work on Lab 4

Read Chapter 6 (pgs 225-267; this lecture)

Read Chapter 7 (pgs 283-306; next lecture)

