CS444/CS544 Operating Systems Synchronization 2/21/2006 Prof. Searleman jets@clarkson.edu.



Outline

Synchronization

NOTE:
Return & discuss HW#4
Lab#2 posted, due Thurs, 3/9
Read: Chapter 7
HW#5 posted, due Friday, 2/24/06
Exam #1: Wednesday, March 1, 7:00 pm, SC162; covers SGG Chapters 5 & 6

Last time

Need for synchronization primitives Locks and building locks from HW primitives

Criteria for a Good Solution to the Critical Section Problem

Mutual Exclusion
Only one process is allowed to be in its critical section at once
All other processes are forced to wait on entry
When one process leaves, another may enter

Progress
If a process is in its critical section, it should not be able to stop another process from entering indefinitely
The decision of who will be next can't be delayed indefinitely
Can't just give one process access; can't deny access to everyone

Bounded Waiting
After a process has made a request to enter its critical section, there should be a bound on the number of times other processes can enter their critical sections before that request is granted

Synchronization Primitives

Synchronization primitives are used to implement a solution to the critical section problem

The OS uses HW primitives:
Disable interrupts
HW test-and-set

The OS exports primitives to user applications; user level can build more complex primitives from simpler OS primitives:
Locks
Semaphores
Monitors
Messages

Implementing Locks

OK, so now we have seen that all is well *if* we have these objects called locks

How do we implement locks?
Recall: the implementation of lock has a critical section too (read lock; if lock free, write lock taken)
Need help from hardware:
Make the basic lock primitive atomic
Atomic instructions like test-and-set, read-modify-write, compare-and-swap
Prevent context switches
Disable/enable interrupts

Disable/enable interrupts

Recall how the OS can implement lock as disable interrupts and unlock as enable interrupts

Problems:
Insufficient on a multiprocessor, because it only disables interrupts on the single processor executing it
Cannot be used safely at user level, and is not even exposed to user level through some system call! Once interrupts are disabled, there is no way for the OS to regain control until the user-level process/thread yields voluntarily (or requests some OS service)

Test-and-set

Suppose the CPU provides an atomic test-and-set instruction with semantics much like this:

    bool test_and_set(bool *flag) {
        bool old = *flag;
        *flag = true;
        return old;   /* did you capture "false", i.e. not previously set? */
    }

Without an instruction like this, you must use multiple instructions (not atomic):

    load $register $mem       vs.   test-and-set $register $mem
    store 1 $mem

Implementing a lock with test-and-set

    typedef struct {
        bool held;   /* initialize to FALSE when the lock is created */
    } lock_t;

    void lock(lock_t *l) {
        while (test_and_set(&l->held)) {}   /* spin until the swap returns FALSE */
    }

    void unlock(lock_t *l) {
        l->held = FALSE;
    }

When you call the lock function and the lock is not held (by someone else), you will swap FALSE for TRUE atomically!!! test_and_set will return FALSE, jumping out of the while loop with the lock held

When you call the lock function and the lock is held (by someone else), you will frantically swap TRUE for TRUE many times until the other thread calls unlock

Notice: locks built from test-and-set are safe for user-level applications, unlike disable/enable interrupts!

Spinlocks

The type of lock we saw on the last slide is called a spinlock
If you try to lock and find it already locked, you will spin waiting for the lock to be released

Very wasteful of CPU time!
A spinning thread still uses its full share of CPU cycles while waiting; this is called busy waiting
During that time, the thread holding the lock cannot make progress!
What if the waiting thread has higher priority than the thread holding the lock!!

So safe even at user level, but inefficient

Avoiding Busy Waiting

Could modify the lock call to the following:

    void lock(lock_t *l) {
        while (test_and_set(&l->held)) {
            sched_yield();   /* give up the CPU instead of spinning */
        }
    }

But you still pay the context-switch overhead each time

Other choices? The OS can build a lock with the following properties:

When lock is called if the process or thread does not acquire the lock, it is taken off the ready queue and put on a special queue of processes that are waiting for the lock to be released

When a lock is released, the OS could choose a waiting process to grant the lock to and place it back on the ready queue

You can think of this as a lock that just happens to be implemented with a queue inside, but this is often called a semaphore

Semaphores

Recall: the lock object has one data member, the boolean value held

The semaphore object has two data members: an integer value and a queue of waiting processes/threads

Wait and Signal

Recall: locks are manipulated through two operations, lock and unlock
Semaphores are manipulated through two operations, wait and signal

Wait operation (like lock)
Decrements the semaphore's integer value and blocks the calling thread until the semaphore is available
Also called P() after the Dutch word proberen, to test

Signal operation (like unlock)
Increments the semaphore's integer value and, if threads are blocked waiting, allows one to "enter" the semaphore
Also called V() after the Dutch word verhogen, to increment

Why Dutch? Semaphores were invented by Edsger Dijkstra for the THE OS (strict layers) in 1968

Withdraw revisited

    int withdraw(int account, int amount) {
        wait(whichSemaphore(account));        /* ENTER CRITICAL SECTION */
        balance = readBalance(account);       /* CRITICAL SECTION */
        balance = balance - amount;
        updateBalance(account, balance);
        signal(whichSemaphore(account));      /* EXIT CRITICAL SECTION */
        return balance;
    }

Initialize the value of the semaphore to 1; then it is functionally like a lock

Implementing a semaphore

    struct semaphore_t {
        int value;
        queue waitingQueue;
    };

    void wait(semaphore_t *s) {
        s->value--;
        if (s->value < 0) {
            add self to s->waitingQueue;
            block;
        }
    }

    void signal(semaphore_t *s) {
        s->value++;
        if (s->value <= 0) {
            P = remove process from s->waitingQueue;
            wakeup(P);
        }
    }

What's wrong with this? (Hint: wait and signal have critical sections of their own; the decrement/test and increment/test are not atomic)

Implementing a semaphore with a lock

    struct semaphore_t {
        int value;
        queue waitingQueue;
        lock_t l;
    };

    void wait(semaphore_t *s) {
        lock(&s->l);
        s->value--;
        if (s->value < 0) {
            add self to s->waitingQueue;
            unlock(&s->l);
            block;
        } else {
            unlock(&s->l);
        }
    }

    void signal(semaphore_t *s) {
        lock(&s->l);
        s->value++;
        if (s->value <= 0) {
            P = remove process from s->waitingQueue;
            unlock(&s->l);
            wakeup(P);
        } else {
            unlock(&s->l);
        }
    }

Semaphore’s value

When value > 0, the semaphore is "open"
A thread calling wait will continue (after decrementing value)

When value <= 0, the semaphore is "closed"
A thread calling wait will decrement value and block

When the value is negative, its magnitude tells how many threads are waiting on the semaphore

What would a positive value say?

Binary vs Counting Semaphores

Binary semaphore
Semaphore's value initialized to 1
Used to guarantee exclusive access to a shared resource (functionally like a lock, but without the busy waiting)

Counting semaphore
Semaphore's value initialized to N > 0
Used to control access to a resource with N interchangeable units available (e.g. N processors, N pianos, N copies of a book, ...)
Allows threads to enter the semaphore as long as sufficient resources are available

Semaphore’s Waiting Queue

Recall: it is good to integrate the semaphore's waiting queue with the scheduler
When placed on the waitingQueue, a thread should be removed from the ready queue
Could use scheduling priority to decide who on the queue enters the semaphore when it next opens
Beware of starvation, just like in priority scheduling

If the OS exports the semaphore, then the kernel scheduler is aware of the waitingQueue
If a user-level thread package exports the semaphore, then the user-level thread scheduler (scheduling time on the available kernel threads) is aware of the waitingQueue

Is busy-waiting eliminated?

Threads block on the queue associated with the semaphore instead of busy waiting

Busy waiting is not gone completely
While accessing the semaphore's own critical section, a thread holds the semaphore's lock, and another thread that tries to call wait or signal at the same time will busy wait

The semaphore's critical section is normally much smaller than the critical section it protects, so busy waiting is greatly reduced

Also avoids context-switch overhead when just checking whether the critical section can be entered, and the OS knows all threads that are blocked on this object

Are spin locks always bad? Adaptive Locking in Solaris

Adaptive mutexes
On a multiprocessor, if you can't get the lock:
If the thread holding the lock is not running, then sleep
If the thread holding the lock is running, then spin wait
On a uniprocessor, if you can't get the lock:
Immediately sleep (no hope for the lock to be released while you are running)

Programmers choose adaptive mutexes for short code segments and semaphores or condition variables for longer ones

Blocked threads are placed on a separate queue for the desired object
The thread to gain access next is chosen by priority, and priority inheritance is implemented (to mitigate priority inversion)