
Process Synchronisation

  • INTRODUCTION

    Modern operating systems, such as UNIX, execute processes concurrently. Although there is a single Central Processing Unit (CPU), which executes the instructions of only one program at a time, the operating system rapidly switches the processor between different processes (usually allowing a single process a few hundred microseconds of CPU time before replacing it with another process). Processes also share the system's other resources. Some of these resources (such as memory) are shared simultaneously by all processes; such resources are used in parallel by all running processes on the system. Other resources must be used by one process at a time, and so must be carefully managed so that all processes get access to them; such resources are used concurrently by the running processes on the system. The most important example of a shared resource is the CPU, although most of the I/O devices are also shared. For many of these shared resources, the operating system distributes the time a process spends using the resource to ensure reasonable access for all processes. Consider the CPU: the operating system has a clock which raises an alarm every few hundred microseconds. At this time, the

    Topic 6  Process Synchronisation

    LEARNING OUTCOMES

    By the end of this topic, you should be able to:

    1. Describe the process synchronisation;

    2. Explain the critical section problem;

    3. Define semaphores;

    4. Explain deadlock; and

    5. Describe handling of deadlocks.

  • TOPIC 6 PROCESS SYNCHRONISATION 137

    operating system stops the CPU, saves all the information needed to restart the CPU exactly where it left off (this includes the current instruction being executed, the contents of the CPU's registers and other data) and removes the process from the CPU. The operating system then selects another process to run, restores the state of the CPU to what it was when it last ran this new process and starts the CPU again. Let us take a moment to see how the operating system manages this. In this topic, we shall also discuss deadlock. A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does. It is often compared to the chicken-and-egg paradox. The situation may be likened to two people who are drawing diagrams, with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person with the pencil needs the ruler and the person with the ruler needs the pencil before either will give up what he is holding. Neither request can be satisfied, so a deadlock occurs.

    6.1 SYNCHRONISATION PROCESS

    Process synchronisation refers to the idea that multiple processes are to join up or handshake at a certain point, so as to reach an agreement or commit to a certain sequence of actions. Synchronisation involves the orderly sharing of system resources by processes. To illustrate process synchronisation, consider the railway-road intersection shown in Figure 6.1. You can think of this intersection as a system resource that is shared by two processes: the car process and the train process. If only one process is active, then no resource conflict exists. But what happens when both processes are active and they both arrive at the intersection simultaneously? In this case, the shared resource becomes a problem. They cannot both use the resource at the same time or a collision will occur. Similarly, processes sharing resources on a computer must be properly managed in order to avoid collisions.



    Figure 6.1: Railway-road intersection

    Consider a machine with a single printer running a time-sharing operating system. If a process needs to print its results, it must request that the operating system give it access to the printer's device driver. At this point, the operating system must decide whether to grant the request, depending upon whether the printer is already being used by another process. If it is not, the operating system should grant the request and allow the process to continue; otherwise, the operating system should deny the request and perhaps classify the process as a waiting process until the printer becomes available. Indeed, if two processes were given simultaneous access to the machine's printer, the results would be worthless to both. Now that the problem of synchronisation is properly stated, consider the following related definitions:

    (a) Critical Resource: A resource shared with constraints on its use (e.g. memory, files, printers, etc.).

    (b) Critical Section: Code that accesses a critical resource.

    (c) Mutual Exclusion: At most one process may be executing a critical section with respect to a particular critical resource at any time.

    In the example given above, the printer is the critical resource. Let us suppose that the processes which are sharing this resource are called process A and process B. The critical sections of process A and process B are the sections of code which issue the print command. In order to ensure that both processes do not attempt to use the printer at the same time, they must be granted mutually


    exclusive access to the printer driver. The idea of mutual exclusion can be illustrated with our railroad intersection by adding a signal, or semaphore, to the picture. Figure 6.2 shows the railway-road intersection with a signal.

    Figure 6.2: Railway-road intersection with signal

    Semaphores are used in software systems in much the same way as they are in railway systems. Corresponding to the section of track that can contain only one train at a time is a sequence of instructions that can be executed by only one process at a time. Such a sequence of instructions is called a critical section.

    6.2 CRITICAL SECTION PROBLEM

    The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program in which shared memory is accessed is called the critical section. To avoid race conditions and flawed results, one must identify the code that forms a critical section in each thread. The characteristic properties of code that forms a critical section are:

    (a) Code that references one or more variables in a read-update-write fashion while any of those variables is possibly being altered by another thread;


    SELF-CHECK 6.1

    1. Explain the process of synchronisation.

    2. What do you understand by mutual exclusion conditions? Explain.


    (b) Code that alters one or more variables that are possibly being referenced in read-update-write fashion by another thread;

    (c) Code that uses a data structure while any part of it is possibly being altered by another thread; and

    (d) Code that alters any part of a data structure while it is possibly in use by another thread.

    Here, the important point is that when one process is executing code that manipulates shared modifiable data in its critical section, no other process is allowed to execute in its own critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time. Figure 6.3 shows the critical section.

    Figure 6.3: Critical section

    We need a way of making sure that if one process is using a shared modifiable variable, the other processes are excluded from doing the same thing. Formally, while one process accesses the shared variable, all other processes desiring to do so at the same time should be kept waiting; when that process has finished with the shared variable, one of the waiting processes should be allowed to proceed. In this fashion, each process accessing the shared data (variables) excludes all others from doing so simultaneously. This is called mutual exclusion. Mutual exclusion needs to be enforced only when processes access shared modifiable data - when processes are performing operations that do not conflict with one another, they should be allowed to proceed concurrently.
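To make the idea concrete, mutual exclusion over a shared variable can be sketched with a lock. This is an illustrative Python sketch, not part of the original text; the worker function and iteration counts are invented for the example:

```python
import threading

counter = 0                    # shared modifiable data
lock = threading.Lock()        # guards the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:             # entry: all other threads are kept waiting
            counter += 1       # critical section: the shared data access
        # remainder section: runs concurrently with the other threads

threads = [threading.Thread(target=worker, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                 # 200000: no lost updates
```

Because every update happens inside the lock, the four threads' increments never interleave destructively and the final count is exact.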

    6.2.1 Mutual Exclusion Conditions

    If you could arrange matters such that no two processes were ever in their critical sections simultaneously, you could avoid race conditions. You need four


    conditions to hold to have a good solution for the critical section problem (mutual exclusion). They are:

    (a) No two processes may at the same moment be inside their critical sections;

    (b) No assumptions are made about relative speeds of processes or number of CPUs;

    (c) No process outside its critical section should block other processes; and

    (d) No process should wait arbitrarily long to enter its critical section.

    6.2.2 Proposals for Achieving Mutual Exclusion

    The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol (or exit protocol) to keep two or more threads from being in their critical sections at the same time.

    Problem: When one process is updating shared modifiable data in its critical section, no other process should be allowed to enter its critical section.

    Proposal 1: Disabling Interrupts (Hardware Solution)

    Each process disables all interrupts just after entering its critical section and re-enables all interrupts just before leaving the critical section. With interrupts turned off, the CPU cannot be switched to another process. Hence, no other process will enter its critical section and mutual exclusion is achieved.

    Conclusion: Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give user processes the power to turn off interrupts.

    Proposal 2: Lock Variable (Software Solution)

    In this solution, you consider a single shared lock variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section and a 1 means hold your horses - some process is in its critical section.

    Conclusion: The flaw in this proposal is best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to 1, another process B is


    scheduled, runs and sets the lock to 1. When process A runs again, it will also set the lock to 1 and two processes will be in their critical sections simultaneously.

    Proposal 3: Strict Alternation

    In this proposed solution, an integer variable turn keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0 and enters its critical section. Process B also finds it to be 0 and sits in a loop continually testing turn to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called busy-waiting.

    Conclusion: Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections. This situation violates condition 3 mentioned above.

    Using System Calls sleep and wakeup

    Basically, what the above solutions do is this: when a process wants to enter its critical section, it checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e. starts busy-waiting) until it is allowed to enter. This approach wastes CPU time. Now let us look at a pair of interprocess communication primitives: sleep and wakeup.

    Sleep: A system call that causes the caller to block, that is, be suspended until some other process wakes it up.

    Wakeup: A system call that wakes up a process. Both the sleep and wakeup system calls have one parameter that represents a memory address used to match up sleeps with wakeups.

    Bounded Buffer Producers and Consumers: The bounded buffer producers and consumers problem assumes that there is a fixed buffer size, that is, a finite number of slots are available.

    Statement: To suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty and to make sure that only one process at a time manipulates the buffer, so there are no race conditions or lost updates.
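Before moving on, the strict-alternation proposal above can be sketched as two threads sharing a turn variable. This is an illustrative Python sketch; the small iteration count and the switch-interval tuning are choices made so the busy-waiting finishes quickly, and they also make visible why busy-waiting wastes CPU:

```python
import sys
import threading

sys.setswitchinterval(1e-4)    # switch threads often so the spinning stays cheap here

turn = 0                       # whose turn it is to enter the critical section
count = 0                      # shared data updated inside the critical section
N = 500

def process(my_id):
    global turn, count
    for _ in range(N):
        while turn != my_id:   # busy-waiting: burns CPU until it is our turn
            pass
        count += 1             # critical section
        turn = 1 - my_id       # strict alternation: hand the turn over

a = threading.Thread(target=process, args=(0,))
b = threading.Thread(target=process, args=(1,))
a.start(); b.start(); a.join(); b.join()
print(count)                   # 1000: entries alternated strictly, none were lost
```

The turn variable guarantees the two threads enter in strict alternation, so no update is lost; but note that each thread spins uselessly whenever it is not its turn, which is exactly the weakness the text describes.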


    As an example of how the sleep and wakeup system calls are used, consider the producer-consumer problem, also known as the bounded buffer problem. Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer and the consumer takes information out. Trouble arises when:

    (a) The producer wants to put new data in the buffer, but the buffer is already full.

    Solution: The producer goes to sleep, to be awakened when the consumer has removed data.

    (b) The consumer wants to remove data from the buffer, but the buffer is already empty.

    Solution: The consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.

    Conclusion: This approach leads to the same race conditions seen in the earlier approaches, because access to the shared count of items in the buffer is unconstrained. The essence of the problem is that a wakeup call sent to a process that is not yet sleeping is lost.

    6.3 SEMAPHORES

    Dijkstra (1965) abstracted the key notion of mutual exclusion in his concept of semaphores. A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialisation operation. Binary semaphores can assume only the value 0 or the value 1. Counting semaphores (also called general semaphores) can assume only nonnegative integer values.


    SELF-CHECK 6.2

    Explain the sleep and wakeup system calls.


    The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

    P(S): IF S > 0
              THEN S := S - 1
              ELSE (wait on S)

    The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as follows:

    V(S): IF (one or more processes are waiting on S)
              THEN (let one of these processes proceed)
              ELSE S := S + 1

    Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore, S, is enforced within P(S) and V(S). If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement. Semaphores solve the lost-wakeup problem.
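Under the hood, P and V can be built from a lock and a condition variable. The following Python sketch mirrors the P(S)/V(S) definitions above; the class and the short demo are illustrative, not from the original text:

```python
import threading

class Semaphore:
    """Counting semaphore: P and V behave as single atomic actions,
    because the condition variable's lock makes each one indivisible."""
    def __init__(self, value=0):
        self._value = value                # nonnegative counter
        self._cond = threading.Condition()

    def P(self):                           # wait / sleep / down
        with self._cond:
            while self._value == 0:        # ELSE (wait on S)
                self._cond.wait()
            self._value -= 1               # THEN S := S - 1

    def V(self):                           # signal / wakeup / up
        with self._cond:
            self._value += 1               # S := S + 1
            self._cond.notify()            # let one waiting process proceed

# No lost wakeup: the V below is counted even if it runs before the P.
s = Semaphore(0)
t = threading.Thread(target=s.V)
t.start()
s.P()          # blocks only until the V has been counted
t.join()
```

Because V increments the counter rather than merely poking a sleeper, a wakeup issued before anyone is waiting is remembered, which is exactly how semaphores avoid the lost-wakeup problem described earlier.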

    6.3.1 Producer-Consumer Problem Using Semaphores

    The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex. The semaphore full is used for counting the number of slots in the buffer that are full, the semaphore empty for counting the number of slots that are empty, and the semaphore mutex to make sure that the producer and the consumer do not access the modifiable shared sections of the buffer simultaneously.


    Here is the initialisation:

    1. Set full buffer slots to 0, i.e. semaphore full = 0.

    2. Set empty buffer slots to N, i.e. semaphore empty = N.

    3. To control access to the critical section, set mutex to 1, i.e. semaphore mutex = 1.

    Producer()
        WHILE (true)
            produce-Item();
            P(empty);
            P(mutex);
            enter-Item();
            V(mutex);
            V(full);

    Consumer()
        WHILE (true)
            P(full);
            P(mutex);
            remove-Item();
            V(mutex);
            V(empty);
            consume-Item(Item);
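Assuming one producer and one consumer, the pseudocode above maps directly onto Python's counting semaphores. This is an illustrative sketch; names such as enter-Item and remove-Item become list operations here:

```python
import threading
from collections import deque

N = 4                                  # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts full slots
mutex = threading.Semaphore(1)         # guards the buffer itself

consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # P(empty): wait for a free slot
        mutex.acquire()                # P(mutex)
        buffer.append(item)            # enter-Item
        mutex.release()                # V(mutex)
        full.release()                 # V(full): one more full slot

def consumer(n):
    for _ in range(n):
        full.acquire()                 # P(full): wait for an item
        mutex.acquire()                # P(mutex)
        item = buffer.popleft()        # remove-Item
        mutex.release()                # V(mutex)
        empty.release()                # V(empty): one more empty slot
        consumed.append(item)          # consume-Item

items = list(range(20))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed == items)               # True: nothing lost, order preserved
```

The producer blocks on empty when all N slots are full and the consumer blocks on full when the buffer is empty, so neither busy-waits, and mutex keeps the buffer manipulation itself mutually exclusive.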

    A semaphore is a hardware or software tag variable whose value indicates the status of a common resource. Its purpose is to lock the resource being used. A process which needs the resource checks the semaphore to determine the status of the resource and then decides how to proceed. In multitasking operating systems, activities are synchronised by using semaphore techniques. A semaphore is a mechanism to resolve resource conflicts by telling resource seekers the state of the sought resources, achieving mutually exclusive access to the resources. Often a semaphore operates as a type of mutually exclusive counter


    (such as mutexes) where it holds a number of access keys to the resources. A process that seeks the resources must obtain one of those access keys before it proceeds to use the resource. If no such key is available, the process has to wait for a current resource user to release one. A semaphore could have the value 0, indicating that no wakeups were saved, or some positive value if one or more wakeups are pending. A semaphore s is an integer variable that, apart from initialisation, is accessed only through two standard atomic operations, wait and signal. These operations were originally termed P (for wait, to test) and V (for signal, to increment). The classical definitions in pseudocode are:

    wait(s) {
        while (s <= 0)
            ;            // busy-wait
        s := s - 1;
    }

    signal(s) {
        s := s + 1;
    }


    6.3.2 SR Program: The Dining Philosophers

    The dining philosophers problem is a classic synchronisation problem: a number of philosophers sit around a table, alternately thinking and eating, with a single fork between each pair of neighbours. A philosopher needs both adjacent forks to eat, so the forks must be allocated without deadlocking or starving any philosopher. The following SR program solves the problem with a dining server that lets a philosopher pick up both forks only when neither neighbour is eating:

    resource philosopher
        import dining_server
    body philosopher(i : int; dcap : cap dining_server; thinking, eating : int)
        write("philosopher", i, "alive, max think eat delays", thinking, eating)

        procedure think()
            var napping : int
            napping := int(random(1000*thinking))
            writes("age=", age(), ", philosopher ", i, " thinking for ", napping, " ms\n")
            nap(napping)
        end think

        procedure eat()
            var napping : int
            napping := int(random(1000*eating))
            writes("age=", age(), ", philosopher ", i, " eating for ", napping, " ms\n")
            nap(napping)
        end eat

        process phil
            do true ->
                think()
                writes("age=", age(), ", philosopher ", i, " is hungry\n")
                dcap.take_forks(i)
                writes("age=", age(), ", philosopher ", i, " has taken forks\n")
                eat()
                dcap.put_forks(i)
                writes("age=", age(), ", philosopher ", i, " has returned forks\n")
            od
        end phil
    end philosopher

    resource dining_server
        op take_forks(i : int), put_forks(i : int)


    body dining_server(num_phil : int)
        write("dining server for", num_phil, "philosophers is alive")
        sem mutex := 1
        type states = enum(thinking, hungry, eating)
        var state[1:num_phil] : states := ([num_phil] thinking)
        sem phil[1:num_phil] := ([num_phil] 0)

        procedure left(i : int) returns lft : int
            if i=1 -> lft := num_phil [] else -> lft := i-1 fi
        end left

        procedure right(i : int) returns rgh : int
            if i=num_phil -> rgh := 1 [] else -> rgh := i+1 fi
        end right

        procedure test(i : int)
            if state[i] = hungry and state[left(i)] ~= eating
                    and state[right(i)] ~= eating ->
                state[i] := eating
                V(phil[i])
            fi
        end test

        proc take_forks(i)
            P(mutex)
            state[i] := hungry
            test(i)
            V(mutex)
            P(phil[i])
        end take_forks

        proc put_forks(i)
            P(mutex)
            state[i] := thinking
            test(left(i)); test(right(i))
            V(mutex)
        end put_forks
    end dining_server

    resource start()
        import philosopher, dining_server
        var num_phil : int := 5, run_time : int := 60
        getarg(1, num_phil); getarg(2, run_time)
        var max_think_delay[1:num_phil] : int := ([num_phil] 5)
        var max_eat_delay[1:num_phil] : int := ([num_phil] 2)
        fa i := 1 to num_phil ->
            getarg(2*i+1, max_think_delay[i]); getarg(2*i+2, max_eat_delay[i])


        af
        var dcap : cap dining_server
        write(num_phil, "dining philosophers running", run_time, "seconds")
        dcap := create dining_server(num_phil)
        fa i := 1 to num_phil ->
            create philosopher(i, dcap, max_think_delay[i], max_eat_delay[i])
        af
        nap(1000*run_time); write("must stop now"); stop

    end start

    /* ............... Example compile and run(s)
    % sr -o dphi dphi.sr
    % ./dphi 5 10
    5 dining philosophers running 10 seconds
    dining server for 5 philosophers is alive
    philosopher 1 alive, max think eat delays 5 2
    age=37, philosopher 1 thinking for 491 ms
    philosopher 2 alive, max think eat delays 5 2
    age=50, philosopher 2 thinking for 2957 ms
    philosopher 3 alive, max think eat delays 5 2
    age=62, philosopher 3 thinking for 1374 ms
    philosopher 4 alive, max think eat delays 5 2
    age=74, philosopher 4 thinking for 1414 ms
    philosopher 5 alive, max think eat delays 5 2
    age=87, philosopher 5 thinking for 1000 ms
    age=537, philosopher 1 is hungry
    age=541, philosopher 1 has taken forks
    age=544, philosopher 1 eating for 1351 ms
    age=1097, philosopher 5 is hungry


    age=1447, philosopher 3 is hungry
    age=1451, philosopher 3 has taken forks
    age=1454, philosopher 3 eating for 1226 ms
    age=1497, philosopher 4 is hungry
    age=1898, philosopher 1 has returned forks
    age=1901, philosopher 1 thinking for 2042 ms
    age=1902, philosopher 5 has taken forks
    age=1903, philosopher 5 eating for 1080 ms
    age=2687, philosopher 3 has returned forks
    age=2691, philosopher 3 thinking for 2730 ms
    age=2988, philosopher 5 has returned forks
    age=2991, philosopher 5 thinking for 3300 ms
    age=2992, philosopher 4 has taken forks
    age=2993, philosopher 4 eating for 1818 ms
    age=3017, philosopher 2 is hungry
    age=3020, philosopher 2 has taken forks
    age=3021, philosopher 2 eating for 1393 ms
    age=3947, philosopher 1 is hungry
    age=4418, philosopher 2 has returned forks
    age=4421, philosopher 2 thinking for 649 ms
    age=4423, philosopher 1 has taken forks
    age=4424, philosopher 1 eating for 1996 ms
    age=4817, philosopher 4 has returned forks


    age=4821, philosopher 4 thinking for 742 ms
    age=5077, philosopher 2 is hungry
    age=5427, philosopher 3 is hungry
    age=5431, philosopher 3 has taken forks
    age=5432, philosopher 3 eating for 857 ms
    age=5569, philosopher 4 is hungry
    age=6298, philosopher 3 has returned forks
    age=6301, philosopher 3 thinking for 1309 ms
    age=6302, philosopher 5 is hungry
    age=6304, philosopher 4 has taken forks
    age=6305, philosopher 4 eating for 498 ms
    age=6428, philosopher 1 has returned forks
    age=6430, philosopher 1 thinking for 1517 ms
    age=6432, philosopher 2 has taken forks
    age=6433, philosopher 2 eating for 133 ms
    age=6567, philosopher 2 has returned forks
    age=6570, philosopher 2 thinking for 3243 ms
    age=6808, philosopher 4 has returned forks
    age=6810, philosopher 4 thinking for 2696 ms
    age=6812, philosopher 5 has taken forks
    age=6813, philosopher 5 eating for 1838 ms
    age=7617, philosopher 3 is hungry
    age=7621, philosopher 3 has taken forks
    age=7622, philosopher 3 eating for 1251 ms
    age=7957, philosopher 1 is hungry
    age=8658, philosopher 5 has returned forks
    age=8661, philosopher 5 thinking for 4755 ms
    age=8662, philosopher 1 has taken forks
    age=8664, philosopher 1 eating for 1426 ms
    age=8877, philosopher 3 has returned forks
    age=8880, philosopher 3 thinking for 2922 ms
    age=9507, philosopher 4 is hungry
    age=9511, philosopher 4 has taken forks
    age=9512, philosopher 4 eating for 391 ms
    age=9817, philosopher 2 is hungry
    age=9908, philosopher 4 has returned forks
    age=9911, philosopher 4 thinking for 3718 ms
    age=10098, philosopher 1 has returned forks
    age=10100, philosopher 1 thinking for 2541 ms
    must stop now
    age=10109, philosopher 2 has taken forks
    age=10110, philosopher 2 eating for 206 ms
    % ./dphi 5 10 1 10 10 1 1 10 10 1 10 1
    5 dining philosophers running 10 seconds
    dining server for 5 philosophers is alive


    philosopher 1 alive, max think eat delays 1 10
    age=34, philosopher 1 thinking for 762 ms
    philosopher 2 alive, max think eat delays 10 1
    age=49, philosopher 2 thinking for 5965 ms
    philosopher 3 alive, max think eat delays 1 10
    age=61, philosopher 3 thinking for 657 ms
    philosopher 4 alive, max think eat delays 10 1
    age=74, philosopher 4 thinking for 8930 ms
    philosopher 5 alive, max think eat delays 10 1
    age=86, philosopher 5 thinking for 5378 ms
    age=726, philosopher 3 is hungry
    age=731, philosopher 3 has taken forks
    age=732, philosopher 3 eating for 3511 ms
    age=804, philosopher 1 is hungry
    age=808, philosopher 1 has taken forks
    age=809, philosopher 1 eating for 3441 ms
    age=4246, philosopher 3 has returned forks
    age=4250, philosopher 3 thinking for 488 ms
    age=4252, philosopher 1 has returned forks
    age=4253, philosopher 1 thinking for 237 ms
    age=4495, philosopher 1 is hungry
    age=4498, philosopher 1 has taken forks
    age=4499, philosopher 1 eating for 8682 ms
    age=4745, philosopher 3 is hungry


    age=4748, philosopher 3 has taken forks
    age=4749, philosopher 3 eating for 2095 ms
    age=5475, philosopher 5 is hungry
    age=6025, philosopher 2 is hungry
    age=6855, philosopher 3 has returned forks
    age=6859, philosopher 3 thinking for 551 ms
    age=7415, philosopher 3 is hungry
    age=7420, philosopher 3 has taken forks
    age=7421, philosopher 3 eating for 1765 ms
    age=9015, philosopher 4 is hungry
    age=9196, philosopher 3 has returned forks
    age=9212, philosopher 3 thinking for 237 ms
    age=9217, philosopher 4 has taken forks
    age=9218, philosopher 4 eating for 775 ms
    age=9455, philosopher 3 is hungry
    age=9997, philosopher 4 has returned forks
    age=10000, philosopher 4 thinking for 2451 ms
    age=10002, philosopher 3 has taken forks
    age=10004, philosopher 3 eating for 9456 ms
    must stop now

    */

    ACTIVITY 6.1

    Write a short note on semaphores and present it in front of your classmates.


    6.4 MONITORS

    A monitor is a software synchronisation tool with a high-level of abstraction that provides a convenient and effective mechanism for process synchronisation. It allows only one process to be active within the monitor at a time. One simple implementation is shown here:

    monitor monitor_name
    {
        // shared variable declarations

        procedure P1() { ... }
        ...
        procedure Pn() { ... }

        initialization code (...) { ... }
    }
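A minimal sketch of the monitor idea in Python, using an invented bank-account example: a single internal lock plays the role of the monitor, so at most one thread is ever active inside any of the object's procedures:

```python
import threading

class BankAccount:
    """Monitor-style object: one internal lock ensures at most one
    thread is active inside any of the monitor's procedures."""
    def __init__(self):
        self._lock = threading.Lock()   # the monitor's single lock
        self._balance = 0               # shared variable declaration

    def deposit(self, amount):          # procedure P1
        with self._lock:                # entering the monitor
            self._balance += amount

    def get_balance(self):              # procedure Pn
        with self._lock:
            return self._balance

acct = BankAccount()
workers = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(acct.get_balance())   # 4000: the monitor serialised every deposit
```

Because every procedure takes the same lock on entry, callers never need to manage synchronisation themselves; the monitor's discipline is enforced inside the object.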

    6.5 DEADLOCK

    Deadlock occurs when you have a set of processes (not necessarily all the processes in the system), each holding some resources and each requesting some resources, and none of them is able to obtain what it needs - that is, none can make progress. Those processes are deadlocked because all of them are waiting. None of them will ever cause any of the events that could wake up any of the other members of the set, and all the processes continue to wait forever. For this model, we assume that processes have only a single thread and that no interrupts are possible to wake up a blocked process. The no-interrupts condition is needed to prevent an otherwise deadlocked process from being awakened by, say, an alarm and then causing events that release other processes in the set. In most cases, the event that each process is waiting for is the release of some resource currently possessed by another member of the set. In other words, each member of the set of deadlocked processes is waiting for a resource that is owned by another deadlocked process. None of the processes can run, none of them can release any resources and none of them can be awakened. The




    number of processes and the number and kind of resources possessed and requested are unimportant. This result holds for any kind of resource, including both hardware and software. Figure 6.4 shows the processes that are in a deadlock situation.

    Figure 6.4: Processes that are in deadlock situation

    6.6 DEADLOCK CHARACTERISATION

    A deadlock situation can arise if the following four conditions hold simultaneously in a system:

    (a) Resources are used in mutual exclusion;

    (b) Resources are acquired piecemeal (i.e. not all the resources that are needed to complete an activity are obtained at the same time in a single indivisible action);

    (c) Resources are not preempted (i.e. one process does not take away resources held by another process); and

    (d) Resources are not spontaneously given up by a process until it has satisfied all its outstanding requests for resources (i.e. a process that cannot obtain some needed resource does not give up the resources it is currently holding).
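These conditions point directly at prevention strategies: if any one of them can never hold, deadlock cannot occur. For example, requiring every process to acquire its resources in one fixed global order means no process ever holds a resource while waiting for an "earlier" one, so a circular wait can never form. A Python sketch, reusing the pencil-and-ruler example from the introduction (the resource names are illustrative):

```python
import threading

# Hypothetical resources, echoing the pencil-and-ruler example earlier.
pencil = threading.Lock()
ruler = threading.Lock()
ORDER = [pencil, ruler]        # one fixed global acquisition order

def draw(worker_id, log):
    # Every thread acquires resources in the same global order, so no
    # thread can hold one resource while waiting for an earlier one:
    # the circular-wait pattern can never arise.
    for resource in ORDER:
        resource.acquire()
    try:
        log.append(worker_id)  # use both resources
    finally:
        for resource in reversed(ORDER):
            resource.release()

log = []
threads = [threading.Thread(target=draw, args=(i, log)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # [0, 1, 2, 3]: every worker completed, no deadlock
```

Had the two people in the introduction agreed to always pick up the pencil before the ruler, the stalemate described there could not have happened.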



    6.6.1 Resource Allocation Graphs

    Resource Allocation Graphs (RAGs), as can be seen in Figure 6.5, are directed labelled graphs used to represent, from the point of view of deadlocks, the current state of a system.

    Figure 6.5: Resource allocation graphs

    State transitions can be represented as transitions between the corresponding resource allocation graphs. Here are the rules for state transitions:

    (a) Request: If process Pi has no outstanding request, it can simultaneously request any number (up to multiplicity) of resources R1, R2, ..., Rm. The request is represented by adding the appropriate request edges to the RAG of the current state.

    (b) Acquisition: If process Pi has outstanding requests and they can all be simultaneously satisfied, then the request edges of these requests are replaced by assignment edges in the RAG of the current state.


    (c) Release: If process Pi has no outstanding request, then it can release any of the resources it is holding and remove the corresponding assignment edges from the RAG of the current state.

    Here are some important propositions about deadlocks and resource allocation graphs:

    (a) If a RAG of a state of a system is fully reducible (i.e. it can be reduced to a graph without any edges using ACQUISITION and RELEASE operations), then that state is not a deadlock state;

    (b) If a state is not a deadlock state then its RAG is fully reducible (this holds only if you are dealing with reusable resources; it is false if you have consumable resources);

    (c) A cycle in the RAG of a state is a necessary condition for that being a deadlock state; and

    (d) A cycle in the RAG of a state is a sufficient condition for that being a deadlock state only in the case of reusable resources with multiplicity one.

    Figure 6.6 shows an example of reduction of a RAG:

    Figure 6.6: Reduction of a RAG


    Meanwhile, Figure 6.7 shows a deadlock-free system with a loop.

    Figure 6.7: RAG with loop but no deadlock
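    The reduction propositions above can be turned into a small deadlock check. The following Python sketch repeatedly applies the ACQUISITION and RELEASE operations until no further process can be reduced; the data layout (dicts of unit counts, and the function name) is an assumption for illustration, not the book's notation:

    ```python
    def is_fully_reducible(processes, free, holding, requests):
        """Return True if the RAG reduces to a graph with no edges.

        free:     resource -> number of unassigned units
        holding:  process  -> {resource: units currently assigned}
        requests: process  -> {resource: units requested}
        Note: this sketch mutates free; pass a copy if you need the original.
        """
        remaining = set(processes)
        progress = True
        while progress:
            progress = False
            for p in list(remaining):
                # A process can be reduced if all of its outstanding
                # requests can be satisfied simultaneously.
                if all(free.get(r, 0) >= n for r, n in requests[p].items()):
                    # It then runs to completion and releases everything.
                    for r, n in holding[p].items():
                        free[r] = free.get(r, 0) + n
                    remaining.discard(p)
                    progress = True
        # Fully reducible (every process removed) means no deadlock.
        return not remaining
    ```

    For example, two processes that each hold the single unit the other requests form an irreducible RAG (deadlock), while adding a spare unit of one resource makes the graph fully reducible.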

    SELF-CHECK 6.3

    Is a monitor a software synchronisation tool or a hardware synchronisation tool?

    ACTIVITY 6.2

    In groups, consider the following resource allocation situation:

    Processes P = {P1, P2, P3, P4, P5}
    Resources R = {R1, R2, R3}
    Edges E = {P1→R1, P1→R2, P2→R2, P3→R2, P4→R3, P5→R2, R2→P4, R3→P1}
    Resource instances: n(R1) = 3, n(R2) = 4, n(R3) = 1

    (a) Draw the precedence graph.

    (b) Determine whether there is a deadlock in the above situation.


    6.7 HANDLING OF DEADLOCKS

    There are several ways to address the problem of deadlock in an operating system:

    (a) Prevent;

    (b) Avoid;

    (c) Detection and recovery; and

    (d) Ignore.

    Let us discuss these in detail.

    6.7.1 Deadlock Prevention

    Deadlocks can be prevented by ensuring that at least one of the following four necessary conditions never holds:

    (a) Mutual Exclusion: Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronisation algorithms.

    (b) Hold and Wait: The hold and wait condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. Such algorithms, such as serialising tokens, are known as all-or-none algorithms.
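    The all-or-none idea can be sketched with Python's `threading` locks. The function name and non-blocking retry convention are illustrative assumptions; the point is that a process never waits while holding only part of what it needs:

    ```python
    import threading

    def acquire_all_or_none(locks):
        """Acquire every lock in the list, or none of them.

        A process using this never blocks while holding a subset of its
        resources, so the hold and wait condition cannot arise.
        """
        acquired = []
        for lock in locks:
            if lock.acquire(blocking=False):
                acquired.append(lock)
            else:
                # One resource is busy: give back everything already held
                # and report failure; the caller retries later instead of
                # waiting while holding.
                for held in reversed(acquired):
                    held.release()
                return False
        return True
    ```

    A caller would typically retry (perhaps with a backoff) until the call returns True, then release all the locks when done.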

    (c) No Preemption: A no preemption (lockout) condition may also be difficult or impossible to avoid. This is because a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm.



    Preemption of a locked out resource generally implies a rollback and is to be avoided, since it is very costly in overhead. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.

    (d) Circular Wait: Algorithms that avoid circular waits include disabling interrupts during critical sections, using a hierarchy to determine a partial ordering of resources (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering) and Dijkstra's solution.
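    The resource hierarchy approach can be sketched as follows. The `rank` mapping is a hypothetical global ordering chosen for illustration; any fixed ordering of the resources works:

    ```python
    import threading

    def acquire_in_order(locks, rank):
        """Acquire the given locks in ascending order of rank.

        rank assigns each lock a position in a global hierarchy. Since
        every thread climbs the hierarchy in the same direction, no
        cycle of waits (circular wait) can ever form.
        """
        ordered = sorted(locks, key=rank.get)
        for lock in ordered:
            lock.acquire()
        return ordered   # returned so callers can release in reverse order
    ```

    A design note: releasing in reverse acquisition order is conventional but not required for deadlock freedom; only the acquisition order matters.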

    6.7.2 Deadlock Avoidance

    Assuming that you are in a safe state (i.e. a state from which there is a sequence of allocations and releases of resources that allows all processes to terminate) and certain resources are requested, the system simulates the allocation of those resources and determines whether the resultant state is safe. If it is safe, the request is satisfied; otherwise, it is delayed until it becomes safe. The Banker's Algorithm is used to determine whether a request can be satisfied. It requires knowledge of who the competing processes are and what their resource needs are. Deadlock avoidance is essentially not used in distributed systems.

    6.7.3 Deadlock Detection and Recovery

    Often neither deadlock avoidance nor deadlock prevention may be used. Instead, deadlock detection and recovery are used, by employing an algorithm that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the deadlock.

    Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS. Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock.
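    A detector for deadlocks that have already occurred can be sketched as a cycle search over the wait-for graph, where each process points at the processes it is blocked on. The graph encoding and function name below are assumptions for illustration:

    ```python
    def has_deadlock(wait_for):
        """Depth-first search for a cycle in a wait-for graph.

        wait_for maps each process to the processes it is waiting on.
        With single-instance resources, a cycle in this graph is both
        necessary and sufficient for deadlock.
        """
        WHITE, GREY, BLACK = 0, 1, 2   # unvisited / in progress / finished
        colour = {}

        def visit(p):
            colour[p] = GREY
            for q in wait_for.get(p, ()):
                state = colour.get(q, WHITE)
                if state == GREY:       # back edge: a cycle of waits
                    return True
                if state == WHITE and visit(q):
                    return True
            colour[p] = BLACK
            return False

        return any(colour.get(p, WHITE) == WHITE and visit(p)
                   for p in wait_for)
    ```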


    6.7.4 Ignore Deadlock

    In the Ostrich Algorithm, it is hoped that deadlock simply does not happen. In general, this is a reasonable strategy: deadlock is unlikely to occur very often, and a system can run for years without deadlock occurring. If the operating system has a deadlock prevention or detection system in place, this will have a negative impact on performance (slowing the system down), because whenever a process or thread requests a resource, the system must check whether granting the request could cause a potential deadlock situation. If deadlock does occur, it may be necessary to bring the system down, or at least manually kill a number of processes, but even that is not considered an extreme measure in most situations.

    6.7.5 The Banker's Algorithm for Detecting/Preventing Deadlocks

    Now, let us learn the Banker's Algorithm for detecting/preventing deadlocks. There are two types of Banker's Algorithm:

    (a) Banker's Algorithm for a Single Resource

    This is modelled on the way a small-town banker might deal with customers' lines of credit. In the course of conducting business, our banker would naturally observe that customers rarely draw their credit lines to their limits. This, of course, suggests the idea of extending more credit than the amount the banker actually has in her coffers.

    Suppose we start with the situation shown in Table 6.1:

    Table 6.1: Situation 1

    Customer        Credit Used    Credit Line
    Andy                 0              6
    Barb                 0              5
    Marv                 0              4
    Sue                  0              7

    Funds Available: 10
    Max Commitment:  22


    Our banker has 10 credits to lend, but a possible liability of 22. Her job is to keep enough in reserve so that ultimately each customer can be satisfied over time: that is, each customer will be able to access his full credit line, just not all at the same time. Suppose, after a while, the bank's credit line book shows the state in Table 6.2.

    Table 6.2: The Bank's Credit Line Book After a While

    Customer        Credit Used    Credit Line
    Andy                 1              6
    Barb                 1              5
    Marv                 2              4
    Sue                  4              7

    Funds Available: 2
    Max Commitment:  22

    Eight credits have been allocated to the various customers; two remain. The questions are:

    (i) Does a way exist such that each customer can be satisfied?

    (ii) Can each be allowed their maximum credit line in some sequence?

    We presume that, once a customer has been allocated up to his limit, the banker can delay the others until that customer repays his loan, at which point the credits become available to the remaining customers. If we arrive at a state where no customer can get his maximum because there are not enough credits remaining, then a deadlock could occur, because the first customer who asks to draw his credit to its maximum would be denied and all of them would have to wait.

    To determine whether such a sequence exists, the banker finds the customer closest to his limit: if the remaining credits will get him to that limit, the banker then assumes that loan is repaid, and proceeds to the customer next closest to his limit, and so on. If all can be granted a full credit line, the state is safe.

    In this case, Marv is closest to his limit: assume his loan is repaid. This frees up 4 credits. After Marv, Barb is closest to her limit (actually, she is tied with Sue, but it makes no difference) and 3 of the 4 credits freed from Marv could be used to award her maximum. Assume her loan is repaid; we have now freed 6 credits. Sue is next and her situation is identical to Barb's, so assume her loan is repaid. We have freed enough credits (6) to grant Andy his limit; thus this state is safe.

    Suppose, however, that the banker had proceeded to award Barb one more credit after the credit book arrived at the state above, as shown in Table 6.3:

    Table 6.3: Barb is Awarded One More Credit

    Customer        Credit Used    Credit Line
    Andy                 1              6
    Barb                 2              5
    Marv                 2              4
    Sue                  4              7

    Funds Available: 1
    Max Commitment:  22

    Now it is easy to see that the remaining credit could do no good toward getting anyone to their maximum.

    So, to recap, the Banker's Algorithm looks at each request as it occurs and tests whether granting it will lead to a safe state. If not, the request is delayed. To test for a safe state, the banker checks whether enough resources will remain after granting the request to satisfy the customer closest to his maximum. If so, that loan is assumed repaid, the next-closest customer is checked, and so on. If all loans can be repaid, then the request leads to a safe state and can be granted.

    In this case, we see that if Barb is awarded another credit, Marv, who is closest to his maximum, cannot be awarded enough credits; hence Barb's request cannot be granted, as it would lead to an unsafe state.
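    The recap above translates almost directly into code. In this Python sketch (function and variable names are ours, not the book's), a state is tested by repeatedly retiring the satisfiable customer closest to his limit and returning his credits to the pool:

    ```python
    def is_safe(credit_used, credit_line, available):
        """Single-resource Banker's safety test.

        Repeatedly pick a customer whose remaining need fits within
        the available funds, lend him up to his limit, and assume the
        whole loan is then repaid. The state is safe iff every
        customer can be retired this way.
        """
        need = {c: credit_line[c] - credit_used[c] for c in credit_line}
        pending = set(credit_line)
        while pending:
            # Customers whose maximum can still be reached right now.
            satisfiable = [c for c in pending if need[c] <= available]
            if not satisfiable:
                return False          # unsafe: someone is stuck
            closest = min(satisfiable, key=need.get)
            # Lend up to the limit, then the full loan is repaid.
            available = available - need[closest] + credit_line[closest]
            pending.discard(closest)
        return True
    ```

    Running this on the states in Tables 6.2 and 6.3 reproduces the text's conclusions: the first is safe, the second (after Barb's extra credit) is not.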

    (b) Banker's Algorithm for Multiple Resources

    Suppose, for example, we have the situation shown in Table 6.4, which lists the resources assigned, and Table 6.5, which shows the resources still required by five processes: A, B, C, D and E.


    Table 6.4: Resources Assigned

    Process    Tapes    Plotters    Printers    Toasters
    A            3         0           1           1
    B            0         1           0           0
    C            1         1           1           0
    D            1         1           0           1
    E            0         0           0           0

    Total Existing               6    3    4    2
    Total Claimed by Processes   5    3    2    2
    Remaining Unclaimed          1    0    2    0

    Table 6.5: Resources Still Needed

    Process    Tapes    Plotters    Printers    Toasters
    A            1         1           0           0
    B            0         1           1           2
    C            3         1           0           0
    D            0         0           1           0
    E            2         1           1           0

    The vectors E, P and A represent Existing, Possessed and Available resources respectively:

    E = (6, 3, 4, 2)

    P = (5, 3, 2, 2)

    A = (1, 0, 2, 0)

    Notice that

    A = E - P

    Now, to state the algorithm more formally, but in essentially the same way as the example with Andy, Barb, Marv and Sue:

    (i) Look for a row whose unmet needs do not exceed what is available, that is, a row in which each entry is less than or equal to the corresponding entry of A. If no such row exists, the system will eventually deadlock, because no process can acquire the resources it needs to run to completion. If there is more than one such row, just pick one.

    (ii) Assume that the process chosen in (i) acquires all the resources it needs and runs to completion, thereby releasing its resources. Mark that process as virtually terminated and add its resources to A.

    (iii) Repeat steps (i) and (ii) until all processes are either virtually terminated (safe state) or a deadlock is detected (unsafe state).

    Going through this algorithm with the foregoing data, we see that process D's remaining requirements fit within A, so we virtually terminate D and add its resources back into the available pool:

    E = (6, 3, 4, 2)

    P = (5, 3, 2, 2) - (1, 1, 0, 1) = (4, 2, 2, 1)

    A = (1, 0, 2, 0) + (1, 1, 0, 1) = (2, 1, 2, 1)

    Now process A's remaining requirements fit within the available vector, so we do the same thing with A:

    P = (4, 2, 2, 1) - (3, 0, 1, 1) = (1, 2, 1, 0)

    A = (2, 1, 2, 1) + (3, 0, 1, 1) = (5, 1, 3, 2)

    At this point, we see that every remaining process can be satisfied from the available resources, so the illustrated state is safe.
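    The three steps above can be sketched directly in Python using per-process resource vectors; the encoding as tuples and the function name are our own assumptions:

    ```python
    def is_safe(available, holding, still_needed):
        """Multi-resource Banker's safety test.

        holding[p] and still_needed[p] are tuples with one entry per
        resource class. A process is virtually terminated when its
        remaining needs fit within A; its held resources return to A.
        """
        avail = list(available)
        remaining = set(holding)
        while remaining:
            runnable = [p for p in remaining
                        if all(n <= a
                               for n, a in zip(still_needed[p], avail))]
            if not runnable:
                return False        # nobody can finish: unsafe state
            p = runnable[0]         # any choice works for the safety test
            avail = [a + h for a, h in zip(avail, holding[p])]
            remaining.discard(p)
        return True
    ```

    Fed the vectors from Tables 6.4 and 6.5 with A = (1, 0, 2, 0), this confirms the example state is safe; with nothing available, no process can be terminated and the check reports an unsafe state.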

    SELF-CHECK 6.4

    1. Explain safe state and its purpose in deadlock avoidance.

    2. Describe briefly any method of deadlock prevention.

    3. Explain concurrency, with examples of deadlock and starvation.

    4. Explain the different deadlock strategies.


    • A race condition is a flaw in a system of processes whereby the output of a process is unexpectedly and critically dependent on the sequence of other processes. It may arise in a multi-process environment, especially when communicating between separate processes or threads of execution.

    • Mutual exclusion means that only one of the processes is allowed to execute its critical section at a time.

    • Mutex, semaphores and monitors are some of the process synchronisation tools. Mutex, short for mutual exclusion, is a software tool used in concurrency control. A mutex is a program element that allows multiple program processes to share the same resource, but not simultaneously.

    • A semaphore is a software concurrency control tool. It bears analogy to the old Roman system of message transmission using flags. It enforces synchronisation among communicating processes and does not require busy waiting. A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialisation operation called "Semaphoinitialise".

    • In a counting semaphore, the integer value can range over an unrestricted domain. In a binary semaphore, the integer value can range only between 0 and 1.

    • A monitor is a software synchronisation tool with a high level of abstraction that provides a convenient and effective mechanism for process synchronisation. It allows only one process to be active within the monitor at a time.

    • The Bounded Buffer Problem, the readers' and writers' problem, the sleeping barber problem and the dining philosophers problem are some of the classical synchronisation problems taken from real-life situations.

    • A deadlock is a situation in which, of two or more competing actions, each is waiting for the others to finish, and thus none ever does.

    • Resource Allocation Graphs (RAGs) are directed labelled graphs used to represent, from the point of view of deadlocks, the current state of a system.

    • There are several ways to address the problem of deadlock in an operating system: Prevent, Avoid, Detection and Recovery, and Ignore.

    ACTIVITY 6.3

    1. Can a process be allowed to request multiple resources simultaneously in a system where deadlocks are avoided? Discuss why or why not with your course mates.

    2. How are deadlock situations avoided and prevented so that no systems are locked by deadlock? Do some research and present it in front of your course mates.

    Deadlock

    Monitor

    Mutex

    Mutual exclusion

    Race condition

    Resource Allocation Graphs (RAGs)

    Semaphore

    Fill in the blanks:

    1. .......................... involves the orderly sharing of system resources by processes.

    2. .......................... are used in software systems in much the same way as they are in railway systems.

    3. The part of the program where the shared memory is accessed is called the .......................... .

    4. A .......................... is a software synchronisation tool with a high level of abstraction that provides a convenient and effective mechanism for process synchronisation.

    5. Resource Allocation Graphs (RAGs) are .......................... labelled graphs.

    6. Algorithms that avoid mutual exclusion are called .......................... synchronisation algorithms.

    7. .......................... abstracted the key notion of mutual exclusion in his concept of semaphores.

    8. The no preemption condition is also known as .......................... .

    9. .......................... processes share a common, fixed-size (bounded) buffer.

    10. Binary semaphores can assume only the value 0 or the value .......................... .


