
Chapter 6, Process Synchronization, Overheads, Part 2


• Part 2 of the Chapter 6 overheads covers these sections:
• 6.6 Classic Problems of Synchronization
• 6.7 Monitors
• 6.8 Java Synchronization


6.6 Classic Problems of Synchronization

• These problems exist in operating systems and other systems which have concurrency

• Because they are well-understood, they are often used to test implementations of concurrency control

• Some of these problems should sound familiar because the book has already brought them up as examples of aspects of operating systems (without yet discussing all of the details of a correct, concurrent implementation)


• The book discusses the following three problems:
– The bounded-buffer problem
– The readers-writers problem
– The dining philosophers problem


• The book gives Java code to solve these problems

• For the purposes of the immediate discussion, these examples are working code

• There is one slight possible source of confusion:

• The examples use a home-made Semaphore class


• In the current version of the Java API, there is also a Semaphore class

• If you look in the API documentation, you’ll discover that the API class has quite a number of methods and is more complicated than the simple presentation of semaphores given earlier

• The home-made Semaphore class is much simpler


• The home-made class will be noted at the end of the presentation of code—but its contents will not be explained in detail

• Only after covering the coming section on synchronization syntax in Java would it be possible to understand how the authors have implemented concurrency control in their own semaphore class
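• As an aside, a minimal sketch of the API class in use follows (my example, not the book’s). Note that, unlike the home-made class, the API version of acquire() throws the checked exception InterruptedException

import java.util.concurrent.Semaphore;

public class ApiSemaphoreDemo
{
    public static void main(String[] args) throws InterruptedException {
        Semaphore mutex = new Semaphore(1);  // one permit: a binary semaphore

        mutex.acquire();                     // take the permit (may block)
        try {
            System.out.println("in the critical section");
        }
        finally {
            mutex.release();                 // give the permit back
        }
    }
}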


The Bounded Buffer Problem

• Operating systems implement general I/O using buffers and message passing between buffers

• Buffer management is a real element of O/S construction

• This is a shared resource problem
• The buffer and any variables keeping track of buffer state (such as the count of contents) have to be managed so that contending processes (threads) keep them consistent


• Various pieces of code were given in previous chapters for the bounded buffer problem

• Now the book gives code which is multi-threaded and also does concurrency control using a semaphore

• When looking at it, the existence and placement of semaphores should be noted

• The code is given on the following overheads, and more commentary will come afterwards


/**
 * BoundedBuffer.java
 *
 * This program implements the bounded buffer with semaphores.
 * Note that the use of count only serves to output whether
 * the buffer is empty or full.
 */

import java.util.*;

public class BoundedBuffer implements Buffer
{
    private static final int BUFFER_SIZE = 2;

    private Semaphore mutex;
    private Semaphore empty;
    private Semaphore full;

    private int count;
    private int in, out;
    private Object[] buffer;

    public BoundedBuffer()
    {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;

        buffer = new Object[BUFFER_SIZE];

        mutex = new Semaphore(1);
        empty = new Semaphore(BUFFER_SIZE);
        full = new Semaphore(0);
    }

    // producer calls this method
    public void insert(Object item) {
        empty.acquire();
        mutex.acquire();

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);

        mutex.release();
        full.release();
    }

    // consumer calls this method
    public Object remove() {
        full.acquire();
        mutex.acquire();

        // remove an item from the buffer
        --count;
        Object item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        mutex.release();
        empty.release();

        return item;
    }
}


• There is more code to the full solution.
• It will be given later, but the first thing to notice is that there are three semaphores
• All of the previous discussions just talked about protecting a single critical section with a single semaphore
• The book has introduced a new level of complexity out of the blue by using this classic problem as an illustration


• There is a semaphore, mutex, for mutual exclusion on buffer operations
• There are also two more semaphores, empty and full
• These semaphores are associated with the idea that the buffer has to be protected from trying to insert into a full buffer or remove from an empty one
• In other words, they deal with the concepts, given in an earlier chapter, of blocking sends/receives or writes/reads


• When looking at the code, the ordering of the calls to acquire and release the semaphores might not have been clear

• The book offers no cosmic theory to explain the ordering of the calls to acquire and release

• The example is simply given, and it’s up to us to try and sort out how the calls interact in a way that accomplishes the desired result
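• One observation can be made, though (my illustration, not the book’s): the ordering is not arbitrary. If insert() acquired mutex before empty, the program could deadlock

// Buggy variant of the book’s insert() method, with the first two
// calls swapped.  With a full buffer, the producer blocks on
// empty.acquire() while still holding mutex; the consumer then blocks
// on mutex.acquire() inside remove(), and neither thread can proceed.
public void insert(Object item) {
    mutex.acquire();
    empty.acquire();   // deadlock possible here

    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;

    mutex.release();
    full.release();
}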


• mutex is a binary semaphore
• It is initialized to 1
• 1 and 0 are sufficient to enforce mutual exclusion


• The empty semaphore is a counting semaphore
• It is initialized to BUFFER_SIZE
• That means that there are up to BUFFER_SIZE slots of the shared buffer array that are empty and available to have messages inserted into them


• empty.acquire() can be called BUFFER_SIZE times before the shared buffer is full and the semaphore can’t be acquired anymore

• The name empty is a bit of a misnomer—it doesn’t mean completely empty—it keeps track of a count of how many elements of the buffer are empty


• The full semaphore is also a counting semaphore
• It is initialized to 0
• The full semaphore counts how many slots of the shared buffer array have been filled with messages that are available to be removed
• Initially, there are no elements in the buffer array


• This means that a call to remove() on the shared buffer won’t find anything until a call to insert() on the buffer has been made

• This is because the code for insert() includes a call to full.release()


• The name full is a bit of a misnomer—it doesn’t mean completely full—it keeps track of a count of how many elements in the buffer are full

• The diagram on the next overhead illustrates the meaning of the empty and full semaphores

[Diagram: the meaning of the empty and full semaphores]

• In the code, the calls to acquire() and release() on mutex are simply paired, top and bottom, in the insert() and remove() methods of the buffer

• The calls to acquire() and release() on the empty and full semaphores are crossed between the insert() and remove() methods


• We have seen a criss-crossing of semaphore calls already in the example where semaphores were used to enforce the execution sequence of two different blocks of code

• Informally, the logic of this example might be expressed as, “You can’t remove unless someone has inserted,” and vice-versa.


• In particular in this example:
• empty.acquire() is called at the top of insert()
• empty.release() is called at the bottom of remove()
• full.acquire() is called at the top of remove()
• full.release() is called at the bottom of insert()


• The bodies of both insert() and remove() between the calls on empty and full are protected by calls to acquire() and release() on mutex

• Since there is just one, shared mutex semaphore, that means that the bodies of the two methods together form one critical section

• Only one thread at a time can be in either insert() or remove()


• The diagram on the following overhead illustrates the pairing of calls to mutex, making the common critical section

• More importantly, it graphically shows how the calls on the other semaphores are criss-crossed

[Diagram: the paired calls on mutex forming one critical section, and the criss-crossed calls on empty and full]

• It bears repeating that the book doesn’t give a cosmic theory explaining the placement of the calls to acquire() and release()
• The example is given in totality
• Someone figured this solution out, and all we can do is accept it as given, and try to see how it accomplishes what it does


• The rest of the book code to make this a working example follows


/**
 * An interface for buffers
 */

public interface Buffer
{
    /**
     * insert an item into the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * remove an item from the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract Object remove();
}


/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

public class Producer implements Runnable
{
    public Producer(Buffer b) {
        buffer = b;
    }

    public void run()
    {
        Date message;

        while (true) {
            System.out.println("Producer napping");
            SleepUtilities.nap();

            // produce an item & enter it into the buffer
            message = new Date();
            System.out.println("Producer produced " + message);

            buffer.insert(message);
        }
    }

    private Buffer buffer;
}


/**
 * This is the consumer thread for the bounded buffer problem.
 */

import java.util.*;

public class Consumer implements Runnable
{
    public Consumer(Buffer b) {
        buffer = b;
    }

    public void run()
    {
        Date message;

        while (true)
        {
            System.out.println("Consumer napping");
            SleepUtilities.nap();

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");

            message = (Date)buffer.remove();
        }
    }

    private Buffer buffer;
}


/**
 * This creates the buffer and the producer and consumer threads.
 */

public class Factory
{
    public static void main(String args[]) {
        Buffer server = new BoundedBuffer();

        // now create the producer and consumer threads
        Thread producerThread = new Thread(new Producer(server));
        Thread consumerThread = new Thread(new Consumer(server));

        producerThread.start();
        consumerThread.start();
    }
}


/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}


• The book’s Semaphore class follows
• Strictly speaking, the example was written to use this home-made class
• Presumably the example would also work with objects of the Java API Semaphore class
• The keyword “synchronized” in the given class is what makes it work
• This keyword will be specifically covered in the section of the notes covering Java synchronization


/**
 * Semaphore.java
 *
 * A basic counting semaphore using Java synchronization.
 */

public class Semaphore
{
    private int value;

    public Semaphore(int value) {
        this.value = value;
    }

    public synchronized void acquire() {
        while (value <= 0) {
            try {
                wait();
            }
            catch (InterruptedException e) { }
        }

        value--;
    }

    public synchronized void release() {
        ++value;

        notify();
    }
}


The Readers-Writers Problem

• The author explains this in general terms of a database

• The database is the resource shared by >1 thread


• At any given time the threads accessing a database may fall into two different categories, with different concurrency requirements
– Readers: Reading is an innocuous activity
– Writers: Writing (updating) is an activity which changes the state of a database


• In database terminology, you control access to a data item by means of a lock

• If you own the lock, you have access to the data item

• Depending on the kind of lock you either have sole access or shared access to the data item


• This may be somewhat confusing because the term locking has appeared here, there, and everywhere

• We seek him here, we seek him there,
  Those Frenchies seek him everywhere.
  Is he in heaven? — Is he in hell?
  That damned, elusive Pimpernel

[Image: Leslie Howard as the Scarlet Pimpernel]

• Database management systems have much in common with operating systems

• Among the things they have in common are the need for locking and the use of the term locking for this construct

• The database management system may or may not be tightly integrated with the operating system

• Either way, the application level locking in the database is supported by system level locking


• Recall the analogy used earlier to explain locks
• The desired data item is like the car; the lock is like the title
• If you possess the title, you own the car, allowing you to legally take possession of the car
• If you possess the lock on a data item, you are allowed to access the data item


• Application level locking in a database adds a new twist: there are two kinds of locks
• An exclusive lock: This is the kind of lock discussed so far.
• A writer needs an exclusive lock, which means that all other writers and readers are excluded while the writer holds the lock


• A shared lock: This is actually a new locking concept
• This is the kind of lock that readers need.
• The idea is that >1 reader can access the data at the same time, as long as writers are excluded


• Readers don’t change the data, so by themselves, they can’t cause concurrency control problems which are based on inconsistent state

• They can get in trouble if they are intermixed with writing operations that do change database state
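• As a side note, the Java API supports the shared/exclusive distinction directly. A minimal sketch using java.util.concurrent.locks.ReentrantReadWriteLock follows (my example, not the book’s approach)

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedData
{
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rw.readLock().lock();      // shared: many readers may hold this at once
        try {
            return value;
        }
        finally {
            rw.readLock().unlock();
        }
    }

    public void write(int v) {
        rw.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            value = v;
        }
        finally {
            rw.writeLock().unlock();
        }
    }
}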


• The book gives two different possible approaches to the readers-writers problem

• It should be noted that neither of the book’s approaches prevents starvation

• In other words, you might say that these solutions are application level implementations of synchronization which are not entirely correct, because they violate the bounded waiting condition


First Readers-Writers Approach

• No reader will be kept waiting unless a writer has already acquired a lock

• Readers don’t wait on other readers
• Readers don’t wait on waiting writers
• Readers have priority
• Writers have to wait
• Writers may starve


Second Readers-Writers Approach

• Once a writer is ready, it gets the lock as soon as possible
• Writers have to wait for the current readers to finish, and no longer
• Writers have to wait on each other, presumably in FIFO order
• Writers have priority
• Readers may starve


Other observations about the readers-writers problem

• You have to be able to distinguish reader and writer threads (processes) from each other

• For this scheme to give much processing advantage, you probably need more readers than writers in order to justify implementing shared as well as exclusive locks


• Garden variety databases would tend to have more readers than writers

• The solution approaches could be extended to prevent starvation and to add other desirable characteristics, but that will not be pursued here


• The first solution code presented by the book takes approach 1: the readers have priority
• As a consequence, the provided solution would allow starvation of writers to happen
• Book code follows, along with some explanations
• When reading the code, the thing to notice, just like with the bounded buffer problem, is the placement and use of semaphores


/**
 * Database.java
 *
 * This class contains the methods the readers and writers will use
 * to coordinate access to the database. Access is coordinated using
 * semaphores.
 */

public class Database implements RWLock
{
    // the number of active readers
    private int readerCount;

    Semaphore mutex;  // controls access to readerCount
    Semaphore db;     // controls access to the database

    public Database() {
        readerCount = 0;

        mutex = new Semaphore(1);
        db = new Semaphore(1);
    }

    public void acquireReadLock(int readerNum) {
        mutex.acquire();

        ++readerCount;

        // if I am the first reader tell all others
        // that the database is being read
        if (readerCount == 1)
            db.acquire();

        System.out.println("Reader " + readerNum + " is reading. Reader count = " + readerCount);

        mutex.release();
    }

    public void releaseReadLock(int readerNum) {
        mutex.acquire();

        --readerCount;

        // if I am the last reader tell all others
        // that the database is no longer being read
        if (readerCount == 0)
            db.release();

        System.out.println("Reader " + readerNum + " is done reading. Reader count = " + readerCount);

        mutex.release();
    }

    public void acquireWriteLock(int writerNum) {
        db.acquire();
        System.out.println("writer " + writerNum + " is writing.");
    }

    public void releaseWriteLock(int writerNum) {
        System.out.println("writer " + writerNum + " is done writing.");
        db.release();
    }
}


• The starting point for understanding this first database example is comparing it with the bounded-buffer example

• An important difference from the bounded-buffer example is that this example does not actually do any reading or writing of data to a database


• The application code for the readers-writers problem simply implements the protocol for assigning different kinds of locks to requesting processes

• The problem is complicated enough as it is without trying to inject any reality into it.

• It’s sufficient to worry about the locking protocol and not deal with any actual data


• Like the bounded-buffer example, this example has a mutex semaphore

• As noted in the comments in the code, this semaphore provides mutual exclusion on the variable readerCount


• Both acquireReadLock() and releaseReadLock() are enclosed in acquire() and release() calls on mutex

• Together, those two methods constitute one critical section

• The write lock methods don’t deal with the readerCount variable, so those methods don’t contain calls on mutex


• While the bounded-buffer example had two additional semaphores, empty and full, this example has only one other semaphore, db

• Keep in mind that this is not the db itself, it’s the lock on the db

• And therein lies a tale…


• Once again, the book has introduced a new level of complexity out of the blue by using this classic problem as an illustration

• The db semaphore itself is a simple semaphore with acquire() and release() operations

• It serves the purpose of a building block to support other operations


• The db semaphore supports the development of the two different kinds of application locks, read and write

• In other words, using db as a building block, it becomes possible not just to do a simple acquire()

• It becomes possible to do separate acquires for read vs. write locks


• This is supported in the code by the RWLock interface
• The so-called Database class implements this interface
• That means that the Database class has these four methods:
– acquireReadLock()
– acquireWriteLock()
– releaseReadLock()
– releaseWriteLock()


• In a sense, the so-called Database class is a meta-semaphore class

• In other words, it’s like a semaphore class, but a new and more complicated semaphore with two different kinds of locks and two acquire() methods and two release() methods


• This much should be clear:
• The Database class is not really a database class
• There is no database in the application
• The Database class simply has to do with implementing a protocol for accessing a database, if there were a database in the application


• The Database class implements the two acquire() and two release() methods by carefully using regular calls to acquire() and release() on mutex and on db

• Understanding the acquisition and release of the read and write locks depends on understanding the effect of placing various calls to acquire() and release() on mutex and db at various points in the code.


• Once again, a fundamental element of understanding the code involves the criss-crossing of calls on semaphores

• In the bounded-buffer example, calls to acquire() and release() on a semaphore were criss-crossed in the code for the insert() and remove() methods of the shared buffer


• In this example, the calls to acquire() and release() on db are criss-crossed between the acquireReadLock(), releaseReadLock(), acquireWriteLock(), and releaseWriteLock() methods of the Database class

• The criss-crossed calls surround the logic needed to actually implement and distinguish between read and write locks


Write locks in the database example

• Observe that the write locks are relatively simple—they are like normal exclusive locks
• They enforce mutual exclusion on the database
• In essence, they are binary
• If one writer has access to the database, another thread can’t gain access until the writer in possession releases the database
• This is done with the db semaphore


• The calls to acquireWriteLock() and releaseWriteLock() are essentially wrappers around calls to acquire() and release() on db

• Those calls are the only functional code they contain

• See the code on the following overhead


public void acquireWriteLock(int writerNum)
{
    db.acquire();
    System.out.println("writer " + writerNum + " is writing.");
}

public void releaseWriteLock(int writerNum)
{
    System.out.println("writer " + writerNum + " is done writing.");
    db.release();
}


Read locks in the database example

• The read locks make use of the mutex semaphore to protect the readerCount variable

• mutex is a garden variety semaphore which enforces mutual exclusion on both the acquisition and release of read locks

• There is no fancy criss-crossing with this semaphore


• Both acquireReadLock() and releaseReadLock() begin with mutex.acquire() and end with mutex.release()

• All of the db acquire and release code is protected by mutex, but in particular, the shared variable readerCount is protected


• It turns out that readerCount is a variable that makes it possible to in essence treat db as a counting semaphore in the context of readers

• It’s not that there are multiple copies of the shared database resource

• It’s that more than one reader is allowed to access the resource at a time


• This is just a side note
• Earlier on, the statement was made that counting semaphores are usually used when you have multiple, interchangeable copies of a resource
• Here we have a case where a counting semaphore is used to allow >1 thread access to a single resource at a time


• The read locks make use of the db semaphore
• This is the semaphore that would protect the database, if there were one in fact
• The observations made earlier about whether readers block writers or vice-versa in database access can be restated in terms of acquisition and release of the db semaphore


• If a writer has already executed db.acquire(), then a first reader cannot get past the db.acquire() call in the acquireReadLock() method

• If a first reader cannot acquire, then no readers beyond the first will be able to acquire

• Thus, a single writer will block any readers


• However, more than one reader can access the db at the same time

• The call to db.acquire() occurs in acquireReadLock() only when readerCount == 1, for the first reader

• For any subsequent readers, it is not necessary to call db.acquire()

• Thus, readers do not block each other


• However, the fact that the first reader had to do an acquire means that readers will block a writer

• It doesn’t matter whether the first reader is still active or another reader remains active; a writer will still be blocked from entering

• This is because the db.release() call is made in the releaseReadLock() method only when the readerCount has gone to 0


• The code for acquireReadLock() and releaseReadLock() is given on the following overhead

• It will repay careful study to see how the statements made above are actually reflected in the implementation


public void acquireReadLock(int readerNum) {
    mutex.acquire();

    ++readerCount;

    // if I am the first reader tell all others
    // that the database is being read
    if (readerCount == 1)
        db.acquire();

    System.out.println("Reader " + readerNum + " is reading. Reader count = " + readerCount);

    mutex.release();
}

public void releaseReadLock(int readerNum) {
    mutex.acquire();

    --readerCount;

    // if I am the last reader tell all others
    // that the database is no longer being read
    if (readerCount == 0)
        db.release();

    System.out.println("Reader " + readerNum + " is done reading. Reader count = " + readerCount);

    mutex.release();
}


• The rest of the book code to make this a working example follows


/**
 * An interface for reader-writer locks.
 *
 * In the text we do not have readers and writers
 * pass their number into each method. However we do so
 * here to aid in output messages.
 */

public interface RWLock
{
    public abstract void acquireReadLock(int readerNum);
    public abstract void acquireWriteLock(int writerNum);
    public abstract void releaseReadLock(int readerNum);
    public abstract void releaseWriteLock(int writerNum);
}


/**
 * Reader.java
 *
 * A reader to the database.
 */

public class Reader implements Runnable
{
    private RWLock db;
    private int readerNum;

    public Reader(int readerNum, RWLock db) {
        this.readerNum = readerNum;
        this.db = db;
    }

    public void run() {
        while (true) {
            SleepUtilities.nap();

            System.out.println("reader " + readerNum + " wants to read.");
            db.acquireReadLock(readerNum);

            // you have access to read from the database
            // let's read for awhile .....
            SleepUtilities.nap();

            db.releaseReadLock(readerNum);
        }
    }
}


/**
 * Writer.java
 *
 * A writer to the database.
 */

public class Writer implements Runnable
{
    private RWLock server;
    private int writerNum;

    public Writer(int w, RWLock db) {
        writerNum = w;
        server = db;
    }

    public void run() {
        while (true)
        {
            SleepUtilities.nap();

            System.out.println("writer " + writerNum + " wants to write.");
            server.acquireWriteLock(writerNum);

            // you have access to write to the database
            // write for awhile ...
            SleepUtilities.nap();

            server.releaseWriteLock(writerNum);
        }
    }
}


/**
 * Factory.java
 *
 * This class creates the reader and writer threads and
 * the database they will be using to coordinate access.
 */

public class Factory
{
    public static final int NUM_OF_READERS = 3;
    public static final int NUM_OF_WRITERS = 2;

    public static void main(String args[])
    {
        RWLock server = new Database();

        Thread[] readerArray = new Thread[NUM_OF_READERS];
        Thread[] writerArray = new Thread[NUM_OF_WRITERS];

        for (int i = 0; i < NUM_OF_READERS; i++) {
            readerArray[i] = new Thread(new Reader(i, server));
            readerArray[i].start();
        }

        for (int i = 0; i < NUM_OF_WRITERS; i++) {
            writerArray[i] = new Thread(new Writer(i, server));
            writerArray[i].start();
        }
    }
}


/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}


• These are the same observations that were made with the producer-consumer example
– The book’s Semaphore class follows
– Strictly speaking, the example was written to use this home-made class
– Presumably the example would also work with objects of the Java API Semaphore class
– The keyword “synchronized” in the given class is what makes it work
– This keyword will be specifically covered in the section of the notes covering Java synchronization


/**
 * Semaphore.java
 *
 * A basic counting semaphore using Java synchronization.
 */

public class Semaphore
{
    private int value;

    public Semaphore(int value) {
        this.value = value;
    }

    public synchronized void acquire() {
        while (value <= 0) {
            try {
                wait();
            }
            catch (InterruptedException e) { }
        }
        value--;
    }

    public synchronized void release() {
        ++value;
        notify();
    }
}


The Dining Philosophers Problem

[Image: the scarlet pimpernel flower, Anagallis arvensis]

• Let there be one rice bowl in the center
• Let there be five philosophers
• Let there be only five chopsticks, one between each of the philosophers


• Let concurrent eating have these conditions
• 1. A philosopher tries to pick up the two chopsticks immediately on each side
– Picking up one chopstick is an independent act.
– It isn’t possible to pick up both simultaneously.


• 2. If a philosopher succeeds in acquiring the two chopsticks, then the philosopher can eat.
– Eating cannot be interrupted
• 3. When the philosopher is done eating, the chopsticks are put down one after the other
– Putting down one chopstick is an independent act.
– It isn’t possible to put down both simultaneously.
• Note that under these conditions it would not be possible for two neighboring philosophers to be eating at the same time


• This concurrency control problem has two challenges in it:
• 1. Starvation
• Due to the sequence of events, one philosopher may never be able to pick up two chopsticks and eat


• 2. Deadlock
• Due to the sequence of events, each philosopher may succeed in picking up either the chopstick on the left or the chopstick on the right.
– None will eat because they are waiting/attempting to pick up the other chopstick.
– Since they won’t be eating, they’ll never finish and put down the chopstick they do hold


• A full discussion of deadlock will be given in chapter 7
– In the meantime, possible solutions to starvation and deadlock under this scenario include:
– Allow at most four philosophers at the table
– Allow a philosopher to pick up chopsticks only if both are available
– An asymmetric solution: odd philosophers reach first with their left hands, even philosophers with their right (sketched below)
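• A sketch of the asymmetric solution follows, written against the API Semaphore class rather than the book’s partial code; the class and method names are mine

import java.util.concurrent.Semaphore;

public class Philosopher implements Runnable
{
    private static final int N = 5;
    private static final Semaphore[] chopstick = new Semaphore[N];
    static {
        for (int i = 0; i < N; i++)
            chopstick[i] = new Semaphore(1);
    }

    private final int id;

    public Philosopher(int id) {
        this.id = id;
    }

    public void run() {
        Semaphore left = chopstick[id];
        Semaphore right = chopstick[(id + 1) % N];

        // Odd philosophers reach for the left chopstick first, even
        // philosophers for the right, so a cycle of threads each holding
        // one chopstick and waiting for the next cannot form.
        Semaphore first = (id % 2 == 1) ? left : right;
        Semaphore second = (id % 2 == 1) ? right : left;

        while (true) {
            first.acquireUninterruptibly();
            second.acquireUninterruptibly();

            System.out.println("Philosopher " + id + " is eating");

            second.release();
            first.release();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < N; i++)
            new Thread(new Philosopher(i)).start();
    }
}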


• Note that all proposed solutions either reduce concurrency or introduce artificial constraints

• The book gives partial code for this problem but having looked at all of the code for the previous two examples, it is not necessary to pursue more code for this one


Problems with Semaphores

• Looking back on these examples, it should be clear that using semaphores can be problematic

• Depending on the circumstance, more than one semaphore might be needed for different purposes

• Correct implementation of synchronization using semaphores might require cross-over of calls


• The author points out the following common mistakes when working with semaphores, and some of the problems that can result:
• 1. Reversal of calls: mutex.release() before mutex.acquire().
– This can lead to violation of mutual exclusion


• 2. Double acquisition: mutex.acquire() followed by mutex.acquire().
– This will lead to deadlock
• 3. Forgetting one or the other or both calls.
– This will lead to deadlock or a violation of mutual exclusion (illustrated below)
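• The three mistakes are easy to see in code. The following deliberately buggy fragments (mine, written against the book’s home-made Semaphore class) illustrate them

public class SemaphoreMistakes
{
    private final Semaphore mutex = new Semaphore(1);

    // 1. Reversal of calls: the critical section runs unprotected,
    //    and the leading release() lets a second thread in alongside
    void reversedCalls() {
        mutex.release();
        // ... critical section, unprotected ...
        mutex.acquire();
    }

    // 2. Double acquisition: the second acquire() blocks forever
    void doubleAcquire() {
        mutex.acquire();
        mutex.acquire();   // deadlock
        mutex.release();
    }

    // 3. Forgetting release(): the next caller of acquire() waits forever
    void missingRelease() {
        mutex.acquire();
        // ... critical section, but no mutex.release() ...
    }
}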


Problems with writing concurrent code

• In future sections, other, hopefully better approaches to doing synchronization will be given

• However, it’s not too soon to make some general observations about synchronization before diving into details again


• You may recall that in a previous chapter the topics of multiple core architectures and parallel programming were mentioned

• The idea was that as multiple processor architectures become more prevalent, in order to make the fullest use of them, programmers will have to start becoming proficient at writing parallel code


• Before worrying about that, it is worth considering the question of concurrent code

• This is already a challenge existing in single processor architectures, and we’ve begun to see what it involves

• More than one thread or process has elements that can run independently, but the threads cooperate with each other by means of a reference to a shared resource


• The goal is that the multiple threads together should solve parts of a subdivided problem and the independent parts of the solution should somehow be brought together as a whole at the end

• The challenges of dividing and re-uniting were given earlier

• Having seen the examples, the follow-up question might have to do with making it all work in practice


• Consider questions like these, while keeping in mind the bounded-buffer, readers-writers, and possibly the dining philosophers problems:

• 1. In general, when the code is running, would it be apparent whether it was running correctly or not?


• A grievous problem like deadlock might be obvious

• Other than that, is it possible to concretely specify what correct behavior would look like?

• Can you identify inconsistent state in a shared resource?


• 2. Is it possible that the code could run successfully for a period of time, and then due to the vagaries of concurrent scheduling, it would not run correctly?


– The answer to the foregoing question, unfortunately, is yes
– Even if you can specify correct behavior, running test cases is not sufficient to test your code
– A complete set of test cases would have to somehow include every possible interleaving of executions of lines of code
– First of all, you don’t have the ability to control this
– Second of all, the number of possible executions is practically limitless


• 3. Points 1 and 2 are the first half of debugging—identifying that something is wrong

• The second half of debugging is identifying what, in particular, is wrong

• Knowing that concurrent code is faulty, what tools are available to identify the specific source of the problem?– The answer is essentially, none


• 4. The last step of debugging is fixing the mistake once you’ve found it
– You may not have considered before the relationship between finding and fixing
– They are joined at the hip
– You can’t really identify something that is incorrect unless you can at least provisionally imagine what a correct implementation would be
– If you have no clue what a “solution” is, how do you know that you’ve actually found a problem?


• If you think about the debugging you’ve done up to this point, you may realize the truth of the foregoing statements

• There is a joke that mathematical proofs consist of breaking things down into sufficiently small pieces that ultimately each step could be justified by this statement: “As any fool can bloody well see…”


• Debugging with the help of a compiler boils down to attacking one little problem at a time, hopefully sufficiently small that you can see what’s wrong and know how to fix it.

• By definition, the trouble with concurrent code is that many different things are going on at the same time

• You can consider each thread individually, but its operation is always in the context of other, independently scheduled threads


• There is no convenient tool for breaking a concurrent program down further
• Either you can see it in its totality or you can’t
• Either you can perceive a problem that it contains or you can’t


• You need to understand the problem and the coding techniques well enough that you can write the solution correctly from scratch

• Or you write an incorrect solution and are clever enough to see that it’s incorrect when re-reading it, even though it has run correctly so far


• Or you write an incorrect solution and are lucky enough that its run-time behavior is so bad (like a deadlock) that it’s apparent that something is wrong

• Fixing the incorrect code still boils down to deep thought, and essentially re-coding the solution correctly from scratch


• At this point programming becomes wizardry
• Keep in mind that for less capable people, the point where programming and debugging descend into wizardry comes with much simpler problems than concurrent code


• When programming becomes wizardry, this is not a happy state of affairs

• There are some techniques for dealing with the problems of concurrent programming

• However, you don’t learn them in the average undergraduate degree


• For most programmers the moral of the story is that you should tip-toe carefully around concurrency control problems whenever possible

• Leave them to the experts
• Consider the story of the Therac-25


From Wikipedia:
• Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) after the Therac-6 and Therac-20 units (the earlier units had been produced in partnership with CGR of France). It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation, approximately 100 times the intended dose.[2] Three of the six patients died as a direct consequence. These accidents highlighted the dangers of software control of safety-critical systems, and they have become a standard case study in health informatics and software engineering.


• Among many software and system design issues, this was listed as one of the engineering issues:

• The equipment control task did not properly synchronize with the operator interface task, so that race conditions occurred if the operator changed the setup too quickly.

• This was missed during testing, since it took some practice before operators were able to work quickly enough for the problem to occur.


6.7 Monitors

• Monitors are an important topic for two reasons
– As seen, the use of semaphores is fraught with difficulty, so overall, monitors might be a better concept to learn
– Monitors are worth understanding because Java synchronization is ultimately built on this more general construct


• A high level, O-O description of what a monitor is:

• It’s a class with (private) instance variables and (public) methods

• Mutual exclusion is enforced over all of the methods at the same time


• This means that no two threads can be in any of the methods at the same time

• In other words, all of the code belonging to the class is one giant critical section

• Broadening the extent of the critical section in this way is a helpful generalization

• Notice the similarity with enclosing multiple methods in pairs of calls to acquire() and release() on a shared semaphore, mutex
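• In Java terms (a sketch of the concept, anticipating section 6.8, with names of my choosing), a class whose every method is synchronized behaves this way

// Every method locks the same object (this), so no two threads can be
// inside any of these methods at the same time: the whole class body
// is effectively one critical section.
public class MonitorLikeCounter
{
    private int count;

    public synchronized void increment() { ++count; }
    public synchronized void decrement() { --count; }
    public synchronized int getCount()   { return count; }
}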


• It is possible that an implementation of a monitor class would include some sort of acquire() and release() methods

• But if the monitor were more general, and included other methods, blanket mutual exclusion means there don’t have to be separate acquire() and release() calls on a monitor or in the code for the other methods


• The other monitor methods could be called directly, and mutual exclusion would apply to them

• Any call to a monitor method implicitly involves acquiring the monitor

• Using monitors may at least partially alleviate the problem that semaphore based code has of correctly placing acquire() and release() calls


• Notice also that under this scheme, the private instance variables (possibly >1), which in some sense may be thought of as locks or perhaps shared resources, are completely protected by definition

• There is no access to them except through the methods, and all of the methods have mutual exclusion enforced on them


The relationship of monitors to Java

• In Java there is a Monitor class, but that is just something made available in the API

• The monitor concept is a fundamental part of the structure and design of Java

• It is the monitor concept, not the Monitor class, that is the basis for all synchronization in Java


• The Object class in Java is the source of certain monitor (concept) methods that are available to its subclasses

• Java also has a Condition interface which corresponds to what is called a condition variable in the monitor concept

• The condition variable in a monitor is roughly analogous to the lock variable inside a semaphore


• The monitor concepts in the Object class along with the Condition interface would make it possible for a programmer to write a class that embodied all of the characteristics of a monitor using Java syntax
• There is no reason to do so
• Java synchronization may be built on the monitor concept, but application synchronization doesn’t have to use monitors directly


• The purpose of covering the monitor concept is to help understand the Java synchronization syntax so that it can be used in an informed way

• A subset of Java synchronization syntax will be covered after this section on monitors

• The intent of covering the syntax is to show how to use it to synchronize application code


The entry set for a monitor

• Monitors enforce mutual exclusion over all of their method code.

• After one thread has entered one method, others may be scheduled and attempt to enter the critical section.

• They will not be able to do so.


• The monitor has what is known as an entry set.

• It is essentially a scheduling queue for threads that want to get into the critical section.

• It will be useful to distinguish the entry set from waiting sets, which will be discussed in later overheads.


Condition Variables (or Objects) in Monitors

• A monitor class can have Condition variables declared in it:
• private Condition x, y;
• A monitor class will also have two special methods: wait() and signal()


• In the Object class of Java there is a wait() method which is like the conceptual wait() method in a monitor

• In the Object class of Java there are also methods notify() and notifyAll().

• These methods correspond to the conceptual signal() method in a monitor


• In order to understand how these methods work, it’s helpful to have a concrete scenario

• Let there be two threads (or processes) P and Q

• Let those threads share a reference to a monitor object, m

• Let the monitor, m, have a condition variable x and a method monitorMethod()


• Both P and Q have the ability to call m.monitorMethod()—but because m is a monitor, only one of P or Q can be running in the code of monitorMethod() at a time
• Suppose that the call to m.monitorMethod() was made within the code for Q
• Inside the code for monitorMethod() there may be a call
• x.wait();


• The critical question is, what does this call cause to happen?

• The result can be described as follows: The thread that was “running in the monitor” is suspended

• In other words, under this scenario, the thread Q, which was the thread which made the call on the monitor object, m, is suspended


• The critical point about thread suspension is that once suspended, the suspended thread is no longer “in the monitor”

• In other words, once the original thread is suspended, if another thread makes a call to monitorMethod() (or any other monitor method) the new thread will be allowed into the monitor code

• The suspended thread is not in the monitor, so mutual exclusion doesn’t prevent the new thread from entering


• The original, suspended thread, Q, will remain suspended until another thread, such as P, is running monitor code which makes a call such as this:
• x.signal()
• In Java this would be x.notify() or x.notifyAll()
• This bears repeating: Once Q is suspended by a call to x.wait(), it can only be resumed by a call to x.notify() made by another thread, P
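• The same scenario can be written with Java’s built-in monitor methods (a sketch with names of my choosing; a Java object has a single implicit condition, so there is no x)

public class Rendezvous
{
    private boolean ready = false;

    // Q calls this; wait() suspends Q and releases the monitor,
    // which is what lets P get in to call makeReady()
    public synchronized void awaitReady() throws InterruptedException {
        while (!ready)
            wait();
    }

    // P calls this; notify() resumes one thread waiting in awaitReady()
    public synchronized void makeReady() {
        ready = true;
        notify();
    }
}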


• You can see from the logic of this that thread suspension has to remove a thread from the monitor

• Only if the original thread was removed from the monitor could another thread enter the monitor and make a call to notify() on the condition variable, allowing the first one to resume


• In a primitive semaphore, if a resource is not available, when a process calls acquire() and fails, the process goes into a spinlock
• The logic of wait() improves on this
• A process can voluntarily step aside by calling x.wait()
• This allows another thread into the protected code
• This is analogous to the advanced semaphores without spin locks that were briefly described earlier


• It becomes second nature to think of concurrency control as a technique for enforcing mutual exclusion on a resource

• Recall that synchronization also includes the ability to enforce a particular execution order

• Notice that wait() seems to imply timing as much as mutual exclusion


• It may be easier to remember the idea underlying wait() by thinking of it as a tool that makes it possible for a process to take actions which affect its own execution order

• Concurrency control is just as accurately described as “enforcing a suitable order or interleaving of execution” as it is described as “enforcing mutual exclusion”


• When considering what wait() does, it may be helpful to remember the concept of “politeness” that came up in the Alphonse and Gaston phase of trying to explain concurrency control

• Making a wait() call allows other threads to go first


• In the section on semaphores the book considered the possibility of implementing a semaphore without a spin lock

• This involved calls to methods conceptually named block() and wakeup()

• The implementation of such a semaphore would have to support waiting lists of blocked processes

• This is the idea underlying wait() and notify()


• The authors now raise the question of what it would mean to call release() on a semaphore when there is nothing to release

• If you went back to look at the original pseudo-code for a semaphore, you would find that this would increment the counter—even though that could put the count above the number of actual resources


• It may or may not be possible to fix the semaphore pseudo-code to deal with this—but it’s not important

• The real point is that if you had an advanced semaphore that blocked and woke up threads, you would be dealing with a list of waiting processes anyway

• A call to release() (notify() or wakeup()) ultimately may mean waking up a blocked process on a waiting list


Monitors and waiting lists

• The reason for the previous detour back to the semaphore explanation is that it leads to this:

• The monitor concept explicitly includes an implementation of waiting lists

• If a thread running in the monitor causes a call such as x.wait() to be made, that thread is put in the waiting list for that condition variable

Page 153: Chapter 6, Process Synchronization, Overheads, Part 2

153

• The thread that made the original x.wait() call voluntarily stepped aside

• When another thread makes a call x.notify(), the thread making the notify() call is voluntarily stepping aside, and one thread in the waiting list will be resumed

Page 154: Chapter 6, Process Synchronization, Overheads, Part 2

154

• If some thread were to make a call x.notifyAll(), all waiting threads would potentially be resumed

• The management of waiting lists and the ability to call wait(), notify(), and notifyAll() leads to a consideration which the implementation of a monitor has to take into account

Page 155: Chapter 6, Process Synchronization, Overheads, Part 2

155

• Remember that in describing wait(), the thread that executed x.wait() was immediately suspended.

• It immediately left the monitor, potentially allowing another thread in.

• The question is, what happens when a thread executes x.notify()?

Page 156: Chapter 6, Process Synchronization, Overheads, Part 2

156

• Let this scenario be given:

• Thread Q is waiting because it earlier called x.wait()

• Thread P is running and it calls x.signal()

• By definition, only one of P and Q can be running in the monitor at the same time

Page 157: Chapter 6, Process Synchronization, Overheads, Part 2

157

• What protocol should be used to allow Q to begin running in the monitor instead of P?

• This question is not one that has to be answered by the application programmer

• It is a question that confronts the designer of a particular monitor implementation

Page 158: Chapter 6, Process Synchronization, Overheads, Part 2

158

• In general, there are two alternatives:

• Signal and wait:

• P signals, and its call to signal() (notify()) implicitly includes a call to wait(), which allows Q to take its turn immediately.

• After Q finishes, P resumes.

Page 159: Chapter 6, Process Synchronization, Overheads, Part 2

159

• Signal and continue:

• P signals and continues until it leaves the monitor.

• At that point Q can enter the monitor (or potentially may not, if prevented by some other condition)

• In other words, P does not immediately leave the monitor, and Q does not immediately enter it
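• The following minimal Java-flavored sketch (a hypothetical class; the names and the ready flag are illustrative assumptions) shows signal and continue, which is the policy Java itself uses: the notifying thread keeps the lock and keeps running until its synchronized method returns, and only then can the woken thread re-enter

class SignalAndContinueDemo {
    private boolean ready = false;

    public synchronized void waitForSignal() throws InterruptedException {
        while (!ready)
            wait();          // Q waits here, releasing the lock
        // Q runs again only after P has left the monitor
    }

    public synchronized void signal() {
        ready = true;
        notify();            // P signals...
        // ...but P still holds the lock and continues to the end
        // of this method before Q can re-enter
    }
}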

Page 160: Chapter 6, Process Synchronization, Overheads, Part 2

160

• One last point about terminology can be made.

• It is possible, in principle, to make calls to wait() on any of the potentially many condition variables in a monitor, x, y, etc.

• Each of the condition variables would have its own waiting list.

• These separate waiting lists are distinct from the entry set described earlier, which contains the threads waiting to enter the critical section for the first time

Page 161: Chapter 6, Process Synchronization, Overheads, Part 2

161

• This may seem like it was a pretty rough outline of monitors and wait() and signal()

• In particular, you may have serious questions about how notifyAll() would work, as opposed to simple notify()

• This will be discussed further in the next section, which is on Java synchronization

Page 162: Chapter 6, Process Synchronization, Overheads, Part 2

162

• The book next tries to illustrate the use of monitors in order to solve the dining philosophers problem

• I am not going to cover this

Page 163: Chapter 6, Process Synchronization, Overheads, Part 2

163

6.8 Java Synchronization

Page 164: Chapter 6, Process Synchronization, Overheads, Part 2

164

The term thread safe

• Term: • Thread safe. • Definition: • Concurrent threads have been implemented

so that they leave shared data in a consistent state

Page 165: Chapter 6, Process Synchronization, Overheads, Part 2

165

• Note: Much of the example code shown previously would not be thread safe.

• Threaded code that manipulates shared data without synchronization syntax may be flagged by an IDE or static-analysis tool as not thread safe; the Java compiler itself, however, will accept such code without complaint

Page 166: Chapter 6, Process Synchronization, Overheads, Part 2

166

• The most recent book examples, which used a semaphore class that did use the Java synchronization syntax internally, should not be flagged this way

• If code produces only a warning, meaning that it can still be run, it should be made emphatically clear that even though it runs, it IS NOT THREAD SAFE

Page 167: Chapter 6, Process Synchronization, Overheads, Part 2

167

• In other words, unsafe code may appear to run

• More accurately it may run and even give correct results some or most of the time

• But depending on the vagaries of thread scheduling, at completely unpredictable times, it will give incorrect results

• This idea was discussed in some detail earlier
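• A minimal sketch of why (a hypothetical class, not from the book): two threads calling increment() can interleave the read-modify-write steps of count++ and lose an update, even though the code appears to work most of the time

class UnsafeCounter {
    private int count = 0;

    public void increment() {
        count++;   // really three steps: read count, add one, write back
    }              // two interleaved calls can lose one of the updates

    public int get() {
        return count;
    }
}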

Page 168: Chapter 6, Process Synchronization, Overheads, Part 2

168

• If you compiled the code for Peterson’s solution, for example, this defect would hold.

• The code given for Peterson’s solution did not actually include any functioning synchronization mechanism on the shared variables that modeled turn and desire

• It was only later, in the bounded buffer and readers-writers problems, that the authors actually used the keyword synchronized in the implementation of their Semaphore class

Page 169: Chapter 6, Process Synchronization, Overheads, Part 2

169

Preliminaries

• More repetitive preliminaries:

– The idea of inconsistent state can be illustrated with the producer-consumer problem:

– If not properly synchronized, calls to insert() and remove() can result in an incorrect count of how many messages are in a shared buffer

Page 170: Chapter 6, Process Synchronization, Overheads, Part 2

170

– Keep in mind that the Java API supports synchronization syntax at the programmer level

– This is based on monitor concepts built into Java

– However, all synchronization ultimately is provided by something like a test and set instruction at the hardware level of the system that Java is running on

Page 171: Chapter 6, Process Synchronization, Overheads, Part 2

171

• Because this has been such a long and twisted path, the book reviews more of the preliminaries

• To begin with, the initial examples were not even truly synchronized.

• This means that they were incorrect.

• They would lead to race conditions on shared variables/shared objects

Page 172: Chapter 6, Process Synchronization, Overheads, Part 2

172

• Although not literally correct, the initial semaphore examples attempted to illustrate what is behind synchronization by introducing the concept of busy waiting or a spin lock

• The book now considers spin locks again

Page 173: Chapter 6, Process Synchronization, Overheads, Part 2

173

Spin locks

• The basic idea of a spin lock is that if one thread holds a resource, another thread wanting that resource will have to wait in some fashion

• In the illustrative, application level pseudo-code, this waiting took the form of sitting in a loop
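• A minimal sketch of that loop in Java-like code (the available flag is a hypothetical illustration; note that without synchronization even this test-then-set is itself a race condition)

while (!available)
    ;                    // busy wait: spin until the holder releases the resource
available = false;       // claim the resource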

Page 174: Chapter 6, Process Synchronization, Overheads, Part 2

174

Spin locks are wasteful

• The first problem with busy waiting is that it’s wasteful

• A thread that doesn’t have a resource it needs can be scheduled and burn up CPU cycles spinning in a loop until its time slice expires

Page 175: Chapter 6, Process Synchronization, Overheads, Part 2

175

Livelock

• The second problem with busy waiting is that it can lead to livelock

• Livelock is not quite the same as deadlock

• In deadlock, two threads “can’t move” because each is waiting for an action that only the other can take

• In livelock, both threads are alive and scheduled, but they still don’t make any progress

Page 176: Chapter 6, Process Synchronization, Overheads, Part 2

176

• The book suggests this illustrative, bounded buffer scenario:

• A producer has higher priority than a consumer

• The producer fills the shared buffer

• The producer remains alive, continuing to try to enter items into the buffer

Page 177: Chapter 6, Process Synchronization, Overheads, Part 2

177

• The consumer is alive, but having lower priority, it is never scheduled, so it can never remove a message from the buffer

• Thus, the producer can never enter a new message into the buffer

• The consumer can never remove one

• But they’re both alive, the producer frantically so, and the consumer slothfully so

Page 178: Chapter 6, Process Synchronization, Overheads, Part 2

178

Deadlock

• Using real syntax that correctly enforces mutual exclusion can lead to deadlock

• Deadlock is a real problem in the development of synchronized code, but it is not literally a problem of synchronization syntax

• In other words, you can write an example that synchronizes correctly but still has this problem

Page 179: Chapter 6, Process Synchronization, Overheads, Part 2

179

• A simplistic example would be an implementation of the dining philosophers where each philosopher picks up the left chopstick and then waits forever for the right one

• The problem is not that there is uncontrolled access to a shared resource.

Page 180: Chapter 6, Process Synchronization, Overheads, Part 2

180

• The problem is that once that state has been entered, it will never be left

• Java synchronization syntax can be introduced and illustrated and the question of how to prevent or resolve deadlocks can be put off until Chapter 7, which is devoted to that question

Page 181: Chapter 6, Process Synchronization, Overheads, Part 2

181

Java synchronization in two steps

• The book takes the introduction of synchronization syntax through two stages:

• Stage 1: You use Java synchronization and the Thread class yield() method to write code that enforces mutual exclusion and which is essentially a correct implementation of busy waiting.

• This is wasteful and livelock prone, but it is synchronized

Page 182: Chapter 6, Process Synchronization, Overheads, Part 2

182

• It should be noted before going further that the busy waiting is not the result of the yield() method itself

• As you will see, the busy waiting results from the fact that the call to yield() is made in a loop

• As long as the loop condition holds, yield() will continue to be called on the thread in question

Page 183: Chapter 6, Process Synchronization, Overheads, Part 2

183

• Stage 2: You use Java synchronization with the wait(), notify(), and notifyAll() methods of the Object class.

• Instead of busy waiting, this relies on the underlying monitor-like capabilities of Java to have threads wait in queues or lists.

• This is deadlock prone, but it deals with the wastefulness of spin locks and the potential, however obscure, for livelock

Page 184: Chapter 6, Process Synchronization, Overheads, Part 2

184

The synchronized Keyword in Java

• Java synchronization is based on the monitor concept, and this descends all the way from the Object class

• Every object in Java has a lock associated with it

• This lock is essentially like a simple monitor, or a monitor with just one condition variable

• Locking for the object is based on the single condition variable

Page 185: Chapter 6, Process Synchronization, Overheads, Part 2

185

• If you are not writing synchronized code—if you are not using the keyword synchronized— the object’s lock is completely immaterial

• It is a system supplied feature of the object which lurks in the background unused by you and having no effect on what you are doing

Page 186: Chapter 6, Process Synchronization, Overheads, Part 2

186

• In the monitor concept, mutual exclusion is enforced on all of the methods of a class at the same time

• Java is finer-grained.

• Inside the code of a class, some methods can be declared synchronized and some can be unsynchronized

Page 187: Chapter 6, Process Synchronization, Overheads, Part 2

187

• However, if >1 method is declared synchronized in a class, then mutual exclusion is enforced across all of those methods at the same time for any threads trying to access the object

• If a method is synchronized and no thread holds the lock, the first thread that calls the method acquires the lock
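• A minimal sketch of this rule (a hypothetical class): because both methods below are synchronized, they exclude each other on the same object’s single lock

class Account {
    private int balance = 0;

    public synchronized void deposit(int amount) {
        balance += amount;   // a thread here excludes getBalance() on this object
    }

    public synchronized int getBalance() {
        return balance;      // a thread here excludes deposit() on this object
    }
}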

Page 188: Chapter 6, Process Synchronization, Overheads, Part 2

188

• Again, Java synchronization is monitor-like.

• There is an entry set for the lock

• If another thread calls a synchronized method and cannot acquire the lock, it is put into the entry set for that lock

• The entry set is a kind of waiting list, but it is not called a waiting list because that term is reserved for something else

Page 189: Chapter 6, Process Synchronization, Overheads, Part 2

189

• When the thread holding the lock finishes running whatever synchronized method it was in, it releases the lock

• At that point, if the entry set has threads in it, the JVM will schedule one

• FIFO scheduling may be done on the entry set, but the Java specifications don’t require it

Page 190: Chapter 6, Process Synchronization, Overheads, Part 2

190

• The first correctly synchronized snippets of sample code which the book offers will be given soon

• They do mutual exclusion on a shared buffer

• They accomplish this by using the Thread class yield() method to do busy waiting

Page 191: Chapter 6, Process Synchronization, Overheads, Part 2

191

• The Java API simply says this about the yield() method:

• “Causes the currently executing thread object to temporarily pause and allow other threads to execute.”

• We don’t know how long the yield lasts, exactly

• The important point is that it lasts long enough for another thread to be scheduled, if there is one that wants to be scheduled

Page 192: Chapter 6, Process Synchronization, Overheads, Part 2

192

• The book doesn’t bother to give a complete set of classes for this solution because it is not a very good one

• Because it implements a kind of busy waiting, it’s wasteful, livelock prone, and deadlock prone

• However, it’s worth asking what it illustrates that the previous examples didn’t

• After the code on the following overheads additional comments will be made

Page 193: Chapter 6, Process Synchronization, Overheads, Part 2

193

Synchronized insert() and remove() Methods for Producers and Consumers of a Bounded Buffer

public synchronized void insert(Object item)
{
    while (count == BUFFER_SIZE)
        Thread.yield();
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Page 194: Chapter 6, Process Synchronization, Overheads, Part 2

194

public synchronized Object remove()
{
    Object item;
    while (count == 0)
        Thread.yield();
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

Page 195: Chapter 6, Process Synchronization, Overheads, Part 2

195

• In previous illustrations we showed spin locks which were pure busy waiting loops

• They had nothing in their bodies

• If a thread in one method was spinning, you would probably hope that another thread could run in the complementary method

• But the other thread would be locked out, due to synchronization

• You could literally make no progress at all

Page 196: Chapter 6, Process Synchronization, Overheads, Part 2

196

• In this latest code, both methods are synchronized—effectively on the same lock

• On the surface, you might think that you haven’t gained anything compared to the previous examples

• What is added by making calls to yield() in the bodies of the loops?

Page 197: Chapter 6, Process Synchronization, Overheads, Part 2

197

• With a call to yield(), a thread is suspended: it gives up the CPU and moves from running back to runnable so that another thread can be scheduled

• We don’t know exactly how long it will remain suspended, but the purpose is for it to step aside long enough for another thread to run

• Note carefully, though, that yield() does not release the object’s lock; the yielding thread is still “in” the synchronized code

Page 198: Chapter 6, Process Synchronization, Overheads, Part 2

198

• This means that CPU time is no longer burned through a whole time slice of empty spinning

• However, a thread that needs the complementary synchronized method still cannot enter it, because the spinning thread never released the lock

• If the loop condition can only be changed by the complementary method, neither thread can make progress; this is exactly why this version was described above as livelock and deadlock prone

Page 199: Chapter 6, Process Synchronization, Overheads, Part 2

199

The yield() spin lock bounded buffer example vs. the semaphore example

• Note that in the fully semaphore-oriented pseudo-solution given before, there were three semaphores

• One handled the mutual exclusion which the keyword synchronized handles here

• The other two handled the cases where the buffer was empty or full

Page 200: Chapter 6, Process Synchronization, Overheads, Part 2

200

• There is no such thing as a synchronized “empty” or “full” variable in the code just given, so there are not two additional uses of synchronized in this example

• The handling of the empty and full cases goes all the way back to the original bounded buffer example

• The code depends on a count variable and modular arithmetic to keep track of when it’s possible to enter and remove

Page 201: Chapter 6, Process Synchronization, Overheads, Part 2

201

• The yield() example has a count variable

• This is part of what is protected by the synchronization

• The condition in the while loops for inserting and removing depends on the value of the count variable.

• For example:

while (count == BUFFER_SIZE)
    Thread.yield();

Page 202: Chapter 6, Process Synchronization, Overheads, Part 2

202

Code with synchronized and wait(), notify(), and notifyAll()

• Java threads can call methods wait(), notify(), and notifyAll()

• These methods were introduced in the discussion of monitors and are similar to the monitor wait() and signal() concepts

Page 203: Chapter 6, Process Synchronization, Overheads, Part 2

203

• In order to make plain how they work, it is again useful to set up a scenario

• Let P and Q be threads

• Let M be a class which contains synchronized methods

• Every object in Java has one lock, and this lock is the equivalent of a monitor with a single condition variable

• It is this lock that controls mutual exclusion

Page 204: Chapter 6, Process Synchronization, Overheads, Part 2

204

• Because there is just the one lock variable, its use can be hidden

• Threads do not have to know it by name

• Various thread calls can depend on the lock variable, and the system takes care of how that happens

Page 205: Chapter 6, Process Synchronization, Overheads, Part 2

205

• Let both P and Q have references to an instance m of class M

• Let P be running in a synchronized method inside m

• Inside that method, let there be a call to wait()

• The implicit receiver of the wait() call is the object m; the thread that executes the call is the one that will be suspended

Page 206: Chapter 6, Process Synchronization, Overheads, Part 2

206

• In the monitor explanation, you would have expected to see a call x.wait()

• Here, in effect, the call is m.wait(), executed by thread P

• But the result is similar: P is the thread that waits

Page 207: Chapter 6, Process Synchronization, Overheads, Part 2

207

• The thread-level call to wait() works because under the covers it is, in effect, a call on m’s lock object (think of it as mLockObject.wait())

• What happens is that the calling thread is put onto the waiting list that belongs to the object’s lock

Page 208: Chapter 6, Process Synchronization, Overheads, Part 2

208

Entry sets and wait sets

• Each Java object has exactly one lock

• Each object has two sets associated with the lock, the entry set and the wait set

• These two sets together control concurrency among threads

• Statements about each kind of set follow

Page 209: Chapter 6, Process Synchronization, Overheads, Part 2

209

The Entry Set

• The entry set is a kind of waiting list

• You can think of it as being implemented as a linked data structure containing the “PCB’s” of threads

• Threads in the entry set are those which have reached the point in execution where they have called a synchronized method but can’t get in because another thread holds the lock

Page 210: Chapter 6, Process Synchronization, Overheads, Part 2

210

• A thread leaves the entry set and enters the synchronized method it wishes to run when the current lock holder releases the lock and the scheduling algorithm picks from the entry set one of the threads wanting the lock

Page 211: Chapter 6, Process Synchronization, Overheads, Part 2

211

The wait set

• The wait set is also a waiting list

• You can also think of this as a linked data structure containing the “PCB’s” of threads

• The wait set is not the same as the entry set

• Suppose a thread holds a lock on an object

• A thread enters the wait set by calling the wait() method

Page 212: Chapter 6, Process Synchronization, Overheads, Part 2

212

• Entering the wait set means that the thread voluntarily releases the lock that it holds

• In the application code this would be triggered in an if statement where some (non-lock related) condition has been checked and it has been determined that due to that condition the thread can’t continue executing anyway

Page 213: Chapter 6, Process Synchronization, Overheads, Part 2

213

• When a thread is in the wait set, it is blocked.

• It can’t be scheduled, but it’s not burning up resources because it’s not busy waiting

Page 214: Chapter 6, Process Synchronization, Overheads, Part 2

214

The Entry and Wait Sets Can Be Visualized in this Way

[Figure omitted: an object’s lock shown with its entry set (threads waiting to acquire the lock) and its wait set (threads that have called wait())]

Page 215: Chapter 6, Process Synchronization, Overheads, Part 2

215

• By definition, threads in the wait set are not finished with the synchronized code

• Threads acquire the synchronized code through the entry set

• There has to be a mechanism for a thread in the wait set to get into the entry set

Page 216: Chapter 6, Process Synchronization, Overheads, Part 2

216

The Way to Move a Thread from the Wait Set to the Entry Set

• If, in the synchronized code, one or more calls to wait() have been made, then at the end of the code for each synchronized method, put a call to notify()

• When the system handles the notify() call, it picks an arbitrary thread from the wait set and puts it into the entry set

• When the thread is moved from the wait set to the entry set, its state is changed from blocked to runnable

Page 217: Chapter 6, Process Synchronization, Overheads, Part 2

217

• The foregoing description should be sufficient for code that manages two threads

• As a consequence, it should provide enough tools for an implementation of the producer-consumer problem using Java synchronization

Page 218: Chapter 6, Process Synchronization, Overheads, Part 2

218

Preview of the Complete Producer-Consumer Code

• The BoundedBuffer class has two methods, insert() and remove()

• These two methods are synchronized

• Synchronization of the methods protects both the count variable and the buffer itself, since each of these things is only accessed and manipulated through these two methods

Page 219: Chapter 6, Process Synchronization, Overheads, Part 2

219

• Unlike with semaphores, the implementation is nicely parallel:

• You start both methods with a loop containing a call to wait() and end both with a call to notify()

• Note that it is not immediately clear why the call to wait() is in a loop rather than an if statement

• This question will be addressed after the code

• Note also, syntactically, that the call to wait() has to occur in a try block

Page 220: Chapter 6, Process Synchronization, Overheads, Part 2

220

• Finally, note these important points:

• The use of the keyword synchronized enforces mutual exclusion

• The use of wait() and notify() has taken over the job of controlling whether a thread can insert or remove a message from the buffer, depending on whether the buffer is full or not

• The code follows.

• This will be followed by further commentary

Page 221: Chapter 6, Process Synchronization, Overheads, Part 2

221

/**
 * BoundedBuffer.java
 *
 * This program implements the bounded buffer using Java synchronization.
 */

public class BoundedBuffer implements Buffer {
    private static final int BUFFER_SIZE = 5;

    private int count;        // number of items in the buffer
    private int in;           // points to the next free position in the buffer
    private int out;          // points to the next full position in the buffer
    private Object[] buffer;

    public BoundedBuffer() {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;
        buffer = new Object[BUFFER_SIZE];
    }

Page 222: Chapter 6, Process Synchronization, Overheads, Part 2

222

    public synchronized void insert(Object item) {
        while (count == BUFFER_SIZE) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);

        notify();
    }

Page 223: Chapter 6, Process Synchronization, Overheads, Part 2

223

    // consumer calls this method
    public synchronized Object remove() {
        Object item;

        while (count == 0) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // remove an item from the buffer
        --count;
        item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        notify();

        return item;
    }
}

Page 224: Chapter 6, Process Synchronization, Overheads, Part 2

224

An example scenario showing how the calls to wait() and notify() work

• Assume that the lock is available but the buffer is full

• The producer calls insert()

• The lock is available so it gets in

• The buffer is full so it calls wait()

• The producer releases the lock, gets blocked, and is put in the wait set

Page 225: Chapter 6, Process Synchronization, Overheads, Part 2

225

• The consumer eventually calls remove()

• There is no problem because the lock is available

• At the end of removing, the consumer calls notify()

• The call to notify() removes the producer from the wait set, puts it into the entry set, and makes it runnable

Page 226: Chapter 6, Process Synchronization, Overheads, Part 2

226

• When the consumer exits the remove() method, it gives up the lock

• The producer can now be scheduled

• The producer thread begins execution at the line of code following the wait() call which caused it to be put into the wait set

• After inserting, the producer calls notify()

• This would allow any other waiting thread to run

• If nothing was waiting, it has no effect

Page 227: Chapter 6, Process Synchronization, Overheads, Part 2

227

• Why is the call to wait() in a loop rather than an if statement?

• When another thread calls notify() and the waiting thread is chosen to run, it has to check again what the contents of the buffer are

• Just because it’s been scheduled doesn’t mean that the buffer is ready for it to run

• The code contains a loop because the thread has to check whether or not it can run every time it is scheduled
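• The contrast can be sketched in a fragment like this (conditionNotMet() is a hypothetical guard standing in for a test such as count == 0): with if, a woken thread would fall through even if another thread had already consumed whatever became available; while forces the re-check on every wakeup

while (conditionNotMet())   // NOT "if (conditionNotMet())"
    wait();                 // on wakeup, loop back and test the guard again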

Page 228: Chapter 6, Process Synchronization, Overheads, Part 2

228

• The rest of the code is given here so it’s close by for reference

• It is the same as the rest of the code for the previous examples, so it may not be necessary to look at it again

Page 229: Chapter 6, Process Synchronization, Overheads, Part 2

229

/**
 * An interface for buffers
 */

public interface Buffer
{
    /**
     * insert an item into the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * remove an item from the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract Object remove();
}

Page 230: Chapter 6, Process Synchronization, Overheads, Part 2

230

/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

public class Producer implements Runnable {
    private Buffer buffer;

    public Producer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Producer napping");
            SleepUtilities.nap();

            // produce an item & enter it into the buffer
            message = new Date();
            System.out.println("Producer produced " + message);

            buffer.insert(message);
        }
    }
}

Page 231: Chapter 6, Process Synchronization, Overheads, Part 2

231

/**
 * This is the consumer thread for the bounded buffer problem.
 */

import java.util.*;

public class Consumer implements Runnable {
    private Buffer buffer;

    public Consumer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Consumer napping");
            SleepUtilities.nap();

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");

            message = (Date) buffer.remove();
        }
    }
}

Page 232: Chapter 6, Process Synchronization, Overheads, Part 2

232

/**
 * This creates the buffer and the producer and consumer threads.
 */

public class Factory
{
    public static void main(String args[]) {
        Buffer server = new BoundedBuffer();

        // now create the producer and consumer threads
        Thread producerThread = new Thread(new Producer(server));
        Thread consumerThread = new Thread(new Consumer(server));

        producerThread.start();
        consumerThread.start();
    }
}

Page 233: Chapter 6, Process Synchronization, Overheads, Part 2

233

/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}

Page 234: Chapter 6, Process Synchronization, Overheads, Part 2

234

Multiple Notifications

• A call to notify() picks one thread out of the wait set and puts it into the entry set

• What if there are >1 waiting threads?

• The book points out that using notify() alone can lead to deadlock

• Deadlock is an important problem, which motivates a discussion of notifyAll(), but it will not be covered in detail until the next chapter

Page 235: Chapter 6, Process Synchronization, Overheads, Part 2

235

• The general solution to any problems latent in calling notify() is to call notifyAll()

• This moves all of the waiting threads to the entry set

• At that point, which one runs next depends on the scheduler

• The selected one may immediately block

Page 236: Chapter 6, Process Synchronization, Overheads, Part 2

236

• However, if notifyAll() is always called, statistically, if there is at least one thread that can run, it will eventually be scheduled

• Any threads which depend on it could then run when they are scheduled, and progress will be made

Page 237: Chapter 6, Process Synchronization, Overheads, Part 2

237

• It actually seems like many problems could be avoided if you always just called notifyAll() instead of notify()

• But there must be cases where a call to notify() would be preferable, and not a call to notifyAll()

• The next example illustrates the use of both kinds of calls

Page 238: Chapter 6, Process Synchronization, Overheads, Part 2

238

notifyAll() and the Readers-Writers Problem

• The book gives full code for this

• I will try to abstract their illustration without referring to the complete code

• Remember that a read lock is not exclusive

– Multiple reading threads are OK at the same time

– Only writers have to be blocked

• Write locks are exclusive

– Any one writer blocks all other readers and writers

Page 239: Chapter 6, Process Synchronization, Overheads, Part 2

239

Synopsis of Read Lock Code

acquireReadLock()
{
    while (there is a writer)
        wait();
    …
}

releaseReadLock()
{
    …
    notify();
}

Page 240: Chapter 6, Process Synchronization, Overheads, Part 2

240

• One writer will be notified when the readers are finished.

• By definition, no reader could be waiting.

• It does seem possible to call notifyAll(), in which case possibly >1 writer would contend to be scheduled, but it is sufficient to just ask the system to notify one waiting thread.

• It really seems just to be a choice between the notification algorithm for the wait set and the scheduling algorithm for the entry set.

Page 241: Chapter 6, Process Synchronization, Overheads, Part 2

241

Synopsis of Write Lock Code

acquireWriteLock()
{
    while (there is any reader or writer)
        wait();
    …
}

releaseWriteLock()
{
    …
    notifyAll();
}

Page 242: Chapter 6, Process Synchronization, Overheads, Part 2

242

• All readers will be notified when the writer finishes

• Any waiting writers would also be notified

• They would all go into the entry set and be eligible for scheduling

• The point is to make it possible to get all of the readers active, since they are all allowed to read concurrently
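• Pulling the two synopses together, here is a minimal Java sketch of the same logic (a hypothetical class, not the authors’ full code; for real use the Java API also offers ReentrantReadWriteLock)

class SimpleReadWriteLock {
    private int readers = 0;
    private int writers = 0;

    public synchronized void acquireReadLock() throws InterruptedException {
        while (writers > 0)
            wait();
        readers++;
    }

    public synchronized void releaseReadLock() {
        readers--;
        if (readers == 0)
            notify();        // one waiting writer can proceed
    }

    public synchronized void acquireWriteLock() throws InterruptedException {
        while (readers > 0 || writers > 0)
            wait();
        writers++;
    }

    public synchronized void releaseWriteLock() {
        writers--;
        notifyAll();         // wake all readers and any waiting writers
    }
}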

Page 243: Chapter 6, Process Synchronization, Overheads, Part 2

243

Block Synchronization

• Lock scope definition: the time between when a lock is acquired and released

• This might also refer to the location in the code where the lock is in effect

Page 244: Chapter 6, Process Synchronization, Overheads, Part 2

244

• Declaring a method synchronized may lead to an unnecessarily long scope if large parts of the method don’t access the shared resource

• Java supports block synchronization syntax where just part of a method is made into a critical section

Page 245: Chapter 6, Process Synchronization, Overheads, Part 2

245

• Block synchronization is based on the idea that every object has a lock

• You can construct an instance of the Object class and use it as the lock for a block of code

• In other words, you use the lock of that object as the lock for the block

• The lock applies to the block of code in the matched braces following the synchronized keyword

• Example code follows

Page 246: Chapter 6, Process Synchronization, Overheads, Part 2

246

Object mutexLock = new Object();
…
public void someMethod()
{
    nonCriticalSection();
    …
    synchronized(mutexLock)
    {
        criticalSection();
    }
    remainderSection();
    …
}

Page 247: Chapter 6, Process Synchronization, Overheads, Part 2

247

• Block synchronization also allows the use of wait() and notify() calls

• Example code follows

• Honestly, without a concrete example, this doesn’t really show what you might use it for

• However, it does make the specific syntax clear

Page 248: Chapter 6, Process Synchronization, Overheads, Part 2

248

Object mutexLock = new Object();
…
synchronized(mutexLock)
{
    …
    try
    {
        mutexLock.wait();
    }
    catch(InterruptedException ie)
    {
        …
    }
}
…
synchronized(mutexLock)
{
    mutexLock.notify();
}

Page 249: Chapter 6, Process Synchronization, Overheads, Part 2

249

Synchronization Rules: I.e., Rules Affecting the Use of the Keyword synchronized

• 1. A thread that owns the lock for an object can enter another synchronized method (or block) for the same object. – This is known as a reentrant or recursive lock.

• 2. A thread can nest synchronized calls for different objects.

– One thread can hold the lock for >1 object at the same time.

Page 250: Chapter 6, Process Synchronization, Overheads, Part 2

250

• 3. Some methods of a class may not be declared synchronized.

– A method that is not declared synchronized can be called regardless of lock ownership—that is, whether a thread is running in a synchronized method concurrently or not

• 4. If the wait set for an object is empty, a call to notify() or notifyAll() has no effect.

Page 251: Chapter 6, Process Synchronization, Overheads, Part 2

251

• 5. wait(), notify(), and notifyAll() can only be called from within synchronized methods or blocks. – Otherwise, an IllegalMonitorStateException is thrown.

• 6. An additional note: For every class, in addition to the lock that every object of that class gets, there is also a class lock.

– That makes it possible to declare static methods, or blocks in static methods, to be synchronized
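• A minimal sketch of rule 6 (a hypothetical class): a static synchronized method locks on the class object itself, independently of any instance lock

class Example {
    public static synchronized void staticMethod() {
        // holds the class lock, i.e., the lock on Example.class
    }

    public static void equivalentForm() {
        synchronized (Example.class) {   // a block using the same class lock
            // critical section
        }
    }

    public synchronized void instanceMethod() {
        // holds the lock on "this" -- independent of the class lock
    }
}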

Page 252: Chapter 6, Process Synchronization, Overheads, Part 2

252

Handling the InterruptedException

• This almost feels like a step too far—what’s it all about and why is it necessary to discuss?

• However, the correct example code that has finally been given has required the use of try/catch blocks

• The question is, why are the blocks necessary and what do they accomplish?

Page 253: Chapter 6, Process Synchronization, Overheads, Part 2

253

• If you go back to chapter 4, you’ll recall that the topic of asynchronous (immediate) and deferred thread cancellation (termination) came up

• Deferred cancellation was preferred.

• This meant that threads were cancelled by calling interrupt() rather than stop()

Page 254: Chapter 6, Process Synchronization, Overheads, Part 2

254

• The specifics can be recalled with a scenario

• Let thread1 have a reference to thread2

• Within the code for thread1, thread2 would be interrupted in this way:

• thread2.interrupt();

Page 255: Chapter 6, Process Synchronization, Overheads, Part 2

255

• Then in the code for thread2, thread2 can check its status with one of these two calls:

• me.interrupted(); (interrupted() is actually a static method of Thread, and it clears the interrupted status as a side effect)

• me.isInterrupted(); (an instance method that leaves the status unchanged)

• thread2 can then do any needed housekeeping (preventing inconsistent state) before terminating itself
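• A minimal sketch of that pattern (a hypothetical run() method): the thread polls its own interrupted status and cleans up before returning

public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        // do one unit of work
    }
    // housekeeping: leave shared data in a consistent state, then return
}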

Page 256: Chapter 6, Process Synchronization, Overheads, Part 2

256

• In the context of Java synchronization, this is the question:

• Is it possible to interrupt (cancel or kill) a thread like thread2 that is in a wait set (is suspended or blocked)?

• A call to wait() has to occur in a try block as shown on the following overhead

Page 257: Chapter 6, Process Synchronization, Overheads, Part 2

257

try
{
    wait();
}
catch(InterruptedException ie)
{
    …
}

Page 258: Chapter 6, Process Synchronization, Overheads, Part 2

258

• If a thread calls wait(), it goes into the wait set and stops executing

• As explained up to this point, the thread can’t resume, it can’t do anything at all, until notify() or notifyAll() are called and it is picked for scheduling

• This isn’t entirely true

Page 259: Chapter 6, Process Synchronization, Overheads, Part 2

259

• The wait() call is the last live call of the thread

• The system is set up so that thread1 might make a call like this while thread2 is in the wait set:

• thread2.interrupt();

Page 260: Chapter 6, Process Synchronization, Overheads, Part 2

260

• If such a call is made on thread2 while it’s in the wait set, the system will throw an InterruptedException back to the point where thread2 made the call to wait()

• At that point, thread2 is no longer blocked because it’s kicked out of the wait set

Page 261: Chapter 6, Process Synchronization, Overheads, Part 2

261

• This means that thread2 becomes runnable without a call to notify(), but its status is now interrupted

• If thread2 is scheduled, then execution begins at the top of the catch block

• If you choose to handle the exception, then what you should do is provide the housekeeping code which thread2 needs to run so that it will leave shared resources in a consistent state and then terminate itself
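• A minimal sketch of such a catch block (workAvailable and releaseSharedState() are hypothetical stand-ins): the cleanup-and-terminate logic goes where execution resumes after the interrupted wait()

public synchronized void awaitWork() {
    try {
        while (!workAvailable)                  // hypothetical condition flag
            wait();
    } catch (InterruptedException ie) {
        releaseSharedState();                   // hypothetical housekeeping helper
        Thread.currentThread().interrupt();     // preserve the interrupted status
        return;                                 // let the thread terminate
    }
    // normal path: the condition now holds and the lock is held again
}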

Page 262: Chapter 6, Process Synchronization, Overheads, Part 2

262

• The foregoing can be summarized as follows:

• Java has this mechanism so that threads can be terminated even after they’ve disappeared into a wait set

• This can be useful because there should be no need for a thread to either waste time in the wait set or run any further if it is slated for termination anyway

Page 263: Chapter 6, Process Synchronization, Overheads, Part 2

263

• This is especially useful because it allows a thread which is slated for termination to release any locks or resources it might be holding.

• Why this is good will become even clearer in the following chapter, on deadlocks

Page 264: Chapter 6, Process Synchronization, Overheads, Part 2

264

Concurrency Features in Java—at this point it’s hard to say how useful this list is

• If you want to write synchronized code in Java, check the API documentation

• What follows is just a listing of the features—beyond what was just explained—with minimal explanation

Page 265: Chapter 6, Process Synchronization, Overheads, Part 2

265

• 1. As mentioned earlier, there is a class named Semaphore.

• Technically, the examples earlier were based on the authors’ hand-coded semaphore.

• If you want to use the Java Semaphore class, double check its behavior in the API
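• A brief usage sketch of the API class (one permit makes it behave like a mutex; note that acquire() can throw InterruptedException)

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore sem = new Semaphore(1);   // one permit: a mutex

    public static void doCriticalWork() throws InterruptedException {
        sem.acquire();          // blocks until a permit is available
        try {
            // critical section
        } finally {
            sem.release();      // always return the permit
        }
    }
}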

Page 266: Chapter 6, Process Synchronization, Overheads, Part 2

266

• 2. There is a class named ReentrantLock.

• This supports functionality similar to the synchronized keyword (or a semaphore), with added features like enforcing fairness in scheduling threads waiting for locks
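• A brief usage sketch; passing true to the constructor requests the fairness policy just mentioned

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private static final ReentrantLock lock = new ReentrantLock(true);   // true = fair

    public static void doCriticalWork() {
        lock.lock();            // blocks until the lock is free
        try {
            // critical section
        } finally {
            lock.unlock();      // always release, even if an exception occurs
        }
    }
}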

Page 267: Chapter 6, Process Synchronization, Overheads, Part 2

267

• 3. There is an interface named Condition, and this type can be used to declare condition variables associated with reentrant locks.

• They are related to the idea of condition variables in a monitor; a Condition supplies await(), signal(), and signalAll() methods that play the roles of wait(), notify(), and notifyAll() for code guarded by reentrant locks
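• A brief sketch of a Condition tied to a ReentrantLock (the count field is a hypothetical piece of shared state)

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private int count = 0;      // hypothetical shared state

    public void take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();    // analogous to wait()
            count--;
        } finally {
            lock.unlock();
        }
    }

    public void put() {
        lock.lock();
        try {
            count++;
            notEmpty.signal();       // analogous to notify()
        } finally {
            lock.unlock();
        }
    }
}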

Page 268: Chapter 6, Process Synchronization, Overheads, Part 2

268

• 6.9 Synchronization Examples: Solaris, XP, Linux, Pthreads. SKIP

• 6.10 Atomic Transactions: This is a fascinating topic that has as much to do with databases as operating systems… SKIP

• 6.11 Summary. SKIP

Page 269: Chapter 6, Process Synchronization, Overheads, Part 2

269

The End

